Configuring Modular QoS Service Packet Classification

This chapter covers these topics:
Packet Classification Overview

Packet classification involves categorizing a packet within a specific group (or class) and assigning it a traffic descriptor to make it accessible for QoS handling on the network. The traffic descriptor contains information about the forwarding treatment (quality of service) that the packet should receive. Using packet classification, you can partition network traffic into multiple priority levels or classes of service. The source agrees to adhere to the contracted terms and the network promises a quality of service. Traffic policers and traffic shapers use the traffic descriptor of a packet to ensure adherence to the contract. They rely on packet classification features, such as IP precedence, to select packets (or traffic flows) traversing a router or interface for different types of QoS service.

After you classify packets, you can use other QoS features to assign the appropriate traffic handling policies, including congestion management, bandwidth allocation, and delay bounds for each traffic class.

The Modular Quality of Service (QoS) CLI (MQC) defines the traffic flows that must be classified, where each traffic flow is called a class of service, or class. Later, a traffic policy is created and applied to a class. All traffic not identified by defined classes falls into the category of a default class.

You can classify packets at the ingress on L3 subinterfaces for (CoS, DEI) for IPv4, IPv6, and MPLS flows. IPv6 packets are forwarded by paths that are different from those for IPv4. To enable classification of IPv6 packets based on (CoS, DEI) on L3 subinterfaces, run the hw-module profile qos ipv6 short-l2qos-enable command and reboot the line card for the command to take effect.

Traffic Class Elements

The purpose of a traffic class is to classify traffic on your router. Use the class-map command to define a traffic class. A traffic class contains three major elements:
Packets are checked to determine whether they match the criteria that are specified in the match commands. If a packet matches the specified criteria, that packet is considered a member of the class and is forwarded according to the QoS specifications set in the traffic policy. Packets that fail to meet any of the matching criteria are classified as members of the default traffic class. This table shows the details of match types that are supported on the router.
Default Traffic Class

Unclassified traffic (traffic that does not meet the match criteria specified in the traffic classes) is treated as belonging to the default traffic class. If the user does not configure a default class, packets are still treated as members of the default class. However, by default, the default class has no enabled features. Therefore, packets belonging to a default class with no configured features have no QoS functionality. These packets are then placed into a first in, first out (FIFO) queue and forwarded at a rate determined by the available underlying link bandwidth. This FIFO queue is managed by a congestion avoidance technique called tail drop. For egress classification, match on traffic-class (1-7) is supported. Match traffic-class 0 cannot be configured. The class-default in the egress policy maps to traffic-class 0. This example shows how to configure a traffic policy for the default class:
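A minimal sketch of such a default-class policy (the policy name, bandwidth value, and interface here are illustrative, not from this guide):

```
policy-map default-class-policy
 class class-default
  bandwidth remaining percent 20
 end-policy-map
!
interface HundredGigE 0/0/0/24
 service-policy output default-class-policy
```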
Create a Traffic Class

To create a traffic class containing match criteria, use the class-map command to specify the traffic class name, and then use the match commands in class-map configuration mode, as needed.

Guidelines
Configuration Example

You have to accomplish the following to complete the traffic class configuration:
Use this command to verify the class-map configuration:
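As an illustration, a traffic class matching precedence values might look like the following hedged sketch (the class name and match values are examples only); the result can be checked with show running-config class-map:

```
class-map match-any qos-class-1
 match precedence 5 7
 end-class-map
!
commit
```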
See also Running Configuration and Verification.

Related Topics
Associated Commands

Traffic Policy Elements

A traffic policy contains three elements:
After choosing the traffic class used to classify traffic, you can specify the QoS features to be applied to the classified traffic. The MQC does not require that you associate only one traffic class with one traffic policy. The order in which classes are configured in a policy map is important. The match rules of the classes are programmed into the TCAM in the order in which the classes are specified in a policy map. Therefore, if a packet can possibly match multiple classes, only the first matching class is returned and the corresponding policy is applied. The router supports 32 classes per policy-map in the ingress direction and 8 classes per policy-map in the egress direction. This table shows the supported class-actions on the router.
WRED supports the default and discard-class options; the only discard-class values that can be passed are 0 and 1.

Create a Traffic Policy

The purpose of a traffic policy is to configure the QoS features that should be associated with the traffic that has been classified in a user-specified traffic class or classes. To configure a traffic class, see Create a Traffic Class. After you define a traffic policy with the policy-map command, you can attach it to one or more interfaces to specify the traffic policy for those interfaces by using the service-policy command in interface configuration mode. With dual policy support, you can have two traffic policies, one marking and one queuing, attached at the output. See Attach a Traffic Policy to an Interface.

Configuration Example

You have to accomplish the following to complete the traffic policy configuration:
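As one possible sketch, a queuing policy that references a previously defined class might look like this (class and policy names and the shaping rate are illustrative):

```
class-map match-any qos-class-1
 match precedence 5
 end-class-map
!
policy-map egress-shaper
 class qos-class-1
  shape average percent 40
 class class-default
 end-policy-map
```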
See Running Configuration and Verification.

Related Topics
Associated Commands
Scaling of Unique Ingress Policy Maps
Traditionally, when unique policy maps were associated to the same template (that is, having the same match criteria and actions in the same order), each map was assigned a different TCAM entry. This resulted in inefficient TCAM entry management and also restricted the number of policy maps that could be created. With this functionality, unique policy maps associated to the same template are shared in TCAM, enabling you to create more policy maps. Put another way, two policy maps with the same combination of criteria and actions use one template. Up to 250 templates are supported for association to policy map combinations. As an example, consider the following policy maps (policy-map ncs_input1 and policy-map ncs_input2) having the same class maps (class COS7_DEI0 and class COS7_DEI1):
Earlier, when the policy maps were attached to an interface, they used different TCAM entries, although the match criteria and actions were the same, except for the policer action. With this functionality, both policy maps share the TCAM entry instead of selecting different entries, thus freeing up TCAM entries for more policy maps.

Limitations and Restrictions
Attach a Traffic Policy to an Interface

After the traffic class and the traffic policy are created, you must attach the traffic policy to an interface and specify the direction in which the policy should be applied.
Configuration Example

You have to accomplish the following to attach a traffic policy to an interface:
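A hedged sketch of attaching a policy in each direction (the interface and policy names are placeholders):

```
interface HundredGigE 0/0/0/5
 service-policy input ingress-policy
 service-policy output egress-policy
!
commit
```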
Running Configuration
Verification
Related Topics
Associated Commands
Packet Marking

The packet marking feature provides users with a means to differentiate packets based on the designated markings. The router supports egress packet marking. A match on discard-class at egress, if configured, can be used for a marking policy only. The router also supports L2 ingress marking.

For ingress marking:

Ingress traffic: For the ingress pop operation, re-marking the customer VLAN tag (CoS, DEI) is not supported.

Egress traffic: The ingress 'pop VLAN' is translated to a 'push VLAN' for the egress traffic, and (CoS, DEI) marking is supported for newly pushed VLAN tags. If two VLAN tags are pushed to the packet header at the egress side, both inner and outer VLAN tags are marked. For example:

1. rewrite ingress tag pop 1 symmetric
2. rewrite ingress tag pop 2 symmetric
3. rewrite ingress tag translate 2-to-1 dot1q <> symmetric

Limitations
Supported Packet Marking Operations

This table shows the supported packet marking operations.
Class-based Unconditional Packet Marking

The packet marking feature allows you to partition your network into multiple priority levels or classes of service, as follows:
Handling QoS for Locally Originated Packets

Packets that are generated and transmitted by a router are called Locally Originated Packets (LOPs). These are different from packets that pass through the router. Each device uses a default precedence value. The default value used by Locally Originated Control Protocols (LOCPs), such as BGP, OSPF, CCM (CFM), and RSVP, is a precedence of 6 or a Differentiated Services Code Point (DSCP) of 48. Locally Originated Management Protocols (LOMPs), such as Telnet and SSH, use a precedence value of 2 or a DSCP of 16. SNMP uses a precedence value of 0. Some protocols, such as BGP, RSVP, CFM, and LDP, and the management protocols are capable of setting a specific precedence or DSCP value.
The following applies to Traffic Class (TC) alignment:
On the router, datapath packets and injected packets aren't differentiated if both their traffic classes share the same Virtual Output Queues (VOQs). Therefore, in the case of a congested VOQ, the LOCP packets are dropped. To avoid LOCP packet drops, Cisco recommends that you use a different traffic class for data path traffic. Alternatively, you can specify a higher bandwidth for traffic-class 7 (if the ingress traffic rate is predictable).

Classifying traffic helps the router recognize traffic as a certain type and mark that traffic. By marking traffic early in its travel, you can prevent excessive reclassification later. You can mark traffic at the protocol level, as shown in the following examples:

Ethernet

The following configuration shows that the outbound Control Hub packets are marked with a precedence value of 2 and EXP of 2, instead of a precedence and EXP value of 6. The SSH packets have a precedence value of 3 instead of 2.
BGP
MPLS LDP
Telnet
SNMP
Syslog
NTP
All LOCPs originating on the RP or LC CPU have the discard priority set in the appended Buffer Header (BHDR). The discard priority ensures that the LOCPs are not dropped internally (under normal circumstances). Such LOCPs include non-IP based control packets (IS-IS and ARP). The discard priority is not set for LOMPs. Therefore, such packets are treated as normal traffic, both in terms of classification and re-marking, and may be dropped under congestion conditions. Ensure that you do not inadvertently re-mark and drop such traffic.
LOCPs are not subject to traffic policing, Weighted Random Early Detection (WRED), or tail-drop queue-limit operation. LOCP packets are not subject to WRED even if the max_th value is met; the tail-drop queue-limit must be hit before LOCP packets are dropped. By default, all LOCPs that have the discard priority set are put into an implicitly allocated high-priority queue of each physical egress interface.

When configuring QoS policies, you may attach a policy to the physical interface, which then references the sub-interfaces. Alternatively, you may attach QoS policies to the sub-interfaces directly; in that case, you are prevented from attaching a QoS policy to the physical interface. LOCPs, including those transmitted on a sub-interface, are always sent out on the default high-priority queue of the physical interface. The operator is therefore prevented from assigning to the physical interface any bandwidth that could be reserved for use by LOCPs. During over-subscription, this may lead to LOCP drops and, as a result, sessions may be terminated. To prevent session termination, a minimum bandwidth of MIN (1% of interface BW, 10 Mbps) is reserved for the default high-priority queue associated with a physical interface that has no QoS policy applied. If a QoS policy is applied to the physical interface, the minimum bandwidth for the default HP queue is controlled by the configured policy.
LOCPs can be mapped to a corresponding QoS group. The following example illustrates how this can be achieved:
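One possible sketch classifies the default control-plane precedence values and maps them to a QoS group (the class name and group number are illustrative):

```
class-map match-any control-plane-precedence
 match precedence 6 7
 end-class-map
!
policy-map ingress-locp-map
 class control-plane-precedence
  set qos-group 6
 class class-default
 end-policy-map
```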
The precedence value of the control packet is mapped to the respective QoS group number.

Protecting Locally Originated BFD Packets

BFD packets are injected into traffic-class 6, with drop priority 0 (the equivalent of discard-class 0). If transit traffic is also classified into traffic-class 6 and the associated egress queue is congested, BFD packets may be dropped. The recommendation is to classify transit traffic-class 6 in the ingress QoS policy with discard-class 1 or 2, and then configure WRED in the egress QoS policy to drop those packets before dropping discard-class 0.
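A hedged sketch of that recommendation (class names, match criteria, and WRED thresholds are illustrative):

```
class-map match-any transit-data
 match dscp af11
 end-class-map
!
policy-map ingress-transit
 class transit-data
  set traffic-class 6
  set discard-class 1
 class class-default
 end-policy-map
!
class-map match-any tc6
 match traffic-class 6
 end-class-map
!
policy-map egress-queuing
 class tc6
  random-detect discard-class 1 10 kbytes 20 kbytes
 class class-default
 end-policy-map
```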
Example
Hardware Programming
Prioritization of IS-IS and ARP Packets to Manage Transit Traffic
Overview of IS-IS and ARP Traffic Prioritization

Transit traffic refers to all traffic that enters an ingress interface, is compared against the forwarding table entries, and is forwarded out an egress interface toward its destination. While the exact path that transit traffic takes may not be of interest to the sender or receiver, you may still want some of the Integrated Intermediate System-to-Intermediate System (IS-IS) and Address Resolution Protocol (ARP) transit traffic to be managed and routed efficiently between specific source and destination addresses. You can achieve higher levels of flexibility and fine-tune the traffic profile management for transit traffic by assigning the highest priority level to IS-IS and ARP traffic on Layer 2 networks. This feature is useful in environments such as data centers, where you have complete end-to-end control over your network and want to avoid any drops in IS-IS and ARP traffic during congestion.

Cisco IOS XR Release 7.5.1 introduces the hw-module profile qos arp-isis-priority-enable command to enable prioritization of IS-IS and ARP traffic in transit on Layer 2 networks. Configuring this command assigns a priority level of TC 7 to this transit traffic.
Guidelines
Enabling IS-IS and ARP Traffic Prioritization

To enable IS-IS and ARP traffic prioritization, configure the hw-module profile qos arp-isis-priority-enable command.
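The configuration itself is a single hardware profile command; as with other hw-module profile commands, a reload may be required for it to take effect:

```
RP/0/RP0/CPU0:router(config)# hw-module profile qos arp-isis-priority-enable
RP/0/RP0/CPU0:router(config)# commit
```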
Prioritization is based on the IS-IS destination MAC addresses (01:80:c2:00:00:14 and 01:80:c2:00:00:15) and the ARP EtherType 0x0806. When you configure the hw-module profile qos arp-isis-priority-enable command, the priority level for IS-IS and ARP traffic is set to TC 7. The following example shows the verification command for NC 57 line cards. The assigned priority level is TC 07.
QoS Re-marking of IP Packets in Egress Direction

The router supports the marking of IP DSCP bits of all IP packets to zero in the egress direction. This feature helps re-mark the priority of IP packets, which is mostly used in scenarios such as IP over Ethernet over MPLS over GRE. This functionality is achieved using an ingress policy-map with the set dscp 0 option configured in class-default.

Configuration Example
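A minimal sketch of this functionality (the policy name and interface are illustrative):

```
policy-map ingress-dscp-zero
 class class-default
  set dscp 0
 end-policy-map
!
interface TenGigE 0/0/0/0
 service-policy input ingress-dscp-zero
```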
Running Configuration
QoS Re-marking of Ethernet Packets in Egress Direction

The router supports Layer 2 marking of Ethernet packets in the egress direction.

QoS L2 Re-marking of Ethernet Packets in Egress Direction

To enable this feature, you must:
Running Configuration
QoS L2 Re-Marking of Ethernet Packets on L3 Flows in Egress Direction

The router supports Layer 2 marking of Ethernet packets on Layer 3 flows in the egress direction. To enable this feature, you must:
Restrictions

The following restrictions apply while configuring the Layer 2 marking of Ethernet packets on Layer 3 flows in the egress direction.
Running Configuration

Ingress Policy: You must first set up the qos-group at ingress.
Egress Policy: At the egress, run these commands to mark the packets.
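A hedged end-to-end sketch of this two-stage approach (the group number and marking values are illustrative):

```
! Ingress: tag the flow with a qos-group
policy-map ingress-classify
 class class-default
  set qos-group 4
 end-policy-map
!
! Egress: match the qos-group and mark the L2 header
class-map match-any qg4
 match qos-group 4
 end-class-map
!
policy-map egress-l2-mark
 class qg4
  set cos 3
  set dei 1
 class class-default
 end-policy-map
```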
Layer 2 Ingress QoS Matching for IPv4 and IPv6 Destination Addresses
Overview

As a service provider, you provide Layer 2 connectivity for different classes of customer traffic across your network. With aggregated customer traffic arriving at your ingress, you need to provide differential treatment depending on specific destination addresses for the traffic. Such ability gives you granular control over traffic, allowing you to classify specific traffic flows depending on the type of services for which your customers have signed up. You can match class maps to IPv4 and IPv6 destination addresses on Layer 2 networks to ensure such granular control. The interface service policy has the relevant class maps, actioning them for ingress QoS marking.

Guidelines and Limitations
Configure Layer 2 Ingress QoS Matching for IPv4 and IPv6 Destination Addresses

Perform the following steps to configure Layer 2 ingress QoS matching for IPv4 and IPv6 destination addresses. This example covers:
Configuration
You have successfully configured Layer 2 ingress QoS matching for IPv4 and IPv6 destination addresses.

Running Configuration
Verification

To verify that the configuration was successful, run the show policy-map pmap-name command for the policy map you created, with all class maps associated. The output displays all the match-any and match-all configurations for IPv4 and IPv6 addresses.
Bundle Traffic Policies

A policy can be bound to bundles. When a policy is bound to a bundle, the same policy is programmed on every bundle member (port). For example, if there is a policer or shaper rate, the same rate is configured on every port. Traffic is scheduled to bundle members based on the load-balancing algorithm. Both ingress and egress traffic is supported. Percentage-based, absolute rate-based, and time-based policies are supported.
For details, see Configure QoS on Link Bundles.

Shared Policy Instance
Traditionally, when the services required by your end-customers mapped one-to-one to an interface, attaching the QoS policy-map directly to the interface was the way to meet customer SLAs. However, with the increasing demand for triple-play configurations, which require the management of voice and video queues in addition to data queues, you may have several forwarding constructs. This scenario calls for applying an aggregate QoS policy across interfaces to provide the necessary traffic treatment.

After you create the traffic class and traffic policy, you can optionally use a shared policy instance to allocate a single set of QoS resources and share them across a group of subinterfaces. With a shared policy instance, you can share a single instance of a QoS policy across multiple subinterfaces, allowing for aggregate shaping, policing, and marking of the subinterfaces to one rate. All the subinterfaces that share the instance of a QoS policy must belong to the same main interface. The number of subinterfaces that share the QoS policy instance can range from 2 to the maximum number of subinterfaces on the main interface.

When a shared policy instance of a policy map is shared by several subinterfaces, QoS operations such as aggregate shaping, policing, and marking are applied for traffic on all the interfaces that use the same shared policy instance. Traditionally, policies were bound to interfaces. However, different types of interfaces, such as Layer 2 and Layer 3, can use a single shared policy instance, which allows flexibility in the "attachment point" that binds the policy map.

As an example, consider the following policy configuration:
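A hedged sketch of such a shared-policy configuration (the shaper rate and interface numbers are illustrative):

```
policy-map hqos_gold
 class class-default
  shape average 500 mbps
 end-policy-map
!
interface TenGigE 0/0/0/2.1 l2transport
 service-policy output hqos_gold shared-policy-instance hqos_gold_customer1
!
interface TenGigE 0/0/0/2.2
 service-policy output hqos_gold shared-policy-instance hqos_gold_customer1
```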
The keyword shared-policy-instance and the instance name hqos_gold_customer1 identify the subinterfaces that share an aggregate SLA. These are shared on a physical main interface or a bundle member. In other words, in a mix of Layer 2 and Layer 3 subinterfaces in the same shared policy instance, both layers support classification criteria and action. In the case of bundles, sharing is applicable within a bundle member and not the entire bundle. Depending on the traffic hashing, the shared policy instance may or may not take effect across the subinterfaces under the bundle main interface. All subinterfaces that share the same shared policy instance share resources as well. Hence, the show policy-map statistics values and show qos values for all the subinterfaces are the same.

Restrictions and Guidelines

The following restrictions and guidelines apply while configuring a shared policy instance for a policy map.
Attaching a Shared Policy Instance to Multiple Subinterfaces

To attach a shared policy instance to multiple subinterfaces:
Running Configuration
Verification

The show policy-map shared-policy-instance command includes an option to display counters for the shared policy instance.
For example, for a physical interface:
Use the clear qos counters shared-policy-instance command to clear counters for the shared policy instance.
For example, for a physical interface:

The show qos shared-policy-instance command allows you to display the QoS hardware programming values.
Ingress Short-Pipe

When QoS traffic leaves an MPLS network, the MPLS label stack is removed on the penultimate ingress Label Switch Router (LSR), leaving an IPv4 or IPv6 packet to be forwarded. The MPLS experimental bits (EXP, or pipe mode) carry out this disposition process, and the packet is marked with a Differentiated Services Code Point (DSCP) or precedence value (also called DSCP- or precedence-based classification). Usually, QoS traffic supports DSCP- and precedence-based classifications only when there is no MPLS label in the packet. Using the ingress short-pipe feature, however, you can classify a packet that contains one MPLS label using the type-of-service (ToS) field of the IPv4 or IPv6 header. This classification method is called ingress short-pipe. To classify an IP packet this way, you must:
With the ingress short-pipe feature, you get increased visibility into traffic packets. The feature also removes the limitation of classifying MPLS packets that come into IPv4 or IPv6 networks.

Restrictions and Other Important Points

Ensure that you read these points before you configure the ingress short-pipe feature.
Configure Ingress Short-Pipe

This section details a sample configuration for the ingress short-pipe feature and another sample to configure classification for labeled and non-labeled packets under the same parent class. Sample configuration to classify a packet that contains one MPLS label using the type-of-service (ToS) field of the IPv4 or IPv6 header (the ingress short-pipe method):
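A hedged reconstruction of such a configuration; the match mpls disposition class-map form is assumed from the feature description, and all names and values are illustrative:

```
class-map match-any tos-class
 match dscp 32
 end-class-map
!
class-map match-any short-pipe-class
 match mpls disposition class-map tos-class
 end-class-map
!
policy-map ingress-short-pipe
 class short-pipe-class
  set traffic-class 2
 class class-default
 end-policy-map
```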
You can configure classification for both labeled and non-labeled packets under the same parent class, as in the following sample configuration. In this example, for MPLS-labeled packets, the DSCP configured under the child class is classified, while for non-labeled packets, the DSCP/ToS configured in the match dscp <value> statement is classified. The DSCP value range is from 0 through 63; the range option is not supported. Up to 8 match items per class, and up to 64 match dscp values in total, are supported.
Associated Commands
Selective Egress Policy-Based Queue Mapping

With selective egress policy-based queue mapping, you can combine traffic class (TC) maps in various permutations at the egress.
The primary aim of introducing the egress TC (traffic class) mapping is to classify the traffic in the ingress using a single policy and place the classified traffic into queues, by assigning the traffic classes. At the egress, you can support different grouping of TCs. Based on different Service Level Agreements (SLAs) that each customer has signed up for, you can group some TCs into priority queues for real time (RT) traffic, other TCs into guaranteed bandwidth (BW) traffic, and the rest into best effort (BE) traffic delivery. Let us consider an example where three customers have purchased these services, based on their requirements:
Using the selective egress policy-based queue mapping, you can create three profiles this way:
Using the egress TC-mapping, you can create three different profiles that you can use for each customer based on their SLAs with the provider.

Figure 1. Selective Egress Policy-Based Queue Mapping Helps Create Customer Profiles Based on Their SLAs

Restrictions and Other Important Points

Ensure that you read these points before you configure the selective egress policy-based queue-mapping feature.
Configure Selective Egress Policy-Based Queue Mapping

This section details a sample configuration for the selective egress policy-based queue-mapping feature and a use case to show how this feature works.

Sample configuration
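A hedged sketch of TC grouping at the egress; the grouping into real-time, guaranteed-bandwidth, and best-effort queues, and all names and values, are illustrative:

```
class-map match-any tc-realtime
 match traffic-class 1 2
 end-class-map
!
class-map match-any tc-guaranteed
 match traffic-class 3 4
 end-class-map
!
policy-map egress-profile-gold
 class tc-realtime
  priority level 1
 class tc-guaranteed
  bandwidth remaining percent 60
 class class-default
 end-policy-map
```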
Verification

Run the show qos interface and show policy-map interface commands. When a TC-mapping class is present in a policy map, the class-default does not have any values calculated.

show qos interface bundle-Ether 44 output sample
show policy-map interface bundle-Ether 44 output sample
Use Case

With the ingress traffic matching the same match criteria, you can group the egress traffic into up to three unique TC-mapped profiles. Using this feature, you can provide differentiated services to customers based on the SLAs they have signed up for. In the example that follows, the ingress policy-map sets the ingress match criteria for traffic classes 0 through 5. Based on the SLAs, you can group the TC values at the egress PM to deliver differentiated services. After you group the TC values, you can apply specific egress actions under that class.

Ingress match:
Egress match:

Sample TC mapped class for policy-map PM1
Sample TC mapped class for policy-map PM2
Sample TC mapped class for policy-map PM3
Configuring QoS Groups with an ACL

You can create QoS groups and configure ACLs to classify traffic into the groups based on a specified match condition. In this example, we match by the QoS group value (0-511).

Prerequisites

Before you can configure QoS groups with an ACL, the QoS peering profile must be enabled on the router or the line card. After enabling QoS peering, the router or line card must be reloaded, as shown in the following configuration.

Enabling QoS Peering Profile on the Router

Enter the global configuration mode and enable the QoS peering profile for the router as shown:

Enabling QoS Peering Profile on the Line Card

Enter the global configuration mode and enable the QoS peering profile for the line card as shown:
Configuration

Use the following set of configuration statements to configure an ACL with QoS groups.
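A hedged sketch of such a configuration (ACL entries, class names, and the qos-group value are illustrative):

```
ipv4 access-list qos-acl
 10 permit ipv4 host 10.1.1.1 any
 20 permit ipv4 10.2.0.0/16 any
!
class-map match-any acl-class
 match access-group ipv4 qos-acl
 end-class-map
!
policy-map set-qos-group
 class acl-class
  set qos-group 1
 class class-default
 end-policy-map
!
interface HundredGigE 0/0/0/0
 service-policy input set-qos-group
```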
Running Configuration

Confirm your configuration.
You have successfully configured an ACL with QoS groups.

QoS Egress Marking and Queuing Using Dual Policy-Map

To achieve QoS egress marking and queuing, the router uses a dual policy model on the egress, with independent policies for marking and queuing. Egress marking can be achieved by applying a policy-map on the ingress interface to set the qos-group or discard-class. The qos-group set by the ingress policy-map is then used by the egress policy-map, along with the DP (drop-precedence or discard-class) value, to re-mark the CoS/DEI bits of the outgoing L2 packet. Similarly, egress queuing can be achieved by applying a policy-map on the ingress interface to set the traffic-class; the traffic-class is then used by the egress policy-map to perform queuing actions.

Benefits
QoS egress marking and queuing can be summarized in the following three steps:
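Based on the description above, those steps (tag at ingress, mark at egress, queue at egress) can be sketched as follows; all class names, values, and the interface are illustrative:

```
! Step 1: ingress policy sets qos-group (for marking) and traffic-class (for queuing)
policy-map ingress-pm
 class class-default
  set qos-group 2
  set traffic-class 2
 end-policy-map
!
! Step 2: egress marking policy matches the qos-group and re-marks the L2 header
class-map match-any qg2
 match qos-group 2
 end-class-map
policy-map egress-marking
 class qg2
  set cos 4
 class class-default
 end-policy-map
!
! Step 3: egress queuing policy matches the traffic-class and performs queuing actions
class-map match-any tc2
 match traffic-class 2
 end-class-map
policy-map egress-queuing
 class tc2
  shape average percent 30
 class class-default
 end-policy-map
!
interface HundredGigE 0/0/0/1
 service-policy input ingress-pm
 service-policy output egress-marking
 service-policy output egress-queuing
```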
Restrictions
Ingress QoS Scale Limitation

Refer to the following table for the ingress QoS scale limitation.

Table 5. Ingress QoS Scale Limitation
Example: For the default configuration, which is normal (two-counter mode) QoS mode and a 32 class-map size, you can configure 127 interfaces with an ingress policy per core.

Restrictions
Restrictions for Peering QoS Profile
Restrictions for QoS on BVI
Restrictions for TCAM
Restrictions Specific to NCS 540 Variants

The following table lists the ingress QoS scale limitation for these variants of the NCS 540 Series Routers.
Example: For the default configuration, which is normal (two-counter mode) QoS mode and a 32 class-map size, you can configure 127 interfaces with an ingress policy per core.

Other restrictions:
Restrictions for Peering QoS Profile
Restrictions for QoS on BVI
Restrictions for Egress Drop Action
In-Place Policy Modification

The in-place policy modification feature allows you to modify a QoS policy even when the QoS policy is attached to one or more interfaces. A modified policy is subjected to the same checks as a new policy when it is bound to an interface. If the policy modification is successful, the modified policy takes effect on all the interfaces to which the policy is attached. However, if the policy modification fails on any one of the interfaces, an automatic rollback is initiated to ensure that the pre-modification policy is in effect on all the interfaces. You can also modify any class map used in the policy map. The changes made to the class map take effect on all the interfaces to which the policy is attached.
Verification

If unrecoverable errors occur during in-place policy modification, the policy is put into an inconsistent state on the target interfaces. No new configuration is possible until the configuration session is unblocked. It is recommended to remove the policy from the interface, check the modified policy, and then re-apply it accordingly.

References for Modular QoS Service Packet Classification

Specification of the CoS for a Packet with IP Precedence

Use of IP precedence allows you to specify the CoS for a packet. You can create differentiated service by setting precedence levels on incoming traffic and using them in combination with the QoS queuing features, so that each subsequent network element can provide service based on the determined policy. IP precedence is usually deployed as close to the edge of the network or administrative domain as possible, which allows the rest of the core or backbone to implement QoS based on precedence.

Figure 2. IPv4 Packet Type of Service Field

You can use the three precedence bits in the type-of-service (ToS) field of the IPv4 header for this purpose. Using the ToS bits, you can define up to eight classes of service. Other features configured throughout the network can then use these bits to determine how to treat the packet in regard to the ToS to grant it. These other QoS features can assign appropriate traffic-handling policies, including congestion management strategy and bandwidth allocation. For example, queuing features such as LLQ can use the IP precedence setting of the packet to prioritize traffic.

IP Precedence Bits Used to Classify Packets

Use the three IP precedence bits in the ToS field of the IP header to specify the CoS assignment for each packet. You can partition traffic into a maximum of eight classes and then use policy maps to define network policies in terms of congestion handling and bandwidth allocation for each class. Each precedence corresponds to a name.
IP precedence bit settings 6 and 7 are reserved for network control information, such as routing updates. These names are defined in RFC 791.

IP Precedence Value Settings

By default, the routers leave the IP precedence value untouched. This preserves the precedence value set in the header and allows all internal network devices to provide service based on the IP precedence setting. This policy follows the standard approach stipulating that network traffic should be sorted into various types of service at the edge of the network and that those types of service should be implemented in the core of the network. Routers in the core of the network can then use the precedence bits to determine the order of transmission, the likelihood of packet drop, and so on. Because traffic coming into your network can have the precedence set by outside devices, we recommend that you reset the precedence for all traffic entering your network. By controlling IP precedence settings, you prohibit users that have already set the IP precedence from acquiring better service for their traffic simply by setting a high precedence for all of their packets. The class-based unconditional packet marking and LLQ features can use the IP precedence bits.

IP Precedence Compared to IP DSCP Marking

If you need to mark packets in your network and all your devices support IP DSCP marking, use IP DSCP marking because it provides more unconditional packet marking options. If marking by IP DSCP is undesirable, or if you are unsure whether the devices in your network support IP DSCP values, use the IP precedence value to mark your packets. The IP precedence value is likely to be supported by all devices in the network. You can set up to 8 different IP precedence markings and 64 different IP DSCP markings.
Conditional Marking of MPLS Experimental Bits for L3VPN Traffic

Conditional marking of MPLS experimental bits is achieved for Layer 3 Virtual Private Network (L3VPN) traffic by applying a combination of ingress and egress policy-maps on the Provider Edge (PE) router. In the ingress policy-map, the qos-group or discard-class is set either based on the result of the policing action or implicitly. The egress policy-map matches on qos-group or discard-class and sets the MPLS experimental bits to the corresponding value. This feature is supported for both IPv4 and IPv6 traffic in the L3VPN network. Conditional marking can be used to mark the MPLS experimental bits differently for in-contract and out-of-contract packets. In-contract packets are conforming packets with the color green and discard-class set to 0. Out-of-contract packets are packets that have exceeded the limit and have the color yellow and discard-class set to 1. Conditional marking of MPLS experimental bits for L3VPN traffic is supported on both physical and bundle main interfaces, as well as sub-interfaces.

Restrictions for Conditional Marking of MPLS Experimental Bits on L3VPN
Conditional Marking of MPLS Experimental Bits for L2VPN Traffic

Conditional marking of MPLS EXP bits is supported for Virtual Private Wire Service (VPWS), Virtual Private LAN Service (VPLS), and Ethernet Virtual Private Network (EVPN) traffic in the L2VPN network. Conditional marking of MPLS experimental bits is achieved for Layer 2 Virtual Private Network (L2VPN) traffic by applying a combination of ingress and egress policy-maps on the Provider Edge (PE) router. In the ingress policy-map, the qos-group or discard-class is set either based on the result of the policing action or implicitly. The egress policy-map matches on qos-group, or on a combination of qos-group and discard-class, and sets the MPLS experimental bits to the corresponding value. Conditional marking can be used to mark the MPLS experimental bits differently for in-contract and out-of-contract packets. In-contract packets are conforming packets with the color green and discard-class set to 0. Out-of-contract packets are packets that have exceeded the limit and have the color yellow and discard-class set to 1. Conditional marking of MPLS experimental bits for L2VPN traffic is supported on both physical and bundle main interfaces, as well as sub-interfaces.

Restrictions for Conditional Marking of MPLS Experimental Bits on L2VPN
Policy-map for conditional marking of incoming traffic

The incoming packets on the Provider Edge (PE) router are classified based on the ingress policy-map, and these actions are taken.
Running Configuration:
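The running configuration itself is not reproduced here. The following is a hedged, minimal sketch (policy names, rates, and interface names are assumptions) of an ingress policy-map in which the policer colors conforming packets green (discard-class 0) and exceeding packets yellow (discard-class 1) while the qos-group is set for later egress matching:

```
policy-map ingress-marking
 class class-default
  set qos-group 1
  police rate 5 mbps peak-rate 10 mbps
  !
 !
 end-policy-map
!
interface HundredGigE0/0/0/1
 service-policy input ingress-marking
```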
Policy-map for conditional marking of outgoing MPLS traffic

The ingress packet undergoes MPLS encapsulation during egress processing in the PE router, which performs the label imposition. The MPLS experimental bits are marked based on the egress policy-map, which performs the following actions:
Running Configuration:
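The running configuration itself is not reproduced here. As a hedged sketch (class names, EXP values, and interface names are assumptions), the egress policy-map matches on the qos-group and discard-class set at ingress and marks the imposed MPLS EXP bits differently for in-contract (green) and out-of-contract (yellow) packets:

```
class-map match-all qos1-green
 match qos-group 1
 match discard-class 0
 end-class-map
!
class-map match-all qos1-yellow
 match qos-group 1
 match discard-class 1
 end-class-map
!
policy-map egress-exp-marking
 class qos1-green
  set mpls experimental imposition 2
 !
 class qos1-yellow
  set mpls experimental imposition 1
 !
 end-policy-map
!
interface HundredGigE0/0/0/2
 service-policy output egress-exp-marking
```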
Conditional Marking of MPLS Experimental Bits for EVPN-VPWS Single-Homing Services
The conditional marking of MPLS experimental bits is achieved for EVPN-VPWS single-homing services by applying a combination of ingress and egress policy-maps on the provider edge (PE) router. In the ingress policy-map, the qos-group or discard-class is set either based on the result of the policing action or implicitly. The egress policy-map matches on qos-group, or on a combination of qos-group and discard-class, and sets the MPLS experimental bits to the corresponding value. Conditional marking can be used to mark the MPLS experimental bits differently for in-contract and out-of-contract packets. In-contract packets are conforming packets with the color green and discard-class set to 0. Out-of-contract packets are packets that have exceeded the limit and have the color yellow and discard-class set to 1. Conditional marking of MPLS experimental bits for EVPN-VPWS single-homing services is supported on both physical and bundle main interfaces, as well as sub-interfaces.

MPLS EXP Marking for EVPN Multi-Homed Services

Table 10. Feature History Table
Configuration
Running Configuration
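The running configuration itself is not reproduced here. As an assumed sketch for EVPN-VPWS (policy names, rates, VLAN, and interface names are illustrative), the ingress policy is attached to the attachment-circuit sub-interface so that the policer colors traffic and sets the qos-group for egress EXP marking:

```
policy-map evpn-ingress
 class class-default
  set qos-group 2
  police rate 10 mbps peak-rate 20 mbps
  !
 !
 end-policy-map
!
interface TenGigE0/0/0/0.100 l2transport
 encapsulation dot1q 100
 service-policy input evpn-ingress
```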
Verification

Verify that you have configured conditional marking of MPLS experimental bits for EVPN-VPWS single-homing services successfully.
QPPB

QoS Policy Propagation via BGP (QPPB) is a mechanism that allows propagation of quality of service (QoS) policy and classification by the sending party, based on the following:
This helps in classification based on the destination address instead of the source address. QoS policies that differentiate between different types of traffic are defined for a single enterprise network. For instance, one enterprise may want to treat important web traffic, unimportant web traffic, and all other data traffic as three different classes, and thereafter use different classes for the voice and video traffic. QPPB is introduced to overcome the following problems:
QPPB allows marking of packets based on the QoS group value associated with a Border Gateway Protocol (BGP) route.

Benefits of QPPB
Router A learns routes from AS 200 and AS 100. A QoS policy is applied to any ingress interface of Router A to match the defined route maps against the destination prefixes of incoming packets. Packets on Router A matching AS 200 or AS 100 are sent with the appropriate QoS policy from Router A. QPPB uses BGP table maps to leverage BGP's scalable database of destination prefixes. BGP adds the ability to map a qos-group value to desired IP destinations. These qos-group values are used in QoS policies applied locally on ingress interfaces. Whenever a packet bound for such a destination is encountered, the qos-group value for that destination route is looked up and matched inside the policy class-map, and the packet is marked for any configured policy actions.

Configuration Workflow

Use the following configuration workflow for QPPB:
Define route policy

A routing policy instructs the router to inspect routes, filter them, and potentially modify their attributes as they are accepted from a peer, advertised to a peer, or redistributed from one routing protocol to another. The routing policy language (RPL) provides a language to express routing policy. You must set up destination prefixes either to match inline values or one of a set of values in a prefix set.
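As a hedged sketch of this step (the prefix-set entries, policy name, and qos-group value are illustrative assumptions), an RPL route policy can match destination prefixes and attach a qos-group value:

```
prefix-set DEST-PREFIXES
 192.0.2.0/24,
 198.51.100.0/24
end-set
!
route-policy QPPB-RP
 if destination in DEST-PREFIXES then
  set qos-group 5
 endif
 pass
end-policy
```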
Put route policy at the table-policy attach point under BGP

The table-policy attach point permits the route policy to perform actions on each route as it is installed into the RIB routing table. QPPB uses this attachment point to intercept all routes as they are received from peers. Ultimately, the RIB updates the FIB in the hardware forwarding plane to store destination prefix routing entries; in cases where the table policy matches a destination prefix, the qos-group value is also stored with the destination prefix entry for use in the forwarding plane.
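This step can be sketched as follows (the AS number is illustrative, and QPPB-RP stands for whatever name you gave the route policy defined in the previous step):

```
router bgp 65000
 address-family ipv4 unicast
  table-policy QPPB-RP
 !
!
```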
Ingress interface QoS and IPv4/IPv6 BGP configuration

QPPB is enabled per interface, and individually for IPv4 and IPv6. An ingress policy matches on the qos-groups marked by QPPB and takes the desired action. If a packet is destined for a destination prefix on which BGP route policy has stored a qos-group, but it ingresses on an interface on which QPPB is not enabled, it is not marked with the qos-group. Earlier, the router supported matching on qos-group only in the peering profile (hw-module profile qos ingress-model peering location <>). QPPB now permits class-maps to match qos-group in the default non-peering-mode QoS as well. QPPB and hierarchical QoS policy profiles can also work together when H-QoS is used.
Configuring QPPB on an Interface
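As an assumed sketch of this step (class, policy, and interface names are illustrative), QPPB is enabled on the ingress interface for the destination address family, and an ingress policy matches the qos-group that the BGP table policy stored for the destination prefix:

```
class-map match-any QPPB-CLASS
 match qos-group 5
 end-class-map
!
policy-map QPPB-IN
 class QPPB-CLASS
  set traffic-class 5
 !
 class class-default
 !
 end-policy-map
!
interface HundredGigE0/0/0/0
 ipv4 bgp policy propagation input qos-group destination
 service-policy input QPPB-IN
```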
Egress Interface Configuration

The traffic-class set on ingress has no existence outside the device. traffic-class is not part of any packet header; it is internal context data associated with relevant packets. It can be used as a match criterion in an egress policy to set various fields on the outgoing packet or to shape flows.

Restrictions:
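The egress use of traffic-class described above can be sketched as follows (class and policy names, the shaping rate, and the interface name are illustrative assumptions):

```
class-map match-any TC5
 match traffic-class 5
 end-class-map
!
policy-map EGRESS-SHAPE
 class TC5
  shape average 500 mbps
 !
 class class-default
 !
 end-policy-map
!
interface HundredGigE0/0/0/1
 service-policy output EGRESS-SHAPE
```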