Cisco Application Centric Infrastructure Best Practices Guide, Release 1.3(1) and Earlier
Americas Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000
800 553-NETS (6387)
Fax: 408 527-0883
THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS,
INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE BUT ARE PRESENTED WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED. USERS MUST TAKE FULL RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS.
THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT SHIPPED WITH
THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE OR LIMITED WARRANTY,
CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY.
The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB's public domain version of
the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California.
NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED "AS IS" WITH ALL FAULTS.
CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE.
IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT
LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS
HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network
topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional
and coincidental.
All printed copies and duplicate soft copies of this document are considered uncontrolled. See the current online version for the latest version.
Cisco has more than 200 offices worldwide. Addresses and phone numbers are listed on the Cisco website at www.cisco.com/go/offices.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL:
https://www.cisco.com/c/en/us/about/legal/trademarks.html. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a
partnership relationship between Cisco and any other company. (1721R)
© 2016–2020 Cisco Systems, Inc. All rights reserved.
CONTENTS
PREFACE Preface xv
Audience xv
Document Conventions xv
Related Documentation xvii
Documentation Feedback xviii
Obtaining Documentation and Submitting a Service Request xviii
CHAPTER 1 Overview 1
PART I Design 3
Transit Routing 49
About Transit Routing 49
Prerequisites for Transit Routing 51
Guidelines and Limitations for Transit Routing 51
Recommended Configuration Procedure for Transit Routing 51
Verifying the Transit Routing Configuration 63
Additional References for Transit Routing 64
L3Out Ingress Policy Enforcement 64
About L3Out Ingress Policy Enforcement 64
Prerequisites for L3Out Ingress Policy Enforcement 66
Guidelines and Limitations for L3Out Ingress Policy Enforcement 66
Recommended Configuration Procedure for L3Out Ingress Policy Enforcement 66
Additional References for L3Out Ingress Policy Enforcement 67
L3Out MTU Considerations 67
About L3Out MTU Considerations 67
Recommended Configuration Procedure for Setting MTU 68
Setting OSPF MTU Ignore 68
Shared L3Outs 69
About Shared L3Outs 69
Prerequisites for Shared L3Outs 71
Guidelines and Limitations for Shared L3Outs 71
Use Cases for Shared L3Outs 71
Configuration Example for Shared L3Outs Using the GUI 72
L3Out Router IDs 73
Microsegmentation 83
About Microsegmentation 83
Guidelines and Limitations for Microsegmentation 83
Intra-Endpoint Group Isolation 84
uSeg Endpoint Group for a Physical Domain 86
uSeg Endpoint Group for a VMM Domain 87
Additional References for Microsegmentation 89
Reusing a Single Layer 4 to Layer 7 Device for Multiple Service Graphs 125
About Reusing a Single Layer 4 to Layer 7 Device for Multiple Service Graphs 125
Prerequisites for Reusing a Single Layer 4 to Layer 7 Device for Multiple Service Graphs 125
Guidelines and Limitations for Reusing a Single Layer 4 to Layer 7 Device for Multiple Service Graphs 125
Configuration Example for a Virtual Appliance That is Used By Multiple Service Graphs 126
Configuration Example for a Physical Appliance That is Used By Multiple Service Graphs 127
Verifying the Service Graph Configuration for a Device That is Used By Multiple Service Graphs Using the GUI 128
Additional References for Reusing a Single Layer 4 to Layer 7 Device for Multiple Service Graphs 128
Preface
This preface includes the following sections:
• Audience, on page xv
• Document Conventions, on page xv
• Related Documentation, on page xvii
• Documentation Feedback, on page xviii
• Obtaining Documentation and Submitting a Service Request, on page xviii
Audience
This guide is intended primarily for data center administrators with responsibilities and expertise in one or
more of the following:
• Virtual machine installation and administration
• Server administration
• Switch and network administration
• Cloud administration
Document Conventions
Command descriptions use the following conventions:
bold: Bold text indicates the commands and keywords that you enter literally as shown.
italic: Italic text indicates arguments for which the user supplies the values.
{x | y}: Braces enclosing keywords or arguments separated by a vertical bar indicate a required choice.
variable: Indicates a variable for which you supply values, in contexts where italics cannot be used.
string: A nonquoted set of characters. Do not use quotation marks around the string, or the string will include the quotation marks.
screen font: Terminal sessions and information that the switch displays are in screen font.
boldface screen font: Information that you must enter is in boldface screen font.
italic screen font: Arguments for which you supply values are in italic screen font.
Note: Means reader take note. Notes contain helpful suggestions or references to material not covered in the manual.
Caution: Means reader be careful. In this situation, you might do something that could result in equipment damage or loss of data.
Related Documentation
Cisco Cloud APIC Documentation
The Cisco Cloud APIC documentation is available at the following URL: https://www.cisco.com/c/en/us/support/cloud-systems-management/cloud-application-policy-infrastructure-controller/tsd-products-support-series-home.html
Documentation Feedback
To provide technical feedback on this document, or to report an error or omission, please send your comments
to [email protected]. We appreciate your feedback.
CHAPTER 1
Overview
• About This Document, on page 1
About This Document
PART I
Design
• ACI Constructs Design, on page 5
• Routing Design, on page 49
• Security Design, on page 83
• Virtualization Design, on page 91
• Layer 4 to Layer 7 Design, on page 109
• Miscellaneous Design, on page 139
CHAPTER 2
ACI Constructs Design
• Common Tenant and User-Configured Tenant Policy Usage, on page 5
• Common Pervasive Gateway, on page 8
• Contracts and Policy Enforcement, on page 11
• Contract Labels, on page 17
• Taboo Contracts, on page 19
• Bridge Domains, on page 21
• Application-Centric and Network-Centric Deployments, on page 28
• Layer 2 Extension, on page 31
• Infrastructure VXLAN Tunnel Endpoint Pool, on page 33
• Virtual Routing and Forwarding Instances, on page 35
• Stretched Fabric, on page 35
• Access Policies, on page 37
• Managed Object Naming Convention , on page 47
Common Tenant and User-Configured Tenant Policy Usage
Procedure
Step 1 Export a contract. On the menu bar, choose Tenants > All Tenants.
Step 2 In the Work pane, double-click the desired tenant's name.
Step 3 In the Navigation pane, choose Tenant tenant_name > Security Policies > Contracts.
Step 4 In the Work pane, choose Actions > Export Contract.
Step 5 In the Export Contract dialog box, fill out the fields as necessary.
For a contract to be used between endpoint groups within separate VRFs, the contract scope must be changed
to Global. The scope is set to VRF by default.
Step 6 Export a Layer 4 to Layer 7 device. On the menu bar, choose Tenants > All Tenants.
Step 7 In the Work pane, double-click the user-configured tenant's name from which you will export the contract.
Step 8 In the Navigation pane, choose Tenant tenant_name > L4-L7 Services > L4-L7 Devices.
Step 9 In the Work pane, choose Actions > Export L4-L7 Devices.
Step 10 In the Export L4-L7 Devices dialog box, fill out the fields as necessary.
The dn parameter has a value of "uni/tn-ACI-BP/brc-BP-contract." Without examining the classes, you can
see that this contract exists directly under tenant ACI-BP and that the contract name is "BP-contract."
Configuration Examples for Common Tenant and User-Configured Tenant Policy Usage
When selecting a policy for use, you can typically see the tenant association during the selection process. For
example, when attempting to associate a contract to an endpoint group within a user-configured tenant, a
variety of contract choices might display, such as in the following example list:
• multiservice/CTRCT1
• multiservice/JT-BigIP1
• multiservice/JT-BigIP2
• common/TK_common
• common/TK_dev
• common/TK_shared
The contract naming convention is "tenant/contract_name." From the example contract names, you can infer
that all choices that begin with "common/" exist within the common tenant, while all choices prefixed with
"multiservice/" have been created within the user-configured tenant "multiservice."
Common Pervasive Gateway
Verifying the Common Pervasive Gateway Using the GUI
• The vMAC should be configured identically on matching bridge domains across both ACI fabrics that are utilizing CPG.
• The VIP address will be set as a virtual IP and will act as the gateway for hosts within this subnet.
Procedure
Step 5 The Custom MAC Address field is the pMAC, which must be unique between the two Cisco Application Centric Infrastructure (ACI) fabrics sharing the CPG. By default, all ACI fabrics have the same value. If the value is the same for both fabrics, change the value on either of the fabrics.
Step 6 The Virtual MAC Address field is the vMAC that must be the same between both bridge domains across
both ACI fabrics. Replace the “Not Configured” text with a valid MAC address.
Step 7 Put a check in the Treat as virtual IP address check box to define the subnet to be the VIP address under
the bridge domain.
This should be done for the address that will be shared across both bridge domains and act as the GW for
hosts on this subnet. Otherwise, another subnet/bridge domain address will need to be created that is unique
to this fabric. For example, assume that 192.168.1.1 will be the VIP and exist as the virtual IP address on both
fabrics' bridge domains. Fabric 1 will have a second subnet under the bridge domain set as 192.168.1.2, and
Fabric 2 will have a second subnet under the bridge domain set as 192.168.1.3. These second subnets will not
be virtual IPs, but instead will act as the bridge domain SVI.
Contracts and Policy Enforcement
Policy information in Cisco Application Centric Infrastructure (ACI) is programmed into two TCAM tables:
• Policy TCAM contains entries for the allowed endpoint-group-to-endpoint-group traffic
• App TCAM contains shared destination Layer 4 port ranges
The size of the policy TCAM depends on the generation of Cisco ASIC that is in use. For ALE-based systems,
the policy TCAM size is 4k entries. For ALE2-based systems, 32k hardware entries are available. In certain
larger scale environments, it is important to take policy TCAM usage into account and ensure that the limits
are not exceeded.
TCAM entries are generally specific to each endpoint group pair. In other words, even if the same contract
is reused, new TCAM entries are installed for every pair of endpoint groups, as shown in the following figure:
vzAny
The "Any" endpoint group is a collection of all of the endpoint groups within a context, which is also known
as a virtual routing and forwarding (VRF), that allows for a shorthand way to refer to all of the endpoint groups
within that context. This shorthand referral eases management by allowing for a single point of contract
configuration for all endpoint groups within a context, and also optimizes hardware resource consumption by
applying the contract to this one group rather than to each endpoint group individually.
Consider the example shown in the following figure:
Figure 4: Multiple Endpoint Groups Consuming a Single Contract
In this scenario, a single endpoint group named "Shared" is providing a contract, with multiple endpoint groups
consuming that contract. Although this setup works, it has some drawbacks. First, the administrative burden
increases, as each endpoint group must be configured separately to consume the contract. Second, the number
of hardware TCAM entries increases each time an endpoint group associates with a contract. A very high
number of endpoint groups all providing or consuming a contract can, in extreme cases, lead to exhaustion
of the hardware resources.
To overcome these issues, the "vzAny" object can be used. vzAny is a managed object within Cisco Application
Centric Infrastructure (ACI) that represents all endpoint groups within a VRF. This object can be used to
provide or consume contracts, so in the example above, you can consume the contract from vzAny with the
same results, as shown in the following figure:
Figure 5: vzAny Consuming a Contract
This approach is not only easier to configure (although automation can reduce this benefit), but also represents the most efficient use of fabric hardware resources, so it is recommended in cases where every endpoint group within a VRF must consume or provide a given contract.
Whenever the use of the vzAny object is being considered, the administrator must plan for its use carefully.
Once the vzAny object is configured to provide or consume a contract, any new endpoint groups that are
associated with the VRF will inherit the policy; a new endpoint group added to the VRF will provide or
consume the same contracts that are configured under vzAny. If it is likely that new endpoint groups will
need to be added later and which might not need to consume the same contract as every other endpoint group
in the VRF, then vzAny might not be the most suitable choice. You should carefully consider this situation
before you use vzAny.
To apply a contract to the vzAny group, choose a tenant in the Application Policy Infrastructure Controller
(APIC) GUI. In the Navigation pane, navigate to Tenant tenant_name > Networking > VRFs > vrf_name >
EPG Collection for Context. vrf_name is the name of the VRF for which you want to configure vzAny.
EPG Collection for Context is the vzAny object; contracts can be applied here.
In this example, two contracts are configured for SSH and HTTP. Both contracts are provided by EPG2 and
consumed by EPG1. The Apply Both Directions and Reverse Filter Ports options are checked, resulting in
the four TCAM entries shown in the figure.
You can reduce the TCAM utilization by half by making the contract unidirectional, as shown in the following
figure:
However, having a unidirectional contract presents a problem: return traffic is not allowed in the contract,
and therefore the connections cannot be completed and traffic fails. To allow return traffic to pass, you can
configure a rule that allows traffic between all ports with the "established" flag. We can take advantage of
vzAny in this case to configure a single contract for the "established" traffic and apply it to the entire VRF,
as shown in the following figure:
In an environment with a large number of contracts being consumed and provided, this can reduce the number
of TCAM entries significantly.
• If there are port ranges in a filter that is used with a vzAny contract, the ranges are programmed into the policy TCAM to implement them.
Contract Labels
About Contract Labels
Contracts are key objects within the Cisco Application Centric Infrastructure (ACI) policy model to express
intended communication flows. Endpoint groups can only communicate with other endpoint groups according
to the contract rules. A contract can be thought of as an ACL that opens ports between endpoint groups. An
administrator uses a contract to select the types of traffic that can pass between endpoint groups, including
the protocols and ports allowed. If there are no contracts connecting two endpoint groups, inter-endpoint
group communication is disabled by default as long as the VRF is set to Enforced. This is a representation
of the white-list policy model that ACI is built around. There is no contract required for intra-endpoint group
communication; intra-endpoint group communication is always implicitly allowed regardless of VRF settings.
There are configurations that can block intra-endpoint group communication, but this capability is provided by microsegmentation and is not covered in this section.
Contracts can contain multiple communication rules, and multiple endpoint groups can both consume and
provide multiple contracts. Labels allow for control over which subjects and filters to apply when
communicating between a specific pair of endpoint groups. Without labels, a contract will apply every subject
and filter between consumer and provider endpoint groups. A policy designer can use labels to compactly
represent a complex communication scenario, within the scope of a single contract, then re-use this contract
while specifying only a subset of its policies across multiple endpoint groups.
• Understand the scope of a label. Labels can be applied to a variety of provider and consumer managed
objects. This includes endpoint groups, contracts, bridge domains, DHCP relay policies, and DNS policies.
Labels do not apply across object types; a label on an application endpoint group has no relevance to a
label on a bridge domain.
• Labels are managed objects with only one property: a name. Labels enable the classification of which
objects can and cannot communicate with one another. Label matching is done first. If the labels do not
match, no other contract or filter information is processed.
• Label matching can be applied based on logical operators. The label match attribute can be one of these
values: at least one (the default), all, none, or exactly one.
• Because labels are named references, do not use duplicate label names unless the intent is to chain those flows together.
Procedure
Step 1 Configure contract labels (consumer and provider). On the menu bar, choose Tenants > All Tenants.
Step 2 In the Work pane, double-click the tenant's name.
Step 3 In the Navigation pane, choose Tenant tenant_name > Security Policies > Contracts > contract_name >
contract_subject_name.
Step 4 In the Work pane, choose the Policy > Label tabs.
The Work pane displays the existing consumed and provided contract labels, and you can configure new
labels.
Step 5 Configure endpoint group subject labels. In the Navigation pane, choose Tenant tenant_name > Application
Profiles > application_profiles_name > Application EPGs > EPG EPG_name.
Step 6 In the Work pane, choose the Policy > Subject Labels tabs.
The Work pane displays the existing consumed and provided endpoint group subject labels, and you can
configure new labels.
Step 7 Configure an endpoint group label when associating a contract as a consumer or provider. In the Navigation
pane, choose Tenant tenant_name > Application Profiles > application_profiles_name > Application
EPGs > EPG EPG_name > Contracts.
Step 8 In the Work pane, choose Action > Add Provided Contract or Action > Add Consumed Contract.
Step 9 In the Add Provided Contract or Add Consumed Contract dialog box, fill out the fields as appropriate and
specify the contract label and subject label.
Taboo Contracts
About Taboo Contracts
Taboo contracts are special contract managed objects in the model that the network administrator can use to
deny specific classes of traffic. Taboos can be used to drop traffic matching a pattern, such as any endpoint
group, a specific endpoint group, or matching results from a filter. Taboo rules are applied in the hardware
before the rules of regular contracts are applied.
Procedure
Step 1 Configure a taboo contract within the security policies of a tenant. On the menu bar, choose Tenants > All
Tenants.
Step 2 In the Work pane, double-click the desired tenant's name.
Step 3 In the Navigation pane, choose Tenant tenant_name > Security Policies > Taboo Contracts.
Step 4 In the Work pane, choose Action > Create Taboo Contract.
Step 5 In the Create Taboo Contract dialog box, fill in the fields as necessary. You must specify the Name and
add at least one subject.
The subject determines what flow to deny explicitly when the taboo contract is applied.
Step 6 Add a taboo contract to an endpoint group. In the Navigation pane, choose Tenant tenant_name > Application
Profiles > application_profile_name > Application EPGs > EPG_name > Contracts.
Step 7 In the Work pane, choose Action > Add Taboo Contract.
Step 8 In the Add Taboo Contract dialog box, choose an existing taboo contract or create a new taboo contract.
When adding a taboo contract to an endpoint group, there is no consumer/provider relationship needed to
complete the contract flow. The taboo contract will insert a deny specific to that endpoint group once it has
been associated to an endpoint group.
Step 9 (Optional) If you are creating a new taboo contract, in the Create Taboo Contract dialog box, fill in the
fields as necessary. You must specify the Name and add at least one subject.
The subject determines what flow to deny explicitly when the taboo contract is applied.
Bridge Domains
About Bridge Domains
Within a private network, one or more bridge domains must be defined. A bridge domain is a Layer 2 forwarding
construct within the fabric, used to constrain broadcast and multicast traffic.
Bridge domains in Cisco Application Centric Infrastructure (ACI) have a number of configuration options to
allow the administrator to tune the operation in various ways. The configuration options are as follows:
• L2 Unknown Unicast—This option can be set to either Flood or Hardware Proxy. If this option is set to
Flood, Layer 2 unknown unicast traffic will be flooded inside the fabric. If the Hardware Proxy option
is set, the fabric mapping database will be queried for Layer 2 unknown unicast traffic. This option does
not have any impact on what the mapping database actually learns; the mapping database is always
populated for Layer 2 entries regardless of this configuration.
• ARP Flooding—If ARP flooding is enabled, ARP traffic will be flooded inside the fabric as per regular
ARP handling in traditional networks. If this option is disabled, the fabric will attempt to unicast the
ARP traffic to the destination. This option only applies if unicast routing is enabled on the bridge domain.
If unicast routing is disabled, ARP traffic is always flooded, regardless of the status of the ARP Flooding
option.
• Unicast Routing—This option enables the learning of IP addresses on the bridge domain in the endpoint
table. MAC addresses are always learned by the endpoint table. Using the unicast routing option may be
required for some advanced functionality, such as dynamic endpoint attachment with Layer 4 to Layer
7 services. Enabling unicast routing helps to reduce flooding in a bridge domain, as disabling ARP
flooding depends upon it. When considering unicast routing, you must consider the desired topology. If
an external device (such as a firewall) is acting as the default gateway and there is routing between two
bridge domains, enabling unicast routing might cause traffic to be routed on the fabric and bypass the
external device. Therefore, as a general best practice, we recommend that you disable unicast routing in
a bridge domain that only handles Layer 2 traffic, which is a so-called Layer 2 bridge domain.
• Enforce Subnet Check for IP Learning—If this option is checked, the fabric will not learn IP addresses
from a subnet other than the one configured on the bridge domain. For example, if a bridge domain is
configured with a subnet address of 10.1.1.0/24, the fabric would not learn the IP address of an endpoint
by using an address that is outside of this range, such as 20.1.1.1/24. This feature does not affect the data
path; in other words, it will not drop packets coming from the wrong subnet. The feature simply prevents
the fabric from learning endpoint information in this scenario.
Given the above options, it might not be immediately obvious how a bridge domain should be configured.
The following sections explain when and why particular options should be selected.
Given the above requirements, the recommended bridge domain settings are as follows:
• L2 Unknown Unicast—Hardware Proxy
• Unicast Routing—Enabled
• ARP Flooding—Disabled
• Subnet Configured—Yes, if required
• Enforce Subnet Check for IP Learning—Yes
In this scenario, most of the bridge domain settings can be left at their default, optimized values. A subnet
(that is, a gateway address) should be configured as required and you should enforce the subnet check for IP
learning.
Given the above requirements, the recommended bridge domain settings are as follows:
• L2 Unknown Unicast—Hardware Proxy
• Unicast Routing—Enabled
• ARP Flooding—Disabled
• Subnet Configured—Yes
• Enforce Subnet Check for IP Learning—Yes
The bridge domain settings for this scenario are similar to scenario 1; however, in this case the subnet address
should be configured. As silent hosts can exist within the bridge domain, a mechanism must exist to ensure
those hosts are learned correctly inside the Cisco Application Centric Infrastructure (Cisco ACI) fabric. Cisco
ACI implements an ARP gleaning mechanism that allows the spine switches to generate an ARP request for
an endpoint using the subnet IP address as the source address. This ARP gleaning mechanism ensures that
silent hosts are always learned, even when using optimized bridge domain features such as hardware proxy.
The following figure shows the ARP gleaning mechanism when endpoints are not present in the mapping
database:
If a subnet IP address cannot be configured for any reason, ARP flooding should be enabled as an alternative
to allow the silent hosts to be learned.
Given the above requirements, the recommended bridge domain settings are as follows:
• L2 Unknown Unicast: Flood
• Unicast Routing: Disabled
• ARP Flooding: N/A (enabled automatically due to no unicast routing)
• Subnet Configured: No
• Enforce Subnet Check for IP Learning: N/A
In this scenario, all optimizations inside the bridge domain are disabled and the bridge domain is operating
in a "traditional" manner. Silent hosts are dealt with through normal ARP flooding, which is always enabled
when unicast routing is turned off.
Also, when operating the bridge domain in a "traditional" mode, the size of the bridge domain should be kept
manageable. That is, limit the subnet size and number of hosts as you would in a regular VLAN environment.
Scenario 4: Non-IP Address or IP Address-Based, Routed or Switched Traffic, Possible "Floating" IP Addresses
In this scenario, the bridge domain has the following configuration:
• IP address-based or non-IP address-based routed or switched traffic
• Firewalls and load balancers cannot be connected to this bridge domain
• Hosts or devices where the IP address might "float" between MAC addresses
• Silent hosts are not expected to be connected to the bridge domain
Given the above requirements, the recommended bridge domain settings are as follows:
• L2 Unknown Unicast: Hardware Proxy
• Unicast Routing: Enabled
• ARP Flooding: Enabled
• Subnet Configured: Yes
• Enforce Subnet Check for IP Learning: Yes
In this scenario, the bridge domain contains devices where the IP address might move from one device to
another, meaning that the IP address moves to a new MAC address. This might be the case where routed
firewalls are operating in active/standby mode, or where server clustering is used. Where this is a requirement,
it is useful for gratuitous ARPs to be flooded inside the bridge domains to update the ARP cache of other
hosts.
In this example, unicast routing and subnet configuration are enabled for troubleshooting purposes, such as
for using traceroute, or for advanced features that require it, such as dynamic endpoint attachment.
Scenario 5: Migrating to Cisco ACI, Legacy Network Connected Through a Layer 2 Extension, Gateways on
Legacy Network
In this scenario, you are migrating to Cisco ACI. You are extending Layer 2 from Cisco ACI to your legacy
network, and Layer 3 gateways still reside on the legacy network.
The default gateway used by the workloads to establish communication outside of the workloads' IP subnet
is initially maintained in the legacy network. This implies that the Cisco ACI fabric initially provides only
Layer 2 services for devices that are part of an EPG, and the workloads that are already migrated to the Cisco
ACI fabric send traffic to the legacy network when they need to communicate with devices that are external
to their IP subnet.
Given the above requirements, the recommended bridge domain settings are as follows:
• L2 Unknown Unicast: Flood
Layer 2 unknown unicast requests that originated from devices connected to the Cisco ACI fabric should
be able to reach the default gateway or other endpoints that are part of the same IP subnet and are still
connected to the legacy network. Because those entities are unknown to the Cisco ACI fabric, you must
enable Layer 2 unknown traffic requests to flood across the Cisco ACI fabric and toward the legacy
network.
• L2 Unknown Multicast Flooding: Flood
Layer 2 unknown multicast requests that originated from devices connected to the Cisco ACI fabric
should be able to reach the default gateway or other endpoints that are part of the same IP subnet and
are still connected to the legacy network. Because those entities are unknown to the Cisco ACI fabric,
you must enable Layer 2 unknown traffic requests to flood across the Cisco ACI fabric and toward the
legacy network.
• Unicast Routing: Disabled
The Cisco ACI fabric must behave as a Layer 2 network in this initial migration phase, therefore you
must disable the Unicast Routing capabilities. As a consequence, the Cisco ACI fabric will only forward
traffic for endpoints that are part of this bridge domain by performing Layer 2 look ups and only MAC
address information would be stored in the Cisco ACI database for those workloads (that is, their IP
addresses will not be learned).
• ARP Flooding: Enabled
ARP requests that originated from devices connected to the Cisco ACI fabric should be able to reach the
default gateway or other endpoints that are part of the same IP subnet and are still connected to the legacy
network. Because those entities are unknown to the Cisco ACI fabric, you must enable ARP requests to
flood across the Cisco ACI fabric and toward the legacy network.
• Subnet Configured: If required
• Enforce Subnet Check for IP Learning: If required
In this scenario, the user is migrating hosts and services from the legacy network into the Cisco ACI fabric.
A Layer 2 connection has been set up between the two environments and the Layer 3 gateway functionality
will continue to exist in the legacy network for some time. The following figure illustrates the topology of
this configuration:
Figure 10: Layer 2 Connection to Fabric with External Gateways
After all or the majority of the workloads belonging to the IP subnet are migrated into the Cisco ACI fabric,
you can then migrate the default gateway into the Cisco ACI domain. This migration is done by turning on
Cisco ACI routing in the bridge domain and disabling the default gateway function on the legacy network
devices.
Cisco ACI allows you to statically configure the MAC address associated with the default gateway defined for a specific bridge domain. You can therefore use the same MAC address that you previously used for the default gateway in the legacy network so that the gateway move is completely seamless for the workloads connected to the Cisco ACI fabric. That is, there is no need to refresh the workloads' ARP cache entries.
After the migration of an application is completed, you can leverage all of the flooding containment
functionalities offered by the Cisco ACI fabric. Specifically, you can disable ARP flooding as well as Layer
2 unknown unicast flooding.
This is possible only if there are no workloads belonging to that specific Layer 2 broadcast domain that remain
connected to the legacy network. That is, all of the workloads, physical and virtual, have been migrated to
the Cisco ACI fabric. In real life deployments, there are often specific hosts that remain connected to the
legacy network for quite a long time. This is usually the case for bare-metal servers, such as Oracle RAC
databases that remain untouched until the following refresh cycle. Even in this case it may make sense to
move the default gateway for those physical servers to the Cisco ACI fabric. This method will provide the
environment with a centralized point of management for security policies, which can be applied between IP
subnets; however, the flooding of traffic must remain enabled.
After the default gateway for different IP subnets is moved to the Cisco ACI fabric, routing communication
between workloads belonging to the migrated subnets will always occur on the Cisco ACI leaf nodes, leveraging
the distributed anycast gateway functionality.
This is true for workloads that are still connected to the legacy network. Routing happens on the pair of border
leaf nodes interconnecting legacy and new network. After workloads are migrated to the Cisco ACI fabric,
traffic will be routed by leveraging the anycast gateway functionality on the leaf node where the workloads
are connected.
Migrating the workloads and the workloads' default gateway to the Cisco ACI fabric brings advantages even
when maintaining the security policies at the IP subnet level, as the migration allows the Cisco ACI fabric to
become the single point of security policy enforcement between IP subnets, which provides a sort of ACL
management functionality. You can achieve this by following a gradual procedure: after the default gateway
for the different IP subnets has been moved to the Cisco ACI fabric, you can enable full and open connectivity
between endpoints that are connected to different EPGs (IP subnets) by applying a "permit any" contract
between the different EPGs.
With this configuration in place, every time a workload tries to communicate with a device in a different EPG
(IP subnets), a centrally managed security policy is applied to the Cisco ACI leaf switch where the distributed
default gateway function is enabled. Given the fact that the policy has a single "permit any" statement, this
results in open connectivity between the devices.
Because routing between different IP subnets is performed at the Cisco ACI fabric level, the security policy
can be enforced not only between hosts that are connected to the Cisco ACI fabric, but the security policy can
also be applied to devices that are connected to VLAN segments in the legacy network.
A key advantage of the Cisco ACI centrally-managed policy system is the ability to restrict communication
between hosts belonging to different IP subnets. With Cisco ACI, you can restrict communication between
hosts in a holistic manner by applying a central policy from the Cisco Application Policy Infrastructure
Controller (Cisco APIC), dictating which traffic flows are allowed and to and from each of the respective
EPGs.
Application-Centric and Network-Centric Deployments
Application-Centric Deployment
When taking an application-centric approach to an ACI deployment, the applications within an organization
should be allowed to define the network requirements. A true application-centric deployment will make full
use of the available fabric constructs, such as endpoint groups, contracts, filters, labels, external endpoint
groups, and so on, to define how applications and the tiers should communicate.
With an application-centric approach, it is generally the case that the gateways for endpoints will reside in
the fabric itself (rather than on external entities such as firewalls or load balancers). This enables the application
environment to get the maximum benefit from the ACI fabric.
In an application-centric deployment, much of the complexity associated with traditional networks (such as
VRFs, VLANs, and subnets) is hidden from the administrator.
The following figure shows an example of an application-centric deployment:
Figure 11: Application-Centric Deployment
An application-centric approach is generally recommended when users fully understand their application profiles, such as the application tiers and components, and know which applications (or application components) need to communicate with each other and on what protocols or ports.
An application-centric deployment is also seen as an approach for onboarding new applications.
Network-Centric Deployment
A network-centric deployment takes the opposite approach to the application-centric deployment in that the
traditional network constructs, such as VLANs and VRFs, are mapped as closely as possible to the new
constructs within the ACI fabric.
As an example, a traditional network deployment might consist of the following tasks:
• Define 2 server VLANs at the access and aggregation layers
• Configure the access ports to map server to VLANs
• Define a VRF at the aggregation layer
• Define an SVI for each VLAN, and map them to the VRF
• Define the HSRP parameters for each SVI
• Apply features such as ACLs to control traffic between server VLANs, and from server VLANs to the
core
The comparable ACI deployment when taking a network-centric approach might be as follows:
• Deploy the fabric
• Create a tenant and VRF
• Define bridge domains for the purposes of external routing entity communication
• Create an external/outside endpoint group to communicate with external networks
• Create two bridge domains and assign a network to each indicating the gateway IP address (such as
10.10.10.1/24 and 10.10.11.1/24)
If external gateways are defined (such as firewalls or load balancers) for endpoints to use, this constitutes a
network-centric approach. In this scenario, no contracts are required to allow access to the default gateway
from endpoints. Although there are still benefits to be had in terms of centralized control, the fabric might
become more of a Layer 2 transport in certain situations where the gateways are not inside the fabric.
The following figure shows an example of a network-centric approach:
Figure 12: Network-Centric Deployment Approach
A network-centric deployment is typically seen as a starting point for migrating from a legacy network to the ACI fabric. Where the legacy infrastructure is segmented by VLANs, a VLAN=EPG=BD mapping helps network teams understand the ACI constructs better and makes the transition easier.
Using this approach does not require any changes to the existing infrastructure or processes. It can still leverage the benefits that ACI offers, as listed below:
• Enables a next-generation data center network with high-speed 10- and 40-Gbps access or an aggregation
network
• East-west data center traffic optimization to support virtualized, dynamic environments as well as
non-virtualized workloads
• Supports workload mobility and flexibility, with placement of computing and storage resources anywhere
in the data center
• Capability to manage the fabric as a whole instead of using device-centric operations
• Capability to monitor the network as a whole using the APIC in addition to the existing operation
monitoring tools; the APIC offers new monitoring and troubleshooting tools, such as health scores and
atomic counters
• Lower TCO and a common network that can be shared securely across multiple tenants in the data center
• Rapid network deployment and agility through programmability and integrated automation
• Centralized auditing of configuration changes
• Direct visibility into the health of the application infrastructure
Layer 2 Extension
About Layer 2 Extension
When extending a Layer 2 domain outside of the Cisco Application Centric Infrastructure (ACI) fabric to
support migrations from the existing network to a new ACI fabric, or to interconnect dual ACI fabrics at Layer
2, there are two methods to extend your Layer 2 domain:
• Extend the endpoint group out of the ACI fabric using endpoint group static path binding
• Extend the bridge domain out of the ACI fabric using an external bridged domain (also known as a Layer
2 outside)
Note When extending the bridge domain, only a single Layer 2 outside can be created per bridge domain.
Endpoint group extension is the most popular approach to extend Layer 2 domains, where each individual
endpoint group is extended using a dedicated VLAN beyond the fabric. This method is the most commonly
used, as it is easy to deploy and does not require the use of contracts between the inside and outside networks.
However, if you use one bridge domain with multiple endpoint groups, then when you interconnect ACI
fabrics in Layer 2, you should not use the endpoint group extension method due to the risk of loops.
Figure 13: Interconnect Fabrics at Layer 2 with Multiple Endpoint Groups per Bridge Domain (Scenario Not Recommended)
In this example, multiple endpoint groups are associated with a single bridge domain. In this scenario, you
should not extend each individual endpoint group between fabrics as shown in the figure, as this might result
in loops between the fabrics. Instead, a Layer 2 Outside should be used to extend the entire bridge domain
using a single VLAN, as shown in the following figure:
Figure 14: Interconnect Fabrics at Layer 2 - Multiple Endpoint Groups per Bridge Domain (Recommended Scenario)
Infrastructure VXLAN Tunnel Endpoint Pool
Procedure
Use the moquery -c dhcpPool command to view the TEP pool configuration.
Example:
apic1# moquery -c dhcpPool
...
dn : prov-3/net-[10.0.0.0/16]/pool-7
Specifically within the output distinguished name of this class, there is a section that begins with "net-". In
the example snippet above, the APIC was configured with 10.0.0.0/16 as its TEP pool within the setup script
of the APIC.
Virtual Routing and Forwarding Instances
Stretched Fabric
About Stretched Fabric
The stretched fabric allows users to manage multiple datacenter sites as a single fabric by using the same
Application Policy Infrastructure Controller (APIC) controller cluster. The stretched Cisco Application Centric
Infrastructure (ACI) fabric behaves the same way as a regular ACI fabric to support workload portability and
virtual machine mobility. The following figure illustrates the stretched fabric topology:
Access Policies
About Access Policies
The Fabric tab in the Cisco Application Policy Infrastructure Controller (APIC) GUI is used to configure
system-level features including, but not limited to, device discovery and inventory management, diagnostic
tools, domain configuration, and switch and port behavior. The fabric pane is split into three sections: Inventory,
Fabric Policies, and Access Policies. Understanding how fabric and access policies configure the fabric is key to maintaining these policies, which govern internal connections between fabric nodes as well as connections to external entities such as servers, networking equipment, and storage arrays.
This section lists guidelines and provides common configuration examples for key objects in the Fabric >
Access Policies view. The Access Policies view is split into folders separating out different types of policies
and objects that affect fabric behavior. For example, the Interface Policies folder is where port behavior is
configured such as port speed and the controls for specifying whether or not to run protocols, such as LACP,
on switch interfaces. Domains and AEPs are also created in the Access Policies view. The fabric access
policies provide the fabric with the base configuration of the access ports on the leaf switches. For more
information, see Additional References for Access Policies, on page 42.
Note The usage of these policies can be viewed by clicking the Show Usage button in
the Application Policy Infrastructure Controller (APIC) GUI. Use this to determine
what objects are using a certain policy to understand the impact when making
changes.
• Avoid using the Basic GUI or Quick Start wizards, as these may create many automatic configurations
that are not intuitive during troubleshooting.
Domain Guidelines
• Build one physical domain per tenant for bare metal servers or servers without hypervisor integration
requiring similar treatment.
• Build one external routed/bridged domain per tenant for external connectivity.
• For VMM domains, if both DVS and AVS are in use, create a separate VMM domain to support each environment.
• For large deployments where domains (physical/VMM/etc) need to be leveraged across multiple tenants,
a single physical domain or VMM domain can be created and associated with all leaf ports where services
are connected.
AEP Guidelines
• Multiple domains can be associated to a single AEP for simplicity. There are some cases where multiple
AEPs may need to be configured to enable the infrastructure VLAN, such as overlapping VLAN pools,
or to limit the scope of the presence of VLANs across the fabric.
• Another scenario in which multiple AEPs should be utilized is when making an association to VMM
domains. The AAEP also contains relationships to the vSwitch policies, which are then pushed to the
vCenter VDS or AVS. If there are multiple VMM domains deployed with differing vSwitch policies,
multiple AAEPs should be created to account for the various potential vSwitch policy combinations.
• When utilizing an AVS for VMM, Hyper-V, SCVMM, or OpenStack OpFlex integration, the AAEP is where the option to enable the infrastructure VLAN is selected. For the most part, you do not want to extend this VLAN outside of the fabric except when performing this integration. For that purpose, it is beneficial to create an AEP specific to the AVS VMM domain if one is being utilized.
Creating a Switch Profile
After the initial association is created, interface configuration changes will only be made under interface profiles, as those interface profiles are already associated to the corresponding switch profiles.
Consider the following vPC topology as an example:
• When a switch profile is created for each leaf switch individually regardless of vPC definitions:
• Switch profiles example: Leaf_201, Leaf_202
• Interface profiles example: Leaf_201_IPR, Leaf_202_IPR
In the example above, all ports (vPC or non-vPC) are added in both Leaf_201_IPR and Leaf_202_IPR
respectively.
The benefits of creating a switch profile for each leaf individually regardless of vPC definitions are that
there are less switch and interface profiles to manage, it's more flexible to change the ports if needed,
and it supports asymmetric connections for host-facing ports. However, the interface policy group needs
to be configured consistently on both interface selectors.
• When a switch profile is created for each leaf switch individually and also for each vPC pair:
• Switch profiles example: Leaf_201, Leaf_202, Leaf_201_202
• Interface profiles example: Leaf_201_IPR, Leaf_202_IPR, Leaf_201_202_IPR
In the example above, vPC related ports are only added in Leaf_201_202_IPR. Non-vPC related ports
are added to either Leaf_201_IPR or Leaf_202_IPR respectively.
The benefit of creating a switch profile for each leaf switch and also for each vPC pair is that the configuration is simpler in a large-scale environment with a symmetric and replicated setup. However, it is difficult to repurpose ports that are already in use, because changing those interfaces will impact both of the switches.
This section explains how to create and associate switch and interface profiles.
Creating an Interface Profile
Creating a Port Channel Policy
This section explains how to associate switch profiles with interface profiles.
Unlike traditional vPC design, there is no requirement for setting up either a vPC peer-link or vPC
peer-keepalive in the Cisco Application Centric Infrastructure (ACI) fabric. The fabric itself serves as the
peer-link. The rich interconnectivity between spine switches and leaf switches makes it very unlikely that all
the redundant paths between vPC peers fail at the same time. Hence, if the peer switch becomes unreachable,
it is assumed to have crashed. The slave switch does not bring down vPC links.
For more information, see the Operating Cisco Application Centric Infrastructure document at the following URL: http://www.cisco.com/c/en/us/support/cloud-systems-management/application-policy-infrastructure-controller-apic/tsd-products-support-series-home.html.
Mis-Cabling Protocol
About the Mis-Cabling Protocol
Unlike traditional networks, the Cisco Application Centric Infrastructure (ACI) fabric does not participate in
the Spanning Tree Protocol (STP) and does not generate bridge protocol data units (BPDUs). BPDUs are
instead transparently forwarded through the fabric between ports mapped to the same endpoint group. Therefore,
Cisco ACI relies to a certain degree on the loop prevention capabilities of external devices.
Some scenarios, such as the accidental cabling of two leaf ports together, are handled directly using LLDP in
the fabric. However, there are some situations where an additional level of protection is necessary; in those
cases, enabling the Mis-Cabling Protocol (MCP) can help.
Consider the example in the following figure:
In this example, two endpoint groups are configured on the Cisco ACI fabric, both associated with the same
bridge domain. An external switch has one port connected to each of the endpoint groups. In this example, a
misconfiguration has occurred whereby the external switch is allowing VLAN 10 on port 1/20; however, the
endpoint group associated with port 1/10 on leaf 102 is configured for VLAN 11. In this case, port 1/10 on
leaf 102 will not be able to receive BPDUs for VLAN 10. As a result, the spanning tree cannot detect the loop
and all ports will be forwarding.
MCP, if enabled, provides additional protection against this type of misconfiguration. MCP is
a lightweight protocol designed to protect against loops that cannot be discovered by either STP or LLDP.
You should enable MCP on all ports facing external switches or similar devices.
Note Per-VLAN MCP runs on only 256 VLANs per interface. If more than 256 VLANs are present, the
numerically lowest 256 VLANs are chosen.
Procedure
Step 5 Enable MCP on the interface level, which is done when you create an access port policy group. On the menu
bar, choose Fabric > Access Policies.
Step 6 In the Navigation pane, choose Interface Policies > Policy Groups.
Step 7 In the Work pane, choose Actions > Create Access Policy Group.
Step 8 In the Create Access Policy Group dialog box, in the MCP Policy drop-down list, choose MCP-Enabled.
Step 9 Fill out the remaining fields as necessary.
Step 10 Click Submit.
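The MCP policies themselves can also be pushed through the REST API. A minimal sketch, assuming the mcpInstPol (global MCP instance) and mcpIfPol (interface-level MCP policy) classes; the key value and policy name are placeholders, so verify the class and attribute names against your APIC release.

import requests

APIC = "https://apic.example.com"  # placeholder
s = requests.Session(); s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Enable MCP globally; MCP requires a fabric-wide key.
s.post(f"{APIC}/api/mo/uni/infra/mcpInstP-default.json",
       json={"mcpInstPol": {"attributes": {"adminSt": "enabled",
                                           "key": "mcp-secret"}}})
# Interface-level MCP policy, referenced by access port policy groups.
s.post(f"{APIC}/api/mo/uni/infra.json",
       json={"mcpIfPol": {"attributes": {"name": "MCP-Enabled",
                                         "adminSt": "enabled"}}})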
Port Tracking
About Port Tracking
Port tracking policies are used to monitor the status of links between leaf switches and spine switches. When
an enabled port tracking policy is triggered, the leaf switches take down all access interfaces on the switch
that have endpoint groups deployed on them.
Port tracking addresses a scenario in which a leaf node might lose connectivity to the spine node and where
hosts connected to the affected leaf node in an active/standby manner might not be aware of the failure for a
period of time. The following figure illustrates this scenario:
The port tracking feature detects a loss of fabric connectivity on a leaf node and brings down the host facing
ports. This allows the host to fail over to the second link, as shown in the following figure:
Guidelines and Limitations for Port Tracking
Note The preferred host connectivity to the Cisco Application Centric Infrastructure (ACI) fabric is vPC wherever
possible. Port tracking is useful in situations where hosts are connected using active/standby NIC teaming.
Procedure
Step 1 In the Advanced GUI, navigate to the Port Tracking window. Choose Fabric > Access Policies > Global
Policies > Port Tracking.
Step 2 In the Port Tracking window, locate the Port Tracking state field and set it to on.
Step 3 Set the Delay restore timer parameter.
This timer controls the number of seconds that the fabric waits before bringing host ports up after the leaf-spine
links reconverge.
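The same setting can be applied through the REST API. A minimal sketch, assuming the infraPortTrackPol class and its default distinguished name; verify both against your APIC release before use.

import requests

APIC = "https://apic.example.com"  # placeholder
s = requests.Session(); s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Turn port tracking on with a 120-second delay-restore timer.
payload = {"infraPortTrackPol": {"attributes": {
    "adminSt": "on",     # Port Tracking state
    "delay": "120"}}}    # Delay restore timer (seconds)
s.post(f"{APIC}/api/mo/uni/infra/trackEqptFabP-default.json",
       json=payload).raise_for_status()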
VLAN Pools
About VLAN Pools
Within Cisco Application Centric Infrastructure (ACI), there is the concept of access policies, which are a
group of objects that define how traffic can gain access to the fabric. Access policy definition matters when
an EPG is created for use. For example, an EPG that has a static path (for example, node 101, interface eth1/10,
trunked with VLAN 10) without access policies is essentially telling the EPG to use a set of policies to which
it does not have access. At this point, you will see faults indicating path issues. The access policies and
the subsequent domain-to-EPG association tell this EPG that it now has access to a subset of nodes, interfaces,
and VLANs that it can use in path definitions.
VLAN pools are just one piece of the complete access policy definition. A VLAN pool is a container that
is composed of encap blocks, which contain the actual VLAN definitions.
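For illustration, the following minimal sketch creates a VLAN pool with one encap block through the REST API, using the fvnsVlanInstP and fvnsEncapBlk classes; the pool name and VLAN range are placeholders.

import requests

APIC = "https://apic.example.com"  # placeholder
s = requests.Session(); s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Static VLAN pool "Bare_Metal_VLANs" with encap block vlan-100..vlan-200.
payload = {"fvnsVlanInstP": {"attributes": {"name": "Bare_Metal_VLANs",
                                            "allocMode": "static"},
                             "children": [{"fvnsEncapBlk": {"attributes": {
                                 "from": "vlan-100", "to": "vlan-200"}}}]}}
s.post(f"{APIC}/api/mo/uni/infra.json", json=payload).raise_for_status()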
Additional References for VLAN Pools
About the Managed Object Naming Convention
CHAPTER 3
Routing Design
• Transit Routing, on page 49
• L3Out Ingress Policy Enforcement, on page 64
• L3Out MTU Considerations, on page 67
• Shared L3Outs, on page 69
• L3Out Router IDs, on page 73
• Multiple External Connectivity, on page 77
Transit Routing
About Transit Routing
The Cisco Application Centric Infrastructure (ACI) solution allows you to use standard Layer 3 technologies
to connect to external networks. These can be Layer 3 connections to an existing network, WAN routers,
firewalls, mainframes, or any other Layer 3 device. Border leaf switches within the Cisco ACI fabric provide
connectivity to the external Layer 3 devices. Cisco ACI supports Layer 3 connections using static routing
(IPv4 and IPv6) or the following dynamic routing protocols:
• OSPFv2 (IPv4) and OSPFv3 (IPv6)
• BGP (IPv4 and IPv6)
• EIGRP (IPv4 and IPv6)
Within the Cisco ACI fabric, multiprotocol BGP (MP-BGP) is implemented between the leaf and spine
switches to propagate external routes within the fabric. The BGP route reflector technology is deployed to
support many leaf switches within a single fabric. All of the leaf and spine switches are in one single BGP
autonomous system (AS). Once the border leaf learns the external routes, it can then redistribute the external
routes of a given VRF instance to an MP-BGP address family (VPNv4 or VPNv6). MP-BGP maintains a
separate BGP routing table for each VRF instance. Within MP-BGP, the border leaf switch advertises routes
to a spine switch, which is a BGP route reflector. The routes are then propagated to all the leaf switches where
the VRF instances are instantiated.
Before Cisco Application Policy Infrastructure Controller (Cisco APIC) release 2.3(1f), transit routing was
not supported within a single L3Out profile. In Cisco APIC release 2.3(1f) and later, you can configure transit
routing with a single L3Out profile, with the following limitations:
• If the VRF instance is unenforced, an external subnet (l3extSubnet) of 0.0.0.0/0 can be used to allow
traffic between the routers sharing the same Layer 3 EPG.
• If the VRF instance is enforced, an external default subnet (0.0.0.0/0) cannot be used to match both
source and destination prefixes for traffic within the same Layer 3 EPG. To match all traffic within the
same Layer 3 EPG, the following prefixes are supported:
• IPv4
• 0.0.0.0/1—with External Subnets for the External EPG
• 128.0.0.0/1—with External Subnets for the External EPG
• 0.0.0.0/0—with Import Route Control Subnet, Aggregate Import
• IPv6
• 0::0/1—with External Subnets for the External EPG
• 8000::0/1—with External Subnets for the External EPG
• 0::0/0—with Import Route Control Subnet, Aggregate Import
• Alternatively, a single default subnet (0.0.0.0/0) can be used when combined with a VzAny contract. For
example:
• Use a VzAny providing contract and a Layer 3 EPG consuming contract (matching 0.0.0.0/0), or a
VzAny consuming contract and Layer 3 EPG providing contract (matching 0.0.0.0/0).
• Use the subnet 0.0.0.0/0—with Import/Export Route Control Subnet, Aggregate Import, and
Aggregate Export.
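As one illustration of the prefixes above, the 0.0.0.0/0 entry with Import Route Control Subnet and Aggregate Import corresponds to an l3extSubnet object under the external EPG (l3extInstP). A minimal sketch; the tenant, L3Out, and external EPG names are placeholders.

import requests

APIC = "https://apic.example.com"  # placeholder
s = requests.Session(); s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# 0.0.0.0/0 with Import Route Control Subnet + Aggregate Import, attached
# to external EPG "ExtEPG" under L3Out "L3Out1" in tenant "prod".
payload = {"l3extSubnet": {"attributes": {"ip": "0.0.0.0/0",
                                          "scope": "import-rtctrl",
                                          "aggregate": "import"}}}
s.post(f"{APIC}/api/mo/uni/tn-prod/out-L3Out1/instP-ExtEPG.json",
       json=payload).raise_for_status()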
Prerequisites for Transit Routing
• You must have configured a BGP route reflector policy for the Cisco Application Centric Infrastructure
(ACI) fabric.
Not all transit routing combinations are currently supported in ACI. For information about the currently
supported transit routing combinations, see the Cisco APIC and Transit Routing document at the following
URL:
http://www.cisco.com/c/en/us/support/cloud-systems-management/
application-policy-infrastructure-controller-apic/tsd-products-support-series-home.html
Recommended Configuration Procedure for Transit Routing
From a routing perspective, the Cisco ACI fabric does not function as a single logical router, but rather as a
network of routers that are connected to an MP-BGP core. All routes learned from an L3Out are leaked into
MP-BGP and then redistributed to every leaf switch in the fabric where the VRF instance is deployed. If
another L3Out is configured on another leaf switch, those routes can be advertised back out the other L3Out.
This provides transit routing functionality to the Cisco ACI fabric. Transit routing is supported on the same
leaf switch or on different leaf switches and is supported for a number of different combinations, such as
OSPF to OSPF, BGP to OSPF, and EIGRP to static.
Both L3Outs are configured in the same VRF instance and use the same OSPF area ID, but are in different
OSPF domains. Routes learned on border leaf switch 1 in OSPF area 10 will appear as OSPF learned routes
on border leaf switch 1. These routes will appear as BGP learned routes on all other leaf switches in the fabric
where VRF1 is instantiated, including border leaf switch 2. The following output shows the OSPF learned
routes received on border leaf switch 1:
The bolded line is the external route that is learned from the L3Out (OSPF).
The following output shows the same route learned on border leaf switch 2, in which the route is learned
through MP-BGP:
BL-2# show ip route 10.100.100.0/24 vrf prod:ctx1
IP Route Table for VRF "prod:ctx1"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preferences/metric]
'%<string>' in via output denotes VRF <string>
The bolded lines show the route that is learned from the fabric (MP-BGP).
By default, Cisco ACI will not advertise routes learned from one L3Out back out another L3Out; that is,
Cisco ACI does not allow transit by default. Transit routing is controlled by creating export route control policies
for the L3Out. Export route control policies control which transit prefixes are redistributed into the L3Out
protocol. These policies are instantiated on the leaf switch as route maps and IP prefix-lists.
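For illustration, permitting the transit prefix 10.100.100.0/24 out another L3Out corresponds to an l3extSubnet with the export-rtctrl scope under that L3Out's external EPG. A minimal sketch; the tenant and object names are placeholders.

import requests

APIC = "https://apic.example.com"  # placeholder
s = requests.Session(); s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Export route control subnet: redistribute 10.100.100.0/24 out L3Out2.
payload = {"l3extSubnet": {"attributes": {"ip": "10.100.100.0/24",
                                          "scope": "export-rtctrl"}}}
s.post(f"{APIC}/api/mo/uni/tn-prod/out-L3Out2/instP-ExtEPG2.json",
       json=payload).raise_for_status()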
By looking at the OSPF process information on border leaf switch 2, you can see how this policy is instantiated
on border leaf switch 2 using redistribution with route-maps and IP prefix-lists:
BL-2# show ip ospf vrf prod:ctx1
Routing Process default with ID 1.1.1.103 VRF prod:ctx1
Stateful High Availability enabled
Supports only single TOS(TOS0) routes
Supports opaque LSA
Table-map using route-map exp-ctx-3047429-deny-external-tag
Redistributing External Routes from
static route-map exp-ctx-st-3047429
direct route-map exp-ctx-st-3047429
bgp route-map exp-ctx-proto-3047429
eigrp route-map exp-ctx-proto-3047429
The bolded lines of output show the redistribution of external routes from BGP and EIGRP.
BL-2# show route-map exp-ctx-st-3047429
route-map exp-ctx-st-3047429, permit, sequence 7801
Match clauses:
ip address prefix-lists: IPv6-deny-all IPv4-proto32771-3047429-exc-ext-inferred-export-dst
Set clauses:
tag 4294967295
The OSPF database on border leaf switch 2 shows that the prefix 10.100.100.0/24 is learned by redistribution
into OSPF and not as an intra-area prefix. Both OSPF L3Outs that are being deployed on different border leaf
switches use the same area ID, but are in different OSPF domains. Each border leaf switch is an ASBR that
redistributes fabric learned prefixes into the OSPF process that is local to that leaf switch.
BL-2# show ip ospf database 10.100.100.0 vrf prod:ctx1
OSPF Router with ID (1.1.1.103) (Process ID default VRF prod:ctx1)
The output shows that the route from the L3Out on border leaf switch 1 is added as a type 5 external LSA on border
leaf switch 2.
The following figure shows the same topology from a routing protocol view:
Figure 19: OSPF to OSPF Transit on Different Leaf Switches from a Routing Protocol View
The border leaf switches run both BGP (within the fabric) and OSPF for external connectivity. The mutual
redistribution is done on the border leaf switches.
An L3Out can only belong to one area; therefore, when connecting to different OSPF areas, different L3Outs
must be used. Cisco ACI still blocks transit routes between different L3Outs unless permitted by a policy, but
instantiation of this policy is different for an ABR. Cisco ACI blocks transit routes between different prefixes
using an OSPF area filter-list. The OSPF filter-list blocks OSPF type-3 LSAs.
Note The area filter-list implementation only filters type-3 LSAs. If external type-5 or type-7 (NSSA) LSAs are
learned from an OSPF L3Out on the ABR, these routes will be permitted to other areas connected to the ABR.
When export route control subnets are added to the L3Out, the IP prefix-list for the subnet will be added to
the route-map used for the filter-list as well as the redistribute command.
Number of active areas is 2, 2 normal, 0 stub, 0 nssa
Area (0.0.0.10) (Inactive)
Area has existed for 00:28:57
Interfaces in this area: 2 Active interfaces: 2
Passive interfaces: 1 Loopback interfaces: 1
SPF calculation has run 11 times
Last SPF ran for 0.000117s
Area ranges are
Area-filter in 'exp-ctx-proto-2949124'
Number of LSAs: 3, checksum sum 0x0
Area (backbone)
Area has existed for 03:14:11
Interfaces in this area: 2 Active interfaces: 1
Passive interfaces: 1 Loopback interfaces: 1
SPF calculation has run 21 times
Last SPF ran for 0.000234s
Area ranges are
Area-filter in 'exp-ctx-proto-2949124'
Number of LSAs: 2, checksum sum 0x0
The bolded lines are the route-maps that are used with the OSPF area filter.
BL-1# show route-map exp-ctx-st-2949124
route-map exp-ctx-st-2949124, permit, sequence 7801
Match clauses:
ip address prefix-lists: IPv6-deny-all IPv4-proto49155-2949124-exc-ext-inferred-export-dst
Set clauses:
tag 4294967295
Leaf-3# show ip prefix-list IPv4-proto49155-2949124-exc-ext-inferred-export-dst
ip prefix-list IPv4-proto49155-2949124-exc-ext-inferred-export-dst: 1 entries
seq 1 permit 10.1.1.0/24
Note When multiple OSPF L3Outs are configured on the same border leaf switch, they are configured under the
same OSPF process. Export route control subnets and public bridge domain and endpoint group subnets are
added to route-maps used by redistribution into OSPF. When a subnet is allowed out one OSPF L3Out on the
border leaf switch, it will apply to all OSPF L3Outs on the same border leaf switch. This is also true for
multiple EIGRP L3Outs on the same border leaf switch.
Note Before Cisco APIC, release 2.3(1f), transit routing was not supported within a single L3Out profile. In Cisco
APIC, release 2.3(1f) and later, you can configure transit routing within a single L3Out profile, with limitations;
for details, see About Transit Routing, on page 49.
Figure 22: Same OSPF Area Connected to the Same Border Leaf Switch
Each external router is connected to the same area and learns the same routing information. There is only one
L3Out, so route control policies are not needed and there are no issues from a routing perspective. All devices
that connect to the Cisco ACI fabric are placed into endpoint groups, including networks reachable through
an L3Out. The endpoint group classification for an L3Out is based on configuration policy (it is not based on
routing information). In this configuration, all peers are configured under the same L3Out and will belong to
the same external endpoint group. Even though they are in the same endpoint group, traffic will not be permitted
unless the prefix classifier is configured for the external endpoint group. This classifier is configured with the
External Subnets for the External EPG policy.
The external endpoint group classifier is a longest prefix match classifier. When the subnet 0.0.0.0/0 is
configured for the external endpoint group classifier, this will match all traffic between different L3Outs.
There is a special case for traffic within the same L3Out. In this case, an implicit deny is configured for traffic
between external devices within the same L3Out when using the 0.0.0.0/0 prefix. To allow traffic forwarding
through the border leaf switch for traffic within the same L3Out, a more specific prefix classifier must be
used.
In the following example, the Cisco ACI border leaf switch will be used for transit traffic between the
192.168.1.0/24 and 172.16.1.0/24 networks.
Figure 23: Border Leaf Switch Used for Transit Traffic Between Two Networks
The 0.0.0.0/0 prefix cannot be used as a classifier here due to the default deny rule for this prefix within
the same L3Out. Therefore, you must create two subnets that match the external networks.
The following command output verifies the EIGRP to EIGRP transit routing configuration:
BL-2# show ip bgp 10.40.1.0/24 vrf hr:ctx1
BGP routing table information for VRF hr:ctx1, address
family IPv4 Unicast
BGP routing table information for 10.40.1.0/24, version 44
Paths: (1 available, best #1)
Flags: (0x80c0002) on xmit-list, is not in urib, exported
vpn: version 550, (0x100002) on xmit-list
Multipath: eBGP iBGP
The bolded line shows that the EIGRP AS is carried in the BGP extended community.
The route entry on the external router shows the prefix as an internal EIGRP prefix:
wan-router# show ip route 10.40.1.0 vrf wan
IP Route Table for VRF "wan"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preferences/metric]
'%<string>' in via output denotes VRF <string>
In some cases, you might not want this routing loop protection. When a transit route from one VRF instance
is advertised back into another VRF instance with OSPF or EIGRP, the route will be blocked. The following
figure shows that VRF:PN4 is a transit VRF instance that is advertising routes learned from BGP out of OSPF:
Figure 25: Transit VRF Instance That is Advertising Routes Learned from BGP out of OSPF
These routes will be tagged with tag 4294967295. The L3Out is connected through a firewall back to another
L3Out in a different VRF instance. This L3Out also uses the same route tag policy and will block these routes.
The route tag policy can be changed per VRF instance. To change the route tag policy, configure a new route
tag policy under protocol policies and assign this policy to the VRF instance.
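A rough sketch of that change through the REST API, assuming the l3extRouteTagPol class and the fvRsCtxToExtRouteTagPol relation on the VRF (verify both names against your APIC release); the tenant, VRF, and tag values are placeholders.

import requests

APIC = "https://apic.example.com"  # placeholder
s = requests.Session(); s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Create a route tag policy with a non-default tag under tenant "prod" ...
s.post(f"{APIC}/api/mo/uni/tn-prod.json",
       json={"l3extRouteTagPol": {"attributes": {"name": "RTP-PN4",
                                                 "tag": "12345"}}})
# ... and attach it to VRF "PN4" so its L3Outs no longer use 4294967295.
s.post(f"{APIC}/api/mo/uni/tn-prod/ctx-PN4.json",
       json={"fvRsCtxToExtRouteTagPol": {"attributes": {
           "tnL3extRouteTagPolName": "RTP-PN4"}}})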
L3Out Ingress Policy Enforcement
About L3Out Ingress Policy Enforcement
Note An endpoint group has a unique class ID. The source class and destination class refer only to relative policy
enforcement (which direction is being enforced).
Endpoint group classification occurs when a packet arrives on the leaf. For endpoints within the fabric, the
classification can be VLAN, VXLAN, MAC address, IP address, VM attribute, and so on. For traffic arriving
from an L3Out connection, traffic is classified based on network and mask.
The policy rules (scope, source class ID, dest class ID, and filter) are programmed on the leaf switches in
ternary content addressable memory (TCAM).
When a policy is enforced between endpoint groups, it can be enforced on the ingress leaf switch or on the
egress leaf switch for internal endpoint groups. In ACI releases prior to 1.2(1), the policy for traffic from an
internal endpoint group to an external endpoint group (L3Out endpoint group) is enforced on the egress leaf
switch where the L3Out is deployed. A common network design has a large number of leaf switches connecting
to the compute environment, but only a pair of border leaf switches. Because internal-to-external policy
enforcement is done on the egress switch (border leaf), this can create a resource (TCAM) bottleneck on the
border leaf switch.
Figure 27: Fabric Policy Application Before Release 1.2(1) for Endpoint Group-to-Outside Mapping
The ingress policy enforcement feature is a configurable option to enable ingress policy enforcement for
internal to external communications. With ingress policy enforcement, the destination class lookup for the
destination prefix can be done on the ingress leaf switch. This distributes the enforcement of the policy across
more switches since there are typically more compute leaf switches than border leaf switches, reducing the
likelihood of a bottleneck at the border leaf switches.
Prerequisites for L3Out Ingress Policy Enforcement
Recommended Configuration Procedure for L3Out Ingress Policy Enforcement
The following procedure creates a VRF that uses ingress policy enforcement:
Procedure
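Because ingress enforcement is a property of the VRF (fvCtx), the equivalent REST call is small. A minimal sketch; the tenant and VRF names are placeholders.

import requests

APIC = "https://apic.example.com"  # placeholder
s = requests.Session(); s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# VRF "ctx1" in tenant "prod", policy enforced at the ingress leaf switch.
payload = {"fvCtx": {"attributes": {"name": "ctx1",
                                    "pcEnfPref": "enforced",
                                    "pcEnfDir": "ingress"}}}
s.post(f"{APIC}/api/mo/uni/tn-prod.json", json=payload).raise_for_status()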
Recommended Configuration Procedure for Setting MTU
MTU mismatches do not prevent BGP or EIGRP adjacencies from being established, but you should still
match MTU values for these peering adjacencies.
Procedure
If the ACI border leaf switch is sending the higher MTU value, then the MTU ignore
setting should be configured on the remote device.
The MTU ignore feature can be used to establish OSPF peer adjacencies when MTU values are mismatched
and cannot be modified. This does not affect Path MTU discovery behavior or traffic passing through the
border leaf switch. This traffic can still experience fragmentation due to an MTU mismatch. You should match
MTU values and only use MTU Ignore in cases where matching is not possible.
The following procedure enables the MTU Ignore setting.
Procedure
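For OSPF, the MTU Ignore knob is part of the OSPF interface policy (ospfIfPol) that the L3Out interface profile references. A minimal sketch; the tenant and policy names are placeholders, and any other control flags you need must be kept in the same comma-separated ctrl list.

import requests

APIC = "https://apic.example.com"  # placeholder
s = requests.Session(); s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# OSPF interface policy that ignores MTU mismatches during adjacency.
payload = {"ospfIfPol": {"attributes": {"name": "OSPF-MTU-Ignore",
                                        "ctrl": "mtu-ignore"}}}
s.post(f"{APIC}/api/mo/uni/tn-prod.json", json=payload).raise_for_status()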
Shared L3Outs
About Shared L3Outs
Using a shared L3Out is an option for a multitenant configuration where each tenant is isolated from each
other, but might require access to external shared services, such as DHCP, DNS, and syslog. The Cisco
Application Centric Infrastructure (ACI) fabric is very flexible and provides the following options for
configuring access to external shared services (shared L3Outs):
1. Create a VRF, bridge domains, and L3Out in the common tenant. Create endpoint groups in individual
tenant spaces. In this configuration, tenants share the same VRF and cannot have overlapping IP addresses.
All objects created under the common tenant are also visible to each tenant.
Figure 29: Shared L3Out Option 1: Bridge Domain, Subnet, and L3Out Under the Common Tenant
2. Create a VRF and L3Out in the common tenant. Create bridge domains and endpoint groups in individual
tenant spaces. In this configuration, tenants share the same VRF and cannot have overlapping IP addresses.
The bridge domain is configured under the individual tenant spaces and is not visible to other tenants.
Figure 30: Shared L3Out Option 2: Bridge Domain and Subnet Under a User Tenant
3. Create separate tenants with separate VRF instances, bridge domains, and endpoint groups. Each tenant
has its own VRF instance and can use overlapping IP addresses, as long as the overlapping subnets are not
leaked into the common tenant. A contract is exported from the tenant that is providing the shared service.
Route leaking between VRF instances is performed to provide connectivity between the consumer and
provider.
Figure 31: Shared L3Out Option 3: VRF, Bridge Domain, and Subnet Under a User Tenant
Configuration Example for Shared L3Outs Using the GUI
Figure 32: External Shared Service That is Accessed Through an L3Out in a Tenant
Procedure
Step 12 In the Work pane, choose Actions > Add Consumed Contract Interface.
Step 13 In the Add Consumed Contract Interface dialog box, fill in the fields as required, except as specified below:
a) For the Contract Interface drop-down list, choose the contract interface to export to the consumer tenant.
Step 14 Click Submit.
The L3Out provides the contract and the consumer tenant consumes the contract interface.
Step 15 In the Navigation pane, choose Tenant tenant_name > Application Profiles > application_profile_name >
Application EPGs > application_EPG_name > Contracts.
Choose the application profile and application endpoint group that you created in this procedure.
Step 16 In the Work pane, choose Actions > Add Provided Contract.
Step 17 In the Add Provided Contract dialog box, fill in the fields as required, except as specified below:
a) For the Contract drop-down list, choose the contract that you created in this procedure.
Step 18 Click Submit.
In the Work pane, you can see that the consumer is using the contract interface.
Step 19 In the Navigation pane, choose Tenant tenant_name > Application Profiles > application_profile_name >
Application EPGs > application_EPG_name > Subnets.
Choose the application profile and application endpoint group that you created in this procedure.
Step 20 In the Work pane, choose Actions > Create EPG Subnet.
Step 21 In the Create EPG Subnet dialog box, fill in the fields as required, except as specified below:
a) For the Private to VRF check box, remove the check.
You do not want to advertise the subnet to the L3Out in its own VRF instance.
b) For the Advertised Externally check box, add a check.
You want to advertise the subnet to the L3Out outside of its own VRF instance.
c) For the Shared between VRFs check box, add a check.
You want to leak the subnet to the VRF instance in which the provider endpoint group resides.
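The three check boxes in the step above map to scope tokens on the fvSubnet object. A minimal sketch of an equivalent REST call; the tenant, application profile, EPG, and subnet values are placeholders.

import requests

APIC = "https://apic.example.com"  # placeholder
s = requests.Session(); s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Subnet advertised externally and shared between VRFs (Private to VRF
# unchecked): scope carries the "public" and "shared" tokens.
payload = {"fvSubnet": {"attributes": {"ip": "192.168.10.1/24",
                                       "scope": "public,shared"}}}
s.post(f"{APIC}/api/mo/uni/tn-prod/ap-app1/epg-web.json",
       json=payload).raise_for_status()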
Best Practices for Assigning L3Out Router IDs
The Logical Node Profile managed object is used to identify the nodes (leaf switches) where the L3Out
will be instantiated. The Node managed object is where the node and router ID are configured.
Dynamic routing protocols (OSPF, EIGRP, and BGP) all use the same decision process when assigning a
router ID:
1. Use the router ID manually configured under the protocol configuration (OSPF, EIGRP, or BGP).
2. If no router ID is configured, use the highest IP address among loopback interfaces that are up.
3. If no loopback interfaces are configured, use the highest IP address among physical interfaces that are up.
In ACI, the router ID that is specified in the node profile is always configured as a manual router ID under
the routing protocol that is configured for the L3Out. Therefore, the first option in the router ID selection process
is always used.
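In the object model, the router ID is set on the node attachment under the logical node profile. A minimal sketch, assuming the l3extRsNodeL3OutAtt class; the tenant, L3Out, node profile, node, and router ID values are placeholders.

import requests

APIC = "https://apic.example.com"  # placeholder
s = requests.Session(); s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Node 201 with a manual router ID; no loopback is created from the
# router ID (recommended for OSPF and EIGRP).
payload = {"l3extRsNodeL3OutAtt": {"attributes": {
    "tDn": "topology/pod-1/node-201",
    "rtrId": "1.1.1.201",
    "rtrIdLoopBack": "no"}}}
s.post(f"{APIC}/api/mo/uni/tn-prod/out-L3Out1/lnodep-NodeProfile.json",
       json=payload).raise_for_status()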
• You should not create two separate objects, such as a router ID and a loopback interface, with the same IP
address.
The node profile also has an option to create a loopback interface with the same value as the router ID.
This option is only needed for BGP if you are establishing BGP peering sessions from a loopback interface
with the router ID value. For OSPF and EIGRP, you should disable this option.
Note If the L3Out will be used for Layer 3 multicast (PIM enabled), then always put
a check in the Use Router ID as Loopback Address check box.
• Create a loopback interface for BGP multi-hop peering between loopback addresses.
For BGP, this option can be enabled if you are peering to the loopback address (BGP multi-hop) and are
using the router ID address for the peering. You are not required to peer to the router ID address. You
can also establish BGP peers to a loopback address that is not the router ID. For this configuration, disable
the Use Router ID as Loopback Address option and specify a loopback address that is different than
the router ID.
• Each node (leaf switch) should use a unique router ID.
Do not use the same router ID on different nodes in a single routing domain. Duplicate router IDs can
cause routing issues. When configuring L3Outs on multiple border leaf switches, each switch (node
profile) should have a unique router ID.
• You should use per-VRF instance router IDs.
• Use the same router ID value for all L3Outs on the same node within the same VRF instance.
When configuring multiple L3Outs on the same node and the same VRF instance, you must use the same
router ID value on all L3Outs. Using different router IDs is not supported. A fault will be raised if different
router IDs are configured for L3Outs on the same node. If you have multiple VRF instances, you can
have per-VRF instance router IDs on the same node.
• Configure a router ID for static L3Outs.
The router ID is a mandatory field for the node policy and must be specified even if no dynamic routing
protocol is used for the L3Out. When creating an L3Out for a static route, you must still specify a router
ID value. The Use Router ID as Loopback Address check box should be unchecked, and the same rules
apply regarding the router ID value: use the same router ID for all L3Outs on the same node in the same
VRF instance, and a different router ID for each node in the same VRF instance.
The router ID values should be unique within a routing domain. ACI supports separate Layer 3 domains (VRF
instances). The router ID should be unique for each node in a VRF instance. The same router ID value
can be used on the same node in different VRF instances. However, if the VRF instances are joined to the same
routing domain by an external device, then the same router ID should not be used in the different VRF
instances. The following example shows two VRF instances joined to the same Layer 3 domain
through an external firewall. In this case, the router IDs should be different in each VRF instance.
Figure 34: VRF Instances Joined to the Same Layer 3 Domain Through an External Firewall
Note The router ID for OSPF and EIGRP is a 32-bit number represented in the IP address format. In general,
OSPF and EIGRP allow router ID values that are not valid IPv4 addresses, such as 0.0.0.1, while the router ID
for BGP must be a valid IPv4 address. However, ACI only supports valid IPv4 unicast addresses for router IDs,
regardless of the protocol used.
Procedure
Multiple External Connectivity
Guidelines and Limitations for Multiple External Connectivity
The decision to use a single L3Out or different L3Outs depends on the type of connection. The L3Out managed object is the top-level object for
the L3Out and is the container for the L3Out logical node profiles and interface profiles.
General Guidelines for Multiple External Connectivity through Multiple or Single L3Out Objects
• The L3Out object defines the protocol and some protocol parameters that will be used by all nodes and
interfaces configured under the L3Out.
• For OSPF L3Outs, the OSPF area is defined at the L3Out level. If an OSPF L3Out will connect to
multiple external devices on the same border leaf switch, one L3Out should be configured.
• Similarly, the EIGRP AS is configured at the L3Out level. If connecting to multiple EIGRP devices
in the same AS from the same leaf, one L3Out should be used.
• A different L3Out must be used when connecting to OSPF neighbors in different areas or when
connecting to EIGRP neighbors in a different AS.
• For BGP L3Outs the peer-connectivity profile is configured under the node (for peering to loopback
addresses) or under the physical interface (for direct connection peering). Multiple BGP peers can
be defined under the same L3Out.
• Another decision point for single versus multiple L3Outs is the type of physical interface.
• If connecting to multiple external devices on the same VLAN (same subnet), this connection would
use an L3Out with SVI interfaces.
• This connection will typically span multiple leaf switches for redundancy.
• These connections can be on physical ports, port channels, or virtual port channels (vPCs).
• When an L3Out is configured with an SVI, this creates an external bridge domain (VXLAN
VNI) that is extended across the different switches where the L3Out is deployed.
• The VLAN/external bridge domain must be configured on a single L3Out. Different L3Outs cannot
use the same SVI VLAN/external bridge domain.
• When connecting L3Outs to routed or routed sub-interface links, the choice of whether to use one L3Out
or multiple L3Outs depends on the protocol and security policy requirement.
When BGP is transported over OSPF for BGP multi-hop connections, the OSPF process that is created
on the leaf switch is only used to learn the route to the remote BGP peer. OSPF routes in this case are not redistributed
into MP-BGP.
BGP over OSPF and regular OSPF L3Outs are not supported on the same leaf switch.
If traffic from L3Out-2 should be blocked from accessing the web EPG, the best practice is to use non-overlapping
prefixes for the external EPGs and to add classification only for the networks that should be permitted to access
that service.
Recommended Configuration Procedure for Multiple External Connectivity
Procedure
b) In the Path field, click the drop-down arrow to specify the node and interface to add to the interface
profile.
c) In the IPv4 Primary /IPv6 Preferred field, enter the IP address and subnet mask assigned to the interface.
d) Specify any other settings that apply, then click OK.
e) To specify additional routed interface entries, repeat steps a through d.
Step 8 After you finish adding routed interface entries, complete all appropriate fields in the Create Interface Profile
dialog box and click OK to save the interface profile.
Step 9 After you save the interface profile, complete all appropriate fields in the Create Node Profile dialog box
and click OK to save the node profile.
Step 10 After you save the node profile, complete all appropriate fields in the Create Routed Outside dialog box and
click OK to save the L3Out object.
The resulting L3Out object supports external connectivity through multiple interfaces as specified through
node profile and interface profile association.
CHAPTER 4
Security Design
• Microsegmentation, on page 83
Microsegmentation
About Microsegmentation
Cisco Application Centric Infrastructure (ACI) architecture was designed with multitenancy in mind. ACI
has built-in segmentation (with the help of endpoint groups and contracts) and security as part of the architecture,
but customers want the ability to secure and segment their data centers and their physical and virtual workloads
for greater control and manageability. To provide more granular and dynamic segmentation and to enhance
security inside of the data center, ACI release 1.1(1) added support for microsegmentation.
Interface and VLAN/VXLAN IDs are used for endpoint group classification. In addition, you can use more
granular endpoint group derivation based on MAC, IP, or VM information. Even if endpoints are connected
to the fabric with a VLAN/VXLAN ID on the same port, you can provide a different security policy for each
one. This section describes these microsegmentation capabilities (intra-endpoint group isolation, IP-based
endpoint group, and uSeg endpoint group) and how to configure them.
Intra-Endpoint Group Isolation
Support for intra-endpoint group isolation is as follows:
• Intra-endpoint group isolation for a VMware vDS or physical domain: release 1.2(2x) or later. A legacy mode bridge domain is not supported.
• Intra-endpoint group isolation for an AVS domain: release 1.3(1x) or later. No limitations.
Note Only use this feature when the VRF is in enforced mode, because the feature relies on the correct isolation
based on the deployment of contracts.
For example, assume that you have three endpoints: two are in the Client endpoint group, while the other
endpoint is in the Web endpoint group. If there is a contract between the endpoint groups, they can talk to each
other, as shown in the following figure:
If you enable intra-EPG isolation on the Client endpoint group, the endpoints in that endpoint group cannot
talk to each other, but inter-EPG communication is still permitted if there is a contract, as shown in the following
figure:
Figure 36: Intra-EPG Isolation with a Contract
Callout 1: Endpoints in the same endpoint group cannot communicate with one another.
The backend uses PVLAN (private VLAN). After you enable intra-EPG isolation on the endpoint group, the
APIC changes the vDS and port group configuration and pushes the policy to the physical leaf switch, which prevents
communication between endpoints in the same endpoint group. The following screenshots show this
configuration:
By default, you do not need to specify a VLAN encapsulation ID for port groups. The APIC chooses a VLAN
from the dynamic VLAN pool that is associated with the VMM domain.
When you use PVLAN, if you have intermediate switches, such as UCS fabric interconnect, between the
server and ACI leaf switch, you must configure PVLAN on the intermediate switches. That means that you
must confirm which VLAN ID will be used. If you add a static VLAN pool in the VMM domain, you can
specify the VLAN ID from the static VLAN pool.
uSeg Endpoint Group for a Physical Domain
In the figure, both Server-A and Server-B can connect to both Storage-A and Storage-B.
With an IP-based endpoint group, you can use an IP address for endpoint group classification. For example,
192.168.1.1 is in endpoint group Storage-A and 192.168.1.2 is in endpoint group Storage-B even if they are
in the same VLAN and interface. The different endpoint groups enable you to apply different security policies
to each endpoint.
In the figure, Server-A can only connect to Storage-A, while Server-B can only connect to Storage-B.
To create this configuration, you must create a base endpoint group "Storage" and associate it with a physical
domain with static bindings (path or leaf switches). Thus, both 192.168.1.1 and 192.168.1.2 are in the base
endpoint group.
Next, create the uSeg endpoint groups "Storage-A" and "Storage-B", which are also associated with a physical
domain with static bindings (leaf switches). You can set multiple uSeg attributes in the uSeg endpoint groups.
This example uses 192.168.1.1/32 for “Storage-A” and 192.168.1.2/32 for “Storage-B”, but you can specify
a larger subnet, such as 172.16.1.0/24.
You must follow these configuration guidelines for the bridge domain and endpoint group settings (a configuration sketch follows the list):
• The base endpoint group and uSeg endpoint group must be in the same bridge domain.
• The bridge domain subnet is required and unicast routing must be enabled because IP-based endpoint
group classification applies only for routed traffic.
• Deployment immediacy must be Immediate on the uSeg endpoint group.
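The following minimal sketch creates the Storage-A uSeg endpoint group from the example through the REST API, using the attribute-based EPG classes (fvAEPg with isAttrBasedEPg, fvCrtrn, and fvIpAttr); the tenant, application profile, and bridge domain names are placeholders.

import requests

APIC = "https://apic.example.com"  # placeholder
s = requests.Session(); s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# uSeg EPG "Storage-A" matching IP 192.168.1.1/32, in the same bridge
# domain as the base EPG (required, per the guidelines above).
payload = {"fvAEPg": {"attributes": {"name": "Storage-A",
                                     "isAttrBasedEPg": "yes"},
                      "children": [
    {"fvRsBd": {"attributes": {"tnFvBDName": "Storage_BD"}}},
    {"fvCrtrn": {"attributes": {"name": "default"}, "children": [
        {"fvIpAttr": {"attributes": {"name": "ip-storage-a",
                                     "ip": "192.168.1.1/32"}}}]}}]}}
s.post(f"{APIC}/api/mo/uni/tn-prod/ap-storage.json",
       json=payload).raise_for_status()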
uSeg Endpoint Group for a VMM Domain
In the figure, the virtual machine "Web03" is classified in a uSeg EPG, so it cannot communicate with the
other virtual machines.
Because the uSeg endpoint group can have a contract with the base endpoint group, another use case is migrating
an endpoint between different environments. Assume that you are setting up a new application on a server
in a test environment and the virtual machine "Test-Webxxx" is in the "Test-Web" endpoint group. Once
the virtual machine is ready, you change the virtual machine name to "Prod-Webxxx," which moves the
virtual machine to the Prod-Web endpoint group.
The following figure illustrates this scenario:
Figure 40: uSeg endpoint group Use Case: Migration
In the figure, the test network and production network are isolated. After changing the virtual machine name,
the virtual machine is moved to the production network.
To create this configuration, you must create a base endpoint group and a uSeg endpoint group, both
associated with the VMM domain. For example, assume the virtual machine "Win7-1" is in the base endpoint group
"Client" and "Win2012-Web1" is in the base endpoint group "Web."
Next, create the uSeg endpoint group "Win2012," which is also associated with the same VMM domain and
is specified by a virtual machine attribute. In this example, if the virtual machine name contains "2012," the
virtual machine will be in the uSeg endpoint group. Once Win2012-Web1 is moved to the uSeg endpoint group, it does not appear
in the base endpoint group "Web." If you remove the uSeg attribute, the virtual machine moves back to the
base endpoint group "Web."
You can define multiple types of attributes in the uSeg endpoint group. Each attribute type has a precedence that determines which attribute takes effect when multiple attributes match.
When you define string, you can choose one of the following operator types:
• Contains
• Ends With
• Equals
• Starts With
Additional References for Microsegmentation
CHAPTER 5
Virtualization Design
• VMM Integration with UCS-B, on page 91
• VMM Integration with AVS or VDS, on page 93
• VMM Domain Resolution Immediacy, on page 96
• OpenStack and Cisco ACI, on page 98
Guidelines and Limitations for VMM Integration with UCS-B
Procedure
Step 1 All intermediate devices should have the dynamic block range of VLANs allowed. In the case of UCS, this
means that the user must still navigate to UCS Manager and allow the range of configured VLANs on all
vNICs and uplink ports that are going to the ACI fabric.
Example:
The design calls for using VLANs 100-200 for VMM integration with UCS-B. The user must go into UCSM
and perform the following tasks:
a) Create VLANs 100-200.
b) Allow the VLANs on the Uplink interfaces.
c) Prune the VLANs from undesired uplink interfaces.
d) Allow the VLANs on the vNICs of all hosts that will be integrated.
Step 2 In the APIC GUI, create a MAC-pinning port channel policy.
a) On the menu bar, choose Fabric > Access Policies.
b) In the Navigation pane, choose Interface Policies > Policies > Port Channel Policies.
c) In the Work pane, choose Actions > Create Port Channel Policy.
d) In the Create Port Channel Policy dialog box, fill out the fields as necessary.
This policy must be associated to the attachable access entity profile as a vSwitch port channel policy to
take effect. This only changes the vSwitch port channel policy, not the port channel policy that is associated
with the physical interfaces that are utilized by the end hosts.
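The MAC pinning policy itself is a port channel policy (lacpLagPol) with the mac-pin mode. A minimal REST sketch; the policy name is a placeholder.

import requests

APIC = "https://apic.example.com"  # placeholder
s = requests.Session(); s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Port channel policy using MAC pinning instead of LACP.
payload = {"lacpLagPol": {"attributes": {"name": "MAC-Pinning",
                                         "mode": "mac-pin"}}}
s.post(f"{APIC}/api/mo/uni/infra.json", json=payload).raise_for_status()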
Step 3 Associate the port channel policy to the attachable access entity profile as a vSwitch port channel policy.
a) On the menu bar, choose Fabric > Access Policies.
b) In the Navigation pane, choose Global Policies > Attachable Access Entity Profiles > AAEP_name.
c) In the Work pane, choose Actions > Config vSwitch Policies.
d) In the Config vSwitch Policies dialog box, fill out the fields as necessary.
Verifying the VMM Integration with UCS-B Configuration
Procedure
Step 1 Verify the node neighbors by using SSH to connect to the leaf node and running either the show cdp neighbors
or show lldp neighbors command, depending on what configuration is used within this deployment.
Step 2 Verify neighborship directly on the fabric interconnects to ensure that the hypervisor vNICs are forming a
neighborship through CDP or LLDP.
Step 3 Verify compute node VLAN programming by using SSH to connect to the node and running the show vlan
extended command.
Prerequisites for VMM Integration with AVS or VDS
Procedure
Verifying the vNIC Status
Procedure
VMM Domain Resolution Immediacy
The value in having a stricter resolution immediacy is that various configurations can be staged from an
APIC configuration view without having to worry about resource utilization until truly needed (VM attachment
to a port group). However, there are certain virtualization scenarios where this is not ideal and the
Pre-Provision setting is truly needed. One such scenario is migrating a hypervisor management VMK over
to the VDS from a standard vSwitch. Another scenario would be if the NICs of the attached hosts do not
support either CDP or LLDP.
Procedure
VLAN programming can be verified by logging into the compute node CLI and running the following
command:
show vlan extended
Depending on the immediacy, certain criteria must be met before you will see the VLAN programmed on any
interfaces.
Additional References for VMM Domain Resolution Immediacy
About OpenStack and Cisco ACI
Figure 43: Logical OpenStack Network Connectivity with Distributed Neutron Services
Note The management/API network for OpenStack can be connected to servers using an additional virtual
NIC/sub-interface on a common uplink with tenant networking to the ACI fabric, or by way of a separate
physical interface.
Prerequisites for OpenStack and Cisco ACI
The following OpenStack resources map to APIC resources:
Subnet → Subnet
Router → Contract
Policy Action → --
Guidelines and Limitations for OpenStack and Cisco ACI
Scalability Guidelines
There is a 1:1 correlation between the OpenStack tenant and the ACI tenant. For each OpenStack tenant,
the plugin automatically creates an ACI tenant named according to the following convention:
apic_system_id_openstack_tenant_name
You should consider the scalability parameters for supporting the number of required tenants.
Calculate the fabric scale limits for endpoint groups, bridge domains, tenants, and contracts before deployment.
These limits constrain the number of tenant/project networks and routers that can be created in OpenStack.
There are per-leaf and per-fabric limits, so make sure to check the scalability parameters for the deployed release
before deployment. A GBP deployment can consume twice as many endpoint groups and bridge
domains as ML2 mode. The following tables list the Application Policy Infrastructure Controller (APIC)
resources that are needed for each OpenStack resource in GBP and ML2 configurations.
L3 Policy → 1 context
Ruleset → 1 contract
Router → 1 contract
Availability Guidelines
For redundancy, use bonded interfaces (vPCs) by connecting two interfaces to two leaf switches and creating a
vPC in ACI.
You should deploy redundant OpenStack controller nodes to avoid a single point of failure.
The external network should also be designed to avoid a single point of failure and service interruption.
For information about external connectivity with the OpFlex plugin, see the Cisco ACI with OpenStack OpFlex
Architectural Overview document:
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/openstack/b_ACI_with_OpenStack_
OpFlex_Architectural_Overview.html
Physical Interfaces
OpFlex uses the untagged fabric interface as an uplink trunk in VLAN mode. This means the fabric interface
cannot be used for PXE, because PXE usually requires an untagged interface. If you require PXE in a VLAN
mode deployment, you must use a separate interface for PXE. This interface can be connected through ACI
or an external switch. This issue is not present in VXLAN mode, since tunnels are created using the tagged
interface for the infra VLAN.
Blade servers
When deploying on blade servers, you must make sure that there is no intermediate switch between the fabric
and the physical server interfaces. Check the OpenStack ACI plugin release notes to make sure the configuration
is supported. At the time of this writing, there is limited support for B-Series blade servers, and the support is
limited to VLAN mode only.
Verifying the OpenStack Configuration
Procedure
Step 1 Verify that a VMM domain was created for the OpenStack system ID defined during installation. The nodes
connected to the fabric, running OpFlex agent, should be visible under Hypervisors. The virtual machines
running on the hypervisor should be visible upon selecting that hypervisor. All networks created for this tenant
should also be visible under the DVS submenu and selecting the network should show you all endpoints
connected to that network.
Step 2 Look at the health score and faults for the entity to verify correct operation. If the hypervisors are not visible
or show as disconnected, check the OpFlex connectivity.
Step 3 Verify that there is a tenant created for the OpenStack tenant/project. All of the networks created in OpenStack
should show up as endpoint groups and corresponding bridge domains. Choose the Operational tab for the
endpoint group to show all of the endpoints for that endpoint group.
Step 4 Choose the Health Score tab and Faults tab to make sure that there are no issues.
Configuration Examples for OpenStack and Cisco ACI
In the configuration file, the optimized metadata service is disabled by default. To enable the optimized
metadata, add the following line:
enable_optimized_metadata = True
For more information, see the Cisco ACI with OpenStack OpFlex Deployment Guide for your distribution
(Ubuntu or Red Hat):
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/openstack/b_ACI_with_OpenStack_
OpFlex_Deployment_Guide_for_Red_Hat.html
The host_pool_cidr defines the SNAT subnet. The floating IP subnet is defined by creating an external
network in Neutron, or an external policy in GBP. The name of the external network or policy should use the
same name as "apic_external_network" defined in the file (in this case "DC-Out").
It is possible to disable NAT by adding enable_nat = False in this section. You can have multiple
external networks using different L3Outs on ACI, and have a mix of NAT and non-NAT external networks.
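Putting these options together, the external network section of the plugin configuration file might look like the following sketch. The section header format and the CIDR value are assumptions; confirm them against the deployment guide for your distribution.

[apic_external_network:DC-Out]
host_pool_cidr = 10.1.0.0/24
enable_nat = False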
For more information on external network configuration, see the Cisco ACI with OpenStack OpFlex Deployment
Guide for your distribution (Ubuntu or Red Hat):
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/openstack/b_ACI_with_OpenStack_
OpFlex_Deployment_Guide_for_Red_Hat.html
The above pool will be used to allocate networks for created policy groups. You must make sure that the pool
is large enough for the intended number of groups.
CHAPTER 6
Layer 4 to Layer 7 Design
• Service Graphs and Layer 4 to Layer 7 Services Integration, on page 109
• Firewall Service Graphs, on page 113
• Service Node Failover, on page 117
• Service Graphs with Multiple Consumers and Providers, on page 119
• Reusing a Single Layer 4 to Layer 7 Device for Multiple Service Graphs, on page 125
• Service Graphs with Route Peering, on page 128
• The Common Tenant and User Tenants, on page 135
You might find the service graph useful if, for example, you want to create a portal from which administrators
can create and decommission network infrastructure. The portal includes the configuration of firewalls and
load balancers. In this case, a service graph with managed mode can automate the configuration of the firewall
and load balancers and expose the firewall and load balancers to the portal using the Application Policy
Infrastructure Controller (APIC) API. To use a service graph with managed mode, you need a device package
for the service node.
Layer 4 to Layer 7 Services Integration Options
Callout 1: Configure the Cisco Application Centric Infrastructure (ACI) fabric for the Layer 4 to Layer 7
service appliance.
With the service graph with managed mode, the configuration of the Layer 4 to Layer 7 device is part of the
configuration of the entire network infrastructure. You must consider the security and load balancing rules at
the time that you configure network connectivity for the Layer 4 to Layer 7 device. This approach is different
from that of traditional service insertion in that if you do not use the service graph, you can configure the
security and load balancing rules at a different time than when you configure network connectivity.
If, because of your current operational model, you prefer to manage the configuration of the firewalls and
load balancers by using an existing method, such as the CLI, GUI, or API of the service device directly, a
service graph with unmanaged mode is a good option. Since the APIC does not configure the service node
itself, a device package is not required for unmanaged mode.
Figure 45: Service Graph with Unmanaged Mode
Callout 1: Configure the Cisco Application Centric Infrastructure (ACI) fabric for the Layer 4 to Layer 7
service appliance.
When to Use a Service Graph for Layer 4 to Layer 7 Services Integration
If all that you need is a topology with a perimeter firewall that controls the access to the data center from
external servers, and if this firewall is not decommissioned and provisioned again periodically, then a service
graph is not necessary. You can create endpoint groups for firewall interfaces and configure the contracts so
that the client endpoint can access the firewall external interface and the firewall internal interface can access
the web endpoint. In this configuration, communication between the client and web occurs through the firewall,
as shown in the following figure:
Figure 46: No Service Graph (Using an Endpoint Group as a Service Node)
When choosing whether to use a service graph or traditional bridge domain stitching, you must take into
account the following points:
• Do you need the firewall and load balancers to be configured dynamically through the Application Policy
Infrastructure Controller (APIC), or should a different administrator configure them? In the second case,
you should not use the service graph with managed mode.
• Do you need to be able to commission, use, and decommission a firewall or a load balancer frequently,
as in a cloud service, or will these services be used in the same way for a long period of time? In the
second case, you might not see much advantage in using a service graph.
The following flowchart shows how to choose the service graph deployment method:
Figure 47: Service Graph Decision Flowchart
Firewall Service Graphs
• If you are using Cisco ASA, then ASAv must be deployed on an ESXi that is participating in a VMware
vDS VMM domain.
Recommended Configuration Procedure for a Firewall Service Graph
The procedure assumes that the VRF, bridge domains, and endpoint groups are already created.
Procedure
Step 15 In the Work pane, in the Configuration State section, ensure that the Device State is Stable before proceeding
with this procedure.
Step 16 In the Navigation pane, choose Tenant tenant_name > L4-L7 Services > L4-L7 Service Graph Template.
Step 17 In the Work pane, choose Actions > Create L4-L7 Service Graph Template.
Step 18 In the Create L4-L7 Service Graph Template dialog box, perform the following actions:
• In the Graph Name field, enter a name for the service graph template.
• Drag and drop the Layer 4 to Layer 7 device that you created from the Device Clusters section to the graph.
• For the Firewall radio buttons, click Routed or Transparent as appropriate for your desired configuration.
• In the Profile drop-down list, choose a function profile.
• In the Provider EPG / External Network drop-down list, choose the provider EPG where you want to
insert ASAv.
In the Contract Information section, you can either choose an existing contract where you want to attach
the service graph, or you can create a new one.
About Service Node Failover
Typically, use of a dedicated physical interface and a directly cabled pair of failover devices is recommended.
If failover interfaces are connected to each service device directly, the Cisco Application Centric Infrastructure
(ACI) fabric does not need to manage the failover network. If you prefer to have in-band failover traffic within
the ACI fabric, create an endpoint group for failover traffic.
An example Cisco ASA interface configuration for this scenario might look similar to the following (the
interface and its name are illustrative):
interface GigabitEthernet0/0
 nameif failover
 security-level 50
 ip address 192.168.1.101 255.255.255.0
If you use a physical appliance and you prefer in-band failover traffic, create an endpoint group for failover
using static bindings. This case is similar to the bare metal endpoint case.
If you use a virtual appliance and you prefer to use out-of-band failover traffic, create a port group manually
and use it. If you prefer in-band failover traffic, create an endpoint group for failover using a VMM domain,
which is similar to the virtual machine endpoint case.
Configuration Example of a Security Policy Before and After Deploying a Service Graph
Procedure
Step 1 In the advanced GUI, on the menu bar, choose Tenants > All Tenants.
Step 2 In the Work pane, double-click T1.
Step 3 In the Navigation pane, choose Tenant T1 > Networking > VRFs > VRF1.
Step 4 In the Work pane, search for the Segment field to find the VRF segment scope ID. Ensure that the ID is
correct.
Step 5 In the Navigation pane, choose Tenant T1 > Application Profiles > ANP > Application EPGs > EPG
Client.
Step 6 In the Work pane, search for the pcTag(sclass) field to find the endpoint group class ID. Ensure that the ID
is correct.
Step 7 In the Navigation pane, choose Tenant T1 > Application Profiles > ANP > Application EPGs > EPG
Web.
Step 8 In the Work pane, search for the pcTag(sclass) field to find the endpoint group class ID.
Step 9 In the CLI, run the show zoning-rule command. The leaf switches have a zoning rule that permits the traffic
between this source endpoint group and destination endpoint group.
Example:
Leaf1# show zoning-rule
Rule ID SrcEPG DstEPG FilterID operSt Scope Action Priority
======= ====== ====== ======== ====== ===== ====== ========
...
4115 49155 49154 default enabled 3112960 permit src_dst_any(8)
4103 49154 49155 default enabled 3112960 permit src_dst_any(8)
Step 11 To see the updated zoning rules, in the CLI, run the show system internal policy-mgr stats command.
Example:
Leaf1# show system internal policy-mgr stats | grep 3112960
Rule (4104) DN (sys/actrl/scope-3112960/rule-3112960-s-49155-d-16390-f-default)
Ingress: 0, Egress: 0, Pkts: 0 RevPkts: 0
Rule (4105) DN (sys/actrl/scope-3112960/rule-3112960-s-16390-d-49155-f-default)
Ingress: 0, Egress: 0, Pkts: 0 RevPkts: 0
Rule (4106) DN (sys/actrl/scope-3112960/rule-3112960-s-32772-d-49154-f-default)
Ingress: 0, Egress: 0, Pkts: 0 RevPkts: 0
Rule (4107) DN (sys/actrl/scope-3112960/rule-3112960-s-49154-d-32772-f-default)
Ingress: 0, Egress: 0, Pkts: 0 RevPkts: 0
The service node sits between the consumer and provider endpoint groups. If you have only one contract
subject to which the service graph is applied, there is no permit rule between the Client endpoint group (49154)
and the Web endpoint group (49155). In this case, the endpoint groups cannot talk to each other directly.
Step 12 If you want to allow specific traffic between the Client endpoint group and Web endpoint group even after
applying a service graph, use two subjects under the contract.
You must use policy-based redirect and the Application Delivery Controller (ADC) with SNAT as a virtual
IP address. The real server IP address can be on a different bridge domain and subnet.
As an example, assume that you have Subject1 and Subject2 under the contract with the following
configurations:
• Subject1—permit ICMP without a service graph
• Subject2—permit all with a service graph
In this case, the zoning rule allows ICMP traffic between the Client endpoint group (49154) and Web endpoint
group (49155).
Figure 53: ICMP Traffic Between the Client Endpoint Group and Web Endpoint Group
a) To see the zoning rules that allow ICMP traffic between the Client endpoint group (49154) and Web
endpoint group (49155), in the CLI, run the show system internal policy-mgr stats command.
Example:
Leaf1# show system internal policy-mgr stats | grep 3112960
...
Rule (4104) DN (sys/actrl/scope-3112960/rule-3112960-s-49155-d-16390-f-default)
Ingress: 0, Egress: 0, Pkts: 0 RevPkts: 0
Rule (4105) DN (sys/actrl/scope-3112960/rule-3112960-s-16390-d-49155-f-default)
Ingress: 0, Egress: 0, Pkts: 0 RevPkts: 0
Rule (4106) DN (sys/actrl/scope-3112960/rule-3112960-s-32772-d-49154-f-default)
Ingress: 0, Egress: 0, Pkts: 0 RevPkts: 0
Rule (4107) DN (sys/actrl/scope-3112960/rule-3112960-s-49154-d-32772-f-default)
Ingress: 0, Egress: 0, Pkts: 0 RevPkts: 0
Rule (4108) DN (sys/actrl/scope-3112960/rule-3112960-s-49154-d-49155-f-5)
Ingress: 0, Egress: 0, Pkts: 0 RevPkts: 0
Rule (4109) DN (sys/actrl/scope-3112960/rule-3112960-s-49155-d-49154-f-5)
Ingress: 0, Egress: 0, Pkts: 0 RevPkts: 0
Step 13 Before you apply the service graph, if you have multiple consumer and provider endpoint groups for the
contract, zoning rules are created for each consumer and provider endpoint group combination.
Figure 54: Zoning Rules for the Consumers Endpoint Groups and Provider Endpoint Groups
a) To see the zoning rules that are created, in the CLI, run the show system internal policy-mgr stats
command.
Example:
Leaf1# show system internal policy-mgr stats | grep 3112960
...
Rule (4122) DN (sys/actrl/scope-3112960/rule-3112960-s-49154-d-49159-f-default)
Ingress: 0, Egress: 0, Pkts: 0 RevPkts: 0
Rule (4123) DN (sys/actrl/scope-3112960/rule-3112960-s-49159-d-49154-f-default)
Ingress: 0, Egress: 0, Pkts: 0 RevPkts: 0
Rule (4124) DN (sys/actrl/scope-3112960/rule-3112960-s-49154-d-49155-f-default)
Ingress: 0, Egress: 0, Pkts: 0 RevPkts: 0
Rule (4125) DN (sys/actrl/scope-3112960/rule-3112960-s-49155-d-49154-f-default)
Ingress: 0, Egress: 0, Pkts: 0 RevPkts: 0
Rule (4126) DN (sys/actrl/scope-3112960/rule-3112960-s-49159-d-49158-f-default)
Ingress: 0, Egress: 0, Pkts: 0 RevPkts: 0
Rule (4127) DN (sys/actrl/scope-3112960/rule-3112960-s-49158-d-49159-f-default)
Ingress: 0, Egress: 0, Pkts: 0 RevPkts: 0
Rule (4128) DN (sys/actrl/scope-3112960/rule-3112960-s-49155-d-49158-f-default)
Ingress: 0, Egress: 0, Pkts: 0 RevPkts: 0
Rule (4129) DN (sys/actrl/scope-3112960/rule-3112960-s-49158-d-49155-f-default)
Ingress: 0, Egress: 0, Pkts: 0 RevPkts: 0
Step 14 After applying the service graph with multiple consumers and providers, the service graph updates the rule
to insert service nodes between endpoint groups.
Step 15 Check the class ID for service nodes in the deployed device.
a) On the menu bar, choose Tenants > All Tenants.
b) In the Work pane, double-click T1.
c) In the Navigation pane, choose Tenant T1 > L4-L7 Services > Deployed Devices > ASAv-VRF1.
d) In the Work pane, you can see the resource (class) IDs.
Reusing a Single Layer 4 to Layer 7 Device for Multiple Service Graphs
Guidelines and Limitations for Reusing a Single Layer 4 to Layer 7 Device for Multiple Service Graphs
You can create multiple cluster interfaces on a concrete device and then, in the device selection policy, specify
which cluster interface defined in the Layer 4 to Layer 7 device is used for the connector. A cluster interface
can be shared by multiple service graph instantiations.
In the Application Policy Infrastructure Controller (APIC) release 2.0 and earlier, port group VLAN trunking
for a virtual appliance is not supported. If you use a virtual appliance as a Layer 4 to Layer 7 device and you
need to add service node interfaces in a different bridge domain, you must have different cluster interfaces
on the virtual appliance.
In the following example, the endpoint groups, the Layer 4 to Layer 7 device, and the service graph templates
are all within one tenant. A Layer 4 to Layer 7 device that is defined in a tenant cannot be referenced from
other tenants. If you want to share a Layer 4 to Layer 7 device with other tenants, export the device to those
tenants; it will appear as an imported device in the other tenants.
Configuration Example for a Virtual Appliance That is Used By Multiple Service Graphs
The following steps provide information about creating a Layer 4 to Layer 7 device with shared interfaces to
prepare a virtual appliance to be used by multiple service graphs.
Procedure
The Cisco ASA DMZ interface (192.168.2.1) is the consumer and also the provider, and so you must choose
the consumer and provider type for the cluster interface.
Configuration Example for a Physical Appliance That is Used By Multiple Service Graphs
Service-Graph2 uses the DMZ cluster interface as the consumer connector and the internal cluster interface
as the provider connector.
This example has one consumer endpoint group and two provider endpoint groups.
The following procedure creates the example configuration.
Procedure
Service-Graph1 uses the consumer cluster interface as the consumer connector and the provider cluster
interface as the provider connector. The provider side is BD2.
Service-Graph2 uses the consumer cluster interface as the consumer connector and the provider cluster
interface as the provider connector. The provider side is BD3.
Verifying the Service Graph Configuration for a Device That is Used By Multiple Service Graphs Using the GUI
After a service graph is deployed successfully, you can see the service graph in the Deployed Devices properties
as having multiple cluster interfaces.
Procedure
About Service Graphs with Route Peering
There are some routing considerations. Traffic is routed based on the destination IP address, as illustrated in
the following figure:
Figure 59: Traffic is Routed Based on the Destination IP Address
If the Cisco ASA firewall does not do NAT, ACI VRF1 needs to know the 192.168.2.0/24 route. However,
if the ACI fabric has subnet 192.168.2.254/24 in BD2, then the traffic from the L3Out will be going directly
to the Web server instead of going through the Cisco ASA firewall. As such, you must add a static route or
enable dynamic routing between the ACI fabric and Cisco ASA firewall accordingly.
In ACI, use an L3Out to add a static route or enable dynamic routing on the VRF. With an L3Out, you connect
the Cisco ASA firewall as an external router in another L3Out (ASA-external). This is one example of when
to use a service graph with route peering, which is illustrated in the following figure:
Figure 60: L3Out Route Peering
Another example of when to use a service graph with route peering is if you want to use an ACI anycast
gateway as the default gateway of the servers, as illustrated in the following figure:
Figure 61: Anycast Gateway as the Default Gateway of the Servers
Table 12: Callouts for Anycast Gateway as the Default Gateway of the Servers
Callout 1: Traffic does not go through the Cisco ASA firewall because the Cisco Application Centric
Infrastructure (ACI) fabric in VRF1 knows the 192.168.20.0/24 route as a directly connected route.
If the Cisco ASA firewall does not do NAT, you must use route peering and different VRFs, as illustrated in
the following figure:
Recommended Configuration Procedure for Service Graphs with Route Peering
The following procedure provides an overview of the steps for configuring a service graph with route peering.
For more information about any of the steps, see the Cisco APIC Layer 4 to Layer 7 Services Deployment
Guide at the following URL:
http://www.cisco.com/c/en/us/support/cloud-systems-management/
application-policy-infrastructure-controller-apic/tsd-products-support-series-home.html
Configuration Examples for Service Graphs with Route Peering
Procedure
Step 6 Create L3Out ASA-external and ASA-internal for service node connectivity.
The VLAN used in the logical interface profile of the L3Outs will be used for the service node configuration.
The APIC and service graph automatically pick up the VLAN ID and routing information and configure
OSPF on the service node.
If you use OSPF, you must configure L3Out subnets accordingly. The subnets are bridge domain subnets that
will be advertised to the Cisco ASA firewall when they are marked with the Advertised Externally scope.
b) In the CLI, check the leaf routing table (VRF1) to make sure that VRF1 has the 10.10.10.0/24 route.
Example:
Leaf3# show ip route vrf T1:VRF1
<snip>
1.1.1.1/32, ubest/mbest: 1/0
*via 192.168.30.1, eth1/21, [110/41], 5d02h, ospf-default, intra
10.10.10.0/24, ubest/mbest: 1/0
*via 192.168.2.101, vlan20, [110/20], 00:00:27, ospf-default, type-2, tag 200
11.11.11.11/32, ubest/mbest: 2/0, attached, direct
*via 11.11.11.11, lo3, [1/0], 5d02h, local, local
*via 11.11.11.11, lo3, [1/0], 5d02h, direct
192.168.1.0/24, ubest/mbest: 1/0
*via 192.168.2.101, vlan20, [110/14], 00:15:51, ospf-default, intra
192.168.2.0/24, ubest/mbest: 1/0, attached, direct
c) Check the leaf routing table (VRF2) to make sure that VRF2 has the 192.168.20.0/24 route.
Example:
Leaf3# show ip route vrf T1:VRF2
...
10.10.10.0/24, ubest/mbest: 1/0, attached, direct, pervasive
*via 10.0.80.64%overlay-1, [1/0], 00:16:05, static
10.10.10.254/32, ubest/mbest: 1/0, attached
*via 10.10.10.254, vlan13, [1/0], 00:16:05, local, local
192.168.1.0/24, ubest/mbest: 1/0, attached, direct
*via 192.168.1.254, vlan16, [1/0], 04:48:44, direct
192.168.1.254/32, ubest/mbest: 1/0, attached
*via 192.168.1.254, vlan16, [1/0], 04:48:44, local, local
192.168.2.0/24, ubest/mbest: 1/0
*via 192.168.1.101, vlan16, [110/14], 00:15:53, ospf-default, intra
192.168.10.0/24, ubest/mbest: 1/0, attached, direct, pervasive
*via 10.0.80.64%overlay-1, [1/0], 00:01:52, static
192.168.20.0/24, ubest/mbest: 1/0
*via 192.168.1.101, vlan16, [110/20], 00:01:48, ospf-default, type-2, tag 100
The resulting OSPF configuration on the Cisco ASA looks similar to the following (the process ID shown is
illustrative):
router ospf 100
router-id 10.10.10.1
network 192.168.1.0 255.255.255.0 area 1
network 192.168.2.0 255.255.255.0 area 1
area 1
log-adj-changes
...
In the device selection policy, you can choose the Redistribute option, which is also reflected on the service
node if the device package supports redistribution.
Additional References for Service Graphs with Route Peering
For more information, see the Cisco APIC Layer 4 to Layer 7 Services Deployment Guide at the following
URL:
http://www.cisco.com/c/en/us/support/cloud-systems-management/
application-policy-infrastructure-controller-apic/tsd-products-support-series-home.html
Prerequisites for the Common Tenant and User Tenants
The following objects have visibility considerations:
• Contract—This object must be visible from the provider and consumer endpoint groups.
• Service graph template—This object must be visible from the contract.
• Layer 4 to Layer 7 device—This object must be visible from the device selection policy.
• Device selection policy—This object must be defined under the provider-side endpoint group tenant, and
it must be able to see the cluster interfaces in the Layer 4 to Layer 7 device, the bridge domains, and the
L3Out.
Objects defined in the common tenant can be referenced from other tenants, but objects defined in a user
tenant can be referenced only from the same tenant. The following examples show that where you define these
objects depends on your requirements:
Contract:
• If you want to enable a tenant user to manage the contract filter, the contract must be defined in the
provider-side endpoint group tenant and exported to the consumer-side endpoint group tenant.
• If you want to hide the security policy from the user tenant, the contract must be defined in the common
tenant. The security policy cannot be changed from a user tenant and can be referenced from user tenants
without being exported.
Example of Where to Define Layer 4 to Layer 7-Related Objects
• If the device selection policy is in a user tenant, the bridge domain or L3Out for the cluster interface
must be in the same tenant or the common tenant.
CHAPTER 7
Miscellaneous Design
• Hardware Choices, on page 139
• Leaf Node Categorization, on page 143
• Fabric Provisioning, on page 144
• About Fabric Provisioning, on page 144
Hardware Choices
About Hardware Choices
Cisco Application Centric Infrastructure (ACI) offers a variety of hardware platforms. Choose a platform
based on the type of physical layer connectivity you need, the amount of ternary content-addressable memory
(TCAM) space and buffer space you need, and whether you want to use IP-based classification of workloads
into endpoint groups (EPGs).
The following table provides a summary of the hardware options that were available for the Application Policy
Infrastructure Controller (APIC) 1.3(2f) release. You should refer to the Cisco product page for the most
up-to-date information.
Expansion Modules
You can choose among three expansion modules according to the switches you are using and your needs:
• Cisco M12PQ—Twelve 40-Gbps ports with an additional 40 MB of buffer space and a smaller TCAM
compared to the other models. It can be used with the Cisco Nexus 9396PX, 9396TX, and 93128TX
switches.
• Cisco M6PQ—Six 40-Gbps ports with additional policy TCAM space. It can be used with the Cisco
Nexus 9396PX, 9396TX, and 93128TX switches.
• Cisco M6PQ-E—Six 40-Gbps ports with additional policy TCAM space. It can be used with the Cisco
Nexus 9396PX, 9396TX, and 93128TX switches and allows you to classify workloads into EPGs based
on the IP address of the originating workload.
Leaf Switches
In ACI, all workloads connect to leaf switches. The leaf switches used in an ACI fabric are top-of-rack (ToR)
switches. By role, they are commonly divided into four main types:
• Border Leaf—The border leaf switches are ACI leaf switches that provide Layer 2 or Layer 3 external
connectivity to outside networks. The border leaf supports routing protocols to exchange routes with
external routers, and it also applies and enforces policies for traffic between internal and external endpoints.
• Service Leaf—The service leaf switches are ACI leaf switches that connect to Layer 4 to Layer 7 service
appliances, such as firewalls and load balancers. The connectivity between the service leaf and the service
appliance can be Layer 2 or Layer 3, depending on the design scenario.
• Compute Leaf—The compute leaf switches are ACI leaf switches that connect to compute systems. The
compute leaf supports individual port, port channel, and virtual port channel (vPC) interfaces, based on
the nature and requirements of the application or the system. It also applies and enforces policies for
traffic to and from local endpoints.
• IP Storage Leaf—The storage leaf switches are ACI leaf switches that connect to IP storage systems. They
support individual port, port channel, and virtual port channel (vPC) interfaces based on the nature and
requirements of the application and the system. They also apply and enforce policies for traffic to and
from local endpoints.
While it is not a requirement to have dedicated switches for certain functions, doing so is preferred, especially
in a large data center: it is easier to standardize configuration templates, and it enables applications to flexibly
tap into any available resources.
For example, a large data center that supports a high volume of traffic between the ACI fabric and the core
network might designate two border leaf switches for high availability and scalability.
Spine Switches
The Cisco ACI fabric forwards traffic primarily based on host lookups. A mapping database stores the
information about the ToR switch on which each IP address resides. This information is stored in the fabric
cards of the spine switches.
The spine switches have several form factors. The models also differ in the number of endpoints that they can
hold in the mapping database, which depends on the number of fabric modules installed. Modular switches
equipped with six fabric modules can hold the following numbers of endpoints:
• Fixed form-factor Cisco Nexus 9336PQ—Up to 200,000 endpoints
• Modular 4-slot switch—Up to 300,000 endpoints
• Modular 8-slot switch—Up to 600,000 endpoints
• Modular 16-slot switch—Up to 1.2 million endpoints
Note You can mix spine switches of different types, but the total number of endpoints that the fabric supports is
the minimum common denominator. You should stay within the maximum tested limits for the software,
which are shown in the Capacity Dashboard in the APIC GUI. At the time of this writing, the maximum
number of endpoints that can be used in the fabric is 180,000.
• Verify that the features that you want to deploy are supported on the selected platform. For example, the
IP-based EPG feature requires the -E, -EX, or later versions of leaf switches.
• Make sure the leaf switch TCAM size is large enough to support the contracts or application rules that
will be deployed within the fabric.
• When using two leaf switches as a vPC pair, use the same switch model to avoid corner-case issues.
• Use two or more spine switches for higher bandwidth and for redundant connections to external networks.
Leaf Node Categorization
• Border Leaf—This leaf node is typically connected to L3Outs. L3Outs can serve as a path into the WAN,
or into the core of a legacy network.
• Compute Leaf—This leaf node is typically connected to compute resources, whether the resources are
physical or virtualized servers.
• Services Leaf—This leaf node is typically connected to Layer 4 to Layer 7 service devices, such as firewalls,
load balancers, and intrusion prevention systems. A device does not need to be integrated into ACI through
a service graph template to be considered a service; the definition is from the application's point of view.
• Storage Leaf—This leaf node is typically connected to storage devices for compute resources. This can
include iSCSI, NFS, or other Ethernet medium storage devices.
Leaf nodes do not need to be dedicated to only one category. Depending on the design, the categories can
overlap. For example, a leaf node serving as a border leaf node can also provide compute resources.
Fabric Provisioning
About Fabric Provisioning
Figure 66: Extending the Infrastructure IP Range Beyond the ACI Fabric
If the infrastructure range overlaps with other subnets elsewhere in the network, routing problems might occur.
The minimum supported subnet size in the recommended three APIC scenario is /22. The number of addresses
required depends on a variety of factors, including the number of APICs in your fabric, the number of leaf
and spine nodes, the number of AVS instances, and the number of virtual port channels required. To avoid
issues with address exhaustion, you should consider allocating a /16 or /17 range if possible.
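As a rough sizing sketch (the per-object counts here are illustrative, and actual consumption varies by release),
a /22 yields about 1,022 usable addresses. A fabric with 3 APICs, 200 leaf and spine switches, 100 vPC virtual
TEP addresses, and 500 AVS VTEPs would already consume roughly 800 of them, leaving little headroom
for growth, whereas a /17 provides about 32,766 addresses for the same fabric.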
Note When considering the preceding requirements, remember that changing either the infrastructure IP address
range or the VLAN after initial provisioning is not possible without rebuilding the fabric.
In many cases, VLAN 3967 is a good choice for the ACI infrastructure VLAN to avoid the issue outlined in
the preceding section.
For more information about fabric infrastructure VLAN recommendations, see the Cisco APIC Getting Started
Guide at the following URL:
http://www.cisco.com/c/en/us/support/cloud-systems-management/
application-policy-infrastructure-controller-apic/tsd-products-support-series-home.html
Note Node IDs 1 through 29 are reserved for APICs, which cannot be changed.
When APIC redundancy is configured, you should use IDs 1 to 19 for active
APICs and IDs 20 to 29 for standby APICs. This allows for expansion of the
fabric.
• When a pair of switches is used for the server uplink connectivity using either vPC or active/standby,
consider using sequential numbers for the leaf node ID for those switch pairs. For example, node ID 201
for the vPC side A connectivity and node ID 202 for side B. That way, it is easier to configure and easier
to manage an upgrade when using maintenance groups.
• If only one ToR switch is deployed, reserve the even leaf ID for future use.
Note Once the fabric node ID is assigned, the ID is difficult to change unless the fabric nodes (spine and leaf) are
decommissioned from the fabric and cleanly rebooted.
PART II
Implementation
• ACI Constructs Implementation, on page 151
• Routing Implementation, on page 167
• Virtualization Implementation, on page 175
• Miscellaneous Implementation, on page 183
CHAPTER 8
ACI Constructs Implementation
• Configuration Zones, on page 151
• Shared Services, on page 153
• EPG Static Binding, on page 155
• In-Band and Out-of-Band Management, on page 158
• Out-of-Band Management Contracts, on page 161
Configuration Zones
About Configuration Zones
Configuration zones divide the Cisco Application Centric Infrastructure (ACI) fabric into different zones that
can be updated with configuration changes at different times. This limits the risk of deploying a faulty
configuration on the entire fabric at once that might disrupt traffic or even bring the fabric down. An
administrator can deploy a configuration to a defined non-critical zone, and then deploy it to defined critical
zones when satisfied that it is suitable. Similar to the way that UCS Manager functions, a configuration zone
is essentially an additional "user acknowledge" type of policy that forces users to verify configuration changes
before applying the changes.
You can choose one of the following deployment modes for a configuration zone:
• Enabled—Pending updates are sent immediately
• Disabled—New updates are postponed
• Triggered—Pending updates are sent immediately, and the deployment mode is reset to the value it had
before being triggered
Without configuration zones enabled, policy changes will take effect on all fabric nodes once the configuration
is set and standard programming criteria are met. With configuration zones enabled, you can have these policy
changes transition to a state of "postponed" until a user acknowledges the change to be applied in specific
zones.
Zones can encompass an entire pod or a subset of fabric nodes.
Shared Services
About Shared Services
Shared services is the paradigm of taking endpoints within one tenant/VRF and allowing them to communicate
with endpoints within another tenant/VRF. Shared services enables this communication across tenants while
preserving the isolation and security policies of the individual tenants. A routed connection to an external
network is an example of a shared service that multiple tenants use.
Recommended Configuration Procedure of Shared Services Using the GUI
• Contracts for shared service must have the scope set to Global. The default scope is VRF and will not
work for shared services.
• For BD-to-BD shared services between User-Tenant A and User-Tenant B, each tenant has a contract that
is associated as a provided contract under an EPG and is exported to the other tenant. The same EPG then
applies the imported contract as a consumed contract interface.
• All EPGs that communicate through BD-to-BD shared services have at least two contract relationships:
one as a provider and one as a consumed contract interface.
• When using BD-to-BD shared services, due to the extra configuration and rules associated with having
a provider set within both tenants, limit the fabric to roughly 16,000 EPGs.
• In the case of vzAny, you must define the provider EPG shared subnet under the EPG in order to properly
derive the pcTag (classification) of the destination from the consumer (vzAny) side. If you are migrating
from a BD-to-BD shared services configuration, where both the consumer and provider subnets are
defined under bridge domains, to vzAny acting as a shared service consumer, you must take an extra
configuration step where you add the provider subnet to the EPG with the shared flags at minimum.
Note If you add the EPG subnet as a duplicate of the defined BD subnet, ensure that
both definitions of the subnet always have the same flags defined. Failure to do
so can result in unexpected fabric forwarding behavior.
• Subnets leaked from multiple consumer networks into a VRF, or vice versa, must be disjoint and must
not overlap. If two consumers are mistakenly configured with the same subnet, recover from this condition
by removing the subnet configuration from both and then reconfiguring the subnets correctly.
• Subnets leaked across VRFs must have the Shared between VRFs option (and the ND RA Prefix option
for IPv6 subnets) enabled on the subnet definition under the BD or the EPG.
Configuration Examples for Shared Services Using the GUI
Procedure
Step 1 To set the shared services contract as a provider for an EPG with a shared subnet: on the menu bar, choose
Tenants > tenant_name.
Step 2 In the Navigation pane, choose tenant_name > Application Profiles > profile_name > Application EPGs >
epg_name > Contracts.
Step 3 Right-click on Contracts, choose Add Provided Contract, and enter a name for the contract in the Name
field.
Step 4 To export the contract from one tenant to another: in the Navigation pane, choose Security Profiles >
Contracts.
a) Right-click on Contracts and choose Export Contract. Enter the appropriate information for the Name,
Contract and Tenant fields. Click Submit when finished.
Step 5 To apply the contract to the consumer EPG within the imported tenant as a consumed contract interface: in
the Navigation pane, choose tenant_name > Application Profiles > profile_name > Application EPGs >
epg_name > Contracts.
Step 6 Right-click on Contracts, choose Add Consumed Contract Interface, and enter a name for the contract in
the Name field.
Note If you are performing BD-to-BD shared services, repeat this procedure in the other tenant; communication
between the two EPGs will not succeed until both sides are configured.
Prerequisites for EPG Static Binding Modes
• When a port with an 802.1p binding also carries other bindings trunked on the same port, packets will
egress this interface as VLAN-0, or as untagged in the case of -EX switches.
• Most devices process VLAN-0 as an untagged packet and have no issues.
• For hosts that cannot process VLAN-0 as an untagged packet, the binding mode must be set to Untagged.
Configuration Examples for EPG Static Binding Modes Using the GUI
The following procedure provides an example of configuring EPG static binding modes using the Application
Policy Infrastructure Controller (APIC) GUI.
Procedure
Step 1 Configure contract labels (consumer and provider). On the menu bar, choose TENANTS > All Tenants.
Step 2 In the Work pane, double-click the desired tenant's name.
• If you are using the Advanced GUI Mode of the APIC GUI, then from the Navigation pane, expand
Application Profiles > profile_name > Application EPGs > application_epg_name.
• If you are using the Basic GUI Mode of the APIC GUI, then from the Navigation pane, expand
tenant_name > Application Profiles > profile_name > Application EPGs > application_epg_name.
Step 3 In the Navigation pane, right-click on Static Ports to open the Deploy Static EPG On PC, VPC, Or Interface
dialog box and perform the following tasks:
a) In the Path Type field, click the port type, and in the Path field, use the drop-down menu to choose the node path.
b) In the Port Encap field, enter the VLAN ID.
c) In the Deployment Immediacy field, choose the deployment type.
d) In the Mode field, choose the mode type.
e) Click Submit.
In-Band and Out-of-Band Management
In-band management refers to utilizing the data plane for management traffic. In the case of ACI, this means
that Application Policy Infrastructure Controller (APIC)-sourced management traffic goes through the leaf
nodes, allowing management communication with devices that connect directly to leaf switch ports. The
following figure illustrates in-band management:
Prerequisites for In-Band and Out-of-Band Management
You can utilize both in-band and out-of-band management simultaneously, but there are limitations that must
be taken into account for this scenario.
Recommended Configuration Procedure of In-Band and Out-of-Band Management
• In-band management ports are the front panel ports on the leaf nodes and the two PCIE VIC ports
connected to the fabric on the APIC.
• Out-of-band and in-band management connectivity policies reside within tenant "mgmt."
• The out-of-band management address assignment that is set during the APIC startup script does not have
an object created to represent that assignment. This must be done after fabric initialization to get an object
representation within the MIT.
• The APIC management address sources traffic to the management address of various devices for
integrations. For example, the APIC management must have communication to the management address
of vCenter for VMM integration to be successful. This can be through in-band or out-of-band.
• When in-band management is set up, the APIC always prefers in-band for any traffic sourced from the
APIC. Out-of-band is still accessible for devices that are sending requests to the out-of-band address
specifically.
• There is no configuration available to leak the out-of-band management plane from the APIC into the
data plane. This can only be accomplished by physically cabling out-of-band network devices directly
into the data plane. Cisco does not recommend this setup. The preferred setup for this type of design
would be to utilize in-band management.
• When utilizing in-band management with multi-tenancy, shared services will be used extensively to leak
tenant management subnets into the fabric's in-band subnet.
Procedure
Step 3 In the Navigation pane, choose Tenant mgmt > Node Management Addresses > name_of_policy.
In the Work pane, you can see the dynamic address assignments that can be created to provision management
addresses. If created, they specify the node ID, the address assignment, and the in-band or out-of-band
assignment of the addresses.
Out-of-Band Management Contracts
Recommended Configuration Procedure of Out-of-Band Management Contracts Using the GUI
The following procedure restricts out-of-band management through contract and subnet definitions within
the node management EPG and external management network connectivity profile using the Cisco APIC
GUI.
Procedure
Step 1 Configure out-of-band management. On the menu bar, choose Tenants > mgmt.
Step 2 In the Navigation pane, choose Tenant mgmt > Node Management EPGs.
Step 3 In the Work pane, double-click Out-of-Band_name and expand the Provided Out-of-Band Contract table
to configure.
Step 4 Configure the consumer contract association and consumer subnet. In the Navigation pane, choose External
Management Network Instance Profiles.
Step 5 In the Work pane, double-click External Management Network Instance Profile_name and expand the
Consumed Out-of-Band Contracts and Subnets tables to configure.
Configuration Examples for Out-of-Band Management Contracts
Procedure
Where a new target entry named fp-default exists, the chain for fp-default contains entries based on the
defined subnets. In the following example (output abbreviated), only subnet 192.168.1.0/24 is allowed
out-of-band access.
Example:
pod3-apic1# iptables -L
<snip>
Procedure
Step 4 In the Create Out-of-Band Contracts dialog box, perform the following tasks:
a) In the Name field, enter a name for the contract.
b) Expand Subjects. In the Create Contract Subject dialog box, in the Name field, enter a subject name.
c) Expand Filters, and in the Name field from the drop-down list, choose the name of the filter (default).
Click Update and click OK.
d) In the Create Out-of-Band Contract dialog box, click Submit.
Step 5 Right-click Node Management EPGs and click Create Out-of-Band Management EPG.
An out-of-band management endpoint group consists of switches (leaves/spines) and Cisco APICs that are
part of the associated out-of-band management zone.
Step 6 In the Create Out-of-Band Management EPG dialog box, perform the following tasks:
a) In the Name field, enter a name for the EPG.
b) Expand Provided Out-of-Band Contracts, and in the OOB Contract field, from the drop-down list,
choose the name of the contract you created. Click Update, and click OK.
The out-of-band contract is associated with the node management EPG.
c) In the Create Out-of-Band Management EPG dialog box, click Submit.
Step 7 Right-click External Management Network Instance Profiles and click Create External Management
Network Instance Profile.
Hosts that are part of regular endpoint groups cannot communicate with the nodes in the out-of-band
management endpoint group. Any host that is part of a special group known as the instance profile can
communicate with the nodes in an out-of-band management endpoint group using special out-of-band contracts.
Step 8 In the Create External Management Network Instance Profile dialog box, perform the following tasks:
a) In the Name field, enter a name for the instance profile.
b) Expand Consumed Out-of-Band Contracts, and in the Out-of-Band Contract field, from the drop-down
list, choose the name of the contract you created. Click Update.
c) Expand Subnets and type the external subnet IP address and subnet mask of the managing hosts. Click
Update, and click OK.
The out-of-band contract is associated with the subnet.
d) In the Create External Management Network Instance Profile dialog box, click Submit.
Additional References for Out-of-Band Management Contracts
http://www.cisco.com/c/en/us/support/cloud-systems-management/
application-policy-infrastructure-controller-apic/tsd-products-support-series-home.html
CHAPTER 9
Routing Implementation
• L3Out Subnets, on page 167
L3Out Subnets
About Defining L3Out Subnets
L3Outs are the Cisco Application Centric Infrastructure (ACI) objects used to provide external connectivity
in external Layer 3 networks. The L3Out is where you configure the interfaces, protocols, and protocol
parameters that are used to provide IP connectivity to external routers. The following list contains the different
managed objects configured under the L3Out.
• Export Route Control Subnet—Controls which external networks are advertised out of the fabric using
route-maps and IP prefix-lists.
• External Subnets for External EPG—Classifier for the external EPG. The rules and contracts defined
in this external EPG apply to networks matching this subnet.
• L3Outside—The top-level object for the L3Out connection. This is where protocol selection (BGP, OSPF,
or EIGRP) is done; the OSPF area, area type (regular, NSSA, or stub), and area cost are configured here,
as is the EIGRP autonomous system. The VRF and external routed domain are also assigned at the L3Out.
• Logical Interface Profiles—The interface configuration for the L3Out is configured here, including the
IP address, VLAN, and MTU configuration.
• Logical Node Profiles—Leaf switch selection, the router ID, and static route configuration are performed
under the logical node profile. When an L3Out spans multiple leaf switches, all nodes can be configured
under one node profile.
• Networks (L3Out Network Instance Profile)—The external EPG configuration for the L3Out. This is
where routing controls, EPG classification, and contract configuration are done. There can be multiple
external EPGs per L3Out, each assigned to different contracts.
• Match Rules for Route Maps—L3Outs in ACI support route-map configuration. This section is where
route-map match statements are configured.
• Protocol Policies—Routing protocol policies are configured here, including interface policies (timers,
OSPF network type, passive interface), BFD policies, route summarization policies, and other protocol
knobs.
Note L3Outs across different tenants will use similar protocol policies. For example,
many OSPF L3Outs may use the same network type or all EIGRP L3Outs may
use default interface settings. If protocol policies are defined under the common
tenant, all other tenants can use them. This eliminates having to configure the
same policies across all tenants.
• Set Rules for Route Maps—Route-map set statements, which are used to influence routing decisions, are
configured here. Set statements include BGP communities, local preference, weight, route dampening,
MED, OSPF metric, and metric type.
• Shared Route-Control Subnet—Controls which external prefixes are advertised to other tenants for
shared services.
• Shared Security-Import Subnet—Configures the classifier for the subnets in the VRF where the routes
are leaked.
Recommended Procedures for Defining L3Out Subnets
• Export Route Control Subnet—Controls which external networks are advertised out of the fabric,
using route-maps and IP prefix-lists.
• External Subnets for the External EPG—Sets the classifier for the external EPG. The rules and
contracts assigned in this external EPG apply to networks matching this subnet.
• Shared Route Control Subnet—Controls which external prefixes are advertised to other tenants for
shared services.
• Shared Security Import Subnet—Sets the classifier for the subnets in the VRF where the routes are
advertised.
Note This section refers to the Cisco APIC GUI at Tenants > tenant-name > Networking > External Routed
Networks > Create Routed Outside > External EPG Networks > Create External Network > Subnet >
Create Subnet > Export Route Control Subnet.
Export route control determines which transit prefixes are advertised on the Layer 3 outside network associated
with an external EPG. An IP prefix-list is created on the border leaf for each subnet that is defined here. A
route-map is configured with all IP prefix-lists and is used for redistribution into OSPF or EIGRP L3Outs, or
as an outbound route-map for BGP L3Outs.
The following command output shows the route-maps created:
BL-1# show ip ospf vrf T1:ctx1
Routing Process default with ID 1.1.1.103 VRF T1:ctx1
Stateful High Availability enabled
Supports only single TOS(TOS0)routes
Supports opaque LSA
Table-map using route-map exp-ctx-2883588-deny-external-tag
Redistributing External Routes from
static route-map exp-ctx-st-2883588
direct route-map exp-ctx-st-2883588
bgp route-map exp-ctx-proto-2883588
eigrp route-map exp-ctx-proto-2883588
If no subnets are added to export route control, a route-map is not created. In the following example, no routes
are redistributed into OSPF because the route-map being referenced by the redistribution command does not
exist.
BL-1# show route-map exp-ctx-st-2883588
% Policy exp-ctx-st-2883588 not found
The route-map and IP prefix-list are created when the first subnet is added to export route control.
For example, if in the Create Subnet dialog box you use the 172.16.25.0/24 IP address and set the scope to
Export Route Control Subnet, the following route-map and IP prefix-list are displayed in the output of the
show route-map command:
BL-1# show route-map exp-ctx-proto-2883588
route-map exp-ctx-proto-2883588, permit, sequence 7801
Match clauses:
ip-address prefix-lists: IPv6-deny-all
IPv4-proto16390-2883588-exc-ext6-inferred-export-dst
Set clauses:
tag 4294967295
BL-1# show ip prefix-list IPv4-proto16390-2883588-exc-ext-inferred-export-dst
ip prefix-list IPv4-proto16390-2883588-exc-ext-inferred-export-dst: 1 entries
seq 2 permit 172.16.25.0/24
BGP L3Outs do not use redistribution to advertise the transit routes because routes received from L3Outs are
already redistributed into MP-BGP. Therefore, they already exist in the BGP table on the border leaf. BGP
uses outbound route-maps for export route control. The same rules apply to creation of the route-map and IP
prefix-list. They are not created until the first export route-control subnet is configured. The following example
shows the resulting outbound route-map:
Inbound route-map configured is permit-all, handle obtained
Outbound route-map configured is exp-l3out-BGP2-peer-2293764, handle obtained
When configuring export route-control subnets, you must specify an exact prefix match. For example, an
export route-control subnet of 172.16.0.0/16 matches only the route 172.16.0.0/16. It does not match longer
prefix-length routes, such as 172.16.1.0/24 or 172.16.2.0/24. The exception is the 0.0.0.0/0 subnet: if you use
this subnet, you can enable Aggregate Export in the Create Subnet dialog box. When aggregate export is
enabled, the route-control subnet matches all routes. If aggregate export is not enabled with the 0.0.0.0/0
subnet, only the default route is advertised.
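The matching semantics can be sketched in a few lines of Python (illustrative only; the route list is hypothetical).
The subnet_of() check mimics the aggregate behavior, which corresponds to an IP prefix-list entry with the
le 32 keyword:

# Illustrative sketch of export route-control matching semantics.
# The route list below is a hypothetical example.
import ipaddress

def matches_export_subnet(route, subnet, aggregate=False):
    route_net = ipaddress.ip_network(route)
    subnet_net = ipaddress.ip_network(subnet)
    if aggregate:
        # Aggregate Export (0.0.0.0/0): match any route covered by the
        # subnet, equivalent to a prefix-list entry with "le 32".
        return route_net.subnet_of(subnet_net)
    # Without aggregation, only an exact prefix match is advertised.
    return route_net == subnet_net

routes = ["172.16.0.0/16", "172.16.1.0/24", "0.0.0.0/0"]
print([r for r in routes if matches_export_subnet(r, "172.16.0.0/16")])
# ['172.16.0.0/16'] -- longer prefixes such as 172.16.1.0/24 do not match
print([r for r in routes if matches_export_subnet(r, "0.0.0.0/0", aggregate=True)])
# all routes match when Aggregate Export is enabled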
Note Export route control is not used to advertise tenant subnets. Instead, you configure that in the bridge domain/EPG
subnet policy. The Advertised Externally option is used to advertise tenant subnets externally on the L3Out.
See the Create Subnet dialog box at Tenants > tenant-name > Networking > Bridge Domains > BD-name >
Create Subnet.
For example, on this Create Subnet dialog box, if you configure the Gateway IP address 10.1.1.1/24 and
enable the Advertised Externally option, the system adds the tenant subnet to a static-redistribution route-map.
Note You enable import route control when you create an L3Out, at Tenants > tenant-name > Networking >
External Routed Networks > Create Routed Outside.
For example, in the Create Routed Outside dialog box, if you enable BGP or OSPF and enable Route
Control Enforcement for Import, an inbound route-map is configured for the BGP neighbor. Similar to
export route control, the route-map is not created until an import route-control subnet is added to the L3Out.
Import route control follows the same rules as export route control (allowing exact prefix match or an aggregate
for the 0.0.0.0/0 subnet).
Similar route-control subnets can be configured for both the inbound and outbound route-maps.
Import route control is not used only to filter routes; it is also used to apply match and set statements to the
route-maps. Use Set Rules for Route Maps and Match Rules for Route Maps to create set and match
statements for route-maps. The Cisco APIC then assigns the profile to a route control profile.
In the following example (at Tenants > tenant-name > Networking > External Routed Networks > Set
Rules for a Route Map), you create a rule to set the BGP local preference value and assign it to the default
import route control profile.
The default import route-control profile applies to the inbound route-map only if import route control is enabled.
For example, if you set import route control for the 0.0.0.0/0 aggregate subnet, this matches all prefixes and
permits them into the fabric. It also sets the BGP local preference to 200. See the following show command
output:
BL-1# show route-map imp-l3out-BGP2-peer-2293764
route-map imp-l3out-BGP2-peer-2293764, permit, sequence 8001
Match clauses:
ip address prefix-lists: IPv6-deny-all
IPv4-peer49153-2293764-agg-ext-in-default-import4rct10pfx-only-dst
Set clauses:
local-preference 200
You can also apply set rules to specific prefixes while still allowing all other prefixes into the fabric. In this
case (at Tenants > tenant-name > Networking > External Routed Networks > Create Routed Outside >
External EPG Networks > Create Route Profile), create a different route-control policy instead of using
the default import policy. (Select Match Prefix and Routing Policy and set the order to 0.)
To apply this policy to specific prefixes, first create an import route control policy for the 0.0.0.0/0 aggregate
subnet to match all prefixes, and use an empty default import route control profile. Then, configure an import
route control policy for the prefixes that will use the route-control profile to set the BGP local preference. For
example, enter the subnet, 10.206.19.0/24, and in the Route Control Profile field, identify the route control
profile you just created for exceptions.
The route-map is created in the correct order to set the local preference for the specific route and match all
other routes in the last sequence, as displayed in the following example showing the route-map creation
sequence:
BL-1# show route-map imp-l3out-BGP2-peer-2293764
route-map imp-l3out-BGP2-peer-2293764, permit, sequence 2001
Match clauses:
ip address prefix-lists: IPv6-deny-all
IPv4-peer49153-2293764-exc-ext-in-local-pref-3001local-pref-3000pfx-only-dst
Set clauses:
local-preference 300
route-map imp-l3out-BGP2-peer-2293764, permit, sequence 8001
Match clauses:
ip address prefix-lists: IPv6-deny-all
IPv4-peer49153-2293764-agg-ext-in-default-import4all-routes0pfx-only-dst
Set clauses:
BL-1# show ip prefix-list IPv4-peer49153-2293764-exc-ext-in-local-pref-3001local-pref-3000pfx-only-dst
ip prefix-list IPv4-peer49153-2293764-exc-ext-in-local-pref-3001local-pref-3000pfx-only-dst: 1 entries
seq 2 permit 10.206.19.0/24
BL-1# show ip prefix-list IPv4-peer49153-2293764-agg-ext-in-default-import4all-routes0pfx-only-dst
ip prefix-list IPv4-peer49153-2293764-agg-ext-in-default-import4all-routes0pfx-only-dst: 1 entries
seq 1 permit 0.0.0.0/0 le 32
Note This section refers to the Create Subnet dialog box at Tenants > tenant-name > Networking > External
Routed Networks > Create Routed Outside > External EPG Networks > Create External Network >
Create Subnet > External Subnets for the External EPG.
The external subnets for an external EPG are used to define the subnets that should be classified to the external
EPG. This policy does not affect routing. It is similar to an Access Control List (ACL) that assigns a prefix
to the class id (pcTag) of the external EPG.
Even though the external subnet for the external EPG is configured with the L3Out, the ACL is applied at the
VRF level. This means that if a prefix is configured for L3Out-1 and traffic with a source address matching
that prefix arrives on L3Out-2, the traffic is classified to the external EPG of L3Out-1. The following figure
illustrates this behavior:
Figure 70: Action of External EPG ACL
In this example, two Layer 3 outside networks are both using the 0.0.0.0/0 subnet. Traffic arriving on L3Out-2
is classified to the external EPG of L3Out-1 and is permitted to access the Web EPG even though there is no
contract configured for the external EPG of L3Out-2.
If networks from L3Out-2 should not access the web EPG, then specific prefixes should be configured to
match the subnets expected on each L3Out. The following example shows specific subnets configured for
each L3Out:
Figure 71: Specific Subnets Defined for Each L3Out
External subnets for an external EPG are longest prefix-match subnets. This allows you to configure multiple
external EPGs under one L3Out and apply different security policies (contracts) to each external EPG. For
example, three external EPGs can be configured under the same L3Out, with EPG-2 and EPG-3 using subnets
that are longer prefix matches within the same subnet range as EPG-1, as in the sketch that follows.
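A minimal Python sketch of this longest prefix-match classification, using hypothetical external EPG names
and subnets:

# Illustrative longest-prefix-match classification of a source IP to an
# external EPG. The EPG names and subnets are hypothetical.
import ipaddress

external_epg_subnets = {
    "EPG-1": ["10.0.0.0/8"],
    "EPG-2": ["10.1.0.0/16"],
    "EPG-3": ["10.1.1.0/24"],
}

def classify(src_ip):
    ip = ipaddress.ip_address(src_ip)
    best = None
    for epg, subnets in external_epg_subnets.items():
        for s in subnets:
            net = ipaddress.ip_network(s)
            if ip in net and (best is None or net.prefixlen > best[1].prefixlen):
                best = (epg, net)
    return best[0] if best else None

print(classify("10.1.1.5"))  # EPG-3, the most specific match
print(classify("10.1.2.5"))  # EPG-2
print(classify("10.2.0.1"))  # EPG-1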
Note This section refers to the Create Subnet dialog box at Tenants > tenant-name > Networking > External
Routed Networks > Create Routed Outside > External EPG Networks > Create External Network >
Create Subnet > Shared Route Control Subnet.
Shared route-control subnets are used with shared L3Outs. They control which external prefixes are advertised
to other VRFs, which have a contract interface to the shared L3Out. This subnet type is similar to export route
control with one exception: the Aggregate Shared Routes option applies to any subnet, not just the 0.0.0.0/0
subnet. For example, if you configure subnet 192.168.0.0/16 with the Aggregate Shared Routes option, the
subnet matches 192.168.0.0/16 and all subnets within it that have longer prefix lengths. This is equivalent to
configuring an IP prefix-list entry with the le 32 keyword (less than or equal to).
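Such a subnet can also be configured through the REST API. The following is a hedged sketch, not a definitive
configuration: the APIC address, credentials, tenant, L3Out, and external EPG names are hypothetical, and the
l3extSubnet scope and aggregate attribute values should be verified against your APIC object model.

# Hedged sketch: configure a shared route-control subnet with aggregation
# by posting an l3extSubnet under an external EPG. All names are
# hypothetical; verify attribute values against your APIC object model.
import requests

APIC = "https://apic.example.com"   # hypothetical APIC address
s = requests.Session()
s.verify = False                    # lab only; use valid certificates in production

s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

subnet = ('<l3extSubnet ip="192.168.0.0/16" scope="shared-rtctrl" '
          'aggregate="shared-rtctrl"/>')
r = s.post(f"{APIC}/api/mo/uni/tn-T1/out-l3out-1/instP-ext-epg-1.xml", data=subnet)
print(r.status_code, r.text)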
Note This section refers to the Create Subnet dialog box at Tenants > tenant-name > Networking > External
Routed Networks > Create Routed Outside > External EPG Networks > Create External Network >
Create Subnet > Shared Security Import Subnet.
Shared security-import subnets are used with shared L3Out configurations and are not used for routing control.
This setting configures an ACL similar to External Subnets for the External EPG, but the ACL is configured
in the VRF that is consuming the shared L3Out. This is a longest prefix-match subnet.
CHAPTER 10
Virtualization Implementation
• Cisco AVS Distributed Firewall, on page 175
About Cisco AVS Distributed Firewall
The Distributed Firewall, with the help of the physical leaf switches, prevents SYN attacks. The leaf switch
evaluates each packet and allows TCP packets only if the ACK flag is set, which blocks SYN attacks. Cisco
AVS maintains a connection table to track the flow and allows TCP packets only if a matching flow entry
exists.
Figure 73: Hardware-Assisted Distributed Firewall
The following figure illustrates how to prevent a SYN and ACK attack from the provider:
Figure 75: Preventing a SYN and ACK Attack from the Provider
Guidelines and Limitations for Cisco AVS Distributed Firewall
The handling of FIN packets without the ACK bit set differs based on the type of operating system,
which enables such packets to be used for a FIN scan attack to determine the operating system.
Dropping such packets can prevent this attack.
Configuration Examples for Cisco AVS Distributed Firewall Using the GUI
You configure the Distributed Firewall by choosing one of the following modes:
• Enabled—Enforces the Distributed Firewall.
• Disabled—Does not enforce the Distributed Firewall. Use this mode only if you do not want to use the
Distributed Firewall. Disabling the Distributed Firewall removes all flow information on the Cisco AVS.
• Learning—Cisco AVS monitors all TCP communication and creates flows in a flow table, but does not
enforce the firewall. Learning is the default firewall mode in Cisco AVS Release 5.2(1)SV3(1.5) and
Release 5.2(1)SV3(1.10). Learning mode provides a way to enable the firewall without losing traffic.
The following procedure provides an example of configuring the Cisco AVS Distributed Firewall with the
Enabled mode using the advanced GUI mode.
Procedure
Step 1 Reflective ACL in the hardware is programmed to allow TCP packets only if the ACK flag is set. The following
steps demonstrate how to configure a leaf switch to check the ACK flag:
a) On the menu bar, choose Tenants > tenant_name.
b) In the Navigation pane, expand tenant_name > Security Policies > Filters.
The Security Policies - Filters panel appears in the Work pane. Your filters are displayed as rows inside
a summary table.
c) Click the table row to display the Filter panel.
The Entries table is displayed at the bottom of the Filter panel with a list of network traffic classification
properties. To configure a leaf switch to check the ACK flag and allow TCP packets, the Stateful check
box in the Entries table must be checked (set to True). By default, the Stateful check box is unchecked
(set to False).
d) To check the Stateful check box, double-click on the row in the Entries table that represents the filter
you want to configure. The filter will have tcp in the Protocol column and False in the Stateful column.
The chosen row expands and enables you to edit the network traffic classification properties.
e) Put a check in the Stateful check box.
f) Click Update.
Step 2 On receiving the first TCP SYN packet, Cisco AVS creates a flow table entry. If Cisco AVS does not have a
flow entry, it drops the packets. The following steps demonstrate how to configure Cisco AVS to enable the
distributed firewall and maintain a connection table to track the flow:
a) On the menu bar, choose Fabric > Access Policies.
b) In the Navigation pane, choose Interface Policies > Policies > Firewall > default.
The Firewall Policy - default panel appears.
c) In the Mode field, click the Enabled button. This property is referred to by VMM domain vSwitch policies.
By default the Mode is Learning.
d) From the menu bar, choose VM NETWORKING > Inventory > VMware > ACI_AVS_name.
e) From the ACI_AVS_name pane, in the VSwitch Policy section, ensure the Firewall Policy field is default.
If the Firewall Policy field is not set to default, you must be in the advanced GUI mode to change it.
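The Stateful setting from Step 1 can also be applied programmatically. The following is a hedged sketch,
assuming the vzEntry class carries the stateful attribute shown in the GUI; the APIC address, credentials,
tenant, filter, and entry names are hypothetical.

# Hedged sketch: mark a TCP filter entry as stateful (vzEntry stateful="yes")
# so that the leaf switch permits TCP packets only when the ACK flag is set.
# Tenant, filter, and entry names are hypothetical.
import requests

APIC = "https://apic.example.com"   # hypothetical APIC address
s = requests.Session()
s.verify = False                    # lab only

s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

entry = '<vzEntry name="tcp-entry" etherT="ip" prot="tcp" stateful="yes"/>'
r = s.post(f"{APIC}/api/mo/uni/tn-T1/flt-web-filter.xml", data=entry)
print(r.status_code)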
TCP Packet Handling Example
If the data packets have the ACK bit set, the leaf switch permits the packets. If the connection is established,
a flow entry exists on Cisco AVS and the packets are permitted. If the RST packets also have the ACK bit
set, they are handled similarly to the data packets.
FIN packets with the ACK bit set are also handled similarly to the data packets. The FIN packets without the
ACK bit set are dropped by the leaf switch.
Note • The handling of FIN packets without the ACK bit set differs based on the type of operating system,
so such packets can be used for FIN scan attacks to determine the operating system.
• Dropping FIN packets without the ACK bit set can prevent such an attack.
CHAPTER 11
Miscellaneous Implementation
• The Basic GUI and the Advanced GUI, on page 183
• Migrating Existing Networks to Cisco ACI, on page 184
The Basic GUI and the Advanced GUI
• If a Cisco ACI fabric was deployed with the Basic Mode, you should continue to use the Basic Mode
for configuration deployment.
• Switching between the Basic Mode and Advanced Mode configurations within the same fabric is not
supported. Going back and forth between GUI modes while performing configurations can cause undesired
relationships between objects if great care is not taken.
• The Basic Mode is designed for use on small-scale, greenfield deployments, because every policy created
within the Basic Mode is a new instance. The Basic Mode is not built around policy reuse.
• L4-L7 services configuration is not available within the Basic Mode.
• Objects created in the Basic Mode show up with a prefix of “__ui__” when viewed from the Advanced
GUI, and they cannot be removed in the Advanced GUI. For the steps to remove unwanted _ui_
objects, see Troubleshooting Unwanted _ui_ Objects in the Cisco APIC Troubleshooting Guide.
• The Basic Mode and the NX-OS-Style CLI utilize the same set of scripts to perform configuration. As
such, the NX-OS-Style CLI has the same limitations associated with the Basic Mode.
Additional References for Using the Basic GUI and Advanced GUI
• For Basic Mode and Advanced Mode configuration examples, see the Cisco APIC Getting Started
Guide at the following URL: http://www.cisco.com/c/en/us/support/cloud-systems-management/
application-policy-infrastructure-controller-apic/tsd-products-support-series-home.html
Recommended Configuration Procedure for Migrating Existing Networks to Cisco ACI
Procedure
Step 1 Design and deploy the new Cisco ACI POD; such a deployment is likely to start small and grow over time
as more applications are migrated.
A typical Cisco ACI POD consists of at least two spine switches and two leaf switches and is managed by a
cluster of Cisco APIC controllers.
Step 2 Perform the integration between the existing DC network infrastructure and the new Cisco ACI POD.
Layer 2 and Layer 3 connectivity between the two networks is required to allow successful applications and
workload migration across the two network infrastructures.
Step 3 Migrate the workloads between the existing network and the new network.
This application migration process may take several months to complete, depending on the number and
complexity of the applications being migrated, so the Layer 2 and Layer 3 connections previously mentioned
carry communication between the new and existing networks during this phase.
PART III
Operations
• ACI Constructs Operations, on page 189
• Layer 4 to Layer 7 Operations, on page 195
• Miscellaneous Operations, on page 199
CHAPTER 12
ACI Constructs Operations
• AAA RBAC and Roles, on page 189
• Endpoint Loop Protection, on page 192
AAA RBAC and Roles
The ACI fabric manages access privileges at the managed object (MO) level. A privilege is an MO that enables
or restricts access to a particular function within the system. For example, fabric-equipment is a privilege bit.
This bit is set by the APIC on all objects that correspond to equipment in the physical fabric.
A role is a collection of privilege bits. For example, because an "admin" role is configured with privilege bits
for "fabric-equipment" and "tenant-security," the "admin" role has access to all objects that correspond to
equipment of the fabric and tenant security.
A security domain is a tag that is associated with a certain subtree in the ACI MIT object hierarchy. For
example, the default tenant "common" has a domain tag "common." Similarly, a special domain tag "all"
includes the entire MIT object tree. An admin user can assign custom domain tags to the MIT object hierarchy.
For example, a "solar" domain tag is assigned to the tenant solar. Within the MIT, only certain objects can be
tagged as security domains. For example, a tenant can be tagged as a security domain, but objects within a
tenant cannot.
If a virtual machine management (VMM) domain is tagged as a security domain, the users contained in the
security domain can access the correspondingly tagged VMM domain. For example, if a tenant named "solar"
is tagged with the security domain called "sun" and a VMM domain is also tagged with the security domain
called "sun," then users in the solar tenant can access the VMM domain according to their access rights.
Prerequisites for AAA RBAC and Roles
• You should leave the "fallback" domain as local authentication in case an issue arises with the remote
authentication server. If that is done, you can log in using the local domain by specifying the domain
as "fallback" in the username. For example:
apic:fallback\\your_local_username
• The APIC Management Information Model Reference lists every privilege that has read and write access
to a given class. For example, looking at the class of a bridge domain (fvBD), you get the following
information:
Class fv:BD (CONCRETE)
Class ID:1887
Class Label: Bridge Domain
Encrypted: false - Exportable: true - Persistent: true - Configurable: true
Write Access: [admin, tenant-connectivity-l2]
Read Access: [admin, nw-svc-device, nw-svc-policy, tenant-connectivity-l2,
tenant-connectivity-mgmt, tenant-epg, tenant-ext-connectivity-l2,
tenant-network-profile, tenant-protocol-l2, tenant-protocol-l3]
Creatable/Deletable: yes (see Container Mos for details)
Semantic Scope: EPG
Semantic Scope Evaluation Rule: Explicit
Monitoring Policy Source: Explicit
Monitoring Flags : [ IsObservable: true, HasStats: true, HasFaults: true, HasHealth: true,
HasEventRules: false ]
The information indicates that for a user to be able to write changes to a bridge domain, the user must
have a role that contains either the "admin" bits or the "tenant-connectivity-l2" bits. These privileges can
be found when viewing pre-existing roles or creating new ones.
• Security domains allow a user to be exposed to only specific branches of the Management Information
Tree (MIT). Typically, this allows ACI administrators to expose only specific tenants to users, giving
the fabric the aspect of multi-tenancy in that users have access to view and make changes only to
their own tenants.
• A fabric-wide administrator uses RBAC rules to selectively expose physical resources to users that
otherwise are inaccessible because they are in a different security domain. While an RBAC rule
exposes an object to a user in a different part of the management information tree, it is not possible
to use the CLI to navigate to such an object by traversing the structure of the tree. However, as long
as the user knows the distinguished name of the object that is included in the RBAC rule, the user
can use the CLI to locate the object by using the MO find command.
• Modifying the "all" security domain to give a user access to resources outside of that user's security
domain is bad practice. Such a user will then have access to resources that are provisioned for other
users.
Recommended Configuration Procedure for AAA RBAC and Roles
Procedure
Step 1 On the menu bar, choose welcome, user_name > AAA > View My Permissions.
Step 2 In the User Permissions dialog box, you can view any security domains to which you have access, along
with the tenants that are associated specifically to those domains.
Configuration Examples for AAA RBAC and Roles Using the GUI
The following procedure provides an example of configuring AAA role-based access control (RBAC) and
roles using the Application Policy Infrastructure Controller (APIC) GUI.
Procedure
Step 1 Create a security domain. On the menu bar, choose Admin > AAA.
Step 2 In the Navigation pane, choose Security Management > Security Domains.
Step 3 In the Work pane, choose Action > Create Security Domain.
Step 4 In the Create Security Domain dialog box, fill out the fields as necessary.
Step 5 Associate the security domain with a tenant. On the menu bar, choose Tenants > All Tenants.
Step 6 In the Work pane, double-click the tenant's name.
Step 7 In the Security Domains section, put a check in the check boxes that correspond to the security domain that
you want to associate with the tenant.
Step 8 Create the RBAC rules. On the menu bar, choose Admin > AAA.
Step 9 In the Navigation pane, choose Security Management > RBAC Rules.
Step 10 In the Work pane, choose Action > Create RBAC Rule.
Step 11 In the Create RBAC Rule dialog box, fill out the fields as necessary. You must specify the distinguished
name (DN) of the object to be acted upon and the security domain to which the rule is added. You can also
specify write privileges for this RBAC rule.
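The same objects can be created through the REST API. The following hedged sketch creates the "sun"
security domain used in the earlier example; it assumes the aaaDomain class posts under uni/userext, which
you should verify against the APIC Management Information Model Reference. The APIC address and
credentials are placeholders.

# Hedged sketch: create a security domain (aaaDomain) via the REST API.
# The aaaDomain class and its location under uni/userext are assumptions.
import requests

APIC = "https://apic.example.com"   # hypothetical APIC address
s = requests.Session()
s.verify = False                    # lab only

s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

domain = '<aaaDomain name="sun"/>'
r = s.post(f"{APIC}/api/mo/uni/userext.xml", data=domain)
print(r.status_code)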
Endpoint Loop Protection
The recommendation is to enable endpoint loop protection using the following default parameters:
• Loop detection interval: 60
• Loop detection multiplication factor: 4
• Action: Port Disable
These parameters state that if an endpoint moves more than four times within a 60-second period, endpoint
loop protection takes the specified action of disabling the port.
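These defaults can also be set programmatically. The following hedged sketch posts an endpoint loop
protection policy; the class name (epLoopProtectP) and attribute names are assumptions drawn from the APIC
object model and should be verified against the Management Information Model Reference.

# Hedged sketch: enable endpoint loop protection with the default
# parameters above. Class and attribute names are assumptions.
import requests

APIC = "https://apic.example.com"   # hypothetical APIC address
s = requests.Session()
s.verify = False                    # lab only

s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

policy = ('<epLoopProtectP name="default" adminSt="enabled" '
          'loopDetectIntvl="60" loopDetectMult="4" action="port-disable"/>')
r = s.post(f"{APIC}/api/mo/uni/infra.xml", data=policy)
print(r.status_code)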
CHAPTER 13
Layer 4 to Layer 7 Operations
• Device Packages, on page 195
Device Packages
About Device Packages
A device package is used to insert and configure network service functions on a network service appliance
(device). A device package contains the following components:
• Device Specification (XML)—The configuration of the Application Policy Infrastructure Controller
(APIC) is represented as an object model consisting of a large number of managed objects (MOs). A
device type is defined by a tree of MOs with a meta device (MDev) at the root.
• Device Script (py)—The integration between the APIC and a device is performed by a device script,
which maps APIC events to the function calls that are defined in the device script.
When you upload a device package to the APIC, the APIC creates a hierarchy of MOs that represent the device
and validates the device script interface.
• Uploading a new device package with a minor version change overwrites the existing device package.
All graphs and device clusters pointing to the old device package start pointing to the new package
automatically. The upgrade is non-disruptive, and there should be no impact on existing service graphs
or device clusters.
• A minor version change is the default recommendation for partners for any new device package revisions.
• When using a device package, a device cluster can be managed by only one device package at any time.
• A node in a service graph can associate to only one device package at any time.
• The node in the service graph and the associated device cluster should point to the same device package.
That is, you cannot have a node in a service graph that points to the old device package while the device
cluster points to the new package.
• The Application Policy Infrastructure Controller (APIC) treats the version field as an opaque string. A
change from "1.0" to "2.0" and a change from "1.0" to "1.1" look the same to the APIC, and both are
considered a major version change.
• The APIC images are backward compatible with old device packages.
• If a device package is already uploaded and the APIC is upgraded, the old device package will continue
to work without any disruption.
• Newer device packages might not work on older APIC versions. In such cases, the device package upload
step fails with an appropriate error.
• Make sure that the device package is supported on the vendor device (hardware and software) and that
the device package is compatible with the Cisco Application Centric Infrastructure (ACI) platform. For
more information, see the L4-L7 Compatibility List Solution Overview document at the following location:
http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/solution-overview-c22-734587.html
• Understand the differences between major/minor version changes on the device package and the impact
when upgrading a device package.
• Understand the features to be configured through the APIC by way of the device package to the services
appliance. For example, understand the features on the firewalls or load balancers that the administrator
wishes to configure versus the features that are supported by the device package.
Recommended Procedure for Importing a Device Package Using the GUI
Procedure
Step 6 Click Open. The Application Policy Infrastructure Controller (APIC) can take several seconds to open the
device package.
Step 7 In the Import Device Package dialog box, click Submit.
The device package gets imported into the APIC. You can see the device package in the Work pane.
Verifying the Device Package Versions
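Because each device package is rooted at a meta device (MDev) object, one way to verify what is imported
is to query that class over the REST API. The following is a hedged sketch, assuming the class is named
vnsMDev and carries vendor, model, and version attributes; verify these names against the Management
Information Model Reference. The APIC address and credentials are placeholders.

# Hedged sketch: list imported device packages by querying the MDev class.
# The class name (vnsMDev) and attribute names are assumptions.
import requests

APIC = "https://apic.example.com"   # placeholder
s = requests.Session()
s.verify = False                    # lab only

s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

r = s.get(f"{APIC}/api/class/vnsMDev.json")
for obj in r.json()["imdata"]:
    attrs = obj["vnsMDev"]["attributes"]
    print(attrs.get("vendor"), attrs.get("model"), attrs.get("version"))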
CHAPTER 14
Miscellaneous Operations
• API Inspector, on page 199
• Audit Logs, on page 200
• GUI Application Settings, on page 202
• Health Scores, on page 203
• Using the Cisco NX-OS Style CLI, on page 207
• Upgrading the Fabric, on page 208
• Snapshot and Configuration Rollback, on page 210
• Tags and Aliases, on page 211
• QuickStart in the Cisco APIC GUI, on page 213
API Inspector
About API Inspector
The API Inspector is a built-in tool in the Cisco Application Policy Infrastructure Controller (APIC) GUI that
allows you to capture internal REST API messaging as you perform tasks in the Cisco APIC GUI. The captured
messages show the managed objects (MOs) being accessed and the JSON data exchanges of the REST API
calls. You can use this data when designing Python API calls to perform similar functions.
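For example, after capturing a class query with the API Inspector, you can replay it from Python. A minimal
sketch with a placeholder APIC address and credentials:

# Sketch: replay a REST call captured with the API Inspector.
# The APIC address and credentials are placeholders.
import requests

APIC = "https://apic.example.com"
s = requests.Session()
s.verify = False                    # lab only

# Authenticate first; the API Inspector shows the same aaaLogin exchange.
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Replay a captured GET, for example a tenant class query.
r = s.get(f"{APIC}/api/class/fvTenant.json")
print(r.json()["totalCount"])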
Audit Logs
About Audit Logs
Within the Cisco Application Centric Infrastructure (ACI) fabric, the majority of what is viewable using the
GUI is made possible through the underlying management information tree (MIT). Networking constructs
and management constructs have been abstracted and represented as objects. The same applies to audit logs.
The audit logs within the ACI fabric are objects that are records of user-initiated events such as login, logout,
object creation, and attribute changes under existing objects. These can be useful for tracking erroneous
changes within the environment or simply for keeping an audit of changes that have occurred within the ACI
fabric.
There is no configuration associated with audit logs.
Verifying the Audit Logs Using the GUI
Procedure
Step 5 Double-click each item for more information, including the old and new states of the change.
Procedure
The command output can be redirected to a file with the following syntax:
pod3-apic1# moquery -c aaaModLR > /tmp/audit_logs.txt
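The same class can be queried over the REST API. A sketch using the standard order-by and page-size query
parameters, with a placeholder APIC address and credentials:

# Sketch: retrieve the ten most recent audit log records (class aaaModLR).
import requests

APIC = "https://apic.example.com"   # placeholder
s = requests.Session()
s.verify = False                    # lab only

s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

r = s.get(f"{APIC}/api/class/aaaModLR.json"
          "?order-by=aaaModLR.created|desc&page-size=10")
for rec in r.json()["imdata"]:
    attrs = rec["aaaModLR"]["attributes"]
    print(attrs.get("created"), attrs.get("user"), attrs.get("descr"))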
GUI Application Settings
Configuring GUI Application Settings
Procedure
Step 1 On the menu bar, on the far right, click welcome user_name > Settings.
Step 2 In the Application Settings dialog box, put a check in the check boxes for the desired settings.
Step 3 Click OK.
This completes the GUI application settings.
Health Scores
About Health Score
The Application Policy Infrastructure Controller (APIC) uses a policy model to combine the current status of
all managed objects, such as links and devices, into a health score. Health scores give operators visibility
into, and a quick overview of, the entire Cisco Application Centric Infrastructure (ACI) system.
Cisco ACI fabric health information is available for the following areas of the system:
• System—Aggregation of system-wide health including pod health scores, tenant health scores, system
fault counts by domain and type, and the Cisco APIC cluster health state.
• Pod—Aggregation of health scores for a pod (a group of spine and leaf switches) and pod-wide fault
counts by domain and type.
• Tenant—Aggregation of health scores for a tenant, including performance data for objects such as
applications and EPGs that are specific to a tenant and tenant-wide fault counts by domain and type.
• Managed Object—Health score policies for managed objects (MOs) which include their dependent and
related MOs. These policies can be customized by an administrator.
• Health scores are based on the faults generated in the fabric. Each fault reduces the health score according
to the severity of the fault; the higher the fault severity, the greater the penalty applied to the health
score.
• A health score is calculated on a scale of 0 to 100, where 100 is a perfect score.
• Health scores are aggregated and available in different system-level views; see the query sketch after this list.
• The health score of an application component can be distributed across multiple leaf switches. For
example, a hardware fault impacts the health score of an application component.
• Starting with Cisco APIC release 1.2(2g), the health score evaluation can ignore acknowledged faults,
so faults that can be safely ignored do not degrade the health score.
• You can modify the health score evaluation policy to change the penalty that each fault severity level
applies to the health score. The health score evaluation policy can be configured by navigating in the GUI
to Fabric > Fabric Policies > Monitoring Policies > Common Policy > Health Score Evaluation
Policies > Health Score Evaluation Policy_name. In the Work pane, under Properties, choose the
desired settings.
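Health scores can also be read through the REST API by adding the rsp-subtree-include=health query
parameter to an MO query, which returns healthInst objects whose cur attribute carries the current score. A
sketch with a hypothetical tenant name and placeholder credentials:

# Sketch: read the health score of a tenant. The tenant name is
# hypothetical; the exact response structure may vary by release.
import requests

APIC = "https://apic.example.com"   # placeholder
s = requests.Session()
s.verify = False                    # lab only

s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

r = s.get(f"{APIC}/api/mo/uni/tn-T1.json?rsp-subtree-include=health")
print(r.json()["imdata"])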
Upgrading the Fabric
• Make sure that the Cisco APIC cluster is in fully fit status and that all devices are in the active status
before upgrading.
• Make sure you have console access to all fabric nodes, in case you must troubleshoot an issue.
• Make sure there are no outstanding faults before upgrading the fabric.
• Unless the release notes for the release specify otherwise, you can upgrade (or downgrade) the controllers
before the switches, or upgrade (or downgrade) the switches before the controllers.
• Understand the supported downgrade path in the case that you are required to roll back the version.
• Make sure the controllers and switches are using the same software release.
• You can use a single firmware group for the upgrade process.
• When you create the maintenance group, verify the following items:
• The vPC peers or active and standby pairs of leaf switches are in two different groups, so that while
one of the switches is upgrading, the other switch can still pass traffic.
• Spine switches that are configured as MP-BGP route reflectors are in two different groups; otherwise,
you will lose external connectivity during the upgrade.
• Divide switches into two or more groups and upgrade one group at a time.
• A specific release, or a combination of releases, might have some limitations and recommendations for
the upgrade or downgrade procedure. Look for any limitations and recommendations in the release notes
for the specific release before upgrading or downgrading your Cisco Application Centric Infrastructure
(ACI) fabric. If the release notes do not specify such limitations or recommendations, follow the guidelines
to upgrade or downgrade your Cisco ACI fabric.
• Monitor the system faults to look for troubleshooting issues, and resolve any issues immediately.
• Verify that the Cisco APIC cluster is fully fit after the upgrade, before upgrading the spine switches and
leaf switches.
• In the Run Mode field, choose the Pause only Upon Upgrade Failure radio button if it is not already
chosen. This is the default mode.
• The default concurrent cap in a group is 20. This cap limits how many switches can go down
simultaneously. You can increase the cap through a policy configuration.
• Verify that each maintenance group for the spine and leaf switches return to the active state after the
upgrade.
Snapshot and Configuration Rollback
Verifying a Snapshot and Rollback Configuration
Procedure
File : ce2_defaultOneTime-2016-05-10T19-02-06.tar.gz
Created : 2016-05-10T19:02:13.018-05:00
Root :
Size : 180118
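A one-time snapshot such as the one listed above can also be triggered through the REST API. The following
is a hedged sketch that posts a configExportP policy; the snapshot and adminSt attribute values are assumptions
to verify against your APIC release, and the APIC address and credentials are placeholders.

# Hedged sketch: trigger a one-time configuration snapshot.
# Attribute values are assumptions; verify against your APIC release.
import requests

APIC = "https://apic.example.com"   # placeholder
s = requests.Session()
s.verify = False                    # lab only

s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

export = ('<configExportP name="defaultOneTime" format="json" '
          'snapshot="true" adminSt="triggered"/>')
r = s.post(f"{APIC}/api/mo/uni/fabric.xml", data=export)
print(r.status_code)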
Tags and Aliases
Guidelines and Limitations for Tags and Aliases
Note Tags and aliases are metadata, and have no functional impact on the networking aspect of Cisco ACI.
Procedure
GET http://x.x.x.x/api/tag/BP-Tag.xml
• An example of a response, which returns every object that has the tag associated, is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<imdata totalCount="2">
<fvTenant childAction="" descr="" dn="uni/tn-ACI-BP" lcOwn="local"
modTs="2016-05-10T09:06:37.165-07:00" monPolDn="uni/tn-common/monepg-default"
name="ACI-BP" ownerKey="" ownerTag="" status="" uid="15374"/>
</imdata>
• An example of a response, which returns the object to which the alias was assigned, is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<imdata totalCount="1">
<fvTenant childAction="" descr="" dn="uni/tn-ACI-BP" lcOwn="local"
modTs="2016-05-10T09:06:37.165-07:00" monPolDn="uni/tn-common/monepg-default"
name="ACI-BP" ownerKey="" ownerTag="" status="" uid="15374"/>
</imdata>
QuickStart in the Cisco APIC GUI
Procedure
Step 4 Click the dialog box link next to the Policy Group Name field to view the policies and Attached Entity Profile
associated with that policy group.
Step 5 Click the dialog box link next to the Policy field to view the details of the associated object.
Step 6 Click the dialog box link next to the Attached Entity Profile field to view the details of the associated domains.