Network Virtualization and Resource Control (Crossbow) FAQ

Table of Contents

Overview
IP Instances
Virtual NICs (VNICs)
Flow Management
Miscellaneous
Overview

Crossbow is a set of technologies that provides network virtualization and greatly improves the resource control, performance, and network utilization needed to achieve true OS virtualization, utility computing, and server consolidation. Crossbow consists of multiple components:

Crossbow is designed to add network virtualization to Solaris without introducing any performance penalty. Some of the underlying work delivers better network performance: receive rings, hardware classification, and multiple MAC address support contribute to better performance and enhance the virtualization provided. Flow management could introduce some overhead. If the NIC or VNIC is not doing any fanout or bandwidth control, Crossbow can map a flow directly to a receive (RX) ring and use the hardware to classify it; in that case, there is no performance impact. In cases where the NIC or VNIC is already doing bandwidth control or traffic fanout across multiple CPUs, any flow configured on top will have to go through an additional classification layer, and there will be a small performance hit.

Crossbow is initially available as a BFU on top of OpenSolaris. The IP Instance portion of Crossbow has been integrated into Solaris Nevada build 57 and is in Solaris 10 8/07. VNICs and Flow Management are not yet integrated into Nevada, and may be delivered in a follow-on update. (This is as of March 2008.) Currently (March 2008), you can install the Crossbow Snapshot onto Solaris Nevada build 81. Integrated ISOs with the Crossbow bits built in are available, as are BFU bits to apply to an existing Nevada build 81 installation. A beta is in progress at this time and will run through April 2008.
IP Instances

IP Instances are separate views of the IP stack, so that visibility and control are limited to the entity (zone) that the instance is assigned to. By default, all of Solaris has one view of IP, and therefore central visibility and control. With zones, the ability to view and control is limited by privileges, and all zones' network traffic decisions are made with a global view by the kernel. When IP Instances are used, the view is limited to the information that applies to the instance, not to the full kernel. Routing decisions, for example, are made based only on the information in this instance, and do not use any additional information that other instances on the same kernel may have. Similarly, control is delegated to the instance, so that a non-global zone can set network parameters such as routes, ndd(1m) values, and IP address(es). Snooping of the interface(s) in the IP Instance is also possible. There is no visibility into any of the other IP Instances that may be sharing this Solaris instance and kernel.

Another consequence of IP Instances is that traffic between zones must travel the whole path down the stack to the underlying NIC. This is the result of the zone's IP not knowing where the destination address is, so the traffic must be put on the wire. If the zone is using a VNIC, whether the traffic stays within the system or exits on a physical network interface depends on whether the destination is also using a VNIC on the same physical NIC. If a NIC is shared by VNICs, traffic directly between the VNICs will be switched by the VNICs' virtual switch to the destination VNIC, and it will not leave the system.
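As a hedged illustration of this delegated control (myzone, nge0, the addresses, and the ndd value are placeholders), a non-global zone with an exclusive IP Instance can do things like:

myzone# route add default 192.168.84.1
myzone# ndd -set /dev/tcp tcp_recv_hiwat 65536
myzone# snoop -d nge0

These commands affect and observe only this zone's IP Instance; none of them is possible from a zone that shares the global zone's instance.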
IP Instances are in Solaris Nevada build 57 and later. IP Instances are in Solaris 10 8/07, released on 4 September 2007.

Only NICs supported by the Generic LAN Driver version 3 (GLDv3) framework are supported with IP Instances. To determine whether a NIC is GLDv3, run the dladm(1m) command with the 'show-link' subcommand and look for links that are not of type 'legacy'. There is one exception: the ce interfaces can also be used now. See Which NICs are known to work with IP Instances? for details, such as the Nevada build and Solaris 10 patches required.

This is how non-GLDv3 interfaces will look.
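For example (a hedged illustration; device names and details will vary by system), a non-GLDv3 driver such as ce shows up with type 'legacy':

# dladm show-link
ce0             type: legacy      mtu: 1500    device: ce0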
And this is how GLDv3 interfaces look:

# dladm show-link
bge0            type: non-vlan    mtu: 1500    device: bge0
bge1            type: non-vlan    mtu: 1500    device: bge1
bge1001         type: vlan 1      mtu: 1500    device: bge1
bge2001         type: vlan 2      mtu: 1500    device: bge1
bge2            type: non-vlan    mtu: 1500    device: bge2
bge3            type: non-vlan    mtu: 1500    device: bge3
aggr1           type: non-vlan    mtu: 1500    aggregation: key 1
* NOTE: The ce NIC is not a GLDv3 device, but has been made to work with IP Instances. The Solaris 10 patches required are:
* NOTE: The e1000g driver replaces ipge in Solaris 10 11/06 and later for these NICs:

However, a shim is planned as part of Nemo Unification within Project Clearview that will allow those interfaces to be used together with IP Instances. (The list is based on most of the NICs for which drivers are included in Solaris.)

There are two Change Requests to enable IP Instances with the ce driver. See What's Up ce-Doc? for some details. These fixes have been put into OpenSolaris, are available in Nevada build 80 and later, and are available for Solaris 10 with patches.

Yes. The maximum number of IP Instances is the same as the maximum number of non-global zones, which is currently 8191 (8K - 1). A non-global zone can have only one IP Instance. By default, a zone is in the global instance, sharing IP with the global zone and all other zones that do not have an exclusive IP Instance. When a zone is configured to have an exclusive IP Instance, its view of IP is isolated from the rest of the system.

No. Commands at the IP level, such as ifconfig(1m), operate only within their own IP Instance and cannot see or modify interfaces belonging to another instance. All interfaces assigned to a non-global zone can be identified by running 'ifconfig -a plumb', followed by 'ifconfig -a'.
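On the configuration side, a zone is given an exclusive IP Instance and a dedicated data link with zonecfg(1m). A hedged sketch for an already-created zone (the zone name and NIC are examples):

# zonecfg -z myzone
zonecfg:myzone> set ip-type=exclusive
zonecfg:myzone> add net
zonecfg:myzone:net> set physical=nge0
zonecfg:myzone:net> end
zonecfg:myzone> verify
zonecfg:myzone> commit
zonecfg:myzone> exit

With ip-type=exclusive, the net resource names only the physical link; the IP address is configured from inside the zone, as described below.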
If you have, for example, an nge interface, one method is to create the file /etc/hostname.nge0 in the non-global zone.
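A hedged example of the file's contents (the address and netmask are placeholders); the contents are handed to ifconfig(1m) when the zone boots:

myzone# cat /etc/hostname.nge0
192.168.84.3 netmask 255.255.255.0 up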
Generally, you will set up the /etc/hosts file, /etc/defaultrouter if using static routes, /etc/netmasks, /etc/resolv.conf, and the like, as with any stand-alone system. With a shared IP Instance, much of this is managed by the administrator(s) in the global zone.
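For instance (all names and addresses here are placeholders), those files might contain entries such as:

myzone# cat /etc/defaultrouter
192.168.84.1
myzone# cat /etc/netmasks
192.168.84.0    255.255.255.0
myzone# cat /etc/resolv.conf
domain example.com
nameserver 192.168.84.10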
After configuring and installing the zone, copy or create an /etc/sysidcfg file. For example,
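A minimal sketch of such a file (the hostname, addresses, timezone, and encrypted root password are all placeholders):

system_locale=C
terminal=xterm
network_interface=primary {
    hostname=myzone
    ip_address=192.168.84.3
    netmask=255.255.255.0
    protocol_ipv6=no
    default_route=192.168.84.1
}
name_service=NONE
security_policy=NONE
timezone=US/Pacific
root_password=<encrypted-password>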
A non-global zone can still be an NFS client (though not of the global zone on the same system), but it cannot be an NFS server. The inability of a non-global zone to be an NFS server is not related to networking, but rather to file system and virtual memory interaction. You cannot load private kernel modules in a non-global zone, even if the zone has its own IP Instance. Also, IPfilter rulesets are controlled from the global zone at this time. A linux branded zone does not work with IP Instances at this time.

Virtual NICs (VNICs)

A VNIC is a virtualized network interface that presents the same media access control (MAC) interface that a physical interface would provide. Multiple VNICs can be configured on top of the same interface, allowing multiple consumers to share that interface. If the interface has hardware classification capabilities, the hardware can automatically direct arriving datagrams to the receive buffers (rings) associated with a specific VNIC. It may be possible to selectively turn interrupts on and off per ring, allowing the host to control the rate at which packets arrive into the system. For hardware that does not have these capabilities, these features are provided in software.

VNICs are supported on Generic LAN Driver version 3 (GLDv3) interfaces. For a list, see Which NICs are known to work with IP Instances? You can also create a VNIC on top of an aggregation or VLAN that is built using GLDv3 NICs.

The maximum number of VNICs per NIC is limited by the total number of VNICs per system, which at this time is 899 user-defined VNICs (VNIC IDs 1-899); VNIC IDs 900 to 999 are reserved for use by Xen. However, for NICs with hardware classification capabilities, maximum performance is achieved when the number of VNICs does not exceed the number of hardware classifiers on the NIC. As is typically the case, each VNIC will require additional system resources such as CPU, so there will be a practical maximum per system based on the type of system, the type of NICs, and the traffic patterns. This limit may be increased with the delivery of Clearview.

The dladm(1m) command is used to create, modify, and delete VNICs. The MAC address for a VNIC can be set when the VNIC is created. The MAC address must be a valid MAC address as per IEEE; it cannot be a multicast or broadcast address. This is the case today, but in the future we will allow the MAC address to be chosen randomly, or from the hardware if the underlying NIC provides more than one factory MAC address.

Yes. You can do most of the things with a VNIC that you can do with a physical NIC. Things you cannot do with a VNIC include: create a link aggregation, set a frame size larger than that of the underlying link, or create a VLAN. The MTU of a VNIC can be set, but it can be no larger than the MTU allowed by the underlying NIC.
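A hedged sketch of the basic workflow, using the syntax of the integrated dladm(1m) bits (the options in a particular Crossbow snapshot may differ; bge0 and vnic1 are example names):

# dladm create-vnic -l bge0 vnic1
# dladm show-vnic
# dladm delete-vnic vnic1

create-vnic builds vnic1 on top of bge0 (a MAC address is chosen automatically unless one is supplied with -m), show-vnic lists the VNICs with their MAC addresses, and delete-vnic removes the VNIC again.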
Flow Management

Flow management is the ability to manage networking bandwidth resources for a transport, a service, or a virtual machine. A service is specified as a combination of transport (e.g., TCP or UDP) and port, while a virtual machine is specified by its MAC address or an IP address. Flows are managed with the flowadm(1m) command. Flows are defined as a set of attributes based on Layer 2, 3, and 4 headers which can be used to identify a protocol, service, or virtual machine instance, such as a zone or Xen domain.
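A hedged sketch of creating a bandwidth-limited flow for a service, using the integrated flowadm(1m) syntax (snapshot syntax may differ; the link name, port, bandwidth value, and flow name are examples):

# flowadm add-flow -l bge0 -a transport=tcp,local_port=80 -p maxbw=100M httpflow
# flowadm show-flow
# flowadm remove-flow httpflow

This caps the web server's TCP port 80 traffic on bge0 at roughly 100 Mbps; show-flow lists the configured flows, and remove-flow deletes the flow.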
Flows support the following parameters:

No, it is possible to create a flow without limits yet bind it to software or hardware resources.

Miscellaneous

It is difficult to determine a NIC's hardware capabilities. Please provide feedback on experiences with specific NICs, and the information will be aggregated here. Thanks. We are planning to provide an option to dladm that will display these hardware capabilities in a future version of Crossbow. At this time you cannot tell; there is work underway to add such a capability to the dladm(1m) command.