By James Kehr, Networking Support Escalation Engineer
The next Container network type on the list is transparent. Production Container workloads, outside of swarms and special Azure circumstances, should be using a transparent network. The main exception is when you need L2 bridging in an SDN environment, but most production scenarios will use the transparent network type.
Transparent Networks
The transparent network essentially removes WinNAT from the NBL path. This gives the Container a more direct path through Windows. Managing a Container attached to a transparent network is as easy as managing any other system. It’s almost like the virtual network is transparent to the system admins…hmmm.
The only configuration a Container on a transparent network needs is IP address information. Since the Container is directly attached to the network infrastructure, this can be done through static assignment or regular old DHCP. The Container connects to a normal vmSwitch on the default Compartment 1 (see part 2). On top of easy management, the NBL only travels through three or four hops within Windows, depending on whether LBFO NIC teaming is enabled. This brings latency down to numbers comparable to a normal Hyper-V guest.
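Here is a minimal sketch of what that setup can look like from the docker CLI and PowerShell on the container host. The network names, subnet, gateway, IP address, VM name, and image tag are all placeholder values, not something from the original walkthrough.

```powershell
# Transparent network that leans on DHCP from the physical network (example name)
docker network create -d transparent TransparentNet

# Transparent network set up for static addressing instead (placeholder subnet/gateway)
docker network create -d transparent --subnet=10.0.0.0/24 --gateway=10.0.0.1 TransparentNetStatic

# Attach a Container and assign it a static IP from that prefix
docker run -it --network=TransparentNetStatic --ip 10.0.0.50 mcr.microsoft.com/windows/servercore:ltsc2019 cmd

# If the container host is itself a Hyper-V VM, MAC address spoofing must be
# enabled on the host's vmNIC for transparent networking to work (VM name is a placeholder)
Get-VMNetworkAdapter -VMName ContainerHostVM | Set-VMNetworkAdapter -MacAddressSpoofing On
```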
There is at least one circumstance where your Containers will end up in a separate network compartment. This primarily happens when the Container is attached to a Hyper-V-created vmSwitch. Hyper-V traffic will use Compartment ID 1, and Container traffic will use something else, like Compartment ID 2. Each Container network attached to the Hyper-V vmSwitch will use a unique Compartment ID. This is done to maintain security and isolation between all the various virtual networks. See part 2 for more details about Compartments.
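If you want to see which compartments exist on a host, the built-in NetTCPIP cmdlets can list them. A quick sketch; the IDs and interfaces shown will naturally vary per host:

```powershell
# List every network compartment on the container host
Get-NetCompartment

# Show IP addresses in all compartments, not just the default Compartment 1
Get-NetIPAddress -IncludeAllCompartments
```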
The traffic looks identical to Hyper-V traffic, except that you see the Container traffic on the host system, while the endpoint is not visible in Hyper-V. The Container vmNIC sends an NBL to the vmSwitch, which routes the traffic to the host NIC, where the NBL is converted to a packet, and off to the wire it goes.
| Source | Destination | Module | Summary |
| --- | --- | --- | --- |
| cntr.contoso.com | bing.com | ICMP | Echo Request |
| | | Microsoft_Windows_Hyper_V_VmSwitch | NBL received from Nic (Friendly Name: Container NIC 066a9b55) in switch (Friendly Name: vmSwitch) |
| cntr.contoso.com | bing.com | ICMP | Echo Request |
| | | Microsoft_Windows_Hyper_V_VmSwitch | NBL routed from Nic (Friendly Name: Container NIC 066a9b55) to Nic /DEVICE/ (Friendly Name: Host NIC) on switch (Friendly Name: vmSwitch) |
| cntr.contoso.com | bing.com | ICMP | Echo Request |
| | | Microsoft_Windows_Hyper_V_VmSwitch | NBL delivered to Nic /DEVICE/ (Friendly Name: Host NIC) in switch (Friendly Name: vmSwitch) |
| cntr.contoso.com | bing.com | ICMP | Echo Request |
The total time for the ping (Echo Request) to go from the Container to the wire in this example was 0.0144 milliseconds, or 14.4 microseconds. That is about 47 times faster than the NAT network with PortProxy example, which is quite a significant improvement.
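For reference, here is one rough sketch of how a similar vmSwitch trace could be collected on the container host with netsh; the trace file path is just an example.

```powershell
# Capture Hyper-V vmSwitch ETW events along with packets (file path is an example)
netsh trace start capture=yes provider=Microsoft-Windows-Hyper-V-VmSwitch tracefile=C:\traces\cntr.etl

# ...reproduce the traffic, for example ping bing.com from inside the Container...

# Stop the capture and analyze the resulting ETL file
netsh trace stop
```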
Transparent Container networks offer easier management, lower resource consumption, and significantly improved throughput when compared to NAT networks, especially NAT with PortProxy. That is why production Container workloads should, in basic production circumstances, use the Transparent network.
L2Bridge networks
For the purposes of this article, the L2Bridge network is identical to a Transparent network. The NBL traverses Windows in the exact same way. The differences between Transparent and L2Bridge are spelled out in the Windows Container Networking doc.
l2bridge – containers attached to a network created with the ‘l2bridge’ driver will be in the same IP subnet as the container host. The IP addresses must be assigned statically from the same prefix as the container host. All container endpoints on the host will have the same MAC address due to Layer-2 address translation (MAC re-write) operation on ingress and egress.
There doesn’t appear to be any obvious way to see the MAC address rewriting in an ETW trace, so there’s nothing new to look at. The L2Bridge is used mainly with Windows SDN. And that’s that.
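For completeness, creating an l2bridge network looks much like the transparent example, except the addressing must be static and come from the container host’s own prefix. The subnet, gateway, IP, network name, and image tag below are placeholders.

```powershell
# l2bridge network; subnet and gateway must match the container host's prefix (placeholder values)
docker network create -d l2bridge --subnet=192.168.1.0/24 --gateway=192.168.1.1 L2BridgeNet

# Attach a Container with a static IP from that same prefix
docker run -it --network=L2BridgeNet --ip 192.168.1.50 mcr.microsoft.com/windows/servercore:ltsc2019 cmd
```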
Hyper-V Isolation
One of the touted features of Docker on Windows is Hyper-V isolation. This feature runs the container inside a special, highly optimized VM. This adds an extra layer of security, which is good. But there’s no way to troubleshoot what happens to the network traffic once it leaves the host and goes to the optimized VM. Hyper-V isolation is a networking black hole.
The obvious question is… why? The answer boils down to how the optimization is done. Neither the optimized VM nor the Container image has a way to capture packets. The Container lacks packet capture because that subsystem is shared with the host kernel, as discussed in part 1. The VM lacks it because it is optimized to the point that it only includes the minimum set of components and features needed to run a Container, and that list does not include packet capture. And that’s assuming you could even access the VM, which I haven’t found a way to do.
How exactly does someone go about troubleshooting networking on an isolated Container, then? The Container lives separately from the optimized VM. Isolation is applied when the “--isolation=hyperv” parameter is passed to the docker run command that starts the Container. Remove the isolation parameter so the Container starts normally, and then all the normal troubleshooting rules apply. While it’s not a complete apples-to-apples comparison, it does provide some ability to troubleshoot potential networking issues within the Container.
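A rough sketch of that comparison, with a placeholder image tag:

```powershell
# Hyper-V isolated Container: traffic inside the utility VM is effectively a black hole
docker run -it --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2019 cmd

# Same image with process isolation instead: the normal host-side troubleshooting rules apply
docker run -it --isolation=process mcr.microsoft.com/windows/servercore:ltsc2019 cmd
```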
That concludes this series on Windows Virtualization and Container networking. I hope you learned a thing or three. Please keep in mind that technologies such as Containers and Windows networking change a lot, especially while the technology is young. Don’t be afraid to dig in and learn something new, even if this article just acts as a primer to get you started.
-James