It’s actually quite amazing how many IT professionals out there are ignorant of some basic networking concepts. Perhaps it’s forgivable if your IT exposure is high up in the application stack… but for those who live and breathe system administration, and even more so for network administrators, there’s no excuse at all.
So, what’s this about? This is about the fundamental configuration of physical ethernet network connections. Something a vast majority take for granted. This applies very much to 10/100Mbps networks, which are still very prevalent in current environments, but are slowly going away.
In ethernet, when 2 devices are connected together with a regular UTP cable, they’ll need to communicate at the same speed and mode of transmission. For optimal operation, both ends have got to be running the same settings. For speed, there’s a choice of 10Mbps or 100Mbps, and the transmission mode is full or half duplex.
If there is a speed mismatch, there won’t be any communication at all, and it’s easily corrected. So once you see there’s a link, the speed is definitely a match. However, what’s not obvious at all is that the transmission mode (the duplex setting) may not match. And when the two ends don’t match, the result is horrifyingly slow throughput.
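To make the symptoms concrete, here’s a toy sketch in Python. The function name and return strings are entirely my own invention, purely illustrative — this is a model of what you’d observe for each combination of forced settings, not any real networking API:

```python
# Toy model of the symptoms described above: a hard speed mismatch
# gives no link at all, while a duplex mismatch still links up but
# performs badly. Illustrative only -- names are made up.

def link_symptom(speed_a, speed_b, duplex_a, duplex_b):
    """Return the observable symptom for a pair of forced port settings."""
    if speed_a != speed_b:
        return "no link"                           # easy to spot, easy to fix
    if duplex_a != duplex_b:
        return "link up, terrible throughput"      # the sneaky failure mode
    return "link up, healthy"

print(link_symptom(100, 10, "full", "full"))    # no link
print(link_symptom(100, 100, "full", "half"))   # link up, terrible throughput
print(link_symptom(100, 100, "full", "full"))   # link up, healthy
```

The middle case is the dangerous one: everything looks fine on the surface because the link LED is lit.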
So what do full duplex and half duplex mean? They simply describe when and how each device is allowed to transmit. In half duplex mode, only 1 of the devices is allowed to transmit at any one time; the other just listens. In full duplex mode, both devices are allowed to transmit at the same time. In the very early days of ethernet over UTP, the transmit and receive signals shared the same wires in the cable, so when more than 1 device transmitted at once, the signal on the cable got “noisy” and a collision occurred.
Imagine that at one end, device A is told to communicate at half duplex, while at the other end, device B thinks full duplex is in operation. So, when A is sending a signal to B, and B needs to send something back to A, B thinks it’s OK and sends out the message. A is not expecting anything from B at all, and so is unable to handle the traffic from B. As a result, a collision occurs. A will keep resending its data thinking that it never reached B, and in the end, a whole lot of miscommunication occurs. This causes what appears to be very, very slow throughput.
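If you want a feel for how badly this hurts, here’s a little back-of-the-envelope simulation — a toy model I made up, not a faithful ethernet simulation. B, believing the link is full duplex, transmits whenever it likes; A, stuck on half duplex, treats any overlap as a collision and has to resend:

```python
import random

def frames_delivered(a_half_duplex, slots=1000, b_talk_prob=0.5, seed=1):
    """Count frames A delivers to B over `slots` time slots (toy model).

    B (thinking the link is full duplex) transmits in any slot with
    probability b_talk_prob. If A is half duplex, a slot in which both
    sides transmit is a collision and A's frame must be resent later;
    if A is full duplex, simultaneous transmission is fine.
    """
    rng = random.Random(seed)
    delivered = 0
    for _ in range(slots):
        b_transmits = rng.random() < b_talk_prob
        if a_half_duplex and b_transmits:
            continue  # collision: A backs off and retries in a later slot
        delivered += 1
    return delivered

matched = frames_delivered(a_half_duplex=False)   # both ends full duplex
mismatch = frames_delivered(a_half_duplex=True)   # the duplex mismatch case
print(matched, mismatch)  # the mismatch delivers far fewer frames
```

Even this crude model shows throughput roughly halving, and a real mismatch is worse: real ethernet adds backoff delays and, above a retry limit, outright frame loss that forces higher layers to recover.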
In this day and age, I would say all cat 5 UTP cables and devices are full duplex capable. Also, there is a 5th setting called “autodetect”. It’s this “autodetect” setting which introduces a lot of convenience as well as headache. Personally, I love “autodetect” and advocate the use of it. But those who don’t understand it avoid it at all costs. In fact, some IT shops will, by default, move away from “autodetect” at all costs.
So, why are some network people so afraid of “autodetect”? More often than not, they’ve had a bad experience with poor performance due to a duplex mismatch. There is some history to this… when autodetect first showed up, there were interoperability issues between different vendors. Naturally, for that reason, it was fair to set a standard of avoiding “autodetect” as the default setting. But that’s something one would have observed maybe 15-20 years ago.
Soon after, the IEEE stepped in to resolve this by standardizing how “autodetect” should work. It’s a good thing, but the standardization is quite peculiar. Here’s why….
Autodetect is great with speed negotiation: if there’s no cabling issue, both devices will negotiate at the highest possible speed. Usually there are no issues here. Even if one end is set to a manual speed, the other end on auto will still detect it correctly.
Then, here comes the fun part… if one end is set manually on the duplex, the end which is on auto will default to half duplex. Yes, half, not full. The reason, as it turns out, is that an auto end facing a non-negotiating partner can sense the link speed but has no way to sense the duplex, so the standard plays it safe and falls back to the legacy half duplex mode. Either way, the upshot is that if someone sets one end to full duplex, the other end on auto will always talk in half duplex. As a result, we have a duplex mismatch, and performance will be bad.
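That resolution rule is simple enough to sketch as a tiny function. The encoding of the settings here is my own, purely illustrative:

```python
def resolved_duplex(side_a, side_b):
    """Resolve the duplex each side actually runs, per the behaviour
    described above. A side is either ("auto",) or ("manual", "full"/"half").
    An auto side that sees no negotiation partner falls back to HALF duplex,
    since parallel detection can sense speed but not duplex."""
    def pick(me, peer):
        if me[0] == "manual":
            return me[1]      # a forced setting always wins locally
        if peer[0] == "auto":
            return "full"     # both auto: they negotiate the best mode
        return "half"         # peer is silent: the standard says half

    return pick(side_a, side_b), pick(side_b, side_a)

# One end forced to full, the other on auto -> the classic mismatch:
print(resolved_duplex(("manual", "full"), ("auto",)))  # ('full', 'half')
# Both ends on auto -> no mismatch possible:
print(resolved_duplex(("auto",), ("auto",)))           # ('full', 'full')
```

Notice the asymmetry: auto-plus-auto always matches, and even manual-half-plus-auto happens to match, but manual-full-plus-auto is guaranteed to mismatch. That one bad combination is what gave autonegotiation its reputation.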
So, if no one takes care to understand why performance tanks when the network settings don’t match at both ends, you’ll get an environment without a standard. You’ll see some ports set to autonegotiate and some set to manual… and some with a mismatch and some without.
My take is that, if you’ve got fairly recent equipment, just leave everything on the default of “autonegotiate”. If you’ve got flaky performance, it’s more likely due to bad cabling, so have that fixed. With manual or static settings, cable issues are less likely to surface, and it may not be obvious that there’s a cabling problem at all.