Another type of data-link layer connection device, the switch, has largely replaced the bridge in the modern network and is replacing routers in many instances as well. A switch is a box with multiple cable jacks that looks a lot like a hub. In fact, some manufacturers make hubs and switches of various sizes that are all but identical in appearance, except for their markings. The difference between the two is that while a hub forwards every incoming packet out through all of its ports, a switch forwards each incoming packet only to the port that provides access to the destination system, as shown in Figure 3.4.
Because they forward data to a single port only, switches essentially convert the LAN from a shared network medium to a dedicated one. If you have a small network that uses a switch instead of a hub (such a switch is sometimes called a switching hub), each packet takes a dedicated path from the source computer to the destination, forming a separate collision domain for those two computers. Switches still forward broadcast messages (and, unless they support multicast filtering, multicasts) to all of their ports, but unicast traffic goes only to the port serving the destination. No system receives packets destined for other systems, and no collisions occur during unicast transmissions, because every pair of computers on the network has what amounts to a dedicated cable segment connecting them. Thus, while a bridge reduces unnecessary traffic congestion on the network, a switch all but eliminates it.
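To make the forwarding behavior concrete, the following sketch models the decision a learning switch makes for each incoming frame. It is purely illustrative; the class name, port numbering, and frame fields are assumptions made for the example, not any vendor's actual implementation.

```python
# A minimal sketch of the forwarding decision described above (illustrative only).

BROADCAST = "ff:ff:ff:ff:ff:ff"

class LearningSwitch:
    def __init__(self, port_count):
        self.ports = range(port_count)
        self.mac_table = {}          # learned MAC address -> port mappings

    def handle_frame(self, in_port, src_mac, dst_mac):
        # Learn the sender's location so later traffic to it uses one port only.
        self.mac_table[src_mac] = in_port

        if dst_mac == BROADCAST or dst_mac not in self.mac_table:
            # Broadcasts (and destinations not yet learned) are flooded,
            # hub-style, to every port except the one the frame arrived on.
            return [p for p in self.ports if p != in_port]

        # Known unicast: forward only on the port serving the destination.
        return [self.mac_table[dst_mac]]
```

Note that a destination the switch has not yet learned is flooded like a broadcast; once that system transmits, its address is learned and subsequent frames to it go out a single port.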
Another advantage of switching is that each pair of computers has the full bandwidth of the network dedicated to it. A standard Ethernet LAN using a hub might have 20 or more computers sharing the same 10 Mbps of bandwidth. Replace the hub with a switch, and every pair of computers has its own dedicated 10 Mbps channel. This can greatly improve the overall performance of the network without any modifications to the workstations at all. In addition, some switches provide ports that operate in full-duplex mode, which means that two computers can send traffic in both directions at the same time using separate wire pairs within the cable. Full-duplex operation can effectively double the throughput of a 10 Mbps network to 20 Mbps.
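A quick back-of-the-envelope comparison, using the numbers from the paragraph above, shows why the upgrade pays off. The figures are idealized averages for the example, not measured results.

```python
# Rough per-pair bandwidth comparison for the scenario above (illustrative only).
link_speed_mbps = 10        # standard Ethernet, as in the example
stations = 20               # computers sharing the hub

shared = link_speed_mbps / stations    # hub: all stations contend for 10 Mbps
switched_half = link_speed_mbps        # switch: each pair gets the full 10 Mbps
switched_full = link_speed_mbps * 2    # full-duplex: both directions at once

print(f"Hub (shared):         ~{shared:.1f} Mbps average per station")
print(f"Switch (half-duplex):  {switched_half} Mbps per conversation")
print(f"Switch (full-duplex):  {switched_full} Mbps aggregate per link")
```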
Switches generally aren't needed on small networks that only use a single hub. They are more often found on larger networks, where they're used instead of bridges or routers. If you take a standard enterprise network consisting of a backbone and a series of segments and replace the routers with switches, the effect is profound. On the routed network, the backbone must carry the internetwork traffic generated by all the segments. This can lead to high traffic conditions on the backbone, even if it uses a faster protocol than the segments. On a switched network, you connect the computers to individual workgroup switches, which are in turn connected to a high-performance backbone switch, as shown in Figure 3.5. The result is that any computer on the network can open a dedicated channel to any other computer, even when the data path runs through several switches.
There are many different ways to use switches on a complex internetwork; you don't have to replace all of the hubs and routers with switches at one time. For example, you can continue to use your standard shared network hubs and connect them all to a multiport switch instead of a router. This increases the efficiency of your internetwork traffic. On the other hand, if your network tends to generate more traffic within the individual LANs than between them, you can replace the workgroup hubs with switches to increase the available intranetwork bandwidth for each computer while leaving the backbone network intact.
The problem with replacing all of the routers on a large internetwork with switches is that you create one huge broadcast domain instead of several small ones. Collision domains are no longer the issue, because switching eliminates most collisions. However, switches relay every broadcast generated by a computer anywhere on the network to every other computer, which increases the number of unnecessary packets processed by each system. Several technologies address this problem, such as virtual LANs (VLANs), which partition a switched network into multiple, smaller broadcast domains, and layer 3 switching.
There are two basic types of switches: cut-through and store-and-forward. A cut-through switch reads the destination address from each packet's data-link layer protocol header as soon as it arrives and immediately relays the packet out through the appropriate port, with no additional processing; the switch doesn't even wait for the entire packet to arrive before it begins forwarding. In most cases, cut-through switches use a hardware-based mechanism consisting of a grid of input/output circuits that enables data to enter and leave the switch through any port. This is called matrix switching or crossbar switching. This type of switch is relatively inexpensive and minimizes the delay incurred while the switch processes packets (a delay known as latency).
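The following sketch illustrates the cut-through principle: only the first six bytes of the frame, the destination address, are needed before forwarding can begin. The stream object and the lookup_port and relay callables are hypothetical stand-ins for the switch's input buffer and crossbar fabric.

```python
# Cut-through forwarding sketch: begin relaying as soon as the destination
# address (the first six bytes of an Ethernet frame) has been read.

DST_MAC_LEN = 6   # the destination address leads the Ethernet header

def cut_through_forward(in_stream, lookup_port, relay):
    header = in_stream.read(DST_MAC_LEN)      # read only the destination MAC
    out_port = lookup_port(header)            # pick the output port immediately
    relay(out_port, header)                   # start sending the bytes already read
    while chunk := in_stream.read(256):       # ...then stream the rest as it arrives,
        relay(out_port, chunk)                # without waiting for the whole frame
```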
A store-and-forward switch waits until an entire packet arrives before forwarding it to its destination. This type of unit can be a shared-memory switch, which has a common memory buffer that stores the incoming data from all of the ports, or a bus architecture switch with individual buffers for each port, connected by a bus. While the packet is stored in the switch's memory buffers, the switch takes the opportunity to verify the data by performing a cyclic redundancy check (CRC). The switch also checks for other problems peculiar to the data-link layer protocol involved that result in malformed frames, commonly (and colorfully) known as runts, giants, and jabber. This checking naturally introduces additional latency into the packet forwarding process, and the additional functions make store-and-forward switches more expensive than cut-through switches.
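Continuing the earlier sketch, the fragment below buffers an entire frame, checks its length against the standard Ethernet limits (a runt is shorter than 64 bytes, a giant longer than 1518 bytes), and verifies a CRC-32 trailer before forwarding, in the spirit of the check described above. The zlib checksum and the trailer byte order are conventions chosen for the example; a real switch performs the equivalent FCS check in hardware.

```python
import zlib

MIN_FRAME = 64      # anything shorter is a "runt"
MAX_FRAME = 1518    # anything longer is a "giant" (or evidence of a jabbering NIC)

def store_and_forward(frame: bytes, lookup_port, relay):
    # The whole frame sits in the buffer before any checking or forwarding starts.
    if len(frame) < MIN_FRAME or len(frame) > MAX_FRAME:
        return None                            # drop malformed frames

    payload, trailer = frame[:-4], frame[-4:]  # last four bytes carry the checksum
    # Trailer byte order is an assumption of this sketch, not a wire-format claim.
    if zlib.crc32(payload).to_bytes(4, "little") != trailer:
        return None                            # checksum mismatch: corrupted in transit

    out_port = lookup_port(frame[:6])          # destination MAC, as before
    relay(out_port, frame)                     # forward only frames that passed
    return out_port
```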