Ethernet is the most popular local area network (LAN) protocol operating at the data-link layer, and has been for decades. In most cases, when people talk about a LAN, they are referring to an Ethernet LAN. The Ethernet protocol was developed in the 1970s and has since been upgraded repeatedly to satisfy the changing requirements of networks and network users. Today's Ethernet networks run at speeds of 10, 100, and 1,000 Mbps (1 Gbps), which enables them to fulfill roles ranging from home and small business networks to high-capacity backbones.
There have been two sets of Ethernet standards in place over the years. The first was the original Ethernet protocol, as developed by Digital Equipment Corporation (DEC), Intel, and Xerox, and which came to be known as DIX Ethernet. The DIX Ethernet standard was first published in 1980 and defined a network running at 10 Mbps using RG8 coaxial cable in a bus topology. This standard is known as thick Ethernet, Thicknet, or 10Base5. The DIX Ethernet II standard, published in 1982, added a second physical layer option to the protocol using RG58 coaxial cable. This standard is called thin Ethernet, Thinnet, Cheapernet, or 10Base2.
Around the same time that these standards were published, an international standards-making body called the Institute of Electrical and Electronics Engineers (IEEE) set about creating an international standard defining this type of network, which would not be held in private hands as the DIX Ethernet standard was. In 1980, the IEEE assembled what it called a working group, with the designation IEEE 802.3, which began the development of an Ethernet-like network standard. The group couldn't call its network Ethernet because Xerox had trademarked the name, but in 1985, it published a document called the "IEEE 802.3 Carrier Sense Multiple Access with Collision Detection (CSMA/CD) Access Method and Physical Layer Specifications." This document included specifications for the same two coaxial cable options as DIX Ethernet and, after further development, added a specification for the unshielded twisted pair (UTP) cable option known as 10BaseT. Additional documents published by the IEEE 802.3 group in later years include IEEE 802.3u in 1995, which includes the 100 Mbps Fast Ethernet specifications, and IEEE 802.3z and IEEE 802.3ab, which are the 1,000 Mbps Gigabit Ethernet standards.
The IEEE 802.3 standard differs only slightly from the DIX Ethernet standard. The IEEE standard contains additional physical layer options, as already noted, and some differences in the frame format. Despite the continued use of the name Ethernet in the marketplace, however, the protocol that networks use today is actually IEEE 802.3, because this version provides the additional physical layer options and the Fast Ethernet and Gigabit Ethernet standards. Development of the DIX Ethernet standards ceased after Ethernet II, and when people use the term Ethernet today, it is understood that they actually mean IEEE 802.3. The only element of the DIX Ethernet standard still in common use is the Ethernet II frame format, which contains the Ethertype field that is used to identify the network layer protocol that generates the data in each packet.
Both the IEEE 802.3 and DIX Ethernet standards consist of the following three basic components:

- A collection of physical layer specifications that define the cable types, topologies, speeds, and segment lengths the network can use
- A frame format that encapsulates the network layer data for transmission across the network
- A media access control (MAC) mechanism, called Carrier Sense Multiple Access with Collision Detection (CSMA/CD), that arbitrates access to the shared network medium
The physical layer specifications included in the Ethernet standards describe the types of cables you can use to build the network, define the topology, and provide other crucial guidelines such as the maximum cable lengths and the number of repeaters you can use. The basic specifications for the Ethernet physical layer guidelines are listed in Table 5.1. Observing these guidelines is an important part of building a reliable Ethernet network, because they limit the effect of problems like attenuation and crosstalk, which are common to all networks and can inhibit the functionality of the CSMA/CD mechanism. The precise timing involved in Ethernet's collision detection mechanism makes the length of the network cables and the number of repeaters used highly significant to the network's smooth operation.
Table 5.1 Ethernet physical layer specifications
Designation | Cable Type | Topology | Speed | Maximum Segment Length |
---|---|---|---|---|
10Base5 | RG8 Coaxial | Bus | 10 Mbps | 500 meters |
10Base2 | RG58 Coaxial | Bus | 10 Mbps | 185 meters |
10BaseT | Category 3 UTP | Star | 10 Mbps | 100 meters |
FOIRL | 62.5/125 Multimode Fiber Optic | Star | 10 Mbps | 1,000 meters |
10BaseFL | 62.5/125 Multimode Fiber Optic | Star | 10 Mbps | 2,000 meters |
10BaseFB | 62.5/125 Multimode Fiber Optic | Star | 10 Mbps | 2,000 meters |
10BaseFP | 62.5/125 Multimode Fiber Optic | Star | 10 Mbps | 500 meters |
100BaseTX | Category 5 UTP | Star | 100 Mbps | 100 meters |
100BaseT4 | Category 3 UTP | Star | 100 Mbps | 100 meters |
100BaseFX | 62.5/125 Multimode Fiber Optic | Star | 100 Mbps | 412 meters |
1000BaseLX | 9/125 Singlemode Fiber Optic | Star | 1,000 Mbps | 5,000 meters |
1000BaseLX | 50/125 or 62.5/125 Multimode Fiber Optic | Star | 1,000 Mbps | 550 meters |
1000BaseSX | 50/125 Multimode Fiber Optic (400 MHz) | Star | 1,000 Mbps | 500 meters |
1000BaseSX | 50/125 Multimode Fiber Optic (500 MHz) | Star | 1,000 Mbps | 550 meters |
1000BaseSX | 62.5/125 Multimode Fiber Optic (160 MHz) | Star | 1,000 Mbps | 220 meters |
1000BaseSX | 62.5/125 Multimode Fiber Optic (200 MHz) | Star | 1,000 Mbps | 275 meters |
1000BaseLH | 9/125 Singlemode Fiber Optic | Star | 1,000 Mbps | 10 km |
1000BaseZX | 9/125 Singlemode Fiber Optic | Star | 1,000 Mbps | 100 km |
1000BaseCX | 150-ohm Shielded Copper Cable | Star | 1,000 Mbps | 25 meters |
1000BaseT | Category 5 (or 5E) UTP | Star | 1,000 Mbps | 100 meters |
The coaxial Ethernet standards (10Base5 and 10Base2) are the only standards that call for a bus topology. The maximum segment length indicates the length of the entire bus, from one terminator to the other, with all of the computers in between, as shown in Figure 5.1. A cable segment that connects more than two computers is called a mixing segment. The coaxial standards are no longer in use today, except on a few older networks, because they are more difficult to install and maintain than UTP and because they are limited to a maximum speed of 10 Mbps.
All of the other Ethernet physical layer specifications use the star topology, in which a separate cable segment connects each computer to a hub. A cable segment that connects only two devices is called a link segment. Unshielded twisted pair (UTP) is the most popular type of cable used on Ethernet networks today, because it is easy to install and it is upgradeable from 10 Mbps to 100 or even 1,000 Mbps. 10BaseT Ethernet uses link segments up to 100 meters long to connect computers to a repeating hub, which enables the incoming signals to go out to a computer another 100 meters away, as shown in Figure 5.2. 10BaseT uses only two of the four wire pairs in the cable, one pair for transmitting data and one pair for receiving it.
The Fast Ethernet standard (IEEE 802.3u) includes two UTP cable specifications, both of which retain the 100-meter maximum segment length. 100BaseTX does this by requiring a higher grade of cable, Category 5, which provides better signal transmission capabilities. 100BaseT4, however, provides the increased speed using the same Category 3 cable as older Ethernet and telephone networks. The difference between the two is that 100BaseTX uses only two pairs of wires, just like 10BaseT, while 100BaseT4 uses all four wire pairs. In addition to the transmit and receive pairs, 100BaseT4 uses the other two pairs for bi-directional communications.
Most of the physical layer specifications for Gigabit Ethernet use fiber optic cable, but there is one UTP option, defined in a separate document called IEEE 802.3ab. The 1000BaseT standard, designed specifically as an upgrade for existing UTP networks with 100-meter cable segments, calls for Category 5 cable, but is better serviced by the higher performance cables now being marketed as Enhanced Category 5 or Category 5E. The Category 5E cable rating has been officially ratified by the Telecommunications Industry Association and Electronic Industries Alliance (TIA/EIA). Category 5E cable is held to stricter transmission specifications than Category 5 and is much less prone to signal interference resulting from crosstalk. 1000BaseT achieves its speed by using all four wire pairs, like 100BaseT4, and by using a different signaling scheme called Pulse Amplitude Modulation-5 (PAM-5).
The use of fiber optic cable has been an Ethernet physical layer option since its early days. The Fiber Optic Inter-Repeater Link (FOIRL) was part of the DIX Ethernet II standard, and the IEEE 802.3 standards later included the 10BaseFL, 10BaseFB, and 10BaseFP specifications, which were intended for various types of networks. None of these fiber solutions were extremely popular, because running a fiber optic network at 10 Mbps is a terrible waste of potential. Fiber Distributed Data Interface (which is not a form of Ethernet), running at 100 Mbps, soon became the fiber optic backbone protocol of choice. Later, Fast Ethernet arrived with its own 100 Mbps fiber optic option, 100BaseFX. 100BaseFX uses the same hardware as 10BaseFL, but limits the length of a cable segment to 412 meters.
Gigabit Ethernet is the newest form of Ethernet, and raises the network transmission speed to 1,000 Mbps. Gigabit Ethernet relies heavily on fiber optic cabling, and provides a variety of physical layer options using different types of cable to achieve different segment lengths. Singlemode fiber cable is designed to span extremely long distances, which makes Gigabit Ethernet suitable for connecting distant networks or large campus backbones.
Repeating is an essential part of most Ethernet networks, and the standards include rules regarding the number of repeaters that you can use on a single LAN. For the original 10 Mbps Ethernet, the use of repeaters is governed by the 5-4-3 rule, which states that you can have up to five cable segments, connected by four repeaters, with no more than three of these segments being mixing segments. In the days of coaxial cable networks, this meant that you could have up to three mixing segments of 500 or 185 meters each (for 10Base5 and 10Base2, respectively) populated with multiple computers and connected by two repeaters. You could also add two additional repeaters to extend the network with another two cable segments of 500 or 185 meters each, as long as these were link segments connected directly to the next repeater in line, with no intervening computers, as shown in Figure 5.3. A 10Base2 network could therefore span up to 925 meters and a 10Base5 network up to 2,500 meters.
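The maximum network spans quoted above follow directly from the 5-4-3 rule and the segment lengths in Table 5.1. A quick arithmetic sketch, purely illustrative:

```python
# Sketch: maximum network span under the 5-4-3 rule, which allows five
# cable segments joined by four repeaters. Segment lengths come from
# the Ethernet physical layer specifications (Table 5.1).
def max_span(segment_length: int, segments: int = 5) -> int:
    """Return the end-to-end span in meters for a maximally repeated network."""
    return segment_length * segments

print(max_span(185))  # 925 meters for 10Base2
print(max_span(500))  # 2,500 meters for 10Base5
print(max_span(100))  # 500 meters for 10BaseT
```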
On networks using the star topology, all of the segments are link segments, so this means that you can connect up to four repeating hubs together using their uplink ports and still adhere to the 5-4-3 rule (see Figure 5.4). As long as the traffic between the two most distant computers doesn't pass through more than four hubs, the network is configured properly. Because the hubs function as repeaters, each 10BaseT cable segment can be up to 100 meters long, for a maximum network span of 500 meters.
Because Fast Ethernet networks run at higher speeds, they can't support as many hubs as 10 Mbps Ethernet. The Fast Ethernet standard defines two types of hubs, Class I and Class II, which must be marked with the appropriate Roman numeral in a circle. Class I hubs connect Fast Ethernet cable segments of different types, such as 100BaseTX to 100BaseT4 or UTP to fiber optic, while Class II hubs connect segments of the same type. You can have as many as two Class II hubs on a network, with a total cable length (for all three segments) of 205 meters when using UTP cable and 228 meters when using fiber optic. Because Class I hubs must perform an additional signal translation, which slows down the transmission process, you can have only one Class I hub on the network, with maximum cable lengths of 200 meters for UTP and 272 meters for fiber optic.
One of the primary functions of the Ethernet protocol is to encapsulate the data it receives from the network layer protocol in a frame, in preparation for its transmission across the network. The frame consists of a header and a footer that are divided into fields containing specific information needed to get each packet to its destination. The format of the Ethernet frame is shown in Figure 5.5.
The functions of the fields are as follows:

- Preamble (7 bytes) and Start Of Frame Delimiter (1 byte), which enable the receiving system to synchronize its clock with the incoming signal
- Destination Address (6 bytes) and Source Address (6 bytes)
- Ethertype/Length (2 bytes)
- Data And Pad (46 to 1,500 bytes)
- Frame Check Sequence (4 bytes), a checksum the receiving system uses to detect transmission errors
The Destination Address and Source Address fields use the 6-byte hardware addresses coded into network interface adapters to identify systems on the network. Every network interface adapter has a unique hardware address (also called a MAC address), which consists of a 3-byte value called an organizationally unique identifier (OUI), which is assigned to the adapter's manufacturer by the IEEE, plus another 3-byte value assigned by the manufacturer itself.
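The two halves of a hardware address can be separated mechanically. A minimal sketch in Python (the sample address uses Intel's OUI, 00-A0-C9; the helper name is our own):

```python
# Sketch: splitting a 6-byte MAC address into the IEEE-assigned OUI
# and the manufacturer-assigned device identifier.
def split_mac(mac: str):
    """Return (oui, device_id) for a colon-separated MAC address."""
    octets = mac.lower().split(":")
    assert len(octets) == 6, "a MAC address is 6 bytes"
    oui = ":".join(octets[:3])        # assigned to the manufacturer by the IEEE
    device_id = ":".join(octets[3:])  # assigned by the manufacturer itself
    return oui, device_id

print(split_mac("00:A0:C9:14:C8:29"))  # ('00:a0:c9', '14:c8:29')
```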
Ethernet, like all data-link layer protocols, is concerned only with transmitting packets to another system on the local network. If the packet's final destination is another system on the LAN, the Destination Address field contains the address of that system's network adapter. If the packet is destined for a system on another network, the Destination Address field contains the address of a router on the local network that provides access to the destination network. It is then up to the network layer protocol to supply a different kind of address (such as an Internet Protocol [IP] address) for the system that is the packet's ultimate destination.
The 2-byte field after the Source Address field is the primary difference between the DIX Ethernet and IEEE 802.3 standards. For any network that uses multiple protocols at the network layer, it is essential for the Ethernet frame to somehow identify which network layer protocol has generated the data in a particular packet. The DIX Ethernet frame does this very simply by specifying an Ethertype in this field, using values like those shown in Table 5.2. The IEEE 802.3 standard uses this field to specify the length of the data field.
Table 5.2 Common Ethertype values, in hexadecimal
Network Layer Protocol | Ethertype |
---|---|
Internet Protocol (IP) | 0800 |
X.25 | 0805 |
Address Resolution Protocol (ARP) | 0806 |
Reverse ARP | 8035 |
AppleTalk on Ethernet | 809B |
NetWare IPX | 8137 |
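Because a valid Data field is never longer than 1,500 bytes (0x05DC) and assigned Ethertype values are all larger, a receiving system can tell the two frame formats apart by inspecting this one field. A sketch of that test (the function name and messages are our own):

```python
# Sketch: distinguishing an Ethernet II frame from an IEEE 802.3 frame
# by the value of the 2-byte field after the Source Address. Values of
# 1,500 (0x05DC) or less are IEEE 802.3 lengths; larger values are
# Ethertypes (a few common ones from Table 5.2 are listed here).
ETHERTYPES = {0x0800: "IP", 0x0806: "ARP", 0x8137: "NetWare IPX"}

def classify(type_or_length: int) -> str:
    if type_or_length <= 0x05DC:  # 1,500 or less: an IEEE 802.3 length field
        return f"IEEE 802.3 frame, {type_or_length} data bytes"
    name = ETHERTYPES.get(type_or_length, "unknown")
    return f"Ethernet II frame, payload: {name}"

print(classify(0x0800))  # Ethernet II frame, payload: IP
print(classify(64))      # IEEE 802.3 frame, 64 data bytes
```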
The IEEE 802.3 standard takes a different approach. In this frame, the field after the Source Address specifies the length of the data in the packet. How then does the frame identify the network layer protocol? The answer is by using an additional frame component called Logical Link Control (LLC). The IEEE's 802 working group is not devoted solely to the development of Ethernet-like protocols. In fact, there are other protocols that fit into the IEEE 802 architecture, the most prominent of which (aside from IEEE 802.3) is IEEE 802.5, which is a Token Ring–like protocol. To make the IEEE 802 architecture adaptable to these various protocols, the data-link layer is split into two sublayers, as shown in Figure 5.6.
The MAC sublayer is the part that contains the elements particular to the IEEE 802.3 specification, such as the Ethernet physical layer options, the frame, and the CSMA/CD MAC mechanism. The functions of the LLC sublayer are defined in a separate document, published as IEEE 802.2. This same LLC sublayer is also used with the MAC sublayers of other IEEE 802 protocols, such as 802.5.
The LLC standard defines an additional 3-byte or 4-byte subheader that is carried within the Data field, which contains Service Access Points (SAPs) for the source and destination systems. These SAPs identify the process or protocol on each system that generated the data and that will receive it. To provide the same function as the Ethertype field, the LLC subheader can use a SAP value of 170 (0xAA), which indicates that the Data field also contains a second subheader called the Subnetwork Access Protocol (SNAP). The SNAP subheader is 5 bytes long and consists of a 3-byte organization code plus a 2-byte local code that performs the same function as the Ethertype field in the Ethernet II header.
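The layout described above can be sketched as a small parser. This is illustrative only; it assumes `data` is the frame's Data field, beginning with the LLC subheader, and the function name is our own:

```python
# Sketch: extracting the Ethertype-equivalent value from an IEEE 802.3
# frame carrying LLC and SNAP subheaders. The 3-byte LLC subheader
# (DSAP, SSAP, Control) comes first; SAP values of 0xAA (170) signal
# that a 5-byte SNAP subheader follows.
def snap_ethertype(data: bytes):
    dsap, ssap = data[0], data[1]
    if dsap == 0xAA and ssap == 0xAA:     # SNAP subheader is present
        # SNAP: 3-byte organization code + 2-byte type (local code)
        return int.from_bytes(data[6:8], "big")
    return None                           # no SNAP subheader in this frame

# LLC (AA AA 03) + SNAP (org code 00 00 00, type 08 00 = IP)
llc_snap = bytes([0xAA, 0xAA, 0x03, 0x00, 0x00, 0x00, 0x08, 0x00])
print(hex(snap_ethertype(llc_snap)))  # 0x800, the Ethertype for IP
```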
It is typical for computers on a Transmission Control Protocol/Internet Protocol (TCP/IP) network to use the Ethernet II frame, because the Ethertype field performs the same function as the LLC and SNAP subheaders, and saves 8 to 9 bytes per packet. Windows servers and clients automatically negotiate a common frame type when communicating, and when you install a NetWare server, you can select the frame type you want to use. There are two crucial factors to be aware of when it comes to Ethernet frame types. The first is that computers must use the same frame type in order to communicate. The second is that if you are using multiple network layer protocols on your network, such as TCP/IP for Windows networking and Internetwork Packet Exchange (IPX) for NetWare, you must use a frame type that contains an Ethertype or its functional equivalent, such as Ethernet II or Ethernet SNAP.
The MAC mechanism is the single most defining element of the Ethernet standard. A protocol that is very similar to Ethernet in other ways, such as 100VG AnyLAN, is placed in a separate category because it uses a different MAC mechanism. Carrier Sense Multiple Access with Collision Detection may be a confusing name, but the basic concept is simple. It's only when you get into the details that things become complicated.
Run the c05dem01 and c05dem02 videos located in the Demos folder on the CD-ROM accompanying this book for a demonstration of 100VG AnyLAN's demand priority mechanism.
When an Ethernet system has data to transmit, it first listens to the network to see if it is in use by another system. This is the carrier sense phase. If the network is busy, the system does nothing for a given period and then checks again. If the network is free, the system transmits the data packet. This is the multiple access phase, because all of the systems on the network are contending for access to the same network medium.
Run the c05dem03 video located in the Demos folder on the CD-ROM accompanying this book for a demonstration of the carrier sense and multiple access phases.
Even though an initial check is performed during the carrier sense phase, it is still possible for two systems on the network to transmit at the same time, causing a collision. For example, when a system performs the carrier sense, another computer may have already begun transmitting, but its signal has not yet reached the sensing system. The second computer then transmits and the two packets collide somewhere on the cable. When a collision occurs, both packets are discarded and the systems must retransmit them. These collisions are a normal and expected part of Ethernet networking, and they are not a problem unless there are too many of them or the computers are unable to detect them.
Run the c05dem04 video located in the Demos folder on the CD-ROM accompanying this book for a demonstration of a collision.
100VG AnyLAN is a data-link layer protocol that was developed by Hewlett Packard and AT&T in the early 1990s as a rival to the emerging Fast Ethernet standard. Like Fast Ethernet, 100VG AnyLAN runs at 100 Mbps over UTP cable. When using Category 3 cable, the maximum segment length is 100 meters, but Category 5 cable extends the maximum length to 200 meters. The protocol uses all four pairs of wires in the cable (like 100BaseT4), with a technique called quartet signaling. The primary element that differentiates 100VG AnyLAN from Ethernet is demand priority, a new MAC mechanism in which the hub determines which system on the network can transmit at any given time. 100VG AnyLAN never achieved the popularity that Fast Ethernet did, and today it remains a marginal technology with few advocates.
The collision detection phase of the transmission process is the most important part of the operation. If the systems can't tell when their packets collide, corrupted data may reach the destination system and be treated as valid data. Ethernet networks are designed so that packets are large enough to fill the entire network cable with signals before the last bit leaves the transmitting computer. This is why Ethernet packets must be at least 64 bytes long, why systems pad out short packets to 64 bytes before transmission, and why the Ethernet physical layer guidelines impose strict limitations on the lengths of cable segments.
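The padding rule above is simple arithmetic: with a 14-byte header and a 4-byte frame check sequence, a 64-byte minimum frame implies a minimum of 46 data bytes. A sketch of the padding step (function name and zero-fill choice are our own):

```python
# Sketch: padding a short Data field so the finished frame meets
# Ethernet's 64-byte minimum. The 64 bytes comprise the header
# (destination 6 + source 6 + type/length 2 = 14 bytes), the data,
# and the 4-byte frame check sequence.
MIN_FRAME = 64
HEADER = 14
FCS = 4

def pad_data(data: bytes) -> bytes:
    minimum_data = MIN_FRAME - HEADER - FCS  # 46 bytes
    if len(data) < minimum_data:
        data = data + b"\x00" * (minimum_data - len(data))
    return data

print(len(pad_data(b"hello")))  # 46 -- short data is padded out
```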
As long as a computer is still in the process of transmitting, it is capable of detecting a collision on the network. On a UTP or fiber optic network, a system assumes that a collision has occurred if it detects signals on both its transmit and receive wires at the same time. On a coaxial network, a voltage spike indicates the occurrence of a collision. If the network cable is too long or if the packet is too short, a system might finish transmitting before evidence of the collision reaches it, leaving the collision undetected.
When a system detects a collision, it immediately stops transmitting data and starts sending a jam pattern instead. The jam pattern serves as a signal to every system on the network that a collision has taken place, that it should discard any partial packets it may have received, and that it should not attempt to transmit any data until the network has cleared. After transmitting the jam pattern, the system waits a specific period of time before attempting to transmit again. This is called the backoff period, and both of the systems involved in a collision compute the length of their own backoff periods using a randomized algorithm called truncated binary exponential backoff. They do this to try to avoid causing another collision by backing off for the same period of time.
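The backoff computation can be sketched in a few lines. This is a simplified illustration of the algorithm, not a driver implementation; the cap at 10 doublings and the 16-attempt limit are the standard's parameters:

```python
import random

# Sketch of truncated binary exponential backoff. After the nth
# consecutive collision, a station waits a random number of slot times
# chosen uniformly from 0 to 2**n - 1. The exponent is "truncated" at
# 10, and after 16 failed attempts the transmission is abandoned.
def backoff_slots(attempt: int) -> int:
    if attempt > 16:
        raise RuntimeError("excessive collisions: transmission aborted")
    exponent = min(attempt, 10)
    return random.randrange(2 ** exponent)

# Two colliding stations each run this independently, so they usually
# pick different delays and avoid colliding again on retransmission.
print([backoff_slots(n) for n in (1, 2, 3)])
```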
Because of the way that CSMA/CD works, the more systems you have on a network or the more data the systems transmit over the network, the more collisions there are. Collisions are a normal part of Ethernet operation, but they still cause delays, because systems have to retransmit packets. When the number of collisions is nominal, the delays aren't noticeable, but when network traffic increases, the number of collisions increases, and the accumulated delays can begin to have a palpable effect on network performance. For this reason, it is not a good idea to run an Ethernet network at high traffic levels. You can reduce the traffic on the network by installing a bridge or switch, or by splitting it into two LANs and connecting the LANs with a router.
Using CSMA/CD may seem to be an inefficient way of controlling access to the network medium, but the process by which the systems contend for access to the network and recover from collisions occurs many times a second, so rapidly that the delays caused by a moderate number of collisions are negligible.
Run the c05dem05 video located in the Demos folder on the CD-ROM accompanying this book for a demonstration of how Ethernet systems contend for access to the network.