Ethernet and Fast Ethernet Equipment. Fast Ethernet Technology: Features, Physical Layer, and Construction Rules

The most widespread of the standard networks is Ethernet. It first appeared in 1972 (developed by the well-known company Xerox). The network proved quite successful, and as a result it was backed in 1980 by such major companies as DEC and Intel (the joint group of these companies together with Xerox was named DIX after the first letters of their names). Through their efforts, in 1985 Ethernet became an international standard; it was adopted by the largest international standards organizations: the IEEE 802 committee (Institute of Electrical and Electronics Engineers) and ECMA (European Computer Manufacturers Association).

The standard was named IEEE 802.3 (in English it reads as "eight oh two dot three"). It defines multiple access to a bus-type monochannel with carrier sensing and collision detection, that is, the already mentioned CSMA/CD access method. Some other networks also conformed to this standard, since its level of detail is not high. As a result, networks of the IEEE 802.3 standard were often incompatible with each other in both design and electrical characteristics. Recently, however, the IEEE 802.3 standard has come to be regarded as the standard for the Ethernet network.

Key features of the original IEEE 802.3 standard:

  • topology - bus;
  • transmission medium - coaxial cable;
  • transmission speed - 10 Mbit / s;
  • maximum network length - 5 km;
  • the maximum number of subscribers is up to 1024;
  • network segment length - up to 500 m;
  • number of subscribers on one segment - up to 100;
  • access method - CSMA / CD;
  • transmission is baseband, that is, without modulation (monochannel).

Strictly speaking, there are minor differences between the IEEE 802.3 and Ethernet standards, but they are usually ignored.

Ethernet is now the most popular in the world (over 90% of the market), and it is expected to remain so in the coming years. This was largely due to the fact that from the very beginning the characteristics, parameters, protocols of the network were open, as a result of which a huge number of manufacturers around the world began to produce Ethernet equipment that is fully compatible with each other.

A classic Ethernet network used a 50-ohm coaxial cable of two types (thick and thin). However, in recent years (since the beginning of the 90s), the most widespread version of the Ethernet is using twisted pairs as a transmission medium. A standard has also been defined for the use of fiber optic cable in a network. Additions have been made to the original IEEE 802.3 standard to accommodate these changes. In 1995, an additional standard appeared for a faster version of Ethernet operating at 100 Mbit / s (the so-called Fast Ethernet, IEEE 802.3u standard), using twisted pair or fiber optic cable as the transmission medium. In 1997, a version with a speed of 1000 Mbit / s appeared (Gigabit Ethernet, IEEE 802.3z standard).

In addition to the standard bus topology, passive star and passive tree topologies are increasingly being used. This assumes the use of repeaters and repeater hubs connecting different parts (segments) of the network. As a result, a tree-like structure can be formed on segments of different types (Fig. 7.1).

Fig. 7.1. Classic Ethernet topology

A segment (part of a network) can be a classic bus or a single subscriber. Bus segments use coaxial cable, while the branches of a passive star (connecting single computers to a hub) use twisted pair and fiber optic cables. The main requirement for the resulting topology is that it contain no closed paths (loops). In effect, all subscribers end up connected to a physical bus, since the signal from each of them propagates in all directions at once and does not return (as it does in a ring).

The maximum cable length of the network as a whole (maximum signal path) can theoretically reach 6.5 kilometers, but practically does not exceed 3.5 kilometers.

Fast Ethernet does not include a physical bus topology; only a passive star or passive tree is used. In addition, Fast Ethernet has much more stringent requirements for the maximum network length. Indeed, if the transmission speed is increased by a factor of 10 and the packet format is preserved, the minimum packet transmission time becomes ten times shorter. Thus, the permissible round-trip (double) signal transit time through the network is reduced by a factor of 10 (5.12 μs versus 51.2 μs in Ethernet).

The standard Manchester code is used to transfer information on the Ethernet network.

Access to the Ethernet network is carried out using a random CSMA / CD method, which ensures the equality of subscribers. The network uses packets of variable length with the structure shown in Fig. 7.2. (numbers show the number of bytes)

Fig. 7.2. Ethernet packet structure

The Ethernet frame length (that is, the packet without the preamble) must be at least 512 bit intervals, or 51.2 µs (this is also the limit on the round-trip transit time in the network). Individual, multicast, and broadcast addressing are provided.

The Ethernet packet includes the following fields:

  • The preamble consists of 8 bytes, the first seven are the code 10101010, and the last byte is the code 10101011. In the IEEE 802.3 standard, the eighth byte is called the Start of Frame Delimiter (SFD) and forms a separate field of the packet.
  • The recipient (receiver) and sender (transmitter) addresses are 6 bytes each and are built according to the standard described in the Packet Addressing section of Lecture 4. These address fields are processed by the subscribers' equipment.
  • The control field (L / T - Length / Type) contains information about the length of the data field. It can also determine the type of protocol used. It is generally accepted that if the value of this field is not more than 1500, then it indicates the length of the data field. If its value is more than 1500, then it determines the frame type. The control field is processed programmatically.
  • The data field must contain between 46 and 1500 bytes of data. If the packet is to contain less than 46 bytes of data, then the data field is padded with padding bytes. According to the IEEE 802.3 standard, a special padding field (pad data) is allocated in the packet structure, which can have a length of zero when there is enough data (more than 46 bytes).
  • The Frame Check Sequence (FCS) field contains a 32-bit cyclic packet checksum (CRC) and is used to check the correctness of the packet transmission.

Thus, the minimum frame length (packet without preamble) is 64 bytes (512 bits). It is this value that determines the maximum allowable double delay of signal propagation over the network in 512 bit intervals (51.2 μs for Ethernet or 5.12 μs for Fast Ethernet). The standard assumes that the preamble may shrink as the packet passes through various network devices, so it is ignored. The maximum frame length is 1518 bytes (12144 bits, i.e. 1214.4 μs for Ethernet, 121.44 μs for Fast Ethernet). This is important for choosing the size of the buffer memory of network equipment and for assessing the overall network load.
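To make these figures concrete, here is a small illustrative Python sketch (not part of any standard, just arithmetic) that reproduces the frame transmission times quoted above from the frame-size limits and bit rates:

```python
# A minimal sketch reproducing the Ethernet frame-timing arithmetic above.
MIN_FRAME_BYTES = 64       # minimum frame (without preamble)
MAX_FRAME_BYTES = 1518     # maximum frame (without preamble)

def frame_time_us(frame_bytes: int, rate_mbps: int) -> float:
    """Transmission time of a frame in microseconds at a given bit rate."""
    bits = frame_bytes * 8
    return bits / rate_mbps   # bits divided by Mbit/s gives microseconds

for rate in (10, 100):        # Ethernet and Fast Ethernet
    print(f"{rate} Mbit/s: min frame {frame_time_us(MIN_FRAME_BYTES, rate):.2f} us, "
          f"max frame {frame_time_us(MAX_FRAME_BYTES, rate):.2f} us")
# Expected output:
# 10 Mbit/s: min frame 51.20 us, max frame 1214.40 us
# 100 Mbit/s: min frame 5.12 us, max frame 121.44 us
```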

The choice of the preamble format is not accidental. The fact is that the sequence of alternating ones and zeros (101010 ... 10) in the Manchester code is characterized by the fact that it has transitions only in the middle of the bit intervals (see Section 2.6.3), that is, only information transitions. Of course, it is easy for the receiver to tune (synchronize) with such a sequence, even if for some reason it is shortened by a few bits. The last two unit bits of the preamble (11) differ significantly from the sequence 101010 ... 10 (transitions also appear at the border of the bit intervals). Therefore, the already tuned receiver can easily select them and thereby detect the beginning of useful information (the beginning of the frame).
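As an illustration of why the preamble is easy to synchronize to, the following Python sketch encodes a preamble byte with Manchester code, assuming the IEEE 802.3 convention that the first half-interval carries the inverted bit and the second half the bit itself (so a '1' gives a rising mid-bit transition):

```python
# A minimal sketch of Manchester encoding (assuming the IEEE 802.3 convention:
# first half-interval = complement of the bit, second half-interval = the bit itself).
def manchester(bits: str) -> list[int]:
    halves = []
    for b in bits:
        b = int(b)
        halves += [1 - b, b]          # two half-intervals per bit
    return halves

preamble_byte = "10101010"
print(manchester(preamble_byte))
# For 1010... patterns, transitions occur only in the middle of each bit interval,
# which is what makes the preamble easy to synchronize to; the final 11 of the
# preamble adds a transition at a bit boundary, marking the start of the frame.
```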

For an Ethernet network operating at a speed of 10 Mbit / s, the standard defines four main types of network segments, focused on different media:

  • 10BASE5 (thick coaxial cable);
  • 10BASE2 (thin coaxial cable);
  • 10BASE-T (twisted pair);
  • 10BASE-FL (fiber optic cable).

The segment name includes three elements: the number 10 means the transmission rate of 10 Mbit / s, the BASE word means transmission in the main frequency band (that is, without modulating the high-frequency signal), and the last element means the permissible segment length: 5 - 500 meters, 2 - 200 meters (more precisely, 185 meters) or the type of communication line: T - twisted pair (from English twisted-pair), F - fiber optic cable (from English fiber optic).
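The naming rule can be illustrated with a small, purely hypothetical helper that splits a segment name into its three elements (the mapping table below is only a summary of the descriptions in the text):

```python
# A hypothetical helper that splits an Ethernet segment name into its elements
# (speed, signalling, and length/medium), following the naming rule described above.
def parse_segment_name(name: str) -> dict:
    speed, rest = name.split("BASE")
    media = {"5": "thick coax, 500 m", "2": "thin coax, 185 m",
             "-T": "twisted pair", "-TX": "twisted pair (Cat 5)",
             "-T4": "twisted pair (Cat 3, 4 pairs)",
             "-FL": "fiber optic", "-FX": "fiber optic"}
    return {"speed_mbps": int(speed),
            "signalling": "baseband",
            "medium": media.get(rest, rest)}

print(parse_segment_name("10BASE-T"))
print(parse_segment_name("100BASE-FX"))
```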

Likewise, for an Ethernet network operating at a speed of 100 Mbps (Fast Ethernet), the standard defines three types of segments, differing in the types of transmission media:

  • 100BASE-T4 (twisted pair);
  • 100BASE-TX (twisted pair);
  • 100BASE-FX (fiber optic cable).

Here, the number 100 stands for a transmission rate of 100 Mbps, the letter T for a twisted pair, and the letter F for a fiber optic cable. The types 100BASE-TX and 100BASE-FX are sometimes combined under the name 100BASE-X, and 100BASE-T4 and 100BASE-TX under the name 100BASE-T.

The features of Ethernet equipment, as well as the CSMA/CD exchange control algorithm and the cyclic checksum (CRC) calculation algorithm, will be discussed in more detail later in special sections of the course. It should be noted here only that the Ethernet network offers neither record-breaking characteristics nor optimal algorithms; it is inferior to other standard networks in a number of parameters. But thanks to its powerful backing, the highest level of standardization, and huge production volumes of hardware, Ethernet stands out favorably among the other standard networks, and therefore it is customary to compare any other network technology with Ethernet.

The evolution of Ethernet technology is moving away from the original standard. The use of new transmission media and switches can significantly increase the size of the network. Abandoning the Manchester code (on Fast Ethernet and Gigabit Ethernet) results in higher data rates and reduced cable requirements. Rejection of the CSMA / CD control method (with full-duplex exchange mode) makes it possible to dramatically increase the efficiency of work and remove restrictions on the length of the network. However, all of the newer types of networking are also referred to as Ethernet.

Token-Ring network

The Token-Ring (token ring) network was proposed by IBM in 1985 (the first version appeared in 1980). It was designed to network all types of computers made by IBM. The very fact that it is supported by IBM, the largest computer manufacturer, suggests that it deserves special attention. But no less important is the fact that Token-Ring is currently the international standard IEEE 802.5 (although there are minor differences between Token-Ring and IEEE 802.5). This puts this network on the same level as Ethernet in status.

Token-Ring was developed as a reliable alternative to Ethernet. And although Ethernet is now displacing all other networks, Token-Ring is not hopelessly obsolete. More than 10 million computers worldwide are connected by this network.

IBM did everything to make its network as widespread as possible: detailed documentation was released, down to the schematic diagrams of the adapters. As a result, many companies, for example 3COM, Novell, Western Digital, Proteon, and others, started making adapters. Incidentally, the NetBIOS concept was developed specifically for this network and for another IBM network, PC Network. Whereas in the earlier PC Network the NetBIOS programs were stored in read-only memory built into the adapter, the Token-Ring network used a NetBIOS emulation program. This made it possible to respond more flexibly to hardware peculiarities and to maintain compatibility with higher-level programs.

The Token-Ring network has a ring topology, although it looks more like a star in appearance. This is due to the fact that individual subscribers (computers) are not connected to the network directly, but through special hubs or multi-station access devices (MSAU or MAU - Multistation Access Unit). Physically, the network forms a star-ring topology (Figure 7.3). In reality, the subscribers are nevertheless united in a ring, that is, each of them transmits information to one neighboring subscriber, and receives information from another.

Fig. 7.3. Star-ring topology of the Token-Ring network

At the same time, the hub (MAU) allows you to centralize the configuration task, disconnect faulty subscribers, monitor network operation, etc. (fig. 7.4). It does not perform any information processing.

Fig. 7.4. Connecting Token-Ring subscribers into a ring using a hub (MAU)

For each subscriber, the hub contains a special Trunk Coupling Unit (TCU), which automatically includes the subscriber in the ring if it is connected to the hub and working properly. If the subscriber disconnects from the hub or is faulty, the TCU automatically restores the integrity of the ring without the participation of that subscriber. The TCU is triggered by a DC signal (the so-called phantom current), which comes from a subscriber wishing to join the ring. A subscriber can also disconnect from the ring and carry out a self-test procedure (the far right subscriber in Fig. 7.4). The phantom current does not affect the information signal in any way, since the signal in the ring has no DC component.

Structurally, the hub is a self-contained unit with ten connectors on the front panel (Fig. 7.5).

Fig. 7.5. Token-Ring hub (8228 MAU)

Eight central connectors (1...8) are intended for connecting subscribers (computers) using adapter or radial cables. The two outermost connectors, the RI (Ring In) input and the RO (Ring Out) output, are used to connect to other hubs via special trunk cables. Wall-mount and desktop-mount versions of the hub are available.

There are both passive and active MAUs. An active hub recovers the signal coming from the subscriber (that is, it acts as an Ethernet hub). The passive hub does not perform signal recovery, it only re-switches the communication lines.

The hub in the network can be the only one (as in Figure 7.4), in this case, only the subscribers connected to it are closed in the ring. Outwardly, this topology looks like a star. If more than eight subscribers need to be connected to the network, then several hubs are connected by trunk cables and form a star-ring topology.

As noted, ring topology is very sensitive to ring cable breaks. To increase the survivability of the network, Token-Ring provides a so-called ring folding mode, which allows you to bypass the break point.

In normal mode, the hubs are connected in a ring by two parallel cables, but information is transmitted only through one of them (Fig. 7.6).

Fig. 7.6. Combining MAUs in normal mode

In the event of a single damage (breakage) of the cable, the network transmits through both cables, thereby bypassing the damaged section. At the same time, the order of bypassing subscribers connected to concentrators is even preserved (Fig. 7.7). True, the total length of the ring increases.

In the event of multiple cable faults, the network splits into several parts (segments) that are not connected to each other, but remain fully operational (Fig. 7.8). The maximum part of the network remains connected, as before. Of course, this no longer rescues the network as a whole, but it allows, with the correct distribution of subscribers to concentrators, to preserve a significant part of the functions of the damaged network.

Several hubs can be structurally combined into a group, a cluster, within which subscribers are also connected in a ring. The use of clusters allows you to increase the number of subscribers connected to one center, for example, up to 16 (if the cluster includes two hubs).

Fig. 7.7. Collapsing the ring when the cable is damaged

Fig. 7.8. Ring disintegration with multiple cable damage

At first, twisted pair, both unshielded (UTP) and shielded (STP), were used as a transmission medium in the IBM Token-Ring network, but then there were options for equipment for coaxial cable, as well as for fiber optic cable in the FDDI standard.

The main technical characteristics of the classic version of the Token-Ring network:

  • the maximum number of IBM 8228 MAU type hubs is 12;
  • the maximum number of subscribers in the network is 96;
  • maximum cable length between the subscriber and the hub - 45 meters;
  • maximum cable length between hubs - 45 meters;
  • the maximum length of the cable connecting all the hubs is 120 meters;
  • data transfer rate - 4 Mbit / s and 16 Mbit / s.

All specifications are based on the use of unshielded twisted pair cable. If a different transmission medium is used, the characteristics of the network may differ. For example, when using shielded twisted pair (STP), the number of subscribers can be increased to 260 (instead of 96), the cable length to 100 meters (instead of 45), the number of hubs to 33, and the total length of the ring connecting the hubs to 200 meters. Fiber optic cable makes it possible to extend the cable length to two kilometers.

To transfer information in Token-Ring, a biphase code is used (more precisely, its version with a mandatory transition in the center of the bit interval). As with any star topology, no additional electrical termination or external grounding is required. Matching is done by hardware network adapters and hubs.

Token-Ring cabling uses RJ-45 connectors (for unshielded twisted pair), as well as MIC and DB9P connectors. The wires in the cable connect identical pins of the connectors (that is, so-called straight cables are used).

The Token-Ring network in the classic version is inferior to the Ethernet network both in the allowable size and in the maximum number of subscribers. In terms of transmission speed, there are currently 100 Mbps (High Speed Token-Ring, HSTR) and 1000 Mbps (Gigabit Token-Ring) versions of Token-Ring. Companies that support Token-Ring (including IBM, Olicom, Madge) do not intend to abandon their network, seeing it as a worthy competitor to Ethernet.

Compared to Ethernet hardware, Token-Ring hardware is noticeably more expensive, since it uses a more complex exchange control method, so the Token-Ring network is not so widespread.

However, unlike Ethernet, a Token-Ring network handles high load levels (over 30-40%) much better and provides a guaranteed access time. This is necessary, for example, in industrial networks, where a delay in reacting to an external event can lead to serious accidents.

The Token-Ring network uses the classic token access method, that is, a token constantly circulates around the ring, to which subscribers can attach their data packets (see Fig. 7.8). This implies such an important advantage of this network as the absence of conflicts, but there are also disadvantages, in particular, the need to control the integrity of the token and the dependence of the functioning of the network on each subscriber (in the event of a malfunction, the subscriber must be excluded from the ring).

The maximum packet transmission time in Token-Ring is 10 ms. With the maximum number of 260 subscribers, the full cycle of the ring is 260 x 10 ms = 2.6 s. During this time, all 260 subscribers will be able to transmit their packets (if, of course, they have something to transmit), and a free token is guaranteed to reach every subscriber. This same interval is the upper limit of the Token-Ring access time.
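The worst-case access-time estimate above is simple arithmetic; a minimal sketch:

```python
# A small sketch of the worst-case Token-Ring access-time estimate given above.
MAX_HOLD_TIME_MS = 10      # maximum time one subscriber may hold the token
MAX_SUBSCRIBERS = 260      # maximum number of subscribers (with shielded twisted pair)

worst_case_access_ms = MAX_HOLD_TIME_MS * MAX_SUBSCRIBERS
print(f"Worst-case token rotation: {worst_case_access_ms} ms "
      f"({worst_case_access_ms / 1000} s)")   # 2600 ms = 2.6 s
```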

Each subscriber of the network (its network adapter) must perform the following functions:

  • identification of transmission errors;
  • network configuration control (network restoration in case of failure of the subscriber that precedes him in the ring);
  • control of numerous time relationships adopted in the network.

The large number of functions, of course, complicates and increases the cost of the network adapter hardware.

To control the integrity of the token, one of the subscribers (the so-called active monitor) is used. Its hardware is no different from that of the others, but its software monitors the timing relationships in the network and generates a new token if necessary.

The active monitor performs the following functions:

  • launches a marker into the ring at the beginning of work and when it disappears;
  • regularly (every 7 seconds) announces its presence with a special control packet (AMP - Active Monitor Present);
  • removes from the ring a packet that was not removed by the subscriber who sent it;
  • monitors the allowed packet transmission time.

The active monitor is selected when the network is initialized; it can be any computer on the network, but as a rule it is the first subscriber connected to the network. The subscriber that has become the active monitor inserts its buffer (a shift register) into the ring, which guarantees that the token will fit in the ring even at the minimum ring length. The size of this buffer is 24 bits at 4 Mbps and 32 bits at 16 Mbps.

Each subscriber constantly monitors how the active monitor performs its duties. If the active monitor fails for some reason, a special mechanism is activated by which all the other subscribers (standby, or backup, monitors) decide on the appointment of a new active monitor. To do this, the subscriber that detected the failure of the active monitor transmits a control packet (a token claim packet) around the ring containing its MAC address. Each subsequent subscriber compares the MAC address in the packet with its own. If its own address is lower, it passes the packet on unchanged. If it is higher, it puts its own MAC address into the packet. The new active monitor will be the subscriber whose MAC address is higher than all the others (it must receive a packet with its own MAC address back three times). A sign of failure of the active monitor is its failure to perform one of the functions listed above.
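The outcome of this election can be sketched as follows (a simplified model of the claim process, not the exact IEEE 802.5 procedure):

```python
# A minimal sketch of the monitor-election idea described above: each station overwrites
# the claim packet's address with its own if its MAC address is higher, so after circulating
# the ring the packet carries the highest address, which wins.
def elect_active_monitor(mac_addresses: list[int]) -> int:
    claim = 0
    for mac in mac_addresses:          # the claim packet passes each station in ring order
        if mac > claim:
            claim = mac                # a station with a higher address overwrites the field
    return claim

ring = [0x02AA01, 0x02AA07, 0x02AA03]
print(hex(elect_active_monitor(ring)))  # the station with the highest MAC becomes the monitor
```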

The Token-Ring network token is a control packet containing only three bytes (Figure 7.9): the Start Delimiter byte (SD), Access Control byte (AC), and End Delimiter byte (ED). All these three bytes are also included in the information package, although their functions in the marker and in the package are somewhat different.

The leading and trailing separators are not just a sequence of zeros and ones, but contain signals of a special kind. This was done so that the delimiters could not be confused with any other packet bytes.

Fig. 7.9. Token-Ring token format

The initial delimiter SD contains four non-standard bit intervals (Figure 7.10). Two of them, denoted J, hold a low signal level for the entire bit interval. The other two, denoted K, hold a high signal level for the entire bit interval. Such code violations are easily detected by the receiver, and the J and K bits can never occur among the bits of useful information.

Fig. 7.10. Formats of the starting (SD) and ending (ED) delimiters

The ending delimiter ED also contains four special bits (two J bits and two K bits), as well as two bits set to one. In addition, it includes two information bits that are meaningful only as part of an information packet:

  • Bit I (Intermediate) is a sign of an intermediate packet (1 corresponds to the first in a chain or an intermediate packet, 0 - to the last in a chain or a single packet).
  • The E (Error) bit is a sign of a detected error (0 corresponds to the absence of errors, 1 to their presence).

The Access Control (AC) byte is divided into four fields (Figure 7.11): a priority field (three bits), a marker bit, a monitor bit, and a reservation field (three bits).

Fig. 7.11. Access Control byte format

The priority field (three bits) allows a subscriber to assign a priority to its packets or to the token (the priority can range from 0 to 7, with 7 the highest and 0 the lowest). A subscriber can attach its packet to the token only if its own priority (the priority of its packets) is equal to or higher than the priority of the token.

The token bit indicates whether a packet is attached to the token (zero corresponds to a free token without a packet, one to a token with a packet attached, that is, a frame). The monitor bit, when set to one, indicates that this token was transmitted by the active monitor.

Reservation bits (field) allow the subscriber to reserve his right to further seize the network, that is, to take a queue for service. If the subscriber's priority (the priority of his packets) is higher than the current value of the reservation field, then he can write his priority there instead of the previous one. After looping around the ring, the highest priority of all subscribers will be recorded in the reservation field. The content of the reservation field is similar to the content of the priority field, but indicates the future priority.

As a result of the use of priority and reservation fields, only subscribers with the highest priority packets for transmission are able to access the network. Lower priority packets will be served only when higher priority packets are exhausted.
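A minimal sketch of packing and unpacking the Access Control byte, with the bit layout assumed from the description above (priority in the three most significant bits, then the token and monitor bits, then the reservation field):

```python
# A minimal sketch of the Access Control byte; the bit layout is assumed from the text:
# 3 priority bits, token bit, monitor bit, 3 reservation bits (most to least significant).
def pack_ac(priority: int, token: int, monitor: int, reservation: int) -> int:
    return ((priority & 0b111) << 5 | (token & 1) << 4 |
            (monitor & 1) << 3 | (reservation & 0b111))

def unpack_ac(ac: int) -> dict:
    return {"priority": ac >> 5 & 0b111,
            "token": ac >> 4 & 1,
            "monitor": ac >> 3 & 1,
            "reservation": ac & 0b111}

ac = pack_ac(priority=6, token=0, monitor=0, reservation=3)
print(bin(ac), unpack_ac(ac))   # 0b11000011 with priority 6 and reservation 3
```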

The format of the information packet (frame) Token-Ring is shown in Fig. 7.12. In addition to the start and end delimiters, and the access control byte, this packet also includes the packet control byte, receiver and transmitter network addresses, data, checksum, and packet status byte.

Fig. 7.12. Packet (frame) format of the Token-Ring network (field lengths are given in bytes)

Purpose of the fields of the packet (frame).

  • The leading delimiter (SD) is the start of the packet, the format is the same as in the marker.
  • The Access Control (AC) byte has the same format as the token.
  • The Packet Control Byte (FC - Frame Control) defines the type of packet (frame).
  • The six-byte source and destination MAC addresses of a packet follow the standard format described in Chapter 4.
  • The data field (Data) includes the transmitted data (in an information packet) or information for control of the exchange (in a control packet).
  • The Frame Check Sequence (FCS) field is a 32-bit cyclic packet checksum (CRC).
  • The ending delimiter (ED), as in the token, indicates the end of the packet. In addition, it indicates whether the current packet is intermediate or final in a sequence of transmitted packets, and it contains a packet-error flag (see Fig. 7.10).
  • The packet status byte (FS - Frame Status) tells what happened to the given packet: whether it was seen by the receiver (that is, whether there is a receiver with the specified address) and copied into the receiver's memory. From it, the sender of the packet knows whether the packet arrived at its destination and without errors, or if it needs to be transmitted again.

It should be noted that the larger allowable size of the transmitted data in one packet compared to an Ethernet network can be a decisive factor in increasing network performance. Theoretically, for transfer rates of 16 Mbit / s and 100 Mbit / s, the length of the data field can even reach 18 Kbytes, which is essential when transferring large amounts of data. But even at 4 Mbps, Token-Ring often delivers faster actual transfer rates than 10 Mbps Ethernet, thanks to token-based access. The advantage of Token-Ring is especially noticeable at high loads (over 30-40%), since in this case the CSMA / CD method requires a lot of time to resolve repeated conflicts.

A subscriber wishing to transmit a packet waits for a free token to arrive and captures it. The captured token is transformed into the frame of an information packet. The subscriber then transmits the information packet into the ring and waits for it to return. After that, it releases the token and sends it back into the network.

In addition to the token and the usual packet, a special control packet can be transmitted in the Token-Ring network, which serves to interrupt the transmission (Abort). It can be sent anytime and anywhere in the data stream. This package consists of two one-byte fields - the initial (SD) and final (ED) delimiters of the described format.

Interestingly, the faster versions of Token-Ring (16 Mbps and above) use the so-called Early Token Release (ETR) method. It avoids idle time on the network while a data packet travels around the ring back to its sender.

The ETR method boils down to the fact that immediately after transmitting its packet attached to the token, any subscriber issues a new free token to the network. Other subscribers can start transmitting their packets immediately after the end of the packet of the previous subscriber, without waiting for him to complete the traversal of the entire network ring. As a result, there can be several packets on the network at the same time, but there will always be no more than one free token. This pipeline is especially effective on long-haul networks that have significant propagation delay.

When a subscriber is connected to a hub, it performs an autonomous self-test and cable-test procedure (it is not yet switched into the ring, since there is no phantom-current signal). The subscriber sends itself a series of packets and checks that they pass correctly (its input is connected directly to its output by the TCU, as shown in Fig. 7.4). After that, the subscriber includes itself in the ring by sending the phantom current. At the moment of switching in, a packet being transmitted around the ring may be corrupted. Next, the subscriber establishes synchronization and checks for an active monitor on the network. If there is no active monitor, the subscriber starts a contention for the right to become one. Then the subscriber checks the uniqueness of its own address in the ring and collects information about the other subscribers. After that, it becomes a full participant in the exchange over the network.

In the course of the exchange, each subscriber monitors the health of the preceding subscriber (in ring order). If it suspects a failure of the preceding subscriber, it starts the automatic ring-recovery procedure. A special control packet (beacon) tells the preceding subscriber to run a self-test and, possibly, disconnect from the ring.

The Token-Ring network also provides for the use of bridges and switches. They are used to divide a large ring into several ring segments that can exchange packets with each other. This allows you to reduce the load on each segment and increase the proportion of time provided to each subscriber.

As a result, you can form a distributed ring, that is, the combination of several ring segments into one large backbone ring (Figure 7.13) or a star-ring structure with a central switch to which the ring segments are connected (Figure 7.14).

Fig. 7.13. Connecting segments with a trunk ring using bridges

Fig. 7.14. Combining segments with a central switch

Arcnet network

Arcnet (or ARCnet, from Attached Resource Computer Net) is one of the oldest networks. It was developed by Datapoint Corporation back in 1977. There are no international standards for this network, although it is considered the forerunner of the token-passing access method. Despite the lack of standards, the Arcnet network was popular until relatively recently (in the 1980s and 1990s), even seriously competing with Ethernet. A large number of companies (for example, Datapoint, Standard Microsystems, Xircom, and others) produced equipment for this type of network. But now the production of Arcnet equipment has practically ceased.

The main advantages of the Arcnet network over Ethernet are a bounded access time, high communication reliability, ease of diagnostics, and the relatively low cost of adapters. The most significant disadvantages of the network are the low data transfer rate (2.5 Mbit/s), the addressing system, and the packet format.

To transfer information in the Arcnet network, a rather rare code is used in which a logical one corresponds to two pulses within a bit interval and a logical zero to one pulse. Obviously, this is a self-clocking code, but it requires even more cable bandwidth than the Manchester code.

As a transmission medium in the network, a coaxial cable with a characteristic impedance of 93 Ohm is used, for example, of the RG-62A / U brand. Twisted pair options (shielded and unshielded) are not widely used. Fiber optic options have been proposed, but they haven't saved Arcnet either.

As a topology, the Arcnet network uses the classic bus (Arcnet-BUS) as well as a passive star (Arcnet-STAR). Hubs are used in the star. It is possible to combine bus and star segments into a tree topology using hubs (as with Ethernet). The main limitation is that there should be no closed paths (loops) in the topology. Another limitation is that the number of daisy chained segments using hubs must not exceed three.

Hubs are of two types:

  • Active concentrators (restore the shape of incoming signals and amplify them). The number of ports is from 4 to 64. Active hubs can be interconnected (cascaded).
  • Passive hubs (just mix the incoming signals without amplification). The number of ports is 4. Passive hubs cannot be connected to each other. They can only link active hubs and / or network adapters.

Bus segments can only be connected to active hubs.

There are also two types of network adapters:

  • High impedance (Bus), intended for use in bus segments;
  • Low impedance (Star) designed for use in a passive star.

Low-impedance adapters differ from high-impedance adapters in that they contain 93-ohm matching terminators; when they are used, no external termination is required. In bus segments, low-impedance adapters can be used as terminating adapters at the ends of the bus. High-impedance adapters require external 93-ohm terminators. Some network adapters can be switched between the high-impedance and low-impedance states; they can work both in a bus and in a star.

Thus, the topology of the Arcnet network looks like this (Figure 7.15).

Fig. 7.15. Arcnet network topology of the bus type (B - adapters for working in a bus, S - adapters for working in a star)

The main technical characteristics of the Arcnet network are as follows.

  • Transmission medium - coaxial cable, twisted pair.
  • The maximum length of the network is 6 kilometers.
  • The maximum cable length from the subscriber to the passive hub is 30 meters.
  • The maximum cable length from the subscriber to the active hub is 600 meters.
  • The maximum cable length between active and passive hubs is 30 meters.
  • The maximum cable length between active hubs is 600 meters.
  • The maximum number of subscribers in the network is 255.
  • The maximum number of subscribers on a bus segment is 8.
  • The minimum distance between subscribers in the bus is 1 meter.
  • The maximum length of a bus segment is 300 meters.
  • The data transfer rate is 2.5 Mbps.

When creating complex topologies, it is necessary to ensure that the delay in the propagation of signals in the network between subscribers does not exceed 30 μs. The maximum attenuation of the signal in the cable at a frequency of 5 MHz should not exceed 11 dB.

Arcnet uses the token-passing access method, which differs slightly from that of Token-Ring. This method is closest to the one defined in the IEEE 802.4 standard. The sequence of actions of the subscribers with this method is as follows:

1. The subscriber who wants to transmit is waiting for the arrival of the token.

2. Having received the token, he sends a request to transmit information to the receiving subscriber (asks if the receiver is ready to receive his packet).

3. The receiver, having received the request, sends a response (confirms its readiness).

4. Having received confirmation of readiness, the sender subscriber sends his packet.

5. On receiving the packet, the receiver sends an acknowledgment of the packet.

6. The transmitter, having received the acknowledgment of packet reception, ends its communication session. After that, the token is passed to the next subscriber in ascending order of network addresses.

Thus, in this case, the packet is transmitted only when there is confidence in the readiness of the receiver to receive it. This significantly increases the reliability of the transmission.
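A minimal, self-contained sketch of this request/acknowledge handshake (the class and method names are hypothetical, introduced only for illustration):

```python
# A simplified model of the Arcnet handshake described in the steps above:
# a packet is sent only after the receiver confirms that it is ready.
class Station:
    def __init__(self, address: int, ready: bool = True):
        self.address = address
        self.ready = ready
        self.inbox = []

    def ready_to_receive(self) -> bool:        # steps 2-3: enquiry and readiness reply
        return self.ready

    def accept(self, packet) -> bool:          # steps 4-5: reception and acknowledgement
        self.inbox.append(packet)
        return True

def arcnet_transfer(sender: Station, receiver: Station, packet) -> bool:
    # step 1 (waiting for the free token) is assumed to have already happened
    if not receiver.ready_to_receive():
        return False                           # receiver busy: transmission postponed
    ok = receiver.accept(packet)               # steps 4-5
    # step 6: the sender would now pass the token to the next address
    return ok

a, b = Station(1), Station(5)
print(arcnet_transfer(a, b, "hello"), b.inbox)
```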

As with Token-Ring, conflicts are completely eliminated in Arcnet. Like any token-passing network, Arcnet handles load well and guarantees a bounded network access time (unlike Ethernet). The maximum time for the token to make a full round of all subscribers is 840 ms. Accordingly, the same interval defines the upper limit of the network access time.

The token is generated by a special subscriber - the network controller. It is the subscriber with the minimum (zero) address.

If the subscriber does not receive a free token within 840 ms, then he sends a long bit sequence to the network (to ensure the destruction of the damaged old token). After that, the procedure for monitoring the network and assigning (if necessary) a new controller is carried out.

The Arcnet packet size is 0.5 KB. In addition to the data field, it also includes 8-bit receiver and transmitter addresses and a 16-bit cyclic checksum (CRC). Such a small packet size turns out to be rather inconvenient at high traffic intensity on the network.

Arcnet network adapters differ from other network adapters in that they need to set their own network address using switches or jumpers (there can be 255 of them, since the last, 256th address is used in the network for broadcasting mode). The control over the uniqueness of each network address is entirely the responsibility of the network users. Connecting new subscribers becomes quite difficult at the same time, since it is necessary to set the address that has not yet been used. The choice of the 8-bit address format limits the number of network subscribers to 255, which may not be enough for large companies.

As a result, all this led to the almost complete abandonment of the Arcnet network. There were 20 Mbit / s versions of the Arcnet network, but these were not widely adopted.

Articles to read:

Lecture 6: Standard Ethernet / Fast Ethernet Segments

Introduction

The purpose of this report is to give a short and accessible presentation of the basic principles of operation and features of computer networks, using Fast Ethernet as an example.

A network is a group of connected computers and other devices. The main purpose of computer networks is the sharing of resources and the implementation of interactive communication both within one firm and outside it. Resources are data, applications, and peripherals such as an external drive, printer, mouse, modem, or joystick. Interactive communication between computers implies real-time messaging.

There are many sets of standards for data transmission in computer networks. One such set is the Fast Ethernet standard.

From this material you will learn about:

  • Fast Ethernet technologies
  • Switches
  • FTP cable
  • Connection types
  • Computer network topologies

In my work, I will show the principles of a network based on the Fast Ethernet standard.

Local area network (LAN) switching and Fast Ethernet technologies were developed in response to the need to improve the efficiency of Ethernet networks. By increasing bandwidth, these technologies can eliminate network bottlenecks and support applications that require high data rates. The appeal of these solutions is that you don't have to choose one or the other. They are complementary, so the efficiency of the network can most often be improved by using both technologies.

The collected information will be useful both to persons beginning to study computer networks and to network administrators.

1. Network diagram

2. Fast Ethernet technology


Fast Ethernet is the result of the evolution of Ethernet technology. Based on, and keeping intact, the same CSMA/CD method (Carrier Sense Multiple Access with Collision Detection), Fast Ethernet devices operate at 10 times the speed of Ethernet, that is, 100 Mbps. Fast Ethernet provides sufficient bandwidth for applications such as computer-aided design and manufacturing (CAD/CAM), graphics and imaging, and multimedia. Fast Ethernet is compatible with 10 Mbps Ethernet, so it is more convenient to integrate Fast Ethernet into a LAN using a switch rather than a router.

Switch

Using switches, many workgroups can be linked together to form a large LAN (see Figure 1). Inexpensive switches outperform routers and give better LAN performance. Fast Ethernet workgroups of one or two hubs can be connected through a Fast Ethernet switch to further increase the number of users and cover a wider area.

As an example, consider the following switch:

Fig. 1. D-Link DES-1228/ME

The DES-1228/ME series comprises configurable Layer 2 Fast Ethernet switches of the "premium" class. With their advanced functionality, DES-1228/ME devices are an inexpensive solution for building a secure, high-performance network. The switch offers high port density, 4 Gigabit uplink ports, fine-grained bandwidth management, and improved network management. These switches are an optimal solution in terms of both functionality and cost.

FTP cable

LAN-5EFTP-BL cable consists of 4 pairs of solid copper conductors.

Conductor diameter 24AWG.

Each conductor is encased in HDPE (high density polyethylene) insulation.

Two conductors twisted at a specially selected pitch form one twisted pair.

The 4 twisted pairs are wrapped in plastic film and enclosed in a common foil screen and a PVC sheath.

Straight through

It serves:

  • 1. To connect a computer to a switch (hub) via the computer's network card
  • 2. To connect network peripheral equipment (printers, scanners) to a switch (hub)
  • 3. For an uplink to an upstream switch (hub); modern switches can automatically configure the connector pins for reception and transmission

Crossover

It serves:

  • 1. For direct connection of 2 computers to a local network, without the use of switching equipment (hubs, switches, routers, etc.).
  • 2. For an uplink connection to a higher-level switch in a complex local network structure with older types of switches (hubs), which have a separate connector for this purpose, usually marked "UPLINK" or "X".

Star topology

A star is a basic computer network topology in which all computers on the network are connected to a central node (usually a switch), forming a physical network segment. Such a network segment can function both separately and as part of a complex network topology (usually a "tree"). All information exchange goes exclusively through the central node, which therefore bears a very heavy load and can do nothing other than serve the network. As a rule, the central node is the most powerful one, and all exchange-management functions are assigned to it. In principle, no conflicts are possible in a network with a star topology, because management is completely centralized.

Application

Classic 10-megabit Ethernet satisfied most users for about 15 years. However, in the early 90s its insufficient bandwidth began to be felt. For computers based on Intel 80286 or 80386 processors with ISA (8 MB/s) or EISA (32 MB/s) buses, the throughput of an Ethernet segment was 1/8 or 1/32 of the memory-to-disk channel, and this agreed well with the ratio of data processed locally to data transferred over the network. For more powerful client stations with a PCI bus (133 MB/s), this share dropped to 1/133, which was clearly not enough. As a result, many 10-Mbit Ethernet segments became congested, server responsiveness dropped noticeably, and collision rates rose sharply, further reducing the usable bandwidth.

A need arose to develop a "new" Ethernet, that is, a technology that would be just as efficient in terms of price/quality ratio at a performance of 100 Mbps. As a result of searches and research, specialists split into two camps, which ultimately led to the emergence of two new technologies: Fast Ethernet and 100VG-AnyLAN. They differ in their degree of continuity with classic Ethernet.

In 1992, a group of networking equipment manufacturers, including leaders in Ethernet technology such as SynOptics, 3Com, and several others, formed the Fast Ethernet Alliance, a non-profit alliance to develop a standard for the new technology that would preserve the features of Ethernet technology as much as possible.

The second camp was led by Hewlett-Packard and AT&T, which offered to take advantage of the opportunity to address some of the known flaws in Ethernet technology. Some time later, IBM joined these companies, which contributed to the proposal to provide some compatibility with Token Ring networks in the new technology.

At the same time, a research group was formed in committee 802 of the IEEE to study the technical potential of new high-speed technologies. Between the end of 1992 and the end of 1993, the IEEE group examined 100-megabit solutions from various manufacturers. In addition to the Fast Ethernet Alliance offering, the group also reviewed high-speed technology offered by Hewlett-Packard and AT&T.

Discussions centered on whether to preserve the random CSMA/CD access method. The Fast Ethernet Alliance proposal kept this method and thereby ensured continuity and consistency between 10 Mbps and 100 Mbps networks. The coalition of HP and AT&T, which had the backing of significantly fewer vendors in the networking industry than the Fast Ethernet Alliance, proposed a completely new access method called Demand Priority (priority access on demand). It changed the behavior of nodes in the network so significantly that it could not fit into Ethernet technology and the 802.3 standard, and a new IEEE 802.12 committee was organized to standardize it.

In the fall of 1995, both technologies became IEEE standards. The IEEE 802.3 committee adopted the Fast Ethernet specification not as a stand-alone standard but as a supplement to the existing 802.3 standard, in the form of clauses 21 through 30. The 802.12 committee adopted the 100VG-AnyLAN technology, which uses the new Demand Priority access method and supports frames in two formats, Ethernet and Token Ring.

Physical layer of Fast Ethernet technology

All the differences between Fast Ethernet technology and Ethernet are concentrated at the physical layer (Fig. 3.20). The MAC and LLC layers in Fast Ethernet remain exactly the same and are described by the 802.3 and 802.2 standards discussed in the previous chapters. Therefore, in considering Fast Ethernet technology, we will study only a few variants of its physical layer.

The more complex structure of the physical layer of Fast Ethernet technology is caused by the fact that it uses three variants of cable systems:

  • Fiber-optic multimode cable, two fibers are used;
  • Twisted pair of category 5, two pairs are used;
  • Twisted pair of category 3, four pairs are used.

Coaxial cable, which gave the world the first Ethernet network, was not included in the number of allowed data transmission media of the new Fast Ethernet technology. This is a common trend in many new technologies, since over short distances, Category 5 twisted pair can transmit data at the same speed as coaxial cable, but the network is cheaper and easier to use. Over long distances, optical fiber has much higher bandwidth than coax, and the network cost is not much higher, especially when you consider the high troubleshooting costs of a large coaxial cabling system.


Differences between Fast Ethernet technology and Ethernet technology

The rejection of coaxial cable has led to the fact that Fast Ethernet networks always have a hierarchical tree structure built on hubs, like 10Base-T/10Base-F networks. The main difference between Fast Ethernet network configurations is the reduction of the network diameter to about 200 m, which is explained by a 10-fold reduction in the transmission time of a minimum frame length due to a 10-fold increase in the transmission speed compared to 10-megabit Ethernet.

Nevertheless, this circumstance does not really impede the construction of large networks based on Fast Ethernet technology. The fact is that the mid-90s were marked not only by the widespread use of inexpensive high-speed technologies, but also by the rapid development of local area networks based on switches. When using switches, the Fast Ethernet protocol can operate in full-duplex mode, in which there are no restrictions on the total length of the network, and only restrictions on the length of the physical segments connecting neighboring devices (adapter - switch or switch - switch) remain. Therefore, when creating long-distance LAN backbones, Fast Ethernet technology is also actively used, but only in a full-duplex version, together with switches.

This section discusses the half-duplex variant of Fast Ethernet operation, which fully complies with the definition of an access method described in the 802.3 standard.

Compared with the physical implementation variants of Ethernet (of which there are six), in Fast Ethernet the differences of each variant from the others run deeper: both the number of conductors and the coding methods change. And since the physical variants of Fast Ethernet were created simultaneously, rather than evolutionarily as with Ethernet, it was possible to define in detail those sublayers of the physical layer that do not change from variant to variant and those that are specific to each variant of the physical medium.

The official 802.3 standard established three different specifications for the Fast Ethernet physical layer and gave them the following names:

Fast Ethernet physical layer structure

  • 100Base-TX for two-pair cable on unshielded twisted pair UTP category 5 or shielded twisted pair STP Type 1;
  • 100Base-T4 for four-pair cable on unshielded twisted pair UTP category 3, 4, or 5;
  • 100Base-FX for multimode fiber optic cable, two fibers are used.

The following statements and characteristics apply to all three standards.

  • Fast Ethernet frame formats do not differ from the 10-Mbit Ethernet frame formats.
  • The interframe gap (IPG) is 0.96 µs and the bit interval is 10 ns. All the timing parameters of the access algorithm (backoff interval, transmission time of the minimum frame length, etc.), measured in bit intervals, remained the same; therefore, no changes were made to the sections of the standard concerning the MAC level.
  • The free state of the medium is indicated by transmitting the Idle symbol of the corresponding redundant code over it (and not by the absence of signals, as in the 10 Mbit/s Ethernet standards). The physical layer includes three elements:
  • o the reconciliation sublayer;
  • o the media independent interface (MII);
  • o the physical layer device (PHY).

The reconciliation sublayer is needed so that the MAC layer, designed for the AUI interface, can work with the physical layer through the MII interface.

The physical layer device (PHY) consists, in turn, of several sublayers (see Fig. 3.20):

  • The logical data coding sublayer, which converts the bytes coming from the MAC level into 4B/5B or 8B/6T code symbols (both codes are used in Fast Ethernet technology);
  • The physical attachment and physical medium dependent (PMD) sublayers, which provide signal generation in accordance with a physical coding method such as NRZI or MLT-3;
  • The auto-negotiation sublayer, which allows two communicating ports to automatically select the most efficient mode of operation, such as half- or full-duplex (this sublayer is optional).

The MII interface supports a physical-medium-independent way of exchanging data between the MAC sublayer and the PHY sublayer. This interface is similar in purpose to the AUI interface of classic Ethernet, except that the AUI interface was located between the physical signal coding sublayer (the same physical coding method, the Manchester code, was used for all cable variants) and the physical medium attachment sublayer, whereas the MII interface is located between the MAC sublayer and the signal coding sublayers, of which there are three in the Fast Ethernet standard: FX, TX, and T4.

The MII connector, unlike the AUI connector, has 40 pins; the maximum MII cable length is one meter. The signals transmitted over the MII interface have an amplitude of 5 V.

Physical layer 100Base-FX - multimode fiber, two fibers

This specification defines Fast Ethernet operation over multimode fiber in half-duplex and full-duplex modes based on the well-proven FDDI coding scheme. As in the FDDI standard, each node is connected to the network by two optical fibers coming from the receiver (Rx) and from the transmitter (Tx).

There are many similarities between the 100Base-FX and 100Base-TX specifications, so the properties common to the two specifications will be given under the generic name 100Base-FX/TX.

While 10 Mbps Ethernet uses Manchester coding to represent data on the cable, Fast Ethernet defines a different coding method, 4B/5B. This method had already shown its effectiveness in the FDDI standard and was transferred without changes to the 100Base-FX/TX specification. In this method, every 4 bits of MAC sublayer data (called a symbol) are represented by 5 bits. The redundant bit allows control codes to be defined when each group of five bits is represented as electrical or optical pulses. The existence of forbidden symbol combinations makes it possible to reject erroneous symbols, which increases the stability of 100Base-FX/TX networks.

To separate an Ethernet frame from the Idle symbols, a Start Delimiter combination is used (the pair of symbols J (11000) and K (10001) of the 4B/5B code), and after the end of the frame a T symbol is inserted before the first Idle symbol.
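For illustration, the following sketch encodes a byte with the commonly published FDDI/100Base-X 4B/5B data-symbol table and frames it with the J/K start delimiter and the T symbol (the nibble order shown is an assumption of the sketch):

```python
# A small sketch of 4B/5B encoding using the commonly published FDDI/100Base-X symbol table
# (data symbols only; J, K, T, and Idle are the control symbols mentioned in the text).
CODE_4B5B = {
    0x0: "11110", 0x1: "01001", 0x2: "10100", 0x3: "10101",
    0x4: "01010", 0x5: "01011", 0x6: "01110", 0x7: "01111",
    0x8: "10010", 0x9: "10011", 0xA: "10110", 0xB: "10111",
    0xC: "11010", 0xD: "11011", 0xE: "11100", 0xF: "11101",
}
J, K, T, IDLE = "11000", "10001", "01101", "11111"   # control symbols

def encode_frame_bytes(data: bytes) -> str:
    """Encode bytes as 5-bit symbols, framed by the J/K start delimiter and a T symbol."""
    symbols = [J, K]
    for byte in data:
        symbols.append(CODE_4B5B[byte >> 4])     # high nibble first (order assumed here)
        symbols.append(CODE_4B5B[byte & 0xF])
    symbols.append(T)
    return " ".join(symbols)

print(encode_frame_bytes(b"\xAB"))   # 11000 10001 10110 10111 01101
```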


Continuous data stream of 100Base-FX / TX specifications

After the 4-bit portions of the MAC codes are converted into 5-bit portions of the physical layer, they must be represented as optical or electrical signals in the cable connecting the network nodes. The 100Base-FX and 100Base-TX specifications use different physical coding methods for this: NRZI and MLT-3, respectively (as in FDDI technology when working over fiber and twisted pair).
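The two line codes can be modeled in a few lines (a simplified sketch: NRZI toggles the signal level on every one bit, MLT-3 steps through the cycle 0, +1, 0, -1 on every one bit):

```python
# Simplified models of the two line codes named above, applied to a 5-bit symbol stream.
def nrzi(bits: str, level: int = 0) -> list[int]:
    out = []
    for b in bits:
        if b == "1":
            level ^= 1               # transition on every 1, no transition on 0
        out.append(level)
    return out

def mlt3(bits: str) -> list[int]:
    cycle = [0, 1, 0, -1]            # three-level cycle
    idx, out = 0, []
    for b in bits:
        if b == "1":
            idx = (idx + 1) % 4      # advance one step on every 1
        out.append(cycle[idx])
    return out

symbol = "1100010001"                # the J and K symbols back to back
print("NRZI :", nrzi(symbol))
print("MLT-3:", mlt3(symbol))
```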

Physical layer 100Base-TX - twisted pair UTP Cat 5 or STP Type 1, two pairs

The 100Base-TX specification uses a Category 5 UTP cable or an STP Type 1 cable as the transmission medium. The maximum cable length in both cases is 100 m.

The main differences from the 100Base-FX specification are the use of the MLT-3 method to transmit the signals of the 5-bit 4B/5B code portions over twisted pair, as well as the presence of the Auto-negotiation function for selecting the port operating mode. The auto-negotiation scheme allows two physically connected devices that support several physical layer standards, differing in bit rate and number of twisted pairs, to choose the most advantageous mode of operation. Usually, the auto-negotiation procedure occurs when a network adapter that can operate at speeds of 10 and 100 Mbps is connected to a hub or a switch.

The Auto-negotiation scheme described below is now the standard for 100Base-T technology. Prior to this, manufacturers used various proprietary schemes for automatically detecting the speed of the interacting ports, which were not compatible with one another. The standard Auto-negotiation scheme was originally proposed by National Semiconductor under the name NWay.

A total of 5 different operating modes are currently defined that can be supported by 100Base-TX or 100Base-T4 twisted-pair devices:

  • 10Base-T - 2 pairs of category 3;
  • 10Base-T full-duplex - 2 pairs of category 3;
  • 100Base-TX - 2 pairs of category 5 (or Type 1A STP);
  • 100Base-T4 - 4 pairs of category 3;
  • 100Base-TX full-duplex - 2 pairs of category 5 (or Type 1A STP).

10Base-T has the lowest priority and full-duplex 100Base-TX the highest. The negotiation process occurs when the device is powered on and can also be initiated at any time by the device's control module.

The device that starts the auto-negotiation process sends its partner a burst of special pulses, a Fast Link Pulse (FLP) burst, which contains an 8-bit word encoding the proposed communication mode, starting with the highest-priority mode supported by this node.

If the partner node supports the auto-negotiation function and can also support the proposed mode, it responds with its own burst of FLP pulses in which it confirms this mode, and the negotiation ends there. If the partner node can only support a lower-priority mode, it indicates that mode in the response, and this mode is selected as the working one. Thus, the highest-priority mode common to both nodes is always selected.

A node that supports only 10Base-T technology sends Manchester pulses every 16 ms to check the continuity of the line connecting it to the neighbouring node. Such a node does not understand the FLP request that an Auto-negotiation node sends it, and continues to send its pulses. A node that receives only line continuity check pulses in response to its FLP request concludes that its partner can work only according to the 10Base-T standard, and sets this operating mode for itself.
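The mode-selection logic described above can be sketched as follows; the priority list mirrors the list given earlier, and the function is an illustration rather than the actual FLP word exchange.

# Illustrative sketch of Auto-negotiation mode selection (not the real FLP bit layout).
# Modes are listed from lowest to highest priority, as in the list above.
PRIORITY = [
    "10Base-T",
    "10Base-T full-duplex",
    "100Base-TX",
    "100Base-T4",
    "100Base-TX full-duplex",
]

def best_common_mode(local_modes, remote_modes):
    """Return the highest-priority mode supported by both ends, or None."""
    common = set(local_modes) & set(remote_modes)
    for mode in reversed(PRIORITY):          # walk from highest priority down
        if mode in common:
            return mode
    return None

print(best_common_mode(
    ["10Base-T", "100Base-TX", "100Base-TX full-duplex"],   # e.g. a network adapter
    ["10Base-T", "100Base-TX"],                              # e.g. a hub port
))  # -> '100Base-TX'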

Physical layer 100Base-T4 - twisted pair UTP Cat 3, four pairs

The 100Base-T4 specification was designed to leverage existing Category 3 twisted-pair wiring for high-speed Ethernet. This specification improves overall throughput by simultaneously transmitting bit streams across all 4 cable pairs.

The 100Base-T4 specification appeared later than the other Fast Ethernet physical layer specifications. The developers of this technology primarily wanted to create physical specifications as close as possible to the 10Base-T and 10Base-F specifications, which worked over two data lines: two pairs or two fibers. To implement operation over two twisted pairs, however, they had to switch to the higher-quality Category 5 cable.

At the same time, the developers of the competing 100VG-AnyLAN technology relied from the start on Category 3 twisted pair; its main advantage was not so much its cost as the fact that it was already installed in the overwhelming majority of buildings. Therefore, after the release of the 100Base-TX and 100Base-FX specifications, the developers of Fast Ethernet technology implemented their own physical layer variant for Category 3 twisted pair.

Instead of 4B/5B coding, this method uses 8B/6T coding, which has a narrower signal spectrum and, at a rate of 33 Mbps, fits into the 16 MHz band of a Category 3 twisted-pair cable (with 4B/5B coding, the signal spectrum does not fit into this band). Every 8 bits of MAC layer information are encoded with 6 ternary symbols, that is, digits with three states. Each ternary digit lasts 40 ns. A group of 6 ternary digits is then transmitted onto one of the three transmitting twisted pairs, independently and sequentially.

The fourth pair is always used to listen for the carrier, for collision detection. The data rate on each of the three transmitting pairs is 33.3 Mbps, so the total rate of the 100Base-T4 protocol is 100 Mbps. At the same time, because of the adopted coding method, the signal change rate on each pair is only 25 Mbaud, which allows a Category 3 twisted-pair cable to be used.
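The figures above can be checked with a few lines of arithmetic (a sketch, with values taken from the text):

# Worked arithmetic for 100Base-T4 (values taken from the text above).
total_rate_mbps = 100.0
tx_pairs = 3                                 # the fourth pair only listens for collisions
rate_per_pair = total_rate_mbps / tx_pairs   # 33.33 Mbit/s of user data per pair
# 8B/6T: every 8 bits become 6 ternary symbols, so the symbol rate is 6/8 of the bit rate
baud_per_pair = rate_per_pair * 6 / 8        # 25 Mbaud, which Category 3 cable can carry

print(round(rate_per_pair, 2), round(baud_per_pair, 1))   # 33.33 25.0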

Fig. 3.23 shows the connection of the MDI port of a 100Base-T4 network adapter to the MDI-X port of a hub (the X suffix indicates that at this connector the receiver and transmitter connections are swapped between the cable pairs compared with the network adapter connector, which makes it easier to connect the wire pairs in the cable - without crossing them). Pair 1-2 is always used to transfer data from the MDI port to the MDI-X port, pair 3-6 to receive data at the MDI port from the MDI-X port, and pairs 4-5 and 7-8 are bidirectional and are used for both receiving and transmitting as needed.


Connection of nodes according to the 100Base-T4 specification

Fast Ethernet - the IEEE 802.3u specification, officially adopted on October 26, 1995 - defines a data link protocol standard for networks operating over both copper and fiber-optic cables at a speed of 100 Mbps. The new specification is the successor of the Ethernet IEEE 802.3 standard, using the same frame format, the CSMA/CD media access mechanism and the star topology. Several physical layer configuration elements have evolved to increase throughput, including cable types, segment lengths, and the number of hubs.

Physical layer

The Fast Ethernet standard defines three types of 100 Mbps Ethernet signaling media.

· 100Base-TX - two twisted pairs of wires. Transmission is carried out in accordance with the standard for data transmission over a twisted-pair physical medium developed by ANSI (American National Standards Institute). The twisted-pair data cable can be shielded or unshielded. It uses the 4B/5B data coding algorithm and the MLT-3 physical coding method.

· 100Base-FX - two-core fiber optic cable. Transmission is also carried out in accordance with the ANSI standard for data transmission over fiber-optic media. It uses the 4B/5B data coding algorithm and the NRZI physical coding method.

· 100Base-T4 is a special specification developed by the IEEE 802.3u committee. According to this specification, data transmission is carried out over four twisted pairs of telephone cable, called UTP Category 3 cable. It uses the 8B/6T data coding algorithm and the NRZI physical coding method.

Multimode cable

This type of fiber optic cable uses a fiber with a 50 or 62.5 micrometer core and a 125 micrometer outer sheath. Such a cable is called a 50/125 (62.5 / 125) micrometer multimode fiber optic cable. An LED transceiver with a wavelength of 850 (820) nanometers is used to transmit a light signal over a multimode cable. If a multimode cable connects two ports of switches operating in full duplex mode, then it can be up to 2000 meters long.

Single mode cable

Single-mode fiber has a smaller core diameter (about 10 micrometers) than multimode fiber, and a laser transceiver is used for transmission over single-mode cable; together this provides efficient transmission over long distances. The wavelength of the transmitted light signal is 1300 nanometers, a value known as the zero-dispersion wavelength. In a single-mode cable, dispersion and signal loss are very low, which allows light signals to be transmitted over longer distances than with multimode fiber.


38. Gigabit Ethernet technology, general characteristics, specification of the physical environment, basic concepts.
3.7.1. General characteristics of the standard

Soon after Fast Ethernet products hit the market, network integrators and administrators began to feel certain limitations when building corporate networks. In many cases, servers connected over 100 Mbps channels overloaded network backbones that also operated at 100 Mbps - FDDI and Fast Ethernet backbones. There was a need for the next level of the speed hierarchy. In 1995, only ATM switches could provide a higher speed level, but at that time there were no convenient means of migrating this technology to local networks (although the LAN Emulation (LANE) specification was adopted in early 1995, its practical implementation was still ahead), so almost no one dared to deploy it in a local network. In addition, ATM technology was distinguished by a very high cost.

So the next step taken by the IEEE seemed logical - 5 months after the final adoption of the Fast Ethernet standard, in June 1995, the IEEE High Speed Technology Research Group was instructed to study the possibility of developing an Ethernet standard with an even higher bit rate.

In the summer of 1996, the 802.3z working group was announced to develop a protocol as similar to Ethernet as possible, but with a bit rate of 1000 Mbps. As with Fast Ethernet, the announcement was received with great enthusiasm by Ethernet proponents.



The main reason for the enthusiasm was the prospect of the same smooth migration of network backbones to Gigabit Ethernet, similar to the migration of congested Ethernet segments at the lower levels of the network hierarchy to Fast Ethernet. In addition, experience in transferring data at gigabit speeds already existed, both in wide-area networks (SDH technology) and in local networks - Fiber Channel technology, which is mainly used to connect high-speed peripherals to large computers and transmits data over fiber-optic cable at near-gigabit speed using the redundant 8B/10B code.

The first version of the standard was reviewed in January 1997, and the 802.3z standard was finally adopted on June 29, 1998 at a meeting of the IEEE 802.3 committee. Work on the implementation of Gigabit Ethernet on twisted pair category 5 was transferred to a special committee 802.3ab, which has already considered several versions of the draft of this standard, and since July 1998 the project has become quite stable. The final adoption of the 802.3ab standard is expected in September 1999.

Without waiting for the standard to be adopted, some companies released the first Gigabit Ethernet equipment on fiber optic cable by the summer of 1997.

The main idea of the developers of the Gigabit Ethernet standard is to preserve the ideas of classical Ethernet technology as much as possible while reaching a bit rate of 1000 Mbps.

Since some technical innovations following the general course of development of network technologies are naturally expected when a new technology is developed, it is important to note what Gigabit Ethernet, like its slower counterparts, will not support at the protocol level:

  • quality of service;
  • redundant connections;
  • testing the operability of nodes and equipment (in the latter case - with the exception of testing port-to-port communication, as is done for Ethernet 10Base-T and 10Base-F and Fast Ethernet).

All three named properties are considered very promising and useful in modern networks, and especially in the networks of the near future. Why are the authors of Gigabit Ethernet abandoning them?

The main idea of the developers of Gigabit Ethernet technology is that there are, and will continue to be, a great many networks in which the high speed of the backbone and the ability to assign packet priorities in the switches are sufficient to ensure the quality of transport service for all network clients. Only in those rare cases where the backbone is heavily loaded and the quality-of-service requirements are very strict is it necessary to use ATM technology, which, at the cost of high technical complexity, guarantees quality of service for all major types of traffic.


39. Structural cabling system used in network technologies.
A Structured Cabling System (SCS) is a set of switching elements (cables, connectors, patch panels and cabinets), as well as a technique for their joint use, which allows regular, easily expandable communication structures to be created in computer networks.

The structured cabling system is a kind of construction kit with which the network designer builds the required configuration from standard cables connected by standard connectors and cross-connected on standard patch panels. If necessary, the configuration of connections can easily be changed - a computer, segment or switch can be added, unneeded equipment removed, and the connections between computers and hubs changed.

When building a structured cabling system, it is assumed that each workplace in the enterprise should be equipped with sockets for connecting a telephone and a computer, even if this is not required at the moment. That is, a good structured cabling system is redundant. This can save money in the future, as changes to the connection of new devices can be made by re-connecting existing cables.

A typical hierarchical structure of a structured cabling system includes:

  • horizontal subsystems (within a floor);
  • vertical subsystems (inside the building);
  • a campus subsystem (within one territory with several buildings).

The horizontal subsystem connects the floor wiring closet to the users' outlets. Subsystems of this type correspond to the floors of a building. The vertical subsystem connects the wiring closets of each floor to the central equipment room of the building. The next step in the hierarchy is the campus subsystem, which connects several buildings to the main equipment room of the entire campus. This part of the cabling system is commonly referred to as the backbone.

There are many advantages to using structured cabling instead of chaotic cables.

· Versatility. A structured cabling system with a well-thought-out organization can become a unified medium for transferring computer data in a local computer network, organizing a local telephone network, transmitting video information and even transmitting signals from fire safety sensors or security systems. This allows you to automate many processes of control, monitoring and management of economic services and life support systems of the enterprise.

· Increased service life. A well-designed structured cabling system can take 10-15 years to become obsolete.

· Reduced cost of adding new users and moving them. It is known that the cost of a cabling system is significant and is determined mainly not by the cost of the cable, but by the cost of laying it. Therefore, it is more profitable to lay the cable once, possibly with a large margin in length, than to lay it several times, increasing the cable length each time. With this approach, all work on adding or moving a user is reduced to connecting the computer to an existing outlet.

· Possibility of easy network expansion. The structured cabling system is modular and therefore easy to expand. For example, a new subnet can be added to a trunk without affecting the existing subnets. You can change the cable type on a separate subnet independently of the rest of the network. The structured cabling system is the basis for dividing the network into easily manageable logical segments, since it is itself already divided into physical segments.

· Providing more efficient service. The structured cabling system is easier to service and troubleshoot than bus cabling. In the case of bus cabling, the failure of one of the devices or connecting elements leads to a difficult-to-locate failure of the entire network. In structured cabling systems, the failure of one segment does not affect others, since the aggregation of segments is carried out using hubs. Concentrators diagnose and localize the faulty area.

· Reliability. A structured cabling system has increased reliability, since the manufacturer of such a system guarantees not only the quality of its individual components, but also their compatibility.


40. Hubs and network adapters, principles, use, basic concepts.
Hubs, along with network adapters and cabling, represent the minimum amount of equipment that can be used to create a local area network. Such a network will represent a common shared environment.

A network adapter (Network Interface Card, NIC), together with its driver, implements the second, data link layer of the open systems model in the end node of the network - the computer. More precisely, in a network operating system the adapter/driver pair performs only the functions of the physical and MAC layers, while the LLC layer is usually implemented by an operating system module that is common to all drivers and network adapters. This is how it should be according to the model of the IEEE 802 protocol stack. For example, in Windows NT the LLC layer is implemented in the NDIS module, which is common to all network adapter drivers regardless of which technology the driver supports.

The network adapter together with the driver perform two operations: frame transmission and reception.

In adapters for client computers, much of the work is shifted to the driver, making the adapter simpler and cheaper. The disadvantage of this approach is the high degree of loading of the computer's central processor by routine work on transferring frames from the computer's RAM to the network. The central processor is forced to do this work instead of performing the user's application tasks.

The network adapter must be configured before being installed in a computer. Configuring an adapter typically specifies the IRQ used by the adapter, the DMA channel (if the adapter supports DMA mode), and the base address of the I / O ports.

Almost all modern local network technologies define a device that has several equivalent names - hub, concentrator, repeater. Depending on the field of application of this device, the composition of its functions and its design change significantly. Only the main function remains unchanged - repeating a frame either on all ports (as defined in the Ethernet standard) or only on certain ports, according to the algorithm defined by the corresponding standard.

A hub usually has several ports to which the end nodes of the network - computers - are connected by separate physical cable segments. The hub combines individual physical network segments into a single shared medium, access to which is carried out in accordance with one of the LAN protocols considered - Ethernet, Token Ring, and so on. Each technology has its own hubs - Ethernet, Token Ring, FDDI and 100VG-AnyLAN. For a specific protocol, a highly specialized name for this device is sometimes used that reflects its functions more precisely or is kept by force of tradition; for example, the name MSAU is characteristic of Token Ring concentrators.

Each hub performs a basic function defined in the corresponding protocol of the technology it supports. Although this function is defined in some detail in the technology standard, hubs from different manufacturers may differ in implementation details such as the number of ports, support for several cable types, and so on.

In addition to the main function, the hub can perform a number of additional functions, which are either not defined at all in the standard, or are optional. For example, a Token Ring hub can perform the function of shutting down malfunctioning ports and switching to a backup ring, although such capabilities are not described in the standard. The hub turned out to be a convenient device for performing additional functions that facilitate the monitoring and operation of the network.


41. The use of bridges and switches, principles, features, examples, limitations
Structuring with bridges and switches

The network can be divided into logical segments using two types of devices - bridges and/or switches (switching hubs).

A bridge and a switch are functional twins. Both devices forward frames on the basis of the same algorithms. Bridges and switches use two types of algorithm: the transparent bridge algorithm, described in the IEEE 802.1D standard, or the source routing bridge algorithm from IBM for Token Ring networks. These standards were developed long before the first switch appeared, which is why they use the term "bridge". When the first industrial switch model for Ethernet technology was born, it performed the same IEEE 802.1D frame forwarding algorithm that bridges of local and wide-area networks had been refining for a dozen years.

The main difference between a switch and a bridge is that a bridge processes frames sequentially, while a switch processes frames in parallel. This is because bridges appeared in the days when the network was divided into a small number of segments and intersegment traffic was small (it obeyed the 80/20 rule).

Today bridges are still used in networks, but only on fairly slow wide-area links between two remote local networks. Such bridges are called remote bridges, and their operating algorithm is the same 802.1D or Source Routing.

Transparent bridges can, in addition to transmitting frames within the same technology, translate LAN protocols, such as Ethernet to Token Ring, FDDI to Ethernet, etc. This property of transparent bridges is described in the IEEE 802.1H standard.

In what follows, we will call a device that forwards frames using the bridge algorithm and works in a local network by the modern term "switch". When describing the 802.1D and Source Routing algorithms themselves in the next section, we will traditionally call the device a bridge, as it is actually called in these standards.


42. Switches for local networks, protocols, modes of operation, examples.
Each of the 8 10Base-T ports is served by one Ethernet Packet Processor (EPP). In addition, the switch has a system module that coordinates the work of all EPP processors. The system module maintains the general address table of the switch and provides SNMP management of the switch. To transfer frames between ports, a switching fabric is used, similar to those found in telephone switches or multiprocessor computers, connecting multiple processors with multiple memory modules.

The switching matrix works on the circuit-switching principle. For 8 ports, the matrix can provide 8 simultaneous internal channels in half-duplex port mode and 16 in full-duplex mode, when the transmitter and receiver of each port operate independently of each other.

When a frame arrives at a port, the EPP processor buffers the first few bytes of the frame to read the destination address. After receiving the destination address, the processor immediately decides to transfer the packet, without waiting for the remaining bytes of the frame to arrive.

If the frame needs to be transmitted to another port, the processor turns to the switching matrix and tries to establish a path in it connecting its port with the port through which the route to the destination address lies. The switching matrix can do this only if the destination port is free at that moment, that is, not connected to another port. If the port is busy, then, as in any circuit-switched device, the matrix refuses the connection. In this case, the frame is fully buffered by the input port processor, after which the processor waits for the output port to be freed and for the switching matrix to form the desired path. Once the desired path is established, the buffered frame bytes are sent onto it and received by the output port processor. As soon as the output port processor gains access to the attached Ethernet segment using the CSMA/CD algorithm, the frame bytes are immediately transferred to the network. The described method of transmitting a frame without fully buffering it is called "on-the-fly" or "cut-through" switching.

The main reason for the improvement in network performance when a switch is used is the parallel processing of multiple frames. This effect is illustrated in Fig. 4.26. The figure shows an ideal situation in terms of performance, when four of the eight ports transmit data at the maximum speed of 10 Mbps for the Ethernet protocol, and they transmit this data to the other four ports of the switch without conflicts - the data flows between the network nodes are distributed so that each frame-receiving port has its own output port. If the switch manages to process the incoming traffic even at the maximum rate of frame arrival at the input ports, then the overall throughput of the switch in the given example will be 4x10 = 40 Mbps, and when the example is generalized to N ports - (N/2)x10 Mbps. It is said that the switch provides each station or segment connected to its ports with dedicated protocol bandwidth.

Naturally, the situation in the network does not always develop as shown in Fig. 4.26. If two stations, for example the stations connected to ports 3 and 4, simultaneously need to write data to the same server connected to port 8, the switch cannot allocate a 10 Mbps data flow to each station, since port 8 cannot transmit data at 20 Mbps. The stations' frames will wait in the internal queues of input ports 3 and 4 until port 8 is free to transmit the next frame. Obviously, a good solution for such a distribution of data flows would be to connect the server to a higher-speed port, for example a Fast Ethernet port.

Since the main advantage of the switch, thanks to which it has won very good positions in local networks, is its high performance, switch developers try to produce so-called non-blocking switch models.
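The throughput reasoning in this example can be restated as a short sketch; port numbers and speeds follow the example in the text, and the functions are purely illustrative.

# Aggregate throughput of an N-port Ethernet switch in the ideal case described above:
# half of the ports send to the other half, with no two flows sharing an output port.
def ideal_aggregate_mbps(ports, port_speed_mbps=10):
    return (ports // 2) * port_speed_mbps

print(ideal_aggregate_mbps(8))    # 40 Mbit/s for the 8-port example

# If two 10 Mbit/s ports (3 and 4) both send to the same server on port 8,
# port 8 can still drain only 10 Mbit/s, so frames queue at the input ports.
offered_to_port8 = 10 + 10        # Mbit/s offered by ports 3 and 4
drained_by_port8 = 10             # Mbit/s the output port can carry
backlog_rate = offered_to_port8 - drained_by_port8
print(backlog_rate)               # 10 Mbit/s accumulates in the input queues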


43. Algorithm of the transparent bridge.
Transparent bridges are invisible to the network adapters of the end nodes, since they independently build a special address table on the basis of which they can decide whether an incoming frame needs to be transferred to another segment or not. When transparent bridges are used, network adapters work exactly as they would in their absence, that is, they take no additional action for a frame to pass through the bridge. The transparent bridge algorithm does not depend on the LAN technology in which the bridge is installed, so transparent Ethernet bridges work just like transparent FDDI bridges.

A transparent bridge builds its address table based on passive monitoring of traffic circulating in segments connected to its ports. In this case, the bridge takes into account the addresses of the sources of data frames arriving on the bridge ports. Based on the frame source address, the bridge concludes that this node belongs to one or another network segment.

Let us consider the process of automatic creation of the bridge address table, and its use, using the example of the simple network shown in Fig. 4.18.

Fig. 4.18. How a transparent bridge works

The bridge connects two logical segments. Segment 1 consists of computers connected with one length of coaxial cable to port 1 of the bridge, and segment 2 consists of computers connected with another length of coaxial cable to port 2 of the bridge.

Each bridge port acts as an end node of its segment, with one exception - a bridge port does not have its own MAC address. The bridge port operates in so-called promiscuous packet-capture mode, in which all packets arriving at the port are stored in buffer memory. With the help of this mode the bridge monitors all traffic transmitted in the segments attached to it and uses the packets passing through it to learn the composition of the network. Since every packet is written to the buffer, the bridge does not need a port address.

In its initial state the bridge knows nothing about which MAC addresses the computers connected to each of its ports have. Therefore the bridge simply transmits any captured and buffered frame on all its ports except the one from which the frame was received. In our example the bridge has only two ports, so it transmits frames from port 1 to port 2 and vice versa. When the bridge is about to transmit a frame from segment to segment, for example from segment 1 to segment 2, it again tries to access segment 2 as an end node according to the rules of the access algorithm, in this example the CSMA/CD algorithm.

Simultaneously with transmitting the frame to all ports, the bridge learns the source address of the frame and makes a new entry in its address table, also called the filtering or routing table, recording which port that address belongs to.

After the bridge has passed the learning stage, it can operate more efficiently. When it receives a frame directed, for example, from computer 1 to computer 3, it scans the address table for an entry matching destination address 3. Since such an entry exists, the bridge performs the second stage of table analysis - it checks whether the computers with the source address (in our case address 1) and the destination address (address 3) are in the same segment. Since in our example they are in different segments, the bridge performs the frame forwarding operation - it transmits the frame to the other port, having first gained access to the other segment.

If the destination address is unknown, then the bridge transmits the frame to all its ports, except for the port - the source of the frame, as in the initial stage of the learning process.
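A minimal sketch of the learning and forwarding logic just described might look like this (class and method names are illustrative):

# Minimal sketch of a transparent (learning) bridge, following the behaviour described above.
class TransparentBridge:
    def __init__(self, ports):
        self.ports = ports          # e.g. [1, 2]
        self.table = {}             # MAC address -> port (the filtering table)

    def receive(self, in_port, src_mac, dst_mac):
        """Return the list of ports the frame is forwarded to."""
        self.table[src_mac] = in_port             # learning: remember where src lives
        out = self.table.get(dst_mac)
        if out is None:                            # unknown destination: flood
            return [p for p in self.ports if p != in_port]
        if out == in_port:                         # same segment: filter (drop)
            return []
        return [out]                               # known destination: forward

bridge = TransparentBridge([1, 2])
print(bridge.receive(1, "00:00:00:00:00:01", "00:00:00:00:00:03"))  # unknown -> flood to [2]
print(bridge.receive(2, "00:00:00:00:00:03", "00:00:00:00:00:01"))  # learned -> forward to [1]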


44. Bridges with routing from the source.
Source-routed bridging is used to connect Token Ring and FDDI rings, although transparent bridging can also be used for the same purpose. Source Routing (SR) is based on the fact that the sending station places in the frame sent to another ring all the address information about the intermediate bridges and rings that the frame must pass through before reaching the ring to which the destination station is connected.

Let us consider the principles of operation of Source Routing bridges (hereafter, SR bridges) using the example of the network shown in Fig. 4.21. The network consists of three rings connected by three bridges. Rings and bridges have identifiers used to define the route. SR bridges do not build an address table; when forwarding frames they use the information contained in the corresponding fields of the data frame.

Fig. 4.21. Source Routing bridges

Upon receipt of each packet, an SR bridge only needs to look in the Routing Information Field (RIF) of the Token Ring or FDDI frame for its own identifier. If it is present there and is followed by the identifier of a ring connected to this bridge, the bridge copies the incoming frame onto that ring. Otherwise, the frame is not copied to the other ring. In either case, the original copy of the frame returns along its original ring to the sending station, and if it was transmitted to another ring, the A (address recognized) and C (frame copied) bits of the frame status field are set to 1 to inform the sending station that the frame was received by the destination station (in this case, passed on by the bridge to another ring).
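A much simplified sketch of this forwarding decision is shown below; the flat ring/bridge list stands in for the real RIF route descriptors, whose exact bit layout is not reproduced here.

# Simplified sketch of the Source Routing forwarding decision described above.
# A route is modelled as a flat list: ring, bridge, ring, bridge, ..., ring.
# The real RIF packs these into route descriptors; this list is only an illustration.
def sr_bridge_copies_frame(route, my_bridge_id, my_rings):
    """Return the ring the bridge should copy the frame to, or None."""
    for i, item in enumerate(route):
        if i % 2 == 1 and item == my_bridge_id:     # odd positions hold bridge identifiers
            next_ring = route[i + 1]
            if next_ring in my_rings:                # our id is followed by one of our rings
                return next_ring
    return None

# Frame travelling ring 1 -> bridge 1 -> ring 2 -> bridge 3 -> ring 3:
print(sr_bridge_copies_frame([1, 1, 2, 3, 3], my_bridge_id=3, my_rings={2, 3}))  # -> 3
print(sr_bridge_copies_frame([1, 1, 2, 3, 3], my_bridge_id=2, my_rings={1, 3}))  # -> None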

Since routing information in a frame is needed not always, but only for frame transmission between stations connected to different rings, the presence of the RIF field is indicated by setting the individual/group address (I/G) bit of the source address to 1 (in this case the bit is not used for its intended purpose, since a source address is always individual).

The RIF has a three-part control subfield.

  • The frame type subfield defines the type of the RIF field. Different types of RIF fields are used for finding a route and for sending a frame along a known route.
  • The maximum frame length subfield is used by a bridge connecting rings with different MTU values. Using this subfield, the bridge notifies the station of the maximum possible frame length (that is, the minimum MTU value along the whole composite route).
  • The RIF length subfield is necessary because the number of route descriptors specifying the identifiers of the rings and bridges crossed is not known in advance.

For the source routing algorithm to work, two additional frame types are used - a single-route broadcast frame (SRBF) and an all-route broadcast frame (ARBF).

All SR bridges must be manually configured by the administrator to send ARBF frames to all ports except the source port of the frame, and for SRBF frames, some bridge ports must be blocked to avoid network loops.

Advantages and Disadvantages of Source Routing Bridges

45. Switches: technical implementation, functions, characteristics that affect their work.
Features of the technical implementation of switches. Many first-generation switches were similar to routers, that is, they were based on a general-purpose central processing unit connected to the interface ports via an internal high-speed bus. The main drawback of these switches was their low speed: the general-purpose processor could by no means cope with the large volume of specialized operations needed to transfer frames between the interface modules. Besides the port processor chips, for successful non-blocking operation a switch also needs a high-speed node for transferring frames between the port processor chips. Switches currently use one of three schemes as the basis on which such an exchange node is built:

  • switching matrix;
  • shared multi-input memory;
  • common bus.

Fast Ethernet

Fast Ethernet - the IEEE 802.3u specification, officially adopted on October 26, 1995 - defines a data link protocol standard for networks operating over both copper and fiber-optic cables at a speed of 100 Mbps. The new specification is the successor of the Ethernet IEEE 802.3 standard, using the same frame format, the CSMA/CD media access mechanism and the star topology. Several physical layer configuration elements have evolved to increase throughput, including cable types, segment lengths, and the number of hubs.

Fast Ethernet structure

To better understand the operation and interaction of the Fast Ethernet elements, refer to Figure 1.

Figure 1. Fast Ethernet System

Logic Link Control (LLC) Sublayer

The IEEE 802.3u specification divides the data link layer functions into two sublayers: logical link control (LLC) and media access control (MAC), which will be discussed below. The LLC, whose functions are defined by the IEEE 802.2 standard, actually provides the interconnection with higher-level protocols (for example, IP or IPX), offering various communication services:

  • Service without connection and acknowledgment of receipt. A simple service that does not provide flow control or error control, and does not guarantee correct delivery of data.
  • Connection-oriented service. An absolutely reliable service that guarantees correct data delivery by establishing a connection to the receiving system before the data transfer begins and using error control and data flow control mechanisms.
  • Connectionless service with acknowledgments. A moderately complex service that uses acknowledgment messages to ensure delivery, but does not establish connections until data is sent.

On the transmitting system, data passed down from the network layer protocol is first encapsulated by the LLC sublayer. The standard calls the resulting unit a Protocol Data Unit (PDU). When the PDU is passed down to the MAC sublayer, where it is again framed with header and trailer information, it can technically be called a frame from that point on. For an Ethernet packet, this means that the 802.3 frame contains a three-byte LLC header in addition to the network-layer data. Thus, the maximum allowable data length in each packet is reduced from 1500 to 1497 bytes.

The LLC header consists of three fields: DSAP (destination service access point), SSAP (source service access point) and Control.

In some cases, LLC frames play a minor role in the network communication process. For example, in a network using TCP/IP along with other protocols, the only function of LLC may be to allow 802.3 frames to contain a SNAP header that, like an Ethertype, indicates the network-layer protocol to which the frame should be passed. In this case, all LLC PDUs use the unnumbered information format. However, other higher-level protocols require more advanced services from LLC. For example, NetBIOS sessions and several NetWare protocols make wider use of LLC connection-oriented services.

SNAP header

The receiving system needs to determine which of the network-layer protocols should receive the incoming data. For this, 802.3 packets use another protocol within the LLC PDU, called the Sub-Network Access Protocol (SNAP).

The SNAP header is 5 bytes long and is located immediately after the LLC header in the data field of the 802.3 frame, as shown in the figure. The header contains two fields.

Organization code. The Organization or Vendor ID is a 3-byte field that takes the same value as the first 3 bytes of the sender's MAC address in the 802.3 header.

Local code. The local code is a 2-byte field that is functionally equivalent to the Ethertype field in the Ethernet II header.
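Assuming the conventional DSAP/SSAP value 0xAA and control value 0x03 used to signal a SNAP header, the LLC + SNAP layout described above could be sketched as follows (the builder function is illustrative):

import struct

# Illustrative sketch of an LLC + SNAP header as carried in the 802.3 data field.
# DSAP = SSAP = 0xAA and Control = 0x03 (unnumbered information) are the values
# conventionally used to signal a SNAP header; the OUI/local-code layout follows the text.
def llc_snap_header(oui: bytes, local_code: int) -> bytes:
    dsap, ssap, control = 0xAA, 0xAA, 0x03
    llc = struct.pack("!BBB", dsap, ssap, control)       # 3-byte LLC header
    snap = oui + struct.pack("!H", local_code)           # 3-byte OUI + 2-byte local code
    return llc + snap                                    # 8 bytes in front of the payload

hdr = llc_snap_header(b"\x00\x00\x00", 0x0800)           # local code 0x0800, as an Ethertype for IPv4
print(hdr.hex())                                         # -> 'aaaa030000000800'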

Reconciliation sublayer

As stated earlier, Fast Ethernet is an evolutionary standard. The MAC, designed for the AUI interface, must be adapted to the MII interface used in Fast Ethernet; that is what this sublayer is for.

Media Access Control (MAC)

Each node in a Fast Ethernet network has a Media Access Controller (MAC). The MAC is key to Fast Ethernet and has three purposes:

The most important of the three MAC purposes is the first. For any network technology that uses a shared medium, the media access rules that determine when a node may transmit are its primary characteristic. Several IEEE committees are involved in developing media access rules. The 802.3 committee, often called the Ethernet committee, defines LAN standards that use rules called CSMA/CD (Carrier Sense Multiple Access with Collision Detection).

CSMA/CD defines the media access rules for both Ethernet and Fast Ethernet. It is in this area that the two technologies coincide completely.

Since all nodes in Fast Ethernet share the same medium, they can only transmit when it is their turn. This queue is defined by CSMA / CD rules.

CSMA / CD

The Fast Ethernet MAC listens for the carrier before transmitting. A carrier exists only while another node is transmitting. The PHY layer detects the presence of the carrier and generates a message for the MAC. The presence of a carrier indicates that the medium is busy and the listening node (or nodes) must yield to the transmitting one.

A MAC that has a frame to transmit must wait a minimum amount of time after the end of the previous frame before transmitting it. This time is called the interpacket gap (IPG) and lasts 0.96 microseconds - one tenth of the interpacket gap of 10 Mbps Ethernet (the IPG is the only time interval always specified in microseconds rather than in bit times); see Figure 2.


Figure 2. Interpacket gap

After the end of packet 1, all LAN nodes must wait for the IPG time before they may transmit. The time intervals between packets 1 and 2 and between packets 2 and 3 in Fig. 2 are the IPG time. After packet 3 had been transmitted, no node had data to transmit, so the interval between packets 3 and 4 is longer than the IPG.

All nodes on the network must comply with these rules. Even if a node has many frames to transmit and this node is the only transmitting one, then after sending each packet it must wait for at least IPG time.

This is part of the CSMA Fast Ethernet Media Access Rules. In short, many nodes have access to the medium and use the carrier to keep track of whether it is busy.

The early experimental networks applied exactly these rules, and such networks worked very well. However, using CSMA alone led to a problem. Often two nodes, each having a packet to transmit and having waited the IPG time, would start transmitting at the same time, corrupting the data of both. This situation is called a collision (conflict).

To overcome this obstacle, early protocols used a fairly simple mechanism. Packets were divided into two categories: commands and responses. Each command sent by a node required a response. If no response was received within a certain time (the timeout period) after the command was sent, the command was issued again. This could happen several times (up to the maximum number of timeouts) before the transmitting node registered an error.

This scheme could work fine, but only up to a point. Collisions caused a dramatic drop in performance (usually measured in bytes per second), because nodes often stood idle waiting for responses to commands that would never reach their destination. Network congestion and growth in the number of nodes are directly related to growth in the number of collisions and, consequently, to lower network performance.

Early network designers quickly found a solution to this problem: each node must detect the loss of a transmitted packet by detecting the collision itself (rather than waiting for a response that will never come). This means that packets lost to a collision can be retransmitted immediately, without waiting for the timeout to expire. If a node has transmitted the last bit of a packet without a collision, the packet has been transmitted successfully.

Carrier sense can be combined well with collision detection. Collisions still continue to occur, but this does not affect the performance of the network, as the nodes quickly get rid of them. The DIX group, having developed the rules for accessing the CSMA / CD environment for Ethernet, formalized them in the form of a simple algorithm - Figure 3.


Figure 3. Algorithm of CSMA / CD operation
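A simplified transmit-side sketch of this algorithm is given below; the medium object and its methods are assumed helpers, and the backoff follows the standard truncated binary exponential rule.

import random

SLOT_TIME_US = 5.12          # Fast Ethernet slot time (51.2 us for 10 Mbit/s Ethernet)
IPG_US = 0.96                # Fast Ethernet interpacket gap (9.6 us at 10 Mbit/s)
MAX_ATTEMPTS = 16

def csma_cd_send(frame, medium):
    """Simplified CSMA/CD transmit loop; `medium` is an assumed abstraction of the PHY."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while medium.carrier_present():      # 1. defer while another node is transmitting
            pass
        medium.wait(IPG_US)                  # 2. wait out the interpacket gap
        if medium.transmit(frame):           # 3. transmit; True means no collision detected
            return True
        medium.send_jam()                    # 4. collision: reinforce it with a jam signal
        k = min(attempt, 10)                 # 5. truncated binary exponential backoff
        medium.wait(random.randint(0, 2 ** k - 1) * SLOT_TIME_US)
    return False                             # too many collisions: report an error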

Physical layer device (PHY)

Since Fast Ethernet can use different types of cable, a specific signal conversion is required for each medium. Conversion is also required for efficient data transmission: to make the transmitted code resistant to interference and to possible loss or distortion of its individual elements (bauds), and to ensure effective synchronization of the clocks on the transmitting and receiving sides.

Physical Coding Sublayer (PCS)

Encodes/decodes the data coming from/to the MAC layer using the 4B/5B or 8B/6T algorithm.

Physical Medium Attachment and Physical Medium Dependent sublayers (PMA and PMD)

The PMA and PMD sublayers provide the link between the PCS sublayer and the MDI interface, forming the signal in accordance with the physical coding method: NRZI or MLT-3.

Auto-negotiation sublayer (AUTONEG)

The auto-negotiation sublayer allows two communicating ports to automatically select the most efficient mode of operation: full-duplex or half-duplex, 10 or 100 Mbps.

Physical layer

The Fast Ethernet standard defines three types of 100 Mbps Ethernet signaling media.

  • 100Base-TX - two twisted pairs of wires. Transmission is carried out in accordance with the standard for data transmission over a twisted-pair physical medium developed by ANSI (American National Standards Institute). The twisted-pair data cable can be shielded or unshielded. It uses the 4B/5B data coding algorithm and the MLT-3 physical coding method.
  • 100Base-FX - two-core fiber optic cable. Transmission is also carried out in accordance with the ANSI standard for data transmission over fiber-optic media. It uses the 4B/5B data coding algorithm and the NRZI physical coding method.

The 100Base-TX and 100Base-FX specifications are also known collectively as 100Base-X.

  • 100Base-T4 is a special specification developed by the IEEE 802.3u committee. According to this specification, data transmission is carried out over four twisted pairs of telephone cable, called UTP Category 3 cable. It uses the 8B/6T data coding algorithm and the NRZI physical coding method.

Additionally, the Fast Ethernet standard includes recommendations for Type 1 shielded twisted pair (STP) cable, the cable traditionally used in Token Ring networks. The support and guidelines for using STP cable in Fast Ethernet give customers with STP cabling a path to Fast Ethernet.

The Fast Ethernet specification also includes an auto-negotiation mechanism that allows a host port to automatically adjust to a data transfer rate of 10 Mbps or 100 Mbps. This mechanism is based on the exchange of a number of packets with a port of a hub or switch.

100Base-TX environment

Two twisted pairs are used as the transmission medium for 100Base-TX, with one pair being used to transmit data and the other to receive them. Since the ANSI TP-PMD specification contains descriptions of both shielded and unshielded twisted pairs, the 100Base-TX specification includes support for both unshielded and shielded type 1 and 7 twisted pairs.

MDI (Medium Dependent Interface) connector

The media-dependent interface of a 100Base-TX link can be one of two types. For unshielded twisted-pair cable, an 8-pin Category 5 RJ-45 connector is used as the MDI connector; the same connector is used in 10Base-T networks, providing backward compatibility with existing Category 5 cabling. For shielded twisted-pair cable, the IBM STP Type 1 connector is used, which is a shielded DB-9 connector commonly used in Token Ring networks.

Category 5 (e) UTP cable

The UTP 100Base-TX media interface uses two pairs of wires. To minimize crosstalk and possible signal distortion, the remaining four wires should not be used to carry any signals. The transmit and receive signals of each pair are polarized: one wire carries the positive (+) signal and the other the negative (-) signal. The color coding of the cable wires and the connector pin numbers for a 100Base-TX network are shown in Table 1. Although the 100Base-TX PHY layer was developed after the ANSI TP-PMD standard, the RJ-45 connector pin numbers were changed to match the pinout already used in 10Base-T. The ANSI TP-PMD standard uses pins 7 and 9 to receive data, while the 100Base-TX and 10Base-T standards use pins 3 and 6. This wiring makes it possible to use 100Base-TX adapters in place of 10Base-T adapters and connect them to the same Category 5 cables without rewiring. In the RJ-45 connector, the pairs of wires used are connected to pins 1, 2 and 3, 6. To connect the wires correctly, follow their color coding.

Table 1. MDI connector pin assignments for 100Base-TX UTP cable
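The commonly published pin assignment behind this table can be written down as a small lookup (given here for illustration; only pins 1, 2, 3 and 6 carry signals):

# Well-known MDI pin assignment for 100Base-TX over UTP (same pins as 10Base-T).
# Only pins 1, 2, 3 and 6 carry signals; the remaining four wires stay unused.
MDI_PINS = {
    1: "TX+",   # transmit, positive wire of pair 1-2
    2: "TX-",   # transmit, negative wire of pair 1-2
    3: "RX+",   # receive,  positive wire of pair 3-6
    6: "RX-",   # receive,  negative wire of pair 3-6
}

# On an MDI-X port (hub or switch) the same pins carry the opposite signals,
# which is why a straight-through cable works between an adapter and a hub.
MDIX_PINS = {1: "RX+", 2: "RX-", 3: "TX+", 6: "TX-"}

for pin in (1, 2, 3, 6):
    print(f"pin {pin}: adapter {MDI_PINS[pin]} <-> hub {MDIX_PINS[pin]}")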

Nodes interact with each other by exchanging frames. In Fast Ethernet the frame is the basic unit of exchange over the network - any information transmitted between nodes is placed in the data field of one or more frames. Forwarding frames from one node to another is possible only if there is a way to uniquely identify all network nodes, so every node in a LAN has an address called its MAC address. This address is unique: no two nodes in a local network can have the same MAC address. Moreover, in no LAN technology (with the exception of ARCNet) can two nodes anywhere in the world have the same MAC address. Any frame contains at least three main pieces of information: the destination address, the source address and the data. Some frames have other fields as well, but only the three listed are mandatory. Figure 4 shows the Fast Ethernet frame structure.

Figure 4. Fast Ethernet frame structure

  • destination address - specifies the address of the node receiving the data;
  • source address - specifies the address of the node that sent the data;
  • length/type (L/T, Length/Type) - contains information about the type of the transmitted data;
  • frame check sequence (FCS) - intended for checking the correctness of the frame received by the receiving node.

The minimum frame size is 64 octets, or 512 bits (the terms octet and byte are synonyms). The maximum frame size is 1518 octets, or 12144 bits.
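The field layout listed above can be made concrete with a small parser sketch (offsets follow the frame structure above; error handling is minimal):

import struct

def parse_frame(frame: bytes) -> dict:
    """Split a raw Ethernet/Fast Ethernet frame into the fields listed above."""
    if not 64 <= len(frame) <= 1518:
        raise ValueError("frame length outside the 64..1518 octet limits")
    dst, src = frame[0:6], frame[6:12]
    (length_type,) = struct.unpack("!H", frame[12:14])
    data = frame[14:-4]                       # payload (possibly including pad bytes)
    (fcs,) = struct.unpack("!I", frame[-4:])  # 32-bit frame check sequence
    return {
        "destination": dst.hex(":"),
        "source": src.hex(":"),
        "length/type": length_type,
        "data_length": len(data),
        "fcs": hex(fcs),
    }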

Frame addressing

Each node in a Fast Ethernet network has a unique number, called the MAC address or node address. This number consists of 48 bits (6 bytes), is assigned to the network interface when the device is manufactured, and is programmed during initialization. Therefore the network interfaces of all LANs, with the exception of ARCNet, which uses 8-bit addresses assigned by the network administrator, have a built-in unique MAC address that differs from all other MAC addresses on Earth and is assigned by the manufacturer in agreement with the IEEE.

To facilitate the management of network interfaces, the IEEE has proposed dividing the 48-bit address field into four parts, as shown in Figure 5. The first two bits of the address (bits 0 and 1) are address type flags. The value of the flags determines how the address part (bits 2 - 47) is interpreted.


Figure 5. Format of the MAC address

The I/G bit is called the individual/group address flag and shows whether the address is individual or group. An individual address is assigned to only one interface (or node) in the network. Addresses with the I/G bit set to 0 are MAC addresses or node addresses. If the I/G bit is set to 1, the address belongs to a group and is usually called a multipoint (multicast) address or functional address. A group address can be assigned to one or more LAN network interfaces. Frames sent to a multicast address are received or copied by all LAN network interfaces that have this address. Multicast addresses allow a frame to be sent to a subset of the nodes of a local network. If the I/G bit is set to 1, then bits 46 through 0 are treated as a multicast address rather than as the U/L, OUI and OUA fields of a normal address. The U/L bit is called the universal/local control flag and determines how the address was assigned to the network interface. If both bits, I/G and U/L, are set to 0, the address is the unique 48-bit identifier described earlier.

OUI (organizationally unique identifier - organizationally unique identifier). The IEEE assigns one or more OUIs to each manufacturer of network adapters and interfaces. Each manufacturer is responsible for the correct assignment of the OUA (organizationally unique address - organizationally unique address), which should have any device it creates.

When the U/L bit is set to 1, the address is locally administered. This means that it is not specified by the manufacturer of the network interface. Any organization can create its own MAC address for a network interface by setting the U/L bit to 1 and bits 2 through 47 to some chosen value. A network interface, having received a frame, first of all decodes the destination address. When the I/G bit of the address is set, the MAC layer receives the frame only if the destination address is in the list stored at the node. This technique allows one node to send a frame to many nodes.

There is a special multicast address called broadcast address. In a 48-bit IEEE broadcast address, all bits are set to 1. If a frame is transmitted with a destination broadcast address, then all nodes on the network will receive and process it.
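On the wire, the I/G and U/L flags end up as the two least significant bits of the first address byte, so they can be inspected directly; a sketch, with the broadcast check included:

def mac_flags(mac: str) -> dict:
    """Inspect the I/G and U/L flags of a MAC address written as aa:bb:cc:dd:ee:ff."""
    octets = bytes(int(x, 16) for x in mac.split(":"))
    first = octets[0]
    return {
        "group (I/G = 1)": bool(first & 0x01),              # multicast/functional address
        "locally administered (U/L = 1)": bool(first & 0x02),
        "broadcast": octets == b"\xff" * 6,                  # all 48 bits set to 1
        "oui": octets[:3].hex(":") if not (first & 0x02) else None,
    }

print(mac_flags("01:80:c2:00:00:00"))   # a well-known multicast address: group flag set
print(mac_flags("ff:ff:ff:ff:ff:ff"))   # broadcast: group flag and broadcast both true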

Field Length / Type

The L / T (Length / Type) field serves two different purposes:

  • to define the length of the data field of the frame, excluding any pad bytes;
  • to denote the data type in the data field.

The L / T field value between 0 and 1500 is the length of the data field of the frame; a higher value indicates the type of protocol.

In general, the L/T field is a historical legacy of Ethernet standardization in the IEEE, which gave rise to a number of compatibility problems for equipment released before 1983. Ethernet and Fast Ethernet hardware itself never uses the L/T field; the field serves only for coordination with the software that processes the frames (that is, with the protocols). The only truly standard purpose of the L/T field is its use as a length field - the 802.3 specification does not even mention its possible use as a data type field. The standard states: "Frames with a length field value greater than that specified in clause 4.4.2 may be ignored, discarded, or privately used. The use of these frames is outside the scope of this standard."

Summarizing the above, we note that the L/T field is the primary mechanism by which the frame type is distinguished. Fast Ethernet and Ethernet frames in which the L/T field specifies a length (L/T value of 1500 or less) are called 802.3 frames, while frames in which the same field specifies a data type (L/T value > 1500) are called Ethernet II or DIX frames.
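This rule amounts to a one-line check (the 1500-octet boundary follows the text above):

def frame_kind(length_type: int) -> str:
    """Classify a frame by its L/T field, following the rule described above."""
    return "IEEE 802.3 (value is a length)" if length_type <= 1500 else "Ethernet II / DIX (value is a type)"

print(frame_kind(46))       # IEEE 802.3 (value is a length)
print(frame_kind(0x0800))   # Ethernet II / DIX (value is a type), e.g. an IPv4 Ethertype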

Data field

The data field contains the information that one node sends to another. Unlike the other fields, which store very specific information, the data field can contain almost any information, as long as its size is at least 46 and at most 1500 bytes. How the contents of the data field are formatted and interpreted is determined by the protocols.

If data less than 46 bytes long needs to be sent, the LLC layer appends bytes of unspecified value, called pad data, to the end of the data. As a result, the field length becomes 46 bytes.

If the frame is of the 802.3 type, the L/T field indicates the amount of valid data. For example, if a 12-byte message is sent, the L/T field contains the value 12 and the data field contains 34 additional pad bytes. The addition of pad bytes is initiated by the Fast Ethernet LLC layer and is usually implemented in hardware.

The MAC layer does not set the contents of the L/T field - that is done by software. The value of this field is almost always set by the network interface driver.

Frame checksum

The Frame Check Sequence (FCS) ensures that received frames are not corrupted. When the transmitted frame is formed at the MAC layer, a special mathematical formula, the CRC (Cyclic Redundancy Check), is used to calculate a 32-bit value. The resulting value is placed in the FCS field of the frame. All bytes of the frame, starting from the first byte of the destination address and ending with the last byte of the data field, are fed to the input of the MAC layer element that calculates the CRC. The FCS field is the primary and most important Fast Ethernet error detection mechanism.
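Ethernet's FCS uses the widely known CRC-32 polynomial, which is also what Python's zlib.crc32 implements, so the check can be sketched as follows (the byte order of the transmitted FCS is treated loosely here):

import zlib
import struct

def append_fcs(frame_without_fcs: bytes) -> bytes:
    """Append a CRC-32 FCS computed over destination address .. last data byte."""
    fcs = zlib.crc32(frame_without_fcs) & 0xFFFFFFFF
    return frame_without_fcs + struct.pack("<I", fcs)

def fcs_ok(frame: bytes) -> bool:
    """Recompute the CRC at the receiver and compare it with the transmitted FCS."""
    body, (received,) = frame[:-4], struct.unpack("<I", frame[-4:])
    return zlib.crc32(body) & 0xFFFFFFFF == received

frame = append_fcs(b"\xff" * 6 + b"\x00\x01\x02\x03\x04\x05" + b"\x00\x2e" + b"\x00" * 46)
print(fcs_ok(frame))          # True
print(fcs_ok(frame[:-1] + b"\x00"))   # False for a corrupted frame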

DSAP and SSAP Field Values

DSAP/SSAP value - Description:

0x02 - Indiv LLC Sublayer Mgt
0x03 - Group LLC Sublayer Mgt
0x04 - SNA Path Control
0x06 - Reserved (DOD IP)
0xFE - ISO CLNS IS 8473

The 8B6T coding algorithm converts an eight-bit data octet (8B) into a six-digit ternary symbol group (6T). The 6T code groups are designed to be transmitted in parallel over three twisted pairs of the cable, so the effective data transfer rate on each twisted pair is one third of 100 Mbps, that is, 33.33 Mbps. The ternary symbol rate on each twisted pair is 6/8 of 33.3 Mbps, which corresponds to a clock frequency of 25 MHz. This is the frequency at which the MII interface clock operates. Unlike binary signals, which have two levels, the ternary signals transmitted on each pair can have three levels.

Table: 8B6T character encoding (symbol - 6T linear code)

MLT-3 (Multi-Level Transmission - 3) is somewhat similar to the NRZ code, but unlike the latter it has three signal levels.

A logical one corresponds to a transition from one signal level to another, and the level changes sequentially, taking the previous transition into account. When a logical zero is transmitted, the signal does not change.

Like NRZ, this code requires preliminary logical coding (in Fast Ethernet, 4B/5B).
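A small sketch of the MLT-3 rule just described, with the three levels represented as -1, 0 and +1:

# MLT-3 encoder sketch: a '1' bit moves the signal to the next level in the
# repeating sequence 0, +1, 0, -1; a '0' bit leaves the level unchanged.
def mlt3_encode(bits):
    cycle = [0, +1, 0, -1]
    idx, out = 0, []
    for bit in bits:
        if bit == 1:
            idx = (idx + 1) % len(cycle)   # change level only on a logical one
        out.append(cycle[idx])
    return out

print(mlt3_encode([1, 1, 1, 1, 0, 1]))   # [1, 0, -1, 0, 0, 1]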

Compiled on the basis of materials:

  1. Liam Quinn, Richard Russell, "Fast Ethernet";
  2. K. Zakler "Computer Networks";
  3. V.G. and N.A. Olifer "Computer Networks";