Data Link Layer

TCP/IP Suite and OSI Reference Model
• The TCP/IP protocol stack does not define the lower layers of a complete protocol stack
• In this lecture, we will address how the TCP/IP protocol stack interfaces with the data link layer and the MAC sublayer

Data Link Layer
• The main tasks of the data link layer are:
  • Transfer data from the network layer of one machine to the network layer of another machine
  • Convert the raw bit stream of the physical layer into groups of bits ("frames")

Two types of networks at the data link layer:
– Broadcast Networks: All stations share a single communication channel
– Point-to-Point Networks: Pairs of hosts (or routers) are directly connected

• Typically, local area networks (LANs) are broadcast and wide area networks (WANs) are point-to-point

Local Area Networks
• Local area networks (LANs) connect computers within a building or an enterprise network
• Almost all LANs are broadcast networks
• Typical LAN topologies are bus, ring, and star
• We will work with Ethernet LANs. Ethernet has a bus or star topology.

MAC and LLC
• In any broadcast network, the stations must ensure that only one station transmits at a time on the shared communication channel
• The protocol that determines who can transmit on a broadcast channel is called the Medium Access Control (MAC) protocol
• The MAC protocol is implemented in the MAC sublayer, which is the lower sublayer of the data link layer
• The higher portion of the data link layer is often called Logical Link Control (LLC)

IEEE 802 Standards
• IEEE 802 is a family of standards for LANs, which defines an LLC and several MAC sublayers

Ethernet
• Speed: 10-1000 Mbps
• Standards: 802.3, Ethernet II (DIX)
• Most popular physical layers for Ethernet:
  • 10Base2: Thin Ethernet, 10 Mbps over thin coax cable
  • 10Base-T: 10 Mbps over twisted pair
  • 100Base-TX: 100 Mbps over Category 5 twisted pair
  • 100Base-FX: 100 Mbps over fiber optics

Ethernet Hubs vs. Ethernet Switches
• An Ethernet switch is a packet switch for Ethernet frames
  • Buffering of frames prevents collisions
  • Each port is isolated and builds its own collision domain
• An Ethernet hub does not perform buffering:
  • Collisions occur if two frames arrive at the same time

(Figure: a hub-based Ethernet vs. a switch-based Ethernet.)

Ethernet and IEEE 802.3: Any Difference?
• On a conceptual level, they are identical. But there are subtle differences that are relevant if we deal with TCP/IP.
• "Ethernet" (Ethernet II, DIX):
  • An industry standard from 1982 that is based on the first implementation of CSMA/CD by Xerox.
  • Predominant version of CSMA/CD in the US.
• 802.3:
  • IEEE's version of CSMA/CD from 1985.
  • Interoperates with 802.2 (LLC) as the higher layer.
• Difference for our purposes: Ethernet and 802.3 use different methods to encapsulate an IP datagram.

Ethernet II, DIX Encapsulation (RFC 894)

IEEE 802.2/802.3 Encapsulation (RFC 1042)

Point-to-Point (serial) links
• Many data link connections are point-to-point serial links:
  – Dial-in or DSL access connects hosts to access routers
  – Routers are connected by high-speed point-to-point links
• Here, IP hosts and routers are connected by a serial link

Data Link Protocols for Point-to-Point links
• SLIP (Serial Line IP):
  • First protocol for sending IP datagrams over dial-up links (from 1988)
  • Encapsulation, not much else
• PPP (Point-to-Point Protocol):
  • Successor to SLIP (1992), with added functionality
  • Used for dial-in and for high-speed routers
• HDLC (High-Level Data Link Control):
  • Widely used and influential standard (1979)
  • Default protocol for serial links on Cisco routers
  • Actually, PPP is based on a variant of HDLC

PPP - IP encapsulation
• The frame format of PPP is similar to HDLC and the 802.2 LLC frame format
• PPP assumes a duplex circuit
• Note: PPP does not use addresses
• Usual maximum frame size is 1500 bytes

Additional PPP functionality
• In addition to encapsulation, PPP supports:
  – Multiple network layer protocols (protocol multiplexing)
  – Link configuration
  – Link quality testing
  – Error detection
  – Option negotiation
  – Address notification
  – Authentication
• The above functions are supported by helper protocols:
  – LCP
  – PAP, CHAP

PPP Support protocols
• Link management: The Link Control Protocol (LCP) is responsible for establishing, configuring, and negotiating a data-link connection. LCP also monitors the link quality and is used to terminate the link.
• Authentication: Authentication is optional. PPP supports two authentication protocols: Password Authentication Protocol (PAP) and Challenge Handshake Authentication Protocol (CHAP).
• Network protocol configuration: PPP has network control protocols (NCPs) for numerous network layer protocols. The IP Control Protocol (IPCP) negotiates IP address assignments and other parameters when IP is used as the network layer.

Figure 5.33: HDLC station configurations.
• Unbalanced point-to-point link: a primary station sends commands to a secondary station, which returns responses.
• Unbalanced multipoint link: one primary station issues commands to several secondary stations and collects their responses.
• Balanced point-to-point link between combined stations: each station acts as both primary and secondary, so commands and responses flow in both directions.

Figure 5.35: HDLC frame format: Flag | Address | Control | Information | FCS | Flag.

Figure 5.36: HDLC control field formats (bits 1, 2-4, 5, 6-8):
• Information frame: 0 | N(S) | P/F | N(R)
• Supervisory frame: 1 0 | S S | P/F | N(R)
• Unnumbered frame: 1 1 | M M | P/F | M M M

Figure 5.37: HDLC connection establishment and release: SABM → UA → data transfer → DISC → UA.

Figure 5.38: Frame exchange between a primary station A and secondary stations B and C; a lost I-frame from B is recovered with a selective reject (SREJ), so only the missing frame is retransmitted.

Figure 5.39: Frame exchange between combined stations A and B; a lost I-frame is reported with a reject (REJ), causing a go-back-N retransmission.

Figure 5.40: PPP frame format:
• Flag: 01111110
• Address: 11111111 (all stations are to accept the frame)
• Control: 00000011 (unnumbered frame)
• Protocol: specifies what kind of packet is contained in the payload, e.g., LCP, NCP, IP, OSI CLNP, IPX
• Information
• CRC
• Flag: 01111110

A Typical Scenario
(Figure: PPP phase diagram.) Dead → (1. carrier detected) → Establish → (2. options negotiated) → Authenticate → (3. authentication completed) → Network → (4. NCP configuration) → Open (5. data transport) → (6. done) → Terminate → (7. carrier dropped) → Dead. A failed negotiation or authentication returns the link to the Dead state.

Home PC to Internet Service Provider
1. PC calls router via modem.
2. PC and router exchange LCP packets to negotiate PPP parameters.
3. Check on identities.
4. NCP packets exchanged to configure the network layer, e.g., TCP/IP (requires IP address assignment).
5. Data transport, e.g., send/receive IP packets.
6. NCP used to tear down the network layer connection (free up the IP address); LCP used to shut down the data link layer connection. (Figure 5.41)

Overview

DLL Design
3.1 DLL Design Issues
3.2 Error Detection and Correction
3.3 DLL Protocols
3.4 Sliding Window Protocols
3.5 Protocol Specification and Verification

The concerns at the Data Link Layer include:
1. What services should be provided to upper layers?
2. Framing
3. Error Control
4. Flow Control

DLL Design

Overview

The goal of the data link layer is to provide reliable, efficient communication between adjacent machines connected by a single communication channel. Specifically:
1. Group the physical layer bit stream into units called frames. Note that frames are nothing more than "packets" or "messages". By convention, we'll use the term "frames" when discussing DLL packets.
2. The sender checksums the frame and transmits the checksum together with the data. The checksum allows the receiver to determine when a frame has been damaged in transit.
3. The receiver re-computes the checksum and compares it with the received value. If they differ, an error has occurred and the frame is discarded.
4. Perhaps return a positive or negative acknowledgment to the sender. A positive acknowledgment indicates the frame was received without errors, while a negative acknowledgment indicates the opposite.
5. Flow control. Prevent a fast sender from overwhelming a slower receiver. For example, a supercomputer can easily generate data faster than a PC can consume it.
6. In general, provide service to the network layer. The network layer wants to be able to send packets to its neighbors without worrying about the details of getting them there in one piece.
At least, the above is what the OSI reference model suggests. As we will see later, not everyone agrees that the data link layer should perform all these tasks.

DLL Design

Overview

There are several possible kinds of services that can be provided to network layers. The Figure is a reminder of the difference between virtual and actual communications between layers.

DLL Design

SERVICES PROVIDED TO THE NETWORK LAYER

Delivery Mechanisms:
• Unacknowledged connection-less: "Best Effort"
• Acknowledged connection-less: better quality
• Acknowledged connection-oriented: reliable delivery

DLL Design

SERVICES PROVIDED TO THE NETWORK LAYER

Unacknowledged Connection-less Service -- Best Effort: The receiver does not return acknowledgments to the sender, so the sender has no way of knowing if a frame has been successfully delivered. When would such a service be appropriate? 1. When higher layers can recover from errors with little loss in performance. That is, when errors are so infrequent that there is little to be gained by the data link layer performing the recovery. It is just as easy to have higher layers deal with occasional lost packets. 2. For real-time applications requiring "better never than late" semantics. Old data may be worse than no data. For example, should an airplane bother calculating the proper wing flap angle using old altitude and wind speed data when newer data is already available?

DLL Design

SERVICES PROVIDED TO THE NETWORK LAYER

Acknowledged Connection-less Service -- Acknowledged Delivery:
• The receiver returns an acknowledgment frame to the sender indicating that a data frame was properly received.
• Likewise, the receiver may hand received frames to higher layers in the order in which they arrive, regardless of the original sending order.
• Typically, each frame is assigned a unique sequence number, which the receiver returns in an acknowledgment frame to indicate which frame the ACK refers to. The sender must retransmit unacknowledged (e.g., lost or damaged) frames.

DLL Design

SERVICES PROVIDED TO THE NETWORK LAYER

Acknowledged Connection-Oriented Service -- Reliable Delivery:
• Frames are delivered to the receiver reliably and in the same order as generated by the sender.
• Connection state keeps track of sending order and which frames require retransmission. For example, receiver state includes which frames have been received, which ones have not, etc.

DLL Design

FRAMING

The DLL translates the physical layer's raw bit stream into discrete units (messages) called frames. How can frames be transmitted so that the receiver can detect frame boundaries? That is, how can the receiver recognize the start and end of a frame? We will discuss four ways:
• Character count
• Bit stuffing
• Character stuffing
• Encoding violations

DLL Design

FRAMING

Character Count:
• Make the first field in the frame's header be the length of the frame. That way the receiver knows how big the current frame is and can determine where the next frame ends.
• Disadvantage: The receiver loses synchronization when bits become garbled. If the bits in the count become corrupted during transmission, the receiver will think that the frame contains fewer (or more) bits than it actually does.
• Although a checksum will detect that the frames are incorrect, the receiver will have difficulty re-synchronizing to the start of a new frame. This technique is not used anymore, since better techniques are available.

DLL Design

FRAMING

Bit Stuffing:
IDEA: Use reserved bit patterns to indicate the start and end of a frame. For instance, use the 4-bit sequence 0111 to delimit consecutive frames. A frame consists of everything between two delimiters.
Problem: What happens if the reserved delimiter happens to appear in the frame itself? If we don't remove it from the data, the receiver will think that the incoming frame is actually two smaller frames!
Solution: Use bit stuffing. Within the frame, replace every occurrence of two consecutive 1's with 110, i.e., append a zero bit after each pair of 1's in the data. This prevents 3 consecutive 1's from ever appearing in the frame.

DLL Design

FRAMING

Bit Stuffing:
The receiver converts two consecutive 1's followed by a 0 into two 1's, but recognizes the 0111 sequence as the end of the frame.
Example: The frame "1 0 1 1 1 0 1" would be transmitted over the physical layer as "0 1 1 1 1 0 1 1 0 1 0 1 0 1 1 1".
Note: When using bit stuffing, locating the start/end of a frame is easy, even when frames are damaged. The receiver will re-synchronize quickly with the sender as to where frames begin and end, even when bits in the frame get garbled.
The main disadvantage of bit stuffing is the insertion of additional bits into the data stream, wasting bandwidth. How much expansion? The precise amount depends on the frequency with which the reserved patterns appear as user data.
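As a concrete illustration, here is a minimal Python sketch of the stuffing rule described above (this course's 0111-delimiter, stuff-after-two-ones variant; standard HDLC instead uses the 01111110 flag and stuffs a 0 after five consecutive 1's). It reproduces the example frame shown above.

```python
def bit_stuff(bits):
    """Insert a 0 after every pair of consecutive 1's (the rule described above)."""
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == 1 else 0
        if ones == 2:            # two 1's in a row: stuff a 0 and reset the counter
            out.append(0)
            ones = 0
    return out

def bit_unstuff(bits):
    """Drop the 0 that follows every pair of consecutive 1's (delimiter handling omitted)."""
    out, ones, skip = [], 0, False
    for b in bits:
        if skip:                 # this 0 was stuffed by the sender; discard it
            skip, ones = False, 0
            continue
        out.append(b)
        ones = ones + 1 if b == 1 else 0
        if ones == 2:
            skip = True
    return out

DELIM = [0, 1, 1, 1]
frame = [1, 0, 1, 1, 1, 0, 1]
wire = DELIM + bit_stuff(frame) + DELIM
print(wire)                      # 0111 10110101 0111, matching the example above
assert bit_unstuff(bit_stuff(frame)) == frame
```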

DLL Design

FRAMING

Character stuffing:
Same idea as bit stuffing, but operating on bytes instead of bits. Use reserved characters to indicate the start and end of a frame. For instance, use the two-character sequence DLE STX (Data-Link Escape, Start of TeXt) to signal the beginning of a frame, and the sequence DLE ETX (End of TeXt) to flag the frame's end.
Problem: What happens if the two-character sequence DLE ETX happens to appear in the frame itself?
Solution: Use character stuffing: within the frame, replace every occurrence of DLE with the two-character sequence DLE DLE. The receiver reverses the process, replacing every occurrence of DLE DLE with a single DLE.
Example: If the frame contained "A B DLE D E DLE", the characters transmitted over the channel would be "DLE STX A B DLE DLE D E DLE DLE DLE ETX".
Disadvantage: An octet is the smallest unit that can be operated on; not all architectures are 8-bit oriented.
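A small Python sketch of the DLE-based character stuffing above; the DLE, STX, and ETX byte values (0x10, 0x02, 0x03) are the usual ASCII control codes and are assumed here for illustration.

```python
DLE, STX, ETX = b"\x10", b"\x02", b"\x03"

def char_stuff(payload: bytes) -> bytes:
    """Double every DLE in the payload, then wrap it in DLE STX ... DLE ETX."""
    return DLE + STX + payload.replace(DLE, DLE + DLE) + DLE + ETX

def char_unstuff(wire: bytes) -> bytes:
    """Strip the delimiters and collapse each DLE DLE back to a single DLE."""
    assert wire.startswith(DLE + STX) and wire.endswith(DLE + ETX)
    return wire[2:-2].replace(DLE + DLE, DLE)

payload = b"AB" + DLE + b"DE" + DLE       # the "A B DLE D E DLE" frame from the example
wire = char_stuff(payload)                # DLE STX A B DLE DLE D E DLE DLE DLE ETX
assert char_unstuff(wire) == payload
```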

DLL Design

ERROR CONTROL

Must ensure that all frames are eventually delivered (possibly in order) to the destination. Three components are required to do this: acknowledgments, timers, and sequence numbers.
Acknowledgments:
• Reliable delivery is achieved using the "acknowledgments with retransmission" paradigm.
• The receiver returns a special acknowledgment (ACK) frame to the sender indicating the correct receipt of a frame.
• In some systems, the receiver also returns a negative acknowledgment (NACK) for incorrectly-received frames.
• This is only a hint to the sender so that it can retransmit a frame right away without waiting for a timer to expire.

DLL Design

FRAMING

Encoding Violations:
Send a signal that doesn't conform to any legal bit representation. In Manchester encoding, for instance, 1-bits are represented by a high-low sequence and 0-bits by low-high sequences. The start/end of a frame could be represented by the signal low-low or high-high.
The advantage of encoding violations is that no extra bandwidth is required, as in bit or character stuffing. The IEEE 802.4 standard uses this approach.
Finally, some systems use a combination of these techniques. IEEE 802.3, for instance, has both a length field and special frame start and frame end patterns.
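A rough Python sketch of the idea, using the Manchester convention stated above (1 = high-low, 0 = low-high within one bit cell) and a low-low cell as a hypothetical boundary marker; real physical layers do this in hardware.

```python
# Each data bit occupies one bit cell with a mandatory mid-cell transition:
# 1 -> (high, low), 0 -> (low, high).  A cell with no transition, e.g. (low, low),
# is illegal for data and can therefore mark a frame boundary.
ONE, ZERO, DELIM = ("H", "L"), ("L", "H"), ("L", "L")

def manchester_frame(bits):
    """Encode the data bits cell by cell and bracket them with violation cells."""
    return [DELIM] + [ONE if b else ZERO for b in bits] + [DELIM]

print(manchester_frame([1, 0, 1, 1]))
# [('L', 'L'), ('H', 'L'), ('L', 'H'), ('H', 'L'), ('H', 'L'), ('L', 'L')]
```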

DLL Design

ERROR CONTROL

Timers: • • •

One problem that simple ACK/NACK schemes fail to address is recovering from a frame that is lost, and as a result, fails to solicit an ACK or NACK. What happens if an ACK or NACK becomes lost? Retransmission timers are used to resend frames that don't produce an ACK. When sending a frame, schedule a timer to expire at some time after the ACK should have been returned. If the timer goes off, retransmit the frame.

Sequence Numbers: • • •

Retransmissions introduce the possibility of duplicate frames. To suppress duplicates, add sequence numbers to each frame, so that a receiver can distinguish between new frames and repeats of old frames. Bits used for sequence numbers depend on the number of frames that can be outstanding at any one time.
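The three mechanisms fit together as in the following schematic Python sketch of a stop-and-wait sender; send_and_wait_for_ack is a hypothetical stand-in for "transmit, arm a retransmission timer, and wait for the ACK or a timeout", simulated here with a random loss.

```python
import random

def send_and_wait_for_ack(seq, data):
    """Stand-in for: transmit the frame, start a timer, wait for ACK or timeout.
    A random draw simulates whether both the frame and its ACK survived."""
    print(f"sending frame seq={seq} data={data!r}")
    return random.random() > 0.3

def stop_and_wait_send(messages):
    """Retransmit on timeout; alternate a 1-bit sequence number so the
    receiver can recognize and discard duplicates caused by retransmission."""
    seq = 0
    for data in messages:
        while not send_and_wait_for_ack(seq, data):
            print(f"timeout, retransmitting seq={seq}")
        print(f"ACK {seq} received")
        seq ^= 1                   # toggle 0 -> 1 -> 0 ...
    print("all frames delivered")

stop_and_wait_send(["frame A", "frame B", "frame C"])
```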

DLL Design

FLOW CONTROL

Flow control deals with throttling the speed of the sender to match that of the receiver. Usually, this is a dynamic process, as the receiving speed depends on such changing factors as the load and the availability of buffer space.
One solution is to have the receiver extend credits to the sender. For each credit, the sender may send one frame. Thus, the receiver controls the transmission rate by handing out credits.
LINK INITIALIZATION: In some cases, the data link layer service must be "opened" before use: the data link layer uses open operations for allocating buffer space and control blocks, agreeing on the maximum message size, etc.
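The credit idea can be sketched as follows (illustrative Python only; the class and method names are invented for this example): each free receive buffer is turned into one credit, and the sender may transmit one frame per credit it holds.

```python
from collections import deque

class CreditReceiver:
    """Hands out one credit per free buffer slot."""
    def __init__(self, buffers):
        self.free = buffers

    def grant_credits(self):
        credits, self.free = self.free, 0       # give away every currently free slot
        return credits

class CreditSender:
    """May transmit one queued frame per credit received."""
    def __init__(self):
        self.credits = 0
        self.queue = deque()

    def send_ready(self, frames):
        self.queue.extend(frames)
        sent = []
        while self.queue and self.credits > 0:  # one credit buys one frame
            sent.append(self.queue.popleft())
            self.credits -= 1
        return sent

receiver = CreditReceiver(buffers=2)
sender = CreditSender()
sender.credits += receiver.grant_credits()
print(sender.send_ready(["f1", "f2", "f3"]))    # ['f1', 'f2'] -- f3 waits for more credits
```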

Error Detection & Control
3.1 DLL Design Issues
3.2 Error Detection and Correction
3.3 DLL Protocols
3.4 Sliding Window Protocols
3.5 Protocol Specification and Verification

Overview

This section is about putting in enough redundancy along with the data to be able to detect (and correct) data errors.

Error Detection & Control

ERROR CORRECTING CODES

In data communication, line noise is a fact of life (e.g., signal attenuation, natural phenomena such as lightning, and the telephone worker). Moreover, noise usually occurs as bursts rather than independent, single-bit errors. For example, a burst of lightning will affect a set of bits for a short time after the lightning strike.
Detecting and correcting errors requires redundancy -- sending additional information along with the data. There are two approaches to dealing with errors:
• Error Detecting Codes: Include enough redundancy bits to detect errors and use ACKs and retransmissions to recover from the errors.
• Error Correcting Codes: Include enough redundancy to detect and correct errors.
We will introduce some concepts, and then consider both detection and correction. To understand errors, consider the following: messages (frames) consist of m data (message) bits and r redundancy bits, yielding an n = (m + r)-bit codeword.

Error Detection & Control

ERROR CORRECTING CODES

Hamming Distance: Given any two codewords, we can determine how many of the bits differ. Simply exclusive-or (XOR) the two words and count the number of 1 bits in the result. This count is the Hamming distance.
Significance? If two codewords are d bits apart, d errors are required to convert one to the other.
A code's Hamming distance is defined as the minimum Hamming distance between any two of its legal codewords (from all possible codewords). In general, all 2^m possible data words are legal. However, by choosing check bits carefully, the resulting codewords will have a large Hamming distance. The larger the Hamming distance, the better the code is able to detect errors.
To detect d single-bit errors requires a Hamming distance of at least d + 1. Why? To correct d errors requires a distance of at least 2d + 1. Intuitively, after d errors, the garbled message is still closer to the original message than to any other legal codeword.
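The XOR-and-count definition translates directly into a few lines of Python; the second function finds a code's minimum distance, checked here against the 10-bit code used in the parity example on the next slide.

```python
from itertools import combinations

def hamming_distance(a: int, b: int) -> int:
    """XOR the two codewords and count the 1 bits in the result."""
    return bin(a ^ b).count("1")

def code_distance(codewords):
    """Minimum Hamming distance over all pairs of legal codewords."""
    return min(hamming_distance(a, b) for a, b in combinations(codewords, 2))

code = [0b00000_00000, 0b00000_11111, 0b11111_00000, 0b11111_11111]
print(hamming_distance(0b10111_00010, 0b11111_00000))   # 2
print(code_distance(code))                              # 5
```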

Error Detection & Control

ERROR CORRECTING CODES

Parity Bits

A single parity bit is appended to each data block (e.g., each character in ASCII systems) so that the number of 1 bits always adds up to an even (odd) number:
1000000(1)
1111101(0)
The Hamming distance for parity is 2, and it cannot correct even single-bit errors (but can detect single-bit errors).
As another example, consider a 10-bit code used to represent 4 possible values: "00000 00000", "00000 11111", "11111 00000", and "11111 11111". Its Hamming distance is 5, and we can correct 2 single-bit errors: for instance, "10111 00010" becomes "11111 00000" by changing only two bits. However, if the sender transmits "11111 00000" and the receiver sees "00011 00000", the receiver will not correct the error properly.
Finally, in this example we are guaranteed to catch all 2-bit errors, but we might do better: if "00111 00111" contains 4 single-bit errors, we will reconstruct the block correctly.
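A tiny Python illustration of even parity, matching the two 7-bit examples above:

```python
def even_parity_bit(bits):
    """Return the bit that makes the total number of 1's even."""
    return sum(bits) % 2

def parity_ok(bits_with_parity):
    """True if the received block (data plus parity bit) has an even number of 1's."""
    return sum(bits_with_parity) % 2 == 0

data = [1, 0, 0, 0, 0, 0, 0]                 # 1000000 from the example above
p = even_parity_bit(data)                    # -> 1, giving 1000000(1)
assert parity_ok(data + [p])
assert not parity_ok([1, 0, 0, 0, 0, 0, 1] + [p])   # a single flipped bit is detected
```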

Error Detection & Control

ERROR CORRECTION

What's the fewest number of bits needed to correct single-bit errors? Let us design a code containing n = m + r bits that corrects all single-bit errors (remember, m is the number of message (data) bits and r is the number of redundant (check) bits):
There are 2^m legal messages (i.e., legal bit patterns). Each of these 2^m messages has n illegal codewords at a distance of 1 from it: if we systematically invert each bit in the corresponding n-bit codeword, we get n illegal codewords one bit away from the original. Thus, each message requires n + 1 bit patterns dedicated to it (the n that are one bit away plus the message itself).
The total number of bit patterns must satisfy (n + 1) * 2^m <= 2^n. That is, all (n + 1) * 2^m patterns must be distinct, and there cannot be more of them than the 2^n possible codewords. Since n = m + r, we get:
(m + r + 1) * 2^m <= 2^(m+r), or (m + r + 1) <= 2^r
This formula gives the absolute lower limit on the number of check bits required to detect (and correct!) single-bit errors.
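The bound (m + r + 1) <= 2^r is easy to evaluate numerically; a short Python sketch:

```python
def min_check_bits(m: int) -> int:
    """Smallest r satisfying m + r + 1 <= 2**r, the lower bound for single-bit correction."""
    r = 1
    while m + r + 1 > 2 ** r:
        r += 1
    return r

print(min_check_bits(7))      # 4, the classic Hamming(11,7) case
print(min_check_bits(1000))   # 10, matching the figure quoted on the next slide
```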

Error Detection & Control

ERROR DETECTION

Error correction is relatively expensive (computationally and in bandwidth). For example, 10 redundancy bits are required to correct a single-bit error in a 1000-bit message. In contrast, detecting a single-bit error requires only a single bit, no matter how large the message.
The most popular error detection codes are based on polynomial codes or cyclic redundancy codes (CRCs). They allow us to acknowledge correctly received frames and to discard incorrect ones. Tanenbaum works several examples, and you have worked several as well.
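As a rough illustration of how a CRC is computed (not the exact procedure of any particular standard), here is a bit-by-bit polynomial division in Python. The generator x^8 + x^2 + x + 1 (0x107) is borrowed from the ATM HEC discussed later; production code is table-driven, and real standards add bit reflections and XOR constants.

```python
def crc_remainder(data: bytes, poly: int = 0x107, width: int = 8) -> int:
    """Divide the message (followed by `width` zero bits) by the generator
    polynomial, working bit by bit; the remainder is the CRC checksum."""
    reg = 0
    for byte in data:
        for i in range(7, -1, -1):
            reg = (reg << 1) | ((byte >> i) & 1)
            if reg >> width:            # degree reached `width`: subtract (XOR) the generator
                reg ^= poly
    for _ in range(width):              # shift in the `width` appended zero bits
        reg <<= 1
        if reg >> width:
            reg ^= poly
    return reg

msg = b"HELLO"
crc = crc_remainder(msg)
print(hex(crc))
# A valid codeword (message followed by its CRC) reduces to a zero remainder.
assert crc_remainder(msg + bytes([crc])) == 0
```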

Overview

DLL PROTOCOLS
3.1 DLL Design Issues
3.2 Error Detection and Correction
3.3 DLL Protocols
3.4 Sliding Window Protocols
3.5 Protocol Specification and Verification

How can two DLL layers communicate in order to assure reliability? We will look at increasingly complex protocols to see how this is done.

DLL Protocols

Overview

ELEMENTARY DATA LINK PROTOCOLS:
The DLL provides these services to the Network Layer above it:
• Data handed to a DLL by the Network Layer on one module are handed to the Network Layer on another module by that DLL. The remote Network Layer peer should receive the identical message generated by the sender (e.g., if the data link layer adds control information, the header information must be removed before the message is passed to the Network Layer).
• The Network Layer may want to be sure that all messages it sends will be delivered correctly (e.g., none lost, no corruption). Note that arbitrary errors may result in the loss of both data and control frames.
• The Network Layer may want messages to be delivered to the remote peer in the exact same order as they are sent.

Sliding Window Protocols
3.1 DLL Design Issues
3.2 Error Detection and Correction
3.3 DLL Protocols
3.4 Sliding Window Protocols
3.5 Protocol Specification and Verification

Overview

These methods provide much more realism! The general method provides buffering together with ACKs.

Sliding Window Protocols

FEATURES

Assumptions: We now use more realistic two-way communication. There are two kinds of frames (distinguished by a "kind" field):
1. Data
2. ACK, containing the sequence number of the last correctly received frame
Piggybacking: add the acknowledgment to data frames going in the reverse direction.
Piggybacking issue: for better use of bandwidth, how long should we wait for an outgoing data frame before sending the ACK on its own?

Sliding Window Protocols

EXAMPLE

Example of a sliding window protocol. Each frame contains a sequence number whose maximum value, MaxSeq, is 2^n - 1. For the stop-and-wait sliding window protocol, n = 1.
Essentially the same as the Simplex Protocol, except that ACKs are numbered, which solves the early-timeout problem. Two-way communication. The protocol works: all frames are delivered in the correct order, and it requires little buffer space. Poor line utilization due to stop-and-wait. (To be solved in the next example.) <<< Figure 3.13 >>>

Sliding Window Protocols

OTHER ISSUES

The problem with stop-and-wait protocols is that the sender can have only one unACKed frame outstanding.
Example: 1000-bit frames, a 1 Mbps (satellite) channel, 270 ms propagation delay. The frame takes 1 ms to send (1000 bits / 1,000,000 bits/sec = 0.001 sec = 1 msec). With the propagation delay, the ACK is not seen at the sender until time 541 msec: very poor channel utilization.
Several solutions are possible:
• Use larger frames, but the maximum size is limited by the bit error rate of the channel. The larger the frame, the higher the probability that it will become damaged during transmission.
• Use pipelining: allow multiple frames to be in transmission simultaneously.
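The utilization figure follows directly from the numbers above (ACK transmission time ignored); a quick check:

```python
frame_bits = 1000
rate_bps = 1_000_000
t_frame = frame_bits / rate_bps        # 0.001 s to clock the frame out
t_prop = 0.270                         # one-way satellite propagation delay

cycle = t_frame + 2 * t_prop           # send the frame, then wait a full round trip for the ACK
print(f"{cycle * 1000:.0f} ms per frame, utilization = {t_frame / cycle:.2%}")
# -> 541 ms per frame, utilization = 0.18%
```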

Sliding Window Protocols

PIPELINING

The sender does not wait for each frame to be ACKed. Rather, it sends many frames with the assumption that they will arrive. It must still get back ACKs for each frame.
Pipelining provides more efficient use of the transmission bandwidth, but error handling is more complex. What if 20 frames are transmitted and the second has an error? Will frames 3-20 be ignored at the receiver, forcing the sender to retransmit them? What are the possibilities? There are two strategies, depending on the receiver's window size:

Sliding Window Protocols

SLIDING WINDOW MECHANISMS

Go-back-n: equivalent to a receiver window size of one. If the receiver sees bad frames or missing sequence numbers, subsequent frames are discarded. No ACKs are sent for discarded frames.

Selective repeat: the receiver's window size is larger than one. Store all received frames after the bad one. ACK only the last frame received in sequence.
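A schematic Python sketch of the two receiver disciplines (illustrative only; `delivered` stands for frames handed to the network layer, and the arrival list simulates frame 1 being lost with the sender later going back and resending frames 1-3):

```python
def go_back_n_receiver(frames, expected=0):
    """Receiver window of one: accept only the next in-sequence frame, discard the rest."""
    delivered = []
    for seq, data in frames:
        if seq == expected:
            delivered.append(data)       # hand to the network layer
            expected += 1
        # anything else is silently discarded; the sender must go back and resend
    return delivered, expected           # `expected` doubles as the cumulative ACK value

def selective_repeat_receiver(frames, expected=0, window=4):
    """Buffer out-of-order frames inside the window; deliver in order as gaps fill."""
    buffer, delivered = {}, []
    for seq, data in frames:
        if expected <= seq < expected + window:
            buffer[seq] = data
            while expected in buffer:    # deliver any now-contiguous run of frames
                delivered.append(buffer.pop(expected))
                expected += 1
    return delivered, expected

# Frame 1 is lost; frames 2 and 3 still arrive, then the sender resends 1, 2, 3.
arrivals = [(0, "a"), (2, "c"), (3, "d"), (1, "b"), (2, "c"), (3, "d")]
print(go_back_n_receiver(arrivals))        # (['a','b','c','d'], 4): the first 2 and 3 were discarded
print(selective_repeat_receiver(arrivals)) # (['a','b','c','d'], 4): 2 and 3 were buffered, duplicates dropped
```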

Sliding Window Protocols

SLIDING WINDOW MECHANISMS

There is a tradeoff between bandwidth and data link layer buffer space on the receiver side. In either case we need buffer space on the sender side: a frame cannot be released until its ACK is received. Use a timer for each unACKed frame that has been sent. We must also be able to enable/disable the network layer, because the sender may not be able to accept more data to send if there are many unACKed frames.
Window Size Rules
Potential problem with window sizes (receiver window size of one). Suppose MaxSeq is 7, so sequence numbers 0 through 7 are valid. How big can the sender window be?
• Send frames 0-7.
• The receiver accepts 0-7 (one at a time) and sends ACKs.
• All ACKs are lost.
• Frame 0 times out and is retransmitted.
• The receiver accepts frame 0 (why? because that is the next expected frame) and passes it to the Network Layer as if it were new data.
So the sender window size must be no larger than MaxSeq, i.e., smaller than the number of distinct sequence numbers. Look at how this is all put together in <<< Figure 3.16 >>>

Examples

HDLC

HDLC - HIGH LEVEL DATA LINK CONTROL:
• Adopted as part of X.25, a connection-oriented 64 Kbps network using either virtual or permanent circuits.
• Bit oriented (uses bit stuffing and bit delimiters).
• 3-bit sequence numbers.
• Up to 7 unACKed frames can be outstanding at any time (how big is the receiver's window?).
• ACKs the "frame expected" rather than the last frame received (any behavior difference between the two? No, as long as the sender and receiver agree on the same convention).
• Look at the control information in the two Figures.

Examples

DLL In The Internet

Point-to-point lines:
• Between routers over leased lines
• Dial-up to a host via a modem
PPP - Point-to-Point Protocol, a standard (RFCs 1661-1663). Can be used for dial-up and leased router-router lines. Provides:
• A framing method to delineate frames; also handles error detection.
• Link Control Protocol (LCP) for bringing lines up, negotiating options, and bringing them down. These are distinct PPP packets.
• Network Control Protocol (NCP) for negotiating network layer options.
• Similar to HDLC, but character-oriented.
• PPP doesn't provide reliable data transfer using sequence numbers and acknowledgments as the default; reliable data transfer can be requested as an option (as part of LCP).
• Allows an internet provider to reuse IP addresses: you get to use an address only for the duration of your login.

Examples

DLL In ATM

Transmission Convergence (TC) sublayer (refer back to the ATM reference model). The physical layer is T1, T3, SONET, or FDDI. This sublayer does header checksumming and cell reception.
Header Checksum:
• The 5-byte header consists of 4 bytes of virtual circuit and control information + 1 byte of checksum.
• Checksum the 4 bytes of header information and store the result in the 5th byte.
• Use the CRC polynomial x^8 + x^2 + x + 1 and add the constant bit string 01010101.
• The probability of error is low (the medium is likely fiber), so keep the checksum cheap.
• Upper layers can checksum the payload if they like.
• The 8-bit checksum field is called Header Error Control (HEC).

Idle Cells: The sender may have to output dummy cells in a synchronous medium (cells must be sent at periodic times); idle cells are used for this. There are also operation and maintenance (OAM) cells, which exchange control and other information.

Examples

DLL In ATM

Cell Reception: Drop idle cells, pass along OAM cells. We need to generate framing information for the underlying technology, but there are no framing bits! Use a probabilistic approach: match up valid headers and checksums in a 40-bit window. See the Figure, which describes how to get in sync. There is a state-transition diagram in which we look for d consecutive valid headers; if a bad cell is received (a flipped bit), do not immediately give up on synchronization.
