Computer Network File (MPCT)

Submitted To:
Miss AMRITA SHARMA, Lecturer, C.S. Department

Submitted By:
Mr. PRATEEK KHARE, Roll No. 0903EC061077

Index

1. Case study of different types of LAN.
2. Explain different types of bus.
3. Write short notes on:
   • CSMA/CD protocol
   • RS-232
   • ISDN services
   • Pure and slotted ALOHA
   • Token bus LAN
   • Token ring LAN
4. Difference between B-ISDN and N-ISDN.
5. Explain error control and transmission techniques.

1. Case study of different types of LAN

One way to categorize the different types of computer network designs is by their scope or scale. For historical reasons, the networking industry refers to nearly every type of design as some kind of area network. Common examples of area network types are:

• LAN - Local Area Network
• WLAN - Wireless Local Area Network
• WAN - Wide Area Network
• MAN - Metropolitan Area Network
• SAN - Storage Area Network, System Area Network, Server Area Network, or sometimes Small Area Network
• CAN - Campus Area Network, Controller Area Network, or sometimes Cluster Area Network
• PAN - Personal Area Network
• DAN - Desk Area Network

LAN and WAN were the original categories of area networks, while the others have gradually emerged over many years of technology evolution. Note that these network types are a separate concept from network topologies such as bus, ring and star.

LAN - Local Area Network A LAN connects network devices over a relatively short distance. A networked office building, school, or home usually contains a single LAN, though sometimes one building will contain a few small LANs (perhaps one per room), and occasionally a LAN will span a group of nearby buildings. In TCP/IP networking, a LAN is often but not always implemented as a single IP subnet. In addition to operating in a limited space, LANs are also typically owned, controlled, and managed by a single person or organization. They also tend to use certain connectivity technologies, primarily Ethernet and Token Ring.

WAN - Wide Area Network As the term implies, a WAN spans a large physical distance. The Internet is the largest WAN, spanning the Earth. A WAN is a geographically-dispersed collection of LANs. A network device called a router connects LANs to a WAN. In IP networking, the router maintains both a LAN address and a WAN address. A WAN differs from a LAN in several important ways. Most WANs (like the Internet) are not owned by any one organization but rather exist under collective or distributed ownership and management. WANs tend to use technology like ATM, Frame Relay and X.25 for connectivity over the longer distances.

LAN, WAN and Home Networking
Residences typically employ one LAN and connect to the Internet WAN via an Internet Service Provider (ISP) using a broadband modem. The ISP provides a WAN IP address to the modem, and all of the computers on the home network use LAN (so-called private) IP addresses. All computers on the home LAN can communicate directly with each other but must go through a central gateway, typically a broadband router, to reach the ISP.

Other Types of Area Networks
While LAN and WAN are by far the most popular network types mentioned, you may also commonly see references to these others:



• Wireless Local Area Network - a LAN based on Wi-Fi wireless network technology.
• Metropolitan Area Network - a network spanning a physical area larger than a LAN but smaller than a WAN, such as a city. A MAN is typically owned and operated by a single entity such as a government body or large corporation.
• Campus Area Network - a network spanning multiple LANs but smaller than a MAN, such as on a university or local business campus.
• Storage Area Network - connects servers to data storage devices through a technology like Fibre Channel.



2. Explain different types of bus

Control Bus
The control bus is used by the CPU to direct and monitor the actions of the other functional areas of the computer. It is used to transmit a variety of individual signals (read, write, interrupt, acknowledge, and so forth) necessary to control and coordinate the operations of the computer. The individual signals transmitted over the control bus and their functions are covered in the appropriate functional area description.

Address Bus
The address bus consists of all the signals necessary to define any of the possible memory address locations within the computer, or, for modular memories, any of the possible memory address locations within a module. An address is defined as a label, symbol, or other set of characters used to designate a location or register where information is stored. Before data or instructions can be written into or read from memory by the CPU or I/O sections, an address must be transmitted to memory over the address bus.

Data Bus
The bidirectional data bus, sometimes called the memory bus, handles the transfer of all data and instructions between functional areas of the computer. The bidirectional data bus can only transmit in one direction at a time. The data bus is used to transfer instructions from memory to the CPU for execution. It carries data (operands) to and from the CPU and memory as required by instruction execution. The data bus is also used to transfer data between memory and the I/O section during input/output operations. The information on the data bus is either written into memory or read from memory, depending on the operation in progress.

3. Write short notes on

Carrier Sense Multiple Access with Collision Detection (CSMA/CD)
The Ethernet network may be used to provide shared access by a group of attached nodes to the physical medium which connects the nodes. These nodes are said to form a Collision Domain. All frames sent on the medium are physically received by all receivers; however, the Medium Access Control (MAC) header contains a MAC destination address which ensures that only the specified destination actually forwards the received frame (the other computers all discard the frames which are not addressed to them). Consider a LAN with four computers, each with a Network Interface Card (NIC), connected by a common Ethernet cable:

One computer (blue) uses its NIC to send a frame onto the shared medium; the frame carries a destination address corresponding to the address of the NIC in the red computer.

The cable propagates the signal in both directions, so that the signal (eventually) reaches the NICs in all four of the computers. Termination resistors at the ends of the cable absorb the frame energy, preventing reflection of the signal back along the cable.

All the NICs receive the frame and each examines it to check its length and checksum. The header destination MAC address is next examined, to see if the frame should be accepted, and forwarded to the network-layer software in the computer.

Only the NIC in the red computer recognizes the frame destination address as valid, and therefore this NIC alone forwards the contents of the frame to the network layer. The NICs in the other computers discard the unwanted frame. The shared cable allows any NIC to send whenever it wishes, but if two NICs happen to transmit at the same time, a collision will occur, resulting in the data being corrupted.

ALOHA & Collisions
To control which NICs are allowed to transmit at any given time, a protocol is required. The simplest protocol is known as ALOHA (this is actually a Hawaiian word, meaning "hello"). ALOHA allows any NIC to transmit at any time, but states that each NIC must add a checksum/CRC at the end of its transmission to allow the receiver(s) to identify whether the frame was correctly received. ALOHA is therefore a best effort service, and does not guarantee that the frame of data will actually reach the remote recipient without corruption. It therefore relies on ARQ protocols to retransmit any data which is corrupted. An ALOHA network only works well when the medium has a low utilisation, since this leads to a low probability of the transmission colliding with that of another computer, and hence a reasonable chance that the data is not corrupted.

Carrier Sense Multiple Access (CSMA)
Ethernet uses a refinement of ALOHA, known as Carrier Sense Multiple Access (CSMA), which improves performance when there is a higher medium utilisation. When a NIC has data to transmit, the NIC first listens to the cable (using a transceiver) to see if a carrier (signal) is being transmitted by another node. This may be achieved by monitoring whether a current is flowing in the cable (each bit corresponds to 18-20 milliamps (mA)). The individual bits are sent by encoding them with a 10 MHz (or 100 MHz for Fast Ethernet) clock using Manchester encoding. Data is only sent when no carrier is observed (i.e. no current present) and the physical medium is therefore idle. Any NIC which does not need to transmit listens to see if other NICs have started to transmit information to it. However, this alone is unable to prevent two NICs transmitting at the same time. If two NICs simultaneously try to transmit, then both could see an idle physical medium (i.e. neither will see the other's carrier signal), and both will conclude that no other NIC is currently using the medium. In this case, both will then decide to transmit and a collision will occur. The collision will result in the corruption of the frame being sent, which will subsequently be discarded by the receiver since a corrupted Ethernet frame will (with a very high probability) not have a valid 32-bit MAC CRC at the end.

Collision Detection (CD)
A second element of the Ethernet access protocol is used to detect when a collision occurs. When there is data waiting to be sent, each transmitting NIC also monitors its own transmission. If it observes a collision (excess current above what it is generating, i.e. > 24 mA for coaxial Ethernet), it stops transmission immediately and instead transmits a 32-bit jam sequence. The purpose of this sequence is to ensure that any other node which may currently be receiving this frame will receive the jam signal in place of the correct 32-bit

MAC CRC; this causes the other receivers to discard the frame due to a CRC error. To ensure that all NICs start to receive a frame before the transmitting NIC has finished sending it, Ethernet defines a minimum frame size (i.e. no frame may have fewer than 46 bytes of payload). The minimum frame size is related to the distance which the network spans, the type of media being used and the number of repeaters which the signal may have to pass through to reach the furthest part of the LAN. Together these define a value known as the Ethernet Slot Time, corresponding to 512 bit times at 10 Mbps. When two or more transmitting NICs each detect a corruption of their own data (i.e. a collision), each responds in the same way by transmitting the jam sequence. The following sequence depicts a collision:

At time t=0, a frame is sent on the idle medium by NIC A.

A short time later, NIC B also transmits. (In this case, the medium, as observed by the NIC at B happens to be idle too).

After a period, equal to the propagation delay of the network, the NIC at B detects the other transmission from A, and is aware of a collision, but NIC A has not yet observed that NIC B was also transmitting. B continues to transmit, sending the Ethernet Jam sequence (32 bits).

After one complete round trip propagation time (twice the one way propagation delay), both NICs are aware of the collision. B will shortly cease transmission of the Jam Sequence, however A will continue to transmit a complete Jam Sequence. Finally the cable becomes idle.

Retransmission Back-Off
An overview of the transmit procedure is shown below. The transmitter initializes the number of transmissions of the current frame (n) to zero, and starts listening to the cable (using the carrier sense logic (CS) - e.g., by observing the Rx signal at the transceiver to see if any bits are being sent). If the cable is not idle, it waits (defers) until the cable is idle. It then waits for a small Inter-Frame Gap (IFG) (e.g., 9.6 microseconds) to allow time for all receiving nodes to prepare themselves for the next transmission. Transmission then starts with the preamble, followed by the frame data and finally the CRC-32. After this, the transceiver Tx logic is turned off and the transceiver returns to passively monitoring the cable for other transmissions. During this process, a transmitter must also continuously monitor the collision detection logic (CD) in the transceiver to detect if a collision occurs. If it does, the transmitter aborts the transmission (stops sending bits) within a few bit periods, and starts the collision procedure by sending a Jam Signal to the transceiver Tx logic. It then calculates a retransmission time.

If all NICs attempted to retransmit immediately following a collision, then this would certainly result in another collision. Therefore a procedure is required to ensure that there is only a low probability of simultaneous retransmission. The scheme adopted by Ethernet uses a random back-off period, where each node selects a random number, multiplies this by the slot time (minimum frame period, 51.2 µs) and waits for this random period before attempting retransmission. The small Inter-Frame Gap (IFG) (e.g., 9.6 microseconds) is also added. On a busy network, a retransmission may still collide with another retransmission (or possibly new frames being sent for the first time by another NIC). The protocol therefore counts the number of retransmission attempts (using a variable N in the above figure) and attempts to retransmit the same frame up to 15 times. For each retransmission, the transmitter constructs a set of numbers {0, 1, 2, 3, ..., L}, where L = 2^K - 1 and K = min(N, 10). A random value R is picked from this set, and the transmitter waits (defers) for a period of

R x (slot time), i.e. R x 51.2 microseconds.

For example, after two collisions, N = 2, therefore K = 2, and the set is {0, 1, 2, 3}, giving a one-in-four chance of collision. This corresponds to a wait selected from {0, 51.2, 102.4, 153.6} microseconds.

After 3 collisions, N = 3, and the set is {0, 1, 2, 3, 4, 5, 6, 7}, that is a one in eight chance of collision. But after 4 collisions, N=4, the set becomes {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15}, that is a one in 16 chance of collision. The scaling is performed by multiplication and is known as exponential backoff. This is what lets CSMA/CD scale to large numbers of NICs - even when collisions may occur. The first ten times, the back-off waiting time for the transmitter suffering collision is scaled to a larger value. The algorithm includes a threshold of 1024. The reasoning is that the more attempts that are required, the greater the number of NICs which are trying to send at the same time, and therefore the longer the period which needs to be deferred. Since a set of numbers {0,1,...,1023} is a large set of numbers, there is very little advantage from further increasing the set size. Each transmitter also limits the maximum number of retransmissions of a single frame to 16 attempts (N=15). After this number of attempts, the transmitter gives up transmission and discards the frame, logging an error. In practice, a network that is not overloaded should never discard frames in this way.
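The back-off calculation described above can be summarised in a short sketch (a simplified illustration of truncated binary exponential back-off, not the full 802.3 state machine; the helper names and the try_transmit callback are assumptions for the example):

import random

SLOT_TIME_US = 51.2      # Ethernet slot time at 10 Mbps (512 bit times)
MAX_ATTEMPTS = 16        # after this many attempts the frame is discarded

def backoff_delay_us(n):
    """Random back-off delay (in microseconds) after the n-th collision."""
    k = min(n, 10)                       # exponent is capped at 10 (set size 1024)
    r = random.randint(0, (1 << k) - 1)  # pick R from {0, 1, ..., 2^k - 1}
    return r * SLOT_TIME_US

def send_with_backoff(try_transmit):
    """try_transmit() returns True on success, False on collision."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        if try_transmit():
            return True
        wait_us = backoff_delay_us(attempt)
        # a real NIC would now defer for wait_us microseconds before retrying
    return False  # excessive collisions: give up and log an error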

Late Collisions
In a properly functioning Ethernet network, a NIC may experience a collision only within the first slot time after it starts transmission. This is the reason why an Ethernet NIC monitors the CD signal during this time and uses CSMA/CD. A faulty CD circuit, or a misbehaving NIC or transceiver, may lead to a late collision (i.e. one occurring after one slot time). Most Ethernet NICs therefore continue to monitor the CD signal during the entire transmission. If they observe a late collision, they will normally inform the sender of the error condition.

Performance of CSMA/CD
It is simple to calculate the performance of a CSMA/CD network where only one node attempts to transmit at any time. In this case, the NIC may saturate the medium and near 100% utilization of the link may be achieved, providing almost 10 Mbps of throughput on a 10 Mbps LAN. However, when two or more NICs attempt to transmit at the same time, the performance of Ethernet is less predictable. The fall in utilization and throughput occurs because some bandwidth is wasted by collisions and backoff delays. In practice, a busy shared 10 Mbps Ethernet network will typically supply 2-4 Mbps of throughput to the NICs connected to it. As the level of utilization of the network increases, particularly if there are many NICs competing to share the bandwidth, an overload condition may occur. In this case, the throughput of Ethernet LANs reduces very considerably, and much of the capacity is wasted by the CSMA/CD algorithm, and very little is available for sending useful data. This is the reason why a shared Ethernet LAN should not connect more than 1024 computers. Many engineers use a threshold of 40% utilization to determine if a LAN is overloaded. A LAN with a higher utilization will observe a high collision rate, and likely a very variable transmission time (due to back-off). Separating the LAN into two or more collision domains using bridges or switches would likely provide a significant benefit (assuming appropriate positioning of the bridges or switches).

Shared networks may also be constructed using Fast Ethernet, operating at 100 Mbps. Since Fast Ethernet always uses fibre or twisted pair, a hub or switch is always required.

Ethernet Capture
A drawback of sharing a medium using CSMA/CD is that the sharing is not necessarily fair. When each computer connected to the LAN has little data to send, the network exhibits almost equal access time for each NIC. However, if one NIC starts sending an excessive number of frames, it may dominate the LAN. Such conditions may occur, for instance, when one NIC in a LAN acts as a source of high quality packetised video. The effect is known as "Ethernet Capture".

Figure: Ethernet Capture by Node A.

The figure above illustrates Ethernet Capture, in which computer A dominates computer B. Originally both computers have data to transmit. A transmits first. A and B then both simultaneously try to transmit. B picks a larger retransmission interval than A (shown in red) and defers. A sends, and then sends again. There is a short pause, and then both A and B attempt to resume transmission. A and B both back off; however, since B was already in back-off (it failed to retransmit), it chooses from a larger range of back-off times (using the exponential back-off algorithm). A is therefore more likely to succeed, which it does in the example. At the next pause in transmission, A and B both attempt to send; since B again fails, it further increases its back-off and is now unable to compete fairly with A.

Ethernet Capture may also arise when many sources compete with one source which has much more data to send. In these situations some nodes may be "locked out" of using the medium for a period of time. The use of higher speed transmission (e.g. 100 Mbps) significantly reduces the probability of Capture, and the use of full-duplex cabling eliminates the effect.

ISDN
ISDN, which stands for Integrated Services Digital Network, is a system of digital phone connections which has been available for over a decade. This system allows voice and data to be transmitted simultaneously across the world using end-to-end digital connectivity. With ISDN, voice and data are carried by bearer channels (B channels) occupying a bandwidth of 64 kb/s (kilobits per second). Some switches limit B channels to a capacity of 56 kb/s. A data channel (D channel) handles signaling at 16 kb/s or 64 kb/s, depending on the service type. Note that, in ISDN terminology, "k" means 1000 (10^3), not 1024 (2^10) as in many computer applications (the designator "K" is sometimes used to represent this value); therefore, a 64 kb/s channel carries data at a rate of 64000 b/s. A newer set of standard prefixes handles this distinction: "k" (kilo-) means 1000 (10^3), "M" (mega-) means 1000000 (10^6), and so on, while "Ki" (kibi-) means 1024 (2^10), "Mi" (mebi-) means 1048576 (2^20), and so on. There are two basic types of ISDN service: Basic Rate Interface (BRI) and Primary Rate Interface (PRI).

BRI consists of two 64 kb/s B channels and one 16 kb/s D channel for a total of 144 kb/s. This basic service is intended to meet the needs of most individual users. PRI is intended for users with greater capacity requirements. Typically the channel structure is 23 B channels plus one 64 kb/s D channel for a total of 1536 kb/s. In Europe, PRI consists of 30 B channels plus one 64 kb/s D channel for a total of 1984 kb/s. It is also possible to support multiple PRI lines with one 64 kb/s D channel using Non-Facility Associated Signaling (NFAS). H channels provide a way to aggregate B channels. They are implemented as:

• H0 = 384 kb/s (6 B channels)
• H10 = 1472 kb/s (23 B channels)
• H11 = 1536 kb/s (24 B channels)
• H12 = 1920 kb/s (30 B channels) - international (E1) only
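As a quick check of the figures above, each rate is simply the number of aggregated 64 kb/s channels (a small illustrative calculation):

B = 64  # kb/s per B channel (the BRI D channel is 16 kb/s, the PRI D channel 64 kb/s)

bri = 2 * B + 16                 # 2B + D = 144 kb/s
pri_us = 23 * B + 64             # 23B + D = 1536 kb/s (T1)
pri_europe = 30 * B + 64         # 30B + D = 1984 kb/s (E1)
h_channels = {"H0": 6 * B, "H10": 23 * B, "H11": 24 * B, "H12": 30 * B}

print(bri, pri_us, pri_europe)   # 144 1536 1984
print(h_channels)                # {'H0': 384, 'H10': 1472, 'H11': 1536, 'H12': 1920}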

To access BRI service, it is necessary to subscribe to an ISDN phone line. The customer must be within 18000 feet (about 3.4 miles or 5.5 km) of the telephone company central office for BRI service; beyond that, expensive repeater devices are required, or ISDN service may not be available at all. Customers will also need special equipment to communicate with the phone company switch and with other ISDN devices. These devices include ISDN Terminal Adapters (sometimes called, incorrectly, "ISDN Modems") and ISDN Routers.

ISDN History The early phone network consisted of a pure analog system that connected telephone users directly by a mechanical interconnection of wires. This system was very inefficient, was very prone to breakdown and noise, and did not lend itself easily to long-distance connections. Beginning in the 1960s, the telephone system gradually began converting its internal connections to a packet-based, digital switching system. Today, nearly all voice switching in the U.S. is digital within the telephone network. Still, the final connection from the local central office to the customer equipment was, and still largely is, an analog Plain-Old Telephone Service (POTS) line.

A standards movement was started by the International Telephone and Telegraph Consultative Committee (CCITT), now known as the International Telecommunications Union (ITU). The ITU is a United Nations organization that coordinates and standardizes international telecommunications. Original recommendations of ISDN were in CCITT Recommendation I.120 (1984) which described some initial guidelines for implementing ISDN. Local phone networks, especially the regional Bell operating companies, have long hailed the system, but they had been criticized for being slow to implement ISDN. One good reason for the delay is the fact that the two major switch manufacturers, Northern Telecom (now known as Nortel Networks), and AT&T (whose switch business is now owned by Lucent Technologies), selected different ways to implement the CCITT standards. These standards didn't always interoperate. This situation has been likened to that of earlier 19th century railroading. "People had different gauges, different tracks... nothing worked well." In the early 1990s, an industry-wide effort began to establish a specific implementation for ISDN in the U.S. Members of the industry agreed to create the National ISDN 1 (NI-1) standard so that end users would not have to know the brand of switch they are connected to in order to buy equipment and software compatible with it. However, there were problems agreeing on this standard. In fact, many western states would not implement NI-1. Both Southwestern Bell and U.S. West (now Qwest) said that they did not plan to deploy NI-1 software in their central office switches due to incompatibilities with their existing ISDN networks. Ultimately, all the Regional Bell Operating Companies (RBOCs) did support NI-1. A more comprehensive standardization initiative, National ISDN 2 (NI2), was later adopted. Some manufacturers of ISDN communications equipment, such as Motorola and U S Robotics (now owned by 3Com), worked with the RBOCs to develop configuration standards for their equipment. These kinds of actions, along with more competitive pricing, inexpensive ISDN connection equipment, and the desire for people to have relatively low-cost high-bandwidth Internet access have made ISDN more popular in recent years.

Most recently, ISDN service has largely been displaced by broadband internet service, such as xDSL and Cable Modem service. These services are faster, less expensive, and easier to set up and maintain than ISDN. Still, ISDN has its place, as backup to dedicated lines, and in locations where broadband service is not yet available.

Advantages of ISDN
1. Speed
The modem was a big breakthrough in computer communications. It allowed computers to communicate by converting their digital information into an analog signal to travel through the public phone network. There is an upper limit to the amount of information that an analog telephone line can hold. Currently, it is about 56 kb/s bidirectional. Commonly available modems have a maximum speed of 56 kb/s, but are limited by the quality of the analog connection and routinely achieve only about 45-50 kb/s. Some phone lines do not support 56 kb/s connections at all. There were formerly two competing, incompatible 56 kb/s standards (X2 from U S Robotics (later bought by 3Com), and K56flex from Rockwell/Lucent). This standards problem was resolved when the ITU released the V.90, and later V.92, standard for 56 kb/s modem communications. ISDN allows multiple digital channels to be operated simultaneously through the same regular phone wiring used for analog lines. The change comes about when the telephone company's switches can support digital connections. Therefore, the same physical wiring can be used, but a digital signal, instead of an analog signal, is transmitted across the line. This scheme permits a much higher data transfer rate than analog lines. BRI ISDN, using a channel aggregation protocol such as BONDING or Multilink-PPP, supports an uncompressed data transfer speed of 128 kb/s, plus bandwidth for overhead and signaling. In addition, the latency, or the amount of time it takes for a communication to begin, on an ISDN line is typically about half that of an analog line. This improves response for interactive applications, such as games.

2. Multiple Devices
Previously, it was necessary to have a separate phone line for each device you wished to use simultaneously. For example, one line each was required for a telephone, fax, computer, bridge/router, and live video conference system. Transferring a file to someone while talking on the phone or seeing their live picture on a video screen would require several potentially expensive phone lines. ISDN allows multiple devices to share a single line. It is possible to combine many different digital data sources and have the information routed to the proper destination. Since the line is digital, it is easier to keep the noise and interference out while combining these signals. ISDN technically refers to a specific set of digital services provided through a single, standard interface. Without ISDN, distinct interfaces are required instead.

3. Signaling
Instead of the phone company sending a ring voltage signal to ring the bell in your phone ("In-Band signal"), it sends a digital packet on a separate channel ("Out-of-Band signal"). The Out-of-Band signal does not disturb established connections, no bandwidth is taken from the data channels, and call setup time is very fast. For example, a V.90 or V.92 modem typically takes 30-60 seconds to establish a connection; an ISDN call setup usually takes less than 2 seconds. The signaling also indicates who is calling, what type of call it is (data/voice), and what number was dialed. Available ISDN phone equipment is then capable of making intelligent decisions on how to direct the call.

4. Interfaces
In the U.S., the telephone company provides its BRI customers with a U interface. The U interface is a two-wire (single pair) interface from the phone switch, the same physical interface provided for POTS lines. It supports full-duplex data transfer over a single pair of wires, therefore only a single device can be connected to a U interface. This device is called a Network Termination 1 (NT-1). The situation is different elsewhere in the world, where

the phone company is allowed to supply the NT-1, and thereby the customer is given an S/T interface. The NT-1 is a relatively simple device that converts the 2-wire U interface into the 4-wire S/T interface. The S/T interface supports multiple devices (up to 7 devices can be placed on the S/T bus) because, while it is still a full-duplex interface, there is now a pair of wires for receive data, and another for transmit data. Today, many devices have NT-1s built into their design. This has the advantage of making the devices less expensive and easier to install, but often reduces flexibility by preventing additional devices from being connected. Technically, ISDN devices must go through a Network Termination 2 (NT-2) device, which converts the T interface into the S interface (note: the S and T interfaces are electrically equivalent). Virtually all ISDN devices include an NT-2 in their design. The NT-2 communicates with terminal equipment, and handles the Layer 2 and 3 ISDN protocols. Devices most commonly expect either a U interface connection (these have a built-in NT-1), or an S/T interface connection. Devices that connect to the S/T (or S) interface include ISDN capable telephones and FAX machines, video teleconferencing equipment, bridge/routers, and terminal adapters. All devices that are designed for ISDN are designated Terminal Equipment 1 (TE1). All other communication devices that are not ISDN capable, but have a POTS telephone interface (also called the R interface), including ordinary analog telephones, FAX machines, and modems, are designated Terminal Equipment 2 (TE2). A Terminal Adapter (TA) connects a TE2 to an ISDN S/T bus. Going one step in the opposite direction takes us inside the telephone switch. Remember that the U interface connects the switch to the customer premises equipment. This local loop connection is called Line Termination (LT function). The connection to other switches within the phone network is called Exchange Termination (ET function). The LT function and the ET function communicate via the V interface.

RS-232
Electronic data communications between elements will generally fall into two broad categories: single-ended and differential. RS232 (single-ended) was introduced in 1962, and despite rumors of its early demise, has remained widely used throughout the industry.

Independent channels are established for two-way (full-duplex) communications. The RS232 signals are represented by voltage levels with respect to a system common (power / logic ground). The "idle" state (MARK) has the signal level negative with respect to common, and the "active" state (SPACE) has the signal level positive with respect to common. RS232 has numerous handshaking lines (primarily used with modems), and also specifies a communications protocol. The RS-232 interface presupposes a common ground between the DTE and DCE. This is a reasonable assumption when a short cable connects the DTE to the DCE, but with longer lines and connections between devices that may be on different electrical busses with different grounds, this may not be true. RS232 data is bipolar: +3 to +12 volts indicates an "ON" or 0-state (SPACE) condition, while -3 to -12 volts indicates an "OFF" or 1-state (MARK) condition. Modern computer equipment ignores the negative level and accepts a zero voltage level as the "OFF" state. In fact, the "ON" state may be achieved with a lesser positive potential. This means circuits powered by 5 VDC are capable of driving RS232 circuits directly; however, the overall range over which the RS232 signal may be transmitted/received may be dramatically reduced. The output signal level usually swings between +12 V and -12 V. The "dead area" between +3 V and -3 V is designed to absorb line noise. In the various RS232-like definitions this dead area may vary. For instance, the definition for V.10 has a dead area from +0.3 V to -0.3 V. Many receivers designed for RS-232 are sensitive to differentials of 1 V or less.

Data is transmitted and received on pins 2 and 3 respectively. Data Set Ready (DSR) is an indication from the Data Set (i.e., the modem or DSU/CSU) that it is on. Similarly, DTR indicates to the Data Set that the DTE is on. Data Carrier Detect (DCD) indicates that a good carrier is being received from the remote modem. Pins 4 (RTS, Request To Send - from the transmitting computer) and 5 (CTS, Clear To Send - from the Data Set) are used to control the flow of data. In most asynchronous

situations, RTS and CTS are constantly on throughout the communication session. However where the DTE is connected to a multipoint line, RTS is used to turn carrier on the modem on and off. On a multipoint line, it's imperative that only one station is transmitting at a time (because they share the return phone pair). When a station wants to transmit, it raises RTS. The modem turns on carrier, typically waits a few milliseconds for carrier to stabilize, and then raises CTS. The DTE transmits when it sees CTS up. When the station has finished its transmission, it drops RTS and the modem drops CTS and carrier together.

Clock signals (pins 15, 17, & 24) are only used for synchronous communications. The modem or DSU extracts the clock from the data stream and provides a steady clock signal to the DTE. Note that the transmit and receive clock signals do not have to be the same, or even at the same baud rate. Note: the transmit and receive leads (pins 2 and 3) may be reversed depending on whether the equipment is DCE (Data Communications Equipment) or DTE (Data Terminal Equipment).
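As an illustration of these signals in practice, the sketch below opens a serial port with hardware (RTS/CTS) handshaking using the third-party pyserial library; the port name and settings are assumptions for the example, not something specified by the RS-232 standard itself.

import serial  # third-party "pyserial" package

port = serial.Serial(
    "/dev/ttyS0",                    # e.g. "COM1" on Windows; adjust for the real device
    baudrate=9600,
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    rtscts=True,                     # use the RTS/CTS handshaking lines described above
    timeout=1.0,                     # read timeout in seconds
)

port.dtr = True                      # raise DTR: tell the data set that the DTE is ready
port.write(b"AT\r\n")                # transmit some bytes (pin 2/3)
reply = port.read(64)                # read up to 64 bytes, or until the timeout
port.close()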

Aloha

Pure Aloha Protocol
With Pure Aloha, stations are allowed access to the channel whenever they have data to transmit. Because the threat of data collision exists, each station must either monitor its transmission on the rebroadcast or await an acknowledgement from the destination station. By comparing the transmitted packet with the received packet, or by the lack of an acknowledgement, the transmitting station can determine the success of the transmitted packet. If the transmission was unsuccessful, it is resent after a random amount of time to reduce the probability of re-collision.

Figure: Pure Aloha Protocol

Advantages:
• Superior to fixed assignment when there is a large number of bursty stations.
• Adapts to a varying number of stations.

Disadvantages:
• Theoretically proven maximum throughput of 18.4%.
• Requires queuing buffers for retransmission of packets.

Slotted Aloha
The first of the contention-based protocols we evaluate is the Slotted Aloha protocol. The channel bandwidth is a continuous stream of slots whose length is the time necessary to transmit one packet. A station with a packet to send will transmit on the next available slot boundary. In the event of a collision, each station involved in the collision retransmits at some random time in order to reduce the possibility of another collision. Obviously the limits imposed which govern the random retransmission of the packet will have an effect on the delay associated with successful packet delivery. If the limit is too short, the probability of another collision is high. If the limit is too long, the probability of another collision lessens, but there is unnecessary delay in the retransmission. For the Mars regional network studied here, the resending of the packet will occur at some random time not greater than the burst factor times the propagation delay. Another important simulation characteristic of the Slotted Aloha protocol is the action which takes place on transmission of the packet. Methods include blocking (i.e. prohibiting packet generation) until verification of successful transmission occurs; this is known as "stop-and-wait". Another method, known as "go-back-n", allows continual transmission of queued packets, but on the detection of a collision will retransmit all packets from the point of the collision. This is done to preserve the order of the packets. In this simulation model queued packets are continually sent and only the packets involved in a collision are retransmitted. This is called "selective-repeat" and allows out-of-order transmission of packets.

Slotted Aloha Protocol
By making a small restriction in the transmission freedom of the individual stations, the throughput of the Aloha protocol can be doubled. Assuming constant-length packets, transmission time is broken into slots equivalent to the transmission time of a single packet. Stations are only allowed to transmit at slot boundaries. When packets collide they will overlap completely instead of

partially. This has the effect of doubling the efficiency of the Aloha protocol and has come to be known as Slotted Aloha.

Figure: Slotted Aloha Protocol

Advantages:
• Doubles the efficiency of Aloha.
• Adaptable to a changing station population.

Disadvantages:
• Theoretically proven maximum throughput of 36.8%.
• Requires queuing buffers for retransmission of packets.
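The 18.4% and 36.8% figures quoted above follow from the classical throughput formulas S = G * e^(-2G) for pure Aloha and S = G * e^(-G) for slotted Aloha, where G is the offered load; a small numerical check:

import math

def pure_aloha_throughput(G):
    return G * math.exp(-2 * G)      # successful frames per frame time

def slotted_aloha_throughput(G):
    return G * math.exp(-G)

print(round(pure_aloha_throughput(0.5), 3))     # maximum at G = 0.5 -> 0.184 (18.4%)
print(round(slotted_aloha_throughput(1.0), 3))  # maximum at G = 1.0 -> 0.368 (36.8%)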

Token Bus LAN.

Token bus is a network implementing the token ring protocol over a "virtual ring" on a coaxial cable. A token is passed around the network nodes and only the node possessing the token may transmit. If a node doesn't have anything to send, the token is passed on to the next node on the virtual ring. Each node must know the address of its neighbor in the ring, so a special protocol is needed to notify the other nodes of connections to, and disconnections from, the ring. Token bus was standardized by the IEEE 802.4 Working Group. It is mainly used for industrial applications. Token bus was used by GM (General Motors) for their Manufacturing Automation Protocol (MAP) standardization effort.
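A minimal sketch of the token-passing idea on such a virtual ring (an illustration only; real IEEE 802.4 also handles ring maintenance, priorities and lost tokens):

from collections import deque

def token_round(ring_order, queues):
    """Pass the token once around the virtual ring.

    ring_order: list of node names in logical ring order
    queues:     dict mapping node name -> deque of frames waiting to be sent
    """
    sent = []
    for node in ring_order:               # the token visits each node in turn
        if queues[node]:                  # only the token holder may transmit
            sent.append((node, queues[node].popleft()))
        # otherwise the node simply passes the token on to its neighbour
    return sent

queues = {"A": deque(["frame1"]), "B": deque(), "C": deque(["frame2", "frame3"])}
print(token_round(["A", "B", "C"], queues))   # [('A', 'frame1'), ('C', 'frame2')]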

Token Ring LAN
1. Introduction
Conventional Token Ring networks are used primarily in technical and office environments [1-3]. The main principles of operation of these LANs are as follows. Whenever a station which is initially in the listening mode wants to send a frame, it first waits for the token (a three-byte frame; without it no station has the right to send frames through the ring), then initiates transmission of the data frame after seizing the free token and including it in the header of the frame. The data frame, which includes the address of the destination, is received and then reinserted in the ring bit by bit by all stations in the ring, see Fig. (1). This procedure continues until the frame circulates back to the initiating station, which is now considered to be in the transmission mode, where it is either saved for comparison with the original data or discarded. It is obvious that the procedure will provide stations with a copy of the frame; only the desired station takes a copy of the frame while the others ignore it. The intended station, after receiving the data frame completely, sets the response bits at the tail of the data frame as an acknowledgement. After a successful transmission, the transmitting station releases the seized token. This is done in two ways depending on the bit rate (speed) of the ring. With slower rings (4 Mbps), the token is released only after the reception of the slotted response bits. With higher speed rings (16 Mbps), it is released after transmitting the last bit of a frame (this is known as early token release) [4-6]. In this paper, a modified token ring operation is adopted to improve the performance of the conventional ring LAN. It is based on splitting the original LAN into multiple sub-LANs (sub-rings) managed by two supervisor stations (for simplicity of simulation, the two-token case is considered in this paper).

4. Explain error control and transmission techniques

Introduction
The data link layer is layer 2 in the OSI model. It is responsible for communications between adjacent network nodes. It handles the data moving in and out across the physical layer.

It also provides a well-defined service to the network layer. The data link layer is divided into two sublayers: Media Access Control (MAC) and Logical Link Control (LLC). The data link layer ensures that an initial connection has been set up, divides output data into data frames, and handles the acknowledgements from a receiver that the data arrived successfully. It also ensures that incoming data has been received successfully by analyzing bit patterns at special places in the frames. In the following sections the data link layer's functions, error control and flow control, are discussed. After that the MAC layer is explained, including its multiple access protocols.

Error Control
The network is responsible for the transmission of data from one device to another. The end-to-end transfer of data from a transmitting application to a receiving application involves many steps, each subject to error. With the error control process, we can be confident that the transmitted and received data are identical. Data can be corrupted during transmission. For reliable communication, errors must be detected and corrected. Error control is the process of detecting and correcting both bit-level and packet-level errors.

Types of Errors
Single Bit Error: the term single bit error means that only one bit of the data unit was changed, from 1 to 0 or from 0 to 1.
Burst Error: the term burst error means that two or more bits in the data unit were changed. Burst error is also called packet-level error, where errors such as packet loss, duplication and reordering occur.

Error Detection
Error detection is the process of detecting errors during transmission between the sender and the receiver. Types of error detection:

• Parity checking
• Cyclic Redundancy Check (CRC)
• Checksum

Redundancy
Redundancy allows a receiver to check whether received data was corrupted during transmission, so that it can request a retransmission. Redundancy is the concept of using extra bits for error detection. As shown in the figure, the sender adds redundant bits (R) to the data unit and sends it to the receiver; the receiver passes the received bit stream through a checking function. If no error is found, the data portion of the data unit is accepted and the redundant bits are discarded; otherwise the receiver asks for a retransmission.

Parity checking
Parity adds a single bit that indicates whether the number of 1 bits in the preceding data is even or odd. If a single bit is changed in transmission, the message will change parity and the error can be detected at this point. Parity checking is not very robust, since if the number of bits changed is even, the check bit will still appear valid and the error will not be detected. There are two forms: 1. single-bit parity and 2. two-dimensional parity. Moreover, parity does not indicate which bit contained the error, even when it can detect it. The data must be discarded entirely and re-transmitted from scratch. On a noisy transmission medium a successful transmission could take a long time, or even never occur. Parity does have the advantage, however, that it is about the best possible code that uses only a single bit of space.
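A minimal sketch of single-bit even parity (purely illustrative):

def add_even_parity(bits):
    """Append one parity bit so that the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(received):
    """True if the received word still contains an even number of 1s."""
    return sum(received) % 2 == 0

word = [1, 0, 1, 1, 0, 0, 1]       # four 1s, so the parity bit is 0
sent = add_even_parity(word)
print(parity_ok(sent))             # True

sent[2] ^= 1                       # a single-bit error is detected...
print(parity_ok(sent))             # False
# ...but flipping two bits would go unnoticed, as noted above.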

Cyclic Redundancy Check
CRC is a very efficient redundancy checking technique. It is based on binary division of the data unit, the remainder of which (the CRC) is added to the data unit and sent to the receiver. The receiver divides the data unit by the same divisor. If the remainder is zero, the data unit is accepted and passed up the protocol stack; otherwise it is considered to have been corrupted in transit, and the packet is dropped. The sequential steps in CRC are as follows.

The sender follows these steps:
• The data unit is extended by appending a number of 0s that is one less than the number of bits in the divisor.
• The extended data unit is divided by the predefined divisor using binary division; the remainder is called the CRC.
• The CRC is appended to the data unit and sent to the receiver.

The receiver follows these steps:
• When the data unit arrives followed by the CRC, it is divided by the same divisor that was used to find the CRC (remainder).
• If the remainder of this division is zero, the data is error free; otherwise it is corrupted.
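A minimal sketch of the division described above, using modulo-2 (XOR) long division on bit lists; the short generator polynomial is just an example, not a standard CRC polynomial:

def mod2_div(bits, divisor):
    """Modulo-2 (XOR) long division; returns the remainder as a bit list."""
    data = list(bits)
    for i in range(len(data) - len(divisor) + 1):
        if data[i] == 1:                      # XOR the divisor in at this position
            for j, d in enumerate(divisor):
                data[i + j] ^= d
    return data[-(len(divisor) - 1):]

def crc_append(data_bits, divisor):
    """Sender: append zeros, divide, and attach the remainder (the CRC)."""
    padded = data_bits + [0] * (len(divisor) - 1)
    return data_bits + mod2_div(padded, divisor)

data = [1, 0, 1, 1, 0, 1]
divisor = [1, 0, 1, 1]                        # example generator x^3 + x + 1
codeword = crc_append(data, divisor)          # data followed by its 3-bit CRC

# Receiver: dividing the received codeword by the same divisor gives a zero
# remainder when the frame is error free; any other remainder means corruption.
print(mod2_div(codeword, divisor) == [0, 0, 0])   # True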

The diagram shows how the CRC process works: [a] sender-side CRC generator, [b] receiver-side CRC checker.

Checksum
The checksum is the third error detection mechanism. Checksums are used in the upper layers, while parity checking and CRC are used in the physical layer. The checksum is also based on the concept of redundancy. The checksum mechanism involves two operations.

Checksum generator
The sender uses the checksum generator. First the data unit is divided into equal segments of n bits. All the segments are then added together using 1's complement arithmetic, and the sum is complemented. This complemented sum is the checksum, which is sent along with the data unit. Example: if the 16 bits 10001010 00100011 are to be sent to the receiver, the checksum is 01010010, so the final transmitted data unit is 10001010 00100011 01010010.

Checksum checker
The receiver divides the received data unit into equal-sized segments. All segments (including the checksum) are added using 1's complement arithmetic and the result is complemented once again. If the result is zero, the data is accepted; otherwise it is rejected. Example: if the final result is non-zero, the data unit is rejected.
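A minimal sketch of this one's-complement checksum for 8-bit segments (illustrative only; the Internet checksum works the same way but on 16-bit words):

def ones_complement_checksum(segments, bits=8):
    """Add the segments with end-around carry, then complement the result."""
    mask = (1 << bits) - 1
    total = 0
    for seg in segments:
        total += seg
        total = (total & mask) + (total >> bits)   # fold any carry back in
    return ~total & mask                           # one's complement of the sum

data = [0b10001010, 0b00100011]
checksum = ones_complement_checksum(data)
print(format(checksum, "08b"))                     # 01010010, as in the example above

# Checker: summing the data plus the checksum and complementing must give zero.
print(ones_complement_checksum(data + [checksum])) # 0 -> accept the data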

Error Correction
This type of error control allows a receiver to reconstruct the original information when it has been corrupted during transmission.

Hamming Code
The Hamming code is a single-bit error correction method using redundant bits. In this method, redundant bits are included with the original data; those bits are arranged such that different incorrect bits produce different error results, so the incorrect bit can be identified. Once the bit is identified, the receiver can reverse its value and correct the error. Hamming code can be applied to any length of data unit and uses the relationships between the data and the redundancy bits. Algorithm:

1. Parity bits are placed at the positions that are powers of two (2^r).
2. The rest of the positions are filled with the original data bits.
3. Each parity bit covers its own set of bit positions in the code.
4. The final code is sent to the receiver.
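A minimal sketch of these steps for the Hamming (7,4) code, where the parity bits sit at positions 1, 2 and 4 (positions are 1-based; illustrative only):

def hamming74_encode(d):
    """d = [d1, d2, d3, d4] data bits -> 7-bit codeword with even parity bits."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                       # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4                       # covers positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4                       # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]     # codeword positions 1..7

def hamming74_correct(code):
    """Recompute the parities; their pattern points at the erroneous bit, if any."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    error_pos = s1 + 2 * s2 + 4 * s4        # 0 means no single-bit error detected
    if error_pos:
        c[error_pos - 1] ^= 1               # flip the identified bit back
    return c

sent = hamming74_encode([1, 0, 1, 1])
received = list(sent)
received[6] ^= 1                            # corrupt bit 7 in transit
print(hamming74_correct(received) == sent)  # True: the error was located and fixed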

Working through the parity calculation, we compute the even parities for the various bit combinations; the value for each combination is the value of the corresponding r (redundancy) bit. R1 takes care of bits 1, 3, 5, 7, 9, 11, and it is set so that each covered group has even parity. The same method is used for the rest of the parity bits. Suppose an error occurs at bit 7, which is changed from 1 to 0. The receiver then recalculates the same sets of bits used by the sender, and from the mismatching parities it can identify the exact location of the error. Once the bit is identified, the receiver can reverse its value and correct the error.

Flow Control
Flow control is an important design issue for the data link layer that controls the flow of data between sender and receiver. In communication there is a transmission medium between sender and receiver, and a problem arises in the following case: the sender sends data at a higher rate and the receiver is too sluggish to support that data rate. To solve this problem, flow control is introduced in the data link layer (it also operates at several higher layers). The main purpose of flow control is to introduce efficiency into computer networks.

Approaches to Flow Control
1. Feedback-based flow control
2. Rate-based flow control

Feedback-based flow control is used in the data link layer, while rate-based flow control is used in the network layer.

Feedback-based Flow Control
In feedback-based flow control, the sender will not send the next data until it receives feedback from the receiver.

Types of feedback-based flow control:
A. Stop-and-Wait Protocol
B. Sliding Window Protocol
   1. A One-Bit Sliding Window Protocol
   2. A Protocol Using Go Back N
   3. A Protocol Using Selective Repeat

A. A Simplex Stop-and-Wait Protocol
In this protocol we make the following assumptions:
1. It provides unidirectional flow of data from sender to receiver.
2. The communication channel is assumed to be error free.
In this protocol the sender simply sends data and waits for the acknowledgement from the receiver; that is why it is called the Stop-and-Wait Protocol. It is not very efficient, but it is the simplest form of flow control.

In this scheme we assumed the communication channel to be error free. If the channel has errors, the receiver may not get the correct data from the sender, and the sender will not be able to send the next data because it never receives an acknowledgement; communication would simply stop. To solve this problem, two new concepts were introduced (a short sketch follows the list):

1. TIMER: when the sender starts to send the data, it starts a timer. If the sender is not able to get an acknowledgement within a particular time, it sends the buffered data once again to the receiver.
2. SEQUENCE NUMBER: the sender sends the data with a specific sequence number; after receiving the data, the receiver sends an acknowledgement carrying that sequence number, and the sender expects the acknowledgement of the same sequence number.
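A minimal sketch of this stop-and-wait scheme with a timeout and a one-bit sequence number (the channel object with send and recv_ack operations is assumed purely for illustration):

def stop_and_wait_send(frames, channel, timeout=1.0, max_tries=10):
    """Send frames one at a time; resend on timeout until the matching ACK arrives."""
    seq = 0
    for payload in frames:
        for _ in range(max_tries):
            channel.send(seq, payload)         # (re)transmit and, implicitly, start the timer
            ack = channel.recv_ack(timeout)    # returns None if the timer expires
            if ack == seq:                     # acknowledgement for this sequence number
                break
        else:
            raise RuntimeError("link appears to be down")
        seq ^= 1                               # alternate the sequence bit for the next frame
    return True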

This type of scheme is called Positive Acknowledgement with Retransmission (PAR).

B. Sliding Window Protocol
Problems with the stop-and-wait protocol: the sender must wait either for a positive acknowledgement from the receiver or for a timeout before it can send the next frame, so even if the sender is ready to send new data, it cannot; the sender is dependent on the receiver. Also, the previous protocols have a one-sided flow: only the sender sends data and the receiver just acknowledges it, so bandwidth in the reverse direction is wasted. To solve these problems the Sliding Window Protocol was introduced. In this protocol the sender and receiver both use a buffer of some size, so the sender does not need to wait for an acknowledgement before sending the next data; it can send frame after frame without waiting for the receiver's acknowledgement. It also solves the bandwidth problem, because both sender and receiver use the channel to send data, and the receiver simply attaches its acknowledgement to the data it wants to send back to the sender; no separate bandwidth is used for acknowledgements, so bandwidth is saved. This whole process is called PIGGYBACKING.

Types of Sliding Window Protocol:
i. A One-Bit Sliding Window Protocol
ii. A Protocol Using Go Back N
iii. A Protocol Using Selective Repeat

i. A One-Bit Sliding Window Protocol
This protocol has a window size of one, so the only possible sequence numbers for the sender and receiver are 0 and 1. The protocol includes sequence, acknowledgement and packet numbers. It uses a full-duplex channel, so there are two possibilities:
1. The sender starts sending the data first, and the receiver starts sending data after it receives the data.
2. The receiver and sender both start sending packets simultaneously.
The first case is simple and works perfectly, but there can be an error in the second one, such as duplication of a packet even without any transmission error.

ii. A Protocol Using Go Back N
The problem with simple pipelining is that if the sender is sending 10 packets and a problem occurs in the 8th one, the whole data has to be resent. The protocols called Go Back N and Selective Repeat were introduced to solve this problem. In Go Back N there are two possibilities at the receiver's end: it may have a large window size, or it may have a window size of one.

The window size at the receiver end may be large or only one. In the case where the window size at the receiver is one, as shown in figure (a), suppose the sender wants to send packets one to ten but there is an error in the 2nd packet. The sender sends zero, one, two, and so on; here we assume that the sender has a timeout interval of 8, so the timeout will occur after 8 packets and up to that point it will not wait for the acknowledgement. In this case the 2nd packet arrives at the receiver with an error, and the following packets up to the 8th are discarded by the receiver, so the loss of data is large. In the other case, with a large window size at the receiver end, as shown in figure (b), if the 2nd packet arrives with an error the receiver still accepts the 3rd packet, but it sends a NAK for 2 to the sender and buffers the 3rd packet. The receiver does the same thing for the 4th and 5th packets. When the sender receives the NAK for the 2nd packet, it immediately resends the 2nd packet to the receiver. After receiving the 2nd packet, the receiver sends an ACK for the 5th packet, saying that it has received everything up to packet 5. So there is no need to resend the 3rd, 4th and 5th packets again; they are already buffered at the receiver side.
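A minimal sketch of the Go Back N sender: it keeps a window of unacknowledged frames and, on a timeout, goes back and retransmits everything from the oldest unacknowledged frame onward (the channel interface is assumed for illustration):

def go_back_n_send(frames, channel, window=4, timeout=1.0):
    """Sliding-window sender that goes back to the oldest unACKed frame on timeout."""
    base = 0                                   # oldest unacknowledged frame
    next_seq = 0                               # next frame to send for the first time
    while base < len(frames):
        while next_seq < base + window and next_seq < len(frames):
            channel.send(next_seq, frames[next_seq])   # fill the window
            next_seq += 1
        ack = channel.recv_ack(timeout)        # cumulative ACK, or None on timeout
        if ack is None:
            next_seq = base                    # go back N: resend the whole window
        else:
            base = ack + 1                     # everything up to `ack` is confirmed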

iii. A Protocol Using Selective Repeat
The Go Back N protocol works well when errors are rare, but if the line is poor it wastes a lot of bandwidth on retransmitted frames. So, to use the line more efficiently, the Selective Repeat protocol was introduced. In this protocol the sender's window starts at 0 and grows to some predefined maximum number. The receiver's window size is fixed and equal to the sender's maximum window size. The receiver has a buffer reserved for each sequence number within its fixed window. Whenever a frame arrives, its sequence number is checked to see whether it falls within the window; if so, and if it has not already been received, it is accepted and stored. This is done regardless of whether it is the frame next expected by the network layer. Suppose the window size of the sender and receiver is 7: as shown in figure (a), the sender sends 7 frames to the receiver and starts a timer. When the receiver gets the frames, it sends an ACK back to the sender and passes the frames to the network layer. After doing this, the receiver empties its buffer, advances its window, and expects sequence numbers 7, 0, 1, 2, 3, 4, 5. But if the ACK is lost, the sender will not receive it, so when the timer expires the sender retransmits the original frames 0 to 6 to the receiver. In this case the receiver accepts the frames 0 to 5 (which are duplicates) and sends them to the network layer, so the protocol fails. To solve the problem of duplication, the window size of sender and receiver should be (MAX SEQ + 1)/2, that is, half of the range of sequence numbers. As shown in figure (c), the sender sends frames 0 to 3, as its window size is 4. The receiver accepts the frames, sends acknowledgements to the sender, passes the frames to the network layer, and advances its expected sequence numbers to 4 to 7. If the ACK is lost, the sender will send 0 to 3 to the receiver again, but the receiver is expecting 4 to 7, so it will not accept them. In this way the problem of duplication is solved.
