Performance Evaluation of Deadline Monotonic Policy over 802.11 protocol Ines El Korbi and Leila Azouz Saidane National School of Computer Science University of Manouba, 2010 Tunisia Emails:
[email protected] [email protected]
ABSTRACT Real time applications are characterized by their delay bounds. To satisfy the Quality of Service (QoS) requirements of such flows over wireless communications, we enhance the 802.11 protocol to support the Deadline Monotonic (DM) scheduling policy. We then evaluate the performance of DM in terms of throughput, average medium access delay and medium access delay distribution. To do so, we develop a Markov chain based analytical model and derive expressions for the throughput, the average MAC layer service time and the service time distribution. Finally, we validate the mathematical model and extend the analytical results to a multi-hop network by simulation using the ns-2 network simulator. Keywords: Deadline Monotonic, 802.11 protocol, Performance evaluation, Medium access delay, Throughput, Probabilistic medium access delay bounds.
1 INTRODUCTION
Supporting applications with QoS requirements has become an important challenge for all communication networks. In wireless LANs, the IEEE 802.11 protocol [5] has been enhanced and the IEEE 802.11e protocol [6] was proposed to support quality of service over wireless communications. In the absence of a coordination point, the IEEE 802.11 standard defines the Distributed Coordination Function (DCF) based on the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) protocol. The IEEE 802.11e proposes the Enhanced Distributed Channel Access (EDCA) as an extension of DCF. With EDCA, each station maintains four priorities called Access Categories (ACs). The quality of service offered to each flow depends on the AC to which it belongs. Nevertheless, the granularity of service offered by 802.11e (4 priorities at most) cannot satisfy the requirements of real-time flows (where each flow is characterized by its own delay bound). Therefore, we propose in this paper a new medium access mechanism based on the Deadline Monotonic (DM) policy [9] to schedule real-time flows over 802.11. DM is a real-time scheduling policy that assigns static priorities to flow packets according to their deadlines, the packet with the shortest deadline being assigned the highest
priority. To support the DM policy over 802.11, we use a distributed scheduling scheme and introduce a new medium access backoff policy. We then focus on the performance evaluation of the DM policy in terms of achievable throughput, average MAC layer service time and MAC layer service time distribution. Hence, we follow these steps:
− First, we propose a Markov chain framework modeling the backoff process of n contending stations within the same broadcast region [1]. Due to the complexity of the mathematical model, we restrict the analysis to n contending stations belonging to two traffic categories (each traffic category is characterized by its own delay bound).
− From the analytical model, we derive the throughput achieved by each traffic category.
− Then, we use generalized Z-transforms [3] to derive expressions of the average MAC layer service time and of the service time distribution.
− As the analytical model is restricted to two traffic categories, the analytical results are extended by simulation to more traffic categories.
− Finally, we consider a simple multi-hop scenario to deduce the behavior of the DM policy in a multi-hop environment.
The rest of this paper is organized as follows. In Section 2, we review the state of the art of the IEEE 802.11 DCF, QoS support over 802.11 (mainly the IEEE 802.11e EDCA) and real time scheduling over 802.11. In Section 3, we present the distributed scheduling scheme and introduce the new medium access backoff policy to support DM over 802.11. In Section 4, we present our mathematical model based on Markov chain analysis. Sections 5 and 6 present respectively the throughput and the service time analysis. Analytical results are validated by simulation using the ns-2 network simulator [16]. In Section 7, we extend our study by simulation, first to take into consideration more traffic categories, and second to study the behavior of the DM algorithm in a multi-hop environment where factors like interference or routing protocols come into play. Finally, we conclude in Section 8.
2 LITERATURE REVIEW
2.1 The 802.11 protocol
2.1.1 Description of the IEEE 802.11 DCF
Using DCF, a station shall ensure that the channel is idle when it attempts to transmit. Then it selects a random backoff in the contention window [0, CW−1], where CW is the current window size and varies between the minimum and the maximum contention window sizes. If the channel is sensed busy, the station suspends its backoff until the channel becomes idle for a Distributed Inter Frame Space (DIFS) after a successful transmission, or for an Extended Inter Frame Space (EIFS) after a collision. The packet is transmitted when the backoff reaches zero. A packet is dropped if it still collides after the maximum number of retransmission attempts. The two-way handshaking packet transmission procedure described above is called the basic access mechanism. DCF also defines a four-way handshaking technique called Request To Send/Clear To Send (RTS/CTS) to prevent the hidden station problem. A station Sj is said to be hidden from Si if Sj is within the transmission range of the receiver of Si and out of the transmission range of Si.
2.1.2 Performance evaluation of the 802.11 DCF
Different works have been proposed to evaluate the performance of the 802.11 protocol based on Bianchi's work [1]. Bianchi proposed a Markov chain based analytical model to evaluate the saturation throughput of the 802.11 protocol. By saturation conditions, it is meant that contending stations always have packets to transmit. Several works extended the Bianchi model either to suit more realistic scenarios or to evaluate other performance parameters. The authors of [2] incorporate the frame retry limits in Bianchi's model and show that Bianchi overestimates the
maximum achievable throughput. The native model is also extended in [10] to a non-saturated environment. In [12], the authors derive the average packet service time at an 802.11 node. A generalized Z-transform based framework has been proposed in [3] to derive probabilistic bounds on the MAC layer service time, which makes it possible to provide probabilistic end-to-end delay bounds in a wireless network.
2.2 Supporting QoS over 802.11
2.2.1 Differentiation mechanisms over 802.11
Emerging applications like audio and video require quality of service guarantees in terms of throughput, delay, jitter, loss rate, etc. Transmitting such flows over wireless links requires service differentiation mechanisms in wireless networks. Many medium access schemes have been proposed to provide QoS enhancements over the IEEE 802.11 WLAN. For instance, [4] assigns different priorities to the incoming flows. Priority classes are differentiated according to one of three 802.11 parameters: the backoff increase function, the Inter Frame Spacing (IFS) and the maximum frame length. Experiments show that all three differentiation schemes offer better guarantees to the highest priority flow, but the backoff increase function mechanism does not perform well with TCP flows because ACKs affect the differentiation mechanism. In [7], an algorithm is proposed to provide service differentiation using two parameters of IEEE 802.11, the backoff interval and the IFS. With this scheme, high priority stations are more likely to access the medium than low priority ones. The research described above led to the standardization of a new protocol that supports QoS over 802.11, the IEEE 802.11e protocol [6].
2.2.2 The IEEE 802.11e EDCA
The IEEE 802.11e proposes a new medium access mechanism called the Enhanced Distributed Channel Access (EDCA), which enhances the IEEE 802.11 DCF. With EDCA, each station maintains four priorities called Access Categories (ACs). Each access category is characterized by minimum and maximum contention window sizes and an Arbitration Inter Frame Spacing (AIFS). Different analytical models have been proposed to evaluate the performance of 802.11e EDCA. In [17], Xiao extends Bianchi's model to the prioritized schemes provided by 802.11e by introducing multiple ACs with distinct minimum and maximum contention window sizes, but the AIFS differentiation parameter is lacking in Xiao's model. Recently, Osterbo et al. have proposed
different works to evaluate the performance of the IEEE 802.11e EDCA [13], [14], [15]. They propose a model that takes into consideration all the differentiation parameters of EDCA, especially the AIFS. Moreover, different QoS parameters have been evaluated, such as throughput, average service time, service time distribution and probabilistic response time bounds, for both the saturated and the non-saturated cases. Although the IEEE 802.11e EDCA classifies the traffic into four prioritized ACs, there is still no guarantee of real time transmission service. This is due to the lack of a satisfactory scheduling method for various delay-sensitive flows. Hence, we need a scheduling policy dedicated to such delay-sensitive flows.
2.3 Real time scheduling over 802.11
A distributed solution for the support of real-time sources over IEEE 802.11, called Blackburst, is discussed in [8]. This scheme modifies the MAC protocol to send short transmissions in order to gain priority for real-time service. It is shown that this approach is able to support bounded delays. The main drawback of this scheme is that it requires constant intervals for high priority traffic; otherwise the performance degrades considerably. In [18], the authors propose a distributed priority scheduling over 802.11 to support a class of dynamic priority schedulers such as Earliest Deadline First (EDF) or Virtual Clock (VC). The EDF policy schedules real time flows according to their absolute deadlines, where the absolute deadline is the node arrival time plus the delay bound. To realize distributed scheduling over 802.11, the authors of [18] use a priority broadcast mechanism where each station maintains an entry for the highest priority packet of every other station. Thus, stations can adjust their backoff according to other stations' priorities. The overhead introduced by the priority broadcast mechanism is negligible, because priorities are exchanged using native DATA and ACK packets. Nevertheless, the authors of [18] propose a generic backoff policy which can be used by a class of dynamic priority schedulers, regardless of whether the scheduler targets delay-sensitive or rate-sensitive flows. In this paper, we focus on delay-sensitive flows and propose to support the fixed priority Deadline Monotonic scheduler over 802.11. To this end, we use a priority broadcast mechanism similar to [18] and propose a new medium access backoff policy where the backoff value is inferred from the deadline
information.

3 SUPPORTING DEADLINE MONOTONIC (DM) POLICY OVER 802.11
With DCF, all the stations share the same transmission medium. Hence, the HOL (Head of Line) packets of all the stations (their highest priority packets) contend for the channel with the same priority, even if they have different deadlines. Introducing DM over 802.11 allows stations having packets with short deadlines to access the channel with higher priority than those having packets with long deadlines. Providing such a QoS requires a distributed scheduling scheme and a new medium access policy.
3.1 Distributed Scheduling over 802.11
To realize distributed scheduling over 802.11, we introduce a priority broadcast mechanism similar to [18]. Each station maintains a local scheduling table with entries for the HOL packets of all other stations. Each entry in the scheduling table of node Si comprises two fields (Sj, Dj), where Sj is the source node MAC address and Dj is the
deadline of the HOL packet of node Sj. To broadcast the HOL packet deadlines, we use the DATA/ACK access mode. When a node Si transmits a DATA packet, it piggybacks the deadline of its HOL packet. The nodes hearing the DATA packet add an entry for Si in their local scheduling tables by filling the corresponding fields. The receiver of the DATA packet copies the priority of the HOL packet into the ACK before sending the ACK frame, so that all the stations that did not hear the DATA packet can add an entry for Si using the information in the ACK packet.
3.2 DM medium access backoff policy
Let us consider two stations S1 and S2 transmitting two flows with the same deadline D1 (D1 is expressed as a number of 802.11 slots). The two stations having the same delay bound can access the channel with the same priority using the native 802.11 DCF. Now, suppose that S1 and S2 transmit flows with different delay bounds D1 and D2 such that D1 < D2, and generate two packets at time instants t1 and t2. If S2 had the same delay bound as S1, its packet would have been generated at time t'2 = t2 + D21, where D21 = (D2 − D1). At that time, S1 and S2 would have the same priority and would transmit their packets according to the
802.11 protocol. Thus, to support DM over 802.11, each station uses a new backoff policy where the backoff is composed of:
• the random backoff selected in [0, CW−1] according to the 802.11 DCF, referred to as the BAsic Backoff (BAB);
• the DM Shifting Backoff (DMSB): the additional backoff slots that a station with low priority (whose HOL packet has a large deadline) adds to its BAB in order to have the same priority as the station with the highest priority (whose HOL packet has the shortest deadline).
Whenever a station Si sends an ACK or hears an ACK on the channel, its DMSB is re-evaluated as follows:

DMSB(Si) = Deadline(HOL(Si)) − DTmin(Si)    (1)

where DTmin(Si) is the minimum of the HOL packet deadlines present in the scheduling table of Si, and Deadline(HOL(Si)) is the HOL packet deadline of node Si. Hence, when Si has to transmit its HOL packet with a delay bound Di, it selects a BAB in the contention window [0, CWmin−1] and computes the WHole Backoff (WHB) value as follows:

WHB(Si) = DMSB(Si) + BAB(Si)    (2)

The station Si decrements its BAB when it senses an idle slot. Now, suppose that Si senses the channel busy. If a successful transmission is heard, Si re-evaluates its DMSB when a correct ACK is heard, and adds the new DMSB value to its current BAB as in equation (2). Whereas, if a collision is heard, Si reinitializes its DMSB and adds it to its current BAB, so that the colliding stations contend with the same priority as for their first transmission attempt. Si transmits when its WHB reaches 0. If the transmission fails, Si doubles its contention window size and repeats the above procedure until the packet is successfully transmitted or dropped after the maximum number of retransmission attempts.
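To make the backoff rules above concrete, the sketch below models one station's scheduling table, DMSB and WHB as described by equations (1) and (2). It is an illustrative reading of the policy under our own assumptions (in particular, that the shifting backoff is consumed before the basic backoff, as in the Markov chains of Section 4); the class and method names are hypothetical and are not taken from the paper or from ns-2.

```python
import random

class DMBackoff:
    """Illustrative model of the DM backoff policy (equations (1) and (2))."""

    def __init__(self, cw_min=32):
        self.cw = cw_min      # current contention window size
        self.table = {}       # scheduling table: source MAC -> HOL deadline (slots)
        self.bab = 0          # BAsic Backoff
        self.dmsb = 0         # DM Shifting Backoff

    def update_table(self, src_mac, deadline, own_deadline):
        # Deadlines are piggybacked on DATA and copied into ACK frames;
        # any overheard frame refreshes the local scheduling table.
        self.table[src_mac] = deadline
        self._refresh_dmsb(own_deadline)

    def _refresh_dmsb(self, own_deadline):
        # Equation (1): DMSB = Deadline(HOL) - DTmin, the distance to the
        # most urgent HOL deadline currently known (clamped at 0 as a guard).
        dt_min = min(self.table.values(), default=own_deadline)
        self.dmsb = max(0, own_deadline - dt_min)

    def new_attempt(self, own_deadline):
        # Equation (2): WHB = DMSB + BAB, with the BAB drawn in [0, CW-1].
        self._refresh_dmsb(own_deadline)
        self.bab = random.randrange(self.cw)
        return self.dmsb + self.bab

    def idle_slot(self):
        # Assumption: the shifting backoff is consumed first, then the BAB.
        if self.dmsb > 0:
            self.dmsb -= 1
        elif self.bab > 0:
            self.bab -= 1
        return self.dmsb + self.bab == 0   # True: WHB reached 0, transmit

    def collision_heard(self, own_deadline):
        # Colliding stations contend again with their initial priority:
        # the DMSB is reinitialized and added to the remaining BAB.
        self._refresh_dmsb(own_deadline)
```

In an actual implementation these updates would be driven by overheard DATA/ACK frames, and the doubling of the contention window after a failed transmission would follow the usual 802.11 rules.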
4 MATHEMATICAL MODEL OF THE DM POLICY OVER 802.11
In this section, we propose a mathematical model to evaluate the performance of the DM policy using Markov chain analysis [1]. We consider the following assumptions:
Assumption 1: The system under study comprises n contending stations hearing each other's transmissions.
Assumption 2: Each station Si transmits a flow Fi with a delay bound Di. The n stations are divided into two traffic categories C1 and C2 such that:
− C1 represents n1 nodes transmitting flows with delay bound D1;
− C2 represents n2 nodes transmitting flows with delay bound D2, such that D1 < D2, D21 = (D2 − D1) and (n1 + n2) = n.
Assumption 3: We operate in saturation conditions: each station has a packet available for transmission immediately after the service completion of the previous packet [1].
Assumption 4: A station selects its BAB in a constant contention window [0, W−1], independently of the transmission attempt. This is a simplifying assumption to limit the complexity of the mathematical model.
Assumption 5: We are in stationary conditions, i.e. the n stations have already sent at least one packet.
Depending on the traffic category to which it belongs, each station Si is modeled by a Markov chain representing its whole backoff (WHB) process.
4.1 Markov chain modeling a station of category C1
Figure 1 illustrates the Markov chain modeling a station S1 of category C1. The states of this Markov chain are described by the quadruplet (R, i, i−j, −D21) where:
• R: takes two values, denoted C2 and ~C2. When R = ~C2, the n2 stations of category C2 are decrementing their shifting backoff (DMSB) during D21 slots and do not contend for the channel. When R = C2, the D21 slots have already elapsed and the stations of category C2 contend for the channel.
Figure 1: Markov chain modeling a category C1 station.

• i: the value of the BAB selected by S1 in [0, W−1].
• (i−j): the current backoff of station S1.
• D21: corresponds to (D2 − D1). We choose the negative notation −D21 for stations of C1 to express the fact that only stations of category C2 have a positive DMSB, equal to D21.
Initially, S1 selects a random BAB and is in one of the states (~C2, i, i, −D21), i = 0..W−1. During (D21 − 1) slots, S1 decrements its backoff if none of the (n1 − 1) remaining stations of category C1 transmits. Indeed, during these slots, the n2 stations of category C2 are decrementing their DMSB and do not contend for the channel. When S1 is in one of the states (~C2, i, i−(D21−1), −D21), i = D21..W−1, and senses the channel idle, it decrements its D21-th slot. But S1 knows that henceforth the n2 stations of category C2 can contend for the channel (the D21 slots have elapsed). Hence, S1 moves to one of the states (C2, i, i−D21, −D21), i = D21..W−1.
However, when station S1 is in one of the states (~C2, i, i−j, −D21), i = 1..W−1, j = 0..min(D21−1, i−1), and at least one of the (n1 − 1) remaining stations of category C1 transmits, the stations of category C2 reinitialize their DMSB and do not contend for the channel during additional D21 slots. Therefore, S1 moves to the state (~C2, i−j, i−j, −D21), i = 1..W−1, j = 0..min(D21−1, i−1). Now, if S1 is in one of the states (C2, i, i−D21, −D21), i = (D21+1)..W−1, and at least one of the (n − 1) remaining stations (either a category C1 or a category C2 station) transmits, then S1 moves to one of the states (~C2, i−D21, i−D21, −D21), i = (D21+1)..W−1.
4.2 Markov chain modeling a station of category C2
Figure 2 illustrates the Markov chain modeling a station S2 of category C2. Each state of the S2 Markov chain is represented by the quadruplet (i, k, D21−j, D21) where:
• i: refers to the BAB value selected by S2 in [0, W−1].
• k: refers to the current BAB value of S2.
• D21−j: refers to the current DMSB of S2, j ∈ [0, D21].
• D21: corresponds to (D2 − D1).
When S2 selects a BAB, its DMSB equals D21 and S2 is in one of the states (i, i, D21, D21), i = 0..W−1. During D21 slots, only the n1 stations of category C1 contend for the channel. If S2 senses the channel idle during D21 slots, it moves to one of the states (i, i, 0, D21), i = 0..W−1, where it ends its shifting backoff.
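The structure of the two chains can be made explicit by enumerating their state spaces directly from the quadruplet definitions of Sections 4.1 and 4.2 (including the extra category C2 states with a decremented BAB described just after Figure 2). The helper below is our own illustration, useful only for checking state counts or as a basis for solving the chains numerically; the function names are not from the paper.

```python
def c1_states(W, D21):
    """States (R, i, i-j, -D21) of a category C1 station (Figure 1)."""
    states = []
    for i in range(W):
        # ~C2 states: category C2 stations are still consuming their DMSB.
        for j in range(min(max(0, i - 1), D21 - 1) + 1):
            states.append(("~C2", i, i - j, -D21))
        # C2 states: the D21 slots have elapsed, C2 stations may contend.
        if i >= D21:
            states.append(("C2", i, i - D21, -D21))
    return states

def c2_states(W, D21):
    """States (i, k, D21-j, D21) of a category C2 station (Figure 2)."""
    states = []
    for i in range(W):
        for j in range(D21 + 1):          # DMSB goes from D21 down to 0
            states.append((i, i, D21 - j, D21))
        if i >= 2:                        # one extra BAB decrement with DMSB = 0
            states.append((i, i - 1, 0, D21))
    return states

# Example: W = 32 and D21 = 1, the configuration used in Section 5's scenarios.
print(len(c1_states(32, 1)), len(c2_states(32, 1)))
```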
Figure 2: Markov chain modeling a category C2 station.

When S2 is in one of the states (i, i, 0, D21), i = 0..W−1, the (n2 − 1) other stations of category C2 have also decremented their DMSB and can contend for the channel. Thus, S2 decrements its BAB and moves to the state (i, i−1, 0, D21), i = 2..W−1, only if none of the (n − 1) remaining stations transmits. If S2 is in one of the states (i, i−1, 0, D21), i = 2..W−1, and at least one of the (n − 1) remaining stations transmits, the n2 stations of category C2 reinitialize their DMSB and S2 moves to the state (i−1, i−1, D21, D21), i = 2..W−1.

4.3 Blocking probabilities in the Markov chains
According to the explanations given in paragraphs 4.1 and 4.2, the states of the Markov chains modeling stations S1 and S2 can be divided into the following groups:
• ξ1: the set of states of S1 where none of the n2 stations of category C2 contends for the channel (blue states in figure 1):
ξ1 = {(~C2, i, i−j, −D21), i = 0..W−1, j = 0..min(max(0, i−1), D21−1)}
• γ1: the set of states of S1 where stations of category C2 can contend for the channel (pink states in figure 1):
γ1 = {(C2, i, i−D21, −D21), i = D21..W−1}
• ξ2: the set of states of S2 where stations of category C2 do not contend for the channel (blue states in figure 2):
ξ2 = {(i, i, D21−j, D21), i = 0..W−1, j = 0..(D21−1)}
• γ2: the set of states of S2 where stations of category C2 contend for the channel (pink states in figure 2):
γ2 = {(i, i, 0, D21), i = 0..W−1} ∪ {(i, i−1, 0, D21), i = 2..W−1}
Therefore, when stations of category C1 are in one of the states of ξ1, stations of category C2 are in one of the states of ξ2. Similarly, when stations of category C1 are in one of the states of γ1, stations of category C2 are in one of the states of γ2. Hence, we derive the expressions of the S1 blocking probabilities p11 and p12 shown in figure 1 as follows:
− p11: the probability that S1 is blocked given that S1 is in one of the states of ξ1. p11 is the probability that at least one station S1' of the other (n1 − 1) stations of C1 transmits, given that S1' is in one of the states of ξ1:

p11 = 1 − (1 − τ11)^(n1−1)    (3)

where τ11 is the probability that a station S1' of C1 transmits given that S1' is in one of the states of ξ1:

τ11 = Pr[S1' transmits | ξ1] = π1(~C2,0,0,−D21) / ( Σ_{i=0}^{W−1} Σ_{j=0}^{min(max(0,i−1),D21−1)} π1(~C2,i,i−j,−D21) )    (4)

π1(R,i,i−j,−D21) is defined as the probability of the state (R, i, i−j, −D21) in stationary conditions, and Π1 = {π1(R,i,i−j,−D21)} is the probability vector of a category C1 station.
− p12: the probability that S1 is blocked given that S1 is in one of the states of γ1. p12 is the probability that at least one station S1' of the other (n1 − 1) stations of C1 transmits given that S1' is in one of the states of γ1, or at least one station S2' of the n2 stations of C2 transmits given that S2' is in one of the states of γ2:

p12 = 1 − (1 − τ12)^(n1−1) (1 − τ22)^n2    (5)

where τ12 is the probability that a station S1' of C1 transmits given that S1' is in one of the states of γ1:

τ12 = Pr[S1' transmits | γ1] = π1(C2,D21,0,−D21) / ( Σ_{i=D21}^{W−1} π1(C2,i,i−D21,−D21) )    (6)

and τ22 is the probability that a station S2' of C2 transmits given that S2' is in one of the states of γ2:

τ22 = Pr[S2' transmits | γ2] = π2(0,0,0,D21) / ( Σ_{i=0}^{W−1} π2(i,i,0,D21) + Σ_{i=2}^{W−1} π2(i,i−1,0,D21) )    (7)

π2(i,k,D21−j,D21) is defined as the probability of the state (i, k, D21−j, D21) in stationary conditions, and Π2 = {π2(i,k,D21−j,D21)} is the probability vector of a category C2 station.
In the same way, we evaluate p21 and p22, the blocking probabilities of station S2 shown in figure 2:
− p21: the probability that S2 is blocked given that S2 is in one of the states of ξ2:

p21 = 1 − (1 − τ11)^n1    (8)

− p22: the probability that S2 is blocked given that S2 is in one of the states of γ2:

p22 = 1 − (1 − τ12)^n1 (1 − τ22)^(n2−1)    (9)

The blocking probabilities described above allow deducing the state transition probabilities and building the transition probability matrix Pi for a station of traffic category Ci. Therefore, we can evaluate the state probabilities by solving the following system [11]:

Πi Pi = Πi,   Σ_j πij = 1    (10)

4.4 Transition probability matrices
4.4.1 Transition probability matrix of a category C1 station
Let P1 be the transition probability matrix of a station S1 of category C1. P1{i, j} is the probability of a transition from state i to state j. We have:

P1{(~C2, i, i−j, −D21), (~C2, i, i−(j+1), −D21)} = 1 − p11, i = 2..W−1, j = 0..min(i−2, D21−2)    (11)
P1{(~C2, i, 1, −D21), (~C2, 0, 0, −D21)} = 1 − p11, i = 1..min(W−1, D21−1)    (12)
P1{(~C2, i, i−D21+1, −D21), (C2, i, i−D21, −D21)} = 1 − p11, i = D21..W−1    (13)
P1{(~C2, i, i−j, −D21), (~C2, i−j, i−j, −D21)} = p11, i = 2..W−1, j = 1..min(i−1, D21−1)    (14)
P1{(~C2, i, i, −D21), (~C2, i, i, −D21)} = p11, i = 1..W−1    (15)
P1{(C2, i, i−D21, −D21), (~C2, i−D21, i−D21, −D21)} = p12, i = (D21+1)..W−1    (16)
P1{(C2, i, i−D21, −D21), (C2, (i−1), (i−1−D21), −D21)} = 1 − p12, i = (D21+1)..W−1    (17)
P1{(~C2, 0, 0, −D21), (~C2, i, i, −D21)} = 1/W, i = 0..W−1    (18)

If (D21 < W) then:

P1{(C2, D21, 0, −D21), (~C2, i, i, −D21)} = 1/W, i = 0..W−1    (19)

By replacing p11 and p12 by their values in equations (3) and (5), and by replacing P1 and Π1 in (10) and solving the resulting system, we can express π1(R,i,i−j,−D21) as a function of τ11, τ12 and τ22 given respectively by equations (4), (6) and (7).
4.4.2 Transition probability matrix of a category C2 station
Let P2 be the transition probability matrix of a station S2 belonging to traffic category C2. The transition probabilities of S2 are:

P2{(i, i, D21−j, D21), (i, i, D21−(j+1), D21)} = 1 − p21, i = 0..W−1, j = 0..(D21−1)    (20)
P2{(i, i, D21−j, D21), (i, i, D21, D21)} = p21, i = 0..W−1, j = 0..(D21−1)    (21)
P2{(i, i, 0, D21), (i, i−1, 0, D21)} = 1 − p22, i = 2..W−1    (22)
P2{(1, 1, 0, D21), (0, 0, 0, D21)} = 1 − p22    (23)
P2{(i, i, 0, D21), (i, i, D21, D21)} = p22, i = 1..W−1    (24)
P2{(i, i−1, 0, D21), (i−1, i−1, D21, D21)} = p22, i = 2..W−1    (25)
P2{(i, i−1, 0, D21), (i−1, i−2, 0, D21)} = 1 − p22, i = 3..W−1    (26)
P2{(0, 0, 0, D21), (i, i, D21, D21)} = 1/W, i = 0..W−1    (27)

By replacing p21 and p22 by their values in equations (8) and (9), and by replacing P2 and Π2 in (10) and solving the resulting system, we can express π2(i,k,D21−j,D21) as a function of τ11, τ12 and τ22 given respectively by equations (4), (6) and (7). Moreover, by replacing π1(R,i,i−j,−D21) and π2(i,k,D21−j,D21) by their values in equations (4), (6) and (7), we obtain a system of non-linear equations as follows:

τ11 = f(τ11, τ12, τ22)
τ12 = f(τ11, τ12, τ22)    (28)
τ22 = f(τ11, τ12, τ22)
under the constraint τ11 > 0, τ12 > 0, τ22 > 0, τ11 < 1, τ12 < 1, τ22 < 1.

Solving the above system (28) allows deducing the expressions of τ11, τ12 and τ22, and deriving the state probabilities of the Markov chains modeling category C1 and category C2 stations.
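The paper does not spell out how system (28) is solved; a standard approach is fixed-point iteration on (τ11, τ12, τ22). The sketch below assumes helper functions that return the stationary vectors Π1 and Π2 for given blocking probabilities (i.e. that build P1 and P2 from equations (11)-(27) and solve (10)); those helpers and all names are ours, offered only as an illustration.

```python
def solve_system_28(n1, n2, W, D21, stationary_pi1, stationary_pi2,
                    tol=1e-9, max_iter=10_000):
    """Fixed-point iteration for the non-linear system (28).

    stationary_pi1(p11, p12) -> dict mapping C1 states to probabilities
    stationary_pi2(p21, p22) -> dict mapping C2 states to probabilities
    (both assumed to solve Pi_i * P_i = Pi_i with probabilities summing to 1).
    """
    t11 = t12 = t22 = 0.1                      # initial guess in (0, 1)
    for _ in range(max_iter):
        # Blocking probabilities, equations (3), (5), (8), (9).
        p11 = 1 - (1 - t11) ** (n1 - 1)
        p12 = 1 - (1 - t12) ** (n1 - 1) * (1 - t22) ** n2
        p21 = 1 - (1 - t11) ** n1
        p22 = 1 - (1 - t12) ** n1 * (1 - t22) ** (n2 - 1)

        pi1 = stationary_pi1(p11, p12)
        pi2 = stationary_pi2(p21, p22)

        # Conditional transmission probabilities, equations (4), (6), (7).
        xi1 = sum(pi1[("~C2", i, i - j, -D21)]
                  for i in range(W)
                  for j in range(min(max(0, i - 1), D21 - 1) + 1))
        g1 = sum(pi1[("C2", i, i - D21, -D21)] for i in range(D21, W))
        g2 = (sum(pi2[(i, i, 0, D21)] for i in range(W))
              + sum(pi2[(i, i - 1, 0, D21)] for i in range(2, W)))

        new11 = pi1[("~C2", 0, 0, -D21)] / xi1
        new12 = pi1[("C2", D21, 0, -D21)] / g1
        new22 = pi2[(0, 0, 0, D21)] / g2

        if max(abs(new11 - t11), abs(new12 - t12), abs(new22 - t22)) < tol:
            return new11, new12, new22
        t11, t12, t22 = new11, new12, new22
    raise RuntimeError("fixed point did not converge")
```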
5 THROUGHPUT ANALYSIS

In this section, we evaluate Bi, the normalized throughput achieved by a station of traffic category Ci [1]. We define:
Pi,s: the probability that a station Si belonging to traffic category Ci transmits a packet successfully. Let S1 and S2 be two stations belonging respectively to traffic categories C1 and C2. We have:

P1,s = Pr[S1 transmits successfully | ξ1] Pr[ξ1] + Pr[S1 transmits successfully | γ1] Pr[γ1]
     = τ11 (1 − p11) Pr[ξ1] + τ12 (1 − p12) Pr[γ1]    (29)

P2,s = Pr[S2 transmits successfully | ξ2] Pr[ξ2] + Pr[S2 transmits successfully | γ2] Pr[γ2]
     = τ22 (1 − p22) Pr[γ2]    (30)

Pidle: the probability that the channel is idle. The channel is idle if the n1 stations of category C1 do not transmit, given that these stations are in one of the states of ξ1, or if the n stations (both category C1 and category C2 stations) do not transmit, given that the stations of category C1 are in one of the states of γ1. Thus:

Pidle = (1 − τ11)^n1 Pr[ξ1] + (1 − τ12)^n1 (1 − τ22)^n2 Pr[γ1]    (31)

Hence, the throughput of a category Ci station is given by:

Bi = Pi,s Tp / ( Pidle Te + Ps Ts + (1 − Pidle − Σ_{i=1}^{2} ni Pi,s) Tc )    (32)

where Te denotes the duration of an empty slot, and Ts and Tc denote respectively the durations of a successful transmission and of a collision. Ps = Σ_{i=1}^{2} ni Pi,s is the probability of a successful transmission on the channel, and (1 − Pidle − Σ_{i=1}^{2} ni Pi,s) corresponds to the probability of collision. Finally, Tp denotes the average time required to transmit the packet data payload. We have:

Ts = TPHY + TMAC + Tp + TD + SIFS + (TPHY + TACK + TD) + DIFS    (33)

Tc = TPHY + TMAC + Tp + TD + EIFS    (34)

where TPHY, TMAC and TACK are the durations of the PHY header, the MAC header and the ACK packet [1], [13], and TD is the time required to transmit the two-byte deadline information. Stations hearing a collision wait during EIFS before resuming their backoff. For the numerical results, stations transmit 512-byte data packets using the 802.11b MAC and PHY layer parameters given in Table 1, with a data rate equal to 11 Mbps. For the simulation scenarios, the propagation model is the two-ray ground model, the transmission range of each node is 250 m and the distance between two neighbors is 5 m. The EIFS parameter is set to ACKTimeout as in ns-2, where:

ACKTimeout = DIFS + (TPHY + TACK + TD) + SIFS    (35)
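As a numerical companion to equations (32)-(35), the sketch below computes Ts, Tc and the normalized throughput Bi from the 802.11b parameters of Table 1. It is our own illustration under stated assumptions (payload and deadline field transmitted at the 11 Mbps data rate, and the τ/p quantities supplied externally from the solution of system (28)); it is not code from the paper.

```python
# 802.11b parameters (Table 1), all durations in microseconds.
T_E, SIFS, DIFS = 20.0, 10.0, 50.0
T_PHY, T_MAC, T_ACK = 192.0, 272.0, 112.0
RATE_MBPS = 11.0
T_P = 512 * 8 / RATE_MBPS          # 512-byte payload, bits / Mbps = microseconds
T_D = 2 * 8 / RATE_MBPS            # two-byte deadline field

# Equations (33)-(35).
T_S = T_PHY + T_MAC + T_P + T_D + SIFS + (T_PHY + T_ACK + T_D) + DIFS
ACK_TIMEOUT = DIFS + (T_PHY + T_ACK + T_D) + SIFS
EIFS = ACK_TIMEOUT
T_C = T_PHY + T_MAC + T_P + T_D + EIFS

def throughput(p_is, p_idle, p_succ_total, t_p=T_P, t_e=T_E, t_s=T_S, t_c=T_C):
    """Normalized throughput of one category-i station, equation (32).

    p_is:         per-station success probability P_{i,s}
    p_idle:       idle-channel probability, equation (31)
    p_succ_total: sum over categories of n_i * P_{i,s}
    """
    p_coll = 1.0 - p_idle - p_succ_total
    return p_is * t_p / (p_idle * t_e + p_succ_total * t_s + p_coll * t_c)
```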
Table 1: 802.11b parameters.
Data Rate           11 Mb/s
Slot                20 µs
SIFS                10 µs
DIFS                50 µs
PHY Header          192 µs
MAC Header          272 µs
ACK                 112 µs
Short Retry Limit   7

For all the scenarios, we consider n contending stations with n/2 stations in each traffic category. In figure 3, n is fixed to 8 and we depict the throughput achieved by the different stations present in the network as a function of the contention window size W (D21 = 1). We notice that the throughput achieved by category C1 stations (stations numbered S11 to S14) is greater than the one achieved by category C2 stations (stations numbered S21 to S24).

Figure 3: Normalized throughput as a function of the contention window size (D21 = 1, n = 8).

Analytically, stations belonging to the same traffic category have the same throughput, given by equation (32). Simulation results validate the analytical results and show that stations belonging to the same traffic category (either category C1 or category C2) have nearly the same throughput. Thus, we conclude that DM is fair between stations of the same category. For the subsequent throughput scenarios, we focus on one representative station of each traffic category. Figure 4 compares the category C1 and category C2 station throughputs to the one obtained with 802.11. Curves are represented as a function of W and for different values of D21. As D21 increases, the category C1 station throughput increases, whereas the category C2 station throughput decreases. Moreover, as W increases, the difference between station throughputs is reduced. This is due to the fact that the shifting backoff becomes negligible compared to the contention window size.
Finally, we notice that the category C1 station obtains better throughput with DM than with 802.11, but the opposite scenario happens to the category C 2 station.
Figure 4: Normalized throughput as a function of the contention window size (different D21 values).

In figure 5, we generalize the results to different numbers of contending stations, fixing the contention window size W to 32.

Figure 5: Normalized throughput as a function of the number of contending stations.

All the curves show that DM performs service differentiation over 802.11 and offers better throughput to category C1 stations, independently of the number of contending stations.

6 SERVICE TIME ANALYSIS

In this section, we evaluate the average MAC layer service time of category C1 and category C2 stations using the DM policy. The service time is the time interval from the instant a packet reaches the head of the queue and starts to contend for transmission, to the instant the packet is either acknowledged after a successful transmission or dropped. We evaluate the Z-transform of the MAC layer service time [3], [14], [15] to derive an expression of the average service time. The service time depends on the duration of an idle slot Te, the duration of a successful transmission Ts and the duration of a collision Tc [1], [3], [14]. As Te is the smallest of these durations, the duration of every event Tevent is expressed in units of Te, i.e. as Tevent/Te.
6.1 Z-transform of the MAC layer service time
6.1.1 Service time Z-transform of a category C1 station
Let TS1(Z) be the service time Z-transform of a station S1 belonging to traffic category C1. We define:
H1(R,i,i−j,−D21)(Z): the Z-transform of the time already elapsed from the instant S1 selects a basic backoff in [0, W−1] (i.e. is in one of the states (~C2, i, i, −D21)) to the instant it is found in the state (R, i, i−j, −D21).
Moreover, we define:
• Psuc11: the probability that S1 observes a successful transmission on the channel while S1 is in one of the states of ξ1:

Psuc11 = (n1 − 1) τ11 (1 − τ11)^(n1−2)    (36)

• Psuc12: the probability that S1 observes a successful transmission on the channel while S1 is in one of the states of γ1:

Psuc12 = (n1 − 1) τ12 (1 − τ12)^(n1−2) (1 − τ22)^n2 + n2 τ22 (1 − τ22)^(n2−1) (1 − τ12)^(n1−1)    (37)

We evaluate H1(R,i,i−j,−D21)(Z) for each state of the S1 Markov chain as follows:

H1(~C2,i,i,−D21)(Z) = 1/W + [Psuc11 Z^(Ts/Te) + (p11 − Psuc11) Z^(Tc/Te)] Σ_{k=i+1}^{min(i+D21−1, W−1)} H1(~C2,k,i,−D21)(Z) + Ĥ1(C2,i+D21,i,−D21)(Z) [Psuc12 Z^(Ts/Te) + (p12 − Psuc12) Z^(Tc/Te)]    (38)

where:

Ĥ1(C2,i+D21,i,−D21)(Z) = H1(C2,i+D21,i,−D21)(Z) if (i + D21) ≤ W − 1, and Ĥ1(C2,i+D21,i,−D21)(Z) = 0 otherwise    (39)

We also have:

H1(~C2,i,i−j,−D21)(Z) = ((1 − p11) Z)^j H1(~C2,i,i,−D21)(Z) / [1 − Psuc11 Z^(Ts/Te) − (p11 − Psuc11) Z^(Tc/Te)], i = 2..W−1, j = 1..min(i−1, D21−1)    (40)

H1(C2,i,i−D21,−D21)(Z) = ((1 − p11) Z)^D21 H1(~C2,i,i,−D21)(Z) / [1 − Psuc11 Z^(Ts/Te) − (p11 − Psuc11) Z^(Tc/Te)] + (1 − p12) Z H1(C2,i+1,i+1−D21,−D21)(Z), i = D21..W−2    (41)

H1(C2,W−1,W−1−D21,−D21)(Z) = ((1 − p11) Z)^D21 H1(~C2,W−1,W−1,−D21)(Z) / [1 − Psuc11 Z^(Ts/Te) − (p11 − Psuc11) Z^(Tc/Te)]    (42)

H1(~C2,0,0,−D21)(Z) = 1/W + (1 − p11) Z H1(~C2,1,1,−D21)(Z) / [1 − Psuc11 Z^(Ts/Te) − (p11 − Psuc11) Z^(Tc/Te)] + (1 − p11) Z Σ_{i=2}^{min(W−1,D21−1)} H1(~C2,i,1,−D21)(Z)    (43)

If the transmission state of S1 is (~C2, 0, 0, −D21), the transmission is successful only if none of the (n1 − 1) remaining stations of C1 transmits, whereas when the transmission state of S1 is (C2, D21, 0, −D21), the transmission is successful only if none of the (n − 1) remaining stations (either a category C1 or a category C2 station) transmits. If the transmission fails, S1 tries another transmission. After m retransmissions, if the packet is still not acknowledged, it is dropped. Thus:

TS1(Z) = [(1 − p11) H1(~C2,0,0,−D21)(Z) + (1 − p12) H1(C2,D21,0,−D21)(Z)] Z^(Ts/Te) Σ_{i=0}^{m} ([p11 H1(~C2,0,0,−D21)(Z) + p12 H1(C2,D21,0,−D21)(Z)] Z^(Tc/Te))^i + ([p11 H1(~C2,0,0,−D21)(Z) + p12 H1(C2,D21,0,−D21)(Z)] Z^(Tc/Te))^(m+1)    (44)

6.1.2 Service time Z-transform of a category C2 station
In the same way, let TS2(Z) be the service time Z-transform of a station S2 of category C2. We define:
H2(i,k,D21−j,D21)(Z): the Z-transform of the time already elapsed from the instant S2 selects a basic backoff in [0, W−1] (i.e. is in one of the states (i, i, D21, D21)) to the instant it is found in the state (i, k, D21−j, D21).
Moreover, we define:
• Psuc21: the probability that S2 observes a successful transmission on the channel while S2 is in one of the states of ξ2:

Psuc21 = n1 τ11 (1 − τ11)^(n1−1)    (45)

• Psuc22: the probability that S2 observes a successful transmission on the channel while S2 is in one of the states of γ2:

Psuc22 = n1 τ12 (1 − τ12)^(n1−1) (1 − τ22)^(n2−1) + (n2 − 1) τ22 (1 − τ22)^(n2−2) (1 − τ12)^n1    (46)

We evaluate H2(i,k,D21−j,D21)(Z) for each state of the S2 Markov chain as follows:

H2(i,i,D21,D21)(Z) = 1/W, for i = 0 and i = W−1    (47)

H2(i,i,D21,D21)(Z) = 1/W + [Psuc22 Z^(Ts/Te) + (p22 − Psuc22) Z^(Tc/Te)] H2(i+1,i,0,D21)(Z), i = 1..W−2    (48)

To compute H2(i,i,D21−j,D21)(Z), we define Tdec^j(Z) such that:

Tdec^0(Z) = 1    (49)

Tdec^j(Z) = (1 − p21) Z Tdec^(j−1)(Z) / [1 − Psuc21 Z^(Ts/Te) − (p21 − Psuc21) Z^(Tc/Te)], for j = 1..D21    (50)

So:

H2(i,i,D21−j,D21)(Z) = H2(i,i,D21−j+1,D21)(Z) Tdec^j(Z), i = 0..W−1, j = 1..D21, (i, j) ≠ (0, D21)    (51)

And:

H2(i,i−1,0,D21)(Z) = (1 − p22) Z H2(i+1,i,0,D21)(Z) + (1 − p22) Z H2(i,i,0,D21)(Z) / [1 − (Psuc22 Z^(Ts/Te) + (p22 − Psuc22) Z^(Tc/Te)) Tdec^D21(Z)], i = 2..W−2;
H2(W−1,W−2,0,D21)(Z) = (1 − p22) Z H2(W−1,W−1,0,D21)(Z) / [1 − (Psuc22 Z^(Ts/Te) + (p22 − Psuc22) Z^(Tc/Te)) Tdec^D21(Z)]    (52)

According to figure 2 and using equations (47)-(52), we have:

H2(0,0,0,D21)(Z) = H2(0,0,1,D21)(Z) Tdec^D21(Z) + (1 − p22) Z H2(1,1,0,D21)(Z) / [1 − (Psuc22 Z^(Ts/Te) + (p22 − Psuc22) Z^(Tc/Te)) Tdec^D21(Z)]    (53)

Therefore, we can derive an expression of the S2 service time Z-transform as follows:

TS2(Z) = (1 − p22) Z^(Ts/Te) H2(0,0,0,D21)(Z) Σ_{i=0}^{m} (p22 Z^(Tc/Te) H2(0,0,0,D21)(Z))^i + (p22 Z^(Tc/Te) H2(0,0,0,D21)(Z))^(m+1)    (54)

6.2 Average Service Time
From equation (44) (respectively equation (54)), we derive the average service time of a category C1 station (respectively of a category C2 station). The average service time of a category Ci station is given by:

Xi = TSi^(1)(1)    (55)

where TSi^(1)(Z) is the derivative of the service time Z-transform of station Si [11].
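Equation (55) evaluates the derivative of the service time Z-transform at Z = 1. When TSi(Z) is only available as a black-box function (for example, built from the recursions above), the mean can be approximated numerically; the helper below is our own illustration of that step, not code from the paper.

```python
def average_service_time(ts_z, te, h=1e-6):
    """Approximate X_i = TS_i'(1) (equation (55)) by a central difference.

    ts_z: callable implementing the service time Z-transform TS_i(Z)
    te:   duration of an empty slot, used to convert slot counts to time
    """
    derivative = (ts_z(1.0 + h) - ts_z(1.0 - h)) / (2.0 * h)
    return derivative * te   # TS_i(Z) counts time in units of T_e
```

For the DM model, TS1(Z) and TS2(Z) are finite polynomials in Z (the retry limit m bounds the number of terms), so the central difference around Z = 1 is well behaved.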
By considering the same configuration as in figure 3, we depict in figure 6 the average service time of category C1 and category C2 stations as a function of W. As for the throughput analysis, stations belonging to the same traffic category have nearly the same average service time, and the simulated service time values coincide with the analytical values given by equation (55). These results confirm the fairness of DM in serving stations of the same category.

Figure 6: Average service time as a function of the contention window size (D21 = 1, n = 8).

In figure 7, we show that category C1 stations obtain a better average service time than the one obtained with the 802.11 protocol, whereas the opposite happens for category C2 stations, independently of n, the number of contending stations in the network.

Figure 7: Average service time as a function of the number of contending stations.

6.3 Service Time Distribution
The service time distribution is obtained by inverting the service time Z-transforms given by equations (44) and (54). But we are mostly interested in probabilistic service time bounds, derived by inverting the complementary service time Z-transform given by [11]:

X̃i(Z) = (1 − TSi(Z)) / (1 − Z)    (56)

In figure 8, we depict analytical and simulation values of the complementary service time distribution of both a category C1 and a category C2 station (W = 32).

Figure 8: Complementary service time distribution for different values of D21 (W = 32).

All the curves drop gradually to 0 as the delay increases. Category C1 station curves drop to 0 faster than category C2 curves. Indeed, when D21 = 4, the probability that the S1 service time exceeds 0.005 s equals 0.28%, whereas the S2 service time exceeds 0.005 s with probability 5.67%. Thus, DM offers better service time guarantees to the stations with the highest priority.
In figure 9, we double the contention window size and set it to 64. We notice that the category C1 and category C2 station service time curves become closer. Indeed, when W becomes large, the BAB values increase and the DMSB becomes negligible compared to the basic backoff. The whole backoff values of S1 and S2 become close, and so do their service times.

Figure 9: Complementary service time distribution for different values of D21 (W = 64).

In figure 10, we depict the complementary service time distribution for both category C1 and category C2 stations and for different values of n, the number of contending nodes.

Figure 10: Complementary service time distribution for different numbers of contending stations.

Analytical and simulation results show that the complementary service time curves drop faster when the number of contending stations is small, for both category C1 and category C2 stations. This means that the service time of all stations increases as the number of contending nodes increases.
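Probabilistic bounds such as Pr[service time > 0.005 s] can be read off the complementary transform X̃i(Z) of equation (56). One simple way to obtain them numerically is to expand X̃i(Z) as a power series, since its k-th coefficient is Pr[service time > k Te]. The sketch below does this for a transform given as polynomial coefficients; it is our own illustration, not part of the paper.

```python
def complementary_tail(ts_coeffs, k_max):
    """Tail probabilities from the service-time PGF coefficients.

    ts_coeffs[k] = Pr[service time = k slots], i.e. the coefficient of Z^k
    in TS_i(Z).  Expanding (1 - TS_i(Z)) / (1 - Z) (equation (56)) term by
    term gives coefficient k = Pr[service time > k slots].
    """
    tails, cumulative = [], 0.0
    for k in range(k_max):
        cumulative += ts_coeffs[k] if k < len(ts_coeffs) else 0.0
        tails.append(1.0 - cumulative)      # Pr[service time > k slots]
    return tails

# Example: with T_e = 20 microseconds, Pr[service time > 0.005 s] is
# tails[int(0.005 / 20e-6)] = tails[250].
```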
7 EXTENSIONS OF THE ANALYTICAL RESULTS BY SIMULATION

The mathematical analysis undertaken above shows that DM performs service differentiation over the 802.11 protocol and offers better QoS guarantees to the highest priority stations. Nevertheless, the analysis was restricted to two traffic categories. In this section, we first generalize the results by simulation to more traffic categories. Then, we consider a simple multi-hop scenario and evaluate the performance of the DM policy when the stations belong to different broadcast regions.
7.1 Extension of the analytical results
In this section, we consider n stations contending for the channel within the same broadcast region. The n stations belong to 5 traffic categories, where n = 5m and m is the number of stations in each traffic category. A traffic category Ci is characterized by a delay bound Di, and Dij = Di − Dj is the difference between the deadline values of category Ci and category Cj stations. We have:

Dij = (i − j) K    (57)

where K is the deadline multiplicity factor, given by:

D(i+1),i = D(i+1) − Di = K    (58)

Indeed, when K varies, the deadline values of all the stations also vary. Stations belonging to traffic category Ci are numbered from Si1 to Sim.
In figure 11, we depict the throughput achieved by the different traffic category stations as a function of the minimum contention window size CWmin, such that CWmin is always smaller than CWmax, CWmax < 1024 and K = 1. Analytical and simulation results show that the throughput values increase with the station priority. Indeed, the station with the lowest delay bound has the maximum throughput.

Figure 11: Normalized throughput for different traffic category stations.

Moreover, figure 12 shows that stations belonging to the same traffic category have the same throughput. For instance, when n is set to 15 (i.e. m = 3), the three stations of the same traffic category have almost the same throughput.

Figure 12: Normalized throughput for different stations belonging to the same traffic category.

In figure 13, we depict the average service time of the different traffic category stations as a function of K, the deadline multiplicity factor. We notice that the average service time of the highest priority station decreases as the deadline multiplicity factor increases, whereas the average service time of the lowest priority station increases with K.

Figure 13: Average service time as a function of the deadline multiplicity factor K.

In the same way, the probabilistic service time bounds offered to S11 (the highest priority station) are better than those offered to station S51 (the lowest priority station). Indeed, the probability that the S11 service time exceeds 0.01 s is 0.3%, whereas the S51 service time exceeds 0.01 s with a probability of 36%.

Figure 14: Complementary service time distribution (W = 32, n = 8).
The above results generalize the analytical model results and show once again that DM performs service differentiation over 802.11 and offers better guarantees in terms of throughput, average service time and probabilistic service time bounds to flows with short deadlines.
7.2 Simple multi-hop scenario
In the above study, we considered contending stations belonging to the same broadcast region. In reality, stations may not be within one hop of each other, so a packet may go through several hops before reaching its destination. Hence, factors like routing protocols or interference may prevent the DM policy from working correctly. In the following, we evaluate the performance of the DM policy in a multi-hop environment. We consider the 13-node simple multi-hop scenario described in figure 15.

Figure 15: Simple multi-hop scenario.

Six flows are transmitted over the network. The flow packets are routed using the AODV protocol. Flows F1 and F2 are transmitted respectively by stations S1 and S2 with delay bounds D1 and D2, where D21 = D2 − D1 = 5 slots. Flows F3 and F4 are transmitted respectively by S12 and S4 and have the same delay bound. Finally, F5 and F6 are transmitted respectively by S5 and S6 with delay bounds D1 and D2, where D21 = D2 − D1 = 5 slots. Figure 16 shows that the throughput achieved by F1 is greater than the one achieved by F2. Indeed, both flows cross nodes 6 and 7, where F1 gets a higher priority to access the medium than F2 when the DM policy is used. We obtain the same results for flows F5 and F6. Flows F3 and F4 have almost the same throughput since they have equal deadlines.

Figure 16: Normalized throughput using the DM policy.

Figure 17 shows that the complementary service time distribution curves drop to 0 faster for flow F1 than for flow F2.

Figure 17: End-to-end complementary service time distribution.

The same behavior is obtained for flows F5 and F6, where F5 has the shortest delay bound. Hence, we conclude that even in a multi-hop
environment, the DM policy performs service differentiation over 802.11 and provides better QoS guarantees to flows with short deadlines.

8 CONCLUSION

In this paper, we first proposed to support the DM policy over the 802.11 protocol. To this end, we used a distributed scheduling algorithm and introduced a new medium access backoff policy. Then we proposed a mathematical model to evaluate the performance of the DM policy, considering n contending stations belonging to two traffic categories characterized by different delay bounds. Analytical and simulation results show that DM performs service differentiation over 802.11 and offers better guarantees in terms of throughput, average service time and probabilistic service time bounds to the flows having small deadlines. Moreover, DM achieves fairness between stations belonging to the same traffic category. We then extended by simulation the analytical results obtained for two traffic categories to a larger number of traffic categories. Simulation results showed that even if the contending stations belong to K traffic categories, K > 2, the DM policy offers better QoS guarantees to the highest priority stations. Finally, we considered a simple multi-hop scenario and concluded that factors like routing messages or interference do not alter the behavior of the DM policy: DM still provides better QoS guarantees to stations with short deadlines.

9 REFERENCES
[1] G. Bianchi: Performance Analysis of the IEEE 802.11 Distributed Coordination Function, IEEE J-SAC, Vol. 18, No. 3, (March 2000).
[2] H. Wu, Y. Peng, K. Long, S. Cheng, J. Ma: Performance of Reliable Transport Protocol over IEEE 802.11 Wireless LAN: Analysis and Enhancement, in Proceedings of IEEE INFOCOM'02, (June 2002).
[3] H. Zhai, Y. Kwon, Y. Fang: Performance Analysis of IEEE 802.11 MAC protocol in wireless LANs, Wireless Communications and Mobile Computing, (2004).
[4] I. Aad, C. Castelluccia: Differentiation mechanisms for IEEE 802.11, in Proceedings of IEEE INFOCOM 2001, (April 2001).
[5] IEEE 802.11 WG: Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specification, IEEE (1999).
[6] IEEE 802.11 WG: Draft Supplement to Part 11: Wireless Medium Access Control (MAC) and Physical Layer (PHY) specifications: Medium Access Control (MAC) Enhancements for Quality of Service (QoS), IEEE 802.11e/D13.0, (January 2005).
[7] J. Deng, R. S. Chang: A Priority Scheme for IEEE 802.11 DCF Access Method, IEICE Transactions on Communications, Vol. E82-B, No. 1, (January 1999).
[8] J. L. Sobrinho, A. S. Krishnakumar: Real-time traffic over the IEEE 802.11 medium access control layer, Bell Labs Technical Journal, pp. 172-187, (1996).
[9] J. Y. T. Leung, J. Whitehead: On the Complexity of Fixed-Priority Scheduling of Periodic, Real-Time Tasks, Performance Evaluation (Netherlands), pp. 237-250, (1982).
[10] K. Duffy, D. Malone, D. J. Leith: Modeling the 802.11 Distributed Coordination Function in Non-saturated Conditions, IEEE/ACM Transactions on Networking (TON), Vol. 15, pp. 159-172, (February 2007).
[11] L. Kleinrock: Queueing Systems, Vol. 1: Theory, Wiley Interscience, (1976).
[12] P. Chatzimisios, V. Vitsas, A. C. Boucouvalas: Throughput and delay analysis of IEEE 802.11 protocol, in Proceedings of the 2002 IEEE 5th International Workshop on Networked Appliances, (2002).
[13] P. E. Engelstad, O. N. Osterbo: Delay and Throughput Analysis of IEEE 802.11e EDCA with Starvation Prediction, in Proceedings of the IEEE Conference on Local Computer Networks, LCN'05, (2005).
[14] P. E. Engelstad, O. N. Osterbo: Queueing Delay Analysis of 802.11e EDCA, in Proceedings of the Third Annual Conference on Wireless On-demand Network Systems and Services (WONS 2006), France, (January 2006).
[15] P. E. Engelstad, O. N. Osterbo: The Delay Distribution of IEEE 802.11e EDCA and 802.11 DCF, in Proceedings of the 25th IEEE International Performance Computing and Communications Conference (IPCCC'06), USA, (April 2006).
[16] The network simulator ns-2, http://www.isi.edu/nsnam/ns/.
[17] Y. Xiao: Performance analysis of IEEE 802.11e EDCF under saturation conditions, in Proceedings of ICC, Paris, France, (June 2004).
[18] V. Kanodia, C. Li: Distributed Priority Scheduling and Medium Access in Ad-hoc Networks, ACM Wireless Networks, Vol. 8, (November 2002).
A FRAMEWORK FOR AN AGGREGATED QUALITY OF SERVICE IN MOBILE AD HOC NETWORKS Ash Mohammad Abbas, Khaled Mohd Abdullah Al Soufy Department of Computer Engineering Zakir Husain College of Engineering and Technology Aligarh Muslim University Aligarh – 202002, India
[email protected]
ABSTRACT Providing quality of service in an ad hoc network is a challenging task. In this paper, we discuss a framework for user perceived quality of service in mobile ad hoc networks. In our framework, we try to aggregate the impact of various quality of service parameters. Our framework is flexible and has a provision of providing dynamic quality of service. Further, an application may adapt from the required quality of service to that which can readily be provided by the network under a stressful environment. Our framework may adapt to the QoS desired by a source based on user satisfaction. Keywords: User perceived quality of service, quality of service aggregation, dynamic and adaptive, ad hoc networks.
1 INTRODUCTION
An ad hoc network is a cooperative engagement of a collection of mobile devices without the required intervention of a centralized infrastructure or a centralized access point. In the absence of centralized infrastructure, an ad hoc network may provide a cost-effective and a cheaper way of communication. Applications of an ad hoc network include battlefield communications, disaster recovery missions, convention centers, online classrooms, online conferences, etc. In an ad hoc network, there are no separate routers. As a result, the devices need to forward packets of one another towards their ultimate destinations. The devices possess limited radio transmission ranges, therefore, routes between any two hosts are often multihop. The devices are often operated through batteries whose power depletion may cause the device failure and/or associated link failures. Further, nodes may move about randomly and thus the topology of the network varies dynamically. There can be applications of an ad hoc network, where users expect a given level of quality of service (QoS) to be provided by the network. These applications may include multimedia streaming, exchanging geographical maps, etc. However, the requirements and the expectations of users about the level of the QoS to be provided by an ad hoc network may not be as high as those in case of a wired network or a wireless network that possesses a centralized infrastructure. Provision of QoS in an ad hoc network is a
challenging problem. The challenge is posed by the characteristics of ad hoc networks. In an ad hoc network, one cannot have a solution that either relies on an extensive amount of computation or consumes a significant amount of power and energy, because the resources of the devices forming such a network are scarce. Note that an ad hoc network is an on-the-fly network and should be self-organizing in nature. Frequent node and link failures, together with the mobility of nodes, give rise to a highly dynamic topology. The dynamically varying topology of the network makes it difficult to provide any hard QoS guarantees. However, as the users are aware that they are part of an ad hoc network, instead of expecting hard QoS guarantees, users may expect a soft QoS. In situations where an end-user can tolerate variations in the QoS, the user should have the flexibility to change the specifications of QoS parameters depending upon the extent of satisfaction with the QoS provided by the network. However, not much work has been done in this area. In [6], a framework for QoS aware service location is presented in the context of an ad hoc web server system. Therein, the authors assign integer priorities to QoS parameters. In [11], an adaptive QoS routing protocol is presented that reroutes the packets that faced QoS violations. The rest of this paper is organized as follows. In Section 2, we discuss the framework for an aggregated quality of service. In Section 3, we discuss how to adapt quality of service based on user feedback. In Section 4, we describe methods of QoS aggregation. Section 5 contains results and discussions. Finally, the last section is for conclusion and future directions.
Table 1: Symbols used for QoS parameters in AQS. Weight
Min/Max Value
Tolerance Limit (%)
Parameter
Weight
Min
Max
End-to-End Delay Delay Jitter
wD
Dmax
δD
0.40
3.0
-
wJ
J max
δJ
Bandwidth
wB
Bmin
δB
0.20 0.25
-
1.0 3.0
wR
Rmin
δR
0.2 2.0 -
Packet Delivery Ratio Route Lifetime
0.80
2.0
wL
Lmin
δL
End-to-End Delay Delay Jitter Bandwidth Packet Delivery Ratio Route Lifetime
Tolerance Limit (%) 1.0
10
2.0
discussions. Finally, the last section is for conclusion and future directions. 2
Table 2: Example 1 – Values of QoS parameters.
Parameter
A FRAMEWORK FOR AN AGGREGATED QOS
In this section, we describe a framework for an aggregated QoS. We call our framework as an aggregated QoS (AQS) framework because it aggregates the effect of many QoS parameters or metrics. In our framework, we consider a set of QoS parameters such as end-to-end delay, delay jitter, bandwidth, packet delivery ratio, route lifetime 1 . Our aggregation mechanism consists of assigning importance or weights to each of the parameters discussed in the previous subsection, and then computing a factor of aggregation. Let us first consider assignment of importance or weights 2 . To each of these parameters, we assign a weight wi , 0 ≤ wi ≤ 1 , in such a fashion so that n
∑ w =1 i
(1)
0.1 0.05
-
In what follows, we define aggregated QoS to incorporate the effect of the parameters mentioned above. Definition 1: Let there be n QoS parameters P1 , P2 ,..., Pn . Let Pk ,1 ≤ k ≤ n for bandwidth be defined as follows. PBW =
FileSize BW
(2)
where, FileSize denotes the size of file that is sent using the particular bandwidth. Let Pk ,1 ≤ k ≤ n for packet delivery ratio be defined as follows. PPDR = RΔ
(3)
where, Δ is the duration of time for which the particular packet delivery ratio 3 is desired. Having defined the constituent parameters in time units, we now define a parameter called Weighted Aggregate QoS (WAQ) as follows.
i =1
where wi represents the relative importance (or the weight) of parameter i. If there are n parameters, and each parameter i is assigned a weight 1/n then all QoS parameters are equally important. If wi > w j , i ≠ j , then parameter i is said to be relatively more important than parameter j. The notation used to represent information related to QoS parameters is as follows. We use the following symbols to represent QoS parameters− D: end-to-end delays, J: delay jitter, B: bandwidth, R: packet delivery ratio, L: route lifetime. For each of these parameters, the symbol w is used to represent relative importance or weight, prefix Δ is used to represent tolerance limit, and subscripts max/min are used to represent either the maximum or minimum value of the parameter. This notation is summarized in Table 1. _________________________ 1
There can be long list of QoS parameters, however, we consider only the parameters defined above. 2
Note that it is a different issue that who assigns the weights to the parameters and shall be discussed later in this paper.
UbiCC Journal - Volume 3
{( P +∑ {( P
WAQ = ∑
min
i
max j
} )w }
+ TLi ) wi
i
+ TL j
(4)
j
j
where 1 ≤ i, j ≤ n, i ≠ j , and Pi min is the value of ith parameter whose minimum value is specified, and Pjmax is the value of jth parameter whose maximum values is specified. Further, TLi and TL j are values of the corresponding tolerance limits for ith and jth parameters. Note that the unit of aggregation parameter, WAQ, is time. Further, it will have positive values and its values may come out to be greater than 1 depending the values of its constituent QoS parameters. In what follows, we define an aggregation factor, whose value lies between 0 and 1. _________________________ 3
In this way, we have converted all parameters to a single unit i.e. time. This is done so that we are able to aggregate the effect of different parameters on the QoS.
18
Figure 1: Adapting QoS through user feedback.

Definition 2: Let the QoS parameters, Pi_min and Pj_max, and their tolerance limits, TLi and TLj, be defined as in Definition 1. We define a factor that we call the Weighted Aggregate QoS Factor (WAQF) as follows:

    WAQF = WAQ / [ ∑_i (Pi_min + TLi) + ∑_j (Pj_max + TLj) ]       (5)

where WAQ is given by (4) as part of Definition 1. It is worth mentioning that WAQF has no unit, as it is simply a ratio bearing the units of time in both the numerator and the denominator of the expression defining it. In what follows, we discuss an example incorporating different weights and values of the QoS parameters mentioned above.
Example 1: Let the weights, min/max values, and tolerance limits assigned by the source for the different QoS parameters be as given in Table 2. In Example 1, the end user has given the highest relative importance to the end-to-end delay by assigning it a weight of 0.40. The maximum value of the delay is specified to be 3.0 milliseconds; however, the user may accept up to a tolerance limit of 1.0%. This means that the user may accept a value of the end-to-end delay of up to 3.03 milliseconds. The second parameter is delay jitter, for which the user has specified a weight of 0.20, a maximum value of 0.2 milliseconds, and a tolerance limit of 1.0%. This means that the user can accept a delay jitter of up to 0.202 milliseconds. The third parameter is the bandwidth reserved for the flow. The user has specified a weight of 0.25, a minimum bandwidth of 2.0 Mbps, and a tolerance limit of 3.0%. In other words, the user may accept a bandwidth down to 1.94 Mbps. The fourth parameter, the packet delivery ratio, is assigned a weight of 0.10, a minimum value of 80%, and a tolerance limit of 2.0%. This means that the user may accept a packet delivery ratio down to 78.4%. The last parameter is the route lifetime 4, which is assigned a weight of 0.05, a minimum value of 10 milliseconds, and a tolerance limit of 2.0%. In other words, the user may accept a route lifetime down to 9.8 milliseconds.
____________________________
4 There can be a debate on whether one should consider the minimum value or the maximum value of the route lifetime (i.e., the time to route failure). The user would prefer to specify its minimum value, so that there are no route failures during packet transmissions. However, from the point of view of the network, one would like to maximize the time to route failure.
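To make Definitions 1 and 2 and Example 1 concrete, the short sketch below evaluates WAQ (4) and WAQF (5) for inputs in the style of Example 1. It is only an illustration under our own assumptions: the function names are ours, tolerance limits are applied multiplicatively (TL = value × tolerance%), and bandwidth is converted simply via (2) rather than via (6), so the numbers it prints are not the paper's Example 1 figures (those are computed in Section 5).

    # Illustrative sketch of WAQ (4) and WAQF (5); helper names and the
    # handling of tolerance limits (TL_i = value * tolerance%) are our
    # assumptions, not definitions from the paper.

    def waq(params, weights):
        """params: list of (value_in_time_units, tolerance_percent) tuples."""
        return sum((v + v * tol / 100.0) * w for (v, tol), w in zip(params, weights))

    def waqf(params, weights):
        """WAQF = WAQ / sum of (value + tolerance) terms, a dimensionless ratio."""
        denom = sum(v + v * tol / 100.0 for v, tol in params)
        return waq(params, weights) / denom

    # Example-1-style inputs, already converted to time units as in Definition 1:
    # P_BW = FileSize / BW and P_PDR = R * Delta (here FileSize = 1, Delta = 1).
    file_size, bw, r, delta = 1.0, 2.0, 0.80, 1.0
    params = [
        (3.0, 1.0),              # end-to-end delay (ms)
        (0.2, 1.0),              # delay jitter (ms)
        (file_size / bw, 3.0),   # bandwidth converted via (2)
        (r * delta, 2.0),        # packet delivery ratio converted via (3)
        (10.0, 2.0),             # route lifetime (ms)
    ]
    weights = [0.40, 0.20, 0.25, 0.10, 0.05]
    assert abs(sum(weights) - 1.0) < 1e-9    # constraint (1)

    print(waq(params, weights), waqf(params, weights))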
Table 3: Example 2 – Values of QoS parameters.
Parameter               Weight   Min    Max   Tolerance Limit (%)
End-to-End Delay        0.30     -      2.0   1.0
Delay Jitter            0.10     -      0.3   1.0
Bandwidth               0.45     2.0    -     3.0
Packet Delivery Ratio   0.10     0.80   -     2.0
Route Lifetime          0.05     10     -     2.0
Table 4: Default values of QoS parameters.
Parameter               Value   Tolerance Limit (%)
End-to-End Delay        5.00    2.0
Delay Jitter            0.01    2.0
Bandwidth               10.0    2.0
Packet Delivery Ratio   0.90    2.0
Route Lifetime          10.0    2.0
Table 5: Sets of weights assigned to QoS parameters.
Set No.   Set of Weights
S1        0.6  0.1  0.1  0.1  0.1
S2        0.1  0.6  0.1  0.1  0.1
S3        0.1  0.1  0.6  0.1  0.1
S4        0.1  0.1  0.1  0.6  0.1
S5        0.1  0.1  0.1  0.1  0.6
Table 6: The QoS parameter WAQ and the factor WAQF as a function of bandwidth.
Bandwidth   WAQ       WAQF
1           3.40964   0.2
2           3.30760   0.2
3           3.27359   0.2
4           3.25658   0.2
5           3.24638   0.2
6           3.23957   0.2
7           3.23471   0.2
8           3.23107   0.2
9           3.22824   0.2
10          3.22597   0.2
Table 7: The QoS parameter WAQ and the factor WAQF as a function of FileSize.
FileSize   WAQ       WAQF
1          3.22957   0.2
2          3.24638   0.2
3          3.26678   0.2
4          3.28719   0.2
5          3.30760   0.2
6          3.32801   0.2
7          3.34842   0.2
8          3.36883   0.2
9          3.38923   0.2
10         3.40964   0.2
Table 8: The QoS parameter WAQ and the factor WAQF as a function of Δ.
Δ    WAQ       WAQF
1    3.22597   0.2
2    3.40957   0.2
3    3.59317   0.2
4    3.77677   0.2
5    3.96037   0.2
6    4.14397   0.2
7    4.32757   0.2
8    4.51170   0.2
9    4.69477   0.2
10   4.87837   0.2
Table 9: The QoS parameter WAQ and the factor WAQF as a function of sets of weights.
Set of Weights   WAQ       WAQF
S1               4.06298   0.251892
S2               1.61788   0.100304
S3               1.66400   0.103163
S4               2.07198   0.128457
S5               6.71298   0.416184
Let the file size be 1 Mb and the maximum bandwidth be Bmax = X Mbps. As a result, 1/X seconds is consumed in sending a file of the specified size over the specified bandwidth. For a variation or tolerance limit of Δ in a given bandwidth X, the value of the parameter Pk for bandwidth is

    PBW = 1 / (X − X Δ)                                   (6)

Therefore, for a tolerance limit of 3.0% in the value of the bandwidth, which is 2.0 Mbps (as shown in Table 2 for Example 1), the value of PBW is 1 / (2 − 2 × 0.03) = 0.515463917. Further, in the case of the packet delivery ratio, which is given by (3), assume that Δ is 1 time unit, so that PPDR is determined by the packet delivery ratio only, i.e., by R. For Example 1, the value of the aggregation parameter, WAQ, comes out to be 1.948065979. The value of the denominator is 14.69946392. The value of the aggregation factor, WAQF, comes out to be 0.132526328. Note that in Example 1, the end-to-end delay is assigned the highest weight or priority. This is an example of a delay-sensitive application. Depending upon the weight assigned to a QoS parameter, there can be other types of applications as well. For that, consider another example with different weights and different values of the corresponding QoS parameters.
Example 2: Let the weights, min/max values, and tolerance limits assigned by the source for the different QoS parameters be as given in Table 3. In Example 2, the bandwidth is assigned the highest relative importance. This is an example of a bandwidth-sensitive application. The next relatively
important parameter is the end-to-end delay. Other parameters may not be so important for this particular application and are therefore assigned relatively low weights. For the rest of the parameters, we assume a scenario similar to that of Example 1. In Example 2, PBW = 0.515463917. The value of the aggregation parameter, WAQ, comes out to be 1.447258763. The value of the denominator is 15.233. The value of the aggregation factor, WAQF, comes out to be 0.095008124. Note that the aggregation factor is 28.31% smaller than that in Example 1. The reason is that, in the case of WAQ, the contribution of the end-to-end delay component has become half of that in Example 1, and the contribution of the delay jitter has also decreased; the total decrease due to these two parameters is approximately 0.6. The contribution due to bandwidth has increased by approximately 0.103. As a result, the net effect is a decrease of approximately 0.5 in the value of WAQ. The aggregation factor, WAQF, has changed accordingly. In what follows, we discuss a framework for adapting QoS through user feedback.

3   ADAPTIVE QOS THROUGH USER FEEDBACK
Fig. 1 shows a framework for adapting QoS using user feedback. The steps in our framework are as follows.
• The source or the user specifies the values of the different QoS parameters with their minimum/maximum values and tolerance limits.
• The QoS parameters, along with their respective values, are given to the QoS Manager.
• If the QoS Manager receives the QoS parameters for a packet of the flow for the first time, it sorts the parameters according to their relative importance or weights. After that, the QoS Manager calls a protocol or a method to take care of the parameter that is relatively the most important. It then takes measures to take care of the next relatively important parameter (if possible), and so on.
• The packet is then delivered to the destination according to its QoS specifications, and the source is informed accordingly.
• If the source or the user is satisfied with the QoS of the packet delivered, the next packet is sent to the destination.
• If the source is not satisfied with the QoS of the packet delivered, it informs the QoS Manager about the change it wishes to have
in the QoS. The QoS Manager then tries to adjust the QoS parameters accordingly.
Note that the source specifies the values of the QoS parameters, their minimum and/or maximum values, their relative importance, and a tolerance limit for each parameter. As mentioned above, the QoS Manager sorts the parameters according to their relative importance. The QoS Manager calls appropriate methods or protocols for providing the QoS. Which protocol has to be called first depends upon the relative importance of the parameters; depending upon the relative importance of the QoS parameters, different methods are required to be called. Further, the functionality of the QoS Manager 5 resides at every node in the network. As mentioned earlier, we confine ourselves to wireless networks using 802.11 whose nodes operate in ad hoc mode. However, the same can be extended, with some modifications, to other types of wireless networks as well. One might ask why we provide QoS with user feedback at all, given that nodes in ad hoc networks have limited resources. It should be noted that the provision of user feedback in our approach requires only a few additional computations. In general, the energy and power consumed during transmissions is significantly larger than that consumed by computations. We believe that the provision of user feedback will not consume a significant amount of energy; rather, it adds a few more computations and is feasible with current technological trends. In what follows, we describe the methods and protocols that may need to be called for a QoS specification.

4   METHODS FOR AGGREGATED QOS

In this section, we present the methods and protocols that the QoS Manager needs to call for providing the desired QoS. Note that the input to the QoS Manager is a set of parameters with their respective values, tolerance limits, and relative importance. The first and foremost task that the QoS Manager needs to perform is sorting of the QoS parameters according to their relative importance or weight. Another input is the user feedback, in the case when some form of QoS has been provided to the flow but the user is not satisfied with it. Once the QoS parameters have been sorted, appropriate methods and/or protocols are required to be called to provide the given level of QoS. Algorithm 1 describes the actions taken by the QoS Manager. When a source needs to send the packets of a flow, it sends a qosEnquiry packet to the QoS Manager along with the specification of the QoS parameters. The QoS Manager extracts the values of the QoS parameters, the tolerance limits, and the relative importance. If the sum of all relative importance values or weights is greater than 1, the QoS specifications are referred back to the source. Otherwise, the QoS Manager marks each parameter as either "insignificant" or "significant". We assume that if the weight assigned to a particular parameter is less than 1/(2k), where k is the number of QoS parameters, then the parameter is insignificant; otherwise it is significant. If a parameter is insignificant, the QoS Manager need not bother about it. For all significant parameters, the weights of the parameters are arranged in descending order. The first parameter in this order is the most significant parameter. An appropriate protocol is called to provide QoS for the most significant parameter so obtained. A qosReply packet is then sent to the source by the QoS Manager. Upon receiving a qosReply packet, the source sends a TestQoS packet. The TestQoS packet is delivered to the destination, and a flag named statusPi is set to "done" for the QoS parameter Pi. The source, if satisfied, sends the extent up to which it is satisfied. If the satisfaction of the source falls below 50%, the source shall specify its desired QoS parameters again, and the above process shall continue. When the user is satisfied with the QoS provided to the TestQoS packets, he/she will start sending actual packets.
________________________
5 The QoS Manager component not only manages the QoS; it is also responsible for other functions such as resource reservation, call admission control, and negotiation of the QoS.
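The qosEnquiry / qosReply / TestQoS exchange described above can also be viewed as a small control loop. The sketch below is a hypothetical rendering of it: the interface names (handle_enquiry, send_test_packet, adjust) and the satisfaction callback are our own illustrative placeholders, and the three-attempt limit anticipates the restriction discussed after Algorithm 1.

    # Hypothetical sketch of the feedback loop between a source and the QoS
    # Manager; method names are illustrative only, not part of the paper.

    MAX_ATTEMPTS = 3              # attempts are limited to 3 (see below)
    SATISFACTION_THRESHOLD = 0.5  # below 50 % the source re-specifies its QoS

    def negotiate_qos(qos_spec, qos_manager, user_satisfaction):
        """qos_spec: dict of parameter -> (weight, value, tolerance)."""
        for attempt in range(1, MAX_ATTEMPTS + 1):
            reply = qos_manager.handle_enquiry(qos_spec)   # qosEnquiry -> qosReply
            if reply is None:                              # e.g. weights summed to > 1
                return "referred back to source"
            qos_manager.send_test_packet(qos_spec)         # TestQoS probe
            score = user_satisfaction(attempt)             # feedback in [0, 1]
            if score >= SATISFACTION_THRESHOLD:
                return "start sending actual packets"
            qos_spec = qos_manager.adjust(qos_spec, score) # retry with adjusted spec
        return "accept the best of the three attempts"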
Figure 2: The QoS parameter WAQ and the factor WAQF as functions of bandwidth.
Figure 3: The QoS parameter WAQ and the factor WAQF as functions of FileSize.
__________________________________________________________________________________________
Algorithm 1: Marking of significant parameters by the QoS Manager
__________________________________________________________________________________________
Let i ∈ P = {Delay, Jitter, Bandwidth, Delivery Ratio, Lifetime}, 1 ≤ i ≤ k, k = |P|, where |.| denotes cardinality.
For parameter i, let wi: weight, Vi: value, δi: tolerance limit.
 1: if UserSatisfactionPi ≤ 0.5 then
 2:   statusPi = "notdone"
 3:   Get wi, Vi, δi for all i such that statusPi = "notdone"
 4:   if ∑i wi > 1 then
 5:     Refer the QoS parameters back to the source
 6:   else if 0 ≤ wi ≤ 1/(2k) then
 7:     Mark Pi: "insignificant"
 8:   else
 9:     Mark Pi: "significant"
10:   end if
11:   For all Pi that are marked "significant", arrange wi in descending order
12:   For max(wi), call a protocol to provide the QoS for parameter Pi
13:   Deliver the packet to the destination
14:   Set statusPi = "done"
15: end if
__________________________________________________________________________________________
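A direct transliteration of Algorithm 1 might look as follows. This is only a sketch: the data structures, the user-satisfaction callback, and the protocol dispatch are placeholders introduced for illustration, not part of the paper.

    # Sketch of Algorithm 1: mark significant parameters, then serve them in
    # descending order of weight. Protocol dispatch is a placeholder.

    def mark_and_serve(params, user_satisfaction, provide_qos):
        """params: dict name -> {'w': weight, 'v': value, 'tol': tolerance, 'status': str}"""
        k = len(params)
        pending = {n: p for n, p in params.items() if user_satisfaction(n) <= 0.5}
        for p in pending.values():
            p['status'] = "notdone"
        if sum(p['w'] for p in pending.values()) > 1:
            return "referred back to source"               # line 5 of Algorithm 1
        for n, p in pending.items():                       # lines 6-10
            p['mark'] = "significant" if p['w'] > 1.0 / (2 * k) else "insignificant"
        significant = {n: p for n, p in pending.items() if p['mark'] == "significant"}
        # lines 11-14: most significant parameter first
        for name in sorted(significant, key=lambda n: significant[n]['w'], reverse=True):
            provide_qos(name, significant[name])           # call an appropriate protocol
            significant[name]['status'] = "done"
        return "done"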
There is an issue of how long a user should be allowed to negotiate, so that time is not wasted in setting up the desired QoS. This depends upon how many attempts are made for a QoS parameter. We limit these attempts to 3, irrespective of the user satisfaction. After the third attempt, we assume that the most significant parameter has been taken care of. In what follows, we discuss some empirical results.
Figure 4: The QoS parameter WAQ and the factor WAQF as functions of Δ.
Figure 5: The QoS parameter WAQ and the factor WAQF versus sets of weights.
5   RESULTS AND DISCUSSION
Let us first discuss some results pertaining to the effect of the aggregation of QoS parameters.

5.1 Effect of Aggregation
We defined the Weighted Aggregate QoS parameter, WAQ, and the Weighted Aggregate QoS Factor, WAQF, in Section 2. For the results in this subsection, the values of the different QoS parameters and their tolerance limits are shown in Table 4. The default weight assigned to each QoS parameter is equal and is 0.20. The default value of FileSize is 1 unit (e.g., 1 Mb if the bandwidth is expressed in Mbps). The default value of Δ, the time duration for which a desired packet delivery ratio is needed, is 1 time unit.
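One property of (5) is worth stating explicitly here because it explains the constant WAQF columns in the tables that follow: when every weight equals the common value 1/n, WAQF reduces to that common weight regardless of the parameter values. A short check, using placeholder values roughly in the spirit of the Table 4 defaults (the exact WAQ figures in Tables 6-9 additionally depend on conventions such as (6) that are not reproduced here):

    # With equal weights w_i = 1/n, WAQF = sum(w*(P+TL)) / sum(P+TL) = 1/n exactly,
    # whatever the (P + TL) terms are. Values below roughly follow Table 4 defaults.
    terms = [5.0 * 1.02, 0.01 * 1.02, (1.0 / 10.0) * 1.02, 0.90 * 1.02, 10.0 * 1.02]
    weights = [0.20] * 5
    waq = sum(w * t for w, t in zip(weights, terms))
    waqf = waq / sum(terms)
    print(round(waqf, 6))   # 0.2, independent of the bandwidth, FileSize or Delta used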
Table 6 shows the empirical values of the QoS aggregation parameter, WAQ, and of the QoS aggregation factor, WAQF, as functions of the bandwidth. It is observed from Table 6 that WAQ decreases with an increase in the bandwidth; the decrease in WAQ is not linear (see Figure 2). The values of the QoS factor remain constant. The reason is that, for equal weights assigned to the individual QoS parameters, the amount of change in the numerator and in the denominator of the QoS aggregation factor, WAQF (as defined by (5)), is almost the same. Hence, WAQF remains the same and is equal to the weight assigned to each individual QoS parameter.
Table 7 shows the empirical values of WAQ and WAQF as functions of FileSize. It is observed from Table 7 that WAQ increases with an increase in FileSize; the increase in WAQ is linear (see Figure 3).
Table 8 shows the empirical values of WAQ and WAQF as functions of Δ. We mentioned earlier that Δ is the time duration for which the specified packet delivery ratio is required. It is observed from Table 8 that WAQ increases with an increase in Δ; the increase in WAQ is linear (see Figure 4). The values of the QoS factor, WAQF, remain constant for the reason mentioned above.
The sets of weights assigned to the QoS parameters, in the order <end-to-end delay, delay jitter, bandwidth, packet delivery ratio, route lifetime>, are shown in Table 5. It is observed that the values of WAQ and of the aggregation factor WAQF are the largest for the set of weights S5 and the smallest for the set of weights S2 (see Figure 5). The reason is that, in the case of S5, the contribution of the largest-valued QoS parameter, i.e., the route lifetime, is multiplied by the largest weight in S5, whereas the situation in the case of S2 is just the reverse. Note that the next largest values of WAQ and WAQF are obtained for the set of weights S1; in that case the contribution of the next largest-valued parameter, i.e., the end-to-end delay, is multiplied by the largest weight in S1.

5.2 Effect of User Feedback
Recall that, once the significant parameters are found, the QoS Manager selects and calls appropriate protocols depending upon the QoS parameters. A qosReply packet is sent to the source by the QoS Manager. Upon receiving a qosReply packet, the source sends a TestQoS packet. The TestQoS packet is delivered to the destination and a flag named statusPi is set to "done" for the QoS
parameter Pi. The source, if satisfied, sends the extent up to which it is satisfied. If the satisfaction of the source falls below 50%, the source shall specify its desired QoS parameters again, and the above process shall continue. When the user is satisfied with the QoS provided to the TestQoS packets, he/she will start sending actual packets.
The TestQoS packets are simply overheads 6. The number of these packets depends upon the extent of user satisfaction 7. However, since there is a cost associated with user-perceived QoS, the number of attempts made by the user for a desired level of QoS is restricted to 3. The user may select any one level of QoS that suits his/her needs out of those provided by these attempts. Note that, as mentioned earlier, the nodes are operating in ad hoc mode and the type of network we are interested in is supposed to use the 802.11 standards. A consequence of having user feedback is that some energy, which could have been used for some other useful task, is consumed by the TestQoS packets. However, we would like to point out that these packets are very small in size and that we have limited them to only a few (say 3). Although there is some additional energy consumption, that is the price to be paid for having user feedback about the QoS, and we believe it would not be too large to afford.

5.3 Probability of User Satisfaction
In order to evaluate the probability of user satisfaction, let us assume a simple scenario in which ps is the probability that the user is satisfied after an attempt has been made, i.e., after sending a TestQoS packet. The probability that the user is not satisfied after sending a TestQoS packet is then 1 − ps. Let us assume that each attempt is made independently. Then the probability that the user is satisfied in exactly k of the n attempts made is governed by

    Pk_US = C(n, k) ps^k (1 − ps)^(n−k)                    (7)
The above equation is an expression of the binomial distribution for Bernoulli trials.
________________________
6 There will be overheads for the protocol that is used to get the desired level of QoS; however, those overheads will depend upon the specific protocol.
7 Although the level of user satisfaction is not a directly measurable quantity, depending upon the feedback received by letting end-users answer a set of questions, one may get a feel for the level of user satisfaction. Either the number of questions successfully answered by an end-user, or some other measure, may be taken as the user satisfaction. Therefore, let us assume that a user satisfaction of 50% means that 50% of the questions have been successfully answered by the end-user.
The probability of getting exactly one success (i.e., the user is satisfied in exactly one of the attempts) is given by

    P1_US = C(3, 1) ps (1 − ps)^2                          (8)
We want the user to be satisfied in at least one of the three attempts made. The probability that the user is satisfied in at least one of the three attempts is given by

    ψ = ∑_{k=1}^{3} C(3, k) ps^k (1 − ps)^(3−k)            (9)
For ps = 0.5, ψ = 0.875. This means that if the probability of success (i.e., the probability that the user is satisfied after an attempt) is assumed to be 0.5, then the probability that the user is satisfied in at least one of these attempts is 0.875. As mentioned earlier, the user may select the QoS that best fits his/her needs, if he/she wishes to do so.
Note that, in this paper, wherever we refer to the aggregation of QoS parameters, we mean the aggregation of only those parameters whose combined effect can be computed in a reasonable amount of time (i.e., in polynomial time). The parameters considered in this paper are only for the purpose of example. One has to see which parameters can be computationally combined before actually aggregating their effect. In case one wishes to see the effect of parameters that cannot be combined computationally, one may use a technique called QoS filtering: if one wishes to seek QoS based on parameters (A1, A2, ..., An), one should first seek the QoS based on one of these parameters, say A1, and then, on the resulting set for A1, seek QoS based on A2, and so on. Further, which parameter should be considered first, i.e., the order of the parameters in the QoS filtering, will depend upon the relative importance of the parameters considered.
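Returning to (7)-(9), a quick numeric check can be made as follows; the helper function is ours and simply assumes independent attempts, as stated above.

    from math import comb

    def prob_at_least_one(ps, n=3):
        # psi in (9): probability of at least one satisfactory attempt out of n
        return sum(comb(n, k) * ps**k * (1 - ps)**(n - k) for k in range(1, n + 1))

    print(prob_at_least_one(0.5))     # 0.875, as quoted in the text
    print(1 - (1 - 0.5) ** 3)         # equivalent closed form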
6   CONCLUSION
Providing QoS in a mobile ad hoc network is a challenging task due to the inherent characteristics of such a network. In this paper, we proposed a framework for the provision of QoS in a mobile ad hoc network. Our contributions are as follows.
• We proposed that one can aggregate the effect of QoS parameters depending upon the importance or weights assigned to each parameter.
• In our framework, we tried to incorporate the level of user satisfaction with the QoS provided by the network.
• We proposed that there can be a trade-off
between the QoS expected by the end-user and the QoS that may be provided by the network. However, we leave it to the end-user to decide on this trade-off depending upon his/her requirements.
• We discussed the overheads incurred in adapting the QoS to the level of expectation of the user.
In summary, we discussed a framework for an aggregated and dynamic QoS based on user satisfaction. Further validation of the framework forms our future work.

7   REFERENCES
[1] S.C. Lo, G. Lee, W.T. Chen, J.C. Liu, "Architecture for Mobility and QoS Support in All-IP Wireless Networks", IEEE Journal on Selected Areas in Communications (JSAC), vol. 22, no. 4, May 2004.
[2] B. Li, L. Li, B. Li, K.M. Sivalingam, X.R. Cao, "Call Admission Control for Voice/Data Integrated Cellular Networks: Performance Analysis and Comparative Study", IEEE Journal on Selected Areas in Communications (JSAC), vol. 22, no. 4, May 2004.
[3] N. Passas, E. Zervas, G. Hortopan, and L. Merakos, "A Flow Rejection Algorithm for QoS Maintenance in a Variable Bandwidth Wireless IP Environment", IEEE Journal on Selected Areas in Communications (JSAC), vol. 22, no. 4, May 2004.
[4] Dutta, W. Chen, O. Altintas, H. Schulzrinne, "Mobility Approaches for All-IP Wireless Networks", Proceedings of 6th World Multi Conference on Systematics, Cybernetics and Informatics (SCI), July 2002.
[5] M. Ghaderi, R. Boutaba, "Towards All-IP Wireless Networks: Architectures and Resource Management Mechanism", InderScience International Journal on Wireless and Mobile Computing (IJWMC), 2005.
[6] J. Liu, V. Issarny, "QoS-Aware Service Location in Mobile Ad hoc Networks", Proceedings of IEEE International Conference on Mobile Data Management (MDM), pp. 224-235, 2004.
[7] H. Zhai, X. Chen, Y. Fang, "How Well Can the IEEE 802.11 Wireless LAN Support Quality of Service?", IEEE Transactions on Wireless Communications, vol. 4, no. 6, November 2005.
[8] C.S.R. Murthy, B.S. Manoj, Ad Hoc Wireless Networks: Architectures and Protocols, Pearson Education, New Delhi, 2005.
[9] H.J. Chao, X. Guo, Quality of Service Control in High-Speed Networks, John Wiley, New York, 2002.
[10] Zanella, D. Miorandi, S. Pupolin, P. Raimondi, "On Providing Soft-QoS in Wireless Ad hoc Networks", Proceedings of International
Symposium on Wireless Personal Multimedia Computing (WPMC), October 2003.
[11] V. Kone, S. Nandi, "QoS Constrained Adaptive Routing Protocol for Mobile Ad hoc Networks", Proceedings of 9th IEEE International Conference on Information Technology (ICIT), pp. 40-45, December 2006.
[12] M. Mirhakkak, N. Schult, D. Thomson, "Dynamic Quality-of-Service for Mobile Ad hoc Networks", Technical Report, Mitre Corporation, http://www.mitrecorporation.net/work/tech_papers/tech_papers_00/thomson\mp_adhoc/thomson_adhoc.pdf, 2000.
[13] B. Li, "QoS-Aware Adaptive Services in Mobile Ad hoc Networks", Proceedings of IEEE International Workshop on Quality of Service (IWQoS), pp. 251-268, 2001.
[14] A.M. Abbas, K.A.M. Soufi, "LANM: Lifetime Aware Node-Disjoint Multipath Routing for Mobile Ad hoc Networks", Proceedings of IET International Conference on Information and Communication Technology in Electrical Sciences (ICTES), 2007.
[15] S. Nelakudity, Z.L. Zhang, R.P. Tsang, D.H.C. Du, "Adaptive Proportional Routing: A Localized QoS Routing Approach", IEEE/ACM Transactions on Networking, vol. 10, no. 6, December 2002.
[16] Y.S. Chen, Y.C. Tseng, J.P. Sheu, P.H. Quo, "An On-demand, Link-State, Multipath QoS Routing in A Wireless Mobile Ad hoc Network", Elsevier Journal on Computer Communications, 2003.
[17] S. Wu, K.Y.M. Wong, B. Li, "A Dynamic Call Admission Policy With Precision QoS Guarantee Using Stochastic Control for Mobile Wireless Networks", IEEE/ACM Transactions on Networking, vol. 10, no. 2, pp. 257-271, April 2002.
Impact of Node Density on Cross Layer Design for Reliable Route Discovery in Mobile Ad-hoc Networks

B. Ramachandran, Dept. of Electronics & Communication Engg., S.R.M. University, Chennai – 603 203, [email protected]
S. Shanmugavel, Dept. of Electronics & Communication Engg., Anna University, Chennai – 600 025, [email protected]
Abstract: The mobile nature of nodes and the dynamic topology of Mobile Ad-hoc Networks (MANETs) lead to route failures and require the transmission of control packets. It is important to reduce the number of control packets to save resources and to improve the overall performance of the network. Ad-hoc On-demand Distance Vector (AODV) is appealing as an efficient on-demand routing protocol because of its low routing overhead and high performance. However, AODV is not robust against topology variations, as it uses weak links due to the long hops introduced by the shortest path metric. In this paper we propose a mobility adaptive cross layer design to enhance the performance of the AODV routing protocol by establishing stable routes. Adaptive decision making on Route Request (RREQ) packet forwarding, according to the speed of the mobile nodes, results in stable routes. We also test the impact of node density in the network on our algorithm, to determine when to invoke our cross layer design in mobile ad-hoc networks. To demonstrate the efficiency of our protocol and its impact on network connectivity, we present simulations using the GloMoSim network simulator.
Keywords: Mobile Ad-hoc Networks, AODV, Routing Overhead, Stable Route, and Cross Layer Design.
I. Introduction
Recent growing interest in the potential commercial usage of MANETs has led to serious research in this energy and bandwidth constrained network. It is essential to reduce control packet overhead, as it consumes resources. Routing in MANETs is non-trivial. Since mobile nodes have limited transmission capacity, they mostly intercommunicate by multi-hop relay. Multi-hop routing is challenged by limited wireless bandwidth, low device power, dynamically changing network topology, high vulnerability to failure, and many more factors. To meet those challenges, many routing protocols have been proposed for MANETs [1]. They are categorized as proactive and reactive protocols. Proactive protocols such as DSDV periodically send routing control packets to neighbors for updating routing tables. Reactive routing protocols such as AODV and DSR send control packets only when route discovery or route maintenance is done. When a route is created or repaired, the control packets, particularly the RREQ packets flooded by the source, are broadcast network-wide. Moreover, the number of control packets increases rapidly with network size and topology changes.
The primary goal of an ad-hoc network routing protocol is correct and efficient route establishment between a pair of nodes so that messages may be delivered in a timely manner. Route construction should be done with a minimum of overhead and bandwidth consumption. The on-demand routing protocols create a route only when desired by the source node. When a node requires a route to a destination, it initiates a route discovery process within the network. This process is completed once a route is found or all possible route permutations have been examined. Once a route has been established, it is maintained by a route maintenance procedure until the route is no longer desired.
The Ad-hoc On-Demand Distance Vector routing protocol builds on the Destination Sequenced Distance Vector (DSDV) algorithm. It is an improvement on DSDV because it typically minimizes the routing load by creating routes on a demand basis. AODV [2] is a pure on-demand route acquisition system, since nodes that are not on a selected path do not maintain routing information or participate in routing table exchanges. When a source node desires to send a message to some destination and does not already have a valid route to that destination, it initiates a "route discovery" process to locate the destination. It broadcasts a route request packet to its neighbours, which then forward it to their neighbours, and so on, until either the destination or an intermediate node with a "fresh enough" route to the destination is located. During the process of forwarding the RREQ, the intermediate nodes record in their route tables the address of the neighbor from which the first copy of the broadcast packet is received, thereby establishing a reverse path. If additional copies of the same RREQ are later received, these packets are discarded. Once the RREQ reaches the destination or an intermediate node with a fresh enough route, the destination /
intermediate node responds with a unicast route reply (RREP) packet back to the neighbor from which it first received the RREQ. (The route maintenance process and other details of AODV are not considered here, as they are out of the scope of this paper.) AODV prefers longer hops to form the shortest path, which in turn creates routes with weaker links. The presence of node mobility may therefore induce route failures (link failures) frequently. Many studies have shown that the on-demand approach is relatively efficient under a wide range of scenarios. But, when seen in isolation, the route discovery component is the major bottleneck in on-demand protocols. Since route discovery is done via network-wide flooding, it incurs significant routing overhead and consumes considerable network resources. Moreover, a longer distance between intermediate nodes on the route raises the route maintenance cost, reduces the packet transmission rate (due to increased packet loss), and induces frequent route failures [3].
In our previous work, we proposed a cross layer design extension to AODV in order to form stable routes. It reduces route failures and hence keeps routing overheads as low as possible, at the cost of lengthy routes with more hops. In this paper, we go further in enhancing AODV performance by using a mobility based adaptive cross layer design to optimize the trade-off between route stability and number of hops. Our objective is to form reliable routes in order to reduce the number of routing control packets, and thus conserve network resources. The proposed mobility adaptive cross layer design couples the route discovery process with physical layer related received signal strength information and the speed of the mobile nodes to build stable and optimal routes. As these constraints on received signal strength and node speed will certainly have an impact on network connectivity, we also study the suitability of our algorithm under various node density levels.
The remainder of this paper is organized as follows. In section II, we present the related work and emphasize the need for cross layer design. Section III describes the proposed mobility adaptive cross layer algorithm. The simulation model, results and analysis are presented in section IV. Finally, we conclude our discussion in section V.
II. Related Work
As an optimization of the current basic AODV, a novel stable adaptive enhancement for the AODV routing protocol is proposed in [4], which considers joint route hop count, node stability and route traffic load as a route selection metric. A QoS routing protocol based on AODV that provides a higher packet delivery ratio and lower routing overheads using a local repair mechanism is proposed in [5]. The received
signal strength changing rate is used to predict the link available time between two nodes to find out a satisfying routing path in [6], which reports improvement in route connection time. In [7], route fragility coefficient (RFC) is used as routing metric, to cause AODV to find a stable route. Mobility aware agents are introduced in ad-hoc networks and Hello packets of AODV protocol is modified in [8] to enhance mobility awareness of node to force it to avoid highly mobile neighbor nodes to be part of routes and ultimately to reduce the re-route discovery. On receiving the Hello Packet with GPS co-ordinates of the originator, mobility agent compares them with previous ones and hence has awareness about the mobility of the originator with references to itself. In [9], an AODV based protocol which uses a backbone network to reduce control overhead is proposed. The destination location is given by GPS and transmitted to source by the backbone network to limit the route search zone. But formation of an additional backbone network and GPS enabled service are extra burden for infrastructure-less ad-hoc network implementation. In order to cope with problems such as the poor performance of wireless links and mobile terminals including high error rate, power saving requirements and quality of service, a protocol stack that considers cross layer interaction is required [10]. Multi-hop routing, random movement of the nodes and other features unique to ad-hoc networks results in lots of control signal overhead for route discovery and maintenance. This is highly unacceptable in bandwidth-constrained ad-hoc networks. Usually the mobile devices have limited computing resources and severe energy constraints. Currently ad hoc routing protocols are researched to work mainly on the network layer. It guarantees the independency of the network layer. However each layer needs to do redundant processing and unnecessary packet exchange to get information that is easily available to other layers. This increases control signals resulting in wastage of resources such as bandwidth and energy. Due to these characteristics, there is lot of research work happening in the performance optimization of ad-hoc networks. However, most of the research works are based on optimization at individual layer. But optimizing a particular layer might improve the performance of that layer locally but might produce non-intuitive side effects that will degrade the overall system performance. Hence optimization across the layers is required through interaction among layers by sharing interlayer interaction metrics [11]. By using cross layer interaction, different layers can share locally available information. This is useful to design and standardize an adaptive architecture that can exploit the interdependencies among link, medium access, networking
27
and application protocols. The architecture where each layer of the protocol stack responds to the local variations as well as to the information from other layers is a major challenge [12]. Cross layer interaction schemes that can support adaptability and optimization of the routing protocols can discover and maintain the routes based on current link status, traffic congestion, signal strength etc. Usually routing layer is not concerned with signal strength related information handling. Lower layer takes care of signal strength related issues. Signal strength can be useful to know the quality of link to select for best effort packet forwarding and to achieve power conservation [13]. Only the link with signal strength above the threshold value can forward the packet. Routing algorithm can exploit signal characteristics related information for such benefits. In the previous work on Reliable AODV [14], we used signal strength information as interlayer interaction parameter. The strength (received power) of RREQ broadcast packet is passed to the routing layer by the physical layer. In the routing layer the signal strength is compared with a pre-defined threshold value. If the signal strength is greater than the threshold, the routing layer continues the route discovery process. Otherwise the Reliable AODV drops the RREQ packet. This leads to formation of routes with strong links where adjacent nodes are well within the transmission range of each other. So, even when the nodes are moving, the probability of route failure due to link breakages would be less with Reliable AODV, compared to the existing Basic AODV. The threshold value is set suitably with reference to the nodes’ transmission power which dictates the transmission range. The essence of Reliable AODV is illustrated in Fig.1 where the node A sends a RREQ which is received by its neighbors B and C. As the received signal strength at node B exceeds the threshold, it forwards the RREQ but the node C drops the RREQ because it is close to the transmission range boundary of node A and hence has a weak link to node A.
The fixed threshold value used is independent of the speed of the mobile nodes, and it may not be justified for low-speed nodes. Hence, in this new adaptive cross layer design, we propose adaptive decision making on RREQ forwarding in accordance with the speed of the mobile nodes, which is discussed in the following section.

Fig. 1 Reliable AODV

III. Mobility Adaptive Cross Layer Design
A routing protocol may let route / link failures happen, detected at the MAC layer by retransmission limits, but dealing with route failures in this reactive manner results in longer delays, unnecessary packet loss and significant overhead when an alternate new route is discovered. This problem becomes more visible when mobile nodes move at high speed, where route failure is more probable due to dynamic topology changes, and the negative impact of control packet overhead on network resource utilization is of more significance. We emphasize that routing should not only be aware of, but also be adaptive to, node mobility. Hence we propose a mobility adaptive cross layer design.
In this cross layer design, a node receiving a signal measures its strength and passes it from the physical layer to the routing layer. We also assume that information about the speed of the node is available to it. Hence the signal strength measured when receiving a RREQ packet, which is a MAC broadcast, is passed to the routing layer along with the speed information of the node. The AODV routing protocol's route discovery mechanism is modified to use the above two parameters in making a decision on forwarding / discarding the RREQ packet. The received signal strength is measured and used to calculate the distance between the transmitting and receiving nodes. The two ray propagation model is considered, where the loss coefficient value used is 2, as the maximum transmission range (dmax) of the nodes is 350 meters, which corresponds to 10 dBm transmission power. Hence the received signal strength can be expressed as

    Pr = Pt (λ / (4 π d))^2                                (1)

where Pt is the transmission power, λ is the wavelength in meters, and d is the distance between the transmitting and receiving nodes. Unity gain omnidirectional transmitting and receiving antennas are also considered. When the RREQ packet is presented to the node's AODV implementation together with the received signal strength information, the node calculates its distance from the transmitting node using

    d = sqrt(Pt / Pr) * (λ / (4 π))                        (2)

Next, the receiving node calculates its distance to the transmission range boundary of the
transmitting node using the known maximum transmission range (dmax) as

    db = dmax − d                                          (3)

The minimum time needed for a node to move out of the transmission range of the transmitting node depends on its distance from the boundary and on its speed, as given below:

    tb = db / Speed                                        (4)

If the source specifies a minimum route lifetime (tl) in its RREQ packet, any intermediate node receiving that packet can calculate its safe distance from the transmission range boundary, using its speed information, as

    ds = tl * Speed                                        (5)

It is now possible for the node to make a decision on forwarding the RREQ. That is, the decision rule inserted into the AODV route discovery mechanism is

    { If (db ≥ ds) then forward RREQ, else drop RREQ }     (6)

Hence the route discovery mechanism of the AODV routing protocol is made adaptive to the node speed, which leads to the formation of more stable (reliable) routes. The parameter tl, the minimum route lifetime, is application specific. This adaptive algorithm will reduce the hop count, and hence the average end-to-end delay of data packets, compared with those incurred with fixed signal strength threshold based RREQ processing. To show the efficiency of our new adaptive algorithm, simulation results are presented in the next section.
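As a summary of (1)-(6), the sketch below shows one way the forwarding decision could be coded. It is our own illustration, not the authors' GloMoSim implementation; the constant and function names are assumptions, and consistent linear power units are assumed for Pt and Pr.

    from math import pi, sqrt

    D_MAX = 350.0   # assumed maximum transmission range in metres (10 dBm, two-ray model)

    def forward_rreq(pt, pr, wavelength, speed, t_l):
        """Return True if the node should forward the RREQ, False to drop it.

        pt, pr     : transmit and received power (same linear units)
        wavelength : carrier wavelength in metres
        speed      : receiving node speed in m/s
        t_l        : minimum route lifetime requested by the source, in seconds
        """
        d = sqrt(pt / pr) * (wavelength / (4 * pi))   # distance from sender, eq. (2)
        d_b = D_MAX - d                               # distance to range boundary, eq. (3)
        d_s = t_l * speed                             # safe distance for lifetime t_l, eq. (5)
        return d_b >= d_s                             # decision rule, eq. (6)

A stationary node (speed 0) always forwards, while a fast-moving node close to the sender's range boundary drops the RREQ, which is exactly the behaviour the rule is meant to enforce.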
IV. Simulation Model and Result Analysis
The simulation for evaluating the problem is implemented within the GloMoSim library [15]. GloMoSim provides a scalable simulation environment for wireless network systems. It is designed using the parallel discrete event simulation capability provided by PARSEC, a C-based simulation language developed by the Parallel Computing Laboratory at the University of California at Los Angeles, for sequential and parallel execution of discrete event simulation models. The simulation area is 1000 x 1000 square meters, where nodes are placed uniformly. The transmission power and receiver threshold level of the nodes are 10 dBm and -81 dBm respectively. The random waypoint mobility model is used. In this model, each node chooses a random destination and moves towards that destination with a random speed chosen between the minimum and maximum values specified. The node then waits there for the specified pause time and continues its movement as described above. The bandwidth of the shared wireless channel is assumed to be 2 MHz. The physical layer employs the two ray propagation model. The nodes use the distributed coordination function of the IEEE 802.11 WLAN [16] standard with RTS / CTS extension and provide link layer feedback to the routing layer. CBR traffic of 4 packets per second, with a 512 byte packet size, is used. There are two randomly chosen source-destination pairs and each source generates 4200 packets. Simulations are run for 1200 seconds, and each data point represents an average of at least four runs with different seed values. Identical mobility and traffic scenarios are used across the three protocol variants. The fixed signal strength threshold used in the AODV-Fixed variant is -78 dBm, whereas AODV-Adaptive uses the received signal strength and the speed of the mobile nodes, passed from the physical layer through cross layer interaction. The minimum route lifetime requirement is set as 4 seconds. We use the following five parameters to evaluate the performance of the protocol variants: 1) Number of routes selected (implies route failures), 2) Number of RREQ packets transmitted (counted on a hop-by-hop basis), 3) Packet delivery ratio, 4) Number of hops, and 5) Average end-to-end delay.
Fig 2. Route Failure Frequency
Fig 3. Routing Overhead
Fig 4. Reliable Packet Delivery
Fig 5. Average Path Length
In the experiment to study the effect of mobility, the maximum speed of the nodes is varied between 0 and 25 m/sec, and 49 nodes are used in the simulation. Fig 2 shows the number of routes used by the three protocol variants. The Fixed and Adaptive AODV variants result in a reduced number of routes selected, i.e., a reduced number of route failures, which reflects the formation of reliable (more stable) routes. Hence the number of RREQ packets sent by the nodes is also reduced, as shown in Fig 3. We can also infer that usage of a fixed threshold value leads to reduced connectivity, particularly at very low speed ranges, which makes AODV-Fixed suffer from increased RREQ broadcasts during the route search process. The improvement in packet delivery ratio is reflected in Fig 4. Both the AODV-Fixed and AODV-Adaptive variants outperform AODV-Basic because of stable route formation. But this improvement comes at the cost of an increased number of hops, which is shown in Fig 5. This figure also highlights the need for mobility adaptive route discovery, which optimizes routes with speed information and helps in reducing the average end-to-end delay of data packets significantly compared with that incurred with fixed threshold usage. Fig 6 shows the delay performance of the three protocol variants.
Further, in order to explore the impact of node density on the proposed new cross layer algorithm, we conducted another experiment in which the node density is varied between 16 and 64 nodes in the 1000 x 1000 sqm area. The maximum speed of the mobile nodes is set as 25 m/sec. The imposed signal strength threshold and minimum route lifetime constraints reduce network connectivity, which is shown in Fig 7. The number of routes used by the AODV-Fixed variant is relatively low at very low node density, which does not imply the formation of stable routes but reflects the scarcity of network connectivity. The repeated search for connectivity increases the RREQ broadcasts in the AODV-Fixed variant, which is presented in Fig 8. The performance of AODV-Adaptive, however, excels in this regard. Hence, with our adaptive algorithm, the control packet overhead remains under control even in a sparse network. The packet delivery ratio suffers when these constraints are enforced in sparse networks; the improvement is visible only when the network density increases beyond a particular level, as shown in Fig 9. So, it is clear that a cross layer design using a signal strength threshold is useful and improves network performance in dense networks, where redundantly available links ensure the required network connectivity, whereas the new adaptive algorithm makes a trade-off in this regard between the Basic and Fixed AODV variants. Hence, when to invoke the cross layer algorithm is also an important design issue.
Fig 6. Delay Performance
V. Conclusion
Fig 7. Node density vs Route Failures
Fig 8. Node Density vs Routing Overhead
Fig 9. Node Density vs PDR
We observe that the cross layer AODV with a fixed threshold reduces the number of route failures and the routing overheads, at the cost of increased hop counts and average end-to-end delay. The proposed mobility adaptive algorithm for route discovery certainly optimizes the above trade-off: the AODV-Adaptive variant reduces the number of hops and the delay to a greater extent and brings them closer to those of the AODV-Basic variant. It is important to note that both cross layer AODV variants improve the packet delivery ratio, but at the cost of a slightly increased end-to-end delay. However, the reduced route failures and routing overheads obtained are very attractive for mobile ad-hoc networks, which are highly resource constrained. Finally, it is worth noting that the impact on network connectivity due to signal strength threshold enforcement is serious in sparse networks, and hence the proposed cross layer design is well suited for dense networks.
References:
[1] Mohammad Ilyas, "The Hand Book of Ad-hoc Wireless Networks", CRC Press, 2003.
[2] C. E. Perkins, E. M. Royer and S. R. Das, "Ad-hoc On-demand Distance Vector Routing Protocol", IETF RFC 3561, July 2003.
[3] B. Awerbuch, D. Holmer and H. Rubens, "High Throughput Route Selection in Multi-rate Ad-hoc Networks", in Proc. of First Working Conf. on Wireless On-demand Network Systems, 2004.
[4] X. Zhong et al., "Stable Enhancement for AODV Routing Protocol", in Proc. of 14th IEEE Conf. on Personal, Indoor and Mobile Radio Communication, vol 1, pp 201-205, 2003.
[5] Y. Zhang and T. A. Gulliver, "Quality of Service for Ad-hoc On-demand Distance Vector Routing", in Proc. of IEEE International Conf. on Wireless Mobile Computing, Networking and Communications, vol 3, pp 192-193, 2005.
[6] R. S. Chang and S. J. Leu, "Long-lived Path Routing with Received Signal Strength for Ad-hoc Networks", in Proc. of 1st International Symposium on Wireless Pervasive Computing, 2006.
[7] G. Quddus et al., "Finding A Stable Route Through AODV by Using Route Fragility Coefficient as Metric", in Proc. of International Conf. on Networking and Services, pp 107-113, 2006.
[8] M. Idrees et al., "Enhancement in AODV Routing Using Mobility Agents", in Proc. of IEEE Symposium on Emerging Technologies, pp 98-102, 2005.
[9] D. Espes and C. Teyssie, "Approach for Reducing Control Packets in AODV-based MANETs", in Proc. of 4th European Conf. on Universal Multi-
service Networks, pp 93-104, Toulouse, France, 2007.
[10] G. Carneiro et al., "Cross Layer Design in 4G Wireless Terminals", IEEE Wireless Communications, vol 11, no 2, pp 7-13, April 2004.
[11] T. S. Rappaport et al., "Wireless Communications: Past Events and a Future Perspective", IEEE Communications Magazine, vol 40, no 5 (50th Anniversary), pp 148-161, 2002.
[12] V. Srivastava and M. Motani, "Cross Layer Design: A Survey and the Road Ahead", IEEE Communications Magazine, vol. 43, no. 12, pp 112-119, Dec. 2005.
[13] B. Ramachandran and S. Shanmugavel, "A Power Conservative Cross Layer Design for Mobile Ad-hoc Networks", Information Technology Journal, vol 4, no 2, pp 125-131, 2005.
[14] B. Ramachandran and S. Shanmugavel, "Reliable Route Discovery for Mobile Ad-hoc Networks", in Proc. of IETE International Conf. on Next Generation Networks, pp CP 26.1-26.6, Mumbai, India, 2006.
[15] GloMoSim User Manual, http://pcl.cs.ucla.edu/project/glomosim
[16] IEEE Std 802.11, Part 11: Wireless LAN MAC & PHY Layer Specifications, 1999.