PROBABILITY

1 Some Definitions

1.1 Random Experiment

An experiment E is called a random experiment if

1. all possible outcomes of E are known in advance,

2. it is impossible to predict which outcome will occur at a particular performance of E,

3. E can be repeated, at least conceptually, under identical conditions an infinite number of times.

The experiment of tossing a coin is an example of a random experiment. Here the possible outcomes are 'head' and 'tail', but it is impossible to predict which outcome, namely 'head' or 'tail', will occur at a particular toss of the coin under the given conditions. Other examples of random experiments are 'throwing a die', 'drawing a card at random from a full pack of 52 cards', etc.

1.2 Event Space

The set of all possible outcomes of a given random experiment E is called the event space of the experiment and it will be denoted by S. Here the outcomes, also called event points, are the elements of S. The event space S of the random experiment of tossing a coin is {H, T}, where H denotes the outcome 'head' and T denotes the outcome 'tail'. This is an example of a finite event space. The event space S corresponding to the experiment of choosing a number at random from the interval (2, 4) is the set (2, 4), which is an infinite set.

1.3 Events

An event A of a given random experiment E can be defined as a subset of the corresponding event space S. Let us consider the random experiment of throwing a die. Here S = {1, 2, 3, 4, 5, 6}. Let A = {2, 4, 6} be an event which can be described as 'an even number appears in throwing a die'. The event A happens in a specific trial of the given random experiment if and only if exactly one of the outcomes '2', '4' or '6' occurs in that trial.

Let A and B be any two events of a given random experiment E. The union of the sets A and B, denoted by A + B, is the event which can be stated as 'A or B', or equivalently 'at least one of A and B'. The intersection of A and B, denoted by AB, is the event which can be stated as 'both A and B'. The complement of A, denoted by Ā, is the event stated as 'not A'. In general, if A_1, A_2, ..., A_n, ... are events, then the event ∑_{n=1}^∞ A_n is stated as 'at least one of A_1, A_2, ..., A_n, ...' and the event ∏_{n=1}^∞ A_n is stated as 'all of A_1, A_2, ..., A_n, ...'.

1.4 Impossible Event

An event of a given random experiment is called an impossible event if it can never happen in any performance of the random experiment under identical conditions. In connection with the random experiment of throwing a die, the event 'face marked 7' is an impossible event.

1.5 Certain Event

An event of a given random experiment is called a certain event if it happens in every performance of the corresponding random experiment under identical conditions. In connection with the random experiment of tossing a coin, the event 'head or tail' is a certain event.

1.6 Simple & Composite Events

An event A is called a simple event or an elementary event if A contains exactly one element, i.e., if A can happen in only one way in any performance of the corresponding random experiment. An event A is called a composite event if A contains more than one element. In connection with the random experiment of throwing a die, the event space S is given by S = {1, 2, 3, 4, 5, 6}. Let A_1, B_1 be respectively the events defined by A_1 = {2, 4, 6} and B_1 = {2}. The event A_1 is a composite event whereas the event B_1 is a simple event.

1.7 Mutually Exclusive Events

Two events A and B connected to a given random experiment E are said to be mutually exclusive if A and B can never happen simultaneously in any performance of E, i.e., if AB = O. In connection with the random experiment of throwing a die, the events 'multiple of 3' and 'a prime number' are not mutually exclusive, whereas the events 'even number' and 'odd number' are mutually exclusive events of the same random experiment.
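These set operations can be mirrored directly with Python sets. The following short sketch is an illustration added here (it is not part of the original notes); it encodes the die-throwing event space and the events discussed in Sections 1.3-1.7, with the names S, A, B chosen to match the text.

```python
S = {1, 2, 3, 4, 5, 6}      # event space of throwing a die

A = {2, 4, 6}               # 'even number'  (a composite event)
B = {3, 6}                  # 'multiple of 3'
B1 = {2}                    # a simple event

union = A | B               # A + B : 'at least one of A and B' -> {2, 3, 4, 6}
intersection = A & B        # AB    : 'both A and B'            -> {6}
complement = S - A          # not A                              -> {1, 3, 5}

odd = {1, 3, 5}
print(A & odd == set())     # True : 'even' and 'odd' are mutually exclusive
print(A & B == set())       # False: 'even' and 'multiple of 3' are not
```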

1.8 Exhaustive Set of Events

A collection of events is said to be exhaustive if in every performance of the corresponding random experiment at least one event (not necessarily the same for every performance) belonging to the collection happens. In set-theoretic notation, the collection of events A_α, α ∈ I, is exhaustive iff ∑_{α∈I} A_α = S, where I is an index set. In connection with the random experiment of throwing a die, the collection of events {A_1, A_2, A_3} is exhaustive, where A_1 = {1, 3, 5}, A_2 = {2}, A_3 = {4, 6}.

1.9 Statistical Regularity

Let a random experiment E be repeated N times under identical conditions, in which we note that an event A of E occurs N(A) times. Then the ratio N(A)/N is called the frequency ratio of A and is denoted by f(A). Now, if the random experiment E is repeated a large number of times, it is seen that f(A) gradually stabilizes to a more or less constant value. This tendency of the frequency ratio towards stability is called statistical regularity, and this fact has been confirmed by many experimental results.

2 Classical Definition of Probability

Let the event space S of a given random experiment E be finite. If all the simple events connected to E are equally likely, then the probability of an event A (A ⊆ S) is defined as

P(A) = m/n,

where n is the total number of simple events connected to E, i.e., n is the number of distinct elements of S, and m of these simple events are favourable to A, i.e., A contains m distinct elements.

3 Frequency Definition of Probability

Let A be an event of a given random experiment E. Let the event A occur N(A) times when the random experiment E is repeated N times under identical conditions. Then on the basis of statistical regularity we can assume that lim_{N→∞} N(A)/N exists finitely, and the value of this limit is called the probability of the event A, denoted by P(A):

P(A) = lim_{N→∞} N(A)/N = lim_{N→∞} f(A),

where f(A) = N(A)/N is the frequency ratio of the event A in N repetitions of the corresponding random experiment under identical conditions.

4 Axiomatic Definition of Probability

Let E be a random experiment described by the event space S and let A be any event connected with E, i.e., A ⊆ S. The probability of A is a number associated with A, to be denoted by P(A), such that the following axioms are satisfied:

1. P(A) ≥ 0.

2. The probability of the certain event is 1, i.e., P(S) = 1.

3. If A_1, A_2, A_3, ... is a finite or infinite sequence of pairwise mutually exclusive events, i.e., A_i A_j = O (i ≠ j; i, j = 1, 2, 3, ...), then
   P(A_1 + A_2 + A_3 + ···) = P(A_1) + P(A_2) + P(A_3) + ···.

4.1 Deductions from Axiomatic Definition

4.1.1 P(O) = 0

We have O = O + O + O + ···, where O occurs a countably infinite number of times on the right-hand side. Here OO = O, so by axiom (3),

P(O + O + O + ···) = P(O) + P(O) + P(O) + ···

or P(O) = P(O) + P(O) + P(O) + ···    (4.1)

Let P(O) = k. Then k ≥ 0 by axiom (1). If possible, let k ≠ 0. Then the infinite series on the right-hand side of (4.1) is k + k + k + ···, which is divergent, since lim_{n→∞} nk = ∞ (∵ k > 0), and then (4.1) is impossible. So the assumption k ≠ 0 is wrong. Hence k = 0.

∴ P(O) = 0.
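As a rough illustration of statistical regularity and the frequency definition (an added sketch, not part of the original notes), the following Python snippet simulates tossing a fair coin and shows the frequency ratio f(A) of the event 'head' settling near the classical value 1/2 as N grows.

```python
import random

random.seed(0)                      # fixed seed so the run is reproducible
heads = 0
for N in range(1, 100_001):
    if random.random() < 0.5:       # model of one toss of a fair coin
        heads += 1
    if N in (10, 100, 1_000, 10_000, 100_000):
        print(f"N = {N:>6}   f(A) = {heads / N:.4f}")
# f(A) = N(A)/N drifts toward 0.5, the value the classical definition
# assigns to the event 'head' for a symmetric coin.
```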

4.1.2 Theorem 1

If A_1, A_2, ..., A_n is a finite number of pairwise mutually exclusive events, then

P(A_1 + A_2 + ··· + A_n) = P(A_1) + P(A_2) + ··· + P(A_n).

4.1.3 P(Ā) = 1 − P(A)

4.1.4 Deduction of the Classical Definition

In this case the event space S is finite and contains n distinct elements u_1, u_2, ..., u_n (say), so that

S = {u_1, u_2, ..., u_n}.

Here the n distinct simple events U_1 = {u_1}, U_2 = {u_2}, ..., U_n = {u_n} are equally likely, which means that the simple events have equal probability, i.e.,

P(U_1) = P(U_2) = ··· = P(U_n).    (4.2)

Since any two distinct simple events are necessarily mutually exclusive, by axiom (2) and Theorem 1 we have

1 = P(S) = P(U_1 + U_2 + ··· + U_n) = P(U_1) + P(U_2) + ··· + P(U_n).    (4.3)

From (4.2) and (4.3) we get

P(U_1) = P(U_2) = ··· = P(U_n) = P(U_1 + U_2 + ··· + U_n)/n = 1/n.

Let now A be an event connected to the given random experiment. If the event A contains m distinct elements of S, say u_{i1}, u_{i2}, ..., u_{im}, where i_1, i_2, ..., i_m take distinct values from the set {1, 2, ..., n}, we can write

A = U_{i1} + U_{i2} + ··· + U_{im}.

Since any two distinct simple events are mutually exclusive, we get by Theorem 1

P(A) = P(U_{i1}) + P(U_{i2}) + ··· + P(U_{im}) = 1/n + 1/n + ··· (m times) = m/n.

Hence the classical definition is established.

4.1.5 0 ≤ P(A) ≤ 1

4.1.6 If A is a subevent of B, i.e., A ⊆ B, then P(A) ≤ P(B)

Since A ⊆ B, we can write B = A + (B − A). Since A and B − A are mutually exclusive, by Theorem 1

P(B) = P(A) + P(B − A)

∴ P(B − A) = P(B) − P(A).

Since by axiom (1), P(B − A) ≥ 0, it follows that P(A) ≤ P(B).

4.1.7 Theorem 2

If A and B are any two events connected to a random experiment, then

P(A + B) = P(A) + P(B) − P(AB).

Proof: The events A − AB, B − AB and AB are pairwise mutually exclusive. Also A + B = (A − AB) + (B − AB) + AB. Hence by Theorem 1,

P(A + B) = P{(A − AB) + (B − AB) + AB}
         = P(A − AB) + P(B − AB) + P(AB).    (4.4)

Again A = (A − AB) + AB and B = (B − AB) + AB. Hence by Theorem 1, as A − AB, AB are mutually exclusive and B − AB, AB are also mutually exclusive,

P(A) = P(A − AB) + P(AB)    (4.5)
P(B) = P(B − AB) + P(AB).    (4.6)

Eliminating P(A − AB) and P(B − AB) from (4.4), (4.5) and (4.6) we get

P(A + B) = {P(A) − P(AB)} + {P(B) − P(AB)} + P(AB) = P(A) + P(B) − P(AB).

4.1.8 Theorem 3

For any three events A, B, C,

P(A + B + C) = P(A) + P(B) + P(C) − P(AB) − P(BC) − P(CA) + P(ABC).
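Theorems 2 and 3 can be checked numerically with the classical probabilities of the die experiment. The sketch below is added for illustration (the particular events are my own choice, not from the notes); it counts outcomes and verifies both identities.

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
A, B, C = {2, 4, 6}, {3, 6}, {1, 2, 3}     # 'even', 'multiple of 3', 'at most 3'

def P(event):
    # classical definition: favourable simple events / total simple events
    return Fraction(len(event), len(S))

# Theorem 2
print(P(A | B) == P(A) + P(B) - P(A & B))                      # True
# Theorem 3
print(P(A | B | C) == P(A) + P(B) + P(C)
      - P(A & B) - P(B & C) - P(C & A) + P(A & B & C))         # True
```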

4.2 Theorem 4

If {A_n} is a monotonic sequence of events, then

P(lim_{n→∞} A_n) = lim_{n→∞} P(A_n).

Proof: Case 1. Let {A_n} be a monotonically increasing sequence of events, i.e., A_n ⊆ A_{n+1} for all n, so that

lim_{n→∞} A_n = ∑_{n=1}^∞ A_n.    (4.7)

We define another sequence of events {B_n} as follows:

B_1 = A_1,  B_n = A_n − A_{n−1},  n ≥ 2.    (4.8)

It can be easily verified that {B_n} is a sequence of pairwise mutually exclusive events such that

∑_{n=1}^∞ A_n = ∑_{n=1}^∞ B_n.    (4.9)

It can similarly be shown that

∑_{i=1}^n A_i = ∑_{i=1}^n B_i.    (4.10)

Again, since A_1 ⊆ A_2 ⊆ ··· ⊆ A_n,

∑_{i=1}^n A_i = A_n.    (4.11)

Now,

P(lim_{n→∞} A_n) = P(∑_{n=1}^∞ A_n),  by (4.7)
                 = P(∑_{n=1}^∞ B_n),  by (4.9)
                 = ∑_{n=1}^∞ P(B_n),  by axiom (3)
                 = lim_{n→∞} ∑_{i=1}^n P(B_i)
                 = lim_{n→∞} P(∑_{i=1}^n B_i),  by axiom (3)
                 = lim_{n→∞} P(∑_{i=1}^n A_i),  by (4.10)
                 = lim_{n→∞} P(A_n),  by (4.11).

Case 2. Let {A_n} be a monotonically decreasing sequence of events, i.e., A_n ⊇ A_{n+1} for all n, so that

lim_{n→∞} A_n = ∏_{n=1}^∞ A_n.    (4.12)

Then Ā_n ⊆ Ā_{n+1} for all n, i.e., {Ā_n} is a monotonically increasing sequence of events. Hence by Case 1,

P(lim_{n→∞} Ā_n) = lim_{n→∞} P(Ā_n)
⇒ P(∑_{n=1}^∞ Ā_n) = lim_{n→∞} [1 − P(A_n)],  by (4.7)
⇒ 1 − P(∏_{n=1}^∞ A_n) = 1 − lim_{n→∞} P(A_n),  by De Morgan's law
⇒ P(∏_{n=1}^∞ A_n) = lim_{n→∞} P(A_n)
or P(lim_{n→∞} A_n) = lim_{n→∞} P(A_n),  by (4.12).

This completes the proof of the theorem.
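As a quick numerical illustration of Theorem 4 (an added sketch with an example of my own choosing), take the experiment 'toss a fair coin until the first head appears' and the increasing events A_n = 'the first head appears within the first n tosses'. Then P(A_n) = 1 − (1/2)^n, and lim P(A_n) = 1 = P(lim A_n), since a head eventually appears with probability 1.

```python
# P(A_n) for the increasing events A_n = "first head within n tosses"
for n in (1, 2, 5, 10, 20, 50):
    p = 1 - 0.5 ** n
    print(f"n = {n:>2}   P(A_n) = {p:.10f}")
# The values increase monotonically toward 1, the probability of the
# limiting event "a head appears eventually".
```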

4.3 Conditional Probability

Let A and B be any two events connected to a given random experiment E. The conditional probability of the event A on the hypothesis that the event B has occurred, denoted by P(A|B), is defined as

P(A|B) = P(AB)/P(B),  provided P(B) ≠ 0.

Similarly we can define P(B|A) when P(A) ≠ 0.

For example, let us consider the random experiment of throwing a symmetric die, and let A and B be the events 'even face' and 'multiple of three' respectively. Here the event space contains 6 simple events, and the numbers of simple events favourable to the events A, B and AB are respectively 3, 2 and 1.

∴ P(A) = 1/2, P(B) = 1/3, P(AB) = 1/6.

So by definition,

P(B|A) = P(AB)/P(A) = 1/3,  and  P(A|B) = P(AB)/P(B) = 1/2.
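The die example can be reproduced by counting, as in the short sketch below (added for illustration; it simply re-derives the numbers quoted above).

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}                 # 'even face'
B = {3, 6}                    # 'multiple of three'

def P(event):
    return Fraction(len(event), len(S))

print(P(A & B) / P(A))        # P(B|A) = 1/3
print(P(A & B) / P(B))        # P(A|B) = 1/2
```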

4.3.1 Bayes' Theorem

Let A_1, A_2, ..., A_n be n pairwise mutually exclusive events connected to a random experiment E, where at least one of A_1, A_2, ..., A_n is sure to happen (i.e., A_1, A_2, ..., A_n form an exhaustive set of events). Let X be an arbitrary event connected to E, where P(X) ≠ 0. Also let the probabilities P(X|A_1), P(X|A_2), ..., P(X|A_n) all be known. Then

P(A_i|X) = P(A_i)P(X|A_i) / ∑_{r=1}^n P(A_r)P(X|A_r),  i = 1(1)n.    (4.13)

Proof: A_1, A_2, ..., A_n being an exhaustive set of events,

S = A_1 + A_2 + ··· + A_n,

where S is the corresponding event space.

∴ X(A_1 + A_2 + ··· + A_n) = XS = X,  ∵ X ⊆ S
or XA_1 + XA_2 + ··· + XA_n = X.

Now (XA_i)(XA_j) = X(A_iA_j) = XO = O for i ≠ j, so XA_1, XA_2, ..., XA_n are pairwise mutually exclusive events, and hence

P(XA_1) + P(XA_2) + ··· + P(XA_n) = P(X)
or P(A_1)P(X|A_1) + P(A_2)P(X|A_2) + ··· + P(A_n)P(X|A_n) = P(X).    (4.14)

∴ P(A_i|X) = P(A_iX)/P(X),  ∵ P(X) ≠ 0
           = P(A_i)P(X|A_i) / ∑_{r=1}^n P(A_r)P(X|A_r),  for i = 1(1)n,  by (4.14).

Hence the theorem.
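Formula (4.13) translates directly into a few lines of Python. The helper below is an added sketch (the names `priors` and `likelihoods`, and the numbers in the example, are mine); it returns the posterior P(A_i|X) for every hypothesis in the partition.

```python
def bayes_posterior(priors, likelihoods):
    """priors[i] = P(A_i), likelihoods[i] = P(X | A_i); the A_i are assumed
    pairwise mutually exclusive and exhaustive.  Returns the list of P(A_i | X)."""
    joint = [p * l for p, l in zip(priors, likelihoods)]   # P(A_i) P(X | A_i)
    total = sum(joint)                                      # P(X), by (4.14)
    return [j / total for j in joint]

# Hypothetical numbers: two exhaustive hypotheses with priors 0.3 and 0.7
# and likelihoods P(X|A_1) = 0.9, P(X|A_2) = 0.2.
print(bayes_posterior([0.3, 0.7], [0.9, 0.2]))   # approx. [0.659, 0.341]
```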

4.3.2 Independence of Events

Two events A and B are said to be stochastically independent, or statistically independent, or simply independent, if and only if

P(AB) = P(A)P(B).

4.3.3 Mutual and Pairwise Independence of Three Events

Three events A, B, C are said to be pairwise independent if

P(AB) = P(A)P(B),  P(BC) = P(B)P(C),  P(CA) = P(C)P(A),

and A, B, C are said to be mutually independent if

P(AB) = P(A)P(B),  P(BC) = P(B)P(C),  P(CA) = P(C)P(A),  P(ABC) = P(A)P(B)P(C).

Note 1. From the definition of mutual independence, we see that mutual independence implies pairwise independence, but the converse is not true, as shown by the following example. Let the equally likely outcomes of an experiment be one of the four points in three-dimensional space with rectangular co-ordinates (1, 0, 0), (0, 1, 0), (0, 0, 1) and (1, 1, 1). Let A, B, C denote the events 'x-co-ordinate 1', 'y-co-ordinate 1' and 'z-co-ordinate 1' respectively. Then by using the classical definition,

P(A) = P(B) = P(C) = 2/4 = 1/2,

and

P(AB) = 1/4 = P(A)P(B),  P(BC) = 1/4 = P(B)P(C),  P(CA) = 1/4 = P(C)P(A).

Hence A, B, C are pairwise independent. But P(ABC) = 1/4 ≠ P(A)P(B)P(C), which implies that A, B, C are not mutually independent. Hence pairwise independence does not always imply mutual independence.

Note 2. It is to be noted that the concepts of mutually exclusive events and independent events are not equivalent. If two events A, B are mutually exclusive then AB = O, so that the occurrence of one of the two events excludes the occurrence of the other. On the other hand, if the occurrence of one event has no effect on the probability of the other event, the two events are said to be independent, and in this case P(AB) = P(A)P(B).

Two events can be mutually exclusive and not independent. For example, consider the random experiment of tossing two coins. Let A and B be the events 'both the coins show head' and 'both the coins show tail' respectively. Then A and B are clearly mutually exclusive, since if A happens, B cannot happen, and as such AB = O. But P(A) = 1/4, P(B) = 1/4, and P(AB) = 0 ≠ P(A)P(B), i.e., A and B are not independent.

Again, two events can be independent and not mutually exclusive. For example, consider the random experiment of throwing 2 dice together. Let A and B be the events '6 appears on the first die' and '6 appears on the second die' respectively. Then P(AB) = 1/36 = P(A)P(B) = (1/6) × (1/6), and so A and B are independent. Also AB = {(6, 6)} ≠ O, which implies that A and B are not mutually exclusive.

Finally, two events A and B can be both mutually exclusive and independent when

P(AB) = P(A)P(B) = 0,

which holds if at least one of the two events A and B has zero probability. In fact, two events both having non-zero probability cannot be simultaneously mutually exclusive and independent.
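The four-point example of Note 1 can be verified mechanically, as in the added sketch below (the encoding of the outcomes as tuples is mine).

```python
from fractions import Fraction

outcomes = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]   # equally likely points
A = {p for p in outcomes if p[0] == 1}   # 'x-co-ordinate 1'
B = {p for p in outcomes if p[1] == 1}   # 'y-co-ordinate 1'
C = {p for p in outcomes if p[2] == 1}   # 'z-co-ordinate 1'

def P(event):
    return Fraction(len(event), len(outcomes))

pairwise = all(P(X & Y) == P(X) * P(Y) for X, Y in [(A, B), (B, C), (C, A)])
mutual = pairwise and P(A & B & C) == P(A) * P(B) * P(C)
print(pairwise, mutual)     # True False: pairwise but not mutually independent
```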

Prob. 1 What is the probability of 53 Sundays in a leap year?

Ans: 2/7.

Prob. 2 Find the probability P_N that a natural number chosen at random from the set {1, 2, ..., N} is divisible by a fixed natural number k. Also find lim_{N→∞} P_N.

Ans: Let [z] denote the largest integer contained in z, z being a rational number. Then the number of integers in the given set {1, 2, ..., N} that are divisible by k is [N/k]. Also the event space contains N simple events. Therefore the required probability is

P_N = (1/N)[N/k].

If [N/k] = q, then we can write N = kq + r, 0 ≤ r < k.

∴ P_N = (1/N)[N/k] = q/N = (N − r)/(kN) = 1/k − r/(kN).

Now 0 ≤ r/(kN) < 1/N and lim_{N→∞} 1/N = 0, so lim_{N→∞} r/(kN) = 0.

∴ lim_{N→∞} P_N = 1/k.

Prob. 3 An integer is chosen at random from the first 100 positive integers. What is the probability that the integer is divisible by 6 or 8?

Ans: 6/25.

Prob. 4 From the numbers 1, 2, ..., 2n + 1, three are chosen at random. What is the probability that these numbers are in A.P.?

Ans: 3n/(4n² − 1).

Prob. 5 Show that the probability that exactly one of the events A and B occurs is P(A) + P(B) − 2P(AB).

Prob. 6 Show that the conditional probability satisfies all the axioms of probability.

Prob. 7 If P(A|C) ≥ P(B|C) and P(A|C̄) ≥ P(B|C̄), then prove that P(A) ≥ P(B).

Ans: From P(A|C) ≥ P(B|C), we get

P(AC)/P(C) ≥ P(BC)/P(C)
∴ P(AC) ≥ P(BC),  ∵ P(C) > 0.    (4.15)

Similarly, from P(A|C̄) ≥ P(B|C̄) we get

P(AC̄)/P(C̄) ≥ P(BC̄)/P(C̄)
∴ P(AC̄) ≥ P(BC̄),  ∵ P(C̄) > 0.    (4.16)

From (4.15) and (4.16) we get

P(AC) + P(AC̄) ≥ P(BC) + P(BC̄)
or P(AC + AC̄) ≥ P(BC + BC̄),  since (AC)(AC̄) = O and (BC)(BC̄) = O,
∴ P(A) ≥ P(B).

Prob. 8 If A and B are two events and P(B) ≠ 1, then prove that

P(A|B̄) = (P(A) − P(AB)) / (1 − P(B)).

Hence show that P(AB) ≥ P(A) + P(B) − 1. Also show that P(A) > or < P(A|B) according as P(A|B̄) > or < P(A).

Prob. 9 The probability of detecting tuberculosis in an X-ray examination of a person suffering from the disease is 1 − b. The probability of diagnosing a healthy person as tubercular is a. If the ratio of tubercular patients to the whole population is c, find the probability that a person is healthy if after examination he is diagnosed as tubercular.

Ans: Let A denote the event 'the person is tubercular' and B the event 'the person is diagnosed as tubercular'. Then, by the question, P(B|A) = 1 − b, P(B|Ā) = a, P(A) = c. We have to find P(Ā|B). Now

A + Ā = S, the corresponding event space,
⇒ BA + BĀ = B
⇒ P(B) = P(BA) + P(BĀ),  as BA and BĀ are mutually exclusive events
⇒ P(B) = P(B|A)P(A) + P(B|Ā)P(Ā)
⇒ P(B) = (1 − b)c + a(1 − c).

Therefore,

P(Ā|B) = P(B|Ā)P(Ā)/P(B) = a(1 − c) / [(1 − b)c + a(1 − c)].

Prob. 10 The chance that a doctor will diagnose a certain disease correctly is 60%. The chance that a patient will die by his treatment after correct diagnosis is 40%, and the chance of death by wrong diagnosis is 70%. A patient of the doctor who had the disease dies. What is the probability that the disease was diagnosed correctly?

Ans: 6/13.
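A brute-force check of Prob. 3 and of the limiting behaviour in Prob. 2 can be done in a few lines (an added sketch; the choice k = 7 below is arbitrary).

```python
from fractions import Fraction

# Prob. 3: integers 1..100 divisible by 6 or 8
favourable = sum(1 for m in range(1, 101) if m % 6 == 0 or m % 8 == 0)
print(Fraction(favourable, 100))          # 6/25

# Prob. 2: P_N = (1/N) * [N/k] approaches 1/k as N grows (here k = 7)
k = 7
for N in (10, 100, 10_000, 1_000_000):
    print(N, (N // k) / N)                # tends to 1/7 = 0.142857...
```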

Prob. 11 In an attempt to land an unmanned rocket on the moon, the probability of a successful landing is 0.4. The probability that the monitoring system will give the correct information concerning the landing is 0.9 in either case. A shot is made and a successful landing is indicated by the monitoring system. What is the probability of a successful landing?

Hints: Let A denote the event 'successful landing' and B denote the event 'monitoring system indicates a successful landing'. Given that P(A) = 0.4 and P(B|A) = 0.9; since the monitoring system is correct with probability 0.9 in either case, P(B|Ā) = 0.1. The required probability is P(A|B) = (0.9 × 0.4)/(0.9 × 0.4 + 0.1 × 0.6) = 6/7.

Prob. 12 There are three identical urns containing white and black balls. The first urn contains 2 white and 3 black balls, the second urn 3 white and 5 black balls, and the third urn 5 white and 2 black balls. An urn is chosen at random, and a ball is drawn from it. Find the probability that it is from the second urn if the ball drawn is white.

Ans: 35/139.
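Prob. 12 is a direct application of Bayes' theorem (4.13). The sketch below (added for illustration) computes the posterior exactly and also estimates it by simulation.

```python
import random
from fractions import Fraction

priors = [Fraction(1, 3)] * 3                                   # urn chosen at random
white = [Fraction(2, 5), Fraction(3, 8), Fraction(5, 7)]        # P(white | urn i)

joint = [p * w for p, w in zip(priors, white)]
print(joint[1] / sum(joint))                                    # 35/139 = P(urn 2 | white)

# Monte Carlo estimate of the same quantity
random.seed(1)
hits = total = 0
for _ in range(200_000):
    urn = random.randrange(3)
    if random.random() < float(white[urn]):                     # a white ball is drawn
        total += 1
        hits += urn == 1
print(hits / total)                                             # should be close to 35/139 ≈ 0.252
```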
