
Probability

a numerical measure of the likelihood that an event will occur.

experiment

a process that generates well-defined outcomes

Sample point

an experimental outcome; the term identifies the outcome as an element of the sample space

Combinations

counting rule that allows one to count the number of experimental outcomes when the experiment involves selecting n objects from a (usually larger) set of N objects, without regard to the order of selection

Permutations

counting rule that allows one to compute the number of experimental outcomes when n objects are to be selected from a set of N objects and the order of selection is important.
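The two counting rules above can be checked with Python's standard library; a quick sketch using math.comb and math.perm, with N = 5 and n = 2 as illustrative values:

```python
from math import comb, perm

# Counting rule for combinations: number of ways to select
# n objects from N when order does NOT matter.
# C(N, n) = N! / (n! * (N - n)!)
print(comb(5, 2))  # choose 2 of 5 objects -> 10

# Counting rule for permutations: number of ways to select
# n objects from N when order DOES matter.
# P(N, n) = N! / (N - n)!
print(perm(5, 2))  # arrange 2 of 5 objects -> 20
```

Note that every combination of n objects corresponds to n! permutations, which is why perm(5, 2) is 2! times comb(5, 2).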

requirements for assigning probabilities

1. probability assigned to each experimental outcome must be between 0 and 1, inclusively.

2. the sum of the probabilities for all the experimental outcomes must equal 1.0

ex. P(e1) + P(e2) + ... + P(en) = 1


Classical method of assigning probabilities

appropriate when all the experimental outcomes are equally likely. The 2 basic requirements for assigning probabilities are automatically satisfied when using this approach (think tossing a coin).

Relative frequency method

method of assigning probabilities that is appropriate when data are available to estimate the proportion of the time the experimental outcome will occur if the experiment is repeated a large number of times. example:

number waiting   # of days
0                2
1                5
2                6

0 waited on 2 days: 2/13 = .15
1 waited on 5 days: 5/13 = .38
2 waited on 6 days: 6/13 = .46

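The waiting-line example above can be sketched in Python; the counts (2, 5, and 6 days out of 13) come straight from the table:

```python
# Relative frequency method: estimate each probability as the
# proportion of days on which that outcome was observed.
days_waiting = {0: 2, 1: 5, 2: 6}  # outcome -> number of days observed
total_days = sum(days_waiting.values())  # 13

probabilities = {x: n / total_days for x, n in days_waiting.items()}
for x, p in probabilities.items():
    print(x, round(p, 2))  # 0 -> 0.15, 1 -> 0.38, 2 -> 0.46

# Both basic requirements hold automatically: each estimate is in
# [0, 1], and the relative frequencies sum to 1.
assert abs(sum(probabilities.values()) - 1.0) < 1e-9
```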

subjective method

method of assigning probabilities when one cannot realistically assume that the experimental outcomes are equally likely and when little relevant data are available. - think degree of belief

Event

a collection of sample points

probability of an event

equal to the sum of the probabilities of the sample points in the event; calculated by adding the probabilities of all the sample points (experimental outcomes) that make up the event.

complement

defined to be the event consisting of all sample points that are NOT in A, denoted by A^c

P(A) + P(A^c)= 1


addition law

used when interested in knowing the probability that at least one of 2 events occurs (event A or B or both), denoted A∪B

P(A∪B) = P(A) + P(B) - P(A∩B)

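The addition law can be verified on a small equally-likely sample space; a sketch using one roll of a fair die, with illustrative events A = "roll is even" and B = "roll is greater than 3":

```python
# Classical method: six equally likely outcomes of a fair die roll.
sample_space = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}  # roll is even
B = {4, 5, 6}  # roll is greater than 3

def p(event):
    # every outcome is equally likely, so P = favorable / total
    return len(event) / len(sample_space)

# Addition law: P(A ∪ B) = P(A) + P(B) - P(A ∩ B).
# Subtracting P(A ∩ B) avoids double-counting outcomes in both events.
p_union = p(A) + p(B) - p(A & B)
print(p_union)   # ≈ 0.667
print(p(A | B))  # the same probability computed directly from the union
```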

Mutually exclusive events

if the events have no sample points in common

addition law for mutually exclusive events

P(A∪B) = P(A) + P(B)

conditional probability

written P(A | B), meaning the probability of event A given the condition that event B has occurred (A given B); computed as P(A | B) = P(A∩B) / P(B)

Joint Probabilities

the probability of the intersection of two events (think being a man and getting a raise)

Marginal Probabilities

refers to the location in the margins of the joint probability table; found by summing the joint probabilities in the corresponding row or column of the table
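Joint, marginal, and conditional probabilities fit together in one small table; a sketch with made-up joint probabilities for the man/raise example above:

```python
# Joint probability table (illustrative numbers, not real data):
# rows = gender, columns = whether a raise was received.
joint = {
    ("man", "raise"): 0.15, ("man", "no raise"): 0.35,
    ("woman", "raise"): 0.10, ("woman", "no raise"): 0.40,
}

# Marginal probabilities: sum the joint probabilities across
# the corresponding row or column of the table.
p_man = sum(p for (g, r), p in joint.items() if g == "man")      # ≈ 0.50
p_raise = sum(p for (g, r), p in joint.items() if r == "raise")  # ≈ 0.25

# Conditional probability: P(raise | man) = P(man ∩ raise) / P(man)
p_raise_given_man = joint[("man", "raise")] / p_man              # ≈ 0.30
print(p_man, p_raise, p_raise_given_man)
```

Here raise and gender are not independent, since P(raise | man) ≈ 0.30 differs from the marginal P(raise) ≈ 0.25.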

independent event

two events are independent if the probability of event A occurring is not changed by the occurrence of event B; that is, P(A | B) = P(A)

Random Variable

a numerical description of the outcome of an experiment

Discrete Random Variable

a random variable that may assume either a finite number of values or an infinite sequence of values such as 0,1,2...

Continuous Random Variable

a random variable that may assume any numerical value in an interval or collection of intervals

Probability Distribution

describes how probabilities are distributed over the values of the random variable

Probability function

provides the probability of each value of the random variable denoted by f(x)

Discrete probability function

probability function for a random variable that may assume either a finite number of values or an infinite sequence of values; requires f(x) ≥ 0 for every value of x AND the sum of f(x) over all values of x equal to 1, both required for this probability function

Discrete uniform probability function

f(x) = 1/n, where n = the number of values the random variable may assume

Expected Value

mean of a random variable - a measure of the central location for the random variable; for a discrete random variable, E(x) = Σ x·f(x)

Variance

a measure of the variability in the values of a random variable; for a discrete random variable, Var(x) = Σ (x - E(x))^2 · f(x)

standard deviation

positive square root of the variance
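The three summary measures above can be computed together; a sketch for a small discrete distribution (the values of x and f(x) are illustrative):

```python
from math import sqrt

# Discrete probability distribution: x -> f(x) (illustrative values).
dist = {0: 0.2, 1: 0.5, 2: 0.3}
assert abs(sum(dist.values()) - 1.0) < 1e-9  # valid probability function

# Expected value: E(x) = sum of x * f(x), the mean of the random variable.
mean = sum(x * p for x, p in dist.items())

# Variance: Var(x) = sum of (x - mean)^2 * f(x).
variance = sum((x - mean) ** 2 * p for x, p in dist.items())

# Standard deviation: positive square root of the variance.
std_dev = sqrt(variance)

print(mean, variance, std_dev)  # ≈ 1.1, 0.49, 0.7
```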

Binomial Properties

must have the following properties

1. The experiment consists of a sequence of n identical trials.

2. Two outcomes are possible on each trial: success or failure.

3. The probability of success, denoted by p, does not change from trial to trial. The probability of failure, denoted by 1 - p, does not change from trial to trial.

4. The trials are independent.


binomial probability function

f(x)= (nCx)p^x(1-p)^(n-x)

x = # of successes, p = probability of a success on one trial, n = # of trials, f(x) = probability of x successes in n trials

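The binomial probability function above translates directly into Python (the coin-toss numbers are illustrative):

```python
from math import comb

def binomial_pmf(x, n, p):
    """Probability of exactly x successes in n independent trials,
    each with success probability p: C(n, x) * p^x * (1-p)^(n-x)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# Example: probability of exactly 2 heads in 3 fair coin tosses.
print(binomial_pmf(2, 3, 0.5))  # 3 * 0.25 * 0.5 = 0.375
```

Summing f(x) over x = 0, ..., n returns 1, which matches the second requirement for assigning probabilities.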

Poisson Probability Function

f(x) = (mean)^x · e^(-mean) / x!

f(x) = probability of x occurrences in an interval

mean = expected value or mean number of occurrences in an interval

e = 2.71828

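A minimal Python sketch of the Poisson probability function (the mean of 3 occurrences per interval is an illustrative value):

```python
from math import exp, factorial

def poisson_pmf(x, mean):
    """Probability of exactly x occurrences in an interval, given the
    mean number of occurrences per interval: mean^x * e^(-mean) / x!."""
    return mean**x * exp(-mean) / factorial(x)

# Example: mean of 3 arrivals per interval; probability of exactly 2 arrivals.
print(poisson_pmf(2, 3.0))  # 9 * e^-3 / 2 ≈ 0.224
```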

Poisson Properties

1. The probability of occurrence is the same for any 2 intervals of equal length.

2. The occurrence or nonoccurrence in any interval is independent of the occurrence or nonoccurrence in any other interval.


hypergeometric probability distribution

closely related to the binomial distribution, but differs in 2 key ways: the trials are not independent, and the probability of success changes from trial to trial (sampling is done without replacement).
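The hypergeometric function can be written with the same combinations rule from earlier; a sketch where N is the population size, r the number of successes in the population, and n the sample size (the card numbers are illustrative):

```python
from math import comb

def hypergeometric_pmf(x, n, r, N):
    """Probability of x successes in a sample of n drawn WITHOUT
    replacement from a population of N containing r successes:
    C(r, x) * C(N - r, n - x) / C(N, n)."""
    return comb(r, x) * comb(N - r, n - x) / comb(N, n)

# Example: 3 cards drawn from a deck of 10 that contains 4 aces
# (hypothetical numbers); probability of drawing exactly 1 ace.
print(hypergeometric_pmf(1, 3, 4, 10))  # C(4,1)*C(6,2)/C(10,3) = 60/120 = 0.5
```

Because each draw removes an object from the population, the probability of success shifts from trial to trial, which is exactly how this distribution departs from the binomial.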