Counting rule for combinations
a counting rule that allows one to count the number of experimental outcomes when the experiment involves selecting "n" objects from a (usually larger) set of "N" objects, where the order of selection is not important.
Counting rule for permutations
a counting rule that is sometimes useful; it allows one to compute the number of experimental outcomes when "n" objects are to be selected from a set of "N" objects where the order is important.
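A quick sketch of both counting rules using Python's standard library (the values N = 5 and n = 2 are made up for illustration):

```python
from math import comb, perm

# Number of ways to select n = 2 objects from N = 5 objects:
# combinations when order does not matter, permutations when it does.
N, n = 5, 2

print(comb(N, n))  # C(5, 2) = 5!/(2! * 3!) = 10 ways, order unimportant
print(perm(N, n))  # P(5, 2) = 5!/3! = 20 ways, order important
```

Note the permutation count is always at least as large, since each combination of n objects can be ordered in n! ways.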
requirements for assigning probabilities
1. the probability assigned to each experimental outcome must be between 0 and 1, inclusive.
2. the sum of the probabilities for all the experimental outcomes must equal 1.0
ex. P(e1) + P(e2) + ... + P(en) = 1
Classical method of assigning probabilities
appropriate when all the experimental outcomes are equally likely. - the 2 basic requirements for assigning probabilities are automatically satisfied when using this approach (think tossing a coin)
Relative frequency method
this method of assigning probabilities is appropriate when data are available to estimate the proportion of the time the experimental outcome will occur if the experiment is repeated a large number of times. example:
number waiting    # of days    relative frequency
0                 2            2/13 = .15
1                 5            5/13 = .38
2                 6            6/13 = .46
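The relative frequency calculation above can be sketched directly, using the waiting-line counts from the notes:

```python
# Relative frequency method: estimate each outcome's probability as the
# proportion of the observed days on which it occurred.
days_observed = {0: 2, 1: 5, 2: 6}        # number waiting -> # of days
total_days = sum(days_observed.values())  # 13 days in all

probabilities = {x: count / total_days for x, count in days_observed.items()}
print(round(probabilities[2], 2))             # 0.46
print(round(sum(probabilities.values()), 4))  # 1.0 -- requirement 2 satisfied
```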
Subjective method
assigning probabilities when one cannot realistically assume that the experimental outcomes are equally likely and when little relevant data are available. - think degree of belief
probability of an event
is equal to the sum of the probabilities of the sample points in the event. It is calculated by adding all the probabilities of the sample points (experimental outcomes) that make up the event.
Complement of A
defined to be the event consisting of all sample points that are NOT in A; denoted by A^c
P(A) + P(A^c)= 1
Addition law
used when one is interested in knowing the probability that at least one of 2 events occurs (event A or B or both), denoted by AUB
P(AUB) = P(A) + P(B) - P(AnB)
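A small check of the addition law on one roll of a fair die (the events A and B here are made up for illustration), using the classical method since all 6 outcomes are equally likely:

```python
# Addition law: P(A U B) = P(A) + P(B) - P(A n B)
# A = "even number", B = "number greater than 3" on one die roll.
sample_space = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}
B = {4, 5, 6}

def prob(event):
    # classical method: favorable outcomes / total equally likely outcomes
    return len(event) / len(sample_space)

direct = prob(A | B)                      # count the union directly
by_law = prob(A) + prob(B) - prob(A & B)  # P(A) + P(B) - P(AnB)
print(round(direct, 4), round(by_law, 4))  # 0.6667 0.6667
```

Subtracting P(AnB) prevents the outcomes in both events (here 4 and 6) from being counted twice.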
Conditional probability
written P(A | B), meaning the probability of event A given the condition that event B has occurred (A given B)
Marginal probabilities
refer to the values in the margins of a joint probability table, found by summing the joint probabilities in the corresponding row or column of the joint probability table
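A sketch of summing a joint probability table into its margins; the 2x2 table values here are made up for illustration:

```python
# Marginal probabilities: sum the joint probabilities across each row
# (and down each column) of the joint probability table.
joint = [
    [0.20, 0.10],  # P(A1 n B1), P(A1 n B2)
    [0.30, 0.40],  # P(A2 n B1), P(A2 n B2)
]

row_marginals = [sum(row) for row in joint]        # P(A1), P(A2)
col_marginals = [sum(col) for col in zip(*joint)]  # P(B1), P(B2)
print([round(m, 2) for m in row_marginals])  # [0.3, 0.7]
print([round(m, 2) for m in col_marginals])  # [0.5, 0.5]

# Conditional probability uses these: P(A1 | B1) = P(A1 n B1) / P(B1)
print(round(joint[0][0] / col_marginals[0], 2))  # 0.4
```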
Discrete Random Variable
a random variable that may assume either a finite number of values or an infinite sequence of values such as 0,1,2...
Continuous Random Variable
a random variable that may assume any numerical value in an interval or collection of intervals
Probability distribution
describes how probabilities are distributed over the values of the random variable
Discrete probability function
a probability function for a random variable that assumes either a finite number of values or an infinite sequence of values; it must satisfy f(x) >= 0 AND the sum of f(x) over all values of x must equal 1, both required for this probability function
Discrete uniform probability function
f(x)= 1/n where n= the number of values the random variable may have
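A sketch of the discrete uniform case, using one roll of a fair die as the example:

```python
# Discrete uniform probability function: f(x) = 1/n for each of the n
# values the random variable may assume.
values = [1, 2, 3, 4, 5, 6]
n = len(values)

f = {x: 1 / n for x in values}
print(round(f[3], 4))             # 0.1667 -- the same for every value
print(round(sum(f.values()), 4))  # 1.0 -- both requirements hold
```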
mean (expected value) of a random variable - a measure of the central location for the random variable; for a discrete random variable, E(x) = the sum of x*f(x) over all values of x
Binomial experiment
must have the following 4 properties:
1. experiment consists of a sequence of "n" identical trials.
2. Two outcomes are possible on each trial: success or failure.
3. probability of success, denoted by p, does not change from trial to trial. Probability of a failure, denoted by 1- p, does not change from trial to trial
4. Trials are independent.
binomial probability function
f(x) = (n!/(x!(n-x)!)) * p^x * (1-p)^(n-x), where x = # of successes, p = probability of a success on one trial, n = # of trials, and f(x) = probability of x successes in n trials
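The binomial probability function written out in Python, with a fair-coin example (n = 4 tosses, p = 0.5):

```python
from math import comb

# Binomial probability function:
# f(x) = C(n, x) * p**x * (1 - p)**(n - x)
def binomial_pmf(x, n, p):
    """Probability of exactly x successes in n independent trials."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# e.g., probability of exactly 2 heads in 4 tosses of a fair coin:
print(binomial_pmf(2, 4, 0.5))  # 6 * 0.5**4 = 0.375
```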
Poisson Probability Function
f(x) = (mean^x * e^(-mean)) / x!
f(x) = probability of x occurrences in an interval
mean = expected value or mean number of occurrences in an interval
properties of a Poisson experiment:
1. the probability of an occurrence is the same for any 2 intervals of equal length
2. the occurrence or nonoccurrence in any interval is independent of the occurrence or nonoccurrence in any other interval.
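The Poisson probability function written out in Python; the example mean of 3 occurrences per interval is made up for illustration:

```python
from math import exp, factorial

# Poisson probability function: f(x) = mean**x * exp(-mean) / x!
def poisson_pmf(x, mean):
    """Probability of exactly x occurrences in an interval."""
    return mean**x * exp(-mean) / factorial(x)

# e.g., with a mean of 3 occurrences per interval, probability of exactly 2:
print(round(poisson_pmf(2, 3), 4))  # 0.224
```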