PHIL 183 Midterm 2
Terms in this set (26)
Attrition
a selection effect (similar to the survival effect) in which some patients drop out of a research study, or data is lost in some other way, which can result in unreliable evidence
Echo chamber
a metaphor used for the situation when our sources of information and opinion have all been selected to support our opinions and preferences. This includes our own selection of media and friends with similar viewpoints, but also results from the fact that social media tailors what we see using an algorithm designed to engage us.
Evidence for H
when a fact is more probable given H than given ~H, it constitutes at least some evidence for H. By the first rule of evidence, this means we should increase our degree of confidence in H at least a tiny bit.
Evidence test, the
if we are wondering whether a new fact or observation is evidence for a hypothesis H, we can ask whether that fact or observation is more likely given H or given ~H. If the former, it's at least some evidence for H. If the latter, it's at least some evidence for ~H. If neither, it's independent of H. The formal version of the evidence test is "Is P( E | H ) greater or less than P( E | ~H )?"
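The comparison at the heart of the evidence test can be sketched in a few lines of Python. The probability values below are invented purely for illustration:

```python
# Evidence test: is P(E | H) greater or less than P(E | ~H)?
# These probability values are invented purely for illustration.
p_e_given_h = 0.8      # how likely the observation is if H is true
p_e_given_not_h = 0.2  # how likely the observation is if H is false

if p_e_given_h > p_e_given_not_h:
    verdict = "at least some evidence for H"
elif p_e_given_h < p_e_given_not_h:
    verdict = "at least some evidence for ~H"
else:
    verdict = "independent of H"

print(verdict)  # at least some evidence for H
```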
File drawer effect
this is a selection effect caused by the researchers themselves, who may not even bother to write up and send in a study that is unlikely to be published (viz. a boring study), but instead might leave it in their file drawers. See the related entry for publication bias.
Hypothesis
this is any claim under investigation, often denoted with the placeholder letter "H".
Independent of H
see the entry for evidence test.
Media bias
although this term is generally used to refer only to political biases on the part of media, we use it to cover the general bias towards engaging content, though this may manifest in content of special interest to viewers with a certain political orientation or even outright slanted content. The general category of media bias also includes the highly tailored algorithms of social media.
Publication bias
the tendency for academic books and journals to publish research that is surprising in some way. A piece of research can do this by providing evidence against conventional wisdom, or providing evidence for a surprising alternative. Meanwhile studies that support the conventional wisdom, or fail to provide support for alternatives, can be passed over.
Selection effect
a factor that systematically selects which things we can observe. This can make our evidence unreliable if we are unaware that it's happening.
Selective noticing
when observations that support a hypothesis bring that hypothesis to mind, causing us to notice that they support the hypothesis, but observations that disconfirm that hypothesis do not bring it to mind. The result is that we are more likely to think about the hypothesis when we are getting evidence for it, and fail to think about it when we are getting evidence against it. So it will seem to us like we are mainly getting evidence for it. This can happen even if the hypothesis is just something we've considered or heard about—it needn't be something we antecedently believe. (So selective noticing can happen without confirmation bias, although it seems to be exacerbated when we do antecedently accept the hypothesis.)
Serial position effect
tendency to remember the very first and last events in a series (or the first and last parts of an extended event).
Strength factor
a measure of the strength of a piece of evidence, namely the result of dividing P( E | H ) by P( E | ~H ), where we are measuring the strength of the evidence E provides for H. The higher the strength factor, the stronger the evidence provided by E. (A strength factor less than 1 is also possible, when E is less likely given H than ~H: this means it's evidence against H.) A traditional but arbitrary threshold for "strong evidence" is about a strength factor of 10.
Strength test, the
a test of the strength of a piece of evidence. Informally, it involves asking: How much more (or less) likely is this if H is true than if H is false? Formally, the question is: How much greater (or less) is P( E | H ) than P( E | ~H )? Note that we need a comparative answer to the strength test, so we divide P( E | H ) by P( E | ~H ) to give us the strength factor.
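As a rough sketch with made-up numbers, the strength factor is just the ratio of the two conditional probabilities:

```python
# Strength factor: P(E | H) divided by P(E | ~H).
# These probability values are invented for illustration.
p_e_given_h = 0.9       # how likely the evidence is if H is true
p_e_given_not_h = 0.09  # how likely the evidence is if H is false

strength_factor = p_e_given_h / p_e_given_not_h
print(strength_factor)  # about 10 -- at the traditional threshold for "strong evidence"
```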
Survival effect
this is a more specific term for bias arising from an extreme form of selection effect, when there is a process that actually eliminates some potential sources of information, and we only have access to the ones that survive. For example, suppose I have met lots of elderly people who have smoked all their lives and are not sick, and so decide that smoking is not so dangerous. I may be forgetting that the people who smoked and died are not around for me to meet.
Base rate neglect
when we get a new piece of evidence for a claim, we need to pay attention both to the strength of that evidence and the probability of that claim before we got the evidence. (In our updating rule, that initial (or "prior") probability gets converted into prior odds.) Often our initial probability comes from a statistical fact called a base rate. When we take evidence into account but ignore the base rate, that is base rate neglect. Someone who commits base rate neglect may still be paying attention to all the evidence and using the full strength factor. One way to diagnose base rate neglect in someone's reasoning is to ask whether their conclusion would have been different if they were taking the base rate into account.
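To see why ignoring the base rate matters, here is a small worked example with invented numbers: a 1% base rate combined with evidence of strength factor 10 still yields a modest probability.

```python
# Base rate neglect, illustrated with invented numbers.
# Suppose a condition has a base rate of 1%, and a test result
# provides evidence with a strength factor of 10.
base_rate = 0.01
prior_odds = base_rate / (1 - base_rate)  # convert probability to odds (about 1:99)
strength_factor = 10

new_odds = prior_odds * strength_factor   # the updating rule
new_prob = new_odds / (1 + new_odds)      # convert odds back to probability

print(round(new_prob, 3))  # 0.092 -- still under 10%, despite strong evidence
# Someone neglecting the base rate might jump straight to a high probability
# just because the strength factor is large.
```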
when two hypotheses are mutually exclusive, the probability of one or the other happening is equal to the sum of the individual probabilities; in other words, when A and B are mutually exclusive, P( A or B ) = P( A ) + P( B )
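A quick illustration of the addition rule, using a fair six-sided die as an invented example:

```python
# Addition for mutually exclusive claims: P(A or B) = P(A) + P(B).
# Invented example: rolling a fair six-sided die.
p_rolls_1 = 1 / 6
p_rolls_2 = 1 / 6

# "Rolls a 1" and "rolls a 2" cannot both be true, so the probabilities add.
p_rolls_1_or_2 = p_rolls_1 + p_rolls_2
print(p_rolls_1_or_2)  # about 1/3
```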
Jointly exhaustive
when some claims together cover all the possibilities-- at least one of them has to be true-- we say they are jointly exhaustive.
Heads I win, tails we're even
This mistake is failing to treat a new fact as evidence against one's position even though, had the opposite been observed, one would have treated it as evidence for one's position. There are various ways this can happen, including ignoring the evidence or assigning inconsistent strength factor values. But the key to this particular pitfall is the inconsistency between how one responds to the evidence at hand and how one would have responded to the opposite evidence.
Mutually exclusive
when two claims rule each other out-- they could not both be true-- we say they are mutually exclusive. When several claims are mutually exclusive, it means there is no overlap between any of them-- no two of them could both be true.
One-sided strength testing
the strength of a piece of evidence for a claim is a matter of how much more likely the evidence would be if the claim were true than if it were false. If we pay attention only to how likely the evidence would be if the claim were true, and treat a high value as constituting evidence, we are making the mistake called one-sided strength testing. In our notation, this means using P( E | H ) and ignoring P( E | ~ H ). Someone who makes this error may still take the base rate and all the relevant evidence into account.
Opposite evidence rule
to help us avoid ignoring evidence against a view that we find plausible, it can be useful to ask ourselves how we would have reacted to the opposite observation. If we would have treated the opposite observation as evidence for our view, then we should treat the evidence we have as evidence against our view. (Though the amount of evidence can be different.) In other words, if E is evidence for H, then ~E is evidence for ~H. Forgetting this leads to the error we call heads I win, tails we're even.
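The opposite evidence rule can be checked with a small numeric sketch (probabilities invented for illustration): since P( ~E | H ) = 1 - P( E | H ), whenever E passes the evidence test for H, ~E must pass it for ~H.

```python
# Opposite evidence rule, with invented numbers.
# If E is evidence for H, then ~E must be evidence for ~H.
p_e_given_h = 0.7
p_e_given_not_h = 0.4

# E passes the evidence test for H:
assert p_e_given_h > p_e_given_not_h

# Probabilities of the opposite observation ~E:
p_not_e_given_h = 1 - p_e_given_h          # about 0.3
p_not_e_given_not_h = 1 - p_e_given_not_h  # about 0.6

# ~E is more likely given ~H, so ~E is evidence for ~H:
assert p_not_e_given_h < p_not_e_given_not_h
print("~E is evidence for ~H")
```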
Selective updating
if someone updates using some new evidence, but not other bits of new evidence that are relevant, that is selective updating. Such a person may still take the base rate into account and properly update using the full strength factor using the evidence they do pay attention to.
the individual probabilities of any set of mutually exclusive and jointly exhaustive claims add up to 1. See mutually exclusive and exhaustive.
Updating
adjusting one's degree of confidence in a claim after getting evidence for or against that claim. Updating should follow the updating rule.
Updating rule
the rule for properly updating one's degrees of confidence in a claim after getting evidence. The rule is: prior odds × strength factor = new odds. This is equivalent to a combination of two rules this text does not specifically cover (except in the Appendix to this chapter): Bayes' theorem and conditionalization.
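A minimal worked example of the updating rule, with invented numbers:

```python
# Updating rule: prior odds x strength factor = new odds.
# Invented example: prior probability 0.25 in H, evidence with strength factor 3.
prior_prob = 0.25
prior_odds = prior_prob / (1 - prior_prob)  # 1:3, i.e. about 1/3

strength_factor = 3.0
new_odds = prior_odds * strength_factor     # even odds (1:1)

new_prob = new_odds / (1 + new_odds)        # convert odds back to probability
print(new_prob)  # 0.5
```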