1. The Sample with the Built-in Bias

Response Bias: Tendency for people to over- or under-state the truth

Non-response: People who complete surveys are systematically different from those who fail to respond. Accessibility/Pride.

Representative Sample: One where all sources of bias have been removed. (Literary Digest)

Questionnaire wording/Interviewer effects

Recall Bias: Tendency for one group to remember prior exposure in retrospective studies

The sample with a built-in bias: the origin of most statistics problems - the sample. Any statistic is based on some sample (the whole population can't be tested), and every sample carries some bias, even when the person compiling the statistic tries hard not to introduce any. Respondents not replying honestly, the market researcher picking a sample that gives better numbers, respondents shaping answers to the interviewer they see in front of them, and data simply not being available for a certain period are a few of the biases that creep in when building a statistic. One of the examples (from the 1950s) that the author mentions is a readership survey of two magazines. Respondents were asked which magazine they read more - Harper's or True Love Story. Most respondents claimed they read Harper's, yet the publishers' figures showed that True Love Story had a much higher circulation than Harper's - refuting the results of the sampling. The reason for the discrepancy: people answered with the magazine that flattered them. As Dr. House says - everybody lies! Summary of the chapter: given any statistic, question the sample that was taken, and assume there is always some bias in it.

2. The Well-Chosen Average

Arithmetic Mean: Evenly distributes the total among individuals. Can be unrepresentative when measurements are highly skewed right. (e.g. per capita income)

Median: Value dividing distribution into two equal parts. 50th percentile. (e.g. median household income)

Mode: Most frequently observed outcome (rarely reported with numeric data)

The well-chosen average: how not qualifying an average can change the meaning of the data. Before I delve into this, a quick question: when I say average, what comes to your mind? Sum(x1...xn) / N, right? The arithmetic mean. But I said average, not arithmetic average, didn't I? Not many people know that there are three averages:

Arithmetic average / mean - sum of quantities / number of quantities

Median - the middle point of the data which separates the data, the midpoint when data is sorted

Mode - the data point that occurs the most in a given set of data

And when someone says average, leaving it unqualified, there is a lot of room for juggling. The author gives a very simple example. If an organization publishes a statistic that the average pay of its employees is $1,000, what does this mean? Most of us read it as "almost everyone makes around $1,000" - we assume it is the median. But the corporation may be quoting an arithmetic mean: the boss might earn $10,500 while the remaining 19 employees earn $500 each, and the arithmetic average still comes out to $1,000. Just by not qualifying the average, the published figure can be completely twisted out of shape from the real facts. The way out: always ask what kind of average is being quoted.
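The payroll example above can be checked in a few lines. This is a minimal sketch using the chapter's figures (one boss at $10,500, 19 employees at $500 each) to show how far the three averages can diverge on the same data:

```python
# The chapter's payroll: one boss at $10,500, 19 employees at $500 each.
from statistics import mean, median, mode

pay = [10_500] + [500] * 19

print(mean(pay))    # arithmetic mean: 1000.0 - the figure the company quotes
print(median(pay))  # median: 500 - what the typical employee actually earns
print(mode(pay))    # mode: 500 - the most common salary
```

The mean is pulled up to $1,000 by a single outlier, while the median and mode both sit at $500 - which is why an unqualified "average" is worth interrogating.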

3. The Little Figures That Are Not There

Small samples: Estimators with large standard errors can produce seemingly very strong effects by chance

Low incidence rates: Need very large samples for meaningful estimates of low frequency events

Significance levels/margins of error: Measures of the strength and precision of inference

Ranges: Report ranges or standard deviations along with means (e.g. "normal" ranges)

Inferring among individuals versus populations

Clearly label chart axes

The little figures that are not there: This chapter is about how sample data is picked in a way that proves the desired result - something we are all too aware of in marketing campaigns. Picking the sample "right" can mean choosing a sample size, or a small number of trials, that gives the kind of results you are looking for. The author demonstrates this with an issue very important to parents - is my kid normal or not? He discusses the 'Gesell Norms', where Dr. Arnold Gesell stated that most kids sit erect by the age of two. A parent immediately measures his or her own child against that figure and decides whether the child is normal. What is missing is that, on the way from the source of the information (the research) to the Sunday paper where the parent read it, the average was changed from a range into an exact figure. Had the article mentioned that there is a range of ages at which a child learns to sit erect, the reader would be assuaged - that range is the little figure that disappears. The way out: ask whether the figure presented is a single number or whether there is a range behind it.

4. Much Ado about Practically Nothing

Probable Error: Estimation error with probability 0.5. If estimator is approximately normal, PE is approximately 0.675 standard errors. (Old school)

Margin of Error: Estimation error with probability 0.95. If estimator is approximately normal, ME is approximately 2 standard errors

Clinical (practical) significance: In very large samples an effect may be significant statistically, but not in a practical sense. Report confidence intervals as well as P-values.

Much ado about practically nothing: This little chapter is about errors in measurement. Two measures of error come up - the probable error and the standard error. The probable error captures how far off a measurement is likely to be, given how imprecise your measuring device is. For example, if you were using a measuring scale that can be off by 3 inches, then any reading you take really means that reading +/- 3 inches. Differences of this size become important when business decisions are taken based on whether a result comes out positive or negative.
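The definitions above reduce to simple arithmetic: both the probable error and the margin of error are fixed multiples of the standard error. A minimal sketch, using made-up measurements (the data values are an assumption, not from the book):

```python
# Hypothetical measurements to illustrate: probable error ~ 0.6745 SE,
# margin of error ~ 1.96 SE (for an approximately normal estimator).
from statistics import mean, stdev
from math import sqrt

data = [98, 102, 101, 99, 100, 103, 97, 100]  # made-up readings
n = len(data)

se = stdev(data) / sqrt(n)  # standard error of the mean
pe = 0.6745 * se            # true mean within +/- pe about half the time
me = 1.96 * se              # true mean within +/- me about 95% of the time

print(round(mean(data), 1), round(pe, 2), round(me, 2))
```

Note that the margin of error is roughly three times the probable error, which is why the older "probable error" convention made results sound more precise than the modern 95% convention does.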

5. The Gee-Whiz Graph

The Gee-Whiz graph: This one is something we see quite often - manipulating a graph so that it shows an inflated (or deflated) picture of whatever is being plotted. Some tricks include truncating the vertical axis so it starts just below the data instead of at zero, omitting the scale on an axis, and leaving an axis unlabeled with only bare numbers, letting the reader fill in his or her own assumptions.
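The truncated-axis trick can be quantified. A minimal sketch (the numbers are hypothetical): the same pair of values looks nearly flat on an honest axis starting at zero, but dramatic on an axis that starts just below the data:

```python
# How tall the bars *look* depends entirely on where the axis starts.
def visual_ratio(a: float, b: float, axis_start: float) -> float:
    """Ratio of the drawn heights of two bars above axis_start."""
    return (b - axis_start) / (a - axis_start)

before, after = 100.0, 102.0  # a real change of 2%

print(visual_ratio(before, after, 0))   # honest axis: bars differ by 1.02x
print(visual_ratio(before, after, 99))  # truncated axis: bars differ by 3x
```

Same data, same arithmetic - but on the truncated chart the second bar is drawn three times as tall, which is the whole "gee whiz".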

6. The One-Dimensional Picture

The one-dimensional picture: This one is an interesting trick. The idea is to use a symbol on the graph - a money bag, a factory, things like that. When showing growth of, say, a factory, you enlarge the factory image - and enlarge it across all its dimensions. An example: you want to display a difference in pay scale. In a bar chart you would have one bar with a measure of (say) 10 and another of (say) 30, so the 1:3 ratio is clear at a glance. Now picture money bags of similar proportions: one of size 1 and the other drawn three times as large in every dimension. The reader immediately perceives a difference closer to 1:9, because the bigger bag covers nine times the area on the page. The extra dimensions are forgotten, and the large money bag gives the impression of a much larger difference than 1:3!
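The money-bag trick reduces to one line of arithmetic: scale a picture by a factor in every dimension and its area and implied volume grow by the square and cube of that factor. A minimal sketch of the chapter's 1:3 example:

```python
# Scaling a pictogram 3x in every dimension inflates the impression.
scale = 3  # the honest ratio between the two quantities

linear = scale        # bar chart: height only -> looks 3x
area = scale ** 2     # 2-D picture on the page -> looks 9x
volume = scale ** 3   # 3-D object the picture suggests -> feels 27x

print(linear, area, volume)  # 3 9 27
```

Only the first number is the true ratio; the other two are what the eye reports when the symbol is scaled in two or three dimensions.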

7. The Semi-attached Figure

The semi-attached figure: This one is a Sir Humphrey Appleby classic (the trick used in 'A Real Partnership'). The idea is very simple: if you can't prove what you want to prove, prove something else and imply that the two are the same! You can't prove that your drug cures colds, but you can prove that it kills 32,132 cold germs - more than any other drug - and then wait for the consumer to assume the connection. The author also discusses percentages in this chapter, where change can be reported in percent or in percentage points. With percentages comes the classic trap: what does a growth of 10% mean? Measured against last year's production, or against some arbitrary year you chose to pick? If the base is some other year in the past, how is the reader to know whether the 10% growth is real growth at all - very difficult indeed. And then there are percentage points: a drop of 3 percentage points lands a much softer blow than calling it a loss of 23% (say, a fall from 13% to 10% in real terms).
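The percentage-point example at the end of the paragraph works out like this - the same drop, reported two ways:

```python
# The chapter's example: a rate falling from 13% to 10%.
old_rate, new_rate = 0.13, 0.10

point_drop = (old_rate - new_rate) * 100          # "down 3 points" - sounds mild
relative_drop = (old_rate - new_rate) / old_rate  # "down 23%" - sounds severe

print(round(point_drop, 1))        # 3.0
print(round(relative_drop * 100))  # 23
```

Both statements are true of the same numbers; which one gets published depends on which impression the publisher wants to leave.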

8. Post Hoc Rides Again

Post hoc rides again: I guess this is my favorite chapter of the book. Post-hoc reasoning - the cause-and-effect problem. You have an effect, and then you go shopping for the cause you want to portray. And what is to stop someone from making that connection? The author brings up the example of smoking being linked to bad grades. Was smoking the cause of the bad grades, or did the students who were already getting bad grades take up smoking? How is one to contest such an assumption? The way out, mostly, is to ask when the data was collected and whether data was available over the entire course of the study (there is also the trick of drawing conclusions about a period for which no data exists - how do you contest a claim when there is no data available at all?).

9. How to Statisticulate

How to statisticulate: "Statisticulate" is a term the author coins for manipulating readers with statistics, and this chapter is a cheat sheet of things to watch out for. The author lists various tricks: quoting profit as a percentage of cost rather than of selling price, drawing a graph with a stretched Y-axis scale just to make growth look steep, and making per-capita income look smaller by counting children in the family as individuals in the average, to name a few.
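The first trick above - measuring profit against cost instead of selling price - is pure arithmetic. A minimal sketch with hypothetical numbers (the $2/$3 figures are my own, not the book's):

```python
# The same markup, reported against two different bases.
cost, price = 2.00, 3.00  # hypothetical: buy at $2, sell at $3

profit = price - cost
on_cost = profit / cost    # "50% profit!" - the statisticulator's choice
on_price = profit / price  # "33% profit" - the more modest framing

print(round(on_cost * 100), round(on_price * 100))  # 50 33
```

Neither figure is false; the lie is in silently choosing the base that makes the better headline.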

10. How to Talk Back to a Statistic

How to talk back to a statistic: This one is a cheat sheet for the reader - what to ask in order to find out whether a statistic being presented is genuine. There are a few questions one can ask: