
# Chapter 5: The Process of Measurement

Applied Social Research: A Tool for Human Services

#### Terms in this set (...)

Measurement
The process of describing abstract concepts in terms of specific indicators by assigning numbers or other symbols to those indicators in accordance with rules: Abstract/Nominal Definition --Measurement--> Specific/Operational Definition
Indicator
An observation that is assumed to be evidence of the attributes or properties of some phenomenon (e.g., symptomatology).
Item: single indicator
Index or Scale: composite of multiple items
Measurement process
1. Nominal definition
2. Measurement
3. Operational definition
4. Reconceptualization

theoretical/abstract level <----> concrete/research level
ways of measuring
Verbal reports: "what do you know?"
-answers to questions or responses to statements concerning behaviors, processes, knowledge, or values

Observation
-social scientists measure concepts by directly viewing phenomena

Archival records
-available recorded information such as statistical records, organizational documents, personal letters,
or the mass media
e.g., diaries, psychological evaluations
Levels of Measurement
The rules that define the permissible mathematical operations for the set of numbers used by a measure. In other words, a statistical analysis must be at the same level of measurement as the variable or lower.
nominal measures
Observations are classified into mutually exclusive and exhaustive categories (i.e., an observation cannot fit into two categories).
Numbers in a nominal scale are merely symbols or labels used to identify categories of the nominal variable. Ex., religious affiliation: Christianity, Buddhism, Hinduism.
Ordinal measures
Mutually exclusive and exhaustive categories.
Inherent, fixed order to the variable categories.
Assigned numbers reflect the order of the values, but the differences between values are not determined.
Interval Measures
Mutually exclusive and exhaustive categories.
Inherent order to the categories.
Equal spacing between categories.
Ex., temperature on a Fahrenheit or Celsius thermometer (the zero point is arbitrary)
Ratio Measures
Ratio measures have a meaningful zero point in addition to all the characteristics of interval measures.
Examples: Annual Income and Prison sentence length
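The four levels above (often remembered as NOIR) can be summarized as a small lookup of which mathematical operations each level permits; the examples and structure below are an illustrative sketch, not from the text.

```python
# Sketch of the four levels of measurement (NOIR) and the operations
# each permits; examples are illustrative.
LEVELS = {
    "nominal":  {"example": "religious affiliation", "operations": ["=", "!="]},
    "ordinal":  {"example": "class rank",            "operations": ["=", "!=", "<", ">"]},
    "interval": {"example": "temperature (F)",       "operations": ["=", "!=", "<", ">", "+", "-"]},
    "ratio":    {"example": "annual income",         "operations": ["=", "!=", "<", ">", "+", "-", "*", "/"]},
}

# Each level permits every operation of the levels below it, plus its own:
for name, info in LEVELS.items():
    print(f"{name:>8}: {info['example']:<22} permits {' '.join(info['operations'])}")
```

The nesting makes the "same level or lower" rule concrete: an analysis that needs division (ratio) cannot be applied to ordinal data, but ordinal comparisons remain valid for ratio data.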
Selecting a Level of Measurement
The primary concern is to have an accurate measure of a variable.

Select variables on theoretical grounds and not on the basis of their possible level of measurement.
Discrete Versus Continuous Variables
Discrete variables have a finite number of distinct and separate categories.
Examples: sex, race, household size, number of arrests

Continuous variables can take on an infinite number of values.
Examples: Age, Social class, Income, Grade Point Average
Validity
Validity refers to the accuracy of a measure: Does it accurately measure the variable that it is intended to measure?
Note: Does it measure what we are trying to measure?

*Many studies limit their assessment to content validity, with its heavy reliance on the subjective judgments of individuals and juries; such assessments should be used with caution.
Face Validity
Subjective/weakest demonstration of validity
Assessing whether a logical relationship exists between the variable and the proposed measure.
Seen/Observation
No more than a starting point for more stringent methods for assessing validity.
Content Validity/Sampling Validity
Whether a measuring device covers the full range of meanings or forms included in the variable to be measured.
More extensive assessment of validity than face validity, yet still subjective: more carefully considered judgment than occurs with face validity.
Ex. Jury Opinion
Criterion Validity
(Specific question) Showing a correlation between a measurement device and some other criterion or standard that we know or believe accurately measures the variable under consideration. *Find a criterion variable against which to compare the results of the measuring device (ex., suicide risk and occurrence of destructive behaviors).
-Concurrent validity: compares instrument under evaluation to some already-existing criterion
-Predictive Validity: an instrument predicts some future state of affairs (ex., SAT score with how students will perform in college)
Construct Validity
Relating an instrument to an overall theoretical framework to determine whether the instrument is correlated with all the concepts and propositions that comprise the theory.
-Multitrait-multimethod approach: 1. Two instruments that are valid measures of the same concept should correlate rather highly with each other even though they are different instruments; 2. two instruments, even if similar to each other, should not correlate highly if they measure different concepts.
Reliability
...a measure's ability to yield consistent results each time it is applied.
-Stability is the idea that a reliable measure should not change from one application to the next.
-Equivalence means that all items of an instrument should measure the same thing.
Reliability: Test-Retest
Apply measure at Time 1
Apply same measure at Time 2
Compute correlation
Correlation of 0.80 or better to be considered reliable.
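The test-retest steps above can be sketched in code: correlate the two administrations and compare the coefficient to the 0.80 threshold. The scores and the `pearson_r` helper below are hypothetical illustration material, not from the text.

```python
# Test-retest reliability sketch: correlate scores from two administrations
# of the same measure. Score lists are hypothetical illustration data.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

time1 = [12, 15, 9, 20, 17, 14, 11, 18]   # hypothetical Time 1 scores
time2 = [13, 14, 10, 19, 18, 13, 12, 17]  # hypothetical Time 2 scores

r = pearson_r(time1, time2)
print(f"test-retest r = {r:.2f}, reliable: {r >= 0.80}")
```

The same correlation computation applies to the multiple-forms approach below, with the two forms' scores in place of the two time points.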
Reliability: Multiple Forms
Create two separate but equivalent versions of a scale, made up of different items, e.g. questions.
Give both versions successively to same group.
Test correlation (r=0.80 or better)
Requires only one testing session
Requires no control group to test
Must appear as one long instrument to study group.
Difficult to develop two really equivalent forms
Reliability: Internal Consistency
Administer one complete scale to study group.
Randomly split items into two halves.
Compute correlation between the halves.
Correct for test length reduction.

Cronbach's alpha is equivalent to the average of all possible split-half correlations (corrected for length).
Another approach:
Compute correlation of each item with every other item
Average these correlations
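The internal-consistency procedure above can be sketched as follows: Cronbach's alpha from item and total-score variances, plus the Spearman-Brown formula used to "correct for test length reduction" after splitting. The respondent-by-item scores are hypothetical illustration data.

```python
# Internal-consistency sketch. Hypothetical data: 6 respondents x 4 items.
from statistics import pvariance

scores = [
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [1, 2, 1, 2],
    [4, 4, 4, 3],
    [3, 3, 2, 3],
]

def cronbach_alpha(rows):
    """alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(totals))."""
    k = len(rows[0])
    item_vars = [pvariance([row[i] for row in rows]) for i in range(k)]
    total_var = pvariance([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

def spearman_brown(r_half):
    """Correct a split-half correlation for the full test length."""
    return 2 * r_half / (1 + r_half)

print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
print(f"half-correlation 0.65 corrected = {spearman_brown(0.65):.2f}")
```

The Spearman-Brown step matters because each half is only half as long as the full scale, which deflates the raw half-to-half correlation.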
Measurement with Minority Populations
Unique cultural characteristics and attitudes of minorities may affect measurement.
English as second language.
Figures of speech may have different meaning in different cultures
Steps to improving measurement:
Researchers immerse themselves in the culture
Use key informants
Use double translation (e.g., English to Spanish and back to English, and vice versa)
Test validity and reliability with the population of study
Errors in Measurement
There is no such thing as an exact measurement, especially with human behavior.

Random error is neither consistent nor patterned. Random errors are essentially chance errors that, in the long run, tend to cancel themselves out.

Systematic error is consistent and patterned. Systematic errors do not cancel out. Ex., a confusingly worded question that biases responses in one direction.
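The contrast between the two error types can be simulated. The sketch below uses made-up numbers: zero-mean noise (random error) averages toward the true value over many measurements, while a constant bias (systematic error) persists no matter how many measurements are taken.

```python
# Simulated contrast between random and systematic measurement error.
import random

random.seed(1)
true_value = 50.0
n = 10_000

# Random error: zero-mean noise; tends to cancel out in the long run.
random_err = [true_value + random.gauss(0, 5) for _ in range(n)]

# Systematic error: a constant bias of +3 (e.g., a confusing question that
# pushes every answer upward) on top of the noise; does not cancel out.
systematic = [true_value + 3.0 + random.gauss(0, 5) for _ in range(n)]

print(f"mean with random error only: {sum(random_err) / n:.1f}")
print(f"mean with systematic error:  {sum(systematic) / n:.1f}")
```

Running this shows the first mean near 50 and the second near 53: averaging repairs random error but leaves systematic error untouched, which is why systematic error threatens validity rather than just reliability.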
Improving Validity and Reliability
Develop concepts more extensively.
Improve training of those who will be applying the measuring devices.
Interview the subjects of the research about the measurement devices.
Use a higher level of measurement (NOIR: nominal, ordinal, interval, ratio).
Use more indicators of a variable.
Conduct an item by item analysis.
Choosing a Measurement Device
Consider theoretical relevance to the research.
Emphasize proven reliability and validity.
Opt for higher level of measurement.
Minimize systematic and random error.
Consider feasibility issues.