CH 3, 2, 1
Terms in this set (50)
Measurement is the careful, deliberate observation of the real world for the purpose of describing objects and events in terms of the attributes composing a variable.
Social research is only as good as the instruments one uses to measure elements of the social world.
The mental process whereby fuzzy and imprecise notions (concepts) are made more specific and precise.
Conceptualization produces a specific, agreed-upon meaning for a concept for the purposes of research. (E.g. the concept "prejudice" evokes many different feelings about what it means. How would one measure prejudice?)
Concepts are constructs derived by mutual agreement from mental images.
Conceptions summarize collections of seemingly related observations and experiences.
1. Direct observables (including manifest concepts / variables).
Physical characteristics of a person directly in front of an interviewer (sex, weight, height, eye color).
2. Indirect observables (including latent concepts / variables).
Characteristics of a person as indicated by answers given in a self- administered questionnaire.
3. Constructs (theoretical creations based on observations that cannot be observed directly or indirectly).
Level of self-esteem, as measured by a scale that combines several direct and/or indirect observables.
Conceptualization provides definite meaning to a concept by specifying one or more indicators of what one has in mind.
An indicator is an observation one chooses to consider as a reflection or representation of the variable one wishes to study.
Indicators are proxies of something; they do not represent the exact concept/variable they are associated with. (E.g. one may use the number of religious services one attends over a period of time as an indicator of religiosity)
A dimension is a specifiable aspect of a concept. (E.g. one's religiosity might be specified in terms of the following dimensions):
▪ Devotion
▪ Faith
3 kinds of definitions
1. Real definitions
2. Nominal definitions
3. Operational definitions
Statements about the essential nature of some entity.
Real definitions assume a construct is a real entity (which it is not: it is a proxy).
Since real definitions are often vague they are not useful for the purpose of rigorous inquiry.
Nominal definitions are assigned to a term without any claim that the definition represents a "real" entity.
Nominal definitions are arbitrary.
Nominal definitions represent a consensus or convention about the meaning of something.
Specify precisely how a concept will be measured, (i.e. what operations will be performed).
Operational definitions are nominal, but they attempt to achieve clarity about the meaning of a concept within the context of a study.
The order of conceiving a research question often is as follows:
1. Conceptualization: What are the different meanings and dimensions of the concept "aggression"?
2. Nominal definition: For our study, we will define aggression as representing physical harm; specifically, how often one hits another.
3. Operational definition: We will measure physical harm via responses to the survey question "How many times have you hit someone in the past year?"
4. Real-world measurement: The interviewer will ask, "How many times have you hit someone in the past year?"
Descriptive & Explanatory Research
Descriptive research requires detail and precision in its definitions.
Explanatory research often is less concerned with subtle nuances of a definition, and more with general patterns (so multiple definitions for the same phenomenon might be acceptable).
Conceptualization is the refinement and specification of abstract concepts.
Operationalization is the development of specific research procedures that will result in empirical observations representing those concepts in the real world.
Every variable must have two qualities:
1. Attributes composing a variable must be mutually exclusive.
2. Attributes composing a variable must be exhaustive.
A variable's attributes or values are mutually exclusive if every case can have only one attribute.
A variable's attributes or values are exhaustive when every case can be classified into one of the variable's categories.
Example of a variable whose attributes are NOT exhaustive:
▪ Native American (with no other categories listed, respondents of any other race cannot be classified)
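The two requirements above can be expressed as a quick programmatic check. A minimal Python sketch, using hypothetical respondents and category assignments (not from these notes):

```python
# Minimal sketch (hypothetical data): verifying that a coding scheme's
# attributes are mutually exclusive and exhaustive.

def check_attributes(case_codes):
    """case_codes maps each case to the list of attributes assigned to it.

    Mutually exclusive: every case has at most one attribute.
    Exhaustive: every case has at least one attribute.
    """
    exclusive = all(len(codes) <= 1 for codes in case_codes.values())
    exhaustive = all(len(codes) >= 1 for codes in case_codes.values())
    return exclusive, exhaustive

# Respondent r2 cannot be classified, so the scheme is not exhaustive.
codes = {"r1": ["White"], "r2": [], "r3": ["Native American"]}
print(check_attributes(codes))  # (True, False)
```

If a case matched two categories at once, the same check would flag the scheme as not mutually exclusive.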
Four levels (or scales) of measurement define all variables: nominal, ordinal, interval, and ratio.
Measures have greater use in data analysis as they move from the nominal to the ratio level.
The nominal scale
Nominal level variables (also called categorical variables) represent unordered categories identified only by name.
Nominal measurements only permit one to determine whether two individuals are the same or different.
Examples: religion, race, or countries.
The ordinal scale
Ordinal variables represent an ordered set of categories. Ordinal measurements tell one the direction of difference between two individuals.
Examples: the alphabet, Likert scales, any scale that measures something according to low, medium, and high.
The interval scale
Interval scales represent an ordered series of equal-sized categories.
Interval measurements identify the direction and magnitude of a difference.
The zero point is located arbitrarily on an interval scale.
Examples: Fahrenheit temperature scale, IQ scores, dates (e.g. March 12 or April 2).
The ratio scale
Ratio scale measures are interval scales that contain an absolute zero at one point along the spectrum of the scale (i.e. zero indicates none of the variable).
Ratio measurements identify the direction and magnitude of differences and allow ratio comparisons of measurements.
Examples: income, height, 40 yard dash time.
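The interval-vs-ratio distinction can be checked numerically. A minimal Python sketch (Fahrenheit values chosen only for illustration): because Fahrenheit's zero point is arbitrary, the arithmetic ratio 80/40 = 2 does not mean "twice as hot"; on the absolute Kelvin scale the ratio is only about 1.08.

```python
# Sketch: why ratio comparisons are invalid on an interval scale.
# Fahrenheit's zero is arbitrary, so "80°F is twice as hot as 40°F" is
# false: on the absolute (Kelvin) scale the ratio is nowhere near 2.

def f_to_kelvin(f):
    return (f - 32) * 5 / 9 + 273.15

ratio_f = 80 / 40                            # 2.0 -- misleading
ratio_k = f_to_kelvin(80) / f_to_kelvin(40)
print(round(ratio_k, 2))                     # 1.08: not "twice as hot"
```

Income, by contrast, has a true zero, so saying one income is twice another is meaningful.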
Three elements are important to consider regarding measurement quality:
1. Precision and accuracy
2. Reliability
3. Validity
Precision and Accuracy
Accuracy regards the degree of truth, correctness, or exactness of a variable's attributes.
Precise measures are superior to imprecise ones. Precision is not the same as accuracy.
Reliability refers to the quality of a measurement method that suggests the same data would have been collected each time in repeated observations of the same phenomenon.
Reliability is not the same as accuracy.
The following methods can be used to ensure one has reliable measures:
1. Test-retest method
2. Split-half method
3. Using established measures
4. Having reliable research workers
Reliability; Test-retest method
By using the test-retest method, one makes the same measurement more than once.
If one measures twice and gets the same result, a measurement is more likely to be reliable.
If a second measure reveals different results, the measurement is likely to be unreliable.
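The test-retest idea is usually operationalized by correlating the two administrations. A minimal Python sketch with hypothetical scores (not from these notes); a correlation near 1 suggests a reliable measure:

```python
# Sketch of test-retest reliability: measure the same people twice and
# correlate the two sets of scores (hypothetical data).
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

time1 = [4, 7, 6, 8, 3]   # first administration
time2 = [5, 7, 6, 9, 3]   # second administration
print(round(pearson(time1, time2), 2))  # 0.97 -- highly consistent
```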
Reliability; Split-half method
By using the split-half method, one splits a measure's items into randomly assigned halves, each of which should produce the same classifications.
E.g. the Rosenberg self-esteem scale has 10 items that together measure "self-esteem." If one split the 10 items into two groups of 5, both groups should still represent one's level of self-esteem.
If the result for each group is different, the measure of self-esteem would likely be unreliable.
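A minimal Python sketch of the split-half procedure (hypothetical item scores; the Spearman-Brown step-up formula is the standard correction for halving the test, an assumption added here, not stated in these notes):

```python
# Sketch of split-half reliability: split a 10-item scale into two
# 5-item halves, correlate the half scores, then adjust upward with the
# Spearman-Brown formula (hypothetical data).

def half_scores(items_per_person):
    """Sum even-indexed and odd-indexed items separately per person."""
    a = [sum(row[0::2]) for row in items_per_person]
    b = [sum(row[1::2]) for row in items_per_person]
    return a, b

def spearman_brown(r_half):
    """Full-test reliability estimated from the half-test correlation."""
    return 2 * r_half / (1 + r_half)

# If the two halves correlate at 0.80, the full 10-item scale's
# estimated reliability is about 0.89.
print(round(spearman_brown(0.8), 3))  # 0.889
```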
Reliability; Established measures
Established measures are measures that others have already proved reliable in previous research.
E.g. if one has a unique measure for "prejudice" they can compare their results with established measures of prejudice to be confident their measure is a reliable measure of prejudice.
Reliability; Reliable research workers
One can determine the reliability of measurements and results by checking the reliability of research assistants.
E.g. multiple coders can be used for the same data.
E.g. in a study of 1000 people, a principal investigator could randomly contact 50 of those 1000 and personally interview them. If the P.I.'s research assistants gathered reliable data, answers from those 50 should correspond to the answers from the greater 1000.
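The multiple-coders check can be quantified as simple percent agreement. A minimal Python sketch with hypothetical codes (more rigorous chance-corrected statistics exist, but are beyond these notes):

```python
# Sketch of intercoder reliability: the share of cases on which two
# coders assigned the same code (hypothetical data).

def percent_agreement(coder_a, coder_b):
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

a = ["yes", "no", "yes", "yes", "no"]
b = ["yes", "no", "no", "yes", "no"]
print(percent_agreement(a, b))  # 0.8 -- coders disagree on one case
```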
Validity is a term describing a measure that accurately reflects the concept it is intended to measure.
E.g. a measure of "social class" should not measure "religiosity" instead.
Validity means that one indeed is measuring what they intend.
Four types of validity are important to consider:
1. Face validity
2. Criterion-related validity
3. Construct validity
4. Content validity
Face validity represents whether the quality of an indicator makes it a reasonable measure of some variable.
Face validity means that a measure "makes sense" on the face. It is the lowest level of validity assurance.
E.g. one's voting frequency seems to be a good indicator of community involvement.
Criterion-related validity represents the degree to which a measure relates to some external criterion.
E.g. the validity of SAT tests is based on their ability to predict college success.
College success is the criterion by which the SAT test is determined to be valid.
Construct validity represents the degree to which a measure relates to other variables as expected within a system of theoretical relationships.
E.g. the variable marital satisfaction is likely to correlate with the variable marital fidelity.
By comparing these variables one can better determine whether one has a valid measure.
Content validity represents the degree to which a measure covers the range of meanings included within a concept.
E.g. a measure of mathematical ability does not have content validity if it only includes "addition."
By including "addition, subtraction, division, and multiplication" one ensures the measure of mathematical ability is more valid.
Indexes & Scales
An index is a type of composite measure that summarizes and rank-orders several specific observations; it represents general dimensions.
A scale is a type of composite measure composed of several items that have a logical or empirical structure among them.
Four steps in the construction of an index
1. Item selection
2. Examination of empirical relationships
3. Index scoring
4. Index validation
The first step in creating an index is to select items for a composite index. This is created to measure some variable.
When selecting items, one must consider the following:
a. Face validity
b. Unidimensionality
c. Generality vs. specificity
d. Variance
The first criterion for selecting index items is face validity.
E.g. if one was to create an index of morality, items such as compassion, justice, honesty, and caring would be logical representations of "morality."
These items have face validity: it makes sense that they represent the meanings of morality.
The second criterion for selecting index items is unidimensionality.
A composite measure should only represent one dimension of a concept.
E.g. items representing religious fundamentalism should not be included in a measure of political conservatism.
Generality Vs. Specificity
Depending on the nature of one's desired index, one may select general or specific items.
E.g. an index measuring general aspects of religiosity might include ritual participation, belief, etc.
An index measuring specific aspects of religiosity (such as ritual participation) might select attendance at church, confessions, bar mitzvahs, etc.
In selecting items, one must address the amount of variance they provide.
These two procedures can ensure variance within an item:
1. Select several items that divide people into roughly equal groups on the variable.
2. Select items differing in variance.
Examination of Empirical Relationships
An empirical relationship is established when respondents' answers to one question help one predict how they will answer other questions.
Two types of relationships exist:
1. Bivariate relationships describe relationships between two variables.
2. Multivariate relationships describe relationships between more than two variables.
After choosing the best items for an index, one assigns scores for particular responses, following this two-step process:
1. Determination of the desirable range of the index scores.
▪ There is a tension between achieving a wide range of measurement and retaining an adequate number of cases at each point on the index, so one must determine the desirable range.
Two-step process for assigning index scores (cont.):
2. Determination of whether to give each item in the index equal or different weights.
▪ Standard items should be weighted equally unless there are compelling reasons for differential weighting.
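The equal-vs-differential weighting rule can be sketched in a few lines of Python (hypothetical item scores and weights, not from these notes):

```python
# Sketch of index scoring: items are weighted equally by default,
# with optional differential weights (hypothetical data).

def index_score(responses, weights=None):
    """responses: list of item scores; weights: optional per-item weights."""
    if weights is None:
        weights = [1] * len(responses)  # equal weighting by default
    return sum(r * w for r, w in zip(responses, weights))

print(index_score([1, 0, 1, 1]))                # 3 (equal weights)
print(index_score([1, 0, 1, 1], [2, 1, 1, 1]))  # 4 (first item doubled)
```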
There are two general ways to validate an index:
▪ An item analysis is an assessment of whether each of the items included in a composite measure makes an independent contribution or merely duplicates the contribution of other items in the measure.
Two general ways to validate an index (cont.):
▪ External validation is the process of testing the validity of a measure, such as an index or score, by examining its relationship to other presumed indicators of the same variable.
Index Construction: Handling Missing Data
Missing data can be problematic. One may do the following five things regarding missing data:
1. If there are few cases with missing data, one may decide to exclude them from the construction of the index and analyses.
2. Treat missing data as one of the available responses.
3. Analyze the missing data to interpret their meaning.
4. Assign missing data the middle value, or the mean value.
5. Score cases proportionally, based on the items that were answered.
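Two of these missing-data strategies can be sketched concretely. A minimal Python sketch with hypothetical responses (`None` marks a missing item):

```python
# Sketch of two missing-data strategies for index construction:
# mean imputation and proportional scoring (hypothetical data).

def impute_mean(responses):
    """Replace missing items with the mean of the answered items."""
    answered = [r for r in responses if r is not None]
    mean = sum(answered) / len(answered)
    return [mean if r is None else r for r in responses]

def proportional_score(responses, n_items):
    """Scale the score from answered items up to the full item count."""
    answered = [r for r in responses if r is not None]
    return sum(answered) * n_items / len(answered)

print(impute_mean([1, None, 3]))            # [1, 2.0, 3]
print(proportional_score([1, None, 3], 3))  # 6.0
```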
Indexes may fail to take into account that not all indicators of a variable are equally important.
Scales offer more assurance of ordinality by tapping the intensity structures among indicators.
Types of Scales
The following are common examples of scales used in social research:
1. Bogardus social distance scales
2. Thurstone scales
3. Likert scales
4. Semantic differential scales
Bogardus Social Distance Scales
The Bogardus social distance scale (BSDS) is a measurement technique for determining the willingness of people to participate in social relations - of varying degrees of closeness - with other kinds of people.
Differences in intensity suggest a structure among items.
Logically, once a person changes their answer on a BSDS scale, their answers will remain changed (i.e. when one changes from "yes" to "no" they will not answer "yes" to any additional items on the BSDS).
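That cumulative answer pattern can be checked programmatically. A minimal Python sketch, assuming hypothetical yes/no answers ordered from least to most intimate relation (not data from these notes):

```python
# Sketch: a Bogardus-style response pattern is consistent if, once a
# respondent answers "no," every more intimate item is also "no" --
# i.e., all "yes" answers precede all "no" answers (hypothetical data).

def is_consistent(answers):
    """answers: booleans ordered from least to most intimate item."""
    first_no = answers.index(False) if False in answers else len(answers)
    return all(not a for a in answers[first_no:])

print(is_consistent([True, True, False, False]))  # True: clean cutoff
print(is_consistent([True, False, True, False]))  # False: inconsistent
```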
A Thurstone scale is a type of composite measure constructed in accord with the weights assigned by "judges" to various indicators of some variable.
E.g. one could determine the strength of 5 indicators of "aggression" by having many people rank the indicators from 1 to 5 in terms of which indicator they feel is the best or worst indicator of aggression.
Thurstone scaling is rare because creating such scales is labor-intensive.