Applied Social Research

Book by Monette, Sullivan, & DeJong [8th edition]; Social Work 571 @ OSU

Chapter One: Research in the Human Services

Pages 2-19

Human Services

Refers to activities w/ the primary goal of enhancing the relationship between people & societal institutions so that people may maximize their potential and alleviate distress

Department of Human Services

Designates the agency that administers financial assistance and intervention programs at the state level

Linkages between research & human services practice!

For instance, research can contribute to the goals of practice, just as practice can contribute to the goals of research

Two reasons to explain why scientific research & human services are NOT totally distinct enterprises

1) Strong parallels b/n the conduct of research & the conduct of practice = similar structures, processes, techniques, observations
2) Properly conducted practice intervention can provide scientifically valid knowledge about human behavior & the effectiveness of intervention

Social Research

The systematic examination (or reexamination) of empirical data, collected by someone firsthand, concerning the social or psychological forces operating in a situation. Three elements:
1) Systematic = aspects of the research process are carefully planned in advance -no casualness!
2) Empirical Data = info/facts about the world that are based on sensory experiences
3) Social & Psychological Factors = the forces that affect human behavior, as distinguished from biological, physiological, nutritional, etc. factors

Descriptive Research

Has as its goal DESCRIPTION, or the attempt to discover facts or describe reality.
Ex) what are ppl's attitudes toward welfare?

Predictive Research

Focuses on PREDICTION, or making projections about what may occur in the future or in other settings.

Explanatory Research

Involves explanation, or determining why or how something occurred. Focuses on WHY

Evaluation Research

Focuses on evaluation, or the use of scientific research methods to plan intervention programs, to monitor the implementation of new programs and the operation of existing ones, and to determine how effectively programs/practices achieve their goals; can also uncover unintended consequences

Basic Research

Its purpose is to advance knowledge about human behavior w/ little concern for any immediate, practical benefits

Applied research

Research designed w/ a practical outcome in mind and w/ the assumption that some group or society as a whole will gain specific benefits from it

Five focal areas where research is applied in human services

1) Behavior & Social Environments
2) Needs Assessment
3) Assessment of Client Functioning
4) Program Evaluation
5) Practice Effectiveness Evaluation

Evidence-based Practice

The conscientious, explicit, & judicious use of the best evidence in making decisions about human service assessment & intervention; "best evidence" is knowledge that has been gained via the scientific research process

Special Issues: Research on Minorities & Women

-minorities tend to suffer disproportionately from the problems that human service workers attempt to alleviate
-many social conditions affect minorities negatively & limit their opportunities & achievements

Steps in Conducting Research

1) Problem Formulation = decide on what the problem is -shape a concern into a specific researchable question
2) Research design development = detailed plan
3) Data collection = pretests/pilot studies
4) Data Analysis = ideas confirmed/refuted by empirical data
5) Drawing Conclusions = from data analysis
6) Public Dissemination of Results

Research Design

A detailed plan outlining how observations will be made; the plan is followed by the researcher as the project is carried out & addresses key issues: who is studied? how are they selected? what info will be gathered?

Pretest

A preliminary application of the data-gathering technique to determine its adequacy; a trial run

Pilot study

A small-scale trial run of all the procedures planned for use in the main study; may include such things as a test of the procedures for selecting the sample & an application of the statistical procedures to be used in the data-analysis stage; improves the validity of the data collected & bolsters confidence in the conclusions drawn

Steps in Practice Intervention

1) Problem Assessment
2) Formulation of an Intervention Strategy
3) Implementation
4) Evaluation
5) Closure
6) Documentation & Dissemination

Survey Research

A widely used data collection technique based on obtaining people's responses to questions; a broad research strategy that involves asking questions of a sample of people, in a fairly short period of time, & testing hypotheses or describing a situation based on their answers

Available Data

Data collected by someone other than the investigator, for purposes that differ from the investigator's, but that are available to be analyzed; can be helpful to human services researchers. Ex) records kept by prisons, hospitals, etc.

Field Research

A type of qualitative research that involves observations of people in their natural settings as they go about their everyday lives

Observational Methods

the collection of data through direct visual or auditory experience of behavior

Experiments

controlled methods of observation in which the value of one or more independent variables is changed to assess its causal effect on one or more dependent variables

Single-system designs/research

quasi-experimental designs featuring continuous or nearly continuous measurement of the dependent variable on a single research subject over a time interval that is divided into a baseline phase and one or more additional phases during which the independent variable is manipulated; experimental effects are inferred by comparing the subject's responses across baseline and intervention phases

data analysis

The process of placing observations in numerical form and manipulating them according to their arithmetic properties to derive meaning from them

Chapter Two: The Logic of Social Research

Pages 20-48

Traditional Knowledge

is knowledge based on custom, habit, & repetition

Experiential Knowledge

knowledge gained through firsthand observation of events, based on the assumption that truth can be achieved through personal experience and that witnessing will lead to an accurate comprehension of those events; limitations = human perception is unreliable, affected by many factors, distorted by inferences, & ppl experience things differently

Common sense

practical judgments based on the experiences, wisdom, and prejudices of a people --> people are presumed to make sound decisions even w/o specialized training & knowledge

Journalism

...

Science

a method of obtaining objective knowledge about the world through systematic observation (also, accumulated knowledge that results from scientific inquiry)

Science has 5 distinct characteristics:

1) empirical = science is based on direct observation of the world
2) systematic = procedures used by scientists are organized, methodical, public, & recognized by other scientists
3) Search for causes = scientists assume there is order in the universe & reasons for the occurrence of all events; the world is orderly by nature
4) Provisional = scientific conclusions are always accepted as tentative & subject to question & possible refutation
5) Objectivity = scientists try to avoid having their personal biases & values influence their scientific conclusions

Theory

A set of interrelated abstract propositions or statements that offers an explanation of some phenomenon; three key elements: propositions; abstract/deductive systems; explanations

Propositions

statements about the relationship b/n some elements in the theory

Abstract Systems

Deductive Systems

Abstract = they link general and abstract propositions to particular, testable events or phenomena
Deductive = a general set of propositions that can be used to deduce further, more concrete relationships b/n the elements of the theory

Three Major Functions of Theory in research & practice

1) explanation of phenomena
2) guide for research & practice
3) integration of multiple observations

verification

the process of subjecting hypotheses to empirical tests to determine whether a theory is supported or refuted

concepts

mental constructs or images developed to symbolize ideas, persons, things, or events; building blocks of theories

nominal definitions

verbal definitions in which scientists agree that one set of words or symbols will be used to stand for another set of words or symbols

operational definitions

definitions that indicate the precise procedures or operations to be followed in measuring a concept

Measurement

the process of moving from the nominal to the operational level

Hypotheses

Testable statements of presumed relationships b/n 2 or more concepts

Variables:
Independent Variable:
Dependent Variable:

=things that are capable of taking on more than one value
=presumed active or causal variable; the one believed to be producing changes in the dependent variable
=the passive variable, the one that is affected

Deductive Reasoning

Involves deducing or inferring a conclusion from some premises or propositions

Inductive Reasoning

Involves inferring something about a whole group or class of objects from our knowledge of one or a few members of that group or class

Nomothetic Explanations

Focus on a class of all events & attempt to specify the conditions that seem common to all those events; probabilistic in nature

Idiographic Explanations

Focus on a single person, event, or situation, & attempt to specify all the conditions that helped produce it; deterministic in nature; fault is limited generalizability

Paradigms

general ways of thinking about how the world works and how we gain knowledge about the world; fundamental orientations, perspectives, or world views that often are not questioned or subjected to empirical tests

Positivism

the perspective that the world exists independently of people's perceptions of it and that science uses objective techniques to discover what exists in the world

quantitative research

involves measurement of phenomena using numbers and counts; tends to rely on deductive reasoning, nomothetic explanations, experimental designs, and survey research

qualitative research

involves data in the form of words, pictures, descriptions, or narratives rather than numbers and counts; tends to rely on inductive or idiographic explanations and field observations when these are appropriate to a research question

Interpretive Approach (nonpositivist)

Also known as interactionist/verstehen approaches: perceive social reality as having a subjective component and as arising out of the creation & exchange of social meanings during the process of social interaction; argue that the objective, quantitative approaches of positivism miss a very important part of the human experience

Verstehen

the effort to view and understand a situation from the perspective of the people who are actually in that situation

Causality

the situation where an independent variable is the factor--or one of several factors--that produces variation in a dependent variable

Chapter Three: Ethical Issues in Social Research

Pages 49-102

Ethics

The study of what is proper and improper behavior, of moral duty and obligation; the responsibilities that researchers bear toward those who participate in research, those who sponsor research, and those who are potential beneficiaries of research

Basic ethical issues

informed consent, deception, privacy (including confidentiality & anonymity), physical or mental distress, problems in sponsored research, scientific misconduct/fraud, & scientific advocacy

Informed consent

refers to telling potential research participants about all aspects of the research which might reasonably influence the decision to participate

consent form

describes the elements of the research that might influence a person's decision to participate

ethical absolutists

argue that people should be fully informed of all aspects of the research in which they might play a part

Privacy

refers to the ability to control when and under what conditions others will have access to your beliefs, values, or behavior

Anonymity

means that no one, including the researcher, can link any data to a particular respondent

confidentiality

Ensuring that information or responses will not be publicly linked to specific individuals who participate in research

Misconduct

a broader concept that includes not only fraud but also carelessness or bias in recording and reporting data

control group

the subjects in an experiment who are not exposed to the experimental stimulus

fraud (scientific)

the deliberate falsification, misrepresentation, or plagiarizing of data, findings, or the ideas of others

conceptual development

identifying & properly defining the concepts on which the study will focus

Units of analysis

the specific objects or elements whose characteristics we wish to describe or explain and about which we will collect data

ecological fallacy

inferring something about individuals from data collected about groups

reductionist fallacy

inferring something about groups, or other higher levels of analysis, based on data collected from individuals

Units of Analysis

Individuals, Groups, Organizations, Programs, Social Artifacts

Reactivity

refers to the fact that people can react to being studied and may behave differently than they would if they didn't think they were being studied

Qualitative Research

involves data in the form of words, pictures, descriptions, or narratives

Quantitative research

uses numbers, counts, and measures of things

Cross-sectional research

research based on data collected at one point in time

Longitudinal research

research based on data gathered over an extended period of time

panel study

research in which data are gathered from the same people at different times

trend study

research in which data are gathered from different people at different times

Chapter Five: The Process of Measurement

Page 104

measurement

refers to the process of describing abstract concepts in terms of specific indicators by assigning numbers or other symbols to these indicants in accordance with rules

indicator

is an observation that we assume is evidence of the attributes or properties of some phenomenon

Item

A single indicator of a variable, such as an answer to a question or an observation of some behavior or characteristic

Index

A measurement technique that combines a number of items into a composite score

Scale

A measurement technique, similar to an index, that combines a number of items into a composite score

Measurement Techniques

1) Verbal Reports = ppl answering questions, being interviewed, or responding to verbal statements
2) Observation = directly watch ppl at school, work, or other settings & make note of what they say & do
3) Archival records = review available recorded information

X = T + E

-X represents our observation or measurement of some phenomenon; it is our indicator
-T represents the true, actual phenomenon that we are attempting to measure with X; it would be what a student actually learned in a social research class or what his or her true self-esteem is
-E represents any measurement error that occurs or anything that influences X other than T
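
A minimal sketch (Python; not from the text, all values hypothetical) of the X = T + E logic: because random error E falls above T about as often as below it, the average of many observations X converges on the true value T.

import random

# Hypothetical true score T (e.g., what a student actually learned).
T = 70.0

def observe(true_score):
    """Return one measurement X = T + E, with random error E centered on zero."""
    E = random.gauss(0, 5)  # the error term
    return true_score + E

observations = [observe(T) for _ in range(1000)]
mean_X = sum(observations) / len(observations)
print(f"true score T = {T}, mean of observed X = {mean_X:.1f}")
# With purely random error, the mean of X approaches T.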

Levels of measurement

rules that define permissible mathematical operations on a given set of numbers produced by a measure; there are four: nominal, ordinal, interval, ratio

Nominal measures [naming, categorizing]

classify observations into mutually exclusive and exhaustive categories, but w/ no ordering to the categories; they represent nominal variables at the theoretical level

Ordinal measures [ranking]

They are of a higher level than nominal measures, b/c in addition to having mutually exclusive and exhaustive categories, the categories have a fixed order; however, the spacing between categories is not necessarily equal

Interval measures

They share the characteristics of ordinal scales--mutually exclusive & exhaustive categories & an inherent order--but have equal spacing b/n the categories

Ratio Measures

They have all the characteristics of interval measures, but the zero point is absolute and meaningful rather than arbitrary. Has a true zero, is mutually exclusive & exhaustive, possesses a fixed order, & has equal spacing between categories!

Discrete Variables

Variables with a finite number of distinct and separate values; essentially, they can be measured only in whole numbers. Ex) household size

Continuous Variables

Variables that, theoretically, have an infinite number of values. Ex) age is continuous b/c it can be measured in infinitely fine gradations

Validity

The degree to which a measure accurately reflects the theoretical meaning of the variable

Face Validity

The degree to which a logical relationship exists between the variable and the proposed measure.

Content Validity

The extent to which a measuring device covers the full range of meanings or forms included in a variable being measured. More extensive assessment of validity! Involves two distinct steps: determining the full range or domain of the content of a variable & determining whether all those domains are represented among the items that constitute the measuring device. No agreed-on criteria determine whether a measure has content validity.

Sampling Validity

An approach to establishing validity of measures through determining whether a measuring device covers the full range of meanings that should be included in the variable being measured.

Jury Opinion

An extension of face or content validity that uses the opinions of other investigators, especially those who are knowledgeable about the variables involved, to assess whether particular operational definitions are logical measures of a variable.

Criterion Validity

A technique for establishing the validity of measures that involves demonstrating a correlation between the measure and some other standard. Key is to find the criterion variable against which to compare the results of the measuring device.

Concurrent Validity

A type of criterion validity in which the results of a newly developed measure are correlated with the results of an existing measure. Basically, it compares the instrument under evaluation to some already existing criterion such as the results of another measuring device; weakness is the validity of the existing measure that is used for comparison.

Predictive Validity

A type of criterion validity wherein scores on a measure are used to predict some future state of affairs or certain future events!

Construct Validity

A complex approach to establishing the validity of measures that involves relating the measure to a complete theoretical framework, including all the concepts and propositions that the theory comprises

Multitrait-Multimethod Approach

A particularly complex form of construct validity involving the simultaneous assessment of numerous measures and concepts through the computation of intercorrelations; based on 2 ideas = two instruments that are valid measures of the same concept should correlate rather highly with each other even though they are different measures; two instruments, even if similar to each other, should not correlate highly if they measure different concepts

Reliability

Refers to a measure's ability to yield consistent results each time it is applied

Test-Retest

The first and most generally applicable assessment of reliability; the technique involves applying a measure to a sample of people and then, somewhat later, applying the same measure to the same people again. Adv: you can use it w/ many measures; Disadv: slow & cumbersome; can't be used on measures of variables whose value might have changed during the interval between tests

Multiple-Testing Effects

A problem that exists when exposing people to the same measure twice; the problem occurs b/c when you do this, people may not react the same way the second time

Multiple Forms

Create two separate but equivalent versions of a measure, made up of different items, such as different questions; multiple forms have the advantages of requiring only one testing session and of needing no control group; disadvantages: the difficulty of developing two measures w/ different items that really are equivalent, & ppl may realize they are answering essentially the same questions twice & alter their responses

Internal Consistency Approaches

Approaches to reliability that use a single scale, administered to one group of people, to develop an estimate of reliability

Split-half Approach

The test group responds to the complete measuring instrument; then the responses are randomly divided into halves, treating each half as though it were a separate scale; correlate the two halves by using an appropriate measure of association, then estimate the full-scale reliability with r = 2r_i / (1 + r_i), where r_i is the correlation between the two halves
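
A small sketch of the split-half calculation in Python (made-up respondents and half-scores, not from the text): correlate the two halves, then apply the correction formula above to estimate the reliability of the full-length scale.

from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Each pair is one respondent's totals on the two random halves of a scale.
half_a = [18, 22, 15, 25, 20, 17, 23, 19]
half_b = [17, 24, 14, 23, 21, 16, 22, 20]

r_i = pearson(half_a, half_b)  # correlation between the halves
r = (2 * r_i) / (1 + r_i)      # corrected full-scale reliability
print(f"half-scale r_i = {r_i:.2f}, full-scale reliability r = {r:.2f}")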

Measurement w/ Minority Populations

1) Researchers need to IMMERSE themselves in the minority culture being studied
2) Researchers should use key informants!
3) Double translation to check for errors/inconsistencies
4) Test instruments for validity/reliability on the population they intend to study

Errors in Measurement

Two basic types of error: random & systematic!
E = R + S, where R = random error & S = systematic error
The measurement formula becomes X = T + R + S

Random Errors

Errors that are neither consistent nor patterned; the error is as likely to be in one direction as in another; chance errors that in the long run tend to cancel themselves out. The major problem is that random error weakens the precision with which a researcher can measure variables

Systematic Errors

Errors that are consistent and patterned; they may NOT cancel themselves out, and they are more troublesome to researchers because they are more likely to lead to false conclusions
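
A short simulation (Python; all values hypothetical) contrasting the two error types in X = T + R + S: the random component R averages out across many observations, while the systematic component S remains as a constant bias.

import random

T = 100.0  # hypothetical true value
S = 8.0    # systematic error, e.g., an instrument that always reads high

# Each observation X = T + R + S, with random error R centered on zero.
observations = [T + random.gauss(0, 4) + S for _ in range(10000)]
mean_X = sum(observations) / len(observations)

print(f"true value T = {T}, mean of observed X = {mean_X:.1f}")
# The random errors roughly cancel, but the mean stays ~8 units above T:
# systematic error biases conclusions rather than just reducing precision.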

Improving Validity & Reliability

1) More extensive conceptual development
2) Better training of those who will apply the measuring devices
3) Interview the subjects of the research about the measurement device
4) Higher level of measurement
5) Use more indicators of a variable
6) Conduct an item-by-item assessment of multiple-item measures

Chapter 13: Scaling

Pages 349-374

Scale

A measurement technique, similar to an index, that combines a number of items into a composite score; Advantages: improved validity, improved reliability, increased level of measurement, & increased efficiency in data handling

Partially Ordered Data

A term that refers to data with a few ordered categories but with many cases tied for each category

Developing Scales

1) Develop or locate many potential scale items
2) Eliminate items that are redundant, ambiguous, or, for some other reason, inappropriate for the scale
3) Pretest the remaining items for validity, reliability, & other measurement checks
4) Eliminate items that do not pass the tests of step 3
5) Repeat steps 3 & 4 as often as necessary to reduce the scale to the number of items required

Sources of scale Items

1) researcher's own imagination
2) judges
3) people who are the focus of the research project

Characteristics of Scale Items

Validity, Range of variation, Unidimensionality

Unidimensional Scale / Multidimensional Scale

=A multiple-item scale that measures one, and only one, variable
=A scale designed to measure a complex variable composed of more than one dimension

Likert Scales

A measurement scale consisting of a series of statements, each followed by a number of ordered response alternatives, such as "strongly agree," "agree," "disagree," or "strongly disagree"; 5 is the most common number of alternatives; one example of a summated rating scale

Summated Rating Scale

A scale in which a respondent's score is determined by summing his or her responses to the individual items
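
A tiny scoring sketch in Python (hypothetical items and coding, not from the text): responses are coded 1-5 and summed, with a negatively worded item reverse-coded first so that a high total consistently means a high standing on the variable.

# One respondent's answers, coded 1 (strongly disagree) to 5 (strongly agree).
responses = {
    "item1": 5,
    "item2": 4,
    "item3": 2,  # negatively worded item
}
reverse_coded = {"item3"}  # items to flip before summing

score = sum(
    (6 - value) if item in reverse_coded else value  # 6 - v flips a 1-5 code
    for item, value in responses.items()
)
print(f"summated rating score: {score}")  # 5 + 4 + (6 - 2) = 13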

Nondiscriminating items

Those that are responded to in a similar fashion by both people who score high and people who score low on the overall scale

Discriminatory Power Score

A value calculated during construction of a Likert scale that indicates the degree to which each item discriminates between high scorers and low scorers on the entire scale; the first step in obtaining DP scores is to calculate the total score of each respondent & then rank those scores from highest to lowest; the upper quartile is the cutoff point in a distribution above which the highest 25% of the scores are located, & the lower quartile is the cutoff point below which the lowest 25% of the scores are located; next we compute the weighted mean (average) for each group by dividing the weighted total by the number of cases in the quartile; then we obtain the DP score for an item by subtracting the mean of those below the LQ from the mean of those above the UQ
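
A simplified DP calculation in Python (made-up scores; uses plain means over the quartile members rather than the weighted-total layout, which gives the same result): rank respondents by total scale score, then compare the item means of the top and bottom quartiles.

# Each pair: (respondent's total scale score, response to the item in question).
respondents = [
    (48, 5), (45, 5), (44, 4), (40, 4),
    (33, 3), (30, 3), (22, 2), (20, 1),
]
respondents.sort(key=lambda r: r[0], reverse=True)  # rank highest to lowest

q = len(respondents) // 4                        # quartile size (2 of 8 here)
top = [item for _, item in respondents[:q]]      # above the upper quartile
bottom = [item for _, item in respondents[-q:]]  # below the lower quartile

dp = sum(top) / len(top) - sum(bottom) / len(bottom)
print(f"DP score for this item: {dp:.2f}")  # 5.0 - 1.5 = 3.50
# A high DP means the item separates high scorers from low scorers.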

Thurstone Scales

Measurement scales consisting of a series of items with a predetermined scale value to which respondents indicate their agreement or disagreement.

Semantic Differential

A scaling technique that presents the respondent with a stimulus, such as a person or event, to be rated on a scale between a series of polar-opposite adjectives; can be designed to measure 3 dimensions: evaluation, potency, & activity

Guttman Scales

A measurement scale in which the items have a fixed, progressive order and that has the characteristic of reproducibility; the procedures used in constructing the scale help ensure that the resulting scale will truly be unidimensional (progressive order); difficult to construct

Scale Discrimination Technique

The procedure for selecting items for a Guttman scale

Response Bias

Responses to questions that are shaped by factors other than the person's true feelings, intentions, and beliefs

Response Pattern Anxiety

The tendency for people to become anxious if they have to repeat the same response over and over, & to change their responses to avoid doing so

Social Desirability Effect

People's tendency to give socially desirable, popular answers to questions to present themselves in a good light

Response Set

one source of response bias; the tendency for people to agree or disagree with statements regardless of the content of the statements
