Social Work Research Methods - Test #2
Terms in this set (98)
The positivist way of thinking strives toward:
• Reducing uncertainty
• The use of standardized procedures.
What does it try to study? Pg. 96
Tries to study only those things that can be objectively measured.
The Positivistic Research Approach
• One objective reality
• Seeks to be objective
• Reality unchanged
• Researcher puts aside own values
• Social and physical sciences are a unity
• Passive roles for research subjects
• Many research subjects involved
• Data obtained through observations and measurements
• Data are quantitative in nature
• Deductive logic applied
• Causal information obtained
• Seeks to explain or predict
• Tests hypotheses
• High generalizability of findings
The Interpretive Research Approach
• Many subjective realities
• Admittedly subjective
• Reality changed
• Researcher recognizes own values
• Active roles for research participants
• Few research participants involved.
• Data obtained through observations and asking questions
• Data are qualitative in nature.
• Inductive logic applied
• Descriptive information obtained
• Seeks to understand
• Produces hypotheses
• Researcher is measuring instrument
• Limited generalizability of findings
The textbook argues that according to the positivist way of thinking, "objectivity is
largely a matter of agreement." What does this mean?
• That as more people agree on what they have observed, the less likely it becomes that the observation was distorted by bias, and the more likely it is that the agreement reached is "objectively true."
• That there are some things, usually physical phenomena, about which most people agree. E.g. objects fall when dropped, water turns to steam at a certain temperature, seawater contains salt. Pg. 99
What are the values of a variable?
How are they labeled?
• Labels, which do nothing more than describe a variable via its different categories. Pg. 107
• According to their different attributes. Pg. 106
Independent variable: The variable that does the affecting; symbolized by x.
Dependent variable: The variable that is affected; symbolized by y. Pg. 110
What kind of variables are not independent or dependent variables?
Variables that are not associated in any way. Pg. 111
Extraneous variables:
Can be defined as any variable other than the independent variable that could cause a change in the dependent variable. In our study we might realize that age could play a role in our outcome, as could family history, education of parents, partner interest in the class topic, time of day, preference for the instructor's teaching style, or personality. The list, unfortunately, could be quite long and must be dealt with in order to increase the probability of reaching valid and reliable results.
Intervening variables, like extraneous variables, can alter the results of our research. These variables, however, are much more difficult to control for. Intervening variables include motivation, tiredness, boredom, and any other factor that arises during the course of research. For example, if one group becomes more bored with their role in the research than the other group, the results may have less to do with our independent variable and more to do with the boredom of our subjects.
Directional hypothesis: Specifically indicates the predicted direction of the relationship between two variables.
1. Ethnic majorities see hospital social workers more than ethnic minorities.
2. Ethnic majorities are referred to the hospital's social service department more than ethnic minorities.
3. Ethnic majorities follow up with social service referrals more than ethnic minorities.
4. Ethnic minorities are more intimidated with the referral process than ethnic majorities.
Nondirectional hypothesis: A statement that says only that you expect to find a relationship between two or more variables, without predicting its direction.
1. Ethnic minorities and ethnic majorities see hospital social workers differentially.
2. Ethnic minorities and ethnic majorities are referred to the hospital's social service department differentially.
3. Ethnic minorities and ethnic majorities vary to the degree they follow up on referrals.
4. Ethnic minorities and ethnic majorities differ in how intimidated they felt about the referral process.
What are the criteria for a good hypothesis? Pg. 113
What does it mean to operationalize a variable?
Defining the variable in such a way that it can be measured. Pg. 401
Why is this done? Be able to recognize operationalized variables.
So that the variable can be measured.
Ethnic minority and ethnic majority
Descriptive statistics:
• Statistics that describe your study's sample or population.
• You can easily describe your research participants in relation to their ethnicity by stating how many of them fell into each category label of the variable.
o Value Label (attributes)
o Ethnic Minority......40%
o Ethnic Majority......60%
• Other descriptive information about your research participants could include variables such as average age, percentages of males and females, average income, and so on. Pg. 117-118
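The percentage breakdown in the table above can be reproduced in a few lines of Python. The participant list below is hypothetical, sized only to match the 40%/60% example:

```python
# Descriptive statistics: counts and percentages for one nominal variable.
# The participant list is hypothetical, built to match the 40%/60% example.
from collections import Counter

participants = ["Ethnic Minority"] * 40 + ["Ethnic Majority"] * 60

counts = Counter(participants)                 # frequency of each value label
total = len(participants)
percentages = {label: 100 * n / total for label, n in counts.items()}

for label, pct in percentages.items():
    print(f"{label}......{pct:.0f}%")
```

Run on this sample, it prints the same two rows as the value-label table: 40% minority, 60% majority.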
Inferential statistics:
• Determine the probability that a relationship between the two variables within your sample also exists within the population from which it was drawn.
• Permit you to say whether or not the relationship detected in your study's sample exists in the larger population from which it was drawn, and the exact probability that your finding is in error.
o A statistically significant relationship between your samples (participants) ethnicity and whether they successfully accessed social services within your hospital setting.
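One concrete way to see what "the exact probability that your finding is in error" means is a permutation test. Everything below (the group sizes, the 0/1 service-access outcomes) is hypothetical, and this is only one of several significance-testing procedures:

```python
# Hedged sketch: a permutation test estimates how often a group difference at
# least as large as the observed one would arise by chance alone.
# All data here are hypothetical.
import random

random.seed(0)

accessed = {
    "minority": [1, 0, 0, 1, 0, 0, 1, 0],   # 1 = accessed social services
    "majority": [1, 1, 0, 1, 1, 1, 0, 1],
}

def diff(groups):
    a, b = groups["minority"], groups["majority"]
    return sum(b) / len(b) - sum(a) / len(a)

observed = diff(accessed)                     # difference in access rates
pooled = accessed["minority"] + accessed["majority"]

# Reshuffle group labels many times; count how often a difference at least
# as extreme as the observed one appears.
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    shuffled = {"minority": pooled[:8], "majority": pooled[8:]}
    if abs(diff(shuffled)) >= abs(observed):
        extreme += 1

p_value = extreme / trials   # small p = relationship unlikely to be chance
print(observed, p_value)
```

A small p-value would support a statistically significant relationship between ethnicity and service access; with this tiny hypothetical sample the p-value is large, illustrating why sample size matters.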
What are the differences between qualitative and quantitative data? Pg. 131
1. Says the only real way to find out about the subjective reality of our research participants is to ask them.
2. The answer will come back in words (text), not numbers.
1. Produces data in the form of numbers.
What is studied in each? Pg. 131
1. Explores and seeks to understand the meaning individuals or groups ascribe to a social or human problem.
2. Involves emerging questions and procedures, data typically collected in the participant's setting, data analysis inductively building from particulars to general themes, and the researcher making interpretations of the meaning of the data.
1. A means for testing objective theories by examining the relationship among variables.
2. Variables can be measured, typically on measuring instruments, so that numbered data can be analyzed using statistical procedures.
3. The final written report has a set structure consisting of introduction, literature and theory, methods, results, and discussion.
4. Those who engage in this form of inquiry have assumptions about testing theories deductively, building in protections against bias, controlling for alternative explanations, and being able to generalize and replicate the findings.
Be able to recognize examples? Pg. 131
Quantitative data: Pieces of evidence in the form of numbers.
Qualitative data: Pieces of evidence in the form of words.
Data:
1. Something that is given, either from a quantitative observation and/or measurement or from a qualitative discussion with "Ms. Smith" about her experiences in giving birth at her home.
2. Are pieces of evidence, in the form of words (qualitative data) or numbers (quantitative data), that you put together to give you information, which is what the research method is all about.
Information:
1. Something you hope to get from the data once you have analyzed them - whether they are numbers or words.
2. How you interpret the facts.
3. The subjective interpretation of objective facts.
One reality vs. many. What does that mean? Pg. 132
The belief that there is not only one reality, but many realities, which are shaped by individuals' different perspectives, beliefs, and traditions.
What is a research participant? Pg. 133
• A very important data source.
• The quantitative approach tends to relegate the research participant to the status of an object or subject. In the study of caesarian births at a hospital during a certain period, for example, Ms. Smith will not be viewed as an individual within the quantitative approach to knowledge development, but only as the seventeenth woman who experienced such a birth during that period. Details of her medical history may be gathered without any reference to Ms. Smith as a separate person with her own hopes and fears, failings and strengths.
• Conversely, a qualitative approach to caesarian births will focus on Ms. Smith's individual experiences. What was her experience? What did it mean to her? How did she interpret it in the context of her own reality?
• Subjects, if we were doing an experiment, or respondents, if we were doing a survey.
• A less objective way of referring to a client. Personalizes a client rather than making the client an object of the research.
• Indicates that you are more attentive to the effects of your study than when simply objectifying those people.
What are the five characteristics that most qualitative research studies have in common? Pg. 134
1. Research studies that are conducted primarily in the natural settings where the research participants carry out their daily business in a "non-research" atmosphere.
2. Research studies where variables cannot be controlled and experimentally manipulated (though changes in variables and their effects on other variables can certainly be observed).
3. Research studies in which the questions to be asked are not always completely conceptualized and operationally defined at the outset (though they can be).
4. Research studies in which data collected are heavily influenced by the experiences and priorities of the research participants, rather than being collected by predetermined and/or highly structured and/or standardized measurement instruments.
5. Research studies in which meanings are drawn from the data (and presented to others) using processes that are more natural and familiar than those used in the quantitative method. The data need not be reduced to numbers and statistically analyzed (though counting statistics can be employed if they are thought useful).
How are hypotheses used in qualitative studies? Pg. 136
To refine your research question even further.
Case studies are commonly used in what kind of research? Pg. 140
Qualitative research.
What are good data collection methods for qualitative research? Pg. 141
1. Make every effort to be aware of your own biases. Your own notes on reactions and biases to what you are studying are used as sources of data later on, when you interpret the data.
2. Data collection is a two-way street. Research participants tell you their stories, and, in turn, you tell them your understanding or interpretation of their stories. It is a process of checks and balances.
3. Typically involves multiple data sources and multiple data collection methods. You may see clients, line-level social workers, and supervisors as potential data sources. You may collect data from each of these groups using interviews, observation, and existing documentation.
What is grounded theory? Pg. 138
• A specific qualitative strategy of inquiry in which questions may be directed toward generating a theory of some process, such as the exploration of the process of how caregivers and patients interact in a hospital setting.
• In a qualitative case study, the questions may address a description of the case and the themes that emerge from studying it.
What is ethnography? Pg. 138
• A specific qualitative strategy of inquiry in which questions would include a mini-tour of the culture-sharing group and their experiences, use of native language, and contrasts with other cultural groups as well as questions to verify the accuracy of the data.
• Questions may build on a body of existing literature.
• Questions become working guidelines rather than truths to be proven.
What is phenomenology? Pg. 138
• A specific qualitative strategy of inquiry in which questions might be broadly stated without specific reference to the existing literature or typology of questions.
• Questions might ask what the participants experienced and the contexts or situation in which they experienced it.
• E.g. What is it like for a mother to live with a teenage child who is dying of cancer?
What is the principal data collection instrument in qualitative research? Pg. 141
The researcher himself or herself.
What is the objective of analyzing data in qualitative studies? Pg. 142
• To interpret data in such a way that the true expressions of your research participants are revealed.
• To "walk the walk" and "talk the talk" of your research participants and not to impose "outside" meaning to the data they provided.
What are the major differences between the quantitative and the qualitative approaches in terms of perceptions of reality, ways of knowing, value bases, and applications? Pg. 142-143
1. Perceptions of reality:
Quantitative: Ethnic minorities share similar experiences within the public social service system. These experiences can be described objectively; that is, a single reality exists outside any one person.
Qualitative: Individual and ethnic group experiences within the public social service system are unique. Their experiences can only be described subjectively; that is, a single and unique reality exists within each person.
2. Ways of "knowing":
Quantitative: The experience of ethnic minorities within public social services is made known by closely examining specific parts of their experiences. Scientific principles, rules, and test of sound reasoning are used to guide the research process.
Qualitative: The experience of ethnic minorities within the public social services is made known by capturing the whole experiences of a few cases. Parts of their experiences are considered only in relation to the whole of them. Sources of knowledge are illustrated through stories, diagrams, and pictures that are shared by the people with their unique life experiences.
3. Value Bases:
Quantitative: The researchers suspend all their values related to ethnic minorities and social services from the steps taken within the research study. The research participant "deposits" data, which are screened, organized, and analyzed by the researchers who do not attribute any personal meaning to the research participants or to the data they provide.
Qualitative: The researcher is part of the research process, and the personal values, beliefs, and experiences of the researcher will influence the research process. The researcher learns from the research participants, and their interaction is mutual.
4. Applications:
Quantitative: Research results are generalized to the population from which the sample is drawn (e.g., other minority groups, other social service programs). The research findings tell us, on average, the experience that ethnic minorities have within the public social service system.
Qualitative: Research results tell a story of a few individuals' or one group's experience within the public social service system. The research findings provide an in-depth understanding of a few people. The life context of each research participant is key to understanding the stories he or she tells.
Can both qualitative and quantitative research approaches be used in the same study? Pg. 143
Both approaches can be used to study any particular social problem.
The quantitative approach is more effective than the qualitative approach in reaching a specific and precise understanding of one aspect (or part) of an already well-defined social problem.
The qualitative approach aims to answer research questions that provide you with a more comprehensive understanding of a social problem from an intensive study of a few people and is conducted within the context of the research participant's natural environments.
Nominal measurement:
o The lowest level of measurement; used to measure variables whose attributes are different in kind.
o You cannot subtract or divide these attributes, or do much of statistical interest with them at all.
o E.g.: gender, ethnicity, place of birth
Ordinal measurement:
o A higher level of measurement than nominal; used to measure those variables whose attributes can be rank ordered.
o E.g.: socioeconomic status, sexism, racism, client satisfaction, and the like.
Interval measurement:
o Measures variables in which the distance, or interval, separating their attributes does have meaning.
o In SW, these measures are most commonly used in connection with standardized measuring instruments.
o E.g.: temperature, I.Q. scores
Ratio measurement:
o The highest level of measurement.
o Used to measure variables whose attributes are based on a true zero point.
o E.g.: how many children one has, income, how many times one has seen a social worker, length of residence in a given place, age, number of times married, number of organizations one belongs to, number of antisocial behaviors, number of case reviews, number of training sessions, number of supervisory meetings.
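A common textbook convention (an assumption here, not stated in this card set) is that each higher level of measurement permits more kinds of statistics. A minimal sketch:

```python
# Hedged sketch: which summary statistics are conventionally meaningful at
# each level of measurement. The mapping is a common textbook convention,
# not something stated in this card set.
ALLOWED_STATS = {
    "nominal":  {"mode"},                              # kinds only: count/label
    "ordinal":  {"mode", "median"},                    # rank order is meaningful
    "interval": {"mode", "median", "mean"},            # equal distances
    "ratio":    {"mode", "median", "mean", "ratio"},   # true zero point
}

def meaningful(stat: str, level: str) -> bool:
    """Return True if `stat` is conventionally meaningful at `level`."""
    return stat in ALLOWED_STATS[level]

print(meaningful("mean", "ordinal"))   # averaging ranks is not strictly meaningful
print(meaningful("ratio", "ratio"))    # "twice as many children" makes sense
```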
Why do we need to describe variables as accurately as possible? What are the four reasons?
o Means making a link between what we measure and/or observe and the theories we have developed to explain what we have measured and/or observed.
o E.g.: the concept of attachment theory can easily explain the different behaviors (variables) of small children when they are separated from, or reunited with, their mothers.
o A way of defining a complex variable so that the variable will mean the same thing to different researchers and measure it in the same way.
o Means nothing more than defining the level of a variable in terms of a single number, or score.
o Increases certainty, and it is only possible if the variables being studied have been standardized and quantified.
What are the criteria for the selection of a measuring instrument? Pg. 160 -168
2. Sensitivity to small changes
Reliability:
• The degree of accuracy, precision, or consistency in the results of a measuring instrument, including the ability to produce the same results when the same variable is measured more than once, or when repeated applications of the same test to the same individual produce the same measurement;
• The degree to which individual differences in scores or in data are due either to true differences or to errors in measurement.
Stability over time
Consistency within the instrument
Test-retest, Pg. 162
• Involves administering the same measuring instrument to the same group of people on two separate occasions.
o Results are then compared to see how similar they are: that is, how well they correlate
• Does an individual respond to a measuring instrument in the same general way when the instrument is administered twice? (Box 7.1)
o Testing Effect: When completing the same instrument twice, answers given on the first occasion may affect the answers given on the second.
• E.g.: Ms. Smith might remember what she wrote the first time and write something different just to enliven the proceedings.
o The more often an instrument is completed, the more likely testing effects are to occur.
Alternate forms, Pg. 163
• A second instrument that is as similar as possible to the original except that the wording of the items contained in the second instrument has changed.
• Administering the original form and then the alternative form reduces testing effects since the respondent is less likely to base the second set of answers on the first.
• When two forms of an instrument that are equivalent in their degree of validity are given to the same individual, is there a strong convergence in how that person responds? (Box 7.1)
• It is time consuming to develop different but equivalent instruments, and they must still be tested for reliability using the test-retest method, both together as a pair and separately as two distinct instruments.
Split-half, Pg. 163
• Involves splitting one instrument in half so that it becomes two shorter instruments.
• Are the scores on half of the measuring instrument similar to those obtained on the other half? (Box 7.1)
• Usually, all the even-numbered items, or questions, are used to make one instrument while the odd-numbered items make up the other.
o This ensures that the original instrument is internally consistent; that is, it is homogeneous, or the same all the way through, with no longer or more difficult items appearing at the beginning or the end.
• The two halves should ideally yield the same score when tested by the test-retest method.
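The odd/even split can be sketched in Python. The six-item responses below are hypothetical, and the Spearman-Brown correction is a standard companion step (an addition here, not named in the card above) that projects the half-test correlation up to full-test length:

```python
# Hedged sketch of split-half reliability. Responses are hypothetical;
# Spearman-Brown is a standard correction, not named in the card above.

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Each row: one respondent's answers to a 6-item instrument (hypothetical).
responses = [
    [4, 5, 4, 4, 5, 4],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 3],
    [5, 5, 5, 4, 5, 5],
    [1, 2, 1, 2, 2, 1],
]

odd_totals  = [sum(r[0::2]) for r in responses]   # items 1, 3, 5
even_totals = [sum(r[1::2]) for r in responses]   # items 2, 4, 6

r_half = pearson(odd_totals, even_totals)         # similarity of the two halves
r_full = 2 * r_half / (1 + r_half)                # Spearman-Brown full-length estimate
print(round(r_half, 3), round(r_full, 3))
```

A high correlation between the halves, as here, suggests the instrument is internally consistent.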
Observation reliability Pg. 164
• When behaviors are measured by observing how often they occur, or how long they last, or how severe they are and then the results are recorded on a straightforward, simple form.
• The level of agreement between observers provides a way of establishing the reliability of the process used to measure behavior.
• Is there an agreement between the observers who are measuring the same variable? (Box 7.1)
Inter-rater reliability Pg. 164
• Level of agreement between observers.
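The level of agreement between observers can be quantified as simple percent agreement, or with Cohen's kappa (a standard chance-corrected index, an addition here, not named in the text). The ratings below are hypothetical:

```python
# Hedged sketch: agreement between two observers rating the same behaviors.
# Ratings are hypothetical; Cohen's kappa corrects raw agreement for chance.

rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater_b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no"]

n = len(rater_a)
agree = sum(a == b for a, b in zip(rater_a, rater_b))
p_observed = agree / n                      # raw percent agreement

# Chance agreement: probability both raters independently pick the same
# category, given each rater's own marginal frequencies.
categories = set(rater_a) | set(rater_b)
p_chance = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories)

kappa = (p_observed - p_chance) / (1 - p_chance)
print(p_observed, round(kappa, 3))   # 0.75 0.5
```

Here the raters agree 75% of the time, but kappa of 0.5 shows only moderate agreement once chance is accounted for.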
Validity:
• The extent to which a measuring instrument measures the variable it is supposed to measure, and measures it accurately.
• The degree to which an instrument is able to do what it is intended to do, in terms of both experimental procedures and measuring instruments (internal validity) and generalizability of results (external validity)
• The degree to which scores on a measuring instrument correlate with measures of performance on some other criterion.
1. Face Validity
2. Content Validity
3. Criterion-oriented Validity
1. Face validity:
Does the measuring instrument appear to measure the subject matter under consideration? Not really a form of validity. (Box 7.2)
• Does this look good to me?
• Is this the way that someone with high self-esteem would answer?
• Face Validity is cheap!
• The idea that if it looks good, it must be right.
a. Expert panel validity: When someone who feels it looks right sends it to a panel of experts who have experience in the field of study and asks them, "Do you think this is a good way to measure self-esteem?"
2. Content validity (what are "domains?"):
Does the measuring instrument adequately measure the major dimensions of the variable under consideration? (Box 7.2)
3. Criterion-oriented validity:
Does the individual's measuring instrument score predict the probable behavior on a second variable (criterion-related measure)?
• An instrument has this if it gives the same result as a second instrument that is designed to measure the same variable.
a. Predictive validity: Deals with the future.
• You know it's correct because it tells you something about what is going to happen in the future (e.g., the IQ test as an educational instrument: the kids who got high scores on the test also received high scores on later exams; therefore, the higher the score on an IQ test, the higher the predicted score on sixth-grade tests, i.e., the test predicts competence).
b. Concurrent validity: Deals with the present and idea of a gold standard.
c. Group contrast (discriminant) validity: Is based on the idea of correlations of scores of two or more groups.
What is the relationship between reliability and validity?
• If an instrument is not reliable, it cannot be valid. Pg. 169
• You can have reliability without validity, but you can't have validity without reliability.
The marksmanship analogy for this relationship.
A tight pattern indicates reliability; a tight pattern centered on the bull's-eye indicates both reliability and validity.
In what order do we establish reliability and validity?
When you are creating a scale, establish reliability first, and then go on to validity.
Measurement tends to be skewed by error. There are things going on in the measurement procedure that tend to make it look as if there is no correlation between validity and reliability.
Types of Constant Error
1. Contrast Error
2. Halo Effect
3. Error of leniency
4. Error of severity
5. Error of central tendency
Why is it called "constant" error?
Because they are errors that remain constant throughout a study and may affect answers. Pg. 170
Types of Random Errors
1. Transient qualities of the research participant.
2. Situational factors
3. Administrative factors
Transient qualities of the research participant:
things such as fatigue, boredom, or any temporary personal state that will affect the participant's responses.
Solution: Give the best test you can give, analyze the data, and go for another round, eliminating all of the possible kinds of error. Your correlation should be better. If it's not, clean it up more and give the test again. After three times, if the results are still not what you need, something is seriously wrong.
Situational factors: the weather, the pneumatic drill outside the window, or anything else in the environment that will affect the participant's responses.
Administrative factors: anything relating to the way the instrument is administered, the interview conducted, or the observation made. These include transient qualities of the researcher (or whoever collects the data) as well as sporadic stupidity like reading out the wrong set of instructions.
Why is it called "random" error?
Because they are not constant, and are difficult to find and make allowances for.
How do you correct (get rid of) error?
They may cancel each other out, but there is little researchers can do about them except to be aware that they exist.
Level of Function Scales (LOF):
• Are before-and-after assessment instruments, usually designed by agency or program staff for use with a particular target population, that attempt to capture important dimensions of client functioning.
• Are sometimes used to measure acquisition of skills, so that the skill you're interested in would get the higher level of functioning.
• Designed to be completed by a case manager or some third party observer rather than by a client.
• Can be a single or multi-itemed scale
• You must specify concrete indicators of whatever it is that you are trying to measure.
• Useful because of their flexibility.
• Improvised scale that enables you to measure something that you have found no other way to measure.
• It increases the flexibility you have to respond to problems on which you have found nothing in literature reviews.
• Enables you to adapt.
Rules of LOF:
• Select concrete indicators of what you want to measure
• It has been said this is the same thing as "old-fashioned" face validity.
Client satisfaction scale:
• The dependent variable cannot be objective.
• Is almost always some form of ordinal measurement.
• Should never be used as a main independent variable, but rather in conjunction with success of treatment.
Contrast error
To rate others as opposite to oneself with respect to a particular characteristic.
Halo effect
To think that a participant is altogether wonderful or terrible because of one good or bad trait, or to think that the trait being observed must be good or bad because the participant is altogether wonderful or terrible.
Error of leniency
to always give a good report.
Error of severity
to always give a bad report.
Error of central tendency
observers, like participants, can choose always to stay comfortably in the middle of a rating scale and avoid both ends.
What are the questions to ask before measuring a variable?
1. Why do we want to make a measurement?
2. When will the measurement be made?
3. What do we want to measure?
4. Who will make the measurement?
5. What format do we require?
6. When will the measurement be made?
What is a Likert scale?
Be able to recognize an example of it.
What are the pros and cons of using scales with and without a middle (neutral) value?
Inventory:
• A list made by the research participants.
• Is valid to the degree that the list is complete and sensitive in the addition or omission of items over time and is indicative of change.
• Is fairly reactive in that it provokes thought.
• Is reliable in that the same experience should always result in the same entries on the list.
• E.g. List below the things that make you feel depressed. Pg. 183
Journal:
• Useful means of data collection when you are undertaking an interpretive study.
• Not usually used as data collection devices within positivistic studies.
• Can only be achieved if those keeping them have reasonable language skills and are willing to complete their journals on a regular basis.
• Utility depends on whether the client likes to write and is prepared to continue with what may become an onerous task.
• Usually very reactive. Pg. 182
Multidimensional instrument:
• A number of unidimensional instruments stuck together that measure a number of variables at the same time.
• E.g.: an instrument that contains three unidimensional instruments:
1. Relevance of received social services.
2. The extent to which the services reduced the problem.
3. The extent to which services enhanced the client's self-esteem and contributed to a sense of power and integrity.
Checklist:
• A list prepared by the researcher.
• Same considerations apply as to an inventory except that validity may be compromised if the researcher does not include all the possibilities that are relevant to the participant in the context of the study.
• E.g. Check below all the things that you have felt during the past week.
A wish to be alone
Log:
• When used in research situations, they are nothing more than a structured kind of journal, where the research participant is asked to record events related to particular experiences or behaviors in note form.
• May be more reliable because it is more likely that a similar experience will be recorded in a similar way.
• May be more useful because it takes less time for the participant to complete and less time for the researcher to analyze.
• Is usually less sensitive to small changes because it includes less detail.
• May be somewhat less reactive depending on the extent to which it leads to reflection and change.
Summated scale:
• Provides a greater range of responses, usually asking how frequently or to what degree a particular item, or question, applies.
• Is any instrument that allows the researcher to derive a sum or total score from a number of items.
• Is designed so that low scores indicate a low level of the variable being measured and high scores indicate a high level. Pg. 184
• A measuring instrument composed of several items that are logically or empirically structured to measure a construct. Pg. 455
Unidimensional instrument:
• Only measures one variable. Pg. 179
• E.g.: self-esteem
How would you evaluate a standardized measuring instrument? Pg. 185-186
1. The Sample from Which Data Were Drawn
a. Are the samples representative of pertinent populations?
b. Are the sample sizes sufficiently large?
c. Are the samples homogeneous?
d. Are the subsamples pertinent to respondents' demographics?
e. Are the data obtained from samples up to date?
2. The Validity of the Instrument
a. Is the content domain clearly and specifically defined?
b. Was there a logical procedure for including the items?
c. Is the criterion measure relevant to the instrument?
d. Was the criterion measure reliable and valid?
e. Is the theoretical construct clearly and correctly stated?
f. Do the scores converge with other relevant measures?
g. Do scores discriminate from irrelevant variables?
h. Are there cross-validation studies that conform to these concerns?
3. The Reliability of the Instrument
a. Is there sufficient evidence of internal consistency?
b. Is there equivalence between various forms?
c. Is there stability over a relevant time interval?
4. The Practicality of Application
a. Is the instrument an appropriate length?
b. Is the content socially acceptable to respondents?
c. Is the instrument feasible to complete?
d. Is the instrument relatively direct?
e. Does the instrument have utility?
f. Is the instrument relatively nonreactive?
g. Is the instrument sensitive to measuring change?
h. Is the instrument feasible to score?
What are the desired properties of clinical measurement?
2. Appropriateness and acceptability
4. Sensitivity (responsiveness)
Questions to be considered when assessing the validity of measures for clinical practice.
1. Does the measure cover factors that are clinically relevant to clients, their families, and health and social service professionals—that is, does the measure have face validity in terms of providing information that clinicians can use to treat the client? (See also "interpretability," below.)
2. Are the domains appropriate, important, and sufficient for the setting or types of problems being worked on—that is, does the measure have content validity in terms of the nature of this problem? Does it collect information as to the causes, and/or the symptoms? Does it cover the stages of the problem or the different forms of the problem, or the things that make the problem worse or better? Does the measure adequately "map" the problem? Will you know more about the problem that the client has as a result of the measures?
3. Does the measure correlate with a gold standard or superior measure—that is, does the measure have criterion-oriented validity? If there is no gold standard then an alternative question is to ask whether the measure produces results that conform to a theory. For example, a measure of weakness correlates with the stage of a patient's disease (muscle strength, tone, and energy deteriorate as the severity of disease increases). This test, however, is only as good as the theory used.
Appropriateness and acceptability:
is the measure suitable for its intended use? This property is crucial in clinical practice because measures must be simple to use.
Questions to be answered when assessing appropriateness and acceptability.
1. Convenience: Is the measure short enough or long enough to be completed or administered in the intended setting and with the types of patients, families, or informants for which it is intended?
2. User-friendly format: Is the format of the measure and the questions acceptable and suitable for use in the intended setting and with the intended informants?
3. Administrative track record: Has it been used in this or similar settings before, and did it work? Were there any complaints or difficulties?
4. Cultural compatibility: If the measure is in English, will it work in the client's culture and language if the client is not American or does not speak English well? Has there been a double translation—that is, has it been translated into English and then translated back into the source language from the English version to ensure equivalence? Has its conceptual as well as its semantic equivalence been assessed?
Reliability: does the measure produce the same results when repeated in the same population?
• Assessing reliability should include inter-rater (or inter-observer) reliability and rate-rerate reliability, which determine whether similar results are obtained by different observers (or by the same observer rating again), and test-retest reliability, which determines whether similar results are obtained at different points in time. The earlier points about split-half reliability, alternate-forms reliability, etc., apply here as well.
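Test-retest reliability is usually summarized as a correlation between scores from two administrations of the same instrument. As a minimal sketch (the scores and the `pearson_r` helper are made up for illustration, not from the text), the correlation can be computed in plain Python:

```python
from statistics import mean, pstdev

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

# Hypothetical scores for six clients, measured twice two weeks apart.
time1 = [20, 25, 31, 28, 40, 35]
time2 = [22, 24, 30, 29, 41, 33]
print(pearson_r(time1, time2))  # close to 1.0 => good test-retest reliability
```

An r near 1.0 over a clinically sensible interval suggests the instrument is stable; how high is "high enough" depends on the instrument and the setting.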
• Another test sometimes used is whether the individual items on the measure correlate with one another (known as internal consistency). Note that if a measure has very high internal consistency, this suggests that many items in the measure are capturing the same factors. So even though this indicates that the measure is reliable, some items may be redundant and the measure could be shortened.
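Internal consistency is most often summarized with Cronbach's alpha, which compares the sum of the individual item variances with the variance of the total score: the more the items move together, the closer alpha gets to 1. A minimal sketch, using hypothetical item scores invented for illustration (not data from the text):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per item, aligned by respondent.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per respondent
    item_var = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical 3-item scale answered by 5 respondents.
items = [
    [3, 4, 2, 5, 4],  # item 1
    [3, 5, 2, 4, 4],  # item 2
    [4, 4, 1, 5, 3],  # item 3
]
print(cronbach_alpha(items))  # high alpha => items hang together
```

Note the trade-off described above: perfectly redundant items yield alpha = 1.0, which is reliable but suggests the scale could be shortened.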
Sensitivity (responsiveness to change):
does the measure detect clinically meaningful changes?
• Sensitivity is critical if the measure is to be useful in clinical practice. If the client is improving, we need to know that. If the client is getting worse, we need to know that. If the client is not changing, we need to know that too. And if the client is changing, the measure must detect small, or at least reasonably small, increments of change so we know how much the client is changing.
• The question that should be asked is whether the measure can discriminate between different degrees of severity or detect changes anticipated to occur given the proposed treatment. How sensitive must the measure be? It must be sensitive enough to measure changes that are clinically relevant. Any change that is sufficiently large to relate to how we treat a client, or make changes in the way we treat that client, should be measurable on a desirable measurement instrument.
Sensitivity problems that can arise at extreme values of factors we wish to measure
• Patients who have progressive or advanced illness often score poorly on measures of normal functioning that were designed for fully functioning people. We want to measure functioning because the client's ability to perform the normal functions of life is part of the assessment of quality of life. Low-functioning persons may have quite a poor quality of life. A very ill person may score very low, or at the bottom, on a measure of functioning. If we want to measure any further decline, that may be impossible because the client is already at the bottom of the scale. This is known as a FLOOR EFFECT. Sensitivity to very low levels of functioning is lost when floor effects are characteristic of a measurement procedure. Care must be taken to use measures that are designed to be sensitive to very low values of what is being measured.
• There are also CEILING EFFECTS. Taking quality of life as an example a second time: the relation between a person's quality of life and that person's monetary income works well within moderate income ranges. But with current quality of life scales, quality of life tends to sit at the very top of the scale once income reaches about $150,000. Above that level, quality of life cannot get any higher as it is currently measured, so we cannot measure the improvement in quality of life as incomes rise above $150,000. People with $200,000 in income therefore appear to have the same quality of life as those with $2 million, $20 million, or even $200 million. This is a ceiling effect. Sensitivity to quality of life at very high income levels is lost when ceiling effects are characteristic of a measurement procedure. Care must be taken to use measures that are designed to be sensitive to very high values of what is being measured.
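Floor and ceiling effects can be pictured as score clipping: once a true value falls outside the instrument's range, clients who differ greatly become indistinguishable. A toy illustration (the 0-100 bounds and raw values are invented for this sketch, not from the text):

```python
def score_on_scale(raw, lo=0, hi=100):
    """A bounded instrument: any raw value outside [lo, hi] is clipped."""
    return max(lo, min(hi, raw))

# Ceiling effect: two very different raw values map to the same top score.
print(score_on_scale(140), score_on_scale(260))  # prints: 100 100

# Floor effect: further decline below the scale's bottom is invisible.
print(score_on_scale(-30), score_on_scale(-5))   # prints: 0 0
```

A more sensitive instrument would extend or re-anchor the scale so that change at these extremes still moves the score.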
Interpretability:
• The first point is relevance. The results of a measure must tell the clinician something about what he or she is treating the client for. It must provide a fuller or finer description of the client's problem, and/or a more correct and precise diagnosis, and/or information that can be used to decide on treatment. Given a series of scores over time, it must provide information that can be used to gauge whether progress is occurring.
• Componential accessibility: The clinician needs to be able to consider what to do with the information. For example, a skills test that gives only a client's overall score does not tell us very much. But if that score is broken down to give us (a) a score on verbal skills, (b) a score on mathematical skills, (c) a score on reading, and (d) a score on critical thinking, then a clinician can plan to work on the areas that are weaker than others. Over time, a series of such componential scores enables us to see the areas in which the client is improving and those in which there is no improvement.
Similarly, an overall quality of life score of, say, 50 out of 100 offers little information that will help in planning appropriate interventions. The clinician needs to understand what factors are affecting the patient's quality of life— such as symptoms of illnesses, functional impairments, community resources, financial concerns, social support, etc. —so that appropriate treatments and social services can be planned. The measure must provide information that relates to such things. To be clinically useful quality of life measures must provide easy access to the components of the assessment.