In an unstructured interview, the clinician asks mostly open-ended questions, perhaps as simple as "Would you tell me about yourself?" The lack of structure allows the interviewer to follow leads and explore relevant topics that could not be anticipated before the interview.
In a structured interview, clinicians ask prepared—mostly specific—questions. Sometimes they use a published interview schedule—a standard set of questions designed for all interviews. Many structured interviews include a mental status exam, a set of questions and observations that systematically evaluate the client's awareness, orientation with regard to time and place, attention span, memory, judgment and insight, thought content and processes, mood, and appearance (Sommers-Flanagan & Sommers-Flanagan, 2013). A structured format ensures that clinicians will cover the same kinds of important issues in all of their interviews and enables them to compare the responses of different individuals.

Interviews have limitations, however. One problem is that they sometimes lack validity, or accuracy (Sommers-Flanagan & Sommers-Flanagan, 2013). Individuals may intentionally mislead in order to present themselves in a positive light or to avoid discussing embarrassing topics. Or people may be unable to give an accurate report in their interviews. Individuals who suffer from depression, for example, take a pessimistic view of themselves and may describe themselves as poor workers or inadequate parents when that isn't the case at all.
Interviewers too may make mistakes in judgment that slant the information they gather. They may rely too heavily on first impressions, for example, or give too much weight to unfavorable information about a client. Interviewer biases, including gender, race, and age biases, may also influence how they interpret what a client says.
Interviews, particularly unstructured ones, may also lack reliability (Sommers-Flanagan & Sommers-Flanagan, 2013). People respond differently to different interviewers, providing, for example, less information to a cold interviewer than to a warm and supportive one (Quas et al., 2007). Similarly, a clinician's race, gender, age, and appearance may influence a client's responses.
Because different clinicians can obtain different answers and draw different conclusions even when they ask the same questions of the same person, some researchers believe that interviewing should be discarded as a tool of clinical assessment.

Projective tests require that clients interpret vague stimuli, such as inkblots or ambiguous pictures, or follow open-ended instructions such as "Draw a person." Theoretically, when clues and instructions are so general, people will "project" aspects of their personality into the task (Cherry, 2015; Hogan, 2014).
Projective tests are used primarily by psychodynamic clinicians to help assess the unconscious drives and conflicts they believe to be at the root of abnormal functioning (Baer & Blais, 2010). The most widely used projective tests are the Rorschach test, the Thematic Apperception Test, sentence-completion tests, and drawings.

On the assumption that a drawing tells us something about its creator, clinicians often ask clients to draw human figures and talk about them (McGrath & Carroll, 2012). Evaluations of these drawings are based on the details and shape of the drawing, the solidity of the pencil line, the location of the drawing on the paper, the size of the figures, the features of the figures, the use of background, and comments made by the respondent during the drawing task. In the Draw-a-Person (DAP) test, the most popular of the drawing tests, individuals are first told to draw "a person" and then are instructed to draw a person of the other sex.

Until the 1950s, projective tests were the most commonly used method for assessing personality. In recent years, however, clinicians and researchers have relied on them largely to gain "supplementary" insights. There are two reasons behind this shift. One is that practitioners who follow the newer models have less use for the tests than psychodynamic clinicians do. Even more important, the tests have not consistently shown much reliability or validity.
In reliability studies, different clinicians have tended to score the same person's projective test quite differently. Similarly, in validity studies, when clinicians try to describe a client's personality and feelings on the basis of responses to projective tests, their conclusions often fail to match the self-report of the client, the view of the psychotherapist, or the picture gathered from an extensive case history (Cherry, 2015; Bornstein, 2007).
Another validity problem is that projective tests are sometimes biased against minority ethnic groups. For example, people are supposed to identify with the characters in the TAT when they make up stories about them, yet no members of minority groups appear in the TAT pictures. In response to this problem, some clinicians have developed other TAT-like tests with African American or Hispanic figures.

Naturalistic clinical observations usually take place in homes, schools, institutions such as hospitals and prisons, or community settings. Most of them focus on parent-child, sibling-sibling, or teacher-child interactions and on fearful, aggressive, or disruptive behavior. Often such observations are made by participant observers—key people in the client's environment—and reported to the clinician.
When naturalistic observations are not practical, clinicians may instead observe clients in artificial settings, using analog observations, often aided by special equipment such as a video camera or one-way mirror. Analog observations have often focused on children interacting with their parents, married couples attempting to settle a disagreement, speech-anxious people giving a speech, and fearful people approaching an object they find frightening.

Clinical observations also have limitations. For one thing, they are not always reliable. It is possible for various clinicians who observe the same person to focus on different aspects of behavior, assess the person differently, and arrive at different conclusions. Careful training of observers and the use of observer checklists can help reduce this problem.
Similarly, observers may make errors that affect the validity, or accuracy, of their observations. The observer may suffer from overload and be unable to see or record all of the important behaviors and events. Or the observer may experience observer drift, a steady decline in accuracy as a result of fatigue or of a gradual, unintentional change in the standards used when an observation continues for a long period of time. Another possible problem is observer bias—the observer's judgments may be influenced by information and expectations he or she already has about the person.
[Figure: An ideal observation. Using a one-way mirror, a clinical observer is able to view a mother interacting with her child without distracting the duo or influencing their behaviors.]
A client's reactivity may also limit the validity of clinical observations; that is, his or her behavior may be affected by the very presence of the observer. If schoolchildren are aware that someone special is watching them, for example, they may change their usual classroom behavior, perhaps in the hope of creating a good impression.
Finally, clinical observations may lack cross-situational validity. A child who behaves aggressively in school is not necessarily aggressive at home or with friends after school. Because behavior is often specific to particular situations, observations in one setting cannot always be applied to other settings.

CLINICAL ASSESSMENT
Clinical practitioners are interested primarily in gathering individual information about each client. They seek an understanding of the specific nature and origins of a client's problems through clinical assessment. To be useful, assessment tools must be standardized, reliable, and valid. Most clinical assessment methods fall into three general categories: clinical interviews, tests, and observations. A clinical interview may be either unstructured or structured. Types of clinical tests include projective, personality, response, psychophysiological, neurological, neuropsychological, and intelligence tests. Types of observation include naturalistic observation, analog observation, and self-monitoring.

In addition to deciding what disorder a client is displaying, diagnosticians assess the current severity of the client's disorder—that is, how much the symptoms impair the client. For each disorder, the framers of DSM-5 have suggested various rating scales that may prove useful for evaluating the severity of the particular disorder (APA, 2013).
In cases of major depressive disorder, for example, two scales are suggested by DSM-5: the Cross-Cutting Symptom Measure and the Emotional Distress-Depression Scale. The former indicates the current frequency of general negative feelings and behaviors (for example, "I do not know what I want out of life"), and the latter indicates the frequency of depression-specific feelings and behaviors (for example, "I feel worthless"). Using scores from these scales, along with the clinical interview, tests, and observations, the diagnostician then rates the client's depression as "mild," "moderate," or "severe." DSM-5 is the first edition of the DSM to consistently seek both categorical and dimensional information as equally important parts of the diagnosis, rather than categorical information alone.

A classification system, like an assessment method, is judged by its reliability and validity. Here reliability means that different clinicians are likely to agree on the diagnosis when they use the system to diagnose the same client. Early versions of the DSM were at best moderately reliable (Regier et al., 2011). In the early 1960s, for example, four clinicians, each relying on DSM-I, the first edition of the DSM, independently interviewed 153 patients (Beck et al., 1962). Only 54 percent of their diagnoses were in agreement. Because all four clinicians were experienced diagnosticians, their failure to agree suggested deficiencies in the classification system.

The framers of DSM-5 followed certain procedures in their development of the new manual to help ensure that DSM-5 would have greater reliability than the previous DSMs (APA, 2013). For example, they conducted extensive reviews of research to pinpoint which categories in past DSMs had been too vague and unreliable. In addition, they gathered input from a wide range of experienced clinicians and researchers.
They then developed a number of new diagnostic criteria and categories, expecting that the new criteria and categories would in fact be reliable.

Despite such efforts, some critics continue to have concerns about the procedures used in the development of DSM-5 (Wakefield, 2015; Brown et al., 2014; Frances, 2013). They worry, for example, that the framers failed to run a sufficient number of their own studies—in particular, field studies that test the merits of the new criteria and categories. In turn, the critics fear that DSM-5 may have retained several of the reliability problems that were on display in the past DSMs.

The validity of a classification system is the accuracy of the information that its diagnostic categories provide.
Categories are of most use to clinicians when they demonstrate predictive validity—that is, when they help predict future symptoms or events.
A common symptom of major depressive disorder is either insomnia or excessive sleep. When clinicians give Franco a diagnosis of major depressive disorder, they expect that he may eventually develop sleep problems even if none are present now. In addition, they expect him to respond to treatments that are effective for other depressed persons. The more often such predictions are accurate, the greater a category's predictive validity.
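Agreement figures like the 54 percent reported for the DSM-I study are simple percent agreement; modern reliability studies typically also correct for the agreement that would occur by chance alone, using statistics such as Cohen's kappa. A minimal sketch of both computations, using invented diagnoses for two hypothetical clinicians (the data and labels below are illustrative, not from any real study):

```python
from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Proportion of cases on which two raters give the same diagnosis."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Agreement corrected for chance, given each rater's own base rates."""
    n = len(rater_a)
    observed = percent_agreement(rater_a, rater_b)
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    # Chance agreement: probability that both raters independently pick
    # the same label, based on how often each rater uses each label.
    expected = sum(
        (counts_a[label] / n) * (counts_b[label] / n)
        for label in set(rater_a) | set(rater_b)
    )
    return (observed - expected) / (1 - expected)

# Invented diagnoses for 10 patients from two hypothetical clinicians
rater_a = ["MDD", "MDD", "GAD", "MDD", "PTSD", "GAD", "MDD", "PTSD", "GAD", "MDD"]
rater_b = ["MDD", "GAD", "GAD", "MDD", "PTSD", "GAD", "MDD", "GAD", "GAD", "MDD"]

print(percent_agreement(rater_a, rater_b))           # 0.8
print(round(cohens_kappa(rater_a, rater_b), 2))      # 0.68
```

Kappa is lower than raw agreement because some matches would happen by chance; a kappa near 0 indicates agreement no better than chance, while values above roughly 0.6 are conventionally read as good agreement.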
The framers of DSM-5 also tried to ensure the validity of this new edition by conducting extensive reviews of research and consulting with numerous clinical advisors. As a result, its criteria and categories may have stronger validity than those of the earlier versions of the DSM. But, again, many clinical theorists worry that at least some of the criteria and categories in DSM-5 are based on weak research and that others may reflect gender or racial bias. In fact, one important organization, the National Institute of Mental Health (NIMH), has already concluded that the validity of DSM-5 is sorely lacking and is acting accordingly. The world's largest funding agency for mental health research, NIMH has announced that it will no longer give financial support to clinical studies that rely exclusively on DSM-5 criteria.

Key changes in DSM-5 include the following:

1. Adding a new category, "autism spectrum disorder," that combines certain past categories such as "autistic disorder" and "Asperger's syndrome."
2. Viewing "obsessive-compulsive disorder" as a problem that is different from the anxiety disorders and grouping it instead along with other obsessive-compulsive-like disorders such as "hoarding disorder," "body dysmorphic disorder," "trichotillomania" (hair-pulling disorder), and "excoriation (skin-picking) disorder" (see Chapter 4).
3. Viewing "posttraumatic stress disorder" as a problem that is distinct from the anxiety disorders (see Chapter 5).
4. Adding new categories, "disruptive mood dysregulation disorder," "persistent depressive disorder," and "premenstrual dysphoric disorder," and grouping them with other kinds of depressive disorders (see Chapter 6).
5. Adding a new category, "somatic symptom disorder" (see Chapter 8).
6. Replacing the term "hypochondriasis" with the new term "illness anxiety disorder" (see Chapter 8).
7. Adding a new category, "binge eating disorder" (see Chapter 9).
8. Adding a new category, "substance use disorder," that combines past categories "substance abuse" and "substance dependence" (see Chapter 10).
9. Viewing "gambling disorder" as a problem that should be grouped as an addictive disorder alongside the substance use disorders (see Chapter 10).
10. Replacing the term "gender identity disorder" with the new term "gender dysphoria" (see Chapter 11).
11. Replacing the term "mental retardation" with the new term "intellectual disability" (see Chapter 14).
12. Adding a new category, "specific learning disorder," that combines past categories "reading disorder," "mathematics disorder," and "disorder of written expression" (see Chapter 14).
13. Replacing the term "dementia" with the new term "neurocognitive disorder" (see Chapter 15).
14. Adding a new category, "mild neurocognitive disorder" (see Chapter 15).

Even with trustworthy assessment data and reliable and valid classification categories, clinicians will sometimes arrive at a wrong conclusion.
Like all human beings, they are flawed information processors. Studies show that they may be overly influenced by information gathered early in the assessment process. In addition, they may pay too much attention to certain sources of information, such as a parent's report about a child, and too little to others, such as the child's point of view. Finally, their judgments can be distorted by any number of personal biases—gender, age, race, and socioeconomic status, to name just a few. Given the limitations of assessment tools, assessors, and classification systems, it is small wonder that studies sometimes uncover shocking errors in diagnosis, especially in hospitals.
Beyond the possibility of misdiagnosis, the very act of classifying people can lead to unintended results. For example, many family-social theorists believe that diagnostic labels can become self-fulfilling prophecies. When people are diagnosed as mentally disturbed, they may be perceived and reacted to correspondingly. If others expect them to take on a sick role, they may begin to consider themselves sick as well and act that way. Furthermore, our society attaches a stigma to abnormality. People labeled mentally ill may find it difficult to get a job, especially a position of responsibility, or to be welcomed into social relationships. Once a label has been applied, it may stick for a long time.
Because of these problems, some clinicians would like to do away with diagnoses. Others disagree. They believe we must simply work to increase what is known about psychological disorders and improve diagnostic techniques. They hold that classification and diagnosis are critical to understanding and treating people in distress.

After collecting assessment information, clinicians form a clinical picture and decide upon a diagnosis. The diagnosis is chosen from a classification system. The system used most widely in North America is the Diagnostic and Statistical Manual of Mental Disorders (DSM). The most recent version of the DSM, known as DSM-5, lists more than 500 disorders. DSM-5 contains numerous additions and changes to the diagnostic categories, criteria, and organization found in past editions of the DSM. The reliability and validity of this revised diagnostic and classification system are currently receiving clinical review and, in some circles, criticism.
Even with trustworthy assessment data and reliable and valid classification categories, clinicians will not always arrive at the correct conclusion. They are human and so fall prey to various biases, misconceptions, and expectations. Another problem related to diagnosis is the prejudice that labels arouse, which may be damaging to the person who is diagnosed.

Franco's therapist began, like all therapists, with
assessment information and diagnostic decisions. Knowing the specific details and background of Franco's problem (idiographic data) and combining this individual information with broad information about the nature and treatment of depression, the clinician arrived at a treatment plan for him.
Yet therapists may be influenced by additional factors when they make treatment decisions. Their treatment plans typically reflect their theoretical orientations and how they have learned to conduct therapy (Sharf, 2015). As therapists apply a favored model in case after case, they become more and more familiar with its principles and treatment techniques and tend to use them in work with still other clients.
According to surveys, therapists gather much of their information about the latest developments in the field from colleagues, professional newsletters, workshops, conferences, Web sites, books, and the like; not all of them regularly read research articles.

Altogether, more than 400 forms of therapy are currently practiced in the clinical field. Naturally, the most important question to ask about each of them is whether it does what it is supposed to do. Does a particular treatment really help people overcome their psychological problems?
The first problem is how to define "success." If, as Franco's therapist implies, he still has much progress to make at the conclusion of therapy, should his recovery be considered successful? The second problem is how to measure improvement. Should researchers give equal weight to the reports of clients, friends, relatives, therapists, and teachers? Should they use rating scales, inventories, therapy insights, observations, or some other measure?
Perhaps the biggest problem in determining the effectiveness of treatment is the variety and complexity of the treatments currently in use. People differ in their problems, personal styles, and motivations for therapy. Therapists differ in skill, experience, orientation, and personality. And therapies differ in theory, format, and setting. Because an individual's progress is influenced by all these factors and more, the findings of a particular study will not always apply to other clients and therapists. As Gordon Paul said decades ago, the most appropriate question regarding the effectiveness of therapy may be "What specific treatment, by whom, is most effective for this individual with that specific problem, and under which set of circumstances?"
Drug therapy is sometimes combined with certain forms of psychotherapy, for example, to treat depression. In fact, it is now common for clients to be seen by two therapists—one of them a psychopharmacologist, a psychiatrist who primarily prescribes medications, and the other a psychologist, social worker, or other therapist who conducts psychotherapy.