An Epidemiologic Anthropology: Considerations when Employing Mixed Methods

Anthropology versus Epidemiology

Author, Kathryn Oths

Anthropologists and epidemiologists have contributed vital knowledge to understanding public health problems such as low birth weight, reemerging disease, mental health, and more. A lively and enduring dialogue on the potential for collaboration between the disciplines was sparked in the 1980s by Janes et al.’s (1986) Anthropology and Epidemiology and True’s (1990) chapter “Epidemiology and medical anthropology.” The discourse continues to the present, well summarized in the works of Dein and Bhui (2013), Hersch-Martínez (2013), Inhorn (1995), and Trostle (2005).

In contrast to early literature, later writing—from both camps—implies that what anthropology most offers epidemiology is its qualitative sensibility (e.g., Ragone and Willis 2000; Scammell 2010). While clearly one of anthropology’s great strengths, sensitivity to qualitative dimensions is not all we have to offer. Rigorous, contextualized mixed methodology is more likely to persuade other disciplines than entrained awareness alone (Prussing 2014). In fact, by incorporating epi techniques into anthropological designs, we can employ a holistic paradigm on our own—what Inhorn calls a synthetic approach, or wearing both hats. (The reverse, training health professionals in anthropology, has also been suggested [O’Mara et al. 2015].)

Kathryn’s Epi Anth Model

Anthropological orientations in health research might be glossed as follows: Anthropologists of Suffering record the pain and distress of a people, striving to understand meaning surrounding health problems. Anthropologists of Sickness, in addition to searching for meaning, use structured surveys emerging from ethnographic observation to systematically ferret out factors contributing to dis-ease and illness. The first approach interrogates the meaning of critical life events, while the second investigates how socially and culturally constructed meanings themselves shape risk of morbidity and mortality. As Trostle and Sommerfeld (1996) state, “data can be used to create emotional responses in the reader, or to explain relationships.” Both approaches are vital and mutually enhancing, but less has been written about the latter.

For example, most anthropologists of reproduction interpret the clinical interactions that oppress and mystify women’s knowledge and autonomy, as well as women’s resistance to these controlling forces. They study the technologizing of natural processes and the hegemony of biomedical over self-knowledge. This research is an important corrective to years of neglect of reproductive work (Rapp 2001). The focus of others, including myself, has been more outcome-driven: a systematic explanatory study of the conditions not of clinical but rather daily life—like workplace organization and intimate relationships—that shape women’s and babies’ health (Oths et al. 2001; Dunn & Oths 2004).

A Word on Publishing

While epidemiology and anthropology share the common goal of improving human health, each field has its own prerogatives. Those who blend qualitative and quantitative methods in the pursuit of an Epidemiological Anthropology of Sickness may face problems getting published in the public health literature. I’ll make three points regarding disciplinary differences of opinion on the accurate specification of analytic models:

   1. Anthropological methods are not self-explanatory. 

Anthropological methods essential to getting results are detailed, iterative, and not necessarily self-explanatory. However, there is no space to discuss these vital tools in standard public health journal articles. Be forewarned: Public health expects very brief methods sections!

   2. What’s reliable to others may not be valid to us.

Other fields are stricter than ours in insisting that survey items be tested for reliability before use. Reliability, or ensuring that an instrument gives the same results with repeated use, is a good thing. The convention, however, holds that a scale, once published, should not be changed. (A survey instrument you construct yourself? Even more suspect.) Yet without local contextualization, an instrument’s validity—actually measuring what the instrument claims to measure—may be compromised. This is a constant issue when we employ scales that have been normed to populations other than the one we will survey. For epidemiologists, patterns of association are of greater concern than measurement issues. The categories they work with are believed to be fixed in nature, race being a prime example. For us, they are anything but fixed. Anthropologists insist on emic construct validity: categories should make sense in the cultures in which we measure them. Rule of thumb: Take care of validity, and reliability will follow.
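For readers curious what a reliability check actually involves, here is a minimal sketch of Cronbach’s alpha, the standard internal-consistency statistic for multi-item scales. The survey data are entirely hypothetical, invented for illustration:

```python
# Illustrative sketch: Cronbach's alpha for a multi-item scale.
# Each row is one (hypothetical) respondent's answers to a 4-item survey.
from statistics import pvariance

responses = [
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
]

def cronbach_alpha(rows):
    k = len(rows[0])                       # number of items in the scale
    items = list(zip(*rows))               # column-wise item scores
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_var / total_var)

print(round(cronbach_alpha(responses), 2))  # → 0.94
```

A high alpha says only that the items hang together with repeated use; it says nothing about whether they capture a locally meaningful construct, which is precisely the validity problem raised above.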


   3. We lack authority to critique normative methods.

Some journals, such as the American Journal of Public Health (AJPH), recommend the use of specific statistics, such as logistic rather than ordinary least squares regression. They insist that every dependent outcome variable be broken into two discrete categories instead of retaining the generally continuous, tough-to-define, but more precise character of real life. However, they don’t insist on power analyses, which determine whether a given study’s sample size is sufficient to detect an effect of a given size. An example from my birth weight study illustrates this: None of six previous studies using a model developed by Karasek found a direct association between job strain and birth outcomes. Four had low power for their logistic regression, which may have resulted in undetected effects. And instead of using the full range of values—500 to 4500 grams for birth weight—logistic regression uses only ‘low’ or ‘normal’ as outcomes, which results in a loss of variability and, thus, information. We would have needed twice the sample size in our study to achieve sufficient power using logistic regression. When my colleagues and I demonstrated that least squares regression detects an effect while logistic regression does not, the editor of AJPH was not impressed.
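The power cost of dichotomizing can be made concrete with a small simulation. This sketch is not from the study itself: the sample sizes, means, and effect size are hypothetical, and simple two-sample z-tests stand in for full regression models. But the pattern it shows (a test on the continuous outcome detects a modest shift far more often than a test on the dichotomized low-birth-weight indicator) mirrors the information loss described above:

```python
# Monte Carlo sketch of information loss from dichotomizing a continuous
# outcome. All numbers are hypothetical: birth weights are drawn from a
# normal distribution, the "exposed" group is shifted down by 100 g, and
# we count how often each test reaches |z| > 1.96 (empirical power).
import math
import random
from statistics import pvariance

random.seed(42)

def z_means(a, b):
    """Two-sample z statistic for a difference in means."""
    n = len(a)
    se = math.sqrt(pvariance(a) / n + pvariance(b) / n)
    return (sum(b) / n - sum(a) / n) / se

def z_props(a, b, cutoff=2500):
    """Two-sample z statistic for a difference in proportions after
    dichotomizing at the conventional low-birth-weight cutoff."""
    n = len(a)
    p1 = sum(x < cutoff for x in a) / n
    p2 = sum(x < cutoff for x in b) / n
    p = (p1 + p2) / 2                      # pooled proportion (equal n)
    se = math.sqrt(p * (1 - p) * 2 / n)
    return (p2 - p1) / se if se > 0 else 0.0

def empirical_power(n=150, shift=-100, sims=500):
    hits_cont = hits_bin = 0
    for _ in range(sims):
        control = [random.gauss(3300, 500) for _ in range(n)]
        exposed = [random.gauss(3300 + shift, 500) for _ in range(n)]
        hits_cont += abs(z_means(control, exposed)) > 1.96
        hits_bin += abs(z_props(control, exposed)) > 1.96
    return hits_cont / sims, hits_bin / sims

power_cont, power_bin = empirical_power()
print(f"power, continuous outcome:   {power_cont:.2f}")
print(f"power, dichotomized outcome: {power_bin:.2f}")
```

With these assumed parameters, the continuous comparison detects the shift several times as often as the dichotomized one, which is exactly why the same design needs a much larger sample once the outcome is cut at 2500 grams.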

Why the one-model-fits-all assumption, regardless of whether the model is the best one? It fits with naturalized categories, like disease and race, which are seen as binary oppositions: yes/no, black/white. This implicit model of the world is simply too rigid for anthropological sensibilities (Dressler, Oths, and Gravlee 2005). Newsflash: The world isn’t always best modeled by dichotomies.

In summary, when we strive to measure more accurately, we may meet with resistance from the gatekeepers of public health journals. Perhaps my outline of some common pitfalls of writing for an interdisciplinary audience will help reduce the frustration of others who attempt the same.

This was originally posted in Anthropology News’ August 2015 “Knowledge Exchange.”

Challenges of Mixed-Method Research

Jo Weaver

Reposted from Anthropology News April 2015 column.

Mixed-method research involves inherent challenges that make it at once more gratifying and more difficult than traditional single-method approaches. By “mixed-method,” I am referring to studies that employ a mixture of qualitative and quantitative methods. This approach is a hallmark of most biocultural research, and those of us committed to this approach believe that the triangulation of multiple methods is a more effective way of capturing human experience than an approach that attempts to represent only quantitative trends or only qualitative individual experience. Mixed methods also have the potential to make our work more intelligible to those outside of anthropology who transact primarily in the quantitative—those, for instance, in public health, psychiatry, or sociology.

Mixed-method studies are fundamentally challenging because they often take twice the work and require methodological expertise in multiple areas: Instead of just conducting an epidemiological survey to learn about the spread and correlates of a disease in a given sample or only conducting illness narrative interviews to learn about individuals’ experiences with a disease, a biocultural researcher is likely to be doing both. This requires a fair amount of time, money, training, and logistical agility.

I am certainly not the first to point out the complications involved in mixed-method research (see Bill Dressler’s 5 Things You Need to Know About Statistics: Quantification in Ethnographic Research from Left Coast Press). But the complication continues after the research is done, and these days, I’m finding the post-fieldwork integration of quantitative and qualitative data more difficult than the execution of the research itself. How do we combine all those mixed-method data into a coherent form that accurately represents human experience?

Let me give an example. At the recent March 2015 Society for Applied Anthropology meeting in Pittsburgh, I organized and participated in a session called “Food insecurity and mental health in global perspective.” The purpose was to bring together scholars who are studying the relationships between food insecurity and mental health and to move toward a unified research agenda that might help us identify some of the social pathways that link these two states in widely different parts of the world. This kind of comparative enterprise obviously requires some standardization in the methods used to measure important outcome variables like food insecurity and mental health across locations. Accordingly, most of us assessed mental health through a standard scale like the Hopkins Symptom Checklist-25 (HSCL-25) or the Center for Epidemiologic Studies Depression (CES-D) scale. While these have been validated for use in many cultural contexts (including those in which we work), they nevertheless reduce a profound experience of human suffering—depression—to a number.
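To make the “reduce to a number” point concrete: scoring a checklist like the HSCL-25 conventionally amounts to taking the mean of the item ratings (each 1 to 4), often further reduced to a yes/no by a published cutoff (1.75 is the value commonly cited for the HSCL-25). The responses below are invented for illustration:

```python
# Illustrative only: hypothetical answers to a 25-item symptom checklist,
# each rated 1 ("not at all") to 4 ("extremely"), following the usual
# HSCL-25 convention of scoring by item mean.
from statistics import mean

responses = [1, 2, 1, 3, 2, 1, 1, 2, 4, 2,
             1, 1, 2, 3, 1, 2, 2, 1, 1, 3,
             2, 1, 2, 2, 1]

score = mean(responses)   # a lived experience, reduced to one number
case = score >= 1.75      # and then to a single yes/no
print(round(score, 2), case)  # → 1.76 True
```

Notice how little separates this hypothetical “case” from a non-case: a single item rated one point lower would flip the classification, which is the kind of brittleness that ethnographically grounded measures of distress are meant to catch.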

Dr. Steven Schensul, an applied anthropologist with many years of experience in mixed-method research who attended the session, pointed out the relative lack of attention that each presenter gave to mental health. And he was right—most of us did little more in our 15-minute presentations than name the depression assessment scale we were using before moving on. As he reminded us, there is a whole branch of anthropology, psychological anthropology, dedicated to questioning, problematizing, and pluralizing psychiatric diagnostic categories. And indeed, many of us presenting at the session have an arm of our research dedicated to just this (for instance, see my and Bonnie Kaiser’s recent article in Field Methods, where we suggest an approach to measuring mental health that employs standard scales to appeal to those who need numbers but also develops locally derived, ethnography-based ways of measuring mental health in a context-specific fashion). My response at the time was to say that we as a group are indeed aware of this limitation and to point to some of the more nuanced mental health work we have done in other contexts.


Afterward, I kept wondering: if we are all in fact sensitive to the potentially problematic nature of some of the measures we use, then why didn’t we find time to address that in our presentations? And I kept coming up against the idea that one can only do so much. I don’t mean that as a defense of my research’s shortcomings, but rather to say that it’s a resounding theme in my own experiences of working and writing at the intersection of qualitative and quantitative social sciences. One can only do so much: in 15 minutes, in a single paper, in a single book, in a single study, with that amount of money, in that time frame, with that word count. In a session devoted to the relationships between food insecurity and mental health, then, perhaps it’s not surprising that none of us dwelled on the methods we were using to measure either one—unsurprising, but not necessarily best practice, either.

Now, to get back to my original point, I think these realistic limitations of academic presenting and publishing are part of the reason why I find it so challenging to assemble the qualitative and quantitative data I’m collecting. Human experience is hard to chunk into measurable quantities, single conversations, a 15-minute presentation, or even an article-length manuscript. This is something that all anthropologists struggle with, and it brings up some of the fundamental issues of social science—things like, how do we make our work “speak” to as wide an audience as possible? How do we know that we’re measuring what we think we’re measuring? How do we represent the people we study with fidelity and ethics? How do we even know what their reality is? How do we claim some authority to knowledge about the people we are studying without overstating the case?

In other words, the challenges of biocultural anthropology are the challenges of anthropology in general. We can’t capture it all. But that doesn’t mean we shouldn’t try.