Reposted from Anthropology News April 2015 column.
Mixed-method research involves inherent challenges that make it at once more gratifying and more difficult than traditional single-method approaches. By “mixed-method,” I am referring to studies that employ a mixture of qualitative and quantitative methods. This approach is a hallmark of most biocultural research, and those of us committed to this approach believe that the triangulation of multiple methods is a more effective way of capturing human experience than an approach that attempts to represent only quantitative trends or only qualitative individual experience. Mixed methods also have the potential to make our work more intelligible to those outside of anthropology who transact primarily in the quantitative—those, for instance, in public health, psychiatry, or sociology.
Mixed-method studies are fundamentally challenging because they often take twice the work and require methodological expertise in multiple areas: Instead of just conducting an epidemiological survey to learn about the spread and correlates of a disease in a given sample or only conducting illness narrative interviews to learn about individuals’ experiences with a disease, a biocultural researcher is likely to be doing both. This requires a fair amount of time, money, training, and logistical agility.
I am certainly not the first to point out the complications involved in mixed-method research (see Bill Dressler’s 5 Things You Need to Know About Statistics: Quantification in Ethnographic Research from Left Coast Press). But the complication continues after the research is done, and these days, I’m finding the post-fieldwork integration of quantitative and qualitative data more difficult than the execution of the research itself. How do we combine all those mixed-method data into a coherent form that accurately represents human experience?
Let me give an example. At the recent March 2015 Society for Applied Anthropology meeting in Pittsburgh, I organized and participated in a session called “Food insecurity and mental health in global perspective.” The purpose was to bring together scholars who are studying the relationships between food insecurity and mental health and to move toward a unified research agenda that might help us identify some of the social pathways that link these two states in widely different parts of the world. This kind of comparative enterprise obviously requires some standardization in the methods used to measure important outcome variables like food insecurity and mental health across locations. Accordingly, most of us assessed mental health through a standard scale like the Hopkins Symptom Checklist-25 (HSCL-25) or the Center for Epidemiologic Studies Depression (CES-D) scale. While these have been validated for use in many cultural contexts (including those in which we work), they nevertheless reduce a profound experience of human suffering—depression—to a number.
Dr. Steven Schensul, an applied anthropologist with many years of experience in mixed-method research who attended the session, pointed out the relative lack of attention that each presenter gave to mental health. And he was right—most of us did little more in our 15-minute presentations than name the depression assessment scale we were using before moving on. As he reminded us, there is a whole branch of anthropology, psychological anthropology, dedicated to questioning, problematizing, and pluralizing psychiatric diagnostic categories. And indeed, many of us presenting at the session have an arm of our own research dedicated to just this (for instance, see my and Bonnie Kaiser’s recent article in Field Methods, where we suggest an approach to measuring mental health that employs standard scales to appeal to those who need numbers but also develops locally derived, ethnography-based ways of measuring mental health in a context-specific fashion). My response at the time was to say that we as a group are indeed aware of this limitation and to point to some of the more nuanced mental health work we have done in other contexts.
Afterward, I kept wondering: if we are all in fact sensitive to the potentially problematic nature of some of the measures we use, why didn’t we find time to address that in our presentations? And I kept coming up against the idea that one can only do so much. I don’t mean that as a defense of my research’s shortcomings, but rather to say that it’s a resounding theme in my own experience of working and writing at the intersection of the qualitative and quantitative social sciences. One can only do so much: in 15 minutes, in a single paper, in a single book, in a single study, with that amount of money, in that time frame, with that word count. In a session devoted to the relationships between food insecurity and mental health, then, perhaps it’s not surprising that none of us dwelled on the methods we were using to measure either one—unsurprising, but not necessarily best practice, either.
Now, to get back to my original point, I think these realistic limitations of academic presenting and publishing are part of the reason why I find it so challenging to assemble the qualitative and quantitative data I’m collecting. Human experience is hard to chunk into measurable quantities, single conversations, a 15-minute presentation, or even an article-length manuscript. This is something that all anthropologists struggle with, and it brings up some of the fundamental issues of social science: How do we make our work “speak” to as wide an audience as possible? How do we know that we’re measuring what we think we’re measuring? How do we represent the people we study with fidelity and ethics? How do we even know what their reality is? How do we claim some authority to knowledge about the people we are studying without overstating the case?
In other words, the challenges of biocultural anthropology are the challenges of anthropology in general. We can’t capture it all. But that doesn’t mean we shouldn’t try.