The notion that nature and nurture interact to produce the phenotype of an individual is a very old one. Modern techniques of molecular biology and the mapping of the human genome have led to multiple studies approaching gene-environment interaction as more than a metaphor. A part of that revolution was led by Caspi and associates in their studies of depression. They examined a gene linked to the serotonin transporter mechanism, which influences how long the neurotransmitter serotonin remains in the synapse between nerve cells. Many antidepressant medications inhibit the re-uptake of serotonin by the transmitting cell, which in some way helps to alleviate depressive symptoms. Caspi and associates found that risk of depression was elevated among individuals who both carried a specific polymorphism, or gene variant, of the serotonin transporter gene and had experienced recent stressful life events. For individuals with other variants, the risk of depression associated with stressful life events was reduced.
This was indeed revolutionary work, inspiring an explosion of papers examining gene-environment interaction (or “GxE,” as it is abbreviated). It also generated its own controversy and lively debates. One of these involves whether GxE effects have been detected at all, based on two issues. First, replication of results in GxE research is highly variable. Second, the “candidate gene” approach, in which one gene (or sometimes a multiple polymorphism index) is examined, has been severely criticized by researchers who favor genome-wide scans, a method in which, for example, respondents are divided into depressed and non-depressed groups, and the entire genome of each individual in each group is scanned to see if some collection of genes differs between them. The argument for this approach is that any complex phenotype, like depression, is likely to be the result of many interacting genes—literally dozens of them. It is then argued that candidate gene researchers are finding spurious results because they are capitalizing on chance, given the hundreds of possible combinations of genes that could be detected.
While the candidate gene versus genome-wide scan arguments are interesting, what seems more relevant to me is the question forming the title of this essay: what is the “environment” in gene-environment interaction research? As I noted, there has been an explosion of papers in GxE research, and with only a few exceptions, the “environment” examined has consisted solely of the occurrence of stressful life events. These have been assessed both during childhood (referred to as “childhood adversity”) and as standard concurrent stressful events (e.g., unemployment, death of a close family member); but typically, that’s the “environment.”
There are very few anthropologists involved in GxE research, at least in the study of mental health outcomes. It would, nevertheless, seem an area of research ripe for our attention, because if anthropologists know about anything, it is context. And if the environment in GxE research refers to anything, it must refer to context. We know that the cultural context within which individuals function can be configured in a myriad of ways. Therefore, it is entirely plausible that the variable replication of results in GxE research is a result of the ways in which the environment changes from one study to the next, in subtle and nuanced ways.
Why have GxE researchers stuck to an impoverished view of the human environment? I think because most of them have been more interested in the genes than in the environment (and that is, of course, an important way to proceed). Since a number of researchers have found interesting results employing a measure of experience in the social environment as easy and straightforward as stressful life events, it simplified research design to stick to that. It does, however, leave the environment in GxE work under-specified.
My colleagues and I ventured into this area some years ago when we found, in a small subsample from a larger study in Brazil, an interaction between cultural consonance in family life and a polymorphism in the 2A receptor for serotonin. Those individuals with a single variant of the gene, if they had low cultural consonance, reported exceptionally high depressive symptoms; those individuals with that same variant, if they had high cultural consonance, reported exceptionally low depressive symptoms. Persons with other gene variants exhibited the same inverse association of cultural consonance and depressive symptoms, but not as strong.
When we later tried to replicate the results in a larger and more representative sample, as is common in this literature, we could not. We did, however, find an interaction between childhood adversity and the 2A receptor polymorphism. This in turn was mediated by cultural consonance in family life, and the interaction and mediating effects were most pronounced among lower social class respondents.
This results in a model in which the interaction of childhood adversity and the 2A receptor polymorphism is mediated by cultural consonance in family life, with the whole process moderated by social class.
This is a far cry from a simple interaction between life events and a gene. And, given that there is a cultural mediation of a gene-environment interaction, which in turn is moderated by social class, the sociocultural context looms large in the overall biocultural process. This approach does not lend itself well to the genome-wide scan, either, since instead of comparing two groups, you would end up having a complex, multiple group comparison.
I won’t argue that more complex models of GxE, specified within a sociocultural context, are the only way to proceed. I only want to emphasize that this is an important way to proceed, and that biocultural anthropology is uniquely situated to contribute to it.
Every so often a piece of research comes along that is a real game-changer—it shakes the earth under your feet. I had that experience about a year ago when Anne Case and Angus Deaton, two economists, published an analysis of recent mortality trends in the United States. If you search “mortality trends” for the U.S., you will see that, overall, mortality rates are declining, as they are for Canada and Western Europe. What Case and Deaton did was to separate out mortality rates for non-Hispanic Whites. Starting about the year 2000, rather than continuing to decline like everyone else’s, mortality for this group bucked the trend and started to climb. When causes of mortality were examined, deaths from lung cancer were declining, deaths from diabetes were stable, and deaths from three causes were climbing, dramatically. These were chronic liver disease, suicide, and what Case and Deaton refer to, by default since that’s what the feds say, as “poisonings” (read: unintentional drug overdose).
I’ve taught epidemiology off-and-on for a long time, and mortality rates just don’t jump around like this, unless, that is, something catastrophic is happening. An example of a catastrophic event leading to high mortality rates was the fall of the former Soviet Union. In the decade following that political upheaval mortality—especially male mortality—climbed, fueled by a potent combination of vodka and cigarette smoke.
A further component to their findings was that the changes in mortality rates were highest among non-Hispanic Whites who had a high-school education or less. In 1999, rates of death from “poisonings” were 4 times higher for people with a high-school education or less than they were for people with a college degree. In 2013 those death rates were 7.2 times higher for the less-well educated versus the well-educated.
I felt so compelled by this evidence that I dropped what I was doing in my classes—one on cognitive anthropology and one on the history of anthropological theory—and taught the Case and Deaton paper. Even though it caught a lot of attention in the national press, at least for a while, I was afraid that it would escape the notice of many of my students and, furthermore, that they might not really appreciate the magnitude of the results.
The pattern of results suggests that non-college educated Whites are experiencing some kind of profound stress and that in response they are self-medicating with alcohol (hence chronic liver disease) or with prescription opioid pain medication (with its attendant risks of overdosing), and that they are responding with major depression and the associated risk of suicide. In their interpretation, Case and Deaton emphasized the stress of economic insecurity for working class Whites, noting that widening inequality might account for the trend. This would seem to me to affect other population groups—like African Americans—even more than non-college educated Whites, yet the trend toward higher mortality from these causes is not observed in other population groups. Case and Deaton also suggest that it might be specifically the transition in retirement programs from guaranteed benefit plans to defined-contribution plans, with their associated stock market risk. In this interpretation, looking forward to an uncertain and possibly impoverished future is the source of stress.
For obvious reasons—and I’m talking about the election season in which we find ourselves—I have continued to think about this research, given the prominent place that Whites with a high school or lower education seem to be playing in support of one of the major candidates. Is it, to quote a political strategist from a past campaign associated with the other major candidate, “the economy, stupid!” Or, is it that, and something more?
As I considered the findings, I was reminded of an old paper by James P. Henry and John C. Cassel from the American Journal of Epidemiology in 1969. They examined cross-cultural data on age and blood pressure, noting that, while many physicians believed the rise of blood pressure with age to be “natural,” it was in fact “cultural.” In many communities around the world, especially those that had yet to be drawn very closely into capitalist, market economies, there was little evidence of an increase of blood pressure with age. In what were called (back in the day) “modern” communities, blood pressure rose with age.
To explain these results, they drew on a process that Cassel had been thinking about for some time, namely, the inconsistencies and incongruities that can accompany profound culture change. Cassel’s preferred research strategy had been to follow migrants into a new setting, where he predicted that the incongruity between the culture they arrived with and the culture of their majority host community created a period of stressful and taxing adaptation, as the migrants tried to adjust to their new setting. The end result of this stressful adjustment, particularly if it was not successful, was an increased risk of disease. Henry and Cassel suggested that the same process could be occurring across the life-span of an individual, arguing that in the modern world, with the ever increasing pace of social change, an individual is born into and socialized in one culture, yet ends up living in another, as the world changes around him or her.
This strikes me as an eminently plausible interpretation for the Case and Deaton findings. Non-college educated Whites are indeed facing economic stresses, but the broader cultural changes they are experiencing are even more profound. And what can more effectively and graphically communicate to them that the world around them has changed than the fact that they will shortly trade their first African American president for their first female president? (I trust Sam Wang and his Princeton Election Consortium.)
Cassel drew heavily on culture theory for his insightful interpretation of epidemiologic data. Case and Deaton’s findings suggest that those insights are still relevant.
The question of what an anthropology degree means, especially in cultural anthropology, has been asked ever since I was an undergraduate (back when I saw Pigpen on keyboards with the Dead). As things change, in the academy as in the world around us, there is a certain renewed urgency in that question, as we prepare students to do: what? (And don’t for a second think that I regard a university degree as vocational training.)
The “what” will be what anthropologists have always done. Some will continue in the academy, both in traditional faculty roles and in new ways of teaching and doing research. Others will become applied anthropologists in government and non-profits. More will likely forge new roles for themselves in the shifting landscape of the marketplace. How do we help?
“Bringing something to the table” is a hackneyed but nonetheless useful phrase, and that is of course how we must help in educating anthropology students. The student of anthropology must bring something to the table. That mythical table will be set for some in universities, although it seems for more it will be in novel settings, and ones in which the table will be shared (contested?) by those from other social sciences.
The main dish we bring to the table is the concept of culture and the overarching framework that people and what they do are shaped day-to-day by this mysterious miasma of shared knowledge. And they, in turn, modify that shared understanding in response to changing circumstances. Grasping this and all of its implications is what anthropology is all about. This was, of course, Malinowski’s directive—“to see the world as others see it”—and while other social sciences flirt with this perspective, it remains at the core of anthropological thinking.
Bringing this perspective, however, will get you nowhere if you can’t demonstrate its utility, especially in hard-nosed settings like interdisciplinary research groups, applied projects, or in business. This hinges in part on what we mean by demonstrate. An online dictionary defines this term as “clearly show the existence or truth of (something) by giving proof or evidence.”
We are, in part, talking about methods that our students use to demonstrate the utility of their perspective for explaining something. But this will not be an exhortation just for better methods, mixed methods, or more rigorous qualitative methods. These appeals are correct and important and have been voiced for a long time. What I want to argue for, however, is the development of a configuration of methods that can uniquely capture empirically, in a way that can be clearly communicated to others, the singular contribution of an anthropological perspective.
Research methods are often presented in exhaustive compendia, or, continuing the table metaphor, a smorgasbord. The budding researcher is faced with a vast array of research methods, just like a vast buffet of potential consumables, especially in this day and age of mixed methods. We teach methods as being suited to particular problems: you choose the best set of methods for the problem at hand. Yet alighting on the best set of methods can be a very difficult task, especially when we are trying to pull together traditional tools of ethnography and quantitative techniques.
I’ve come to think lately about this in a somewhat more focused way, and it goes back to that Malinowskian directive, interpreted from a mixed-methods mindset. We want to understand the world as others see it, then what? The mixed-methods orientation says that we then go on to quantify that in some way. It is worth stopping and reflecting on what that means. In strictly emic terms, seeing the world as others see it is to discover the categories and modalities that people use as their taken-for-granted reality. From a measurement standpoint, quantifying that means coming up with a way to order people along a continuum in terms that they themselves have defined. By ordering people along such a continuum, we can in turn relate that variation to variation in any other variable. Such a measurement strategy generates what Kathryn Oths and I have termed high “emic validity,” which in turn can be used in examining anything you care to study, alongside the etic measurements that are staples of other social sciences.
There are a variety of ways of doing this, and for examples I would start with Lance Gravlee’s research on race in Puerto Rico, Lesley Jo Weaver and associates’ studies of mental health, François Dengah’s studies of religion, as well as my work on cultural consonance. These are all empirically successful approaches in capturing that emic perspective in ways that are both theoretically and methodologically satisfying.
This is something special to bring to the table. This approach requires a rigorous and systematic attention to a way of understanding human existence. It requires mastering a specific set of qualitative and quantitative research skills. And it requires staying true to a particular vision of anthropology. Furthermore, it is a unified perspective that can be taught at any level of study in anthropology.
At this point I would be remiss were I not to give a shout out to a few people who have done our field immeasurable good by putting their energies and efforts behind providing the training to students in anthropology to do just this kind of thing. I’m talking about Russ Bernard, Jeff Johnson, and Sue Weller and the NSF-funded Summer Institute in Research Design (SIRD). The SIRD is coming to a close this year, after providing some 340 anthropology students over 20 years with absolutely top-notch education and critique as they embarked on their dissertation research. They, along with the support offered by Stu Plattner and Deb Winslow at NSF, deserve all our thanks for all they’ve done to enhance anthropological research.
2014 was an interesting year for the concept of culture. Merriam-Webster declared ‘culture’ its word of the year, in that more people looked up its definition online than any other word. Then, on the website edge.org, the question was posed: what scientific idea should be retired? No lesser luminaries than Pascal Boyer and John Tooby responded: culture. Hmmm…

(Pictured: E.B. Tylor, arguably the author of the first true anthropological definition of ‘culture.’)
I will declare first that I belong to the ‘culture-is-too-important-a-concept-to-be-jettisoned’ wing of anthropology. And, I think a useful concept of culture is well within reach.
My perspective is that the concept of culture ought to do something. Concepts are tools, after all, and a tool needs to be useful. It has work to do. Culture must be put to work in the service of research and explanation. Culture as a concept must function both in a network of theoretical constructs to account for some phenomenon, and in a network of operational constructs that enables us to reach into the world and capture phenomena in observation.
Curiously, though, in much work culture as a term may not appear at either level. Culture often occurs as little more than an orienting construct, indicating to a reader what direction an argument will (or won’t) take. At some level we are all crypto-Tylorians. If culture is ‘that complex whole,’ then we just declare that’s what is important, and then we go on to talk about class, gender, race, or whatever, because it’s all culture (right?).
In 1934 Sapir wrote that a less comprehensive, more focused concept of culture “…will turn out to have a tougher, more vital, importance for social thinking than the tidy tables of contents attached to this or that group which we have been in the habit of calling ‘cultures,’” although he didn’t specify precisely that focused concept.
I propose that five questions must be adequately answered (note ‘adequately,’ not ‘ultimately’) to get the tool I want, and perhaps the ‘tougher, vital’ construct Sapir envisioned. These five questions have bedeviled culture theory since Tylor, although there certainly are others as well. But I think these need answers in order to move our endeavor forward. They are:
(1) What is culture made of? In highfalutin’ terms this is the issue of ontology. Key explanatory terms must reach into the world to latch onto phenomena that are epistemically observer-independent (i.e., knowledge of which does not depend on the mind of a single observer). And, as John Searle argues, an ontological account of culture must be consistent with what we know about the rest of the world (like cognitive neuroscience, language, and human information processing). We don’t get to invent a new order of reality.
(2) Is culture a term that refers to aggregates or individuals? This is the part-whole problem debated in social thought for quite some time, in various guises. Another way to approach this is: what is the locus of culture, the group or the person? I think it is both, but a satisfying account of that must explain how, not merely assert that it is so.
(3) How do we account for variability? This is the issue of ‘intracultural diversity,’ and a description of variability must apply to both the part and the whole.
(4) What is the relationship of culture and behavior? In anthropology culture has been thought to: cause behavior; result from behavior; be abstracted from behavior. This needs to be sorted out systematically (frankly it’s probably ‘all of the above,’ but there has to be an account of that).
(5) What is the relationship of culture and other theoretical constructs—like ‘value,’ ‘belief,’ ‘attitude’—that are thought either to be subsumed by or to compose culture? These social-psychological terms appear prominently in explanations of human behavior, and if culture is a part of those explanations, how does it relate to those other constructs?
A contemporary cognitive theory of culture can adequately address each of these questions. To wit:
(1) Culture is the knowledge we use to function in a given social system. As Searle has shown, that knowledge is of a special kind, generated by a certain class of speech acts. These speech acts, which Searle refers to as ‘constitutive rules,’ literally construct the world around us. And this is an ontological account consistent with what we know of our biological and evolutionary history.
(2) Cultural consensus theory and its associated formal model have shown that there is indeed an aggregate culture in the sense of a knowledge-set that cannot be found in any single person’s mind. Nor is this a pious pronouncement that the whole is greater than the sum of its parts, but rather empirically demonstrable. At the same time, each of us carries around versions of that knowledge-set that place us more proximate to, or distal from, the aggregate knowledge-set. In a non-mysterious way, culture is a term that applies to both individuals and aggregates.
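The claim that the aggregate knowledge-set is empirically demonstrable can be illustrated with a small simulation of the logic behind consensus analysis. This is only a sketch of the idea, not the full formal model, and every number in it (informant count, item count, competence range) is an arbitrary assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_informants, n_items = 30, 60

# One shared "answer key" (the aggregate knowledge-set) and individual
# competences: each informant knows the key with probability d_i,
# otherwise guesses at random.
key = rng.integers(0, 2, n_items)
d = rng.uniform(0.4, 0.9, n_informants)          # cultural competence

answers = np.empty((n_informants, n_items), dtype=int)
for i in range(n_informants):
    knows = rng.random(n_items) < d[i]
    guess = rng.integers(0, 2, n_items)
    answers[i] = np.where(knows, key, guess)

# Observed inter-informant agreement, corrected for guessing:
# with two response categories, a raw match rate m becomes 2m - 1.
match = (answers[:, None, :] == answers[None, :, :]).mean(axis=2)
agree = 2 * match - 1

# Under a single consensus, agree[i, j] is approximately d_i * d_j for
# i != j, so the matrix is roughly rank one; the leading eigenvector
# then estimates each informant's competence.
np.fill_diagonal(agree, 0)
vals, vecs = np.linalg.eigh(agree)
ratio = vals[-1] / max(vals[-2], 1e-9)   # large ratio -> one consensus
est = np.sqrt(vals[-1]) * np.abs(vecs[:, -1])
print(f"eigenvalue ratio: {ratio:.1f}")
print(f"corr(true, estimated competence): {np.corrcoef(d, est)[0, 1]:.2f}")
```

The single dominant eigenvalue is the signature of one shared answer key, and the leading eigenvector recovers each informant’s competence, even though no individual’s answers match the key exactly: a culture that belongs at once to the aggregate and to each person.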
(3) Culture is variable within a social group in the sense that there may be multiple models, even in the context of an overall cultural consensus (again, empirically demonstrated). And of course culture is variable in the sense that any subset of people have varying degrees of idiosyncratic biographical influence on their personal configuration of that knowledge.
(4) Culture as a variably shared knowledge-set gets variably translated into behavior. This is what I have called ‘cultural consonance.’ Just because you know something doesn’t mean you get to act on it, and variation in cultural consonance is both systematic and can have profound effects.
(5) This knowledge-set called culture underlies other social-psychological constructs. Your understanding that, for example, the American cultural model of marriage is beginning to extend to same-sex partners doesn’t explain your beliefs, values, or attitudes about that understanding.
This thumbnail sketch of a culture theory that works is just that—a sketch. Fortunately, there is a growing body of empirical studies that demonstrate its utility, and perhaps this culture theory provides both the toughness and vitality that Sapir envisioned.
At the beginning of the semester, in my class “Culture, Mind, and Behavior,” I started thinking about this topic, because the class is devoted to cognitive culture theory, including the concept of cultural consonance. Cultural consonance is the degree to which people incorporate into their own beliefs and behaviors the cultural prototypes for belief and behavior encoded in shared cognitive models. In other words, it’s how closely people match up with the culture around them. Over a number of years of research, my colleagues and I have reliably identified shared cultural models in a number of domains (e.g., family life in Brazil), and we have found that higher cultural consonance–that is, people actually believing, for example, that their own family matches the prototypical Brazilian family–is associated with better health status (such as lower stress and depression, and lower blood pressure).
Whenever I lecture about or teach this material, inevitably there is somebody who raises some form of the following objection: well, what about people who reject the shared cultural model and follow their own personal model of how life is to be lived? Sometimes it’s a student who, I often suspect, is offended by the notion that he or she is not all that “special,” i.e., that what he or she thinks or does is actually a variant of what a lot of other people are thinking and doing, simply because they are all working from the same template. Other times it’s a more principled objection from anthropologists who are taken with the notion of personal agency and (mistakenly) think that I think that people are what the Brits charmingly call “cultural dopes,” i.e., we are all cookies stamped out by the cultural cookie cutter. Of course, the concept of cultural consonance is completely compatible with an agentic perspective, and in fact I’ve got at least one paper planned for the future to explore how that works in some detail. In any event, however it is framed, there are people who object to the notion of cultural consonance because they think that personal, individual models would trump the influence of cultural consonance were I to measure and incorporate them into my analyses.
At the outset, I would note that conceptualizing and measuring the concept of “personal consonance” or “individual consonance” is a much thornier issue than you might think. Remember that to measure something, people have to be presented with the same stimuli, or, in our business, asked the same questions. If you are committed to the idea that people have individual or personal models, then logically each individual would have to be asked only about their own model and how committed they are to it. How could that be turned into a comparable measurement from one person to the next? An alternative to identifying each individual model would be to ask people questions like “I always try to do what is most important to me,” and have them respond on a Likert scale. Well, congratulations, you have just re-invented 1950s social psychology and the psychological construct of self-efficacy! Don’t get me wrong, I’m all in favor of incorporating a psychological construct like this into this research, and seeing how it might alter the effect of cultural consonance (it doesn’t). But my main point is that measuring what people mean by personal or individual models is very, very difficult.
Let’s assume, however, that consonance with an individual model, or “IC” (for individual consonance), can in fact be measured, and hence both IC and cultural consonance (or “CC”) can be examined as influences on health outcomes (“HO”). What would happen? At this point I’m going to digress briefly to reiterate something one of my favorite bloggers, Paul Krugman, has written from time to time, and that is the importance of your theoretical model in thinking through a problem. It’s one thing to say, for example, well, somebody’s IC could be more important than their CC. This is on a par with saying, “well, I could learn to levitate.” Yes, maybe, but if your theoretical model of the world includes something called gravity, then you have to think more carefully about this levitation business. It’s the same in anthropology. Yes, somebody’s IC could trump their CC in influencing HO, but what is likely to happen given a particular theoretical model of how the world works?
Here is a theoretical model, in path-diagram form: CC → HO; IC → HO; and CC ↔ IC (the two are correlated).
This theoretical model says that CC is associated with HO, and IC is associated with HO, and in some way, CC and IC are correlated. To simplify things, let’s assume that HO is measured in terms of positive health—like better self-reported health or positive affect—so that being culturally consonant and being individually consonant are associated with feeling better, so the association is in a positive direction.
This simple exercise clarifies things a bit. We already know that CC is associated with higher HO (this has been replicated many times by multiple investigators). The question is what happens when IC is introduced into the picture? Well, it depends, and it depends exclusively on how you conceptualize the correlation between CC and IC. What kind of operational or statistical model can we use here? I’ve already been talking about correlation, so that’s my operational model. When, in looking at the correlation between CC and HO, I take into account the correlation of IC with HO, and of IC with CC, what happens?
In somewhat more technical terms, what we want to do is to remove the effect of IC on CC, and remove the effect of IC on HO, and see what is left over in terms of the correlation of CC and HO. Or, put differently, we want to look at the correlations of the residuals among all of these variables.
That sounds like a statistical mouthful, but it actually can be understood very simply by looking at just the numerator of a partial correlation coefficient (which, I might add, is the same as the numerator of a partial regression coefficient; the only difference is in the denominator, or by what you are standardizing). Here’s the numerator, spelled out in prose:
[Correlation of HO and CC] − [(correlation of IC and CC) × (correlation of HO and IC)]
What this says is that if we want to control for, or otherwise get rid of, the influence that IC has in looking at the correlation of CC and HO, we have to subtract out the product of the correlation of IC and CC and the correlation of IC and HO. You don’t even have to be all that much of a statistical heavyweight to get this. Correlation = co-variation. To “purify” the co-variation of CC and HO, we have to get rid of the co-variation of HO and IC, and the co-variation of IC and CC (see the model above).
And the size of that second term hinges on the correlation of IC and CC. Think about the simplest case, where the correlation of IC and CC = 0. If you multiply the correlation of HO and IC by 0 (zero), the second term in the equation above becomes 0, and the correlation of CC and HO is completely independent of the correlation of IC and HO.
What happens if the correlation of IC and CC is positive? In this case, the product being subtracted would itself be positive, and the correlation of CC and HO would be reduced in proportion to the size of the IC/CC correlation (and again, it is the IC/CC correlation that drives the direction of the change).
Finally, if the correlation of IC and CC is negative, the correlation between CC and HO would increase in proportion to the magnitude of the IC/CC correlation (remember that in this case the second term in the equation would be negative, and then you would be subtracting a negative number, which makes it positive).
To summarize: when the IC/CC correlation is zero, no effect. When the IC/CC correlation is negative, the CC/HO correlation goes up. Only when the IC/CC correlation is positive would the CC/HO correlation potentially go down, and then only in proportion to the magnitude of the IC/CC correlation.
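These three cases can be checked with a quick simulation. CC, IC, and HO here are simulated stand-ins, and the effect sizes (0.5 and 0.3) are arbitrary assumptions; the point is only the behavior of the numerator, not any substantive estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def simulate(rho):
    """Simulate CC and IC with correlation rho, and an outcome HO that
    depends positively on both (effect sizes are arbitrary)."""
    cc = rng.standard_normal(n)
    ic = rho * cc + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    ho = 0.5 * cc + 0.3 * ic + rng.standard_normal(n)
    r = lambda a, b: np.corrcoef(a, b)[0, 1]
    return r(cc, ho), r(ic, cc), r(ic, ho)

results = {}
for rho in (0.0, 0.4, -0.4):
    r_cc_ho, r_ic_cc, r_ic_ho = simulate(rho)
    # Numerator of the partial correlation of CC and HO, controlling IC
    numerator = r_cc_ho - r_ic_cc * r_ic_ho
    results[rho] = (r_cc_ho, numerator)
    print(f"rho(IC,CC) = {rho:+.1f}: zero-order r(CC,HO) = {r_cc_ho:.3f}, "
          f"partial numerator = {numerator:.3f}")
```

When the IC/CC correlation is zero, the two numbers coincide; when it is positive, the partial numerator shrinks relative to the zero-order correlation; when it is negative, it grows.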
This little exercise should help skeptics of cultural consonance re-think their critique. What do they really mean to say? If they are saying that individuals can have their own cognitive models of the world that set them apart—and this is, I think, what most of them are trying to say—then it turns out that this does not alter the effect of cultural consonance. Another way of thinking about this is in terms of the scatter of data-points around a regression line in a scatterplot. If the plot is of the correlation of CC and HO, we will see that many people cluster around the regression line, i.e., as their cultural consonance goes up, their health outcome improves. The people farther from that regression line are the people who, perhaps, are adhering more closely to their individual models. If we take that into account, we are reducing the noise in the data, and the effect of cultural consonance on health outcomes becomes clearer (technically, the standard error of the regression coefficient will go down). Or, IC and CC simply become independent influences on health.
Alternately, skeptics might be saying that IC and CC are negatively correlated. This would be equivalent to saying that, overall, people who adhere to their own individual models also explicitly respond in the opposite direction to questions about shared cultural models. Well, maybe, but think about this explicitly. In Brazil, for example, the cultural model of social support says, in part, that in response to many problems, you start by seeking help and assistance from your family and friends, and then you gradually seek help in less intimate relationships of work, church, and ultimately professional supports like doctors and lawyers and such. The “IC and CC are negatively correlated” position would argue that in describing how you follow your own model of social support, you also describe yourself in relation to the cultural model of social support in the opposite direction, i.e., you never ask family and friends for help, and you exclusively ask strangers and professionals for help. OK, if one or a few people answer like this, it’s just the wacky nature of humans. But for it to affect the correlation of CC and HO, you would have to have a set of people systematically responding in this way. Hmmm…and of course, even if you want to believe this, it is in this case that controlling for IC would cause the CC/HO correlation to go up.
The third position is that IC and CC are positively correlated. If this is so, it calls into question the whole notion of an individual model and individual consonance, since it (individual consonance) would turn out to be some version of the cultural model and cultural consonance. Actually, some skeptics fail to appreciate that, in one sense, cultural consonance is the personal, lived model, formed as people knit together a life for themselves in the context of the environment of shared meanings (the cultural model) and the various factors that enable them to act, or constrain them from acting, on those shared meanings. A super skeptic—like a full-blown psychological reductionist—could argue that the whole theoretical construct and measurement of cultural consonance is epiphenomenal to individuals, with individual models, making individual decisions, of how to live based on those models. They just turn out to be similar from one individual to the next, probably based on some neurocognitive module selected for in evolution, or some basic personality construct, or whatever. This seems highly implausible, however, given that the construct of cultural consonance is based on a well-developed, well-articulated cognitive culture theory, along with a well-understood measurement model. The full-blown psychological reductionist would have to argue that the theory and method to derive cultural consonance is a social scientific version of reading chicken entrails. They may believe that, but it is a hard case to make.
So, to me the most plausible way that a consonance with individual models would work in this process is the independence of individual consonance and cultural consonance. If this is the case, bringing individual consonance into the theoretical model would not alter the influence of cultural consonance on health outcomes, except in the relatively trivial sense that controlling for individual consonance would increase the statistical significance of the coefficient assessing the effect of cultural consonance (because the standard error would go down). Where I’m skeptical is in regard to the measurement of individual consonance. I have difficulty envisioning a truly satisfying measurement.
In the final analysis, here, I return to the wisdom of Paul Krugman. Your theoretical model is of paramount importance. It is through your theoretical model that you can sort out the implications of various alternatives, just as I have done in this post. Models of every variety—theoretical scientific models, shared cultural models, personal models—are good to think with.
The phrase “a working definition” is something that is encountered frequently in the literature in the social sciences. As an adjective, “working” is usually used in the following sense that appears in Webster’s: something that is “adequate to permit work to be done.” Note the use of the word “adequate.” There is the connotation of a definition that is rough-and-ready, somewhat unrefined, but that will suffice for the moment. At the risk of being accused of making one of those little academic ironic jokes—and if I am so accused, I will confess immediately that I am guilty—I intend to use the phrase in a different way. What I mean to talk about is a definition of culture that works, that can be used as both a theoretical and a methodological tool in understanding—in short, a definition that really does something.
The reason that I am approaching this essay in this way is because of the occasion: the considerable honor of having been chosen for this year’s Burnum Award.* This award is made on the basis of an overall research career, and hence this lecture is my opportunity to engage in a kind of retrospective examination of that research career. It has been 30 years since I decided, as a junior at Grinnell College, to pursue anthropology as a profession—which sounds like a long time even to me, although it feels like a short time. There are many ways I could think about and talk about those 30 years. My own view of what I’ve been doing really has most to do with the core idea of the field of anthropology: namely, the concept of culture and how to make it work in the research process.
My area of research is the intersection of culture, health and healing. What anthropologists like me do is to go around the world examining how culture shapes both the risk of disease and what it is that people do to recover from disease or illness. Obviously, we are talking about a wide range of questions encompassed by this area. In my own research, I have concentrated on the initial stages, namely, falling ill. How does culture shape that risk of disease?
The first question here is: what evidence is there that culture shapes disease at all? The short answer to that is: the epidemiologic transition.
In Figure 1, we see several countries in the Western hemisphere, comparing all-cause child mortality rates and mortality rates from coronary heart disease (CHD). All-cause child mortality can be used as a proxy for various kinds of infectious and parasitic diseases (often summarized in official statistics under the heading diarrheal diseases) that tend to wreak greatest havoc among the most vulnerable in a population. CHD is foremost among the variety of chronic diseases. In some countries, child mortality equals or exceeds chronic disease mortality, while in others child mortality declines dramatically and chronic disease mortality increases equally dramatically.
What accounts for this difference? Some obvious answers come into play. Basic infrastructure like clean water and effective sewage systems, plus immunization programs, make a big difference. Also, in the process of economic development people have tended to become more sedentary with the related risk of obesity, which can contribute to many chronic diseases. The quality of our diets has changed, with much less fiber, more fat and more of other nutrients like sodium. Does the combination of these factors not account for these differences?
Well, actually, no. Certainly all of these factors play a role in the process, but even after their combined effects are removed, there are still societal differences in disease rates that are left unexplained.
Figure 2 shows another brief example of how sociocultural factors shape disease. The increase of blood pressure with age, as shown here for the West Tuscaloosa community, is taken to be “natural.” But if we compare this age distribution to the Zoró Indians of the Amazon basin, we see that the rise is not necessarily “natural,” but in some sense relative to cultural context. Again, a typical approach to unraveling these differences would be to look at issues such as diet and physical activity, and perhaps genetic predisposition.
But I want to take a moment to reflect on the logic that is being employed here. This logic unwittingly employs what has been called the “onion metaphor” of human beings. That is, we can forget about the Zoró’s mixed horticultural and fishing subsistence economy; we can forget about their system for tracing kinship relationships that is more complex than our own; we can forget the way in which they form household and family relationships; we can forget about their origin myths and conceptions of the supernatural. We can, in other words, strip away everything that makes them culturally different, in order to look at their physical activity or their diet to explain their blood pressure. Like stripping away the successive layers of an onion, this metaphor goes, we can strip away cultural difference to get at what is psychologically universal about people; we can strip away belief, value and personality and just look at behavior; we can strip away behavior and look at nutrient transport in the circulatory system; we can strip away physiologic process to look at base-pair coding. We can, ultimately, find our way down to what is fundamentally causative.
Or, can we? Could it be that the onion metaphor is just that, a metaphor that says more about how we look at the world and less about how the world really works? Could it be that we are as thinking, feeling, interacting, and, yes, biological entities, so suspended in a matrix of culture that to think we can strip it away as mere surface appearance so violates the phenomenon that we misunderstand it?
To even entertain this thought demands a way of conceptualizing culture that is subtle and nuanced, and at the same time that is hard-nosed and pragmatic. The concept of culture has to do some work in the research process.
So, what do we mean by culture? A fairly typical view, both in common language and in the way anthropologists have approached their work, sees culture as a shared body of custom, reproduced through time, that makes societies distinctive. Over a century ago, this kind of view of culture emerged in anthropology as an alternative to racialist thinking. Traders, travelers, warriors, missionaries and others had been covering the globe for some time, documenting the astounding variety of human social systems with the myriad ways that people found to resolve basic problems of finding food and shelter, avoiding predation, and reproducing themselves. In part, because people with these different customs also looked very different from the Europeans who visited them, there were appeals to biology to explain custom. People were thought to behave differently because they were biologically a different—and explicitly inferior—sort of creature.
These views were challenged by the anti-racist formulations of Franz Boas and his students. Simply put, they argued forcefully that other people were not biologically different from people of European ancestry, but were different rather because they had a different culture. Culture in this sense was a name put to the total lifeway of a people. From growing food to marrying to having and raising children to governing communities to imagining the supernatural, different peoples did—not just some things—but everything differently. These ways of getting things done were routinized and regularized and learned anew by succeeding generations of a society or community. This totality of the lifeway was called culture, and the learning of it by each generation served as an effective alternative to racial determinism.
The question then became: what was the “stuff” of culture? What was culture made of? How did it get from one generation to another? How do you know it when you see it?
Answers to these questions were generated in the historical context of early ethnographic research, or the documentation of cultural patterns in different societies. In doing cross-cultural research, ethnographers looked for regularities in learned behavior that could in turn be used to make inferences about the larger systematic design for living called culture. Your job was primarily to decode and describe that design, and not to worry too much about how some people may or may not deviate slightly from the pattern. The differences within a society, especially individual differences, were just noise in the system. And it’s important to remember just how difficult it was to decode that pattern, as you were far from home, working in a second language, and trying to understand remarkably different ways of life. The more simplifying assumptions, the better.
In a sense, you could see the people in the picture as merely the space-and-time bound carriers of a cultural tradition. They happened to be there at the moment, but at another moment those particular people would be gone and you would have another set, but still carrying on that same cultural tradition. Your job was to understand the tradition, not the particular people who carried it on at the moment.
When we talk about, for example, “British culture,” we don’t really suppose it is there only because the Brits who happen to be alive right now believe and act in the ways they do. British culture was an entity in 1902 and is one in 2002 and probably will be one in 2102, regardless of the people. This gives culture a sense of “externality,” something first articulated by Herbert Spencer in the 19th century. It really does feel as though culture exists “out there.” We seem as individuals to be casting about within the confines of our own cultures. And this is something that continues to surprise students of culture in the 21st century. So, a working concept of culture must be able to account for this really quite peculiar property of culture.
But, having said that, I’m not advocating a kind of “swamp-gas” theory of culture. It’s not out there floating around with us breathing it in (or choking on it, as the case may be). Where culture resides can only be in individual human beings. Furthermore, if we are interested in the biological impact of culture, we have to be able to trace it from “out there” to “in here.” But, we have to somehow reconcile this external quality of culture with its locus in the individual.
One way of getting at this is to stop and think about what is really important in culture and cultural differences. Is the fact that I’m wearing this suit today really important as far as my culture is concerned? Well, sort of, because I am wearing this suit, as opposed to a grass skirt or a Brazilian carnaval costume or even nothing at all. But what is probably more important than my wearing this suit is that I knew, I understood that wearing this suit was what you expected of me. We shared the knowledge that this was the right thing for the occasion. Imagine if I had shown up here to present my Burnum lecture wearing old sneakers, cut-off jeans, and a baseball cap that said Auburn Tigers on it. Probably I would not have gotten tossed out—although the Auburn part might have done me in. More likely than not, you all would have looked at me, shifted uncomfortably in your chairs, and thought something like: “what is this world coming to when they give the Burnum to the likes of this joker?” I would, in other words, have failed to live up to our shared understanding of the world in my behavior.
Now, as basic a sketch as this is, there are a couple of useful ideas implicit in this example. First, there is the shared knowledge or shared expectation. Social life—all of human life—only works because we share various understandings of the world. Everything we do we can do because of these shared expectations. One way of referring to these expectations and understandings is as “shared cultural models.” Second, I’ve just suggested how we can distinguish between culture and behavior, which actually will turn out to be quite important in the story I’m building here. As I said, we may have shared expectations regarding behavior and social interaction, but for various reasons, some people may not fulfill those shared expectations in their own behavior.
This is a very brief sketch of a theory of culture on which I have been working, in one way or another, for quite some time. But, is it good for anything? That is, does it “work”? To examine this issue, let me turn briefly to some of my empirical work. As I said, a basic observation on which all this work is founded is illustrated in Figure 3, showing how average blood pressure levels vary across different kinds of societies. Here the societies are categorized along a continuum of sociocultural complexity, ranging from the simplest foraging societies, to the most complex industrial states.
But we can break the pattern apart in more precise ways. Figure 4 shows an example of blood pressure differences among communities in Samoa, in the South Pacific, arrayed along a continuum of modernization. The term “modernization” here is just a shorthand descriptor for a variety of differences among the communities. These differences include subsistence technologies (in the traditional community, people grow yams and herd pigs for their own consumption, while in the modern community people work in factories); patterns of social interaction (in the traditional community, people are much more embedded in their extended family systems, while in the modern community people focus more on independent nuclear households); education and literacy (people in the traditional community receive relatively little formal schooling, while people in the modern community receive more); and, belief systems (people in traditional communities are embedded in a system of supernatural beliefs derived locally, while people in the modern community tend to be pulled into one of the globally institutionalized belief systems, like Christianity).
Why do people in the more modernized communities have higher blood pressures? Well, as I noted at the outset, the obvious answer to that question involves things like diet and physical activity, but taking those factors into account actually fails to explain all of the differences, although these factors clearly explain a part of those differences.
For years, one explanation for these findings has loomed large: the stress of culture change. Somehow, all of these changes in peoples’ lives are stressful, and the resulting stresses are associated with higher blood pressure. Now, this explanation is terrifically compelling, especially when linked with all of the careful laboratory studies showing how psychologically threatening events or circumstances can influence physiology. The problem, however, has been sorting out, in a conceptually precise way, just what this phrase—“the stress of culture change”—really means.
About forty years ago, there was a remarkable burst of activity in thinking about this issue at UNC-Chapel Hill, involving the epidemiologist John Cassel, the psychologist Dave Jenkins and the anthropologist Ralph Patrick. They were particularly interested in what happened to migrants from rural areas to urban areas, although the same reasoning can be applied to culture change occurring within any community. They offered the following hypothesis: the migrant to a novel setting carries with her a particular understanding of how the world works, in every sense (i.e., what it means to work, how marriages are constituted, how families treat one another and their neighbors, how to worship—everything). She is confronted, however, with a system for which her understanding may not work. The novel and dominant culture of the new setting must be learned for everyone else’s behavior to be understood and, indeed, for her to behave in ways that are understandable to others. She must, in other words, adapt to the new setting. Even if she is successful, such adaptation can be costly. Indeed, this is precisely what Hans Selye meant by the General Adaptation Syndrome when he gave the concept of stress its first scientific respectability in the 1930’s. Adaptation is costly, and the cost of adaptation is written on the body in terms of what we call health. So, Cassel, Patrick and Jenkins argued that the less successfully the migrant culturally adapts to the new setting, the higher her blood pressure.
Unfortunately, Cassel and his colleagues had neither the conceptual nor the methodological tools to really carry this project forward—or, to continue my theme, their definition of culture didn’t “work.” But what I have introduced here—namely the idea of culture as these shared cultural models, plus the idea of a person’s relative ability to really live in accordance with those models—gives us a way of attacking the problem. Simply put, realizing shared cultural expectations in individual behavior—or what I will refer to as “cultural consonance”—is in part a measure of how well individuals are able to adapt to their social milieu. And I would take Cassel’s model much further. We don’t need to limit our thinking to situations of migration, or modernization, or culture change because each of us, in our own way, every day, is engaged in the process of sorting out, in our own behaviors, these shared expectations. We are engaged in a daily endeavor to better adapt, and one way of thinking about that process is in terms of our success at meeting those shared expectations, or cultural consonance. I hypothesize that the higher a person’s cultural consonance, the better his or her health status.
I’ve been able to examine these processes in a variety of settings over the years, including, prominently, in Alabama. I arrived here in 1978 after doing my dissertation research around these topics in the West Indies. This conventional “modernization” view of things described well what had been going on in the West Indies for some 25 years. There modernization had been driven by a single economic innovation occurring in the early 1950’s: the introduction of the banana as a large-scale cash crop. And this is typically the case in the so-called Third World. Economic change drives societal modernization.
In Alabama in 1978, I began to explore the possibility of doing research on blood pressure in the African American community, and I tended to think about the community, and its experiences in the latter half of the 20th century, in terms analogous to the modernization paradigm. Black Americans in the South were denied participation in the modern world by the American version of apartheid that we called “segregation.” But a single political innovation—Brown v. Board of Education in 1954, and the civil rights movement spawned by that decision—changed everything. Like an economic innovation in the developing world, this political innovation changed not just some things, but everything, for the black community. Or, like the migrants to a novel setting described by Cassel, black Americans now had a whole new world opened to them. Let me hasten to add that this is a long, drawn-out process with which we are still dealing. But, in broad outline, this is a useful way of thinking about what occurred.
What I mean literally here is that the cultural models for everyday life ceased to be primarily autochthonous creations from within the African American community, and became instead creations more of the intersection of those models with general middle class American cultural models. Not that local meanings and understandings are irrelevant, but rather that black Americans have had a whole new set of circumstances, including a whole new way of understanding the world and its opportunities and its limitations, to which to adapt. What has the effect of all this been on their health?
We know the rate of high blood pressure among black Americans is 50% higher than among European Americans (Figure 5). In my work in the community here in Tuscaloosa (and, as I will briefly mention, in Brazil), I’ve tried to examine how these cultural stresses are implicated in the process. This is how I have gone about it. On the one hand, there are the cultural models, the shared ideas about how life is to be lived. On the other hand, there is the relative success with which people can approximate those cultural models in their behaviors. The link of model and behavior is cultural consonance. Assessing and measuring a representative sample of peoples’ behaviors is what social survey work is all about. The trick has been to get at the cultural models in a rigorous and systematic way; in a way that is faithful to theory; and in a way that we can directly connect to peoples’ behaviors as assessed in the survey.
Fortunately, in the mid-1980’s, Kim Romney and Sue Weller came up with a statistical model for doing just that that they call “the cultural consensus model.” I won’t go into the details here, but the consensus model can be used to determine the degree to which people share knowledge or ideas about some phenomenon. Remember that no sharing = no culture. And, if there is sharing, we can determine the content of what is shared. Having determined what that shared content is, we can then measure the degree to which peoples’ reported behaviors actually reflect that content and see if any disparity there is associated with health status.
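For readers who like to see the machinery, here is a toy sketch in Python of the core logic: agreement among respondents yields competence scores, which in turn yield a weighted answer key. The data are invented, and the real consensus model, which corrects for guessing and estimates competence formally, involves considerably more than this rough stand-in:

```python
# Rows = respondents, columns = yes/no (1/0) judgments about some cultural
# domain. Invented data: four respondents share a model; one diverges.
responses = [
    [1, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 0, 1],
    [1, 1, 0, 0, 0, 1],
    [1, 0, 0, 1, 0, 1],
    [0, 1, 1, 0, 1, 0],   # the divergent respondent
]

n = len(responses)       # number of respondents
m = len(responses[0])    # number of items

def agree(a, b):
    """Proportion of matching answers between two respondents."""
    return sum(x == y for x, y in zip(a, b)) / m

# Crude "cultural competence" score: a respondent's average agreement with
# everyone else. (The formal model derives competence from the first factor
# of the agreement matrix; average agreement is a rough approximation.)
competence = [
    sum(agree(responses[i], responses[j]) for j in range(n) if j != i) / (n - 1)
    for i in range(n)
]

# Competence-weighted "answer key": the culturally shared answer to each item.
total = sum(competence)
key = [
    1 if sum(c * r[k] for c, r in zip(competence, responses)) / total > 0.5 else 0
    for k in range(m)
]
print(key)   # the shared model: [1, 1, 0, 1, 0, 1]
```

Notice that the divergent fifth respondent earns the lowest competence score, and so contributes least to the answer key, which is the point: sharing is what gets weighted.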
OK, what are the important cultural models that people must live up to in order to achieve better health status? Well, obviously this is a big question, and one on which I am currently working hard. But for purposes of illustration let me pick one. There is probably no aspect of American middle class culture more highly valued than our lifestyles, by which I literally mean the kinds of material circumstances of life we can achieve, and the kinds of leisure time activities that go along with that. Thorstein Veblen placed lifestyles at the center of human motivation a century ago in his “The Theory of the Leisure Class.” Now, Veblen is well-remembered for his phrase “conspicuous consumption” to describe a rather vulgar pursuit of that lifestyle among the nouveau riche. He is, however, less well-remembered for this observation: “[for most people, achieving a particular lifestyle ]…is a desire to live up to the conventional standard of decency…[in the community].”
In other words, to be left behind with respect to the middle-class lifestyle in American society is to be seen to be, somehow, “indecent” as a person.
In one of our recent studies, carried out here in the African American community in West Tuscaloosa, we asked a small sample of persons to list and rate the importance of material goods and related behaviors as indicative of having had a successful life. The consensus model showed us that they agreed strongly on what that meant. Basically, it meant having a modest and comfortable, but not ostentatious, lifestyle, including such things as owning a home, a car, having nice furnishings, keeping up on current events, and, significantly, participating in one’s church. I think the inclusion of that last item speaks volumes about the sensitivity of this technique to local meanings in the black community.
We also conducted an epidemiological survey of households in the community in which we collected data on blood pressures and a variety of factors, including individual self-reports of their ownership of lifestyle items and their adoption of related behaviors. Cultural consonance in lifestyle was measured as the degree to which an individual’s reported lifestyle matched the lifestyle described in the cultural model (Figure 6).
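Schematically, that measurement looks like this (the item labels and the respondent’s data are invented for illustration, not drawn from the actual survey):

```python
# Items the consensus analysis identified as part of the shared lifestyle
# model (hypothetical labels standing in for the real survey items).
cultural_model = {
    "owns_home", "owns_car", "nice_furnishings",
    "follows_current_events", "active_in_church",
}

def cultural_consonance(reported_lifestyle):
    """Proportion of the shared lifestyle model this person reports enacting."""
    return len(cultural_model & set(reported_lifestyle)) / len(cultural_model)

# A hypothetical respondent who reports three of the five model items.
respondent = ["owns_car", "active_in_church", "follows_current_events"]
print(cultural_consonance(respondent))   # 3 of 5 items, prints 0.6
```

The score runs from 0 (no match with the cultural model) to 1 (a life fully consonant with it), and it is this score that gets carried into the blood pressure analyses.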
Figure 7 shows the relationship of systolic blood pressure, which has been adjusted to take out the effects of age, sex, body mass, income and various dietary variables, and cultural consonance in lifestyle. I think the relationship is pretty clear. The closer that a person can truly approximate in his or her own behavior the shared cultural model of lifestyle in the community, the lower his/her blood pressure. Furthermore, the farther one falls from the model, the stronger the effect, hence the curvilinear relationship. These results suggest that low cultural consonance may be a profound and chronically stressful circumstance that, in the long run, results in poor health status.
I assume that many of you are now playing the “my favorite variable” game. This is the game in which, after presenting data, someone jumps up and asks: “But did you control for __________ (fill in your favorite variable)?” I may be especially sensitive to this game, because I have spent a good bit of time presenting these ideas to psychologists, epidemiologists, nutrition researchers, and, yes, even internists. Well, I’ve been at this business for a long time, and I’ve managed to cram most of the variables that get mentioned in the research literature into studies, and so far, controlling for these other factors fails to dislodge the importance of cultural consonance.
What creates this state of affairs, in which people do not live in consonance with shared cultural models? Well, in the African American community, cultural construction collides with structural constraint. In the best of times, unemployment rates in the black community are twice that of the white community. More than a third of households live in poverty. Median household incomes are only about 60% of white household incomes. Hence, the likelihood that an individual can achieve even the modest lifestyle goals encoded by cultural models is diminished. The tragic part of this process is that these structural constraints are a result of institutional racism and racial stratification. Over a lifetime, for a large segment of the community, people see their shared hopes and their shared aspirations, modest as they might be in a material sense, denied to them. And that denial is written on their bodies in the form of poorer health status and risk of premature death.
These ideas have pretty good legs. I’ve been working in Brazil for nearly 20 years, and have examined many of the same processes there. Figure 8 shows how, for black Brazilians, low cultural consonance leads to blood pressures higher than their white counterparts, but higher cultural consonance leads to blood pressures lower than whites.
In a sense, we have come full circle here. Remember that early in this lecture I talked about how the concept of culture emerged in anthropology as a challenge to racialist explanations of others. My work has, in a way, continued that. Now, I don’t think that many people in medicine take seriously the old idea that African Americans are at risk of high blood pressure due to a racial-genetic trait, although that idea continued to be prominent well into the 1980’s. Rather, as Tom LaVeist pointed out, there is a tendency in the medical literature to document black-white health differences without comment; however, black folks are almost always coming out worse in terms of health status: more high blood pressure, more low birthweight babies, higher stroke rates, and worse cancer outcomes. Left uninterpreted, there is a kind of unspoken inference that somehow these black-white differences are a result of racial differences. Without grappling directly with the question of how so-called “race” may actually result in poor health through sociocultural pathways, we end up reinforcing the idea that the biologically bankrupt concept of race actually has some biological validity.
But, as I have argued, if we look closely enough, we find something else going on. With blood pressure, it’s not biology in some racial-genetic sense, but rather a complex set of social structural and biocultural processes that result in the appearance that somehow race matters as a biological factor, when it doesn’t. What I hope I have shown here is that continuing the anthropological project of the 19th century—that is, using the concept of culture to debunk racialist and other kinds of wrong-headed ideas—is still an important thing to do.
To do it right, however, we need a concept of culture that works. We need a concept of culture to help us to deconstruct the surface appearances of life. As the Dutch psychologist Ap Appel noted: “The final discovery a fish can make is that of water. It does not know what it means to live in water until it is lying on the counter of a fish shop. Similarly, people do not realize to what extent their behavior…is rooted in the culture in which they live.”
By explicating those links of culture and behavior, we can, I hope, both improve our theoretical understanding of the world, and maybe make it a better place to live.
*This essay was prepared as a lecture on the occasion of receiving the Burnum Distinguished Faculty Award.