Evading the State and Self-Determination: One Weird Trick for Avoiding Cancer, Heart Disease, and Alzheimer’s – Against Utopia Monthly #2

What do upland anarchists, mitochondria, cancer, and carbon dioxide have in common? Let’s take a closer look at how evading the state can be good for you.

** Issue 2

** Evading the State is Good For You: Upland Natives, Valley Civilizations, Mitochondria, and Carbon Dioxide

Welcome to the second edition of the Against Utopia monthly newsletter, where I explore problems of social organization, philosophy, biology, politics, and more through an epistemological anarchist lens. Or in simpler (cruder) terms, analyzing the authorities’ basis of knowledge and mostly concluding that they should fuck off, so we can flex our autonomy.

This week I want to have a quick word about “domestication” of humans by the state, the resistance of autonomous hill peoples, valley “civilizations”, and an interesting biological connection between them that can tell us quite a bit about how energy metabolism organizes social possibilities.

** Upland Peoples and Their Strategies for Avoiding Lowland States

In The Art of Not Being Governed [3], anthropologist and political scientist James C. Scott introduces the concept of “Zomia” [1], a term coined by historian Willem van Schendel to describe the upland region of Southeast Asia and southern China. This land mass encompasses the highlands of northern Indochina (northern Vietnam and all of Laos), Thailand, the Shan Hills of northern Myanmar, and the mountains of Southwest China. Some experts extend the region as far west as Tibet, Northeast India, Pakistan, and Afghanistan, for sociological reasons that will become clear shortly.

These areas share some common features: they are elevated and arid, and have been the home of ethnic minorities who have fought against state intrusion to preserve their cultures and way of life, and in some cases like Afghanistan, have fought multi-generational wars to maintain their autonomy from the state.

The groups of people who reside in this region have made resistance a way of life. They’ve formally derived and practiced the primary forms of state evasion, and Scott makes these forms of autonomous social organization the focus of his work. In order to understand their resistance, let’s take a closer look at what the state is, what it’s trying to do, how it uses language, and the comparative anthropology of the folks escaping its wrath.

The state apparatus in these regions sounds violent and spooky, possibly corrupt, yet it is exactly what you are thinking of when you hear “the state”: “the government”, “bureaucracy”, “MPs”, “Congress”, etc. These are no spooks, and they are not some “bad” version of government that would somehow be better if it were done right; they simply are the government. Scott’s case is built on realizing that what these states are doing to the uplands is a form of internal colonialism, a task that ALL states, including Western ones, are primarily engaged in. In Scott’s take, the state is a primarily extractive entity.

Like all extractive state entities, they are concerned with who their people are, how many children they have, how to count them all accurately, how to tally their production, and how to bring them to heel, so that they can extract resources in the form of taxes and people for military or civil service.

One of the key factors driving this state formation, which later enabled the state to extract taxes and grow, is the development of observable economic activity. Scott spends a considerable amount of time talking about grain cultivation practices in rice paddies in the entire region, and how rice paddy development coincides with valley civilizations, because that’s where rice grows: semiaquatic tracts of arable land in wet valleys interspersed between the more arid hills and highlands. “Observable economic activity” very literally means the ability to observe with one’s own eyes what is being grown and what is eventually produced from plots, so it can be appropriated for state use.

Scott wrote an entire other book [2] about how and why specific lowland crops the world over seem to be the same ones that produce state-level entities, and it’s worth diverting for a bit to dig deeper into this. In his opinion, it’s not a coincidence that the same handful of crops forms the base of the agricultural activity that drives state formation the world over: rice, barley, millet, and wheat. There’s a very simple reason for this: all of these crops are lowland crops that grow easily and, most importantly, observably. All of them can be counted, quantified, and taxed easily by a bureaucrat traveling far outside a state center.

On the opposite end of this, if you’re looking to evade state control, you’re looking for crops that stagger their maturity patterns, grow fast, can grow below ground, are of little value per weight or per volume, have higher caloric yields per unit labor and per unit land, and ideally, are adapted to growth in slash-and-burn or swidden agriculture (more below on this term).

Oats, taro, yams, sago palm, cassava – mostly roots and tubers – fit the bill. Yams are suited to dry hillsides, grow wild in the mountains, and are less susceptible to attack by insects and fungi than rice. Taro has many of the same advantages but requires wetter soils to grow. Tellingly, these foods were referred to as “famine” foods in the lexicons of certain valley civilizations, e.g. the Vietnamese state, because when famine hit and the crops appropriated for state use became scant, even the lowlanders would resort to growing them to support themselves. The basis of the diet and lifestyle that gave one people its autonomy from the state was, to the subjects of the state, mere resilience against famine.

These two groups were not necessarily static. There’s evidence of dynamic adjustments to situational context, with groups adopting fluid ethnic identities, and fluid food development strategies seasonally as early as the 1700s if not earlier. In Laos under French rule, whole villages would move when their colonial responsibilities became too much to bear, e.g. living near a road that they were expected to maintain with corvée labor. Movement uphill was associated with swidden agriculture, as the Laotian peasantry knew that these activities were illegible to bureaucrats.

In New Guinea highland maroon communities under Dutch rule, the sweet potato paired with pig husbandry gave maroons (escaped slaves) the autonomy to get very high caloric yields per unit labor and meet all of their needs with a crop that outperformed all other crops at high altitude. Its utility as a method of escape is best exemplified by the Spanish colonizers of the Philippines, who remarked:

“[They move] from one place to another on the least occasion for there is nothing to stop them since their houses, which are what would cause them concern, they make any place with a bundle of hay; they pass from one place to another with their crops of yames and camotes [sweet potato] off of which they live without much trouble, pulling them up by the roots, since they can stick them in wherever they wish to take root.”

Fig 1. Summary of escape characteristics of crops in the hill people repertoire. Crops that have higher-end elevation bandwidth in particular allow for ranging to higher elevations to escape state control (credit: James C. Scott, The Art of Not Being Governed).

The common pattern amongst all of these crops is that they 1) thrived at high altitude with little tending and 2) were not grains, and were in many ways nutritionally superior to grains. I won’t go into this too deeply, but it is a well-studied phenomenon, particularly in the last 20 years, that hill peoples, foragers, and hunter-gatherers were physically more robust than their valley agriculturalist counterparts. Their wide-ranging diets were more nutritionally complete, and as we will see below, agriculture’s ascendance coincided with a shrinking of the brain cavity and of overall height in humans. The diets and lifestyles that statecraft selects for have a LOT to do with this.

Now, as these agricultural modes of state food production developed, alongside them grew a rich set of bureaucratic techniques, roles, and modes of social control, with new vocabularies beyond mere famine foods. The indigenous Yao people of southwestern China and Vietnam exemplify this. The term Yao originally was used to designate anyone who was obligated to perform corvée labor, which is unpaid labor for a feudal lord. As social forms continued to evolve, and feudalism receded with the ascendance of the administrative state, Yao became an ethnonym referring to anyone in the southwest and southern highlands who was essentially an unpaid laborer.

The Yao in particular are interesting – to this day, the Yao who are closer to the state, in the lowlands of southern China, have altered their social production forms to include rice paddy development and cultivation. The Yao who are closer to the highlands still practice hunting and foraging, with a much smaller percentage of their calories coming from rice. They also practice what Scott calls an “enemy of the state”: slash-and-burn, or swidden, agriculture.

Why is swidden agriculture so derided by bureaucrats? It’s pretty simple. Swidden agriculture allows many degrees of freedom that enable state evasion. For example, if you rewild areas of your land by planting the same plot 2-3 years in a row, then letting it all grow back and burning a new tract, it is extremely hard for a civil servant to determine what your production is on that one plot of land. Now consider that many of these Yao clans could number from the 50s to the ~500s, and there is simply no “fair” way to begin to tax people based on an observable, justifiable metric of what they produced. You have to resort to violence and risk flight or revolt, or collect no taxes. We all know what states choose to do in this situation.

From the perspective of the administrative state, these upland peoples seem unruly, hard to control, difficult to find, and just plain rebellious. They have the energy and will to live hardy lives, and plan ahead by planting and hiding taro in unpredictable patterns 2-3 years at a time. They have the flexibility to grow lowland crops for trade, including things like opium. Anything and everything they could do to maintain autonomy over their social organization.

What appears from the state’s perspective as backward agricultural practice is actually a volitional, political choice, used as a tool in the fight against the loss of autonomy.

** “Anarchist” Nutrition and Metabolism

Let’s switch gears a bit now and examine some other common features of hill peoples evading state control – nutrition and the benefits of living at high altitude.

Recall that these escape crops offered people the ability to evade state control by being simple to grow at high altitude, easy to hide, and, not least, nutritionally complete. A grain-based lowland diet holds numerous pitfalls if not properly supplemented. Diseases like pellagra, rickets, scurvy, and beriberi were probably not common until a state came along and forced its people to live on subsistence diets of corn, soy, or rice, which lack complete proteins, vitamin C, and the B vitamins.

On the other hand, all of the foods listed in the tables above and specified in individual cases have very high amounts of vitamin A, vitamin E, vitamin C, B vitamins, fiber, calcium, magnesium, and potassium, and a decent amount of protein and fat. Even more importantly, they don’t come with the baggage of the complex food processing methods required to make grains nutritionally complete, or at least not poisonous and destructive. Here I’m thinking of e.g. the nixtamalization process required to prevent pellagra from corn consumption, or the B vitamin deficiencies caused by consuming a wheat-only diet, which led to the fortification of grains by the US government in the 20th century (which has its own heinous consequences [4], but let’s focus).

All of these vitamins, minerals, and nutrients contribute to healthy oxidative metabolism and general physiological robustness. The B vitamin complexes are required for functional oxidative metabolism in the mitochondrion of every cell in your body. Without thiamine (B1) for instance, electrons cannot be transported for efficient oxidative phosphorylation inside of the mitochondria, and this is responsible for many of the cognitive deficits seen in childhood malnutrition. It’s very literally the inability to provide energy to brain mitochondria in a scalable way causing an observable mental deficit.

I’m over-indexing on oxidative phosphorylation here for a reason. Many of the maladies of civilization, such as diabetes, cancer, heart disease, and Alzheimer’s [5], are now understood to have a mitochondrial etiology [6]; in other words, they seem to be diseases of mitochondrial energy metabolism which exact systemic consequences on the rest of the body as your physiology struggles to adapt to increasing demands without optimal metabolism. This is only compounded by the mental stresses of modern life [7], which have been shown to have an almost equivalent effect on our biology as other types of physiological stress. In simpler terms, feeling psychologically stressed, like being stuck in traffic and not being able to escape, is almost identical to lacking the physical energy needed to meet a demand, like running away from an animal trying to eat you.

So what do mitochondrial biology, modernization, and psychological stress have to do with hill peoples and living in the highlands?

It turns out that in almost all highland peoples who maintain a form of life and social organization that approximates communal or pre-modern forms, chronic disease of the sorts described above is close to nonexistent. This connection also doesn’t disappear when one controls for infant mortality, epidemics, war, famine, and other commonly trumpeted reasons for the connection.

Robert Sapolsky cites research from anthropology, genomics, and paleontological physiology[8] showing that the ascendance of agricultural forms of social organization coincides with a near 30% reduction in lifespan, as well as far poorer bone and dental health in the fossil record[9].

In the modern era in Victorian England of 1883, the nutritional standards put in place by the state exacted such a toll that the infantry were forced to lower their height requirements from 5 ft 6 inches to 5 ft 3 inches – for men [24]!

We didn’t even touch on the fact that highlanders live comparatively slower lives[10], working as little as 2 hours per day to secure resources, avoiding commutes, engaging with loved ones more often, and avoiding the dominating psychological stresses of modern life which as we mentioned before have been shown to exact very real physiological tolls on the body.

** Carbon Dioxide, Mitochondria, and the Secret to Upland Resistance

All of that being said, what’s left? Well, there’s a hidden feature of the life of hill peoples, now being revived and actively researched after a long dormant period, that I think plays a huge role in giving them the ability and energy to resist and to strive for better possibilities. It also contributes mightily to mitochondrial biogenesis and stability, and its lack is one of the key diagnostic indicators of most chronic disease. That hidden thing is swimming in the air all around us – carbon dioxide.

In 1977, a study was conducted in New Mexico [11] on people residing at high altitude. It compared age-adjusted mortality rates from heart disease for white men and women living in Santa Fe from 1957-1970, in 1000 ft increments of altitude. For years before the study, it was believed that living at high altitude placed stress on our biology, as we struggled to adapt to a lower oxygen pressure.

What they found was the exact opposite.

The altitude groups 1-5 were arranged starting with the lowest altitude as group 1, with each subsequent group 1000 ft higher than the previous one, so that group 5 is 4000 ft higher than group 1. The death rates were normalized to the 1st altitude group, and a near-linear relationship was found in the male death rate, with an effect size of a 28% reduction in group 5.
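The normalization described above is simple enough to sketch. The rates below are hypothetical placeholders, not the study’s actual figures – only the 28% group-5 reduction is taken from the text:

```python
# Normalize age-adjusted death rates to the lowest-altitude group (group 1).
# These rates are hypothetical illustrations, not the study's data; they are
# chosen only so that group 5 shows the ~28% reduction reported in the text.
hypothetical_rates = {1: 100.0, 2: 93.0, 3: 86.0, 4: 79.0, 5: 72.0}  # deaths per 100,000

baseline = hypothetical_rates[1]
normalized = {group: rate / baseline for group, rate in hypothetical_rates.items()}

for group, ratio in sorted(normalized.items()):
    print(f"group {group}: relative rate {ratio:.2f} ({(1 - ratio) * 100:.0f}% reduction)")
```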

This was an epidemiological study that was well controlled for race, migration, and pre-existing disease, so the authors were able to eliminate a considerable number of factors in explaining the association, but some factors remained. They comment guardedly: “Further studies are needed to elucidate the mechanism of this association, if it is confirmed in other data.” It turns out that the association has been confirmed in myriad other data, but no one has yet elucidated a mechanism, at least officially in the literature. I will speculate about the mechanism at the end, but let’s take a closer look at a few more instances of this effect.

Seeking alternative explanations, some investigators expected a strong positive correlation between altitude and cancer mortality, particularly because the cosmic ray radiation at altitude could be much stronger than at sea level and was strong enough to be ionizing. Research over the decades preceding the study had shown ionizing radiation to be carcinogenic (usually nuclear bomb tests, etc).

Picking up on this, one of the other interesting studies [12] in this area sought to understand the connection between altitude, radiation exposure, and mortality specifically in heart disease and cancer. The authors began by commenting that several studies conducted in the US expected to find correlations between mortality rates for cancer and ionizing background (cosmic ray) radiation. The correlations were actually the inverse of what was expected, so this group fit models that incorporated altitude inputs as well as background radiation for predictors of mortality in order to elucidate what was going on.

The group found that negative correlations with cosmic rays and mortality from cancer and heart disease disappeared once the models included variations in altitude. Not only that, in the case of breast, intestine, and lung cancer, the correlations became positive!

On the other side of it, the correlations with altitude persisted when the model was modified to include adjustments for radiation. The altitude correlation had a significant and powerful effect in this (admittedly) observational data. The study authors concluded that one can’t neglect the negative mortality effects of radiation from this study, but that perhaps the reduced oxygen pressure of inspired air at high altitude is protective against certain causes of death.

In a 2009 retrospective cohort study of patients initiating dialysis in the US between 1995 and 2004 [13], it was observed that patients at higher altitude received lower erythropoietin doses, yet achieved higher hemoglobin concentrations in serum. The authors expected the reverse, and along the way also discovered, yet again, that in their data patients at higher altitude had lower mortality. Since it was 2009 and more powerful analytical methodologies were at their disposal, the study authors had more room to speculate about the physiological effects responsible for the observed mortality reduction.

By the time they wrote their conclusion, they were able to pull in recent findings from the study of cancer and hypoxia-inducible factors. Hypoxia-inducible factors regulate factors that affect cardiovascular risk, such as VEGF, heme oxygenase-1, iNOS, and cyclooxygenase-2 – proteins involved in inflammation, blood vessel growth and repair, and dilation / constriction. Their effects on mortality had already been well established in studies at sea level. The connection the authors made is that the benefits to mortality observed in this population most likely stem from the relative lack of oxygen, and possibly from carbon dioxide partially replacing it.

This pattern was highly repeatable and conserved over time.

In one study [14] of 300 autopsies carried out at 14,000 feet in Peru, not a single case of death by heart attack, nor even moderate coronary artery disease was found.

The serum of shepherds living at high altitude has been found to exhibit profound anti-thrombotic effects [15]. Their vascular walls are also stronger and feature less clot activity. This could partly explain why they are so resilient to heart disease, stroke, and hypertension.

At a meeting of the World Health Organization in 1968 [16], it was reported that native populations in Chile and the Himalayas also had significantly less heart disease and cancer than populations at sea level.

Carbon dioxide has also been shown to inhibit in vivo generation of reactive oxygen species [17]. Essentially this is broad evidence for an overall systemic protective effect.

And so on.

But what of carbon dioxide? It seems a little out of place here. What does carbon dioxide have to do with mortality reduction? Isn’t it just waste, a byproduct of respiration?

Not really.

Besides the large effects observed in heart disease, altitude seems to have a large effect on cancers of all types, and it’s important to understand why that is.

** Cancer, Carbon Dioxide, Lactate, and Acidity

Otto Warburg [18] received the Nobel Prize for his work on cellular respiration, which began with studies of oxygen consumption in developing sea urchin eggs. More importantly, towards the end of his career he discovered that cancer cells preferentially turn sugar into lactate even in the presence of oxygen, a feature he called aerobic glycolysis. If you haven’t taken biochemistry, let me translate that for you.

When you sprint and your legs and lungs start to burn, the subjective experience and the measured physiological effects are typically attributed to the buildup of lactate. As you breathe harder and harder and pump your legs faster and faster, you build up this waste product, lactate, as you burn oxygen and sugar. Normally 95% of your energy is produced by oxidative phosphorylation, which produces ATP and carbon dioxide, but when there is not enough oxygen to burn your energetic substrates completely, your body switches to a form of energy production called glycolysis, which is the use of sugar to produce pyruvate and downstream of that, lactate.
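To put rough numbers on the two pathways (these ATP yields are textbook approximations and vary by source; ~30-32 ATP for full oxidation is commonly cited):

```python
# Approximate ATP yield per molecule of glucose (textbook figures; estimates vary).
ATP_GLYCOLYSIS_ONLY = 2   # glucose -> 2 pyruvate -> 2 lactate, no oxygen required
ATP_FULL_OXIDATION = 30   # glycolysis + TCA cycle + oxidative phosphorylation

ratio = ATP_FULL_OXIDATION / ATP_GLYCOLYSIS_ONLY
print(f"Full oxidation yields ~{ratio:.0f}x the ATP of glycolysis alone per glucose")
```

This is why a cell stuck in glycolysis has to burn through far more sugar (and dump far more lactate) to meet the same energy demand.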

Many, many cancers have the feature of over-producing lactate. Lactate has been shown to power angiogenesis, for example, which is the way that cancers begin to spread their tendrils and grow vascularity in order to get more oxygen, sugar, and fat supplied for their growth. Warburg’s great contribution was to show that even when you are sitting at rest, if you have a cancerous tumor, your body is basically panting. It’s stuck in glycolysis. It’s constantly turning sugar into lactate instead of carbon dioxide, even when there is plenty of oxygen around.

Now, zooming back out: when you are at altitude, it might seem like you should be panting all the time and experiencing aerobic glycolysis [19], because there is less oxygen around. Based on everything I just said, you might predict that at high altitude, because we get tired faster and have less work capacity, we produce more lactate, and so we might expect higher cancer rates. That would be true except for the implications of two key biophysical effects, the Haldane effect and the Bohr effect (which together circumscribe the lactate paradox), which I think are what the hill peoples above are inadvertently benefiting from.

The Haldane effect is relatively simple. John Scott Haldane discovered that in hemoglobin, the protein that carries oxygen and carbon dioxide in our blood, oxygen displaces carbon dioxide in proportion to its partial pressure in the environment. As you go higher and higher up in altitude, less carbon dioxide is displaced from hemoglobin, and cells throughout the body retain more of their carbon dioxide and bind less oxygen.

The Bohr effect is simpler still – first observed by Christian Bohr in 1904, it is the reduction of hemoglobin’s affinity for oxygen as pH decreases, allowing carbon dioxide to displace oxygen. Surprise, surprise: as you go up in altitude, carbon dioxide forms carbonic acid in the blood more easily, and the resultant decrease in pH (increased acidity) makes it harder for hemoglobin to bind oxygen.

The downstream effect of this is that there is less oxygen in the mitochondria of the cells, and the oxygen is more efficiently consumed by oxidative phosphorylation, an energetic pathway that “competes” with glycolysis. I say competes in quotes because oxidative phosphorylation is dependent on the first part of the glycolysis pathway in order for the overall pathway to function, so competing is a bit of a misnomer. This is a detail that’s ancillary to the broader effect which basically demonstrates that if you are in an oxygen-scarce environment, more efficient use of oxygen results in far less lactate production and the reduction of aerobic glycolysis, the main feature of cancer energy production.
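The Bohr shift can be sketched with the standard Hill model of hemoglobin saturation, with the half-saturation pressure P50 shifted by pH. The parameter values here are textbook approximations (P50 ≈ 26.8 mmHg at pH 7.4, Hill coefficient ≈ 2.7, Bohr coefficient ≈ -0.48), not numbers from the studies cited above:

```python
# Hill model of hemoglobin O2 saturation with a pH-dependent P50 (Bohr effect).
# Textbook parameter values; a sketch, not clinical-grade physiology.
def p50(ph, p50_std=26.8, bohr_coeff=-0.48):
    """P50 (mmHg) shifted by pH via the Bohr coefficient d(log10 P50)/d(pH)."""
    return p50_std * 10 ** (bohr_coeff * (ph - 7.4))

def o2_saturation(po2_mmhg, ph=7.4, hill_n=2.7):
    """Fractional hemoglobin saturation at a given O2 partial pressure and pH."""
    p = p50(ph)
    return po2_mmhg ** hill_n / (po2_mmhg ** hill_n + p ** hill_n)

# Lower pH (more acidic) -> right-shifted curve -> less O2 bound at the same PO2.
sat_normal = o2_saturation(40, ph=7.4)   # venous-range PO2, normal pH
sat_acidic = o2_saturation(40, ph=7.2)   # same PO2, more acidic blood
print(f"saturation at pH 7.4: {sat_normal:.2f}, at pH 7.2: {sat_acidic:.2f}")
```

The point of the sketch: at the same oxygen pressure, acidifying the blood unloads more oxygen from hemoglobin into the tissues, which is the mechanism the Bohr effect describes.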

Whew, I know that was a mouthful of big words, but we’re almost done.

** Mitochondria: The Powerhouses of Social Organization?

Now, what evidence is there for this? Well, if there is less oxygen present at altitude, and this leads to increased absorption and more efficient consumption of oxygen, we’d expect the organism to make more mitochondria to compensate for the lack of oxygen. Either that, or we’d expect efficiency gains within the mitochondria themselves. And we’d expect less lactate production at altitude.

It turns out that all are true.

In an old study comparing cows raised at sea level with cows acclimated to very high altitude [20], the high-altitude cows had direct mitochondrial counts 40% higher than the sea-level cows – evidence of massive mitochondrial biogenesis in acclimating to altitude.

Additionally, in a different study examining rat heart, liver, and kidney mitochondria [21], expression of ND6, COX, and other genes involved in energy production within the mitochondrion was 30-40% higher in rats acclimated to high altitude vs. sea-level rats. Ninety-five percent of all energy production in the rat (and in us) occurs via oxidative phosphorylation, which takes oxygen and sugar and turns them into carbon dioxide and energy. At these higher altitudes, the genes used specifically to upregulate oxidative phosphorylation, and consequently produce more carbon dioxide, were massively upregulated. Energy production became so efficient that the rats produced very little lactic acid / lactate.

Alan C. Burton, a founding father of biophysics, late in his career noticed this correlation between altitude and significantly reduced cancer rates[22]. He hypothesized that there is a connection between intracellular pH (not serum pH, which is buffered and strictly controlled by the body to a narrow range) and carcinogenesis. On its face, it seems like a reasonable assumption particularly if we take Warburg’s aerobic glycolysis into consideration.

Lactate produced downstream of aerobic glycolysis has a pKa of 3.82, which means that above a pH of 3.82 it exists mostly as lactate, and below that mostly as lactic acid. We’re talking about finely tuned microenvironments when we talk about the acidity of a cell; a cell’s momentary (over)production of lactate could produce local pHs below 3.82, but that’s just a bit of speculation. The point is that at altitude, the increase in retained carbon dioxide creates momentary decreases in the alkali reserve, because the carbon dioxide forms carbonic acid through the blood’s buffer system. The result is a slightly reduced but still well-buffered pH at altitude and, as we saw before, a marked reduction in lactate.
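The lactate / lactic acid split at a given pH follows the Henderson-Hasselbalch relation, which makes the role of the pKa concrete (a quick sketch, not a model of the cellular microenvironment):

```python
# Fraction of total lactate present as protonated lactic acid at a given pH,
# from the Henderson-Hasselbalch relation with pKa = 3.82.
def fraction_lactic_acid(ph, pka=3.82):
    return 1 / (1 + 10 ** (ph - pka))

# At the pKa the two forms split 50/50; at ordinary intracellular pH the
# deprotonated lactate anion overwhelmingly dominates.
for ph in (3.0, 3.82, 7.2):
    print(f"pH {ph}: fraction as lactic acid = {fraction_lactic_acid(ph):.4f}")
```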

Burton had speculated that if this is what happens at altitude, it might be possible to simulate the effect with a drug, acetazolamide, which blocks carbonic anhydrase, the enzyme responsible for the buffering of carbon dioxide in the blood. At the time of his writing, acetazolamide had been given to cancer patients and tumor reduction had been observed, but no one followed up to see whether this was explicitly due to the acetazolamide or to some other factor. All we do know is that lactate production was observed to be lower in hill populations, and that this effect could be mimicked by a drug.

To date, there has been only one study that I know of demonstrating the full picture – increased carbon dioxide retention at altitude leading to biogenesis of more efficient mitochondria, with an observed improvement in some disease state – and that is FZ Meerson’s work, as cited by Ray Peat [23]. Meerson was able to demonstrate that rats at altitude increased their overall count of mitochondria per unit brain mass, with more efficient use of oxygen as measured by the concentration of oxidative enzymes, leading to prosociality and increased learning ability.

Increased prosociality and learning ability as a result of mitochondrial energy production and efficiency.

Recently, the titan of mitochondrial genetics, Douglas C. Wallace, showed that in mice under acute psychological stress, mitochondrial functions modulated the neuroendocrine, metabolic, inflammatory, and transcriptional responses[7]. This study is broadly instructive and ties everything in the picture we’ve painted together, in my opinion.

Wallace’s group, in a very well-controlled study, was able to show that when you put mice in a restraint stress condition, which is putting them into a situation where they are enclosed and can’t escape, kind of like mouse prison, the explicit, minute functions of the mitochondria organize the overall physiological response and the ability to respond to stress at every level of organization. Let me restate that: when a mouse is under solitary confinement, Wallace’s group found that mitochondrial energy production quality facilitates the ability of its cells, organs, organ systems, and cognitive functions to respond to that stress. This takes shape in a few different forms in the data.

The mice in the study were modified in specific parts of their genomes to make mutants that would be bad at transporting the proteins necessary for energy production from the cell nucleus to the mitochondria. Other variants had specific defects in the electron transport chain, where electrons stripped from sugars and proteins are transported through the mitochondria, terminating in oxygen to make carbon dioxide, and so on. Then the researchers picked neurotransmitters, hormones, and other biomarkers indicative of the functioning of the systems responsible for responding to stress in mice. The systems in question were the hypothalamic-pituitary-adrenal (HPA) axis and the sympathetic-adrenal-medullary axis, and the study authors were interested in the levels of norepinephrine, epinephrine (adrenaline), serotonin, and other compounds generated by the activity of these systems when the mice were placed under restraint stress.

What they found was that the mice with very specific defects of mitochondrial energy production experienced profound deleterious effects, from the level of the cell all the way through to the entire organism. Their organs, their stress axes, and the whole mouse failed to adapt to the insult of restraint stress if the right defect in energy production was introduced. Some defects were more deleterious than others, but the overall point stands: if you can’t produce energy under stress, whether psychological or physical in nature, you fail to adapt, down to the level of your lived biology.

This hasn’t been shown in humans yet, but in my speculation there is no reason to suspect that our own responses to psychological stress aren’t mediated in the same or similar ways, and modern life in bureaucratic, systematized civilizations is one long, constant stressor of domination that strips people of their autonomy. I think that on a near-societal scale we are experiencing learned helplessness from chronic psychological stress, and that the people who have successfully evaded state control to self-determine their circumstances not only have the desire to do so, they also have real, material, physiological advantages, if the story I’ve woven together holds.

** Conclusions

We’ve seen how hill peoples escaping (and continuing to resist) early attempts at statecraft in the valleys used “escape crops” to self-determine their way of life. Those crops happened to be nutritionally complete, and an interesting biological effect rode along with these desires for freedom: residing at high altitude conferred protection against cancer, heart disease, and other chronic illnesses, with disease incidence dropping by as much as 28% in the case of men living in Santa Fe, New Mexico.
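For readers curious how a headline figure like that 28% is typically derived, here is a minimal sketch of computing a relative risk reduction from two incidence rates. The rates below are invented for illustration; they are not taken from the cited altitude studies.

```python
# Relative risk reduction (RRR) from two incidence rates.
# All numbers are hypothetical, for illustration only.

def relative_risk_reduction(rate_exposed: float, rate_reference: float) -> float:
    """RRR = 1 - (incidence in the exposed group / incidence in the reference group)."""
    return 1.0 - rate_exposed / rate_reference

# e.g. 18 deaths per 10,000 person-years at altitude vs. 25 at sea level
rrr = relative_risk_reduction(18 / 10_000, 25 / 10_000)
print(f"relative risk reduction: {rrr:.0%}")
```

A reduction in this sense is relative: the absolute difference between the two groups can still be small, which is worth keeping in mind with any headline percentage.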

This effect is probably mediated in two ways. First, adaptation to high altitude increases mitochondrial resilience and biogenesis, along with the proteins optimized to consume oxygen and produce carbon dioxide and energy. Second, as a result of the lower oxygen pressure, lactate production from glycolytic energy pathways drops precipitously, which, I speculate, is probably responsible for the lower incidence of diabetes, cancer, and heart disease. These diseases, especially diabetes, share a common diagnostic feature: increased lactic acid production coupled with lower serum (and breath) carbon dioxide.

Taking this into consideration, lastly, I’m going to enter the realm of (to some) pure speculation and say that, given Douglas Wallace’s intricate study of the modulation of the stress response in psychologically stressed mice, residing at altitude provides direct cognitive, energetic, and at times survival competencies that aid the social behavior of peoples escaping state intrusion and domination. Complex forms of anti-authoritarian social organization, and the ability to address wide-ranging problems that for lowlanders require, say, centralized authority, 36 months, and a large budget, cannot happen if people don’t have a base level of food security and a good, healthy, plentiful environment in which to thrive. Anything that aids this, as altitude seems to do, is indispensable. Stated more simply: residing at altitude confers energetic, health, and cognitive benefits that I think play a huge role for any group in the act of autonomous social organization. It’s not a coincidence that anti-authoritarians escaping the structure, simplicity, and violence of civilized life in a valley state live more complex lives at altitude; altitude aids their already existing desires.

Life may appear simpler when we lowland state inhabitants, embedded in our bureaucracies, technologies, and structured urban life, hike up to a hill town and take in scenes of serene farms, animals, and huts. But we who don’t have to make choices about how we get our water, how we get our food, and when a boss has outlived his welcome and can stop being a boss are the ones who actually have simpler lives. We’ve used the violence of bureaucracy and the state to artificially engineer possibilities out of existence, tamping down the complexity of our own lives. But that’s a chat for another time.

As for what you can do with this information, besides running away from an oppressive valley civilization: acetazolamide, the B vitamins, aspirin, thyroid hormone, and magnesium have all been shown to increase oxygen consumption or carbon dioxide retention at sea level.

I personally try to spend at least a month every year in Santa Fe or Albuquerque, New Mexico, which are at 7200 feet and 5500 feet respectively.

Thanks for reading!

** If you enjoyed this and want to support my work, here’s my Patreon. (https://againstutopia.us18.list-manage.com/track/click?u=117c8f6cf4a7e5ceeb45e2d4e&id=bdee6f68c2&e=5e36cb54af)

** If you want to read more stuff like this, sign up for my long form essay series here. (https://againstutopia.us18.list-manage.com/track/click?u=117c8f6cf4a7e5ceeb45e2d4e&id=abf6359df7&e=5e36cb54af)

** Bibliography **
1. “Zomia (Region).” Wikipedia, Wikimedia Foundation, 28 Dec. 2018, en.wikipedia.org/wiki/Zomia_(region).
2. Scott, James C. Against The Grain: a Deep History of the Earliest States. Yale University Press, 2018.
3. Scott, James C. The Art of Not Being Governed: an Anarchist History of Upland Southeast Asia. Yale University Press, 2011.
4. Dalton, Clayton. “Iron Is the New Cholesterol – Issue 67: Reboot.” Nautilus, 20 Dec. 2018, nautil.us/issue/67/reboot/iron-is-the-new-cholesterol.
5. Wallace, Douglas C. “A Mitochondrial Paradigm of Metabolic and Degenerative Diseases, Aging, and Cancer: A Dawn for Evolutionary Medicine.” Annual Review of Genetics, vol. 39, no. 1, 2005, pp. 359–407., doi:10.1146/annurev.genet.39.110304.095751.
6. Wallace, Douglas C. “Mitochondrial Diseases in Man and Mouse.” Science, vol. 283, no. 5407, 1999, pp. 1482–1488., doi:10.1126/science.283.5407.1482.
7. Picard, Martin, et al. “Mitochondrial Functions Modulate Neuroendocrine, Metabolic, Inflammatory, and Transcriptional Responses to Acute Psychological Stress.” Proceedings of the National Academy of Sciences, vol. 112, no. 48, 2015, pp. E6614–E6623., doi:10.1073/pnas.1515733112.
8. Sapolsky, Robert. “Being Human: Life Lessons from the Frontiers of Science.” Guidebooks, guidebookstgc.snagfilms.com/1686_Being Human.pdf.
9. Kahn, Sandra, and Paul R. Ehrlich. Jaws the Story of a Hidden Epidemic. Stanford University Press, 2018.
10. “How to Change the Course of Human History.” Eurozine, 2 Mar. 2018, www.eurozine.com/change-course-human-history/.
11. Mortimer, Edward A., et al. “Reduction in Mortality from Coronary Heart Disease in Men Residing at High Altitude.” New England Journal of Medicine, vol. 296, no. 11, 1977, pp. 581–585., doi:10.1056/nejm197703172961101.
12. Weinberg, Clarice R., et al. “Altitude, Radiation, and Mortality from Cancer and Heart Disease.” Radiation Research, vol. 112, no. 2, 1987, p. 381., doi:10.2307/3577265.
13. Winkelmayer, Wolfgang C. “Altitude and All-Cause Mortality in Incident Dialysis Patients.” JAMA, vol. 301, no. 5, 2009, p. 508., doi:10.1001/jama.2009.84.
14. Ramos, et al. “Patología Del Hombre Nativo De Las Grandes Alturas : Investigación De Las Causas De Muerte En 300 Autopsias.” PAHO/WHO IRIS, World Health Organization, iris.paho.org/xmlui/handle/123456789/15274.
15. Bekbolotova, A. K., et al. “Effect of High-Altitude Ecological and Experimental Stresses on the Platelet-Vascular Wall System.” Bulletin of Experimental Biology and Medicine, vol. 115, no. 6, 1993, pp. 636–639., doi:10.1007/bf00791144.
16. “Report of the WHO/PAHO/IBP Meeting of Investigators on Population Biology of Altitude.” World Health Organization, WHO/PAHO/IBP, hist.library.paho.org/English/ACHR/RES7_4.pdf.
17. Boljevic, S., et al. “Carbon dioxide inhibits the generation of active forms of oxygen in human and animal cells and the significance of the phenomenon in biology and medicine.” Vojnosanitetski pregled 53.4 (1996): 261-274.
18. Apple, Sam. “An Old Idea, Revived: Starve Cancer to Death.” The New York Times, The New York Times, 12 May 2016, www.nytimes.com/2016/05/15/magazine/warburg-effect-an-old-idea-revived-starve-cancer-to-death.html.
19. “Warburg Effect (Oncology).” Wikipedia, Wikimedia Foundation, 8 Mar. 2019, en.wikipedia.org/wiki/Warburg_effect_(oncology).
20. Ou, L. C., and S. M. Tenney. “Properties of mitochondria from hearts of cattle acclimatized to high altitude.” Respiration physiology 8.2 (1970): 151-159.
21. Shertzer, H. G., and J. Cascarano. “Mitochondrial alterations in heart, liver, and kidney of altitude-acclimated rats.” American Journal of Physiology-Legacy Content 223.3 (1972): 632-636.
22. Burton, Alan C. “Cancer and Altitude: Does Intracellular pH Regulate Cell Division?” European Journal of Cancer, vol. 11, no. 5, 1975, pp. 365–371.
23. Peat, Raymond F. “A Biophysical Approach to Altered Consciousness.” Journal of Orthomolecular Psychiatry 4 (1975): 189-97.
24. Clayton, P.; Rowbotham, J. How the Mid-Victorians Worked, Ate and Died. Int. J. Environ. Res. Public Health 2009, 6, 1235-1253.

Against Utopia Weekly #1 – Eugenics Underpins Digital Health Technology

** Issue 1
Eugenics: The Hidden Driver of Healthcare and Digital Health Technology

Welcome to the first edition of the Against Utopia weekly newsletter, where I’ll expound on incomplete thoughts I’ve had, play with them a bit, and hopefully engage you with some new ideas and perceptions on our current situation.

Before we get into it, spread my propaganda to your friends and tell them to sign up for the newsletter here. (https://againstutopia.us18.list-manage.com/track/click?u=117c8f6cf4a7e5ceeb45e2d4e&id=b1f0f80bff&e=5e36cb54af)

My long form essay series can be found here. (https://againstutopia.us18.list-manage.com/track/click?u=117c8f6cf4a7e5ceeb45e2d4e&id=aa6affd21a&e=5e36cb54af)

If you really like me and want to help me spread more propaganda faster, support me on Patreon. (https://againstutopia.us18.list-manage.com/track/click?u=117c8f6cf4a7e5ceeb45e2d4e&id=141b889d90&e=5e36cb54af)

My first hot take weekly newsletter topic is going to be that digital health, and modern healthcare in general, operates on a principle tantamount to eugenics. Let’s get into it.

So what is digital health? Digital health is the use of information and communication technologies to digitize the delivery of healthcare services, and to provide efficiencies in care, risk, and condition management. In terms of investments, the sector has grown 10x since 2010, reaching $11.7 billion in venture capital funding in 2017. (https://againstutopia.us18.list-manage.com/track/click?u=117c8f6cf4a7e5ceeb45e2d4e&id=f70b407b67&e=5e36cb54af)

For a little over ten years, the digital health sector has been heralded as the next great bringer-of-progress on the frontier of information technology.

Fitbit (https://againstutopia.us18.list-manage.com/track/click?u=117c8f6cf4a7e5ceeb45e2d4e&id=f5a82d9049&e=5e36cb54af) would help us figure out how to move more often, and measure our movement, so we can anticipate and reduce our heart disease risk.

Teladoc (https://againstutopia.us18.list-manage.com/track/click?u=117c8f6cf4a7e5ceeb45e2d4e&id=7d4cf8878d&e=5e36cb54af) would help us manage our chronic and acute mental health issues remotely, anywhere, on demand.

iRhythm (https://againstutopia.us18.list-manage.com/track/click?u=117c8f6cf4a7e5ceeb45e2d4e&id=7c83b0dd12&e=5e36cb54af) would help us use continuous monitoring technology to predict cardiac health events, and take proactive action to reduce their risk and severity.

Tabula Rasa Healthcare (https://againstutopia.us18.list-manage.com/track/click?u=117c8f6cf4a7e5ceeb45e2d4e&id=39f5847e39&e=5e36cb54af) would centralize and systematize the mitigation of medication risk, and through this lens, use digital systems to reduce it.

And so on.

These digital health companies assume that the efficiency gains from information technology are there for the taking, and that the underlying medical technology actually works: the problems to be solved, therefore, are in distribution, form factor, or interoperability, key areas that IT directly improves.

And there’s another pattern here as well: notice how often the words “manage” and “risk” appear on the webpage explainers of these technologies.

If we have the diseases cracked, particularly in the chronic disease realm (heart disease, cancer, diabetes, Alzheimer’s, etc), then it’s merely a matter of scaling the reach of the managerial healthcare state. As soon as we do that, we should watch the risk mitigation sharply reduce mortality and deliver us all the digital health revolution we’ve been waiting for.

However, the digital health revolution will most likely never arrive – almost entirely because it is built on the shady, rickety foundation known as eugenics. In order for us to understand that, we’ll need to go back in time a bit.

In my long form essays, I covered the historical development of depression as a treatable illness, and the many factors that manifested in the medicalization of grieving. One of the key developments in the history of medicine and depression was the methodology of observation, identification, and language standardization that enabled doctors to communicate in objective ways about the mental suffering of their patients.

This professionalization process began in earnest around the turn of the 20th century with the work of a German psychiatrist, Dr. Emil Kraepelin, the founder of scientific psychiatry. Kraepelin was renowned for his scientific approach to managing patient care, his intricate data and reporting methodologies, and his development of objective pattern-seeking methods for comparative psychology. He was also a eugenicist.

I don’t necessarily mean to use eugenicist in this context as a slur, nor do I mean to excuse it. It’s just a simple fact that is crucial to understanding Kraepelin and the way that early 20th century doctors thought about the plight of their patients and the possibilities for their coping and recovery.

Doctors glommed on to Kraepelin’s framing of the problem of mental illness, a frame rooted deeply in genetic origins. He believed that the amount of mental illness in the population is always fixed, and that it is genetic in origin; therefore it is folly for doctors to spend their time trying to figure out how to “cure” mental illness. Better that they spend their time on rigorous scientific methods for identifying those at risk of mental illness in the population, so that their illness can be managed. “Managed” typically meant by a doctor in an asylum, sequestered from the public, and not much more, at least in Kraepelin’s time.

Fast-forwarding a bit, it’s not an overstatement to say that in 2019, our current medical culture has maintained much of this approach. Look no further than genomics and chronic care management. There are scores and scores of genome-wide association studies (GWAS), in which the human (or animal) genome’s variation is broken down into its constituent parts, called single-nucleotide polymorphisms (SNPs), and associations are sought between embodied disease states and SNPs. The important move in questioning this work is not to ask whether it should be done at all, but rather to ask:

Even if we found out, as a result of this work, that the majority of chronic illnesses like Alzheimer’s, heart disease, and diabetes have strong genomic associations, what would we actually be able to do with that information?

When looking at the medical culture, the answer seems to be that we’d find drugs that address the symptoms of the chronic illnesses, drugs that reduce the risk attributed to carrying these genes, trying to return the values they influence to the “normal” range, but we’d never bother to understand the baseline physiological causes or etiology.

We’d find ways to manage the conditions within their reference ranges (e.g. keep blood sugar below x target, keep blood pressure within x/y targets), and we’d assume that if you happen to fall outside the range, you’re just unlucky, which is another way of saying you have bad genetics. It doesn’t seem, at least so far, that we’d even want to find ways to engineer genetic interventions for the perceived genetic root causes.
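To make the GWAS machinery described above concrete: the atomic unit of such a study is a single-SNP association test, often just a contingency-table test over allele counts in cases versus controls. The counts here are invented for illustration; real studies test millions of SNPs and must correct for multiple comparisons and population structure.

```python
# Minimal sketch of a single-SNP association test (invented counts).

def chi_square_2x2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# rows: cases vs. controls; columns: risk-allele count vs. other-allele count
stat = chi_square_2x2(410, 590, 350, 650)
print(f"chi2 = {stat:.2f}")  # 1 degree of freedom; ~3.84 corresponds to p = 0.05
```

Note what such a test delivers: a statistical association between a marker and a label, and nothing about mechanism, which is exactly the gap the question above is pointing at.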

It might seem like I’m putting words in physicians’ mouths, but this is basically what they say by their own admission. Much of the ’80s and ’90s were spent telling people with clinical depression, as measured on the Hamilton depression scale, that their experience is caused by a “chemical imbalance” that is genetic in nature. Even in the ’90s this language was unsubstantiated, and to this day it is a doubly dubious claim: both from the perspective of a measured chemical imbalance (no one has measured serotonin in the living human brain; it’s too risky), and from the perspective of genetics (the correlations between some SNPs for serotonin transporters and depression risk are extremely weak).

Put plainly: the second a eugenic epistemology takes hold, we stop viewing diseases as things with root causes that can be cured, and start viewing them as things to be managed, with genetic root causes. And since we all have genes, we all have some genetic risk, no matter how small.

So it should be no surprise then, that the medical culture has stopped searching for cures for illnesses, and bought wholesale into an ideology of eugenics for disease origination and management.

It may not appear so.

You might say we’re still curing things, like hepatitis C. Gilead Sciences released a cure for hepatitis C a few years ago, and achieved cure rates of ~90% (https://againstutopia.us18.list-manage.com/track/click?u=117c8f6cf4a7e5ceeb45e2d4e&id=3fd65d0def&e=5e36cb54af) . The problem is that cures exhaust the pool of treatable patients, and they end up being bad for business. This might sound like a common trope your reactionary uncle who never believes experts told you at Thanksgiving, and it seemed nutty coming from him.

Allow me to quote Goldman Sachs at length, validating your uncle:

“GILD [Gilead Sciences] is a case in point, where the success of its hepatitis C franchise has gradually exhausted the available pool of treatable patients,” the analyst wrote. “In the case of infectious diseases such as hepatitis C, curing existing patients also decreases the number of carriers able to transmit the virus to new patients, thus the incident pool also declines … Where an incident pool remains stable (eg, in cancer) the potential for a cure poses less risk to the sustainability of a franchise.”

Acute conditions that cause harm or death, such as hepatitis C, might not have a genetic risk component. And as the Goldman analyst says above, curing them is doubly bad for business: cured patients no longer carry the disease, so it can’t spread and create new customers for the cure.

Chronic conditions, by comparison, present endlessly growing, inexhaustible pools of patients, limited only by medical perception, language, and standard of care. As a result, there is a strong incentive to find an ultimate genetic cause for them. Should one be found, or justified into existence, the condition can then be managed, and suddenly everyone with that SNP is a customer.

So, in summary, two key ideas drive eugenics in healthcare technology:
1. The idea that we already have the solutions, and that they just need to be scaled to manage the health of the population.
2. The idea that there are no cures anymore: all disease has a genetic profile, and since everyone has genes, everyone is at risk.

Let’s take a closer look at healthcare and what we’re actually trying to scale with digital health technology.

Take continuous blood pressure monitoring technologies. Blood pressure (BP) is one of the primary independent risk factors of cardiac illnesses of all kinds, and interventions targeting it are the first line therapies available to reduce overall cardiac illness mortality. It is believed that by intervening in hypertension for patients, we will prevent future heart attacks, and deaths that result from them.

The first-line therapies include thiazide diuretics, calcium channel blockers, ACE inhibitors, and angiotensin receptor blockers. Knowing what they are and how they work is not material to our investigation here; the most important thing to know about them is that, despite being the first-line therapies, for all but the sickest hypertensives they are actually useless (https://againstutopia.us18.list-manage.com/track/click?u=117c8f6cf4a7e5ceeb45e2d4e&id=410563c6d0&e=5e36cb54af) .

A Cochrane meta-analysis, one of the largest and most statistically powerful kinds of study, showed that the standard of care for hypertensive patients, putting them on a first-line therapy for any BP measurement over 140/90, fails to reduce cardiac events, cardiac mortality, and overall cardiac disease risk.
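For context on what a meta-analysis actually computes: the standard fixed-effect approach pools each trial’s log risk ratio, weighted by the inverse of its variance. The trial numbers below are invented to show the mechanics; they are not the Cochrane data.

```python
import math

# Fixed-effect inverse-variance pooling of log risk ratios (invented inputs).

def pool_fixed_effect(log_rrs, standard_errors):
    """Return the pooled log risk ratio and its standard error."""
    weights = [1.0 / se ** 2 for se in standard_errors]
    pooled = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
    return pooled, math.sqrt(1.0 / sum(weights))

# three hypothetical trials, all close to a risk ratio of 1 (no effect)
log_rrs = [math.log(0.95), math.log(1.02), math.log(0.99)]
standard_errors = [0.05, 0.08, 0.04]

pooled, se = pool_fixed_effect(log_rrs, standard_errors)
low, high = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)
print(f"pooled RR = {math.exp(pooled):.2f} (95% CI {low:.2f}-{high:.2f})")
```

When the pooled confidence interval straddles a risk ratio of 1.0, as it does here by construction, the intervention shows no detectable effect on the outcome, which is the shape of the Cochrane finding described above.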

Despite the utter failure of these medications to manage blood pressure risk, the search for genomic causes of, or associations with, blood pressure risk continues (https://againstutopia.us18.list-manage.com/track/click?u=117c8f6cf4a7e5ceeb45e2d4e&id=38f5bbfeef&e=5e36cb54af) . Nor has that failure stopped a veritable sea of medical devices and digital health technologies from receiving funding to scale the measurement of blood pressure and the management of the medications meant to manage the condition itself.

Lastly, let’s take the case of type 2 diabetes. By now it’s basically a full-blown epidemic: approximately 25.8 million Americans suffer from it, and 2013 estimates put the number of Chinese sufferers at 113 million, about 10% of the Chinese population.

Diabetes has been linked to increased risk of cardiovascular illness, cancer, and Alzheimer’s, among many other maladies. Taken together, these maladies are the major cost and mortality drivers for healthcare systems the world over. Uncovering the major contributors to disease risk for diabetes, and lowering them, is and continues to be a major goal of state-funded and private healthcare research.

Since the discovery of recombinant insulin, it has been one of the primary therapies for managing glycemic load and limiting the systemic damage of diabetes. The other top-line therapies also focus on glycemic control, but via oral drugs; the standard of care here has become metformin.

After over half a century of intervention, the results are mostly in: insulin therapy does nothing to reduce cardiovascular mortality from diabetes, and doesn’t reduce cardiac events. I won’t paraphrase; let’s just quote from the source (https://againstutopia.us18.list-manage.com/track/click?u=117c8f6cf4a7e5ceeb45e2d4e&id=bf1d3b77fd&e=5e36cb54af) :

“Starting insulin therapy early in the course of chronic treatment of patients with type 2 diabetes would imply that there are unique benefits to insulin treatment. As addressed above, there is little evidence to support such a view. Insulin treatment is neither durable in maintaining glycemic control nor is unique in preserving β-cells. Better clinical outcomes than those that occur with other antihyperglycemic regimens have not been shown. The downside of insulin therapy is the need to increase the dose and the regimen complexity with time, the increase in severe hypoglycemia, and the potential increase in mortality as well as the potential increased risk for specific cancers.”

Ok, so insulin doesn’t really work for reducing negative outcomes, but then, it’s not the standard of care anyway.

How’s the standard of care doing?

Metformin is a safe, easy-to-manage drug that sensitizes cells to insulin. Truth be told, nobody actually knows how it works; we just know that it gets into cells, something something, sugar seems to get into cells more easily, and hyperglycemia is reduced.


A recent meta-analysis (https://againstutopia.us18.list-manage.com/track/click?u=117c8f6cf4a7e5ceeb45e2d4e&id=de0ad4919d&e=5e36cb54af) shows that while metformin reduces the incidence of hyperglycemia, it does fuck-all for mortality outcomes. This is kind of like saying that if you take metformin, you won’t suffer the symptoms of your uncontrolled blood sugar, but you’re still going to die from the disease on roughly the same timetable as if you hadn’t taken the drug.

I’m simplifying this for the sake of bombast, sure, but take me up on going into Google Scholar and hunting these studies and meta-analyses down for yourself. You’ll find that even in the few cases where metformin improves both quality of life AND outcomes, the difference isn’t much.

So that’s two major areas, heart disease and diabetes, where the top-line scientific management methods turn out to be useless at reducing mortality.

And if we turn our gaze back to the digital health and healthcare technology sector, we see a spate of technologies built to scale, digitize, and automate the administration of these disease states.

Glooko (https://againstutopia.us18.list-manage.com/track/click?u=117c8f6cf4a7e5ceeb45e2d4e&id=77d2301006&e=5e36cb54af) builds remote monitoring technology for diabetics and their doctors to seamlessly integrate information about their blood sugar from their devices, for expert oversight.

Livongo (https://againstutopia.us18.list-manage.com/track/click?u=117c8f6cf4a7e5ceeb45e2d4e&id=b534e0d4f2&e=5e36cb54af) integrates body weight, behavior change coaching, and medication management to purportedly engineer better outcomes for its members.

Sano (https://againstutopia.us18.list-manage.com/track/click?u=117c8f6cf4a7e5ceeb45e2d4e&id=4a59e96c9e&e=5e36cb54af) is pioneering continuous glucose monitoring, mining data about the interaction of food, blood sugar, and insulin constantly, in order to better manage outcomes for all people, not just diabetics.

You see the problem here, right?

Insulin and the various medications that control hyperglycemia are still the first-line therapies in these digitized methods. The digital methods driving the companies above are no different from what we’ve already been trying for the last half century, to the tune of, ya know, hundreds of millions of diabetics and a worldwide epidemic of crisis proportions.

This shit didn’t happen overnight! It’s BEEN not working!

And my bet would be that digital information technologies built on top of this eugenic epistemology, which merely structure data to manage health conditions, are making a strong statement: patients under their care can never be cured. Their health can only be managed, because all these technologies deliver are interventions that don’t cure anyone, merely keep conditions within a range declared by a medical expert.

On the other hand, the only companies having success reversing diabetes seem to be the ones that don’t treat their patients with the eugenic gaze. It appears that if you believe patients can be cured, you’re free to act in the decision space that lets you unearth and scale that cure.

Companies like Omada Health, who coach prediabetics to lose weight and prevent progression to diabetes, or Virta Health, who use low carbohydrate and ketogenic methods to manage behavior change and reverse diabetes, have been experiencing some success on this front, but I’m not here to talk about solutions (yet), I’m just pointing out problems.

All this is not to say that we’re wasting our time trying to build digital health technologies that might radically lower barriers of access to healthcare, cheapen healthcare delivery, or systematize repeatable, computable things such as chronic medication management that humans can’t do as well as algorithms.

It’s just that, in its current manifestation, digital health is tantamount to a bad algorithm of sorts – it’s feeding garbage in, and spewing garbage out.

If my thought experiment here is directionally correct, that the underlying driver of our social organization of science is eugenics, where genetic “risks” define our propensity for ill health, and these risks neglect embodied physiology, then our systems will only ever create garbage.

Cures, better value, and better outcomes, can only ever be a happy accident when padding massive profit margins with a faulty model is the real goal.

And who’s to say the model is faulty if it makes everyone a target with science-y looking things like genome-wide association studies, and then manages to influence its own targets with medicine that never cures anyone?

With eugenics as the driver of our medical epistemology, we may think we’ve finally got a simple picture figured out, but we’ll never tangle with the messy interactions between genetics, environmental inputs, and embodied physiology, leaving unexplored many non-genetic possibilities that are more complex but potentially more fruitful.

If we assume that everyone will get diseases because everyone has genes, and build digital health technologies accordingly, we’ll fashion beautiful digital interfaces that imprison us in our genetics, and leave entire possibilities for our liberation from disease unexplored.

Like what you’re reading? Support me on Patreon! (https://againstutopia.us18.list-manage.com/track/click?u=117c8f6cf4a7e5ceeb45e2d4e&id=0cf985e7e1&e=5e36cb54af)

Against Utopia – Now in Weekly Edition!

If you’re subscribed to my tinyletter, you may have noticed I haven’t sent anything out since the end of November. It turns out, writing about the history of depression and ways for us all to make progress against it without medical authorities is harder than I thought. The history is convoluted and complex, the concepts are murky, and the science is hard to deconvolute, but I’m almost done, and the letter should be out in the next week (though I promised the end of December – sorry!).

However, over on Twitter and Mastodon, I am constantly posting my thoughts on current events, good articles, new (and old) scientific research, and technology, from an epistemological anarchist perspective.

If you’re interested in reading more diverse content filtered through this lens, such as:
* Why every single Netflix documentary is bad
* What the hidden concept driving both blockchain and AI technology is, and what it predicts for our future
* How medical legibility killed thousands of women and ended the Women’s Health Initiative
* How decentralized forms of organization have fared against centralized forms historically, and why decentralized forms are the future (IMO)
* Which areas of science are ripe for Kuhnian revolutions, and how

Then go over here and sign up.

Look for my first communication next week, 1/20/19 or thereabouts.

Against Utopia Issue #5: The Epistemology of Depression Part 5 – The Case of Depression under the Statistical Gaze

Hello, and welcome to Against Utopia, a newsletter investigating authoritarian utopianism in science, technology, politics, culture, and medicine, and anti-authoritarian alternatives. This is Issue Five, published October 17th, 2018.

This newsletter now has a Patreon. If you are enjoying it and want to support my work, please do. I will never charge for this newsletter, or my “IP”, regardless.

So far, we’ve examined, in a general sense, how medical facts come to be. Despite what we may have been taught about the mythology of the scientific method in school, we’ve seen precisely how much historical and social baggage the practice of medicine has, and that this baggage lingers well into modernity.

Recall that we’ve seen how 19th and early 20th century doctors viewed genealogy as the way to practice medicine by classifying disease. They believed that if they could classify observations scientifically, then they could practice medicine correctly. We then saw how their clinical experience modified classification, such that a gap came to exist between classifications derived from examining the dead, and the observations of the physiology of the living. At almost exactly the right time, the ascendance of statistical methodologies allowed them to bridge this gap via application of the risk factor. The risk factor enabled the emergent structures of medicine to define everyone as having some percentage of illness, which further allowed the pharmaceutical industry to define the ill and the healthy in a statistical sense using the surrogate endpoint, resulting in drugs engineered to target statistical state transitions, with the implicit promise that this maps to physiology.

Taking this all together, we have (1) a way to label the ill and the not ill, (2) a way to justify these labels with scientific methodologies, (3) a way to expand definitions to apply to as many people as possible objectively and hopefully, defensibly, (4) a way to scientifically and mathematically invalidate lived experience, and (5) a methodology to develop drugs for a defined, expanded, and scientifically valid market, that simultaneously is now the principal way that we derive knowledge of physiology, yet can only derive faulty knowledge thereof.

It’s how we end up with serotonin as a “downer” for bears and all mammals it’s ever been studied in, except in humans, where it is the happy molecule, an “upper” (cf. Issue Two <https://tinyletter.com/AgainstUtopia/letters/against-utopia-issue-2-the-epistemology-of-depression-part-2-grasshoppers-squirrels-and-bears-oh-my>).

We will now examine the case of mental disorders in the 19th and 20th century to see how this generalized process applies to depression. Let’s get into it.

Objective Categorization of Mental Illness – Kraepelinian Nosology

In the late 19th and early 20th centuries, psychiatry as a practice started to become formalized, moving from its quack origins and pet theories toward medical practice and standardization. There were still plenty of doctors (and charlatans) injecting their patients with insulin on a whim, willfully ignoring informed consent, lobotomizing patients, and electrocuting patients with no theory in mind, but a subset of people with MDs began to try to apply the scientific method to what they were observing in asylums and sanitariums.

Emil Kraepelin was one of these MDs. An influential German psychiatrist at the turn of the 20th century, he believed that psychiatric illnesses were principally biological and genetic in nature. He was also a eugenicist, and eugenics at the time was the reigning theory of the genetics of disease. Kraepelin endeavored to make the practice of medicine in mental illness as a whole, and depression specifically, more scientific by applying categories and objective criteria to theoretical, unobserved, and undetermined biological states that were assumed to exist, so that all physicians would use the same language to detect the same diagnostic states and hence make the practice of psychiatric medicine “more scientific”. It is important to step back and realize that at this time, the notions of randomization, clinical trials, and selection bias were acknowledged, but were in their infancy. If you saw nothing but women and the maligned classes of Blacks, Latinos, and other non-whites, you were more likely to say that mental illness is more prevalent in these populations, to diagnose their illnesses as essential to their personhood, and to exclude them from polite society. And that’s exactly what he believed – that there was a fixed amount of “illness” in the population, and that a physician’s job was not to heal or change that state, because nothing could be done about it. A physician’s job was to scientifically identify and exclude. That’s part and parcel of what Kraepelin sought to standardize, with science.

To Kraepelin, eugenics was not the problem; the burgeoning practices of psychiatry worldwide were in a diagnostic crisis. It was a common occurrence for psychiatrists to present at conferences and find that what one group was presenting as depression in a clinical study in the UK was another US group’s definition of schizophrenia, and vice versa. This fluidity in diagnosis and standards allowed neurologists of the time, who were just glorified spin doctors with the unquestioned authority to shock, lobotomize, and poison people to cure their mental illnesses, to run amok.

This diagnostic crisis only got worse into the 20th century – there were many attempts at controlled experiments in diagnosis, with the American Psychological Association, for example, sending out videos of cases to 20 psychiatrists and psychologists and receiving back 14 conflicting diagnoses. The most famous case of this is perhaps the Rosenhan Experiment <https://en.wikipedia.org/wiki/Rosenhan_experiment>, in which a group of practicing psychologists and students infiltrated mental health institutions by faking schizophrenic symptoms that then resolved, but were nonetheless forced to admit that they had a mental disorder, and were given drugs. But I digress.

It was clear that psychology and psychiatry had a nosological crisis (and they still do!), and Kraepelin’s attempt to make psychiatry “clinical” took place against this backdrop, so it was not entirely without merit.

After about a decade and a half of working with the mentally ill, in Kraepelin’s estimation, nothing could be done about depression. He was prolific in promoting this view, publishing guide after guide on how to diagnose effectively, and he spent much of his time propagandizing his approaches so that doctors and psychiatrists would adopt the standards and converge on the same diagnoses. Today, Kraepelin is not well known outside of psychiatry, but within the profession, to this day, he is known for being the first real scientific manager of epidemiological mental health, and for setting the standard for how to manage, observe, and record thousands of observations rigorously. His approach would, after being somewhat discredited in the early to mid 20th century, resurface in 1970 in the form of the Diagnostic and Statistical Manual (DSM) of Mental Disorders, and he is recognized for being the major influence behind modern mental health instruments like the HAM-D (Hamilton Depression Rating Scale).

In connecting him to the movements that produced scientific medicine and the pharmaceutical industry, it’s important to note that Kraepelin was, in my view, the originator of the idea that you do not need to know the underlying biology in order to classify mental disorders, which I contend is also the major epistemological pillar behind the production of scientific knowledge in the pharmaceutical-industrial complex. If you can identify the steps of progression universally, which Kraepelin called “patterns of symptoms”, then you can group these together as “syndromes”, and psychiatric medicine shifts from merely “symptomatic” to “clinical”. This approach might seem circular in its reasoning, and that’s because it actually is. Kraepelin’s belief boiled down to: the pattern of symptoms = disease, and the disease = its pattern of symptoms. These two ideas – that one does not need to know the underlying biology in order to classify illness, and that the symptoms = disease and the disease = symptoms – laid the groundwork for the pharmaceutical industry to later come along and build a pipeline that could identify “surrogate endpoints” of, in our previous example in Issue Four, Alzheimer’s Disease, without having to understand what actually causes progression in the disease. This was the early form of the surrogate endpoint thinking we discussed in part 4, at work.

Recall that a surrogate endpoint, typically “staging” of a disease, is fashioned in order for the drug to have a target state to work on. It is assumed that e.g. stage 2 Alzheimer’s is an object that exists in the world that we can identify with a clinical instrument (i.e. a quiz). If we can identify stage 2, and a theoretical (heh) stage 3, then we can fashion a drug whose goal is to prevent progression from stage 2 to stage 3. Then we can quantify its effect size in a population of clinical trial participants and decide if the drug works.

If we can’t, for example, cure Alzheimer’s because we don’t really understand what causes it, then the theory is that we can just try to prevent progression to a more severe form of the illness by identifying the transition with a standardized dementia questionnaire. We don’t have to understand any of the how – we can just focus on the what. The only unquestioned assumption here is that the progression is real, that it exists in the physiology of the person: if it doesn’t, then the whole theory falls apart.
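The what-over-how logic above can be made concrete with a toy sketch. Everything here is invented for illustration – the score thresholds, the stages, and the progression probabilities are not real clinical values – but it shows how a “treatment effect” can be defined entirely as a shift in the transition rate between two questionnaire-defined states, with no physiology anywhere in the calculation:

```python
import random

random.seed(0)

# Hypothetical staging: a dementia-questionnaire score is mapped to a "stage".
# The cutoffs below are invented for illustration only.
def stage(score):
    if score >= 24:
        return 1
    elif score >= 18:
        return 2
    return 3

def simulate_progression(p_progress, n=10_000):
    """Fraction of simulated stage-2 patients who 'progress' in one interval."""
    return sum(random.random() < p_progress for _ in range(n)) / n

# The drug's "effect" is defined purely as lowering the probability of the
# stage-2 -> stage-3 transition; the how of the disease never enters.
placebo_rate = simulate_progression(0.30)
drug_rate = simulate_progression(0.24)
effect = placebo_rate - drug_rate  # the trial's headline "effect size"
print(round(effect, 3))
```

Note that nothing in this sketch could ever detect whether the staging itself corresponds to anything real in the patient’s physiology – which is exactly the unquestioned assumption at issue.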

Now clearly, there are weaknesses to this case that don’t take an entire century to resolve. If all it took to canonize Kraepelin’s approach was the existence of categories, then we could zoom ahead to petrochemicals, dyes, and the pharmaceutical industry’s production of drugs. However, plenty of Kraepelin’s contemporaries called out the tautology of symptoms as disease, the problem of observing classes of disease without physicalizing them in material biology, and other such problems.

Furthermore, Kraepelin’s theories were developed against the background of the burgeoning psychoanalysis movement led by one Sigmund Freud, which was seemingly demonstrating much success in explaining the plight of the average person under modernity. There was a bit of a battle brewing, and in order for us to understand how patient experience and context could be ignored in favor of the objective classifications of the psychiatrists, it is important to understand another seminal figure in 20th century psychiatry, Adolf Meyer, and the work he did to popularize and integrate Freudian psychoanalysis with diagnosis and occupational therapy.

The Reaction to Classification – Adolf Meyer, Patient Experience and Psychoanalysis

When Adolf Meyer received his MD in 1892, Emil Kraepelin’s ideas and scientific practice of psychiatry had just started to take hold in Europe, and Meyer was enthusiastic about applying them in the clinic to real patients. He took up a postdoctoral position at the University of Zurich, and began practicing on patients, putting Kraepelin’s designations in play, learning, iterating, and working at the bleeding edge of psychiatric science. Around the same time that he began to doubt Kraepelin’s classification schemes, he failed to secure an appointment at the University of Zurich, and he instead moved to the United States.

Meyer’s work in the US began to focus on the burgeoning contextual revolution in psychiatry: psychoanalysis and the work of Sigmund Freud. Meyer was thoroughly Kraepelinian in his approach, like most European doctors of the time. In letters to Kraepelin and in his personal writings, he (in my opinion, correctly) identified that Kraepelin’s approach was a tautology, in that the symptoms = disease, and the disease = symptoms. In Meyer’s practice, how the patient gets the symptoms, how they manifest, and the patient’s particular form of suffering give meaning to the disease state, and after he observed Kraepelin at work and mentored under him, he realized that there were multiple holes in Kraepelin’s categorical frameworks, which he thought could be patched by the inclusion of patient meanings and context.

The principal form of inclusion of patient meaning and context at the time was psychoanalysis. I won’t digress to explain Freud and psychoanalysis, but suffice it to say that the Freudians emphasized the formative influence of early child rearing on adults, of sexuality, and of the interaction between people and their environments. If the eugenicists stressed objectivity and classification, the Freudians agreed to an extent, but stressed that environments generated the maladies that we are classifying, and the maladies themselves were not determined merely by genes or biology. In a way, theirs was a reaction to the very reactionary position that we are born with a certain amount of diseased genes, and that there can be nothing done about this.

In 1908, Meyer started a psychiatric clinic at Johns Hopkins Hospital to begin implementing new ideas, both Kraepelin’s and Freud’s, in the clinic. He decided not to use Emil Kraepelin’s clinical model for classifying disease, but he incorporated some of Kraepelin’s practices of observation, standardized record-keeping, and monitoring of pre- and post-symptomatic phases of the disorder, in order to build histories and make inferences about likely disease progression. On the other end, Meyer incorporated social and biological factors that affected the personality into the clinical practices under his supervision. One of his primary theses was that modernity was causing a generalized, societal anxiety called neurasthenia, which was better understood as a failure to adapt to the demands of modern life. He also thought that instead of suffering from specifiable, objective natural disease classes, people mostly suffered as a result of “psychobiological” life situations, and he sought to frame mental disorders as reactions to these situations (“a failure to adapt” vs. “you are broken forever”). He was also an early supporter of occupational therapy as a way to help people cope with their life situations and find meaning in themselves and their communities by productively participating in them.

With his scientific approach to psychoanalysis and classification, and his ability to bring the patient’s meaning back into the context of medical psychiatry, he began to popularize an idea among the burgeoning profession of psychiatry: that everyday people could experience generalized social anxiety and distress, without being insane. This was a HUGE boon that cannot be overstated – Meyer, while working to make classifications more scientific, to create possibilities, and to de-pathologize mental disorder, expanded the scope of treatable illness from just the “insane” to everyone.

While helping popularize occupational therapy as a practice was undoubtedly useful and ameliorative for patients, Meyer unknowingly backdoored the expansion of the label of mental illness by allowing any everyday person to come under the influence of the psychiatric profession. He, in effect, democratized psychiatry and its approach to mental illness, lowering the bar to entry to the psychiatrist’s office from the “criminally insane” to the everyday nervous or anxious person.

If you’re following along at home, we now have some of the factors needed for a growth market to explode into existence:

– An “objective” criterion, with categorization, used to identify the depressed (Kraepelinian nosology)
– A way to make the categorization and definition of treatable extend to as many people as possible

The diagnostic problems still lingered well into Meyer’s time, and it’s hard to engineer drugs for something that even most experts don’t agree is a treatable condition, or the right treatable condition. We needed a way, once and for all, to say who is depressed, and what the criteria are, so that we could make drugs that fit the bill. Perhaps a manual, from a set of experts, blessed by the FDA and the American Psychiatric Association, could finally put this issue to bed…

Integrating Clinical Objectivity with Patient Experience: The Menninger Brothers, Robert Spitzer, and the DSM

Adolf Meyer retired in 1941, just in time to miss the generation-defining opportunity to diagnose and treat people affected by the worst war the world had ever seen. The Menninger brothers, on the other hand, were at the right place at the right time. Karl and his brother William noticed that soldiers who had gone to war and come back often came back with profound psychological trauma. Karl observed, “We must attempt to explain how the observed maladjustment came about and what the meaning of this sudden eccentricity or desperate or aggressive outburst is.” The answer was not found in a classification scheme, it was in “what was behind the symptom,” as Karl put it.

The Menninger brothers, knowingly or unknowingly, agreed with the direction that Adolf Meyer was taking psychiatry in. They believed that factors in the environment influenced the patient’s ability to adapt and cope with stress, or not. This point of view of the organism and its ability to adapt was canonized in the first version of the Diagnostic and Statistical Manual of Mental Disorders, or DSM, which was released in 1952 and featured mental illnesses named as reactions. Everything, from depression (depressive reaction) to schizophrenia (schizophrenic reaction) and anxiety (anxiety reaction), was seen as a reaction to a modern environment within which patients were failing to live.

This psychobiological view, as coined by Meyer and expanded by the Menningers, created a conflict between the “scientific” practitioners of observation and classification led by Kraepelin, and the messier therapy-focused practitioners, like the Menningers and Adolf Meyer. If it was indeed factors in the environment that were to blame for the inability to cope exhibited by patients of mental disorders, then, as we indicated in the previous section, everyone can be insane. Instead of seizing this as an opportunity to de-pathologize mental illness, the APA and other governing bodies saw this expansiveness as a conflict to be resolved, because it made diagnosis, identification, and treatment very complicated, and worst of all, it was the source of the murkiness in diagnosis that we identified previously.

It was at this key point in time that psychoanalysts had shown that patient meaning and individual context were huge inputs into mental disorders, while scientific observation could simultaneously be a huge boon in diagnosis. Instead of using the two approaches to find a middle ground that could present a more holistic view of patient mental health and brain physiology, the authorities took a utopian view: that only one of these is right, only one of these approaches can be scientific, and it is the one that makes health legible, objective, and most importantly, applicable at scale to the entire population.

In 1963, Karl Menninger wrote, “Instead of putting so much emphasis on different kinds … of illness, we propose to think of all forms of mental illness as being essentially the same in quality and different quantitatively”, which neatly captures the thinking of the time: everyone can have mental disorders, it’s just a question of how much.

Right when this thinking was at its apogee, when psychiatry was facing this crisis in diagnosis seemingly brought about by including too much context, Robert Spitzer happened on the scene, and the first few editions of the DSM settled the fight between classification and patient context.

The first and second editions featured this inclusion of context by framing everything as a reaction to the environment, with the larger understanding that it was simply a failure to adapt to the demands of modern life that caused illness. In due time, it was implied that we would figure out what these shortcomings were, both in the environment and in our own physiology, but the connection between the two was assumed.

Spitzer worked on precisely this problem by refining the nosology of psychiatric medicine. By the time the DSM-III was up for approval, he and his colleagues had collected more than a decade of clinical signs and symptoms that had been selected precisely for their observability and standardization. Anything that suggested a hidden or implied meaning to mental suffering – in effect, anything that brought with it a theory of mind – was excluded. Only what the doctor observed, via senses and discourse, mattered. Spitzer and his colleagues held the position that “reaction” implies that there was a healthy way of transacting with one’s environment, and that psychiatrists also knew what it was. This, as we’ve seen, was prone to appearing unscientific. In many ways it was, but that was not a bug of the approach; it was a feature.

As we’ve seen through the duration of this newsletter thus far, pretending that we understand a phenomenon by cleaving away its messier parts to make it addressable by science is typically done by those with power or authority. It’s also a utopian view, because it seeks to start from a reduced understanding that can be perfected to address all maladies. This is precisely the business that mid-to-late 20th century psychiatry was involved in – a utopian view of what mental illness looked like, because the real messiness of it would belie the descriptions and prescriptions of experts. Psychiatrists cut prescription entirely out of the game by saying they did not know how patients should react to their environments, and therefore they were no longer accountable for this. What they were claiming authority over was applying clinical criteria to observable signs of mental disorder, with reliability. By excluding normative claims to mental health buttressed by reaction and psychoanalysis, they were able to exclude all illegible, unscientific practices from psychiatry, so it began to “look” more scientific. This signed over a tremendous amount of power to psychiatrists, eliminated the unreliability of diagnosis (90% of doctors now converged on the same diagnosis, curtailing the effect of the Rosenhan Experiment), and birthed the opportunity for clear criteria with surrogate endpoints and drugs to address them – drugs for which the market was everyone, because, as we’ve seen, psychoanalysis was shuttered, but the rhetoric with which it had expanded the market of pathological mental disorder was not.

Iproniazid, Imipramine, Reserpine and Prozac: How a Shaky Theory Gained Acceptance with Clinical Trials and the FDA

In 1952, doctors in a clinical trial originally seeking new drugs targeted at tuberculosis noted that the drug isoniazid made patients “inappropriately happy” after a dose was administered.

A different drug, imipramine, meant to treat schizophrenia, was also found to reduce depressive symptoms.

Investigation into these drugs quickly showed that they act on biogenic amine transport and production in different ways. It was found that isoniazid and a close analog, iproniazid, work to inhibit the enzymes that break down monoamines, like serotonin.

Later on, it was discovered that imipramine blocks monoamine transport, which keeps serotonin outside the neuron, in the synapse, for longer (this is also the primary way that the SSRI antidepressants work). A theory started to take form – that serotonin influenced the pathological experience of depression, because limiting its degradation in the cell (iproniazid) or preventing its uptake by a neuron (imipramine) seemed to alleviate depression.

At roughly the same time, in 1955, reserpine, an alkaloid derived from Indian snakeroot (which Gandhi apparently used as a tranquilizer), seemed to increase depressive symptoms in clinical observations from just one trial. Reserpine was known to deplete biogenic amines, such as norepinephrine, serotonin, and dopamine, by blocking their transport out of the cell, in effect working in the perceived “opposite direction” of imipramine.

Looking at this in the 1950s, it would appear that if depleting biogenic amines increased depressive symptoms, and increasing biogenic amine availability via enzyme inhibition or transport blockade reduced depressive symptoms, then a likely theory to structure these observations is that levels of the biogenic amines themselves were responsible for depressive symptoms. By 1967, these three observations became reified in the mental health literature with the release of The Biochemistry of Affective Disorders. However, this theory, and in some cases even the observations, were ultimately wrong, as we’ll see.

Being wrong didn’t stop what I will call the low serotonin theory of depression from fueling the next generation of innovation in anti-depression drugs, starting with fluoxetine (Prozac) and its clinical trials. The medical trade and pharmaceutical-industrial complex by now had some key factors in place in order to convince the FDA that depression is an illness, and that it is treatable. First, they reified depression as an object, with objective criteria with which to identify and diagnose it, so that 90% of practicing physicians could identify the illness reliably. This ensured that the FDA would unequivocally recognize an indication for the drugs put forth for its review, meaning that it recognized the disease of depression as real. This was then enshrined in the DSM, and developed further with psychiatric measurement instruments such as the Hamilton Depression Scale, which used 29 standardized questions and a computation to score the severity of your depression.
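To make the instrument-style computation concrete, here is a toy sketch of HAM-D-style scoring. The item ratings and the severity cutoffs below are invented for illustration – the real HAM-D has defined items, rating anchors, and validated cutoffs – but the shape is the same: clinician ratings are summed into one number, and that number stands in for the patient:

```python
# Toy illustration of instrument-style scoring in the HAM-D mold.
# Item count, ratings, and severity bands are invented for illustration.
def hamd_style_score(item_ratings):
    """Sum clinician ratings (each item scored e.g. 0-4) into one number."""
    return sum(item_ratings)

def severity_band(score):
    # Hypothetical bands: the single number now decides the diagnosis.
    if score <= 7:
        return "not depressed"
    elif score <= 16:
        return "mild"
    elif score <= 23:
        return "moderate"
    return "severe"

ratings = [2, 1, 3, 0, 2, 2, 1, 3, 2, 1]  # ten items, clinician-assigned
score = hamd_style_score(ratings)
print(score, severity_band(score))  # prints: 17 moderate
```

Notice what falls out of the computation entirely: who the patient is, why they suffer, and what the symptoms mean to them.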

Second, they had a physiological-looking, science-y theory of how it worked in the brain – the low serotonin hypothesis, which was buttressed primarily by early findings in the 1950s. What remained unsaid is that these studies were either poorly replicated, or didn’t replicate at all when run through trials decades later. Now, in order to fuel the production of the first generation of SSRI drugs, the two had to be brought together – we had to see if we could drive changes in the “objective” measurement scores of depression by applying a drug to an existing population of the identified depressed. That’s precisely what the first few clinical trials used to garner FDA approval did.

If you examine their abstracts, a common theme emerges – all of the effects are framed specifically as statistical reductions in depression based on scoring with an instrument such as HAM-D. None incorporate patient experience, context, or meanings. We’ve already seen how this objectification approach maligns patient experience and meaning, but nonetheless, the score it produces is treated as an objective measurement of whether someone is depressed or not. This treatment is a decision, and the decision is made by authorities with power, over subjects to that power. This is exactly the type of thing I’m interested in calling out – it serves the power structures of medicine to heroically simplify, to build a utopian vision of how depression can be measured, much more so than the people who are subjects of this measurement, because it allows them to deploy power through institutions easily, rationally, and coherently, devaluing the individual experience of depression and preserving the authority to do so.
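What a trial result framed this way actually computes can be shown in a few lines. The numbers below are invented, not from any real trial, but they show the entire epistemic content of the headline claim: a difference between two group means of instrument-score changes, with patient experience nowhere in the arithmetic:

```python
import statistics

# Toy trial: the "result" is only a difference in mean instrument-score change.
# All values are invented for illustration; no patient context enters.
drug_change = [-9, -7, -11, -6, -8, -10, -7, -9]     # change in HAM-D-style score
placebo_change = [-5, -4, -6, -3, -5, -7, -4, -5]

mean_drug = statistics.mean(drug_change)         # -8.375
mean_placebo = statistics.mean(placebo_change)   # -4.875
difference = mean_drug - mean_placebo            # -3.5, the "treatment effect"
print(mean_drug, mean_placebo, difference)
```

Everything the abstract reports reduces to that last subtraction; whether a 3.5-point shift on a questionnaire corresponds to anything a patient would recognize as relief is a separate, unasked question.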

With the HAM-D in play, the reductions in HAM-D observed, and a theory to go along with it, the trade organizations and pharmaceutical companies were allowed to submit the drugs to the FDA for approval. The FDA recognized that the drugs served an indication, which means they are meant to treat a recognized condition, and they demonstrated effectiveness against that condition (which is another can of worms we won’t open here). Furthermore, since anyone can suffer from depression, and not just major depressive disorder, probabilities could be generated for transitioning from one score to another, putting the risk factor back into play. If you could show that a drug also plausibly reduces a risk factor that ladders up to a particular score, that multiplies the number of surrogate endpoints your drug can target, and now we are completely off the map of lived experience and physiology if you ask me. We might as well be talking about magic, flying pigs, Bigfoot, and other seemingly anecdotal and unscientific things.

The story didn’t quite end there, however. Support for the low serotonin hypothesis has been waning almost since its inception. The original reserpine studies mentioned above were replicated and re-interpreted under randomized controlled conditions, and it was found that reserpine actually did the exact opposite of what it was reported to do in 1955. The clinical criteria for evaluating whether it worked or not suffered from the same problems we discussed earlier in terms of classification and standardization. Reinterpreting the criteria under HAM-D showed no effect. Under these more standardized conditions, patients reported that they actually felt better as a result of taking reserpine. So, depleting biogenic amines in the postsynaptic neuron actually made people feel better, which also goes against the major claims (not even evidence, really) made for the efficacy of SSRIs.

Furthermore, in 1987, a clinical trial of a related compound, tianeptine, showed that enhancing serotonin uptake into the cell also alleviated symptoms of major depressive disorder, in some cases better than SSRIs such as fluoxetine. In the course of just a few years, two of the major pillars that built the low serotonin hypothesis were called into question, but the theory nevertheless persists, because the theory was never meant to explain the experience of depression and ways to manage it. My claim, from the outset, is that the theory reified a set of medical objects that could be used to create a disease, structure the market around it, design surrogate endpoints for magic bullets to address, and then convince consumers that this particular magic bullet was responsible for solving their problems. The perceived insanity of the functioning of this market makes sense when you stop trying to examine it logically from the patient’s perspective, and instead examine its logic through the lens of capital and the authorities that control it (doctors).

To recount, we’ve seen how, in order to know depression, we have to go back in and through time to see how doctors know mental health. They derived knowledge of it via classification and scientific observation, and via clinical empiricism coupled with psychoanalysis and patient context. The utopian drive for objectivity ultimately won out, while the empirical approach lingered on and was used to define everyone as already ill, suffering from a nonzero amount of mental illness pursuant to the objective criteria of the DSM and the risk factors associated with each of its diagnoses. The definition of the entire population as ill or potentially ill enabled the construction of the endpoints necessary to medicalize mental disorder by the pharmaceutical-industrial complex, working in concert with the labor and complicity of the trade organizations (the American Psychiatric Association and primary care physicians). This is a predictable process that, by virtue of not seeking validity, can produce knowledge that is at odds with physiology and biology, yet can justify itself with statistics.

The epistemology of depression can be, and is, corrupted systemically. Even if every actor in the system takes the “right” action, because the system is pointed at the wrong objects, it still misfires. No malicious actors necessary, just incentives.

So what does a more effective explanatory theory of depression look like? On what side does the evidence lie? What possibilities do we as average people have to increase our ability to cope under mental stress? We’ll cover that in the remaining two parts of the series.

– Dumit, J. Drugs for Life: How Pharmaceutical Companies Define Our Health. Duke University Press, 2012.
– Pills for Mental Illness?, TIME Magazine, November 8, 1954
– Foucault, M. Madness and Civilization. New York: Random House. 1965.
– Greenberg, G. Manufacturing Depression. New York: Simon & Schuster, 2014.
– Chouinard, G. “A Double-Blind Controlled Clinical Trial of Fluoxetine and Amitriptyline in the Treatment of Outpatients with Major Depressive Disorder.” The Journal of Clinical Psychiatry, vol. 46, no. 2, Mar. 1984, pp. 32–37.
– Fabre, L. F., & Putman, H. P. (1987). A fixed-dose clinical trial of fluoxetine in outpatients with major depression. The Journal of Clinical Psychiatry, 48(10), 406-408.
– Cohn, JB. “A comparison of fluoxetine, imipramine, and placebo in patients with major depressive disorder” The Journal of Clinical Psychiatry 46(3 Pt 2):26-31 · April 1985
– Spitzer RL, Endicott J, Robins E. Research Diagnostic Criteria: Rationale and Reliability. Arch Gen Psychiatry. 1978;35(6):773–782.
– Endicott J, Spitzer RL. A Diagnostic Interview: The Schedule for Affective Disorders and Schizophrenia. Arch Gen Psychiatry. 1978;35(7):837–844.
– Born, G.V.R., Gillson, R.E., 1959. Studies on the uptake of 5-hydroxytryptamine by blood platelets. J. Physiol. 146, 472–491.
– Everett, G.M., Toman, J.E.P., 1959. Mode of action of Rauwolfia alkaloids and motor activity. Biol. Psychiatry 1.
– Jacobsen, E., 1964. The theoretical basis of the chemotherapy of depression. In: Davies, E.B. (Ed.), Depression: Proceedings of the Symposium held at Cambridge, 22 to 26 September 1959. Cambridge University Press, New York, NY, pp. 208–214.
– Gram, L.F. Psychopharmacology (1986) 90: 131.
– Mennini, T., Mocaer, E. & Garattini, S. Naunyn-Schmiedeberg’s Arch Pharmacol (1987) 336: 478. https://doi.org/10.1007/BF00169302
– Baumeister, A.A., Hawkins, M.F., Uzelac, S.M., 2003. The myth of reserpine-induced depression: role in the historical development of the monoamine hypothesis. J. Hist. Neurosci. 12, 207–220.
– Davies, D.L., Shepherd, M., 1955. Reserpine in the treatment of anxious and depressed patients. Lancet 266, 117–120.
– Kraepelin, E. Lectures on Clinical Psychiatry. New York: Hafner, 1968.
– Scull, Andrew; Schulkin, Jay (January 2009). “Psychobiology, Psychiatry, and Psychoanalysis: The Intersecting Careers of Adolf Meyer, Phyllis Greenacre, and Curt Richter”. US National Library of Medicine. 53: 5–36.
– Meyer, Adolf (1928). “Thirty-Five Years of Psychiatry in the United States and Our Present Outlook”. American Journal of Psychiatry
– Muijen, M. , Roy, D. , Silverstone, T. , Mehmet, A. and Christie, M. (1988), A comparative clinical trial of fluoxetine, mianserin and placebo in depressed outpatients. Acta Psychiatrica Scandinavica, 78: 384-390.
– Menninger, K. The Vital Balance. New York: Viking, 1963.
– Menninger, W. “Psychiatric Experience in the War, 1941-1946.” American Journal of Psychiatry, 10 no.5 (March 1947). 577-86.

The Epistemology of Depression Part 4 – The Statistical Gaze: Clinical Trials as the Growth Engine for Prescriptions

Hello, and welcome to Against Utopia, a newsletter that lifts the veil of authoritarian utopianism in science, technology, politics, culture, and medicine, and explores anti-authoritarian alternatives. This is Issue Four, published July 18th, 2018.

In Issue 3, I established an understanding of how doctors see. Cognizant of how medical perception develops both historically and culturally, we can now begin to critically analyze how the hegemonic discourse employed by doctors, pharmaceutical marketers, and regulators shapes knowledge of our physiology.

Recall that at the end of Issue 3, we discussed some of the typical, patterned ways doctors may have explained medicine to you in the past:

“1 in 5 patients experience improved symptoms on Prozac within 90 days, 1 in 20 experience suicidal thoughts, and 2 out of 5 do not respond. It has a low likelihood of working for you because of other drugs you’re on, so let’s try something else.”

How is it that seemingly authoritative statements of medical expertise from doctors have come to take on this wishy-washy, uncertain form? A hundred years ago, a doctor would have thought this statistical guesswork alien, even heretical to the craft of medicine; now it seems to be fairly common practice.

By 1920, probability and statistics in medical treatment were well understood. However, unlike today, as we saw in Issue 3, doctors of that era felt strongly that disease was embodied in subjective experiences that could be extracted through interaction in a medical clinic, that autopsies provided a way to link the pathology of disease with living bodies, and that these diseases could be universally and canonically classified.

Somewhere along the way, statistical-inferential expertise like that demonstrated in the example above began to replace statements grounded in physiological and biological functions. To many nowadays, the effect appears as if scientists and doctors are hedging all their bets, abandoning medical treatment in favor of medical gambling: “well, if we can’t see into you, and we can’t quite see into your future, perhaps epidemiological numbers will give us a map with which to understand the territory of your body.”

It would be overly reductive of me to ascribe such a complex social, political, and scientific change in diagnostic thinking to a handful of factors. But I do think two factors in the 20th century led us to a situation where our health would no longer be perceived by medical experts as subjectively varied and embodied, but rather as something that could be objectified in a spreadsheet and predicted with numbers. These two factors are: 1) the Framingham Heart Study, a landmark epidemiological study that helped establish smoking and high cholesterol as causes of heart disease, and led to the creation of the “risk factor”; and 2) the advent and growth of the clinical trial process for drug approval, which, as we’ll see, drives most of today’s modern medical research and shapes the market for medical diagnosis and pathologization.

My aim here is to show how the map of the body created by epidemiological risk factors does not significantly overlap with the territory (human physiology) it circumscribes, and how, as a result, we don’t really seem to cure anything anymore. We’ve just become very good at endlessly identifying disease and prescribing medication, but we’re no better off in terms of outcomes.

The Framingham Study: “Risk Factors” and Alienation

The landmark Framingham Heart Study began in 1948, involving over 5,000 participants sampled from the small Boston suburb of Framingham, Massachusetts. It carefully followed and monitored this sample for decades, tracking, among many other things, their behaviors (e.g. smoking), their biomarkers (e.g. cholesterol), and any other relevant medical events (heart attacks, deaths). With careful analysis of many data points, it sought to measure the causal connections, if any, between these observed behaviors, recorded biomarkers, and events.

In one of the first publications summarizing its early findings in 1961, Dr. William B. Kannel, the director of the study, introduced the now-common term “risk factor” to refer to the relative probability that a patient with a given condition would develop a certain adverse outcome (i.e. smoking increases the “risk factor” for developing lung-related illness). The term “risk factor” was soon canonized in the medical lexicon. Before this study, doctors believed that atherosclerosis caused heart disease, that hypertension was a normal part of aging for everyone, that smoking might be related to heart disease, and that obesity had no sure bearing on mortality. What this study gave researchers was the ability to estimate, with relative statistical certainty, the likelihood that someone who smokes a pack of cigarettes a day will develop lung cancer over the course of their life compared to someone who doesn’t. It became, in other words, very easy to tell a patient how their lifestyle correlated with future outcomes such as obesity, heart disease, and cancer. Researchers felt empowered to give causal weight to your actual weight, attributing to your 20 extra pounds an increased likelihood of early death.

Surely, in a vacuum, this might seem like a good thing: doctors and patients could now identify and isolate risk factors for developing disease. No doubt interventions developed to mitigate noted risk factors (e.g. “Quit smoking or you’ll die.”) have done much to reduce the incidence of disease in many patients. In a sense, the Framingham Study offered medicine what it did not have before: the scientific rationale for prescribing preventative interventions.

Unfortunately, these things never happen in a vacuum. They occur in the hegemonic domain of the pharmaceutical-industrial complex. Instead of simply empowering doctors to prevent disease, the Framingham Study gave the super-organization of pharma, government, and the medical trade exactly the tools it needed to turn healthcare into a market, and clinical trials provided the tools to increase the scope of disease and grow prescriptions.

By 1961, the pharmaceutical industry understood that with a sufficiently broad definition of risk, it could begin to medicalize larger and larger populations of relatively healthy people who had yet to see themselves as patients. As one industry executive put it, the goal for treating diabetes wasn’t to reduce its occurrence across the population but rather “to uncover more hidden patients among the apparently healthy.” The Framingham Study inspired the demand for still larger trials, needed to “render visible the relatively small improvements provided in less severe forms [of medical intervention].”

Risk factors thus became the tool used to uncover this new market of hidden patients. Instead of seeking ways to treat and cure the symptoms of disease, pharmaceutical researchers were incentivized to hunt for precursors (surrogate endpoints): evidence that a patient could potentially develop a disease and would need long-term (and preferably costly) intervention to prevent dying from it. In the discourse, this manifested as an upending of the dominant view of disease – instead of being seen as incurable spirals toward an eventual death, diseases were now seen as chronic conditions requiring surveillance, prediction, and management. But defining disease with these endpoints seemed a little… arbitrary.

The medical anthropologist Joseph Dumit puts it thus (quoting him at length):

“Illness was redefined by treatment as risk and health as risk reduction, and the line of treatment itself was determined by the clinical trial and an associated cost-benefit calculation. … If the RCT (randomized controlled trial) meant that doctors had to give up control during the trial and trust the numbers afterward, the emergent notion of illness as defined by that line was equally troubling precisely because it was both arbitrary and unsatisfying. Why at this number and not a bit higher or lower? Why are the numbers usually so round (everyone over 30 should be on cholesterol-lowering drugs?)”

The risk thresholds established for these diseases as a result of studies like Framingham did, as he says, have an arbitrary feel. There was no clear distinction in the risk factor between when a disease starts and when it ends, when one is safe and when one is at risk. In addition, since we’re dealing with statistical notions of health, not everyone at risk will actually develop the disease. So, what kind of a justification could we have for saying that all 29 year olds shouldn’t be on cholesterol-lowering drugs, but all 30 year olds should? Doctors, epidemiologists, and statisticians marshaled the facts to end this debate.

Geoffrey Rose, a pioneer of epidemiological medicine, was among the first to take this controversy head on. He posited that the features discovered in the Framingham Study and in other epidemiological studies were diseases in and of themselves, hitherto unrecognized in medicine, in which the defect is quantitative, not qualitative. Let’s reiterate: the thresholds for disease identified by longitudinal epidemiological studies are themselves diseases. In our study thus far, we have traced how the discourse of medicine socially shaped the notion of illness, from one of classification and genealogy, to one of interaction, etiology, and time-course, and now to one in the modern age where the presence of disease is purely statistical. In simplistic terms, one can think of the classificatory gaze and the anatomico-clinical gaze developed via Foucault in Issue 3 as different versions of “what does he have, and does he have it?” The statistical-inferential gaze defined by epidemiology and clinical trials zooms right past this – everyone has it; the only question is “how much of it does she have?”

Rose puts it as:

“this decision taking underlies the process we choose to call ‘diagnosis’, but what it really means is that we are diagnosing ‘a case for treatment’, and not a disease entity.”

The threshold becomes the diagnosis; the prescription for the diagnosis is chronic medicine. When the potentiality for death is made the disease, well, everyone’s got it.

Clinical Trials as Machinery for Generating Prescriptions

In Drugs for Life, Joseph Dumit, a medical anthropologist at UC Davis, presents an ethnomethodological and ethnographic study of how drugs have been made in the US since World War II. If you’re not familiar with these methods, it just means he followed pharmaceutical marketers and clinicians around for 15 years, observed how they think about, talk about, and perform their jobs, and analyzed how these activities influenced their industry and overall medical knowledge. By the end of his extensive study, Dumit declares the clinical trial hegemonic as the basis of modern medical research, and shows that the guidelines from those trials have been used to change how we define illness: from a set of symptoms doctors can treat to a set of risk factors doctors can manage. To aid in this chronic management of lifetime risk of death, a huge market for prescriptive therapy exploded into existence.

At the outset of a clinical trial, its designers choose a disease target, e.g. Alzheimer’s disease. Since companies eager for patents do not have the time to wait and see whether their drugs have an appreciable effect on disease progression ten years from now, they need a way to measure the therapeutic effect indirectly – and now – rather than ~10+ years down the line. So, rather than allow the treated and untreated disease to run its course across a randomized sample of patients, trial researchers identify and choose a “surrogate endpoint”: something that reasonably lets them claim their drug regimen demonstrates some degree of efficacy in preventing the progression of the disease.

For example, instead of trying to treat the symptoms of debilitating Stage 7 Alzheimer’s disease, for which there are few surviving and able-bodied participants who can weather a 2–3 year trial, the pharmaceutical marketers financing clinical trials will focus on earlier-stage patients who exhibit much milder symptoms. There’s no a priori reason to assume that treating early-stage biomarkers of Alzheimer’s will actually manifest in less severe progression of the disease, but this justification is never demanded up front from sponsoring companies by the FDA or any other governing body.

It’s important to stress here that there are considerably more people experiencing mild to moderate symptoms of Alzheimer’s than there are people who will ever progress to Stage 7, with or without treatment. However, treating all patients with Alzheimer’s as potentially Stage 7 justifies early intervention, and thus a new market for long-term drug intervention. You might think that the industry would go to greater lengths to obscure how its search for profit undermines health, but it’s actually quite brazen about it:

This figure, titled “Converting Patients to Volume”, is from Forecasting for the Pharmaceutical Industry: Models for New Product and In-Market Forecasting and How to Use Them, and breaks down into discrete categories the process of turning patients into revenue for a pharmaceutical company’s bottom line. Dumit examines the levers that manipulate each box in turn, but we’ll focus on just one – number needed to treat (NNT), which ladders up to the “Number of Patients” box above.

We’ve already seen how choosing a surrogate endpoint allows one to define a far larger market, because there will always be more people with a less severe version of an illness – but there’s always a tradeoff. Recall our discussion of risk factors above. The number needed to treat is defined as the number of people who must be treated in order to prevent one adverse outcome, usually a death or a diagnosis of disease progression.

Put simply, let’s say you, as a clinical trial designer, treat one group of 100 people who have an unknown likelihood of developing Alzheimer’s, and leave another group of 100 similar people untreated. You pick Stage 6 -> Stage 7 progression of Alzheimer’s as your surrogate endpoint. Ten patients in the untreated control group develop Stage 7 Alzheimer’s, compared to only 7 patients in the treatment group. Success! Your drug trial demonstrates a 30% relative drop in the risk of progression ((10 − 7) / 10), which is an absolute risk reduction of 3 per 100 patients (10/100 − 7/100 = 0.03). The number needed to treat to prevent exactly one adverse event is 1 / 0.03 ≈ 33.3 people. These findings indicate that your company would have to administer the pill to about 33 people with unknown disease progression in order to prevent one Alzheimer’s progression.
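
The arithmetic can be sketched in a few lines of Python. This is a minimal illustration using the hypothetical trial numbers from the example above, not any standard trial-statistics library:

```python
# Risk statistics for a hypothetical two-arm trial:
# 100 patients per arm, 10 control progressions vs. 7 treated.

def risk_stats(control_events, treated_events, group_size):
    """Return (absolute risk reduction, relative risk reduction, NNT)."""
    control_risk = control_events / group_size   # 10/100 = 0.10
    treated_risk = treated_events / group_size   # 7/100 = 0.07
    arr = control_risk - treated_risk            # absolute risk reduction
    rrr = arr / control_risk                     # relative risk reduction
    nnt = 1 / arr                                # number needed to treat
    return arr, rrr, nnt

arr, rrr, nnt = risk_stats(10, 7, 100)
print(f"ARR = {arr:.2f}, RRR = {rrr:.0%}, NNT = {nnt:.1f}")
# prints: ARR = 0.03, RRR = 30%, NNT = 33.3
```

The NNT is just the reciprocal of the absolute risk reduction, which is why it balloons as soon as the absolute difference between trial arms shrinks.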

This would all be well and good in a vacuum, except that there is obvious capitalist pressure on corporate and corporately-funded trial designers to maximize the NNT. Because drug profiteers can’t ethically give people diseases (yet), expanding the NNT is the only way to increase the total addressable market size for a disease.

So let’s say you, as a trial designer, like all of your colleagues, decide that you need to get a drug through a trial as quickly and cheaply as possible. You can’t afford to wait to see if you can reverse, revert, or inhibit progression of Alzheimer’s to Stage 7. So you pick a surrogate endpoint somewhere at Stage 2. You treat two groups of 100 people, same as above, but observe no effect in the treatment group. So you redo the study, theorizing that the effect is perhaps too small to observe. You conduct the same drug intervention again, but with bigger groups of 1,000 patients each. You now observe 10 progressions in the control group and 7 in the experimental group. Success! What’s your NNT? Well, since you slapped a 0 on the end of the group sizes, the absolute risk reduction is now 3 per 1,000 (0.003), so the NNT is 1 / 0.003 ≈ 333. That’s great! You just multiplied the number of patients you can market your drug to by a factor of ten!

Except that now, in material terms, you have to treat 300 more people than you did before in order to get the same result – one less observed event. By definition, since you now have to treat more people to get the same effect, you have a less effective drug. Let’s repeat that so it is clear:
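
The tradeoff can be made concrete with the same hypothetical numbers: diluting the trial population tenfold leaves the relative effect untouched but multiplies the NNT – and with it the marketable population – by ten:

```python
# NNT for the two hypothetical trials above: the same 10-vs-7 event counts,
# but with 100 patients per arm vs. 1,000 patients per arm.

def nnt(control_events, treated_events, group_size):
    absolute_risk_reduction = (control_events - treated_events) / group_size
    return 1 / absolute_risk_reduction

late_stage = nnt(10, 7, 100)     # Stage 6 endpoint
early_stage = nnt(10, 7, 1000)   # Stage 2 surrogate endpoint

print(f"late-stage trial:  treat ~{late_stage:.0f} patients per prevented event")
print(f"early-stage trial: treat ~{early_stage:.0f} patients per prevented event")
# prints: late-stage trial:  treat ~33 patients per prevented event
#         early-stage trial: treat ~333 patients per prevented event
```

A tenfold larger NNT means a tenfold less effective drug per patient treated, even though the trial “succeeded” in both cases.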

Pharmaceutical marketers, who control the design and output of clinical trials for the material gain of their companies, are under constant pressure to increase the total market size of a drug. Increasing the market size always carries with it the side effect of decreasing the effectiveness of the drug.

By choosing which trial to run (Stage 2 vs. Stage 6 Alzheimer’s, for example), the industry and its regulatory body increase the apparent illness that is treated, and arbitrarily expand the definition of surplus health generated by the study.

Quoting Dumit:

“This is the public secret of capital-driven health research. Clinical trials can and are used to maximize the size of treatment populations, not the efficacy of the drug. By designing trials with this end in mind, companies explicitly generate the largest surplus population of people taking the drug without being helped by it. Since the trials are the major evidence for the treatment indication, which effectively becomes the definition of the illness or risk, there is rarely any outside place from which to critique or even notice this bias.”

By utilizing the risk factor, deploying it in NNT calculations in order to define markets, and running clinical trials to move those markets, the pharma-industrial complex in concert with clinicians has completed the transition from genealogies of health (classification) and empirical practice (anatomy and clinical experience), to mathematical subjugation and control (statistical-inferential gaze). It’s not clear that medicine has ever really worked well, but it’s clear now that the introduction of capitalist bias in the discipline of fact discovery has completely warped our notion of care at the expense of understanding our actual physiology and the ways in which drug interventions actually affect the body.

As Dumit says, clinical trials are being designed in order to answer the question of “What is the largest, safest, and most profitable market that can be produced?” The question is not one of choices by designers, but one of structural pressure and violence. As a result, many clinicians, patients, and even pharmaceutical analysts can no longer imagine how to generate accurate clinical information about the body. We have been alienated from our subjective physiology by medical facts.

What would the world look like if Dumit is in fact correct about the structure that produces medicine and medical facts? What would we expect to see if these critiques are true?

Novartis just last week joined other pharmaceutical companies in ending its antibiotic research programs. Staph and MRSA infections have been identified by the CDC as a leading cause of death during hospitalization, if not the leading cause. Staph infection is not something that can be managed chronically, however. It’s episodic. The majority of people, if they are not prone to hospitalization, are not at a quantifiable risk of getting staph. Since they’re not, it makes no sense for antibiotic research programs to exist – and now they don’t!

You’d also probably expect to see more of this risk language proliferating amongst the general public, with more and more language addressing chronic management of symptoms. Maybe your direct-to-consumer advertising would frame all diseases as chronic, embedding the management of risk directly in the ads.

Perhaps they’d look a little something like this:

Figure credit: Drugs for Life, Joseph Dumit, 2012.

In the next issue, we’ll finally tie this all together with the history of SSRI drugs, the medicalization of depression, and the production of the first SSRIs. We’ll see how serotonin’s receptor and transmission targets were incorrectly identified as surrogate endpoints, how serotonin transmission was objectified into various categories against these endpoints, and how three generations of drugs were engineered for these endpoints with the machinery of thresholds derived from standardized questionnaires (the PHQ-2 and PHQ-9), so that larger and larger populations of people could be targeted for medicalized depression treatment.

  1. Dumit, Joseph. Drugs for Life: How Pharmaceutical Companies Define Our Health. Duke University Press, 2012.
  2. Greene JA. Prescribing by Numbers: Drugs and the Definition of Disease. Baltimore: Johns Hopkins University Press, 2007.
  3. Pickering, GW. High Blood Pressure. 2nd ed. J. & A. Churchill, Ltd, London; 1968.
  4. Rose, Geoffrey. Rose’s Strategy of Preventive Medicine. OUP Oxford, 2008.
  5. “Converting Patients to Volume,” Cook 2006.
  6. Bartfai, Tamas. Future of Drug Discovery: Who Decides Which Diseases to Treat? Elsevier Science & Technology Books, 2013.
  7. Bloomberg.com. (2018). Novartis Exits Antibiotics Research, Cuts 140 Jobs in Bay Area. [online] Available at: https://www.bloomberg.com/amp/news/articles/2018-07-11/novartis-exits-antibiotics-research-cuts-140-jobs-in-bay-area [Accessed 14 Jul. 2018].
  8. Klein E, Smith DL, Laxminarayan R. Hospitalizations and Deaths Caused by Methicillin-Resistant Staphylococcus aureus, United States, 1999–2005. Emerging Infectious Diseases. 2007;13(12):1840-1846. doi:10.3201/eid1312.070629.

Against Utopia Issue #3: The Epistemology of Depression Part 3 – How We “See” Depression

Hello, and welcome to Against Utopia, a newsletter that lifts the veil of authoritarian utopianism in science, technology, politics, culture, and medicine, and explores anti-authoritarian alternatives. This is Issue Three, published May 30th, 2018. At the end of the last issue, I said we’d talk about “The Medical Epistemology of Drug Development for Antidepressants”, which I thought would dovetail nicely with what we learned about depression and serotonin in mammals and humans. However, in order to understand how medical knowledge works in drug development, we first need to explore how the Western medical apparatus sees – specifically, how medical perception was shaped by the sociopolitical climate of the West from the 1800s through the 1950s. So for now, I’m going to take a detour through how this warped perception continues to shape how we think about the body, disease, death, and medicine today, in 2018.

To recap, in Issue #1 we learned what being “against utopia” means; how modern medicine is utopian in vision and execution; how utopianism is authoritarian in that it values some knowledges at the expense of marginalizing or erasing others; and how utopian visions structure sociological and political constructs that inevitably lead to authoritarian outcomes.

In Issue #2, we applied this utopian vision analysis to serotonin and conventional models of depression. We showed how biological knowledge was excluded in this study of depression by heroic simplification. As a result, the mammalian and insect evidence demonstrating the physiology of how serotonin actually works has been neglected.

So in Issue #3 I want to run with this idea of “vision,” but more literally. I want to explore the dilemma of perception in science. I want to help you, reader, see that there are multiple ways of seeing what we call Depression.

Put simply, depression is more than a disease. Culture surrounds the depressive person and gives meaning to the suffering, meaning that changes with context and history. How depression is diagnosed and treated in any society, too, is dependent on cultural conditions. Doctors are not free to practice medicine however they wish, but must at least attempt to abide by best practices and standards that they themselves do not establish – standards established by a common language, discourse, and ways of perceiving. For these reasons, it’s critical, as we think about Depression, to examine it in its proper context. How a doctor uses the various human faculties to procure knowledge, generate models, and make sense of what’s going on in a healthy body, a pathological body, and a dead body, and how these all relate to each other, is not objective – it all occurs in a culture that shapes perception, available choices, and actions at every level of interaction (interpersonal, communal, nation-state).

Let’s get into it.

In The Birth of the Clinic (1963), Michel Foucault follows the reorganization of medical knowledge through the 18th century, resulting in the institutionalization of the modern clinic of post-revolutionary France. Foucault centers his analysis on what he calls the “medical gaze.” The “medical gaze” simultaneously describes three phenomena: 1) the material structure that makes possible a physical analysis of the body via the sensory faculties, 2) the epistemic structure that enables physicians to share best practices and a collective history of knowledge of various pathologies, and 3) a taxonomy regarding normative health that enables doctors to diagnose, classify, and define illnesses. We’ll call these three historical components of the medical gaze the classificatory gaze (genealogy), the nosological gaze (diagnosis, histories), and the anatomico-clinical gaze (experience derived from material structure).

The classificatory gaze describes how Western medicine worked before the end of the 18th century. Inspired by animal taxonomy, it was a theory of medicine that sought to group diseases hierarchically within families, genera, and species. Causal and temporal evolutions were normalized to the present only, and a given disease was perceived as a state without depth in time or space. A given disease was distinguished by its key symptoms and features, and it was believed to be closed, systemic, and self-contained. Its evolution in time filled in gaps in knowledge, but even this was problematic, as the time evolution of disease did not matter as much as holding the features static for classification. In fact, the patient’s own experience would only hamper classification, as every individual’s signs of progression muddled it.

For example, phthisis, what we now just call tuberculosis, was understood as a vast space of wasting diseases accompanied by fever and inflammation. Phthises were grouped into phthises of the eye, the kidney, and more, and the specific disease of phthisis became classified as one of wasting, not of inflammatory lung disease caused by an infection of Mycobacterium tuberculosis. Physicians of this period merely classified diseases by their common symptoms and did not have a theory of causation until much later. To call tuberculosis a bacterial disease was not possible, no matter what empirical methodology one chose, simply because it could not be conceptualized in any other way; its species classification demanded that it be nested within the species of phthisis. This becomes clearer when you realize that analogy was key to the classificatory gaze. Practicing doctors of the 18th century compared numerous cases and constructed ontologies of disease that would give them complete pictures of disease, so that physicians could structure and communicate about the world of disease.

Foucault believed that language forms the basis of our knowledge. Language is what allows us to articulate experience, but it also constrains how we can express ourselves. In this way, language shapes our knowledge, and what can be named by language implicitly obscures other possibilities of interpretation. Thus the shape of discourse in turn shapes medical knowledge, and phthisis offers a perfect example of this. To decry phthisis as unscientific, and to correctly point out that the tuberculosis bacterium is the ultimate cause of consumption, would not have done much to alter the actual practice of medicine in the 18th century. Doctors of the 18th century would not have known what to do with the empirical reality of bacterial infection. They would not have had an appreciation for how the body’s immune system processes bacterial infection. They would not have had the language to talk about it. The classificatory gaze itself, the way we see, shapes the discourse of tuberculosis, and also stands in the way of other possibilities.

The nosological gaze was another perceptual mechanism of the early 19th century. It was already understood that the primary space of disease was its locus within the genealogy – its classification. Its secondary spatialization, how it was embodied in the organs and tissues, was underutilized because it lacked coherence. It was tied together only by clinical experience, and clinical experience was seen as faulty, prone to error, and ahistorical (impossible to tie together). Early 19th-century nosology defined clinical experience further with symptoms and signs, and this was when doctors began to make progress beyond classification.

Symptoms were understood as visible presentations of a pathological essence, regardless of the individual patient. They were universally accessible. Signs implied the fate of the individual patient, and were tied to the course of the disease and its outcomes. As the clinic began to take hold as the primary way to teach medical histories, essences of disease were left behind: the direct experience of the clinician took precedence over any classification, and temporal phenomena began to be integrated more tightly into the practice of medicine as a result of direct experience. The primary spatialization, as we said above, was the disease’s place in the family; the body and organs embodied the disease, but merely as a secondary spatialization outside of the classification. With the nosological gaze born of the clinic, this second spatialization took more and more precedence, and the space of thought moved from the imaginary of classifications to the embodiment of disease in a person.

There was a problem here of course – the path of disease is usually uncertain, non-linear and unpredictable. How did we know that a cough with fever with no sputum was not tuberculosis, when a cough with blood would occasionally clear up with no intervention? What was to be done to address these cases?

In the early 19th century, the mathematician Pierre-Simon Laplace presented a new methodology for resolving this problem: statistics. The grand project of taxonomizing disease was abandoned for the application of empirical measure – the probability of signs, symptoms, and ultimately disease. The false binary of health and illness was dissolved. How would statistics come into play?

One way to study disease would be to cut a person open and gaze directly at the lesions in their body to monitor its course. This was of course unethical on the living, but dissecting and examining corpses was not. By studying the dead, 19th-century researchers began to be able to define health in opposition to the pathological states observed among the sick and the dead. And because health came to be defined this broadly, it was at this point, Foucault observed, that the discrepancy between genealogies (the classificatory gaze) and experience (the nosological gaze) could begin to be bridged in the discourse of doctors. Probability gave doctors a gradient along which to walk from the lesions and seat of disease observed in a dead body to the pathological states, relative to normative health, observed in a still-living person. One could look at the lesions left behind in the liver of a chronic alcoholic with cirrhosis, and surmise that a sore liver coupled with yellowed eyes and skin indicated a particular progression and probability of morbidity, versus someone with a blocked bile duct who was otherwise fine. A gradation could be formed in the discourse that helped medicine finally migrate from a static ontology of classifiable diseases to the lived experience and probabilistic progression of observed disease. This set the stage for the anatomico-clinical gaze.
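The probabilistic "gradient" described above can be sketched as a toy Bayesian update. All of the numbers here are hypothetical, chosen only to illustrate how observing a sign (jaundice) shifts the probability of a disease (cirrhosis) away from its baseline:

```python
def posterior(prior, sensitivity, false_positive_rate):
    """Bayes' theorem: P(disease | sign), given P(disease),
    P(sign | disease), and P(sign | no disease)."""
    evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / evidence

# Hypothetical numbers: 1% baseline prevalence of cirrhosis, jaundice
# present in 90% of cirrhosis cases and in 5% of non-cases.
p = posterior(prior=0.01, sensitivity=0.90, false_positive_rate=0.05)
print(f"P(cirrhosis | jaundice) = {p:.2f}")  # the sign raises 1% to ~15%
```

The point is not the particular numbers but the move itself: the sign does not classify, it re-weights, which is exactly the gradation the nosological gaze made available.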

As 19th-century medicine developed, the anatomico-clinical gaze began to take shape. Surely there had been many people before then who cut a person open to see what was inside, but it was not until the 19th century that researchers used systematic knowledge derived from the scientific examination of thousands of dead bodies to bridge this gap and define healthy states of being. The theory of a medicine of species had become increasingly inadequate: theories and classifications were not enough, and practice did not match theory in application. The two, theory and practice, were linked by the experienced anatomico-clinical gaze. In the language of probability, this gaze tried to bridge the uncertainty of what was hidden beneath the surface – observable only via symptoms and signs – with the total certainty of death as examined by autopsy. A physiologically based nosology becomes possible with this linkage, and the temporality of symptoms and disease can be mapped onto the spatiality of the tissues, with probability used to fill in the gaps in its course.

That brings us to the 20th century and how medical researchers “do” science today. If statistics helped researchers resolve earlier empirical error, by the 20th century they had doubled down on statistical objectification as an ideological commitment. Contemporary medical research has developed a jargon, and a concomitant gaze, deeply entrenched in the symbol system of mathematics and probability. I’m going to call this the statistical-inferential medical gaze.

You’ve probably experienced the statistical-inferential medical gaze if you’ve ever asked your doctor for an opinion on a potential operation. If it is an iffy situation – likely to result in bad outcomes, or even better, uncertain ones – then in between hems and haws your doctor will likely say something along the lines of:

“50% of the time the operation is a success, but that should be weighed against the costs of surgery, financial and otherwise.”

“These types of cysts generally have a very low likelihood of becoming malignant, so not operating is probably the best option – but they can get ugly, so if cosmetic reasons matter to you, an operation is a valid choice.”

“1 in 5 patients experience improved symptoms within 90 days, 1 in 20 experience suicidal thoughts, and 2 out of 5 do not respond. It has a low likelihood of working for you because of [reason], so let’s try [alternative].”

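Statements like the last one are "natural frequency" framings of trial statistics. A minimal sketch of what those quoted frequencies imply for a hypothetical cohort of 100 patients (the categories are not mutually exclusive, so they need not sum to 100):

```python
cohort = 100

# The frequencies quoted by the hypothetical doctor above
improved    = cohort * 1 / 5    # 1 in 5 improve within 90 days
suicidal    = cohort * 1 / 20   # 1 in 20 experience suicidal thoughts
no_response = cohort * 2 / 5    # 2 in 5 do not respond at all

print(improved, suicidal, no_response)  # 20.0 5.0 40.0
```

Note what the gaze does here: the patient in front of the doctor is seen as a draw from a population, and the recommendation follows from the population's arithmetic, not from the individual's particulars.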
Now we have a firm understanding of how contemporary doctors “see” disease and treatment through the hegemonic lenswork of probability. From here, we can now examine how the modern language of the doctor shapes their knowledge of our bodies. We can use the statistical-inferential medical gaze to construct our own history of what normal and pathological mental states are, how depression emerges and is quantified in this gaze, and what experiences these power structures exclude, at great cost to us. We’ll cover that next by looking at the statistical-inferential medical gaze in the mid 20th century, the classification of depression, and the development of the first SSRIs in the pharmaceutical-industrial complex model. Thanks for reading.

– Foucault, Michel. The Birth of the Clinic: An Archaeology of Medical Perception. London: Routledge, 2010. Print.

– Montgomery, K. How Doctors Think. Oxford: Oxford UP, 2006. Print.

– Dumit, Joseph. Drugs for Life: How Pharmaceutical Companies Define Our Health. Duke University Press, 2012.

Against Utopia Issue #2: The Epistemology of Depression Part 2 – Grasshoppers, Squirrels, and Bears, oh my!

Hello, and welcome to Against Utopia, a newsletter that lifts the veil of authoritarian utopianism in science, technology, politics, culture, and medicine, and explores anti-authoritarian alternatives. This is Issue Two, published March 31st, 2018. At the end of the last issue, we finished by examining the complexity of serotonin physiology, its simplification by modern pharmaceutical approaches, and its idiosyncrasy across various organisms – specifically squirrels, bears, and grasshoppers. Today, I want to lead you through a deeper examination of serotonin physiology in these three organisms, and talk about what it means for us as humans, and what possibilities it reveals in our thinking about depression.

Every winter, if you live in a part of the world with four seasons (or if you don’t live in California), you most likely experience seasonal fluctuations in serotonin (amongst many other, interconnected, interacting hormones). Here in Oregon, it is a common occurrence to hear people talk about their difficulty getting out of bed once the shorter days hit, and how the lack of sun is making them more tired, sluggish, or sometimes, depressed. It’s so common that it has a medicalized name – seasonal affective disorder. <https://www.mayoclinic.org/diseases-conditions/seasonal-affective-disorder/symptoms-causes/syc-20364651>

In addition, humans have been known to accumulate serotonin’s precursor, tryptophan, in their white hairs, especially during winter, when some people begin to experience seasonal greying of hair. This seasonal change in hair color is indicative of changes in core metabolism and the hormones associated with it. Specifically, the onset of a true winter (shorter days, colder temperatures, etc.) slows the pathways that convert tryptophan to niacin, diverting much of the tryptophan to serotonin and building up tryptophan in the scalp.

Serotonin, furthermore, has been shown to slow mammalian and human metabolism by shifting energy production toward glycolysis (the conversion of sugar to pyruvate and lactic acid) and away from oxidative metabolism. There are a multitude of cascading effects associated with this serotonin-driven increase in glycolysis that we don’t have the scope to cover here, but some of them are what you would expect when you read “slow metabolism”: sluggishness, weight gain, lowered insulin sensitivity, and increased fat storage. Some of these frequently appear as side effects of common antidepressant drugs, specifically selective serotonin reuptake inhibitors (SSRIs).
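To give a sense of scale for "slow metabolism": by standard textbook figures, glycolysis alone nets about 2 ATP per glucose molecule, while full oxidative metabolism yields roughly 30–32. A back-of-the-envelope comparison (textbook round numbers, not measurements from the studies cited in this issue):

```python
ATP_GLYCOLYSIS = 2   # net ATP per glucose from glycolysis alone (textbook value)
ATP_OXIDATIVE = 32   # approximate ATP per glucose via full oxidative metabolism

fold_difference = ATP_OXIDATIVE / ATP_GLYCOLYSIS
print(f"Oxidative metabolism extracts ~{fold_difference:.0f}x more ATP per glucose")
```

A shift toward glycolysis therefore means extracting an order of magnitude less usable energy from the same fuel, which is why the downstream effects read like a dimmer switch on the whole organism.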

The red-cheeked ground squirrel is common in Central Asia. When winter comes, the enzyme system that generates serotonin – tryptophan hydroxylase – kicks into high gear. Again, a common pattern occurs: resources get scarce, days get shorter, nights get longer, temperatures drop, and animals need an adaptation to help them bear the lack of energy-producing resources in a once-plentiful environment. In squirrels in or nearing hibernation, serotonin production is approximately 50% higher than in squirrels not yet entering the early stages of torpor. This activation of serotonin-producing systems occurs before the animals start to get cold, slow down, and enter hibernation, indicating that serotonin plays a key role in producing torpor, and that this resource scarcity is anticipated by the animal’s neurobiology. The increase in serotonin produces an over 50-degree drop in body temperature – from 98.6 degrees to 41 degrees F! This allows the animal to enter a state of somewhat restless torpor while expending very little energy. It’s not as restful as sleep, which takes energy to maintain, but it allows the animal to make it through the winter. When spring rolls back around, the system that generates serotonin grinds to a halt, serotonin levels in the brain drop rapidly, and the squirrels exit hibernation; their insulin sensitivity reverts to normal, and they begin to put on weight again.
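For readers who think in Celsius, a quick conversion puts that drop in perspective (the conversion formula is exact; the body temperatures are the ones quoted above):

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

active_c = f_to_c(98.6)  # 37.0 C, normal mammalian body temperature
torpor_c = f_to_c(41.0)  # 5.0 C, deep-torpor body temperature
print(f"drop: {98.6 - 41.0:.1f} F = {active_c - torpor_c:.1f} C")
```

That is a fall from 37 °C to 5 °C, a 32 °C plunge to just above refrigerator temperature, sustained for months.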

In black and brown bears, which also hibernate during winter, insulin sensitivity and hibernation are mediated by serotonin. As October approaches, levels of a particular gene product, PTEN, go up. PTEN indirectly promotes serotonin synthesis, and serotonin synthesis renders the bears almost literally diabetic. By the time hibernation kicks off, with serotonin at the appropriate levels as in the squirrels above, the bears have become fully insulin resistant and mirror what is known in humans as type 2 diabetes. High fat levels in their blood sustain them through winter, until serotonin levels drop off and hibernation ends, just as in the red-cheeked squirrel. Bears are even more interesting in that their body temperature remains mostly the same, dropping only 2–4 degrees Celsius, making their experience somewhat more comparable to humans. Again, the effects of increased serotonin in bears are impaired insulin sensitivity, high fat metabolism, lowered body temperature, and increased torpor and sluggishness, much of which mirrors the side effects of SSRIs (which increase serotonin to treat depression) in humans. This increase in serotonin is precipitated by a real or perceived lack of resources in the environment, which may also explain why the mechanism is so heavily conserved across species – but that bit we can save for another time.

Lastly, locusts get a really bad rap. From the Old Testament to modern-day farmers in Australia, they wreak havoc on crops and food supplies in a somewhat predictable seasonal manner, and are universally reviled for it. But did you know that locusts are actually just grasshoppers that have morphed into a more gregarious state? I didn’t until recently. A group of researchers has shown that when grasshoppers are in close proximity to each other – close enough to scratch each other, basically – the contact can trigger metamorphosis into the locust state. The close proximity and scratching trigger the production of serotonin in the grasshoppers, and the scratching is interpreted as a signal that the environment’s dwindling resources cannot support the population. Better to transform into a voracious horde of locusts and feast on what you can while the getting is good! Again, the close clustering comes with a neurobiological, experiential interpretation that comports with a perceived lack of resources. The animal’s metabolism is altered by an upsurge in serotonin, and precautionary steps are taken to make sure it can survive the anticipated scarcity.

This brings us back full circle to humans. If this observed relationship between high serotonin, torpor and hibernation, low metabolism, and environmental scarcity is conserved in bears, squirrels, and locusts (even hamsters, which I didn’t talk about), what does that say about our approach to serotonin drugs for depression in humans? The first-line prescription drugs for treating depression – let’s be clear – increase the concentration of serotonin in the brain areas where they act. How plausible is it, really, for serotonin physiology to operate in a directly opposite manner in humans than it does in most other animals studied? Could it be that the heroic simplification of the serotonin system, and its application to the experience of depression, is actually completely backward? To understand how we got here, we have to look back at the history of the development of serotonin-influencing drugs, and gain a deeper understanding of medical epistemology and how drugs are developed from it.

Which is exactly what we’ll go into in Issue #3: The Medical Epistemology of Drug Development for Antidepressants. Thanks for reading.

Thanks for subscribing! If you would like to share this newsletter with anyone, you can suggest they sign up here <http://tinyletter.com/againstutopia>.

Notes

– Silva SR, Zaytseva YY, Jackson LN, Lee EY, Weiss HL, Bowen KA, Townsend CM Jr, Evers BM. “The Effect of PTEN on Serotonin Synthesis and Secretion from the Carcinoid Cell Line BON.” Anticancer Research.

– Anastasio NC, Gilbertson SR, Bubar MJ, Agarkov A, Stutz SJ, Jeng Y, Bremer NM, Smith TD, Fox RG, Swinford SE, Seitz PK, Charendoff MN, Craft JW Jr, Laezza FM, Watson CS, Briggs JM, Cunningham KA. “Peptide Inhibitors Disrupt the Serotonin 5-HT2C Receptor Interaction with Phosphatase and Tensin Homolog to Allosterically Modulate Cellular Signaling and Behavior.” Journal of Neuroscience.

– Popova NK, Voronova IP, Kulikov AV. “Involvement of brain tryptophan hydroxylase in the mechanism of hibernation.” Pharmacol Biochem Behav. 1993 Sep;46(1):9-13.

– Popova NK, Voitenko NN. “Brain serotonin metabolism in hibernation.” Pharmacol Biochem Behav. 1981 Jun;14(6):773-7.

– Staples JF, Brown JC. “Mitochondrial metabolism in hibernation and daily torpor: a review.” J Comp Physiol B.

– Rigano KS, Gehring JL, Evans Hutzenbiler BD, Chen AV, Nelson OL, Vella CA, Robbins CT, Jansen HT. “Life in the fat lane: seasonal regulation of insulin sensitivity, food intake, and adipose biology in brown bears.” J Comp Physiol B.

– Biasiolo M, Bertazzo A, Costa CV, Allegri G. “Tryptophan in human hair: correlation with pigmentation.” Il Farmaco.

– Anstey ML, Rogers SM, Ott SR, Burrows M, Simpson SJ. “Serotonin mediates behavioral gregarization underlying swarm formation in desert locusts.” Science.

– Canguilhem B, Miro JL, Kempf E, Schmitt P. “Does serotonin play a role in entrance into hibernation?” Am J Physiol.

– Naumenko VS, Tkachev SE, Kulikov AV, Semenova TP, Amerhanov ZG, Smirnova NP, Popova NK. “The brain 5-HT1A receptor gene expression in hibernation.” Genes, Brain, and Behavior.

– Palumbo PJ. “Insulin and Glucagon Responses in the Hibernating Black Bear.” Int. Conf. Bear Res. and Manage.

Against Utopia Issue #1: The Epistemology of Depression Part 1

Hello, and welcome to Against Utopia, a newsletter that lifts the veil of authoritarian utopianism in science, technology, politics, culture, and medicine, and explores anti-authoritarian alternatives. This is Issue One, published March 10th, 2018.

Depression has been in the news lately, specifically in the mainstream scientific backlash against Johann Hari’s book, Lost Connections.

In Lost Connections (which I haven’t read yet), Hari purports to have found the real causes of depression and real solutions, and furthermore, they have nothing to do with the “chemical imbalance” theories you may have heard of (e.g. serotonin, “the happy molecule”). Before I even read the book I want to get this out of the way – I believe him.

“How can you so hastily agree with something you haven’t read!?” you might think. It’s easy, allow me to explain.

The key part of my blind agreement rests on unpacking the term “chemical imbalance”. What does it mean? Presumably a number of things, but currently, in 2018, when someone says “chemical imbalance” in reference to depression, they usually mean an “imbalance” of serotonin. The dogma of serotonin’s involvement in the manifestation of depression is popularly accepted, of course – but why? That requires further investigation.

What do we mean when we say “chemical imbalance”? Presumably, our scientists and doctors have done the leg work to define what an acceptable level of serotonin is in the brain of normatively well-functioning people who are not depressed or anxious. When we invoke the term, we’re referring to serotonin being either too low or too high, and we’re also referring to the resulting lived experience of the person who now has low or high serotonin. Except that’s not really true at all. We’ve never really measured the levels of serotonin in the brain and mapped them to specific depressive phenotypes, and we certainly haven’t mapped them to the much more fluid, and consequently harder to define, lived subjective experience of depression.

The problem is that this attempt at mapping specific levels of serotonin in the brain to subjective lived experience falls prey to utopian simplification. This view neglects the immense complexity of serotonin in the brain, gut, lungs, and circulatory system. If 95% of the body’s serotonin is not in the brain, if platelets carry it throughout the body, if serotonin is intimately involved in blood clotting, how can it be implicated as the primary culprit in depression? Is there more complexity here than meets the eye, that we might be ignoring?

If we’re ignoring said complexity, then the commonly accepted dogma is in actuality a heroic simplification of what is going on in the depressed brain – at best a utopian view of brain chemistry that does little to illuminate our understanding of depression. And the numbers on antidepressant efficacy would back that up.

What does utopia have to do with this? The term was coined in the early 1500s by Sir Thomas More to describe a fictional, perfectly organized, rational society off the coast of South America. The book was published about 150 years before the Enlightenment, but its values were deeply embedded in what was to come. The Enlightenment is known for the proliferation of liberal values such as liberty, tolerance, and fraternity, and for reason as a primary source of authority separate from God and the Church.

In addition to these, the most important thing to realize about the Enlightenment is that it gave us reflexive rationalist social engineering (organizational hierarchy justified by reason rather than God) and scientism (the application of the scientific method to social problems its knowledge-extraction and justification methods are ill-equipped to handle). These two phenomena, working in concert, made concrete the ambition to understand and perfect Man™ – with enough categorical reductionism, skeptical empiricism, and time, we could figure anything out with science!

Except that real-world maps of complex phenomena are exactly that – just maps. Some territories are more mappable than others (DNA!), and some we are only just discovering how to map (metabolism, proteomics, etc.). In the realm of mental health, with a methodology for the perfectibility of Man™ within sight, and with the actual delivered progress of the scientific method, at the beginning of the 20th century it must have seemed that mental health would surely be cracked.


The complex interplay between genetics, metabolism, psychology, and physiology doesn’t care about our mental models, or about our attempts to reduce depression to a single locus that can then be perfected with empiricism. This complexity will rear its head whether we abstract it away or not. The efficacy of the drugs for depression speaks to our understanding of depression well enough – and that is precisely why I believe Johann Hari without having read his book.

The way that we know the science of depression, otherwise known as the epistemology of depression, is through this rationalist heroic simplification of mental health, and in order to form a more complete picture within which we might take action, we need to understand where these ideas came from, and how they came to be.

What is a serotonin “receptor”?

How did the normative definitions of sound mental health come to be, and why are they based on scant clinical trials of small sample size from the 50s?

How did two alkaloids, reserpine and tianeptine, make us think down is up and right is left? What resultant jumble of word salad (atypical antidepressants, anyone?) have we invented to make sense of them and of depression?

What do crickets, bears, and squirrels have to do with this?

In next week’s issue, we’ll begin to examine the history of these concepts, who and what put them in place, how they culminate in our modern shared understanding of what depression is, and what we can do with this information as a result.