Tuesday, September 11, 2018

How to Win the Argument with Milton Friedman

In 1970, in his famous essay "The Social Responsibility of Business is to Increase its Profits," Milton Friedman railed against any corporate attempt to promote "desirable social ends," which he argued were "highly subversive to the capitalist system."

Ever since, folks who have gotten together in gatherings like last week's Inclusive Capitalism Conference have argued that Friedman is wrong to resolve the trade-off between shareholders and the rest of society so wholly in favor of shareholders, and that greater balance is required in that trade-off.

Yet the fact that they make that argument is precisely why Friedman has won the day for going on half a century, a spectacular success for a social sciences argument. Friedman has won the way a great debater wins — by cleverly framing the terms of the debate, not by brilliantly arguing the logic of the debate once it has been framed.

Because Friedman was so inflammatory in his call for a 100% versus 0% handling of the trade-off, his entire opposition for the entire time since 1970 has focused on making arguments for a number lower than 100% for shareholders. In doing so, they implicitly — and I would argue, fully — accepted Friedman's premise that there is a fundamental trade-off between the interests of shareholders on the one hand and other societal actors such as customers, employees and communities on the other hand.

Ever since, the Friedmanite defense has been to force the opposition to prove that making a trade-off to any extent whatsoever against shareholders won't seriously damage capitalism. As a result Friedman is innocent until proven guilty and the opposition guilty until proven innocent. That is why we are exactly where we are nearly a half century later.

Had the opposition been cleverer, it would have attacked the premise from the very beginning by asking: what is the proof that there is a trade-off at all? Had they done so, they would have found out that Friedman had not a shred of proof that a trade-off existed prior to 1970. And they would have found out that there still isn't a single shred of empirical evidence that 100% focus on shareholder value to the exclusion of other societal factors actually produces measurably higher value for shareholders.

Friedman, of course, didn't feel the need to assemble any empirical evidence to support his point. An economist falls apart and turns into a blubbing puddle on the floor if you take away the concept of trade-offs because they all started in the same place: the societal trade-off between guns and butter. Trade-offs are a sacred article of faith for economists. You simply can't be an economist if you don't consider trade-offs to be a central feature of your worldview.

I am an economist but the training apparently didn't stick entirely for me. I think I read too much Aristotle along the way and to me he just seems smarter than anyone else I have ever read. What he argued about happiness has more direct relevance to shareholder value maximization than anything an economist has ever written. He maintained that happiness does not derive from its pursuit but rather is the inevitable consequence of leading a virtuous life.

The same applies to corporations. If they make it their purpose to maximize shareholder value, shareholders are likely to suffer because that cravenness turns off customers, employees, and the world in general. If they make it their purpose to serve customers brilliantly, be a fabulous place to work, and contribute meaningfully to the communities in which they operate, chances are their shareholders will be very happy.

That is my premise and I am sticking to it until someone can provide a shred of evidence that the opposite has any validity whatsoever.




Sunday, July 29, 2018

Almost 80% of US workers live from paycheck to paycheck. Here's why | Robert Reich

America doesn't have a jobs crisis. It has a 'good jobs' crisis – where too much employment is insecure, and poorly paid

Workers protest for more money outside a McDonald's in Miami, Florida.

The official rate of unemployment in America has plunged to a remarkably low 3.8%. The Federal Reserve forecasts that the unemployment rate will reach 3.5% by the end of the year.

But the official rate hides more troubling realities: legions of college grads overqualified for their jobs, a growing number of contract workers with no job security, and an army of part-time workers desperate for full-time jobs. Almost 80% of Americans say they live from paycheck to paycheck, many not knowing how big their next one will be.

Blanketing all of this are stagnant wages and vanishing job benefits. The typical American worker now earns around $44,500 a year, not much more than what the typical worker earned 40 years ago, adjusted for inflation. Although the US economy continues to grow, most of the gains have been going to a relatively few top executives of large companies, financiers, and inventors and owners of digital devices.

America doesn't have a jobs crisis. It has a good jobs crisis.

When Republicans delivered their $1.5tn tax cut last December, they predicted a big wage boost for American workers. Forget it. Wages actually dropped in the second quarter of this year.

Not even the current low rate of unemployment is forcing employers to raise wages. Contrast this with the late 1990s, the last time unemployment dipped close to where it is today, when the portion of national income going into wages was 3 percentage points higher than it is today.

What's going on? Simply put, the vast majority of American workers have lost just about all their bargaining power. The erosion of that bargaining power is one of the biggest economic stories of the past four decades, yet it's less about supply and demand than about institutions and politics.

Two fundamental forces have changed the structure of the US economy, directly altering the balance of power between business and labor. The first is the increasing difficulty for workers of joining together in trade unions. The second is the growing ease with which corporations can join together in oligopolies or form monopolies.

By the mid-1950s more than a third of all private-sector workers in the United States were unionized. In subsequent decades public employees became organized, too. Employers were required by law not just to permit unions but to negotiate in good faith with them. This gave workers significant power to demand better wages, hours, benefits, and working conditions. (Agreements in unionized industries set the benchmarks for the non-unionized).

Yet starting in the 1980s and with increasing ferocity since then, private-sector employers have fought against unions. Ronald Reagan's decision to fire the nation's air-traffic controllers, who went on an illegal strike, signaled to private-sector employers that fighting unions was legitimate. A wave of hostile takeovers pushed employers to do whatever was necessary to maximize shareholder returns. Together, they ushered in an era of union-busting.

Employers have been firing workers who attempt to organize, threatening to relocate to more "business friendly" states if companies unionize, mounting campaigns against union votes, and summoning replacement workers when unionized workers strike. Employer groups have lobbied states to enact more so-called "right-to-work" laws that bar unions from requiring dues from workers they represent. A recent supreme court opinion delivered by the court's five Republican appointees has extended the principle of "right-to-work" to public employees.

Today, fewer than 7% of private-sector workers are unionized, and public-employee unions are in grave jeopardy, not least because of the supreme court ruling. The declining share of total US income going to the middle since the late 1960s – defined as 50% above and 50% below the median – correlates directly with that decline in unionization. (See chart below).

[Chart: share of total US income going to the middle, alongside the rate of unionization]

Perhaps even more significantly, the share of total income going to the richest 10 percent of Americans over the last century is almost exactly inversely related to the share of the nation's workers who are unionized. (See chart below). When it comes to dividing up the pie, most American workers today have little or no say. The pie is growing but they're getting only the crumbs.

[Chart: share of total income going to the richest 10% of Americans, alongside the share of workers who are unionized]

Over the same period, antitrust enforcement has gone into remission. The US government has essentially given a green light to companies seeking to gain monopoly power over digital platforms and networks (Google, Apple, Amazon, Facebook); wanting to merge into giant oligopolies (pharmaceuticals, health insurers, airlines, seed producers, food processors, military contractors, Wall Street banks, internet service providers); or intent on creating local monopolies (food distributors, waste disposal companies, hospitals).

This means workers are spending more on such goods and services than they would were these markets more competitive. It's exactly as if their paychecks were cut. Concentrated economic power has also given corporations more ability to hold down wages, because workers have less choice of whom to work for. And it has let companies impose on workers provisions that further weaken their bargaining power, such as anti-poaching and mandatory arbitration agreements.

This great shift in bargaining power, from workers to corporations, has pushed a larger portion of national income into profits and a lower portion into wages than at any time since the second world war. In recent years, most of those profits have gone into higher executive pay and higher share prices rather than into new investment or worker pay. Add to this the fact that the richest 10% of Americans own about 80% of all shares of stock (the top 1% owns about 40%), and you get a broader picture of how and why inequality has widened so dramatically.

Another consequence: corporations and wealthy individuals have had more money to pour into political campaigns and lobbying, while labor unions have had far less. In 1978, for example, congressional campaign contributions by labor Political Action Committees were on par with corporate PAC contributions. But since 1980, corporate PAC giving has grown at a much faster clip, and today the gulf is huge.

It is no coincidence that all three branches of the federal government, as well as most state governments, have become more "business-friendly" and less "worker-friendly" than at any time since the 1920s. As I've noted, Congress recently slashed the corporate tax rate from 35% to 21%. Meanwhile, John Roberts' supreme court has more often sided with business interests in cases involving labor, the environment, or consumers than has any supreme court since the mid-1930s. Over the past year it not only ruled against public employee unions but also decided that workers cannot join together in class action suits when their employment contract calls for mandatory arbitration. The federal minimum wage has not been increased since 2009, and is now about where it was in 1950 when adjusted for inflation. Trump's labor department is busily repealing many rules and regulations designed to protect workers.

The combination of high corporate profits and growing corporate political power has created a vicious cycle: higher profits have generated more political influence, which has altered the rules of the game through legislative, congressional, and judicial action – enabling corporations to extract even more profit. The biggest losers, from whom most profits have been extracted, have been average workers.

America's shift from farm to factory was accompanied by decades of bloody labor conflict.

The shift from factory to office and other sedentary jobs created other social upheaval. The more recent shift in bargaining power from workers to large corporations – and, consequently, the dramatic widening of inequalities of income, wealth, and political power – has had a more unfortunate and, I fear, more lasting consequence: an angry working class vulnerable to demagogues peddling authoritarianism, racism, and xenophobia.

  • Robert Reich is chancellor's professor of public policy at the University of California, Berkeley, and was secretary of labour in the Clinton administration. His latest book, The Common Good, was published earlier this year



Sunday, June 3, 2018

anthropomics: Who wants Charles Murray to speak, and why?

Some years ago, I wrote a broad critique of The Bell Curve, that old Social Darwinist psychology tome from 1994 by the hereditarian psychologist Richard Herrnstein and conservative political theorist Charles Murray. It was in a very nice collection edited by Besteman and Gusterson (who ought to be a law firm, but are actually cultural anthropologists), called Why America's Top Pundits are Wrong.

A few years later, Paul Erickson and Liam Murphy included it in their reader on the history of anthropological theory. In fact, the third edition of that reader (2010) actually began with Marx and ended with Marks. That was pretty cool. The fourth edition (2013) also started with Marx and included Marks, but had a couple more readings after Marks.

They kicked me out of the fifth edition (2016). No hard feelings, though, because I'm cited in their companion volume, A History of Anthropological Theory. But I know why they did it, too. My essay was very dated. It was criticizing a twenty-year-old bit of pseudoscience, which only old people remember. Richard Herrnstein is dead. Charles Murray is just a distant irrelevancy.

            Well, the joke's on them.  

Charles Murray is back again.  He had surfaced briefly a couple of years ago, when Nicholas Wade's racist anti-science bullshit called A Troublesome Inheritance was published.  That's the book that stimulated an issue of critical, negative reviews in the scholarly journal Human Biology, by the likes of Agustin Fuentes, Jennifer Raff, Charles Roseman, Laura Stein, and your humble narrator. It also stimulated a letter in the New York Times by nearly 150 geneticists repudiating Wade's invocation of their scientific field.  And they ought to know.

In fact, pretty much the only mainstream review of Nicholas Wade that was positive was the one in the Wall Street Journal, by Charles Murray.  So on this side, we have the biological anthropologists and human geneticists in accord that Wade's racist screed is a perversion of the relevant sciences, in which they are, for all intents and purposes, experts.  And on the other side, the political theorist  Charles Murray, who seems to wish that the "science" in Wade's book were true, regardless of what the data show and the experts think.  That's pretty anti-science.  It's just like the creationists, anti-vaxxers, and climate-change-deniers. What do they all have in common? They like to argue the science with the scientists.

It's like mansplaining, only less gendered.  Moronsplaining.

So Charles Murray is still out there, still sponsored by the right-wing think-tank called the American Enterprise Institute, and ever ready to publicly hawk a book of pseudoscience that the scientific community repudiates. Still ready to peddle his own antiquated ideologies about rich people being genetically smarter than poor people, and to argue that, since social programs designed to assist the poor are doomed to failure because the poor are innately stupid, those programs should be abolished.

              To the extent that class and race are correlated in the US, Murray's ideas about the poor being genetically stupid make an easy transition into the world of scientific racism.  And it wasn't accidental.  The Bell Curve cited literature from The Mankind Quarterly, which no mainstream scholar cites, because it is an unscholarly racist journal, supported by the Pioneer Fund, that wacko right-wing philanthropy that has thrown money at wacko eugenicists, racists, segregationists, and hereditarians of all stripes, since its inception in 1937 under the aegis of the wacko eugenicist Harry Laughlin. The Bell Curve also cited the work of that racist wacko psychologist Philippe Rushton – who believed that the mean IQ of Africans is genetically set at 70, and that Africans had been r-selected for high reproductive rate and low intelligence – and then pre-emptively defended his wacko racist ideas in an appendix.  Even the wacko evolutionary psychologists distanced themselves from Rushton, appreciating the toxicity of his ideas: "Bad science and virulent racial prejudice drip like pus from nearly every page of this despicable book," wrote David Barash in the journal Animal Behaviour.

But Charles Murray wasn't smart enough to see it. He couldn't see the virulent racial prejudice in the work he was defending. Or else he was blinded by his own prejudices. It's the age-old bind: ideologue or idiot?

                And now the alt-right has gained political ascendancy, and Charles Murray is on a speaking tour.  And he gets shouted down and driven off of Middlebury College.  But he gets invited to other colleges and his message is heard. 

He is invited to Notre Dame by a political science professor named Vincent Phillip Muñoz, and is civilly and effectively rebutted by Agustín Fuentes.

But let's back up a clause or two.  Who is inviting Charles Murray to speak at their college, and why?  At Middlebury, he was invited by Allison Stanger, a professor of international politics and economics, who told her story in the New York Times, as wanting to engage with his ideas. Likewise, Muñoz argues that "Murray makes an important argument that should be heard". Even the New York Times agrees he should say his piece.

                I'm going to disagree.  Charles Murray talks science that is bogus, and political philosophy that is evil, and uses one to justify the other.  He doesn't need to be heard by anybody, any more than a creationist, or a pedophile, or an anti-vaxxer deserves to be heard. 

So this is what I find confusing. In the free marketplace of ideas in contemporary political science, we still entertain the scientific hypothesis that the poor deserve what little they have because they are genetically stupider than the rich? First of all, I don't know any geneticist who agrees with the second clause. A hundred years ago, geneticists believed that. Since the Great Depression, however (which democratized poverty), not too many geneticists have believed it. (The late Henry Harpending did. That was probably an example of Planck's Principle.)

                Rather, nearly all contemporary geneticists seem to think that the old lefty J. B. S. Haldane more or less got it right when he said, "The average degree of resemblance between father and son is too small to justify the waste of human potentialities which an hereditary aristocratic system entails." Let me translate: You inherit a lot of stuff, and some of that stuff is genetic.  But a lot of the most important stuff – like, privilege – is not. And it is a big mistake to confuse the two categories. Consequently, if you are committed to the proposition that genetic properties are more important than everything else, that is a moral proposition not supported by genetics itself, you smug bastard.

                Class advantages are very real, but they aren't genetic. Doesn't everybody know that?

                I think it's kind of weird that political scientists would be willing to entertain ostensibly scientific ideas – in this case about human genetics – that the relevant scientists themselves do not take seriously.

                But Charles Murray isn't a geneticist.  He is a genetics fanboy. Imagine that you were a professional magician, with a three-year-old child trying to convince you, and everyone else around, that everything important in life is caused by magic.

That said, however, don't think I'm going to let geneticists off the hook so easily. Sad to say, there are, and always have been, opportunistic geneticists who recognize the self-interest in telling the public that everything important in their lives is genetic. Over a century ago, there was Reginald C. Punnett, inventor of the eponymous Square, who ended the first English textbook on Mendelian genetics with the conclusion that "progress is a question of breeding rather than of pedagogics; a matter of gametes, not training…. [T]he creature is not made, but born." The American geneticist Charles Davenport jumped on the Mendelian bandwagon, and soon explained class differences just as Charles Murray does. But rather than speak of cryptic factors, as Murray does, Davenport isolated the cause of those class differences in the gene for feeblemindedness. Rich white people from northern Europe had one allele; everybody else had another. But whether you speak of specific genes for feeblemindedness or cryptic genetic factors that cause the poor to be stupid, it's still fake science.


The Bell Curve capitalized on the popularity of the Human Genome Project in putting forth its thesis about the genetic stupidity of poor people in the 1990s. Some geneticists repudiated it, but others recognized, as the geneticists of the 1920s did, that it was good for the business of genetics. When Science reviewed Madison Grant's The Passing of the Great Race – a classic of American racist thought, which was read in defense of Karl Brandt at the Nuremberg trials to show that the Germans had simply been doing what the Americans were advocating – it concluded with a sobering thought: "This is a book that will … help to disseminate the ever-growing conviction among scientific men of the supreme importance of heredity." Sure, the genetic theory in question might be inane, might be evil, and it might be false, but it definitely is good for business. More recently, the Human Genome Project was backed up with all sorts of purple prose about how your DNA sequence was the most important thing about you: The Code of Codes, The Book of Man, and the like. They knew it was bullshit then, and that's why there is such interest in epigenetics now.


               These geneticists are reprehensible, because they provide the hereditarian soil for scientific racism.  The geneticists may not themselves be racists, but their idiotic statements about what they think their knowledge applies to have indeed sometimes crossed over.  James D. Watson, who knows more about DNA than you do, caused a stir a decade ago, when he said that different groups of people have different "powers of reason".  The rest of the genetics community disagreed, and challenged his own powers of reason.

                And here is the newest exhibit. A video from the famous mouse genetics lab in Bar Harbor, Maine.  It tells you about genetics and genomics, and how genetics controls things like your  eye color and good taste.

Wait, what? (It's at 0:15). Good taste is genetic?

Well she was a bit coy about it, wasn't she?  She delivered the line with a giggle, and the disclaimer, "maybe even good taste".

Geneticists know that good taste is not genetic, because good taste is context-dependent and locally-specific. Geneticists of the 1920s knew that it was in their short term interests to have the public believe that any and all shit was innate.  But the field evolved, and can't afford to devolve.

It would be nice if we could get beyond genetics-vs-culture, so we could talk more comprehensively about "embodiment".  But the hereditarians and racists won't allow it.

We should not be debating the innate intelligence of black people, or of the poor, on college campuses or anywhere.  It is a morally corrupt pseudoscientific proposition. 


It's like inviting a creationist or an inventor of a perpetual motion machine. The university should not be a censor, but it sure as hell is a gatekeeper. At this point, sometimes they go all radical epistemological relativist and say that all ideas deserve a hearing. But all ideas don't deserve a hearing. The universe of things that do get discussed and debated on college campuses is rather small in proportion to the ideas that people have debated over the years. Should we stone witches? No. Might the speed of light be 140,000 miles per second, rather than 186,000? No. Might the universe just be made up of earth, air, water, and fire? No. Might Africans just be genetically stupid? Might people who want to debate this point have their fundamental civic morality called into question instead?





Let me give the last word, then, to Allison Stanger, who invited Charles Murray out to Middlebury College and got roughed up a bit, because she thinks that the innate intelligence of black people ought to be a debatable topic, which apparently ruined the pleasure she ordinarily derives from tormenting marginalized people. As she casually explained it in the New York Times:
I had tough questions on both the controversial "Bell Curve," in which he partly blames genetics for test score differences among races ... But the event had to be shut down, lest the ensuing dialogue inflict pain on the marginalized.




-----------------------------------------------

[Note:  Apparently Stanger herself did not invite Murray, but "welcomed the opportunity to moderate a talk with him on campus."  In any case, we still disagree on the central issue of whether the innate intellectual capacities of non-white people should be a subject open for debate on campuses in 2017.]




Sunday, May 13, 2018

Why I Escaped the ‘Intellectual Dark Web’ - The Chronicle of Higher Education

Pissing off progressives isn't intellectual progress

By Alice Dreger May 11, 2018

Conventional wisdom says that if a staff writer for The New York Times wants to feature you in a story about brave intellectuals, you reply, "Yes, please!" This is especially true if the Times sends a Pulitzer Prize-winning photographer to create a noble portrait of you for an accompanying visual pantheon.

But every time that Times writer, Bari Weiss, called to talk with me about the "Intellectual Dark Web" and my supposed membership in it, I just started laughing. In case you missed it — though, really, how could you, considering that it seems to be everywhere at the moment? — the Times recently published a piece about a bunch of renegade intellectuals who "dare venture into this 'There Be Dragons' territory on the intellectual map."

Why was I laughing? The idea that I was part of a cool group made me think there was at least some kind of major attribution error going on. The confused feeling was exacerbated by the dramatic photo setup: Damon Winter, the Pulitzer winner, had me standing in a marsh full of tall, dry reeds, waiting for a particular moment just past sunset. "Why am I in this scene?" I wondered as we waited, and not just because my favorite dress boots were getting muddy and I worried about ticks.

I also had no idea who half the people in this special network were. The few Intellectual Dark Web folks I had met I didn't know very well. How could I be part of a powerful intellectual alliance when I didn't even know these people?

When I asked what this group supposedly had in common, the answer seemed to be "they've climbed the ladder of fame by pissing people off, saying stuff you're not supposed to say." They regularly made progressives angry with "politically incorrect" statements about gender, race, genetics, and so on. This troubled me the most — that one might think of pissing people off as an inherent good, a worthy end.

While I am very experienced at being annoying, including to members of my own progressive tribes, I don't think this is a technique that should on its own be valorized. Pissing people off is something to be done accidentally, as a side effect, while you're trying to fix a significant problem. Yet the operating assumption behind the Intellectual Dark Web seems to be that angering progressives represents a mark of honor in itself. Indeed, the group's signature hack is leveraging these alleged badges of honor into greater fame and fortune. (Witness the singular genius of Jordan Peterson.)

I knew that some of the people named as part of the IDW are, like me, legitimate scholars — they care about research, data, and changing their own minds through honest inquiry. But that just made me wonder why these enlightened souls would want to be glorified as part of a "dark web." Perhaps they were in universities just long enough to get the pernicious message from their central administrations that all publicity is good scholarship (until it is cause for firing)?

The Times article confirmed my initial fears — and made me glad that I asked to be left out (which I was). The article begins by breathlessly reporting that the IDW is rife with "beauty" and "danger." So, what even is it? Here's the vague rundown: "Most simply, it is a collection of iconoclastic thinkers, academic renegades and media personalities who are having a rolling conversation — on podcasts, YouTube and Twitter, and in sold-out auditoriums — that sound unlike anything else happening, at least publicly, in the culture right now."

Meh. How is this really about intellectualism, darkness, or a special web? If these people are having conversations that are so rare "in the culture," how is it that they have millions of followers and pack auditoriums? (Is "the culture" The New York Times?)

The whole thing — especially the excitement over these people having found a "profitable market" — made me identify anew with that person standing in the ESPN-televised crowd at some SEC football game holding the sign that said, "You people are blocking the library." I don't see it as a sign of intellectual progress when a bunch of smart people find a way to make money off of niche political audiences by spewing opinions without doing much new research.

Opinion is not scholarship, it is not journalism, and we are dying for lack of honest, fact-based, slow inquiry. Twenty years since my first scholarship-based op-ed ran in The New York Times, here's what I see: a postapocalyptic, postmodern media landscape where thoughtfulness and nonpartisan inquiry go to die. The Intellectual Dark Web isn't a solution; it might just be a sign of end times.

I'm all for bringing intellectualism to the masses, but like a lot of academics, I value ambivalence itself, along with intellectual humility. Yet these values seem in direct opposition to the kind of cocksure strutting that is the favored dance move of the IDW.

What I'm left with after this experience is a sense, for myself, of how much academe matters. How we need to fight back against university administrators' equation of "entrepreneurship," funding, and publicity with scholarship. How, since resigning my position at Northwestern University over my dean's censorship of my work, I miss the Intellectual Light Web, the crisscross of walking paths that bisect the campus green. How we need job security to keep people from going to the dark side.

Professors, listen to me: You don't want to be in this dark-web thing, even if it comes with an awesome trading-card photo. You are in the right place. Carry on.

Alice Dreger is a historian of medicine and science and the author of Galileo's Middle Finger (Penguin, 2015).




Wednesday, May 9, 2018

I’d whisper to my student self: you are not alone

Twenty years on, Dave Reay speaks out about the depression that almost sunk his PhD, and the lifelines that saved him.

COMMENT

Dave Reay

[Illustration by Neil Webb @ Debut Art: a person at the prow of a ship gazing out across the sea at the sunset]

A black dog. That's how Winston Churchill described depression. A dark companion that lies quiet in the good times and is your master in the bad. Its arrival is hard to predict, just as its departure seems impossible until it has happened. Depression is as varied as the people who experience it, as shifting in form as a murmuration. Yet its flavour is unmistakable.

It is twenty years since my own black dog stalked away. Twenty years since debilitating depression saturated my every thought and act. Survivors are forever wary that the dog will return. That it will overcome the new defences, leap the redoubts. It's always out there somewhere, skulking; reminding us to watch our step.

Mental illness affects around a quarter of us at some point in our lives [1], with more than 300 million people battling depression worldwide [2] and academics at greater risk than many [3]. The pressures to publish, make an impact, win funding, chase tenure, engage the public, shine on social media and influence policy, combined with failing experiments, lone working and rigid hierarchies, are all threats to our well-being. Yet our job is a thing of beauty, too. To explore new ideas, to grapple with the unknown, and ultimately to understand: that is a privilege.

"The saving grace was really my colleagues around me and they didn't even know it"

As scientists, we talk, a lot. To our students and peers, to policymakers and funders. We become experts in telling the world about our work and opinions, but we risk not truly hearing ourselves or those around us.

My own two decades of silence were borne of fear. Fear of ridicule and mistrust, of overt hostility and covert isolation. Talking to friends, sharing memories of the dark times, has tempered these fears. Although speaking openly about my illness still makes me anxious, it is nothing to the waking nightmare lived by my student self.

If I could speak to him now, I'd tell him not to be afraid, to trust in people. I'd tell him how wonderful the warmth of those he spoke to will feel, how sharing the pain of his mental illness would diminish its effects, not him. I would hug him and whisper: "You are not alone."

I tell this story for anyone who is silenced by the stigma of depression. I tell it to thank the wonderful people who gave me a voice, and then listened. I tell this story because I can, because I survived.

Early one year in the mid-1990s my undergraduate road was nearing its end. Three years of marine biology, substance abuse, mild poverty and falling in love. By October, I was a doctoral student 500 kilometres away from family and the love of my life.

It was natural to be scared. New people, new place, unknown rules and expectations. On the tall stool at my carefully demarcated square of lab bench I sat each day, trying to fathom the instructions of my supervisor. The research was on impacts of warming in the Southern Ocean — growing algae in the lab for starters, maybe a trip south later on. It was a masquerade from the start. The techniques and equipment were alien, my nodding response to instruction, an act. I tried to mimic the easy confidence of the other students, to fit in. My supervisor and his group must have assumed I had some idea of what to do. I hadn't. Not a clue.

Batch after batch of my growing medium went bad. Each morning, when I returned to the lab from my bedsit, the large glass bottles would have the unmistakable milkiness of contamination. Hours of weighing, mixing and sterilization would follow as I flushed with embarrassment and tried to suppress a mounting fear. Fear of being reprimanded. Fear that I was just too stupid. Fear that everyone would find out.

The days merged into weeks; the failures continued. My supervisor called me to his office. He was unimpressed. I needed to do better, work harder, get it right. Walking back to my bench that day, a black dog walked with me. When exactly it had arrived I can't say. I was 21. My ramshackle mental defences had been crumbling for a while. I'll never know what tore the breach. It may have been an admonishment from a lab technician or just one more cloudy culture bottle. Whatever had splintered the final defence, depression was my new master.

It is amazing how we can maintain a facade of normality while behind it a maelstrom of disintegrating sanity roars. By Christmas, I was thinner and quieter, but still me. Time with family and fiancée kept the black dog subdued. It waited.

Instead of seeking help, I counted down the remaining days of holiday like a condemned man. Sometimes you can see deep depression coming for you, slamming doors of escape, tightening the orbits of desperation. Deciding I would kill myself was a relief. I wrongly felt that it was the only door left open. One place the black dog could not follow.

Saying goodbye to my family when I returned to university that January was brutal. My mask could not withstand the warmth of those people I so loved. Through the sobbing, fear drove me on, piloting my body through the long journey back to my term-time lodgings.

Alone in the house I ran a hot bath, drank whisky without tasting it and selected the sharpest kitchen knife. I sat in that bath for an age. The knife hurt on my flesh, whisky or no. I wept and retched and wept some more. Too afraid to cut deep, anxious about what people would think, about the pain. I drained the bath and went to bed.

The months that followed were a fluctuating grey-scale of near-normality and despair. Depression is exhausting. Even eating and washing felt irrelevant to my own contracting existence. Some days were doable, others I spent lying balled and wide-eyed, waiting for the night. When it arrived, sleep delivered only confused marathons of lost time.

My second attempt at suicide came just before Easter and was longer in the planning, the means gleaned from the Dick Francis mystery Comeback and obtained from the lab's chemical store. An injection, some nausea, a short blackout, that's all.

Throughout this period I self-medicated with alcohol, and cannabis when it was available; anything for oblivion. I didn't seek help, therapy, or prescription drugs, nor did I quit my PhD. I survived long enough for help to find me, eventually.

By the summer, life settled. Research, relationships, even my supervisor: all the quakes that had brought down the walls of my mind were still. I laid plans for work on an icebreaker in the South Atlantic that coming winter. It was exciting, an opportunity I had dreamed of, and yet still one in which I pictured my own destruction: deep inside, thoughts of suicide lingered, a secret obsession. A vision of slipping unseen into a remote sea, all a terrible accident, a tragic loss.

It was people who saved me. They still do. A warm word from a friend, a joke and a moan with a colleague. Day by day, piece by piece, the people around me unknowingly brought my mind back to health. One man in particular — a beautiful human and can-do lab technician named Paul Beckwith — helped me to trust in others, to share and to belong.

In December, just over a year after the start of my PhD, I joined the research ship near the Falkland Islands and my convalescence accelerated. I sipped daily on a tonic of big-hearted, inspirational and funny scientists. Being stuck together in floating isolation for months could have been torment. Instead it was my salvation.

The cloudy jars and experimental despair were replaced by thriving baths of algae, and geeky thrills as the microbes responded to artificially induced future climates. Even through the darkest days, gripping my lab bench to suppress the convulsions within, I never hated the science. Now that I could finally understand how to do it, I loved it.

The savage beauty of the South Atlantic, its sparkling wildlife and scientists, will always be a touchstone for me. A month after joining the ship and at the end of a long night shift, I stood alone on its rolling deck. Dawn broke, and tears streamed down my face. Tears of joy and release. The black dog was gone.

The two decades since have sewn their own patchwork of light and dark. Deaths of those close to me, including the beautiful Paul, have called my black dog close again at times. But, for me, it is kept at bay by lucky brain chemistry, and the warmth and understanding of my wife (that love of my life), and of our children and friends. Not everyone is so fortunate.

As the supervisor of dozens of students and staff, I find that empathy is useful and listening is vital. Just a chat, making time for people: it can mean the world. We are mentors, not therapists, but our university communities are under enormous stress. Insecure jobs and mounting debt, endless metrics and poor management — all are risks to our mental health in the edifice of corporatization that our higher-education system has become.

Awareness is growing, with supervisor training, peer-support networks and counselling services now commonplace. More importantly, the stigma of mental health issues is fading. Whether it's to a friend or loved one, a therapist or colleague, those afflicted by mental illness must feel secure in speaking out. Silence was for too long the enemy of my own healing. Next time that black dog comes close, it will be met with a roar.

Nature 557, 160-161 (2018)

doi: 10.1038/d41586-018-05080-6


References

  1. Kessler, R. C. et al. Epidemiol. Psichiatr. Soc. 18, 23–33 (2009).

  2. World Health Organization. Depression and Other Common Mental Disorders: Global Health Estimates (WHO, 2017); available at http://go.nature.com/2ksfnhg

  3. Royal Society & Wellcome Trust. Understanding Mental Health in the Research Environment: A Rapid Evidence Assessment (Royal Society & Wellcome Trust, 2017); available at http://go.nature.com/2p8fq8r



