200Bn Weights of Responsibility
The Stress of Working in Modern AI
Felix Hill, Oct 2024
The field of AI has changed irrevocably in the last two years.
ChatGPT is approaching 200m monthly users. Gemini was visited almost 320m times in May 2024. AI enthusiasts can now avail themselves of AI microwaves, AI toothbrushes and even an AI football.
However, for many of us who work in AI, this boom in popular interest is both a blessing and a curse. Certainly, salaries have increased, along with stock prices and market valuations. On the other hand, the change has brought with it a unique set of stresses.
This blog is about the stresses of modern AI. It is aimed at those who work in AI (which by conservative estimates is now something like 87% of the world’s population), but particularly those who do AI research.
Ultimately, I hope that talking about what makes AI research stressful can make life a little more joyful for those of us lucky enough to work in the field. Because, despite the current chaos, it remains a wonderful, fulfilling profession; one that has the potential to resolve many of the great questions of science, philosophy and thus humanity itself.
A few months back, I was at a friend’s 40th birthday party. We are close friends, so I knew a good proportion of the guests, some very well. But I didn’t know them all.
Among those that I knew least well, I noticed a curious effect.
Despite my not being very well (more on that later), and evidently not being keen to engage in conversation, a small queue formed around me. Simply because it was known that I work for DeepMind, people wanted to talk to me.
And it was not about therapeutic things like football or 80s music. These people wanted to talk about the one thing that I was trying most to avoid thinking about: AI. While it was flattering that so many were interested in my work, it also reminded me how much had changed in the last two years. Bankers, lawyers, doctors and management consultants all wanted to get my take on ChatGPT; and although few claimed to use such LLMs directly in their work, they were convinced that something was happening in AI that they ought to know about.
If you’re a researcher, I’m sure you can relate to this feeling of being unable to switch off at social occasions.
But it gets worse. I’m not even safe in the confines of my own home.
I had long stopped watching the news for fear of triggering anxiety. But even when watching football, VH1, Inspector Montalbano, or that excellent TV adaptation of Elena Ferrante’s Neapolitan Quartet, the adverts were replete with references to AI.
At this time, I often thought about packing my bags, crossing continents and joining an isolated sect. Although I wouldn’t be surprised if even Vipassana has in some way been infiltrated by AI at this stage.
The fact that a few large companies seem to be competing to develop the biggest, best large language model is itself inherently stressful, whoever you work for.
Doing AI research at the moment can feel like participating in a war. And from Adolf Hitler to Dutch Schultz, it is widely known that going to war can lead to grave outcomes including psychopathy, divorce and suicide.
Of course, this is not to equate participation in AI research with physical combat in a ‘literal war’. But I know from my own experience that the parallels are genuine, if somewhat tenuous.
Typically, researchers in industry are not accustomed to their work having a direct and immediate impact on the bottom line of their employers.
Of course, many researchers would dream of the chance to make such an impact. It’s just that previously it was a once-in-a-decade occurrence at best.
In most cases, the result of fundamental research on LLMs today is small and likely short-term perturbations in model performance. However, with public valuations so (inextricably?) linked to LLM performance, these perturbations can in turn lead to billion-dollar swings in stock prices.
This dynamic is of course very stressful, and not something that AI researchers would have been prepared for at graduate school, during postdocs, or even on the job itself prior to 2022.
Most AI researchers, especially those of us over a certain age, did not get into research to make money. Getting lots of money for doing a job you love sounds like a panacea, but it can also provoke intense anxiety. Particularly if the external factors driving your increased income are not within your control and/or have the effect of making you love your job a lot less than you used to.
Whether or not AI has anything to do with it, there is ample evidence that accruing wealth suddenly can lead to all sorts of problems; just look at actors or singers who finally made it big after years of trying. Addiction, broken relationships, fractured friendships and even suicide are just some of the more common symptoms. These are certainly symptoms that I can relate to.
The scale, simplicity and efficacy of LLMs make it hard to do ‘science’ that is relevant, in the sense that it immediately makes LLMs better.
Leading LLM researchers are already espousing Rich Sutton’s bitter lesson: that almost no innovation is required beyond scale.
And, even if substantive innovation were possible in theory (it surely is), realising it would often require repeated training of the largest LLMs under different conditions. This is not something that even the largest companies can afford. For a ‘mere’ research scientist, it can feel soul-destroyingly intractable.
These conditions are hard for industrial scientists used to working in small (5–10 person) teams. But they are surely even more acute for those in academia: PhDs, postdocs and AI / CS / ML faculty.
While those in academia can (and should) continue to publish the insights gained from experimenting with LLMs, for scientists in industry the question of publication is less clear.
Publication has long been an intrinsic part of the scientific process, and has always been a central tenet of AI research. Most AI researchers I have spoken to, particularly research scientists, agree with me that publishing feels like a critical aspect of our careers.
But, in industry at least, the question of whether publication is a viable outcome for one’s research has become increasingly unclear in the last two years. Minor tricks that can improve an LLM equate to potentially crucial weapons in the LLM wars. Whether giving away such secrets is of benefit to the organisation that funds the research is always a nuanced question.
This all means that researchers frequently have no sense of the destiny of their own ideas, which, at least in my case, can cause immense stress.
Of course, one plausible escape from these worries is to derive a scientific vision, raise some capital and form a startup. Indeed, the current proliferation of AI startups (both big and small) shows how many scientists have chosen this route.
But becoming a founder is no surefire way to avoid stress-related issues. Indeed, it is famously stressful; even with current levels of investor enthusiasm, many well-funded AI startups fail. I know from my own experience that being a founder is a particularly lonely journey. It’s doubtless a viable option for ambitious scientists right now, but not one that is likely to make doing science easy, nor one that reduces stress.
The last two years have been chaotic and crazy in the world of AI, but they have also been a particularly turbulent time for me personally.
In April 2023 my Mum died after a long battle with Alzheimer’s. At the time I was in a psychiatric hospital after suffering from acute psychosis, with stress a likely important factor. For the following 12 months I was in theory recovering but, in practice, in a state of both extreme anxiety and suicidal depression. During this time, I was incredibly lucky to have employers who understood my situation (and my value to the company) and who provided continual therapeutic and moral support.
After a further 6 months of life-threatening depression, I began to feel better, and recently have felt able to write about my experiences. I learned that stress and anxiety go hand-in-hand; indeed they may ultimately be the same thing. Of course, like any adaptive trait, there can be benefits to anxiety (e.g. around productivity), but when anxiety becomes malignant, the consequences can be quite serious.
It was reflecting on the last two years in AI, while trying to relearn how to be an AI researcher, that gave me the insights I have shared in this blog. Of course, sharing the insights will not solve the problems in general, but one of the few things that gave me hope in the darkest moments was knowing that I was not alone. And if you are suffering right now, trust me — you are not alone.
Social anxiety
I have covered many of the catalysts of stress or anxiety that may be afflicting those who do AI research at the moment. But there is one form of stress that I have not mentioned, because I’m lucky enough never to have experienced it myself. Rather, I know about it from intimate conversations with friends and colleagues.
And that form of stress is social anxiety.
According to friends, those who are socially anxious find group interactions challenging. This is tough in the world of modern AI, where large project teams and massive (often cross-continental) collaborations are essential. The high level of churn in the industry at the moment only makes things worse, as established teams (which often function as social ‘safety nets’) can be decimated overnight. Churn can also lead to trust issues, as previously reliable allies depart for ‘enemy’ research groups.
The good news is that social anxiety, like all of the manifestations of anxiety or stress that I have discussed thus far, can be overcome. The process starts by nurturing natural support networks like family and ‘non-AI’ friends. But a critical second step is for all of us who work in AI to start, and continue, a candid conversation about stress.
So please tweet or comment with your own experiences, and let’s see if we can make AI research not only a dynamic and intellectually challenging place, but also a compassionate and kind one.