We should talk about how the AI revolution is impacting our mental health
- By nygren
- Fri 10 April 2026
How AI is impacting everyone's mental health is not getting enough discussion, and I don't know what we do about it either. While AIs may help us get more done faster, that increased productivity may not be sustainable and does not come with a reduction in either stress or anxiety. Since I suspect this is affecting many of us, whether we acknowledge these feelings or keep them buried, it is important for leaders to acknowledge this and talk about it so that people don’t feel alone with their fears.
I'm coming from a position of thinking that emerging AI technologies are highly disruptive and transformative in the same way that the internet was in the late '90s. This doesn't mean they will solve every problem or that they are actually a good fit for everything, but they are here to stay. I think the common comparisons to Blockchain/NFTs/VR are misleading, as those were technologies in search of problems in a way that LLMs are not. As someone who has often been a selective early adopter of technologies (I had a Linux web server in my dorm room in 1994, had voice-controlled home automation in 2004, and was pushing the boundaries of ChatGPT within a week of its release), this is a technology I’m willing to experiment with and am finding increasingly useful for certain tasks. In particular, I’ve been finding LLMs like Claude especially useful in domains where I’m an expert and where it is easier to fully review output than to do everything from scratch. For example, using Claude as an augmented search tool to help me find primary sources is transformative in the same way that AltaVista was transformative over Yahoo.
Within the work environment there are certainly compelling and transformative uses of AI and LLMs, and there are also many places where introducing them is counterproductive at this point and/or where the risks outweigh the benefits. A colleague of mine put it well when describing how to approach new uses of AI: “OK, fine, let’s try AI for all of this, but at the end of the day I’m a scientist, so how do we measure and how do we learn from this?”
But back to my initial premise: everywhere I look, AI technologies are triggering fight-or-flight threat instincts in almost everyone. Regardless of how experienced and senior people are, there seem to be uniform reactions: “Can I keep up? Will this replace me career-wise? How do I manage the technical risks? What are the existential risks to our civilization? What are the ethical implications and the environmental costs? How will this impact my children’s future path?” and many more. Since “Code is a liability. Code’s capabilities are assets.” (Cory Doctorow), those of us working at tech companies are also watching a rapidly and unsustainably ballooning amount of code that we will need to maintain.
The things our “Growth Mindset” training classes taught us are all directly applicable here, where the perceived threats and the high rate of change are almost certainly having psychological, sociological, and physiological impacts on nearly all of us. Discussions around this seem to be one of the top topics of conversation everywhere, whether on social media, in work chats, or at social gatherings.
Some people are responding with extreme skepticism as a natural reaction, and in the creative fields this may be well warranted: while AI can write novels, is that something people actually want to read? But in other areas this skepticism is hazardous and may just produce increasing cognitive dissonance. If we take the comparison to the Internet bubble of the early 2000s, there was certainly hype and over-investment, but that rapid investment gave many companies the momentum they needed to build great things we rely on heavily today. Even things that drew huge skepticism at the time (who wants to do banking online?) have turned out to be part of our daily lives.
Other people seem to be taking the Growth Mindset to the extreme, embracing AI evangelism and going all-in without skepticism. Yes, it is possible to do a huge amount if you have no family (or prioritize work and innovation above all else) and work long hours every day, but it is neither equitable nor healthy to force everyone to make that choice. Given the rate of change here, I fear that even this is unsustainable, and many of the people on this path are also going to get burned badly by the security and other pitfalls present in even the best of these systems.
For those of us who think of ourselves as pragmatic innovators, we can try to take a middle ground, but even that middle ground is extremely stressful. There is no way anyone can keep up with the rate of change, and new threats are showing up constantly. While many people might have the option of trying to ignore it all, it is still important for everyone to be aware of the new threats from bad actors that are leaking into the real world. For those of us in technical roles, ignoring the transformative nature of AI isn’t an option: there are lots of bad uses, but there are also lots of compelling uses, and we need enough familiarity to differentiate between them and to use the tools well.
Trying to keep up here while also dealing with the emerging threats and trying to do our jobs is exhausting, and it feels increasingly unsustainable. While offloading things to AI might seem like an attractive solution, it doesn’t really help, and it can make things worse: the only way to safely use AIs today is to put up enough guardrails that we can be personally responsible for their outputs and actions. This means that while they can help us be more productive, that increased productivity does NOT come with a reduction in either stress or anxiety.
I’m not sure what we can do about this. The only thing I can think of is empathy: acknowledging the levels of stress that our present-day future is forcing onto us and everyone around us, along with appreciating that different people are processing it in different ways. Human connection may help some people but make things worse for others. While this seems like an important topic for those of us in tech leadership roles to talk about and try to address, the solution is NOT to try to use AI to solve it. (My dystopian fear is bringing this to HR departments or management who then suggest that employees talk to chatbots for psychological support; there is plenty of research suggesting that doing so would be criminally negligent.) But no matter what, try to be good to the humans around you.