AI and the Age of Demoralization
There are countless articles on AI and its various impacts, yet I have rarely seen discourse on what this era is doing to our mental health.
Much has been said about the economic impacts of AI, from the loss of jobs to company-wide AI mandates. I hope to take a more personal approach with this article.
In particular, what does it do to a person's sense of purpose to live in a world where a machine can outperform them in seconds?
If AI can do it better than me, why bother?
The question sounds melodramatic until you hear it voiced, sotto voce, in offices, classrooms, and late‑night coding sessions. We have been taught to equate competence with worth; when an external system eclipses our competence, that worth feels threatened.
The Cult of Output
Meanwhile, social media has rewired culture around visible output: tweets shipped, videos posted, commits counted. The feed rewards finished artefacts, not the messy drafts behind them. This has nudged an entire generation to judge themselves by the frequency and polish of public deliverables. That bias toward packaged results is not inherently harmful; it has democratized creation and amplified unheard voices.
Yet it sets the stage for AI's most disquieting promise: friction‑free, infinite output. When a model can conjure publish‑ready prose or imagery in seconds, it flatters our feed‑driven instincts while quietly erasing the experiential journey that once tethered work to identity.
Living in a hyper-optimized world
The culprit is not capitalism per se; after all, humans will optimize under any system that rewards efficiency. AI is simply the latest and most frictionless path of least resistance. For businesses, that translates to cost savings; for individuals, it can translate to a quiet erosion of agency.
Used well, AI is an amplifier: artists gain new palettes, engineers new abstractions. Used indiscriminately, it becomes a crutch. I have watched myself slip from maker to mere curator, allowing a model to generate product‑requirement documents, architectural diagrams, even low‑level code stubs before I have fully gripped the problem. The result is an almost pleasant cognitive numbness, and a subtle atrophy of the creative muscles that once defined my craft.
It feels eerily similar to the sedative pull of GPS navigation: liberating at first, disorienting when the signal disappears and you realize you have forgotten how to read a map. It is a general, gradual descent into sluggishness.
Learning to love learning again
Over‑reliance shifts our default stance from active exploration to passive selection. We scroll through options that an algorithm has pre‑digested, choosing the "best" answer rather than grappling with the messy middle ourselves. The danger here is not that we will produce nothing, but that we will stop valuing the struggle that makes production meaningful. And while this sounds romantic or idealistic, that struggle has always been vital, even integral, to the creation of good products, concepts, and artwork.
I fear that we may lose the ability to love learning, and to enjoy simply being bad at things. After all, finding comfort in one's failures and imperfections is part and parcel of getting closer to an idealized state.
Like the child building a tower of blocks, persisting even as it topples again and again, we need to realize that the process itself is what makes us human. AI will continue to raise the ceiling on what is technically possible. Our task is to raise the floor on what it means to be intentionally human: curious, fallible, and participatory in our own growth. If we can learn to wield the tool without surrendering the hand that guides it, perhaps we will rediscover what it means to be human.