There is a peculiar irony at the heart of the age of artificial intelligence. We have built systems of extraordinary sophistication. They can compose music, synthesise research, generate images indistinguishable from photographs, and write in the voice of anyone who has ever put words on a page. What they cannot do, despite appearing to do it fluently, is tell us why any of it matters.
That question, what matters and why, is not a technical problem. It is a human one. And yet, quietly, incrementally, we are beginning to outsource it.
What Meaning Actually Requires
Viktor Frankl, writing from the rubble of everything that can be taken from a human being, arrived at a conclusion that sounds almost absurdly simple: meaning cannot be given. It can only be found. And it is found not through comfort, not through optimisation, but through engagement with the irreducible particularity of your own life, its suffering, its relationships, its choices, its losses.
This is not a romantic idea about suffering for its own sake. It is a precise claim about the conditions under which meaning becomes available. Meaning requires a subject. It requires someone for whom things matter, and that mattering is always personal, always rooted in lived experience, always inseparable from the question of who you actually are.
An AI system has no stakes in the world. It processes. It generates. It optimises. It does not have a life to make meaning from, because it does not have a life at all. This is not a limitation to be overcome in future models. It is a categorical distinction.
The Efficiency Trap
The pressure to delegate meaning-making to AI systems does not arrive as a philosophical proposal. It arrives as a practical convenience. Why struggle with how to express grief in a condolence message when a model can draft something eloquent and appropriate in seconds? Why sit with the difficulty of articulating your own values in a personal statement when a system can generate a compelling narrative from your bullet points? Why spend time thinking about what you actually want when an algorithm can learn your preferences and pre-select your options?
Each of these conveniences is individually defensible. Collectively, they add up to something more concerning: a gradual transfer of the cognitive and emotional work that is, in the fullest sense, the work of being human.
The efficiency trap is this: the very acts that feel like friction (the struggle to find words, the discomfort of not knowing what you think until you write it, the patience required to let meaning emerge rather than forcing it) are not obstacles to the meaningful life. They are the meaningful life, or at least significant portions of it. Remove the friction, and you remove more than the inconvenience. You remove the substance.
The Meaning Gap Is Not New
It would be a mistake to lay all of this at the feet of artificial intelligence. The philosopher Charles Taylor identified what he called the malaise of modernity decades before generative AI existed: the creeping sense that contemporary life, for all its material progress, had lost something in the way of depth, commitment, and horizon. The worry was already present in mass consumer culture, in the reduction of individuals to preference-expressing economic agents, in the narrowing of what counts as a serious human project to what can be measured and marketed.
AI does not create this malaise. But it has the potential to accelerate it with remarkable efficiency, to automate the very practices (reflection, expression, struggle, choice) that might have offered a counterweight.
There is also Hubert Dreyfus’s long-standing argument that human expertise and human understanding are fundamentally embodied, grounded in the kind of engaged, situated, at-risk presence in the world that no symbol-manipulating system can replicate. Dreyfus was making this argument about earlier forms of AI, but the point has only deepened in relevance. The risk is not that AI will surpass human understanding. It is that we will stop exercising human understanding because the machines make it feel optional.
What Machines Cannot Substitute
Kierkegaard described what he called the aesthetic stage of existence: a life organised around pleasure, novelty, and distraction, rich in sensation but empty of commitment. The aesthetic life is not a bad life in any obvious sense. It is full of interesting experiences. What it lacks is a self, a stable centre from which commitments can be made, from which anything can truly matter because you have decided that it will matter to you.
The concern about meaning in the age of AI is, in part, a concern about the aestheticisation of existence at scale. An infinite personalisation engine optimising for engagement is, structurally, a machine for keeping us in Kierkegaard’s aesthetic stage indefinitely: endlessly stimulated, rarely committed, always one recommendation away from something more immediately satisfying.
What machines cannot substitute is self-authorship. The willingness to make a commitment that costs you something. The capacity to sit with a question long enough to develop a genuine relationship with it. The courage to express something imperfect and personal rather than something fluent and borrowed. These are not skills. They are orientations toward one’s own existence, and they require practice, which requires opportunity, which requires occasionally resisting the convenience of delegation.
Living Forward in the Age of AI
The question, then, is not whether to use the tools available to us. The tools are here, they are powerful, and they will become more powerful. The question is what to keep for ourselves, and why.
A useful heuristic: anything that is, for you, load-bearing (any activity whose difficulty is part of what makes it meaningful) should be treated as protected territory. The first draft of your thinking. The expression of genuine emotion. The decision you make not because an algorithm suggested it but because you, after sitting with the question, believe it is right.
None of this requires technophobia. It requires something more difficult and more interesting: discernment. The capacity to distinguish between the tasks where AI genuinely extends what you are capable of, and the tasks where the effort is the point.
Meaning was never efficient. That was never a flaw. It was always the design.
This is part of an ongoing series on Mind, Machine, and Meaning, exploring what it means to think, choose, and exist in a world increasingly shaped by artificial intelligence.
Continue reading:
The Quiet Erosion: How AI Is Rewriting Human Agency Without Asking Permission — the slow, polite way AI is draining our capacity to choose.
Everyone Can Make It Now. So Why Does Almost All of It Feel Wrong? — on taste as the moat that cannot be generated.
Ameya Agrawal is an IIM Kozhikode Gold Medalist and Strategy Manager at the Executive Director’s Office, MIT World Peace University, working at the intersection of strategy and execution. He is a core member of the central team launching WPU GŌA, India’s first transdisciplinary residential university campus. Previously CEO of Mahatma Gandhi Seva Sangh (MGSS), his work in disability rehabilitation earned two Presidential National Awards from the Government of India, impacting over 100,000 lives across Maharashtra.
Author of the bestselling self-help book “A Leap Within” (published at age 21, earning him a National Record), Ameya has been published in Forbes, Business Standard, and The Print. He founded the SkillSlate Foundation, which trained 25,000+ individuals across 100+ organizations during the pandemic. Admitted to Harvard University in 2021, he chose to stay in India to continue his social impact work.
Technical tools and projects available on GitHub | Connect on LinkedIn | Read more at blog.ameya.page