Artificial intelligence is not taking over the world with dramatic announcements and robot armies. It is doing something quieter and, in many ways, more profound: it is slowly reducing the number of decisions we make each day without us noticing. And every decision we stop making is a small transfer of agency, the fundamental human capacity to choose, from us to the algorithm.

This is not a story about malicious AI or technological unemployment. This is a story about the gentle, polite, and remarkably effective way that intelligent systems are reshaping human life by making it more convenient to let the machine decide.

The Architecture of Choice Erosion

Consider the last time you chose a movie to watch, a restaurant to visit, or a route to drive. If you’re like most people with a smartphone, you didn’t choose. You consulted Netflix’s recommendation algorithm, Google’s restaurant rankings, or Waze’s routing optimization. The choice became: which suggestion do I accept?

This is a different kind of choosing than what previous generations experienced. Your grandmother, looking for a place to eat, had to actively gather information, weigh options, and make a judgment. The work was harder. It was also more fully hers.

Today’s choice architecture is built to counter what behavioral economists call “choice overload.” The system doesn’t eliminate your choices; it pre-selects the ones worth considering. This feels like liberation. And in terms of cognitive burden, it is. But liberation from what, exactly? From the labor of decision-making. From the responsibility of bad choices. From the development of judgment through trial and error.
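
To see the mechanism in miniature, consider a deliberately toy sketch of a recommender. Everything here is invented for illustration: the data, the names, the one-line similarity score. What it captures is the structural point: the full catalogue still exists, but your decision has been reduced to picking from a shortlist the algorithm already ranked.

```python
from dataclasses import dataclass

@dataclass
class Option:
    title: str
    genre: str

# Hypothetical stand-ins for a streaming catalogue and your viewing history.
catalogue = [
    Option("Film A", "thriller"), Option("Film B", "comedy"),
    Option("Film C", "thriller"), Option("Film D", "documentary"),
]
history = [Option("Old Film 1", "thriller"), Option("Old Film 2", "thriller")]

def score(option, history):
    # Rank an option purely by its resemblance to what you chose before.
    return sum(1 for past in history if past.genre == option.genre)

def preselect(options, history, k=3):
    # The system's judgment call: which options are "worth considering."
    return sorted(options, key=lambda o: score(o, history), reverse=True)[:k]

shortlist = preselect(catalogue, history)
print([o.title for o in shortlist])  # the only choices you ever see
```

Nothing was removed from the catalogue; the system simply decided which options reached your eyes. Your choice is an index into its ranking.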

The trade-off becomes visible only when you step back: convenience in exchange for agency. Efficiency in exchange for the skills that develop only through the practice of making decisions, including bad ones.

When Optimization Becomes Abdication

The most sophisticated example of this erosion is happening in areas where personal judgment was once considered essential. AI systems now help write job applications, draft personal statements, compose birthday messages, and even generate art that “expresses your style.” Each of these tasks used to require you to sit with the question: what do I actually think? How do I want to present myself? What matters to me?

The systems are often very good at these tasks. An AI-written cover letter may be more persuasive than one you would write yourself. An AI-generated birthday message may be more touching than your own words. An AI-created art piece may be more aesthetically pleasing than your amateur attempts.

But the quality of the output was never the only thing at stake. The process of struggling to articulate what you think, of wrestling with how to express something genuinely personal, of making choices that reflect your actual values rather than optimized outcomes: these are not inefficiencies to be automated away. They are, in a significant sense, what it means to be a person rather than a preference-executing machine.

The Delegated Self

Philosopher Harry Frankfurt wrote about the difference between first-order desires (I want coffee) and second-order desires (I want to be the kind of person who wants coffee, or I don’t want to want coffee). Human agency operates at this second level. It’s the capacity to step back from immediate impulses and ask: is this what I want to want?

AI systems, no matter how sophisticated, operate exclusively at the first level. They can predict what you’ll want based on your behavioral patterns, but they cannot engage with the question of whether what you want is what you should want, or whether the person you are becoming through your choices is the person you want to be.

This creates a subtle but crucial gap. As more decisions become automated, more of your life is optimized for the you that you currently are, rather than the you that you might choose to become. The recommendation systems don’t know about your aspirations, only your history. They cannot distinguish between choices you made intentionally and choices you made out of habit, convenience, or in moments of weakness.
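
The gap is easiest to see in what a behavioral recommender takes as input. The sketch below is hypothetical (invented log, invented function), but the structural omission it illustrates is real: every signal comes from what you did, and nothing in the data represents who you are trying to become.

```python
from collections import Counter

# Every interaction lands in the log the same way, whether it was a
# considered choice or a 2 a.m. habit-click. The log is all the model sees.
interaction_log = ["thriller", "thriller", "comedy", "thriller"]

def predicted_preference(log):
    # First-order prediction: what you will want next, inferred from
    # what you did before. Intent, regret, and aspiration are invisible.
    return Counter(log).most_common(1)[0][0]

print(predicted_preference(interaction_log))  # -> "thriller"

# Note what has no place in this signature. Something like
#   predicted_preference(log, who_you_want_to_become)
# cannot be written, because that second signal never enters the data.
```

That missing argument is Frankfurt’s second level: the system can compute what you have wanted, but it has no representation of what you want to want.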

The result is a life that becomes increasingly efficient at giving you what you’ve wanted, and increasingly incapable of helping you figure out what you should want instead.

The Hidden Curriculum of Convenience

There’s an educational dimension to this that goes deeper than productivity concerns. Every time you make a difficult decision, you develop what psychologists call executive function: the capacity to weigh options, consider long-term consequences, and act according to your values rather than your impulses. This is a skill that only develops through practice.

When AI systems reduce the number of decisions you need to make, they also reduce the number of opportunities you have to develop this capacity. The college student who relies on AI to write her personal statements may submit more polished applications, but she misses the experience of articulating her own values under pressure. The manager who uses algorithmic scheduling may run more efficient meetings, but he loses practice in the complex human judgment of reading the room and making real-time adjustments.

The hidden curriculum of convenience is that ease becomes habit, and habit becomes incapacity. Not immediately, but gradually. The skills atrophy from disuse. And because the atrophy happens slowly, and because the AI systems keep improving, you may not notice until you need to make an important decision without algorithmic support and realize you’ve lost the muscle for it.

Agency as Practice, Not Possession

Agency is not a possession you either have or don’t have. It’s a practice. Like physical fitness, it requires regular exercise. And like physical fitness, it can be gained or lost depending on how you live.

The question is not whether to use AI tools. The tools are here, they are useful, and they will become more powerful. The question is which decisions to keep for yourself, and why.

A useful heuristic: any decision that is, for you, a form of practice in being the person you want to be should be protected from algorithmic optimization. The choice of how to spend your evening. The expression of condolences to a grieving friend. The selection of a gift that reflects your understanding of someone you care about. These are not inefficiencies. They are exercises in agency.

And agency, once lost, is difficult to recover. It is much easier to maintain through deliberate practice than to rebuild after years of algorithmic convenience have made you a stranger to your own capacity to choose.


This is part of an ongoing series on Mind, Machine, and Meaning, exploring what it means to think, choose, and exist in a world increasingly shaped by artificial intelligence.

Continue reading:
The Outsourced Self: When Meaning Becomes a Machine’s Job — on the quiet transfer of meaning-making to machines.
Everyone Can Make It Now. So Why Does Almost All of It Feel Wrong? — on taste as the moat that cannot be generated.


Ameya Agrawal is an IIM Kozhikode Gold Medalist and Strategy Manager at the Executive Director’s Office, MIT World Peace University, working at the intersection of strategy and execution. He is a core member of the central team launching WPU GŌA, India’s first transdisciplinary residential university campus. Previously CEO of Mahatma Gandhi Seva Sangh (MGSS), his work in disability rehabilitation earned two Presidential National Awards from the Government of India, impacting over 100,000 lives across Maharashtra.

Author of the bestselling self-help book “A Leap Within” (published at age 21, earning him a National Record), Ameya has been published in Forbes, Business Standard, and The Print. He founded the SkillSlate Foundation, which trained 25,000+ individuals across 100+ organizations during the pandemic. Admitted to Harvard University in 2021, he chose to stay in India to continue his social impact work.

Technical tools and projects available on GitHub | Connect on LinkedIn | Read more at blog.ameya.page
