Taste in the age of AI and vibe coding is being tested in ways we did not fully anticipate. Google just shipped Stitch. You describe what you want — a cinema booking app, a fintech dashboard, a healthcare portal — and it generates the full UI in seconds. Production-ready code. Figma export. Clickable prototypes. All from a text prompt.
The design community’s reaction was predictable. Half celebrated. Half panicked. And almost everyone missed the actual point.
Stitch did not make designers obsolete. It made a specific kind of designer obsolete — the one whose entire value was translating a spec into a screen. That job is gone. But the harder question, the one nobody is asking loudly enough, is this: if the generation is now free, what exactly do you bring?
The Abundance Problem Nobody Planned For
We spent twenty years building tools to make creation easier. And we succeeded. Today, you can produce a website, a song, a legal brief, a financial model, a university admissions essay, a marketing deck — without knowing how to do any of it.
The result is not a creative renaissance. The result is an ocean of content where almost nothing is worth reading, almost no product is genuinely delightful, and almost every AI-generated interface looks like it was designed by a committee that had never met a user.
Volume went up. Quality stayed flat, or dropped. And the gap between what gets made and what is actually good has never been wider.
Here is the paradox: the same tools that made creation accessible made judgment scarce. Because judgment — the ability to look at something and say this is not right yet, or this is not for us, or this solves the wrong problem — that does not come from better prompting. It comes from something else entirely.
What Taste Actually Is
Taste is not aesthetics. That is the first mistake people make when they try to think about this seriously.
Aesthetics is about what looks good. Taste is about what is right for this person, this problem, this moment, this constraint. Steve Jobs did not have better taste than other CEOs because he preferred sans-serif fonts. He had better taste because he understood, before users could articulate it, what would feel natural in their hands — and he was willing to delay shipping, cut features, and say no to his own engineers until it got there.
That is taste. It is a calibrated judgment function built from accumulated exposure, repeated failure, genuine curiosity, and the discipline to keep refining your own standard even when no one else can see what is wrong.
And it is, by construction, the one thing that gets harder to fake as the tools get better.
Because when anyone can generate anything, the only differentiator left is knowing what to generate, what to keep, what to throw away, and what to go back and redo.
The Stitch Situation Is Not About Design
Go back to Google Stitch for a moment. The tool generates a cinema booking app in seconds. Clean layout. Reasonable component hierarchy. Decent color palette. Figma-ready.
Now ask a different question. Is it good?
Does it create the right emotional experience for someone buying a ticket at 11pm on a Saturday after a long week? Does it handle the anxiety of choosing a seat blind? Does it earn trust fast enough that users will enter a credit card without a second thought? Does it feel like the cinema brand, or does it feel like every other booking interface?
Stitch cannot answer any of those questions. It can generate options. But the judgment about which option serves the user, the brand, and the business simultaneously — that requires a human with taste. One who has used enough products to have a strong intuition, failed enough to know what not to do, and cares enough to keep pushing past the point where the output looks acceptable.
The tool did not replace design thinking. It just removed the barrier to entry for people who were doing production work without design thinking. Those are very different things.
This Is Not a New Problem. But It Is Now Urgent.
Every generation of tools has created this same bifurcation. The printing press made it possible for anyone to publish. The internet made it possible for anyone to broadcast. Social media made it possible for anyone to build an audience. And each time, we worried that the old gatekeepers — editors, publishers, networks — would become irrelevant.
They did become irrelevant. But the function they performed did not. The function just moved. The question of what is worth reading, worth watching, worth believing — that question did not disappear when the gatekeepers left. It got harder. And the people who could answer it well became more valuable, not less.
AI is the same pattern, compressed into five years instead of fifty.
The generation layer is being commoditized. The judgment layer is being premium-ized. The people who understood this early enough to actually develop the latter rather than just defending the former are going to be in a very different position from the ones who spent the last three years trying to prove that AI cannot do their job.
Where This Shows Up in Practice
Take education. Universities across India are now building AI-integrated programs, AI labs, AI electives. The structural assumption is that if students know how to use the tools, they will be prepared for the workforce. That is true in the same way that teaching someone to use a calculator prepares them for mathematics. It is necessary but deeply insufficient.
What actually determines a student’s ceiling is whether they have developed the judgment to use the tool’s output critically. Can they tell when the AI is confidently wrong? Can they identify what is missing from a generated analysis? Can they push back on a well-formatted answer because something about the framing is off, even if they cannot immediately articulate why?
That capacity — critical judgment applied to AI output — is what separates the students who will use AI to do better work from the ones who will use AI to produce more mediocre work faster. And right now, almost no curriculum teaches it directly. We teach the tools. We do not teach the standard.
The same problem exists in hiring. Resumes generated by AI. Cover letters generated by AI. Portfolio pieces generated by AI. Interviewers who cannot tell the difference because they are using AI to screen the candidates. The entire signal layer has been flattened, and everyone is acting like the problem is the tools rather than the fact that we never built good judgment into the evaluation process in the first place.
The Inconvenient Truth About Developing Taste
Here is what makes this genuinely difficult. You cannot prompt your way to taste. You cannot take a course in it. You cannot buy it, and you cannot shortcut it by consuming more content about it.
Taste develops through a specific process: exposure to a high volume of examples across the full quality spectrum, genuine effort to create and fail, honest feedback from people whose judgment you respect, and the discipline to sit with discomfort long enough that your own standard sharpens.
That process takes years. It requires actual investment of attention, not passive consumption. And it only works if you are genuinely trying to understand why something is good, not just recognizing that it is.
The uncomfortable implication is that the people who spent the last decade building deep expertise in a domain — reading widely, making things, refining standards, developing opinions they could defend — those people are not made redundant by AI. They are, quietly and counterintuitively, the most valuable people in the room. Because they have the one thing the room cannot generate on demand.
The people who spent the last decade on execution skills that are now automated — those are the ones with a problem. Not because they are less intelligent. But because the skill they built is now a commodity.
Taste in the Age of AI and Vibe Coding: The Question Worth Asking Yourself
If someone handed you the best AI tools available today and gave you a week to produce something in your domain — a product, a document, a curriculum, a campaign, a business plan — would the output be recognizably better than what any reasonably intelligent person could generate with the same tools?
If yes, your taste is your moat. Protect it, develop it, and stop apologizing for it.
If no, the honest question is not what tool should I learn next. It is: what standard am I actually trying to meet, and have I spent enough time around work that is genuinely excellent to know what excellent looks like?
Because Google Stitch can build you a UI. It cannot tell you that the UI is for the wrong product. It cannot tell you that the product solves a problem nobody actually has. It cannot tell you that the whole thing is beautifully executed and entirely beside the point.
Only you can tell you that. And only if you have done enough work to have a view worth trusting.
That is what taste is. And in a world where everything can be made, it is the only thing that still cannot be copied.
Read next in this series:
The Quiet Erosion: How AI Is Rewriting Human Agency Without Asking Permission
The Outsourced Self: When Meaning Becomes a Machine’s Job
Ameya Agrawal is an IIM Kozhikode Gold Medalist and Strategy Manager at the Executive Director’s Office, MIT World Peace University, working at the intersection of strategy and execution. He is a core member of the central team launching WPU GŌA, India’s first transdisciplinary residential university campus. Previously CEO of Mahatma Gandhi Seva Sangh (MGSS), his work in disability rehabilitation earned two Presidential National Awards from the Government of India, impacting over 100,000 lives across Maharashtra.
Author of the bestselling self-help book “A Leap Within” (published at age 21, earning him a National Record), Ameya has been published in Forbes, Business Standard, and The Print. He founded the SkillSlate Foundation, which trained 25,000+ individuals across 100+ organizations during the pandemic. Admitted to Harvard University in 2021, he chose to stay in India to continue his social impact work.
Technical tools and projects available on GitHub | Connect on LinkedIn | Follow on Twitter | Read more at blog.ameya.page