The great AI irony: How machines are teaching us to be human again
There's a delicious irony unfolding in the AI revolution that nobody seems to be talking about. In our rush to optimize for AI agents and LLMs, we're accidentally fixing problems that should have been solved for humans decades ago.
Let me explain.
AI didn't invent customer context (it just made us care about it)
The explosion of AI has brands scrambling to add rich context to their products. Suddenly, everyone's talking about detailed product descriptions, use cases, style guides, and occasion-based categorization. Fashion retailers are meticulously tagging items with 'cocktail party,' 'business casual,' or 'weekend brunch.' Home goods companies are creating elaborate room scenes and lifestyle contexts.
But here's the thing: LLMs didn't invent style preferences or shopping occasions. Humans have always searched this way.
When was the last time you thought, 'I need SKU #48573-B in navy'? Never. You think, 'I need something professional but not stuffy for that client dinner next week.'
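The gap between the two mindsets is easy to see in data. Here's a minimal sketch contrasting a merchant-centric product record with a context-rich one; the field names and values are purely illustrative, not any real retailer's schema:

```python
# Merchant-centric record: enough to manage inventory, useless for
# answering "what should I wear to a client dinner?"
merchant_view = {
    "sku": "48573-B",
    "color": "navy",
    "category": "dresses",
}

# Context-rich record: the same item, described the way people shop.
customer_view = {
    **merchant_view,
    "occasions": ["client dinner", "business casual", "cocktail party"],
    "style_notes": "Professional but not stuffy; pairs well with low heels.",
    "fit": "Knee-length sheath, true to size",
}

def matches_occasion(product: dict, occasion: str) -> bool:
    """An AI agent (or a search index) can only answer occasion-based
    queries if the occasion metadata exists in the first place."""
    return occasion in product.get("occasions", [])
```

The point isn't the specific fields; it's that `matches_occasion` returns nothing useful against the merchant-centric record, no matter how smart the model consuming it is.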
The uncomfortable truth about e-commerce
For years, product pages were designed with merchants in mind, not customers. They answered the question 'How do we organize our inventory?' instead of 'How do humans actually shop?'
The result? Endless grids of products with minimal context, forcing customers to imagine how that dress might look at a wedding or whether that couch would fit their minimalist aesthetic. We built digital catalogs when customers wanted digital personal shoppers.
Now, AI agents need context to understand and recommend products effectively. And suddenly (miraculously!) brands are creating the rich, contextual content that customers have been craving all along.
The same pattern with APIs and developer experience
This irony extends beyond e-commerce. Look at Model Context Protocol (MCP) and the current push for well-documented, context-rich APIs.
For years, developers have suffered through:
- Sparse documentation that assumes you already know everything
- Cryptic error messages like 'Error 401: Unauthorized' with no hint about what's actually wrong
- APIs that make perfect sense to the backend team but are Byzantine to anyone else
Now we need AI agents to consume these APIs, and suddenly everyone's adding:
- Detailed parameter descriptions
- Clear examples and use cases
- Meaningful error messages that explain what went wrong and how to fix it
- Contextual information about when and why to use each endpoint
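The error-message point is worth making concrete. Here's a sketch contrasting a bare error payload with a context-rich one; the field names are illustrative, not any specific framework's schema, and the docs URL is a placeholder:

```python
# The classic: technically accurate, practically useless.
cryptic_error = {"status": 401, "message": "Unauthorized"}

# The same failure, described with enough context to act on it.
helpful_error = {
    "status": 401,
    "message": "Unauthorized: the request is missing an Authorization header.",
    "hint": (
        "Send your API key as 'Authorization: Bearer <key>'. "
        "Keys can be created in the dashboard under Settings > API."
    ),
    "docs": "https://example.com/docs/auth",  # placeholder URL
}

def is_actionable(error: dict) -> bool:
    """An agent (or a human at 2 AM) can recover from a failure only
    if the payload says what went wrong and how to fix it."""
    return "hint" in error or "docs" in error
```

Nothing here requires AI. It's just that an agent, unlike a patient human, can't open a browser tab and go spelunking through Stack Overflow, so the missing context finally became a blocker someone had to fix.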
MCP is essentially an abstraction layer for what should have been well-designed APIs in the first place. We're adding context for agents that human developers have been begging for since the dawn of REST.
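To see what that context looks like in practice, here's a sketch of the kind of tool metadata an MCP server publishes. The field names follow the shape of MCP's tool listing (name, description, input schema as JSON Schema); the endpoint itself and its parameters are hypothetical:

```python
# A hypothetical product-search tool, described the way MCP asks for:
# what it does, when to use it, and what each parameter means.
search_tool = {
    "name": "search_products",
    "description": (
        "Search the catalog by occasion, style, or free text. Use this "
        "when the user describes a need ('something for a client dinner') "
        "rather than a specific item."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "Free-text need, e.g. 'professional but not stuffy'",
            },
            "occasion": {
                "type": "string",
                "description": "Optional occasion filter, e.g. 'cocktail party'",
            },
        },
        "required": ["query"],
    },
}
```

Notice that every word of this is exactly what a human developer would want in the API docs. The description explains when to call the endpoint, and each parameter says what it means and gives an example. We're just finally writing it down because an agent can't infer it.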
The beautiful paradox
Here's what's both hilarious and hopeful: By optimizing for AI, we're accidentally optimizing for humans.
When you write product descriptions that help an AI understand the vibe, occasion, and styling options, you're also helping the busy parent shopping on their phone at 11 PM. When you document your API thoroughly enough for an AI agent to use it, you're also helping the junior developer who just joined your team.
We could have done this a decade ago. The technology was there. The need was there. But it took machines needing context for us to finally provide the context humans always wanted.
What this means for business
This isn't just an amusing observation. It's a fundamental shift in how we need to think about information architecture and customer experience:
- Context is now table stakes. If your products, services, or APIs lack rich contextual information, you're invisible to AI agents and, increasingly, to human customers who expect AI-level service.
- The bar for 'good documentation' just went way up. Whether it's product pages or API docs, 'good enough for humans to figure out' is no longer good enough.
- Customer-centricity is no longer optional. AI agents are the ultimate customer advocates: they care only about finding the best match for the user's actual needs, not about your internal categorization system.
The silver lining
Perhaps this is exactly what we needed. Maybe it took building machines that think like customers for us to finally start treating customers like humans.
As we race to feed context to our AI overlords, we're inadvertently creating the rich, helpful, human-centered experiences we should have built all along. The machines are teaching us to be more human.
Funny how that works.