
Right now, a lot of AI products are still demos pretending to be businesses.
They can write, summarize, classify, and generate. They look impressive in a product demo. They make for a great launch thread. But once you try to use them inside a real business workflow, they often fall apart.
That’s not because the models are bad.
Under the hood, too many AI products are built around what the model can do, not around how people actually work.
That distinction matters more than most founders think.
I was thinking about this after a recent conversation I had on The Unordinary’s Podcast with Yaakov Zar, founder and CEO of Lev. One of the clearest takeaways from that conversation was that the real opportunity in AI is not the AI itself. It’s designing an intentional experience layer on top of it that actually fits how work gets done.
The next wave of winning AI companies won’t be defined by model access. They’ll be defined by workflow design, context, trust, and judgment.
Let's break this down.
A lot of founders are still building AI products like this: pick a capable model, wrap an interface around it, and ship whatever it can generate.
Sometimes that works, briefly.
The real work usually looks more like this: gather messy inputs from scattered systems, set up the context the model needs, validate what it produces, clean it up, and carry the result into whatever happens next.
This is where most AI products still break.
The model may be able to generate something decent. But if the user still has to do all the setup, validation, cleanup, and follow-through themselves, you haven’t really solved the problem. You’ve just moved it around.
That’s why so many AI tools feel exciting once and then quietly disappear from someone’s stack.
They’re built to demonstrate intelligence, not to remove drag.
The best AI products reduce friction inside of a repeated workflow, making the entire task easier before, during, and after the model does its thing.
This is especially obvious in industries with ugly workflows. Commercial real estate is a good example.
It’s a massive category with high-value transactions, but much of the work still runs through spreadsheets, disconnected systems, PDFs, email chains, and institutional memory sitting inside people’s heads. That makes it a perfect environment for AI, but only if the product is built around the actual workflow rather than the headline use case.
The same pattern shows up in legal, insurance, healthcare administration, logistics, accounting, and a dozen other categories.
So many founders fall into the trap of thinking the job is to “apply AI to industry X” instead of understanding where work gets stuck, then designing software that removes that friction in a way users can actually trust.
That’s a much harder problem. It’s also where the real value gets created.
For a tool to stick, the average user needs confidence that it is genuinely intuitive and actually capable of solving their core problem.
In business software, users are constantly trying to complete a task that has consequences.
If the output is wrong, vague, or hard to verify, the user does not care that it was generated quickly. They're now only focused on the fact that they have one more thing to check.
And if they have to double-check everything, you haven’t removed work. You’ve added a layer of uncertainty on top of it.
The strongest AI products do a few things really well:

- They reduce friction across a repeated workflow, not just one step of it.

- They handle messy, real-world inputs instead of assuming clean ones.

- They produce output that is easy to verify and trust.

- They leave the user feeling in control of the outcome.
That last point matters a lot.
If the software makes someone feel more capable, more informed, and more in control, they’ll keep using it. If it makes them feel like they’re supervising an unreliable intern, they won’t.
As models get better and cheaper, more of the technical layer will become accessible to everyone.
That’s already happening, which means the real advantage is building an experience layer on top of the core AI functionality that is customized to serve a very specific user dealing with very specific problems.
You need operational taste if you want to accomplish this.
Operational taste is knowing what part of the workflow is actually painful, what should be automated, what should stay human, what needs structure before AI touches it, and what kind of output a user will actually trust and use.
This is what separates a useful AI company from an AI wrapper.
The moat will come from knowing how work actually happens inside a category, then designing around that reality better than everyone else.
That is much less flashy than shipping a chatbot.
It’s also much more durable.
Before shipping any AI feature, founders should ask a more demanding set of questions than “can the model do this?”
A better filter looks like this:

What repeated workflow gets meaningfully better if this works? Not what task. Not what demo. What repeated workflow.

What happens when the inputs are ugly? Most AI products fail because they assume clean inputs in a world full of messy information.

Where does the system act on its own, and where does the user step in? If you don’t define that clearly, users won’t know when to trust the system and when to take over.

Where does the output go next? If the answer is “the user copies it into something else,” you probably haven’t gone far enough.

Will anyone still be using this in a month? That’s the question that kills a lot of AI features. A feature can be impressive and still not deserve a place in someone’s workflow. Repeated use is a much better test than first reaction.
Most AI products won’t fail because the models weren’t good enough.
They’ll fail because they were never designed to survive contact with real work.
That’s the real shift happening right now.
The winners won’t just be the companies with access to the best models.
They’ll be the ones that understand how people operate, where workflows break, and how to turn intelligence into something usable.
That’s what software has always been at its best.
AI just raises the stakes.
If you’re building in AI, the question is not just what the model can do.
It’s whether your product actually fits the way people work.
And if it does, it won’t just feel smart.
It’ll feel like leverage.
If you want the full conversation that sparked this post, listen to the full episode with Yaakov Zar here.