
Cyces.

At a Cyces webinar this month, Zoho Tables' Sai Kishore made a case that the AI products flooding the market right now are solving the wrong problem with the wrong tools - and that the speed at which they're being shipped is making the situation worse, not better. From where we sit - building AI products for clients every week - the diagnosis matches what we've been seeing on the ground.

A boom with a uniform shape

The AI product boom has a uniform shape to it. A founder picks a workflow, wraps it around a frontier model - Claude, GPT, Gemini - adds a prompt, ships a UI, and calls it a product. Y Combinator batches are full of them. So are LinkedIn launch posts. The pitch is roughly the same each time: the model is powerful, we're just packaging it for a specific audience.

Sai Kishore, Head of Product at Zoho Tables, has been watching this play out from inside one of the largest software companies in India, and he is not convinced. Speaking at a Cyces webinar on AI product failure earlier this month, he argued that most of these wrappers are headed for a wall - not because the underlying models are bad, but because the founders building on top of them have misunderstood what businesses actually need from AI, and have shipped before they ever bothered to find out.

His central question was blunt. If your product is a model-picker with a system prompt, why would anyone use you instead of going directly to ChatGPT or Claude?

The question founders avoid

Before the wrapper question, Kishore argued, there is a more basic one most founders skip. They run a survey, get back polite, encouraging answers, and call it validation. But survey responses, he said, are not evidence. People say yes to be nice. Showing someone a working product and asking "what do you think?" is not discovery either. "It's basically pitching."

The questions that actually tell you something are simpler. How do you live with this problem today? Will this solution save you time, money, or resources? And then the one founders dance around: would you be willing to pay for this today?

"People will tell you they love your idea," Kishore said. "But very few will be open to opening their wallet for you."

The contrast he kept returning to was builders who live inside the problem. He pointed to a robot built in Kerala - a state that produces close to half of India's coconuts - to climb coconut trees and identify which ones to harvest. The builder did not need surveys, Kishore said. He lived the frustration daily. Skilled climbers were getting harder to find every season; safety incidents were a constant worry; quality was hard to verify after the fact. The product fell out of the problem because the builder was the user.

That kind of closeness is almost impossible to fake, and the teams that try to shortcut it with a Google form almost always end up building something slightly off.

A WhatsApp inbox, and the model nobody pitched

Once a real problem is in hand, the next mistake is reaching for the biggest model available. Kishore described a small business he had recently advised. The company was running 300 to 400 orders a day entirely through WhatsApp - a mix of order confirmations, inquiries, photographs, and casual messages, all landing in the same inbox. They wanted a way to log only the actual orders into their database, automatically.

The default move, he said, would have been to plug in a frontier API. Instead, he steered them toward a small, distilled classification model with one job: decide whether an incoming message was an order, an inquiry, or a greeting. The loading time was negligible. The response time was negligible. And the cost was a fraction of what routing every WhatsApp message through a frontier API would have been.
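The shape of that triage step is simple enough to sketch. The snippet below is a toy stand-in, not the distilled model Kishore describes: a keyword scorer plays the classifier's role so the one-job interface is visible. All names and word lists here are illustrative assumptions.

```python
# Toy sketch of the triage step: one narrow job - classify an incoming
# WhatsApp message as an order, an inquiry, or a greeting. In production
# this would be a small distilled model; a keyword scorer stands in here.

ORDER_WORDS = {"order", "buy", "purchase", "confirm", "qty", "pieces"}
INQUIRY_WORDS = {"price", "cost", "available", "stock", "when", "how"}
GREETING_WORDS = {"hi", "hello", "hey", "thanks", "good", "morning"}

def classify_message(text: str) -> str:
    """Return 'order', 'inquiry', or 'greeting' for one message."""
    tokens = set(text.lower().split())
    scores = {
        "order": len(tokens & ORDER_WORDS),
        "inquiry": len(tokens & INQUIRY_WORDS),
        "greeting": len(tokens & GREETING_WORDS),
    }
    best = max(scores, key=scores.get)
    # Messages with no keyword hits fall back to 'inquiry' - the safest
    # bucket, since inquiries are the ones a human will look at anyway.
    return best if scores[best] > 0 else "inquiry"

# Only messages classified as orders get logged to the database.
inbox = [
    "Hi, good morning!",
    "What is the price of the 2kg pack?",
    "Please confirm my order of 10 pieces",
]
orders = [m for m in inbox if classify_message(m) == "order"]
```

The point of the sketch is the contract, not the scoring: a function that maps one message to one of three labels, fast and cheap, with everything else in the pipeline built around that single decision.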

For a business doing roughly 4,000 to 5,000 contacts a month with several messages each, he noted, that fraction is the difference between a product that is viable and one that quietly bleeds margin every day.
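A back-of-envelope version of that arithmetic makes the margin point concrete. The per-message prices below are assumed, illustrative numbers - not real API rates - chosen only to show the shape of the comparison.

```python
# Back-of-envelope cost comparison for the WhatsApp-triage volume above.
# Prices are ASSUMED illustrative figures, not real API rates.

contacts_per_month = 4_500      # midpoint of the 4,000-5,000 range
messages_per_contact = 5        # "several messages each"
messages = contacts_per_month * messages_per_contact  # 22,500 messages

frontier_cost_per_msg = 0.01    # hypothetical frontier-API rate, USD
small_cost_per_msg = 0.0002     # hypothetical distilled-model rate, USD

frontier_monthly = messages * frontier_cost_per_msg
small_monthly = messages * small_cost_per_msg
```

At these assumed rates the frontier route costs fifty times more per month for the same classification decision - and the gap recurs every month, which is what "quietly bleeds margin" means in practice.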

Broad models, narrow problems

The example points to a deeper mismatch. Frontier models are built to be broad. Businesses need narrow. Frontier models are trained to always produce an answer. Businesses need systems that can say "I don't know" and route the question to a human. Frontier models optimize for general capability. Businesses care about one workflow, done reliably, on their data, at a cost that makes sense.

A wrapper, Kishore argued, inherits all the breadth a business does not need and none of the specificity it does. "Everything outside that narrow corridor is a risk you haven't accounted for," he said.

Hallucination is a design problem

This framing led him to a claim that cuts against most of the industry conversation about AI reliability. Hallucination, in his view, is not really a technical problem. It is a design problem. If a general-purpose model has been handed a narrow business job and given no permission to say "I don't know," it will fill the gap with something plausible. The model is doing what it was built to do. The failure is upstream - in the decision to wrap instead of narrow.
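One way to build that permission in is a confidence gate: the system returns the model's label only when the model is sure, and routes everything else to a human. This is a minimal sketch of that design choice, with a hypothetical stand-in model - the function names and threshold are assumptions, not anyone's production code.

```python
# Minimal sketch of "permission to say I don't know": gate the model's
# answer on its confidence, and escalate low-confidence cases to a human
# instead of letting a plausible guess through.

from typing import Callable, Tuple

def triage(
    message: str,
    model: Callable[[str], Tuple[str, float]],
    threshold: float = 0.8,
) -> str:
    """Return the model's label only when it is confident enough;
    otherwise hand the message to a human reviewer."""
    label, confidence = model(message)
    if confidence < threshold:
        return "escalate_to_human"  # the system's honest "I don't know"
    return label

# Hypothetical stand-in model: confident about orders, unsure otherwise.
def fake_model(message: str) -> Tuple[str, float]:
    if "order" in message.lower():
        return ("order", 0.95)
    return ("inquiry", 0.55)
```

The design decision lives in the `threshold` and the escalation path, not in the model: the wrapper that skips this step is the one that hallucinates, because it never gave the model anywhere to put its uncertainty.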

The larger players, he suggested, have already absorbed this lesson. Distillation - the practice of compressing large models into smaller, faster, cheaper models trained for specific tasks - has become a quiet arms race, with Chinese labs particularly active. Most real business problems, Kishore noted, do not need a model that can write poetry, debug code, and explain quantum mechanics. They need a model that can classify a WhatsApp message in 200 milliseconds.

Wrappers aren't the enemy. Treating them as finished is.

Kishore was careful to clarify that he was not arguing against wrappers as a category. He cited Figma's AI features, Bolt, Lovable, and GitHub Copilot as wrapper-style products that have done meaningful work. The trap, he said, is treating the wrapper as the finished product rather than as the easy first version of something that still needs to get narrower, faster, and cheaper to survive contact with real users.

Why teams keep walking into the trap

Part of the reason this pattern is so widespread, Kishore suggested, is that AI has made shipping itself seductive. Features that used to take months of engineering work now take days. For a small team or a freelancer, hours. That speed is exhilarating, and it is also where vision quietly leaks out of a product.

He listed the warning signs he watches for as a product leader. You can no longer explain the product in a single sentence. Operating costs are climbing but retention isn't. Most users are touching only twenty or thirty percent of what you've built. Each one, on its own, is easy to dismiss. Together, they are the signature of a team that has been shipping faster than it has been thinking.

The instinct, when growth stalls, is to ship more. Kishore's advice was the opposite: stop, and figure out which of the features you've already shipped are actually being used.

What this looks like from our side of the table

At Cyces, we build AI products for clients across industries, and Kishore's framing maps almost exactly onto the pattern we see in early conversations. Founders arrive with a frontier-model wrapper that demoed beautifully and is now buckling under real usage - costs creeping up, outputs drifting, users losing trust. Often the product was shipped before anyone asked the would-you-pay question. Sometimes the team kept shipping past the point where the original vision was still recognizable.

The fix is rarely a better prompt. It's usually a smaller model, narrower scope, a system designed around what the AI is allowed to not know - and a willingness to cut features that were built fast and never earned their keep.

The path that feels fastest - pick a workflow, wrap a frontier model, ship, ship again - is also the path that produces the highest concentration of products that look impressive in a demo and fall apart in production. The harder path is the one with a future. Talk to users until you know whether they will actually pay. Pick a problem narrow enough that a small model can solve it well. Give the system permission to admit what it does not know. Resist the urge to ship a fourth feature when the first three are still half-used.

The result will be less impressive at a pitch meeting and more useful in a customer's hands. Which, in the end, is the only metric that matters.
