The Problem
Building AI applications outside the chat paradigm means fighting your tools. If you’re creating a document editor with inline AI suggestions, a research tool that accumulates sources in real time, or a code analyzer that displays results across multiple files, you’ve discovered that most AI development tools assume you’re building a chatbot. This creates friction at every layer of your application.
The Chat Assumption
The Vercel AI SDK is well-designed and production-ready, but it’s built around a chat interaction model. That design choice permeates everything the SDK provides: the hooks, the components, the state management, the type definitions. When you try to use it for something other than chat, you discover how deeply embedded that assumption is. Consider building a document editor that streams AI suggestions inline. Your application has a document, a string of text that the user edits. The SDK, however, expects messages: an array of user and assistant turns representing a conversation. To bridge this gap, you end up creating synthetic messages, wrapping the document and the editing instruction inside a fabricated user turn so the SDK will accept them.
Streaming Infrastructure
Streaming is fundamental to good AI user experiences. Users expect to see results appearing in real time rather than waiting for a complete response. Implementing streaming outside the chat paradigm requires substantial infrastructure work. Suppose you want to stream tokens into a result field on your state object. Here’s what you write on the server:
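A minimal sketch of that server-side work, assuming a hand-rolled Server-Sent Events format and an async iterator of model tokens (the event shape, the `[DONE]` sentinel, and all names here are illustrative assumptions, not any framework’s API):

```typescript
// Sketch: hand-rolled SSE framing for streaming tokens into a single
// `result` field. Event shape and names are illustrative assumptions.
type TokenEvent = { field: string; token: string };

// Each SSE message is a "data:" line followed by a blank line.
function encodeSSE(event: TokenEvent): string {
  return `data: ${JSON.stringify(event)}\n\n`;
}

// Server handler body: turn a model's token stream into SSE frames.
async function* streamResult(
  tokens: AsyncIterable<string>
): AsyncGenerator<string> {
  for await (const token of tokens) {
    yield encodeSSE({ field: "result", token });
  }
  // Signal completion so the client knows to stop reading.
  yield "data: [DONE]\n\n";
}
```

And this is only half of the job: the client needs the mirror image, an SSE parser that splits frames on blank lines, JSON-decodes each one, and writes tokens into component state.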
That covers a single field. If you need to stream preview, explanation, and confidence simultaneously, you either stream sequentially (tripling the user’s wait time), duplicate your infrastructure for each field, or build your own multiplexing protocol. At that point, you’re maintaining custom streaming infrastructure for the lifetime of your application.
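The client half of such a hand-rolled multiplexing protocol ends up as an accumulation reducer like the following sketch, where the field-tagged event shape is an assumption of this example:

```typescript
// Sketch: demultiplex field-tagged token events into a state object.
// The { field, token } event shape is an illustrative assumption.
type TokenEvent = {
  field: "preview" | "explanation" | "confidence";
  token: string;
};

function accumulate(
  state: Record<string, string>,
  event: TokenEvent
): Record<string, string> {
  // Append each token to whichever field it belongs to.
  return { ...state, [event.field]: (state[event.field] ?? "") + event.token };
}
```

Every project that outgrows single-field streaming reinvents some version of this reducer, plus the error handling and ordering guarantees around it.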
Type Safety at the Boundary
TypeScript provides excellent type safety within your codebase, but that safety evaporates at the client-server boundary. When you call server methods from React, you’re often back to string-based APIs. The response comes back typed as any, requiring an unsafe type assertion that the compiler can’t verify.
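A sketch of what that boundary typically looks like; the endpoint path and result shape are illustrative assumptions, not a specific framework’s API:

```typescript
// Sketch of the untyped client-server boundary.
// The route string and AnalysisResult shape are assumptions.
interface AnalysisResult {
  issues: string[];
  score: number;
}

async function analyzeDocument(text: string): Promise<AnalysisResult> {
  // The route is a bare string: rename the server method and the
  // compiler has no idea this call is now broken.
  const res = await fetch("/api/analyzeDocument", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
  // res.json() resolves to `any`; the assertion below is unchecked.
  return (await res.json()) as AnalysisResult;
}
```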
This makes refactoring risky. If you rename analyzeDocument to analyze on the server, you need to search every string literal in your frontend code. If you add a required parameter, you won’t know about missing arguments until users hit those code paths. If you change the return type’s structure, the type assertion silently becomes incorrect.
Manual State Synchronization
Real-time AI applications need to keep client state synchronized with server state. When the server’s processing status changes, or results become available, or an error occurs, the client needs to reflect those changes immediately. Without framework support, you implement this synchronization manually: opening connections, parsing events, applying updates in order, and cleaning up stale subscriptions.
The Cumulative Cost
Each of these problems is solvable in isolation. You can write the SSE parser. You can create synthetic chat messages. You can manually synchronize state. But you solve all of them on every project, and the time adds up. The days spent on streaming infrastructure are days not spent on your product. The bugs debugged in state synchronization don’t make your AI smarter or your user experience better. None of this work differentiates your application from competitors. This is infrastructure that someone should have built once, correctly, so that application developers never have to think about it again.
FAQ
Doesn’t the Vercel AI SDK solve streaming?
The AI SDK handles streaming well for chat interfaces, where tokens appear in a message bubble as the assistant responds. For applications that don’t fit the chat model—streaming into arbitrary state fields, multiple simultaneous streams, or custom UI patterns—you’re back to implementing SSE parsing yourself.
What about LangChain or LangGraph?
LangChain and LangGraph solve orchestration: composing prompts, managing context, coordinating agent behavior. But they don’t address the full-stack concerns that AI applications face. You still need to build streaming infrastructure to send tokens from your pipeline to React components. You still need state synchronization to keep clients consistent. Those problems remain.
Can’t I use tRPC or GraphQL?
They help with type safety at the API boundary, which is valuable. But they don’t address the streaming model that AI applications need. You still build multiplexing for multiple simultaneous streams and accumulation logic for partial updates. tRPC and GraphQL give you typed pipes, but you still need to build what flows through them.
Next: The Solution
How Idyllic addresses these problems