The Solution

Most developers approach AI applications with a mental model inherited from ChatGPT: AI is something you talk to. You send messages, receive responses, and the natural interface is a chat window. This model works for chatbots, but it constrains your thinking when you’re building anything more complex. Idyllic introduces a different mental model. Instead of treating AI applications as chat interfaces that exchange text, you treat them as stateful systems that stream structured outputs to connected clients. Your AI system is a TypeScript class. Agents are properties within that class. Coordination is plain code—if/else, loops, async/await.

Systems, Not Chat

When you think of your AI backend as a stateful system rather than a chat endpoint, you work with different primitives. Instead of message arrays, you define typed state fields. Instead of sending and receiving text, you call methods that execute logic and update state. Instead of request-response cycles, you have reactive updates that flow to all connected clients automatically. Here’s a research assistant built as a stateful system:
import { AgenticSystem, field, action, stream } from 'idyllic';

export default class Research extends AgenticSystem {
  // Typed state: assignments to these fields persist and sync to clients.
  @field sources: Source[] = [];
  @field analysis = stream<string>('');
  @field status: 'idle' | 'searching' | 'analyzing' | 'complete' = 'idle';

  @action()
  async analyze(topic: string) {
    this.status = 'searching';
    this.sources = await this.searchWeb(topic);

    this.status = 'analyzing';
    // Each appended chunk streams to connected clients as it arrives.
    for await (const chunk of ai.stream(`Analyze: ${topic}`)) {
      this.analysis.append(chunk);
    }
    this.analysis.complete();

    this.status = 'complete';
  }
}
The backend is a TypeScript class with typed state and methods. Each state field has an explicit type: sources is an array of Source objects, status is a union of string literals representing workflow stages, and analysis is a streaming string that accumulates content over time. The stream<string> wrapper indicates that this field receives incremental updates rather than being set all at once. Methods decorated with @action() become callable from connected clients. The corresponding frontend subscribes to this state and renders it:
import { useSystem } from '@idyllic/react';
import type { Research } from '../systems/Research';

function ResearchView() {
  const { sources, analysis, status, analyze } = useSystem<Research>();

  return (
    <div>
      <button onClick={() => analyze('quantum error correction')}>
        Analyze
      </button>
      <StatusBadge status={status} />
      <SourceList sources={sources} />
      <div>{analysis.current}</div>
    </div>
  );
}
The useSystem hook provides typed access to both state and actions. It knows the shape of your state and the signatures of your methods because it uses the Research class as a type parameter. There’s no manual WebSocket code, no state synchronization logic, no message array to manage. When the backend updates this.status, the component re-renders with the new value. When this.analysis.append() is called, the streaming text appears in the UI immediately.

Agents as Properties, Coordination as Code

When building multi-agent systems, a common architectural question is how agents communicate. In actor-model frameworks, agents are separate processes that send messages to each other. In graph-based frameworks, agents are nodes connected by edges that define data flow. Each approach introduces its own vocabulary and constraints. Idyllic takes a simpler approach: agents are properties in your class. The class is the actor—a single Durable Object that maintains state and runs methods. Agents inside that class are just objects. They share memory, can access each other directly, and coordinate through regular method calls.
import { AgenticSystem, field, action } from 'idyllic';

export default class VirtualOffice extends AgenticSystem {
  @field alice = new Employee('Alice', 'researcher');
  @field bob = new Employee('Bob', 'writer');
  @field tasks: Task[] = [];

  @action()
  async createArticle(topic: string) {
    // Coordination is plain TypeScript
    const research = await this.alice.investigate(topic);
    const draft = await this.bob.write(research);

    // Conditional logic is just if/else
    if (draft.needsRevision) {
      const feedback = await this.alice.review(draft);
      return await this.bob.revise(draft, feedback);
    }

    return draft;
  }
}
This is not message passing. When you call this.alice.investigate(topic), you are calling a method on an object. The method might use an LLM internally, but from your code’s perspective it’s a function call that returns a value. When you access this.bob, you’re reading a property, not sending a message to another process. The coordination logic—calling Alice first, passing her output to Bob, checking if revision is needed, routing back for feedback—is expressed in the language you already know: TypeScript. There is no graph configuration, no edge definition, no state schema to wire up. Sequential operations are sequential lines of code. Conditional operations are if statements. Loops are for loops.
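To make the point concrete, here is a self-contained sketch of the same coordination style with plain objects and no framework imports. `Employee`, `Draft`, and the canned return values are stand-ins for LLM-backed agents, not Idyllic APIs; the shape of the loop is what matters.

```typescript
interface Draft { text: string; needsRevision: boolean }

// Stand-in agents: a real Employee would call an LLM inside each method.
class Employee {
  constructor(public name: string, public role: string) {}

  async investigate(topic: string): Promise<string> {
    return `notes on ${topic}`;
  }

  async write(research: string): Promise<Draft> {
    return { text: `draft from ${research}`, needsRevision: true };
  }

  async review(draft: Draft): Promise<string> {
    return `feedback on "${draft.text}"`;
  }

  async revise(draft: Draft, feedback: string): Promise<Draft> {
    return { text: `${draft.text} (revised per ${feedback})`, needsRevision: false };
  }
}

// Iterative coordination is just a loop with an exit condition and a cap.
async function createArticle(topic: string): Promise<Draft> {
  const alice = new Employee('Alice', 'researcher');
  const bob = new Employee('Bob', 'writer');

  const research = await alice.investigate(topic);
  let draft = await bob.write(research);

  for (let round = 0; draft.needsRevision && round < 3; round++) {
    const feedback = await alice.review(draft);
    draft = await bob.revise(draft, feedback);
  }
  return draft;
}
```

The revision cap is an ordinary loop bound, not a framework setting: when coordination is plain code, safeguards like "at most three revision rounds" are one condition in a `for` statement.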

How the Primitives Work

Several concepts in this model differ from traditional approaches. Understanding how they work helps you use them effectively.

State is declared, not managed. You define state as typed properties on your class. When you assign to a state property, that assignment triggers persistence and synchronization automatically. The framework intercepts property assignments and handles the underlying mechanics of saving state and broadcasting changes. There’s no setState call, no emit function, no explicit save operation. You write this.status = 'searching' and the framework does the rest.

Methods are your API. You write async analyze(topic: string) as a method on your class, and clients can call it directly with full type safety. The @action() decorator marks which methods should be exposed to clients. There’s no endpoint definition, no route configuration, no serialization code. The method signature—its name, parameters, and return type—becomes the API contract.

Streaming is a property type. Instead of implementing WebSocket handlers and managing streaming state manually, you declare a streaming field as analysis = stream<string>(''). This tells the framework that analysis will receive incremental updates. To stream content, you call this.analysis.append(chunk). On the client, analysis.current always contains the accumulated content so far. The entire streaming implementation—from buffering on the server to updates on the client—is handled by the framework based on this type declaration.

Persistence is automatic. State survives server restarts. The framework stores state in durable storage after each action completes, so users can close their browser, return days later, and find their data exactly as they left it. You don’t configure a database or write persistence logic.
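The "intercepted assignment" mechanic can be illustrated with a standard JavaScript Proxy. This is a toy sketch of the general technique, not Idyllic's internals: the `reactive` wrapper and its listener are assumptions for illustration.

```typescript
// A Proxy set trap runs on every property assignment, so a plain write
// can trigger persistence and broadcast with no setState or emit call.
type Listener = (key: string, value: unknown) => void;

function reactive<T extends object>(target: T, onChange: Listener): T {
  return new Proxy(target, {
    set(obj, key, value) {
      (obj as any)[key] = value;      // apply the write
      onChange(String(key), value);   // then notify (persist / broadcast)
      return true;
    },
  });
}

// Plain assignment now reaches the listener automatically.
const changes: Array<[string, unknown]> = [];
const state = reactive({ status: 'idle' }, (k, v) => changes.push([k, v]));
state.status = 'searching';
```

In a real framework the listener would write to durable storage and push the change to connected clients, but the calling code is the same either way: an ordinary assignment.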

Comparing the Two Models

Aspect              Chat Model                System Model
Core abstraction    Messages array            Typed state object
Streaming           SSE/WebSocket plumbing    stream<T> property type
State sync          Manual implementation     Automatic on assignment
Backend structure   API endpoints             Class methods
Type safety         Lost at boundary          End-to-end

When to Use This Model

The system model is more general than the chat model. Any application that streams AI-generated content, maintains state across sessions, or coordinates long-running operations benefits from thinking in terms of systems rather than conversations. That said, if you’re building a straightforward chatbot where the chat model genuinely fits, it remains appropriate. The system model doesn’t prohibit chat interfaces; it just doesn’t assume them. A chat application built with Idyllic would have state fields for conversation history and preferences, streaming for responses, and persistence across sessions. The underlying implementation would be a system like any other.
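A chat system built this way might look like the following sketch, which mirrors the primitives shown earlier. The `Message` shape and the `ai` helper are assumptions carried over from the examples above, not a prescribed API.

```typescript
import { AgenticSystem, field, action, stream } from 'idyllic';

interface Message { role: 'user' | 'assistant'; content: string }

export default class Chat extends AgenticSystem {
  // Conversation history persists across sessions like any other field.
  @field messages: Message[] = [];
  @field reply = stream<string>('');

  @action()
  async send(content: string) {
    this.messages = [...this.messages, { role: 'user', content }];

    // Stream the assistant's reply token by token.
    for await (const chunk of ai.stream(content)) {
      this.reply.append(chunk);
    }
    this.reply.complete();
  }
}
```

Nothing here is chat-specific at the framework level: history is a typed array field, the response is a streaming field, and sending a message is an action.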

FAQ

Is this just backend classes with WebSocket sync?

The concept is related, but Idyllic addresses patterns specific to AI applications that generic frameworks don’t handle well. AI applications have characteristics like streaming state that updates incrementally, long-running operations that may take minutes, human-in-the-loop workflows where execution pauses for approval, and persistent context that accumulates over sessions. These patterns require specific primitives. A generic WebSocket framework gives you transport, but you still build the streaming abstractions and persistence logic yourself.
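To see what "build the streaming abstractions yourself" means, here is a self-contained sketch of the client-side bookkeeping a generic transport leaves to you: folding incremental events into accumulated state. The event names are assumptions for illustration, not a real wire protocol.

```typescript
// Hand-rolled equivalent of what a stream<T> field abstracts away:
// a reducer that applies append/complete deltas to streamed state.
type StreamEvent =
  | { type: 'append'; chunk: string }
  | { type: 'complete' };

interface StreamState { current: string; done: boolean }

function applyEvent(state: StreamState, event: StreamEvent): StreamState {
  switch (event.type) {
    case 'append':
      return { ...state, current: state.current + event.chunk };
    case 'complete':
      return { ...state, done: true };
  }
}

// Events as they might arrive over a WebSocket, applied in order.
let state: StreamState = { current: '', done: false };
const events: StreamEvent[] = [
  { type: 'append', chunk: 'Hello, ' },
  { type: 'append', chunk: 'world' },
  { type: 'complete' },
];
for (const ev of events) {
  state = applyEvent(state, ev);
}
```

This reducer is the easy part; a real hand-rolled version also needs reconnection, ordering guarantees, and server-side buffering, which is the plumbing the framework claims to absorb.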

What if I want to build a chat interface?

Chat is a valid pattern, and Idyllic supports it well. You’d have state fields for conversation history and user preferences, streaming for responses, and persistence that survives sessions. You’re not abandoning chat—you’re building a chat interface using primitives that happen to work for other interfaces too. The system model is more general, and chat becomes one implementation pattern within it.

How does this relate to LangChain?

LangChain and LangGraph focus on orchestrating LLM calls: composing prompts, managing context windows, and coordinating agent behavior. They’re libraries for the AI logic itself. Idyllic is infrastructure for the entire application—it handles streaming from server to client, state persistence across sessions, deployment to edge infrastructure, and frontend synchronization. The two are complementary. You can use LangChain inside an Idyllic system to handle LLM orchestration while Idyllic handles everything around it.