
Why Datsugi (often) builds AI systems on Cloudflare Workers

Arrigo Lupori

Managing Director & Tech Lead · January 24, 2026

Edge computing isn't just for caching anymore. Here's why we've found Cloudflare Workers to be an excellent foundation for building AI-powered applications.

When people think about building AI applications, they usually imagine powerful GPU clusters running inference on massive language models. And for good reason—that’s where the magic happens.

But there’s another piece of the puzzle that often gets overlooked: the infrastructure that connects users to those AI capabilities. At Datsugi, we’ve found Cloudflare Workers to be an excellent foundation for this layer, and I want to explain why.

The challenge with AI application architecture

Modern AI applications face a unique architectural challenge. They need to:

  1. Handle unpredictable traffic patterns (AI hype cycles are real)
  2. Minimize latency (users expect instant responses)
  3. Manage costs efficiently (AI inference isn’t cheap)
  4. Integrate with multiple AI providers (no single provider does everything well)
  5. Handle complex orchestration (chains, agents, tool use)

Traditional server architectures struggle with this combination of requirements. You either over-provision for peak traffic (expensive) or risk poor performance during spikes.

Why Workers work for AI

Global distribution by default

Cloudflare Workers run in more than 300 locations worldwide. Your code executes close to your users, which matters even when the AI inference happens elsewhere. Why? Because:

  • Initial request parsing and validation happens at the edge
  • Streaming responses can begin immediately
  • Caching decisions are made close to the user
  • Error handling and fallbacks execute faster

The difference between 20ms and 200ms of overhead might seem small, but it compounds across every interaction in an AI application: an agent loop with ten round trips accumulates two full seconds of pure overhead at 200ms each, versus 200ms at the edge.

Streaming-native architecture

AI applications live and die by their streaming capabilities. Nobody wants to wait 30 seconds for a complete response when they could watch tokens appear in real time.

Workers support streaming responses natively through the Streams API. Combined with Cloudflare’s AI Gateway, you get:

export default {
  async fetch(request, env) {
    const response = await env.AI.run("@cf/meta/llama-2-7b-chat-int8", {
      messages: [{ role: "user", content: "Hello!" }],
      stream: true,
    })

    return new Response(response, {
      headers: { "content-type": "text/event-stream" },
    })
  },
}

That’s a production-ready streaming AI response in under 15 lines of code.
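On the client side, that `text/event-stream` body arrives as `data:` lines. Here's a sketch of the parsing step — the `parseSSE` helper is illustrative (not a Workers API), and the `{ "response": "..." }` payload shape is an assumption based on Workers AI's SSE format, so check your provider's actual event shape:

```javascript
// Extract the data payloads from a chunk of a text/event-stream body.
// Each event is a "data: ..." line; many providers terminate the
// stream with a "data: [DONE]" sentinel (conventions vary).
function parseSSE(chunk) {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => line.slice("data: ".length))
    .filter((payload) => payload !== "[DONE]");
}

// Example: accumulating tokens from the streaming Response above.
async function readStream(response) {
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let text = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    for (const payload of parseSSE(decoder.decode(value, { stream: true }))) {
      text += JSON.parse(payload).response ?? "";
    }
  }
  return text;
}
```

In a real UI you'd append each token to the DOM as it arrives rather than accumulating a string.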

Cost efficiency at scale

Workers pricing is based on actual compute time, not reserved capacity. For AI applications with bursty traffic patterns, this is significant. You pay for what you use, not for servers sitting idle between requests.

But the bigger cost savings come from intelligent routing. Not every request needs GPT-4. Many can be handled by smaller, faster, cheaper models. Workers make it trivial to implement this logic at the edge:

const model = request.headers.get("x-priority") === "high" ? "gpt-4-turbo" : "gpt-3.5-turbo"
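A slightly fuller version of that routing decision might factor in prompt size as well as priority. The model names and the character threshold below are placeholders, not recommendations:

```javascript
// Illustrative edge routing: expensive model only when the request
// warrants it. Tune the threshold and model IDs to your own workload.
function selectModel({ priority, promptLength }) {
  if (priority === "high") return "gpt-4-turbo";
  if (promptLength > 4000) return "gpt-4-turbo"; // long context needs the bigger model
  return "gpt-3.5-turbo"; // cheap default for short, low-priority work
}
```

Because this runs at the edge, the routing decision costs essentially nothing — the request never touches an expensive model unless it has to.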

Built-in AI infrastructure

Cloudflare has invested heavily in AI infrastructure:

  • Workers AI: Run inference on Cloudflare’s network directly
  • AI Gateway: Unified API for multiple AI providers with caching, rate limiting, and observability
  • Vectorize: Vector database for RAG applications
  • D1: Serverless SQL database for application state

These services integrate seamlessly with Workers, reducing the complexity of building AI applications.
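To make the Vectorize piece concrete: under the hood, a vector query is a nearest-neighbour search over embeddings. Here's a minimal in-memory sketch of that retrieval step — in a real Worker the call goes through a binding (e.g. `await env.VECTORIZE.query(vector, { topK: 3 })`, where the binding name comes from your wrangler config), not code like this:

```javascript
// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// In-memory stand-in for the nearest-neighbour query a vector
// database performs: score every document, return the top k.
function topK(queryVec, docs, k) {
  return docs
    .map((d) => ({ id: d.id, score: cosine(queryVec, d.vector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

In a RAG flow, you'd embed the user's question, run this query against the index, and feed the top matches into the prompt as context.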

When Workers aren’t the right choice

To be clear, Workers aren’t the answer for everything AI-related:

  • Model training: You need GPUs for this, obviously
  • Heavy inference: For self-hosted models requiring significant compute, dedicated infrastructure makes more sense
  • Long-running processes: Workers have execution time limits (though they’ve increased significantly)
  • Complex state management: While Durable Objects help, some applications need traditional databases

We use Workers as the orchestration and integration layer, not as a replacement for dedicated AI infrastructure.

Our typical architecture

A common pattern we use at Datsugi:

  1. Workers handle incoming requests, authentication, and routing
  2. AI Gateway manages connections to external AI providers
  3. Vectorize or Pinecone stores embeddings for RAG
  4. D1 or Turso manages application state
  5. R2 stores larger artifacts (documents, images, etc.)
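Step 1 of that chain can be sketched as a plain routing function wired into a Worker's fetch handler. The paths and the bearer-token check are placeholders — real authentication would verify the token, not just its presence:

```javascript
// Sketch of step 1: validate and route at the edge before any AI
// provider is contacted. Each path would hand off to a later layer
// (AI Gateway, Vectorize, etc.); here they just echo the route.
async function route(request) {
  const auth = request.headers.get("authorization");
  if (!auth || !auth.startsWith("Bearer ")) {
    return new Response("Unauthorized", { status: 401 });
  }
  const { pathname } = new URL(request.url);
  switch (pathname) {
    case "/chat":   // → AI Gateway (step 2)
    case "/search": // → Vectorize (step 3)
      return new Response(JSON.stringify({ handler: pathname }), {
        headers: { "content-type": "application/json" },
      });
    default:
      return new Response("Not found", { status: 404 });
  }
}
```

Rejecting bad requests here means they never incur an AI inference call at all.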

This gives us global distribution, excellent performance, and reasonable costs—without managing servers.

The bottom line

Cloudflare Workers won’t run your AI models (unless you use Workers AI for smaller models), but they’re excellent for everything around those models. Request handling, response streaming, caching, rate limiting, authentication, and orchestration all benefit from edge execution.

If you’re building AI applications and haven’t explored Workers, it’s worth your time. The developer experience is excellent, the pricing is fair, and the performance characteristics are well-suited to AI workloads.