Composability Is the Real Moat in AI and Data Systems

A perspective on why composability keeps reappearing across computing, from Unix pipes to blockchain protocols and AI agents, and what it means for the future of reliable data systems.

Every few decades, we rediscover the same architectural truth in a new disguise: systems that win are the ones designed for composability.

From Unix pipes to blockchain protocols to AI agents, the pattern is identical: small, focused components, wired together through simple interfaces, beat monoliths every time.

Unix, the original composability lab

Unix pioneers didn’t just give us an operating system; they gave us a philosophy. Doug McIlroy’s maxims codified a design principle that has outlived the hardware it ran on.

  • Write programs that do one thing and do it well.
  • Write programs to work together.
  • Write programs to handle text streams, because that is a universal interface.

In practice, this meant power came not from individual tools, but from the way they could be chained. Commands like grep, awk, sed, sort, and uniq are each modest in isolation; together, they let you build ad‑hoc data pipelines in a single line. A reasonably skilled engineer could sit down at a terminal and, with nothing but a bash script, automate arbitrarily complex workflows over logs, text, and system state.
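For instance, a one‑liner like `grep ERROR app.log | sort | uniq -c | sort -rn` turns a raw log into a ranked error report. As a minimal sketch of the same composition, assuming a hypothetical `app.log` and the standard Unix tools on PATH, here is that pipeline wired up from Python’s standard library, which makes the plumbing between stages explicit:

```python
import subprocess

# Mirrors the shell pipeline: grep ERROR app.log | sort | uniq -c | sort -rn
grep = subprocess.Popen(["grep", "ERROR", "app.log"], stdout=subprocess.PIPE)
sorted_ = subprocess.Popen(["sort"], stdin=grep.stdout, stdout=subprocess.PIPE)
counted = subprocess.Popen(["uniq", "-c"], stdin=sorted_.stdout, stdout=subprocess.PIPE)
ranked = subprocess.Popen(["sort", "-rn"], stdin=counted.stdout, stdout=subprocess.PIPE)

# Close the parent's copies of intermediate pipes so exits propagate cleanly.
for stage in (grep, sorted_, counted):
    stage.stdout.close()

output, _ = ranked.communicate()
print(output.decode())
```

Every stage speaks the same interface, a text stream, so any stage can be swapped without touching the others.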

But there was a cost. These tools were terse, unforgiving, and hard to learn. They assumed discipline: consistent text formats, predictable exit codes, habits around naming and composition. Yet for those who internalized that discipline, the payoff was asymmetric. You weren’t just learning commands; you were learning a grammar for composition that made the system feel almost unbounded.

DOS & Windows, the great forgetting

Not all operating systems took this path. DOS shipped with utilities, but they were never truly first‑class citizens in a composable ecosystem. The core facilities were weak or missing: standardized text interfaces, robust piping, and a culture of “this program’s output is some future program’s input.”

Instead of an environment where tools were designed to interoperate, DOS gave us a loose collection of built‑ins and external programs that rarely treated composition as a first‑order design goal. The shell existed, but it was treated as a place to launch an executable, nothing more.

Then came Windows 1.0. It gave us “usability” and broader adoption, but threw even primitive composability out the proverbial window. Each application became an island with its own UI, its own internal data structures, and its own idea of “workflow.” 

We traded a world where power users could wire together arbitrary commands in a script for one where the only sanctioned compositions were whatever the application vendor had imagined and implemented. The “pipe” between tools was the clipboard.

In other words, we poured concrete over the very idea that small, generic utilities should be the building blocks of everything else.

AI, the archeologist

AI is now quietly undoing that mistake by rediscovering the distant past.

Modern LLMs are not powerful merely because they can generate fluent text. They are powerful because they can interpret goals, decompose them into steps, and orchestrate tools in a way that feels like a hyper‑augmented Unix shell.

A single human expert might be able to master a few hundred command‑line tools in a given ecosystem. Even then, they will reliably reach for a handful of familiar utilities under time pressure. An LLM, by contrast, can be wired to thousands of tools – APIs, scripts, databases, retrieval systems – and can “remember” how to invoke any of them so long as the interface is well described.

We are watching the Unix philosophy reappear at a new scale:

  • Each tool is scoped and narrow. It does one thing: retrieve data from an index, call a pricing API, execute a SQL query, write to storage, or run a simulation.
  • Interfaces are simple and text‑like: a JSON schema, a function signature, a prompt contract.
  • The LLM acts as a general‑purpose orchestrator, deciding which tools to call, in what order, and how to route their outputs into further calls.

What used to be a bash script is now an “agent” running a plan: call the documentation tool, call the blockchain indexer, verify assumptions against a risk engine, summarize the result for a human. The ingredients are familiar. The scale and flexibility are new.
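As a minimal sketch of that shape, with hypothetical tool names and a hard‑coded plan standing in for the model’s decisions, the orchestration pattern looks something like this:

```python
import json

# Hypothetical tools: each does one thing and speaks plain data.
def fetch_docs(query: str) -> dict:
    return {"query": query, "passages": ["..."]}   # stand-in for a docs index

def index_lookup(address: str) -> dict:
    return {"address": address, "events": []}      # stand-in for a chain indexer

def risk_check(payload: dict) -> dict:
    return {"ok": True, "notes": []}               # stand-in for a risk engine

# Registry: the orchestrator knows names and contracts, never internals.
TOOLS = {"fetch_docs": fetch_docs, "index_lookup": index_lookup,
         "risk_check": risk_check}

# A plan an LLM might emit; hard-coded here for clarity.
plan = [("fetch_docs", {"query": "withdrawal limits"}),
        ("index_lookup", {"address": "0xabc..."})]

results = [TOOLS[name](**args) for name, args in plan]

# Route earlier outputs into a later call, exactly as a pipe would.
verdict = TOOLS["risk_check"]({"evidence": results})
print(json.dumps(verdict, indent=2))
```

Because the loop only sees names and contracts, adding a tool means adding a registry entry, not rewriting the orchestrator.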

This isn’t an accident of model size; it’s an architectural choice. The strength of an LLM agent comes less from raw parameter count and more from how well it harnesses composable tools.

This same pattern shows up in modern data infrastructure, especially in systems that need to process and serve real-time data at scale.

LLMs as composable operating systems

The most interesting LLM systems today treat the model as part of an operating system rather than an application. You can see this in modern agent frameworks.

  • Tooling and APIs are defined as modular capabilities that can be added or removed without rewriting the whole system.
  • Workflows are expressed as graphs of states and transitions, not as single monolithic prompts.
  • Multiple agents, each with a narrow responsibility (search, summarization, classification, planning, execution), are wired together in a composable way.

This looks a lot like the Unix mindset, translated into the AI era. The LLM is the shell. Each agent is a sub-shell orchestrating discrete tools.
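A minimal sketch of that structure, assuming hypothetical stage names and nothing beyond the standard library: each node transforms shared state and names its successor, so the workflow is data rather than one giant prompt.

```python
# Each node does one narrow job on shared state and names the next node.
def search(state: dict) -> str:
    state["hits"] = ["doc1", "doc2"]   # stand-in for a retrieval agent
    return "summarize"

def summarize(state: dict) -> str:
    state["summary"] = f"{len(state['hits'])} documents found"
    return "done"

GRAPH = {"search": search, "summarize": summarize}

def run(entry: str, state: dict) -> dict:
    node = entry
    while node != "done":              # walk transitions until a terminal state
        node = GRAPH[node](state)
    return state

print(run("search", {"question": "what changed last week?"}))
```

Inserting a classification or planning stage is a local change: add a node and repoint one transition.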

When this is done well, you get the same emergent property Unix had: systems that are more than the sum of their parts. Add a new tool, say, a blockchain indexer, a price API, a filesystem, or a simulation engine, and the system’s capabilities expand combinatorially because every existing workflow can now leverage it.

This is exactly why so much current work in “agentic AI” emphasizes composable architectures. If each capability is independent, well‑scoped, and testable on its own, you can:

  • Swap tools without rewriting core logic (see the sketch after this list).
  • Trace failures back through the graph instead of debugging a giant prompt.
  • Reuse the same agent or tool across multiple workflows.
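On the first point, a self‑contained sketch with hypothetical names: if orchestration code only ever reaches tools through a registry, replacing a backend is one assignment, not a rewrite.

```python
from typing import Callable, Dict

# Workflow code depends only on this mapping and each tool's contract.
TOOLS: Dict[str, Callable[..., dict]] = {}

def fetch_docs_v1(query: str) -> dict:
    return {"query": query, "passages": ["from index v1"]}

def fetch_docs_v2(query: str) -> dict:
    return {"query": query, "passages": ["from index v2"]}

TOOLS["fetch_docs"] = fetch_docs_v1
# Swapping the retrieval backend is one line; no workflow is touched.
TOOLS["fetch_docs"] = fetch_docs_v2

print(TOOLS["fetch_docs"]("withdrawal limits"))
```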

The technical details vary across stacks, but the underlying lesson is the same: LLMs are only as useful as the ecosystem of composable pieces they can orchestrate.

Composability is the real moat

This has two important implications.

First, it means that in AI systems, composability is a structural property, not a feature. You cannot bolt it on later. Either the tools, data, and workflows are designed to interoperate from day one, or they calcify into bespoke integrations that resist change.

Second, it reframes where real leverage comes from. The winning systems will not be the ones with the single most capable “all‑in‑one” model; they will be the ones that expose the richest set of reliable, composable capabilities around that model. This already shows up in practice:

  • AI stacks that rely on opaque, tightly coupled tools become fragile and hard to evolve.
  • Stacks that treat tools as small, sharp components, each with a clear contract, scale to more use cases with less engineering effort.

You can see this in production‑grade agent platforms that insist on clearly scoped tools, robust error handling, and explicit reasoning traces. They are rediscovering the same lesson that Unix engineers internalized decades ago: if you want reliability and flexibility at the same time, you need composable tools.
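What “robust error handling and explicit reasoning traces” can look like in practice, as a minimal sketch with a deliberately failing hypothetical tool: every call, its arguments, and its outcome become inspectable data instead of a stack trace.

```python
import time

def call_with_trace(tool, name: str, trace: list, **kwargs) -> dict:
    """Run one tool call, recording what was called and what came back."""
    entry = {"tool": name, "args": kwargs, "t": time.time()}
    try:
        entry["result"] = tool(**kwargs)
        entry["ok"] = True
    except Exception as exc:   # a failing tool becomes data, not a crash
        entry["error"] = repr(exc)
        entry["ok"] = False
    trace.append(entry)
    return entry

# Hypothetical flaky tool.
def price_api(symbol: str) -> dict:
    raise TimeoutError("upstream timeout")

trace: list = []
call_with_trace(price_api, "price_api", trace, symbol="ETH")
for step in trace:
    print(step["tool"], "ok" if step["ok"] else step["error"])
```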

In that light, the trajectory is clear. The pioneers of Unix gave us the first mainstream computing environment built on small utilities and text pipes. GUI‑centric computing temporarily obscured that power by hiding tools behind application boundaries. AI is now bringing it back, with LLMs as the orchestration layer and a growing universe of tools, agents, and data systems as the components.

Composability wins not because it is elegant, but because, over time, no other architecture can keep up.

What this means for blockchain data

The same principle applies to blockchain data systems.

Applications today depend on real-time, reliable access to on-chain data. Systems that treat indexing, querying, and data access as composable components are easier to scale and adapt as requirements evolve.

This is the approach Ormi is built around: small, reliable pieces that work together under real-world conditions, rather than tightly coupled systems that break as complexity increases.

About Ormi

Ormi is the next-generation data layer for Web3, purpose-built for real-time, high-throughput applications like DeFi, gaming, wallets, and on-chain infrastructure. Its hybrid architecture ensures sub-30ms latency and up to 4,000 RPS for live subgraph indexing.

With 99.9% uptime and deployments across ecosystems representing $50B+ in TVL and $100B+ in annual transaction volume, Ormi is trusted to power the most demanding production environments without throttling or delay.