AI Doesn’t Fail Because It Lacks Intelligence - It Fails Because It Lacks Meaning
Why ontologies matter - and why they are not enough
The Illusion of AI Readiness
Across industries, executives are making the same bet: that artificial intelligence will unlock value hidden in their data. The investments are substantial - modern data platforms, vast stores of structured and unstructured information, and increasingly sophisticated models.
And yet, a pattern is emerging.
AI systems perform impressively in controlled settings but falter in production. Outputs are inconsistent. Results cannot be fully explained. Confidence erodes precisely when trust is needed most.
The explanation is often framed in technical terms - model tuning, prompt engineering, or data quality. But these are symptoms, not causes.
The deeper issue is more fundamental: most enterprise data is not grounded in a stable, shared understanding of what it means.
For decades, organizations have treated data structure as a proxy for meaning. Tables, schemas, and pipelines were assumed to carry sufficient context. They do not. They merely organize symbols. And when AI systems attempt to reason over those symbols without a coherent semantic foundation, they produce answers that are plausible, but not reliably correct.
This is where the conversation must begin.
The Forgotten Promise of Ontologies
The idea that data requires explicit meaning is not new. Ontologies - formal representations of concepts and their relationships - have long offered a way to define and standardize how organizations understand their data.
As Thomas R. Gruber observed in 1993, an ontology is "an explicit specification of a conceptualization." At its core, it is a disciplined attempt to answer a deceptively simple question: What do we mean when we say this?
When applied effectively, ontologies provide three critical capabilities.
They create a shared vocabulary across systems and stakeholders. They enable interoperability based on meaning rather than format. And they allow systems to reason - deriving insights from relationships that are explicitly defined rather than implicitly assumed.
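To make the third capability concrete, consider a deliberately small sketch. Real ontologies are expressed in formal languages such as OWL and evaluated by dedicated reasoners; the plain-Python version below, with invented concepts and axioms, only illustrates the principle that facts can be derived from explicitly defined relationships rather than stored or assumed.

```python
# A minimal sketch of ontology-style reasoning in plain Python.
# The concepts (Party, Customer, PremiumCustomer) and the "is-a"
# axioms below are invented for illustration, not a real schema.

# Explicit subclass axioms: narrower concept -> broader concept.
SUBCLASS_OF = {
    "Customer": "Party",
    "PremiumCustomer": "Customer",
    "Supplier": "Party",
}

def ancestors(concept):
    """Derive every broader concept by walking the explicit axioms."""
    found = []
    while concept in SUBCLASS_OF:
        concept = SUBCLASS_OF[concept]
        found.append(concept)
    return found

def is_a(concept, candidate):
    """Subsumption check: is `concept` a kind of `candidate`?"""
    return concept == candidate or candidate in ancestors(concept)

# The fact below is never stored anywhere. It is derived from
# relationships that were explicitly defined rather than assumed.
assert is_a("PremiumCustomer", "Party")
```

The derived fact is the point: nowhere was "a PremiumCustomer is a Party" written down. It follows from the definitions.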
For enterprise AI, these capabilities are not optional. They are foundational. Without shared concepts, models learn conflicting patterns. Without explicit relationships, context must be reconstructed imperfectly at inference time. Without consistent definitions, outputs cannot be trusted or audited.
And yet, despite their conceptual clarity, ontologies have remained peripheral in most organizations.
Why Ontologies Never Became Foundational
The slow adoption of ontologies is often attributed to complexity or lack of tooling. Those factors matter, but they are not decisive.
The real reasons are more structural.
Modern data management evolved around the relational model introduced by Edgar F. Codd in 1970, which prioritized structure and efficiency over semantics. Meaning was externalized - encoded in application logic, documentation, and institutional memory. Over time, this became the default architecture.
Ontologies, by contrast, require agreement. They force organizations to reconcile differences in how key concepts are defined and used. This is not merely a technical exercise. It is an organizational one, often exposing inconsistencies that have long been tolerated.
It is easier to integrate systems than to align meaning.
There were also economic realities. Enterprise software evolved around transactions, storage, and processing. Ontology-driven approaches - by reducing ambiguity and integration friction - offered fewer opportunities for incremental, billable complexity. They lacked strong commercial champions.
Finally, the rise of "big data" deferred the problem altogether. The prevailing logic, captured in part by Halevy, Norvig, and Pereira's influential 2009 essay on the "unreasonable effectiveness of data," was that meaning could be inferred from scale. Collect everything, process it later, and let algorithms discover patterns. That approach delivered value in some contexts, but it also amplified ambiguity. AI systems trained on inconsistent data learn inconsistent realities.
In effect, ontologies were not rejected. They were bypassed.
The Limits of Ontologies in a Dynamic Enterprise
Even where ontologies have been adopted, they have not fully solved the problem of meaning in enterprise systems. The reason is subtle but important.
Ontologies define what things are. In most enterprise implementations, however, they do not fully capture what is true in a given moment, under specific conditions, for a particular purpose.
They are typically static representations of a domain. But enterprises are not static. Definitions evolve. Policies change. Context shifts. A customer may be "active" under one set of conditions and not under another. A transaction may be valid at one point in time and non-compliant at another.
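A sketch makes the point sharper. The rule below - "active" means a transaction within the last ninety days - is invented for illustration, but it shows why the question "is this customer active?" has no answer without a point in time:

```python
# An illustration of time- and condition-dependent meaning. The
# rule ("active" = a transaction within the last 90 days) is an
# invented example, not a proposed definition.
from datetime import date, timedelta

def is_active(last_transaction: date, as_of: date,
              window_days: int = 90) -> bool:
    """'Active' is only decidable relative to a point in time."""
    return as_of - last_transaction <= timedelta(days=window_days)

last_txn = date(2024, 1, 15)
print(is_active(last_txn, as_of=date(2024, 3, 1)))  # True
print(is_active(last_txn, as_of=date(2024, 9, 1)))  # False: same data, later date
```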
Most enterprise ontology implementations are not designed to operationalize this fluidity.
They are also frequently detached from execution. They exist in modeling environments or knowledge graphs but are not embedded in operational systems where decisions are made. As a result, they describe how the organization intends to function, while actual behavior continues to diverge.
Perhaps most importantly, ontologies do not by themselves enforce outcomes. They can define rules, but they do not ensure those rules are applied consistently across data pipelines, applications, and AI systems.
This creates a gap between defined meaning and operational reality.
Consider a global bank where "customer" is defined differently across onboarding, risk, and marketing systems. An AI model trained across those sources may confidently recommend an action that violates policy or misclassifies exposure - not because the model is inherently flawed, but because the enterprise has not established one enforceable understanding of what the data means in context.
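The failure mode is easy to reproduce in miniature. The three definitions below are invented, but the shape of the problem is faithful: the same record counts as a customer in two systems and not in a third.

```python
# Three systems, three quiet disagreements about what "customer"
# means. The predicates are invented to illustrate the failure mode.
definitions = {
    "onboarding": lambda r: r["kyc_complete"],
    "risk":       lambda r: r["has_open_exposure"],
    "marketing":  lambda r: r["opted_in"],
}

record = {"kyc_complete": True, "has_open_exposure": False, "opted_in": True}
print({system: rule(record) for system, rule in definitions.items()})
# {'onboarding': True, 'risk': False, 'marketing': True}
# A model trained across these sources learns three conflicting realities.
```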
From Defined Meaning to Stateful Meaning
Closing that gap requires a shift - from representing meaning to maintaining it.
What is needed is not simply better ontologies, but a more complete capability: what I describe as stateful meaning.
Put simply, stateful meaning is meaning that is not only defined, but actively maintained, context-aware, and enforceable as conditions change across the enterprise.
It answers not only "What does this mean?" but also:
What does it mean now?
Under what conditions is it valid?
For whom is it applicable?
And is it being used correctly?
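As a rough sketch - with every field name an assumption rather than a prescribed design - a stateful definition might carry its answers to those questions explicitly:

```python
# A sketch of what a "stateful" definition might carry beyond a
# static ontology entry. Field names are assumptions for illustration.
from dataclasses import dataclass
from datetime import date
from typing import Callable, Optional

@dataclass(frozen=True)
class StatefulDefinition:
    concept: str                   # what does this mean?
    version: str                   # what does it mean *now*?
    valid_from: date               # under what conditions is it valid?
    valid_to: Optional[date]
    applies_to: frozenset          # for whom is it applicable?
    check: Callable[[dict], bool]  # is it being used correctly?

active_customer_v2 = StatefulDefinition(
    concept="active_customer",
    version="2.0",
    valid_from=date(2024, 1, 1),
    valid_to=None,
    applies_to=frozenset({"marketing", "risk"}),
    check=lambda rec: rec.get("days_since_last_txn", 10**9) <= 90,
)
```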
This distinction is critical for AI.
Modern AI systems - particularly large language models - operate probabilistically. As Bender and her coauthors argue in their critique of large language models, such systems generate outputs from patterns in data rather than from grounded, authoritative definitions. That makes them powerful, but also inherently unconstrained in enterprise settings.
Without a mechanism to ground those outputs in enterprise-specific meaning, errors are not only possible - they are inevitable.
What It Takes to Make Meaning Operational
Delivering stateful meaning requires capabilities that extend beyond traditional ontology frameworks.
First, meaning must be persistent. It cannot be reconstructed ad hoc at query time; there must be an authoritative, continuously maintained representation of how concepts are defined and related.
Second, meaning must be temporal. Definitions and relationships must be versioned and time-bound, reflecting how they evolve.
Third, rules must be executable. Business logic, policies, and constraints must be embedded directly into the systems that process data and generate AI outputs.
Fourth, context must be explicit. Meaning must adapt based on use case, role, and regulatory environment - without becoming ambiguous.
Finally, validation must be continuous. Every interaction with data - especially by AI systems - must be evaluated against defined semantics and rules.
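Continuing the sketch above - and again as an illustration, not an implementation - the last capability might look like a gate that every data access or AI output must pass:

```python
# A sketch of continuous validation, reusing StatefulDefinition and
# active_customer_v2 from the previous sketch. The logic is
# illustrative only, not a reference design.
from datetime import date

def validate(output: dict, definition: StatefulDefinition,
             as_of: date, context: str) -> bool:
    """Reject any use of a definition outside its validity window,
    outside its applicable contexts, or failing its rule."""
    in_window = definition.valid_from <= as_of and (
        definition.valid_to is None or as_of <= definition.valid_to
    )
    return (in_window
            and context in definition.applies_to
            and definition.check(output))

# An AI recommendation is released only if it survives validation.
rec = {"customer_id": "C-42", "days_since_last_txn": 30}
assert validate(rec, active_customer_v2, date(2024, 6, 1), "marketing")
assert not validate(rec, active_customer_v2, date(2024, 6, 1), "onboarding")
```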
This is not an incremental enhancement to existing data architectures. It is a new layer - one that sits between data infrastructure and AI systems, ensuring that meaning is not only defined, but enforced.
A New Basis for Competitive Advantage
For years, competitive advantage in data and AI has been framed in terms of scale: more data, more compute, more advanced models.
Those advantages are eroding.
What remains scarce is not data, but coherence.
Organizations that can maintain a consistent, governed, and context-aware understanding of their data will be able to deploy AI with greater speed, reliability, and trust. They will move from experimentation to operationalization. From isolated use cases to enterprise-wide capability.
Others will continue to struggle - not because their models are weaker, but because their foundations are unstable.
The Shift Ahead
Ontologies were an early attempt to formalize meaning in data systems. They remain essential. But they are no longer sufficient.
The next phase of enterprise data management will be defined by the ability to operationalize meaning - to make it persistent, contextual, and enforceable.
That is the work required to make data truly ready for AI.
And it leads to a conclusion that is both simple and consequential:
AI doesn’t fail because it lacks intelligence.
It fails because it lacks governed, stateful meaning.
The organizations that understand this and act on it will define the next generation of enterprise performance.
Sources (in order of appearance)
1. Thomas R. Gruber, "A Translation Approach to Portable Ontology Specifications," Knowledge Acquisition 5, no. 2 (1993): 199-220.
2. Edgar F. Codd, "A Relational Model of Data for Large Shared Data Banks," Communications of the ACM 13, no. 6 (1970): 377-387.
3. Mike Uschold and Michael Grüninger, "Ontologies: Principles, Methods and Applications," Knowledge Engineering Review 11, no. 2 (1996): 93-136.
4. Alon Halevy, Peter Norvig, and Fernando Pereira, "The Unreasonable Effectiveness of Data," IEEE Intelligent Systems 24, no. 2 (2009): 8-12.
5. Nithya Sambasivan et al., "Everyone Wants to Do the Model Work, Not the Data Work: Data Cascades in High-Stakes AI," Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (2021).
6. Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?," Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT) (2021): 610-623.

