Is an Object-Role Model an Ontology?
A Practical Guide
By G. Sawatzky, embedded-commerce.com
August 12, 2025
If you're deeply involved in system design, knowledge representation, or the emerging field of neuro-symbolic AI, you've likely encountered the term "ontology." But what exactly does it mean in a practical sense? And if you're using Object-Role Modeling (ORM), are you actually building an ontology?
This article aims to cut through the confusion with a clear, actionable definition of ontology, contrasted with traditional conceptual data models. The goal is to show that, with a rigorous approach, an Object-Role Model can indeed serve as a powerful, machine-interpretable ontology, ready to drive meaningful reasoning and implementation in modern AI systems.
What is an Ontology?
First, we should clarify that we are referring to an applied ontology, not a philosophical or academic ontology. We are not discussing the ontological nature of reality, but rather the ontological nature of knowledge representation and reasoning.
In this context, an ontology is traditionally defined as "an explicit specification of a conceptualization" ([5], [6]). While concise, this definition can be interpreted broadly, sometimes leading to informal models that lack the precision needed for computational reasoning.
For practical use in today's AI, particularly neuro-symbolic AI, let's refine this concept and define an ontology as:
"A ontology is a structured, interpretable specification of a domain expressed through logic-governed constraints, conceptual roles, and formal semantics. Its purpose is to support meaningful reasoning, verification, and implementation across both symbolic and hybrid AI systems."
This refined definition emphasizes that an ontology must be:
- Structured: Organized in a clear, consistent way.
- Interpretable: Capable of being understood and processed by machines.
- Constraint-Driven: Built around explicit rules that govern facts within a domain.
- Formal: Based on a precise logical foundation.
- Actionable: Designed for real-world application, not just theoretical discussion.
It focuses on a specific "domain" (like an enterprise's operations or a field of knowledge) rather than attempting to capture universal truths. This makes the ontology more useful and adaptable.
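To ground these properties, here is a minimal sketch in Python. It is illustrative only: the WorksFor fact type, the constraint function, and verify are invented for this article, not part of any standard ontology library. Facts are structured data, constraints are explicit executable rules, and a machine can check a fact population against them:

```python
# Minimal illustrative sketch: all names here (WorksFor, verify, etc.) are
# invented for this article; this is not a standard ontology library.

from dataclasses import dataclass

# Structured: the domain vocabulary is explicit and machine-readable.
@dataclass(frozen=True)
class WorksFor:          # fact type: "Employee works for Department"
    employee: str        # role 1
    department: str      # role 2

# Constraint-driven: an explicit rule governing valid fact populations.
def at_most_one_department(facts):
    """Uniqueness constraint: each employee works for at most one department."""
    seen = {}
    for f in facts:
        if seen.setdefault(f.employee, f.department) != f.department:
            return False
    return True

CONSTRAINTS = [at_most_one_department]

# Interpretable and actionable: a machine can verify any population of facts.
def verify(facts):
    return all(rule(facts) for rule in CONSTRAINTS)

facts = [WorksFor("alice", "sales"), WorksFor("bob", "engineering")]
assert verify(facts)                                  # valid population
assert not verify(facts + [WorksFor("alice", "hr")])  # violates uniqueness
```

Nothing in the sketch is tied to a database or a particular reasoner; the specification itself is the artifact, which is exactly the "actionable" property the definition demands.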
Ontology vs. Conceptual Data Model (with ORM)
The line between an ontology and a conceptual data model can seem blurry. Here's how they relate:
- Conceptual Data Models (like ORM): These are traditionally used in database design. They describe the high-level meaning and structure of data, identifying fundamental concepts (object types) and their relationships (roles). ORM, in particular, excels at defining and diagramming precise business rules as constraints, and its models translate directly to formal logic and database schemas (see the sketch after this list).
- Ontology (as defined here): This is a highly formalized and rigorous form of conceptual modeling. While a conceptual data model typically blueprints a database schema, this pragmatic ontology extends that focus to broadly support symbolic and hybrid AI systems, enabling advanced reasoning and verification.
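As an illustration of that translation claim, here is a hedged sketch of how a single ORM fact type with a uniqueness constraint might be rendered mechanically as both a first-order logic formula and a relational schema. The FactType class and the two renderers are assumptions invented for this article, not an official ORM toolchain:

```python
# Illustrative sketch only: FactType and the renderers below are invented
# for this article, not part of any official ORM tool.

from dataclasses import dataclass

@dataclass
class FactType:
    verbalization: str   # e.g. "Employee works for Department"
    roles: tuple         # object types playing the two roles, in order

def predicate_name(ft: FactType) -> str:
    """Derive a predicate name from the verbalization's middle words."""
    return "".join(w.capitalize() for w in ft.verbalization.split()[1:-1])

def to_logic(ft: FactType) -> str:
    """Uniqueness constraint on the first role, as a first-order formula."""
    a, b = ft.roles
    p = predicate_name(ft)
    return (f"forall x:{a}, y1:{b}, y2:{b} "
            f"({p}(x, y1) & {p}(x, y2) -> y1 = y2)")

def to_sql(ft: FactType) -> str:
    """Same fact type as a table; the uniqueness constraint becomes a key."""
    a, b = ft.roles
    return (f"CREATE TABLE {predicate_name(ft).lower()} (\n"
            f"  {a.lower()} VARCHAR NOT NULL PRIMARY KEY,  -- unique role\n"
            f"  {b.lower()} VARCHAR NOT NULL\n"
            f");")

ft = FactType("Employee works for Department", ("Employee", "Department"))
print(to_logic(ft))  # one constraint, one formula
print(to_sql(ft))    # one fact type, one table
```

Both renderings flow from one conceptual source, which is the practical sense in which an ORM model "translates directly" to logic and schemas.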
In essence, Object-Role Modeling (ORM) serves as an excellent methodology for building this type of pragmatic yet formal ontology. When an ORM model is built with explicit world assumptions and designed for broader AI reasoning and verification, it perfectly fits this definition. Not all conceptual data models are ontologies, but a well-constructed ORM model can and should function as one.
Basis for This Definition
This precise, actionable definition of ontology is built upon the work of respected experts and research in knowledge representation, database theory, and AI. Key influences include:
- Terry Halpin: As a primary architect of Object-Role Modeling (ORM), Halpin's work ([7], [8]) provides the foundational methodology for designing models that are rich in logic-governed constraints and conceptual roles, bridging natural language and formal logic.
- Gary Marcus: A leading voice in neuro-symbolic AI, Marcus critiques the limitations of purely deep learning approaches. He stresses the critical need for symbolic frameworks and "world models" (persistent, stable internal representations of entities and their relationships) to achieve human-like reasoning ([9], [10]). His insights directly inform the definition's goal to support both symbolic and hybrid AI systems.
- Mustafa Jarrar: His work directly addresses the use of conceptual data modeling (including ORM) for ontology engineering ([15], [16]). Jarrar highlights that conceptual data modeling techniques are effective for building ontologies, especially due to their clarity and expressiveness for constraints. He clarifies that ontologies are intended for application-independent, machine-processable knowledge, aligning with the definition's emphasis on formal interpretability and broad utility.
- Bernhard Thalheim: As a key contributor to conceptual modeling, Thalheim's focus on 'rich conceptual modeling' and formalization provides a rigorous, constraint-driven foundation for this approach, as seen in his work on the 'Handbook of Conceptual Modeling' ([12]).
- John F. Sowa: Sowa's criticisms of OWL's formalism ([2], [3]) and his advocacy for logic-based knowledge representation ([1]) align with the need for deep, precise semantics in ontological modeling.
- Pascal Hitzler: As a researcher actively bridging Semantic Web technologies, formal knowledge, and LLMs, Hitzler's recent work ([17]) on accelerating knowledge graph and ontology engineering with LLMs underscores the relevance of formal, modular ontologies for effective neuro-symbolic AI.
These experts collectively advocate for precise, structured knowledge representation that moves beyond simple classification to enable robust reasoning and practical implementation, which are the core tenets of this ontology definition.
Addressing Potential Objections
Some aspects of this approach may be viewed differently by others in the knowledge engineering and ontology communities. Here, those potential objections are addressed head-on:
- Automated Reasoning and Classification: Automated reasoning and classification are key strengths of DL-based systems like OWL. This ORM-based approach, however, prioritizes the human-centered side of modeling: the conceptual model serves as a clear, intuitive blueprint for human understanding and communication. The rigor of the ORM model is used to feed logic-based systems (e.g., relational databases, Prolog, Logic Tensor Networks), but the primary focus is a model that is understandable and useful for humans.
- Emphasis on Closed World Assumption (CWA): Ontologists often prefer an Open World Assumption (OWA), under which unstated facts are simply unknown, reflecting incomplete real-world knowledge. This definition emphasizes "logic-governed constraints," which for implementation often align with a Closed World Assumption (CWA), under which what is not stated is false. However, this framework explicitly supports selecting CWA or OWA interpretations as appropriate for different parts of the domain (see the sketch after this list), letting the model reflect diverse real-world views while remaining computationally actionable. This is a practical decision for system building, not a philosophical stance against OWA.
- Data Integration and Interoperability: The ORM model, as a canonical blueprint, provides a single, authoritative source of truth. This structured representation serves as a common semantic foundation, enabling data interoperability across diverse internal and external sources.
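To make the CWA/OWA point concrete, here is a minimal sketch, assuming a toy fact base and a hypothetical per-predicate WORLD_ASSUMPTION table (not any particular reasoner's API). The same unstated fact evaluates to false under CWA but unknown under OWA:

```python
# Toy sketch: the fact base, predicates, and WORLD_ASSUMPTION table are
# invented for illustration; no specific reasoner's API is implied.

KNOWN_FACTS = {
    ("works_for", "alice", "sales"),
    ("allergic_to", "alice", "penicillin"),
}

# Pragmatic, per-predicate choice: employment records are complete (CWA);
# allergy records are known to be incomplete (OWA).
WORLD_ASSUMPTION = {"works_for": "closed", "allergic_to": "open"}

def holds(fact):
    """CWA: an unstated fact is false (negation as failure, as in Prolog).
    OWA: an unstated fact is simply unknown."""
    if fact in KNOWN_FACTS:
        return True
    return False if WORLD_ASSUMPTION[fact[0]] == "closed" else "unknown"

print(holds(("works_for", "bob", "sales")))      # False: CWA predicate
print(holds(("allergic_to", "bob", "peanuts")))  # 'unknown': OWA predicate
```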
References
- [1] Sowa, J. F. (2000). Knowledge Representation: Logical, Philosophical, and Computational Foundations. Brooks Cole.
- [2] Sowa, J. F. (2011). Future directions for semantic systems. https://www.jfsowa.com/pubs/futures.pdf
- [3] Sowa, J. F. (n.d.). Various writings including personal website: https://www.jfsowa.com
- [5] Gruber, T. R. (1993). A translation approach to portable ontology specifications. Knowledge Acquisition, 5(2), 199–220.
- [6] Gruber, T. R. (2008). Ontology as a specification mechanism for knowledge sharing. In Handbook on Ontologies (2nd ed.). Springer.
- [7] Halpin, T. (2005). Object Role Modeling: An Overview. University of Washington.
- [8] Halpin, T. (1997). Modeling for Data and Business Rules (Interview). Database Newsletter.
- [9] Marcus, G. (2022, August 11). Deep Learning Alone Isn't Getting Us To Human-Like AI. Noema Magazine.
- [10] Marcus, G. (2025, June 28). Generative AI's crippling and widespread failure to induce robust models of the world. Marcus on AI.
- [12] Thalheim, B. (2010). Towards a theory of conceptual modelling. Journal of Universal Computer Science, 16(20), 3102–3137.
- [15] Jarrar, M. (2003). On Using Conceptual Data Modeling for Ontology Engineering. Journal of Database Management, 14(4), 51–68.
- [16] Jarrar, M., & D.V. Aelst. (2004). Data Modelling versus Ontology Engineering. SIGMOD Record, 33(4), 72–77.
- [17] Hitzler, P., et al. (2023). Accelerating Knowledge Graph and Ontology Engineering with Large Language Models.
- [18] Modeling vs Encoding for the Semantic Web. Semantic Web Journal. https://www.semantic-web-journal.net/sites/default/files/swj35.pdf. Argues for a modeling-first approach with conceptual languages distinct from encodings like RDF/OWL.