C-FORS Summer School Foundational Ontologies

At this year’s C-FORS summer school in Oslo, I had the opportunity to guide participants through the foundational principles of Basic Formal Ontology (BFO), with an emphasis on concrete modeling exercises tied to real-world datasets. The tutorial was designed for professionals and researchers seeking to deepen their technical mastery of ontological modeling using BFO as a semantic backbone.

We began by reviewing the fundamental structure of BFO, distinguishing between continuants and occurrents, and exploring key upper-level categories like independent continuant, specifically dependent continuant, process, and quality. I emphasized that good ontological modeling must track not just the static entities in a domain but also their evolution and realization across time—a theme that recurred in all four case-based exercises.

The hands-on portion of the tutorial consisted of four exercises, each paired with worked-out solutions that participants could study in parallel. You can find my slides for the tutorial, background reading, exercises, and solutions on the National Center for Ontological Research (NCOR) Academy GitHub page.

Many thanks to Salvatore Florio, Guendalina Righetti, and Øystein Linnebo for being gracious hosts.

What follows is a brief discussion of the solutions provided.

Case 1: Jet Specification

We modeled an aircraft (Airbus A320 Neo) using a design pattern grounded in BFO axioms. The task was to represent how technical specifications—like engine model and dimensions—are linked via roles and qualities to a material artifact and its manufacturer. Participants learned how to use is about, has continuant part, and rdf:type only to capture class-level constraints.
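
To make the pattern concrete, here is a minimal Turtle sketch of the kind of structure we built, using an illustrative ex: namespace and readable relation names in place of the canonical BFO/IAO identifiers; the class names and the restriction shown are assumptions for exposition, not the worked solution itself.

    @prefix ex:   <http://example.org/jet#> .
    @prefix owl:  <http://www.w3.org/2002/07/owl#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

    # Class-level constraint: every A320 Neo specification is about only A320 Neo aircraft.
    ex:A320NeoSpecification a owl:Class ;
        rdfs:subClassOf ex:InformationContentEntity ,
            [ a owl:Restriction ;
              owl:onProperty ex:is_about ;
              owl:allValuesFrom ex:A320NeoAircraft ] .

    # Instance level: one aircraft, its engine as a continuant part, and a wingspan quality.
    ex:aircraft_001 a ex:A320NeoAircraft ;
        ex:has_continuant_part ex:engine_001 ;
        ex:bearer_of ex:wingspan_001 .

    ex:engine_001   a ex:JetEngine .
    ex:wingspan_001 a ex:WingspanQuality .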

Case 2: Aircraft Performance Measurement

This scenario modeled a deviation between expected and observed approach speeds of an Airbus A321-111. We illustrated how to construct a BFO-conformant pattern that ties together design specification, test process, and measurement data, using temporal parts and process aggregation. The exercise culminated in a SPARQL query designed to compute the average measured approach speed over several trials.
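
The concluding query looked, in spirit, like the sketch below; the ex: class and property names are placeholders standing in for the pattern's actual terms, so treat this as an illustration of the aggregation step rather than the exact solution.

    PREFIX ex: <http://example.org/performance#>

    # Average the measured approach speed across all recorded trial processes.
    SELECT (AVG(?speedValue) AS ?avgApproachSpeed)
    WHERE {
      ?trial       a ex:ApproachSpeedTestProcess ;
                   ex:has_output ?measurement .
      ?measurement a ex:SpeedMeasurementDatum ;
                   ex:has_value ?speedValue .
    }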

Case 3: SOC Role Definitions

Here, we modeled the U.S. Standard Occupational Classification roles for aerospace engineers, capturing how job descriptions such as “Adaptability Evaluation” and “Equipment Testing” relate to outputs and realizable entities like reports and aircraft specs. This showed how to handle granular roles in applied ontology work.
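
A compressed Turtle rendering of this idea, with hypothetical ex: names rather than the exercise's actual terms, shows the chain from bearer to role to realizing process to output.

    @prefix ex: <http://example.org/soc#> .

    # One SOC task ("Equipment Testing") modeled as a role borne by an engineer,
    # realized in a testing process whose output is a report.
    ex:engineer_001 a ex:AerospaceEngineer ;
        ex:bearer_of ex:equipment_testing_role_001 .

    ex:equipment_testing_role_001 a ex:EquipmentTestingRole ;
        ex:realized_in ex:testing_process_001 .

    ex:testing_process_001 a ex:EquipmentTestingProcess ;
        ex:has_output ex:test_report_001 .

    ex:test_report_001 a ex:TestReport .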

Case 4: Employment Data Modeling

The final case involved linking employment statistics to SOC role definitions using publicly available wage data. Participants mapped employment totals as qualities inhering in object aggregates, which were in turn outputs of measurement processes, all conforming to BFO’s realist ontology commitments.
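
As a rough sketch of that shape, the Turtle below uses hypothetical ex: names and a placeholder figure; it is meant only to show how the quality, the aggregate, and the measurement process hang together.

    @prefix ex:  <http://example.org/employment#> .
    @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

    # An employment total as a quality inhering in an object aggregate of engineers,
    # with the published figure recorded as the output of a measurement process.
    # The numeric value is a placeholder, not actual wage data.
    ex:us_aerospace_engineers a ex:EmployeeAggregate ;
        ex:bearer_of ex:employment_total_001 .

    ex:employment_total_001 a ex:EmploymentTotalQuality .

    ex:wage_survey_2024 a ex:MeasurementProcess ;
        ex:has_participant ex:us_aerospace_engineers ;
        ex:has_output ex:employment_total_datum_001 .

    ex:employment_total_datum_001 a ex:MeasurementDatum ;
        ex:is_about  ex:employment_total_001 ;
        ex:has_value "10000"^^xsd:integer .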

LLMs and Ontology Engineering at JOWO 2025

The rapid development and deployment of Large-Language Models (LLMs) has led to growing interest in leveraging ontologies and knowledge graphs to enhance LLM capabilities and address limitations. Combining the semantically rich architectures provided by ontologies and knowledge graphs with the generative strengths of LLMs promises to provide a path towards more explainable artificial intelligence systems, more trustworthy output, and a deeper understanding of vulnerabilities arising from integrated architectures.

This workshop is dedicated to exploring the convergence of knowledge representation and LLM strategies, design patterns, models, and benchmarks. We aim to bring together researchers, practitioners, and enthusiasts from industry, academia, and government in the interest of exploring possible convergence points and advancing each field.

The submission deadline has been extended to June 1.

We invite submissions for our workshop focusing on the intersection of Applied Ontology and Large Language Models, part of the Joint Ontology Workshops (JOWO) episode XI, affiliated with FOIS 2025.

Post on LinkedIn here.

More importantly, see the Call for Papers here.

AI's Role in Law: Assistance, Not Replacement

Full Baldy Center Blog Post here: https://www.buffalo.edu/baldycenter/multimedia/blog/24-25-blog.html#50  
Danielle Limbaugh and John Beverley 

Leveraging sophisticated chatbots seems in many ways foreign to the creative, trusted, and impactful work done by legal professionals. A question of growing importance is:

How do we promote trust among legal professionals with respect to platforms and tools leveraging modern advances in AI?

There is reason to be hopeful, and it lies in ontology engineering, a discipline focused on the creation of structured vocabularies and relationships within data. Techniques from the field of ontology engineering have long been leveraged to promote explainability, as they aim to make explicit the implicit formal structure of data. When AI systems are built on well-structured ontologies, they:

  1. Provide clear reasoning paths for their conclusions;

  2. Make explicit the relationships between different legal concepts;

  3. Help identify relevant precedents and principles more accurately; and

  4. Maintain consistency in legal interpretation.

As we move forward, the legal profession should embrace ontology engineering as a crucial component of AI implementation. It's not just about making AI systems more accurate; it is about making them more trustworthy and aligned with the fundamental principles of legal practice.

Promoting Healthy Aging at JOWO 2025

I want to live forever. Not metaphorically; I literally want to be immortal. I have no idea what research should be prioritized to make this a reality, but I suspect the key will be facilitating the generation of novel hypotheses and research avenues from the vast amounts of biomedical, psychological, and health-related data we have been storing for decades.

Now, I don't just want to be immortal, I want to be immortal and healthy. Aging in a healthy manner is perhaps an easier problem to tackle. Baby steps, I suppose.

With this in mind, I encourage you to submit to the Promoting Healthy Aging through Semantic Enrichment of Solitude Research (PHASES) workshop, the deadline for which has been extended to June 1st.

Maybe being immortal isn't your goal, but don't let that stop you from helping me live my best (unending) life.

As if you need any more enticement, PHASES will be part of the 2025 Joint Ontology Workshops (JOWO), which will be co-located with FOIS 2025 and held 8-9 September in Catania, Italy.

See the LinkedIn post here.

More importantly, see the Call for Papers here.

Finding Meaning

“This research supports the idea that late midlife is a time of possible positive change and that one has the power to work toward personal growth, fulfillment, understanding and acceptance.”
Hollen Reischer, Visiting Assistant Professor
University at Buffalo, Department of Psychology

Late midlife often brings more than gray hair and regret. As Hollen Reischer’s recent study shows, many people begin to reinterpret their lives, weaving hardship and success into a richer, more coherent story (Reischer, 2025). This process — narrative self-transcendence — isn't about denying pain, but finding meaning in it. The events themselves don't change. The way we see them does.

Ontology engineering faces a similar task. Ontologies don't just store facts; they impose structure on complexity. They decide what distinctions matter and what patterns endure. In building ontologies for solitude and gerotranscendence, we aren’t simply cataloging experiences — we’re formalizing interpretations that, until now, have lived in scattered fields and shifting vocabularies.

Reischer’s work reminds us that growth is not about acquiring more, but interpreting better. Solitude, too, plays its role: not as mere isolation, but as space for re-narration. Quiet moments allow us to redraw the map of ourselves, connecting disparate experiences into something comprehensible.

Good ontology work mirrors this quiet architecture. It clarifies without flattening. It allows reinterpretation without erasing complexity. It is, in its way, a technical act of transcendence — aligning data points into a form that admits not just coherence, but growth.

There is something surprisingly human here: the same instinct that drives a solitary reflection at midlife drives the best efforts to structure knowledge. In both cases, we aren't chasing precision for its own sake. We are looking for meaning.

Source: https://www.buffalo.edu/grad/news.host.htm...

Towards an Ontology of Loving

Love is deeply personal and philosophically complex; it is also surprisingly underdeveloped in ontology engineering. Given the amount of research on this topic, it is worth moving beyond vague definitions towards a rigorous ontological model.

My favored starting point is what I call the Concatenation View (CV), which defines love as the combination of a passive sensation (e.g., emotional arousal) and an active evaluative judgment (e.g., perceiving the beloved as valuable). This model reconciles love’s felt immediacy with its rational accountability, addressing the philosophical puzzle of why love seems both involuntary and yet subject to justification. Among other benefits, this view explains the irrationality of love, namely, that we tend to want better for those we love than what we believe they deserve.

There are, of course, objections worth handling, such as how love can be causally linked to judgments about a beloved rather than merely being a regular co-occurrence of feelings and thoughts. To find out more, you’ll simply have to wait for me to finish the current paper I’m writing.

Stay tuned.

Man with the Golden Arm

Imitation is the sincerest form of flattery that vice can pay to virtue.

John Harrison - the man with the golden arm - died February 17th, 2025. Mr. Harrison was inspiring, both morally and intellectually, in ways that dovetailed with my research on epistemic and moral responsibility. His blood plasma was used to synthesize a cure for Rhesus disease, and his donations are estimated to have saved the lives of over 2 million infants worldwide. Remarkable.

Mr. Harrison has been a staple example in my work on responsibility, e.g. Ties that Undermine, Judgments of Moral Responsibility in Tissue Donation Cases, and Speak No Evil. I leave you with a passage from the last of these works:

“…Rhesus disease kills millions of infants around the world, and there is – at present – not a cure that can be synthesized in a lab without the blood plasma of John Harrison. Harrison’s donations have saved the lives of approximately 2.4 million infants worldwide. It seems plausible he has a responsibility to donate. But it also seems Harrison is uniquely positioned to help, and carries great responsibility to do so. To see this, consider if each of us knew we were able to provide blood plasma that could be used to synthesize a vaccine for Rhesus disease, but none of us donated, then we would have all done something morally wrong, but none of us would have obviously done anything worse than anyone else in this context. In contrast, in our context where John Harrison is uniquely able to provide this aid and is aware of that fact, John refraining from donating blood plasma seems morally worse than it would be in the context where everyone – himself included – could donate but decided not to. Perceived rarity to provide aid influences judgments of moral responsibility...”

Mapping PROV to BFO

The PROV Ontology (PROV-O) is a World Wide Web Consortium (W3C) recommended ontology used to structure data about provenance across a wide variety of domains. Basic Formal Ontology (BFO) is an ISO/IEC standard top-level ontology used to structure a wide variety of ontologies, such as the OBO Foundry ontologies and the Common Core Ontologies (CCO). To enhance interoperability between these two ontologies, their extensions, and data organized by them, an alignment is presented according to specific mapping criteria and a methodology that prioritizes structural and semantic considerations. The ontology alignment is evaluated by checking its logical consistency against canonical examples of PROV-O instances and by querying, in SPARQL, for terms that do not satisfy the mapping criteria. A variety of semantic web technologies are used in support of FAIR (Findable, Accessible, Interoperable, Reusable) principles.
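
As a rough sketch of the second evaluation step, a query in the following spirit flags PROV-O classes that lack any asserted subclass or equivalence link into BFO; the criteria formalized in the paper are more fine-grained, so read this as an assumed simplification rather than the published query.

    PREFIX prov: <http://www.w3.org/ns/prov#>
    PREFIX owl:  <http://www.w3.org/2002/07/owl#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

    # Report PROV-O classes with no subclass or equivalence link to a BFO class.
    SELECT DISTINCT ?provClass
    WHERE {
      ?provClass a owl:Class .
      FILTER(STRSTARTS(STR(?provClass), STR(prov:)))
      FILTER NOT EXISTS {
        { ?provClass rdfs:subClassOf ?target . }
        UNION
        { ?provClass owl:equivalentClass ?target . }
        FILTER(STRSTARTS(STR(?target), "http://purl.obolibrary.org/obo/BFO_"))
      }
    }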

Full paper open access in Scientific Data.

Source: https://arxiv.org/pdf/2408.03866

bCLEARer & Ontologization Space

Check out the recent arXiv preprint “Extending the design space of ontologization practices: Using bCLEARer as an example,” co-authored by Chris Partridge, Andrew Mitchell, Sergio de Cesare, and myself.

We outline how the design space for the ontologization process is richer than current practice would suggest. We investigate the possibility of designing a range of radically new practices, providing examples of the new practices from Chris’s work over the last three decades with an outlier methodology: bCLEARer. We also suggest that setting an evolutionary context for ontologization helps one to better understand the nature of these new practices by positioning digitalization (the evolutionary emergence of computing technologies) as the latest step in a long evolutionary trail of information transitions.

Fourfold Pathogen Reference Ontology Suite

The Fourfold Pathogen Reference Ontology Suite includes four pathogen-specific extensions. The Virus Infectious Disease Ontology (VIDO) focuses on viruses such as SARS-CoV-2, offering detailed representations of viral taxonomy, replication mechanisms, and disease manifestations. The Bacteria Infectious Disease Ontology (BIDO) provides a structured framework for bacterial pathogenesis, taxonomy, and diseases, emphasizing key mechanisms like bacterial adhesion and toxin production. The Mycosis Infectious Disease Ontology (MIDO) tackles fungal infectious diseases, addressing fungal taxonomy, antifungal resistance, and host-pathogen interactions. The Parasite Infectious Disease Ontology (PIDO) models parasitic life cycles, pathogenesis, and host-parasite relationships. These modular ontologies follow a hub-and-spoke methodology, with the Infectious Disease Ontology (IDO) serving as the central hub to ensure semantic consistency and modularity while minimizing redundancy.
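
In OWL terms, the hub-and-spoke arrangement amounts to each spoke ontology importing the IDO hub (and, through it, BFO), so shared infectious-disease terms are defined once and reused; the IRIs below are illustrative placeholders rather than the published ones.

    @prefix owl: <http://www.w3.org/2002/07/owl#> .

    # Each pathogen-specific spoke imports the Infectious Disease Ontology hub.
    <http://example.org/vido.owl> a owl:Ontology ;
        owl:imports <http://example.org/ido.owl> .

    <http://example.org/bido.owl> a owl:Ontology ;
        owl:imports <http://example.org/ido.owl> .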

More details here: https://arxiv.org/pdf/2501.01454

BFO-CCO Office Hours

Given the growing importance of Basic Formal Ontology and the Common Core Ontologies suite in defense and intelligence, biology and medicine, and service and manufacturing, there is a need for transparency concerning the development and maintenance of these artifacts.

With this in mind, the lead developers of BFO and CCO will hold biweekly "office hours" for stakeholders with questions, concerns, comments, or compliments regarding these standards. These office hours will be stakeholder-led, in that discussion during these one-hour sessions will be driven by the stakeholders attending the meetings.

Logistics

  • When: Biweekly on Fridays, starting May 31st 2024, 11am - 12pm

  • Where: Virtual Meetings on Teams

If you are interested in joining one or more office hours, please contact John Beverley at johnbeve[@]buffalo.edu. You will be provided a Teams invite for the scheduled time.

In addition to the biweekly office hours, there is an associated Slack channel for the group where stakeholders may continue conversation with the BFO and CCO leads. As above, please contact John Beverley at johnbeve[@]buffalo.edu to be added to the BFO-CCO Office Hours Slack channel.

As stakeholder questions are addressed, we will also establish an "FAQ" where stakeholders will be directed for vetted answers to commonly posed questions.

Recent NCOR Accepted Work

Members of NCOR in Buffalo have been rather busy submitting work for the 2024 conference season. Here are a few submissions accepted to various conferences (full papers unless otherwise stated):

Convergence of LLMs and Ontologies

The rapid development and deployment of Large-Language Models (LLMs) has led to growing interest in leveraging ontologies and knowledge graphs to enhance LLM capabilities and address limitations. Combining the semantically rich architectures provided by ontologies and knowledge graphs with the generative strengths of LLMs promises to provide a path towards more explainable artificial intelligence systems, more trustworthy output, and a deeper understanding of vulnerabilities arising from integrated architectures.

On July 15th I will be hosting a Workshop on the Convergence of Large Language Models and Ontologies, as part of the 2024 Formal Ontology in Information Systems conference in Enschede, Netherlands. This workshop, associated with a special issue of Applied Ontology, is dedicated to exploring the convergence of knowledge representation and LLM strategies, design patterns, models, and benchmarks. We aim to bring together researchers, practitioners, and enthusiasts from industry, academia, and government in the interest of exploring possible convergence points and advancing each field.

More information as the program develops can be found here.

Drowning in a Rising Tide

Image from Gartner Newsroom

Gartner’s recent survey of nearly 500 chief data and analytics officers (CDAOs) highlights a shortage of skilled staffing for Generative AI (GenAI). Over half of the CDAOs are at least piloting GenAI; it’s accordingly crucial to hire new talent.

Knowledge representation is increasingly important for GenAI. Ontologies and knowledge graphs are particularly vital given how they embed formal relationships across data, helping organizations make better decisions and understand their business processes.

I cannot help but worry, however, that the market demand for knowledge representation will continue to outstrip the supply, so that companies will do what they’ve done in the past: hire just anyone who claims to understand knowledge graphs. The concern here is that hiring unqualified individuals will lead to unsatisfying deliverables, which will in turn lead to complaints that knowledge representation is the problem, rather than the lack of talent.

We are training new ontologists here at the University at Buffalo, as quickly as we can. I hope those sympathetic to the promise of semantic interoperability are doing the same.

Enhancing Object-Based Production Conference Part 2

March 21-22, 2024, Tampa FL

Hosted by Celestar

The Enhanced Object-Based Production (EOBP) conference (website and speaker slides here) marked a significant collaborative effort by:

  • Celestar Corporation

  • The National Center for Ontological Research (NCOR)

  • Summit Knowledge Solutions

  • RTX Corporation

  • CUBRC

  • SAIC

  • Maxar Technologies

  • Sensepoint

  • Senior Government representatives.

With approximately 45 participants, the conference aimed to leverage Object Based Production (OBP), ontology, and Referent Tracking methodologies to enhance intelligence workflows and data management strategies.

The event kicked off with presentations on foundational concepts such as the Basic Formal Ontology (BFO), led by John Beverley, assistant professor at the University at Buffalo and co-director of NCOR, who emphasized the critical role of interoperability and data integration. Barry Smith, professor at the University at Buffalo and co-director of NCOR, explored Referent Tracking, a precise methodology for tracking entities across data sets, which underpins reliable data referencing and interoperability. Jim Tuson discussed the nuances of OBP, which focuses on the systematic handling of objects of interest to streamline intelligence processes.

The afternoon sessions delved deeper into Object Based Intelligence and Production (OBI/OBP) with John Sweet providing insights into how real-world objects are encapsulated within databases to bolster intelligence analysis. Forrest Hare followed with a discussion on the challenges and solutions related to ’track’ and how to track objects in terms of space and time. The keynote by David Limbaugh underscored the potential of enhancing OBP through ontology-based approaches, proposing the adoption of realist ontologies to ensure data model precision and reusability.

Day two commenced with Mark Jensen discussing 'stasis' in the Common Core Ontologies (CCO), which facilitates stability in data models amidst change, followed by Erik Thomsen's presentation on the significance of composable and strongly typed ontologies. These sessions highlighted the necessity for robust ontologies that can adapt to complex information challenges, enhancing knowledge management in intelligence operations.

The conference concluded with a discussion of spatial modeling within ontological frameworks by John Beverley, and a discussion on government data organization strategies, with notable presentations by Bill Mandrick and Ryan Riccucci. Mandrick's talk suggested the inclusion of new terms in BFO and explored the taxonomy of functions, while Riccucci proposed more efficient data organization methods to alleviate government field action costs.

This gathering not only fostered a deeper understanding of EOBP but also set the stage for continued advancements in integrating ontology with object-based production, aiming for enhanced methodological cohesion and efficiency in intelligence operations. Plans for a follow-up conference are already underway, promising further progress in these critical areas.

Common Core Ontologies Governance Board

The Common Core Ontologies (CCO) [1] have become, over the last decade, an increasingly important resource for the U.S. Government. This mid-level ontology suite is currently used by dozens of organizations and deployed in critical systems in active operation. To date, CCO has been developed and maintained by Ron Rudnicki and the ontology team at CUBRC, Inc., who have overseen successive releases of CCO that have always sought to meet the needs of a growing number of end users. Through the ingenuity and discipline of this team, CCO has remained a touchstone for ontology development, reducing the time needed to develop jointly interoperable, high-quality domain ontologies aligned to Basic Formal Ontology (BFO). As a result, CCO has continued to receive greater adoption and will become a central component of the DoD-IC Ontology Foundry. In particular, both BFO and CCO have been directed for use in these communities as baseline standards for ontology development.

In recognition of the need for CCO to continue to scale and evolve, future releases of CCO will be overseen by The Common Core Governance Board, which will have an established charter and bylaws and will be composed of representatives from stakeholder organizations that have been involved in CCO development over the past decade. Initial members will include:

This board shall be charged with ensuring that CCO is openly available, well-maintained, responsive to user needs and technological and theoretical changes, and independent of any undue influence imposed by a single project or organization. Additionally, the board will pursue:

  • securing funding for the maintenance and development of CCO

  • ensuring that CCO is adopted as an IEEE standard mid-level ontology

  • creating a developer’s group for CCO that is empowered but subject to clear oversight

  • organizing conferences, virtual meetings, and so on in service of the CCO community

  • maturing CCO’s release process and associated documentation

  • encouraging academic research and the creation of robust, reusable domain ontologies under CCO

  • stabilizing CCO to ensure future releases are transparent and mindful of impacts on end users

  • coordinating with The Industrial Ontology Foundry, The Open Biomedical and Biological Ontology Foundry, and The DoD-IC Ontology Foundry

  • ensuring that CCO is responsive to the needs of U.S. Government stakeholders

[1] https://github.com/CommonCoreOntology/CommonCoreOntologies