Research

Programs in auditable AI and information integrity.

The Institute pursues a small number of focused research programs at the intersection of formal epistemic reasoning, applied AI, and the governance of information systems.

Research agenda

The Institute's research addresses a single underlying question: how can complex information systems be made auditable without sacrificing the capabilities that make them useful?

This question recurs across domains — defence and intelligence, regulated AI deployment, journalism, and democratic infrastructure — and the Institute's programs are organized to engage it across multiple levels of abstraction: formal logic, software architecture, and institutional practice.

Flagship program — Epistememe

Epistememe is the Institute's flagship research program on auditable information provenance infrastructure for multi-agent AI systems.

Modern AI systems increasingly consist of multiple interacting agents — each consuming, transforming, and producing claims. Traditional logging and observability tooling was not designed to answer the questions these systems raise:

  • Who knew what, and when?
  • What evidence supports a given conclusion?
  • How did a belief propagate across the system, and where can it be revised?
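One way to make such questions answerable by construction is to store each claim, together with the claims it rests on, as a node in a provenance graph, so that "who knew what, when" becomes a graph traversal rather than a log search. The following is a minimal illustrative sketch; the names (Claim, ProvenanceGraph, who_knew_when) are hypothetical and not part of Epistememe:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Claim:
    claim_id: str
    agent: str              # which agent asserted or derived the claim
    content: str            # the claim itself, in natural language
    supports: tuple         # claim_ids of the evidence this claim rests on
    asserted_at: datetime

class ProvenanceGraph:
    """Audit trail as the primary data structure: every claim is recorded
    with its evidence links, so provenance queries are first-class."""

    def __init__(self) -> None:
        self._claims: dict[str, Claim] = {}

    def record(self, claim: Claim) -> None:
        self._claims[claim.claim_id] = claim

    def who_knew_when(self, claim_id: str) -> list[tuple[str, datetime]]:
        """Answer 'who knew what, and when?' for one claim by walking
        its evidence chain back to the original sources."""
        seen: set[str] = set()
        order: list[tuple[str, datetime]] = []

        def walk(cid: str) -> None:
            if cid in seen or cid not in self._claims:
                return
            seen.add(cid)
            c = self._claims[cid]
            order.append((c.agent, c.asserted_at))
            for parent in c.supports:
                walk(parent)

        walk(claim_id)
        return order

    def evidence_for(self, claim_id: str) -> list[str]:
        """Answer 'what evidence supports a given conclusion?'"""
        c = self._claims.get(claim_id)
        if c is None:
            return []
        return [self._claims[p].content
                for p in c.supports if p in self._claims]
```

Because the evidence links are stored explicitly rather than reconstructed from logs, revision is also local: retracting one claim identifies exactly which downstream beliefs must be re-derived.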

Epistememe approaches these questions by combining three bodies of work:

  1. Formal epistemic logic — including Dynamic Epistemic Logic, Public Announcement Logic, and AGM belief revision — to represent knowledge, belief, and their updates with mathematical precision.
  2. Large language models — to translate between natural-language evidence and formally verifiable epistemic structures.
  3. Provenance-first systems engineering — treating audit trails as the primary data structure, not as a by-product of logging.
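As one example of the kind of update principle the first component formalizes, Public Announcement Logic characterizes how a truthful public announcement of a formula φ changes an agent's knowledge through a reduction axiom:

```latex
[!\varphi]\,K_a\psi \;\leftrightarrow\; \bigl(\varphi \rightarrow K_a\,[!\varphi]\psi\bigr)
```

Read: agent a knows ψ after the announcement of φ exactly when, if φ is true, a knows that ψ will hold once φ has been announced. Axioms of this form let a system compute, rather than merely log, how an announcement propagates knowledge across agents.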

The program's applied targets include multi-source intelligence fusion, AI-assisted decision systems under regulatory scrutiny (including regimes such as the EU AI Act and emerging North American AI compliance law), and infrastructure for information integrity in journalism and public discourse.

Dual-market relevance

The Institute's work is positioned to serve both Canadian and European markets. Under Canada's defence partnership with the European Union, Canadian-led work in areas aligned with the Institute's research agenda is SAFE-eligible for EU markets, in addition to its relevance to Canadian federal programs in defence innovation, cyber-security, and AI governance.

This dual-market alignment reflects a deliberate strategic posture: auditable AI is a transatlantic problem, and the institutional frameworks being built on both sides of the Atlantic are converging faster than the tools designed to satisfy them.

Partnerships

The Institute collaborates with academic researchers, industry practitioners, and government programs. Inquiries about research partnerships, sponsored studies, and grant collaborations are welcomed via Contact.