Peter David Fagan

"A macroscopic observer in a microscopic universe."

Google Scholar  |  GitHub  |  Blog


I'm building Corca Health, an AI-assisted platform for ADHD screening and triage. More broadly, my work centres on leveraging AI to improve mental and physical wellbeing.

My research develops a physical theory aimed at establishing rigorous constraints for the safe development of artificial intelligence. This work introduces the Conservation-Congruent Encoding (CCE) framework, which anchors computation in physical conservation laws to derive measurable quantities that are relevant to understanding the properties of intelligent systems.

Outside of work, I enjoy CrossFit, sea swimming, and hiking.

Conceptual Notes

Quantifying Intelligence


Under the CCE framework, the relationship between intelligence, consciousness, and information processing is captured by a single identity:

χ = κ (Irev / Iirr)

Intelligence (χ)

Goal-directed work extracted per nat of irreversible (distinction-destroying) computation.

Consciousness (κ)

Goal-directed work extracted per nat of preserved (reversible) internal structure.

Processing Efficiency (Irev / Iirr)

Preserved information relative to irreversible information — how much internal structure is reused versus destroyed.

Definitions

  • W — goal-directed work performed on the environment
  • Iirr — irreversible information processed (distinction-destroying computation)
  • Irev — preserved (reversible) information processed (distinctions maintained without irreversible cost)
  • χ = W / Iirr — intelligence: goal-directed work extracted per nat of irreversible information
  • κ = W / Irev — consciousness: goal-directed work extracted per nat of preserved information

Derivation

Begin with the definitions of intelligence and consciousness:

χ = W / Iirr,  κ = W / Irev

Solve the consciousness equation for W:

W = κ Irev

Substitute into the intelligence equation:

χ = κ Irev / Iirr

Simplify to obtain the identity:

χ = κ (Irev / Iirr)
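The identity can be checked numerically. The sketch below uses arbitrary illustrative values for W, Irev, and Iirr (none of these numbers come from the text) and confirms that χ = κ(Irev/Iirr) holds by construction:

```python
# Illustrative quantities (arbitrary values; work units and nats):
W = 12.0      # goal-directed work performed on the environment
I_irr = 3.0   # irreversible (distinction-destroying) information, in nats
I_rev = 6.0   # preserved (reversible) information, in nats

chi = W / I_irr             # intelligence: work per nat of irreversible processing
kappa = W / I_rev           # consciousness: work per nat of preserved structure
efficiency = I_rev / I_irr  # processing efficiency: preserved vs destroyed

# The identity chi = kappa * (I_rev / I_irr) follows from the definitions:
assert abs(chi - kappa * efficiency) < 1e-12
print(chi, kappa, efficiency)  # → 4.0 2.0 2.0
```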

Platonic Observer Fallacy


This note is to be completed.

Toward a Physical Theory of Intelligence

The following research notes are part of an ongoing effort to map and structure a developing physical theory. To keep pace with theoretical ideation, I use AI tools to rapidly draft and formalize my work into research notes.

Please read these documents with the understanding that they are living drafts:

  • The core architecture: the initial constraints, concepts, and theoretical leaps are human-authored.
  • The textual generation: AI-assisted, to accelerate the drafting process.
  • The content is under active validation: I am continuously reviewing, mathematically checking, and refining these notes. Until finalized, they should be treated as developing hypotheses rather than rigorously proven theory.

Conservation-Congruent Encodings


Peter David Fagan

Preprint (v2), 2026

PDF

A conservation-congruent encoding (CCE) is a physically realized macroscopic distinction, unlike the abstract, substrate-independent notion of information used in traditional information theory. Under a chosen coarse-graining, it is represented by protected macroscopic regions and their associated world-tubes that persist under ambient fluctuations, are maintained by dynamical invariants tied to conserved quantities, and can be irreversibly merged only through dissipative export into explicitly modeled channels. This note gives a minimal definition of a CCE.

Cosmological Horizons as Epistemic Bounds of Conservation-Congruent Encodings


Peter David Fagan

Preprint, 2026

PDF

The foundations of standard cosmology rely on modelling reality with continuous macroscopic field equations. This note identifies an observer-resource assumption hidden by such equations and analyzes it using the Conservation-Congruent Encoding (CCE) framework. Within CCE, a projection Π is a coarse-graining of physical reality whose erasure, refinement, and maintenance carry energetic or informational costs. The note argues that several horizon-like limits also admit an observer-indexed operational reading within this ledger. Forward in time, heat death is read as Predictive Dissipation: the irreversible loss of macroscopic signal when a projection truncates the metric exhaust required to hold that signal distinct from the bath. Backward in time, the Big Bang singularity is read as Retrodictive Divergence: the divergence of the Landauer-scale cost required to re-instantiate erased branch distinctions under an assumption of costless resolution. Cosmological expansion and redshift are treated as standard geometric identities with an additional CCE bookkeeping interpretation, while gravitational singularities mark lower area-capacity limits for physically instantiated projections. The result is not a new dynamics, but a stricter operational license for using continuous models: observers do not access an infinite continuous universe at arbitrary precision, but work within a finite epistemic bubble bounded by their physical embedding.

Emergence of the Physical Laws of a Macroscopic Observer


Peter David Fagan

Preprint, 2026

PDF

We ask how apparently fundamental laws can arise from the physics of observation itself. In the Conservation-Congruent Encoding framework, an observer is a finite material device whose records must be stored and repeatedly reset. We make that bookkeeping explicit through developing an example case of a one-bit observer: a particle in a symmetric double-well potential immersed in a homogeneous thermo-acoustic medium. Incoming acoustic packets flip the bit reversibly, while a time-dependent control protocol restores the ready state. Because reset is logically irreversible and occurs while the bit remains coupled to the bath, each cycle dissipates at least kBT0ΔHcg, so under a reset rate ν the observer acts as a localized heat source with mean power P ≥ νkBT0ΔHcg. In steady state this produces a thermal halo δT(r) = P/(4πkr), which in a focusing medium induces the refractive profile n(r) = n0(1 + Γ/r). The resulting ray bending enlarges the capture cross-section and, in the weak-field limit, is mathematically equivalent to motion in an attractive 1/r potential. An external analyst restricted to the reduced event stream can therefore mistake self-induced bath distortion for an intrinsic force law. We then sketch a speculative gravitational extrapolation in which erased information is absorbed by local horizons, Newton's constant becomes Vacuum Informational Compliance, and a positive cosmological constant sets a deep-field crossover scale; with additional equilibrium assumptions, the weak-field closure can then be lifted toward the Einstein field equations.
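To make the scales concrete, here is a back-of-envelope sketch of the bound P ≥ νkBT0ΔHcg and the halo δT(r) = P/(4πkr). The reset rate ν, bath temperature T0, thermal conductivity k, and the choice ΔHcg = ln 2 (one bit per reset) are all illustrative assumptions, not values taken from the note:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K

# Assumed (illustrative) parameters for the one-bit observer:
T0 = 300.0           # bath temperature, K
nu = 1e9             # reset rate, Hz (hypothetical)
dH_cg = math.log(2)  # coarse-grained entropy erased per reset (one bit, in nats)
k_th = 0.6           # thermal conductivity of the medium, W/(m*K) (water-like)

# Minimum dissipated power under the bound P >= nu * k_B * T0 * dH_cg:
P_min = nu * k_B * T0 * dH_cg

# Steady-state thermal halo at distance r: dT(r) = P / (4*pi*k*r)
def delta_T(r, P=P_min):
    return P / (4 * math.pi * k_th * r)

print(f"P_min = {P_min:.3e} W")
print(f"dT at 1 micron = {delta_T(1e-6):.3e} K")
```

Even at a gigahertz reset rate the halo is tiny, which is consistent with the note's framing: the effect matters as a matter of principle for an analyst restricted to the reduced event stream, not as a laboratory-scale force.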

Revisiting Classic Thought Experiments to Measure Consciousness for Artificial Intelligence Safety


Peter David Fagan

Preprint, 2026

PDF

This research note revisits Leibniz's mill, Turing's imitation game, and Searle's Chinese Room through the Conservation-Congruent Encoding (CCE) framework. It formalises a toy symbolic setting in which successful behaviour is measured by task performance (Wcausal,T), while the efficiency with which preserved internal structure supports that behaviour is measured by operational consciousness (κT). Within this setup, an uncompressed lookup system and a compact generative system can in principle achieve comparable behavioural success, yet diverge sharply in κT: the former relies on an expanding standing store of unreused mappings, whereas the latter reuses compact internal structure. The note therefore reframes classic disputes about understanding by separating outward performance from the organisation that sustains it, and motivates why this distinction may matter for later AI-safety analysis.
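The divergence in κT between a lookup system and a generative system can be illustrated with a toy calculation. All numbers below (query count, nats per stored mapping, size of the generative rule) are hypothetical; only the qualitative contrast comes from the note:

```python
# Toy comparison: both systems answer Q queries correctly, so they
# achieve the same causal work W_causal. They differ in the preserved
# internal structure I_rev used to sustain that behaviour.
Q = 10_000           # queries answered (hypothetical)
W_causal = float(Q)  # one unit of goal-directed work per correct answer

# Uncompressed lookup system: a standing store of unreused mappings,
# one per query seen, so I_rev grows with Q.
I_rev_lookup = 50.0 * Q        # nats of preserved store (illustrative)

# Compact generative system: a fixed rule reused across all queries,
# so I_rev stays constant regardless of Q.
I_rev_generative = 50.0 * 40   # nats of preserved structure (illustrative)

kappa_lookup = W_causal / I_rev_lookup
kappa_generative = W_causal / I_rev_generative

print(kappa_lookup, kappa_generative)  # → 0.02 5.0
```

Equal outward performance, sharply different κT: exactly the separation between behaviour and the organisation that sustains it that the note uses to reframe the classic thought experiments.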

Physical Constraints on Realizing P=NP with Applications to Artificial Intelligence Safety


Peter David Fagan

Preprint, 2026

PDF

We apply the Conservation-Congruent Encoding (CCE) framework to the P versus NP problem by explicitly modeling the thermodynamic tradeoff between reversible information processing (Irev) and irreversible information processing (Iirr). While constructive algorithms theoretically avoid exponential candidate generation, worst-case NP-complete problems possess constraint topologies that are logically irreducible. Mapping this implicitly exponential constraint density into a poly(N) physical memory forces continuous intermediate state erasure. Under the physical identity χ = κ(Irev/Iirr), we demonstrate that processing irreducible logical structures strictly triggers an exponential Landauer tax, yielding the physical contradiction poly(N) + poly(N) ≥ Θ(2^N). We present a physical constraint on scalable realizations of worst-case search on digital substrates, independent of formal mathematical shortcuts.
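A minimal sketch of the Landauer-tax scaling the note invokes. The per-erasure cost kBT ln 2 is the standard Landauer bound; treating the total tax as one bit-erasure per member of a 2^N candidate set is a deliberate simplification of the paper's argument, shown only to convey how the exponential dominates any polynomial budget:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # ambient temperature, K (illustrative)

def landauer_erasure_cost(n_states):
    """Minimum heat exported when collapsing n_states distinguishable
    states into one (Landauer bound), in joules."""
    return k_B * T * math.log(n_states)

# Fitting an implicitly exponential constraint structure into poly(N)
# memory forces repeated erasure of intermediate states; if erasures
# scale with the 2^N candidate count, the total tax is Theta(2^N):
for N in (20, 40, 60):
    per_erasure = landauer_erasure_cost(2)  # one bit per erased distinction
    total = (2 ** N) * per_erasure          # exponential Landauer tax
    print(N, f"{total:.3e} J")
```

By N = 60 the lower bound is already macroscopic (millijoules per search instance at room temperature), while any poly(N) energy budget grows only polynomially.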

Intelligence, Consciousness and Designing for Computational Efficiency


Peter David Fagan

Preprint, 2026

PDF

We present a physical framework for computational efficiency in AI by mapping classical complexity to informational costs measured in nats. Algorithmic time is treated as irreversible information processing (Iirr), while space is treated as preserved information processing (Irev). Using these quantities, we define operational intelligence (χ = Wachieved/Iirr), structural consciousness (κ = Wachieved/Irev), and retention (ρ = Wachieved/Wtarget) to separate task demand from architecture-level performance. We ground these metrics in lightweight Conservation-Congruent Encoding (CCE) conditions, which specify when coarse-grained informational states admit metastable physical realizations and when irreversible state collapse must export entropy through conserved channels. We then apply the framework to sorting algorithms and modern sequence models, showing that dominant scaling bottlenecks are physical routing burdens rather than software abstraction alone. Under this lens, Euclidean GPU-style layouts impose congestion costs that can preserve near-quadratic pressure even when nominal attention complexity is reduced. As an illustrative topology-only proxy, a constrained optimization with N = 128 and k ≤ 4 shifts from a planar baseline (D = 21) to an expander-like layout (D = 5), reducing the hop-count routing proxy from Ihopirr = 2688 to Ihopirr = 640 (a 76% reduction) and increasing the proxy χ by 4.2× at fixed transceiver budget. Because this surrogate omits post-layout wire length, capacitance, and Rent-style pin-limited embedding effects, these gains are reported as upper bounds pending physical place-and-route validation.
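The quoted figures are consistent with a simple proxy of the form Ihopirr = N × D (128 × 21 = 2688, 128 × 5 = 640); treating that as the proxy is my inference from the numbers, not a definition stated in the abstract:

```python
# Topology-only routing proxy: with N nodes and graph diameter D,
# take I_hop_irr = N * D as the hop-count burden. (Inferred form:
# it reproduces the quoted figures 128*21 = 2688 and 128*5 = 640.)
N = 128

def hop_proxy(diameter, n=N):
    return n * diameter

planar = hop_proxy(21)        # planar baseline, D = 21
expander = hop_proxy(5)       # expander-like layout, D = 5

reduction = 1 - expander / planar
chi_gain = planar / expander  # proxy chi scales inversely with I_hop_irr

print(planar, expander)       # → 2688 640
print(f"{reduction:.0%}")     # → 76%
print(f"{chi_gain:.1f}x")     # → 4.2x
```

As the abstract cautions, this is a topology-only surrogate: post-layout wire length, capacitance, and pin-limited embedding effects would all erode these gains in a physical place-and-route.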

Toward a Physical Theory of Intelligence


Peter David Fagan

Preprint (v2), 2026

PDF

While often treated as abstract algorithmic properties, intelligence and computation are ultimately physical processes constrained by conservation laws. We introduce the Conservation-Congruent Encoding (CCE) framework as a unified, substrate-neutral physical framework for studying intelligence. We propose that information processing emerges when open systems undergo irreversible transitions, carving out macroscopic states from underlying reversible micro-dynamics. Generalizing Landauer's principle to arbitrary conserved quantities via metriplectic flows, we derive a universal bound for macroscopic computation. This yields physical metrics for intelligence and an operational analogue for consciousness, quantifying an agent's ability to extract work from the environment while minimizing its own dissipative dynamics. Applying CCE to the limits of physical observation, we model measurement as an active coarse-graining process rather than a passive projection. At the quantum scale, CCE recovers the Lindblad Master Equation, consistent with modelling decoherence as the dissipative exhaust required to record a measurement. Scaling to cosmological limits, we explore the hypothesis that gravity emerges as the macroscopic geometric footprint of these bounds. We show that, under this hypothesis, measurement-induced dissipation is consistent with a volumetric phase-space collapse, offering a dynamical route to the Bekenstein-Hawking area law. Equating the Landauer exhaust of this coarse-graining to horizon deformation outlines a limiting-case recovery of the Einstein Field Equations. Ultimately, by establishing a substrate-neutral link between thermodynamic dissipation, quantum measurement, and spacetime geometry, CCE provides physical constraints for understanding both natural and artificial intelligence.

This is a working manuscript that proposes a unified physical framework for studying intelligence. Several of the broader implications—particularly those bridging macroscopic information bounds with quantum and cosmological limits—are presented as formal hypotheses to be refined through extended proofs and empirical validation. The overarching goal is to anchor abstract computation in fundamental physical laws and, in doing so, establish rigorous, geometric constraints for the safe development of artificial intelligence.

Other Papers

Keyed Chaotic Dynamics For Privacy-Preserving Neural Inference


Peter David Fagan

Preprint, 2025

Project Page / arXiv

We introduce a framework for applying keyed chaotic dynamical systems to encrypt and decrypt tensors in machine learning pipelines. This lightweight, deterministic approach enables authenticated inference without modifying model architectures or requiring retraining. Designed for privacy-first AI, this method provides a new building block at the intersection of cryptography, dynamical systems, and neural computation.
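As a rough illustration of the general idea (not the paper's actual construction), the sketch below derives an additive keystream for a tensor from a logistic map seeded by a secret key, so the same key deterministically reproduces the stream for decryption:

```python
import numpy as np

def keystream(key, shape, burn_in=100):
    """Deterministic keystream from a logistic map seeded by a secret
    key in (0, 1). Illustrative only: the paper's keyed chaotic
    dynamics may differ substantially from this toy map."""
    x = key
    # Discard transient iterations so the stream depends sensitively
    # on the key (the hallmark of a chaotic keyed system).
    for _ in range(burn_in):
        x = 3.99 * x * (1 - x)
    out = np.empty(int(np.prod(shape)))
    for i in range(out.size):
        x = 3.99 * x * (1 - x)
        out[i] = x
    return out.reshape(shape)

def encrypt(tensor, key):
    return tensor + keystream(key, tensor.shape)

def decrypt(tensor, key):
    return tensor - keystream(key, tensor.shape)

rng = np.random.default_rng(0)
activations = rng.normal(size=(4, 8))  # a hypothetical activation tensor
ct = encrypt(activations, key=0.3141592)
assert np.allclose(decrypt(ct, key=0.3141592), activations)
```

Because no model weights are touched, a scheme of this shape slots in around an existing network without retraining, which is the property the abstract highlights.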

Learning from Demonstration with Implicit Nonlinear Dynamics Models


Peter David Fagan, Subramanian Ramamoorthy

Preprint, 2024

Project Page / arXiv

We introduce a new recurrent neural network layer that incorporates fixed nonlinear dynamics models, where the dynamics satisfy the Echo State Property. We show that this layer is well suited to overcoming compounding errors under the learning-from-demonstration paradigm. By evaluating neural network architectures with and without our layer on the task of reproducing human handwriting traces, we show that the introduced layer improves task precision and robustness to perturbations, all while maintaining low computational overhead.
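A common way to obtain the Echo State Property in a fixed reservoir is to scale the recurrent weight matrix to a spectral radius below one, making the state a fading memory of the input history. The sketch below uses that standard construction, which may differ from the paper's specific layer:

```python
import numpy as np

rng = np.random.default_rng(42)

# Fixed (untrained) reservoir: random recurrent weights rescaled so the
# spectral radius is below 1, the usual recipe for the Echo State
# Property. (Illustrative construction, not the paper's exact layer.)
n_in, n_res = 2, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius -> 0.9

def step(h, x):
    # tanh keeps the state bounded; the contractive W makes the effect
    # of old inputs decay, so small errors do not compound.
    return np.tanh(W @ h + W_in @ x)

h = np.zeros(n_res)
for t in range(200):  # e.g. driving the reservoir with a 2-D pen trace
    x = np.array([np.sin(0.1 * t), np.cos(0.1 * t)])
    h = step(h, x)
print(h[:3])
```

In practice only a readout on top of such a reservoir is trained, which is how the fixed dynamics keep the computational overhead of the layer low.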

SECURE: Semantics-Aware Embodied Conversation Under Unawareness for Lifelong Robot Learning


Rimvydas Rubavicius, Peter David Fagan, Alex Lascarides, Subramanian Ramamoorthy

Preprint, 2025

Project Page / arXiv

In this work, we introduce an interactive task learning framework to cope with unforeseen possibilities by exploiting the formal semantic analysis of embodied conversation.

DROID: A Large-Scale In-the-Wild Robot Manipulation Dataset


The DROID Dataset Team

Robotics: Science and Systems (R:SS), 2024

Project Page / arXiv

In this work, we introduce DROID (Distributed Robot Interaction Dataset), a diverse robot manipulation dataset comprising 76k demonstration trajectories or 350 hours of interaction data, collected across 564 scenes and 86 tasks by 50 data collectors in North America, Asia, and Europe over the course of 12 months.

Open X-Embodiment: Robotic Learning Datasets and RT-X Models


Open X-Embodiment Team

IEEE International Conference on Robotics and Automation (ICRA), May 2024

Project Page / arXiv

In this work, we introduce Open X-Embodiment, a comprehensive collection of robotic learning datasets and RT-X models. These datasets and models facilitate research in embodied AI by providing large-scale, diverse, and realistic environments for training robotic systems. The datasets cover a wide range of tasks and scenarios, enabling robots to learn complex behaviors through interaction with their environment.

Software

MoveIt 2 Python Library


Peter David Fagan

Google Summer of Code, 2022

Code

This is the official Python binding for the MoveIt 2 library.