Contact, Methodology & Contributions
Harmonic Field Theory is an open framework under active development. If you have questions not answered in the FAQ, or if you are interested in reviewing, collaborating, or contributing to the theory, please use the channels below.
Identity & Transparency
This project is an independent research experiment led by Niveque Storm. The Harmonic Field Theory (HFT) papers are the output of an AI-assisted workflow: I propose and constrain the core axiom; large language models are used as tools to produce and run code, and to surface, compare, and stress-test implications against established science. I retain authorship and editorial responsibility for every claim. There is no institutional affiliation or funding. The work is published openly for critique, replication, and possible collaboration.

Origin Note
This project began with a simple idea from neuroscience: the Bayesian prior as an encoded mirror. This is nothing new in science; the same idea goes by many names. The figure above demonstrates the basic concept: the present (qualia) as a bright seam where two halves of time meet. I noticed that what we call "past" and what we experience as "now" look like the same content seen through an inversion: two faces of one event. This is a holographic idea in which one side is information, or the event horizon, and the other is NOW, its reflection / singularity. When I pushed that hunch into physics, it became a loop: symmetry → distinction ⮂ reflection → closure/memory ←. No future required, just Now/Then as mirrors of each other. Later, as I applied the idea to physics, I was thinking about energy and mass when E = mc² clicked: energy and mass read as a pair rather than standalone things, two faces linked by a metric bridge (the c²).
That insight suggested every stable equation hides the same three steps: take a branch (→, like a √), swap faces (⮂), then square/close (←) to extract a scalar (•⊸). From that, a kind of Reflection Notation emerged: make the pair explicit { A ↔ B }, force the loop to close, and let the closure be the only place where “=” belongs.
This led to replacing the = sign with a closure bullet "•⊸", but only where true mirror symmetry holds. The framework enforces operator discipline through 18 tools, collectively called Mirror Notation (see the Papers page).
Scientifically, the project started as an attempt to understand what consciousness is; once the core idea was applied to any distinction in reality, it moved far from those roots and became a substrate-independent model. It reframes equality as something earned only at the end of a reflection loop: first pick a branch (choose a determinate solution or section), then apply an involutive duality (for example a dagger/adjoint, time reversal, or a boundary-to-bulk functor), and finally close via a metric or evaluation (inner product, contraction with the metric, trace, or pairing) to produce a scalar. Only at that scalar evaluation point is the special equality (•⊸) licensed. In predictive-processing terms, this is the phenomenological seam where prior and likelihood meet, with closure yielding the normalizing evidence as a scalar. In physics, for example, energy and mass are treated as two faces related by a metric bridge (c²), so (•⊸) attaches to the invariant obtained after identification across that bridge. Mathematically, the pattern aligns with dagger-compact evaluation (coevaluation/evaluation), star-algebra adjoints and norms (x*x), and trace-defined invariants; in holography, boundary data and bulk dynamics are equated only after mapping through the duality and performing the appropriate evaluation (partition-function or observable closure). Mirror Notation makes the pair explicit, enforces dualization and metric closure, and reserves equality for the unique scalar evaluation point, which is precisely where (•⊸) belongs.
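The three-step loop (branch, dualize, close) can be sketched numerically. Below is a minimal illustration using the star-algebra x*x closure named above, with complex conjugation as the involution; the function names are illustrative stand-ins, not part of Mirror Notation itself.

```python
import cmath

def branch(z):
    """Step 1 -- take a branch: pick one determinate root of z."""
    return cmath.sqrt(z)  # principal branch of the square root

def dualize(a):
    """Step 2 -- swap faces: apply an involutive duality (conjugation)."""
    return a.conjugate()

def close(a, a_dual):
    """Step 3 -- metric closure: pair the two faces into a real scalar."""
    return (a_dual * a).real  # x* x is real and non-negative

z = complex(3, 4)          # |z| = 5
a = branch(z)              # one face of the pair
s = close(a, dualize(a))   # only here is "equality" licensed
print(s)                   # ~5.0: the invariant |z| recovered at closure
```

The point of the sketch is that the scalar appears only after all three steps; neither the branch nor its dual alone carries the invariant.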
Methodology
The Seed and Aim of the Project
Start from a single proposed axiom — the Mirror Law — developed through prior independent study of neuroscience and philosophy of mind.
The experiment’s aim is to test whether this axiom can compress known physics and cognition into a single reflective framework while making falsifiable predictions about where current theory breaks down — without contradicting any well-settled experimental results.
The guiding principle: if the axiom fails, it should fail cleanly and visibly, saving time for the scientific community by showing exactly where it breaks.
This paper makes no claim to absolute originality. The HFT model itself demonstrates that all knowledge is inherently relational, with new ideas emerging from a complex tapestry of prior experiences and information. Every concept presented here, like all human innovations, is the derivative product of countless influences, many of which may be subconscious or untraceable.
In this light, we must recognize that the challenge of comprehensive attribution extends far beyond AI-generated content. How does one cite the music theory that inspired a new physics concept? Or acknowledge an 11th-grade anthropology teacher whose lessons sparked a chain of thought leading to a groundbreaking theory years later? AI simply makes explicit what has always been true of human cognition - our ideas are the product of our total Bayesian prior, shaped by millions of experiences that we can rarely fully articulate or trace.
Where possible, this work situates the reader by referencing established fields of knowledge and fundamental concepts. We mention the creators of key theories, such as holographic theory, and reference seminal ideas such as Wheeler's "it from bit." However, we must acknowledge that this will always be an imperfect enterprise. Many contributors will inevitably go unnamed, not out of disrespect, but due to the sheer impossibility of tracing every thread in the tapestry of knowledge. For instance, John Wheeler is mentioned many times, yet most of my core ideas came to me before I had ever heard his name. I find that indicative of the power of his work to permeate the information sphere of which I am a part: my ideas rhyme with many of his likely because I absorbed distilled forms of them during my 25-year career in Information Technology.
• Role of AI (tool, not oracle)
Use multiple large language models (LLMs) only as amplifiers of search and comparison, not as authorities.
LLMs act as a high-bandwidth interface to the public scientific record, used to:
- retrieve parallel formulations and counterexamples across diverse literature,
- propose possible links between domains that a single researcher might overlook,
- rapidly surface contradictions or missing pieces in existing theory.
Every model is run under strict prompt constraints that demand:
- explicit sources and citations,
- correct units and dimensional consistency,
- clear boundary conditions and known failure modes.
All outputs are filtered through human judgment and formal closure rules derived from the Mirror Law.
• Process (loop that produced the papers)
- Pose a narrow sub-question implied by the Mirror Law.
- Generate competing lines of reasoning using several independent LLMs.
- Filter against settled empirical results, discarding any line that conflicts with experimental data.
- Translate surviving paths into Mirror Notation and reflection mechanics:
  - open/gate vs. close/chamber structures,
  - lawful closures require even, dimensionless invariants,
  - non-closures are marked as ε-defects.
- Cross-check against standard formalisms, and, where possible, construct runnable sketches or simple computational tests.
- Iterate — thousands of loops — until a structure emerges that is:
  - internally symmetric under the Mirror Law, and
  - externally consistent with established science.
- Log all changes into structured outputs and iterate the base model. The original base model of collected assumptions was roughly 2,500 pages; over many iterations it boiled down to a single notation system (under 100 pages) and the two core papers you see on this website.
This cycle is deliberately adversarial at times: its goal is to break the theory at every step unless it truly holds. At other times it is deliberately creative: the goal is to push the AI to hallucinate into new vectors and land on something true. Think of it as feeding the AI lots of puzzle pieces and getting it to guess the picture despite never having seen it in the training data. Another analogy: by averaging all of human scientific thought, we guess the weight of the cow more accurately. Or: treat each silo of human knowledge as a vector, and average out the substrate-independent behavior and dynamics that all silos share.
• Quality Controls (to limit hallucination and bias)
- Multi-model cross-checks: independent re-derivations; high variance is a red flag.
- Source discipline: textbooks and peer-reviewed reviews prioritized; unsourced specifics are rejected.
- Dimensional/scale checks: all terms verified for units and orders of magnitude.
- Counterexample hunting: models are forced to generate arguments against the current hypothesis.
- Boundary labeling: every result tagged as:
  - aligned with settled science,
  - reinterpretive of existing results, or
  - speculative and awaiting falsification.
• Scope & Limits
This is not peer-reviewed work and does not appeal to institutional authority.
Where the framework ventures beyond tested regimes, statements are explicitly marked as hypotheses, meant to be falsifiable through either experimental observation or logical contradiction.
Readers are encouraged to reproduce, refute, or extend any step.
• End Result (how to read the outputs)
The outputs are hybrid products: most statements have been edited hundreds of times by both AI and human over many thousands of prompts. Each paper is:
- seeded by a human-defined axiom, logic, and insight,
- filtered through explicit logic and closure rules,
- enriched by LLM assistance: retrieval and comparison at scale, historical references and framing, cross-silo examples, placing data into charts, and writing and testing Python code,
- edited and organized by a human,
- fact-checked by both AI and human.
The result is a typed, executable notation that:
- organizes physics and cognition under a single reflective principle,
- flags illegal or inconsistent equations automatically,
- provides a clear map of what is known, what is reinterpreted, and what is new.
All claims are provisional and explicitly open to falsification.
Why This Methodology is Unique
A scientist should understand: this is not just “using ChatGPT to brainstorm.”
It is a structured, recursive experiment designed to see whether a speculative axiom can survive rigorous filtering.
Mirror Law as an organizing constraint
A single involutive rule, R² = I, is used to test whether diverse phenomena can be expressed as lawful two-pass closures.
- Physical domain (primary focus): quadratic invariants, Hilbert-space structure, gauge symmetries, and aspects of the quantum–classical interface framed as closure conditions.
- Information/cognition (exploratory): whether the same closure grammar constrains information processing and reportable experience. These applications are hypotheses, downstream of Papers I–II, and not required for their results.
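As a concrete illustration of the R² = I constraint, the following sketch checks whether a candidate operator is an involution. The example matrices (a parity flip, a face swap, a shear) are my own illustrative stand-ins, not operators defined in the HFT papers.

```python
import numpy as np

def is_involution(R, tol=1e-12):
    """Mirror Law condition: applying R twice must return the identity."""
    R = np.asarray(R, dtype=float)
    return np.allclose(R @ R, np.eye(R.shape[0]), atol=tol)

parity = np.diag([1, -1])            # spatial reflection on one axis
swap   = np.array([[0, 1], [1, 0]])  # exchange of two faces A <-> B
shear  = np.array([[1, 1], [0, 1]])  # NOT an involution: shears compound

print(is_involution(parity))  # True
print(is_involution(swap))    # True
print(is_involution(shear))   # False
```

The shear case shows the filter doing work: a transformation that accumulates rather than undoes itself fails R² = I and would be excluded as a lawful two-pass closure.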
Executable checker
The most testable artifact of the project.
- A linter that implements the closure rules (two-pass mirror, unit/scale consistency, parity/tier legality). (Papers 0, I)
- Flags pass/fail and classifies non-closures via the ε-ledger and defect tensors.
- Assists researchers in auditing equations/models for compliance with the stated constraints; highlights where conflicts with established physics appear.
- Tests the hypothesis that many lawful equalities can be recast as mirror closures; clear counterexamples would falsify that mapping.
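The checker itself is described in the papers; as a hedged sketch of just the unit-consistency piece, the following toy linter tracks dimensions as base-unit exponent vectors and asks whether both sides of a candidate equality share one dimension. The representation and function names are assumptions for illustration only, not the papers' implementation.

```python
# Toy sketch of the unit-consistency portion of an equation linter.
# Dimensions are exponent dicts over SI-style base units (L, M, T);
# this encoding is an illustrative assumption, not the HFT checker.

def dim(**exps):
    """Build a dimension from base-unit exponents, e.g. dim(M=1, L=2, T=-2)."""
    return dict(exps)

def mul(a, b):
    """Multiplying physical quantities adds their dimension exponents."""
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v != 0}

def closes(lhs, rhs):
    """A candidate equality can only close if both sides share one dimension."""
    return lhs == rhs

mass = dim(M=1)
c_squared = dim(L=2, T=-2)       # velocity squared
energy = dim(M=1, L=2, T=-2)     # joule in base units

print(closes(energy, mul(mass, c_squared)))        # True: E = m c^2 closes
print(closes(energy, mul(mass, dim(L=1, T=-1))))   # False: E = m c fails the lint
```

A full checker would add the other closure rules (two-pass mirror, parity/tier legality) on top of this kind of dimensional bookkeeping.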
This methodology aims to produce something falsifiable and useful, even if the broader philosophy is rejected.
- At minimum: a practical tool for equation audit and defect tagging.
- At maximum: a constraint-led route toward a deeper unification worth probing.
We welcome serious inquiries, critical analysis, and constructive collaborations.
General Inquiries
For questions about the basics of HFT, please first review the FAQ & Peer Review page.
🔗 FAQ & Peer Review
Academic / Research Contact
Researchers, physicists, mathematicians, or philosophers interested in reviewing HFT, offering critique, or exploring collaboration may contact us directly.
📧 Email: webtempestconsulting@gmail.com
Contributions
If you are interested in contributing to HFT through:
- Formal review of specific papers
- Developing simulations or models
- Extending the mathematical framework
- Exploring applications to philosophy of science
Please reach out to discuss possibilities.
Regards, Niveque Storm
IT & AI Consultant
