A team of physicists led by Mir Faizal at the University of British Columbia has demonstrated that the universe cannot be a computer simulation, according to research published in October 2025[1].

The key finding is that reality requires a kind of non-algorithmic understanding that cannot be reproduced computationally. The researchers used theorems from Gödel, Tarski, and Chaitin to argue that a complete description of reality cannot be achieved through computation alone[1:1].

The team proposes that physics needs a “Meta Theory of Everything” (MToE) – a non-algorithmic layer above the algorithmic one that determines truth from outside the mathematical system[1:2]. This, they argue, would allow phenomena like the black hole information paradox to be investigated without violating mathematical rules.

“Any simulation is inherently algorithmic – it must follow programmed rules,” said Faizal. “But since the fundamental level of reality is based on non-algorithmic understanding, the universe cannot be, and could never be, a simulation”[1:3].

Lawrence Krauss, a co-author of the study, explained: “The fundamental laws of physics cannot exist inside space and time; they create it. This signifies that any simulation, which must be utilized within a computational framework, would never fully express the true universe”[2].

The research was published in the Journal of Holography Applications in Physics[1:4].


  1. ScienceAlert - Physicists Just Ruled Out The Universe Being a Simulation

  2. The Brighter Side - The universe is not and could never be a simulation, study finds

  • AnarchoEngineer@lemmy.dbzer0.com

    I’m not sure I understand what you’re trying to explain with states. Do you mean measured externally? Or does part of the system discretize the signals? Or are you saying that while the driving fields may be continuous the molecular structure enforces some sort of granularity to the signals?

    You seem to know much more than me on the hardware side.

    The last time I looked at hardware I came across “ferroelectric synapses” which do the STDP learning. I think it has something to do with the way electric dipoles (the material’s polarization domains) align when a voltage is applied. I don’t think it requires measurement at any step, and it’s continuous whether or not we have good enough hardware to measure those changes.

    > So you ran a simulation of those neurons?

    Yes. A very slow and very inaccurate one. I had to approximate the parallelism by setting a time step, numerically computing the potentials of every neuron and synapse, and then moving on to the next time step and repeating.
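
    The core of it looked roughly like this (a minimal sketch, not my actual code; the LIF parameters and the random weight matrix are made up for illustration):

    ```python
    import numpy as np

    # Fixed-time-step simulation of N leaky integrate-and-fire (LIF) neurons.
    # True parallelism is only approximated at the resolution of dt.
    N = 100
    dt = 1e-4                                      # time step (s)
    tau = 0.02                                     # membrane time constant (s)
    v_rest, v_thresh, v_reset = -70e-3, -50e-3, -70e-3

    rng = np.random.default_rng(0)
    weights = rng.normal(0.0, 2e-3, size=(N, N))   # recurrent weights (arbitrary)
    v = np.full(N, v_rest)
    drive = 1.2 * (v_thresh - v_rest) / tau        # constant current, keeps neurons firing

    for step in range(10_000):
        spiked = v >= v_thresh
        v[spiked] = v_reset
        v += weights @ spiked.astype(float)        # voltage kicks from last step's spikes
        v += dt * (-(v - v_rest) / tau + drive)    # Euler step of the leaky membrane
    ```

    Every spike within a given time step gets treated as simultaneous, which is exactly the timing information the hardware version doesn’t throw away.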

    I should state more clearly that I think it’s the temporal aspects of continuity that lead to undecidable behavior rather than just the number of states a neuron has.

    Because each neuron in a neuromorphic net is running in parallel with all others, the signals produced by that neuron will not necessarily be in sync with the signals of any other neuron. As in, theoretically no two neurons are really ever firing at the exact same time.

    As I previously stated, since timing is everything for STDP, the time difference could be very significant when a neuron receives multiple inputs in a short time window and fires.
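
    For concreteness, the usual pairwise STDP rule is just an exponential function of that pre/post time difference, so sub-millisecond shifts in arrival time visibly change the weight update (a toy sketch with textbook-style constants, not values from any real chip):

    ```python
    import math

    def stdp_dw(t_pre: float, t_post: float,
                a_plus: float = 0.010, a_minus: float = 0.012,
                tau_plus: float = 0.020, tau_minus: float = 0.020) -> float:
        """Weight change for one pre/post spike pair (times in seconds)."""
        dt = t_post - t_pre
        if dt > 0:    # pre fired before post: potentiate
            return a_plus * math.exp(-dt / tau_plus)
        else:         # post fired first (or a tie): depress
            return -a_minus * math.exp(dt / tau_minus)

    print(stdp_dw(0.0, 0.0005))   # pre leads post by 0.5 ms -> positive update
    print(stdp_dw(0.0005, 0.0))   # post leads pre by 0.5 ms -> negative update
    ```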

    An additional thing to note is that in more advanced models of neurons, like the Hodgkin-Huxley model, one can account for multiple synapses along the same dendritic tree, which makes timing matter even more: input to a synapse near the soma causes a localized change in ion conductance that can shunt signals propagating from the farther reaches of the dendrite. If a distal signal reaches that synapse while the proximal input is active, it might not be strong enough to get through.
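
    You can see that shunting effect even in a toy passive two-compartment model (my own illustrative sketch, not Hodgkin-Huxley; every constant is made up): a conductance opened at the proximal site bleeds away the distal signal.

    ```python
    # Toy passive two-compartment dendrite: a distal compartment coupled to a
    # proximal one near the soma. Opening a conductance at the proximal site
    # shunts the distal signal. Every constant here is made up for illustration.
    dt, tau, g_couple = 1e-4, 0.01, 5.0

    def peak_proximal_response(shunt_on: bool) -> float:
        v_dist = v_prox = peak = 0.0          # voltages relative to rest
        for step in range(2000):
            t = step * dt
            i_dist = 1.0 if 0.010 <= t < 0.011 else 0.0          # brief distal input
            g_shunt = 50.0 if (shunt_on and t >= 0.010) else 0.0 # proximal conductance
            dv_dist = -v_dist / tau + g_couple * (v_prox - v_dist) + i_dist
            # The -g_shunt * v_prox term drags the proximal voltage back toward rest.
            dv_prox = -v_prox / tau + g_couple * (v_dist - v_prox) - g_shunt * v_prox
            v_dist += dt * dv_dist
            v_prox += dt * dv_prox
            peak = max(peak, v_prox)
        return peak

    print(peak_proximal_response(shunt_on=False))  # larger peak at the soma side
    print(peak_proximal_response(shunt_on=True))   # same input, visibly attenuated
    ```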

    Depending on how the hardware is built, I’d imagine you could get similar effects from the nearness of electrical signals in a neurochip, where local signaling causes non-trivial effects on the rest of the system.

    Anyway, I’ve realized that I likely don’t know enough to say with real certainty whether spiking neural nets are uncomputable or not. This is the most rigorous version of my reasoning I can write right now:

    • If the true state of each neuron (and synapse) is continuous, then any net built from them has uncountably many possible states.
    • If these nets are composed of densely connected layers with any recurrence, then the state of the system (assuming LIF neurons) can be written as a large set of coupled non-linear differential equations.
    • Systems of non-linear ODEs with three or more variables can give rise to chaotic behavior, where the tiniest changes in input produce vastly different outputs (the three-body problem and the double pendulum are common examples; see the sketch after this list). This also means there is most likely no closed-form solution to such a system, which I don’t think implies uncomputability by itself, but solving one numerically would be a pain and I doubt a fixed-step method like Runge-Kutta would stay accurate for long.
    • It should then be possible (if not probable) that a spiking neural net gives rise to chaotic behavior.
    • Since we’ve assumed the voltages and currents are continuous, they are real numbers that cannot be measured with absolute precision.
    • This means that even if you built an algorithm for determining the future state of the machine from its present state, you could not know the true state precisely enough to be sure you aren’t in a chaotic region, where your measurement error eventually gives rise to behavior significantly different from what the algorithm predicted.
    • Ergo, the neural net could be, at least given our limits of measurement, effectively uncomputable.
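
    Here’s the kind of sensitivity I mean, using the Lorenz system as a stand-in for any chaotic set of ODEs (a standard textbook example, nothing specific to spiking nets): two states that start 10⁻⁹ apart end up on completely different trajectories.

    ```python
    import numpy as np

    # Two Lorenz trajectories starting 1e-9 apart. The separation grows roughly
    # exponentially, so any finite measurement error eventually dominates.
    def lorenz_step(s, dt=1e-3, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return s + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([1e-9, 0.0, 0.0])   # perturbation below any realistic measurement error

    for step in range(40_001):
        if step % 10_000 == 0:
            print(f"t = {step * 1e-3:5.1f}   separation = {np.linalg.norm(a - b):.3e}")
        a, b = lorenz_step(a), lorenz_step(b)
    ```

    And this is forward Euler with a fixed step, so the numerical trajectory itself also drifts away from the true one, which is the Runge-Kutta worry from the list above.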

    I think the problem is still uncomputable even with fully precise measurements simply due to the continuity and timing I mentioned before, but I guess I don’t have enough knowledge on the topic to prove it so perhaps I’m wrong.

    I think someone else in this comment section mentioned analog computing (which I thought included neuromorphic hardware) being capable of non-algorithmic computation, so they might have more answers than me on what non-algorithmic actually means.

    > If it could be calculated it could solve the halting problem.

    …would it? At first I didn’t think you could derive a solution to the halting problem from knowing the longest finite runtime of machines with n states, but working through the definition, I think you actually can.

    BB(n) is the maximum number of steps any halting n-state machine takes on a blank tape. So an oracle for the busy beaver numbers gives you a decision procedure: run a given n-state machine for BB(n) steps, and if it hasn’t halted by then, it never will. Input doesn’t save you either, since BB is defined on a blank tape and any fixed input can be baked into extra states of the machine.

    So if by some miracle you were able to construct an oracle for the busy beaver numbers, you really would solve the halting problem, which is exactly why the busy beaver function is uncomputable. (Again way outside my expertise but still fascinating)
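
    To make that concrete, the decision procedure is tiny once you grant the oracle (a minimal sketch; `bb` is hypothetical and provably uncomputable, and the mini Turing-machine simulator around it is just for illustration):

    ```python
    # Why a busy beaver oracle decides halting. `bb` is hypothetical: no
    # computable function can play its role, which is the whole point.
    def bb(n: int) -> int:
        """Hypothetical oracle: max steps any halting n-state machine runs on a blank tape."""
        raise NotImplementedError("provably uncomputable")

    def run_tm(table, steps):
        """Run a machine given as {(state, symbol): (symbol, move, state)} on a blank tape."""
        tape, head, state = {}, 0, "A"
        for _ in range(steps):
            if state == "HALT":
                return True
            symbol, move, state = table[(state, tape.get(head, 0))]
            tape[head] = symbol
            head += 1 if move == "R" else -1
        return state == "HALT"

    def halts_on_blank(table, n_states: int) -> bool:
        # BB(n) is the longest any halting n-state machine runs, so a machine
        # still going after BB(n) steps will never halt.
        return run_tm(table, bb(n_states) + 1)
    ```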