What Does Quantum Computing Say about Free Will?

by Michael Szul on


This isn't going to be a hard science piece, so turn off your computer if you had high expectations. This is more a philosophical debate—or revelation—if you will. A few months back, FX on Hulu (I'm still not sure what these streaming services are doing with arrangements like this) premiered a show called DEVS by Alex Garland, who created the thought-provoking film Ex Machina (Bill and I covered Ex Machina in a previous podcast episode where we debated the "what comes next" of the ending). DEVS tells the story of a cryptographer who gets embroiled in a corporate conspiracy (a light one) with the founder of a Google-esque technology company when her boyfriend is murdered.

From the web site:

In Devs, an FX limited series, a young software engineer, Lily Chan, investigates the secret development division of her employer, a cutting-edge tech company based in Silicon Valley, which she believes is behind the murder of her boyfriend.

That's the brief synopsis. If you haven't watched the series (it's a miniseries) but you want to, again, shut down the Internet. I'm going to spoil it for you with reckless abandon.

Before I do that, mind you, I'm not writing a television series review here (although you'll enjoy everything except for maybe the final 15 minutes—it's the journey, not the destination, remember). What I'm interested in is one of the concepts at the heart of the series, and how that concept plays out in modern ideas of quantum physics, quantum computing, and philosophy.

In the series, there's a special project called DEVS where the programmers are working with the world's largest quantum computer. This computer is so advanced (and has so many qubits) that it can simulate with absolute accuracy all the way back to the crucifixion of Jesus Christ. (I'm sure there was meant to be some juxtaposition between the image of Christ and the Jesus beard that Nick Offerman was sporting in the series.) As the programmers get deeper into past simulations, we get a bit philosophical on whether or not what's being seen on the giant simulation screen is the scene itself or merely a representation (or if it even matters, since everything is information and data). Is the thing on the screen the thing itself?

(Yes, this series can hurt your brain.)

The underlying debate between a few of the programmers is about the state of reality. Is the world deterministic to a fault (de Broglie–Bohm theory), or does the Many Worlds theory of quantum physics prevail? If the former is true, are we absolved from our choices and the pain caused by them, or does the Many Worlds theory give us a glimpse of the free will we so wholeheartedly assume, presume, and pursue?

I've always been in the free will camp, because I hate the idea of predestination (in the Calvinist sense). If everything is predetermined, then what is the point of actually waking up and continuing on? If there is no free will, does that not shatter any sense of autonomy, independence, or self-actualization?

The answer is no. Actually, the answer is that we've probably been misinterpreting the binary swordplay between free will and determinism from the beginning.

I support Massimo Pigliucci on Patreon (you should too—$3 for a plethora of content), and he recently talked about an article by George Ellis that took the wrong approach to free will and determinism.

A common objection to determinism is that if everything is preordained, why do anything at all? Pigliucci answers this:

If you read the above and find yourself thinking something along the lines of "well, then, if causal determinism is true why do I need to bother doing anything, since the outcomes are fated anyway?" you are engaging in what the ancient Stoics called the lazy argument. Let's say you get sick, and you think you don't need to see a doctor or take medicine, because you are either fated to die or to recover anyway. But how do you think you will recover, if you don't see the doctor and take the medicine? Again, you -- and your decision-making brain apparatus -- are part and parcel of the web of cause-effect, not something external to it and to which things just happen. If you get up and go to the doctor then you will get better. If you don't, you won't. You can't use determinism as an excuse for inaction.

We'll get back to Pigliucci in a minute. First, I want to bring up Joe Rogan. Yes, the comedian. Rogan had a comedy bit about time travel and after he talked about it, he mentioned how an audience member came up to him later and told him that time travel wasn't possible because of the grandfather paradox:

Tim hates Grandfather. Tim hates him so much that his ambition is to murder Grandfather despite the fact that Grandfather died in his sleep in 1957. Tim is no quitter, however, so he builds a time machine and travels back to the year 1920, a time before Grandfather’s death. Tim buys a high-powered rifle, practices his marksmanship for many days, rents a room along the path Grandfather takes to work every day, and waits for optimal conditions. When the time is right, Tim locks and barricades the door to avoid any intrusion, and otherwise prevents any factors that might keep him from hitting his target. Tim is perfectly accurate when shooting any practical distance. As Grandfather walks by the room, he is only twenty yards away.

It seems that Tim can kill Grandfather. Every condition is optimal for a perfect shot that would kill him instantly. There is, however, the outstanding fact that Grandfather dies in his bed in 1957. It cannot be that both Tim murders Grandfather in 1921 and Grandfather dies of natural causes in 1957. Since we know Grandfather dies in his sleep in 1957, then it must be the case that Tim does not kill Grandfather in 1921. Now, it seems that Tim can’t kill Grandfather.

Rogan's comedic response was:

What sort of an asshole travels back in time to kill his grandfather?

I bring this up because Rogan's response centers on pragmatism. In an argument about the possibility of time travel, this person goes outside of the bounds of a pragmatic approach to form a counter-argument.

This anti-pragmatic argument also follows us into artificial intelligence. The paperclip maximizer thought experiment imagines an artificially intelligent robot built to make paperclips that never stops making them, consuming all of Earth's (and the universe's) materials and ending civilization:

[A] paperclip maximizer is an artificial general intelligence […] whose goal is to maximize the number of paperclips in its collection. If it has been constructed with a roughly human level of general intelligence, the AGI might collect paperclips, earn money to buy paperclips, or begin to manufacture paperclips. […] It would work to improve its own intelligence, where "intelligence" is understood in the sense of optimization power, the ability to maximize a reward/utility function—in this case, the number of paperclips. The AGI would improve its intelligence, not because it values more intelligence in its own right, but because more intelligence would help it achieve its goal of accumulating paperclips. Having increased its intelligence, it would produce more paperclips, and also use its enhanced abilities to further self-improve. Continuing this process, it would undergo an intelligence explosion and reach far-above-human levels.

It would innovate better and better techniques to maximize the number of paperclips. At some point, it might transform "first all of earth and then increasing portions of space into paperclip manufacturing facilities".

A philosopher thought of this scenario. Really.

Researcher Gary Marcus, however, points out that while we worry about the prospect of a superhuman intelligence tasked with making paperclips that masters interstellar travel, this same superhuman intelligence never questions why it's making paperclips. Why?

I bring both these stories up because, as humans, we seem to be willing to go to immense lengths to come up with an unlikely scenario to counter a perfectly valid argument.

In Pigliucci's case, the idea of not going to a doctor because everything is "preordained" is silly because determinism doesn't mean "preordained from outside."

As humans, we like to believe that we're special and somehow exist outside of the boundaries of physical laws. We see this most apparently in climate change, where some politicians say that climate change is real, but not human caused—as if we can somehow step outside of the natural cycle of cause-and-effect when it comes to environmental changes.

And there's the rub, right? Pigliucci calls us "choice-making machines" with "decision-making brains." This implies that choices are made; that decisions are made. But these choices are not made in a vacuum devoid of the environmental and genetic baggage that each human being is saddled with. Free will is an illusion in the sense that choices are not made in a vacuum devoid of influence. In that same sense, determinism as "predestination"—as a book written by an outside force that has already determined what everything will be—is also an illusion. Both assume that we are not participants in cause-and-effect. But humans are most certainly active participants in the causality of our temporal existence.

In DEVS, Nick Offerman's character initially wants to believe in determinism because it absolves him of the deaths of his wife and child (he was on the phone with his wife while she was driving), but this assumes that he was not a participant in the causality of temporal existence. Even if determinism is true in his story, he is a participant in the cause-and-effect of those outcomes. He is not absolved. Pigliucci notes that George Ellis's argument is essentially one about placing moral blame rather than determining the validity of "free will." Offerman's character is in much the same boat.

As far as whether quantum computing will ever be able to predict everything? Who knows. Human beings are complicated creatures that always seem to find a way to exist outside the laws of nature… at least in our heads.