In this series, we’ve presented three arguments for an intelligent cause: the first from the fine tuning of the constants, the second from the design of the laws, and the third from the initial conditions at the big bang. In this episode, the final one in this series, we’ll summarize the prior episodes so that you can clearly see the logical line of reasoning that flows through all three arguments. Then, we’ll address many frequently asked questions on these arguments. Finally, we’ll explain the need and motivation for the next two series about God and the multiverse.
Essay Version of Episode 10
Let’s take it one argument at a time and begin by summarizing the fine tuning argument, the topic of the first five episodes.
Argument for the Proof of God from the Fine Tuned Constants of Nature
Three notions lie at the heart of the fine tuning argument: fundamental, constants of nature, and theory of everything.
The first notion, fundamental, refers to the most basic entities which can’t be reduced to anything else. Specifically, fundamental physics is the study of the deepest principles of nature which are ultimately responsible for all the complexity and diversity in chemistry, biology, astronomy, and all the other sciences.
Fundamental physics conceives of the world as being composed of fundamental particles and laws. Fundamental particles, such as electrons and quarks, are the basic building blocks from which everything else is made. The fundamental laws, such as quantum mechanics and general relativity, are the basic rules that govern how these particles behave and interact. Crucially, fundamental laws are apparently not derived from something more primary, and fundamental particles cannot be broken down any further; that’s why they are the ultimate building blocks of existence.
The next notion is the constants of nature. Through observation and measurement, scientists have discovered fixed numbers that are built into the fundamental particles and laws of nature. These 25 or so unchanging numbers, known as the constants of nature, determine quantities regarding the particles and the laws. Two examples are the mass of an electron (a number which describes how heavy it is) and the fine structure constant (a number which describes the strength of the force between two electrons).
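To make this concrete, the fine structure constant is not a free-floating number; it relates other measured quantities. The short sketch below (ours, using the standard CODATA values) computes it from the elementary charge, the vacuum permittivity, Planck’s constant, and the speed of light. Note that this only relates the constants to one another; it does not explain why they have these values:

```python
import math

# Standard CODATA values (measured/defined quantities, SI units)
e = 1.602176634e-19           # elementary charge (coulombs)
epsilon_0 = 8.8541878128e-12  # vacuum permittivity (farads per meter)
hbar = 1.054571817e-34        # reduced Planck constant (joule-seconds)
c = 299792458                 # speed of light (meters per second)

# The fine structure constant: a dimensionless measure of the strength
# of the electromagnetic interaction between charged particles.
alpha = e**2 / (4 * math.pi * epsilon_0 * hbar * c)
print(1 / alpha)              # approximately 137.036
```

Because alpha is dimensionless, its value doesn’t depend on our choice of units, which is part of why physicists regard it as a genuine constant of nature rather than an artifact of measurement conventions.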
The final concept is physicists' dream of finding an ultimate theory that explains everything in our universe. To realize this dream, they were searching for an irreducible, beautiful, unified, and simple law that explains all the complexity and diversity in the universe.
While modern physics has been quite successful in partially realizing physicists’ dream of finding a theory of everything, the specific values of the constants presented a unique challenge. Physicists did not want to posit that the constants - which read like an ugly list of data - are themselves uncaused, ad hoc additions to an otherwise beautiful theory of everything. Rather, it seemed clear that the constants, like everything else, should be explained by a theory of everything. The problem is: how? How can a theory of everything, a master law of nature, determine exact and precise numbers like 1/137.035999084?
The problem emerged from the fact that the numbers seem completely arbitrary with no apparent reason for their values. From the perspective of theoretical physics, the constants could have taken on any value whatsoever. Richard Feynman called the problem of explaining the values of these constants “one of the greatest damn mysteries of physics.” In their pursuit of a master law that explains everything, physicists faced the immense challenge of explaining the precise values of the twenty-five constants lying at the heart of the laws of nature.
The discovery of fine tuning provided the all-important clue which illustrated that the values of the constants are not as arbitrary as they had seemed. Beginning in the 1970s, scientists realized that while the values of the constants don’t matter in terms of fundamental physics, the fields of astronomy, chemistry, and biology (among others) demand that these values are precisely tuned. That is, if these numbers were slightly different, the universe would be devoid of atoms, molecules, planets, stars, galaxies, and life. As such, fine tuning is the reason that our universe contains order, structure, and complexity.
In scientists’ quest to explain the cause of the values of the constants, it became evident that the precision of the fine tuning is too great to be chalked up to a lucky coincidence, as the odds of getting all the values within the correct ranges are staggeringly low. It was clear that the discovery of fine tuning was too significant a clue to ignore.
Episode 4 - Why Fine Tuning Demands a Paradigm Shift to Solve the Mystery of the Constants of Nature
While it couldn't be denied that fine tuning is a vital clue for explaining the constants, its discovery presented a new problem. This is based upon the fact that modern science generally proceeds by explaining how the laws of nature cause complex phenomena in the universe. For example, the laws of physics cause atoms to interact in a way that brings about molecules.
But fine tuning seemed to indicate the exact opposite - that somehow the end result of having a complex universe with atoms and molecules caused the specific quantities for each constant. From the ordinary scientific perspective, this seemed backwards! Since the significance of fine tuning was undeniable, it became clear that solving the problem it raised would demand a paradigm shift in how physicists understand the universe.
The solution to the problem of fine tuning contains the heart of our argument. The mystery of the constants emerged from trying to explain the specific values of the constants exclusively with efficient causes, a framework in which a past law causes a future effect. The solution to this mystery emerged once scientists realized that the discovery of fine tuning indicates teleological causes, a framework in which a future purpose causes a selection in the past. In other words, fine tuning indicates that the cause of the specific values of the constants is the purpose of bringing about an ordered, structured, and complex universe.
Of course, like everything that exists for a purpose, the constants also had an efficient cause which set their specific values. We argued that the efficient cause responsible for purposely setting the values of the constants must be intelligent, insofar as it fine tuned their values in just the manner needed to bring about our complex universe. This follows from the definition of ‘intelligence’ as the ability to pick out one possibility from among many for the purpose of producing an intended goal. Clearly, the selection of the fine tuned values of the constants in just the right manner that results in a universe that is much greater than the sum of its parts is a direct indication of intelligence.
While this concludes the argument from fine tuning, we want to emphasize that we’re not saying that the universe was fine tuned exclusively for the purpose of life, even though many scientists do say this. Rather, we’re arguing that the purpose of the fine tuning is for our entire complex universe which contains atoms, molecules, planets, stars, and galaxies, in addition to life.
Before moving on to the next argument about design of the laws, we discussed in episode 6 the common misconception that the potential problems with the design argument from biology also apply to the fine tuning argument in physics. This is simply not the case. They are different arguments, and the fine tuning argument in physics has significant advantages over the design argument in biology.
First, whereas biology has empirical evidence in the fossil record that life has evolved, there’s no comparable evidence in physics that the constants ever change.
Second, the multiplanet solution in biology can potentially explain the emergence of DNA and life by chance only because scientists have actually observed many other planets. But multiverse physicists haven’t observed, and never will observe, an infinite number of parallel universes.
Third, since biology isn’t truly fundamental, it may be plausible to explain the origin of life based on more primary phenomena that are outside the purview of biology. On the other hand, the constants of physics are considered fundamental and are therefore unexplainable by any deeper law of physics. And even if you argue that they aren’t truly fundamental, but are derived from some master law, this would only push the question back a level: what fine tuned that master law?
Besides these significant advantages of the fine tuning argument in physics over the design argument in biology, it’s important to appreciate that design in biology is ultimately dependent upon fine tuning in physics. That is, even if evolution is capable of explaining the diversity of life as naturally emerging from DNA, and even if the multiplanet solution is capable of explaining the random emergence of DNA by chance alone, these things are both ultimately dependent upon fine tuning in physics. Without the fine tuned constants there would simply be no atoms, molecules, planets, stars, DNA, and of course, no life.
Argument for the Proof of God from the Design of the Laws of Physics
This brings us to episode 7’s independent argument for an intelligent cause from the qualitative laws of physics: quantum mechanics and general relativity. This argument naturally emerged from physicists’ question of “Why are these laws true and not some other laws?” The ultimate dream of physicists was to answer this question by finding a final theory which not only explained everything, but which would also be unique – the only possible theory of everything. The discovery of such a final theory would have explained why the laws of nature we observe are true - because they would be the only possible laws.
Physicists eventually realized that the dream of a unique final theory was unrealistic as there is no logical reason why our laws are the only possible laws. This brought back the question of “why these laws and not some other laws?” To answer this question, we took a step back and realized that these laws are in fact very special. Only these laws of physics result in an ordered, structured, and complex universe with atoms, molecules, planets, stars, galaxies, and life. Since the definition of ‘intelligence’ is the ability to pick out one possibility from among many for the purpose of producing an intended goal, we therefore inferred that the cause of the laws of nature intelligently selected the right laws from all possible laws in order to bring about our complex universe.
Argument for the Proof of God from the Initial Conditions of the Universe
Our final argument for an intelligent cause of our universe, based upon the initial conditions at the big bang, was the subject of episodes 8 and 9. It all started with entropy, a way of measuring the likelihood of a system being in a particular state. High entropy states, which are less ordered, are more likely to occur by chance than low entropy states, which are more ordered.
The second law of thermodynamics says that entropy always gets higher in the future and was always lower in the past. However, when we try to understand the second law using statistical reasoning, we end up with the startling, and observably false, conclusion that entropy is always higher in both the future and the past.
Scientists reconcile the second law with statistical reasoning by positing that the universe started with highly ordered, low entropy initial conditions. And ever since the first moment of the big bang, entropy has been continually on the rise.
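The statistical reasoning behind entropy can be made vivid with a toy model. Below is a sketch (our illustration, not part of the episodes) of the classic Ehrenfest urn model: all the “molecules” start on one side of a box - a maximally ordered, low entropy state - and hop randomly. Entropy, counted as the log of the number of arrangements consistent with the current split, climbs from zero toward its equilibrium maximum, just as the universe’s entropy has risen since the big bang:

```python
import math
import random

def ehrenfest_entropy(n_balls=100, steps=2000, seed=0):
    """Ehrenfest urn model: each step, one randomly chosen ball hops to
    the other urn. Returns the entropy at each step, where entropy is the
    log of the number of microstates with the current left/right split."""
    random.seed(seed)
    left = n_balls  # low entropy start: every ball in the left urn
    trace = []
    for _ in range(steps):
        trace.append(math.log(math.comb(n_balls, left)))
        # a uniformly chosen ball is in the left urn with probability left/n
        if random.random() < left / n_balls:
            left -= 1
        else:
            left += 1
    return trace

trace = ehrenfest_entropy()
print(trace[0])               # 0.0 -- only one way to have all balls on the left
print(trace[-1] > trace[0])   # entropy has risen toward equilibrium
```

The key point the model illustrates: the low entropy state is not forbidden, just overwhelmingly outnumbered by disordered states, so a system starting there almost surely drifts toward higher entropy.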
The discovery that the universe began with improbable low entropy initial conditions invited the question of just how unlikely its initial state was. Roger Penrose calculated the immense improbability for our universe to randomly have the low entropy initial conditions as being 1 out of 10^10^123. It is only because the universe started in this highly improbable state that it was able to develop into an ordered, structured, and complex universe with atoms, molecules, planets, stars, galaxies, and life, as opposed to being filled only with black holes.
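It’s hard to grasp a number like 10^10^123. A rough way to see its size (our illustration; the 10^80 figure for atoms in the observable universe is the usual order-of-magnitude estimate):

```python
# 10^(10^123) is a 1 followed by 10^123 zeros. Merely writing the number
# out would require 10^123 digits -- vastly more digits than there are
# atoms in the observable universe (roughly 10^80, the usual estimate).
digits_needed = 10 ** 123
atoms_in_observable_universe = 10 ** 80
print(digits_needed // atoms_in_observable_universe)  # 10**43 digits per atom
```

In other words, even if every atom in the observable universe could store ten duodecillion digits, there still wouldn’t be room to write the denominator of Penrose’s probability out in full.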
We inferred that the incredibly ordered initial conditions of our universe point to an intelligent orderer that ordered its initial state for the purpose of bringing about our complex universe. Once again, this follows from the definition of ‘intelligence’ which refers to the ability to pick out one possibility from among many for the purpose of producing an intended goal. This provided our third and final, independent piece of evidence for an intelligent cause of our universe.
Putting it all together
In summation, we have shown that the qualitative and quantitative laws that govern our universe, as well as the initial conditions of the big bang, have spoken in the clearest possible language that our universe is extraordinarily fine tuned, designed, and ordered. While these three arguments are independent, they nevertheless complement each other.
The three arguments are grounded in all the essential elements of our universe and together paint a picture of a universe that is fine tuned, designed, and ordered in its totality.
To see this point, we’ll give a brief, albeit dense, history of the universe from the vantage point of modern physics. While we’re presenting the history, you’ll notice how each of the components of our universe is either ordered, designed, or fine tuned.
The universe started at the big bang in a very ordered initial state. The subsequent evolution of the universe can be subdivided into two parts: spacetime and mass-energy.
The development of spacetime is determined by the designed law of general relativity, which governs the development of all structures in the universe on a large scale - from galaxies to stars and planets, and even the very structure of spacetime itself. The fine tuned quantity associated with general relativity is the cosmological constant. This constant determines the continued expansion rate of the universe after the big bang and is responsible for the stretching of space on large scales as time progresses.
The mass-energy of the universe is composed of fundamental particles that interact according to the designed laws of quantum mechanics. The fine tuned constants of quantum mechanics fix the quantities of all the particles and the forces between them.
As time went on, spacetime expanded, and the energy formed into atoms, stars, galaxies, and life, all due to the combination of the ordered initial conditions, the designed laws of nature, and the fine tuned constants. If any one of these three essential components of our universe were appreciably different, even if the other two were still the same, this brief history would have been very different and would have never led to anything remotely similar to our beautiful and amazing complex universe.
This highlights why none of these three arguments commits the God of the gaps fallacy. They aren’t based upon ignorance; rather, all three are based on the scientific knowledge that if the initial conditions, laws, or constants of nature were different, our complex universe wouldn’t result.
Furthermore, all three arguments naturally emerged as solutions to deep problems in the foundations of physics, not from mere gaps or details. The fine tuning argument followed from the mystery of the constants; the design argument followed from the mystery of why these laws of nature exist; and the order argument followed from the problem of the low entropy initial conditions at the big bang.
Finally, none of our arguments is a fallacious theory of the gaps that can explain every possible universe. We all know that a theory which can explain anything, in truth, explains nothing. Rather, the theory of an intelligent cause can only explain a universe with order, structure, and complexity; it would be completely unjustified if the universe lacked these observed properties.
FAQ
Now that we’ve summarized our arguments, we’d like to take up some frequently asked questions that we’ve encountered over the past decade. Some of these questions are found in the literature or in online discussions, while others have come from people with whom we’ve discussed these arguments.
Some of these questions might have already been bothering you, and we hope all of the questions will help clarify subtle points in the arguments. We’ll start with questions often asked about our first argument from fine tuning of the constants.
How Do You Know the Constants Could Have Other Values?
Here’s the first question: “What justifies your assumption that the values of the constants could have been different? Since we have only observed one universe with one value for each constant, maybe these are the only possible values!”
Even though it seems intuitively clear that constants like 1/137.035999084 could at least theoretically have different values, we didn’t just assume that they could have been different without further justification. We dealt extensively with the possibility that they have their specific values by necessity: either because they are uncaused brute facts of reality or because they are necessarily determined by some master law.
Both the uncaused constants and the master law theories had their problems even before the discovery of fine tuning, when the only problem was the mystery of the constants. The fact that Feynman called explaining the constants one of the greatest mysteries in physics is predicated on the recognition that logically, the constants could have been different. There’s no intrinsic reason why they had to be these specific numbers, and it's hard to see how any qualitative law could determine their exact values.
Besides these basic difficulties, once fine tuning was discovered, scientists were forced to abandon both of these theories completely. This is because they didn't incorporate the new knowledge about fine tuning. After all, someone who posits that the constants must of necessity have only these specific values must maintain that their fine tuning is a massive coincidence. This totally ignores the discovered connection between the values of the constants and the complex universe.
What if the Constants Change?
Here’s our next question: What if scientists one day discover that one or more of the constants aren’t in fact constant, but actually vary in space or time? For example, imagine that at some point in the future, physicists observe that the fine structure constant that’s responsible for the existence of stable atoms actually varies in different places of the universe. Would that affect the validity of the fine tuning argument?
The answer is that it depends. If the fine structure constant were found not to always be 1/137.035999084 but to vary within a small range - say between 1/138 and 1/137 - within which every value would still permit stable atoms to exist, then it wouldn’t impact the argument. We would probably stop calling it a constant and instead call it a variable. But other than that, as long as there’s no intrinsic reason why the constant must stay within that range, we would still argue that the limited range itself indicates that it was intelligently fine tuned for the purpose of enabling the existence of atoms.
On the other hand, if physicists observed that in other parts of the universe the constant varied outside of the range that allowed atoms - say it varied between 1 and 200 - and that in different parts of the universe, there were no atoms, and consequently no molecules, planets, stars, and so on, then there would be a real problem with our argument. This is because it would imply a type of multiverse in which the values of the constants are truly random, and that our fine tuned values are a mere consequence of an observer bias - that intelligent observers like us can only exist in a part of the universe with these fine tuned values.
Of course, all indications are that the constants are indeed constant and don’t change at all. However, if some day one or more constants are found to vary, it’s worth understanding which type of variation would present a problem with the fine tuning argument, and which type wouldn’t impact it at all.
How do you know the Probability Distribution for the Constants?
That brings us to our next question. We wanted to include it when we were discussing the fine tuning argument in the earlier episodes, but decided to hold off until now. Once you hear it, you’ll know why. In fact, we’re going to need an analogy just to appreciate the question.
When telling people the fine tuning argument, one of the most sophisticated questions we’ve been asked goes something like this: “You claim that it’s incredibly unlikely to get the correct values of the constants by chance alone. But since we’ve only observed one universe with one value for each constant, how can you determine the range of possible values for each constant, a step that is necessary to even begin computing probabilities? Furthermore, how can you really determine the probability distribution of the constants, that is to determine the different probabilities for the different possible values, and thereby claim that our observed values are highly unlikely? Because of these problems, evaluating the probability of these constants occurring by chance is highly speculative and not nearly as rigorous as you make it sound.”
To appreciate this question, let’s consider the analogy of a lottery. Assuming that the lottery has 1000 balls numbered 0 to 999, it's clear that the probability of getting any specific number - let’s say 137 - is 1 out of 1000. However, let’s say we didn’t even know that there are 1000 possible numbers. Then we couldn't even get started with computing a probability. Or let’s say we even knew there were 1000 possible numbers but had unknown quantities of balls of each number. For example, there may be 7 balls numbered 1, 83 balls numbered 2, 52,000 balls numbered 3, and so on. Who knows? Without knowing the quantity of balls of each number, we’d have no way to compute the probability of randomly attaining 137.
Analogously, the questioner was pointing out that without knowing the potential range of each constant or the probability of each constant taking on its various possible values, we can’t rigorously compute the probability of the constants being fine tuned by chance alone.
This is actually a really good question. However, it’s only a problem for those who try to explain the values of the constants through chance alone. For example, multiverse scientists speak about fine tuning of the constants in terms of probabilities because they claim that the values for the constants in each universe are randomly determined.
While their basic method for computing these probabilities is a bit shaky, here’s how they do it. In order to determine the possible range of values for many constants, scientists make relatively reasonable assumptions about their natural limits. For example, they assume that the upper limit of a particle’s mass is when it would be so large that it would no longer be a particle but would become a black hole. After determining a range of values, they usually assume that all numbers in the allowed range have equally likely probabilities, which is also a fairly reasonable assumption given no evidence to the contrary. Of course, it’s still an assumption.
Returning to the analogy of the lottery, even if we don't know the number of balls of each number, we can try to roughly approximate the probability of randomly getting a 137 as follows. First, if we see that the balls seem to have space for three digits, we may assume that the range of possible lottery numbers is 000-999. Second, since we have no idea just how many balls there are of each number, our best assumption is that there is roughly the same quantity of each number. Based on these tenuous assumptions, we can make an educated guess that the probability of randomly getting a 137 is approximately 1/1000.
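The lottery reasoning above can be sketched in code. In this illustration (the ball counts are entirely hypothetical), the uniform assumption yields the estimate 1/1000, but simulating a skewed urn shows why the estimate is only as good as the assumptions behind it:

```python
import random

def empirical_p(counts, target=137, trials=500_000, seed=3):
    """Draw balls from an urn described by {number: count} and return
    the observed frequency of `target`."""
    random.seed(seed)
    draws = random.choices(list(counts), weights=list(counts.values()),
                           k=trials)
    return draws.count(target) / trials

# Our best-guess assumption: one ball per number, 000-999 -> P(137) = 1/1000
uniform_urn = {n: 1 for n in range(1000)}
# A skewed urn we have no way of ruling out: low numbers are overstocked
skewed_urn = {n: (100 if n < 10 else 1) for n in range(1000)}

print(empirical_p(uniform_urn))  # close to the 1/1000 estimate
print(empirical_p(skewed_urn))   # noticeably lower: the estimate depends
                                 # entirely on the assumed distribution
```

This is exactly the questioner’s worry: an educated guess built on an assumed range and an assumed uniform distribution can be wildly off if either assumption fails.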
We want to emphasize that the challenge of defining a natural range and probability distribution for the constants is only a problem for those people, like multiverse scientists, who try to explain the values of the constants by chance alone. Now, it’s true that some people present the fine tuning argument from a probabilistic perspective - in fact, that’s how you’ll often see it presented - but we think that this misses the heart of the mystery of the constants, in addition to running into this problem.
We completely avoided this difficulty by formulating the argument in a way that doesn’t involve probabilities. Instead, we began with the mystery of the constants. This great mystery is only based on the fact that the constants could theoretically have been different and that there’s no intrinsic reason why they must be these specific numbers. Notice that the mystery has nothing to do with probabilities.
The mystery of the constants set the stage for appreciating the clue that solved the mystery - the discovery of fine tuning. This scientific discovery revealed that only numbers within a specific range would allow for a complex universe to exist. The fact that our constants are in this small range implies that the reason why they have their values is the purpose of bringing about our complex universe. This points directly to an intelligent fine tuner as the cause of the constants. Again, this whole line of reasoning doesn’t rely on probabilities at all and therefore avoids the entire problem of defining a range and probability distribution for each constant.
As a side point, the way we formulated the argument for an intelligent orderer from our universe’s unlikely initial conditions does involve probabilities. This is justified because the basis of the special initial conditions of our universe is entropy, a concept that is studied through well-grounded, rigorous probabilities.
Varying Many Constants
In our attempt to simplify the presentation of fine tuning, we skirted one issue that is often raised against the fine tuning argument. Physicist Luke Barnes formulates this question as follows:
There is an objection to fine-tuning that goes like this: all the fine-tuning cases involve varying one variable only, keeping all other variables fixed at their value in our universe, and then calculating the life-permitting range on that one variable. But, if you let more than one variable vary at a time, there turns out to be a range of life-permitting universes. So the universe is not fine-tuned for life.
While it’s true that we didn’t discuss the case of varying multiple constants at once, it’s a mistake to think that the scientific consensus on fine tuning has missed this simple objection. Barnes goes on to debunk this mistake. He says as follows:
This is a myth. The claim quoted by our questioner is totally wrong. The vast majority of fine-tuning/anthropic papers, from the very earliest papers in the 70’s until today, vary many parameters…This myth may have started because, when fine-tuning is presented to lay audiences, it is often illustrated using one-parameter limits…The scientific literature does not simply vary one parameter at a time when investigating life-permitting universes. This is a myth, born of (at best) complete ignorance.
The reality is that varying many constants at once doesn’t make it anywhere close to likely to get a complex ordered universe by chance alone. While it might create new combinations that allow for a complex universe, the chances of getting all the constants correct at the same time are still astronomically small.
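The arithmetic behind this point is simple: independent narrow requirements multiply. A small Monte Carlo sketch (our illustration; the 10%-wide “permitting windows” are arbitrary stand-ins, not real life-permitting ranges):

```python
import random

def joint_hit_rate(window_widths, trials=100_000, seed=1):
    """Draw each 'constant' uniformly from [0, 1) and count how often
    ALL of them land inside their narrow permitted windows [0, width)."""
    random.seed(seed)
    hits = sum(1 for _ in range(trials)
               if all(random.random() < w for w in window_widths))
    return hits / trials

print(joint_hit_rate([0.1]))      # one constant: roughly 0.1
print(joint_hit_rate([0.1] * 5))  # five at once: roughly 0.1**5 = 1e-5
print(0.1 ** 25)                  # twenty-five such constants: about 1e-25
```

Even if varying several constants together opens up some new life-permitting combinations, it would have to multiply the permitted volume by many orders of magnitude to make a chance explanation plausible.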
All the scientists we quoted in episode 3 who endorse the discovery of fine tuning take into account the possibility of varying multiple constants and still arrive at the clear conclusion that the constants of nature are just too fine tuned to be explained as a lucky coincidence.
How Grand Unified Theories Could Have Potentially Solved the Mystery of the Constants
This next point isn’t a question but is more of a correction. Because Feynman’s mystery of the constants plays such an important role in our formulation of the fine tuning argument, we want to correct one aspect of how we presented it in prior episodes. The correction involves the possibility of a future qualitative theory explaining the exact values of the constants.
We want to emphasize at the outset that this point doesn’t impact the essence of the fine tuning argument but only demands that we understand the mystery of the constants in a deeper light than we originally presented it.
Before we even got to fine tuning, we explained why the values of the constants presented such a great mystery. One aspect of the mystery is that 25 arbitrary numbers read like a list of data and are the exact opposite of what physicists are looking for in a dreamed-of final theory. That hasn’t changed.
The other side of the mystery is that it seems highly improbable that any qualitative theory would produce a number like the fine structure constant - 1/137.035999084 - or any of the other constants for that matter. The way we said it was that a theory of everything had to perfectly determine the exact value of all the constants down to the last decimal place, which is a very tall order. In a sense, we overstated the case - there was arguably never any hope that a theory of everything would be able to directly determine all the numbers down to that last decimal place.
The reason we explained it as we did is because that is how Feynman presented the mystery in his 1985 book “QED”. However, physicist Karl Rosensweig filled us in on some details of the historical background that’s relevant to the mystery of the constants. You see, in Feynman’s presentation of the mystery, he implicitly dismissed a different approach that physicists were working on during the late 1970s and 1980s, around the same time that Feynman discussed the mystery of the constants. While the point is a bit technical, we’ll try to explain it as simply as possible.
Physicists were pursuing Grand Unified Theories (different from Theories of Everything because Grand Unified Theories don’t include gravity). One of the main motivations for Grand Unified Theories was the hope that at higher energies when the universe was hotter and denser, it could be shown that the three force constants are unified into one constant. Then, over time, as the universe expanded and cooled, the effective values of the different constants separated and functionally expressed themselves as different values. If this were so, the hope was that the true unified value of this one constant would be something simple, like 2 or pi. If that were the case, physicists could hope that they would be able to find some qualitative explanation for it, instead of for something as random as 1/137.035999084.
Let’s summarize this point in a bit of a simpler manner. Before the discovery of fine-tuning, some physicists held firm to the possibility of finding a qualitative theory of everything that would explain all the values of the constants. They had hoped to discover some qualitative theory, whether a Grand Unified Theory or some other theory, that would produce a simple number like 2 or pi, that through some complex physical process would end up expressing itself as the three different force constants that scientists actually measure, one of them being 1/137.035999084.
Physicists wouldn’t necessarily have to calculate precisely how you get from the true simple number to 1/137, but they hoped to at least show the path to making the calculation, even if it was very complex.
Even though this was the scientific background of popular theories in the mid-1980s, when Feynman discussed the mystery of the constants, he didn’t explicitly mention this approach. It stands to reason that he viewed this attempt as unlikely to work for all, or even some, of the constants.
While it’s hard to know exactly why Feynman rejected it, perhaps it was because the method of Grand Unified Theories at best only potentially worked for 3 of the 25 constants. Or, maybe he thought that the entire approach of deriving all the values of the constants exclusively from qualitative principles seemed very difficult to work out. Again, this doesn’t mean he considered it impossible, only that from the perspective of modern physics, it was a great mystery to try to explain these numbers, even before the discovery of fine tuning.
As it so happened, Feynman's formulation of the mystery of the constants has proven robust - Grand Unified Theories and their attempted solution to even part of the mystery eventually fell out of favor. This is because it became clear over time, as calculations became more precise, that the values of the three constants don’t unify at higher energies, but actually diverge from each other.
That, among other reasons - such as incorrect predictions like proton decay - eventually led to Grand Unified Theories falling out of favor entirely. And forty years later, there’s still been no progress in explaining the constants using this approach. That is to say, try as they might, physicists have been unable to find qualitative principles that would predict simple numbers which ultimately become the precise values we measure.
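For readers curious what “the constants don’t unify at higher energies” means in practice, here is a rough, schematic sketch of the standard one-loop extrapolation. The beta coefficients and starting values below are common textbook one-loop numbers for the Standard Model (with the conventional SU(5) normalization for the hypercharge coupling); this is an illustrative toy calculation we are supplying ourselves, not a precision result from the source.

```python
# Schematic one-loop "running" of the three Standard Model gauge couplings.
# At one loop, 1/alpha_i(mu) = 1/alpha_i(MZ) - (b_i / 2*pi) * ln(mu / MZ).
# Extrapolated to very high energies, the three inverse couplings approach
# one another but do not meet at a single point.
import math

MZ = 91.19  # Z-boson mass scale in GeV, where the couplings are measured

# Approximate inverse couplings at the Z scale (textbook one-loop inputs)
alpha_inv_mz = {"U(1)": 59.0, "SU(2)": 29.6, "SU(3)": 8.5}

# One-loop Standard Model beta coefficients (SU(5)-normalized hypercharge)
b = {"U(1)": 41.0 / 10.0, "SU(2)": -19.0 / 6.0, "SU(3)": -7.0}

def alpha_inv(group: str, mu_gev: float) -> float:
    """Inverse coupling of the given group at energy scale mu (in GeV)."""
    return alpha_inv_mz[group] - b[group] / (2 * math.pi) * math.log(mu_gev / MZ)

# Evaluate near the would-be unification scale (~10^15 GeV):
for g in alpha_inv_mz:
    print(g, round(alpha_inv(g, 1e15), 1))
```

Running this shows the three values clustering in roughly the same region near 10^15 GeV yet remaining distinct; more precise calculations sharpened this mismatch, which is the failure of unification described above.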
Again we want to emphasize that this whole discussion involves the plausibility of a qualitative theory explaining the constants, assuming that they’re arbitrary random numbers with no rhyme or reason. However, once scientists discovered in the late 20th century that the constants are fine-tuned, this entire approach doesn’t work at all.
This is because even if the hope and dream were realized - in other words, even if some qualitative argument gave the number 2 or pi, which through a series of calculations happened to become 1/137 - it would still be an immense coincidence that those values are exactly within the range of values necessary for atoms, molecules, planets, life, stars, and galaxies.
As we explained at length in episodes 3-5, with the discovery of fine-tuning there remain only two possible theories that could account for the apparent relationship between the specific values and the emergent complex universe: either an intelligent agent that intentionally selected these values for the purpose of producing our complex universe, or an unintelligent multiverse that explains away the apparent fine tuning as an illusion, as a mere result of an observer bias.
Maybe There Are Other Possible Explanations for the Constants?
The next question goes as follows: You’ve mentioned four potential theories for explaining the constants:
(1) That they’re uncaused brute facts
(2) That they’re purposely caused by an intelligent agent
(3) That they’re exactly determined by a master law
(4) That they’re only one set of infinitely many values in a multiverse
Then you showed how the discovery of fine tuning reduced it to only the intelligent agent or the multiverse. But, how do you know that there aren’t other possible theories for explaining the constants that you and multiverse scientists are both overlooking?
First of all, if we were arguing from elimination, then it would indeed be necessary for us to make sure that we have a comprehensive list of all possibilities, such that when we eliminate all but one, we could be sure that the one remaining possibility is the correct solution.
However, we aren’t arguing by elimination. Instead, our argument is based on the direct inference to an intelligent cause from the evidence of fine tuning, design, and order that indicate a purpose for the constants, laws, and initial conditions. Therefore, we don’t need to prove that our list of possible theories is exhaustive.
Nevertheless, it makes the argument more compelling if we can show that our list is comprehensive. In that vein, let’s categorize the possible theories with some basic subdivisions to show that we haven’t missed anything.
First, we can divide all theories by whether the constants are fundamental (having no cause), or aren’t fundamental but are caused by something else. It seems that these two possibilities are exhaustive - either the constants don’t have a cause or they do have a cause.
If they don’t have a cause, that’s possibility 1: the uncaused constants theory.
If on the other hand they do have a cause, either the cause selected the values for a purpose or it did not - that is, either the cause is intelligent or it is unintelligent. These also seem to be the only two possibilities.
If the constants were caused by an intelligent agent, that’s possibility 2 - God.
If, on the other hand, the cause is unintelligent (like some sort of law), either this unintelligent cause determined only one set of constants - those observed in our one universe - or it caused many different values for the constants, with the constants in our universe being only one of many sets of constants. These also seem to be the only two possibilities - one set of constants or many sets of constants.
If there is only one set of constants, that’s possibility 3 - the master law theory - that there is an unintelligent cause that exactly determined only one set of constants of our universe for no purpose.
If, on the other hand, there are many sets of constants, that’s possibility 4 - multiverse theory - that an unintelligent cause determined many different sets of constants with many different values.
Putting these together, it seems clear that we’ve considered all the possible explanations for the constants. Just to remind you, possibilities 1 and 3 - the uncaused constants and master law theories - were abandoned because neither of them explained fine tuning. That leaves either possibility 2 or 4 - God or the multiverse.
While we only spelled out why these four theories are exhaustive regarding the constants, the same basic subdivisions also apply to possible explanations of the qualitative laws of nature and the initial conditions.
Wait and See if Science will Discover a New Explanation for the Constants of Nature
This question follows from the previous one. Here it is: While our current state of knowledge implies that there are no other possible theories, maybe science will one day discover a new theory that can explain the values of the constants in an unexpected manner? Wouldn't it be better to admit that you just don't have a solution at present, and that perhaps in the future science will provide a theory that explains the problem posed by fine tuning, without having to say either God or multiverse?
It’s certainly true that not every question must be immediately answered. More often than not, even a very difficult problem demands time and patience to be solved. For example, the mystery of the constants prior to the discovery of fine tuning was just such a problem. The correct approach then was to wait and see if new information would come to light which would provide a clue that shows that the numbers are not completely arbitrary. And, in fact, that patience was rewarded with the discovery of fine tuning, which demonstrated the significance of these numbers regarding chemistry, biology, astronomy, and so on.
However, fine tuning provides positive knowledge that there’s a relationship between the values of the constants and the resultant universe. It’s the all-important clue that solves the mystery of the constants. It doesn't make sense to say, “Wait and see” after you find the information you're looking for just because you don't like where it's pointing.
Nevertheless, there’s always the possibility that science will shock us and come up with a totally new idea that no one sees coming. That’s always a possibility for any theory in science. For example, Newton’s theory of gravitation was replaced by Einstein's general theory of relativity. But that possibility wouldn’t have justified someone rejecting Newton in his day, much like it doesn’t justify us rejecting Einstein today. If we were to do that, we would never accept any scientific theory! We have no choice but to work with our current scientific knowledge and see where it leads. And when we do that regarding the constants, modern science points to a theory that explains the relationship between the values of the constants and the resultant universe. And that’s only God or the multiverse.
Burden of Proof is on Multiverse
This leads to another follow-up question. Here it is: Even if I grant you that it’s either God or the multiverse, you haven’t actually disproven the multiverse yet. You keep saying you're going to do that, but until you do, what have you actually accomplished?
Though we haven’t disproven the multiverse yet, we’ve still done a lot. We’ve shown how our universe’s fine tuning, design, and order provide three direct indications that the universe has an intelligent cause. For scientists to undermine these arguments, it won’t suffice for them to merely speculate that there are an infinite number of unobservable universes with different constants, laws, and initial conditions. Rather, the burden of proof is on them to support these claims by providing evidence that indicates the existence of an infinite multiverse.
To justify this point, we’ll digress a bit and discuss various criteria by which theories should be evaluated. For a given set of observations, let’s differentiate between two types of theories:
(1) A theory that explains our observations;
(2) A theory that not only explains our observations but is also indicated by our observations.
To appreciate this important distinction, let’s see these two types of theories regarding our universe.
Based upon scientific observations, we and multiverse scientists agree that there is clear scientific evidence for at least one ordered, designed, and fine tuned universe. The disagreement is how to explain and interpret this evidence.
Let’s first consider our theory which maintains that this one universe is the result of an intelligent agent that set the values of the constants for the purpose of producing our complex universe. This theory explains our observations of fine tuning, design, and order. Furthermore, because fine tuning, design, and order are the hallmarks of intelligence, this theory is also indicated by our observations.
Let’s now consider a multiverse theory that posits the existence of an infinite number of universes, each having different constants, laws, and initial conditions. This theory also explains our observations by noting that we observe a fine tuned, designed, and ordered universe because these features are preconditions for the existence of intelligent observers. However, and here’s the key point, this theory is certainly not indicated by our observation of one fine tuned, designed, and ordered universe. Observing one universe doesn’t indicate infinitely many universes, and observing incredible order doesn’t indicate an infinite amount of disorder.
Let’s now consider an analogy that will show why a theory must be indicated by evidence before being accepted. This analogy will also help us determine which of the two competing theories has the burden of proof.
Consider the following example: Police find a dead body and soon realize that the victim was murdered with a weapon that was left at the crime scene. After some investigation, detectives discover fingerprints on the murder weapon, which are found to match those of one Mr. Jones. Let’s analyze two different theories regarding the presence of these fingerprints:
(1) Mr. Jones committed the murder.
(2) As part of a conspiracy to frame Mr. Jones, his fingerprints were placed on the murder weapon.
The first theory - that Mr. Jones is the murderer - explains the observation of his fingerprints. More importantly, this theory is indicated by his fingerprints on the murder weapon.
The second theory - the conspiracy theory - also fully explains our observation of Mr. Jones’ fingerprints. However, there is no indication for this theory. His fingerprints on the murder weapon in no way indicate a conspiracy.
Now, given these two theories, how would the court rule? Would they conclude that since there are two competing theories that both explain all our observations, Mr. Jones must be set free? Of course not. Since the fingerprints indicate Mr. Jones’s involvement, the burden of proof is on the defense to show some indication of the conspiracy theory. Though the conspiracy theory also explains the evidence, it must be dismissed as unjustified until we find evidence that indicates such a conspiracy. Without such indication, the court will accept the theory that is actually indicated by the prints and will convict Mr. Jones.
While this may seem intuitively clear, you may press the point and ask why this is the case. Since both theories equally explain the evidence, why should we favor the theory which is indicated over the theory which is not?
To answer this question, let’s consider the radical doubt famously proposed by philosopher Rene Descartes in 1641. He suggested that perhaps everything you believe to be reality is really a dream, or perhaps everything you believe to be true is really an illusion created in your mind by a very powerful malicious demon who is intent on fooling you. These theories, and many others like them, certainly explain all your observations. Yet, instead of casting doubt on your belief in a real external world, you discard such theories as being examples of radical doubt. Why? What differentiates them from a theory we accept as reasonable, like the theory that the external world is real and not the product of a dream or a malicious demon?
The primary reason your mind rejects these theories as being ridiculous is because there are no indications that they are actually true! Despite their ability to explain your observations, you reject them in favor of a theory which is indicated by the evidence. In fact, if you would accept unindicated theories, there would be no end to the number of theories that a creative person could construct that are merely explanatory, but not indicated by evidence. As such, you would be forced to accept myriads of mutually exclusive theories, which is clearly impossible.
This point extends to determining which of the two competing theories has the burden of proof. In general, the position that is asserting something must supply evidence that indicates that their position is true. If they supply evidence for their position, then the burden of proof shifts to their opponents to find evidence that indicates an alternate explanation.
Returning to Mr. Jones, without the discovery of his fingerprints, the burden of proof would be on Mr. Jones's accusers to find evidence that he committed the crime. However, his fingerprints are just the evidence they seek. Since the fingerprints indicate that Mr. Jones committed the murder, the burden of proof shifts to the countless conspiracy theories that can explain the prints without condemning Mr. Jones. Should they fail to find any evidence which indicates the existence of such a conspiracy, they will be dismissed just like Descartes’ doubts about dreams and demons.
The same is the case regarding the question of whether or not the universe has an intelligent cause. Without the discovery of fine tuning, the burden of proof is on one who asserts an intelligent cause. Since our universe seems to be governed by unintelligent laws, one must provide evidence in order to posit an intelligent agent.
However, since fine tuning, design, and order are the fingerprints of intelligence, they are evidence that indicates an intelligent cause of our universe. This shifts the burden of proof to those seeking to deny an intelligent agent. If scientists merely posit multiverse as an alternate explanation for our observations, but cannot find some indication for their assumptions, then we must dismiss it as unjustified speculation which merely casts unreasonable radical doubt on the theory of an intelligent designer.
Of course, sometimes you’ll reject a conspiracy theory because it has no evidence supporting it, only to later find new evidence that indicates its truth after all. That’s fine. Just like it was appropriate to reject it before the evidence was discovered, it’s appropriate to revise your opinion and accept it after the evidence was discovered.
Let us give an analogy from the history of science to illustrate this point. Before scientists observed that there are many planets in the universe besides those in our solar system, someone could have argued that the fact that Earth is hospitable to life - for example, it’s the only planet in our solar system that is the perfect distance from the sun to allow for liquid water - is an indication of an intelligent cause. If the only planets were those few in our solar system, Earth’s placement would be too unlikely to be explained by chance alone. But then scientists observed many, many planets - enough that at least one of them would be at the right distance from its sun by chance alone. Furthermore, they reasoned that since we can only exist on a planet that is at just the right distance from the sun, our perfect placement can be explained as resulting from an observer bias instead of an intelligent cause.
At the point in time when the evidence only indicated the existence of the planets in our solar system, scientists followed proper methodology in only assuming the existence of these planets. At that time, Earth’s perfect distance indicated the theory of an intelligent cause, and the burden of proof was on a theory that posited more planets. But once evidence was found supporting the existence of many, many planets, our perfect distance no longer indicated an intelligent cause and could be explained by an observer bias. That is an example of science operating according to proper methodology. Namely, it changed the prevailing theory once new evidence came to light - but not before that.
Let's try to clarify things even further using our good old Mr. Jones analogy. If no evidence is found for a conspiracy against Mr. Jones, his fingerprints at the crime scene indicate his guilt and would lead to his conviction. However, if the defense meets the burden of proof and finds evidence of a conspiracy against him, then the court would accept this evidence and acquit him. However, just because they found evidence for a conspiracy theory in this one case doesn’t undermine fingerprints as evidence of guilt in other cases, and certainly doesn't lend credibility to all other conspiracy theories.
Multiverse scientists generally acknowledge that they must provide evidence to support their extraordinary claim that there really exists an infinite multiverse. They realize that just because it turned out there were other planets is no indication that it will turn out that there are other universes as well. They therefore argue that there are several independent lines of evidence that support the multiverse, all of which we will address in the miniseries on multiverse.
In conclusion, we’ve accomplished a lot so far. We have shown that the observations of fine tuning, design, and order in our one universe indicate that it has an intelligent cause. This shifts the burden of proof to those who deny an intelligent cause. For multiverse scientists to posit that the fine tuning, design, and order emerged from an unintelligent cause, not only must they provide a theory that gives an alternate explanation for these special features of our universe; but they must also provide evidence indicating that such a theory is true. Without such an indication, we are left with strong evidence supporting the existence of an intelligent cause who fine tuned, designed, and ordered our one universe for the purpose of bringing it about in all its complexity and grandeur.
Skepticism of Science
The next question is as follows: Since we see that science is always changing, there’s no reason to trust any argument for God based on science. For example, the ancient argument for God as the prime mover that was based upon Aristotelian science was shown to no longer be valid in light of modern science’s concept of inertia. Who knows whether modern science will share a similar fate to Aristotelian science? Maybe modern physics will be completely superseded by new theories and who knows if any of your arguments will have any validity in future physics?
It’s true that if there were to be a complete overhaul of modern science, we would need to reassess whether our arguments still hold up. Nevertheless, this is not a sufficient reason to reject the validity of our arguments as they stand today. Our only claim is that the proper inference from the discoveries of modern science is that the universe has an intelligent cause. Insofar as you accept modern science, you should accept the philosophical inference of God as well.
Just to clarify. We’re not claiming that one’s acceptance of God is necessarily dependent upon arguments from modern science. Other reasons for believing in God are beyond the scope of our podcast, and even if our arguments are invalidated by an upheaval in modern science, it wouldn't undermine any other reasons one currently has for accepting God.
Skepticism of Philosophy
The next question is as follows: You said that your arguments for God are not science itself, but are philosophical reasoning based upon science. However, one may argue that philosophy is unable to provide us with an accurate picture of reality. After all, many extremely smart people have contradictory philosophical beliefs, and they can’t all be right. Since your whole argument is based upon the ability of philosophical thinking to discover truth, it’s completely worthless! The only thing I’ll accept is science because it uses the scientific method to compare theories to observations. Scientific experimentation allows for objective verification and for scientific consensus to form, as opposed to wishy-washy philosophy!
There are a few answers to this question. The most fundamental answer is that it’s inconsistent to only accept science and to reject all philosophy because that view itself is a philosophical position! Case in point, some would argue that even science isn’t capable of arriving at objective truth. To address such critics and justify the validity of scientific knowledge itself, one must discuss the philosophy of science. But the philosophy of science is an area of philosophy that can’t be verified with the scientific method. The bottom line is that it’s self-contradictory to only accept science and not philosophy. Accepting the validity of science is implicitly accepting the philosophy of science which justifies belief in the scientific method.
Secondly, almost everyone accepts philosophy in areas that don’t lend themselves to the scientific method. Take for instance ethics, epistemology, logic, and so on. We simply can’t live without a philosophy of how to live, and whether we like it or not, we need proper philosophical thinking to guide our worldviews and choices in life.
Lastly, we are sympathetic to the view that much of what passes for philosophy is very speculative. One person says one thing and someone else argues, and neither has anything to base their opinions upon. Because of this problem, the best type of philosophy is grounded in scientific evidence and observation. In this podcast, we never argued that the universe has an intelligent cause based on philosophical speculation. Rather, after examining what science tells us about the universe, we used philosophical reasoning to infer an intelligent cause. Scientifically grounded philosophy is very different from wishy-washy baseless philosophical speculation.
Skepticism of the Mind
The next question is as follows: How do you know that even science, much less philosophy, can tell you objective truth? Maybe the human mind simply doesn’t have the ability to ascertain truth and objective reality, in which case you can never make a convincing argument for God.
We’re not ashamed to admit that our argument assumes that the human mind is capable of using science and philosophy to approach truth and discover reality. We cannot prove this assumption. After all, to do so, we would have to use our minds - the very minds whose validity is being questioned.
However, we will point out that the very use of the human mind to skeptically doubt the human mind’s ability to know anything is self-contradictory. It doesn’t make sense to say that the one thing you know is that you can’t know anything. At the end of the day, you have no choice but to accept your mind’s ability to at least approach the truth in some framework. If you don’t accept your ability to know that some things are true, we aren’t going to be able to convince you otherwise, and we certainly won’t convince you that God exists.
Why don’t Scientists Agree?
The last question is as follows: Even if I grant the fact that we must accept science, philosophy, and of course the human mind itself, I’m still bothered by the following question. You’ve presented three arguments from three distinct features of modern physics which each independently indicate that our universe resulted from an intelligent cause. Nevertheless, most scientists don't agree with you and don’t see any of these arguments as compelling evidence for an intelligent cause of our universe. Granted that you aren’t arguing with scientists on the science itself, but only on the philosophical conclusion to draw from the agreed-upon science. But still! Why are you drawing a different conclusion from so many great scientists? In other words, if the arguments are really as clear as you’re making them sound, then why don’t all scientists agree with your conclusion?
There are two reasons for this. First of all, as we’ve mentioned many times throughout this series, scientists have an alternative explanation that they think is capable of explaining the apparent fine tuning of the constants, the design of the laws, and the order of the initial conditions. This explanation is, of course, the multiverse.
The second, and in our opinion, the deeper reason why scientists don’t accept our conclusion, is that many scientists think that God is a bad explanation, or even no explanation at all. They lodge many serious questions against the theory of an intelligent cause, such as: What caused God? Who fine tuned the intelligent fine tuner? What does the word “God” even mean? In fact, because of these questions, some go so far as to say that God is impossible.
Because of these two reasons, we can’t just say “God did it” and walk away. For our argument to be complete, we must address both of them in detail. First, we must show why multiverse is not a viable explanation for the fine tuning, design, and order in the universe. We’ll do this in our miniseries on multiverse. Second, we must show that it’s possible to formulate the idea of an intelligent cause - God - in a clear, logical, coherent, intuitive, and compelling manner that answers all serious questions that are raised against it. And we’ll do this in our third series on God.