
May 21, 2022

There is a tendency today to put too much faith in science. Consider, for example, the currently popular belief that scientific advances in Artificial Intelligence (AI) may one day enable robots to become “intelligent.” This idea is deeply problematic, and it is rooted in a fundamental belief that science is capable of explaining everything there is to know about the human person and about reality. But it is not.


For one thing, the idea that science can explain everything about us and about the world is not itself provable by science, since no empirical test can demonstrate it. Thus, it is really a philosophical position posing as a scientific one. Furthermore, science itself rests on philosophical foundations.

For example, science assumes the existence of the external world. This is not something science is capable of demonstrating, but something it must take for granted. Why? Because conducting an experiment to “prove” it presupposes the very data it is attempting to explain: Any instruments that might be used to test it are themselves parts of the external world, so they can be used only if they are first assumed to exist, which is the very issue the experiment is designed to test. Hence science cannot prove that the physical world is real, but must instead take it as a given.

Second, science assumes that the world is intelligible. Consider that the scientific method involves formulating hypotheses and then proceeding to test them. In other words, scientists first assume there are meaningful answers “out there” for discovery, and then they look for them. If they did not assume this, if they instead believed that nature was utterly incomprehensible, there would be no science. The comprehensibility of the natural world has enabled the rise of science, not the other way around.


Furthermore, not only does scientific inquiry take the intelligibility of reality for granted, it also presupposes that the human mind is capable of grasping it. Consider that, like the examples above, science cannot “prove” this assumption because it cannot test the capacity of the mind without first making use of the mind. Thus, it is a philosophical foundation science must accept before scientific investigation can get off the ground.

Now this assumption is worth dwelling on, because it is pregnant with implications that run counter to the idea that AI might one day become “intelligent.” That view is predicated on the belief that the mind is nothing but a collection of particles banging around inside our skull. In other words, the mind is nothing more than the material brain. But the fact that we can grasp intelligible things means that the mind must be more than that. If it were not, if the mind were reducible to random physical motions in our heads, we would have no reason to believe it because any argument for it would itself be the product of nonrational forces and therefore meaningless. It would be equivalent to saying that triangles smell pleasant.

But there are other reasons to suppose that the mind is more than the brain. Consider, for instance, what it means to grasp the intelligibility of nature. Our intellect is capable of abstracting the universal essences or natures of things — the “whatness” of things — and it is difficult to see how this power could come from the material world. Here is why.

Every physical thing in the world — meaning everything we ever experience — is particular. For example, any triangle we observe in daily life is drawn at a certain time, and in a particular color, and in a certain size, and in a specific location, and with unique imperfections. It is thus a particular triangle. But the nature of a triangle — or triangularity — is what philosophers call a universal. All particular triangles share the universal essence of a triangle (triangularity), otherwise they would not be triangles. This means that universals transcend time, color, location, size, and physical imperfection. In other words, they are immaterial.

Now here is the important part. Our minds are capable of grasping universals. Not only can we consider a particular triangle, but we can abstract from it to apprehend its universal nature (triangularity). This poses a problem for the “material” understanding of the mind. If that view is correct, if the mind is nothing more than the brain, how can a particular material brain produce universal immaterial content? How does a system (such as a brain) with particular size, location, duration, and physical imperfection produce that which transcends size, location, time, and physical imperfection?

This is no small problem. One may be tempted to think that, with enough complexity, a material system such as an AI might in principle be able to generate universal content. But this belief commits a category error. The relevant distinction is between the particular and the universal, the material and the immaterial — not between simplicity and complexity — so it is difficult to see how a particular material system, no matter how complex, could ever produce universal immaterial content. Even immensely complex particular material systems would seem capable of producing only immensely complex particular material content. Indeed, believing that such systems are so capable is akin to believing that with enough white bricks arranged into a sophisticated enough pattern, a red building might result. But of course, no matter how many there are or how intricately they may be arranged, white bricks simply cannot produce a red building.

That is why we have good reason to believe that the mind uses but is not caused by the brain, and that the latter is a necessary though insufficient condition for the operations of the former. That is also why it seems doubtful at best, if not impossible in principle, that any AI, no matter how advanced, will ever become “intelligent.”