A little transfinite math as a prelude to the main point:
Georg Cantor divided the concept of infinity into a number of subtypes. The number of integers (whole numbers) he called אo (read: alef-null). (Legend has it that he was tired of overloading Greek letters with yet more meanings, and since he was Jewish, he chose the Hebrew alef.) Any infinite set that can be counted is of that size — אo.
Dealing with infinities gets weird. For example, in any finite set of consecutive integers, the fraction of them that are even is roughly 1/2. (Exactly 1/2 when the set has an even number of elements, e.g. 2, 3, 4, 5; but not when the number of elements is odd, e.g. 1, 2, 3, 4, 5.) However, when dealing with an infinite set, the number of integers and the number of even integers are the same אo. How? For every integer i, there is an even integer 2i:
1 <-> 2
2 <-> 4
3 <-> 6
4 <-> 8
They can be matched 1-to-1! Another way to think about this oddity: you won’t run out of even numbers before finding a match for each whole number, since infinity means never running out.
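The matching above can be sketched in a few lines of Python (the function name is my own, chosen for illustration):

```python
# Pair each positive integer i with the even number 2*i.
def match_evens(n):
    """Return the first n pairs (i, 2i) of the 1-to-1 matching."""
    return [(i, 2 * i) for i in range(1, n + 1)]

pairs = match_evens(4)  # [(1, 2), (2, 4), (3, 6), (4, 8)]
# No even number is used twice, so the matching really is 1-to-1:
assert len({e for _, e in pairs}) == len(pairs)
```

Run it for any n you like; the pairing never skips or reuses an even number, which is the whole point of the 1-to-1 correspondence.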
The number of fractions is also אo. Anything that can be counted or listed qualifies.
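To make “anything that can be counted or listed” concrete, here is a sketch in Python (the function name is my own) that lists the positive fractions one diagonal at a time — first those whose numerator and denominator sum to 2, then 3, then 4, and so on — skipping duplicates like 2/2:

```python
from fractions import Fraction

def list_fractions(n):
    """Return the first n distinct positive fractions, walking the
    grid of numerator/denominator pairs diagonal by diagonal."""
    seen, out = set(), []
    s = 2  # s = numerator + denominator; each s is one diagonal
    while len(out) < n:
        for num in range(1, s):
            f = Fraction(num, s - num)
            if f not in seen:  # skip repeats such as 2/2 == 1/1
                seen.add(f)
                out.append(f)
                if len(out) == n:
                    break
        s += 1
    return out

print(list_fractions(5))  # [1, 1/2, 2, 1/3, 3]
```

Every positive fraction eventually appears at some finite position in this list, which is exactly what it means for the set to be countable — size אo.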
However, the number of possible real numbers, including irrationals, is some kind of bigger infinity. You can’t map the line, the continuum, to the set of integers. Another way of saying it: the number of points on a line is not countable. Cantor called this kind of infinity “C” (for “continuum”). He believed that C = א1, the next infinity after אo, but he could never prove there wasn’t some step between it and אo, so he left it as “C”.
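Cantor’s proof that the points on a line can’t be counted is the famous diagonal argument. This toy Python sketch (names are my own, for illustration) shows the idea on a finite sample: given any list of decimal expansions, build a number that differs from the i-th expansion in its i-th digit, so it can’t appear anywhere on the list:

```python
def diagonal_missing(decimals):
    """Given a list of digit strings (fractional parts of reals),
    build a number whose i-th digit differs from the i-th digit
    of the i-th entry -- so it is on no line of the list."""
    new_digits = ""
    for i, d in enumerate(decimals):
        digit = int(d[i])
        # Pick any different digit (avoiding 0 and 9 sidesteps
        # the 0.999... == 1.000... ambiguity).
        new_digits += "5" if digit != 5 else "6"
    return "0." + new_digits

sample = ["141592", "718281", "414213"]
print(diagonal_missing(sample))  # 0.555 -- differs from each entry
```

No matter how you try to list the reals — even with an infinite list — this construction produces one you missed, so no list can be complete.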
אo is the infinity of “how many?”, C is the infinity of “how much?”
A theological digression before I get to the main point:
This difference is a good metaphor for explaining a common theological error.
Many wonder how G-d, in charge of the entire universe, could possibly be interested in an individual person out of billions on a little backwater planet out in one galaxy among who knows how many…
This is viewing G-d like אo. It’s a huge set. But there are gaps between its members.
G-d’s infinity is beyond that. I’m not saying it’s C, or anything along those lines (which is why I used the word “metaphor” at the beginning of this digression), just that it’s greater than אo. And just as the real number line has an infinite number of points between zero and one (in addition to comprising an infinite number of such intervals), G-d has an infinite amount of attention to bestow on each of us.
Now, finally on to the main point…
Computer science started with a number of formalizations of what a computer program or a mathematical function could calculate, and it turns out that they all described the same set. This set of “computable functions” is countably infinite, i.e. it’s אo. In something closer to English: the list of functions that can be implemented on a computer would be infinitely long, but you could come up with a way (in principle) to map them 1-to-1 to the integers. It’s easy to prove on a hand-wave level: Write each one out in some Turing Complete language (a fancy way of saying a programming format provably capable of representing anything computable). Convert each symbol and space in the program into a numerical code. The result is an integer. So there can’t be more computable functions than integers.
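The hand-wave proof can be made concrete in a short Python sketch (my own illustration, not any standard encoding): read the program’s text, byte by byte, as the digits of one enormous integer. A marker byte keeps the map reversible:

```python
def program_to_integer(source: str) -> int:
    """Map a program's source text to a unique integer by reading
    its UTF-8 bytes as one big base-256 number. The marker byte
    0x01 in front makes the encoding reversible."""
    return int.from_bytes(b"\x01" + source.encode("utf-8"), "big")

def integer_to_program(n: int) -> str:
    """Invert the encoding: strip the marker byte and decode."""
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return raw[1:].decode("utf-8")

src = "print('hello')"
n = program_to_integer(src)      # one (very large) integer
assert integer_to_program(n) == src
```

Distinct programs get distinct integers, so the set of all programs — and hence the set of computable functions — can be no larger than the set of integers.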
However, people don’t fully describe their thoughts. Natural language doesn’t work like a programming language. Instead, we rely on common experience to bridge the communication gap. Because we’re all human beings, many things can be left unsaid, and the other person fills in the gaps by imagining the described scenario themselves.
In fact, I personally believe that people CAN’T fully describe their thoughts. Thoughts do come in the stream of consciousness that we can put into words. But they also come in our ability to visualize, recreate sounds, etc. inside our own heads — what the rishonim called koach hadimyon. (The Imaginative Faculty, as Aristotle meant the term.)
It’s for that reason that I believe that Artificial Intelligence on a computer is impossible. Koach hadimyon allows people to think not just in terms of manipulating symbols, but also based on semantics, in a way that is beyond algorithmic. A person thinks about red not only by having in his head the sound of the word “red”, but also by mentally experiencing redness. (That experience of redness is called a “quale”; plural, “qualia”.)
Very related to this is Searle’s “Chinese Room” thought experiment; here is the description from Wikipedia:
Searle requests that his reader imagine that, many years from now, people have constructed a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, using a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a human Chinese speaker. All of the questions that the human asks it receive appropriate responses, such that the Chinese speaker is convinced that he or she is talking to another Chinese-speaking human being. Most proponents of artificial intelligence would draw the conclusion that the computer understands Chinese, just as the Chinese-speaking human does.
Searle then asks the reader to suppose that he is in a room in which he receives Chinese characters, consults a book containing an English version of the aforementioned computer program and processes the Chinese characters according to its instructions. He does not understand a word of Chinese; he simply manipulates what, to him, are meaningless symbols, using the book and whatever other equipment, like paper, pencils, erasers and filing cabinets, is available to him. After manipulating the symbols, he responds to a given Chinese question in the same language. As the computer passed the Turing test this way, it is fair, says Searle, to deduce that he has done so, too, simply by running the program manually. “Nobody just looking at my answers can tell that I don’t speak a word of Chinese,” he writes.
There are rebuttals, but discussing why I still find the argument compelling isn’t really what I’m interested in raising here. (If you’re interested, see the Wikipedia page.)
Right now I want to use this distinction between an algorithm of rules for symbol manipulation vs. using words because they mean something to me. The notion of qualia gives meaning to a symbol. I not only think using the word “red”, but when I do, it brings up memories of the actual experience of red objects. As we discussed in the entry on koach hadimyon, I can reason about whether elephants have hair not only by applying syllogisms related to their being mammals, but also by conducting mental experiments, visualizing elephants with different kinds of hair on different parts of their bodies and comparing them to memory.
Similarly, I believe that not all of Torah can be articulated. This is the brilliance that the angels of the medrash attributed to our reply at Mt. Sinai — “we will do and we will listen” — placing the doing first. According to the medrash (cited by Rashi), the angels asked who told the Jewish people this secret. Much of Torah can only be learned by experiencing it, not through discussion or abstract instruction. The navi sees visions, metaphors, experiences, rather than deduces truths. Similarly, aggadita is relayed through metaphor (“mashal umelitzah”, as the Rambam puts it in his introduction to pereq Cheileq). Visualization is a key part of how we receive and perceive Torah; it is not just ideas that can be assigned countable tokens and taught out of prose books.
Thus, the Torah includes elements that can’t be mapped to algorithms, nor to a countable list of symbols. The proof that the number of computable functions is countably infinite (אo) doesn’t work for divrei Torah.