Some initial thoughts on “The Age of Em”

In The Age of Em, Robin Hanson envisions software, not hardware, as the thing that is intelligent. This makes no sense to me. Software is just information, or alternatively a description of what an actual, physical computer is doing. If we are to have artificial intelligence through any means, brain emulations or otherwise, I believe it is more correct to think of the physical computers in question as intelligent, not the set of instructions that tell those computers what to do.

(To be clear: a computer running a million separate ems I would view as a single being, perhaps with a million separate “centers of consciousness,” to borrow a phrase from inter-Christian disputes over the best understanding of the Trinity. Perhaps a single computer running three separate ems is loosely analogous to that doctrine, in which God is both one substance and three persons.)

Maybe it is already a fallacy to think of the brain as a computer, or an “information processor” at all. I read this recently and it struck me as a nicely contrarian view; I really lack the expertise to judge the issue one way or the other. This of course has no bearing on whether a brain can be emulated, since lots of things that are not “computers” can be emulated, such as the flight of an airplane. I note this only because, if it is indeed wrong to say that brains are computers, it is doubly wrong to think of brains as software. (The hardware/software distinction is in any case entirely incoherent when applied to brains: there is no single brain platform that runs different people.)

Of course, under “computationalism,” the physical implementation of a computer is irrelevant: computationally equivalent computers can be implemented with neurons, microchips, or, in John Searle’s example, a vast array of ping-pong balls and beer cans. I believe this is what inspires the tendency to view software, or patterns, as all that matters. But even in a computationalist world where someone created a sentient, galaxy-sized array of ping-pong balls and beer cans that fully emulated a human brain, I would say it is the array of beer cans and ping-pong balls, and not its ethereal logical structure, that is sentient.
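As a toy illustration of what “computationally equivalent” means here, consider the same abstract function realized in two entirely different ways, indistinguishable from the outside. (This is a minimal Python sketch of my own, not an example from Searle or Hanson.)

```python
def xor_arithmetic(a: int, b: int) -> int:
    # One "implementation": bitwise arithmetic, as on a chip.
    return a ^ b

# Another "implementation": a bare lookup table, the sort of thing one
# could in principle build out of ping-pong balls and beer cans.
XOR_TABLE = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def xor_lookup(a: int, b: int) -> int:
    return XOR_TABLE[(a, b)]

# Computationally, the two are indistinguishable by behavior:
assert all(xor_arithmetic(a, b) == xor_lookup(a, b)
           for a in (0, 1) for b in (0, 1))
```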

I should note that these comments are not a version of Searle’s “a simulation of a fire does not burn” objection. I am accepting that a computer running an em might actually do everything that a brain does in the real world; I am simply noting that it is the computers, and not the ems, that are doing the doing.

Although Hanson briefly acknowledges that some ems might view being “moved” from one computer to another as a form of death, he does not dwell on the point, which is odd given that the copying of ems is so central to his book. I only note that software, of course, cannot be “moved” at all. New copies of software can be made, various energy transmissions can take place, and so on, but “move” in the context of software is only a convenient shorthand for “copy, then delete.” Whether there is some form of “continuity” of consciousness between an original and a copy is beyond me, but there would probably appear to be, meaning that from the perspective of the functioning of em society the distinction may not matter, as long as no one thinks about it too much. (This is perhaps similar to how transporters work in the Star Trek universe.) I note this only as another reason to be skeptical of a framing that envisions pieces of software, as opposed to actual physical computers, as being somehow “real” and moving about in some kind of virtual world.
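The point is visible in ordinary systems programming: “moving” data between machines or disks bottoms out in a copy followed by a deletion. (A minimal Python sketch; the function name and file paths are purely illustrative.)

```python
import os
import shutil

def move_em_state(src_path: str, dst_path: str) -> None:
    """'Move' an em's stored state from one location to another.

    Nothing travels: the bytes are duplicated at the destination, and
    only after the copy succeeds is the original instance destroyed.
    """
    shutil.copy2(src_path, dst_path)  # step 1: create a new copy
    os.remove(src_path)               # step 2: delete the original
```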

Maybe you could argue that it is indeed only “patterns” or information (or forms) that can in some sense be sentient, but that in order to become actualized they first need to be embedded in some physical system or another. This raises all sorts of issues, such as whether two identical instantiations of a pattern are the same being or two different ones. The Age of Em is an interesting book that dives into many minute speculative details, but it generally avoids issues such as this.
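Python’s distinction between equality and identity happens to be a handy toy model of that question (my analogy, not the book’s): two instantiations of one pattern compare as equal while remaining physically distinct objects.

```python
pattern = [1, 1, 2, 3, 5, 8]
second_instantiation = list(pattern)  # a separate copy of the same pattern

print(pattern == second_instantiation)  # True: identical pattern
print(pattern is second_instantiation)  # False: two distinct objects in memory
```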

One final point: I believe that human brains may be subject to Wolfram’s notion of computational irreducibility. The idea is that for many systems there are no shortcuts: the only way to predict what they will do is to let them do it. It is easy, for example, to predict where a cannonball will fall without firing it, since so much information about the system can be discarded (e.g., the velocity of every dust particle), but there are other systems where effectively no information can be discarded. This probably closes off many paths to brain emulation, and at the very least calls into question whether an emulation could run faster than real time, or whether it could operate only at the speed of a physical brain. The mere fact that electricity in silicon chips is faster than neurons does not by itself mean that an em could think faster than a physical brain does.
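To make the contrast concrete, here is a toy sketch (my own illustration, not the book’s): the cannonball’s landing point is a closed-form formula we can evaluate in one step, whereas for Wolfram’s Rule 30 cellular automaton, his standard example of a (conjecturally) irreducible system, the only known way to learn a cell’s value at step t is to compute all t steps.

```python
import math

def cannonball_range(v0: float, angle_deg: float, g: float = 9.81) -> float:
    """Reducible: the landing distance follows from a single formula,
    no matter how far in the 'future' the landing is."""
    theta = math.radians(angle_deg)
    return v0 ** 2 * math.sin(2 * theta) / g

def rule30_cell(t: int, pos: int, width: int = 201) -> int:
    """(Conjecturally) irreducible: to learn one cell's value at step t,
    we have no known alternative to simulating every step up to t."""
    cells = [0] * width
    cells[width // 2] = 1  # start from a single live cell
    for _ in range(t):
        # Rule 30: new cell = left XOR (center OR right), periodic edges.
        cells = [cells[i - 1] ^ (cells[i] | cells[(i + 1) % width])
                 for i in range(width)]
    return cells[pos % width]

print(cannonball_range(100.0, 45.0))  # one arithmetic step
print(rule30_cell(90, 100))           # required running all 90 steps
```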