Are human brains too complex to replicate?

Posted by Andrew on January 21st, 2011

Over at his Bottom-up blog (safe for work), Cato scholar and CS PhD candidate Timothy B. Lee makes the case that we’ll *never* be able to copy the human brain in software. He argues that the human brain is too complex, and living systems impossible to replicate via mathematics. (I categorize these kinds of articles as the “Sorry nerds, here’s why you’re wrong” variety.)

While I’d be the first to point out the futility of arguing over whether we will or won’t be able to do something, I have a little trouble with his arguments (in a later post I’ll offer my own argument as to why it might be a bigger challenge than we realize).

“You can’t emulate a natural system because natural systems don’t have designers, and therefore weren’t built to conform to any particular mathematical model.”

Natural systems like physics and chemistry don’t have designers, and we emulate those every day. Our ability to emulate them increases all the time, from the Middle Ages, when we had a very incorrect and non-empirical view of these things, to today, when we can run simulations of what happens inside atoms and at the moment of the Big Bang.

An airplane wing works a lot like a bird’s wing in a glide, and we fly millions of miles every day on a mechanical emulation of that living system.

Since brains are made of atoms, unless there’s some magical process going on that transcends physics, at some level you should be able to replicate a brain provided you have enough computational power. That computer could even be a jar of neurons (a method I don’t think Lee even considered).

At some point we’ll have computers with a greater number of virtual parts than the human brain has. That’s the point at which many think we’ll be able to replicate the brain. Knowing what to replicate, and how, will of course be a challenge. We’re still figuring out how to make virtual proteins…

Following the graph of computational power over the last decade shows that we’re nearing the point where the raw power should be available.

To further make his point, Lee uses weather prediction as an example:

“Weather simulations, for example, are never going to be able to predict precisely where each raindrop will fall, they only predict general large-scale trends, and only for a limited period of time.”

Lee confuses a simulation with a predictive system. I can write a very simple program, in just a couple of lines of code, that models a coin toss with 100% statistical accuracy. It won’t tell you the outcome of any specific real-world toss, but its results would be indistinguishable from real tosses, and no system could tell the difference between my virtual toss and a real one.
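For concreteness, here’s a minimal sketch of the kind of two-line program I mean (the function name and toss count are just illustrative): a virtual coin that reproduces a real coin’s 50/50 statistics without predicting any particular physical toss.

```python
import random
from collections import Counter

def virtual_coin_toss():
    """Simulate one fair coin toss."""
    return random.choice(["heads", "tails"])

# Over many tosses the simulated statistics converge on a real coin's
# 50/50 distribution, even though no single virtual toss corresponds
# to (or predicts) any single physical toss.
counts = Counter(virtual_coin_toss() for _ in range(100_000))
for side, n in counts.items():
    print(side, n / 100_000)
```

Hand a statistician the output of this program and the log of a real coin, and no test will reliably tell them apart; that is the sense in which a simulation can be a perfect emulation without being a prediction.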

A replicated brain is going to have its own experience from its point of inception and be just as subject to chaos as weather, coins and other brains. It’s going to be no more confined to Newtonian physics than any living system. The fact that it behaves differently than the brain it copied is no more disproof of its utility than the fact that identical twins develop different thought patterns.

He presses his point further by saying that you can’t reduce neurons to transistors, and that because they’re different, the gap between a computer and a brain is too vast to bridge.

As I mentioned earlier, Lee seems to ignore entirely the premise of just creating a computer out of actual neurons. We can do that to a small degree today. There’s no reason to think that it can’t scale. Obviously a bunch of unstructured neurons are not the same as a living human brain, but the fundamental parts are similar and that’s a good start.

I think the biggest problem Lee has with this is in seeing a computer and a brain as a one-to-one analogy in which the aforementioned transistors act as neurons. This of course would not work. A human neuron has far more complexity than a simple logic gate. That, plus the fact that we’re only beginning to grasp the function of other parts of the brain, makes it a complex task. Nobody is saying that it isn’t.

What AI researchers and people interested in the Singularity believe is that a living system isn’t irreducibly complex. At some level it’s made of the same kinds of atoms as everything else. Starting from that point you can write software that emulates the function of molecules, proteins and even cells. From there (given enough computational power) you can replicate living systems. Brains should be no different.

  • http://twitter.com/CrashKinkaide Crash Kinkaide

    Even if one is to assume a pure hardware architecture (as opposed to software modeling) replicating the function of neurons, there’s no reason the function of a neuron could not be emulated with dozens (or hundreds) of transistors. Once that’s done, scale up.

  • http://www.andrewmayne.com Andrew Mayne

    I agree. It’ll certainly be a complex challenge. I’m just baffled by people that argue that things that are physically possible can’t be done – ever.

  • Dereckc1

    I agree. With the advances in quantum computers, there’s no reason we shouldn’t be duplicating people’s brains within, at the outer limit, 50 years. Therefore it’s up to the next generation of people to decide whether or not to do it!

  • http://twitter.com/clonetrooper949 no

    Nice argument! I can only wonder what life will be like in the future once we have the technology to emulate brains. That would open the possibility of immortality of some sorts. The Anime series “Ghost in the Shell” goes along with that idea, of people being able to “cyber-ize” their brains and exchange body parts for cyborg ones.

    There will certainly be ethical things to figure out in the future… This will also introduce the idea that we are really just machines (biological ones) to more people. Just look at stem cell research: funding was cut even though it has HUGE potential. It still goes on today regardless, and I think the idea of emulating the human brain will be too tempting for research to stop, even if it isn’t funded or is banned by the government.

  • Anonymous

    Is this what it would be like to be in 1951 and listening to someone arguing about how impossible it would be to travel to the moon? I don’t like it when people argue against ideas just because they can’t understand that we’re always just a handful of brilliant ideas away from making the impossible very possible.

    Nice article, Andrew. You make all kinds of sense.

  • Anonymous

    Great post. Among the different positions (this post, the Bottom-up guy and Hanson) there seems, to me at least, to be more than one issue being discussed.

    If by “copying the human brain” one means uploading – making an actual copy or even a simulation of a specific person’s brain with his or her memories and preferences and such, one might at least be able to argue that the task would be much harder than just getting the hardware to run it and a detailed connection map (though the future is a long time to work on a problem).

    If one means copy in the sense of getting a system that can learn and do things a human can then the argument just doesn’t work, I think. The human brain may be too complex for humans to recreate but it is also too complex for DNA and cellular components to create a brain. And so they don’t. Instead, they do their jobs making cells and the cells do their jobs making connections and breaking connections based on the combination of genetic instructions and environmental input with lots of feedback from each of these. Environmental input would even include the brain’s own activity – thoughts change the thinker’s brain.

    In short, neither humans nor cells can “make” something as complex as a brain. But that doesn’t mean they can’t “get one made”.

    As for something which Hanson says “would have massive effects on the world economy, dramatically increasing economic growth rates”: just the latest narrow AI, like the rapidly improving translation software, is starting to do that. Simple versions of general AI are going to put massive amounts of change on the world’s agenda. Imagine if even low-income people and small businesses could have a personal receptionist to screen out telemarketers or answer simple questions without missing calls from potential employers or clients. All of the tiny little tasks that are beneath people’s pay grade, but which can’t be delegated because there is no room in the budget (and because governments ban low-skill jobs via minimum wage laws), could be taught to a general AI. The types of tasks a simple general AI could do are major distractions to highly paid professionals who need to deal with them. Currently, even homeless people can get access to a word processor at a public library. What will it be like when everyone can have a “staff” of career advisors, financial consultants, nutritionists, researchers, secretaries and others, without needing to replace them on sick days?

    Simple general AI is going to be a game changer long before human level AI is on the market.

    My stars, that was a long comment! I will be quiet now.

  • http://www.andrewmayne.com Andrew Mayne

    I concur.

  • mcsween.david

    Why would we have to emulate the entire architecture anyway? Would it be enough to approximate a state for each region of brain function? I guess it depends on how homogeneous each region is. But there should be lots of room for efficiency in emulating human brains. After all, evolution doesn’t have a design target, so the brain has much redundancy and crap built in that an emulation may not need.
    So we should shoot for HumanBrain 2.0.