“Mind uploading will make us able to live in the depths of interstellar space”


Ever wondered about mind uploading? Dutch neuroscientist Randal Koene wants to scan, digitize and copy a human brain. But how could it work? Where would the consciousness be? And when will mind uploading be achievable? Read my in-depth interview with the former professor at Boston University and current chief scientist at Initiative 2045.

Mr. Koene, the thing that has confused me the most about the concept of mind uploading is this: Let’s say, you emulate my brain and run it on a computer. I would say this is not me but a copy of me, like a digital clone. So there’s no life after death. How should my consciousness jump to a computer?

Randal Koene: You’re right that this is an important question – especially if you are thinking of mind uploading as a ‘life extension’ technique. Let me first say that life extension is not the main reason why I’m interested in mind uploading. My personal interest comes mostly from two different angles.

First, the desire to expand mental capabilities. Those capabilities are our main limitations with regard to experiences, modes of thought, etc. Being able to expand them demands access to the circuitry that produces mind. The optimal mode of access is full access, such as would be facilitated by a different substrate that is more amenable to it than biology. This ties into whole brain emulation (WBE), but also brain-machine interfaces and more.

And what’s the other motivation?

The desire to extend civilization and the human species beyond the little niche within which natural selection decided we ‘fit’, in other words, to make us more adaptable to other challenges.

Humans are not intrinsically well-suited to life in space, life under water, life in methane atmospheres, extremely long distance travel, high acceleration, etc. We are also not intrinsically well-suited to challenges that involve very rapid information flow or processing and many other challenges where we now rely completely on machines that are only loosely connected to us.

Natural selection does not change one species into another that is more adapted to a new situation – it only removes unfit species, so it is not a solution when time or location changes the challenges we have to handle. Substrate-independence can allow us to self-direct our adaptation or evolution.

Concerning the potential of using mind uploading to colonize other planets: How do you imagine this?

There are many possible good answers to this, but just to keep it ‘current’, here is what I posted on Facebook after hearing about the initiative by Yuri Milner and others to send tiny probes to the stars at near lightspeed driven by laser beams:

“Let’s assume for a moment that sending a tiny probe to Alpha Centauri on light beams works. If that is feasible, then could we, in a few years when nanofabrication is more mature, send tiny nanofabricators to the stars the same way? And if that works, could we build a receiver on the other end that could reconstitute a mind uploaded person through whole brain emulation? Steps #1, #2, #3 to space-faring species. Seems more efficient and therefore more likely than Star Trekking.”

Keep in mind that we would not be restricted to Earth-bound bodies. Our form would go hand in hand with function, in other words, able to live in the depths of interstellar space or on an alien world totally unlike our own.

Returning to the possibility of using uploading as a life-extension technology. How can it work?

Before addressing the question head-on, let me also point out that there are many proposed methods for mind uploading, some of which skirt around the issue.

For example, you could envision that mind uploading can involve a process of piece-wise replacement: Create a neuroprosthetic for a piece of the brain (perhaps even a single neuron), put it in place, test that it operates just as the original biological circuit next to it does (producing the same output, to a satisfactory criterion), then remove the biological version of that piece. Do this over and over again.
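As a purely illustrative abstraction (not a real procedure or API – every name and number here is a made-up stand-in), the replace-and-verify loop Koene describes can be sketched in a few lines of Python, with toy response functions playing the role of tissue and prosthetic:

```python
import math

# Toy model: a "brain" as a list of neuron response functions.
TOLERANCE = 1e-9                        # hypothetical acceptance criterion
WEIGHTS = (0.5, 1.0, 2.0)               # arbitrary per-neuron parameters
test_inputs = [x / 10 for x in range(-20, 21)]

def biological_neuron(weight):
    # A sigmoidal response curve stands in for the original tissue.
    return lambda x: 1.0 / (1.0 + math.exp(-weight * x))

def build_prosthetic(weight):
    # The replacement implements the same response function.
    return lambda x: 1.0 / (1.0 + math.exp(-weight * x))

brain = [biological_neuron(w) for w in WEIGHTS]

replaced = 0
for i, original in enumerate(brain):
    candidate = build_prosthetic(WEIGHTS[i])
    # Accept a piece only if its output matches the original to criterion.
    if all(abs(candidate(x) - original(x)) <= TOLERANCE for x in test_inputs):
        brain[i] = candidate   # remove the biological version of that piece
        replaced += 1

print(replaced)  # → 3: every piece passed the test and was swapped in
```

The point of the loop structure is that verification happens before removal, piece by piece, which is exactly what makes the transition gradual rather than all-at-once.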

If you make the pieces small enough, you get to the point where the process is not much different from the many natural causes of change in the brain (plasticity, dying neurons, neurons that fail to fire when they should – which is extremely normal – etc.). You’d reach a threshold where the transition is imperceptible. You then have to ask: “How could you even tell if ‘you’ changed in any way?”

But that question also applies more generally (without the ‘trickery’ of such a gradual replacement process). Are you the same person that you were when you were 5 years old? If you suddenly jumped from then to now… would you have lost your ‘self’ – even though you clearly agree that you were you in both cases? So… does the process, the step-size or something like that determine that self is preserved? Is self-continuity a real issue, or is it perhaps a bit illusory?

If you lose consciousness under an icy lake, are retrieved, have no measurable brain activity (this also happens in certain surgeries on purpose now), and are then brought back to life an hour later… did your original self die and disappear? If not… why would this be any different if you replaced some of your brain with equivalent hardware and then turned it back on? And if that didn’t matter… then why would it matter if you replaced all of the hardware as part of the process? Clearly some food for thought. If nothing else, mind uploading makes what was previously a purely philosophical question a very real and hopefully testable set of hypotheses.

It comes down to the question of what consciousness really is.

Many people imagine that consciousness, the experience of self, is some continuous, ongoing process that is always there, something that is ‘you’. But that is clearly not true, because experiments demonstrate that consciousness is heavily fragmented, often post-hoc and not nearly as we (illusorily) perceive it… just like (visual) perception itself.

Now, despite all of this, I have often still described myself as a ‘fence-sitter’ on this issue: Whichever position you come to me with, I’ll tend to defend the opposite one.

I concede that our notions of consciousness and self are probably flawed and contain illusions. At the same time I concede that I still worry about self-continuity as an issue in mind uploading procedures.

If you asked me which method I would prefer (given all options), I would still prefer a gradual replacement strategy for the peace of mind that it gives me to not have to decide once and for all which side of the fence I’m on.

Yes, if every one of my neurons gets replaced separately – neuron by neuron – I would say that my consciousness is still my consciousness and not a copy. But not if it’s just something run on a computer.

When you say ‘not if it’s just something run on a computer’, I’m not sure I follow your train of thought. If, functionally, the process produces the same output, what is the difference? If you create a new neuron via biology and train it to take the place of an old neuron, okay. If you build a hardware neuron and set it up just so and make it replace a neuron, also okay. If you build a software neuron and program it just so and have it replace a neuron… what’s the difference? I think the concerns you have are probably more about the replacement method than about the machine that is used to produce the same function, at least that’s the impression I got.

I think that it makes sense that initially, as we learn to build neural prostheses, we do a lot of work in software, because software is easy to modify. Later, when you have experience with the models, it makes sense to cast them into hardware that most optimally runs the emulation.

You see this same process in current-day neural prostheses, such as the work by Ted Berger. At each step, first they devise mathematical models that are put into software and tested. Then they produce a chip that can be used with the same model and might someday be used and worn by a patient. Their approach is very sensible, using so-called ‘system identification’ to produce the same functional results that the original brain tissue did in terms of neural spike timing, then testing that within specific experimental constraints (i.e. carefully testable) in rats, non-human primates, and (within about 3 years) in human trials.
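To convey what ‘system identification’ means in spirit – fitting a model until it reproduces the observed input/output behavior of the original system – here is a deliberately simplified linear sketch. Berger’s actual models are nonlinear spike-timing models; everything below is synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for recorded data: stimulation patterns (inputs) and
# the responses (outputs) of the system whose function we want to mimic.
X = rng.normal(size=(200, 4))                 # 200 trials, 4 input channels
true_map = np.array([0.8, -0.3, 0.5, 0.1])    # the unknown I/O mapping
y = X @ true_map + rng.normal(0.0, 0.01, size=200)  # responses plus noise

# System identification, linear case: least-squares fit of the mapping
# from observed input/output pairs alone.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# The fitted model can now predict the system's response to new input.
x_new = np.array([1.0, 0.0, -1.0, 0.5])
predicted = x_new @ w
print(w)          # close to true_map
print(predicted)  # predicted response for the new stimulation pattern
```

The design point carries over to the real case: the model is judged only on whether it reproduces the original’s outputs for the same inputs, not on whether it is built the same way internally.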

And please note that there is really no distinction between work on neural prosthetics and whole brain emulation (or mind uploading). The only difference is the ultimate scope, and WBE is the logical outcome of progress in neural prostheses.

Please give me a step-by-step description of the way you would do mind uploading. Let’s say in the future there will be a consumer who wants to have a digital copy of his brain and you would be the one who can do it. How would you proceed?

I see that you’re not asking me about the research and development roadmap to WBE, which is usually the question. But then, you could read about or hear about that in any one of many talks I’ve given or articles published.

Imagining a future procedure for mind uploading, let me outline two scenarios, one a few decades hence, the other a hundred years from now.

In a few decades:

The patient begins by being injected with a cocktail that contains several billion microscopic wireless free-floating neural interfaces (the end-product of work such as the ‘Neural Dust’ being developed at UC Berkeley, perhaps combined with optogenetic technology), plus delivery vehicles such as macrophages that have been programmed to attach to an interface and to bring it near to a specific type of neuron. Within a few hours, most of the neurons within the patient’s brain would be within the listening range of at least one of the wireless interfaces.

At that point, the interfaces are recording neural activity, at least in terms of the millisecond timing of ‘spikes’ (neural firing), and possibly also the neural membrane potential (derived from the field potential) in the intervening time. This information helps to profile the neurons, identifying their class, response function and even some of the functional connectivity between neurons (as you see activity flow from one to another). Up to this point, but at a much smaller scale, this is exactly what Berger’s team does to identify the functional system for their neural prosthesis.

Such observation is insufficient to capture all possible I/O function mappings in parts of the brain, because you can only observe that which happens to be active during the observation period. Even if you do some active testing by explicitly stimulating batches of neurons, you are still likely to miss latent function.

So, the next step is to capture the actual connectivity between the neurons in great detail – the so-called connectome.

Ideally, you would do this in-vivo as well. Having a billion or more probes already present can facilitate this if in-situ microscopy is used. Alternatively, at this point the patient needs to undergo connectomics scanning, which today is still a post-mortem procedure.

What needs to be obtained is the dendritic network of the neurons, the places in that network where synapses reside, the sizes of pieces of the dendritic and axonal trees, the sizes and shapes of synapses, and potentially proteomic markers that tell you more about the transmitter channels at those synapses (though much of that can be derived from the combination of class of neuron and responses recorded).

Structure and function subsequently need to be mapped into a functional model. Ideally, if you have been able to do all of this in-vivo, then you have what engineers love, namely designs that can be tested and compared with the working original in a piece-wise manner – a much better approach than to build a gigantic highly complex system and then press ‘go’ in the hope that it all works.

Then, perhaps you put the replacements in piece by piece or you create the emulated brain separately (if our worry about self-continuity as being somehow related to in-place activity has been conclusively shown to be flawed thinking).

In a century:

After having plenty of experience with neural prosthesis development and use, and after having carried out WBE on animals and humans, we will have learned enough about the systems of the brain that we can reliably detect what makes sense and what does not. That means it might then be possible to deduce the functional models for each bit of tissue, for each neuron and each synapse, purely from its morphology (its shape) and its position within the connectome. It’s a matter of inference and of selecting sensible parameters from probability distributions based on experience with working systems.

Again, if being ‘on’ during the transfer turns out to be as fallacious a concern as the 19th-century concern about ‘élan vital’, then a new method becomes possible: A preserved human brain (preserved, for example, using plastination methods currently used in connectomics research) could be brought back to life and awareness through a structural scan and casting into functional models.

What I left out here is any discussion of the type of body that one would choose to have after an ‘upload’. I think there is room for much exploration in that area, as we already know that the human mind can adapt to a range of body input/output, from the relative incapacity that confines Stephen Hawking to the body-extension one experiences as an expert kayaker or pilot of remote aerial drones. The most important realization is that whole brain emulation and uploading are not complete unless input and output (sensation and action), as we experience them as part of our waking lives, are also provided.

There are critics who say that it will never be possible because we won’t unlock the neural code. Recently there was a piece in Scientific American (Link). What’s your answer to those critiques?

I suppose we will find out. Neuroscience is certainly not going to throw in the towel anytime soon. And medical need will continue to exist. So, prosthetic cochlear implants and retinal implants will be followed by even more sophisticated neural implants and prostheses.

As we iterate through those designs and discover what works and what doesn’t, that exploration will lead us to understanding which data is essential and how that data is used to create a working, functioning replacement part.

Many parts make a whole, so the challenge after neural prostheses is scaling up data collection and the reimplementation of function.

Given the value of the prize, I would be very surprised if we gave up at any of those points.

How far are you from accomplishing this? When will the first human brain be uploaded? What are you doing right now towards this goal?

We’re still very far from the goal. Right now, the hard work is all in the field of neurotechnology: Developing better tools to measure in great detail and in millions or billions of locations at the same time what is going on in the brain. Astronomers need telescopes, neuroscientists need better function and structure recording devices.

Fortunately, there is a lot of progress to report there. Connectomics has come a long way since 2008 when it first really emerged as a field. Functional recording is taking big strides now, with the development of ever better optical stimulation and recording methods, arrays of electrodes that record from thousands and (probably this year) a million cells at once, wireless microscopic neural interfaces (e.g. UC Berkeley’s ‘Neural Dust’) and more.

One of the developments I’m most proud of is that I was actively involved in, and in some ways instrumental in, the emergence of a group of scientists – now based at Harvard, MIT and a few other locations – that explicitly aims to develop the technology to record from every neuron in a mouse brain (and eventually a human brain) at 1 ms resolution. We can talk more about this if you’re interested. It is directly connected with the BRAIN Initiative as well, which also seeks to push those boundaries.

I think it might be possible to start a project to create a whole brain emulation of the fruit fly Drosophila between 2018 and 2020 (which would probably complete about 5 years after that). If that works, and if the neural prostheses that Berger and others are working on succeed, then the race is on to do this for human brain parts and human brains. I can’t predict the date, obviously, since there are so many factors involved that go way beyond science and deep into finances and politics.

My role at the moment is largely that of an ‘architect’ or facilitator, in that I keep track of the whole set of requirements for whole brain emulation, ensure that the right people are working together and have the means to do so, and seek to support the field with fundamental literature, roadmaps and a research network. I’m fortunate that I’m currently paid to do this full-time.

The implications of this will be immense. You are one of the leading scientists studying this. Where do you see the dangers?

Dangers are (as usual) in the many creative ways that people can imagine and implement misuses or abuses of technology. Access to brain data brings with it a plethora of property questions, questions about right of access and use. We’ve encountered this already in the area of DNA and will be dealing with similar issues more and more.

I also think that the other kind of access to technology is a big issue: If the technology provides a significant benefit to those who can use it then a positive and balanced application is possible only if it is available to everyone. In other words, ‘uploading’ or a procedure to become substrate-independent would need to be treated like major medical treatments, which ideally are available to all alike through a well-designed health care system. Obviously, this is yet another matter that is financial and political in nature.

There are many more issues to consider, worth a whole article of their own.

What do you think of the Human Brain Project in Switzerland? They do a simulation, you want to do an emulation. Where do you see the problems of this project? Do you think the millions of francs are well-spent?

Whether the money is well spent on the HBP depends largely on whether the HBP recognizes its own main hurdles and applies the money to overcome them. The biggest hurdle to the HBP – due to its nature – is that it relies on a software model that is constructed from statistical data that was collected from a large number of different subjects (animals) and that it has a huge number of parameters.

Statistical sampling as a way to generate a model ensures that you have a resulting model that looks ‘plausible’ when compared with any of the individual animals studied to produce the statistics. It also results in a very large model that is highly overparameterized and does not relate directly to any one of the animal brains studied. There is no place in the model produced that explains to you exactly how brain circuitry was laid out in animal X, or why, or how that relates to the behavior of animal X.

The gigantic model allows you to implement (or train) the model to produce almost any desired output for a specific function (e.g. some model behavior), because it is so wildly overparameterized that there are innumerable ways to implement any given function in that pile of parameters. And then you get what is called ‘overfitting’. Those are all serious concerns that every model in computational neuroscience, and really any model in science is faced with. (And all of science is in fact about modeling, i.e. learning about something by making formal representations of it.)
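Overfitting from overparameterization is easy to demonstrate in miniature: give a model as many free parameters as there are data points, and it will reproduce every noisy sample exactly while generalizing badly. A minimal sketch with synthetic data (not HBP code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth is a simple quadratic; we observe 8 noisy samples
# (think: statistics pooled across 8 different animals).
x_train = np.linspace(-1.0, 1.0, 8)
y_train = x_train**2 + rng.normal(0.0, 0.1, size=x_train.size)

# A degree-7 polynomial has 8 free parameters -- one per data point --
# so it can reproduce the training data essentially exactly...
coeffs = np.polyfit(x_train, y_train, deg=7)
train_err = np.max(np.abs(np.polyval(coeffs, x_train) - y_train))

# ...but between the samples it wanders away from the true curve.
x_test = np.linspace(-0.95, 0.95, 50)
test_err = np.max(np.abs(np.polyval(coeffs, x_test) - x_test**2))

print(train_err)  # near zero: the model 'explains' every sample
print(test_err)   # much larger: the fit does not generalize
```

Near-zero training error combined with large held-out error is the signature of an overparameterized fit: the innumerable ways of threading the parameters through the samples are indistinguishable on the training data but disagree everywhere else.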

One way to look for a solution is to start using data that is obtained not from a pool of animals, but from individual animals. That is, collect within-animal samples, not between-animal samples, to set your model parameters. That sort of data collection is exactly what the BRAIN Initiative is about, and it is also exactly what is needed to produce a whole brain emulation of an individual brain (and its emergent mind).

/ / / / / / / / / / / /

Chris Kummer is a journalist and historian based in Switzerland. He focuses mainly on scientific controversies and inquiries into so-called paranormal phenomena.
