Mary Shelley’s Frankenstein was born during the Industrial Revolution, when the rushing age of machinery was thrust upon the globe and population growth exploded.
The response to the so-called fourth industrial revolution, or the digital revolution now underway, has raised its own philosophical concerns, popularized most widely in fiction but also voiced by some of the biggest names in tech and business today.
A major concern, dismissed by the optimists and pressed hard by the pessimists, is whether merging Artificial Intelligence with the human body is a bad thing.
Frankenstein’s creation may have been a monster, he may have been a misunderstood creature, and he may have been just another work of fiction designed to scare us, but one thing he never was was a superintelligent machine that could think faster and more effectively than anyone else on the planet. Nor could he self-replicate into ever-better versions of himself until he was no longer recognizable as his original form.
“This is the real challenge for the future; to somehow be able to imbue machines that work with and for people with a sense of human ethicity, otherwise we’ll always be worried about the Frankenstein model” – Dr. Lou Marinoff
This is the dilemma of today. Elon Musk is already at work on merging AI with the human body through his startup Neuralink. He believes that we must merge with AI as hybrids, lest we become, on the optimistic side, the equivalent of their pets, or on the other side, we become irrelevant.
Following the serial entrepreneur’s Tesla Inc., SpaceX, and OpenAI ventures, Neuralink will work towards integrating the human brain with AI via a “neural lace” interface, reportedly delivered through the veins and arteries.
According to Gizmodo, a neural lace is “a mesh that grows with your brain, it’s essentially a wireless brain-computer interface. But it’s also a way to program your neurons to release certain chemicals with a thought.”
For a philosophical perspective on the matter, The Sociable caught up with Lou Marinoff, philosopher and professor at the City College of New York, who was moderating a panel on the future of work and digital disruption at the Horasis India Meeting in Malaga, Spain on June 24.
“This is the real challenge for the future; to somehow be able to imbue machines that work with and for people with a sense of human ethicity, otherwise we’ll always be worried about the Frankenstein model,” said Professor Marinoff.
“We’ve been having a bifurcated discussion; humanity on the one hand and machines on the other. That’s a very Cartesian way of looking at things, minds versus machines. I know an engineer who is working on the integration of silicon chips with human neurons.
… it’s either medicine or poison
“Like with anything else now, and this goes back to ancient Buddhist teaching and other Western teachings, it’s either medicine or poison. It can be used in either way. So, for example if you wanted to learn a language, it would be a lot easier to plug the chip in and then get those neurons firing, absorb the vocabulary, and probably in a relatively short time you could probably be fluent in Mandarin without having to take all the pain and the practice.”
Before we go any further, let us back up and see if implanting AI in the human brain is something that actual science is working on.
According to research led by Harvard professor Charles Lieber, “We have worked under the premise that by matching the structural and mechanical properties of the electronic and biological systems, which are traditionally viewed as distinct entities, it should be possible to achieve seamless integration.”
Obviously, there’s always going to be ethical issues about that, but in the end, it’s possible
In an interview with Nautilus, Dr. Lieber said, “I think the actual interface to the brain is so crude today, and it relies a lot on the power of the computing or signal analysis outside of the brain.”
One of the major drawbacks Dr. Lieber mentioned was the immune-response problem, in which the brain would treat a foreign, mechanical object like an AI implant as a threat and attack it.
When asked if Musk’s neural lacing was a good thing, Dr. Lieber responded, “I think that’s a good thing, but I see it as two ways. One is it can help people who have either a disadvantage or some medical condition, but as well, the obvious thing at the other end, is enhancement. Obviously, there’s always going to be ethical issues about that, but in the end, it’s possible.”
The Harvard professor is optimistic that this technology will evolve, but an even bigger ethical question will arise. Will AI be implanted in humans to help with certain physical or medical disadvantages, or will it be used for enhancement?
This goes back to what Professor Marinoff was saying about AI being a medicine or a poison. It depends on how you use it, and whether or not the AI will become fully autonomous.
Referencing the latest Hollywood version of the movie Total Recall, Marinoff expanded on the ethics of merging Artificial Intelligence with humans, using the example of AI creating memories of vacations that people never had in “real life,” but these “memories” would be indistinguishable from reality in their minds.
It could be used to give people a vacation who couldn’t afford one
“You see, if you came out of this place with the memory of being in Hawaii for two weeks, and you have all these memories of Hawaii, what would be the difference between having the memory and having been there?” said the philosopher.
“It could be used to give people a vacation who couldn’t afford one, because it’s a lot cheaper to go into a tanning booth for an hour and come away with the memory of two weeks in Hawaii than doing it, but if the memory is augmented enough, you wouldn’t know the difference. You would remember having wonderful dinners and all the rest,” Dr. Marinoff added.
On the other hand, we’re talking about the political and social control of people
The whole memory-augmentation aspect carries some fuzzy ethics. If people are aware they have taken an “AI trip,” and are OK with that, then maybe no harm, no foul. But if the AI is used without their permission and without their knowledge, it can get very dangerous.
“On the other hand, we’re talking about the political and social control of people,” says Marinoff, adding, “this is the ultimate horror story — turning us all into an ant colony, basically. And human dignity and freedom and all the things we supposedly value are out the window. Who’s going to control the AI, and what if the AI decides to take over? That leads to scenarios that, hitherto before this digital revolution which were merely science fiction, are now real, or potentially real.”
Where do we go from here? At the World Economic Forum in Davos this year, historian Yuval Noah Harari said that the people most qualified to regulate how our biometric data is used are the poets and philosophers.
So, what does a professor of philosophy say about the next great challenges that we face?
“One of our next great challenges is a) how to get the machines to do what’s good for us and b) how not to fall prey to the imaginations of others who perhaps would want to enlist these technologies for nefarious purposes.”
Dr. Marinoff is also the co-founder of the American Philosophical Practitioners Association, and he has authored international bestsellers — Plato Not Prozac!, Therapy for the Sane, The Middle Way, The Power of Tao — that apply philosophy to the resolution of everyday problems.