Smart Technology

Column: Gerd Leonhard – I am not a robot

The word ‘robot’ is a relatively recent invention, coined in the former Czechoslovakia in the 1920s, but robots have played on our imaginations for millennia. Ironically, the word stems from a Slavic word for slave (rabu), which is also related to the German word ‘Arbeit’ (work), a word made infamous by the Nazi slogan ‘Arbeit macht frei’ (‘Work sets you free’), printed over the gates at Auschwitz.

We’ve always had robots in the sense of slaves. The only difference today is that they are no longer fellow human beings. Simple robots, such as those used in automotive welding, can certainly liberate us from deadening routine and mindless repetition. But as robots learn to walk, talk, ‘think’ and come out of their industrial cages, will our liberation backfire? Consider for example the extent to which we have already abdicated our human sovereignty. Ever noticed, for instance, how you instinctively dislike a hotel or restaurant that is not listed or recommended online? Ever checked someone’s profile on LinkedIn before you responded to a meeting request? Has your sense of direction deepened or weakened as a result of Sat Nav, Google Maps and Waze? Could it be that we are getting dumber as robots are getting smarter?
As a futurist in the early 21st century, I might just have the dubious privilege of reporting on the last fully biological generation of Homo sapiens. As our robots become ever more humanoid, our bodies may become ever more technologized. From wearable computing to ingestible chips, and from smart pills to brain–computer interfaces (BCIs), or even neural implants, our relationship with technology is about to become a lot more personal: technology is getting ready to go inside us.

Gerd Leonhard

I think of robots not as friends but as beautiful tools

Gerd Leonhard
is the founder and CEO of The Futures Agency. He is based in Zurich. His new book, Technology vs. Humanity, is out now, published by Fast Future Publishing.


Nothing vast enters the lives of mortals without a curse, as Sophocles famously warned. Let’s imagine, if we can, a not-too-distant world in which your connected car communicates your vehicle’s data (including every time you drive too fast!) in real time to your insurance company (and conceivably to your local traffic cop); a world where every single payment you make is stored on your smartphone because wallets and credit cards are things of the past; a world where your doctor knows how much or how little you walked last week and how high your heart rate was while you last slept on a plane; a world where we all become beacons, generating and transmitting gigabytes of data for dozens of Watsons to examine with their tireless, self-learning digital brains.

Efficiency would almost certainly trump humanity at every turn: welcome to a giant machine OS that feeds off our output.
Next, let’s consider a very simple question: if we humans can’t even agree on the rules and ethics for an “Internet of people”, how will we ever agree on something thousands of times larger and more complex?

Just who exactly is in charge? We already have guidelines or agreements on what is permitted in biotechnology and bioengineering, such as the 1975 Asilomar guidelines on recombinant DNA. We have nuclear non-proliferation treaties in place that actually work, as the recent U.S.–Iranian negotiations have proven. But even though data is rapidly becoming the single most powerful economic driver in the world (“the new crude oil”), we lack any kind of global treaty on what we are allowed to do with the personal data of billions of Internet users, much less a treaty on cognitive computing or artificial intelligence.
By far the biggest challenge facing humanity in the next 20–50 years will be the relationship between man and machine.

Instead, big data, AI and the IoT are largely unregulated spaces at the very same time that their power is surpassing everything that has come before. Who will make sure that “Big IoT” does the “right thing”, especially when the “wrong thing” could have potentially catastrophic results for humanity?
Given the technology industry’s love affair with the IoT, we can’t afford this kind of risk.
What we need is a new kind of “humarithm”: buffers and balancers coded into the systems that comprise the Internet of Things, ensuring that truly human values are followed at all times. We need to apply a kind of digital-age precautionary principle to scrutinize and, when necessary, regulate the IoT; a way to make sure that this potential blessing does not unintentionally turn into a curse.