We can distinguish ourselves in Europe from the United States and China by focusing on a humane approach to artificial intelligence. In the US, technology is mainly used by companies for commercial purposes. In China, it is mainly used by the government for totalitarian purposes. Let us use artificial intelligence in Europe for social purposes. In that respect, I wholeheartedly agree with the recent essay by Saskia Nijs (1 June).
The European Commission plans to spend € 9.2 billion on the 'Digital Europe' program, on algorithms, data and cybersecurity. Let us spend a good part of it on developing artificial intelligence that puts human dignity first. The books of Yuval Noah Harari, Sapiens and Homo Deus, can help us do that.
Harari describes how humanity has developed all sorts of intersubjective truths over the past thousands of years: gods, money, nation states, human rights and other 'things'. These 'things' do not exist in a physical sense, but they do in a practical sense, because large groups of people believe in them. This allowed people to build pyramids and produce laws. Harari also describes how we are currently combining information technology and biotechnology: we integrate computer functions into our bodies and delegate human tasks to artificial intelligence. The age of humans seems to be over. However, this is not the time to passively watch what happens. I think Harari also offers us a solution.
Human rights are one of those intersubjective truths: they do not arise automatically. Only if we put enough energy into them can we make them exist, in a practical sense. If we are concerned about the fusion of man and machine, and about the increasing power of artificial intelligence, then we can use human rights as a compass for the development and application of that technology.
Such a line of thought is expressed in a recent piece by the European Group on Ethics in Science and New Technologies. This group wants to use human dignity, human rights and ethical principles as guidelines.
It is time for a humanistic renaissance. Let's take a critical look at the practices of companies and governments that collect data, run these data through algorithms and manipulate us. When people on Facebook find themselves in filter bubbles and read fake news, what is left of solidarity and democracy? When children on YouTube see videos that become increasingly strange (the algorithm takes care of that, to keep their attention), what does that do to their autonomy and safety? When teenagers have to do their best to come across well on Snapchat and Instagram, what does that mean for their self-esteem and diversity? And the data and algorithms currently used in law enforcement and the judicial process contain discrimination, which puts justice under pressure.
Now is the time to intervene. At the political level, the General Data Protection Regulation, including the 'right to explanation', is already a good example. Furthermore, companies will have to transform their business models: away from free-with-ads models based on engagement (which is more like addiction) and towards products and services that support autonomy and diversity, and for which people are willing to pay.
For this we can draw inspiration from the work of Shannon Vallor, professor at Santa Clara University in Silicon Valley. In Technology and the Virtues she describes how we can use technology to cultivate virtues such as self-control, courage, empathy, care and citizenship.
Moreover, you and I will have to become more aware of how artificial intelligence creeps into our lives, and deal with it critically. Then, hopefully, in a few years we will have humane algorithms in our products and services, just as we now have organic bananas and fair-trade jeans. In Europe we can build on our humanistic tradition and successfully differentiate ourselves from the US and from China.
In: Het Financieele Dagblad, 27 July 2018, page 9, by Marc Steen
Marc Steen works as a senior researcher at TNO and is involved in the research program Responsible value creation with big data (VWData).