A lot of attention is paid these days to whether and how AI should be regulated to ensure it is ethical. A High-Level Expert Group in Brussels has started the usual European “stakeholder carousel” and called for input on its first ideas.
But in fact, AI is a highly technical matter, and when it comes to technical standardization, ethics is a relatively new field. Technology standardization traditionally deals with protocols, hardware specifications and the like. The fuzzy domain of ethics and the context-sensitivity of any ethical matter seem almost contrary to the straight and homogeneous logic of the engineering world.
A first question in this challenge is therefore what ethics is in the first place. While the philosophical world has worked on this question for over 2,000 years, ethics means – in a nutshell – doing the right thing in the right way and being a good person in doing so. In other words: acting – ideally as an exemplary role model – such that you or your company contributes to the good of society; creating positive externalities and avoiding negative ones; creating wellbeing in this world and combating its opposite.
Not everyone is equally good at acting ethically. Most humans learn, more or less, to stay in the green area (see figure 1). This is what we learn in childhood, as part of our upbringing or by growing up as members of society. In contrast to this good or ethical behaviour, there is also bad behaviour, or what the law recognizes as criminal behaviour. Fairy tales call it “evil”. Between these two extremes of good and bad behaviour, between good and evil, there is some kind of borderline behaviour; a behaviour Germans would call “grenzwertig”, or “marginal”. The law demarcates the line where this marginal behaviour is no longer acceptable; where a practice is so bad that it is no longer legitimate; an extreme point where the rights of people, nature or society are undermined by actors in such a way that they should be sanctioned to ensure the long-term stability of society.
From my perspective, any technology, including AI, can and should be built such that it fosters ethical behaviour in humans and human groups and does not cause harm. Technology can support ethical behaviour. And – most importantly – it can be built with an ethical spirit. The latter supports the former. What cannot be excluded prior to technological deployment is that borderline (or even criminal) behaviour is accidentally triggered by a new technology or by the people using it. This happened to Microsoft when its chatbot Tay turned into a fascist. Technology should therefore be improved iteratively, so that potential borderline effects are subsequently remedied within it. In this vision there is no need for regulation. Companies and engineers can do the job, constantly working towards the good in their artefacts.
But here is a challenge: ethical standards vary between world regions. Europe has different standards when it comes to safety, green IT and emission levels, privacy, and so on. There are completely different ethical standards regarding freedom and liberty when Europe is compared with China. There are completely different gender models in Russia and the Middle East than in Europe or the US. Ethics is always concerned with what is good for the communities of a region. But technology these days is global and built to scale in international markets. So I suspect that technical de facto standards rolled out worldwide easily, and often unwittingly, hit the borderline as soon as they spread across the world.
Regional legislators then have to look into this borderline behaviour of foreign technology to protect the relevant values of their own societies. This is what happened in the case of privacy, where the GDPR now protects Europe’s standard of civil values. The GDPR shows that the legislator has a role to play when technologies cross borders.
And here is another challenge: unfortunately these days – let’s be realistic – not all companies are angels. “Firms of endearment” do exist, but there are also quite a few that play the game of borderline ethical/legal behaviour (see the figure again). Be it to save costs, to be first to market, to pursue questionable business models or to try new things whose effects are hardly known, companies can have incentives to pursue business practices that are ethically debatable. For example, a company may develop AI software for predictive policing or border control where it is not fully transparent how the software’s recommendations come about, what the data quality is, and so on. When companies are in these situations today, they often play “the borderline game”. They do this in two ways:
- They influence the borderline by setting de facto standards. They push rapidly into markets, establishing standards that enter the market with quite a lot of ethical flaws. Examples are Uber and Facebook, which now confront a lot of criticism around diverse ethical issues after the fact (such as hate speech, privacy and contractual arrangements with employees).
- Or, secondly, companies actively work in official technical standardization bodies (such as CEN, ISO, IEEE or the W3C) to ensure that technical standards are compatible with their business models and/or technical practices.
In both of these cases, companies prioritize the pursuit of their business over care for ethical externalities. How can regulators handle this mechanism?
To address problem 1 – sudden de facto standards – regulators need to set barriers to entry into their markets. For instance, they can demand that any external technology brought to market go through an ethical certification process. Europe should think hard about what it lets in and what it does not.
To tackle problem 2 – companies influencing official standardization processes – regulators must pay more attention to the games played at the official standardization bodies to ensure that unethical borderline technologies do not actually get standardized.
So to sum up, there are three tasks for regulators when it comes to tech ethics:
- Regional legislators always have to look into the ethical borderline behaviour of foreign technology to protect the relevant values of their societies.
- Regulators need to set barriers to entry into their markets, e.g. by testing and ethically challenging what is built and sold there. Europe should think hard about what it lets in and what it does not.
- Regulators must also watch the games played at standardization bodies to ensure that unethical borderline technologies are not legitimized through standardization.
Are we in Europe prepared to address these three tasks? I am not sure, because a first question remains in the dark when it comes to Europe’s trans-regional political construct: who is “the regulator”? Is it the folks in Brussels, who pass some 70% of the region’s legislation today? Or is it the national governments?
Let’s say that when it comes to technology, regulation should be proposed in Brussels, so that Europe as a region is a big enough internal market for regional technologies to flourish while engaging in healthy competition with the rest of the world. But even then we have to ask: who is “Brussels”? Who “in Brussels”? When we ask who the regulator is, we should not forget that behind the veil of “DG bureaucracy” it is really individual people we are talking about. People who play, are, or believe themselves to be “the regulator”. And so the very first question, for ethical regulation just as much as for ethical standardization, is: who are the people actually involved in these practices? Do they truly pursue regional interests – for instance, European interests? A good way to answer this question is to ask on whose payroll they are, or who sponsors their sabbaticals, their research institutes, and so on.
For example: when there is a High-Level Expert Group (HLEG) on AI ethics, it is worthwhile asking: who are the people administering the master versions of the group’s recommendation documents? Are these people paid by European taxpayers? Or are they paid by US corporations? We need transparency on both, because it is this HLEG that is likely to pass recommendations on both legislation and standardization.
Let’s presume, in this example, that they are all paid by European taxpayers. Another set of questions, demanded by the ethical matter specifically, is: what concept of a person (idea of man, “Menschenbild”) do these people have? Do they believe in the grace and dignity of human beings, and do they respect humans as they are, with all their weaknesses? Do they have a loving attitude towards mankind, or do they think – as many do these days! – that current analogue humanity is the last, suboptimal generation of its kind? In short: whom do we actually entrust with regulation in sensitive ethical areas such as the ethics of AI? As we move into ever more sensitive ethical and social matters with technology, I think these questions need to be asked.
As we pass HLEG recommendations into standardization or even regulation – as we establish standards and make these standards part of the law – we need boards, such as the former Art. 29 Working Party for data protection, composed of (1) recognized domain experts who are (2) well-respected individuals and can be entrusted with judging whether a standard (or even a law) actually lives up to ethical standards. I would like to call such a board “guardians of ethics”. Guardians of ethics should be respected as a serious entity of power: a group that has the power to turn down legislative proposals and standards; a group that inserts into the system a new kind of separation of powers, between lobby-infused regulators and business-driven standard makers on one side and the public interest on the other. Today we find this separation of powers only in the high courts. The high courts decide on the borderline between acceptable and unacceptable technology design. But high courts come too late in the process. Faced with the rapid diffusion of new technologies, ethical judgements should come before a technology is deployed; before de facto standards pre-empt the law and before a technical standard is released. The societal costs are too high to make ethical judgements on technology only after the fact, late in a technology’s existence. Any standardization process and any law on the ethics of technology should pass by the guardians of ethics, who can challenge proposals before bad developments unravel. Guardians of ethics have the power to stop what is unwanted.