Data ethics: 5 golden principles for responsible AI

Author: Daan van Beek
Senior AI and Big Data advisor

The dark side of algorithms

Self-learning algorithms are making more and more business decisions independently while also becoming an ever-growing part of our private lives. Decisions made by governments, credit card companies, and banks can now be based solely on algorithms. “Computer says no” usually means end of discussion, because the algorithm doesn’t explain itself. Public debates about the use of algorithms and AI take place at the intersection of IT, legislation, and ethics. Ethical data dilemmas mainly concern principles such as fairness, transparency, justification, and bias. So how do we approach the ethical and moral use of algorithms in the AI age? Although there are currently dozens of codes of conduct for the ethical use of Artificial Intelligence (AI), Big Data, and algorithms, many people still have an uneasy feeling about this technology. Even with the best intentions, the use of AI can produce unintended consequences and reinforce biases, highlighting the gap between its potential and its real-world outcomes.

Time for an algorithm watchdog?

To what extent do algorithms drive or manipulate us? Harvard professor Shoshana Zuboff coined the term “surveillance capitalism” to describe this phenomenon. Is it time for an algorithm watchdog, or is a quality label for the ethical use of algorithms good enough for now?

What are ethics?

Ethics are a moral framework we can use to determine whether certain actions are right or wrong. It’s hard to arrive at a single answer to an ethical dilemma, because ethics tend to be deeply personal.

Types of ethics

Ethics can be both descriptive and prescriptive. Descriptive ethics leaves personal judgment out of the picture: it merely describes existing moral codes, such as a society holding that murder is wrong because the law says so. Prescriptive ethics, by contrast, means prescribing moral standards for others to abide by.

Ethics in practice: applied ethics

You don’t need to look very far to find examples of current practical ethical dilemmas.

  • China is working on an unparalleled surveillance system. There are already over 200 million cameras in the country, and more are being added rapidly. Chongqing is the frontrunner, with one camera for every six residents. Privacy concerns are but one of the obvious ethical dilemmas here.
  • The Dutch National Police has been experimenting with CAS (Criminality Anticipation System) for several years. This form of predictive policing is supposed to predict when and where street crimes, such as robberies and break-ins, will take place. Discrimination is one obvious potential issue.
  • More and more banks are experimenting with AI for a variety of purposes: risk scoring based on machine learning models, early warning systems for money laundering schemes, and automated handling of loan requests. Are loans being denied unjustly? Is there sexism at play when setting credit limits, as was alleged after the launch of Apple’s credit card?
  • SyRI, or System Risk Indication, isn’t the iPhone’s voice assistant, but the Dutch government’s system that links citizen data to detect various types of fraud. Using AI, it puts together risk profiles. To what extent are innocent citizens being criminalized?

The grey area of AI ethics

In short: AI is embedded deep in society, but legislation always lags behind reality. You’ll have to tread carefully in the grey area of AI, where ethical lines start to blur. That process starts with creating awareness.


Big data and ethical codes of conduct

Philosophers, sociologists, futurists, and anthropologists are all concerned with ethics. Even theologians are racking their brains over the subject. Everyone seems to be looking for practical codes of conduct for AI and the ethical handling of big data. Meanwhile, engineers are running their agile sprints, writing algorithms without worrying about ethics or the consequences of their work. Not necessarily because they don’t care, but because they’re sometimes simply “unconsciously incompetent.”

Ethical frameworks

But what are ethics, and what are the most important ethical principles AI should abide by? Luciano Floridi and Josh Cowls drew up a clear framework in the Harvard Data Science Review, assessing the most important codes of conduct in a comparative review.

Ethical principle 1: AI should be free of bias and promote fairness

Biases can be deeply rooted in people, data, and algorithms, often in that order. Data isn’t as objective as you might think: it reflects who we are as people, and all of our implicit values and prejudices end up reflected in big data. Discussions about the (supposedly) discriminatory nature of algorithms often center on the concept of fairness, which is difficult to define. Algorithms are said to discriminate systematically, leading to unfair outcomes: an algorithm might reject people with a foreign surname or a certain zip code for a job or a loan. But human decisions aren’t free from bias either. “If you want the bias out, get the algorithm in,” according to MIT’s Andrew McAfee. Human decisions are hard to audit: people can lie, or discriminate subconsciously.
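
To make “fairness” a little more concrete, consider demographic parity: approval rates shouldn’t diverge too far between groups. Below is a minimal sketch of such a check in Python; the data, column names, and function are hypothetical illustrations, and a real fairness audit would combine several metrics with domain knowledge.

```python
# A minimal sketch of one possible fairness check: demographic parity.
# The data and names are hypothetical; this is an illustration, not a
# complete fairness audit.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Difference between the highest and lowest approval rate across groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical loan decisions: 1 = approved, 0 = rejected.
decisions = pd.DataFrame({
    "zip_region": ["A", "A", "A", "B", "B", "B"],
    "approved":   [1,   1,   1,   0,   1,   0],
})

gap = demographic_parity_gap(decisions, "zip_region", "approved")
print(f"Approval-rate gap between regions: {gap:.2f}")  # 1.00 - 0.33 = 0.67
```

A large gap doesn’t prove discrimination by itself, but it is exactly the kind of basic question an independent reviewer should be able to ask of any decision-making algorithm.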

The sensitive nature of ethics and the police

In the case of undesired outcomes, the culprits tend to be the underlying data and the machine learning training data, not the algorithm itself. Sometimes it’s a case of a self-fulfilling prophecy: when the police patrol certain areas more heavily, they will also record more crimes there, and patrol those areas even more heavily as a result. There’s no silver bullet or quick fix for bias, McKinsey’s researchers say. In questions of ethics, human judgment remains paramount.
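
The self-fulfilling prophecy is easy to demonstrate with a back-of-the-envelope calculation. In the sketch below, with made-up numbers, both areas have the same true crime rate, yet the more heavily patrolled area produces three times as much recorded crime, so the data appears to confirm the very bias that produced the patrol schedule.

```python
# A minimal sketch of the feedback loop described above, using made-up
# numbers. Both areas have the SAME underlying crime rate; only the
# patrol allocation differs.
TRUE_CRIME_RATE = 0.10                  # identical in both areas (hypothetical)
patrols = {"area_A": 75, "area_B": 25}  # biased patrol allocation

# Recorded crime is proportional to how hard you look for it.
recorded = {area: n * TRUE_CRIME_RATE for area, n in patrols.items()}
print(recorded)  # {'area_A': 7.5, 'area_B': 2.5}
# Area A now *appears* three times more criminal, purely because it was
# watched three times more often -- and the next patrol schedule will
# likely be based on these skewed figures.
```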

Ethical principle 2: AI promotes the wellbeing of people and the planet

AI should primarily serve the good of as many citizens as possible. Since the days of the Bible (Genesis 1:28), people have been told to take care of the planet. It wasn’t until the 1990s that the three Ps became popular: People, Planet, and Profit, giving our planet greater priority. These days, the number of “AI for Good” initiatives can no longer be counted on two hands. Google, Microsoft, IBM, the United Nations, the European Commission, and national governments all promise to use AI for the benefit of all mankind, and to make society more sustainable.

Ethics shouldn’t be used as a PR or marketing tool

Although the people in the workplace usually have the best intentions, there’s plenty of “greenwashing” happening: organizations and governments paint a rosy picture of how their AI policies benefit people and the planet. An IT company can claim to be saving lives because a famous aid organization uses its software. That has more to do with marketing than with a clear ethical philosophy.

Ethics and technology

The possibilities presented by AI create new ethical dilemmas. What to make of the Chinese government trying to condition its citizens through a social credit system? Do we really want to become a super-efficient, cost-conscious society where chromosomal abnormalities leading to Down syndrome are a reason to terminate pregnancies? To what extent do technological trends such as genetic modification, augmented reality, and transhumanism pose threats? Will people accept living side by side with bionic supermen?

What can we learn from bioethics?

The development of ethical principles for AI is still in its infancy. There are no precedents. According to Floridi, we can learn a lot from bioethics, which concerns itself with the ethical aspects of human intervention in the lives of people, animals, and plants. Bioethics focuses on technological advancements in biology and medicine.

Ethical principle 3: AI should not harm citizens

AI in general, and algorithms specifically, shouldn’t just be fair and profitable, but also reliable, consistent, and correct. People have to be able to audit algorithms against these criteria. Studies have examined how algorithmic technologies like big data, the Internet of Things (IoT), and artificial intelligence impinge on the rights of citizens: specifically, how do these technologies affect privacy, freedom, equal treatment, and procedural rights? Human dignity can feel threatened by the rise of AI and algorithms.

Ethics and the law

When designing AI applications, organizations should always respect the principle of privacy by design, as well as constitutional human rights. The 23 Asilomar AI Principles are a good starting point. The transparency and integrity of algorithms remain key concerns. Although some people advocate for an algorithm watchdog, an oversight body with far-reaching authority, that still seems like a bridge too far. Meanwhile, various advisory bodies are working on a seal of quality for the development and responsible use of algorithms. Perhaps accountants can play a role by signing so-called control statements, as they already do for annual reports.

Ethical principle 4: AI is in service of mankind, not the other way around

It’s essential that people remain in control of AI, and not the other way around. At all times, people should be able to choose whether and how they hand over part of their decision-making to an AI system, and which goals they want to achieve by doing so. It’s about preserving autonomy: the power to decide. The risk with self-learning systems (machine learning) and neural networks (deep learning) is that the decision-making power of computers will undermine human autonomy.

Ethics in life and death decisions

It’s a tricky balancing act between human autonomy and the decision-making power of machines. As a rule, the system’s autonomy should be limited as much as possible, because its decisions may not always be reversible, certainly not in life-and-death situations. It’s essential that people always remain capable of retaking control of the decision-making process. Consider a pilot flying on autopilot who takes over manually after an incident.
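
One common way to keep people in control is a human-in-the-loop guardrail: the system acts autonomously only when it is sufficiently certain, and defers everything else to a person. The sketch below illustrates the pattern; the confidence threshold, names, and review hook are hypothetical, not a standard API.

```python
# A minimal sketch of a human-in-the-loop guardrail, assuming a model that
# returns a decision plus a confidence score. All names and the threshold
# are hypothetical illustrations.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    label: str          # e.g. "approve" or "deny"
    confidence: float   # the model's estimated certainty, 0.0-1.0

CONFIDENCE_THRESHOLD = 0.95  # below this, a person decides

def decide(model_output: Decision, human_review: Callable[[Decision], str]) -> str:
    """Defer to a human whenever the model isn't sufficiently certain,
    so the final decision-making power stays with a person."""
    if model_output.confidence < CONFIDENCE_THRESHOLD:
        return human_review(model_output)  # the human retakes control
    return model_output.label

# A low-confidence denial is routed to a reviewer instead of being
# executed automatically.
print(decide(Decision("deny", 0.72), human_review=lambda d: "escalated"))
```

The design choice mirrors the autopilot example: automation handles the routine cases, but the escape hatch back to a human is built in from the start, not bolted on after an incident.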

Ethical principle 5: Algorithms have to be explainable (explainable AI)

Only a tiny fraction of the working population is involved in developing the opaque algorithms that impact lives the world over. Complicated algorithms that have been in use for years and are constantly adjusted (Google’s PageRank, for example) can hardly be fully understood by anyone anymore. So it’s no surprise that the call for transparency, responsibility, interpretability, and understandability grows louder whenever algorithms and ethics come up. Open-sourcing algorithms could increase transparency.
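
Explainability doesn’t necessarily require publishing source code; model-agnostic techniques can already show what drives a model’s decisions. As a minimal sketch, the example below uses permutation importance from scikit-learn on synthetic data: shuffle one feature at a time and measure how much the model’s accuracy suffers. The dataset is hypothetical, and this is just one technique among several (SHAP and LIME are well-known alternatives).

```python
# A minimal sketch of explainable AI via permutation importance,
# using synthetic (hypothetical) data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {importance:.3f}")
```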

Who is responsible for data ethics?

To most people, algorithms will remain mysterious black boxes they have no insight into. Yet the need for transparency keeps growing. Organizations obviously don’t have to give away business-sensitive information; protecting intellectual property is important, after all. But independent experts should be able to turn an algorithm inside out to answer basic questions about it.

Ethics at work

Ethical AI issues are also penetrating the workplace. How does the algorithm work? Can the algorithm’s outcomes be explained? Who at work is responsible for how the algorithm works? And who do you turn to when the algorithm leads to a serious incident or a harmful outcome? And, last but not least, can whoever is responsible explain why the algorithm didn’t work the way it was intended? These are questions of principle.

Conclusion

Although some scientists think differently and want to teach AI engines ethical values, you can’t leave ethics up to a computer. People of flesh and blood will have to safeguard the ethical use of algorithms and make sure they benefit society. One thing is certain: organizations that develop AI applications based on the principle of ethics by design are more likely to earn the trust of customers, and that trust ultimately translates into better results.