Six misconceptions about algorithms

Misunderstanding algorithms

Artificial Intelligence is a hot topic. Algorithms can determine the notes of a new perfume. The high-art world has been scooped by a portrait painted by an algorithm and signed with the code $\min_G \max_D \, \mathbb{E}_x[\log(D(x))] + \mathbb{E}_z[\log(1 - D(G(z)))]$. These are two random examples of relatively innocent, yet surprising, applications of AI. However, algorithms can also inspire fear: crashing self-driving cars, smart speakers that take over the entire house, algorithms that thoughtlessly reject job applicants based on gender or sentence suspects without mercy. Can algorithms be used for the good of mankind?

Algorithms are essentially no more than a recipe, a simple set of instructions. Computers are algorithm machines, built to store data, apply mathematical formulas to it, and deliver new information as output. A simple example of a so-called “If This, Then That” algorithm is: “If the temperature in a house falls below a certain threshold, then the heating turns on.” But what about more advanced forms of AI? Because there’s a general lack of understanding surrounding algorithms, we’re debunking six common misconceptions in this article.
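
The thermostat rule above can be written out as a short piece of code. Here is a minimal sketch in Python; the 19 °C threshold and the return values are assumptions made purely for illustration.

```python
# Minimal sketch of an "If This, Then That" algorithm (illustrative values only).
TARGET_TEMPERATURE_C = 19.0  # assumed threshold, not taken from the article

def control_heating(current_temperature_c: float) -> str:
    """Return the action this simple thermostat rule would take."""
    if current_temperature_c < TARGET_TEMPERATURE_C:
        return "turn heating on"   # condition met: temperature below the threshold
    return "leave heating off"     # otherwise: do nothing

print(control_heating(17.5))  # -> turn heating on
print(control_heating(21.0))  # -> leave heating off
```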

Misconception 1: The Netherlands is leading the AI pack

Really? If we take research by Microsoft and EY at face value, the Netherlands is ahead of the rest of Europe when it comes to AI. The Netherlands is said to be among the countries with the highest maturity level in AI projects: 37% of the Dutch organizations surveyed are working on AI initiatives that are in production or at an advanced stage. According to this research, the adoption of machine learning, neural networks, and deep learning is going smoothly in this country.

Directors on alert

Of the remaining Dutch companies, 45% are in the early stages of a pilot project, and 9% are making plans to start using AI. In Dutch boardrooms, directors are on alert. AI is on Dutch C-level management’s agenda more often than in other countries. But if we scrutinize this research, we can find some nits to pick. EY surveyed 277 organizations in fifteen European countries across various industries, such as life sciences, finance, services, retail, media/entertainment, telecom, and technology.

Companies with deep pockets

However, only 22 Dutch companies are involved in the research, and they’re primarily large, publicly traded companies like Randstad, Philips, and Wolters Kluwer. Companies with deep pockets. Far from representative of Dutch business as a whole. On top of that, the level of investment is nothing to write home about: around 40 million euros was invested in AI startups among the Dutch companies researched. That is considerably less than in leading countries France and Germany, which do, however, have larger populations. The German government wants to invest up to 3 billion euros in AI by 2025.

Brain drain

At the same time, there’s a brain drain at Dutch universities. AI professors are said to be moving abroad en masse, according to innovation center ICAI. “The Netherlands was an early adopter when it comes to developing AI knowledge, but it’s losing momentum,” warns AINED. The partnership, which is supported by the Boston Consulting Group (BCG), among others, argues for a national AI strategy. Most countries, including those in Europe, have already formulated such a strategy. In short: the Netherlands is trailing the pack rather than leading it.

Misconception 2: I can wait it out

No way. Algorithms and big data aren’t just used by web shops, social media, and streaming services like Netflix anymore. The Belastingdienst (the Dutch tax authority), UWV, and DUO are considering using big data to detect fraud. Algorithms are taking over stock trading. In precision agriculture, GPS, sensors, robots, and drones predict and optimize the harvest. In the care sector, algorithms can help detect diseases and make diagnoses. In the public sector, cameras and sensors can predict where civilians might display undesired behavior (walking on train tracks, vandalism, breaking and entering). In logistics, software can help plan and combine cargo streams, available trucks, and drivers. The Amsterdam company Harver partly automates the work of recruiters: a self-learning algorithm makes a pre-selection of suitable candidates.

AI optimizes

In short: AI and robots offer every company a way to optimize its processes. The digital revolution reached the industrial sector long ago. New services like remote maintenance, predictive maintenance, and smart data models are based entirely on collecting and analyzing data from the production process. Both software applications and physical robots make use of artificial intelligence.

Internet of Things

The Industrial Internet of Things is going to take off in a big way, according to research by ABN AMRO. In 2017, around 9 billion active devices were connected through the Internet of Things; by 2025, that counter could be as high as 55 billion, as estimated by BI Intelligence. Predictive maintenance – using regression analysis to predict when a machine or part is due for replacement – is making good on its promise, according to PwC researchers: 95% of companies report positive results, machine uptime increased by 9%, and the number of visual inspections decreased drastically.
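
To give an idea of the kind of regression analysis behind predictive maintenance, here is a minimal sketch in Python. The vibration readings, the failure threshold, and the assumption of linear wear are all invented for this example.

```python
import numpy as np

# Minimal sketch of predictive maintenance via regression (all numbers invented).
# Idea: fit a trend to a wear indicator and estimate when it crosses a failure threshold.
hours = np.array([0, 100, 200, 300, 400, 500], dtype=float)   # operating hours
vibration = np.array([1.0, 1.2, 1.5, 1.7, 2.0, 2.2])          # wear indicator (mm/s)
FAILURE_THRESHOLD = 3.5                                        # assumed replacement limit

slope, intercept = np.polyfit(hours, vibration, deg=1)         # simple linear regression
predicted_failure_hour = (FAILURE_THRESHOLD - intercept) / slope

print(f"Estimated operating hours left before maintenance: "
      f"{predicted_failure_hour - hours[-1]:.0f}")
```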

Misconception 3: The government is watching over us

Forget it! Ease of use may be the top priority in the government’s digital communication – there can be no misunderstanding about that – but according to oversight bodies, government agencies still don’t adequately consider the effects of digitization on the relationship between the government and its citizens. Citizens shouldn’t suffer because the government is digitizing. Government agencies increasingly let computers make decisions for them. These decisions are often the result of automated decision rules: algorithms that decide automatically, without any human intervention. Citizens can then no longer check which rules the government is applying and what data it uses to reach a decision. Oversight bodies are therefore cautioning the government to pay close attention to how decisions are justified: it has to be clear which decision rules were applied and what the source of the input data is.
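
To make the idea of an automated decision rule concrete – and to show what it would take to trace which rules and data were used – here is a minimal sketch in Python. The benefit, the thresholds, and the field names are invented for illustration and do not reflect any real regulation.

```python
# Minimal sketch of an automated decision rule that records its own justification.
# The rules, thresholds, and field names below are invented for illustration only.

def decide_benefit(application: dict) -> dict:
    """Apply simple decision rules and return the decision plus the rules and data used."""
    income_ok = application["annual_income_eur"] <= 31_000
    rent_ok = application["monthly_rent_eur"] <= 880

    decision = "granted" if income_ok and rent_ok else "rejected"
    return {
        "decision": decision,
        "rules_applied": [                       # which decision rules were used...
            ("income <= 31,000 EUR", income_ok),
            ("rent <= 880 EUR", rent_ok),
        ],
        "input_data": dict(application),         # ...and on what data they were applied
        "data_source": "applicant-submitted form (assumed)",
    }

print(decide_benefit({"annual_income_eur": 28_500, "monthly_rent_eur": 750}))
```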

Algorithm watchdog

Using algorithms has its downsides. For instance, non-ICT specialists can’t check whether the translation of such rules into a programming language is flawless and accurate, and rules written in natural language typically don’t translate 1:1 into an algorithm. Finally, the AVG (the Dutch term for the GDPR) stipulates that automated decision-making (including profiling) without human intervention is, in principle, not allowed. The Dutch political party D66 has concerns and has drafted an Action Plan for Artificial Intelligence, in which it calls for an algorithm watchdog that would check every algorithmic system, including its data sets. That seems rather unfeasible, but the party’s call for ethical guidelines is certainly compelling.

Misconception 4: AI and Big Data are only relevant to big companies

Nonsense. Big data is not a big enough priority for Dutch SMEs (small-to-mid-sized enterprises), according to recent research: more than six in ten Dutch SMEs think that big data is not relevant to their business. SMEs are missing out on potential revenue and profit because of this indifferent attitude. Big data can help in making quick decisions. According to the researchers, data analysis gives SMEs better insight into customer behavior, leads to more efficient production processes, and has the potential to create opportunities comparable to those of larger companies.

Size matters

Size does matter: the bigger the company, the more big data is seen as adding value. 48% of small enterprises see little value in big data, while among companies with 50 to 250 employees that figure drops to 21%. A lack of prioritization, knowledge, or information about big data is the biggest obstacle keeping SMEs from getting started with it.

Misconception 5: Algorithms are taking over

Untrue. People are, and will remain, responsible for programming and (where necessary) training algorithms. Algorithms are primarily human creations. The importance of the choices made by the people writing the algorithms can’t be overstated. Their choices affect the analysis and the eventual outcome of the analysis. The saying “technology is neither good nor bad; nor is it neutral” also goes for algorithms. Despite the “promise” of algorithmic objectivity, algorithms can definitely be very subjective.

Algorithms are biased

Because algorithms are human constructs, the prejudices and values of the programmers or their clients can become embedded in them. Algorithms used to prioritize search results or news reports can carry political values or be otherwise non-neutral; true neutrality doesn’t exist. The data used to train algorithms also contains biases that shape the algorithm’s outcome: one-sided data produces one-sided results. Amazon, for example, recently scrapped a recruiting AI that systematically disadvantaged women. Yet the use of algorithms is increasing rapidly. Biases in algorithms can amplify existing inequalities and lead to unfair outcomes – a key aspect of the ‘AI paradox’, in which the potential of AI is undermined by practical challenges in implementation and ethical use.
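
As a toy illustration of how one-sided data produces one-sided results, the sketch below “trains” a naive screening rule on invented historical hiring data; both the data and the rule are made up for this example, but they show how a model that only imitates past decisions reproduces the skew in those decisions.

```python
from collections import Counter

# Toy illustration: a screening rule that only imitates historical decisions
# reproduces whatever skew those decisions contain. All data below is invented.
history = [
    {"gender": "m", "hired": True},  {"gender": "m", "hired": True},
    {"gender": "m", "hired": True},  {"gender": "m", "hired": False},
    {"gender": "f", "hired": True},  {"gender": "f", "hired": False},
    {"gender": "f", "hired": False}, {"gender": "f", "hired": False},
]

# "Training": estimate the historical hiring rate per group.
hired_per_group = Counter(r["gender"] for r in history if r["hired"])
total_per_group = Counter(r["gender"] for r in history)
hire_rate = {g: hired_per_group[g] / total_per_group[g] for g in total_per_group}

# "Model": shortlist candidates from groups that were hired often in the past.
def shortlist(candidate_gender: str) -> bool:
    return hire_rate[candidate_gender] >= 0.5

print(hire_rate)       # {'m': 0.75, 'f': 0.25} – the bias present in the data
print(shortlist("m"))  # True  – the model simply reproduces that bias
print(shortlist("f"))  # False
```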

AI for the good of mankind

That means organizations need to take precautions to use algorithms responsibly, without thoughtlessly reproducing biases. If they don’t, they could suffer serious reputational damage. “You can’t place the blame for failing algorithms at the engineers’ feet.” In sum: algorithms are powerful weapons in the hands of people. Catelijne Muller, rapporteur on AI in the EU, says we have to face the risks of AI and argues for guidelines: “Machines are still machines, and people need to control machines at all times.” The initiators of OpenAI argue for transparency and for promoting the safe use of AI: “AI should be used for the good of mankind above all else.”

Misconception 6: Man and machine are enemies

On the contrary. Man and machine can work together. In the near future, people will cooperate with computers on certain tasks, complementing each other as much as possible, according to the Rathenau Institute. A doctor, for example, might get help from a software program when making diagnoses. German researchers also predict various forms of complementary cooperation between man and machine, ranging from people instructing machines to fully equivalent collaboration (“the robot as a colleague”).

Robots learn human values

Stuart Russell, professor of computer science at UC Berkeley and founder of the Center for Human-Compatible AI, believes in strict rules between man and machine. A computer or robot doesn’t know beforehand what human values and intentions are. Value alignment should ensure that algorithms and robots don’t run rampant, with undesired or even harmful effects on society.