Harness the power of Generative AI, but remain critical and vigilant


Since OpenAI’s overwhelming launch of ChatGPT in late 2022, the genie has been out of the bottle. Everyone in the tech and media sector has been swept up by this new, exceptionally powerful technology, and ChatGPT’s achievements are indeed very impressive at first glance. The “language virtuoso” needs only a few simple instructions to produce elaborate texts or program code. It can also effortlessly analyze data and create graphs in no time. Theologian, data expert, and AI evangelist Jack Esselink follows the developments closely. In his lectures, presentations, and in this blog, he urges above all a sober view of all this wizardry. “It’s not all hosanna, of course. ChatGPT sometimes misses the mark completely and sometimes hallucinates. Ethical dilemmas also present themselves.”

People have a strong need for a realistic picture of AI

The impact of the introduction of ChatGPT has even been compared to the introduction of the iPhone in 2007. Yet the biggest hype surrounding Generative AI already seems to be waning, according to Esselink, a lecturer at Passionned Academy. “The time when we sat watching ChatGPT with open mouths is now over. If the overblown expectations are not met, disillusionment inevitably follows. Gartner shows this with its hype cycles. Above all, the world now needs a realistic view of generative AI. This technology is so powerful that we must be wary. But as with nuclear power and drones, you can use AI for good and for bad,” Esselink kicks off. Vigilance is therefore called for, he says. “You should not be naive. AI-generated phishing emails, for example, are many times more sophisticated than the human variants.”

Esselink’s Generative AI definition

Generative AI refers to artificial intelligence systems capable of producing new content, such as text, images, computer code, or sound, based on patterns learned from existing data. Esselink provides his own clear definition:

Generative AI comprises apps or computer programs that use artificial intelligence to generate content of their own, such as audio, text, and video.

This is content that would normally be created by humans. Foundation models, also known as generative models, are the basic building blocks of generative AI. They are trained on huge data sets and can generate new content through random sampling. Large Language Models (LLMs) are foundation models specifically focused on language processing. They use neural networks to generate text and can be used for translation, summarization, conversation, and creative writing tasks.
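To make the idea of random sampling concrete, here is a minimal sketch in Python. It uses the open-source Hugging Face transformers library with the small GPT-2 model as a stand-in for a large foundation model; the model choice and the prompt are illustrative assumptions, not tools mentioned by Esselink.

# Minimal sketch: sample "new" text from a small pretrained language model.
# GPT-2 stands in for a large foundation model; the prompt is made up.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# do_sample=True draws the continuation at random from the learned
# probability distribution, which is how new content emerges from
# patterns in existing data.
result = generator(
    "Once upon a time in Amsterdam,",
    max_new_tokens=40,
    do_sample=True,
    temperature=0.8,
)
print(result[0]["generated_text"])

Running the same prompt twice will usually give two different continuations, which is exactly the point of sampling.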


Separating the wheat from the chaff

ChatGPT, for example, is now capable of writing an entire book with the right instructions. Still, Esselink does not think that humans as creative beings will become obsolete. “I expect a counter-trend. There will come a time again when people will prefer a book written by a flesh-and-blood human being. The wheat will be separated from the chaff.”

There is also unrest, according to Esselink: Generative AI might make certain professions, such as programmers, disappear. “I don’t believe that so much. Ten years ago, people were already saying that automation would make accountants obsolete. Today there is a glaring shortage of accountants. Programmers, however, will make grateful use of the new technology. In the legal profession, AI can search and summarize case law much faster than humans. This will significantly increase the productivity of programmers, lawyers, journalists, and so on. You would be crazy not to use Generative AI; it helps you get more done in less time. New professions, such as the prompt engineer, will also emerge thanks to this technology. Fresh blood, in other words.”

Generative AI and its impact on society

But technology also evokes resistance in some circles. During the Industrial Revolution in the early 19th century, for example, the Luddites opposed machines in the textile industry and destroyed mechanical looms. Does technology have a will of its own, and are we puny humans at its mercy? Or are we (still) in control? And how reliable is Generative AI really? These are all relevant questions for an AI evangelist like Esselink.

To what extent are we still in control? Sociologist Hartmut Rosa coined the term acceleration loop. The speed of technological innovation is increasing. This leads to a new kind of alienation: we become more and more distant from ourselves. And what about copyright infringement and privacy issues? What data was actually used to train the Generative AI model? That may include personally identifiable data. ChatGPT users also probably don’t realize that the texts they enter through the prompt are stored somewhere by default.

Privacy issues and educational approaches

These privacy issues led to ChatGPT being temporarily banned in Italy, for example. Still, according to Esselink, banning is not the answer. “We have to learn to deal with Generative AI in a sensible and mature way. Because the impact in education is huge, you already see the first codes of conduct emerging there. Renowned educational institutions such as Cambridge University are drawing boundaries. But Dutch universities and colleges are also following ChatGPT critically and identifying and addressing the risks.”

According to Esselink, there is still room for improvement in terms of reliability. ChatGPT is certainly not yet mature and not 100% accurate. The chatbot sometimes simply makes things up. “Scandals, excesses, and lawsuits are therefore bound to happen. But because ChatGPT presents everything with a certain conviction, people fall for it. Actually, I wish everyone a negative, particularly disappointing experience with ChatGPT. Downright wrong answers, so that people get a more realistic picture of the possibilities and impossibilities of the technology. Users expect the same accuracy from ChatGPT as from a search engine. That is not realistic, and besides, search engines are sometimes wrong too. So how should you position ChatGPT?”

The greatest common denominator

“If you strip it down completely, ChatGPT is nothing more and nothing less than a linguistic prediction model capable of predicting the next word in a sentence. Moreover, the answers are based on the greatest common denominator: the prevailing opinions and insights published on the Internet.”
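That “next word prediction” can be made visible with a short Python sketch using the Hugging Face transformers library. It is an assumption that the small open GPT-2 model is a fair stand-in for ChatGPT here, and the example prompt is made up.

# Inspect the model's probability distribution over the next word (token).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of the Netherlands is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every vocabulary token at every position

# Turn the scores for the last position into probabilities and show the top 5 candidates.
probabilities = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probabilities, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}: {prob.item():.3f}")

Chaining such predictions together, one token at a time, is all the “writing” a model of this kind does; the fluency comes from the sheer scale of the training data, which is also why the answers gravitate toward the prevailing opinions on the Internet.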

OpenAI and Midjourney currently offer their Generative AI products, such as ChatGPT and DALL-E, through their own websites and interfaces (Midjourney runs via Discord). But that is changing rapidly. AI is a systems technology, which means you can build Generative AI as embedded software basically anywhere: in Microsoft Office, Adobe Photoshop, Google Docs, or in Canva, an online image generator, for example. Microsoft is leveraging the power of large language models (LLMs) with Microsoft 365 Copilot to let app users work more productively by taking certain tasks off their hands and having them performed by AI, such as turning a few notes into a complete meeting report. Generative AI will also pop up in Microsoft Dynamics. Of course, not all of this will remain free of charge. Meta (formerly Facebook) says it is going to add Generative AI to all its products.

In short, generative AI is becoming a component within all major software suites and BI tools, and thus part of our daily workflow. According to Esselink, there is an objection to this. “Once again, American big-tech companies dominate the playing field, and the U.S. simply has different standards and values when it comes to privacy, data governance, transparency, and so on. Perhaps one day a European initiative will emerge, like the European Gaia-X cloud initiative based on European values of data sovereignty, security and transparency.”

The thumbscrews are being tightened

When we talk about AI, transparency is the magic word. But as always, legislators are lagging behind. In the European context, pressure increased to include amendments on Generative AI in the AI Act. This was necessary because the European Commission, too, had been caught off guard by the stormy introduction of ChatGPT: more than 100 million active users in just two months. Everyone has had to come to terms with that.

Other European legislation is also likely to be affected by Generative AI, such as the Digital Services Act and the Digital Markets Act. The thumbscrews (at least on paper) are being tightened further and further. In addition to privacy issues, there are also copyright issues at play, for example. Generative AI, such as ChatGPT, also makes it easier to develop malware and can, for example, make phishing emails much more credible, the NCTV warns in its Cybersecurity Assessment of the Netherlands 2023. With large-scale use of these techniques, Esselink says, it is becoming increasingly difficult to establish the authenticity and authority of textual information, images, videos, and audio. This also poses dilemmas for (government) organizations.


Experimenting with Generative AI in labs

Nevertheless, according to Esselink, several interesting initiatives and experiments with Generative AI are emerging in the business world. For example, Albert Heijn, a Dutch supermarket chain, recently launched its own AI startup, Gen AI Labs, in which a team of young talents explores applications of Generative AI and experiments with new products and services, such as the Recipe Scanner and automatically generated shopping lists. And as a retailer, why not have ChatGPT generate the texts and photos for the offer flyers? Meanwhile, banks, insurance companies, and energy companies are also experimenting with Generative AI. Britain’s Octopus Energy took a drastic step: the company deployed ChatGPT to answer customer emails, automating away the work of 250 flesh-and-blood people. In fact, A/B testing showed that customers gave higher NPS scores to the chatbot than to traditional service staff. According to Esselink, the impact of Generative AI is so far much greater than that of blockchain technology, for example, because chatbots are deployed in the front office while blockchain sits mainly in the back office.


Job losses and creative solutions

Meanwhile, economists, analysts, and the OECD warn of potential job losses due to AI. An alarming 27% of workers worldwide are at risk of being replaced by computers or chatbots. Nevertheless, organizations are showing creativity in their approach. For example, IKEA has retrained 8,500 call center workers as design consultants. The store reports that the chatbot Billie has handled about 47% of all customer queries, saving nearly 13 million euros so far.

Collaboration between humans and technology

While the prospect of job losses is a real concern, Esselink sees a different perspective. He believes jobs will not disappear completely. Instead, he sees opportunities for a symbiotic relationship between humans and chatbots. Esselink emphasizes that humans and chatbots can work together as a team. The human conducts the conversation and the chatbot, such as ChatGPT, captures the conversation record.

Future prospects and challenges

In addition to the gloomy predictions of job losses, there are also positive prospects. According to Goldman Sachs, Generative AI could contribute to 7% growth in global gross domestic product (GDP) over the next decade, depending on the rate of adoption. Leading consultancies such as McKinsey, PwC, and EY also predict a promising future for Generative AI. Nevertheless, caution is required. The productivity paradox is relevant here: the return on investment in IT does not always translate directly into economic growth. Past examples, such as the introduction of the washing machine and the rise of e-mail traffic, illustrate how part of the productivity gains achieved can evaporate again.

Moreover, the environmental impact of Generative AI cannot be ignored. The high energy consumption of servers in data centers has significant environmental impacts, which calls for sustainable solutions.

10 tips for applying ChatGPT in business

Discover how to effectively apply ChatGPT in business with these 10 valuable tips.

  1. Train the AI model more specifically: tailor the model as much as possible to company-specific documents (while maintaining privacy) to make it more familiar with jargon, abbreviations, and company-specific knowledge of your industry.
  2. Integrate with existing systems: link ChatGPT to your CRM application, to company databases, or to other software tools in your organization, so it can retrieve real-time data and provide immediate answers to specific, relevant questions or use cases (a minimal sketch follows after this list).
  3. Improve front-line support: use ChatGPT as the first line of support for IT or any other internal department. It can answer frequently asked questions, provide troubleshooting steps, or direct users to staff with specific knowledge.
  4. Expand your knowledge base: link ChatGPT to your internal knowledge bases. It can help employees quickly find answers to frequently asked questions, policy explanations, or even manuals.
  5. Onboarding tool: during the onboarding process, let new employees have a lot of interaction with ChatGPT. This way, they become familiar with company policies, culture, or other relevant information, providing a personalized onboarding experience.
  6. Brainstorm with the bot: use ChatGPT whenever possible in brainstorming or creative meetings. Its broad and sometimes in-depth knowledge can provide new perspectives, suggest alternative solutions, and even stimulate discussion with unique ideas.
  7. Develop your employees: use ChatGPT as a learning tool; let it answer employees’ questions about training materials, quiz them, or offer explanations of complex topics.
  8. Personal assistant in virtual meetings: during virtual meetings, ChatGPT can be used in the background to quickly provide data, answer questions, or even transcribe and summarize discussions.
  9. Get your data analyzed and visualized: upload personal or company data files to ChatGPT and ask questions about the dataset. ChatGPT can also turn your data into graphs on the fly.
  10. Take care of privacy and moderation: make sure your custom ChatGPT version has strict privacy controls. Evaluate its interactions regularly to safeguard privacy and refine its responses. Also set up a moderation layer, especially if ChatGPT is customer-facing, to prevent inappropriate or unintended responses.
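As a companion to tip 2, here is a minimal, hedged sketch in Python of what such an integration could look like with the official openai client library. The model name, the database schema, and the helper function are illustrative assumptions, not part of any product mentioned in this article.

# Sketch: answer a customer question using live data from a company database.
# Assumes OPENAI_API_KEY is set in the environment; table, file name, and model are made up.
import sqlite3
from openai import OpenAI

client = OpenAI()

def fetch_open_orders(customer_id):
    """Hypothetical lookup in a local CRM/order database."""
    with sqlite3.connect("crm.db") as conn:
        return conn.execute(
            "SELECT order_id, status, amount FROM orders "
            "WHERE customer_id = ? AND status != 'closed'",
            (customer_id,),
        ).fetchall()

def answer_with_context(question, customer_id):
    # Put the retrieved records in the prompt so the model answers from real data.
    orders = fetch_open_orders(customer_id)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: use whichever model your contract covers
        messages=[
            {"role": "system",
             "content": "Answer using only the order data provided. "
                        "Say so if the data does not contain the answer."},
            {"role": "user",
             "content": f"Order data: {orders}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer_with_context("Which of my orders are still open?", customer_id=42))

Grounding the model in your own records like this keeps answers specific and reduces the risk of the chatbot simply making things up, but tips 1 and 10 (privacy and moderation) apply with extra force as soon as real customer data flows through the prompt.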

ChatGPT can be a valuable tool for organizations, but it is essential to evaluate its performance regularly and refine it where necessary. Also make sure your employees understand its strengths and weaknesses.

How to move forward

Most organizations are still experimenting with Generative AI and are working toward a convincing use case. Many companies see the potential but wonder what their next step should be. At the same time, there is also a lack of understanding: many people are still wondering where they can actually buy ChatGPT. Governments and regulators clearly still need to get used to AI as a new phenomenon. For example, fintech bunq won in court the right to use Artificial Intelligence and data analytics to combat money laundering and comply with the Know Your Customer (KYC) principle, despite opposition from the Dutch Central Bank. Esselink: “If you realize that about 20% of bank employees are currently working to comply with the anti-money laundering directive, AI and machine learning can really provide considerable relief.”

With great power comes great responsibility

As mentioned, the genie is out of the bottle and developments in Generative AI are moving fast: the paid version, GPT-4, has been released, and a veritable gold rush is emerging. Big Tech companies such as Google and IBM are coming up with alternatives to ChatGPT, such as Google Bard. IBM’s watsonx is a new hybrid AI and data platform that lawyers and tax advisers, for example, can feed with their own data. Huge amounts of venture capital are being invested because no one wants to miss the boat. But as a famous saying goes, “With great power comes great responsibility.” Current AI applications are already struggling with issues of privacy, ethics, transparency, and bias. Generative AI is a tremendously powerful technology that also allows you to create deepfakes, spread disinformation, and commit cybercrime.

An ethical perspective: discuss it at the kitchen table

Generative AI raises several ethical questions that we already face today, such as issues around privacy and copyright. In the longer term, this technology raises not only ethical but also existential questions, such as “Does AI mean the end of humanity?” Esselink himself thinks practically. “Discuss at the kitchen table with your son or daughter whether it is such a good idea to have your thesis written entirely by ChatGPT. If you look at things from such a purely ethical perspective, a teacher, for example, would not be allowed to use YouTube channels either. Whether that is realistic today, I venture to doubt. These kinds of questions of conscience are not new, by the way. Students have often chosen the path of least resistance before, for example by reading only the summaries of prescribed books.”

In short: by trial and error, we will have to collectively develop ethical awareness. You can use a drone to deliver medicines, but also as a killer robot in a war. And perhaps we will also have to let go of the idea that, above all, we must not miss the boat. Pressing the pause button, as some tech gurus have suggested, or building in moments of reflection (“what are we actually doing right now?”) is not such a crazy idea. Because, as noted, we should not be naive, but critical and vigilant. Software vendors, venture capitalists, governments, and multinationals all follow their own (political) agendas, and self-interest usually prevails. By reflecting on this critically, you can see much more sharply where a technology such as Generative AI begins to touch our core values, and anticipate it.
