Trend 1: Our work is changing and databases are coming to an end
AI is not only going to make a difference in technical occupations; it is increasingly seeping into the core of many professions, whether that means deploying A/B testing in your marketing campaign, using algorithms to decide whether or not to grant a mortgage, generating a policy document with ChatGPT, or optimizing a supply chain with AI.
In all these places, AI changes the way work is done, shifts the competencies needed, and raises the quality of work. In 2024, AI is expected to penetrate a wide range of industries even further, optimizing processes and decision-making with unprecedented accuracy and efficiency.
The end of the database in BI: integrating data with the REST API
In 2024, more and more organizations will cut through the tangle of interfaces built for cumbersome batch data exchange and say goodbye to the traditional database. Application Programming Interfaces, or APIs for short, have transformed the way systems communicate with each other and exchange data. This is particularly true of the REST API: the Representational State Transfer Application Programming Interface. Instead of data being trapped in separate databases, REST APIs provide a highly standardized way to share information between different applications. This seamless integration enables real-time data exchange, making organizations more agile and responsive.
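To make this concrete, here is a minimal sketch of how a data pipeline can pull records straight from a REST API. The endpoint, parameters, and field names are hypothetical; the point is only to illustrate paginated, real-time retrieval over HTTP.

```python
# Minimal sketch of pulling data from a (hypothetical) REST API into a pipeline.
# The endpoint, parameters, and field names are illustrative, not a real service.
import requests
import pandas as pd

BASE_URL = "https://api.example.com/v1/orders"   # hypothetical endpoint

def fetch_orders(since: str) -> pd.DataFrame:
    """Fetch all orders created since the given ISO date, following pagination."""
    records, page = [], 1
    while True:
        resp = requests.get(BASE_URL, params={"since": since, "page": page}, timeout=30)
        resp.raise_for_status()
        batch = resp.json()            # assume the API returns a JSON list per page
        if not batch:
            break
        records.extend(batch)
        page += 1
    return pd.DataFrame(records)

orders = fetch_orders("2024-01-01")
print(orders.head())
```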
With the growth in popularity of the REST API and data lakes, modern data warehouses can now be built entirely without using traditional database technology. A data lake is a centralized repository for structured and unstructured data at scale. Unlike conventional databases, where data is often stored in rigid structures, a data lake provides flexibility and scalability. It allows organizations to store massive amounts of data, often as files, without the need for prior structuring. This allows them to better handle the growing diversity and volume of data. Now we see that data warehouses can also increasingly take advantage of the powerful data-lake infrastructure, making a traditional database unnecessary for building a data warehouse.
Modern Business Intelligence (BI) tools can often read a wide variety of file formats directly. Instead of depending on static reports and analyses, modern BI tools allow users to directly access data warehouses and data sources connected via APIs. This opens the door to unprecedented opportunities for real-time analytics and decision-making. And it may soon mean the end of the traditional database in BI.
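As an illustration of this file-based approach, the sketch below queries Parquet files lying in a data lake directly with DuckDB, an in-process query engine, without loading them into a database first. The folder structure and column names are assumptions for the example.

```python
# Sketch: querying Parquet files in a data lake directly, without a database server.
# The file paths and columns are hypothetical; DuckDB runs in-process and reads the files as-is.
import duckdb

result = duckdb.sql("""
    SELECT region,
           date_trunc('month', order_date) AS month,
           SUM(revenue)                    AS total_revenue
    FROM 'datalake/sales/*.parquet'        -- files stored in the lake, no loading step
    GROUP BY region, month
    ORDER BY month, region
""").df()                                   # hand the result to pandas for a BI tool or report

print(result.head())
```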
Trend 2: The metaverse is growing and expanding
The metaverse, a concept emerging from virtual and augmented reality technologies, is increasingly making its appearance. 2024 will be the year in which the metaverse continues to grow and expand. In the process, the virtual world and the real world merge ever more seamlessly, further transforming the way we socialize, work, shop, and entertain ourselves. The distinction between this immersive side of the digital world and our physical lives is increasingly difficult to make. The integration of domains and functions will continue to take off in 2024.
The days when AI, with ChatGPT leading the way, was seen merely as an interactive version of Wikipedia are now behind us. The latest generative AI uses real-time data, takes into account the specific context a user is in, and knows how to synthesize files that contain privacy-sensitive information. That last capability means information is no longer traceable to individuals (important given the privacy requirements of the AVG, the Dutch term for the GDPR), while pattern recognition remains intact.
AI will integrate timeliness, context, and even personal data while maintaining privacy.
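What such privacy-preserving handling of personal data can rest on, in its very simplest form, is pseudonymizing obvious identifiers before a document reaches a model. The sketch below is only illustrative; it is certainly not the full anonymization pipeline such tools use.

```python
# Sketch: pseudonymizing obvious identifiers before a document is sent to an AI model.
# Real anonymization requires far more than regexes; this only illustrates the idea.
import re

def pseudonymize(text: str) -> str:
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "<EMAIL>", text)       # e-mail addresses
    text = re.sub(r"\b(\+?\d[\d \-]{7,}\d)\b", "<PHONE>", text)       # phone-like numbers
    text = re.sub(r"\b\d{4}\s?[A-Z]{2}\b", "<POSTCODE>", text)        # Dutch-style postcodes
    return text

doc = "Contact Jan Jansen at jan.jansen@example.nl or 06-12345678, resident at 1234 AB Amsterdam."
print(pseudonymize(doc))
```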
Where generative AI started with just text generation, its integration with other domains is increasing hand over fist in 2024. Through platforms like DALL-E 3, large groups of people are finding their way to increasingly sophisticated text-to-image generation, and text-to-music is becoming ever simpler too. That means you can now generate royalty-free music relatively easily, without hiring third parties. Google, meanwhile, is letting groups of users test its latest AI technologies through its AI Test Kitchen. In 2024, Google hopes to keep competing with ChatGPT with, among other things, Google Bard, aiming to make the biggest quantum leap in AI through collaboration with the crowd. It is a competitive battle from which consumers will reap the benefits in 2024.
The trend under which we can categorize all these developments is that the metaverse is getting closer: AI is going to connect more and more domains – virtual and real – that were previously separate.
AI is becoming multimodal and increasingly sophisticated
Not only is AI going to integrate all kinds of domains and functions in 2024; people are also going to integrate AI into their work and lives in more and more places. This is due to several major developments within AI. Generative AI is becoming multimodal; in other words, ChatGPT is getting ears and eyes. To have AI at our disposal, we no longer have to sit behind a screen or stay glued to our phones. TinyML is making its appearance: more and more small Internet of Things (IoT) devices are participating, learning along, and generating data. Humane, for example, recently launched the AI Pin: a wearable pin with a camera, microphone, and projection capability. The user controls the small device through voice commands and gestures, and a built-in projector shows you the requested information in an instant, projected onto your hand.
Cybercrime in the metaverse
The many possibilities of generative AI have not gone unnoticed by criminals either. Police are seeing a new threat in the world of cybercrime, as specialized LLMs (Large Language Models) are being developed that specifically target malware and hacking activities. According to BlackBerry, 51% of IT specialists now believe that a successful cyber attack will be attributable to ChatGPT within 12 months. A number of ChatGPT's criminal siblings have already surfaced, such as WormGPT and FraudGPT. Even users with minimal skills can use these LLMs for spear phishing: they generate and send convincing, personalized emails that are indistinguishable from real ones. Cybercrime specialists therefore expect a further increase in ransomware attacks in 2024.
Increasingly sophisticated chatbots
Against this threat there are also opportunities: the police, for example, are now deploying the power of AI to fight cybercrime. Among other things, they have developed an AI chatbot that can help you with current questions about cybercrime and is even able to analyze code to help understand how an attack was carried out. The chatbot is connected to trusted databases and works only with carefully verified information from these sources. It also has access to the latest updates and can therefore draw on the most recent information on cyber attacks, vulnerabilities, detection news, and intel.
This development in the world of chatbots is not only happening in law enforcement. In lots of places, opportunities are now emerging to personalize chatbots simply, quickly, and reliably, which also creates entirely new commercial opportunities. Chatbots no longer all draw on the same cloud of information and intel: you can feed your chatbot with your own specific expertise and unique resources, and even impart a style of thinking and responding. As a result, chatbots with different qualities are emerging in practice that, moreover, feel like different personalities. In this world, ChatbotBuilder.ai is currently taking the lead; the company shows how you can quickly and easily build your own AI chatbot tailored to your business and expertise. OpenAI has not been idle either: you can now create and launch your own GPT with a special focus, for example Indian cooking, cryptocurrency, or the field of Business Intelligence. If you are an expert in something, you can launch your own chatbot in no time.
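By way of illustration, the sketch below wires up such a personalized chatbot with the OpenAI Python library by injecting your own reference material into the system prompt. The model name, folder, and prompt are assumptions; for a larger knowledge base, proper retrieval would normally replace this simple file concatenation.

```python
# Sketch: a chatbot grounded in your own expertise by injecting your documents
# into the system prompt. Model name and file paths are assumptions for illustration.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Load your own knowledge base: a handful of text files with your expertise.
knowledge = "\n\n".join(p.read_text() for p in Path("my_expertise").glob("*.txt"))

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # assumed model name; use whichever model you have access to
        messages=[
            {"role": "system",
             "content": "You are a Business Intelligence expert. Answer only from the "
                        "reference material below and say so when you do not know.\n\n"
                        + knowledge},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Which KPI dashboard should a retailer start with?"))
```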
Trend 3: Generative AI blurs the boundaries between fake and real
With all the developments in generative AI, The Matrix is closer than ever. Back in 2023, a top executive of NVIDIA presented a highly realistic movie generated by AI from start to finish, with images so realistic that you cannot tell whether you are looking at the real world or a computer-generated one. NVIDIA christened this system Omniverse ACE: Avatar Cloud Engine. You can play a character in the movie yourself; your avatar looks remarkably like you, your mouth syncs exactly with what you say and how you say it, and you can act however you want: good cop or bad guy. It is not for nothing that many Hollywood actors went on strike.
Deepfakes are often unrecognizable. A recent study shows that people – even when trained to distinguish deepfake voices from real ones – are still wrong in over 25% of cases. The virtual world and the real world are increasingly indistinguishable. In 2024, games are more realistic than ever, while the real world, with its TikTok filters, actually looks more fake than ever. A generation is now being born that will spend more time in the fictional world (The Matrix) than in the real one.
Machines are learning more and more typically human traits
In 1988, it was still world news: the computer Deep Thought managed to beat chess grandmaster Bent Larsen. We have come a long way since then, and the way machines learn is becoming increasingly sophisticated. It is no longer just about raw computing power and number crunching. Pattern recognition and deep learning – advanced algorithms that learn on their own how to combine information – mean that machines now learn in a substantially different way. The AlphaGo system, for example, has since managed to win at the game of Go.
The latest development for 2024 is that machine learning is adding another layer of knowledge and wisdom: learning traits previously considered typically human. Think of bluffing, or of understanding, recognizing, and using sarcasm and humor in communication.
Recently, AI beat experts at the board game Stratego because the machine was able to deliberately employ bluff – something that long appeared to be a typically human strategy but has now entered the AI world. Elon Musk, for his part, has developed an AI chatbot called Grok with his startup xAI. With access to real-time data, this chatbot manages to give humorous answers, with appropriate sarcasm, to a wide variety of questions – even questions that most other AI systems currently refuse to answer. Sarcasm is a fairly difficult form of communication to master because it involves asserting “a” when you actually mean “b”. It, too, has long been seen as a typically human trait.
Manipulation lurks
Seeing is believing: that is the wisdom most of us grew up with. With developments like deepfakes and synthetic data, you can no longer rely on that old saying. It is becoming increasingly difficult to determine what is true and what is not. For a few years now, Reuters has been using its own algorithm, called News Tracer, which skims X (formerly Twitter) for news and tries to filter out fake news as best it can. However, the distinction between real and fake is increasingly difficult to make. For this very reason, it is becoming increasingly important to include the metadata – the sender and the data that come with the media – in your judgment about whether something is fake or real. Unfortunately, the trend is that even that sender and metadata are increasingly part of the manipulation. What remains is approval or confirmation by an independent party. On some platforms, users can now also add context to alleged fake news being spread – anything to expose fake news. For example, a day after the Dutch politician Frans Timmermans flew to Malaga, a photo surfaced of him on X in which he appears to be eating a lavish meal on the plane. It is, however, an AI-generated photo that will do the left-wing politician's image no good.
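One concrete, if fallible, way to involve metadata in that judgment is to inspect the EXIF data that cameras embed in photos, as in the sketch below with the Pillow library. The file name is hypothetical, and, as noted above, this metadata can itself be manipulated or stripped.

```python
# Sketch: inspecting an image's EXIF metadata as one (fallible) signal in judging
# authenticity. The file name is hypothetical; metadata can itself be manipulated.
from PIL import Image
from PIL.ExifTags import TAGS

with Image.open("suspect_photo.jpg") as img:
    exif = img.getexif()

if not exif:
    print("No EXIF metadata found - common for generated or re-saved images.")
else:
    for tag_id, value in exif.items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")
```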
Once fake videos or photos go viral, they cannot be stopped, and from then on they define the image or influence public opinion. Even when it turns out afterward that something was fake, impressions and opinions about people and situations will have been structurally altered. In other words, fake news settles into memory. The ability to easily create realistic fake videos increases the likelihood of disinformation and fake news. In 2024, deepfakes will increasingly be used to influence public opinion, manipulate political campaigns, or even spread completely false news stories.
Trend 4: The rise of algorithm omnipotence
We live in the age of Artificial Intelligence. AI is all around us, yet we often do not see the systems we are exposed to. The omnipotence of algorithms resides in the gateways and IP addresses we use and in the data streams, including GPS location, that your phone generates continuously. Companies too – from Kruidvat to Albert Heijn – know your entire spending pattern better and better thanks to loyalty cards and persistent links to your previous purchases, and they derive more and more connections from that data. AI systems are already building profiles of you and sending you personalized offers.
The systems will continue to get smarter next year. That is first of all because they are becoming ever more self-learning and, as a result, come up with better and better suggestions for how to retain you as a customer, help you recover faster as a patient, or assess more accurately what risk you are exposed to. This development is compounded by the fact that more and more data sources are becoming available that AI can also combine. The growing number of different real-world data (RWD) streams means that the opportunity to arrive at real-world evidence is also increasing. At the most advanced health institutes, for example, starting next year the systems will no longer just read your patient data: AI is going to link more and more so-called non-health data to it. According to a recent article by McKinsey, this includes credit card data, an analysis of all the places you have been, the physical activity you performed there (geospatial data from your health app, for example), and any data the AI machine finds about you on the web. This real-world data feeds increasingly advanced analysis that can make better and better predictions. In this way, AI will help pharmacists in the coming year to improve their product strategy in a data-driven way and give tips on how to increase adherence on a person-by-person basis.
AI systems shift from assistant to agent
Dynamic pricing has been around for a long time. If you click on the same ad just a little too often, you will see the price of that item suddenly go up; the seller's system knows that this IP address apparently finds the product interesting. In 2024, dynamic prices will be applied in more and more places, and the role of AI is changing with them. Because more and more search behavior is being tracked on the internet, systems often know earlier than you do whether you will proceed with a purchase. So-called augmented analytics is already happening elsewhere, and also in BI. The integration of automated machine learning will take hold in 2024 and will be reflected in BI tools, which are all about automated insights and finding connections through data discovery beyond what the human mind can grasp. This shifts the role of AI from assistant to the one who makes the decisions: AI is increasingly taking on the role of agent.
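In its most naive form, such a dynamic pricing rule is no more than a counter per visitor, as in the sketch below. The thresholds and percentages are invented purely for illustration.

```python
# Sketch of a naive dynamic-pricing rule: the more often the same visitor views an
# item without buying, the higher the quoted price. Thresholds are purely illustrative.
from collections import defaultdict

BASE_PRICE = 100.0
view_counts: dict[tuple[str, str], int] = defaultdict(int)  # (ip_address, item) -> views

def quote_price(ip_address: str, item: str) -> float:
    view_counts[(ip_address, item)] += 1
    views = view_counts[(ip_address, item)]
    uplift = min(0.15, 0.02 * (views - 1))   # +2% per repeat view, capped at +15%
    return round(BASE_PRICE * (1 + uplift), 2)

for _ in range(5):
    print(quote_price("203.0.113.7", "trolley-bag"))   # the price creeps up per view
```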
Algorithms hold us in an often invisible grip.
The algorithms are becoming more subtle and are learning better and better how to keep you on a website, on a social media platform, or in a game for as long as possible. Underlying this is a subtle process of countless micro-decisions made within the algorithm. The systems keep track of exactly how long you scroll, what kind of posts make you linger longer, and what kind of terms, headlines, and imagery grab your attention, and they autonomously decide what to serve you as the next piece of clickbait. In 2024, algorithms will thus be even more capable of manipulating you effectively. Last year, for example, the state of Utah already intervened to curb social media use because of the number of depressed children, which is attributed precisely to that social media use.
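Underneath, such a feed often amounts to a simple reinforcement loop: serve content, measure how long you linger, and favor whatever kept you scrolling. The sketch below shows a minimal epsilon-greedy version of that loop; the categories and dwell times are made up.

```python
# Sketch of the engagement loop behind a feed: an epsilon-greedy bandit that keeps
# serving whichever content category has produced the longest dwell times so far.
import random

categories = ["outrage", "cute-animals", "celebrity", "news"]
dwell_sums = {c: 0.0 for c in categories}   # total seconds users lingered per category
serve_counts = {c: 0 for c in categories}

def pick_next(epsilon: float = 0.1) -> str:
    if random.random() < epsilon or all(n == 0 for n in serve_counts.values()):
        return random.choice(categories)                                        # explore
    return max(categories, key=lambda c: dwell_sums[c] / max(serve_counts[c], 1))  # exploit

def record(category: str, dwell_seconds: float) -> None:
    serve_counts[category] += 1
    dwell_sums[category] += dwell_seconds

# Simulated session: the loop quickly converges on whatever holds attention longest.
for _ in range(200):
    c = pick_next()
    record(c, dwell_seconds=random.gauss({"outrage": 12, "cute-animals": 9,
                                          "celebrity": 7, "news": 5}[c], 2))
print(max(categories, key=lambda c: dwell_sums[c] / max(serve_counts[c], 1)))
```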
Trend 5: Being human in the age of AI becomes critical
All the technology described in this article is, little by little, taking away our autonomy. More and more often, technology takes over from us. This trend has been going on for some time, but it will be felt more and more in 2024, which has everything to do with the speed at which developments are currently taking place. A speeding ticket, for example, can already be dropped into our mailbox without any human intervention: the speed measurement is automated, the data is checked by a computer, the amount of the fine is determined in the system, and the letter is prepared, printed, and automatically sent to the correct home address. Screens, laptops, and office lights turn themselves off when these devices sense that you have not used them for a while. Good for the environment, convenient, and safer: the Tesla brakes by itself to avoid an accident. There are also smart pill boxes on the market that send an alert via an app to a caregiver if the box is not opened in time. Our hyper-connectedness combined with smart data streams saves lives, but it also takes away more and more pieces of our lives. That latter trend in particular continues unabated in 2024. Being human in the age of AI will become increasingly difficult for some groups.
The danger of contaminated machine learning
A problematic aspect of machine learning is that the rule "input is output" applies: what you put in determines what you get out. Depending on the sources you provide, AI teaches itself certain patterns. Pre-trained language models are typically trained on huge web-based datasets, which today are often "contaminated" with results that language models previously generated and that users then posted on the internet. This is an example of contaminated machine learning, where a vicious cycle can arise in which previous output contaminates new input. Given the impact generative AI is now having, this risk becomes increasingly important to keep in mind in 2024. After all, it is not always clear to what extent models do or do not exploit contaminated data in downstream tasks.
Recent experiments show that in some cases this contaminated data is indeed exploited, while in other cases models memorize the tainted data but do not exploit it. These results highlight the importance of continuing to analyze huge, web-scale datasets to keep a finger on the pulse. Which sources AI uses, and how it makes decisions, matters. With that, the domain of machine learning is increasingly shifting from the IT department to the political arena.
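The vicious cycle can be made visible with a toy simulation: each generation of a "model" is trained on data generated by its predecessor, and because generated data over-represents typical output, the learned distribution slowly collapses. This is a deliberately simplified stand-in for language models trained on their own web-posted output.

```python
# Toy simulation of contaminated training: each new "model" is fitted on data produced
# by the previous one. Because generated data over-represents typical output (here the
# most extreme 10% of samples is simply dropped), the learned distribution slowly collapses.
import random
import statistics

mean, stdev = 0.0, 1.0                         # generation 0: the "real" data distribution
for generation in range(1, 11):
    samples = sorted(random.gauss(mean, stdev) for _ in range(500))
    samples = samples[25:-25]                  # keep only the "typical" 90% of generated data
    mean, stdev = statistics.fmean(samples), statistics.stdev(samples)
    print(f"gen {generation:2d}: mean={mean:+.3f}  stdev={stdev:.3f}")
# The spread shrinks generation after generation: detail from the original data is lost.
```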
AI and the dangerous feedback loop
Above, we were talking about keeping inputs pure in order to train AI. But what if bugs and biases do get into AI? It has recently been established that a dangerous feedback loop can then occur. After all, it is not only about what the AI learns from people, but also about what and how people in turn learn from AI. It turns out that people adopt biases from AI, even after they have returned to work outside the AI environment. The mistakes can thus circulate as in an echo chamber, getting worse and worse. AI and its users may begin to confirm each other's distorted view of reality and together end up in outright falsehoods. The irony of this trend is that AI copied everything from humans in the first place.
Trend 6: The tendency to get a grip back on AI
The more sophisticated AI becomes, the more important it is to know where we all stand in the field. To keep the quality of AI language models properly measurable and testable, a newly developed benchmark test will be used in more and more places in 2024. With that, the 70-year-old Turing test is effectively being retired. For a long time, this Turing test was the benchmark for determining whether a machine was able to behave linguistically in such a way that it was indistinguishable from a human being. In the meantime – partly with the advent of the computer model LaMDA, which stands for Language Model for Dialogue Applications – frighteningly good results have been achieved: the dialogue with the computer is so natural that you almost have to attribute human characteristics to LaMDA.
As AI improves, ever more sophisticated methods are needed to assess AI. If you want to keep a grip on AI, you have to keep up with the speed at which AI is also developing. The new test is called BIG-Bench, an abbreviation for “Beyond the Imitation Game benchmark,” and contains 204 very different tasks that the computer must perform. This BIG-Bench test will become the standard for testing the next generation of language models in more and more places starting in 2024.
Datacratic leaders and the rise of data-driven culture
There is still a world to be won for leaders. Companies that are already working datacratically in 2024 and have embraced data-driven leadership are 3-0 up on their competitors. The bulk of the decisions leaders make are based on old data. That is like trying to stay on the road in a moving car by looking only in your rearview mirror. Are you looking at the scenery you have already passed, or at the turns that are coming? That, too, is a way of getting a grip on ever-accelerating technology: investing in real-time, data-driven decisions pays off and keeps you on track. It is becoming increasingly important to have real-time data available. Currently, only 14% of all CPOs (Chief Procurement Officers) have access to real-time data ecosystems on which to base their decisions, according to Beroe Inc and McKinsey. Understanding your data and having visibility into current numbers keeps you in control. Companies that manage to integrate business intelligence and big data applications more deeply into their organizational processes are getting ahead.
AI systems should lose their black box image in 2024
All the developments covered above will thunder on in 2024: programmers use GitHub Copilot to code better, faster, and with fewer errors, and thanks to prompt engineering you no longer need to be able to code to come up with new software. Machine learning marches on. At the same time, products and services of course still need to meet all security requirements, professionals need to know where their data resides, and it becomes more important than ever for professionals to be able to explain all choices to customers. For this to happen, AI systems need to lose their image as black boxes. Take the study in which AI, using far less data than the regular clinical risk model, was still found to predict much better and faster whether breast cancer would develop. We quote lead researcher Vignesh Arasu here:
“All five AI algorithms outperformed the BCSC risk model in estimating breast cancer risk within five years,” Arasu explained. “This amazing performance shows that AI is able to notice the earliest early stages of cancer, as well as certain clues in breast tissue that increase the risk of breast cancer further into the future. There is apparently data hidden in mammograms that reveal the risk of breast cancer, but exactly what the AI model taps into remains unclear.” (source: Scientias).
The AI machines must continue to be able to explain to professionals how they arrived at their judgments. Here we enter the area of so-called Thinking Assistants and Intelligence Augmentation (IA). The thrust of IA is not so much to take over human cognitive tasks as to use AI to augment existing human intelligence. All in all, the larger trend here is that we are developing a tendency to take back control of AI.
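One generic family of techniques for prying open such a black box is to measure how much a model's performance drops when each input is scrambled. The sketch below uses scikit-learn's permutation importance on synthetic data; it is emphatically not the method of the breast cancer study quoted above, only an illustration of what explainability can look like in practice.

```python
# Sketch: one generic way to make a black-box model explain itself - permutation
# importance. Synthetic data; this is not the method used in the breast cancer study.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops:
# the bigger the drop, the more the model leans on that feature for its judgment.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"feature {idx}: importance {result.importances_mean[idx]:.3f}")
```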
Trend 7: The call for ethics is getting louder and louder
Within AI and big data, ethics is becoming an increasingly important issue. Within projects where big data streams are used, but also within regulations surrounding AI, ethics are increasingly becoming a factor of significance.
For example, the G7 is already introducing an AI code of conduct. This is a set of international guidelines (guiding principles) and an international code of conduct. These are designed to achieve effective governance of AI internationally. The starting point is to handle AI in a safe, responsible, and trustworthy manner.
As long as the use of AI still carries ethical and legal dilemmas, some companies will want to put a (temporary) stop to developments around AI, as already happened at Rabobank in late 2023. Banks want to avoid at all costs having their customers' data end up on the platforms of American tech companies. Customer data must be well protected at all times, and privacy laws, data governance, and data management must first be properly regulated.
Towards data-driven and value-driven organizing
A trend that is particularly prevalent in government at this time in the area of data and ethics is the following: there is a shift from solely focusing on data-driven approaches to also embracing a values-driven perspective. In some places, you then see values-driven work being pitted against data-driven work. But by making that split, you throw out the baby with the bathwater, as it were. The challenge for 2024 is to pay sufficient attention to values and ethics within data-driven work, and within data-driven organizations.
It is increasingly important to take responsibility for all moral consequences, precisely because you are working with big data streams, with Business Intelligence, and with AI. The way forward in 2024 is not to change the way AI and BI work, but to explicitly add the perspective of values-based thinking: keep examining how the implementation of technology impacts all the different types of stakeholders. Failure to do so will create a new dichotomy of haves and have-nots, of the digitally literate and the digitally illiterate, of people who know how to make use of AI and IA and those who are averse to it. The consequences of digital illiteracy are great: the digitally illiterate have less and less access to information, end up in social isolation faster, more often, and for longer, have more limited opportunities in the labor market, and are relatively vulnerable to online scams and abuse.
The 2024 digital government work agenda focuses on values-driven digitization. This agenda includes topics such as increasing digital literacy, reducing online disinformation, safeguarding public values and ensuring privacy, responsible data use, and the ethical side of data processing and exchange.
In conclusion
Some of the trends in BI, AI, and big data described here may sound somewhat negative, alarming, or even downright dystopian. Passionned Group, on the contrary, always likes to take an optimistic view of the future. Bottom line: if we manage to avoid the pitfalls mentioned above, a wonderful acceleration in our human development awaits us. But be careful not to compromise your humanity or your privacy. We wish you happy holidays and a happy 2024.