Can artificial intelligence (AI) interfere in an electoral process? That question dominates a survey of the biggest risks for 2024, and apprehension is growing as the U.S. presidential election, scheduled for November this year, approaches.
The answer increasingly appears to be yes. A striking example is the proliferation of deepfakes: synthetic videos and images generated by AI algorithms. These media can be used to spread false information or manipulate public opinion, potentially influencing election results.
This concern is exacerbated by the constant advancement of AI technology, which makes the creation of fake content easier and more accessible. As we approach the elections, it is crucial to be aware of the potential impacts of AI on the democratic process and to adopt measures to mitigate its misuse.
It’s true that many people have found entertainment in this new image-generating technology, even mocking its quality, such as the poorly drawn hands. However, the World Economic Forum’s “Global Risks Report 2024” does not view these jokes with much humor.
The survey is a comprehensive analysis that aims to identify and assess the main challenges and threats that the world will face in the coming years. The report is developed through consultations with a wide range of global risk experts, political leaders, business executives, and academics to provide valuable insights for guiding public policies, business strategies, and individual actions.
This survey stands out for its comprehensive approach, analyzing a variety of risks in different areas, such as geopolitics, economics, environment, technology, and society. It not only identifies the imminent risks that may arise in the next two years but also anticipates and examines the challenges that may manifest over a horizon of up to 10 years, offering a long-term view of the problems the world may face.
By addressing topics such as misinformation, extreme weather events, social polarization, cybersecurity, and many others, the report provides a holistic understanding of the threats shaping the global landscape. Furthermore, by highlighting the interconnections between different risks and their potential ramifications, the survey helps to emphasize the complexity of the challenges faced by the international community.
Based on these analyses, the report not only helps to raise awareness of potential risks but also offers guidance on how to mitigate these threats and promote resilience on a global scale. This can include recommendations for public policies, corporate risk management strategies, investments in research and development, international cooperation, and civil society engagement.
According to the report, misinformation or inaccurate information generated by AI has the potential to amplify social polarization, especially on sensitive issues such as climate change, armed conflicts, and the economy.
The study, conducted in collaboration with Zurich Insurance Group, consulted over 1,400 global risk experts, political leaders, and business executives in September 2023. This analysis highlights widespread concern about the potential negative impacts of AI on society and underscores the importance of addressing these issues with seriousness and preventive measures.
Concern about the risks associated with Artificial Intelligence in 2024 is intensified by the approaching U.S. presidential election. The current president, Joe Biden, has already announced his intention to campaign for a second term.
In the Republican field, the favorite is Donald Trump, a former president who faces a series of lawsuits and has already been declared “ineligible” in Colorado.
Saadia Zahidi, Managing Director of the World Economic Forum, highlighted that an unstable global order, marked by polarized narratives and insecurity, along with the worsening impacts of extreme weather conditions and economic uncertainty, increases the risk of spreading misinformation. This situation is potentially exacerbated by the influence of Artificial Intelligence, raising concerns about the integrity and transparency of the electoral process.
Besides the United States, other countries such as Taiwan, which faces tensions with China, India, Russia, South Africa, and Mexico, will also hold elections this year.
About half of the world’s adult population is expected to go to the polls in 2024. The central fear is that Artificial Intelligence could be used to sway election results.
Carolina Klint, Commercial Director for Europe at Marsh McLennan consulting firm, emphasized in an interview with CNBC: “AI is capable of developing models that can influence a large number of voters in a way we’ve never seen before. It will be very important to see how this unfolds.” This warning highlights the need for vigilance and proper regulation to ensure the integrity of electoral processes in the face of AI’s potential influence.
Check out the ranking of the main risks identified in the report, divided between those that may manifest in up to 2 years and those that may arise in up to 10 years:
Within 2 years:
- Misinformation
- Extreme weather events
- Social polarization
- Cyber insecurity
- Interstate armed conflict
- Lack of economic opportunities
- Inflation
- Involuntary migration
- Economic recession
- Pollution
Within 10 years:
- Extreme weather events
- Critical changes to Earth systems
- Biodiversity loss and ecosystem collapse
- Natural resource scarcity
- Misinformation
- Adverse outcomes of AI technologies
- Involuntary migration
- Cyber insecurity
- Social polarization
- Pollution
Artificial Intelligence offers a range of benefits and opportunities for society, but it also presents challenges that must be addressed carefully and ethically. The massive data collection required to feed AI systems raises concerns about individuals’ privacy and the security of their personal information. At the same time, AI’s ability to generate and disseminate convincing false information makes it harder to discern what is true from what is false.
To mitigate these problems and promote the ethical, responsible use of AI in society, several measures are necessary: regulation requiring transparency in algorithmic decision-making, investment in digital education and training, and an open, inclusive dialogue about the social and ethical impacts of AI.