Under-Secretary-General for Global Communications Melissa Fleming called on generative AI developers to prioritize human safety and rights over profit at an “Arria Formula” meeting of the Security Council.
The meeting on Artificial Intelligence and its impact on hate speech, mis- and disinformation was jointly chaired by the United Arab Emirates and Albania and featured insights from digital experts Rahaf Harfoush and Jennifer Woodard.
In her address, Ms. Fleming noted that generative AI, if developed and used responsibly, had the potential to improve human rights, including access to information, health, education, and public services.
However, she expressed grave concern over the technology’s potential to “dramatically intensify online harms”.
With generative AI, large volumes of convincing disinformation – from text to audio to video – can be created at scale, at very low cost and with minimal human intervention.
Such content can be distributed en masse, not only across social media and through fake profiles, but on other personalized channels such as email campaigns, text messaging and advertisements.
Generative AI leaves few fingerprints, making it much harder for journalists, fact-checkers, law enforcement or ordinary people to detect whether content is real or AI-generated. Ms. Fleming outlined four areas of key concern for the UN:
- Peace and Security: AI-powered disinformation is already endangering UN peace and humanitarian operations, putting staff and civilians at risk. Over 70% of UN peacekeepers responding to a recent survey said mis- and disinformation severely hampered their ability to carry out their work.
- Human rights violations: AI is being used to create and spread harmful content, including child sexual abuse material and nonconsensual pornographic images, especially targeting women and girls. The UN is also deeply concerned that anti-Semitic, Islamophobic, racist and xenophobic content could be supercharged by generative AI.
- Democracy at risk: The potential of AI to manipulate voters and sway public opinion during elections poses a significant threat to democratic processes around the world.
- Undermining science and public institutions: For example, AI tools could escalate decades-long disinformation campaigns to derail climate action, by amplifying false information about climate change and renewable energy.
Underlying these phenomena is a decline in public trust in news and information sources.
Ms. Fleming cited a recent report which found that, since May of this year, the number of AI-generated news sites operating with little to no human oversight had risen from 49 to nearly 600. Some of these sites publish thousands of new articles a day, often mimicking familiar news websites and spewing out completely fabricated stories.
In light of these challenges, the UN has set up an AI Advisory Body to strengthen global AI governance.
In parallel, the UN is developing a code of conduct for information integrity, to help increase resilience in societies against disinformation and hate.
Ms. Fleming called for a balanced approach to harness AI's benefits while mitigating its risks, emphasizing the need for healthy information ecosystems for stable and united societies.
Her message was clear: AI developers must prioritize people and their well-being over profit, ensuring technology serves as a force for good.
Analysis assisted by AI.