AI sparks a "battle" to fill chatbot responses with misinformation.

Speaking to the Lusa news agency, Sergio Hernández, head of the EFE Verifica fact-checking service, explains that amid growing disinformation, "a new battle has begun, aimed at filling the responses of chatbots, such as ChatGPT or Gemini, with misinformation."
"Disinformation grows as social media and its algorithms exploit and amplify cognitive biases," says Sergio Hernández.
The growing flood of messages creates a confusing, dense, and complex atmosphere in which it becomes harder to distinguish what is actually happening, as the current media ecosystem is blurred by the successive eruptions of the internet, social networks, and AI, the expert explains.
In this sense, for Sergio Hernández, "planned disinformation has increased, whether promoted by economic or political interests," highlighting operations of Foreign Information Manipulation and Interference (FIMI), which generally point to Russia as the main threat.
This week, the European organization EUvsDisinfo stated in a report that the rise of AI has reshaped the Kremlin's FIMI campaigns.
Instead of reaching the public directly through social media, Russia's disinformation apparatus has shifted its strategy to "flooding the internet with millions of misleading and low-quality articles and content designed to be used by AI-driven tools and applications."
Thus, "the difficulty of the problem increases exponentially with the emergence of artificial intelligence," both in the generation of false content and in changes to how people search for information.
AI enables the creation and dissemination of large-scale fraud, as well as the design of 'deepfake' content, which reproduces the image and voice of public figures to make it appear they said or did things they never did, explains the Spanish official.
In Portugal, between August and September, journalists Pedro Benevides, Clara de Sousa, and Sandra Felgueiras were examples of those whose images were manipulated using AI to spread misinformation about vaccines and medications.
"Current AI models are constantly improving the quality of their creations, with hyper-realistic videos that incorporate dialogue and ambient sound," explains the head of EFE Verifica.
Sergio Hernández adds that "the technical flaws that previously gave it its synthetic nature, such as hands with abnormal fingers or unrealistic textures, are progressively being minimized."
This month, a report by the anti-disinformation organization NewsGuard concluded that OpenAI's new AI application, Sora 2, produces hyper-realistic videos with false claims 80% of the time.
Furthermore, the manipulation of large language models, known as 'LLM grooming', is a new threat: the deliberate saturation of the internet with false information to influence tools like ChatGPT, so that these models generate responses with misleading content, reproducing false narratives and propaganda.
Launched in 2019, EFE Verifica is the fact-checking service of the Spanish news agency EFE. It aims to respond to growing disinformation, counter falsehoods with verified information, and give citizens the knowledge to sharpen their critical reading of the news.
noticias ao minuto