Fortinet warns: Email remains key to cyberattacks, AI amplifies phishing

Last year, cybercriminals' automated vulnerability scanning of systems rose by an unprecedented 16.7%. Artificial intelligence has emerged as a major ally of cybercrime: attackers use it to create deepfakes, malware, and scam bots. And amid this complex landscape, an old, familiar channel remains one of the main gateways for attacks: email.
These are some of the insights from the 2025 Global Threat Landscape Report, published annually by the threat lab of cybersecurity firm Fortinet. Security companies periodically release such reports, built on data collected from their own defense systems: firewall telemetry, EDR (endpoint detection and response), and network controls.
Automated scans are massive searches conducted by cybercriminals using software that hunts for vulnerable or misconfigured computers, phones, cameras, servers, and other connected devices. It is like a criminal trying thousands of locks per second until one door opens. This year, the company detected 36,000 scans per second.
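For technically minded readers, the "thousands of locks per second" idea can be sketched in a few lines of code. This is a minimal, illustrative TCP connect scanner, not Fortinet's tooling and not how real campaigns operate: actual mass scanning is distributed across botnets and also fingerprints services and known vulnerabilities.

```python
# Illustrative sketch of automated port scanning: try to open a TCP
# connection to many ports and record which ones answer. Real-world
# mass scanners work at far larger scale and add fingerprinting.
import socket
from concurrent.futures import ThreadPoolExecutor

def probe(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds (an 'open door')."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(host: str, ports: range) -> list[int]:
    """Probe many ports concurrently, mimicking an automated scanner."""
    with ThreadPoolExecutor(max_workers=64) as pool:
        results = pool.map(lambda p: (p, probe(host, p)), ports)
    return [p for p, is_open in results if is_open]
```

Run against anything but a host you own and this becomes exactly the hostile traffic the report measures, which is why defenders watch for bursts of failed connection attempts.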
The problem with these numbers is that, without context, they're difficult to interpret. Is it a lot or a little? What does this tell us about the threat landscape, and how much does it depend on the detection system used in the analysis? And, above all, what risk does it pose to public entities, businesses, and ordinary users?
AI-powered email in Gmail settings, another new gateway for attacks. Photo: Pixabay
The release of Fortinet's latest report coincided with a visit to Argentina by Robert May, the company's Executive Vice President of Technology and Product Management. During his visit, he spoke with Clarín to unpack these figures and where we stand today.
May has a long background in the tech sector: he holds a degree in Computer Science from the University of British Columbia in Canada and held engineering and product-management positions at Nortel Networks and the Canadian Space Agency (CSA) before joining Fortinet.
With more than 20 years at the company, the specialist offered some keys to interpreting the report's data, and discussed Latin America and the main threats users face today.
Robert May, at Fortinet's offices in Buenos Aires. Photo: Clarín
—It's often said that cybercriminals have the advantage over those who defend systems. How do you see that scenario today, with the widespread adoption of artificial intelligence?
—Well, it's something we see all the time. It happened a few years ago with the migration to the cloud: we used it to defend systems, but attackers also used it, as crime-as-a-service, to launch attacks from multiple locations. Today, the same thing is happening with AI. We use it to reduce detection and response time; even in our own SOC [Security Operations Center], we saw 60% reductions in certain tasks with AI. But attackers do the same thing: they use these tools to launch attacks faster. There's technological parity.
—In the report, you mention techniques called "living off the land." What does this mean?
—It happens a lot in critical infrastructure. It may be easy to get in through an organization's "first door," but that doesn't give access to the most important secrets. For an intruder, the best thing to do is stay quiet on a network: instead of doing something loud that would give them away, they stay inside, acting stealthily. They don't scan the entire network all at once; they wait for moments when their behavior appears normal. This way, they can go months or years without causing damage, until they find something valuable and act.
—Everything is about AI these days. How is it impacting attacks like phishing?
—Email remains the easiest gateway for a cyberattack, and AI contributes to the deception. Previously, language mistakes made fake emails easy to spot. Now, with AI, messages appear completely legitimate, with highly specific information. The barrier to launching an attack keeps getting lower.
—Passwords are still a big problem. What do you think about eliminating them?
—In our own SOC, many of the alerts we send to clients simply ask them to enable multi-factor authentication or zero-trust controls. These are tools they already have but don't turn on. Passwords alone are insecure. There are solutions like passkeys and MFA [multi-factor authentication]. The important thing is to add an extra layer of security beyond the password.
—Are misconfigurations still a problem?
—Yes. Sometimes they implement controls in one part of the network but not another. These are complex environments, with multiple IT teams. Even when they have the tools, they don't configure them properly. For this there are approaches like zero trust [granting each user only the access they actually need] and MFA, which are included in our products. We also use AI to alert when something is open or unprotected. Before, you had to review the configuration manually; now, with generative AI, we can alert the administrator and suggest or implement changes. That's the other side of AI: it can also be used for defense.
—How do you see the outlook in Latin America?
—If I remember the report correctly, around 25% of the global attacks we see target Latin America. It varies by country depending on the predominant industries: oil and gas, financial services, and so on. But the region is a major target for cybercriminals.
—What about industrial systems, which are now much more connected?
—Previously they were isolated systems, but in the last 5 to 10 years a digital transformation has connected them, carrying old problems along with it. There has been investment, but not all organizations have moved at the same pace. These are critical environments: energy, water, transportation. An attack there impacts millions of people. They are also extreme environments that require special hardware and customized software. The objectives are similar to other attacks, but the entry points are different.
—Ransomware has been the big topic of recent years, although it's been less discussed in the media. Where are we today with ransomware?
—Well, it's actually still prevalent. AI has changed how attacks are generated and defended against, but not their type. In industrial sectors specifically (OT, operational technology), we see cases where attackers are not just trying to steal data but compromising critical infrastructure to demand ransoms.
AI, a frenzy for adoption. Photo: Reuters
—Are companies adopting AI without measuring the risks?
—Yes, there's competitive pressure: if your competitor talks about AI, you have to talk about AI too. So, they deploy language models [LLM] in the cloud, poorly configured or without controls or knowledge of what data they're uploading. That's where we offer tools to secure these deployments. We also use AI for cybersecurity, detection, and to accelerate SOC work.
—As users, it's difficult to separate the "signal from the noise" within the AI landscape. Is this a problem for the industry as well?
—Yes. Many keynotes [conference talks] discuss AI without showing how it works in a demo. You see booths that didn't have AI last year and this year have added it to the brochure. There's a lot of hype. The key is to show how it's implemented and what value it brings, not just put it in a PowerPoint presentation. The risk is deploying a poorly built product without checking its security, and it becoming an attack gateway.
—What trends are of concern today?
—Well, the most common is users uploading sensitive data to ChatGPT. There are thousands of new SaaS services with AI, and employees upload information to them. We give the CISO [chief information security officer] visibility and tools to block it. And it's not just ordinary users: network operators also upload internal data to external tools.
—What advice would you give to companies and users today in this area?
—For businesses: visibility. Know what data is being shared and what is being used, then decide what to allow and what not to. For users: be aware of what data you upload and where. Just as we kept repeating "don't click suspicious links, don't share passwords or keys with anyone," we now have to repeat a new mantra: "Don't upload personal information to a chatbot or public site." Ultimately, we don't know where that data might end up.
Clarin