
AI Dominates 2025 Cybersecurity Forecasts

Artificial intelligence looms large in analysts' and security professionals' cybersecurity forecasts for 2025.

Both attackers and defenders will use artificial intelligence, but attackers will gain more from it, maintained Willy Leichter of AppSOC, a provider of application security and vulnerability monitoring based in San Jose, Calif.

“We already know that AI is going to be used more and more on both sides of cyber warfare,” he said in a TechNewsWorld interview. Attackers, however, will be less constrained because they are less concerned about AI accuracy or ethics. Techniques such as highly customized phishing and scouring networks for existing weaknesses stand to benefit from AI, he noted.

“While AI has huge potential defensively, there are more constraints — both legal and practical — that will slow adoption,” he said.

Chris Hauk, consumer privacy champion at Pixel Privacy, a publisher of online consumer security and privacy guides, predicted that 2025 will be a year of AI versus AI, with the good guys using AI to defend against AI-powered cyberattacks.

“It’s likely to be a year of back-and-forth combat, as both sides use the information they have gathered from past attacks to develop new attacks and new defenses,” he told TechNewsWorld.

AI Security: Mitigating the Risks

Leichter also predicted that cyber adversaries will start to target AI systems more frequently. He explained that AI technology greatly expands the attack surface, with new threats emerging against machine learning systems, models, and datasets. Moreover, when AI applications are rushed from the lab into production, their full security impact won't be known until they are breached.

Karl Holmqvist, founder and CEO of Lastwall, an identity protection company based in Honolulu, agreed. “The unchecked, mass deployment of AI tools — which are often rolled out without robust security foundations — will lead to severe consequences in 2025,” he told TechNewsWorld.

He said that without adequate privacy and security measures, these systems will become prime targets for manipulation and breaches. This Wild West approach to deployment will leave data and decision-making systems exposed, he warned, so organizations must prioritize security controls and transparent AI frameworks to reduce these risks.

Leichter further maintained that in 2025, security teams will be forced to take more responsibility for protecting AI systems.

He explained that, while it may seem obvious, in many organizations the initial AI projects are driven by data scientists and business analysts, who frequently bypass traditional application security processes. “Security teams will lose the battle if they try to slow or block AI initiatives, but they will need to bring rogue AI under their security and compliance umbrella.”

Leichter pointed out, too, that AI in 2025 will increase the attack surface available to adversaries targeting the software supply chain. “We’ve already seen supply chains become a major attack vector, as complex software stacks depend heavily on third-party and open-source code,” he said. The explosion in AI adoption has made this target bigger, with new attack vectors against datasets and models.

He added: “Understanding the lineage and maintaining the integrity of changing datasets is a complex problem, and there is currently no way for an AI model to unlearn toxic data.”
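
There is no off-the-shelf fix for dataset lineage today, but the underlying idea can be illustrated in a few lines of code. The sketch below is a minimal example assuming datasets stored as plain files on disk; the function names are illustrative, not drawn from any product mentioned here. It records a SHA-256 manifest of training data and later flags files whose contents have changed. It can detect tampering after the fact; as Leichter notes, it cannot make a model unlearn data it has already absorbed.

```python
# Minimal sketch of dataset lineage tracking via a hash manifest.
# Assumption: training data lives as ordinary files under one directory.
import hashlib
import json
from pathlib import Path

def build_manifest(data_dir: str) -> dict[str, str]:
    """Map each dataset file to the SHA-256 digest of its contents."""
    manifest = {}
    for path in sorted(Path(data_dir).rglob("*")):
        if path.is_file():
            manifest[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
    return manifest

def save_manifest(data_dir: str, manifest_file: str) -> None:
    """Record a known-good snapshot of the dataset's hashes."""
    Path(manifest_file).write_text(json.dumps(build_manifest(data_dir), indent=2))

def verify(data_dir: str, manifest_file: str) -> list[str]:
    """Return files whose contents no longer match the recorded hashes."""
    recorded = json.loads(Path(manifest_file).read_text())
    current = build_manifest(data_dir)
    return [f for f, digest in recorded.items() if current.get(f) != digest]
```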

Data Poisoning: A Threat to AI Models

Michael Lieberman, CTO and co-founder of Kusari, a software supply chain security company in Ridgefield, Conn., also believes the poisoning of large language models (LLMs) will be a major development in 2025. “Data poisoning attacks aimed at manipulating LLMs are likely to become more common, even though this method is more resource-intensive than simpler tactics, such as distributing malicious open LLMs,” he told TechNewsWorld.

“Most organizations don’t train their own models,” he said. Instead, they use pre-trained, often free, models. Because there is little transparency into the origins of these models, it is easy for malicious actors to introduce harmful ones, as the Hugging Face malware incident showed. That incident, discovered in 2024, involved roughly 100 models with hidden backdoors capable of executing arbitrary code on users’ machines that had been uploaded to the Hugging Face platform.
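
One practical defense is to treat third-party models like any other untrusted dependency. The snippet below is a minimal sketch assuming the Hugging Face transformers library; the model ID and commit hash are placeholders, not vetted values. It pins an exact, previously audited revision, refuses to execute Python code shipped inside the repository, and prefers safetensors weights over pickle-based formats, which were the malware vector in the incident above.

```python
# Minimal sketch of defensive model loading with Hugging Face transformers.
# MODEL_ID and PINNED_REVISION are hypothetical placeholders for illustration.
from transformers import AutoModel

MODEL_ID = "distilbert-base-uncased"                  # hypothetical model choice
PINNED_REVISION = "replace-with-audited-commit-hash"  # placeholder, vet first

model = AutoModel.from_pretrained(
    MODEL_ID,
    revision=PINNED_REVISION,   # pin an exact commit instead of tracking "main"
    trust_remote_code=False,    # refuse to run Python shipped inside the repo
    use_safetensors=True,       # avoid pickle-based weight files
)
```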

Lieberman predicted that future data poisoning attacks will likely target large players such as OpenAI, Meta, and Google, which train their models on vast datasets, making such attacks more difficult to detect.

“In 2025, attackers will probably outpace defenders,” he said. “Attackers tend to be financially motivated, while defenders often struggle to secure adequate budgets because security is not a revenue driver. It may take a significant AI supply chain breach — akin to the SolarWinds Sunburst incident — to prompt the industry to take the threat seriously.”

“AI will enable more attackers to launch sophisticated attacks. As AI becomes more accessible and capable, the barrier to entry will be lower for less-skilled attackers, while also increasing the speed at which attacks can be carried out,” said Justin Blackburn of AppOmni, a maker of SaaS security management software based in San Mateo, Calif.

He told TechNewsWorld that AI-powered bots will enable threat actors to execute large-scale attacks quickly and with minimal effort. Armed with these AI-powered tools, even less capable adversaries may be able to gain access to sensitive data and disrupt services on a scale previously reserved for more sophisticated, well-funded attackers.

Script Kiddies Grow Up

In 2025, the rise of agentic AI — AI capable of making independent decisions, adapting to its environment, and taking action without direct human intervention — will exacerbate problems for defenders, too. “Advances in artificial intelligence will empower non-state actors to develop autonomous cyberweapons,” said Jason Pittman, a collegiate associate at the School of Cybersecurity and Information Technology at the University of Maryland Global Campus in Adelphi, Md.

He told TechNewsWorld that agentic AI operates autonomously with goal-directed behavior. “Such systems can use frontier algorithms to identify vulnerabilities, infiltrate systems, and adapt their tactics in real time without human steering.”

He explained that “these features distinguish it from other AI systems, which rely on predefined instructions and require human input.”

“Like the Morris Worm in the past, the release of agentic cyberweapons could begin as an accident, which is even more troubling,” he said. “The accessibility of advanced AI and the proliferation of open-source machine learning platforms lower the barrier to developing sophisticated cyberweapons. And once developed, the powerful autonomy feature can easily lead to agentic AI escaping its security measures.”

While potentially harmful in the hands of malicious actors, AI can also help secure data, such as personally identifiable information (PII). “After analyzing more than six million Google Drive files, we found that 40% of them contained PII, putting businesses at risk of a data breach,” said Rich Vibert of Metomic, a data privacy platform based in London.

He continued, “In 2025, we will see more companies prioritizing automated data classification methods to reduce the amount of vulnerable information inadvertently stored in publicly accessible files and collaborative workspaces across SaaS and cloud environments.”

He added that “businesses will increasingly deploy AI-driven tools to automatically identify, tag, and secure sensitive information.” “This shift will enable businesses to keep pace with the vast amounts of data generated daily, ensuring that sensitive data is continuously safeguarded and that unnecessary data exposure is minimized,” he said.
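
Commercial platforms use trained classifiers for this kind of tagging, but the core idea can be sketched with simple heuristics. The example below is a deliberately naive regex-based scanner written for illustration; the patterns, file types, and directory name are assumptions, not a reflection of how any vendor's product works. It walks a directory and flags text files that appear to contain PII.

```python
# Minimal sketch of automated PII classification over plain-text files.
# The regexes are crude heuristics; real classifiers are far more robust.
import re
from pathlib import Path

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scan_file(path: Path) -> dict[str, int]:
    """Return a count of each PII pattern found in one text file."""
    text = path.read_text(errors="ignore")
    return {name: len(rx.findall(text)) for name, rx in PII_PATTERNS.items()}

def classify(root: str) -> None:
    """Tag every .txt file under `root` that appears to contain PII."""
    for path in Path(root).rglob("*.txt"):
        hits = {name: count for name, count in scan_file(path).items() if count}
        if hits:
            print(f"SENSITIVE  {path}  {hits}")

if __name__ == "__main__":
    classify("./shared-drive-export")  # hypothetical export directory
```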

However, the hype surrounding AI in 2025 may also lead to disappointment among security professionals, Cody Scott, a senior analyst at Forrester Research, a market research company headquartered in Cambridge, Mass., warned in a company blog post.

He noted that, according to 2024 Forrester data, 35% of global CISOs and CIOs consider exploring and deploying gen AI use cases to increase employee productivity a top priority. “The security products market has hyped gen AI’s productivity benefits, but the lack of results is fostering disillusionment,” he wrote.

He continued, “The idea of an autonomous security operations center using gen AI generated a lot of hype, but it could not be further from reality.” In 2025, he predicted, this trend will continue, and security practitioners will grow more disenchanted as budget constraints and unrealized AI benefits reduce the number of security-focused gen AI deployments.