Home » Technology » Cisco’s Radical’ AI Security Approach

Cisco’s Radical’ AI Security Approach

Cisco has taken a revolutionary approach to AI Security in its new AI Defense Solution.

Rowan Cheung, the exclusive interviewee for Sunday’s edition of The Rundown AI Cisco Executive Vice President Jeetu P.atel, CPO and Chief Product Officer (CPO), said that AI Defense was “taking a revolutionary approach to address challenges that existing security tools are not equipped to deal with.”

AI Defense, which was announced just last week, is aimed at addressing risks when developing and deploying AI apps, as well identifying the use of AI in an organisation.

AI Defense features can help protect AI systems and models from attack, while also ensuring model behavior on multiple platforms.

  • Detection and monitoring of AI applications in public and private clouds.
  • Testing AI models for hundreds potential safety and security issues.
  • Continuous validation protects from potential security and safety threats such as prompt injection, service denial, and sensitive information leakage.

This solution allows security teams and organizations to protect data better by allowing them to see all AI apps that are used by their employees. They can also create policies to restrict access to AI tools not sanctioned and implement safeguards to prevent threats and data loss.

Kent Noyes, Global Head of AI and Cyber Innovation for technology services provider World Wide Technology St. Louis said in a press release. “Cisco AI Defense is a significant step forward in AI Security, providing full visibility and protection of an enterprise’s AI Assets.”

AI Security: A Positive Step

MJ Kaufmann teaches and writes at O’Reilly Media The operator of an online learning platform for professionals in technology, in Boston, confirmed Cisco’s analysis on existing cybersecurity solutions. She told TechNewsWorld, “Cisco has it right.” Existing tools do not address many operationally-driven attacks against AI systems such as data leakage and unauthorized model actions.

She said that “implementers must act and implement targeted solutions” to combat them.

Jack E. Gold – founder and principal researcher at J.Gold Associates, an IT consulting company in Northborough. He told TechNewsWorld that they can use the data they collect from their network telemetry to enhance the AI capabilities.

Cisco also wants to provide security across platforms — on-premises, cloud, and multi-cloud — and across models, he added.

He said that it would be interesting to see which companies adopted this technology. Cisco is moving in the right directions with this type of capability, because, generally, companies aren’t looking into this very effectively.

AI security is important if you want to protect your AI systems.

“Multi-model, multi-cloud AI solutions expand an organization’s attack surface by introducing complexity across disparate environments with inconsistent security protocols, multiple data transfer points, and challenges in coordinating monitoring and incident response — factors that threat actors can more easily exploit,” Patricia Thaine, CEO and co-founder of Private AI TechNewsWorld was told by, an information security and privacy company based in Toronto.

Limitations

Dev Nag is the CEO and founder at QueryPal Chatbot for customer support based in San Francisco.

TechNewsWorld reported that “While network-level monitoring provides valuable telemetry,” many AI-specific threats occur at the model and application layers, which network monitoring alone can’t detect.

“The acquisition last year of Robust Intelligence gives Cisco important capabilities in model validation and runtime security, but their focus is on network integration which may lead to gaps when it comes to securing AI development lifecycle,” said he. “Critical area like training pipeline safety, model supply-chain verification, and fine tuning guardrails requires deep integration with MLOps that goes beyond Cisco’s traditional network-centric approach.”

He continued, “Think back to the headaches caused by open-source supply chains attacks where the malicious code was visible.” “Model supply chains are almost impossible by comparison to detect.”

Nag pointed out that, from an implementation standpoint, Cisco AI Defense is essentially a repackaging existing security products and adding some AI-specific monitor capabilities on top.

The solution is reactive, not transformative. Cisco AI Defense can be useful for organizations that have already implemented Cisco security products and are starting their AI journey. However, those who want to pursue advanced AI capabilities may need security architectures designed specifically for machine learning systems.

Karen Walsh, CEO at, said that for many organizations, mitigating AI risk requires human penetration testers, who know how to ask models questions in a way that will elicit sensitive data. Allegro Solutions, a cybersecurity consultancy in West Hartford (Conn.).

“Cisco’s release suggests their ability to create guardrails specific to models will mitigate these risk to prevent the AI learning from bad data, reacting to malicious requests, sharing unintended, she told TechNewsWorld. At the very least we can hope that this will identify baseline issues and mitigate them so that pen-testers can concentrate on more sophisticated AI compromised strategies.

AGI’s Critical Need

Kevin Okemwa writes for Windows Central, notes that AI Defense’s launch couldn’t have come at better time. Major AI labs in the US are getting closer to producing artificial general intelligence (AGI), a technology designed to replicate human intelligence.

James McQuiggan is a security awareness advocate. KnowBe4 Clearwater, Fla.-based, is a provider of security awareness training.

He told TechNewsWorld that “AGI’s ability to think and act like a person with intuitiveness and orientation could revolutionize industries but also introduce risks which could have far-reaching implications.” “A robust AI solution ensures AGI evolves responsibly and minimizes risks such as rogue decisions or unintended effects.”

“AI security isn’t just a ‘nice-to-have’ or something to think about in the years to come,” he added. It’s crucial as we move towards AGI.

Existential Doom

Okemwa added: “While AI Defense represents a positive step, its implementation across major AI laboratories and organizations is yet to be determined.” OpenAI’s CEO is also a notable figure. [Sam Altman] “AI will be intelligent enough to stop AI from causing an existential crisis.”

Adam Ennamli is the chief risk and security officers at General Bank of Canada. He told TechNewsWorld that he was optimistic about AI’s ability to self regulate and prevent catastrophic outcomes. However, he also noted that the alignment of advanced AI systems with the human values remains an afterthought, rather than an absolute.

Stephen Kowski added that current AI systems can be manipulated in order to bypass security controls and create harmful content. SlashNext A computer and network security firm in Pleasanton, Calif.

He told TechNewsWorld that “technical safeguards” and human oversight are essential, since AI systems are driven by their programed goals and training data rather than an innate desire to be human.

Gold continued, “Humans are very creative.” I don’t believe this doomsday stuff. We’ll find a safe way to use AI. That’s not to say there won’t be issues along the way, but we’re not all going to end up in ‘The Matrix’.”