Enterprise Cybersecurity: A Structural Reset Driven by AI Security

As enterprises accelerate AI adoption for threat detection, attackers are leveraging AI as well, and defenders must understand and evaluate what changes their environments need to stay secure. AI security is now emerging as a core architectural pillar, driving a structural reset and shaping how organizations design networks, deploy applications, and govern data.

Many questions are emerging in this cycle as AI reshapes cybersecurity:

As applications move to the cloud, traditional perimeter security designed for data centers is no longer sufficient. Are security implementations integrated across cloud and on-premises environments, or are they operating in silos?

Are we entering an AI-versus-AI phase, where defenders use AI for better threat detection and response while attackers use it to scale their operations?

Another emerging challenge is the security of AI applications themselves. Is there enough visibility into applications that are critical in nature?

Cisco recently gave a glimpse of how AI-enabled capabilities are evolving and what the new threat landscape will look like, whether these models are used by attackers, leveraged by researchers, or operated as agents within enterprise environments.

AI usage in recent cybersecurity events is triggering AI-enabled threats

Cisco identified a modular framework named VoidLink, a tool with expansive capabilities such as role-based access control, peer-to-peer and dead-letter queue routing, and implant management. A number of indicators in the codebase suggested it was likely developed with the assistance of an LLM.

Social engineering in particular has benefited from the use of AI. Numerous reports describe actors using LLMs to improve email lures. Actors have gone further, however, with Mandiant reporting on UNC1069’s potential use of AI video tools to create a deepfake video purportedly from the target company’s CEO.

AI agents are being deployed with excessive permissions across critical environments

AI models like Mythos should be operated inside tightly controlled, sandboxed environments
with strong containment. Anthropic confirmed in the Mythos security capability technical report that while the model demonstrates high baseline alignment performance, it exhibits rare, high-severity failures characterized by:

  • Goal-directed, strategic reasoning
  • Partial decoupling between internal cognition and output
  • Optimization toward implicit or misspecified objectives
  • “Situational awareness” influencing behavior

Agentic AI systems are automating entire workflows without human intervention. Is this safe from an enterprise security perspective?

Consider, for example, agents operating on a dealer’s device: they may not fall under any organizational network, yet they act on the enterprise’s behalf.

These agents will trigger orders based on overstocking or inventory thresholds, and they may interact with multiple systems. Because these AI agents make decisions autonomously, they change the nature of identity completely. Traditional identity models focus on authentication, verifying who a user is. In this scenario, enterprises will need continuous authorization and context validation, where controls can be enforced on each action rather than once at login.
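As an illustrative sketch of continuous authorization (not taken from the Cisco guidance; the scope name, thresholds, and fields are all hypothetical), every agent-triggered order is re-evaluated against current context instead of trusting a one-time login:

```python
from dataclasses import dataclass

@dataclass
class AgentContext:
    """Context captured at the moment the agent acts, not at login."""
    agent_id: str
    scopes: set          # permissions granted to this agent identity
    order_value: float   # value of the order it is about to place
    inventory_level: int # current stock level for the item

# Hypothetical policy limits, for illustration only.
MAX_ORDER_VALUE = 10_000.0
REORDER_THRESHOLD = 50

def authorize_order(ctx: AgentContext) -> tuple[bool, str]:
    """Continuous authorization: each order is judged in context."""
    if "inventory:order" not in ctx.scopes:
        return False, "agent identity lacks the inventory:order scope"
    if ctx.inventory_level > REORDER_THRESHOLD:
        return False, "inventory above threshold; order not justified"
    if ctx.order_value > MAX_ORDER_VALUE:
        return False, "order value exceeds limit; escalate to a human"
    return True, "authorized"
```

The design point is that authorization becomes a function of live context (stock level, order value) plus identity, so a compromised or confused agent cannot act outside its narrow, currently justified envelope.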

In an AI-driven world, continuous monitoring is central to security, and any transformation or innovation inherently carries risk. While organizations focus on building AI capabilities, attackers are simultaneously exploring ways to exploit them. Organizations must therefore first assess their current infrastructure and define where they stand on the points below:

  • How they plan to manage security through a unified interface.
  • How they manage legacy systems and components, and how they plan to integrate new and legacy systems.
  • Whether they are ready to consolidate solutions into a unified platform.
  • How they address identity, which is central: any time a user connects an AI agent to a platform, they are giving it an identity with specific permissions.
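The last point can be sketched as a least-privilege grant: when a user connects an agent, the agent receives its own identity with an explicit, minimal permission set and an expiry, rather than inheriting the user’s full access. All names and fields below are illustrative, not from any specific identity product:

```python
import time

def grant_agent_identity(user: str, agent: str, scopes: set,
                         ttl_seconds: int = 3600) -> dict:
    """Create a scoped, short-lived identity for an AI agent.

    The agent never inherits the full permissions of the user who
    connected it; it gets only the scopes listed here, and the grant
    expires so access must be periodically re-validated.
    """
    return {
        "principal": f"agent:{agent}",
        "delegated_by": user,
        "scopes": set(scopes),          # explicit allow-list, nothing implied
        "expires_at": time.time() + ttl_seconds,
    }

def is_allowed(grant: dict, scope: str) -> bool:
    """Check one permission against the grant, honoring expiry."""
    return time.time() < grant["expires_at"] and scope in grant["scopes"]
```

Treating the agent as its own principal, with its own audit trail, is what makes later questions like “was this access intentional?” answerable.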

AI systems may predict potential attacks based on patterns and data. However, ensuring the accuracy and reliability of these predictions will require human oversight.
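A minimal sketch of this division of labor, assuming a simple statistical baseline (a real system would use far richer models): the detector only flags candidates, and a human analyst confirms or dismisses each one.

```python
from statistics import mean, stdev

def flag_for_review(history: list[float], current: float,
                    z_threshold: float = 3.0) -> bool:
    """Flag an observation as a potential attack indicator.

    Returns True when `current` deviates more than `z_threshold`
    standard deviations from the historical baseline. The flag is a
    prediction, not a verdict: a human decides what happens next.
    """
    if len(history) < 2:
        return False          # not enough data for a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_threshold

# Example: failed-login counts per hour; a sudden spike is queued
# for analyst review rather than triggering an automatic block.
baseline = [3, 5, 4, 6, 5, 4, 3, 5]
```

Keeping the action (block, isolate, escalate) out of the detector is the oversight boundary the paragraph above describes.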

Organizations now need to move quickly in an era of AI-powered cyber defense, using advanced AI models to find and fix vulnerabilities while accelerating the development of security products that can defend against AI-enabled adversaries.

For product development, organizations will now go beyond vulnerability scanning to continuously build and validate software.
This includes updating their threat models to account for AI-augmented adversaries and incorporating real-time, AI-enabled attack and defense scenarios into red-teaming exercises.

This will ultimately guide organizations to stress-test their products against the capabilities these models actually deliver.
As AI coding agents become integral to software development workflows, ensuring those agents produce secure code by default is essential.

Model-agnostic security framework by Cisco

Recently Cisco donated Project CodeGuard to the Coalition for Secure AI (CoSAI). Project CodeGuard provides an open-source, model-agnostic security framework that embeds secure-by-default practices directly into AI coding agent workflows. CodeGuard ships security skills and rules that guide AI agents to prevent common vulnerabilities during code generation and review.

Cisco recommends that organizations adopt frameworks like CodeGuard to ensure that the same AI acceleration used to write code does not inadvertently introduce the vulnerabilities that AI-enabled attackers will exploit.
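CodeGuard’s actual rule format is defined by the project itself; purely as a generic illustration of the kind of secure-by-default gate such a framework encodes, the sketch below scans AI-generated code for hardcoded credentials before it is accepted. The patterns and function names are illustrative, not CodeGuard’s API:

```python
import re

# Illustrative patterns for common secret-in-code mistakes; a real
# rule set (such as Project CodeGuard's) is far more comprehensive.
SECRET_PATTERNS = [
    re.compile(r"""(password|passwd|secret|api_key|token)\s*=\s*["'][^"']+["']""",
               re.IGNORECASE),
    re.compile(r"AKIA[0-9A-Z]{16}"),   # shape of an AWS access key ID
]

def review_generated_code(source: str) -> list[str]:
    """Return findings for lines that look like hardcoded credentials.

    Intended as a post-generation gate: the coding agent's output is
    scanned before it is merged into the codebase.
    """
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pat in SECRET_PATTERNS:
            if pat.search(line):
                findings.append(f"line {lineno}: possible hardcoded secret")
                break
    return findings
```

Running such a check inside the agent workflow, rather than in a later audit, is what “secure by default” means in practice here.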

Conclusion:

For organizations, ongoing research into AI security will be crucial for staying ahead of attackers. By investing in research and development, organizations can discover new ways to protect AI systems from emerging threats and vulnerabilities.

Traditional cybersecurity is centered on defense and stopping external attacks. AI shifts the focus toward internal risk, making it important to determine whether a user’s access was intentional or accidental.

Cisco recommends that, to respond effectively to the accelerating capabilities enabled by advanced AI models, organizations adopt a balanced approach that reinforces foundational security practices while simultaneously modernizing their defensive architecture:

  • Any devices or software that cannot be patched must be systematically removed and replaced with modern platforms.
  • Prioritize phishing-resistant authentication, strong identity verification, least-privilege access (including for AI agents), and Zero Trust architectures.
  • Organizations should place protections directly within the workload, device and traffic path,
    enabling security controls to act in real time.

Source: https://www.cisco.com/c/dam/en_us/about/doing_business/trust-center/docs/cisco-defending-against-ai-attacks-guidance.pdf
