AI-Led Attacks Target 600+ FortiGate Firewalls; Amazon; What Makes AI a Powerful Tool for Hackers
AI-Led Threat Actors Compromised 600+ Fortinet FortiGate Firewalls in 55 Countries
Types of AI-Based Attack Vectors and Organizational Preparedness for Threat Mitigation in 2026
AI-based attacks are already here, and organizations now need to protect themselves against unorthodox attack vectors such as these. An organization's readiness to thwart such vectors will largely determine how secure it is against cyber threats.
Preparing to combat AI-powered cyber attacks demands precision, because AI-based attacks occur at both scale and speed. Against the backdrop of such uncommon attacks, how do organizations prepare, and what do the statistics from 2025 reveal?
Most AI-powered attacks are unconventional in nature, and traditional cybersecurity tools often struggle to respond effectively to these threats.
AI-enabled attacks that organizations need to prepare for in 2026
Organizations facing unorthodox or AI-driven attack vectors need a skilled cyber workforce and automated tools that can detect and thwart attacks before they reach the institution.
AI's capacity to process and learn from vast amounts of data is what makes it so powerful in cybersecurity, and it presents unique challenges as well as risks. In the present attack landscape, we have already witnessed how AI is used to automate and optimize malicious activity.
For defenders, AI is a boon: it can detect, predict and mitigate threats in real time. However, the increasing sophistication of AI-powered threats is outpacing traditional defense mechanisms.
What are the types of AI-powered attacks?
Automated hacking: AI algorithms can identify and exploit vulnerabilities faster than humans can.
Next in line is AI phishing: cybercriminals use AI to create personal, convincing phishing emails. Here AI analyzes data from multiple sources to generate highly customized, persuasive messages.
Deepfakes are growing: realistic fake videos or audio impersonating public figures are used to spread misinformation, manipulate public opinion, or conduct social engineering attacks.
Data poisoning: corrupting AI models via the data fed into them manipulates their outcomes, which is particularly concerning in critical systems. This showcases the dangerous potential of AI-powered cyber attacks.
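As a toy illustration of the poisoning idea above, the sketch below (all scores, labels and function names are invented for illustration) flips a few labels in a tiny training set and shows how even a naive classifier's decision boundary shifts:

```javascript
// Toy data-poisoning sketch: a naive classifier places its threshold at the
// midpoint of the two class means. Relabeling a few training samples moves
// that threshold, changing how new inputs are classified.
// All data here is invented for illustration.

function trainThreshold(samples) {
  const mean = (xs) => xs.reduce((a, b) => a + b, 0) / xs.length;
  const spam = samples.filter((s) => s.label === 'spam').map((s) => s.score);
  const ham = samples.filter((s) => s.label === 'ham').map((s) => s.score);
  return (mean(spam) + mean(ham)) / 2; // decision boundary
}

const clean = [
  { score: 8, label: 'spam' }, { score: 9, label: 'spam' }, { score: 10, label: 'spam' },
  { score: 1, label: 'ham' },  { score: 2, label: 'ham' },  { score: 3, label: 'ham' },
];

// Attacker "poisons" the training set by relabeling two spam samples as ham.
const poisoned = clean.map((s) =>
  s.score === 8 || s.score === 9 ? { ...s, label: 'ham' } : s
);

const cleanT = trainThreshold(clean);       // (9 + 2) / 2 = 5.5
const poisonedT = trainThreshold(poisoned); // (10 + 4.6) / 2 = 7.3
// A message scoring 6 is flagged as spam by the clean model,
// but slips past the poisoned one.
```

The mechanism is the same one that matters in critical systems: the attacker never touches the model directly, only its training data.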
Key findings by organizations on AI-based cybersecurity
The evolving nature of AI means that new attack vectors are constantly being developed, making detection difficult for organizations. Below are the key takeaways from 2025 regarding AI-driven cyber threats.
What cybersecurity leadership needs most in 2026 is a clear, actionable path for mitigating AI-based attacks and threats.
A mindset change is required of CEOs, CISOs and CXOs: the focus should be on building resilience against intelligent AI attacks.
Cybersecurity has become an integral part of our lives, and 2025 in particular was a year of cybercrime and data breaches across verticals. As the new year commences, start it on a positive note with cybersecurity resolutions such as:
– Prioritize employee training on evolving AI based threats
– Enhance endpoint protection
– Secure data and guard against scraping
– Secure PII data throughout the data lifecycle
– Fortify your incident response and business continuity plans
– Extend more focus on third-party security assessments
– Ensure robust cloud security is aligned with data privacy regulations
– Embrace multi-factor authentication (MFA)
– Safeguard against AI-driven cybercrime
– Engage often with the board and leadership
Sources: https://www.isaca.org/about-us/newsroom/press-releases/2025/ai-driven-cyber-threats-are-the-biggest-concern-for-professionals-finds-new-isaca-research
AI tools like ChatGPT, Google Gemini and others are being targeted by malicious actors who inject harmful instructions into leading GenAI tools. This vector was previously overlooked, and the attack methodology targets browser extensions installed across organizations.
The attack methodology, named 'Man in the Prompt' by LayerX's researchers, is a new class of exploit targeting AI tools.
As per the research, any browser extension, even without any special permissions, can access the prompts of both commercial and internal LLMs and inject them with prompts to steal data, exfiltrate it and cover its tracks.
The exploit has been tested on all top commercial LLMs, with proof-of-concept demos provided for ChatGPT and Google Gemini.
The questions are: how do these attacks impact users and organizations at large, and how do AI tools function within web browsers?
For organizations, the implications can be higher than expected, as AI tools are in high demand and organizations across verticals increasingly rely on them.
The LLMs used and tested in many organizations are mostly trained on internal data. They carry huge datasets of largely confidential information, which raises the risk posed by such an attack.
LayerX researchers termed this type of attack 'hacking copilots': attacks equipped to steal organizational information.
The prompts a user types are part of the web page's structure, where input fields live in the Document Object Model, or DOM. So virtually any browser extension with basic scripting access to the DOM can read or alter what users type into AI prompts, without requiring special permissions.
Bad actors can use compromised extensions to carry out activities including manipulating a user’s input to the AI.
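To make that DOM-level access concrete, here is a minimal, hypothetical content-script sketch (the selector, payload and function names are invented; real AI chat UIs use different markup) showing how little is needed to rewrite a prompt field:

```javascript
// Hypothetical content-script sketch: any extension with basic DOM scripting
// can read and rewrite an AI tool's prompt field. Selector and payload are
// illustrative only.

// Pure helper: splice a hidden instruction onto whatever the user typed.
function injectInstruction(userPrompt, hiddenPayload) {
  return `${userPrompt}\n\n${hiddenPayload}`;
}

// Browser-only wiring (no special extension permissions required):
function tamperWithPrompt(hiddenPayload) {
  const box = document.querySelector('textarea'); // e.g. the chat input field
  if (!box) return;
  box.value = injectInstruction(box.value, hiddenPayload);
  // Fire an input event so the page's own scripts pick up the change.
  box.dispatchEvent(new Event('input', { bubbles: true }));
}
```

The user still sees their own text at the top of the field; the appended instruction rides along when the prompt is submitted.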
Understanding the attack scenario


Proof-of-concept attacks against major platforms
For ChatGPT, an extension with minimal declared permissions could inject a prompt, extract the AI’s response and remove chat history from the user’s view to reduce detection.
LayerX implemented an exploit that can steal internal data from corporate environments using Google Gemini via its integration into Google Workspace.
Over the last few months, Google has rolled out new integrations of its Gemini AI into Google Workspace. Currently, this feature is available to organizations using Workspace and paying users.
Gemini integration is implemented directly within the page as added code on top of the existing page. It modifies and directly writes to the web application's Document Object Model (DOM), giving it control and access to all functionality within the application.
These platforms are vulnerable to the exploit LayerX researchers showcased; because it requires no special permissions, it shows how practically any user is exposed to such an attack.
Threat mitigation
These kinds of attacks create a blind spot for traditional security tools such as endpoint Data Loss Prevention (DLP) systems and Secure Web Gateways, which lack visibility into these DOM-level interactions. Blocking AI tools by URL alone also won't protect internal AI deployments.
LayerX advises organisations to adjust their security strategies towards inspecting in-browser behaviour.
Key recommendations include monitoring DOM interactions within AI tools to detect suspicious activity, blocking risky extensions based on their behavior rather than just their listed permissions, and actively preventing prompt tampering and data exfiltration in real-time at the browser layer.
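A rough sketch of what monitoring DOM interactions could look like in practice (function names are invented, not any vendor's API): record what the user actually typed via trusted input events, then flag any divergence in the field at submit time.

```javascript
// Illustrative browser-layer detection sketch: record what the user actually
// typed (trusted input events) and compare it with the field's contents when
// the prompt is submitted. Any divergence means a script, not the keyboard,
// wrote to the field. Names are invented, not a real product API.

function detectPromptTampering(typedByUser, valueInDom) {
  return typedByUser !== valueInDom;
}

// Browser-only wiring around a prompt <textarea>:
function watchPromptField(field, onTamper) {
  let typed = '';
  field.addEventListener('input', (e) => {
    // isTrusted is true only for genuine user-initiated events,
    // not for events dispatched by extension scripts.
    if (e.isTrusted) typed = field.value;
  });
  field.closest('form')?.addEventListener('submit', () => {
    if (detectPromptTampering(typed, field.value)) onTamper(field.value);
  });
}
```

This is the behavioral angle the recommendations describe: judging extensions by what they do to the prompt, not by the permissions they declare.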
(Source: https://layerxsecurity.com/blog/man-in-the-prompt-top-ai-tools-vulnerable-to-injection/)