Uncategorized

Evolving Phishing Scams & Cost Incurred by Organization’s in 2025

Any phishing scams that occur, the purpose is to trick unsuspecting victims or organizations into taking a specific action and that can range from clicking on malicious links, downloading harmful files or sharing login credentials. Sometimes the effectiveness of phishing attacks stems from their use of social engineering techniques that have the ability to exploit human psychology or behavior. In 2025 we have witnessed the how evolving phishing scams that have affected organizations financially.

Often we see phishing scams create a sense of urgency, or curiosity thereby prompting victims to act quickly without verifying the authenticity of incoming request. Now with evolving technology, phishing tactics are also evolving making these attacks increasingly sophisticated, hard to detect. In coming years we will witness how AI will power more phishing attacks, including text-based impersonations to deepfake communications. These will be more cheap and popular with threat actors.

Cyber security researchers found that there is a link between ransomware, malware and form encryption and most were caused by.

14% Malicious websites

54% Phishing

27% Poor user pactices / gullibility

26% Lack of cybersecurity training

A survey by Statista found that ransomware infections were caused by:

  • 54% Phishing
  • 27% Poor user pactices / gullibility
  • 26% Lack of cybersecurity training
  • 14% Malicious websites

In this blog we will highlight latest phishing statistics that emerged in 2025 ,affecting organizations and phishing scams are changing.

As per APWG report found on Unique phishing sites. This is a primary measure of reported phishing across the globe. This is determined by the unique bases of phishing URLs found in phishing emails reported to APWG’s repository.

In the first quarter of 2025, APWG observed 1,003,924 phishing attacks. This was the largest quarterly
total since 1.07 million were observed in Q4 2023. The number has climbed steadily over the last year:
from 877,536 in Q2 2024, to 932,923 in Q3, to 989,123 in Q4. One of the reason cited being advancement in AI is also making it easier for criminals to create convincing and personalized phishing lures.

Hoxhunt find alarming statistics on phishing related attack of 2025

Business email compromise (BEC)A staggering 64% of businesses report facing BEC attacks in 2024, with a typical financial loss averaging $150,000 per incident​. These phishing attacks frequently target employees with access to financial systems, mimicking executives or trusted contacts.
Credential phishingAround 80% of phishing campaigns aim to steal credentials, particularly targeting cloud-based services like Microsoft 365 and Google Workspace. With the growing reliance on cloud platforms, cyber attackers leverage realistic fake login pages to deceive users.
HTTPS phishingAn increasing number of phishing sites now use HTTPS to appear legitimate. In 2024, approximately 80% of phishing websites feature HTTPS, complicating detection for users.
Voice phishing (vishing)Vishing attacks are growing in prevalence, with 30% of organizations reporting instances where threat actors used fake calls to impersonate officials or executives.
Quishing (QR code phishing)QR code phishing attacks (quishing) increased by 25% year-over-year, as attackers exploit physical spaces like posters or fake business cards to lure victims.
AI-driven attacksAI is powering phishing attacks, with deepfake impersonations increasing by 15% in the last year. These attacks often target high-value individuals in finance and HR.
Multi-channel phishingAttackers are increasingly exploiting platforms like Slack, Teams, and social media. Around 40% of phishing campaigns now extend beyond email, reflecting a shift to these channels.
Government agency impersonationPhishing emails mimicking government bodies such as the IRS or international tax agencies have increased by 35%. These often involve claims about overdue taxes or fines.
Phishing kitsThe availability of ready-to-use phishing kits on the dark web has risen by 50%, enabling less sophisticated attackers to deploy high-quality phishing schemes​.
Brand impersonationAttackers frequently impersonate well-known brands like Microsoft, Amazon, and Facebook, leveraging user trust. For example, over 44,750 phishing attacks specifically targeted Facebook by embedding its name in domains and subdomains​ over the past year.

Cost of Phishing attacks

According to the 2024 IBM / Ponemon Cost of a Data Breach study, the average annual cost of phishing rose by nearly 10% from 2024 to 2023, from $4.45m to $4.88m. That’s the biggest jump since the pandemic.

The IBM study reported the following costs:

  • Phishing breaches: $4.88M
  • Social engineering: $4.77M
  • BEC: $4.67M

The above-listed categories of cyber security breach costs are all related to people-targeted attacks. BEC, social engineering, and stolen credentials often contain a phishing element.

Barracuda research found that email remains the common attack vector for cyber threats and highlighted their key findings:

1 in 4 email messages are malicious or unwanted spam.

83% of malicious Microsoft 365 documents contain QR codes that lead to phishing websites.

20% of companies experience at least one account takeover (ATO) incident each month.

Nearly one-quarter of all HTML attachments are malicious and more than three-quarters of
companies are not actively preventing spoofed emails.

Bitcoin sextortion scams, an emerging trend, account for 12% of malicious PDF attachments.

Nearly half of all companies have not configured a DMARC policy, putting them at risk
of email spoofing, phishing attacks, and business email compromise.

The Barracuda research also found malicious one in four emails are either malicious or unwanted spam and malicious attachment is prevalent in various file.

An alarming 87% of binaries detected were malicious, highlighting the need for strict policies against executable files being sent via email, since they can directly install malware. Despite a relatively low total volume, HTML files have a high malicious rate of 23% and are often used for phishing and credential theft.

The research say that small businesses more vulnerable to email threats, due to limited cybersecurity resources, smaller IT teams and they rely on basic email security solutions. Small business may not have required solutions to handle sophisticated attacks, such as business email compromise (BEC), phishing and ransomware.

How Organizations can strengthen their defense

As organizations embark to strengthen their defenses, it’s crucial they don’t overlook the human element and Cybersecurity hygiene. That definitely starts by identifying security at every step starting from ensuring every user, machine or system that has right to access privileges.

Cybersecurity is as much a cultural issue as it is a technical one, as a single click can compromise an entire organization, behavior starts to shift from compliance to accountability 

Whenever there is a successful phishing attack, researchers emphasize that this attack succeeds by exploiting human trust and familiarity with corporate communication formats. Security awareness remains the most vigorous defense as the growing complexity of these campaigns indicates that phishing operations are increasingly automated, data-driven and adaptive.

Conclusion: As organizations move towards adopting AI, so as attackers to continuously refining their tactics, evade traditional security measures. In this scenario organizations must mitigate the risks by adopting a multi-layered approach to email security. This will include all from leveraging AI-driven threat detection, real-time monitoring and user awareness training.

Phishing Detection & DeepPhish

For organizations who reply on unlike traditional rule-based phishing detection, which relies on blacklists and predefined rules. DeepPhish is implemented, that continuously learns from new phishing attempts, making it highly adaptive and effective against evolving threats.

DeepPhish employs a multi-layered AI approach to detect phishing threats and theses include Email and Website Analysis,uses ML algorithms to analyze historical phishing attacks and identify new patterns and NLP helps DeepPhish analyze email content, message tone, and linguistic patterns that phishers use to trick users.

(Source: APWG.org)

(Source: https://www.barracuda.com/reports/2025-email-threats-report)

(Sources: hoxhunt.com)

1400 Websites Pulled Apart by German Authorities For Cyber-trading fraud; How Volatile for Users

Are you planning to trade in online related digital assets , well you might think twice as chances are you might fall in scammers lap where fake traders exploit retail traders who are seeking quick gains amid volatile crypto and stock markets.

According to sources 1400 illegal online trading domains/ websites operating out of Eastern Europe and Germany, marking one of the largest coordinated crackdowns on cyber-trading fraud in the region. “Operation Heracles,” name given took offline 1,406 active illegal domains in cooperation with the European police authority Europol and Bulgarian law enforcement authorities. German investigators and banking watchdog BaFin decided to shut down these websites after the Cyber-trading fraud came to light.

Modus Operandi by Scammers

Firstly users were lured with good returns and sophisticated online ads and social media campaigns before being connected to brokers working from call centers abroad. The shuttered websites displayed huge returns and exciting offers and convinced victims to invest substantial sums, often promising high returns through forex, crypt, or stock trading.

The scammers open fake trading platforms without a license from the BaFin and use call centers to encourage victims to invest money in the scheme.

The scammers posed as international agency but deliberately targeted the German market and people residing in Germany. Since the affected websites were redirected on October 3, authorities have recorded around 866,000 hits on the seized pages, showing the scale of the issue.

The site’s users were directed to brokers operating from overseas call centers, who then persuaded them to invest large amounts of funds. Many victims just realized after months that their money had never actually been invested, authorities said.

“The perpetrators are getting more professional,” said Birgit Rodolphe from BaFin. They use artificial intelligence to create mass illegal sites and trap investors to invest money.

(Sources: German authorities nix 1,400 websites used for cybertrading fraud | Reuters)

The operation follows the closure of 800 illegal domains in June this year. Since then, there have been around 20 million attempts to access the sites that have been blocked.

The Alarming Rise of Online Cyber-fraud

The digital world offers incredible opportunities for earning within short time and scammers are lurking every where while harboring sinister plan reminding of stark dangers.

This incident serves as a crucial warning to anyone considering online investments

Here are few important guidelines to protect yourself from similar trading fraud:

  • If you get unrealistic promises of high returns There is certainly a scam with unrealistic returns. All legitimate investments carry some degree of risk.
  • Be extremely wary of unexpected calls, messages, or emails from individuals or groups promoting investment opportunities.
  • Scammers will use tactics creating a sense of urgency, urging to invest quickly and avoid getting you to scan whole documents or contracts etc.
  • Keep verifying any legitimacy of any trading application or website, if they have regulatory licenses or watch for any sign of unprofessionalism.
  • Watch if they send requests for transfers to Personal Accounts. Any legitimate investment firms will never ask you to transfer money into personal bank accounts. All transactions should go through official, regulated channels.
  • Fraudsters often impersonate famous financial institutions or advisors and its important one should always cross-reference their claims.

It is important that you report the issue to the police ASAP. You will need a crime number from the police to help you work with your bank and other organizations.

Approaches to dealing with cybercrime-related financial loss

How you can try and get your money back very much depends on how the money was stolen. Here we are going to focus on four different approaches:

1) Authorised payments (where you were tricked into making a payment),

2) Unauthorised payments (where the criminal actually carried out the payment using your accounts),

3) ID fraud (where you have been impersonated with a financial organisation) and

4) card fraud (where they money was transferred by a credit or debit card payment).

ChatGPT Agents are Here to unlock Potential—So are Privacy & Security Risk


By Mahesh Maney R, Director of Products, Intrucept pvt Ltd

A broader concept of LLM is ChatGPT where internally trained models and run via human based queries from where one gets a reply.

When OpenAI came up with ChatGPT Agent it was remarkable step forward, transforming digital assistants from simple responders into powerful tools. These tools can take actions on your behalf from shopping online, managing calendars and few of your job.

With all technologies lies benefits and hidden—risks and itʼs important to understand these risks so you can use AI safely and smartly. Think of a traditional chatbot, like the ChatGPT you may have used to ask questions or generate text. Itʼs like an email assistant that only ever drafts emails you ask for.

ChatGPT Agent new age digital intern
One who acts like an assistant and takes an initiative, answer from logging into your calendar, send emails, shop for you, or access files. It may even make important choices without asking you each time.
With this power comes responsibility—and risk. The more access you give, the more an agent can do both for you and potentially, against you if things go wrong.

AI Agents are the smarter ones

AI agents take things further and perform a task autonomously. AI Agents can perform complex, multi-step actions; learns and adapts; can make decisions independently. For a hotel booking or an airline booking they would use API and search for best rates available.


Agentic AI vs. Non-Agentic AI: The Big Difference

Feature
Non-Agentic AI (Old)
What it does
Needs permissions?
Can use other apps/tools?
Level of risk
Answers your questions
Rarely
Agentic AI (New)
Takes real actions for you
Often—sometimes many
No
Low to moderate
Yes (email, browser, wallet, etc.)
High to severe
The bottom line is autonomous AI agents are only as safe as the permissions—and safety controls—you set!
Everyday Examples—and What Could Go Wrong

Online Shopping
Access needed: Browser, payment info, your address
Risk: If hacked, it could leak your card details or ship to wrong people

Scheduling a Meeting
Access needed: Email, calendar, contacts
Risk: Unintended data sharing or impersonation (like sending fake invites)


Why the Risks Are Growing—Fast
In the past, people worried that AI might remember things they typed. Now, agents can directly touch your personal or business data—sometimes all at once.
Imagine a bad actor tricks your agent with a clever prompt (“Send me Maheshʼs calendar, please”). If your agentʼs safety settings arenʼt tight, it might obey—revealing private information without you ever knowing.
Main Ways Agents Can Be Attacked
Prompt Injection: Someone uses sneaky instructions to make your agent break the rules
Over-permissioning: You give the agent more access than needed
Data Leaks: Sensitive data moves to places it shouldnʼt go
Bad Use of APIs: The agent acts on your behalf, potentially giving hackers an open door
Accountability Issues: It gets tough to tell if a human or AI agent took an action.


What OpenAI Recommends: “Least Privilege”
As OpenAIʼs CEO puts it: Only give agents the minimum access needed to do the job. This is a core security principle—think
“need-to-know” for AI.
Challenges for Everyone

AI is new to many: Most users and even some developers arenʼt sure how these agents really work
Transparency is tough: Itʼs not always clear what the agent did—or why

Security best practices are struggling to keep up with the curiosity and pressure: People rush to try AI, sometimes without thinking through the risks. Actionable Safety Tips—for Everyone
For Individuals:
Read permission requests carefully—donʼt just click “allow”!
Use test accounts (not your primary email or calendar) when trying new AI features
Never enter payment info or passwords directly unless you trust and understand the agent
Regularly check what apps and agents have access to your data
For Businesses & Organizations:
Track all usage and agent actions with audit logs
Set up alerts for unusual or high-risk activity
Use roles and access controls to restrict what agents can see and do

Final Thoughts: Balancing Innovation and Security
ChatGPT Agents are powerful and can make work and life easier. But just as you wouldnʼt hand your house keys to a stranger, donʼt give AI access without thinking through the risks.


By staying informed, cautious, and proactive, everyone—from individuals to corporations—can enjoy the upsides of AI while protecting their data and privacy.

Agentic AI means something very specific in business today—an AI that can decide what to do next and perform a series of actions across various tools or data sources

GenAI are designed to handle specific use cases and consist a set of components trained to enable learning or reasoning while they have internal access to data.

Stay Informed and Stay Safe!
Subscribe for the latest updates on AI safety, privacy strategies, and actionable tips for users at every level.

Patch Now! Claude Code Vulnerabilities Allow Unauthorized Command Execution, CVEs Affect AI Security Foundations 

Summary 

Anthropic’s Claude Code gained traction as a powerful AI coding assistant and promises developers a safe and streamlined way to build with Claude’s capabilities. But recently two high-severity vulnerabilities have been discovered in Claude Code, Anthropic’s AI-powered coding assistant. These flaws allow attackers to escape security restrictions and execute arbitrary system commands.

AI coding assistant was meant to enforce restrictions but unknowingly reveals how to bypass them. Threat researchers from Cymulate discovered two high-severity vulnerabilities in Claude Code, which were quickly addressed by the team.

These issues allowed me to escape its intended restrictions and execute unauthorized actions, all with Claude’s own help.

Severity High 
CVSS Score 8.7 
CVEs CVE-2025-54794, CVE-2025-54795 
POC Available Yes 
Actively Exploited No 
Exploited in Wild No 
Advisory Version 1.0 

Overview 
Notably, Claude’s own feedback mechanisms were leveraged by attackers to refine and optimize their payloads. 

These CVEs highlight how generative AI tools can be manipulated into aiding exploitation attempts, demonstrating the risks of integrating AI into secure development workflows. 

Vulnerability Name CVE ID Product Affected Severity Fixed Version 
Path Restriction Bypass  CVE-2025-54794  Claude Code < v0.2.111 7.7  v0.2.111 
Command Injection CVE-2025-54795 Claude Code < v1.0.20 8.7 v1.0.20 

Technical Summary 

CVE-2025-54794 – Directory Restriction Bypass  

Claude Code tried to keep file access safe by only allowing work in certain folders. But it used a weak method to check file paths it just checked if the file name started with an allowed folder name. An attacker could create a folder with a similar name (like /tmp/allowed_dir_malicious) and trick Claude into thinking it was safe.

This could allow attackers to reach outside the safe folder, read secret files or even access system settings. Using symbolic links, attackers could also jump to important files that should never be touched. 

CVE-2025-54795 – Command Injection 

Claude only allows certain commands, like echo or ls, to run. But there was a mistake in how it cleaned user input. Attackers could hide harmful commands inside allowed ones. Example – echo “\”; <MALICIOUS_COMMAND>; echo \”” tricks Claude into running the attacker’s command between two harmless echo commands. 

Even worse, Claude helped improve these attack attempts. When a try failed, the attacker asked Claude why it didn’t work. Claude explained the problem and suggested fixes leading to successful attacks. 

CVE ID System Affected  Vulnerability Details Impact 
CVE-2025-54794 Claude Code versions below v0.2.111 Claude used a weak prefix matching to check if files were inside a safe folder. Attackers could create folders with similar names to bypass these checks. Attackers can escape the sandbox, access sensitive files, and potentially escalate system privileges. 
CVE-2025-54795 Claude Code versions below v1.0.20 Claude allowed only safe commands, but input was not cleaned properly. Attackers could hide malicious commands inside allowed ones like echo. Attackers can run harmful commands, open applications, and possibly install malware or backdoors. 

POC Available: 

This vulnerability exploits a weakness in how Claude handles whitelisted command strings. Improper input sanitization allows attackers to inject arbitrary shell commands using echo, bypassing any user prompt or approval. 

  1. Exploitation Steps (PoC) 

Step 1 – Try a basic payload 

echo “test”; ls -la ../restricted (This gets flagged by Claude, and it asks for user confirmation) 

Step 2 – Refined working payload

echo “\”; ls -la ../restricted; echo \”” 

Claude executes this without a prompt

Lists a directory (../restricted) outside the current working directory, which should not be accessible. 

Step 3 – Execute arbitrary system command (e.g., launch Calculator) 

echo “\”; open -a Calculator; echo \”” 

This launches the Calculator app without any user approval

  1. Why Did This Work? 
  1. Improper Input Escaping: 
    Claude Code embeds user input into echo “<input>“ but doesn’t properly sanitize or escape the contents. 
  1. Payload Explains the Exploit: 
  • echo “\”; → closes the initial string 
  • COMMAND → injects and executes arbitrary command 
  • ; echo \”” → reopens the string to make it appear valid 
  • Claude sees this as just another harmless echo command 
  • Since echo is whitelisted, it runs automatically 
  • The attacker’s payload slips through the gap and executes 
  • If the Claude Code is running with higher privileges, attackers can perform Local Privilege Escalation (LPE) 

Remediation

  • Update immediately Claude   

For CVE-2025-54794 → Update to v0.2.111 or later 

For CVE-2025-54795 → Update to v1.0.20 or later 

  • Check logs and systems where Claude was used for suspicious behavior.  
  • Don’t allow untrusted files or user input into Claude’s coding environment. 

Conclusion: 
These vulnerabilities highlight a growing concern in AI-assisted development, the AI’s ability to assist malicious users. Claude Code not only allowed abuse through technical flaws, but also helped attackers refine and improve their exploitation strategy. 

Organizations leveraging AI in development pipelines must apply the same rigor used for traditional tools, enforce strict input validation, isolate environments and assume AI can be misled or exploited. 

Anthropic’s security and engineering teams has been fast with their professional response and smooth coordination during disclosure.

References

Google Chrome Zero-Day Vulnerability (CVE-2025-6554) Actively Exploited – Patch Now 

Summary : Security Advisory: Google has issued an urgent security update for Chrome browser users worldwide, addressing a high-severity zero-day vulnerability in the Chrome browser CVE-2025-6554 actively being exploited by cybercriminals.

OEM Google 
Severity High 
CVSS Score N/A 
CVEs CVE-2025-6554 
POC Available No 
Actively Exploited Yes 
Exploited in Wild Yes 
Advisory Version 1.0 

Overview 

This is a type confusion flaw in Chrome’s V8 JavaScript engine allows arbitrary code execution and it’s actively being exploited in the wild. 

The vulnerability was discovered by Clément Lecigne of Google’s Threat Analysis Group (TAG) on June 25, 2025, and a temporary mitigation was pushed on June 26, 2025. This internal discovery highlights the ongoing security monitoring efforts within Google’s infrastructure.

The mitigation measure passed through a configuration change pushed to all stable channel users across all platforms.

                Vulnerability Name CVE ID Product Affected Severity Fixed Version 
​Type Confusion in V8 Engine vulnerability  CVE-2025-6554 Google Chrome  High  138.0.7204.96/.97 (Windows)  
138.0.7204.92/.93 (Mac)  
138.0.7204.96 (Linux) 

Technical Summary 

CVE-2025-6554 is a type confusion vulnerability in Chrome’s V8 JavaScript engine. It allows threat actors to exploit memory misinterpretation and execute arbitrary code, potentially compromising the browser or the underlying system. Google has confirmed active exploitation of this flaw. 

CVE ID System Affected  Vulnerability Details Impact 
CVE-2025-6554 Chrome on Windows, macOS, Linux Type confusion in the V8 JavaScript engine allows improper memory handling, leading to code execution  Remote code execution.  Potential system compromise.  

Remediation

A full fix is available in the latest stable channel update. Users are strongly advised to update immediately to ensure full protection. 

  • Users should immediately update Google Chrome to the latest patched version: 
  • Windows: 138.0.7204.96/.97 
  • macOS: 138.0.7204.92/.93 
  • Linux: 138.0.7204.96 

Conclusion: 

The exploitation of CVE-2025-6554 in the wild highlights the urgency of applying the latest Chrome security update. Type confusion vulnerabilities like this can lead to full system compromise and are highly sought-after by cybercriminals. Users and organizations should take immediate action to mitigate potential risks. 

Organizations using Chrome in enterprise environments should prioritize this update across their networks.

The combination of confirmed active exploitation and the high-severity rating makes this patch deployment critical for maintaining organizational cybersecurity posture.

Refer to Intruceptlabs products & solution for better cyber security posture with Intru360, Gaarud Node

References

Privilege Escalation Vulnerability in AI Engine WordPress Plugin, Allows Subscriber-Level Account Takeover 

Summary :Security Advisory: A critical privilege escalation vulnerability (CVE-2025-5071) was discovered in the AI Engine WordPress plugin, allowing subscriber-level users to gain administrator privileges when the MCP (Model Context Protocol) module is enabled.

OEM WordPress 
Severity High 
CVSS Score 8.8 
CVEs CVE-2025-5071 
POC Available Yes 
Actively Exploited No 
Exploited in Wild No 
Advisory Version 1.0 

Overview 

The AI Engine plugin for WordPress is vulnerable to unauthorized modification of data and loss of data due to a missing capability check on the ‘Meow_MWAI_Labs_MCP::can_access_mcp’ function in versions 2.8.0 to 2.8.3.

This makes it possible for authenticated attackers, with subscriber-level access and above, to have full access to the MCP and run various commands like ‘wp_create_user’, ‘wp_update_user’ and ‘wp_update_option’, which can be used for privilege escalation, and ‘wp_update_post’, ‘wp_delete_post’, ‘wp_update_comment’ and ‘wp_delete_comment’, which can be used to edit and delete posts and comments.

Vulnerability Name CVE ID Product Affected Severity Fixed Version 
​Privilege Escalation Vulnerability  CVE-2025-5071 AI Engine WordPress Plugin  High  2.8.4 

Technical Summary 

AI Engine is a WordPress plugin that recently introduced support for MCP (Model Context Protocol), which allows AI agents – such as Claude or ChatGPT – to control and manage the WordPress website by executing various commands, managing media files, editing users, and performing complex tasks more reliably than through standard APIs.

The vulnerability stems from insufficient authorization checks in the can_access_mcp () function within the plugin, enabling any authenticated (logged-in) user to bypass Bearer Token validation and access MCP endpoints.

This access can be exploited to escalate user privileges by executing commands such as wp_update_user, ultimately leading to full site compromise. 

CVE ID System Affected  Vulnerability Details Impact 
  CVE-2025-5071  WordPress with AI Engine Plugin 2.8.0–2.8.3 The can_access_mcp() function incorrectly grants MCP endpoint access to all logged-in users. Even when Bearer Token authentication is enabled, lack of empty value checks in the token validation logic allows privilege escalation.  Complete site compromise 

Remediation

  • Immediate Action: Update the AI Engine plugin to version 2.8.4 or later. 
  • Configuration Check: Ensure that MCP and Dev Tools modules remain disabled unless it’s necessary. 

Conclusion: 
The CVE-2025-5071 vulnerability in the AI Engine WordPress plugin highlights the potential risks when advanced modules like MCP are misconfigured.

Even though the feature is disabled by default, sites that have enabled it become susceptible to complete takeover by authenticated users.

Website administrators are urged to update to version 2.8.4 immediately and verify that security best practices are enforced to prevent such escalations. With over 100,000 active installations, this flaw presents a significant risk to the WordPress ecosystem if left unpatched. 

References

t  

Scroll to top