Security by design

Google Chrome Patches Three High-Severity Security Flaws, Highlighting Browser Security

A Google Chrome emergency security update, tracked as CVE-2026-2441, highlights the importance of browser security.


AI Surge in Cybersecurity: Redefining Threat and Defense, Reshaping Software Development and Security

AI has become a game changer in enterprise cybersecurity strategy, reshaping both threat and defense. Embracing generative AI for a robust defensive system empowers organizations to analyze vast amounts of data, while secure software development, built on the principle of ‘security by design’, remains key to enterprise security.

In 2024-2025, we have witnessed how mainstream enterprise deployment of AI has changed strategic cybersecurity requirements, creating stronger defense mechanisms around enterprise security, redefining the threat landscape, and shaping software development.

AI is changing the way we look at products, acting as a risk multiplier. How are organizations balancing innovation with protection?

AI can crack commonly used passwords within minutes, which is frightening because it puts more power in the hands of hackers; on the other side, AI can also improve password security, a boon. The dark web is already selling tools such as FraudGPT and WormGPT.

In organizational cybersecurity strategy, AI is now used to tackle threats and strengthen cyber defense. At the same time, AI has the capability to accelerate the speed of cyberattacks.

So what are leaders deciding when evaluating AI-based products? They favor products that give a practical, actionable outlook and can be embedded in delivery workflows.

Strategically, this means evolving away from rigid, checkbox-based compliance toward dynamic, adaptive security models that reflect how modern teams really build software—especially in AI-accelerated environments.

Statistics show that 2025 witnessed the following AI-based cyberattacks: 16% of all breaches in 2025 involved attackers using AI (IBM); among those AI-driven attacks, 37% used phishing and 35% used deepfakes (IBM). 63% of breached organizations had no AI governance policy or were still developing one, highlighting the governance gap around AI adoption (IBM).

OpenText has released its survey, and the report shows that AI is rapidly changing the threat landscape for organizations, which are navigating a high-stakes balancing act to enable innovation while managing risk.

Here are the key findings:

  • Top AI-related concerns among respondents include data leakage (29%), AI-enabled attacks (27%), and deepfakes (16%).
  • 95% of respondents are confident in their ability to recover from a ransomware attack, but only 15% of those attacked fully recovered their data.
  • 88% allow employees to use GenAI tools, yet less than half (48%) have a formal AI use policy.
  • Enterprises lead AI governance, with 52% having a formal AI policy in place compared to 43% of SMBs.
  • 52% report increased phishing or ransomware due to AI; 44% have seen deepfake-style impersonation attempts.

Surge in AI Threats via Sophisticated Attacks

One of the reasons cited by threat researchers is that organizations are embracing GenAI and allowing employees to use generative AI tools, yet fewer than 50% have a formal AI-use or data-privacy policy in place, the report noted.

Added to this, hackers are finding innovative ways to use AI to trick users and bypass traditional defense mechanisms.

AI tools are now being used to create convincing phishing emails, fake websites, and even deepfake videos, as well as to inject malicious code, giving leverage to cybercriminals.

In the last few months, we witnessed ransomware attacks around the world surge and grow quite complex in nature, with third-party service providers and software supply chains as prime targets, for example the Qantas airline breach and the M&S data breach that hit the UK’s top retail brand.

While Qantas did not confirm to Information Age whether AI voice deepfakes were used in the breach, the cybercrime group experts believe may be linked to the hack, dubbed ‘Scattered Spider’, has a track record of using voice-based phishing (or ‘vishing’) in its attacks. This points to AI being used and to a sharp surge in AI-based cyberattacks.

AI for Cyber Defense in Organizational Cybersecurity Strategy

It is not only hackers who benefit: for organizations, AI is a game changer, as it detects attacks at a faster pace and reduces mean time to detection.

The findings of this survey reinforce that protecting against ransomware now depends not just on internal defenses, but also on how effectively organizations, partners, and technology providers collaborate to close security gaps before they are exploited.

Here are key pointers for making pragmatic, strategic choices, an approach that starts with embracing security by design across the development life cycle:

  • Continuously embed security in developer workflows by automating scanning, policy enforcement, and anomaly detection in the tools developers already use.
  • Cybersecurity AI tools are better at identifying patterns and anomalies in large datasets, including vulnerabilities; teams must prioritize and contextualize these findings when developing products.
  • Suppose there is an attack that the security tools fail to detect; this is why continuous testing is mandatory.
  • Developers can favor simple solutions, pragmatic security patterns, and transparency in architecture; this is how trust is built with clients.
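The first bullet, automated scanning in developer workflows, can start as small as a pre-commit secret scan. Here is a minimal illustrative sketch; the rule set below is a deliberately tiny, hypothetical sample, and real scanners ship far larger pattern libraries:

```python
import re

# Hypothetical sample patterns; real tools maintain far more extensive rules.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_token": re.compile(
        r"(?i)\b(?:api[_-]?key|token|secret)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def scan_text(text: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) for every suspected secret in `text`."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings
```

Wired into a pre-commit hook or CI step, a non-empty result would fail the build, which is exactly the kind of automated, developer-workflow enforcement the bullet describes.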

A few important practices developers should keep in focus are to sponsor bug bounties, publish advisories using standards like the Common Security Advisory Framework (CSAF), and provide context on severity and exploitability.
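CSAF advisories are machine-readable JSON documents. The sketch below is a heavily simplified shape built as a Python dict: the field names follow the OASIS CSAF 2.0 schema, but it omits many parts the full schema requires (e.g. a revision history), and the publisher, advisory ID, and CVE here are placeholders:

```python
# Minimal CSAF-style advisory skeleton (illustrative only; not schema-complete).
advisory = {
    "document": {
        "category": "csaf_security_advisory",
        "csaf_version": "2.0",
        "title": "Example: remote code execution in example-product",
        "publisher": {
            "category": "vendor",
            "name": "Example Corp PSIRT",   # hypothetical publisher
            "namespace": "https://example.com",
        },
        "tracking": {
            "id": "EXAMPLE-2025-0001",      # hypothetical advisory ID
            "status": "final",
            "version": "1.0.0",
            "initial_release_date": "2025-01-15T00:00:00Z",
            "current_release_date": "2025-01-15T00:00:00Z",
        },
    },
    # The severity/exploitability context the text recommends providing:
    "vulnerabilities": [
        {
            "cve": "CVE-2025-00000",        # placeholder CVE
            "notes": [{"category": "summary",
                       "text": "Crafted input allows remote code execution."}],
        }
    ],
}

def has_required_fields(doc: dict) -> bool:
    """Shallow check that the advisory carries the core CSAF document fields."""
    d = doc.get("document", {})
    return all(k in d for k in
               ("category", "csaf_version", "title", "publisher", "tracking"))
```

In practice, advisories should be validated against the official CSAF JSON schema rather than a shallow check like this.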

Threat researchers suggest that organizations building products accept all vulnerability reports, investigate them, and fix the issues. Any critically important advisory should be used for root-cause analysis to improve tools, training, and threat models. Developers are also encouraged to give feedback on external tools to help them evolve, understanding that no software can ever be perfect.

Offerings from IntruceptLabs are exactly what you need to develop organizational cyber defense capabilities.

Intru360

Intru360 gives security analysts and SOC managers a clear view across the organization, helping them fully understand the extent and context of an attack. It also simplifies workflows by automatically handling alerts, allowing for faster detection of both known and unknown threats.

Identify the latest threats without having to purchase, implement, and oversee several solutions, or find, hire, and manage a team of security analysts. Unify the latest threat intelligence and security technologies to prioritize the threats that pose the greatest risk to your company.

Here are some features we offer:

  • Over 400 third-party and cloud integrations.
  • More than 1,100 preconfigured correlation rules.
  • Ready-to-use threat analytics, threat intelligence service feeds, and prioritization based on risk.
  • Prebuilt playbooks and automated response capabilities.

(Source: https://www.mckinsey.com/about-us/new-at-mckinsey-blog/ai-is-the-greatest-threat-and-defense-in-cybersecurity-today)

(Source: https://investors.opentext.com/press-releases/press-releases-details/2025/OpenText-Cybersecurity-2025-Global-Ransomware-Survey-Rising-Confidence-Meets-a-Growing-AI-Threat/default.aspx)

Report Says ChatGPT Atlas Is Vulnerable for Users: Understanding OpenAI's Agent Mode

Atlas’s autofill and form interaction capabilities present potential attack points

As per reports, the ChatGPT Atlas browser is vulnerable to attacks and carries inherent weaknesses in comparison to other browsers like Google Chrome. LayerX, who discovered the weakness in ChatGPT Atlas, described how threat actors can inject malicious instructions into ChatGPT’s ‘memory’ and execute remote code, by way of cross-site request forgery (CSRF) requests.

These exploits can allow attackers to infect systems with malicious code, grant themselves access privileges, or deploy malware. Understanding ‘Agent Mode’ is most important; it is the core of Atlas and has no equivalent in traditional browsers. Where a traditional browser requires users to manually move from site to site, agent mode allows ChatGPT to semi-autonomously operate your browser.
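CSRF works because the browser automatically attaches the victim's credentials (such as session cookies) to a request that an attacker's page forged. The classic mitigation is a per-session token the attacking site cannot read, compared in constant time on the server. A minimal sketch with hypothetical helper names, illustrating the general technique rather than anything Atlas-specific:

```python
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    """Store a fresh random token in the server-side session and return it
    so the legitimate page can embed it in its forms."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def is_valid_csrf(session: dict, submitted_token: str) -> bool:
    """A forged cross-site request cannot know the session's token, so it
    fails this check; compare_digest avoids timing side channels."""
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted_token)
```

The LayerX finding suggests that state-changing requests to the agent's memory were accepted without a defense of this kind.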

For example, when a user turns to ChatGPT for work-related purposes, the malicious code planted earlier will be invoked automatically to execute remote code, allowing attackers to gain control of the user's account. This may include their browser, code they are writing, or systems they have access to.

A 90% Higher Vulnerability Rate: A Warning for Users

The vulnerability rate is 90% higher than in other browsers: whenever attackers wish, they can push or inject malicious instructions into ChatGPT Atlas’s ‘memory’ and later execute them via remote code.

There is a more basic warning as well: Atlas does not include meaningful anti-phishing protections, meaning that users of this browser are “up to 90% more vulnerable to phishing attacks than users of traditional browsers,” LayerX says.

Key pointers from the research:

  • ChatGPT Atlas is not resilient to phishing attacks.
  • Out of 103 in-the-wild attacks that LayerX tested, Atlas allowed 97 to go through, a whopping 94.2% failure rate.
  • Compared with Edge (which stopped 53% of attacks in LayerX’s test) and Chrome (which stopped 47%), ChatGPT Atlas was able to stop only 5.8% of malicious web pages.

Unlike traditional web browsers where you manually navigate the internet, agent mode allows ChatGPT to operate your browser semi-autonomously.

The technology works by giving ChatGPT access to your browsing context. It can see every open tab, interact with forms, click buttons and navigate between pages just as you would.

The Importance of Security by Design for Web Browsing, and How AI Is Intricately Involved

Sandboxing, a security-by-design approach that keeps websites isolated from one another and prevents malicious code from accessing data in other tabs, is crucial to modern web architecture. The modern web depends on this separation; if it is not implemented, the impact can be severe.

But in Atlas, the AI agent isn't malicious code; it's a trusted user with permission to see and act across all sites, so browser isolation no longer applies. Here AI is not directly the threat: what happens is that the AI follows a hostile command hidden in the environment. This opens doors to security and privacy risks many users are ill-equipped to handle.
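The isolation described above rests on the same-origin rule: two pages may share data only if their scheme, host, and port all match. A minimal sketch of that comparison, using only the standard library and purely for illustration:

```python
from urllib.parse import urlsplit

# Default ports used when a URL omits one.
DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url: str) -> tuple[str, str, int]:
    """Reduce a URL to its (scheme, host, port) origin triple."""
    parts = urlsplit(url)
    port = parts.port or DEFAULT_PORTS.get(parts.scheme, -1)
    return (parts.scheme, parts.hostname or "", port)

def same_origin(a: str, b: str) -> bool:
    """Browsers allow two pages to share data only when all three
    origin components match exactly."""
    return origin(a) == origin(b)
```

An agent acting as a trusted user across every tab effectively bypasses this check, which is why a hostile instruction on one site can reach data belonging to another.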

Let me give an example: if you search for air tickets and visit a site, ChatGPT Atlas will prompt and try to book a ticket; if you search for movies in a nearby theater, it will explore options and attempt to book a reservation. Atlas’s autofill and form-interaction capabilities present potential attack points, especially when AI is making rapid decisions about information entry and submission.

This is possible when ChatGPT is granted access to your browsing context, allowing it to view open tabs, interact with forms, and navigate between pages like humans do.

Is User Security Getting Compromised?

The above example warns users that any AI-powered browser may be convenient but is not without security risks, and those using ChatGPT Atlas should be extremely cautious before making choices. Do not share browsing history with any AI mode; instead, adopt incognito mode. Malicious code can influence the AI’s behavior while browsing, and this can happen across multiple tabs.

In the case of Atlas, the situation is more vulnerable, as Atlas provides inputs the way a human would, with the AI in effect executing harmful commands hidden in the environment.

Will the AI agent or OpenAI make browsing safe for users, and what does safe browsing even mean?

(Source: https://www.bbc.com/news/articles/c20pdy1exxvo)

Jaguar Land Rover Data Hack Reveals the Significance of Security and Privacy by Design

Jaguar Land Rover announced it was hit by a cyberattack in August that severely disrupted its production and retail activities. Cybercriminals stole data held by the carmaker, it has said, as its factories in the UK and abroad face prolonged closure. This massive data hack reveals that every stakeholder in the supply chain must embed security and privacy by design.

Principle of security by design

In the ever-evolving automotive industry, modern vehicles are increasingly defined by software: up to 100 million lines of code, a number that keeps growing, running more applications than ever before.

The more code and software there is, the more lucrative it becomes for attackers to target those systems; and where security flaws exist, it is a haven for cybercriminals, with data privacy leaks an easy outcome.

Are best practices for security-by-design principles and software development enough to address the emerging risks to automotive systems and other systems within the vehicle?

According to the BBC, three plants were affected: Solihull, Halewood, and Wolverhampton. The cyberattack also forced the company to disconnect some systems, which led to factories in China, Slovakia, and India being shut down and workers being instructed to stay at home.

According to the company, JLR's suppliers and retailers are also affected, some operating without the computer systems and databases normally used for sourcing spare parts for garages or registering vehicles.

Scattered Spider Group Behind the Cyberattack

As per reports, the notorious Scattered Spider hacker group is credited with the attack on JLR. The threat actor has also been linked to recent attacks against major UK retailers, as well as several other industries worldwide.

This is the second cyberattack to hit JLR this year. In March, the Hellcat ransomware group claimed the theft of hundreds of gigabytes of data from the carmaker.

In July, we witnessed how the Scattered Spider group targeted the aviation and retail sectors:

https://intruceptlabs.com/2025/07/scattered-spider-group-target-aviation-sector-third-party-providers-to-vendors-are-at-risk-solutions-that-will-improve-security-posture/

Addressing Cybersecurity Challenges in Automotive Security

Organizations addressing such cyber incidents in the near future will require dedication that extends to all levels, including the data layer, connection layer, authentication layer, and more.

If organizations are proactive in establishing comprehensive protective measures and ensuring reliable systems are in place, they will ultimately create a safer environment and make the entire ecosystem more resilient against cyber disruptions.

Cybersecurity challenges in automotive innovation

The integration of advanced technology has brought the automotive industry face-to-face with complex cybersecurity challenges. Vehicle technology, now deeply intertwined with software, exposes both consumers and manufacturers to varied threats.

The challenge for manufacturers is finding the right balance between advancing connected features and securing those very connections against evolving threats.

Transformation in the Automotive Industry While Navigating Cautiously Amid Cyberattacks

The year 2025 is transformative for the automotive industry, which is witnessing many groundbreaking technological advancements laced with challenges in cybersecurity and supply chain resilience.

Navigating cyber challenges

For the automotive industry as a whole, the opportunities are huge, but they will take concrete shape only when paired with robust architecture, zero-trust security frameworks, and transparency. There is a need for a more collaborative mindset and approach among manufacturers, suppliers, and technology leaders, of which cybersecurity is now an important part.

IntruceptLabs offers Mirage Cloak

Mirage Cloak, our deception technology, offers various deception methods to detect and stop threats before they cause damage.

These methods include adding decoys to the network, deploying breadcrumbs on current enterprise assets, using baits as tripwires on endpoints, and setting up lures with intentionally misconfigured or vulnerable services or applications. The flexible framework also lets customers add new deception methods as needed.
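A network decoy in its simplest form is a listener on a port no legitimate client should ever contact, so any connection is a high-fidelity alert. The toy sketch below illustrates that idea only; it is not Mirage Cloak's implementation, and the function names are hypothetical:

```python
import socket
import threading

def bind_decoy(host: str, port: int = 0) -> socket.socket:
    """Bind a listening socket; port 0 lets the OS pick a free port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()
    return srv

def watch_decoy(srv: socket.socket, alerts: list, stop: threading.Event) -> None:
    """Record the source IP of every connection; since no legitimate
    traffic should reach a decoy, each entry is an alert."""
    srv.settimeout(0.5)  # wake periodically so `stop` can be honored
    while not stop.is_set():
        try:
            conn, addr = srv.accept()
        except socket.timeout:
            continue
        alerts.append(addr[0])
        conn.close()
    srv.close()
```

A real deployment would mimic a plausible service banner and forward alerts to the SOC pipeline rather than a list, but the detection principle is the same.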

Sources: https://www.theguardian.com/business/2025/sep/10/jaguar-land-rover-says-cyber-attack-has-affected-some-data
