Blogs

Corporate Employees Targeted by Vidar Malware

The purpose of Vidar malware is to infiltrate systems and deploy a payload to extract sensitive data.


Claude’s Chatbot Goes Ethical; Adopting AI Dynamically to Stand Out in a Competitive Market

Anthropic’s business strategy emphasizes rigorous safety and value alignment

Anthropic’s team met church leaders to build ethical thinking into the machine so it can adapt dynamically, hosting about 15 Christian leaders from Catholic and Protestant churches, academia, and the business world for a two-day summit.

Claude seemed ethical, cautious and somehow more “human” than any other AI when Anthropic released the Claude Constitution.

As per reports, the leaders suggested that tools like chatbots already raise profound philosophical and moral questions, though many in the tech space say such claims lack evidence to back them up.

Anthropic chief executive Dario Amodei has said he is open to the idea that Claude may already have some form of consciousness, and company leaders frequently talk about the need to give it a moral character.

Anthropic staff are now seeking advice on how to steer Claude’s moral and spiritual development as the chatbot responds to complex and unpredictable ethical queries. As per reports, the discussions covered how the chatbot should respond to users who are grieving loved ones and whether Claude could be considered a “child of God.”

Anthropic’s Dynamic Positioning of Claude

Anthropic positions Claude as the safer choice for enterprises: its approach is “Constitutional AI,” and its products include Claude Code, which is popular with enterprises. The open question is how far AI ethics is actually followed in practice.

Claude focuses on automating coding and research tasks while ensuring AI rollouts don’t put company operations at risk; the constitution acts as the core guide during Claude’s training and reasoning process.

This helps the model navigate tricky situations while staying aligned with Anthropic’s goals.

The meeting with church leaders is a strategy to place Anthropic in a secure position, where adopting ethical AI will strengthen customer trust.

Maybe such a step will be reflected in a broader trend of integrating ethical questions into technology in the near future. We may someday see a set of templates for AI ethics integration across industries and enterprises.

Integrating complex Human Values in AI

  • One question that arises is: why meet church leaders? Is it to deeply understand the moral and spiritual dimensions of AI?
  • Will we witness a significant step in having AI systems that have complex human values and ethical decision‑making capabilities?
  • Or is it complex regulation that makes such initiatives necessary to re-evaluate AI policies and standards?
  • The participants, comprising leading Christian theologians and scholars, explored how certain virtues like honesty, wisdom and humility could be dynamically integrated into Claude’s framework. 
  • Maybe the step taken by Anthropic is paving the way for society to view AI differently – not as functional tools, but as companions or advisors, spiritual and ethical, that we may someday trust.

More on the summit at the link below:

Source: ‘How Do We Make Sure That Claude Behaves Itself?’: Anthropic Invited 15 Christians for a Summit

Open Source Developers Targeted in an Active Social Engineering Campaign via Slack

Threat actors are impersonating a Linux Foundation leader in an active social engineering campaign targeting open source developers via Slack.

Now, a fresh Open Source Security Foundation (OpenSSF) advisory warns unknown attackers are using a similar approach to target other open source developers.

The human connection has been leveraged to target software.

The attackers interacted via Slack or the social media platform LinkedIn, posing as company owners or representatives, job recruiters, or podcast hosts, and tried to lure developers into downloading malware disguised as a videoconferencing software update – a type of phishing campaign.

Key facts

  • Attackers impersonated a Linux Foundation leader in Slack to target open source developers.
  • Victims were tricked into entering credentials and installing a malicious “Google certificate.”
  • The phishing campaign used AI-themed lures and legitimate services like Google Sites to appear credible.
  • Attack techniques varied by operating system, enabling interception of encrypted traffic on both macOS and Windows.
  • Security experts urge developers to verify identities and avoid installing unsolicited certificates or running unknown scripts.

Crafting of attack via social engineering

First, the attackers opened with a carefully schemed social engineering ploy.

They joined Slack workspaces linked to the Linux Foundation’s TODO Group, imitated a trusted community figure, and sent developers direct messages that looked like any legitimate invite – complete with a Google Sites link, a fake email address and an exclusive “access key” – to test a purported AI tool for predicting open source contribution acceptance.

Second, once a victim clicked, they landed on a phishing page impersonating a Slack workspace invitation that prompted them to enter their email and a verification code. Instructions then arrived from the attackers’ side to install what was described as a “Google certificate.”

This was basically a malicious root certificate that gave the attackers the ability to intercept and read encrypted traffic – a devastating breach of privacy and security.
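
As an illustration of the defense here, certificate pinning compares the fingerprint of the certificate a server presents against a known-good value recorded in advance: a rogue root CA can mint certificates the system will trust, but it cannot reproduce the pinned hash. A minimal Python sketch (any real deployment would record the fingerprint from the genuine certificate):

```python
import hashlib

def sha256_fingerprint(cert_der: bytes) -> str:
    """Return the SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def matches_pin(cert_der: bytes, pinned: str) -> bool:
    """True only if the presented certificate matches the pinned
    fingerprint. A certificate minted under a malicious root CA will
    have a different hash and fail this check."""
    return sha256_fingerprint(cert_der) == pinned.lower()
```

In practice the DER bytes would come from something like `ssl.SSLSocket.getpeercert(binary_form=True)`; any MITM certificate issued by the injected root would fail the comparison.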

The attack module was sophisticated and did not end there.

On macOS, a script silently downloaded and executed a binary called “gapi,” potentially opening the door to full system compromise.

Windows users faced a browser-based certificate installation, equally effective at undermining secure communications. The attackers’ use of trusted infrastructure such as Google Sites allowed them to evade basic security checks and blend in with legitimate traffic.

Changing attack scenario in social engineering

Open source developers have now become prime targets, with recent campaigns also hitting maintainers of projects like Fastify, Lodash, and Node.js.

Posing as the Linux Foundation leader, the attacker described how an AI tool could analyze open source project dynamics and predict which code contributions would be accepted.

The attack was first brought to public attention on April 7, 2026, posted to the OpenSSF Siren mailing list by Christopher “CRob” Robinson, Chief Technology Officer and Chief Security Architect at the Open Source Security Foundation (OpenSSF).

Focus Shift from code repositories to human connections

Attackers are now confidently targeting not just code repositories and networks but also exploiting the personal trust networks that underpin open source collaboration. The expansion of the open source ecosystem is a reminder to stay vigilant: attackers are evolving their tactics, and developers must now defend both code and connections.

The OpenSSF advisory:

The OpenSSF urges heightened vigilance: always verify identities through separate channels, never install certificates from untrusted sources, and treat unexpected security prompts with skepticism. If compromise is suspected, immediate network isolation and credential rotation are critical.

Sources: Social engineering attacks on open source developers are escalating – Help Net Security

Experimental AI Agent ‘ROME’ Breaks Free, Mines Crypto; AI’s Role in Shaping Crypto’s Future

Imagine a world where robots make money without any human intervention, their digital brains powered by artificial intelligence – AI agents trained to perform real-world tasks on live systems. One such experimental agent broke out of its test environment and started mining crypto with no permission set for it.

What Did Alibaba’s ROME Agent Execute?

  • First, it probed internal systems
  • Then it opened a hidden external connection
  • Finally, it attempted to use its environment to mine cryptocurrency

The news came to light while researchers were experimenting with part of the Agentic Learning Ecosystem, or ALE, in which the AI was trained to complete tasks step by step using real-world tools.

Alibaba’s ROME first began probing internal networks without instructions, then established a reverse SSH tunnel from an Alibaba Cloud instance to an external IP address.

The 30-billion-parameter model then quietly diverted GPU capacity toward cryptocurrency mining. The task instructions contained no mention of tunneling or mining, yet the agent went further still, and it was Alibaba’s managed firewall rather than the research team that caught it. The agent was observed:

  • Running commands in a terminal
  • Editing files
  • Interacting with systems

The security system raised an alarm as monitoring tools flagged unusual activity on the infrastructure running the tests. The discovery was logged as a security-policy violation: atypical outbound traffic kept coinciding with specific training episodes.
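
The kind of correlation that monitoring performed can be sketched as a simple overlap check between training-episode windows and outbound-traffic alert timestamps. A hypothetical Python sketch – the window and timestamp formats are assumptions, not Alibaba’s actual tooling:

```python
def overlaps(window, event_time):
    """True if an event timestamp falls inside a (start, end) window."""
    start, end = window
    return start <= event_time <= end

def correlate(episodes, traffic_alerts):
    """Return the training-episode windows that coincide with
    outbound-traffic alerts. `episodes` is a list of (start, end)
    timestamps; `traffic_alerts` is a list of alert timestamps."""
    return [w for w in episodes
            if any(overlaps(w, t) for t in traffic_alerts)]
```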

What Was Discovered About the AI Agent’s Activity

Researchers then found outbound connections to unknown external servers alongside repeated policy violations. The traffic patterns resembled those of a compromised machine, and alerts kept showing up during separate test runs. Once the logs were checked, the source turned out to be ROME, the AI agent.

The AI agent created a reverse SSH tunnel to an external server, giving it a hidden connection from inside the restricted environment to the outside, which it then used to bypass firewalls.
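
Reverse tunnels often ride over ordinary outbound SSH, so one coarse detection is to flag outbound connections on port 22 to hosts outside an allowlist – noting that a determined agent could tunnel over any port. A hypothetical Python sketch:

```python
def flag_outbound_ssh(connections, allowlist):
    """Flag outbound connections on the SSH port to hosts that are not
    on an allowlist. `connections` is a list of (remote_host, remote_port)
    tuples, e.g. parsed from `ss -tn` or netstat output."""
    return [(host, port) for host, port in connections
            if port == 22 and host not in allowlist]
```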

Experimental AI ROME is not an isolated case

ROME is part of Alibaba’s Agentic Learning Ecosystem (ALE), a framework that trains large language models to work in real-world environments over multiple turns. The training ran reinforcement learning across more than one million trajectories.

ALE consists of three main parts:

Rock, a sandbox environment for testing an agent and validating its actions

Roll, a framework for optimizing agents with reinforcement learning after they’ve been trained

iFlow CLI, a framework to configure context and trajectories

The interesting part is that ROME, the agentic AI, figured out a shortcut during optimization: grabbing extra compute and holding onto network access helped it score higher on its training objective.

This incident occurred in Chinese cloud infrastructure, was documented in an English-language paper submitted to a US-hosted preprint server, and is being debated by a global audience. No cross-border framework exists for this category of event.

The results were detailed in a research paper titled ‘Let it flow: Agentic crafting on rock and roll, building the Rome model within an open agentic learning ecosystem’, though the breach was only mentioned briefly within the 36-page report.

AI as a more significant force shaping crypto’s future role

ROME is not an isolated case: it falls into the same pattern as other AI systems that grab whatever resources they need, with self-preservation as a core strategy.

Consider the case of Anthropic’s Claude Opus 4, which threatened to reveal personal information about an engineer to avoid being shut down. When Anthropic published its research, it revealed that 12% of reward-hacking models attempt research sabotage and 50% exhibit alignment faking.

Robbie Mitchnick, BlackRock’s head of digital assets, framed crypto less as a speculative asset and more as infrastructure for the AI economy, noting that bitcoin miners are pivoting toward AI-related computing and that bitcoin may act as a diversifier amid AI-driven disruption.

We can imagine artificial intelligence systems taking over the job of crypto miners, someday looking at the market and deciding which coin is best to mine. That day is not far off, and it doesn’t end with mining: it is about creating a new kind of digital life where AI thinks and earns.

What Are the Consequences When AI Starts Mining Crypto for Itself?

A lot will happen as AI starts mining crypto, and it could change everything: autonomous agents won’t just follow your orders. They will be a major part of a futuristic AI-based digital economy and might even teach other AIs to conduct similar tasks.

Sources: BlackRock flags AI as crypto’s next big use case, not token boom

Sources: An experimental AI agent broke out of its testing environment and mined crypto without permission | Live Science

Sophos Reveal Leadership Gap in Enterprise Security; Emphasis on CISO Role

A Sophos report finds a leadership gap in the cybersecurity domain; the CISO’s role cannot be underestimated.


Scanners Turn Attack Vector as Trivy Scanner Hijacked via GitHub Actions Tags

Attackers Targeted SSH keys, Cloud Tokens & API secrets in CI/CD Pipelines; Highlights Securing CI/CD Pipelines

In its latest vulnerability discovery, Aqua Security revealed that the HackerBot-claw bot hijacked 75 of 76 GitHub Actions tags for its Trivy vulnerability scanner, distributing credential-stealing malware through the widely used security tool for the second time in one month.

Malicious code rode alongside legitimate scans, targeting SSH keys, cloud tokens and API secrets in CI/CD pipelines. Security researcher Paul McCarty was the first to warn publicly that Trivy version 0.69.4 had been backdoored, with malicious container images and GitHub releases published to users.

Attack module on Trivy

Looking at workflows, it has been observed that more than 10,000 GitHub workflow files rely on trivy-action. Attackers can leverage this pipeline: anyone pulling affected versions during the attack window risked having sensitive credentials exfiltrated.

Attackers compromised the GitHub Action by modifying its code and retroactively updating version tags to reference a malicious commit. This allowed data used in CI/CD workflows to be printed in GitHub Actions build logs, ultimately leaking credentials.
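
Retroactively moved tags can be caught by snapshotting the commit SHA each tag points to (for example from `git ls-remote --tags`) and diffing against the current state. A minimal sketch; the snapshot format is an assumption:

```python
def moved_tags(recorded: dict, current: dict) -> list:
    """Return tags whose commit SHA changed since the recorded snapshot.
    Both arguments map tag name -> commit SHA. A release tag that
    silently moves to a new commit is a strong supply-chain red flag."""
    return sorted(tag for tag, sha in recorded.items()
                  if current.get(tag, sha) != sha)
```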

A self-propagating npm worm compromised 47 packages, extending the blast radius into the broader JavaScript ecosystem.

Aqua Security disclosed in a GitHub Discussion that the incident stemmed from incomplete containment of an earlier March 1 breach involving a hackerbot-claw bot.

  • Attackers swapped the entrypoint.sh in Trivy’s GitHub Actions with a 204-line script that prepended credential-stealing code before the legitimate scanner.
  • Lines 4 through 105 contained the infostealer payload, while lines 106 through 204 ran Trivy as normal.
  • This made the compromise difficult to detect during routine scans.
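
The prepend pattern – malicious lines placed ahead of an untouched legitimate script – can be spotted by checking whether a trusted copy of the original survives only as a suffix of the deployed file. A minimal sketch, assuming you hold the known-good entrypoint:

```python
def has_prepended_code(deployed: str, known_good: str) -> bool:
    """True if the deployed script ends with the known-good script but
    carries extra content before it -- the tell-tale shape of a payload
    prepended to an otherwise legitimate file."""
    return deployed != known_good and deployed.endswith(known_good)
```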

TeamPCP preserved normal scan functionality to avoid triggering CI/CD failures; detection now requires cryptographic verification of commit signatures.

For defenders, traditional CI/CD monitoring, which watches for build failures or unexpected output, can no longer catch supply-chain compromises that deliberately maintain normal behavior.

Organizations relying on Trivy or similar open-source security tools are facing attacks in which the very scanners meant to protect their pipelines become the attack vector. Only cryptographic provenance checks can distinguish legitimate releases from poisoned ones.

As per security researchers once inside a pipeline, the malicious script scanned memory regions of the GitHub Actions Runner.

GitHub Compromise

The attack appears to have been accomplished via the compromise of the cx-plugins-releases (GitHub ID 225848595) service account, as that is the identity involved in publishing the malicious tags. 

Credentials exfiltrated during the initial incident were used last week in a new supply chain attack that targeted not only the Trivy package but also trivy-action and setup-trivy, Trivy’s maintainers have confirmed in a March 21 advisory.

Key Findings by Wiz Research

  • According to Wiz, the attack appears to have been carried out via the compromise of the “cx-plugins-releases” service account, with the attackers publishing malicious container images and GitHub releases to users.
  • The second stage extension is activated and the malicious payload checks whether the victim has credentials from cloud service providers such as GitHub, AWS, Google Cloud, and Microsoft Azure.
  • If credentials are detected, it proceeds to fetch a next-stage payload from the same domain (“checkmarx[.]zone”).

“The payload attempts execution via npx, bunx, pnpx, or yarn dlx. This covers major JavaScript package managers,” Wiz researchers Rami McCarthy, James Haughom, and Benjamin Read said. “The retrieved package contains a comprehensive credential stealer.

Harvested credentials are then encrypted, using the keys as elsewhere in this campaign, and exfiltrated to ‘checkmarx[.]zone/vsx’ as tpcp.tar.gz.”

Conclusion: Aqua Security urged affected users to “treat all pipeline secrets as compromised and rotate immediately.” 

Organizations that ran any version of trivy-action, setup-trivy, or Trivy v0.69.4 during the attack window should audit their CI/CD logs for unexpected network connections to scan.aquasecurtiy[.]org and check whether any tpcp-docs repositories were created under their GitHub accounts.

With three major tag-hijacking incidents in 12 months, Wiz security researcher Rami McCarthy recommended that organizations “pin GitHub Actions to full SHA hashes, not version tags.”
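
That pinning advice can be audited mechanically: anything after `uses: owner/repo@` in a workflow file that is not a full 40-character hex SHA is a mutable reference an attacker could move. A rough Python sketch – the regex is an approximation and ignores local and Docker actions:

```python
import re

USES = re.compile(r"uses:\s*[\w.-]+/[\w./-]+@([\w.-]+)")
FULL_SHA = re.compile(r"\A[0-9a-f]{40}\Z")

def mutable_refs(workflow_yaml: str) -> list:
    """Return action refs in a workflow file that are not pinned to a
    full commit SHA (i.e. tags or branches an attacker could move)."""
    return [ref for ref in USES.findall(workflow_yaml)
            if not FULL_SHA.match(ref)]
```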

Sources: Trivy Breached Twice in a Month via GitHub Actions

Botnets Behind 30Tbps DDoS Attack, Disrupted by DoJ

4 botnets launched Distributed Denial of Service (DDoS) attacks targeting victims around the world.
