[Guest Diary] New Malware Libraries Mean New Signatures, (Fri, May 15th)

This post was originally published on this site



This is a Guest Diary by Gokul Prema Thangavel, an ISC intern as part of the SANS.edu Bachelor's Degree Program [1].

Introduction

The SHA-256 a8460f446be540410004b1a8db4083773fa46f7fe76fa84219c93daa1669f8f2 is one of the most-observed Outlaw / Shellbot artifacts on the public internet. VirusTotal first ingested it on 5 July 2018 [2]. It is the SHA-256 of the authorized_keys file written by the campaign whose persistence comment string is mdrfckr, a campaign documented in handler diaries, vendor reports, and independent honeypot research for nearly eight years.

This diary does not announce a new campaign. The file hash, the public key, the mdrfckr comment string, the chattr -ia .ssh defensive disarm, the chpasswd account hijack, and the /tmp/secure.sh competitor cleanup are all well-described in prior reporting [3][4][5][6][7]. What this diary does add is one new data point in an existing lineage: between 14 and 21 April 2026, my DShield sensor [8] observed the mdrfckr campaign using a third libssh client version that has not, to my knowledge, been published as part of this campaign’s hassh chronology. The botnet’s authorized_keys file is unchanged across four years. Its SSH client library is on its third documented major version. Detection rules pinned to the older hasshes will miss the current generation.

The point of this diary is to put the prior reports side by side with my April 2026 observation, document the new hassh, and offer detection-engineering guidance for handlers maintaining mdrfckr-aware rules.

What is already known

I want to be careful to credit the prior work this diary builds on, because the new contribution is small relative to it.

The mdrfckr persistence key was first associated with the Outlaw / Dota family by Trend Micro in 2018 [3], with subsequent updates in 2019 and follow-up reporting from Anomali, Yoroi [9], Juniper [10], CounterCraft [11], Cybereason, and Kaspersky. The recon command sequence and the competitor-cleanup playbook are described across that body of work. None of the file or behaviour signatures discussed in this diary are novel.

In late 2022 and early 2023, the port22.dk blog [4][7] published a two-part deep dive on the campaign. Part one (data from October–November 2022) observed 12,913 unique IPs writing the mdrfckr key from a network of 10 honeypots. Crucially, the post introduced hassh-based clustering as a defender’s tool: 99.1% of the observed mdrfckr-key writes shared the hassh 51cba57125523ce4b9db67714a90bf6e, which corresponds to the SSH client banner SSH-2.0-libssh-0.6.0 / SSH-2.0-libssh-0.6.3. Part two (data from December 2022 onward) documented the campaign migrating to a second hassh f555226df1963d1d3c09daf865abdc9a, corresponding to SSH-2.0-libssh_0.9.5 / SSH-2.0-libssh_0.9.6, with ~30,000 unique IPs across the new fingerprint and a 94.5% confidence link. Part two also documented two new related command variants: chattr -ia .ssh; lockr -ia .ssh as a separate command, and lockr -ia .ssh run on its own, executed alongside the original key-write command.

In May 2023, a SANS ISC diary by Jesse La Grew [5] presented two example sessions writing the same SHA-256, captured via a cowrie-log enrichment script. One session originated from a DigitalOcean datacentre IP; the other from a VPN-fronted Tencent IP. Both sessions executed the post-December-2022 split-command variant.

In May / June 2023, Guy Bruneau’s monthly DShield diary [6] noted the same key-write playbook in honeypot data and attributed it explicitly to the Outlaw group via the original Trend Micro reporting.

That is the public chronology this observation extends.

What the April 2026 sensor saw

Between 2026-04-14 01:23:41 UTC and 2026-04-21 02:22:56 UTC, my DShield sensor logged 24 unique source IPs writing the SHA-256 a8460f446be540410004b1a8db4083773fa46f7fe76fa84219c93daa1669f8f2 to /root/.ssh/authorized_keys (and to other compromised account paths). The cluster wrote 229 authorized_keys modifications across 1,230 SSH sessions and executed 4,133 post-authentication commands.

The peak burst occurred on 19 April 2026: 20 of the 24 IPs first connected to the sensor between 06:05:19 UTC and 06:07:30 UTC, a 131-second window. The remaining four IPs appeared on neighbouring days but executed the same playbook with the same key.

The defensive-disarm and key-write command observed across every successful session is the post-December-2022 split variant documented by port22 part two:

 

Figure 1: Cowrie session capture of one cluster IP executing the post-December-2022 split variant (defensive disarm, key write, recon).

The recon, password-change, and competitor-cleanup commands match the prior published playbook exactly.

The new data point is the SSH client.

The new hassh: libssh 0.11.x

Every one of the 24 IPs in the April 2026 cluster advertised the SSH client banner SSH-2.0-libssh_0.11.1 and produced the hassh fingerprint 03a80b21afa810682a776a7d42e5e6fb.

Figure 2: Per-IP hassh fingerprint listing – all 24 cluster IPs share 03a80b21afa810682a776a7d42e5e6fb.


Figure 3: Per-IP SSH client banner listing – SSH-2.0-libssh_0.11.1 across the cluster.

This hassh does not match the hashes documented in port22 parts one and two, nor in the May 2023 ISC diary.

Reporting period   Hassh fingerprint                  Client banner                  Source
Oct–Nov 2022       51cba57125523ce4b9db67714a90bf6e   libssh-0.6.0 / libssh-0.6.3    port22 part one [4]
Dec 2022 → 2023    f555226df1963d1d3c09daf865abdc9a   libssh_0.9.5 / libssh_0.9.6    port22 part two [7]
Apr 2026           03a80b21afa810682a776a7d42e5e6fb   libssh_0.11.1                  This sensor

A hassh is a hash of the SSH client’s advertised cipher, MAC, key-exchange, and compression algorithm lists [12]. Different libssh major versions ship with different default algorithm preferences, so each new libssh version a campaign adopts produces a new hassh. The 2026 hassh 03a80b21afa810682a776a7d42e5e6fb is the third documented entry in this campaign’s libssh version walk, separated from port22’s last published value by approximately three years and one major libssh version (0.9 → 0.10 → 0.11).
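To make the mechanism concrete, here is a minimal sketch of the hassh computation as specified by the salesforce/hassh project [12]: the MD5 of the client's advertised algorithm lists, comma-joined within each list and semicolon-joined between lists, in KEX;cipher;MAC;compression order. The algorithm lists below are illustrative, not libssh's actual defaults.

```python
import hashlib

def hassh(kex_algs, enc_algs, mac_algs, cmp_algs):
    """Compute a hassh: MD5 over the client's advertised algorithm lists,
    comma-joined within each list, semicolon-joined between lists
    (KEX;ENC;MAC;CMP order, per the salesforce/hassh specification)."""
    fingerprint_input = ";".join(
        ",".join(algs) for algs in (kex_algs, enc_algs, mac_algs, cmp_algs)
    )
    return hashlib.md5(fingerprint_input.encode("ascii")).hexdigest()

# Illustrative lists (not real libssh defaults): two clients that differ
# only in cipher preference order produce different hasshes.
a = hassh(["curve25519-sha256"], ["aes128-ctr", "aes256-ctr"], ["hmac-sha2-256"], ["none"])
b = hassh(["curve25519-sha256"], ["aes256-ctr", "aes128-ctr"], ["hmac-sha2-256"], ["none"])
print(a != b)  # True: list ordering matters, so new library defaults mean new hasshes
```

This is why a library upgrade alone is enough to break a pinned hassh rule even when the campaign's behaviour is unchanged.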

I do not have a baseline of how prevalent this hassh is across the full DShield sensor population – that is the question I would most like other handlers and DShield operators to help answer. On my single sensor, this hassh accounted for 3,473 SSH log lines across the eight-day window, making it the most active SSH attacker-tooling fingerprint observed during the period.

The 24-IP burst: small confirmation of an existing observation

Twenty of the 24 cluster IPs first connected within a 131-second window. This is consistent with the coordination behaviour documented at much larger scale by port22, and does not represent a new claim. I mention it only for completeness, and because it has one practical implication for detection: per-source-IP rate limits (fail2ban, sshguard) will not trigger on this pattern because each IP performs only ~10 login attempts. Detection rules useful against this campaign should aggregate by target account rather than by source IP – ten distinct IPs attempting steam:Steam29! against the same host within five minutes is a stronger signature than any individual IP’s behaviour.
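That aggregation logic can be sketched in a few lines. The event and field names below follow cowrie's JSON log format; the ten-IP/five-minute threshold is an arbitrary illustration, not a tuned value.

```python
import json
from collections import defaultdict
from datetime import datetime, timedelta

def flag_spray_targets(log_lines, min_ips=10, window=timedelta(minutes=5)):
    """Flag credential pairs attempted from many distinct source IPs in a
    short window -- the aggregation that per-source-IP tools like fail2ban miss."""
    attempts = defaultdict(list)  # (username, password) -> [(timestamp, src_ip)]
    for line in log_lines:
        event = json.loads(line)
        if event.get("eventid") not in ("cowrie.login.failed", "cowrie.login.success"):
            continue
        ts = datetime.fromisoformat(event["timestamp"].replace("Z", "+00:00"))
        attempts[(event["username"], event["password"])].append((ts, event["src_ip"]))
    flagged = set()
    for cred, hits in attempts.items():
        hits.sort()
        for i, (start, _) in enumerate(hits):
            ips = {ip for ts, ip in hits[i:] if ts - start <= window}
            if len(ips) >= min_ips:
                flagged.add(cred)
                break
    return flagged

# Synthetic demo: ten documentation-range IPs try the same pair within ten seconds.
demo = [json.dumps({"eventid": "cowrie.login.failed",
                    "timestamp": f"2026-04-19T06:05:{s:02d}Z",
                    "username": "steam", "password": "Steam29!",
                    "src_ip": f"203.0.113.{s}"}) for s in range(10)]
print(flag_spray_targets(demo))  # {('steam', 'Steam29!')}
```

No single IP in the demo exceeds one attempt, yet the credential pair is flagged, which is exactly the inversion the burst pattern calls for.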

The cluster IPs and the credential dictionary are listed in the indicators section. None of the credential pairs are new: steam:Steam29!, postgres:q1, dev:dev5, sammy:sammy26, root:AAAaaa111, root:root000@, sysadmin:test123, test1:passwd, tester:testerpass, sammy:12345. This is the existing Outlaw target list.

Why this matters for defenders

The detection-engineering implication of the libssh version walk is straightforward: hassh-based detection rules written in 2022 or 2023 against 51cba57125523ce4b9db67714a90bf6e or f555226df1963d1d3c09daf865abdc9a will silently miss the 2026 generation of the same campaign. The SHA-256 of the authorized_keys file remains the most reliable single indicator (it has not changed in four years), but operators relying on hassh enrichment as a leading indicator – for example, alerting on hassh values before a successful authentication occurs – should add 03a80b21afa810682a776a7d42e5e6fb to their watch lists.

More broadly, the four-year libssh version walk suggests the campaign operator (or operators – the persistence model has always been consistent with shared infrastructure rather than self-propagation in the strict sense) keeps the targeting infrastructure stable while letting the underlying client library age forward. A defender writing a detection rule against this campaign should expect the hassh to change again on a roughly multi-year cadence as libssh ships new defaults, and should pin alerting to the SHA-256, the public key blob, the mdrfckr comment string, and the recon command sequence – none of which have changed since 2018 – rather than to any single hassh value.
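A minimal host-side check against those stable indicators might look like the following sketch. Only the file hash and the comment string come from the reporting above; the synthetic demo key blob is made up, so the demo triggers only the comment check.

```python
import hashlib
import os
import tempfile
from pathlib import Path

# Stable indicator from the reporting above; unchanged since 2018.
MDRFCKR_SHA256 = "a8460f446be540410004b1a8db4083773fa46f7fe76fa84219c93daa1669f8f2"

def check_authorized_keys(path):
    """Return the reasons a given authorized_keys file matches the
    campaign's stable indicators (whole-file hash, key comment string)."""
    data = Path(path).read_bytes()
    reasons = []
    if hashlib.sha256(data).hexdigest() == MDRFCKR_SHA256:
        reasons.append("file SHA-256 matches the 2018 campaign artifact")
    if b"mdrfckr" in data:
        reasons.append("key comment string 'mdrfckr' present")
    return reasons

# Demo on a synthetic file: the key blob here is invented, so only the
# comment-string indicator fires, not the whole-file hash.
with tempfile.NamedTemporaryFile("wb", delete=False) as f:
    f.write(b"ssh-rsa AAAAB3Nz...synthetic... mdrfckr\n")
result = check_authorized_keys(f.name)
os.unlink(f.name)
print(result)  # ["key comment string 'mdrfckr' present"]
```

Checking both indicators independently is deliberate: the comment check still fires if the operator ever rotates the key material but keeps the comment.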

What I am not claiming

The 24-IP April 2026 cluster is much smaller than the populations port22 worked with. I cannot meaningfully extend port22’s hassh-confidence statistics from one sensor’s eight-day window. The 99.1% / 94.5% figures published in 2022 and 2023 should not be extrapolated to the 2026 hassh from this data alone – that calculation requires a multi-sensor population study, which is exactly the kind of analysis ISC handlers and the DShield operator community are positioned to do better than any of my sensors.

Indicators

  • authorized_keys SHA-256 (unchanged since 2018): a8460f446be540410004b1a8db4083773fa46f7fe76fa84219c93daa1669f8f2
  • Public key comment string: mdrfckr
  • April 2026 hassh: 03a80b21afa810682a776a7d42e5e6fb
  • April 2026 SSH client banner: SSH-2.0-libssh_0.11.1
  • Burst window: 19 April 2026, 06:05:19 → 06:07:30 UTC
  • Credential dictionary: steam:Steam29!, postgres:q1, dev:dev5, sammy:sammy26, root:AAAaaa111, root:root000@, sysadmin:test123, test1:passwd, tester:testerpass, sammy:12345
  • 24 source IPs from the April 2026 cluster (Appendix A)

Conclusion

The mdrfckr campaign is older than many of the SSH honeypots currently watching it. Its authorized_keys file is approaching its eighth anniversary on VirusTotal and has not been rotated. Its target dictionary, recon sequence, and competitor-cleanup playbook have all remained stable across the four years that public researchers have been tracking the libssh version walk. What changes is the client.

The April 2026 hassh 03a80b21afa810682a776a7d42e5e6fb joins 51cba57125523ce4b9db67714a90bf6e and f555226df1963d1d3c09daf865abdc9a as the third documented entry in this campaign’s lineage. Detection rules pinned to either earlier hassh will miss it. I would be very interested to hear from any other DShield operator or ISC handler who has independently observed the 0.11.x hassh writing the SHA-256 above – particularly with population data that would let the community update the hassh-to-mdrfckr confidence figures published by port22 in 2022 and 2023.

Acknowledgments

Drafting assistance from Claude (Anthropic) [13]. All log review, the hassh and SHA-256 verification, the credential and IP enumeration, and the comparison against prior reporting were done from the sensor’s own logs and the cited public sources.

References

[1] https://www.sans.edu/cyber-security-programs/bachelors-degree/

[2] https://www.virustotal.com/gui/file/a8460f446be540410004b1a8db4083773fa46f7fe76fa84219c93daa1669f8f2/details

[3] Trend Micro, https://www.trendmicro.com/en/research/20/b/outlaw-updates-kit-to-kill-older-miner-versions-targets-more-systems.html

[4] port22.dk, “mdrfckrs – part one,” March 2023. https://blog.port22.dk/mdrfckrs-part-one/

[5] Jesse La Grew, “More Data Enrichment for Cowrie Logs,” SANS Internet Storm Center, 24 May 2023. https://isc.sans.edu/diary/29878

[6] Guy Bruneau, “DShield Honeypot Activity for May 2023,” SANS Internet Storm Center, 11 June 2023. https://isc.sans.edu/diary/29932

[7] port22.dk, “mdrfckrs – part two,” July 2023. https://blog.port22.dk/mdrfckrs-part-two/

[8] https://isc.sans.edu/honeypot.html

[9] Yoroi, “Outlaw is Back: A New Crypto-Botnet Targets European Organizations.” https://yoroi.company/research/outlaw-is-back-a-new-crypto-botnet-targets-european-organizations/

[10] Juniper Threat Research, “Dota3: Is your Internet of Things device moonlighting?” https://blogs.juniper.net/en-us/threat-research/dota3-is-your-internet-of-things-device-moonlighting

[11] CounterCraft, “Dota3 malware again and again.” https://www.countercraftsec.com/blog/dota3-malware-again-and-again/

[12] https://github.com/salesforce/hassh

[13] https://www.anthropic.com/claude

Appendix A: Source IPs (24)

2.59.183.94 4.210.91.174 5.99.196.202 35.210.61.208
41.128.181.199 46.253.45.10 62.193.106.227 77.105.132.10
77.237.238.1 80.102.218.187 81.57.15.243 86.110.51.47
89.46.131.162 89.116.31.97 146.59.32.130 147.45.50.81
148.113.222.4 157.173.126.206 173.249.41.171 173.249.50.59
176.147.144.172 191.101.59.252 194.104.94.20 213.225.14.165

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Simple bypass of the link preview function in Outlook Junk folder, (Thu, May 14th)


Besides serving as the place where Microsoft Outlook puts suspected spam, the Outlook Junk folder has one additional function that can be quite helpful when it comes to identifying malicious messages: any e-mail placed in this folder is stripped of all formatting, and the destinations of all links in the message become visible to the user. The following images show the same e-mail as it appears in the inbox and in the Junk folder.

Proxying the Unproxyable? Sending EXE traffic to a Proxy, (Wed, May 13th)


.. if “unproxyable” is a word that is ..

I had a recent engagement where I had to look at the network traffic generated by a Windows executable.  Unfortunately, it was all TLS, and all TLS1.3 to boot.  So from a PCAP all I got was a whole lot of “yup, that’s encrypted”, and since it was TLSv1.3 all I really had to work with was the IP addresses, not even server names in the server hello packets to help out.  And the IP addresses involved were those “500 DNS names AWS” shotgun addresses, so no help there.

What I really needed was something to take specific traffic, say traffic from an executable, and redirect that to a proxy.  If that proxy is then burp suite, then Bob’s yer Uncle, now I can look at the traffic!!  If you’d rather use fiddler or some other proxy, go for it, anything will work.

A few minutes of Googling, and I found Proxifier (https://www.proxifier.com/)

Proxifier allows you to set up rules, for instance “send traffic from abc.exe to proxy A”, “send traffic from def.exe to proxy B”, or “send everything else direct”, or any combination.  Target proxies can be HTTPS or SOCKS5.

In my case, I was looking at a client executable, and was able to follow all the API calls and data transferred, it was EXACTLY what I needed that day.

I can’t show you the client output – watching the API’s roll by was as cool as it gets though, and the proxy intercept in burp lets you “play” with individual calls if that’s what you need.  But I can certainly show you how this works, let’s use curl as our example exe. 

Let's start in proxifier.  First you need to set up your proxy(s).  In this case I'm using Burp Suite Pro running locally, so the proxy is:

Next, we’ll set up the rules:

The first rule says “anything to my own machine, send direct”.  Given how much loopback cruft happens on a typical Windows box, this rule is gold (unless that’s what you are looking for that is).

The second rule is “anything from curl.exe, send to the proxy we just defined” (or whatever your executable is).

You can have multiple of these rules doing different things.

The final rule is “everything else, send direct”

Now, let’s run a test with curl:


(and so on)

On proxifier, you see the transaction happen in real time:

The top pane shows the executable, target and so on.  It’s somewhat ephemeral: it shows the live view, goes grey after the transaction completes, then disappears after a few seconds.  The bottom pane scrolls in a more “log-like” manner. 

Over in Burp, you see all the business that most sites have as their lead page:

Which is exactly what you need, and can't get these days from a packet capture!

What else does Proxifier do?  It also spits out a configurable log file, you can configure what’s in the logs and where to send it:

 

You can set similar sensitivity on the live on-screen log.

All in all, this tool was a life-saver for me, I’ve used it for a few years now and keep coming up with things that it can bail me out of!

Got a cool use for a tool like this?  Give it a try and share your experiences in our comment form below (please keep any NDAs in mind).

Do you have a similar or better tool for this, again, by all means share in our comment form!

===============
Rob VandenBrink
rob@coherentsecurity.com


Amazon Redshift introduces AWS Graviton-based RG instances with an integrated data lake query engine


Since 2013, Amazon Redshift has given the full power of a data warehouse in the cloud, at a fraction of the on-premises cost. Every architectural generation—from dense compute to Amazon RA3 instances, from provisioned to Amazon Redshift Serverless—has made each query cheaper, faster, and more efficient than the last.

For over a decade, as data volumes have grown and analytics requirements have evolved, organizations have increasingly relied on both data warehouse tables for structured, frequently accessed data and data lakes for cost-effective storage of diverse datasets. Add AI agents to the mix and they query your data warehouse at a scale that dwarfs typical human usage, leading to spiraling operational costs.

Amazon Redshift has doubled down on its core strengths to meet the demands of any workload — whether driven by humans or AI agents. For example, in March 2026, Amazon Redshift improved the performance of business intelligence (BI) dashboards and ETL workloads by speeding up new queries by up to 7 times. This significantly improves the response times of low-latency SQL queries, such as those used in near-real-time analytics applications, BI dashboards, ETL pipelines, and autonomous, goal-seeking AI agents.

Today, we’re announcing Amazon Redshift RG instances, a new instance family powered by AWS Graviton. RG instances deliver better performance, running data warehouse workloads up to 2.2x as fast as RA3 instances at 30% lower price per vCPU. Their integrated data lake query engine lets you run SQL analytics across your data warehouse and data lake from a single engine with performance up to 2.4x as fast as RA3 for Apache Iceberg and up to 1.5x as fast as RA3 for Apache Parquet. This blend of speed, cost efficiency, and an integrated data lake query engine makes Redshift RG instances well-suited to handle the high query volumes and low-latency requirements of today’s analytics and agentic AI workloads.

You can compare new RG instances and current RA3 instances:

Current RA3 instance   Recommended RG instance   vCPU               Memory (GB)           Primary use case
ra3.xlplus             rg.xlarge                 4                  32                    Small-cluster departmental analytics
ra3.4xlarge            rg.4xlarge                12 → 16 (1.33:1)   96 → 128 (1.33:1)     Standard production workloads, medium data volumes

This approach reduces total analytics costs for customers running combined data warehouse and data lake workloads, while simplifying operations through a single system for querying both warehouse tables and Amazon Simple Storage Service (Amazon S3) data lakes. We recommend using the AWS Pricing Calculator with your specific workload patterns to estimate savings.

Getting started with Amazon Redshift RG instances
You can launch new clusters or migrate existing clusters through the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS API. The integrated data lake query engine is enabled by default.

In the Amazon Redshift console, you can choose new RG instances when you create a cluster.

You can migrate previous-generation instances to RG instances with optimal paths based on your cluster configuration to estimate costs, validate compatibility, and automate execution.

  • Elastic Resize—an in-place migration with 10–15 minutes of downtime for compatible configurations
  • Snapshot and Restore—create an RG cluster from an RA3 snapshot. This is best for customers who want to make configuration changes during the migration
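The snapshot-and-restore path could be scripted along these lines in boto3; the cluster and snapshot identifiers below are hypothetical, the parameters mirror Redshift's standard RestoreFromClusterSnapshot API, and the actual API call is left commented out so the sketch stays side-effect free.

```python
def rg_restore_request(snapshot_id, cluster_id, node_type="rg.4xlarge", nodes=2):
    """Build the parameter set for restoring an existing RA3 snapshot
    into a new RG cluster via the RestoreFromClusterSnapshot API."""
    return {
        "ClusterIdentifier": cluster_id,
        "SnapshotIdentifier": snapshot_id,
        "NodeType": node_type,
        "NumberOfNodes": nodes,
    }

# Hypothetical identifiers for illustration only.
params = rg_restore_request("ra3-prod-final-snap", "analytics-rg")
# With AWS credentials in place, the actual call would be:
# import boto3
# boto3.client("redshift").restore_from_cluster_snapshot(**params)
print(params["NodeType"])  # rg.4xlarge
```

Building the request as plain data first makes it easy to review the target node type and count before committing to the migration.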

Your external tables, schemas, and query syntax—including existing Spectrum queries—remain unchanged. There is no need to recreate external tables or modify application code. To learn more, visit the Redshift Management Guide.

Amazon Redshift now executes data lake queries on cluster nodes—the same compute that processes data warehouse workloads. As a result, Amazon Redshift Spectrum is no longer required. Data lake queries stay within your VPC boundary, use existing IAM roles, and incur zero per-terabyte scanning charges. This removes the $5/TB Spectrum scanning fees that previously added to total Redshift costs.

Now available
Amazon Redshift RG instances are now available in the following AWS Regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Asia Pacific (Hong Kong, Hyderabad, Jakarta, Malaysia, Melbourne, Mumbai, Osaka, Seoul, Singapore, Sydney, Taiwan, Tokyo), Canada (Central), Europe (Frankfurt, Ireland, Milan, London, Paris, Spain, Stockholm), Middle East (UAE), and South America (São Paulo). For Regional availability and a future roadmap, visit the AWS Capabilities by Region. For Redshift Provisioned, you can select On-Demand Instances with hourly billing and no commitments or choose Reserved Instances for cost savings. To learn more, visit the Amazon Redshift Pricing page.

Give RG instances a try in the Redshift console and send feedback to AWS re:Post for Amazon Redshift or through your usual AWS Support contacts.

Channy

Apple Patches Everything, (Mon, May 11th)


Apple today released its typical feature update across its operating systems (iOS, iPadOS, macOS, tvOS, watchOS, visionOS). With this update, Apple patched 84 different vulnerabilities. Updates are available for the "26" series of operating systems, as well as for the previous "18" version of iOS/iPadOS, and two versions back for macOS (versions 14 and 15).

AWS Weekly Roundup: Amazon Bedrock AgentCore payments, Agent Toolkit for AWS, and more (May 11, 2026)


My most exciting news of last week: Amazon Bedrock AgentCore previewed the first managed payment capabilities enabling AI agents to autonomously access and pay for APIs, MCP servers, web content, and other agents. Built in partnership with Coinbase and Stripe, it removes the undifferentiated heavy lifting of building customized systems for billing, credential management, and compliance.

You can connect a Coinbase CDP wallet or Stripe Privy wallet as a payment connection, set session-level spending limits, and your agent transacts autonomously during execution. What excites me most is what AgentCore payments can unlock—like a research agent that can pay for real-time market data on the fly, or a coding agent calling paid APIs mid-task.

To learn more, visit the blog post, dive deeper using the documentation, and get started with the AgentCore CLI.

Last week’s launches
Here are last week’s launches that caught my attention:

  • Agent Toolkit for AWS – A production-ready suite of tools and guidance, available at no additional charge, that helps AI coding agents build on AWS with fewer errors, lower token costs, and enterprise-grade security controls. The Agent Toolkit for AWS is the successor to the MCP servers, plugins, and skills available on AWS Labs. To get started, visit the quick start guide or browse the available skills and plugins on GitHub.
  • AWS MCP Server GA – You can use a managed remote Model Context Protocol (MCP) server that gives AI agents and coding assistants secure, authenticated access to all AWS services through a small, fixed set of tools. It is part of the Agent Toolkit for AWS. To learn more, visit Seb Stormacq’s blog post.
  • Amazon WorkSpaces for AI agents (Preview) – You can use AI agents to securely access and operate desktop applications through managed WorkSpaces environments. This capability allows organizations to automate everyday workflows at scale while maintaining full enterprise-grade governance and compliance. To learn more, visit Micah Walter’s blog post.
  • Amazon EC2 M8idn/M8idb and R8idn/R8idb instances – These instances are powered by custom sixth-generation Intel Xeon Scalable processors available only on AWS and the latest sixth-generation AWS Nitro cards. These instances deliver up to 43% better compute performance per vCPU compared to previous-generation instances. M8idn/R8idn instances offer up to 600 Gbps network bandwidth, and M8idb/R8idb instances deliver up to 300 Gbps EBS bandwidth.

For a full list of AWS announcements, be sure to keep an eye on the What’s New with AWS page.

Additional updates
Here are some additional news items that you might find interesting:

  • Valkey turns two – Valkey stands as proof that open, community-driven technology innovates faster, scales further, and delivers more value than any single-vendor model. Valkey has surpassed 100 million Docker pulls (up 17x year over year) and attracted more than 225 contributors who have submitted over 1,500 pull requests, roughly double the development pace of Redis over the same period. You can also use the latest Valkey 9.0 in Amazon ElastiCache.
  • Query billion-scale vectors with SQL – You can learn how to query Amazon S3 Vectors from Amazon Aurora PostgreSQL-Compatible Edition using standard SQL, and how to combine vector similarity results with relational filters in a single query, for example, finding the most semantically similar products and then filtering by price, stock status, or tenant in one SQL statement.
  • Building an end-to-end agentic SRE using AWS DevOps Agent – Learn how to configure DevOps Agent Spaces that define an investigation scope, integrating seamlessly with Amazon CloudWatch, Splunk, GitHub, and Slack. You can also learn how to trigger automated investigations via webhooks, generate mitigation plans, and hand off agent-ready specs to coding agents like Kiro for implementation.

For a full list of AWS blog posts, be sure to keep an eye on the AWS Blogs page.

Learn more about AWS, browse and join upcoming AWS-led in-person and virtual events, startup events, and developer-focused events as well as AWS Summits and AWS Community Days. Join the AWS Builder Center to connect with builders, share solutions, and access content that supports your development.

That’s all for this week. Check back next Monday for another Weekly Roundup!

Channy

Another Universal Linux Local Privilege Escalation (LPE) Vulnerability: Dirty Frag, (Fri, May 8th)


Less than two weeks after the public disclosure of the Copy Fail vulnerability (CVE-2026-31431), another local privilege escalation (LPE) vulnerability in the Linux kernel has been revealed. Referred to as "Dirty Frag," this vulnerability was discovered and reported by Hyunwoo Kim (@v4bel) [1]. In this diary, I will provide a brief background on Dirty Frag, and discuss its relationship to Copy Fail. I will then discuss how to mitigate Dirty Frag and outline recommended next steps for system owners.

An Adaptive Cyber Analytics UI for Web Honeypot Logs [Guest Diary], (Wed, May 6th)


[This is a Guest Diary by Eric Roldan, an ISC intern as part of the SANS.edu BACS program]

With the expansion of Large Language Models (LLMs), cybersecurity has seen an explosion of tools for both offensive and defensive purposes. A majority of software and cyber tools are integrating Artificial Intelligence (AI) into their applications, largely in the form of chatbots, automation through the Model Context Protocol (MCP), or prompt-and-response interfaces.

An overlooked and underestimated application of AI that is slowly emerging is the creation of bespoke user interfaces (UIs): simply put, a UI custom-built on the fly for the specific needs and data the user provides. Because these models can ingest large amounts of data, they can orchestrate the appropriate UI elements tailored to the ingested dataset.

Rather than the user having to adjust their queries or layouts for the logs that they are analyzing, the LLM will determine the proper UI elements to give to the user. This allows the user to focus on analyzing rather than tool setup.

Over days of web traffic on the DShield web honeypot, there is wide variation in the intent behind the interactions. On some days, actors may focus only on scanning and reconnaissance; other days may be heavy on credential theft or attempts to drop a web shell.

Just as a human analyst would identify patterns and then use their discretion to pick the next appropriate 'grep', 'jq', or similar pattern-matching POSIX tool, the LLM does the same.
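To make that concrete, here is a minimal sketch of the kind of grep/jq-style triage an analyst does by hand, expressed in Python. The JSON-lines field names ("url" and "sip" for source IP) and the WordPress-probe strings are assumptions for illustration, not the actual DShield log schema:

```python
import json
from collections import Counter

def top_wordpress_probes(lines, n=5):
    """Return the n source IPs most often probing WordPress paths.

    Assumes JSON-lines honeypot logs with hypothetical "url" and
    "sip" fields -- adjust to your actual log schema.
    """
    hits = Counter()
    for line in lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed lines, as an analyst's grep would
        url = entry.get("url", "")
        if "wp-login" in url or "xmlrpc.php" in url:
            hits[entry.get("sip", "unknown")] += 1
    return hits.most_common(n)
```

The point of the bespoke-UI approach is that this decision, which pattern to count and surface next, is made by the LLM instead of typed out by the analyst each day.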

Before this kind of bespoke UI existed in cyber analytics, analysts had to spend extensive time and energy figuring out what to look for and which tools to use. With LLMs able to do this heavy lifting, more analysts will be able to recognize attacks on their web servers with little to no cyber experience.

When developers have to manage feature implementations, documentation updates, meetings (which are always productive of course…), and dreaded bugs – security and active monitoring become an afterthought. To make the internet a safer place, we have to lower the barrier to entry for recognizing web attacks.

Okay enough selling you on how much potential this has, let's talk about how it actually works.

It works like this: the system reads your DShield web honeypot log file, then a Python analyzer goes through the entries and turns them into a clean summary of what happened, instead of dumping raw attacker text into the AI. That summary includes things like top IPs, top URLs, time patterns, and tags for probe/attack types such as WordPress probes, SSRF, path traversal, CGI abuse, and other recognizable patterns. Claude then looks at the cleaned summary and writes a React dashboard component that fits the shape of that day's attack activity, so the UI can change depending on whether the logs are mostly one big campaign or a mix of background internet noise.
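The summarization step described above might look roughly like this. This is a hypothetical sketch, not the repo's actual code: the field names and tag regexes are placeholders, and the real analyzer covers more probe types (SSRF, etc.):

```python
import re
from collections import Counter

# Illustrative tag patterns -- the real analyzer's rules may differ.
TAG_PATTERNS = {
    "wordpress_probe": re.compile(r"wp-login|wp-admin|xmlrpc\.php"),
    "path_traversal":  re.compile(r"\.\./"),
    "cgi_abuse":       re.compile(r"/cgi-bin/"),
}

def summarize(entries, top_n=10):
    """Reduce parsed log entries to counts and tags the LLM sees,
    rather than handing it every raw request string."""
    ips, urls, tags = Counter(), Counter(), Counter()
    for e in entries:
        ips[e.get("sip", "unknown")] += 1
        url = e.get("url", "")
        urls[url] += 1
        for tag, pat in TAG_PATTERNS.items():
            if pat.search(url):
                tags[tag] += 1
    return {
        "total": sum(ips.values()),
        "top_ips": ips.most_common(top_n),
        "top_urls": urls.most_common(top_n),
        "tags": dict(tags),
    }
```

A summary dict like this is compact enough to fit in a prompt, and it is the "shape" signal (one dominant tag versus many small ones) that lets the generated dashboard differ from day to day.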

The safe part is that the LLM never sees the raw malicious strings directly, and the generated UI never runs loose in the main page. Instead, the app serves the generated dashboard through a backend API, caches it so it does not constantly change, and renders it inside a sandboxed iframe. The system also validates the generated code and, if it is broken, falls back to a static dashboard. So the whole flow is: logs come in -> analyzer summarizes them -> Claude generates a matching UI -> frontend loads it safely and pulls chart data from the backend.
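The cache-validate-fallback step can be sketched as follows. This is a minimal illustration under stated assumptions: the `sanity_check` heuristic, the fallback string, and the cache layout are all hypothetical stand-ins for whatever the repo actually does:

```python
# Hypothetical static fallback served when generated code fails checks.
STATIC_FALLBACK = "export default function Dashboard(){ return null; }"

_cache = {}  # day -> validated dashboard source

def sanity_check(jsx_source):
    """Cheap structural check on the generated component: non-empty
    and exports something renderable. A real check could be stricter."""
    return bool(jsx_source) and "export default" in jsx_source

def get_dashboard(day, generate):
    """Return a cached dashboard for `day`, generating and validating
    it at most once; serve the static fallback if validation fails."""
    if day not in _cache:
        candidate = generate(day)
        _cache[day] = candidate if sanity_check(candidate) else STATIC_FALLBACK
    return _cache[day]
```

Caching per day also gives the stability the diary mentions: the LLM is only asked once per dataset, so the dashboard does not constantly change under the analyst.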

Let's take a look at some examples now! On days where there is more noise we do not see any dominant patterns highlighted on the UI

However, on days where there is a clear pattern from certain actors, we see an immediate highlight…

Furthermore, it is able to recognize and highlight the attack signatures that are most obvious (or would be obvious to an experienced analyst) at the very top of the UI’s dashboard.

There are sometimes interesting quirks, like the LLM creating a dashboard in light mode instead of dark mode.

Nonetheless, it is interesting to see how the LLM adapts to each day’s attack logs. I imagine that if I could “vibe code” this idea in a few hours, it could become a full-blown platform and toolkit for major organizations and analysts. So yea…I didn’t write the code for all this madness; I simply took a problem that I constantly face when looking at attack logs (what is it that I’m actually looking for?) and created a unique bespoke UI for each day’s scenario.

Shout out to Claude Code for agentically writing the repo which can be found here.

Shout out to ChatGPT for helping me write the ‘how it works’ section of this blog.

And a special shout out to my Internship mentor Guy Bruneau for helping me think bigger in terms of recognizing interesting attacks on my webhoneypot.

Be sure to subscribe to my YouTube channel for more edgy tech content and cyber insights.
youtube.com/@gnarcoding

[1] https://github.com/gnarcoding/bespoke-ui-cyber-analytics/
[2] https://isc.sans.edu/tools/honeypot/
[3] https://www.sans.edu/cyber-security-programs/bachelors-degree/

———–
Guy Bruneau IPSS Inc.
My GitHub Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.