Introducing AWS IoT Core Device Location integration with Amazon Sidewalk

This post was originally published on this site

Today, I’m happy to announce a new capability to resolve location data for Amazon Sidewalk enabled devices with the AWS IoT Core Device Location service. This feature removes the requirement to install GPS modules in a Sidewalk device and also simplifies the developer experience of resolving location data. Devices powered by small coin cell batteries, such as smart home sensor trackers, use Sidewalk to connect. Supporting built-in GPS modules for products that move around is not only expensive, it can also create challenges in ensuring optimal battery life and longevity.

With this launch, Internet of Things (IoT) device manufacturers and solution developers can build asset tracking and location monitoring solutions using Sidewalk-enabled devices by sending Bluetooth Low Energy (BLE), Wi-Fi, or Global Navigation Satellite System (GNSS) information to AWS IoT for location resolution. They can then send the resolved location data to an MQTT topic or an AWS IoT rule and route it to other Amazon Web Services (AWS) services, taking advantage of the broader capabilities of the AWS Cloud through AWS IoT Core. This simplifies software development and gives developers more options to choose the optimal location source, thereby improving product performance.

This launch addresses previous challenges and architecture complexity. You don’t need location sensing on network-based devices when you use the Sidewalk network infrastructure itself to determine device location, which eliminates the need for power-hungry and costly GPS hardware on the device. And, this feature also allows devices to efficiently measure and report location data from GNSS and Wi-Fi, thus extending the product battery life. Therefore, you can build a more compelling solution for asset tracking and location-aware IoT applications with these enhancements.

For those unfamiliar with Amazon Sidewalk and the AWS IoT Core Device Location service, I’ll briefly explain their history and context. If you’re already familiar with them, you can skip to the section on how to get started.

AWS IoT Core integrations with Amazon Sidewalk
Amazon Sidewalk is a shared network that helps devices work better through improved connectivity options. It’s designed to support a wide range of customer devices with capabilities ranging from locating pets or valuables, to smart home security and lighting control and remote diagnostics for appliances and tools.

Amazon Sidewalk is a secure community network that uses Amazon Sidewalk Gateways (also called Sidewalk Bridges), such as compatible Amazon Echo and Ring devices, to provide cloud connectivity for IoT endpoint devices. Amazon Sidewalk enables low-bandwidth and long-range connectivity at home and beyond using BLE for short-distance communication and LoRa and frequency-shift keying (FSK) radio protocols at 900MHz frequencies to cover longer distances.

Sidewalk now provides coverage to more than 90% of the US population and supports long-range connected solutions for communities and enterprises. Users with Ring cameras or Alexa devices that act as a Sidewalk Bridge can choose to contribute a small portion of their internet bandwidth, which is pooled to create a shared network that benefits all Sidewalk-enabled devices in a community.

In March 2023, AWS IoT Core deepened its integration with Amazon Sidewalk to seamlessly provision, onboard, and monitor Sidewalk devices with qualified hardware development kits (HDKs), SDKs, and sample applications. As of this writing, AWS IoT Core is the only way for customers to connect to the Sidewalk network.

In the AWS IoT Core console, you can add your Sidewalk device, provision and register your devices, and connect your Sidewalk endpoint to the cloud. To learn more about onboarding your Sidewalk devices, visit Getting started with AWS IoT Core for Amazon Sidewalk in the AWS IoT Wireless Developer Guide.

In November 2022, we announced the AWS IoT Core Device Location service, a new feature that you can use to get the geo-coordinates of your IoT devices even when they don’t have a GPS module. You can use the Device Location service as a simple request and response HTTP API, or you can use it with IoT connectivity pathways like MQTT, LoRaWAN, and now with Amazon Sidewalk.

In the AWS IoT Core console, you can test the Device Location service to resolve the location of your device by importing device payload data. Resource location is reported as a GeoJSON payload. To learn more, visit the AWS IoT Core Device Location in the AWS IoT Core Developer Guide.

Customers across multiple industries like automotive, supply chain, and industrial tools have requested a simplified solution such as the Device Location service to extract location data from Sidewalk products. Such a solution streamlines customer software development and gives them more options to choose the optimal location source, thereby improving their products.

Get started with a Device Location integration with Amazon Sidewalk
To enable Device Location for Sidewalk devices, go to the AWS IoT Core for Amazon Sidewalk section under LPWAN devices in the AWS IoT Core console. Choose Provision device, or select an existing device to edit its settings, and then select Activate positioning under the Geolocation option when creating or updating your Sidewalk devices.

When activating positioning, you need to specify a destination where you want to send your location data. The destination can be either an AWS IoT rule or an MQTT topic.

Here is a sample AWS Command Line Interface (AWS CLI) command to enable position while provisioning a new Sidewalk device:

$ aws iotwireless create-wireless-device --type Sidewalk \
  --name "demo-1" --destination-name "New-1" \
  --positioning Enabled
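
If you prefer to script this with the AWS SDK, the same call can be made through boto3. The sketch below is illustrative only: the device name, destination name, and Sidewalk device profile ID are placeholders, and it assumes your destination (an AWS IoT rule or MQTT topic) already exists.

import boto3

# Hypothetical values; replace with your own resources.
iot_wireless = boto3.client("iotwireless")

response = iot_wireless.create_wireless_device(
    Type="Sidewalk",
    Name="demo-1",
    DestinationName="New-1",            # existing destination (AWS IoT rule or MQTT topic)
    Positioning="Enabled",              # ask AWS IoT Core Device Location to resolve location data
    Sidewalk={"DeviceProfileId": "your-device-profile-id"},  # placeholder profile ID
)
print(response["Id"], response["Arn"])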

After your Sidewalk device establishes a connection to the Amazon Sidewalk network, the device SDK sends the GNSS-, Wi-Fi-, or BLE-based information to AWS IoT Core for Amazon Sidewalk. If you have enabled positioning, AWS IoT Core Device Location resolves the location data and sends it to the specified destination. After your Sidewalk device transmits location measurement data, the resolved geographic coordinates and a map pin are also displayed in the Position section for the selected device.

You will also get location information delivered to your destination in GeoJSON format, as shown in the following example:

{
    "coordinates": [
        13.376076698303223,
        52.51823043823242
    ],
    "type": "Point",
    "properties": {
        "verticalAccuracy": 45,
        "verticalConfidenceLevel": 0.68,
        "horizontalAccuracy": 303,
        "horizontalConfidenceLevel": 0.68,
        "country": "USA",
        "state": "CA",
        "city": "Sunnyvale",
        "postalCode": "91234",
        "timestamp": "2025-11-18T12:23:58.189Z"
    }
}
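
On the consuming side, handling this payload is plain GeoJSON parsing. The following sketch is my own illustration (the function name and print format are not from the service); it extracts the coordinates and a few properties from a message like the sample above:

import json

def handle_location_message(payload: bytes) -> None:
    """Parse a resolved-location GeoJSON message like the sample above."""
    geo = json.loads(payload)
    lon, lat = geo["coordinates"]          # GeoJSON order is [longitude, latitude]
    props = geo.get("properties", {})
    print(f"Device at ({lat:.5f}, {lon:.5f}), "
          f"horizontal accuracy {props.get('horizontalAccuracy')} m, "
          f"measured at {props.get('timestamp')}")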

You can monitor the Device Location data between your Sidewalk devices and AWS Cloud by enabling Amazon CloudWatch Logs for AWS IoT Core. To learn more, visit the AWS IoT Core for Amazon Sidewalk in the AWS IoT Wireless Developer Guide.

Now available
AWS IoT Core Device Location integration with Amazon Sidewalk is now generally available in the US East (N. Virginia) Region. To learn more about use cases, documentation, sample code, and partner devices, visit the AWS IoT Core for Amazon Sidewalk product page.

Give it a try in the AWS IoT Core console and send feedback to AWS re:Post for AWS IoT Core or through your usual AWS Support contacts.

Channy

Formbook Delivered Through Multiple Scripts, (Thu, Nov 13th)

This post was originally published on this site

When I’m teaching FOR610[1], I always tell my students that reverse engineering does not only apply to “executable files” (read: PE or ELF files). Most of the time, the infection path involves many stages to defeat the security analyst or security controls. Here is an example that I found yesterday. An email was received with an attached ZIP archive. It contained a simple file: “Payment_confirmation_copy_30K__202512110937495663904650431.vbs” (SHA256:d9bd350b04cd2540bbcbf9da1f3321f8c6bba1d8fe31de63d5afaf18a735744f), detected by 17/65 antivirus engines on VT[2]. Let’s have a look at the infection path.

The VBS script was obfuscated but easy to reverse. First it started with a delay loop of 9 seconds:

Dim Hump
Hump = DateAdd("s", 9, Now())
Do Until (Now() > Hump)
    Wscript.Sleep 100
    Frozen = Frozen + 1
Loop

This allows the script to delay its malicious actions without relying on a single long Sleep() call, which is often considered suspicious. Then the script generates a PowerShell script by concatenating a lot of strings. The “PowerShell” string is hidden behind this line:

Nestlers= array(79+1,79,80+7,60+9,82,83,72,69,76,76)
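
The values are just ASCII codes with a bit of arithmetic thrown in. A quick throwaway Python snippet (mine, not part of the malware) confirms what the array spells out:

# ASCII codes copied from the VBS array above; the additions are evaluated first.
nestlers = [79 + 1, 79, 80 + 7, 60 + 9, 82, 83, 72, 69, 76, 76]
print("".join(chr(c) for c in nestlers))   # -> POWERSHELL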

The script is reconstructed like this:

Roastable11 = Roastable11 + "mv 'udenri"
Roastable11 = Roastable11 + "gstjenes"
Roastable11 = Roastable11 + "te';"
Roastable11 = Roastable11 + "function "
Roastable11 = Roastable11 + "Microcoulomb"
Roastable11 = Roastable11 + " ($s"
Roastable11 = Roastable11 + "kattes"
Roastable11 = Roastable11 + "kemas='sel"
Roastable11 = Roastable11 + "vang"
Roastable11 = Roastable11 + "av')"
...

The result is executed with a Shell.Application object. The PowerShell script is also heavily obfuscated. Two functions are used for this purpose:

function Microcoulomb ($skatteskemas='selvangav')
{
    $bletr=4;
    do {
        $folkesangeren+=$skatteskemas[$bletr];
        $bletr+=5;
        $overhringens=Get-Date
    }
    until (!$skatteskemas[$bletr]);
    $folkesangeren
}

function Blokbogstavers65 ($srlings)
{
    countryish22($srlings)
}

The second function just invokes “Invoke-Expression” with the provided string. The first one reconstructs strings by extracting some characters from the provided one. Example:

$mesoventrally=Microcoulomb ' :::n TTTEJJJJTjjjj.nnnnw::::E';
$mesoventrally+=Microcoulomb 'i iiB SSSCccc l EE INNNNe * *n;;;;t';

The variable $mesoventrally will contain “nET.wEBClIent”.
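
The decoder is easy to reproduce outside the malware: Microcoulomb simply keeps every fifth character of its argument, starting at offset 4. A small Python re-implementation (again my own, for analysis only) recovers the same string:

def microcoulomb(s: str) -> str:
    """Re-implementation of the malware's decoder: keep every 5th character from index 4."""
    return s[4::5]

decoded = microcoulomb(" :::n TTTEJJJJTjjjj.nnnnw::::E")
decoded += microcoulomb("i iiB SSSCccc l EE INNNNe * *n;;;;t")
print(decoded)   # -> nET.wEBClIent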

The first part of the deobfuscated script will prepare the download of the next payload:

while ((!$brandmesterens))
{
    Blokbogstavers65 (Microcoulomb '...') ;
    Blokbogstavers65 retsforflgende;
    Blokbogstavers65 (Microcoulomb '...');
    Blokbogstavers65 (Microcoulomb '...') ;
    Blokbogstavers65 (Microcoulomb '...') ;
    $fedayee=$serigraphic[$dichotomically]
}

The loop waits for a successful download from this URL: hxxps://drive[.]google[.]com/uc?export=download&id=1jFn0CatcuICOIjBsP_WxcI_faBI9WA9S

It stores the payload in C:\Users\REM\AppData\Roaming\budene.con. Once decoded, it’s another piece of PowerShell that also implements deobfuscation functions.

The script will invoke an msiexec.exe process and inject the FormBook payload into it. The injected payload is C:\Users\REM\AppData\Local\Temp\bin.exe (SHA256:12a0f592ba833fb80cc286e28a36dcdef041b7fc086a7988a02d9d55ef4c0a9d)[3]. The C2 server is 216[.]250[.]252[.]227:7719.

Here is an overview of the activity generated by all the scripts on the infected system:

[1] https://www.sans.org/cyber-security-courses/reverse-engineering-malware-malware-analysis-tools-techniques
[2] https://www.virustotal.com/gui/file/d9bd350b04cd2540bbcbf9da1f3321f8c6bba1d8fe31de63d5afaf18a735744f
[3] https://www.virustotal.com/gui/file/12a0f592ba833fb80cc286e28a36dcdef041b7fc086a7988a02d9d55ef4c0a9d

Xavier Mertens (@xme)
Xameco
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

SmartApeSG campaign uses ClickFix page to push NetSupport RAT, (Wed, Nov 12th)

This post was originally published on this site

Introduction

This diary describes a NetSupport RAT infection I generated in my lab from the SmartApeSG campaign that used a ClickFix-style fake CAPTCHA page.

Known as ZPHP or HANEYMANEY, SmartApeSG is a campaign reported as early as June 2024. When it started, this campaign used fake browser update pages. But it currently uses the ClickFix method of fake CAPTCHA-style "verify you are human" pages.

This campaign pushes malicious NetSupport RAT packages for its initial malware infection, and I've seen follow-up malware from these NetSupport RAT infections.

How To Find SmartApeSG Activity

I can usually find SmartApeSG indicators from the Monitor SG account on Mastodon. I use URLscan to pivot on those indicators, so I can find compromised websites that lead to the SmartApeSG script.

The Infection

Sites compromised through this campaign display pages with a hidden injected script. Given the right conditions, this script kicks off a SmartApeSG chain of events. The image below shows an example.


Shown above: Injected SmartApeSG script in a page from the compromised site.

In some cases, this injected script does not kick off the infection chain. I've had issues getting an infection chain during certain times of day, or if I try viewing the compromised website multiple times from the same source IP address. I don't know what the conditions are, but if those conditions are right, the compromised site shows a fake CAPTCHA-style "verify you are human" page.


Shown above: Fake CAPTCHA page displayed by the compromised site.

Clicking the "verify you are human" box does the following:

  • Injects malicious content into the Windows host's clipboard
  • Generates a pop-up with instructions to open a Run window, paste content into the window, and run it.

The clipboard-injected content is a command string that uses the mshta command to retrieve and run malicious content that will generate a NetSupport RAT infection.


Shown above: Following ClickFix directions to paste content (a malicious command) into the Run window.

Below is a URL list of the HTTPS traffic directly involved in this infection.


Shown above: HTTPS traffic directly involved in this SmartApeSG activity.


Shown above: Traffic from the infection filtered in Wireshark.

The malicious NetSupport RAT package stays persistent on the infected host through a Start Menu shortcut. The shortcut runs a .js file in the user's AppData\Local\Temp directory. That .js file runs the NetSupport RAT executable located in a folder under the C:\ProgramData directory.


Shown above: The malicious NetSupport RAT package, persistent on an infected Windows host.

Indicators From This Activity

The following URLs were noted in traffic from this infection:

  • hxxps[:]//frostshiledr[.]com/xss/buf.js  <– injected SmartApeSG script
  • hxxps[:]//frostshiledr[.]com/xss/index.php?iArfLYKw
  • hxxps[:]//frostshiledr[.]com/xss/bof.js?0e58069bbdd36e9a36  <– fake CAPTCHA page/ClickFix instructions
  • hxxps[:]//newstarmold[.]com/sibhl.php  <– Script retrieved by ClickFix command
  • hxxps[:]//www.iconconsultants[.]com/4nnjson.zip  <– zip archive containing malicious NetSupport RAT package
  • hxxp[:]//194.180.191[.]121/fakeurl.htm  <– NetSupport RAT C2 traffic over TCP port 443

The following is the zip archive containing the malicious NetSupport RAT package:

  • SHA256 hash: 1e9a1be5611927c22a8c934f0fdd716811e0c93256b4ee784fadd9daaf2459a1
  • File size: 9,192,105 bytes
  • File type: Zip archive data, at least v1.0 to extract, compression method=store
  • File location: hxxps[:]//www.iconconsultants[.]com/4nnjson.zip
  • Saved to disk as: C:\ProgramData\psrookk11nn.zip

Note: These domains change on a near-daily basis, and the NetSupport RAT package and C2 server also frequently change.


Bradley Duncan
brad [at] malware-traffic-analysis.net

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Introducing Our Final AWS Heroes of 2025

This post was originally published on this site

With AWS re:Invent approaching, we’re celebrating three exceptional AWS Heroes whose diverse journeys and commitment to knowledge sharing are empowering builders worldwide. From advancing women in tech and rural communities to bridging academic and industry expertise and pioneering enterprise AI solutions, these leaders exemplify the innovative spirit that drives our community forward. Their stories showcase how technical excellence, combined with passionate advocacy and mentorship, strengthens the global AWS community.

Dimple Vaghela – Ahmedabad, India

Community Hero Dimple Vaghela leads both the AWS User Group Ahmedabad and AWS User Group Vadodara, where she drives cloud education and technical growth across the region. Her impact spans organizing numerous AWS meetups, workshops, and AWS Community Days that have helped thousands of learners advance their cloud careers. Dimple launched the “Cloud for Her” project to empower girls from rural areas in technology careers and serves as co-organizer of the Women in Tech India User Group. Her exceptional leadership and community contributions were recognized at AWS re:Invent 2024 with the AWS User Group Leader Award in the Ownership category, while she continues building a more inclusive cloud community through speaking, mentoring, and organizing impactful tech events.

Rola Dali – Montreal, Canada

Community Hero Rola Dali is a senior Data, ML, and AI expert specializing in AWS cloud, bringing unique perspective from her PhD in neuroscience and bioinformatics with expertise in human genomics. As co-organizer of the AWS Montreal User Group and a former AWS Community Builder, her commitment to the cloud community earned her the prestigious Golden Jacket recognition in 2024. She actively shapes the tech community by architecting AWS solutions, sharing knowledge through blogs and lectures, and mentoring women entering tech, academics transitioning to industry, and students starting their careers.

Vivek Velso – Toronto, Canada

Machine Learning Hero Vivek Velso is a seasoned technology leader with over 27 years of IT industry experience, specializing in helping organizations modernize their cloud infrastructure for generative AI workloads. His deep AWS expertise earned him the prestigious Golden Jacket award for completing all AWS certifications, and he actively contributes to the AWS Subject Matter Expert (SME) program for multiple certification exams. A former AWS Community Builder and AWS Ambassador, he continues to share his knowledge through more than 100 technical blogs, articles, conference engagements, and AWS livestreams, helping the community confidently embrace cloud innovation.

Learn More

Visit the AWS Heroes webpage if you’d like to learn more about the AWS Heroes program, or to connect with a Hero near you.

Taylor

Secure EKS clusters with the new support for Amazon EKS in AWS Backup

This post was originally published on this site

Today, we’re announcing support for Amazon EKS in AWS Backup to provide the capability to secure Kubernetes applications using the same centralized platform you trust for your other Amazon Web Services (AWS) services. This integration eliminates the complexity of protecting containerized applications while providing enterprise-grade backup capabilities for both cluster configurations and application data. AWS Backup is a fully managed service to centralize and automate data protection across AWS and on-premises workloads. Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service to manage availability and scalability of the Kubernetes clusters. With this new capability, you can centrally manage and automate data protection across your Amazon EKS environments alongside other AWS services.

Until now, customers relied on custom solutions or third-party tools to back up their EKS clusters, requiring complex scripting and maintenance for each cluster. The support for Amazon EKS in AWS Backup eliminates this overhead by providing a single, centralized, policy-driven solution that protects both EKS cluster state (Kubernetes deployments and resources) and stateful data (stored in Amazon Elastic Block Store (Amazon EBS), Amazon Elastic File System (Amazon EFS), and Amazon Simple Storage Service (Amazon S3) only) without the need to manage custom scripts across clusters. For restores, customers were previously required to restore their EKS backups to a target cluster, either the source cluster or a new one, which meant provisioning the EKS cluster infrastructure ahead of the restore. With this new capability, customers can instead have AWS Backup create a new EKS cluster based on the previous cluster’s configuration settings and restore into it, with AWS Backup managing the provisioning of the cluster on the customer’s behalf.

This support includes policy-based automation for protecting single or multiple EKS clusters. A single data protection policy provides a consistent experience across all services AWS Backup supports. It allows creation of immutable backups to prevent malicious or inadvertent changes, helping customers meet their regulatory compliance needs. In the event of data loss or cluster downtime, customers can easily recover their EKS cluster data from encrypted, immutable backups using an easy-to-use interface and maintain business continuity while running their EKS clusters at scale.

How it works
Here’s how I set up support for on-demand backup of my EKS cluster in AWS Backup. First, I’ll show a walkthrough of the backup process, then demonstrate a restore of the EKS cluster.

Backup
In the AWS Backup console, in the left navigation pane, I choose Settings and then Configure resources to opt in to enable protection of EKS clusters in AWS Backup.

Now that I’ve enabled Amazon EKS, in Protected resources I choose Create on-demand backup to create a backup for my already existing EKS cluster floral-electro-unicorn.

Enabling EKS in Settings ensures that it shows up as a Resource type when I create on-demand backup for the EKS cluster. I proceed to select the EKS resource type and the cluster.

I leave the rest of the information as default, then select Choose an IAM role to select a role (test-eks-backup) that I’ve created and customized with the necessary permissions for AWS Backup to assume when creating and managing backups on my behalf. I choose Create on-demand backup to finalize the process.


The job is initiated, and it will start running to back up both the EKS cluster state and the persistent volumes. If Amazon S3 buckets are attached to the backup, you’ll need to add the additional Amazon S3 backup permissions AWSBackupServiceRolePolicyForS3Backup to your role. This policy contains the permissions necessary for AWS Backup to back up any Amazon S3 bucket, including access to all objects in a bucket and any associated AWS KMS key.
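
The same on-demand backup can also be started programmatically. Below is a minimal boto3 sketch; the vault name, cluster ARN, and role ARN are placeholders for resources in your own account, and the role must carry the permissions described above.

import boto3

backup = boto3.client("backup")

# Hypothetical identifiers; substitute your own vault, EKS cluster ARN, and IAM role.
response = backup.start_backup_job(
    BackupVaultName="Default",
    ResourceArn="arn:aws:eks:us-east-1:111122223333:cluster/floral-electro-unicorn",
    IamRoleArn="arn:aws:iam::111122223333:role/test-eks-backup",
)
print("Backup job started:", response["BackupJobId"])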


The job completed successfully, and the EKS cluster floral-electro-unicorn is now backed up by AWS Backup.


Restore
Using the AWS Backup Console, I choose the EKS backup composite recovery point to start the process of restoring the EKS cluster backups, then choose Restore.


I choose Restore full EKS cluster to restore the full EKS backup. To restore to an existing cluster, I Choose an existing cluster then select the cluster from the drop-down list. I choose the Default order as the order in which individual Kubernetes resources will be restored.

I then configure the restore for the persistent storage resources that will be restored alongside my EKS cluster.


Next, I Choose an IAM role to execute the restore action. The Protected resource tags checkbox is selected by default and I’ll leave it as is, then choose Next.

I review all the information before I finalize the process by choosing Restore, to start the job.


Selecting the drop-down arrow gives details of the restore status for both the EKS cluster state and the attached persistent volumes. In this walkthrough, all the individual recovery points are restored successfully. If portions of the backup fail, it’s still possible to restore the successfully backed up persistent stores (for example, Amazon EBS volumes) and cluster configuration settings individually; however, it’s not possible to restore the full EKS backup. The successfully backed up resources will be available for restore, listed as nested recovery points under the EKS cluster recovery point. If there’s a partial failure, there will be a notification of the portion(s) that failed.


Benefits
Here are some of the benefits provided by the support for Amazon EKS in AWS Backup:

  • A fully managed multi-cluster backup experience, removing the overhead associated with managing custom scripts and third-party solutions.
  • Centralized, policy-based backup management that simplifies backup lifecycle management and makes it seamless to back up and recover your application data across AWS services, including EKS.
  • The ability to store and organize your backups with backup vaults. You assign policies to the backup vaults to grant access to users to create backup plans and on-demand backups but limit their ability to delete recovery points after they’re created.

Good to know
The following are some helpful facts to know:

  • Use either the AWS Backup Console, API, or AWS Command Line Interface (AWS CLI) to protect EKS clusters using AWS Backup. Alternatively, you can create an on-demand backup of the cluster after it has been created.
  • You can create secondary copies of your EKS backups across different accounts and AWS Regions to minimize risk of accidental deletion.
  • Restoration of EKS backups is available using the AWS Backup Console, API, or AWS CLI.
  • Restoring to an existing cluster will not overwrite the Kubernetes version or any existing data, because restores are non-destructive. Instead, only the delta between the backup and the source resource is restored.
  • Namespaces can only be restored to an existing cluster to ensure a successful restore as Kubernetes resources may be scoped at the cluster level.

Voice of the customer

Srikanth Rajan, Sr. Director of Engineering at Salesforce said “Losing a Kubernetes control plane because of software bugs or unintended cluster deletion can be catastrophic without a solid backup and restore plan. That’s why it’s exciting to see AWS rolling out the new EKS Backup and Restore feature, it’s a big step forward in closing a critical resiliency gap for Kubernetes platforms.”

Now available
Support for Amazon EKS in AWS Backup is available today in all AWS commercial Regions (except China) and in the AWS GovCloud (US) where AWS Backup and Amazon EKS are available. Check the full Region list for future updates.

To learn more, check out the AWS Backup product page and the AWS Backup pricing page.

Try out this capability for protecting your EKS clusters in AWS Backup and let us know what you think by sending feedback to AWS re:Post for AWS Backup or through your usual AWS Support contacts.

Veliswa.

AWS Weekly Roundup: Amazon S3, Amazon EC2, and more (November 10, 2025)

This post was originally published on this site

AWS re:Invent 2025 is only 3 weeks away and I’m already looking forward to the new launches and announcements at the conference. Last year brought 60,000 attendees from across the globe to Las Vegas, Nevada, and the atmosphere was amazing. Registration is still open for AWS re:Invent 2025. We hope you’ll join us in Las Vegas December 1–5 for keynotes, breakout sessions, chalk talks, interactive learning opportunities, and networking with cloud practitioners from around the world.

AWS and OpenAI announced a multi-year strategic partnership that provides OpenAI with immediate access to AWS infrastructure for running advanced AI workloads. The $38 billion agreement spans 7 years and includes access to AWS compute resources comprising hundreds of thousands of NVIDIA GPUs, with the ability to scale to tens of millions of CPUs for agentic workloads. The infrastructure deployment that AWS is building for OpenAI features a sophisticated architectural design optimized for maximum AI processing efficiency and performance. Clustering the NVIDIA GPUs—both GB200s and GB300s—using Amazon EC2 UltraServers on the same network enables low-latency performance across interconnected systems, allowing OpenAI to efficiently run workloads with optimal performance. The clusters are designed to support various workloads, from serving inference for ChatGPT to training next generation models, with the flexibility to adapt to OpenAI’s evolving needs.

AWS committed $1 million through its Generative AI Innovation Fund to digitize the Jane Goodall Institute’s 65 years of primate research archives. The project will transform handwritten field notes, film footage, and observational data on chimpanzees and baboons from analog to digital formats using Amazon Bedrock and Amazon SageMaker. The digital transformation will employ multimodal large language models (LLMs) and embedding models to make the research archives searchable and accessible to scientists worldwide for the first time. AWS is collaborating with Ode to build the user experience, helping the Jane Goodall Institute adopt AI technologies to advance research and conservation efforts. I was deeply saddened when I heard that world-renowned primatologist Jane Goodall had passed away. Learning that this project will preserve her life’s work and make it accessible to researchers around the world brought me comfort. It’s a fitting tribute to her remarkable legacy.

Transforming decades of research through cloud and AI. Dr. Jane Goodall and field staff observe Goblin at Gombe National Park, Tanzania. CREDIT: the Jane Goodall Institute

Last week’s launches
Let’s look at last week’s new announcements:

  • Amazon S3 now supports tags on S3 Tables – Amazon S3 now supports tags on S3 Tables for attribute-based access control (ABAC) and cost allocation. You can use tags for ABAC to automatically manage permissions for users and roles accessing table buckets and tables, eliminating frequent AWS Identity and Access Management (IAM) or S3 Tables resource-based policy updates and simplifying access governance at scale. Additionally, tags can be added to individual tables to track and organize AWS costs using AWS Billing and Cost Management.
  • Amazon EC2 R8a Memory-Optimized Instances now generally available – R8a instances feature 5th Gen AMD EPYC processors (formerly code named Turin) with a maximum frequency of 4.5 GHz, and they deliver up to 30% higher performance and up to 19% better price-performance compared to R7a instances, with 45% more memory bandwidth. Built on the AWS Nitro System using sixth-generation Nitro Cards, these instances are designed for high-performance, memory-intensive workloads, including SQL and NoSQL databases, distributed web scale in-memory caches, in-memory databases, real-time big data analytics, and electronic design automation (EDA) applications. R8a instances are SAP certified and offer 12 sizes, including two bare metal sizes.
  • EC2 Auto Scaling announces warm pool support for mixed instances policies – EC2 Auto Scaling groups now support warm pools for Auto Scaling groups configured with mixed instances policies. Warm pools create a pool of pre-initialized EC2 instances ready to quickly serve application traffic, improving application elasticity. The feature benefits applications with lengthy initialization processes, such as writing large amounts of data to disk or running complex custom scripts. By combining warm pools with instance type flexibility, Auto Scaling groups can rapidly scale out to maximum size while deploying applications across multiple instance types to enhance availability. The feature works with Auto Scaling groups configured for multiple On-Demand Instance types through manual instance type lists or attribute-based instance type selection.
  • Amazon Bedrock AgentCore Runtime now supports direct code deployment – Amazon Bedrock AgentCore Runtime now offers two deployment methods for AI agents: container-based deployment and direct code upload. You can choose between direct code–zip file upload for rapid prototyping and iteration or container-based options for complex use cases requiring custom configurations. AgentCore Runtime provides a serverless framework and model agnostic runtime for running agents and tools at scale. The direct code–zip upload feature includes drag-and-drop functionality, enabling faster iteration cycles for prototyping while maintaining enterprise security and scaling capabilities for production deployments.
  • AWS Capabilities by Region now available for Regional planning – AWS Capabilities by Region helps discover and compare AWS services, features, APIs, and AWS CloudFormation resources across Regions. This planning tool provides an interactive interface to explore service availability, compare multiple Regions side by side, and view forward-looking roadmap information. You can search for specific services or features, view API operations availability, verify CloudFormation resource type support, and check EC2 instance type availability including specialized instances. The tool displays availability states including Available, Planning, Not Expanding, and directional launch planning by quarter. The AWS Capabilities by Region data is also accessible through the AWS Knowledge MCP server, enabling automation of Region expansion planning and integration into development workflows and continuous integration and continuous delivery (CI/CD) pipelines.

Upcoming AWS events
Check your calendar and sign up for upcoming AWS events:

  • AWS re:Invent 2025 – Join us in Las Vegas December 1–5 as cloud pioneers gather from across the globe for the latest AWS innovations, peer-to-peer learning, expert-led discussions, and invaluable networking opportunities. Don’t forget to explore the event catalog.
  • AWS Builder Loft – A tech hub in San Francisco where builders share ideas, learn, and collaborate. The space offers industry expert sessions, hands-on workshops, and community events covering topics from AI to emerging technologies. Browse the upcoming sessions and join the events that interest you.
  • AWS Skills Center Seattle 4th Anniversary Celebration – A free, public event on November 20 with a keynote, learning panels, recruiter insights, raffles, and virtual participation options.

Join the AWS Builder Center to connect with builders, share solutions, and access content that supports your development. Browse here for upcoming AWS led in-person and virtual events, developer-focused events, and events for startups.

That’s all for this week. Check back next Monday for another Weekly Roundup!

— Esra

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!

It isn't always defaults: Scans for 3CX usernames, (Mon, Nov 10th)

This post was originally published on this site

Today, I noticed scans using the username "FTP_3cx" showing up in our logs. 3CX is a well-known maker of business phone system software [1]. My first guess was that this was a default user for one of their systems. But Google came up empty for this particular string. The 3CX software does not appear to run an FTP server, but it offers a feature to back up configurations to an FTP server [2]. The example user used in the documentation is "3cxftpuser", not "FTP_3cx". Additionally, the documentation notes that the FTP server can run on a different system from the 3CX software. For a backup, it would not make much sense to have it all run on the same system.

The scans we are seeing likely target FTP servers users set up to back up 3CX configurations, and not the 3CX software itself. I am not familiar enough with 3CX to know precisely what the backup contains, but it most likely includes sufficient information to breach the 3CX installation.

The credentials we observe with our Cowrie-based honeypots are collected for telnet and FTP. On Linux systems in particular, a system user is often used to connect via FTP, and any credentials that work for FTP will also work for telnet or SSH. Keep that in mind when configuring a user for FTP access, and of course, FTP should not be your first choice for backing up sensitive data, but we all know it happens.
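
If you run a Cowrie honeypot yourself, a few lines of Python are enough to tally which "3cx"-flavored credentials are being tried against it. This is only a sketch: it assumes Cowrie's standard JSON log format (eventid, username, and password fields), and the log path is a placeholder you will need to adjust.

import json
from collections import Counter
from pathlib import Path

counts = Counter()
# Placeholder path; point this at your own Cowrie log directory.
for logfile in Path("/srv/cowrie/var/log/cowrie").glob("cowrie.json*"):
    for line in logfile.open():
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue
        if event.get("eventid", "").startswith("cowrie.login.") and "3cx" in event.get("username", "").lower():
            counts[(event["username"], event.get("password", ""))] += 1

for (user, pw), n in counts.most_common():
    print(f"{n:4d}  {user}  {pw}")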

Here are the passwords attackers are attempting to use:

Password     Count
3CXBackup    4
3CXbackups   4
telecom      1
testbackup   1
backup3cx    1
ebsftpuser   1
ftp_cxn      1
ftp_inx      1

Here are some other "3cx" related usernames we have seen in the past:

Username
3cx
3CXBackup
3cxbackups
backup3cx
ftp3cx
FTP_3cx

If anyone with more 3CX experience reads this, is there a reason for someone to use these usernames? Or are there any defaults I didn't find?

[1] https://www.3cx.com
[2] https://www.3cx.com/docs/ftp-server-pbx-backups-linux/


Johannes B. Ullrich, Ph.D., Dean of Research, SANS.edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Introducing AWS Capabilities by Region for easier Regional planning and faster global deployments

This post was originally published on this site

At AWS, a common question we hear is: “Which AWS capabilities are available in different Regions?” It’s a critical question whether you’re planning Regional expansion, ensuring compliance with data residency requirements, or architecting for disaster recovery.

Today, I’m excited to introduce AWS Capabilities by Region, a new planning tool that helps you discover and compare AWS services, features, APIs, and AWS CloudFormation resources across Regions. You can explore service availability through an interactive interface, compare multiple Regions side-by-side, and view forward-looking roadmap information. This detailed visibility helps you make informed decisions about global deployments and avoid project delays and costly rework.

Getting started with Regional comparison
To get started, go to AWS Builder Center and choose AWS Capabilities and Start Exploring. When you select Services and features, you can choose the AWS Regions you’re most interested in from the dropdown list. You can use the search box to quickly find specific services or features. For example, I chose US (N. Virginia), Asia Pacific (Seoul), and Asia Pacific (Taipei) Regions to compare Amazon Simple Storage Service (Amazon S3) features.

Now I can view the availability of services and features in my chosen Regions and also see when they’re expected to be released. Select Show only common features to identify capabilities consistently available across all selected Regions, ensuring you design with services you can use everywhere.

The result will indicate availability using the following states: Available (live in the region); Planning (evaluating launch strategy); Not Expanding (will not launch in region); and 2026 Q1 (directional launch planning for the specified quarter).

In addition to exploring services and features, AWS Capabilities by Region also helps you explore available APIs and CloudFormation resources. As an example, to explore API operations, I added Europe (Stockholm) and Middle East (UAE) Regions to compare Amazon DynamoDB features across different geographies. The tool lets you view and search the availability of API operations in each Region.

The CloudFormation resources tab helps you verify Regional support for specific resource types before writing your templates. You can search by Service, Type, Property, and Config. For instance, when planning an Amazon API Gateway deployment, you can check the availability of resource types like AWS::ApiGateway::Account.

You can also search detailed resources such as Amazon Elastic Compute Cloud (Amazon EC2) instance type availability, including specialized instances such as Graviton-based, GPU-enabled, and memory-optimized variants. For example, I searched for 7th generation compute-optimized metal instances and found that c7i.metal-24xl and c7i.metal-48xl instances are available across all targeted Regions.

Beyond the interactive interface, the AWS Capabilities by Region data is also accessible through the AWS Knowledge MCP Server. This allows you to automate Region expansion planning, generate AI-powered recommendations for Region and service selection, and integrate Regional capability checks directly into your development workflows and CI/CD pipelines.

Now available
You can begin exploring AWS Capabilities by Region in AWS Builder Center immediately. The Knowledge MCP server is also publicly accessible at no cost and does not require an AWS account. Usage is subject to rate limits. Follow the getting started guide for setup instructions.

We would love to hear your feedback, so please send us any suggestions through the Builder Support page.

Channy

Binary Breadcrumbs: Correlating Malware Samples with Honeypot Logs Using PowerShell [Guest Diary], (Wed, Nov 5th)

This post was originally published on this site

[This is a Guest Diary by David Hammond, an ISC intern as part of the SANS.edu BACS program]

My last college credit on my way to earning a bachelor's degree was an internship opportunity at the Internet Storm Center. A great opportunity, but one that required the care and feeding of a honeypot. The day it arrived I plugged the freshly imaged honeypot into my home router and happily went about my day. I didn’t think too much about it until the first attack observation was due. You see, I travel often, but my honeypot does not. Furthermore, the administrative side of the honeypot was only accessible through the internal network. I wasn’t about to implement a whole remote solution just to get access while on the road. Instead, I followed some very good advice. I started downloading regular backups of the honeypot logs on a Windows laptop I frequently had with me.

The internship program encouraged us to at least initially review our honeypot logs with command line utilities, such as jq and all its flexibility with filtering. Combined with other standard Unix-like operating system tools, such as wc (word count), less, head, and cut, it was possible to extract exactly what I was looking for. I initially tried using more graphical tools but found I enjoy "living" in the command line better. When I first start looking at logs, I’m not always sure what I’m looking for. Command line tools allow me to quickly look for outliers in the data. I can see what sticks out by negating everything that looks the same.

So, what’s the trouble? None of these tools were available on my Windows laptop. Admittedly, most of what I mention above are available for Windows, but my ability to install software was restricted on this machine, and I knew that native alternatives existed. At the time I had several directories of JSON logs, and a long list of malware hash values corresponding to an attack I was interested in understanding better. Here’s how a few lines of PowerShell can transform scattered honeypot logs into a clear picture of what really happened.

First, let’s start with the script in two parts. Here’s the PowerShell array containing malware hash values:

$hashes = @(
"00deea7003eef2f30f2c84d1497a42c1f375d802ddd17bde455d5fde2a63631f",
"0131d2f87f9bc151fb2701a570585ed4db636440392a357a54b8b29f2ba842da",
"01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b",
"0291de841b47fe19557c2c999ae131cd571eb61782a109b9ef5b4a4944b6e76d",
"02a95dae81a8dd3545ce370f8c0ee300beff428b959bd7cec2b35e8d1cd7024e",
"062ba629c7b2b914b289c8da0573c179fe86f2cb1f70a31f9a1400d563c3042a",
"0be1c3511c67ecb8421d0be951e858bb3169ac598d068bae3bc8451e883946cc",
"0cbd5117413a0cab8b22f567b5ec5ec63c81b2f0f58e8e87146ecf7aace2ec71",
"0d2d316bc4937a2608e8d9a3030a545973e739805c3449b466984b27598fcdec",
"0d58ee0cd46d5908f31ba415f2e006c1bb0215b0ecdc27dd2b3afa74799e17bd"
)

The $hashes = @( ) between quoted, comma-separated values, establishes a PowerShell array of strings which represents the hashes we want to search for. Now let’s look at how we put this array to use.

Get-ChildItem -Path "C:\Users\Dave\Logs" -Filter 'cowrie.json.*' -Recurse |
ForEach-Object {
    $jsonContent = Get-Content $_.FullName
    write-output $_.FullName
    foreach ($hash in $hashes) {
        $searchResults = $null
        $searchResults = $jsonContent | Select-String $hash
        if (![string]::IsNullOrEmpty($searchResults)) { 
            write-output $searchResults 
        }
    }
}

Let's walk through the execution of the script. The first statement, Get-ChildItem, recurses through every folder in the specified path (C:\Users\Dave\Logs) and passes along all filenames that match the filter argument. Each filename is passed through the "pipe" (|) directly into the first ForEach-Object statement. You can see what’s passed by observing the output of the write-output $_.FullName line. The $_ is a variable designation which represents whatever is passed through the pipe. In this case, we know what kind of data to expect (a filename) so we can access its attribute, "FullName". This tells us the specific JSON log file currently being searched.

Now let’s get into the meat of the script. The main body of the script contains two nested loops. The outer loop begins with the first "ForEach-Object" block of code. The inner loop is described by the lowercase "foreach" block. We already know the name of the JSON log we’ll be searching next, so the next line, $jsonContent = Get-Content $_.FullName, sets that up to happen. It takes the first filename passed to $_ through the pipe, reads the contents of that file, and stores the text in a variable named $jsonContent. Now that we’ve got our first log to search, all we have to do is run through the list of hash values to search for! This takes us to the point of the script where we reach the inner loop. The foreach inner loop is similar to the outer loop with the exception of how it processes data. The statement foreach ($hash in $hashes) takes each hash value found in the $hashes array and puts a copy of it into $hash before executing the code block it contains.

When the inner-loop runs it does three things. First, $searchResults = $null empties the value of the $searchResults variable. This is also called "initializing" the variable, and it’s a good practice whenever you're working with loops that re-use the same variable names. Second, with the variable clear and ready to accept new values, the next line accomplishes a few things.

        $searchResults = $jsonContent | Select-String $hash

Starting to the right of the equals sign, we’re passing the JSON log text $jsonContent into the command "Select-String" while also passing Select-String a single argument, $hash. Remember, when the lowercase foreach loop started, it took each value found in the $hashes array and (one at a time) placed it into $hash before executing the block of code below it. So we’re passing the text in $jsonContent through another pipe to Select-String, which searches for the value of $hash within the contents of $jsonContent. The results of Select-String are then stored in the variable named $searchResults.

        if (![string]::IsNullOrEmpty($searchResults)) { 
            write-output $searchResults 
        }

Third and finally, we have an if statement to determine whether the prior Select-String produced any results. If it found the $hash value it was looking for, the $searchResults variable will contain data. If not, it will remain empty ($null). The if statement makes that determination and prints the $searchResults it found. Note the ! at the beginning of the statement which tells it to evaluate as, "if not empty."

While compact in size, this script introduces the PowerShell newcomer to a variety of useful functions: traversing files and folders, retrieving text, searching text, and nested loops are all sophisticated techniques. If you save this script, you can adapt it in many ways whenever a quick solution is needed. Understanding the tools that are available to us in any environment and having practice adapting those tools to our circumstances makes us all better cybersecurity professionals.
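
In the same spirit of adapting to whatever an environment offers, here is a rough Python equivalent of the script above for machines where PowerShell isn't handy. It is only a sketch: the hash list is truncated and the log path is the same placeholder used earlier.

from pathlib import Path

# Truncated hash list and placeholder log path; substitute your own values.
hashes = {
    "00deea7003eef2f30f2c84d1497a42c1f375d802ddd17bde455d5fde2a63631f",
    "0131d2f87f9bc151fb2701a570585ed4db636440392a357a54b8b29f2ba842da",
}

for logfile in Path(r"C:\Users\Dave\Logs").rglob("cowrie.json.*"):
    print(logfile)                               # show which log is being searched
    for line in logfile.open(errors="ignore"):
        if any(h in line for h in hashes):       # same idea as Select-String per hash
            print("  ", line.rstrip())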

[1] https://www.sans.edu/cyber-security-programs/bachelors-degree/

———–
Guy Bruneau IPSS Inc.
My GitHub Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.