AWS re:Invent 2025 - somewhat non-GenAI recap
Updates released during re:Invent 2025 that you might have missed
Every re:Invent comes with a wave of announcements, some big, some subtle, and some that quietly make day-to-day engineering a bit better.
This is my quick recap of the non-GenAI updates that stood out to me this year: improvements in observability, new cost-optimization levers, updates to Lambda’s capabilities, and a notable step forward in multicloud networking.
Feel free to check out more updates from re:Invent and pre:Invent 2025 here: https://aws-news.com/
CloudTrail events in CloudWatch: fewer moving parts, more visibility
AWS added a more straightforward way to send CloudTrail events to CloudWatch Logs using service-linked channels (SLCs).
Why I care
Because half the time you want CloudTrail events in CloudWatch, you’re not trying to build an archival strategy. You’re trying to answer:
“Who changed this security group?”
“Why did this API call spike?”
“Can I alert on this before it becomes a Slack incident?”
SLCs also include features such as safety checks and termination protection.
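To make the “who changed this security group?” question concrete, here is a minimal sketch that runs a CloudWatch Logs Insights query against the log group receiving CloudTrail events. The log group name is an assumption; substitute whatever your channel actually delivers to.

```python
import time
import boto3

logs = boto3.client("logs")

# Assumption: CloudTrail events land in this log group; replace it with
# the log group your service-linked channel delivers to.
LOG_GROUP = "/aws/cloudtrail/management-events"

query = """
fields @timestamp, userIdentity.arn, eventName, requestParameters.groupId
| filter eventSource = "ec2.amazonaws.com" and eventName like /SecurityGroup/
| sort @timestamp desc
| limit 20
"""

start = logs.start_query(
    logGroupName=LOG_GROUP,
    startTime=int(time.time()) - 3600,  # look back one hour
    endTime=int(time.time()),
    queryString=query,
)

# Poll until the query finishes, then print who touched security groups.
while True:
    result = logs.get_query_results(queryId=start["queryId"])
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in result["results"]:
    print({f["field"]: f["value"] for f in row})
```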
The “AWS fine print”
You still pay:
CloudTrail event delivery charges and
CloudWatch Logs ingestion (custom logs pricing).
Yes, it’s simpler, but don’t turn everything on everywhere and then act surprised.
Database Savings Plans
AWS introduced Database Savings Plans (up to 35% savings).
This is essentially AWS acknowledging that databases are expensive and that “please right-size” is not a strategy.
What to do with it
If you have steady-state usage (Aurora/RDS/others in the eligible set), you can treat it like:
commit for the baseline,
keep spikes on-demand.
AWS also integrated this into the billing console recommendations flow (Savings Plans recommendations + purchase analyzer).
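As a back-of-the-napkin sketch of the “commit for the baseline, spikes on demand” idea (all numbers below are hypothetical, and the real discount varies by engine and region):

```python
# Hypothetical numbers: steady baseline database spend plus bursty peaks.
baseline_on_demand = 10.00  # $/hr of eligible usage you run 24/7
peak_on_demand = 4.00       # $/hr of spiky usage you leave on demand
discount = 0.35             # "up to 35%" headline rate; actual rate varies

committed = baseline_on_demand * (1 - discount)  # covered by the plan
monthly_savings = (baseline_on_demand - committed) * 730  # ~730 hrs/month

print(f"Commit ${committed:.2f}/hr for the baseline, "
      f"keep ${peak_on_demand:.2f}/hr on demand, "
      f"save roughly ${monthly_savings:,.0f}/month")
```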
Lambda Managed Instances: serverless DX, EC2-shaped compute
This one is spicy: Lambda Managed Instances lets you run Lambda functions on your Amazon EC2 instances via a capacity provider model. The update is controversial, at least to me, since I believe in Lambda being serverless; that said, you can still use Lambda as it was initially intended and ignore this release 😂
You define:
VPC config
optional instance requirements
scaling policies
…and then attach Lambda functions to that capacity provider via console, API, or IaC.
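I can’t vouch for the exact API shape yet, so here is a purely hypothetical sketch of the inputs you define; every field name below is an illustrative placeholder, not the real schema:

```python
# HYPOTHETICAL sketch of a capacity provider definition; all field names
# are illustrative placeholders, not the actual Lambda API schema.
capacity_provider = {
    "name": "steady-workers",
    "vpc_config": {
        "subnet_ids": ["subnet-aaa", "subnet-bbb"],
        "security_group_ids": ["sg-ccc"],
    },
    # Optional, attribute-based instance requirements (EC2-style).
    "instance_requirements": {"vcpu_min": 2, "memory_mib_min": 4096},
    # Scaling policy for the EC2 fleet backing the functions.
    "scaling": {"min_instances": 0, "max_instances": 10},
    # Functions attached to this provider keep their usual event sources.
    "functions": ["my-steady-function"],
}
```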
Why this matters
This is a new middle layer for teams that:
love Lambda event sources + tooling (CloudWatch, X-Ray, Config…)
but want more control over compute shape or cost for steady workloads.
Also, supported runtimes include the latest Java, Node.js, Python, and .NET.
The “this will come up in architecture review” part
Third-party reporting states that pricing is standard EC2 + a compute management fee + request pricing, and that this eliminates the usual Lambda duration charge (since you’re paying for EC2).
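To see why that pricing shape matters for steady workloads, here is an illustrative break-even sketch. The Lambda duration and request prices are the public x86 list prices; the EC2 rate and management fee are placeholders I made up, not quoted numbers:

```python
# Classic Lambda pricing (public x86 list prices).
GB_SECOND = 0.0000166667      # $ per GB-second of duration
REQUESTS_PER_M = 0.20         # $ per 1M requests

mem_gb = 1.0                  # function memory
avg_duration_s = 0.5          # average invocation duration
requests_per_hour = 200_000   # steady, high-volume workload

request_cost = requests_per_hour / 1_000_000 * REQUESTS_PER_M
classic_hourly = (requests_per_hour * avg_duration_s * mem_gb * GB_SECOND
                  + request_cost)

# Managed instances side: no duration charge, but you pay for EC2 plus
# a management fee. Both numbers below are placeholders, not real prices.
ec2_hourly = 0.17
mgmt_fee_hourly = 0.03
managed_hourly = ec2_hourly + mgmt_fee_hourly + request_cost

print(f"classic Lambda: ${classic_hourly:.2f}/hr vs "
      f"managed instances: ${managed_hourly:.2f}/hr")
```

With these made-up numbers, the EC2-backed model wins comfortably at sustained volume, which is exactly the steady-workload niche this release seems to target.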
API Gateway adds MCP proxy support (AI-adjacent, but it’s really “API-as-a-tool”)
API Gateway now supports MCP proxy capability, which provides protocol translation so a REST API can communicate with MCP clients/agents without requiring app changes or additional infrastructure.
AWS frames it alongside Bedrock AgentCore Gateway services, but the “boring” value is:
governance,
auth,
throttling,
and making APIs discoverable/usable as tools.
If you’re not building “agents”, you can still read this as: API Gateway now supports another integration pattern that used to require custom glue code on your side rather than anything provided by the service.
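To get a feel for what the proxy is translating, here is a minimal sketch of an MCP tools/call request posted to a hypothetical API Gateway MCP endpoint. The URL and tool name are made up; the JSON-RPC envelope follows the MCP spec:

```python
import requests

# Hypothetical endpoint exposed by an API Gateway MCP proxy (made-up URL).
MCP_ENDPOINT = "https://abc123.execute-api.eu-west-1.amazonaws.com/mcp"

# MCP is JSON-RPC 2.0: an agent asks for a "tool" call, and the proxy
# translates it into a plain REST call against the backing API.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "getOrderStatus",          # hypothetical tool name
        "arguments": {"orderId": "12345"},
    },
}

resp = requests.post(MCP_ENDPOINT, json=payload, timeout=10)
print(resp.json())
```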
AWS Interconnect (multicloud) preview: private connectivity to other clouds
AWS announced a preview of AWS Interconnect – multicloud: “simple, resilient, high-speed private connections” to other cloud providers.
It starts with Google Cloud as the first partner, and AWS says Azure support will follow in 2026.
Why this matters
Multicloud connectivity is usually:
slow to procure,
annoying to operate,
and “fun” during incident response.
The promise here is: private links between clouds without weeks of paperwork and waiting.
We will see how this evolves over time.
Amazon S3 Vectors is GA: S3 continues to absorb the universe
S3 Vectors is now generally available, and AWS says it’s available in 14 Regions (up from 5 in preview).
Even if you don’t want to say “embeddings” out loud, vector storage shows up in:
similarity search,
dedupe,
recommendations,
anomaly detection,
and hybrid search patterns.
AWS also highlights integration patterns in which OpenSearch can manage vector storage in S3 to optimize hybrid search costs.
This is an update I want to cover in a showcase blog post in the upcoming weeks.
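Until then, here is a minimal taste based on the preview-era boto3 surface; the bucket and index names are made up, and parameter names may have shifted at GA, so verify against the current docs:

```python
import boto3

# Preview-era S3 Vectors API sketch; verify operation and parameter
# names against the current documentation before relying on this.
s3v = boto3.client("s3vectors")

s3v.put_vectors(
    vectorBucketName="my-vector-bucket",   # made-up name
    indexName="docs-index",                # made-up name
    vectors=[{
        "key": "doc-001",
        "data": {"float32": [0.12, 0.53, 0.91]},  # your embedding here
        "metadata": {"source": "kb", "lang": "en"},
    }],
)

hits = s3v.query_vectors(
    vectorBucketName="my-vector-bucket",
    indexName="docs-index",
    queryVector={"float32": [0.10, 0.50, 0.88]},
    topK=5,
    returnMetadata=True,
    returnDistance=True,
)
for v in hits["vectors"]:
    print(v["key"], v.get("distance"))
```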
Lambda Durable Functions: long-running workflows without paying for “waiting”
AWS added durable functions for Lambda to build multi-step workflows that can run from seconds up to one year, without incurring idle compute costs while waiting on humans or external systems.
I am still a fan of Step Functions, and TBH, with Step Functions’ native service integrations there is no need for a Lambda function for every little task (e.g., putting a DynamoDB item). I see the value here, but for me, Step Functions still wins in that case.
Why this matters
Because today, teams tend to choose between:
Step Functions (great, but can become “JSON orchestration art”)
DIY state in DynamoDB + retries + idempotency + sadness
Durable functions are AWS's way of saying: “What if the workflow were code, but also reliable?”
The AWS News Blog post explicitly positions it for long-running multi-step coordination and not paying while waiting.
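I have not written against the SDK yet, so the following is a purely hypothetical sketch of the durable-execution pattern (checkpointed steps plus a cheap suspended wait); the module, decorator, and context names are made up, not the actual Lambda API:

```python
# HYPOTHETICAL: module, decorator, and context API names are made up to
# illustrate the durable-execution pattern, not the real Lambda SDK.
from datetime import timedelta
from durable_lambda import durable, DurableContext  # hypothetical import


def load_order(order_id):  # placeholder business logic
    return {"id": order_id}

def fulfill(order): ...
def expire(order): ...


@durable  # hypothetical decorator marking a checkpointed workflow
def approve_order(ctx: DurableContext, order_id: str):
    # Each step is checkpointed; on replay, completed steps are skipped.
    order = ctx.step("load", lambda: load_order(order_id))

    # Wait up to 7 days for a human approval without paying for idle
    # compute; the function is suspended, not running, while it waits.
    approved = ctx.wait_for_signal("approval", timeout=timedelta(days=7))

    if approved:
        ctx.step("fulfill", lambda: fulfill(order))
    else:
        ctx.step("expire", lambda: expire(order))
```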
Conclusion
It’s easy to get swept up in the big re:Invent moments, but these quieter updates are the ones that tend to stick. They refine the everyday experience of building on AWS, and that’s often where the real gains show up.
Feel free to reach out if you have suggestions for my next blog post.
Till the next time, stay safe and have fun! ❤️



Solid roundup of the quieter releases. Lambda Managed Instances caught my attention because it seems like AWS is basically acknowledging that pure serverless doesn't always fit steady-state workloads where paying per invocation gets pricey. The capacity provider model could actually help teams that are stuck between Lambda's DX and Fargate's cost structure, especially for things like background processing that don't need instant scale-to-zero.