Sustainability with AWS Graviton - The Quiet Migration That Actually Moves the Needle

A while ago I found myself doing the classic cloud-sustainability thing:

I was looking at a carbon dashboard and feeling… weirdly powerless.

Not because sustainability isn’t important, or because I stopped caring about the environment. The opposite. It’s because a lot of the conversation feels abstract: scopes, methodologies, estimates, reports. Important, but it can start to feel like you’re optimizing the story of emissions instead of the source of emissions.

So I wanted to do something, and while looking around online I ran into one of those rare changes that’s both deeply boring and genuinely impactful:

Switching some workloads to AWS Graviton.

No new product. No offset program. No “green architecture” rebrand. Just the same service, doing the same job, using less energy.

AWS puts it bluntly on the Graviton pages: Graviton-based EC2 instances use up to 60% less energy than comparable EC2 instances for the same performance.

That sentence is the whole blog post. Everything else is just turning it into something you can trust, measure, and ship.


The moment “sustainability” stopped being a side quest

Here’s what helped me:

Instead of asking, “How do we report sustainability?” I started asking, “How do we do the same work with less electricity?”

Because at the end of the day, a lot of cloud sustainability is just physics and utilization:

  • how many watts your compute burns,
  • how efficiently you use the resources you’re paying for,
  • and what the grid is doing where your workloads run.

The reason Graviton is interesting isn’t that it magically solves emissions. It’s that it gives you a very practical lever: better performance per watt, in a way that often also improves cost.

AWS even has a dedicated “Sustainability with Graviton” page that frames it exactly like that: “achieve your sustainability goals without compromise,” while repeating the “up to 60% less energy” line.


A story that convinced me this isn’t just marketing

The case that kept showing up in my searches was Pinterest.

In AWS’s published case study, Pinterest describes migrating an API-serving workload to Graviton-based instances, and the result wasn’t just a small gain. They report:

  • 38% reduction in compute resources
  • 62% lower carbon emissions per API request
  • 47% reduction in workload costs

That’s the trifecta you almost never get to write in one sentence: less infrastructure, less carbon, less spend.

And it highlights something subtle: sometimes the sustainability win isn’t only “each instance is more efficient.” It’s also that your system ends up needing fewer instances (or can do more work per instance), which multiplies the effect.
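To see why those two levers compound rather than add, here’s a toy back-of-the-envelope calculation. Every number below is made up for illustration; none of them are Pinterest’s actual fleet figures.

```python
# Toy numbers only: per-instance efficiency and a smaller fleet multiply.

def total_energy_watts(instances: int, watts_per_instance: float) -> float:
    """Steady-state power draw of the whole fleet."""
    return instances * watts_per_instance

baseline = total_energy_watts(instances=100, watts_per_instance=200.0)  # x86 fleet
migrated = total_energy_watts(instances=70, watts_per_instance=120.0)   # fewer, more efficient instances

reduction = 1 - migrated / baseline
print(f"fleet energy reduction: {reduction:.0%}")
```

Neither a 30% smaller fleet nor a 40% more efficient instance gets you there alone; together they do.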


“Okay, but how do I measure this without hand-waving?”

This is where I see a lot of teams get stuck. They want to do the right thing, but they also don’t want to publish a feel-good chart that falls apart under one skeptical question.

AWS’s Customer Carbon Footprint Tool (CCFT) is useful here—not because it’s perfect (no estimation tool is), but because it gives you a consistent method to track change over time. AWS says CCFT provides historical data starting from January 2022, so you can look at trends before/after changes you make.

And in late 2025, AWS expanded CCFT to include Scope 3 emissions data (alongside Scope 1 and 2 coverage), which matters if you care about the “fuller” picture of cloud impact rather than only operational electricity.

One detail I appreciate: AWS explicitly calls out that you can view emissions using market-based or location-based methods (MBM vs LBM) inside the tool.

That’s important because “electricity usage” and “emissions” aren’t always aligned in the way people assume. Saving 1 kWh is always good. But the CO₂e impact of that 1 kWh depends on the grid and the accounting method.
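Here’s a tiny sketch of that point. The grid intensity factors below are illustrative placeholders, not official AWS or grid-operator figures:

```python
# Same electricity saving, very different CO2e avoided depending on the grid.
# Intensity factors (kg CO2e per kWh) are illustrative placeholders.
GRID_INTENSITY = {
    "low-carbon grid": 0.03,
    "average grid": 0.35,
    "coal-heavy grid": 0.80,
}

kwh_saved = 1000.0  # e.g., from moving a workload to more efficient instances

for grid, factor in GRID_INTENSITY.items():
    print(f"{grid}: {kwh_saved * factor:.0f} kg CO2e avoided")
```

Same kWh, an order-of-magnitude spread in avoided CO₂e. That’s why the MBM/LBM toggle in CCFT isn’t a footnote.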

So, if you want a measurement approach that feels honest, here’s the narrative I’d use:

You’re not trying to “prove the cloud is green.”

You’re trying to show that this workload now produces fewer emissions per unit of useful output.

For an API service, that might be “CO₂e per 1M requests.”

For a batch system, “CO₂e per 10,000 jobs.”

For a pipeline, “CO₂e per successful run.”

Pinterest’s “per API request” framing is exactly why that case study is so compelling.
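If you want to operationalize that framing, the arithmetic is simple enough to sketch. The figures below are placeholders you’d swap for your own CCFT estimates and request counts:

```python
# Normalize emissions by useful output so before/after comparisons stay honest.
# All figures are placeholders, not measurements.

def co2e_per_million_requests(total_kg_co2e: float, total_requests: int) -> float:
    """kg CO2e per million requests served over the same period."""
    return total_kg_co2e / (total_requests / 1_000_000)

before = co2e_per_million_requests(total_kg_co2e=120.0, total_requests=400_000_000)
after = co2e_per_million_requests(total_kg_co2e=95.0, total_requests=520_000_000)

print(f"before: {before:.3f} kg CO2e / 1M requests")
print(f"after:  {after:.3f} kg CO2e / 1M requests")
print(f"improvement: {1 - after / before:.0%}")
```

Note that in this sketch absolute emissions only dropped modestly while traffic grew; the per-unit metric is what captures the real efficiency change.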


The part nobody tells you: the migration is usually not the hard part

Most of the time, switching to Graviton isn’t a heroic rewrite. It’s a compatibility and build pipeline story:

  • your Docker images need to support arm64,
  • any native dependencies need to build cleanly,
  • and your vendor agents (monitoring/security) need to support the architecture.

If you’re already living in Linux + containers + managed services, it tends to be… surprisingly uneventful.
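One small thing that helps during a mixed rollout: having services report which architecture they actually landed on. A minimal Python sketch, assuming a Linux container where arm64 reports as `aarch64`:

```python
import platform

# During a mixed x86/arm64 rollout, log the architecture each replica runs on
# ('x86_64' vs 'aarch64' on Linux) so dashboards can slice metrics by arch.
def running_on_arm64() -> bool:
    return platform.machine().lower() in ("aarch64", "arm64")

arch = platform.machine()
print(f"architecture: {arch}, arm64: {running_on_arm64()}")
```

Tagging your existing metrics with that one label is usually enough to compare the two populations side by side.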

Which is why I call it the “quiet migration.” It doesn’t come with a shiny diagram. It doesn’t trigger a big architecture debate. It just quietly improves efficiency.

And that, honestly, is what makes it such a good sustainability lever.


Graviton keeps moving forward (and that matters)

This isn’t a one-off generation. AWS has kept pushing Graviton forward, and the platform support keeps widening.

At AWS re:Invent 2025, AWS introduced Graviton5 and previewed EC2 M9g instances powered by it. AWS says the M9g preview offers up to 25% better compute performance than the previous generation, plus higher networking and EBS bandwidth.

Whether you care about that for sustainability depends on your workloads—but directionally it’s the same theme: more work done per unit of energy, and often less infrastructure required.

(And if you’re the kind of person who likes tracking platform roadmap hints: at least one industry report noted additional Graviton5-based instance families planned for 2026 beyond M9g.)


What I’d actually do if I wanted a “sustainability win” this quarter

If I were trying to make this real—not as a keynote slide, but as an engineering change—I’d do it like a product experiment:

Pick one service that is stable, measurable, and meaningfully sized. Something where you can say “this service handles X% of our traffic” or “this worker tier consumes Y% of compute.”

Then run a calm, boring comparison:

  • same load shape,
  • same SLOs,
  • same configs as much as possible,
  • x86 baseline vs Graviton target.
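The comparison itself is mostly bookkeeping. Here’s a sketch of what I’d track per variant, with placeholder numbers you’d replace with your own load-test results and instance pricing:

```python
# Sketch of the "calm, boring comparison": same load, two variants, compare
# useful work per vCPU-hour and cost per unit of work. Numbers are placeholders.
from dataclasses import dataclass

@dataclass
class Variant:
    name: str
    requests_served: int
    vcpu_hours: float
    cost_usd: float

def summarize(v: Variant) -> None:
    per_vcpu_hour = v.requests_served / v.vcpu_hours
    cost_per_million = v.cost_usd / (v.requests_served / 1_000_000)
    print(f"{v.name}: {per_vcpu_hour:,.0f} req/vCPU-hr, ${cost_per_million:.2f} per 1M requests")

x86 = Variant("x86 baseline", requests_served=90_000_000, vcpu_hours=1200.0, cost_usd=48.0)
grv = Variant("Graviton target", requests_served=90_000_000, vcpu_hours=980.0, cost_usd=34.0)

summarize(x86)
summarize(grv)
```

Keeping the load identical and comparing per-unit numbers is what lets you attribute the difference to the hardware rather than to traffic shifts.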

If the Graviton version holds steady (or improves), you ship it.

Then you go back to CCFT and look at what happens over time—because now you’re not arguing in theory. You’re correlating an infrastructure change with an emissions trend in the system AWS provides, with consistent methodology and historical context.


The takeaway I wish more teams heard

Sustainability work doesn’t have to feel like a separate job.

Sometimes it’s just this:

Take the same workload. Run it on a more efficient CPU. Measure emissions per unit of output. Roll it out.

Graviton is compelling because it’s one of the rare moves that can improve:

  • sustainability (less energy, and often less CO₂e),
  • performance-per-dollar,
  • and sometimes even operational simplicity (fewer instances, fewer moving parts).

And AWS is being unusually direct about the intent: “reduce your carbon footprint,” “use up to 60% less energy,” right there on the product page.

If you’re looking for a sustainability story that doesn’t require poetry, Graviton might be the most practical chapter you can write.