Artificial intelligence (AI) is rapidly transforming how we use technology to address some of the world’s biggest challenges, from health care to climate change. As we scale our use of AI, it’s important to also minimize its environmental footprint. A new study commissioned by Amazon Web Services (AWS) and conducted by Accenture shows that an effective way to do that is by moving IT workloads from on-premises infrastructure to AWS data centers around the globe.
Inside an AWS data center in northern Virginia.
The report, "Moving onto The AWS Cloud Reduces Carbon Emissions," estimates that AWS’s infrastructure is up to 4.1 times more efficient than on-premises infrastructure, and that when workloads are optimized on AWS, their associated carbon footprint can be reduced by up to 99%. "On-premises" refers to organizations running hardware and software within their own physical space; 85% of organizations' global IT spend remains on-premises.
“AWS's holistic approach to efficiency helps to minimize both energy and water consumption in our data center operations, contributing to our ability to better serve our customers,” said Chris Walker, director of sustainability at AWS. “We are constantly working on ways to increase the energy efficiency of our facilities—optimizing our data center design, investing in purpose-built chips, and innovating with new cooling technologies. As AWS takes steps toward reaching Amazon's net-zero carbon by 2040, as part of The Climate Pledge, we will continuously innovate and implement ways to increase the energy efficiency across our facilities in an effort to build a brighter future for our planet.”
A graphic showing how optimizing workloads on AWS can reduce their carbon emissions by up to 99%.
AWS customers have experienced the efficiency benefits of moving to, and developing solutions on, AWS for several years. For example, global genomics and human-health company Illumina saw an 89% reduction in carbon emissions by moving to AWS. This kind of efficiency gain over maintaining on-site IT infrastructure is expected to become more pronounced as the world increasingly adopts AI.
That is because as AI workloads become more complex and data-intensive, they will demand new levels of performance from systems that complete millions of calculations every second, along with the memory, storage, and networking infrastructure to support them. All of this requires energy and has a corresponding carbon footprint. While on-premises data centers struggle to keep pace because of inherent limits on their scalability and energy efficiency, AWS is continuously innovating to make the cloud the most efficient way to run our customers’ infrastructure and businesses.
A photo of two data center technicians inspecting hardware inside an AWS data center.
“This research shows that AWS's focus on hardware and cooling efficiency, carbon-free energy, purpose-built silicon, and optimized storage can help organizations reduce the carbon footprint of AI and machine learning workloads,” said Sanjay Podder, global lead for Technology Sustainability Innovation at Accenture. “As the demand for AI continues to grow, sustainability through technology can play a crucial role in helping businesses meet environmental goals while driving innovation.”

Industry-leading standard used to quantify efficiency and estimated carbon reduction

The research quantified the energy efficiency and carbon reduction opportunity of moving customer workloads from on-premises to AWS by simulating and analyzing the differences between the two. A workload is a collection of resources and code that accomplishes a task, such as running a retail website or managing inventory databases. Accenture used the International Organization for Standardization (ISO) Software Carbon Intensity (SCI) standard to analyze the carbon footprint of representative storage-heavy and compute-heavy workloads, and went further by considering the effect of carbon-free energy for both on-premises and AWS. This is one of the first times a hyperscale cloud provider has used the SCI specification to perform an analysis of this type.
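To make the methodology concrete, here is a minimal sketch of the SCI calculation defined by the standard: carbon per functional unit is operational emissions (energy consumed times grid carbon intensity) plus amortized embodied hardware emissions, divided by the functional unit. The numbers below are illustrative assumptions, not figures from the report.

```python
def sci(energy_kwh: float, intensity_g_per_kwh: float,
        embodied_g: float, functional_units: float) -> float:
    """Software Carbon Intensity: SCI = ((E * I) + M) per R.

    E = energy consumed (kWh), I = grid carbon intensity (gCO2e/kWh),
    M = embodied hardware emissions amortized to the workload (gCO2e),
    R = functional unit (e.g., requests served).
    Returns gCO2e per functional unit.
    """
    operational_g = energy_kwh * intensity_g_per_kwh
    return (operational_g + embodied_g) / functional_units

# Hypothetical comparison: the same workload on-premises vs. in a
# cloud region with cleaner power and better hardware utilization.
on_prem = sci(energy_kwh=120.0, intensity_g_per_kwh=400.0,
              embodied_g=5000.0, functional_units=1_000_000)
cloud = sci(energy_kwh=45.0, intensity_g_per_kwh=50.0,
            embodied_g=1200.0, functional_units=1_000_000)
print(f"on-premises: {on_prem:.5f} gCO2e per request")
print(f"cloud:       {cloud:.5f} gCO2e per request")
```

Because the formula separates operational from embodied emissions, it captures both levers the study examines: cleaner energy lowers the `intensity_g_per_kwh` term, while higher utilization spreads `embodied_g` over more functional units.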
The study first looked at the amount of estimated operational and embodied (IT hardware) carbon emissions avoided simply by moving workloads from on-premises infrastructure to AWS, referred to as "Lift-and-Shift" in the report. It then analyzed how much more carbon can be avoided when those same workloads are optimized on AWS’s hardware, such as its purpose-built silicon designed for running AI models. Each scenario was compared across four geographic areas: the United States and Canada, the European Union, Asia Pacific, and Brazil.
Over the past decade, data volumes have grown exponentially, while the cloud has continued to open up an ever-greater degree of advanced data capabilities. Considering the data needs of analytics and of training and inference for AI models, organizations will want to factor in the potential carbon savings associated with their storage requirements. The study showed storage-heavy workloads can be up to 2.5 times more efficient on AWS than on-premises, and that optimizing them on AWS’s hardware can reduce associated carbon emissions by up to 93%.
For compute-heavy workloads, potential carbon emissions reductions of running AI workloads on AWS versus on-premises were assessed by analyzing the operational and embodied emissions of a representative workload provided by AWS. Accenture found when optimizing compute-heavy workloads on AWS, organizations can reduce their associated carbon footprint across several geographical regions by up to 99%.

How AWS innovates to increase efficiency and reduce the carbon footprint of AI workloads

AWS is continuously innovating to make the cloud the most efficient and sustainable way to run our customers’ businesses. Here are six ways AWS innovates to help organizations reduce their IT carbon footprint:

1. Data center infrastructure designed to increase efficiency

Through engineering across everything from electrical distribution to cooling techniques, AWS’s infrastructure operates closer to peak energy efficiency. AWS optimizes resource utilization to minimize idle capacity and continuously improves the efficiency of its infrastructure. For example, by innovating to improve evaporative media practices, we are able to reduce the energy usage of the associated cooling equipment by 20%. This stands in contrast to traditional on-premises data centers, which often overprovision to accommodate unpredictable demand spikes and future growth. That excess capacity translates to underutilized, energy-demanding resources with a higher associated carbon footprint.

2. Improving how we cool our facilities

After powering AWS’s server equipment, cooling is one of the largest sources of energy use in our data centers. To increase efficiency, AWS uses different cooling techniques, including free air cooling depending on the location and time of year, as well as real-time data to adapt to weather conditions. Implementing these innovative cooling strategies is more challenging on a smaller scale at a typical on-premises data center. AWS’s latest data center design seamlessly integrates optimized air-cooling solutions alongside liquid cooling capabilities for the most powerful AI chipsets, like the NVIDIA Grace Blackwell Superchips. This flexible, multimodal cooling design allows us to extract maximum performance and efficiency whether running traditional workloads or AI models.
A person working on water pipes.

3. Transitioning to carbon-free energy sources

Aligning with Amazon's commitment to achieving net-zero carbon emissions across all operations by 2040, AWS is rapidly transitioning its global infrastructure to match our electricity use with 100% renewable energy. Amazon has enabled over 500 renewable energy projects globally and has been the world’s largest corporate buyer of renewable energy for the last four years, according to Bloomberg New Energy Finance. As of 2022, the electricity consumed in 19 AWS Regions was attributable to 100% renewable energy.

4. Purpose-built silicon for AI workloads

When it comes to running complex AI workloads like large language models (LLMs), AWS offers a wide selection of hardware. To optimize performance and energy consumption, we developed purpose-built silicon such as the AWS Trainium and AWS Inferentia chips, which achieve significantly higher throughput than comparable accelerated compute instances. These purpose-built accelerators enable AWS to efficiently execute AI models at scale, reducing the carbon footprint of similar workloads and enhancing performance per watt of power consumption.
The AWS Trainium chip.

5. More sustainable construction practices

While the study did not consider the embodied emissions of non-IT infrastructure (such as HVAC and lighting), since SCI does not include them, AWS nonetheless continuously evaluates and optimizes the design of its data centers, server racks, storage rooms, and supporting infrastructure. As of 2023, AWS avoided over 22,000 tons of carbon dioxide equivalent emissions by using lower-carbon concrete and steel alternatives in the construction of 43 new data center facilities. AWS is also working across the supply chain to increase the use of recycled materials and reduce embodied carbon from manufacturing processes.

6. Efficient data storage and replication strategies

AWS provides tools and guidance that enable customers to modernize their data management strategies. This includes separating active "hot" data from inactive "cold" data using AWS's fully managed storage services. Additionally, AWS helps customers optimize their data replication processes by reducing replication size and throughput requirements, leading to decreased energy consumption and carbon emissions.
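As one hypothetical illustration of hot/cold tiering, an Amazon S3 lifecycle configuration can transition objects to colder, lower-cost storage classes as they age. The prefix and day thresholds below are illustrative assumptions; the rule structure follows the S3 lifecycle configuration API.

```python
# Hypothetical S3 lifecycle rule illustrating hot/cold data tiering:
# objects under the "logs/" prefix (an assumed example prefix) move to
# infrequent-access storage after 30 days, to archival storage after
# 90 days, and expire after a year.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "tier-cold-data",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }
    ]
}

# With boto3, such a configuration would be applied to a bucket via:
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="example-bucket",  # assumed bucket name
#       LifecycleConfiguration=lifecycle_configuration,
#   )
```

Tiering like this keeps rarely accessed data off the highest-performance (and most energy-intensive) storage, which is one way the "hot" versus "cold" separation described above translates into lower energy use.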
By designing data centers that provide the efficient, resilient service our customers expect while helping to minimize our carbon footprint—and theirs—AWS will continue to build a more sustainable business for our customers and for the world we all share.