Beyond the Hype: What ESG Leaders Need to Know About AI Risks and Opportunities

As artificial intelligence becomes more embedded in business operations, its influence on Environmental, Social, and Governance (ESG) practices is growing rapidly. From emissions modelling to automated reporting, AI is already reshaping how organisations measure, manage, and communicate sustainability performance.

But for all its promise, AI also brings risk—some visible, many hidden. The question is no longer whether AI will shape ESG, but whether it will help or harm. And the answer depends entirely on how it’s used.

Where AI Adds Value

At its best, AI helps sustainability teams do more with less. It can:

  • Automate ESG data extraction from reports, supplier disclosures, and regulatory filings
  • Provide real-time monitoring of climate, social, and governance risks
  • Generate draft reports aligned with frameworks like ASRS, ISSB, and CSRD
  • Model forward-looking risk scenarios and emissions trajectories

This is especially valuable for smaller organisations, or those scaling up their ESG reporting for the first time.

Government guidance is beginning to catch up. The Department of Industry’s AI and ESG: An Introductory Guide for ESG Practitioners frames ESG leaders as stewards of responsible AI use, highlighting the role they can play in aligning AI with environmental and social outcomes. The AI Impact Navigator is one practical tool to support that.

But while the upside is easy to sell, the risks receive far less attention, and they demand closer scrutiny.

Four ESG Risks Hiding Behind the Hype

1. AI’s Carbon Footprint Is Growing

AI models—particularly large-scale ones used for language processing, image generation, and analytics—require enormous computing power. This translates into high electricity demand, often drawn from fossil-fuel-intensive grids. As AI tools become embedded across operations, the associated emissions can quietly inflate an organisation’s footprint. For most companies using cloud-based AI tools, the associated emissions should be captured under Scope 3—typically Category 1 (Purchased Goods and Services), reflecting the energy and infrastructure behind outsourced computing. Forward-looking operators like Start Campus in Portugal offer a blueprint—powering their hyperscale data centres entirely with renewable energy and advanced cooling technologies to reduce AI’s environmental load at scale.
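To make the Scope 3 point concrete, the roll-up is straightforward arithmetic: energy consumed by AI workloads, scaled by data-centre overhead, multiplied by the grid's emission factor. The sketch below uses entirely illustrative figures (the GPU-hours, power draw, PUE, and grid intensity are assumptions for demonstration, not vendor data); in practice these inputs should come from your cloud or AI provider.

```python
# Rough Scope 3 (Category 1) estimate for cloud-based AI usage.
# Every figure below is an illustrative assumption — replace with
# actual energy and emission-factor data from your provider.

gpu_hours = 10_000       # assumed annual GPU-hours of AI workloads
power_kw = 0.7           # assumed average power draw per GPU (kW)
pue = 1.2                # assumed data-centre Power Usage Effectiveness
grid_intensity = 0.5     # assumed grid factor (kg CO2e per kWh)

# Total electricity, including data-centre overhead (cooling, etc.)
energy_kwh = gpu_hours * power_kw * pue

# Convert kg CO2e to tonnes for disclosure
emissions_t = energy_kwh * grid_intensity / 1000

print(f"Estimated energy: {energy_kwh:,.0f} kWh")       # 8,400 kWh
print(f"Estimated emissions: {emissions_t:,.1f} t CO2e") # 4.2 t CO2e
```

Even this crude model makes the governance point: the grid-intensity term dominates, which is why renewable-powered providers materially change the outcome.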

2. Value Chain Waste and Resource Pressure

AI infrastructure relies on specialised hardware—GPUs, custom chips, high-density servers—often housed in third-party data centres. While companies don’t handle this equipment directly, they are increasingly accountable for its lifecycle impacts under Scope 3 (Purchased Goods and Services; Waste Generated in Operations). A recent study by Google found that emissions from manufacturing and disposing of AI hardware can rival or exceed its operational footprint. ESG teams can start by requesting emissions and lifecycle data from AI and cloud service providers, and asking whether environmental impacts from infrastructure are reflected in the services they procure.

3. Opaque Supply Chains and Social Risk

AI systems depend on global supply chains that often operate with low visibility and inconsistent labour standards. In March 2025, the U.S. Department of Labor opened an investigation into Scale AI—a major data labelling firm—over alleged unpaid overtime and exploitative working conditions among gig-based annotators supporting AI model development (Reuters). While digital, these operations have very real human rights implications. ESG due diligence should extend to service providers like AI vendors, with the same rigour applied to physical goods suppliers.

4. Overreliance and Ethical Blind Spots

While AI excels at processing data, it lacks the judgment, context, and ethical awareness required for many ESG decisions. A now-infamous 2023 case saw a New York lawyer submit fictitious case citations generated by ChatGPT in a court filing—an example of an AI “hallucination” with real-world consequences (MIT Sloan). If used uncritically in ESG reporting or scenario modelling, similar risks could damage credibility or mislead stakeholders. Human oversight, validation protocols, and responsible prompt design are essential for integrity.

Is AI Good or Bad for ESG?

AI isn’t good or bad by nature—it’s a tool. What matters is how it’s used. For sustainability teams, the key question is whether AI is supporting your organisation’s ESG goals, or quietly working against them.

In most companies, AI sits within procurement, finance, operations, or IT, not sustainability. But if AI is being used to reduce headcount, accelerate procurement without oversight, or optimise logistics in carbon-intensive ways, those effects matter. ESG professionals should be asking: “What is this tool enabling?” and “Are we tracking its downstream impacts?”

Are We Reinforcing the Wrong Systems?

Will Alpine, a former Microsoft AI product lead, has warned that AI is being used “to keep us hooked on fossil fuels more than to get us off them.” In a recent interview, he outlined how AI is helping oil and gas companies increase extraction efficiency—lowering costs and expanding reserves. While tech companies promote the climate-positive use cases of AI, Alpine argues that these benefits are often outweighed by how AI is being commercially deployed. His view: ESG leaders must look beyond energy use and ask tougher questions about what AI systems are being trained to do—and who benefits.

For example, a company might use AI to optimise its workforce and reduce costs. On paper, that’s an efficiency gain. But if the result is widespread redundancies without a just transition plan, the social consequences may conflict with the company’s own ESG values. These are the kinds of trade-offs ESG teams should be mapping before AI tools are embedded into business decisions.

Putting AI to Work Responsibly

For AI to credibly support ESG performance, organisations must apply the same principles they expect from others: transparency, accountability, and responsible governance.

To start:

  • Build policies for responsible AI use, especially in ESG reporting, forecasting, and data processing
  • Reflect AI’s energy use and emissions in Scope 3 disclosures
  • Integrate AI procurement into sustainable sourcing and supply chain assessments
  • Maintain human oversight in ESG processes where context, ethics, and stakeholder values matter

There’s no shortcut to credible sustainability. But with thoughtful application, AI can help organisations move faster—with integrity, not illusion.