Experts think AI regulation is possible, even in the face of tech giants: 'The future set by them is not the future set by us'


No matter where you fall on your everyday use of generative artificial intelligence tools like ChatGPT — love it, hate it, scared of it, or a full-on early adopter — you’ve likely been subjected to a common refrain when it comes to AI: We’re cooked.

It’s hard not to feel that way. 

The data centers that power AI are widening an ever-growing digital divide between the “haves” and “have-nots,” according to reporting from The New York Times, with OpenAI (the company that owns ChatGPT) building its data centers primarily in the wealthier Northern Hemisphere. 

At the same time, those data centers threaten the very livability of these areas. 

For instance, Elon Musk’s latest xAI development in Memphis, Tennessee, is allegedly being built with turbines that lack the required pollution controls, as Politico reports. This means a predominantly Black community that already leads the state in emergency room visits for asthma could face further risks of disability and death.

The Massachusetts Institute of Technology just released a first-of-its-kind study showing that ChatGPT users may already be losing their ability to think critically, with the technology stunting the natural growth that comes from generating one’s own ideas.

And with rapid technological development — pushed by Silicon Valley’s richest and most powerful — comes a greater risk of irresponsible AI. 

Fueled by the allure of wealth, power, and industry dominance, the race for AI development has led publications like Scientific American to ask: “Could AI Really Kill Off Humans?” (Spoiler alert: “It turns out it is very hard — though not completely out of the realm of possibility — for AI to kill us all,” the magazine concludes.)

Considering all of this, in addition to AI’s impact on jobs, creative expression, and the complicated umbrella of human rights, things look and sound bleak.

Kate Brennan of the AI Now Institute
Kate Brennan. Photo courtesy of the AI Now Institute

But Kate Brennan — a lawyer and the associate director of the AI Now Institute, an independent think tank founded in 2017 to provide expert analysis and policy guidance to rethink the trajectory of AI — believes there is a key element missing in the discourse.

“My good news is that the current trajectory of AI is not inevitable, and this moment is poised for disruption,” she told Good Good Good.

“We’re given this endless slew of AI in the headlines telling us that it’s going to enter our lives, take our jobs, and also solve all our problems,” she said. “What AI firms do is position us to feel like we’re on the receiving end, or that it’s something passive, that we’re just observers in this march towards a technology that is going to happen to us. That can certainly leave you feeling disempowered.”

Even as tech companies petition for expanded contracts from the federal government, the public reels from ethical concerns over surveillance and data collection, and the geopolitical landscape seems increasingly primed for earth-shattering cybersecurity attacks, Brennan has hope.

She says that although AI may seem mind-bogglingly advanced and newfangled, many fights have come before and are intertwined with this current technological revolution.

“Instead of accepting this vision offered to us by big tech companies, we can reassert a public agency over the vision of AI's future,” she said. 

“We can tap into the existing groundwork of organizing happening right now. We can demystify what AI firms are saying, and we can bring the receipts. The future set by them is not the future set by us.”

Illustration by Carra Sykes for Good Good Good

In addition to creating new regulations for the emerging world of AI technology, she said, many of the safeguards needed to keep AI — and the companies that offer it — from going off the rails already exist: labor union organizing, copyright law, antitrust law, and climate regulations. 

She offers up a handful of examples.

“Amazon Employees for Climate Justice are doing an incredible job of pushing back on Amazon’s climate impact,” she said. “National Nurses United, which is the largest union of registered nurses in the United States, is incredible at raising the alarm for how and where automation is impacting their clinical decision-making and patient safety.”

Brennan said the Hollywood writers and actors strikes of 2023 also paint a picture of how union organizing can make a difference with the implementation of AI. Many of these strikes pushed back against the use of AI in film and television production, and the workers won. 

This success also led to the passage of legislation in California that protects against certain unauthorized digital replicas of a person, requires AI developers to watermark AI-generated images, and guards against nonconsensual sexually explicit digital content.

Brennan has also seen wins across climate activism groups, especially in efforts to pause, slow, or stop data center construction. 

“AI firms want you to believe that AI is this magical thing, this abstract thing that happens in the cloud, but actually it is deeply material, and it is happening on the front lines of many of our communities, next to many Black and brown communities, low income communities,” Brennan said.

But moratoriums on data center development are working.

“It’s working in the Netherlands, in Chile. Their community-led activism led to a pause on data center construction,” Brennan said. 

“Citizens Action Coalition in Indiana also highlights how data center expansion is harming ordinary rate payers and the energy grid. A number of data center projects have stopped because of their advocacy. And Memphis Community Against Pollution has pushed back against the health and environmental effects, especially in historically Black neighborhoods.”

For everyday people whose work requires the adoption of AI, or who have indeed benefited from asking ChatGPT a math question, there is still a role to play, and it doesn’t require swearing off AI tools entirely.

“Rather than spending time engaging in a debate about whether any individual technology, like ChatGPT, is good, the question we should be asking instead is, ‘Is OpenAI’s unaccountable willpower good for society?’” Brennan said. 

“When we look at the power that these firms hold over our lives, we can target the sources of that power. I think the environmental justice movement does an amazing job at this. When people say you shouldn’t use plastic straws or we should recycle, yes, those are important, but sometimes they make it feel like it’s an individual calculation instead of what we know is the systemic mass polluting and industrialization of our world. And I think that holds here.” 

Most of all, Brennan’s call to action is that people reject the narrative that an unguarded AI future is the only one.

Other advocates, like Tristan Harris of the Center for Humane Technology, argue for a similar path forward.

“Your role isn't to be responsible for solving all of this. Your role is to be one part of humanity’s collective immune system — to break others out of the trance of fatalism and inevitability, and advocate for another narrow path,” Harris said in his 2025 TED Talk.

Harris outlined a number of ways to protect humanity from the current “chaotic risks” of the AI frontier: whistleblower protections, limits on ubiquitous surveillance, product liability standards, shared agreements on what is “too far” in tech development, and more. 

He concluded that in this window, framed by restraint and “technological maturity,” humanity can embrace the incredible progress made possible by AI, without allowing that technology to cause the dystopian “Black Mirror” plot lines so many of us fear.

“We can choose to stop being seduced by the ‘possible’ and bravely confront the reality of the ‘probable,’” Harris said. “And we can work to change the probable path if we don’t like where it takes us.”

Harris and Brennan are not the only ones doing this work. 

Stanford’s Institute for Human-Centered Artificial Intelligence is co-directed by Fei-Fei Li, who is commonly called the “Godmother of AI.” Her team researches AI technologies that “enhance human productivity and quality of life,” while offering educational and policy recommendations that prioritize ethical development and governance. 

There’s also the Ada Lovelace Institute, a United Kingdom-based research institute named after the world’s first computer programmer. It works to ensure “data and AI work for people and society,” by advising on how AI can be justly and equitably distributed across Europe.

The nonprofit Responsible AI Institute also works in this area, offering a global certification to AI systems and practitioners that are dedicated to improving the social and economic well-being of society.

Whether it’s through law and policy, tech development, business regulation, or human rights, the fight to create a safe and stable AI future is happening. 

There is an entire global community of experts whose express purpose is to help humans — and the technology we create — be a force for good.

“We have been in this fight before,” Brennan said. “We don’t want to go into doomerism or unearned celebration, two ends of a pole that will not serve us. Right in the middle is a clear-eyed assertion that the public — we — should get to decide what happens to our society.”


A version of this article was originally published in The 2025 Technology Edition of the Goodnewspaper.

Header image by Kevin Ku via Pexels

Article Details

December 9, 2025 10:15 AM