Today, I’m talking with Verge senior AI reporter Hayden Field about some of the people responsible for studying AI and deciding in what ways it might… well, ruin the world. Those folks work at Anthropic as part of a group called the societal impacts team, which Hayden just spent time with for a profile she published this week.
The team is just nine people out of more than 2,000 who work at Anthropic. Their only job, as the team members themselves say, is to study and publish, quote, “inconvenient truths” about how people are using AI tools, what chatbots might be doing to our mental health, and how all of that might be having broader ripple effects on the labor market, the economy, and even our elections.
That of course brings up a whole host of problems. The most important is whether this team can remain independent, or even exist at all, as it publicizes findings about Anthropic’s own products that might be unflattering or politically fraught. After all, there’s a lot of pressure on the AI industry in general and Anthropic specifically to fall in line with the Trump administration, which put out an executive order in July banning so-called “woke AI.”
Verge subscribers, don’t forget you get exclusive access to ad-free Decoder wherever you get your podcasts. Head here. Not a subscriber? You can sign up here.
If you’ve been following the tech industry, the outline of this story will feel familiar. We’ve seen this most recently with social media companies and the trust and safety teams responsible for doing content moderation. Meta went through countless cycles of this, where it dedicated resources to solving problems created by its own scale and the unpredictable nature of products like Facebook and Instagram. And then, after a while, it seems like the resources dried up, or Mark Zuckerberg got bored or more interested in MMA or just cozying up to Trump, and the products didn’t really change to reflect what the research showed.
We’re living through one of those moments right now. The social platforms have slashed investments in election integrity and other forms of content moderation. Meanwhile, Silicon Valley is working closely with the Trump White House to resist meaningful attempts to regulate AI. So as you’ll hear, that’s why Hayden was so interested in this team at Anthropic. It’s essentially unique in the industry right now.
In fact, Anthropic is an outlier because of how amenable CEO Dario Amodei has been to calls for AI regulation, both at the state and federal level. Anthropic is also seen as the most safety-first of the leading AI labs, because it was formed by former research executives at OpenAI who were worried their concerns about AI safety weren’t being taken seriously. There are actually quite a few companies formed by former OpenAI people worried about the company, Sam Altman, and AI safety. It’s a real theme of the industry that Anthropic seems to be taking to the next level.
So I asked Hayden about all of these pressures, and how Anthropic’s reputation within the industry might be affecting how the societal impacts team functions, and whether it can really meaningfully study and perhaps even influence AI product development. Or if, as history suggests, this will just look good on paper until the team quietly goes away. There’s a lot here, especially if you’re interested in how AI companies think about safety from a cultural, moral, and business perspective.
A quick announcement: We’re running a special end-of-the-year mailbag episode of Decoder later this month where we answer your questions about the show: who we should talk to, what topics we should cover in 2026, what you like, what you hate. All of it. Please send your questions to decoder@theverge.com and we’ll do our best to feature as many as we can.
If you’d like to read more about what we discussed in this episode, check out these links:
- It’s their job to keep AI from destroying everything | The Verge
- Anthropic details how it measures Claude’s wokeness | The Verge
- The White House orders tech companies to make AI bigoted again | The Verge
- Chaos and lies: Why Sam Altman was booted from OpenAI | The Verge
- Anthropic CEO Dario Amodei just made another call for AI regulation | Inc.
- How Elon Musk is remaking Grok in his image | NYT
- Anthropic tries to defuse White House backlash | Axios
- New AI battle: White House vs. Anthropic | Axios
- Anthropic CEO says company will pursue Gulf state investments after all | Wired
Questions or comments about this episode? Hit us up at decoder@theverge.com. We really do read every email!