Feel the ASI

We are approaching a pivotal moment in human history,
and we need your help to navigate it responsibly.

A growing number of prominent AI researchers, tech leaders, and public officials believe that highly capable AI systems, possibly even superintelligence, could arrive within the next 5-10 years.

It sounds like science fiction, but it's a possibility we all need to take seriously. AI systems are approaching the ability to improve themselves without human help. Once that happens, progress could accelerate beyond our ability to predict or control.

What Leading Voices Are Saying

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Hundreds of AI researchers and industry leaders
Statement on AI Risk, organized by the Center for AI Safety

In a 2023 survey of 2,778 AI researchers, the average respondent estimated a 9% probability that advanced AI leads to extremely bad outcomes for humanity (e.g., human extinction).

Essential Resources

The AI Revolution: The Road to Superintelligence

Tim Urban (Wait But Why)

A classic 2015 exploration of AI, exponential growth, and why superintelligence matters. Still highly relevant and accessible.

Article · 30-40 min

The Problem

MIRI (Machine Intelligence Research Institute)

A clear explanation of the core challenges in AI alignment and why they matter.

Article · 15-20 min

AI 2027

Daniel Kokotajlo et al. (AI Futures Project)

A well-researched fictional scenario exploring plausible future outcomes in the race to AI supremacy. See also the AI Futures Blog for ongoing updates, forecast tracking, and policy analysis from the same team.

Scenario · 2-4 hours

Situational Awareness: The Decade Ahead

Leopold Aschenbrenner

A comprehensive analysis of the path from GPT-4 to AGI to superintelligence, including national security implications.

Report · 165 pages

If Anyone Builds It, Everyone Dies

Eliezer Yudkowsky & Nate Soares

A New York Times bestseller making the case that building superhuman AI with current techniques poses an existential threat to humanity, and why the situation is still preventable.

Book · 2025

AI Is A Massive Problem. Here's Why.

Palisade Research & SciencePetr

A thorough walkthrough of how modern AI actually works, why it's advancing so fast, and what the risks are. Well-produced and accessible even if you have no technical background.

Video · ~42 min

Taking AI Doom Seriously For 62 Minutes

Primer

A thorough, well-produced deep dive into AI existential risk.

Video · ~1 hour

AI Explained

YouTube Channel

Stay updated on major AI developments with balanced, sensible analysis from the creator of SimpleBench.

Channel · Ongoing

AI in Context

YouTube Channel

Thoughtful analysis and commentary on AI developments, exploring both the technical and societal implications.

Channel · Ongoing

BlueDot Impact Courses

BlueDot Impact

Free courses on AI safety and governance designed to sharpen your thinking on what actually matters. No technical background required for the governance track.

Course · Multi-week

Epoch AI

Epoch AI Research Institute

The gold standard for empirical AI trend data. Interactive dashboards tracking compute growth, model capabilities, and hardware trends. Hard numbers instead of hype or speculation.

Data · Dashboards

AISafety.com

Community Hub

A curated portal for the AI safety ecosystem: field map of organizations, job boards, study courses, community links, events calendar, and funding directories. The best starting point if you want to explore what's out there.

Directory · Hub

AI Futures Project Blog

Daniel Kokotajlo et al. (Substack)

Ongoing analysis and forecasting updates from the team behind AI 2027. Follow along as they track how reality compares to their scenario, grade their predictions, and explore what comes next.

Newsletter · Ongoing

Target Curve

Alon Torres (Substack)

A blog exploring why automating intelligence is fundamentally different from previous technological revolutions, and what it means for society. Covers the evidence, addresses common objections, and focuses on what we can actually do about it.

Newsletter · Ongoing

Planned Obsolescence

Ajeya Cotra (Substack)

Practical writing on AI preparedness from a leading safety researcher, focused on concrete milestones, timelines, and what we should actually expect.

Newsletter · Ongoing

Take Action

You now know what's at stake. Here's how to make a difference.

2 minutes right now

Add Your Name

Sign open letters calling for international AI safety standards. It takes seconds, and every signature strengthens the case for regulation.

5 minutes right now

Contact Your Representatives

Tell your elected officials you want real AI oversight. ControlAI provides pre-written templates. Just fill in your details and send.

5 minutes per week

Make a Microcommitment

One small action every week: share an article, sign a petition, email a representative. Microcommit sends you a simple weekly prompt so you never have to wonder what to do next.

A few hours per week

Join a Community

Connect with others who take this seriously. Organize local events, coordinate advocacy campaigns, or contribute to grassroots efforts pushing for a pause on frontier AI development.

Go deeper

Learn AI Safety

Take free courses on the technical and governance challenges of advanced AI. BlueDot Impact runs cohort-based programs that have trained thousands of people now working in AI safety.

Change your career

Work on AI Safety

If you want to dedicate your career to AI safety, 80,000 Hours offers free one-on-one advising to help you find high-impact roles in AI safety policy, research, and governance.

Spread the Word

Share this page and speak up about the need for responsible AI regulation.

"Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it's the only thing that ever has." - Margaret Mead