AGI Timelines
Timeline to Weak AGI
Timeline to Strong AGI
Timelines for AGI have trended down sharply since 2022. View full prediction on Metaculus →
AGI-Adjacent Stock Prices
Stock Price - Last 90 Days (%)
Market Capitalization - Last 3 Years (USD)
AGI expectations are not the only driver of stock prices, but markets price them in through AI company valuations. Sudden moves often reflect breakthrough announcements or capability demonstrations before they hit mainstream awareness.
LLM Arena Leaderboard
Real-time model capability rankings show which models human evaluators prefer in head-to-head comparisons. For the full rankings and methodology, visit https://huggingface.co/spaces/lmarena-ai/chatbot-arena-leaderboard.
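Pairwise human votes are turned into a single ranking with a rating system. Here is a minimal Elo-style sketch of that idea; it is a simplification, and the actual leaderboard fits a more sophisticated statistical model over all votes, but the intuition is the same: the winner of each head-to-head comparison gains rating points from the loser.

```python
def elo_update(r_a: float, r_b: float, a_wins: bool, k: float = 32.0):
    """Return updated ratings after one head-to-head human vote.

    r_a, r_b: current ratings of models A and B.
    a_wins:   True if the human preferred model A's answer.
    k:        update step size (32 is a common default, an assumption here).
    """
    # Probability A wins, implied by the current rating gap.
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    score_a = 1.0 if a_wins else 0.0
    delta = k * (score_a - expected_a)
    # Zero-sum update: whatever A gains, B loses.
    return r_a + delta, r_b - delta

# Two equally rated models; the winner gains 16 points.
print(elo_update(1000.0, 1000.0, a_wins=True))  # (1016.0, 984.0)
```

Upsets move ratings more than expected wins: if a low-rated model beats a high-rated one, `expected_a` is small, so the point swing is large.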
GPU Processing Power
The processing power of AI hardware continues to increase dramatically, enabling larger and faster models. NVIDIA's GPUs are currently the favored AI training platform. For rough comparison, the human brain is estimated to perform the equivalent of roughly 5,000 PFLOPS.
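A back-of-envelope sketch of what that brain estimate implies, assuming (purely for illustration) an H100-class GPU delivering roughly 2 PFLOPS of dense low-precision throughput; the per-GPU figure is an assumption, not an official spec, and FLOPS comparisons between brains and chips are crude at best.

```python
# Rough comparison of GPU throughput to the ~5,000 PFLOPS brain
# estimate cited above. GPU_PFLOPS is an illustrative assumption
# (roughly H100-class dense low-precision throughput).
BRAIN_PFLOPS = 5_000   # rough brain estimate from the text
GPU_PFLOPS = 2         # assumed per-GPU throughput

gpus_needed = BRAIN_PFLOPS // GPU_PFLOPS
print(f"~{gpus_needed} GPUs to match the brain estimate")  # ~2500
```

By this crude yardstick, a single modern training cluster of a few thousand GPUs already sits in the same raw-throughput ballpark as one brain, which is part of why hardware curves feature so heavily in timeline arguments.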
Data Center Megaprojects
Leading-edge AI model training happens in massive data centers that are larger, and consume far more power, than traditional data centers.
Endangered Professions (2025-2030)
Profession | People employed | % of all U.S. jobs | Why they're cooked
---|---|---|---
Drivers – ride-share, delivery & long-haul trucking | ≈ 3,700,000 (2,200,000 heavy-truck, 1,100,000 light-truck, 400,000 taxi/ride-hail) | 2.3% | Fixed-route autonomous rigs are already hauling freight on interstates and delivery bots are creeping into cities; "first/last-mile" work keeps humans in the loop, but head-count pressure rises fast.
Call-center & customer-service reps | 3,000,000 | 1.8% | LLM agents already resolve Tier-1 tickets in telecom, banking and retail, slashing queue time and staffing needs.
Software engineers (devs & QA) | 1,900,000 | 1.2% | Cursor-style tools now write boilerplate and even entire features; firms expect fewer junior hires and broader "one dev ≈ many" productivity.
Bookkeeping, accounting & auditing clerks | 1,700,000 | 1.0% | Cloud ERPs now bundle GPT-powered invoice coding and anomaly detection, slashing routine reconciliation and data entry.
Mental-health therapists & counselors | 450,000 | 0.3% | Chatbots provide 24/7 text therapy and triage, yet regulation and the need for human empathy limit full replacement; expect augmentation more than attrition.
Graphic designers | 270,000 | 0.2% | Midjourney, Imagen & Canva's AI layouts churn out logos and ads in seconds; humans shift to art direction while routine tasks vanish.
Paralegals & legal assistants | 370,000 | 0.2% | AI research and drafting tools cut discovery hours and first-pass motion writing; senior review stays, but routine doc work shrinks.
Video production & actors (film/video editors + on-screen talent) | 160,000 | 0.1% | Generative video, voice cloning and "digital double" tech reduce shoots, reshoots and background roles; synthetic actors already headline ads.
Writers & authors | 150,000 | 0.1% | 4o-class models crank out press releases, blogs and even novels; human writers move to high-touch investigative or brand-voice work.
Economic disruption precedes AGI arrival as narrow AI systems automate specific job functions. These displacement patterns preview the broader transformation coming when AI matches human general intelligence.
P(doom) Estimates
P(doom) is a rough but useful shorthand for the estimated probability of things going very, very wrong. Leading AI thinkers' doom estimates reveal deep disagreements about AGI safety. The wide range, from 0.01% to 99%, reflects genuine uncertainty about humanity's biggest challenge. These are people whose opinions I think are especially worth considering, even when I don't agree with them. For a more complete list, visit https://pauseai.info/pdoom.
People to Follow Starter Pack
Name | Why They're Worth Following | Doominess
---|---|---
Ethan Mollick | Wharton professor and author of Co-Intelligence; posts daily experiments on how to use AI in work and education. Stresses near-term misuse over apocalypse and offers practical playbooks. | Low: generally optimistic; focuses on opportunity and manageable risks.
Dwarkesh Patel | Host of The Dwarkesh Podcast; interviews leading AI researchers, founders, and historians. Patel typically spends weeks studying each guest's work, which lets him fire off razor-sharp follow-up questions and surface fresh angles on scaling laws, timelines, and incentives. | Medium: clear-eyed about acceleration risks but favors charging full steam ahead.
Liron Shapira | Entrepreneur who runs the "Doom Debates" series and writes sharp Substack pieces on failure modes of alignment. Regularly spars with both optimists and x-risk skeptics; assigns double-digit % odds to catastrophe. | High: vocal about AI-driven existential risk.
Key voices shape AGI discourse through research breakthroughs, policy decisions, and public communications. Following the right people provides early signals on developments that will matter.
Reading List Starter Pack
Title | Author | Source | Publication Year | Why Read? |
---|---|---|---|---|
The Information | Jessica Lessin and team | theinformation.com | Ongoing | Premium tech journalism with exclusive insider reporting on AI company strategy, funding rounds, and executive moves. Essential for understanding the business side of the AI race. |
SemiAnalysis | Dylan Patel and team | semianalysis.com | Ongoing | Deep technical analysis of semiconductor trends, AI chip architectures, and compute infrastructure. Unmatched insights into the hardware foundation powering the AI revolution. |
The AI Revolution two-part post | Tim Urban | waitbutwhy.com | 2015 | A funny, stick-figure tour of narrow → general → super-intelligence that still gets referenced in boardrooms and Reddit threads alike. Perfect on-ramp before diving into heavier material. |
Superintelligence: Paths, Dangers, Strategies | Nick Bostrom | Oxford Univ. Press (print/e-book) | 2014 | The scholarly cornerstone of modern AI-risk talk. Bostrom maps scenario space, alignment pitfalls, and governance options, dense but foundational. |
Situational Awareness: The Decade Ahead | Leopold Aschenbrenner | situational-awareness.ai essay series | 2024 | A former OpenAI researcher's data-backed case that GPT-4-class scaling puts AGI on a 2027 track, and why geopolitics will reshape the race. Sharp, contrarian, and full of compute math. |
AGI 2027 | AI 2027 project team | ai-2027.com forecast site | 2025 | A month-by-month model-performance forecast that stress-tests the "AGI in five years" meme. Useful as a living baseline to compare against your own timeline intuitions. |
Blood in the Machine: The Origins of the Rebellion Against Big Tech | Brian Merchant | Little, Brown & Company (publisher page) | 2023 | We've been here before: Merchant retraces how the first big automation wave, industrial-era textile machines, upended work, provoked backlash, and ultimately reshaped society, offering clear lessons for today's AI surge. |
Understanding AGI requires grasping both the technical possibilities and societal implications. These foundational texts provide essential context for navigating the most consequential transition in human history.
Takeoff Tracker Blog
For the latest thoughts on the AGI story: blog.takeofftracker.com