The AI Situation Room is an independent, continuously updated observatory that tracks the global state of artificial intelligence through quantitative measurement. It synthesises heterogeneous data streams into composite indices designed for longitudinal monitoring rather than point-in-time snapshots.
"How can we quantify the global state of AI across adoption, capability, geopolitics, market sentiment, and public interest—and present these dimensions as a single, interpretable dashboard?"
The dashboard is organised around five orthogonal measurement dimensions: adoption, capability, geopolitics, market sentiment, and public interest.
Data is updated through a combination of semi-automated API integrations (arXiv, Google Trends) and manual curation via a restricted admin interface. The dashboard is not a real-time feed; update frequency varies by dataset, from daily (ETF prices) to quarterly (robotics density). All timestamps reflect the date of last verified update, not the date of original publication by the source.
Five headline indices distil the dashboard's raw data into interpretable signals. Each index is constructed from weighted sub-components, documented below with their formulae, component definitions, source mappings, and weighting rationale.
A normalised composite (0–1 scale) capturing the breadth and depth of AI integration across society. The index deliberately weights enterprise adoption most heavily, reflecting the thesis that commercial deployment is the strongest near-term signal of systemic AI integration.
WAI = 0.10 × A + 0.45 × E + 0.20 × D + 0.25 × I
| Component | Symbol | Weight | Source | Proxy Metric |
|---|---|---|---|---|
| Public Awareness | A | 10% | Google Trends | Normalised global search interest for “Artificial Intelligence” topic (0–100) |
| Enterprise Adoption | E | 45% | Deloitte State of AI | % of workers with access to AI tools; agentic AI deployment rate |
| Developer Ecosystem | D | 20% | Artificial Analysis, GitHub, arXiv API | Composite of coding tool maturity, open-source parity, research velocity, and framework traction |
| Industrial Automation | I | 25% | IFR / Google Deep Search | Robotics density (robots per 10k manufacturing employees) |
D = 0.30 × coding_maturity + 0.25 × os_parity + 0.25 × research_velocity + 0.20 × framework_traction
| Sub-component | Weight | Source | Formula |
|---|---|---|---|
| Coding Maturity | 30% | Artificial Analysis | avg(top 3 Coding benchmark scores) ÷ 100 |
| Open-Source Parity | 25% | Artificial Analysis | Best open-source intelligence score ÷ best overall intelligence score |
| Research Velocity | 25% | arXiv API | World papers published ÷ 2,000,000 |
| Framework Traction | 20% | GitHub | Total agentic framework stars ÷ 5,000,000 |
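Combining the WAI formula and the D sub-composite above, the index can be sketched in a few lines of Python. The input values below are illustrative placeholders, not dashboard data:

```python
def developer_ecosystem(coding_maturity, os_parity, research_velocity, framework_traction):
    """Developer Ecosystem composite D, per the sub-component weights above (inputs on a 0-1 scale)."""
    return (0.30 * coding_maturity + 0.25 * os_parity
            + 0.25 * research_velocity + 0.20 * framework_traction)

def world_ai_adoption(awareness, enterprise, developer, industrial):
    """World AI Adoption Index: WAI = 0.10*A + 0.45*E + 0.20*D + 0.25*I."""
    return 0.10 * awareness + 0.45 * enterprise + 0.20 * developer + 0.25 * industrial

# Illustrative inputs only, not actual dashboard values.
d = developer_ecosystem(0.75, 0.90, 0.40, 0.30)
wai = world_ai_adoption(0.70, 0.55, d, 0.35)
```

Because every component is pre-normalised to a 0–1 scale and the weights sum to 1.0, the composite also stays on a 0–1 scale.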
An experimental metric estimating cumulative progress toward artificial general intelligence, expressed as a percentage. The index uses a geometric mean of three sub-dimensions to enforce the constraint that balanced progress across all fronts is required—excellence in one area cannot compensate for near-zero capability in another.
AGI = (Intelligence × Digital_Agency × Physical_Agency)^(1/3)
| Sub-Index | Current Value | Definition |
|---|---|---|
| Intelligence | 60.00% | Frontier model performance on reasoning, coding, and knowledge benchmarks relative to estimated human-expert ceiling |
| Digital Agency | 57.74% | Ability of AI systems to autonomously plan, execute multi-step tasks, use tools, and self-correct in digital environments |
| Physical Agency | 3.50% | Robotic manipulation, locomotion, and real-world task completion relative to human dexterity and adaptability |
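The geometric mean can be computed directly from the table's current sub-index values; this is a minimal sketch, not the dashboard's production code:

```python
def progress_to_agi(intelligence, digital_agency, physical_agency):
    """Geometric mean of the three pillars (all as percentages).
    A near-zero pillar drags the composite toward zero, enforcing the
    balanced-progress constraint described above."""
    return (intelligence * digital_agency * physical_agency) ** (1 / 3)

# Sub-index values from the table above.
agi = progress_to_agi(60.00, 57.74, 3.50)
```

Note how the low Physical Agency pillar dominates: the geometric mean lands near 23%, far below the ~40% an arithmetic mean of the same three values would give.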
A forward-looking projection of when each AGI pillar reaches 100%, derived from the Progress to AGI index above. The estimated year is determined by the bottleneck—the slowest pillar to reach full capability. This is a moderate scenario using conservative growth assumptions.
months_to_100 = ln(100 / current_pct) / r
where r is the pillar's monthly exponential growth rate. The estimated year is then current_year + months_to_100 / 12, taken from the pillar with the largest months_to_100 (the bottleneck).
| Pillar | Growth Model | Projected Year |
|---|---|---|
| Intelligence (P1) | Historical exponential fit from frontier model scores (3 data points, Jan 2024–present) | ~2027 |
| Digital Agency (P2) | Historical exponential fit from agentic adoption data (Deloitte survey, 2 data points) | ~2027 |
| Physical Agency (P3) | Industry-forecast CAGR of 35% (moderate estimate, IFR & analyst consensus for robotics deployment) | ~2037 |
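The bottleneck logic can be sketched as follows. The growth rates here are illustrative assumptions, not the dashboard's fitted values (the 0.025 monthly rate approximates the 35% annual CAGR used for Physical Agency):

```python
import math

def months_to_100(current_pct, r):
    """Months until a pillar reaches 100%, given monthly exponential growth rate r."""
    return math.log(100 / current_pct) / r

def estimated_agi_year(current_year, pillars):
    """pillars: {name: (current_pct, monthly_growth_rate)}.
    The slowest pillar (largest months_to_100) sets the estimated year."""
    bottleneck = max(months_to_100(pct, r) for pct, r in pillars.values())
    return current_year + bottleneck / 12

# Illustrative growth rates only; the dashboard fits these from historical data.
pillars = {
    "Intelligence": (60.00, 0.030),
    "Digital Agency": (57.74, 0.028),
    "Physical Agency": (3.50, 0.025),  # ~35% annual CAGR as a continuous monthly rate
}
year = estimated_agi_year(2026, pillars)
```

With these assumed rates, Physical Agency needs roughly 134 months, placing the bottleneck year in the late 2030s, consistent with the ~2037 projection above.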
A market-sentiment gauge measuring the ratio of hype momentum to fundamental value creation, modulated by a substance factor. Higher values indicate greater divergence between narrative enthusiasm and demonstrated economic utility.
Bubble = (H / V) × S
The Hype Growth component (H) blends two signals: cumulative growth since the earliest recorded data and recent weighted year-over-year momentum. This captures both the total accumulated hype and whether it is still accelerating or cooling off.
H = 0.5 × H_cumulative + 0.5 × H_recent
| Component | Definition |
|---|---|
| H_cumulative | Current-year average search interest divided by the earliest-year average; measures total hype growth over the full observation window |
| H_recent | Weighted average of year-over-year growth ratios (exponential decay = 0.5): the most recent year receives ~52% weight, and each older year's weight halves. Captures whether hype is accelerating or decelerating |
| Value Growth (V) | Average of latest/earliest price ratios across six AI-focused ETFs |
| Substance Modifier (S) | Modifier combining real-adoption and capability-convergence signals: S = 1 − (adoption × 0.5) + (convergence × 0.3). Higher adoption dampens the bubble reading; higher convergence raises it |
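A minimal sketch of the blend described above. The function and variable names are assumptions for illustration, not the dashboard's code, and the input series would be yearly average search-interest values:

```python
def hype_growth(yearly_interest):
    """H: 50/50 blend of cumulative growth and decay-weighted YoY momentum.
    yearly_interest: yearly average search-interest values, oldest first."""
    h_cumulative = yearly_interest[-1] / yearly_interest[0]
    # Year-over-year growth ratios, most recent last.
    yoy = [b / a for a, b in zip(yearly_interest, yearly_interest[1:])]
    # Exponential-decay weights: most recent year weight 1, each older year halved.
    weights = [0.5 ** i for i in range(len(yoy))][::-1]
    h_recent = sum(w * g for w, g in zip(weights, yoy)) / sum(weights)
    return 0.5 * h_cumulative + 0.5 * h_recent

def bubble_index(h, v, s):
    """Bubble = (H / V) × S."""
    return (h / v) * s
```

With five year-over-year ratios, the most recent year's normalised weight is 1 / (1 + 0.5 + 0.25 + 0.125 + 0.0625) ≈ 0.516, which is where the "~52%" figure above comes from.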
A composite country-level ranking scoring 12 nations across five strategic pillars. Each pillar is normalised to a 0–100 scale using min-max normalisation within the observed dataset, then weighted and summed.
Power_c = 0.20 × Infra + 0.15 × HW + 0.20 × IP + 0.15 × Research + 0.30 × Models
| Pillar | Weight | Proxy Indicators |
|---|---|---|
| Infrastructure | 20% | Number of AI-relevant datacenters (hyperscale and colocation) |
| Hardware | 15% | Domestic AI chip / semiconductor fabrication facilities |
| IP (Patents) | 20% | Total AI-related patent registrations (Lens.org) |
| Research | 15% | AI papers published since Jan 2026 (arXiv API) |
| Models | 30% | Number of frontier models originating from the country ranked on Artificial Analysis |
An exponential-growth metric measuring how many months it takes the frontier AI intelligence score to double. Analogous to Moore's Law for transistors, this index tracks the pace of AI capability improvement rather than absolute capability. A declining doubling time indicates accelerating progress.
T_double = Δt / log2(Score_end / Score_start)
| Component | Definition |
|---|---|
| Overall Doubling Time | Months for the best Intelligence benchmark score to double, computed across the full historical time span |
| Recent Doubling Time | Doubling time computed from the last two data points only, capturing the most recent pace of progress |
| Trend | Accelerating if recent < 85% of overall; Decelerating if recent > 115% of overall; Steady otherwise |
Frontier scores are drawn from the model_rankings table (Intelligence ranking type). At each distinct date_updated, the highest score among all models is selected. The resulting time series of frontier scores is fitted to an exponential model to extract the doubling period. This approach assumes exponential growth; if the underlying trajectory is sigmoidal (approaching a ceiling), the metric will show deceleration. A minimum of two data points is required, and the metric returns null if insufficient history exists.
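The two-point form of the calculation and the trend classification can be sketched as follows (a simplified illustration; the dashboard fits the full frontier-score series, and these names are assumptions):

```python
import math

def doubling_time(score_start, score_end, months_elapsed):
    """T_double = months_elapsed / log2(score_end / score_start).
    Returns None when there is no growth to measure."""
    if score_end <= score_start:
        return None
    return months_elapsed / math.log2(score_end / score_start)

def trend(recent, overall):
    """Classify pace per the thresholds above: recent vs. overall doubling time."""
    if recent < 0.85 * overall:
        return "Accelerating"
    if recent > 1.15 * overall:
        return "Decelerating"
    return "Steady"
```

For example, a score that doubles from 40 to 80 over 12 months yields a doubling time of exactly 12 months; a recent doubling time of 8 months against that overall figure would classify as Accelerating.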
The following table catalogues every dataset consumed by the dashboard, including provenance, collection method, temporal coverage, and known constraints. All data is stored as JSON exports from a SQLite database.
| Field | Detail |
|---|---|
| Source | Artificial Analysis |
| Records | 57 models across 5 ranking categories |
| Categories | Intelligence, Coding, Agentic, Text-to-Image, Open Source Intelligence |
| Collection | Manual curation from leaderboard snapshots |
| Frequency | Weekly to bi-weekly |
| Key Fields | model_name, score, ranking_type, country_of_origin, date_updated |
| Limitations | Benchmark scores may not reflect real-world performance; leaderboard methodology is controlled by Artificial Analysis and may change without notice |
| Field | Detail |
|---|---|
| Sources | Lens.org (patents), arXiv API (papers), Google Deep Search (datacenters, hardware) |
| Records | 52 records across 4 indicator types |
| Indicators | Datacenters (14), Hardware factories (13), AI Patents (13), Papers published (13) |
| Countries | 12 nations + world aggregate |
| Collection | API queries (arXiv), database search (Lens.org), manual verification (infrastructure counts) |
| Frequency | Monthly |
| Limitations | Datacenter counts are approximations; patent databases have filing-to-publication lags of 12–18 months; Arxiv skews toward English-language and Western-institution publications |
| Field | Detail |
|---|---|
| Source | Google Trends (Artificial Intelligence topic) |
| Records | 793 data points |
| Coverage | March 2021 – present, monthly granularity |
| Metric | Relative search interest (0–100 scale, normalised to peak within the period) |
| Collection | Google Trends export + manual entry |
| Frequency | Monthly |
| Limitations | Google Trends data is relative, not absolute; geographic weighting favours countries with higher internet penetration and Google market share; weekly Google Trends exports are averaged to monthly granularity. Using the “Artificial Intelligence” topic (rather than a keyword) improves cross-language consistency but may still miss niche AI-related queries |
| Field | Detail |
|---|---|
| Source | Google Finance |
| ETFs Tracked | Roundhill Generative AI & Technology (CHAT), Global X AI & Technology (AIQ), iShares Future AI & Tech (ARTY), Global X Robotics & AI (ROBO), Autonomous Tech & Robotics (ARKQ), First Trust Nasdaq AI & Robotics (ROBT) |
| Records | 45 price points |
| Coverage | March 2023 – present |
| Collection | Manual price recording from Google Finance |
| Frequency | Weekly to bi-weekly |
| Limitations | Does not capture private market valuations, venture capital flows, or non-US-listed AI equities; ETF composition changes over time |
| Field | Detail |
|---|---|
| Source | Deloitte State of AI in Enterprise |
| Records | 4 (2 metrics × 2 time periods) |
| Metrics | Worker access to AI tools (%), Agentic AI adoption rate (%) |
| Collection | Manual extraction from Deloitte report publications |
| Frequency | Annual (aligned with Deloitte publication cycle) |
| Limitations | Survey-based; sample skews toward large enterprises in developed markets; self-reported adoption may overstate actual integration depth |
| Field | Detail |
|---|---|
| Source | SimilarWeb Pro |
| Records | 4 websites (ChatGPT, Claude, Gemini, Copilot) |
| Metric | Estimated monthly unique visitors |
| Collection | Manual extraction from SimilarWeb dashboard |
| Frequency | Monthly |
| Limitations | SimilarWeb estimates are modelled, not measured; API-only usage (not through web interface) is not captured; mobile app traffic may be under-counted |
| Field | Detail |
|---|---|
| Source | GitHub |
| Records | 5 frameworks (Openclaw, CrewAI, smolagents, NemotronClaw, PydanticAI) |
| Metric | GitHub star count |
| Collection | Manual snapshot from GitHub repository pages |
| Frequency | Bi-weekly |
| Limitations | Stars are a popularity signal, not a usage metric; does not capture enterprise adoption via private forks or internal deployments; star-farming is possible |
| Field | Detail |
|---|---|
| Source | International Federation of Robotics (IFR) via Google Deep Search |
| Records | 4 (2 metrics × 2 time periods) |
| Metrics | Total industrial robots deployed (millions), Robotics density (per 10k employees) |
| Collection | Secondary source extraction (IFR reports cited via search) |
| Frequency | Quarterly to annual |
| Limitations | IFR data has a 6–12 month reporting lag; "industrial robots" definition excludes consumer, agricultural, and service robots; density metric uses manufacturing employment only |
| Field | Detail |
|---|---|
| Source | Derived — computed from datasets 3.1–3.8 |
| Records | 5 composite indices |
| Indices | World AI Adoption, Progress to AGI, AI Bubble Index, Global AI Power Index, Intelligence Doubling Time |
| Collection | Server-side computation on data export |
| Frequency | Recomputed on each data update |
| Limitations | Composite quality is bounded by the accuracy and timeliness of upstream datasets; see Section 6 for a full discussion of limitations |
Data flows through a four-stage pipeline from source acquisition to frontend rendering. No stage is fully automated; human verification is required at each checkpoint to ensure data integrity.
All validated data is stored in a SQLite database (ai_situation_room.db) with
9 normalised tables. The schema enforces primary keys, non-null constraints on required
fields, and foreign-key-like consistency for country codes.
A Python export script (export_data.py) serialises each table to JSON in the
data/ directory alongside a metadata.json manifest containing
row counts and export timestamps. The frontend consumes these JSON files directly via
fetch requests, with Chart.js handling visualisation.
Country-level indicators in the Global AI Power Index are normalised using min-max scaling within each pillar:
X_norm = (X - X_min) / (X_max - X_min) × 100
This maps each country's raw score to a 0–100 range within the observed dataset. The normalisation is relative, not absolute—a country scoring 100 is the best in the current sample, not at a theoretical maximum.
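A sketch of the pillar normalisation and weighted sum (the pillar keys and country values here are fabricated for illustration):

```python
def min_max_normalise(values):
    """Scale a dict of {country: raw_value} to 0-100 within the observed sample."""
    lo, hi = min(values.values()), max(values.values())
    if hi == lo:
        return {c: 0.0 for c in values}
    return {c: (v - lo) / (hi - lo) * 100 for c, v in values.items()}

WEIGHTS = {"infra": 0.20, "hw": 0.15, "ip": 0.20, "research": 0.15, "models": 0.30}

def power_index(pillars):
    """pillars: {pillar_name: {country: raw_value}}. Returns {country: score}."""
    normalised = {p: min_max_normalise(v) for p, v in pillars.items()}
    countries = next(iter(pillars.values())).keys()
    return {c: sum(WEIGHTS[p] * normalised[p][c] for p in WEIGHTS)
            for c in countries}
```

Because normalisation happens within the sample, adding or removing a country can shift every other country's score, which is the relative-not-absolute caveat noted above.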
The World AI Adoption Index components are expressed as percentages or ratios before weighting. Where raw data is not natively percentage-based (e.g., GitHub stars), it is normalised against a fixed estimated denominator (e.g., total agentic framework stars ÷ 5,000,000).
Composite index weights are determined through editorial judgement, not statistical optimisation. Weights reflect the research team's assessment of each component's relative importance to the phenomenon being measured. All weights are disclosed in Section 2 and should be interpreted as explicit analytical choices, not objective truths.
Transparency about limitations is essential for responsible interpretation. The following constraints should be considered when citing or acting upon dashboard outputs.
This methodology has not been peer-reviewed or published in an academic venue. The indices are designed for monitoring and discussion, not for policy decisions or investment advice. Users are encouraged to examine the underlying data and form independent assessments.
The methodology is versioned to track analytical evolution. Breaking changes to index formulae or weight structures will increment the major version; data source additions or corrections increment the minor version.
| Date | Version | Change | Affected |
|---|---|---|---|
| 2026-03-24 | v1.0 | Initial methodology documentation published; all 4 composite indices documented with formulae, weights, and source mappings | All indices |
| 2026-03-24 | v1.0 | Data Sources Registry established with 9 datasets fully catalogued | All datasets |
| 2026-03-24 | v1.1 | Italy Google Trends data replaced: search term changed from “AI” to “IA” (Intelligenza Artificiale) to better reflect local-language search patterns; source data converted from weekly to monthly averages | AI Public Interest (Italy) |
| 2026-03-24 | v1.2 | New composite index added: Intelligence Doubling Time. Tracks how many months it takes frontier AI benchmark scores to double, with overall rate, recent rate, and acceleration trend. Derived from model_rankings Intelligence scores using exponential growth model | Intelligence Doubling Time (new index) |
| 2026-03-25 | v1.3 | New metric added: Estimated AGI Year (Time to AGI). Projects when each AGI pillar reaches 100% using exponential growth (P1, P2) and industry-forecast 35% CAGR (P3). The bottleneck pillar determines the estimated arrival year. Hero card updated from “Progress to AGI” percentage to “Time to AGI” year display | Estimated AGI Year (new metric), Progress to AGI (hero card redesign) |
| 2026-03-25 | v1.4 | Developer Ecosystem metric (D) redesigned from single-signal (GitHub stars ÷ 30M devs) to 4-component composite: coding tool maturity (30%), open-source parity (25%), research velocity (25%), and framework traction (20%). Data sources expanded from GitHub-only to Artificial Analysis, GitHub, and arXiv API | World AI Adoption Index (D component) |
| 2026-03-30 | v1.5 | Google Trends data source changed from keyword search (“AI” / “IA”) to the “Artificial Intelligence” topic across all 13 countries and World. Topic-based tracking improves cross-language consistency and eliminates false positives from non-AI uses of the term. All interest scores recalculated from new weekly exports averaged to monthly granularity | AI Public Interest (all regions) |
| 2026-03-30 | v1.6 | AI Bubble Index Hype Growth (H) redesigned from single-baseline ratio (Q1 current year / Q1 2021) to a 50/50 blend of cumulative growth (current year avg / earliest year avg) and recent weighted year-over-year momentum (exponential decay = 0.5, most recent year ~52% weight). Blended approach preserves cumulative hype signal while incorporating trend direction | AI Bubble Index (H component) |
For methodological inquiries, data corrections, or collaboration proposals:
All data displayed on this dashboard is aggregated from publicly available sources cited above. The composite indices and analytical commentary represent independent editorial analysis and do not constitute financial advice, policy recommendations, or official benchmarking. Redistribution of aggregated data should credit the original sources and this dashboard.
The AI Situation Room is, at its current stage, a conceptual prototype. It was built with real data sources and genuine analytical intent, but it should be understood as an early-stage demonstration of what a comprehensive AI situation room could look like — not a production-grade intelligence platform. The indices, dashboards, and data pipelines presented here are functional proofs of concept designed to explore how disparate AI signals might be synthesised into a single, coherent observatory.
If you are interested in contributing — whether through data engineering, frontend development, analytical methodology, or domain expertise — the project welcomes collaboration. Reach out at aisituationroom@proton.me to discuss how you can help shape the next iteration.