Meaning in the Age of AI

The Twelve Who Shape the Future

The people whose decisions about AI will determine how the rest of us live with it.

[Illustration: Norman Rockwell-style mixed media. A long wooden table viewed from one end, oil-painted in warm, golden realism.]

In December 2025, Time magazine named eight AI leaders as its collective Person of the Year, calling them the "Architects of AI." The selection reflected a reality that had been building for years: a small number of people are making decisions that will reshape the global economy, the labor market, and daily life for billions.

This essay profiles twelve of them. The list is opinionated. It prioritizes the people whose current decisions carry the most consequence for how AI develops and who benefits from it, not simply who runs the largest company or commands the most media attention. Some names will be familiar. Others may be new. All of them are worth knowing because their choices, in the next two to five years, will shape whether AI becomes a tool that widens opportunity or one that concentrates it further.

The Builders

The people constructing the models and platforms that everyone else builds on.

The Accelerationist
Sam Altman
CEO, OpenAI
Altman runs the company that made AI a household name. Under his leadership, OpenAI has shifted from a research nonprofit to a capped-profit company to what is now a full commercial entity pursuing artificial general intelligence (AGI): AI that can match or exceed human-level performance across all cognitive tasks. AGI remains theoretical, but the major labs are actively pursuing it, and estimates for when it might arrive range from 2027 to never, depending on who you ask. Altman's influence comes from the combination of ChatGPT's reach (more than 800 million weekly users), OpenAI's partnerships with Microsoft and a new Pentagon deal, and his willingness to make bold claims about timelines. OpenAI's annual revenue run rate recently topped $25 billion, and the company is closing in on $100 billion in total funding. His critics say he moves too fast on deployment while lobbying for regulations that would favor incumbents. His defenders say someone has to push the frontier, and at least he does it in public.
The Safety-First Builder
Dario Amodei
CEO, Anthropic
Amodei left OpenAI in 2021 because he believed the company was prioritizing speed over safety. He founded Anthropic to build AI systems with safety at the core of the technical architecture, not as an afterthought. His essay "Machines of Loving Grace" laid out a vision where AI could dramatically improve healthcare, governance, and scientific research within the next decade. Anthropic's annualized revenue has crossed $19 billion, and the company is valued at roughly $380 billion as it prepares for a potential IPO in one of the most anticipated public offerings in tech history. His influence: proof that a safety-focused lab can compete commercially with the move-fast labs.
The Original Mind
Ilya Sutskever
CEO, Safe Superintelligence (SSI)
Sutskever is the researcher most responsible for the technical breakthroughs that made modern AI possible, including the scaling insights behind GPT. After a dramatic departure from OpenAI in mid-2024, he co-founded Safe Superintelligence (SSI), a startup focused exclusively on developing superintelligent AI safely, pursuing approaches he believes are radically different from the direction the major labs are heading. SSI has raised nearly $3 billion and is valued at $32 billion, all without shipping a single product. Sutskever became sole CEO in mid-2025 after co-founder Daniel Gross left for Meta. If he's right, SSI could leapfrog the entire industry. If he's wrong, the company's secrecy will have been the most expensive silence in tech history.
The Open Source Bet
Mark Zuckerberg
CEO, Meta
Zuckerberg's decision to release Meta's Llama models as open source was the single most consequential strategic choice in the AI industry outside of the labs themselves. But the strategy is evolving: Meta's next frontier model, code-named Avocado, may launch as a closed model that Meta can sell access to, signaling a potential pivot from pure open source. Meta's planned AI capital expenditure for 2026 is between $115 billion and $135 billion, a staggering escalation fueled by Zuckerberg's stated goal of building "personal super intelligence." His 2025 acquisition of Scale AI for $14.3 billion brought founder Alexandr Wang and his team to lead a new AI unit inside Meta, further consolidating talent at the top.

The Infrastructure

The people who build the physical and computational foundation that AI runs on.

The Chip King
Jensen Huang
CEO, NVIDIA
Every major AI model trains on NVIDIA GPUs. Huang saw the AI wave before anyone else in hardware and positioned NVIDIA to capture it. In October 2025, NVIDIA became the first company in history to reach a $5 trillion market capitalization, and Huang has announced over $500 billion in orders for its Blackwell and Rubin GPUs through the end of 2026. His influence is structural: if NVIDIA chips are unavailable or too expensive, AI development slows everywhere. He has more leverage over the pace of AI progress than any individual model builder, because he controls the supply chain they all depend on.
The Challenger
Lisa Su
CEO, AMD
Su turned AMD from a struggling competitor into a credible alternative to NVIDIA in the AI chip market. Her MI350 series became the fastest-ramping product in AMD's history in 2025, and the MI400 series planned for 2026 represents the company's most ambitious architectural leap yet. AMD has signed a multi-year, multi-billion-dollar chip deal with OpenAI, and Su projects AMD's AI data center business will grow at roughly 80 percent per year. Her importance is competitive: without AMD's pressure, NVIDIA's pricing power would be unchecked, and the cost of AI development would be even higher. Competition in AI hardware keeps the technology accessible to more builders.
[Illustration: Norman Rockwell-style mixed media. A woman in her forties standing at a podium, oil-painted in confident realism.]

The Thinkers

The researchers and philosophers shaping how we understand what AI means and what it should become.

The Bridge Builder
Fei-Fei Li
Professor, Stanford; Co-founder, World Labs
Li created ImageNet, a dataset of more than 14 million labeled images begun in 2009 that became the benchmark for training computer vision systems and is widely credited with igniting the deep learning revolution by giving researchers the data they needed to train neural networks that could see. Her startup World Labs raised $1 billion in February 2026 from investors including NVIDIA, AMD, and Autodesk, and has already shipped its first product, Marble, which lets users create editable 3D environments from AI-generated world models. Her influence extends beyond her company: she's the most prominent voice arguing that AI development should be guided by humanistic values and that the research community has a responsibility to ensure the technology serves the broadest possible population.
The Skeptic Who Put His Money Down
Yann LeCun
Founder, Advanced Machine Intelligence (AMI); Professor, NYU
LeCun shared the 2018 Turing Award with Geoffrey Hinton and Yoshua Bengio for foundational work on deep learning. For years he argued from inside Meta that current large language models were a dead end and that entirely new architectures were needed. In 2025, he left his role as Meta's Chief AI Scientist to prove it, founding Advanced Machine Intelligence (AMI), which raised $1.03 billion to build AI systems focused on reasoning, planning, and real-world understanding rather than text generation. AMI is a contrarian bet against the entire LLM paradigm. His influence is no longer just corrective commentary; he is now building the alternative he spent years insisting was necessary.
The Conscience
Demis Hassabis
CEO, Google DeepMind
Hassabis won the 2024 Nobel Prize in Chemistry for AlphaFold, the AI system that predicted the 3D structure of virtually every known protein. That achievement alone would secure his place on this list. But his ongoing influence comes from running Google DeepMind, where the Gemini model family competes directly with OpenAI and Anthropic, and from Isomorphic Labs, where he is applying AlphaFold to drug discovery, with preclinical cancer drug trials already underway and clinical trials expected by the end of 2026. Hassabis represents the possibility that AI's most important contributions might be scientific rather than commercial, solving problems in biology, materials science, and medicine that have resisted human analysis for decades.
[Composite portrait of Dr. Priya Chattopadhyay: a fictional person in real circumstances.]
Dr. Priya Chattopadhyay
41, AI policy researcher, former congressional staffer, Washington D.C.
One Person's Story

I spent three years on the Hill trying to explain AI to senators who could barely operate their phones. The industry lobbyists would come in with slide decks about innovation and job creation, and my job was to translate: here's what this actually means, here's who it affects, here's what they're not telling you. Most days it felt like I was whispering into a hurricane.

What changed was when the major labs started testifying publicly. Suddenly the people on this list had faces and names. Senators could see that five or six individuals were making decisions that affected more people than most legislation. The power concentration became impossible to ignore once you could point to specific humans and say: this person decided to release this model, and here's what happened next.

I left government last year because I realized the real leverage was on the research side. These twelve people, their equivalents in five years will be different names, but the dynamic won't change. A small group making enormous decisions with global consequences. The question is whether the rest of us learn their names and understand their choices, or whether we just live with the results.

The Power Brokers

The people whose decisions about policy, investment, and deployment determine who gets access to AI and on what terms.

The Integrator
Mustafa Suleyman
CEO, Microsoft AI
Suleyman co-founded DeepMind, built the AI startup Inflection, and now runs Microsoft's AI division. His role matters because Microsoft is the distribution channel that puts AI into the hands of more people than any other company, and because he is leading Microsoft's push to build its own frontier models (branded MAI) to reduce its dependence on OpenAI. Every Office user, every Azure customer, every Copilot interaction runs through his organization. His influence is in deployment scale and strategic direction: the decisions he makes about how AI integrates into productivity software shape how hundreds of millions of knowledge workers experience the technology daily, and his vision of "humanist superintelligence" is steering the largest software company on Earth.
The Regulator's Dilemma
Margrethe Vestager
Former Executive VP, European Commission
Vestager architected the EU's AI Act, the world's first comprehensive AI regulation. She left the European Commission in late 2024 and now serves as Chair of the Board of Governors at the Technical University of Denmark, but her regulatory legacy is what earns her a place on this list. Whether the AI Act proves wise or counterproductive will take years to assess, but its influence is immediate: every AI company operating in Europe must comply, and the regulatory framework is being studied and adapted by governments worldwide. Her approach, classifying AI applications by risk level and requiring transparency for high-risk uses, has become the template that other jurisdictions build on or push back against.
The Dark Horse
Liang Wenfeng
CEO, DeepSeek
Liang leads the Chinese AI lab that stunned the industry in early 2025 by releasing models that matched or exceeded Western competitors at a fraction of the training cost. DeepSeek's R1 model demonstrated that the compute-intensive approach favored by American labs may not be the only path to capable AI, and the lab has not slowed down: in early 2026 Liang co-authored papers introducing new architectures that allow larger models to train on cheaper hardware, and DeepSeek's V4 model targets performance that internal benchmarks suggest surpasses both Claude and GPT in code generation. His influence is geopolitical: DeepSeek proved that export controls on advanced chips haven't stopped Chinese AI development, and it forced Western labs to reconsider their assumptions about the relationship between spending and capability.

Why These Twelve

This list is a snapshot. In two years, some names will have faded and new ones will have emerged. The pattern will persist.

The concentration of decision-making power in AI is unusual even by the standards of the tech industry. In previous technology waves, influence was distributed across many companies, open standards bodies, and government agencies. AI's influence is concentrated because the technology itself is concentrated: only a handful of organizations have the capital, talent, and computational resources to build frontier models. The decisions of the people who lead those organizations ripple outward into every industry, every economy, and eventually every household.

That concentration is the most important fact about AI governance in 2026. It means that the values, priorities, and blind spots of a small group of mostly American, mostly male technologists are encoded into systems that serve billions of people. Some of them, like Fei-Fei Li and Dario Amodei, think deeply about that responsibility. Others are primarily competing to build the most powerful system as fast as possible. The rest of us have a stake in knowing which is which.

Time's 2025 Person of the Year was not an individual but the "Architects of AI" collectively. The editorial choice itself made the point: AI's story in 2025 was shaped by the decisions of specific people, and those people were, for the first time, as consequential as any head of state.

Know Their Names

These twelve people are building the technology that will define the next era of human civilization. Their choices about safety, access, pricing, and transparency will determine whether AI narrows the distance between the powerful and everyone else, or widens it. Learn their names. Understand their incentives. Watch their decisions. Because whether you follow AI closely or barely think about it, the world these twelve are building is the one you'll live in.

Jesse Walker
Jesse Walker is a philosopher, a meditation teacher, a business founder and a father. He is optimistic about humanity’s ability to shape AI into a force for global good.