Progress is possible. Not inevitable. Possible.
It becomes possible only if we confront uncomfortable truths.
The first: progress has been unevenly distributed.
The second: when progress concentrates, societies stagnate.
But when the ability to create, innovate, and solve problems spreads to diverse minds across the globe, progress becomes distributed, and possible again.
Progress is possible. Not inevitable. Possible.
We live in a world undergoing rapid technological advancement, geopolitical realignment, and the emergence of new centers of power. What we're witnessing isn't just decentralization. It's a complex reshuffling of the cards.
It simultaneously centralizes and distributes, concentrates and diffuses, unifies and fragments. But that contradiction isn’t something to eliminate—it's an opportunity. It creates space for real-world experiments and new realities to emerge.
But let's be honest: to seize this opportunity, our institutions—which work very hard to avoid change—must change. And that may be the most uncomfortable truth of all.
What Does “Distributed Progress” Actually Mean?
So what is distributed progress?
It’s the name of this Substack community, after all, right?
Honestly, it’s more of a direction than a definition, so here’s a starting point:
At its core, distributed progress happens when more and more people gain the ability to create value and—this part is crucial—empower others to do the same.
This compounding effect happens when previously excluded groups get access to tools, knowledge, and capital to solve problems others miss and create paths for still more people to contribute.
Traditional economic measurements, like GDP, fail to capture new realities of our economies and societies.
As British economist Diane Coyle highlights in The Measure of Progress, standard metrics miss the value created by collaborative networks, while institutional structures heavily influence who benefits from progress across sectors and regions.
Reid Hoffman’s Superagency gets the potential right, but misses something vital: the complex power dynamics that determine how technology's benefits actually flow.
The real story isn't just about empowering individuals—it's about who gets empowered, how they connect with others, and whether these connections create virtuous cycles or further entrenchment.
To capture this complexity, we explore distributed progress through three lenses:
As a knowledge ecosystem – a shift in how knowledge flows, from top-down hierarchies to hybrid co-creation.
As a geopolitical reality – the collision of fast technological acceleration with a fracturing post-Cold War world order.
As an ethical imperative – a fight against concentrating unprecedented power in the hands of a few tech elites; essentially, democracy vs. digital feudalism.
We’re entering a new era of internationalization. And it is just beginning.
What matters now is distributed power—and, more so, distributed progress.
Will universities become engines of opportunity in the Intelligence Age? Or will they simply reinforce existing power asymmetries? The question turns on who creates intellectual capital. Who accesses it. Who benefits.
High stakes? Exactly.
William Gibson famously remarked, "The future is already here—it's just not evenly distributed."
I’ve always loved this quote, but we’re making a slightly different argument: the “distributed” nature of reality is itself the future.
While Gibson highlights existing inequalities, our argument goes further, positioning “distributed” as humanity's urgent challenge. The incentives that shape the distribution of intelligence, power, and agency will decide the extent to which technological progress empowers or enslaves.
1: A Knowledge Ecosystem
Knowledge thrives in ecosystems where diverse thinking flourishes, or withers within the sterile confines of monoculture.
We’re constantly puzzled by how we've designed our schools and universities.
From kindergarten to PhD programs, they're built on what neuroscientist Antonio Damasio called "Descartes' error"—this stubborn myth that intelligence lives only inside individual minds. This Enlightenment-era belief persists even though we have mountains of evidence showing that thinking has always been distributed across brains, bodies, technologies, and social networks.
But we must face an uncomfortable truth: most universities cling to models of education that don’t match what science shows about intelligence. This institutional inertia robs students of the kind of education that's needed for the hybrid human-machine world of co-creation.
Philosophers Andy Clark and David Chalmers challenged traditional thinking with their "extended mind thesis" – the idea that our cognition extends beyond our brains. When Annie Murphy Paul writes about "thinking outside the brain," she means it literally. Our bodies, surroundings, relationships, and technologies aren't mere tools—they are cognitive infrastructure. This changes how we think about thinking itself. Intelligence is distributed.
So if we want real progress, our institutions need to catch up with reality.
Individuals and Agents
At first, we were skeptical of the implications of AI for the labor market. But a recent experiment at Procter & Gamble showed just how much these systems change the game. Individuals working with AI can match traditional teams in performance. Economic and creative power are shifting toward individuals and small, agile groups where advantage depends on how skillfully they direct and engage with AI systems.
As evolutionary biologist E.O. Wilson observed in The Social Conquest of the Earth, our species' dominance emerged not through individual strength but through collaborative problem-solving capabilities. Human groups that developed complex social cooperation gained decisive advantages over both less cooperative human bands and stronger, more capable species. Our evolutionary edge was never individual strength but group-level adaptive intelligence.
I think most people make a fundamental mistake when they treat AI as "just another tool"—as if ChatGPT is merely the Netscape browser of our era. That's not just wrong—it's a category error of historic proportions.
Unlike previous technologies, AI systems have qualities once considered exclusive to living organisms: they learn, reason, and form abstractions to navigate novel situations. These systems exist in a space of "in-betweenness" that escapes our binary classifications of human/machine and natural/artificial.
Don’t get us wrong. We find this deeply unsettling and intriguing at the same time. But distributed progress requires confrontation with uncomfortable truths.
What we're experiencing isn't simply a new tool but a transformation of our cognitive ecosystem that invites us to see ourselves and reality in entirely new ways.
Here is how researcher Dean Ball describes this shift:
First thing’s first: eject the concept of a chatbot from your mind. Eject image generators, deepfakes, and the like. Eject social media algorithms. Eject the algorithm your insurance company uses to assess claims for fraud potential. I am not talking, especially, about any of those things.
Instead, I’m talking about agents. Simply put and in at least the near term, agents will be LLMs configured in such a way that they can plan, reason, and execute intellectual labor. They will be able to use, modify, and build software tools, obtain information from the internet, and communicate with both humans (using email, messaging apps, and chatbot interfaces) and with other agents. These abstract tasks do not constitute everything a knowledge worker does, but they constitute a very large fraction of what the average knowledge worker spends their day doing.
The real transformation isn't just automating a few tasks—it's unlocking complex cognitive work that was impossible for either humans or machines to do alone. We don't think we can overstate how profound this shift really is, and we say that as people generally skeptical of tech hype.
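To make the "agent" idea concrete, here is a deliberately minimal Python sketch of the loop Ball describes: a model plans, chooses a tool, observes the result, and repeats until it can answer. Everything here (the Agent class, call_model, use_tool) is a placeholder we made up for illustration; a real agent would swap in an actual LLM endpoint and real tools for web access, code execution, or email.

```python
# A toy plan-act-observe loop, not any vendor's API.
from dataclasses import dataclass, field


@dataclass
class Agent:
    goal: str
    history: list = field(default_factory=list)

    def call_model(self, prompt: str) -> str:
        """Placeholder for an LLM call; canned logic so the sketch runs."""
        # Pretend the model decides to search first, then finish.
        return "finish" if any("TOOL RESULT" in h for h in self.history) else "search"

    def use_tool(self, name: str) -> str:
        """Placeholder tools; real agents would hit the web, run code, send email."""
        tools = {"search": lambda: f"TOOL RESULT: three sources found for '{self.goal}'"}
        return tools[name]()

    def run(self, max_steps: int = 5) -> str:
        for _ in range(max_steps):
            action = self.call_model(f"Goal: {self.goal}\nObservations: {self.history}")
            if action == "finish":
                return f"Draft answer for '{self.goal}', based on {len(self.history)} observation(s)."
            self.history.append(self.use_tool(action))  # act, then feed the result back
        return "Stopped: step budget exhausted."


if __name__ == "__main__":
    print(Agent(goal="summarize recent work on distributed cognition").run())
```

The point of the sketch is the shape of the loop, not the stubbed logic: the model's output drives the next action, and each observation feeds back into the next prompt.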
Culture as Cognitive Infrastructure
Culture isn't just some fuzzy concept—it's the hidden infrastructure that makes collective intelligence work. We love to say "thinking happens in brains," but the truth? Thinking happens with others.
Biologists Itai Yanai and Martin Lercher highlight what they call "dyadic thinking" – the powerful creativity that emerges when two minds engage deeply. Their observation that "we often do not even know what we are thinking until we express it" captures why intellectual progress isn't merely additive when minds connect; it's multiplicative.
Distributed intelligence isn't primarily about technology—it's about cultural infrastructure and cooperation. Just like E.O. Wilson's research shows, advantage doesn't come from tools alone but from how groups organize to use them. His work demonstrates how human success stems from the interplay between individual and group selection, where cooperative groups outcompete less cohesive ones.
I found economic historian Joel Mokyr's book A Culture of Growth particularly eye-opening on this. He identifies specific cultural factors that make progress possible:
Intellectual openness that welcomes unfamiliar ideas, even when disruptive to established paradigms
Formal and informal institutional frameworks that reward innovation rather than rent-seeking
Knowledge networks that connect thinkers across disciplinary and geographic boundaries
Cultural legitimacy for individuals who champion new paradigms against entrenched resistance
Environments where ideas can challenge established wisdom through evidence-based discourse
Mechanisms to transform theoretical insights into practical applications at scale
The most productive scientific and creative ecosystems in history—they weren't just collections of geniuses. They were structured environments that carefully balanced competition with cooperation, tradition with disruption, specialization with cross-pollination.
Get that environment right, and boom—you get an explosion of progress.
2: A Geopolitical Reality
Knowledge ecosystems don’t exist in a geopolitical vacuum—they collide with uncomfortable truths about global power distribution that we must confront if progress is to remain possible.
Progress depends on navigating a world where cooperation and competition are hopelessly tangled together. Yesterday's trade partners become today's security rivals. We’re not being pessimistic here—just realistic. And realism is the essential foundation for meaningful progress in our multipolar world.
The Global Middle Ages
Geopolitical strategist Parag Khanna calls our current period the Global Middle Ages. The point isn’t that we’re descending into chaos, but that power is being reconfigured. Much like in medieval times before modern nation-states, influence now spreads across a complex lattice of players.
Three Incompatible Systems in Conflict
When we first read Ian Bremmer's analysis in Foreign Policy, it clicked immediately. His framework reveals not just instability but fundamental contradiction—three incompatible orders locked in tension:
A U.S.-led security order, weakening but dominant, built on military supremacy and liberal values
An economic order rapidly fracturing into multiple poles, each driven by national interests and development imperatives
A technopolar order where digital giants function as geopolitical actors, wielding state-like power while existing beyond state boundaries
The kicker is that these systems actively undermine each other:
Security imperatives disrupt economic integration. Economic multipolarity weakens security hegemony. Tech giants subvert state authority.
It's a three-way tug-of-war where no single victor emerges, yet each can destabilize the others.
However, we don’t think Bremmer’s framework captures the realities of a new Cold War, as tensions at the most recent AI Summit in Paris highlight. Eric Schmidt argues that breakthroughs in artificial intelligence will intensify global commercial competition and reshape international security dynamics, echoing similar arguments from Leopold Aschenbrenner last year.
This makes the stakes of technological multipolarity existentially higher than the traditional geopolitical competition of the first Cold War. Unlike that era's ideological standoff, today's conflicts unfold in digital environments where cyberattacks, disinformation campaigns, and AI-powered weapons systems are the new frontlines in the struggle to defend Western democratic values.
Shifts in Knowledge Systems
This geopolitical shake-up manifests most concretely in higher education, where we spend much of our time. The Western knowledge monopoly is ending. We're seeing a dramatic shift towards multipolarity in mobility and research:
China has rapidly gone from primarily sending students abroad to becoming a major destination for international students with its own world-class universities.
New education hubs in places like Singapore, Qatar, and the UAE are building universities and research centers that rival Western institutions, offering alternative models and drawing talent from everywhere.
Meanwhile, emerging coalitions – like the growing number of nations attending last year's BRICS Summit in Kazan – aren't just economic arrangements. They signal a push toward a multipolar knowledge order, prioritizing regional partnerships over Western-centric globalization.
The flow of knowledge and talent is no longer one-way from “the West to the rest,” especially after the dramatic cuts to science by the Trump administration. It's becoming distributed.
That shift in intellectual gravity will profoundly shape the directions—and diversify the values—of progress in the decades to come.
The Rise of the Middle Powers
In fact, 2025 is shaping up as the Year of the Middle Powers. These nations—neither superpowers nor minnows—are tired of being passive pawns. They're becoming active architects of a new world order.
As economist Hung Tran points out, the intensifying rivalry between the big powers is actually pushing these middle-sized nations to form their own "coalitions of the willing" – partnerships that protect their interests and reduce their vulnerability to the pressures of Cold War 2.0.
We see it happening already: India deftly balances relationships among the U.S., China, and Russia; Turkey extends influence across Europe, the Middle East, and Central Asia; Brazil champions South-South cooperation; and Indonesia expands its role in ASEAN while building Jakarta into a regional financial hub.
This isn't some speculative future—it's happening right now. And it raises a fundamental question:
If both technology and geopolitics are in upheaval, who will shape the future?
3: Distributed Progress as an Ethical Imperative
That brings us to the ethical imperative of distributed progress.
For months, we've been wrestling with what we think is the biggest ethical question of our time. It's not "AI versus humans, who wins?" That's a distraction. The real issue is power distribution: concentrated versus distributed agency.
Will the immense new powers of technology be concentrated in the hands of a few, or distributed across society? This isn't theoretical—it's a fork in the road we're approaching at high speed.
Digital feudalism or digital democracy?
Centralized control or participatory creation?
The direction we choose will shape civilization for centuries.
Here's an uncomfortable truth we keep coming back to:
Progress depends on how we handle the fundamental tension between proprietary, centralized AI architectures (which optimize for control and efficiency) and open, decentralized ones (which preserve autonomy and diversity).
We've searched for middle ground, but we're increasingly convinced that these approaches reflect fundamentally different values, and that the choice between them will have a significant impact on humanity's future.
Brendan McCord, chair of the Cosmos Institute, frames the dilemma bluntly:
"Can human values be translated into code? Or will code's logic reshape human values instead?"
We need a re-decentralized internet: a powerful counterbalance to centralization that enables new forms of collective ownership and governance, where both value and control remain with creators and communities.
The Next Five Years
What looks like technical decisions about AI system architecture are actually profound political choices with civilization-scale consequences.
McCord explains why the next five years will be decisive:
A trillion-dollar AI compute infrastructure is being built by 2030. This will establish unprecedented control over global information flows and could reshape alliance structures that have defined the post-WWII order.
The contest between decentralized and centralized AI approaches is the most consequential governance decision of our lifetimes. It will determine whether a few entities control the infrastructure of intelligence itself.
Social media, AI, the Internet of Things, and the data economy increasingly shape how we understand and control our world, even as we navigate the new risks of polluted information ecosystems.
These stakes go beyond economics, politics, or conventional power—they cut to the core of human agency itself. McCord offers a stark history lesson:
Benjamin Franklin transformed the printing press into a democratizing force by establishing libraries and independent publishers—distributing knowledge beyond the reach of authorities. Yet that same technology, just two centuries later, became the central instrument of mass manipulation under Goebbels' Ministry of Propaganda.
His conclusion hit me like a brick:
Technology raises human potential only when its shapers build systems that amplify human reason, preserve human autonomy, and resist central control.
Moving Forward: The Politics of Distributed Progress
So where does all this leave us?
After months of thinking about these issues, we’re both worried and hopeful.
We're standing at a critical moment where three massive transformations are colliding in ways no one fully understands:
Knowledge ecosystems emerge - diverse intelligences learning and co-creating together
Power becomes multipolar - splintering from a unipolar world into competing centers of influence across states, companies, and networks
Values become contested - forcing us to decide who controls the cognitive infrastructure that will increasingly shape human consciousness
In this wild new landscape, the binary debate that pits techno-optimism against techno-pessimism is a dead end. We need something more nuanced.
I like how AI safety researcher Richard Ngo puts it—we need a "techno-humanism" that "combines an appreciation for the incredible track record of both technology and liberty with a focus on ensuring that they actually end up promoting our values."
Of course, believing in a vision isn’t enough; we need to act. As policy researcher Lucia Asanache puts it, "Progress shouldn't be an abstract concept; it should be a lived experience." People need to feel progress in their daily lives – safer communities, better opportunities, more control over their future.
As MIT economist David Autor's research demonstrates, technological advancements don't automatically translate into distributed progress.
In fact, technology often concentrates benefits among a select few rather than spreading them widely.
If controlled by just a few influential actors, digital systems become uniform in both their design and the information they carry. When that happens, information entropy decreases: we get fewer perspectives and more repetitive viewpoints, and new ideas struggle to emerge.
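To see the entropy claim in miniature, here is a small Python sketch with invented "viewpoint share" numbers, purely for illustration: the same number of voices carries far less Shannon entropy when one actor dominates the distribution.

```python
# Compare the Shannon entropy of a distributed vs. a concentrated ecosystem of voices.
from math import log2

def shannon_entropy(shares):
    """Entropy in bits of a distribution of viewpoint shares (shares sum to 1)."""
    return -sum(p * log2(p) for p in shares if p > 0)

distributed = [0.125] * 8            # eight voices with equal reach
concentrated = [0.86] + [0.02] * 7   # one dominant voice, seven marginal ones

print(f"distributed ecosystem:  {shannon_entropy(distributed):.2f} bits")   # 3.00 bits
print(f"concentrated ecosystem: {shannon_entropy(concentrated):.2f} bits")  # ~0.98 bits
```

The exact numbers don't matter; the direction does. Concentration shrinks the effective diversity of the system, which is precisely what gets lost when openness is traded for efficiency.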
We gain short-term efficiency but sacrifice long-term possibility. Consolidated systems quickly exhaust their creative possibilities. Power becomes concentrated in fewer hands, creating clear winners and losers. People lose their ability to influence the tools that shape their lives, hampering ethical progress.
Technological advancements only become meaningful progress when people can influence how these tools are developed and used in their communities.
What does it mean to be agentic?
It's not just using technology handed to us, but actively modifying it to meet our needs. It means adapting digital tools to fit our cultural practices, professional requirements, and community values—so we shape technology with our human values rather than allowing technology's logic to reshape us.
We believe distributed progress is humanity’s fundamental challenge:
To build ecosystems that expand human capability rather than replace it.
To foster geopolitics that reduces conflict rather than accelerates it.
To create structures that distribute power rather than concentrate it.
We cannot shape what we deny or do not understand.
It’s time to face hard truths.
Because our future—your future, our future, everyone's future—depends on it.
Progress is possible. Not inevitable. Possible. But only if we make it so.
What do you think?
+1s: Which points resonated with you, and why?
Gaps: What are we missing? Undervaluing? Overvaluing?
Share: If you found this post useful, we hope you’ll share it so more voices can join this conversation.