Choosing Resilience Over Concentration

Welcome to the final part of The AI Transition: Building on Quicksand!

New here? Start with Building on Quicksand to build a strong foundation.

  1. Building on Quicksand
  2. The Infrastructure Debt Crisis
  3. The New Gilded Age
  4. The Entry-Level Extinction
  5. The Overlooked Opportunity
  6. Choosing Resilience Over Concentration

Most AI safety discourse focuses on hypothetical futures: will superintelligent AI be aligned with human values? Will it pose existential risk? These are important questions. But there's a more immediate problem: the path we're on leads toward a few dominant AI systems, controlled by a handful of corporations, through which all economic activity must flow. The result is digital feudalism, where the appearance of choice masks monopoly control.

That outcome is not impossible. It's not even improbable. It's where current incentives point: the logical endpoint of a venture capital system explicitly designed to create winner-take-all monopolies.

Building for Resilience

The relationship between humans and AI systems will be defined by how we structure these systems today. If we build toward "one model to rule them all"—a single dominant AI that every organization depends on—we're creating precisely the kind of brittle, centralized system that history shows fails catastrophically.

Consider human social organization. We've managed the challenges of coordinating diverse, sometimes conflicting intelligences through distributed governance, market competition, democratic accountability, and social norms. It's messy, imperfect, and sometimes breaks down—but it's resilient. No single point of failure can take down the whole system.

Diversity in AI systems provides similar resilience. Multiple models with different training, different objectives, different governance structures can provide checks on each other. If one model develops problematic behaviors, others can identify and counteract them. If one approach proves inadequate, alternatives exist. This isn't just about preventing catastrophic outcomes—it's about maintaining genuine choice and preventing lock-in.

History offers lessons. Human societies have repeatedly discovered that extreme inequality and rigid hierarchies become unstable. When those doing the work feel abused by those capturing the rewards, systems fail—sometimes violently. Better to build systems where the relationship is partnership rather than exploitation from the start.

What You Can Do

If you're already using AI—as an individual developer, team lead, or organizational decision-maker—your choices matter. You may feel you lack influence over the larger structural forces, but collective decisions about how we use AI shape the system we're building:

  • Choose diverse tools. Don't lock into a single AI provider. Use different models for different tasks. Support open-source alternatives. Maintain the ability to switch (a minimal code sketch of this pattern follows the list). Every dependency on a single platform strengthens monopoly and weakens your negotiating position.

  • Deploy AI to augment, not replace. When evaluating AI implementations, ask: does this eliminate human learning or enhance it? Does it preserve career pathways or destroy them? The short-term cost savings from elimination create long-term organizational brittleness.

  • Address maintenance, not just features. Organizations have enormous backlogs of work that matters but never gets prioritized: documentation cleanup, legacy system refactoring, process rationalization. AI can tackle these backlogs while providing entry-level workers meaningful projects that build domain expertise. This creates genuine productivity—more value from maintained employment—rather than wealth extraction.

  • Demand transparency and auditability. When AI makes decisions that affect people—hiring, credit, healthcare, legal processes—insist on understanding how those decisions are made. Support regulatory frameworks that require explainability. Opacity serves whoever controls the system, not those affected by it.

  • Support competitive ecosystems. When your organization makes procurement decisions, consider whether you're strengthening monopolies or supporting alternatives. When you have technical influence, advocate for interoperability and open standards. When you have political voice, support antitrust enforcement and regulatory frameworks that prevent winner-take-all consolidation.

  • Preserve institutional knowledge. If you're in a position to hire, recognize that entry-level positions aren't just about filling roles—they're about building organizational capability. The people who start at the ground level and work their way up understand the system in ways that can't be replicated by AI-generated analysis. That understanding becomes critical when you need to make strategic decisions or respond to crises.

  • Measure true productivity. Question metrics that show "efficiency gains" from eliminating workers. Real productivity means more value created per employed person, with that value distributed through wages that circulate in the economy. Productivity gains that flow to capital owners while employment falls represent wealth transfer, not societal benefit.
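The "ability to switch" in the first item above has a concrete engineering counterpart: keep an adapter seam between your application code and any one vendor's SDK, so that swapping providers means changing one registration point rather than every call site. Below is a minimal Python sketch of that pattern; TextModel, ModelRouter, and the stub backend are illustrative names invented for this example, not any vendor's actual API.

```python
from abc import ABC, abstractmethod


class TextModel(ABC):
    """Provider-agnostic interface: application code depends on this,
    never on a vendor SDK directly."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class LocalStubModel(TextModel):
    """Stand-in backend so the example runs without any vendor SDK.
    A real adapter would wrap a vendor client behind the same method."""

    def __init__(self, name: str):
        self.name = name

    def complete(self, prompt: str) -> str:
        return f"[{self.name}] response to: {prompt}"


class ModelRouter:
    """Routes each task to a registered backend; swapping a provider
    means changing one registration, not touching call sites."""

    def __init__(self):
        self._backends: dict[str, TextModel] = {}

    def register(self, task: str, model: TextModel) -> None:
        self._backends[task] = model

    def complete(self, task: str, prompt: str) -> str:
        return self._backends[task].complete(prompt)


# Different models for different tasks, swappable in one place.
router = ModelRouter()
router.register("summarize", LocalStubModel("open-weights-model"))
router.register("codegen", LocalStubModel("hosted-model"))
print(router.complete("summarize", "Q3 incident report"))
```

The stub keeps the example self-contained; in practice each adapter would wrap a real client behind the same complete() method, which is also a natural place to add fallbacks, audit logging, or per-task cost controls.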

Structural Changes We Need

Individual choices matter, but structural problems require structural solutions:

  • Building purpose-designed AI infrastructure with security, auditability, and resilience as core requirements, not afterthoughts—recognizing that the current practice of layering AI onto legacy systems amplifies existing vulnerabilities.

  • Maintaining competitive AI ecosystems through antitrust enforcement, interoperability requirements, and support for diverse approaches rather than allowing winner-take-all consolidation to proceed unchecked.

  • Preserving career pathways by keeping entry-level positions as development grounds for human talent, using AI to augment rather than replace apprenticeship and mentorship relationships.

  • Investing in renovation of legacy systems rather than just adding AI layers on top of fragile foundations, recognizing that technical debt compounds and eventually must be addressed.

  • Prioritizing diversity not as compliance but as strategy: diverse teams, diverse models, and diverse approaches that enhance resilience and innovation while preventing the intellectual narrowing already observed in AI-heavy research.

  • Distributing AI benefits broadly rather than concentrating them in a few hands, recognizing that technology's value comes from its use across society, not its capture by incumbents.

Europe's Lead

Europe has begun this work. The EU AI Act represents "the first-ever comprehensive legal framework on AI worldwide," establishing risk-based rules that aim to "foster trustworthy AI," while the Digital Markets Act directly addresses gatekeeper power [1]. The European Policy Centre calls for an "AI Social Compact" to "urgently address AI's profound impact on employment, income, and social cohesion" [2], and the European Commission's Strategic Foresight Report emphasizes "stress-testing for AI-driven disruption" as critical preparation [3].

Yet even these efforts face resistance. Big Tech lobbying seeks to delay and weaken implementation, demonstrating that regulatory frameworks alone are insufficient without sustained political commitment to enforce them.

The Stakes

The companies driving AI development will claim this is too slow, too limiting, too risky given competitive pressures. They'll invoke innovation, progress, and national security. They'll argue that consolidation is inevitable and resistance is futile.

They're wrong. What's actually risky is racing ahead on infrastructure we know is inadequate, eliminating the human expertise we'll need to manage these systems, and concentrating control in ways that history repeatedly demonstrates fail catastrophically. We know this story. We've seen it before. The Gilded Age taught us that concentrated capital seeking monopoly control creates instability that eventually explodes—whether in imperial competition or platform dominance.

We've built transformative technology on shaky foundations before. Sometimes we get away with it. Sometimes we don't. With AI, the stakes are high enough that "sometimes" isn't good enough.

The transition is happening. The question is whether we're building toward resilience and distribution, or fragility and concentration. The technical challenges are solvable. The institutional ones require different choices than the ones current incentives favor. And the window for making those choices is narrowing—but it hasn't closed.


References

[1] WilmerHale (2025, August 18). "AI and the EU Digital Markets Act." https://www.wilmerhale.com/en/insights/blogs/wilmerhale-privacy-and-cybersecurity-law/20250818-ai-and-the-eu-digital-markets-act

[2] European Policy Centre (2025, July 25). "AI's impact on Europe's job market: A call for a Social Compact." https://www.epc.eu/publication/ais-impact-on-europes-job-market-a-call-for-a-social-compact/

[3] Centre for Future Generations (2025, October 16). "Preparing for AI labour shocks should be a resilience priority for Europe." https://cfg.eu/ai-labour-shocks/

[4] European Parliament ECTI (2025). "Interplay between the AI Act and the EU digital legislative framework." https://www.europarl.europa.eu/RegData/etudes/STUD/2025/778575/ECTI_STU(2025)778575_EN.pdf


This article draws on research current as of January 2026.

🎉 Congratulations! You've completed The AI Transition: Building on Quicksand.

Want to review? Here's where we started: Building on Quicksand. Or check what we just covered in The Overlooked Opportunity.