OpenAI’s Governance Crisis: Lessons for Tech Corporate Structure

Introduction

In November 2023, OpenAI’s board executed what seemed impossible: they fired Sam Altman, the CEO and public face of the company, in a move so abrupt that investors, employees, and Altman himself appeared caught off-guard. More than 700 of OpenAI’s roughly 770 employees signed a letter demanding his reinstatement. Within five days, he was rehired. Within weeks, the directors who had voted to remove him were gone from the board.

What happened was not a scandal in the traditional sense. There was no fraud, no embezzlement, no safety violation that came to light. Instead, what unfolded was a structural corporate governance crisis—a collision between incompatible legal obligations, misaligned incentive structures, and an organizational design that had become fundamentally broken.

This article dissects the anatomy of OpenAI’s governance collapse: the underlying tension between nonprofit stewardship and capped-profit venture returns, the role of the board and fiduciary duty, the competing governance models for AI companies, and what lessons this teaches tech founders about designing durable corporate structures from day one.

The central insight: OpenAI’s crisis was not an anomaly. It was a predictable failure of structural design. And it reveals why governance architecture matters as much as technical architecture—perhaps more.


Layer 1: The Structural Contradiction at OpenAI’s Core

The Nonprofit-to-Capped-Profit Model

When OpenAI was founded in 2015 as a nonprofit, it was animated by a mission: develop artificial general intelligence (AGI) safely, aligned with human values, and ensure the benefits were widely distributed.

But nonprofits don’t scale globally or attract venture capital at startup speeds. By 2019, OpenAI needed funding. The solution was elegant on the surface: create a “capped-profit” subsidiary—formally, OpenAI LP—while keeping the nonprofit as the controlling entity.

How it worked in principle:

  1. OpenAI Inc. (Nonprofit): Owns the mission and controls the for-profit entity through the board and its general partner.
  2. OpenAI LP (Capped-Profit): Raises venture capital, employs staff, operates the business. Returns on investor capital are capped (100x for the earliest investors), with profit beyond the cap flowing to the nonprofit. (A worked sketch of the cap follows this list.)
  3. The Promise: No conflict. The nonprofit ensures alignment with the public good. Investors get fair returns without exploiting AGI upside infinitely.
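
To make the cap concrete, here is a minimal Python sketch of a capped-return waterfall. The 100x figure matches the publicly stated first-round cap, but the split logic, function name, and dollar amounts are illustrative assumptions; the actual deal terms were never fully disclosed.

    def split_distribution(invested, already_returned, payout, cap_multiple=100.0):
        """Split one profit distribution between an investor and the nonprofit.
        The investor receives proceeds only until cumulative returns reach
        cap_multiple times invested capital; the remainder goes to the nonprofit."""
        cap = invested * cap_multiple                # most the investor can ever receive
        headroom = max(cap - already_returned, 0.0)  # room left under the cap
        to_investor = min(payout, headroom)
        return to_investor, payout - to_investor

    # Hypothetical: a $10M first-round investor who has already received $990M
    # hits the $1B cap partway through a $50M distribution.
    print(split_distribution(10e6, 990e6, 50e6))  # (10000000.0, 40000000.0)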

[Diagram: OpenAI Nonprofit-to-Capped-Profit Dual Structure]

Why This Design Failed

The capped-profit structure sounds like a third way between pure nonprofit and pure venture capitalism. It was not.

What it actually created was a dual-principal problem: two entities with overlapping claims to the same assets, profits, and governance authority—but different fiduciary duties.

Fiduciary duty is the legal obligation to act in the best interests of the entity you serve. For a nonprofit board, duty is to the mission and the public. For a capped-profit LP, duty is to the investors (up to the profit cap, after which it reverts to the nonprofit).

The collision occurred when mission and investor interest diverged.

Consider: By late 2023, OpenAI’s ChatGPT was generating enormous revenue, reportedly putting the company on a pace above $1 billion annualized. The investors and leadership wanted to accelerate commercialization, scale aggressively, and deepen the partnership with Microsoft (which had invested a reported $10 billion) to remain competitive against Anthropic, Google, and others.

The nonprofit board faced a different pressure: ensure AGI safety, maintain alignment governance, and move deliberately. In its public statement, the board justified Altman’s dismissal by saying he “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.”

This was not a difference of opinion about tactics. It was a difference of opinion about whose interest takes priority when they conflict—and the structure had no clear answer.


Layer 2: Timeline of Collapse—The November 2023 Crisis

Phase 1: Escalating Governance Tensions (2020–2023)

The capped-profit structure worked as long as commercialization and mission alignment were moving in the same direction. But by 2023, they were not.

Key tensions that built:

  1. Safety vs. Speed: The nonprofit board’s independent directors (including Helen Toner of Georgetown’s CSET and Tasha McCauley) pushed for more rigorous alignment research and slower model releases. Altman pushed for rapid releases and fast feature deployment.

  2. Mission Ownership: The nonprofit was legally obligated to ensure the company remained aligned with the mission. But Altman, as CEO, controlled daily operations. Decisions about compute allocation, researcher hiring, and model architecture flowed through him—and he reported to the board, not the reverse.

  3. Financial Incentives Misaligned: Employees and leadership held profit-participation units in the LP that would be worth far more if the profit cap were lifted or removed. (Altman himself held no direct equity, only a small indirect stake through a Y Combinator fund, but his authority and reputation were bound to the commercial arm.) The incentive was to exit the capped-profit structure, or to convince the nonprofit board it was no longer viable.

  4. Microsoft Dependency: Microsoft’s reported $10 billion investment (announced in January 2023) came with commitments about compute availability, integration with Azure, and product roadmap alignment. These commitments had nothing to do with AGI safety research; they were commercial. The nonprofit board had little formal leverage to override them.

Phase 2: The Trigger—Board Composition and Control

On November 17, 2023, four of the board’s six members (Ilya Sutskever, Adam D’Angelo, Helen Toner, and Tasha McCauley) voted to remove Altman; Greg Brockman was simultaneously removed as chairman and resigned from the company hours later.

The immediate stated reason: Altman “was not consistently candid in his communications with the board.” He had, in the board’s view, misled them about key matters. (The specifics were never fully disclosed.)

The deeper reason: The board no longer trusted Altman to prioritize mission alignment over commercial scale.

The structural problem: The board had no failsafe mechanism to enforce this priority. Removing Altman was the only tool they had.

But Altman had enormous leverage:
– He was the public face of OpenAI and ChatGPT.
– He had personal relationships with every major AI researcher in the company.
– He could immediately move to a competitor (or, implicitly, get hired by Microsoft directly).
– More than 700 of roughly 770 employees signed a letter threatening to leave if he wasn’t reinstated.

Within five days, Altman was back. The board had capitulated.

The aftermath: Helen Toner and Tasha McCauley (who had supported the removal) left the board, as did Ilya Sutskever; Bret Taylor joined as the new chair alongside Larry Summers, with Adam D’Angelo the only holdover. By early 2024, the reconstituted board exercised far less autonomy over the LP, and Altman returned with his leverage strengthened.

[Diagram: OpenAI Board Crisis Timeline (November 2023)]


Layer 3: Corporate Governance Models for AI Companies

OpenAI’s failure was not unique to OpenAI. It reveals a deeper question: What is the right governance structure for an AI company?

Let’s examine four models:

1. Traditional C-Corporation (Venture-Backed)

Structure: Founders own equity, investors own preference shares, board elected by shareholders.

Example: Google (pre-IPO), almost all VC-backed startups.

Pros:
– Clear decision-making authority: the board represents shareholders, period.
– Aligned incentives: founder and investor interests converge (exit at scale).
– Proven model: VC knows how to scale this.

Cons:
– Maximizes shareholder returns, not mission fidelity.
– No structural brake on race-to-the-bottom in safety or ethics.
– If AI becomes systemically important (utilities, defense, infrastructure), a pure profit motive is risky.

2. Nonprofit (Pure)

Structure: No equity ownership, mission-driven, tax-exempt status.

Example: Mozilla Foundation (in part), academic institutions.

Pros:
– Mission is inviolable. The organization cannot be sold or redirected to maximize profit.
– Tax advantages: donations are deductible, revenue is tax-exempt.
– Attracts idealistic talent willing to work for mission alignment over equity upside.

Cons:
– Fundraising is difficult. Pure nonprofits compete for donations, grants, and government funding.
– Cannot deploy equity as compensation, so talent retention is hard at scale.
– Scaling becomes slow. No venture capital money.

3. Benefit Corporation (B-Corp) / Public Benefit Corporation (PBC)

Structure: For-profit entity with a statutory mandate to consider impact alongside profit. Directors have a legal duty to stakeholders (not just shareholders).

Example: Patagonia, Ben & Jerry’s, Coursera.

Pros:
– Legally binds impact considerations into corporate duty.
– Raises capital like a normal company.
– Transparent: impact is audited and disclosed (B Lab certification, which is distinct from PBC legal status, though the two often travel together).

Cons:
– Impact is weaker than a nonprofit’s mission mandate. Shareholders can still pressure for profit maximization.
– If shareholders sue (a derivative action) claiming profit was sacrificed for impact, courts may constrain how far directors can go.
– Not battle-tested at truly high stakes (e.g., AGI safety).

4. Long-Term Benefit Trust (LTBT) / Perpetual Purpose Trust

Structure: Company is owned by a trust, which is legally bound to specific long-term goals. Shareholders buy equity but with restricted voting rights.

Example: Anthropic (Claude), Patagonia (which moved its voting stock into a purpose trust in 2022).

Pros:
– Mission is locked in. Even if venture capitalists demand an exit, the trust controls the company.
– Capital access: still raises venture funding, but with founder/mission preservation.
– Aligns founder long-term vision with investor returns.

Cons:
– Complex legal structure; not all jurisdictions recognize trusts in this form.
– Founders must trust the trustee’s judgment (fiduciary risk).
– May face resistance from traditional VCs uncomfortable with restricted control.

[Diagram: AI Company Governance Models Compared]

Anthropic’s Approach: The Long-Term Benefit Trust

Anthropic (founded by Dario Amodei and others who left OpenAI in 2021) chose a different path. The company operates under a Long-Term Benefit Trust structure:

  • Anthropic’s Long-Term Benefit Trust holds a special class of stock that elects a growing share of the company’s board over time.
  • The trustees are financially disinterested and legally bound to Anthropic’s mission of developing AI safely.
  • Investors (including Google, Salesforce, and others) own equity but cannot vote out the trust-elected directors or force an exit or mission change.
  • The CEO and board serve the mission first.

This structure avoids OpenAI’s core flaw: it eliminates the dual-principal problem. There is one principal, the trust’s long-term mission, and everyone’s incentives are aligned to it.


Layer 4: Fiduciary Duty and the Board’s Role

The root cause of OpenAI’s crisis was a fiduciary duty conflict embedded in the corporate structure itself.

What Is Fiduciary Duty?

A fiduciary is a person or entity entrusted to act in the best interests of another. Corporate law defines specific fiduciary duties:

  1. Duty of Care: Act with the care, skill, and diligence of a reasonable director. This means gathering information before making decisions, engaging in rational deliberation, and relying on expert advice when appropriate.

  2. Duty of Loyalty: Act in good faith and not for personal benefit. Directors cannot extract private benefit, engage in self-dealing, or pursue personal agendas at the company’s expense.

  3. Duty of Obedience: Ensure the corporation obeys laws and bylaws. The company operates within its stated charter and legal authorities.

These duties are not abstract philosophy—they are enforced by law. If a director breaches fiduciary duty (votes themselves a massive bonus, approves a deal that benefits a relative, or ignores information they had), shareholders or stakeholders can sue.

In a traditional corporation, the board’s fiduciary duty is to shareholders. The duty is clear: maximize shareholder value (subject to legal and ethical constraints). In a nonprofit, the duty is to the mission and the public good—not to donors, not to staff, not to community members, but to the abstract mission of the organization.

This distinction is crucial. A nonprofit board director who votes to merge the nonprofit into a profit-seeking company (even one that repays donors their contributions) would breach fiduciary duty, because the mission is being surrendered. A for-profit board director who votes against a profitable deal because it creates environmental damage (absent any legal liability) might also breach fiduciary duty, because shareholder value was sacrificed without cause.

The Dual-Duty Trap at OpenAI

OpenAI’s structure created a nightmare scenario: board members served the nonprofit (with a duty to mission), while the LP had a separate board with a duty to investors. In some cases, the same individuals sat on both boards—creating direct internal conflict.

The problem materialized when:

  • The nonprofit board said: “We must slow down. AGI safety is not robust enough. Our mission is alignment, not scale. Altman is making unilateral decisions about compute allocation and model architecture without sufficient safety review. This violates our fiduciary duty to the mission.”

  • The LP’s investors and CEO said: “We must accelerate. We are raising capital at a reported $86 billion valuation. We have commitments to Microsoft. Slowing down would be a breach of duty to investors, because it would destroy shareholder value. Our duty is to generate returns on capital.”

Who is right? From a purely legal standpoint: both. Each is performing their fiduciary duty to their respective principal. The nonprofit board was correct that slowing deployment is consistent with the mission and their duty. The LP’s investors were correct that acceleration maximizes returns and their duty.

But the structure had no mechanism to resolve this conflict, other than:
1. One party gives up (capitulation, like the board in November 2023—they lost).
2. The company splits (unlikely; too disruptive; would require legal unwinding of shared IP, brand, employees).
3. The structure is changed (what eventually happened in 2024—the nonprofit’s authority was reduced).

The absence of a clear conflict-resolution mechanism is the signature failure of dual-principal structures. In a single-principal structure (pure nonprofit, pure C-corp, LTBT), conflicts are resolved through hierarchy: the principal decides, and everyone else acts in their interest. In a dual-principal structure, there is no hierarchy—only two mutually exclusive commands.

The Lesson: Fiduciary Duty Alignment

For governance to work, fiduciary duties must be aligned to a single principal. Here’s what that means:

Model            | Primary Fiduciary Duty                     | Single Principal? | Risk
C-Corp           | Shareholders                               | Yes               | Race to the bottom on safety/ethics if shareholders prioritize profit
Pure Nonprofit   | Mission + public good                      | Yes               | Slow scaling, fundraising difficulty
B-Corp / PBC     | Shareholders (with impact considerations)  | Ambiguous         | Impact is secondary to profit; investors can override
LTBT (Anthropic) | Trust (mission-locked)                     | Yes               | Requires founder trust; may limit investor influence

OpenAI’s error: It tried to serve two principals with equal weight. That’s not governance—that’s gridlock.


Layer 5A: Microsoft’s Role and the Investor Leverage Problem

One factor that amplified OpenAI’s crisis was the asymmetric leverage that Microsoft gained through its reported $10 billion investment.

The Microsoft Deal Structure

In January 2023, Microsoft announced a multiyear investment in OpenAI, reported at $10 billion. The details were sparse, but the structure was:

  • Microsoft would invest $10B over time (tranches).
  • Microsoft would get a share of profits from OpenAI LP (up to the capped-profit limit).
  • Microsoft would have exclusive rights to integrate GPT models into Azure, Microsoft 365, and other products.
  • In exchange, Microsoft would provide enormous compute resources (Azure GPU clusters) to train and run models.

Why this mattered for governance: Microsoft was now the largest financial stakeholder in the LP (alongside earlier investors such as Khosla Ventures and Thrive Capital). And Microsoft’s interests were purely commercial: ship products, compete with Google, monetize the technology.

Microsoft had no formal board seat, but they had leverage through capital and compute dependencies.

By November 2023, when the board removed Altman, Microsoft (and potentially other large investors) could have threatened to withdraw compute resources, cut the investment pipeline, or even offer Altman employment directly. The board had no countervailing economic leverage. They had only the legal right to remove him—which proved insufficient.

This reveals a deeper governance principle:

In a dual-principal structure, the party with economic leverage will eventually win. The nonprofit board had legal authority; the investors (via Microsoft) had economic power. Economic power prevails.

[Diagram: Microsoft Leverage in OpenAI Structure]


Layer 5B: The Organizational Leverage Asymmetry

One factor that amplified OpenAI’s governance crisis was not Microsoft’s capital, but rather the organizational leverage that Sam Altman wielded—leverage that the nonprofit board had no tool to counteract.

How Organizational Leverage Works

In any organization, the CEO is not merely a manager. The CEO is a symbol, a decision-maker, and a talent magnet. For a company like OpenAI, where the founder-CEO is publicly known and trusted, the CEO controls access to:

  • Researcher talent: Top AI researchers know Altman, and many want to work on his projects.
  • Partnership access: When Altman calls Microsoft, Google, or any strategic partner, they take the call. If a board member calls, they get an assistant.
  • Media narrative: Altman controls the story about what OpenAI is, what it’s doing, and why it matters.
  • Investor confidence: The nonprofit board members are faceless; Altman is the face investors see.

This leverage is independent of the corporate structure. Even if the board has legal authority to remove the CEO, that authority is only useful if the removal can be enforced. If the CEO can plausibly exit to a competitor (or be hired by a major investor like Microsoft), then the legal authority to remove is hollow.

What Happened in November 2023: A Case Study in Leverage Inversion

When the board fired Altman, they expected him to negotiate a severance package and leave. Instead, he did something the board hadn’t anticipated: he went directly to the employees and investors.

Within hours:
– Employees (many of whom owed their roles to Altman’s hiring and mentorship) signed a letter saying they would follow him out.
– Microsoft announced it would hire Altman and Greg Brockman to lead a new in-house AI research group, giving the departure a ready landing pad.
– The investors (Khosla, Thrive, others) wanted Altman back because he was OpenAI’s central asset.

The board faced a choice:
1. Enforce the removal → lose 70% of staff → company collapses → mission fails.
2. Reinstate Altman → preserve the company → maintain leverage → try again later.

They chose #2. This was not a defeat of governance; it was governance’s correct response to an impossible situation. You cannot successfully remove a leader who controls the organizational leverage. The legal authority exists only as a bargaining tool.

The lesson: Governance structures that assume the board can unilaterally enforce decisions are naive. You need structures where:
– The mission is legally binding (so the CEO cannot override it).
– Leadership compensation is tied to mission success (so removal is less costly).
– Authority is distributed (so no single person has all the leverage).

OpenAI had none of these.


Layer 6: Ongoing Restructuring (2024–2025)

Following the November 2023 crisis, OpenAI pursued a broader restructuring:

  • From: Nonprofit-controlled capped-profit subsidiary.
  • To: For-profit public benefit corporation (PBC), with governance safeguards.

The new structure (announced in late 2024 and revised through 2025):

  1. The for-profit arm would convert to a Delaware PBC, replacing capped profit-participation units with conventional equity.
  2. The nonprofit’s role shifted: the initial plan had it ceding control, and after legal and public pushback the revised plan kept it as the PBC’s controlling shareholder, with far weaker operational authority than the original design intended.
  3. A “Safety and Security Committee” was added to the board in May 2024 (oversight of safety and security practices).
  4. Reports that Altman would receive a large equity stake circulated in 2024 and were later walked back; his governance authority was nonetheless consolidated.

What this reveals:

  • The nonprofit-capped-profit model was recognized as unworkable.
  • The choice was not between “pure nonprofit” and “pure for-profit,” but rather to structure as a for-profit with added governance safeguards.
  • Specifically: board-level oversight of safety, public commitments to responsible scaling, and charter language binding the company to AGI safety principles.

Is this better than Anthropic’s LTBT approach? It depends:

  • For-profit with safety committee: Easier to attract venture capital, faster scaling, proven model. But safety is a committee, not an inviolable trust structure. If investors demand pivoting, a committee can be overruled.

  • LTBT with mission lock: Harder to scale without founder-aligned capital. But the mission is locked in; it cannot be overridden by investor pressure.

OpenAI chose the former. Anthropic chose the latter. Both are solutions—they optimize for different tradeoffs.

[Diagram: Governance Restructuring: OpenAI vs. Anthropic Models]


Layer 7: Lessons for Tech Startups on Governance Design

OpenAI’s crisis offers five critical lessons for founders designing corporate structures from day one. These are not abstract principles—they are lessons written in the governance failures of a $100 billion company.

Lesson 1: Eliminate Dual-Principal Structures

Don’t do what OpenAI did. Don’t create two legal entities with overlapping claims to the same assets and conflicting fiduciary duties.

If you have a mission (safety, impact, public good):
– Use a B-Corp or PBC structure (the mission is legally mandated).
– Use a Long-Term Benefit Trust (the mission is locked-in by trust).
– Use a nonprofit (if you can bootstrap funding and don’t need venture capital).

Don’t use a nonprofit + capped-profit LP. The moment mission and profit diverge—and they will—you have a governance crisis. The structure creates the illusion of resolution (the nonprofit “controls” the LP) without the actual mechanisms to enforce it when conflicts arise.

Why this matters: Under stress, the structure collapses to whichever principal has leverage. In OpenAI’s case, that was the investors and the CEO. The nonprofit was left with only legal authority—which proved insufficient.

Lesson 2: Make Governance Commitments Legally Binding

If governance structure matters (and OpenAI proved it does), it must be backed by law, not policy.

Examples of binding governance:

  • Charter Language: Require board approval for pivoting away from the original mission. Write this into the corporate bylaws so it cannot be unilaterally changed.
  • Trustee Authority: Use a Long-Term Benefit Trust to lock in founder vision. The trustee has legal authority to prevent certain decisions (mergers, IP sales, mission changes) regardless of shareholder pressure.
  • Stakeholder Representation: Require board seats for specific stakeholders (safety researchers, ethicists, public representatives). This distributes decision-making authority.
  • Impact Audit: Require independent audits of your impact (B-Corp model) and publish them. This creates accountability to stakeholders, not just shareholders.

Avoid: Saying “we promise to prioritize safety” in a blog post or press release without structural enforcement. Promises break under investor pressure. What breaks less easily: laws and legally binding documents.

Lesson 3: Anticipate Leverage Asymmetries

When you raise venture capital (or take a major strategic investment like Microsoft’s), you are giving your investors leverage. They will use it. This is not malicious; it’s rational.

Map the leverage in your company:

  • Who controls compute? At OpenAI, Microsoft did. This means Microsoft has leverage: they can slow down compute provisioning or threaten to cut it off.
  • Who controls capital flow? Investors do. They can threaten to defund operations.
  • Who controls talent? The CEO usually does, through hiring, mentorship, and relationships. This is the hardest leverage to counter because it’s human-to-human.
  • Who controls the board? Nominally, shareholders or the nonprofit. Actually, whoever can credibly threaten defection.

In OpenAI’s crisis, the leverage map was:
– Nonprofit board: Legal authority (can remove the CEO). But enforcing removal requires that the company remain functional.
– Investors/Microsoft: Economic leverage (can withdraw capital, compute, or talent). Hard to respond to because it is distributed across many stakeholders.
– Altman: Organizational leverage (employees will follow him, media amplifies his voice, researchers trust him). Extremely potent because it is concentrated and difficult to replace.

The leverage hierarchy: Economic leverage > Organizational leverage > Legal authority.

Legal authority is lowest on the list. This is counterintuitive, but it’s why so many board decisions are reversed (like in OpenAI’s case). The board can make a decision in the conference room; the organization votes with its feet.

To counter this, you need structure that makes economic and organizational leverage less potent:
– Distributed decision-making (no single person has all the leverage).
– Mission-locked governance (so even if the CEO leaves, the mission remains).
– Incentive alignment (so leaders are rewarded for mission success, not exit value).

Lesson 4: Align Incentives Early

Equity packages and compensation should align leadership incentives with the governance model you chose.

If you choose a mission-locked structure (B-Corp, LTBT, Nonprofit):
– Don’t give the CEO massive equity upside that unlocks only on exit.
– Structure equity as restricted stock that vests over long periods, not as options that expire in 10 years.
– Compensation should reward mission success (impact metrics, safety milestones), not just revenue or valuation.
– Tie bonuses to long-term metrics, not quarterly growth.

Why? If you tell a CEO, “Your equity is worth $100 million but only if we sell the company,” you are incentivizing exit. If the company is structured as a mission-locked nonprofit or LTBT, the CEO will spend years fighting the structure to change it—exactly what happened at OpenAI with Altman pushing to move toward a pure for-profit.
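
A toy illustration of the incentive gap: compare a package that pays only on exit with one that vests against mission milestones. All names and numbers below are invented for illustration, not OpenAI’s actual terms.

    def exit_only_value(equity_value, exit_happened):
        # Pays out everything on a sale or IPO, and nothing otherwise,
        # so the holder is rewarded for pushing toward an exit.
        return equity_value if exit_happened else 0.0

    def milestone_value(total_grant, milestones_hit, total_milestones):
        # Vests pro rata against mission milestones (safety audits, impact goals),
        # so the holder is rewarded for mission progress instead.
        return total_grant * milestones_hit / total_milestones

    print(exit_only_value(100e6, exit_happened=False))                    # 0.0
    print(milestone_value(100e6, milestones_hit=3, total_milestones=10))  # 30000000.0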

If you choose a venture-backed C-Corp:
– Accept that your CEO is optimizing for exit at scale. This is not a flaw; it’s alignment.
– Structure equity to reward growth: option pools, accelerated vesting on exits, high upside.
– The problem is not the incentive structure; it’s expecting a venture-funded company to behave like a nonprofit.

OpenAI’s critical mistake: They structured as a nonprofit-controlled entity (mission-locked) but compensated leadership as if it were a venture-backed company (equity upside on scale). This created fundamental misalignment—leadership had an incentive to escape the mission structure and move toward pure for-profit. The November 2023 crisis was partially a result of this misalignment surfacing.

Lesson 5: Document Governance in Writing from Day One

The worst time to write governance rules is in a crisis. By then, you’re trying to resolve the crisis with rules, not prevent it.

From Day 1, document:

  • Decision authority: What decisions require board approval? (Raising capital, pivoting mission, taking on a major strategic partner, entering a major market.) Get specific. Don’t say “major decisions”; say “any decision projected to consume more than 50% of annual revenue” or “any change to the stated mission.” (A sketch of one such decision-authority matrix follows this list.)

  • Conflict resolution: If the founder and board disagree, what is the procedure? (Mediation? Arbitration? Founder veto? Board veto?) Make it explicit, not ad-hoc.

  • Board composition: How many seats? How are directors selected? How long do they serve? Can the founder remove them? Can they remove the founder?

  • Governance review: When do you revisit governance? Annually? Every 3 years? After specific milestones? As companies grow, governance should evolve—but only if you schedule time to think about it.

  • CEO removal procedure: How do you remove a founder or CEO if necessary? What is the process? Who decides? What happens to their equity? What severance do they receive? Make this explicit before you need it.

  • Mission changes: How difficult should it be to change the company’s mission? (In Anthropic’s LTBT structure, it requires trustee approval. In a pure nonprofit, it requires a full board vote and often legal review. In a C-corp, it’s just a board decision.)
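
To show what “get specific” can look like in practice, here is a minimal Python sketch that encodes a decision-authority matrix as data that can be reviewed and versioned alongside the bylaws. Every trigger and approver below is a hypothetical illustration, not drawn from any real charter.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ApprovalRule:
        trigger: str   # condition that escalates the decision
        approver: str  # who must sign off when it fires

    # Hypothetical decision-authority matrix, written down on day one.
    GOVERNANCE_RULES = [
        ApprovalRule("spend exceeds 50% of annual revenue", "board majority"),
        ApprovalRule("any change to the stated mission", "board supermajority + legal review"),
        ApprovalRule("major strategic partnership", "board majority"),
        ApprovalRule("removal of the CEO", "board supermajority, per documented procedure"),
    ]

    def needs_board_approval(decision_cost, annual_revenue):
        """Apply the illustrative spend-threshold rule from the matrix above."""
        return decision_cost > 0.5 * annual_revenue

    # Example: a $60M commitment against $100M of annual revenue escalates to the board.
    print(needs_board_approval(60e6, 100e6))  # True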

Why this matters: When OpenAI’s board removed Altman, they had no precedent and no clear procedure. This contributed to the crisis escalating. If they had documented “Here is how we remove a CEO, here is the severance, here is the process,” it would have been more orderly.

More importantly, documenting governance forces you to think through edge cases before they happen. Writing “Who controls decisions if the CEO and board disagree?” is abstract until you actually face that situation—at which point you’re already in crisis.





Conclusion: Governance as Architecture

At the start, we asked: What happened to OpenAI in November 2023?

The answer is not dramatic: there was no scandal, no fraud, no safety violation disclosed. Instead, what happened was a structural failure—a collision between incompatible governance objectives baked into the organization’s legal foundation.

This teaches a deeper principle: governance is architecture.

Your code architecture (monolithic vs. microservices, coupled vs. decoupled) determines your operational flexibility, failure modes, and scaling limits. Similarly, your corporate architecture (who controls decisions, how are incentives aligned, what is legally binding) determines your organizational flexibility, decision-making speed, and resilience under pressure.

OpenAI’s nonprofit-capped-profit structure was architecturally flawed. It created dual principals with conflicting fiduciary duties. It generated incentive misalignment. And when pressure mounted, the structure collapsed because there was no clear authority to enforce decisions.

This matters for AI companies specifically, because:

  1. AI safety is hard to monetize. Safety research, alignment work, and responsible deployment do not generate quarterly revenue. A purely venture-backed company will eventually deprioritize them in favor of features and scale. A structure that legally binds the company to safety work (mission lock, B-Corp mandate) is more robust.

  2. AI is systemically important. Unlike most software, AI systems now touch healthcare, finance, defense, and infrastructure. This may mean that government intervention (regulation, public ownership) becomes necessary. Until then, choosing governance that structurally protects safety and alignment is prudent.

  3. Founder intent is fragile. Many tech founders start with noble intentions: democratizing information, empowering creators, advancing science. But they also want to succeed, scale, and achieve outsized returns. A governance structure that locks in the original intent (while still allowing scaling and investment) is more durable than one that relies on the founder’s continued commitment.

The lesson from OpenAI is simple: Design your governance structure as carefully as you design your technology. Because when the two collide—when your mission and your incentives diverge—the governance structure will determine whether you can actually enforce what you promised.

And in AI, that is not a small thing.


References & Further Reading

  • OpenAI, Charter, https://openai.com/charter/ (outlines mission and governance commitments)
  • Anthropic, Constitutional AI and governance blog posts on long-term benefit trust
  • Patagonia, Let My People Go Surfing (case study in B-Corp governance and mission preservation)
  • Stripe Press, Lessons in Corporate Governance (forthcoming, 2025)
  • Legal precedent: Delaware General Corporation Law §§ 141–145 (board duties, fiduciary obligations)
