
On OpenAI's Entity Structure and Governance

What the shakeup in OpenAI's leadership tells us about the future of AI governance and the role of nonprofits in tech.

Peter Bull
Co-founder

This commentary relates to the November 2023 shakeup in OpenAI's leadership. Here's a good timeline of the coverage. I hope the points made here stay relevant even as the situation evolves.

OpenAI's Structure

I remember that when OpenAI launched, I had an interesting conversation with my wife. She's a tax attorney, and when I described their novel structure with a brand new kind of legal entity, she rolled her eyes. Of course, as she explained, there was no new kind of entity here; new entity types are the domain of legislators, not entrepreneurs. Instead, they had frankensteined together a few existing entities in a way that was unusual, but not unprecedented.

And, perhaps unsurprisingly, Frankenstein’s monster has ransacked the village.


OpenAI entity structure (from OpenAI website)

The much-emphasized takeaway from the entity structure is that a nonprofit owns the for-profit OpenAI entity. Seems great. OpenAI is a nonprofit. Now let's talk a little about what this means in practice.

Nonprofits and tax-exempt status

When I started DrivenData, we asked a lot of people, including the aforementioned spouse, for advice on being a nonprofit versus a for-profit. In the US, being a nonprofit means that your corporate entity (which is like any other corporate entity) is exempt from federal income taxes under the Internal Revenue Code, for example under section 501(c)(3). As I learned at the time, the federal government doesn't have anything to say about profit at all. The Code doesn't refer to "nonprofits"; instead, it specifies tax-exempt entities that qualify, such as "charitable organizations." The deal is that in order to qualify, the organization must serve a charitable purpose (if you're curious, you can read for yourself what that means) and must not allow any benefit to inure to a private interest, i.e., shareholders or individuals. Assuming all requirements are met, the organization is generally exempt from federal income tax and can accept tax-deductible contributions from individuals and entities.

The last point about accepting tax-deductible contributions is more important than I once thought. Every social sector organization gets asked by its funders about its "sustainability plan": a plan for how the organization will continue to bring in enough funding to keep operations running. If the answer is grants and individual donations, the organization will need the 501(c)(3) exemption to accept these funds and provide the tax benefit to donors. If the answer is that you'll charge a fee for your services, you may be better off not starting out by jumping through the hoops to qualify for the 501(c)(3) exemption. In that case, if you don't make much profit, you won't pay much tax anyway. Plus, you retain the option to file for 501(c)(3) status if it turns out your funding will instead come from grants and donations. Not filing for a 501(c)(3) exemption does not preclude you from having a mission and operating according to it, despite the omnipresent misconception that corporations have a legal obligation to maximize shareholder value.

Governance for AI organizations

So, with all that said, I had some background and interest in thinking about the organizational structures best suited to organizations that want to use data, AI, and machine learning for social good. In fact, I was interested, and maybe a little hopeful, when I saw that OpenAI, as a leading AI lab, had made an unconventional structural choice. I have always been bullish on the social sector spending more time building core technology and infrastructure.

And, especially when it comes to AI, there is at least a defensible position that AI development is so important that we don't want it done by unfettered capitalist entities. We've all seen how negative externalities get neglected by corporate entities, and it's reasonable to believe that many of the harms of AI will be externalized from the builders' business models. Without the profit motive dominating the decision-making calculus, charitable organizations may be better positioned to design and build technology in a way that reduces these harms.

So, that's where we get back to OpenAI. The absolute best-case explanation of the board's firing of Sam Altman is that they were concerned his actions were not in line with the charitable purpose. I have no special insight into whether this was the right decision from a mission perspective. However, the lack of concrete evidence presented by the board suggests it was a rash decision driven by interpersonal conflict, and this conflict has thrown OpenAI into turmoil. And, while I'm not personally worried about ChatGPT from an AI safety perspective, imagine if OpenAI had made advances toward superhuman intelligence and only at that point had the governance of the organization been revealed to be so incredibly unstable. We got lucky that we learned relatively quickly just how shaky a foundation a "mission" alone can be.

Interestingly, when these breakups happen at corporations, they're usually managed better, or at least more predictably. There are three simple reasons for that. First, a shared financial interest aligns incentives to resolve political disputes amicably. Second, major investors put managers on boards who (at least in theory) bring experience managing crises within organizations. And third, the board would likely face a lawsuit if a CEO with OpenAI's track record of success were fired without clear and explicit justification. This isn't an argument for unfettered capitalism. It is just an observation that financial incentives sometimes align in a way that provides stability for an organization, and stability itself can be a substantial public benefit.

What have we learned here?

I've got three takeaways from the saga. First, OpenAI's structure itself does little to enforce publicly beneficial outcomes, and we should ensure both public pressure and strong regulatory regimes are part of our accountability mechanisms for AI organizations. Second, organizing as a tax-exempt entity does not in itself guarantee good governance, and may even, through its lack of personal incentives beyond influence, create a more challenging environment for effective governance. We should expect more disagreements within social sector organizations, not fewer. And finally, as with every type of organization, the sustainability model trumps all. In the end, Microsoft is OpenAI's primary supporter right now, so if OpenAI wants to stick around, it needs to do so in a way that keeps Microsoft happy.

This makes now a good time for social sector organizations to look inside and ask themselves about mission, accountability, and how easily board decisions are driven by the politics of internal factions. What are our governance procedures? How do we go about having good-faith disputes about differences of opinion on what best serves the mission? How does this input go into a deliberative process that guides us to the best outcomes without letting organizations flail about wildly? How do we plan for stability and resilience of organizations operating important technologies in times of crisis?

These are not questions that can be outsourced to your corporate structure and a 500-word charter of mission.


As always, I'm interested in your comments and feedback! If you're interested in practical ethics training for AI practitioners, responsible AI strategy, or implementation with like-minded builders, don't hesitate to reach out.
