OpenAI, the world-renowned artificial intelligence research laboratory, made headlines when it announced its decision to transition from a nonprofit organization to a for-profit company in 2019. From its founding in 2015 until then, the company operated as a nonprofit. Today, it carries a staggering valuation of $29 billion.
The move raised eyebrows and sparked debates. It represented a seismic shift for an organization long regarded by its founders and admirers as a bastion of ethical and responsible AI development, and it came amid growing demand for advanced AI technology. As OpenAI seeks to balance its lofty ideals with the realities of the market, questions remain about how the company will navigate the complex landscape of AI development and ensure that its technology is used for the greater good.
In this story, we delve deep into OpenAI’s transformation and explore the implications of its bold move.
The Early Days of OpenAI: A Nonprofit Research Laboratory
Once upon a time, in a galaxy far, far away…just kidding, it was in Silicon Valley. OpenAI was founded as a nonprofit research lab with a mission to save the world with artificial intelligence. The plan? Use machine learning to cure cancer, build self-driving cars, and end world hunger, straight out of a lab in San Francisco. No biggie.
As a nonprofit research laboratory, OpenAI focused on developing cutting-edge AI technologies and conducting groundbreaking research in the field. The premise? The company announced it was throwing a patent party and everyone was invited, declaring that it would “freely collaborate” with other organizations and researchers by making its patents and research available to the general public.
When the company was founded in 2015 by a group of high-profile entrepreneurs and researchers – among them Sam Altman, Reid Hoffman, Jessica Livingston, Elon Musk, Ilya Sutskever, Peter Thiel, and Olivier Grabias – its mission was clear:
“to advance artificial intelligence in a way that would benefit society as a whole, unconstrained by a need to generate a financial return.”
According to Wired, Greg Brockman, AI researcher and co-founder of OpenAI, drew up a list of the “top researchers in the area” after meeting with Yoshua Bengio, the Canadian computer scientist widely regarded as one of the “founding fathers” of the deep learning movement. In December 2015, Brockman managed to hire nine of them as the company’s first employees, and the promise and mission of OpenAI were said to be what drew these researchers in.
Wojciech Zaremba, a former intern at both Google and Facebook who later came to work at OpenAI, said he joined the company partly because of the people and the mission, even though OpenAI could not match the offers he had received elsewhere. Those offers, he said, were two or three times his market value. According to him,
“OpenAI was the best place to be.”
The company, he felt, offered something more valuable: the opportunity to pursue future-focused research and eventually share most, if not all, of that work freely with anyone who asked for it.
OpenAI’s research director, Ilya Sutskever, who left Google to join the company, said of his move:
“They did make it very compelling for me to stay, so it wasn’t an easy decision, but in the end, I decided to go with OpenAI, partly because of the very strong group of people and, to a very large extent, because of its mission.”
The founding of the company was accompanied by a commitment of $1 billion in funding from the founding members – which also included YC leader Paul Graham – as well as corporate sponsors like AWS and Infosys. The aim was to create a safety net that would allow researchers to explore the full potential of AI without compromising safety or ethics.
One of the first pieces of AI software OpenAI released was OpenAI Gym, a toolkit for building artificially intelligent systems by way of a technology called “reinforcement learning” – exactly what Brockman had said the company would start with. The same year, “Universe,” a software platform for assessing and honing an AI’s general intelligence via a variety of games, websites, and other applications, was also introduced.
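For readers unfamiliar with the term, “reinforcement learning” is the observe–act–reward loop that Gym packaged into a standard interface. The sketch below illustrates the idea in plain Python using a made-up two-armed bandit environment – a deliberately simplified stand-in for illustration, not Gym’s actual API:

```python
import random

# Hypothetical toy "environment": two slot-machine arms with
# different payout probabilities. The agent doesn't know these.
ARM_PROBS = [0.3, 0.7]  # arm 1 pays off more often

def pull(arm):
    """Reward is 1 with the arm's payout probability, else 0."""
    return 1 if random.random() < ARM_PROBS[arm] else 0

def train(steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy reinforcement learning: estimate each arm's
    value from observed rewards, mostly exploiting the best arm,
    occasionally exploring the other."""
    random.seed(seed)
    values = [0.0, 0.0]  # running reward estimate per arm
    counts = [0, 0]      # how many times each arm was pulled
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(2)        # explore
        else:
            arm = values.index(max(values))  # exploit
        reward = pull(arm)
        counts[arm] += 1
        # incremental running-mean update of the arm's value
        values[arm] += (reward - values[arm]) / counts[arm]
    return values

print(train())  # estimates converge toward each arm's true payout rate
```

Run long enough, the value estimates approach each arm’s true payout probability, so the agent ends up pulling the better arm most of the time – the same trial-and-error principle that Gym environments let researchers study at much larger scale.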
However, during this nascent stage of the company’s ascent, Elon Musk left his board seat in 2018, citing a prospective conflict of interest with Tesla’s own AI research and development on autonomous driving, though he remained a donor to the company. But was that really the reason? According to the most recent reports, there was more to the “conflict of interest” than we knew, and we will soon find out.
The Challenge of Funding a Nonprofit
As a nonprofit organization, OpenAI faced the constant struggle of balancing its research goals with its budget constraints. The researchers had to learn how to stretch a dollar like a rubber band while still managing to develop cutting-edge AI technologies that would impress their backers.
The company has said before that an open nonprofit organization was the most effective way to accomplish its objectives.
“Since our research is free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.”
But a few years later, they came to the conclusion that being an open, nonprofit organization would make it harder to carry out this purpose. So, in 2019, OpenAI hit a monumental inflection point. In a rather unorthodox move for a nonprofit, it announced the creation of OpenAI LP, a distinct entity operating as a “capped-profit” corporation – a transformative shift in its foundational structure.
Motivations Behind the Transition to For-Profit
Why did OpenAI make the bold decision to transition to a for-profit model? Some say it was because they were tired of eating ramen noodles for every meal, while others argue it was because they wanted to make more money than Elon Musk. Regardless of the reason, it was a big change for the nonprofit research lab.
In reality, the decision was motivated by a desire to increase financial sustainability and pursue commercial applications of its technology. The move was not without controversy: some critics argued that it could compromise the organization’s commitment to ethical AI development.
First, the company claimed that it needed to raise “billions of dollars” and pay out enormous signing bonuses to recruit top talent.
“We’ll need to invest billions of dollars in upcoming years into large-scale cloud compute, attracting and retaining talented people, and building AI supercomputers.”
The company also thought it was dangerous to keep OpenAI fully open. According to a Vox report, the safety team at OpenAI concluded that open-sourcing all of its work might, rather than serving humanity’s best interests, invite trouble. So, when they created GPT-2, they withheld it from the public over concerns that it could easily be abused for plagiarism, bots, fake Amazon reviews, and spam.
But even as it went through this transition, the company insisted that it would continue to pursue the same goals and would place financial restrictions on both investors and employees. Despite its new objective of amassing a fortune, it remained cognizant of its moral compass and claimed it was determined to refrain from any unethical practices. The result was OpenAI LP, a “capped-profit” firm – a hybrid of a for-profit and a nonprofit organization.
“We want to increase our ability to raise capital while still serving our mission, and no pre-existing legal structure we know of strikes the right balance.”
The “capped-profit” framework refers to the limit placed on first-round investors: they are entitled to receive a maximum return of 100 times their original investment.
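As a back-of-the-envelope illustration of how such a cap works – the 100x multiple comes from OpenAI’s announcement, but the dollar figures below are made up, and the routing of excess profit to the nonprofit is how OpenAI described the structure – here is a minimal sketch:

```python
def capped_return(investment, gross_return, cap_multiple=100):
    """Split an investor's gross return under a capped-profit structure:
    the investor keeps at most cap_multiple * investment; anything
    above that flows back to the nonprofit."""
    cap = cap_multiple * investment
    payout = min(gross_return, cap)          # what the investor receives
    to_nonprofit = max(gross_return - cap, 0)  # excess beyond the cap
    return payout, to_nonprofit

# Hypothetical numbers: a $10M first-round stake is capped at $1B,
# so a $5B gross return leaves $4B with the nonprofit.
payout, excess = capped_return(10_000_000, 5_000_000_000)
```

In other words, an early backer can still do extremely well, but the upside from a runaway success is not unbounded – the surplus is meant to fund the mission rather than enrich shareholders without limit.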
Balancing Profitability with Ethical Considerations
The transition from nonprofit to for-profit required OpenAI to balance its desire for profit with its commitment to ethical AI development. It was kind of like trying to juggle while riding a unicycle, except with more existential questions about the nature of humanity.
Information on the company’s website stated that the lab operates under a nonprofit governance structure (OpenAI LP is governed by OpenAI Inc., the nonprofit arm) that prioritizes the betterment of humanity over for-profit interests. That structure, it said, also allows it to emphasize safety concerns, undertake significant initiatives such as sponsoring the most comprehensive Universal Basic Income (UBI) experiment, and cancel equity obligations to shareholders if necessary.
The cap on the profits its stockholders can receive, it said, keeps them from being tempted to chase unlimited wealth and risk deploying something genuinely harmful.
Regardless, Elon Musk maintains his reservations. On March 15th, he took to Twitter, the social media platform which he now owns, to express his grievances.
Apparently, the meme billionaire invested $100 million in the LLM giant well before it became a for-profit company. And while Elon seems concerned about the legal implications of OpenAI’s transition, his past advocacy suggests he is just as concerned about the potential dangers of artificial intelligence.
While the Tesla boss cited a conflict of interest as his reason for leaving OpenAI, a new report by Semafor shows that Elon’s real reason for leaving was that his proposal to take over the company and run it himself was rejected by Sam Altman and the other founders. He had proposed the idea because he felt the venture had fallen significantly behind Google, and that he was the messiah who could turn things around.
Elon made it clear that his objective was to establish the AI lab as a competing force against Google.
When he left, he also withheld a large donation he had planned to make. OpenAI’s statement indicated that Musk would continue to provide financial support to the organization, but he did not follow through on the commitment.
So, really, the conflict of interest was a power struggle that Elon lost?
As per Semafor’s report, only a small number of people at OpenAI believed that Musk’s departure was due to potential conflicts of interest. Furthermore, the speech he gave on his way out of OpenAI’s office, which centered primarily on this topic, landed poorly: most of the remaining employees were skeptical of the explanation.
Fortunately, after Elon’s departure, Microsoft invested $1 billion in OpenAI, gaining exclusive licensing rights to the company’s GPT-3 model a year later, while also becoming its exclusive cloud provider. The investment was instrumental in allowing OpenAI to pursue further research endeavors. Through a collaborative effort between the AI lab and its new investor Microsoft, a supercomputer was built for training large-scale models, which ultimately led to the creation of ChatGPT and the image generator DALL-E.
The Future of OpenAI
The company’s stated aim is to develop AI systems that have a positive impact on society while exploring profitable commercial applications of its technology. It professes no intention to compete; instead, it seeks to collaborate with other research and policy institutions to promote safety in the final stages of AGI development.
Nevertheless, the competition continues to intensify. Elon, for example, might present a facade of not being an adversary, but in truth he has joined the ranks of OpenAI’s direct competitors: he is actively pursuing the creation of a new startup to square off against OpenAI. Clearly, Elon has not left the battle ring, and the power struggle is still on.
Despite Sam Altman’s uncommon decision to forego equity in the new for-profit entity as a way to stay aligned with the original mission, Elon Musk remains suspicious of OpenAI’s shift from an open to a closed organization.
Looking ahead, the company continues to push the boundaries of artificial intelligence through innovative research and the development of new technologies. Who knows what they’ll come up with next? Maybe a robot that can fly humans to Mars or a machine that can predict the end of the world. Either way, it looks super interesting.
This article was originally published by Chinecherem Nduka on Hackernoon.