Elon Musk Reveals Letter Exposing Allegations Against Sam Altman

On Tuesday, November 21, 2023, Elon Musk threw another curveball in the winding tale of OpenAI’s evolving leadership. In the tweet below, Musk shares a link to a now-deleted GitHub post—a “Letter of Concerns” addressed to the OpenAI board by former employees of the company.

The letter alleges that Sam Altman, the CEO, engaged in misconduct, abuse of power, and other unethical practices.

Please find the letter’s content below:

To the Board of Directors of OpenAI:

We are writing to you today to express our deep concern about the recent events at OpenAI, particularly the allegations of misconduct against Sam Altman.

We are former OpenAI employees who left the company during a period of significant turmoil and upheaval. As you have now witnessed what happens when you dare stand up to Sam Altman, perhaps you can understand why so many of us have remained silent for fear of repercussions. We can no longer stand by silently.

We believe that the Board of Directors has a duty to investigate these allegations thoroughly and take appropriate action.

We urge you to:

  • Expand the scope of Emmett’s investigation to include an examination of Sam Altman’s actions since August 2018, when OpenAI began transitioning from a non-profit to a for-profit entity.
  • Issue an open call for private statements from former OpenAI employees who resigned, were placed on medical leave, or were terminated during this period.
  • Protect the identities of those who come forward to ensure that they are not subjected to retaliation or other forms of harm.

We believe that a significant number of OpenAI employees were pushed out of the company to facilitate its transition to a for-profit model. This is evidenced by the fact that OpenAI’s employee attrition rate between January 2018 and July 2020 was on the order of 50%.

Throughout our time at OpenAI, we witnessed a disturbing pattern of deceit and manipulation by Sam Altman and Greg Brockman, driven by their insatiable pursuit of achieving artificial general intelligence (AGI). Their methods, however, have raised serious doubts about their true intentions and the extent to which they genuinely prioritize the benefit of all humanity.

Many of us, initially hopeful about OpenAI’s mission, chose to give Sam and Greg the benefit of the doubt. However, as their actions became increasingly concerning, those who dared to voice their concerns were silenced or pushed out. This systematic silencing of dissent created an environment of fear and intimidation, effectively stifling any meaningful discussion about the ethical implications of OpenAI’s work.

We provide concrete examples of Sam and Greg’s dishonesty and manipulation, including:

  • Sam’s demand for researchers to delay reporting progress on specific “secret” research initiatives, which were later dismantled for failing to deliver sufficient results quickly enough. Those who questioned this practice were dismissed as “bad culture fits” and even terminated, some just before Thanksgiving 2019.
  • Greg’s use of discriminatory language against a gender-transitioning team member. Despite many promises to address this issue, no meaningful action was taken, except for Greg simply avoiding all communication with the affected individual, effectively creating a hostile work environment. This team member was eventually terminated for alleged underperformance.
  • Sam directing IT and Operations staff to conduct investigations into employees, including Ilya, without the knowledge or consent of management.
  • Sam’s discreet, yet routine exploitation of OpenAI’s non-profit resources to advance his personal goals, particularly motivated by his grudge against Elon following their falling out.
  • The Operations team’s tacit acceptance of the special rules that applied to Greg, navigating intricate requirements to avoid being blacklisted.
  • Brad Lightcap’s unfulfilled promise to make public the documents detailing OpenAI’s capped-profit structure and the profit cap for each investor.
  • Sam’s inconsistent promises of compute quotas to research projects, causing internal distrust and infighting.

Despite the mounting evidence of Sam and Greg’s transgressions, those who remain at OpenAI continue to blindly follow their leadership, even at significant personal cost. This unwavering loyalty stems from a combination of fear of retribution and the allure of potential financial gains through OpenAI’s profit participation units.

The governance structure of OpenAI, specifically designed by Sam and Greg, deliberately isolates employees from overseeing the for-profit operations, precisely due to their inherent conflicts of interest. This opaque structure enables Sam and Greg to operate with impunity, shielded from accountability.

We urge the Board of Directors of OpenAI to take a firm stand against these unethical practices and launch an independent investigation into Sam and Greg’s conduct. We believe that OpenAI’s mission is too important to be compromised by the personal agendas of a few individuals.

We implore you, the Board of Directors, to remain steadfast in your commitment to OpenAI’s original mission and not succumb to the pressures of profit-driven interests. The future of artificial intelligence and the well-being of humanity depend on your unwavering commitment to ethical leadership and transparency.

Sincerely,

Concerned Former OpenAI Employees


This article was originally published by NewsByte.Tech on Hackernoon.
