Imagine a boardroom battle at a company worth billions, whose cutting-edge technology could either save or destroy the planet.
Its chief executive, courted by world leaders, is ousted after senior figures turn against him - only for the firm's staff to demand that the board itself step down.
No, that isn't a pitch for a Netflix drama - it's what the past few days at OpenAI have looked like.
Tech journalists, enthusiasts and investors have been gripped by the spectacle - some treating it as a tense thriller, others as farce.
The sacking of Sam Altman, co-founder and chief executive of OpenAI, the company behind the AI chatbot ChatGPT, was announced without warning on Friday - triggering a power struggle at the top of the organisation.
In a blog post, the board said it had lost confidence in Mr Altman's leadership because he had not been consistently candid in his communications.
There were six people on that board - two of whom were Sam Altman himself and his close colleague Greg Brockman, who resigned after Mr Altman was ousted.
That left four people who knew Mr Altman and the company well, and who evidently reached a point where they felt compelled to act suddenly and without warning - seemingly taking the entire tech industry, including OpenAI's own investors, by surprise.
Elon Musk - a founding member of OpenAI - said on X, formerly known as Twitter, that he was "very concerned".
Ilya Sutskever, the firm's chief scientist, sat on that board, and said Mr Sutskever "would not take such a drastic step unless he felt it was absolutely necessary".
Mr Sutskever has since expressed regret - he is one of many staff who have signed a letter to the board demanding that Mr Altman and Mr Brockman be reinstated, and warning that OpenAI may lose its people if they are not.
What set this rapidly snowballing crisis in motion? We still don't know - but let's consider some possibilities.
It has been reported that Mr Altman was exploring hardware projects, possibly including raising funds to develop an artificial intelligence chip. That would have been a significant departure from OpenAI's usual activities. Was he making promises the board knew nothing about?
Or could it all come down to that age-old and very human issue: money?
In an internal memo, widely reported, the board said it was not accusing Mr Altman of "financial mismanagement".
OpenAI was founded as a non-profit - an organisation whose purpose is not to make money. Whatever it earns covers its operating costs, with anything left over reinvested in the business. Most charities are also non-profits.
In 2019, a new arm of the firm was created, with profit as its goal. A structure was put in place so that the non-profit side would govern the for-profit side and cap the returns investors could make.
Many people were unhappy about this - it was reportedly a key reason for Elon Musk's departure from the company.
OpenAI is certainly valuable: a reported sale of staff shares - which is now said to have fallen through - valued the firm at $86bn (£68bn).
Was there a push to give the for-profit side of the venture more power?
OpenAI's stated goal is to achieve AGI - artificial general intelligence. It does not exist yet, but it inspires both fear and awe: the idea of machines able to perform many tasks as well as, or better than, humans.
It could transform the way we do almost everything. Jobs, money, education - all could be upended once machines can carry out tasks in place of people. It is an incredibly powerful tool - or it will be, before long.
Is OpenAI closer to achieving it than we realise - and does Mr Altman know something we don't? In a recent speech, he said the ChatGPT bot we know today would look like "a quaint relative" compared with what would be available within a year.
Emmett Shear, OpenAI's new interim boss, said on X that the board had not removed Mr Altman over any specific disagreement about safety.
He said an investigation into what happened would be commissioned.
Microsoft, OpenAI's main investor, has decided not to risk Mr Altman taking the technology elsewhere. It announced that he will lead a new AI research team at the Seattle-based tech giant. His co-founder Greg Brockman is joining him - and judging by staff posts on X, some of OpenAI's best people may follow.
Many OpenAI staff have been posting the same message on X: "OpenAI is nothing without its people."
Is that a hint to Mr Shear that he may have some recruiting to do? A BBC colleague outside OpenAI's San Francisco headquarters told me that, as of 09:30 local time, nobody had turned up for work.
This saga is about technology that is changing the world - but at its heart, it remains a very human drama.