Lanon Wee

Demis Hassabis Responds to Allegation of Scaremongering by Meta's AI Chief

Demis Hassabis, the head of Google DeepMind, told CNBC that the company was not trying to gain a competitive advantage in the debate over how best to regulate AI. Yann LeCun, Meta's chief AI scientist, had claimed that Hassabis and other AI company CEOs were lobbying aggressively on behalf of their firms to ensure that only a small number of technology giants end up controlling AI. Hassabis countered that it was important to start the conversation about how to oversee potentially superintelligent AI now, because intervening too late could have dire consequences.

In an interview with CNBC's Arjun Kharpal, Hassabis denied allegations from Meta's AI chief that DeepMind was trying to steer the debate around AI regulation in its own interest. He said DeepMind is assisting the U.K. government with its upcoming summit on the technology.

LeCun had criticized Hassabis and other corporate AI leaders for allegedly trying to capture the AI industry by stoking fears about the existential risks the technology poses to humanity. On X, LeCun voiced his support for open AI platforms, where creativity, democracy, market forces and regulation combine to keep AI safe and under control, and he proposed some concrete steps to that end.

LeCun is a leading advocate for open-source AI (software whose code is publicly available for research and development), in contrast to "closed" AI, whose source code companies keep secret. He argued that the vision of AI regulation held by Hassabis and other AI CEOs would effectively see open-source AI "regulated out of existence," leaving control of the technology to a handful of companies on the U.S. West Coast and in China.

Meta is one of the leading technology companies open-sourcing its AI models. Its Llama large language model is among the most prominent open-source AI models available and has been used to develop more advanced language translation features.

Responding to LeCun's comments, Hassabis said: "I pretty much disagree with most of Yann's comments." He explained that there are three categories of AI risk that need to be considered: near-term harms, misuse of AI by bad actors, and the long-term risk posed by AGI (artificial general intelligence). He said it was important to begin discussing how to regulate potentially superintelligent AI now, as waiting too long could have dire consequences.

Meta was not available for comment.

Hassabis and James Manyika, Google's senior vice president of research, technology and society, both called for a worldwide agreement on how to approach the responsible development and regulation of AI. Manyika said it was "a good thing" that the U.K. and U.S. governments agree a global consensus is necessary, and he stressed the importance of including all nations in the conversation.

The expected attendance of China's Ministry of Science and Technology at the U.K. AI summit has raised concerns in some political circles in the U.S. and the U.K. about potential national security risks, given Beijing's tight control over its technology sector.
Asked about involving China, Hassabis said that AI knows no borders and that an international agreement on standards would require coordination among many countries. It is vital, he added, to have a dialogue with everyone, since AI is a "global technology." U.S. tech firms have distanced themselves from commercial work in China as Washington has ramped up pressure on the country over technology.


