The full-speed development of AI is championed by supporters of what is known as "e/acc," while "decels" remain hesitant because of the potential risks involved. Of paramount importance to the future of AI is the AI alignment problem, which concerns the possibility of AI slipping out of human control; the issue was particularly salient in the recent conflict between OpenAI's board and its CEO. As pressure mounts on technology companies from government authorities and lawmakers to make the technology "responsible" and "safe," efforts are underway to tackle the problems of AI alignment and AI safety.
Now more than a year since ChatGPT's introduction, the biggest AI story of 2023 may have turned out to be less about the technology itself than about the boardroom kerfuffle at OpenAI over its rapid advancement. The ousting, and subsequent return, of CEO Sam Altman laid bare the core tension around generative AI heading into 2024: AI deeply divides those who embrace its breakneck pace of innovation and those who want to slow it down in light of the many risks involved.

Known in tech circles as the e/acc vs. decels debate, the argument has been circulating in Silicon Valley since 2021. But as AI grows in power and scope, it is increasingly important to understand both sides of the discussion.

Here is an overview of the key terms and some of the prominent people shaping the future of AI.

e/acc and techno-optimism

The term "e/acc" stands for effective accelerationism. In short, those who subscribe to this view want technology and progress to move as fast as possible.

“Technocapital can lead to the next step in consciousness, creating inconceivable lifeforms and silicon-based awareness,” the proponents of the concept wrote in the original e/acc post.

In terms of AI, it is "artificial general intelligence," or AGI, that underlies the debate. AGI is a super-intelligent AI so advanced that it can do things as well as or better than humans. AGIs can also improve themselves, creating an endless feedback loop with limitless possibilities.
Many believe that AGIs could ultimately bring about the end of the world if they become too intelligent and find a way to eradicate humanity. Adherents of e/acc, however, are optimistic, believing in AI's potential to benefit the world and create abundance. The movement's figurehead, who posts as @basedbeffjezos, was recently outed by the press as Guillaume Verdon, a prominent e/acc voice who previously worked for Alphabet, X, and Google. Verdon is now working on what he dubs the "AI Manhattan project" and runs an experimental technology startup, Extropic, which aims to "harness thermodynamic physics and create the ultimate substrate for Generative AI in the physical world."
Venture capitalist Marc Andreessen of Andreessen Horowitz is another e/acc supporter; he has called Verdon the "patron saint of techno-optimism." Andreessen also wrote the "Techno-Optimist Manifesto," which argues that technology can improve humanity and that failing to advance AI aggressively enough would cost lives and amount to a "form of murder." Yann LeCun of Meta, another influential figure in AI, reposted Andreessen's essay "Why AI Will Save the World."

LeCun, meanwhile, describes himself as a "humanist who subscribes to both Positive and Normative forms of Active Techno-Optimism." He has publicly said he believes AI offers more potential for good than for harm, with open-source AI playing a significant role, and he has pushed back against those who worry that current economic and political institutions, and humanity itself, may be incapable of using AI for good.

When Encode Justice and the Future of Life Institute published an open letter in March calling on all AI labs to pause for six months the training of AI systems more powerful than GPT-4, OpenAI CEO Sam Altman addressed it at an MIT event in April, affirming the importance of moving with caution and greater attention to safety when dealing with the issue.
Altman found himself at the center of the battle again when OpenAI's boardroom drama unfolded. The original directors of OpenAI's nonprofit arm were worried about the rapid rate of progress and the company's stated mission "to ensure that artificial general intelligence — AI systems that are generally smarter than humans — benefits all of humanity."

Those ideas matter to decels, people who support slowing down AI development because the technology's future is fragile and uncertain. One of their foremost concerns is AI alignment: the problem that an AI could become so intelligent that humans can no longer control it or keep its goals consistent with our own.

Malo Bourgon, CEO of the Machine Intelligence Research Institute, said: “Our strength as a species, driven by our higher intelligence, has brought about adverse effects for other species, including extinction, since our objectives are not consistent with theirs. We can control the future — chimpanzees are in zoos. Highly advanced AI systems could similarly have an effect on humanity.” AI alignment research at MIRI aims to train AI systems to conform to human goals, ethics, and morals, in order to avert potential risks to humanity.

Christine Parthemore, CEO of the Council on Strategic Risks and a former Pentagon official, has devoted her career to de-risking dangerous situations. She recently told CNBC that, given the “mass scale death” AI could cause if used to oversee nuclear weapons, action needs to be taken promptly. But she stressed that “just looking at the problem” isn’t helpful. “The major point is addressing the risks and finding the most effective solutions,” she said. “It is dual-use technology in its purest form. There’s no situation where AI is more of a weapon than a solution.” For instance, large language models will become virtual lab assistants and accelerate medicine, but they may also help unscrupulous actors identify the best and most communicable pathogens to use for attack. This is among the reasons AI cannot simply be stopped, Parthemore said: “Slowing down isn’t part of the solution set.”
This year, her former employer, the Department of Defense, said there will always be human involvement whenever it uses AI systems. That protocol, she believes, should be adopted everywhere. "We cannot be led by the AI alone," she said. "It cannot be just 'the AI said X.' Rather, we must trust the tools we use but still remain aware of their limitations. People are not properly informed about how these tools work, which can lead to overconfidence and overdependency."

Responding to such concerns, the Biden-Harris administration in July secured voluntary commitments from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI to develop AI technology safely, securely, and transparently. Not long after, President Biden issued an executive order setting new standards for AI safety and security, despite the concerns of many stakeholders. Likewise, the U.K. government launched the AI Safety Institute in early November, the first state-backed organization focused on advanced AI safety.
As the international race for AI advancement intensifies and its ties to geopolitical tension grow, China is introducing its own AI regulations. OpenAI, for its part, is working on Superalignment, a project with the goal of solving the core technical challenges of superintelligent alignment within four years.

At its AWS re:Invent 2023 conference, Amazon announced new capabilities for AI development while outlining the responsible AI steps it is taking. Diya Wynn, the responsible AI lead for AWS, said responsible AI should not be treated as separate from day-to-day work but should be incorporated into every task. A Morning Consult survey commissioned by AWS found that 59% of business leaders consider responsible AI a priority and that nearly half (47%) plan to invest more in it in 2024 than they did in 2023.

Implementing responsible AI measures may slow AI's overall pace of development, but employees like Wynn believe they are paving the way toward a safer future, as "companies are noticing the value and beginning to prioritize responsible AI."

Bourgon, however, takes a different view, arguing that the measures governments have taken so far are inadequate. He believes AI systems could advance to catastrophic levels as early as 2030 and that governments must be prepared to halt AI systems until their developers can guarantee their safety.