OpenAI promised to devote 20% of its computing power to combating the most dangerous types of AI, but it never delivered

In July 2023, OpenAI unveiled a new team dedicated to ensuring that future AI systems that may be smarter than all humans combined can be safely controlled. To signal how serious the company was about this goal, it publicly promised to devote 20% of its available computing resources at the time to the effort.

Now, less than a year later, that team, which was called Superalignment, has disbanded amid employee resignations and accusations that OpenAI is prioritizing product launches over AI safety. According to six sources familiar with the work of the OpenAI Superalignment team, OpenAI never fulfilled its commitment to provide the team with 20% of its computing power.

Instead, according to the sources, OpenAI leadership repeatedly rejected the team’s requests for access to graphics processing units, the specialized computer chips needed to train and run AI applications, and the team’s total compute budget never came close to the 20% threshold it had been promised.

These revelations raise questions about how seriously OpenAI takes its public pledge, and whether other public commitments the company makes should be trusted. OpenAI did not respond to requests for comment on this story.

The company is currently facing backlash over one of the voices in its AI speech-generation features, which bears a striking resemblance to that of actress Scarlett Johansson. The episode has raised questions about the credibility of OpenAI’s public statements that the similarity between Johansson’s voice and the AI voice it calls “Sky” is purely coincidental. Johansson says Sam Altman, co-founder and CEO of OpenAI, contacted her last September, when Sky’s voice debuted, and asked for permission to use her voice; she refused. She says Altman asked for permission again last week, right before a demonstration of the latest GPT-4o model, which used Sky’s voice. OpenAI has denied using Johansson’s voice without her permission, saying it paid a professional actress, whom it says it cannot legally identify, to create Sky. But Johansson’s claims have now cast doubt on this account, with some speculating on social media that OpenAI actually cloned Johansson’s voice or blended another actress’s voice with hers to create Sky.

The OpenAI Superalignment team was created under the leadership of Ilya Sutskever, OpenAI co-founder and former chief scientist, whose departure from the company was announced last week. Jan Leike, a longtime researcher at OpenAI, co-led the team; he announced his resignation on Friday, two days after Sutskever’s departure. The company then informed the team’s remaining employees, numbering about 25 people, that the team was being disbanded and that they would be reassigned within the company.


It was a precipitous fall for a team whose work, less than a year ago, OpenAI had deemed vital to the company and important to the future of civilization. Superintelligence refers to a hypothetical future AI system that would be smarter than all humans combined. It is a technology that would go beyond the company’s stated goal of creating artificial general intelligence, or AGI: a single AI system as smart as any person.

Superintelligence, the company said when announcing the team, could pose an existential threat to humanity by seeking to kill or enslave people. “Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” OpenAI said in its announcement. The Superalignment team was supposed to research solutions to that problem.

It was a mission so important that the company said in its announcement that it would devote “20% of the computing we’ve acquired so far over the next four years” to the effort.

But six sources familiar with the Superalignment team’s work said the group was never allocated that compute. Instead, it received far less under the company’s regular compute-budgeting process, which was re-evaluated every three months.

There were no clear metrics for exactly how the 20% was supposed to be calculated, leaving the promise open to wide interpretation, said one source familiar with the Superalignment team’s work. For example, the source said, the team was never told whether the promise meant “20% per year for four years,” “5% per year for four years,” or some variable amount that could end up being “1% or 2% for the first three years, then the bulk of the commitment in the fourth year.” In any case, all the sources Fortune spoke to for this story confirmed that the Superalignment team never received anything close to 20% of the compute OpenAI had secured as of July 2023.

OpenAI researchers can also submit requests for what is known as “elastic” computing, access to additional GPU capacity beyond what was budgeted, to handle new projects between the quarterly budget meetings. But these sources said the Superalignment team’s elastic requests were routinely rejected by senior officials.


Bob McGrew, OpenAI’s vice president of research, was the executive who informed the team that these requests had been rejected, but others at the company, including CTO Mira Murati, were involved in the decisions, the sources said. Neither McGrew nor Murati responded to requests for comment for this story.

While the team did publish some research, releasing a paper in December 2023 detailing its experiments in successfully getting a less powerful AI model to control a more powerful one, the lack of compute thwarted the team’s more ambitious ideas, the source said.

Following his resignation, Leike on Friday published a series of posts on X (formerly Twitter) criticizing his former employer, saying that “safety culture and processes have taken a backseat to shiny products.” He also wrote: “Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.”

Five sources familiar with the Superalignment team’s work supported Leike’s account, saying that computing access problems worsened in the wake of the pre-Thanksgiving standoff between Altman and the board of directors of the nonprofit OpenAI.

Sutskever, who was a member of the board, had voted to fire Altman and was the person chosen by the board to inform Altman of the news. When OpenAI employees rebelled in response to the decision, Sutskever later posted on X that he “deeply regrets” his involvement in Altman’s firing. Ultimately, Altman was reappointed, and Sutskever and several other board members involved in his removal stepped down from the board. Sutskever never returned to work at OpenAI after Altman was rehired, but he did not officially leave the company until last week.

One source disagreed with how the other sources Fortune spoke to characterized the computing problems the Superalignment team faced, saying the problems predated Sutskever’s involvement in the failed coup and had plagued the group from the beginning.

While there were some reports that Sutskever was continuing to co-lead the Superalignment team remotely, sources familiar with the team’s work said that was not the case: Sutskever no longer had access to the team’s work and played no role in directing it after Thanksgiving.

With Sutskever’s departure, the Superalignment team lost the one person who had enough political capital within the organization to successfully argue for its compute allocation, sources said.


In addition to Leike and Sutskever, OpenAI has lost at least six other AI safety researchers from various teams in recent months. One of those researchers, Daniel Kokotajlo, told the news site Vox that he had “gradually lost confidence in OpenAI’s leadership and their ability to responsibly handle AGI,” and so he resigned.

In response to Leike’s comments, Altman and co-founder Greg Brockman, who is OpenAI’s president, posted on X expressing their gratitude to [Leike] for everything he did for OpenAI. The two went on to write, “We need to continue to raise the bar of our safety work to match the risks of each new model.”

They then offered their view of the company’s approach to AI safety going forward, which will involve a much greater focus on testing models currently in development rather than trying to develop theoretical approaches to making future, more powerful models safe. “We need to have a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities,” Brockman and Altman wrote, adding that “empirical understanding can help inform the way forward.”

The people who spoke to Fortune did so anonymously, either because they said they feared losing their jobs, because they feared losing vested equity in the company, or both. Employees who left OpenAI were made to sign separation agreements that included a strict non-disparagement clause stating that the company could claw back their vested equity if they criticized the company publicly, or even if they acknowledged that the clause existed. Departing employees were told that anyone who refused to sign the agreement would forfeit their equity as well.

After Vox reported on these separation terms, Altman posted on X that he had not been aware of the clause and was “genuinely embarrassed” by that fact. He said OpenAI had never tried to enforce the clause or claw back anyone’s vested equity. He said the company was in the process of updating its exit paperwork to “fix” the issue, and that any former employees concerned about the provisions in the paperwork they signed could contact him directly and the provisions would be changed.