Sputnik International
Americas

Leading AI Companies Move to Self-Regulate After Lawmakers Discuss Industry Controls

26.07.2023
A group of tech firms behind some of the most advanced artificial intelligence (AI) programs has teamed up to form a new industry self-regulatory body amid fears that US lawmakers may soon impose their own regulations on the industry.
The Frontier Model Forum (FMF), which says its purpose is to develop safety standards for the emerging AI industry, was announced on Wednesday by Google, ChatGPT-maker OpenAI, Microsoft and Anthropic.

"It is vital that AI companies - especially those working on the most powerful models - align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible," Anna Makanju, OpenAI's vice president of global affairs, said in a statement.

Microsoft President Brad Smith added that "companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity."
FMF says any organization is free to join, so long as it is working on what it calls "frontier AI models," which it defines as "large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks."

Congress Seeks Oversight

The announcement comes after the four FMF founding firms, plus Amazon, Meta*, and Inflection, met at the White House and pledged to voluntarily adopt safeguards for “safety, security and trust,” as US President Joe Biden put it.
That includes making sure their technology is safe before releasing it to the public, safeguarding their AI against cyber threats, and labeling content that has been altered or AI-generated, as well as “rooting out bias and discrimination, strengthening privacy protections, and shielding children from harm,” according to Biden.
The firms also agreed to “find ways for AI to help meet society’s greatest challenges - from cancer to climate change - and invest in education and new jobs,” he said.
Voluntary self-policing is one thing, however. On Tuesday, federal lawmakers held their latest hearing aimed at something more concrete: legal regulations.
The panel included Anthropic CEO Dario Amodei; Yoshua Bengio, an AI professor at the University of Montreal and one of the fathers of modern AI science; and Stuart Russell, a computer science professor at the University of California at Berkeley. All three called for some kind of industry regulation, but disagreed on how it should be done.
Sen. Josh Hawley (R-MO), an outspoken critic of Big Tech firms, said he was concerned about monopolization of the technology.
“I’m confident it will be good for the companies, I have no doubt about that,” Hawley said. “What I’m less confident about is whether the people are going to be all right.”
Hawley pointed out that the tech firms pushing the forefront of AI research were “the same people” that had for years dodged other forms of congressional oversight.
The European Union has also moved to regulate AI, introducing its first regulatory framework last month that bans AI judged to pose an "unacceptable risk" to people and restricting and governing what it considers "high-risk" uses of the technology. China, a leading center for AI development, is also in the process of developing a regulatory framework.

Threat or Hype?

Paradoxically, AI pioneers have also raised fears about the technology; on the other side, critics say AI developers are deliberately overhyping their products’ potential in an attempt to shape public perception.
Earlier this year, Bengio and his fellow pioneering AI developer, Geoffrey Hinton, both of whom have worked since the 1990s to lay the foundation for modern AI like ChatGPT and Bard, warned that AI was developing so quickly it could be dangerous. Speaking before Congress in May, Hinton compared the arrival of AI to first contact with an extraterrestrial species.

In March, Anthony Aguirre, a professor of Physics of Information at the University of California, Santa Cruz, and executive director of the Future of Life Institute, circulated a letter asking AI developers for a six-month pause on developing the technology. The institute, founded in 2014, studies what it claims are various existential risks to humanity, including superintelligent artificial general intelligence (AGI).

AGI is regarded as the point at which artificial intelligence surpasses human intelligence.

The institute is closely tied to effective altruism, a utilitarian movement popular with entrepreneurs who claim to prioritize actions that will generate the most good for all of humanity. Twitter CEO Elon Musk is both a strong proponent of effective altruism and a board member of the Future of Life Institute, to which he gave a grant in 2015 specifically to research the threat posed to humanity by AGI.
Musk has criticized the leading AI research firms, calling Google co-founder Larry Page “cavalier” about the threat, although Musk was also one of the founding board members of OpenAI in 2015.