Several prominent executives in the field of artificial intelligence, including OpenAI CEO Sam Altman, have issued a warning about the risks the technology poses to humanity.

The following is a news article from Reuters. In a letter published by the nonprofit Center for AI Safety (CAIS), more than 350 signatories wrote that mitigating the risk of extinction from AI should be treated as a global priority alongside other societal-scale risks such as pandemics.

The signatories included Altman, the CEOs of AI companies DeepMind and Anthropic, and executives from Microsoft and Google.

Also among them were Geoffrey Hinton and Yoshua Bengio, two of the three “godfathers of AI” who received the 2018 Turing Award for their work on deep learning, as well as professors from institutions including Harvard and Tsinghua University in China.

A statement from CAIS singled out Meta, where AI pioneer Yann LeCun works, for not signing the letter.

Many Meta employees were asked to sign, according to CAIS director Dan Hendrycks. Meta did not immediately respond to requests for comment.

Regulation discussions

The letter coincided with the US-EU Trade and Technology Council meeting in Sweden, where policymakers are expected to discuss the regulation of artificial intelligence.

In April, Elon Musk and a group of AI experts and industry executives were the first to flag potential risks to society from artificial intelligence. According to Hendrycks, Musk has been invited to sign the new letter and is expected to do so within the week.

Advancements in AI have produced tools that proponents say can be used in applications ranging from medical diagnostics to writing legal briefs. But they have also raised fears that the technology could lead to privacy violations, fuel the spread of misinformation, and create problems as “smart machines” make decisions on their own.

The warning came two months after an open letter from the nonprofit Future of Life Institute (FLI), signed by Musk and several others, called for a pause in advanced AI development, citing potential risks to humanity.

Max Tegmark, president of the Future of Life Institute (FLI), who endorsed the new letter, said the technology could lead to extinction and that a constructive and open conversation can now begin.

AI pioneer Hinton earlier told Reuters that the threat AI poses to humanity may be more urgent than that of climate change.

Last week, Altman said the EU's draft AI Act, the first attempt to regulate AI, amounted to over-regulation and suggested OpenAI could withdraw from Europe. He quickly reversed his position after criticism from politicians.

Altman has gained prominence in artificial intelligence thanks to the widespread success of OpenAI's ChatGPT chatbot. On Thursday, European Commission President Ursula von der Leyen is scheduled to meet with Altman, and EU industry chief Thierry Breton is expected to meet him in San Francisco next month.

By Nawaz
