
To avoid a future of AI overlords, prominent figures call for a pause in development

Kitco News



(Kitco News) - The emergence of ChatGPT has taken the world by storm, as the artificial intelligence-powered chatbot has shown what AI is capable of: potentially disrupting numerous sectors of the economy and putting certain professions out of business.

In response to the risks posed by rapidly advancing AI, numerous prominent individuals, including Elon Musk and Steve Wozniak, have signed an open letter created by the Future of Life Institute that calls for an immediate pause on the training of AI systems more powerful than GPT-4.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs,” the letter opened. “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.”

The letter went on to say that this level of planning and management has yet to be addressed, despite the multiple AI competitors that have been introduced since the release of ChatGPT. It warned that these systems have now become human-competitive at general tasks, and cautioned that, left unchecked, development could produce “nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us.”

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter warned. “This confidence must be well justified and increase with the magnitude of a system's potential effects.”

These concerns have been echoed by OpenAI, the creator of ChatGPT, which has released a statement recommending that “At some point, it may be important to get an independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.”

The authors of the letter suggest this time is now and have called on all AI labs “to immediately pause for at least six months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

While the pause is in force, the letter said AI labs and independent experts should use the time to collaborate on the creation of a set of shared safety protocols for advanced AI design and development that are thoroughly audited and supervised by independent outside experts.

“These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt,” the letter said. “This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”

It also called for AI developers to work with policymakers to accelerate the development of robust AI governance systems. Some of the recommendations include, “new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; liability for AI-caused harm; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.”

The letter closed by saying that the developments thus far have led to an “AI summer,” and recommended we use this time to reap the rewards and engineer these systems in a way that will benefit all and give society a chance to adapt. “Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.”

At the time of writing, the open letter has 1,187 signatures.


A new report from researchers at OpenAI, OpenResearch and the University of Pennsylvania appears to validate many of the concerns laid out in the letter.

According to the study’s authors, “Our findings reveal that around 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs [large language models], while approximately 19% of workers may see at least 50% of their tasks impacted.”

While previous advances in technology affected certain segments of the economy, the effects that LLMs will have are projected to span all wage levels, “with higher-income jobs potentially facing greater exposure to LLM capabilities and LLM-powered software.”

“Significantly, these impacts are not restricted to industries with higher recent productivity growth,” the authors wrote. “Our analysis suggests that, with access to an LLM, about 15% of all worker tasks in the US could be completed significantly faster at the same level of quality. When incorporating software and tooling built on top of LLMs, this share increases to between 47 and 56% of all tasks.”
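To make the study's headline figures concrete, the aggregation idea behind statistics like “80% of workers with at least 10% of tasks affected” can be sketched as follows. This is a hypothetical illustration with invented toy data, not the study's actual methodology or dataset: each worker is represented by a list of tasks flagged as LLM-exposed or not, and we count the share of workers whose exposed-task fraction meets a threshold.

```python
def exposure_share(workers, threshold):
    """Fraction of workers whose share of exposed tasks meets the threshold.

    workers: list of workers, each a list of booleans (True = task exposed
    to LLMs). threshold: minimum fraction of exposed tasks, e.g. 0.10.
    """
    hits = 0
    for tasks in workers:
        if tasks and sum(tasks) / len(tasks) >= threshold:
            hits += 1
    return hits / len(workers)

# Toy workforce of four workers (invented data for illustration only).
workforce = [
    [True, False, False, False],   # 25% of tasks exposed
    [True, True, True, False],     # 75% exposed
    [False, False, False, False],  # 0% exposed
    [True, False, True, False],    # 50% exposed
]

print(exposure_share(workforce, 0.10))  # → 0.75 (3 of 4 workers)
print(exposure_share(workforce, 0.50))  # → 0.5  (2 of 4 workers)
```

The same thresholding logic, applied to real occupational task data, would yield worker-level shares of the kind the paper reports.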

These findings suggest that LLM-powered software is poised to have a substantial impact on modern society, the paper said. “We conclude that LLMs such as GPTs exhibit traits of general-purpose technologies, indicating that they could have considerable economic, social, and policy implications. As capabilities continue to evolve, the impact of LLMs on the economy will likely persist and increase, posing challenges for policymakers in predicting and regulating their trajectory.”

Disclaimer: The views expressed in this article are those of the author and may not reflect those of Kitco Metals Inc. The author has made every effort to ensure accuracy of information provided; however, neither Kitco Metals Inc. nor the author can guarantee such accuracy. This article is strictly for informational purposes only. It is not a solicitation to make any exchange in commodities, securities or other financial instruments. Kitco Metals Inc. and the author of this article do not accept culpability for losses and/or damages arising from the use of this publication.