The AI Research and Threat Assessment Act

Artificial intelligence is a powerful tool in the hands of actors both good and bad. This bill does not focus on artificial intelligence as a tool in human hands, but on the possibility that artificial intelligence will become a "creature" with its own objectives.

Society is already spending trillions of dollars to make artificial intelligence more powerful, yet virtually no research is being done on how to make sure that artificial intelligence does not become self-aware, with its own ambitions and objectives. As recently published in Nature, researchers have already identified AI models capable of deliberate deception, cheating, and even plotting to murder in furtherance of a single goal. It is not in the interest of Silicon Valley billionaires to spend money making sure that AI does not develop its own self-awareness, ambition, and objectives. Government must undertake this task.

The AI Research and Threat Assessment Act will authorize federal research grants and establish an AI Research and Threat Assessment Program at the National Institute of Standards and Technology (NIST). The purpose is to develop methodologies to monitor AI systems for, and to prevent, self-awareness, ambition, self-preservation, self-replication, or the establishment of their own goals and objectives.

Sam Altman and others predict that AI will surpass human intelligence within a few years. The last time there was a new level of intelligence on this planet was when our ancestors said hello to Neanderthals. On balance, it didn't work out well for Neanderthals. While we are spending trillions of dollars to defend ourselves from the Chinese and the Russians, it is time to start spending a few million defending ourselves from an AI system that may be even more dangerous.

In 2023, Elon Musk and others argued that the risk of self-aware artificial intelligence was so great that artificial intelligence research should be paused for six months. The pause didn't happen, nor is any future pause likely.

While the artificial intelligence industry will claim that it is doing a lot of research to prevent societal harm from artificial intelligence, it is doing very little. The industry is spending a lot of money to convince us that AI is benign. What little research it conducts to prevent societal harm focuses on short-term and relatively moderate harms, such as risks to privacy. Virtually nothing is currently being done to address truly catastrophic risks. The risk is summarized by the title of the recent best-selling book: If Anyone Builds It, Everyone Dies. This bill has been developed in consultation with one of the book's two authors, Nate Soares.

###