In context: While the technology world is fixated on generative AI and its supposed potential to upend the economy and the job market, researchers are employing neural networks to tackle challenges in science, energy, health and security, including the detection of rogue nuclear weapons.

The Pacific Northwest National Laboratory (PNNL) is using machine learning (ML) algorithms to hunt for unknown nuclear threats. PNNL, one of the United States Department of Energy's national laboratories, said that ML can be used to create "secure, trustworthy, science-based systems" designed to give people and nations answers to a range of difficult scientific challenges.

The official public debut of an ML algorithm dates back to 1962, PNNL said, when an IBM 7094 computer beat a human opponent at checkers. Thanks to that algorithm, the system learned on its own, improving its strategy against checkers master Robert Nealey without being explicitly programmed to do so.

Today, PNNL said, machine learning is everywhere, powering personalized shopping recommendations and voice-driven assistants like Siri and Alexa. Generative AI tools like ChatGPT are just the latest public face of a technology that has had many decades to mature and evolve.

PNNL researchers are employing machine learning for national security, too, combining their expertise in nuclear nonproliferation with "artificial reasoning" to detect and (possibly) mitigate nuclear threats. The main goal of their research is to use data analytics and machine learning algorithms to monitor nuclear materials that could be used to produce nuclear weapons.

The AI developed at PNNL could be useful to the International Atomic Energy Agency (IAEA), which monitors nuclear reprocessing facilities in non-nuclear weapon states to check whether plutonium separated from spent nuclear fuel is later diverted to weapons production. The IAEA supplements in-person inspections with sample analysis and process monitoring, a time-consuming and labor-intensive effort.

PNNL's algorithms can build a virtual model of a facility the IAEA inspects, trained on "important temporal patterns" so it can predict what normal use of the facility's various areas looks like over time. If the data collected on-site doesn't match the model's prediction, inspectors can be called in to examine the facility again.
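PNNL hasn't published the details of that model, but the underlying idea (learn what normal operational data looks like, then flag departures from it) maps onto standard anomaly-detection tools. Below is a minimal sketch using scikit-learn's IsolationForest; the feature layout, the synthetic data and the contamination setting are illustrative assumptions, not PNNL's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical feature matrix: each row is one time window of readings
# (e.g., flow rates, temperatures, activity levels) from facility areas.
rng = np.random.default_rng(0)
normal_windows = rng.normal(size=(500, 8))   # training data: declared/normal operation
new_windows = rng.normal(size=(20, 8))       # fresh on-site measurements

# Fit an anomaly detector on the patterns observed during normal use.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_windows)

# Score the new data: a label of -1 means the window deviates from the
# learned "normal" pattern and might warrant a follow-up inspection.
labels = detector.predict(new_windows)
flagged = np.where(labels == -1)[0]
print(f"Windows flagged for review: {flagged.tolist()}")
```

A real deployment would feed the detector actual process-monitoring data rather than random numbers, but the workflow, train on normal operations and score new measurements against that baseline, is the same.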

Another ML-powered solution developed in PNNL's labs runs images of radioactive material through an "autoencoder," a model trained to "compress and decompress images" into compact descriptions suited to computational analysis. It examines images of microscopic radioactive particles, looking for the unique microstructure the material develops as a result of environmental conditions or the purity of the source materials at its production facility.
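PNNL hasn't released its architecture, but a generic convolutional autoencoder illustrates the "compress and decompress" idea: the encoder squeezes each micrograph into a short latent vector, and the decoder reconstructs the image from it. The image size, layer sizes and 32-dimensional latent code below are arbitrary choices for the sketch (PyTorch assumed), not the lab's design.

```python
import torch
from torch import nn

class MicrographAutoencoder(nn.Module):
    """Toy autoencoder: 64x64 grayscale micrograph -> 32-dim code -> reconstruction."""
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),                    # the compact description
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2),    # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=2, stride=2),     # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)           # compressed representation used for analysis
        return self.decoder(z), z     # reconstruction plus the latent code

# One training step on a dummy batch of 64x64 grayscale images.
model = MicrographAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.rand(8, 1, 64, 64)
reconstruction, latent = model(images)
loss = nn.functional.mse_loss(reconstruction, images)
loss.backward()
optimizer.step()
print(latent.shape)  # torch.Size([8, 32]) -- one "small description" per image
```

Once trained on enough micrographs, the 32-number latent code stands in for the full image, which is what makes downstream comparison and classification computationally cheap.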

Law enforcement agencies such as the FBI can then compare the microstructures of field samples with a library of electron microscope images developed by universities and national laboratories, PNNL said, to speed up the identification process. Machine learning algorithms and computers "will not replace humans in detecting nuclear threats any time soon," PNNL researchers warn, but they can be useful in detecting and averting a potential nuclear disaster on US soil.
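That comparison step can be pictured as a similarity search: assuming each electron-microscope image has already been reduced to a compact descriptor (for example, the latent vector from an autoencoder like the sketch above), a nearest-neighbor lookup ranks library entries by how closely they resemble a field sample. Everything below, from the random vectors to the cosine metric, is a stand-in for whatever PNNL and its partners actually use.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical library: one compact descriptor (32 numbers) per reference
# electron-microscope image contributed by universities and national labs.
rng = np.random.default_rng(1)
library_vectors = rng.normal(size=(10_000, 32))
library_labels = [f"library_image_{i}" for i in range(len(library_vectors))]

# Index the library once, then query it with the descriptor of a field sample.
index = NearestNeighbors(n_neighbors=5, metric="cosine").fit(library_vectors)
field_sample = rng.normal(size=(1, 32))
distances, indices = index.kneighbors(field_sample)

# The closest matches point analysts toward candidate sources or production
# processes for the sampled material.
for dist, idx in zip(distances[0], indices[0]):
    print(f"{library_labels[idx]}: cosine distance {dist:.3f}")
```

In practice, the library would hold curated imagery rather than random vectors, and human analysts would still vet the top matches, in line with PNNL's caveat that machines won't replace people in this work any time soon.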