Japan to build the world's first zeta-class supercomputer, promising 1,000-fold speed increase

Skye Jacobs

Why it matters: In the global race for high-performance-computing dominance, Japan has positioned itself as a leader with its plans to build a zeta-class supercomputer. If it manages to achieve this feat, the advanced computational capabilities will significantly boost its economic competitiveness. First, though, Japan has to figure out how to meet the monumental energy requirements.

Japan has announced plans to build the world's first zeta-class supercomputer. Still a theoretical exercise, this groundbreaking machine could reach processing speeds a thousand times faster than current top supercomputers.

Dubbed Fugaku Next, the project is being spearheaded by Japan's Ministry of Education, Culture, Sports, Science and Technology (MEXT). It is expected to cost over $750 million to build and is projected to be fully operational by 2030.

Fugaku Next would reach speeds on the zetaFLOPS scale, a feat never before achieved. To put this in perspective, today's most advanced supercomputers operate at the exaFLOPS level, performing a quintillion (10^18) calculations per second. Fugaku Next would be able to perform a staggering sextillion (10^21) calculations per second.
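For readers keeping track of the prefixes, the jump from exa to zetta is a plain factor of 1,000 ("zetta" is the standard SI spelling; the multipliers below are SI definitions, not measured benchmarks):

```python
# SI multipliers behind the FLOPS prefixes quoted in the article.
SCALES = {
    "petaFLOPS": 10**15,   # quadrillion calculations per second
    "exaFLOPS": 10**18,    # quintillion -- today's fastest machines
    "zettaFLOPS": 10**21,  # sextillion -- the proposed target
}

# A zetta-class machine would be 1,000 times faster than an exa-class one.
speedup = SCALES["zettaFLOPS"] // SCALES["exaFLOPS"]
print(speedup)  # 1000
```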

However, Japan must first overcome significant challenges, chiefly energy efficiency. Experts have estimated that a zeta-class machine built with current technologies could require power equivalent to the output of 21 nuclear power plants.
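A rough back-of-the-envelope calculation lands in the same ballpark. The sketch below assumes a zetta-class machine no more efficient than today's Frontier; Frontier's approximate power draw comes from its TOP500 listing, and the ~1 GW-per-reactor figure is a typical plant output — neither number is from the article itself.

```python
# Sanity check of the "21 nuclear power plants" estimate, assuming a
# zetta-class machine with Frontier-level efficiency. Frontier's draw
# (~22.7 MW) is its approximate TOP500-reported figure; 1,000 MW per
# plant is a typical reactor output. Both are assumptions, not article data.
frontier_flops = 1.206e18    # Frontier's benchmark performance
frontier_mw = 22.7           # approximate power draw in megawatts
target_flops = 1e21          # one zettaFLOPS

required_mw = frontier_mw * (target_flops / frontier_flops)
plants = required_mw / 1000  # ~1,000 MW per nuclear plant
print(round(plants))         # 19 -- the same order as the article's 21
```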

MEXT plans to tackle this issue by incorporating the latest technologies, including custom-designed CPUs and high-bandwidth memory systems. It also wants Fugaku Next to be compatible with existing infrastructure, potentially leading to collaborations with Fujitsu and RIKEN, the key players behind the original Fugaku supercomputer.

Fugaku achieved a performance of 442 petaFLOPS, or 442 quadrillion floating-point operations per second, on the TOP500 benchmark. It was the world's fastest supercomputer from June 2020 to June 2022, and ranks fourth on the TOP500 as of June 2024.

The fastest supercomputer in the world is Frontier, located at Oak Ridge National Laboratory in Tennessee. It has achieved a performance of 1.206 exaFLOPS, or 1.206 quintillion calculations per second, on the TOP500 benchmark. Frontier has held the top position for multiple iterations of the TOP500 list, which is updated twice a year. It was built using HPE Cray EX architecture, AMD EPYC processors, and AMD Instinct GPUs. Scientists are using Frontier for research in astrophysics, climate modeling, materials science, and AI development.

Fugaku Next would be capable of significantly more advanced and complex computations than Frontier, ranging from simulating the entire human brain to modeling the most intricate climate systems. The increased computational power could, for example, accelerate the identification and testing of new pharmaceutical compounds, or enable more precise simulations of molecular structures, potentially leading to the development of new materials.

As Japan embarks on this ambitious project, the global computing community watches with keen interest and likely concern as well. Japan's push for a zeta-class supercomputer could ignite a supercomputer race among nations, similar to the ongoing competition for exascale systems, with countries increasing funding for high-performance computing research and development to close the technological gap.

Is "zeta" a marketing term, or did they rename "zetta" to "zeta"?

Edit: Tom's Hardware published it with the typo in their headline, LiveScience copied the typo wholesale, and that's how it ended up here.
 
Please someone name a breakthrough or something really important in science achieved with supercomputers in physics or medical research or else. The real problem is programming and what we feed into them, instead we only increase speed and power consumption.
 
> Please someone name a breakthrough or something really important in science achieved with supercomputers in physics or medical research or else. The real problem is programming and what we feed into them, instead we only increase speed and power consumption.

They all do it just for s**ts and giggles. This is something you can look up yourself instead of asking us. Better to do your own research.

The more accurate the modeling you need, the more calculations it takes to represent all the permutations, and the faster the computer you need.
Examples have been on TechSpot over the years. One a few months back was the three-body problem; solar-system travel involves far more than three bodies. You probably don't need a supercomputer for that today, but it would have taken a supercomputer of yore.
Quantum wave functions, modeling the universe, particle physics, climate, protein folding, and materials science all need lots of calculations. Even the vaunted RTX 4090 falls hopelessly short of real-time ray tracing.

You are right: expensive, with huge power requirements. But supercomputers can test the accuracy of much cheaper modeling algorithms and approximations. If the approximations are good, then we can mostly trust them, with less power etc. Also, powerful modeling can show ways to do things more efficiently, eg dropping the steps to make an expensive drug from 77 steps to 39, or showing us how to make fusion power.
Plus, supercomputers and neural networks can whittle down, or build up, lists of candidates to test - eg antibiotics that are effective and cause minimum damage to those taking them.

When there were only a few supercomputers, scientists booked time slots way in advance and had everything meticulously prepared - like getting onto a Skylab mission.

If physics and materials science were so settled and easy, then we would already know the best higher-temperature superconductor, the best battery, etc. We are far, far away from modeling reality in any meaningful way. Supercomputers are nowhere close to certain abilities of our brains, which in organ terms are super hungry for energy, but compared with supercomputers are incredibly efficient. So hopefully we can also build more neural computers, with billions of links (synapses) with variable response (eg voltage).
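The three-body problem mentioned a few posts up is a good illustration of why the calculations pile up: every simulation step must sum the gravitational pull of every body on every other body, which is O(N^2) work per step. A minimal sketch, using toy units and made-up masses rather than any real solar-system data:

```python
# Minimal sketch of why N-body modeling eats compute: each step evaluates
# the pull of every body on every other body -- O(N^2) pair interactions.
# Units, masses, and positions are illustrative toy values.

def accelerations(positions, masses, G=1.0):
    """Pairwise gravitational acceleration on each body (2D, softened)."""
    n = len(positions)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            r2 = dx * dx + dy * dy + 1e-9  # softening avoids divide-by-zero
            inv_r3 = r2 ** -1.5
            acc[i][0] += G * masses[j] * dx * inv_r3
            acc[i][1] += G * masses[j] * dy * inv_r3
    return acc

# Three bodies: one heavy, two light. Each step needs 3*2 = 6 pair
# evaluations; a model with thousands of bodies multiplies that cost
# by millions, which is where the supercomputer comes in.
pos = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
mass = [1.0, 0.001, 0.001]
a = accelerations(pos, mass)
print(len(a))  # one acceleration vector per body
```

The light body at (1, 0) comes out accelerating back toward the heavy body at the origin, as expected; a full simulator would repeat this force evaluation at every timestep.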
 