
Japan’s Silent AI Revolution: Inside ABCI 3.0 and Beyond | Image Source: www.nextplatform.com
TOKYO, Japan, 7 April 2025 – In the global AI arms race, attention usually goes to Western hyperscalers and Chinese technology giants. But quietly and steadily, Japan has built itself a formidable AI ecosystem. At the heart of this evolution is the AI Bridging Cloud Infrastructure (ABCI) project, an initiative that began in 2017 and has now reached its third phase with ABCI 3.0. With the largest AI-focused supercomputer system outside the United States, Japan has carved out a distinct niche: a sovereign, efficient and research-oriented AI infrastructure.
The origins of Japan’s AI factory vision
In March 2017, the prototype of the AI Bridging Cloud Infrastructure (ABCI), called the AIST AI Cloud (AAIC), laid the foundation for what would become a pillar of national AI computing. Built by NEC for Japan’s National Institute of Advanced Industrial Science and Technology (AIST), it was one of the first attempts anywhere to integrate GPU-accelerated computing with cloud accessibility. At a time when cloud-based AI infrastructure remained nascent, even in North America, the AAIC experience positioned Japan as a quiet pioneer.
According to AIST’s Ryousei Takano, Japan faced a paradox in 2017: while industrial interest in AI was rising, the infrastructure for large-scale experiments was lacking. This vacuum gave rise to ABCI 1.0, a bold ¥5 billion project developed by Fujitsu. The goal? To build what would later be called the world’s first AI factory: a multi-user, cloud-enabled, high-performance cluster for artificial intelligence workloads.
“There was no room for large-scale AI experiments,” Takano said in an interview with The Next Platform. “Providing AI infrastructure for all was our strong motivation to create ABCI 1.0.”
How did ABCI 1.0 and 2.0 change Japan’s AI landscape?
ABCI 1.0 was not only powerful, it was strategic. With more than 4,300 Nvidia Volta V100 GPUs, it marked Japan’s assertion of AI sovereignty. The cluster was deliberately built on open software stacks and supported a wide range of tools, from Docker containers to AI frameworks such as TensorFlow and PyTorch. It was also one of the few facilities in the world to support a hybrid of traditional HPC tools and modern AI needs.
Three years later, in May 2021, the ABCI 2.0 extension arrived. With another investment of ¥2 billion, AIST added 120 nodes powered by Nvidia Ampere A100 GPUs with InfiniBand interconnects. The result? A 50 percent performance increase for a 40 percent budget addition, underscoring Japan’s focus on cost-effectiveness and value.
The Singularity container platform was also integrated, enhancing user flexibility and supporting a wide range of AI inference models. According to HPCwire, this move was essential to maintain compatibility with NIM (NVIDIA Inference Microservice) formats, key to deploying containerized AI workloads at scale.
What makes ABCI 3.0 a game changer?
In January 2025, Japan unveiled ABCI 3.0, a ¥35 billion system built by Hewlett Packard Enterprise (HPE). Using Cray XD670 server nodes equipped with the new Nvidia Hopper H200 GPUs, ABCI 3.0 offers impressive capabilities: 415 petaflops at FP64 precision and an incredible 6.22 exaflops at FP16. These figures are not just specifications; they reflect a sevenfold leap over the previous ABCI infrastructure combined.
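As a sanity check on those headline numbers, a few lines of back-of-the-envelope Python relate the system-level figures quoted here to a per-GPU estimate. This is illustrative arithmetic from the article’s own numbers, not an official benchmark:

```python
# Figures as quoted in the article; the per-GPU number is derived, not official.
TOTAL_FP16_FLOPS = 6.22e18   # 6.22 exaflops at FP16
TOTAL_FP64_FLOPS = 415e15    # 415 petaflops at FP64
NUM_GPUS = 6128              # H200 GPUs in the system

fp16_per_gpu = TOTAL_FP16_FLOPS / NUM_GPUS
ratio = TOTAL_FP16_FLOPS / TOTAL_FP64_FLOPS

print(f"FP16 per GPU: {fp16_per_gpu / 1e15:.2f} PFLOPS")   # ~1.02 PFLOPS
print(f"FP16/FP64 ratio: {ratio:.0f}x")                     # ~15x
```

The roughly 1 petaflop of FP16 per GPU is consistent with the system being dominated by its Hopper accelerators rather than by host CPUs.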
Each Hopper GPU has 141 GB of HBM3e memory with 4.8 TB/s of bandwidth, a specification that responds directly to one of the main AI challenges: memory bottlenecks. “Additional memory and bandwidth allow Hopper’s GPUs to approach their theoretical performance in real-world workloads,” noted The Next Platform.
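To see why the bandwidth figure matters as much as the capacity, consider how long a single GPU takes to stream its entire memory once, which is a rough lower bound per pass for memory-bound inference kernels. This is illustrative arithmetic from the figures above:

```python
# Per-GPU memory figures as quoted in the article.
HBM_CAPACITY_GB = 141        # GB of HBM3e per H200
HBM_BANDWIDTH_GBS = 4800     # GB/s (4.8 TB/s)

# Time to read every byte of HBM exactly once.
full_sweep_ms = HBM_CAPACITY_GB / HBM_BANDWIDTH_GBS * 1000
print(f"One full memory sweep: {full_sweep_ms:.1f} ms")  # ~29.4 ms
```

A model whose weights fill that memory cannot produce tokens faster than one per sweep, which is why bandwidth, not raw flops, often limits large-model inference.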
The ABCI 3.0 network architecture is equally ambitious. With 6,128 GPUs linked by 200 Gb/s NDR InfiniBand, the three-level fat-tree topology guarantees full bisection bandwidth, critical for AI workloads that depend heavily on GPU-to-GPU communication.
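In a non-blocking fat tree, “full bisection bandwidth” means half the endpoints can send to the other half simultaneously at full link rate. A quick sketch of what that implies for the figures above; this is an estimate from the article’s numbers, not a published specification:

```python
# Bisection-bandwidth estimate for a non-blocking fat tree:
# every endpoint in one half can drive its full link rate to the other half.
NUM_GPUS = 6128
LINK_GBPS = 200  # Gb/s of NDR InfiniBand per GPU, per the article

bisection_tbps = NUM_GPUS // 2 * LINK_GBPS / 1000
print(f"Estimated bisection bandwidth: {bisection_tbps:.1f} Tb/s")  # 612.8 Tb/s
```

That headroom is what allows all-reduce and all-to-all collectives, the dominant communication patterns in large-scale training, to run without the oversubscription penalties of cheaper tapered topologies.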
Who uses ABCI and for what?
The ABCI system serves over 3,000 users, ranging from artificial intelligence start-ups to electronics conglomerates. Although model training is the main workload, the infrastructure is versatile enough to handle simulation, analysis and even national research projects.
A major initiative is the LLM Building Support Program, launched in August 2023. Under it, Japan has begun developing large language models such as PLaMo and Swallow, which are essential to preserving linguistic and cultural nuance in Japanese-language AI systems. According to Tech in Asia, these models are intended for commercial and governmental applications.
And what comes next? AIST aims to create physical AI systems, where sensory input from the real world is fed directly into AI models, which then guide robotic systems. “It’s about creating a cyber-physical ecosystem,” said Takano. It is a vision where real-time AI feedback loops could affect industries from manufacturing to disaster response.
How does ABCI compare to global AI infrastructure?
Japan’s ABCI contrasts with the centralized AI infrastructure run by dominant companies in the US and China. Instead of relying solely on big-tech hyperscalers, ABCI focuses on shared national infrastructure. This approach aligns with what industry insiders now call “AI sovereignty”: a country’s ability to develop and operate AI systems independently.
According to Nikkei Asia, the Japanese model has drawn interest from neighbouring countries seeking to strengthen their own sovereign AI capacity. In many ways, Japan’s ABCI resembles a “public utility for AI”: a concept that could gain global traction as concerns about data localization and digital autonomy grow.
In addition, ABCI’s systems underscore the need for heterogeneity in AI computing environments. While American giants like OpenAI pair custom silicon with Nvidia chips, Japan continues to rely heavily on Nvidia GPUs. But that could change.
Can Japan emerge in AI chip innovation?
There is growing agreement that the future of Japanese AI cannot rest on importing GPUs alone. Enter startups like Revelion. As MK.co.kr reported, the AI semiconductor company recently opened a Tokyo branch to tap Japan’s growing AI data centre market. The company is already running proof-of-concept tests with Japanese cloud providers and telecommunications giants.
Revelion CEO Park Sung-hyun stressed that Japan’s IT infrastructure should include both training and inference accelerators. He cites DeepSeek, a Chinese start-up that uses Nvidia GPUs for training and Huawei’s Ascend NPUs for inference, as a model of architectural diversity. “When building AI infrastructure, it must consist of two types: Nvidia and non-Nvidia products,” Park wrote on social media last month.
This sentiment echoes a national view that Japan’s AI strategy should expand beyond procurement to include long-term investments in semiconductor design. By diversifying its portfolio, Japan can protect itself from future supply chain disruptions and geopolitical dependencies.
What future for ABCI and Japanese AI?
Although there is no immediate plan for an ABCI 4.0, incremental updates to ABCI 3.0 are on the horizon. “We intend to gradually update ABCI 3.0 in the coming years,” said Takano. “Any decision on future GPUs or interconnects will depend on new user requirements.”
Japan’s AI strategy is pragmatic, not flashy. Instead of chasing headline-grabbing milestones, it invests in long-term, cost-effective infrastructure that serves national interests. It is a model that favours access, sovereignty and research over hype.
Whether supporting AI startups, accelerating university research or training the next generation of Japanese LLMs, ABCI 3.0 is a silent giant of the AI era. And as the race deepens, Japan’s approach could offer the balance between innovation and independence the world needs.