Product Lineup

Turnkey, one-box operation, one-click start. Scale linearly from one unit to ten thousand. Your data stays local.


微算-B Basic

Small-scale AI inference, data analysis, training. Turnkey, 48-72h deployment. Free during pilot.

Compute: 1×CPU + optional GPU
Storage: 4TB NVMe SSD
Network: 25G/100G
Output: up to 1 PFLOPS

~¥50K (free during pilot)

Leasing: ¥2,000/mo

微算-P Professional

Mid-scale AI training and inference, industrial edge. Multi-node cluster with EBOF (Ethernet Bunch of Flash) all-flash storage, scalable on demand.

Compute: Multi CPU + GPU
Storage: 16×3.84TB NVMe SSD
Network: 100G RDMA (RoCEv2)
Output: up to 12 PFLOPS

¥2-5M

微算-E Enterprise

Large-scale model training, HPC. Thousand-card heterogeneous cluster, PB-scale storage. Custom solutions.

Compute: 1000+ card cluster
Storage: PB-scale distributed
Network: 200G/400G
Output: 50+ PFLOPS

¥5M+ (custom)

Product Architecture

Weisuàn integrates compute, storage, and management, built on compute-storage disaggregation and EBOF.

[Figure: product architecture diagram]
[Figure: product photos]

Scaling Path

Scale from one unit to ten thousand. Like building a wall brick by brick, or coupling train cars.

Stage | Scale | Compute | Investment
Single Unit | 1 unit | 1 PFLOPS | ¥50K (free during pilot)
Small Cluster | 5-10 units | 40-80 PFLOPS | ¥4-8M
Medium Cluster | 50-100 units | 400-800 PFLOPS | ¥40-80M
Large Cluster | 500-1000 units | 4-8 EFLOPS | ¥400-800M
Mega Scale | 5000-10000 units | 40-80 EFLOPS | ¥4-8B
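The cluster rows above scale linearly at roughly 8 PFLOPS and ¥0.8M per unit (derived from the 5-10 unit row; the single-unit entry configuration is lower at 1 PFLOPS). A minimal sketch of that estimator, using only figures from the table:

```python
# Illustrative scaling estimator based on the table above.
# Assumes cluster stages scale linearly at ~8 PFLOPS and ~¥0.8M
# per unit (derived from the 5-10 unit row: 40-80 PFLOPS, ¥4-8M).

def estimate(units: int) -> dict:
    PFLOPS_PER_UNIT = 8        # assumed from the cluster rows
    COST_PER_UNIT = 800_000    # ¥, assumed from the cluster rows
    return {
        "units": units,
        "pflops": units * PFLOPS_PER_UNIT,
        "cost_yuan": units * COST_PER_UNIT,
    }

for n in (10, 100, 1000, 10000):
    e = estimate(n)
    print(f"{e['units']:>6} units -> {e['pflops']:>6} PFLOPS, ¥{e['cost_yuan']:,}")
```

At 10,000 units this reproduces the top row's upper bound of 80 EFLOPS and ¥8B.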

Leasing from ¥2,000/month

No upfront investment needed for local AI compute. Data stays local, deploy in 48-72 hours. 3-year TCO significantly lower than traditional or cloud solutions.
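The leasing arithmetic behind the no-upfront-investment claim can be sketched from the listed prices alone. This is an illustration, not a full TCO model: power, maintenance, and cloud pricing are not included.

```python
# Illustrative 3-year cost for a single 微算-B unit, using only the
# listed figures (¥2,000/mo lease vs ~¥50K purchase). Not a full
# TCO model: power, maintenance, and cloud costs are excluded.

LEASE_PER_MONTH = 2_000   # ¥/month, from the pricing above
PURCHASE_PRICE = 50_000   # ¥, list price of 微算-B

lease_3y = LEASE_PER_MONTH * 36
print(f"3-year lease: ¥{lease_3y:,}")        # ¥72,000
print(f"Purchase:     ¥{PURCHASE_PRICE:,}")  # ¥50,000
```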