Inspur Goes for the Golden Hyperscale/AI Ring

Inspur, the no. 3 server vendor in the world according to Gartner, is taking aim at the biggest, shiniest golden ring in the entire data center server marketplace: hyperscale and AI infrastructure for tier-one social media and cloud services providers – companies whose server purchasing decisions move markets and mountains of money.

The China-based company, which already operates a server manufacturing plant in Fremont, CA, is building a major new facility just up Interstate 880 in Milpitas. The plant will specialize in building customized OCP (Open Compute Project)-compliant servers at scale, using the Intel Rack Scale Design architecture and a mix of processors (CPUs, GPUs and FPGAs) to handle the highest volumes of data, along with AI training and inferencing.

Given Inspur’s track record in China, the company’s push into the U.S. market could be a formidable one. The new, robotics-driven plant in Milpitas, partially completed and scheduled to open next year, will be a copy of one Inspur already operates in China, where the company is no. 1 in servers. That plant supplies Chinese hyperscalers Baidu, Alibaba and Tencent, for which Inspur is the majority server supplier, according to Dolly Wu, VP/GM at Inspur Systems.

Tasked with leading Inspur’s expansion in the American market, Wu told EnterpriseTech the company has invested in open computing protocols, including the OCP and the China-based Open Data Center Committee (ODCC), giving the company a storehouse of open compute “building blocks” that “we can leverage…to quickly come up with customized designs for every server need out there.”

“The goal for participating in all of these open platforms is for Inspur to provide modular architectures that we can leverage across all of these platforms and come up with scale savings for hyperscale customers,” Wu said. “All of the open platforms are geared towards massive deployments and fast deployment globally.”

Conventional data center servers are deployed at a rate of 300-500 per day, she said. But Inspur claims to have deployed 10,000 nodes per day using the Rack Scale Design architecture in 2016 at the “Baidu Brain” neural network data center, which receives 250,000 hits per day for voice and facial recognition jobs.

Wu said the Milpitas facility will be capable of producing 250,000 servers annually and will use robots to offset the cost of labor in Silicon Valley. The robots will also be faster than humans: a person can bolt about 130 screws in an hour, while a robot can handle that many in eight minutes – roughly 7.5 times the pace, according to Wu.

“We can reduce labor by 60 percent and improve production efficiency by about 30 percent,” Wu said. “Labor costs are so expensive in Silicon Valley, we want to leverage robots to automate the manufacturing process, so we can yield huge production volumes and be close to our customers here in the Valley.”

As part of its U.S. market push, Inspur announced new computing, storage and GPU nodes, along with software management and resource pooling capabilities, at the 2018 OCP Regional Summit in Amsterdam, all designed to tame the complexities of hyperscale and AI data centers. Its OCP Standard Rack Server incorporates five new node configurations, including three compute nodes based on Inspur’s San Jose Motherboard — the first OCP-accepted Intel Xeon Scalable processor motherboard — one JBOD and one GPU box.

For data centers with changing workload scenarios, Inspur said its offering dynamically matches resources to requirements through pooling and flexible allocation – of storage and co-processing capacity, for example – using SAS and PCIe switching technologies.
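To make the pooling idea concrete, here is a minimal, purely illustrative sketch of rack-level resource pooling, in which disaggregated drives and GPUs are granted to compute nodes on demand and returned when workloads change. This is not Inspur’s software; the class, names and capacities are invented for illustration (the 34-drive and 16-GPU figures simply echo the node specs below).

```python
# Toy sketch of rack-level resource pooling: drives behind a SAS switch
# and GPUs behind a PCIe switch are allocated to compute nodes on demand.
# Purely illustrative; not Inspur's implementation.
from dataclasses import dataclass, field

@dataclass
class RackPool:
    free_hdds: int = 34                      # drives in the JBOD pool
    free_gpus: int = 16                      # GPUs in the AI-node pool
    grants: dict = field(default_factory=dict)

    def allocate(self, node: str, hdds: int = 0, gpus: int = 0) -> bool:
        """Grant a compute node a slice of the pooled resources, if available."""
        if hdds > self.free_hdds or gpus > self.free_gpus:
            return False                     # demand exceeds the pool
        self.free_hdds -= hdds
        self.free_gpus -= gpus
        self.grants[node] = (hdds, gpus)
        return True

    def release(self, node: str) -> None:
        """Return a node's resources to the pool when its workload changes."""
        hdds, gpus = self.grants.pop(node, (0, 0))
        self.free_hdds += hdds
        self.free_gpus += gpus

pool = RackPool()
pool.allocate("compute-node-1", hdds=8)      # storage-heavy job
pool.allocate("compute-node-2", gpus=4)      # inference job
pool.release("compute-node-2")               # workload finished
print(pool.free_hdds, pool.free_gpus)        # 26 16
```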

Key technical aspects of the newest Inspur Rack Server solution include:

  • Compute node 1 – Designed for search engine acceleration, deep learning inference and data analysis applications. A 2-socket design that fits three nodes in 1OU; 96 nodes can be deployed in a standard Open Rack V2.0 cabinet.

  • Compute node 2 – Designed for data acceleration, I/O expansion, transaction processing and image search applications. A highly flexible 2-socket design with three nodes in 2OU, each offering 16 DIMMs, one M.2 slot, four 2.5-inch HDD/NVMe SSD bays and three PCIe x8 slots that support half-height, half-length expansion cards.

  • Compute node 3 – Built for NFV applications, with support for a range of half-height, half-length expansion cards, including 100Gb Ethernet.

  • Storage node – A 2OU JBOD holding 34 drives (NVMe SSD or HDD), which Inspur calls its highest-density storage expansion box. It can serve as a storage expansion module for compute nodes or, paired with a SAS switch, as a storage pool for the entire rack.

  • AI node – A GPU resource pooling solution supporting up to 16 GPUs in a 4OU form factor, which Inspur said is the highest-density, most powerful computing option available for AI training and inference scenarios. A PCIe switch enables the GPU pooling.

  • Converged Redfish and OpenBMC – Inspur’s OpenBMC implementation converged with the Redfish management API, developed for Inspur’s OCP-certified solutions (see the sketch after this list).
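Redfish is the DMTF’s standard REST API for out-of-band server management, and OpenBMC is an open source BMC firmware stack, which is what makes the converged approach attractive for OCP-style fleets: any compliant BMC can be queried the same way. Below is a minimal sketch of polling a Redfish endpoint for system inventory; the BMC address and credentials are placeholders, and only URIs and properties defined by the Redfish specification itself are used – nothing here is specific to Inspur’s implementation.

```python
# Minimal sketch: query a Redfish-enabled BMC (e.g., an OpenBMC build)
# for basic system inventory. Address and credentials are placeholders.
import requests

BMC = "https://10.0.0.42"        # hypothetical BMC address
AUTH = ("admin", "password")     # placeholder credentials

# The service root (/redfish/v1/) and the Systems collection are defined
# by the DMTF Redfish spec, so this works on any compliant BMC.
root = requests.get(f"{BMC}/redfish/v1/", auth=AUTH, verify=False).json()
systems = requests.get(f"{BMC}{root['Systems']['@odata.id']}",
                       auth=AUTH, verify=False).json()

# Walk each ComputerSystem resource and print standard properties.
for member in systems["Members"]:
    system = requests.get(f"{BMC}{member['@odata.id']}",
                          auth=AUTH, verify=False).json()
    print(system.get("Model"), system.get("PowerState"))
```

(`verify=False` is used here only because BMCs commonly ship with self-signed certificates; a production deployment would pin or validate the certificate instead.)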

“Open source hardware is essential to the further advancement of efficient, open and scalable data centers, and Inspur has fully embraced these principles of open sourcing cloud technology with the contribution of its San Jose Motherboard,” said Bill Carter, CTO of the OCP Foundation. “They are continuing to support OCP with technology leveraged from servicing hyper-scale cloud providers. We are encouraged by our engagement with Inspur in its new product development—one based upon integrating open computing technology and helping to build a complete open source ecosystem that brings technology and benefits to enterprises and businesses worldwide.”

