Graphcore, a Bristol, U.K.-based startup developing chips and systems to accelerate AI workloads, today announced it has raised $222 million in a series E funding round led by the Ontario Teachers’ Pension Plan Board. The investment, which values the company at $2.77 billion post-money and brings its total raised to date to $710 million, will be used to support continued global expansion and further accelerate future silicon, systems, and software development, a spokesperson told VentureBeat.

The AI accelerators Graphcore is developing — which the company calls Intelligence Processing Units (IPUs) — are a type of specialized hardware designed to speed up AI applications, particularly neural networks, deep learning, and machine learning. They’re multicore in design and focus on low-precision arithmetic or in-memory computing, both of which can boost the performance of large AI algorithms and lead to state-of-the-art results in natural language processing, computer vision, and other domains.

Graphcore, which was founded in 2016 by Simon Knowles and Nigel Toon, released its first commercial product in 2018: the C2, a 16-nanometer PCI Express card. It’s this package that launched on Microsoft Azure in November 2019 for customers “focused on pushing the boundaries of [natural language processing]” and “developing new breakthroughs in machine intelligence.” Microsoft is also using Graphcore’s products internally for various AI initiatives.

[Image: Graphcore IPU-POD 64]


Earlier this year, Graphcore announced the availability of the DSS8440 IPU Server, in partnership with Dell, and launched Cirrascale IPU-Bare Metal Cloud, an IPU-based managed service offering from cloud provider Cirrascale. More recently, the company revealed some of its other early customers — among them Citadel Securities, Carmot Capital, the University of Oxford, J.P. Morgan, Lawrence Berkeley National Laboratory, and European search engine company Qwant — and open-sourced its libraries on GitHub for building and executing apps on IPUs.

In July, Graphcore unveiled the second generation of its IPUs, which will soon be made available in the company’s M2000 IPU Machine. (Graphcore says its M2000 IPU products are now shipping in “production volume” to customers.) The company claims this new GC200 chip will enable the M2000 to achieve a petaflop of processing power in a 1U datacenter blade enclosure that measures the width and length of a pizza box.

The M2000 is powered by four of the new 7-nanometer GC200 chips, each of which packs 1,472 processor cores (running 8,832 threads) and 59.4 billion transistors on a single die, and it delivers more than 8 times the processing performance of Graphcore’s existing IPU products. In benchmark tests, the company claims the four-GC200 M2000 ran an image classification model — Google’s EfficientNet B4, with 88 million parameters — more than 32 times faster than an Nvidia V100-based system and over 16 times faster than the latest 7-nanometer graphics card. A single GC200 can deliver up to 250 TFLOPS, or 250 trillion floating-point operations per second.
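As a back-of-the-envelope check (using only the figures quoted above), the per-chip and per-machine numbers line up with the company's petaflop claim:

```python
# Sanity check of the performance figures quoted in the article.
GC200_TFLOPS = 250      # claimed peak throughput per GC200 chip, in teraflops
CHIPS_PER_M2000 = 4     # GC200 chips in one M2000 IPU Machine

m2000_tflops = GC200_TFLOPS * CHIPS_PER_M2000
print(m2000_tflops)     # 1000 TFLOPS, i.e. one petaflop per M2000
```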

[Image: Graphcore GC011 rack]

Beyond the M2000, Graphcore says customers will be able to connect as many as 64,000 GC200 chips for up to 16 exaflops of computing power and petabytes of memory, supporting AI models with, in theory, trillions of parameters. That’s made possible by Graphcore’s IPU-POD and IPU-Fabric interconnection technology, which supports low-latency data transfers at rates of up to 2.8Tbps and connects IPU-based systems either directly or via Ethernet switches.
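The same arithmetic scales to the pod-level claim: 64,000 chips at the quoted 250 TFLOPS each works out to the 16-exaflop figure stated above:

```python
# Scaling the quoted per-chip figure to the maximum claimed configuration.
GC200_TFLOPS = 250        # claimed peak per chip, in teraflops
MAX_CHIPS = 64_000        # maximum connected GC200 chips, per the article

total_tflops = GC200_TFLOPS * MAX_CHIPS
total_exaflops = total_tflops / 1_000_000   # 1 exaflop = 1,000,000 teraflops
print(total_exaflops)     # 16.0 exaflops
```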

The GC200 and M2000 are designed to work with Graphcore’s bespoke Poplar, a graph toolchain optimized for AI and machine learning. It integrates with Google’s TensorFlow framework and the Open Neural Network Exchange (an ecosystem for interchangeable AI models), in the latter case providing a full training runtime. Preliminary compatibility with Facebook’s PyTorch arrived in Q4 2019, with full feature support following in early 2020. The newest version of Poplar introduced exchange memory management features intended to take advantage of the GC200’s unique hardware and architectural design with respect to memory and data access.

Graphcore might have momentum on its side, but it has competition in a market that’s anticipated to reach $91.18 billion by 2025. In March, Hailo, a startup developing hardware designed to speed up AI inferencing at the edge, nabbed $60 million in venture capital. California-based Mythic has raised $85.2 million to develop custom in-memory architecture. Mountain View-based Flex Logix in April launched an inference coprocessor it claims delivers up to 10 times the throughput of existing silicon. And last November, Esperanto Technologies secured $58 million for its 7-nanometer AI chip technology.

Beyond the Ontario Teachers’ Pension Plan Board, Graphcore’s series E saw participation from funds managed by Fidelity International and Schroders. They joined existing backers Baillie Gifford, Draper Esprit, and others.

