
AI Intelligence Processing Units (IPU) Infrastructure

Exclusive AI cloud infrastructure to accelerate machine learning.

AI Infrastructure as a Service

We bring together Computer.Com's GPU and IPU compute and Computer.Com's Cloud services for building AI infrastructure, under a unified UI and API for ML acceleration.

Get started quickly, save on computing costs, and seamlessly scale to massive GPU and IPU compute on demand.

Computer.Com's GPU & IPU cloud services are now available, with free trials and a range of pricing options enabling innovators everywhere to make new breakthroughs in machine intelligence.

Exclusive solution pack

Computer.Com’s IPU-based AI cloud is a Graphcore Bow IPU-POD scale-out cluster, offering an effortless way to add state-of-the-art machine intelligence compute on demand, without deploying on-premises hardware or building AI infrastructure from scratch.

The IPU is an entirely new kind of massively parallel processor, co-designed from the ground up with the Poplar® SDK to accelerate machine intelligence. The Cloud IPU’s robust performance and low cost make it ideal for machine learning teams looking to iterate quickly and frequently on their solutions.
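As a minimal sketch of what this looks like in practice, the example below runs an ordinary PyTorch model on an IPU using Graphcore's PopTorch library, which ships with the Poplar SDK; the model architecture and input shapes are illustrative placeholders rather than a prescribed setup.

    import torch
    import poptorch  # Graphcore's PyTorch integration, part of the Poplar SDK

    # Any ordinary torch.nn.Module works; this tiny classifier is just a placeholder.
    model = torch.nn.Sequential(
        torch.nn.Linear(128, 64),
        torch.nn.ReLU(),
        torch.nn.Linear(64, 10),
    )

    # Wrap the model for IPU execution; PopTorch compiles it for the IPU on first call.
    opts = poptorch.Options()
    ipu_model = poptorch.inferenceModel(model, options=opts)

    batch = torch.randn(32, 128)   # illustrative input batch
    logits = ipu_model(batch)      # executes on the IPU
    print(logits.shape)

Training follows the same pattern with poptorch.trainingModel, so existing PyTorch code typically only needs this thin wrapper.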


Cloud IPU virtual vPODs make it easy to use IPU hardware by providing direct access to the Bow host machines. Users have full access to each IPU virtual instance, allowing them to install and run any code they wish over an ultra-fast connection to the IPU accelerators. This also enables the use of ephemeral storage, execution of custom code in input pipelines, and better integration of Cloud IPUs into research and production workflows.

  • Connect external block storage for system and data volumes, and easily attach new data volumes (see the sketch after this list).
  • Reduced deployment time: compared with a dedicated vPOD, a virtual vPOD is provisioned in up to 5 minutes.
  • Suspension mode lets you temporarily pause the cluster to avoid unnecessary charges, while still being able to resume quickly.
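As a hedged illustration of the block-storage workflow referenced above, the sketch below creates and attaches a data volume through the standard OpenStack SDK (OpenStack appears among the supported platforms later on this page); the cloud profile, volume size, and server name are assumptions for the example, not fixed parts of the service.

    import openstack

    # Credentials come from clouds.yaml; "computer-com" is a placeholder profile name.
    conn = openstack.connect(cloud="computer-com")

    # Create a 500 GB external data volume for datasets and checkpoints.
    volume = conn.create_volume(size=500, name="training-data")

    # Attach it to a running vPOD host machine; the server name is illustrative.
    server = conn.get_server("bow-vpod16-host")
    conn.attach_volume(server, volume)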

Massive Performance Leap

World-leading performance for natural language processing, computer vision and graph networks

Unique architecture for differentiated results

Low latency inference

Much More Flexible

Designed for training and inference

Support for wide range of ML models

Make new breakthroughs for competitive advantage

Easy to Use

Support from AI experts

Extensive documentation, tutorials and pre-built models

Popular ML framework support


Suspension mode for Cloud virtual vPODs

Suspension mode provides a cost-effective and resource-efficient solution for temporarily pausing a virtual private cloud environment when it is not in use. By utilizing this feature, customers can effectively reduce expenses while preserving the integrity of the data and configurations.

  • Only storage and the Floating IP (if active) are charged while a cluster is suspended
  • The cluster can easily be reactivated with the same configuration (see the sketch after this list)
  • The network configuration and cluster data are stored on external block storage (ephemeral storage is not preserved), so the configuration can be modified and the cluster expanded as required, providing greater flexibility
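Purely for illustration, a suspend/resume request might look like the sketch below; the host, endpoint paths, and cluster name are hypothetical placeholders rather than a documented Computer.Com API, so refer to the dashboard or API reference for the actual calls.

    import requests

    # Hypothetical API host and endpoints -- placeholders for illustration only.
    API = "https://cloud.example.com/api/v1"
    HEADERS = {"Authorization": "Bearer <your-api-token>"}

    # Suspend a virtual vPOD cluster so only storage and Floating IP keep billing.
    requests.post(f"{API}/clusters/my-vpod/suspend", headers=HEADERS, timeout=30)

    # Later, resume it with the same configuration.
    requests.post(f"{API}/clusters/my-vpod/resume", headers=HEADERS, timeout=30)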

Features and advantages

World-class performance for natural language processing

Build, train and deploy ready-to-use ML models via dashboard, API, or Terraform

Dataset management and integration with S3/NFS storage (see the storage sketch after this list)

Version control: Hardware, Code, Dataset

Secure Trusted Cloud platform

Free egress traffic (for public or hybrid solutions)

SLA 99.9% guaranteed uptime

Highly skilled technical support 24/7

Made in the EU
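As a short, hedged example of the S3 integration noted in the list above, the snippet below pushes a dataset archive to an S3-compatible bucket with boto3; the endpoint URL, credentials, bucket, and file names are placeholders to be replaced with your actual Computer.Com storage settings.

    import boto3

    # Endpoint and credentials are placeholders; use the values from your storage account.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.example.com",
        aws_access_key_id="<access-key>",
        aws_secret_access_key="<secret-key>",
    )

    # Upload a local dataset archive so training jobs can read it from the bucket.
    s3.upload_file("dataset.tar", "datasets", "dataset.tar")

    # List the bucket contents to confirm the upload.
    for obj in s3.list_objects_v2(Bucket="datasets").get("Contents", []):
        print(obj["Key"], obj["Size"])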

AI full lifecycle tools and integrations

Integrations cover the full AI lifecycle: ML and AI solutions, receiving and processing data, development tools, exploration and visualization tools, programming languages, and data platforms.

Supported tools and frameworks include PyTorch, TensorFlow, Lightning, Keras, ONNX, Hugging Face, PaddlePaddle, Slurm, Kubernetes, Prometheus, Grafana, OpenBMC, Redfish, OpenStack and VMware.

Accelerate ML with ready-made AI Infrastructure

With the AI Infrastructure, customers can easily train and compare models or run custom training code, and all models are stored in one central model repository. From there, models can be deployed to endpoints on Computer.Com’s AI Infrastructure.

Computer.Com’s IPU-based AI cloud is designed to help businesses across various fields, including finance, healthcare, manufacturing, and scientific research. It is built to support every stage of the AI adoption journey, from building a proof of concept to training and deployment.

AI model development

ML models: Face recognition, Object detection

AI training and hyperparameter tuning


ML Model delivery and deployment pipelines

Locations

IPU-POD systems

Ready to order in Luxembourg in June 2022

IPU-POD systems let you break through barriers to unleash entirely new capabilities in machine intelligence with real business impact. Get ready for production with IPU-Pod64 and take advantage of a new approach to operationalize your AI projects.

IPU-Pod64 delivers ultimate flexibility to maximize all available space and power, however it is provisioned: 16 petaFLOPS of AI compute for both training and inference, so you can develop and deploy on the same powerful system.


IPU-POD systems

Ready to order

IPU-Pod256 is available in Amsterdam. It allows customers to explore AI compute at a supercomputing scale. Designed to accelerate large and demanding machine learning models, IPU-Pod256 gives you the AI resources of a tech giant.


Pricing

Depending on the location, servers have different traffic options. You can find detailed information below:

Product     | Server Config                                              | IPUs | Quantity | Price
Bow Pod4    | 2×7763 / 512GB RAM / 2×450GB SATA + 7×1.8TB NVMe / 2×100G | 4    | 1        |
Bow Pod16   | 2×7763 / 512GB RAM / 2×450GB SATA + 7×1.8TB NVMe / 2×100G | 16   | 1        |
Bow Pod64   | 2×7763 / 512GB RAM / 2×450GB SATA + 7×1.8TB NVMe / 2×100G | 64   | 1        |
Bow Pod128  | 2×7763 / 512GB RAM / 2×450GB SATA + 7×1.8TB NVMe / 2×100G | 128  | 1        |
Bow Pod256  | 2×7763 / 512GB RAM / 2×450GB SATA + 7×1.8TB NVMe / 2×100G | 256  | 1        |
Bow Pod1024 | 2×7763 / 512GB RAM / 2×450GB SATA + 7×1.8TB NVMe / 2×100G | 1024 | 1        |

Product    | Server Config                                                            | IPUs | Quantity | Price
BOW-vPOD4  | 60 vCPU / 116GB RAM / 1100GB NVMe (ephemeral) / 100Gbit/s interconnect  | 4    | 1        |
BOW-vPOD16 | 120 vCPU / 232GB RAM / 2200GB NVMe (ephemeral) / 100Gbit/s interconnect | 16   | 1        |
BOW-vPOD16 | 240 vCPU / 464GB RAM / 4400GB NVMe (ephemeral) / 100Gbit/s interconnect | 16   | 1        |
BOW-vPOD64 | 240 vCPU / 464GB RAM / 4400GB NVMe (ephemeral) / 100Gbit/s interconnect | 64   | 1        |

Product     | Server Config                              | IPUs | Quantity | Price
IPU-POD4    | 2×5320 / 384GB RAM / 2×960GB SSD / 2×100G | 4    | 1        | $2.66 / hour
IPU-POD16   | 2×5320 / 384GB RAM / 2×960GB SSD / 2×100G | 16   | 1        | $11.76 / hour
IPU-POD64   | 2×5320 / 384GB RAM / 2×960GB SSD / 2×100G | 64   | 1        | $49.21 / hour
IPU-POD128  | 2×5320 / 384GB RAM / 2×960GB SSD / 2×100G | 128  | 1        | $98.43 / hour
IPU-POD256  | 2×5320 / 384GB RAM / 2×960GB SSD / 2×100G | 256  | 1        | $196.87 / hour
IPU-POD1024 | 2×5320 / 384GB RAM / 2×960GB SSD / 2×100G | 1024 | 1        | $787.51 / hour
Prices do not include tax or VAT.
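As a quick back-of-the-envelope sketch using the listed rates, the snippet below estimates the monthly cost of an IPU-POD4 running around the clock; it is plain arithmetic on the advertised hourly price and ignores storage, traffic, and tax/VAT.

    # Rough monthly estimate at the listed IPU-POD4 rate (excludes storage, traffic, tax/VAT).
    hourly_rate = 2.66           # USD per hour for IPU-POD4, from the table above
    hours_per_month = 24 * 30    # assuming a 30-day month of continuous use
    print(f"~${hourly_rate * hours_per_month:,.2f} per month")   # ~$1,915.20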

Try out vPOD4 for free for 24 hours! Contact our sales team to get the offer!

Request access to ready-to-use AI Infrastructure

Contact us to get a personalized offer

Tell us about the challenges of your business, and we’ll help you grow in any country in the world.

Which service do you need?