Lambda Cloud GPUs from $0.50/hour

The ultimate GPU server for deep learning

Now available with NVIDIA H100 Tensor Core GPUs


10,000+ research teams trust Lambda

Engineered for your workload

Tell us about your research and we’ll design a machine that’s perfectly tailored to your needs.
FLEXIBILITY

Easily scale from server to cluster

As your team's compute needs grow, Lambda's in-house HPC engineers and AI researchers can help you integrate Scalar and Hyperplane servers into GPU clusters designed for deep learning.
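
For a concrete sense of what scaling from a single server to a cluster typically looks like, here is a minimal multi-node training sketch using PyTorch DistributedDataParallel. It is illustrative only: the model, batch size, and launch parameters are placeholders and not specific to Scalar or Hyperplane systems.

  # Minimal multi-node training sketch with PyTorch DistributedDataParallel.
  # Assumes one process per GPU, launched by torchrun on every node.
  import os
  import torch
  import torch.distributed as dist
  from torch.nn.parallel import DistributedDataParallel as DDP

  def main():
      dist.init_process_group(backend="nccl")        # NCCL for GPU-to-GPU collectives
      local_rank = int(os.environ["LOCAL_RANK"])     # set by torchrun
      torch.cuda.set_device(local_rank)

      model = torch.nn.Linear(1024, 10).cuda(local_rank)   # placeholder model
      model = DDP(model, device_ids=[local_rank])
      optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

      for _ in range(10):                            # placeholder training loop
          x = torch.randn(32, 1024, device=f"cuda:{local_rank}")
          loss = model(x).sum()
          optimizer.zero_grad()
          loss.backward()                            # gradients all-reduce across all GPUs
          optimizer.step()

      dist.destroy_process_group()

  if __name__ == "__main__":
      main()

Each node would then run something like torchrun --nnodes=2 --nproc_per_node=8 --rdzv_backend=c10d --rdzv_endpoint=<head-node>:29400 train.py, with the node count, GPU count, and rendezvous endpoint adjusted to the actual cluster.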

PREMIUM SUPPORT

Service and support by technical experts who specialize in machine learning

Lambda Premium Support includes:

LAMBDA STACK

Plug in. Start training.

Our servers include Lambda Stack, which manages frameworks like PyTorch® and TensorFlow. With Lambda Stack, you can stop worrying about broken GPU drivers and focus on your research.
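
As a rough illustration (not part of Lambda Stack itself), a quick sanity check after setup is to confirm that both bundled frameworks can see the GPUs:

  # Hypothetical post-install check: verify PyTorch and TensorFlow both detect the GPUs.
  import torch
  import tensorflow as tf

  print("PyTorch CUDA available:", torch.cuda.is_available())
  print("PyTorch GPU count:", torch.cuda.device_count())
  print("TensorFlow GPUs:", tf.config.list_physical_devices("GPU"))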

COLOCATION

Your servers. Our datacenter.

Lambda Colocation makes it easy to deploy and scale your machine learning infrastructure. We'll manage racking, networking, power, cooling, hardware failures, and physical security. Your servers will run in a Tier 3 data center with state-of-the-art cooling that's designed for GPUs. You'll get remote access to your servers, just like a public cloud.


Fast support

If hardware fails, our on-site data center engineers can quickly debug and replace parts.

Optimal performance

Our state-of-the-art cooling keeps your GPUs cool to maximize performance and longevity.

High availability

Our Tier 3 data center has redundant power and cooling to ensure your servers stay online.

No network setup

We handle all network configuration and provide you with remote access to your servers.

RESEARCH

Explore our research

TECH SPECS

Technical Specifications


Up to 10 dual-slot PCIe GPUs. Options include:

  • NVIDIA H100: 80 GB of HBM3, 14,592 CUDA cores, 456 Tensor Cores, PCIe 5.0 x16
  • NVIDIA A100: 80 GB of HBM2e, 6,912 CUDA cores, 432 Tensor Cores, PCIe 4.0 x16
  • NVIDIA L40: 48 GB of GDDR6, 18,176 CUDA cores, 568 Tensor Cores, PCIe 4.0 x16
  • NVIDIA A40: 48 GB of GDDR6, 10,752 CUDA cores, 336 Tensor Cores, PCIe 4.0 x16
  • NVIDIA A30: 24 GB of HBM2, 3,584 CUDA cores, 224 Tensor Cores, PCIe 4.0 x16
  • NVIDIA RTX 6000 Ada: 48 GB of GDDR6, 18,176 CUDA cores, 568 Tensor Cores, PCIe 4.0 x16
  • NVIDIA RTX A6000: 48 GB of GDDR6, 10,752 CUDA cores, 336 Tensor Cores, PCIe 4.0 x16
  • NVIDIA RTX A5500: 24 GB of GDDR6, 10,240 CUDA cores, 320 Tensor Cores, PCIe 4.0 x16
  • NVIDIA RTX A5000: 24 GB of GDDR6, 8,192 CUDA cores, 256 Tensor Cores, PCIe 4.0 x16
  • NVIDIA RTX A4500: 20 GB of GDDR6, 7,168 CUDA cores, 224 Tensor Cores, PCIe 4.0 x16
  • NVIDIA RTX A4000: 16 GB of GDDR6, 6,144 CUDA cores, 192 Tensor Cores, PCIe 4.0 x16
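
Whichever of the options above is installed, the GPUs can be enumerated at runtime. The sketch below uses PyTorch's device-property API and assumes a CUDA-enabled PyTorch build is present; names, VRAM, and SM counts will vary with the configuration.

  # Enumerate installed GPUs and report name, VRAM, SM count, and compute capability.
  import torch

  if not torch.cuda.is_available():
      raise SystemExit("No CUDA-capable GPU visible to PyTorch")

  for i in range(torch.cuda.device_count()):
      props = torch.cuda.get_device_properties(i)
      vram_gb = props.total_memory / 1024**3
      print(f"GPU {i}: {props.name}, {vram_gb:.0f} GB, "
            f"{props.multi_processor_count} SMs, "
            f"compute capability {props.major}.{props.minor}")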

2 AMD EPYC or Intel Xeon processors. Options include:

  • AMD EPYC 9004 (Genoa) Series Processors with up to 192 cores total
  • Intel Xeon 4th Gen (Sapphire Rapids) Scalable Processors with up to 112 cores total

Memory and storage:

  • Up to 8 TB of 4800 MHz DDR5 ECC RAM in 32 DIMM slots
  • Up to 491.52 TB of storage via 16 hot-swappable U.2 NVMe SSDs
  • Up to 61.44 TB of storage via 8 hot-swappable 2.5″ SATA SSDs
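
The storage ceilings above correspond to filling every bay with large enterprise drives. Assuming 30.72 TB U.2 NVMe and 7.68 TB SATA SSDs (the drive capacities are an assumption, not stated in the spec), the arithmetic checks out:

  # Quick check of the maximum-storage figures, assuming 30.72 TB NVMe and 7.68 TB SATA drives.
  nvme_total_tb = 16 * 30.72
  sata_total_tb = 8 * 7.68
  print(f"NVMe: {nvme_total_tb:.2f} TB, SATA: {sata_total_tb:.2f} TB")
  # Prints: NVMe: 491.52 TB, SATA: 61.44 TB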

Built-in networking:

  • 2 RJ45 10 Gbps BASE-T LAN ports
  • 1 RJ45 1 Gbps BASE-T LAN out-of-band management port

Optional high-speed NIC. Options include:

  • NVIDIA ConnectX-7 400 Gb/s NDR InfiniBand Adapter, OSFP56, PCIe 5.0 x16
  • NVIDIA ConnectX-7 200 Gb/s NDR200 InfiniBand Adapter, OSFP56, PCIe 5.0 x16
  • NVIDIA ConnectX-7 200 Gb/s NDR200 InfiniBand/VPI Adapter, QSFP112, PCIe 5.0 x16
  • NVIDIA ConnectX-6 200 Gb/s HDR InfiniBand/VPI Adapter, QSFP56, PCIe 4.0 x16
  • NVIDIA ConnectX-6 100 Gb/s HDR100 InfiniBand/VPI Adapter, 1x QSFP56, PCIe 4.0 x16
  • NVIDIA ConnectX-6 Dx EN 200 Gb/s Ethernet Adapter, QSFP56, PCIe 4.0 x16
  • NVIDIA ConnectX-6 Dx EN 100 Gb/s Ethernet Adapter, QSFP56, PCIe 4.0 x16
  • NVIDIA ConnectX-5 EN 100 Gb/s Ethernet Adapter, QSFP28, PCIe 3.0 x16
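
If one of the optional adapters above is fitted, it shows up under standard Linux sysfs paths. The sketch below is a rough, Linux-only way to spot RDMA-capable devices and 10 Gb/s-or-faster links; interface names and reported speeds depend entirely on the card installed.

  # Rough Linux-only sketch: list RDMA/InfiniBand devices and fast network links via sysfs.
  from pathlib import Path

  ib_root = Path("/sys/class/infiniband")
  ib_devices = sorted(p.name for p in ib_root.iterdir()) if ib_root.exists() else []
  print("RDMA/InfiniBand devices:", ib_devices or "none")

  net_root = Path("/sys/class/net")
  nics = sorted(net_root.iterdir()) if net_root.exists() else []
  for nic in nics:
      try:
          speed_mbps = int((nic / "speed").read_text().strip())  # link speed in Mb/s
      except (OSError, ValueError):
          continue                                   # link down or virtual interface
      if speed_mbps >= 10_000:                       # 10 Gb/s and faster
          print(f"{nic.name}: {speed_mbps // 1000} Gb/s")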

Power supply:

  • 4 hot-swappable 2000 watt 80 PLUS Titanium PSUs
  • 2 + 2 redundancy

AC input:

  • 220-240 Vac / 10-9.8A / 50-60 Hz

Max output per PSU:

  • 220-240 Vac / 2000 watts
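
Assuming "2 + 2" means two active supplies backed by two redundant ones (an interpretation, not stated explicitly above), the continuous power budget with full redundancy works out as follows:

  # Back-of-the-envelope PSU budget, assuming "2 + 2" = two active + two redundant supplies.
  psu_output_w = 2000
  active_psus = 2
  print(f"Usable output with full 2 + 2 redundancy: {active_psus * psu_output_w} W")
  # Prints 4000 W; all four supplies combined could deliver 8000 W with no redundancy.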

Front panel:

  • Power button
  • Reset button
  • Power LED
  • ID button with LED
  • Information LED
  • Power supply failure LED
  • 2 LAN activity LEDs, one for each RJ45 10 Gbps BASE-T LAN port
  • Storage drive activity LED

Rear I/O:

  • VGA port
  • 2 USB 3.0 ports
  • 2 RJ45 10 Gbps BASE-T LAN ports
  • 1 RJ45 1 Gbps BASE-T LAN out-of-band management port

Included accessories:

  • Rackmounting kit
  • 4 configurable C19 power cables

Form factor, dimensions, and weight:

  • Form factor: 4U rackmount
  • Width: 17.2 inches (437 mm)
  • Height: 7.0 inches (179 mm)
  • Depth: 29.0 inches (737 mm)
  • Server weight: 73 lbs (33 kg)
  • Rackmounting kit weight: 5 lbs (2.3 kg)
  • Total weight with packaging: 99 lbs (45 kg)
