Signal Board
GPU Profile · 192 GB VRAM · Data Center GPU

B100

The NVIDIA B100 leverages the Blackwell architecture with 192 GB of HBM3e VRAM.

Currently out of stock across tracked providers

  • Live Floor: n/a (no active offers)
  • Average Price: n/a (waiting for new inventory)
  • Reliability: n/a (no reliability telemetry)
  • Last Day Delta: n/a (no history yet)
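The stats above (floor, average, last-day delta) can be derived from a provider's offer list. A minimal sketch follows; the offer structure and the `usd_per_hour` field are hypothetical, not this site's actual API, and the "n/a" case corresponds to an empty offer list:

```python
# Hypothetical sketch of how the board's summary stats could be computed.
# The offer dicts and "usd_per_hour" key are illustrative assumptions.
from statistics import mean

def summarize(offers_today, offers_yesterday):
    """Return (floor, average, last-day delta %) for hourly USD prices."""
    if not offers_today:
        # No active offers: every stat shows as "n/a" on the board.
        return None, None, None
    floor = min(o["usd_per_hour"] for o in offers_today)
    avg = mean(o["usd_per_hour"] for o in offers_today)
    delta = None
    if offers_yesterday:
        prev_floor = min(o["usd_per_hour"] for o in offers_yesterday)
        # Day-over-day change of the lowest price, in percent.
        delta = (floor - prev_floor) / prev_floor * 100
    return floor, avg, delta

today = [{"usd_per_hour": 3.20}, {"usd_per_hour": 2.95}]
yesterday = [{"usd_per_hour": 3.10}]
floor, avg, delta = summarize(today, yesterday)
# floor == 2.95; delta is roughly -4.8%
```

With no inventory tracked yet, `summarize([], [])` returns `(None, None, None)`, matching the empty state shown above.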

30-Day Price Trend (Daily Lowest Price)

No price history available yet. Data will appear after the first full day of tracking.

No offers currently available for B100. Check back later or browse other GPU models.


GPU Overview

About B100

Built on NVIDIA's Blackwell architecture with 192 GB of HBM3e VRAM, the B100 provides datacenter-grade scalability and memory bandwidth, serving as a high-capacity accelerator for organizations building large-scale AI training and inference infrastructure.

Its 192 GB of VRAM supports large language model (LLM) inference and fine-tuning of frontier-scale models with minimal partitioning overhead. With a benchmark score of 112, the B100 delivers uncompromising performance for enterprise-grade AI workloads, including multi-trillion-parameter model training and high-throughput production inference.
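The "minimal partitioning overhead" claim comes down to simple arithmetic: model weights need roughly parameter-count times bytes-per-dtype of memory. A hedged back-of-envelope sketch, using standard dtype sizes and ignoring activations, optimizer state, and KV-cache overhead:

```python
# Rough estimate: do a model's weights fit in 192 GB of VRAM?
# Only weight storage is counted; activations, optimizer state, and
# KV cache (which matter for training and long-context inference)
# are deliberately ignored in this sketch.
VRAM_GB = 192

def weights_gb(params_billion, bytes_per_param):
    """Weight memory in GB (decimal) for a given parameter count and dtype size."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# Example: a 70B-parameter model in fp16 (2 bytes/param) needs ~140 GB
# of weights, so it fits on a single 192 GB card without partitioning.
print(weights_gb(70, 2))              # 140.0
print(weights_gb(70, 2) <= VRAM_GB)   # True
```

By the same arithmetic, anything much beyond ~96B parameters in fp16 would need quantization or multi-GPU partitioning, which is where the per-card capacity starts to matter.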

Key Features

  • VRAM: 192 GB
  • Category: Data Center GPU
  • Benchmark Score: 112

Best For

LLM Inference · AI Training · Enterprise AI · Scalable Deployment · Fine-Tuning · Rendering