AI 2U 2x RTXP6KBSE INT 1x32C 2.5G 512GB

Part No: SYS-212GB-FNR-01-G2

£33,168.43 exc VAT
£39,802.12 inc VAT

This item is currently unavailable for next-day delivery; however, you may still order it as a back-order.

Available on back-order

Non-Returnable*

*This item is non-returnable and non-cancellable due to customisation or built to order.

SKU: SYS-212GB-FNR-01-G2

Description

Supermicro SYS-212GB-FNR-01-G2 is a 2U, single-socket Intel Xeon 6700-series GPU server, factory-configured for high-throughput AI inference, LLM, and edge/IoT workloads.

Product overview
2U rackmount SuperServer based on the Supermicro SYS-212GB-FNR platform with Gold “G2” configuration.

Optimized for SaaS LLM, RAG, agent frameworks, and other GPU-accelerated inference services where latency and throughput are critical.

Ships as a complete, ready-to-deploy configuration with CPU, GPUs, memory, storage, networking, and redundant power.

Key hardware configuration (G2)
CPU: 1× Intel Xeon 6731P, 32 cores / 64 threads, 2.5 GHz base, for Intel Xeon 6700-series single-socket platform (Socket E2 / LGA‑4710).

GPU: 2× NVIDIA RTX PRO 6000 Server Edition (Blackwell-based “Server Edition”), each with 96 GB GDDR7, double‑width PCIe accelerator cards.

System memory: 512 GB DDR5‑6400 ECC RDIMM installed, using the platform’s 16‑DIMM, 8‑channel memory subsystem.

Maximum memory capability (platform):

Up to 1 TB DDR5‑6400 ECC RDIMM in 1DPC.

Up to 512 GB DDR5‑8000 ECC MRDIMM in 1DPC.

Up to 2 TB DDR5‑5200 ECC RDIMM in 2DPC.
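The installed 512 GB and the platform maximums above can be sanity-checked against the 16-slot, 8-channel layout. The sketch below is an assumption about plausible even populations (e.g. 8× 64 GB in 1DPC), not the confirmed factory DIMM layout:

```python
# Sanity-check plausible DIMM populations for the 16-slot, 8-channel platform.
# The specific module sizes below are illustrative assumptions, not the
# confirmed factory configuration.

SLOTS = 16
CHANNELS = 8

def population(total_gb, dimm_gb):
    """Return (dimm_count, dimms_per_channel), or None if the total
    cannot be reached with an even spread across all channels."""
    if total_gb % dimm_gb:
        return None
    count = total_gb // dimm_gb
    if count > SLOTS or count % CHANNELS:
        return None
    return count, count // CHANNELS

print(population(512, 64))    # (8, 1): 1DPC, all 8 channels populated
print(population(512, 32))    # (16, 2): same capacity in 2DPC
print(population(2048, 128))  # (16, 2): the platform's 2 TB RDIMM maximum
```

Note that 1DPC populations are typically preferred here, since the spec above shows DDR5 running faster in 1DPC (6400 MT/s) than in 2DPC (5200 MT/s).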

Storage and expansion

Rear hot‑swap storage: 4× E1.S PCIe 5.0 x4 NVMe SSD bays; G2 config typically populated with 2× 3.84 TB E1.S SSDs for primary data.

Onboard boot / cache: 2× M.2 PCIe 5.0 x2 NVMe slots (M‑key, 2280/22110); G2 configuration includes 1× 960 GB M.2 NVMe SSD as OS / boot drive.

PCIe GPU capacity (platform): up to 4 double‑width PCIe 5.0 x16 FHFL GPUs plus 3 additional PCIe 5.0 x16 FHFL slots for NICs or other accelerators (G2 uses 2 of the double‑width GPU positions).

Networking, power and cooling
Networking:

1× dedicated 1 GbE RJ45 BMC/IPMI management port.

Add‑on network via PCIe (the G2 build commonly includes 1× dual‑port 10 GbE RJ45 adapter).

Power: 2× 2700 W redundant Titanium‑level (96% efficiency) power supplies, hot‑swap and load‑balanced.

Cooling: 5× 8 cm heavy‑duty fans with optimal fan speed control and thermal monitoring for CPU, GPUs, and chassis.
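The Titanium rating above implies a simple relationship between DC output and wall draw. A minimal sketch, using the 96% peak-efficiency figure from the spec (real efficiency varies with load level):

```python
# Wall-power draw implied by PSU efficiency: input = output / efficiency.
# 0.96 is the rated Titanium peak efficiency quoted above; efficiency at
# low or very high load will differ.

def input_watts(output_watts, efficiency=0.96):
    """AC input power needed to deliver a given DC output."""
    return output_watts / efficiency

# One 2700 W supply at full DC load pulls roughly 2812 W at the wall;
# the second supply in the redundant pair shares or backs up that load.
print(f"{input_watts(2700):.0f} W")
```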

Management and software readiness

Out‑of‑band management via integrated BMC with IPMI support and dedicated management LAN.

Supported management tools include SuperCloud Composer, Supermicro Server Manager (SSM), Super Diagnostics Offline (SDO), and related Supermicro utilities.

Ideal for deployment as an AI inference node in Kubernetes, OpenShift, or other container/orchestrated environments, supporting modern LLM and RAG stacks on NVIDIA RTX PRO 6000 GPUs.
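For LLM serving, the combined 2× 96 GB = 192 GB of GDDR7 can be roughly sized against model weight footprints. A common rule of thumb (weights only; the model sizes and datatypes below are illustrative, and real deployments also need headroom for KV cache, activations, and the runtime) is parameters × bytes per parameter:

```python
# Rule-of-thumb check of which LLM weight footprints fit in the combined
# 2x 96 GB of GPU memory. Weights-only estimate: 1B parameters at 1 byte
# per parameter is ~1 GB. KV cache and activations need extra headroom.

TOTAL_VRAM_GB = 2 * 96  # two RTX PRO 6000 Server Edition cards

def weights_gb(params_billions, bytes_per_param):
    """Approximate weight footprint in GB."""
    return params_billions * bytes_per_param

# Illustrative model sizes (assumptions, not vendor-validated configs):
for params, dtype, bpp in [(70, "FP16", 2), (70, "FP8", 1), (180, "FP8", 1)]:
    need = weights_gb(params, bpp)
    verdict = "fits" if need <= TOTAL_VRAM_GB else "does not fit"
    print(f"{params}B @ {dtype}: ~{need} GB -> {verdict} in {TOTAL_VRAM_GB} GB")
```

By this estimate a 70B-parameter model fits comfortably at FP8 (~70 GB) and still fits at FP16 (~140 GB) when split across both cards, which is consistent with the inference-oriented positioning above.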

Brand

Supermicro