Discovery Phase Tool: This builder walks through the key decisions required to specify a complete vendor-agnostic SMB datacenter solution — 3 servers, 2 switches, 1 shared storage array. Outputs are designed to start the conversation with your SE or VAR, not replace it.

Step 1 — Compute & Memory
Processor platform, core density, and memory architecture

Intel Xeon

8 Memory Channels / Socket
  • Broadest ecosystem support
  • Strong ISV certifications
  • Familiar to most VAR channels
  • Slight edge on workloads tuned for Intel-specific instructions

AMD EPYC

12 Memory Channels / Socket
  • More cores per dollar
  • Strong performance per watt
  • More memory bandwidth
  • Growing enterprise adoption
Intel Xeon holds the largest installed base and enjoys the broadest VAR and ISV support. For the vast majority of virtualized SMB workloads, both platforms run the same x86-64 code — platform choice often comes down to vendor pricing and support preference.
Single socket configuration limits PCIe lane availability. High card density requires a second CPU to activate all expansion slots.
Max Cluster VMs: --  (N-1 failover @ 4:1 vCPU)
Avg vRAM / VM: --  (virtual allocation ceiling)
Physical RAM / Node: --  (balanced per architecture)
Est. Node Power: --  (CPU + memory overhead)
Architect's Note: Physical RAM is your hard limit. Modern hypervisors use memory ballooning and transparent page sharing to stretch allocation beyond it — but sizing to physical RAM is always the right starting point.
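The sizing math behind the "Max Cluster VMs" and "Avg vRAM / VM" figures above can be sketched as follows. All input values here are illustrative assumptions, not recommendations — substitute the core counts and RAM from your actual quote.

```python
# Sketch of the N-1 failover sizing math at a 4:1 vCPU ratio.
# All inputs below are illustrative assumptions.
NODES = 3
CORES_PER_NODE = 32       # assumed physical cores per node
VCPU_RATIO = 4            # 4:1 vCPU oversubscription, as in the tool
VCPUS_PER_VM = 4          # assumed average VM size
RAM_PER_NODE_GB = 512     # assumed physical RAM per node

# N-1 failover: size the cluster as if one node is down.
usable_nodes = NODES - 1
vcpu_pool = usable_nodes * CORES_PER_NODE * VCPU_RATIO
max_vms_cpu = vcpu_pool // VCPUS_PER_VM

# Average vRAM per VM if every VM is sized against physical RAM
# (no memory overcommit), per the Architect's Note.
avg_vram_gb = (usable_nodes * RAM_PER_NODE_GB) / max_vms_cpu

print(f"Max cluster VMs (N-1, {VCPU_RATIO}:1): {max_vms_cpu}")
print(f"Avg vRAM / VM at physical sizing: {avg_vram_gb:.1f} GB")
```

With these assumed inputs, the two surviving nodes yield a 256-vCPU pool and roughly 64 four-vCPU VMs at 16 GB each.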
Step 2 — Boot Drives
Local storage for hypervisor only — workload data lives on shared storage

Boot Storage Reference

BEST

Dedicated Boot Module (BOSS/NS204i) — Vendor-specific M.2 RAID module that keeps boot drives off the PCIe slots and SATA backplane. Cleanest solution, no expansion slot or drive bay consumed, purpose-built for hypervisor boot.

GOOD

NVMe SSD Pair — Modern hypervisors require frequent reads and writes to boot media. NVMe provides the performance headroom. A mirrored pair (RAID 1) adds resilience. Two M.2 slots consumed.

OK

SATA SSD Pair (480GB) — Sufficient for boot-only workloads. Budget-friendly. Consumes SATA backplane ports. Adequate if NVMe slots are being used for other purposes.

Important: In a 3-2-1 architecture, local drives on each server are for hypervisor boot only. All VM storage, snapshots, and workload data reside on the shared storage array — not on local drives. This is intentional: it enables VM mobility (live migration) between nodes and centralizes data protection.
Boot Option: --  (per node)
PCIe / Slots Used: --  (for boot storage)
Step 3 — Connectivity & Fabric
Production Ethernet, storage fabric, and out-of-band management
Copper vs. Fiber: 10Gb copper (RJ45) connects directly to standard switches and is backward-compatible with 1Gb infrastructure. Fiber (SFP+ at 10Gb, SFP28 at 25Gb) offers higher port density and better signal integrity over distance but requires transceivers and fiber cabling — higher cost, higher performance.
FC vs. iSCSI: Fibre Channel is a dedicated storage protocol — highest performance, lowest latency, highest cost. Most SMB deployments use iSCSI (storage over Ethernet) which eliminates the need for HBAs and separate FC switches. FC is worth the investment for latency-sensitive workloads like large SQL or Oracle databases.
Switch Ports Required: --  (Eth + OOB ports)
PCIe Slots / Node: --  (expansion cards)
Cluster Heat: --  (BTU/hr total)
Stack Rack Units: --  (servers + switches + storage)
Out-of-Band Management: Dell iDRAC, HPE iLO, and Lenovo XCC each provide a dedicated 1Gb RJ45 management port built into the server motherboard. These connect to a separate management VLAN and do not consume any PCIe expansion slots.
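The "Switch Ports Required" and "Cluster Heat" figures follow simple arithmetic. A minimal sketch, assuming a typical 3-node iSCSI build — every per-node port count and wattage below is an assumption to be replaced with your design's values:

```python
# Port-count and heat math behind the Step 3 figures.
# All counts and wattages are assumptions for a typical iSCSI build.
NODES = 3
PROD_PORTS_PER_NODE = 2     # assumed: 2x production Ethernet per node
STORAGE_PORTS_PER_NODE = 2  # assumed: 2x iSCSI ports per node
OOB_PORTS_PER_NODE = 1      # iDRAC / iLO / XCC management port
ARRAY_PORTS = 4             # assumed: 2 ports x 2 array controllers

switch_ports = NODES * (PROD_PORTS_PER_NODE + STORAGE_PORTS_PER_NODE
                        + OOB_PORTS_PER_NODE) + ARRAY_PORTS

# Heat: 1 watt dissipated is approximately 3.412 BTU/hr.
cluster_watts = NODES * 450 + 2 * 150 + 350  # assumed node/switch/array draw
btu_per_hr = cluster_watts * 3.412

print(f"Switch ports required: {switch_ports}")
print(f"Cluster heat: {btu_per_hr:.0f} BTU/hr")
```

Splitting these ports across the two switches (with OOB on a management VLAN) provides the redundancy the 3-2-1 design assumes.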
Step 4 — Shared Storage Array
The "1" in 3-2-1 — centralized storage for all VM workloads
Array Form Factor: --  (rack space)
Est. Array Power: --  (watts)
Protocol alignment: If Fibre Channel is selected as storage protocol, HBA cards must be added in Step 3. iSCSI and NFS run over existing Ethernet infrastructure — no additional hardware required.
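Usable capacity on the array depends on drive count, RAID level, and spares. A rough sketch, assuming illustrative drive counts and sizes (none of these values are recommendations) and ignoring filesystem and snapshot overhead:

```python
# Illustrative usable-capacity math for the shared array.
# Drive count, drive size, and RAID level are assumptions.
DRIVES = 12
DRIVE_TB = 1.92          # assumed raw TB per drive
SPARES = 1               # assumed hot-spare count

def usable_tb(data_drives, drive_tb, raid):
    """Approximate usable capacity before filesystem overhead."""
    if raid == "raid10":
        return data_drives // 2 * drive_tb   # mirrored pairs
    if raid == "raid5":
        return (data_drives - 1) * drive_tb  # one drive of parity
    if raid == "raid6":
        return (data_drives - 2) * drive_tb  # two drives of parity
    raise ValueError(f"unknown RAID level: {raid}")

data_drives = DRIVES - SPARES
print(f"RAID 6 usable: {usable_tb(data_drives, DRIVE_TB, 'raid6'):.1f} TB")
print(f"RAID 10 usable: {usable_tb(data_drives, DRIVE_TB, 'raid10'):.1f} TB")
```

RAID 6 trades write performance for double-parity protection; RAID 10 gives better write latency at roughly half the usable capacity. Your SE or VAR can map this to the array vendor's actual RAID implementation.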
Step 5 — Support & Service
Contract term and response time level
Most Common

3-Year Term

Standard SMB contract. Aligns with typical refresh cycles. Usually the best price-per-year value. Renewable at expiration on vendor terms.

5-Year Term

Lower annual cost. Good for stable environments with predictable workloads. Less flexibility if technology needs change.

Most Common

Next Business Day

Parts and technician on-site by next business day. Standard for most SMB environments. Assumes an overnight downtime window is acceptable.

4-Hour Response

Mission-critical workloads. Technician on-site within 4 hours, 24x7x365. Meaningful cost premium — justified for revenue-generating systems.

Rails note: All 3 servers require rack rail kits. These are typically ordered separately and are vendor and rack-post specific (square hole vs. round hole). Confirm rack post type before ordering. Cable management arms (CMA) are optional but recommended for dense deployments.
Solution Summary
Vendor-agnostic 3-2-1 SMB Datacenter — Discovery Phase Output
Compute (×3 Nodes)
Platform: --
Sockets / Cores: --
Memory / Node: --
Boot Storage: --
Support: --
Rails: Required — order separately
Networking (×2 Switches)
Speed / Media: --
NIC Config: --
Storage Fabric: --
Ports Required: --
Shared Storage (×1 Array)
Tier: --
Protocol: --
Usable Capacity: --
Rack Layout
Capacity Summary
Max VMs (N-1 failover): --
Avg vRAM / VM: --
Total Rack Units: --
Cluster Heat Output: --
Watts / Node: --
Total Cluster Watts: --
kW (PDU Sizing): --
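The PDU-sizing figure is the total cluster draw converted to kW with a safety margin. A minimal sketch, assuming illustrative wattages (replace with measured or vendor-spec values) and a common 20% headroom margin:

```python
# PDU sizing sketch: cluster draw -> kW and circuit amps with headroom.
# All wattage figures are assumptions; use vendor specs or measurements.
node_w, nodes = 450, 3          # assumed per-node draw
switch_w, switches = 150, 2     # assumed per-switch draw
array_w = 350                   # assumed array draw

total_w = nodes * node_w + switches * switch_w + array_w
kw = total_w / 1000
headroom_kw = kw * 1.2          # assumed 20% PDU sizing margin
amps_208v = total_w * 1.2 / 208 # current on a 208V circuit, with margin

print(f"Total cluster draw: {total_w} W ({kw:.1f} kW)")
print(f"PDU sizing target: {headroom_kw:.1f} kW (~{amps_208v:.1f} A @ 208V)")
```

Sizing the PDU to the margined figure, rather than the nameplate sum, leaves room for power-on inrush and future expansion; confirm circuit voltage and breaker rating with your facilities team.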