
QSAN XCubeFAS 3126D All-Flash Array Review

Aug 20, 2023

The QSAN XCubeFAS 3126D is an entry-level all-flash array featuring 100% native NVMe, 3U, 26-bay high-density storage. It has been a while since we've reviewed a QSAN product, so it's nice to get our hands on one again and put it through our tests. The XCubeFAS 3126D is a decently specced system positioned mainly as a low-cost alternative to higher-end products. It features a modular system design that simplifies upgrading and replacing system components, and it is equipped with 26 bays that support 2.5″ U.2 dual-port NVMe SSDs.

The new QSAN storage array is powered by a 6-core Intel Xeon processor and 16GB of DDR4 ECC RDIMM preinstalled RAM (upgradable to a generous 384GB via 6 memory slots), and offers a potential raw capacity of almost 400TB. For connectivity, the 3126D features one USB 2.0 port and one USB 3.0 port, one 1GbE RJ45 LAN port (for onboard management), two 10GbE RJ45 LAN ports (optional iSCSI), and six 10GbE SFP+ LAN ports (4 of which are optional). It also supports 25GbE iSCSI and 32Gb FC connectivity with up to 20 ports to meet various network deployment needs.

For performance, QSAN claims that their new storage array can reach up to 450,000 random read IOPS (at 500μs latency) and 220,000 random write IOPS (at 300μs latency).

The QSAN XCubeFAS 3126D is managed by XEVO, QSAN's flash-based storage management system that offers a range of useful capabilities and allows users to access their data almost immediately after the storage is installed. It also features a user-friendly dashboard and report system that generates useful business usage analyses as well as the ability to monitor storage in real-time. In addition, it comes with external management features like RESTful API, SNMP, and email notifications.
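
Those external management hooks make it straightforward to fold the array into existing monitoring. Below is a minimal Python sketch of polling a REST endpoint for health and pool capacity; the endpoint paths, port, and token header are hypothetical placeholders rather than XEVO's documented API, so the real routes should be taken from QSAN's API reference.

import requests

ARRAY = "https://192.168.1.100:8080"                # management address (example only)
HEADERS = {"Authorization": "Bearer <api-token>"}   # placeholder credential

# Poll overall system health and pool capacity, then flag anything abnormal.
# Both endpoint paths below are hypothetical illustrations, not XEVO's actual routes.
# verify=False is used here only because management interfaces often ship with
# self-signed certificates.
health = requests.get(f"{ARRAY}/api/v1/system/health", headers=HEADERS, verify=False).json()
pools = requests.get(f"{ARRAY}/api/v1/pools", headers=HEADERS, verify=False).json()

if health.get("status") != "ok":
    print("System alert:", health)

for pool in pools:
    used_pct = 100 * pool["used_bytes"] / pool["total_bytes"]
    print(f"{pool['name']}: {used_pct:.1f}% used")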

The diskless base model goes for roughly $20,000 (which includes two 10GbE SFP+ host connections per controller). Our review model comes with FC (4x 32Gb FC ports per controller, plus GBICs) and is priced at $28,827 MSRP without SSDs.

The QSAN XCubeFAS 3126D has a 3U, 19-inch rackmount form factor, with all 26 bays located on the front panel. Weighing in at roughly 34kg, this is a hefty storage array, though installing it onto a rail was a fairly easy process.

In addition to the drive bays (each of which has its own disk power and status LEDs), the front panel is home to the power button, unique identifier (UID) button/LED indicator, enclosure access and enclosure status LEDs, and a USB 2.0 port at the top right.

Installing an SSD is just a matter of fastening it onto the tray via the usual 4 screws, inserting it into the drive bay, and locking the release button.

Turning the storage array around to the back reveals its dual active-active controller design, which provides continuous operation and ensures uninterrupted storage service. This redundant design also makes the array easy to upgrade and maintain without having to worry about downtime.

The back panel itself houses the power supply units, a multitude of network ports, a USB 3.0 port, optional host card slots, master/slave/UID/dirty-cache/controller-status LEDs, console and service ports, a buzzer mute button, and a reset-to-factory-default button.

Under the hood, you will notice how widely spread out the components and hardware are, making it easy for the six system fans to distribute airflow and cool the components.

The QSAN XCubeFAS 3126D is managed through XEVO, the company's user-friendly management software, which makes it easy to administer storage and to deploy or integrate the array into any environment. XEVO also supports a dual-active (active/active) controller architecture for high availability and can be managed from a mobile device. Overall, we had no trouble setting up our system; it was a fairly easy process.

Once you load up the software, you are greeted by the dashboard. Here, you will see hardware and system alerts, array capacity information, and a storage overview, as well as options to reboot or shut down the system. From the dashboard, you can also monitor storage array performance by latency, IOPS, and throughput, track SSD usage with custom notifications, and keep an eye on SSD health to optimize performance and longevity.

Navigating through the menus from the dashboard is quick, easy, and responsive. In the System tab, you will see a layout/diagram of both the front and back panel of your QSAN array, as well as the components that are active (highlighted in green). Mousing over a disk will give you a detailed popup display, which shows things like the disk name, temperature, status, type, pool designation, and more. This is a really handy feature.

The Storage tab is where you manage the pools, disk groups, and volumes located on your server, accompanied by information and statuses for each. In our case, we created two pools, which can be selected from the left navbar.

In the Host tab, you can create, modify, or delete host groups, display their status, and configure profiles and connected volumes.

From the Analysis tab, you can easily find historical data about your array. It can also generate analytic reports for volumes, storage capacity, and consumption as far back as one year. This will certainly make it easier for IT to allocate their resources better.

The Protection tab displays protection group status and allows users to configure and manage protection volumes. That is, users can bind one or more volumes together so that data backup services (e.g., snapshots, cloning, and remote replication) can be performed on them at the same time. To create a protection group, we just had to click the green "+" icon at the top left to load the wizard. From there, it was as easy as following the directions and choosing a few settings, such as the name, schedule, and type.

QSAN XCubeFAS 3126D Performance

QSAN XCubeFAS 3126D Test Configuration

SQL Server Performance

StorageReview's Microsoft SQL Server OLTP testing protocol employs the current draft of the Transaction Processing Performance Council's Benchmark C (TPC-C), an online transaction processing benchmark that simulates the activities found in complex application environments. The TPC-C benchmark comes closer than synthetic performance benchmarks to gauging the performance strengths and bottlenecks of storage infrastructure in database environments.

Each SQL Server VM is configured with two vDisks: a 100GB volume for boot and a 500GB volume for the database and log files. From a system-resource perspective, we configured each VM with 16 vCPUs, 64GB of DRAM, and leveraged the LSI Logic SAS SCSI controller. While our previously tested Sysbench workloads saturated the platform in both storage I/O and capacity, the SQL test looks for latency performance.

This test uses SQL Server 2014 running on Windows Server 2012 R2 guest VMs and is stressed by Dell's Benchmark Factory for Databases. While our traditional usage of this benchmark has been to test large 3,000-scale databases on local or shared storage, in this iteration we focus on spreading out four 1,500-scale databases evenly across our servers.

SQL Server Testing Configuration (per VM)

For our average latency SQL Server benchmark, we saw an average aggregate score of 18.3ms, with individual VMs ranging from 17ms to 20ms.

Looking at TPS, the QSAN XCubeFAS 3126D saw an aggregate score of 12,507.38 with 4 VMs.

Sysbench MySQL Performance

Our first local-storage application benchmark consists of a Percona MySQL OLTP database measured via SysBench. This test measures average TPS (transactions per second), average latency, and average 99th-percentile latency.

Each Sysbench VM is configured with three vDisks: one for boot (~92GB), one with the pre-built database (~447GB), and the third for the database under test (270GB). From a system resource perspective, we configured each VM with 16 vCPUs, 60GB of DRAM, and leveraged the LSI Logic SAS SCSI controller.
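
For reference, the sketch below shows the general shape of a Sysbench OLTP run driven from Python, with the TPS figure pulled from its output. It uses the stock oltp_read_write script from sysbench 1.0 rather than our exact workload, and the host address, credentials, table counts, thread count, and runtime are illustrative placeholders, not our test parameters.

import re
import subprocess

# Illustrative Sysbench OLTP invocation; every parameter here is a placeholder.
SYSBENCH_CMD = [
    "sysbench", "oltp_read_write",
    "--mysql-host=10.0.0.50", "--mysql-user=sbtest", "--mysql-password=sbtest",
    "--tables=16", "--table-size=1000000",
    "--threads=32", "--time=300", "--report-interval=10",
    "run",
]

result = subprocess.run(SYSBENCH_CMD, capture_output=True, text=True, check=True)

# Sysbench prints a summary line such as "transactions: 469732 (1565.75 per sec.)"
match = re.search(r"transactions:\s+\d+\s+\(([\d.]+) per sec\.\)", result.stdout)
if match:
    print(f"Average TPS: {float(match.group(1)):.2f}")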

Sysbench Testing Configuration (per VM)

With the Sysbench OLTP, we tested 8 VMs for an aggregate score of 12,446 TPS, with individual VMs running from 1,539.68 TPS to 1,572.96 TPS.

For Sysbench average latency, we saw an aggregate score of 20.57ms with individual VMs ranging from 20.34ms to 20.85ms.

For our worst-case scenario latency (99th percentile), the QSAN XCubeFAS 3126D had an aggregate latency of 38.36ms with individual VMs ranging from 37.81ms to 39.96ms.

VDBench Workload Analysis

When it comes to benchmarking storage arrays, application testing is best, and synthetic testing comes in second place. While not a perfect representation of actual workloads, synthetic tests do help to baseline storage devices with a repeatability factor that makes it easy to do apples-to-apples comparisons between competing solutions. These workloads offer a range of testing profiles, from "four corners" tests and common database transfer-size tests to trace captures from different VDI environments. All of these tests leverage the common vdBench workload generator, with a scripting engine to automate and capture results over a large compute testing cluster. This allows us to repeat the same workloads across a wide range of storage devices, including flash arrays and individual storage devices.
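
To give a sense of what these workload definitions look like, here is a minimal sketch that writes a Vdbench parameter file for the 4K random-read corner and launches the run from Python. The LUN path, thread count, and run length are assumptions for illustration and do not reflect our exact test profile.

import subprocess

PARAM_FILE = "4k_rand_read.vdb"

# Storage definition (sd), workload definition (wd), and run definition (rd)
# for a fully random, 100% read, 4K-transfer workload. All values are examples.
params = """\
sd=sd1,lun=/dev/sdb,openflags=o_direct,threads=16
wd=wd_4kread,sd=sd*,xfersize=4k,rdpct=100,seekpct=100
rd=rd_4kread,wd=wd_4kread,iorate=max,elapsed=300,interval=5
"""

with open(PARAM_FILE, "w") as f:
    f.write(params)

# Vdbench reports interval-by-interval IOPS and response time in the output directory.
subprocess.run(["./vdbench", "-f", PARAM_FILE, "-o", "output_4k_rand_read"], check=True)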

Profiles:

4K random read and 4K random write
32K and 64K sequential read and write
SQL, SQL 90-10, and SQL 80-20
Oracle, Oracle 90-10, and Oracle 80-20
VDI Full Clone and Linked Clone traces (Boot, Initial Login, Monday Login)

With random 4K read, the QSAN XCubeFAS 3126D hit a peak performance of 259,509 IOPS with a latency of 9.77ms.

For 4K random write, the array showed a peak of 130,907 IOPS at a latency of 10ms.

Next up are sequential workloads, where we first looked at 32K. In reads, we saw 271,975 IOPS (or 8.5GB/s) at a latency of 2.69ms.

For 32K writes, the QSAN recorded 59,788 IOPS (or 1.87GB/s) at a latency of 8.56ms.

Next up is sequential 64k. For reads, the XCubeFAS 3126D saw a peak of 218,133 IOPS or 13.6GB/s at a latency of 3.58ms.

For 64K sequential write, the XCubeFAS 3126D was able to peak at 2.01GB/s with a latency of 31.8ms.

Our next set of tests covers our SQL workloads: SQL, SQL 90-10, and SQL 80-20. Starting with SQL, the XCubeFAS 3126D peaked at 242,626 IOPS with a latency of 3.14ms.

For SQL 90-10, the XF3126D peaked at 228,029 IOPS with a latency of 3.25ms.

In SQL 80-20, the QSAN server showed a peak of 209,003 IOPS at a latency of 3.46ms.

Next up are our Oracle workloads: Oracle, Oracle 90-10, and Oracle 80-20. Starting with Oracle, the XCubeFAS XF3126D stayed below 1,000µs until near the end of the test, where it went on to peak at 205,817 IOPS with a latency of roughly 4.5ms.

For Oracle 90-10, the QSAN server peaked at 227,457 IOPS at a latency of 2.19ms.

With Oracle 80-20, the XCubeFAS 3126D saw a peak of 208,658 IOPS with a latency of 2.39ms.

Next, we switched over to our VDI clone test, Full and Linked. For VDI Full Clone (FC) Boot, we see sub-1000µs latency performance up until just before the 200K IOPS mark and a peak of 222,904 IOPS with a latency of 3.63ms.

In VDI FC Initial Login, the XCubeFAS 3126D peaked at 114,074 IOPS with a latency of 7.57ms.

VDI FC Monday Login showed a peak of 131,157 IOPS with a latency of 3.88ms.

Switching to VDI Linked Clone (LC) Boot, the XCubeFAS 3126D peaked at 225,506 IOPS with a latency hovering around 1.8ms.

In Initial Login, the XCubeFAS 3126D showed a peak of 101,894 IOPS at 2.5ms.

Finally, we look at VDI LC Monday Login where the XCubeFAS 3126D server peaked at 103,241 IOPS with a latency of 4.92ms.

The QSAN XCubeFAS 3126D is a solid all-flash storage array and certainly an affordable alternative to higher-end systems. Equipped with 26 bays for 2.5″ U.2 dual-port NVMe SSDs, its modular system design also simplifies upgrading and maintaining system components. Under the hood lies a 6-core Intel Xeon processor, 16GB of DDR4 ECC RDIMM preinstalled RAM (upgradable to 384GB via 6 memory slots), and a potential raw capacity of almost 400TB. It also features USB 2.0 and 3.0 connectivity and a comprehensive selection of network ports to meet the needs of a range of deployment environments.

The XCubeFAS 3126D showed some good results for its class in our benchmarks as well. In our VDBench workload analysis, highlights included 259,509 IOPS in 4K read, 130,907 IOPS in 4K write, 8.5GB/s in 32K read, 1.87GB/s in 32K write, 13.6GB/s in 64K read, and 2.01GB/s in 64K write.

With our SQL tests, we saw peaks of 243K IOPS in SQL, 228K IOPS in SQL 90-10, and 209K IOPS in SQL 80-20. With Oracle, the QSAN server saw peaks of 206K IOPS in Oracle, 227K IOPS in Oracle 90-10, and 209K IOPS in Oracle 80-20. Moving on to VDI Full Clone results, we saw peaks of 223K IOPS in Boot, 114K IOPS in Initial Login, and 131K IOPS in Monday Login, while VDI Linked Clone recorded 226K IOPS in Boot, 102K IOPS in Initial Login, and 103K IOPS in Monday Login.

Overall, QSAN's 3126D is one of the better entry-level all-flash arrays we’ve come across. Its modular design is easy to maintain and highly customizable, it offers solid performance for its class, and it comes with a range of connectivity options to suit many different use cases. So, if you’re looking for an affordable way to set up an all-flash storage server for your organization, the 26-bay XCubeFAS 3126D is definitely worth checking out.

Engage with StorageReview

Newsletter | YouTube | Podcast iTunes/Spotify | Instagram | Twitter | Facebook | RSS Feed

Lyle is a staff writer for StorageReview, covering a broad set of end user and enterprise IT topics.
