
QSAN XCubeSAN XS1200 Series Review

May 31, 2023

The QSAN XCubeSAN XS1200 Series is a dual-controller SAN designed to meet the needs of SMB and ROBO environments. Supporting both Fibre Channel and iSCSI, the XS1200 can handle the requisite workloads. QSAN offers a wide variety of features in the array via SANOS 4.0, highlighted by thin provisioning, SSD read/write cache, tiering, snapshots, local volume clones, and remote replication. Internally, the controllers are powered by Intel Xeon D-1500 dual-core CPUs and 4GB of DDR4 memory. For those who need to scale up, QSAN offers the XD5300 expansion unit; the XS1200 can support up to 286 drives in total.

Within the XS1200 family, QSAN offers a bevy of form factors with either one (S) or two (D) controllers. The XS1224S/D is a 4U system with 24x 3.5″ bays, the XS1216S/D is 3U with 16x 3.5″ bays, and the XS1212S/D is 2U with 12x 3.5″ bays. QSAN also offers a model optimized for flash, which is the system under review here in its dual-controller configuration. The XS1226D uniquely offers 26x 2.5″ bays across the front, two more than most arrays or servers typically offer. This comes in handy in a variety of ways depending on RAID configuration. In our case, testing was done in RAID10, so the extra bays can be leveraged for hot spares; other RAID configs could use them for extra capacity.

Getting access to all of this flash means controller connectivity is important. Each controller offers two expansion slots that can support 1GbE, 10GbE, Fibre Channel, or some combination. Each controller also has twin 10GbE ports on board, so with quad-port cards in both slots, a single controller can field up to ten 10GbE ports. With Fibre Channel, the XS1200 supports four ports per controller.

Data integrity and reliability in a system like this are important. QSAN claims five nines of availability, on par with most enterprise systems. For those who want an extra layer of data path protection, QSAN offers an optional Cache-to-Flash module that pairs an M.2 SSD with either a BBM (Battery Backup Module) or an SCM (Super Capacitor Module) to protect in-flight data in the event of unexpected power loss.

As configured, without disks included, the cost of our XS1226D as reviewed was $9,396 (base XS1226D, plus rails and two 4-port 16Gb FC cards).

QSAN XCubeSAN XS1200 Series Specifications

Design and Build

The XS1226D is a dual-controller active/active storage array with a 2U profile featuring 26 2.5″ bays for SAS HDDs or SSDs. The 26-drive format is unusual in the space, as most systems only fit 24 bays up front, giving QSAN a bit of a leg up on the competition. On the right side of the front panel are the system power button, the UID (Unique Identifier) button, system access and system status LEDs, and a USB port for the USB LCM module.

The rear of the chassis houses the dual redundant power supplies as well as the dual controllers. Each controller has twin 10GBase-T network ports on board, in addition to an out-of-band management interface. For additional connectivity, each controller has two host card slots, which can be loaded with dual- or quad-port 8/16Gb Fibre Channel cards, or dual- or quad-port 1/10Gb Ethernet cards. This gives users a wide range of options for attaching the storage to a diverse datacenter environment. Expansion is also supported through two 12Gb/s SAS ports per controller, enabling SAS 3.0 expansion shelves.

Management and Usability

The QSAN XS1200 series uses the company's SANOS operating system, currently in its 4.0 release. The OS has an overall simple and intuitive layout. Along the left-hand side of the screen are main and sub-menus for functions such as Dashboard, System Settings, Host Connectivity, Storage Management, Data Backup, Virtualization, and Monitoring. Each of the main menus has sub-menus that let users drill down into specifics. In short, SANOS 4.0 gives users easy access to all the functions they will need when managing a SAN.

The first screen we look at is the Dashboard, which gives users a general look at the system (broken down into specific information), performance, storage, and event logs.

The only sub-menu for Dashboard is Hardware Monitoring. As the name implies, this function allows users to drill down into what hardware is in the system and its status, such as whether it is functioning properly or whether it has been installed at all (at the bottom, one can see that we didn't install the power module for the Cache-to-Flash feature, so it shows up as absent).

Under System Settings, users can access menus such as general settings, management port, power settings, notifications, and maintenance. Under the maintenance menu, users get system information (for the overall system and each controller), system updates, firmware synchronization, system identification, reset to defaults, configuration backup, volume restoration, and the ability to reboot or shut down the system.

Host Connectivity gives users an overview of each controller as well as location, port name, status, and MAC address/WWPN. Users also have the option of drilling down further into either the iSCSI ports or Fibre Channel Ports.

The last main menu we are going to look at for this review is, of course, Storage Management. This menu has four sub-menus. The first looks at Disks. Here one can easily see the slot a disk occupies, its status, health, capacity, type (interface and whether it is an SSD or HDD), usage, pool name, manufacturer, and model.

The next sub-menu looks at Pools. Here one can see the pool name, status, health, total capacity, free capacity, available capacity, whether thin provisioning is enabled or not, the volumes in use, and the current owning controller.

The Volumes sub-menu is similar to the others in this category, with the ability to create volumes and see information such as the volume name, status, health, capacity, type, whether SSD cache is enabled or not, snapshot space, the number of snapshots, clone and write settings, and pool name.

The final sub-menu is LUN Mappings. Through this screen users can map LUNs and see information such as Allowed Hosts, target, LUN, permission, sessions, and volume name.
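
Once a LUN is mapped to an allowed host, the host discovers and logs into the target to attach the volume. As a rough illustration (not QSAN-specific tooling; the portal address and IQN below are placeholders), a Linux initiator using open-iscsi could do so like this:

```python
import subprocess

# Hypothetical open-iscsi attach sequence; the portal address and IQN
# are placeholders, not values from the reviewed array.
portal = "192.168.1.100"  # a controller 10GbE data port (placeholder)

# Discover the targets exposed on that portal
subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets",
                "-p", portal], check=True)

# Log in to a discovered target; the mapped LUN then appears to the
# host as a regular block device
target = "iqn.2004-08.com.qsan:xs1226-example"  # placeholder IQN
subprocess.run(["iscsiadm", "-m", "node", "-T", target,
                "-p", portal, "--login"], check=True)
```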

Application Workload Analysis

The application workload benchmarks for the QSAN XCubeSAN XS1200 consist of MySQL OLTP performance via Sysbench and Microsoft SQL Server OLTP performance with a simulated TPC-C workload. In each scenario, the array was configured with 26 Toshiba PX04SV SAS 3.0 SSDs arranged in two 12-drive RAID10 disk groups, one pinned to each controller, leaving two SSDs as hot spares. Two 5TB volumes were then created, one per disk group. In our testing environment, this created a balanced load for our SQL and Sysbench workloads.
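
The math behind that layout is simple; here is a minimal sketch, using the 960GB per-drive capacity of the PX04SV units in our review:

```python
drive_gb = 960                        # Toshiba PX04SV capacity
group_drives = 12                     # drives per RAID10 disk group
groups = 2
spares = 26 - groups * group_drives   # 2 drives left as hot spares

# RAID10 mirrors drive pairs, so usable space is half the raw space
usable_per_group_gb = (group_drives // 2) * drive_gb
print(f"hot spares: {spares}")
print(f"usable per group: ~{usable_per_group_gb / 1000:.2f}TB")
# ~5.76TB per group, comfortably holding the 5TB volume created on each
```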

SQL Server Performance

Each SQL Server VM is configured with two vDisks: a 100GB volume for boot and a 500GB volume for the database and log files. From a system resource perspective, we configured each VM with 16 vCPUs, 64GB of DRAM and leveraged the LSI Logic SAS SCSI controller. While our Sysbench workloads saturate the platform in both storage I/O and capacity, the SQL test looks for latency performance.

This test uses SQL Server 2014 running on Windows Server 2012 R2 guest VMs, and is stressed by Quest's Benchmark Factory for Databases. While our traditional usage of this benchmark has been to test large 3,000-scale databases on local or shared storage, in this iteration we focus on spreading out four 1,500-scale databases evenly across the QSAN XS1200 (two VMs per controller).

SQL Server Testing Configuration (per VM)

SQL Server OLTP Benchmark Factory LoadGen Equipment

We measured the performance of an SQL Server configuration that leveraged 24 SSDs in RAID10. Individual VM TPS performance was virtually identical, ranging from 3,158.4 to 3,158.8 TPS. The aggregate performance recorded was 12,634.305 TPS.
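
That aggregate is simply the sum of the four per-VM results; a quick sanity check with illustrative values inside the reported range:

```python
# Four SQL Server VMs; values are illustrative picks within the
# reported 3,158.4-3,158.8 TPS range, not the exact per-VM figures
per_vm_tps = [3158.4, 3158.6, 3158.5, 3158.8]
print(sum(per_vm_tps))  # ~12,634.3 TPS aggregate, matching the review
```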

For average latency, the XCubeSAN XS1200 recorded individual VM latencies between 5ms and 6ms, with an aggregate of 5.8ms.

Sysbench Performance

Each Sysbench VM is configured with three vDisks, one for boot (~92GB), one with the pre-built database (~447GB), and the third for the database under test (270GB). From a system-resource perspective, we configured each VM with 16 vCPUs, 60GB of DRAM and leveraged the LSI Logic SAS SCSI controller. Load gen systems are Dell R740xd servers.
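
The review doesn't publish the exact Sysbench command line; as a rough sketch, an OLTP run against one of these VMs with sysbench 1.0 might look like the following, where the host, credentials, and table sizing are placeholder assumptions rather than our actual settings:

```python
import subprocess

# Hypothetical sysbench 1.0 OLTP invocation; host, credentials, and
# table sizing are placeholders, not the review's actual parameters.
cmd = [
    "sysbench", "oltp_read_write",
    "--mysql-host=10.0.0.10",   # placeholder MySQL VM address
    "--mysql-user=sbtest",
    "--mysql-password=sbtest",
    "--tables=16",
    "--table-size=10000000",
    "--threads=32",
    "--time=600",               # 10-minute measured run
    "--report-interval=60",     # TPS/latency snapshot every minute
    "run",
]
subprocess.run(cmd, check=True)
```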

Dell PowerEdge R740xd Virtualized MySQL 4 node Cluster

Sysbench Testing Configuration (per VM)

In our Sysbench benchmark, we tested sets of 4, 8, and 16 VMs. Unlike SQL Server, here we only looked at raw performance. In transactional performance, the XS1200 posted solid numbers, beginning at 7,076.82 TPS with 4VMs and scaling up to 16,143.94 TPS at 16VMs.

With average latency, the XS1200 posted 18.14ms at 4VMs and went up to just 20.63ms when the VMs were doubled to 8. When doubling the VMs again, latency rose to only 32.22ms.

In our worst-case scenario latency benchmark, the XS1200 again showed very consistent results, with a 99th percentile latency of 32.40ms at 4VMs, topping out at 62.1ms when testing with 16VMs.

VDBench Workload Analysis

When it comes to benchmarking storage arrays, application testing is best, and synthetic testing comes in second place. While not a perfect representation of actual workloads, synthetic tests do help to baseline storage devices with a repeatability factor that makes it easy to do apples-to-apples comparisons between competing solutions. These workloads offer a range of testing profiles, from "four corners" tests and common database transfer-size tests to trace captures from different VDI environments. All of these tests leverage the common vdbench workload generator, with a scripting engine to automate and capture results over a large compute testing cluster. This allows us to repeat the same workloads across a wide range of storage devices, including flash arrays and individual storage devices. On the array side, we use our cluster of Dell PowerEdge R740xd servers.

Profiles:

4K random read and 4K random write
64K sequential read and 64K sequential write
SQL (100% read), SQL 90-10, and SQL 80-20
Oracle (100% read), Oracle 90-10, and Oracle 80-20
VDI Full Clone and VDI Linked Clone traces (Boot, Initial Login, and Monday Login)
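
vdbench drives each of these profiles from a plain-text parameter file. Our actual scripts aren't reproduced here, but a minimal sketch of the 4K random read profile, generated and launched from Python, could look like this (the LUN path, thread count, and run length are placeholder assumptions):

```python
import subprocess

# Minimal, hypothetical vdbench parameter file for the 4K random read
# profile; LUN path, thread count, and run length are placeholders.
params = """\
sd=sd1,lun=/dev/sdb,openflags=o_direct,threads=64
wd=wd1,sd=sd1,xfersize=4k,rdpct=100,seekpct=100
rd=rd1,wd=wd1,iorate=max,elapsed=600,interval=5
"""

with open("4k_rand_read.vdb", "w") as f:
    f.write(params)

# vdbench ships as a launcher script in its install directory
subprocess.run(["./vdbench", "-f", "4k_rand_read.vdb"], check=True)
```

The mixed profiles only change the workload line: rdpct drops to 90 or 80 for the 90-10 and 80-20 mixes, while the 64K sequential tests use xfersize=64k and seekpct=0.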

The XS1200 performed very well in our first synthetic profile, which looks at 4K random read performance. The unit maintained sub-1ms latency until roughly 198,000 IOPS and offered peak throughput of 284,000 IOPS with an average latency of 13.82ms.

Looking at 4K peak write performance, the XS1200 showed impressively low latency, starting at 0.38ms and staying under 1ms until roughly 222,000 IOPS. It peaked at over 246,000 IOPS with a latency of 7.9ms.

Switching to 64K peak read, the XS1200 began the test at 3.98ms and was able to dip as low as 2.62ms at roughly 28,000 IOPS. It peaked at 70,000 IOPS with a latency of 7.29ms and a bandwidth of 4.37GB/s.
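
As a back-of-envelope check, sequential bandwidth is roughly IOPS multiplied by transfer size:

```python
iops = 70_000            # reported 64K read peak
xfer_bytes = 64 * 1024   # 64KiB per I/O
print(f"{iops * xfer_bytes / 1e9:.2f} GB/s")
# ~4.59 GB/s, in the ballpark of the reported 4.37GB/s; peak IOPS and
# peak bandwidth are separate samples, so they needn't coincide exactly
```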

For 64K sequential peak write, the XS1200 started at 2.32ms latency, with its lowest latency reaching 1.44ms at 24,800 IOPS. The array peaked at 60,800 IOPS with 4.2ms latency and 3.80GB/s of bandwidth.

In our SQL workload, the XS1200 started at 2.21ms with its lowest latency reaching 1.66ms at just over 154,000 IOPS. It peaked at 249,000 IOPS at 3.35ms latency.

The SQL 80-20 benchmark started off at 2.12ms and recorded its best latency of 1.593ms between 100,000 and 128,000 IOPS. It peaked at 247,000 IOPS at 3.26ms latency.

In the SQL 90-10 benchmark, the XS1200 started at 2.18ms and recorded its lowest latency at 1.6ms around the 154,000 IOPS mark. It peaked at 249,000 IOPS at 3.29ms latency.

With the Oracle workload, the XS1200 started at 1.67ms, with its lowest latency of 1.31ms recorded at 126,000 IOPS. It peaked at 246,186 IOPS with a latency of 2.21ms.

With the Oracle 90-10, the XS1200 started at 1.76ms, recording its lowest latency of 1.32ms at the 153,427 IOPS mark. It peaked at 248,759 IOPS at 2.2ms latency.

With the Oracle 80-20, the XS1200 started at 2.5ms and managed to dip to 1.78ms at 121,600 IOPS. The array peaked at 242,000 IOPS with a latency of 4.16ms.

Switching over to VDI Full Clone, the boot test showed the XS1200 starting at a latency of 2.85ms and dipping to a low of 1.92ms at around 110,190 IOPS. It peaked at 218,000 IOPS with a latency of 4.26ms.

The VDI Full Clone initial login started off at 2.48ms and dipped to a low of 1.68ms at 74,370 IOPS. It peaked at 185,787 IOPS at 3.91ms latency.

The VDI Full Clone Monday Login started off at 1.85ms and went as low as 1.28ms at around 73,000 IOPS. It peaked at 182,376 IOPS at 2.55ms latency.

Switching over to VDI Linked Clone, the boot test showed the XS1200 starting at a latency of 2.33ms and reaching its lowest latency of 1.62ms at 60,200 IOPS. It peaked at 149,488 IOPS with a latency of 3.39ms.

The VDI Linked Clone initial login started off at 1.143ms and reached its lowest latency of 1.11ms at 59,689 IOPS. It peaked at 147,423 IOPS at 1.71ms latency.

The VDI Linked Clone Monday Login started off at 2.16ms and reached its lowest latency of 1.52ms at 60,000 IOPS. It peaked at 248,514 IOPS at 3.24ms latency.

Conclusion

The QSAN XCubeSAN XS1200 Series is a family of dual-controller SANs aimed at smaller businesses and remote/branch locations. The XS1200 Series comes in a wide variety of form factors depending on the total capacity needed. The units are powered by Intel Xeon D-1500 dual-core CPUs and 4GB of DDR4 memory per controller, and they support both iSCSI and Fibre Channel connectivity. For our review, we looked at the XS1226D dual-controller SAN with 26 Toshiba PX04SV 960GB SAS 3.0 SSDs.

In our transactional benchmark for SQL Server, the XCubeSAN XS1200 had an impressive aggregate score of 12,634.305 TPS and an aggregate average latency of only 5.8ms. With these numbers, it is certainly one of the fastest SQL Server storage arrays we have seen so far. Sysbench results showed solid TPS scores as well, posting 7,076.82 TPS at 4VMs and 16,143.94 TPS at 16VMs. The XS1200 continued its great performance in average latency, posting 18.14ms at 4VMs and just 20.63ms at 8VMs, while rising to only 32.22ms when doubling the VMs again. The trend continued in our worst-case scenario results, with a 99th percentile latency of 32.40ms at 4VMs, topping out at 62.1ms when testing with 16VMs.

The results of our VDBench tests told a similar story, although with average latency climbing above what the flash arrays we’ve tested typically show. In random 4K, the XS1200 recorded sub-1ms latency up until 198,000 IOPS, while boasting peak throughput of 284,000 IOPS with 13.82ms average latency. Looking at 64K peak read, the XS1200 started at 3.98ms and was able to go as low as 2.62ms at the 28,000 IOPS mark. Throughput peaked at around 70,000 IOPS with a latency of 7.29ms and a bandwidth of 4.37GB/s. We also put the new QSAN XS1200 through three SQL workloads: 100% read, 90% read and 10% write, and 80% read and 20% write. Here, the XS1200 peaked at 249,000 IOPS, 249,000 IOPS, and 247,000 IOPS, all with latency just over 3ms. The same three tests were run with an Oracle workload, resulting in performance that peaked at 246,186 IOPS, 248,759 IOPS, and 242,000 IOPS, respectively, with peak latencies between 2.2ms and 4.16ms. Lastly, we ran VDI Full Clone and Linked Clone benchmarks for Boot, Initial Login, and Monday Login. The XS1200 peaked at 218,000 IOPS, 185,787 IOPS, and 182,376 IOPS in Full Clone, and 149,488 IOPS, 147,423 IOPS, and 248,514 IOPS in Linked Clone.

Overall, the QSAN XCubeSAN XS1200 has a lot of great capabilities to help it make a name for itself in the market. At entry-level midmarket pricing, it outpaced many of the systems we’ve tested in much higher price brackets. That said, there are areas where those more expensive models are able to show their strengths. The UI is a big one: the QSAN system is functional but lacks the fit and finish many other systems provide. Feature set is another; other systems are able to maintain similar performance levels with full inline data services activated, such as inline compression and deduplication. At the end of the day, though, customers looking for a great performance/budget ratio who don't mind some compromises in other areas will be enticed by the XCubeSAN XS1200.

Bottom Line

The QSAN XCubeSAN XS1226D offers a compelling blend of features, performance and pricing, making it a very good storage solution for SMB/ROBO deployments that want it all while remaining as cost-effective as possible.

QSAN XCubeSAN XS1200 Series Product Page
