
Terramaster F8 SSD Plus Review: All Flash NAS with NVMe

Terramaster F8 SSD Plus Review. See how this 8-bay NVMe all flash NAS performs in the home lab with iSCSI datastores for virtualization!

I was graciously sent a review unit of the soon-to-be-released (September 2024) Terramaster F8 SSD Plus all-flash NAS. I was intrigued when I saw the specs for the unit, as it has great hardware: 8 NVMe bays and 10 gig connectivity. Let’s take a look at the unit and see how it performs running virtual workloads.

Terramaster F8 SSD and F8 SSD Plus specs

The model sent over to me is the “Plus” unit, which sports the following hardware:

  • Intel Core i3-N305 (8 cores)
  • 16-32 GB of DDR5 memory
  • 10GbE Base-T adapter
  • 8 bays of M.2 NVMe storage (max 64 TB)

The difference with the F8 SSD:

  • Intel N95 (4 cores)
  • 8-16 GB DDR5 memory
  • 2.5GbE adapter
  • 8 bays of M.2 NVMe storage

Pictures of the F8 SSD Plus

Below is a look at the unit in the upright position, as it is designed to stand. You can also lay it on its side, and it has rubber feet so it sits properly in that orientation as well. The round power button is on the top or the side of the unit, depending on how you position it.

Front view of the f8 ssd plus

A close up look at the Terramaster F8 SSD Plus model badge.

F8 ssd plus badge

There is one thumb screw to remove, and then the internals slide out of the housing, which is pretty nice and makes for easy access.

F8 ssd plus internal m.2 drive bays

Another look at the Terramaster F8 SSD Plus internals from a different angle. As you can see, there are (4) M.2 slots on one side.

A different view of the drive bays in the terramaster f8 ssd plus

Then flipped around to the other side, you see the other four slots.

The other side where there are more m.2 slots in the terramaster f8 ssd plus

Below are the heatsinks the unit came with, or at least the ones included with the sample I was sent: aluminum heatsinks along with a bag of rubber bands for attaching them to the NVMe drives.

The set of heat sinks that came with the terramaster f8 ssd plus sample

Below is a look at one of the 980 Pros with the heatsink installed and one without.

Attaching the heatsink to the 980 pro nvme drives

Below is the unit with 4 drives installed in slots 1-4.

After loading up 4 of the nvme drives in the terramaster f8 ssd plus

Getting up and running

I had not used or had any experience with Terramaster NAS devices before this sample unit. They provide a desktop app you download that discovers the device on the network and then lets you initialize it.

The desktop app needs to be on the same network to discover the device. Below, I captured the screenshot before the IP was refreshed, so it is still showing an autoprivate (APIPA) address. When the device shows up, you simply right-click it and log in.

The uninitialized Terramaster F8 SSD Plus device found

By default, you won’t be prompted for credentials, since creating them is part of the setup process. It will just begin the setup wizard.

Automatic initialization of the f8 ssd plus

It will download the bootloader.

Online or manual configuration

Bootloader beginning to load.

Loading the bootloader on the f8 ssd plus

You will select the disks that the system files will be installed on. The default the screen showed me was 4 disks selected, which I believe is the max according to what the screen shows. I left the selection as shown below.

Select the system disks on the f8 ssd plus

You will see the note about the fact the disks will be erased.

Warning on disk initialization

TOS is the operating system, which will now begin installing.

Tos is getting installed on the nas

After a time, the EULA is displayed. Scroll down to the bottom, check that you agree, and confirm.

Accept the eula

Next, you will configure your superuser. Root is disabled by default, and you can’t use admin as a username. One interesting thing to note: I got errors and couldn’t proceed if I created any user with admin in the name. So just keep that in mind on this screen.

Setting up the superuser settings for the f8 ssd plus

Create the storage pool on the F8 SSD Plus

Next, you will be prompted to create the storage pool.

Note to create a storage pool on the f8 ssd plus

Click the Create button.

Beginning the process to create a new storage pool

Select Create a volume on a new storage pool.

Create a new volume on a new storage pool

Below, you select which RAID mode you want to use. The default is TRAID, which, as you see here, consumes the space of 1 disk for protection. This is akin to RAID 5. Note that in the drop-down you can still select traditional RAID levels like RAID 1, RAID 5, RAID 6, etc.

TRAID and TRAID Plus have the benefit of being hybrid RAID modes that can do things like use dissimilar disk sizes.
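For equal-size drives, the capacity tradeoff works out like classic parity RAID. Below is a rough sketch of the math, assuming TRAID reserves the space of one drive (RAID 5-like) and TRAID Plus the space of two (RAID 6-like); the 6 x 2 TB loadout mirrors my test drives, and TRAID's actual allocation may differ with mixed drive sizes:

```python
# Rough usable-capacity estimate for equal-size drives. TRAID reserves
# roughly 1 drive of space for protection (RAID 5-like); TRAID Plus
# reserves roughly 2 (RAID 6-like). This is only an approximation.

def usable_tb(drives: int, size_tb: float, protection_drives: int) -> float:
    """Approximate usable capacity after protection overhead."""
    return (drives - protection_drives) * size_tb

print(usable_tb(6, 2, 1))  # TRAID with 6 x 2 TB  -> 10.0 TB usable
print(usable_tb(6, 2, 2))  # TRAID Plus with 6 x 2 TB -> 8.0 TB usable
```

With dissimilar drive sizes, the hybrid modes can reclaim space a fixed RAID level would waste, which is their main draw.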

Viewing the space of the volume and selecting the raid level

Below is a look at selecting TRAID Plus. You can see it is akin to RAID 6, which consumes the space of (2) disks.

Traid plus takes the equivalent of two drives

Confirm your choice and select Confirm.

Confirm the drive wipe operation

Interestingly, it gives us a warning about heat considerations.

Warning note about nvme heat

Name the new volume.

Create volume on the f8 ssd plus

WORM (write once, read many) configuration.

Worm options

You have the choice between BTRFS and EXT4 for the file system on the new volume.

Select the file system for the new volume

Confirm the creation of the new volume.

Confirm the creation of the new volume

The Security Advisor launches, which is a nice touch to keep recommended security best practices front and center.

Security advisor will run for the nas

We get the note that pool synchronization will take some time.

The storage manager synchronization of raid begins

The volume is created and synchronized successfully.

The terramaster f8 ssd plus volume is synchronized successfully

Setup an iSCSI LUN for virtualized environments

Terramaster makes setting up iSCSI easy with an app you can install and use for configuration. Just go to the App center and search for iSCSI.

Installing the iscsi app on the terramaster f8 ssd plus
Beginning to create the iscsi lun
Creating an iscsi target in terramaster f8 ssd plus

Make sure to check the box below to Allow multiple sessions. This allows multiple ESXi or Proxmox hosts to access the storage simultaneously.

Note about using vmfs and multiple sessions

Finishing out the creation of the iSCSI target.

Confirm creating the iscsi target on the f8 ssd plus

Adding the iSCSI target in VMware vSphere

Below, I am adding the new target in VMware vSphere.

Adding the new iscsi target in vmware vsphere
Confirming adding the iscsi target in esxi

HCIbench test results

Even though this isn’t an HCI configuration, we can use HCIBench with Vdbench to test the datastore performance. I am driving this datastore with a Minisforum MS-01.

Below are the results of the test case where I ran some fairly common benchmark settings that fit virtualized workloads. My unit has (6) 2TB Samsung 980 Pro NVMe drives loaded.

  • 30% working set
  • 4k block size
  • 70% read, 30% write
  • 100% random

Test Case Name:vdb-8vmdk-30ws-4k-70rdpct-100randompct-2threads-1724424512
Report Date: 2024-08-23 14:54:18 +0000
Generated by: HCIBench_2.8.3
Datastore: F8DS01
=============================
Number of VMs: 4
I/O per Second: 71863.9 IO/S
Throughput: 280.72 MB/s
Latency: 0.89 ms
Read Latency: 0.9 ms
Write Latency: 0.86 ms
95th Percentile Latency: 5.62 ms

=============================
Resource Usage
Cluster cpu.usage cpu.utilization mem.usage
minicluster 13.18% 12.01% 72.17

Results of 30 percent 70 read and 100 percent random
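The reported numbers also tie out internally: throughput should equal IOPS times the block size, and the overall latency should be roughly the read/write mix of the per-op latencies. A quick back-of-the-envelope check (assuming the reported MB/s is actually binary MiB/s, which the math below supports):

```python
# Sanity-check the HCIBench results: throughput = IOPS x block size,
# and overall latency ~= read/write-weighted average of per-op latency.
iops = 71863.9
block_bytes = 4 * 1024  # 4k block size

throughput_mib = iops * block_bytes / 2**20
print(round(throughput_mib, 2))  # -> 280.72, matching the reported 280.72 MB/s

# 70% reads at 0.9 ms, 30% writes at 0.86 ms
blended_latency_ms = 0.7 * 0.9 + 0.3 * 0.86
print(round(blended_latency_ms, 2))  # -> 0.89, matching the reported 0.89 ms
```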

Thoughts and Wrapping up the F8 SSD Plus Review

All in all, this is a great little unit that I think has a lot of potential in the home lab. You have 8 M.2 slots for loading up quite a bit of storage, which you can max out at 64 TB of NVMe.

Pros:

  • Easy setup using the app, and the TOS 6 software worked well for what I needed it to do (set up storage, networking, and iSCSI). I didn’t test the consumer-style apps
  • Very good performance, over 70k IOPS in my 4k random testing, which is great for a home lab
  • The shared iSCSI storage allows you to set up a cluster of hypervisor hosts with failover, live migration, HA, etc.
  • Very quiet operation
  • It came with NVMe heatsinks (not sure if the production units will?)
  • Low-profile looks basically like a mini PC-size device

There are a few nitpicks I would mention as cons:

  • There is only 1 network adapter at 10 gig
  • The network adapter is not VLAN-aware
  • I believe the NVMe slots are PCIe Gen 3, which doesn’t allow Gen 4 drives to achieve their full performance
  • It would be great to have 2 network adapters on the unit to split out storage traffic and management traffic, since the adapter is not VLAN-aware
  • It would also be great if the NVMe drives were hot-swappable, so a failed drive could be replaced online. But this is nitpicking, as that delves into more enterprise features, which is obviously a little outside what this unit is designed for.


Brandon Lee

Brandon Lee is the Senior Writer, Engineer, and owner at Virtualizationhowto.com, and a 7-time VMware vExpert with over two decades of experience in Information Technology. Having worked for numerous Fortune 500 companies as well as in various industries, he has extensive experience in various IT segments and is a strong advocate for open source technologies. Brandon holds many industry certifications and loves the outdoors and spending time with family. Also, he goes through the effort of testing and troubleshooting issues, so you don't have to.
