This build consists of an Intel Xeon E5-2620 v4, Asus X99-E WS/USB3.1 (SSI-CEB) mainboard, Crucial 32GB RDIMM ECC DDR4 RAM, Corsair AX860i PSU, and a Coolermaster Evo 212 cooler.

I plan to eventually migrate these components into a 4U rackmount chassis with hot-swap bays and a 6Gb/s SAS backplane with extended SGPIO support.

Here are some questions I received about this build on the forums; they seemed like a good starting point for this write-up.

What's your ultimate goal for the system?

I wanted to stick with X99 as it is a mature platform. Early on I also decided that ECC was a requirement, which led me to a Xeon.

From there I came across the Asus X99 WS boards - they are on the more expensive side, sure, but they are a cross between the 'gamer/overclocker' aesthetic and server-grade components. I like the LED error-code readout, the hardware start/reset switches, having a Thunderbolt header, etc. These are additions a cheaper Supermicro board would generally lack.

The rear I/O on the Asus X99 WS board is great too, including a BIOS FlashBack/reset switch, and the Asus UEFI is fantastic - so that settled my choice there. I chose one of the cheaper E5 Xeons, with enough cores to run at least 1-2 VMs if needed.

I went with Crucial 32GB RDIMM ECC RAM; Crucial make quality modules, so no issues there. For the power supply I went with a Corsair AX860i, the second unit I own; I also run an HX1000i and an HX750i.

In my other Synology units I'm running WD Red Pros that have served me well over the last 2+ years; for this build I went with plain WD Reds as they were cheaper.

One point to note: I chose the Asus X99 WS/USB3.1 board over the Asus X99 WS/10G board. The latter has dual 10G NICs (Intel X550), but you lose the Thunderbolt header. I wanted to keep the option of having the FreeNAS box in another room and running a Corning optical Thunderbolt cable the way Linus did.

I can buy Intel X520 cards for 10G Ethernet at around $200 per card - an additional cost, sure, but at least I keep the option of Thunderbolt in the future.

Why did you choose FreeNAS?

I was keen on ZFS, and that was the priority. As far as operating systems go, FreeBSD has a track record of being solid and is considered to have a more robust network stack; it's Unix. Had there been a mature Linux alternative, I may have gone with that. For my professional work I mainly rely on Debian in production environments.

I didn't spend too much time on unRAID; it is certainly there as an option. I do plan to try my hand at setting up an ESXi lab at home and virtualising other aspects of my lab - mostly out of curiosity, as it's typically cheaper to just spin up instances in AWS for any real need rather than paying ESXi licensing costs.

In any case, running VMs wasn't an initial priority, but I did pick the CPU/motherboard to ensure they would support it.

Thunderbolt - well, VNC is nice and running headless is great (99% of my systems are headless over SSH), but if you want a physical console many metres away, Thunderbolt is a real option. Why? Because PCs/servers generate heat and noise. I can move them to my server-rack space downstairs and still have a near latency-free console over Thunderbolt if I ever want it.

Do I need it? No, but I have the option. True, it also lets me manage the UEFI; the alternatives would have been (i) a board with an IPMI chip or (ii) an IP-based KVM solution.

Belly of the Beast: Corsair Air 740 Case

The tricky part was figuring out a way to get all the drives to sit inside my Corsair Air 740 case.

My solution was to design and 3D print brackets that are cost-effective and modular, allowing 3.5" HDDs to be stacked without any limit on the total stack size.

Given the design, the orientation is also configurable: the drives can be mounted vertically using the included base plate, or in a typical caddy-style setup.

All 3D printing files are hosted on GitHub for you to use.

TIL Setting up my FreeNAS 11 Xeon Server

You'll find various tips and tricks I picked up along the way detailed on my blog, including how I set up UPS monitoring and verified that ECC is working.
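As a quick reference, one common way to sanity-check ECC on a FreeBSD/FreeNAS box is to look at the SMBIOS memory tables via `dmidecode` (available from ports/packages, run as root). This is a minimal sketch; the `check_ecc` helper is a hypothetical name, and the sample line mimics typical `dmidecode -t memory` output on an ECC system:

```shell
#!/bin/sh
# Hypothetical helper: reads `dmidecode -t memory` output on stdin and
# reports whether the memory controller advertises an ECC mode.
check_ecc() {
    # "Error Correction Type: None" means no ECC; anything else
    # (e.g. "Multi-bit ECC") means ECC is active.
    grep -i 'Error Correction Type' | grep -qv 'None' \
        && echo "ECC active" || echo "ECC not detected"
}

# On a live system you would run:  dmidecode -t memory | check_ecc
# Here we feed a sample line for illustration:
printf 'Error Correction Type: Multi-bit ECC\n' | check_ecc
```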

Build Logs:



  • 29 months ago
  • 1 point

I've updated the description with a link to my blog, detailing various tips and tricks I picked up along the way, including how I set up UPS monitoring and verified that ECC is working. Scroll up for the link.

  • 21 months ago
  • 1 point

I like the homemade HDD holders. My server case doesn't have the cage capacity I want and this looks like a good fix.

[comment deleted by staff]
  • 29 months ago
  • 3 points

It has a TDP of 85 W; the 212 will be more than sufficient.

  • 29 months ago
  • 2 points

You really didn't manage to have anything nice to say about this build; all you've managed to do is complain and critique. Also, a specific chip being server grade has absolutely nothing to do with its ability to be cooled effectively by any cooler - it's about the TDP of the chip itself and the total TDP capability of the cooler. The E5-2620 has a TDP of 85 watts, which is well within the cooling capability of the 212 EVO. Maybe try to be more constructive and do a little research before tossing out baseless criticism?

[comment deleted by staff]
  • 29 months ago
  • 1 point

The TDP rating for the 212 EVO is explicitly stated within its specs, because its TDP is not a static measurement - it depends on the type, RPM rating, and number of fans you choose to use. Also, since it states it can be used with LGA 2011-v3 chips, and none of those are below 110 W, one can safely infer that OP can use the 212 for his Xeon.

You make that second comment still stating there is improvement needed, yet your comments seem to follow a pattern of being critical rather than positive; this site is based on positivity, not on trying to find faults in someone's build.

[comment deleted by staff]
  • 29 months ago
  • 1 point

You make these comments, yet you've been a member of this site for 2+ years and have not posted a single build.

TDP can change - would the TDP of a cooler be the same with no fans installed vs a 500 RPM fan vs a 1500 RPM fan? My apologies, I meant to say it is NOT explicitly stated, because it is not a static measurement.

  • 29 months ago
  • 0 points

This RhettR055 is your typical butt-hurt millennial that clearly doesn't know what he's talking about even after being on the site for two years. He tried bashing my criticism on my other posts as well. I agree with the cooling problem, and I agree with the lack of expansion room in this case.

  • 29 months ago
  • 1 point

Hi mate, totally agree.

This is absolutely temporary; I plan to shift all the hardware into an X-Case RM242 Pro sometime in 2018.

I will also be replacing the cooler with a Noctua. That said, here are my temps right now -

[mdesilva@freenas ~]$ sysctl -a | grep temperature

dev.cpu.15.temperature: 32.0C
dev.cpu.14.temperature: 32.0C
dev.cpu.13.temperature: 32.0C
dev.cpu.12.temperature: 32.0C
dev.cpu.11.temperature: 33.0C
dev.cpu.10.temperature: 33.0C
dev.cpu.9.temperature: 33.0C
dev.cpu.8.temperature: 32.0C
dev.cpu.7.temperature: 33.0C
dev.cpu.6.temperature: 33.0C
dev.cpu.5.temperature: 33.0C
dev.cpu.4.temperature: 34.0C
dev.cpu.3.temperature: 34.0C
dev.cpu.2.temperature: 34.0C
dev.cpu.1.temperature: 32.0C
dev.cpu.0.temperature: 32.0C
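If you just want the hottest core rather than the full listing, a small awk filter over the same `sysctl` output does the job. A sketch (the sample lines mirror two of the readings above; on a live box, pipe from `sysctl -a | grep temperature` instead):

```shell
# Report the hottest core from FreeBSD's per-CPU temperature sysctls.
# Splits each "dev.cpu.N.temperature: XX.XC" line on ": ", strips the
# trailing C, and tracks the maximum reading.
printf 'dev.cpu.0.temperature: 32.0C\ndev.cpu.4.temperature: 34.0C\n' |
awk -F': ' '{ gsub(/C/,"",$2); if ($2+0 > max) { max=$2; core=$1 } }
            END { printf "hottest: %s at %sC\n", core, max }'
# prints: hottest: dev.cpu.4.temperature at 34.0C
```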

  • 29 months ago
  • 1 point

Yup, this is on a dedicated APC 1400VA UPS as well.

[mdesilva@freenas ~]$ upsc ups@localhost

battery.charge: 100
battery.charge.low: 10
battery.charge.warning: 50
battery.date: 2001/09/25
battery.mfr.date: 2016/06/23
battery.runtime: 1012
battery.runtime.low: 120
battery.type: PbAc
battery.voltage: 27.3
battery.voltage.nominal: 24.0
device.mfr: American Power Conversion
device.model: Back-UPS XS 1400U
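For anyone wanting to reproduce this, the `upsc` output above comes from NUT (Network UPS Tools). Below is a minimal, hypothetical `ups.conf` entry for an APC USB unit like mine; the `[ups]` name matches the `ups@localhost` query above, and the path assumes a FreeBSD-style NUT install (FreeNAS generates this for you when you enable the UPS service):

```ini
# /usr/local/etc/nut/ups.conf - hypothetical entry; usbhid-ups covers
# most APC USB units, and "port = auto" lets the driver find the device.
[ups]
    driver = usbhid-ups
    port = auto
    desc = "APC Back-UPS XS 1400U"
```

With that in place, `upsc ups@localhost battery.charge` returns just the charge percentage, which is handy for scripting shutdown thresholds.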
