Description

2nd version of this build: https://pcpartpicker.com/b/mfFtt6

I'm a developer, so I enjoy using all this CPU power. THIS IS NOT A GAMING BUILD =)
I do not overclock; everything runs at stock.

The memory sticks could run at 3200 via DOCP with the 1950X, but with the 2990WX it's unstable, so I loaded the DOCP profile and downclocked the memory to 2933; it's running rock stable now.
I'm thinking of getting the full 128GB of memory, because 1GB/thread is a bit low for my taste.

update
Got 128GB of Samsung B-die ECC sticks, currently running at stock 2400.
PCPartPicker apparently lists this PN as non-ECC 2666, but in reality it's ECC 2400.
Simply bumping to 2933@1.35V does not produce a stable result; I'll be trying to tune this amount of memory to run at 2933.
/update

update2
Replaced all fans with Noctua industrialPPC.
Got the RAM running stable at 2866, timings 16-16-16-16-36-52.
/update2

update3
The PSU started behaving a bit strangely.
Changed to an EVGA 1600 T2.
/update3

The difference between the 1950X and the 2990WX is not that dramatic for my use case, but still very noticeable.
As an example, a Chromium build:
4-core 5557U: ~6 hours
16-core 1950X: 25 minutes
32-core 2990WX: 15 minutes

This machine runs a compile server (distcc), several VMs, dozens of Docker images, and a lot of applications, and does occasional video encoding and scientific computations. Very snappy and reliable.

The RX 560 drives a single ultrawide CF791, which is essentially a 2K display. The Vega 64 is occasionally used for heavy stuff and some games: I play S.T.A.L.K.E.R. mods maxed out and some WoW; next on my list is the METRO series.

FreeSync does not work in Linux yet (seems there is progress there for the 4.19 kernel). It works fine in Windows, but sometimes flickers in WoW: the infamous FreeSync brightness flickering.

Noctua does a great job cooling this thing down: the CPU runs 33-35C at idle (I'm in SoCal, kinda hot here), 55C at my typical load, and 65C in torture tests.
The 2nd CPU fan does make a difference, but something like 2-4C; not a lot, but temps are more stable with it.
I wanted to try the Enermax cooler, but it looks like they go bad after about 4 months of usage, and I totally do not want a custom loop. Air works, is cheap, and I do not overclock.
The 2 front fans are set to turbo; the CPU fan is set to normal with a 2.6 sec delay to stop it from oscillating due to boost; the bottom front fan is hooked up to the Vega and mostly sits stopped until the Vega is used.
The top fans are set to 20% and just passively expel heat.
The back fan is set to normal.
This setup maintains positive pressure inside the case, so dust can barely get inside.
The M.2 fan is useful: about a 10C difference on the SSD with it.

The storage setup is the following (rough creation commands are sketched after the list):
2x spinning rust in a ZFS mirror pool, for long-term static storage and some virtual machines.
2x NVMe in an mdadm stripe, for speed, caches, hot data, and some virtual machines. It's very fast.
2x SSD in a ZFS mirror for Linux.
1x Evo SSD for Windows.
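Something like this, assuming placeholder device and pool names (a sketch, not my exact commands):

# 2x spinning rust in a ZFS mirror for long-term storage
zpool create tank mirror /dev/disk/by-id/ata-HDD1 /dev/disk/by-id/ata-HDD2

# 2x NVMe striped with mdadm for hot data and caches
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
mkfs.ext4 /dev/md0

# 2x SATA SSD in a ZFS mirror for the Linux system
zpool create rpool mirror /dev/disk/by-id/ata-SSD1 /dev/disk/by-id/ata-SSD2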

I'm still playing with the pass-through setup; I need a KVM switch to properly handle the single-screen scenario, otherwise it works fine.
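For anyone curious, the usual route is binding the Vega to vfio-pci at boot; the vendor:device IDs are straight from the listing below (a sketch, not my exact config):

# /etc/modprobe.d/vfio.conf - claim the Vega 64 and its HDMI audio for passthrough
options vfio-pci ids=1002:687f,1002:aaf8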

Here is the IOMMU group listing:

IOMMU Group 0 00:01.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 1 00:01.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]
IOMMU Group 2 00:01.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]
IOMMU Group 3 00:01.3 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]
IOMMU Group 4 00:02.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 5 00:03.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 6 00:04.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 7 00:07.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 8 00:07.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:1454]
IOMMU Group 9 00:08.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 10 00:08.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:1454]
IOMMU Group 11 00:14.0 SMBus [0c05]: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller [1022:790b] (rev 59)
IOMMU Group 11 00:14.3 ISA bridge [0601]: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge [1022:790e] (rev 51)
IOMMU Group 12 00:18.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0 [1022:1460]
IOMMU Group 12 00:18.1 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1 [1022:1461]
IOMMU Group 12 00:18.2 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2 [1022:1462]
IOMMU Group 12 00:18.3 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3 [1022:1463]
IOMMU Group 12 00:18.4 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4 [1022:1464]
IOMMU Group 12 00:18.5 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5 [1022:1465]
IOMMU Group 12 00:18.6 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 6 [1022:1466]
IOMMU Group 12 00:18.7 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7 [1022:1467]
IOMMU Group 13 00:19.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0 [1022:1460]
IOMMU Group 13 00:19.1 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1 [1022:1461]
IOMMU Group 13 00:19.2 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2 [1022:1462]
IOMMU Group 13 00:19.3 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3 [1022:1463]
IOMMU Group 13 00:19.4 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4 [1022:1464]
IOMMU Group 13 00:19.5 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5 [1022:1465]
IOMMU Group 13 00:19.6 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 6 [1022:1466]
IOMMU Group 13 00:19.7 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7 [1022:1467]
IOMMU Group 14 00:1a.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0 [1022:1460]
IOMMU Group 14 00:1a.1 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1 [1022:1461]
IOMMU Group 14 00:1a.2 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2 [1022:1462]
IOMMU Group 14 00:1a.3 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3 [1022:1463]
IOMMU Group 14 00:1a.4 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4 [1022:1464]
IOMMU Group 14 00:1a.5 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5 [1022:1465]
IOMMU Group 14 00:1a.6 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 6 [1022:1466]
IOMMU Group 14 00:1a.7 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7 [1022:1467]
IOMMU Group 15 00:1b.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0 [1022:1460]
IOMMU Group 15 00:1b.1 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1 [1022:1461]
IOMMU Group 15 00:1b.2 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2 [1022:1462]
IOMMU Group 15 00:1b.3 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3 [1022:1463]
IOMMU Group 15 00:1b.4 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4 [1022:1464]
IOMMU Group 15 00:1b.5 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5 [1022:1465]
IOMMU Group 15 00:1b.6 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 6 [1022:1466]
IOMMU Group 15 00:1b.7 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7 [1022:1467]
IOMMU Group 16 01:00.0 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] X399 Series Chipset USB 3.1 xHCI Controller [1022:43ba] (rev 02)
IOMMU Group 16 01:00.1 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] X399 Series Chipset SATA Controller [1022:43b6] (rev 02)
IOMMU Group 16 01:00.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] X399 Series Chipset PCIe Bridge [1022:43b1] (rev 02)
IOMMU Group 16 02:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
IOMMU Group 16 02:01.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
IOMMU Group 16 02:02.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
IOMMU Group 16 02:03.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
IOMMU Group 16 02:04.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
IOMMU Group 16 02:09.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
IOMMU Group 16 05:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)
IOMMU Group 16 08:00.0 USB controller [0c03]: ASMedia Technology Inc. Device [1b21:2142]
IOMMU Group 17 09:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller SM961/PM961 [144d:a804]
IOMMU Group 18 0a:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Baffin [Radeon RX 550 640SP / RX 560/560X] [1002:67ff] (rev cf)
IOMMU Group 18 0a:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Device [1002:aae0]
IOMMU Group 19 0b:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device [1022:145a]
IOMMU Group 20 0b:00.2 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor [1022:1456]
IOMMU Group 21 0b:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] USB 3.0 Host controller [1022:145f]
IOMMU Group 22 0c:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device [1022:1455]
IOMMU Group 23 0c:00.2 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 51)
IOMMU Group 24 0c:00.3 Audio device [0403]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) HD Audio Controller [1022:1457]
IOMMU Group 25 20:01.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 26 20:02.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 27 20:03.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 28 20:04.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 29 20:07.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 30 20:07.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:1454]
IOMMU Group 31 20:08.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 32 20:08.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:1454]
IOMMU Group 33 21:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device [1022:145a]
IOMMU Group 34 21:00.2 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor [1022:1456]
IOMMU Group 35 22:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device [1022:1455]
IOMMU Group 36 40:01.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 37 40:01.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]
IOMMU Group 38 40:02.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 39 40:03.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 40 40:03.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]
IOMMU Group 41 40:04.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 42 40:07.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 43 40:07.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:1454]
IOMMU Group 44 40:08.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 45 40:08.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:1454]
IOMMU Group 46 41:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller SM961/PM961 [144d:a804]
IOMMU Group 47 42:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:1470] (rev c1)
IOMMU Group 48 43:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:1471]
IOMMU Group 49 44:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Vega 10 XT [Radeon RX Vega 64] [1002:687f] (rev c1)
IOMMU Group 50 44:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Device [1002:aaf8]
IOMMU Group 51 45:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device [1022:145a]
IOMMU Group 52 45:00.2 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor [1022:1456]
IOMMU Group 53 45:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] USB 3.0 Host controller [1022:145f]
IOMMU Group 54 46:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device [1022:1455]
IOMMU Group 55 46:00.2 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 51)
IOMMU Group 56 60:01.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 57 60:02.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 58 60:03.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 59 60:04.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 60 60:07.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 61 60:07.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:1454]
IOMMU Group 62 60:08.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 63 60:08.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:1454]
IOMMU Group 64 61:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device [1022:145a]
IOMMU Group 65 61:00.2 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor [1022:1456]
IOMMU Group 66 62:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device [1022:1455]
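A listing like this comes from the usual shell loop over sysfs, nothing board-specific:

for g in /sys/kernel/iommu_groups/*; do
    for d in "$g"/devices/*; do
        echo "IOMMU Group ${g##*/} $(lspci -nns "${d##*/}")"
    done
done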

Part Reviews

CPU

Simply the most powerful consumer CPU on the market. Jumping from the 1950X to the 2990WX was not as dramatic as the initial impression the 1950X made, but the improvement is noticeable.

Comments

  • 12 months ago
  • 5 points

Nice to see more people using Linux. How nice is Gentoo? I've never used it myself.

  • 12 months ago
  • 6 points

Gentoo is nice, but it can be brutal and gruesome if you are a Linux novice. Think of it as a more stable Arch with a friendlier community, much more customizable and configurable. But you have to know what you want and how to achieve it.
Cons are:
1) slightly fewer packages in the repos compared to Arch+AUR (though the main repo is still larger than Arch's main repo) and no single repo of user-submitted packages like the AUR, but there are hundreds of "overlays" with extra packages (like multiple smaller AURs);
2) much longer package install times;
3) requires significant effort to learn and some effort to maintain (but maintenance is not that bad once you are comfortable with it).

Steam and Wine are there and work just fine.
You can actually have all imaginable Wine versions on a single system with different patchsets (d3d9, staging, dxvk, vkd3d, vulkan, etc.), can even combine patchsets into a single Wine binary, and can switch between Wine versions and patchsets on the fly.
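A rough illustration of the mechanics (the package names are the real Gentoo split packages; the version in the last line is just an example):

# Gentoo slots Wine per variant and version, so these coexist
emerge app-emulation/wine-vanilla app-emulation/wine-staging

# list what's installed and flip the active one on the fly
eselect wine list
eselect wine set wine-staging-3.21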

  • 10 months ago
  • 5 points

The little SSD fan is cute. :)

  • 6 months ago
  • 2 points

So much storage

  • 6 months ago
  • 2 points

Wow, that’s a lot of storage

  • 12 months ago
  • 1 point

Sweet specs

  • 12 months ago
  • 1 point

Awesome build. Always love seeing workstations on here.

  • 12 months ago
  • 1 point

Nice. I toyed with doing a Threadripper build, but I spend more time debugging and coding than I do benching and couldn't quite justify the extra cores. Good to know that the Noctua can keep up with the 2990WX!

  • 12 months ago
  • 1 point

Nice build! Noctua fans are a must for air cooling.

  • 12 months ago
  • 1 point

Nice looking build, glad to see another 2990WX build here on PCPP :) Also, love the setup and how you're using it. Great build!

  • 12 months ago
  • 1 point

congrats on the 2nd of what I hope to be many 2990wx builds

  • 12 months ago
  • 1 point

You have learned well from Wendell!

Both an lstopo and an IOMMU listing in the same build description!

  • 12 months ago
  • 1 point

I thought the last one would’ve been enough. I was wrong.

Awesome!!!

  • 11 months ago
  • 1 point

Aha, a fellow KDE user!

What is the reason for the compile server given that it's a single local workstation? For the VMs to access?

I'm working on a somewhat similar build (but smaller-scale, Threadripper 1900X) and was wondering why you went with the drive layout you did for the Linux side, with 3 separate volumes (2 mirrored ZFS pools + non-ZFS stripe). I was planning on pretty much putting everything in one big ZFS filesystem with RAID-Z2 and SSD for SLOG, to simplify and not have to deal with multiple logical volumes. Did you consider a similar setup / are there any issues with what I'm planning?

  • 11 months ago
  • 2 points

Hi!

Yeah, Plasma 5 is nice, but I could not stand Plasma 4; it was awful.
On laptops, though, I use dwm or i3 for speed and an efficient workflow.

distcc is for several x86 laptops/boards running Gentoo; it saves so much time on upgrades. Since the CPUs in all the devices are different, I don't use binpackages, just let the big boy compile stuff for the rest of the crowd.
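The Gentoo side of it is tiny; on each thin client it's roughly this (values are illustrative, not my exact config):

# /etc/portage/make.conf on a laptop
FEATURES="distcc"
MAKEOPTS="-j48 -l8"    # jobs sized for the remote helper, load limit protects the local box

# /etc/distcc/hosts - the Threadripper does the heavy lifting
192.168.1.10/48 localhost/4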

Well, the system zpool is the system zpool: it hosts the main Gentoo system and literally hundreds of auto-rotating snapshots (15min, hourly, daily, weekly, monthly, yearly).
A special script creates and deletes backups using the tower of hanoi algorithm.
My /home also lives there.
So it's just a separate ZFS pool and I like to keep it that way. Just 2 mirrored disks for the system. Data and systems should always be separated.
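The snapshot rotation itself is simple cron stuff, roughly like this per interval (a sketch; the pool name is a placeholder, and the real script also drives the hanoi backups):

#!/bin/sh
# snapshot the system pool and keep only the last 24 hourlies
POOL=rpool
zfs snapshot -r "$POOL@hourly-$(date +%Y%m%d-%H%M)"
zfs list -H -t snapshot -o name -s creation -r "$POOL" \
  | grep "@hourly-" | head -n -24 | xargs -r -n1 zfs destroy -r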

I had no benefit from using slog or l2arc (I tried attaching NVMe partitions to the big ZFS pool); it all boils down to how you use the storage. My big pool is just for storing stuff; it's inactive 80% of the time.

If I had more spinning drives they'd be in raidzN for sure. But I only have 2 of them, hence the mirror.

Also, it does not make sense to mix different types of drives in a pool (aside from slog/l2arc), because ZFS does not support tiering and you'll be limited by the slowest drive in the pool.

I considered adding a bcache layer under ZFS, but in the end I went with the less complex setup, because I value my data and don't want to lose it =)
ZFS just werks. I've used it since its inception on Solaris and later FreeBSD and never lost a single bit of data; I'm so happy it works on Linux now.
I did experiment with btrfs, and all experiments ended in fatal data loss...

Before adding a slog, do some research.
A slog can kill an SSD really quickly, because it just keeps writing to the same blocks over and over.
And you really want mirrored slogs. Since the slog device is a rather small thing (I bet you don't need anything over 8GB), it gets really stupid: basically you need 2x fast, small SSDs with high endurance.
The slog should always be a separate drive (2x preferably); don't even think of having the slog as a partition on an SSD that is used for something else.

As for l2arc, it's a hot cache: it needs to be warmed up before it starts working, and if you reboot, all the cache gets invalidated. It's very useful for filers constantly serving a lot of data, but not for my use case.
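For completeness, attaching either one is a single command on a live pool (device names are placeholders):

# mirrored slog from two small dedicated SSDs
zpool add tank log mirror /dev/disk/by-id/ssd-A /dev/disk/by-id/ssd-B

# l2arc cache device; no redundancy needed, it's disposable read cache
zpool add tank cache /dev/disk/by-id/nvme-C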

The double-NVMe volume is kinda temporary fast storage: sometimes it's a mirrored or striped zpool with zvols, sometimes it's an mdraid stripe, sometimes just several volumes, sometimes it's ccache. I destroy and re-create it every once in a while, depending on my current needs.
Sometimes Wine behaves funky on ZFS, so I need to have an ext4 fs nearby.

But lately I just put everything in RAM/tmpfs; I'm planning on replacing the memory with 128GB of ECC sticks.
Apparently certain Samsung 2400 ECC sticks can be overclocked to 3200, since it's the same B-die as in all the overclockable memory around, and Ryzen looooves B-die.
https://www.youtube.com/watch?v=1NxSZil8KS8

I used to run the system from the NVMe pool, but the difference is not very noticeable compared to the SATA drives I have, and I do not care about boot times. POST is long anyway; otherwise the system boots in about 6 seconds on SATA, 4 seconds on NVMe.

  • 11 months ago
  • 1 point

Thanks for the detailed response!

I have 2 Optane 900P drives which I'm planning to use for mirrored SLOG; they're fast and provide high endurance (but expensive!). But what's the issue with partitioning the drives and using the remaining space for other things like fast scratch space? I expect the drives are fast enough to service I/O from multiple sources, and ZFS just sees a block device so it shouldn't care if you're using a partition, right?

I also considered running a cache layer underneath ZFS, though I was looking at lvmcache for the ability to add/remove caching without having to reformat the disk. My hope is that by running the caches in write-through mode I won't lose reliability (SLOG will accelerate writes before they hit the underlying devices, so the lower write performance of write-through vs. write-back is not an issue). I'll need to verify that lvmcache behaves sensibly if a cache drive dies before enabling it in production, though.
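Roughly what I have in mind, with placeholder names (I still need to verify the exact flags):

# origin LV on the slow disks
lvcreate -n data -L 4T vg /dev/sdb
# cache it on the Optane in writethrough mode
lvcreate --type cache --cachemode writethrough -L 100G -n datacache vg/data /dev/nvme0n1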

For ECC RAM, I ended up going with four sticks of Crucial CT8G4WFD8266 8 GB RAM (DDR4, 2666 MHz, ECC, unbuffered, dual-rank); I'd read reports on forums that these (Micron B-die) are highly binned and can be overclocked to 3200. It's hard to find good ECC UDIMMs, as well as information on them. They booted up at rated speeds without issue; I haven't tried overclocking them yet.

  • 11 months ago
  • 1 point

Yeah, those drives are good for the task. Perfect drives for slog, not so perfect for the wallet =)
And scratch space is OK; I meant don't put the OS there, or any non-temporary, constantly accessed data.

Measure, measure, and test several times before you settle. For me the average win was 10%, but that's not enough for me to care. Your use case may be different.
I strongly recommend against any layer (software or hardware RAID) below ZFS if you care about data, but you are the boss =)
BTW, removing and adding slog/l2arc is a dynamic operation. And losing the slog you don't lose the pool, only a single txg commit (the amount of data you can write at max speed per 5 sec by default). And the slog does not handle async writes by default, so make sure you play with the sync parameter. And of course the slog/zil can be striped, which will reduce redundancy but can improve performance even more. Latency is everything with the slog.
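In ZFS terms, something like this (dataset/vdev names are illustrative):

zfs set sync=always tank/db         # push every write through the ZIL/slog
zfs set sync=disabled tank/scratch  # skip the ZIL entirely; risky on crash
zpool remove tank mirror-1          # slog vdevs can be detached from a live pool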

Good luck with ZFS.

PS: ordered the Samsung ECC sticks, upgrades coming tomorrow =)

  • 11 months ago
  • 1 point

Where did you get the Samsung sticks? I looked previously, and the major retailers don't stock them so I would have had to buy from some random site or Marketplace vendor. Do you know a reputable place to get them?

  • 11 months ago
  • 1 point

I got them from ServerSupply; they just arrived today.
M391A2K43BB1-CRC are the sticks. I got them for $200 each, using the quote button on their website.

All 128GB working fine at 2933 so far; I just bumped the frequency and set the voltage to 1.35V. Without bumping the voltage the system crashed.
Haven't played with timings yet.
Moderate latency, something like 55 in PassMark. I'll try to squeeze a bit more from those bad boys and will update the post with timings and all the related info.

  • 9 months ago
  • 1 point

Hi. Sorry for the reply to a 2-month-old message. I tried ZFS; my main issue with it on SSDs is that its Linux implementation has no TRIM, which means an SSD will gradually slow down with ZFS on it. (FreeBSD's ZFS has TRIM, by the way.) I still want to use it on my "spinning rust" drives, but currently, because I need to use a DAW, I do some dual-booting, so my Windows VM (an installation of Windows on an NVMe passed through to KVM/QEMU) is using NTFS on my hard drives and I use another SSD for my Linux boot.

The main reason I happened across your build, by the way, is that I just got the above monitor! It's going to be sweet, and a better fit for my GTX 1070 than a 1080p/60Hz 16:9 monitor.

  • 11 months ago
  • 1 point

You should try Awesome WM; it's the main thing, apart from watching emerge -avudn world output, that I miss working on Windows (I'm using .NET for development, and no, my company still hasn't switched to .NET Core). Also, I'm curious about kernel/LibreOffice compile times; this beast should crunch them in minutes.

  • 11 months ago
  • 2 points

I used awesomewm for a long time; it takes too much time to set up and maintain the configs =)
dwm has been set-and-forget for many years. Plasma is also nice lately, I mean veeery nice, and it takes almost the same amount of RAM as Xfce (not kidding). Not that I care about RAM, but some folks do.

Kernel times depend on the configuration A LOT.

make defconfig
make -j64

takes about 30 seconds.

My configuration (very static, everything possible stripped out) takes 15-20, sometimes 40 seconds, but I do module signing and debuginfo, which adds some delay.

LibreOffice takes 10-14 minutes.

I run emerge with 19 niceness and somewhat throttled IO:

PORTAGE_NICENESS=19
PORTAGE_IONICE_COMMAND="ionice -c 3 -p \${PID}"

I also have some QA/binary checks on top of that; disabling all of the above would improve times further. But it does not really bother me a lot; the current speed is more than enough.

  • 11 months ago
  • 1 point

You run Linux on the world's most powerful consumer CPU. Great idea (if you are a creator).

  • 6 months ago
  • 1 point

that's hot

  • 4 months ago
  • 1 point

Nice build! Pretty impressive squeezing out those Cinebench scores on air cooling -- intrigued. Also glad to see someone using the 2990WX for development.

  • 12 months ago
  • 0 points

Nice PC. What framerate do you get in GTA V at maximum graphics settings?

  • 12 months ago
  • 2 points

Thanks. idk, I don't play or own GTA so I can't answer.

[comment deleted by staff]
[comment deleted]
[comment deleted]
[comment deleted by staff]
[comment deleted by staff]
  • 12 months ago
  • 2 points

Both the lamp and the desk are IKEA.
The desk is a GERTON tabletop. It's just a slab of raw wood, heavy as hell, kinda like a big butcher's block, and you have to oil it to protect it and give it some visual texture. I used Watco brand danish oil (a mix of evaporating oil and varnish); 3 coats of it, and it came out nice and even repels liquids. Danish oil can come in some colors, so you can even make it dark, like cherry or walnut. It's a really affordable tabletop of great quality if you are ready to spend some time oiling it properly. Google the name; there are tons of DIY examples of how to make this desk awesome. The lamp is "hansakogg", but it seems they no longer sell it.

[comment deleted by staff]