In my last post I gave everyone an overview of the lab. Today, I want you to see how I’ve configured each node and give you some general information about the testing I’ll be doing. Much of the earliest testing is related to installation, system configuration, etc. Nothing very exciting but necessary.
For now though, I want everyone to understand how our testing will help node runners. If you don’t already have the hardware to build a node, we see no reason you should overspend on a BetaNet node when you can build one for less. If you’ve got everything except a GPU or an SSD, great. We want to be able to recommend the most cost-effective hardware when the time comes to build or complete your node.
Before I get into too much detail, it is really important that everyone understand the hardware I’m using is not a final recommendation. Much testing needs to be done and some components may not meet the final requirements. So with that said, NONE OF THE FOLLOWING IS RECOMMENDED HARDWARE.
We do expect much of the hardware to pass our testing and some of it to fail, but it’s important to understand why we chose the hardware we did.
You’ll notice all nodes are using 500GB Samsung 860 EVO SSDs. That’s because endurance, longevity and capacity are not required for the in-house testing. During in-house testing the nodes will run short sprints to test specific functions. At this point, I can comfortably say this is definitely not the recommended SSD. An enterprise grade SSD with a higher capacity will be recommended.
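To see why endurance matters less for short in-house sprints than for a long-running node, here’s a rough back-of-the-envelope sketch. The 300 TBW figure is Samsung’s published endurance rating for the 500GB 860 EVO; the daily-write figure is a hypothetical placeholder, not a measured BetaNet workload:

```python
# Rough SSD lifetime estimate from an endurance rating.
# 300 TBW is Samsung's published rating for the 500 GB 860 EVO;
# the writes-per-day figure is a hypothetical placeholder, not a
# measured node workload.
TBW_RATING_TB = 300      # total terabytes written before rated wear-out
DAILY_WRITES_GB = 100    # hypothetical node write load, GB/day

days = (TBW_RATING_TB * 1000) / DAILY_WRITES_GB
years = days / 365
print(f"~{days:.0f} days (~{years:.1f} years) at {DAILY_WRITES_GB} GB/day")
```

At a heavier sustained write load the rated lifetime shrinks proportionally, which is why a higher-endurance enterprise drive makes sense for a production node even though it’s overkill for short test sprints.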
I chose the power supplies for their reliability. I wanted to ensure all nodes have enough power and that the PSUs wouldn’t need to be replaced during the in-house testing. I will be testing actual power consumption. If it turns out a 500W PSU can handle the load, hey, it’s less expensive. But we’ll have to see.
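The kind of sizing question being tested can be sketched as a simple power budget. The wattages below are published spec figures (board power for the reference RTX 2070, TDP for the 3700X) plus a rough allowance for everything else, not measured draw — actual consumption under load is exactly what the lab will measure:

```python
# Hypothetical component power budget to sanity-check PSU sizing.
# Figures are spec-sheet numbers, not measured draw under load.
budget = {
    "RTX 2070 (board power)": 175,
    "Ryzen 7 3700X (TDP)": 65,
    "motherboard + RAM + SSD + fans (rough allowance)": 60,
}
total = sum(budget.values())

psu_watts = 500
headroom = psu_watts - total
print(f"estimated load {total} W, headroom on a {psu_watts} W PSU: {headroom} W")
```

On paper a 500W unit looks comfortable, but spec-sheet numbers hide transient spikes and efficiency curves, which is why the empirical measurement still matters.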
Neither expensive RAM nor vast quantities of it are expected to be required. I have no intention of doing any overclocking, so RAM with heatsinks isn’t required. I’ve configured the nodes with 16GB of the least expensive and most readily available modules. I’ll be testing for minimum capacity requirements and for the performance of single- vs. dual-channel configurations.
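The single- vs. dual-channel comparison boils down to measuring memory throughput under each configuration. A dedicated tool (e.g. the STREAM benchmark) would be the real instrument; this crude stdlib sketch just shows the shape of the measurement — move a large buffer, time it, divide:

```python
import time

# Crude memory-copy throughput sketch. Real channel-configuration testing
# would use a dedicated benchmark; this only illustrates the measurement:
# copy a large buffer, time it, and convert to GB/s.
SIZE_MB = 256
src = bytearray(SIZE_MB * 1024 * 1024)

start = time.perf_counter()
dst = bytes(src)  # one full copy of the buffer through memory
elapsed = time.perf_counter() - start

gb_per_s = (SIZE_MB / 1024) / elapsed
print(f"copied {SIZE_MB} MB in {elapsed * 1000:.1f} ms (~{gb_per_s:.1f} GB/s)")
```

Running the same measurement with one module installed and then two (on the correct slots for dual channel) is the comparison in question.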
Most nodes use the AMD Ryzen 7 3700X because that’s what was readily available on the Korean market. When the original specs were published, a 2700X cost about the same as what I paid for a 3700X. One node uses an Intel processor due to Intel’s market penetration. Intel isn’t our first choice because Intel processors are just more expensive. However, many people have Intel processors on hand, so I chose the i7-9700K because its computing power is on par with the 3700X’s. It should be noted that the GPU will be doing the brunt of the work, so a 64-core AMD EPYC or a Xeon Phi is absolute overkill. We’re talking 8 cores.
That covers the bulk of the table above. When it comes to motherboards and graphics cards, things get a little more complicated since there’s a much wider range of options. There are quite a few factors to consider such as form factor, chipset, manufacturer, availability, price, and the ability to do empirical testing.
MOTHERBOARD, actually chipset
I think there are 7 Socket AM4 chipsets. Many of the features of the X470 chipsets aren’t required, and motherboards with those chipsets are more expensive compared to A3**- and B450-based boards. Features like overclocking, SLI, or USB 3.2 Gen 2x2 aren’t necessary. So, to give you an idea of the things I’m testing on motherboards: nodes 2 and 3 use the B450 chipset, so I’ll test different GPUs and memory configurations there. Node 1 uses a B450M board; it’s technically the same B450 chipset but in a smaller form factor with fewer PCIe slots, which reduces the price considerably. Another example is nodes 4 and 5: by using the same motherboard and memory configuration, I can test the performance of the RTX 2070 vs. the RTX 2060. Those are just a couple of examples.
GRAPHICS CARD
This is the one piece of hardware that will most affect both the cost of building a node and its ability to keep up. Graphics cards with Turing GPUs run from $300 to $5,000. If you consider that there are about 20 Turing GPUs and dozens of manufacturers, each with multiple models, that is a lot to choose from! We selected the RTX 2070 because, on paper, it can handle the workload. I got a couple of others just to check, but mathematics is a beautiful thing, and very smart people were able to narrow the field before we even began. So rather than testing the performance of the RTX 2070 vs. the RTX 2070 Super, or the RTX 2070 vs. the RTX 2080, we chose to try to determine whether all GPUs are created equal. You may have a preferred manufacturer, but they don’t necessarily make cards that will meet your budget. LEDs and a cool decal aren’t going to improve your node’s performance, so if manufacturers like Palit and EmTek can make an RTX 2070 that performs as well and doesn’t melt after 6 months, then I see no reason anyone should pay an extra $200 - $300 for a more expensive card. In the end, the graphics cards we recommend will be based on the number of calculations a card can do reliably and on its price.
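The "on paper" comparison that narrowed the field can be sketched with spec-sheet arithmetic. Core counts below are NVIDIA’s published figures; the boost clocks are approximate reference values, not measured:

```python
# Back-of-the-envelope peak FP32 throughput from published specs.
# Boost clocks are approximate reference values, not measured behavior.
def peak_gflops(cuda_cores, boost_ghz):
    # 2 FP32 operations per core per clock (fused multiply-add)
    return cuda_cores * boost_ghz * 2

cards = {
    "RTX 2060": peak_gflops(1920, 1.68),
    "RTX 2070": peak_gflops(2304, 1.62),
}
for name, gflops in cards.items():
    print(f"{name}: ~{gflops / 1000:.1f} TFLOPS FP32")
```

Paper numbers like these tell you which tier of card is in the running; whether a given manufacturer’s RTX 2070 actually sustains that throughput reliably is what the empirical testing is for.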
When all is done, our goal is to be able to offer a cost effective baseline configuration to build a BetaNet node. From there if you choose to use a terabyte of RAM or a $3000 graphics card, that’s on you.
That’s it for today. If you’ve got questions or suggestions please join the conversation! I’ll do my best to answer all your questions. Keep an eye out for more posts in the near future.