Build a Farm: Hardware considerations

Null

Ooperator
Patriarch
kiwifarms.net
A long time ago, when we were just moving onto our own hardware, I bought a server that'd carry us for like 3 years.

Farm 1
1 x Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz (4 cores, 8 logical)
4 x Micron 8GB DDR3 1600MHz

The disks for this are unusual and weren't set up by me. It's a software RAID scheme that Facebook uses:
a 250GB SSD fronts and caches for a 4TB HDD, which mirrors onto a second disk.
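
For anyone wanting to replicate that layout on stock Linux today, the rough equivalent is an SSD caching device sitting in front of an md mirror. A minimal sketch, assuming bcache rather than whatever Facebook's exact tool was; the device names are placeholders:

```python
# Rough equivalent of that layout on stock Linux: mirror the two HDDs with
# md, then put the SSD in front as a bcache cache device. Device names are
# placeholders and these commands wipe whatever is on them.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Mirror the two 4TB HDDs.
run(["mdadm", "--create", "/dev/md0", "--level=1",
     "--raid-devices=2", "/dev/sda", "/dev/sdb"])

# Register the mirror as the backing device and the SSD as its cache;
# make-bcache attaches the pair when given both in one invocation.
run(["make-bcache", "-B", "/dev/md0", "-C", "/dev/sdc"])

# The cached volume shows up as /dev/bcache0 and can be formatted normally.
run(["mkfs.ext4", "/dev/bcache0"])
```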

This one machine ran the entire stack for years. Currently, the DB is 1.7TB.
The main constraint on this box is RAM: the database consumes an absolutely enormous amount of it, and this server barely contains it anymore.
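
A rough back-of-envelope on how badly 32GB loses to a 1.7TB database. This assumes a MySQL/InnoDB-style setup where the buffer pool is the big consumer, and the 75%-of-RAM figure is the usual dedicated-DB guideline, not a description of the actual config:

```python
# Back-of-envelope: how much of a 1.7 TB database a 32 GB box can keep hot.
# The 75%-of-RAM buffer pool figure is assumed for illustration only.
ram_gb = 32
db_gb = 1.7 * 1024

buffer_pool_gb = 0.75 * ram_gb
print(f"Buffer pool: ~{buffer_pool_gb:.0f} GB")
print(f"Share of DB resident in RAM: {buffer_pool_gb / db_gb:.1%}")
# ~1.4% -- anything outside a very small hot set has to come off the disks.
```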

After the hack, I rebuilt the frontend on a completely different server shared with a few other containers.

Farm 2
My other two servers are basically cheap hand-me-downs with completely random disks. None of them have any resilience whatsoever, so important stuff (like KF) gets backed up to the cloud very routinely.

1 x Intel(R) Xeon(R) CPU E3-1240 v3 @ 3.40GHz (4 cores, 8 logical)
2 x Super Talent 8GB DDR3 1333 MHz

Disks for this are all over the place. The main ones to consider are a 128 GB LV for the PHP scripts (a huge amount of overkill) and a 2 TB HDD for the static files. That's just directly on the HDD, no RAID; things are backed up to the cloud instead. There's about 1.3 TB of static content.

The main bottleneck is now the HDD, as every single attachment on the site lives on it. Even with the most aggressive Cloudflare CDN caching available to me, there's nothing more I can do to alleviate this. I could try setting up an nginx cache on the SSD, but it's not going to be any faster than Cloudflare; there are too many files, accessed too sporadically, for nginx to do anything Cloudflare is not already doing.


So my current goal is to move KF off the second device onto a new one. We don't really cap out on CPU, but I don't want to touch this for a few years. I'm probably going to go for 32GB of DDR4 because it's cheap, plus a single-socket CPU with 8 physical cores at or above 3.4GHz, which will be a significant expense.

Further, I don't really know much about RAID performance. I was thinking of doing two RAID10s, but I don't think the SSDs the scripts reside on need RAID10, maybe no RAID at all. I'm considering a 256GB SSD RAID1 for the scripts and something like a 2TB 10k HDD RAID10 for the static files, giving 4TB in total. Maybe a 4TB 10k HDD RAID10 if I really want to not have any money.
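
Quick sanity check on the usable capacity of those layouts. The drive counts are assumptions on my part; the "4TB in total" figure implies four 2TB disks in RAID10:

```python
# Usable-capacity sanity check. Drive counts are assumptions.
def raid1_usable_tb(disk_tb: float) -> float:
    # RAID1: every member holds a full copy, so usable space is one disk.
    return disk_tb

def raid10_usable_tb(disk_tb: float, n_disks: int) -> float:
    # RAID10: striped mirrors, so half the raw capacity is usable.
    assert n_disks >= 4 and n_disks % 2 == 0
    return disk_tb * n_disks / 2

print("Scripts, 2 x 256GB SSD RAID1:", raid1_usable_tb(0.256), "TB usable")
print("Static, 4 x 2TB HDD RAID10: ", raid10_usable_tb(2, 4), "TB usable")
print("Static, 4 x 4TB HDD RAID10: ", raid10_usable_tb(4, 4), "TB usable")
```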

Hard cap I'm willing to spend is $5k.

Discuss.
 

OwO What's This?

𝑖𝑡'𝑠 𝑛𝑜𝑡 𝑝𝑜𝑟𝑛, ℎ𝑜𝑛𝑒𝑠𝑡
True & Honest Fan
kiwifarms.net
As far as processors go, I think the Threadripper 3960X or 3970X is a good choice. This new generation of Threadripper basically got rid of all the old disadvantages, and now it's more or less a normal processor that just has an obscene number of threads.

My biggest recommendation regarding RAID is to NOT get a RAID controller. The days of hardware RAID are over: the Linux kernel has well-optimized software RAID support, and RAID10 has excellent performance. You shouldn't rely on an overpriced piece of hardware that can fail and make it extremely hard to recover your data.

I also recommend having what's called a 'hot spare' in your RAID10 configuration: essentially, if any drive in the array fails, the array immediately begins rebuilding itself onto the hot spare without any downtime.
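
A minimal sketch of what that looks like with the in-kernel md driver; the device names are placeholders and the command wipes whatever is on them:

```python
# Four-disk md RAID10 with one hot spare. mdadm takes the first four listed
# devices as active members and the remaining one as the spare it rebuilds
# onto automatically after a failure.
import subprocess

subprocess.run([
    "mdadm", "--create", "/dev/md1",
    "--level=10", "--raid-devices=4", "--spare-devices=1",
    "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde",  # active striped mirrors
    "/dev/sdf",                                      # hot spare
], check=True)

# /proc/mdstat shows the array state, including any rebuild onto the spare.
print(open("/proc/mdstat").read())
```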
 
Last edited:

Null

Ooperator
Patriarch
kiwifarms.net
As far as processors go, I think the Threadripper 3960X or 3970X is a good choice.
Threadrippers (EPYC is the server brand name) are for things that require many threads and can tolerate lower clock speeds. The opposite is going to be true with PHP. I'd get an EPYC if I were doing something like a reverse proxy shared with ED and several other sites that need IP source protection, but for the Kiwi Farms I'd emphasize physical core speed over core count.
 

OwO What's This?

𝑖𝑡'𝑠 𝑛𝑜𝑡 𝑝𝑜𝑟𝑛, ℎ𝑜𝑛𝑒𝑠𝑡
True & Honest Fan
kiwifarms.net
Threadrippers (EPYC is the server brand name) are for things that require many threads and can tolerate lower clock speeds. The opposite is going to be true with PHP. I'd get an EPYC if I were doing something like a reverse proxy shared with ED and several other sites that need IP source protection, but for the Kiwi Farms I'd emphasize physical core speed over core count.
I see. Well, regardless, I'd like to point out that the new Threadrippers have a 3.7 GHz base and 4.5 GHz boost clock (single-core), so they're not weak on single-thread tasks like they used to be. But if you don't need all the threads, then one of the new Ryzens should suffice.
 

Coffee Anon

kiwifarms.net
I see. Well, regardless, I'd like to point out that the new Threadrippers have a 3.7 GHz base and 4.5 GHz boost clock (single-core), so they're not weak on single-thread tasks like they used to be. But if you don't need all the threads, then one of the new Ryzens should suffice.
This guy is 100% right: Threadrippers aren't stuck at measly 2 GHz EPYC speeds. They are as fast as or faster than desktop parts, although they cost more per core. Also note that clock speed doesn't equal overall performance, and AMD's new parts have IPC as good as or better than Intel's in some workloads.

If you don't foresee ever using 32 cores/64 threads and over 128 GB of RAM (I don't recall how much the new TR boards support), then perhaps just get a 3700X and spend the spare cash on 128 GB of RAM and a 4 TB HDD.

If you want to future-proof it a bit, maybe a 3900X, since it still has a good price per core. The 3950X carries a 30% premium or so, and Threadrippers even more.

But a 3970X would allow for many more customers if you expect your VM hosting business to grow significantly. You could also say you bought a "threadraper" for the memes.

Another downside with TRs is that the new TRX40 socket mobos are expensive, like $500 plus, although if you get one with a 10 GbE port then it's even further future-proofed.

And all of this doesn't even touch on the vulnerabilities in Intel chips, with a new one appearing every few months for the past 2 years, the fixes for which degrade performance.
 
Last edited:

Reverend

Avatar of Change
kiwifarms.net
Questions:

1. Are you bound by power requirements? (i.e., so many amps on a power bar)
2. Are there any surplus server providers in or around where you or the data center are located?
2a. If so, are there any Dells, HPs, or SuperMicros kicking around?
3. Can you get 10Gb/40Gb in your switches? Segment your front-end client traffic from your back-end replication.
4. Does the data center provide "helping hands," so you don't have to do shit as long as there's hardware on hand for them to work with?
5. Whatever systems you buy, do they have remote access/IPMI/iDRAC? If so, you'll need a dedicated switch on a separate network, outside of everything else (security!).
6. What is your budget?

As for what to buy: skip anything single-socket. Skip anything that is NOT Xeon or EPYC. Those CPUs are designed and certified to run in a data center.

I don't care what people say, Threadripper is not meant for a data center. Does AWS/Google Cloud/Azure use them? Fuck no they don't, and for damn good reason: they are 'failed' EPYC chips and single-socket.

Don't buy desktop HDDs or laptop SSDs. You want SAS (Serial Attached SCSI) drives that are hot-swap and built like HUMVEEs. Yes, they are more expensive, but they will last you through a power failure, and fucking hot swap = 0 downtime. I have a customer who thought he'd save $500 by buying consumer SATA SSDs. Sure as shit, when his array failed we had to take the server down (outage, customer had to be notified), replace the disk, then go through the headache of bringing the server back online and rebuilding the array, and WOULDNTYAKNOW, another disk failed. 2 outages + lost revenue + my time = way more than $500.

Don't go cheap; get used enterprise gear, and it will last you 3+ years.
 

REGENDarySumanai

Not So Bad Once You Try It, Right?
Supervisor
True & Honest Fan
kiwifarms.net
If you want to splurge a bit and still use the Linux kernel, there's always ZFS on Linux. Other than that, I've got nothing else. ZFS is future-proofed in that it's a file system and a logical volume manager in one, and it's a 128-bit file system.

https://zfsonlinux.org/
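
The ZFS analogue of the RAID10-plus-hot-spare setup discussed above is a pool striped across mirrored vdevs with a spare attached. A rough sketch; the pool name and device names are placeholders:

```python
# ZFS equivalent of RAID10 + hot spare: two mirrored vdevs plus a spare.
import subprocess

subprocess.run([
    "zpool", "create", "tank",
    "mirror", "/dev/sdb", "/dev/sdc",
    "mirror", "/dev/sdd", "/dev/sde",
    "spare", "/dev/sdf",
], check=True)

# lz4 compression is cheap and usually worthwhile for mixed forum data.
subprocess.run(["zfs", "set", "compression=lz4", "tank"], check=True)
subprocess.run(["zpool", "status", "tank"], check=True)
```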
 
Last edited:
  • Informative
Reactions: JoshPlz

Null

Ooperator
Patriarch
kiwifarms.net
Questions:
1. Power circuit: a primary 10A/120V circuit (8A / ~1kW usable).
2. It's in Vegas but I don't know any.
3. I don't know what you're asking.
4. Yes, but I am going back to the US and will probably visit the datacenter, so if it's something I have to do I can do it then.
5. I have to rent KVMs and do not have my own hardware for it.
6. Read OP.

As I said in my OP and in the posts following, I avoid consumer-grade electronics. There's a reason we've never had a hardware failure.
 

Hellion

Person of Disinterest
kiwifarms.net
@Null are you missing a sentence on the happenings banner? At the moment it just says 'Build one with me in our fabulous Tech board.'

Edit: Glad it's fixed now! It didn't initially say 'Do you build servers?' or link to this thread, so it was pretty vague what 'building one' meant, unless you'd seen the previous threads.
 
Last edited:

Null

Ooperator
Patriarch
kiwifarms.net
@Null are you missing a sentence on the happenings banner? At the moment it just says 'Build one with me in our fabulous Tech board.'
... ? no

"Build [a server] with me in [the Kiwi Farm's] [Internet & Technology] [forum]."
 

Give Her The D

You have been BAMBOOZLED
kiwifarms.net
What motherboard do you intend to use? Supermicro?

Also, do you plan to get any anti-DDoS appliances? I know one of the cheap options is to BGP route through a BuyVM VPS, but I'd imagine Francisco wouldn't want you back on there after the fiasco with Vordrak.
 
  • Thunk-Provoking
Reactions: JoshPlz

Coffee Anon

kiwifarms.net
After thinking about it on the bus: I'd actually be very careful about installing a desktop mobo in a rack. Make sure it will actually work.

My main concern is cooling. The TR part is 280 W. Since we want to avoid water cooling, you would usually put a big-ass heat sink/radiator on it with a big-ass fan. Server cooling usually goes with a smaller heat sink to keep within the rack height and VERY VERY fast, high-power fans to increase airflow; however, you would need to make sure that setup can handle the wattage a TR puts out.

The other downside is that the RAM on a server mobo is oriented parallel to the airflow and perpendicular to the rear of the board, while on desktop boards (including TRX40), the RAM sits perpendicular to the airflow and parallel to the rear of the case, which means it will block airflow.

Reverend actually sounds like he knows what he's talking about, so maybe listen to him.
 

Reverend

Avatar of Change
kiwifarms.net
3. I don't know what you're asking.
10Gb/40Gb network speeds. Instead of 1Gb connectivity between your devices, you have 10Gb or 40Gb ports passing traffic, which speeds up the shuffling of data between the front and back ends. You can get 10Gb/40Gb switches fairly inexpensively these days (Brocade/Arista) on eBay, ESPECIALLY in Vegas.

I may have a line to a person who could help you out; it depends on which data center you're in, as I know they're in Vegas as well. I'll have to see what's out there for Dell/HP/SM equipment.

My comment regarding consumer products was aimed at the folks advocating for Threadrippers. Just no. Build your own servers? Fuck no.
 
  • Informative
Reactions: JoshPlz