Build a Farm: Hardware Considerations

Reverend

Avatar of Change
kiwifarms.net
There are bunches of older enterprise SSDs floating around at affordable prices, assuming whatever hardware you end up with supports full-height PCIe. Here's an example: 3.2TB for under $300.

Leibowitz

Bless me Father, I ate a lizard.
kiwifarms.net
Why use PCIe when you can get the same thing in a SATA format?


I have 4 of these running my farm for my Linux ISOs, and they work like a fucking dream.

Yes, they aren't hot-swappable, but having 1.2TB of flash is fucking awesome.
Bandwidth, mostly; those cards were made before NVMe. The downside is that you can't boot off those old PCIe cards, and they require software drivers, whereas SATA does not.
 

Looney Troons

i need that money i really do i need that money
True & Honest Fan
kiwifarms.net
There is no way I am buying enough SSD to cover the existing consumption (1.3 TB) + RAID it properly. That is at least 2n, but probably 4n for the redundancy we need. A single 2TB SSD is too expensive and isn't much more spacious than what we already have. I'd much rather have 4 x 10k RPM 4TB drives and RAID10 them. If we ever consume 4TB, it's probably time for a proper dedicated storage device.
I mentioned this in another, similar thread, but would you be willing to accept hardware donations when you're in the US? I'm not sure where you're colocated, or what their "smart hands" and customer storage policies are. I bet there's a few of us who have some goodies to spare. I have a few Dell PERC 6i/e controller cards that I plucked out of my aforementioned R710s, some WD Black drives (I believe 4TB) that need to be zeroed, some fully-buffered DDR2, and some really shitty Spider KVMs.
 

Reverend

Avatar of Change
kiwifarms.net
Economically, you would be better off purchasing 10GbE NICs for your existing servers and acquiring a NAS.

Yes, I quoted OP in a reply to OP - fight me
No he wouldn't; that's a terrible idea. What would centrally locating all his data solve for him? Network-attached storage is pointless here, as his data needs to be at or near line speed and in RAM. The latency of pulling the data off the disks and across the network, all at the snail's pace of an ARM CPU, would slow the process way the fuck down, especially on anything short of an EMC-type storage array that's already packing a Xeon processor or equivalent.

A SAN? Sure, that may help for backups and redundancy; shared storage is highly useful if done correctly. Having a server dedicated to storage would be a good idea down the road.
 

DNA_JACKED

kiwifarms.net
You can get a Dell R340, brand new, with 32GB of RAM and a 3.7GHz (5.0GHz turbo) 8-core Xeon for about $2250 with a 240GB boot SSD, and a second SSD for $120. That would give you your RAID10 SSD setup. The only real limit is that it is a 1U server and thus can't run more than 4 drives. The 2U servers are mostly running EPYC processors, which you have expressed disinterest in.

I'd offer to send you one of my old servers, but they are all set up for multi-threaded workloads, and thus have high-core-count/low-clock-speed setups, and are all generations older than what you have currently. How much RAM can either current box take? There may be some of us willing to buy and ship you large quantities of DDR3 RAM sticks, or who may have, say, 48-96GB of the stuff laying around. Just saying...

Alternatively, the R240 can be had with 16-32GB of RAM and the same processors as the R340 for a cheaper price. It has no hardware RAID controller, but you are not going to use that anyway.

So, best plan: get a pair of R240 servers, one set up with lots of RAM and an E-2236 CPU (6 cores/12 threads, 3.4GHz base, 4.8GHz boost) to replace BOX 1, the other with a Xeon E-2288G (8 cores/16 threads, 3.7GHz base, 5GHz turbo) for your PHP BOX 2. Then take one of your current boxes, fill it with the HDDs you need, and use it as a NAS for boxes 1 and 2 (rough sketch of that below).

EDIT: both servers with 32GB of RAM each would run you about $2700. They wouldn't come with SSDs, and are instead outfitted with 1TB HDDs, but that would give you everything but storage for a little more than half your budget. Buy a couple of large HDDs for your old BOX 1/2, turn it into BOX 3, set it up as remote storage for the new box 1/2, and call it a day. It would give you a lot more breathing room.
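Turning the old box into remote storage for the new pair can be as little as one NFS export. A minimal sketch, with made-up paths and addresses:

    # On the old box (the would-be BOX 3), add one line to /etc/exports;
    # the path and subnet here are illustrative:
    /srv/farm  10.0.0.0/24(rw,sync,no_subtree_check)

    # Apply the export on BOX 3, then mount it from the new boxes:
    exportfs -ra                                # on BOX 3
    mount -t nfs 10.0.0.3:/srv/farm /mnt/farm   # on box 1/2

NFS keeps the storage box dumb and simple; whether it's fast enough for the database is a separate question (see the latency argument above).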
 

Reverend

Avatar of Change
kiwifarms.net
Dell R420 (dual 6-core Xeons), 128GB RAM: $400

Dell R430 (dual 8-core Xeons), 128GB RAM: $900

Dell R620 (dual 6-core Xeons), 128GB RAM: $550


All are small form factor (2.5" drives) with 2 PCI Express slots, perfect for a 10Gb card later on, and come with hardware RAID and remote management (iDRAC).

Fill the front-end drive bays with 10-15k RPM SAS disks; 2.5" 450-900GB disks are inexpensive, and Dell's are rock standard. RAM is also 'cheap', as you can pile a ridiculous amount into these servers if you want (the R620 goes up to 512GB).


I second @DNA_JACKED's suggestion of reusing one of your old servers and making it into a backup/repository of your data, onsite or offsite. The 3-2-1 backup strategy (three copies of your data, on two different media, one of them offsite) is highly advised here, as the Farms can be shut down at any time and your data seized. Replication is key, and it can run on an older/slower server so it doesn't tie up bandwidth on your current production system. This is also why I advocate a separate set of switches for your back-end communication, to further wall off your data warehouses from being seen/exposed through the firewall/router.
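The replication leg can be as simple as a nightly cron job on the backup box pulling from production. A minimal sketch; the hostname and paths are made up:

    # Pull the web root from production (run from cron on the backup box):
    rsync -aH --delete prod1:/var/www/ /backup/prod1/www/
    # Keep dated, compressed database dumps as well:
    ssh prod1 'mysqldump --all-databases | gzip' > /backup/prod1/db-$(date +%F).sql.gz

That covers two of the three copies; the third should live offsite, per the 3-2-1 rule above.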
 

JoshPlz

🎇Patron Saint of Good Boys🎆
True & Honest Fan
kiwifarms.net
My biggest recommendation regarding RAID is to NOT get a RAID controller. The days of hardware RAID are over; the Linux kernel has been well optimized to support software RAID, and RAID10 has excellent performance. You shouldn't rely on an overpriced piece of hardware that can fail and make it extremely hard to recover your data.
Hardware RAID is almost mandatory, and thankfully most HP or Dell servers have some sort of RAID controller integrated into the system board.
Since you two seem to disagree, could you explain why you are for or against hardware RAID?

I have always been under the impression that, albeit more expensive, a proper hardware RAID controller is always better than software RAID, as it has its own dedicated CPU, RAM, cache, and sometimes battery backup, which should make it faster and more reliable than software RAID.

Is there a reason why this might no longer be true?
 

RightToBearBlarms

The Red Lobster Cheddar Biscuit of People
kiwifarms.net
Since you two seem to disagree, could you explain why you are for or against hardware RAID?

I have always been under the impression that, albeit more expensive, a proper hardware RAID controller is always better than software RAID, as it has its own dedicated CPU, RAM, cache, and sometimes battery backup, which should make it faster and more reliable than software RAID.

Is there a reason why this might no longer be true?
My take is probably a little biased towards hardware solutions, having worked in the server hardware space for a bit.

Hardware RAID is generally easier, faster, and safer. You're not eating up CPU cycles to manage it; as efficient as software RAID may be now, it'll never be free. Hardware RAID also generally offers more options when it comes to your RAID level (5, 6, 10), and a couple of gigs of write cache can make a real difference.

I don't think there's anything inherently wrong with software RAID (I use it on my desktop at home), but I don't see any advantage to it other than being cheaper, and even then a lot of these mid-range servers come with a RAID controller integrated into the system board.
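For anyone weighing the two sides: the software option being debated is the Linux md driver, and the RAID10 array Null described is a handful of commands. A minimal sketch, assuming four blank disks (device names made up):

    # Build a 4-disk RAID10 array; /dev/sd[b-e] are illustrative
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mkfs.ext4 /dev/md0
    # Record the array layout so it assembles on boot
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf

No controller to fail or to source a replacement for; the trade-off is the CPU cycles and missing write cache described above.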
 

Lesbian Sleepover

Party Announcement
True & Honest Fan
kiwifarms.net
Finally, something in my realm
Have you looked at NVMe SSDs?
I run massive SQL Server AAG (AlwaysOn Availability Group) configs, physical and virtual, and these drives are the fastest.

HP and Dell ship servers with them (for example, the Dell PowerEdge R430-R730) if configured that way. One 6TB disk is about $6,000 USD, but there's no real mirroring, as RAID isn't an option for these drives. You can hack RAID onto them, but you're better off configuring a storage pool.
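On the Windows/SQL Server side, "storage pool" usually means Storage Spaces; the equivalent on a Linux box like the Farms' would be a ZFS pool. A rough sketch of the latter, with made-up device names:

    # Stripe two NVMe drives into a pool (no mirror, per the above):
    zpool create fastpool /dev/nvme0n1 /dev/nvme1n1
    # A dataset tuned for the database; 16K records match InnoDB's page size
    zfs create -o recordsize=16k fastpool/db

If redundancy later becomes worth the capacity hit, 'zpool attach' can turn each drive into a mirror.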
 

RightToBearBlarms

The Red Lobster Cheddar Biscuit of People
kiwifarms.net
Those processors are all a huge step down. I have a much larger budget than that.

- Two E5-2699 v3 processors, 18 cores/36 threads a pop, running at 2.3GHz (3.6GHz boost)
- 128GB DDR4
- Hardware RAID controller
- 4 port 1Gb NIC
- Redundant power
- 3.5TB of SSD

Maybe there's a better price or a specific config out there, but is this more like what we're looking to build than the above post?
 

RightToBearBlarms

The Red Lobster Cheddar Biscuit of People
kiwifarms.net

This seems like it would be a fairly solid choice in terms of performance, and it's also well within budget.
Gen10 being HP's current generation of ProLiant servers means that getting any significant performance increase over the previous generation will put you about $20k in the hole. The one linked here is the absolute base model: a single socket (8 cores running at 2.1GHz), 16GB of memory, and no storage.

The equivalent Dell PowerEdge models (x30 vs. x40) are in the same boat.
 

Spedestrian

Based and Scrabblepilled
True & Honest Fan
kiwifarms.net
If you're getting cucked by disk I/O in nginx, you might wanna look into using thread pools so that read() and open() calls aren't blocking the workers, before you spend a bunch of money on new hardware:
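For reference, that's the thread_pool/aio pair (nginx 1.7.11+, built with --with-threads). A minimal sketch; the pool size and paths are illustrative:

    # Main context: define a pool of worker threads
    thread_pool iopool threads=32 max_queue=65536;

    # http/server/location context: hand blocking file reads to the pool
    location / {
        root /var/www/farm;     # illustrative docroot
        aio threads=iopool;     # offload blocking read()/sendfile() calls
        sendfile on;
    }

Strictly, the pool offloads read()/sendfile(); stalls on open() are handled separately by open_file_cache.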
 