Build a Farm: Hardware considerations

DNA_JACKED

kiwifarms.net
I'm very experienced in everything above the kernel because I've always worked in virtualization. I've been told hardware RAID is still important.
Hardware RAID is faster if you have some bizarre RAID 6 setup with tons of I/O and weird parity schemes. For the size we are looking at, software RAID will be more than adequate.

We use hardware RAID at work, but we have two 48-disk NAS units hosting 100+ VMs at once, both set up with 6+1 RAID with additional parity disks and supplying the VMs to 6 load-balancing servers. The big advantage of hardware RAID is the ease of rebuilding a bad disk, but sometimes they fuck up and obliterate data integrity for an entire RAID 6 array, as happened to us this week. Not worth the headache for a site this small.
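
If you want the parity overhead in actual numbers, here's a quick Python sketch of the usable-capacity math (disk counts and sizes are made up for illustration, not our work hardware):

def usable_tb(level, disks, disk_tb):
    """Approximate usable capacity, ignoring filesystem and controller overhead."""
    if level == "raid5":   # one disk's worth of parity
        return (disks - 1) * disk_tb
    if level == "raid6":   # two disks' worth of parity
        return (disks - 2) * disk_tb
    if level == "raid10":  # everything mirrored once
        return disks // 2 * disk_tb
    raise ValueError("unsupported level: " + level)

for level in ("raid5", "raid6", "raid10"):
    print(f"{level}: 8 x 1.8 TB -> {usable_tb(level, 8, 1.8):.1f} TB usable")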
 

OwO What's This?

𝑖𝑡'𝑠 𝑛𝑜𝑡 𝑝𝑜𝑟𝑛, ℎ𝑜𝑛𝑒𝑠𝑡
True & Honest Fan
kiwifarms.net
hardware RAID is important if you're going RAID 6, but RAID 6 is already losing a lot of favor to RAID 10 because even with a hardware RAID controller, rebuild time is still significant. RAID 10 with hot spares seems to be what enterprise systems are moving toward: better performance and near-nonexistent downtime
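
the rebuild math is the killer. rough python sketch with assumed speeds (ballpark figures, not benchmarks; tune for your own disks):

# RAID 10 rebuilds are a straight copy from the surviving mirror partner;
# RAID 6 rebuilds read every remaining disk and recompute parity, usually
# while the array is still serving live traffic.
DISK_TB = 1.8             # per-disk capacity (assumed)
MIRROR_COPY_MB_S = 150    # assumed sequential copy speed for a RAID 10 rebuild
PARITY_REBUILD_MB_S = 40  # assumed effective RAID 6 rebuild speed under load

def rebuild_hours(capacity_tb, speed_mb_s):
    return capacity_tb * 1_000_000 / speed_mb_s / 3600

print(f"RAID 10 rebuild: ~{rebuild_hours(DISK_TB, MIRROR_COPY_MB_S):.1f} hours")
print(f"RAID 6 rebuild:  ~{rebuild_hours(DISK_TB, PARITY_REBUILD_MB_S):.1f} hours")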
 

SoapQueen1

speed bump, failed business, retired tism wrangler
True & Honest Fan
Retired Staff
kiwifarms.net
Out of curiosity, what OS distribution(s) does the site run?
 

אΩ+1

The Aleph
kiwifarms.net
How many units of rack space do you have access to?

As for the hard drives, which specific model do you use?
 

Spawn

TRULY AND HONESTLY DOSENT GIVE A DAMN
kiwifarms.net
How many units of rack space do you have access to?

As for the hard drives, which specific model do you use?
I would also be curious to know if we are using HDD or SSD storage primarily. We could in theory use a PCIe-based SSD array combined with RAID 10 to create a high-speed volume for our site files like ratings, videos, and pictures, cutting down on load time and on storage pressure for the other drives.
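
Rough numbers for what a 4-drive RAID 10 SSD tier would buy us (per-drive figures below are assumptions for SATA SSDs; PCIe/NVMe drives would multiply them several times over):

DRIVES = 4
DRIVE_GB = 240      # assumed per-drive capacity
READ_MB_S = 500     # assumed per-drive sequential read (SATA)
WRITE_MB_S = 450    # assumed per-drive sequential write (SATA)

pairs = DRIVES // 2
# RAID 10: reads can be served by any drive, writes land on both halves
# of each mirror, and usable space is half of raw capacity.
print(f"usable space:    {pairs * DRIVE_GB} GB")
print(f"aggregate read:  ~{DRIVES * READ_MB_S} MB/s")
print(f"aggregate write: ~{pairs * WRITE_MB_S} MB/s")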
 

Null

Ooperator
Patriarch
kiwifarms.net
Sorry for the suffering; it seems to be actively dying now that I've figured out what the issue was.

I'm working with my usual hardware guy and put in a request with the local surplus provider to see if he can find something fitting the bill and get it slotted by Monday. Fingers crossed.
 

DragoonSierra

kiwifarms.net
Discuss

Thinkmate® RAX-1208-SH 1U Chassis - 8x Hot-Swap 2.5" SATA/SAS3 - 600W Single Power
Intel® C246 Chipset - 6x SATA3 - 2x M.2 - Dual Intel® 1-Gigabit Ethernet (RJ45)
Six-Core Intel® Xeon® Processor E-2286G 4.0GHz 12MB Cache (95W)
2 x 16GB PC4-21300 2666MHz DDR4 ECC UDIMM

LSI MegaRAID 9341-8i SAS 12Gb/s PCIe 3.0 8-Port Controller
4 x 240GB Intel® SSD D3-S4510 Series 2.5" SATA 6.0Gb/s Solid State Drive
4 x 1.8TB SAS3 12.0Gb/s 10000RPM - 2.5" - Seagate Exos 10E2400 Series (512e/4Kn)


CONFIGURED PRICE:
$4,160.00

($377/mo)
I dunno anything about servers, but would an SSD cache drive with Primocache running help any?
 

Reverend

Avatar of Change
kiwifarms.net

AMD Ryzen 7 3800X - 3.9 GHz (4.5 GHz Turbo) - 8 cores (16 threads) - 32MB cache - 105W - $300 if you live near a Micro Center, otherwise $340
GTFO of this thread with this home-user shit. This is a thread for big-boy enterprise gear, not gaming garbage. We want servers that will last longer than you'll be living in your mother's basement and go through more content than your entire hentai history up to this point.

The build looks solid to me, except that the SSDs should be connected to the hardware RAID controller too, instead of the Intel mainboard as shown there. (Unless there is a good reason not to.)

I have done some price checking on the separate components:
Intel® C246 Chipset - 6x SATA3 - 2x M.2 - Dual Intel® 1-Gigabit Ethernet (RJ45)
$219 (Amazon.com) - (but is the form factor correct for a rack chassis?)

Six-Core Intel® Xeon® Processor E-2286G 4.0GHz 12MB Cache (95W)
$489 (Connection.com - but out of stock)

2 x 16GB PC4-21300 2666MHz DDR4 ECC UDIMM
$184 (Amazon.com)

LSI MegaRAID 9341-8i SAS 12Gb/s PCIe 3.0 8-Port Controller
$265 (Newegg.com)

4 x 240GB Intel® SSD D3-S4510 Series 2.5" SATA 6.0Gb/s Solid State Drive
$1136 ($284 each, Amazon.com)

4 x 1.8TB SAS3 12.0Gb/s 10000RPM - 2.5" - Seagate Exos 10E2400 Series (512e/4Kn)
$980 ($245 each, Serversupply.com)

In total: $3273

I couldn't find the price of just the chassis + PSU, so I left that out.
So, the markup would be $887 minus whatever a good rack chassis with PSU costs.
You are paying for someone to build this unit, test it, support it, and drop-ship it to the data center. $800 is a small price to pay to make sure this shit works on day one and you aren't fucking around with building shit yourself. You can't put a price on "peace of mind," especially if it has an on-site warranty so they send some other dipshit out at 2 AM to fix your shit for you while you sit at home sipping rye whiskey and singing "Damn It Feels Good to Be a Gangsta."

I'm very experienced in everything above the kernel because I've always worked in virtualization. I've been told hardware RAID is still important.
Hardware RAID is PREFERABLE when and where you can get it. Offload as much work as you humanly can to another device whose sole job is to sit there and make shit flow between I/O ports. It's like having a cop in a tank directing traffic vs some retired old lady in a crossing-guard vest doing it. Sure, it works, but who do you want making sure data gets from A -> D correctly?

That LSI card is one of the better mid-range RAID cards:

  • RAID 0/1/5/10/50 and JBOD
That's more than you'll need to set up whatever type of parity/redundancy you desire.

This one has onboard flash cache, which will save your data in case of a power failure. $600 more, though.



I also advocate HIGHLY for dual power supplies. Redundancy as much as possible, everywhere you can. If you can get 2 PDUs in the half-cab, do it; split your power load between both legs.

Might be too late, you never know. Sorry my supplier didn't pan out :(
 

Slav Power

Tag jes.
kiwifarms.net
GTFO of this thread with this home-user shit. This is a thread for big-boy enterprise gear, not gaming garbage. We want servers that will last longer than you'll be living in your mother's basement and go through more content than your entire hentai history up to this point.
He just mistook his plans to build his zoomer Minecraft server for our plans to build a proper web server.
 

dinoman

⚡🐹🐹⚡
kiwifarms.net
GTFO of this thread with this home-user shit. This is a thread for big-boy enterprise gear, not gaming garbage. We want servers that will last longer than you'll be living in your mother's basement and go through more content than your entire hentai history up to this point.

You are paying for someone to build this unit, test it, support it, and drop-ship it to the data center. $800 is a small price to pay to make sure this shit works on day one and you aren't fucking around with building shit yourself. You can't put a price on "peace of mind," especially if it has an on-site warranty so they send some other dipshit out at 2 AM to fix your shit for you while you sit at home sipping rye whiskey and singing "Damn It Feels Good to Be a Gangsta."
I don't have anything to add to this but can I just say this response was fucking hysterical.
 

AlexJonesGotMePregnant

he put a baby in my butt
kiwifarms.net
I propose "Melinda" in honor of Melinda Leigh Scott

- null loves Israel
- MLS is big brained like the new stronk CPU
- MLS is as likely to leave null alone as the new server is to fail
- it would infuriate MLS to know that something named in her honor is responsible for the constant hate crimes she believes are being committed against her
 
  • Thunk-Provoking
Reactions: 3119967d0c

OwO What's This?

𝑖𝑡'𝑠 𝑛𝑜𝑡 𝑝𝑜𝑟𝑛, ℎ𝑜𝑛𝑒𝑠𝑡
True & Honest Fan
kiwifarms.net
Hardware RAID is PREFERABLE when and where you can get it. Offload as much work as you humanly can to another device whose sole job is to sit there and make shit flow between I/O ports. It's like having a cop in a tank directing traffic vs some retired old lady in a crossing-guard vest doing it. Sure, it works, but who do you want making sure data gets from A -> D correctly?
you keep preaching about servers that can stand the test of time, but you're pushing a component notorious for failing silently and in the most disastrous ways. you have such a wide support net when you're using software RAID: lots of people familiar with it, documentation on it, testing tools available for it, and endless knobs to tweak until it's just right, and it's all open source.

you're not going to have that wide range of support with these proprietary-as-fuck death sticks, and the people who make them will be more than happy to fleece you on data recovery once you learn you're out of warranty. some extra performance is not worth introducing a single point of failure into an array specifically meant to mitigate that.
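
to make the tooling point concrete, here's a minimal sketch of the kind of monitoring you get for free with linux software raid (md): /proc/mdstat is plain text anyone can parse. simplified on purpose; real monitoring would lean on mdadm --detail and alerting:

def degraded_arrays(path="/proc/mdstat"):
    """Return names of md arrays with a dead member (simplified parse)."""
    bad = []
    with open(path) as f:
        lines = f.read().splitlines()
    for i, line in enumerate(lines):
        if line.startswith("md"):
            name = line.split()[0]
            # the next line ends like "[2/2] [UU]"; an underscore marks a dead disk
            status = lines[i + 1] if i + 1 < len(lines) else ""
            if "_" in status.split("[")[-1]:
                bad.append(name)
    return bad

if __name__ == "__main__":
    for array in degraded_arrays():
        print(array, "is degraded -- time to swap a disk")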
 
  • Thunk-Provoking
Reactions: dinoman

Reverend

Avatar of Change
kiwifarms.net
you keep preaching about servers that can stand the test of time, but you're pushing a component notorious for failing silently and in the most disastrous ways. you have such a wide support net when you're using software RAID: lots of people familiar with it, documentation on it, testing tools available for it, and endless knobs to tweak until it's just right, and it's all open source.

you're not going to have that wide range of support with these proprietary-as-fuck death sticks, and the people who make them will be more than happy to fleece you on data recovery once you learn you're out of warranty. some extra performance is not worth introducing a single point of failure into an array specifically meant to mitigate that.
I have experienced more software failures, from some shitty driver getting pushed to the kernel or from an update that causes headaches, than from rock-solid RAID controllers whose sole job is to keep shit alive.

I've yet to experience a hardware failure with a RAID controller made by LSI/Broadcom. Do they fail? Yes, but rarely; a lot less often than software RAID, which is just as bad as hardware RAID when it comes to getting data restored. What's your point?
 

OwO What's This?

𝑖𝑡'𝑠 𝑛𝑜𝑡 𝑝𝑜𝑟𝑛, ℎ𝑜𝑛𝑒𝑠𝑡
True & Honest Fan
kiwifarms.net
I have experienced more software failures, from some shitty driver getting pushed to the kernel or from an update that causes headaches, than from rock-solid RAID controllers whose sole job is to keep shit alive.

I've yet to experience a hardware failure with a RAID controller made by LSI/Broadcom. Do they fail? Yes, but rarely; a lot less often than software RAID, which is just as bad as hardware RAID when it comes to getting data restored. What's your point?
i have had the exact opposite experience so I guess we can agree to disagree or whatever
 