Wednesday, October 31, 2007

Wisdom

Wisdom showing wit alone
is worthless to this world.
This wisdom must be fed and grown
in order to unfurl.
For Steven, Albert, Thomas
did not sit idly.
They tried and asked and then alas
became what we now see.
Intellect requires knowledge
which from experience comes.
And all who try will gain an edge
o'er the foolish and the dumb.
“Be not afraid,” parents often say
to children in their youth.
While they're young they should be made
to be (moderately) uncouth.
So try and live and always give
the best you poss'bly may,
For those who die and never live
do their own selves betray.
Hawking, Einstein, Edison
would disapprove inaction.
Their effort, brain, and a bit of fun
Has formed their tiny faction.
Once again reiterating
for wisdom's sake and yours,
All these things are intertwined
Lack one and all are lost.


Saturday, October 13, 2007

Solaris File Server (w/ ZFS)

Preface
Building a terabyte file server (very predictably) proved to be a great learning experience and an exercise in picking hardware and selecting the right software to run on it. My goal here isn't to provide a step-by-step how-to for building a home file server, but instead to provide insight into the decision-making process involved in such a project.

The Hardware/Software Relationship
When most people buy PC hardware, they don't give much thought to the software they'll be running on it. Usually the software doesn't depend much on the specific hardware, so everything works out just fine regardless. I faced a fairly unusual situation in deciding which hardware to purchase for this server, especially once the financial constraints of a college student were factored in. Should I buy a low-end motherboard with relatively few features and use add-in cards wherever necessary? Should I stick to older hardware so that I could reuse components I already owned and save some money? Should I buy a RAID card, or would I be using software RAID? Or should I just make sure to get a motherboard with on-board RAID? These are only a few of the questions I faced going into this.

What I found is that clearly defining goals saves a lot of time when it comes to deciding which hardware to buy, so I set mine. I wanted all-new hardware so that upgrading and repurposing the guts of the machine would be feasible in the future. I wanted relatively low power consumption most of the time, but the ability to turn on the horsepower when I really needed it. Obviously, I wanted an obscene amount of storage to keep everything on. Lastly, I wanted this thing to be as resilient and robust as possible in the face of catastrophic hardware failures (easy and inexpensive repairs would also be nice), so I chose software-based RAID to keep the array independent of the hardware.

Solaris and ZFS
To be totally honest, I was already sold on ZFS before I made any of the hardware buying decisions; any geek would be, considering that Sun has put together what just might be, as they've termed it, "the last word in file systems." The ease of use, the data integrity guaranteed by checksums, the reliability, the speed, and the efficient use of disks and disk space all had me sold on ZFS as a file system, and that was ultimately the main factor driving the hardware choices I made.
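For the curious, setting up a pool like this boils down to just a few commands. Here's a rough sketch, with made-up device names (the real ones come from the format utility), not a transcript of exactly what I typed:

    zpool create tank raidz c1d0 c2d0 c3d0 c4d0 c5d0 c6d0   # one raidz vdev spanning six disks
    zfs create tank/share                                    # carve a filesystem out of the pool
    zpool status tank                                        # pool health plus checksum error counts
    zpool scrub tank                                         # verify every block against its checksum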

The Hardware
Here were the final specs:
  • An old Lian-Li case capable of holding at least 8 3.5" devices. That's no joke.
  • Old PCI video card
  • EPoX EP-MF570SLI AM2 Motherboard
  • AMD Athlon64 X2 4000+ Brisbane
  • 430 watt Thermaltake PSU
  • 2x1 GB of DDR2 800 from Transcend
  • 80GB Seagate drive
  • 6x500 GB Western Digital Caviar SE16 (WD5000AAKS)
It should be noted that all the hardware, excluding the 500 GB hard drives and the stuff I already owned, cost me less than $300. After hard drives, the whole system came to just under $900. I settled on not buying a new enclosure because the one I already owned had no practical limit on the number of drives I could fit: besides the 8 3.5" bays, I had 2 5.25"-to-3.5" bay converters just in case. Onboard video would have been ideal, but unfortunately most high-end motherboard manufacturers don't bother with it; since I planned on running this thing out of a closet and administering it over SSH, the less power the better. I opted for an old PCI video card I had sitting around just to satisfy the requirement.

The motherboard I chose is a beast, to put it mildly. It has PCI and PCIe slots (including support for SLI, which is nice even though I'll obviously never use it for this purpose), dual gigabit NICs, which have been one of the nicest features, 8 SATA and 2 PATA connectors, dual-channel RAM support, and a bunch of smaller conveniences that make for a nice package. The complaints? Two of the SATA ports and one of the PATA ports require drivers to work, since they're tacked on alongside the chipset rather than being part of it; in Solaris, that means they might as well not exist. The board also has two fans to keep it cool, one for the chipset and one hanging off the I/O plate that can be disabled (but that I've left enabled regardless). Even though this is going into a closet, I'd be more comfortable knowing it didn't need those two fans. The fewer the better. Lastly, onboard video, while too much to expect from a high-end motherboard, would have made this thing perfect. Still, for the $80 I paid (open box), it can't be beat.

The AMD processor I chose is dual-core, something Solaris can easily take advantage of given its heritage, cost $65, which was a perfect price point, and is rated at 65 watts. Throw in overclockability in case I'm ever in the mood, and you've got a winner. The PSU was selected because it was a) $40 and b) very highly rated, with over 1400 reviews on newegg.com. Normally I wouldn't be so cheap with a power supply, but this offer was too good to pass up. The RAM, too, I got lucky with: DDR2 800, very fast timings, and it doesn't need more than the standard 1.8 V for DDR2. The 80 GB Seagate drive and the optical drive were things I had lying around, and the 500 GB drives were the cheapest I could find online that weren't refurbs. As a bonus, all the reviews indicate they're excellent quality drives.
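As a quick aside, it's easy to check which drives Solaris actually sees once it's installed, which matters given those unsupported ports. A couple of stock commands do it (nothing here is specific to my setup):

    format        # with no arguments, lists every disk the OS has attached
    prtconf -D    # prints the device tree with the driver bound to each node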

The Software
I've already said a little bit about Solaris and ZFS, but I haven't even begun to do them justice. I'd heard about ZFS from a friend, done my own research, and it was love at first sight. Problem was, I'd never worked with any version of Solaris before, and support for those who aren't Sun's customers is relatively sparse compared to certain Linux communities (Ubuntu comes immediately to mind). Also, this machine needed to run well and run for a long time, with only occasional changes and easy recovery in case of failure. I was already sure Solaris would work with the hardware, since someone had mentioned in a review that they'd gotten it running just fine. So while I waited for my hardware to arrive, I found a few tutorials online and worked through them on a Solaris virtual machine I'd installed on my Windows desktop. Turns out ZFS was even easier to use than everyone made it out to be, and the only thing I wasn't really confident about was configuring Samba for sharing on the network, but I didn't let that faze me. How hard could that be?
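For reference, the share side of smb.conf really doesn't need much. Something along these lines is the general idea; the share name and path below are placeholders rather than my actual configuration:

    [global]
        workgroup = WORKGROUP
        security = user

    # one share backed by a ZFS filesystem
    [tank]
        path = /tank/share
        read only = no
        browseable = yes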

Worst-Case Scenario
It didn't happen, but what if? What if everything came crashing down? With the software and hardware decided upon, I began wondering what my chances of losing data would be in the event of a catastrophic failure. Let's see. If everything except my hard drives went up in flames but somehow left the drives themselves untouched, I'd be just fine: pop the drives into another machine and import the ZFS pool. Done. So the failures worth worrying about were the drives themselves. I let a friend convince me to run the drives in a raid-z configuration as opposed to the raid-z2 I'd originally planned. We decided the extra data security wasn't worth the 500 GB it was costing me (raid-z2 keeps two drives' worth of parity instead of one, so going with raid-z bought back one drive's worth of space). If any one of the drives failed, it could be replaced, no problem: give the pool some time to resilver once the new drive is in place and we're ready to go again.

What about multiple drive failures? Hard drives suffer two types of failures, electronic and mechanical. The platters are sealed inside an almost totally airtight tomb, with the electronics exposed to the world. Say multiple drives' electronics failed because the drives got wet. Working through that thought experiment, I figured that as long as one drive had a working circuit board, I'd be fine. I'd be out a lot of money replacing lost hardware, but the data would survive: I could move that one working circuit board from drive to drive, clone each drive's data onto a new 500 GB drive, and get the array back up and running. What about multiple mechanical failures? Well, then I'd be totally screwed. The chances of that happening? I didn't bother with probability calculations, but my guess is that it's significantly less likely than my winning the lottery twice in a week. And I don't play the lottery.
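For what it's worth, the single-drive replacement really is that simple. Roughly, with c4d0 standing in for whichever disk actually fails:

    zpool status tank          # shows which device is faulted
    zpool replace tank c4d0    # tell ZFS a fresh disk now sits where the dead one was
    zpool status tank          # run again to watch the resilver progress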

Conclusion
Overall, the operation was a success. I did run into an annoying bug in the version of Samba that ships with Solaris 10 Update 4, but once I realized what was happening it wasn't very difficult to work around. The result: 2.5 TB of usable storage (six 500 GB drives, less one drive's worth of raid-z parity), with all the data integrity of raid-z and ZFS and the reliability of Solaris.
