Posts Tagged ‘ storage ’

Centralized Home Storage

The Problem

A few months ago, my wife started running out of room on her MacBook Pro. I’d purchased her a nice little Western Digital USB drive to use as her Time Machine backup target, but now she needed to start using it for her primary digital photo storage. I considered replacing her laptop’s internal drive, but that seemed more of a band-aid than a fix.

I’d previously tried using an HP Media Vault (MV2010) for network storage, but without significant hacking I’d be limited to ~300 GB of mirrored space. With my wife likely to get a DSLR in the near future, and my desire to start backing up all the videos I’ve been taking lately, the existing hardware seemed like a deal-breaker.

The Strategy

Still, centralized storage seemed to be the right way to go for long-term scaling.  I took stock of what was available to me around the house:

  1. Gigabit wired network
  2. Wireless-N WiFi router (Linksys WRT610N)
  3. Older multi-core Linux machine
  4. 4 spare SATA drives

I was reluctant to try just another NAS machine; given that I had a machine to turn into a dedicated home server, Direct-Attached-Storage (DAS) seemed to make more sense.  Not only would I be less likely to run into embedded-OS pains, but the more modular approach would help future upgrade plans.

The next question to answer was what technology would connect the server to the drives. USB 2.0 would be far too slow, USB 3.0 is still a ways out and likely to have some painful initial costs. The remaining choices were eSATA and Serial Attached SCSI (SAS).

eSATA has become a fairly commodity-level technology; its spread seems particularly tied to DVRs that use it for local storage expansion.  When you’re dealing w/ a single external device, there are zero cons to going this route, but I was envisioning an expandable array, maxing out at perhaps 8 devices.  Most eSATA implementations would utilize port multiplication, and here’s the eSATA gotcha: you end up dividing the effective bandwidth of your cable across all the attached drives.  With projected workloads including virtualization and multimedia streaming, I didn’t want to incur those penalties.
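As a back-of-the-envelope illustration (the figures here are assumptions, not measurements): a SATA II link carries roughly 300 MB/s of usable payload after 8b/10b encoding, so sharing it across five drives behind a port multiplier leaves each drive with a fraction of that:

```shell
# Illustrative only: effective per-drive bandwidth behind an eSATA
# port multiplier, assuming a SATA II link (~300 MB/s usable payload)
# shared evenly by 5 concurrently-active drives.
link_mb_per_s=300
drives=5
echo "$(( link_mb_per_s / drives )) MB/s per drive under concurrent load"
```

Sixty-ish MB/s per drive is fine for streaming a single video, but it stings when several workloads hit the array at once.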

SAS (Serial Attached SCSI) is a standard for connectors, cables, and drives, but most folks use commodity SATA drives with SAS controllers.  Using cheap SAS-to-SATA breakout cables, you can drive up to 4 SATA devices off of one multilane SAS connection.  You can even use specialized hardware (edge and fan-out expanders) to vastly increase the number of devices supported.  The only downside is that SAS is a relatively new technology, without a ton of hardware or driver support out there.

The Hardware

After crunching some numbers in a Google Docs spreadsheet, I took the plunge and ordered:

The total ended up being about $330 USD (I purchased an open-box ARC-1300 for ~$100, regularly ~$160).  This will get the array started for the 4 drives I already have; I’ll eventually need to order another controller card and cable to support the final 4 hard drives.

Here’s a shot of the case w/ the cover off:

addonics tower (cover off)

The build quality is excellent; beveled edges everywhere meant very few nicks and gouges for my fingers… always appreciated 🙂

Adding in the extra multilane-to-SATA bridge board proved trivial; I just removed one of the SCSI punch-outs on the back of the case, and screwed the board in place.  As you can see, not much to the board itself:

ad4saml multilane connector

I’m also using an existing SATA drive cage to temporarily keep costs down; I’ll migrate to 8 external mobile racks as future budget permits:

supermicro sata 5x raid cage

I ran into one annoyance: the multilane cable did not like the screw-holes on the bridge board:

highpoint cable + bridge mismatch

Some cursing and 5 minutes with a small set of pliers later, the issue was fixed:

acceptor screws removed

Operating System & Drivers

Recent versions (2.6.32+?) of the Linux kernel support the Areca ARC-1300 card via the mvsas driver.  Thankfully, Red Hat has backported the majority of the driver to RHEL 5.4; I only had to apply the following patch to get the driver to recognize the card and the attached drives:

diff -up ./mvsas.c.orig ./mvsas.c
--- ./mvsas.c.orig      2009-11-01 08:18:41.000000000 -0500
+++ ./mvsas.c   2009-11-01 08:18:41.000000000 -0500
@@ -69,6 +69,9 @@

+#define PCI_DEVICE_ID_ARECA_1300       0x1300
+#define PCI_DEVICE_ID_ARECA_1320       0x1320
 /* driver compile-time configuration */
 enum driver_configuration {
        MVS_TX_RING_SZ          = 1024, /* TX ring size (12-bit) */
@@ -482,6 +485,8 @@ enum chip_flavors {
+       chip_1300,
+       chip_1320

 enum port_type {
@@ -678,6 +683,8 @@ static const struct mvs_chip_info mvs_ch
        [chip_6320] =           { 2, 16, 9  },
        [chip_6440] =           { 4, 16, 9  },
        [chip_6485] =           { 8, 32, 10 },
+       [chip_1300] =           { 4, 16, 9  },
+       [chip_1320] =           { 4, 64, 9  },

 static struct scsi_host_template mvs_sht = {
@@ -2816,6 +2823,8 @@ static struct pci_device_id __devinitdat
        { PCI_VDEVICE(MARVELL, 0x6440), chip_6440 },
        { PCI_VDEVICE(MARVELL, 0x6485), chip_6485 },
+       { PCI_VDEVICE(ARECA, PCI_DEVICE_ID_ARECA_1300), chip_1300 },
+       { PCI_VDEVICE(ARECA, PCI_DEVICE_ID_ARECA_1320), chip_1320 },

        { }     /* terminate list */
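With the patched module rebuilt and loaded, a quick sanity check confirms the kernel sees the controller and its drives.  This is just a sketch; the exact dmesg output and device names will vary per system:

```shell
# Load the patched driver, then verify the controller and attached
# disks are visible. Output will differ from system to system.
modprobe mvsas
dmesg | grep -i mvsas          # driver initialization messages
lspci | grep -i areca          # the ARC-1300 should appear on the PCI bus
ls /dev/sd?                    # the attached SATA drives
```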

RAID & LVM Configuration

When I was doing my initial research, I was very interested in the Drobo Pro; its RAID-6’ish configuration and aesthetic design were pretty compelling.  The $1500 USD price tag, on the other hand, was not.

At this point, given how ridiculously cheap SATA drives are, I’m just going to use a combination of RAID1 and LVM.  Yes, I know, RAID6 could be pretty compelling, but I’ve got sets of mismatched drives, so pairing them up by capacity seems like the smartest move for now.

For every two-drive set, I’ll create a new /dev/mdX device using mdadm.

mdadm --create /dev/md0 --level=raid1 --raid-devices=2 /dev/sdb /dev/sdc

mdadm --create /dev/md1 --level=raid1 --raid-devices=2 /dev/sdd /dev/sde
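Once the arrays are created, md kicks off an initial resync in the background; its progress and each array’s health can be watched with the standard tools (device names as above):

```shell
# Watch resync progress and inspect array state for the new mirrors.
cat /proc/mdstat               # resync progress bars for md0 and md1
mdadm --detail /dev/md0        # member drives, state, and sync status
```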

Each mdX device will then be added in turn to a vg_storage LVM volume group.  I’ll spin logical volumes off that volume group as needed.

[root@localhost ~]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "vg_system" using metadata type lvm2
[root@localhost ~]# lvscan
  ACTIVE            '/dev/vg_system/lv_root' [68.59 GB] inherit
  ACTIVE            '/dev/vg_system/lv_swap' [5.81 GB] inherit
[root@localhost ~]# pvcreate /dev/md0
  Physical volume "/dev/md0" successfully created
[root@localhost ~]# pvcreate /dev/md1
  Physical volume "/dev/md1" successfully created
[root@localhost ~]# pvscan
  PV /dev/sda2   VG vg_system   lvm2 [74.41 GB / 0 free]
  PV /dev/md0                   lvm2 [149.05 GB]
  PV /dev/md1                   lvm2 [279.46 GB]
  Total: 3 [502.92 GB] / in use: 1 [74.41 GB] / in no VG: 2 [428.51 GB]
[root@localhost ~]# vgcreate vg_storage /dev/md0 /dev/md1
  Volume group "vg_storage" successfully created
[root@localhost ~]# vgdisplay vg_storage
  --- Volume group ---
  VG Name               vg_storage
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               428.50 GB
  PE Size               4.00 MB
  Total PE              109697
  Alloc PE / Size       0 / 0
  Free PE / Size        109697 / 428.50 GB
  VG UUID               AN7IqW-UBsQ-D55g-vUmh-z6h0-wC49-l09oaz

So, I now have ~430 GB of storage available for new logical volumes!

I’ll probably pick up 2 x 1TB drives at some point to replace the old SATA-I device (md0); that’ll be a future post on migrating physical volumes out of an LVM volume group 🙂
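For the impatient, that migration should boil down to a pvmove/vgreduce pair.  This is only a sketch; it assumes the new 1TB pair has already been mirrored as a hypothetical /dev/md2 and added to vg_storage with enough free extents:

```shell
# Sketch: evacuate all extents from the old SATA-I mirror, then drop
# it from the volume group. Assumes /dev/md2 (the new 1TB pair) is
# already a PV in vg_storage with enough free space to absorb md0.
pvmove /dev/md0                # relocate md0's extents onto other PVs
vgreduce vg_storage /dev/md0   # remove md0 from the volume group
pvremove /dev/md0              # wipe the LVM label from the old mirror
```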

Logical Volumes & Filesystems

I did some research on alternative filesystems, such as ZFS and btrfs, but they don’t seem quite ready yet.  Once btrfs gains RAID-6 support and is declared production-ready, about the only other thing I’d like to see is some of the storage-virtualization functionality present in the Drobo Pro.  For now, though, ext3 logical volumes on the infrastructure defined above should more than meet my needs.
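Carving out a volume then becomes a three-step affair.  A sketch, where the lv_photos name, the 100G size, and the /srv/photos mount point are all just examples:

```shell
# Example only: create a 100GB ext3 volume for photo storage and
# mount it. Names and sizes are placeholders.
lvcreate -L 100G -n lv_photos vg_storage
mkfs.ext3 /dev/vg_storage/lv_photos
mkdir -p /srv/photos
mount /dev/vg_storage/lv_photos /srv/photos
```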

Need storage enclosure…

So, Janel filled up her laptop’s hard drive. I got her one of those little Western Digital external USB drives, and it’s nice and all, but we really are starting to need centralized storage for the house.

I’ve got a number of old drives, and even a 5-bay RAID cage to hold them, but no actual case to put it all in.

Ideally, I’d like something that’s about mini-tower-sized or smaller, and just holds the drives, fan, and power supply. I’d be happy w/ eSATA, regular SATA, or SAS connections.

Something like this tower from Addonics:

Anyone have recommendations?