In Defiance of Titles

Just another WordPress.com weblog

My Home NAS, A Performance Sidenote

Still more to go on my home NAS series, but I thought I’d take a moment to point out a recently-published article that benchmarks the performance of the MSI Wind box against several other DIY and off-the-shelf NAS units. The author ends with the conclusion that DIY NAS boxes based on Intel Atom chipsets (and also the VIA C7) typically get twice the throughput of their store-bought counterparts. So, if you were wondering if all this fiddling is worth it, there’s at least some evidence that it might be 🙂

Coming soon: Samba file sharing and possibly some extras about monitoring, security and performance.

Written by jazzslider

January 30, 2009 at 6:51 pm

My Home NAS, Part 9: NFS Over SSH

If it seems like I’m on an acronym kick, it’s not my fault. In the previous bits of my home NAS series, I’ve shown you all sorts of them: S.M.A.R.T., RAID, LVM, SMB…wait, what happened to SMB? Astute readers will no doubt notice that I initially intended to connect my NAS to my network using the ubiquitous Windows networking protocol, SMB, which in its UNIX implementation is referred to as Samba. So why is this post’s acronym NFS?

Ultimately, it was the realization that none of the computers I use on a daily basis are running Windows anymore. If I’m using UNIX-like systems all the time, why not use a UNIX-native networked file system tool? Enter NFS, which, appropriately enough, stands for Network File System. Cue the band. (For those of you who are hoping for Windows connectivity, don’t worry; I’ll cover Samba in a future post. Nice thing is, you can run both if you want to.)

Now, before we get started, there are a few kind of strange things about NFS you’ll want to know. First, you don’t log in to an NFS share. NFS is designed on the assumption that, within a given network, one user always has the same numeric user ID. The net result of this is that if you can log on to an authorized client computer as, say, user 1000, you are effectively logged into the NFS server as user 1000 as well (but only within the shared directories). Fortunately for security’s sake, it is possible to very strictly control which network hosts are authorized to access the share, which effectively means that the only way someone can get to the share is by gaining access to one of the client computers (or compromising the server itself).

One other thing worth noting up front: ultimately, my goal was to make this NFS share securely accessible over the internet. However, given the login-less nature of the system, exposing an NFS share to the open internet is really kind of stupid without additional security measures. I considered setting up a full-blown VPN for this, but just then one of my co-workers introduced me to SSH tunneling (or port forwarding). Tunneling NFS through an SSH connection allows just the extra security I need for internet access to the share, and is a good idea even if you’re only accessing it locally; you just can’t be too careful.

(Much of the following tutorial was adapted from this HowToForge page. All credit where credit is due, after all.)

OK, time to get the hands dirty. First things first, we need to install the NFS server components. Debian gives you two choices here (nfs-user-server and nfs-kernel-server); I opted for the kernel version, which is easily installed via a simple apt-get install nfs-kernel-server command. (Note: if you uninstalled the RPC services early on in this guide, this should reinstall them. That’s a good thing; they’re necessary for NFS.)

Next, since we’re going to be tunneling this through SSH, we need to set the various NFS-related services up such that they use static (i.e., predictable) port numbers. To do this, first edit /etc/default/nfs-common; if there’s a line beginning with STATDOPTS, change it to the following (if it’s not already there, just add it to the end of the file):

STATDOPTS='--port 2231'

Next, edit /etc/modules.conf such that it includes the following line:

options lockd nlm_udpport=2232 nlm_tcpport=2232

And finally, edit /etc/default/nfs-kernel-server such that the line beginning with RPCMOUNTDOPTS reads:

RPCMOUNTDOPTS='-p 2233'
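
Those edits won’t take effect until the services are restarted (and the lockd options in particular may not apply until the module is reloaded or the box is rebooted). Something like the following should do it; this step isn’t spelled out above, so treat it as a sketch:

/etc/init.d/nfs-common restart
/etc/init.d/nfs-kernel-server restart
rpcinfo -p   # verify that statd and mountd now show up on 2231 and 2233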

The next step is to create the user that will dig the tunnel each time you connect. In my setup, this is also the user who owns the files shared over the network, so it’s important that you choose the user’s UID carefully. For example, all of the client computers in my setup are Macs, and the default UID in OS X is 501, so the user I added here is also user 501. I named it macshare so that I know exactly what it’s for. To add the user…

adduser macshare --uid 501
adduser macshare users

You’ll also want to set up public key authentication for the new user to avoid having to enter in a password. I’d put the steps in here, but you can find them in detail in lots of places all over the internet, and I really don’t want to rewrite any more of that earlier HowToForge article’s content. So, I’ll leave that step up to you for now.
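
(If you’ve never done it before, the short version looks roughly like this; hostnameorip is a placeholder for your NAS’s address, and ssh-copy-id isn’t on every client, notably older OS X, in which case you can append the contents of id_rsa.pub to ~macshare/.ssh/authorized_keys on the server by hand.)

ssh-keygen -t rsa                    # run on the client; generates ~/.ssh/id_rsa and id_rsa.pub
ssh-copy-id macshare@hostnameorip    # installs the public key on the server for passwordless login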

Moving forward, we’ve still got to set up the NFS shares, or “exports”. This is actually pretty easy; it’s all handled through NFS’s central nervous system, the /etc/exports file. It should just be comments at this point, describing the various kinds of things you can do. My own goal was to set up each of the logical volumes we created earlier as its own NFS share, so my /etc/exports ended up looking something like this:

/export/standard 127.0.0.1(rw,sync,no_subtree_check,insecure)
/export/critical 127.0.0.1(rw,sync,no_subtree_check,insecure)
/export/pictures 127.0.0.1(rw,sync,no_subtree_check,insecure)
/export/music 127.0.0.1(rw,sync,no_subtree_check,insecure)

Each line is a different share, and the various configuration fields are separated by spaces. The first field is the full path to the directory you’re sharing; that’s easy enough. The second field begins with the hostname or IP address of a machine that’s allowed to access that directory; in this case, we used 127.0.0.1, which is the server’s own localhost address. In effect, this means that the server is only exporting these directories to itself, thus ensuring that other computers can only access them via port forwarding. Anyway, immediately following the IP address is a parenthesized list of options specifying how this share can be mounted; I won’t go into the details, but suffice it to say, these were the options that worked for me.
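
One thing to keep in mind (not covered above): the kernel won’t necessarily pick up edits to /etc/exports on its own, so after changing the file you’ll probably need to re-export or restart the NFS server:

exportfs -ra                           # re-read /etc/exports and apply the changes
/etc/init.d/nfs-kernel-server restart  # the blunter alternative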

The configuration doesn’t stop there! For security purposes, you’ll need to take a look at your /etc/hosts.allow file to make sure that the localhost IP address above is allowed to access a few particular services. If you put the following lines at the top of your /etc/hosts.allow file (under the comments, of course), you should be good to go:

portmap: 127.0.0.1
lockd: 127.0.0.1
mountd: 127.0.0.1
rquotad: 127.0.0.1
statd: 127.0.0.1

It’s also important that your client machines be able to access the sshd daemon; otherwise, your SSH tunnel won’t be allowed. So, if you haven’t already, make sure to add an appropriate record for sshd in /etc/hosts.allow. If your NFS configuration doesn’t end up working, this is a really important place to look for errors.
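
For example, a line along these lines would allow ssh connections from a typical home subnet (the 192.168.1. prefix here is just a placeholder; match it to your own network, or use ALL if you’re comfortable relying on other layers of security):

sshd: 192.168.1.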

Now, on to the SSH tunnel. There are actually two distinct services we’ll need to tunnel, listening on two distinct ports…so we’ll need two tunnels. The first service is nfs itself; to find out which port it’s using, type in rpcinfo -p (on the server) and look for the nfs line(s). Mine was listening on port 2049. The second service we’ll need to tunnel is mountd, which earlier we set up to listen on port 2233. Knowing these details, the ssh command to set up both tunnels looks a little like this:

ssh macshare@hostnameorip -L 48151:127.0.0.1:2049 -L 62342:127.0.0.1:2233 -f sleep 600m

As you can see, there are two -L options, one per tunnel. The first forwards port 48151 on my client computer to port 2049 on the server (which is identified by its own localhost IP, just like in the /etc/exports file on the server); that’s the tunnel for NFS. The second forwards port 62342 on my client computer to port 2233 on the server; that’s the tunnel for mountd. For the life of me, I don’t know why the other NFS-related services aren’t involved, but I’m kind of glad they weren’t; I ran out of LOST numbers. (By the way, if you’re concerned about using Hurley’s magic lottery numbers to forward your data around your home network, be advised that the client-side port numbers are entirely arbitrary; just make them very large.)

Once you’ve got the tunnel running (ideally you’d set it up to run automatically), all that’s left is to mount the NFS share(s) to appropriate locations in your filesystem. This process varies by operating system (even across UNIXes), so for now I’ll leave that up to you. If my hands stop hurting from writing this terrifically long post, I may add the details later; for now, happy fiddling!
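
Just to give you the flavor, on a typical Linux client the mount ends up looking roughly like this, pointing at the two forwarded ports from the ssh command above (the /mnt/critical mount point is just an example, and OS X clients use somewhat different options):

mkdir -p /mnt/critical
mount -t nfs -o tcp,port=48151,mountport=62342 127.0.0.1:/export/critical /mnt/critical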

Written by jazzslider

January 15, 2009 at 6:03 pm

My Home NAS, Part 8: Hardware Manifest

I know I said my next post was going to be about NFS setup, but I thought it might be useful instead to take a momentary break for listing off the final hardware manifest. My previous posts have been a little unclear on this subject, so to avoid confusion, here’s a list of everything I bought that’s currently part of the machine, along with links to NewEgg product pages:

Component | Purpose | Price (as of today)
MSI Wind PC Barebone | Motherboard, processor and case, all pre-assembled in a nice, neat, power-saving package | $139.99
Kingston Elite Pro Compact Flash Card | Operating system storage | $22.99
512MB DDR2 533 memory (don’t remember which brand I used…) | System memory | $7-8
Western Digital Caviar SE16 WD5000AAKS 500GB | Hard drive, /dev/sdb in the main RAID array | $64.99
Seagate Barracuda ST3750640AS 750GB | Hard drive, /dev/sda in the main RAID array (no longer available) | $80
VIGOR iSURF II HCC-S2BL Aluminum | HDD cooler for the 750GB drive | $22.99

All of this is variable to suit your tastes, of course…this is just what I used. If you’ve got a bit more money lying around, it might be worth investing in larger hard drives. Make sure they’re SATA (ideally SATA 3Gb/s) drives rather than IDE, as the Wind PC doesn’t have any IDE ports. 7200RPM is also a good idea for quick access times.

Also, I’ve found that 8GB for the operating system is a bit on the high side; on my current setup, Debian is only taking up a measly 1.7GB. You could probably get away with a 4GB card instead, but the prices are low enough that it doesn’t save you much.

Oh, and as far as memory is concerned…512MB has been plenty for my purposes (my swap file is pretty much never used), but if you can afford 1 or 2 gigabytes instead, why not? Just make sure it’s DDR2 533 laptop memory (yes, this is basically a laptop without all the mobility-related bits), and only one stick of it (the Wind PC only has one memory slot).

Meanwhile, the entire software stack is free and open source, so there’s nothing else to buy beyond what I’ve listed above (unless you want something else, of course).  Based on the components/prices above (and assuming you don’t need to buy any of the extra stuff I’m about to list off), you’re looking at a total project cost of about $340, not including any shipping costs you might incur.  Not a bad deal considering the degree of customization you get along with it.

There are a couple of other things you’ll need, but only temporarily; for these, it’s best to just use something you’ve got lying around if you can:

  • 1GB SD card or USB flash drive, to use as the installer media
  • Monitor
  • USB keyboard (PS/2 won’t work; the Wind PC doesn’t have a PS/2 port)
  • Another computer which (among other things) will be used to set up the installer media; to really get this right, you’ll need to install something like VirtualBox and create a dummy Debian installation that can access your installer media…which I suppose I should document 🙂

Anyway, next time we’ll get back to the step-by-step setup; there’s still a fair amount of software setup to do, but we’re almost there.

Written by jazzslider

January 13, 2009 at 7:14 am

My Home NAS, Part 7: Breaking Things Down with LVM

As I mentioned at the end of my RAID setup post, I want the storage space on my home NAS divided up into several fixed-size filesystems, each associated with a different purpose. Now, one approach here would have been to divide the physical disks up into several partitions and create several separate RAID arrays on top of those…but that seems a bit like overkill, and certainly isn’t very flexible if I later increase the size of the array. So, after a little research, I discovered a better solution: Logical Volume Management, or LVM.

Linux LVM allows you to create flexible logical volumes on top of an existing set of devices, either to join them together into one giant filesystem, or to separate them out by logical purpose. In my case, I wanted the final setup to look something like this:

Purpose | Allocated Space | Mount Point
Non-Critical Documents | 45GB | /export/standard
Critical Documents | 25GB | /export/critical
Operating System Backup | 8GB | /root/os-backup
Pictures/Home Videos | 182GB | /export/pictures
Music/Purchased Videos | 182GB | /export/music
MySQL Storage | 8GB | /var/lib/mysql
Cacti RRD Files | 8GB | /var/lib/cacti
Extra Swap Space | 2GB | /root/swap

I know, the math doesn’t quite work out right…but ignore that, you’ll enjoy this more.

Anyway, there are a couple of unusual details here that I’d like to explain. First…I broke my “Documents” storage up into two separate volumes, one labeled “Critical” and the other “Non-Critical.” The “Non-Critical” documents are things I’ve already backed up to DVDs, but might want immediate access to. The “Critical” documents are things I’m working on right now that I haven’t quite gotten backed up yet; everything on that volume is backed up nightly (using the duplicity command-line utility) to Amazon S3, so I needed to make sure it couldn’t get too large. I am not, after all, made of money. Meanwhile, the “Operating System Backup” volume is, as its name implies, a place to store a copy of everything on the CompactFlash card in case it fails and I need to put in a new one. Can’t be too careful.
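
For the curious, a nightly duplicity-to-S3 run boils down to something like the following (the bucket name and credentials are placeholders, and this is just the general shape of the command, not the exact backup script):

export AWS_ACCESS_KEY_ID=yourkeyid           # credentials for duplicity's S3 backend
export AWS_SECRET_ACCESS_KEY=yoursecretkey
duplicity /export/critical s3+http://your-bucket-name/critical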

Anyway, it’s reasonably simple to set up a logical volume structure like this. Make sure you’ve got the lvm2 package installed (apt-get install lvm2); then, create an LVM “physical volume” out of the device (or devices) that you want LVM to manage. In our case, we’ll use the RAID array we created last time (/dev/md0):

pvcreate /dev/md0

The next step is to create a “volume group;” volume groups are used to collect various physical volumes so that they can be treated as a single unit. Since we’ve only got a single physical volume to worry about, this is easy:

vgcreate -s 16M export /dev/md0

The -s parameter there is the size of the “physical extents” that make up the volume. Because this device is so large, it was important to choose a reasonably large physical extent size; unfortunately I can’t remember exactly why. “export”, meanwhile, is the name I used for the volume group; we’ll be using that again in a moment.

To figure out the total number of physical extents in the new volume group, you can run vgdisplay export; in my case, it came to 29808. However, you don’t necessarily have to know this to get things working properly, since LVM allows you to create your logical volumes using actual byte-size values instead. It’s just useful to know it’s there.
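
For instance, if you did want to size things in extents, lvcreate’s lowercase -l flag takes an extent count rather than a byte size; with the 16MB extents we chose above, the following would produce the same 25GB volume as the -L 25G example below:

lvcreate -l 1600 export -n critical   # 1600 extents x 16MB = 25GB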

Anyway, creating each logical volume is pretty simple; for each volume you want, run something like this:

lvcreate -L 25G export -n critical

The above command would give you a 25GB logical volume in the “export” group named “critical”. For each logical volume you set up, you’ll also need to create a new filesystem as follows:

mkfs.ext3 /dev/export/critical

See how the device filename works? The volume group is the second path component, and the logical volume name is the third.

Finally, once you’ve got your filesystems created, you just need to pick appropriate mount points and add entries to your /etc/fstab to get them assigned. I put pretty much everything in /export, since I’ll be “exporting” these filesystems via NFS later. One thing to note: it’s easiest if you don’t mount any of these inside each other, since NFS will get a little confused by that. Keep things simple. So, for instance, to add a mount point for our new “critical” volume, we do the following:

mkdir -p /export/critical
echo "/dev/export/critical /export/critical ext3 defaults,errors=remount-ro 0 1" >> /etc/fstab
mount /dev/export/critical

Easy enough. Next time I’ll show you how to get these things shared to other UNIX-based computers using NFS.

Written by jazzslider

January 11, 2009 at 10:43 am

My Home NAS, Part 6: RAID Setup

Now that my home NAS has its hard drives installed, it’s time to set up the RAID-1 array. As it turns out, this is pretty simple work. First, create a single Linux RAID Autodetect partition on each disk, taking up its entire usable space. You can do this by running fdisk /dev/sda; fdisk is pretty powerful, so just in case you’ve never done this before, I’ll walk you through the steps. The listing below shows fdisk’s prompts along with the appropriate responses; press the {enter} key after each command, of course.

Command (m for help): o
{several lines of output from fdisk}

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-x, default 1): {just press enter to keep the default}
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-x, default x): {again, just press enter}
Using default value x

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Repeat the process for your second disk (e.g., /dev/sdb). When you’ve finished with this process for both disks, you’ll have two new device special files in /dev, corresponding to the new partitions you created. These should be called /dev/sda1 and /dev/sdb1 (i.e., partition 1 on each disk). You’re now ready to create your RAID-1 array out of these partitions.

Make sure you’ve got the mdadm package installed (apt-get install mdadm); then, create your new RAID array as follows:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sd[ab]1

Like I said, dang simple. The arguments work as follows:

  • --create /dev/md0 tells mdadm that the device special filename for the new RAID device should be /dev/md0. This is conventional; I think you could name it something else if you wanted, but it’s best not to get too creative with this kind of thing.
  • --level=1 tells mdadm that this is a RAID-1 array. In a RAID-1 array, each disk is basically a copy of every other disk, getting you full redundancy in case one disk fails.
  • --raid-devices=2 tells mdadm that this array will have two devices in it, something which it could probably figure out from the fact that we’re about to tell it which two devices are in the array. But it never hurts to specify.
  • Finally, the flagless argument /dev/sd[ab]1 is actually expanded by the shell into two arguments, /dev/sda1 and /dev/sdb1 (that’s what the square brackets do; fun trick); these are the devices that will make up the array.

It’ll take a while to finish setting up the array, but you can actually start using /dev/md0 right away (if, for instance, you want to move on to my next post about LVM setup). You can always monitor the new array at any point by running one of two commands; either cat /proc/mdstat or mdadm --detail /dev/md0 will give you useful information about how things are running. Once it’s had time to create the array, both of these commands should show you that the state of the array is clean. But that may take a while, so be patient.

Now, things aren’t quite usable yet in this state. For reasons related to my backup policy (which I will explain later hopefully), I wanted to separate the array out into several distinct filesystems, each with a fixed size. Unfortunately, I’m out of time for now, so I’ll have to show you that process in another post.

Written by jazzslider

January 11, 2009 at 10:13 am

My Home NAS, Part 5: Finally, Hard Drives

Well, it’s been a couple of months since my last post, and I’ve gotten quite a bit more work done on my home NAS project. When I last posted, I had finished installing the basic hardware and operating system, but hadn’t quite settled on the right set of hard drives.  But all that’s changed; in fact, the ol’ Wind PC has been up and running pretty solid for the last month or two, and I’m happy to report that the project is a success.  In this post (and any that might follow it) I’ll tell you a bit more about the rest of the setup.

First, the hard drives.  I decided to break with one of my initial project requirements; the final NAS has only 500GB of usable storage.  For scalability’s sake, I decided to install two disks in a RAID-1 configuration, one of them a 500GB from Western Digital, and the other a 750GB from Seagate.

[Photo: Installing to the 3.5" bay]

Since the size of a RAID-1 array is always equal to the size of its smallest member, that gives me 500GB of usable storage; however, if I later replace the 500GB drive with, say, a 1TB drive, the array can then be grown to 750GB.  And so on, and so forth, etc., etc.  I’ve tested this procedure on a virtual machine, but so far haven’t had the chance to try it out on the actual box…so if a year from now I try it and it fails, please don’t blame me 🙂

Anyway, there were a few steps I wanted to take to make sure things stayed running smoothly. First, simple disk monitoring using the Debian smartmontools package. Make sure it’s installed with a simple apt-get install smartmontools; then, in /etc/default/smartmontools, make sure you’ve got a line (uncommented) that says start_smartd=yes. And, just to make sure we’ve got a clean slate…

cp /etc/smartd.conf /etc/smartd.conf.org
echo "# /etc/smartd.conf config file" > /etc/smartd.conf

Then, for each installed drive (on the Wind box, they’re /dev/sda and /dev/sdb), add appropriate monitoring instructions to /etc/smartd.conf. If you’re the copy-and-paste type, this should get you going:

echo "/dev/sda \
-d ata \
-S on \
-o on \
-a \
-I 194 \
-n standby \
-m youremail@yourhost.com \
-s (S/../.././01|L/../../6/03)" >> /etc/smartd.conf

Obviously, you’ll want to put your own email in there, and run that code a second time with /dev/sdb in the first bit so that both your drives are monitored. Then, run /etc/init.d/smartmontools restart to make sure your new settings take effect.
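
In case you’re wondering what all of those flags mean, here’s the same entry annotated (my reading of the smartd.conf documentation; double-check against the man page shipped with your version):

# my reading of each directive:
#   -d ata      treat the drive as an ATA device
#   -S on       enable autosave of SMART attributes
#   -o on       enable automatic offline testing
#   -a          monitor all attributes plus the overall health status
#   -I 194      ignore changes to attribute 194 (temperature), which fluctuates constantly
#   -n standby  skip checks while the drive is spun down, so smartd doesn't wake it
#   -m          address for warning emails
#   -s          self-test schedule: short test daily in the 1am hour,
#               long test Saturdays in the 3am hour
/dev/sda -d ata -S on -o on -a -I 194 -n standby -m youremail@yourhost.com -s (S/../.././01|L/../../6/03)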

You can also check on things manually using the smartctl command; for instance, the following command will get you all the monitoring data for /dev/sda:

smartctl -d ata -a /dev/sda

When I first set all this up, I’d mounted one of the drives in the Wind PC’s 5.25″ bay using a set of simple metal adapter brackets:

[Photo: Drive installed on brackets in the 5.25" bay]

However, in this setup, both drives were floating around 45 degrees Celsius, which is really a bit too warm for my tastes. I’ve since installed the 750GB drive in a Vigor iSurf II cooling unit (I did have to reverse the fans to get it right), and things are down to 38 degrees on both drives. Great news.

[Photo: Drive installed in Vigor iSurf II]

Anyway, the next step is to set up the RAID array and the filesystems…which I will show you next time.

Written by jazzslider

January 10, 2009 at 6:41 pm

My Home NAS, Part 4

Having finished assembling the hardware and installing the base operating system for my home-grown NAS device, I’ve moved on to the fine art of tweaking.  For now, since there isn’t a hard drive in the box for major storage, I’m focusing on two things: the long-term stability of the CompactFlash card, and the general security of the machine.

As far as the CompactFlash card is concerned, there are a few things left to do to optimize its performance.  I’ve read from various sources that flash media tends to have a limited lifetime, especially when written to frequently.  As I noted in part 3, I avoided creating any swap space for this very reason.  However, there are still a few areas of the Linux filesystem that could cause extremely frequent write access, and it’d be nice to avoid this.  Based on some useful instructions from another blog, I decided to mount a few areas of the filesystem in RAM rather than on the CF card itself.  To do this, I edited /etc/fstab to look like the following:

proc            /proc           proc    defaults        0       0
/dev/hda1       /               ext2    defaults,errors=remount-ro,noatime 0       1
tmpfs           /var/run        tmpfs   defaults,noatime  0       0
tmpfs           /var/log        tmpfs   defaults,noatime  0       0
tmpfs           /var/lock       tmpfs   defaults,noatime  0       0
tmpfs           /var/tmp        tmpfs   defaults,noatime  0       0
tmpfs           /tmp            tmpfs   defaults,noatime  0       0

One important feature of this setup is that pretty much everything is mounted with the “noatime” option; this keeps the filesystem from recording information about when files were last accessed.  Otherwise, every time we read a file, the system would write a little bit of information to the card, decreasing its overall lifespan.

Now, the mounting setup above is great if we never turn the machine off; however, as you may have guessed, rebooting or shutting down the machine would mean that everything in those tmpfs directories is lost for good.  So, as it turns out, it’s not a bad idea to create a separate persistent version of at least some of it (especially /var/log) using cron jobs and init/shutdown scripts.  I followed the instructions in the blog referenced above pretty closely on this score, so I won’t list the details here.
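
The gist of those scripts is pretty simple, though; something along these lines (a rough sketch of the idea rather than the exact scripts from that blog, and /var/log.save is just a name used here for the persistent copy on the CF card):

# at boot, before syslog starts: repopulate the tmpfs from the saved copy
mkdir -p /var/log.save
cp -a /var/log.save/. /var/log/
# at shutdown, and periodically from cron: copy the logs back to the CF card
cp -a /var/log/. /var/log.save/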

As far as security is concerned, there are a few things worth considering.  First, since this is going to be a “headless” box, I needed ssh access from other machines on my local network.  The Debian package manager makes this extremely easy to set up.  First, remove the installation CD from the list of possible package sources (/etc/apt/sources.list).  Then, just run apt-get install ssh. Once the installation is complete, you’ll be able to log in from any computer on your local network using the machine’s IP address (which you can find out by looking at the output of ifconfig eth0).

There are, naturally, some security concerns that arise when using ssh (or any other program that opens up ports to the outside world, as we’ll see in a moment). To lock it down further, you might consider editing its configuration file (/etc/ssh/sshd_config) to include the following:

PermitRootLogin no

That’ll keep users from logging on as root through an ssh session (you can still be root by using su once you’ve logged in as another user).
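
Note that sshd only reads its configuration file at startup, so the change won’t take effect until you restart it:

/etc/init.d/ssh restart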

By default, there are a few additional services running in a Debian installation that should probably be disabled in this setup. For instance, the RPC services aren’t needed at this stage, so we can remove them; if you’re planning on sharing files over NFS later, don’t worry, installing the NFS server will reinstall them. We also won’t be needing the standard identd daemon running, so we can remove it too. I’d have more detailed instructions on how to do this, but I seem to have forgotten to document it; maybe I’ll add it later. In any case, the end result (at least at this stage) is that the only open port is the one you’re using for ssh.
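
For what it’s worth, the general approach is to look at what’s listening and then remove or disable the corresponding packages; something like this (a sketch rather than an exact recipe, since the package names depend on what your install pulled in):

netstat -tlnp                    # list listening TCP ports and the processes behind them
apt-get remove --purge portmap   # portmap provides the RPC services; skip this if you plan on NFS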

That’s about it for initial setup; I will probably not write much more about this for another couple of weeks, since it will probably take that long for me to get my hard drive(s) in, and that’s the only step left in this process. Here’s hoping it goes well!

Written by jazzslider

November 9, 2008 at 11:44 pm