In Defiance of Titles


My Home NAS, Part 10: Mac OS X Automounting


So, uh, some time ago when I wrote the last post in my home NAS tutorial (for reference, here are all the previous posts in the series), I made a rather bold omission:

Once you’ve got the tunnel running (ideally you’d set it up to run automatically), all that’s left is to mount the NFS share(s) to appropriate locations in your filesystem. This process varies by operating system (even across UNIXes), so for now I’ll leave that up to you.

Unfortunately, yesterday I found myself deeply regretting the decision not to document this step; somehow, when I upgraded my Mac to Snow Leopard, my NFS share configurations were wiped out, leaving me with nothing but scp to get files off the NAS. So, to avoid future headaches, I’ve decided to finish the documentation.

First, a word of warning for you Linux folks out there: Mac OS X handles NFS mounting differently than a lot of other UNIXes do; you’ll be tempted to start out in /etc/fstab, but don’t do it…it’s deprecated. OS X offers a more flexible mechanism that can automatically mount volumes based on records from various kinds of directory services…but that’s beyond what we’re looking for here, so we’re just going to make a few simple changes to the automount configs instead.

The end goal here is to take the NFS shares we previously exported (critical, standard, pictures, and music) and mount them all in one directory under /Volumes. To do this, we’ll first want to create an automount map file called “/etc/auto_your-server-name”. The contents should look something like this, assuming you used the LOST-inspired port forwarding from my previous post:

critical -tcp,port=48151,mountport=62342,resvport,locallocks localhost:/export/critical
standard -tcp,port=48151,mountport=62342,resvport,locallocks localhost:/export/standard
pictures -tcp,port=48151,mountport=62342,resvport,locallocks localhost:/export/pictures
music    -tcp,port=48151,mountport=62342,resvport,locallocks localhost:/export/music
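If you’d rather not hand-type the map, the four entries can be generated with a short loop. Here’s a sketch that writes to a scratch file first so you can inspect it before copying it into /etc; the server name and port numbers are just the example values from above:

```shell
# Generate the automount map in a scratch location; "your-server-name"
# and the port numbers are the example values from this post.
MAP="$(mktemp -d)/auto_your-server-name"
for share in critical standard pictures music; do
  printf '%s -tcp,port=48151,mountport=62342,resvport,locallocks localhost:/export/%s\n' \
    "$share" "$share" >> "$MAP"
done
cat "$MAP"
# Once it looks right: sudo cp "$MAP" /etc/auto_your-server-name
```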

So far so good. Next, we’ll need to tell the automounter to use this new file to populate a particular directory under /Volumes; to do so, add the following line to /etc/auto_master:

/Volumes/your-server-name auto_your-server-name
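If you script your setup, it’s worth guarding against adding that line twice. A sketch of an idempotent append, shown against a scratch copy (on the real machine you’d point MASTER at /etc/auto_master and run with sudo; the sample contents here are just a stand-in):

```shell
# Append the map entry to auto_master only if it isn't already present.
# MASTER points at a scratch copy here; use /etc/auto_master for real.
MASTER="$(mktemp)"
printf '#\n# Automounter master map\n#\n+auto_master\n' > "$MASTER"
LINE='/Volumes/your-server-name auto_your-server-name'
grep -qxF "$LINE" "$MASTER" || printf '%s\n' "$LINE" >> "$MASTER"
cat "$MASTER"
```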

Finally, tell the automounter to re-read the configs by executing the following command:

sudo automount -vc

If all goes well, you should see your four new mount points show up as shared directories under /Volumes/your-server-name; assuming, of course, that you’ve got your SSH tunnel running as per the previous tutorial. Hope this helps someone!

Written by jazzslider

September 10, 2009 at 6:38 pm

My Home NAS, Part 5: Finally, Hard Drives


Well, it’s been a couple of months since my last post, and I’ve gotten quite a bit more work done on my home NAS project. When I last posted, I had finished installing the basic hardware and operating system, but hadn’t quite settled on the right set of hard drives.  But all that’s changed; in fact, the ol’ Wind PC has been up and running pretty solid for the last month or two, and I’m happy to report that the project is a success.  In this post (and any that might follow it) I’ll tell you a bit more about the rest of the setup.

First, the hard drives.  I decided to break with one of my initial project requirements; the final NAS has only 500GB of usable storage.  For scalability’s sake, I decided to install two disks in a RAID-1 configuration, one of them a 500GB from Western Digital, and the other a 750GB from Seagate.

Installing to the 3.5" Bay


Since the size of a RAID-1 array is always equal to the size of its smallest member, that gives me 500GB of usable storage; however, if I later replace the 500GB drive with, say, a 1TB drive, the array can then be grown to 750GB.  And so on, and so forth, etc., etc.  I’ve tested this procedure on a virtual machine, but so far haven’t had the chance to try it out on the actual box…so if a year from now I try it and it fails, please don’t blame me 🙂

Anyway, there were a few steps I wanted to take to make sure things stayed running smoothly. First, simple disk monitoring using the Debian smartmontools package. Make sure it’s installed with a simple apt-get install smartmontools; then, in /etc/default/smartmontools, make sure you’ve got an uncommented line that says start_smartd=yes. And, just to make sure we’ve got a clean slate…

cp /etc/smartd.conf /etc/smartd.conf.org
echo "# /etc/smartd.conf config file" > /etc/smartd.conf

Then, for each installed drive (on the Wind box, they’re /dev/sda and /dev/sdb), add appropriate monitoring instructions to /etc/smartd.conf. If you’re the copy-and-paste type, this should get you going:

echo "/dev/sda \
-d ata \
-S on \
-o on \
-a \
-I 194 \
-n standby \
-m youremail@yourhost.com \
-s (S/../.././01|L/../../6/03)" >> /etc/smartd.conf

Obviously, you’ll want to put your own email in there, and run that code a second time with /dev/sdb in the first bit so that both your drives are monitored. Then, run /etc/init.d/smartmontools restart to make sure your new settings take effect.
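Rather than pasting that echo twice and editing it, you can loop over both drives. A sketch that writes to a scratch file so you can eyeball the result first (substitute /etc/smartd.conf and your real email address on the NAS itself):

```shell
# Build smartd.conf entries for both drives in one pass.
# SMARTD_CONF is a scratch file here; use /etc/smartd.conf for real.
SMARTD_CONF="$(mktemp)"
echo "# /etc/smartd.conf config file" > "$SMARTD_CONF"
for dev in /dev/sda /dev/sdb; do
  echo "$dev \
-d ata \
-S on \
-o on \
-a \
-I 194 \
-n standby \
-m youremail@yourhost.com \
-s (S/../.././01|L/../../6/03)" >> "$SMARTD_CONF"
done
cat "$SMARTD_CONF"
```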

You can also check on things manually using the smartctl command; for instance, the following command will get you all the monitoring data for /dev/sda:

smartctl -d ata -a /dev/sda

When I first set all this up, I’d mounted one of the drives in the Wind PC’s 5.25″ bay using a set of simple metal adapter brackets:

Drive installed on brackets in the 5.25" bay


However, in this setup, both drives were floating around 45 degrees Celsius, which is really a bit too warm for my tastes. I’ve since installed the 750GB drive in a Vigor iSurf II cooling unit (I did have to reverse the fans to get it right), and both drives are down to 38 degrees. Great news.

Drive installed in Vigor iSurf II


Anyway, the next step is to set up the RAID array and the filesystems…which I will show you next time.

Written by jazzslider

January 10, 2009 at 6:41 pm

My Home NAS, Part 4


Having finished assembling the hardware and installing the base operating system for my home-grown NAS device, I’ve moved on to the fine art of tweaking.  For now, since there isn’t a hard drive in the box for major storage, I’m focusing on two things: the long-term stability of the CompactFlash card, and the general security of the machine.

As far as the CompactFlash card is concerned, there are a few things left to do to optimize its performance.  I’ve read from various sources that flash media tends to have a limited lifetime, especially when written to frequently.  As I noted in part 3, I avoided creating any swap space for this very reason.  However, there are still a few areas of the Linux filesystem that could cause extremely frequent write access, and it’d be nice to avoid this.  Based on some useful instructions from another blog, I decided to mount a few areas of the filesystem in RAM rather than on the CF card itself.  To do this, I edited /etc/fstab to look like the following:

proc            /proc           proc    defaults        0       0
/dev/hda1       /               ext2    defaults,errors=remount-ro,noatime 0       1
tmpfs           /var/run        tmpfs   defaults,noatime  0       0
tmpfs           /var/log        tmpfs   defaults,noatime  0       0
tmpfs           /var/lock       tmpfs   defaults,noatime  0       0
tmpfs           /var/tmp        tmpfs   defaults,noatime  0       0
tmpfs           /tmp            tmpfs   defaults,noatime  0       0

One important feature of this setup is that pretty much everything is mounted with the “noatime” option; this keeps the filesystem from recording information about when files were last accessed.  Otherwise, every time we read a file, the system would write a little bit of information to the card, decreasing its overall lifespan.

Now, the mounting setup above is great if we never turn the machine off; however, as you may have guessed, rebooting or shutting down the machine would mean that everything in those tmpfs directories is lost for good.  So, as it turns out, it’s not a bad idea to create a separate persistent version of at least some of it (especially /var/log) using cron jobs and init/shutdown scripts.  I followed the instructions in the blog referenced above pretty closely on this score, so I won’t list the details here.
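For the curious, the save/restore idea boils down to something like the following sketch. Note this is my own illustration, not the linked blog’s exact scripts, and it’s demonstrated on scratch directories rather than the real /var/log:

```shell
# Idea: /var/log lives in RAM (tmpfs), so copy it to persistent storage
# at shutdown (or periodically via cron) and restore it at boot.
RAMLOG="$(mktemp -d)"   # stands in for the tmpfs-backed /var/log
SAVED="$(mktemp -d)"    # stands in for a persistent dir on the CF card
echo "kernel: hello" > "$RAMLOG/messages"
# At shutdown, or from a periodic cron job:
cp -a "$RAMLOG/." "$SAVED/"
# At boot, before syslog starts writing:
cp -a "$SAVED/." "$RAMLOG/"
```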

As far as security is concerned, there are a few things worth considering.  First, since this is going to be a “headless” box, I needed ssh access from other machines on my local network.  The Debian package manager makes this extremely easy to set up.  First, remove the installation CD from the list of possible package sources (/etc/apt/sources.list).  Then, just run apt-get install ssh. Once the installation is complete, you’ll be able to log in from any computer on your local network using the machine’s IP address (which you can find out by looking at the output of ifconfig eth0).

There are, naturally, some security concerns that arise when using ssh (or any other program that opens up ports to the outside world, as we’ll see in a moment). To lock it down further, you might consider editing its configuration file (/etc/ssh/sshd_config) to include the following:

PermitRootLogin no

That’ll keep users from logging on as root through an ssh session (you can still be root by using su once you’ve logged in as another user).
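If you prefer not to edit the file by hand, a sed one-liner can flip the setting whether the line is commented out or not. A sketch against a scratch copy (on the NAS, point the variable at /etc/ssh/sshd_config, run with sudo, and restart ssh afterwards; the sample contents are a stand-in):

```shell
# Rewrite the PermitRootLogin line, commented or not, to "no".
# SSHD_CONF is a scratch stand-in for /etc/ssh/sshd_config.
SSHD_CONF="$(mktemp)"
printf '#PermitRootLogin yes\nPort 22\n' > "$SSHD_CONF"
sed -i -E 's/^#?PermitRootLogin.*/PermitRootLogin no/' "$SSHD_CONF"
cat "$SSHD_CONF"
```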

By default, a Debian installation runs a few additional services that should probably be disabled in this setup. For instance, if we weren’t planning on sharing files over NFS, we could remove the RPC services at this point; since NFS depends on them, they stay. We also won’t be needing the standard identd daemon, so it can go. I’d have more detailed instructions on how to do this, but I seem to have forgotten to document it; maybe I’ll add it later. In any case, the end result (at least at this stage) is that the only open port is the one you’re using for ssh.

That’s about it for initial setup; I will probably not write much more about this for another couple of weeks, since it will probably take that long for me to get my hard drive(s) in, and that’s the only step left in this process. Here’s hoping it goes well!

Written by jazzslider

November 9, 2008 at 11:44 pm

My Home NAS, Part 3


Now that the hardware’s put together, the next step is installing the operating system.  As I mentioned earlier, my goal here is to install Debian Etch (actually, for reasons related to my backup policy, I ended up going with Debian Lenny; the install process is almost exactly the same, but you get slightly newer software) onto the onboard CompactFlash card without having to install an optical drive to do it.  I considered doing a PXE network install, but it looked pretty complicated given that I don’t have any other servers in my network setup…so instead, I set up a bootable SD card installer and worked from there.

I thought setting up the installer would be a bit easier than it actually was, given the simplicity of these instructions in the Debian installer guide.  However, despite my typically well-performing network setup at home, the process landed me right here:

DHCP fail: "Your network is probably not using the DHCP protocol. Alternatively, the DHCP server may be slow or some network hardware is not working properly."

Long story short, the network interface on the Wind PC is a Realtek RTL8111/8168B, and its drivers are either unsupported or unavailable in the Linux kernel version used in Debian Etch.  However, there is a slightly later version of Debian Etch (quaintly referred to as etchnhalf) that combines the stability of Etch with the later Linux kernel used in Lenny (the next Debian release). I initially tried to fix this by using the Debian “etchnhalf” distribution instead, but unfortunately I ended up needing a newer version of the backup utility duplicity than is available in any Etch distro. So, the following instructions will actually get you the appropriate install media for Debian Lenny. Christmas comes but once a year.  To get the SD installer working in this configuration, you can use the following script on any currently-functional Debian machine (I did it on a VM).  First, however, a couple of warnings:

  • This procedure will totally erase anything you’ve already got on your SD card.  Make sure you back it up first.
  • For safety’s sake, I’ve left it up to you to determine the device special file for your SD card (or, for that matter, any other USB device you’d like to use).  You can figure it out pretty easily:
    1. First, take the card out of the machine.
    2. Then, run tail -f /var/log/messages so you can see what new device shows up when you plug the card back in.

    In my case it turned out to be /dev/sdd …but don’t take my word for it, run the test yourself. You don’t want to overwrite the wrong device!
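The before/after check above can be sketched as a simple diff of /dev (the comment marks where you’d physically plug the card in):

```shell
# Snapshot the device nodes, plug the card in, snapshot again, and diff;
# any new sdX entries belong to your card.
BEFORE="$(mktemp)"; AFTER="$(mktemp)"
ls /dev > "$BEFORE"
# ... plug the SD card in here and give the kernel a moment ...
ls /dev > "$AFTER"
diff "$BEFORE" "$AFTER" || true   # new lines are the card's device nodes
```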

Once you know which device you’ll be writing to, you can copy and paste the following code into a file, make it executable (chmod +x filename), and run it with the device filename as its sole argument (e.g., if you save it as “createbootsd”, you’d run ./createbootsd /dev/sdd).

#!/bin/bash
set -e

# The $1 argument should be the full path of the SD card's device special file.
# Run this as root; it writes directly to the device.
if [ -z "$1" ]; then
    echo "Usage: $0 /dev/sdX" >&2
    exit 1
fi
USBDEV="$1"

# Download the Debian Lenny boot image file
cd ~
wget http://http.us.debian.org/debian/dists/lenny/main/installer-i386/current/images/hd-media/boot.img.gz

# Write the boot image file directly to the USB/SD device
zcat boot.img.gz > "$USBDEV"

# Mount the device onto the filesystem
mount "$USBDEV" /mnt

# Download the installer CD image to the device
cd /mnt
wget http://cdimage.debian.org/cdimage/lenny_di_rc1/i386/iso-cd/debian-testing-i386-netinst.iso

# Unmount the device so it's safe to remove it
cd ~
umount /mnt

When that’s finished running, you’ll have yourself a nice, working Debian Lenny installer from which you can boot the Wind PC and start the process.  As you can see, DHCP configuration now works just fine:

DHCP success!


The rest of the installation process was really quite uneventful.  On nearly every screen, you can just choose the defaults, especially when it comes to selecting a network mirror for the package manager. There are, however, a couple of settings worth noting:

  • You’ll need to give your machine a host name early on in the installation. You’ll probably end up using this host name a lot when you connect to the machine over the network, so pick something you’ll enjoy remembering.
  • It’ll also ask you for a domain name. In my case, I used a domain name I pulled from my router’s configuration page. In practice, I don’t know that this will matter too terribly much for a home network, and the installer says as much.
  • Since I didn’t yet have a working hard drive at the time of this installation, partitioning was very simple.  I just created a single partition taking up the full CompactFlash card and mounted it as the root filesystem.  No swap space is necessary here; the machine’s got plenty of memory, and swapping to the CF card could seriously reduce its lifespan. The steps for this, once you get to the “Partition disks” screen, are as follows:
    1. Choose “Manual”.
    2. Select the disk from the list; it should have “hda” in its name, since it’s the /dev/hda device.
    3. When asked about creating an empty partition table on this device, choose “Yes”.
    4. It’ll take you back to the first screen, but there should now be an entry right underneath the original “hda” entry, whose label ends with “FREE SPACE”. Select that entry.
    5. Choose “Create a new partition.”
    6. The default partition size should be fine, since we just want a single partition using up the entire device.
    7. Choose “Primary” when asked what type the new partition should be.
    8. The next screen gives you a variety of options for setting up the filesystem on your new partition. Here are the settings you’ll want to use:
      • Use as: Ext2 file system
      • Mount point: /
      • Mount options: check “noatime”
      • Label: none
      • Reserved blocks: 5%
      • Typical usage: standard
      • Bootable flag: on

      When you’ve got it configured, choose “Done setting up the partition.”

    9. Choose “Finish partitioning and write changes to disk.”
    10. It’ll warn you about how you haven’t set up a partition for swap space; don’t worry about this, as we definitely do not want a swap partition on the CF card. That’d wear it out pretty badly. Choose “No” to get past this message.
    11. One more warning screen; when you’re asked to write the changes to disks, choose “Yes”.
  • The Debian installer at one point asks if you want to install any of their various predefined collections of software; since I’d much prefer to install everything myself later, I just chose the “Standard system” option:

    Choose "Standard system;" we'll install everything else manually later on.

  • And, of course, make sure to install the grub boot loader when asked.

And that’s the basic installation!  You can toss the SD card installer now if you like, since the NAS is ready to boot itself from CompactFlash, like so:

Ready to boot


Next time I’ll show you what steps I took post-installation to secure the machine and make it run more efficiently; then, eventually, we’ll pop in a hard drive and get our RAID-1 array going.

Written by jazzslider

November 9, 2008 at 10:12 am

My Home NAS, Part 2


Well, the hardware’s here…at least most of it.  Aside from the defunct Hitachi Deskstar I mentioned in Part 1 of this lovely series, everything else has arrived intact and ready to go.  So, I thought it might be a good time to post some notes on the assembly process.

First, I should mention that installing a CompactFlash card on the motherboard of the MSI Wind PC is like no other CompactFlash card installation I’ve ever experienced.  Given the amount of disassembly required, I’m glad I don’t intend on swapping it out that frequently.  Due to the small form factor of the machine and the position of the hard drive chassis, case walls, etc., it is pretty much impossible to install the CF card without completely removing the motherboard.

Some advice for those of you following along at home: the motherboard is attached to the case, not just via the obvious four screws at each corner, but also in another area:

 


Don't forget to detach the RGB connector when you remove the motherboard.

Yes, I know, kind of stupid of me.  Anyway, once the four main screws and the RGB connector have been removed, the motherboard lifts out pretty easily.  Here’s the bottom of it, which I hope to never see again:

Once the motherboard’s out, installing the CF card is as easy as ever; just slide it right in.

You can also see in the picture up above that I went ahead and installed my 512MB RAM stick; that part didn’t require any major disassembly; just popped it right in.

The next major bit of assembly was the hard drive; at the time I didn’t know it was broken, so I went ahead and installed it.  This was quite a bit easier than the CF card, and only required lifting the drive chassis out for a short while to mount the drive properly.

I made sure to connect it to SATA port 1 (the blue cable) per the instructions.

And that’s it!  Had the hard drive not been broken, that would’ve been the end of the assembly process.  Very easy overall, I’d have to say.  However, as I’ll show you in my next post, installing the OS was a bit more interesting.

Written by jazzslider

November 9, 2008 at 1:00 am

My Home NAS, Part 1


I don’t throw away a lot of data. It’s a bit silly, really, but who knows when I might unexpectedly need to read a high school English paper I wrote around the turn of the century?

Unfortunately, storing everything forever can be a bit of a challenge. My most recent attempt, a 500GB external Seagate hard drive from Circuit City, ended in a disastrous puff of bluish smoke this past summer; some well-timed backups helped save a lot of that data, but even so, I found myself in need of a much better solution.  Flickr has helped for photos, but what about that high school English paper?

In an attempt to solve this dilemma, I decided recently that I would try network attached storage instead: it’s dedicated hardware, remotely accessible, infinitely expandable, and as redundant as I can possibly afford. Geek that I am, I also decided that it would be fun to build it myself rather than settle for one of the more expensive commercial alternatives.

The requirements I had in mind:

  • It should provide at least 1TB of storage.
  • It should provide a certain degree of reliable failover in case of hardware problems.  I’d be satisfied with a simple RAID-1 mirror with decent S.M.A.R.T. disk monitoring enabled.
  • It should be mountable as part of the filesystem on any of our locally-networked computers, regardless of operating system.  The ubiquity of SMB/CIFS file sharing software makes it an ideal choice for this scenario.
  • It should take up very little space.
  • It should use up very little power, since it will be running 24/7.
  • Eventually, it should be possible to access it from outside our local network, in as secure a manner as possible.
  • It should regularly back up the most irreplaceable of its contents to remote storage such as Amazon S3.  For budget reasons, this simply cannot include large media files like pictures and music, so I will also need to make careful DVD backups from time to time.

Given the above requirements, here’s what I’ve decided to start with:

  • For the case, motherboard, and processor, meet the MSI Wind PC.  It’s about the size of two laptops stacked, so it definitely meets the space requirements; plus, it’s Intel Atom-based, so it’s designed for extremely low-power applications like this.  Other notable features:
      

    • It’s got a CompactFlash card reader right on the motherboard, which provides a great way to install the operating system separately from the data. I don’t think I’ll make it completely read-only as some folks have suggested, but obviously I’ll need to be careful about how often it gets written to. Anyway, I thought that was a pretty cool feature.
    • It’s got two SATA ports; only one 3.5″ drive bay, but since I won’t be using the 5.25″ bay for an optical drive (no need), I can easily mount the second RAID disk in there.
    • It’s also got an SD card reader accessible from the outside, which will make it pretty easy to install the operating system without having to mount a CD or DVD drive.
    • The onboard Ethernet adapter supports transfer rates of up to a gigabit, which would be very helpful if I also had a gigabit router.  Maybe someday, if it turns out to be slow on megabit LAN.  Anyway, it’s nice that it’s there.
  • I’m going to give it 512MB of RAM; nothing special there.
  • Just to make sure I’ve got plenty of space, I’m using an 8GB CompactFlash card in the onboard slot; that’s quite a bit more space than my chosen OS needs, but it was only $30.
  • For budget reasons, I’m only starting with a single terabyte hard drive, running as one half of a degraded RAID-1 array.  Once I can afford a second disk, it’ll be easy enough to build the other half of the array.  My original choice of hard drive, the Hitachi Deskstar 7K1000, shipped dead-on-arrival from ZipZoomFly, so I’m currently working on a replacement; I’ll let you know when I decide, but I think I’m probably going to go with the Western Digital WD10EACS instead (the drawback there is that it’s not 7200rpm).
  • For the OS, I’ve chosen Debian Etch, a very popular, stable Linux distribution.

So that’s the setup; keep watching for details and photos, as this is bound to be a pretty interesting process.

Written by jazzslider

November 1, 2008 at 6:02 pm