In Defiance of Titles

Just another WordPress.com weblog

Output Transformation in a Zend Framework Model Layer


A few weeks back, Matthew Weier O’Phinney wrote a very helpful discussion of model layer infrastructure using various components of the Zend Framework. I especially appreciated his advice on using Zend_Form as an input filter inside the model class itself; it provides a very clean way to keep validation and filtering logic properly encapsulated.

Zend_Form’s use of Zend_Filter and Zend_Validate also makes it very easy to get precisely the filtering and validation rules you need. You can even filter through an external library like HTMLPurifier if you find you need the extra functionality, just by writing a new filter class; this has already been covered quite well (for example, see Part 8, Step 3 of Pádraic Brady’s Zend Framework blog tutorial). As Weier O’Phinney demonstrates, you can then use this Zend_Form object as a screening filter in your model class, so that certain properties must always pass through the form’s validation process before they are set in the model itself. I won’t duplicate his logic here either, but you should definitely take a look at it.
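
(For what it’s worth, such a filter really can be a thin wrapper around HTMLPurifier’s purify() method. Here’s a rough sketch, with a class name and constructor of my own invention, assuming the HTMLPurifier library is already on your include path:)

class My_Filter_HtmlPurifier implements Zend_Filter_Interface
{
  protected $_purifier;

  public function __construct(HTMLPurifier $purifier = null)
  {
    if (null === $purifier) {
      $purifier = new HTMLPurifier();
    }
    $this->_purifier = $purifier;
  }

  // Zend_Filter_Interface requires just this one method
  public function filter($value)
  {
    return $this->_purifier->purify($value);
  }
}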

However, I’ve run into a minor problem, and I’m not sure my solution is particularly ideal. See, the Zend_Form approach described above does a great job of implementing Chris Shiflett’s Filter Input, Escape Output principle…user input is filtered for invalid HTML before it’s ever saved to the model, and can then be escaped as appropriate in the view layer. But what happens if you need to be able to retrieve the user’s original unfiltered input later?

That might not sound like an appropriate thing to do, but consider this. Suppose that instead of simply sanitizing user-contributed HTML, you want to let your users write in a simpler text format (such as Markdown) and generate the HTML for them later. It wouldn’t be appropriate to save the generated HTML to the model, since your users would then be unable to retrieve their original Markdown version for later editing. However, if you don’t pre-generate the HTML, then you can’t perform your HTMLPurifier sanitizing at the input stage either, since there isn’t any HTML to sanitize yet.

In this situation, it looks to me like you’d be stuck doing all your input filtering in the presentation (output) layer, which doesn’t really dovetail well with Shiflett’s principle. But then again, there do appear to be two distinct types of “filtering” at work here, one of which is what Shiflett was talking about, and the other of which probably isn’t:

  1. Sanitization, or making sure that user input doesn’t contain any security risks.
  2. Transformation, or converting user input for presentational purposes. (I feel like this is different from escaping, since escaping is mainly concerned with defusing special characters?)

So what do you think? It’s clear that sanitization ought to be done immediately upon input (preferably in the form object), but where should transformation happen?

Rob Allen’s Zend Framework Overview from last year hints at implementing things like Markdown formatting in the view layer through the use of view helpers. This is certainly appropriate from a strict MVC perspective, as output transformation is definitely presentation-layer stuff. However, it isn’t particularly DRY; every time you write a view script that uses this data, you have to remember to run it through the appropriate chain of output filters.
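
To make the repetition concrete, the helper itself might look something like this (a sketch; the helper name is mine, and it assumes some Markdown converter is available, e.g. PHP Markdown’s Markdown() function):

class My_View_Helper_Markdown extends Zend_View_Helper_Abstract
{
  public function markdown($text)
  {
    // delegate to whatever Markdown implementation you've installed
    return Markdown($text);
  }
}

…and then every single view script that touches the data needs its own echo $this->markdown($entry->body) call (with $entry standing in for whatever model object you’re rendering), which is exactly the repetition I’d rather avoid.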

So, my best overall idea (building on Weier O’Phinney’s examples) is to implement the transformation in the getters in my model:

class My_Model
{
  // ...
  public function __get($property)
  {
    // prefer an explicit getter (e.g. getBody()) if one exists...
    $method = 'get' . ucwords($property);
    if (method_exists($this, $method)) {
      return $this->$method();
    }
    // ...otherwise fall back to the raw data
    if (array_key_exists($property, $this->_data)) {
      return $this->_data[$property];
    }
    return null;
  }

  public function getBody($applyOutputFilter = true)
  {
    $body = $this->_data['body'];
    if ($applyOutputFilter) {
      $body = $this->getOutputFilter()->filter($body);
    }
    return $body;
  }

  public function getOutputFilter()
  {
    $filterChain = new Zend_Filter();
    // add specific filter objects as appropriate, and then...
    return $filterChain;
  }
  // ...
}

This guarantees that whenever the “body” is accessed as a property, it’s correctly transformed for HTML output (a sensible default).
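
To make the intent concrete, usage ends up looking something like this (assuming a setter/__set() that runs the Zend_Form input filter, as in Weier O’Phinney’s examples):

$entry = new My_Model();
$entry->body = "Some *emphasized* Markdown"; // validated/filtered by the form on the way in

echo $entry->body;              // transformed for HTML output
$raw = $entry->getBody(false);  // the user's original Markdown, ready for re-editing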

However, both of these approaches still leave us with the same core problem: the user’s input ends up being filtered and transformed at the presentation stage, on every request, rather than once before it’s saved to the persistence layer as is usually recommended. That can be a security risk if you’re not careful, and it’s almost certainly a performance hit for the average visiting user.

Any ideas on how best to resolve these issues?

Written by jazzslider

April 6, 2009 at 7:11 am

Spades in PHP: Play-by-Play versus Play-at-Once


Earlier this week I posted about my PHP spades project for automated testing of bidding and playing strategies. In that post I highlighted my use of the strategy design pattern to make it easy to test a variety of approaches to the game; however, I didn’t provide much structural detail. Lucky you, as it turns out, because the structure I was using at the time was far from ideal.

My overall idea for running the tests was to be able to use a very thin controller script, something along these lines:

$game = new Spades_Game();
for ($i=0;$i<4;$i++) { // register all four players
  $game->registerPlayer(new Spades_Player(new Spades_Strategy_Something()));
}
$scores = $game->play();

This simple “play-at-once” approach is very easy to call, as it leaves every last bit of the game’s business logic to the model layer. Unfortunately, it doesn’t leave a lot of room for flexibility, particularly in two areas:

  1. Since the entire game happens in a single function call, it’s impossible for the controller layer to keep track of the game’s progress for output and/or logging purposes.
  2. It’s also impossible for the controller layer to allow user input and thereby facilitate the introduction of human players.

So, today I spent a good chunk of time re-architecting the game structure. To get a handle on tracking progress, I started to think in terms of atomicity: what’s the smallest sequential part of a game of spades? Well, let’s break it down. A game in its entirety consists of as many hands as necessary to get one team to 400 points without a tie. Each hand consists of each player bidding once, followed by as many tricks as necessary to lay down the entire deck. Each trick consists of every player laying down a single card in sequence. A trick is won by the player who laid the highest card (spades beating the other suits), and hands are scored by comparing the number of tricks each player bid against the number they actually won.

The most atomic parts of all that are individual players’ bids and plays. So, if the controller layer was to be able to track the progress of the game, it would need to be able to check in after each of those events to see what happened. This led to the following “play-by-play” structure:

$game = new Spades_Game();
for ($i=0;$i<4;$i++) { // register all four players
  $game->registerPlayer(new Spades_Player(new Spades_Strategy_Something()));
}
while ($nextPlayer = $game->getNextPlayer()) {
  if ($game->getCurrentPhase() == Spades_Hand::PHASE_BIDDING) {
    $game->acceptBid($nextPlayer->placeBid(), $nextPlayer);
  } else if ($game->getCurrentPhase() == Spades_Hand::PHASE_PLAYING) {
    $game->acceptPlay($nextPlayer->playCard(), $nextPlayer);
  }
}
$scores = $game->getScores();

The nice thing about this is that, although there’s quite a bit going on behind the scenes, the controller layer really only needs to be aware of the Spades_Game, Spades_Player, and Spades_Strategy_X classes. As long as the game object returns a player, the controller layer knows that there’s more game to be played.
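
And because control returns to this loop after every single bid and play, progress tracking becomes trivial; a rough logging variant (assuming the players expose a getName() method) might look like this:

while ($nextPlayer = $game->getNextPlayer()) {
  if ($game->getCurrentPhase() == Spades_Hand::PHASE_BIDDING) {
    $bid = $nextPlayer->placeBid();
    $game->acceptBid($bid, $nextPlayer);
    echo $nextPlayer->getName() . ' bids ' . $bid . "\n";
  } else if ($game->getCurrentPhase() == Spades_Hand::PHASE_PLAYING) {
    $game->acceptPlay($nextPlayer->playCard(), $nextPlayer);
    echo $nextPlayer->getName() . " plays a card\n";
  }
}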

So what’s going on internally? Well, Spades_Game is basically a facade managing a sequence of Spades_Hand objects, which in turn manage a sequence of Spades_Trick objects. Take a look:

class Spades_Game
{
  // ...
  public function getNextPlayer()
  {
    $hand = $this->getCurrentHand();
    if (null === $hand) {
      return null;
    }

    $player = $hand->getNextPlayer();
    if (null === $player) {
      // this hand is over...score it, register it, and
      // check to see if the game has been won
      $this->_score();
      $this->_handsPlayed[] = $hand;
      if ($this->_isWon()) {
        return null;
      }

      // hasn't been won yet, so we need to start up
      // a fresh hand...
      $this->_incrementDealer();
      $this->_currentHand = new Spades_Hand($this);
      $player = $this->_currentHand->getNextPlayer();
    }
    return $player;
  }
  // ...
}

Notice that, at this point, there isn’t any logic regarding the phase we’re in (bidding or playing); since the two phases each occur once per hand, it makes more sense to manage them within the Spades_Hand class. Each phase is handled by separate logic, so Spades_Hand::getNextPlayer() makes use of a couple of protected methods to keep things distinct:

class Spades_Hand
{  
  // ...
  public function getNextPlayer()
  {
    switch ($this->getCurrentPhase()) {
      case Spades_Hand::PHASE_BIDDING :
        $player = $this->_getNextBidder();
        break;
      case Spades_Hand::PHASE_PLAYING :
      default :
        $player = $this->_getNextPlayer();
        break;
    }
    return $player;
  }

  protected function _getNextBidder()
  {
    if (count($this->getPlayerBids()) <= 0) {
      // nobody has bid yet, so this is the beginning of the hand;
      // deal, and then the player after the dealer is the first bidder
      $this->_cardsDealt = $this->getDeck()->deal($this->getPlayers(), $this->getDealer());
      $this->_currentTrick = new Spades_Trick($this->getGame());
      $position = $this->getDealer() + 1;
    } else {
      $lastbid = end($this->_bids);
      $lastpos = key($this->_bids);
      $position = $lastpos + 1;
    }
    if ($position > count($this->getPlayers()) - 1) {
      $position = 0;
    }
    $players = $this->getPlayers();
    $player = $players[$position];
    return $player;
  }

  protected function _getNextPlayer()
  {
    $trick = $this->getCurrentTrick();
    if (null === $trick) {
      return null;
    }
    $player = $trick->getNextPlayer();
    if (null === $player) {
      // the trick is over...score it, register it, and
      // check to see if the hand is complete
      $this->_score();
      $this->_tricksPlayed[] = $trick;
      if ($this->_isFinished()) {
        return null;
      }

      // not done yet, start a fresh trick
      $this->_currentTrick = new Spades_Trick($this->getGame());
      $player = $this->_currentTrick->getNextPlayer();
    }
    return $player;
  }
  // ...
}

Notice that the _getNextPlayer() half of the procedure is extremely similar to Spades_Game::getNextPlayer(); we’re still passing the buck along the chain to a smaller unit of play…the Spades_Trick instance.

class Spades_Trick
{
  // ...
  public function getNextPlayer()
  {
    if (count($this->_plays) == count($this->getPlayers())) {
      // everybody's played, so we're done...determine the winner
      // and get on out of here.
      $this->_determineWinner();
      return null;
    } 
    if (null === $this->_currentPlayer) {
      // first play...determine the lead player
      $this->_currentPlayer = $this->getLeadPlayer();
    } else {
      // cycle through to next player
      $this->_currentPlayer = $this->getPlayerAfter($this->_currentPlayer);
    }
    return $this->_currentPlayer;
  } 
  // ...
}

All these layers taken together make it possible for the controller layer to always have access to the next player in sequence, without having to know much of anything about that sequence.

One nice side bonus of this structure is that it would be dead simple for the controller layer to identify certain players as user-controlled, injecting their input into the system in the $game->acceptBid() and $game->acceptPlay() methods.
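
For example, inside the same while loop as before, a console version could swap in human bids for a particular seat (just a sketch; getPosition() comes from the player class, and input validation is omitted):

if ($game->getCurrentPhase() == Spades_Hand::PHASE_BIDDING) {
  if ($nextPlayer->getPosition() == 0) {
    // seat 0 is the human: read a bid from the console instead of the strategy
    $bid = (int) trim(fgets(STDIN));
  } else {
    $bid = $nextPlayer->placeBid();
  }
  $game->acceptBid($bid, $nextPlayer);
}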

The next step in all of this, I suppose, will be to develop a simple web interface for viewing the results, and possibly trying to beat all these wonderful computer players.

Written by jazzslider

March 14, 2009 at 8:43 pm

Spades and the Strategy Pattern


So lately my wife and I have been playing quite a bit of spades with some good friends of ours; if you’ve never played, it’s quite fun, but you don’t want to be on my team 🙂

The thing is, it strikes me as the kind of game a well-informed computer would be great at; that is, if one could remember everything that’s been played in a given hand and, from that, calculate the probability of any of one’s cards being beaten in a given trick, one could win the game far more easily.  But I have my doubts about this, so as any good developer would, I decided to test it programmatically.

Here’s the project (and I deliberately didn’t check to see if anyone’s done this before): write a command-line PHP script that runs any number of automated spades games, involving a variety of players utilizing different play algorithms.  Since the algorithms are what we’re most interested in, I decided to use the strategy pattern: each player is an instance of the same basic Spades_Player class, but each has its own instance of one of several Spades_Strategy_Interface implementations that controls how it bids and how it chooses the card to play.  Here’s the player interface first:

interface Spades_Player_Interface
{
  public function __construct(Spades_Strategy_Interface $strategy, $name = null);

  // player should have an identity
  public function getName();

  // player is part of a particular Spades_Game, and
  // is seated in a particular position from 0 to 3
  public function getGame();
  public function setGame(Spades_Game $game);
  public function getPosition();
  public function setPosition($position);

  // player has several cards
  public function getCards();
  public function receiveCard(Spades_Card $card);

  // actual playing methods
  public function placeBid();
  public function playCard(Spades_Trick $trick);

  // player should also be able to respond to certain events
  public function preHand(Spades_Hand $hand);
  public function preTrick(Spades_Trick $trick);
  public function postPlay(Spades_Trick $trick, Spades_Play $play);
  public function postTrick(Spades_Trick $trick);
  public function postHand(Spades_Hand $hand);
}

Now on to the strategy pattern. You’ll notice a lot of the same methods here; as I understand it, that’s the idea behind this particular pattern: the strategy object implements the behavior-specific parts of its owner’s interface, so that different instances of the same owner class can behave very differently. Here’s the code I used:

interface Spades_Strategy_Interface
{
  // strategy should know its player
  public function getPlayer();
  public function setPlayer(Spades_Player_Interface $player);

  // convenience methods for getting state information from the player
  public function getGame();
  public function getPosition();
  public function getCards();

  // player hook implementations
  public function preHand(Spades_Hand $hand);
  public function preTrick(Spades_Trick $trick);
  public function postPlay(Spades_Trick $trick, Spades_Play $play);
  public function postTrick(Spades_Trick $trick);
  public function postHand(Spades_Hand $hand);

  public function placeBid();
  public function playCard(Spades_Trick $trick);
}

And, just so you see how it works in practice, here’s a sample method from my actual Spades_Player implementation:

class Spades_Player implements Spades_Player_Interface
{
  // ...
  public function playCard(Spades_Trick $trick)
  {
    $toPlay = $this->_strategy->playCard($trick);
    $trick->receivePlay(new Spades_Play($toPlay, $this));
    return $toPlay;
  }
  // ...
}

So there it is, nice and simple. Testing a new strategy algorithm is as simple as defining a new PHP class.

It’s also probably a good idea at this point to define some sort of reference strategy to test against…something that any intelligent spades player ought to be able to beat. How about randomizing it? (Note that I’ve implemented a lot of the hook methods in a separate abstract class, so there’s a lot less to write here than the interface suggests.)

class Spades_Strategy_Random extends Spades_Strategy_Abstract
{
  public function placeBid()
  {
    return mt_rand(0, 4);
  }

  public function playCard(Spades_Trick $trick)
  {
    $playable = $trick->getPlayableCards($this->getCards());
    return $playable[array_rand($playable)];
  }
}

Hmm, that kind of looks like the strategy I use.

Anyway, there are quite a few places we could go from here; I’ve already implemented most of the classes you see mentioned in the code above (Spades_Game, Spades_Hand, Spades_Trick, Spades_Play, and Spades_Card), but the real fun is in writing new strategies. Some ideas I’ve had:

  • Probability-based: the player remembers which cards have been played and figures out how likely it is any of its playable cards can be beaten by any of its opponents. (More on this later; I’m not quite sure about the math here.)
  • Evolutionary: the player starts out playing at random, but always remembers whether a given card ends up beating a given trick state. For instance, if it plays the Ace of Clubs on top of the Two of Clubs and then ends up winning the trick, it’ll be more likely to play the Ace of Clubs against the Two of Clubs in later hands.
  • Cheater: you’ll notice that the Spades_Player::getCards() method is public, and astute readers may have already guessed that Spades_Game implements a public getPlayers() method; as a result, unscrupulous players would technically be able to take a peek at their opponents’ hands and play accordingly (see the rough sketch just after this list).
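
Just to illustrate that last idea, here’s roughly what a cheater might look like (a sketch only; the actual card-selection logic is hand-waved, and it leans on the getGame(), getPlayer() and getCards() methods from the interfaces above):

class Spades_Strategy_Cheater extends Spades_Strategy_Abstract
{
  public function placeBid()
  {
    return mt_rand(0, 4); // smarter bidding is left as an exercise
  }

  public function playCard(Spades_Trick $trick)
  {
    // peek at everyone else's cards...
    $opponentCards = array();
    foreach ($this->getGame()->getPlayers() as $player) {
      if ($player !== $this->getPlayer()) {
        $opponentCards = array_merge($opponentCards, $player->getCards());
      }
    }
    // ...and then choose accordingly; for now, just fall back to random
    $playable = $trick->getPlayableCards($this->getCards());
    return $playable[array_rand($playable)];
  }
}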

That’s all I’ve got for now; if anyone is interested, I may post the full code later once it’s finished. Thanks for reading!

Written by jazzslider

March 12, 2009 at 6:26 am

Looks like Google’s testing something…


Ran a search or two this morning, and discovered to my dismay that my computer was in danger from the entire internet; every result I saw in every search I tried contained this ominous warning about how the site “may harm [my] computer”:

Searching for chickens (or anything else) may harm your computer.

Same comical results for every search I tried, so that’s fun…what’s even better is that when you try to visit any of these sites, Google makes you really think hard about it with yet another warning screen. Good stuff. I’ll leave you with my favorite two results:

[Screenshots: the Microsoft and Apple search results, both flagged with the “may harm your computer” warning]

They finally have something in common!  (I didn’t search for Linux; that’d just make me sad.)

Oh, and here’s one more just to prove Google is being fair about this:

[Screenshot: Google’s own search result, flagged with the same warning]

Written by jazzslider

January 31, 2009 at 8:52 am

Posted in Computers, Humor


My Home NAS, A Performance Sidenote


Still more to go on my home NAS series, but I thought I’d take a moment to point out a recently published article that benchmarks the performance of the MSI Wind box against several other DIY and off-the-shelf NAS units. The author concludes that DIY NAS boxes based on Intel Atom chipsets (and also the VIA C7) typically get twice the throughput of their store-bought counterparts. So, if you were wondering whether all this fiddling is worth it, there’s at least some evidence that it might be 🙂

Coming soon: Samba file sharing and possibly some extras about monitoring, security and performance.

Written by jazzslider

January 30, 2009 at 6:51 pm

Posted in Uncategorized


My Home NAS, Part 9: NFS Over SSH


If it seems like I’m on an acronym kick, it’s not my fault. In the previous bits of my home NAS series, I’ve shown you all sorts of them: S.M.A.R.T., RAID, LVM, SMB…wait, what happened to SMB? Astute readers will no doubt notice that I initially intended to connect my NAS to my network using the ubiquitous Windows networking protocol, SMB, which in its UNIX implementation is referred to as Samba. So why is this post’s acronym NFS?

Ultimately, it was the realization that none of the computers I use on a daily basis are running Windows anymore. If I’m using UNIX-like systems all the time, why not use a UNIX-native networked file system tool? Enter NFS, which, appropriately enough, stands for Network File System. Cue the band. (For those of you who are hoping for Windows connectivity, don’t worry; I’ll cover Samba in a future post. Nice thing is, you can run both if you want to.)

Now, before we get started, there are a few kind of strange things about NFS you’ll want to know. First, you don’t log in to an NFS share. NFS is designed on the assumption that, within a given network, one user always has the same numeric user ID. The net result of this is that if you can log on to an authorized client computer as, say, user 1000, you are effectively logged into the NFS server as user 1000 as well (but only within the shared directories). Fortunately for security’s sake, it is possible to very strictly control which network hosts are authorized to access the share, which effectively means that the only way someone can get to the share is by gaining access to one of the client computers (or compromising the server itself).

One other thing worth noting up front: ultimately, my goal was to make this NFS share securely accessible over the internet. However, given the login-less nature of the system, exposing an NFS share to the open internet is really kind of stupid without additional security measures. I considered setting up a full-blown VPN for this, but just then one of my co-workers introduced me to SSH tunneling (or port forwarding). Tunneling NFS through an SSH connection allows just the extra security I need for internet access to the share, and is a good idea even if you’re only accessing it locally; you just can’t be too careful.

(Much of the following tutorial was adapted from this HowToForge page. All credit where credit is due, after all.)

OK, time to get the hands dirty. First things first, we need to install the NFS server components. Debian gives you two choices here (a user-space server and the kernel-based nfs-kernel-server); I opted for the kernel version, which is easily installed via a simple apt-get install nfs-kernel-server command. (Note: if you uninstalled the RPC services early on in this guide, this should reinstall them. That’s a good thing; they’re necessary for NFS.)

Next, since we’re going to be tunneling this through SSH, we need to set the various NFS-related services up such that they use static (i.e., predictable) port numbers. To do this, first edit /etc/default/nfs-common; if there’s a line beginning with STATDOPTS, change it to the following (if it’s not already there, just add it to the end of the file):

STATDOPTS='--port 2231'

Next, edit /etc/modules.conf such that it includes the following line:

options lockd nlm_udpport=2232 nlm_tcpport=2232

And finally, edit /etc/default/nfs-kernel-server such that the line beginning with RPCMOUNTDOPTS reads:

RPCMOUNTDOPTS='-p 2233'

The next step is to create the user that will dig the tunnel each time you connect. In my setup, this is also the user who owns the files shared over the network, so it’s important that you choose the user’s UID carefully. For example, all of the client computers in my setup are Macs, and the default UID in OS X is 501, so the user I added here is also user 501. I named it macshare so that I know exactly what it’s for. To add the user…

adduser macshare --uid 501
adduser macshare users

You’ll also want to set up public key authentication for the new user to avoid having to enter in a password. I’d put the steps in here, but you can find them in detail in lots of places all over the internet, and I really don’t want to rewrite any more of that earlier HowToForge article’s content. So, I’ll leave that step up to you for now.
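
(The ultra-condensed version, if you just need a reminder: on a typical Linux client it’s two commands, with the hostname below as a placeholder. ssh-copy-id isn’t always present on Macs, in which case you’d append the public key to the macshare user’s ~/.ssh/authorized_keys on the server by hand.)

ssh-keygen -t rsa                       # on the client, if you don't already have a key pair
ssh-copy-id macshare@your.nas.hostname  # install your public key on the server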

Moving forward, we’ve still got to set up the NFS shares, or “exports”. This is actually pretty easy; it’s all handled through NFS’s central nervous system, the /etc/exports file. It should just be comments at this point, describing the various kinds of things you can do. My own goal was to set up each of the logical volumes we created earlier as its own NFS share, so my /etc/exports ended up looking something like this:

/export/standard 127.0.0.1(rw,sync,no_subtree_check,insecure)
/export/critical 127.0.0.1(rw,sync,no_subtree_check,insecure)
/export/pictures 127.0.0.1(rw,sync,no_subtree_check,insecure)
/export/music 127.0.0.1(rw,sync,no_subtree_check,insecure)

Each line is a different share, and the various configuration fields are separated by spaces. The first field is the full path to the directory you’re sharing; that’s easy enough. The second field begins with the hostname or IP address of a machine that’s allowed to access that directory; in this case, we used 127.0.0.1, which is the server’s own localhost address. In effect, this means that the server is only exporting these directories to itself, thus ensuring that other computers can only access them via port forwarding. Anyway, immediately following the IP address is a parenthesized list of options specifying how this share can be mounted; I won’t go into the details, but suffice it to say, these were the options that worked for me.
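
One quick note: the NFS server won’t pick up changes to /etc/exports on its own, so after editing the file you’ll want to re-export, e.g.:

exportfs -ra    # re-read /etc/exports and re-export everything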

The configuration doesn’t stop there! For security purposes, you’ll need to take a look at your /etc/hosts.allow file to make sure that the localhost IP address above is allowed to access a few particular services. If you put the following lines at the top of your /etc/hosts.allow file (under the comments, of course), you should be good to go:

portmap: 127.0.0.1
lockd: 127.0.0.1
mountd: 127.0.0.1
rquotad: 127.0.0.1
statd: 127.0.0.1

It’s also important that your client machines be able to access the sshd daemon; otherwise, your SSH tunnel won’t be allowed. So, if you haven’t already, make sure to add an appropriate record for sshd in /etc/hosts.allow. If your NFS configuration doesn’t end up working, this is a really important place to look for errors.
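
For example, to let every machine on a typical home subnet reach sshd (adjust the network and mask to match your own):

sshd: 192.168.1.0/255.255.255.0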

Now, on to the SSH tunnel. There are actually two distinct services we’ll need to tunnel, listening on two distinct ports…so we’ll need two tunnels. The first service is nfs itself; to find out which port it’s using, type in rpcinfo -p (on the server) and look for the nfs line(s). Mine was listening on port 2049. The second service we’ll need to tunnel is mountd, which earlier we set up to listen on port 2233. Knowing these details, the ssh command to set up both tunnels looks a little like this:

ssh macshare@hostnameorip -L 48151:127.0.0.1:2049 -L 62342:127.0.0.1:2233 -f sleep 600m

As you can see, there are two -L options, one per tunnel. The first forwards port 48151 on my client computer to port 2049 on the server (which is identified by its own localhost IP, just like in the /etc/exports file on the server); that’s the tunnel for NFS. The second forwards port 62342 on my client computer to port 2233 on the server; that’s the tunnel for mountd. For the life of me, I don’t know why the other NFS-related services aren’t involved, but I’m kind of glad they weren’t; I ran out of LOST numbers. (By the way, if you’re concerned about using Hurley’s magic lottery numbers to forward your data around your home network, be advised that the client-side port numbers are entirely arbitrary; any unused ports above 1024 will do.)

Once you’ve got the tunnel running (ideally you’d set it up to run automatically), all that’s left is to mount the NFS share(s) to appropriate locations in your filesystem. This process varies by operating system (even across UNIXes), so for now I’ll leave that up to you. If my hands stop hurting from writing this terrifically long post, I may add the details later; for now, happy fiddling!
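
A quick head start before I go, though: on a Linux client, mounting through the tunnels looks roughly like this (run as root, with the share and mount point as placeholders; OS X wants slightly different options):

mount -t nfs -o port=48151,mountport=62342,tcp,nolock 127.0.0.1:/export/music /mnt/music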

Written by jazzslider

January 15, 2009 at 6:03 pm

My Home NAS, Part 8: Hardware Manifest


I know I said my next post was going to be about NFS setup, but I thought it might be useful instead to take a momentary break for listing off the final hardware manifest. My previous posts have been a little unclear on this subject, so to avoid confusion, here’s a list of everything I bought that’s currently part of the machine, along with links to NewEgg product pages:

  • MSI Wind PC Barebone: motherboard, processor and case, all pre-assembled in a nice, neat, power-saving package ($139.99 as of today)
  • Kingston Elite Pro Compact Flash Card: operating system storage ($22.99)
  • 512MB DDR2 533 memory (don’t remember which brand I used…): $7-8
  • Western Digital Caviar SE16 WD5000AAKS: 500GB hard drive, /dev/sdb in the main RAID array ($64.99)
  • Seagate Barracuda ST3750640AS: 750GB hard drive, /dev/sda in the main RAID array (no longer available; $80)
  • VIGOR iSURF II HCC-S2BL: aluminum HDD cooler for the 750GB drive ($22.99)

All of this is variable to suit your tastes, of course…this is just what I used. If you’ve got a bit more money lying around, it might be worth investing in larger hard drives. Make sure they’re SATA (ideally SATA 3Gb/s) drives rather than IDE, as the Wind PC doesn’t have any IDE ports. 7200RPM is also a good idea for quick access times.

Also, I’ve found that 8GB for the operating system is a bit on the high side; on my current setup, Debian is only taking up a measly 1.7GB. You could probably get away with a 4GB card instead, but the prices are low enough that it doesn’t save you much.

Oh, and as far as memory is concerned…512MB has been plenty for my purposes (my swap file is pretty much never used), but if you can afford 1 or 2 gigabytes instead, why not? Just make sure it’s DDR2 533 laptop memory (yes, this is basically a laptop without all the mobility-related bits), and only one stick of it (the Wind PC only has one memory slot).

Meanwhile, the entire software stack is free and open source, so there’s nothing else to buy beyond what I’ve listed above (unless you want something else, of course).  Based on the components/prices above (and assuming you don’t need to buy any of the extra stuff I’m about to list off), you’re looking at a total project cost of about $340, not including any shipping costs you might incur.  Not a bad deal considering the degree of customization you get along with it.

There are a couple of other things you’ll need, but only temporarily; for these, it’s best to just use something you’ve got lying around if you can:

  • 1GB SD card or USB flash drive, to use as the installer media
  • Monitor
  • USB keyboard (PS/2 won’t work; the Wind PC doesn’t have a PS/2 port)
  • Another computer which (among other things) will be used to set up the installer media; to really get this right, you’ll need to install something like VirtualBox and create a dummy Debian installation that can access your installer media…which I suppose I should document 🙂

Anyway, next time we’ll get back to the step-by-step setup; there’s still a fair amount of software configuration to do, but we’re almost there.

Written by jazzslider

January 13, 2009 at 7:14 am

Posted in Uncategorized
