Blog

Ponderings of a kind

This is my own personal blog. Each article is an XML document, and the code powering it is hand-cranked in XQuery and XSLT. It is fairly simple and has evolved only as I have needed additional functionality. I plan to open source the code once it is a bit more mature; however, if you would like a copy in the meantime, drop me a line.

Data 1, Disk 0

NAS disk failure

[Image: NAS disk failure output]

After having finally built my NAS and had it happily working away in the background for a couple of weeks, it would seem that failure has struck: one of the disks forming the ZFS RAIDZ2 storage pool has failed! Whilst I am sure this seems a little ironic, or sounds like a commissioned advert for ZFS by Sun, I can only try to reassure you that this is not the case.

Recently I experienced an unexpected crash with the NAS (no network response whatsoever); I am still unsure of the cause and have not had the time to investigate further. However, after powering the NAS off (ouch!) and back on again, I took a quick look to make sure my data was intact by checking the zpool status. The bad news was that the pool status was reported as "degraded", with details of a failed disk; the good news (and the whole point behind this setup) was that my data was fine :-)
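
For anyone wanting to perform the same check after an unclean shutdown, it is just a couple of commands (a sketch; 'thevault' is the name of my pool) -

                aretter@mnemosyne:~$ pfexec zpool status -x
                aretter@mnemosyne:~$ pfexec zpool status thevault

The first form simply reports whether any pool has a problem; the second gives the per-disk detail.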

I am fairly new to ZFS, so I made some enquiries with the seasoned professionals in #opensolaris on FreeNode to make sure that the errors I was seeing were definitely hardware related and not misconfiguration on my part. Whilst I was surprised that such a new disk would fail so soon, I was pointed to something called the "bathtub curve", which can be seen in chapter 4.2 of this paper. The "bathtub curve" basically holds that there will be high failure rates at the beginning of a product's life (infant mortality) and at the end (wear-out); the statistics gathered in a further paper by Google entitled "Failure Trends in a Large Disk Drive Population" also seem to back this up to a certain extent.

Overall I was glad to be reassured that this was a hardware failure and not a mistake on my part, and most importantly that I lost no data. The failed disk will shortly be replaced by the supplier; let's hope the replacement lasts a little longer.
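
When the replacement arrives, ZFS should be able to rebuild (resilver) onto it with a single command (a sketch, assuming the failed disk turns out to be c10d1) -

                aretter@mnemosyne:~$ pfexec zpool replace thevault c10d1
                aretter@mnemosyne:~$ pfexec zpool status thevault

zpool status can then be used to watch the resilver progress.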

Adam Retter posted on Sunday, 12th July 2009 at 17.50 (GMT+01:00)
Updated: Sunday, 12th July 2009 at 17.50 (GMT+01:00)

tags: ZFS, ZPOOL, fail, NAS, disk, OpenSolaris


Building my DIY NAS

DIY NAS - Part 3 of 3

After previously deciding to build my own NAS, having defined my requirements in Part 1 and identified suitable hardware and software in Part 2, I will now discuss the build, first in terms of the physical hardware and then the software installation and configuration.

Hardware

I will not detail the exact build process for the Chenbro chassis, as that information is available in the manual; instead I will try to capture my own experience, which will hopefully complement the available information.

Once all the parts had arrived, the first thing to do was to unbox everything before starting to put the system together. My immediate impression of the Chenbro ES34069 NAS chassis was that it was robustly built and manufactured to a high standard.
[Photos: box with DVD-RW, Card Reader and cable, with the Chenbro NAS chassis removed; unpacked Chenbro chassis and boxed disk drives; 1.0TB hard disks, packed two in each box]

The first step in building the NAS with the Chenbro chassis is to open up the chassis and then install the motherboard. To open up the chassis you need to remove the side cover and then the front panel.
[Photos: Chenbro chassis with side cover removed; Chenbro chassis with motherboard tray removed; Chenbro chassis with secured motherboard]

The second step is to get the Card Reader, DVD-RW and 2.5" Hard Disk for the operating system in place and cabled to the motherboard. The hard disk needs to go in first, followed by the Card Reader and then the DVD-RW - I realised this too late, but luckily the DVD-RW is easily removed!
[Photos: Chenbro chassis front panel removed; Chenbro chassis front panel with DVD-RW fitted; back of Chenbro chassis front panel with DVD-RW fitted]

The third step is to finish connecting any cables, secure the cables away from the fan (I used some plastic cable ties for this), and then switch on and check that the system POSTs correctly. I did this before inserting any of the storage disks in the hot-swap bays for two reasons: 1) if there is an electrical fault, these disks won't also be damaged; 2) if there is a POST fault, it rules out these disks as a possibility.
[Photos: Chenbro NAS complete side view; Chenbro NAS power-on; Chenbro NAS POST]

The final step is to install the storage disks into the hot swap caddies and those into the hot swap bays of the NAS.

This is where I hit upon a show-stopper. Securing the disks in the hot-swap caddies requires some special low-profile screws, and these seemed to be missing. I checked the manual and it stated that they were shipped with the chassis - but unfortunately not for me :-(.

After a week of not hearing from the supplier, and unable to find suitable screws elsewhere, I cracked and decided to improvise. The mounting holes on the hot-swap caddies are a combination of plastic layered on metal, and I reasoned that by cutting away the top plastic layer I would have more space for the screw heads. Very carefully I removed the plastic around the screw holes using a sharp knife. I am sure I probably voided some sort of warranty, but now standard hard disk mounting screws fit perfectly :-)

[Photos: Chenbro disk caddy before modification; Chenbro disk caddy after modification; finally loading the disks into the NAS]

Software

A standard installation of OpenSolaris 2009.06 from CD-ROM was performed. Once the installation had finished, the remaining configuration was done from the terminal.

ZFS Storage Configuration

As previously discussed in Part 2, I decided to use a RAIDZ2 configuration across the four 1TB storage disks.

To configure the disks, I first needed to obtain their IDs; this can be done using the format command -

                aretter@mnemosyne:~$ pfexec format
                Searching for disks...done


                AVAILABLE DISK SELECTIONS:
                       0. c8d0 <DEFAULT cyl 9726 alt 2 hd 255 sec 63>
                          /pci@0,0/pci-ide@1f,1/ide@0/cmdk@0,0
                       1. c9d0 <WDC WD10-  WD-WCAU4862689-0001-931.51GB>
                          /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
                       2. c9d1 <WDC WD10-  WD-WCAU4864114-0001-931.51GB>
                          /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
                       3. c10d0 <WDC WD10-  WD-WCAU4862741-0001-931.51GB>
                          /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0
                       4. c10d1 <WDC WD10-  WD-WCAU4848518-0001-931.51GB>
                          /pci@0,0/pci-ide@1f,2/ide@1/cmdk@1,0
                Specify disk (enter its number): ^C
        

From this we can see that disks 1 through 4 are our 1TB storage disks. The following command uses their IDs to create a new RAIDZ2 zpool called 'thevault' -

                aretter@mnemosyne:~$ pfexec zpool create thevault raidz2 c9d0 c9d1 c10d0 c10d1                
        

We can then view/check the newly created zpool -

                aretter@mnemosyne:~$ pfexec zpool list
                NAME       SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
                rpool       74G  4.06G  69.9G     5%  ONLINE  -
                thevault  3.62T  1.55M  3.62T     0%  ONLINE  -
                
                aretter@mnemosyne:~$ pfexec zpool status thevault
                  pool: thevault
                 state: ONLINE
                 scrub: none requested
                config:

                        NAME        STATE     READ WRITE CKSUM
                        thevault    ONLINE       0     0     0
                          raidz2    ONLINE       0     0     0
                            c9d0    ONLINE       0     0     0
                            c9d1    ONLINE       0     0     0
                            c10d0   ONLINE       0     0     0
                            c10d1   ONLINE       0     0     0

                errors: No known data errors
        

Now that we have our zpool, we need to create some filesystems to make use of it. This NAS will be used on our home network, so I opted for two simple filesystems: a 'public' filesystem which everyone may read and write to, and a 'private' filesystem for more personal data -

                aretter@mnemosyne:~$ pfexec zfs create thevault/public
                aretter@mnemosyne:~$ pfexec zfs create thevault/private
                
                aretter@mnemosyne:~$ pfexec zfs list
                NAME                        USED  AVAIL  REFER  MOUNTPOINT
                rpool                      4.92G  67.9G  77.5K  /rpool
                rpool/ROOT                 2.85G  67.9G    19K  legacy
                rpool/ROOT/opensolaris     2.85G  67.9G  2.76G  /
                rpool/dump                 1019M  67.9G  1019M  -
                rpool/export               84.5M  67.9G    21K  /export
                rpool/export/home          84.5M  67.9G    21K  /export/home
                rpool/export/home/aretter  84.5M  67.9G  84.5M  /export/home/aretter
                rpool/swap                 1019M  68.8G   137M  -
                thevault                    180K  1.78T  31.4K  /thevault
                thevault/private           28.4K  1.78T  28.4K  /thevault/private
                thevault/public            28.4K  1.78T  28.4K  /thevault/public
        
Users and Permissions

Now that we have our filesystems, we need to set up accounts for our network users and assign them permissions on the filesystems.

I will create accounts for each of the three other people in the house and, to make permission administration easier, each of these users will also be added to a common group called 'vusers' -

                aretter@mnemosyne:~$ pfexec groupadd vusers
                
                aretter@mnemosyne:~$ pfexec groupadd phil
                aretter@mnemosyne:~$ pfexec groupadd lesley
                aretter@mnemosyne:~$ pfexec groupadd andy
                
                aretter@mnemosyne:~$ pfexec useradd -c "Philip" -g phil -G vusers -m -b /export/home -s /bin/bash phil
                aretter@mnemosyne:~$ pfexec useradd -c "Lesley" -g lesley -G vusers -m -b /export/home -s /bin/bash lesley
                aretter@mnemosyne:~$ pfexec useradd -c "Andrew" -g andy -G vusers -m -b /export/home -s /bin/bash andy
        
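
As a quick sanity check (a sketch for one user), group membership can be confirmed with the standard groups command -

                aretter@mnemosyne:~$ groups phil

which should list both 'phil' and 'vusers'.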

So that all users in the 'vusers' group can read and write to the public filesystem, I set the following permissions (the setgid bit ensures that anything created under /thevault/public inherits the 'vusers' group) -

                aretter@mnemosyne:~$ pfexec chgrp vusers /thevault/public
                aretter@mnemosyne:~$ pfexec chmod g+s /thevault/public
                aretter@mnemosyne:~$ pfexec chmod 770 /thevault/public
        

I then set about creating a private folder for each of the users on the private filesystem. All users in 'vusers' can access the private filesystem, but users cannot access each other's private folders -

                aretter@mnemosyne:~$ pfexec chgrp vusers /thevault/private
                aretter@mnemosyne:~$ pfexec chmod 770 /thevault/private
                
                aretter@mnemosyne:~$ pfexec mkdir /thevault/private/phil
                aretter@mnemosyne:~$ pfexec chown phil:phil /thevault/private/phil
                aretter@mnemosyne:~$ pfexec chmod 750 /thevault/private/phil
                
                aretter@mnemosyne:~$ pfexec mkdir /thevault/private/lesley
                aretter@mnemosyne:~$ pfexec chown lesley:lesley /thevault/private/lesley
                aretter@mnemosyne:~$ pfexec chmod 750 /thevault/private/lesley
                
                aretter@mnemosyne:~$ pfexec mkdir /thevault/private/andy
                aretter@mnemosyne:~$ pfexec chown andy:andy /thevault/private/andy
                aretter@mnemosyne:~$ pfexec chmod 750 /thevault/private/andy
        
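
As a quick check that the permissions behave as intended (a sketch; run via pfexec so no password is needed), one user should be refused access to another's folder -

                aretter@mnemosyne:~$ pfexec su - lesley -c "ls /thevault/private/phil"

which should fail with a permission error, whilst /thevault/private/lesley remains accessible.
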
Network Shares

Well, this is a NAS after all, so we need to make our filesystems available over the network. Apart from myself, everyone else in the house uses Microsoft Windows (XP, Vista and 7) on their PCs, and because of this I decided to share the filesystems using OpenSolaris's native CIFS service.

I used this article on the Genunix Wiki as a reference for installing the CIFS service. I took the following steps to install the CIFS service and join my workgroup '88MONKS' -

                aretter@mnemosyne:~$ pfexec pkg install SUNWsmbskr 
                aretter@mnemosyne:~$ pfexec pkg install SUNWsmbs
                
                ...I had to reboot the system here, for the changes to take effect...
                
                aretter@mnemosyne:~$ pfexec svcadm enable -r smb/server
                aretter@mnemosyne:~$ pfexec smbadm join -w 88MONKS
        
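
After the reboot, the state of the service can be confirmed with svcs, which should report it as 'online' (a sketch) -

                aretter@mnemosyne:~$ svcs smb/server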

To authenticate my users over CIFS I needed to enable the CIFS PAM module by adding this to the end of /etc/pam.conf -

                other password required pam_smb_passwd.so.1 nowarn
        

Once you have enabled the CIFS PAM module, you need to (re)generate passwords for your users who will use CIFS; this is done with the standard 'passwd' command, once per user -

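                aretter@mnemosyne:~$ pfexec passwd phil
                aretter@mnemosyne:~$ pfexec passwd lesley
                aretter@mnemosyne:~$ pfexec passwd andy

Then the last and final step is to share the ZFS filesystems over CIFS -
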
                aretter@mnemosyne:~$ pfexec zfs set sharesmb=on thevault/public
                aretter@mnemosyne:~$ pfexec zfs set sharesmb=on thevault/private
        
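
One caveat: the casesensitivity=mixed property, which Windows clients expect, can only be set when a filesystem is created (e.g. pfexec zfs create -o casesensitivity=mixed thevault/public), so ideally it should be included in the original zfs create commands above. To confirm that the filesystems are actually being shared, sharemgr can be queried (a sketch) -

                aretter@mnemosyne:~$ pfexec sharemgr show -vp
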
Build Issues

When building your own custom system from lots of different parts (some of which are very new to market), there are likely to be a few unanticipated issues during the build, and this was no exception. Luckily the issues I had were all minor -

  • Not enough USB port headers - The MSI IM-945GC motherboard I used only had two USB headers. I used these to connect the NAS SD card reader, which meant that I could not connect the USB sockets on the front of the NAS chassis. This is not a major problem as I can just use the sockets on the back.
  • Missing hard disk caddy screws - As soon as I discovered these were missing, I contacted mini-itx.com by email (they have no phone number). After several emails and only one very poor response saying they would look into it, I gave up on mini-itx.com. As described above, I managed to work around the issue, although after about 3 weeks a package of screws did turn up unannounced in the post, and I can only assume these were from mini-itx.com. My advice to anyone would now be: DO NOT USE mini-itx.com - their after-sales customer service is abysmal. I probably should have guessed from the fact that they never even replied to my pre-sales enquiry!
  • Fitting everything in - Mini-ITX cases can be quite a tight fit once you have all the cabling in. I would recommend avoiding large cables such as IDE where possible. It took me a couple of attempts at re-routing my cables to make best use of the available space.
Usage Findings

Once the NAS was built and functional, I decided to take some measurements to find out its real power consumption (whether it is as low as I had hoped) and also its network performance for file operations.

[Photo: plug-in energy monitor]

For measuring the power usage I used a simple plug-in energy monitor that I got from my local Maplin store. Whilst this device gives a good idea of power consumption, it is actually very hard to get consistent/reliable figures from it, as the readout tends to fluctuate quite rapidly. The figures I present here are my best efforts, and the average figures are based on observation, not calculation.

For measuring the network performance, I placed a 3.1GB ISO file on the public ZFS RAIDZ2 filesystem and performed timed copies of it to two different machines using both SCP and CIFS. The first machine was a Dell Latitude D630 Windows XP SP3 laptop, which is connected to our home Gigabit Ethernet LAN using 802.11g wireless networking (54Mbit/s) via our router. The second machine I used was a custom desktop based on an AMD Phenom X4, MSI K92A Motherboard with Gigabit Ethernet, 8GB RAM and Ubuntu x64 9.04, which is connected directly to our home Gigabit Ethernet LAN.
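
Each measurement was simply a timed copy; as a sketch (the host and file names here are illustrative) -

                you@desktop:~$ time scp aretter@mnemosyne:/thevault/public/test.iso .

The throughput is then just the file size divided by the elapsed time, e.g. 3.1GB (~3174MB) in 127 seconds ≈ 25MB/s.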

Power and Performance

Task Description                  Actual Power Consumption   Performance
Standby (Power-off)               2W                         N/A
Boot                              50W                        N/A
Idling                            40W to 47W (avg. 42W)      N/A
File Duplication on RAIDZ2 ZFS    54W to 57W (avg. 55W)      50MB/s
SCP Copy to WiFi Laptop           40W to 57W (avg. 42W)      473KB/s
CIFS Copy to WiFi Laptop          40W to 57W (avg. 42W)      1.2MB/s
SCP Copy to GbE Desktop           48W to 52W (avg. 49W)      22MB/s
CIFS Copy to GbE Desktop          49W to 52W (avg. 50W)      25MB/s
Conclusions

Overall I am very happy with my DIY NAS system, I believe it meets the requirements I set out in Part 1 very well. It is physically small and quiet, provides 2TB of reliable storage and does not use any proprietary drivers.

The power consumption is slightly higher (42W to 50W) than I estimated (33W to 50W), which is not surprising considering I only had figures for some components and not a complete system. However, I have also measured the power consumption of my desktop with and without the old HighPoint RAID 5 storage that I am aiming to replace with this NAS, and without it I have saved a huge 40W! Admittedly I am now using 10W more overall, but I have a networked storage system that is used by the whole house. I think if I replaced my desktop's primary hard disk with a Western Digital Green drive I could probably claw back those additional watts anyhow.

I am very happy with the network performance, and it is more than adequate for our needs. I have been told that I could probably increase it with careful tuning of various OpenSolaris configuration options.

The cost, whilst high for a home IT mini-project, is not unreasonable, and I think I would struggle to find a commercial product at the same price point which offered the same capabilities and flexibility.

Further Considerations

We have both an XBox 360 and PlayStation 3 in our house that can be used as media streamers. The PS3 requires a DLNA source and the 360 a UPnP source, and it looks like ps3mediaserver should support both. However ps3mediaserver also requires a number of open source tools such as MPlayer and ffmpeg amongst others. There are no OpenSolaris packages for these, so I will have to figure out how to compile them, which will take some time.

A website for controlling and administering the NAS would be a nice feature. Especially if you could schedule HTTP/FTP/Torrent downloads straight onto the NAS. When I have a rainy week, I may attempt this. I could see this eventually leading to a custom cut-down OpenSolaris distribution built especially for NAS.

Adam Retter posted on Sunday, 5th July 2009 at 20.10 (GMT+01:00)
Updated: Sunday, 5th July 2009 at 20.10 (GMT+01:00)

tags: Chenbro, OpenSolaris, NAS, DIY, ZFS, RAIDZ2, CIFS


Choosing Software and Hardware for my DIY NAS

DIY NAS - Part 2 of 3

In deciding to build my own NAS, after having identified my requirements in Part 1, I set about searching for the perfect hardware and software combination...

There are plenty of open source operating systems available that offer multiple options for reliable storage, including both hardware- and software-supported RAID. To avoid getting into the situation of outdated/unsupported hardware again, I decided not to use any sort of hardware-assisted RAID; instead I will use the software RAID support provided by the operating system itself.

Many of these operating systems support a vast array of system hardware; however, I did not want to simply reuse a standard PC/server because of its large power requirements and physical size when compared to commercial NAS systems aimed at the SMB (Small and Medium Business) market.

Software

Linux, FreeBSD and OpenBSD all offer options for software RAID. There are also a number of distributions specifically designed for NAS appliances, such as OpenFiler (based on Linux) and FreeNAS (based on FreeBSD); however, I settled on OpenSolaris because of ZFS and its RAIDZ feature. Also worthy of a mention is the very interesting and well-suited-looking NexentaStor (based on OpenSolaris), but its added goodness is not open source, so I considered it no further.

A few reasons why I chose OpenSolaris and ZFS -

  • OpenSolaris is based on Solaris (it feels solid, just like Solaris)
  • OpenSolaris has CIFS and NFS support built in
  • ZFS RAIDZ and RAIDZ2 provide better than RAID functionality
  • ZFS has 256 bit checksumming and self healing
  • ZFS does not suffer from the RAID-5 write hole
  • ZFS snapshots

Looking at ZFS and the likelihood of disk failures, I decided to go for a RAIDZ2 approach, which requires at least 4 disks. RAIDZ2 is an advancement of traditional RAID-6: it writes double parity, which is distributed across the disks. In this configuration approximately 50% of your total disk space is available for file storage and the other 50% is used for parity information; with four disks, continuous operation is ensured even if two of them fail.
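
As a worked example: four 1TB (931.51GB) disks in RAIDZ2 give 4 x 931.51GB of raw space, but only 2 x 931.51GB ≈ 1.82TB of usable space - which tallies with the ~1.78T that 'zfs list' reports in Part 3 once overheads are taken into account.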

Simon Breden has written some excellent blog entries on building a home fileserver using ZFS; they have informed my decisions, and he perhaps explains the salient points more eloquently than I can. An article of Simon's on the advantages of using ZFS is here.

Hardware

My first, and by far hardest, task was finding a suitable chassis for my DIY NAS. I wanted it to hold at least four disks for storage and one disk for the operating system - a total of 5 disks... and it needed to be physically small; this is my home, not a data-center.

[Photo: Norco NS-520 NAS chassis - front/side view]

Originally, after much searching, I found the Norco NS-520, which looked absolutely perfect; it supported 6 disks, came with a Mini-ITX motherboard, Celeron-M processor, 512MB RAM and a 180W PSU, and was physically small (277x187x230mm). Unfortunately the cheapest option was shipping it directly from the manufacturer in Shenzhen, China at $687 =~ £464.82. The cost seemed high (and would have been higher still after import duty and VAT), and the maximum power consumption was more than I had hoped for.

[Photo: Chenbro ES34069 NAS chassis - front view]

The only other NAS chassis that I eventually found was the Chenbro ES34069, and again it ticked all the boxes; it supported 5 disks, accepted a Mini-ITX motherboard, had a small 120W PSU and was physically small (260x140x260mm). It was also available from a UK reseller for £205.85 inc. VAT - cheaper than the Norco (even after adding a motherboard, CPU and RAM), and the maximum power consumption was lower :-)

For the Chenbro chassis I needed to source my own Mini-ITX motherboard. The main requirements were that it support at least 4 SATA disks for storage plus an additional IDE/SATA disk for the operating system, and that it have low power consumption. Finding a motherboard with a low-power CPU and at least 4 SATA ports turned out to be a tough task; I found only two -

  • VIA EPIA SN 18000 EG – VIA 1.8GHz C7 32bit CPU / 26W - £178.25 from mini-itx.com (inc. VAT)
  • MSI IM-945GC – Intel Atom 330 1.6GHz Dual Core 64bit CPU / 24.33W Max - $169.00 from orbitmicro.com (£191.13 after currency exchange, import VAT and handling charges)

I chose MSI's Intel Atom board as it offered considerably more processing power at lower power consumption; it is also 64-bit, unlike the VIA - there are rumoured problems with ZFS on 32-bit systems.

I would also like to take a moment to congratulate MSI on their excellent pre-sales technical support: a quick call to their UK office, and within a day their Taiwan office had emailed me the power consumption specifications for the motherboard :-)

The complete bill of materials and estimated power consumption follow. The remaining items were chosen for their suitability with the chassis and motherboard and/or for their low power consumption.

Bill of Materials

Supplier         Part Description                                                             Cost
mini-itx.com     Euro C5 Power Cord                                                           £3.00
                 2.5” to 3.5” IDE Hard Disk Adapter                                           £7.50
                 Sony Optiarc AD-7590A-01 Trayload Slimline DVD+-RW Drive                     £37.50
                 2GB DDR2 667 DIMM for EPIA SN / Atom / JNC62K and Socket LGA775 Boards       £29.00
                 Chenbro 4-in-1 Card Reader (SD/Mini-SD/MMC/MCS)                              £9.50
                 Chenbro ES34069 Mini-ITX Home Server/NAS Chassis                             £179.00
                 SHIPPING                                                                     £12.00
                 VAT                                                                          £41.63
                 Sub Total                                                                    £319.13

CCL Computers    4 x 1TB Western Digital Caviar Green 3.5” SATA 3Gb/s Hard Disk (WD10EADS)    £247.96
                 80GB Seagate Momentus 5400.3 2.5” IDE Hard Disk (ST980815A)                  £35.85
                 SHIPPING                                                                     £5.21
                 VAT                                                                          £43.35
                 Sub Total                                                                    £332.37

PC World         Power Cable Y Splitter (4-pin Molex to 4-pin Molex + 4-pin Floppy)           £3.39
                 VAT                                                                          £0.51
                 Sub Total                                                                    £3.90

Orbit Micro      MSI IM-945GC Mini-ITX Motherboard, Intel Atom 330 Dual Core 1.6GHz           $169.00
                 SHIPPING TO UK (USPS)                                                        $57.69
                 Sub Total                                                                    $226.69
                 VISA Exchange Rate @ 1.46082                                                 £155.18
                 Overseas Transaction Fee                                                     £1.00
                 Parcelforce Import VAT                                                       £21.45
                 Parcelforce Handling Fee                                                     £13.50
                 Sub Total                                                                    £191.13

TOTAL NAS COSTS                                                                               £846.53

Estimated Power Consumption

Part                                                     Idle (each)   Max (each)   Qty   Total Idle   Total Max
MSI IM-945GC Motherboard                                 17.46W        24.33W       1     17.46W       24.33W
1TB Western Digital Caviar Green Hard Disk (WD10EADS)    3.70W         6.00W        4     14.80W       24.00W
80GB Seagate Momentus Hard Disk (ST980815A)              0.80W         2.00W        1     0.80W        2.00W

TOTAL ESTIMATED POWER CONSUMPTION                                                         33.06W       50.33W

Read more in Part 3 - Building my DIY NAS.

Adam Retter posted on Tuesday, 2nd June 2009 at 23.00 (GMT+01:00)
Updated: Tuesday, 7th July 2009 at 17.59 (GMT+01:00)

tags: NAS, RAIDZ, OpenSolaris, Chenbro, MSI

