I recently did the same, but on a real machine. I wasn't that comfortable with the terminal, but I got used to it, and it works very well.
I use SMB and NFS (which are both built in) to share the files to my Macs. SMB seems to have a problem with umlauts, though maybe not generally; I'm investigating that 😛 Otherwise it works great. All the little stuff can be remote-configured via SSH, and if you want the GUI, you can run X locally and use it as the X server for OpenSolaris. Very nice and much faster than VNC (which also works out of the box). The flexibility of ZFS is also great.
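For anyone curious, that remote administration is just standard tooling; a rough sketch, assuming the box answers as "opensolaris-nas" and the dataset is called tank/media (both names are examples):

    ssh -X admin@opensolaris-nas      # -X tunnels X11, so GUI tools on the NAS display on the local X server
    zfs set sharesmb=on tank/media    # OpenSolaris in-kernel CIFS: publish the dataset over SMB
    zfs set sharenfs=on tank/media    # and/or share it over NFS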
I use three 1 TB drives in a RAID-Z and two 500 GB drives in a mirror, combined in one zpool. Further drives can easily be added (but only as whole vdevs; a single drive cannot be added to an existing RAID-Z), or the drives can be replaced with bigger ones.
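In zpool terms, that layout and a later expansion look roughly like this (the Solaris-style device names are examples):

    zpool create tank raidz c0t0d0 c0t1d0 c0t2d0    # three 1 TB drives as one RAID-Z vdev
    zpool add -f tank mirror c0t3d0 c0t4d0          # two 500 GB drives as a mirror vdev in the same pool
                                                    # (-f: zpool flags mixing raidz and mirror as a mismatched replication level)
    zpool add tank raidz c0t5d0 c0t6d0 c0t7d0       # growing the pool means adding another whole vdev;
                                                    # a single disk cannot join the existing raidz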
OK, that worked, so here's a quick recap. I've had a Drobo and have successfully upgraded it to 4.1 TB, but I'm less excited about their now-closed forums and support. I've also had ZFS on Solaris, OpenSolaris, and Nexenta, but I'm now struggling with the CIFS/SMB disappearing-share issue: Solaris smbd logs the message “NbtDatagramDecoder11: too small packet” (google for more) about every twelve minutes, and eventually the share no longer works and “sharemgr show -vp” hangs. So, a heads-up, and any insight appreciated. – joseph

I've lusted after the Drobo since learning about them last year, but I've been disappointed with the lack of performance (we bought one for the office to mess around with; copying to/from it while someone else is accessing a database file leaves much to be desired). As a result, I've been in the market to build a NAS.
This opens up an avenue of opportunity, as long as I can get my RAID cards to work with OpenSolaris! Now, if I could just grow my array the way the Drobo does, I'd be a very happy camper!
Hi, I'm also building a new NAS at the moment, using hardware I bought two years ago and used with Debian. Now I've replaced the four hard drives with six bigger disks and switched to OpenSolaris. I also have a Drobo. There are some more pros and cons:
+ power consumption of the Drobo should be lower than my PC hardware
+ management and status (!) software
+ the DroboPro can be used with 8 hard disks and single or double redundancy
– just a storage box
– no power switch
– no S.M.A.R.T. information
I'm also looking for a USB LCD device to use as a status display. Do you know if there is a small Windows app for retrieving status information?
Best regards, and thank you for the nice article. Nils

Hi, nice article, thanks. Just out of interest: I had a ZFS OpenSolaris box for all my services (1.5 TB video, 0.5 TB music, 1 TB data) for almost two years. I moved away from that a year ago towards Drobo.
My main reason came down to two things. ZFS can't be upgraded incrementally. Now hear me out here. You are using RAID-Z – I too used RAID-Z, even the two-parity version. But I could not grow that vdev, and all its disks must be the same size. You can expand the ZFS pool by adding another set of three or more drives, but I can't add just one drive.
You mention in your article not being able to use the spare space. Yes, in ZFS you can make partitions on a new big disk small enough to work with the old disks – and then use the leftover space for something else. In reality, though, this becomes very difficult to manage. Let me start with the best thing: ZFS has snapshots – this is the one feature I miss on the Drobo.
I loved the security of having snapshots. Now let me talk about your main points. Drobo does understand the hosted file system; this is one of the reasons it has limited file system support (e.g. ext3, NTFS, HFS+). If it did not, it would not be able to offer a 'larger than really available' solution, and resilvering a replaced disk would not copy only the data it needs.
The upper file size limit is a limit of the file system, not the Drobo. NTFS, HFS+, and ext3 do not understand resizing (well, not really – there are ways). You cannot use all the spare space with differently sized disks – there are scenarios where the disk space can't all be used, e.g. if you added one 1 TB and three 500 GB disks. But the same goes for a ZFS setup: you could in theory use the three 500 GB drives in a RAID-Z and then the 1 TB as a single vdev, but then if that single 1 TB drive fails, you lose everything in the pool.
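To make that failure mode concrete, here is a sketch of the risky layout just described (device names are examples); zpool actually warns about the unredundant vdev and requires -f:

    zpool create tank raidz c0t1d0 c0t2d0 c0t3d0    # three 500 GB drives, single-parity RAID-Z
    zpool add -f tank c0t4d0                        # the lone 1 TB drive as a plain, unredundant vdev
    # ZFS stripes data across both vdevs, so if c0t4d0 dies, the whole pool is lost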
The 4-drive limit sucks. This was my main reason to start with ZFS – I wanted to use seven 320 GB disks. Drobo has more recently added 5-disk and 8-disk models, but they are very expensive.
Network accessible – you are talking here about the DroboShare. ZFS is a file system; Drobo is an external hard disk; network access effectively requires a computer. You can of course build a computer with all the bits, including internal hard disks, ZFS, and various file systems.
But Drobo is just like an old SCSI RAID controller – it is neither the operating system nor the network layer – so the two are not really comparable. That said, I would like to see Drobo become more intelligent.
The downside of Drobo is that it does NOT have its own file system, so it has all the faults of external drives (both human error and software corruption). Drobo is too expensive – agreed! This is just to explain where Drobo sits: it seems you are comparing Drobo with a NAS, but even Drobo+DroboShare is only a simple NAS. What Drobo is good at, and what it does well, is zero management of disk space.
To add a new drive, you pull one out and put a new one in – you now have a bigger disk system (like you said, as long as you made the volume big enough in the first place). Drives do not have to be the same size – in ZFS RAID-Z they do. What Drobo does badly, though, even with the DroboShare, is the file system side; ZFS is more secure and reliable because of its snapshots.

In the end, after living with ZFS for more than a year, it was just too much maintenance, and almost impossible to add more space to (seven disks take up all the slots on the board, and I would need to add more drives to move the data across before I could unmount the old ones). Living with Drobo for more than a year, I can say that I have had no issues, and I have added new disk space three times, each time buying the best GB/$ drive and replacing the smallest disk. BUT I expect to have a problem one day.
Drobo does not protect me from accidental deletes (ZFS snapshots do) or from corruption (e.g. HFS+ external drives losing their directory structures). So ZFS is still better, if you have the time to manage it and don't mind upgrading the drives in one big go instead of one drive at a time.
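That accidental-delete protection is essentially a one-liner in ZFS; a minimal sketch, with example pool and snapshot names:

    zfs snapshot tank/data@before-cleanup     # cheap, near-instant checkpoint
    rm -rf /tank/data/photos                  # the accidental delete...
    zfs rollback tank/data@before-cleanup     # ...undone from the snapshot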
Good article. I'm building out a second server to use for offsite duplication of my data. In both cases I am using hot-swap bays from StarTech to put my drives into. There is a 4-drive bay as well; these have fans, are easily installed, and seem robust. The use of snapshots with “zfs send” and “zfs receive” is what I am most interested in.
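For the duplication itself, send/receive works roughly like this (the hostname "offsite" and the pool and snapshot names are examples):

    zfs snapshot tank/data@2010-05-01
    zfs send tank/data@2010-05-01 | ssh offsite zfs receive backup/data
    # subsequent runs only ship the delta between snapshots:
    zfs snapshot tank/data@2010-05-08
    zfs send -i tank/data@2010-05-01 tank/data@2010-05-08 | ssh offsite zfs receive backup/data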
Having to pay for 4-6 TB of storage and upload to Z3 seems pretty pricey. So far it has been nice to have everything in one place. I will also be using VirtualBox to run an instance of Windows Home Server so the few remaining PCs in the house can do backups. This will let me power off my HP Home Server box, my Drobo, and my Time Machine disk. One point of failure in the house now, but the remote copy is what I will be relying on for robust data integrity.
It seems that there is a simple workaround for the ZFS expandability issue. This is the basic idea: I will be creating five RAID-5 sets inside one ZFS pool, creating four partitions on each of the four drives and using these partitions as the storage containers for the RAID arrays. This way I can control the size of each partition and get the best use of the drive space within a RAID-5 array. ZFS allows you to expand the size of a RAID array, and the method I'm describing lets you take advantage of that feature. What I will be doing is slicing up the four hard drives into as many equal-sized parts as I can. From there I'll build five RAID-5 arrays (one for each line of partitions in the phase 1 illustration) and create a zpool called tank0 containing all five arrays.
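A rough sketch of what that pool creation could look like, assuming Solaris-style slice names and five equal slices per drive (the device names, slice numbers, and slice count are illustrative; the real layout depends on the drive sizes):

    zpool create tank0 \
        raidz c0t0d0s0 c0t1d0s0 c0t2d0s0 c0t3d0s0 \
        raidz c0t0d0s1 c0t1d0s1 c0t2d0s1 c0t3d0s1 \
        raidz c0t0d0s3 c0t1d0s3 c0t2d0s3 c0t3d0s3 \
        raidz c0t0d0s4 c0t1d0s4 c0t2d0s4 c0t3d0s4 \
        raidz c0t0d0s5 c0t1d0s5 c0t2d0s5 c0t3d0s5
    # each raidz vdev is built from one slice on every drive
    # (s2 skipped: on VTOC labels it conventionally maps the whole disk)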
I've got a Drobo S, and once I get a replacement it will need a new home. The drive has been used with a Mac for Time Machine, and I keep getting “IOFireWireSBP2ORB::prepareFastStartPacket – fast start packet not full, yet pte doesn't fit” errors in the console log. Drobo said all was fine.
Running Disk Utility said all was fine. Restoring from Time Machine failed consistently. Connecting over USB currently works, but I don't trust hardware that fails silently.

As to the expandability issue: conceptually it's easy. I think FreeNAS understands GUID partition tables, which means you can have up to 128 partitions per disk. So construct partitions of 100 GB each on every disk.
For a 3 TB disk that would be 30 partitions. Now, for mirroring, you can construct vdevs using one partition each from multiple disks.
So if you had a 1 TB, a 1.5 TB, and a 2 TB drive, you would start off by mirroring 7 of the 2 TB drive's partitions with the 1 TB drive and 12 with the 1.5 TB drive, and the 1.5 and the 1.0 would share 3. The net result is that you are able to use 4.4 of the 4.5 TB available. Now replace the 1 TB with a 2 TB: the splits then become 3, 12, 12 and you can use 5.2/5.5 TB. This is done with a simple set of three equations.
Call the splits a, b, c. Then in the first problem a+b = 10 (1 TB = ten 100 GB partitions), a+c = 15, and b+c = 20. You then round the answers, but you have to round more down than up (they are actually inequalities – boundary conditions). Now add a new 3 TB drive to the mix. You now have four drives: 1.5, 2, 2, 3. The first 1.5 TB of each drive is evenly used.
That leaves 0.5, 0.5, and 1 TB unused. Mathematically you then have three drives left. This one is simple: the two small parts pair up with the large part.
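Worked out explicitly (my arithmetic, following Sherwood's setup for the 1/1.5/2 TB case; the rounded split recovers the 3, 7, and 12 mirror pairs above):

    \begin{aligned}
    a + b &= 10, \quad a + c = 15, \quad b + c = 20 \\
    2(a + b + c) &= 45 \quad\Rightarrow\quad a + b + c = 22.5 \\
    (a, b, c) &= (2.5,\, 7.5,\, 12.5) \;\to\; (3, 7, 12)\ \text{after rounding} \\
    \text{check:}\quad a + b &= 10, \quad a + c = 15, \quad b + c = 19 \le 20 \\
    \text{raw space used:}\quad 22 &\times 2 \times 100\ \text{GB} = 4.4\ \text{TB of}\ 4.5\ \text{TB}
    \end{aligned}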
Striping these vdevs would NOT be a good idea, as it would cause a lot of disk seeks; concatenating them would make more sense.

Breaking a large drive into many small pieces and piecing together a larger pool is something I've also considered as a way to make ZFS a Drobo replacement. The issue is just settling the math, as you've illustrated, Sherwood. It seems doable; you just need to pick small enough pieces that you could quickly shuffle space around to handle a drive loss. But you still can't delete vdevs in ZFS, nor shrink them, so the loss of a drive on a ZFS system still requires replacement media, in the form of hot spares for RAID-Z or mates for mirroring. So you can't effectively do what Drobo does: shrink the available space when a drive dies while keeping protection against the loss of another drive, provided there is still enough free space after the first drive dies.