Category Archives: ZFS


I have a new Debian 7 server (VM) and would like to use a raw iSCSI LUN on it, presented from my QNAP (and put ZFS on it in the end, but that's already covered elsewhere here). Prerequisites: Read and understand some iSCSI best practices, like those ones (I have simply ignored them all here, so don't blame me when you put my construct into production). I have a QNAP NAS presenting an iSCSI target and LUN (iSCSI target). I have a standard Debian 7 server (VM) on the same network (iSCSI initiator). On the Debian 7 VM, install the open-iscsi package: apt-get install open-iscsi. There are two things to edit in /etc/iscsi/iscsid.conf: 1) I want my iSCSI to start automatically when I boot my server. Search for node.startup = manual and change it to node.startup = automatic. 2) If you use CHAP authentication on the iSCSI device you will need […]
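The two iscsid.conf changes above would look roughly like this, a minimal sketch assuming CHAP is enabled on the QNAP (the credentials are placeholders, not real values):

```
# /etc/iscsi/iscsid.conf
node.startup = automatic

# Only needed if the target uses CHAP; username and password are hypothetical.
node.session.auth.authmethod = CHAP
node.session.auth.username = initiatoruser
node.session.auth.password = initiatorsecret
```

After restarting open-iscsi, the target can then be discovered and attached with iscsiadm -m discovery -t sendtargets -p <NAS-IP> followed by iscsiadm -m node --login.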

It is widely known that ZFS can compress and deduplicate. Deduplication works at the pool level and removes duplicate data blocks as they are written to disk. This results in only unique blocks being stored on disk, while the duplicate blocks are shared among the files. There is a good read about how dedup works and about some tweaks, like changing the checksum hashing function. Note: compression works fine under zfsonlinux, but the current version does not yet support deduplication (16.09.2014). ZFS on FreeBSD (for example FreeNAS) and Solaris (and OpenSolaris) have a higher pool version and support deduplication. Deduplication was introduced with pool version 21. Zpool versions and features ( List of operating systems supporting ZFS (Wikipedia). Now, how to determine whether you would actually benefit from deduplicated and compressed datasets? I ran the following under FreeNAS with a test setup filled with real data (self-recorded camera .mov files, ISOs, virtual […]
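Before turning dedup on it is worth estimating what the dedup table (DDT) will cost in RAM. A back-of-the-envelope sketch, under my own assumptions (1 TiB of unique data, 64 KiB average block size, roughly 320 bytes of ARC per DDT entry):

```shell
# Rough DDT memory estimate -- all three inputs are assumptions, adjust for your pool.
DATA_BYTES=$((1024 * 1024 * 1024 * 1024))   # 1 TiB of unique data
BLOCK_SIZE=$((64 * 1024))                   # 64 KiB average block size
BLOCKS=$((DATA_BYTES / BLOCK_SIZE))         # 16777216 unique blocks
DDT_BYTES=$((BLOCKS * 320))                 # ~320 bytes of ARM per DDT entry
echo "$((DDT_BYTES / 1024 / 1024 / 1024)) GiB of RAM for the DDT"
# prints: 5 GiB of RAM for the DDT
```

To measure instead of guessing, zdb -S <poolname> simulates deduplication on an existing pool and prints the expected dedup ratio without actually enabling it.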

In the wake of the current TrueCrypt FUD: it seems not too widely known that you have been able to encrypt your data with ZFS for quite a while. And it also works along with compression and deduplication. However, this applies only to ZFS zpool version 30 onwards (introduced with Solaris 11), while ZFS on Linux currently still runs on zpool version 28, so it is not available there. To read in detail how it works, you will find here a few interesting posts: How to Manage ZFS Data Encryption; Introducing ZFS Crypto in Oracle Solaris 11 Express; Having my secured cake and Cloning it too (aka Encryption + Dedup with ZFS). The encryption options are: aes-128-ccm (default), aes-192-ccm, aes-256-ccm, aes-128-gcm, aes-192-gcm, aes-256-gcm. Only CCM supports encryption along with compression and deduplication, so I ditch GCM and go for (putting my weak, half-torn tinfoil hat on) aes-256-ccm. I'll create a new filesystem […]
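On Solaris 11 the dataset creation would look roughly like this, a sketch only (tank/secure is a made-up dataset name, and with the default keysource ZFS will prompt for a passphrase):

```shell
# Solaris 11 only -- zpool v28 on Linux has no encryption support.
# Combining CCM-mode encryption with compression and dedup on one dataset:
zfs create -o encryption=aes-256-ccm -o compression=on -o dedup=on tank/secure
zfs get encryption tank/secure
```

Note that encryption can only be set at dataset creation time; it cannot be switched on for an existing dataset.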

In relation to the outdated post: things have become much easier now. How to install ZFS native on Debian 7 or Proxmox 3.x: 1) Become root: sudo su - 2) Install ZFS: wget the zfsonlinux release package, then dpkg -i zfsonlinux_3~wheezy_all.deb, apt-get update, apt-get install debian-zfs. Kernel upgrade: after updating the kernel you most likely have to do the following steps. Make sure you have the headers installed (Proxmox example): aptitude install pve-headers-$(uname -r) (it will pick the currently running kernel version; if you just updated the kernel, you had better reboot first). Then: ln -s /lib/modules/$(uname -r)/build /lib/modules/$(uname -r)/source and aptitude reinstall spl-dkms zfs-dkms. That should cover you on a kernel update. Related posts: Debian / Kanotix / Proxmox: Install ZFS Native; Linux: Install Proxmox Virtual Environment on Debian 6.0 Squeeze Distro (Kanotix); ZFS: Fun with ZFS – is compression and deduplication useful for my data and how much memory do I need for ZFS dedup? […]
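Collected in one place, the kernel-update recovery steps above can be sketched as follows (the pve-headers package is the Proxmox case from the post; on plain Debian the package would be linux-headers-$(uname -r) instead):

```shell
# Rebuild the ZFS kernel modules after a kernel upgrade (run as root).
aptitude install pve-headers-$(uname -r)    # reboot first if you just updated the kernel
ln -s /lib/modules/$(uname -r)/build /lib/modules/$(uname -r)/source
aptitude reinstall spl-dkms zfs-dkms        # DKMS rebuilds spl and zfs for the new kernel
```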

Talking about ZFS and the ARC cache: generally, ZFS is designed for servers, and as such its default settings are to allocate: – 75% of memory on systems with less than 4 GB of memory – physmem minus 1 GB on systems with more than 4 GB of memory (the info is from Oracle, but I expect the same values for ZFS native on Linux). That might be too much if you intend to run anything else, like virtualisation or applications, on the server, and while ZFS returns cached memory when a memory-intensive application asks for it, there might be a delay in doing so, causing some waits. We want to limit the memory ZFS can allocate, to give the applications some air to breathe. However, this won't make any significant performance improvement to the filesystem itself. It's just to free memory for other applications which might have started to […]
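On ZFS on Linux the ARC ceiling is the zfs_arc_max module parameter, set in bytes. A minimal sketch for capping it at 4 GiB (the 4 GiB figure is just an example):

```shell
# Compute 4 GiB in bytes and emit the module option line for /etc/modprobe.d/zfs.conf.
ARC_MAX=$((4 * 1024 * 1024 * 1024))
echo "options zfs zfs_arc_max=${ARC_MAX}"
# prints: options zfs zfs_arc_max=4294967296
```

The printed line goes into /etc/modprobe.d/zfs.conf (possibly followed by update-initramfs -u and a reboot); the value currently in effect can be read from /sys/module/zfs/parameters/zfs_arc_max.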

Here we go again. After installing ZFS on a Debian-based Proxmox node, I now need some bang on a CentOS server. To remind you all: ZFS on Linux is reasonably stable and mature, but you put it in place at your own risk. ZFS native comes from the ZFS on Linux project. I use a CentOS 6.3 minimal installation and have a 2 GB disk configured for demo purposes: fdisk -l Disk /dev/sdb: 2147 MB, 2147483648 bytes. Updated (26.04.2013): a Russian fellow describes his way to install it, which I used and improved. He built his own repo, which is maintained as of 05.03.2013. So we start with: cd /etc/yum.repos.d/, then wget the repo file and rpm --import the key. There is now an original repo from ZFS on Linux which we are going to use, since it has the latest version of ZFS, 0.6.1: yum localinstall --nogpgcheck If you don't have […]
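The install boils down to adding the upstream repo and pulling the packages. A sketch only: the release RPM URL below is an assumption from memory, not taken from the post, so check zfsonlinux.org for the current one.

```shell
# CentOS 6: add the ZFS on Linux repo and install (run as root).
# The release RPM URL is assumed, not confirmed by the post.
yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el6.noarch.rpm
yum install -y zfs
modprobe zfs && zpool status   # verify the module loads
```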

Install ZFS Native on Kanotix / Debian Squeeze / Proxmox 2.1. This whole thing below is obsolete; I created a new post with up-to-date details for Debian 7.0 Wheezy. OK, OK, I admit, I'm going really crazy now… As I described here -> Linux: Install Proxmox Virtual Environment on Debian 6.0 Squeeze Distro (Kanotix), I have Proxmox Virtual Environment running on Kanotix Debian. I now spotted that I'm slowly running low on free disk space. I could mount space from my NAS via NFS, CIFS or iSCSI, but that's kind of too easy, and for testing purposes the break-to-success ratio is too low 🙂 . And BTW, I love ZFS, so I install it (because I want to deduplicate and compress). Based on Proxmox… this is pretty straightforward and almost too easy too. Kanotix uses Debian, not Ubuntu, and certainly not the latest Ubuntu, so we need […]

Howto rename a zpool and a ZFS mountpoint. I accidentally named a pool tets rather than test. So I renamed it.

$ zpool status -v
  pool: tets
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tets        ONLINE       0     0     0
          c0d1      ONLINE       0     0     0
          c1d0      ONLINE       0     0     0
          c1d1      ONLINE       0     0     0

errors: No known data errors

To fix this, I first exported the pool:

$ zpool export tets

And then imported it with the correct name:

$ zpool import tets test

After the import completed, the pool carried the name I had originally intended to give it:

$ zpool status -v
  pool: test
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        test        ONLINE       0     0     0
          c0d1      ONLINE       0     0     0
          c1d0      ONLINE       0     0     0
          c1d1      ONLINE       0     0     0

errors: No known data errors

Step 2: Fix the also messed-up ZFS […]
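The excerpt cuts off at step 2, but fixing a mountpoint that still points at the old pool name would presumably look like this (the dataset and path names are assumptions based on the example above):

```shell
# If the dataset still mounts under the old /tets path, point it at /test:
zfs set mountpoint=/test test
zfs get mountpoint test   # confirm the new mountpoint
```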

ZFS Evil Tuning Guide From Siwiki Overview Tuning is Evil Tuning is often evil and should rarely be done. First, consider that the default values are set by the people who know the most about the effects of the tuning on the software that they supply. If a better value exists, it should be the default. While alternative values might help a given workload, it could quite possibly degrade some other aspects of performance. Occasionally, catastrophically so. Over time, tuning recommendations might become stale at best or might lead to performance degradations. Customers are leery of changing a tuning that is in place and the net effect is a worse product than what it could be. Moreover, tuning enabled on a given system might spread to other systems, where it might not be warranted at all. Nevertheless, it is understood that customers who carefully observe their own system may […]
