Tag Archives: compression


Update 02.03.2015: added (modified) CentOS / RedHat instructions. A successor to compcache is zram, which is fully integrated into the mainline Linux kernel and uses LZO compression. The idea behind it is to create swap devices out of chunks of RAM and to compress those chunks on the fly, increasing the effectively available memory and ideally reducing the need to swap to slow disks. It costs a small amount of extra CPU time, but the reduced I/O should more than make up for this. This is primarily interesting for small VPS instances, netbooks and other low-memory devices; virtualisation hosts should also benefit from compressed memory. Unfortunately the zram-config script is currently not part of the Debian and CentOS distributions. I will run some further tests and update here. In Ubuntu, from 12.04 onwards, the install script is included and it takes only a minute to set up zram. How to […]
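On distributions that lack the zram-config package, a zram swap device can be set up by hand along these lines. This is only a sketch: it assumes a kernel with the zram module available, requires root, and the 512 MB size is purely illustrative.

```shell
# Load the zram module with a single device (run as root)
modprobe zram num_devices=1

# Size the compressed swap device, e.g. 512 MB (illustrative value)
echo $((512 * 1024 * 1024)) > /sys/block/zram0/disksize

# Format it as swap and enable it with a higher priority than disk swap,
# so the kernel prefers the compressed RAM device
mkswap /dev/zram0
swapon -p 10 /dev/zram0

# Verify the device shows up
swapon -s
```

To make this survive a reboot you would wrap it in an init script, which is essentially what Ubuntu's zram-config package does.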

It is widely known that ZFS can compress and deduplicate. Deduplication works at the pool level and removes duplicate data blocks as they are written to disk. This results in only unique blocks being stored on disk, while duplicate blocks are shared among the files. There is a good read about how dedup works, and about tweaks such as changing the checksum hashing function: https://blogs.oracle.com/bonwick/entry/zfs_dedup Note: compression works fine under zfsonlinux, but the current version does not yet support deduplication (16.09.2014). ZFS on FreeBSD (for example FreeNAS) and on Solaris (and OpenSolaris) has a higher pool version and supports deduplication. Deduplication was introduced with pool version 21. Zpool versions and features (blogs.oracle.com) List of operating systems supporting ZFS (wikipedia) Now, how do you determine whether you would actually benefit from deduplicated and compressed datasets? I ran the following under FreeNAS with a test setup filled with real data. (self recorded camera .mov, ISOs, virtual […]
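Outside of ZFS you can get a rough feel for dedup potential with plain coreutils: hash every 128K chunk (the default ZFS recordsize) of the files in a directory and compare total vs. unique chunk counts. This is a crude, hypothetical stand-in, not the author's FreeNAS test; the function name is made up and GNU split/sha256sum are assumed.

```shell
# Rough dedup estimate: a high total/unique ratio suggests dedup would pay off.
# 131072 bytes = 128K, matching the default ZFS recordsize.
estimate_dedup() {
    dir="$1"
    tmp=$(mktemp)
    for f in "$dir"/*; do
        [ -f "$f" ] || continue
        # --filter pipes each 128K chunk through sha256sum to stdout
        split -b 131072 --filter='sha256sum' "$f" >> "$tmp"
    done
    total=$(wc -l < "$tmp")
    unique=$(sort -u "$tmp" | wc -l)
    echo "total=$total unique=$unique"
    rm -f "$tmp"
}
```

On a real pool, `zdb -S poolname` simulates deduplication and reports the expected dedup ratio without committing to `dedup=on`, which is the safer way to decide.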

In the wake of the current Truecrypt FUD, it seems not too widely known that you have been able to encrypt your data with ZFS for quite a while now. It also works alongside compression and deduplication. However, this applies only to ZFS pool version 30 onwards (introduced with Solaris 11), while ZFS on Linux currently still runs on zpool version 28, so it is not available there. To read in detail about how it works, here are a few interesting posts: How to Manage ZFS Data Encryption Introducing ZFS Crypto in Oracle Solaris 11 Express Having my secured cake and Cloning it too (aka Encryption + Dedup with ZFS) The encryption options are: aes-128-ccm (default), aes-192-ccm, aes-256-ccm, aes-128-gcm, aes-192-gcm and aes-256-gcm. Only CCM supports encryption along with compression and deduplication, so I ditch GCM and go for (putting my weak, half-torn tinfoil hat on) aes-256-ccm. I'll create a new filesystem […]
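Creating such a dataset on Solaris 11 looks roughly like the following sketch. The pool name `tank` and dataset name `secure` are illustrative; with the default keysource, Solaris prompts for a passphrase when the filesystem is created.

```shell
# Create an encrypted, compressed, deduplicated filesystem (Solaris 11).
# Encryption can only be set at creation time, not enabled afterwards.
zfs create -o encryption=aes-256-ccm \
           -o compression=on \
           -o dedup=on \
           tank/secure

# Confirm the properties took effect
zfs get encryption,compression,dedup tank/secure
```

Note that, unlike compression and dedup, the encryption property is immutable once the dataset exists, so you have to pick the cipher up front.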

The initial reason for installing BTRFS was that I like the ZFS features of snapshots, compression and deduplication. Deduplication under btrfs is possible, but somewhat external, and I won't trust it until I have been able to test it on a system here that I can afford to trash. See here on how to do it; I will give it a try on a test system soon. But compression, my second most important feature, is working fine and is super easy. All I did was enable the compression flag in /etc/fstab, and all newly written data will be compressed. On how the compression works, please refer to here. I edited the fstab to look like the following: vi /etc/fstab UUID=f4705516-4eac-435c-920c-1f2be8fa07af /               btrfs   defaults,autodefrag,discard,clear_cache,compress=lzo,subvol=@ 0       1 UUID=f4705516-4eac-435c-920c-1f2be8fa07af /home btrfs defaults,autodefrag,discard,clear_cache,compress=lzo,subvol=@home 0 2 After loads of updates and other messy stuff, it looks like the following. # df -h / […]
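One caveat worth sketching: the `compress=lzo` mount option only affects data written after the option is active, so files that existed before remain uncompressed. They can be rewritten compressed with a recursive defragment (the `/home` path is just the example from the fstab above; this needs root and rewrites a lot of data, so expect I/O load):

```shell
# Rewrite existing files with LZO compression (-r = recursive, -clzo = compress)
btrfs filesystem defragment -r -v -clzo /home

# Check space usage afterwards
btrfs filesystem df /home
df -h /home
```

Be aware that defragmenting rewritten files breaks their sharing with existing snapshots, so snapshot-heavy setups will temporarily use more space, not less.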
