Debian 7: Installing iSCSI LUN as raw disk (iSCSI initiator)

I have a new Debian 7 server (VM) and would like to use a raw iSCSI LUN on it, presented from my QNAP (and put ZFS on it at the end, but that's already covered elsewhere here).
Prerequisites:
Read and understand some iSCSI best practices, like these: http://storageblog.typepad.com/storage_blog/2009/03/simple-iscsi-best-practices-top-3.html (I have simply ignored them all here, so don't blame me when you put my construct into production).
I have a QNAP NAS at 192.168.1.9 presenting an iSCSI target and LUN (iSCSI target).
I have a standard Debian 7 server (VM) on the same network (iSCSI initiator).
On the Debian 7 VM
Install the open-iscsi package
apt-get install open-iscsi
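The package also generates an initiator name for this host. If you want to restrict access on the QNAP side to just this initiator, you can look it up (the IQN below is only a placeholder, yours will differ):
cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1993-08.org.debian:01:abcdef123456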
There are 2 things to edit in /etc/iscsi/iscsid.conf:
1) I want my iSCSI to start automatically when I boot my server. Search for
node.startup = manual
# and change it to
node.startup = automatic
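If you prefer to script that edit, a minimal sketch with sed, assuming the stock iscsid.conf still contains the default line:
sed -i 's/^node.startup = manual/node.startup = automatic/' /etc/iscsi/iscsid.conf
grep '^node.startup' /etc/iscsi/iscsid.conf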
2) If you use CHAP authentication on the iSCSI device you will need to configure the credentials as well.
For the sake of this guide I left CHAP authentication out, which is not best practice in security terms.
Search for, uncomment and edit:
node.session.auth.authmethod = CHAP
node.session.auth.username = username
node.session.auth.password = password
If the target has already been discovered, the per-node settings live under /etc/iscsi/nodes/iqn.*/<Target_Server_IP>,3260,1/default and can also be set with iscsiadm:
iscsiadm -m node --targetname "iqn.2004-04.com.qnap:datastore.lun1" --portal "<Target_Server_IP>:3260" --op=update --name node.session.auth.authmethod --value=CHAP
iscsiadm -m node --targetname "iqn.2004-04.com.qnap:datastore.lun1" --portal "<Target_Server_IP>:3260" --op=update --name node.session.auth.username --value=myuser
iscsiadm -m node --targetname "iqn.2004-04.com.qnap:datastore.lun1" --portal "<Target_Server_IP>:3260" --op=update --name node.session.auth.password --value=mypassword
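To double-check what open-iscsi has stored for that node you can print the record (a sketch, the grep is just to shorten the output):
iscsiadm -m node --targetname "iqn.2004-04.com.qnap:datastore.lun1" --portal "<Target_Server_IP>:3260" | grep node.session.auth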
Restart open-iscsi to pick up the changes:
/etc/init.d/open-iscsi restart
[ ok ] Unmounting iscsi-backed filesystems: Unmounting all devices marked _netdev.
[....] Disconnecting iSCSI targets:iscsiadm: No matching sessions found. ok
[ ok ] Stopping iSCSI initiator service:.
[ ok ] Starting iSCSI initiator service: iscsid.
[....] Setting up iSCSI targets: iscsiadm: No records found . ok
[ ok ] Mounting network filesystems:.
Now discover the targets on the QNAP:
iscsiadm -m discovery -t sendtargets -p 192.168.1.9
192.168.1.9:3260,1 iqn.2004-04.com.qnap:ts-110:iscsi.xen.bd0bff
192.168.1.9:3260,1 iqn.2004-04.com.qnap:ts-110:iscsi.zfs.bd0bff
As you can see, I have 2 targets configured, and each target even has a few iSCSI LUNs under it. That makes the output of fdisk -l slightly cluttered. So I pick the target I want connected to this server and remove the discovered entry for the other one.
The easiest way is to go to /etc/iscsi/nodes:
cd /etc/iscsi/nodes
# ls -al
drw------- 3 root root 4096 Oct 15 20:45 iqn.2004-04.com.qnap:ts-110:iscsi.xen.bd0bff
drw------- 4 root root 4096 Oct 15 20:45 iqn.2004-04.com.qnap:ts-110:iscsi.zfs.bd0bff
rm -rf iqn.2004-04.com.qnap:ts-110:iscsi.xen.bd0bff
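If you prefer not to delete the directory by hand, the same can be done via iscsiadm (a sketch, using the target and portal from the discovery above):
iscsiadm -m node --targetname "iqn.2004-04.com.qnap:ts-110:iscsi.xen.bd0bff" --portal "192.168.1.9:3260" --op=delete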
If you now restart open-iscsi you will see that it tries to log in only to the remaining target.
After a reboot you will only see the LUNs from that target.
/etc/init.d/open-iscsi restart
[ ok ] Unmounting iscsi-backed filesystems: Unmounting all devices marked _netdev.
[....] Disconnecting iSCSI targets:Logging out of session [sid: 1, target: iqn.2004-04.com.qnap:ts-110:iscsi.zfs.bd0bff, portal: 192.168.1.9,3260]
Logout of [sid: 1, target: iqn.2004-04.com.qnap:ts-110:iscsi.zfs.bd0bff, portal: 192.168.1.9,3260] successful. . ok
[ ok ] Stopping iSCSI initiator service:.
[ ok ] Starting iSCSI initiator service: iscsid.
[....] Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2004-04.com.qnap:ts-110:iscsi.zfs.bd0bff, portal: 192.168.1.9,3260] (multiple)
Login to [iface: default, target: iqn.2004-04.com.qnap:ts-110:iscsi.zfs.bd0bff, portal: 192.168.1.9,3260] successful. . ok
[ ok ] Mounting network filesystems:.
Note that if you rerun the discovery with "iscsiadm -m discovery -t sendtargets -p 192.168.1.9", all targets will be added again and open-iscsi will try to log in to all of them on the next restart.
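If you would rather keep the second entry around without open-iscsi logging in to it automatically, you can also switch just that node back to manual startup (a sketch):
iscsiadm -m node --targetname "iqn.2004-04.com.qnap:ts-110:iscsi.xen.bd0bff" --portal "192.168.1.9:3260" --op=update --name node.startup --value=manual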
If you now run fdisk -l or lsblk you will see all disks presented under the target. If, like me, you have multiple LUNs under one target, you will see all of them and need to make sure you pick the right one.
example:
lsblk
NAME                                MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
xvda                                202:0    0     8G  0 disk
├─xvda1                             202:1    0   243M  0 part /boot
├─xvda2                             202:2    0     1K  0 part
└─xvda5                             202:5    0   7.8G  0 part
  ├─linuxmanagement00-root (dm-0)   254:0    0   7.4G  0 lvm  /
  └─linuxmanagement00-swap_1 (dm-1) 254:1    0   376M  0 lvm  [SWAP]
sr0                                  11:0    1  1024M  0 rom
sdb                                   8:16   0    50G  0 disk
sda                                   8:0    0   150G  0 disk
sdc                                   8:32   0   100G  0 disk
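To be sure that sdb really is the LUN you want, the by-path device links and the session details map each block device back to its portal, target and LUN (a sketch, the exact names depend on your setup):
ls -l /dev/disk/by-path/ | grep iscsi
iscsiadm -m session -P 3 | grep -E "Target|Attached scsi disk"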
Feel free to add them to your OS as you usually would.
Oh, and if you are here for the whole thing and really want to see it (assuming you have zfsonlinux running):
zpool create -f aptly-pool /dev/sdb
zpool status aptly-pool
  pool: aptly-pool
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        aptly-pool    ONLINE       0     0     0
          sdb         ONLINE       0     0     0

errors: No known data errors
zpool list
NAME         SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
aptly-pool  49.8G   124K  49.7G     0%  1.00x  ONLINE  -
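One design note: /dev/sdb is not guaranteed to keep its name across reboots, so for anything beyond a quick test it is safer to build the pool on a persistent name, e.g. the by-path link of the LUN (the link below is only illustrative, check /dev/disk/by-path/ for the real one):
zpool create -f aptly-pool /dev/disk/by-path/ip-192.168.1.9:3260-iscsi-iqn.2004-04.com.qnap:ts-110:iscsi.zfs.bd0bff-lun-0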