Bauer-Power SAN 3.0

Many moons ago I wrote about how to configure an Ubuntu Linux-based iSCSI SAN. The first iteration used iSCSITarget as the iSCSI solution. The problem with that was that it didn’t support SCSI-3 Persistent Reservations, which means it wouldn’t work for Windows failover clustering, and you would probably see issues if you tried to use it with VMware, XenServer, or Hyper-V.

The second iteration used SCST as the iSCSI solution, and that worked pretty well, but you had to compile it from source and the config file was kind of a pain in the ass. Still, it did support SCSI-3 Persistent Reservations and was VMware-ready. It’s the solution I’ve been using since 2012, and it has worked out pretty well.

Well, the other day I decided to rebuild, from scratch, one of the original units I set up. The first two units I did this setup on were SuperMicro SC826TQs with 4 NICs, 2 quad-core CPUs, 4GB of RAM, a 3Ware 9750-4i RAID controller, and twelve 2TB SATA drives. This sucker gave me about 18TB of usable backup storage after I configured the 12 disks in RAID 6.

This time I used Ubuntu 18.04 Server because, unlike the first time I did this, the latest versions of Ubuntu have native drivers for 3Ware controllers. On top of that, the latest versions of Ubuntu have the iSCSI software I wanted to use right in the repositories… More on that later.

After Ubuntu was installed, I needed to set up my network team. Ubuntu 18.04 uses Netplan for network configuration now, which means that NIC bonding or teaming is built in. To set up bonding or teaming, you just need to modify your /etc/netplan/50-cloud-init.yaml file. Here is an example of how I set up my file to team the four NICs I had, as well as use MTU 9000 for jumbo frames.
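A minimal sketch of that file (the interface names, IP addressing, and bonding mode below are placeholders; substitute the values for your own hardware and network):

```yaml
network:
  version: 2
  ethernets:
    # The four physical NICs that will be members of the bond
    eno1: {dhcp4: no}
    eno2: {dhcp4: no}
    eno3: {dhcp4: no}
    eno4: {dhcp4: no}
  bonds:
    bond0:
      interfaces: [eno1, eno2, eno3, eno4]
      addresses: [100.100.10.50/24]   # example address on the SAN subnet
      gateway4: 100.100.10.1          # example gateway
      mtu: 9000                       # jumbo frames
      parameters:
        mode: balance-alb             # one of several possible bonding modes
```

Apply the change with `sudo netplan apply`, or use `sudo netplan try` first if you want a chance to roll back.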

After setting up my bonded network, I installed my software. I opted to use tgt this time. If you are unfamiliar with it, it’s apparently a re-write of iscsitarget, but it supports SCSI-3 Persistent Reservations. I verified that myself using a Windows Failover Cluster Validation test.
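Since tgt is in Ubuntu’s standard repositories, installation is a single apt command. You also need a backing store for your first LUN; a common approach is a sparse file created with dd (the /iscsi path, file name, and 1TB size here are just examples):

```bash
# Install tgt from the Ubuntu repositories
sudo apt update
sudo apt install tgt

# Create a directory to hold LUN backing files (example path)
sudo mkdir -p /iscsi

# Create a 1TB sparse file to use as the backing store for LUN 1
sudo dd if=/dev/zero of=/iscsi/lun1.img bs=1 count=0 seek=1T
```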

Once you have your LUN file, you will want to create a config file for it. You can create a separate config file for each LUN in /etc/tgt/conf.d. Just append .conf to the end of the file name and tgt will see it when the service restarts. For our purposes, I created one called lun1.conf.
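The contents would look something like this (the IQN and the backing-store path are assumptions; the initiator-address matches the restriction described next):

```
# /etc/tgt/conf.d/lun1.conf
<target iqn.2019-01.com.example:lun1>
    # File backing the LUN (example path from above)
    backing-store /iscsi/lun1.img
    # Only this initiator IP may connect
    initiator-address 100.100.10.148
</target>
```

Then restart the service so tgt picks up the new file: `sudo systemctl restart tgt`.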

The above creates an iSCSI target and restricts access to it to 100.100.10.148 only. You can also use initiator-name to restrict access to particular iSCSI initiators, or incominguser to require CHAP authentication. You can even use a combination of all three if you want. Restricting by IP works for me though.
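For example, the same target could be locked down by initiator IQN and a CHAP login instead (the IQN, username, and password below are made up):

```
<target iqn.2019-01.com.example:lun1>
    backing-store /iscsi/lun1.img
    # Only this specific initiator may connect
    initiator-name iqn.1991-05.com.microsoft:node1.example.local
    # Require CHAP credentials
    incominguser iscsiuser supersecretpass
</target>
```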

You can also create LUNs on the fly without restarting tgt. This is handy if you need to add a LUN and you don’t want to disrupt connections to LUNs you’ve already created. To do that, create your LUN file like you did before. Obviously, name it something new, like lun2.
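Then you can register it with the running daemon using tgtadm, and dump the live config out so it survives a restart (the target ID, IQN, and paths are examples):

```bash
# Create the new backing file (example size)
sudo dd if=/dev/zero of=/iscsi/lun2.img bs=1 count=0 seek=1T

# Create a new target in the running daemon (tid 2; example IQN)
sudo tgtadm --lld iscsi --op new --mode target --tid 2 \
    -T iqn.2019-01.com.example:lun2

# Attach the backing file as LUN 1 on that target
sudo tgtadm --lld iscsi --op new --mode logicalunit --tid 2 --lun 1 \
    -b /iscsi/lun2.img

# Allow your initiator's IP to connect
sudo tgtadm --lld iscsi --op bind --mode target --tid 2 -I 100.100.10.148

# Persist the running configuration (note: this dumps ALL targets)
sudo tgt-admin --dump | sudo tee /etc/tgt/conf.d/lun2.conf
```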

The only issue with the above is that it dumps all running target information into your new file. You will have to go in there and remove the other targets. In this case, it’s just better to manually create the config file… but that’s just me. Also, that is not a typo… tgt-admin is a different tool than tgtadm. Weird, right?

It’s important to note that the above hardware is not going to give you high performance. It’s suitable for backup storage, and that’s about it. If you want to run VMs or databases, I’d recommend getting 10GbE switches for your iSCSI network, 10GbE NICs to go with them, and faster disks, like 15K RPM SAS drives. You can find all of those fairly cheap online.