Linux/iSCSI
iSCSI
iSCSI uses TCP Port 3260 by default
iSCSI - Wikipedia - http://en.wikipedia.org/wiki/ISCSI
- "iSCSI (Listeni/aɪˈskʌzi/ eye-SKUZ-ee), is an abbreviation of Internet Small Computer System Interface, an Internet Protocol (IP)-based storage networking standard for linking data storage facilities. By carrying SCSI commands over IP networks, iSCSI is used to facilitate data transfers over intranets and to manage storage over long distances. iSCSI can be used to transmit data over local area networks (LANs), wide area networks (WANs), or the Internet and can enable location-independent data storage and retrieval. The protocol allows clients (called initiators) to send SCSI commands (CDBs) to SCSI storage devices (targets) on remote servers. It is a Storage Area Network (SAN) protocol, allowing organizations to consolidate storage into data center storage arrays while providing hosts (such as database and web servers) with the illusion of locally-attached disks. Unlike traditional Fibre Channel, which requires special-purpose cabling, iSCSI can be run over long distances using existing network infrastructure."
IQN
iSCSI Qualified Name (IQN)
Format: The iSCSI Qualified Name is documented in RFC 3720, with further examples of names in RFC 3721. Briefly, the fields are:
- literal iqn
- date (yyyy-mm) that the naming authority took ownership of the domain
- reversed domain name of the authority (org.alpinelinux, com.example, to.yp.cr)
- Optional ":" prefixing a storage target name specified by the naming authority.
From the RFC:
                 Naming     String defined by
       Type  Date    Auth      "example.com" naming authority
      +--++-----+ +---------+ +-----------------------------+
      |  ||     | |         | |                             |

      iqn.2001-04.com.example:storage:diskarrays-sn-a8675309
      iqn.2001-04.com.example
      iqn.2001-04.com.example:storage.tape1.sys1.xyz
      iqn.2001-04.com.example:storage.disk2.sys1.xyz
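The field layout above can be sanity-checked mechanically. A rough sketch ('is_iqn' is just an illustrative helper name, and the full RFC 3720 grammar is more permissive than this regex):

```shell
# Rough IQN format check: literal "iqn.", a yyyy-mm date, the
# reversed domain, then an optional ":"-prefixed target name.
is_iqn() {
  echo "$1" | grep -Eq '^iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+(:.+)?$'
}

is_iqn "iqn.2001-04.com.example:storage:diskarrays-sn-a8675309" && echo valid
is_iqn "2001-04.com.example" || echo "invalid (missing iqn. prefix)"
```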
Linux iSCSI Initiator
Install initiator utils:
yum install iscsi-initiator-utils
Name       : iscsi-initiator-utils
URL        : http://www.open-iscsi.org
Summary    : iSCSI daemon and utility programs
Description: The iscsi package provides the server daemon for the iSCSI protocol, as well as the utility programs used to manage it. iSCSI is a protocol for distributed disk access using SCSI commands sent over Internet Protocol networks.
Files and tools:
/sbin/iscsiadm                   # open-iscsi administration utility
/sbin/iscsi-iname                # iSCSI initiator name generation tool
/etc/iscsi/iscsid.conf           # iscsi initiator configuration (usually no need to modify)
/etc/iscsi/initiatorname.iscsi   # iqn for client
/etc/rc.d/init.d/iscsi           # Logs into iSCSI targets needed at system startup
/etc/rc.d/init.d/iscsid          # Starts and stops the iSCSI daemon
iscsiadm    - open-iscsi administration utility
iscsi-iname - iSCSI initiator name generation tool
The RPM executes the following script on install:
echo "InitiatorName=`/sbin/iscsi-iname`" > /etc/iscsi/initiatorname.iscsi
Sample /etc/iscsi/initiatorname.iscsi:
InitiatorName=iqn.1994-05.com.redhat:41bd651b632e
Note: The Initiator does not need to be on the same Subnet as the Target! :-)
Show database: (same as '--op show')
iscsiadm -m discoverydb      # show targets
iscsiadm -m discovery        # show targets
iscsiadm -m node             # show nodes (iqns)
                             #   the ",1" is the Target Port Group Tag (node.tpgt)
                             #   a ",-1" usually indicates manually added target
iscsiadm -m session          # show login sessions
iscsiadm -m node -T [IQN]    # show node detail records
Discover targets: (and add to database)
iscsiadm --mode discovery --type sendtargets --portal 10.50.47.40
iscsiadm -m discovery -t sendtargets -p 10.50.47.40
iscsiadm -m discovery -t sendtargets -p [SERVER] --op nonpersistent   # discover, but don't add to DB
# example:
# iscsiadm -m discovery -t sendtargets -p 10.50.47.40:3260
#   (or by hostname iscsi-t, still shows IP)
10.10.10.40:3260,1 iqn.1994-05.com.redhat:storage.for.web
10.10.10.40:3260,1 iqn.1994-05.com.redhat:storage.for.db
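Each discovery line is "portal,tpgt iqn", so when scripting against many targets the IQN column can be pulled out with awk (sketch; 'extract_iqns' is just an illustrative name):

```shell
# The second whitespace-separated field of sendtargets output is the IQN.
extract_iqns() { awk '{print $2}'; }

# feeding it the example output above:
printf '%s\n' \
  '10.10.10.40:3260,1 iqn.1994-05.com.redhat:storage.for.web' \
  '10.10.10.40:3260,1 iqn.1994-05.com.redhat:storage.for.db' | extract_iqns
```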
Manually add target to database:
iscsiadm --mode node --targetname [IQN] --portal [SERVER] --op new
Login to the iscsi target session:
iscsiadm --mode node --targetname [TARGET_IQN] --login    # if IQN already in DB
iscsiadm --mode node --targetname [TARGET_IQN] --portal [TARGET_SERVER] --login
iscsiadm -m node -T [TARGET_IQN] -p [TARGET_SERVER] -l
# example:
# iscsiadm --mode node --targetname iqn.2009-02.com.example:for.all --portal 10.50.47.40:3260 --login
Logging in to [iface: default, target: iqn.2009-02.com.example:for.all, portal: 10.50.47.40,3260] (multiple)
Login to [iface: default, target: iqn.2009-02.com.example:for.all, portal: 10.50.47.40,3260] successful.
Login to all discovered targets:
iscsiadm --mode node --loginall=all
# note: --loginall=automatic only logs into nodes with node.startup=automatic
Note: restarting the 'iscsi' service will also login to all discovered targets!
Show session details, like luns:
iscsiadm -m session --print 3
Attached SCSI devices:
************************
Host Number: 10  State: running
scsi10 Channel 00 Id 0 Lun: 0
scsi10 Channel 00 Id 0 Lun: 1
        Attached scsi disk sde          State: running
Verify storage attach:
# tail -f /var/log/messages
Apr 18 07:48:02 localhost kernel: scsi1 : iSCSI Initiator over TCP/IP
...
Apr 18 07:48:02 localhost kernel: sd 1:0:0:1: Attached scsi disk sdc
Apr 18 07:48:02 localhost kernel: sd 1:0:0:1: Attached scsi generic sg3 type 0
Apr 18 07:48:03 localhost iscsid: Connection1:0 to [target: iqn.2009-02.com.example:for.all, portal: 10.50.47.40,3260] through [iface: default] is operational now
You may need to restart iSCSI to probe partition and check disks:
service iscsi restart
partprobe     # in parted package
fdisk -l
Now you can fdisk, format and mount the lun:
fdisk /dev/sdc
mkfs.ext3 /dev/sdc1
mount /dev/sdc1 /mnt/iscsi
Note: the targets are added to the local iscsi database (/var/lib/iscsi/nodes/ and /var/lib/iscsi/send_targets/), so on reboot they will automatically be reconnected.
To mount automatically, add this to /etc/fstab:
/dev/sdc1 /mnt/iscsi ext3 _netdev 0 0
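Device letters such as sdc/sdd can change between reboots as sessions attach in a different order, so a UUID-based fstab entry is more robust (sketch; the UUID below is made up, get the real one from 'blkid /dev/sdc1'):

```
UUID=3e6be9de-8139-4a7c-9106-a43f08d823a6  /mnt/iscsi  ext3  _netdev  0 0
```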
Logout of a target:
iscsiadm --mode node --targetname [TARGET_IQN] --logout
Logout of all discovered targets:
iscsiadm --mode node --logoutall=all
Delete a target from db: (must be logged out)
iscsiadm --mode node --targetname [TARGET_IQN] --op delete
Note: This is how you would mask targets
CHAP (/etc/iscsi/iscsid.conf) [1]
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = jdoe
discovery.sendtargets.auth.password = YourSecurePwd1
node.session.auth.authmethod = CHAP
node.session.auth.username = jdoe
node.session.auth.password = YourSecurePwd1
Manually specify CHAP for target:
iscsiadm --mode node --targetname "iqn.2007-01.org.debian.foobar:CDs" -p 192.168.0.1:3260 --op=update --name node.session.auth.authmethod --value=CHAP
iscsiadm --mode node --targetname "iqn.2007-01.org.debian.foobar:CDs" -p 192.168.0.1:3260 --op=update --name node.session.auth.username --value=[USERNAME]
iscsiadm --mode node --targetname "iqn.2007-01.org.debian.foobar:CDs" -p 192.168.0.1:3260 --op=update --name node.session.auth.password --value=[PASSWORD]
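The same three --op=update calls can be run in one loop. A sketch with a dry-run guard (RUN=echo only prints the commands; clear it to apply them for real; target, portal, and credentials are the example values from above):

```shell
RUN=echo                                   # set RUN= to actually execute
T="iqn.2007-01.org.debian.foobar:CDs"
P=192.168.0.1:3260
for kv in "authmethod CHAP" "username jdoe" "password YourSecurePwd1"; do
  set -- $kv                               # $1 = setting, $2 = value
  $RUN iscsiadm --mode node --targetname "$T" -p "$P" --op=update \
    --name "node.session.auth.$1" --value="$2"
done
```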
References:
- Linux tgtadm: Setup iSCSI Target ( SAN ) - http://www.cyberciti.biz/tips/howto-setup-linux-iscsi-target-sanwith-tgt.html
- CentOS / Red Hat Linux: Install and manage iSCSI Volume - http://www.cyberciti.biz/tips/rhel-centos-fedora-linux-iscsi-howto.html
Linux Target
FreeNAS
FreeNAS 8 | Storage For Open Source - http://www.freenas.org/
"FreeNAS™ is an Open Source Storage Platform based on FreeBSD and supports sharing across Windows, Apple, and UNIX-like systems. FreeNAS™ 8 includes ZFS, which supports high storage capacities and integrates file systems and volume management into a single piece of software."
Note: Installation is 64bit FreeBSD using an ISO (FreeNAS-8.0.4-RELEASE-p1-x64.iso)
Local disks are labeled 'da0', 'da1', etc...
---
ISCSI - Freenas - http://doc.freenas.org/index.php/ISCSI
- iSCSI is a protocol standard that allows the consolidation of storage data. iSCSI allows FreeNAS™ to act like a storage area network (SAN) over an existing Ethernet network. Specifically, it exports disk devices over an Ethernet network that iSCSI clients (called initiators) can attach to and mount. Traditional SANs operate over fibre channel networks which require a fibre channel infrastructure such as fibre channel HBAs, fibre channel switches, and discreet cabling. iSCSI can be used over an existing Ethernet network, although dedicated networks can be built for iSCSI traffic in an effort to boost performance. iSCSI also provides an advantage in an environment that uses Windows shell programs; these programs tend to filter "Network Location" but iSCSI mounts are not filtered.
---
Administrator Web Interface:
http://[SERVER]/
Turn iSCSI on:
- Services -> iSCSI (on)
Configure iSCSI:
- Services -> iSCSI (wrench)
iSCSI:
Target Global Configuration:
- Base Name: IQN (default: iqn.2011-03.example.org.istgt | example: iqn.2000-00.net.iodd)
Portals: (Listening Address)
- Add Portal
- Portal: 0.0.0.0:3260 (default)
Authorized Initiator:
- Add Authorized Initiator
- Initiators: ALL (default)
- Authorized network: ALL (default)
Targets:
- Add Target
- Target Name (appended to iqn: 'storage.for.db')
- Target Alias (optional friendly name)
- Type: Disk
- Target Flags: read-write
- Portal Group ID: (required, do Portals first)
- Initiator Group ID: (required, do Authorized Initiator first)
- Auth Method: None
Device Extents: (For Device)
- Add Extent
- Extent Name: (string identifier: 'this is da1')
- Produces Path: /dev/da1
NOTE: If you select a device that was used as a volume, this will delete the volume.
Extents: (For File off of volume/dataset mount point)
- Add Extent
- Extent Name (string identifier)
- NOTE: Couldn't get the volumes to format!
Associated Targets:
- Add Extent to Target
- Target: (friendly name of target)
- Extent: (friendly name of extent)
Authentication:
- CHAP, not used in this example
Storage: (For Extents)
- Active Volumes:
- Create Volume
- Volume name (ken1)
- Member disks (da1)
- Filesystem type: UFS or ZFS
- Create Volume
- NOTE: Always get an error, use Device Extents only??
# iscsiadm -m discovery -t sendtargets -p 10.50.47.44 --op nonpersistent
10.10.10.44:3260,1 iqn.2000-00.com.oeey:storage.for.test
OpenFiler
Openfiler — Openfiler - Open Source Storage Management Appliance - http://www.openfiler.com/
"Openfiler is a network storage operating system, fronted by a web based management user interface. With the features we built into Openfiler, you can take advantage of file-based Network Attached Storage and block-based Storage Area Networking functionality in a single cohesive framework.product hand Any industry standard x86 or x86/64 server can be converted into a powerful multi-protocol network storage appliance, replete with an intuitive browser-based management interface, in as little as 15 minutes. File-based storage networking protocols such as CIFS and NFS ensure cross-platform compatibility in homogeneous networks - with client support for Windows, Linux, and Unix. Fibre channel and iSCSI target features provide excellent integration capabilities for virtualization environments such as Xen and VMware. iSCSI target functionality is especially useful for enterprise applications such as Microsoft Exchange server integration, Oracle 10g RAC backend storage or video surveillance and disk-to-disk backup. "
"The Openfiler distribution is available as an installable ISO CD image to be deployed on bare-metal or a pre-installed disk image for use in one of the several supported virtual machines monitors. Installable images are available for both x86 and x86-64 architectures. Supported virtual machines monitors include Xen, VMware, QEMU, Virtual Iron and Parallels."
Installer: 64bit Linux - Openfiler ESA ISO x86_64 - openfileresa-2.99.1-x86_64-disc1.iso
OpenFiler Features:
- NAS Features - CIFS, NFS, HTTP
- SAN Features - iSCSI, FC
- High Availability / Failover
- Block Replication (LAN & WAN)
- Web-based Management
- Cost-free Storage Capacity Expansion
Web administration GUI:
site: https://[SERVER]:446/ (notice httpS)
Administrator Username: openfiler
Administrator Password: password
NOTE: The Web GUI is *MUCH* slower with Openfiler, than it is with FreeNAS
Root Access:
ssh: root@[SERVER]
username: root
password: (set during installation)
---
Services:
- iSCSI Target - Enable
- iSCSI Target - Start
System -> Network Access Configuration
- Name: "Free for All"
- Network/Host: 10.0.0.0
- Netmask: 255.0.0.0
- Type: Share
NOTE: 0.0.0.0/0 does not appear to work, probably due to matching with the deny rule in /etc/initiators.deny. :-(
Volumes -> Block Devices -> Edit /dev/sd? -> Create a partition
- Mode: Primary
- Partition Type: *Physical volume*
- Start/Ending cylinder: (leave default)
- CREATE
NOTE: If you selected 'Extended partition' you will need to then also create a 'Physical volume' under it.
Volumes -> Volume Groups -> add volume group (volume group name and add physical volumes)
Volumes -> Add Volume -> Create a volume in "volume group"
- Volume Name (no spaces)
- Required Space: (manually enter space listed above - LAME!)
- Filesystem / Volume type: block (iSCSI,FC,etc)
Volumes -> iSCSI Targets
Target Configuration -> Add new iSCSI Target
- Target IQN: iqn.2000-01.com.oeey:storage.for.db
Network ACL -> "Free For All" -> Access: Allow
LUN Mapping
- R/W Mode: write-thru (options: write-thru, write-back, read-only)
- Transfer Mode: blockio (options: blockio, fileio)
---
Error on search:
[root@iscsi-i sde]# iscsiadm -m discovery -t sendtargets -p 10.10.10.45 --op nonpersistent
iscsiadm: No portals found
Cause:
- the iSCSI host access configuration is wrong! Check if Access is set to 'Allow'. If you add a specific target, it will work.
- Try stopping and starting the iSCSI service
- Give the service time to finish starting up...
If that doesn't work, you can manually remove the deny entry in /etc/initiators.deny. Note: this file will be reverted the next time the ACLs are updated.
---
Articles:
Openfiler — Graphical Installation - http://www.openfiler.com/learn/how-to/graphical-installation
- Installation of Openfiler using the standard graphical-based installation method.
- This document describes the process of installing Openfiler using the default graphical installation interface. If you experience any problems with the graphical install track, such as a garbled screen due to the installer not being able to auto-detect your graphics hardware, please try a text-based install.
HowTo:Openfiler - Greg Porter's Wiki - http://greg.porter.name/wiki/HowTo:Openfiler
- I've been doing a lot of work with Openfiler, a well regarded Open Source Storage Appliance Software package. With Openfiler, you can take "any old" server you have laying around and make a iSCSI, CIFS or NFS file server easily and quickly.
- I don't intend for this to duplicate the contents of the Openfiler manual. The Openfiler people made the basic install and configuration dead easy. Really. If you've ever loaded an operating system before (especially Linux, like Red Hat) it's dead easy, a no-brainer. If you aren't comfortable with this, do what I and many others have done. Try it over and over again until you get it. If you want a screenshot by screenshot manual, buy one from Openfiler. The Openfiler folks make their living selling manuals and support. I bought one, because I think they are doing a great job. You should, too. Their manual covers basic OpenFiler operations, but doesn't cover some of the cooler advanced features, like replication.
- COMPLETE ISCSI SETUP!
How to set up a free iSCSI or NAS storage system for VMware ESX using Openfiler - http://www.dabcc.com/article.aspx?id=9768
- "Everything I am about to demonstrate to you here is free. You won't have to spend a penny on software to build this architecture, the end result here is a centralised storage system that can be used for iSCSI or NAS storage hosting to all your ESX clients to enable the use of VMotion, HA and DRS services."
How to configure OpenFiler iSCSI Storage for use with VMware ESX. | ESX Virtualization - http://www.vladan.fr/how-to-configure-openfiler-iscsi-storage-for-use-with-vmware-esx/
- "I wanted to test an Openfiler as a solution for HA and vMotion with ESX Server and vCenter. Using Openfiler is a great way to save some money on expensive SANs and for home lab testing and learning, this choice is just great."
Use OpenFiler as your Free VMware ESX SAN Server - http://www.petri.co.il/use-openfiler-as-free-vmware-esx-san-server.htm
- Many of the VMware ESX Server advanced features cannot be used without a SAN (storage area network). Besides the high cost of the ESX Server software and the Virtual Infrastructure Suite, a SAN can be a huge barrier to using VMware ESX and features like VMotion, VMware High Availability (VMHA), and VMware Distributed Resource Scheduler (DRS). In this article, we take a look at how you can download a free open-source iSCSI server and use it as your SAN storage for VMware ESX and its advanced features.
- Includes pictures
Connect VMware ESX Server to a Free iSCSI SAN - http://www.petri.co.il/connect-vmware-esx-server-iscsi-san-openfiler.htm
- Many of the VMware ESX Server advanced features cannot be used without a SAN (storage area network). Besides the high cost of the ESX Server software and the Virtual Infrastructure Suite, a SAN can be a huge barrier to using VMware ESX and features like VMotion, VMware High Availability (VMHA), and VMware Distributed Resource Scheduler (DRS). In this article, we take a look at how you can download a free open-source iSCSI server and use it as your SAN storage for VMware ESX and its advanced features.
- Follow up to previous article
Build Your Own Cheap iSCSI SAN for ESX Server - http://www.petri.co.il/iscsi-san-vmware-esx.htm
- Many of the features of VMware ESX Server and VMware Virtual Infrastructure depend on having a storage area network (SAN). That applies to all the "cool" features like vMotion and VMHA. With a SAN, you have two choices, FC or iSCSI. A fiber channel (FC) SAN can easily cost as much as a small house and enterprise iSCSI equipment may cost half that. Still, what if you just want a test or demonstration iSCSI SAN? No one wants to have to buy one of these expensive options if you want to just test a couple of ESX Servers and Virtual Center. What are your options?
Linux tgt
Install tgtd:
yum install scsi-target-utils
Name       : scsi-target-utils
URL        : http://stgt.sourceforge.net/
Summary    : The SCSI target daemon and utility programs
Description: The SCSI target package contains the daemon and tools to set up SCSI targets. Currently, software iSCSI targets are supported.
tgtd files and utilities:
/etc/rc.d/init.d/tgtd
/etc/sysconfig/tgtd
/etc/tgt/targets.conf
/usr/sbin/tgt-admin
/usr/sbin/tgt-setup-lun
/usr/sbin/tgtadm
/usr/sbin/tgtd
/usr/sbin/tgtimg
tgtadm - Linux SCSI Target Administration Utility
Start tgtd daemon:
chkconfig tgtd on
service tgtd restart
Show targets: (target name, connected clients, luns and acls)
tgtadm --lld iscsi --mode target --op show
Define an iscsi target name (TID = Target ID)
tgtadm --lld iscsi --mode target --op new --tid [ID] --targetname `/sbin/iscsi-iname`
tgtadm --lld iscsi --mode target --op new --tid 1 --targetname iqn.2001-04.com.example:storage.disk2.amiens.sys1.xyz
tgtadm --lld iscsi --mode target --op new --tid 1 --targetname iqn.1994-05.com.redhat:storage.for.db
NOTE: TID cannot be zero
Delete a target:
tgtadm --lld iscsi --mode target --op delete --tid [ID]
Add a logical unit to the target: (aka Backing store path)
tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 -b /dev/sdb1
NOTE: LUN cannot be zero - reserved for controller
Add a file based logical unit to target:
dd if=/dev/zero of=/fs.iscsi.disk bs=1M count=512
tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 -b /fs.iscsi.disk
Allow initiator access. To enable the target to accept any initiator, enter:
# note: can use '-I' for initiator address
# note: do for each TID
tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL
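The target setup steps (create the target, attach a LUN, open the ACL) chained into one script, sketched with a dry-run guard (RUN=echo only prints the commands; clear it to execute); the IQN and backing device are the example values used earlier:

```shell
RUN=echo                                   # set RUN= to actually execute
TID=1
IQN=iqn.1994-05.com.redhat:storage.for.db
$RUN tgtadm --lld iscsi --mode target      --op new  --tid "$TID" --targetname "$IQN"
$RUN tgtadm --lld iscsi --mode logicalunit --op new  --tid "$TID" --lun 1 -b /dev/sdb1
$RUN tgtadm --lld iscsi --mode target      --op bind --tid "$TID" --initiator-address ALL
```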
This should open network port # 3260:
netstat -tulpn | grep 3260
---
Make settings persistent across reboots:
cp /etc/tgt/targets.conf /etc/tgt/targets.conf.original
tgt-admin --dump > /etc/tgt/targets.conf
chkconfig tgtd on
/etc/tgt/targets.conf
default-driver iscsi

<target iqn.1994-05.com.redhat:alpha>
    backing-store /dev/sdb1
    backing-store /dev/sdb2
    backing-store /dev/sdb3
    backing-store /dev/sdb4
</target>

<target iqn.1994-05.com.redhat:storage.for.vmware>
    backing-store /dev/sdc1
</target>
---
ACL - IP-based restrictions
If you've previously configured this target to accept ALL initiators, you'll need to remove that first.
tgtadm --lld iscsi --mode target --op unbind --tid 1 -I ALL
Restrict access to a specific IP/subnet
tgtadm --lld iscsi --mode target --op bind --tid 1 -I 10.10.0.65      # restrict to ip
tgtadm --lld iscsi --mode target --op bind --tid 1 -I 10.10.0.0/24    # restrict to subnet
---
CHAP Permissions
Show accounts:
tgtadm --lld iscsi --mode account --op show
There are two types of CHAP configurations supported for iSCSI authentication:
Authentication Type      | A.K.A.                                   | Description
-------------------------|------------------------------------------|--------------------------------------------------
Initiator Authentication | Forward, One-Way                         | The initiator is authenticated by the target.
Target Authentication    | Reverse, Bi-directional, Mutual, Two-way | The target is authenticated by the initiator. This method also requires Initiator Authentication.
- Initiator Authentication is basic CHAP authentication. A username and password is created on the target. Each initiator logs into the target with this information.
- Target Authentication is an authentication method in addition to Initiator Authentication. A separate "outgoing" username is created on the target. This username/password pair is used by the target to log into each initiator. Initiator Authentication must also be configured in this scenario.
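On the initiator side the two types map onto /etc/iscsi/iscsid.conf like this (sketch; the usernames and passwords are placeholders, and the *_in pair is only needed when the target has an outgoing account configured):

```
# Initiator Authentication (forward / one-way)
node.session.auth.authmethod = CHAP
node.session.auth.username = consumer_user
node.session.auth.password = ConsumerPwd1

# Target Authentication (reverse / mutual) - note the _in suffix
node.session.auth.username_in = provider_user
node.session.auth.password_in = ProviderPwd1
```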
CHAP Initiator Authentication
Create user/password:
tgtadm --lld iscsi --mode account --op new --user [CONSUMER_USER] --password [PASSWORD]
Add user to existing device:
tgtadm --lld iscsi --mode account --op bind --tid 1 --user [CONSUMER_USER]
On the initiator's system, this username/password information is entered into /etc/iscsi/iscsid.conf as:
- For discovery authentication (not supported by tgt yet): discovery.sendtargets.auth.{username,password}
- For session authentication: node.session.auth.{username,password}
CHAP Target Authentication (outgoing)
Create user/password:
tgtadm --lld iscsi --mode account --op new --user [PROVIDER_USER] --password [PASSWORD]
Add user to existing device:
tgtadm --lld iscsi --mode account --op bind --tid 1 --user [PROVIDER_USER] --outgoing
On the initiator's system, this username/password information is entered into /etc/iscsi/iscsid.conf as:
- For discovery authentication (not supported by tgt yet): discovery.sendtargets.auth.{username_in,password_in}
- For session authentication: node.session.auth.{username_in,password_in}
NOTE: Careful with target authentication, there is a bug that doesn't allow the removal of outgoing binding:
$ tgtadm --lld iscsi --mode account --op unbind --tid 4 --user ken --outgoing
tgtadm: target mode: option '-O' is not allowed/supported
---
References:
- Linux tgtadm: Setup iSCSI Target ( SAN ) - http://www.cyberciti.biz/tips/howto-setup-linux-iscsi-target-sanwith-tgt.html
- Scsi-target-utils Quickstart Guide - FedoraProject - http://fedoraproject.org/wiki/Scsi-target-utils_Quickstart_Guide
keywords
NAS SAN ISCSI Linux