GNU/Linux and SCSI are a natural pair because they share similar characteristics in their respective domains. GNU/Linux is a secure and reliable operating system built to run non-stop; SCSI is a proven choice for reliable, high-performance storage. Both are also open: you can download and read the various SCSI specifications from the T10 Technical Committee of the InterNational Committee for Information Technology Standards (INCITS), and you can likewise download the GNU/Linux source to study its implementation. Each dominates its respective industry, so it's not surprising that GNU/Linux offers some of the best SCSI support of any operating system.
SCSI is an interesting interface to study because it is one of the oldest storage interfaces still evolving today. SCSI traces its roots to the Shugart Associates System Interface (SASI), developed around 1979; the first SCSI standard, called SCSI-1, grew out of it and defined an 8-bit parallel interface with a 5MHz data clock, providing a maximum data transfer rate of 5 megabytes per second (MB/s).
The SCSI-2 standard began in 1985 and resulted in a faster data rate (10MHz) and a wider bus (16 bits). Called Fast/Wide, SCSI-2 allowed data transfer rates up to 20 MB/s with backward compatibility with SCSI-1, albeit at the SCSI-1 data rate.
Development of SCSI-3 began in 1993; it is actually a collection of standards defining protocols, command sets, and signaling methods. Under the SCSI-3 moniker, you'll find a collection of parallel SCSI standards with the name Ultra as well as modern serial SCSI-based protocols such as IEEE 1394 (FireWire), Fibre Channel, Internet SCSI (iSCSI), and the new kid on the block, Serial Attached SCSI (SAS). These standards changed the storage landscape by introducing storage network technologies (such as FC-AL and iSCSI), extending data rates above 1 gigabit per second (Gbit/s), increasing the maximum number of addressable devices above 100, and extending the maximum cable length beyond 25 meters. Figure 1 shows the data-rate evolution of SCSI from 1986 through 2007.
Figure 1. The evolution of SCSI data rates
SCSI implements a client/server style of communication. An initiator sends a command request to a target device; the target processes the request and returns a response to the initiator. The initiator is typically a SCSI adapter in a host computer, and a SCSI target can be a disk drive, CD-ROM drive, tape drive, or a special device such as an enclosure services device.
While the protocols over which SCSI is transported have changed over the years, the SCSI command set retains many of its original elements. A SCSI command is defined within a Command Descriptor Block (CDB). The CDB contains an operation code defining the particular operation to perform and a number of operation-specific parameters.
SCSI commands support reading and writing data (four variants of each) as well as a number of non-data commands such as test-unit-ready (is the device ready?), inquiry (retrieve basic information about the target device), read-capacity (retrieve the storage capacity of the target device), and numerous others. The particular commands a target device supports depend on its device type, which an initiator identifies through the inquiry command. Table 1 lists the most common SCSI commands you'll encounter.
Table 1. Common SCSI commands
| Command | Description |
| --- | --- |
| Test unit ready | Inquire whether the device is ready for transfers |
| Inquiry | Request basic information about the device |
| Request sense | Request error information for a previous command |
| Read capacity | Request storage capacity information |
| Read | Read data from the device |
| Write | Write data to the device |
| Mode sense | Request mode pages (device parameters) |
| Mode select | Configure device parameters in a mode page |
With about sixty commands available, SCSI provides command capabilities for a wide range of devices (including random access devices such as disks and sequential storage devices such as tape). SCSI also provides special commands to access enclosure services (such as current sensing and temperature within a storage enclosure). See the Resources section for more information.
Figure 2 shows where the SCSI subsystem fits within the Linux kernel. At the top of the kernel is the system call interface, which handles the routing of user-space calls to their appropriate destination in the kernel (such as an open, read, or write). The virtual file system (VFS) is the abstraction layer for the multitude of file systems that are supported in the kernel. This takes care of routing requests to the appropriate file system. Most of the file systems communicate through a buffer cache, which is a cache that optimizes access to the physical devices by caching recently touched data. Next is the block device drivers layer, which contains the various block drivers for underlying devices. The SCSI subsystem is one of these block device drivers.
Figure 2. Where the SCSI subsystem fits in the Linux kernel
Not unlike other major subsystems in the Linux kernel, the SCSI subsystem exists as a layered architecture with three distinct layers. At the top is what's called the upper level, which represents the highest interface of the kernel for SCSI and drivers for the major device types. Next is the mid level, also called the common or unifying layer. In this layer are common services for both the upper and lower levels of the SCSI stack. Finally, there's the lower level, which represents the actual drivers for the physical interfaces that are applicable to SCSI (see Figure 3).
Figure 3. Layered architecture of the Linux SCSI subsystem
You can find the source for the SCSI subsystem (SCSI upper level, mid level, and a plethora of drivers) in ./linux/drivers/scsi. SCSI data structures can be found in the SCSI source directory and also in ./linux/include/scsi.
The upper level of the SCSI subsystem represents the highest-level interface from the kernel (the device level). It consists of a set of drivers such as the block devices (SCSI disk and SCSI CD-ROM) and the character devices (SCSI tape and SCSI generic). The upper level accepts requests from above (such as the VFS) and converts them into SCSI requests. The upper layer also completes SCSI commands and notifies the layer above of the status.
The SCSI disk driver is implemented in ./linux/drivers/scsi/sd.c. The SCSI disk driver initializes itself with a call to register_blkdev (as a block driver) and registers a common set of functions representing all SCSI disk devices with scsi_register_driver. Two functions of interest here are sd_probe and sd_init_command. Whenever a new SCSI device is attached to the system, the SCSI mid layer calls sd_probe, which determines whether the device will be managed by the SCSI disk driver and, if so, creates a scsi_disk structure to represent it. The sd_init_command function takes a request from the file system layer and turns it into a SCSI read or write command (sd_rw_intr is later called to complete the I/O request).
The SCSI tape driver is implemented in ./linux/drivers/scsi/st.c. Tape is a sequential-access device, and the driver registers itself as a character device through register_chrdev_region. The SCSI tape driver also provides a probe function, st_probe, which creates a new tape device and adds it to a vector called scsi_tapes. The SCSI tape driver is unique in that it performs I/O transfers directly from user space when possible; otherwise, data is staged through a driver buffer.
The SCSI CD-ROM driver is implemented in ./linux/drivers/scsi/sr.c. The CD-ROM driver is another block device and provides a set of functions similar to the SCSI disk driver's. The sr_probe function creates a scsi_cd structure to represent the CD-ROM device and also registers the device with register_cdrom. The SCSI CD-ROM driver likewise exports an sr_init_command function that turns a request into a SCSI CD-ROM read or write command.
Finally, the SCSI generic driver is implemented in ./linux/drivers/scsi/sg.c. This driver allows user applications to send SCSI commands to devices (such as format, mode sense, or diagnostic commands). You can take advantage of the SCSI generic driver from user space with the sg3utils package. This user-space package contains a variety of utilities for sending SCSI commands and parsing their responses.
The SCSI mid level is a common services layer for both the SCSI upper level and lower level (implemented partly in ./linux/drivers/scsi/scsi.c). It provides a number of functions that are used by upper- and lower-level drivers and, therefore, serves as the glue between these two distinct layers. This layer is important because it abstracts the implementation of lower-level drivers (LLD), partially implemented in ./linux/drivers/scsi/hosts.c. This means that Fibre Channel host bus adapters (HBAs) with different interfaces can be used in the same way.
Low-level driver registration and error handling are provided by the SCSI mid level. The mid level also provides SCSI command queuing between the upper and lower levels. A key aspect of the SCSI mid level is conversion of command requests from the upper layer into SCSI requests. It also manages SCSI-specific error recovery.
The mid layer fundamentally acts as a go-between for the upper and lower levels of the SCSI subsystem. It accepts requests for SCSI transactions and queues them for processing (as shown in ./linux/drivers/scsi/scsi_lib.c). When these commands are completed, it receives the SCSI response from the LLD and performs notification for upper-level completion of the request.
One of the most important aspects of the mid layer is error and timeout handling. When a SCSI command does not complete within a reasonable amount of time or an error is returned for a SCSI request, the mid level manages the error or retries the request. The mid level also manages higher-level recovery such as requesting an HBA (LLD) or SCSI device reset. The SCSI error and timeout handler is implemented in ./linux/drivers/scsi/scsi_error.c.
At the lowest level is a collection of drivers called the SCSI low-level drivers. These are the specific drivers that interface to the physical devices such as HBAs. The LLD provides an abstraction from the common mid layer to the device-specific HBA. Each LLD provides the interface to the particular underlying hardware but uses the standard set of interfaces to the mid layer.
The lower level contains the largest amount of code because it accounts for the wide variety of SCSI adapters. For example, LLDs are included for Fibre Channel adapters from Emulex and QLogic and for SAS adapters from Adaptec and LSI.
One thing is for sure: SCSI has a future, and it has a home in Linux. As SCSI evolves, Linux will be there with support for the newest advancements. Linux already supports the new SAS protocol with drivers for a number of HBAs, and as the protocols advance to greater speeds (such as 6 Gb SAS or 8 Gb FC), Linux will be at the forefront of development and deployment.
You'll also find Linux at the cutting edge of new SCSI protocols. One that's important to mention is Fibre Channel over Ethernet (FCoE). FCoE is a mapping of Fibre Channel frames over full duplex Ethernet networks (typically 1Gb or 10Gb Ethernet). FCoE is important because it joins the most-dominant enterprise storage protocol with the most-dominant networking medium. This new technology will certainly be one to watch, and Linux will be there.
End-to-end data protection is also on the way for SCSI, coming out of the T10's new data integrity standard. This standard adds a data integrity field (DIF) to each sector to maintain data protection on the medium. The new 8-byte DIF field includes a cyclical redundancy code (CRC) to protect the data, a reference tag to protect against misdirected writes, and an application tag. The application tag is specific to the application and can define the purpose behind the data—part of a PDF document, for example. See the Resources section for more information.
The Linux kernel is yet another model example of an abstracted, layered architecture, joining disparate file systems of differing types to a variety of physical storage media. When that storage is SCSI based, the SCSI subsystem translates common Linux block requests into SCSI requests for the particular underlying device. The SCSI subsystem itself has gone through many changes over the years, and the changes aren't done yet: new technologies such as end-to-end data protection are finding their way into Linux, as are new protocols such as FCoE.
- Check out Technical Committee T10 to learn more about the various SCSI specifications, from the fundamental SCSI commands to the newest SCSI protocols such as SAS.
- This article from the Open Source Development Lab focuses on the 2.6 kernel and the SCSI disk driver for a detailed look at the SCSI implementation in the Linux kernel.
- A list of user-space tools to query, view, or manage SCSI devices makes use of the SCSI generic driver in the Linux kernel. (Linux supports a number of SCSI tools, including the sg3utils mentioned in this article.)
- "Anatomy of the Linux kernel" (developerWorks, June 2007) gives a general overview of the Linux kernel and each of its major subsystems.
- "Kernel command using Linux system calls" (developerWorks, March 2007) explores the system call interface within the Linux kernel (from user-space call to kernel completion).
- "Anatomy of the Linux networking stack" (developerWorks, June 2007) introduces the basic architecture of the networking stack in Linux, including the major components and structures involved.
- "Anatomy of the Linux file system" (developerWorks, October 2007) explores the virtual file system (VFS), sometimes called the virtual filesystem switch, in the Linux kernel and then reviews some of the major structures that tie file systems together.
- Fibre Channel over Ethernet (FCoE) is one of the interesting SCSI protocols coming in the future. It is not yet available but is actively under development by a number of vendors.
- Learn more in this paper on the Data Integrity Field (DIF) technique for end-to-end protection. DIF provides protection on disk, detects misdirected writes, and allows application-specific tags to accompany the data for further protection. DIF is a multi-vendor technology, which means you'll find support for it in HBAs from a variety of vendors.
M. Tim Jones is an embedded software architect and the author of GNU/Linux Application Programming, AI Application Programming, and BSD Sockets Programming from a Multilanguage Perspective. His engineering background ranges from the development of kernels for geosynchronous spacecraft to embedded systems architecture and networking protocols development. Tim is a Consultant Engineer for Emulex Corp. in Longmont, Colorado.