Disk Drives - Terms, Definitions


Click one of the following links or just scroll through the list of terms and definitions.

  1. Average Access
  2. Average Seek
  3. ATA, "AT", Advanced Technology Attachment
  4. Buffers, Buffering, Cache, Cached Reads/Writes
  5. Bus, Bus Arbitration
  6. Command Queueing
  7. Cylinders
  8. EIDE
  9. Estimated Access
  10. Fibre Channel
  11. FireWire
  12. Full Duplex
  13. Gbps, Gb/s, Gigabits per Second
  14. GBps, GB/s, Gigabytes per Second
  15. Half Duplex
  16. Heads
  17. IDE
  18. IEEE 1394
  19. Interface Type, Speed
  20. Maximum Internal Data Rate
  21. Mbps, Mb/s, Megabits per Second
  22. MBps, MB/s, Megabytes per Second
  23. PATA, Parallel ATA Bus
  24. Parallel Interface
  25. PCI, PCI-X, PCIe
  26. PCMCIA, PC Card
  27. Recording Density
  28. Revolutions per Minute
  29. Rotational Delay (Latency)
  30. SATA, Serial ATA Bus
  31. SCSI, Small Computer System Interface
  32. Serial Interface
  33. Sectors
  34. Tracks
  35. USB, Universal Serial Bus

Average Access -

The average amount of time it takes for a storage peripheral to locate data and begin transferring it to the Central Processing Unit (CPU). For a disk drive this is essentially the sum of the average seek time and the average rotational delay.

Average Seek -

The average time it takes for the read/write head to move to a specific location. Calculated by dividing the time it takes to complete a large number of random seeks by the number of seeks performed.

ATA, "AT", Advanced Technology Attachment -


ATA is a standard interface for connecting storage devices such as hard drives and CD-ROM drives inside personal computers. It is based on the IBM PC AT (Advanced Technology) ISA 16-bit bus. The ATA specification deals with the power and data signal interfaces between the motherboard and the integrated disk controller and drive. The ATA "bus" supports only two devices - master and slave.

ATA was originally called IDE. When the interface was submitted to ANSI group X3T10 (now known as NCITS T10), it was renamed Advanced Technology Attachment before ratification in 1994.

ATA standards allow only cable lengths in the range of 18 to 36 inches (450 to 900 mm), so the technology usually appears as an internal computer storage interface. It provides the most common and least expensive interface for this application.

With the introduction of Serial ATA (SATA), ATA was renamed Parallel Advanced Technology Attachment (PATA) referring to the method in which data travels over wires in the interface.

Buffers, Buffering, Cache, Cached Read/Writes -

A buffer is a region of memory used to temporarily hold input or output data, comparable to buffers in telecommunication. The data can be input from or output to devices outside the computer or processes within a computer. Buffers can be implemented in either hardware or software, but the vast majority of buffers are implemented in software. Buffers are used when there is a difference between the rate at which data is received and the rate at which it can be processed, or in the case that these rates are variable, for example in a printer spooler.

A cache (pronounced "cash") is a temporary storage area where frequently accessed data can be stored for rapid access. Once the data is stored in the cache, future use can be made by accessing the cached copy rather than refetching or recomputing the original data, so that the average access time is lower.

Caches have proven extremely effective in many areas of computing because in typical computer applications recently accessed items tend to be accessed again in the near future and instructions tend to be accessed in sequential memory locations.

The difference between buffers and cache:

Buffers are allocated by various processes to use as input queues, etc. Most of the time, buffers are some processes' output, and they are file buffers. A simplistic explanation of buffers is that they allow processes to temporarily store input in memory until the process can deal with it.

Cache is typically frequently requested disk I/O. If multiple processes are accessing the same files, much of those files will be cached to improve performance (RAM being so much faster than hard drives).
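
A toy sketch of the distinction in Python may help: a read cache keeps frequently requested "disk blocks" in memory so that repeated reads skip the slow device. All of the names below are illustrative only, not any operating system's actual mechanism.

```python
disk_reads = 0          # counts simulated trips to the disk

def read_from_disk(block_id):
    global disk_reads
    disk_reads += 1     # pretend this is a slow mechanical access
    return "data-%d" % block_id

cache = {}              # block_id -> data held in fast RAM

def cached_read(block_id):
    if block_id not in cache:            # miss: fetch and remember
        cache[block_id] = read_from_disk(block_id)
    return cache[block_id]               # hit: served from memory

for block in (7, 3, 7, 7, 3):            # five reads, two unique blocks
    cached_read(block)

print(disk_reads)                        # -> 2 disk accesses instead of 5
```

Because RAM is so much faster than a hard drive, the three cache hits cost almost nothing compared with the two real reads.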

Bus, Bus Arbitration -

The bus is a set of electronic pathways that allows information and signals to travel between components inside or outside a computer. The system bus connects the CPU, system memory and all other components on the motherboard. The external bus, or expansion bus, connects the different external devices, peripherals, expansion slots, I/O ports and drive connections to the rest of the computer.

Whenever a device needs to communicate with another device connected to the motherboard, it must do so over the bus. Because the board is shared among all of the devices, a method must be used to determine which device gets access to the bus. This method is referred to as bus arbitration. The bus arbitration mechanism is designed so that high priority devices like the processor and RAM get first access to the bus, while other devices (disks, video cards, sound cards, etc.) get lower priority, and often have to wait to access the bus. The details of the prioritization scheme vary from bus to bus, but in general the arbiter grants the bus to the highest-priority device that is requesting it.

Command Queueing -

Native Command Queuing (NCQ) and Tagged Command Queuing (TCQ) are features created to improve hard disk performance by re-ordering the commands sent by the computer to the hard disk drive.

Native Command Queuing is a technology designed to increase performance of SATA hard disks by allowing the individual hard disk to receive more than one I/O request at a time and decide which to complete first. Using detailed knowledge of its own seek times and rotational position, the drive can compute the best order to perform the operations. This can reduce the amount of unnecessary seeking (going back-and-forth) of the drive's heads, resulting in increased performance (and slightly decreased wear of the drive) for workloads where multiple simultaneous read/write requests are outstanding, most often occurring in server-type applications.

Note that while command queuing can be a tremendous help if there are multiple outstanding I/O requests, NCQ adds a small amount of overhead to single requests, resulting in slightly lower performance on some single-threaded benchmarks typical of single-user computer use. The difference is never large.
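
As a rough illustration of why re-ordering helps, here is a toy "elevator" ordering in Python. It uses only block numbers and ignores rotational position, whereas a real drive's firmware uses a far more detailed internal model.

```python
def elevator_order(queued_blocks, head_position):
    """Toy model of command re-ordering: service everything at or
    beyond the current head position in ascending order, then sweep
    back for the remaining requests. Real drives also account for
    rotational position; this only models seek direction."""
    ahead = sorted(b for b in queued_blocks if b >= head_position)
    behind = sorted(b for b in queued_blocks if b < head_position)
    return ahead + behind

# Requests arrive out of order; one sweep avoids repeated back-and-forth.
print(elevator_order([98, 183, 37, 122, 14, 124, 65, 67], head_position=53))
# -> [65, 67, 98, 122, 124, 183, 14, 37]
```

Serviced in arrival order, the head would zig-zag across the platter; serviced in sweep order, it moves mostly in one direction.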

For NCQ to be enabled, it must be supported and turned on in the SATA controller driver and in the hard drive itself. Method of activation varies depending on the controller. On some Intel chipset-based PC motherboards, this technology requires the enabling of the Advanced Host Controller Interface (AHCI) in the BIOS and the installation of the Intel Application Accelerator software.

Tagged Command Queuing (TCQ) technology built into certain PATA and SCSI hard drives allows the operating system to send multiple read and write requests to a hard drive. TCQ is almost identical in function to Native Command Queuing (NCQ) used by SATA drives.

Before TCQ, an operating system was only able to send one request at a time. In order to boost performance, it had to decide the order of the requests based on its own, possibly incorrect, idea of what the hard drive was doing. With TCQ, the drive can make its own decisions about how to order the requests (and in turn relieve the operating system from having to do so). The result is that TCQ can improve the overall performance of a hard drive.

Estimated Access -

There has never been agreement about the "right" way to report access time for a drive. Read seeks are usually faster than write seeks. Rotational delay should be a consideration. Some manufacturers publish "average seek"; others publish "average access".

The numbers that we display as "Estimated Access" are computed as seek time plus rotational delay. If the manufacturer publishes a single average seek time we use that number. If they publish separate seek times for read and write we compute an average value of the two. If the manufacturer publishes a number for average latency or rotational delay we use that number. If they do not publish a rotational latency figure we estimate the average latency as half the time of one full revolution, i.e. 30/RPM seconds.
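
For illustration, the computation described above can be sketched in Python; the drive parameters in the example are hypothetical.

```python
def estimated_access_ms(rpm, avg_seek_ms=None, read_seek_ms=None,
                        write_seek_ms=None, latency_ms=None):
    """Estimated access = seek time + rotational delay."""
    if avg_seek_ms is None:
        # Separate read/write seek times published: average the two.
        avg_seek_ms = (read_seek_ms + write_seek_ms) / 2
    if latency_ms is None:
        # No published latency: half of one revolution = 30/RPM seconds.
        latency_ms = 30.0 / rpm * 1000
    return avg_seek_ms + latency_ms

# Hypothetical 7200 RPM drive, 8.5 ms read seek, 9.5 ms write seek:
print(round(estimated_access_ms(7200, read_seek_ms=8.5, write_seek_ms=9.5), 2))
# -> 13.17  (9.0 ms average seek + ~4.17 ms average latency)
```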

Fibre Channel -

Fibre Channel is a gigabit speed network technology primarily used for Storage Networking. It started for use primarily in the supercomputer field, but has become the standard connection type for storage area networks in enterprise storage. Despite its name, Fibre Channel signaling can run on both twisted-pair copper wire and fiber optic cables.

Fibre Channel is standardized in the T11 Technical Committee of the InterNational Committee for Information Technology Standards (INCITS), an American National Standards Institute (ANSI) accredited standards committee.

There are three major Fibre Channel topologies:

Point-to-Point (FC-P2P). Two devices are connected back to back. This is the simplest topology, with limited connectivity.

Arbitrated Loop (FC-AL). In this design, all devices are in a loop or ring, similar to token ring networking. Adding or removing a device from the loop causes all activity on the loop to be interrupted. The failure of one device causes a break in the ring. Fibre Channel hubs exist to connect multiple devices together and may bypass failed ports. A loop may also be made by cabling each port to the next in a ring. Often an arbitrated loop between two ports will negotiate to become a P2P connection, but this is not required by the standard.

Switched Fabric (FC-SW). All devices or loops of devices are connected to Fibre Channel switches, similar conceptually to modern Ethernet implementations. The switches manage the state of the fabric, providing optimized interconnections. Very limited security is available in today's fibre channel switches.

Fibre Channel is a layered protocol. It consists of 5 layers, namely:

FC0 The physical layer, which includes cables, fiber optics, connectors, pinouts etc.
FC1 The data link layer, which implements the 8b/10b encoding and decoding of signals.
FC2 The network layer, defined by the FC-PI-2 standard, consists of the core of Fibre Channel, and defines the main protocols.
FC3 The common services layer, a thin layer that could eventually implement functions like encryption or RAID.
FC4 The Protocol Mapping layer. Layer in which other protocols, such as SCSI, are encapsulated into an information unit for delivery to FC2.
FC0, FC1, and FC2 are also known as FC-PH, the physical layers of fibre channel.

Fibre Channel products are available at 1 Gbps, 2 Gbps and 4 Gbps. An 8 Gbps standard is being developed. A 10 Gbps standard has been ratified, but is currently only used to interconnect switches. No 10 Gbps initiator or target products are available yet based on that standard. Products based on the 1, 2, 4 and 8 Gbps standards should be interoperable, and backward compatible; the 10 Gbps standard, however, will not be backward compatible with any of the slower speed devices.

Fibre Channel switches are divided into two classes of switches. These classes are not part of the standard, and the classification of every switch is left up to the manufacturer.

Director switches are characterized by offering a high port-count in a modular (slot-based) chassis with no single point of failure (high availability).
Fabric switches are typically fixed-configuration (sometimes semi-modular) non-redundant switches.

The following ports are defined by Fibre Channel:

E_port is the connection between two fibre channel switches. Also known as an Expansion port. When E_ports between two switches form a link, that link is referred to as an InterSwitch Link or ISL.
F_port is a fabric connection in a switched fabric topology. Also known as Fabric port. An F_port is not loop capable.
FL_port is the fabric connection in a public loop for an arbitrated loop topology. Also known as Fabric Loop port. Note that a switch port may automatically become either an F_port or an FL_port depending on what is connected.
G_port or generic port on a switch can operate as an E_port or F_port.
L_port is the loose term used for any arbitrated loop port, NL_port or FL_port. Also known as Loop port.
N_port is the node connection pertaining to hosts or storage devices in a Point-to-Point or switched fabric topology. Also known as Node port.
NL_port is the node connection pertaining to hosts or storage devices in an arbitrated loop topology. Also known as Node Loop port.
TE_port is a term used for multiple E_ports trunked together to create high bandwidth between switches. Also known as Trunking Expansion port.

FireWire -

See also IEEE 1394

FireWire is a serial bus interface standard offering high-speed communications and isochronous real-time data services, developed primarily by Apple Computer for personal computers and digital video equipment. It was developed under the IEEE 1394 standards as a serial replacement for the SCSI parallel interface.

Almost all modern digital camcorders include this connection. All Macintosh computers currently produced have built-in FireWire ports as does the Apple iPod music player.

Full Duplex -

A full-duplex system allows communication in both directions, and unlike half-duplex allows this to happen simultaneously. Most telephone networks are full duplex as they allow both callers to speak at the same time.

A good analogy for a full-duplex system would be a two lane road with one lane for each direction.

Gbps, Gb/s, Gigabits per Second -

Gbps stands for billions of bits per second and is a measure of bandwidth on a digital data transmission.

GBps, GB/s, Gigabytes per Second -

GBps stands for billions of bytes per second and is a measure of bandwidth on a digital data transmission. A byte consists of 8 bits.
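
Since a byte is 8 bits, converting between the two units is a single division, for example:

```python
def gbps_to_gbytes_per_s(gbps):
    """Raw unit conversion: gigabits per second to gigabytes per second."""
    return gbps / 8.0

print(gbps_to_gbytes_per_s(4))   # -> 0.5 (a 4 Gbps link moves at most 0.5 GBps)
```

Note that this is only the raw unit conversion; serial links such as Fibre Channel and SATA also spend part of their raw bit rate on 8b/10b line coding, so the usable byte rate is lower.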

Half Duplex -

A half-duplex system allows communications in both directions, but only one direction at a time (not simultaneously). Any radio system where you must use "Over" to indicate the end of transmission, or any other procedure to ensure that only one party broadcasts at a time would be a half-duplex system.

A good analogy for a half-duplex system is a road under construction which is down to one lane with traffic controllers at each end. Traffic can flow in both directions, but only one direction at a time with this being regulated by the controllers.

IDE, EIDE, Integrated Drive Electronics -

See also ATA, PATA, SATA

The IDE (Integrated Drive Electronics) bus is more correctly known as the ATA (Advanced Technology Attachment) specification (ATA Bus). The IDE bus is used in Personal Computers as a hard-drive or peripheral bus to interconnect the PC mother board and a hard drive. The IDE bus is a Parallel bus.

ATA-2, more commonly known as EIDE, and sometimes known as Fast ATA or Fast IDE, is a standard approved in 1996. Today, ATA-2 is considered obsolete.

IEEE 1394 -

See also FireWire

IEEE 1394 is a standard defining a high-speed serial bus with data transfer rates of up to 400Mbps (in 1394a) and 800Mbps (in 1394b). A single 1394 port can be used to connect up to 63 external devices. It also supports isochronous data, making it ideal for devices that need to transfer high levels of data in real-time, such as video devices.

The standard provides for self-configured addressing and, as a result, there is no potential for address conflicts.

Apple uses the trademarked name FireWire for its IEEE 1394 technology and Sony uses i.Link.

Interface Type, Speed -

Often, in the case of disk interfaces, different terms or phrases are used to describe the same specification. For example, "Ultra SCSI" and "Ultra SCSI-3" describe the same version of the SCSI interface.

On this web site we try to use only one term or phrase for each specification. The following table defines our standard term and relates it to other common terms in the industry.

Our Standard (this web site) - Specification - Other Terms

SCSI (Small Computer System Interface) - 8 bit wide bus, 5 MHz, 5 MB/s - the original 1977 SCSI, 5 MB/s async
Fast SCSI-2 - 8 bit wide bus, 10 MHz, 10 MB/s
Wide SCSI-2 - 16 bit wide bus, 10 MHz, 20 MB/s
Ultra SCSI - 8 bit wide bus, 20 MHz, 20 MB/s - also called Ultra SCSI-3
Ultra Wide SCSI - 16 bit wide bus, 20 MHz, 40 MB/s - also called Ultra Wide SCSI-3
Ultra2 SCSI - 8 bit wide bus, 40 MHz, 40 MB/s
Maximum Internal Data Rate -

Internal Data Rate is the speed at which data can be transmitted internally to and from the drive's media or platters. Data rates are usually measured in megabits per second (Mbps). There is no single data transfer rate figure for a modern hard disk. They are typically stated as a range, from minimum to maximum (with the maximum figure given alone, of course, if only one number is provided). The faster the data transfer rate, the better the performance of the drive.

Mbps, Mb/s, Megabits per Second -

Mbps stands for millions of bits per second and is a measure of bandwidth on a digital data transmission.

MBps, MB/s, Megabytes per Second -

MBps stands for millions of bytes per second and is a measure of bandwidth on a digital data transmission. A byte consists of 8 bits.

PATA, Parallel ATA Bus -


Refer to the section on ATA. With the introduction of Serial ATA (SATA), ATA was renamed Parallel ATA (PATA), referring to the method in which data travels over wires in the interface.

Parallel Interface -

A parallel interface is a computer connection capable of transmitting more than one bit of information at the same time.

Parallel interfaces are most often used by microprocessors to communicate with peripherals. The most common kind of parallel port is a printer port. Disks are also connected via special parallel ports, e.g. SCSI, ATA.

Recently, the Universal Serial Bus (USB) port has grown in popularity and has started displacing parallel ports because USB makes it simple to add more than one device (such as printers) to a computer.


PCI, PCI-X, PCIe (Peripheral Component Interconnect) -
The PCI bus debuted over a decade ago at 33MHz, with a 32-bit bus and a peak theoretical bandwidth of 132MBps. This was pretty good for the time, but as the rest of the system got more bandwidth hungry, both the bus speed and the bus width were cranked up to keep pace. Later versions of PCI included a 64-bit, 33MHz bus combination and a peak bandwidth of 264MBps and a more recent 64-bit, 66MHz combination with a bandwidth of 512MBps.

PCI uses a shared bus topology to allow for communication among the different devices on the bus. The PCI devices (i.e., a network card, a sound card, a RAID card, etc.) are all attached to the same bus, which they use to communicate with the CPU. The CPU accesses PCI devices via a fairly straightforward load-store mechanism. A unified portion of address space is dedicated for PCI use, which looks to the CPU somewhat like main memory address space, except that at each range of addresses there is a PCI device instead of a group of memory cells with code or data. In the same way the CPU accesses memory by performing loads and stores to specific addresses, it accesses PCI devices by performing reads and writes of specific addresses.

PCI as it exists today has some serious shortcomings that prevent it from providing the bandwidth and features needed by current and future generations of I/O and storage devices. Specifically, its highly parallel shared-bus architecture limits its bus speed and scalability, and its simple, load-store, flat memory-based communications model is less robust and extensible than a routed, packet-based model.

PCI-X is an expansion card standard designed to supersede PCI.

Although they are commonly confused, PCI-X and PCIe (Express) are not the same. PCI-X (Extended) is a computer bus technology that increases the speed that data can move within a computer from 66 MHz to 133 MHz. The technology was developed jointly by IBM, HP, and Compaq. PCI-X doubles the speed and amount of data exchanged between the computer processor and peripherals.

With the current PCI design, one 64-bit bus runs at 66 MHz and additional buses move 32 bits at 66 MHz or 64 bits at 33 MHz. The maximum amount of data exchanged between the processor and peripherals using the current PCI design is 532 MB per second. With PCI-X, one 64-bit bus runs at 133 MHz with the rest running at 66 MHz, allowing for a data exchange of 1.06 GB per second.

PCI-X is backwards-compatible, meaning that you can, for example, install a PCI-X card in a standard PCI slot but expect a decrease in speed to 33 MHz. You can also use both PCI and PCI-X cards on the same bus but the bus speed will run at the speed of the slowest card. PCI-X is more fault tolerant than PCI. For example, PCI-X is able to reinitialize a faulty card or take it offline before computer failure occurs.
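
These peak bandwidth figures are simply bus width times clock rate; a quick sketch of the arithmetic:

```python
def peak_bandwidth_mbytes(bus_width_bits, clock_mhz):
    """Peak rate in MB/s: bytes moved per clock times clocks per second."""
    return (bus_width_bits / 8) * clock_mhz

print(peak_bandwidth_mbytes(32, 33))    # original PCI  -> 132.0 MB/s
print(peak_bandwidth_mbytes(64, 66))    # 64-bit/66 MHz -> 528.0 MB/s
print(peak_bandwidth_mbytes(64, 133))   # PCI-X         -> 1064.0 MB/s (~1.06 GB/s)
```

Published figures differ slightly (for example, 528 vs. 532 MB per second) depending on how the nominal 66 MHz clock is rounded.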

PCI-X was designed for servers to increase performance for high bandwidth devices such as Gigabit Ethernet cards, Fibre Channel, Ultra3 Small Computer System Interface, and processors that are interconnected as a cluster. While PCI-X increased PCI's bandwidth and usefulness, it is more expensive to implement.

PCI Express (PCIe) is the newest name for the technology formerly known as 3GIO. PCIe's most drastic and obvious improvement over PCI is its point-to-point bus topology, in which a shared switch replaces the shared bus as the single shared resource by which all of the devices communicate. Each device in the system has direct and exclusive access to the switch. This connection is called a link.

PCIe is a layered protocol, consisting of a Transaction Layer, a Data Link Layer, and a Physical Layer. The Physical Layer is further divided into a logical sublayer and an electrical sublayer. The logical sublayer is frequently further divided into a Physical Coding Sublayer (PCS) and a Media Access Control (MAC) sublayer (terms borrowed from the OSI model of networking protocol).

PCIe was designed to be completely transparent to software developers - an operating system designed for PCI can boot in a PCI Express system without any code modification.


PCMCIA, PC Card -

PCMCIA is short for Personal Computer Memory Card International Association. PCMCIA is an organization consisting of some 500 companies that has developed a standard for small, credit card-sized devices, called PC Cards. Originally designed for adding memory to portable computers, the PCMCIA standard has been expanded several times and is now suitable for many types of devices.

There are three types of PCMCIA cards which have the same rectangular size (85.6 by 54 millimeters), but different thicknesses.
Type I cards can be up to 3.3 mm thick, and are used primarily for adding additional ROM or RAM to a computer.
Type II cards can be up to 5.5 mm thick. These cards are often used for modem and fax modem cards.
Type III cards can be up to 10.5 mm thick, which is sufficiently large for portable disk drives.

As with the cards, PCMCIA slots also come in three sizes:
A Type I slot can hold one Type I card
A Type II slot can hold one Type II card or one Type I card
A Type III slot can hold one Type III card or any combination of two Type I or II cards.

In general, you can exchange PC Cards on the fly, without rebooting your computer. For example, you can slip in a fax modem card when you want to send a fax and then, when you're done, replace the fax modem card with a memory card.

Recording Density -

Recording density is the number of bits recorded in a single linear track, measured per unit length, area or volume.

Revolutions per Minute -

Revolutions per minute is a unit of frequency, commonly used to measure rotational speed, in particular in the case of rotation around a fixed axis. It represents the number of full rotations something makes in one minute. The measurement applies commonly to hard drives, and removable drives like CD-ROMs and DVD-ROMs. If you are using a drive with higher RPMs, you usually have better performance. Since the disk is spinning faster the drive can usually read data at a faster rate. For example, 5,400 RPM hard drives are generally slower than 7,200 RPM hard drives. The downside to higher RPM rates is that it is harder to stabilize and read data from a disk that is spinning faster, so the mechanism may be more complex and more expensive. Also, faster spinning drives are usually louder and run hotter.

Rotational Delay (Latency) -

Rotational delay is a term applicable to rotating storage devices (such as a hard disk or floppy disk drive). The rotational delay is the time required for the addressed area of the disk to rotate into a position where it is accessible by the read/write head.

Maximum rotational delay is the time it takes to do a full rotation (as the relevant part of the disk may have just passed the head when the request arrived). Most rotating storage devices rotate at a constant angular rate (constant number of revolutions per second). The maximum rotational delay is simply the reciprocal of the rotational speed (appropriately scaled). If a hard disk drive makes 7200 revolutions per minute, its maximum rotational delay will be 60/7200 s or about 8 ms.

Average rotational delay is also a useful concept - it is half the maximum rotational delay.
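
The arithmetic above can be sketched as:

```python
def rotational_delay_ms(rpm):
    """Return (maximum, average) rotational delay in milliseconds."""
    max_ms = 60.0 / rpm * 1000   # time for one full revolution
    return max_ms, max_ms / 2    # average delay is half the maximum

max_ms, avg_ms = rotational_delay_ms(7200)
print(round(max_ms, 2), round(avg_ms, 2))   # -> 8.33 4.17
```

So a 7200 RPM drive has a maximum rotational delay of about 8.33 ms and an average of about 4.17 ms.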

SATA, Serial ATA Bus -


Serial ATA (SATA) is a computer bus technology primarily designed for transfer of data to and from a hard disk. It is the successor to ATA (which has been retroactively renamed Parallel ATA (PATA)).

SATA provides greater scalability, simpler installation, thinner cabling and faster performance (up to 3Gbps). SATA also maintains backward compatibility with Parallel ATA drivers.

SATA cables are narrower than PATA (only 7 pins) and have a greater maximum length. In addition, SATA devices require much less power than PATA. Finally, SATA is hot-swappable, meaning that devices can be added or removed while the computer is operating.

SATA is planned to increase I/O speeds up to 6Gbps, which may make it robust enough for higher-end enterprise servers.

SCSI, Small Computer System Interface -

SCSI is a parallel interface standard used by Apple Macintosh computers, PCs, and many UNIX systems for attaching peripheral devices to computers. Nearly all Apple Macintosh computers, excluding only the earliest Macs and the recent iMac, come with a SCSI port for attaching devices such as disk drives and printers.

SCSI interfaces provide for faster data transmission rates (up to 80 megabytes per second) than standard serial and parallel ports. In addition, you can attach many devices to a single SCSI port, so that SCSI is really an I/O bus rather than simply an interface.

Although SCSI is an ANSI standard, there are many variations of it, so two SCSI interfaces may be incompatible. For example, SCSI supports several types of connectors.

While SCSI has been the standard interface for Macintoshes, the iMac comes with IDE, a less expensive interface, in which the controller is integrated into the disk or CD-ROM drive. Other interfaces supported by PCs include enhanced IDE and ESDI for mass storage devices, and Centronics for printers. You can, however, attach SCSI devices to a PC by inserting a SCSI board in one of the expansion slots. Many high-end new PCs come with SCSI built in. Note, however, that the lack of a single SCSI standard means that some devices may not work with some SCSI boards.

The following varieties of SCSI are currently implemented:

SCSI-1: Uses an 8-bit bus, and supports data rates of 4 MBps
SCSI-2: Same as SCSI-1, but uses a 50-pin connector instead of a 25-pin connector, and supports multiple devices. This is what most people mean when they refer to plain SCSI.
Wide SCSI: Uses a wider cable (168 cable lines to 68 pins) to support 16-bit transfers.
Fast SCSI: Uses an 8-bit bus, but doubles the clock rate to support data rates of 10 MBps.
Fast Wide SCSI: Uses a 16-bit bus and supports data rates of 20 MBps.
Ultra SCSI: Uses an 8-bit bus, and supports data rates of 20 MBps.
SCSI-3: Uses a 16-bit bus and supports data rates of 40 MBps. Also called Ultra Wide SCSI.
Ultra2 SCSI: Uses an 8-bit bus and supports data rates of 40 MBps.
Wide Ultra2 SCSI: Uses a 16-bit bus and supports data rates of 80 MBps.

Serial Interface -

A serial interface is a computer connection capable of transmitting only one data bit at a time, sequentially.

The communications links across which computers, or parts of computers, talk to one another may be either serial or parallel. A parallel link transmits several streams of data along multiple channels (wires, printed circuit tracks, optical fibers, etc.); a serial link transmits a single stream of data.

At first sight it would seem that a serial link must be inferior to a parallel one, because it can transmit less data on each clock tick. However, it is often the case that serial links can be clocked considerably faster than parallel links, and achieve a higher data rate. A number of factors allow serial to be clocked at a greater rate:
* Clock skew between different channels is not an issue (for unclocked serial links).
* A serial connection requires fewer interconnecting cables (e.g. wires/fibers) and hence occupies less space. The extra space allows for better isolation of the channel from its surroundings.
* Crosstalk is less of an issue, because there are fewer conductors in close proximity.

In many cases, serial is a better option because it is cheaper to implement. Many integrated circuits have serial interfaces, as opposed to parallel ones, so that they have fewer pins and are therefore cheaper.

Some examples of serial communication architectures:
* Morse code telegraphy
* RS-232
* RS-485
* Universal Serial Bus
* FireWire
* Fibre Channel
* InfiniBand
* Serial Attached SCSI
* Serial ATA
* PCI Express

Cylinders, Heads, Sectors, Tracks -


A sector is the basic unit of data storage on a hard disk. The term "sector" comes from a mathematical term referring to a pie-shaped angular section of a circle, bounded on two sides by radii and on the third by the perimeter of the circle. In its simplest form, a hard disk consists of a group of predefined sectors that form a circle. That circle of predefined sectors is defined as a single track. A group of concentric circles (tracks) defines a single surface of a disk's platter. Early hard disks had just a single one-sided platter, while today's hard disks are made up of several platters with tracks on both sides, all of which make up the entire hard disk capacity. Early hard disks had the same number of sectors per track, and in fact, the number of sectors in each track was fairly standard between models. Today's advances in drive technology have allowed the number of sectors per track, or SPT, to vary significantly.

Generally, when a hard disk is prepared with its default values, each sector will be able to store 512 bytes of data. There are a few operating system disk setup utilities that permit this 512 byte number per sector to be modified, however 512 is the standard, and found on virtually all hard drives by default. Each sector, however, actually holds much more than 512 bytes of information. Additional bytes are needed for control structures, information necessary to manage the drive, locate data and perform other functions. Exact sector structure depends on the drive manufacturer and model, however the contents of a sector usually include the following elements: ID Information, Synchronization Fields, Data, Error Correcting Code (ECC), Gaps, and Servo Information.


The information on a disk is accessed by a read/write head. Read/write heads are in essence tiny electromagnets that convert electrical information to magnetic and back again. Each bit of data to be stored is recorded onto the hard disk using a special encoding method that translates zeros and ones into patterns of magnetic flux reversals.

Older, conventional (ferrite, metal-in-gap and thin film) hard disk heads work by making use of the two main principles of electromagnetic force. The first is that applying an electrical current through a coil produces a magnetic field; this is used when writing to the disk. The direction of the magnetic field produced depends on the direction that the current is flowing through the coil. The second is the opposite, that applying a magnetic field to a coil will cause an electrical current to flow; this is used when reading back the previously written information. Again here, the direction that the current flows depends on the direction of the magnetic field applied to the coil. Newer (MR and GMR) heads don't use the induced current in the coil to read back the information; they function instead by using the principle of magnetoresistance, where certain materials change their resistance when subjected to different magnetic fields.
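The flux-reversal encoding mentioned above can be made concrete with one classic scheme used by older drives, MFM (Modified Frequency Modulation). The sketch below is an illustration of MFM's rules only, not of any particular drive's firmware: a 1 bit produces a reversal in the data position of its cell, and a 0 bit produces a reversal in the clock position only when the previous bit was also 0.

```python
def mfm_encode(bits, prev=0):
    """Encode data bits into MFM cells, returned as a string of
    clock/data positions where '1' marks a flux reversal.

    MFM rules:
      - a 1 data bit -> reversal in the data position of the cell;
      - a 0 data bit -> reversal in the clock position, but only if
        the previous data bit was also 0 (this keeps reversals
        spaced at least one cell apart).
    """
    cells = []
    for b in bits:
        clock = '1' if (b == 0 and prev == 0) else '0'
        cells.append(clock + str(b))
        prev = b
    return ''.join(cells)

print(mfm_encode([1, 0, 0, 1]))  # 01001001
```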

The heads are usually called "read/write heads", and older ones did both writing and reading with the same element. Newer MR and GMR heads, however, are composites that include separate elements for writing and reading. This design is more complicated to manufacture, but is required because the magnetoresistance effect used in these heads only functions in read mode. Having separate units for writing and reading also allows each to be tuned to its particular function, whereas a single head must be designed as a compromise between the write function and the read function. These dual heads are sometimes called "merged heads".


A hard disk is usually made up of multiple platters, each of which uses two heads to record and read data, one for the top of the platter and one for the bottom (this isn't always the case, but usually is). The heads that access the platters are locked together on an assembly of head arms. This means that all the heads move in and out together, so each head is always physically located at the same track number. It is not possible to have one head at track 0 and another at track 1,000.

Because of this arrangement, often the track location of the heads is not referred to as a track number but rather as a cylinder number. A cylinder is basically the set of all tracks that all the heads are currently located at. So if a disk had four platters, it would (normally) have eight heads, and cylinder number 720 (for example) would be made up of the set of eight tracks, one per platter surface, at track number 720. The name comes from the fact that if you mentally visualize these tracks, they form a skeletal cylinder because they are equal-sized circles stacked one on top of the other in space.

For most practical purposes, there really isn't much difference between tracks and cylinders; a cylinder is basically a different way of thinking about the same thing. The addressing of individual sectors of the disk is traditionally done by referring to cylinders, heads and sectors (CHS).
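The CHS addressing described above maps to a flat logical block address (LBA) by a standard formula, in which sectors are numbered from 1 within each track while cylinders and heads are numbered from 0. The geometry values in this sketch are hypothetical:

```python
def chs_to_lba(cylinder, head, sector, heads_per_cylinder, sectors_per_track):
    """Convert a cylinder/head/sector address to a logical block address.

    Sectors are numbered from 1 within a track, so 1 is subtracted;
    cylinders and heads are numbered from 0.
    """
    return ((cylinder * heads_per_cylinder) + head) * sectors_per_track + (sector - 1)

# With a hypothetical 16-head, 63-sectors-per-track geometry:
print(chs_to_lba(0, 0, 1, 16, 63))  # 0    (first sector on the disk)
print(chs_to_lba(0, 1, 1, 16, 63))  # 63   (first sector under the next head)
print(chs_to_lba(1, 0, 1, 16, 63))  # 1008 (first sector of the next cylinder)
```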

Back to Top

Universal Serial Bus -

A single, standardized, easy-to-use way to connect up to 127 devices to a computer, either directly or by way of USB hubs. Before USB, computers usually came with only one parallel port, one or two serial ports and limited card slots for additional devices, which frequently made connecting devices to a computer a difficult process.

Devices that come in a USB version include printers, scanners, mice, joysticks, flight yokes, digital cameras, webcams, scientific data acquisition devices, modems, speakers, telephones, video phones, storage devices such as Zip drives, and network connections.

Low-power devices, such as mice, can draw their power directly from the bus. High-power devices, such as printers, have their own power supplies and draw minimal power from the bus. Hubs can have their own power supplies to provide power to devices connected to the hub.
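As a rough illustration of this power budgeting, a USB 2.0 bus-powered port supplies at most 500 mA at 5 V. The helper and device list below are hypothetical, meant only to show the budget check:

```python
USB2_PORT_BUDGET_MA = 500  # max current from one USB 2.0 bus-powered port

def fits_on_bus_power(device_draws_ma):
    """Return True if the listed current draws (in mA) can share
    one bus-powered port, e.g. through an unpowered hub."""
    return sum(device_draws_ma) <= USB2_PORT_BUDGET_MA

# Hypothetical draws: a mouse (100 mA) and two flash drives (200 mA each)
print(fits_on_bus_power([100, 200, 200]))       # True  (500 mA total)
print(fits_on_bus_power([100, 200, 200, 100]))  # False (600 mA total)
```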

The USB standard uses "A" and "B" connectors. "A" connectors face "upstream" toward the computer, while "B" connectors face "downstream" and plug into individual devices.

Back to Top

Copyright © 2006-2008
Neal Nelson & Associates
Trademarks which may be mentioned on this site are the property of their owners.