Day: June 3, 2005

  • RAID: Redundant Array of [Independent|Inexpensive] Disks

    During the last semester I wrote two papers for my Computer Architectures class. I spent quite a bit of time on them and have been meaning to post them on my weblog for a while. I’m a bit worried about plagiarism though, and I’m not sure what to do about it. I’m pretty sure that I can submit the paper to the automated plagiarism-detection service that my university subscribes to, and I’ll probably do that now that it is posted.

    I’m also releasing this paper under the by-nc-sa (Attribution-NonCommercial-ShareAlike 2.0) license, so unless you can turn your paper in to your teacher with a by-nc-sa license displayed on it, you can’t include this one in yours without proper citation.

    PLEASE NOTE: If you are considering plagiarizing this, please don’t. If your teacher allows you to cite non-academic internet sources, then by all means borrow my ideas and cite me. What I would really suggest is taking a look at my primary sources and then heading to your university library or computer system to consult them yourself. All of the ACM journal sources that I cited are available online if your university subscribes to the ACM Portal. This paper was thoroughly researched, but there were some late nights involved in its production, so it is provided WITHOUT WARRANTY as to correctness or anything like that.

    This work is licensed under a Creative Commons License.

    Matt Croydon

    CMSC 311

    March 9, 2005

    The term RAID originally stood for “Redundant Arrays of Inexpensive Disks” [1], although an effort has been made to replace Inexpensive with Independent [2] in order to deemphasize the importance of cost. In modern practice the words are used interchangeably, and in most computer-oriented contexts the meaning is commonly understood. RAID technology was developed to improve upon monolithic SLED (Single Large Expensive Disk) [1] devices. In addition to being large and expensive, these drives offered a fixed level of input and output performance, and in the late ’80s and early ’90s they were not keeping pace with the rest of semiconductor technology [2].

    There are several discrete configurations, or levels, of RAID, each with its advantages and disadvantages. The various levels are conceptual and not necessarily tied to a specific implementation. RAID can be accomplished at either the hardware or the software level. Hardware-based RAID tends to provide higher overall performance, while software-based RAID offers lower cost and greater flexibility.

    Redundancy is required because as more disks are added to an array, the MTTF (Mean Time to Failure) [1] decreases sharply. For example, if each individual drive is rated for 30,000 hours and there are 100 disks in the array, the MTTF for the array is the MTTF of an individual drive divided by the number of drives. The MTTF of the 100-drive array is therefore 300 hours, a far cry from the 30,000 hours that each unit is rated for [2].
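
    The arithmetic above is simple enough to sketch in a few lines of Python. This is only the naive model described in the paragraph (independent failures, with the array considered failed as soon as any single disk fails); the 30,000-hour rating and 100-disk count are the example figures from the text, not measurements.

        # Naive array MTTF under the simple model above: the array MTTF is the
        # per-drive MTTF divided by the number of drives in the array.
        def array_mttf(drive_mttf_hours, num_drives):
            return drive_mttf_hours / num_drives

        print(array_mttf(30000.0, 1))    # 30000.0 hours for a single drive
        print(array_mttf(30000.0, 100))  # 300.0 hours for the 100-drive array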

    RAID Level 0 and JBOD

    RAID 0 is not part of the original specification [3] and provides absolutely no redundancy; however, it does employ data striping, an important concept in several RAID configurations. RAID 0 is often implemented by hardware controllers that also support other levels of RAID. RAID 0 allows extremely fast write performance but does not significantly improve read access time [2].
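
    As a rough illustration of the striping idea (my own sketch, not tied to any particular controller), logical blocks are dealt round-robin across the member disks, so consecutive blocks can be transferred in parallel:

        # Sketch of RAID 0 striping: logical block i maps to disk (i mod N)
        # at stripe row (i // N). Illustrative only.
        def raid0_location(logical_block, num_disks):
            return logical_block % num_disks, logical_block // num_disks

        for block in range(8):
            disk, stripe = raid0_location(block, num_disks=4)
            print("block %d -> disk %d, stripe %d" % (block, disk, stripe))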

    The other non-redundant RAID technology is JBOD, which stands for “Just a Bunch of Disks” [4]. JBOD uses either RAID hardware or software to combine multiple disks so that they appear as one logical device to the operating system. JBOD allows for easy expansion of storage capacity and is in common use on Windows, Linux, and other platforms.

    RAID Level 1

    RAID 1 uses mirroring to achieve redundancy [3]. For every disk of data, there is a mirrored disk that contains an exact copy of the original [5]. While every write to the array has to be performed twice (first to the original drive, then to the mirrored drive), read speeds can be improved: because there are two copies of the data, a request can be served by whichever drive can retrieve it quickest, and both drives can serve read requests simultaneously. If one drive in a two-drive array fails, the remaining drive can be used for reading and writing until the defective disk is replaced. Once a new drive is placed in the array, the data is copied over and mirroring eventually resumes in real time.
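
    A toy model of the behavior described above (a sketch only, not how any real controller or driver is written): every write goes to both members, and a read can be satisfied by either copy.

        # Toy RAID 1 mirror: writes are duplicated, reads may use either copy.
        class Mirror:
            def __init__(self):
                self.disks = [{}, {}]       # two member "disks"

            def write(self, block, data):
                for disk in self.disks:     # every write lands on both members
                    disk[block] = data

            def read(self, block):
                # Either copy is valid; a real controller would pick whichever
                # drive can service the request soonest.
                return self.disks[0][block]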

    RAID Level 2

    RAID 2 uses the same ECC (Error Correcting Code) scheme as ECC memory [2]. In addition to the data disks, a number of check disks are used to store the ECC data. If Hamming ECC is used, an array of 10 data disks needs 4 check disks and an array of 25 data disks requires 5 check disks [1]. The extra disks are required in order to both detect and repair an unrecoverable error. In RAID 2, data is striped bit by bit across the data disks while the ECC data is written to the check disks [1].
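
    The check-disk counts above follow from the usual single-error-correcting Hamming bound, 2^c >= d + c + 1 for d data disks and c check disks. A quick sanity check (my own calculation, not taken from the cited papers):

        # Smallest number of Hamming check disks c for d data disks,
        # using the single-error-correcting bound 2**c >= d + c + 1.
        def check_disks(data_disks):
            c = 0
            while 2 ** c < data_disks + c + 1:
                c += 1
            return c

        print(check_disks(10))   # 4 check disks for 10 data disks
        print(check_disks(25))   # 5 check disks for 25 data disks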

    RAID Level 3

    The next level of RAID assumes that most hardware or software RAID controllers are able to detect an error on their own. A single check disk is enough to recover from an error once its location is known, so if the job of error detection is left to the controller, all but one of the check disks required by RAID 2 can be eliminated [1]. This strategy cuts down on cost without sacrificing redundancy, as long as every bit on the other data disks and the check disk can be successfully read. The contents of the bad disk can be obtained by computing the parity of the disks that have not failed and comparing each bit to the parity of all of the disks as stored on the check disk. If the values are identical, the bad disk originally held a 0 in that position; if the values differ, it held a 1 [1].
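
    The recovery procedure described above is ordinary XOR parity. In this small sketch (byte-wise rather than bit by bit, purely illustrative), XORing the surviving data disks with the check disk yields the contents of the failed disk:

        # XOR parity as used for RAID 3/4/5 reconstruction (sketch, byte-wise).
        def parity(blocks):
            out = bytearray(len(blocks[0]))
            for block in blocks:
                for i, b in enumerate(block):
                    out[i] ^= b
            return bytes(out)

        data = [b"\x0f\x12", b"\xa0\x34", b"\x55\x55"]   # three data disks
        check = parity(data)                             # stored on the check disk

        # Suppose disk 1 fails: XOR of the survivors and the check disk
        # recovers the missing block.
        recovered = parity([data[0], data[2], check])
        assert recovered == data[1]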

    RAID Level 4

    RAID 4 also uses only one check disk but stripes data across the data drives in chunks rather than bit by bit. The check disk stores the parity information for each chunk of data. RAID 4 is very efficient for workloads, such as transaction processing, that require many very small reads from the disk array. If the data requested is smaller than the chunk size, the array can service multiple requests simultaneously [6].

    RAID Level 5

    RAID 5 is the most commonly deployed configuration in commercial settings [7] and distributes the parity blocks evenly across all disks [2]. Because the data and parity are spread across all disks, RAID 5 excels at both small and large reads, and at large writes. However, RAID 5 requires a “read-modify-write” cycle to calculate and write parity information, so it is less than optimal for workloads with many small writes [2].
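
    A sketch of that small-write penalty, assuming simple XOR parity as in the RAID 3 example above (illustrative only): the old data block and the old parity block must both be read before the new parity can be computed, so one logical write costs two reads and two writes. Large full-stripe writes avoid this penalty because the parity can be computed from the new data alone.

        # RAID 5 "read-modify-write" for a small write (sketch).
        def xor(a, b):
            return bytes(x ^ y for x, y in zip(a, b))

        def small_write(old_data, old_parity, new_data):
            # new parity = old parity XOR old data XOR new data
            new_parity = xor(xor(old_parity, old_data), new_data)
            return new_data, new_parity   # both must be written back to their disks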

    Advanced RAID Configurations

    There are several hybrid RAID configurations that, while not part of the original RAID specification, can improve reliability and redundancy in certain situations. RAID 6 employs two distinct parity calculations for each chunk of data stored [4]. RAID 6 appears to be more theoretical than practical, as there are no firm guidelines for implementing it. RAID 6 differs from most RAID configurations in that it can recover from two unrecoverable errors, as long as the rest of the data and parity information can be read successfully.

    While many combinations of RAID components are possible, only a few are common. These include RAID 10, RAID 50, and RAID 0+1. RAID 10 significantly improves reliability by providing “a stripe set across mirrored pairs” [7]. This means that RAID 10 can survive two drive failures as long as the two failures do not occur within the same mirrored pair. Similarly, RAID 50 stripes data across two or more RAID 5 arrays. RAID 50 is extremely redundant and not practical for most purposes. RAID 0+1 simply constructs a RAID 1 mirror out of several RAID 0 arrays. In RAID 0+1, a single disk failure takes down that entire half of the mirror until the bad disk is replaced [7].

    Increasing RAID Throughput

    Many modern hardware RAID controllers contain onboard memory caches to speed up input and output. Caching of data and parity blocks was shown to increase throughput in the early-to-mid ’90s [9]. The physical placement of parity blocks in RAID 5 has also been shown to influence throughput [10]. In their study, Lee and Katz determined that the left-symmetric, extended-left-symmetric, and flat-left-symmetric parity placements were the best for general use [10]. The best parity placement for a given RAID 5 array depends on the size and number of both reads and writes.

    Strategies for Increased Reliability

    There are several ways to increase RAID reliability, even in simple arrays. Because the different RAID levels are merely suggestions for how to accomplish redundancy, specific implementations may vary. For a simple two-disk RAID 1 array, you have the option of placing both disks on one hardware controller, or (if supported) placing each disk on its own controller and having the two controllers coordinate mirroring [7]. In the latter configuration, the failure of any one RAID controller does not bring down the entire array.

    Hybrid arrays (as discussed in the Advanced RAID Configurations section above) can also increase reliability by creating multi-tiered or multi-leveled arrays. Advanced configurations need to be used with caution, since the MTTF decreases sharply as the total number of disks increases.

    As per-disk capacity increases, it is possible to build arrays of identical storage capacity using fewer disks overall. If fewer disks are used, the MTTF increases. Unfortunately, increased storage capacity tends to bring an increased need for storage, so decreasing the total number of disks in a RAID may not be possible.

    RAID Today

    In the early days of RAID research, SCSI was the only technology that easily allowed for RAID configurations. Today that is changing rapidly with the introduction of extremely large capacity IDE and Serial ATA drives as well as lower-cost hardware controller cards for them. These lower costs of entry have allowed RAID to spread from university research labs and large corporations all the way down to home users seeking data protection. Many mid-range to high-end motherboards now ship with a built-in IDE or Serial ATA RAID controller.

    RAID technology is also used extensively in large server farms and storage facilities. Elaborate collections of RAID arrays are often combined with network technologies such as SANs (storage area networks) and NAS (network-attached storage) to meet the always-on, accessible-anywhere needs of today’s customers.

    RAID has also become a built-in part of Microsoft’s Windows operating system and has been incorporated into the Linux kernel [11]. Software-based RAID further reduces entry costs, though generic IDE RAID controllers can be found in stores for well below $50. Better-known hardware RAID controllers from Adaptec and others can range from around $100 for IDE to several hundred dollars for advanced SCSI Ultra 160 controllers.

    Conclusion

    Using a RAID may lull users into a false sense of security. Most RAID configurations protect against only one unrecoverable error and usually require that every other bit be read successfully in order to recover the data. Just because a RAID is in use does not mean that its data is invincible. Rigorous, recoverable backups should be implemented in addition to RAID technology.

    With that caution in mind, RAID can provide redundancy that would not otherwise be available. If a RAID configuration is tailored to a specific workload profile (many small writes, continuous large reads, etc.), a significant increase in throughput can be realized.

    RAID, a technology that started out as graduate and doctoral research, now powers a wide array of systems from home computers to large datacenters. RAID allows advanced research facilities and corporate databanks alike to achieve redundancy on collections of data that commonly reach terabytes and petabytes [12].

    References

    [1] D. Patterson, G. Gibson, and R. Katz, “A Case for Redundant Arrays of Inexpensive Disks (RAID),” in Proceedings of the 1988 ACM SIGMOD international conference on Management of data, 1988, pp. 109-116.

    [2] P. Chen et al., “RAID: High-Performance, Reliable Secondary Storage,” ACM Computing Surveys, Vol 26, pp. 145-185, June 1994.

    [3] M. Shnier, Ed., Dictionary of PC Hardware and Data Communications Terms, Sebastopol: O’Reilly and Associates, 1996, pp. 362-363.

    [4] M. Shooman, Reliability of Computer Systems and Networks, New York: John Wiley and Sons, 2002, pp. 119-126.

    [5] G. Gibson, Redundant Disk Arrays: Reliable, Parallel Secondary Storage, Cambridge: MIT Press, 1992.

    [6] R. Jain et al., Eds., Input/Output in Parallel and Distributed Computing Systems, Boston: Kluwer Academic Publishers, 1996, pp. 106-108.

    [7] C. Zacker and J. Rourke, PC Hardware: The Complete Reference, Berkeley: Osborne/McGraw Hill, 2001, pp. 606-613.

    [8] PC Guide, “Multiple (Nested) RAID Levels”, March 2005, http://www.pcguide.com/ref/hdd/perf/raid/levels/mult.htm.

    [9] J. Menon and J. Cortney, “The Architecture of a fault-tolerant cached RAID controller,” in Proceedings of the 20th annual international symposium on Computer architecture, 1993, pp. 76-87.

    [10] E. Lee and R. Katz, “Performance consequences of parity placement in disk arrays,” in Proceedings of the fourth international conference on Architectural support for programming languages and operating systems, 1991, pp. 190-199.

    [11] I. Molnar, G. Oxman, and M. de Icaza, “Kernel Korner: The New Linux RAID Code,” Linux Journal, Vol 1997, Article No. 25, December, 1997.

    [12] Los Alamos National Laboratories Networked Systems Research Team, “Announcements,” March 2005, http://public.lanl.gov/netsys/.

  • DLP on the Big Screen

    Last weekend I saw Star Wars: Revenge of the Sith at one of the handful of theatres (I count 4) in the DC-Virginia-Frederick-Baltimore metro area equipped with DLP (Digital Light Processing) technology. Just like Mike Washlesky at The Mac Observer, I was blown away. I first noticed the crispness and clarity when the first preview splash screen came up, and the effects and digital projection kept impressing me throughout the movie. The movie won’t top my greats list, but it was a lot of fun and great to see in digital.

    Further reading:

    • Episode III Digital Theater List: Make sure you find the one digital theater, buy your tickets online, and show up early for a good seat.
    • DLPMovies: An excellent place to find your local DLP theater (if there is one). DLPMovies found more theaters in my area than the ones showing Star Wars, including one that is currently showing Madagascar in DLP.
    • DLP.com: A lot of marketing, but it boils down to amazing picture quality and an insane contrast ratio.
    • DLP Wikipedia entry: Excellent information as always.
  • SDLQuake on Maemo

    SDLQuake on Maemo x86

    Yep, it had to be done. Above you can see SDLQuake running on Maemo x86. I haven’t tried it on the ARM target but I heard that it or a port of it should run just fine on ARM. Between various emulators and game engines, it shouldn’t be hard at all to amuse yourself with a Nokia 770.

    No changes were required for this x86 build: just ./configure, make, and run-standalone.sh ./sdlquake.

  • Python For Series 60: 1.1.0 Pre-Alpha

    There’s a new version out, 1.1.0 Pre-Alpha. Grab the .SIS installer for first edition devices (3650, N-Gage, etc.) or for second edition devices. Don’t forget to pick up the first edition or second edition SDK.

    I’ll read over the new API docs tonight and hope to find all kinds of juicy morsels.

    Update: Erik Smartt fills in some details on his weblog. Thanks again to the whole Python for Series 60 team for all the hard work.