Tertiary Storage: An Evaluation of New Applications

by

Ann Louise Chervenak

B.S. (University of Southern California) 1987
M.S. (University of California at Berkeley) 1990

A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science in the GRADUATE DIVISION of the UNIVERSITY of CALIFORNIA at BERKELEY.

Committee in Charge:
Professor Randy H. Katz, Chair
Professor David A. Patterson
Professor Kyriakos Komvopoulos

1994

The dissertation of Ann Louise Chervenak is approved:

Chair                                   Date
                                        Date
                                        Date

University of California at Berkeley
1994

Copyright 1994 by Ann Louise Chervenak

Abstract

Tertiary Storage: An Evaluation of New Applications

by Ann Louise Chervenak
Doctor of Philosophy in Computer Science
University of California at Berkeley
Professor Randy H. Katz, Chair

This thesis focuses on an often-neglected area of computer system design: tertiary storage. In the last decade, several advances in tertiary storage have made it of increasing interest, including increased tape capacities, less expensive tape drives and optical disk drives, and the proliferation of robots for loading tertiary devices automatically. Concurrently, faster processor speeds have enabled a growing number of applications that would benefit from fast access to massive storage. We evaluate the usefulness of current tertiary storage systems for some of these new applications.

First, we describe the design and performance of tertiary storage products. Next, we evaluate the technique of data striping in tape arrays. We find that tape striping improves the performance of sequential workloads. However, striped tape systems perform poorly for applications in which there are several non-sequential, concurrent requests active in the tape library because of contention for a small number of tape drives.

We characterize two new workloads: video-on-demand servers and digital libraries. For the former, we evaluate design alternatives for providing storage in a movies-on-demand system. First, we study disk farms in which one movie is stored per disk. This is a simple scheme, but it wastes substantial disk bandwidth, since disks holding less popular movies are under-utilized; also, good performance requires that movies be replicated to reflect the user request pattern. Next, we examine disk farms in which movies are striped across disks, and find that striped video servers offer close to full utilization of the disks by achieving better load balancing. Finally, we evaluate the use of storage hierarchies for video service that include a tertiary library along with a disk farm. Unfortunately, we show that the performance of neither magnetic tape libraries nor optical disk jukeboxes as part of a storage hierarchy is adequate to service the predicted distribution of movie accesses.

Throughout the dissertation, we identify several desirable changes in tertiary storage systems. To support new applications with higher concurrencies, tertiary libraries should be redesigned with a higher ratio of drives to media, higher bandwidth per drive and faster access times.

Professor Randy H. Katz, Chair

To my parents, Loretta and Norris Chervenak, for their constant love and support.

Contents

List of Figures
List of Tables

1 Introduction
  1.1 Contributions
  1.2 Methodology
  1.3 Thesis Outline

2 Tertiary Storage Technologies
  2.1 Introduction
  2.2 Magnetic Tape Technology
    2.2.1 The Tape Drive
    2.2.2 Representative Products
    2.2.3 Tape Robots
    2.2.4 Reliability of Tape Drives and Media
    2.2.5 Trends in Magnetic Tape Drives
  2.3 Optical Disk Technology
    2.3.1 Optical Disks
    2.3.2 Magneto-optical Recording
    2.3.3 Representative Optical and Magneto-optical Drive Products
    2.3.4 Optical Disk Jukeboxes
    2.3.5 Trends in Optical Disk Technology
  2.4 Other Tertiary Storage Technologies
    2.4.1 Holographic Storage
    2.4.2 Optical Tape
  2.5 Summary

3 Simulation, Measurements, Models and Metrics for Evaluating Tertiary Storage Systems
  3.1 Introduction
  3.2 Tertiary Storage Simulation
  3.3 Simulation Models and Device Measurements
    3.3.1 Tape Drive Model
    3.3.2 Tape Drive Measurements
    3.3.3 Optical Disk Model
    3.3.4 Optical Disk Measurements
    3.3.5 Robot Model
    3.3.6 Robot Measurements
    3.3.7 Summary of Tape and Optical Disk Models
  3.4 Evaluating Storage System Performance
  3.5 Summary

4 Workload-Based Performance Evaluation of Tertiary Storage Systems
  4.1 Hardware Simulation Parameters
    4.1.1 EXB10i
    4.1.2 EXB120
    4.1.3 High Performance Library
    4.1.4 HP120
    4.1.5 CD Changer
  4.2 Workload Characterization
    4.2.1 The Sequential Workload
    4.2.2 Video Server Workload
    4.2.3 Digital Library Workload
  4.3 Performance
    4.3.1 Sequential Workload
    4.3.2 Video Server Workload
    4.3.3 Digital Library Workload
  4.4 Summary

5 Tape Striping
  5.1 Introduction
  5.2 Data Striping in Disk Arrays and Tape Libraries
    5.2.1 Striping in Disk Arrays
    5.2.2 Disk Array Reliability
    5.2.3 RAID Taxonomy
    5.2.4 Tape Striping
  5.3 Tape Striping Issues
    5.3.1 Configuring a Tape Array
    5.3.2 Stripe Width
    5.3.3 RAID Levels in Tape Striping
    5.3.4 Synchronization Issues and Buffer Space Requirements
    5.3.5 Future Devices
  5.4 Performance of Striped Tape Systems
    5.4.1 Simulations
    5.4.2 Sequential Request Performance
    5.4.3 Performance for Random Workload with High Concurrency
    5.4.4 Improving Tape Drives and Robots
    5.4.5 Hypothetical Robots
    5.4.6 Improving Individual Properties of Tape Drives and Robots
  5.5 Summary

6 Toward a Workload Characterization for Video Servers and Digital Libraries
  6.1 Introduction
  6.2 A Workload Model
  6.3 Video-on-Demand Servers
    6.3.1 Background
    6.3.2 Application-Level Workload Characterization
  6.4 Digital Libraries
    6.4.1 Existing Systems
    6.4.2 Application-Level Workload Characterization
  6.5 Summary

7 Storage Systems for Video Service
  7.1 Introduction
  7.2 Disk Systems for Video Service
    7.2.1 One Movie Per Disk
    7.2.2 Striping Movies Among Disks
    7.2.3 Performance of Disk-Based Video Servers
  7.3 Storage Hierarchies for Video Service
    7.3.1 Storage Hierarchy Simulations
    7.3.2 Tape Libraries in the Hierarchy
    7.3.3 Optical Disk and Storage Hierarchies for VOD
    7.3.4 Different Locality Patterns
    7.3.5 Summary of Storage Hierarchies
  7.4 Future Work
  7.5 Summary

8 Conclusions

Bibliography

List of Figures

1.1 The storage hierarchy
2.1 A linear recording magnetic tape drive
2.2 A helical-scan magnetic tape drive
2.3 1990 predictions for bandwidth and capacity improvements in Exabyte 8mm tape drives
2.4 Optical disk structure
2.5 Optical disk components
2.6 Magneto-optical disk components
2.7 Holographic store components
3.1 Measured rewind behavior for Exabyte EXB8500 drive
3.2 Measured search behavior for Exabyte EXB8500 drive
3.3 Layout of EXB120 tape robot
3.4 EXB120 robot arm movement measurements
3.5 EXB120 robot load time measurements
4.1 Illustration of EXB10i stacker
4.2 Performance of EXB10i stacker on sequential workload
4.3 Performance of EXB120 library on sequential workload
4.4 Performance of high performance library on sequential workload
4.5 Performance of magneto-optical jukebox on sequential workload
5.1 Data striping in a disk array
5.2 Data striping with single bit parity in a disk array
5.3 Disk array taxonomy for describing redundancy schemes
5.4 Options for tape striping within or across robots
5.5 Sequential performance for an EXB120 library with 32 tape drives
5.6 Sequential performance for a variety of request sizes for EXB120 library with 8 drives
5.7 Performance of EXB120 robots with four and sixteen tape drives
5.8 Performance of high performance library with four and sixteen tape drives
5.9 Performance for EXB120 robot with four EXB8500 tape drives when tape drives, robots and both improve
5.10 Performance for EXB120 robot with sixteen EXB8500 tape drives when both tape drives and robots improve
5.11 Performance for EXB120 robot with sixteen EXB8500 tape drives when only tape drives improve
5.12 Performance for EXB120 robot with sixteen EXB8500 tape drives when only robots improve
5.13 Performance for hypothetical robot with 24 tape drives
5.14 Performance for hypothetical robot with 40 tape drives
5.15 Response time vs. tape drive transfer rate
5.16 Response time vs. robot arm speed
5.17 Response time vs. search/rewind rate
6.1 Cumulative Zipf's Law distribution with 1000 movies
7.1 Disk farms and storage hierarchies for video service
7.2 Price and access times of storage hierarchy components
7.3 Disk farm for video service with one movie stored per disk
7.4 Disk farm with movies striped across disks
7.5 One-movie-per-disk performance, response time limit 60 seconds
7.6 One-movie-per-disk performance, response time limit 5 minutes
7.7 One-movie-per-disk performance, response time limit 100 minutes
7.8 Striped disk performance, response time limit 60 seconds
7.9 Performance of EXB120 in storage hierarchy for video-on-demand
7.10 Performance of EXB120 with 20 tape drives in storage hierarchy for video-on-demand
7.11 Performance of EXB120 with 20 drives and ten times faster tape drives and robots in storage hierarchy for video-on-demand
7.12 Performance of high performance tape library in storage hierarchy for video-on-demand
7.13 Performance of hypothetical robots in storage hierarchy for video-on-demand
7.14 Performance of HP120 jukebox in storage hierarchy for video-on-demand
7.15 Performance of HP120 jukebox with 20 drives in storage hierarchy for video-on-demand
7.16 Parameterized Zipf distribution

List of Tables

2.1 Price and performance comparison of tape drive products
2.2 Classification of tape robots
2.3 Price and performance comparison of tape robots
2.4 Tape drive error rates
2.5 Exabyte EXB8200 tape drive repair statistics
2.6 Price and performance comparison of optical disk drive products
2.7 Price and performance comparison of optical disk jukeboxes
2.8 Comparison of media costs for 1 terabyte tertiary system
2.9 Comparison of robot costs for 1 terabyte tertiary system
3.1 Description of simulator events
3.2 Tape drive measurements
3.3 Average seek times for tape drives
3.4 Data rates for HP magneto-optical disk drive
3.5 Measurements of EXB120 robot grab and push times
3.6 EXB120 tape switch time
3.7 HP100 MO jukebox robot measurements
4.1 Simulation parameters for EXB8500 tape drive
4.2 Simulation parameters for EXB10i stacker
4.3 Simulation parameters for EXB120 tape library
4.4 Simulation parameters for high performance tape drive
4.5 Simulation parameters for high performance library
4.6 Simulation parameters used to simulate C1617T magneto-optical disk drive
4.7 Simulation parameters for HP120 magneto-optical disk jukebox
4.8 Simulation parameters used to simulate CD-ROM optical disk drive
4.9 Simulation parameters for CD autochanger
4.10 Sequential workload characterization
4.11 Video server workload characterization
4.12 Digital library workload characterization
4.13 Performance of robots on video server workload
4.14 Performance of robots on digital library workload
5.1 Summary of chapter simulations
5.2 Simulation parameters for tape drives and robots at various speedup factors
5.3 Comparison of response times for ten times faster tape drive and robot
5.4 Comparison of response times for ten times faster tape drive only
5.5 Comparison of response times for ten times faster robot only
6.1 Workload model
6.2 Data rates and image dimensions for various compression schemes
6.3 Object sizes for typical video server objects
6.4 Video rental statistics
6.5 Video server population sizes
6.6 Usage statistics for Melvyl bibliographic database
6.7 Response time statistics for Melvyl bibliographic database
6.8 Statistics on number of objects in campus libraries of University of California
6.9 Statistics on number of objects in department libraries for Berkeley campus
6.10 Typical sizes for ASCII and bit-map page images
6.11 Weekly statistics for IAC and INSPEC databases
6.12 Statistics for files of the IAC database
6.13 Summary of workload characterization of video server application
6.14 Summary of workload characterization for digital library application
7.1 Summary of video server workload characterization
7.2 Cost-per-stream comparisons of replication schemes in one-movie-per-disk video server
7.3 Comparison of striped video performance for different replication patterns
7.4 Comparison of number of streams supported by one-movie-per-disk and striped video servers
7.5 Comparison of cost per stream for one-movie-per-disk and striped video servers
7.6 Simulation parameters for storage hierarchy simulations for video-on-demand application
7.7 Streams predicted to be serviced by tertiary storage
7.8 Locality of movie requests for parameterizable Zipf distribution

Acknowledgements

It has been my privilege at Berkeley to work closely with two fine systems professors, Randy Katz and David Patterson. Randy has been an excellent advisor. His rapid flow of ideas and wry humor make him an exhilarating person to work with. Randy combines a tremendous breadth of knowledge with impressive technical depth; discussions with him invariably reveal new perspectives on problems and fresh research directions. From the first draft of my masters report to the fourth revision of my dissertation, Randy has been a careful, constructive, tough critic of my written work. Always generous in encouraging his students to attend conferences and make presentations, Randy has helped me dramatically improve my speaking skills. Throughout my graduate career, Randy has been a great source of advice regarding all aspects of graduate school, publishing papers, the job search and the career of a young professor. Over the last two years, the experiences he has shared from his time in Washington at ARPA have given me insight into the workings of government funding agencies and great respect for Randy's dedication to public service.

In the last two years, Dave Patterson assumed the role of my local advisor, and I am very grateful to him for the tremendous help he has given me. He has offered valuable technical input and encouraged me to pursue several interesting new research directions. Dave has also offered advice and support on many topics besides my dissertation research, including interviewing, effective presentations, dissertation organization and writing, the goals of a first-year professor and coping with the travails of life. Warm, funny, and generous with his time and attention, Dave communicates enthusiasm and joy in being a professor and advisor. It is very satisfying and validating for a graduate student to have an advisor who takes such evident pleasure in sharing new ideas and results. I have greatly enjoyed working with him.

I am also grateful to the third reader on my dissertation, Professor Kyriakos Komvopoulos, who offered valuable input on tribology and reliability issues for tape systems early in my research in those areas.

My last year as a graduate student has been particularly difficult, and I am deeply grateful to many people for their love, friendship and support.

God blessed me with wonderful parents. My mother has a truly unselfish heart. She has always given me unconditional love and support and is my most tireless cheerleader and my staunchest defender. She also has a keen intelligence, a deep understanding of human nature, and a compassion that allows her genuinely to forgive other people and always seek to find good in them. My father is a warm, kind, intelligent, honorable, and unfailingly honest man. He has always been quietly confident of my abilities, and over the years he has patiently offered counsel on everything from calculus to finances to relationships. I treasure his confidence in me. Together, my parents have taught me the importance of family, loyalty, honesty, compassion and forgiveness. I love them with all my heart.

I am equally blessed with wonderful siblings, my sisters, Mary and Virginia, my brother, Tim, and my brother-in-law, John. My nephew, Jack, is the most brilliant and beautiful baby imaginable. All of them have given me tireless and generous support this year, as well as the joy of spending time with them. I am especially grateful to Mary for helping me to remain sane during crazy times.

Because I have a large, warm extended family, I always feel that the world is populated with kind, loving people. I deeply miss my grandmother, a remarkable woman, but her influence is felt in the presence of her children, grandchildren and great-grandchildren. Among many wonderful aunts, uncles and cousins, I treasure my aunts, Connie and Rosalie, and my cousin Connie and her family, who have always been so loving and supportive of me.

There are several friends on whom I have leaned particularly hard in the last year, and I am very grateful to them for their support. Kim Keeton has been a generous and loyal friend, on several occasions dropping everything in her busy life to help me and always being a patient and compassionate listener. Shellie Sakamoto and John Takamoto have offered the incomparable comfort of longstanding friendship as well as the pleasure of their company and their cats to cheer me up. Janet and Peter Chen have given me many prayers, loyal friendship and invaluable spiritual counsel. John Hartman and Ken Shirriff have been so good to me, offering humor, support, and the male perspective while welcoming me into the fun, eclectic community centered at the Hillegass house. Judi Franz, Sunita Sarawagi, Hilary Kaplan, Bret Fausett, Valerie Taylor, Mary Baker and Margo Seltzer have all given me warm friendship over the years as well as much-needed support at essential moments in the last few months.

Perhaps my most valuable professional experience at Berkeley was my participation in the RAID project. I was fortunate to work with many bright systems students who became good friends. My former officemates, Peter Chen and Ed Lee, were a great pleasure to work, talk, argue and spend time with. Other good friends and valued colleagues on the project were Garth Gibson, Srini Seshan, and Ethan Miller. Additional participants who played a big part in the success of RAID were Rich Drewes, Rob Pfile, Rob Quiros and Mani Varadarajan. In addition, the overlap of the RAID, Sprite and Postgres projects introduced me to a collection of wonderful people, including John Hartman, Ken Shirriff, Mary Baker, Margo Seltzer, and Mendel Rosenblum. Besides their participation in the XPRS project and retreats, John Ousterhout was a valuable critic of my masters' project, and Mike Stonebraker was a helpful member of my qualifying exam committee. Tom Anderson offered very helpful criticism of my interview talk and gave a lot of great advice about job-hunting. Over the years, I also had a wide collection of officemates who made coming to work fun and from whom I learned a great deal. They include David Wood, Mario Silva, Tzi-cker Chiueh, Joel Fine, Remzi Arpaci, Elan Amir and Hari Balakrishnan.

Besides my advisors, the person at Berkeley from whom I learned most is Ken Lutz, the engineer responsible for making RAID work. He taught me not only how to design and debug hardware better, but also methods for approaching problems and management skills. He generously offered advice for dealing with fellow students, professors, companies and bureaucrats. Ken approaches his work with savviness, skill, straightforward honesty and slightly cynical good humor. I can offer no better advice to young systems students than to learn all they can from Ken.

Theresa Lessard-Smith and Bob Miller, our grant administrator and assistant, play an important role in the lives of all systems students at Berkeley. They make our lives easier in so many ways and have become valued friends to me. Terry is a warm, sweet person, and I have greatly enjoyed discussing gardens and life with her. Bob, with his slightly sarcastic wit, always makes me laugh.

Being at Berkeley has been a wonderful experience for me, and I will miss this place and all the people who have been a part of it.

Chapter 1

Introduction

Just as a biologist studying animal locomotion in a jungle might ignore a lumbering elephant in favor of a speeding cheetah, so do computer systems designers largely ignore the tape and optical disk systems that store masses of digital information to concentrate on making fast processors run faster and small disks grow smaller. In this dissertation, we focus on one of the most neglected areas of computer system design, tertiary storage.

Tertiary storage includes magnetic tape and optical disk devices, as well as more exotic technologies like optical tape and holographic storage. Named because it is traditionally the third level in a computer system storage hierarchy from which data are fetched, tertiary storage is inexpensive but slow. Figure 1.1 shows a typical storage system hierarchy. At the top of the hierarchy is primary storage: random access memory (RAM) used for caches and main memory. Typical RAM technologies are static (SRAM) and dynamic (DRAM). Below RAM is solid state memory and then magnetic disk devices, commonly called secondary storage. At the bottom of the hierarchy are tertiary storage devices: magnetic tape and optical disk. In going from the top of the hierarchy to the bottom, average access times increase dramatically, from tens of nanoseconds for DRAM to tens of milliseconds for magnetic disks, tens of seconds for optical disk jukeboxes and several minutes for tape libraries. Descending the pyramid also results in dramatically decreased cost per megabyte of storage. The size of the pyramid blocks suggests that computer systems will generally contain small amounts of more expensive technologies like RAM and larger amounts of less expensive storage like tape.

[Figure 1.1: The storage hierarchy shows a traditional classification of devices for computer data storage. From top to bottom: RAM (primary storage); solid state disk and magnetic disk (secondary storage); magnetic tape and optical disk (tertiary storage).]

Magnetic tape devices have long been an important component of storage systems, but access to data on tape is notoriously slow. Loading a tape has historically required human intervention; accessing information stored in an archive might take hours or days. As a result, data were typically sent to tertiary storage only if data sets were too large to be stored on disk or if the data were unlikely to be accessed again. Thus, the main applications that use tertiary storage have long been backups of file systems, archives of large databases, and staging operations that move large scientific data sets onto and off of disks for supercomputer computation. The resulting workload to the tertiary storage system consists mostly of sequential write operations for backup and archival applications; for staging data onto disk systems, the tertiary system also performs large read operations [22], [29], [39], [49], [72], [110], [114], [113], [63], [64], [75], [51], [70], [89], [16], [23]. There is generally a single process reading or writing tertiary storage at any time; since the tertiary storage system has long been off-line, the fast response times required for interactive operation are not assumed by the applications.

In the last decade, several advances in tertiary storage technology have made it of increasing interest to computer system designers. First, increases in bits-per-inch and tracks-per-inch densities have increased tape capacity dramatically. Second, a variety of inexpensive tape drives has become available. Next, magnetic tape and optical disk media are inexpensive compared to magnetic disk; typical tape cost is $.005 per megabyte compared to $.50 per megabyte for magnetic disk. The low cost of storage makes it economical to build massive tertiary storage systems. Fourth, optical disk technology, and in particular CD-ROM technology, has emerged as a popular, convenient, and inexpensive way to distribute information. Fifth, a large number of robotic devices for handling both magnetic tape and optical disk allow access to tertiary storage without human intervention, making response times more predictable. Robots make it possible to consider the tertiary storage system as being nearly on-line. Finally, increases in computer processing speed have enabled a growing number of applications, ranging from climate modeling and geographic information systems to digital libraries and multimedia servers, that would benefit from fast access to a massive storage system.

In this dissertation, we evaluate how well tertiary storage systems succeed in supporting some of these new applications. These applications will have very different workloads from traditional backup and archival applications. We demonstrate that digital libraries and video servers, for example, will have a high concurrency of active requests, will have fairly strict response time requirements, and will not necessarily make sequential requests for data. Tertiary storage systems cannot be expected to replace magnetic disks; rather, we investigate whether tape libraries and optical disk jukeboxes perform well enough that they can become an effective part of a storage hierarchy servicing these new applications. In applications like video service and digital libraries, we assume that a magnetic disk system would serve as a cache for the most popular data, while the tertiary storage system would service infrequent accesses to less popular data.
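The media-cost gap noted above drives much of the economic argument for tertiary storage. The following sketch, assuming only the per-megabyte figures quoted in this chapter ($.005/MB for tape media versus $.50/MB for magnetic disk), illustrates the comparison for a hypothetical one-terabyte store; the function and constant names are purely illustrative and are not part of the dissertation's simulator.

```python
# Illustrative only: compares media cost for a large store using the
# per-megabyte figures quoted in the text (tape ~$0.005/MB, disk ~$0.50/MB).
MEGABYTES_PER_TERABYTE = 1024 * 1024

def media_cost(capacity_tb: float, cost_per_mb: float) -> float:
    """Return the media cost in dollars for capacity_tb terabytes."""
    return capacity_tb * MEGABYTES_PER_TERABYTE * cost_per_mb

if __name__ == "__main__":
    for name, cost_per_mb in [("magnetic tape", 0.005), ("magnetic disk", 0.50)]:
        print(f"1 TB on {name}: ${media_cost(1.0, cost_per_mb):,.0f}")
    # Prints roughly $5,243 for tape media versus $524,288 for disk:
    # a factor of 100 difference in media cost alone.
```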

Unfortunately, we find that despite the advantage of inexpensive mass storage, the promise of current tertiary storage systems is not fulfilled because of their poor performance. Bandwidth of tape drives and optical disk drives is quite low, and access times are on average minutes and tens of seconds, respectively, compared to milliseconds for accesses to magnetic disk. In addition, tape libraries and optical disk jukeboxes are designed for traditional, archival applications, with a large number of tapes or platters and a handful of drives; this configuration does not support the high concurrency and fast access required by the new applications. After showing the inadequacy of current tertiary systems, we evaluate several feasible technology improvements. We show that increased drive bandwidth, lower latency for tape operations, and a higher proportion of drives to media within robots are essential to supporting workloads with high concurrency and strict response time constraints.

1.1 Contributions

The contributions of this dissertation are as follows:

- We present the first extensive workload analysis of large-scale tertiary storage systems used for traditional backup and archival workloads as well as new kinds of multimedia applications. We describe the basic operation of tertiary devices, including the performance of existing products on a variety of workloads.

- We find that the technique of data striping, which was used so successfully in disk arrays to increase the bandwidth and reduce the latency of large accesses, is only effective in tape libraries for a limited class of applications. Striping is effective for workloads that have mainly sequential accesses or low concurrency. For a greater number of outstanding requests to more randomly distributed data, striping the tape library is detrimental to performance. This poor performance is caused by increased contention for a limited number of tape drives in a typical library.

- We characterize two new workloads: video-on-demand service and digital libraries. In a movies-on-demand video server application, we predict that accesses to movies will be highly localized (see the sketch following this list), potentially making the video server a good candidate for a storage hierarchy in which tertiary storage services accesses to the least popular movies. Unfortunately, we find that tape libraries and optical disk jukeboxes don't perform adequately as part of a storage hierarchy to service the predicted workload. The small number of tape or optical disk drives in a typical tertiary library and their low bandwidth make tertiary storage systems unable to service more than a small number of concurrent video streams.

- We evaluate the use of disk farms for movies-on-demand video service. We show that disk systems are much more effective at supporting a large number of video streams than are storage hierarchies that include tape or optical disk. Among disk systems, we show the advantage of using striping to use bandwidth effectively, thus supporting more video streams with the same I/O hardware.

- We present a list of desired improvements in magnetic tape and optical disk systems.
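The locality claim in the video-on-demand contribution is quantified later in the dissertation with a Zipf's Law request distribution (Figure 6.1 plots a cumulative Zipf distribution over 1000 movies). As a rough illustration of what such a skewed access pattern looks like, the following sketch generates Zipf-distributed movie requests; the parameter names and the choice of a unit exponent are assumptions made here for illustration, not the dissertation's workload model.

```python
# Illustrative sketch of a Zipf-like request distribution over a movie catalog.
# Popularity of the i-th most popular movie is proportional to 1/i (exponent 1
# is an assumption; the dissertation's parameterized Zipf model may differ).
import random

def zipf_weights(num_movies: int, exponent: float = 1.0) -> list[float]:
    weights = [1.0 / (rank ** exponent) for rank in range(1, num_movies + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def sample_requests(num_movies: int, num_requests: int) -> list[int]:
    """Draw movie indices (0 = most popular) according to the Zipf weights."""
    weights = zipf_weights(num_movies)
    return random.choices(range(num_movies), weights=weights, k=num_requests)

if __name__ == "__main__":
    requests = sample_requests(num_movies=1000, num_requests=100_000)
    top_20 = sum(1 for r in requests if r < 20)
    print(f"Fraction of requests to the 20 most popular titles: {top_20 / len(requests):.2f}")
    # With exponent 1.0 and 1000 titles, nearly half of all requests fall on the
    # 20 most popular movies -- the kind of locality that makes a disk cache in
    # front of tertiary storage attractive.
```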

1.2 Methodology

The methodology embodied in this dissertation first uses measurements of real devices to derive models of their behavior. Then we incorporate these models into performance simulators (Chapters 3, 7). We also propose workload models for various applications that are used to drive simulations (Chapters 4, 5, 6). Finally, we use simulation results to analyze design tradeoffs for different storage system configurations and to make recommendations intended to influence the design of future tertiary storage systems (Chapters 4, 5, 7).
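Chapter 3 describes the event-driven tertiary storage simulator that implements this methodology. As a minimal sketch of what an event-driven storage simulator of this kind looks like, the outline below processes timestamped events from a priority queue; the event types, handler names and latency values are invented for illustration and do not correspond to the simulator's actual implementation.

```python
# Minimal event-driven simulation skeleton (illustrative; not the dissertation's
# simulator). Events are (time, seq, name, payload) tuples ordered by time.
import heapq

class Simulator:
    def __init__(self):
        self.now = 0.0
        self.events = []            # min-heap of (time, seq, name, payload)
        self._seq = 0               # tie-breaker so the heap never compares payloads

    def schedule(self, delay, name, payload=None):
        heapq.heappush(self.events, (self.now + delay, self._seq, name, payload))
        self._seq += 1

    def run(self, handlers, until=float("inf")):
        while self.events and self.events[0][0] <= until:
            self.now, _, name, payload = heapq.heappop(self.events)
            handlers[name](self, payload)

# Example: a request arrives, waits for a robot to load a tape, then transfers.
def on_request(sim, payload):
    sim.schedule(30.0, "tape_loaded", payload)           # assumed robot/load latency (s)

def on_tape_loaded(sim, payload):
    sim.schedule(payload["mb"] / 0.5, "done", payload)   # assumed 0.5 MB/s drive rate

def on_done(sim, payload):
    print(f"request for {payload['mb']} MB finished at t={sim.now:.1f} s")

if __name__ == "__main__":
    sim = Simulator()
    sim.schedule(0.0, "request", {"mb": 100})
    sim.run({"request": on_request, "tape_loaded": on_tape_loaded, "done": on_done})
```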

1.3 Thesis Outline

This thesis is composed of eight chapters. Chapter 2 describes the technology of magnetic tape and optical disk devices. It surveys existing drive and robot products and discusses tradeoffs in the design of tertiary systems. Chapter 3 describes the event-driven tertiary storage simulator used to generate many of the results in this thesis. The chapter also describes our measurements of tape drive, tape robot and optical disk devices, and presents the performance models used in our simulations. The fourth chapter is a general study of tape library and optical jukebox performance. It presents simulation results for a variety of workloads including traditional archival applications and more demanding multimedia applications. Chapter 5 presents a study on the usefulness of striping in tape libraries. It shows that striping is very effective in increasing the bandwidth of sequential workloads, but that when accesses are more random in nature or the concurrency of the workload is high, striping performs poorly.

Next, Chapter 6 presents an initial characterization of two emerging applications: video-on-demand and digital libraries. We characterize typical access sizes, locality patterns, response time constraints, and loads. The seventh chapter evaluates storage systems to support video-on-demand service. We compare storage systems composed entirely of disks to storage hierarchies that include magnetic disk and tertiary storage. For disks, we compare non-striped and striped systems and find that striped video servers support many more viewers. We also find that storage hierarchies perform poorly for this application because of the long access times and low bandwidth of tertiary libraries. The dissertation ends with concluding remarks and a bibliography.

Chapter 2

Tertiary Storage Technologies

2.1 Introduction

In the last chapter, we described a typical computer system storage hierarchy. In this chapter, we describe the workings of tertiary storage, the lowest level of the hierarchy. In Section 2.2, we discuss magnetic tape drive technology, including tradeoffs in the design of tape systems. In Section 2.3, we do the same for optical disk technology. We give examples of both magnetic tape and optical disk drive and robot products and describe technology trends. We finish the chapter with a discussion of two more exotic tertiary storage technologies: holographic storage and optical tape.

2.2 Magnetic Tape Technology

2.2.1 The Tape Drive

Magnetic tape drives [4], [34], [38], [40], [43], [48], [58], [59], [66], [96], [97], [107], [118] store data as small magnetized regions on a tape composed of magnetic material deposited on a thin, flexible substrate such as plastic. Magnetic tapes are written by inducing a current proportional to an input signal in the coil of an inductive write head; this current induces a magnetic field in the tape below the write head that magnetizes a small tape region. After data have been written, a read head can detect magnetic flux on the tape; the rate of change of this flux results in a voltage on the read head that is used to deduce the original input signal [59], [96]. A tape drive mechanism consists of reels, motors, gears, brakes and belts that thread the tape, control tape reeling, guide the tape to maintain its position relative to the heads, control the acceleration and steady-state velocity of the tape, and control tape tension [66].

The substrate for magnetic tapes is a polyethylene terephthalate (PET) base film from 12 to 36 microns thick [13]. There is a tradeoff between reliability and capacity in choosing the thickness of the substrate. A thicker substrate is more durable and able to withstand high accelerations and start and stop operations without excessive distortions [66]. A thinner substrate allows a single reel to store more tape, with greater resulting capacity.

On top of the substrate is the magnetic layer that stores data. Ideally, this magnetic layer should be very smooth to allow the minimum spacing, or air bearing, between the read/write head and the tape; the smaller this air bearing, the greater the density of magnetic information that can be stored on the tape. (Typically, the separation between the head and moving tape is approximately 0.2 microns [66].) There are two types of magnetic layers: Magnetic Particle (MP) and Metal Evaporated (ME). Historically, most tape systems have used magnetic particle tapes in which the PET substrate is covered with a particulate magnetic coating that is 2 to 5 microns thick. The coating contains magnetic particles such as CrO2 or Fe2O3, polymeric binders and lubricants. In the last five years, metal evaporated tape systems have overcome earlier difficulties with corrosion and tribology (the head/tape interface) and now offer superior recording qualities in many tape products. Metal evaporated tapes have a thin film of metal (usually Co-Ni) evaporated onto the PET substrate in a layer about 100 nanometers thick.

[Figure 2.1: A Linear Recording Magnetic Tape Drive. The figure shows a fixed linear machine reel, a tape cartridge, air bearings, an inductive read/write head, and tracks running parallel to the edges of the tape.]

Because the thin film magnetic layer is much thinner than a particulate magnetic coating, thin film tapes hold greater capacity. In addition, thin film media have recording characteristics superior to those of particulate media, including higher effective magnetization, lower error rates, greater smoothness, highly isolated asperities, and better signal-to-noise ratios [59], [82], [13]. Besides the plastic substrate and the magnetic coating, most magnetic tapes have a back coating for protection against static. Electrostatic attraction can lure and embed dust particles from the atmosphere into the tape, causing read and write errors.

Figures 2.1 and 2.2 show the two most common tape formats and drive mechanisms for magnetic recording of computer data: linear and helical-scan.

In linear recording magnetic tape drives, data are written in tracks parallel to the edges of the tape. Many linear recording devices use stationary, multi-track heads to read or write the entire width of the tape simultaneously. Figure 2.1 shows a simplified drawing of a recording head for a linear tape drive. The recording head is made up of a coil and a gapped magnetic structure that intensifies and localizes the magnetic field during writing [96]. The read and write elements are of different size, orientation and composition; the read elements are magnetoresistive, while the write elements are inductive. Before writing the tape, a separate erase head erases the tape with a magnetic field. The read/write head is narrower than the erase head, so when new data are written, gaps or "write guard bands" are left between adjacent tracks to minimize cross talk. Further, the read elements on the head are narrower than the write elements, creating a "read guard band" to further reduce the risk of reading erroneous information. Data are read immediately after being written to check for errors and rewritten if errors occur. In older linear tape drives, the write guard bands or separations between adjacent tracks are large; as a result, the areal density of these drives is relatively low compared to helical scan tapes. More recent linear tape drives have reduced or eliminated the separation between tracks and greatly increased the areal density. Another type of linear recording drive is a serpentine drive that reads or writes a few tracks at a time down the entire length of the tape; then tape motion switches directions, and the heads themselves are shifted to read or write a different set of tracks.

[Figure 2.2: A Helical-Scan Magnetic Tape Drive. The figure shows a rotary drum carrying read and write heads, a tape cassette, and tracks written at an angle across the helical tape.]

Figure 2.2 shows a helical scan tape drive. Helical scan tape drives achieve high areal densities and, potentially, high data rates [10], [74], [100], [108]. Read and write heads are situated on a drum that rotates at high speed (around 2000 r.p.m.). Tracks are written at a small angle (10 to 20 degrees) with respect to the direction of tape movement onto a slow-moving (around 1/2-inch per second) magnetic tape. On some drives, a stationary erase head erases old data before writes occur. In addition, helical scan drives usually contain a servo head that can read servo information written at the edges of the tracks.

The presence of the servo head makes it possible for the heads to be positioned precisely, so that gaps between adjacent tracks can be small. If two sets of read/write heads are set at opposite angles, then two tracks can be written simultaneously. These adjacent tracks may overlap slightly, writing a herringbone pattern as shown in Figure 2.2. Using different write angles on adjacent tracks minimizes crosstalk [108]. Because adjacent-track gaps are small or nonexistent in helical scan drives, areal density on these drives is high. As for linear tape drives, in helical-scan drives, data are read immediately after being written; the drive continues to write the data in subsequent locations until the write completes without errors.

There is substantial controversy over the relative merits of linear and helical scan technologies. Stationary head, linear drives are thought to have a less abrasive head-to-tape interface than helical scan systems, resulting in longer-lived heads and tapes that tolerate more recording and play passes. Helical, rotating-head systems boast a relatively slow normal tape speed that allows substantial speedup during fast search and rewind operations; for example, 4mm DAT drives have a fast positioning operation that is 200 times faster than the normal tape speed. By contrast, the tape typically moves quickly in a linear drive, since the tape speed along with the track density and number of heads determines the data rate of the tape drive. In addition, Bhushan argues that helical systems potentially have higher data rates, since it should be easier to speed up the rotating drum in a helical tape drive than to quickly move heavy rolls of tape in a linear drive [13].

Even among helical scan drives, there is controversy over which design is superior. For example, the "wrap angle," or amount of tape that is wrapped around the rotating drum that contains the recording heads, is 90 degrees for a 4mm DAT drive and 220 degrees for an 8mm drive. Advocates of 4mm technology claim the shorter wrap angle is less stressful, reducing wear and improving access times [74], while 8mm proponents claim their longer wrap angle is actually less stressful, providing better tape guidance [10].

Several features of magnetic tape drives are notable. Most tapes are append-only, although a few update-in-place magnetic tape drives do exist [108]. When data are written on any portion of an append-only tape, whatever had been written beyond the new write point is irretrievable. There are several reasons for the append-only nature of tapes. First, tape mechanisms are not well-suited for the large numbers of forward and reverse positioning operations that are typical of update-in-place devices like magnetic disk drives. The mechanical stop and start operations are time-consuming and cause wear on the tape, so they are avoided. Instead, applications strive to use tape drives in a streaming mode of operation in which data are sent at a constant rate to or from the tape head. Second, update-in-place is difficult because the high rate of errors on the media makes it difficult to predict exactly where data will lie on the tape. When a tape drive writes data, it immediately reads back what it has written to check for correctness. When there is an error, the tape drive re-writes the data at a later position on the tape, repeating the read-after-write checking until the data are written without errors. Unlike magnetic disks, where data are mapped into particular sectors at predictable locations on a disk platter, it is impossible to know before a write operation exactly where data will physically reside on a magnetic tape.

There have been a few update-in-place magnetic tape drives, notably the recent Data/DAT format for 4mm tape drives [108]. This format has not found favor compared to its more traditional, append-only competing format, DDS. Update-in-place tape drives work by dividing the tape into zones or sectors, leaving wide buffers of empty tape between sectors to allow data to be rewritten if write errors occur. These buffer zones must also be large to protect good data from the tape drive's large erase head. Because of these large buffer zones, update-in-place tapes have lower storage capacity than append-only tapes.

Many tape drives require that tape cartridges be rewound before they can be ejected from a tape drive. There are several reasons for this requirement.

Some tapes are stored on a single tape reel and must be rewound before they can be physically removed. Even for double-reel cassettes, rewind is often required because there may be servo information, used for precise positioning, at the start of the tape that must be read on insertion. Also, because inserting and ejecting a tape can cause considerable wear, some manufacturers prefer that those operations occur on a strip of tape where no data are stored. Some tape drives include load/eject "zones" scattered throughout the tape; these zones contain servo information to allow tapes to be inserted at those points without rewinding to the start of the tape. Finally, keeping tapes in streaming operation is essential to achieving good performance. In order to minimize tape and head wear, tape drives will release tension on the tape if there is a lapse of a certain length of time after an operation; if the delay goes on even longer, the drum in a helical scan tape drive will stop spinning. Subsequent requests will have to wait for the drum to spin up and for tape tension to be reapplied. To avoid these delays that reduce throughput, the drive should operate in streaming mode.

2.2.2 Representative Products

There are a number of magnetic tape products available that provide a variety of tape capacities and speeds for a range of prices. Table 2.1 compares several tape drives based on cost, cartridge capacity and data transfer rate [1], [10], [62], [74], [119], [77], [100], [108], [115], [93], [11]. At the low end, an inexpensive 4mm DAT drive stores several gigabytes of data but has a low transfer rate of 366 kilobytes/second. At the high end, the very expensive Sony D-1 drive stores about 25 times the capacity stored by the DAT drive and transfers data at approximately 100 times the transfer rate of the DAT drive.
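The capacity and bandwidth figures in Table 2.1 translate directly into the time needed to stream an entire cartridge, which is one simple way to compare drives at opposite ends of the price range. The sketch below does this arithmetic for the two drives singled out in the text; the capacities and rates come from Table 2.1, while the helper function is purely illustrative and ignores seek, load and robot time.

```python
# Illustrative: time to read a full cartridge end-to-end at the drive's
# sustained transfer rate, using Table 2.1 figures (no seek or load time).
def full_cartridge_read_hours(capacity_gb: float, rate_mb_per_s: float) -> float:
    return (capacity_gb * 1024.0 / rate_mb_per_s) / 3600.0

if __name__ == "__main__":
    drives = {
        "Sony SDT-4000 (4mm DAT)": (4.0, 0.366),   # 4 GB at 366 KB/s
        "Sony D-1 (19mm helical)": (90.0, 32.0),   # 90 GB at 32 MB/s
    }
    for name, (gb, mbps) in drives.items():
        print(f"{name}: {full_cartridge_read_hours(gb, mbps):.1f} hours to stream one cartridge")
    # Roughly 3.1 hours for the DAT drive versus about 0.8 hours for the D-1;
    # the high-end drive trades much higher cost for both capacity and bandwidth.
```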

Tape Drive         Price ($)    Media Cost ($/MB)   Capacity (GBytes)   Data Transfer Rate   Drive Type
Sony SDT-4000      $1,695       $0.015              4                   183-366 KB/sec       4mm helical
Exabyte EXB8500    $2,315       $0.008              5                   500 KB/sec           8mm helical
Storage Tek 4220   $19,000      $0.013              0.2-0.8             712 KB/sec           1/2-inch linear
Metrum 2150        $32,900      $0.0013             18                  2 MB/sec             1/2-inch helical
Ampex D-2          $100,000     $0.0018             25                  15 MB/sec            19mm helical
Sony D-1           $270,000     $0.0016             90                  32 MB/sec            19mm helical

Table 2.1: Price, cartridge capacity and data transfer rates of a variety of magnetic tape products. Prices from the January, 1994 issue of SunWorld magazine.

2.2.3 Tape Robots

To provide higher bandwidth and capacity than can be supplied by a single device, several companies have built automated library systems. These libraries hold tens, hundreds or thousands of cartridges that can be loaded by robot arms into a collection of magnetic tape drives.

Table 2.2 shows a classification of some of the robots available for handling magnetic tape cartridges automatically [62], [52]. Table 2.3 describes examples of each type of robot. Large libraries generally contain many cartridges, several drives and one or two robot arms for handling cartridges. The cartridges are often arranged in a rectangular array. Other "large library" configurations include a hexagonal "silo" with cartridges and drives along the walls, and a library consisting of several cylindrical columns holding cartridges that rotate to position them. Usually these large libraries are quite expensive ($500,000 or more), but they often have lower cost per megabyte than less expensive robotic devices. One disadvantage of large tape libraries is the low ratio of drives and robot arms to cartridges. In a heavily-loaded system, there may be contention for both arms and drives.

Carousel devices are moderately priced (around $40,000) and hold around 50 cartridges. The carousel rotates to position the cartridge over a drive, and a robot arm pushes the cartridge into the drive. In most cases, there are one or two drives per carousel.

Finally, the least expensive robotic device ($10,000 or less) is a stacker, which holds approximately 10 cartridges in a magazine and loads a single drive. The magazine may move vertically or horizontally to position a tape in front of the drive slot, or the stacker may have a robot arm that moves across the magazine to pick a cartridge. A storage system composed of stackers would have the highest ratio of robot arms and drives to cartridges.


    Type             No. Cartridges   No. Drives   No. Robot Arms          Cost
    Large Library    10 to 1000       several      one or two              high
    Carousel         around 50        one or two   one (carousel)          moderate
    Stacker          around 10        one          one (magazine or arm)   low

Table 2.2: Classification of tape robots.

    Tape Library                   Drives   Tapes   Total Capacity   Price ($)     Tape Drive Technology   Robot Type
    ADIC DAT Autochanger 1200C     1        12      24 GB            $     8,900   4mm helical             stacker
    Exabyte EXB10i                 1        10      50 GB            $    13,000   8mm helical             stacker
    Spectralogic STL800 Carousel   2        40      200 GB           $    31,465   8mm helical             carousel
    Exabyte EXB120                 4        116     580 GB           $   100,000   8mm helical             library
    Metrum RSS600                  5        600     10.8 TB          $   265,900   1/2-in helical          library
    Ampex DST600                   4        256     6.25 TB          $ 1,000,000   19mm helical            library

Table 2.3: Comparison of several tape robots. Prices from the January 1994 issue of SunWorld magazine.

Robot access times are fairly short compared to tape positioning operations like search and rewind. A tape switch operation, which replaces a tape loaded in a tape drive with a new tape from a shelf, involves the use of the robot arm. In the next chapter, we show that a tape switch operation may take several minutes. The tape switch may first require rewinding the currently-loaded tape. Next, that tape must be physically ejected from the tape drive. The robot arm moves to unload the old tape and load a new one. Then the tape drive physically loads the new tape, including reading servo information at the start of the tape. The tape drive performs a forward search operation to position the tape. Finally the tape drive performs the read or write data transfer operation. The robot contribution to the tape switch time is between 5 and 40 seconds on typical robots.
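To illustrate why a tape switch can take several minutes even though the robot arm contributes only 5 to 40 seconds of that time, the sketch below totals hypothetical per-step latencies. The step sequence follows the description above; the individual timings are invented for illustration and are not measurements of any particular drive or robot.

    # Hypothetical tape-switch breakdown; all timings are assumed,
    # illustrative values in seconds, not measurements.
    switch_steps = [
        ("rewind currently-loaded tape",     60.0),  # worst case: far from start of tape
        ("eject old cartridge",              15.0),
        ("robot: unload old, load new",      20.0),  # robot arm share (5-40 s typical)
        ("drive load + read servo info",     25.0),
        ("forward search to target",         45.0),
    ]

    total = sum(t for _, t in switch_steps)
    for name, t in switch_steps:
        print(f"{name:32s} {t:6.1f} s")
    print(f"{'total switch time':32s} {total:6.1f} s  (~{total/60:.1f} min)")
    # Very little of the total is robot motion; most of the latency is
    # tape positioning (rewind and search) inside the drive itself.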


    Drive               Corrected Bit Error Rate
    DAT                 1 in 10^15 bits
    Exabyte EXB8500     1 in 10^13 bits
    1/4-inch            1 in 10^14 bits
    Metrum 1/2-inch     1 in 10^13 bits
    19mm D-2            1 in 10^12 bits

Table 2.4: Error rates, per bits read or written, for a variety of tape drives.

2.2.4 Reliability of Tape Drives and Media

Magnetic tape systems face difficult reliability challenges [9]. Tape wear, head wear and rates of errors uncorrectable by error correction codes (ECC) may make errors more frequent in large tape systems than in disk arrays. It is an open research question how much error correction will be required to make tape arrays adequately reliable.

Tape Media Reliability

The rate of raw errors (i.e., errors before any error correction has been performed) is quite high on magnetic tape media: approximately one error in every 10^5 bits read. Most of these errors are caused by "drop outs," in which the signal being sensed by the tape head drops below a readable value. Drop outs are most commonly caused by protrusions on the tape surface that temporarily increase the separation between the head and the tape, causing a drop in signal intensity [66]. There are several ways that debris can become embedded in the tape and cause drop outs. First, when the tape is originally sliced, loose pieces of tape substrate and coating are left at the edges of the tape; these may become embedded in the tape surface. Second, as the tape passes through the drive mechanism, it can become charged and attract particles in the atmosphere. Third, debris accumulates on the tape through the wear that occurs when the tape drive starts and stops. At very low velocities, the separation between the head and tape surface normally maintained by air flow is lost. The head contacts the tape coating, producing a fine, dry powder. Pushed by the head, this powder can accumulate and cause drop outs. Drop outs are also caused by inhomogeneities in the tape's magnetic coating.
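To get a sense of what a raw error rate of one in 10^5 bits implies at cartridge capacities, the sketch below estimates the raw errors encountered in one full pass over a 5-GByte Exabyte cartridge. The capacity figure is taken from Table 2.1; treating the errors as uniformly distributed is a simplifying assumption made for this example.

    # Sketch: expected raw (pre-ECC) errors in one full pass over a cartridge,
    # assuming errors are spread uniformly at the quoted raw rate.
    raw_error_rate  = 1 / 1e5        # one raw error per 10^5 bits read
    cartridge_bytes = 5 * 10**9      # 5-GByte Exabyte cartridge (Table 2.1)

    expected_errors = cartridge_bytes * 8 * raw_error_rate
    print(f"~{expected_errors:,.0f} raw errors per full tape pass")
    # -> roughly 400,000 raw errors that the drive's error correction
    #    must handle on every pass over the tape.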

Because of the high raw bit error rates on all magnetic tape devices, all drives incorporate large amounts of error-correction code. However, some errors will occur that the ECC cannot correct. The rate of such errors is called the corrected or uncorrectable bit error rate. Table 2.4 shows that uncorrectable bit error rates for the different tape technologies vary from one error in every 10^12 bits read to one in every 10^15 bits read. Taking the Exabyte 8mm drive as an example, suppose a small tape library contains eight Exabyte drives that each transfer data at 500 KBytes/sec. Assuming that the drives operate at just a 10% duty cycle, an uncorrectable bit error will occur on average every 36.2 days.
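The 36.2-day figure can be reproduced with a short calculation. In the sketch below, the drive count, transfer rate, duty cycle and error rate are the values given above; treating a KByte as 1000 bytes is an assumption.

    # Sketch: mean time between uncorrectable errors for the small library above.
    drives         = 8
    rate_bytes_s   = 500_000        # 500 KBytes/sec per drive (assuming 1 KB = 1000 bytes)
    duty_cycle     = 0.10           # drives busy 10% of the time
    bits_per_error = 1e13           # Exabyte 8mm corrected bit error rate (Table 2.4)

    bits_per_day = drives * rate_bytes_s * 8 * duty_cycle * 86_400
    days_between_errors = bits_per_error / bits_per_day
    print(f"{days_between_errors:.1f} days between uncorrectable errors")
    # -> 36.2 days, matching the figure quoted above.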

It is notable that one of the most expensive tape drives, the Ampex DST800, has one of the highest error rates, while the inexpensive DAT drive has the lowest rate. The DAT drive achieves this low error rate by including three layers of Reed-Solomon error correction. The extensive encoding required to perform this error correction consumes a great deal of capacity. For example, the first two ECC layers consume 30% of the bits available for data storage; depending on the amount of encoding performed for the third ECC level, another 4% to 18% of tape capacity may be consumed [108]. Thus, there is a tradeoff between data reliability and tape capacity available for data storage. Incorporating the third ECC level is also expensive, requiring more complex electronics and larger buffers. DAT manufacturers recommend the use of the third level of ECC only for sensitive data.

Magnetic tapes that are frequently read or written eventually wear out. In a traditional archival system, where data are written and probably never read again, this is not a serious concern. However, in applications like digital libraries, there is no limit on the number of times a tape may be read. Tapes last on average several hundred passes [13], [47], [60]. However, they wear out sooner if a particular segment of the tape is accessed repeatedly. A Hitachi study of DAT tape drives showed that the raw error rate (before error correction) after 2500 passes over a single segment of a tape was over one error in ten ECC blocks read [31]. Tapes written by linear recording drives do not suffer from tape wear-out as quickly as tapes written by helical scan drives, because the interface with the head is less abrasive, but wear is still a concern. In large tape libraries, controllers will be required to monitor the number of passes to a tape cartridge and replace it before wearout occurs.

In an interactive library application, wear due to stops and starts on the tape is likely to be severe, since accesses to the library will not be sequential. Severe wear is manifested by large portions of the magnetic binding material flaking away from the tape backing. Such problems make large sections of a tape unreadable.

Another set of reliability concerns involves the long-term storage of data on tape. Over time, the metal pigments in tape are subject to corrosion; this problem is eliminated to a large extent by the use of appropriate binders. The tape also undergoes mechanical changes, including tape shrinkage, creasing of the edges, peeling of the magnetic layer, and deterioration of surface smoothness [47]. Back coating transfer can also occur, in which the magnetic coating and the back coating from adjacent tape layers are pressed together; when shrinkage occurs during storage, the roughness of the back coating can transfer onto the magnetic layer and cause a deterioration in tape smoothness [47]. Many manufacturers recommend rewinding tapes every 6 months to avoid such problems. Finally, the proliferation of incompatible recording formats threatens the future accessibility of archival data [60].

Tape Head Wear

Tape heads undergo considerable wear in all tape systems. In helical scan systems, they last for a few thousand hours of actual contact between the head and medium. Linear tapes are thought to produce less wear because the interface between the tape and the head is less abrasive. Some tape wear is necessary in order to keep the heads in optimum condition [67]. Tape wear helps remove particles from the head that may have been transferred there from the tape surface or the atmosphere, or that came from the tape coating under conditions of friction or extremely high or low humidity.


    Repair Type                                         %
    Replace heads                                       44
    Tape mechanism (reel motors, tape tension, etc.)    21
    Card failure                                        17
    Other (firmware, power supply, etc.)                14
    No defect found                                      4

Table 2.5: 1991 repair statistics for Exabyte 8mm drives. (Source: Megatape)

All tape drive manufacturers recommend periodic use of a cleaning cartridge to remove debris from the tape head. Extensive wear occurs when new tapes are used, since new tapes tend to have a lot of surface debris that is removed by the heads during the first few passes. One way to extend the life of drive heads is to use burnished tapes, from which much of the surface debris has been removed.

Eventually, the head wear becomes extreme. Tape library controllers will need to schedule both cleaning and replacement of the heads to assure adequate reliability. This may require keeping statistics on how many hours particular drives have run.
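The bookkeeping such a controller needs can be quite simple. The sketch below is a hypothetical illustration, not a description of any real controller: it tracks drive run hours and cartridge passes and flags units that cross assumed cleaning and replacement thresholds. The threshold values are invented for the example; the text gives only rough lifetimes of a few thousand head-contact hours and a few hundred tape passes.

    # Hypothetical wear-tracking sketch for a tape library controller.
    # Thresholds are invented; the text gives only rough lifetime figures.
    CLEAN_EVERY_HOURS  = 30     # run a cleaning cartridge this often (assumed)
    HEAD_REPLACE_HOURS = 2000   # retire heads after this many hours (assumed)
    TAPE_RETIRE_PASSES = 300    # migrate data and retire the cartridge (assumed)

    drive_hours       = {"drive0": 1850.0, "drive1": 420.0}
    hours_since_clean = {"drive0": 31.0,   "drive1": 5.0}
    tape_passes       = {"tape-A": 295,    "tape-B": 12}

    for drive, hours in drive_hours.items():
        if hours_since_clean[drive] >= CLEAN_EVERY_HOURS:
            print(f"{drive}: schedule cleaning cartridge")
        if hours >= 0.9 * HEAD_REPLACE_HOURS:
            print(f"{drive}: head nearing end of life ({hours:.0f} h), plan replacement")

    for tape, passes in tape_passes.items():
        if passes >= 0.95 * TAPE_RETIRE_PASSES:
            print(f"{tape}: {passes} passes, migrate data and retire cartridge")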

Mechanical Reliability

Head failure is the main cause of tape drive failure; however, the drive may also have other mechanical or electrical failures. Table 2.5 shows repair statistics for Megatape, an OEM of Exabyte 8mm drives. 44% of the time, tape drive failures were due to failed heads. 21% of the time, some other component of the tape drive mechanism failed. 17% of the time a failure with the electronics was to blame, while 14% of the time other components such as power supplies caused the failures.

2.2.5 Trends in Magnetic Tape Drives

Magnetic tapes follow the same technology curves [57], [50], [73] as magnetic disks, since the magnetic material, whether deposited on a hard disk or a flexible tape, is much the same.

[Figure 2.3 graphs: (a) predictions for bandwidth improvements, plotting tape drive transfer rate (MB/sec) against year, 1990-2000; (b) predictions for capacity improvements, plotting cartridge capacity (Gigabytes) against year, 1990-2000. Each graph shows the 1990 projections and a point for the 1995 drive.]
Figure 2.3: Predictions made in 1990 for bandwidth and capacity improvements for Exabyte 8mm tape drives in this decade. Improvements required to reach these goals include increased track density, decreased track width and pitch, reduced tape thickness and increased rotor speed. Source: Harry Hinz, Exabyte Corporation. The crosses in each graph show the bandwidth and capacity, respectively, of the drive to be introduced in 1995, which exceeds the 1990 projections.

Currently, magnetic disk capacity is increasing at a rate of over 50% per year [79], and magnetic tapes should increase in capacity at a similar rate. In 1990, Hinz [37] predicted the growth shown in Figure 2.3 for tape capacity and data transfer rate for 8mm tape drives in this decade. The figure shows both capacity and throughput doubling approximately every two years, reaching 67 GBytes per tape and 6 MBytes/sec by the end of the decade. The generation of Exabyte tape drives being introduced in 1995 will exceed Hinz's predictions for that year; each cartridge will hold 20 gigabytes of storage and transfer data at 3 megabytes per second vs. Hinz's prediction of 14.4 gigabytes and 1 megabyte per second for 1994 drives. The data points for the new drive are marked by crosses in Figure 2.3.

Besides data transfer rate, other components of tape drive access time are improving. Several tape drive manufacturers are reducing rewind and search times by implementing periodic zones on the tapes where eject and load operations are allowed, rather than requiring that a tape always be ejected and loaded at the start of the tape [69]. Mechanical operations like load, eject, and robot grab and insert will be substantially faster in the next generation of tape drives.
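The "doubling approximately every two years" trend in Figure 2.3 can be written as a simple growth model. The sketch below is only a curve fit to Hinz's projection: the 1990 starting points (about 2.1 GBytes and 0.19 MBytes/sec) are assumptions chosen so the curve reaches the stated year-2000 endpoints of 67 GBytes and 6 MBytes/sec; they are not figures given in the text.

    # Rough curve fit to the 1990 projection: capacity and bandwidth
    # doubling about every two years.  The 1990 starting points are
    # assumed so the curve hits the stated year-2000 endpoints.
    def projected(start_1990, year, doubling_years=2.0):
        return start_1990 * 2 ** ((year - 1990) / doubling_years)

    for year in (1990, 1995, 2000):
        cap  = projected(2.1,  year)   # GBytes per cartridge
        rate = projected(0.19, year)   # MBytes/sec
        print(f"{year}: ~{cap:5.1f} GB, ~{rate:4.2f} MB/s")

    # The 1995 Exabyte drive (20 GB, 3 MB/s) sits well above the ~12 GB and
    # ~1.1 MB/s this curve projects for 1995 -- the "crosses" in Figure 2.3.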

[Figure 2.4 diagram: cross-section of an optical disk showing the substrate, aluminum reflector, transparent dielectric, thin metal layer, and a pit.]

Figure 2.4: Optical disk structure.

2.3 Optical Disk Technology

2.3.1 Optical Disks

In optical recording, a non-contact optical head uses a laser beam to store information on the disk surface by creating "pits" in the surface material [7], [120], [98], [83]. Optical disks can only be written once and are often called WORM (write once, read many) devices. Figure 2.4 shows a typical optical disk structure: a substrate (often plastic) is covered with an aluminum reflective layer, a transparent dielectric, and finally a thin layer of metal. During writing, in response to an electrical input signal, a highly-focused laser beam can melt a small region of the metal layer, opening a hole or "pit." Later, during reading, a lower-intensity, unmodulated laser beam is reflected off the surface of the disk. A photodetector interprets information stored on the disk by detecting differences in reflectivity between pits and the surface of the thin metal layer, called the "land." Data are encoded and stored as alternating regions of pits and land. The encoding may be as simple as a pit representing a zero and the land a one. In CD-ROM disks, a more complicated encoding scheme minimizes the number of consecutive zero or one bits to minimize intersymbol interference on the disk surface [83].

On optical disks, data may be stored in a single spiral track, as in a CD-ROM, or as a series of concentric circular tracks. Figure 2.5 shows a simplified diagram of an optical disk.
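The simplest scheme described above, a pit for a zero and land for a one, can be sketched in a few lines. This is only a toy model of that one-bit-per-region encoding; it is not the CD-ROM channel code, which, as noted, is considerably more elaborate.

    # Toy model of the simple encoding described above: each recorded
    # region is a pit (0) or land (1).  Real optical channel codes differ.
    def encode(bits):
        """Map data bits to a surface pattern: 0 -> 'pit', 1 -> 'land'."""
        return ["pit" if b == 0 else "land" for b in bits]

    def decode(regions):
        """Photodetector view: low reflectivity (pit) -> 0, high (land) -> 1."""
        return [0 if r == "pit" else 1 for r in regions]

    data = [1, 0, 1, 1, 0, 0, 1, 0]
    surface = encode(data)
    assert decode(surface) == data
    print(surface)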

[Figure 2.5 diagram: laser, modulator, input signal, optics, mirror, focus lens, data detector, and the optical disk.]

Figure 2.5: Typical components of an optical disk.

The optical disk uses several servos: one for tracking, one for focusing, and one to control the rotation speed so that the data rate is constant. As the read/write he
