The workshop began at noon with a luncheon, followed by a brief Opening/Welcome session. This moved quickly to a 90-minute session, Storage for Digital Preservation: Panel of Case Studies from Users, which began at 1:15pm. There were eight seven-minute presentations:
- Thomas Youkel, Library of Congress, cited some numbers about the amount of content ingested by the LoC, and the amount they project they will ingest during the next two years. One interesting figure is that the LoC ingested 24.7 TB during the week of June 2, 2009. He also described data integrity as a key challenge, and workflow, content management, and migration as secondary challenges.
- David Minor, San Diego Supercomputer Center, gave a brief overview of the Chronopolis project: three partners (SDSC, NCAR, UMIACS), 50TB of storage at each node. SRB is the content transport system; BagIt is the content container; and, ACE is the content integrity system. ICPSR, the California Digital Library, the MetaArchive, and one other organization I missed are the content providers. Chronopolis Next Generation is seeking additional storage partners (nodes), migration tools, and connections to other storage networks, such as the MetaArchive's private LOCKSS network.
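The integrity piece of this stack is conceptually simple: BagIt packages content as a plain directory with checksum manifests, and a system like ACE periodically re-verifies those checksums. A minimal sketch of manifest verification in Python (the bag contents and filenames here are illustrative, not from Chronopolis):

```python
import hashlib
import os
import tempfile

def verify_bag(bag_dir, manifest_name="manifest-sha256.txt"):
    """Check every entry in a BagIt payload manifest against the files on disk."""
    ok = True
    with open(os.path.join(bag_dir, manifest_name)) as manifest:
        for line in manifest:
            expected, path = line.strip().split(None, 1)
            digest = hashlib.sha256()
            with open(os.path.join(bag_dir, path), "rb") as f:
                for chunk in iter(lambda: f.read(8192), b""):
                    digest.update(chunk)
            if digest.hexdigest() != expected:
                ok = False
    return ok

# Build a tiny example bag: payload under data/, plus a checksum manifest.
bag = tempfile.mkdtemp()
os.makedirs(os.path.join(bag, "data"))
payload = os.path.join(bag, "data", "study.txt")
with open(payload, "wb") as f:
    f.write(b"example content\n")
checksum = hashlib.sha256(b"example content\n").hexdigest()
with open(os.path.join(bag, "manifest-sha256.txt"), "w") as f:
    f.write(f"{checksum}  data/study.txt\n")

print(verify_bag(bag))  # True
```

Running a check like this on a schedule, and comparing results across replicas at the three nodes, is essentially what an audit layer buys you over bare storage.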
- Bill Robbins, Emory University, described how the MetaArchive was using an Amazon EC2 system as its central "properties server" to solve the (political? procedural?) issues of Emory serving as the "master node" for the MetaArchive. Bill had a good quote: "We're not cheap. We're 6x cheap, and that's not so cheap." Bill expressed general satisfaction with EC2, but wished the documentation were better.
- Andy Maltz, Academy of Motion Picture Arts and Sciences, referenced the Digital Dilemma in his talk about the requirements his organization has for digital preservation solutions: (1) last 100 years; (2) survive benign neglect; (3) at least as good as photochemical; and, (4) cost less than $500/TB/year. Andy also referenced the phrase "Migration is broken" from a 2007 SNIA report. He cited some figures: a movie consumes 2-10PB of storage, and Hollywood produces about one movie/day. He finished with a brief description of StEM, an NDIIPP-sponsored project.
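Andy's figures are worth combining: at his own cost target, even one movie is an expensive thing to keep. A back-of-the-envelope sketch (assuming decimal terabytes; the numbers are his, the arithmetic is mine):

```python
# Back-of-the-envelope costs from Andy's figures (decimal units assumed).
cost_per_tb_year = 500                        # target: <= $500/TB/year
movie_tb_low, movie_tb_high = 2_000, 10_000   # 2-10 PB per movie

low = movie_tb_low * cost_per_tb_year
high = movie_tb_high * cost_per_tb_year
print(f"One movie: ${low:,} - ${high:,} per year")  # $1,000,000 - $5,000,000

# At roughly one movie per day, a year's output adds (low end):
yearly_output_tb = 365 * movie_tb_low
print(f"Annual low-end output: {yearly_output_tb:,} TB")  # 730,000 TB
```

So even at the $500/TB/year ceiling, preserving a single title runs $1M-$5M per year, which puts the "survive benign neglect" requirement in perspective.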
- Laura Graham, Library of Congress, described the LoC's efforts to preserve websites. The Internet Archive does the crawling, and the Wayback Machine is the delivery mechanism. A system at the LoC acts as archival storage. Her wish list includes fewer manual steps in the system, and less need to copy files around quite so much.
- John Wilkin and Cory Snavely, HathiTrust (and the University of Michigan Library), gave a brief overview of HathiTrust. They're leveraging the OAIS reference model, plugging in modular solutions wherever possible. 185TB of storage today. Focus is on the "published record." Cory asserted that data stewards will need to be able to rely more and more on the storage solution (trust) in order to succeed in their missions.
- Jane Mandelbaum, Library of Congress, was the proxy for a very brief overview of the DuraCloud effort from DuraSpace. DuraCloud is essentially a middle layer between a variety of cloud storage providers and data stewards.
- Jimmy Lin, representing Cloudera, described Cloudera as the RedHat for Hadoop. Jimmy went on to talk a bit about Hadoop, HDFS, and MapReduce, and how Cloudera might be a very attractive platform for connecting "compute" to storage.
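The MapReduce model Jimmy described is the key to that "compute to storage" pitch: you ship a map function and a reduce function to where the data lives, and the framework handles grouping in between. A toy word-count sketch of the three phases in plain Python (this illustrates the paradigm only; it is not Hadoop's actual API):

```python
from collections import defaultdict

def map_phase(record):
    # Emit (word, 1) pairs, as a Hadoop mapper would.
    for word in record.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # Group values by key, as the framework's shuffle/sort step does.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    # Combine all values for one key, as a Hadoop reducer would.
    return (key, sum(values))

records = ["digital preservation", "digital storage"]
pairs = [p for r in records for p in map_phase(r)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts)  # {'digital': 2, 'preservation': 1, 'storage': 1}
```

In Hadoop proper, the records would be blocks in HDFS and the map tasks would run on the nodes already holding those blocks, which is exactly the compute-near-storage attraction for large archives.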
My main comment during the session was that software and hardware and even power are not the big costs of digital preservation (at least at ICPSR); the big costs are people, and the processes that require people.
After a short break, we began the next session, Storage Products & Future Trends: Vendor Perspectives, at 3:15. Again, the format was seven-minute presentations:
- Art Pasquinelli, Sun Microsystems, spoke about the Sun PASIG, and described how researchers were looking to IT and libraries for their digital preservation needs, and how that was simply not working.
- Mike Mott, IBM, asked how we define a "document" in a digital world, and thought that we would see the end of Moore's Law (in storage) by 2013 unless there was a new technological breakthrough. Mike also described a new paradigm in architecture: a river v. a building.
- Dave Anderson, Seagate, spoke on how he expected some long-running trends to end (approximately 40%/year growth in capacity and 20%/year growth in transfer rate); how solid state disk uptake has been slower than expected; and, how the change in disk form factor from the desktop (3.5") to the laptop (2.5") will shift the industry.
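Dave's two growth rates matter for preservation because they diverge: capacity compounds faster than the speed at which you can read it back. A quick sketch of the implication (his rates, my arithmetic):

```python
import math

# Dave's figures: capacity grows ~40%/year, transfer rate ~20%/year.
capacity_growth, rate_growth = 1.40, 1.20

# Capacity doubling time at 40%/year compound growth.
capacity_doubling = math.log(2) / math.log(capacity_growth)
print(f"Capacity doubles every {capacity_doubling:.1f} years")  # ~2.1 years

# The capacity/transfer-rate gap compounds at ~1.4/1.2 per year, so the
# time to read (scrub, migrate) a full drive grows ~17% every year.
gap = capacity_growth / rate_growth
print(f"Full-drive read time grows {gap - 1:.0%} per year")  # 17%
```

That widening gap is one reason fixity audits and migrations get harder even as raw storage gets cheaper.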
- Tim Harder, EMC, described a new "compute + storage" solution called Atmos, and how they are betting big on x86 technology + virtualization, and off-the-shelf gear bundled with software. EMC has also founded a cloud division.
- Paul Rutherford, Isilon, expects SATA, 3.5" form-factor disks, and block-level access to disappear, replaced by SAS, 2.5" form-factor disks, and file-level (or object-level) access. He said "I hate the cloud" and didn't think it would be used as the sole source for important data.
- Kevin Ryan, Cisco Systems, gave a high-level overview of Cisco's "unified fabric" vision which sounds like it consists of a single, lossless, open pipe for all sorts of bits: network, data, NAS, SAN, etc.
- Raymond Clarke, Sun Microsystems, gave a similar type of talk, but about the Sun Cloud which struck me as an umbrella term for an integrated solution using lots of Sun's open technologies: Solaris, Java, ZFS, MySQL, etc.