Managing very-large distributed datasets
de Oliveira Branco, Miguel, Zaluska, Ed and De Roure, David (2008) Managing very-large distributed datasets. Lecture Notes in Computer Science, 5331, 775-792.
In this paper, we introduce a system for handling very large datasets, which need to be stored across multiple computing sites. Data distribution introduces complex management issues, particularly as computing sites may make use of different storage systems with different internal organizations. The motivation for our work is the ATLAS Experiment for the Large Hadron Collider (LHC) at CERN, where the authors are involved in developing the data management middleware. This middleware, called DQ2, is charged with shipping petabytes of data every month to research centres and universities worldwide and has achieved aggregate throughputs in excess of 1.5 Gbytes/sec over the wide-area network. We describe DQ2's design and implementation, which builds upon previous work on distributed file systems, peer-to-peer systems and Data Grids. We discuss its fault tolerance and scalability properties and briefly describe results from its daily usage for the ATLAS Experiment.
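As a quick sanity check of the figures quoted in the abstract, the sustained 1.5 Gbytes/sec aggregate throughput is consistent with the claim of shipping petabytes per month. A minimal back-of-the-envelope sketch (assuming decimal units, i.e. 10^9 bytes per GB and 10^15 bytes per PB, and a 30-day month):

```python
# Back-of-the-envelope check: does 1.5 GB/s sustained over a month
# amount to "petabytes of data every month"?

GB_PER_SEC = 1.5
SECONDS_PER_MONTH = 30 * 24 * 3600   # 2,592,000 s in a 30-day month

monthly_gb = GB_PER_SEC * SECONDS_PER_MONTH   # 3,888,000 GB
monthly_pb = monthly_gb / 1_000_000           # decimal PB

print(f"{monthly_pb:.2f} PB/month at a sustained 1.5 GB/s")
```

At a sustained 1.5 GB/s this works out to roughly 3.9 PB per 30-day month, so the two figures in the abstract are mutually consistent (the real system would not sustain peak aggregate throughput continuously, so the actual monthly volume would be lower).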
Divisions: Faculty of Physical and Applied Science > Electronics and Computer Science > Web & Internet Science
Date Deposited: 03 May 2009 19:48
Last Modified: 01 Mar 2012 16:49
Contributors: de Oliveira Branco, Miguel (Author); Zaluska, Ed (Author); De Roure, David (Author)