Masterworks
Masterwork sessions address real-world applications of HPC technologies in business, finance, the arts, medicine, energy and more. Masterworks show how HPC is driving innovation in a wide range of areas and how it plays a role in addressing many key issues of the 21st century, from the development of new materials to research into new medical procedures to finding new energy solutions and greener computing methods.
Tuesday, November 18
HPC in the Arts
Room: 18A/18B/18C/18D
10:30 a.m. - noon
Humanities Scholarship in the Petabyte Age
Presenter: Brett Bobley (National Endowment for the Humanities)
Abstract: The core dataset for humanities scholars consists of objects like books, documents, journals, paintings, newspapers, film and audio recordings, and sculpture: these are the things humanists study. In the past, these objects were read and searched on a very small scale; no one scholar could read more than a small subset of the works in his field. But in the Petabyte Age, the scale of available materials has exploded. In just the past few years, massive amounts of cultural heritage materials have been digitized, and scholars now have access to millions of digitized books, journals, recordings, etc. In the sciences, the data-driven approach to knowledge afforded by supercomputing has made possible incredible breakthroughs. Now humanities scholars are exploring how this approach can benefit their disciplines. What new knowledge might humanities scholars acquire? What new questions might all this data compel them to ask? Can supercomputers allow humanists to mine these "mountains of digital data" in ways that advance their understanding and scholarship?
Concurrencies (2008) for Live Multi-Channel Electronics and Viola
Presenters: David Wessel and Nils Bultmann (University of California, Berkeley)
Abstract: This work is a study in musical design patterns inspired by patterns for parallel programming. The synonymous terms voices, streams, channels and tracks suggest the highly exposed parallelism in music. If multiple voices are to fuse into a single monophonic line, they must be tightly correlated in terms of their pitch and amplitude. On the other hand, polyphony is achieved by asynchrony in the note onsets and a lack of common melodic motion. Other coordination schemes give rise to homophonic and heterophonic textures. The performance of Concurrencies is live and the audio is generated in real time by Max/MSP. Control is provided by the instrument-like multiple touch pad surface shown in the photo below. The device is a product of research on expressive musical interfaces at the Center for New Music and Audio Technologies (CNMAT) at UC Berkeley and was engineered by Rimas Avizienis. Each of the 32 touch pads senses x and y finger position as well as finger pressure.
Biographies: David Wessel received a B.S. in mathematical statistics from the University of Illinois and a Ph.D. in mathematical and theoretical psychology from Stanford in 1972. From his high school years onwards his musical activities were central to his life, and after his Ph.D. he committed himself to blending his science and technology skills with his musical interests. As an assistant professor in the early 1970s at Michigan State University, his experiments with perceptually based dimensionality reduction techniques provided expressively powerful control of high-dimensional sinusoidal-track sound synthesis algorithms. In 1976, he was invited to join the Institut de Recherche et Coordination Acoustique/Musique (IRCAM) in Paris, and in 1979 he was made head of IRCAM's Pedagogy Unit. In the mid-1980s he started a new unit in IRCAM dedicated to developing real-time music software for personal computers. At the time Wessel taught the first computer music class at the Paris Conservatory. For his work at IRCAM he was recognized as Chevalier dans l'Ordre des Arts et des Lettres by the French Minister of Culture. He joined UC Berkeley in 1988 as professor of music with the charge of building the interdisciplinary Center for New Music and Audio Technologies (CNMAT). He organized CNMAT as a laboratory wherein both science and technology people interact on a daily basis with musicians.
Nils Bultmann is a violist, improviser and composer committed to developing his own voice within the context of a wide variety of musical styles and art forms. His background ranges from undergraduate studies in German literature and a position as violist with the New World Symphony to recording and touring with jazz saxophonist Roscoe Mitchell. He has been active in collaborative dance and film projects in the United States and Europe, and has been the recipient of numerous grants, including residencies at the Djerassi, Music Omi and Ucross foundations. He is currently pursuing doctoral studies in music composition at UC Berkeley.
HPC in Transportation
Room: 18A/18B/18C/18D
1:30 p.m. - 3 p.m.
Enabling Analysis-based Tire Development with High Performance Computing
Presenter: Loren Miller (Goodyear Tire & Rubber Company)
Abstract: Over the last several years, Goodyear has introduced an unprecedented string of award-winning new products as a direct consequence of the company's switch to analysis-based tire design. Although tires appear simple on the surface, analysis-based tire design is possible only through extensive use of high performance computing. High performance computing is required both at the conceptual stage when large numbers of design alternatives are evaluated and at the final development stages when we capture nuances of the tire's response. This talk describes Goodyear's application of teraflop computing to tire design.
Biography: Since 2001, Loren Miller has been director of IT responsible for high performance computing at The Goodyear Tire & Rubber Company. He initiated Goodyear's efforts to substitute analysis-based design in place of a traditional design/build/test development process. He has served on advisory panels for the Department of Energy, Sandia and the Ohio Supercomputer Center. He is on the steering committee of the Ralph Regula School of Computational Science (OH) and spoke recently at the U.S. Navy Ship Design Process Workshop. He holds bachelor's and master's degrees in physics.
Contributions of CFD to the 787 (and Future Needs)
Presenter: Douglas N. Ball (Boeing)
Abstract: The commercial airplane market continues to become more competitive. Fuel mileage is a critical element in the competitive landscape. Numerical simulations and the use of computational fluid dynamics (CFD) contribute greatly to creating vehicles that offer superior performance. The 787 benefited greatly from the use of CFD. Several applications of CFD to the 787 will be discussed. Today the push is for more environmentally friendly aircraft, meaning fewer emissions and less noise. Governments are discussing using carbon as a trade limiter. To minimize this impact, numerical tools will be pushed well beyond their capabilities of today.
Biography: Doug Ball is the chief engineer of the Enabling Technology & Research Unit of the Boeing Commercial Airplane Group, responsible for Aerodynamics, Noise and Propulsion research. Ball received his B.S. degree in aeronautical/astronautical engineering from the Ohio State University in 1974 and his M.S. degree in 1975. Upon graduation he joined the General Dynamics Corporation as an aerodynamicist on the new F-16 fighter program. He joined The Boeing Company in 1977, where he has held many assignments within the aerodynamics configuration area, including CFD methods development, high lift design, nacelle design and integration, and wing design supporting the 757, 767 and 737 programs. He was promoted into management in October 1993. Since that time he has served as the manager for a variety of groups within the technology and product development areas. He was promoted to his current assignment in August 2000, focusing on the technology needs of the 787 and future airplanes. He serves as a consultant to NASA, the National Research Council, the U.S. Air Force, and several universities.
HPC in Finance
Room: 18A/18B/18C/18D
3:30 p.m. - 5 p.m.
High Performance Computing in the Financials: Where Rocket Science Meets The Street
Presenters: John Storm and Prashant Reddy (Morgan Stanley)
Abstract: The use of high performance computing within the financial industry has had a long but rather obscure history. Starting with the early adoption by quantitative analysts within fixed income to its now pervasive use across the extremely complex set of products and services within Morgan Stanley, high performance computing has become a fundamental requirement in a very competitive and risk conscious environment. This talk will begin with an introduction on how institutional banks utilize HPC as part of their everyday business and then focus on some of the unique requirements and challenges this presents to the computational architectures and underlying technology infrastructure.
Biographies: John Storm is an Executive Director within Morgan Stanley's Institutional Securities division. His current focus is on the strategic development of future infrastructure and technologies to support very large-scale computational requirements associated with HPC applications. At Morgan Stanley, Storm has worked in various technology areas including the development of a firmwide HPC program, data center strategies, telecommunications and data networks. Before Morgan Stanley, he was associated with the University of Utah, where he served as an assistant director of the Utah Supercomputing Institute and later as an associate director for the Department of Telecommunications and General Computing. Storm sat on the Cisco technical advisory board for six years and is currently a member of the U.S. Council on Competitiveness HPC Advisory Committee. Storm holds a B.S. from the University of Utah with a specialty in large-scale satellite image processing.
Prashant Reddy is an executive director in the Institutional Securities division of Morgan Stanley. He is currently the global head of a team that focuses on software infrastructure for trading systems and analytics development. The team provides toolkits and middleware that enable latency-sensitive electronic trading applications and large-scale HPC applications for pricing and risk management. In previous roles, Reddy managed the Equity algorithmic trading engines group and the Equity Derivatives lifecycle management group. He has been chairman of Morgan Stanley's technology architecture committee since 2004 and was a founding member of the Wall Street Guaranteed Distributed Computing Council. Reddy graduated with a B.S. in electrical engineering and computer sciences from UC Berkeley and with an MBA in finance and entrepreneurship from Wharton.
On the Determination of General Scientific Models with Application to Asset Pricing
Presenter: Robert McCulloch (University of Texas at Austin)
Abstract: We consider a consumption-based asset pricing model that uses habit persistence to overcome the known statistical inadequacies of the classical consumption-based asset pricing model. A distinguishing feature of this model, and of many asset pricing models, is that no likelihood is available: the only thing we can do with the model is simulate it. We develop a computationally intensive, generally applicable Bayesian strategy for estimation and inference for models of this type. In addition, we develop a general approach to assessing the adequacy of such models.
Biography: Robert McCulloch received his Ph.D. in statistics from the University of Minnesota in 1985. His research is in Bayesian statistical methods, with applied interests in finance and marketing. He is currently professor of statistics in the Information, Risk, and Operations Management department of the McCombs School of Business, University of Texas.
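As background for readers less familiar with likelihood-free estimation, the sketch below illustrates the general idea of simulation-based Bayesian inference using a simple approximate Bayesian computation (ABC) rejection sampler. It is a generic illustration, not the speaker's method; the simulator, prior and summary statistics are hypothetical placeholders.

```python
# Generic ABC rejection sampler: when a model can only be simulated (no likelihood),
# keep prior draws whose simulated data "look like" the observed data.
# simulate_model, the prior and the summaries below are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

def simulate_model(theta, n_obs):
    # Toy simulator standing in for an asset pricing model: mean/volatility "returns".
    return theta[0] + theta[1] * rng.standard_normal(n_obs)

def summary(x):
    # Low-dimensional summaries used to compare simulated and observed data.
    return np.array([x.mean(), x.std()])

def abc_rejection(observed, n_draws=20_000, tolerance=0.1):
    obs_summ = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = np.array([rng.normal(0.0, 1.0), rng.uniform(0.1, 2.0)])  # prior draw
        sim = simulate_model(theta, len(observed))
        if np.linalg.norm(summary(sim) - obs_summ) < tolerance:
            accepted.append(theta)          # draw is consistent with the data
    return np.array(accepted)               # approximate posterior sample

# Pretend the observed data were generated with theta = (0.05, 0.8).
observed = 0.05 + 0.8 * rng.standard_normal(500)
posterior = abc_rejection(observed)
print(posterior.mean(axis=0) if len(posterior) else "no draws accepted; loosen tolerance")
```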
Wednesday, November 19
HPC in Biomedical Informatics I
Room: 18A/18B/18C/18D
10:30 a.m. - noon
Patient Specific Cardiovascular Modeling and the Predictive Paradigm in Medicine
Presenter: Thomas J. R. Hughes (University of Texas at Austin)
Abstract: Cardiovascular disease is the leading cause of death in the industrialized world, and by 2010 it will be the leading cause of death in the developing world as well. This presentation describes the advent of a new era in the treatment of cardiovascular disease in which engineering methodologies of design, analysis and optimization will be utilized to create and analyze patient-specific models obtained from medical imaging data, enabling predictions of outcomes to be made based on virtual intervention scenarios. A confluence of recent advances in simulation-based engineering technologies has brought about a unique opportunity to redirect cardiovascular medicine from its traditional diagnostic paradigm, based on statistics and the experience of the physician, to a predictive paradigm, in which geometrically and physiologically accurate computer models of individual patients will be used to determine optimal treatments.
Biography: Thomas J.R. Hughes holds B.E. and M.E. degrees in mechanical engineering from Pratt Institute and an M.S. in mathematics and Ph.D. in engineering science from the University of California at Berkeley. He taught at Berkeley, Caltech and Stanford before joining the University of Texas at Austin, where he is a professor of aerospace engineering and engineering mechanics and holds the Computational and Applied Mathematics Chair III. He is a Fellow of the American Academy of Mechanics, ASME, AIAA, ASCE and AAAS; a Founder, Fellow and past President of USACM and IACM; past Chairman of the Applied Mechanics Division of ASME; and co-editor of the international journal Computer Methods in Applied Mechanics and Engineering. Hughes is one of the most widely cited authors in computational mechanics. He has received the Huber Prize from ASCE; the Melville, Worcester Reed Warner and Timoshenko Medals from ASME; the Von Neumann Medal from USACM; the Gauss-Newton Medal from IACM; the Computational Mechanics Award of JSME; and the Grand Prize from JSCES. He is a member of the U.S. National Academy of Engineering, a Fellow of the American Academy of Arts and Sciences, and a Foreign Member of the Istituto Lombardo Accademia di Scienze e Lettere.
Computational Functional Anatomy on the TeraGrid
Presenter: Michael Miller (Johns Hopkins University)
Abstract: We review our efforts in the bioinformatics of Computational Functional Anatomy, in which we compute functional and physiological response variables in anatomical coordinates. Collaborative efforts in brain and heart imaging within NIH-funded initiatives, respectively the Biomedical Informatics Research Network (BIRN) Morphometry project and the CardioVascular Research Grid (CVRG), are discussed.
Biography: Michael I. Miller is the Herschel and Ruth Seder Professor of Biomedical Engineering, Professor of Electrical and Computer Engineering and Director of the Center for Imaging Science at Johns Hopkins University, Baltimore, MD. His research focuses on image understanding and computer vision, medical imaging and computational anatomy. Previously, Miller was the Newton R. and Sarah Louisa Glasgow Professor in Biomedical Engineering at Washington University. From 1995 to 2000 he was the director of the Army Center for Imaging Science, an academic center of excellence consisting of seven universities. He has been a visiting professor for several years at the Ecole Normale in France and at Brown University. Miller is a Fellow of the American Institute for Medical and Biological Engineering, a Presidential Young Investigator Award winner and recipient of the International Order of Merit, International Man of the Year, Warrant of Proclamation and Medal of Recognition from the International Biographical Centre, Cambridge, England. He received a B.S.E.E. from the State University of New York at Stony Brook, and an M.S. in electrical engineering and a Ph.D. in biomedical engineering from Johns Hopkins University.
HPC in Biomedical Informatics II
Room: 18A/18B/18C/18D
3:30 p.m. - 5 p.m.
High Performance Computing in the "Personalization" of Cancer Therapy: Genomics, Proteomics, and Bioinformatics
Presenter: John N. Weinstein (M. D. Anderson Cancer Center)
Abstract: A major goal of current cancer research is to tailor treatment to the molecular profile of an individual patient's tumor. In the last dozen years, microarrays and other omic technologies have made it possible to study tens of thousands of genes simultaneously. Consequently, for the first time in the history of biology, it is much easier, faster and cheaper to generate information than it is to analyze or interpret it. But that's only the beginning. High-throughput deep sequencing, which is destined to replace many of the hybridization-based technologies such as microarrays, is generating terabyte and petabyte digital databases that will transform our understanding of biological systems, but will also further tax our hardware, software and human resources. Tsunamis and fire hoses have been invoked to represent the situation visually. I'll describe our multifaceted molecular profiling of cancer cells and a set of tools and databases, the Miner Suite, that we've been developing for integrative ('integromic') analyses of the data. For more information see http://discover.nci.nih.gov.
Biography: John Weinstein earned his B.A. in biology from Harvard College and his M.D. and Ph.D. in biophysics from Harvard University. Until February 2008, he was chief of the Genomics & Bioinformatics Group at the National Cancer Institute. He is currently professor and chair of the Department of Bioinformatics and Computational Biology at the MD Anderson Cancer Center. He has authored more than 250 publications, including 11 as first author in Science; four of his publications in the last dozen years have been cited more than 400 times each, and he has an h-index of 55. Weinstein has been nominated for the National Medal of Technology as a pioneer of the Postgenomic Era in biology. His research combines experiments, bioinformatics, systems biology and computer science. He has developed widely used, Web-based bioinformatics software, databases and tools, including the Miner Suite, and has developed infrastructure and/or tools for three large NCI projects: The Cancer Genome Atlas, the Cancer Genetic Markers of Susceptibility program, and the Cancer Biomedical Informatics Grid (caBIG).
Computational Opportunities in Genomic Medicine
Presenter: Jill Mesirov (Broad Institute)
Abstract: The sequencing of the human genome and the development of new methods for acquiring biological data have changed the face of biomedical research. With this flood of information, we stand on the verge of an explosion of scientific discovery. The use of computational approaches is critical to integrate the many types of data now available to us and to use them to address questions in basic biology and medicine. There is also a critical need for an integrated computational environment that can provide easy access to a set of universal analytic tools; support the development and dissemination of novel algorithmic approaches; and enable reproducibility of in silico research. We will describe some of the challenging computational problems in biomedicine, the techniques we use to address them, and a software infrastructure to support this highly interdisciplinary field of research.
Biography: Dr. Jill Mesirov is director of the Computational Biology and Bioinformatics Program and Chief Informatics Officer at the Broad Institute of MIT and Harvard. She is also a member of the MIT Koch Institute for Integrative Cancer Research and adjunct professor of bioinformatics at Boston University. Her current research interest is computational biology with a focus on algorithms and analytic methodologies for pattern recognition and discovery with applications to cancer genomics, genome analysis and interpretation and comparative genomics. In 1997, Mesirov came to the Whitehead Institute/MIT Center for Genome Research, now part of the Broad Institute, from IBM. Before joining IBM in 1995, she was director of research at Thinking Machines Corporation for 10 years. She has held positions in the UC Berkeley Mathematics Department, the Institute for Defense Analyses Center for Communications Research and the American Mathematical Society. Mesirov is a fellow of the American Association for the Advancement of Science, director of the International Society for Computational Biology and former president of the Association for Women in Mathematics.
Thursday, November 20
HPC in Alternative Energy Technologies
Room: 18A/18B/18C/18D
10:30 a.m. - noon
Simulation at the Petascale and Beyond for Fusion Energy Science
Presenter: William M. Tang (Princeton University and Princeton Plasma Physics Laboratory)
Abstract: Major progress in magnetic fusion research has led to ITER, a $10B-class burning plasma experiment supported by seven nations representing over half of the world's population. It is designed to produce 500 million watts of heat from fusion reactions for over 400 seconds with gain exceeding 10, thereby demonstrating the scientific and technical feasibility of magnetic fusion energy. Strong R&D programs are needed to harvest the scientific information from ITER to help design a future demonstration power plant with a gain of 25. Advanced computations at the petascale and beyond, in tandem with experiment and theory, are essential for acquiring the scientific understanding needed to develop whole-device integrated predictive models with high physics fidelity. Since ITER and leadership-class computing are prominent missions of the DOE today, producing such a world-leading predictive capability for fusion represents a key exascale-relevant strategic project for the future.
Biography: William M. Tang is the Chief Scientist at the Princeton Plasma Physics Laboratory and Associate Director for Princeton University's Institute for Computational Science and Engineering. He is a professor in the Department of Astrophysical Sciences at Princeton University and a Fellow of the American Physical Society. The Chinese Institute of Engineers-USA, the oldest and most widely recognized Chinese-American professional society in North America, presented him its Distinguished Achievement Award in 2005 for his outstanding leadership in fusion research and contributions to the fundamentals of plasma science. Internationally recognized for developing the requisite kinetic formalism as well as the associated computational applications dealing with electromagnetic plasma behavior in complex geometries, he has over 200 publications, including more than 130 peer-reviewed papers with a Web of Science h-index of 38 and over 5,100 citations. He has guided the development of widely recognized modern codes for simulating complex transport dynamics in plasmas. These general geometry particle-in-cell codes have demonstrated excellent scalability on high-performance computing platforms worldwide, including leadership-class facilities at the terascale and beyond.
Understanding Complex Biological Systems using Computation: Enzymes that Deconstruct Biomass
Presenter: Michael E. Himmel (National Renewable Energy Laboratory)
Abstract: Lignocellulose could potentially serve as the dominant source of renewable liquid biofuels if it could be efficiently and economically depolymerized into its component sugars, suitable for microbial fermentations or chemical conversions. However, even the most efficient cellulase enzymes available today are not effective enough to depolymerize cellulose at a rate and cost that can meet the needs of commercial-scale biofuels production. Cellobiohydrolase I is in many ways the most dominant cellulase in the biosphere. Through a series of mechanisms as yet unknown, a single cellodextrin chain is thought to be removed from the crystalline cellulose microfibers and fed into the active site tunnel of the catalytic domain, where hydrolysis occurs. Given that there are few experimental methods available to probe this complex behavior at the molecular level, we are using molecular dynamics simulations to study this problem at the level of its emergent properties (a holosystem containing ~1,000,000 atoms) as well as defined system domains (subsets).
Biography: Michael Himmel has 29 years of progressive experience in conducting, supervising and planning research in protein biochemistry, recombinant technology, enzyme engineering, new microorganism discovery and the physicochemistry of macromolecules. He has also supervised research that targets the application of site-directed mutagenesis and rational protein design to the stabilization and improvement of important industrial enzymes, especially glycosyl hydrolases. Himmel has served as PI for the DOE EERE Office of the Biomass Program since 1992, wherein his responsibilities have included managing research designed to improve cellulase performance, reduce biomass pretreatment costs and improve yields of fermentable sugars. He has also developed new facilities at NREL for biomass conversion research. Himmel also serves as the principal group manager of the Biomolecular Sciences Group, where he has supervisory responsibility for 36 staff scientists. He has contributed over 450 journal papers, meeting abstracts, books, patents, and copyrights to the literature. In 2008, Himmel edited a new book for Blackwell Publishers entitled Biomass Recalcitrance.
Green HPC I
Room: 18A/18B/18C/18D
1:30 p.m. - 3 p.m.
Zero-Emission Datacenters: Concept and First Steps
Presenters: Bruno Michel, T. Brunschwiler, B. Smith and E. Ruetsche (IBM Zurich Research Laboratory)
Abstract: High-performance liquid cooling allows datacenter operation with coolant temperatures above the free cooling limit in all climates, eliminating the need for chillers and allowing the thermal energy to be re-used in cold climates. We have demonstrated removal of 85% of the heat load from high performance compute nodes at a temperature of 60 degrees C and compared their energy and emission balance with a classical air-cooled datacenter, a datacenter with free cooling in a cold climate zone, and a datacenter with chiller-mediated energy re-use. We show that our concept reduces energy consumption by almost a factor of two compared to a current datacenter and reduces the carbon footprint by an even larger factor. Our energy re-use concept can be demonstrated most effectively, in terms of cost and energy savings, in a homogeneous high performance computing environment.
Biography: Bruno Michel received a Ph.D. degree in biochemistry/biophysics from the University of Zurich, Switzerland, in 1988 and subsequently joined the IBM Zurich Research Laboratory to work on scanning probe microscopy and its applications to molecules and thin organic films. He then introduced microcontact printing and led an international industry project for the development of accurate large-area soft lithography for the fabrication of LCD displays. Michel started the Advanced Thermal Packaging group in Zurich in 2003 in response to the needs of the industry for improved thermal interfaces and better miniaturized convective cooling. Main current research topics of the Zurich group are microtechnology/microfluidics for nature inspired miniaturized treelike hierarchical supply networks, hybrid liquid/air coolers, 3D packaging, and thermophysics to understand heat transfer in nanomaterials and structures. With the high-performance coolers he contributes to improved datacenter efficiency and energy re-use in future green datacenters.
Moving from Data Center Efficiency to Data Center Productivity and the Role of HPC in the Data Center of the Future
Presenter: John Pflueger (Dell Inc.)
Abstract: Power and cooling is an important topic today, but as the industry succeeds in managing data centers for efficiency, the next natural step will be to learn to manage data centers for productivity. As this occurs, lessons learned from the HPCC space will be instrumental in addressing industry issues at large. This presentation will provide a first look at the complex relationship between power and cooling efficiency, IT utilization and data center productivity. It will show how new strategies to manage the lifecycle of data centers can provide opportunities for state-of-the-art IT installations within existing facilities. Last, it will discuss how lessons and offerings in the HPC space will help the overall industry as each organization looks to Compute More and Consume Less.
Biography: John Pflueger, Ph.D., is a technology strategist working in Dell's Data Center Infrastructure group in Round Rock, Texas. In this role, he is responsible for driving Dell initiatives pertaining to data center efficiency and helping customers improve the efficiency of their computer systems and facilities. Since graduating from MIT in 1991 with a Ph.D. in mechanical engineering, Pflueger has gained 16 years of experience in manufacturing engineering, product development, product marketing and product management roles for technology companies including IBM. Pflueger currently serves as a director of The Green Grid and a participant in The Green Grid's Technical Committee.
Green HPC II
Room: 18A/18B/18C/18D
3:30 p.m. - 5 p.m.
Save Energy Now in Computer Centers
Presenter: Dale A. Sartor (Lawrence Berkeley National Laboratory)
Abstract: As HPC electrical power moves from kilowatts to megawatts, the energy efficiency of our supercomputer centers is becoming critical. Energy costs will soon exceed computer equipment purchase costs in the total cost of ownership. Benchmarking of dozens of computer centers has demonstrated significant variations in the energy efficiency of their infrastructure. Typically, for every dollar spent on computing energy, another dollar is spent on power delivery and cooling. However, the best computer centers reduce their infrastructure costs to 30 or 40 percent of the computational load, and 15 to 20 percent is possible. LBNL's building energy efficiency experts have been working with the IT industry and computer center operators to identify best practices and to develop new tools and technologies to improve infrastructure efficiency. LBNL has tested and adopted a number of these best practices in its own computer centers and is planning to maximize efficiency in a new UC supercomputer facility. Dale Sartor will present the results of this research and applications activity.
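As a rough illustration of the overhead figures quoted above, the snippet below converts them into power usage effectiveness (PUE) and annual electricity cost; the 1 MW IT load and $0.10/kWh price are assumed purely for scale.

```python
# Back-of-the-envelope comparison of the infrastructure overheads quoted in the abstract.
# The 1 MW IT load and $0.10/kWh electricity price are illustrative assumptions.
IT_LOAD_KW = 1_000        # hypothetical computational (IT) load
PRICE_PER_KWH = 0.10      # assumed electricity price, USD
HOURS_PER_YEAR = 8_760

scenarios = {
    "typical center (overhead equals IT load)": 1.00,
    "best current centers (30-40% of load)": 0.35,
    "achievable with best practices (15-20%)": 0.175,
}

for name, overhead in scenarios.items():
    total_kw = IT_LOAD_KW * (1 + overhead)      # IT load plus power delivery and cooling
    pue = total_kw / IT_LOAD_KW                 # power usage effectiveness
    annual_cost = total_kw * HOURS_PER_YEAR * PRICE_PER_KWH
    print(f"{name}: PUE {pue:.2f}, annual electricity ~${annual_cost:,.0f}")
```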
Biography: Dale Sartor, P.E., heads the LBNL Building Technologies Applications Team, which assists in the transfer of new and underutilized technology. Sartor has an A.B. in architecture and a master's degree in business administration. He is a licensed mechanical engineer and a licensed general building contractor. He has over 30 years of professional experience in energy efficiency and renewable energy applications, including 10 years as a principal of an architecture and engineering firm and seven years as the head of LBNL's In-House Energy Management Program. Sartor oversees R&D at LBNL focused on energy efficiency in buildings for high-tech industries (e.g., laboratories and data centers). http://Ateam.LBL.gov
Green IT Datacenters
Presenter: Ken Brill (Uptime Institute)
Abstract: Moore's law has brought us unparalleled computing. But the rate of improvement in computing energy efficiency has fallen far behind the rate of computational improvement, so total energy consumption is up. Between 2005 and 2015, some 20 to 30 major 1,000-megawatt power plants must be built to supply increasing IT energy demands. As a result, we have reached the point where the cost of facilities and electricity now exceeds the cost of the computer. This imbalance will only get worse unless we change the way we do things. This session will focus on whole-systems thinking to identify out-of-the-box opportunities to restore the previous economics of computing. The session will end with group interaction to identify supercomputer industry recommendations for DOE's 2009 roadmap of future data center efficiency initiatives.
Biography: Kenneth (Ken) G. Brill is the founder and executive director of the Uptime Institute in Santa Fe, N.M. Uptime is a research and benchmarking organization comprising more than one hundred global companies with the largest business data centers in the world. The Institute publishes industry standards, certifies the tier level of data centers, and provides consulting services.