At SC08, the invited plenary speaker sessions will offer insights from an accomplished group of international experts working at the cutting edge of computation. The talks will cover a broad range of interdisciplinary, scientific and commercial topics that illustrate the wide impact of state-of-the-art computing and also offer exciting glimpses into the future of our field.
In addition, the SC08 invited Masterworks speakers will highlight case studies that illustrate novel and innovative ways to apply high performance computing, networking, storage and analysis to solve the world’s most challenging problems and answer the questions fundamental to our understanding of the world around us and our existence.
Michael Dell to Give Keynote Speech at SC08
Tuesday, November 18
Higher Performance: Supercomputing in the Connected Era
Michael Dell, chairman of the board of directors and chief executive officer of Dell Inc., the company he founded in 1984 with $1,000 and an unprecedented idea--to build relationships directly with customers--will give the keynote address at SC08 in Austin, Texas.
In 1992, Mr. Dell became the youngest CEO ever to earn a ranking on the Fortune 500 list. Mr. Dell is the author of “Direct From Dell: Strategies that Revolutionized an Industry,” his story of the rise of the company and the strategies he has refined that apply to all businesses.
In 1998, Mr. Dell formed MSD Capital, and in 1999, he and his wife formed the Michael & Susan Dell Foundation, to manage the investments and philanthropic efforts, respectively, of the Dell family.
Born in February 1965, Mr. Dell serves on the Foundation Board of the World Economic Forum and the Executive Committee of the International Business Council, and is a member of the U.S. Business Council. Mr. Dell also serves on the U.S. President's Council of Advisors on Science and Technology, the Technology CEO Council and the governing board of the Indian School of Business in Hyderabad, India.
2008 marks the 20th anniversary of the SC Conference series bringing together the world’s leading high-performance computing researchers, scientists and engineers. From the environment to health and energy, these leaders have helped address many of the world’s most pressing challenges. The next era of HPC will be enabled by super-scalable, increasingly simple technologies that will make possible even greater collaboration, productivity and scientific breakthroughs.
Wednesday, November 19
Kenneth H. Buetow
Developing an Interoperable IT Framework to Enable Personalized Medicine
National Cancer Institute
NCI Associate Director, Bioinformatics and Information Technology;
Director, NCI Center for Biomedical Informatics and Information Technology (NCICBIIT);
Chief, Laboratory of Population Genetics, National Cancer Institute;
In his role as the Associate Director for Bioinformatics and Information Technology at the National Cancer Institute (NCI), Dr. Buetow is best known for initiating the cancer Biomedical Informatics Grid™ (caBIG™) and currently oversees its activities. caBIG™ was conceived as the “World Wide Web” of cancer research, providing data standards, interoperable tools and grid-enabled computing infrastructure to address the needs of all constituencies in the cancer community. Guided by the principles of open source licensing, open software development, open access to the products of that development and federated data storage and integration, caBIG™ facilitates data and knowledge exchange and simplifies collaboration between biomedical researchers and clinicians, leading to better patient outcomes and the realization of personalized medicine in cancer care and beyond.
As Director of the NCI Center for Biomedical Informatics and Information Technology (NCICBIIT), Buetow works to advance the Center’s goal of maximizing interoperability and integration of NCI research. The Center participates in the evaluation and prioritization of the NCI’s bioinformatics research portfolio; facilitates and conducts research required to address the CBIIT’s mission; serves as the locus for strategic planning to address the informatics needs of the NCI’s expanding research initiatives; establishes information technology standards (both within and outside of NCI); and communicates, coordinates or establishes information exchange standards.
Buetow also serves as the Chief of the Laboratory of Population Genetics (LPG), which focuses on developing, extending and applying human genetic analysis methods and resources to better understand the genetics of complex phenotypes, specifically human cancer. He also spearheaded the efforts of the Genetic Annotation Initiative (GAI) to identify variant forms of the cancer genes detected through the NCI Cancer Genome Anatomy Project (CGAP). His laboratory combines computational tools with laboratory research to understand how genetic variations make individuals more susceptible to liver, lung, prostate, breast and ovarian cancer.
Buetow received a B.A. in biology from Indiana University in 1980 and a Ph.D. in human genetics from the University of Pittsburgh in 1985. From 1986 to 1998, he was at the Fox Chase Cancer Center in Philadelphia, where he worked with the Cooperative Human Linkage Center (CHLC) to produce a comprehensive collection of human genetic maps. Buetow has been in his role at NCI since 2000. He has published more than 160 scientific papers on a wide variety of topics in journals such as PNAS, Science, Cell, and Cancer Research.
Buetow’s honors and awards include The Editor’s Choice Award from Bio-IT World (2008), The Federal 100 Award (2005), The NIH Award of Merit (2004), the NCI Director’s Gold Star Award (2004), The Partnership in Technology Award (1996), and the Computerworld Smithsonian Award for Information Technology (1995).
21st century biomedical research is driven by massive amounts of data: automated technologies generate hundreds of gigabytes of DNA sequence information, terabytes of high-resolution medical images, and massive arrays of gene expression information on thousands of genes tested in hundreds of independent experiments. Clinical research data is no different: each clinical trial may generate hundreds of data points on thousands of patients over the course of the trial.
This influx of data has enabled a new understanding of disease on its fundamental, molecular basis. Many diseases are now understood as complex interactions between an individual’s genes, environment and lifestyle. To harness this new understanding, research and clinical care capabilities (traditionally undertaken as isolated functions) must be bridged to seamlessly integrate laboratory data, biospecimens, medical images and other clinical data. This collaboration between researchers and clinicians will create a continuum between the bench and the bedside—speeding the delivery of new diagnostics and therapies, tailored to specific patients, ultimately improving clinical outcomes.
To realize the promises of this new paradigm of personalized medicine, healthcare and drug discovery organizations must evolve their core processes and IT capabilities to enable broader interoperability among data resources, tools, and infrastructure—both within and across institutions. Answers to these challenges are enabled by the cancer Biomedical Informatics Grid™ (caBIG™) initiative, overseen by the National Cancer Institute Center for Biomedical Informatics and Information Technology (NCI-CBIIT). caBIG™ is a collection of interoperable software tools, standards, databases, and grid-enabled computing infrastructure founded on four central principles:
• Open access; anyone—with appropriate permission—may access the caBIG™ tools and data
• Open development; the entire research community participates in the development, testing, and validation of the tools
• Open source; all the tools are available for use and modification
• Federation; resources can be controlled locally, or integrated across multiple sites
caBIG™ is designed to connect researchers, clinicians, and patients across the continuum of biomedical research—allowing seamless data flow between electronic health records and data sources including genomic, proteomic, imaging, biospecimen, pathology and clinical information, facilitating collaboration across the entire biomedical enterprise.
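The federation and open-access principles described above can be illustrated with a minimal sketch. Everything here—site names, record fields, and the access-control rule—is a hypothetical illustration of the federation idea, not the actual caGrid API:

```python
# Minimal sketch of federated, permission-checked queries across sites.
# All site names, records, and the check_access rule are hypothetical
# illustrations of the federation principle, not the real caGrid API.

# Each participating site controls its own data locally.
SITES = {
    "site_a": [{"study": "trial-001", "modality": "imaging", "public": True}],
    "site_b": [{"study": "trial-002", "modality": "genomic", "public": False}],
}

def check_access(record, credentials):
    """Local access control: public records, or any record for approved users."""
    return record["public"] or credentials.get("approved", False)

def federated_query(modality, credentials):
    """Query every site; each site applies its own access policy locally."""
    results = []
    for site, records in SITES.items():
        for rec in records:
            if rec["modality"] == modality and check_access(rec, credentials):
                results.append({"site": site, **rec})
    return results

print(federated_query("genomic", {"approved": True}))
print(federated_query("genomic", {}))  # no credentials: private data withheld
```

The point of the pattern is that data never has to be centralized: each site answers queries under its own policy, and the caller sees one merged result set.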
caBIG™ technologies are widely applicable beyond cancer and may be freely adopted, adapted or integrated with other standards-based tools and systems. Guidelines, tools and support infrastructure are in place to facilitate broad integration of caBIG™ tools, which are currently being deployed at more than 60 academic medical centers around the United States and are being integrated in the Nationwide Health Information Network as well. For more information on caBIG™, visit http://cabig.cancer.gov/
Wednesday, November 19
David Patterson
Parallel Computing Landscape: A View from Berkeley
University of California Berkeley & Lawrence Berkeley National Laboratory
Pardee Professor of Computer Science,
Director, UC Berkeley Parallel Computing Laboratory
David Patterson was the first in his family to graduate from college and he enjoyed it so much that he didn’t stop until he received a Ph.D. from UCLA in 1976. He then moved north to UC Berkeley. He spent 1979 at DEC working on the VAX minicomputer, which inspired him and his colleagues to later develop the Reduced Instruction Set Computer (RISC). In 1984, Sun Microsystems recruited him to start the SPARC architecture. In 1987, Patterson and colleagues tried building dependable storage systems from the new PC disks. This led to the popular Redundant Array of Inexpensive Disks (RAID). He spent 1989 working on the CM-5 supercomputer. Patterson and colleagues later tried building a supercomputer using standard desktop computers and switches. The resulting Network of Workstations (NOW) project led to cluster technology used by many Internet services. He is currently director of both the Reliable Adaptive Distributed systems Lab and the Parallel Computing Lab at UC Berkeley. In the past, he served as chair of Berkeley’s CS Division, chair of the Computing Research Association, and president of the ACM.
All this has resulted in 200 papers, 5 books, and about 30 honors, some shared with friends, including election to the National Academy of Engineering, the National Academy of Sciences, and the Silicon Valley Engineering Hall of Fame. He was named a Fellow of the Computer History Museum and of both AAAS organizations. From the ACM, of which he is a Fellow, he received the SIGARCH Eckert-Mauchly Award, the SIGMOD Test of Time Award, the Distinguished Service Award, and the Karlstrom Outstanding Educator Award. He is also a Fellow of the IEEE, from which he received the Johnson Information Storage Award, the Undergraduate Teaching Award, and the Mulligan Education Medal. Finally, he shared the IEEE von Neumann Medal and the NEC C&C Prize with John Hennessy of Stanford University.
In December 2006 we published a broad survey of the issues for the whole field concerning the multicore/manycore sea change (see view.eecs.berkeley.edu). We view the ultimate goal as being able to productively create efficient, correct and portable software that smoothly scales when the number of cores per chip doubles biennially. This talk covers the specific research agenda that a large group of us at Berkeley are going to follow (see parlab.eecs.berkeley.edu) as part of a center funded for five years by Intel and Microsoft.
To take a fresh approach to the longstanding parallel computing problem, our research agenda will be driven by compelling applications developed by domain experts in personal health, image retrieval, music, speech understanding and browsers. The development of parallel software is divided into two layers: an efficiency layer that aims at low overhead and is intended for the 10 percent of programmers who are performance experts, and a productivity layer for the rest of the programming community—including domain experts—that reuses the parallel software developed at the efficiency layer. Key to this approach is a layer of libraries and programming frameworks centered around the 13 design patterns that we identified in the Berkeley View report. We rely on autotuning to map the software efficiently to a particular parallel computer. The role of the operating system and the architecture in this project is to support software and applications in achieving the ultimate goal. Examples include primitives like thin hypervisors and libraries for the operating system, and hardware support for partitioning and fast barrier synchronization. We will prototype the hardware of the future using field programmable gate arrays (FPGAs) on a common hardware platform being developed by a consortium of universities and companies.
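The autotuning step mentioned above can be sketched in miniature: an autotuner times several candidate variants of the same kernel on the target machine and keeps the fastest. This generic sketch illustrates the idea only; it is not the Par Lab autotuning infrastructure, and the blocked-summation kernel and candidate sizes are invented for illustration:

```python
import time

# Generic autotuning sketch: benchmark candidate block sizes for a blocked
# summation kernel and keep whichever runs fastest on this machine.

def blocked_sum(data, block):
    """Sum a list in chunks of `block` elements (the tunable parameter)."""
    total = 0
    for start in range(0, len(data), block):
        total += sum(data[start:start + block])
    return total

def timeit_once(kernel, data, param):
    """Measure one run of the kernel with the given parameter."""
    start = time.perf_counter()
    kernel(data, param)
    return time.perf_counter() - start

def autotune(kernel, data, candidates, trials=3):
    """Return the candidate parameter with the lowest measured runtime."""
    best_param, best_time = None, float("inf")
    for param in candidates:
        elapsed = min(timeit_once(kernel, data, param) for _ in range(trials))
        if elapsed < best_time:
            best_param, best_time = param, elapsed
    return best_param

data = list(range(100_000))
best = autotune(blocked_sum, data, candidates=[64, 256, 1024, 4096])
print("fastest block size on this machine:", best)
```

Real autotuners (e.g., for sparse matrix-vector multiply or stencils) search far larger spaces of code variants, but the measure-and-select loop is the same.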
Thursday, November 20
Jeffrey Wadsworth
High-Performance Computing and the Energy Challenge: Issues and Opportunities
Battelle Memorial Institute
Executive Vice President for Global Laboratory Operations
Jeffrey Wadsworth is the senior executive responsible for Battelle’s laboratory management business. Battelle currently manages or co-manages six U.S. Department of Energy (DOE) national laboratories: Brookhaven National Laboratory, Idaho National Laboratory, Lawrence Livermore National Laboratory (LLNL), National Renewable Energy Laboratory, Oak Ridge National Laboratory, and Pacific Northwest National Laboratory. A Battelle subsidiary, Battelle National Biodefense Institute, manages the National Biodefense Analysis and Countermeasures Center for the U.S. Department of Homeland Security. The laboratories have combined research revenues of $3.2 billion and employ 16,000 staff.
Wadsworth joined Battelle in August 2002 and was a member of the White House Transition Planning Office for the Department of Homeland Security before being named director of Oak Ridge National Laboratory and president and CEO of UT Battelle, LLC, which manages the laboratory for DOE, in August 2003. As ORNL’s director through June 2007, he was responsible for managing DOE’s largest multi-purpose science and energy laboratory, with 4,100 staff and an annual budget of $1 billion. Under his leadership, the laboratory commissioned the $1.4 billion Spallation Neutron Source; launched DOE’s first nanoscience research center; developed the world’s most powerful unclassified computer system; expanded its work in national security; and initiated an interdisciplinary bioenergy program.
Before joining Battelle, Wadsworth was Deputy Director for Science and Technology at Lawrence Livermore National Laboratory, where he oversaw science and technology across all programs and disciplines. His responsibilities included programmatic and discretionary funding, technology transfer, and workforce competencies. He joined the laboratory in 1992 and was Associate Director for Chemistry and Materials Science before becoming Deputy Director in 1995.
From 1980 to 1992, Wadsworth worked for Lockheed Missiles and Space Company at the Palo Alto Research Laboratory, where as manager of the Metallurgy Department he was responsible for direction of research activities and acquisition of research funds.
Wadsworth attended the University of Sheffield in England, graduating with baccalaureate and doctoral degrees in 1972 and 1975, respectively; he was awarded a D. Met. for published work in 1990 and an honorary D. Eng. in 2004. He joined Stanford University in 1976 and conducted research on the development of steels, superplasticity, materials behavior, and Damascus steels. He lectured at Stanford after joining Lockheed, and remained a Consulting Professor until 2004. He is a Distinguished Research Professor in the Department of Materials Science and Engineering at the University of Tennessee.
He has authored and co-authored more than 280 papers in the open scientific literature on a wide range of materials science and metallurgical topics; one book, Superplasticity in Metals and Ceramics (Cambridge, 1997); and four U.S. patents. He has presented or co-authored 300 talks at conferences, scientific venues, and other public events, and has twice been selected as a NATO Lecturer. His work has been recognized with many awards, including Sheffield University’s Metallurgica Aparecida Prize for Steel Research and Brunton Medal for Metallurgical Research. He was elected a Fellow of ASM International in 1987, of TMS (The Minerals, Metals & Materials Society) in 2000, and of the American Association for the Advancement of Science (AAAS) in 2003. He is a member of the Materials Research Society (MRS) and the American Ceramic Society (ACerS). In January 2005 he was elected to membership in the National Academy of Engineering “for research on high temperature materials, superplasticity, and ancient steels and for leadership in national defense and science programs.”
Energy issues are central to the most important strategic challenges facing the United States and the world. The energy problem can be broadly defined as providing enough energy to support higher standards of living for a growing fraction of the world’s increasing population without creating intractable conflict over resources or causing irreparable harm to our environment. It is increasingly clear that even large-scale deployment of the best, currently available energy technologies will not be adequate to successfully tackle this problem. Substantial advances in the state of the art in energy generation, distribution and end use are needed. It is also clear that a significant and sustained effort in basic and applied research and development (R&D) will be required to deliver these advances and ensure a desirable energy future. It is in this context that high-performance computing takes on a significance that is co-equal with theory and experiment. The U.S. Department of Energy (DOE) and its national laboratories have been world leaders in the use of advanced high-performance computing to address critical problems in science and energy. As computing nears the petascale, a capability that until recently was beyond imagination is now poised to address these critical problems. Battelle Memorial Institute manages or co-manages six DOE national laboratories that together house some of the most powerful computers in the world. These capabilities have enabled remarkable scientific progress in the last decade. The world-leading petascale computers that are now being deployed will make it possible to solve R&D problems of importance to a secure energy future and contribute to the long-term interests of the United States.
Thursday, November 20
Mary Fanett Wheeler
Computational Frameworks for Subsurface Energy and Environmental Modeling and Simulation
University of Texas at Austin
Mary Fanett Wheeler is a world-renowned expert in massively parallel processing. She has been on the faculty at The University of Texas at Austin since 1995 and holds the Ernest and Virginia Cockrell Chair in the departments of Aerospace Engineering and Engineering Mechanics and Petroleum and Geosystems Engineering. She is also director of the Center for Subsurface Modeling at the Institute for Computational Engineering and Sciences (ICES).
Dr. Wheeler's research group employs computer simulations to model the behavior of fluids in geological formations. Her particular research interests include numerical solution of partial differential systems with application to the modeling of subsurface flows and parallel computation. Applications of her research include multiphase flow and geomechanics in reservoir engineering, contaminant transport in groundwater, sequestration of carbon in geological formations, and angiogenesis in biomedical engineering. Wheeler has published more than 200 technical papers and edited seven books. She is currently an editor of nine technical journals.
Wheeler is a member of the Society for Industrial and Applied Mathematics, the Society of Petroleum Engineers, the Association for Women in Mathematics, the Mathematical Association of America and the American Geophysical Union. She is a fellow of the International Association for Computational Mechanics and is a certified Professional Engineer in the State of Texas.
Wheeler has served on numerous committees for the National Science Foundation and the Department of Energy. For the past seven years she has been the university lead in the Department of Defense User Productivity Enhancement and Technology Transfer Program (PET) in environmental quality modeling. Wheeler is on the Board of Governors for Argonne National Laboratory and on the Advisory Committee for Pacific Northwest National Laboratory. In 1998, Wheeler was elected to the National Academy of Engineering. In 2006, she received an honorary doctorate from Technische Universiteit Eindhoven in the Netherlands, and in 2008 an honorary doctorate from the Colorado School of Mines.
Wheeler received her B.S., B.A., and M.A. degrees from the University of Texas at Austin and her Ph.D. in mathematics from Rice University.
Over the past 60 years, modeling and simulation have been essential to the success of the petroleum industry. This in fact dates back to 1948, when von Neumann was a consultant for Humble Research in Houston, Texas. Exploration and production in the deep Gulf of Mexico and on the North Slope of Alaska, and the design and construction of the Alyeska pipeline, could not have been achieved without modeling of coupled nonlinear partial differential equations.
Today energy-related industries are facing new challenges: unprecedented demand for energy as well as growing environmental concerns over global warming and greenhouse gases. Resolving the complex scientific issues raised by next-generation energy sources requires multidisciplinary teams of geologists, biologists, chemical, mechanical and petroleum engineers, mathematicians and computational scientists working closely together. Simulation needs include: 1) the development of novel multiscale (molecular to field scale) and multiphysics discretizations for estimating physical characteristics and statistics of stochastic systems; 2) modeling of multiscale stochastic problems for quantifying uncertainty due to heterogeneity and small-scale uncertainty in subdomain system parameters; 3) verification and validation of models through experimentation and simulation; 4) robust optimization and optimal control for monitoring and controlling large systems; and 5) petascale computing on heterogeneous platforms, including interactive visualization and seamless data management.
In order to address these challenges, a robust reservoir simulator comprised of coupled programs that together account for multicomponent, multiscale, multiphase (full compositional) flows and transport through porous media and through wells and that incorporate uncertainty and include robust solvers is required. The coupled programs must be able to treat different physical processes occurring simultaneously in different parts of the domain and, for computational accuracy and efficiency, should also accommodate multiple numerical schemes. In addition, these problem-solving environments or frameworks must have parameter estimation and optimal control capabilities. We present a “wish list” for simulator capabilities as well as describe the methodology and parallel algorithms employed in the IPARS software being developed at the University of Texas at Austin.
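A toy sketch can show the kind of PDE solve that sits at the core of such a simulator. The example below is a minimal explicit finite-difference scheme for single-phase, slightly compressible 1D flow (a pressure diffusion equation); it illustrates the discretization idea only and is not the IPARS formulation, and the grid, pressures and diffusivity are invented for illustration:

```python
# Toy sketch of the PDE kernel at the heart of a reservoir simulator:
# single-phase, slightly compressible 1D flow, p_t = alpha * p_xx,
# solved with an explicit finite-difference scheme. Illustration only;
# not the IPARS formulation.

def pressure_step(p, alpha, dx, dt):
    """One explicit time step; endpoints held fixed (Dirichlet boundaries)."""
    new = p[:]
    for i in range(1, len(p) - 1):
        new[i] = p[i] + alpha * dt / dx**2 * (p[i-1] - 2*p[i] + p[i+1])
    return new

# Grid: an injector at high pressure on the left, a producer on the right.
# Stability of the explicit scheme requires alpha * dt / dx**2 <= 0.5.
nx, dx, dt, alpha = 50, 1.0, 0.1, 1.0
p = [2000.0] + [1000.0] * (nx - 2) + [500.0]

for _ in range(1000):
    p = pressure_step(p, alpha, dx, dt)

# Pressure falls monotonically from injector to producer.
print(round(p[1]), round(p[nx // 2]), round(p[-2]))
```

A production simulator replaces this single scalar equation with coupled, nonlinear, multiphase systems in 3D, implicit time stepping, and parallel solvers, but the spatial discretization of a conservation law is the common starting point.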