Scientific Visualization at Los Alamos National Laboratory

Investigators: James Ahrens, Pat McCormick, Al McPherson, James Painter

Summary

Scientific visualization is an essential tool for understanding the vast quantities of large-scale, time-dependent data produced by high performance computer simulations. The need for visualization crosses all DOE high performance computing application areas, including Grand Challenge science problems, predictive modeling, and weapons safety calculations. While interaction is recognized as a key feature for useful exploration of scientific data, commercially available serial visualization hardware, algorithms, and tools cannot deliver interactive performance on data sets of this size.

Predictive modeling often requires time-critical computing, where results from a simulation must be produced and interpreted quickly enough to be of use in an ongoing crisis. The time-critical nature of these problems requires visualization tools that are tightly integrated with the simulation codes. For such problems, visualization tools must be the primary interface to a running simulation, allowing results to be interpreted as they are produced and "what if" scenarios to be explored rapidly.

Visualization problems such as these are outside the scope of the commercial marketplace, and an extensive research and development program is required to meet future requirements. The necessary research spans the areas of traditional computer graphics and computer systems software. The existing efforts at the ACL and LANL in systems (Tulip, PAWS), application frameworks (POOMA, PETE), high performance networking (HIPPI), and applications (Grand Challenges, CHAMP, predictive modeling), together with existing expertise in high performance visualization, provide the necessary infrastructure to approach these problems. Collaborations with the University of Utah, Argonne National Laboratory, and other DOE laboratories reinforce these strengths.

Research Activities

The Advanced Computing Laboratory has a long history of expertise and research leadership in parallel and distributed methods for visualization and rendering of extremely large data sets. These methods allow visualization tools to operate on massive data sets in place, without requiring data movement from the supercomputers where the data was produced to a separate visualization workstation. As we have demonstrated, these methods can handle data sets far larger than even a large dedicated visualization server can accommodate.

Scalable parallel and distributed visualization systems remain a major focus of our research effort. In the past we have mainly explored purely software-based approaches. The SGI-based Nirvana Blue platform now allows us to integrate multiple high-end graphics accelerators directly into the large cluster-based platform. We currently have four SGI InfiniteReality graphics pipelines that can be used in several modes. Typically the four pipelines are used as four independent heads driving independent monitors. For very large visualization problems we are also interested in chaining the graphics pipelines together to produce a single system with much higher rendering capability. We are working with collaborators at SGI and at the University of Utah to explore parallel algorithms that allow multiple graphics pipes to be chained together for increased capability. We have initially explored texture-based volume rendering and are beginning to explore sort-last polygon rendering.
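
As a concrete illustration of the sort-last idea, the sketch below shows how full-resolution partial images produced by several graphics pipes can be merged by keeping the fragment nearest the viewer at every pixel. This is a minimal sketch only; the data structures and names are illustrative and are not taken from our implementation or from SGI software.

// Sort-last image compositing sketch (illustrative, not production code):
// each pipe renders its share of the data into a full-resolution color and
// depth image; the partial images are merged by a per-pixel depth test.
#include <cstdint>
#include <limits>
#include <vector>

struct PartialImage {
    std::vector<uint32_t> color;  // packed RGBA, one entry per pixel
    std::vector<float>    depth;  // eye-space depth, one entry per pixel
};

// Depth-composite the images produced by N pipes into a single frame.
PartialImage composite(const std::vector<PartialImage>& pipes, std::size_t numPixels) {
    PartialImage out;
    out.color.assign(numPixels, 0u);
    out.depth.assign(numPixels, std::numeric_limits<float>::max());
    for (const PartialImage& img : pipes) {
        for (std::size_t p = 0; p < numPixels; ++p) {
            if (img.depth[p] < out.depth[p]) {   // nearer fragment wins
                out.depth[p] = img.depth[p];
                out.color[p] = img.color[p];
            }
        }
    }
    return out;
}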

Our past work has primarily focused on individual "one-off" software tools with limited software sharing through the use of libraries. In the past year we have placed an increased emphasis on greater code reuse through software frameworks. Software frameworks are a proven technology for software reuse, rapid development, and sharing, features that would be of great benefit to visualization. Current successful frameworks tend to be narrowly focused and limited to specific classes of problems. For an area as large as scientific visualization, especially in a parallel and distributed environment, significant research is needed to develop robust, high performance, scalable, and portable frameworks. Several interrelated projects have begun to explore parallel, distributed visualization frameworks.

Creating a full-featured visualization solution for a specific application domain requires a collection of commercial and research software components. At a coarse level of abstraction, these components can be classified as front-ends (i.e., user interfaces) and back-ends (i.e., visualization engines). We have developed an object-oriented framework, called VIF (http://www.acl.lanl.gov/Viz/frameworks.html), to interconnect these components. Using the framework, a user defines an interface for tool component communication. Once an interface is specified, new or existing components can easily be written or adapted to communicate through it. An initial implementation of VIF is complete.
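
The sketch below illustrates the style of decoupling such an interface provides: the front-end is written against a user-defined interface rather than against a particular back-end. The class and method names are hypothetical and are not part of the VIF API.

// Illustrative only: a user-defined interface for tool component
// communication. A front-end talks to the interface; the back-end that
// implements it may be a commercial tool or a research prototype.
class IsoSurfaceEngine {
public:
    virtual ~IsoSurfaceEngine() = default;
    virtual void setIsoValue(double value) = 0;   // parameter change from the front-end
    virtual void render() = 0;                    // request a new image
};

// A front-end (user interface) component written against the interface.
class SliderFrontEnd {
public:
    explicit SliderFrontEnd(IsoSurfaceEngine& engine) : engine_(engine) {}
    void onSliderMoved(double value) {
        engine_.setIsoValue(value);
        engine_.render();
    }
private:
    IsoSurfaceEngine& engine_;
};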

A second framework, PDVF, is being developed to address parallel and distributed visualization solutions, with an emphasis on rapid prototyping and reusability. The ability to prototype solutions quickly allows us to experiment with multiple approaches and compare their efficiency and effectiveness. In the future, it may be possible to use the framework to create a solution that automatically selects the most appropriate of a set of parallel visualization techniques based on dataset characteristics and system conditions. This ability to explore the parallel and distributed visualization solution space is unique and is required to create optimized solutions for terascale problems.
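
The following sketch suggests the kind of strategy selection such a framework could make automatic. The strategy classes, thresholds, and names are purely illustrative assumptions, not part of PDVF.

// Hypothetical strategy selection: pick a parallel visualization technique
// from dataset characteristics and system conditions.
#include <cstddef>
#include <memory>

struct DatasetInfo { std::size_t numCells; bool timeVarying; };
struct SystemInfo  { int availablePipes; std::size_t texMemoryBytes; };

class RenderStrategy {
public:
    virtual ~RenderStrategy() = default;
    virtual void render(const DatasetInfo& data) = 0;
};

class SerialRenderer : public RenderStrategy {
public:
    void render(const DatasetInfo&) override { /* single-process rendering */ }
};

class TextureVolumeRenderer : public RenderStrategy {
public:
    void render(const DatasetInfo&) override { /* 3D-texture slicing on one pipe */ }
};

class SortLastRenderer : public RenderStrategy {
public:
    void render(const DatasetInfo&) override { /* distribute data, composite images */ }
};

std::unique_ptr<RenderStrategy> chooseStrategy(const DatasetInfo& d, const SystemInfo& s) {
    if (d.numCells * sizeof(float) <= s.texMemoryBytes)
        return std::make_unique<TextureVolumeRenderer>();  // volume fits in texture memory
    if (s.availablePipes > 1)
        return std::make_unique<SortLastRenderer>();       // scale out across pipes
    return std::make_unique<SerialRenderer>();             // fallback
}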

Working closely with the Parallel Object-Oriented Methods and Applications (POOMA) team, we have integrated visualization tools directly into the POOMA framework, providing runtime visualization capabilities to any application built with POOMA. An initial serial implementation of these POOMA visualization tools has been delivered and is in use by POOMA application developers. A sort-last parallel implementation is in progress to provide a scalable solution for large data sets. This work leverages the PAWS DOE-2000 project and related work from ANL.
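
The general pattern of runtime visualization is sketched below: the simulation hands its data to a visualization connection every few time steps, so the data stays in place and no file-based round trip is needed. The VizConnection type and sendField call are hypothetical names used only to convey the pattern; the actual POOMA and PAWS interfaces are not shown.

// Sketch of in-place runtime visualization hooked into a time-step loop
// (hypothetical connection API, not the POOMA interface).
#include <vector>

struct VizConnection {
    void sendField(const std::vector<double>& field, int step) {
        // ship the field to the visualization back-end (details omitted)
    }
};

void timeStepLoop(std::vector<double>& density, int numSteps, VizConnection& viz) {
    for (int step = 0; step < numSteps; ++step) {
        // ... advance the simulation one step (application code) ...
        if (step % 10 == 0)                 // visualize every 10th step
            viz.sendField(density, step);   // data stays in place; no file I/O round trip
    }
}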

Volume rendering is an alternative paradigm to traditional polygon-based rendering. It allows data sets to be visualized in their entirety through dynamic use of transparency and color, and for data sets that do not have hard surfaces it can provide a more accurate visual representation. Volume rendering tends to be an extremely slow process, and without interactivity the visual effect is not easily comprehended. By utilizing SGI's 3D-texture memory and dynamic paging, it is becoming possible to interactively volume render terascale data sets. We have recently applied hardware texture mapping and interactive volume rendering techniques to global climate model results. In the past, ocean animations took days to produce and did not allow interactive exploration. The new tool, poptex, allows users to interact with an animating ocean model time history, a capability that was not possible before.
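
A minimal sketch of the core slicing step is shown below, assuming an OpenGL context with 3D-texture support and a volume already loaded into a texture; poptex's paging, transfer-function, and animation machinery are not shown. The volume is drawn as a stack of axis-aligned slices blended back to front by the graphics hardware.

// Texture-based volume rendering sketch: draw slices through a 3D texture,
// letting hardware blending accumulate transparency and color. Assumes the
// viewer looks down the -z axis, so drawing from z = -1 to +1 is back to front.
#include <GL/gl.h>

void drawVolumeSlices(GLuint volumeTex, int numSlices) {
    glEnable(GL_TEXTURE_3D);
    glBindTexture(GL_TEXTURE_3D, volumeTex);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);   // per-slice transparency

    for (int i = 0; i < numSlices; ++i) {
        float r  = static_cast<float>(i) / (numSlices - 1);  // texture r coordinate in [0,1]
        float zv = 2.0f * r - 1.0f;                          // slice position in [-1,1]
        glBegin(GL_QUADS);                                   // one slice through the volume
        glTexCoord3f(0.0f, 0.0f, r); glVertex3f(-1.0f, -1.0f, zv);
        glTexCoord3f(1.0f, 0.0f, r); glVertex3f( 1.0f, -1.0f, zv);
        glTexCoord3f(1.0f, 1.0f, r); glVertex3f( 1.0f,  1.0f, zv);
        glTexCoord3f(0.0f, 1.0f, r); glVertex3f(-1.0f,  1.0f, zv);
        glEnd();
    }
    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_3D);
}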

We expect to continue research work in scalable parallel software and hardware solutions for terascale visualization, software frameworks to support visualization software reuse, and new rendering techniques such as hardware texture-based volume rendering. Our visualization research efforts are driven by the needs of the Grand Challenge projects. When possible, we leverage commercial visualization software such as CEI EnSight, IBM DX, and AVS Express, but none of these products can currently address the terascale needs of the Grand Challenges. Our research efforts, and those of our collaborators, result in research prototype visualization software that will be applied to the needs of the Grand Challenges.