This is an archival copy of the Visualization Group's web page, 1998 to 2017. For current information, please visit our group's new web page.

Remote and Distributed Visualization (DiVA)

The primary goal of this project is to foster the definition and emergence of a "virtual standard" for high performance scientific visualization. Our research focuses on visualization algorithms, infrastructure, and architecture as applied to DOE-funded science projects whose visualization needs are unfulfilled by existing commercial or research technologies. Our work is conducted as an end-to-end process, driven by close interactions with stakeholders and related research communities. Our primary stakeholders are DOE-funded SciDAC projects with specific remote and distributed visualization (RDV) needs. Generally speaking, the unfulfilled needs can be characterized as the ability to perform scientific visualization and data analysis of large and complex scientific data using remote and distributed resources. Our project combines research activities from two complementary areas to produce new capabilities needed by science programs: (1) graphics and visualization algorithms targeted especially at RDV environments, and (2) the infrastructure needed to effectively deploy these new RDV capabilities. We will work closely with individual scientific research groups to ensure the technology is well targeted and to validate the efficacy of the new methods.

Our vision is for scientific researchers to be able to perform visualization anytime, anywhere, using any of a large collection of resources, and without needing to master many different disciplines. Furthermore, we envision a stable environment for visualization and data analysis that can be used effectively by all scientific researchers, as well as for visualization research by the scientific visualization community. We envision high performance visualization tools having the same ease of use and prevalence as common office productivity software. We strive for the ability to bring the combined power of many remotely distributed computational resources to bear upon a single scientific visualization task that exceeds the capabilities of any one platform.

Table of Contents

    Performance Modeling and Dynamic Pipeline Optimization
    Realizing Grid-based RDV Applications
    (Dex) Remote and Distributed Visualization and Scientific Data Management
    DiVA-related talks and presentations
    DiVA Email List

Performance Modeling and Dynamic Pipeline Optimization

Many visualization component architectures use a dataflow pipeline paradigm for their distributed execution model. Such an organization has provided the underpinnings of the most successful visualization packages, such as AVS, IBM's Data Explorer, and the Visualization Toolkit (VTK). In the simplest form, all components that comprise the application pipeline reside on a single platform. Early attempts at distributed execution required the user to manually partition components into local and remote groups: some components run locally, and the rest run on a remote host. The partitioning is static, meaning it never changes in response to changing application needs or environmental conditions.
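As a concrete illustration, the sketch below builds such a single-host pipeline with the Python bindings of VTK (one of the packages named above). It is a minimal example, assuming the vtk Python module is installed; the data source and isovalue are arbitrary choices for illustration.

    import vtk

    # Each object below is one pipeline component; connecting output
    # ports to input ports forms the dataflow graph.
    source = vtk.vtkRTAnalyticSource()        # synthetic volume data source
    contour = vtk.vtkContourFilter()          # isosurface extraction component
    contour.SetInputConnection(source.GetOutputPort())
    contour.SetValue(0, 150.0)                # arbitrary isovalue

    mapper = vtk.vtkPolyDataMapper()
    mapper.SetInputConnection(contour.GetOutputPort())
    actor = vtk.vtkActor()
    actor.SetMapper(mapper)

    renderer = vtk.vtkRenderer()
    renderer.AddActor(actor)
    window = vtk.vtkRenderWindow()
    window.AddRenderer(renderer)
    window.Render()   # demand-driven: rendering pulls data through the pipeline

Here every component executes on one platform; distributing the pipeline means assigning each of these stages, and the pipes between them, to different hosts.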

However, there is no a priori way to select an optimal (or even tolerable) pipeline distribution at startup without first being able to accurately predict the performance of the individual components. Because no such performance models exist, component placement to date has been entirely heuristic. Given a performance model for each component in a pipeline, we can create a composite parametric model that accurately predicts the overall performance of the pipeline, and can therefore make quantifiably optimal decisions about distributing the components across resources. The ability to optimally place components will be a core requirement for the resource selection mechanisms needed for effective Grid computing. Note that "pipeline elements" consist not only of individual software components, but also of the "pipes" through which data flows between components.
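To make the idea concrete, here is a hedged Python sketch of such a composite model. The components, costs, and bandwidth figure are invented for illustration; the point is that once per-component models exist, the optimal local/remote partition can be found by minimizing the composite prediction rather than by heuristics.

    from itertools import product

    # (name, local_seconds, remote_seconds, output_megabytes) per component;
    # all numbers are illustrative stand-ins for real performance models.
    pipeline = [
        ("read",    4.0, 0.5, 512.0),   # remote host sits near the data
        ("isosurf", 2.0, 0.4,  48.0),
        ("render",  0.3, 0.2,   8.0),
        ("display", 0.1, None,  0.0),   # must run locally, on the user's screen
    ]
    BANDWIDTH_MB_S = 10.0  # assumed local<->remote link throughput, MB/s

    def predict(placement):
        """Composite model: sum the per-component costs, plus the cost of
        every 'pipe' that crosses the local/remote boundary."""
        total = 0.0
        for (name, local, remote, out_mb), here in zip(pipeline, placement):
            cost = local if here == "local" else remote
            if cost is None:
                return float("inf")  # infeasible placement
            total += cost
        for i in range(len(pipeline) - 1):
            if placement[i] != placement[i + 1]:
                total += pipeline[i][3] / BANDWIDTH_MB_S  # transfer time
        return total

    # Exhaustively evaluate every two-way partition and keep the best.
    best = min(product(["local", "remote"], repeat=len(pipeline)), key=predict)
    print(best, predict(best))

Exhaustive search is fine for a short pipeline; with many components and hosts, the same composite model would instead drive a smarter optimizer.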

More Information -- Performance Modeling and Dynamic Pipeline Optimization Project Page.


Realizing Grid-based RDV Applications

The Grid and its associated middleware are useful as a concept only if their services can be used to create the illusion that all of the resources are centralized, or local to the user's workstation. Paradoxically, the most successful distributed applications on the Grid will be those where the user is not aware that she is operating with distributed components. It is essential that DiVA be completely decoupled from the user interface paradigm so that a variety of interface methodologies can be supported: desktop computers, CAVEs, AccessGrids, and web browsers on cell phones. For this purpose, there are a number of technologies for separating the interface definition from the back-end logic that implements those operations. Such separation is the foundation of service specifications such as the Open Grid Services Architecture (OGSA) and the Web Services Resource Framework (WSRF), and of interface specifications like the Web Services Description Language (WSDL) and gSOAP. For performance-oriented applications, it may be necessary to look at other specification methods, such as the Common Component Architecture's SIDL (an analogue of CORBA's IDL), or at the relative merits of a more custom system built atop the RIPP protocol. The separation between presentation and implementation is so critically important that we will make it a fundamental feature of our research activities from the very start.
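As a down-to-earth illustration of this presentation/implementation split, the sketch below exposes a back-end visualization operation through Python's standard-library XML-RPC machinery, standing in for the WSDL/gSOAP-style interfaces discussed above; the service and method names are invented, not part of DiVA.

    from xmlrpc.server import SimpleXMLRPCServer

    class RenderService:
        """Back-end logic, independent of any particular front end."""
        def render_isosurface(self, dataset, isovalue):
            # ... drive the actual visualization pipeline here ...
            return "rendered %s at isovalue %s" % (dataset, isovalue)

    server = SimpleXMLRPCServer(("localhost", 8000))
    server.register_instance(RenderService())
    server.serve_forever()

Any client that speaks the wire protocol, whether a desktop GUI, a web page, or a phone browser, can then invoke the same operation, e.g. xmlrpc.client.ServerProxy("http://localhost:8000").render_isosurface("run42.h5", 150.0); the interface definition never constrains the presentation layer.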

Despite such decoupling of the GUI from the compute engine, some centralized resources are required for coordinating resource access and providing a unified view to the user's Virtual Organization. The motivation for producing a web-based Grid Portal interface to visualization services and Problem Solving Environments (PSEs) derives from the desire to hide complex software architectures and remote resource access behind a single point of presence that is accessible through comparatively simple client-side interfaces.

A portal is a single point of presence, typically hosted on the web, that can be customized for a particular user and that remembers those customizations regardless of the location from which the user connects. Yahoo and HotMail are typical consumer-oriented examples of this capability, and are the origin of the term "portal." Regardless of where you are when you log in to one of these portals, you get the same view of your personalized environment and data (i.e., in the case of HotMail, your email). Like the Yahoo example, a Grid portal such as the nascent LBL VisPortal provides a personalized environment that enables access to sophisticated distributed data analysis tools supporting the NERSC HPC environment and its users. We feel that a web-portal architecture is a very natural way to provide the location independence, consolidation of tools, and level of automation necessary to support HPC activities. A portal is just one way to present a collection of services, but it is a highly effective one.

More information about the LBNL Visualization Portal


(Dex) Remote and Distributed Visualization and Scientific Data Management

More information.


DiVA-related talks and presentations.


DiVA Email List

The DiVA email list is of the majordomo variety. To talk to the majordomo bot at LBL, send email to majordomo-at-listserv.lbl.gov (replace the -at- text with @) and put commands in the body of the message; no subject line is required. To subscribe to the diva list, send the bot an email whose body is "subscribe diva me@foo.bar" (where me@foo.bar is your email address). To unsubscribe, send an email whose body is "unsubscribe diva me@foo.bar".
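For those who prefer to script the exchange, the following sketch sends the subscribe command with Python's standard smtplib and email modules. The outgoing relay host is a placeholder; substitute one you are permitted to use.

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "me@foo.bar"                    # your address
    msg["To"] = "majordomo@listserv.lbl.gov"      # the majordomo bot
    msg.set_content("subscribe diva me@foo.bar")  # command goes in the body

    with smtplib.SMTP("smtp.example.com") as smtp:   # placeholder relay host
        smtp.send_message(msg)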

If you have questions about majordomo lists, you can consult the FAQ at http://www.cs.duke.edu/csl/faqs/majordomo.php. Where the FAQ refers to majordomo@cs.duke.edu, substitute majordomo@lbl.gov (or whichever mail server hosts the Majordomo list in question), and replace "demo-list" with the real list name.