
The Graphics Display Kit


4 Render methods

This chapter provides details on each of the render methods supported by the Graphics Display Kit.

This chapter discusses:

- Structured field rendering
- Unstructured field rendering
- "Tiled" field rendering
- Text rendering
- The object cache

4.1 Overview

All data that is to be rendered by the Graphics Display Kit must have a render method associated with it. The presence of a render method is what allows any data to be connected to one of the Graphics Display Kit viewers in the Network Editor. This method converts the data from its current form, usually an AVS/Express field, into a renderable representation that can be passed to the renderer-specific primitive level routines. The Graphics Display Kit supports render methods for four separate groups:

- Structured fields
- Unstructured fields
- "Tiled" fields
- Text
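Conceptually, the render method behaves like a virtual function attached to the data: the object draw routine looks up the routine named in the data's `data_method` declaration and invokes it. The following Python sketch illustrates this dispatch only; the registry, the field dictionary, and the stand-in routine are assumptions for illustration, not the actual Graphics Display Kit API.

```python
# Hypothetical registry mapping render-method names (as they appear in the
# data_method declarations) to routines that produce a renderable
# representation.
RENDER_METHODS = {}

def register(name):
    """Associate a render routine with a method name."""
    def wrap(fn):
        RENDER_METHODS[name] = fn
        return fn
    return wrap

@register("GDdraw_mesh_unif")
def draw_mesh_unif(field):
    # Stand-in for the real routine: return a renderable representation.
    return ("mesh", field["dims"])

def draw_object(field):
    # The object draw routine invokes whatever render method the data
    # carries; data without a render method cannot be connected to a viewer.
    method = field.get("render")
    if method not in RENDER_METHODS:
        raise ValueError("no render method associated with this data")
    return RENDER_METHODS[method](field)
```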

4.2 Structured field rendering

Structured fields have several different render methods associated with them. The following V code illustrates how render methods have been assigned to structured fields.

Grid_Unif+Cells_Struct+Dim2 Render_UnifMesh {
   nnodes+read;
   coordinates+read;
   int+read+opt nnode_data;
   Data_Array+read+opt node_data[nnode_data];
   data_method+virtual+nosave render = "GDdraw_mesh_unif";
   data_method+virtual+nosave pick = "GDpick_mesh_unif";
};
Grid_Rect+Cells_Struct+Dim2 Render_RectMesh {
   nnodes+read;
   coordinates+read;
   int+read+opt nnode_data;
   Data_Array+read+opt node_data[nnode_data];
   data_method+virtual+nosave render = "GDdraw_mesh_rect";
   data_method+virtual+nosave pick = "GDpick_mesh";
};
Grid_Struct+Cells_Struct+Dim2 Render_Mesh {
   nnodes+read;
   coordinates+read;
   int+read+opt nnode_data;
   Data_Array+read+opt node_data[nnode_data];
   data_method+virtual+nosave render = "GDdraw_mesh";
   data_method+virtual+nosave pick = "GDpick_mesh";
};
Grid_Struct+Cells_Struct+Dim3+Space3 Render_Volume {
   nnodes+read;
   coordinates+read;
   int+read+opt nnode_data = 1;
   Data_Array+read+opt node_data[nnode_data] {
      veclen = 1;
   };
   data_method+virtual+nosave render = "GDdraw_volume";
   data_method+virtual+nosave pick = "GDpick_volume";
};

When the object draw routine invokes one of the structured field render methods, the first set of steps that takes place involves retrieving information from the field about the mesh and the node data or cell data:

1. Get information about the mesh. This includes the nspace of the data, the coordinate extents, and the dimensions.
2. Get information about the node data and cell data. One of the attributes of the data is an id. This id is used to indicate to the Graphics Display Kit that the data has special meaning. Some of the special ids are used to indicate that the data represents normals, colors, uv(w)s and so on. The render method looks at all the data and retrieves any of the data that has a special id. It also retrieves the first node data and cell data it finds with a vector length of 1 as values to be converted to color information later in the conversion process.

At this point, structured fields may be rendered in one of three different ways: as an image, a mesh, or a volume. If the current camera is 2D and the mesh is uniform, the field is rendered as an image. If the camera is 3D and the mesh is 3-space and uniform, the field is rendered as a volume. In all other cases, the field is rendered as a mesh.
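This selection depends only on the camera type and the mesh, and can be expressed as a small decision function. The sketch below is illustrative; the parameter names are assumptions, not part of the Graphics Display Kit API.

```python
def structured_render_kind(camera_is_3d, mesh_is_uniform, nspace):
    """Pick the rendering for a structured field, per the rules above."""
    if not camera_is_3d and mesh_is_uniform:
        return "image"   # 2D camera + uniform mesh
    if camera_is_3d and mesh_is_uniform and nspace == 3:
        return "volume"  # 3D camera + uniform 3-space mesh
    return "mesh"        # all other cases
```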

The processing that takes place to render a structured field as a mesh is done a strip at a time. Each strip is converted into a triangle strip if a surface rendering mode is specified. Lines and points are also converted on a per-strip basis in a similar manner if a line rendering mode or point rendering mode is enabled. If more than one mode is on at the same time (for example, Gouraud shading and lines), the strip processing is done once for the surface and once for the lines. Processing per strip uses memory more efficiently when the cache is disabled because the coordinates are not actually stored. This means it is possible to render large fields as meshes with very little overhead. The processing steps outlined below are for the surface representation and are repeated for each strip in the mesh.

1. Get two rows of coordinates.
2. Remove null data if it is specified.
3. If there are no normals and the mesh is non-planar, calculate the normals. Otherwise, use the same normals for each strip.
4. If there are colors, get the colors for the strip. If there are RGB values, get the RGBs for the strip and convert them to colors. If there are node data or cell data values, get the values for the strip and generate colors for them.
5. If there are UV(W)s, get the UV(W)s for the strip.
6. Call the renderer-specific primitive routine. Either the 3D or 2D routine is called depending on the type of camera to which the object is attached.
7. If the object has caching enabled, add the renderable representation to the cache for the object.
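The per-strip loop above can be sketched as follows. This is an illustrative Python outline, not the actual C implementation; `get_row` and `emit_strip` are hypothetical stand-ins for coordinate retrieval and the renderer-specific primitive routine.

```python
def render_structured_mesh(nrows, get_row, emit_strip):
    """Render a structured mesh one strip at a time (illustrative sketch).

    Only two rows of coordinates are held in memory at once, which is why
    large fields can be rendered with very little overhead when the cache
    is disabled.
    """
    prev = get_row(0)
    for j in range(1, nrows):
        cur = get_row(j)
        # Interleave the two rows into triangle-strip vertex order:
        # p0, c0, p1, c1, ...
        strip = [v for pair in zip(prev, cur) for v in pair]
        emit_strip(strip)  # renderer-specific primitive call (3D or 2D)
        prev = cur         # the next strip reuses the current row
```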
4.3 Unstructured field rendering

Unstructured fields have one render method associated with them. The following V code illustrates how the render method has been assigned to unstructured fields.

Grid+Cells Render_Polyhedra {
   nnodes+read;
   coordinates+read;
   int+read+opt nnode_data;
   cell_set+read;
   ncell_sets+read;
   Data_Array+read+opt node_data[nnode_data];
   data_method+virtual+nosave render = "GDdraw_polyh";
   data_method+virtual+nosave pick = "GDpick_polyh";
};

When the object draw routine invokes the unstructured field render method, the first set of steps that takes place involves retrieving information from the field about the mesh and the node data or cell data.

1. Get information about the mesh. This includes the nspace of the data, the coordinate extents, and the coordinates array.
2. Get information about the node data and cell data. One of the attributes of the data is an id. This id is used to indicate to the Graphics Display Kit that the data has special meaning. Some of the special ids are used to indicate that the data represents normals, colors, uv(w)s and so on. The render method looks at all the data and retrieves any of the data that has a special id. It also retrieves the first node data and cell data it finds with a vector length of 1 as values to be converted to color information later in the conversion process.

The processing that takes place for an unstructured field is on a per cell set basis and is repeated for each cell set that is in the field. There are a number of different types of cell sets that are supported. The V code below shows the cell set types and the render methods that are associated with them.

Point Render_PointCells {
   data_method+virtual+nosave render = "GDdraw_point_cells";
   data_method+virtual+nosave pick = "GDpick_point_cells";
};
Line Render_LineCells {
   data_method+virtual+nosave render = "GDdraw_line_cells";
   data_method+virtual+nosave pick = "GDpick_line_cells";
};
Polyline Render_PolylineCells {
   data_method+virtual+nosave render = "GDdraw_polyline_cells";
   data_method+virtual+nosave pick = "GDpick_polyline_cells";
};
Polytri Render_PolytriCells {
   data_method+virtual+nosave render = "GDdraw_polytri_cells";
   data_method+virtual+nosave pick = "GDpick_polytri_cells";
};
Tri Render_TriCells {
   data_method+virtual+nosave render = "GDdraw_tri_cells";
   data_method+virtual+nosave pick = "GDpick_tri_cells";
};
Quad Render_QuadCells {
   data_method+virtual+nosave render = "GDdraw_quad_cells";
   data_method+virtual+nosave pick = "GDpick_quad_cells";
};
Polyhedron Render_PolyhedronCells {
   data_method+virtual+nosave render = "GDdraw_polyh_cells";
   data_method+virtual+nosave pick = "GDpick_polyh_cells";
};

The processing for each type of cell set is similar but slightly different. This section outlines the processing for the triangle cell set type when a surface rendering mode is specified. Lines and points are also processed on a cell set basis. If more than one rendering mode is on at the same time, processing is done for each mode that is enabled.

1. Get the render method that is associated with the cell set and invoke it. Just as there is a render method associated with each field, there is a render method associated with each different type of cell set.
2. Get the number of cells (that is, the number of primitives).
3. Get the number of nodes per cell. For a triangle cell set, this is always three.
4. Get the connectivity array for the cell set. This array contains the index into the coordinate array for each vertex in each triangle in the cell set.
5. Remove null data if it is specified.
6. Convert the triangles into a triangle strip for rendering efficiency.
7. Gather the vertices in the triangle strip using the connectivity array.
8. Gather the node data present using the connectivity array. Node data gathered includes normals, colors, and uv(w)s.
9. Gather the cell data present using the connectivity array. Cell data gathered includes normals and colors.
10. Call the renderer-specific primitive function. Either the 3D or 2D routine is called depending on the type of camera to which the object is attached.
11. If caching is enabled, add the renderable representation to the cache for the object.
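Steps 3, 4, and 7 through 9 above amount to a gather through the connectivity array. The Python sketch below is illustrative only; the function name and data layout are assumptions, not the Graphics Display Kit implementation.

```python
def gather_tri_cells(coords, connectivity, node_data=None):
    """Gather per-vertex coordinates (and optional node data) for a
    triangle cell set using the connectivity array (illustrative)."""
    nodes_per_cell = 3  # always three for a triangle cell set
    ncells = len(connectivity) // nodes_per_cell
    vertices, values = [], []
    for c in range(ncells):
        for k in range(nodes_per_cell):
            # The connectivity array holds the index into the coordinate
            # array for each vertex of each triangle in the cell set.
            idx = connectivity[c * nodes_per_cell + k]
            vertices.append(coords[idx])
            if node_data is not None:
                values.append(node_data[idx])
    return vertices, values
```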
4.4 "Tiled" field rendering

Both an array of meshes and a uniform volume can be rendered in a tiled manner. This is done by using the AVS/Express field to represent both the mesh and the volume and adding some additional parameters to control how the tiling is done. The following V code represents the data structures that will be rendered.

group Mesh_Array_NoXform {
   int start;
   int width;
   int height;
   int orientation;
   int mode;
   int border_width;
   Mesh &MeshIn;
   int nmesh;
   Mesh+Data MeshArr[nmesh];
};
Mesh_Array_NoXform+OPort Mesh_Array {
   DefaultXform+opt xform;
   DefaultXform+opt tile_xform;
};
group Tiled_Volume_NoXform {
   int start;
   int width;
   int height;
   int orientation;
   int mode;
   int border_width;
   Mesh_Unif+Node_Data+Dim3+Space3 &VolIn;
};
Tiled_Volume_NoXform+OPort Tiled_Volume {
   DefaultXform+opt xform;
   DefaultXform+opt tile_xform;
};

There are two "tiled" render methods, one for the array of meshes and one for the uniform volume. The following V code illustrates how these render methods have been assigned to "tiled" fields.

Mesh_Array Render_MeshArray {
   data_method+virtual+nosave render = "GDdraw_mesh_array";
   data_method+virtual+nosave pick = "GDpick_mesh_array";
};
group Render_UnifVol {
   int start;
   int width;
   int height;
   int orientation;
   int mode;
   int border_width;
   data_method+virtual+nosave render = "GDdraw_volume_tiled";
   data_method+virtual+nosave pick = "GDpick_volume_tiled";
};

When the object draw routine invokes the data-specific render method for an array of meshes, the following steps are taken:

1. Get the tile attributes. This includes start, width, height, orientation, mode, and border_width.
2. Get the array of meshes and the reference mesh (that is, MeshIn).
3. Update the extents of the field using the tile attributes. This allows normalization to work properly.
4. Get the render method for the mesh array. This is done once since the method is the same for each mesh in the array.
5. Using start, width and height, call the render method for each mesh. This leverages the existing render methods that exist for the various types of meshes.
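Steps 4 and 5 reduce to a loop that reuses one shared render method across the selected tiles. The sketch below is illustrative: the function name is hypothetical, and tile placement, orientation, and borders (which the tile transform handles) are omitted.

```python
def render_mesh_array(meshes, start, width, height, render_method):
    """Render the tiles selected by start, width, and height by calling
    the mesh array's shared render method on each mesh (illustrative).

    The method is looked up once because it is the same for every mesh
    in the array.
    """
    ntiles = width * height
    count = min(ntiles, len(meshes) - start)  # don't run past the array
    return [render_method(meshes[i]) for i in range(start, start + count)]
```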

When the object draw routine invokes the data-specific render method for a uniform volume, the following steps are taken:

1. Get the tile attributes. This includes start, width, height, orientation, mode, and border_width.
2. Get information about the mesh of the volume. This includes the nspace of the data, the coordinate extents, and the dimensions. Update the extents of the field using the tile attributes. This allows normalization to work properly.
3. Get information about the node data and cell data. Since all or part of the volume is to be rendered as an array of images, the only type of data that matters is the first node data with a vector length of 1. This can be converted to color information during the rendering process.
4. Call the renderer-specific primitive function. The primitive function only exists for 2D cameras.
5. If caching is enabled, add the renderable representation to the cache for the object.
4.5 Text rendering

Text in AVS/Express is not represented in the field data structure. Instead, a separate data structure is used to represent text strings. The following V code represents annotation and stroke text in AVS/Express.

group TextAttribs {
   int align_horiz;
   int align_vert;
   int drop_shadow;
   int bounds;
   int underline;
   int lead_line;
   int radial;
   int do_offset;
   float offset[3];
   int color;
};
group StrokeTextAttribs {
   int font_type;
   int style;
   int plane;
   int orient;
   int path;
   int space_mode;
   float spacing;
   float angle;
   float height;
   int expansion;
   float width;
};
TextAttribs Text_NoXform {
   string str;
   int nspace;
   float position[nspace];
   int stroke = 0;
   StrokeTextAttribs StrokeTextAttribs;
};
Text_NoXform+Xform Text {
   float+write min_vec[nspace];
   float+write max_vec[nspace];
};

TextAttribs TextValues {
   int color;
   string text_values[];
   int stroke = 0;
   StrokeTextAttribs StrokeTextAttribs;
};
Grid+Xform+TextValues TextField;

Annotation text has two render methods associated with it, one for a single text string and one for an array of text strings. The following V code illustrates how the render methods have been assigned to annotation text.

Text_NoXform Render_Text {
   data_method+virtual+nosave render = "GDdraw_text";
   data_method+virtual+nosave pick = "GDpick_text";
};
TextField Render_TextArray {
   data_method+virtual+nosave render = "GDdraw_text_array";
   data_method+virtual+nosave pick = "GDpick_text_array";
};

Stroke text has two render methods associated with it, one for a single text string and one for an array of text strings. The following V code illustrates how the render methods have been assigned to stroke text.

Text_NoXform Render_StrokeText {
   stroke = 1;
   data_method+virtual+nosave render = "GDdraw_stroke_text";
   data_method+virtual+nosave pick = "GDpick_stroke_text";
};
TextField Render_StrokeTextArray {
   stroke = 1;
   data_method+virtual+nosave render = "GDdraw_stroke_text_array";
   data_method+virtual+nosave pick = "GDpick_stroke_text_array";
};

When the object draw routine invokes a data-specific render method for text, the set of steps taken involves retrieving information about the text.

1. Get the text attributes. This includes alignment, offset, and boolean flags for the drop shadow, bounds, underline, lead line, and radial options. This is done once for all text render methods.
2. Get the text string. For the text array render method, this is repeated for each text string.
3. Get the text position. For the text array render method, this is repeated for each text string.
4. Call the renderer-specific primitive function. Either the 3D or 2D routine is called depending on the type of camera to which the object is attached.
5. If caching is enabled, add the renderable representation to the cache for the object.
4.6 Object cache

As mentioned in previous sections, each object can have a cache associated with it. The cache contains a renderable representation of the data that is attached to the object. The cache can be user-specified on a per object basis. By default, caching is enabled.

If the cache is enabled, when the data is initially rendered, it is added to the cache after rendering. Subsequent renderings use the cache if it exists and is valid. A number of different circumstances cause the cache to be invalidated. For example, the cache is invalidated if the data is changed due to the user changing the level on the isosurface module. A change in the rendering mode of the object also invalidates the cache, for example, if the user turns off surface mode and enables line mode.

The Graphics Display Kit supports both 3D and 2D cameras. Any object connected to a camera is rendered according to the type of that camera. The render space (rspace) is defined to be 2 if the camera is 2D and 3 if the camera is 3D. The type of camera attached to an object affects the format of the cache, and connecting an object to a new camera may cause the cache to be invalidated.

The space of the data (nspace) also has an effect on the format of the cache. If the data has XY coordinates, nspace is defined to be 2. If the data has XYZ coordinates, nspace is defined to be 3.

The idea is to keep the cache in a format that is most suitable for 3D rendering if the object is attached to a 3D camera and most suitable for 2D rendering if the object is attached to a 2D camera. A conflict arises when the same object is attached to both 3D and 2D cameras. This can happen if a user has taken an orthoslice from a volume and wants to display it as a mesh in a 3D camera and as an image in a 2D camera.

The cache is always kept in an optimal form for 3D rendering. To determine the nspace of the data in the cache, you need to consider the rspace of the camera (that is, 2D or 3D) and the nspace of the original data. The nspace of the data in the cache is the maximum of the nspace of the data and the rspace of the camera.

There are a number of different cases:

- nspace equals rspace. This happens if nspace and rspace both equal 2 or both equal 3. In either case, the nspace of the cache is the same as the nspace of the original data.
- nspace is less than rspace. In this case, nspace equals 2 and rspace equals 3. The data in the cache is promoted to have nspace equal to 3 with Z values of 0. An example of this is the output of orthoslice, which has nspace equal to 2, being rendered in a 3D camera.
- nspace is greater than rspace. In this case, nspace equals 3 and rspace equals 2. The data in the cache has nspace equal to 3. Note that data is promoted in nspace but never demoted. This is the case where a 3-space object is being rendered in a 2D camera; the transformation utilities ignore the Z values.
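The cache-format rule can be expressed in a few lines. The Graphics Display Kit does this internally; the Python sketch below is illustrative, and its names are assumptions.

```python
def cache_coords(coords, nspace, rspace):
    """Return coordinates in cache format (illustrative sketch).

    The cache's nspace is the maximum of the data's nspace and the
    camera's rspace. 2-space data cached for a 3D camera is promoted
    with Z = 0; data is never demoted, even when a 3-space object is
    rendered in a 2D camera (the transformation utilities simply
    ignore Z in that case).
    """
    cache_nspace = max(nspace, rspace)
    if cache_nspace == nspace:
        return [tuple(c) for c in coords]
    pad = (0.0,) * (cache_nspace - nspace)
    return [tuple(c) + pad for c in coords]
```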

As mentioned earlier, connecting an object to a camera of a different type can cause the cache to be invalidated. This happens if an object with nspace equal to 2 that is attached to a 2D camera is also attached to a 3D camera. In this case, the cache, which would have had nspace equal to 2, is invalidated and rebuilt upon rerendering with nspace equal to 3. This also happens if an object with nspace equal to 2 that is attached to both a 3D and a 2D camera is then detached from the 3D camera. The cache, which would have had nspace equal to 3, is rebuilt upon rerendering with nspace equal to 2.

If we have an orthoslice connected to both a 3D view and a 2D view and caching is enabled, the cache will have the data as nspace equal to 3. As a result, the rendering of the 3D view is optimal but the 2D view does not render the orthoslice as an image but as a mesh due to the format of the cache. To work around this conflict and render the orthoslice as an image, you add another Graphics Display Kit object into the hierarchy so the object rendered in the 2D camera can have its own cache. Note that the only additional memory that this requires is an additional copy of the cache for the object in the 2D camera. There is still only one copy of the data being rendered; there are just two copies of the cache for the data. This situation can also be completely avoided by not having the cache enabled for the object that is being rendered in both cameras. In this case, it is not necessary to add the extra object into the hierarchy.

The figure below shows the case where an additional Graphics Display Kit object has been introduced into the hierarchy. This allows the orthoslice to be rendered as a 3D mesh in the 3D view and as an image in the 2D view.



Copyright © 2001 Advanced Visual Systems Inc.
All rights reserved.