Rendering Infrastructure


Under Construction

This will be a design document which describes the major features (and likely the non-features or anti-features) of a rendering infrastructure which allows VTK-m to function as a standalone library which can output images. Given that algorithms like ray casting and volume rendering can be highly data-parallel and are being implemented using VTK-m algorithms, it makes sense to hook these up to a basic infrastructure which can output these results.

(Note that this will start as a description of the EAVL rendering infrastructure, but will evolve to a VTK-m specific design, which can then be implemented as a modification of EAVL's.)


The entire point is a lightweight, standalone rendering infrastructure that can output 1D, 2D, and 3D plots, with basic axes and annotations, including text.

Also important is the ability to output to vector formats like PostScript. (Note: EPS seems to work better in practice.) Polygons and line segments can be emitted either as vector primitives into these EPS files, with clipping, or rendered with a raster renderer (GL) whose pixels are embedded in the viewport while still using vector annotations. Rendering raytraced/software volume renderings to a foreground OpenGL window, with annotations, should be as easy as saving to a file.

Another capability implemented was that multiple windows could render the same scene, and you could switch renderers without having to rebuild the scene. In our usage scenarios for VTK-m like in situ, this may not be as important, but was not difficult to allow for, and could be useful in a number of interactive scenarios.

Implementation Considerations


Anything potentially OpenGL- or Mesa-dependent must be defined in header files only. We do not want to deal with symbol mangling or multiple-linkage issues in, or required by, our library.


Having no explicit third-party dependencies means any annotation support must be self-contained. Initially, this means bitmap fonts. For EAVL, I used the Liberation 2.0 fonts, which are licensed under the SIL Open Font License. (The earlier 1.x series Liberation fonts have licensing problems.) Specifically, EAVL contains Liberation 2 Mono, Sans, and Serif as bitmap fonts along with spacing metrics. For the ASCII character set, these files take 500KB each as source and 160KB each as object files. (I.e., multiply these sizes by 3x if you want to include all three fonts, so about half a megabyte in final binary form for all three.)

Design and Usage


   // create a window
   eavlColor bg(0,0,0);
   eavlRenderSurface *surface = new eavlRenderSurfacePS;
   eavlScene *scene = new eavl3DScene();
   eavlSceneRenderer *renderer = new eavlSceneRendererSimpleRT;
   eavlWorldAnnotator *annotator = new eavlWorldAnnotatorPS;
   eavl3DWindow *window = new eavl3DWindow(bg, surface, scene, renderer, annotator);
   // set up a plot for the data set and add it to the scene
   eavlPlot *plot = new eavlPlot(dataSet, CellSetName);
   scene->plots.push_back(plot);
   // set the view
   window->view.TrackballRotate(.4,.3, -.3,-.2);
   // paint
   window->Paint();
   // save the window
   surface->SaveAs("output.eps", eavlRenderSurface::EPS);

Main Classes

Render Surface: This is the output target, like an OpenGL context or a PostScript file. It also handles any image-space annotations, like color bars or 2D axis bars/text and chart titles, and it knows how to configure the target transformations for image- or screen-space. It can also save itself (and all of its contents) to a vector or raster image file, depending on its type.

World Annotator: This handles annotations in "world" or "scene" space, like pick point letters, 3D axes/text, or other callouts pointing to or relative to scene geometry.

Scene Renderer: This handles the mapping of scene geometry into something the render surface understands (like pixels or PostScript commands). It has a default implementation which decomposes data sets into only points, line segments, triangles, and tetrahedra. Derived classes can override at one of two points: drawing a whole 1D/2D/3D data set, or handling individual primitives as decomposed by the base class. The base class also maps all variants of a primitive down to a single canonical type; e.g., it will map any triangle (whether it has node/cell/no normals and node/cell/no scalars) into a triangle with node normals and node scalars.

Window: This handles setting up the viewport and annotations specific to a type of window (like 2D, 3D, 1D, Polar).

Scene: This is essentially the list of plots to render.

Plot: A plot contains the data set to render, the name of an optional field, cellset, and normal array from the data set to render, and some rendering attributes like log scaling and color table.

Concrete Subtypes

Render Surface

  1. eavlRenderSurfaceGL is the base class for dealing with OpenGL rendering contexts. It does not create or activate any contexts itself, so it is used for on-screen rendering e.g. within a QGLWidget.
  2. eavlRenderSurfaceOSMesa derives from eavlRenderSurfaceGL but includes context creation for an OSMesa pixel and depth buffer.

World Annotator

  1. eavlWorldAnnotatorGL draws screen-space lines and text using OpenGL commands.
  2. eavlWorldAnnotatorPS is currently unimplemented, as world-space vector annotations in PostScript are of limited use (they would have to ignore the depth buffer), but it could be implemented.

Scene Renderer

  1. eavlSceneRendererSimpleGL is the base OpenGL implementation that implements the minimum necessary (namely a single triangle/edge/point with node normals and node scalars) and allows the base class to decompose a data set.
  2. eavlSceneRendererGL is a more optimized implementation derived from eavlSceneRendererSimpleGL that maps a whole dataset to OpenGL more efficiently (and keeps quads as quads).
  3. eavlSceneRendererRT is the data-parallel ray tracing renderer. It keeps its own internal pixel buffer.

There are others as well: VR for volume rendering, and a SimpleRT for unoptimized ray tracing, both of which keep their own internal pixel buffers.


Window

  1. eavl1DWindow, eavl2DWindow, and eavl3DWindow are all similar. They initialize viewports and can render any window-specific annotations. These also handle legends. Right now, the 2D and 3D windows will make a color table annotation from the first plot. The 1D window will create a solid block color table suitable for a plot with lots of curves (e.g. labeling red as "A" and green as "B").


Scene

  1. Base class only.


Plot

  1. eavlPlot is the base class and gets things set up for rendering, like finding the right field pointers and extracting data and spatial extents. It also handles transforms like polar.
  2. eavl1DPlot is the only specialization; instead of mapping all coordinate fields to spatial coordinates, it treats the first coordinate as X and one of the field values as Y. It can also handle e.g. "bar" plots.



What works well

  • Ability to easily switch different renderers on the same scene, and to render the same scene in more than one window (e.g. with multiple views or other renderer settings)
  • Ability to output pure vector EPS files, or EPS files with raster images (raytraced) and vector annotations
  • The fact that the base scene renderer will decompose everything all the way into a single type of point/line/tri/tet makes implementing new renderers easy.

Needs fixing/improving

  • Caching -- different renderers need to handle updates differently. E.g., point sizes and other geometry changes might trigger a BVH rebuild, but color table changes should not. We're currently handling this okay with a basic caching mechanism in the base scene renderer, but it probably needs better infrastructure to let renderers cache what they need and know what changes are occurring.
  • OpenGL scene renderer doesn't know about the OpenGL render surface. Right now we don't have the ability to capture pixels from an OpenGL-based scene renderer, but we need it so we can e.g. embed OSMesa raster images into an EPS file -- and at the same time, we need to avoid a glReadPixels/glDrawPixels round trip when both the scene renderer and the render surface are OpenGL, because the renderer is already drawing to the visible buffer. Probably some "onscreen" or "active" concept is fine, but perhaps OpenGL is special enough that we can embed that concept right into the API.
  • Transformations make a full copy of the data set points. Maybe okay, but not highly efficient.

Minor wants/desires

  • Image-space annotations are handled by the render surface, while world-space annotations have their own class. It works fine, and there is a reason for the split, but I have a nagging feeling that it is a minor inconsistency.
  • Naming: really, naming in general should be re-thought for VTK-m.
  • Scenes and plots don't have dimensionality, but windows do and renderers can accommodate them -- is that okay? (Again, there's a reason for it, but I want to make sure it's not inconsistent or otherwise a problem.)
  • Cylinders vs. lines, and spheres vs. points: there is an inconsistency in visual size that needs to be resolved, perhaps by scaling e.g. sphere size in physical space based on the camera so it maps to a coherent visual size.
  • Better font rendering; mipmapped bitmap fonts are okay and very efficient, but I would love something better without depending on something like FreeType. Perhaps distance-field fonts are a sufficient improvement without going overboard.
  • Would love a rasterizing offscreen renderer that does NOT depend on Mesa. We could probably write one that works fine with minimal effort, and even if it's 10x slower than Mesa, that will be enough to test out ideas. We could fall back to RT, but we need to think about accuracy, BVH build times, etc., and whether it's a good enough replacement. (I'm not saying RT can't replace it here, just that we need to think about it before abandoning other replacements.)