
    Geometry

    © Justin Couch 1999

    Now that you have a basic handle on what comprises the major portions of a scene graph, it is time to move on to the useful bits: creating objects that you can see on screen. In a word: Geometry.

    Geometry consists of a collection of points in 3D space. As seen in earlier parts of the tutorial, this 3D space may occupy one of several different roles, such as world space or display space. While a collection of 3D dots in space may be a useful visualisation technique for your data, it is not the most common mechanism. To make the objects look solid, we form relationships between these points to produce surfaces. Combine a number of these surfaces, throw in a splash of colour, and you end up with a solid-looking 3D object on the screen.

    BranchGroups

    One offshoot of the capability system is the effect it has on grouping nodes in particular. The idea of a group is to bring together related sub-graphs. In a lot of visualisation-type applications, you want to be constantly adding and removing items from these groups.

    It has been mentioned several times already that BranchGroups form the root of a scene graph. There is more to this statement than meets the eye, and you may be wondering why, since you can create objects from any grouping node, add and remove children from them, and everything appears fine. Once you are dealing with a live scene graph, however, it can be a completely different story: children may only be added or removed if the appropriate capability bits were set before the graph went live.
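
    As a minimal sketch of this (the capability constants belong to the standard Group class), allowing children to be added and removed once the group is live looks like the following:

      BranchGroup group = new BranchGroup();

      // these must be set before the group is compiled or made live
      group.setCapability(Group.ALLOW_CHILDREN_READ);
      group.setCapability(Group.ALLOW_CHILDREN_WRITE);
      group.setCapability(Group.ALLOW_CHILDREN_EXTEND);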

    Apart from the obvious ability to compile a scene graph, the branch group also performs a number of other optimisation and handling functions, particularly for user input. The various pickX methods allow you to deal with pointer input and selection of objects. These will be covered later, in Chapter 7 Navigation and Chapter 8 User Feedback. Picking may also be used for much more than just user input.
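
    As a rough sketch of the idea (branch_group is assumed to be a live BranchGroup; the ray's origin and direction here are arbitrary):

      PickRay ray = new PickRay(new Point3d(0, 0, 10),
                                new Vector3d(0, 0, -1));

      // paths to every pickable object whose bounds the ray intersects,
      // or null if nothing was hit
      SceneGraphPath[] paths = branch_group.pickAll(ray);

      if(paths != null)
        System.out.println("Picked " + paths.length + " objects");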

    When you compile a scene graph, the only point that you can guarantee will always remain the same is the branch group. It holds no renderable content itself, just the references to all of its children. Thus, when you want to remove a part of the scene graph, you can take out a single compiled and optimised chunk in one hit. There is no problem of having to decompile the scene graph just to remove a portion of it and then recompile it afterwards: removing a branch group causes minimal interruption to the rendering process.

    One interesting consequence is that a branch group may contain other branch groups as children, so all through the scene graph there will be little collection points of optimisation. Because you've probably lost the parent node of a branch group during compilation, you may not be able to remove that branch group directly at a later date. Since you will probably be keeping references to branch groups around (you might want to remove one or more of them), you would also need to keep references to their parent objects, and under compilation those parents may or may not still exist under the hood. To alleviate this problem, a detach() method is provided. Calling it automatically removes the branch group from its parent object.
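
    A minimal sketch of this in use (note that the ALLOW_DETACH capability must be set before the group is compiled or made live):

      BranchGroup removable = new BranchGroup();
      removable.setCapability(BranchGroup.ALLOW_DETACH);

      // ... build the children and attach it to the live scene graph ...

      // later: removes the group from its parent, wherever that now is
      removable.detach();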

       Note
      In testing that I've done, it is faster to add and remove a lot of little branch groups than to add/remove one branch group and make the modifications to it. The former shows an approximately linear increase in time with the number of objects, while the latter seems to be more or less polynomial.

    Geometry Arrays

    Before moving on to building your own pieces of geometry, you first need to understand how their most basic component, the GeometryArray, works. The GeometryArray class is the base class for anything that needs 2D or 3D coordinates to represent it - that is, everything in Java 3D geometry. A geometry array holds the following data:
    • 3D Coordinates representing the vertices of an object
    • Normals of either a face, line or vertex. Normals are used to calculate whether to draw a polygon (backface culling), or how to shade it with respect to any influencing lights.
    • Color values, with or without alpha (transparency) settings. Color in this form is used to represent colours on either a per-vertex or per-face basis. Per-vertex coloring can be seen in the ColorCube demo that comes with the standard Java3D install.
    • Texture coordinates. May represent both 2D and 3D texture coordinates. Texture coordinates are used to modify an image that is laid over polygons. For example, it is used to turn a square image into a spherical map that turns a sphere into a soccer ball.

    A single geometry array instance represents a single piece of geometry and contains at least one of the attributes mentioned above. It must always include the coordinates; flags passed to the constructor then allow you to add the other values for each coordinate.

    GeometryArray is an abstract class, so you can't create one directly. Instead, you will need to use one of the derived classes like IndexedQuadArray or IndexedTriangleFanArray. Which of these you need to become familiar with depends on the type of geometry you are modelling, and probably even more on how your raw data is presented: sometimes one form of geometry array is easier to build than another from the same raw data.

       Note
      There are many more derived classes to GeometryArray than the two noted here. For an in-depth look at every class, see the next chapter.
    To create an array representing a flat square with four-component colors and normals, you would use an IndexedQuadArray. The following code does the job:
      // the data that will be supplied for each vertex
      int flags =
            GeometryArray.COORDINATES |
            GeometryArray.COLOR_4 |
            GeometryArray.NORMALS;

      // 4 vertices in the given format, referenced by 4 indices
      IndexedQuadArray geom = new IndexedQuadArray(4, flags, 4);
    
    You will note that the first argument is the number four. This is because we know that we need 4 vertices to describe a flat square; you always need to provide the number of vertices in the object when you create a geometry array. The third argument is the number of indices that will be used to reference those vertices.

    After creating the basic array, we need to fill it with some data. First, the coordinates. Since we are dealing with a simple object, we'll declare all the points in a single hit as an array of doubles.

      // one (x, y, z) triplet per vertex, declared counter-clockwise
      // as seen from the front (positive Z) side of the square
      double[] coordinates = {
            -0.5, -0.5, 0,
             0.5, -0.5, 0,
             0.5,  0.5, 0,
            -0.5,  0.5, 0
      };

      int[] indices = { 0, 1, 2, 3 };

      geom.setCoordinates(0, coordinates);
      geom.setCoordinateIndices(0, indices);
    
    Note that we've declared the coordinates in counter-clockwise order as seen from the front of the square. Java 3D, like OpenGL, treats the counter-clockwise winding as the front face, and back faces are culled by default, so getting this order wrong can make the polygon invisible from the side you expect to see it. The indexed classes make no assumptions about how the vertices map onto quads, which is why we explicitly set the indices in the last line. It is much easier to match the two arrays up if you keep the coordinate declarations in a nice, logical order.

    The next step in building our flat square is to give the object some color at each corner. Colors follow the same pattern as the coordinates: first declare an array of the values and the indices, then use the appropriate set method. We can reuse the indices array from the coordinates to make life easy for ourselves. The only gotcha to remember here is that we have specified four components for each color.

      // four components (red, green, blue, alpha) per vertex
      float[] colors = {
            1, 1, 0, 0,
            1, 0, 1, 0,
            1, 0, 0, 1,
            1, 1, 1, 0
      };

      geom.setColors(0, colors);
      geom.setColorIndices(0, indices);
    
    Finally, we add the finishing touches by declaring normals for each vertex. For this simple example, they all point directly out of the square (along the positive Z axis). Normals should always be normalised too. This time, an alternative approach is taken:
      float[] normal = { 0, 0, 1 };
    
      geom.setNormal(0, normal);
      geom.setNormal(1, normal);
      geom.setNormal(2, normal);
      geom.setNormal(3, normal);
    
    which results in all four vertices having the same normal direction.
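
    One detail worth noting: the indexed array classes keep a separate index list for each kind of data. Because all four normals here are identical, the default indices happen to give the same result, but in general you would set the normal indices explicitly, just as we did for the coordinates and colors:

      geom.setNormalIndices(0, indices);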

    That is your basic GeometryArray. From this point, we need to turn it into a particular shape that can be rendered. Ignoring the appearance settings for the moment, we create a Shape3D to contain the geometry and add it to the scene graph.

    Shape3D shape = new Shape3D(geom);
    branch_group.addChild(shape);
    

    Customised Geometry

    Building a basic square is a relatively trivial exercise. When it comes to more complex items like a car engine block, you need to start thinking much more carefully about how you are going to create it in terms of Java3D's node structure.

    The major decision to make is how far down the chain of derived classes you go. For a complex object like the engine block, a standard QuadArray probably wouldn't be sufficient. Instead, you might want an object represented by a parametric surface such as NURBS or quadrics, custom built from parameters rather than pure coordinates.

    For primitive objects like cubes, spheres and similar fixed shapes, it is usually preferable to extend or just use one of the indexed base classes. Typically these sorts of objects directly extend one of the grouping classes and then build the geometry internally in one or more shape nodes (a sketch of this pattern follows the next paragraph).

    When you have an object whose shape is not directly determinable at creation time, it is usually easier to extend the basic GeometryArray class and provide all your own handling. This is certainly true of parameterised objects, or those the user interacts with to drag out the required shape.
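
    As a minimal sketch of the grouping-class pattern (FlatSquare is a hypothetical class; it builds a coordinates-only square internally and hangs it off itself as a shape node):

      public class FlatSquare extends Group
      {
        public FlatSquare()
        {
          // coordinates only; colors and normals could be added as before
          IndexedQuadArray geom =
                new IndexedQuadArray(4, GeometryArray.COORDINATES, 4);

          double[] coordinates = {
                -0.5, -0.5, 0,
                 0.5, -0.5, 0,
                 0.5,  0.5, 0,
                -0.5,  0.5, 0
          };

          geom.setCoordinates(0, coordinates);
          geom.setCoordinateIndices(0, new int[] { 0, 1, 2, 3 });

          // the geometry lives in an internal shape node
          addChild(new Shape3D(geom));
        }
      }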

    Appearance

    If you supplied just the basic geometry array, with no vertex colors specified, to the Shape3D class, you would end up with a default appearance: a mid-grey, plastic-looking object. The idea of the Appearance class is to provide global information about a particular shape. When you set an appearance, it affects the whole shape.

    Before we even dig into the Appearance class, you might be wondering how this corresponds with the per vertex coloring of the geometry array. The simple answer is that if colors are specified in the geometry, then these are used in preference to the global settings.

    Color settings are only part of the function performed by the appearance. The Appearance class controls all aspects of how the geometry is rendered: everything from transparency to whether to draw the geometry as points, lines or polygons. There are a number of attribute collection classes that assist this process; a short sketch showing several of them in use follows the list.

    RenderingAttributes
    Defines how to deal with the alpha values specified in the colour. It is used to control whether to use alpha, completely ignore it, or only use values above a certain minimum. It is also used to control how the depth buffer is set up for dealing with transparencies, and whether one should be used at all.
    PolygonAttributes
    Describes how polygons should be rendered. For example, you might decide that a polygon should always be rendered regardless of which direction it is viewed from. Rendering only the side facing the viewer is termed backface culling; the back face is the side of the polygon that points away from the set normals (so if you see missing faces on a geometry array, this may be the cause). The attributes may also be used to determine whether polygons are drawn as lines, points or filled areas (the drawing mode). The final interesting option is the ability to offset the polygons by a number of pixels in display space.
    PointAttributes
    If you've selected point drawing mode in the PolygonAttributes, this class is used to further refine whether the points should be anti-aliased or not. Also, it allows the point size in pixels to be set.
    LineAttributes
    If you've selected line drawing mode in the PolygonAttributes, this class is used to further refine whether the lines should be anti-aliased or not. Like point attributes, you can control the thickness of the lines and also the line style (dashed, dotted or combination).
    ColoringAttributes
    Describes how to deal with the color values when you've got to interpolate between values. The shading model used to represent the effects of lighting can also be controlled. This is particularly important in non-trivial worlds where different shading effects can make very large differences in rendering speed - even with hardware accelerated graphics. Of course, you may also use this to set the intrinsic color of the object before any shading is used.
    Material
    Where the coloring attributes control the intrinsic colour used before lighting takes effect, the Material is used to control how the object itself responds to light. For example, you may use this to set what color the object glows (emissive color), how it reflects light (specular color) and the basic shininess of the object (anywhere from matte to mirror finish). All of these colors may be set to completely different values. The subject and the effects of the different types of color could take a book by themselves, so we won't discuss them here.
    TransparencyAttributes
    Rendering attributes allowed us to control how to deal with the alpha values; alpha is another name for transparency. This class allows you to control how transparency is rendered. Various hints are available to select the nicest-looking or the fastest mode. As with coloring, the type of transparency rendering used can affect the performance of your world tremendously, even with hardware assistance.
    TexturingAttributes
    Textures are images that are laid over the geometry, typically to make it more realistic. These attributes allow us to determine how a texture is rendered. Should the texture blend with the underlying color (particularly if it itself contains transparent sections) or completely replace it (transparent parts of the texture making that part of the object see-through)? We may also control how much processing is performed to make the texture look correct for the underlying object. Because textures are images, we may also want to place only a certain part of the texture on the object, not the whole image; the texture transforms held in this class let us manipulate the texture in this way.
    TexCoordGeneration
    There are various models of how to apply a texture to a surface that is not parallel to the camera. This final attribute is used to control how we calculate the initial texture coordinates relative to the screen and the viewer's eye. Also, sometimes the texture may need to be wrapped around a spherical object rather than a flat face, so we use this class to tell the renderer whether this needs to be done.
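
    To make this more concrete, here is a short sketch showing a few of these attribute classes in use (the specific values are arbitrary):

      Appearance app = new Appearance();

      // filled polygons, with faces pointing away from the viewer culled
      app.setPolygonAttributes(
            new PolygonAttributes(PolygonAttributes.POLYGON_FILL,
                                  PolygonAttributes.CULL_BACK,
                                  0));

      // a red, fairly shiny surface under lighting
      Material material = new Material();
      material.setDiffuseColor(1, 0, 0);
      material.setSpecularColor(1, 1, 1);
      material.setShininess(75);
      app.setMaterial(material);

      // 50% transparent, using the nicest blending available
      app.setTransparencyAttributes(
            new TransparencyAttributes(TransparencyAttributes.NICEST, 0.5f));

      // the appearance is handed to the shape alongside the geometry
      Shape3D shape = new Shape3D(geom, app);
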
       Note
      Controlling the appearance of an object can involve a lot more than just setting the Appearance instance. Information given in the appearance settings is then used by both lighting and texturing to create the final look of the object. These are discussed in detail in Chapter 3 Lighting and Chapter 4 Texturing.

    Code Implementation

    Now we need to relate this to the code example that we are building up for this chapter. The geometry is contained in a class called ExampleGeometry. This class extends Shape3D so that everything stays neatly in one place. It looks something like this:
    public class ExampleGeometry extends Shape3D
    {
      private IndexedQuadArray geom;
      private Appearance appearance;
      private Texture texture;
    
      public ExampleGeometry()
      {
        constructGeometry();
        constructAppearance();
      }
    
      private void constructGeometry()
      {
      }
    
      private void constructAppearance()
      {
      }
    }
    
    Inside the constructGeometry() method, we perform the steps of creating the IndexedQuadArray that you saw earlier in the chapter. The final step of the method is to add the quad array to the class:
      addGeometry(geom);
    
    rather than adding it to a separate Shape3D instance.
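
    Assembled, constructGeometry() might look like this (it is simply the code from earlier in the chapter, condensed into the method):

      private void constructGeometry()
      {
        int flags = GeometryArray.COORDINATES |
                    GeometryArray.COLOR_4 |
                    GeometryArray.NORMALS;

        geom = new IndexedQuadArray(4, flags, 4);

        double[] coordinates = {
              -0.5, -0.5, 0,
               0.5, -0.5, 0,
               0.5,  0.5, 0,
              -0.5,  0.5, 0
        };

        float[] colors = {
              1, 1, 0, 0,
              1, 0, 1, 0,
              1, 0, 0, 1,
              1, 1, 1, 0
        };

        int[] indices = { 0, 1, 2, 3 };
        float[] normal = { 0, 0, 1 };

        geom.setCoordinates(0, coordinates);
        geom.setCoordinateIndices(0, indices);
        geom.setColors(0, colors);
        geom.setColorIndices(0, indices);

        for(int i = 0; i < 4; i++)
          geom.setNormal(i, normal);
        geom.setNormalIndices(0, indices);

        addGeometry(geom);
      }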

    Back in the main window class, we can now add some code to the constructWorld() method that creates an instance of the geometry and adds it to the universe manager that we created in the previous section.

      private void constructWorld()
      {
        // create the basic universe
        universe = new UniverseManager();
    
        ExampleGeometry geom = new ExampleGeometry();
    
        universe.addWorldObject(geom);
      }
    

      
