
    Cameras and Viewing

    © Justin Couch 1999

    Earlier in the chapter, we took a quick look at Views and ViewPlatforms so that you could see the scene graph. If you tried to run that code as-is, you will have found that it generated null pointer exceptions: we left a few pieces out. This section fills them in.

    The combination of View and ViewPlatform forms the camera object that we will discuss in Chapter 7, Navigation Techniques. To match the ideas expressed in that chapter, it makes sense to build a class that represents a camera and the ways a user may manipulate it. Those ideas include different types of control (first, second and third person), constraints on what the camera can do, and making geometry move with the camera.

       Note
      The camera models demonstrated here also match very nicely with the concepts expressed in 3D User Interface Techniques with Java 3D by J. Barrilleaux. The concepts are defined in much greater detail than can be expressed here and it is recommended reading for those seriously interested in the modelling side of the code.
    Part of modelling a camera in a virtual world is modelling the user of the camera. You, as a real human, occupy a finite volume of space; you can bump into things, jump over something if it is small enough, and be either left or right handed. To make a complete view and camera representation, we need to model many of the same characteristics in the 3D world.

    PhysicalBody

    The first part of modelling your virtual body is representing how you are looking at the world. The PhysicalBody class represents how your head is located in the world. In the real world, you rarely walk around with your head at ground level, and most people have two eyes set some distance apart.

    The following head-related attributes are modelled by the PhysicalBody class:

    • Eye position relative to the center of the head. This is set using the setLeftEyePosition/setRightEyePosition methods. Mainly used when making stereo projections for HMDs and similar devices.
    • Ear position relative to the center of the head. This is set using the setLeftEarPosition/setRightEarPosition methods. It can be used to model the appropriate sound levels to go to each ear based on the environment. If you have 3D audio rendering capabilities on your sound card (starting to become very common in PC devices) then this can be used to control how the 3D sound is projected to you as well.
    • Eye height from the ground. This is the most commonly used attribute of the body representation because it can be used for automatic terrain following. It is particularly important in high-end devices such as CAVEs, where the user is physically inside the projected 3D environment and the rendering should match reality to prevent motion sickness and similar physiological problems. The value can be set with the setNominalEyeHeightFromGround() method.
    • Eye position relative to the screen. Fairly uncommon in usage, but it can be used if/when you need more control over the stereo projection. If you consider the eyepoint as the vertex of a pyramid defined by the perspective projection, and then place the screen between you and the objects, this is the distance between your eye and the screen. The further the screen is from your eye, the smaller the effective field of view (for a constant-sized screen).
    • Head tracking transform. If you do happen to have a head-tracking device, the setHeadToHeadTrackerTransform method can be used to control any scaling or offset calculations that need to be done.
    The attributes in this class can be used in a two-way configuration. If you are using stereo goggles like an HMD, you can use the eye positions to make sure that the goggles are not doing something unnatural to your eyes. At the same time, the height above the ground may be used to create automatic terrain following for the avatar.
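    Configuring these attributes is just a series of setter calls. The following sketch shows the idea; the numeric offsets are illustrative values only, not recommended settings:

```java
// A sketch of configuring a PhysicalBody. All values here are
// illustrative, not recommendations for any particular device.
PhysicalBody body = new PhysicalBody();

// eyes roughly 64mm apart, centred about the middle of the head
body.setLeftEyePosition(new Point3d(-0.032, 0, 0));
body.setRightEyePosition(new Point3d(0.032, 0, 0));

// ears set wider and slightly behind the eyes
body.setLeftEarPosition(new Point3d(-0.08, -0.03, 0.095));
body.setRightEarPosition(new Point3d(0.08, -0.03, 0.095));

// an average standing eye height in metres, for terrain following
body.setNominalEyeHeightFromGround(1.68);
```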

    PhysicalEnvironment

    While the previous class models the basic characteristics of your body needed for rendering, the PhysicalEnvironment class models the computer environment that your body sits in. We've mentioned input devices a few times before; the PhysicalEnvironment is the class used to manage and install the various devices available on your computer. Typically these features won't be used in a PC-based app, as every machine is different. However, if you are building a one-off system such as a large virtual environment or other exotic hardware, then these capabilities are very useful.

    Audio Devices

    Of all the device types available, the audio device is the one that will be installed most of the time: if you wish to have sound in your world, an audio device must be installed. Another reason for installing an audio device explicitly is to choose between several that may be available on your system; for example, a standard FM OPL3 card and a 3D spatialisation card.

    You do not normally write an audio device yourself. Instead, the drivers either come from a manufacturer or are provided as part of the JavaSound Java Media API set. There are two separate classifications of audio devices: standard stereo devices are represented by AudioDevice, while 3D spatialisation devices, such as those capable of Dolby THX output, subclass AudioDevice3D.
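    Installing a device is a matter of registering it with the PhysicalEnvironment. A minimal sketch, assuming the JavaSound-based mixer shipped with the Java 3D utilities (com.sun.j3d.audioengines.javasound.JavaSoundMixer) is available:

```java
// Install a JavaSound-based audio device on the physical environment.
// JavaSoundMixer is an AudioDevice3D implementation from the Java 3D
// utility packages; any other AudioDevice could be used the same way.
PhysicalEnvironment env = new PhysicalEnvironment();

JavaSoundMixer mixer = new JavaSoundMixer(env);
env.setAudioDevice(mixer);
mixer.initialize();   // open the underlying sound hardware
```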

    Input Device

    If you have a specialised device, like the classic Mattel PowerGlove, you might want to create a new input device. The InputDevice interface, in combination with the Sensor class, is used to represent any sort of external input device. Between the two, it is possible to represent any arbitrary input device, including multiple-button systems like the current crop of high-end joysticks and throttle controls.
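    A skeleton of such a device looks like the following. The class name is hypothetical and the hardware-reading code is left as a stub; the point is the shape of the InputDevice contract and how readings are handed to the Sensor:

```java
// Skeleton of a demand-driven InputDevice with a single 6DOF sensor.
// GloveDevice is a made-up name; the hardware access is stubbed out.
public class GloveDevice implements InputDevice
{
    private Sensor sensor = new Sensor(this);
    private Transform3D read = new Transform3D();

    public boolean initialize() { return true; }  // open the hardware here
    public void close() { }                       // and release it here

    public int getProcessingMode() { return InputDevice.DEMAND_DRIVEN; }
    public void setProcessingMode(int mode) { }

    public int getSensorCount() { return 1; }
    public Sensor getSensor(int index) { return sensor; }

    public void setNominalPositionAndOrientation() { read.setIdentity(); }
    public void processStreamInput() { }          // not a streaming device

    public void pollAndProcessInput()
    {
        // fetch the latest values from the hardware, then pass the
        // new transform on to the sensor for the behaviour system
        sensor.setNextSensorRead(System.currentTimeMillis(), read, null);
    }
}
```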

    Moving Geometry with the View

    Probably the most important aspect of setting up camera objects is being able to associate some geometry with the camera and have it move around with the view. This Head-Up Display (HUD) forms part of the scene graph and can contain normal 3D objects. The result is like a 3D dashboard built from that geometry.

    A ViewPlatform is just a leaf node as far as the scene graph is concerned. It has no special properties that make it different from other objects, so it may be placed anywhere in the scene graph. Any grouping node that you place it under will affect the position and orientation of the camera that it represents. If the camera is located under a BranchGroup that is removed, then that camera will no longer render the scene. If the camera is located under a TransformGroup, then changing the transform in the group results in the camera moving according to the new transform too (although only standard rotation and translation transforms make sense; shears don't).

    Say you want to build the standard amusement-park camera model that is located in a virtual rollercoaster. The rollercoaster has defined geometry that always exists regardless of whether the view is on the ride or not. To make the camera move with the rollercoaster, you simply make sure that the ViewPlatform is placed inside the geometry that represents the rollercoaster car. You apply a transform just above the ViewPlatform to make sure that it is sitting in the correct spot relative to the car, and when the car moves, so does your camera.
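    The scene graph for that arrangement can be sketched as follows. The carGeometry node and the seat offset are illustrative stand-ins for whatever your rollercoaster model actually contains:

```java
// Attach the camera to a moving object: the ViewPlatform lives under
// the same TransformGroup that moves the rollercoaster car, with a
// small extra transform to seat the viewer correctly.
TransformGroup car_group = new TransformGroup();
car_group.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
car_group.addChild(carGeometry);      // the car model itself (assumed)

// place the viewer a little above the car's origin
Transform3D seat_offset = new Transform3D();
seat_offset.setTranslation(new Vector3f(0, 0.8f, 0));

TransformGroup seat_group = new TransformGroup(seat_offset);
seat_group.addChild(new ViewPlatform());
car_group.addChild(seat_group);

// whenever car_group's transform is updated to move the car,
// the camera follows for free
```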

    The HUD principle employs the opposite approach: instead of moving your view wherever the geometry goes, you move the geometry wherever your view goes. In practice, though, there is very little difference in the way the scene graphs are structured. Figure 10.1 illustrates the differences between the two approaches. As you can see, the only difference is the relative position of the view platform in the scene graph to the root transform that is used to drive everything about. Where the difference comes about is in how the transform is driven: in the first case the camera is moved by the geometry it is associated with, while in the second the camera is moved directly.

    Figure 10.1 The scene graph used for a rollercoaster ride (top) and for a Head Up Display (bottom).

    Scene graph descriptions

    To create such a structure, we need to build almost exactly what you see in the image. Inside our camera model we need a group node that is the container for all the HUD objects:

    private static final double BACK_CLIP_DISTANCE = 100.0;
    private static final Color3f White = new Color3f(1, 1, 1);
    
    private Group hud_group;
    private TransformGroup root_tx_grp;
    private Transform3D location;
    private ViewPlatform platform;
    private View view;
    private DirectionalLight headlight;
    
    private PhysicalBody body;
    private PhysicalEnvironment env;
    
    public Camera()
    {
            hud_group = new Group();
            hud_group.setCapability(Group.ALLOW_CHILDREN_EXTEND);
    
            platform = new ViewPlatform();
    
    
    With the basic top-level structure complete and a default view platform created, we need a root node to hold everything together. In this case, being a simple camera, a TransformGroup is used as the root.
      location = new Transform3D();
    
      root_tx_grp = new TransformGroup();
      root_tx_grp.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
      root_tx_grp.setTransform(location);
      root_tx_grp.addChild(platform);
      root_tx_grp.addChild(hud_group);
    
    A typical option for a camera is to include a headlight: a directional light that points exactly where the camera is pointing. Lights, like behaviours, always need a bounding region of influence. For cameras, we create a fixed-length headlight so that it acts like a miner's headlamp when viewing the world.
    private static final BoundingSphere LIGHT_BOUNDS;
    
    static
    {
            Point3d origin = new Point3d(0, 0, 0);
            LIGHT_BOUNDS =
                    new BoundingSphere(origin, BACK_CLIP_DISTANCE);
    }
    
      // create the basic headlight in the constructor code...
      headlight = new DirectionalLight();
      headlight.setCapability(Light.ALLOW_STATE_WRITE);
      headlight.setColor(White);
      headlight.setInfluencingBounds(LIGHT_BOUNDS);
      root_tx_grp.addChild(headlight);
    
    
    To finish off the construction of the camera we need to create the View object. The view needs both a PhysicalBody and PhysicalEnvironment in order to run. A few other bits and pieces are added to the View just to make sure that we can see everything and then it is added to the root of the camera mini scene graph.
      body = new PhysicalBody();
      env = new PhysicalEnvironment();
    
      view = new View();
      view.setBackClipDistance(BACK_CLIP_DISTANCE);
      view.setPhysicalBody(body);
      view.setPhysicalEnvironment(env);
      view.attachViewPlatform(platform);
    
      // attach everything to the camera's own root node. This assumes
      // Camera extends BranchGroup so it can be added to the scene graph.
      addChild(root_tx_grp);
    }
    
    That completes a basic camera object. To this, you will need to add methods that allow you to add and remove pieces of the HUD, and to change the location and orientation of the camera.
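    A minimal sketch of those supporting methods, built on the fields declared above; setLocation() and setHeadLight() match the calls made by the example constructWorld() code later on, while addHUDObject() is a hypothetical name for the HUD management:

```java
  // Move the camera to a new position by rewriting the root transform.
  public void setLocation(Vector3f position)
  {
    location.setTranslation(position);
    root_tx_grp.setTransform(location);
  }

  // Turn the headlight on or off (needs Light.ALLOW_STATE_WRITE, set
  // earlier in the constructor).
  public void setHeadLight(boolean on)
  {
    headlight.setEnable(on);
  }

  // Add geometry to the HUD. The ALLOW_CHILDREN_EXTEND capability set
  // on hud_group in the constructor permits this while the graph is live.
  public void addHUDObject(BranchGroup object)
  {
    hud_group.addChild(object);
  }
```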

    Code Implementation

    While the model presented so far is a pretty good representation of the real code that we would use, there are a couple of minor but important items to add.

    If you took this class and just added it to your scene graph, you would see nothing other than a grey area on the screen. The reason for this is that we've forgotten an important part of the connection - all of the scene graph in the world is fine, but if our renderer doesn't know anything about it, we're stuffed.

    If you recall some earlier discussions in this section, the connection between the scene graph and the outside world is the View class. From the UI perspective, we have the Canvas3D. To connect the two together, we need to call the addCanvas3D() method on the instance of View held by the camera, so Camera needs one more method to accommodate this.

      public void setCanvas(Canvas3D canvas)
      {
        view.addCanvas3D(canvas);
      }
    
    Now, to wrap up the rest of the example code from this chapter we need to add the camera to the rest of the world. This is done through a simple addition to the constructWorld() method of the main frame:
      private void constructWorld()
      {
        // create the basic universe
        universe = new UniverseManager();
    
        Camera cam = new Camera();
        Vector3f loc = new Vector3f(0, 0, 10.0f);
        cam.setLocation(loc);
        cam.setHeadLight(true);
        universe.addCamera(cam);
    
        cam.setCanvas(canvas);
    
        // add some geometry
        ExampleGeometry geom = new ExampleGeometry();
    
        universe.addWorldObject(geom);
        universe.makeLive();
      }
    
    And that is it. Now you just need to compile the code from the chapter and run it from the command line. You should see a square object on a black background appear before you.

      
