Chapter 10: Getting Started

Java3D is an extension API from Sun that provides 3D graphics capabilities to Java applications. It is not the only 3D API for Java. Several commercial APIs are available that provide bindings to more established APIs like OpenGL and Direct3D. At the other end of the scale are modelling languages like VRML that are very high level and conceptual in nature. All of these can be used to create the building blocks of a 3D application.

Java3D, as a relatively new 3D API, tries to learn a lot from the existing camps that have provided 3D graphics interfaces for programmers. It sits somewhere between the low-level APIs like OpenGL and high-level languages like VRML. In the end, however, it is still a programming API. You won't find an artist or your mum and dad building 3D worlds with Java3D.

In this chapter we'll start by giving you a basic understanding of the various parts of Java3D. For this we will be sticking to the core API as defined by the javax.media.j3d and javax.vecmath packages. Sun provides a number of extension and utility packages as part of its implementation, but these cannot be guaranteed for every distribution. Also, we assume that you can download and install Java3D by yourself.

10.1 The Basics

As we've mentioned, Java3D is a mid-level 3D graphics API. It provides a structured scene graph approach to representing objects in 3D space. On top of this it provides a system for building custom behaviours that act according to a variety of stimuli within the API. Low-level APIs usually don't provide this; it must be taken care of by the overall application. Java3D also provides a lot of glue capabilities, such as generic input device support and 3D sound. We'll go over these in more detail a little later.

In a real-world application, Java3D only provides the 3D part of the user interface. There is a lot of other support code needed to make it run, such as a window, mouse handling and other items. We'll now look at how Java3D fits with the rest of the Java APIs.

10.1.1 Fitting in with the Other Java APIs

Java3D exists as part of the Java Media APIs. The various APIs in this family provide the capabilities for integrating with various multimedia and Internet technologies. Java3D also works with parts of the standard Java 2 platform to provide integration with the windowing system and basic mouse input.

Java3D provides only 3D rendering of graphics and sound. If you want to use sound, you also need to use some of the other Java Media APIs (JavaSound) to source the audio input. Similarly, it does not provide raw device input capabilities. It provides a virtual device, and you then need to make use of other APIs, like the Java Communications API, to provide the low-level, device-specific handling.

In order to provide a decent level of performance, Java3D uses a lot of native code provided by the operating system libraries. On a Solaris-based machine, this means making use of OpenGL rendering. MS Windows users may use either OpenGL or Direct3D variants. At the time of writing neither Linux nor Macintosh support is available, but it is expected that OpenGL would be the underlying 3D API used on both. Native code use is restricted to only the final rendering steps, while most of the core features are written in Java. This means a lot of use of the AWT toolkit and Java2D APIs for providing basic windowing functionality.
Generally, what all this means is that to build a complete Java3D application, you are going to need a lot more than just the one API set. For example, as you will see later, much of our mouse handling is performed using standard AWT mouse handling. We also assume that you are familiar with the Java2D requirements for dealing with issues such as graphics configuration and window handling.

10.1.2 Resources

Because of the approach that we are taking in this book, you are going to require a lot of external help from various resources. Apart from finding yourself a good Java3D tutorial book, the following sites will be useful:

Java3D Homepage: http://java.sun.com/products/java-media/3D/
Java3D FAQ: http://tintoy.ncsa.uiuc.edu/~srp/java3d/faq.html
Java3D Repository: http://java3d.sdsc.edu/

Of course, all the code presented in this section can be found on the CD and the authors' websites.

10.1.3 The Basic Window

The first step in any Java3D application is to establish an area that can be rendered to. This is the Canvas3D class. The canvas provides a single view into your world and forms the basic interface with the windowing toolkit. Just like any other component, it may be added in any fashion, shown, hidden, given mouse input and display output.

Before we even start, we need to mention that everything, although using Java 2, will be developed using standard AWT for the windowing and mouse handling. Bearing in mind that Java3D uses native code for rendering, this immediately brings up a small problem: the canvas requires a native peer. Native peers are a nuisance if your application is written with Swing. In fact, this makes life very difficult for you in the case of Java3D. A workaround is available and you can find the information in the Java3D FAQ.

For simplicity, our basic application consists of a Frame with a simple menubar. To start with, the menu only allows us to close the application. The 3D window takes up all of the available space. We'll call the application class J3dTestFrame. This code is pretty straightforward, and you've probably done this a thousand times already, so we won't repeat it.

10.1.4 Canvas3D: Your Window to the World

With a basic window established, we now want to put in our 3D canvas. Creating a new Canvas3D takes a little more than a single line of code. In the real world, what the canvas paints onto can be any of a myriad of different devices. We could have a set of 3D shutter glasses, a CAVE environment or just your lowly monitor. Java keeps all of this device-specific information in a number of different classes. Java3D requires a graphics configuration, which must be extracted from the system and passed to the canvas with whatever options we need to work with.

3D also provides a number of extra features over the standard 2D display, such as stereo displays. To cater for this, Java3D provides an extended version of the standard java.awt.GraphicsConfigTemplate called GraphicsConfigTemplate3D. Apart from the basic information, this template adds information dealing with depth buffering support and stereo rendering. To set up your basic canvas with no options you would use the following code:

    GraphicsConfigTemplate3D tmpl = new GraphicsConfigTemplate3D();

    GraphicsEnvironment env =
        GraphicsEnvironment.getLocalGraphicsEnvironment();
    GraphicsDevice device = env.getDefaultScreenDevice();
    GraphicsConfiguration config = device.getBestConfiguration(tmpl);

    canvas = new Canvas3D(config);

Then to finish, you add the canvas to the parent component.
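If you want a concrete starting point, here is a minimal sketch of the sort of frame and canvas assembly just described. The class name J3dTestFrame comes from the text above, but the particular layout and menu handling shown here are our own illustration rather than the book's listing:

    import java.awt.*;
    import java.awt.event.*;

    import javax.media.j3d.Canvas3D;
    import javax.media.j3d.GraphicsConfigTemplate3D;

    // A basic frame with an exit-only menu and a Canvas3D that fills
    // all of the remaining space.
    public class J3dTestFrame extends Frame implements ActionListener
    {
        private Canvas3D canvas;

        public J3dTestFrame()
        {
            super("Java3D Test");

            // menubar with a single File > Exit item
            MenuBar menubar = new MenuBar();
            Menu file_menu = new Menu("File");
            MenuItem exit_item = new MenuItem("Exit");
            exit_item.addActionListener(this);
            file_menu.add(exit_item);
            menubar.add(file_menu);
            setMenuBar(menubar);

            // fetch a graphics configuration suitable for 3D rendering
            GraphicsConfigTemplate3D tmpl = new GraphicsConfigTemplate3D();
            GraphicsEnvironment env =
                GraphicsEnvironment.getLocalGraphicsEnvironment();
            GraphicsDevice device = env.getDefaultScreenDevice();
            GraphicsConfiguration config = device.getBestConfiguration(tmpl);

            // the canvas takes up all of the available space
            canvas = new Canvas3D(config);
            setLayout(new BorderLayout());
            add(canvas, BorderLayout.CENTER);

            setSize(600, 400);
            setVisible(true);
        }

        public void actionPerformed(ActionEvent evt)
        {
            // the only menu item is Exit
            System.exit(0);
        }
    }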
This establishes your basic, flat-screen 3D canvas. There are other options that you may add to this in your code. For example, you might want to add code that reads a system property to decide whether you should do stereo rendering, or to change the rendering quality.

10.2 Scene Graph Organisation

Once you have the basic window established, you need to put something in that window. There are objects to place, sounds to listen to and, above all, a view to put into the world so that you can see it. There are many different ways of doing this in general 3D graphics. At the low-level end, you just put in lines, points and polygons. At the high end, you just create objects and let the system take care of the rest.

Java3D uses the scene graph approach. A scene graph is a hierarchical approach to describing objects and their relationship to each other. For example, you would describe the connection of your hand relative to your arm. That way, when you move your arm, the hand moves with it. However, you can describe the rotation of the hand as an angle that is relative to the arm. This description of information relative to the parent object is termed a local coordinate system, and it is the heart of the scene graph approach to 3D graphics.

As you descend each level, there is a grouping structure. Typically this grouping structure contains objects of similar characteristics and always inherits something useful from the parent. Our hand/arm example is typical. Usually at each group there is the ability to move the object relative to the parent. This range of abilities can be described by the transform constraints introduced in Chapter 3. If you were to apply transform constraints to an object, here is where they would be used.

10.2.1 Describing the Universe

Java3D operates on a number of levels of progressive refinement of the scene graph. In the conceptual model of the world, the scene graph is a standalone entity. We can create a scene graph without needing anything else from Java3D to be present - not even a canvas.

Despite having the sexiest scene graph, what's the point if we can't see or play with it? Apart from the scene graph, we also need an on-screen place to render it, and we need to put a camera in there to connect the scene graph with the screen. How about mouse input, where a user might want to click on an object? Doing a little reading between the lines, this also suggests that it is possible to share the one scene graph between multiple windows. If we have all of these parts floating around, it also suggests a collection requirement to pull all of these bits together.

At the top level, we use a universe descriptor. A universe describes everything that we see and do within a particular world. The universe can be a pretty darn big place. An object a couple of billion units away may be pretty hard to see, and if we're stuck in the land of floating point numbers, it's pretty tricky to specify its position exactly. To take care of these problems, we introduce the concept of a locale. This provides a basic frame of reference to increase the precision with which we can specify a point in 3D space. A universe may have as many locales as needed to describe it. For example, if we were modelling the real universe, then there would probably be one locale for each galaxy.

From the locale level, you may now start building a scene graph. You do this by creating various groups and pieces of geometry in whatever order meets your needs.
In a typical application, you probably have a collection of pre-built classes that represent objects. Other times these objects are built on the fly in response to your data source(s), like a file or database. You then assemble these objects into groups as needed by the application in each locale.

In the Java3D model of the world, cameras are also placed inside this scene graph. There is a series of transforms that locate a view in the world, and then a couple of connecting classes used to represent you in the world and finally to glue that to the canvas. Within the 3D world, the camera is called a ViewPlatform, and the glue class is the View.

10.2.2 Building a Basic Universe

Now we come to building that universe in Java3D. A number of different approaches are available. The approach that we favour in this book is to extend the VirtualUniverse class itself. Then, within your own universe, you can create methods to add items exactly how you want to classify them.

Because our basic world is very small, there is no need for multiple locales. Creating the universe will create a single locale and place three BranchGroups underneath it for the following objects:

1. Cameras: Any camera that can be used to view the scene.

2. WorldSpace objects: Objects that should be rendered in the world space coordinate system.

3. DisplaySpace objects: Objects that should be rendered in the display space coordinate system.

Within these branch graphs, any object may be placed. The universe class can then provide convenience methods for adding and removing objects at the top level. We can add objects at any level within the scene graph, because that is the nature of the scene graph; these methods are only for adding top-level objects.

After creating a universe (remember that we're extending the VirtualUniverse class) we need to add a locale and BranchGroups to it. A default Locale is generally used in 3D applications where we don't have to worry about really large coordinate values. Looking at the Locale class, you will notice that the only 3D item we can add to it is a BranchGroup. A BranchGroup is used by Java3D to indicate a part of the scene graph - a branch graph - that may act independently. As you'll discover later, BranchGroups also give Java3D some optimisation opportunities.

Constructing a basic universe takes almost no code at all. For illustrative purposes, we'll create everything in code rather than extending various objects:

    VirtualUniverse universe = new VirtualUniverse();
    Locale locale = new Locale(universe);

    BranchGroup view_group = new BranchGroup();
    BranchGroup world_object_group = new BranchGroup();
    BranchGroup display_object_group = new BranchGroup();

    // now add them to the locale
    locale.addBranchGraph(view_group);
    locale.addBranchGraph(world_object_group);
    locale.addBranchGraph(display_object_group);

Once you've added a branch graph to the locale, that branch graph, and anything you add to it, is considered live (we'll discuss the difference between live and dead scene graphs shortly). A live scene graph places restrictions on what you can add, so it is always best to leave this step to the last line of code in your setup routines.

Locales have a lot more use than just presenting a bunch of geometry. As you will see in later chapters, the locale is used for many of the interactions with the scene graph by the display devices.

You may want to use more branch groups than this. It is common for top-level branch groups to be assigned to non-3D objects like lights and sound.
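To show what the extend-the-universe approach described above might look like in practice, here is a minimal sketch. The class name, the convenience methods and the capability settings are our own illustration of the idea, not a fixed recipe:

    import javax.media.j3d.*;

    // One Locale and three BranchGroups, with convenience methods for
    // adding top-level objects after the graph has gone live.
    public class BasicUniverse extends VirtualUniverse
    {
        private Locale locale;
        private BranchGroup view_group;
        private BranchGroup world_object_group;
        private BranchGroup display_object_group;

        public BasicUniverse()
        {
            locale = new Locale(this);

            view_group = new BranchGroup();
            world_object_group = new BranchGroup();
            display_object_group = new BranchGroup();

            // allow children to be added after the groups go live
            view_group.setCapability(Group.ALLOW_CHILDREN_EXTEND);
            world_object_group.setCapability(Group.ALLOW_CHILDREN_EXTEND);
            display_object_group.setCapability(Group.ALLOW_CHILDREN_EXTEND);

            // adding the branch graphs makes them live, so do it last
            locale.addBranchGraph(view_group);
            locale.addBranchGraph(world_object_group);
            locale.addBranchGraph(display_object_group);
        }

        // convenience methods for the three top-level groups
        public void addCamera(BranchGroup camera)
        {
            view_group.addChild(camera);
        }

        public void addWorldObject(BranchGroup obj)
        {
            world_object_group.addChild(obj);
        }

        public void addDisplayObject(BranchGroup obj)
        {
            display_object_group.addChild(obj);
        }
    }

The capability bits used here are explained in section 10.2.6; they simply allow children to be added to the groups after they have gone live. Exactly which top-level groups you create, and what for, is a design decision.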
How you divide up the top-level groups is really up to the application that you are writing, and also personal style.

10.2.3 Viewing the Scene

With the basic scene layout in place, we now need to concentrate on making sure that the user can see what you want to show them. To do this, we need two mechanisms: one to represent the view position in the world and another to map a particular view to the screen.

As explained earlier, a view exists as part of the standard scene graph using the ViewPlatform class. It represents the user in world space. A simple view object can be made by instantiating this class and placing it directly into the scene graph at the appropriate spot (placement and orientation are covered shortly).

    ViewPlatform camera1 = new ViewPlatform();

For a standard POCS application, there is nothing more to do. When dealing with more unusual displays, there is a range of options. These options are based on dealing with HMDs and stereo projections. Using the setViewAttachPolicy() method, you can place the virtual head of the user in different positions. The different positions are used to take into account rendering differences for stereo views.

Coming from the other side, a particular canvas needs to know what to render. On inspecting the Canvas3D and ViewPlatform classes, you will notice that neither of them has a reference to the other. There is not even a hint as to what class may be needed to form the glue between the two. What you need is the View class. View forms the connection between a given view platform and the canvas. Assuming that the view platform we created has been inserted into the scene graph, we can bring the two halves together with the following code:

    View view = new View();
    view.addCanvas3D(canvas);
    view.attachViewPlatform(camera1);

The relationship between views and canvases allows multiple renderings of the same view. In normal programming you probably won't need more than one canvas, because of the limitations of the flat screen. What you might use, though, is the collection of different policy options available as part of the View class. The options are based around the viewing parameters that need to be projected to the camera. The one you are most likely to use is setProjectionPolicy(), which controls whether rendering uses an orthographic or a standard perspective projection. You can set these two options using the following code:

    view.setProjectionPolicy(View.PARALLEL_PROJECTION);
    view.setProjectionPolicy(View.PERSPECTIVE_PROJECTION);

Dealing with view objects can be a trifle kludgy. Not only do they need to know about both the canvas and the view platform, but typically there is a lot of other associated baggage like camera-specific displays (HUD type objects or modal requirements). There are many different ways of organising your code:

* Everything in the universe: The universe is responsible for managing all of the views, which are active, and what geometry is currently displayed. The view object and canvas remain as simple Java3D classes.

* Everything in the canvas: The universe is minimal, containing the raw world space geometry, lighting and sound. The canvas is the manager of what it sees. It only ever contains one view, and that is moved around the universe as required. The canvas also contains the scene graph for any display space or view-specific geometry and handling code.

* Everything in the view: The universe is minimal, containing the raw world space geometry, lighting and sound.
A new container class contains the view object and all of the geometry associated with a particular display. When the display changes view platform, the container takes care of changing the geometry, mode handling and any other items.

There are many combinations and variations on these basic themes. Which you choose depends heavily on how much external interaction you have. For example, later on you will see that we do most of our mouse handling through the AWT interface, so it makes most sense to use the second arrangement. For general use, we think you'll find the last option is probably the most used and the most portable, because it closely matches the camera concept that we use in this book.

10.2.4 Placing Objects

Until this point, all you've probably seen is a blank black square when running any of the code. Now it is time to quickly look at different ways of describing and organising your objects.

Ed: What's the correct spelling here? Java defines it as a Leaf class. The standard plural is leaves, but I don't think that accurately reflects the Java thinking: hence the Leafs spelling.

Java3D organises all scene graph elements into one of two categories: Leafs and Groups. Objects derived from Leaf are the end of a line - they represent a real, renderable object (lights and sounds are considered to be renderable). Group objects are those that can represent a branching point or collection of further groups, Leafs, or both.

Leafs cover a very broad spectrum of objects, from primitives to sound, lights, behaviours and more. For the moment, we'll concentrate on one specific leaf - the Shape3D class. Shape3D is the basic form that all solid (and non-solid!) visible objects in the scene graph take. The shape consists of two properties:

1. Geometry: The raw collection of 3D points and their relationships, which form planes and lines in 3D space. There are no standard pre-made primitives like cubes, spheres or cylinders; these must be constructed from sets of points to represent the desired object.

2. Appearance: Attributes of the object that define what it looks like in terms of colour, reflection, transparency and texturing. An object with geometry but no appearance set looks like dull grey plastic.

These may be changed at will; changing one will not change the other.

On the organisational side of the house are the grouping nodes. The idea of the grouping node is to collect a bunch of the leaf nodes into common parts. With grouping nodes you can move objects about (TransformGroup), selectively show one of many objects (Switch) and use one object many times (SharedGroup); you've already seen the BranchGroup object, which acts as the root of a local scene graph and a convenient manipulation point. All groups may contain other groups, leaf nodes, or collections of both.

Because you can arrange these groups in this hierarchical order, it leads to the term scene graph. In a scene graph you have many objects arranged in a (typically) tree fashion, starting from some root group and branching out into sub-groups and objects until you reach the ends of each branch - the individual leaf nodes. We won't look much more at these topics, because they'll be covered in more depth shortly, but a small example of the structure is sketched below.
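The sketch shows a tiny branch of such a tree: a BranchGroup root, a TransformGroup that offsets its children, and a Shape3D leaf. It assumes a locale (as built in section 10.2.2) and a piece of geometry built as described in section 10.3; the variable names are our own illustration.

    BranchGroup branch = new BranchGroup();

    // the transform offsets everything below it by 1.5 units upwards
    Transform3D offset = new Transform3D();
    offset.setTranslation(new Vector3d(0, 1.5, 0));

    TransformGroup tx_group = new TransformGroup(offset);
    tx_group.addChild(new Shape3D(geometry));   // leaf node: the visible object

    branch.addChild(tx_group);
    locale.addBranchGraph(branch);              // the branch is now live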
10.2.5 Scene Graph Attributes

Objects in the scene graph have a number of attributes. The scene graph itself may also have a number of attributes that cover all or large parts of the objects. These attributes are used to control internal optimisation of the scene graph.

A lot of this is tied in with the existence of the BranchGroup class. When you first create a collection of group and leaf nodes, they exist exactly in the order that you create them. Typically, you don't construct a scene graph in a top-down manner - that is, start at the locale, add branch graphs, and then add children nodes from there. Instead, you'll probably have a collection of pre-built parts that describe, say, a box or a piece of terrain (an elevation grid). You then assemble these with the appropriate transforms to make a complete scene. Some of these objects may even come from files that you've loaded from disk, like a VRML world or a DXF mesh.

Because of these arrangements, it is possible that you have not created the most efficient structure when it comes to rendering at the hardware level. A complex scene may contain thousands of groups and primitive objects, so it is also quite possible that you won't be directly changing most of them. That leaves a lot of room for doing some optimisations under the hood. For example, in our virtual world we have a car. The car doesn't change colour or size, but it can change location. Being logical thinkers, we've placed a single transform that contains all of the car parts as the root of that piece of scene graph. All of the objects below it remain in fixed positions, which allows us to optimise away any lower-level transforms that might exist.

10.2.6 Capability Bits

If you've already gone for a wander through the javadoc for Java3D, you will have noted that the base class for both group and leaf classes is SceneGraphObject. In it, you will find the setCapability method. Capability bits are the Java3D way of defining exactly what can and cannot be optimised away by the library. Although the base class does not define any itself, you will note that every derived class does. By default, all capabilities are turned off. If you want to do something to an object in the scene graph, you need to set the capability bit to allow it.

Capabilities are also very fine grained. For example, in the TransformGroup class, you have separate capabilities to read and write the transform object used to actually move the object. If you set the read bit, it does not automatically allow you to write to it, or vice versa.

Capability bits do not transfer to the children of groups. If you have a scene graph that represents an arm with two transform groups, one each for the elbow and wrist, setting the capability to write to the shoulder transform does not automatically allow you to write to the wrist or elbow transforms. These must be set individually.

One potential area of confusion is the difference between object attributes and capabilities. A capability can only ever be set through the setCapability method. If the object has any other set or get methods, then these define attributes. The capability bits determine what happens when you call one of these get/set methods - will the request be accepted or will it bounce?
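To make this concrete, here is a short sketch of asking for transform access in the arm and wrist example above. The variable names are illustrative; the point is that each bit must be set separately and that children do not inherit capabilities from their parents.

    // ask for permission to both read and write the transform at run time
    TransformGroup arm_tx = new TransformGroup();
    arm_tx.setCapability(TransformGroup.ALLOW_TRANSFORM_READ);
    arm_tx.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);

    // the wrist needs its own bits - they are not inherited from the arm
    TransformGroup wrist_tx = new TransformGroup();
    wrist_tx.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);

    arm_tx.addChild(wrist_tx);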
10.2.7 Live and Dead Scene Graphs

At the point where you add your objects to the current locale, they become eligible for rendering. Any renderable part of a scene graph is termed live: it is now possible that the low-level rendering system will pass through the contents of that object and turn them into something visible on screen. The corollary is that any part of a scene graph that is not currently added to a locale can be classified as dead.

In the previous section, we quickly looked at capability bits. These bits define what can and cannot be done. When an object becomes live, those capabilities are used to determine what a user may do to the active scene. The specification is very loose about what this means when it comes to dead scene graphs, though. Under some implementations the capability bits may be ignored when dealing with a dead scene graph, while for others they will have full effect.

For any node in the scene graph, you can determine its status by calling the isLive method of SceneGraphObject. A true result means that it is currently being rendered. If the capability is not set to allow you to make a modification to an object, the attempted modification will generate a CapabilityNotSetException. Get used to seeing this, because it comes up very often! When you see it, it means that you currently have a live scene graph and you are trying to make modifications to it.

Having capabilities is only part of the optimisation story. You actually need to make use of them as well, by telling Java3D that it is OK to start the optimisation process. To do this, you compile the scene graph. The compile method is found in the BranchGroup class which, if you remember, is always treated as the root object of a subgraph. When you call it, Java3D is allowed to take that whole scene graph and apply whatever tricks it can to make it run as fast as possible. Obviously, the more capabilities you request, the smaller the amount of potential optimisation that may take place. It pays to be as miserly with capabilities as possible.

You can check whether a node in the scene graph has been compiled by calling the isCompiled method of SceneGraphObject. A true result means that it is compiled. As with a live scene graph, if you attempt to modify something that you have not set the capability for, it will result in an exception. A compiled scene graph is not necessarily the same as a live one. They are independent states, and either could be the cause of your CapabilityNotSetException.

10.3 Geometry

Now that you have a basic handle on what comprises the major portions of a scene graph, it is time to move on to the useful bits - creating objects that you can see on screen. In a word: geometry.

Geometry consists of a collection of points in 3D space. As seen in earlier parts of the book, this 3D space may occupy one of several different roles, such as world space or display space. While a collection of 3D dots in space may be a useful visualisation technique for your data, it is not the most common mechanism. To make the objects look solid, we form a relationship between these points to produce a surface. Combine a number of these surfaces together, throw a splash of colour about, and you end up with a solid-looking 3D object on the screen.

10.3.1 BranchGroups

One offshoot of capabilities is the effect they have on grouping nodes in particular. The idea of a group is to collect together related sub-graphs. In a lot of visualisation-type applications, you want to be constantly adding and removing items from these groups.

It has been mentioned several times already that BranchGroups form the root of a scene graph. There is more to this statement than meets the eye, and you may be wondering why, because you can create objects from any grouping node, add or remove children from them, and everything appears fine. Once you are dealing with a live scene graph, this can be a completely different story.
Apart from the obvious ability to compile a scene graph, the branch group also performs a number of other optimisation and handling functions for user input. The various pickX methods allow you to deal with pointer input and the selection of objects. These will be covered in the later chapters on navigation (Chapter 12) and user feedback (Chapter 13). Picking may also be used for much more than just user input.

Jon: I've swapped chapters 13 and 14 because I think it makes more sense this way.

When you compile a scene graph, the only point that you can guarantee will always remain the same is the branch group. It does not contain any information itself, but holds all of the information about its children. Thus, when you want to remove a part of the scene graph, you can take out a single compiled and optimised chunk in a single hit. There are no problems with having to decompile the scene graph just to remove a portion of it and then recompile it again afterwards. Removing a branch group causes minimal interruption to the rendering process.

One interesting part of this is the observation that a branch group may contain other branch groups as children. Thus, all through the scene graph there will be little collection points of optimisation.

Because you've probably lost the parent node of the branch group during the compilation, you may not be able to directly remove the branch group at a later date. Since you will probably be keeping references to branch groups around (you might want to remove one or more of them), you would also need to keep references to their parent objects. These parent objects may or may not exist under the hood once compilation has taken place. To alleviate this problem, a detach method is provided. Calling it automatically removes the branch group from its parent object.
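Putting the last two sections together, the typical life cycle of a branch group looks something like the following sketch. The buildCarSubgraph helper is hypothetical; the point is the ordering of setting the capability, compiling, going live and detaching.

    // build the subgraph, reserve the right to detach it later,
    // then let Java3D optimise everything else away
    BranchGroup car_branch = buildCarSubgraph();   // hypothetical helper
    car_branch.setCapability(BranchGroup.ALLOW_DETACH);
    car_branch.compile();

    locale.addBranchGraph(car_branch);             // now live (and compiled)

    // ...later, remove the whole chunk in one hit without needing
    // a reference to its parent
    car_branch.detach();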
10.3.2 Geometry Arrays

Before moving on to building your own pieces of geometry, you first need to understand how their most basic component, the GeometryArray, works. The GeometryArray class is the base for all of the data that needs 2D or 3D coordinates to represent it - that is, for everything in Java3D geometry. A geometry array stores the following information:

* 3D coordinates representing the vertices of an object.

* Normals of a face, line or vertex. Normals are used to calculate whether to draw a polygon (backface culling) and how to shade it with respect to any influencing lights.

* Colour values, with or without alpha (transparency) settings. Colour in this form is used to represent colours on either a per-vertex or per-face basis. Per-vertex colouring can be seen in the ColorCube demo that comes with the standard Java3D install.

* Texture coordinates. These may represent both 2D and 3D texture coordinates. Texture coordinates are used to modify an image that is laid over polygons. For example, they are used to turn a square image into a spherical map that turns a sphere into a soccer ball.

A single geometry array instance represents a single piece of geometry and contains at least one of the attributes mentioned above. At all times it must include the coordinates; other flags in the constructor then allow you to add other values for each coordinate. GeometryArray is an abstract class, so you can't create one directly. Instead, you will need to use one of the derived classes, like IndexedQuadArray or IndexedTriangleFanArray.

You will need to become familiar with these depending on the type of geometry that you are modelling, and probably even more on how your raw data is presented. Sometimes one form of geometry array is easier to build than another, based on the raw data.

To create an array representing a flat square with four-component colour and normals, you would use an IndexedQuadArray. The following code would be used:

    int flags = GeometryArray.COORDINATES |
                GeometryArray.COLOR_4 |
                GeometryArray.NORMALS;

    IndexedQuadArray geom = new IndexedQuadArray(4, flags, 4);

You will note that the first argument is the number four. This is because we know that we need 4 vertices to describe a flat square. You always need to provide the number of vertices in the object when you create a geometry array.

After creating the basic array, we need to fill it with some data. First, the coordinates. Since we are dealing with a simple object, we'll declare all the points in a single hit as an array of doubles.

    double[] coordinates = {  0.5,  0.5, 0,
                              0.5, -0.5, 0,
                             -0.5, -0.5, 0,
                             -0.5,  0.5, 0 };

    int[] indices = { 0, 1, 2, 3 };

    geom.setCoordinates(0, coordinates);
    geom.setCoordinateIndices(0, indices);

Note that we've declared the coordinates in a clockwise fashion, in order. Like most of the geometry arrays, the final derived classes make assumptions about the ordering of the vertices that you've given the base class, so we need to set the indices explicitly using the last line. It is much easier to make the match between the two arrays if you keep the coordinate declarations in a nice, logical order.

The next step in building our flat square is to give the object some colour at each corner. Colours follow the same pattern as the coordinates: first declare an array of the values and the indices, and then use the appropriate set method. We can reuse the indices array from the coordinates to make life easy for ourselves. The only gotcha to remember here is that we have specified four components for each colour.

    float[] colors = { 1, 1, 0, 0,
                       1, 0, 1, 0,
                       1, 0, 0, 1,
                       1, 1, 1, 0 };

    geom.setColors(0, colors);
    geom.setColorIndices(0, indices);

Note that the last colour line has an alpha of zero - a fully transparent corner.

Finally, we add the finishing touches by declaring normals for each vertex. For this simple example, we declare them all to be pointing directly up (along the Z axis). Normals should always be normalised, too. This time, an alternative approach is taken:

    float[] normal = { 0, 0, 1 };

    geom.setNormal(0, normal);
    geom.setNormal(1, normal);
    geom.setNormal(2, normal);
    geom.setNormal(3, normal);

which results in all four vertices having the same normal direction.

That is your basic GeometryArray. From this point, we need to turn it into a particular shape that can be rendered. If we leave the appearance unset for the moment, we can create a complete shape and add it to the world space branch group of the universe that we created earlier.

    Shape3D shape = new Shape3D(geom);
    world_object_group.addChild(shape);

10.3.3 Customised Geometry

Building a basic square is a relatively trivial exercise. When it comes to more complex items like a car engine block, you need to start thinking much more carefully about how you are going to create it in terms of Java3D's node structure. The major decision to make is how far down the chain of derived classes you go. For a complex object like the engine block, a standard QuadArray probably wouldn't be sufficient.
Instead, you might want an object that is represented by a parametric surface, like NURBS or quadrics, custom built from parameters rather than pure coordinates.

When building primitive objects like cubes, spheres and similar fixed shapes, it is usually preferable to extend, or just use, one of the indexed base classes. Typically these sorts of objects directly extend one of the grouping classes and then build the geometry internally in one or more shape nodes.

When you have an object whose shape is not directly determinable at creation time, it is usually easier to extend the basic GeometryArray class and provide all your own handling. This is certainly true of parameterised objects, or those where the user may interact with the object to drag out the shape required.

10.3.4 Appearance

If you supplied just the basic geometry array, with no vertex colours specified, to the Shape3D class, you would end up with the default appearance: a mid-greyish, plastic-looking object. The idea of the Appearance class is to provide global information about a particular shape. When you set an appearance, it affects the whole shape.

Before we even dig into the Appearance class, you might be wondering how this corresponds with the per-vertex colouring of the geometry array. The simple answer is that if colours are specified in the geometry, then these are used in preference to the global settings.

Colour settings are only part of the function performed by the appearance. The Appearance class controls all aspects of how the geometry is rendered: everything from transparency to whether to draw the geometry as points, lines or polygons. There are a number of attribute collection classes that assist in this process.

Ed: There are a number of different formatting styles here: each level 3 head as a bullet point with a paragraph following; headers as is, with attribute capabilities as bullet points and a short summary; or as is. Feel free to format whatever way you think fits best. Each header here is the name of the actual class.

RenderingAttributes

Defines how to deal with the alpha values specified in the colour. It is used to control whether to use alpha, completely ignore it, or only use values above a certain minimum. It is also used to control how the depth buffer is set up for dealing with transparencies, and whether one should be used at all.

PolygonAttributes

Describes how polygons should be rendered. For example, you might decide that you should always render the polygon regardless of which direction it is viewed from. If you decide to view it only from one side, this is termed backface culling. The back face is the face of the polygon that is pointing away from the set normals (hence if you see missing faces on a geometry array, this may be the cause). The attributes may also be used to determine whether you should draw polygons as lines, points or a filled area (the drawing mode). The final interesting option is the ability to offset the polygons by a number of pixels in the display space arena.

PointAttributes

If you've selected point drawing mode in the PolygonAttributes, this class is used to further refine whether the points should be anti-aliased or not. It also allows the point size, in pixels, to be set.

LineAttributes

If you've selected line drawing mode in the PolygonAttributes, this class is used to further refine whether the lines should be anti-aliased or not. Like point attributes, you can control the thickness of the lines and also the line style (dashed, dotted or a combination).
ColoringAttributes

Describes how to deal with the colour values when you've got to interpolate between them. The shading model used to represent the effects of lighting can also be controlled. This is particularly important in non-trivial worlds, where different shading effects can make very large differences in rendering speed - even with hardware-accelerated graphics. Of course, you may also use this to set the intrinsic colour of the object before any shading is applied.

Material

Where the colouring attributes control how the object responds to lighting effects, the material is used to control the intrinsic colour properties of the object itself. For example, you may use this to set what colour the object glows (emissive colour), how it reflects light (specular colour) and the basic shininess of the object (anywhere from matte to mirror finish). All of these colours may be set to completely different values. The subject and the effects of the different types of colour could fill a book by themselves, so we won't discuss them here.

TransparencyAttributes

Rendering attributes allow us to control how to deal with the alpha values, which is another name for transparency. This class allows you to control how transparency is rendered. Various hints are available to set it to the nicest-looking or fastest mode. Like colouring, the type of transparency rendering used can affect the performance of your world tremendously, even with hardware assistance.

TextureAttributes

Textures are images that are laid over the geometry, typically to make it more realistic. These attributes allow us to determine how to render a texture. Should the texture blend with the underlying colour (particularly if it itself contains transparent sections) or completely replace it (transparent parts of the texture make that part of the object see-through)? We may also control how much processing should be performed to make the texture realistically correct for the underlying object. Because textures are images, we may also want to place only a certain part of the texture on the object, not the whole image. We use the texture transform in this class to manipulate the texture.

TexCoordGeneration

There are various models of how to apply a texture to a surface that is not parallel to the camera. This final attribute is used to control how we calculate the initial texture coordinates relative to the screen and the viewer's eye. Also, sometimes the texture may need to be wrapped around a spherical object rather than a flat face, so we use this class to tell the renderer whether this needs to be done.
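As a rough illustration of how a few of these classes come together, here is a hedged sketch of building an Appearance. The particular values are arbitrary, and the shape variable is assumed to be the Shape3D built earlier in this chapter:

    Appearance app = new Appearance();

    // draw both sides of each polygon and render them as filled areas
    PolygonAttributes poly_attr = new PolygonAttributes();
    poly_attr.setCullFace(PolygonAttributes.CULL_NONE);
    poly_attr.setPolygonMode(PolygonAttributes.POLYGON_FILL);
    app.setPolygonAttributes(poly_attr);

    // intrinsic colour properties: a shiny red object with no glow
    Material material = new Material();
    material.setDiffuseColor(new Color3f(0.8f, 0, 0));
    material.setSpecularColor(new Color3f(1, 1, 1));
    material.setShininess(75);
    app.setMaterial(material);

    // make the whole object half transparent, using the fastest mode
    TransparencyAttributes trans = new TransparencyAttributes();
    trans.setTransparencyMode(TransparencyAttributes.FASTEST);
    trans.setTransparency(0.5f);
    app.setTransparencyAttributes(trans);

    // attach the finished appearance to the shape
    shape.setAppearance(app);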
10.4 Textures

As we've just mentioned, textures are applied to objects to make them more realistic. These days it is almost impossible to find a game that uses flat-coloured objects. They almost always have textures of some kind, even if only to make the walls look like old stone (a la Quake). For most terrestrial-based applications, the use of textures is essential at least somewhere. Even for abstract data spaces, textures may be used to present text in 3D space. It is therefore important to have a good understanding of how they work.

The texture capabilities under Java3D cover three areas of management:

1. Image management: Specifies how to place an image on an object, such as how to blend edges together and how to relate a point in the image to a point on the object.

2. MipMapping: Keeping the resolution of the texture to an acceptable level, for both better performance and better appearance.

3. XLinear filtering: Keeping track of the warped image when it has been applied to an object and making it maintain a sense of realism.

We'll now expand on each of these areas.

10.4.1 Texture Mapping Basics

Ed: Would you like some pictures in here too? It takes very little to whip some up if you want them.

An image is represented as a bunch of 2D pixels: X by Y pixels in width and height. Your objects in the scene are represented in world space 3D coordinates that have no relationship at all to pixels. When we come to mapping pixels to world space coordinates in order to put an image on an object, we have to make some sort of calculation about how to do that mapping. We could make an arbitrary guess, but obviously this is not acceptable because we, the programmer, want control over how the image is put into the world. Instead, we map the image to the object using texture coordinates.

A very simplistic description of texture coordinates is that the bottom left of the image is considered to be (0, 0) in the S (width) and T (height) coordinate axes respectively. The top right of the image is considered to be (1, 1). A coordinate of (0.5, 0.5) would place you in the centre of the image, and (2, 0) would place you two widths of the image to the right, on the baseline.

To relate a texture to a piece of geometry, you use texture coordinates. You can't just place arbitrary texture coordinates into a collection of geometry and hope things turn out right. The only place that you can put a texture coordinate is on one of the geometry's vertices. If you need more control over the image mapping, you can simply add extra vertices at the appropriate places in the underlying geometry. Using this mapping, the texture always stays relative to the object. Thus, when you stretch a polygon, the texture will increase in size so that the points of the texture stay the same relative to the vertices of the object.

Java3D allows 3D textures too. That is, an extra coordinate R is added to S and T to allow warps in the depth direction (relative to the texture image) as well. In this case, the texture is treated as a 3D block of image. If you consider that in a 2D image there are square areas defined by a pixel, you can now make this a volume that has depth as well. Typically these are procedurally generated, but you can also do it by laying multiple images over the top of each other. One image is considered to be one unit of depth, in the same way that a pixel is one unit of width and height. A "unit volume" of 3D texture space would be defined as one pixel by one pixel by one image.

10.4.2 Creating a Texture

Java3D defines two types of textures - 2D and 3D. A 2D texture is your standard image, like a JPEG, that you want to put over the object. There are two separate classes for the two types: Texture2D and Texture3D. These are used internally to represent the images and extract the appropriate information from them. To texture an object, you will first need to create a texture and a set of texture coordinates.

Typically, 2D textures come from an external file. What we need to do is load that image and turn it into the form that Java3D wants (the Texture2D object). Unfortunately, the process is a bit convoluted with Java3D. You cannot just provide the texture with a URL; you need to do all of the image loading and processing yourself. Here's one method of approaching it.

1. Use the AWT toolkit to get the default toolkit and use its createImage method to load the basic image for you.
    Toolkit tk = Toolkit.getDefaultToolkit();
    Image src_img = tk.createImage(url);
    BufferedImage buf_img = null;

2. Notice that the object returned is a java.awt.Image. Texture2D uses an ImageComponent2D to represent an image, and that requires a java.awt.image.BufferedImage as the source. The tricky bit in all of this work is doing the conversion. There are many approaches; the simplest solution is to create a new BufferedImage and then just draw the old image straight over the top of it.

    if(!(src_img instanceof BufferedImage))
    {
        // create a component anonymous inner class to give us the
        // image observer we need to get the width and height of
        // the source image.
        Component obs = new Component() { };

        // createImage loads asynchronously, so wait for the image to
        // arrive before asking for its size
        MediaTracker tracker = new MediaTracker(obs);
        tracker.addImage(src_img, 0);
        try
        {
            tracker.waitForAll();
        }
        catch(InterruptedException ie)
        {
        }

        int width = src_img.getWidth(obs);
        int height = src_img.getHeight(obs);

        // construct the buffered image from the source data.
        buf_img = new BufferedImage(width,
                                    height,
                                    BufferedImage.TYPE_INT_ARGB);

        Graphics g = buf_img.getGraphics();
        g.drawImage(src_img, 0, 0, null);
        g.dispose();
    }
    else
        buf_img = (BufferedImage)src_img;

3. Create the new ImageComponent2D for use by the texture, using our newly created BufferedImage.

    ImageComponent img_comp =
        new ImageComponent2D(ImageComponent.FORMAT_ARGB, buf_img);

4. Create the Texture2D object. Notice that we've set the image component format to ARGB because we have no idea what the source image really is. For all we know, at the code level, it could be a GIF image with transparency.

    Texture2D texture = new Texture2D(Texture.BASE_LEVEL,
                                      Texture.RGB,
                                      img_comp.getWidth(),
                                      img_comp.getHeight());

5. Hand the image to the texture, add the texture to the appearance used for the object, and clean up.

    texture.setImage(0, img_comp);
    appearance.setTexture(texture);

    buf_img.flush();

This is enough to get you started with a loaded texture. The next step is to apply it to your object. Since we already have an appearance node set, that should automatically apply the texture. If you want more control, you will need to set the appropriate texture coordinates in the IndexedQuadArray that we created previously and match the indices with the vertices of the geometry, as shown in the sketch below. More complex management issues are handled in the next section.
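Here is a minimal sketch of what that might look like. Note the assumption that the quad array is created with the extra TEXTURE_COORDINATE_2 flag, which was not part of the earlier example:

    int flags = GeometryArray.COORDINATES |
                GeometryArray.COLOR_4 |
                GeometryArray.NORMALS |
                GeometryArray.TEXTURE_COORDINATE_2;

    IndexedQuadArray geom = new IndexedQuadArray(4, flags, 4);

    // map the corners of the image to the corners of the square,
    // reusing the same index ordering as the coordinates
    float[] tex_coords = { 1, 1,   1, 0,   0, 0,   0, 1 };
    int[] indices = { 0, 1, 2, 3 };

    geom.setTextureCoordinates(0, tex_coords);
    geom.setTextureCoordinateIndices(0, indices);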
10.4.3 Image Management

One objective of texture mapping is to provide realistic-looking surfaces, like a tiled floor. In your house there may be a number of different-sized rooms with that tiled floor, so you'll want to reuse the image as much as possible. If you just made the texture sized proportionally to fit one room, all the other rooms would end up looking wrong. One way to overcome this is to play with the texture coordinates to make sure each room scales the texture properly. For example, say the second room is twice as long as the first but the same width: you could set the texture coordinates for the "far" corners to (0, 2) and (1, 2) so that we get the same size tiles across the room.

What these greater-than-one coordinates mean is that the texture must be tiled across the room. This might not be your desired result. Instead, what you might want is a picture-frame type effect, where the entire texture sits inside the bounds of the object but does not tile, leaving the raw object colour around the borders (between the image and the boundaries of the object/polygon).

Boundary Mode

In texture mapping parlance, this is called the boundary mode. The boundary of the texture has one of two modes - clamped or tiled. Clamped mode means that the texture coordinates always move with the object. No tiling is involved, and the image takes on the picture-frame effect described in the previous paragraph. In tiled mode, the image is automatically tiled the appropriate number of times, as defined by the texture coordinates, to make up the completed texture on the object. You can set these modes independently for each axis.

Boundary Color

If you've decided to use a clamped boundary mode, you can also set the texture boundary colour, which is used as the fill colour between the edges of the texture and the polygon(s).

10.4.4 MipMapping

MipMaps are about controlling both the complexity and the size of your textures. When an object is far off, you can't see much of the detail due to the resolution of the screen. There is no real point in having a highly detailed image as the texture, because the effects are lost. On the other hand, when you are really close to an object, the texture can look really pixellated. One solution to these problems is to create a number of images of different resolutions and then apply them to the object depending on the level of detail needed. When an object is really close, load the high resolution images; if the object is far away or moving quickly, use a lower resolution.

In the mipmapping process you may supply any number of images. The only requirement is that for each level of greater resolution, you need to provide an image of double the resolution. That is, each level is a power of 2 greater in detail in each dimension. The setMipMapMode method is used to control how mipmapping is set up for the object. Basically, the option is to have either one image (no mipmapping) or many images (mipmapping on). If you are using mipmapping, then you need to set all the images using the setImage method.

10.4.5 Filtering

The final option for controlling your textures is to use one of the various filtering modes. These filtering modes control how your images are mapped to objects. The effects are most apparent when you are looking at a non-linear scale of a texture across a polygon. For example, a road disappearing off into the distance, with the perspective making the road taper, is often used as a test of the accuracy of hardware filtering implementations.

There are a number of different types of filtering available. You're probably quite familiar with the terms bi-linear and tri-linear filtering, as used in all the marketing brochures for 3D accelerator video cards. These are two approaches to solving the problem, and Java3D lets you choose the mode to use with the setMinFilter and setMagFilter methods. These control filtering when one texel (a pixel after it has been massaged by the texture mapping process) is smaller than the original pixel and when one texel is larger than one pixel, respectively.
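Pulling the last three sections together, configuring the texture created earlier might look something like the following sketch. The choices shown (tile horizontally, clamp vertically with a black border, single image, nicest available filtering) are purely illustrative:

    texture.setBoundaryModeS(Texture.WRAP);
    texture.setBoundaryModeT(Texture.CLAMP);
    texture.setBoundaryColor(new Color4f(0, 0, 0, 1));

    // only the base image has been supplied, so no mipmapping
    texture.setMipMapMode(Texture.BASE_LEVEL);

    // minification and magnification filters
    texture.setMinFilter(Texture.NICEST);
    texture.setMagFilter(Texture.BASE_LEVEL_LINEAR);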
10.5 Event Model

So far, all you've been introduced to is static geometry. Once you place an object into the scene graph, it stays there, fixed in space. For all but the most trivial applications, this is insufficient. We want animation, and to be able to move around the world viewing it from different perspectives. Somehow we need to generate information that causes the scene graph to change dynamically. This is handled by the event model.

An event model describes how information flows around the scene graph from either external sources, like your mouse, or internal sources, like a timer or proximity sensor. It involves the propagation of data through a series of nodes until it reaches a sink that consumes it and makes the final change to your scene graph.

Events apply to many different things: it may be you walking into an object, the audio fading as you walk away from an object, or clicking on an object. In Java3D, events can be classified into one of three basic areas:

1. Behaviours: An encapsulated piece of code that is triggered by the movement of the user or by the internal rendering system. The source is always the Java3D engine.

2. Picking: The result of the user trying to find an object. Picking casts a line into the scene from a given point in a given direction and asks for intersecting objects. It is principally used to deal with mouse input, but has other uses.

3. Sensors: Used to harness external input devices and bring their information into the Java3D model in a device-independent way. A sensor may be your mouse, a glove, an HMD tracking unit or a motion capture suit. To Java3D, they all appear the same.

Along with these basic devices, there are other parts of the environment that may be modelled too. For example, you may wish to include audio in your scene. There is a whole model used to determine how you hear the sound, depending on your position and other sound sources. You may also want to model the effects of your body in the scene - avatar characteristics.

Each of these event models has different characteristics. They don't all follow the event listener pattern that you are familiar with from AWT or JavaBeans. For the moment, we won't go into the description of the different event models, as they will be covered extensively in the next chapter.

10.6 Viewing and Cameras

Earlier in the chapter, we took a quick look at Views and ViewPlatforms so that you could see the scene graph. If you have tried to run the code as is, you will have found that it generated null pointer exceptions. We left a few pieces out.

The combination of View and ViewPlatform forms the camera object that we discussed back in Chapter 7 on navigation techniques. To match the ideas expressed in that chapter, it makes sense to build a class that represents a camera and the ways that a user may play with it. Some of the ideas expressed in that chapter included different types of control (first, second and third person), constraints on what the camera can do, and making geometry move with the camera.

Part of modelling a camera in a virtual world is modelling the user of the camera. That is, you as a real human occupy a finite volume of space, you can bump into things, you can jump over something if it is small enough, and you are either left or right handed. To make a complete view and camera representation, we need to model many of the same characteristics in the 3D world.

10.6.1 PhysicalBody

The first part of modelling your virtual body is representing how you are looking at the world. The PhysicalBody class represents how your head is located in the world. In the real world, you rarely walk around with your head at ground level, and most people have two eyes set some distance apart. The following physical, head-related attributes are modelled by the PhysicalBody class:

* Eye position relative to the centre of the head. This is set using the setLeftEyePosition/setRightEyePosition methods. It is mainly used when making stereo projections for HMDs and similar devices.

* Ear position relative to the centre of the head. This is set using the setLeftEarPosition/setRightEarPosition methods. It can be used to model the appropriate sound levels to send to each ear based on the environment.
If you have 3D audio rendering capabilities on your sound card (starting to become very common in PC devices), then this can also be used to control how 3D sound is projected to you.

* Eye height from the ground. This is the most commonly used part of the body representation, because it can be used for automatic terrain following. It is particularly important when used in high-end devices such as CAVEs, where the user is physically inside the projected 3D environment and the rendering should match reality to prevent motion sickness and similar physiological problems. The value can be set with the setNominalEyeHeightFromGround method.

* Eye position relative to the screen. Fairly uncommon in usage, but it can be used if and when you need more control over the stereo projection. If you consider the eyepoint as being the vertex of a pyramid defined by the perspective projection, and then place the screen between you and the objects, this is the distance between your eye and the screen. The further it is from your eye, the smaller the effective field of view (for a constant-sized screen).

* Head tracking transform. If you do happen to have a head-tracking device, the setHeadToHeadTrackerTransform method can be used to control any scaling or offset calculations that need to be done.

The attributes in this class can be used in a two-way configuration. If you are using stereo goggles like an HMD, you can use the eye positions to make sure that the goggles are not doing something unnatural to your eyes. At the same time, the height above the ground may be used to create automatic terrain following for the avatar.

10.6.2 PhysicalEnvironment

While the previous class models the basic characteristics of your body needed for rendering, the PhysicalEnvironment class models the computer environment that your body sits in. We've mentioned input devices a few times before; the physical environment is the class that is used to manage and install the various devices available on your computer. Typically these features won't be used in a PC-based application, as every machine is different. However, if you are building a one-off system such as a large virtual environment, or have other exotic hardware, then these capabilities are very useful.

Audio Devices

Of all the device types available, the audio device is the one that will be installed most of the time. If you wish to have sound in your world, then an audio device must be installed. Another reason for installing an audio device is to choose between a number that may be available on your system: for example, a standard FM OPL3 card and a 3D spatialisation card. You cannot write an audio device directly. Instead, the drivers either come from a manufacturer or are provided as part of the JavaSound Java Media API set. There are two separate classifications of audio devices: standard stereo devices are represented by AudioDevice, while 3D spatialisation devices, such as those capable of Dolby THX output, would implement AudioDevice3D.

Input Device

If you have a specialised device, like the classic Mattel PowerGlove, you might want to create a new input device. The InputDevice interface is used to represent any sort of external input device, in combination with the Sensor class. Between these two, it is possible to represent any arbitrary input device - including multiple-button systems like the current crop of high-range joysticks and throttle controls.
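Before moving on, here is a small sketch of how these two classes typically get wired into the view. The particular eye height and eye separation values are arbitrary illustrations, and the view variable is assumed to be the View created earlier:

    PhysicalBody body = new PhysicalBody();

    // average standing eye height, used for terrain following
    body.setNominalEyeHeightFromGround(1.68);

    // eyes roughly 6.4cm apart, measured from the centre of the head
    body.setLeftEyePosition(new Point3d(-0.032, 0, 0));
    body.setRightEyePosition(new Point3d(0.032, 0, 0));

    PhysicalEnvironment env = new PhysicalEnvironment();

    // both are handed to the View that drives the canvas
    view.setPhysicalBody(body);
    view.setPhysicalEnvironment(env);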
10.6.3 Moving Geometry with the View

Probably the most important aspect of setting up a camera object is being able to associate some geometry with the camera and have it move around with the camera. This head-up display (HUD) forms part of the scene graph and can contain normal 3D objects. The result is like a 3D dashboard built from geometry.

A ViewPlatform is just a leaf node as far as the scene graph is concerned. It has no special properties that make it different from other objects, so it may be placed anywhere in the scene graph. Any grouping node that you place it under will affect the position and orientation of the camera that it represents. If the camera is located under a BranchGroup that is removed, then that camera will no longer render the scene. If the camera is located under a TransformGroup, then changing the transform in the group results in the camera moving according to the new transform too (although only standard rotation and translation transforms make sense; shears don't).

Say you want to build the standard amusement park camera model, where the camera rides a virtual rollercoaster. The rollercoaster has defined geometry that always exists regardless of whether the view is on the ride or not. To make the camera move with the rollercoaster, you simply make sure that the ViewPlatform is placed inside the geometry that represents the rollercoaster car. You apply a transform just above the ViewPlatform to make sure that it is sitting in the correct spot relative to the car, and when the car moves, so does your camera. The HUD principle employs the opposite approach - instead of moving your view wherever the geometry goes, you move the geometry wherever your view goes. In practice though, there is very little difference in the way that the scene graphs are structured. Figure 10.1 illustrates the differences between the two approaches. As you can see, the only difference is the position of the view platform relative to the root transform that is used to drive everything about. Where the difference comes about is in how the transform is driven. In the first case, the camera is moved by the geometry that it is associated with, while in the second the camera is moved directly.

Figure 10.1 The scene graph used for a rollercoaster ride (left) and for a Head-Up Display (right).

To create such a structure, we need to build almost exactly what you see in the figure. Inside our camera model we need a group node that acts as the container for all the HUD objects:

private static final double BACK_CLIP_DISTANCE = 100.0;
private static final Color3f White = new Color3f(1, 1, 1);

private Group hud_group;
private TransformGroup root_tx_grp;
private Transform3D location;
private ViewPlatform platform;
private View view;
private DirectionalLight headlight;
private PhysicalBody body;
private PhysicalEnvironment env;

public Camera()
{
  hud_group = new Group();
  hud_group.setCapability(Group.ALLOW_CHILDREN_EXTEND);

  platform = new ViewPlatform();

With the basic top level structure complete and a default view platform created, we may want some other options. We also want to create a root node to hold everything. In this case, being a simple camera, a TransformGroup is used as the root.
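  // The root TransformGroup is the single handle used to move the entire
  // camera - view platform, HUD geometry and headlight together. The
  // ALLOW_TRANSFORM_WRITE capability must be set before the graph is made
  // live, otherwise the transform cannot be changed at runtime.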
  location = new Transform3D();

  root_tx_grp = new TransformGroup();
  root_tx_grp.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
  root_tx_grp.setTransform(location);
  root_tx_grp.addChild(platform);
  root_tx_grp.addChild(hud_group);

A typical option for a camera is to include a headlight. This is a directional light that points exactly where the camera is pointing. Lights, like behaviours, always need a bounding area of influence. For cameras, we like to create a fixed-length headlight so that it acts like a miner's lamp as you move about the world.

private static final BoundingSphere LIGHT_BOUNDS;

static
{
  Point3d origin = new Point3d(0, 0, 0);
  LIGHT_BOUNDS = new BoundingSphere(origin, BACK_CLIP_DISTANCE);
}

  // create the basic headlight
  headlight = new DirectionalLight();
  headlight.setCapability(Light.ALLOW_STATE_WRITE);
  headlight.setColor(White);
  headlight.setInfluencingBounds(LIGHT_BOUNDS);
  root_tx_grp.addChild(headlight);

To finish off the construction of the camera we need to create the View object. The view needs both a PhysicalBody and a PhysicalEnvironment in order to run. A few other bits and pieces are set on the View just to make sure that we can see everything, and then it is attached to the ViewPlatform that already sits in the camera's mini scene graph.

  body = new PhysicalBody();
  env = new PhysicalEnvironment();

  view = new View();
  view.setBackClipDistance(BACK_CLIP_DISTANCE);
  view.setPhysicalBody(body);
  view.setPhysicalEnvironment(env);
  view.attachViewPlatform(platform);
}

That completes a basic camera object. To this, you will need to add methods that allow you to add and remove pieces of the HUD, and methods that change the location and orientation of the camera.

10.7 Summary

That concludes our look at the basics of Java3D. This has been a broad sweep across the core specification, giving you some idea of its capabilities and of the elementary setup of a Java3D environment. In the next few chapters we'll cover several of these topics in much greater detail, concentrating on the areas most needed to implement the building blocks that we provide in Part IV.