Dev 143 – JSR184 – Using 3D in J2ME – part 2 of 3

The main aspects of 3D graphics programming are the management of vertices and polygons and their rendering on screen. We will develop a small practical example to understand the mechanisms used by M3G.

From this article onward we will study practical examples of 3D programming in immediate mode. We will see how to create, visualize and move three-dimensional objects; we will apply lighting effects and textures to add a realistic appearance. Because the purpose of this series of articles is not to cover the basic concepts of 3D math, such as matrix operations, I advise those who have no specific experience to refer to the bibliography below. It is also important to refer to the JSR-184 documentation, which you can find on the FTP site, given that, for reasons of space, not all the methods exposed by the API, and their variants, will be presented.

Use of vertices

The element that underlies the whole architecture of a 3D engine is the vertex. Each vertex, expressed as a set of three coordinates, can be translated and rotated in space. It can have different properties (color, transparency, normals, etc.) which combine to determine the final appearance of the polygons. Effects such as Gouraud shading and lighting, in fact, are computed by taking the values of the vertices' properties as a reference.

For complete management of the vertices, M3G provides two classes: VertexBuffer and VertexArray.

Figure 1 shows how VertexArray instances are used to store, respectively, the coordinates of vertices, normals, colors and texture coordinates.

Every reference to a VertexArray relating to the same object is stored in the VertexBuffer, which behaves as a container. For greater clarity we proceed with the classic, simple example of creating a cube.

Figure 1 – The types of collections that a VertexBuffer accepts

We therefore declare an array of bytes in which we store the spatial coordinates of our cube

private static final byte[] POSITIONS_VERTICES = {
    -1, -1, 1, 1, -1, 1, -1, 1, 1, 1, 1, 1,
    -1, -1, -1, 1, -1, -1, -1, 1, -1, 1, 1, -1
};

We also declare a class variable of type VertexBuffer

private VertexBuffer cubeVertices;

In the init() method of the class structured in the previous article we write these lines of code

cubeVertices = new VertexBuffer();
VertexArray vertices = new VertexArray(POSITIONS_VERTICES.length / 3, 3, 1);
cubeVertices.setPositions(vertices, 1.0f, null);

The first line is self-explanatory: it creates a new VertexBuffer object. As a second step, you need to create an array of vertices to be buffered. For this reason, the next statement creates a VertexArray instance.

The declaration of its constructor is the following

VertexArray(int numVertices, int numComponents, int componentSize)   

As its first parameter it requires the number of vertices that constitute our object. In our case, given that each vertex is defined by three bytes, one for each coordinate, we pass the length of the array divided by 3.

Through the second and third arguments we indicate the number of components of each vertex and the size in bytes of each one. Each vertex of a cube is formed by 3 coordinates (x, y and z), which we have established to fall within the range of values of a byte type. So we have set a number of components equal to 3, of 1 byte each.

Now we can transfer the values of the vertices contained in the POSITIONS_VERTICES array, through the set() method (the only one) of the VertexArray class, which presents these declarations

void set(int firstVertex, int numVertices, byte[] values)
void set(int firstVertex, int numVertices, short[] values) 

Since during its construction the object allocated all the memory necessary to contain the vertex definitions, the firstVertex parameter indicates at which position of the array to start writing. numVertices tells how many vertices we want to copy from the array passed as the third parameter.

The choice between the two method declarations depends on the type of each component, according to what was passed to the constructor of the object through the componentSize parameter.

If the value of any vertex coordinate falls outside the range from -128 to 127, it becomes necessary to declare the vertex array as short[].
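As an illustration, the choice of componentSize can be derived from the largest coordinate magnitude. The following helper is a plain-Java sketch of that rule (it is not part of M3G, and the class and method names are ours):

```java
public class ComponentSizeChooser {

    // Returns 1 if every coordinate fits in a signed byte (-128..127),
    // otherwise 2 (the vertices must then be supplied as short[]).
    public static int componentSize(int[] coords) {
        for (int i = 0; i < coords.length; i++) {
            if (coords[i] < Byte.MIN_VALUE || coords[i] > Byte.MAX_VALUE) {
                return 2;
            }
        }
        return 1;
    }

    public static void main(String[] args) {
        int[] smallCube = { -1, -1, 1, 1, -1, 1, -1, 1, 1, 1, 1, 1 };
        int[] bigCube   = { -200, -200, 200, 200, -200, 200 };
        System.out.println(componentSize(smallCube)); // 1
        System.out.println(componentSize(bigCube));   // 2
    }
}
```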

The last statement of the code block seen above, whose declaration is

void setPositions(VertexArray positions, float scale, float[] bias)

sets in the VertexBuffer the reference to the VertexArray containing the definition of the vertices. The scale parameter indicates a resizing factor for the object (the value 1.0f leaves its size unchanged), while bias requires a set of three constants (X, Y, Z) that will be added to the vertices to obtain a spatial translation. Having no need for the bias, we set it to null.
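To see what scale and bias do to each stored coordinate, here is a plain-Java sketch (not M3G code; the class and method names are ours) mimicking the mapping the JSR-184 specification describes, position = scale * raw + bias:

```java
public class ScaleBiasDemo {

    // Mimics how a raw stored coordinate is mapped to its final
    // position: scale * raw + bias (bias == null means no offset).
    public static float decode(byte raw, float scale, float[] bias, int axis) {
        float b = (bias == null) ? 0f : bias[axis];
        return scale * raw + b;
    }

    public static void main(String[] args) {
        // scale 1.0f and null bias leave the coordinate unchanged
        System.out.println(decode((byte) 1, 1.0f, null, 0)); // 1.0

        // scale 0.5f halves the cube; a bias of {2, 0, 0} shifts it on X
        System.out.println(decode((byte) 1, 0.5f, new float[]{ 2f, 0f, 0f }, 0)); // 2.5
    }
}
```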

Setting up polygons

At this point we have only set the positions of the vertices of our cube. Obviously, to be drawn, the vertices need to be joined together to form a solid figure. We already know that every solid consists of several polygons and that each one can be decomposed into several triangles. M3G works only with this geometrical shape, for reasons of optimization and simplicity of the calculations.

So we are going to use the TriangleStripArray class, which is nothing more than a concatenation of triangles.

In this manner, after specifying the vertices of the first triangle, each subsequent one is denoted simply by a single further vertex; the other two vertices of each new triangle are taken from the last two specified by the previous triangle.

In this way, the amount of data needed to draw an object is also reduced. Figure 2 proposes an example to clarify this concept.

Figure 2 – Here is explained the usefulness of using a TriangleStripArray instead of specifying all the individual vertices

Knowing that a cube is formed by 8 vertices, covering it with independent triangles would require 6 vertices per face, that is 6 faces x 6 = 36 vertices. With the triangle strip we can reduce the whole to 3 (for the first triangle) + 11 (one for each of the 11 remaining triangles) = 14 vertices.
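The arithmetic above can be sketched in a couple of lines of plain Java (the class and method names are ours, purely for illustration):

```java
public class StripSavings {

    // Vertices needed when each triangle is listed independently.
    public static int independentVertices(int triangles) {
        return triangles * 3;
    }

    // Vertices needed by a single triangle strip: 3 for the first
    // triangle, then one extra vertex per additional triangle.
    public static int stripVertices(int triangles) {
        return 3 + (triangles - 1);
    }

    public static void main(String[] args) {
        int cubeTriangles = 12; // 6 faces x 2 triangles each
        System.out.println(independentVertices(cubeTriangles)); // 36
        System.out.println(stripVertices(cubeTriangles));       // 14
    }
}
```

The gap widens with mesh complexity: for a 1000-triangle model a strip needs 1002 vertices instead of 3000.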

It is obvious that when the need is to design a very complex object, this aspect greatly affects both the performance and the memory required to store the definition.

Let’s create a new array containing the indices of the vertices to be connected

private static final int[] TRIANGLE_INDICES = {
    0, 1, 2, 3, 7, 1, 5, 4, 7, 6, 2, 4, 0, 1
};

It should be noted that the value of each index refers to the order of the vertices in the VertexArray. At this point we need to create a new TriangleStripArray object, so declare the class variable

private TriangleStripArray cubeTriangles;

and insert this line at the bottom of the init() method

cubeTriangles = new TriangleStripArray(TRIANGLE_INDICES, new int[]{ TRIANGLE_INDICES.length });

The cameras

A camera defines the observer’s point of view in terms of position and orientation. It is a very useful class because with each transformation performed on it all the objects that make up the scene are transformed accordingly. So it becomes easy to make the observer travel within a virtual world.

Multiple cameras can also coexist in the same scene; in this way it is possible to keep the various observation points up-to-date and activate them if necessary.

If we tried to run the program at this point, we would see an error message caused by the absence of a camera "on the scene". The Graphics3D class, in fact, is not able to determine an observation point on which to base all the calculations and the drawing operations.

At the bottom of the init() method, we need to create a new instance of the Camera object

Camera camera = new Camera();

A very important operation in the setup procedure of a camera is the choice of the type of projection to be used: perspective or parallel. The latter, usually employed in few application domains (such as CAD), produces a "flat" output where the distance of the vertices from the observer does not affect the result.

Perspective projection, instead, simulates the natural effect to which the human eye is subject: objects of the same dimensions appear smaller when distant from the observer than when near.

By reproducing this phenomenon it is possible to give the impression of depth on our displays, which can only show two-dimensional images.

So let's see how to set this type of projection for the camera. First of all we calculate the ratio between the width and height of the device display

float aspectRatio = (float) getWidth() / (float) getHeight();

This value serves to re-proportion the output: it is not desirable that the images of an application appear distorted or stretched on a phone whose display has different dimensions from those for which the application was designed.

Then we set the view volume with the line

camera.setPerspective(30.0f, aspectRatio, 1.0f, 1000.0f);

The view volume, shown in Figure 3, is nothing but the portion of space visible to the observer. The first parameter represents the width of the visual angle in degrees, while the last two indicate the closest and the furthest visible points. All vertices outside the pyramid that these values originate will not be drawn. This is another mechanism used to avoid drawing objects which, by their position, are not visible to the observer.

Figure 3 – A generic view volume

As a last step we connect the camera just created to the Graphics3D object via

g3d.setCamera(camera, null);

To make the API process all the data entered for the cube, in the Draw3D method, we write this line

g3d.render(cubeVertices, cubeTriangles, new Appearance(), null);

At this point we can refer to Listing 1 to see clearly all the changes made to the init() method of our class.

We have already mentioned the render() method of the Graphics3D class. This operation will trigger all the pipeline steps necessary to obtain a two-dimensional image directly on the display.

If we wanted to view a scene composed of several objects we would have to repeatedly run the render() method with different VertexBuffer and TriangleStripArray instances.

Someone might be tempted to create one huge VertexBuffer containing the data of all the objects to be drawn. This procedure can be used only in the case of sets of fixed objects since, as we will see in the next paragraph, transformations are applied to an entire render() call. For this reason, all the vertices of the buffer are affected by the same transformation.
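To see why a merged buffer forces one rigid motion, here is a plain-Java sketch (not M3G code; the names are ours) applying a single translation to every vertex of a combined position array, just as a single Transform passed to render() would:

```java
public class SharedTransformDemo {

    // Applies one translation to every vertex in a flat {x, y, z, ...}
    // array, mimicking how a single Transform passed to render() moves
    // the whole VertexBuffer as one rigid block.
    public static float[] translateAll(float[] verts, float tx, float ty, float tz) {
        float[] out = new float[verts.length];
        for (int i = 0; i < verts.length; i += 3) {
            out[i]     = verts[i]     + tx;
            out[i + 1] = verts[i + 1] + ty;
            out[i + 2] = verts[i + 2] + tz;
        }
        return out;
    }

    public static void main(String[] args) {
        // Two "objects" merged into one buffer: both are forced to move together.
        float[] merged = { 0f, 0f, 0f,   5f, 0f, 0f };
        float[] moved = translateAll(merged, 0f, 0f, -10f);
        System.out.println(moved[2]); // -10.0
        System.out.println(moved[5]); // -10.0
    }
}
```

If the two objects had to move independently, separate buffers and separate render() calls would be required.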


Transformations

As a last topic we cover the basic functionality for working with 3D objects: transformations. If we tried to launch the application as it stands, we would "strangely" get a completely blank screen.

This happens because the camera we created is positioned at the origin; as a result the observer is far too close to the object in the scene.

It is necessary to apply a transformation. Transformations are operations of translation, rotation or resizing performed on an object. To see the cube we can therefore move the camera away from it, i.e. translate it along the Z axis.

Since also transformations are represented through objects, we immediately create one with

Transform cameraTransform = new Transform();

after which we set the necessary translation via

cameraTransform.postTranslate(0.0f, 0.0f, 10.0f);

In this manner we moved the camera back 10 units along the Z axis. At this point we replace the previously written line

g3d.setCamera(camera, null);

with

g3d.setCamera(camera, cameraTransform);

to associate the transformation with the camera. And, as shown in Figure 4, it is finally possible to admire the object.

Figure 4 – The cube seen from its front face

A simple animation

You expected to see a cube, but there is only a rectangle on the display. This is because we are looking directly at one of its faces. Let's apply a transformation to it in order to create an animation.

Let’s add a class variable

Transform cubeTransform = new Transform();

while in the frame() method of the class we insert this line of code

cubeTransform.postRotate(1.0f, 1.0f, 1.0f, 1.0f);

This command applies a rotation of one degree around the (1, 1, 1) axis. Now we replace the old line

g3d.render(cubeVertices, cubeTriangles, new Appearance(), null);

with the new one

g3d.render(cubeVertices, cubeTriangles, new Appearance(), cubeTransform);

At each frame a further rotation of one degree will be added thus generating a progressive animation.

Figure 5 shows the result of these instructions.

Figure 5 – The rotating cube
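Since postRotate composes each new rotation with the previous ones, the accumulated angle grows by one degree per frame. A tiny plain-Java sketch of this accumulation (independent of M3G; the names are ours):

```java
public class RotationAccumulator {

    // Total rotation after n frames at a fixed rate, wrapped to [0, 360).
    public static float angleAfterFrames(int n, float degreesPerFrame) {
        return (n * degreesPerFrame) % 360f;
    }

    public static void main(String[] args) {
        System.out.println(angleAfterFrames(90, 1.0f));  // 90.0
        System.out.println(angleAfterFrames(360, 1.0f)); // 0.0 (full turn)
        System.out.println(angleAfterFrames(450, 1.0f)); // 90.0
    }
}
```

At the frame rates of typical MIDP devices this gives a slow, continuous spin; increasing degreesPerFrame speeds it up.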


So far we have studied the most important notions needed to quickly take advantage of the features of JSR-184. With a little effort and imagination you can create solid objects and entire worlds to explore, for your video games or for any other application you want to run on your phone.

There are still some very interesting topics such as coloring effects, texturing and 3D sprites that will be covered in the next article.

I hope the reading has been interesting and stimulating for you.


Bibliography

  1. Sun Microsystems, "Mobile 3D Graphics API Specification" (JSR-184)
  2. DEV No. 132, year XIII, September 2005, "Using OpenGL"


