OpenGL ES: Part 1.5

There’s good news and bad news. The good news is that I’ve made a lot of progress in getting a simple engine running. Even better, now that the 2011 WWDC videos are online there are some awesome new videos about GLKit. I’d recommend checking them out if you’re a developer, because some of the features look very nice. I’ll definitely be upgrading to OpenGL ES 2.0 when iOS 5.0 comes out.

The bad news is that I had an eye appointment earlier today where my eyes were dilated, so I can’t really see. I was starting to think I wouldn’t be able to finish my post for today, but my vision is starting to come back, so I can manage a short post. I’m getting LASIK done next month because I’m nearsighted enough that I can’t even use a computer without my glasses/contacts, and I’ve always hated feeling so dependent on them. Getting my eyes dilated has effectively made me farsighted for the past 7 hours, and it’s pretty much just as bad. The really annoying part is that I still have to wear my glasses, because without them I can’t see at any distance! It’s a really interesting juxtaposition.

So for today’s post I’m going to talk about the files that are in the template I posted last week. If you haven’t seen it, get it at the bottom of the page here: OpenGL ES: Part 1

App Delegate

The app delegate is about as straightforward as it could be. When the app is launched, MainWindow.xib creates the app delegate, a window, and one view. The view is a subclass of EAGLView. The only code in the body of OpenGLES2DAppDelegate is:

- (void)applicationDidFinishLaunching:(UIApplication *)application {
	[glView startAnimation];
}

- (void)applicationWillResignActive:(UIApplication *)application {
	[glView stopAnimation];
}

- (void)applicationDidBecomeActive:(UIApplication *)application {
	[glView startAnimation];
}

- (void)applicationWillTerminate:(UIApplication *)application {
    [glView stopAnimation];
}

This should be pretty obvious: it starts and stops the EAGLView’s animation when the app launches or terminates. It does the same when the app goes into or comes out of the background, because OpenGL isn’t allowed to render while an app is in the background. stopAnimation works by invalidating the timer that calls the render method every frame, and startAnimation creates that timer.

EAGLView

The EAGLView does the bulk of the work. Let’s dive right into the init:

- (id)initWithCoder:(NSCoder*)coder
{
	if((self = [super initWithCoder:coder])) {
		// Get the layer
		CAEAGLLayer *eaglLayer = (CAEAGLLayer*) self.layer;

		eaglLayer.opaque = YES;
		eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
										[NSNumber numberWithBool:FALSE], kEAGLDrawablePropertyRetainedBacking, kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];

		context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1]; //OpenGL ES 1.1

		if(!context || ![EAGLContext setCurrentContext:context] || ![self createFramebuffer]) {
			//[self release];
			return nil;
		}

		animating = FALSE;
		animationFrameInterval = 1;
		displayLink = nil;

        [self setupView];
	}

	return self;
}

First off, the reason we use initWithCoder: is that the view gets created from a nib; if you plan on creating it programmatically you’ll need to implement a different initializer. The first thing we do is get the layer. Because we implemented the class method layerClass and returned [CAEAGLLayer class], our view was created with a layer that’s already a CAEAGLLayer. The layer is set to opaque because there can be performance issues if it’s transparent (note: this isn’t something I’ve looked into, just read, so if you need it to be transparent, who knows, it might work?).

Next we set the drawable properties, a dictionary containing just the two keys you see here. Retained backing means each frame is kept around and drawn over the next time; we’ll be redrawing the entire screen every frame, so there’s no need to waste the extra memory. The color format can be either kEAGLColorFormatRGBA8, which means 8 bits each for red, green, blue, and alpha (transparency), or kEAGLColorFormatRGB565, a 16-bit format with 5 bits each for red and blue and 6 bits for green. A screen buffer that’s 320×480 is 153,600 pixels, so it takes ~300KB of memory in 16-bit or ~600KB in 32-bit. It’s another area where you need to assess your needs. It’s important to note that these properties have to be set before renderbufferStorage:fromDrawable: is called. That should make sense, because the size of the storage depends on the color format; should you change these options later, you’ll have to call renderbufferStorage:fromDrawable: again.
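For reference, that layerClass override is a one-liner. The template already contains the equivalent of this in EAGLView, so treat this as a sketch of what’s there rather than new code:

+ (Class)layerClass {
	// Back this view with a CAEAGLLayer instead of a plain CALayer,
	// so OpenGL ES has a drawable surface to render into.
	return [CAEAGLLayer class];
}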

Ok, next we create an EAGLContext, and you’ll see that it’s an OpenGL ES 1.1 context. The context is what manages the state of OpenGL, so when you’re changing things you need to make sure it’s set as the current context. The if conditional makes sure that all three of these things happen: the context was created, it was successfully set as current, and the framebuffer was successfully created. If any of these fail, the initialization of your EAGLView fails. There should be better handling of what to do when that happens, but I figure that might vary from app to app.

Finally we set some ivars and call setupView, then return the newly created EAGLView.

- (BOOL)createFramebuffer {
	glGenFramebuffersOES(1, &defaultFramebuffer); //Create the Frame Buffer
	glGenRenderbuffersOES(1, &colorRenderbuffer); //Create the Render Buffer

	glBindFramebufferOES(GL_FRAMEBUFFER_OES, defaultFramebuffer); //Bind the Frame Buffer
	glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer); //Bind the Render Buffer
	[context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:(id)self.layer]; //Create the storage based on the layer
	glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, colorRenderbuffer); 

	glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth); //Get the width of the renderbuffer
	glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight); //Get the height of the renderbuffer

	if(glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES) {
		NSLog(@"failed to make complete framebuffer object %x", glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES));
		return NO;
	}

	return YES;
}

This method got sneakily called after the creation of our context in that if statement. It also gets called whenever layoutSubviews happens (which in most cases should only be once). Its job is to create the framebuffer and the renderbuffer. Because Apple can probably explain them better than I can, here are their definitions of the two:

  • A renderbuffer is a simple 2D graphics image in a specified format. This format usually is defined as color, depth or stencil data. Renderbuffers are not usually used in isolation, but are instead used as attachments to a framebuffer.
  • Framebuffer objects are the ultimate destination of the graphics pipeline. A framebuffer object is really just a container that attaches textures and renderbuffers to itself to create a complete configuration needed by the renderer.

So the renderbuffer is the chunk of memory that’s big enough to contain the data for a pixel (32 or 16 bits) for each pixel we need (320×480). The framebuffer is a little harder to grasp, but it’s essentially a container that the renderbuffer gets attached to. The renderbufferStorage:fromDrawable: method is what sets aside that chunk of memory, based on the size of the layer.

- (void) setAnimationFrameInterval:(NSInteger)frameInterval {
	// Frame interval defines how many display frames must pass between each time the
	// display link fires. The display link will only fire 30 times a second when the
	// frame interval is two on a display that refreshes 60 times a second. The default
	// frame interval setting of one will fire 60 times a second when the display refreshes
	// at 60 times a second. A frame interval setting of less than one results in undefined
	// behavior.
	if (frameInterval >= 1)
	{
		animationFrameInterval = frameInterval;

		if (animating)
		{
			[self stopAnimation];
			[self startAnimation];
		}
	}
}

#pragma mark - Animation methods

- (void) startAnimation {
	if (!animating)
	{
        displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(drawView)];
        [displayLink setFrameInterval:animationFrameInterval];
        [displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];

        animating = TRUE;
	}
}

- (void)stopAnimation {
	if (animating)
	{
        [displayLink invalidate];
        displayLink = nil;

		animating = FALSE;
	}
}

These three methods set up the timer for rendering. We want to render at some framerate, usually 30 or 60fps. CADisplayLink is the ideal method because it fires in sync with the screen refresh. The screen on an iPhone refreshes at 60fps, and the interval has to be a positive integer: at 1 it’s 60fps, at 2 it’s 30, at 3 it would be 20, and so on. This is only a target framerate, though, so if your drawing code ends up taking too long it will skip to the next update. When we start and stop the animation we add or remove the displayLink, which is what causes drawView to get called.
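As a quick usage sketch, if you wanted to target 30fps on the 60Hz screen you’d set the interval to 2 from wherever you hold the view (I’m assuming the glView outlet from the app delegate here):

// 60Hz refresh / frame interval of 2 = drawView fires at ~30fps.
[glView setAnimationFrameInterval:2];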

- (void)setupView
{
	// Sets up matrices and transforms for OpenGL ES
	glViewport(0, 0, backingWidth, backingHeight);
	glMatrixMode(GL_PROJECTION);
	glLoadIdentity();
	glOrthof(-1.0f, 1.0f, 1.5f, -1.5f, -1.0f, 1.0f);
	glMatrixMode(GL_MODELVIEW);

	// Clears the view with gray
	glClearColor(0.5f, 0.5f, 0.5f, 1.0f);

	// Sets up pointers and enables states needed for using vertex arrays and textures
	glVertexPointer(2, GL_FLOAT, 0, spriteVertices);
	glEnableClientState(GL_VERTEX_ARRAY);
	glTexCoordPointer(2, GL_SHORT, 0, spriteTexcoords);
	glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    //Load the texture from an image
    UIImage *mitchImage = [UIImage imageNamed:@"Mitch.png"];
    mitchTexture = [[Texture2D alloc] initWithImage:mitchImage filter:GL_LINEAR];

    // Enable use of the texture
    glEnable(GL_TEXTURE_2D);
    // Set a blending function to use
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    // Enable blending
    glEnable(GL_BLEND);
}

setupView gets called once and does some things that only need to be done once. I’m not really going to go into the differences between the modelview matrix, the projection matrix, and the viewport, because some of the tutorials I listed earlier can explain them better than me. The important part is that we set the viewport to the size of the renderbuffer we made before. The projection matrix is set to a coordinate system running from -1 to 1 on the x axis, -1.5 to 1.5 on the y axis, and -1 to 1 on the z axis. The reason y goes to 1.5 is that 480:320 is a 1.5:1 ratio, so this keeps everything square. The z dimension will go largely unused because we’re working in 2D.
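If you’d rather not hardcode the 1.5, here’s a hedged sketch that derives the ortho bounds from the template’s backingWidth/backingHeight ivars instead. It produces the same values on a 320×480 renderbuffer:

// 480 / 320 = 1.5, so this matches the hardcoded call above on a 320x480
// buffer, but keeps a unit square on other renderbuffer sizes too.
GLfloat aspect = (GLfloat)backingHeight / (GLfloat)backingWidth;
glOrthof(-1.0f, 1.0f, aspect, -aspect, -1.0f, 1.0f);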

glClearColor sets the color the background becomes when we clear it; this is one of the easiest ways to check that you’ve set everything up properly.

The next series of calls is what we set up to draw the image. This is just demo code and will get moved out as I expand the template. glVertexPointer is how we pass in the vertices to draw. The first parameter, 2, says how many values there are per vertex (x,y). Next is the type of the values. Third is the stride: the number of bytes from the start of one vertex to the start of the next (0 means the values are tightly packed). Say we had an interleaved array with each vertex position followed by its color values, so XYRGBXYRGB and so on; the stride would be the size of all five values, so OpenGL reads the two position values (XY), then jumps that many bytes ahead to land on the next vertex’s XY (there’s a concrete sketch of this below). And finally we pass in the array with the vertices. glEnableClientState makes it so that when we actually issue a draw command it uses the vertex array we just passed; likewise, one line down, calling it on GL_TEXTURE_COORD_ARRAY enables reading texture coordinates from the array we pass to glTexCoordPointer.
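To make the stride idea concrete, here’s a small sketch that isn’t part of the template: a hypothetical interleaved array with a position and a color per vertex (OpenGL ES 1.1 requires 4 color components, hence the alpha):

// Hypothetical interleaved layout: position (x, y) followed by color (r, g, b, a).
typedef struct {
	GLfloat x, y;       // position
	GLfloat r, g, b, a; // color
} InterleavedVertex;

static const InterleavedVertex vertices[4] = {
	{ -0.5f, -0.5f,  1, 0, 0, 1 }, // bottom left, red
	{  0.5f, -0.5f,  0, 1, 0, 1 }, // bottom right, green
	{ -0.5f,  0.5f,  0, 0, 1, 1 }, // top left, blue
	{  0.5f,  0.5f,  1, 1, 1, 1 }, // top right, white
};

// The stride is the byte distance from one vertex's x to the next vertex's x:
// the whole struct, i.e. sizeof(InterleavedVertex) == 24 bytes here.
glVertexPointer(2, GL_FLOAT, sizeof(InterleavedVertex), &vertices[0].x);
glColorPointer(4, GL_FLOAT, sizeof(InterleavedVertex), &vertices[0].r);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);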

One important thing to note is that you probably don’t want to use imageNamed: in your actual code. The reason is that iOS caches images loaded with imageNamed: in preparation for the image being loaded again. We’re loading it into a texture, so there’s no need to retain it at all. After loading the image we create a Texture2D from it, then enable 2D texturing and blending on the context.
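If you want to skip that cache, one option (a sketch, assuming the PNG ships in the main bundle) is to load the file directly:

// imageWithContentsOfFile: bypasses the imageNamed: cache, so the bitmap
// isn't kept around after we've turned it into a texture.
NSString *path = [[NSBundle mainBundle] pathForResource:@"Mitch" ofType:@"png"];
UIImage *mitchImage = [UIImage imageWithContentsOfFile:path];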

- (void)drawView
{
	// Make sure that you are drawing to the current context
	[EAGLContext setCurrentContext:context];

	glBindFramebufferOES(GL_FRAMEBUFFER_OES, defaultFramebuffer);

	glClear(GL_COLOR_BUFFER_BIT);
	glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

	glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
	[context presentRenderbuffer:GL_RENDERBUFFER_OES];
}

Finally, drawView is what gets called every time the displayLink fires. The first call is fairly trivial in this example, but if you had multiple contexts you’d want to make sure you’re using the right one. We bind the framebuffer, clear it to the clear color, and use glDrawArrays to draw all four vertices as a strip of two triangles. The zero here is the starting index in the array. All that’s left is to bind the renderbuffer and present it to the screen through the CAEAGLLayer.

Texture 2D

As I said last time, this class is pretty barebones. I figure it’s not really worth expanding until it’s actually needed, so if you want to use PVR files you’re on your own for now.

- (id)initWithImage:(UIImage*)content filter:(GLenum)filter {

    self = [super init];
    if(self) {

        // Get the Core Graphics image from the UIImage
        CGImageRef contentRef = content.CGImage;

        if(contentRef) {
            // Get the width and height of the image
            width = CGImageGetWidth(contentRef);
            height = CGImageGetHeight(contentRef);
            // Texture dimensions must be a power of 2. If you write an application that allows users to supply an image,
            // you'll want to add code that checks the dimensions and takes appropriate action if they are not a power of 2.

            // Allocate the memory needed for the bitmap context
            GLubyte *contentData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
            // Uses the bitmap creation function provided by the Core Graphics framework.
            CGContextRef contentContext = CGBitmapContextCreate(contentData, width, height, 8, width * 4, CGImageGetColorSpace(contentRef), kCGImageAlphaPremultipliedLast);
            // After you create the context, you can draw the sprite image to the context.
            CGContextDrawImage(contentContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), contentRef);
            // You don't need the context at this point, so you need to release it to avoid memory leaks.
            CGContextRelease(contentContext);

            // Use OpenGL ES to generate a name for the texture.
            glGenTextures(1, &name);
            // Bind the texture name.
            glBindTexture(GL_TEXTURE_2D, name);

            // Set the texture parameters to use the specified filter
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, filter);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, filter);

            // Specify a 2D texture image, providing a pointer to the image data in memory
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, contentData);
            // Release the image data
            free(contentData);

        }
    }

    return self;
}

So the first step is to get a CGImageRef from the UIImage. Next we get its dimensions; this texture loader only works with images whose dimensions are a power of 2. I plan on making my art as texture atlases in Zwoptex, so I didn’t bother implementing a feature to make images power-of-2 if they weren’t. It also doesn’t actually check whether they are, but again, I expect this code to get replaced. Basically, we get the bitmap data from the image and then create a texture from it. The Texture2D stores the texture’s name, which is what OpenGL calls its handle or identifier. One thing to add: when a Texture2D deallocs we should remove the texture from OpenGL, because a texture will only ever be used through its Texture2D, so once the Texture2D is gone we can free it up on the OpenGL side.
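A minimal sketch of that cleanup, assuming the template’s name ivar and manual reference counting (under ARC you’d drop the [super dealloc]):

- (void)dealloc {
	// Ask OpenGL to free the texture object this Texture2D owns.
	glDeleteTextures(1, &name);
	[super dealloc];
}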

Conclusion

Phew, maybe that wasn’t as short as I thought it would be. My eyesight is actually noticeably better; I can almost read my phone without looking like I’m 90. Next week should be my last week on iDevBlogADay, so I’ll try to have a nice fat post.
