longtran2904
217 posts
Summary of the rendering system?

Can someone summarize the Handmade Hero renderer for me? What techniques did Casey use and why? What features does the renderer currently have? How does it work with the entity system (i.e. how can I specify how to draw an entity)? What are some of the current API functions that are commonly used? I know the project started out 2D only and then moved toward 3D. Did the API change that much along the way?

Jason
235 posts
Summary of the rendering system?

This is a fairly broad question. Let me ask: how familiar are you with the renderer already? I can explain some of the more basic aspects of it, since I've seen and implemented many of the earlier video techniques in my own code, but I'm not sure how much you already understand. For example, if I say Casey implemented a push-buffer scheme that builds a list of render commands which are processed later, do you already understand that part of it?

longtran2904
217 posts
Summary of the rendering system?
Edited by longtran2904 on
Replying to boagz57 (#25117)

I'm trying to become more familiar with rendering architectures and concepts; I only know the basic stuff. I know that at the end of every frame, the game pushes some data and some commands that tell the GPU driver how to read and work on that data. So most of the time, optimizing the renderer just boils down to reducing the amount of data and the number of draw calls the GPU needs to handle, right? In your example, is it correct to think that Casey created a stack and pushes a bunch of draw commands onto it every frame?

Jason
235 posts
Summary of the rendering system?
Edited by Jason on
Replying to longtran2904 (#25118)

In your example, is it correct to think that Casey created a stack and pushes a bunch of draw commands onto it every frame?

Yes, but it's not just draw calls. Basically, he allocates an area of memory in the platform layer that is, as you suggested, treated like a stack in the game code, where he pushes whatever his version of a render command is. So he could have a render command like "DrawRect" which requires the user to pass in the pos, color, width, and height of the rect, and then this information gets pushed onto the render command stack (in the form of a struct like RenderEntry_DrawRect). The function might look like:

void GPUCmd_DrawRect(RenderCmdBuffer* cmdBuffer, v2 pos, Color color, f32 width, f32 height)
{
    //Here's where the entry is actually added to the command stack
    RenderEntry_DrawRect* rectEntry = RenderCmdBuf_Push(cmdBuffer, RenderEntry_DrawRect);

    //Fill in the values
    rectEntry->header.type = EntryType_DrawRect;
    rectEntry->color = color;
    rectEntry->pos = pos;
    rectEntry->width = width;
    rectEntry->height = height;

    ++cmdBuffer->entryCount;
}
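To make that concrete, the entry structs and the push helper could look something like this (my own sketch with illustrative names, not Casey's exact code):

#include <stdint.h>
#include <assert.h>

//The codebase's usual shorthand typedefs
typedef uint8_t u8; typedef uint32_t u32; typedef int32_t s32; typedef float f32;
typedef struct { f32 x, y; } v2;
typedef struct { f32 r, g, b; } Color;

typedef enum RenderEntryType
{
    EntryType_DrawRect,
    EntryType_ClearColorBuffer,
    //...other command types
} RenderEntryType;

typedef struct RenderEntry_Header
{
    RenderEntryType type;   //tells the processing loop how to interpret the bytes that follow
} RenderEntry_Header;

typedef struct RenderEntry_DrawRect
{
    RenderEntry_Header header;
    v2 pos;
    Color color;
    f32 width, height;
} RenderEntry_DrawRect;

typedef struct RenderCmdBuffer
{
    u8* baseAddress;   //start of the memory block handed over by the platform layer
    u32 usedBytes;     //how much of that block has been pushed onto so far
    u32 size;          //total size of the block
    s32 entryCount;
} RenderCmdBuffer;

//Reserve sizeof(type) bytes at the top of the stack and hand back a typed pointer
#define RenderCmdBuf_Push(cmdBuffer, type) (type*)RenderCmdBuf_Push_((cmdBuffer), sizeof(type))
static void* RenderCmdBuf_Push_(RenderCmdBuffer* cmdBuffer, u32 entrySize)
{
    assert(cmdBuffer->usedBytes + entrySize <= cmdBuffer->size);
    void* result = cmdBuffer->baseAddress + cmdBuffer->usedBytes;
    cmdBuffer->usedBytes += entrySize;
    return result;
}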

but you can also add, say, a "ClearColorBuffer" command where the user passes in what color to clear the window to, or a "LoadTextureBuffer" command, or anything else you want to add. It's important to note that, with these commands and the way Casey has it set up in his code, no actual OpenGL calls are made in this API layer. This layer is only for building up a render command buffer. This command buffer/stack (whatever you want to call it) is then eventually passed to an OpenGL layer of the code where the commands are actually processed. In his OpenGL layer, he just has a switch statement which checks what command is next on the stack and performs the correct operation for it. E.g.:

//Inside opengl.h within the processing function
u8* currentRenderBufferEntry = bufferToRender.baseAddress;

for (s32 entryNumber = 0; entryNumber < bufferToRender.entryCount; ++entryNumber)
{
    RenderEntry_Header* entryHeader = (RenderEntry_Header*)currentRenderBufferEntry;
    switch (entryHeader->type)
    {
         //Other render entry cases....

        case EntryType_DrawRect:
        {
            RenderEntry_DrawRect rectEntry = *(RenderEntry_DrawRect*)currentRenderBufferEntry;

            //Here's where you actually perform whatever OpenGL calls you need.
            glBegin(GL_QUADS);
            glColor3f(rectEntry.color.r, rectEntry.color.g, rectEntry.color.b);
            glVertex2f(rectEntry.pos.x, rectEntry.pos.y);
            glVertex2f(rectEntry.pos.x + rectEntry.width, rectEntry.pos.y);
            glVertex2f(rectEntry.pos.x + rectEntry.width, rectEntry.pos.y + rectEntry.height);
            glVertex2f(rectEntry.pos.x, rectEntry.pos.y + rectEntry.height);
            glEnd();

            currentRenderBufferEntry += sizeof(RenderEntry_DrawRect);
        }break;
    }
}

And yes, this building and processing of the command buffer is done every frame.
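Putting the two halves together, the per-frame flow is basically: reset the buffer, let the game code push its commands, then hand the buffer to the OpenGL layer. A rough sketch building on the structs above (OpenGL_ProcessCommands is a made-up name for the switch-statement loop, not Casey's actual function):

//Hypothetical prototype for the OpenGL-layer function that walks the buffer
void OpenGL_ProcessCommands(RenderCmdBuffer* cmdBuffer);

static void RenderOneFrame(RenderCmdBuffer* cmdBuffer)
{
    //Reset the stack so this frame starts from an empty command list
    cmdBuffer->usedBytes = 0;
    cmdBuffer->entryCount = 0;

    //Game code pushes whatever it wants drawn this frame
    v2 pos = { 100.0f, 100.0f };
    Color red = { 1.0f, 0.0f, 0.0f };
    GPUCmd_DrawRect(cmdBuffer, pos, red, 50.0f, 50.0f);

    //Hand the finished buffer to the OpenGL layer, which walks it with the
    //switch statement shown above
    OpenGL_ProcessCommands(cmdBuffer);
}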

Casey's renderer did start out as 2D only but has since evolved into a 2D/3D hybrid. Basically, he still passes quads to the renderer, but they also have z positions, and the actual world is rendered as a 3D world.

So most of the time, optimizing the renderer just boils down to reducing the amount of data and the number of draw calls the GPU needs to handle, right?

Broadly speaking, yes. Generally your goal is to batch as much data as you can and feed it to the GPU in as few calls as possible. If your game is slow, it's more than likely because it's CPU bound and the GPU is sitting idle (GPUs are incredibly powerful nowadays). For Casey, I know at one point he started passing just one giant buffer for textures and one for vertex data (I think), and you would then just index into that data when drawing (which beats having to buffer data individually all the time). Though I'm really not very specialized in rendering techniques and don't have too much experience with modern high-performance rendering.
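To give a rough idea of what batching looks like in practice, here's a sketch in old-style OpenGL (my own made-up names, not Casey's code): instead of issuing one draw per rect, you accumulate all the vertices for the frame into a single CPU-side array and submit them with one draw call.

#include <GL/gl.h>   //(on Windows, include <windows.h> before this)

typedef struct BatchVertex { float x, y, z; float r, g, b; } BatchVertex;

#define MAX_BATCH_VERTS (1024 * 6)
static BatchVertex batchVerts[MAX_BATCH_VERTS];
static int batchCount = 0;

//Append the two triangles of a rect to the CPU-side batch instead of drawing it immediately
static void Batch_PushRect(float x, float y, float w, float h, float r, float g, float b)
{
    float xs[6] = { x, x + w, x + w,   x, x + w, x };
    float ys[6] = { y, y,     y + h,   y, y + h, y + h };
    for (int i = 0; i < 6 && batchCount < MAX_BATCH_VERTS; ++i)
    {
        BatchVertex v = { xs[i], ys[i], 0.0f, r, g, b };
        batchVerts[batchCount++] = v;
    }
}

//One draw call for everything pushed this frame
static void Batch_Flush(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(BatchVertex), &batchVerts[0].x);
    glColorPointer(3, GL_FLOAT, sizeof(BatchVertex), &batchVerts[0].r);
    glDrawArrays(GL_TRIANGLES, 0, batchCount);
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
    batchCount = 0;
}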

Jason
235 posts
Summary of the rendering system?

Also, if you haven't paid for access to Casey's code, I would highly recommend you do so and fiddle around with things. I think at some point he also completely separated out the renderer and designed a more user-friendly API for it. I know in videos 477 & 478 he talks a bit about the separated renderer and how you can try it out.

longtran2904
217 posts
Summary of the rendering system?
Replying to boagz57 (#25127)

Casey's renderer did start out as 2D only but has since evolved into a 2D/3D hybrid. Basically, he still passes quads to the renderer, but they also have z positions, and the actual world is rendered as a 3D world.

How can he render 3d when he still passes quads to the renderer? Did the renderer implicitly convert the quad to something else?

Jason
235 posts
Summary of the rendering system?
Replying to longtran2904 (#25134)

Well, you can pass whatever shapes you want into your 3D world, whether they're flat rectangles, flat circles, cubes, whatever. In the end, your 3D world on screen is really just 2D geometry that the projection matrix warps to give the illusion of 3D. The biggest thing is that he adds a z position to all the quad vertices, and it's that z position that determines where these quads/rectangles end up in the 3D world (e.g. struct Vertex { float xPos; float yPos; float zPos; };).
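To make the z-position idea concrete, here's a tiny sketch of a perspective projection (my own illustration of the general technique, not Casey's actual math): the further a vertex is from the camera, the more its x/y get pulled toward the center of the screen, which is what sells the depth even though the geometry is just flat quads.

typedef struct { float x, y; } v2;
typedef struct { float x, y, z; } Vertex;

//Project a world-space vertex onto the screen. The camera sits at z = cameraZ
//looking down -z, so a larger distance shrinks the on-screen offset from the center.
static v2 ProjectToScreen(Vertex v, float focalLength, float cameraZ, v2 screenCenter)
{
    float distance = cameraZ - v.z;         //how far in front of the camera the vertex is
    float scale = focalLength / distance;   //the perspective divide
    v2 result = { screenCenter.x + scale * v.x,
                  screenCenter.y + scale * v.y };
    return result;
}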

If you watch one of the videos I mentioned (I think towards the beginning of either day 477 or 478), at one point Casey takes his debug camera and flies around the 3D world, and you can see when he flies by the tree sprites (which, again, are made of 4 vertices) that they are actually just flat surfaces and not 3D cubes.

If this is still confusing, I would suggest checking out some videos on the perspective transform and then, later, the projection matrix. Casey goes into this towards the beginning of the series. Jon Blow also has a long-form video discussing these things, which I'm sure you could find on Google/YouTube (I can't link it here since I'm at work). If you have any more specific questions, though, I can obviously try and help out.

longtran2904
217 posts
Summary of the rendering system?

After toying around with the renderer_test system, I kind of understand the Handmade Hero rendering API now. First, you have handmade_renderer, which is the API you actually use. Then you have handmade_renderer_opengl, which implements handmade_renderer in OpenGL. Obviously, you need to load OpenGL in the platform layer code, and that's win32_handmade_opengl's job. It gets compiled into a DLL and talks to win32_handmade through a function table declared in win32_handmade_renderer. renderer_test is just a separate file that will... well... test the DLL and acts just like win32_handmade.
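If it helps, the function-table handoff is roughly this shape (the names here are illustrative, not the actual declarations in win32_handmade_renderer): the renderer DLL fills in a struct of function pointers, and the platform/game side only ever renders through that struct, so it never has to link against OpenGL directly.

//A hypothetical render-backend function table; the real one is declared in win32_handmade_renderer.
typedef struct renderer_frame renderer_frame;   //opaque handle owned by the renderer DLL

typedef struct renderer_function_table
{
    renderer_frame *(*BeginFrame)(void *rendererState, int windowWidth, int windowHeight);
    void (*EndFrame)(void *rendererState, renderer_frame *frame);   //processes the pushed commands
} renderer_function_table;

//Platform side (sketch): load the DLL once, grab the table through an exported
//function, then call through the table every frame.
//  HMODULE rendererDLL = LoadLibraryA("win32_handmade_opengl.dll");
//  renderer_function_table table = ...;   //filled in via a GetProcAddress'd export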

longtran2904
217 posts
Summary of the rendering system?
Edited by longtran2904 on

Something I noticed while running the win32_renderer_test is that there are a couple of white bars appearing on the side of each cube when rotating (both walls and ground). I can't send any video because of my internet connection :(.

Also, the code which loads all the assets has the wrong file location for the sprites. All the required assets are in sources/renderer_test, but the code just loads them from wherever you run the executable (maybe you're supposed to run it from that folder?).