OpenGL renderer questions

Hello!

I know hardware rendering is still far off in the series, but I was hoping OpenGL-proficient people could help me understand how it's done 'properly'.

I'm using the Arcsynthesis OpenGL tutorials and the 'Red Book', but neither of them really says much about proper architecture.

I've reached a point in my program where I need to do proper sorting, and the way I render things doesn't look right. So my questions are:

1. Is there a point in using pushbuffer sorting with OpenGL? That is, I want to use a pushbuffer to do z-sorting before I issue the OpenGL draw calls. Or is it better to just give objects z-coordinates and let OpenGL decide what draws on top of what? I feel uneasy about the latter choice, because there are potentially lots of transparent objects involved, and a certain draw order might be needed for them. Or does transparency sorting only matter when doing 3D rendering?

2. What's the 'proper' way to organize OpenGL-specific data? In my app there's a main data structure called visual_object. It holds data like position, scale, color, states, etc. But it also holds OpenGL-specific things, like the vertex buffer name and texture name (which I get when I init the object with glGenBuffers()/glGenTextures()). This way, during the render call I can easily bind the buffer/texture just by grabbing their IDs from the object. But it feels weird. Is this really the right way? I also don't like having implementation-specific things in my platform-independent code. Is there a better way to handle these things?

I would really appreciate any thoughts, advice etc.

thank you!

I'm not an *expert* on GL, but I did just release a game with a GL backend.

1. Yes. Z-ordering aside, you want to be able to rearrange draws to a) minimize the number of state changes (texture, shader, framebuffer) over the entire render sequence, and b) batch as many draws as possible into a single call. (Uploading a full buffer of vertex data rather than one or two quads at a time.) This is... *difficult*, to say the least, without a pushbuffer.

Furthermore, what the pushbuffer actually buys you is that the order you simulate in is not necessarily the order you render in, and that is *always* a good thing. Restricting one by the other is going to lead to finicky, inefficient code.
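To make that concrete, here's a rough sketch of the kind of push buffer I mean -- all the names (render_entry, push_quad, make_sort_key, and so on) are made up for illustration, not anything from a real codebase:

```c
#include <stdint.h>
#include <stdlib.h>

// One recorded draw. Entries pile up during simulation in whatever order
// the game produces them; sorting happens once per frame, before any GL call.
typedef struct {
    uint64_t sort_key;    // layer in the high bits, texture in the low bits
    uint32_t texture_id;  // renderer-side handle for the texture to bind
    float x, y, w, h;     // quad to draw
} render_entry;

typedef struct {
    render_entry entries[4096];
    int count;
} push_buffer;

// Pack a sort key so higher layers draw later and, within a layer,
// entries that share a texture end up adjacent (ready to batch).
static uint64_t make_sort_key(uint32_t layer, uint32_t texture_id)
{
    return ((uint64_t)layer << 32) | texture_id;
}

static void push_quad(push_buffer *pb, uint64_t key, uint32_t texture_id,
                      float x, float y, float w, float h)
{
    if (pb->count < 4096) {
        render_entry *e = &pb->entries[pb->count++];
        e->sort_key = key;
        e->texture_id = texture_id;
        e->x = x; e->y = y; e->w = w; e->h = h;
    }
}

static int compare_entries(const void *a, const void *b)
{
    uint64_t ka = ((const render_entry *)a)->sort_key;
    uint64_t kb = ((const render_entry *)b)->sort_key;
    return (ka > kb) - (ka < kb);
}

// End of frame: sort, then walk the entries, accumulate vertices into one
// batch, and flush with a draw call whenever the texture (or shader) changes.
static void sort_push_buffer(push_buffer *pb)
{
    qsort(pb->entries, pb->count, sizeof(render_entry), compare_entries);
}
```

The point is that the sort key gives you both things at once: your transparency order (layer/Z in the high bits) and your state-change grouping (texture in the low bits), without the game code ever caring about either.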

2. How you set this up is going to depend on what you actually need. Attaching vertex buffers to individual objects only really makes sense when you're dealing with complex models. Otherwise, it's better to just grab whatever buffer is available during the render process and fill it up -- the win from batching outweighs the cost of re-uploading the vertices. (Hint: don't try to cache and reuse a VBO's contents between draw calls -- abandon the buffer after each draw by reallocating it; the driver will clean up the old storage once the draw is done. Otherwise the CPU and GPU get serialized against each other, and that becomes a bottleneck.)
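The "abandon by reallocating" part is what people usually call buffer orphaning. Roughly like this, assuming a streaming VBO that gets refilled every batch (upload_batch is just an illustrative name):

```c
#include <GL/gl.h>  // or whatever loader you use (glad, GLEW, ...)

// Passing NULL to glBufferData "orphans" the buffer's old storage: the driver
// keeps the previous allocation alive until in-flight draws finish and hands
// back fresh memory, so the CPU never stalls waiting on the GPU.
static void upload_batch(GLuint vbo, const void *vertices, GLsizeiptr size)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, size, NULL, GL_STREAM_DRAW);  // orphan
    glBufferSubData(GL_ARRAY_BUFFER, 0, size, vertices);        // refill
}
```

You call that once per batch, issue the draw, and then forget about the contents entirely.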

For textures, just wrap them in a handle struct. As long as nothing outside the renderer or asset system goes poking at the internals, you can switch between GL/DirectX/software at will. The game code itself doesn't actually care what the contents of the texture are. (Usually. If your game does, then you need to address that differently.)
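Concretely, the handle can be as dumb as a struct with an ID and a couple of safe facts in it -- something like this (texture_handle/create_texture are made-up names, and the GL calls are the only backend-specific part):

```c
#include <stdint.h>
#include <GL/gl.h>

// The game only ever sees this. Only the renderer knows that "id" happens
// to be a GL texture name today; tomorrow it could index a D3D or software
// texture table instead.
typedef struct {
    uint32_t id;        // opaque to the game
    int width, height;  // API-agnostic facts the game is allowed to use
} texture_handle;

// GL backend: the one place glGenTextures ever gets called.
static texture_handle create_texture(int width, int height, const void *pixels)
{
    texture_handle result = {0};
    GLuint name;
    glGenTextures(1, &name);
    glBindTexture(GL_TEXTURE_2D, name);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    result.id = name;
    result.width = width;
    result.height = height;
    return result;
}
```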

Remember that abstraction isn't "these things must never touch!" Rather, it's about making sure that outside code doesn't care about the implementation details. Your game code can pass around handles to textures and shaders (or any other platform object) with no issues, as long as it doesn't have to care about the *contents* of that object.
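So in your case, visual_object would hold a texture_handle instead of raw GL names, and drawing it is just a push -- reusing the hypothetical push_quad/make_sort_key helpers sketched above:

```c
typedef struct {
    float x, y;
    float scale;
    texture_handle texture;  // opaque handle, no GL names in sight
} visual_object;

static void draw_object(push_buffer *pb, const visual_object *obj, uint32_t layer)
{
    uint64_t key = make_sort_key(layer, obj->texture.id);
    push_quad(pb, key, obj->texture.id,
              obj->x, obj->y,
              obj->texture.width * obj->scale,
              obj->texture.height * obj->scale);
}
```

Nothing in that code knows (or cares) that OpenGL exists.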
This is exactly the information I needed, thank you btaylor2401. For some reason I never thought of batching vertex data uploads, and it does sound easier/faster than doing per-quad fills and keeping the VBOs around.

I will still try things out and see how it really works and performs. But I at least have some idea about how people actually use this stuff. So thanks!