Day 242, OS X and OpenGL woes

So, I am currently at day 242, keeping up a good fight against the OS X graphics APIs in order to make things work properly. As someone who had never written a line of OpenGL code before this, I find it a bit of a struggle.

From the start, I had been blitting the bitmap to the screen via OpenGL in my OS X version of the platform layer. Everything turned out much cleaner after I implemented Casey's version of basically the same thing, both the render and the blit.

However, I am having some difficulty understanding how to deal with certain features introduced on day 242. Casey detects the availability of modern OpenGL features by looking up the wglCreateContextAttribsARB function and attempting to create a rendering context with a specific set of attributes, including the major API version and various flags. He then looks up the available extensions using glGetString and uses that information to set some parameters (GL_EXT_texture_sRGB, GL_EXT_framebuffer_sRGB) and to set the stage for further endeavours. As I understand it, this will be needed in the future. For reference, here's a direct link to the related commit for those who have access to the GitHub repo.

From what I can tell, Windows' implementation of the OpenGL API allows requesting modern OpenGL features while retaining the ability to use the antiquated immediate-mode drawing functions.

OS X is not set up that way. You have a choice: either include gl.h and use immediate-mode drawing, or include gl3.h and migrate completely to the newer API.

My current code to instantiate an OpenGL context looks like this:

static NSOpenGLContext *
OSXInitOpenGL(NSWindow *Window)
{
    NSOpenGLPixelFormatAttribute Attributes[] =
    {
        NSOpenGLPFAClosestPolicy,
        NSOpenGLPFADoubleBuffer,
        NSOpenGLPFAColorSize, 32,
        NSOpenGLPFAAlphaSize, 8,
        NSOpenGLPFAOpenGLProfile, NSOpenGLProfileVersionLegacy,
        0
    };

    NSOpenGLPixelFormatAttribute ModernAttributes[] =
    {
        NSOpenGLPFAClosestPolicy,
        NSOpenGLPFADoubleBuffer,
        NSOpenGLPFAColorSize, 32,
        NSOpenGLPFAAlphaSize, 8,
        // NSOpenGLPFAOpenGLProfile, NSOpenGLProfileVersion3_2Core,
        0
    };

    NSOpenGLPixelFormat *PixelFormat =
        [[NSOpenGLPixelFormat alloc] initWithAttributes: Attributes];
    NSOpenGLContext *Context =
        [[NSOpenGLContext alloc] initWithFormat: PixelFormat
                                 shareContext: nil];

    /* modern context set up */

    b32 IsModernContext = false;

    NSOpenGLPixelFormat *ModernPixelFormat =
        [[NSOpenGLPixelFormat alloc] initWithAttributes: ModernAttributes];
    NSOpenGLContext *ModernContext =
        [[NSOpenGLContext alloc] initWithFormat: ModernPixelFormat
                                 shareContext: nil];

    if (ModernContext)
    {
        IsModernContext = true;
        [Context release]; // never call -dealloc directly
        Context = ModernContext;

        [PixelFormat release];
        PixelFormat = ModernPixelFormat;
    }
    else
    {
        // NOTE: This is an antiquated version of OpenGL
    }

    if (Context)
    {
        [Context makeCurrentContext];

        OpenGLInit(IsModernContext);

        GLint VSync = 1;

        // if (VSync)
        {
            [Context setValues: &VSync forParameter: NSOpenGLCPSwapInterval];
        }
    }
    else
    {
        InvalidCodePath;
        // TODO: Diagnostic
    }

#if 0
    GLint Dimensions[] = {FramebufferWidth, FramebufferHeight};
    CGLSetParameter(Context.CGLContextObj, kCGLCPSurfaceBackingSize, Dimensions);
    CGLEnable(Context.CGLContextObj, kCGLCESurfaceBackingSize);
#endif

    NSOpenGLView *View = [[NSOpenGLView alloc] init];
    [View setOpenGLContext: Context];
    [View setPixelFormat: PixelFormat];

    [Window setContentView: View];
    [Context setView: View];

    return Context;
}


Note the commented line inside the definition of the ModernAttributes variable. If I enable this profile, the renderer stops functioning. Curiously enough, I can still run the game and see OpenGL clear the bitmap to a pre-defined colour and blit it to the screen. glGetString(GL_EXTENSIONS) returns NULL, but all the other requested strings are returned properly. GL_VERSION and GL_SHADING_LANGUAGE_VERSION clearly show a newer version of the driver and shading language, which implies that the only thing I lose by doing things this way is access to the immediate-mode drawing functions.

Currently, I include OpenGL header files in the following fashion:

#define GL_DO_NOT_WARN_IF_MULTI_GL_VERSION_HEADERS_INCLUDED 1

#import <OpenGL/gl.h>
#import <OpenGL/gl3.h>


So here are the questions:
1) Why are the implementations so drastically different? (this is probably rhetorical, but still)

2) Is Casey going to move towards the modern API in the future? That is, should I bite the bullet now and write a completely new version of the current renderer just to satisfy the design decision to support both newer and older versions of OpenGL? What sorts of extended OpenGL features might be used in the Windows implementation that are optional and do not break the main render path written with the outdated immediate-mode functions?

3) Am I even understanding the situation correctly, or are there things I'm missing due to inexperience?

Thank you
1) Why are the implementations so drastically different? (this is probably rhetorical, but still)

Because legacy immediate-mode OpenGL is very suboptimal for how GPUs actually work.

Each time glVertex is called, the vertex data must be transferred to the GPU for drawing, and the GPU is on the other side of the PCIe bus.

Setting up each PCIe transfer has some overhead, so it pays to do a few big transfers rather than a lot of small ones. Drivers can do CPU-side batching, but the driver cannot know when you are done drawing, so it has to guess when to flush, and sometimes it guesses wrong, which leads to bad frame rates.

Core OpenGL 3.3+ forces you to use VBOs and compiled shader programs in an effort to make the application communicate in advance what it needs to draw. It's still not perfect, and the next-gen APIs (Vulkan, DX12 and Metal) have changed a few paradigms (explicit execution queues, immutable pipeline objects with all render info embedded, explicit synchronization, ...) to enable much more up-front communication, to enable multithreading, and to make explicit what used to be implicit (or required jumping through strict hoops which, if missed, made the driver fall back onto the slow path).
On Windows, when you create an OpenGL context using the modern system (3.1 or up), you can pass a flag to request a compatibility profile that contains both the old and the new functions. According to this, it doesn't work like that on OS X.
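That Windows-side request can be sketched as follows. This is just a sketch: the WGL_* constants normally come from wglext.h and are reproduced here with their standard values so the fragment stands alone, and the actual wglCreateContextAttribsARB call is left as a comment since it needs a live window and device context.

```c
/* Values from the WGL_ARB_create_context(_profile) extensions,
   normally provided by wglext.h. */
#define WGL_CONTEXT_MAJOR_VERSION_ARB             0x2091
#define WGL_CONTEXT_MINOR_VERSION_ARB             0x2092
#define WGL_CONTEXT_PROFILE_MASK_ARB              0x9126
#define WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB 0x0002

/* Attribute list you would hand to wglCreateContextAttribsARB to ask
   for a 3.x context that still exposes the legacy immediate-mode
   entry points: */
static const int ModernCompatAttribs[] =
{
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 0,
    WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
    0 /* terminator */
};

/* HGLRC RC = wglCreateContextAttribsARB(DeviceContext, 0, ModernCompatAttribs); */
```

The compatibility-profile bit is exactly the part that has no NSOpenGLPixelFormat equivalent: NSOpenGLPFAOpenGLProfile only offers the legacy or the core choice.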

For the list of extensions: if you use OpenGL 3.0 or above, you should use glGetStringi(GL_EXTENSIONS, i). From the OpenGL wiki:
In modern, post GL 3.0 code, the correct way to query which extensions are supported is to use glGetIntegerv(GL_NUM_EXTENSIONS) to get the number of extensions, and then use glGetStringi(GL_EXTENSIONS, i), where i is a number between 0 and the number of extensions - 1. Because glGetStringi is not a GL 1.1 function, you will need to load this function before using it on Windows machines.
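The modern query pattern boils down to a loop over indexed names. In a sketch like the following, the helper name is made up, and the extension list is passed in as a parameter so the logic can be shown (and exercised) without a live GL context; in real code, Count would come from glGetIntegerv(GL_NUM_EXTENSIONS, &Count) and each name from glGetStringi(GL_EXTENSIONS, i):

```c
#include <string.h>

/* Modern (GL 3.0+) extension check. glGetStringi returns one complete
   extension name per index, so a plain full-string compare suffices
   (no substring pitfalls, unlike searching the old GL_EXTENSIONS blob). */
static int
ModernHasExtension(const char **Extensions, int Count, const char *Name)
{
    for (int i = 0; i < Count; ++i)
    {
        if (strcmp(Extensions[i], Name) == 0)
        {
            return 1;
        }
    }
    return 0;
}
```
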

2) Casey moves to more "modern" OpenGL at some point (day 371, I believe). If you want to keep using the old functions, you can emulate them yourself on top of vertex buffers (VBOs) and glDraw* commands until you reach the point where Casey switches to the new functions.
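Such an emulation layer might be sketched roughly like this. The Emu* names are made up for illustration, and the actual GL calls are left as comments since the fragment has no live context; the core idea is just to accumulate vertices CPU-side and hand the whole batch to GL in one transfer:

```c
/* Minimal sketch of emulating glBegin/glVertex/glEnd on top of a
   vertex buffer: batch vertices on the CPU, submit them all at once. */
#define EMU_MAX_VERTS 65536

typedef struct { float x, y; } emu_vertex;

static emu_vertex EmuVerts[EMU_MAX_VERTS];
static int EmuVertCount;

static void EmuBegin(void)
{
    EmuVertCount = 0;
}

static void EmuVertex2f(float x, float y)
{
    if (EmuVertCount < EMU_MAX_VERTS)
    {
        EmuVerts[EmuVertCount].x = x;
        EmuVerts[EmuVertCount].y = y;
        ++EmuVertCount;
    }
}

static int EmuEnd(void)
{
    /* In real code, this is where the whole batch goes out in one go:
       glBufferData(GL_ARRAY_BUFFER, EmuVertCount * sizeof(emu_vertex),
                    EmuVerts, GL_STREAM_DRAW);
       glDrawArrays(GL_TRIANGLES, 0, EmuVertCount); */
    return EmuVertCount; /* vertices submitted in this batch */
}
```
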

3) I think you understand correctly. On OS X 10.7 or higher you have to choose between the old fixed-function pipeline (up to 2.1) and the new 3.2+ pipeline. But maybe the extensions Casey uses are available in OpenGL 2.1, and you don't need to go to 3.2?
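For completeness, checking those extensions in a legacy (2.1) context means searching the single space-separated string that glGetString(GL_EXTENSIONS) returns. A sketch (helper name illustrative; the string is passed in so the logic runs without a context): note that a bare strstr can false-positive when one extension name is a prefix of another, so compare whole tokens.

```c
#include <string.h>

/* Legacy-style extension check against the space-separated blob that
   glGetString(GL_EXTENSIONS) returns in a 2.1 context. A bare strstr
   would match "GL_EXT_texture_sRGB" inside "GL_EXT_texture_sRGB_decode",
   so verify that the hit is a whole token. */
static int
LegacyHasExtension(const char *ExtensionString, const char *Name)
{
    size_t NameLen = strlen(Name);
    const char *At = ExtensionString;
    while ((At = strstr(At, Name)) != 0)
    {
        int StartsToken = (At == ExtensionString) || (At[-1] == ' ');
        int EndsToken = (At[NameLen] == ' ') || (At[NameLen] == '\0');
        if (StartsToken && EndsToken)
        {
            return 1;
        }
        At += NameLen;
    }
    return 0;
}
```
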
Thanks for the explanations.

For the list of extensions, if you use OpenGL 3.0 or above you should use glGetStringi(GL_EXTENSIONS, i).
Indeed. I've read about it in the documentation, but it had fallen out of my memory completely.

If you want to keep using the old functions, you can emulate them yourself on top of vertex buffers (VBOs) and glDraw* commands until you reach the point where Casey switches to the new functions.
That's a good idea; I hadn't considered reimplementing the "old" functions using the newer API.

3) I think you understand correctly. On OS X 10.7 or higher you have to choose between the old fixed-function pipeline (up to 2.1) and the new 3.2+ pipeline. But maybe the extensions Casey uses are available in OpenGL 2.1, and you don't need to go to 3.2?

Well, I still have access to the same extensions that HH uses right now, even in legacy mode. I guess I'll stick with it and the current renderer until I run into an issue that forces me to switch to the new API (before the switch happens in the natural progression of HH's development).

I'll try to implement the same logic separately using the new API, so that it's available when/if the need arises.