OpenGL VBO without VAO

Hi, I'm trying to use a vertex buffer object in OpenGL and I'm having trouble understanding why it doesn't work without a vertex array object.

Here are the code and the shader sources.

It's simple (see the sketch after the list):
- create a VBO, bind it, fill it with data;
- create shaders and program (I checked for errors and it compiles and links);
- bind and set the vertex attributes;
- use glDrawArrays to draw a triangle.
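
In short, it goes roughly like this (a simplified sketch with placeholder vertex data; program is the shader program mentioned above):

    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    float vertices[] = {
        -0.5f, -0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
         0.0f,  0.5f, 0.0f,
    };
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

    glUseProgram(program);            /* compiled and linked without errors */
    glEnableVertexAttribArray(0);     /* attribute 0 = position in my vertex shader */
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void *)0);

    glDrawArrays(GL_TRIANGLES, 0, 3); /* GL_INVALID_OPERATION on the 3.2 context */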

The results:
- OpenGL 3.2 context (created with wglCreateContextAttribsARB) without a VAO: glDrawArrays raises a GL_INVALID_OPERATION error, but I'm not doing any of the things the doc says cause that error.
- OpenGL 3.2 context with a VAO: no problems.
- OpenGL 3.1 context, with or without a VAO: no problems.

Note: I use an AMD graphics card and GL_VERSION always returns the same version (4.5.13399 Compatibility Profile), but the behaviour of OpenGL seems to change (as the 3.1 test works without a VAO). I asked a friend to test the three executables on his NVidia card and he got the same results, except that his GL_VERSION string corresponded to the version asked for.

To my understanding (I'm new to OpenGL, as you may have guessed), VAOs are a quick way to re-set buffer bindings and vertex attributes, and should not be required. Am I missing something? Does a VAO do other things? Do I need to set other OpenGL state to use a VBO without a VAO?

Thanks for your time.
mrmixer

Short version: You need a VAO. You might get away without creating one with some OpenGL drivers, but for maximum compatibility, you need to create one.

Longer version:

VAOs are the objects that store the state around the following calls:

* glEnableVertexAttribArray / glDisableVertexAttribArray
* glVertexAttribPointer
* GL_ELEMENT_ARRAY_BUFFER

Annoyingly, they don't store the state for GL_ARRAY_BUFFER itself, if I recall.
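
To illustrate (a sketch of how I understand it; vao, vbo and ebo are assumed to have been created earlier):

    glBindVertexArray(vao);                 /* subsequent vertex array state is recorded in vao */
    glBindBuffer(GL_ARRAY_BUFFER, vbo);     /* this binding is NOT stored in the VAO... */
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void *)0);
    /* ...but the call above latched vbo into attribute 0, which IS VAO state */
    glEnableVertexAttribArray(0);           /* VAO state */
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo); /* VAO state */
    glBindVertexArray(0);                   /* done recording */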

You can use VAOs to use/store state for drawing some set of objects, and then swap to another VAO to use/store state for drawing some other set of objects. If you don't want to do it that way, and instead have one global set of state, you still need _a_ VAO bound.

The first item in the list of causes for GL_INVALID_OPERATION at https://www.opengl.org/wiki/Vertex_Rendering is:

A non-zero Vertex Array Object must be bound (though no arrays have to be enabled, so it can be a freshly-created vertex array object).

This cost me a lot of time at one point: it worked without a VAO on some of Windows, Mac OS, and Linux, but not on the rest.

Neil
Are you sure you need a VAO? It's been a while since I've done anything with OpenGL, but I remember that you absolutely need a VAO only if you are using the core profile. In the compatibility profile you don't need one.
Ah - this triggered a memory.

On OS X, there is no OpenGL 3.2+ compatibility mode. You can use OpenGL 2.1, or you can use OpenGL 3.2+ in core profile.

* Legacy mode: https://developer.apple.com/opengl/capabilities/GLInfo_1090.html
* Core Profile mode: https://developer.apple.com/opengl/capabilities/index.html
The easiest way to do things is to create a VAO when your program starts and then use VBOs as normal, without messing around with VAOs any further. Last I heard, it is actually faster to do it that way than to use VAOs the "correct" way.
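Something like this once at startup (a sketch):

    /* once, right after creating the context */
    GLuint vao;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    /* from here on, bind buffers and set attribute pointers as usual,
       and never touch the VAO again */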
I believe the "don't use VAOs" conventional wisdom comes from slide 57 in this presentation:

https://developer.nvidia.com/site...Porting%20Source%20to%20Linux.pdf

At least that was the first time I saw that. I haven't tried to confirm that sort of thing myself yet since vertex array stuff isn't the kind of thing we choke on in our current engine, so I can't really shed much light on this. But at least at some point both Valve and nVidia seemed very confident that you don't want to actually swap VAOs, you just want to set one and then always use pointers.

I also seem to recall that the reasoning for this was because glVertexAttribPointer allows the data to flow unidirectionally from the game to the driver in a way that allows for more overlap, whereas VAOs I guess caused synchronization between the threads that created problems for the driver. So it was unintuitive, but drivers are very complicated and I could easily see that being the case. It is not a well-designed part of the API.

- Casey
Thanks, I'll read those documents, but I already understand it a little better.

Is there a site you'd recommend for that kind of information (e.g. a VAO being required)? I've got the OpenGL Programming Guide, 8th edition, but it never mentions VAOs being required (or I missed it). I guess the opengl.org wiki is a good place to start.
"Modern" OpenGL is very complex and changes from version to version. So any older book/website available won't have complete information.

You can always read the official OpenGL specifications to understand what is required and what is not. It's not the best option, but it has pretty complete information.

If you are using the core profile, at least read the "D.1. CORE AND COMPATIBILITY PROFILES" section in https://www.opengl.org/registry/doc/glspec45.core.pdf, so you know what is deprecated and not allowed compared to "regular" OpenGL (the compatibility profile).

It explicitly mentions the requirement to use a VAO:
Client vertex and index arrays - all vertex array attribute and element array index pointers must refer to buffer objects. The default vertex array object (the name zero) is also deprecated. Calling VertexAttribPointer when no buffer object or no vertex array object is bound will generate an INVALID_OPERATION error, as will calling any array drawing command when no vertex array object is bound.

I have found these OpenGL reviews pretty useful: http://www.g-truc.net/project-0032.html#menu
You can read each of them as a changelog from the previous version of OpenGL.

Thanks.

That page has all the version specifications; I should have started there. I didn't search in detail, but the 3.0 spec says that the default VAO (zero) is deprecated. Wasn't the VAO introduced in OpenGL 3.0? Does it refer to the VAO extension?

As for core and compatibility profiles, I'm pretty sure I tried both and got the same result, but since GL_VERSION always returns the same string on my machine, how can I be sure of what I'm getting?
OpenGL 3.0 introduced the notion of deprecation. It said that not using a VAO is a deprecated feature, but it should still work. By "deprecated" they meant that in a later version they could remove the feature.

You could create a forward-compatible context that actually removed all deprecated features.

With OpenGL 3.2 they introduced profiles: core and compatibility. In the core profile they removed all deprecated features; in compatibility they left everything as is. So even in the latest OpenGL 4.5 you are free to use glBegin() and other "deprecated" features (like not using a VAO).

More info here: https://www.opengl.org/wiki/Core_And_Compatibility_in_Contexts

Check whether your context really is using the compatibility profile and was not created with the forward-compatible flag.

You can check the profile with glGetIntegerv(GL_CONTEXT_PROFILE_MASK, &mask). If the compatibility profile is used, then (mask & GL_CONTEXT_CORE_PROFILE_BIT) will be 0.

To check whether the context was created forward-compatible, do glGetIntegerv(GL_CONTEXT_FLAGS, &flags). If it is forward-compatible, then (flags & GL_CONTEXT_FLAG_FORWARD_COMPATIBLE_BIT) will be non-zero.
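
Put together, something like this (a sketch; assumes the context you want to test is current):

    GLint mask = 0, flags = 0;
    glGetIntegerv(GL_CONTEXT_PROFILE_MASK, &mask); /* available in 3.2+ */
    glGetIntegerv(GL_CONTEXT_FLAGS, &flags);
    if ((mask & GL_CONTEXT_CORE_PROFILE_BIT) == 0) {
        /* compatibility profile */
    }
    if (flags & GL_CONTEXT_FLAG_FORWARD_COMPATIBLE_BIT) {
        /* forward-compatible context */
    }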

And thanks again!

I messed up the profile bit mask when creating the context, so I was always getting a core profile (even when GL_CONTEXT_CORE_PROFILE_BIT is set, GL_VERSION still says it's a compatibility profile, but I guess that's how AMD chooses to do things).

Now, with a core profile I need the VAO, and with a compatibility profile I don't. Everything is following the specs.
To be fair to AMD, I was still doing things wrong. After some cleanup in the context creation, GL_VERSION returns the correct version. I'm not sure what I was doing before, but I think I wasn't making the new context current before querying GL_VERSION.
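
For reference, the fixed attribute list looks roughly like this (a sketch; dc stands for the window's HDC, and error handling is omitted):

    int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 2,
        WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
        /* or WGL_CONTEXT_CORE_PROFILE_BIT_ARB to get a core profile */
        0,
    };
    HGLRC rc = wglCreateContextAttribsARB(dc, 0, attribs);
    wglMakeCurrent(dc, rc); /* make it current BEFORE querying GL_VERSION */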