Bypass graphics driver and write your own graphics library?

I'm just wondering to what extent everyday programmers can access GPU hardware directly. To my understanding, the graphics driver is what implements something like OpenGL, so does the driver use some sort of assembly instruction set for the GPU to actually program it? If so, is this something that the everyday programmer can access and build their own graphics library on? Or is that restricted by the OS, meaning you would have to write your own OS that would allow anyone to access the GPU hardware?


There was a topic here discussing the same thing: https://hero.handmade.network/forums/code-discussion/t/384-graphics_display_reference_material

In short, there are actually two parts to the graphics driver in modern OSes. One sits in the kernel and is responsible for communicating with the GPU hardware. This part is usually small, and it does not understand OpenGL, D3D or any other high-level API. The other part is the user-space part that provides OpenGL, D3D and the other APIs for the user to interact with. Internally it does everything needed to convert commands into formats the GPU supports and pushes them to its kernel part.

If you want to write something directly to the GPU, you could skip the whole GL/D3D part and just provide commands in their final GPU-compatible form. But be aware that they will be very different between vendors, or even between different generations of the same vendor's GPUs.
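
Just to show the shape of it, here is a deliberately hypothetical sketch of what "providing commands in GPU form" means. None of the struct names, opcodes or helpers below exist anywhere; they only illustrate that the user-space driver's job is to encode state into the hardware's native packets and hand them to the kernel part:

```c
/* Hypothetical illustration only: every struct, opcode and helper name here
 * is made up. Real packet layouts and submission interfaces are vendor- and
 * generation-specific (see the Intel PRMs and AMD ISA docs linked below). */
#include <stdint.h>
#include <stddef.h>

/* A made-up "draw" packet in some imaginary GPU's native command format. */
struct fake_gpu_draw_packet {
    uint32_t opcode;              /* "3D draw" opcode for this GPU generation */
    uint32_t vertex_count;
    uint64_t vertex_buffer_addr;  /* GPU virtual address of the vertex data */
    uint64_t shader_addr;         /* pre-compiled GPU shader bytecode */
};

/* Stand-in for the submission ioctl the kernel part of the driver exposes. */
static int submit_to_kernel_driver(const void *commands, size_t size)
{
    (void)commands; (void)size;
    return 0; /* stub: a real call would ioctl() into the vendor's kernel driver */
}

/* Roughly what a user-space driver does for you behind a glDrawArrays call:
 * encode the current state plus the draw into the GPU's own command format
 * and hand the finished buffer to the kernel driver for scheduling. */
static void draw_without_gl(uint64_t vbo_addr, uint64_t shader_addr, uint32_t count)
{
    struct fake_gpu_draw_packet pkt = {
        .opcode = 0x1234,              /* made-up opcode */
        .vertex_count = count,
        .vertex_buffer_addr = vbo_addr,
        .shader_addr = shader_addr,
    };
    submit_to_kernel_driver(&pkt, sizeof(pkt));
}
```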

Intel has a bunch of documentation about how their GPUs work (one of the reasons why the open-source OpenGL driver on Linux is of very high quality). For example, here are the Skylake GPU docs: https://01.org/linuxgraphics/documentation/hardware-specification-prms/2015-2016-intel-processors-based-skylake-platform

If you don't want to rewrite everything, you could use Intel's driver in the kernel (i915), ignore the user-space part (mesa3d), and communicate with the driver directly (not the GPU). Here's a super-simple example of how to do that: http://betteros.org/tut/graphics1.php It does not do any 3D, but it shows what is involved in this kind of communication.
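
To give a taste of what talking to the kernel side looks like, here's a minimal sketch (assuming Linux with the DRM headers installed; the header location varies by distro) that opens the DRM device node and asks the kernel which driver sits behind it. Everything more interesting, like allocating buffers and submitting command buffers, is just more ioctls on the same file descriptor. The betteros.org tutorial linked above goes further down this road.

```c
/* Minimal sketch: query which kernel DRM driver (i915, amdgpu, vc4, ...)
 * is behind /dev/dri/card0 using the DRM_IOCTL_VERSION ioctl.
 * You may need root or membership in the "video" group to open the node. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <drm/drm.h>      /* on some distros the header lives in <libdrm/drm.h> */

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);
    if (fd < 0) {
        perror("open /dev/dri/card0");
        return 1;
    }

    char name[64] = {0};
    struct drm_version ver = {0};
    ver.name = name;                   /* kernel fills in the driver name */
    ver.name_len = sizeof(name) - 1;

    if (ioctl(fd, DRM_IOCTL_VERSION, &ver) < 0) {
        perror("DRM_IOCTL_VERSION");
        close(fd);
        return 1;
    }

    printf("kernel driver: %s (version %d.%d.%d)\n",
           name, ver.version_major, ver.version_minor, ver.version_patchlevel);

    close(fd);
    return 0;
}
```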

If we go to AMD, you can find their docs on gpuopen.com, for example: https://gpuopen.com/amd-vega-7nm-instruction-set-architecture-documentation/

If we talk about Nvidia, their docs are not open at all. They have said they will start opening up documentation, but as far as I know it is very incomplete. So currently the Nvidia driver is a binary blob only. People have been reverse engineering the older hardware, so there are some docs available, but I have no idea how complete they are: https://nouveau.freedesktop.org/wiki/NvHardwareDocs/

To understand what is needed to translate GL state into the GPU driver's format, you can look at the mesa3d open-source library. It supports many different backends: i915 for Intel, radeonsi for AMD. These backends use a bunch of other parts of the mesa3d library, like the GLSL compiler.

You can take a look at how the Linux kernel drivers communicate with the actual GPU here: i915 for Intel, and amdgpu for AMD.

Raspberry Pi GPU docs are available here: VideoCore IV 3D Architecture Reference Guide. The Linux kernel drivers are vc4 and v3d, and the mesa3d code that talks to them is vc4 or v3d. The v3d driver supports the newer Pis and GLES3.

In conclusion: on Windows you probably cannot do anything unless you write your own kernel driver to communicate with the hardware, and then interface with your driver. On Linux you can bypass the Mesa3D library and talk to the i915, amdgpu or vc4 drivers in the kernel, but then you'll need to translate your GL calls into whatever format the GPU expects (including shader bytecode). This will work for Intel, AMD and Raspberry Pi, but not for Nvidia.


A lot to read through and unpack here, so I'll be keeping busy lol. Thanks for a very thorough answer here, Martins. I don't even know exactly what I would be trying to do with this information; I guess I'll just read things, let it percolate for a while, and see if there is at least any hope of getting out of the current software mess we have. Though it seems a lot of the problems are buried very deep (at the OS/hardware level). From what I understand, even if I built my own OS I would still need a team of people dedicated to writing drivers for every different GPU vendor, and those would keep changing as cards are upgraded. Has anyone ever talked about writing a game-specific OS for PC?
Not really, because you would end up with a console. That's what a modern PS or Xbox is: a game OS running on PC hardware.
You should really ask yourself what you are actually after. Vulkan is much closer to the hardware than OpenGL, so the user-level "driver" is a lot thinner. You could re-implement OpenGL on top of Vulkan (and people have; there's a rough sketch of what that mapping looks like below), though again you should then be asking yourself why you are doing that. One reason people have done it is to allow old OpenGL-based code to run on platforms that don't have a good story for running OpenGL (..ehm.. Apple... ehm...). But if your motivation for writing your own library is that OpenGL is not great for your needs, then you probably won't be doing that. There are similar examples, like DX11 reimplemented on top of DX12, for completely different reasons.

Point is, know your reasons first. Then evaluate what your best course of action should be.
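
For a feel of what such a layer does, here is a very rough sketch of how a single glDrawArrays-style call could be lowered to Vulkan commands. The fake_gl_context struct and fake_glDrawArrays function are invented for illustration; real layers such as ANGLE or Zink track far more state and create pipelines lazily, and all the Vulkan setup (instance, device, render pass, pipeline, command buffer recording) is assumed to already exist:

```c
/* Rough sketch of a GL-on-Vulkan translation for one draw call.
 * Only the vkCmd* calls are real Vulkan API; everything else is made up. */
#include <vulkan/vulkan.h>
#include <stdint.h>

/* Hypothetical bundle of "current GL state" a layer would track between
 * glBindBuffer/glUseProgram/... calls and the actual draw. */
typedef struct {
    VkCommandBuffer cmd;            /* command buffer being recorded this frame */
    VkPipeline      pipeline;       /* pre-built pipeline matching current GL state */
    VkBuffer        vertex_buffer;  /* backing for the bound GL_ARRAY_BUFFER */
    VkDeviceSize    vertex_offset;
} fake_gl_context;

/* What a glDrawArrays-style entry point roughly lowers to. */
void fake_glDrawArrays(fake_gl_context *ctx, uint32_t first, uint32_t count)
{
    /* GL's "current program + fixed-function state" becomes a whole
     * pre-baked VkPipeline object in Vulkan. */
    vkCmdBindPipeline(ctx->cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, ctx->pipeline);

    /* The currently bound vertex buffer becomes an explicit binding
     * recorded into the command buffer. */
    vkCmdBindVertexBuffers(ctx->cmd, 0, 1, &ctx->vertex_buffer, &ctx->vertex_offset);

    /* The draw itself maps almost one-to-one. */
    vkCmdDraw(ctx->cmd, count, 1, first, 0);
}
```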
I had no idea this information was available; all those links are really extraordinary!
I just watched a video where Casey talked about OpenGL 4.5 and AZDO (Approaching Zero Driver Overhead) and how that was an API closer to what he thinks a good graphics API should be. How well supported is OpenGL 4.5 today? What about its future? Is it comparable to Vulkan in terms of potential performance benefits?
Pretty much all modern GPUs support OpenGL 4.6 now. You can check https://opengl.gpuinfo.org/ for more details.
If you write your code carefully with GL, you can get GPU performance very similar to Vulkan. The problem with GL is that you need to be careful about what you use; it won't help you avoid writing bad code. Also, Vulkan will have lower CPU usage because its API is much thinner, and GL still has terrible support for multi-threading. With Vulkan you can do a lot of the work on multiple threads to speed up the CPU side of the graphics API.
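
As a concrete example of the "be careful what you use" part, the AZDO approach leans on things like persistently mapped buffers (glBufferStorage, core since GL 4.4) instead of re-specifying buffer data every frame. A minimal sketch, assuming a GL 4.4+ context with function pointers already loaded through something like glad, and with the per-region fencing/ring-buffering left out for brevity:

```c
/* Sketch of an AZDO-style persistently mapped vertex buffer.
 * Requires a GL 4.4+ context and an extension loader (glad assumed here). */
#include <glad/glad.h>   /* assumption: glad generated for GL 4.5/4.6 core */
#include <string.h>

#define BUFFER_SIZE (4 * 1024 * 1024)

static GLuint buffer;
static void  *mapped;    /* stays mapped for the lifetime of the buffer */

void create_persistent_buffer(void)
{
    GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;

    glGenBuffers(1, &buffer);
    glBindBuffer(GL_ARRAY_BUFFER, buffer);

    /* Immutable storage: size and flags can never change, which lets the
     * driver stay out of the way (the "zero driver overhead" part). */
    glBufferStorage(GL_ARRAY_BUFFER, BUFFER_SIZE, NULL, flags);

    /* Map once and keep the pointer: no glMapBuffer/glUnmapBuffer
     * round-trips into the driver every frame. */
    mapped = glMapBufferRange(GL_ARRAY_BUFFER, 0, BUFFER_SIZE, flags);
}

void upload_vertices(const void *vertices, size_t size, size_t offset)
{
    /* With GL_MAP_COHERENT_BIT the write becomes visible to the GPU without
     * an explicit flush. You are still responsible for not overwriting a
     * region the GPU is currently reading (fence sync objects, or a
     * per-frame ring of regions). */
    memcpy((char *)mapped + offset, vertices, size);
}
```

The point being: the driver only gets involved once, at creation time; after that, uploads are plain memcpy into memory the GPU can already see.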
