philosophical question about software rendering

Hi everyone,

I hear that every current computer has a GPU (integrated or not),
and I suppose that drawing stuff on the monitor needs to go through it.

So when we are software rendering we are still practically using the GPU.
So there is SOME driver somewhere in the OS, and when we ask the
OS for a pointer to pixels (or rather pass it the pointer), the OS is sending it
to the driver.

So if we had used OpenGL from the beginning and just passed it the pointer,
wouldn't that be more "from scratch"?

Edited by ramin on Reason: Initial post
There's no such thing as "pass a pointer" to OpenGL for drawing to the screen. You cannot do it. OpenGL requires textures, samplers, vertex buffers, uniforms, transformation matrices, etc. to draw something reasonable to the screen. Handmade Hero is showing how to approach problems when you don't know how things work. Showing what a pixel is, and that you can see it in the debugger's memory view, is a much simpler thing than using OpenGL - that would require showing many crazy GL things at once.
So are you saying that the OS talks to the monitor or the GPU directly?

Of course I understand what Casey's purpose is; I have watched many episodes myself.
I am just trying to understand what happens at the lowest level of the OS.

Edited by ramin on
The OS talks to the GPU. And the GPU driver is the one that knows how to talk to the GPU (what kind of information to send and receive).

Edited by Mārtiņš Možeiko on
Thank you.

The reason I got interested in this was that I am on Linux.
I wanted to watch the first episodes one more time, but this time without using SDL.
But I didn't want to talk to the X server either.

So I guess my question is: if I use OpenGL (or Mesa) and draw a full-screen rectangle and pass my pixels to OpenGL as a texture to be drawn on that rectangle, I am still doing something "from scratch",
because that is probably what the X server or SDL is doing anyway.

Am I understanding it right or am I missing something?
So I'm not sure if I fully understand what you're asking, so let me know if I'm wrong about your question, but it's up to you how low a level you want to understand for rendering.

For Handmade Hero, Casey starts by building a software renderer, which means we will not be calling OpenGL at all and will instead do all of the rendering ourselves on the CPU. As far as we are concerned, the GPU will not be used, and we push the finished buffer to the screen with calls like StretchDIBits() (the function used to put a buffer of pixels into a window on Windows).

When doing software rendering, the programmer is responsible for writing the algorithms for things like texture sampling (figuring out how to use texture coordinates) and everything else that OpenGL and other graphics APIs typically abstract away for you. If you don't care about implementing your own rendering algorithms for things like texture sampling, then you can just start with OpenGL and render things to the screen that way.
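
For reference, the GDI blit boils down to something like this (a rough sketch in the style of the early episodes; the function name, parameters, and surrounding window setup are placeholders, not Casey's exact code):

    #include <windows.h>

    // Sketch: push a CPU-side 32-bit BGRA pixel buffer into a window with GDI.
    static void BlitBufferToWindow(HDC DeviceContext, void *BitmapMemory,
                                   int BufferWidth, int BufferHeight,
                                   int WindowWidth, int WindowHeight)
    {
        BITMAPINFO Info = {0};
        Info.bmiHeader.biSize = sizeof(Info.bmiHeader);
        Info.bmiHeader.biWidth = BufferWidth;
        Info.bmiHeader.biHeight = -BufferHeight;   // negative height = top-down rows
        Info.bmiHeader.biPlanes = 1;
        Info.bmiHeader.biBitCount = 32;
        Info.bmiHeader.biCompression = BI_RGB;

        StretchDIBits(DeviceContext,
                      0, 0, WindowWidth, WindowHeight,   // destination rectangle
                      0, 0, BufferWidth, BufferHeight,   // source rectangle
                      BitmapMemory, &Info,
                      DIB_RGB_COLORS, SRCCOPY);
    }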

Edited by Jason on
No, actually I want to go even lower. I want to know how Windows or Linux (in my case) draws a buffer on the monitor. mmozeiko says (and I had read in other places too) that anything that is going to be drawn on the screen needs to go through the GPU, and only the driver can talk to the GPU, so the OS needs to talk to the GPU. So that StretchDIBits() you mentioned is calling some driver (probably DirectX?) functions. I want to know whether I am understanding this right, and if yes, what specific functions it is (probably) calling.

Edited by ramin on
I think you are greatly confused about the phrase "from scratch" here.

"From scratch" in the Handmade Hero context means that when you write a thing, you understand how it works from A to Z. Not using OpenGL and writing a bunch of pixels to a memory buffer is as "from scratch" as it can be. That memory buffer can be blitted to the screen with GDI, or with OpenGL, or even with Vulkan - that's all fine. It's just a minor detail.
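
For example, the "bunch of pixels to memory buffer" part is literally just a loop like this (a minimal sketch of the gradient from the early episodes, assuming a 32-bit 0x00RRGGBB buffer):

    #include <stdint.h>

    // Minimal sketch: fill a Width x Height buffer of 32-bit pixels with the
    // blue/green gradient from the early episodes.
    static void RenderGradient(uint32_t *Pixels, int Width, int Height,
                               int XOffset, int YOffset)
    {
        for (int Y = 0; Y < Height; ++Y)
        {
            for (int X = 0; X < Width; ++X)
            {
                uint8_t Blue = (uint8_t)(X + XOffset);
                uint8_t Green = (uint8_t)(Y + YOffset);
                Pixels[Y * Width + X] = ((uint32_t)Green << 8) | Blue;
            }
        }
    }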

But even drawing triangles with OpenGL is also "from scratch" - as long as you prepare the input data yourself and call the necessary GL functions yourself, and don't rely on things like Unity or UE4. With those, you don't know what happens inside them to produce the graphics, which means it is harder to debug things and harder to know why it runs with worse performance than you expect, etc...

You can avoid talking to the X server if you want to do raw OpenGL. But that will involve a few things:

1) not running X11 at all, because X11 takes over access to the GPU - which means you need to boot into console mode to run your application

2) using the libdrm library (which can be avoided if you talk to the kernel KMS interface directly), or using EGLStreams if you are on the Nvidia binary driver, since it doesn't work the same way as the rest of the open-source GPU drivers. This is needed to initialize the GL context and handle memory allocations.

3) using the whole Mesa library (or the Nvidia binary driver) if you don't want to reimplement OpenGL yourself. The GPU drivers in the kernel do not know anything about OpenGL; it is Mesa's responsibility to translate GL API calls into something the GPU understands.

You can see how this can be done in my GitHub repo here: https://github.com/mmozeiko/rpi/tree/master/gles2_drm This code works on a Raspberry Pi or on an Intel GPU. It will probably also work on AMD with the open-source driver, but I have not tested that. For Nvidia it would require replacing libdrm & libgbm with EGLStreams.
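
Very roughly, the context setup boils down to something like this (an untested outline with placeholder sizes and no error checking; the KMS part - picking a connector/mode, drmModeSetCrtc, the page-flip loop - is what the repo actually shows and is omitted here):

    #include <fcntl.h>
    #include <gbm.h>
    #include <EGL/egl.h>

    // Outline of getting a GL ES context without X11, via DRM + GBM + EGL (Mesa).
    static EGLDisplay SetupContextWithoutX11(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);       // kernel DRM device
        struct gbm_device *gbm = gbm_create_device(fd);            // buffer allocation for scanout
        struct gbm_surface *gs = gbm_surface_create(gbm, 1920, 1080,
            GBM_FORMAT_XRGB8888, GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);

        EGLDisplay dpy = eglGetDisplay((EGLNativeDisplayType)gbm); // Mesa EGL accepts a GBM device
        eglInitialize(dpy, 0, 0);
        eglBindAPI(EGL_OPENGL_ES_API);

        EGLint cfg_attr[] = { EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT, EGL_NONE };
        EGLConfig cfg; EGLint count;
        eglChooseConfig(dpy, cfg_attr, &cfg, 1, &count);

        EGLint ctx_attr[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
        EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctx_attr);
        EGLSurface surf = eglCreateWindowSurface(dpy, cfg, (EGLNativeWindowType)gs, 0);
        eglMakeCurrent(dpy, surf, surf, ctx);                      // GL ES calls now go through Mesa

        return dpy;
    }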

In general that is a lot of pain, and unless your goal is to learn how to program without an X11 server, don't do this.
Using the Mesa library will still mean you are using GL, so at that point what is the reason to avoid X11? That is such a tiny part of the application. The rest of your code will be orders of magnitude larger and more complex.

Edited by Mārtiņš Možeiko on
mmozeiko
I think you are greatly confused about the phrase "from scratch" here.

"From scratch" in the Handmade Hero context means that when you write a thing, you understand how it works from A to Z. Not using OpenGL and writing a bunch of pixels to a memory buffer is as "from scratch" as it can be. That memory buffer can be blitted to the screen with GDI, or with OpenGL, or even with Vulkan - that's all fine. It's just a minor detail.

But even drawing triangles with OpenGL is also "from scratch" - as long as you prepare the input data yourself and call the necessary GL functions yourself, and don't rely on things like Unity or UE4. With those, you don't know what happens inside them to produce the graphics, which means it is harder to debug things and harder to know why it runs with worse performance than you expect, etc...



Thanks for the complete answer. I think we mean the same thing by "from scratch". The reason this whole thing started was that I was OK with SDL creating a window for me (that saves a lot of unnecessary talking to X, which is not worth doing myself. And wow, I didn't know that without X it would be even worse :) )

But then I have to create a renderer and a texture, and allocate memory, say with malloc.
I understand the malloc, but SDL's documentation gives no REAL explanation of what a renderer or a texture actually is.
Then I:
1) clear the renderer,
2) draw the gradient,
3) update the texture,
4) copy the texture to the renderer,
5) present the renderer.
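
In code, the whole thing is roughly this (a compressed sketch of what I have, with error checking removed):

    #include <SDL2/SDL.h>
    #include <stdint.h>
    #include <stdlib.h>

    int main(void)
    {
        int Width = 800, Height = 600;

        SDL_Init(SDL_INIT_VIDEO);
        SDL_Window *Window = SDL_CreateWindow("gradient", SDL_WINDOWPOS_UNDEFINED,
                                              SDL_WINDOWPOS_UNDEFINED, Width, Height, 0);
        SDL_Renderer *Renderer = SDL_CreateRenderer(Window, -1, 0);
        SDL_Texture *Texture = SDL_CreateTexture(Renderer, SDL_PIXELFORMAT_ARGB8888,
                                                 SDL_TEXTUREACCESS_STREAMING, Width, Height);
        uint32_t *Pixels = malloc((size_t)Width * Height * sizeof(uint32_t));

        for (int Frame = 0; ; ++Frame)
        {
            SDL_Event Event;
            while (SDL_PollEvent(&Event))
                if (Event.type == SDL_QUIT) return 0;

            SDL_RenderClear(Renderer);                                // 1) clear the renderer
            for (int Y = 0; Y < Height; ++Y)                          // 2) draw the gradient
                for (int X = 0; X < Width; ++X)
                    Pixels[Y * Width + X] = ((uint32_t)(uint8_t)Y << 8) | (uint8_t)(X + Frame);
            SDL_UpdateTexture(Texture, NULL, Pixels,
                              Width * (int)sizeof(uint32_t));         // 3) update the texture
            SDL_RenderCopy(Renderer, Texture, NULL, NULL);            // 4) copy the texture to the renderer
            SDL_RenderPresent(Renderer);                              // 5) present the renderer
        }
    }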

Of all this I only really understand step 2; all of the others are gibberish to me. For example, step 4 could be as cheap as just passing a pointer or as expensive as copying the entire buffer - I don't understand why these steps are necessary, what they do, or what they cost.

So that is not good; I wanted something better. Then I came across GLFW, which seems a little bit better, but it can only draw with OpenGL or Vulkan, so I wanted to avoid it. But then I thought: if X is using Mesa, which is just an OpenGL translator, then X is using OpenGL - so why should I not use it directly?

So I think I've got my answer.

I hope I got it right:
The way Casey does it, or using GLFW and OpenGL (just to draw the pixels that I have prepared), is as low level as it gets. Am I right?

And can you please tell me how to use raw OpenGL to draw the pixels?
I talked about a method of drawing one rectangle covering the entire screen and putting a texture (which is my pixels) on it. Is there a better way?

Edited by ramin on
Yes, the way he was using OpenGL in the beginning is as "low level" as you usually go when drawing a big chunk of pixels to the screen.

The easiest way to draw a rectangle covering the whole screen is to use "old" OpenGL: upload the texture data, bind the texture, and call glBegin(GL_QUADS) & glEnd with 4 glVertex/glTexCoord pairs in between.
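
Something like this (a sketch; it assumes a GL context is already current for your window, Pixels is your Width x Height 32-bit buffer, and you still call SwapBuffers / glXSwapBuffers / eglSwapBuffers after drawing):

    #include <GL/gl.h>

    // One-time setup: one texture object that we stream the CPU-side pixels into.
    static void SetupBlitTexture(void)
    {
        GLuint Texture;
        glGenTextures(1, &Texture);
        glBindTexture(GL_TEXTURE_2D, Texture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glEnable(GL_TEXTURE_2D);
    }

    // Every frame: upload the buffer and draw one quad covering the whole screen.
    static void DrawPixels(const void *Pixels, int Width, int Height)
    {
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, Width, Height, 0,
                     GL_BGRA, GL_UNSIGNED_BYTE, Pixels);  // GL_BGRA_EXT on old Windows headers

        glClear(GL_COLOR_BUFFER_BIT);
        glMatrixMode(GL_PROJECTION); glLoadIdentity();    // identity = vertices are in clip space
        glMatrixMode(GL_MODELVIEW);  glLoadIdentity();

        // Depending on the row order of your buffer you may need to flip the v coordinates.
        glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
        glEnd();
    }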

I don't know how far you have watched Handmade Hero, but at some point all of this goes away and Casey switches to rendering 3D polygons with OpenGL - including writing a pretty complex lighting shader in GLSL, using a framebuffer with multiple attachments to render into textures for depth peeling, and using fancy multisampling and sRGB.
I have watched 60 episodes. I hope I'll get to those fancy parts someday. Anyway, thank you again - that was very helpful.