So the renderer is still on the CPU; the OpenGL bits are just there to get the graphics buffer onto the screen. The glTexSubImage2D call just updates the texture memory with whatever we've rendered into our graphics buffer. So to answer your question: the drawing is not hardware accelerated, but the blitting to the screen is.
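To give a rough idea of what that looks like, here's a minimal sketch of the texture-update blit. The struct and names (Backbuffer, SetupTexture, etc.) are hypothetical, not the actual code in my platform layer, and it assumes a BGRA buffer and an old-style fixed-function GL context:

```objc
#import <OpenGL/gl.h>

typedef struct {
    void *memory;   // pixels filled in by the CPU renderer
    int width;
    int height;
} Backbuffer;

static GLuint textureId;

static void SetupTexture(Backbuffer *buffer)
{
    glGenTextures(1, &textureId);
    glBindTexture(GL_TEXTURE_2D, textureId);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    // Allocate the texture storage once; each frame we only re-upload pixels.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,
                 buffer->width, buffer->height, 0,
                 GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, 0);
}

static void BlitBufferToScreen(Backbuffer *buffer)
{
    glBindTexture(GL_TEXTURE_2D, textureId);
    // Upload the CPU-rendered pixels into the existing texture...
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                    buffer->width, buffer->height,
                    GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV,
                    buffer->memory);
    // ...then draw a fullscreen textured quad and let the GPU do the blit.
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1,  1);
    glTexCoord2f(1, 0); glVertex2f( 1,  1);
    glTexCoord2f(1, 1); glVertex2f( 1, -1);
    glTexCoord2f(0, 1); glVertex2f(-1, -1);
    glEnd();
}
```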
Another way of doing this (without OpenGL) would be to copy the memory into an NSImage and then draw that to the screen in the drawRect: method (sketched below). Or you could create a layer-backed view and update the layer's contents.
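A rough sketch of that NSImage path, assuming an NSView subclass and RGBA pixels; all the names here (BufferView, pixels, etc.) are made up for illustration:

```objc
#import <Cocoa/Cocoa.h>

@interface BufferView : NSView
@property (nonatomic) void *pixels;      // CPU-rendered RGBA pixels (assumed)
@property (nonatomic) int bufferWidth;
@property (nonatomic) int bufferHeight;
@end

@implementation BufferView

- (void)drawRect:(NSRect)dirtyRect
{
    // Wrap the raw buffer in a bitmap rep (no copy of the pixel data here).
    unsigned char *planes[1] = { (unsigned char *)self.pixels };
    NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:planes
                      pixelsWide:self.bufferWidth
                      pixelsHigh:self.bufferHeight
                   bitsPerSample:8
                 samplesPerPixel:4
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSDeviceRGBColorSpace
                     bytesPerRow:self.bufferWidth * 4
                    bitsPerPixel:32];

    // Hand the bitmap to an NSImage and let Cocoa draw it into the view.
    NSImage *image = [[NSImage alloc]
        initWithSize:NSMakeSize(self.bufferWidth, self.bufferHeight)];
    [image addRepresentation:rep];
    [image drawInRect:self.bounds];
}

@end
```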
Here's an example of someone's platform layer that uses CGImage to do the drawing:
https://github.com/zenmumbler/han...aster/HandmadeHero/main.m#L48-L87
Really, there are a bunch of ways to get the bits to the screen; what I'm doing with OpenGL just avoids most of the overhead of Cocoa.
I'm hoping to implement the input code today and then audio tomorrow, so stay tuned :D