Main game loop on OS X

I've got the buffer stuff running now and can blit to the screen.

However, there is one thing I don't quite understand. In my view delegate I have this function:
- (void)drawRect:(NSRect)dirtyRect {
    // Get the destination graphics context for this view.
    CGContextRef gctx = [[NSGraphicsContext currentContext] graphicsPort];
    CGRect myBoundingBox = CGRectMake(0, 0, GlobalBackbuffer.BitmapWidth, GlobalBackbuffer.BitmapHeight);

    // Wrap our bitmap memory in a CGBitmapContext.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    int bitmapBytesPerRow = GlobalBackbuffer.BitmapWidth * 4;
    _backBuffer = CGBitmapContextCreate(GlobalBackbuffer.BitmapMemory,
                                        GlobalBackbuffer.BitmapWidth,
                                        GlobalBackbuffer.BitmapHeight,
                                        8,
                                        bitmapBytesPerRow,
                                        colorSpace,
                                        kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    // Snapshot the bitmap as a CGImage and blit it to the view.
    CGImageRef backImage = CGBitmapContextCreateImage(_backBuffer);
    CGContextDrawImage(gctx, myBoundingBox, backImage);
    CGContextRelease(_backBuffer);
    CGImageRelease(backImage);
}


This takes a pointer to the bitmap buffer, creates a bitmap context, creates an image from that context, and draws it to the screen.

The code that actually controls the bitmap is invoked in the main run loop. Right now I'm animating the buffer and it works without problems, which leads me to the question: why? I'm invoking the animation function on every pass of the run loop, but I'm not calling the drawRect method each time after I alter the bitmap. I'm never calling it at all. Apple says that drawRect is called when the window thinks it should repaint the view. But how does it know when the bitmap has changed?
Is it because [view setNeedsDisplay:YES]; is called right after the animation function? All righty, YES, that's it!
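
So the shape of my main loop is roughly this (sketched from memory; GlobalRunning and UpdateAndRender stand in for my actual names):

while (GlobalRunning) {
    ProcessEvents();
    UpdateAndRender(&GlobalBackbuffer);  // write new pixels into the bitmap
    [view setNeedsDisplay:YES];          // mark the view dirty...
    // ...and AppKit calls -drawRect: on the next pass through the run loop.
}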

Is there another place where I can put the drawing code? Like doing it in a more Windows-like way, without sending a message to the view that then calls drawRect? Like an update function that gets called after the animation is done, without the use of delegates?
However, I read that drawRect is the only place where you can actually draw stuff. Sounds stupid to me, but it's the same on Windows with the paint message, right?
Another thing: would it make sense to use Metal for the bitmap blit? It seems to be faster than OpenGL, and since the rendering is done in software until much later in the project, Metal could increase performance a bit, I guess?

Next problem: how is input handled by Cocoa? IOKit seems extremely complicated.
Maybe I will just implement the keyboard stuff first. Controllers seem extremely complicated.
Could you recommend resources for implementing keyboard/gamepad input?
I tried to understand this stuff from Jeff Buck's OSX layer, but daaaamn, there is a lot of stuff going on.


EDIT: I implemented input through plain event handling. However, I've seen people using IOKit for the keyboard as well. Are there any advantages to using IOKit?


I'm wondering if there is a way to handle delegate methods like applicationDidFinishLaunching in the main event loop, the way it is done on Windows. That way you could organize the code a lot better and you wouldn't have to use all these delegates.

For example, you can handle key presses like this:

void ProcessEvents() {
    @autoreleasepool {
        NSEvent *ev;
        do {
            // Pull the next queued event without blocking (nil date = don't wait).
            ev = [NSApp nextEventMatchingMask: NSAnyEventMask
                                    untilDate: nil
                                       inMode: NSDefaultRunLoopMode
                                      dequeue: YES];
            if (!ev) {
                break; // queue is empty
            }
            switch ([ev type]) {
                case NSKeyUp:
                case NSKeyDown: {
                    int hotkeyMask = NSCommandKeyMask | NSAlternateKeyMask | NSControlKeyMask | NSAlphaShiftKeyMask;
                    if ([ev modifierFlags] & hotkeyMask) {
                        // Forward hotkey events like cmd+q to the system.
                        [NSApp sendEvent:ev];
                        break;
                    }
                    // Handle normal keyboard events in place.
                    int isDown = ([ev type] == NSKeyDown);
                    switch ([ev keyCode]) {
                        case 13: { // W
                        } break;
                        default: {
                        } break;
                    }
                } break;
                default: {
                    // Forward events like app focus/unfocus etc.
                    [NSApp sendEvent:ev];
                } break;
            }
        } while (ev);
    }
}



So, is there a way to check whether ev is of type NSNotification and equals, for example, applicationDidFinishLaunching?
adge

Is there another place where I can put the drawing code? Like doing it in a more Windows-like way, without sending a message to the view that then calls drawRect? Like an update function that gets called after the animation is done, without the use of delegates?
However, I read that drawRect is the only place where you can actually draw stuff. Sounds stupid to me, but it's the same on Windows with the paint message, right?
Another thing: would it make sense to use Metal for the bitmap blit? It seems to be faster than OpenGL, and since the rendering is done in software until much later in the project, Metal could increase performance a bit, I guess?


For better or worse, Cocoa on OS X (and iOS for that matter) is heavily based around delegates as a way to add customization without having to subclass. As I recall, the reason you need to do your view drawing inside -drawRect: is that it's the only place where you get the graphics context for the view that you use for drawing. And the graphics context can change between calls, so you can't reliably cache it either. As for using Metal to do the bitmap blit, it's probably not worth it. Quartz (which is the foundation of Core Graphics) already uses Metal under the hood, so if you're just blitting a bitmap, I don't expect you'd see a big difference.

In your example though, I would cache the bitmap context when the view is created so you don't keep creating it every frame. :)

adge

Next problem: how is input handled by Cocoa? IOKit seems extremely complicated.
Maybe I will just implement the keyboard stuff first. Controllers seem extremely complicated.
Could you recommend resources for implementing keyboard/gamepad input?
I tried to understand this stuff from Jeff Buck's OSX layer, but daaaamn, there is a lot of stuff going on.

EDIT: I implemented input through plain event handling. However, I've seen people using IOKit for the keyboard as well. Are there any advantages to using IOKit?


This is an area that I'm not really knowledgeable in either, so I can't give specifics here. I use IOKit for input even for keyboard & mouse because when I do add support for gamepads, it's all in one place, as opposed to dealing with NSEvents for keyboard/mouse and HID for gamepads.
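
For a rough idea of the shape of it, the HID setup looks something like this (a sketch from memory, minus the matching dictionaries and error handling):

#include <IOKit/hid/IOHIDLib.h>

// Called for every input value change on any matched HID device.
static void HandleHIDValue(void *context, IOReturn result, void *sender, IOHIDValueRef value)
{
    IOHIDElementRef element = IOHIDValueGetElement(value);
    uint32_t usagePage = IOHIDElementGetUsagePage(element);
    uint32_t usage = IOHIDElementGetUsage(element);
    CFIndex state = IOHIDValueGetIntegerValue(value); // e.g. 1 = pressed, 0 = released for buttons/keys
    // ... map (usagePage, usage, state) onto your game's input state ...
}

static void SetupHIDInput(void)
{
    IOHIDManagerRef manager = IOHIDManagerCreate(kCFAllocatorDefault, kIOHIDOptionsTypeNone);
    IOHIDManagerSetDeviceMatching(manager, NULL); // NULL matches every HID device
    IOHIDManagerRegisterInputValueCallback(manager, HandleHIDValue, NULL);
    IOHIDManagerScheduleWithRunLoop(manager, CFRunLoopGetMain(), kCFRunLoopDefaultMode);
    IOHIDManagerOpen(manager, kIOHIDOptionsTypeNone);
}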

adge

I'm wondering if there is a way to handle delegate methods like applicationDidFinishLaunching in the main event loop, the way it is done on Windows. That way you could organize the code a lot better and you wouldn't have to use all these delegates.

So, is there a way to check whether ev is of type NSNotification and equals, for example, applicationDidFinishLaunching?


The only way I can think of doing this is to still observe or implement the -applicationDidFinishLaunching: method, but in there, create an NSEvent yourself (you can fill in its data1/data2 fields with whatever you need) and then add it to the event queue by calling -postEvent:atStart: on the application instance. You should then receive the event in your process loop.
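
Something along these lines (kMyLaunchedEvent is just a made-up subtype so you can recognize your own event):

enum { kMyLaunchedEvent = 1 }; // made-up subtype so we can recognize our own event

- (void)applicationDidFinishLaunching:(NSNotification *)notification {
    NSEvent *ev = [NSEvent otherEventWithType:NSApplicationDefined
                                     location:NSZeroPoint
                                modifierFlags:0
                                    timestamp:0
                                 windowNumber:0
                                      context:nil
                                      subtype:kMyLaunchedEvent
                                        data1:0
                                        data2:0];
    [NSApp postEvent:ev atStart:NO];
}

// Then in ProcessEvents():
//   case NSApplicationDefined:
//       if ([ev subtype] == kMyLaunchedEvent) { /* finished launching */ }
//       break;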

Flyingsand

In your example though, I would cache the bitmap context when the view is created so you don't keep creating it every frame. :)


What do you mean by "caching"? Storing it in a variable that stays around, like a static variable in the function? I thought I had to recreate the bitmap context every time I change the bitmap itself because, well, it's changing!
See, that's the problem when you never learned programming in a professional way and taught yourself everything: you know nothing. Especially because the OS framework documents are somewhat wacky, lacking, and hard to understand. And there are not really any books that teach you the frameworks. There are 100 books that show you how the damn syntax works, but that's it. Nobody tells you what a function actually does unless you ask about it in a forum.


Flyingsand

The only way I can think of doing this is to still observe or implement the -applicationDidFinishLaunching: method, but in there, create an NSEvent yourself (you can fill in its data1/data2 fields with whatever you need) and then add it to the event queue by calling -postEvent:atStart: on the application instance. You should then receive the event in your process loop.


Yeah, that's exactly what I thought about too. But this seems like a roundabout way that will slow things down more, I guess.
adge
Flyingsand

In your example though, I would cache the bitmap context when the view is created so you don't keep creating it every frame. :)


What do you mean by "caching"? Storing it in a variable that stays around, like a static variable in the function? I thought I had to recreate the bitmap context every time I change the bitmap itself because, well, it's changing!
See, that's the problem when you never learned programming in a professional way and taught yourself everything: you know nothing. Especially because the OS framework documents are somewhat wacky, lacking, and hard to understand. And there are not really any books that teach you the frameworks. There are 100 books that show you how the damn syntax works, but that's it. Nobody tells you what a function actually does unless you ask about it in a forum.


Yes, you would create the bitmap context when the view is created (for example) and keep it around as an instance variable of the view. The problem with storing it as a static inside -drawRect: is that you still need to release the context with CGContextRelease eventually. Since it now persists between calls, you would release it once in -dealloc, but a static inside -drawRect: is scoped to that method only, so -dealloc can't reach it.

And no, you don't have to recreate the bitmap context every time, because if you create it with your own pointer in memory (as you have with GlobalBackbuffer.BitmapMemory), the bitmap memory always stays the same. Its contents will change as you draw into the bitmap every frame, but you don't need to recreate the context for that. You do need to recreate the CGImage from the context every frame, though, to pick up the changes in the bitmap. Hope that makes sense.
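
In other words, something like this (a sketch assuming a non-ARC NSView subclass with a CGContextRef _backBuffer instance variable, mirroring your drawRect code):

- (instancetype)initWithFrame:(NSRect)frame {
    if ((self = [super initWithFrame:frame])) {
        CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
        // Create the context once; it wraps GlobalBackbuffer.BitmapMemory directly.
        _backBuffer = CGBitmapContextCreate(GlobalBackbuffer.BitmapMemory,
                                            GlobalBackbuffer.BitmapWidth,
                                            GlobalBackbuffer.BitmapHeight,
                                            8,
                                            GlobalBackbuffer.BitmapWidth * 4,
                                            colorSpace,
                                            kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);
    }
    return self;
}

- (void)drawRect:(NSRect)dirtyRect {
    CGContextRef gctx = [[NSGraphicsContext currentContext] graphicsPort];
    // Only the CGImage snapshot is recreated per frame.
    CGImageRef backImage = CGBitmapContextCreateImage(_backBuffer);
    CGContextDrawImage(gctx, self.bounds, backImage);
    CGImageRelease(backImage);
}

- (void)dealloc {
    CGContextRelease(_backBuffer); // release the cached context exactly once
    [super dealloc]; // non-ARC
}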

adge
Flyingsand

The only way I can think of doing this is to still observe or implement the -applicationDidFinishLaunching: method, but in there, create an NSEvent yourself (you can fill in its data1/data2 fields with whatever you need) and then add it to the event queue by calling -postEvent:atStart: on the application instance. You should then receive the event in your process loop.


Yeah, that's exactly what I thought about too. But this seems like a roundabout way that will slow things down more, I guess.


Yeah, unless you have a good reason for doing so, there's little point in doing this. One of the downfalls of callbacks/delegates (which I believe Casey has mentioned before) is that if you need shared data between various delegates/callbacks, or if the API doesn't let you pass userData or doesn't pass you the data you need, then you have to either maintain state or use globals.
Do I have to release the CGBitmapContext in drawRect then? Or just release it when the application closes?

Another thing I just came across: is there any guide on how to create build.sh scripts for compiling your code? I want to abandon Xcode, but I can't seem to find any resources on how to write them properly.

adge
Do I have to release the CGBitmapContext in drawRect then? Or just release it when the application closes?


No. If you create the context once in -init or some other place where initialization takes place, you release it in -dealloc (much like malloc/free).

adge

Another thing I just came across: is there any guide on how to create build.sh scripts for compiling your code? I want to abandon Xcode, but I can't seem to find any resources on how to write them properly.


I would just look into bash scripting; a build script is essentially compiling from the command line put into a file. You can also look at the report navigator in Xcode (it's the last tab on the navigation pane on the left side), where you can see the full command line Xcode uses to build your project. It uses "xcodebuild", but you can apply most of the relevant arguments when building with "clang++".
Do I have to release it at all? The bitmap stays around the whole time the application is running, and when the application terminates, everything is freed anyway, right?
I guess it's just good practice, but strictly speaking there is no reason to free memory that is allocated for the whole lifetime of an application.

adge
Do I have to release it at all? The bitmap stays around the whole time the application is running, and when the application terminates, everything is freed anyway, right?
I guess it's just good practice, but strictly speaking there is no reason to free memory that is allocated for the whole lifetime of an application.


So I would say that you're probably right and don't have to release it, given that it will be released when the application exits anyway. But the thing is, CGContextRelease doesn't technically release the context; it decrements its retain count. All of the Objective-C/Swift frameworks in Cocoa on both OS X and iOS use what's called automatic reference counting (ARC). Before ARC, when you allocated a new object you retained it, which incremented its retain count by 1. Sending it a release message would decrement it by 1, and when its retain count reached 0, it would be freed. Under ARC, the compiler now puts in the retain/release messages for you.

However, APIs like CoreGraphics and CoreFoundation don't do this because they are still in C. So CoreFoundation has a naming convention: any function that has "Create" or "Copy" in its name returns an object with a retain count of 1, ownership passes to you, and it is your responsibility to call the corresponding release function to decrement the retain count.
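
For example, with the color space from your drawRect code:

CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
// "Create" in the name: retain count is 1 and we own the reference.
// ... use colorSpace ...
CGColorSpaceRelease(colorSpace);
// Release decrements the retain count; the object is freed when it reaches 0.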

So that's just a little background on what the CoreFoundation "release" functions actually do; it's not quite as simple as a free.

In your case, however, you allocated the bitmap memory yourself, so really, what is there for CGContextRelease to free? Perhaps it allocates some memory to store context info, like the color space, or some memory for caching, etc. But again, you're probably safe in not releasing it, though I can't say 100% for sure. When using CoreFoundation, I always just follow the guidelines and release those objects.

Thank you very much for your answers. I hope you don't mind answering all my questions. There are probably more to come.


I have ARC disabled, but it doesn't seem like I'm leaking anything; memory usage stays at around 40MB. I used @autoreleasepool for all the window stuff.
Could memory that an application didn't release still be occupied after it shuts down, even though the application isn't running anymore?

By the way, why does my application not print to the terminal with printf()? I created a build.sh, and to compile my application I just type sh build.sh. The script contains the following line:
clang code/main.m -fno-objc-arc -fmodules -mmacosx-version-min=10.6 -o main


I found this somewhere online. I really have no clue how this is done properly. I've searched a lot, but it seems there is no one who isn't using Xcode.

adge
Thank you very much for your answers. I hope you don't mind answering all my questions. There are probably more to come.


Not a problem. :)

adge

I have ARC disabled, but it doesn't seem like I'm leaking anything; memory usage stays at around 40MB. I used @autoreleasepool for all the window stuff.
Could memory that an application didn't release still be occupied after it shuts down, even though the application isn't running anymore?


It could be that you're just not allocating enough in your platform layer for it to be noticeable. Looking at my own platform layer, there are very few Objective-C allocations, so there wouldn't be any noticeable leaks even if some existed. And most of the allocations persist for the whole application.

And no, it's not possible for memory your application allocated to stay occupied after the process shuts down; the OS reclaims all of it when the process exits.

adge

By the way, why does my application not print to the terminal with printf()? I created a build.sh, and to compile my application I just type sh build.sh. The script contains the following line:
clang code/main.m -fno-objc-arc -fmodules -mmacosx-version-min=10.6 -o main


I found this somewhere online. I really have no clue how this is done properly. I've searched a lot but it doesn't seem there is anyone who is not using Xcode.


It's hard to say why you're not seeing output from printf(). One thing that came to mind is that you're not linking with the C/C++ standard libraries at all, or with any of the frameworks (i.e. Cocoa). But your build script does still compile and produce an executable?

I dug up one of my older shell scripts for building C++:
clang++ -std=c++11 -stdlib=libc++ -DDEBUG=1 -O0 -g -Wall -I../ -o filter main.cpp filter.cpp
mv filter build/
mv filter.dSYM build/


For compiling an application that relies on the Cocoa framework, from what I recall you will want to specify something like:
clang -g -Wall -framework Cocoa -o main code/main.m
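
Putting that together, a build.sh along these lines might be a starting point (an untested sketch; the code/ path and -fno-objc-arc flag come from your script):

#!/bin/sh
mkdir -p build
clang -g -Wall -fno-objc-arc -framework Cocoa -o build/main code/main.m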


When I have used build scripts in the past instead of Xcode, I remember that it was a little fussy. A fair bit of trial-and-error, and comparing with the command line that Xcode would output.
Yeah, it creates a perfectly running Unix executable. It seems programming is the easy part; creating the build script is the hard one. There are absolutely no guides or documentation out there that could help. I tried your compilation line, but it doesn't print either.

Is there really no one not using Xcode? It's a shaaaaaaammmeee!!

Clang has a pretty good manual on its command-line usage:
http://clang.llvm.org/docs/CommandGuide/clang.html
http://clang.llvm.org/docs/UsersManual.html
And there is a lot in common with GCC, so the GCC manual can be used as well: https://gcc.gnu.org/onlinedocs/gcc-6.2.0/gcc/

There's also a bunch of useful info on the Apple Developer site (a free account is probably required). Here's some older info on how to port and build Linux applications: https://developer.apple.com/libra...tingUnix/compiling/compiling.html

So don't say there is no documentation. There is a lot of documentation.

I have used clang on the command line a lot (including on OSX) and I don't remember printf being a problem; it prints out everything normally. You are doing something unusual.

My bad. It seems there was a problem with where the printf() function was invoked. I'm an idiot. Thank you for the links! They will definitely help me!
There is something I don't understand with my bitmap.

To create the bitmap context I use this code:

backbuffer = CGBitmapContextCreate(GlobalBackbuffer.Memory,
                                   GlobalBackbuffer.Width,
                                   GlobalBackbuffer.Height,
                                   8,
                                   bitmapBytesPerRow,
                                   colorSpace,
                                   kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little);


It was a bit of trial and error to make it work, because I don't quite understand "kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little".
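
From what I could piece together from the CGImage docs (so take this with a grain of salt):

// kCGImageAlphaNoneSkipFirst: 32 bits per pixel, no alpha; the first
//   component slot exists but is ignored (logically X,R,G,B).
// kCGBitmapByteOrder32Little: each 32-bit pixel is stored little-endian,
//   which reverses the bytes in memory to B,G,R,X.
// Net effect on a little-endian CPU: a uint32 you write as 0xXXRRGGBB
// lands in memory as BB GG RR XX, the same layout Casey uses on Windows.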

To draw into my bitmap I have this block:

internal void RenderWeirdGradient(osx_offscreen_buffer *Buffer, int BlueOffset, int GreenOffset)
{
    int Pitch = Buffer->Width * Buffer->BytesPerPixel;

    uint8 *Row = (uint8 *)Buffer->Memory;
    for(int Y = 0; Y < Buffer->Height; ++Y)
    {
        uint32 *Pixel = (uint32 *)Row;
        for(int X = 0; X < Buffer->Width; ++X)
        {
            uint8 Red = 0;
            uint8 Green = (Y + GreenOffset);
            uint8 Blue = (X + BlueOffset);

            // Pack the three 8-bit channels into one 32-bit pixel.
            *Pixel++ = Red | Green << 8 | Blue << 16;
        }

        Row += Pitch;
    }
}


Which is the same as what Casey did. But what I don't quite understand is this line:
*Pixel++ = Red | Green << 8 | Blue << 16;

What's happening there? I know he said it has something to do with little and big endian, but I didn't understand his explanation either. It was just trial and error to get this working, and I don't know what I did there.

This is what I expect my pixel int to look like, right?
RR GG BB xx

But in memory it is stored like this:
xx BB GG RR

But why do I have to shift the bits?
Why can't I just do

*Pixel++ = Red | Green | Blue | xx;

It would then get stored like:
RR
GG RR
BB GG RR
xx BB GG RR
Am I right? Or am I getting something fundamental wrong?
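
Working it through for myself, this is what I think the shifts actually do (my own sketch, so I might be wrong):

// Red, Green, Blue are 8-bit values. Without shifts they all occupy
// bits 0-7, so OR-ing them just merges them into a single byte:
//   Red | Green | Blue             ->  0x000000??   (channels collide)
//
// The shifts move each channel into its own byte of the 32-bit value:
//   Red | Green << 8 | Blue << 16  ->  0x00BBGGRR
//
// A little-endian CPU stores the least significant byte first, so the
// value 0x00BBGGRR lands in memory as: RR GG BB 00.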
