Question About the Games Industry

I'm a computer engineering student and I really like exploring the technology behind video games. One of the biggest turning points for me, in terms of how I approach low-level systems development, was learning why the OOP features in C++ (as an example) might not be the best idea ever. I personally have experience writing code in an OOP way and then in a compression-oriented way using C, and I can say with absolute certainty that the latter is the better way to go (at least for me), because it teaches you to be messy with your code while finding the solution, then to optimize and harden the system later on once it actually works, instead of starting off by worrying about classes, abstractions, and inheritance relations.

Having said that, I noticed that in several job listings, open source engines, and other resources, OOP is mainly considered the best way to go in the games industry, and no matter how much I searched, I couldn't find a clear account of how the industry came to adopt it. Does anyone have an idea of why OOP started spreading in the games industry, and perhaps also the technical reasoning that justified it?
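
To make the contrast concrete, here's a minimal sketch of what I mean by the compression-oriented style (a toy example with invented names, not code from any real project):

```cpp
#include <stdio.h>

// Plain data, no hierarchy: start with the simplest thing that works.
struct Entity {
    float x, y;
    float vx, vy;
    int   health;
};

// A free function "compressed" out of the main loop only after the same
// movement code showed up twice. An OOP-first version would instead have
// started by designing a virtual Entity::update() and Player/Enemy
// subclasses before any gameplay existed.
static void update_entity(Entity *e, float dt)
{
    e->x += e->vx * dt;
    e->y += e->vy * dt;
}

int main(void)
{
    Entity player = {0.0f, 0.0f, 1.0f, 0.5f, 100};
    Entity enemy  = {10.0f, 0.0f, -1.0f, 0.0f, 50};

    for (int frame = 0; frame < 60; ++frame) {
        float dt = 1.0f / 60.0f;
        update_entity(&player, dt);
        update_entity(&enemy, dt);
    }
    printf("player ended at (%.2f, %.2f)\n", player.x, player.y);
    return 0;
}
```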


I wasn't programming at the time, but I always thought that in the 90s, when Java came around, a lot of people started using OOP (or at least the idea of using it spread), and that's also when 3D engines started to develop a lot, so as a consequence OOP started to be used in games to different degrees. If I remember correctly, the Doom 3 source code uses objects, but the code is not object oriented. On the other hand, the Source engine seems (the last time I looked at their SDK) to go all the way with OOP (maybe not in the core engine, but in the gameplay code).

The use of "scripting" languages for gameplay code also contributes to the use of OOP. I'm thinking of Unity, which uses C#: while you can write code in a non-OOP manner, it's not what most people will advise or do.

I agree with you that most people won't advise others against OOP, but I'm wondering why that is, specifically the technical justifications for using OOP.


Replying to mrmixer (#24974)

I can only speak for myself.

At some point I studied computer graphics and was working with students who had less programming experience than myself. I encouraged them to use OOP, and since I had more experience they considered it to be good advice. I had no real reason to use OOP; it was just what I thought was the way to go in college (because so many classes focused on it). There weren't any metrics behind my "choice".

So I think a lot of it comes from ignorance that is passed on. Encouraging people to make their own choices, and to be open to changing those choices, is a good thing to do. I remember seeing Allen Webster reading through their old code that used OOP and noting the places where OOP was just forced in. I did a similar thing with my code and it was even worse than theirs. Seeing that opened my eyes.

About the Unity C# thing, a problem is that even if you don't want to program in an OO style, you will still need to integrate your code with Unity and extend Unity classes. And the benefit of using OOP or not isn't immediately clear. So maybe it's better to let somebody know there are other ways to program, and then let them get there when the need arises.
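
Roughly what that constraint looks like (sketched here in C++ with invented names rather than Unity's actual C# API, but the shape is the same): the engine owns the base class and the main loop, so your gameplay code has to subclass to get called at all.

```cpp
// Engine-owned base class: the engine discovers these objects and
// calls the hooks every frame. (Invented stand-in for something like
// Unity's MonoBehaviour.)
class Behaviour {
public:
    virtual ~Behaviour() {}
    virtual void update(float /*dt*/) {}
};

// Even if you'd rather write one big procedural update function, your
// code still has to hang off the engine's class hierarchy before the
// engine will run it.
class PlayerController : public Behaviour {
public:
    void update(float dt) override {
        (void)dt;
        // gameplay logic here, called once per frame by the engine
    }
};
```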

That makes a lot of sense. Even at my college there is a lot of focus on OOP and on how it's the only path to the future.


Replying to mrmixer (#24978)

I'm not in the games industry, but I can contribute my experience.

Back when Java was still new, before Oracle bought Sun, it fit the niche of development language that big enterprises were looking for: general purpose, C-like syntax so they could leverage their existing talent, automatic memory management, cross-platform, desktop/web forms out of the box, and so on. OOP wasn't an industry-wide thing yet, but developers learned it because it was how Java (and soon after, MS's C#/.NET) was designed. So one reason OOP has spread is Java's many years of success across multiple industries. Eventually the ideas seep in and become what people know how to do, and they spread those ideas as they change jobs or whatever.

Another factor is how it lets individual developers isolate their work from the rest of the application. To explain this, let me set the stage a bit.

There are a few different ways of working on software. You may be used to working on projects solo, or you may have learned about Agile/Scrum methodologies, where there's a core team of 3-6 engineers working in sprints and iterating on their product as time goes on. At large enterprises, there is typically a team of 15-25 engineers supporting 300+ applications. The number of LOC in these apps resembles a bell curve - most are in the 50k-500k range, but a few are less than 5k and a few are easily 1-2M+.

(Source for the above: I've worked in a few large enterprises in my career, and have networked and read stories from people in others.)

Work is handed out in a piecemeal fashion - this app needs a new column on a report, this one has a bug, this one takes 30m to execute a single database call and can we do anything about that, etc. Generally it's a low-stress environment. You just take an item off the top of the list, work on it, and reach out to the business contact to figure out a good time to deploy the changes.

OOP works very, very well in this environment as a means of isolating your changes/fixes from other developers and from the rest of the application. It's not realistic to expect the engineers to know what all of these apps do and how they're designed, but the apps are all written in the same language - Java or C# or whatever - and they know those languages. It's very easy to whip up a new class, drop it into the program flow with a new MyClass(), and call it a day. This is the preferred way of working even when it would be cleaner to amend existing classes/modules elsewhere to get the desired functionality.
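
As a rough sketch of that workflow (all names invented, written in C++ here but the shape is the same in Java or C#): the entire ticket fits in one new class plus a single call site, without your having to read the rest of the flow.

```cpp
#include <string>
#include <vector>

// Existing app code, written by someone else years ago.
struct Report { std::vector<std::string> columns; };

// The whole ticket lives in this one new class, isolated from
// everything else in the application.
class QuarterlyTotalsColumn {
public:
    void addTo(Report &report) const {
        report.columns.push_back("Quarterly Totals");
    }
};

Report buildReport() {
    Report report;
    // ... hundreds of lines of program flow you never read ...
    QuarterlyTotalsColumn().addTo(report);  // drop it in, call it a day
    return report;
}
```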

This can lead to some annoying situations - more than once, I've produced a well-designed API for some new feature in an app, then gone back to check it 3 months later and found that two other devs have ignored what I did and reimplemented it elsewhere, or changed the private members to public so their thing works more easily. It leads to inelegant and worse-performing software. But the general mentality is that the app works, the business isn't paying for elegant software (it only needs to work as well as your internal customers will tolerate, and they have shockingly low expectations), and no one has to look at the app for another six months until the next ticket.

In this sense, OOP is a cheap, effective way to work on vast numbers and sizes of codebases without needing to understand how they work and what the "best" fix for a given problem would be. It's a simple little box to work in that lets you only care about that box.

After spending some time around HMN, I now think that modules are a better line to draw between what's internal and what you expose to clients as part of a module's API. But whenever a language treats modules as the unit of compilation, there's usually an RFC where someone proposes adding public and private to individual structs and classes. I think Go or Dart was going through this last I checked, but I don't remember for sure. And it's usually because that person is coming from a world where they use OOP to isolate their work from everything else, and they can't map that mental model onto modules.
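
Concretely, here's the module version of that boundary in C-style C++ (a sketch, file names invented): public/private is drawn once around the translation unit instead of around each class.

```cpp
// renderer.h - the module's public API; this is all a client ever sees.
#ifndef RENDERER_H
#define RENDERER_H
void renderer_init(int width, int height);
void renderer_draw_frame(void);
#endif

// renderer.cpp - the module's internals.
#include "renderer.h"

// Internal linkage: invisible outside this translation unit. This is
// the role "private" plays inside a class, but drawn around the whole
// module instead of around each object.
static int g_width, g_height;

static void upload_buffers(void) { /* ... */ }

void renderer_init(int width, int height) {
    g_width  = width;
    g_height = height;
}

void renderer_draw_frame(void) {
    upload_buffers();
    // submit draw calls using g_width/g_height ...
}
```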

Anyway that turned into a bit of a rant. Hope it helped.



I've had the opposite experience: universities tried to teach the technologies actually used in most workplaces to make us easily hireable, or companies pushed for their technologies to be taught. Some even tried to push for COBOL to be taught at universities.

I've actually had a few teachers who would stick to teaching C if they could.

