I wasn't programming at the time, but I always thought that in the '90s, when Java came around, a lot of people started using OOP (or at least the idea of using it spread), and that's when 3D engines started to develop a lot, so OOP started to be used in games to different degrees. If I remember correctly, the Doom 3 source code uses objects but isn't really object oriented. On the other hand, the Source engine seemed (the last time I looked at their SDK) to go all the way with OOP (maybe not the core engine, but the gameplay code).
The use of "scripting" languages for gameplay code also contributes to the use of OOP. I'm thinking of Unity, which uses C#: while you can write code in a non-OOP manner, it's not what most people will advise or do.
I can only speak for myself.
At some point I studied computer graphics and was working with students who had less programming experience than I did. I encouraged them to use OOP, and since I had more experience they considered it good advice. I had no real reason to recommend OOP; it was just what I thought was the way to go, because so many classes in college focused on it. There weren't any metrics behind my "choice".
So I think a lot of it comes from ignorance that gets passed on. Encouraging people to make their own choices, and being open to changing yours, is a good thing to do. I remember seeing Allen Webster reading their old code that used OOP and noting the places where OOP was just forced in. I did a similar thing with my code and it was even worse than theirs. Seeing that opened my eyes.
About the Unity C# thing, one problem is that even if you don't want to program in an OO style, you still need to integrate your code with Unity and extend Unity's classes. And the benefit of avoiding OOP isn't immediately clear there. So maybe it's better to let somebody know there are other ways to program, and let them get there when the need arises.
I'm not in the games industry, but I can contribute my experience.
Back when Java was still new, before Oracle bought Sun, it fit the niche of development language that big enterprises were looking for. General purpose, C-like syntax so they could leverage their existing talent, memory management, cross-platform, desktop/web forms out of the box, and so on. OOP wasn't an industry thing yet, but developers learned it because it was how Java (and soon after, MS's C#/.NET) was designed. So one reason OOP has spread is because of Java's many years of success across multiple industries. Eventually the ideas seep in and it's what people know how to do, and they spread those ideas as they change jobs or whatever.
Another factor is how it lets individual developers isolate their work from the rest of the application. To explain this, let me set the stage a bit.
There are a few different ways of working on software. You may be used to working on projects solo, or you may have learned about Agile/Scrum methodologies, where there's a core team of 3-6 engineers working in sprints and iterating on their product as time goes on. At large enterprises, there is typically a team of 15-25 engineers supporting 300+ applications. The number of LOC in these apps resembles a bell curve - most are in the 50k-500k range, but a few are less than 5k and a few are easily 1-2M+.
(Source for the above: I've worked in a few large enterprises in my career, and have networked and read stories from people in others.)
Work is handed out in a piecemeal fashion - this app needs a new column on a report, this one has a bug, this one takes 30m to execute a single database call and can we do anything about that, etc. Generally it's a low-stress environment. You just take an item off the top of the list, work on it, and reach out to the business contact to figure out a good time to deploy the changes.
OOP works very, very well in this environment as a means to isolate your changes/fixes from other developers and the rest of the application. It's not realistic to expect the engineers to know what all of these apps do and how they're designed, but they're all written in the same language - Java or C# or whatever - and they know those languages. It's very easy to whip up a new class, drop it into the program flow with a new MyClass(), and call it a day. This is the preferred way of working even if it would be cleaner to amend existing classes/modules elsewhere to get the desired functionality.
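To make the "new MyClass()" fix style concrete, here's a minimal Java sketch. All the names are hypothetical, invented for illustration - imagine a ticket asking for a discount on some report total:

```java
// Hypothetical example: rather than amending the existing report code,
// the fix is isolated in a brand-new class that the rest of the app
// never needs to know about.
class DiscountCalculator {
    private final int percent; // private state, invisible to other devs

    DiscountCalculator(int percent) {
        this.percent = percent;
    }

    // Work in integer cents to avoid floating-point rounding.
    long applyToCents(long totalCents) {
        return totalCents - totalCents * percent / 100;
    }
}

public class ReportDemo {
    public static void main(String[] args) {
        // Somewhere in the existing program flow, the change is one line:
        long discounted = new DiscountCalculator(10).applyToCents(20_000);
        System.out.println(discounted); // prints 18000
    }
}
```

The point isn't that this is good design - it's that the person who wrote it never had to read or understand how the surrounding report code was structured, which is exactly the appeal in that environment.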
This can lead to some annoying situations - more than once, I've produced a well-designed API for some new feature in an app, then 3 months later I go back and check it and two other devs have ignored what I did and reimplemented it elsewhere, or changed the private members to public so their thing works more easily. It leads to inelegant and worse-performing software. But the general mentality is that the app works, the business isn't paying for elegant software (it only needs to work as well as your internal customers will tolerate, and they have shockingly low expectations), and no one has to look at the app for another six months until the next ticket.
In this sense, OOP is a cheap, effective way to work on vast numbers and sizes of codebases without needing to understand how they work and what the "best" fix for a given problem would be. It's a simple little box to work in that lets you only care about that box.
After spending some time around HMN, I now think that modules are a better line to draw between what's internal and what you expose to clients as part of a module's API. But whenever a language treats the module as the unit of compilation, there's usually an RFC where someone proposes adding public and private to individual structs and classes. I think Go or Dart was going through this last I checked, but I don't remember for sure. And it's usually because that person is coming from a world where they use OOP to isolate their work from everything else, and they can't map that mental model onto modules.
Anyway that turned into a bit of a rant. Hope it helped.
OOP culture comes downstream from the professors in the universities. The universities are the gatekeepers of all corporate employment. If the only people you are allowed to hire are trained in OOP, eventually they must become the majority everywhere.
If you are paid to think, write books, and lecture impressionable young people with no financial pressures upon you, you aren't going to produce anything that is practical, because the system does not reward that. So what I'm suggesting is the best teachers for a skill are the craftsmen who depended on it to make their living, not the intellectuals who are paid to produce words.
I've had the opposite experience: my universities tried to teach the technologies actually used in most workplaces to make us easily hireable, or companies pushed for their technologies to be taught. Some even tried to push for COBOL to be taught at universities.
I've actually had a few teachers who would stick to teaching C if they could.
OO makes you employable, because the damage has already been done. I have a data structures book from way back in 1990 where the pseudocode was written procedurally, while in the foreword the computer scientist lamented that he didn't use OO.
I find the story on COBOL difficult to believe, but perhaps there are some old codebases that still need servicing because the older programmers have died. There can be a lot of money in something if there are very few people who can do it. Surely you wouldn't need a degree to learn COBOL? You could learn it in a cheap online course.
The point I was making in another way: If I wanted to learn how to make movies, what would be my preferred choice? Apprentice to someone like Ridley Scott, or take lectures from a professor, who has never made a movie that succeeded in his life. I would pick Ridley Scott.
You will get a better education learning from people like Casey and Jon, and talking to the kind of programmers they attract, than you will get at any university on the planet.