(I have read the other OOP topics on this forum, and elsewhere, but none of them seemed to answer this question)
I've been working in C++ for about 8 years at this point and have been following Handmade Hero since day 1. I agree that OOP methodologies are plain bad, and I haven't used "proper" OOP in my code in several years. I first started to abandon OOP after reading about the visitor pattern and how insane it seems that you'd need to do that.
In the past couple of years I've oscillated between three styles: simple modular class-based design (when required to at work, to stay consistent with the codebase's existing design); what I call "pragmatic oriented", where things are arranged in structs with no private members, but constructors/destructors and member functions are used when the first argument would essentially just be a pointer to the data (thus exposing both the API and the data to the user, while leaving them free to do whatever else they want); and Casey's compression/data-oriented style.
I feel most comfortable in what I call the "pragmatic" style, since I feel constructors and destructors (when used sensibly) can significantly improve the readability and safety of code (and they're perfectly usable in a PushStruct scenario like Casey uses), and I see no reason why a function that just takes a pointer to a data type shouldn't receive it implicitly as "this".
Is this style avoided primarily because nobody wants to spend the time updating headers, and because it makes "refactoring"/compression harder? I feel like I must be missing something bigger in the avoidance of this style, and I'd be intrigued to know what it is.
Use what works for you. I mean, dogmatically rejecting OOP is just as bad as dogmatically requiring it.
The tradeoffs I see with using ctor/dtor (even without virtual members) are that they create "magic" code that the compiler runs for you. For example, you might want to re-use an object, but there's no easy way to re-run the ctor (not counting stuff like placement new), or you might want to free memory without running the destructor. Data structures can be problematic too: if Foo has a constructor and you create an array of Foo, the compiler is going to generate code to call the constructor on each element, but if you have a Foo * that you assign a block of memory to, it won't. Why does the compiler get to decide this? Shouldn't you be in charge of when the initialization gets called?
In general, "vanilla" C++ tightly couples allocation with initialization, and deallocation with de-initialization. Some people think this is a feature. Look at how often Casey uses unions, and how useful they are; then read the C++ FAQs on them. C++ basically just shrugs its shoulders and gives up.
For member functions, the biggest headache I've run into is that it can be difficult to hold on to a pointer to a member function. You need a special "pointer-to-member-function", which opaquely encodes how to invoke the member on an object and is not a predictable size. A function pointer that takes the equivalent of this as the first argument, by contrast, is just a regular function pointer, and you're on your own to provide it with the object. You've also got the issue that if you're holding on to these member-function pointers, you can't fix up the pointers when you're moving things around (such as with the live code reload, or when loading the asset files).
Now, I don't think all the C++ baggage is necessarily bad. If you're writing code that maps well to what the compiler is doing, don't make extra work for yourself. Use what works for you. If you're using classes with all static member functions, you're basically just using the class as a namespace, and that's fine.
Another issue with C++ constructors is that you cannot return anything from them. To fail a constructor you have to throw an exception, and once you have an exception in one place you have them everywhere.
Pointers to member functions can certainly be a little tricky. However, with C++11 and a mix of templates and preprocessor they can quite trivially be converted to static/freestanding functions (I used this method when I was told to provide Python bindings for a project). Although, if I'd had the time, I would rather have done a one-off source translation than run it on every compile: for the ~200 functions I was binding there was a 6-second template instantiation cost, which did upset me a little.
Constructors certainly can be tricky, but I very rarely use them for anything that can fail. Normally it's just a matter of getting the data to a defined state rather than doing anything complex; essentially it saves me writing a line for each data member I want to initialise. For this purpose I find they work well enough. (I don't use exceptions anywhere. I've read many times that the x64/Itanium ABI uses a zero-cost model for exceptions, but my own tests show that the cost is definitely non-zero, and I think the stack unwinding messes up what's in the cache.)
So the moral is essentially: be aware of what computers do, and don't go out of your way to make them do something else. That makes a lot of sense.
The zero-cost model for exceptions means there is no additional code executed and no performance penalty when exceptions are not thrown. When an exception is thrown, then of course additional code is executed.
You can implement exceptions yourself with the setjmp/longjmp functions (the SJLJ exception model). That is not a zero-cost model, because it saves the state of the CPU registers every time it enters a try block, so it has a significant performance cost even when nothing is thrown.