I stumbled upon the Handmade Hero thing some time ago. I have to say I do admire him for it. It takes some guts to develop a serious program totally from scratch, without any libraries, on a livestream. I also share his sentiment that code should be as direct and as focused on the real problem as possible. However, a few observations.
About OOP
First thing: OOP was never invented to produce efficient software. It was invented as an attempt to improve the organisation of code. The purely imperative style of C has a tendency towards spaghetti code. Believe me, I have seen some masterworks. The idea was to organise it into little self-contained chunks with less coupling. From spaghetti to ravioli, so to speak. Did it succeed? Maybe up to a point. But it's very easy to take it too far. Taken too far, OOP has a tendency to generate loads of code that doesn't really do the task, but is just fluff. You know, the factories, managers and factory-managers. I have seen plenty of horrible, incomprehensible OOP hells. However, when done in moderation, it can help to structure things better. That also goes for data-oriented approaches. You just have to rethink the class design. A class should for example not model just one specific car, but hold the data for all cars in the simulation. Data orientation and object orientation are not mutually exclusive.
About destructors, RAII and smart pointers
Back in the day, the biggest criticism of C was the manual memory management. Allocated memory has to be freed exactly once on every execution path. No free is a memory leak; two frees is undefined behaviour, in practice a crash. These errors are not easy to fix. I've done it often enough; I have been a professional C programmer. Of course, in C all execution paths are explicitly present in the code, so if you are careful, it can be done. But I would caution against taking the tough-guy stance and glossing over this problem. In C++ it's even harder, because there are exceptions, and those create implicit execution paths.
The Java people solved this problem with automatic garbage collection. This has a problem of its own: at unpredictable points in time the garbage collector will do its rounds and collect all garbage, causing an unpredictable performance profile. Of course the world has moved on, and there are now ways to tune the garbage collection strategy so that this is not so problematic, but that is specialist knowledge and the problem will never go away completely.
Fortunately the C++ people have not gone this way. They didn't have to. From the earliest beginnings there was already RAII. RAII makes use of the language's guarantee that the destructor is called exactly once whenever an object goes out of scope, also on exception paths. So if an object holds allocated memory (or a file handle, or a database connection), and in its destructor it releases that resource, you never have a leak or a crash on a double free. That's all there is to know about RAII. Smart pointers are basically just a convenience to make RAII easier to use. Good thing: the moment the resource is released is deterministic and as such can be controlled.
Are memory leaks so bad? This depends wholly on what you're doing. If your project is control software for jet fighters, or some server application that will run in production for months, yes, then it is really bad to leak memory. Even occasionally leaking one int will accumulate and spiral out of control, causing out-of-memory crashes that are really hard to debug. But if your project is a game, or some other application that will run for a few hours at most, that might be different. For resources that you know have essentially the same lifetime as the program, you might as well not release them at all. The OS will reclaim them on program termination.
About exceptions
The e-word has been uttered already. Exceptions are OK, but in some environments they are way overused. My experience: Java people love exceptions too much, C++ people hate them too much. Exceptions only have runtime cost when they are thrown, so the trick is not to throw them. Or, at least, to throw them only in exceptional situations. When should exceptions be thrown? I think it's easier to say when not.
1- on programming bugs. Example: index-out-of-bounds faults are almost always programming bugs. The program should just crash on those and the bug should be fixed. There is no use in trying to recover from a bug. Things like sanity-checking program parameters are better done by the contracts feature (whenever it arrives).
2- on normal program flow. Exceptions should not be used for program flow. They are too expensive for that. And that includes weird user input. Users doing weird things is normal behaviour that should be anticipated.
3- on environmental situations that you can't recover from. What this is, depends on the application. Jet fighter control software should be fault tolerant. It should have some contingency plan for almost anything (but not through exceptions, because of point 4). Transaction processing software of a bank should not lose data. Even if the file system crashes, it should have some mechanism not to lose data. But a game? It should just crash on out of memory problems or a crashing file system.
4- in hard real time situations. Exceptions cause unpredictable runtime cost when thrown. But this is off topic for games.
Exceptions are a nice way to keep the normal program flow from being polluted by error handling code. But they should be used with moderation.
About templates and standard library containers
To be honest, I don't really understand his problem with templates. Templates have no runtime cost, they are evaluated at compile time. They do make compilation slower though. I guess it's more about things like std::string and std::vector. If you just append to those, they will keep reallocating and copying the data, which is of course slow. That's why std::vector has the reserve() function. If you have any idea of the size of the data, and it is not really small, always use reserve(). Also, std::string has the two-param constructor that creates an initialised string of a given length. Use those features, they are there for a reason.
However, when done in moderation, it can help to structure things better.
So does writing C code in moderation.
Good thing: the moment the resource is released is deterministic and as such can be controlled.
You forgot to mention a bad thing: producing and running a ton of code to "destruct" your forest of pointers, when all of that could be a simple memory arena reset. Not only runtime cost, but binary bloat too.
Templates have no runtime cost
They absolutely have runtime cost in addition to compile-time cost. Compilers tend to generate a lot of bloated, duplicated binary code for templates. A lot of binary code means a lot of code cache misses, and thus lower performance and a lower IPC rate. Not only in theory, but measurably in practice.