So OOP is crap?

DaleKim

The more I program, the more I've come to appreciate that it's just way easier to program when you try to understand what the CPU actually needs to do and then write the most direct code necessary to get the CPU to do that thing in whatever language you use, as Casey mentioned.

(...)

I've been hamstrung an incredible amount by this sort of indoctrination. It's just so harmful when you want to be an effective programmer in the real world. The code becomes a mess, and it's almost impossible to reason about the cost of anything in it.


That's an interesting take. What does it mean to be an effective programmer for you?

For me, being effective means that I implement the features the customer wants (and pays for), get my work done in a reasonable amount of time, and that the code is clear and understandable. Number 1 is important or I won't get paid. Number 2 is basically also about money. And the last point allows me to work together with other programmers, which is also a requirement of my job.

To achieve these goals I don't think about what I want the CPU to do. I work with sensible abstractions in the problem domain. So that's a rather top-down approach.

This also means I won't care about performance until it becomes a problem. Very often performance is great, even though there are tons of little inefficiencies. (OK, in my work performance improvements are usually made by optimizing database schemas, so a game programmer might have different objectives. But even this is done only if actual measurements show that a change there can improve things.)

There is a saying: "Hardware is cheap, programmer time is expensive". It's true.

Working bottom-up is nice and all, especially if you can afford to do everything from scratch. But there is always the danger of writing lots of code that doesn't contribute at all to solving your real problem.
insanoflex512
Speaking of programming paradigms, I read an article bashing "compression oriented programming". Basically it says it's an anti-pattern, uh, except when it's not. I wonder if Casey's read it? I understand his sentiments on the issue, but I think he missed the point of what Casey means by "Semantic Compression".
http://mikedrivendevelopment.blog...mpression-driven-development.html
I was also thinking about the minimalistic use of OOP, but I always thought that was just called class-based programming. Hmmm, semantics.


Wow, it's been a while since I've seen someone miss the point of something so completely. I feel like I need to expel those code examples from my brain for fear that they could infect my own code.
There is a saying: "Hardware is cheap, programmer time is expensive". It's true.

Sometimes it is true and sometimes it is not. It has never been true for me in the domains I have worked in, and my career included a good part of the "free lunch" period of Moore's law. My areas have been:

  1. Scientific computing - No such thing as enough computer power
  2. Complex system simulation - Chasing fidelity: how much do we need to simplify our models? (A typical run took one week)
  3. Embedded systems - Getting the most out of fixed hardware or reducing hardware cost

I have not done game development but it is somewhat similar to the above (consoles are "fixed" hardware with a desire to maximize "fidelity" for some suitable definition of fidelity).

"Hardware is cheap, programmer time is expensive". It's true.

So you basically throw money at the hardware until it gets fast enough.

What people don't know is that most electronics have limited capabilities, like a game console or a mobile phone. Those can't be upgraded by adding hardware (throwing money at it).
cmuratori
If I may, I think part of the problem here is that different people like to define OOP in different ways, and it's not always clear what they mean by OOP. I define OOP as exactly what its name implies: that the practice of programming that you employ starts with the objects and is oriented around them. That is strictly a bad programming practice, and I have literally never seen anyone use it effectively. Ever. Not even once.


I have two definitions for OOP.

Definition #1 is what Alan Kay meant when he first coined the term "object-oriented". He had a vision (insert god beams and sound of choir "ah") of software systems being structured like biological systems: a collection of "cells", which interact via messaging. "Pure" OO languages (e.g. Smalltalk, Newspeak, Eiffel, Sather) try to realise this model.

This model implicitly incorporates the notion of encapsulation (the only way for two objects to communicate is via a message), and one kind of polymorphism (objects which implement the same messaging protocol are interchangeable).
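
In C++ terms (admittedly not a "pure" OO language, so treat this as a loose sketch with made-up names), the interchangeability part looks roughly like this: the caller only knows the messages, never the concrete receiver.

```cpp
#include <cstdio>

// Loose sketch: anything that answers the same "messages" can stand in
// for anything else, and the caller never learns which object it holds.
struct Shape {
    virtual ~Shape() {}
    virtual double area() const = 0; // the "message"
};

struct Square : Shape {
    double side;
    explicit Square(double s) : side(s) {}
    double area() const override { return side * side; }
};

struct Disc : Shape {
    double radius;
    explicit Disc(double r) : radius(r) {}
    double area() const override { return 3.14159265 * radius * radius; }
};

void report(const Shape &s) {             // doesn't care which "cell" it talks to
    std::printf("area = %f\n", s.area());
}

int main() {
    report(Square{2.0});
    report(Disc{1.0});
}
```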

What it doesn't incorporate is inheritance. But more on this in a moment.

Definition #2 is that OOP is producing a program that was designed using OOAD. This is closer to Casey's definition, in that the point of OOAD is to take a nebulous real-world problem and turn it into something that computers can handle, and you do this by finding and organising "the objects".

I'm going to go out on a limb here and say that this isn't a dumb idea. There are many, many software systems in the world where the "hard part" is modelling business rules and constraints, or turning a pre-computer procedure or workflow into a program. Anything involving legislation or regulation/compliance regimes is a perfect example. Getting the conceptual model right is most of the problem.

Once again, however, if you read the historic stuff on OOAD, even then there is very little which has anything to do with inheritance. Even subtype polymorphism (which is a useful idea in domain modelling) seems to mismatch horribly with inheritance as languages descended from SIMULA understand it.

Incidentally, the problem even has a name: the circle-ellipse problem. The contortions that you see some people getting into, trying to explain why their favourite programming language is correct and the rest of the world is wrong, are as sad as they are funny.
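
To make the circle-ellipse problem concrete, here's a minimal made-up sketch (the names are mine, purely illustrative): once Circle inherits from Ellipse, any setter that changes one axis independently either breaks the circle invariant or surprises code that thinks it holds an Ellipse.

```cpp
#include <cassert>

struct Ellipse {
    virtual ~Ellipse() {}
    virtual void set_width(double w)  { width_ = w; }
    virtual void set_height(double h) { height_ = h; }
    double width()  const { return width_; }
    double height() const { return height_; }
protected:
    double width_ = 1.0, height_ = 1.0;
};

// "A circle is-an ellipse", says the domain model...
struct Circle : Ellipse {
    // ...but now we either let "width == height" be violated, or we silently
    // change both axes, which is surprising to code holding an Ellipse&.
    void set_width(double w)  override { width_ = height_ = w; }
    void set_height(double h) override { width_ = height_ = h; }
};

void stretch(Ellipse &e) {
    // Perfectly reasonable for an Ellipse, nonsense for a Circle.
    e.set_width(e.width() * 2.0);
}

int main() {
    Circle c;
    stretch(c);                       // substitutability says this should be fine...
    assert(c.width() == c.height());  // ...but the "fix" quietly doubled the height too.
}
```

Neither behaviour is the right one, which is exactly the contortion the language lawyers end up arguing about.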

Some people here may be surprised to learn that one of the biggest critics of OO culture in general, and inheritance in particular, is Alexander Stepanov, creator of the STL.

Stepanov
Even now C++ inheritance is not of much use for generic programming. Let's discuss why. Many people have attempted to use inheritance to implement data structures and container classes. As we know now, there were few if any successful attempts. C++ inheritance, and the programming style associated with it are dramatically limited. It is impossible to implement a design which includes as trivial a thing as equality using it. If you start with a base class X at the root of your hierarchy and define a virtual equality operator on this class which takes an argument of the type X, then derive class Y from class X. What is the interface of the equality? It has equality which compares Y with X. Using animals as an example (OO people love animals), define mammal and derive giraffe from mammal. Then define a member function mate, where animal mates with animal and returns an animal. Then you derive giraffe from animal and, of course, it has a function mate where giraffe mates with animal and returns an animal. It's definitely not what you want. While mating may not be very important for C++ programmers, equality is. I do not know a single algorithm where equality of some kind is not used.
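
A minimal sketch (my code, not Stepanov's) of the equality problem he describes: the virtual signature is fixed by the base class, so every derived class is stuck comparing itself against the base and downcasting to get at its own data.

```cpp
#include <cassert>

struct X {
    int x_part = 0;
    virtual ~X() {}
    // The signature is fixed here, for every class that will ever derive from X.
    virtual bool equal(const X &other) const { return x_part == other.x_part; }
};

struct Y : X {
    int y_part = 0;
    // Forced to accept const X&, so reaching the Y-specific data needs a downcast.
    bool equal(const X &other) const override {
        const Y *y = dynamic_cast<const Y *>(&other);
        return y && x_part == y->x_part && y_part == y->y_part;
    }
};

int main() {
    X a, b;
    Y c, d;
    assert(a.equal(b));
    assert(c.equal(d));
    assert(a.equal(c)); // an X happily "equals" a Y; the types can't express the real intent
}
```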


Stepanov
I spent several months programming in Java. Contrary to its authors prediction, it did not grow on me. I did not find any new insights – for the first time in my life programming in a new language did not bring me new insights. It keeps all the stuff that I never use in C++ – inheritance, virtuals – OO gook – and removes the stuff that I find useful. It might be successful – after all, MS DOS was – and it might be a profitable thing for all your readers to learn Java, but it has no intellectual value whatsoever. Look at their implementation of hash tables. Look at the sorting routines that come with their “cool” sorting applet. Try to use AWT. The best way to judge a language is to look at the code written by its proponents. “Radix enim omnium malorum est cupiditas” – and Java is clearly an example of a money oriented programming (MOP). As the chief proponent of Java at SGI told me: “Alex, you have to go where the money is.” But I do not particularly want to go where the money is – it usually does not smell nice there.


...but I guess it makes sense when you consider that the STL does not contain a single use of the keyword "virtual". The only "virtual" anything in that part of the modern C++ standard library is the hierarchy of exception structures.

Software systems where "domain engineering" isn't the hard part outnumber those where it is. Even so, IMO, OOAD will be a useful tool for the foreseeable future. In the niche where it works, it works well. In that sense, OOP is not "crap". It's just far from universally applicable.

cmuratori
If by "OOP" people mean something very minimal, like "there is a struct somewhere in my system that is not exposed to the rest of the program that also has some functions that mostly just operate on it and not other things", [...]

...then that's what we call a "module".

This illustrates one of the other problems with OO culture. People are taught to use an object system as if it were a module system. In a sense, objects are kind of like instantiable modules. In that sense, an object system is almost a module system.
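
For what it's worth, the "module" Casey describes needs no object system at all. A hypothetical C++ sketch (names made up):

```cpp
#include <cstdio>

// A "module" in the sense of Casey's quote: the struct lives in this
// translation unit only, and a few functions operate on it.
namespace {
    struct Counter { int value = 0; };
    Counter g_counter;                 // not visible to the rest of the program
}

void counter_increment() { ++g_counter.value; }
int  counter_read()      { return g_counter.value; }

int main() {
    counter_increment();
    counter_increment();
    std::printf("%d\n", counter_read());   // prints 2, no class required
}
```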

There are a dozen or so other useful programming concepts that OO (as it is practised) is "almost". Inheritance is "almost" subtype polymorphism, for example. But by "almost", I mean "not", and a significant amount of programming effort often has to go into coding around the impedance mismatch.

My personal opinion on all this is that every programming problem is different, and the question you need to ask is: "What is the central problem that this program is trying to solve?" Sometimes, the problem is trying to model a difficult-to-understand real-world scenario. Many business systems are "about" trying to wrangle the problem domain into some kind of manageable shape. That's where OO can help (but doesn't always!).

Games, on the other hand, are typically "about" efficiently transforming well-understood data and getting it to the device that needs it as quickly as possible. OO has very little to offer here.
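
To make that concrete, here's a made-up sketch of the style (not taken from any particular engine): the "game" version of the problem is mostly a tight loop over flat, well-understood data.

```cpp
#include <vector>

// Flat data, marched through in order; nothing decides how to "update itself".
struct Particle {
    float x, y;
    float vx, vy;
};

void integrate(std::vector<Particle> &particles, float dt) {
    for (Particle &p : particles) {
        p.x += p.vx * dt;
        p.y += p.vy * dt;
    }
}

int main() {
    std::vector<Particle> particles(1000, Particle{0.0f, 0.0f, 1.0f, 1.0f});
    for (int frame = 0; frame < 60; ++frame) {
        integrate(particles, 1.0f / 60.0f);
    }
}
```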
There is a saying: "Hardware is cheap, programmer time is expensive". It's true.

This is the mindset that gets us things like the Visual Studio interface, which is often laggier than sending a network packet to another continent and back. Or a browser text field that performs 25,000 memory allocations for every character you type. Or a phone UI that can't keep up with the relatively lethargic speed of a human finger. Et cetera et cetera.

The quote should be modified to "Programmer time is expensive, CPU time is cheap, and end-user time is free." Because essentially what happens is that a little bit of programmer time may be saved so that every user of the software can have some of their time wasted, every time they use the software, for the lifetime of that software.

And with CPU scaling being what it is these days, if you write software with this philosophy, the hardware is not going to save your bacon like it did in the 90's. I think MS has discovered this the hard way.

I believe Casey and others have made this point before, so apologies if I'm badly paraphrasing it here. But I think it bears repeating...

There is a saying: "Hardware is cheap, programmer time is expensive". It's true.
It is true, but you're usually not in a situation where you can trade one off against the other.

A picture is worth a thousand words, but there are very few sets of 1000 words which can be adequately replaced with a picture.
Hardware is cheap, programmer time is expensive

And sometimes hardware is fixed, so may your abstractions help you when trying to run in 512 MB of RAM on previous-gen consoles :)

Well, it's hard enough getting used to programming paradigms, but I think I get what everyone is talking about. Anyway, at my university a lot of the stuff we learn is heavily reliant on OOP. Is it possible that someone could post some bad code examples (C# or Java) to demonstrate why it is so bad, and how to write it in a better way?
Mike Acton does a bunch of that in his talks (e.g., https://www.youtube.com/watch?v=rX0ItVEVjHc)

- Casey
There's also this book: http://www.dataorienteddesign.com/dodmain/
I've had a quick read over the last few days and it's as beta as it says on the can, but there's some good stuff in there.

There's a whole "What's wrong with OOP" chapter:
http://www.dataorienteddesign.com/dodmain/node17.html
I come from domains without a fixation on (cache-friendly) performance, but I've disavowed OOP purely on structural grounds. Short version of the link:

1. Encapsulation doesn't protect state coherence without huge structural burdens. In practice, real OOP codebases rarely achieve real encapsulation of partial program state, let alone the entire program state.
2. Most behaviors have no natural primary association with any particular data type. Consequently, object decomposition of application logic almost always produces unnatural associations of behavior and data as well as producing extra, unnatural data types and behaviors we otherwise wouldn't need.

As Casey talked about, stand-alone 'objects' are a perfectly fine concept, e.g. ADTs are natural objects (data manipulated only through a defined interface). But trying to shove everything into an object mold produces Frankenstein entities with superfluous layers of abstraction.
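
A small made-up illustration of point 2 above: a behaviour that reads two types equally has no natural home, and forcing it onto one of them (or inventing a manager class to own it) adds nothing but noise.

```cpp
#include <cassert>

// Hypothetical example: collision checking touches a Player and a Wall equally.
// Should it be player.collides_with(wall)? wall.collides_with(player)? A manager class?
// As a free function the question never comes up.
struct Player { float x, radius; };
struct Wall   { float x0, x1; };

bool collides(const Player &p, const Wall &w) {
    return p.x + p.radius > w.x0 && p.x - p.radius < w.x1;
}

int main() {
    assert(collides(Player{5.0f, 1.0f}, Wall{4.0f, 6.0f}));
    assert(!collides(Player{0.0f, 1.0f}, Wall{4.0f, 6.0f}));
}
```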

I say all this as someone generally comfortable with high levels of performance overhead. In my experience, OOP adds complications which overwhelm the expressiveness gains of higher-level code.

insanoflex512
Speaking of programming paradigms, I read an article bashing "compression oriented programming". Basically it says it's an anti-pattern, uh, except when it's not. I wonder if Casey's read it? I understand his sentiments on the issue, but I think he missed the point of what Casey means by "Semantic Compression".
http://mikedrivendevelopment.blog...mpression-driven-development.html
I was also thinking about the minimalistic use of OOP, but I always thought that was just called class-based programming. Hmmm, semantics.


Wow, I read this article trying to keep an open mind, but after he called Casey a novice programmer... I lost interest. You can't really call somebody a novice, especially when he has been programming for over 30 years. He's not perfect, but he has a lot of experience.
but after he called Casey a novice programmer

I don't think he was talking about Casey. I think he was talking about a hypothetical novice programmer taking the compression-oriented approach (albeit to a bizarre extreme).

On a side note, every time I see this topic name pop up, I wonder why OP didn't seize the opportunity.
"So OOP is poop?"