If I may, I think part of the problem here is that different people like to define OOP in different ways, and it's not always clear what they mean by OOP. I define OOP as exactly what its name implies: that the practice of programming that you employ starts with the objects and is oriented around them. That is strictly a bad programming practice, and I have literally never seen anyone use it effectively. Ever. Not even once.
I have two definitions for OOP.
Definition #1 is what Alan Kay meant when he first coined the term "object-oriented". He had a vision (insert god beams and sound of choir "ah") of software systems being structured like biological systems: a collection of "cells", which interact via messaging. "Pure" OO languages (e.g. Smalltalk, Newspeak, Eiffel, Sather) try to realise this model.
This model implicitly incorporates the notion of encapsulation (the only way for two objects to communicate is via a message), and one kind of polymorphism (objects which implement the same messaging protocol are interchangeable).
What it doesn't incorporate is inheritance. But more on this in a moment.
Definition #2 is that OOP is producing a program that was designed using OOAD. This is closer to Casey's definition, in that the point of OOAD is to take a nebulous real-world problem and turn it into something that computers can handle, and you do this by finding and organising "the objects".
I'm going to go out on a limb here and say that this isn't a dumb idea. There are many, many software systems in the world where the "hard part" is modelling business rules and constraints, or turning a pre-computer procedure or workflow into a program. Anything involving legislation or regulation/compliance regimes is a perfect example. Getting the conceptual model right is most of the problem.
Once again, however, if you read the historic stuff on OOAD, even then there is very little which has anything to do with inheritance. Even subtype polymorphism (which is a useful idea in domain modelling) seems to mismatch horribly with inheritance as languages descended from SIMULA understand it.
Incidentally, the problem even has a name: the circle-ellipse problem. The contortions that you see some people getting into, trying to explain why their favourite programming language is correct and the rest of the world is wrong, are as sad as they are funny.
Some people here may be surprised to learn that one of the biggest critics of OO culture in general, and inheritance in particular, is Alexander Stepanov, creator of the STL.
Even now C++ inheritance is not of much use for generic programming. Let's discuss why. Many people have attempted to use inheritance to implement data structures and container classes. As we know now, there were few if any successful attempts. C++ inheritance, and the programming style associated with it are dramatically limited. It is impossible to implement a design which includes as trivial a thing as equality using it. If you start with a base class X at the root of your hierarchy and define a virtual equality operator on this class which takes an argument of the type X, then derive class Y from class X. What is the interface of the equality? It has equality which compares Y with X. Using animals as an example (OO people love animals), define mammal and derive giraffe from mammal. Then define a member function mate, where animal mates with animal and returns an animal. Then you derive giraffe from animal and, of course, it has a function mate where giraffe mates with animal and returns an animal. It's definitely not what you want. While mating may not be very important for C++ programmers, equality is. I do not know a single algorithm where equality of some kind is not used.
I spent several months programming in Java. Contrary to its authors' prediction, it did not grow on me. I did not find any new insights – for the first time in my life programming in a new language did not bring me new insights. It keeps all the stuff that I never use in C++ – inheritance, virtuals – OO gook – and removes the stuff that I find useful. It might be successful – after all, MS DOS was – and it might be a profitable thing for all your readers to learn Java, but it has no intellectual value whatsoever. Look at their implementation of hash tables. Look at the sorting routines that come with their “cool” sorting applet. Try to use AWT. The best way to judge a language is to look at the code written by its proponents. “Radix enim omnium malorum est cupiditas” – and Java is clearly an example of a money oriented programming (MOP). As the chief proponent of Java at SGI told me: “Alex, you have to go where the money is.” But I do not particularly want to go where the money is – it usually does not smell nice there.
...but I guess it makes sense when you consider that the STL does not contain a single use of the keyword "virtual". The only "virtual" anything in that part of the modern C++ standard library is the hierarchy of exception structures.
Software systems where "domain engineering" isn't the hard part outnumber those where it is. That's why, IMO, OOAD will be a useful tool for the foreseeable future. In the niche where it works, it works well. In that sense, OOP is not "crap". It's just far from universally applicable.
If by "OOP" people mean something very minimal, like "there is a struct somewhere in my system that is not exposed to the rest of the program that also has some functions that mostly just operate on it and not other things", [...]
...then that's what we call a "module".
This illustrates one of the other problems with OO culture. People are taught to use an object system as if it were a module system. In a sense, objects are kind of like instantiable modules. In that sense, an object system is almost a module system.
There are a dozen or so other useful programming concepts that OO (as it is practised) is "almost". Inheritance is "almost" subtype polymorphism, for example. But by "almost", I mean "not", and a significant amount of programming effort often has to go into coding around the impedance mismatch.
My personal opinion on all this is that every programming problem is different, and the question you need to ask is: "What is the central problem that this program is trying to solve?" Sometimes, the problem is trying to model a difficult-to-understand real-world scenario. Many business systems are "about" trying to wrangle the problem domain into some kind of manageable shape. That's where OO can help (but doesn't always!).
Games, on the other hand, are typically "about" efficiently transforming well-understood data and getting it to the device that needs it as quickly as possible. OO has very little to offer here.