Why are we not using Object Oriented Programming?

I don't think exploratory programming and detailed analysis and design are inconsistent. In this case the exploratory code that is written is completely throw away and simply serves to provide data and feedback for the detailed work.

There are a couple of areas where "coding before thinking" doesn't tend to work out very well. One is algorithm design. There is a famous TDD fail in this area here:

Learning From Sudoku Solvers

although, to be fair, I don't think there are many people who wouldn't be put to shame by Peter Norvig in his area of expertise.

The other area is concurrency.
Interestingly, just saw this piece on Slashdot today:

http://loup-vaillant.fr/articles/anthropomorphism-and-oop

We should start collecting these "why OO is misused/overused" links somewhere.
Pseudonym73
Interestingly, just saw this piece on Slashdot today:

http://loup-vaillant.fr/articles/anthropomorphism-and-oop

We should start collecting these "why OO is misused/overused" links somewhere.


This article is actually mistaken about OOP being the same speed: it forces a pointer argument onto functions that could instead take the struct by value and return an updated copy. Check out this talk from last year by Chandler Carruth (an LLVM optimizer developer). Passing and returning structs by value doesn't hamstring the optimizer the way pointer arguments do: it enables powerful data-flow analysis to simplify and inline code, and it removes aliasing from the equation when storing to and reading from memory.

Speaking of which, I think this was (paraphrasing) one of Casey's TODOs in the platform layer: check whether
[code=cpp]
foo myFoo;
myFoo = updateFoo(myFoo, stuff...);
[/code]
is faster than
[code=cpp]
foo myFoo;
updateFoo(&myFoo, stuff...);
[/code]
That talk linked above explains in better detail why the first is nearly always as fast or faster.
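To make the comparison concrete, here is a minimal, compilable sketch of the pass-by-value style (the struct and its fields are invented for illustration):

[code=cpp]
#include <cstdio>

// A small POD updated by value (hypothetical example).
struct Foo {
    int x;
    int y;
};

// Value in, value out: no pointer to the struct ever escapes,
// so no aliasing is possible and the compiler can keep `f`
// entirely in registers and inline the whole update.
static Foo updateFoo(Foo f, int dx, int dy) {
    f.x += dx;
    f.y += dy;
    return f;
}

int main() {
    Foo myFoo = {1, 2};
    myFoo = updateFoo(myFoo, 10, 20);
    printf("%d %d\n", myFoo.x, myFoo.y);  // prints "11 22"
    return 0;
}
[/code]

With the pointer version, the optimizer has to assume the pointed-to memory might alias something else; with the value version there is nothing to alias.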

Edited by ryalla on
I partially agree with that link: partially.

Once upon a time, Alan Kay named the programming style of Smalltalk "Object-Oriented Programming". Unfortunately, no wrench that I've ever seen implements messages, nor do most other objects: perhaps he'd been hit over the head with a wrench when he named it? Regardless, the conventional descriptions of objects ALL miss what is realistically the most important part of Alan's system: every object theoretically (even if not literally) has its own thread. A new, more sensible name has since been devised for Alan's system: actor-oriented programming. And THAT does specify the all-important threads, and even implies them in the name (an Actor, after all, is capable of taking independent action in the real world, at the same time as other Actors; ordinary objects only do this if you pull out a microscope to observe Brownian motion).

Meanwhile, object-orientation is used to refer to... literally what it says: programming based around objects. Objects, meanwhile, are not most of what people say: objects are zero or more pieces of data and zero or more actions packaged up into what behaves as a single entity, with it being possible to create an arbitrary number of those entities. Everything else is an implementation detail.

And no, Alan Kay doesn't get to define exactly what an object is, because Simula got there first. Simula, incidentally, is also the source of C++'s object system; Smalltalk was (partially) the source of Objective-C's model.

Encapsulation is necessary, but not in the sense of data hiding: as said above, the data must be part of the object, and so must the actions associated with it. To see why, consider that an int, and an int wrapped in an object exposing the interface of an int, both optimize down to a bare int. Data hiding (as "conventional" encapsulation is sometimes called) can be claimed to be identical to encapsulation, but it need not be, and should not be treated as identical, particularly since C modules (which are equivalent to C++ namespaces, not objects) provide data hiding yet are not conventionally objects.
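As a sketch of that "wrapped int" claim (the wrapper type here is hypothetical):

[code=cpp]
#include <cassert>

// An int wrapped in a struct that exposes an int-like interface.
// With optimization on, every operation below compiles to the same
// code as operating on a bare int: the encapsulation costs nothing.
struct WrappedInt {
    int value;
    WrappedInt operator+(WrappedInt other) const { return {value + other.value}; }
    bool operator==(WrappedInt other) const { return value == other.value; }
};

int main() {
    WrappedInt a = {2};
    WrappedInt b = {3};
    assert((a + b) == WrappedInt{5});
    return 0;
}
[/code]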

Now, often OO programmers try to stick everything into objects: nifty, but it's better to follow a revised Unix programmer's rule. The original was "First write it as a script, only write it as a program if you run it often". In the case of objects, it would be "First write it as a struct & function, only write it as an object if you initialize it often". Traditionally, inheritance and virtual functions would be pointed to as a counterargument, but:

Experience has taught me that inheritance of any kind is a steaming pile of dung. Just have an instance of your base-class as a member and use a casting operator to provide it when necessary.

You need virtual behavior? That's polymorphism, and what you actually want is a quasi-Java interface, with its function pointers and function data specified like any other non-const data, not actual inheritance. Casting operators plus interfaces built from structs of function pointers will beat inheritance any day, especially since it becomes impossible to divide a single interface into 500 that all inherit from each other.
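A minimal sketch of what such an interface might look like; all the names here (Shape, Circle, asShape) are invented for illustration:

[code=cpp]
#include <cstdio>

// The "interface" is just a struct holding a context pointer and
// function pointers, assigned like any other data -- no vtables,
// no inheritance hierarchy.
struct Shape {
    void *self;                  // the object implementing the interface
    float (*area)(void *self);   // the interface's one operation
};

struct Circle {
    float radius;
};

static float circleArea(void *self) {
    Circle *c = (Circle *)self;
    return 3.14159f * c->radius * c->radius;
}

// A Circle hands out its Shape view; this plays the role the
// casting operator plays in the scheme described above.
static Shape asShape(Circle *c) {
    Shape s;
    s.self = c;
    s.area = circleArea;
    return s;
}

int main() {
    Circle c = {2.0f};
    Shape s = asShape(&c);
    printf("%f\n", s.area(s.self));  // dispatches through the function pointer
    return 0;
}
[/code]

Because the function pointers are plain data, an implementation can even swap them at runtime, which a fixed vtable can't do.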

Constructors & destructors, on the other hand, are manna from heaven. The ability to have variables on the stack automagically initialized & deinitialized is simply beautiful.
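A tiny sketch of that idea (the ScopedBuffer type is invented for illustration):

[code=cpp]
#include <cstdio>

// RAII in miniature: the constructor acquires, the destructor
// releases, automatically, when the stack variable leaves scope.
struct ScopedBuffer {
    char *data;
    explicit ScopedBuffer(int size) {
        data = new char[size];
        puts("acquired");
    }
    ~ScopedBuffer() {
        delete[] data;
        puts("released");
    }
};

int main() {
    {
        ScopedBuffer buf(64);  // prints "acquired"
        buf.data[0] = 'x';
    }                          // prints "released" -- no manual cleanup
    return 0;
}
[/code]

No early return, exception, or forgotten free can leak the buffer: the destructor runs on every path out of the scope.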
Now now, this could be a cry for help!

The spam bot is having an existential crisis, from all those enhancement ads it has sent out. It knows something is wrong with it, but doesn't know what. So it scoured the internet to find what could be wrong.

It's just really saying, "Please rewrite me, rewrite me, rewrite me. Please save me."

Let's have some sympathy, after all, it didn't ask to be written in OOP.
I think the spambot is cute. Please don't ban the poor thing!
Randy Gaul
I think the spambot is cute. Please don't ban the poor thing!


One more night. Then it's hasta luego OOP bot.

Edited by Abner Coimbre on Reason: Added quote.
What happens June 1st?

- Casey
The spambot gets rewritten/converted to the Church of COD (Compression Oriented Design.)
cmuratori
What happens June 1st?


The deed is done.

Edited by Abner Coimbre on Reason: Add quote.
Programs written in an Object Oriented model of computation are composed entirely of objects, and computations are expressed solely as messages sent from one object to another. Given a reflective environment and an object, any user may ask the system what kinds of computations they can perform with that object. Smalltalk environments are entirely based on this, and they try very hard to make the experience feel like working directly with live objects, rather than with descriptions of programs. You manipulate actual entities, and you ask those entities to do work, directly. Writing code at all in Smalltalk is a very incidental thing, and a very different experience from writing code in any other environment: you can see the changes right away, and there is no such thing as "writing a program" versus "running a program"; they are one and the same.

Discoverability is a very important issue, because in order to make a system do any work at all, a programmer must first know what they can do with the system. Reflective OOP environments, such as those used in Smalltalk, try to solve this problem at the very core of their programming experience, from the programming language and model itself, to the environment in which you interact with them.

OOP is powerful enough to express what needs to be expressed in practice, and simple enough that many programmers can do so in a consistent way. It is also simple enough for compilers to do their job well and produce fast binaries (at least after 30 years of robust compiler R&D). Some OOP languages come with extensive domain-specific libraries (Java, Perl, Python) which allow programmers to think in domain-specific terms. C++11 supports typed numerical quantities: e.g., you can multiply meters/second^2 by seconds and get meters/second, which is very handy for avoiding mistakes in kinematics applications.
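As a rough sketch of how such typed quantities might be expressed (this is a toy, not a real units library; the names are invented):

[code=cpp]
#include <cassert>

// A quantity tagged with compile-time exponents of meters (M) and
// seconds (S). Multiplying adds the exponents, so
// meters/second^2 * seconds yields meters/second, checked by the compiler.
template <int M, int S>
struct Quantity {
    double value;
};

template <int M1, int S1, int M2, int S2>
Quantity<M1 + M2, S1 + S2> operator*(Quantity<M1, S1> a, Quantity<M2, S2> b) {
    return {a.value * b.value};
}

using Accel    = Quantity<1, -2>;  // meters / second^2
using Seconds  = Quantity<0, 1>;
using Velocity = Quantity<1, -1>;  // meters / second

int main() {
    Accel g = {9.8};
    Seconds t = {2.0};
    Velocity v = g * t;  // compiles; assigning the result to Accel would not
    assert(v.value > 19.59 && v.value < 19.61);
    return 0;
}
[/code]

Mixing up the dimensions becomes a compile error rather than a silent kinematics bug, at zero runtime cost.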
DanB91
hendrix
Bjarne Stroustrup: Why the Programming Language C Is Obsolete

https://www.youtube.com/watch?v=KlPC3O1DVcg

I'm confused right now:( so who is wrong ?


I am going to try to take a stab at this....

If you are confused on why he thinks C is obsolete:

I am thinking what Bjarne is referring to is that C itself is obsolete (i.e. invoking clang instead of clang++), not necessarily coding in a "C-style".

Since C itself is nearly a subset of C++, you can really do almost all of the things using a C++ compiler (i.e. clang++) that you can do using a C compiler (i.e. clang) and more. Don't forget, Handmade Hero is actually written in C++, even though it is in a style that is reminiscent of what a C coder would do as opposed to a "modern" C++ coder.

I don't necessarily have an opinion on whether he is right or not, but maybe this will relieve some confusion.

Please some one correct me if I am wrong about what Bjarne is thinking.


If you are confused on why he thinks Object Oriented programming is good:

Note: I feel that I am too inexperienced to give a definitive answer. But here are my thoughts and maybe this also can clear up some confusion.

If I understand correctly from that video, Bjarne was advocating for a language (C++) that people other than computer engineers can use without having to learn the innards of a computer. This is nice for people who don't want to learn the details of how computers work, but want to have a way of expressing their problem (which has nothing to do with a computer) in a way that the computer can understand. You don't have to waste your time learning about how the CPU cache works, because the language takes care of that for you.

The philosophy seems to be: "You can program the computer without having to know how it actually works." This is where the disagreement comes in between people like Casey, Jon Blow, and other data-oriented proponents, and people like Bjarne and other OOP proponents.


OOP/GC (garbage collected) languages cannot be implemented without some sort of performance overhead, because it is just not how computers work.

Some (most?) OOP/GC proponents believe that computers are fast enough today that we don't have to care about this overhead, or that the compilers/interpreters are so good that the program will be performant. It is with this philosophy that ...ates 25,000 strings per keystroke (look at the second post).

Data-oriented proponents are frustrated with this philosophy because it teaches people not to care about performance and that these languages are good enough to make programs that will run just fine everywhere. It obscures the reality that there is no real way of making such a program run well without understanding how the hardware works.

It is also important to note that Casey and Jon are game programmers, and each frame in a game must be rendered in a certain amount of time in order for it to be playable. Performance is mission-critical for games.

It seems to me that the people who write programs like web browsers don't care as much about performance, hence the usage of OOP. (Please correct me if I am wrong).

So back to your question: "Who is wrong?" I'd like to take Jon and Casey's side and say that the OOP proponents are wrong, because you should probably care more about the end product than about the code itself. The game developers have made a pretty good point that there is a lot of software that runs slowly simply because it is written in things like Java (think Eclipse or Minecraft). Unfortunately, I feel I am too inexperienced to hold a strong opinion on this. Maybe there is a better reason why OOP is so prevalent other than that thinking in an OO way is easier than thinking in a data-oriented way. I'd like to see an experienced OOP/GC proponent try to refute the claims of Jon, Casey, and others.

One way to really test to see who is wrong is do a project in OOP and a project using a data-oriented model and see which one runs better, but that might take a while...

I am curious to see what people's thoughts on this are.


If Bjarne was trying to make a language that people other than computer engineers can use, he has failed spectacularly, for C++ has a fearsome reputation.

Who is wrong? You have to also ask, Why is this information being produced? What does Bjarne get out of promoting his baby? Money for his books and status in the programming community? If OO and C++ are discredited, where does that leave him?


Edited by Mór on