Dawid Ciężarkiewicz aka `dpc`


This post, and the work behind it, tries to achieve multiple goals:


I often receive feedback on my general OOP critique from people somewhat sympathetic to my message, suggesting that since OOP is vague and not precisely defined, it would be more productive to discuss its core tenets/features separately and drop the “OOP” name altogether.

I've also received an email (hi Martin!) asking, among other things, about my opinion on the use of interfaces in OOP. So instead of writing a private response, I'm just going to dwell on it a bit more in a blog post.

BTW. I'm continuing to read #oop #books to gather more insight and arguments. Currently, I am going through Object-Oriented Software Construction by Bertrand Meyer. The book is huge and presents the case for OOP in depth, which perfectly fulfills my needs. And on top of that, it's old, so it gives me a lot of insight into “what were they thinking?!” ;). Hopefully, I'll get to a post about it in the not-too-distant future, but I will be referring to it already in this post.

Anyway... about the polymorphism and stuff...


Edit: I received a lot of feedback that this post is a rant. That's probably true. The primary reason is frustration. I have been engaging online, looking for books and examples of code presenting “Good OOP”, to study and refine my views. The previous book I was recommended and read, Growing Object Oriented Software, was a blast. This book was recommended frequently, even more than the previous one, so I reluctantly bought it and then got rather upset about how big of a disappointment it was. It's a book about small-scope code refactoring, not really OOP, so it's not even on point. And even at teaching refactoring, I think it uses confusing and counterproductive examples all the way through. I read it, I didn't like it, and I wrote a short rant explaining exactly why, to get it off my list. If you don't enjoy rants, hit Ctrl+W now.

When asking around for “Good OOP” examples, a lot of people recommend the 99 Bottles of OOP book. I was reluctant to buy it. From the name and the description, I could tell that it was going to use the “99 Bottles of Beer” song as an example of some sort... which frankly seemed completely unproductive. But since people kept mentioning it – I got it. And unsurprisingly, I was right: the book is indeed confusing and just... not good.

TL;DR: Even as a vocal critic of OOP, I can see that this book is doing a huge disservice to anything that could be called a reasonable OOP. Save yourself $40 and don't buy it. If you own it, throw it in the trash. If you've read it – know that you learned a bunch of nonsense about OOP, and maybe some minor stuff about structuring your code and writing tests.


TL;DR: I review Growing Object-Oriented Software, Guided By Tests and contrast it with my personal approach to developing software, explaining my reasoning, and making some comments on the book, OOP and software engineering in general.


One problem that I have when discussing OOP is the “no true Scotsman” fallacy. Whatever criticism one has of OOP can always be rejected with “that's not real/good OOP”.

So here is the dare – send me links to public Open Source projects that you think are an example of superb and excellent OOP code. It doesn't have to be fancy, but it has to be a real project doing something useful end to end: not a library or a framework. You can find the mail link at the bottom. I did some Google searches looking for similar questions (and answers) and I haven't really found anything useful.

Please share this post if you're also interested.

My plan is to collect some projects, pick a handful that I find best, and then apply common OOP criticisms to concrete code that other people consider best in class.

I will make sure to edit this post and add them below.

Thank you!


I was just recently mulling over how interfaces (Java-style) for data types are pretty much always the wrong thing to do.

Introducing an interface means losing resolution on something. Now you have to talk to it through a generic approximation. The benefit: you can now talk to anything that implements that interface, which makes it an open set. Anyone can come and add a new thing that implements that interface and it will work, without changing a line of code. The drawbacks: all the additional information about that thing is lost, and you have to come up with the interface itself, which is not a trivial task. As the business logic changes, such interfaces often need to be revised. And like any abstraction and indirection, it introduces some confusion and mental tax.

In a typical business setting, the shape of your data is almost always a closed set. Even if you have N versions of some format you want to support, you don't expect random external developers to add more without altering the broader code around it. The closed set of allowed formats can simply be expanded, the new cases handled, and the job is done. And business logic is never as simple as we would like; it very often requires conditional handling, which is painful to express via interfaces. A “switch statement” is more flexible and can do whatever is needed without the ceremony of defining a contract between the data and the code using it.

While there are exceptions and it is all context-dependent, as a rule of thumb, avoid stuffing your data with interfaces, and use sum types.
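To illustrate, here is a minimal sketch in Rust (my own hypothetical example; the `Invoice` type and its variants are made up, not taken from any real codebase):

```rust
// A closed set of supported formats, modeled as a sum type
// instead of an interface. All variants are known here.
enum Invoice {
    Csv { rows: Vec<String> },
    Json { raw: String },
}

fn line_count(invoice: &Invoice) -> usize {
    // A plain `match` replaces the interface method: every variant is
    // handled in one place, with full access to each variant's data.
    match invoice {
        Invoice::Csv { rows } => rows.len(),
        Invoice::Json { raw } => raw.lines().count(),
    }
}
```

Adding a new variant forces every `match` over `Invoice` to be updated, so the compiler walks you through exactly the code that needs to change – the “expand the closed set, handle the new cases” workflow described above.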

#oop #programming #software

In one of the posts on Ted Kaminski's excellent blog, he talks about the Data vs Object distinction. When I was reading it, I got excited, because I had been mulling over this exact distinction while crystallizing my problems with mainstream class-oriented OOP. I think we're both aiming at the same thing, but I have drawn the line between the two differently. Plus, I have some other things to say about this confusion.


As some people might know, I am a vocal OOP critic. I think it is fair to say that I am on a crusade, actually. :D

Oftentimes, my long online posts explaining what is wrong with OOP are met with a “no true Scotsman” argument: that I am somehow pointing out flaws in a caricature of OOP, and the correct OOP is free from these issues. To prove to myself and other people that this is not the case, I decided to go through some classic OOP books and criticize the OOP examples in them.

My first choice is Clean Architecture by Robert C. Martin (aka Uncle Bob). I must admit Uncle Bob is not one of my favorite software engineering gurus. But he is a reputable and experienced developer, and if even he wrote a caricature of OOP, then who are the people who dare to say they do it right?


Here is my take on the relationship between functional, imperative, actor, and service-oriented programming, and a short description of a hybrid approach combining them all and giving each an explicit function and level within the system (computation).

There's a natural progression between them, and a great deal of efficiency when they are combined, each doing the part it does best. I call it “opportunistic programming” (working title). If you're aware of existing ideas/publications etc. along the same lines, please share them with me. As usual, I probably didn't discover anything new.

Whenever you can – it's best to express computation declaratively, using functional programming. It should be the default mode of operation, abandoned only when impractical. It has many advantages over alternative approaches, and very few downsides (when used opportunistically). The goal here is to express as much logic as possible as pure mathematical computation that is easy to reason about, prove, and test.
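As a tiny illustration of this default mode (my own example, not from any particular codebase; `total_due` and its pricing rule are hypothetical):

```rust
// "FP by default": pure, declarative logic with no mutation and no
// side-effects. The output depends only on the inputs, so it is
// trivial to reason about and test in isolation.
fn total_due(prices: &[u64], discount_percent: u64) -> u64 {
    let subtotal: u64 = prices.iter().sum();
    subtotal - subtotal * discount_percent / 100
}
```

Nothing here touches the outside world; the function is a mathematical mapping from inputs to output, which is exactly what makes it easy to prove and test.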

When your code is dealing with external side-effects, or when things like computational performance are important, you have to abandon the luxury of the FP mode and switch to imperative code. It's a lesser and harder-to-use mode, but it is closer to how reality (computers) works, so it gives more control. You should still aim to write as much as possible in FP mode, and only wrap the FP core logic in an imperative shell coordinating data-sharing, mutation, and side-effects where needed. Depending on the problem and requirements the ratio might differ, but generally, imperative code should be isolated and kept to the necessary minimum. The goal here is to explain to the machine exactly how to compute something efficiently and/or to take control of the ordering of events.
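One common shape of this split, sketched here under my own assumptions (the `shout` transformation is a stand-in for real business logic), is a pure functional core wrapped in a thin imperative shell that only coordinates I/O:

```rust
use std::io::{self, BufRead, Write};

// Pure core: all the actual logic lives here, side-effect free.
fn shout(line: &str) -> String {
    line.trim().to_uppercase()
}

// Imperative shell: a thin wrapper that only reads, calls the core,
// and writes. It is generic over its I/O, so tests can drive it
// with in-memory buffers instead of stdin/stdout.
fn run(input: impl BufRead, mut output: impl Write) -> io::Result<()> {
    for line in input.lines() {
        writeln!(output, "{}", shout(&line?))?;
    }
    Ok(())
}
```

The shell stays small and boring; everything worth testing is in the pure core.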

As your computation (program) grows, it will become apparent that it is possible to split it into parts that don't require “data-coherency”. That means parts that have no reason to share data (even for performance), for which it is natural to communicate entirely via message passing (immutable copies of data), typically using in-memory message queues of some kind. That's (kind of) the actor model. The goal here is to encapsulate and decompose the system along its most natural borders. The size of actors depends entirely on the problem. Some programs can be composed of many tiny actors, a single function each. Some will be hard to decompose at all, or will have complex and big (code-wise) actors. It is worthwhile to consciously consider design possibilities that allow finer granularity in this layer.
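A minimal sketch of such an actor using Rust's standard channels (the counter itself is a hypothetical example of mine, chosen only to keep the code short):

```rust
use std::sync::mpsc;
use std::thread;

// A tiny actor: it owns its state (`total`) and the only way to
// affect it is to send it a message over the channel.
fn spawn_counter() -> (mpsc::Sender<u64>, thread::JoinHandle<u64>) {
    let (tx, rx) = mpsc::channel();
    let handle = thread::spawn(move || {
        let mut total: u64 = 0;
        // The receiver iterates until every sender has been dropped.
        for n in rx {
            total += n;
        }
        total // final state, returned when the actor shuts down
    });
    (tx, handle)
}
```

No data is shared: callers hand the actor immutable copies of values, and dropping all senders shuts it down naturally. The same boundary is what later makes such an actor a candidate for becoming a separate service.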

When the operational needs (availability, scalability, etc.) demand it, the actors from the previous paragraph are natural candidates to be moved to different machines, potentially in many copies, and become “services”. The cost and additional work are in handling network latency, unreliable communication, and potential data loss. The goal here is to adapt the computation to the constraints of hardware: limited capacity and imperfect availability.

That's it. Some side-comments:

  1. It's a shame that FP is still not the default school of mainstream programming. FP is really easy and natural when applied opportunistically, and generally leads to both better runtime and developer performance.
  2. My main problem with OOP (and possibly the actor model) is granularity. Encapsulation is costly; that's why encapsulating every single “object” is a bad idea. The right granularity for OOP is the module/library/component level, and for actors, along the problem-dependent natural lines where sharing data is no longer required anyway. Within functional and imperative code I recommend a data-oriented approach instead of the typical OOP approach.
  3. This model easily handles the problem of converting a “monolith” into a microservices-based system: the “encapsulation and decomposition” level is just “microservices, but without the extra work (yet)”.

#software #oop

Object-oriented programming is an exceptionally bad idea which could only have originated in California.

— Edsger W. Dijkstra

Maybe it's just my experience, but Object-Oriented Programming seems like the default, most common paradigm of software engineering. It's the one typically taught to students, featured in online materials, and, for some reason, spontaneously applied even by people who didn't intend to use it.

I know how seductive it is, and how great an idea it seems on the surface. It took me years to break its spell and understand clearly how horrible it is and why. Because of this perspective, I have a strong belief that it's important for people to understand what is wrong with OOP, and what they should do instead.

Many people discussed problems with OOP before, and I will provide a list of my favorite articles and videos at the end of this post. Before that, I'd like to give it my own take.