Dawid Ciężarkiewicz aka `dpc`

contrarian notes on software engineering, Open Source hacking, cryptocurrencies etc.

From my point of view, the main source of progress in programming (software engineering) was, is, and will be sacrificing small-scope expressiveness and power to improve larger-scope composability.



It's very good, and I pretty much agree with everything in there.

For an Nth time, I'm seeing the story of the smartest woman in the world proving everyone wrong on the Monty Hall Problem. You probably want to make sure you read it before reading further.

I've never seen anyone mention an argument that seems pretty obvious to me, one that invalidates the whole “logical” and “mathematical” conclusion and proves that people's intuition is correct.


I keep thinking about this idea of a different approach to building Operating Systems. I know it's not a new idea.

I guess the core of the idea is that the programming language, its runtime, and the OS should be the same thing. You know – kind of like Emacs, or Lisp generally, or Smalltalk. Where the language is the runtime, and it allows modifying itself while it's running.

Maybe I'm wrong, and what do I know about anything anyway, but I think the previous attempts to make this a reality were outcompeted due to practical considerations: mostly performance and resource utilization.

You see, I am a bottom-up developer. When I was a kid, I first learned how electric switches and transistors work, and how to chain them into longer logical circuits. I was playing with writing assembler code for the MOS CPU of my C64, POKEing and PEEKing bytes into the addresses of memory-mapped hardware registers. As a teenager, I was hacking on the Linux kernel. For me, the computer will always be just a very fast automaton, composed of circuits, capacitors, and so on, executing long sequences of tiny CPU instructions, and the job of software is to make it do something reliably and efficiently.


Putting “innocent” tracking links and “pixels” in your email is like taking photos of people without asking, or even putting a security camera in your guest bathroom. Just because most people won't notice or complain, and you pinky promise not to use it for anything nefarious, doesn't make it any less rude.

Pretty much everyone is doing it now. Almost all companies, recruiters, salespeople, even official US government emails. Since I use NextDNS and block most tracking at the DNS level, I am acutely aware that I can't just click any link from an email without hitting a blocked tracking page.

And if you're doing it, then I think you, your company or your institution are rude.

The new era is beginning, and the old era is ending. C and C++ as the lingua franca of systems programming are being displaced by Rust. Many will deny it, many will fight it. But I'm confident it is already happening and will inevitably continue at an accelerating pace.


As some people might know, I am a vocal OOP critic. I think it is fair to say that I am on a crusade, actually. :D

Oftentimes, my long online posts explaining what is wrong with OOP are met with a No True Scotsman argument: that I am somehow pointing out flaws in a caricature of OOP, and that the correct OOP is free from these issues. To prove to myself and other people that this is not the case, I decided to go through some classic OOP books and criticize the OOP examples in them.

My first choice is Clean Architecture by Robert C. Martin (aka Uncle Bob). I must admit Uncle Bob is not one of my favorite software engineering gurus. But he is a reputable and experienced developer, and if even he were to write a caricature of OOP, then who are the people who dare to say they do it right?


I am running my #Urbit ship on Digital Ocean using #NixOS.

It took me quite a bit of time to figure out the actual settings to use for Nginx to forward HTTPS to port 8080, which vere uses. For some reason, the default settings were causing the whole UI to misbehave completely: keep showing nonsense, disconnect, etc. I finally found a working setup by asking around, googling, and plain trial and error.

In case you're interested, here are the settings that worked for me. TLS is set up using Let's Encrypt and terminated in Nginx; HTTP is redirected to HTTPS, and HTTPS goes to vere.

    services.nginx.enable = true;
    services.nginx.recommendedOptimisation = true;
    services.nginx.recommendedProxySettings = true;
    services.nginx.recommendedGzipSettings = true;
    services.nginx.recommendedTlsSettings = true;

    services.nginx.virtualHosts."napzod-dopzod.arvo.network" = {
        forceSSL = true;
        enableACME = true;
        http2 = false;
        locations."/" = {
            proxyWebsockets = true;
            # vere's HTTP port (8080, as mentioned above)
            proxyPass = "http://localhost:8080";
            extraConfig = ''
              # required when the target is also TLS server with multiple hosts
              proxy_ssl_server_name on;
              # required when the server wants to use HTTP Authentication
              proxy_pass_header Authorization;
              chunked_transfer_encoding off;
              proxy_buffering off;
              proxy_cache off;
            '' + "proxy_set_header Connection '';";
        };
    };

    security.acme.certs = {
      "napzod-dopzod.arvo.network".email = "myemail@example.com";
    };

I have not attempted to minimize these settings, so I don't know which ones are actually necessary.

I am still running vere in a lame way: by starting it in a tmux session, since I don't have a working Nix recipe for it yet. If you do, make sure to submit a PR to Nixpkgs so we can all benefit.

Here is my take on the relationship between functional, imperative, actor, and service-oriented programming, and a short description of a hybrid approach combining them all and giving each an explicit function and level within the system (computation).

There's a natural progression between them, and a great deal of efficiency when they are combined so that each does the part it does best. I call it “opportunistic programming” (working title). If you're aware of existing ideas/publications along the same lines, please share them with me. As usual, I probably didn't discover anything new.

Whenever you can, it's best to express computation declaratively: using functional programming. It should be the default mode of operation, abandoned only when impractical. It has many advantages over the alternative approaches, and very few downsides (when used opportunistically). The goal here is to express as much logic as possible as pure mathematical computation that is easy to reason about, prove, and test.
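To illustrate, here is a minimal sketch of the kind of logic that should live in the FP core (the `Item`/`order_total` names are invented for this example): pure functions over immutable data, with no I/O anywhere near them.

```rust
// Hypothetical order-total calculation kept as pure functions:
// no I/O, no shared mutable state, trivially testable.

#[derive(Clone, Copy)]
struct Item {
    unit_price_cents: u64,
    quantity: u64,
}

/// Pure: the result depends only on the input.
fn line_total(item: Item) -> u64 {
    item.unit_price_cents * item.quantity
}

/// Pure: a fold over immutable data, no side-effects to reason about.
fn order_total(items: &[Item]) -> u64 {
    items.iter().copied().map(line_total).sum()
}

fn main() {
    let items = [
        Item { unit_price_cents: 250, quantity: 2 },
        Item { unit_price_cents: 100, quantity: 3 },
    ];
    println!("{}", order_total(&items)); // prints 800
}
```

Because nothing here touches the outside world, every function can be tested with plain input/output assertions, which is exactly the “easy to reason about, prove, and test” property.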

When your code is dealing with external side-effects, or when things like computational performance are important, you have to abandon the luxury of the FP mode and switch to imperative code. It's a lesser and harder-to-use mode, but it is closer to how reality (computers) works, so it gives more control. You should still aim to write as much as possible in FP mode, and only wrap the FP core logic in an imperative shell coordinating data-sharing, mutation, and side-effects where needed. Depending on the problem and requirements the ratio might differ, but generally, imperative code should be isolated and kept to the necessary minimum. The goal here is to explain to the machine exactly how to compute something efficiently and/or to take control of the ordering between events.
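A minimal sketch of that shape (the `Counter`/`step` names are hypothetical): the pure transition function is the FP core, and a small imperative loop around it owns all the mutation and side-effects.

```rust
// FP core wrapped in an imperative shell: the pure `step` holds the
// logic; the loop in `main` holds the mutation and the I/O.

#[derive(Clone, Copy, Debug, PartialEq)]
struct Counter {
    count: u64,
}

enum Event {
    Increment,
    Reset,
}

/// FP core: a pure state transition, easy to test exhaustively.
fn step(state: Counter, event: &Event) -> Counter {
    match event {
        Event::Increment => Counter { count: state.count + 1 },
        Event::Reset => Counter { count: 0 },
    }
}

fn main() {
    // Imperative shell: owns the single mutable variable and the
    // side-effects, delegates every decision to the pure core.
    let mut state = Counter { count: 0 };
    let events = [Event::Increment, Event::Increment, Event::Reset, Event::Increment];
    for event in &events {
        state = step(state, event);
        println!("{:?}", state); // the side-effect stays at the edge
    }
}
```

Notice the ratio: all the logic lives in `step`; the shell only sequences events and prints, which matches the "isolated and kept to the necessary minimum" rule.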

As your computation (program) grows, it will become apparent that it is possible to split it into parts that don't require “data-coherency”. That means parts that have no reason to share data (even for performance), for which it is natural to communicate entirely through message passing (immutable copies of data), typically using in-memory message queues of some kind. That's (kind of) the actor model. The goal here is to encapsulate and decompose the system along its most natural borders. The size of actors depends entirely on the problem. Some programs can be composed of many tiny actors, a single function each. Some will be hard to decompose at all, or will have complex and big (code-wise) actors. It is worthwhile to consciously consider design possibilities that allow finer granularity in this layer.
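Using Rust's standard channels, a toy version of this layer might look like the following (the `Request` protocol is invented for the example): two parts that share no data and communicate only via message passing.

```rust
// Actor-ish decomposition with std threads and channels: the worker
// owns its state exclusively and reacts only to messages.

use std::sync::mpsc;
use std::thread;

enum Request {
    // Payload plus a reply channel: data crosses the border
    // only as owned messages, never as shared references.
    Double(u64, mpsc::Sender<u64>),
    Shutdown,
}

fn main() {
    let (tx, rx) = mpsc::channel::<Request>();

    // The "actor": a plain thread draining its mailbox.
    let worker = thread::spawn(move || {
        while let Ok(msg) = rx.recv() {
            match msg {
                Request::Double(n, reply) => {
                    let _ = reply.send(n * 2);
                }
                Request::Shutdown => break,
            }
        }
    });

    let (reply_tx, reply_rx) = mpsc::channel();
    tx.send(Request::Double(21, reply_tx)).unwrap();
    println!("{}", reply_rx.recv().unwrap()); // prints 42

    tx.send(Request::Shutdown).unwrap();
    worker.join().unwrap();
}
```

The important property is the border itself: because the worker never shares memory with its caller, promoting it to a separate process or machine later changes the transport, not the design.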

When the operational needs (availability, scalability, etc.) demand it, the actors from the previous paragraph are natural candidates to be moved to run on different machines, potentially in many copies, and become “services”. The cost and additional work lie in handling network latency, unreliable communication, and potential data loss. The goal here is to adapt the computation to the constraints of hardware: limited capacity and imperfect availability.

That's it. Some side-comments:

  1. It's a shame that FP is still not the default school of mainstream programming. FP is really easy and natural when applied opportunistically, and it generally leads to both better runtime and developer performance.
  2. My main problem with OOP (and possibly the actor model) is granularity. Encapsulation is costly. That's why encapsulating every single “object” is a bad idea. The right granularity for OOP is the module/library/component level, and for actors it's along the problem-dependent natural lines where sharing data is no longer required anyway. Within functional and imperative code, I recommend a data-oriented approach instead of the typical OOP approach.
  3. This model easily handles the problem of converting a “monolith” into a microservices-based system. The “encapsulation and decomposition” level is just “microservices, but without the extra work (yet)”.

#software #oop

Everybody does code reviews nowadays (I hope!). Research shows that it increases quality... blah, blah, blah.

But how many times did you think about how much it costs? Because, you know... Code Review isn't free. Code Review takes time. A developer publishes a PR, and then... what? Is there anyone readily available to review it? Other devs are busy doing their own stuff. Should the developer interrupt someone to get a fast review? Or just context-switch to something else for now? I'm sure the research did not investigate this part.

Before most readers close this page enraged, shouting “Obviously, Code Review is good, you fool! It's totally worth it!” – yes, Code Review is worth it! I didn't say it isn't. I only said that it does have a productivity cost, and you should not ignore it. Please, while you are aware of the benefits you're getting, also acknowledge the costs you paid to gain them.

I'm not just trying to be controversial and/or annoying. My point is: if you want to be good at something, you have to be mindful of both benefits and costs, to maximize the former while minimizing the latter! Most of my guidelines below are going to be about minimizing the cost of Code Reviews.