Dawid Ciężarkiewicz aka `dpc`

contrarian notes on software engineering, Open Source hacking, cryptocurrencies etc.

Here is my take on the relationship between functional, imperative, actor, and service-oriented programming, and a short description of a hybrid approach combining them all and giving each an explicit function and level within the system (computation).

There's a natural progression between them, and a great deal of efficiency when each is used for the part it does best. I call it “opportunistic programming” (working title). If you're aware of existing ideas, publications, etc. along the same lines, please share them with me. As usual, I probably didn't discover anything new.

Whenever you can, it's best to express computation declaratively: using functional programming. It should be the default mode of operation, abandoned only when impractical. It has many advantages over alternative approaches and very few downsides (when used opportunistically). The goal here is to express as much logic as possible as pure mathematical computation that is easy to reason about, prove, and test.
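
To illustrate why pure functions are so easy to test and reason about, here is a minimal sketch in Rust (a hypothetical example, not from the post): the function's result depends only on its input, so it can be checked in isolation, with no setup or mocking.

```rust
/// A pure function: it takes ownership of its input and returns a value.
/// No hidden state, no side effects; testing it is a one-liner.
fn median(mut xs: Vec<i64>) -> Option<i64> {
    if xs.is_empty() {
        return None;
    }
    xs.sort_unstable();
    // For simplicity, return the upper-middle element.
    Some(xs[xs.len() / 2])
}

fn main() {
    assert_eq!(median(vec![3, 1, 2]), Some(2));
    assert_eq!(median(Vec::new()), None);
}
```

Note that the function mutates a local vector internally; that's the opportunistic part. As long as the effects don't escape, the function is still externally pure.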

When your code deals with external side effects, or when things like computational performance are important, you have to abandon the luxury of the FP mode and switch to imperative code. It's a lesser, harder-to-use mode, but it is closer to how the reality (computers) works, so it gives more control. You should still aim to write as much as possible in FP mode, and only wrap the FP core logic in an imperative shell that coordinates data sharing, mutation, and side effects where needed. Depending on the problem and requirements the ratio will differ, but generally imperative code should be isolated and kept to the necessary minimum. The goal here is to explain to the machine exactly how to compute something efficiently and/or to take control of the ordering of events.
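
A sketch of the “FP core, imperative shell” split might look like this in Rust (hypothetical names; the word-counting task is just a stand-in): the core is a pure value-to-value function, and the shell owns all the I/O and error handling.

```rust
use std::collections::HashMap;
use std::io::Read;

// FP core: pure, no side effects; trivially testable in isolation.
fn word_counts(text: &str) -> HashMap<&str, usize> {
    let mut counts = HashMap::new();
    for word in text.split_whitespace() {
        *counts.entry(word).or_insert(0) += 1;
    }
    counts
}

// Imperative shell: owns the side effects (I/O, error handling) and stays
// thin, delegating all the actual logic to the pure core.
fn main() -> std::io::Result<()> {
    let mut text = String::new();
    std::io::stdin().read_to_string(&mut text)?;
    for (word, n) in word_counts(&text) {
        println!("{word}: {n}");
    }
    Ok(())
}
```

The ratio here is deliberately lopsided: the shell does nothing you'd want to unit-test, and the core does nothing you'd need to mock.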

As your computation (program) grows, it will become apparent that it is possible to split it into parts that don't require “data coherency”. That means parts that have no reason to share data (even for performance), and for which it is natural to communicate entirely via message passing (immutable copies of data), typically using in-memory message queues of some kind. That's (kind of) the actor model. The goal here is to encapsulate and decompose the system along its most natural borders. The size of actors depends entirely on the problem. Some programs can be composed of many tiny actors, a single function each. Some will be hard to decompose at all, or will have complex and big (code-wise) actors. It is worthwhile to consciously consider design possibilities that allow finer granularity in this layer.
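
A minimal sketch of such an in-process actor, using Rust's standard `std::sync::mpsc` channels as the message queue (a deliberately tiny, hypothetical example): the actor owns its state and the rest of the program can only reach it through messages.

```rust
use std::sync::mpsc;
use std::thread;

// A tiny "actor": it owns its state (the running total) and talks to the
// outside world only through messages on its channels.
fn summing_actor(inbox: mpsc::Receiver<u64>, outbox: mpsc::Sender<u64>) {
    let mut total = 0;
    for msg in inbox {
        total += msg;
    }
    // The inbox was closed: report the final state and terminate.
    let _ = outbox.send(total);
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let (result_tx, result_rx) = mpsc::channel();
    let handle = thread::spawn(move || summing_actor(rx, result_tx));
    for n in 1..=10 {
        tx.send(n).unwrap();
    }
    drop(tx); // closing the channel signals the actor to finish
    handle.join().unwrap();
    assert_eq!(result_rx.recv().unwrap(), 55);
}
```

Because nothing is shared, the actor's boundary is already a serialization boundary, which is exactly what makes the next step (promoting it to a service) natural.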

When operational needs (availability, scalability, etc.) demand it, the actors from the previous paragraph are natural candidates to be moved to different machines, potentially in many copies, and become “services”. The cost and additional work lie in handling network latency, unreliable communication, and potential data loss. The goal here is to adapt the computation to the constraints of hardware: limited capacity and imperfect availability.

That's it. Some side-comments:

  1. It's a shame that FP is still not the default school of mainstream programming. FP is really easy and natural when applied opportunistically, and it generally leads to both better runtime and developer performance.
  2. My main problem with OOP (and possibly the actor model) is granularity. Encapsulation is costly. That's why encapsulating every single “object” is a bad idea. The right granularity for OOP is the module/library/component level, and for actors it is along the problem-dependent natural lines where sharing data is no longer required anyway. Within functional and imperative code I recommend a data-oriented approach instead of the typical OOP approach.
  3. This model easily handles the problem of converting a “monolith” into a microservices-based system. The “encapsulation and decomposition” level is just “microservices, but without the extra work (yet)”.

Everybody does code reviews nowadays (I hope!). Research shows that it increases quality... blah, blah, blah.

But how many times have you thought about how much it costs? Because, you know... Code Review isn't free. Code Review takes time. A developer publishes a PR, and then... what? Is there anyone readily available to review it? Other devs are busy doing their own stuff. Should the developer interrupt someone to get a fast review? Or just context-switch to something else for now? I'm fairly sure the research didn't investigate this part.

Before most readers close this page enraged, shouting “Obviously, Code Review is good, you fool! It's totally worth it!”: yes, Code Review is worth it! I didn't say it isn't. I only said that it has a productivity cost, and you should not ignore it. Please, while you are aware of the benefits you're getting, also acknowledge the costs you paid to gain them.

I'm not just trying to be controversial and/or annoying. My point is: if you want to be good at something, you have to be mindful of both the benefits and the costs, to maximize the former while minimizing the latter! Most of my guidelines below are about minimizing the cost of Code Reviews.

Read more...

This is going to be a quick overview of how I tend to write my application code. It might be a bit Rust-centric, but I apply similar methods in all programming languages I use.

I think it's an important subject, and during past online discussions about learning Rust and writing code “the Rust way”, I was asked multiple times how I do it. I don't really have a lot of time to write something longer and better structured, so please excuse anything that is confusing. You get what you pay for.

Also, I don't want to suggest this is some sacred, best way or anything like that. It is simply what I typically do: a result of years of professional and Open Source work and the experience I gained during that time. I'm always happy to learn and get to know other points of view, so any feedback is welcome.

Read more...

Or: Why there's no inflation and things are weird.

TL;DR: The global economy is much closer to a game of Monopoly than most people realize. We're at the phase where it's clear who owns all the hotels, and the only thing that allows the other players to keep playing is taking on more and more debt. This big Monopoly game always ends like this and then gets restarted. That's the economic “supercycle”.

All models are wrong, but some are useful

I really enjoy simplifying complex systems into useful mental models. This is a short compilation of what and how I think about the macroeconomy. I'm not saying it's right, but it's mine and I wanted to share it. I'll be happy to hear where it is inaccurate or simply wrong. Also, I'm aware that what I say here is probably not very novel.

Read more...

I really like the Urbit project. I don't know how long I've been following it now. I definitely discovered it earlier than 2016, probably even in 2014.

I think #Urbit is fundamentally trying to address the right set of problems in a comprehensive, holistic way: the fact that the Internet is broken for anything but centralized silos. I am deeply enthusiastic and cheering for Urbit. I do want it to succeed!

But in this post, I'm going to focus on an honest critique. I've actually read a lot of interesting critiques of Urbit over the years – all that I could get my hands on. Mine is going to be less ambitious, mostly down-to-earth and pragmatic. I hope it won't come out too harsh.

Read more...

I'll be updating this post in the future. It is mostly for my record keeping.

Read more...

Test runners/frameworks have caused me a lot of grief over the years. That's part of the reason why I really appreciate how the standard #Rust test runner works. As usual with Rust, the core developers and the community at the very least avoided the most common misfeatures and made the right thing the easiest and most natural thing to do.

That's why I recently (probably too abrasively) bashed an announcement of some new Rust testing library.

I've taken some time and collected a list of feature requests and misfeature rants that I just want to share.

Read more...

This is a part of #altswe series, where I describe ideas for an alternative approach to software engineering.

I've been a part of teams of between 3 and around 20 software engineers, in companies of between 3 and thousands of employees. I have used corporate messengers, IRC, Hangouts, Slack, and many other IM tools. After all of that, I think they are generally counterproductive.

Read more...

This is an introduction page for the #altswe series, where I describe ideas for an alternative approach to software engineering.

I've been fortunate to work on software with a lot of smart and hard-working people, in a lot of cool places. And while the companies I worked for were often very different in their business, products, size, growth, etc., the way software engineering was organized was actually very similar.

I feel like generally, we just keep doing things “as we did them at my previous company”, and very rarely anything truly different is being considered.

I'm a natural contrarian, always on the lookout for alternative approaches. I just couldn't help but develop a list of things that I really wish I could have an opportunity to try one day.

If I am ever in charge of a team or a company, I'd like to try some or even all of them, and see how well (or badly) they work.

For now, I am just gathering them here under the #altswe tag. The ones without a link are yet to be written.

  • Kill Instant Messaging – less is more.
  • Less is more – cut unnecessary tools, projects, features.
  • Work in pairs
  • 4x7 – 4 days a week, 7 hours – again: less is more.
  • No Scrum, No Standups, No “Wish Points”
  • Ownership – Root of safety in Rust, prosperity in the markets, and quality in SWE.
  • Avoid success at all cost

or: Notes on indexing blockchains

A copy of this post can be found on rust-bitcoin-indexer wiki.

Abstract: I have ventured on a quest to discover the best way to index a blockchain. Surprisingly little has been written about it, considering that it is a common task for any organization handling blockchain-based interactions. It seems to be an area dominated by in-house solutions, with very little knowledge sharing. In this post, I go over the problem, ideas, discoveries, and experiences. I also describe a Bitcoin indexer implementation that I worked on while researching the subject: rust-bitcoin-indexer.

I expect the reader to be familiar with at least the basics of Bitcoin.

Please note that this is not scientific research, only a spare-time project. Feel free to contact me with comments, pointing out any mistakes or ideas for improvements.

Read more...