Dawid Ciężarkiewicz aka `dpc`

notes on software engineering, Open Source hacking, cryptocurrencies etc.

I am a crypto-finance enthusiast, and though I am strongly skewed toward Bitcoin maximalism, I still try to follow the space, looking for promising technologies and ecosystems.

One of the coins that really caught my interest was Grin:

  • it has no fishy business (no pre-mine, no dev tax),
  • it is written in #Rust (which, I think, is a perfect language for #cryptofin),
  • it is based on MimbleWimble, a very promising technology for improving blockchain scalability.

It ticks all the boxes to be a reasonable altcoin.

The only problem with it is its controversial monetary policy. Basically: one coin, every second, forever. While I don't mind the “forever” part, the problem with Grin's monetary policy is the high inflation rate sustained over its early years.

Bullish (“moon”) estimates

Let's try to put some initial estimation on the value/price of a Grin Coin.

Let's go super bullish first. Let's say that in 20 years Grin becomes “a digital gold” and will reach a market cap of gold, despite numerous more established coins competing for that position.

From a randomly googled article:

According to a 2013 report from Thomson Reuters GFMS, it was believed that 171,300 tons of gold had either been mined or was still in the ground. Since there are 32,000 ounces per ton, we're talking about 5.482 billion ounces of gold in the entire world, based on this report. If each ounce was worth about \$1,290, the world's gold supply would have an implied market cap of \$7.07 trillion.

As around 1 Grin Coin is mined every second, in 20 years we will have:

60 * 60 * 24 * 365 * 20 = 630720000

Grin Coins issued. About 630 million. Which, assuming a \$7T market cap, gives us a rough price of \$11k.

For comparison, a comparable price for BTC would be something around \$300k.

Now let's calculate the same for 40 years in the future.

60 * 60 * 24 * 365 * 40 = 1261440000

1.26 billion Grin Coins, at an estimated price of roughly \$5.5k.

For comparison, a similarly estimated price for BTC would still be something around \$300k, because with each passing year less and less new Bitcoin is issued.
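The arithmetic above can be sketched in a few lines of Rust (the constant and function names are mine, purely for illustration):

```rust
/// Grin's emission: one coin per second, forever.
const SECONDS_PER_YEAR: u64 = 60 * 60 * 24 * 365;

/// Coins issued after `years` of emission.
fn grin_supply(years: u64) -> u64 {
    SECONDS_PER_YEAR * years
}

/// Implied price per coin, given a target market cap in USD.
fn implied_price(market_cap_usd: f64, supply: u64) -> f64 {
    market_cap_usd / supply as f64
}

fn main() {
    // The 20-year scenario at a gold-like $7T market cap.
    println!("supply: {}", grin_supply(20)); // 630720000
    println!("price: ${:.0}", implied_price(7.0e12, grin_supply(20))); // ~ $11k
}
```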

You see the problem? With Grin, you should expect the value to keep being eroded, unless growth of the total market cap offsets it.

That puts a hard ceiling on the valuations that Grin could reach.

Down to earth estimation

Let's base the valuation on a coin with somewhat similar properties: Monero. I think it's a good coin to compare with. Monero also uses continuous issuance after reaching 18 million XMR, but the first 18 million is issued at a decreasing rate. Both Grin and Monero are privacy-focused, and while Grin might be considered “better tech”, it will also have to compete with more and more established coins, and its issuance schedule will not be as inviting to speculators as those of other coins.

After around 2 years of existence, the market cap of XMR was \$7M.

In 2 years Grin will issue:

60 * 60 * 24 * 365 * 2 = 63072000

Price estimate: \$7M / 63072000 ≈ \$0.11 per coin.

After around 5 years of existence, the market cap of XMR is \$757M.

In 5 years Grin will issue:

60 * 60 * 24 * 365 * 5 = 157680000

Price estimate: \$757M / 157680000 ≈ \$4.8 per coin.

Summary

Given these back-of-the-napkin calculations, I would expect the price of a Grin coin to initially hover in the sub-\$0.1 area, and if it gains enough traction and even starts stealing users from other coins, maybe it could rise to around \$10 in a few years. However, that seems to me a rather optimistic scenario.

In the pessimistic scenario, people will simply not put any value into Grin, preferring existing tech (like XMR for anonymity, or BTC for “digital gold with big upside potential”); or Lightning Network and other techniques will solve both the scalability and fungibility shortcomings of Bitcoin; or a Bitcoin-native MimbleWimble sidechain will be created, making Grin obsolete.

The risk-to-reward ratio here is simply not great. I do like that it's at least realistic for Grin to maintain a relatively stable price of, let's say, \$0.1/Grin, which makes it a practical coin for keeping a small amount of “digital cash” spending money, for privacy purposes.

The risk of a \$0.1/Grin price, though, is that 51% attacks on Grin would cost only \$100-\$1000 per hour. That reinforces the coin's destiny to serve mostly low-value, privacy-focused cash spending. Which, I guess, is perfectly OK, as long as you don't plan to get rich speculating on Grin.

As for what will really happen – only time will tell. A lot of it depends on psychology and public perception.

Code reuse in Rust

In my opinion, one of the biggest reasons why Rust is so productive is that it's a superb language for code reuse.

First, the ownership and borrowing system allows exposing properties in an API that are often impossible to express in other languages. The fact that the API communicates and enforces how resources are created, shared, moved, and destroyed, all checked at compile time, gives the user great confidence that they are using any API – internal or external – correctly.
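As a minimal illustration of the point (the type and names here are hypothetical, not from any particular crate): an API that takes a value by move guarantees, in its signature alone, that the value cannot be used again, and the compiler enforces it:

```rust
/// A hypothetical one-shot token; there is deliberately no `Clone` or `Copy`.
struct Token(String);

/// Takes the token by value, consuming it: the type signature alone
/// guarantees each token can be redeemed at most once.
fn redeem(token: Token) -> String {
    format!("redeemed: {}", token.0)
}

fn main() {
    let t = Token("abc".to_string());
    println!("{}", redeem(t));
    // redeem(t); // compile error: use of moved value: `t`
}
```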

Second, Rust comes with first-class built-in tooling for discovering, creating, sharing, and using publicly available Open Source libraries. This is not a property unique to Rust, but it builds a powerful synergy when combined with the first point.

Third, the community (at least so far) has strongly encouraged uniformity and commonality: similar code style, similar documentation style, common core libraries, and patterns. A big chunk of this was achieved through great tooling like rustfmt and clippy. Thanks to this, the Rust ecosystem does not feel fragmented. Jumping into code authored by someone else does not feel like a venture into a foreign land, as it often does in other programming languages.

Together, these properties create a language and ecosystem where code reuse is almost effortless. While in other languages it's often more tempting and convenient to implement things yourself than to learn a new API, in Rust the difference between your own and third-party code often blurs completely. In a way, this creates a completely new quality of building software.

Despite all these strengths, there's one problem that sticks out like a sore thumb: trust. Every additional dependency is another piece of code that could be buggy, or even malicious. And in my opinion, this problem is so serious that it is entirely blocking the untapped potential of code reuse in Rust.

I've been personally bitten by this at least a couple of times in 2018.

One time, I carelessly used a library that turned out to be plain buggy, with the author refusing to fix, or even admit, the problem. I used it because its name was very similar to a popular crate I had used before. After investigating, it turned out the author is a crates.io name squatter with plenty of almost-empty or poor-quality libraries under very good names. It's hard for me to say exactly what his motivation is, but it does not look good at all. It made me even more motivated to figure out a way to prevent incidents like this.

Another time, I badly misjudged a library that looked good and had many authors, and yet, as a fellow redditor pointed out, had some extremely serious performance problems that made it pretty much unusable for serious purposes. The only way to discover this was to find one particular GitHub issue among many other, innocent-looking ones.

As things are right now, it's very hard to judge the quality of a given crate. The only accessible metric available on crates.io is the download count. Not only can it be artificially inflated, it is just unreliable: just because a crate is popular doesn't mean it's good, or that the version we're using isn't known to have serious problems. Another accessible metric, judging by the authors, becomes less and less practical as the ecosystem and user base grow: more and more crates are published, and crate ownership changes over time.

On top of that, in 2018 there were at least a couple of instances where the NPM ecosystem was shaken by serious security breaches, and it's just a matter of time before issues like this start happening in the Rust ecosystem too.

About cargo crev

crev is a language and ecosystem agnostic Code REView system concept and cargo-crev is its first implementation – tightly integrated with Rust and crates.io.

The idea behind crev is fairly simple. You create a cryptographic ID, review packages in the form of cryptographically verifiable proofs, publish your proofs online, and share them with other users. Eventually, a Web of Trust is built: low-quality packages accumulate bad reviews, high-quality packages accumulate more and more good reviews, and it's easy to determine which packages are new and require a higher level of scrutiny.

Let's take a tour over recently released cargo-crev 0.3.

Here is the screenshot of using cargo-crev when reviewing its own dependencies:

cargo crev verify deps

The first column (status) shows the cryptographically verifiable status of a given crate in a given version. In this case, the common_failures crate is verified because someone within my personal WoT (me) positively reviewed it.

The review proof is available in my Proof Repository.

A crate negatively reviewed by someone in my WoT would show a red flagged status instead.

The following columns show:

  • reviews – the number of review proofs for the given crate version, and the total for all its versions,
  • downloads – crates.io download counters, to help judge popularity,
  • owners – the number of “known”/“total” crate owners; the cargo crev edit known command allows editing the list of known and somewhat trusted crates.io users,
  • crate and version – the dependency name and the version used.

The primary role of owners and downloads is to help narrow down the set of crates that are unpopular and/or don't come from known, well-respected authors. This is especially useful when starting to use crev and trying to prioritize reviews.

With time, I hope the number of users and circulating reviews will grow, shifting the focus from the owners and downloads columns to the status column.

A lot of crates are quite small, and a low-intensity review of one takes around 10 minutes. It is not as titanic a work as one might think. crev does not require everyone to spend hours carefully reviewing every line of code. A quick scan, looking for things clearly out of order, is still more valuable than no review at all.

While there's definitely more to talk about to fully introduce the ideas behind cargo-crev and its already-implemented features, I hope this single screenshot gives a good initial insight into its current state.

Now, after such a long introduction, it is time for my Rust 2019 wish: I would like the Rust community in 2019 to solve the code trust problem and untap the “fearless code reuse” potential.

I know of at least 2 other Rust 2019 series posts voicing similar concerns:

So I know I'm not the only one who would like this to happen.

I invite all Rust users to give cargo crev a try and share their Proof Repositories and feedback with us!

The longer I do software engineering (and even things outside of it), the more confident I am that one of the most important metrics (the most important one?) in any sort of creative process is iteration time.

What do I mean by iteration time? The time between having an idea for a change and getting real-world feedback on the results of implementing it.

Worse is Better, Agile, Lean, Antifragile: they can all be distilled down to iteration time.

The world is an infinitely complex place. It's very, very hard to predict the real results of any action. Because of that, the best way to navigate the world is to make small steps and collect feedback. The faster you can make these steps, the faster you can use the new knowledge to make new, better steps, and this compounds very quickly.

It's a principle so powerful that, taken to the extreme, it allows agents that literally have no understanding of anything to beat everyone.

Read more...

Object-oriented programming is an exceptionally bad idea which could only have originated in California.

— Edsger W. Dijkstra

Maybe it's just my experience, but Object-Oriented Programming seems to be the default, most common paradigm of software engineering: the one typically taught to students, featured in online materials, and, for some reason, spontaneously applied even by people who didn't intend to use it.

I know how seductive it is, and how great an idea it seems on the surface. It took me years to break its spell and understand clearly how horrible it is and why. Because of this perspective, I have a strong belief that it's important for people to understand what is wrong with OOP, and what they should do instead.

Many people have discussed the problems with OOP before, and I will provide a list of my favorite articles and videos at the end of this post. Before that, I'd like to give my own take.

Read more...

Read Bootstrapping Urbit from Ethereum for details.

I find #Urbit to be one of the most interesting tech projects around and I'm happy to see another milestone. I'm not a great fan of Ethereum, but I guess for this application, it's well suited.

One of the things I love about Rust is its ownership system. The ability to express a resource changing ownership (moving) is a huge enabler, e.g. allowing APIs that are impossible to misuse.

I've just found a good example: Bulletproofs are implemented in Rust, and use move semantics to enforce secure usage.
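The pattern looks roughly like this (a sketch of the general typestate idea, with made-up types, not the actual Bulletproofs API): each protocol step consumes the previous state, so steps cannot be skipped, repeated, or reordered.

```rust
// Hypothetical two-step protocol states; neither implements Clone or Copy.
struct Committed { secret: u64 }
struct Challenged { secret: u64, challenge: u64 }

fn commit(secret: u64) -> Committed {
    Committed { secret }
}

/// Consumes `Committed`: you cannot receive two challenges for one commitment.
fn receive_challenge(state: Committed, challenge: u64) -> Challenged {
    Challenged { secret: state.secret, challenge }
}

/// Consumes `Challenged`: the proof can only be produced once, and only
/// after both prior steps have happened, in order.
fn prove(state: Challenged) -> u64 {
    state.secret.wrapping_add(state.challenge)
}

fn main() {
    let proof = prove(receive_challenge(commit(40), 2));
    println!("{}", proof); // 42
    // Reusing a consumed state is a compile error, not a runtime bug.
}
```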

As a side-note: it looks to me like Rust has already become a de facto standard for #crypto world infrastructure.

#programming #crypto

Turns out the platform I'm hosting this blog on supports ActivityPub, so you should be able to subscribe to updates by just adding dpc@dpc.pw.

#test

I think I discovered Rust somewhere around the year 2012. Back then it was a much different language than it is today: it had green threads, @ and ~ were used a lot, and there was even a GC.

Rust caught my attention because I was looking for a language for myself. I always considered myself “a C guy”: a bottom-up developer who first learned machine code, then higher-level programming. And while C was my language of choice, I couldn't stand it anymore.

I was tired of how difficult it was to write correct, robust software in C, especially:

  • the inability to create solid abstractions and nice APIs,
  • segfaults, double-checking my pointers, and a general lack of trust in my code,
  • make and make-like build systems.

I loved the simplicity and minimalism, I loved the flexibility and control, but I couldn't stand the primitivism and lack of modern features.

With time I grew more and more fond of Rust. The language kept evolving in a direction that hit my personal sweet spot: a modern C. At some point I realized I was in love with Rust. And I still am today, after a couple of years of using it.

Just look at my GitHub profile. It has “Rust” written all over it. And check how my contributions have grown since 2013. Rust made me much more productive and enthusiastic about programming.

So let me tell you why Rust is my darling programming language.

Read more...

Introduction

In this post, I will describe how I refactored a quite complicated Rust codebase (rdedup) to optimize performance and utilize 100% of all CPU cores.

It will also serve as documentation of rdedup.

Other reasons it might be interesting:

  • I explain some details of deduplication in rdedup.
  • I show an interesting approach to zero-copy data stream processing in Rust.
  • I show how to optimize fsync calls.
  • I share tips for working on a performance-oriented Rust codebase.
Read more...

TL;DR

The biggest strength of #Go, IMO, was the fad created by the fact that it is “backed by Google”. That gave Go immediate traction and bootstrapped a decently sized ecosystem. Everybody knows about it, and has a somewhat positive attitude, thinking “it's simple, fast, and easy to learn”.

I enjoy the (crude, but still) static typing, compilation to native code, and most of all the native green threads, which make Go quite productive for server-side code. I just had to get used to the many workarounds for the lack of generics, remember to avoid all the Go landmines, and ignore the poor expressiveness.

My favorite thing about Go is that it produces static, native binaries. Unlike software written in Python, getting software written in Go to actually run is always painless.

However, overall, Go is a poorly designed language full of painful archaisms. It ignores multiple great ideas from programming language research and the experience of other languages.

“Go’s simplicity is syntactic. The complexity is in semantics and runtime behavior.”

Every time I write code in Go, I get the job done, but I feel deeply disappointed.

Read more...