I keep thinking about this idea of a different approach to building Operating Systems. I know it's not a new idea.
I guess the core of the idea is that the programming language, its runtime, and the OS should be the same thing. You know - kind of like Emacs, or Lisp generally, or Smalltalk, where the language is the runtime, and it allows modifying itself while it's running.
Maybe I'm wrong, and what do I know about anything anyway, but I think the previous attempts to make this a reality were outcompeted due to practical considerations: mostly performance and resource utilization.
You see, I am a bottom-up developer: when I was a kid, I first learned how electric switches and transistors work, and how to chain them into longer logical circuits. I played with writing assembler code for the MOS CPU of my C64, POKEing and PEEKing bytes into the addresses of memory-mapped hardware registers. As a teenager, I was hacking on the Linux kernel. For me, the computer will always be just a very fast automaton, composed of circuits, capacitors, and so on, executing long sequences of tiny CPU instructions, and the job of software is to make it do something reliably and efficiently.
The way I see it, the computing we use today, with all its problems and shortcomings, is the result of the bottom-up approach - C and Unix - winning the competition. Raw performance and computational efficiency beat mathematical elegance due to economic pressures.
Yet fundamentally these two perspectives on computing are not incompatible. IMO, the Rust programming language is to date the best mainstream synthesis of both. Rust managed to largely retain the mathematical elegance and expressiveness of high-level languages like Haskell or OCaml, in a form suitable for a C-like systems programming language.
In my opinion, if theoretical/mathematical beauty is ever to reach mainstream system software, it simply must not compromise on system software's core values.
Now, to reach perfection, a third aspect needs to be synthesized in: Lisp- or Smalltalk-like self-amending, self-reflective system building.
So now, I'll attempt to describe what such a system language would have to look like, and what features and properties it would need to have.
The starting point is going to be Rust. A language like this would have to be imperative, and memory- and type-safe, at least by default. Second, it would have to use an ownership and borrowing system like the one Rust has. This is all necessary for performance, robustness, and low-level control. Side effects and data sharing are necessary for building practical and efficient system software.
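To make the ownership-and-borrowing point concrete, here is a minimal sketch in today's Rust (the function names are mine, purely illustrative) showing why this model gives low-level control without a garbage collector:

```rust
// Ownership makes resource lifetimes explicit: `buf` is moved into
// `consume`, so the caller can no longer touch it afterwards, and the
// compiler enforces this statically -- no runtime checks, no GC.
fn consume(buf: Vec<u8>) -> usize {
    buf.len()
} // `buf` is dropped (its memory freed) right here, deterministically.

// Borrowing grants temporary, checked access without transferring
// ownership: at most one mutable borrow at a time, or any number of
// shared ones -- which rules out data races by construction.
fn fill(buf: &mut Vec<u8>, byte: u8, n: usize) {
    buf.extend(std::iter::repeat(byte).take(n));
}

fn main() {
    let mut buf = Vec::new();
    fill(&mut buf, 0xAB, 4); // mutable borrow ends after the call
    let len = consume(buf);  // ownership moves; `buf` is unusable now
    assert_eq!(len, 4);
}
```

The key property for an OS-as-language is that all of this is resolved at compile time: side effects and sharing are allowed, but every access path is accounted for.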
One of the primary purposes of an OS is to isolate applications. Contemporary mainstream OSes like Linux achieve that by a combination of hardware (different CPU execution modes, the virtual memory controller, etc.) and software isolation. A holy-grail system programming language would achieve such isolation at the language level. That means global variables and memory-unsafety escapes must be disallowed. No readily available escape hatches like Rust's unsafe (at least not without a capability for it)!
Calling an arbitrary function in such a language must come with guarantees similar to executing code in a sandbox. Think BPF code in the Linux kernel. Even though side effects are supported, the code can't access any data that was not made accessible to it in some form. These sorts of ideas are already being explored in the Rust community, but the main difference here is that the language itself would have to be able to actually enforce them.
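Today's Rust can already approximate this discipline by convention. Here is a sketch with a hypothetical capability token (all the names are mine, not a real API): the token can only be minted by trusted setup code, so holding it *is* the permission. The difference in the language I'm describing is that this would be enforced, not opt-in:

```rust
// A hypothetical capability token. Its constructor is private to this
// module, so "kernel" code decides who gets one; holding a value of
// this type is the proof of permission.
mod caps {
    pub struct FsWrite(());                              // write-to-disk permission
    pub fn grant_fs_write() -> FsWrite { FsWrite(()) }   // trusted side only
}

// Untrusted code gets no ambient authority: like a BPF program, it can
// only operate on the data and capabilities explicitly passed in.
fn plugin(input: &str, _cap: &caps::FsWrite) -> String {
    // ...may perform file writes, because it holds `FsWrite`...
    input.to_uppercase()
}

fn main() {
    let cap = caps::grant_fs_write();
    assert_eq!(plugin("hello", &cap), "HELLO");
}
```

In today's Rust this remains a convention - the plugin could still call std::fs directly or use unsafe - which is exactly the hole the hypothetical language would close by making tokens like this the *only* path to any effect.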
The benefit of capability-like control is tremendous. First, it allows the language itself to act as a resource- and application-isolation and access-control mechanism. Second, it solves the problem of socially scalable third-party code reuse.
It seems to me that any non-trivial piece of code would require plenty of capabilities of all sorts, so to make this all practical, some form of inference of the capabilities required by a given function, and some form of "implicit context" system, will probably be required.
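What I mean by an "implicit context" can be sketched in today's Rust as a struct bundling capabilities and threaded through calls explicitly (again, all names here are hypothetical). The imagined language would infer and pass something like this automatically:

```rust
// Hypothetical capability types, bundled into one context struct.
struct NetCap;
struct ClockCap;

struct Ctx<'a> {
    net: &'a NetCap,
    clock: &'a ClockCap,
}

// The signature both documents and limits what the function may touch:
// no context, no network, no clock.
fn fetch_with_timestamp(ctx: &Ctx, url: &str) -> String {
    let _ = (ctx.net, ctx.clock); // a real body would use both here
    format!("GET {url}")
}

fn main() {
    let (net, clock) = (NetCap, ClockCap);
    let ctx = Ctx { net: &net, clock: &clock };
    assert_eq!(fetch_with_timestamp(&ctx, "example.org"), "GET example.org");
}
```

Passing this by hand through every call is exactly the boilerplate that makes capability systems impractical today, which is why inference or implicit passing would have to be built into the language.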
A language like this would have to support JIT compilation, possibly as the base for everything. Many people think of AOT compilation as the only way for system programming languages, but I don't think that's the case. JIT compilation itself does not make the resulting binary code necessarily slower, or garbage-collected. It could be very much like calling gcc to build a shared library, at runtime.
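A crude stand-in for this, runnable today, is to shell out to an ambient compiler at runtime - the sketch below assumes rustc is on PATH and builds a standalone binary rather than mapping code into the current process, which is where a real JIT would differ:

```rust
use std::{fs, process::Command};

fn main() {
    // Generate source code at runtime (in a real system this could
    // arrive from storage or the network).
    let dir = std::env::temp_dir();
    let src = dir.join("jit_demo.rs");
    let bin = dir.join("jit_demo_bin");
    fs::write(&src, r#"fn main() { println!("{}", 6 * 7); }"#).unwrap();

    // Compile it on the spot -- the moral equivalent of "calling gcc
    // to build a shared library, at runtime".
    let status = Command::new("rustc")
        .arg(&src)
        .arg("-o")
        .arg(&bin)
        .status()
        .expect("rustc not found on PATH");
    assert!(status.success());

    // Execute the freshly built native code: full AOT-grade machine
    // code, produced just-in-time, no interpreter or GC involved.
    let out = Command::new(&bin).output().unwrap();
    assert_eq!(String::from_utf8_lossy(&out.stdout).trim(), "42");
}
```

The point of the sketch is only that "JIT" need not mean bytecode plus garbage collection: it can mean invoking a full optimizing compiler and getting ordinary native code out.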
In essence, this language would have to provide facilities to compile binary code from source code (possibly loaded from storage or delivered over the network) at runtime, in a type-safe and memory-safe, yet flexible way. Possibly always accessible as a form of capability/resource, like pretty much everything else.
If the language/system-level sandboxing works reliably, and some function is called and provided resources/capabilities allowing it to dynamically download code from the Internet (think downloadable plugins or additional applications), that would be perfectly safe and flexible. Such dynamically downloaded, compiled, and executed code would only have as much access as the original function itself (or less).
Possibly some other minor features might be desirable or required. The ability to enforce destructor execution at the end of a resource's lifetime would allow efficient cleanup, without the possibility of untrusted code preventing it and causing resource exhaustion.
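Rust's Drop already gives deterministic cleanup at end of scope, as the sketch below shows - but today it can be circumvented (e.g. via std::mem::forget), which is precisely the escape the hypothetical language would have to forbid:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Count live "resources" so we can observe that cleanup actually ran.
static LIVE: AtomicUsize = AtomicUsize::new(0);

struct Resource;

impl Resource {
    fn acquire() -> Resource {
        LIVE.fetch_add(1, Ordering::SeqCst);
        Resource
    }
}

impl Drop for Resource {
    // Runs deterministically at the end of the resource's lifetime --
    // no finalizer queues, no GC pauses.
    fn drop(&mut self) {
        LIVE.fetch_sub(1, Ordering::SeqCst);
    }
}

fn main() {
    {
        let _r = Resource::acquire();
        assert_eq!(LIVE.load(Ordering::SeqCst), 1);
    } // `_r` dropped here; the resource is released immediately
    assert_eq!(LIVE.load(Ordering::SeqCst), 0);
    // Caveat: today's Rust lets code leak this via `std::mem::forget`,
    // so untrusted code could still hoard resources -- the language
    // sketched here would need to close that hole.
}
```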
And that's it. A language like that would allow building a complete, fully-featured, flexible, efficient, and performant operating system, where the distinction between kernel and application becomes meaningless. Interactive shells, graphic interfaces, all sorts of abstractions would all be just dynamically compiled and executed functions. Many architectures and designs of the interactions between different pieces of code are possible, so completely different OSes can be built using it, but at the core it would all be based on the programming language itself.