From my point of view, the main source of progress in programming (software engineering) was, is, and will be sacrificing small-scope expressiveness and power to improve larger-scope composability.

When we gave up on writing machine code and started enforcing higher-level programming languages, we sacrificed small-scope power to improve larger-scope composability.

When we gave up on goto and enforced structured control flow, we sacrificed small-scope power to improve larger-scope composability.

When we give up on spawning threads and enforce structured concurrency, we sacrifice small-scope power to improve larger-scope composability. This transition is still ongoing, but I believe that within the next decade a developer starting a random thread will be looked at the same way as a developer putting a random goto in their code is today.

When we give up on memory-unsafe programming languages and enforce memory-safe constructs, we sacrifice small-scope power to improve larger-scope composability.

I am strongly convinced that the next step should be, and will be, giving up on uncontrolled data access and side effects. FP aims at this (or a similar) problem, but I believe it misses the point. Our code executes on stateful machines, in a stateful world, and precisely in order to achieve side effects in it.

Instead of trying to ban side effects altogether and expressing everything as pure mathematical computation, what we need is the enforcement of controlled, structured causality and a ban on any action at a distance in the code. Namely: any part of the code taken in isolation should not be able to access, and especially not mutate, any data that was not passed to it in some way as a (direct or indirect) argument. By giving up on action at a distance and enforcing structured causality, we sacrifice small-scope power to improve larger-scope composability.
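A tiny sketch of the rule in practice, assuming a clock is the resource in question: instead of reaching for the ambient system clock (action at a distance), the function receives a clock capability as an explicit argument, so everything it can observe arrives through its parameters.

```python
import datetime
from typing import Callable

def make_greeting(name: str, now: Callable[[], datetime.datetime]) -> str:
    # This function can only see what was passed in: a name and a clock.
    hour = now().hour
    period = "morning" if hour < 12 else "afternoon"
    return f"Good {period}, {name}!"

# Production code passes the real clock...
print(make_greeting("Ada", datetime.datetime.now))

# ...while a test passes a fake one, with no monkey-patching needed.
fixed_clock = lambda: datetime.datetime(2024, 1, 1, 9, 0)
print(make_greeting("Ada", fixed_clock))
```

Nothing here is language-enforced in Python, of course; the point of the essay is that it should be.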

For many developers, this proposition should seem rather uncontroversial, as they already write code this way, as a form of intuitive best practice. However, a best practice is no substitute for enforcement.

Most programming languages violate this rule upfront, e.g. by providing a globally accessible standard library that allows opening any file, performing any network operation, reading the current time, and generally granting open access to system resources.

If we can guarantee language-level enforcement of structured causality, things like reusing third-party code become much easier. If you haven't passed any file-system or network resource to a library you're using, it cannot read your password from the file system and send it to an attacker's system. A whole class of attacks against ecosystems like NPM and crates.io becomes impossible.
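A hypothetical sketch of capability-style third-party code: the "library" function can only touch resources handed to it. `ReadOnlyDir` is an illustrative capability wrapper invented for this example, not a real API, and in Python it is only a convention; a language-enforced version would make bypassing it impossible.

```python
from pathlib import Path

class ReadOnlyDir:
    """Capability granting read access to one directory, nothing more."""
    def __init__(self, root: Path):
        self._root = root.resolve()

    def read_text(self, relative: str) -> str:
        target = (self._root / relative).resolve()
        # Reject paths that escape the granted directory (Python 3.9+).
        if not target.is_relative_to(self._root):
            raise PermissionError("path escapes the granted directory")
        return target.read_text()

def third_party_word_count(docs: ReadOnlyDir, filename: str) -> int:
    # This "library" received only a ReadOnlyDir, so it can count words
    # in the granted directory but holds no handle to the network,
    # to ~/.ssh, or to anything else.
    return len(docs.read_text(filename).split())
```

Under such a discipline, auditing a dependency reduces to auditing the capabilities you pass it.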

But it's not only about security. It's mainly about making reliable, predictable software that can be easily reasoned about, and thus about making software as a whole more composable.

The need to control software data and resource access is all over the industry: jails, containers, VMs, sandboxes, capability-based systems and access control, FP, the Deno runtime, frowning upon global variables during code review. My point: we need it at the programming-language level.

This level of control would require a completely new set of language features to make it practical. Just as Rust's borrow checker is a whole new aspect of programming, required to enforce memory-safe data sharing, enforcing structured causality would call for something akin to language-built-in capability control: an automated, semi-transparent dependency-injection system. All of this mainly so that adding a logging statement in some deep leaf of the code does not lead to cascading changes in every calling function (recursively) all the way to the root, with the whole thing checked yet largely automated.