Friday, December 12, 2025

The choice between Rust and C-derived languages is not simply a matter of memory safety

Rust encourages a rather different "high-level" programming style that doesn't suit the domains where C excels. Pattern matching, traits, annotations, generics, and functional idioms all sound great on paper, but when you're building low-level systems, they create an environment where everything becomes verbose, ceremony-driven, and semantically dense. You write more code about the code than about the actual work being done.

This is a consequence of Rust's "killer" feature: preventing entire classes of bugs at compile time. An ambitious goal, and when it works, it's magic. But the price is that the language must express more information than the actual algorithm demands. That extra information becomes noise in domains that already prize simplicity – bare-metal firmware, packet parsers, high-performance I/O loops. These are ecosystems where adding structure just because the type system encourages you to is not a virtue; it's overhead.

C, by contrast, is a different kind of animal. The language rewards terseness and economy of expression. You write exactly what you mean, and almost nothing else. Once you understand how pointers, lifetimes, and data layouts behave, the compiler stops being a gatekeeper and becomes a quiet workhorse. It translates what you wrote into something very close to what the machine executes. No lifetimes to annotate. No type-level gymnastics to appease the borrow checker. No compiler front-end that feels like an argumentative partner.

The philosophical split between C-derived languages and Rust is deeper than "safe vs unsafe". It's about the role of the programmer in approaching a problem. Rust's mental model is rich, layered, and steeped in its type system. It demands that you think in terms of ownership semantics, region constraints, and trait bounds. Even a simple data structure can grow a halo of metadata: derives, macros, lifetimes, Send/Sync bounds, feature gates. And while all of this scales well for large teams building complex asynchronous systems, it's overkill in tight, performance-critical loops where the ideal number of moving parts is as few as possible.

C's mental model is brutally direct: bytes go here, pointers point there, and you control the boundaries. Unsafe? Yes. But also minimal. The clarity it provides in low-level domains isn't a side effect; it's the design goal. The language does not enforce guard rails, and therefore it does not require you to encode a novel's worth of constraints in your types. Good C codebases look simple not because they lack abstractions, but because they avoid abstractions that don't directly express the computation. And that's the thing Rust struggles with.

None of this means Rust is "bad" or that C is "better". It means they optimise for different values. Rust optimises for correctness and maintainability under heavy abstraction. C optimises for transparency and minimalism under extreme constraints.

Sometimes you don't want a language that keeps you safe. Sometimes you want one that simply gets out of your way.

LLMs are not a means to an end

We use these tools as know-it-all assistants that can answer questions in all areas of human knowledge. Their answers are never grounded in the empirical world of the senses. They don't have "skin in the game" and will happily change their opinion diametrically if you present the same question in a different light. They reinforce conventional wisdom by offering generic solutions to local problems with local variables. Our over-reliance on them will atrophy our reading and writing skills. We also use them as "rephrasers" to make ourselves sound better, supposedly. This is destroying genuine human connection and communication.

Friday, February 23, 2024

The pitfalls of pure rationality

Being rational with regard to the surrounding natural and physical world is undeniably good for our survival. We learn the effects of touching a hot stove and never attempt to do it consciously once we are older than 4. We know that driving on ice makes a vehicle hardly controllable, and tend to avoid it. This cause-and-effect paradigm is incredibly helpful for navigating the natural world. We don't need constant empirical proof for stuff like that. We don't need to deconstruct any social constructs.

However, I believe that this paradigm, when applied abstractly to human-made institutions and enterprises, skews our social understanding and navigation abilities. It fabricates toxic conventional wisdom and explanatory models that suppress basic reasoning in a local, context-sensitive manner. The Scottish philosopher David Hume famously said that "reason is, and ought only to be the slave of the passions". Once you start viewing the world through deterministic mega power structures and institutions, you are no longer sensitive to your local environment. You stop seeing a window of possibilities and the particular qualities of the people around you. Many of these assumptions are time- and region-sensitive and can become even less relevant with time.

If you become overly hooked on formalisms like behavioural economics, cognitive psychology, evolutionism or geopolitical power struggles, you are no longer a genuinely free-acting agent. You attempt to explain the behaviour of your neighbour with theories developed by distant academics. You start making decisions symbolically and in a virtue-signalling manner – "I will buy from another brand because I believe that someone I don't know 5000 miles away is a creationist".

What would the cure be? Develop social intuitions. Observe your local environment and the people around you. Try to listen to their personal stories. Tell them yours. Then you can probably figure out what's valuable for them, what drives them every day. Applying that knowledge in the economic realm might change the world for the better.

Thursday, December 14, 2023

The Essence of "Modern C++"

I can remember my first teenage attempts at learning programming in an imperative OO style with C++ in the late 1990s. That was the pre-stackoverflow, pre-Web 2.0 era when the web generally consisted of a few keyword-based search engines and millions of weird personal pages with those nasty "blink" tags. Any educational material covering my interest in programming was sparse and hard to come by. Living in a poor Eastern European post-communist country that was still pretty much isolated from the Western world and not knowing any conversational English didn't help, either. My best bet was a couple of locally-available, badly-translated C++ books that were mediocre to begin with. Having access to anything from Scott Meyers in English was probably a privilege for the very few.

The common ethos of those books revolved around inheritance, virtual functions and operator overloading. They never showed you how to approach a real programming problem. I was eager to learn how to create a basic game with moving stuff on the screen, but all I got was dead-boring hierarchies of shapes, animals and employees with short, mostly one-liner methods. Defining your own Matrix class, overloading the "+" operator and doing "m1 + m2" was advertised as cool and exciting. If there was a minimal, quintessential example I absorbed, it was this:

class Shape {
public:
    virtual ~Shape() {}
    virtual float area() = 0;
};

class Square: public Shape {
private:
    float side;

public:
    Square(float side): side(side) {}

    float area() {
        return side * side;
    }
};

class Rectangle: public Shape {
private:
    float width, height;

public:
    Rectangle(float width, float height): width(width), height(height) {}

    float area() {
        return width * height;
    }
};

Then they showed you how to instantiate and use these hierarchies:

Square *sq = new Square(5);
Rectangle *rect = new Rectangle(3, 4);

vector<Shape *> shapes = vector<Shape *>();
shapes.push_back(sq);
shapes.push_back(rect);

for (int i = 0; i < shapes.size(); i++) {
    cout << shapes[i]->area() << endl;
}

shapes.clear();
delete sq;
delete rect;

And that was mostly it. At best, you learned how to implement a linked list. No Pong game or flying teapots on the screen. At that point, continuing to play with Lego looked more interesting and certainly more joyful than programming. There was never a word about any potential problems with that style of programming. Segfaults, leaks, dangling pointers, reading uninitialised data, overflows - there was the implicit assumption that good programmers don't make such mistakes. There was no concept of "object ownership" as there is today. Passing objects between functions was an exercise of ad-hoc trickery. You had to learn the hard stuff by playing with the weird behaviours yourself.

Now, I would define the essence of modern C++ as transforming the above fragment into:

auto shapes = std::vector<std::unique_ptr<Shape>>();
shapes.push_back(std::make_unique<Square>(5));
shapes.push_back(std::make_unique<Rectangle>(3, 4));

for (const auto& shape : shapes) {
    std::cout << shape->area() << std::endl;
}

This code cannot leak and is exception-safe, provided that Shape declares a virtual destructor so that deleting through a Shape pointer is well-defined. The "pointers" are value objects with enforced semantics. Behind the scenes, move semantics transfers the ownership of the unique pointers from their temporary expressions to the containing vector. When the vector goes out of scope, the owned objects are destroyed safely.