Memory management is complicated: a love letter to Rust

Seeing problems in new light by learning a new programming language

This week I'm just talking about one thing: memory management. But I'm really talking about two things: memory management and Rust.


I've been learning Rust lately and it has been really fun. It's reminiscent of all the fun parts of C++ without the drawbacks and warts. After using C++ for a few years at the start of my career and vowing never to write it again, Rust is a breath of fresh air: I can get low level without the fear of memory leaks or segfaults.

One of the things that makes Rust feel really comfortable to me in a way that Go does not is memory management. This is personal preference, but I really like the feeling of control that comes with being responsible for memory: knowing exactly when it will be allocated and exactly when it will be deallocated. This is missing in Go, a garbage collected language, and Rust has it. (This isn't free: Rust is a much more complicated language than Go.)

Before learning Rust, I had a fairly simplistic view of memory management, despite working with C++ for a few years. In C++, you really have to be very careful with where memory is allocated and deallocated and even then, it's really easy to mess it up. So, my code tended to allocate and deallocate in very clearly related places. Ownership wasn't really transferred, and this was before move semantics entered C++ anyway.

Enter Rust. Halfway through the Rust book, I stumbled onto this article about dropping memory in a different thread. My mind was kind of blown, because this was something I hadn't considered before. If I did this in C++, I'm pretty sure I'd somehow make a mistake and leak memory or deallocate it twice. Rust won't let me do that. So you can allocate a really big object, move it into a separate thread, then return. Now your function's latency does not include the deallocation time! (This is dangerous if you end up allocating memory more quickly than the background thread can deallocate it, but I digress.)
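Here's a minimal sketch of what that trick looks like. The function name and the size of the allocation are made up for illustration; the point is just that ownership of the big value moves into the spawned thread, so the deallocation happens there instead of on the caller's critical path:

```rust
use std::thread;

fn handle_request() -> usize {
    // A hypothetical large allocation (size chosen just for illustration).
    let big: Vec<u64> = vec![0; 10_000_000];
    let answer = big.len();

    // Move `big` into a background thread. Its Drop (the deallocation)
    // runs there, not here.
    thread::spawn(move || drop(big));

    // Return without paying the deallocation cost in this function.
    answer
}

fn main() {
    assert_eq!(handle_request(), 10_000_000);
}
```

Because the closure takes `big` by move, the compiler guarantees the caller can't touch it afterward, so there's no window for a use-after-free or a double drop.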

This seems sort of obvious in retrospect. If you control allocation and deallocation, of course you can deallocate in a separate thread! But I had not seen this before and it's amazing to me. It opens up some important possibilities: you can make a really snappy UI by not slowing renders for deallocation, or you can speed up an API response by returning before you deallocate memory.

And this is made safe because Rust's ownership rules prevent double deallocations and use-after-free, and its type system prevents data races between threads, so you cannot end up with memory corruption or crashes from playing tricks like this. (To be fair, Rust doesn't guarantee the absence of memory leaks or deadlocks; what it rules out are the failure modes that corrupt memory.)

Most days, I write Python code. It's nice, it's easy, it's comfortable. You don't have to worry about memory management, ever. But that also hides these possibilities from you: this technique simply isn't expressible in Python, because you don't actually control the memory; it's garbage collected. The same goes for Go.

Languages like Rust are valuable for writing systems and low level code, but they're also very valuable on their own for opening you up to see things in a new way. I would never have known about this method of reducing latency if I hadn't done some work in a language that was capable of expressing it.

Have a nice rest of your Friday!

-Nicholas