Rust and the Future of Memory Corruption


Recently, Mozilla announced that they were working on Servo, a modern web browser written in Rust. This might have been the first time security researchers were exposed to Rust as a concept, and I think that very few people understand exactly what this development means for security.

"They're All Turing Complete Languages"

A sentiment I encounter a lot in the "ad-hoc computer security research community" is that since every programming language is (almost by definition) Turing-complete, there will be bugs, and some of those bugs will always be security bugs, so we will always have bugs. This is an attitude that I find really annoying, both in its reductionism and its incorrectness.

An especially pernicious class of security bugs comes in the form of memory corruption bugs: bugs which allow for the injection of new low-level programs into an already-executing low-level application. These bugs are problematic because they violate so many assumptions we make about containment, verification and security. They're also extremely difficult to locate in C and C++ code, due to features of those languages (raw pointers, manual object lifetime management, and so on).

We have gotten better, of course. Efforts like SAGE from Microsoft have dramatically improved the quality of Microsoft software over the last six years. Of course, many members of the "ad-hoc computer security research community" are also largely unaware of SAGE. But it cannot be ignored that, though the quality has increased, bugs in Microsoft software remain even with the efforts of the world's first and best whitebox fuzzer.

What does this tell us? Well, it tells me that writing secure C/C++ code is astoundingly difficult even if you have essentially unlimited resources to contribute to all aspects of the software development and testing lifecycle. What could we do differently?

A Quick Thought Experiment

One thing I like to think about is the following: Imagine all of the business applications written in C# since the introduction of C#. Now, imagine that C# never happened and all of those applications were written in C++. Would there be more, or less, exploits in the world?

C# is a managed language where the programmer does not have direct access to memory (unless they ask for it) and is guarded from things like object lifetime management (unless they work with native code via an FFI). In this safe environment we find ourselves with dramatically fewer memory corruption errors. Are there still memory corruption errors? Sadly, yes: people weld their C# apps together with C++ apps and the obvious happens. Are there other errors in the code? Of course. Are some of those security errors? Yes, nothing is stopping you from doing command injection or authentication bypasses.

Is it progress? Can you honestly answer 'no'? We moved from 'the programmer must double-check all of their low-level behavior' to a world where they don't have to.


After this thought experiment, consider Rust. It's not a managed language: it compiles to native code, has very few run-time checks, and wants above all to be performant. There is, however, a runtime that sits between some parts of user-written code and the processor. Ben Blum has written an excellent series of posts about Rust, and if you do one thing after reading this post, it should be reading his.

The Rust memory model claims to remove the possibility of use-after-free bugs. Languages like C# and Java manage this as well, but at the steep cost of a garbage collector behind the scenes and a bytecode interpreter with JIT compilation. Rust claims to remove use after free through language design, reference counting and static checking. I say 'claims' because I'm not aware of a solid proof of this, but no one (to my knowledge) has demonstrated a way to coerce a UAF out of an arbitrary Rust program.
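As a rough sketch of the idea (using modern Rust syntax, since the pointer system this post describes predates Rust 1.0): every value has exactly one owner, assignment moves ownership rather than aliasing it, and the allocation is freed exactly once when its owner goes out of scope, so no dangling name survives to be used after the free.

```rust
// A minimal sketch, assuming modern Rust syntax, of how ownership
// forecloses use after free: moves invalidate the old name, and the
// allocation is freed exactly once.
fn main() {
    let s = String::from("hello"); // `s` owns the heap allocation
    let t = s;                     // ownership moves to `t`; `s` is now invalid
    // println!("{}", s);          // compile error: use of moved value `s`
    println!("{}", t);             // fine: `t` is the sole owner
} // `t` goes out of scope here; the allocation is freed, exactly once
```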

How does Rust accomplish this? It has three different pointer types. The first is a 'unique' pointer, which is allocated from a special heap and for which the language disallows shallow copies; you can deep copy a uniquely-owned object, of course. Then there are managed pointers and borrowed pointers. Managed pointers are reference counted by the language runtime; programmers do not need to add any acquire or release semantics to their code to note that they hold a reference to a managed pointer. Of course, we know that reference-counted pointers add overhead, so the last pointer type is a borrowed pointer. A borrowed pointer points to a managed object, but is not reference counted.
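These three flavors can be sketched with their rough modern equivalents; the post describes the pre-1.0 sigils (~T unique, @T managed, &T borrowed), so treat `Box`, `Rc`, and `&` below as stand-ins, not the syntax Servo was written against.

```rust
use std::rc::Rc;

// A sketch of the three pointer flavors via their approximate modern
// Rust analogues: Box (unique), Rc (reference counted), & (borrowed).
fn main() {
    let unique: Box<i32> = Box::new(1);    // uniquely owned heap value
    let copied: Box<i32> = unique.clone(); // only deep copies are allowed

    let managed: Rc<i32> = Rc::new(2);     // reference counted
    let alias: Rc<i32> = Rc::clone(&managed); // count bumped for us
    assert_eq!(Rc::strong_count(&managed), 2);

    let borrowed: &i32 = &alias;           // a borrow: no count, no ownership
    assert_eq!(*copied + *borrowed, 3);
}
```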

This sounds like the way in, except that the Rust compiler includes a static analysis that checks for any borrowed pointer 'outliving' the object it points to. Rust won't let you compile your code if it finds a violation of this check.
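The check can be sketched like this (again in modern Rust, where this analysis became the borrow checker's lifetime rules); the commented-out line is exactly the kind of code the compiler refuses to build.

```rust
// A sketch, assuming modern Rust, of the 'no borrow outlives its
// referent' rule: a borrow may only be used while the owner is alive.
fn main() {
    let borrowed: &i32;
    {
        let owned: i32 = 42;
        borrowed = &owned;            // borrow of `owned` begins
        println!("{}", borrowed);     // fine: `owned` is still alive here
    } // `owned` is dropped at the end of this scope
    // println!("{}", borrowed);      // compile error: `owned` does not live long enough
}
```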

"But Java Has Bugs Too"

Java, as a language, has many of these features as well (though enforced via a managed runtime). "Java has bugs too" is another thing that a lot of hackers like to say when faced with new programming language technology that has an impact on security. I hate this view as well. The bugs that people point out in Java are bugs in the sandbox/security policy system, or deserialization errors in the class loader and bytecode interpreter. Essentially, the problem is that if you can run an arbitrary Java program, you as an attacker can break out of a sandbox and run whatever you want.

This is totally different from the situation with Rust. Something I have seen very few people talk about is finding bugs in Java applications that result in memory corruption. A challenge: feed arbitrary data to a Java application such that it results in memory corruption and shellcode execution. I'm aware of one successful execution of an attack like this, and it involved a heat gun.

"I Found A Bug In The Borrow Checker!"

Given the level of analytic rigor most of the "ad-hoc computer security research community" demonstrates on a daily basis, I would be surprised if there were a BlackHat USA talk with the above title, but it could happen. Finding one of those bugs will be tricky and will probably depend not just on an attacker's ability to control input data to a Rust program, but also on the Rust programmer unknowingly expressing certain idioms in their code. This is no different from today, really, with one important difference: once you publish the technique that reaches this bug class, Mozilla patches the borrow checker and the entire bug class dies.

So for each memory error that you want to exploit in Servo, you'll need to discover a logical error in a compiler, paired with code in Servo that exercises it, and then craft data to reach it. Will it be possible? Until Mozilla writes a certified Rust compiler, we can't rule it out, but how likely is it? And to think about what the future holds, look at the number of bugs John Regehr found in CompCert versus other C compilers in his Csmith research.

The Door Closing

Previously, the systems and PL communities introduced lots of technologies (W^X, ASLR, stack canaries, safe heap unlinking) designed to make exploitation of software harder. At each step, many members of the "ad-hoc computer security research community" adapted, but many more were left behind, and now we find that many are listening and talking but fewer are doing, because it's harder.


So is Servo the "end of security"? Of course not. That question is ill-posed: what "secure" software is still hasn't really been well defined, and if you think of secure software as software without memory corruption errors, your focus is probably too narrow. Rust is also very complicated to program in (most programmers have trouble with one pointer type, let alone three), so it will be interesting to see whether Mozilla is able to pull off writing a whole browser in this new language. It will also be interesting to see whether their early data on the performance of the language generalizes to something as massive as a web browser, and whether they compromise by backending a lot of data processing to C++ code written against an FFI.

It's going to be an interesting experiment to watch, and it could wind up producing software in which none of the exploitive idioms we've developed in the last fifteen years are valid. This should be exciting and scary.