After writing this comment I noticed it became a bit ranty, sorry for that. Something about this article rubbed me the wrong way.
The relevant section seems to be this:
Browser engines and garbage-collected runtimes are classic examples of code that fights the borrow checker. You’re constantly juggling different memory regions: per-page arenas, shared caches, temporary buffers, objects with complex interdependencies. These patterns don’t map cleanly to Rust’s ownership model. You end up either paying performance costs (using indices instead of pointers, unnecessary clones) or diving into unsafe code where raw pointer ergonomics are poor and Miri becomes your constant companion.
The first half is obviously correct: this kind of data model doesn't work well with the ownership model Rust uses for its borrow checker. I don't like the conclusion, though. Rust makes you pay the performance costs necessary to make your code safe; you would need to pay similar costs in other languages if you intend to write safe code.
Sure, if you are fine with potential memory corruption bugs, you don’t need these costs, but that’s not how I would want to code.
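To make the "indices instead of pointers" cost concrete, here's a rough sketch of the arena pattern in safe Rust (toy example, made-up names, not from the article):

    // Nodes reference each other by index into the arena, not by
    // &Node or *mut Node, so the borrow checker has nothing to fight.
    // The cost: an extra bounds check / indirection on every access.
    struct Arena {
        nodes: Vec<Node>,
    }

    struct Node {
        value: u32,
        children: Vec<usize>, // indices into Arena::nodes
    }

    impl Arena {
        fn add(&mut self, value: u32) -> usize {
            self.nodes.push(Node { value, children: Vec::new() });
            self.nodes.len() - 1
        }

        fn link(&mut self, parent: usize, child: usize) {
            self.nodes[parent].children.push(child);
        }
    }

    fn main() {
        let mut arena = Arena { nodes: Vec::new() };
        let root = arena.add(1);
        let leaf = arena.add(2);
        arena.link(root, leaf); // shared structure and cycles are fine
        assert_eq!(arena.nodes[arena.nodes[root].children[0]].value, 2);
    }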
The other thing bugging me is how Miri being your constant companion is framed as something bad. Why? Miri is one of the best things about Rust's unsafe-code tooling. It's like Valgrind or sanitisers, but better.
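For example, something like this compiles and may even appear to run fine, but Miri flags it immediately (toy example):

    // Undefined behaviour that a normal test run can easily miss:
    // `cargo +nightly miri run` reports the use-after-free, while
    // Valgrind/ASan may or may not, depending on what the allocator
    // did with the freed memory.
    fn main() {
        let p: *const i32;
        {
            let x = Box::new(42);
            p = &*x as *const i32;
        } // the Box is dropped here, p now dangles
        let v = unsafe { *p }; // UB: read through a dangling pointer
        println!("{v}");
    }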
Now, the raw pointer ergonomics could be better, I'll give them that. But what Rust does with raw pointers, or rather what they are planning to do, is really, really cool. Provenance and supporting CHERI natively are just not possible for languages that chose the ergonomics of a raw integer over what Rust does.
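A rough sketch of the strict-provenance APIs (stable since Rust 1.84), which derive a new address from an existing pointer instead of round-tripping through an integer:

    // Casting a pointer to usize and back loses provenance and can't
    // work on CHERI, where pointers carry capabilities. map_addr/addr
    // keep the provenance attached the whole time.
    fn main() {
        let x = 0u64;
        let p: *const u64 = &x;

        // Stash a tag in the low bit (zero here, since u64 is 8-aligned).
        let tagged = p.map_addr(|a| a | 0b1);

        // Recover the original pointer; provenance was never lost.
        let untagged = tagged.map_addr(|a| a & !0b1);
        assert_eq!(unsafe { *untagged }, 0);

        // addr() gives the numeric address without exposing provenance.
        assert_eq!(untagged.addr(), p.addr());
    }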
They’re not calling Rust unsafe. There is a memory-safe mode and a memory-unsafe mode in Rust, and this was built in unsafe Rust, which allowed the memory bug to be exploited.
You don’t understand what unsafe means
Rust by default will not allow you to make certain kinds of errors, which is great. But if you are doing something advanced, down at the hardware level [see below], you might need to disable those defaults in order to write the code you need. This is what people mean by “unsafe”: lacking the normal memory safeguards.
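A minimal sketch of what that looks like (toy example, nothing to do with the actual CVE):

    // Safe Rust refuses the raw-pointer dereference; wrapping it in
    // `unsafe` tells the compiler "I have checked this myself".
    fn main() {
        let x = 5;
        let p = &x as *const i32;
        // let y = *p;         // error: dereference of raw pointer is unsafe
        let y = unsafe { *p }; // fine: we know p is valid here
        assert_eq!(y, 5);
    }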
With careful coding, “unsafe Rust” or normal C, for that matter, can be free of bugs and safe. But if programmers make a mistake, vulnerabilities can creep in more easily in the unsafe sections.
Is that basically it?
But if you are doing something advanced, down at the hardware level
This part is wrong. Otherwise yes correct.
The “unsafe” code in Rust is allowed to access memory locations in ways that skip the compiler’s checks that guarantee the memory location holds valid data. The programmer is on their own to ensure that.
Which as you say is just the normal state of affairs for all C code.
This is needed not because of hardware access but just because sometimes the proof that the access is safe is beyond what the compiler is able to represent.
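The classic example is handing out two non-overlapping mutable halves of one slice; this is roughly what the standard library's split_at_mut does internally (a sketch, not the actual stdlib code):

    // We know the two halves can't alias, but the borrow checker only
    // sees two mutable borrows of `v` and rejects the safe version,
    // so the proof has to live in an unsafe block.
    use std::slice;

    fn split<T>(v: &mut [T], mid: usize) -> (&mut [T], &mut [T]) {
        let len = v.len();
        assert!(mid <= len);
        let p = v.as_mut_ptr();
        unsafe {
            (
                slice::from_raw_parts_mut(p, mid),
                slice::from_raw_parts_mut(p.add(mid), len - mid),
            )
        }
    }

    fn main() {
        let mut v = [1, 2, 3, 4];
        let (a, b) = split(&mut v, 2);
        a[0] = 10;
        b[0] = 30;
        assert_eq!(v, [10, 2, 30, 4]);
    }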
Thank you for the correction, I’ll edit my comment.
sometimes the proof that the access is safe is beyond what the compiler is able to represent
Could you say a few more words about this? In what situations do you have to write ‘unsafe-tagged’ code blocks? Could this be changed by improvements to the compiler? Or is it necessitated by the type of task being done by the code?
For memory safety, which is not unsafe Rust
You say that. But the CVE is a memory corruption bug.
Which is worse?
Surely if X > 0 then this is still a net improvement?
I don’t know, but I found this article interesting with respect to unsafe Rust - https://lightpanda.io/blog/posts/why-we-built-lightpanda-in-zig