Welcome.
[profile.release]
opt-level = 'z'  # Optimize for size
It doesn’t matter here, let’s get that out of the way.
But I’m wondering if someone/someplace is wrongly recommending this!
Because lately I’ve been seeing this getting set in projects whose binaries wouldn’t typically be running in environments where this is required or even helpful.
My concern is that some developers are setting this without really understanding what -Oz actually does.

This comment would be much better if it featured an explanation of your concerns with that particular opt setting.
There is no deep explanation or anything.
The release profile defaults to -O3. -Oz compromises runtime performance for binary size.

When you start seeing, for example, network services that aren’t going to run in constrained environments, and don’t restrict themselves from heap allocations and the like, setting -Oz in their release profiles, and then you start to see small beginner projects setting it like the OP here, you start to wonder if it’s a pattern: the settings are being copied from somewhere else, with potential misconceptions like thinking it can meaningfully reduce runtime memory usage or something.

I actually think I found the culprit, thanks to one of the projects I was hinting at, which mentions it:
https://github.com/johnthagen/min-sized-rust
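For reference, here’s a rough sketch (not a drop-in recommendation) of the kind of settings that guide collects, next to what Cargo already does by default; the comments are mine:

# Cargo's implicit default for `cargo build --release` (nothing to write):
#   [profile.release]
#   opt-level = 3        # optimize for speed

# Roughly the size-focused settings the guide above collects; they only
# make sense when binary size is the actual constraint:
[profile.release]
opt-level = "z"      # optimize for size, trading away runtime performance
lto = true           # link-time optimization across the dependency graph
codegen-units = 1    # better optimization at the cost of compile time
strip = true         # strip symbols from the final binary
panic = "abort"      # drop the unwinding machinery

Copying the second block into a project that isn’t size-constrained typically just costs runtime performance and compile time for no benefit.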
Looks like we have a second wave of net-negative common wisdom in the ecosystem, the first being optimizing for dependencies’ compile times. But this one is not nearly as bad, as it doesn’t affect libraries.
My understanding is that it should almost only ever be set for WASM. Certain low-memory machines may also want it, but that’s extremely rare.
I’m not sure who’s recommending it; I’ve only ever seen it recommended for WASM applications.
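If the size concern really is limited to a wasm artifact, one option is an opt-in custom profile so the normal release profile stays untouched. A minimal sketch, assuming a made-up profile name wasm-release (inherits-based custom profiles are a standard Cargo feature):

[profile.wasm-release]
inherits = "release"   # start from the normal release settings
opt-level = "z"        # shrink the .wasm payload at some runtime cost
lto = true             # usually worth it for wasm size

# Built only when explicitly requested, e.g.:
#   cargo build --profile wasm-release --target wasm32-unknown-unknown

That way a plain cargo build --release keeps the default -O3 for native binaries.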
This is part of the misconceptions about it.
It doesn’t meaningfully help with runtime memory unless much harder constraints are applied during development, to the point where it would actually become relevant at run time. It can be relevant for low-storage machines, however; that’s what binary size is primarily about, after all. And low storage and low memory may go hand in hand at times as device properties.
See the link in my other comment.
To be clear - when I say low-memory machines, I’m referring to devices with, say, 128MiB of storage and memory (which I’ve actually developed for before). If you’ve got storage in the gigabytes, then there’s no way optimizing for size matters lol.