Merely two assumptions

  1. The address of a struct may be 0x0 - whether fixed by the hardware or reported at runtime.
  2. The program must perform receiver operations (field accesses, method calls) on it in place.

Each of these assumptions is commonplace in bare-metal / safety-critical programming; their intersection is unrepresentable in today's Rust.

Furthermore, the program must be proven correct - not tested, not assumed, proven. Any path that admits undefined behaviour is inadmissible, regardless of how unlikely it may be at runtime.

// This address is forced by the hardware.
// Rust does not get to choose it.
const BLOB_P: usize = 0;
const _: () = assert!(usize::BITS == 16);

#[unsafe(no_mangle)]
extern "C" fn ignite() -> ! {
    // BLOB can never be copied out with `read_volatile`:
    // there is no spare RAM to hold a copy of the entire struct.
    let blob = unsafe { &mut *(BLOB_P as *mut DevTreeBlob) };
    // instant UB upon reference construction

    let mapping = blob.foo();
    blob.bar |= 0b1;

    ...
}
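Field access alone does have a raw-pointer spelling that never asserts the non-null invariant; the sketch below (with a hypothetical two-field layout for `DevTreeBlob`, demonstrated against a real host allocation since address 0x0 cannot be dereferenced in a test) shows what survives - and the comment at the end shows what does not:

```rust
#[repr(C)]
struct DevTreeBlob {
    // hypothetical fields; the real layout is irrelevant here
    magic: u32,
    bar: u32,
}

// Sets the low bit of `bar` without ever materialising
// `&DevTreeBlob` or `&mut DevTreeBlob`: `&raw mut (*blob).bar`
// projects straight to the field, so no non-null invariant is
// asserted at any point.
unsafe fn set_low_bit(blob: *mut DevTreeBlob) {
    unsafe {
        let bar = &raw mut (*blob).bar;
        bar.write_volatile(bar.read_volatile() | 0b1);
    }
}

fn main() {
    // On the target `blob` would be `BLOB_P as *mut DevTreeBlob`;
    // on a host we must demonstrate with a real allocation.
    let mut storage = DevTreeBlob { magic: 0xd00d_feed, bar: 0 };
    unsafe { set_low_bit(&mut storage) };
    assert_eq!(storage.bar, 0b1);
    // There is no analogous raw-pointer spelling of `blob.foo()`:
    // method-call syntax always resolves through a `&T` / `&mut T`
    // receiver, which is exactly the chain dissected below.
}
```

This is why the Vorago-style workaround stops at single field reads: the projection operator covers `blob.bar`, but nothing covers `blob.foo()`.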
use core::num::NonZeroUsize;
use core::slice::from_raw_parts as mkslice;

// The `map` address is reported by the firmware in mask ROM.
// Rust does not get to choose it.

// Caller ensures there's at least one entry
#[unsafe(no_mangle)]
extern "C" fn spark(map: *const RamLayout, len: NonZeroUsize) -> ! {
    for entry in unsafe { mkslice(map, len.get()).iter() } {
        // instant UB upon calling `from_raw_parts`
        // as `from_raw_parts` constructs `&T`
        ...
    }

    ...
}
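The slice can be avoided by walking the map with raw-pointer arithmetic, but only at the cost the text above already rules out: every access becomes a by-value copy. A minimal sketch, assuming a hypothetical two-word `RamLayout`:

```rust
use core::num::NonZeroUsize;

#[repr(C)]
struct RamLayout {
    // hypothetical fields standing in for the real entry layout
    base: usize,
    size: usize,
}

// Walks the firmware map with raw-pointer arithmetic: no
// `&[RamLayout]` and no `&RamLayout` is ever created, so
// `from_raw_parts`'s instant UB is avoided.
unsafe fn total_ram(map: *const RamLayout, len: NonZeroUsize) -> usize {
    let mut total = 0;
    for i in 0..len.get() {
        // `read` copies the entry out by value - harmless for a
        // two-word POD, but exactly the copy that is impossible
        // for a large struct with no spare RAM.
        let entry = unsafe { map.add(i).read() };
        total += entry.size;
    }
    total
}

fn main() {
    let map = [
        RamLayout { base: 0x0, size: 4 },
        RamLayout { base: 0x4, size: 8 },
    ];
    let len = NonZeroUsize::new(map.len()).unwrap();
    assert_eq!(unsafe { total_ram(map.as_ptr(), len) }, 12);
}
```

In other words: the escape hatch exists only for data small enough to copy, which is precisely not the regime the two assumptions describe.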

Evidence

The problem has already surfaced in practice:

That the Vorago case was resolved with read_volatile does not close the problem - it merely shows that the simplest instance (a single u32 read) has a workaround. The two assumptions above require no exotic hardware to co-occur; they require only an ordinary firmware handoff on an ordinary MCU. The question is not whether the intersection will arise again, but when - and whether Rust will have an answer that does not involve undefined behaviour.

No workaround is possible

The &T chain

The non-zero invariant is not a surface-level convention - it is load-bearing at every layer of the language, from trait definitions down to MIR transforms:

Layer I - Trait signature

// rust-lang/rust @ 8ddf4ef064fb702fed0f3d239ec8d0bac607484e
// library/core/src/ops/deref.rs

pub const trait Deref: PointeeSized {
    type Target: ?Sized;
    fn deref(&self) -> &Self::Target;
} //                   ^ `&T` hardcoded in return type

pub const trait DerefMut: [const] Deref + PointeeSized {
    fn deref_mut(&mut self) -> &mut Self::Target;
} //                           ^^^^ `&mut T` hardcoded in return type
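To see why Layer I already forecloses the pattern, consider a hypothetical smart pointer over a hardware-fixed address. The trait signature leaves its `deref` body no choice but to manufacture a `&T`:

```rust
use core::ops::Deref;

// Hypothetical wrapper over a hardware-fixed address.
struct HwRef<T>(*mut T);

impl<T> Deref for HwRef<T> {
    type Target = T;
    fn deref(&self) -> &T {
        // The trait signature forces `&T` out of this method; if
        // `self.0` is 0x0, constructing that reference is the very
        // UB the document is about. There is no alternative,
        // raw-pointer-returning signature to implement instead.
        unsafe { &*self.0 }
    }
}

fn main() {
    // Sound here only because the address is a real, non-null
    // allocation - the one luxury the hardware does not grant.
    let mut x = 5u32;
    let h = HwRef(&mut x as *mut u32);
    assert_eq!(*h, 5);
}
```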

Layer II - Field / Method resolution

// rust-lang/rust @ 8ddf4ef064fb702fed0f3d239ec8d0bac607484e
// library/core/src/ops/deref.rs

// `Receiver` is blanket-implemented over `Deref`.
impl<P: ?Sized, T: ?Sized> Receiver for P
where
    P: Deref<Target = T>,
{
    type Target = T;
}

impl<T: PointeeSized> LegacyReceiver for &T {}
impl<T: PointeeSized> LegacyReceiver for &mut T {}

// compiler/rustc_hir_typeck/src/expr.rs

// Field expressions automatically deref
let mut autoderef = self.autoderef(expr.span, base_ty);

// Every `v.method()` call and `v.field` reference goes through this chain.
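The autoderef loop quoted above can be observed from stable Rust. In this sketch (hypothetical `Wrapper` and `Inner` types), `Wrapper` has no field `value`, yet the field expression resolves - because each hop of the chain silently produces an `&T` receiver:

```rust
use core::ops::Deref;

struct Inner {
    value: u32,
}

// Hypothetical wrapper; any type with a `Deref` impl joins the chain.
struct Wrapper(Inner);

impl Deref for Wrapper {
    type Target = Inner;
    fn deref(&self) -> &Inner {
        &self.0
    }
}

fn main() {
    let w = Wrapper(Inner { value: 7 });
    // `Wrapper` has no field `value`; the autoderef loop inserts
    // `Deref::deref(&w)`, yielding an `&Inner` receiver. Every hop
    // in the chain is an `&T` - and therefore carries the non-null
    // invariant.
    assert_eq!(w.value, 7);
}
```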