* It's valid to have overlapping buffers on a heap, which can result in
overlapping address ranges.
* It's impossible from the API to tell which buffer an address came from if
there are multiple possibilities, but any of them is technically correct -
we just might display a different buffer than the user expected.
* The problem with storing resource pointers in descriptors is that the
pointer can be invalidated without detection - a kind of A-B-A problem - if
the resource is deleted and another resource is then allocated at the same
pointer.
* Descriptor creation in D3D12 is extremely complex and there are many ways a
resource could become incompatible with the descriptor metadata struct.
Detecting all possible ways a new resource could be incompatible is not
feasible.
* As a solution, we store the ResourceId, which we know is immutable;
serialise via the pointer; and keep the live ResourceId on replay. If the
resource was deleted, serialisation will fail because we look up the pointer
at the point of serialisation, and a deleted resource will resolve to NULL.
* To try and abstract this away and avoid potential confusion with the
ResourceIds, we make the descriptor contents private and provide accessors.
* Any time the replay types' serialisation changes, the remote server
becomes incompatible. We're not going to add backwards compatibility to that
system, so we need to break it every time.
* Really the version should be bumped any time renderdoc_serialise.inl changes,
but we don't have an auto-incrementing revision to use.
* This measures the number of samples that pass the depth/stencil tests -
in the case of early tests, this means it doesn't account for shader discard
before writing.
* Seemingly if NULL, the blob embedded in the shaders is used (it must be
present in at least one shader, and must match exactly if present in
multiple shaders).
* This fixes apps which load legacy GL functions using dlsym(). Some apps
load legacy GL functions unconditionally, even if they have no intention
of calling them.