* The crash handler gets destroyed and recreated when we need to change creation
parameters, and this races against anything else trying to use it to register
or unregister memory regions.
* In particular, when initialising the replay we recreate the crash handler right
after kicking off the GPU enumeration.
* We add a read/write lock to prevent this kind of problem in future, without
adding significant contention on most paths (not that these are high-traffic
in any case).
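As a sketch of the locking scheme, with hypothetical names (CrashHandler, RegisterRegion and RecreateHandler are illustrative, not RenderDoc's real interface), a std::shared_mutex lets region registration take a cheap shared lock while the rare destroy-and-recreate path takes the exclusive lock:

```cpp
#include <cassert>
#include <memory>
#include <mutex>
#include <shared_mutex>

// Hypothetical stand-in for the crash handler; assume its own methods are
// internally thread-safe, as only handler lifetime needs external locking.
struct CrashHandler
{
  int param;
  explicit CrashHandler(int p) : param(p) {}
  void RegisterMemoryRegion(void *base, size_t size) { (void)base; (void)size; }
};

std::shared_mutex handlerLock;
std::unique_ptr<CrashHandler> handler;

// Register/unregister paths take the shared (read) lock, so concurrent
// registrations don't serialise against each other - only against recreation.
void RegisterRegion(void *base, size_t size)
{
  std::shared_lock<std::shared_mutex> lock(handlerLock);
  if(handler)
    handler->RegisterMemoryRegion(base, size);
}

// Destroying and recreating with new parameters takes the exclusive (write)
// lock, so nothing can race against the handler while it is dead.
void RecreateHandler(int newParam)
{
  std::unique_lock<std::shared_mutex> lock(handlerLock);
  handler.reset();
  handler = std::make_unique<CrashHandler>(newParam);
}
```

Under this scheme the replay-initialisation recreate can no longer interleave with an in-flight registration on another thread.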
* Add CATCH_CONFIG_FORCE_FALLBACK_STRINGIFIER to force use of ToStr in
all cases. We can handle ints etc. ourselves without needing ostringstream,
and this allows us to handle enums that would otherwise just be printed as
their integer value.
* Add CATCH_CONFIG_INLINE_DEBUG_BREAK which restores the Catch 1.0
behaviour of debugbreaks happening in macros and so in-line at the
actual failure site. So the debugger stops on the CHECK() or REQUIRE()
call instead of inside AssertionHandler::complete().
Cherry pick from https://github.com/baldurk/renderdoc/commit/4232736fc21fc6a13a4de6997a5ae106598b225f
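As a sketch of how these options would be consumed (the exact pairing with CATCH_CONFIG_FALLBACK_STRINGIFIER naming the fallback function is assumed from upstream Catch's existing fallback-stringifier option, and the FORCE/INLINE_DEBUG_BREAK defines are the local patches this commit adds), the defines go before the patched catch.hpp in the harness's translation unit:

```cpp
// Assumed configuration fragment - not copied from the actual harness.
// Route all stringification through the project's own ToStr, including
// enums that Catch would otherwise print as bare integers.
#define CATCH_CONFIG_FORCE_FALLBACK_STRINGIFIER
#define CATCH_CONFIG_FALLBACK_STRINGIFIER ToStr

// Break in the CHECK()/REQUIRE() macro itself, not in Catch internals.
#define CATCH_CONFIG_INLINE_DEBUG_BREAK

#include "catch.hpp"
```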
Xcode project generation is driven by the sources list.
Before this commit "pipestate.inl" was not present in the Xcode project;
after this commit it is present.
* This can be out of bounds when purely considering the instruction index if
there's an instruction near the end, beyond the point where there are (many)
further values.
* If we serialise from the map, and then separately memcpy from it to populate
the reference data, then we risk another thread's writes happening in between
and not being detected on subsequent checks. Instead we need to copy from the
serialised data; that way, regardless of what we snapshotted, we always
compare against those same bytes to detect any further writes.
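The fixed ordering can be sketched like this (names and structure are illustrative, not RenderDoc's actual serialisation code; `mapped` stands in for persistently-mapped memory that other threads may write at any time):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

std::vector<unsigned char> serialisedData;  // bytes written to the capture
std::vector<unsigned char> referenceData;   // baseline for detecting writes

void SnapshotMappedMemory(const unsigned char *mapped, size_t size)
{
  // Read the racy mapped memory exactly once, into the serialised copy.
  serialisedData.assign(mapped, mapped + size);

  // Populate the reference data FROM the serialised copy, not from the map
  // again. Whatever bytes were snapshotted are exactly the bytes we later
  // diff against, so a write landing between the two reads can't make a
  // change invisible on subsequent checks.
  referenceData = serialisedData;
}
```

The key property is that mapped memory is read only once per snapshot; the buggy pattern read it twice and could observe two different values.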
* We used to allow applications to call vkFlushMappedMemoryRanges on coherent
memory to manually annotate regions of memory that are changed in persistent
maps, thus avoiding the overhead of RenderDoc needing to check for changes on
each submit.
* Unfortunately this means that if the application calls flush incorrectly
then changes will no longer appear, even though the application's behaviour
was completely valid (if misleading): by the spec, vkFlushMappedMemoryRanges
is a no-op on coherent memory, so incorrect calls to it make no difference.
* Since applications making use of this are rare or non-existent we just remove
the optimisation.
* The shape of DXIL, with its reliance on pointers and relative addressing,
poses a challenge for editing. Currently we pre-reserve all arrays and take
pointers to match DXIL's format and so that edits don't require lots of index
fixups. However the arrays may resize as things are inserted during editing.
* The solution we take here is to copy all the arrays that will mutate to new
storage, reserving enough for the edits, *while keeping the old array
storage around*. We then assign all elements IDs, and after editing we do a
fixup pass to find any pointers that point into the old arrays and locate
the corresponding elements in the new arrays by ID.
* This means that pointers to the old storage remain valid and point to the
right object even after we've resized for editing, and we only fix them up
right at the end after all the edits so we can look up indices from pointers
again.
* Sadly this approach may not work indefinitely, and it does require
conservative reservation. In future we might have to completely clone DXC's
data storage as well (arena allocated and new everything, with no arrays for
storage).
* We also allow GetPointerType to silently return NULL when no pointer exists,
as this will be useful for encoding when we want to check if a pointer type
exists.
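A toy sketch of the ID-based fixup pass (Value, FixupPointer and DemoFixup are hypothetical stand-ins for the real DXIL structures):

```cpp
#include <cassert>
#include <vector>

struct Value
{
  unsigned id;
  int data;
};

// After all edits are done, locate a pointer into the old (still-alive)
// storage in the new storage by matching IDs.
Value *FixupPointer(const Value *oldPtr, std::vector<Value> &newValues)
{
  for(Value &v : newValues)
    if(v.id == oldPtr->id)
      return &v;
  return nullptr;
}

int DemoFixup()
{
  // Old storage stays alive for the whole edit, so this pointer stays valid.
  std::vector<Value> oldValues = {{1, 10}, {2, 20}, {3, 30}};
  Value *stale = &oldValues[1];

  // New storage: copied up front with conservative reservation, so the
  // edits below never reallocate it and invalidate its own pointers.
  std::vector<Value> newValues;
  newValues.reserve(oldValues.size() + 8);
  newValues.insert(newValues.end(), oldValues.begin(), oldValues.end());
  newValues.push_back({4, 40});   // an edit: a newly inserted value

  // Only at the very end are old pointers translated to new ones.
  return FixupPointer(stale, newValues)->data;
}
```

Returning nullptr when no match exists mirrors the relaxed GetPointerType behaviour described above, where a missing pointer type is a queryable condition rather than an error.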
* We now track the values in a simple array so getting value IDs amounts to just
getting an index in this array. This avoids the need to iterate in an llvm-
identical way to enumerate values. That does mean that we need to insert new
values into the array in the correct order, which isn't too bad.
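The flat value array can be sketched as follows (Value and the function names are hypothetical; a side map from value to index is one assumed way to make ID lookup a plain index fetch):

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <vector>

struct Value
{
  int payload;
};

std::vector<const Value *> values;      // in llvm's enumeration order
std::map<const Value *, int> valueIDs;  // value -> index into 'values'

void AppendValue(const Value *v)
{
  valueIDs[v] = (int)values.size();
  values.push_back(v);
}

// New values must land at the position llvm would enumerate them in, which
// shifts every later ID up by one - "not too bad", but it must be resynced.
void InsertValue(size_t idx, const Value *v)
{
  values.insert(values.begin() + (std::ptrdiff_t)idx, v);
  for(size_t i = idx; i < values.size(); i++)
    valueIDs[values[i]] = (int)i;
}

// Getting a value ID is now just an index lookup, with no need to iterate
// the module in an llvm-identical way each time.
int GetValueID(const Value *v)
{
  return valueIDs.at(v);
}
```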
* We cheat slightly with the attributes - we combine these on load, so we lose
information about the multiple groups they may reference. So we save the
groups that we read from and use those when writing the attributes.