* This also includes serialising Vulkan handles as IDs by type,
  instead of using a macro as before.
* In contrast to the old code, we serialise handles as the wrapped
  type, since e.g. when serialising a command buffer in a vkCmd*
  function we want to get back the wrapped type. This means some
  structs need to be unwrapped on replay where before they were
  "pre-unwrapped" after serialisation, but this isn't a big deal.
* The serialise function handles fetching the ID on write, and looking
up the resource manager to fetch the handle on read.
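  As a rough sketch of that mechanism (the type and function names
  here are illustrative stand-ins, not the real RenderDoc types): on
  write the handle is translated to its ResourceId, and on read the
  ResourceId is looked up in the resource manager to recover the live
  handle.

```cpp
#include <cstdint>
#include <map>

// Illustrative stand-ins for the real types.
using ResourceId = uint64_t;
struct VkCommandBuffer_T {};
using VkCommandBuffer = VkCommandBuffer_T *;

// Hypothetical resource manager mapping handles <-> IDs both ways.
struct ResourceManager
{
  std::map<void *, ResourceId> ids;
  std::map<ResourceId, void *> live;

  ResourceId GetID(void *handle) { return ids[handle]; }
  void *GetLiveHandle(ResourceId id) { return live[id]; }
};

struct Serialiser
{
  bool writing;
  ResourceManager *rm;
  ResourceId stream;    // stands in for the byte stream: holds one ID

  // On write: fetch the ID for the handle and store it.
  // On read: fetch the stored ID and look up the live handle.
  template <typename HandleType>
  void Serialise(HandleType &handle)
  {
    if(writing)
      stream = rm->GetID(handle);
    else
      handle = (HandleType)rm->GetLiveHandle(stream);
  }
};
```

  The point is that the same Serialise call works symmetrically:
  capture-side it emits an ID, replay-side it yields a usable handle.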
* This allows us to do nice things like:
    Serialise_SetConstBufs(SerialiserType &ser, UINT NumBuffers,
                           ID3D11Buffer *const *ppConstantBuffers)
    {
      SERIALISE_ELEMENT_ARRAY(ppConstantBuffers, NumBuffers);
      // ...
    }
And the array handling and serialise recursion will correctly iterate
and serialise the array as a series of ResourceIds, then restore it
as a series of handles.
* When opening a capture file, other formats can now be imported
  without needing a completely different interface. Only rdc files
  can be replayed, but any other file format can be loaded and its
  structured data accessed through the same interface.
* The replay initialisation and capture writing interfaces also use
  the RDCFile instead of passing filenames or Serialisers around
  directly. Driver initialisation parameters are now entirely private
  and don't need to be exposed - any API-agnostic metadata such as
  the thumbnail, driver, etc. is accessed via the RDCFile container
  itself.
* Callstack resolution is now part of the container file, not the
  back-end by way of its Serialiser.
* Importers/Exporters to other non-RDC formats are registered in a
similar way to replay/remote drivers.
* It is also then possible to construct an RDC file from thin air, by
creating an empty RDCFile container and filling it with data, then
requesting it to be written to disk.
* Since the initial states will all be written directly to the file
  serialiser without intermediate copies, there will be no ability to
  seek backwards to fix up chunk lengths.
* Instead, all chunks write an upper-bound size estimate first, then
  padding is written at the end if the chunk comes in under the
  estimate.
* The new system contains the ability to export serialised data to a
structured form in memory - and conversion back to serialised bytes.
* This will allow offline transformations/visualisation of capture files
as well as more rich representations of API calls in the UI.
* Likewise it enables a number of optimisations such as the ability to
write straight from mapped API memory to disk via a compressor,
without any intermediate copies.
* The commits following this add a new refactored serialisation system.
As a balance between readability of individual commits and ability to
bisect/compile at interim steps, the individual API back-ends are
disabled in this commit, so that after the serialisation system is
committed the project compiles again as soon as possible.
* However this is of course not much use in itself as then the code
can't function usefully. The drivers are re-enabled as each one is
updated to the new system, so at least the codebase should stay
compiling, even if each individual API may not yet be functioning.
* Normally this wasn't a problem, but in a few cases we were doing
  lots of push_back calls, which caused horrible resize-by-1
  behaviour.
* So instead we double the available capacity (unless the request
  itself needs more than that).
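  A sketch of that growth policy (an illustrative toy container, not
  the real one): grow to double the current capacity or to the
  requested count, whichever is larger, so repeated push_back calls
  are amortised rather than reallocating on every element.

```cpp
#include <cstdlib>
#include <cstring>

struct IntArray
{
  int *data = nullptr;
  size_t count = 0, capacity = 0;

  void reserve(size_t required)
  {
    if(required <= capacity)
      return;
    // Double the capacity unless the request alone is bigger,
    // avoiding the resize-by-1 pattern in push_back loops.
    size_t newCap = capacity * 2 > required ? capacity * 2 : required;
    int *newData = (int *)malloc(newCap * sizeof(int));
    if(data)
    {
      memcpy(newData, data, count * sizeof(int));
      free(data);
    }
    data = newData;
    capacity = newCap;
  }

  void push_back(int v)
  {
    reserve(count + 1);
    data[count++] = v;
  }

  ~IntArray() { free(data); }
};
```

  With this policy a loop of N push_back calls performs O(log N)
  reallocations instead of N.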
* Hopefully these can be restored at some point, when the features
  are implemented. For now, where possible we remove options that are
  always unavailable, and selectively disable others that may or may
  not be available depending on which API the capture uses.