* Any struct with a pNext pointer gets a Deserialise function, to clean
up any dynamically allocated chains.
* The mechanism is as follows:
- On writing, walk the pNext chain (assuming there is one) and skip
any structs we don't care to serialise.
- When we reach a struct we do want to serialise, save a nullable (i.e.
optional) VkStructureType* for it, then recurse and serialise the
struct itself, cast to the right type.
- If there is no pNext (or none we care about) we serialise a NULL
VkStructureType*
- On reading, we serialise the VkStructureType*. If it's NULL, rename
it so that it appears as a void *pNext = NULL.
- If it's not NULL, use the structure type to serialise a nullable
struct of the right type, and recurse.
* The pNext chain is dynamically allocated, which is why it gets cleaned
up in the Deserialise() function.
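The write-side walk above can be sketched roughly as follows. This is a much-simplified model: the struct type and the WantToSerialise predicate are stand-ins (the real serialiser uses the Vulkan types and writes the whole struct, not just its sType marker):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Minimal stand-ins for the Vulkan types (assumption: real code uses vulkan.h).
using VkStructureType = uint32_t;
struct BaseInStruct
{
  VkStructureType sType;
  const BaseInStruct *pNext;
};

// Hypothetical predicate: do we care to serialise this struct type?
static bool WantToSerialise(VkStructureType t)
{
  return t != 0xdeadbeef;
}

// Walk the pNext chain on write: skip structs we don't serialise, emit the
// sType of each struct we do (the "nullable marker is present" case), then
// emit a 0 standing in for the NULL VkStructureType* that ends the chain.
std::vector<VkStructureType> SerialiseChain(const BaseInStruct *pNext)
{
  std::vector<VkStructureType> stream;
  while(pNext)
  {
    if(WantToSerialise(pNext->sType))
      stream.push_back(pNext->sType);    // marker present: reader will recurse
    pNext = pNext->pNext;
  }
  stream.push_back(0);    // NULL marker: reader emits void *pNext = NULL
  return stream;
}
```

On read, the same stream is consumed in reverse fashion: each non-zero marker selects which struct type to deserialise (and allocate, hence the cleanup in Deserialise()), and the zero marker terminates the chain.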
* These extensions don't need any special support from us, and only
affect the SPIR-V consumption.
* The list of extensions is:
- VK_KHR_16bit_storage
- VK_KHR_relaxed_block_layout
- VK_KHR_storage_buffer_storage_class
- VK_KHR_variable_pointers
* We need to pass flags down from the parent struct to know which
contents are valid, since that depends on the type of descriptor. This
could happen in any case, but is most likely for templated descriptor
updates, where the memory that is referenced could be garbage.
* Previously we did this because unused descriptors don't have to be
updated, but for consistency with templated updates we mark them ref'd
now (although the contents are still not referenced until bound).
* We need to patch in fixed blocks, so we can try using glslang to
preprocess away confusing code and then do the patching on the
preprocessed output. This risks affecting the results, but as a
last-ditch attempt it's better than having no separable program to
reflect at all.
* When pulling in command buffers, we want to include their allocation
chunk and descriptor pool. Previously we did this by marking their
record as referenced, but that had a problem: if the application
started recording new commands into that command buffer before the
record was referenced and processed, the capture would include those
chunks without any of the proper references.
* To avoid this, we take advantage of the fact that the allocation chunk
is already stored in a separate record so that it doesn't get thrown
away every time the command buffer is baked. We can instead directly
reference this record when pulling in at submit time.
* Normally a mesh rendering replay output would do this implicitly when
an event is selected, but if accessing purely through script this may
not happen, so we should initialise it ourselves here.
* If it's already initialised, calling it again is almost free.
* The previous fix was insufficient: the iterator being at end() is only
one way push_back can invalidate it. The other is when the array being
expanded is large enough (or things line up just right) that the
vector reallocates.
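A minimal illustration of the second case (the helper name is ours, not from the codebase): push_back invalidates every iterator and pointer into the vector whenever it has to reallocate, i.e. whenever size() equals capacity() beforehand, not only iterators at end().

```cpp
#include <vector>

// Returns true when this push_back reallocated the vector's storage, which
// invalidates ALL iterators and pointers into it, not just ones at end().
// The safe pattern is to hold an index across push_back and re-derive the
// iterator afterwards.
bool PushBackInvalidatesIterators(std::vector<int> &v, int value)
{
  bool willReallocate = v.size() == v.capacity();
  v.push_back(value);
  return willReallocate;
}
```

After a reserve() large enough for all upcoming push_backs, the function is guaranteed to return false; without one, any push_back at capacity silently moves the elements elsewhere.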
"cmd package resolve-activity" is only available from Android 7.0. Pre Android 7.0 we can use "pm dump <packagename>" to get the default Activity name.
* Since we search for #version, if it appears in a separate source
string from the rest of the shader, we fail to find the right place to
insert the patch blocks. If we concatenate first, it makes it easier.
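A sketch of the concatenate-then-search approach (the helper is illustrative, not the actual patching code):

```cpp
#include <string>
#include <vector>

// Concatenate all shader source strings first, then search the combined text
// for "#version". This finds the directive even when it sits in its own
// source string, separate from the rest of the shader.
size_t FindVersionOffset(const std::vector<std::string> &sources,
                         std::string &combined)
{
  combined.clear();
  for(const std::string &s : sources)
    combined += s;
  // offset into the combined string where patch blocks can be inserted
  // (after the #version line); npos if no #version directive exists
  return combined.find("#version");
}
```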
Adhere to VkImageMemoryBarrier valid usage:
If image has a depth/stencil format with both depth and stencil
components, then the aspectMask member of subresourceRange must include
both VK_IMAGE_ASPECT_DEPTH_BIT and VK_IMAGE_ASPECT_STENCIL_BIT
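A minimal sketch of building a conforming aspectMask; the bit values are copied from the Vulkan spec so the snippet stands alone (real code would use vulkan.h, and the helper name is ours):

```cpp
#include <cstdint>

// Aspect bit values as defined by the Vulkan specification.
const uint32_t VK_IMAGE_ASPECT_DEPTH_BIT = 0x2;
const uint32_t VK_IMAGE_ASPECT_STENCIL_BIT = 0x4;

// For combined formats like VK_FORMAT_D24_UNORM_S8_UINT, both components are
// present, so a barrier's subresourceRange.aspectMask must include both bits.
uint32_t AspectMaskForFormat(bool hasDepth, bool hasStencil)
{
  uint32_t mask = 0;
  if(hasDepth)
    mask |= VK_IMAGE_ASPECT_DEPTH_BIT;
  if(hasStencil)
    mask |= VK_IMAGE_ASPECT_STENCIL_BIT;
  return mask;
}
```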
* On GL, shader reflection is mutable, but only between different events
(we can expect it to be consistent and cacheable within a single event).
* We don't want to delete shader reflection pointers out from under the
UI, and it's not feasible to invalidate all pointers it might hold at
any point - e.g. previous to this change when a ReplayLog happens.
* Instead we just add another dimension to the cache key and allow the
cache to bloat more on GL, preserving possibly redundant reflection
objects.
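The extra cache dimension can be sketched as follows (names and types are illustrative, not the actual cache):

```cpp
#include <cstdint>
#include <map>
#include <utility>

struct ShaderReflection
{
  // ... reflection data ...
  int placeholder = 0;
};

// Key the cache on (shader id, event id) rather than shader id alone. Since
// GL reflection is stable within one event, entries for that key can be
// reused, and entries for other events are never overwritten, so a pointer
// the UI still holds stays valid (at the cost of a larger cache).
std::map<std::pair<uint64_t, uint32_t>, ShaderReflection> reflectionCache;

ShaderReflection *GetReflection(uint64_t shaderId, uint32_t eventId)
{
  // std::map nodes are stable, so this pointer survives later insertions
  return &reflectionCache[{shaderId, eventId}];
}
```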
* We were already fetching and compacting vertex buffers so this wasn't
as big a change as expected - it just needs to expand out e.g.
R16G16B16A16_SNORM to R32G32B32A32_SFLOAT so that it can be created
as a texel buffer (where the former may only be supported via fixed
function vertex inputs).
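The per-component expansion follows the standard SNORM decoding rule; a sketch for the 16-bit case (the helper name is ours):

```cpp
#include <algorithm>
#include <cstdint>

// Expand a 16-bit signed-normalised component to a 32-bit float, using the
// usual SNORM rule: value / 32767, clamped to -1.0 so that the one extra
// negative code point (-32768) also maps to exactly -1.0.
float SNorm16ToFloat(int16_t v)
{
  return std::max(float(v) / 32767.0f, -1.0f);
}
```

Expanding each component this way lets an R16G16B16A16_SNORM vertex buffer be rewritten as R32G32B32A32_SFLOAT data and bound as a texel buffer.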
* This isn't 100% reliable, but it seems reasonably consistent that
while the first file in the streams isn't always the entry file, the
first name in the list of names which map to files is. So we re-sort
by the list of names.
* This can happen in cases where the application syncs to the GPU after
using one set of descriptors from a pool, resets it, and then
allocates more descriptors out of the pool to use later in the frame.
* Since we allocate all descriptors up-front before the frame starts we
end up allocating more than the high-water mark, and running out of
room in the pool.
* Instead we just allocate duplicates of the pool as needed, as overflow
space, and then use those overflow pools to satisfy any extra need.
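A toy model of the overflow scheme (all names and the counting logic are illustrative; the real code duplicates VkDescriptorPool objects):

```cpp
#include <cstddef>
#include <vector>

// A pool with a fixed capacity: Allocate fails once the pool is exhausted.
struct Pool
{
  size_t capacity;
  size_t used = 0;
  bool Allocate(size_t count)
  {
    if(used + count > capacity)
      return false;
    used += count;
    return true;
  }
};

// When no existing pool can satisfy an allocation, create a duplicate of the
// original pool as overflow space and satisfy the allocation from it. This
// handles replaying more total allocations than the application's high-water
// mark, as happens when pools are reset and reused mid-frame.
struct PoolWithOverflow
{
  size_t capacity;
  std::vector<Pool> pools;
  void Allocate(size_t count)
  {
    for(Pool &p : pools)
      if(p.Allocate(count))
        return;
    pools.push_back({capacity});    // overflow: duplicate the original pool
    pools.back().Allocate(count);
  }
};
```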