* The general idea is that while idle we can handle these as normal -
if we ignore the map, we just mark the buffer dirty etc.
This is fine because modified-but-unflushed regions become undefined, so
we simply let them take the modified values.
* While capturing a frame, we do all the setup as normal, but then
instead of comparing the modified shadow buffer to an unmodified version,
detecting the modified range, and making one big Unmap() chunk, we make a
chunk per Flush() call with the flushed range, and ignore the unmap.
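A sketch of that capture path, with illustrative types rather than the real serialisation code: each Flush() records its own chunk covering just the flushed range, and Unmap() records nothing.

```cpp
#include <cstddef>
#include <vector>

// Illustrative model of the capture path: each Flush() emits one chunk
// covering only the flushed range; Unmap() then has nothing left to record.
struct BufferChunk
{
  size_t offset, length;
  std::vector<unsigned char> data;
};

struct MappedBuffer
{
  std::vector<unsigned char> shadow;   // the pointer handed to the app
  std::vector<BufferChunk> chunks;     // recorded while capturing

  void FlushRange(size_t offset, size_t length)
  {
    BufferChunk c;
    c.offset = offset;
    c.length = length;
    c.data.assign(shadow.begin() + offset, shadow.begin() + offset + length);
    chunks.push_back(c);
  }

  void Unmap()
  {
    // While capturing a flush-explicit map, the unmap itself is ignored -
    // only the flushed ranges above were serialised.
  }
};
```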
* When glVertexAttrib*Pointer is called it 'latches' the current binding
of GL_ARRAY_BUFFER (it sets the attrib binding and vertex buffer on the
VAO internally).
* So when reading, since we haven't serialised all the buffer bindings, we
need to bind the right buffer by hand.
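A toy model of the latching behaviour, with made-up names: the buffer bound to GL_ARRAY_BUFFER when VertexAttribPointer is called is latched into the VAO, and later binding changes don't affect it - which is why replay has to rebind the recorded buffer before re-issuing the call.

```cpp
#include <cstdint>

// Toy model of VAO attribute latching (names are illustrative). The buffer
// bound to GL_ARRAY_BUFFER at VertexAttribPointer time is latched into the
// VAO; changing the binding afterwards does not affect the attrib.
struct GLState
{
  uint32_t arrayBufferBinding = 0;   // current GL_ARRAY_BUFFER binding

  struct Attrib { uint32_t buffer = 0; } attribs[16];

  void BindBuffer(uint32_t buf) { arrayBufferBinding = buf; }

  void VertexAttribPointer(uint32_t index)
  {
    attribs[index].buffer = arrayBufferBinding;   // the latch
  }
};
```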
* If no program or pipeline is bound, we need to make sure we don't try to
query a non-existent pipeline. This could happen if e.g. state is totally
cleared at the start of the frame.
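A minimal sketch of that guard, with illustrative names:

```cpp
#include <cstdint>

// Illustrative guard: only query program/pipeline state if one is actually
// bound. If the app cleared all state at the start of the frame there may
// be neither, and querying a non-existent pipeline would be invalid.
struct PipelineQuery
{
  uint32_t boundProgram = 0;    // 0 = no program bound
  uint32_t boundPipeline = 0;   // 0 = no pipeline bound

  bool CanQueryShaders() const
  {
    return boundProgram != 0 || boundPipeline != 0;
  }
};
```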
* This most commonly happens with shaders and programs. A program record
takes a reference on shader records when they are attached and linked,
then a shader can be orphaned with only that reference remaining if the
user code detaches and deletes it.
* Previously we would go through and force-delete the shader, then when we
force-deleted the program things would explode, since it tries to decrement
the refcount on the shader and the refcount becomes -1. So now we make all
records remove their parents (which might delete those parents) before we
force-delete them.
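A toy model of the fix, with illustrative types rather than the real record classes: force-deletion drops a record's references on its parents first, so a parent's refcount can reach 0 but never -1.

```cpp
#include <vector>

// Toy model of record refcounting (illustrative, not the real types).
// A program record holds references on its shader parents; force-deleting
// the program releases those references before the shaders themselves are
// force-deleted, so no refcount ever underflows.
struct Record
{
  int refcount = 1;                 // the external (user) reference
  std::vector<Record *> parents;    // records this one holds a ref on

  void AddParent(Record *p)
  {
    p->refcount++;
    parents.push_back(p);
  }

  void ForceDelete()
  {
    // Drop references on parents first - this may bring a parent's
    // refcount to 0 (i.e. delete it), but never below.
    for(Record *p : parents)
      p->refcount--;
    parents.clear();
    refcount = 0;   // this record itself is now gone
  }
};
```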
* If you're capturing and replaying on the same driver it's insanely
unlikely that this translation will be anything other than the identity
map, although it wouldn't be illegal to renumber locations. However this
should allow moving captures between vendors/driver versions/platforms,
which in the past wasn't possible because locations would change (quite
validly) when the programs were recompiled. There might be other issues,
but at least this one is fixed.
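A sketch of the translation, under the assumption that locations are resolved by name - the callback stands in for e.g. glGetUniformLocation on the freshly-linked replay program, and all names here are illustrative:

```cpp
#include <functional>
#include <map>
#include <string>

// Build a capture-location -> replay-location table by resolving each
// captured uniform name against the replay program. If the driver assigns
// identical locations the table is the identity map, but nothing in the
// spec requires that.
using GetLocation = std::function<int(const std::string &name)>;

static std::map<int, int> BuildLocationRemap(
    const std::map<std::string, int> &captureLocations,
    const GetLocation &replayLocation)
{
  std::map<int, int> remap;
  for(const auto &it : captureLocations)
  {
    int replayLoc = replayLocation(it.first);
    if(replayLoc >= 0)   // -1 means the uniform was optimised out
      remap[it.second] = replayLoc;
  }
  return remap;
}
```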
* If it's immutable, fetch the immutable mip count, otherwise calculate
the number of mips from the top-level width/height/depth.
* Then clamp by TEXTURE_MAX_LEVEL.
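The calculation above might look like this (illustrative names, not the real code):

```cpp
#include <algorithm>
#include <cstdint>

// For immutable textures the level count is queried directly; otherwise we
// compute it from the top-level dimensions, then clamp by TEXTURE_MAX_LEVEL
// in both cases.
static uint32_t CalcNumMips(uint32_t w, uint32_t h, uint32_t d)
{
  uint32_t dim = std::max(w, std::max(h, d));
  uint32_t mips = 1;
  while(dim > 1)
  {
    dim >>= 1;
    mips++;
  }
  return mips;
}

static uint32_t EffectiveNumMips(bool immutable, uint32_t immutableLevels,
                                 uint32_t w, uint32_t h, uint32_t d,
                                 uint32_t maxLevel)
{
  uint32_t mips = immutable ? immutableLevels : CalcNumMips(w, h, d);
  // TEXTURE_MAX_LEVEL is the index of the highest usable mip, so the
  // count is bounded by maxLevel+1.
  return std::min(mips, maxLevel + 1);
}
```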
* THEN we need to go through each mip and check that it's been set if the
texture wasn't immutable, since some mips might be uninitialised and
that is legal, as long as those mips aren't used (e.g. by point sampling
or as a FBO attachment).
* Even this isn't quite enough. We're actually assuming a 'complete'
texture, i.e. one which has had all mips uploaded/initialised. It's
actually valid to have a texture that doesn't have all of its mips
initialised, as long as you don't use them - i.e. you bind a point
sampler, or you use it as a framebuffer attachment. So REALLY in the
non-immutable case, we need to iterate over every level and query to see
if it's been configured, e.g. by fetching its width. Sigh.
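A sketch of that per-level probe; the query callback here stands in for fetching the level's width (which reports 0 for a level that was never configured), since the real GL query needs a live context:

```cpp
#include <functional>
#include <vector>

// For non-immutable textures, probe each level and keep only the ones
// that were actually configured. A width of 0 means the mip was never
// uploaded/initialised, so we skip it rather than fetching its contents.
using LevelWidthQuery = std::function<int(int level)>;

static std::vector<int> ConfiguredLevels(int numMips, const LevelWidthQuery &widthAt)
{
  std::vector<int> levels;
  for(int i = 0; i < numMips; i++)
  {
    if(widthAt(i) > 0)
      levels.push_back(i);
  }
  return levels;
}
```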
* GL_TEXTURE_MAX_LEVEL acts as a bound on the number of mips for both
immutable and non-immutable textures. For non-immutable textures we
additionally do the standard mip calculation - that's what's necessary
for a "complete" texture.
* This way if someone updates their install without clicking the menu item
to clear this flag, it will still detect the update after a few days.
* (And when I forget to update which beta is latest, it will fix itself
eventually. Oops).
* I need to investigate this more. It seems that on AMD the compressed
image size returned is the size of the whole cubemap, and then you get
each face one-by-one. On NVIDIA the compressed image size is the size of
one face, so dividing by 6 gives too little space.
* I don't know which behaviour is the one the standard mandates, but that
kind of doesn't matter, since I probably need to query the size for maybe
a 1x1 cubemap or a known dimension & compression format with a fixed
block size (like BC1, which is 8 bytes per 4x4 block).
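One way that calibration could work, sketched with illustrative names: BC1/DXT1 is 8 bytes per 4x4 block, so the expected per-face size of a probe texture is computable exactly, and comparing it against what the driver reports for the compressed image size tells us which convention the driver uses.

```cpp
#include <cstdint>

// BC1/DXT1 uses 8 bytes per 4x4 block; dimensions round up to whole blocks.
static uint32_t BC1Size(uint32_t w, uint32_t h)
{
  uint32_t blocksW = (w + 3) / 4;
  uint32_t blocksH = (h + 3) / 4;
  return blocksW * blocksH * 8;
}

// reportedSize would come from querying the compressed image size on a
// probe cubemap of known dimension; returns true if the driver reports
// the whole cubemap at once rather than a single face.
static bool DriverReportsWholeCube(uint32_t reportedSize, uint32_t w, uint32_t h)
{
  return reportedSize == 6 * BC1Size(w, h);
}
```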