* When glVertexAttrib*Pointer is called it 'latches' the current binding
of GL_ARRAY_BUFFER (it sets the attrib binding and vertex buffer on the
VAO internally).
* So when reading, since we haven't serialised all the buffer bindings, we
need to bind the right buffer by hand before replaying each call.
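A minimal sketch of the latching behaviour and the replay-side fix (the
buffer and variable names are illustrative):

```cpp
// The GL_ARRAY_BUFFER binding itself is context state, not VAO state - it
// is only read ("latched") at the moment glVertexAttribPointer is called.
glBindVertexArray(vao);

glBindBuffer(GL_ARRAY_BUFFER, positionBuf);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void *)0);
// attrib 0 now sources from positionBuf, whatever gets bound later

glBindBuffer(GL_ARRAY_BUFFER, colourBuf);
glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4, (void *)0);
// attrib 1 sources from colourBuf

// On replay, re-bind the recorded buffer before re-issuing the call, since
// the serialised glVertexAttribPointer alone doesn't carry the binding:
glBindBuffer(GL_ARRAY_BUFFER, recordedBuffer);
glVertexAttribPointer(attribIndex, size, type, normalized, stride, offset);
```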
* If no program or pipeline is bound, we need to make sure we don't try to
query a non-existent pipeline. This could happen if e.g. state is totally
cleared at the start of the frame.
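Roughly the kind of guard needed (a sketch, not the actual capture code):

```cpp
GLint curProg = 0, curPipe = 0;
glGetIntegerv(GL_CURRENT_PROGRAM, &curProg);
glGetIntegerv(GL_PROGRAM_PIPELINE_BINDING, &curPipe);

// Both can legitimately be 0 if all state was cleared at frame start, so
// bail out rather than querying a pipeline object that doesn't exist.
if(curProg == 0 && curPipe == 0)
  return;

// Otherwise, if only a pipeline is bound, fetch its per-stage programs.
if(curProg == 0)
  glGetProgramPipelineiv(curPipe, GL_VERTEX_SHADER, &curProg);
```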
* This most commonly happens with shaders and programs. A program record
takes a reference on shader records when they are attached and linked,
then a shader can be orphaned with only that reference remaining if the
user code detaches and deletes it.
* Previously we would go through and force-delete the shader, then when we
force-deleted the program things would explode, since it tries to decrement
the refcount on the now-freed shader and pushes it to -1. So now we make all
records remove their parents (which might delete their parents) before we
force-delete them.
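The shape of the fix, sketched with an illustrative refcounted record type
(none of these names are the real interfaces):

```cpp
#include <set>
#include <vector>

struct Record
{
  int refcount = 1;
  std::vector<Record *> parents;   // records this one holds references on
};

static std::set<Record *> liveRecords;

void Release(Record *r)
{
  if(--r->refcount == 0)
  {
    liveRecords.erase(r);
    delete r;
  }
}

void ForceDeleteAll()
{
  // Pass 1: every record drops its parent references first. This may
  // delete an orphaned parent (e.g. a detached+deleted shader whose only
  // remaining reference was its program's), which is fine.
  std::vector<Record *> snapshot(liveRecords.begin(), liveRecords.end());
  for(Record *r : snapshot)
  {
    if(liveRecords.find(r) == liveRecords.end())
      continue;   // already deleted by an earlier parent release

    std::vector<Record *> parents;
    parents.swap(r->parents);
    for(Record *p : parents)
      Release(p);
  }

  // Pass 2: force-delete the survivors. Since no record still references
  // another, nothing can decrement a freed record's refcount to -1.
  while(!liveRecords.empty())
    Release(*liveRecords.begin());
}
```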
* If you're capturing and replaying on the same driver it's insanely
unlikely that this translation will be anything other than the identity
map, although it wouldn't be illegal to renumber locations. However this
should allow moving captures between vendors/driver versions/platforms,
which in the past wasn't possible because locations would change (quite
validly) when the programs were recompiled. There might be other issues,
but at least this one is fixed.
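A rough sketch of building that translation by uniform name on replay (the
function and container names are illustrative; assumes a GL loader is
initialised):

```cpp
#include <map>
#include <string>

// capturedLocations: (uniform name -> location) pairs serialised at capture
// time. Names come from the same GLSL source, so they survive recompilation
// even when the driver assigns different locations.
std::map<GLint, GLint> BuildLocationTranslation(
    GLuint replayProg, const std::map<std::string, GLint> &capturedLocations)
{
  std::map<GLint, GLint> translate;

  for(const auto &u : capturedLocations)
  {
    GLint replayLoc = glGetUniformLocation(replayProg, u.first.c_str());
    if(replayLoc >= 0)
      translate[u.second] = replayLoc;
  }

  // Every replayed glUniform*() call then remaps its location through this
  // table - almost always the identity map on the capturing driver.
  return translate;
}
```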
* If it's immutable, fetch the immutable level count, otherwise calculate the
number of mips from the top-level width/height/depth.
* Then clamp by TEXTURE_MAX_LEVEL.
* THEN, if the texture wasn't immutable, we need to go through each mip and
check that it's been set, since some mips might be uninitialised and that is
legal as long as those mips aren't used (e.g. the texture is only point
sampled, or only a single mip is used as an FBO attachment).
* Even this isn't quite enough. We're actually assuming a 'complete'
texture, i.e. one which has had all mips uploaded/init'd. It's actually
valid to have a texture that doesn't have all of its mips initialised,
as long as you don't use them, i.e. you bind a point sampler or you
use it as a framebuffer attachment. So REALLY in the non-immutable case,
we need to iterate over every level and query to see if it's been
configured, e.g. by fetching its width. Sigh.
* GL_TEXTURE_MAX_LEVEL only acts as an upper bound on the number of mips,
for both immutable and non-immutable textures. For non-immutable textures we
instead do the standard mip calculation, which is what's necessary for a
"complete" texture (see the sketch below).
* This way if someone updates their install without clicking the menu item
to clear this flag, it will still detect the update after a few days.
* (And when I forget to update which beta is latest, it will fix itself
eventually. Oops).
* I need to investigate this more. It seems that on AMD the compressed
image size returned is the size of the whole cubemap, and then you get
each face one-by-one. On nVidia the compressed image size is the size of
one face, so dividing by 6 gives too little space.
* I don't know which behaviour is correct per the spec, but that kind of
doesn't matter since I probably need to query the size for maybe a 1x1
cubemap or a known dimension & compression format (like BC1, which is
8 bytes per 4x4 block).
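For example, a probe against a known format could look like this (a sketch
with an illustrative function name, assuming EXT_texture_compression_s3tc is
available):

```cpp
// Detect whether the driver reports compressed image size per-face or for
// the whole cubemap. A 4x4 BC1/DXT1 face is exactly one 8-byte block, so a
// per-face driver returns 8 and a whole-cubemap driver returns 48.
bool DriverReportsWholeCubemapSize()
{
  GLuint tex = 0;
  glGenTextures(1, &tex);
  glBindTexture(GL_TEXTURE_CUBE_MAP, tex);

  const GLubyte block[8] = {};   // one black BC1 block

  for(int face = 0; face < 6; face++)
    glCompressedTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0,
                           GL_COMPRESSED_RGB_S3TC_DXT1_EXT, 4, 4, 0,
                           sizeof(block), block);

  GLint size = 0;
  glGetTexLevelParameteriv(GL_TEXTURE_CUBE_MAP_POSITIVE_X, 0,
                           GL_TEXTURE_COMPRESSED_IMAGE_SIZE, &size);

  glDeleteTextures(1, &tex);

  return size == 8 * 6;
}
```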
* This won't work if the feedback is started before the captured frame and
is then captured inside it, i.e. it is either paused or actually active
across the SwapBuffers call.
* i.e. it's not just an unnormalised mantissa plus a biased exponent, it has
proper IEEE-style float properties with a hidden bit on the mantissa etc. So
full decoding is necessary, contrary to what some of the docs say :(.
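Assuming this note concerns the packed 10/11-bit floats (e.g.
GL_R11F_G11F_B10F), a minimal sketch of a full decode for the 11-bit
components:

```cpp
#include <cmath>
#include <cstdint>
#include <limits>

// Decodes an 11-bit unsigned float: 6-bit mantissa, 5-bit exponent, bias 15,
// no sign bit. (The 10-bit variant just has a 5-bit mantissa.)
float DecodeFloat11(uint32_t bits)
{
  const uint32_t mant = bits & 0x3f;        // bottom 6 bits
  const uint32_t exp = (bits >> 6) & 0x1f;  // next 5 bits

  if(exp == 0x1f)   // all-ones exponent: infinity or NaN, just like IEEE
    return mant == 0 ? std::numeric_limits<float>::infinity()
                     : std::numeric_limits<float>::quiet_NaN();

  if(exp == 0)   // denormal: no hidden bit, exponent fixed at -14
    return std::ldexp((float)mant / 64.0f, -14);

  // normal: the hidden bit makes the significand 1.mmmmmm
  return std::ldexp(1.0f + (float)mant / 64.0f, (int)exp - 15);
}
```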
* Note that the uniform subroutines are saved and restored by the render
state, but they are not properly restored anywhere that we fetch & restore
the current program. Will probably need a special helper class to push/pop
that correctly, something like the sketch below.
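A guess at the shape of that helper, using the GL subroutine query functions
(the class itself doesn't exist yet, and the stage list is trimmed to two
for brevity):

```cpp
#include <vector>

class SubroutineState
{
  struct StageState
  {
    GLenum stage;
    std::vector<GLuint> indices;
  };
  std::vector<StageState> saved;

public:
  // Save the subroutine indices for each stage of the given program.
  void Push(GLuint program)
  {
    const GLenum stages[] = {GL_VERTEX_SHADER, GL_FRAGMENT_SHADER};

    for(GLenum stage : stages)
    {
      GLint numLocs = 0;
      glGetProgramStageiv(program, stage,
                          GL_ACTIVE_SUBROUTINE_UNIFORM_LOCATIONS, &numLocs);

      StageState s = {stage, std::vector<GLuint>((size_t)numLocs)};
      for(GLint i = 0; i < numLocs; i++)
        glGetUniformSubroutineuiv(stage, i, &s.indices[i]);
      saved.push_back(s);
    }
  }

  // Must be called with the same program bound again - subroutine state
  // lives on the context and is lost whenever the bound program changes.
  void Pop()
  {
    for(const StageState &s : saved)
      if(!s.indices.empty())
        glUniformSubroutinesuiv(s.stage, (GLsizei)s.indices.size(),
                                s.indices.data());
    saved.clear();
  }
};
```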