* We stream out (or transform feedback) the whole instanced draw at
  once, producing a single buffer containing all N instances. Then when
  the client requests postvs data, an offset into the buffer is
  calculated (in 1/N chunks) and carried through everywhere.
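A rough sketch of that offset calculation, assuming equal-sized
per-instance chunks (the function name and signature here are
hypothetical, not the actual code in this change):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch: given the streamed-out buffer holding all N
// instances back-to-back, compute the byte offset of one instance's
// postvs data. The buffer is divided into N equal 1/N chunks.
uint64_t PostVSInstanceOffset(uint64_t bufferByteSize, uint32_t numInstances,
                              uint32_t instance)
{
  assert(numInstances > 0 && instance < numInstances);
  const uint64_t chunkSize = bufferByteSize / numInstances;
  return chunkSize * instance;
}
```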
* Since we were using that offset to indicate where the system position
  output lay for the since-last-clear auto drawing of meshes, we
  rearrange the output attribute order so that the system position is
  always first in the list.
* Also, since-last-clear now doesn't include the current event, but
  does include any instances before the current one.
* This might be an NVIDIA bug, but I honestly don't know what the
  correct syntax for varyings is, and the spec seems vague (it just
  says 'user-defined' variables). This system/heuristic works on NVIDIA
  and AMD though, so it'll have to do for now.
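For reference, the ambiguity is in which names
glTransformFeedbackVaryings() accepts for built-ins. This fragment is
illustrative only (the varying name is made up, and it needs a live GL
context and linked program to run), showing the kind of call the
heuristic has to get right:

```c
/* Requesting capture of the built-in position plus a user-defined
 * varying. The spec's 'user-defined variables' wording leaves it vague
 * whether "gl_Position" is accepted here directly, or whether the
 * shader must redeclare it in an out gl_PerVertex {} block first - so
 * we probe what the driver actually accepts. */
const char *varyings[] = { "gl_Position", "vsout_texcoord0" };
glTransformFeedbackVaryings(program, 2, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(program); /* varying capture takes effect at link time */
```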
* This fixes the long-standing bug where, if you have an event selected
  and then open the buffer viewer, it won't have the postvs data
  available until you re-select that event.
* This will work if the binding order changes between two
  implementations, and it also works when copying program state between
  two programs that might have different UBO/SSBO sets.
* Still missing several features:
- solid shading of 'secondary' data
- highlighting of vertices/faces/supporting faces
- helpers like axis markers or frustum
  - postvs data and re-projection
* So RenderMesh doesn't pick up anything implicitly from the current
  event, log, pipeline state etc - everything it needs is explicitly
  provided by the config parameters. (Note this might include a buffer
  generated by postvs data fetching, but the implementation no longer
  needs to care or treat it as a special case.)
* This will allow RenderMesh to be run locally, with the UI just
  specifying the buffer and a simple vertex specification, rather than
  relying on any local log properties (or replaying the log).
* The reasoning behind this change is that it becomes much simpler to
  implement: rather than having to modify the draw to do what we want,
  we just do an entirely custom draw based on a few properties -
  similar to the texture rendering. This will help e.g. when writing a
  GL implementation.
* The second benefit is that we can just transfer the buffer contents
across the network when replaying remotely, so mesh rendering can be
implemented even for remote replay - the last unimplemented feature.
* It could also be used similarly to the image viewer in future, to
  display mesh files.