Potentially overestimate compressed image size to avoid crashes

* I need to investigate this more. It seems that on AMD the compressed
  image size returned is the size of the whole cubemap, and then you get
  each face one-by-one. On nVidia the compressed image size is the size of
  one face, so dividing by 6 gives too little space.
* I don't know which behaviour the spec says is correct, but that mostly
  doesn't matter, since I probably need to query the size for maybe a 1x1
  cubemap, or a known dimension & compression format with a fixed rate
  (like BC1, 8 bytes per 4x4 block), and work out the per-face size from there.
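The detection idea above can be sketched in isolation. This is a hypothetical illustration, not code from this commit: `ExpectedFaceSizeBC1` and `ReportedSizeIsWholeCubemap` are made-up names, and the fixed rate assumed is BC1's 8 bytes per 4x4 block.

```cpp
#include <cassert>
#include <cstddef>

// Expected compressed size in bytes of one mip level of one cubemap face for
// BC1/DXT1, which packs each 4x4 texel block into 8 bytes. Dimensions round
// up to whole blocks, so even a 1x1 face occupies one full 8-byte block.
static size_t ExpectedFaceSizeBC1(size_t width, size_t height)
{
	size_t blocksX = (width + 3) / 4;
	size_t blocksY = (height + 3) / 4;
	return blocksX * blocksY * 8;
}

// Given the size a driver reports (e.g. via GL_TEXTURE_COMPRESSED_IMAGE_SIZE)
// for a cubemap of known dimensions and format, decide whether it covers all
// six faces (the AMD behaviour described above) or just one face (nvidia).
static bool ReportedSizeIsWholeCubemap(size_t reported, size_t width, size_t height)
{
	return reported == ExpectedFaceSizeBC1(width, height) * 6;
}
```

With a 1x1 BC1 cubemap one face is exactly 8 bytes, so a driver reporting 48 bytes is returning the whole-cubemap size, and one reporting 8 bytes is returning a per-face size.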
Author: baldurk
Date: 2014-11-27 19:52:26 +00:00
Parent: c22a1bb82c
Commit: 5ad5a0af49
Diffstat: +6 -2
@@ -474,8 +474,12 @@ bool GLResourceManager::Serialise_InitialState(GLResource res)
 			// cubemaps return the compressed image size for the whole texture, but we read it
 			// face by face
-			if(t == eGL_TEXTURE_CUBE_MAP)
-				size /= 6;
+			//
+			// Disabled since it seems nvidia doesn't return the whole image size, but just the size per face,
+			// and it's not clear which is right (or how to handle that anyway). Worst case this means we serialise out
+			// too much data (since only the first face of data will be used).
+			//if(t == eGL_TEXTURE_CUBE_MAP)
+			//	size /= 6;
 			byte *buf = new byte[size];
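The safety argument behind disabling the division can be shown with a small sketch. This is an assumption-laden illustration, not the commit's code: the function names are hypothetical, `reported` stands for whatever the driver returns for the compressed image size, and `actualFace` for the true size of one face's data.

```cpp
#include <cassert>
#include <cstddef>

// If the driver already reports a per-face size (nvidia behaviour), dividing
// by 6 under-allocates the face buffer and reading a face overruns it.
static bool DividedBufferIsSafe(size_t reported, size_t actualFace)
{
	return reported / 6 >= actualFace;
}

// Keeping the full reported size can only overestimate: exact when the driver
// reports the whole-cubemap size (AMD), and merely serialises too much data
// when it reports a per-face size (nvidia) -- safe either way.
static bool FullBufferIsSafe(size_t reported, size_t actualFace)
{
	return reported >= actualFace;
}
```

For example, with an 8-byte face: an AMD-style report of 48 bytes is safe under both policies, but an nvidia-style report of 8 bytes leaves only 1 byte after dividing by 6, which is the crash the commit title refers to.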