No. 47398
>>47395 Well, this does require an explanation. TL;DR: it's implied by the current architecture. In case there is an alternative solution that I've overlooked, feel free to describe a better architecture. Some of the probable solutions I already considered are listed below.
Hereinafter, «server» is a library responsible for loading and drawing, and «client» is an executable that tells the server what to load and draw. To avoid confusion, let's also define «animation» as an internal server structure whose data is unique among other animations, and «sprite» as a client-controlled instance of some animation on the screen.
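To make the terminology concrete, here's a minimal sketch of the two structures in C. All names and fields here are mine, chosen for illustration; the actual code may look nothing like this.

```c
#include <stdint.h>

/* Hypothetical server-side structure: one per unique animation on disk.
 * Dimensions are fixed for every sprite drawn from this animation. */
typedef struct {
    uint32_t id;          /* unique animation ID    */
    uint16_t width;       /* frame width in pixels  */
    uint16_t height;      /* frame height in pixels */
    uint16_t frame_count; /* number of frames       */
} Animation;

/* Hypothetical client-side structure: one per sprite on screen.
 * The client owns these and may add or delete them every frame. */
typedef struct {
    uint32_t anim_id; /* which animation to draw */
    uint16_t frame;   /* current frame number    */
    int16_t  x, y;    /* position on screen      */
} Sprite;
```

Note the asymmetry: the Animation set is fixed by the on-disk base, while Sprite instances come and go; that asymmetry drives the storage decisions below.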
Every animation has its unique ID. When it's time to update the screen, the client has to set properties for every sprite, namely: animation ID, frame number and sprite position (X, Y). Sprite dimensions are not needed: both server and client know them, and they stay constant for every sprite rendered from the same animation. Then the array of properties is passed to the server, and the server feeds it to the shader program. When execution reaches this point, the shader program needs to determine which animations are to be drawn, and all it has is an array of sprite properties and a pre-allocated array of animations. How to link sprites to animations, and how to store them rationally?
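The client-side step of flattening per-sprite properties into one array before handing it to the server might look like this (again, a sketch under assumed names and a 4-floats-per-sprite layout, not the actual code):

```c
#include <stddef.h>

/* Hypothetical per-sprite properties, as described above:
 * animation ID, frame number and screen position. */
typedef struct {
    float anim_id;
    float frame;
    float x, y;
} SpriteProps;

/* Pack sprites into a flat float array (4 floats per sprite) -- the
 * kind of buffer the server would feed to the shader program.
 * Returns the number of floats written. */
size_t pack_sprites(const SpriteProps *sprites, size_t count, float *out)
{
    size_t n = 0;
    for (size_t i = 0; i < count; ++i) {
        out[n++] = sprites[i].anim_id;
        out[n++] = sprites[i].frame;
        out[n++] = sprites[i].x;
        out[n++] = sprites[i].y;
    }
    return n;
}
```

The point of the flat layout is that this buffer is rebuilt and re-sent every frame anyway, so it should carry only what actually changes per sprite, and nothing that's constant per animation.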
In GLSL 1.10, the input data passed to the shader can be either vertex attributes or uniforms — special variables imported from RAM to VRAM. Static arrays are possible, but the total number of imported uniforms is bounded by GL_MAX_VERTEX_UNIFORM_COMPONENTS and GL_MAX_FRAGMENT_UNIFORM_COMPONENTS, both usually less than 1024, and much less than, say, a million. Vertex attributes, in turn, are given to the shader one vertex at a time, hence the name. So animation properties like width, height or scaling factor cannot be stored in vertex attributes. Technically it's possible, of course, but very impractical: the attributes would have to contain several times more data than the number of sprites, since every sprite has several vertices (6 in the current implementation — 3 for each of the two triangles comprising a sprite quad); what's more, the attribute array would have to be refilled on every sprite reordering and then sent to the GPU, along with the sprite properties from the client. Another point is that the total number of unique animations is quite finite, bounded by the animation base on disk, unlike the number of sprites, which can be added or deleted every frame.

Unfortunately, uniform arrays are also not an option, since their size is limited and largely depends on GPU capabilities: some early models supporting 1.10 had as few as 16 hardware uniform slots. What's left? The most promising (if not the only) option is textures. Consider a texture containing the IDth animation's properties in the IDth pixel. This scheme is not in any way affected by sprite ordering, so once loaded, these values hardly ever change. Thus, there's no need to update the texture every frame, wasting precious video bus capacity. There's even a special texture type for that, called GL_TEXTURE_BUFFER, but it's not available in 1.10, so let's stick to the good old GL_TEXTURE_2D. So, acquiring the necessary data by animation ID requires a texel fetch.
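The ID-to-texel mapping such a scheme implies can be sketched on the CPU side like this (my illustration, not the actual implementation; texture dimensions are assumed parameters). The same arithmetic would run in the GLSL 1.10 shader before the texture2D() call:

```c
/* Map an animation ID to the center UV of its texel in a
 * tex_w x tex_h properties texture: the IDth pixel, row-major.
 * Sampling at the texel *center* (the +0.5f) avoids filtering
 * bleed from neighboring texels. Illustrative only. */
typedef struct { float u, v; } UV;

UV anim_id_to_uv(unsigned id, unsigned tex_w, unsigned tex_h)
{
    unsigned px = id % tex_w; /* column of the IDth pixel */
    unsigned py = id / tex_w; /* row of the IDth pixel    */
    UV uv;
    uv.u = (px + 0.5f) / (float)tex_w;
    uv.v = (py + 0.5f) / (float)tex_h;
    return uv;
}
```

Since GLSL 1.10 has no integer texelFetch(), this normalized-UV detour (plus nearest-neighbor filtering on the properties texture) is what a texel fetch has to look like there.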
And there's the pitfall. Most computations that supply pixel shaders with the data they need are complex enough to slow execution down if done anew for every pixel. Furthermore, setting sprite geometry, the actual job of a vertex shader, requires knowledge of at least the sprite dimensions, which are stored in a texture.
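To put a rough number on the per-pixel cost: assuming one properties fetch per shader invocation, a quad of 6 vertices versus a sprite's worth of fragments gives a back-of-the-envelope ratio like this (the 64x64 sprite size is arbitrary, picked only for illustration):

```c
/* Texel fetches needed to read animation properties once per shader
 * invocation, for a single sprite. A quad is 6 vertices (two
 * triangles); a w x h sprite covers up to w*h fragments.
 * Numbers are illustrative, not measured. */
unsigned vertex_fetches(void) { return 6; }

unsigned fragment_fetches(unsigned w, unsigned h) { return w * h; }
```

For a 64x64 sprite that's 4096 fragment-shader fetches against 6 vertex-shader fetches, roughly a 680x difference per sprite, which is why doing this work per pixel instead of per vertex hurts.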