In what is a rarity nowadays, you'll get exactly what the title of this post promises. I've been working on these three things yesterday and today and thought I'd share some of my experience implementing color selection the right way in a complex user interface.
Those of you coding apps/games/whatever using OpenGL might know that the old selection functionality of OpenGL (which wasn't bad at all) is a no-go nowadays, mainly because all the big graphics card vendors have been doing it in software for some time now, which sadly disqualifies it for realtime usage. In case you never used it: it had a name stack and a special render mode. Before rendering an object you could push a name (actually an integer) onto the name stack, render the object, and move on to the next object until you had drawn all selectable objects in your 3D scene. Then you could simply read back which object was drawn at a certain point in your framebuffer, and OpenGL returned its name.
Since it's a no-go nowadays due to its slowness, I've been using a technique called "color selection" for some time now. It's a manual way of doing selection yourself in OpenGL (or any other 3D rendering API) that is fast and easy to use, but it also has its caveats, which I'll get to. It basically works by rendering your objects without lights/shaders/textures, each in a single unique color, then reading back the color below e.g. your cursor and comparing it to the stored values for your objects, so you can easily derive the selected object from the color read back from the framebuffer below the cursor (left image shows the region, right image shows the color selection image):
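At its core this is just a reversible mapping between object indices and flat colors. Here's a minimal sketch of that mapping (my own illustration, not the actual "Phase 2" code), assuming an 8-bit-per-channel framebuffer, which gives you up to 2^24 selectable objects:

```c
#include <stdint.h>

/* Encode a selectable object's index into the flat RGB color it gets
   rendered with during the color selection pass. */
void index_to_color(uint32_t index, uint8_t *r, uint8_t *g, uint8_t *b)
{
    *r = (index >> 16) & 0xFF;
    *g = (index >> 8) & 0xFF;
    *b = index & 0xFF;
}

/* Decode the pixel read back from below the cursor (e.g. obtained via
   glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel))
   back into the object index. */
uint32_t color_to_index(uint8_t r, uint8_t g, uint8_t b)
{
    return ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
}
```

In practice you'd also reserve one color for "nothing selected" (e.g. the clear color) and make sure dithering, blending, fog and multisampling are off during the selection pass, otherwise the read-back color may not exactly match any stored value.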
It sounds very simple, and in a basic demo it is: first you render only the color selection parts (into your backbuffer, not visible to the eye), read back the color, get the selected object, clear the scene and then render your normal scene. This works fine for simple cases. But ProjektW is not a simple case.
But take a look at this:
It's a bit hard to see, but this is a construction spot in the regional view of "Phase 2". You see a crane with an alpha-masked texture that lets the background "shine" through, and you also see that the upper part has a red background. That is one of the aforementioned problems when using color selection in a complex UI like "Phase 2" does. Why? Because the window has a callback for rendering its content that is called when the window is visible, and in that callback I render (amongst other stuff) the region's structural preview with all its building spots. So contrary to a simple application, I can't just render the color selection view of the region and delete the framebuffer's content, cause that would erase the whole content of the window that was drawn before (e.g. the background scenery image).
So how do you get around these problems? You need some kind of offscreen rendering. For "Phase 2" I use frame buffer objects when available (if not, plain render-to-texture). But here lies another problem that forced me to change how I update the offscreen rendering in "Phase 2". With FBOs you can update them whenever you want, cause your normal framebuffer won't be affected. With plain render-to-texture that's not the case, so you can't simply update the texture when e.g. the region is displayed, cause that would affect the current framebuffer too. But along with this problem I fixed the above color selection problem too: I now do texture (and FBO) updates and color selection before I start rendering the current frame, and that fixes all of those problems.
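The bookkeeping behind "update before the frame" boils down to dirty flags checked at a single point at the start of each frame. A rough sketch of how that could look (all names made up for illustration, with the actual GL rendering stubbed out as comments):

```c
#include <stdbool.h>

/* Each offscreen target (normal view and color selection view) carries
   its own dirty flag, so a change that only affects one of them doesn't
   force both to be re-rendered. */
typedef struct {
    bool normal_dirty;
    bool selection_dirty;
    int  normal_updates;     /* counters just for this sketch */
    int  selection_updates;
} RegionView;

/* Called whenever the region changes (building placed/removed etc.). */
void region_mark_changed(RegionView *v)
{
    v->normal_dirty = true;
    v->selection_dirty = true;
}

/* Called once at the very start of the frame, BEFORE anything is drawn
   into the real framebuffer. With an FBO this could run at any time,
   but doing it here also keeps the render-to-texture fallback correct,
   since the backbuffer hasn't been touched yet. */
void region_update_offscreen(RegionView *v)
{
    if (v->normal_dirty) {
        /* ...render the normal view into its FBO/texture... */
        v->normal_updates++;
        v->normal_dirty = false;
    }
    if (v->selection_dirty) {
        /* ...render the color selection view into its FBO/texture... */
        v->selection_updates++;
        v->selection_dirty = false;
    }
}
```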
Old way of updating textures and doing color selection:

* Render the 3D scene in the background (globe, water, sky)
* Render GUI
* GUI calls the region window's callback for content rendering
* Color selection view of region is rendered
* Depth buffer is cleared
* Normal view of region is rendered

This would cause the above artefacts behind alpha-masked (and translucent) buildings and also wouldn't work with plain render-to-texture (since I want as broad an audience as possible, I implemented that for older GPUs).
So here is how the rendering looks now:

* Update (if necessary) the FBO (or texture) for the region's normal view
* Update (if necessary) the FBO (or texture) for the region's color selection view (yes, separate FBOs/textures, cause it's not always necessary to update both; this is for performance reasons)
* Render ONLY the color selection part of the region if the window is visible and the cursor is hovering above the 3D preview panel
* Read back the color in the framebuffer and get the selected building
* Clear the framebuffer (and depth, stencil and everything else)
* Render the 3D scene
* Render GUI
* GUI calls the region window's callback for content rendering
* Now just paint a simple quad with the FBO/texture of the region's normal view where the 3D panel is located
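Put together, one frame of this scheme looks roughly like the skeleton below. It's only a sketch of the pass ordering: the real rendering calls are replaced by steps recorded into an array, so you can see (and check) that the selection pass and read-back happen strictly before the framebuffer is cleared for the visible frame:

```c
/* Each pass records its name; a real implementation would do the GL
   work instead. This only illustrates the ordering described above. */
static const char *g_steps[16];
static int g_step_count = 0;

static void step(const char *name) { g_steps[g_step_count++] = name; }

void render_frame(int region_visible, int cursor_over_panel)
{
    g_step_count = 0;
    step("update_offscreen");            /* refresh dirty FBOs/textures first */
    if (region_visible && cursor_over_panel) {
        step("render_selection_view");   /* color selection pass only */
        step("read_back_pixel");         /* -> selected building */
    }
    step("clear_framebuffer");           /* color, depth, stencil */
    step("render_scene");                /* globe, water, sky */
    step("render_gui");                  /* window callback just paints a quad
                                            with the region's normal view */
}
```

Note that when the region window isn't visible (or the cursor is elsewhere), the selection pass is skipped entirely, which is another small win over rendering it every frame.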
So with that new way of rendering e.g. the region, there are no more artefacts in the color selection of alpha-masked buildings, and (not less important) it's now a lot faster, cause I only update the FBO/texture of the region's view when something has changed instead of rendering the region directly onto the GUI every frame. Hopefully this will help some of you with rendering 3D scenery on top of a 2D user interface, as it's not a trivial task.