16 Dec 2012

CoreGraphics2

That's Twiggy's official name now.

Over the past few weekends I've basically written a vertical slice of the new Nebula3 Render Layer, trying out a few ideas of what the Nebula3 rendering system might look like in the future.

The lowest-level subsystem is CoreGraphics2, which I've already written a bit about.

It wraps the host platform's 3D API (e.g. OpenGL or Direct3D), but its rendering vocabulary is higher-level and less verbose than OpenGL's or D3D's. It runs the render thread, but can also be compiled without threading (on the emscripten platform, for instance). There's a facade singleton object (CoreGraphics2Facade) which wraps the entire functionality into a surprisingly simple interface.

CoreGraphics2 works with only 5 resource types:
  1. Texture: Just what the name implies, a texture resource object. This also includes render targets.
  2. Mesh: This encapsulates all the required geometry data for a drawing operation: vertex buffer, index buffer (optional), vertex layout / vertex array definition, and "primitive groups" (basically sub-mesh definitions). 
  3. DrawState: This wraps all the required shader and render-state data for a drawing operation: a reference to a shader object, shader constants (one-time-init, immutable), shader variables (mutable) and an (immutable) state-block for render-states.
  4. Pass: A pass object holds all required data for a rendering pass: a render-target texture object, and a DrawState object which defines state valid for the entire pass. All rendering must happen inside passes. Typical passes in a pre-light-pass renderer are for instance the NormalDepth-Pass, the Light-Pass, the Material-Pass and the Compose-Pass. The pass object also contains the information whether and how the render target should be cleared at the start of the pass.
  5. Batch: A batch object simply contains a DrawState object which defines render state shared by several draw operations; this is a way to reduce redundant state switches.
Resource objects are opaque to the outside. To the caller, these are just ResourceId objects, there's no way to directly access the data in the resource objects (since they actually live in the render thread).
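
To make this concrete, an opaque handle like this is all the caller ever gets to see (a minimal sketch of the idea, not the actual Nebula3 code):

    // sketch of an opaque resource handle: the caller only holds an id,
    // the actual resource data lives on the render-thread side
    class ResourceId
    {
    public:
        ResourceId() : id(InvalidId) { }
        bool IsValid() const { return InvalidId != this->id; }
    private:
        friend class CoreGraphics2Facade;   // only the facade hands out valid ids
        static const unsigned int InvalidId = 0xFFFFFFFF;
        unsigned int id;
    };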

Resource creation happens by passing a Setup object to one of the Create methods in the CoreGraphics2Facade singleton. There's one Setup class for each resource type (so basically TextureSetup, MeshSetup, DrawStateSetup, PassSetup and BatchSetup). The Setup object describes how the resource should be created and shared (for instance, when creating a texture resource, the Setup object would contain the path to the texture file, whether the texture should be loaded asynchronously, whether the texture object should be a render target, and so on). The render thread will keep the Setup objects around, so it has all information available to re-create the resource (for instance because of D3D's lost-device state, or for more advanced resource management where currently unused resources can be removed from memory and re-loaded later).
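
From the caller's side this looks roughly like the following sketch (the Setup member names and the exact Create method name are illustrative assumptions, not the verbatim API):

    // create a shared, asynchronously loaded texture resource
    // (member and method names are assumptions for illustration)
    TextureSetup texSetup;
    texSetup.resourceName = "tex:characters/hero_diffuse.dds";  // path to the texture file
    texSetup.loadAsync = true;            // don't block, load in the background
    texSetup.isRenderTarget = false;      // a normal texture, not a render target
    ResourceId texId = CoreGraphics2Facade::Instance()->CreateTexture(texSetup);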

All rendering happens by calling methods of CoreGraphics2Facade:

Begin / End methods:
These methods structure a frame into segments (a schematic sketch of a complete frame follows this list).
  • BeginFrame / EndFrame: Signal the start and end of a render frame. 
  • BeginPass / EndPass: Signal start and end of a rendering pass. BeginPass takes the ResourceId of a Pass object, makes the render target of the pass active, optionally clears the render target, and applies the render state of the DrawState object of the pass.
  • BeginBatch / EndBatch: Signal start and end of a rendering batch. This simply applies the render state of the DrawState object of the batch.
  • BeginInstances / EndInstances: This is where it gets interesting. BeginInstances sets all the required state for a series of Draw commands. It takes a Mesh ResourceId, a DrawState ResourceId, and a "shader variation bitmask". The bitmask basically selects a "technique" from the shader (in D3DXEffects terms). For instance, to select the right shader technique for rendering the NormalDepth-pass of a skinned object, one would pass "NormalDepth|Skinning" as the bitmask.
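
Put together, a complete frame has a simple nested structure. Here's the schematic sketch (resource ids and the feature-bit constants are placeholders; the real point-light loop further below shows actual usage):

    // schematic structure of a single render frame
    CoreGraphics2Facade* cg2Facade = CoreGraphics2Facade::Instance();
    cg2Facade->BeginFrame();
    cg2Facade->BeginPass(this->normalDepthPass);    // activate + optionally clear render target
    cg2Facade->BeginBatch(this->solidBatch);        // apply the batch's DrawState
    cg2Facade->BeginInstances(this->mesh, this->drawState, NormalDepth | Skinning, false);
    // ... ApplyModelTransform() / ApplyVariable() / Draw() calls go here ...
    cg2Facade->EndInstances();
    cg2Facade->EndBatch();
    cg2Facade->EndPass();
    cg2Facade->EndFrame();
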
Apply methods:
This method group applies dynamic state changes during a frame:
  • ApplyProjectionTransform, ApplyViewTransform, ApplyModelTransform: Set the projection, view and model matrix, respectively.
  • ApplyVariable: applies a shader variable value to the currently active DrawState object (which has been set during BeginInstances). This is a template method, specialized for each shader variable data type (float, int, float4, matrix44, bool).
  • ApplyVariableArray: same as ApplyVariable, but for an array of values.
Draw methods:
This method group performs actual drawing operations:
  • Draw: Performs a single draw call, must be called inside BeginInstances/EndInstances. Renders a PrimitiveGroup (aka material group) from the currently active Mesh, using the render state defined in the currently active DrawState. For non-instanced rendering one would usually perform several ApplyModelTransform() / Draw() pairs in a row.
  • DrawInstanced: Like Draw, but takes an array of per-instance transforms to render the same mesh at many different positions (see the sketch after this list). Tries to use some sort of hardware instancing, but falls back to a "tight render loop" if no hardware instancing is available.
  • DrawFullscreenQuad: Simply renders a fullscreen quad with the currently set DrawState; this is used for fullscreen post-effects.
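
Here's the DrawInstanced sketch mentioned above (the exact argument types and the transform-gathering helper are my assumptions):

    // render the same primitive group at many positions with a single call
    // (GatherInstanceTransforms() is a hypothetical helper)
    Array<matrix44> instTransforms = this->GatherInstanceTransforms();
    cg2Facade->BeginInstances(this->treeMesh, this->treeDrawState, this->treeFeatureBits, false);
    cg2Facade->DrawInstanced(0, instTransforms);    // primitive group 0, one transform per instance
    cg2Facade->EndInstances();
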
And that's it basically. I'm quite happy with how simple everything looks from the outside, and how straightforward the innards work. For instance, leaving the shader system aside (which is implemented in a separate subsystem, CoreShader), the OpenGL-specific code in CoreGraphics2 is just 7 classes, and the biggest file is around 600 lines of code.

And it's simple to use. For instance, here's the render loop for the point lights in the new LightPrePassRenderer (hopefully the Blogger editor won't screw up my formatting):
    CoreGraphics2Facade* cg2Facade = CoreGraphics2Facade::Instance();
    if (this->pointLights.Size() > 0)
    {
        cg2Facade->BeginInstances(this->pointLightMesh, this->lightDrawState, this->pointLightFeatureBits, false);
        IndexT i;
        for (i = 0; i < this->pointLights.Size(); i++)
        {
            const Light* curLight = this->pointLights[i];
            const matrix44& lightTransform = curLight->GetTransform();
            
            // compute light position in view space, and set .w to inverted light range
            float4 posAndRange = matrix44::transform(lightTransform.get_position(), this->viewTransform);
            posAndRange.w() = 1.0f / lightTransform.get_zaxis().length();
            
            // update shader params
            cg2Facade->ApplyModelTransform(lightTransform);
            cg2Facade->ApplyVariable<float4>(LightPosRange, posAndRange);
            cg2Facade->ApplyVariable<float4>(LightColor, curLight->GetColor());
            cg2Facade->ApplyVariable<float>(LightSpecularIntensity, curLight->GetSpecularIntensity());
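            // draw primitive group 0 of the point light mesh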
            cg2Facade->Draw(0);
        }
        cg2Facade->EndInstances();
    }
The only things still missing from CoreGraphics2 are dynamic resources and a plugin system to extend the functionality of the render-thread side with custom code (for instance for non-essential stuff like runtime resource baking).

As much as I'd love to have a rendering system where dynamic resources aren't needed at all, there's no way around them yet. We still need them for particle systems and UI rendering.

On the front-end of the render layer, there's the new Graphics2 subsystem. The changes are not as radical as in CoreGraphics2 (with good reason, since changes in this subsystem would affect a lot of high-level gameplay code). There are still the basic object types Stage, View, Camera, Light and Model. There's now a new GraphicsFacade object, which drastically simplifies setup and manipulation of the graphics world. And I tried out a new component system for GraphicsEntities (Models, Lights and Cameras). Instead of an inheritance hierarchy for the various GraphicsEntity types, there's now only one GraphicsEntity class which owns a set of Component objects. The combination of those components is what turns a GraphicsEntity into a visible 3D model, a light source, or a camera. The main driver behind this was that 90% of all data in a ModelEntity was character-related, but less than 10% of graphics objects in a typical graphics world are actually characters.

I've split the existing functionality into the following entity components (a small usage sketch follows the list):
  • TransformComponent: defines the entity's position and bounding box volume in world space.
  • TimingComponent: keeps track of the entity-local time
  • VisibilityComponent: attaches the entity to the Visibility subsystem (view frustum culling)
  • ModelComponent: renders the entity as a simple 3D object
  • CharacterComponent: additional functionality for skinned characters (animations, skins, joint attachments, ...)
  • LightComponent: turns the entity into a light source
  • CameraComponent: turns the entity into a camera
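
In code the rough shape is something like this (a sketch under assumptions: AttachComponent() and the component Create() calls are invented for illustration, only the Ptr<> refcounting idiom is real Nebula3 style):

    // a light source is just a GraphicsEntity with the right component mix
    Ptr<GraphicsEntity> lightEntity = GraphicsEntity::Create();
    lightEntity->AttachComponent(TransformComponent::Create());
    lightEntity->AttachComponent(VisibilityComponent::Create());
    lightEntity->AttachComponent(LightComponent::Create());

    // a skinned character additionally needs timing, model and character data
    Ptr<GraphicsEntity> charEntity = GraphicsEntity::Create();
    charEntity->AttachComponent(TransformComponent::Create());
    charEntity->AttachComponent(TimingComponent::Create());
    charEntity->AttachComponent(VisibilityComponent::Create());
    charEntity->AttachComponent(ModelComponent::Create());
    charEntity->AttachComponent(CharacterComponent::Create());

This is also where the memory argument pays off: a simple light entity never carries the character-related data around.
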
This component model hasn't really been written to allow strange combinations (you might be tempted to attach a CameraComponent to a Character entity for a first-person shooter). Theoretically something like this might even be possible, but I don't think it's a good idea. The driving force behind the component model was cleaner code and better memory usage.