Impossible Creatures | Official FAQ
The following is an overview of the key technological features of the Impossible Creatures Engine:
Combiner Engine
SPOOGE (Rendering Engine)
Mesh Representations
World Lighting & Shadowing
Motion Tree
Terrain Rendering
Trigger System
NISlet System
Pathfinding & Collision Detection
AI System
Sound Engine
Extendable Game Architecture
Combiner Engine

Functionality: The combiner engine's sole function is to take two animals as inputs, along with parameters that define how to combine them, and then generate a unique output creature.
Description: The core of the combiner engine is built around the concept that each input is a collection of limbs. Each input contains data defining what geometry, textures and skeleton make up each limb. The parameters to the combiner define which limbs the output is composed of. To simplify the combination process, the combiner is broken up into several components, each of which operates on a different piece of data. The skeleton combiner creates the new skeleton for the output. The geometry combiner then creates and attaches the geometry to the newly created skeleton. The texture combiner creates and attaches the new textures to the geometry of the output. One of the more difficult steps is the animation combination, in which the animations of the two inputs are blended together to create new animations for the output. The end result is a unique blend of the two inputs, from geometry to animation to game attributes.
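
The following is a minimal C++ sketch of that pipeline, using invented types and sub-combiner names (StockAnimal, CombineSkeletons, etc.) purely to illustrate the order of operations; it is not the shipped combiner code.

    #include <string>
    #include <utility>
    #include <vector>

    struct Skeleton {};
    struct Geometry {};
    struct Texture  {};

    struct StockAnimal {
        Skeleton skeleton;
        std::vector<Geometry> limbGeometry;   // one entry per limb
        std::vector<Texture>  limbTextures;   // one entry per limb
    };

    // Which parent (0 or 1) supplies each named limb of the output.
    using LimbSelection = std::vector<std::pair<std::string, int>>;

    struct Creature {
        Skeleton skeleton;
        std::vector<Geometry> geometry;
        std::vector<Texture>  textures;
    };

    // Stubbed sub-combiners; each one operates on a different piece of data.
    Skeleton CombineSkeletons(const StockAnimal&, const StockAnimal&, const LimbSelection&) { return {}; }
    std::vector<Geometry> CombineGeometry(const StockAnimal&, const StockAnimal&, const LimbSelection&, const Skeleton&) { return {}; }
    std::vector<Texture> CombineTextures(const StockAnimal&, const StockAnimal&, const LimbSelection&, const std::vector<Geometry>&) { return {}; }
    void CombineAnimations(const StockAnimal&, const StockAnimal&, const LimbSelection&, Creature&) {}

    Creature Combine(const StockAnimal& a, const StockAnimal& b, const LimbSelection& sel)
    {
        Creature out;
        out.skeleton = CombineSkeletons(a, b, sel);               // new skeleton first
        out.geometry = CombineGeometry(a, b, sel, out.skeleton);  // geometry attached to it
        out.textures = CombineTextures(a, b, sel, out.geometry);  // textures applied to the geometry
        CombineAnimations(a, b, sel, out);                        // animations blended last
        return out;
    }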

Animation Combination - The challenge with combining the animations of two creatures lies in the fact that the source creatures can have very different skeletal structures and movement characteristics. To accomplish this with visually pleasing results, the animation is broken down on a per-limb basis (torso, legs, head, tail, etc.) and then joined together to match the limb assignments of the resulting creature. However, the input creatures can have proportionally different leg lengths, so a custom inverse kinematics solution was designed to adjust for this and give the creatures the proper look of walking on even ground. This also allows animations that are unique to an input creature to be transferred, such as how the chimpanzee stands upright when not moving, even if the second input creature stands on all four legs (this of course requires that the chimpanzee's front arms were selected for the resulting creature).
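
As a rough sketch of the per-limb reassignment step only, the hypothetical code below (AnimTrack, ParentAnims and the limbSource mapping are all invented) copies each output limb's track from the parent that supplied the limb and computes a leg-length ratio that a real inverse kinematics pass would use to re-target foot positions:

    #include <map>
    #include <string>

    struct AnimTrack { /* keyframed joint rotations, foot targets, ... */ };

    struct ParentAnims {
        std::map<std::string, AnimTrack> limbTracks;  // "frontLegs", "head", "tail", ...
        float legLength = 1.0f;
    };

    std::map<std::string, AnimTrack> BlendWalkCycle(const ParentAnims& a,
                                                    const ParentAnims& b,
                                                    const std::map<std::string, int>& limbSource,
                                                    float outputLegLength)
    {
        std::map<std::string, AnimTrack> result;
        for (const auto& [limb, source] : limbSource) {
            const ParentAnims& parent = (source == 0) ? a : b;
            AnimTrack track = parent.limbTracks.at(limb);     // inherit the parent's track
            float scale = outputLegLength / parent.legLength; // would feed the IK re-targeting
            (void)scale;                                      // IK adjustment omitted in this sketch
            result[limb] = track;
        }
        return result;
    }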


SPOOGE (Rendering Engine)

Functionality: The low-level rendering interface, known as spooge, provides abstracted services for all drawing.

Description: Spooge separates the drawing information into topology (triangle lists, etc.), spatial information (vertex lists), and surface representation (textures and shaders). We support different primitive indexing modes and variable vertex sizes to minimize bus bandwidth for different mesh types. The surface representation abstracts the API representation of the pixel pipeline, provides an auto-multipass fallback, and includes an internal optimizer to minimize the complexity of the shader.
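
The structures below are a hedged illustration of that three-way split; the names (Topology, VertexStream, Surface, DrawCall) are invented for the example and do not reflect spooge's actual interfaces.

    #include <cstdint>
    #include <vector>

    struct Topology {                      // how vertices are connected
        std::vector<uint16_t> indices;     // triangle list, strip, etc.
        enum Mode { TriList, TriStrip } mode = TriList;
    };

    struct VertexStream {                  // where vertices are, with a variable layout
        std::vector<uint8_t> data;         // packed vertex data
        size_t stride = 0;                 // bytes per vertex, kept minimal per mesh type
    };

    struct Surface {                       // how pixels are shaded
        int shaderId  = -1;
        int textureId = -1;
    };

    struct DrawCall {                      // one draw = one of each, combined at submit time
        const Topology*     topology;
        const VertexStream* vertices;
        const Surface*      surface;
    };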

The shader system is one of the larger sub-systems. It was designed to provide maximum multi-texturing, while building multiple passes when needed. It uses a DAG representation to compute the pixel colour. This representation is then normalized (to our internal standard), optimized, and then fixed up (to avoid card/driver specific issues). This final tree is then used to build the passes. Where hardware is incapable of performing a full tree in a single pass, the compiler will generate accompanying passes to complete the pixel representation.
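
As a deliberately simplified illustration of the pass-splitting idea only (the real compiler works on a normalized and optimized DAG, not a flat list), the sketch below splits a flattened shading expression into passes of at most the number of texture stages the hardware exposes:

    #include <algorithm>
    #include <vector>

    struct ShadeOp { int textureId; };          // one stage of the flattened colour expression

    std::vector<std::vector<ShadeOp>> BuildPasses(const std::vector<ShadeOp>& ops,
                                                  int maxStagesPerPass)   // must be > 0
    {
        std::vector<std::vector<ShadeOp>> passes;
        for (size_t i = 0; i < ops.size(); i += maxStagesPerPass) {
            size_t end = std::min(ops.size(), i + maxStagesPerPass);
            passes.emplace_back(ops.begin() + i, ops.begin() + end);
        }
        return passes;                          // pass 0 writes the frame; later passes blend in
    }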

D3D-specific optimizations in the drivers include static vertex and index buffer sub-allocation, streaming dynamic vertex and index buffers, and depth buffer size matching (for nVidia hardware). Current work on spooge includes vertex and pixel shader support, heterogeneous 3-D pipeline assembly, and a render batching system.
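
To show what buffer sub-allocation means in general terms (the class and its simple bump-allocation policy are assumptions for this sketch, not the engine's allocator), many small meshes can share one large API buffer, each recording only the offset it was placed at:

    #include <cstddef>
    #include <optional>

    class BufferSubAllocator {
    public:
        explicit BufferSubAllocator(size_t capacityBytes) : capacity_(capacityBytes) {}

        // Returns the byte offset for the new block, or nothing if the buffer is full.
        std::optional<size_t> Allocate(size_t bytes, size_t alignment = 4) {
            size_t aligned = (cursor_ + alignment - 1) & ~(alignment - 1);
            if (aligned + bytes > capacity_) return std::nullopt;
            cursor_ = aligned + bytes;
            return aligned;
        }

    private:
        size_t capacity_;
        size_t cursor_ = 0;    // next free byte in the shared buffer
    };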


Mesh Representations

Functionality: We provide several mesh types in order to support the art needs of the team.

Description: We support: static triangle meshes, animated triangle meshes (segmented and skinned), static variable resolution meshes (VRMs), animated VRMs (segmented and skinned), and animated bezier quad patch meshes. The VRM and patch meshes allow for variable triangle counts. The animated meshes are controlled by an internal skeletal representation.

Bezier patch mesh: We use the Bezier patch mesh type to represent our creatures. This mesh type allows us to tessellate our creatures to a high level in order to remove faceting on the models. Also, the surface tessellation gives us a blended vertex weighting when the control points are welded to a single bone.
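
For reference, evaluating one bicubic Bezier patch from its 4x4 control points is standard math (this is a worked example, not engine source); tessellation simply samples (u, v) on a grid whose density sets the triangle count:

    struct Vec3 { float x, y, z; };

    static Vec3 Scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
    static Vec3 Add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

    // Cubic Bernstein basis: B_i(t) = C(3,i) * t^i * (1-t)^(3-i)
    static float Bernstein3(int i, float t) {
        const float c[4] = { 1.f, 3.f, 3.f, 1.f };
        float r = c[i];
        for (int k = 0; k < i; ++k)     r *= t;
        for (int k = 0; k < 3 - i; ++k) r *= (1.f - t);
        return r;
    }

    Vec3 EvaluatePatch(const Vec3 control[4][4], float u, float v)
    {
        Vec3 p = { 0.f, 0.f, 0.f };
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                p = Add(p, Scale(control[i][j], Bernstein3(i, u) * Bernstein3(j, v)));
        return p;   // surface point at (u, v), each in [0, 1]
    }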

Variable resolution mesh: Due to the relatively high cost of animated Bezier patch meshes, and their inability to reduce below their base mesh representation easily, we have built a variable resolution mesh system to provide a greater detail range. The system is based on a traditional view-independent edge collapse reduction. The current system selects collapses based on minimal volume change, although future work includes a quadric error method implementation to minimize surface attribute distortion.
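
The selection step can be sketched as follows (types and the precomputed cost are illustrative assumptions): candidate collapses are ordered by how little they change the mesh volume, and applying a prefix of that ordering yields the lower-detail versions of the mesh.

    #include <algorithm>
    #include <vector>

    struct EdgeCollapse {
        int   vertexFrom;
        int   vertexTo;
        float volumeChange;   // precomputed cost of merging vertexFrom into vertexTo
    };

    std::vector<EdgeCollapse> OrderCollapses(std::vector<EdgeCollapse> candidates)
    {
        std::sort(candidates.begin(), candidates.end(),
                  [](const EdgeCollapse& a, const EdgeCollapse& b) {
                      return a.volumeChange < b.volumeChange;   // cheapest collapses first
                  });
        return candidates;
    }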


World Lighting & Shadowing

Functionality: The in-game rendering provides a unified lighting and shadow system.

Description: Terrain lighting is pre-computed in the MissionEditor. This includes storing not only the diffuse terrain lighting but also the terrain shadow volumes. Object shadows are computed on demand. Objects that request static shadows will share them when possible. Dynamic shadowing objects have their shadows rendered out to a shared dynamic texture. In order to reduce the amount of dynamic texture memory, the dynamic shadow system sub-allocates from shared dynamic textures, balancing the shadow size against the texture space allotted. The lighting is composited onto the terrain such that overlap of shadows does not occur. Additionally, the terrain shadow volume is used to dynamically project the terrain shadow over objects in the world. Objects fully within terrain shadow do not produce shadows.
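
Sub-allocating shadow rectangles from one shared dynamic texture can be pictured with a simple shelf packer like the one below; the class name and packing policy are assumptions for illustration, and the shipped allocator's size-balancing logic is not reproduced here.

    #include <optional>

    struct Rect { int x, y, w, h; };

    class ShadowAtlas {
    public:
        ShadowAtlas(int width, int height) : width_(width), height_(height) {}

        std::optional<Rect> Allocate(int w, int h) {
            if (cursorX_ + w > width_) {                      // current shelf is full: start a new one
                cursorX_ = 0;
                cursorY_ += shelfHeight_;
                shelfHeight_ = 0;
            }
            if (cursorY_ + h > height_) return std::nullopt;  // atlas exhausted
            Rect r { cursorX_, cursorY_, w, h };
            cursorX_ += w;
            if (h > shelfHeight_) shelfHeight_ = h;
            return r;                                         // region this shadow renders into
        }

    private:
        int width_, height_;
        int cursorX_ = 0, cursorY_ = 0, shelfHeight_ = 0;
    };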


Motion Tree

Functionality: The motion tree gives us the freedom to create unique behaviors for every object in the game.

Description: The motion tree is a tree structure of possible motions for a game object. An artist or designer creates the tree in the ObjectEditor tool by layering the different animations and defining how they transition between one another. Variables can also be created that are altered in real time by the game engine; these affect the way each node in the tree is handled and produce different desired results.
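
A minimal sketch of that idea, with a hypothetical layout (MotionNode, Transition and the variable map are invented): nodes hold an animation, edges carry conditions on the game-driven variables, and each update follows whichever transition currently fires.

    #include <functional>
    #include <map>
    #include <string>
    #include <vector>

    struct MotionNode {
        std::string animation;                 // clip to play while in this node
        struct Transition {
            MotionNode* target;
            std::function<bool(const std::map<std::string, float>&)> condition;
        };
        std::vector<Transition> transitions;
    };

    MotionNode* Advance(MotionNode* current, const std::map<std::string, float>& vars)
    {
        for (const auto& t : current->transitions)
            if (t.condition(vars))
                return t.target;               // e.g. speed > 0 moves idle -> walk
        return current;                        // otherwise keep playing the current clip
    }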


Terrain Rendering

Functionality: The terrain renderer is the visualization of the game world created within the MissionEditor. It consists of three components: land, water and sky.

Description: A regularly sampled height field defines the land geometry. The land is decorated with a base texture, detail textures and decals. A low-resolution base texture is used to define the overall land colour. Higher-resolution tile-able textures are used for details in close-up shots. Additional details are introduced using decals, which are textured geometries that conform to the shape of the terrain.

A regularly sampled height field defines the water geometry. Any water geometry that is hidden by the land is omitted. Binary triangle trees are used to adaptively triangulate the water geometry while avoiding T-junctions. Water geometry is decorated with animated water textures and reflection maps (see below).

The sky geometry is created using a large textured dome above the land and water. The textures of the sky are also used as reflection maps for the water.

Frustum culling and visibility culling are used to reduce the number of polygons to display. Detailed textures and decals that are significantly far away from the camera are also omitted to improve display performance.
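
The core of frustum culling is a cheap plane test like the one below (a generic sketch, not the engine's implementation): a bounding sphere is rejected as soon as it lies entirely behind any of the six camera planes.

    struct Plane  { float nx, ny, nz, d; };            // plane normal points into the frustum
    struct Sphere { float x, y, z, radius; };

    bool SphereInFrustum(const Sphere& s, const Plane planes[6])
    {
        for (int i = 0; i < 6; ++i) {
            const Plane& p = planes[i];
            float dist = p.nx * s.x + p.ny * s.y + p.nz * s.z + p.d;
            if (dist < -s.radius)
                return false;                          // fully outside this plane: cull
        }
        return true;                                   // inside or intersecting: draw
    }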


Trigger System

Functionality: The trigger system allows designers to script the behavior of the game using triggers.

Description: A trigger is a description of the actions to perform when certain conditions are met during the game. For example, a trigger may declare a player as the winner when the enemy's lab is destroyed. By default, triggers are deactivated once their conditions are met and their actions are carried out. Triggers may be flagged as looping triggers so that they are not deactivated after they carry out their actions. Triggers may also activate and deactivate other triggers. This allows designers to create complex game behaviors by sequencing the flow of triggered actions.
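
A hedged sketch of that structure (the Trigger struct and update loop below are invented for illustration): a condition, a list of actions, a looping flag, and an active flag that other triggers can toggle.

    #include <functional>
    #include <vector>

    struct Trigger {
        std::function<bool()>              condition;
        std::vector<std::function<void()>> actions;
        bool looping = false;   // looping triggers stay active after firing
        bool active  = true;    // other triggers may flip this on or off
    };

    void UpdateTriggers(std::vector<Trigger>& triggers)
    {
        for (Trigger& t : triggers) {
            if (!t.active || !t.condition()) continue;
            for (auto& action : t.actions) action();   // e.g. declare the winner
            if (!t.looping) t.active = false;          // default: fire once, then deactivate
        }
    }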


NISlet System

Functionality: A NISlet is a short Non-Interactive Sequence (NIS) that plays scripted camera and entity animations with the help of the game simulation engine.

Description: The NISlet system currently supports three types of animations: animations for the camera, animations for entities that may be commanded to run, swim or fly to given locations, and animations for entities that do not run, swim or fly. These animations are all based on the movement of objects along pre-defined paths.

NISlet camera animations control the position of the camera as well as its rotation, declination, roll and zoom distance. The camera may also be instructed to focus on entities in the world during an animation. NISlet animations that command entities to run, swim or fly along animation paths rely on pathfinding to move entities from their existing locations to their target locations.

NISlet animations for entities that cannot run, swim or fly move entities by applying transformations directly to them.
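
Since all three animation types are based on movement along pre-defined paths, the basic mechanism can be sketched as time-keyed position sampling (PathKey and the linear interpolation are assumptions for this example; it also assumes the key list is non-empty and sorted by time):

    #include <vector>

    struct PathKey { float time; float x, y, z; };

    PathKey SamplePath(const std::vector<PathKey>& keys, float time)
    {
        if (time <= keys.front().time) return keys.front();
        if (time >= keys.back().time)  return keys.back();
        for (size_t i = 1; i < keys.size(); ++i) {
            if (time <= keys[i].time) {
                const PathKey& a = keys[i - 1];
                const PathKey& b = keys[i];
                float t = (time - a.time) / (b.time - a.time);   // blend factor between keys
                return { time, a.x + (b.x - a.x) * t,
                               a.y + (b.y - a.y) * t,
                               a.z + (b.z - a.z) * t };
            }
        }
        return keys.back();
    }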


Pathfinding & Collision Detection

Functionality: The pathfinding and collision detection work together to allow the creatures within the game to move about in a natural-looking way without colliding with one another.

Description: Spatial searching: The world is divided into a coarse grid, and each grid cell holds a list of the entities whose centers are contained within that cell. Each entity is therefore contained within a single bucket.
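
A simplified sketch of such a grid (class layout and names are assumptions): insertion drops an entity's id into the cell under its center, and a neighbourhood query only has to look at a handful of buckets.

    #include <vector>

    struct Entity { float x, y; int id; };

    class SpatialGrid {
    public:
        SpatialGrid(int cols, int rows, float cellSize)
            : cols_(cols), rows_(rows), cellSize_(cellSize), cells_(cols * rows) {}

        void Insert(const Entity& e) {
            cells_[CellIndex(e.x, e.y)].push_back(e.id);
        }

        const std::vector<int>& BucketAt(float x, float y) const {
            return cells_[CellIndex(x, y)];
        }

    private:
        int CellIndex(float x, float y) const {
            int cx = static_cast<int>(x / cellSize_);
            int cy = static_cast<int>(y / cellSize_);
            return cy * cols_ + cx;                   // caller keeps coordinates in range
        }

        int cols_, rows_;
        float cellSize_;
        std::vector<std::vector<int>> cells_;         // one bucket of entity ids per cell
    };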

Collision detection: The spatial search is used to provide a list of entities we are potentially colliding with. A quick-reject sphere test is done on each of these entities, and finally an OBB (Oriented Bounding Box) check is done between entities that reported a collision during the sphere test.
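
The quick-reject step amounts to a squared-distance comparison like the sketch below (struct names invented); only pairs that pass it go on to the more expensive OBB check, which is omitted here.

    struct Bounds { float x, y, z, radius; };

    bool SpheresOverlap(const Bounds& a, const Bounds& b)
    {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        float r  = a.radius + b.radius;
        return dx * dx + dy * dy + dz * dz <= r * r;   // compare squared distances, no sqrt needed
    }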

Pathfinding: Highest-level pathfinding (from one side of the world to the other) is performed on a multi-resolution grid, to ensure a path is found. Convex hulls are then created to resolve collisions with entities along this path. If entities (convex hulls) block a high-level "clear passage" on the multi-resolution grid, they are temporarily "burnt in" for one high-level pathfinding call to find an alternate route around the obstruction.
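
The "burn in" step can be sketched as temporarily marking the grid cells covered by a blocker as impassable before the high-level search runs, then clearing them again afterwards; the grid layout and function below are assumptions for illustration only.

    #include <vector>

    struct GridCell { bool passable = true; };

    void BurnInBlocker(std::vector<GridCell>& grid, int cols,
                       int minX, int minY, int maxX, int maxY)
    {
        for (int y = minY; y <= maxY; ++y)
            for (int x = minX; x <= maxX; ++x)
                grid[y * cols + x].passable = false;   // treated as solid for one pathfinding call
    }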


AI System

Functionality: The AI system was designed to allow the designers to create the high-level goals and desires of the AI without having programmer support for every detail.

Description: The AI is divided into two main layers: the high-level scripted layer and the low-level code layer. The scripted layer is a set of rules that the AI uses to determine what it should do based on the state of the game world, and it is written in a high-level scripting language. The script controls the units and buildings from a high level and gives them general desires or direction on what they should do. When scripted conditions are met, the associated actions are executed by the low-level system. Examples of what the high-level script can do include requesting certain units, specific research or a rank-up, requesting a building, commanding units to attack, specifying a general target, etc.
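
A hedged sketch of that two-layer split (GameState, AIRule and the request callbacks are invented): scripted rules pair a condition on the game state with a high-level request, and the code layer carries the request out.

    #include <functional>
    #include <vector>

    struct GameState { int idleCreatures = 0; int enemyBasesSeen = 0; };

    struct AIRule {
        std::function<bool(const GameState&)> condition;   // authored in the scripted layer
        std::function<void()>                 request;     // fulfilled by the low-level code layer
    };

    void RunAI(const GameState& state, const std::vector<AIRule>& rules)
    {
        for (const auto& rule : rules)
            if (rule.condition(state))
                rule.request();            // e.g. "build more creatures", "attack this target"
    }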


Sound Engine

Functionality: The Sound Engine was designed to take advantage of current and future audio technology and features without having to be rewritten in its entirety to support new technologies and hardware. The tools allow the Sound Designer a lot of flexibility without requiring programmer support.

Description: The Sound Engine is a feature-rich, flexible, high-level sound engine that is hardware independent: it uses a standardized interface to a low-level, hardware-dependent dynamic link library (DLL).

The high-level sound engine is designed to utilize the latest technologies in sound, including positional 3-D audio, compression techniques, interactive music, envelopes and DSP filters. By abstracting these at a high level, the users (application programmers, sound designers and composers) can utilize these features without knowledge of the underlying hardware's capabilities and implementation.

Implementing the hardware-dependent interface in a low-level DLL allows us to create new DLLs to support new hardware as it becomes available. It also allows us to switch hardware platforms with a minimal amount of new code to support the new platform; for example, switching from PC (our current platform) to one of the next-generation gaming consoles like the Sony PlayStation 2 or Microsoft Xbox.
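
The split can be pictured as an abstract interface plus a factory exported from the low-level DLL; ISoundDevice, its methods and the CreateSoundDevice export below are invented for this sketch and are not the engine's actual interface.

    struct Sound3D { float x, y, z; int sampleId; };

    class ISoundDevice {                       // implemented inside the low-level DLL
    public:
        virtual ~ISoundDevice() = default;
        virtual void PlayPositional(const Sound3D& sound) = 0;
        virtual void SetListener(float x, float y, float z) = 0;
    };

    // The only symbol the high-level engine needs to resolve from the DLL:
    // a factory returning the device for whatever hardware that DLL targets.
    extern "C" ISoundDevice* CreateSoundDevice();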


Extendable Game Architecture

Functionality: Our goal was to design an object-oriented architecture for a Real-Time Strategy game that will be easily modifiable.

Description: This architecture allows us to easily produce incremental content in the form of data modifications, code add-ons and full sequels. To achieve this goal, we created a set of high-quality tools for creating the game data; easily modified scripts drive most aspects of the game; and the game runs from a "MOD" DLL to separate the "MOD"-specific code from the game engine. The game engine, tools and specifications could be licensed for creating an entirely new game, or specific tools could be released to the "MOD" community to produce the user-created incremental content that helps the longevity of most games today.
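
Loading game code from a separate "MOD" DLL on Windows can be sketched as below; the IMod interface and the "CreateMod" export name are hypothetical and only illustrate how an engine can stay ignorant of the MOD-specific code it hosts.

    #include <windows.h>

    struct IMod {
        virtual ~IMod() = default;
        virtual void Initialize() = 0;
    };

    IMod* LoadMod(const char* dllPath)
    {
        HMODULE module = LoadLibraryA(dllPath);     // e.g. the game's "MOD" DLL
        if (!module) return nullptr;

        using CreateModFn = IMod* (*)();
        auto create = reinterpret_cast<CreateModFn>(GetProcAddress(module, "CreateMod"));
        return create ? create() : nullptr;         // engine never links the MOD code directly
    }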

