Making [Q4] Kats Klorr #3 – Radiant & Entities
Developing game levels is often a long, protracted process, especially where the level and its contents are based on customised content, i.e. built models, textures and so on. The following is a partial walk-through of the process of building a custom level for Quake 4. All content was modelled using Blender 3D.
The contents of this article were originally posted several years ago to a now unavailable forum. Some minor editing for clarity has been performed.
Lighting First Pass
Now that the model is in the editor the good stuff can be started: the lighting! For multi-player maps this is by far the most fiddly part of the process, simply because of the way lighting works in the Doom 3 engine; in short, there's no easy or quick way to go about properly lighting a level; it's something that needs a lot of time spent on it. The end results are worth it though. For more general information about lighting principles read this tutorial.
The basic setup is to add a global fog light that contains the whole level, using a fog distance setting of between 15000 and 20000 via the following entity key/value pair (note the key is "shaderParm3", the standard fog-distance parameter on idTech 4 fog lights);
key - "shaderParm3", value - "15000"
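For context, a fog light is just a light entity with a fog material assigned. As a sketch, the complete entity might look like the following in the .map file (the name, origin and colour values here are hypothetical; "fogs/basicFog" is a stock fog material, and shaderParm3 is the usual fog-distance key):

```
// hypothetical global fog light entity as written in the .map file
{
"classname" "light"
"name" "light_fog_global"
"origin" "0 0 256"
"texture" "fogs/basicFog"
"shaderParm3" "18000"      // fog distance; larger values give a thinner mist
"_color" "0.45 0.48 0.52"  // fog tint
}
```

The same keys can be added to a selected light through the editor's entity inspector rather than by editing the .map file directly.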
This will create a thin ‘mist’ rather than a ‘fog’. There’s a reason for this aside from the ambient mood it creates: for some reason, although a fog light doesn’t actually light anything up (it has no ‘light’ ability), when a point light (the default light setting) is added into an area the fog starts to act a little like an ‘ambient’ light, softening and seemingly dispersing the light on surfaces. The net effect of this is something that more closely approximates ‘true’ ambient lighting than the current ambient light material that can be applied. Granted the level is fogged, but I’d rather have that than the rather weird lighting that happens as a result of using the ambient material. Aside from the global fog, there are basically three lights, coloured as follows;
‘sky’ – pale bluish
0.78 0.88 1.0
‘ambient’ – medium grey blue/purple
0.44 0.47 0.49
or 0.24 0.26 0.27
‘lamp’ – light yellow/orange/red (depends on the colour of the flame used for the lanterns)
0.82 0.76 0.70
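For reference, colours like these are set on each light entity with the standard `_color` key, values normalised 0.0 to 1.0. The three lights above as key/value pairs:

```
// 'sky' light
"_color" "0.78 0.88 1.0"

// 'ambient' filler light
"_color" "0.44 0.47 0.49"

// 'lamp' light
"_color" "0.82 0.76 0.70"
```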
Each ‘room’ has one local ‘ambient’ light that creates an overall background light value that’s greyish in tone with a mid-range light level; this makes the room relatively well lit with a constant value.
Important note : This is an ‘ambient’ light in the literal sense of what that means; a background light rather than Quake 4’s ‘ambient’ light, which is the nofall material applied to a light entity. There is a big difference between the two.
What this light is doing is acting as a ‘filler’. Shown in the screen-shot below, centre left (roughly running down the middle of the image), is a brighter area of the room (the floor area is shown slightly brighter than its surroundings, along with highlights on the rockwall); that’s a skylight, and because of its proximity to the room, adding the background light helps give the impression that the relatively bright light from the sky is ‘bleeding’ into the whole room.
In the editor the skylight is a long thin light volume that covers roughly the same area it would be expected to illuminate. This then overlaps the room’s background light to provide a reasonably smooth transition between the two, while at the same time creating a fake radiosity effect (in as much as that can be visually approximated in the Doom 3 engine).
After a good few hours (most of which were spent on this one area of the map) this is what the level looks like in game with the current lighting placement; it’s getting there (the low FPS in the shot is a result of screen-shots ‘pausing’ the game; its real reading is around 30FPS with the entire map being drawn all the time)
Readjusting The Model
Doing the initial lighting setup above and viewing it in game also allowed the model to be physically checked for size issues against the player; there were a couple of them, mainly the narrow entrances/exits to certain areas. Whilst they may look visually interesting (and would work ‘OK’ in single player, where speed of movement isn’t as great a priority) they created ‘movement’ problems in multi-player, which means they need to be widened to allow easier passage.
Back in Blender, to make sure adjustments like this are relative to the game each vertex is moved whilst holding down Ctrl to lock it to the grid. This also means that vertices from different mesh sections all match up correctly providing they too have been moved from their original positions in the same way.
The big dark blue zigzag shape below is a gap that needs closing after moving the walls. Needless to say, doing this to the model then means that the UV maps need to be tweaked (moving vertices like this results in stretched UV maps), and the brushwork hull also has to be altered to compensate for these minor changes.
Re-Adjusting The Map
The shot below shows the re-adjusted and re-exported model back in the editor. As can be seen, parts of the brushwork hull poke through the mesh, so these have to be adjusted so that doesn’t happen. Keep in mind that the brushes need to hug the model as closely as possible; the main reason for doing this is to prevent the player passing through the mesh and into a void in instances where the game’s collision breaks, which it has a tendency to do when dealing with areas of a map that are ‘complex’ due to the use of models (intersecting models run a greater risk of causing problems). Also make sure there aren’t any stray edges poking out which could snag the player in game, a must to watch out for when making adjustments to areas of a map that players can get to during the course of normal game-play.
Sometimes it’s easier to adjust the simple brush volume shapes than the relatively complex model structure, especially for organic surfaces
Second Lighting & Model Pass
It’s usually easier to work with large models in such a way that you separate objects by material when doing tests. The main reason for this is to make it easier to bug-track errors to do with textures and/or the mesh itself; exporting everything as one multi sub-object file often makes it difficult to see immediately where an error is in a large file, so breaking the mesh down means being able to see right away where an offending object is. It’s particularly useful to do this when texture blending is involved; having two or three mesh sections set up to use blending can get a little confusing in terms of which blend is supposed to go where! Breaking the model up like this also makes it easier to apply the materials to each object.
Speaking of texture blending, the model’s vertex colours were reset and repainted so that only the essential areas received any blending in game, notably the edges and joints between rock and ground. Blended textures are quite ‘in your face’ if you get carried away and put them everywhere, so it’s sometimes best to drop right back to the bare minimum so you can see what’s going on and how the blending affects the look of the surfaces in game. The screen-shot at the bottom of this post shows quite a heavy blend between ‘dirt’ and ‘rock’; too much so, because you can’t really see enough of the rock texture’s detailing. It’s things like that which need to be watched.
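As a rough sketch of how vertex-driven blending of this kind is declared, here is a hypothetical material using idTech 4's vertexColor/inverseVertexColor stage keywords (all names and texture paths are made up for illustration):

```
models/mapobjects/example/rock_dirt_blend
{
	// rock appears where the vertex colour is painted towards white...
	{
		blend       diffusemap
		map         models/mapobjects/example/rock_d.tga
		vertexColor
	}
	// ...and dirt fades in where it is painted towards black
	{
		blend       diffusemap
		map         models/mapobjects/example/dirt_d.tga
		inverseVertexColor
	}
}
```

The painted vertex values from Blender are what drive the per-vertex mix between the two stages in game.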
Old ‘first pass’ lighting – changing the colour of the light changes the ‘mood’ of the level; bluish colours lend a colder feel
New second-pass lighting – warmer colours change the mood of the level, fitting the eventual colour-cast the stone torches will emit
The two shots above highlight another ‘feature’ that often gets overlooked: using lighting to create ‘mood’. Although the light entities are in approximately the same positions in both shots, it’s easy to see how changing the colour of the lights affects the overall ambiance or ‘mood’ of the scene. In essence the lighting should reflect how the room is lit and by what; that sounds like an obvious statement to make, but it’s often overlooked for various reasons, chief amongst which is the difficulty of placing lights in such a way as to maintain a consistent appearance versus the overhead of the engine having to process polygons several times over as they’re rendered, relative to the number of lights in a room. In effect, that’s pretty much what most of your time with lighting is spent on: trying to get the right look balanced against rendering overheads (FPS). It’s not easy and it does require patience and time. Read up on lighting principles here.
If the colours of the lights are kept towards the ‘blue’ end of the spectrum you create a ‘cold’ atmosphere, move it towards ‘red’ and you create a ‘warm’ one. Use this colour difference to play one against the other so that you not only have ‘dark/light’ but also ‘cool/warm’ interplay which will then highlight certain features better. This is a good technique to use with outdoor lighting when the use of a secondary light is required to reduce the ‘blackness’ of shadows.
Depending on the architectural features of a map, not all lights need to be setup to cast shadows; oddly turning them off not only helps performance but also adds more credence to faking the impression of radiosity because of the way lights work when set up to ‘noshadow’; they ‘bleed’ better.
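Turning shadow casting off is done per light entity with the standard noshadows key/value pair:

```
"noshadows" "1"   // this light no longer casts stencil shadows
```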
Collision Models
One advantage of using models for some of the detailing in the map is that they can be rotated and positioned where necessary. The downside is that this creates an alignment problem when trying to cover objects with playerClip brushes in the editor (to simplify collision data and reduce the map’s file size as a result); despite blocking out a ‘primitive’ shape and *then* moving it (as opposed to trying to manipulate the edges of a brush to match the alignment of the model it’s covering), the brushes always, without fail, either leave bits of the model poking out or, to a lesser extent, end up as odd shapes because the edges can’t be snapped to the grid well enough to approximate fine levels of rotation. This usually isn’t a problem in itself unless the clip brush needs to be lined up with something else.
To get around these problems a collision hull is applied to the model in Blender 3D. This is essentially nothing more than a primitive shape that approximates the general contours of the underlying model, covered in “textures/common/collision”; everything reacts to this in game because the model underneath gets a ‘nonsolid’ material parameter that makes it possible to walk and shoot through it. The advantage is that the fiddly details of the model won’t bloat the .cm file with non-essential collision information.
The models shown below are exported as individual units to allow ‘instancing’ (the repeat use of an object), and once the material paths have been edited they’re loaded into the editor and placed around the map. Prior to this the main map model had been broken down into material-based sections – sections of the mesh were ‘grouped’ based on material assignment. One advantage of doing this is that the larger mesh section of the stone light fittings could act as a ‘placement template’; it acts as a guide for the positioning of the individual light units and is then deleted once done.
It’s worth pointing out here that the original larger lamp model section could have been left in place ‘as is’; it’s convenient to do and ensures that the positions of the lamp models are *exactly* where they were put. The downside is ‘overdraw’; polygons you can’t see being drawn on screen – more on this later.
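Once placed, each lamp unit ends up in the .map file as a model-based entity. A sketch of what a single placed instance might look like (names, paths and values here are hypothetical; func_static is the usual classname for static map models):

```
// hypothetical placed lamp instance
{
"classname" "func_static"
"name" "lamp_unit_01"
"model" "models/mapobjects/example/lamp.ase"
"origin" "512 -256 96"
"angle" "135"   // rotation around the vertical axis to suit the wall it sits on
}
```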
Wire frame showing the two models that compose the light object; the light itself and the highlighted (pink outline) collision box
Highlighted ‘red’ in the background is the original light template section, the foreground showing an individual light unit already placed over the top of the light that was part of the larger template
Tidying Up Texture Blends
This takes a bit of an eagle eye, as incomplete blends aren’t always easy to spot. One of the ‘problems’ associated with using models is how the game engine treats them in relation to lighting and smooth groups. Although lighting is per pixel, it still doesn’t negate the problems associated with how smoothing works across groups of polygons; two neighbouring polygons will be lit slightly differently depending on whether they are part of the same ‘group’ or not, or on the angle a light hits them (this is aside from the shading and shadows of dynamic lights).
As a result of this there are instances where a texture blended across two different objects may appear to be incomplete. It isn’t; it’s a result of the way the lighting interacts with the seams where mesh sections meet.
Ideally corrections like this need to be done before the mesh is broken down into smaller sections (before final export); it just makes amendments of this nature easier to manage than trying to ‘guess’ correct vertex values when sections aren’t right next to each other (so you can see the intensity of the colouring side by side).
It’s worth pointing out here that, as you look at the last image, it *does* have a ‘seam’, but it’s not as prominent as previously. The textures do blend to 100% on both sections so it’s not a blend issue; it’s related to the comment above regarding how lighting interacts with mesh objects and smooth groups – elsewhere on the map there are transitions similar to the one shown above and they are more or less perfect (given the limitation of not being able to use objects with ‘multimaterials’ [two or more materials on the same object]). This just highlights one of the minor visual problems that happen as a result of using models.
In the shots above and below there are two separate sections; highlighted red is one. Of particular concern is the texture blending as it crosses from one section to the next, both of which use ‘dirt’ as the connecting texture – both sections blend out along their edges to ‘dirt’ so the common shader gives the impression of a smooth transition from one area (rock) to another (ground)
The above image shows the initial blending as it appears in the editor’s real-time render; because the vertex colours weren’t painted correctly over the two sections there’s too hard a transition from one to the other, which creates a hard leading edge
The image above shows the two sections with corrected vertex re-painting, resulting in a much better blend between the two mesh sections; the two sections were repainted so they both had equal intensity values as well as occupying a slightly larger number of vertices than originally
Placing Clip Brushes
Part of this phase of the build isn’t strictly necessary because there aren’t any *official* Bots around for Quake 4. There are however some community ones, so it’s worth ‘clipping’ the map with their use in mind.
Aside from ‘BotClipping’, ‘PlayerClipping’ is also often needed in a map to cover and ‘smooth’ the profiles of walls and objects, preventing players from getting snagged on things as they run past. How much of this is done usually depends on the size and complexity of the object and its position or accessibility – how well (if at all) the player can get to it. Ask a few simple questions and go on from there;
If they can’t get to it, clip it
If it’s a rough surface, clip it
If the object is overly complex, clip it, to simplify player collision at least
Normally clipping is done using a couple of special materials applied to brushes which then cover an object. It’s slightly different to using collision hulls as mentioned above because clip materials can’t be applied to polygons in the same way the collision material can – something to do with clip objects needing to be ‘volumes’ rather than ‘surfaces’.
What this means is that depending on what you want to do regarding the simplification of objects in the world you can do one of two things; add a collision hull, or use a clip brush (technically you can do both in the Radiant editor but not on the actual model itself).
Collision hulls generally block everything; weapons fire, bots and player models – the advantage is that being model based it can closely approximate the contours of the object it’s clipping resulting in fewer ‘dead-space’ errors (weapons impacts happening in mid air because they’ve hit a clip brush). The downside is that it blocks everything.
Clip on the other hand doesn’t; it blocks various aspects of game play based on what’s written into the material file. Usually it blocks players (playerClip), bots/AI (monsterClip), weapons (weaponsClip), everything (fullClip), or any combination therein. The advantage is that specific clip properties can be built around certain objects or areas (you want to block players but not weapons impacts). The downside is that because they’re brush based trying to clip odd or awkward shapes is impossible at worst, difficult at best.
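The clip materials themselves are just ordinary material declarations carrying the relevant global keyword. A rough sketch of a player clip material in the style of the stock textures/common assets (the exact stock definition may differ slightly):

```
textures/common/player_clip
{
	qer_editorimage textures/common/player_clip.tga
	playerclip   // blocks player movement only
	noimpact     // projectiles pass through without leaving impact marks
	noShadows
	{
		// editor-only visualisation stage so the brush is visible in Radiant
		blend add
		map  textures/common/player_clip.tga
	}
}
```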
Because the walls of the model are at all sorts of angles, botClip has been used extensively; bots don’t ‘trick jump’ when playing, so those areas can be clipped out. Players, on the other hand, do, using the environment to their advantage; hidy-holes, ledges, nooks and crannies are all important for the player – if they are in a map they will be found and used. This raises a bit of a dilemma in ‘terrain maps’ because A) collision needs to be simplified, but B) players don’t like being told they can’t go somewhere by invisible walls.
How clip is used depends on the map and the type of game-play you want people to get from it, there are no real fixed rules on what to do other than the general questions that can be asked above
Model Placement & Connection
Once the level has reached a point where it all fits together (lighting done, item placement done, and any tweaks and bug fixing sorted out), it’s time to start ‘optimising’ the mesh. Up until this point the mesh has been one huge single modelled object; as was mentioned previously this just makes adjustments and fine tuning that much easier. The disadvantage of this is ‘optimisation’, i.e. making sure the game is rendering to screen only the polygons that are required based on what the player can see.
At this ‘mid build’ point in development the engine is pretty much drawing the whole model to screen; that needs to be addressed and fixed, or at least better managed.
What needs to be done now is twofold, both parts of which affect something called ‘overdraw’: the model needs to be re-exported in smaller chunks, and those chunks need to be ‘managed’ as best they can be given the features and terrain of the level.
What this means from a practical point of view is that the objects in Blender that are currently broken down into smaller chunks (based on ‘smooth-grouping’ as previously mentioned) need to be reassessed and reconnected to make larger (but still optimised) units. It’s all very well having close to one hundred modelled objects on different layers in Blender, but exporting that many ASE models to use in a level starts to get impractical and time consuming to manage; more importantly, it doesn’t necessarily offer any improvement in in-game performance. The time it takes to do all the work involved with exporting lots of smaller objects, and any small gains that might result, needs to be offset against fewer larger objects, less time and ‘reasonable’ performance. A line has to be drawn, otherwise you end up spending too much time on something and gaining very little in return. The question to ask then is “is it worth it?”.
In essence then, the model sections need to be regrouped so they work with portal placement in the map itself rather than against it; models are placed and broken up relative to where the boundaries for each portal-ed ‘zone’ or ‘area’ are. More on this to come.
The image above shows a smaller mesh section of a larger area selected; these need condensing into fewer, larger mesh sections
The image below shows the mesh section after being joined to other sections to make one larger object. On either end of the selected object is a passage down which you can see the mesh section. As an area originally composed of three separate objects, it makes no real sense to keep them separate when all of them will be seen and then drawn by the game. From a practical point of view, it’s easier to work with one larger (but optimised) object than it is to work with several. In other words, no (or very few) performance gains are made by using separate smaller sections in this instance
Floor objects & Detail Collision
I wanted to add plants to this level, but on converting them over (multiplant) and placing them in game the lack of shadows became all too obvious; painfully so, in fact. Granted, it’s more than possible to add shadow ‘decal’ layers under the models, but due to the uneven ground surface it means making a lot of custom decals for every plant placed in the level, which is a little impractical and adds a considerable number of extra polygons to the scene.
Instead it was decided to add some edge ‘interest’ in the form of thin layers of rock; what this is doing is ‘breaking’ the monotony of the surfaces in the map without interfering with game play too much.
The problem here is the “thin” bit. The Doom 3 engine and subsequently Q4 has major problems with anything “thin”; it messes up collision detection in multi-player to such an extent that players can actually get stuck on small ‘lips’ and ‘outcrops’ that stick up from the floor; so much so in fact that they can’t even be walked over, seemingly forming an impenetrable barrier.
The way around this is to ‘null’ the visible map object itself and add a much more simplified collision hull/shell around it; in the case of the floor models that shell has a shallow ‘ramp’ on all sides to make it easy for the player to walk/run/move up or down the collision covered mesh.
The Blender 3D shot below shows the model centred in the mass of the object, with the much simpler collision shell encasing it. As with all map objects, its POO (point of origin) is positioned to allow a healthy burial into the surrounding brushwork and other models without causing additional problems. As it’s an ASE model, both sections are separate objects, each with just one material/texture applied per object. They are then both selected and exported as a single ‘combined’ object, file paths are edited and the objects are brought into game.
The important part of this ‘prepping’ of the ASE model is to make sure that the ‘rock’ texture – the texture you physically see in game – is nonsolid; in other words, the Quake 4 material you apply to the mesh section has to have no effect on any collision in Q4 at all – you’d fall through the object if you were to walk over it in game (if you’re writing custom materials then this simply means copying a material and adding “nonsolid” as one of the material parameters) – this is because the collision hull/shell does all that work; the ramps easing player movement across their surfaces as well as acting as generalised weapon impact hulls.
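Following that note, a sketch of what the visible rock material might look like with nonsolid added (all names and paths are hypothetical):

```
models/mapobjects/example/rock_thin
{
	nonsolid    // no collision at all; the exported collision shell handles that
	diffusemap  models/mapobjects/example/rock_thin_d.tga
	bumpmap     models/mapobjects/example/rock_thin_local.tga
	specularmap models/mapobjects/example/rock_thin_s.tga
}
```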
Wireframe, flat (smoothed) shaded and textured views of the floor detailing map objects showing the visible object and the ‘ramped’ collision shell
The map objects as they appear in Q4Edit, layered up to add a bit more surface and edge detail
Same objects run in-game (showing approximately the same view as the editor shot)
Sectioning The Level Model
Although there is still a bit of work to do to finalise the level, it’s at this point that work can begin on exporting the individual model sections as separate objects instead of the one big mesh that’s been used so far – which was mainly done to ease development; fixing problems and making adjustments to the mesh without too many headaches, as well as allowing the level to actually be built and populated with items and gameplay objects (jump pads, weapon pickups etc.).
Now this part of the process is very important and needs to be done if the level is to be reasonably well ‘optimised’; by that it’s meant that the game engine is rendering only the parts of the map it needs to and/or parts the player can see at any given moment.
Taking a closer look at the shot below reveals a couple of things; once connected, the model sections all remain relative to their original positions from when the mesh was created and exported as one big object. This means the POO of each section is currently at Blender’s 0,0,0 grid axis centre; fine for exporting one big object, but not for smaller ones. This needs to be addressed.
Move each section to its new 0,0,0 axis position, making sure to have the Ctrl key pressed when doing so. Also make sure to be in orthogonal top view, because the snap-to-grid sensitivity changes depending on whether you’re in orthogonal or perspective view. Once there, make sure the cursor is also set to 0,0,0 by editing the 3D cursor ‘XYZ’ position fields in the ‘View Properties’ panel (View > View Properties); this MUST be done.
Once the cursor has been set, in object mode press the ‘Centre Cursor’ button in the Editing buttons window (F9) to reset the mesh object’s POO to where the cursor is. This means the object’s centre of origin changes to a more ‘local’ position relative to the object itself, which is important for map objects; all map object models need to have a ‘local’ origin point.
By setting the POO to Blender’s grid centre and then snapping to that grid when moving objects for repositioning, the objects stay snapped to the grid in Q4Radiant when they are finally opened in it for map placement; all the pieces will snap together like a jigsaw puzzle
[1: Level Blockout | 2: Export Mesh | 3: Level Editor | 4: Performance]