This is my first attempt at writing one game a month. It's a turn-based dungeon crawler in the style of traditional roguelikes. All the characters in the game are bugs. Each bug has an ability and combat stats. You can "channel" a bug and gain access to its ability and stats.

Bug Catcher

Visible Area Detection with Recursive Shadowcast
Most games employ some form of visible area detection to simulate the fact that opaque objects obscure one's view of whatever is behind them. Recursive Shadowcast is one of a handful of algorithms that compute visible area in worlds represented by 2D grids. This makes it suitable for use in roguelikes. This post will explain the recursive shadowcast algorithm.
Screenshot from Dungeon Crawl Stone Soup demonstrating its visible area detection
Visible Area Detection in a 2D Grid
Before diving into the details, let's get a feel for 2D grid-based visible area detection. Below is an environment that will be used throughout this post. White cells are empty and black cells are opaque walls. The scene is viewed from the cell containing the red cross.
The yellow-shaded area in the image below is visible from the centre of the observer's cell.
Note that some cells are partially inside the visible area. To simplify explanations, these cells will be considered to be completely visible.
The visible floor cells are shown below.
If at least one edge of a wall cell is visible, it is considered visible. Visible wall cells are shown below.
Recursive Shadowcast
Recursive Shadowcast takes the following input:
- a 2D grid of cells, where each cell is either transparent or opaque
- a coordinate of a cell in this grid from which the scene is viewed
It identifies which cells in the grid are visible when viewed from the specified position.
Octants
Recursive shadowcast considers the space to be made up of 8 octants, centred at the observer. Each coloured segment in the image below is a single octant. The visible area is computed independently on each octant.
If a cell lies on the border of 2 octants, it is considered to be in both. This means some cells are processed in two different octants. If necessary, it would be trivial to keep track of which cells have been visited, and so avoid visiting the same cell twice while processing two different octants.
To clarify the point above, shaded in blue below are all the cells in a single octant.
Note also that the cell containing the observer isn't part of any octant. The cell containing the observer is generally considered always visible, regardless of the opacity of the cells around it.
Example Octant
The simplest way to explain this algorithm is to jump right in with an example. We'll first consider this octant.
As we compute the visible area in this octant, we'll be keeping track of a few variables:
- depth is the vertical distance of a cell (or row of cells) from the eye. Computation always starts with depth = 1.
- min_slope is the gradient of the leftmost edge of our vision (within this octant)
- max_slope is the gradient of the rightmost edge of our vision (within this octant)
For this octant, gradient values are ratios dx/dy relative to the eye's position. That is, multiplying a gradient value by some distance above the eye gives a corresponding distance to the right of the eye on the line defined by the gradient. Thus, as in the image below, a gradient of 0 defines a line going straight up, and a gradient of 1 defines a line at 45 degrees up and to the right.
It's important to point out that the eye is considered to be in the centre of its cell. That is, if the eye is in the cell at coordinates (2, 3), the actual absolute position of the eye would be (2.5, 3.5) (assuming (0, 0) is the coordinate of the top-left corner of the grid).
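As a quick sanity check of this arithmetic, here is a tiny Python sketch (the specific coordinates are made up for illustration):

```python
def eye_centre(cell_x, cell_y):
    # The eye sits at the centre of its cell.
    return (cell_x + 0.5, cell_y + 0.5)

def gradient(eye, point):
    # dx/dy of the line from the eye to an absolute point above it,
    # where (0, 0) is the top-left corner of the grid.
    ex, ey = eye
    px, py = point
    return (px - ex) / (ey - py)

eye = eye_centre(2, 3)            # eye in cell (2, 3) -> (2.5, 3.5)
g_up = gradient(eye, (2.5, 1.0))  # straight up -> gradient 0
g_45 = gradient(eye, (5.0, 1.0))  # 45 degrees up and right -> gradient 1
```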
We'll scan the current depth from west to east. Each cell that is visited is considered to be visible. If an opaque cell is encountered, recurse, incrementing the depth, and adjusting the min and max slope such that the opaque cell obscures vision at the next depth level. This process is documented more concretely in the following pseudocode.
computeVisibleArea(current_depth, min_slope, max_slope) {
    first = true;
    previous_cell = null;
    for each cell "current_cell" at depth "current_depth" between "min_slope" and "max_slope" from west to east {
        mark current_cell as visible;
        if not first {
            if current_cell is opaque and previous_cell is transparent {
                # first opaque cell after at least one transparent cell
                A: next_max_slope = gradient of line passing through eye and northwest corner of "current_cell";
                   computeVisibleArea(current_depth+1, min_slope, next_max_slope);
            }
            if current_cell is transparent and previous_cell is opaque {
                # first transparent cell after at least one opaque cell
                B: min_slope = gradient of line passing through eye and southwest corner of "current_cell";
            }
        } else {
            first = false;
        }
        previous_cell = current_cell;
    }
    if previous_cell is transparent {
        # see through last group of transparent cells in row
        C: computeVisibleArea(current_depth+1, min_slope, max_slope);
    }
}
The interesting parts of the code are labelled with A, B and C. We'll visit each case as we work through the example octant.
Initially, the function is called with arguments:
computeVisibleArea(1, 0, 1)
The scan at depth 1 (below) is simple, as there are no opaque cells. The only case in the code above that we encounter here is case C. After scanning each cell, previous_cell refers to the rightmost cell at this depth (since we scan from west to east), which is transparent. Thus we recurse to depth 2, leaving the min_slope and max_slope arguments unchanged as they are passed to the recursive call.

Depth 2 is the same as depth 1 in that none of the cells are opaque, so we will skip it and move straight on to depth 3, which is much more interesting. Unlike the previous depths, there are 2 recursive calls made at depth 3 (represented by the green and blue pairs of lines in the image below).
Moving west to east, when the second cell at this depth is visited, the code above will hit case A, as an opaque cell is visited after a transparent cell. The value of next_max_slope for this case is the gradient of the rightmost green line. Note that this line passes through the eye and the northwest corner of the second cell. The leftmost green line has a gradient of min_slope (0 in this case), which was passed to this function as an argument. These two gradients are passed to the recursive call, along with an incremented depth (4).

The third cell visited at depth 3 causes the code to hit case B, as a transparent cell is visited after an opaque cell. In this case, no recursive call is made, but the value of min_slope is adjusted to be the gradient of the leftmost blue line. Note that this line passes through the eye and the southwest corner of the third cell.

Finally, the fourth cell at this depth causes the code to hit case A again, meaning a new next_max_slope is calculated (the rightmost blue line) and the code makes a second recursive call for this depth. Note again that the rightmost blue line passes through the eye and the northwest corner of the opaque cell. Unlike the previous depths, case C is not reached, as the last cell visited at this depth is not transparent.
Depth 4 is reached by 2 separate recursive calls. It's important to note here that all cells even partially inside the area between a pair of coloured lines are considered in the corresponding scan (hence the shading in the image below). The green instance is simple, as there are no opaque cells. The blue instance will hit case A, though note that next_max_slope (the dotted line below) will be less than (i.e. to the left of) min_slope. This means no cells could be between them. Thus the blue instance stops here.

The final case to consider in this example is the very last recursion of the green instance. Shown below, this case is unique as no cells in the scan are transparent. Stepping through the code, we see that each cell is marked as visible (as normal), but neither case A nor case C is hit, meaning no recursion is made, and execution stops.
The shaded cells in the image below are all the cells marked as visible in this octant using recursive shadowcast.
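Tying the example together, the pseudocode above translates fairly directly into runnable code. Below is a Python sketch for this specific octant (depth increasing northward, scanning west to east), where grid[y][x] is True for opaque cells. How to treat cells that only touch a slope line at a single corner is a judgement call; this sketch makes one choice.

```python
import math

def compute_visible_area(grid, ex, ey, depth, min_slope, max_slope, visible):
    # Recursive shadowcast for one octant: depth increases northward
    # (decreasing y), rows are scanned west to east, and slopes are dx/dy
    # measured from the eye at the centre of cell (ex, ey).
    y = ey - depth
    if y < 0 or min_slope > max_slope:
        return
    eye_cx, eye_cy = ex + 0.5, ey + 0.5
    # Columns whose cells lie at least partially between the slope lines.
    x_min = max(math.floor(eye_cx - 1 + min_slope * (depth - 0.5)) + 1, 0)
    x_max = min(math.ceil(eye_cx + max_slope * (depth + 0.5)) - 1,
                len(grid[0]) - 1)
    first = True
    prev_opaque = False
    for x in range(x_min, x_max + 1):
        visible.add((x, y))
        opaque = grid[y][x]
        if not first:
            if opaque and not prev_opaque:
                # Case A: recurse, capped at this wall's northwest corner.
                next_max = (x - eye_cx) / (eye_cy - y)
                compute_visible_area(grid, ex, ey, depth + 1,
                                     min_slope, next_max, visible)
            if not opaque and prev_opaque:
                # Case B: vision resumes at this cell's southwest corner.
                min_slope = (x - eye_cx) / (eye_cy - (y + 1))
        else:
            first = False
        prev_opaque = opaque
    if x_min <= x_max and not prev_opaque:
        # Case C: see through the last group of transparent cells.
        compute_visible_area(grid, ex, ey, depth + 1,
                             min_slope, max_slope, visible)

# A 7x7 room with a single wall cell directly north of the eye at (3, 3).
grid = [[False] * 7 for _ in range(7)]
grid[2][3] = True
visible = set()
compute_visible_area(grid, 3, 3, 1, 0.0, 1.0, visible)
```

Mirroring this arithmetic for the other seven octants is what the next section is about.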
Generalizing to all Octants
The solution presented in the previous section is specific to the example octant. The following features of the algorithm are specific to that octant:
- As depth increases, we move north.
- We scan west to east.
- min_slope starts at 0. max_slope starts at 1.
- When visiting an opaque cell after a transparent cell, the northwest corner of the opaque cell is used to find next_max_slope for the recursive call.
- When visiting a transparent cell after an opaque cell, the southwest corner of the transparent cell is used to find the new value of min_slope.
Let's see how these characteristics change in different octants. Note that this is just one of many possible ways to represent the differences between octants. These are in no way canonical.
The image below shows the direction to move in the grid as the depth increases. In the northmost octants we move north, in the eastmost we move east, and so on.
Depth Direction
Closely related to this is the scan direction. It can be derived from the depth direction: if you stand facing the depth direction for an octant, the scan direction for that octant will run from your left to your right.
Scan Direction
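This left-to-right rule amounts to rotating the depth direction 90 degrees clockwise. A small Python sketch of that relationship (the vector encoding of directions is my own):

```python
# Compass directions as (dx, dy) unit vectors, with y increasing southward,
# matching a grid whose origin is the top-left corner.
NORTH, EAST, SOUTH, WEST = (0, -1), (1, 0), (0, 1), (-1, 0)

def scan_direction(depth_direction):
    # Facing the depth direction, the scan runs from your left to your
    # right: a 90-degree clockwise rotation of the depth direction.
    dx, dy = depth_direction
    return (-dy, dx)
```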
Tied into the above two characteristics are the initial values of min_slope and max_slope. For any pair of octants sharing a common depth direction (or scan direction), if you stand facing the depth direction, the octant to your left will have slopes ranging from 1 to 0, and the octant to your right from 0 to 1.
Initial values for min_slope and max_slope
When scanning cells in the octant's scan direction, if an opaque cell is visited after a transparent cell, we compute a new maximum slope (next_max_slope) to pass to a recursive call, based on one of the corners of the opaque cell. In the previous example, that was the northwest corner. Similarly, when visiting a transparent cell after an opaque cell, the current min_slope is adjusted based on one of its corners. This is the southwest corner in the example. Which corners to choose in these cases is octant-dependent, and is shown in the diagram below.
Cell corners to use for finding slopes
Many of these characteristics are related in some way, and knowledge of some can be used to derive others. Despite this, for simplicity, when computing the visible area for a given octant, let's just pass all the characteristics as arguments:
computeVisibleArea(current_depth, min_slope, max_slope, depth_direction, scan_direction, opaque_corner, transparent_corner) {
    first = true;
    previous_cell = null;
    for each cell "current_cell" at depth "current_depth" between "min_slope" and "max_slope" in "scan_direction" {
        mark current_cell as visible;
        if not first {
            if current_cell is opaque and previous_cell is transparent {
                # first opaque cell after at least one transparent cell
                A: next_max_slope = gradient of line passing through eye and "opaque_corner" corner of "current_cell";
                   computeVisibleArea(current_depth+1, min_slope, next_max_slope, depth_direction, scan_direction, opaque_corner, transparent_corner);
            }
            if current_cell is transparent and previous_cell is opaque {
                # first transparent cell after at least one opaque cell
                B: min_slope = gradient of line passing through eye and "transparent_corner" corner of "current_cell";
            }
        } else {
            first = false;
        }
        previous_cell = current_cell;
    }
    if previous_cell is transparent {
        # see through last group of transparent cells in row
        C: computeVisibleArea(current_depth+1, min_slope, max_slope, depth_direction, scan_direction, opaque_corner, transparent_corner);
    }
}
Iterative Shadowcast
Passing these characteristics as arguments to the recursive function is repetitive, since within an octant they never change, and recursive calls never cross between octants. An alternative way of structuring this function that removes the need to pass them around is to replace the recursion with iteration.
We'll need a data structure representing the state that was previously captured by each recursive call. The arguments that could change between recursive calls are depth, min_slope and max_slope. Thus, our structure will look like:

class StackFrame {
    depth;
    min_slope;
    max_slope;
    constructor(depth, min_slope, max_slope) {
        this.depth = depth;
        this.min_slope = min_slope;
        this.max_slope = max_slope;
    }
}
Using this, and an assumed Stack data structure, the code becomes:
computeVisibleArea(initial_min_slope, initial_max_slope, depth_direction, scan_direction, opaque_corner, transparent_corner) {
    # Make this a global variable or equivalent to remove the
    # overhead of constructing it each time this is called.
    stack = new Stack();
    initial_frame = new StackFrame(1, initial_min_slope, initial_max_slope);
    stack.push(initial_frame);
    while stack is not empty {
        current_frame = stack.pop();
        current_depth = current_frame.depth;
        min_slope = current_frame.min_slope;
        max_slope = current_frame.max_slope;
        first = true;
        previous_cell = null;
        for each cell "current_cell" at depth "current_depth" between "min_slope" and "max_slope" in "scan_direction" {
            mark current_cell as visible;
            if not first {
                if current_cell is opaque and previous_cell is transparent {
                    # first opaque cell after at least one transparent cell
                    A: next_max_slope = gradient of line passing through eye and "opaque_corner" corner of "current_cell";
                       next_frame = new StackFrame(current_depth+1, min_slope, next_max_slope);
                       stack.push(next_frame);
                }
                if current_cell is transparent and previous_cell is opaque {
                    # first transparent cell after at least one opaque cell
                    B: min_slope = gradient of line passing through eye and "transparent_corner" corner of "current_cell";
                }
            } else {
                first = false;
            }
            previous_cell = current_cell;
        }
        if previous_cell is transparent {
            # see through last group of transparent cells in row
            C: next_frame = new StackFrame(current_depth+1, min_slope, max_slope);
               stack.push(next_frame);
        }
    }
}
All the recursive calls were replaced with creating a new object holding the information that would have been passed to the recursive call, and pushing it onto a stack. We keep iterating until the stack is empty, removing the most recently pushed frame each time. This changes the order in which cells are visited. In the recursive version, when a recursive call was made, the current scan was put on hold while the scan at the next depth was performed, which may itself have made a recursive call, and so on. In the iterative version, while doing a scan, any would-be recursive calls have their arguments pushed onto a stack. Once the scan is finished, the same set of next-depth scans occurs as in the recursive version, but in reverse order. Take some time to convince yourself that despite the new order, the same cells are visited, and thus the same cells are classified as visible.
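To make that argument concrete, the iterative pseudocode can be sketched as runnable Python, with plain tuples standing in for StackFrame. As a sketch it handles only the example octant (north depth direction, west-to-east scan); grid[y][x] is True for opaque cells, and corner-touching cells are a judgement call.

```python
import math

def compute_visible_area(grid, ex, ey, initial_min_slope, initial_max_slope):
    # Iterative shadowcast for the example octant.  Each tuple on the
    # stack plays the role of a StackFrame: (depth, min_slope, max_slope).
    visible = set()
    eye_cx, eye_cy = ex + 0.5, ey + 0.5
    stack = [(1, initial_min_slope, initial_max_slope)]
    while stack:
        depth, min_slope, max_slope = stack.pop()
        y = ey - depth
        if y < 0 or min_slope > max_slope:
            continue
        # Columns whose cells lie at least partially between the slope lines.
        x_min = max(math.floor(eye_cx - 1 + min_slope * (depth - 0.5)) + 1, 0)
        x_max = min(math.ceil(eye_cx + max_slope * (depth + 0.5)) - 1,
                    len(grid[0]) - 1)
        first = True
        prev_opaque = False
        for x in range(x_min, x_max + 1):
            visible.add((x, y))
            opaque = grid[y][x]
            if not first:
                if opaque and not prev_opaque:
                    # Case A: push a frame capped at the wall's NW corner.
                    next_max = (x - eye_cx) / (eye_cy - y)
                    stack.append((depth + 1, min_slope, next_max))
                if not opaque and prev_opaque:
                    # Case B: vision resumes at this cell's SW corner.
                    min_slope = (x - eye_cx) / (eye_cy - (y + 1))
            else:
                first = False
            prev_opaque = opaque
        if x_min <= x_max and not prev_opaque:
            # Case C: push a frame for the trailing transparent run.
            stack.append((depth + 1, min_slope, max_slope))
    return visible

# A 7x7 room with one wall cell directly north of the eye at (3, 3).
grid = [[False] * 7 for _ in range(7)]
grid[2][3] = True
visible = compute_visible_area(grid, 3, 3, 0.0, 1.0)
```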
Another minor change is the removal of the depth argument. This is no longer needed because the function's arguments are now just initial values, and the depth always starts at 1.

Putting it all together, we need to call computeVisibleArea once for each octant, giving it the relevant arguments for that octant. The following function does this, starting with the octant in the example above and going around clockwise.

computeTotalVisibleArea() {
    computeVisibleArea(0, 1, North, East, NorthWest, SouthWest);
    computeVisibleArea(1, 0, East, South, NorthWest, NorthEast);
    computeVisibleArea(0, 1, East, South, NorthEast, NorthWest);
    computeVisibleArea(1, 0, South, West, NorthEast, SouthEast);
    computeVisibleArea(0, 1, South, West, SouthEast, NorthEast);
    computeVisibleArea(1, 0, West, North, SouthEast, SouthWest);
    computeVisibleArea(0, 1, West, North, SouthWest, SouthEast);
    computeVisibleArea(1, 0, North, East, SouthWest, NorthWest);
}

Cellular Automata Cave Generation
A cellular automaton is a collection of cells whose states change over time based on the states of adjacent cells. Cellular automata can be used to produce natural-looking patterns, such as the cave in the picture below.
Perhaps the best-known cellular automaton is Conway's Game of Life, shown below (click to start). Each cell is considered to be either alive or dead. In this example, the initial state of each cell is random. At each step, the neighbours of each cell are examined to determine whether that cell will be alive or dead in the next step. This proceeds according to the following rules:
- if a cell is alive and has fewer than 2 living neighbours, the cell dies (as if of underpopulation)
- if a cell is alive and has more than 3 living neighbours, the cell dies (as if of overpopulation)
- if a cell is alive and has 2 or 3 living neighbours, the cell remains alive
- if a cell is dead and has exactly 3 living neighbours, the cell becomes alive (as if by reproduction)
Conway's game of life. Click to toggle animation.
Here's pseudocode for the function applied at each step of the simulation. In short, it takes a 2D array of cells and updates the state of each cell based on the states of its neighbours.
SURVIVE_MIN = 2
SURVIVE_MAX = 3
RESURRECT_MIN = 3
RESURRECT_MAX = 3

step(cells[HEIGHT][WIDTH]) {
    for i in 0..HEIGHT-1 {
        for j in 0..WIDTH-1 {
            count = 0
            for each neighbour of cells[i][j] {
                if neighbour is alive {
                    count++
                }
            }
            if cells[i][j].alive {
                if count >= SURVIVE_MIN and count <= SURVIVE_MAX {
                    cells[i][j].alive_next_step = true
                } else {
                    cells[i][j].alive_next_step = false
                }
            } else {
                if count >= RESURRECT_MIN and count <= RESURRECT_MAX {
                    cells[i][j].alive_next_step = true
                } else {
                    cells[i][j].alive_next_step = false
                }
            }
        }
    }
    for i in 0..HEIGHT-1 {
        for j in 0..WIDTH-1 {
            cells[i][j].alive = cells[i][j].alive_next_step
        }
    }
}
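The step function above can be restated as runnable Python. This sketch treats cells outside the grid as dead, which is one of several reasonable edge policies:

```python
SURVIVE_MIN, SURVIVE_MAX = 2, 3
RESURRECT_MIN, RESURRECT_MAX = 3, 3

def step(cells):
    # One generation: count each cell's live neighbours, then apply the
    # survive/resurrect rules.  Cells off the edge of the grid count as dead.
    height, width = len(cells), len(cells[0])
    nxt = [[False] * width for _ in range(height)]
    for i in range(height):
        for j in range(width):
            count = 0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if di == 0 and dj == 0:
                        continue
                    ni, nj = i + di, j + dj
                    if 0 <= ni < height and 0 <= nj < width and cells[ni][nj]:
                        count += 1
            if cells[i][j]:
                nxt[i][j] = SURVIVE_MIN <= count <= SURVIVE_MAX
            else:
                nxt[i][j] = RESURRECT_MIN <= count <= RESURRECT_MAX
    return nxt

# A "blinker": a row of three live cells flips to a column of three.
grid = [[False] * 5 for _ in range(5)]
for j in (1, 2, 3):
    grid[2][j] = True
grid = step(grid)
```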
Notice the four constants at the start of the code. These encode the rules for Conway's Game of Life, described above. These values can be changed to obtain different behaviour from the cellular automaton. For cave generation, the rules become:
- if a cell is alive and has fewer than 4 living neighbours, the cell dies
- if a cell is alive and has 4 or more living neighbours, the cell remains alive
- if a cell is dead and has exactly 5 living neighbours, the cell becomes alive (as if by reproduction)
The constants that encode these rules are:
SURVIVE_MIN = 4
SURVIVE_MAX = 8
RESURRECT_MIN = 5
RESURRECT_MAX = 5
...
Variant of Conway's game of life used for cave generation.
This set of rules creates large clumps of living cells, which can be used as the walls of a cave. When generating terrain for a game level of finite size, players shouldn't be able to walk off the edge of the map. A simple way to enforce this is to place walls around the edge of the map. A solution that places natural-looking walls around the outside of the map is to ensure the border of the area is always made up of living cells. Enforcing this at each step of the simulation means that these cells cause other nearby cells to become (and stay) alive. The walls grow inwards and interact with interior walls to give the appearance of natural cavern walls.
Cells around border are always alive.
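The always-alive border can be sketched as a pass applied after each simulation step:

```python
def enforce_border(cells):
    # Keep every cell on the outer edge of the grid alive, so walls
    # grow inward from the map boundary at each step.
    height, width = len(cells), len(cells[0])
    for i in range(height):
        cells[i][0] = cells[i][width - 1] = True
    for j in range(width):
        cells[0][j] = cells[height - 1][j] = True
    return cells

grid = enforce_border([[False] * 4 for _ in range(3)])
```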
Within the first few steps, natural-looking caverns are generated. In subsequent steps, the walls recede and the caverns become vast. Limiting the number of steps to 4 seems to produce interesting-looking caves. Rough edges and single dead cells can be smoothed away by killing any cells with fewer than 2 living neighbours, and resurrecting cells with more than 5 living neighbours. Finally, to ensure that there are no closed-off sections of the generated cave, all but the largest contiguous group of dead cells is resurrected, filling in closed-off open spaces.
Running for 4 steps, then cleaning.
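The smoothing pass described above can be sketched like this (the flood fill that removes closed-off sections is omitted; neighbours off the grid count as dead):

```python
def smooth(cells):
    # Kill live cells with fewer than 2 live neighbours and resurrect
    # dead cells with more than 5, removing lone cells and rough edges.
    height, width = len(cells), len(cells[0])
    out = [row[:] for row in cells]
    for i in range(height):
        for j in range(width):
            count = sum(
                1
                for di in (-1, 0, 1)
                for dj in (-1, 0, 1)
                if (di or dj)
                and 0 <= i + di < height
                and 0 <= j + dj < width
                and cells[i + di][j + dj]
            )
            if cells[i][j] and count < 2:
                out[i][j] = False   # lone cell: kill it
            elif not cells[i][j] and count > 5:
                out[i][j] = True    # enclosed gap: fill it
    return out

# A nearly isolated live cell disappears; a one-cell hole inside a
# 3x3 block of live cells fills in.
grid = [[False] * 5 for _ in range(5)]
grid[0][0] = True                   # isolated cell
for i in (1, 2, 3):
    for j in (1, 2, 3):
        grid[i][j] = True
grid[2][2] = False                  # hole in the block
grid = smooth(grid)
```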

2D Phong Illumination in WebGL
Suppose you're rendering an uneven surface like a cobblestone floor, water or grass. You could just draw the details on a flat image by hand. This might look great from one particular angle, but if the player is moving around, the flatness of the image may be quickly exposed. This is exacerbated by the presence of lights, which will illuminate the surface as if it had just been painted on (which it sort of has been).
Another approach is to create lots of polygons and model the surface in 3D. This will solve the lighting problem (provided you have shaders aware of lighting), but as these surfaces can have lots of tiny details, that's lots of work for you to define all the polygons, and lots of work for your GPU to draw them, and all you really gain is a nice aesthetic effect.
Another approach that is generally more efficient on graphics hardware is creating maps: buffers that store information about the details of a surface relevant to lighting. These maps correspond pixel by pixel to the texture being drawn onto the surface, and are used when shading fragments (pixels) to determine exactly how light should behave.
A demo that uses this technique is here.
For the tiles demo, I use two maps. The bump map stores the surface normal (vector at right angle to the surface at a point) and depth at every pixel on the screen (or every pixel in a tile since the tiles are repeated). The light map stores values indicating how reflective each pixel is to ambient, diffuse and specular lights.
Soon I'll explain exactly how these maps work, but for it to make sense you need a crash course on lighting.
Crash course on lighting
I use a technique called Phong Illumination to light the scene. It combines ambient, diffuse and specular lighting at each pixel.
Ambient lighting is the same at every pixel. A scene has a global value representing the amount of ambient light present. Different surfaces may reflect a different amount of ambient light. It does not change with the viewing angle.
Diffuse lighting is light from a point light source hitting a surface and illuminating it. The amount of light reflected by a point on a surface is dependent on the angle between the light source and the surface normal at that point. Here's a diagram:
The more similar the two vectors, the brighter the light. This is computed in practice by multiplying the intensity of the light by the dot product of the two vectors. This value is then multiplied by the surface's diffuse reflection coefficient, thus different surfaces may reflect a different amount of diffuse light. If there are multiple light sources in a scene, compute the diffuse intensity for each light and add them together.
Specular lighting computes the "shiny" bits of a surface. When you look at polished wood, metal or water, and see the really bright patches of light reflected on them, these are specular highlights. The intensity of specular lighting at a point is dependent on the difference between the angle from a ray reflected from the light at that point and a vector from that point to the eye. In the diagram below, the relevant vectors are coloured red and green.
The intensity of the light is the dot product of the two relevant vectors raised to some power. The higher the power, the smaller and more intense the highlights appear, and thus the shinier the surface looks. Multiply this value by the surface's specular reflection coefficient and light brightness. If there are multiple specular lights in an area, compute the specular intensity for each and add them together.
Once the intensity of each type of lighting is computed for a point, just add them all together to get the total lighting at that point. In the tile example, I add the ambient and diffuse lighting first, multiply this by the colour of the pixel (given by the texture) treating the (r, g, b, a) values as a 4D vector, then add on the specular lighting as a vector (i, i, i, 0) where 'i' is the specular light intensity. This is because I wanted the specular highlights to appear white rather than draw from the underlying colour.
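This combination can be sketched in Python with made-up coefficients, a single light, and pre-normalised vectors. The clamping of negative dot products to zero (a surface facing away from the light receives none) is an assumption the post doesn't spell out:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong(colour, normal, to_light, reflected, to_eye,
          ambient, k_ambient, light, k_diffuse, k_specular, shininess):
    # Ambient and diffuse intensities scale the texture colour as a 4D
    # (r, g, b, a) vector; the specular term is then added as (i, i, i, 0)
    # so highlights come out white rather than tinted by the surface.
    diffuse = light * max(dot(to_light, normal), 0.0) * k_diffuse
    base = ambient * k_ambient + diffuse
    spec = light * max(dot(reflected, to_eye), 0.0) ** shininess * k_specular
    r, g, b, a = colour
    return (r * base + spec, g * base + spec, b * base + spec, a * base)

# Light and eye both directly above a flat, upward-facing surface.
pixel = phong((0.5, 0.5, 1.0, 1.0),
              normal=(0.0, 0.0, 1.0), to_light=(0.0, 0.0, 1.0),
              reflected=(0.0, 0.0, 1.0), to_eye=(0.0, 0.0, 1.0),
              ambient=1.0, k_ambient=0.1, light=1.0,
              k_diffuse=0.5, k_specular=0.4, shininess=16)
```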
Map Encoding Scheme
I store maps in bitmap files that I made using GIMP. Information is encoded in the rgb values of each pixel. Each channel (red, green, blue) of a pixel is represented by a single byte. Thus there are 256 values (0-255) that can be stored in a channel of a pixel.
There are actually 4 images that get combined into making the tile demo. These are:
- texture
- bump map
- light map
- shine map
Tile Texture
A simple texture. It's used to determine the colour of each pixel.
Bump Map
For each pixel, this encodes the surface normal vector and depth at that pixel. Normal vectors are represented by a pair of angles. The diagram below shows the pair of angles used to represent the green point. The horizontal angle is blue and the vertical angle is red. The vertical angle in this system is constrained between -90° and 90°. As the tile scene is viewed from above, for the purposes of this example, the vertical angle will be constrained between -90° and 0°. The length of normal vectors is always 1.
Different information is encoded in each channel, so it makes sense to examine one channel at a time.
Red (Horizontal Angle)
This channel encodes the horizontal angle of the surface normal. A value of 0 denotes 0°, 64 (256/4) denotes 90° (360°/4), and so on. This is why the right side of the red image is black: the horizontal angle of the normal is 0°.
Green (Vertical Angle)
This channel encodes the vertical angle of the surface normal. Values are linearly interpolated between 0° and 90°. 0° indicates a vertical normal. The middle and edges of the image are black because the surface normal is straight up.
Blue (Depth)
The image above is slightly blue, though it's hard to see. This channel represents the height, in pixel-sized units, of each pixel. Heights of tiles range from 0 to 8 pixels, so the bluest colour in that picture is rgb(0, 0, 8), which looks almost black.
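Decoding one bump-map pixel might look like the sketch below. The red scale (64 steps per 90°) comes straight from the description above; the linear green scale with 255 mapping to 90° is my reading of the scheme:

```python
def decode_bump(r, g, b):
    # red: horizontal angle, 256 byte values spread over 360 degrees
    horizontal = r * 360.0 / 256.0
    # green: vertical angle magnitude, linear over 90 degrees
    # (assumed 255 -> 90 degrees; the post only states the scale is linear)
    vertical = g * 90.0 / 255.0
    # blue: height in pixel-sized units, stored directly
    depth = b
    return horizontal, vertical, depth

angles = decode_bump(64, 0, 8)   # -> (90.0, 0.0, 8)
```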
Light Map
This stores the ambient, diffuse and specular reflection coefficients in the red, green and blue channels respectively.
Red (Ambient)
Green (Diffuse)
Blue (Specular)
The grout between tiles isn't very shiny, so it has a low specular reflection coefficient.
Shine Map
This indicates how shiny each pixel is. It is used to determine the specular exponent (the power to which the dot product is raised when computing specular lighting). Only one channel is used for this, and values can range from 0 to 255.