Optimising our shadows in Unity

Background

Shadows Still

We have a projected shadows system that we use in a few of our games. Very much like a shadow map, it involves rendering objects from the perspective of a light and then projecting the shadows from that light onto the scene.

On some of our games, the fully fledged Unity shadow mapping solution is overkill – we don’t want to render dynamic shadows for everything, only smaller objects in the scene. We also want more control over how we filter our shadows – how we blur them to make them softer.

During a recent profiling session on one of our games, we noticed that generating one of these shadow maps was taking up approximately 12% of the total frame time. So I went about investigating what we could do to reduce this cost and, at the same time, reduce the amount of memory the system was consuming.

Optimisations

My first step was to fire up my preferred profiling tools for both Android (RenderDoc) and iOS (XCode). RenderDoc is a free-to-use profiler and debugger that can connect to a host Android device and capture frame traces.

RenderDoc

XCode is the go-to development app on macOS; you can capture a GPU frame at any time by selecting the option from the debug menu.

XCode GPU Frame Debugger

Making the most of the space we have

Using the render target viewer on both platforms I spotted that the contents of the shadow map render target was only occupying a small section of the entire texture. I would estimate that over 50% of the pixels in the render target were unoccupied – what a waste of space!

We use our projected shadows with directional lights, and an orthographic projection tends to be easier to control and tweak. You lose any perspective, but for us this isn't an issue. Swapping the projection mode over to orthographic, as well as better positioning of the light source, allowed us to make better use of the available render target space.

In the end, we were able to reduce the resolution of our shadow map texture from 128×128 to 64×64 – that's 1/4 of the original size. One of the biggest bottlenecks on mobile devices is bandwidth – mobile devices have small buses. Moving 75% less data down the bus is a big saving. This, along with shading 75% fewer fragments, is a huge win (ignore the colour for the minute – I changed the way we store the shadow map in the render texture).
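As a rough illustration (not our exact production code), the shadow camera setup might look something like this – the component, field names and sizes here are assumptions:

using UnityEngine;

public class ProjectedShadowCamera : MonoBehaviour
{
    [SerializeField] private Camera shadowCamera;    // renders the shadow casters from the light's point of view
    [SerializeField] private float captureSize = 2f; // tuned so the casters fill the render target

    private RenderTexture shadowTex;

    private void OnEnable()
    {
        // 64×64 was enough for us once the casters filled the target
        shadowTex = new RenderTexture(64, 64, 16);
        shadowCamera.orthographic = true;             // no perspective, much easier to fit tightly
        shadowCamera.orthographicSize = captureSize;
        shadowCamera.targetTexture = shadowTex;
    }

    private void OnDisable()
    {
        shadowTex.Release();
    }
}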

Shadow map render target contents: old projection vs. new orthographic projection

MSAA

As we are using such a small render target, you will notice a lot of aliasing when objects start moving within it. Due to the way in which mobile GPUs work, MSAA is very cheap. Mobile GPUs use a tile-based architecture: all of the anti-aliasing work is done on chip and the additional memory lives in on-chip tile memory. Enabling 4x MSAA with a smaller render texture gave us much better results with only a tiny increase in processing cost.
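In Unity this is just a property on the render texture – a minimal sketch, assuming the texture is created in code:

shadowTex = new RenderTexture(64, 64, 16);
shadowTex.antiAliasing = 4; // 1, 2, 4 or 8 samples – resolved in on-chip tile memory on mobile GPUs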

Render Target Formats

I spotted that our shadow map render texture was using an R8G8B8A8 format. Only two of the channels were being used. The first (R) was being used to store the shadow itself, and the second channel (G) was being used to store a linear fall off. Our artists requested that the intensity of our shadows fall off with distance.

Looking further into it, we didn't actually need to store both pieces of information here. We only needed the shadow value or the falloff value, depending on what was enabled for this shadow projector. I changed the render target format to a single 8-bit channel format (R8). This cut our texture size down to 1/4 of what it was, which again reduces our bandwidth massively.
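A small sketch of what that allocation might look like, with a fallback in case the device doesn't support single-channel render targets (the fallback choice is an assumption, not something from our pipeline):

// Prefer a single 8-bit channel; fall back to 32-bit RGBA if unsupported.
var format = SystemInfo.SupportsRenderTextureFormat(RenderTextureFormat.R8)
    ? RenderTextureFormat.R8
    : RenderTextureFormat.ARGB32;
shadowTex = new RenderTexture(64, 64, 16, format);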

Blur Method

After we populate our shadow map render texture we blur it. This reduces any artefacts we see from using smaller textures, as well as giving the impression of soft shadows. We were using a 3×3 box blur – that's 9 texture samples per pixel. What's more, we weren't taking advantage of bilinear filtering with a half pixel offset. I quickly added the option to only sample the surrounding corner pixels along with a half pixel offset, which reduced our sample count from 9 to 5 (we still sample the centre pixel).

You sample a texel from a texture using a texture coordinate. With point filtering enabled, sampling between two texels will result in only one of the texels being sampled. With bilinear filtering enabled, the GPU will linearly blend between the two texels and return their average. So if we add an additional half pixel offset, we can essentially sample two pixels for the price of one.
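On the C# side this just means making sure the texture is bilinearly filtered and handing the shader a half-texel offset to build its corner taps from – a sketch, where _HalfTexel is an assumed property of our blur shader rather than a Unity built-in:

shadowTex.filterMode = FilterMode.Bilinear; // let the GPU average neighbouring texels for us
Vector2 texel = new Vector2(1f / shadowTex.width, 1f / shadowTex.height);
blurMat.SetVector("_HalfTexel", new Vector4(texel.x * 0.5f, texel.y * 0.5f, 0f, 0f));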

Reducing ALU Instructions

Unity doesn’t support the common border texture wrapping mode. Therefore we had to add a bit of logic to our blur shader that checks to see if the current texel is a border texel, and if so keep it clear. This prevents shadows from smearing across the receiving surface. The shader code was using the step intrinsic to calculate if the current texel was a border texel. The step intrinsic is kind of like an if statement, I managed to rework this bit of code to use floor instead, this alone reduced the ALU count from 13-9. This doesn’t sound like much, but when you are doing this for each pixel in a render texture, it all adds up. This wasn’t the only saving we made here, but its an example of what we did.

When writing your shader, head to the inspector in Unity. From here, with a shader selected, you can click the "Compile and show code" button. This will compile your shader code and open it in a text editor. At the top of your compiled shader code you can see how many ALU and Tex instructions your shader uses.

// Stats: 5 math, 5 textures
Shader Disassembly:
#version 100

#ifdef VERTEX

For even more information you can download and use the Mali Offline Shader Compiler. Simply copy the compiled shader code – the bit inside the #ifdef VERTEX or #ifdef FRAGMENT blocks – and save it to a .vert or .frag file. From here you can run it through the compiler and it will show you the shader statistics.

malisc --vertex myshader.vert
malisc --fragment myshader.frag
Mali Shader Compiler Output

Above you can see that the 5 tap blur shader uses

  • 2 ALU (arithmetic logic unit) instructions
  • 3 Load/Store instructions
  • 5 Texture instructions

OnRenderImage

At the end of the blur pass, I noticed that there was an additional Blit – copying the blurred texture into another render target! I started digging into this and noticed that, even though we specified that our blurred render texture is of R8 format, it was R8G8B8A8! It turns out that this is a bug with Unity: OnRenderImage is passed a 32-bit destination texture, and its contents are then copied out to the final target format. This wasn't acceptable, so I changed our pipeline. We now allocate our render textures manually and perform the blur in OnPostRender.

private void OnPostRender()
{
    if (shadowQuality != ShadowQuality.Hard)
    {
        SetupBlurMaterial();
        blurTex.DiscardContents();
        Graphics.Blit(shadowTex, blurTex, blurMat, 0);
    }
}

Depth Buffer

This one is a bit weird – I'm not sure if I like it or not; I'm pretty sure I don't. If you're desperate to save memory you can disable the depth buffer, but this means that you are going to get a tonne more overdraw. However, if you know what's going into the shadow map render target and you know there isn't a lot of overdraw, then this might be an option for you. The only way you're going to tell is if you profile it, and before you even do this, make sure you're really desperate for those additional few kilobytes.
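For reference, the depth buffer is just the third argument when allocating the render texture, so trying this out is a tiny change – a sketch:

var withDepth    = new RenderTexture(64, 64, 16, RenderTextureFormat.R8); // 16-bit depth buffer
var withoutDepth = new RenderTexture(64, 64, 0,  RenderTextureFormat.R8); // no depth buffer at all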

Performance Metrics

Here we can see the cost of rendering a single shadow map (the example above). These readings were taken using XCode GPU Frame Debugger on an iPhone 6s. As you can see, the cost of rendering this shadow map is less than 50% of the original cost.

Cost

Thanks to reducing the size of our render targets, using a smaller texture format, eliminating the unnecessary Blit and (optionally) not using a depth buffer, our memory consumption went from 320KB down to 8KB! Using a 16-bit depth buffer doubles that to 16KB. Either way, that's A LOT of saved bandwidth.

Conclusion

Shadows Gif

In the best case scenario, we were able to reduce our memory consumption (and bandwidth usage) by over 40x. We were also able to reduce the overall cost of our shadow system by just over 50%! To any of the art team that may be reading this – it doesn't mean we can have twice as many shadows 😀 All in all, I spent about 2-3 days profiling, optimising and changing things up, and it was definitely worth it.

Crowd Rendering on mobile with Unity

Background

One of the artists came to me one day and asked “What’s the best way to render a crowd in our game?”. Being a mobile-first studio, there’s no easy answer. Back in my console days, we could consider instancing and GPU simulations, but our game needs to support OpenGL ES 2.0 and run on devices with a lot less bandwidth and power. Crowd systems inherently consume a lot of processing power: you’ve got to animate thousands of people, and each of those people needs to look different.

We took a look at what options were available to us. We could simply render a lot of skinned meshes, but this is going to be expensive both on CPU and GPU as we need to skin each mesh and then submit it to the GPU. We could use the sprite system in Unity to render a billboarded crowd, but as the camera angle changes the sprites would have to be re-sorted. After a while, we realised we needed to come up with a custom solution.

Technique

2D or 3D?

Our crowds were going to be displayed at quite a distance from the camera, on devices with small screens, so fidelity was not so much of a concern. Rendering thousands of 3D skinned meshes on mobile wasn’t really an option, so we chose to stick to 2D crowds.

Placement

We need crowd placement to be quick and easy. We don’t want our art team spending hours painfully placing GameObjects inside scenes to signify where a person should spawn. Ideally, an artist should be able to define a region or area where they want people to spawn and when they hit a button it all comes to life.

Crowd Placement

We gave the artists the ability to spawn crowds inside bounding boxes, around a sphere and at points in a mesh. We found that 95% of the time the art team would choose to spawn crowds using a bounding box.
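A minimal sketch of the bounding box case (the Bounds would come from whatever volume the artist placed in the scene – the names here are assumptions):

private List<Vector3> GenerateCrowdPositions(Bounds region, int count)
{
    var positions = new List<Vector3>(count);
    for (int i = 0; i < count; ++i)
    {
        // uniform random point inside the artist-authored box
        positions.Add(new Vector3(
            Random.Range(region.min.x, region.max.x),
            Random.Range(region.min.y, region.max.y),
            Random.Range(region.min.z, region.max.z)));
    }
    return positions;
}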

Randomisation

Crowds Up Close

One of the biggest challenges with crowd rendering is having enough variation within the crowd so that it looks believable. This means people in the crowd will need different animations, coloured clothes, coloured skin, hairstyles etc. And those characters that are duplicated will require offset animations so that they look less like clones. You soon realise that people don’t focus on one person in the crowd, they focus on the crowd as a whole. This means that as long as there is enough variation and movement in there, it looks pretty convincing.

We allow the artists to randomise:

  • Sprites
  • Position
  • Rotation
  • Colour
  • Animation offsets
  • Scale
  • Movement

Batching

Our games still target older Android devices that only support OpenGL ES 2.0. In order to reduce CPU overhead from issuing too many draw calls, we knew that we would have to batch as many people in the crowd as possible. For this reason, we made the decision that every person within a crowd region would be batched together into one draw call, but this obviously introduces a problem…
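In practice that just means building one mesh per crowd region – something along these lines (a sketch, with four vertices per person sitting at that person’s position, ready for the billboarding described later):

var verts = new List<Vector3>(crowdPositions.Count * 4);
var tris = new List<int>(crowdPositions.Count * 6);
for (int i = 0; i < crowdPositions.Count; ++i)
{
    int v = verts.Count;
    // all four corners start at the member's centre; the vertex shader pushes them out
    for (int j = 0; j < 4; ++j)
    {
        verts.Add(crowdPositions[i]);
    }
    tris.AddRange(new[] { v, v + 1, v + 2, v, v + 2, v + 3 });
}

var mesh = new Mesh();
mesh.SetVertices(verts);
mesh.SetTriangles(tris, 0);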

Sorting

As soon as you batch everything together you lose any ability to sort individual people within the crowd. So we had to come up with a flexible sorting solution for the artists. We ended up allowing the art team to sort characters in the group along a specific axis (e.g. along the z-axis) or by distance from an object. The latter proved to be the most used option.

[SerializeField] private Transform SortTransform;

private int SortUsingTransform(Vector3 a, Vector3 b)
{
    Vector3 origin = SortTransform.position;

    float dstToA = Vector3.SqrMagnitude(origin - a);
    float dstToB = Vector3.SqrMagnitude(origin - b);

    return dstToB.CompareTo(dstToA);
}

...

var crowdPositions = new List<Vector3>();
// generate crowd positions
crowdPositions.Sort(SortUsingTransform);

Our crowds were used within a stadium, and our camera is always in the centre of the stadium, looking out towards the crowd. Therefore we are able to sort the members of each crowd group by their distance from the centre of the stadium. Every so often you may spot one character rendering in front of another, but again our crowds are so far from the camera that the chances of you seeing this are very, very slim.

Billboarding

We do all of our billboarding within the vertex shader. We generate 4 vertices for each crowd member, and each of those verts is located at the centre of the rectangle. We bake a scale into the vertex data, and this scale is then used along with the UVs to offset the vertex from the centre and align it to the camera.

inline float2 GetCorner(in float3 uvs)
{
    return (uvs.xy * uvs.z);
}

inline float4 Billboard(in float4 vertex, in float3 uvs)
{
    float3 center = vertex.xyz;
    float3 eyeVector = ObjSpaceViewDir(vertex);

    float3 upVector = float3(0, 1, 0);
    float3 sideVector = normalize(cross(eyeVector, upVector));  

    float3 pos = center;
    float3 corner = float3(GetCorner(uvs), 0.0f);

    pos += corner.x * sideVector;
    pos += corner.y * upVector;

    return float4(pos, 1.0f);
}

You can see that the UVs are a float3, not the usual float2. The first 2 components of the vector are standard UV texture coordinates and the 3rd component is the scale of the billboard.

private readonly Vector2[] uvs = new[]
{
   new Vector2(1.0f, 1.0f),
   new Vector2(0.0f, 1.0f),
   new Vector2(0.0f, 0.0f),
   new Vector2(1.0f, 0.0f),
};

var uv = new List<Vector3>(vertCount);
for (var n = 0; n < numberOfCrowdPositions; ++n)
{
    float scale = Random.Range(minScale, maxScale);
    uv.Add(new Vector3(uvs[0].x, uvs[0].y, scale));
    uv.Add(new Vector3(uvs[1].x, uvs[1].y, scale));
    uv.Add(new Vector3(uvs[2].x, uvs[2].y, scale));
    uv.Add(new Vector3(uvs[3].x, uvs[3].y, scale));
}

Lighting

The artists weren’t happy that the crowd didn’t blend nicely with the rest of the scene, they looked flat and a bit out of place. Therefore we developed a bit of code that would bake data from the light probes in the scene into each vertex in the crowd. All of our crowd’s meshes are generated offline, then loaded at runtime.

private Color ProbeColor(Vector3 localPos, Vector3 worldNormal)
{
   SphericalHarmonicsL2 sh;
   LightProbes.GetInterpolatedProbe(localPos, rend, out sh);

   var directions = new[] { worldNormal.normalized };
   
   Color[] results = new Color[1];
   sh.Evaluate(directions, results);

   return results[0];
}
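The returned colour then gets written into the vertex colours of the generated mesh – a hypothetical baking loop (vertices and mesh come from our offline generation step; the upward-facing normal is an assumption):

var colours = new Color[vertices.Count];
for (int i = 0; i < vertices.Count; ++i)
{
    colours[i] = ProbeColor(vertices[i], Vector3.up);
}
mesh.colors = colours;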

Conclusion

Crowds

In the end, we created a crowd system that fit our needs exactly. We had to cut some corners in terms of visuals to meet the demands of our target platforms. But we managed to do so, and our solution had virtually no impact on performance.

Motion Blur for mobile devices in Unity

What is Motion Blur?

Wikipedia defines motion blur as:

Motion blur is the apparent streaking of moving objects in a photograph or a sequence of frames, such as a film or animation. It results when the image being recorded changes during the recording of a single exposure, due to rapid movement or long exposure.

When we capture an image with a camera, the shutter opens, the image is captured by the sensor and then the shutter closes again. The longer the shutter is open, the more light the sensor can capture. However, leaving the shutter open for longer also means that the image being captured can change.

Dog Leaping (image from Wikipedia)

Imagine we are trying to capture an image of a car speeding along a race track. If the shutter stays open for a whole second, the car will have shot past the camera and the entire image will be a blur. Now, if we were to open the shutter for a fraction of that time, say 1/500th of a second, chances are we will be able to capture the image with no blurring at all.

Motion blur is a side effect of leaving the shutter open for a long period of time. In games it can be desirable to simulate this effect. It can add a sense of speed and motion to our scenes. Depending on the genre of the game this can add a whole other level of realism to the game. Genres that may benefit from this effect include racing, first person shooter and third person shooter to name a few.

Pipeline Overview

We wanted to develop a motion blur effect for one of our games, a racing game. There are a number of different implementations currently available.

Frame Blur

The simplest method of simulating motion blur is to take the previous frame’s render target and interpolate between that and the current frame’s render target. When programmable shaders first came about, this is how it was done. It’s really simple and easy to implement and it doesn’t require any changes to the existing render pipeline. However it isn’t very realistic and you don’t have the ability to blur different objects in the scene at different scales.
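A minimal sketch of this approach (not something we shipped) – a post-process that blends the current frame with a stored copy of the previous one, assuming a material whose shader lerps _MainTex towards a _PrevTex property:

private Material frameBlurMaterial;   // shader lerps between _MainTex and _PrevTex
private RenderTexture previousFrame;

private void OnRenderImage(RenderTexture src, RenderTexture dest)
{
    if (previousFrame == null)
    {
        previousFrame = new RenderTexture(src.width, src.height, 0, src.format);
        Graphics.Blit(src, previousFrame);
    }

    frameBlurMaterial.SetTexture("_PrevTex", previousFrame);
    Graphics.Blit(src, dest, frameBlurMaterial); // blend current frame with the previous one
    Graphics.Blit(src, previousFrame);           // keep a copy of this frame for next time
}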

Position Reconstruction

A step up from frame blurring is position reconstruction. In this method we render the scene as we normally would. Then we sample the depth buffer for each pixel in the render target and reconstruct the screen space position. Using the previous frame’s transformation matrices we then calculate the previous screen space position of that pixel. We can then calculate the direction and distance, in screen space, and blur that pixel. This method assumes that everything in the scene is static: it expects that the world space position of the pixel in the frame buffer does not change. Therefore, it is great for simulating motion from the camera, but it’s not ideal if you want to simulate finer-grained motion from dynamic objects in your scene.

Velocity Buffer

If you really need to handle dynamic objects, then this is the solution for you. It’s also the most expensive of the three. Here we need to render each object in the scene twice: once to output the normal scene render target and again to create a velocity buffer (usually an R16G16 render target). You could circumvent the second draw call by binding multiple render targets if you wish.

When we create our velocity buffer, we transform each object we render from object space by both the current and the previous world-view-projection matrix. Doing this we are able to take into account world space changes as well. We then calculate the change in screen space and store this vector in the velocity buffer.

Implementation

Requirements

We decided to implement the Position Reconstruction method.

  • Frame blurring wasn’t an option – this method was too old school and didn’t offer enough realism.
  • The camera in our game follows the player’s vehicle, which is constantly moving, so even though we can’t simulate world space transformations we should still get a convincing effect.
  • We didn’t want to incur the additional draw call cost of populating a velocity buffer.
  • We didn’t want to incur the additional bandwidth overhead of populating the velocity buffer.
  • We didn’t want to consume the additional memory required to store the velocity buffer.

Code

We start by rendering our scene as we usually would. As a post processing step we then read the depth of each pixel in the scene in our shader:

float depthBufferSample = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv).r;

Then we will reconstruct the screen space position from the depth:

float4 projPos;
projPos.xy = uv.xy * 2.0 - 1.0;
projPos.z = depthBufferSample;
projPos.w = 1.0f;

In C# we pass a transformation matrix into our shader. This matrix will transform the current screen space position as follows:

  1. Camera Space
  2. World Space
  3. Previous frame’s Camera Space
  4. Previous frame’s Screen Space

This is all done in a simple multiplication:

float4 previous = mul(_CurrentToPreviousViewProjectionMatrix, projPos);
previous /= previous.w;

To calculate this transformation matrix we do the following in C#:

private Matrix4x4 previousViewProjection;

private void OnRenderImage(RenderTexture src, RenderTexture dest)
{
    var viewProj = cam.projectionMatrix * cam.worldToCameraMatrix;
    var invViewProj = viewProj.inverse;
    var currentToPreviousViewProjectionMatrix = previousViewProjection * invViewProj;

    motionBlurMaterial.SetMatrix("_CurrentToPreviousViewProjectionMatrix", currentToPreviousViewProjectionMatrix);

    ...

    previousViewProjection = viewProj;
}

We can now calculate the direction and distance between the two screen space vectors. We then use the distance as a scale and sample the render target along the direction vector.

float2 blurVec = (previous.xy - projPos.xy) * 0.5f;
float2 v = blurVec / NumberOfSamples;

half4 colour = 0;
for(int i = 0; i < NumberOfSamples; ++i)
{
    float2 uv = input.uv + (v * i);
    colour += tex2D(_MainTex, uv);
}

colour /= NumberOfSamples;

Controlling the Motion Blur

Once we got all this on-screen, we quickly decided that there was just too much blurring going on. We want most of the scene to be blurred, but the artists wanted vehicles and drivers to be crisp. In order to achieve this, with as little pipeline impact as possible, we decided to use the alpha channel to mask out areas of the scene that we didn’t want to blur. We then multiplied the blur vector by this mask, effectively making the blur vector [0, 0] in those areas.

half4 colour = tex2D(_MainTex, input.uv);
float mask = colour.a;

for(int i = 1; i < NumberOfSamples; ++i)
{
    float2 uv = input.uv + (v * mask * i);
    colour += tex2D(_MainTex, uv);
}

To add to this, we also found that objects in the distance shouldn’t blur as much as those in the foreground. To achieve this we simply scaled the blur vector by the linear eye (view space) depth, calculated from the depth buffer. LinearEyeDepth is a helper function inside the Unity cginc headers.

float d = LinearEyeDepth(depthBufferSample);
float depthScale = 1.0 - saturate(d / _DepthScale);

Conclusion

Out of the box, Unity will support motion blur by generating a velocity buffer for you, but for our requirements this was overkill. We always need to keep in mind that we are a mobile studio, so we need to take performance into account every step of the way. The method we implemented has its tradeoffs: we had to add distance-based scaling to prevent objects in the distance blurring too much. However, it gave us a convincing effect due to the fact that our camera is constantly moving. If you have any questions or feedback, feel free to drop me a message on Twitter or leave a comment below.

Shared Unity Code: The Implementation

Introduction

If you haven’t already read it, take a look at the first in this two-parter. It covers the journey we took and why we decided to give Nuget a shot (Shared Unity Code: The Journey).

How we store NuGet packages

Our server team use a packaging system called Maven.

We actually experimented with sharing code using Maven too, but it was overly complex for our needs and didn’t quite fit what we were looking for.

We have a lot of shared server code, and this code lives in a Maven repository hosted on a service called Artifactory. As it turns out, Artifactory supports Nuget too, so we were able to use this to host our private Nuget repository!

For testing purposes you can easily create a private repo on an internal server, an AWS node or even just a new folder on someone’s computer on a shared network – if you do this, just be sure that it’s backed up! All of these options are covered in this article.

Setting up NuGet packages

Once you’ve got somewhere to store your packages, it’s time to get people uploading and downloading them. You need to first install Nuget on your system.

  • On Windows I’d recommend using Chocolatey
    choco install nuget.commandline
  • On MacOS I’d use Homebrew
    brew install nuget

You’re also going to need to let Nuget know about your private server. You can do this by firing up the terminal/command prompt and running the following command:

nuget sources Add -Name MyNugetServer -Source https://mynugetserver.com/api/nuget/nuget

If you’re using a folder on your local machine, you’d do something like this:

nuget sources Add -Name MyNugetServer -Source /Users/myname/NugetPackages

If you have setup a username, password and/or API key to your server you will also need to let Nuget know about this:

nuget sources update -Name MyNugetServer -UserName [username] -Password [password]
nuget setapikey [username]:[apikey] -Source MyNugetServer

Consuming Packages

Once you’ve done this you can start downloading packages from your private server! Doing this is simple:

nuget install MyCompany.MyPackage

However this isn’t ideal on a project with multiple developers.

Each package is versioned, and packages have dependencies on other packages that are also versioned. Therefore we use a configuration file, per project, to specify what packages we are going to use. Here is an example packages.config file:

<?xml version="1.0" encoding="utf-8"?>
<packages>
 <package id="SpaceApe.Logger" version="1.0.0" target="net35"/>
 <package id="SpaceApe.Common" version="1.0.8" target="net35"/>
</packages>

Each entry in this file specifies a package name, a version number and a target (the .Net framework version). You can find all the reference material for the config file here. When we have a packages.config file we can then use it to install our packages:

nuget install packages.config

This will go through every package in your config file and download it to your computer.

Releasing Packages

In order to release a package you have to create a Nuspec file. This file will define all the information about your package, including:

  • Package Name
  • Developer
  • URL
  • Package Version
  • Dependencies
  • Files

It’s pretty important to keep this file up to date – if you change a dependency or update one of the dependencies to a newer release, update your Nuspec file!

A Nuspec file is actually just another XML document, here is an example of one of ours:

<?xml version="1.0"?>
<package>
 <metadata>
    <!-- The identifier that must be unique within the hosting gallery -->
   <id>SpaceApe.PerformanceTesting</id>
    <!-- The package version number that is used when resolving dependencies -->
   <version>1.0.3</version>
    <!-- Authors contain text that appears directly on the gallery -->
   <authors>SpaceApeGames</authors>
    <!-- Owners are typically nuget.org identities that allow gallery users to easily find other packages by the same owners.  -->
   <owners>mshort</owners>
    <!-- License and project URLs provide links for the gallery -->
   <projectUrl>https://github.com/spaceapegames/performance-testing</projectUrl>
    <!-- If true, this value prompts the user to accept the license when installing the package. -->
   <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <!-- Any details about this particular release -->
    <releaseNotes>Adding timers</releaseNotes>
    <!-- The description can be used in package manager UI. Note that the nuget.org gallery uses information you add in the portal. -->
    <description>Unity performance recording</description>
    <!-- Copyright information -->
    <copyright>Copyright ©2017 Space Ape Games</copyright>
    <!-- Tags appear in the gallery and can be used for tag searches -->
    <tags>unity dll c# performance test testing</tags>
    <dependencies>
    </dependencies>
 </metadata>
 <files>
    <file src="Nuget/bin/Unity54/SpaceApe.PerformanceTesting.dll" target="lib/unity54" />
    <file src="Nuget/bin/Unity54/SpaceApe.PerformanceTesting.dll.*db" target="lib/unity54" />
    <file src="Nuget/bin/Unity56/SpaceApe.PerformanceTesting.dll" target="lib/unity56" />
    <file src="Nuget/bin/Unity56/SpaceApe.PerformanceTesting.dll.*db" target="lib/unity56" />
    <file src="Nuget/bin/Unity2017/SpaceApe.PerformanceTesting.dll" target="lib/unity2017" />
    <file src="Nuget/bin/Unity2017/SpaceApe.PerformanceTesting.dll.*db" target="lib/unity2017" />
    <file src="Nuget/bin/Unity2017/Editor/SpaceApe.PerformanceTestingEditor.dll" target="lib/unity2017/Editor" />
    <file src="Nuget/bin/Unity2017/Editor/SpaceApe.PerformanceTestingEditor.dll.*db" target="lib/unity2017/Editor" />
 </files>
</package>

As you can see there’s quite a lot of information in there, but once you’ve set this file up, it won’t change much. For all the details on Nuspec files, you can head over to the reference page. So, now you have your project set up, it’s building, the tests are passing and you have your Nuspec file ready to go. Jump back into the terminal and run the following commands:

nuget pack MyCompany.MyPackage
nuget push MyCompany.MyPackage.1.0.0 -source MyNugetServer

The first command will package up all the files into a nupkg (which is actually just a .zip file, you can change the extension to .zip and unzip it to see what’s inside it) file and the second will push it up to your repository.

People can now use your NuGet package!

We actually build and release all of our modules via Jenkins.

For the entire Nuget Command Line Interface reference, head over here.

Multiple Teams, Multiple Versions of Unity

You may have noticed that we hijack the target field to distribute different DLLs compiled against different versions of Unity in a single package. This is quite unconventional and is something we’ve only recently started trying, but it seems to be working ok so far.

It means a single package can contain a DLL compiled against Unity 5.6 and Unity 2017, but using the target field we specify that our game is only interested in the Unity 2017 DLL.

This allows us to easily support multiple teams with the same shared modules.

Building Multiple DLLs

In order to build our modules against different versions of Unity we change the C# project file so that it has multiple configurations – one for each version of Unity that we are building against.

Each configuration then has the required preprocessor defines assigned to it (e.g. UNITY_5_6_OR_NEWER, UNITY_2017_1_OR_NEWER etc), and each configuration links against the correct Unity DLLs.
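Inside the module’s source, those defines then gate any version-specific code – a trivial, hypothetical example:

public static class SharedLog
{
    public static void Info(string message)
    {
#if UNITY_2017_1_OR_NEWER
        // code that relies on 2017-only APIs would live in branches like this one
        UnityEngine.Debug.Log("[2017+] " + message);
#elif UNITY_5_6_OR_NEWER
        UnityEngine.Debug.Log("[5.6] " + message);
#endif
    }
}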

We can then use the exact same source code in a test Unity project to quickly track down any bugs or issues we might encounter.

Unity and NuGet

When you create prefabs in Unity and assign a custom script to that prefab, the GUID of that script is stored within the prefab. So if the GUID of the script ever changes, your prefab will lose the script reference.

So next time you release your package, and you’re super excited to get everyone to update, they do so, and all their prefabs break. Why?

Nuget places the version number in the name of the DLL (eg SpaceApe.PerformanceTesting.1.0.0.dll)

When you update your package, the version number will change (which changes the filename too!)

Unity calculates the GUID of a script within a DLL by taking

  • the name of the DLL
  • the name of the script
  • And then hashing them together

Therefore when we install packages we use a special --excludeversion option that doesn’t put the package version number onto the DLL, and we don’t delete the DLL’s meta file, e.g.:

nuget install packages.config --excludeversion

Where we are now

So as it stands everything is working really well for us.

  • We can share code between teams
  • We can support multiple versions of Unity
  • Our code is shipping with more and more tests
  • A lot of our modules are well documented and the documentation is automatically updated on each release

This all leads to one thing, our dev teams are able to get on with what matters – making that genre defining hit.

Sharing Unity Code: The Journey

Introduction

At Space Ape we have a lot of shared code which we use across all of our projects. As of now, we have 57 shared code modules; these range from messaging systems, logging and our in-game console, to crowd rendering and asset processing.

Why Share Code?

  • We don’t need to spend time reinventing the wheel.
  • We have the flexibility to take the shared modules we require.
  • We’re confident that this code comes with a suite of unit and integration tests.
  • We know it’s been proven in practical use across previous projects.
  • We are all familiar with the code, making project start-up times much faster.

Sharing code allows developers to focus more on the final product, in our case the games we create, because they spend less time worrying about low-level implementation details and non-functional requirements.

The Journey

All of this shared code didn’t just happen overnight. We’ve tried and failed quite a few times with our release process, distribution and collaboration. As a result of failing and learning from these failures, we’re now able to easily share our code, and developers from various projects are all contributing to it.

The process is by no means perfect – we’re still working out the kinks – but it’s improved our workflow a lot, and maybe it will work for you too.

At the Beginning, there was only Ctrl+C, Ctrl+V

When Space Ape first started we were all focused on one project, so there was no need to share code. We were a start-up and our primary objective was to ship a product.

When the time came to start our second project, some of the code was copied over from the first project and we removed any project dependencies, but at least we had a code base to start with. Our first title was still being developed, bugs were being fixed, and features were being added.

Whilst all of this was happening the two code bases diverged quite a bit. If a bug was fixed in one of our games, there’s a good chance that fix didn’t make it into the other game – the same goes for features and improvements. As each project went on, the ‘shared’ code was modified to fit each project, and in the end sharing code between projects was more hassle than it was worth.

Context of the Problem

Roll forward a few years. We now have a few established games, Samurai Siege, Rival Kingdoms, Transformers: Earth Wars and Fastlane, and we’ve entered into a partnership with Supercell.

Unsurprisingly, our company goals have changed a bit. We’re no longer a start-up in the true sense of the phrase. Our goal is to create a genre defining mobile hit, and in doing so we are moving away from our build and battle heritage.

We are branching out into many new genres, so we need to iterate quickly. We can’t predict that every new game idea will be a success. We need to try new things rapidly, and learn as quickly as possible.

Having a solid foundation of shared code would help us to iterate faster. In order to do this we would have to look at how we could share code between projects, with as little pain and slowdown as possible. When it comes to sharing code, the biggest obstacle is not writing the code itself, it’s the tooling and practices around releasing and distributing it.

Enter Git Submodules

Git Submodules are like a repository within a repository. You can continue to work on your code base and once you’ve finished a feature or fixed a bug, you can check it in. You just push your shared code up to one repository, and your project’s code to another.

This seemed ideal at first! We were already using Git across our studio so everyone was familiar with it. But we soon ran into problems.

As the source code is there for you to edit freely, teams would obviously change shared code, check it in and then when the other team pulled changes, their code wouldn’t compile! This sounds a little lazy and reckless, but this issue stems from the fact that there is no boundary between what is shared code and what isn’t. From a team’s point-of-view, they are just changing code in one big solution. The ideal solution here is to expose a simple yet well-defined API to the game teams.

So once this became an issue, each team decided to branch the shared modules off the master branch, and we were back to square one. Two diverging code paths, never merged together.

Further to this, we found that anyone who’s not a developer (artists, animators etc) can have quite a hard time using submodules. The tooling around submodules isn’t straightforward. Often we would update a submodule but someone wouldn’t pull changes for that submodule, so project and shared code would get out of sync.

Maven

Our server developers use Maven to manage and release packages. Maven is a tool developed for the Java ecosystem. When you are ready to release your project, Maven will take all of the information within a pom file and then package up your code so that it can be shared with others.

Because of all the features offered by Maven, and the fact that it’s not a native .Net tool chain, it often felt more complicated than it needed to be. Out of the box it comes with things like build life-cycle management. But at the end of the day all we were really interested in was dependency management, versioning and packaging; and that came with a lot of overhead. We ended up creating custom build steps to install our packages which made our build and release process even more complicated. As it wasn’t natively supported (or developed for) either Unity or .Net we felt that there must be a better solution.

Unity Packages

Because we are using Unity, the next technology that came to mind was Unity Packages, just like you see on the Asset Store. It was really easy to integrate. However, the whole release process and package storage was quite unregulated. There’s no real package versioning support and no dependency management. You also need additional tooling to uninstall a package as there’s no defined package structure, so we would have to clean up the old package before installing the new one.

Finally, Unity packages traditionally contained source code. We wanted to stop teams making changes to source code within these shared modules and improve compile times. This meant we needed to use Dynamic Link Libraries. DLLs also allow us to easily develop shared code modules that depend on other modules, without having to make sure that the source code for the dependency was the correct version and compiled in the first place. What’s more, using DLLs would also lead us to faster compile times.

So we looked elsewhere, and found:

NuGet

If you’ve not come across Nuget before, it’s a package management system designed specifically for the .Net framework and it supports dependency management. There are currently over 110,000 packages on the public repository, some of which we were already using. However this repository is public, and a lot of our code isn’t for public release, so we couldn’t just go ahead and push our packages up to this public repository.

Before we could make a start there was quite a bit of work involved in setting up a whole development and release process around Nuget, not to mention setting up our own Nuget package server and getting everything to work nicely with Unity. In my next blog post I’m going to take you through everything, from start to finish.

Mentoring – you might be doing it already

MCV Women in Games Awards at Facebook May 11, 2018.

Do you remember all of your good teachers, both in- and outside of the classroom? The ones who inspired you, pushed you, believed in you, called you on your BS? I do, and they made all the difference.

Last week I was honoured to win the MCV Women in Games Award for Career Mentor of the Year. I didn’t have exposure to this industry when I was growing up, so I feel blessed to have the chance to be a part of it now. I think it’s up to all of us to make the opportunities available in this incredible industry accessible to those trying to follow in our footsteps, and apparent to those who may not have even considered it as an option.

My boss and Space Ape mentor Mickey.

I’m proud to be part of a studio which takes that seriously. We set up our Varsity Program for students earlier this year, partnering with local universities to deliver lectures about disciplines in games. We livestreamed the lectures on Twitch, and had more than 16 thousand live views. One of the students I met through the program is now actually interviewing with us for a part time position over the summer, and we’re looking forward to next semester.

We partnered with UCL and the University of Greenwich to deliver six lectures.

Us and some of the students following a lecture at the University of Greenwich.

But even before our efforts for more outreach, we’ve been mentoring talent internally for years –

We hold Universities at lunchtime where we teach each other about different aspects of game development and the broader industry. To further build on our experience every Ape gets a yearly £1500 training budget, to spend however they see fit to develop their skill-set. We also hold monthly Ape Spaces, days dedicated to fostering creativity and brainstorming new game ideas as a company.

I wanted to take this chance to highlight just a few of our success stories within the company.

George Yao is the PM for one of our upcoming titles, which grew out of an Ape Space game jam.

Graduating with a Finance degree back in 2010, George never thought he would have the opportunity to work in the games industry.

“It wasn’t a thought that ever crossed my mind even though I grew up loving and playing games,” he says.

George didn’t just love playing, he held the Number 1 world rank in Clash of Clans for seven consecutive months.

“At the time, I didn’t understand the potential impact from pro-gaming. For me, I just played a game that I enjoyed and due to my competitive nature, I strived to be the best. After retiring from Clash of Clans, Simon Hade (COO) contacted me from a start-up mobile games studio out in London.”

George with a player from Team Secret, where he acts as Media Director.

You can find out more about George’s journey and his involvement with esports @JorgeYao87

After consulting for Space Ape for a few months, he was interviewed and officially hired for a full-time position as a VIP community manager. Alongside his career at Space Ape, George now manages pro esports team, Team Secret. 

“Being a self-starter and having strong mentorship from management, I became a Live Operations Manager within six months and a product manager and owner within two years. Space Ape not only opened the doors but also fostered my career growth every step of the way.”

Vicki is the Vision Holder for one of our upcoming titles, also born out of an Ape Space game jam.

Vicki is a Lead Artist and Vision Holder for one of our new games. After she started as a 3D artist she was quickly exposed to game design, management, pitching and other areas of development.

“We are huge on our knowledge sharing culture, and with our density of talents Space Ape is a great place to learn and grow,” she says. “I’m always learning new things in the Universities we hold at lunch. I don’t think I would be as equipped to be a Lead Artist if I had gone anywhere else.”

Art from our first title Samurai Siege, and (above) art from our second game Rival Kingdoms.

Vicki found agency through working in a small team and set the artistic vision for one of our most promising new titles.

Johnathan went from Games Analyst to Game Lead in two years.

Johnathan began his career at Space Ape as a Games Analyst, keeping his finger on the pulse of trends in the market.

“What’s really impressed me about Space Ape is their willingness to give people the opportunity to prove themselves in new roles. The training budget also allowed me to get the resources I needed to develop my skills. There is a strong culture of promoting from within and it’s a true meritocracy.”

Fast-forward two years and he’s now the Product Owner of one of our most successful titles.

Johnathan used his training budget to develop some of the skills required to become a PO.

“When I joined Space Ape having changed career, I never imagined I’d be running a game team just two years later! If you’re passionate and productive, they will make sure you get the opportunity to put your new skills into practise.”

I can think of a dozen other examples off the top of my head, from Alex and Ioannis who journeyed from QA to Product Owners, to Raul and Keedoong who started as CS agents and now head up entire departments in CS and Localisation.

From Pro-Gamer to Product Owner, George and his team are now getting ready to soft-launch his dream title, which was actually born out of an Ape Space game jam.

“As long as you have a long-term vision and the traits that embody the company culture, your goals will come to fruition,” he says.

For more info or to get involved with our Varsity Program: varsity@spaceapegames.com

I’ve watched my colleagues grow into various roles and thrive. I feel incredibly lucky to work in an environment that allows for, and encourages that kind of growth. I’m personally excited about using the talent we’ve fostered in-house to reach, build and hopefully inspire the talent waiting to be tapped in the wider community.

Fastlane: a growth engine fueled with ads

How holistic experimentation on ad monetisation, amplified with smart UA, took Fastlane from 170,000 to 700,000 DAU in 4 months. And growing.

  • Fastlane has reached 16M installs, approaching $30M run rate, and is on an explosive growth trajectory 10 months after launch
  • Fastlane is an evolved arcade shooter game available for free on iOS and Android phones
  • From $5,000/d to $45,000/d from ads in 4 months: what are the lessons learnt from our holistic iterations and partnership with Unity Ads
  • We are setting a new benchmark for ads at $0.13 ad arpdau in the US
  • Our Ad LTV – lifetime value – is now based on true ad performance to gain accuracy
  • We multiplied our User Acquisition budgets by 5x with our lean team of 2. And we are profitable in under a month at $0.52 CPI direct

16M installs, approaching $30M run rate, 700k DAU, and onto a recent explosive growth trajectory.

Fastlane was developed in 6 months by a team of 8 people. The team’s thesis was that there was a gap in the market between hyper-casual and midcore titles. A gap where casual addictive gameplay can meet $0.25+ arpdau in Tier 1 geos, marrying IAP and ads while maintaining good retention metrics at 12% d28 and attracting more than 100,000 new users daily.

We feel we’ve built a replicable growth engine with Fastlane. Better – we have improved our ad monetisation stack and our understanding of ad LTV – lifetime value – as well as forged long term partnerships that will have a long lasting impact in our future strategies going forward.

(Fastlane daily active user base and revenue have been growing week on week at an explosive growth rate since November 2017 – and the game is more profitable than ever)

Fastlane’s stats by mid-March 2018, 10 months after launch:

  • 16M installs, approaching $30M run rate
  • 700,000 DAU (up from 170k in nov 2017)
  • 2.5M+ daily video views
  • $80,000 daily booking (iap + ads), with highs approaching $100,000/d
  • $45,000 daily booking with ads alone (up from $5k/d in nov 2017)

This article sums up our main learnings from our ad monetisation implementation and the partnership that significantly increased our LTV. It also explains how global UA with key partners amplified its impact and led us, in 4 months, to profitably:

  • 4x DAU
  • 4.5x revenue
  • 5x marketing spend while more profitable than before

Fastlane: Road to Revenge, an evolved mobile arcade shooter

Fastlane was launched in mid-2017, a period of low risk growth and calculated bets for the studio. Since then, we have joined forces with Supercell and are committed to making our mark on the gaming ecosystem, defining a category hit and making a game that people will be talking about in 10 years’ time.

Despite it not being our genre-defining game, Fastlane was, and is, a great learning ground for us in many aspects, including how to automate live-ops in a casual game, how to integrate 3rd party content from YouTubers to a Kasabian soundtrack, ad monetization and user acquisition.

I’m pleased to be able to share some of these lessons in this blog post.

Inspired by classic arcade shooters from the ’80s like Spy Hunter and 1942, Fastlane: Road to Revenge is a one-handed retro arcade shooter with RPG elements, designed to be played in short bursts. Players chase high scores in multiplayer leagues and leaderboards, collect, upgrade and customise exotic cars and unlock devastating vehicle transformations!

The game presents a huge motley crew of characters–many played by some of YouTube’s biggest gaming personalities–as well as powerful vehicle upgrades, outrageous events and fully customisable soundtrack with Apple Music integration.

From $5,000/d to $45,000/day from ads in 4 months: the lessons learnt

An iterative approach

The success we’ve had in the last few months on Fastlane is a result of 6 months of iteration and experimentation by the dev and marketing teams working closely co-located.

Fastlane was not our first attempt at in-game ads. We had included rewarded ad units in both Samurai Siege and Rival Kingdoms, but in both cases the features were added post launch and not inserted into the core economy of the game, and therefore were not additive.

In Fastlane we committed from the beginning to design the economy specifically for ad monetisation. This involved being very clear that we would create value for both players and advertisers. This seems like common sense but previously our approach to in game ads was to just focus on the player experience. Of course no one is going to pay to advertise in your game if no players ever engage with the ads and ultimately install your advertisers’ (often a competitor) game. Once you start from the position that you want your players to tap on these ads then you approach ad unit design very differently.  Rather than focussing on how you can make the ad experience cause the minimal disruption to your gameplay, you focus on how you can ensure that once your players leave your game that they come back. This was a very different mindset and the fundamental reason why Fastlane’s ad implementation has been so successful.

It also resulted in the team implementing ads with the LTV components and player happiness as our top concern – making sure we were chasing the big picture, not just increasing one parameter (views) while decreasing others (retention, IAP) in the process.

That was all fine in theory, but initially it was merely a hypothesis, so we tested it in beta. Below is the outcome of a test we ran in beta where we forced interstitial ads after every race. The result was pretty clear: it vastly increased the number of interstitial views per day as well as ad revenue, but user retention dropped from day 7. The overall result was negative, as expected, but it was a good exercise for us to go through as a studio, and each subsequent hypothesis was tested in a similar way.

We were not ready to grow our short term revenue while hurting our long term retention. We made no compromise, canned that idea and tested some more.

We a/b tested different approaches to ads with a strict data-driven approach and played with caps, frequencies, placements, formats and providers in order to end up with the design that you are seeing today. 6 months after the game’s global release we eventually found a winning formula for that stage of the game’s life cycle: an optional rewarded ad at the end of almost every race, with an interstitial showing up if you don’t make any IAP or watch a rewarded ad.

We also entered into an exclusivity partnership with Unity Ads in Dec-17 using their unified auction to monetise our entire inventory that has proven to be pivotal in our growth journey.

This new setup increased ads arpdau to $0.13 net in the US and $0.18 in CN while we more than tripled our scale with more than 2.5M daily video views globally.

Here is our current ad performance per main geo, in term of weekly views and CPM:

(US and CN are leading both in video ads actual CPM payouts and weekly impressions)

In addition to significantly increasing ad arpdau, we were able to confidently see in the data that the impact on retention and the IAP cannibalisation were more than offset by the increase in ad revenue. LTV improved by 40% overall.

Fastlane’s arpdau – average revenue per daily active user – in the US:

(while there was a 15% cannibalisation of IAP, the increase we had gotten from ads led to a net increase of 40% of the overall arpdau)

Our 4 ad design pillars

We had 4 pillars that guided our ad design methodology on Fastlane.

Pillar 1 – Ads must work for the player. Rewards need to feel desirable and part of the core loop, yet complementary to IAP bundles.

Pillar 2 – Ads must be displayed at the right moment during the player’s session so that they do not negatively impact retention. In other words, show the ads when the player would be ending their session anyway.

Pillar 3 – Ads must be part of the game’s world. They need to feel natural for the player. They need to add to the world.

Pillar 4 – The ad implementation must work for advertisers and drive installs. You should look at creating a placement where the players will want to interact with the ads in order to reach the highest CPMs possible, not necessarily the highest number of views. Culturally this was the hardest pillar to implement, as it is counter-intuitive to design to drive people to play competitors’ games!

(4 ad pillars that the Fastlane product team lived by during the ad implementation and experimentation)

Fastlane’s key findings

Here are our main findings specific to Fastlane:

» Rewarded ads > Interstitial ads for player retention and CPM

85% of our ad revenue is from rewarded ads and it does not hurt retention as the player has the choice.

» End of a race/session is the best placement for maximising CPM and player’s engagement with ads on Fastlane.

We want people interacting with the ads. In order to encourage that behaviour we found that a rewarded video at the end of a session generated 25% higher CPM than giving the option to watch an ad at the start of a session.

» Giving significant rewards to a player for watching an ad does increase the engagement rate

And it does not cannibalise IAP bundles if the economy is ready for it.  But your rewards must be set to levels that players would not otherwise buy with IAPs.

» Be upfront and unapologetic.

Watching ads is a clear value exchange and is part of the core aesthetic of the game. Not offering the option to pay to remove ads did work better for us. A Fastlane player should WANT ads. TV shows have been designed around ad breaks for years and our game is too as it’s the business model we’ve chosen from the start.

(Our ad implementation makes sure that it feels natural and enhances the brand and the game world)

A different approach than our previous titles

This methodology differed vastly from the approach we took for Samurai Siege and Rival Kingdoms, where the ad feature was added more than 6 months after the release of each game rather than designed with specific sinks and taps for ad-rewarded currencies in mind. This resulted in the rewards being either insignificant or cannibalising IAP in strategy games. Furthermore, our strategy games had an arpdau of $1-2 from IAP, so the bar for in-game ads to be impactful in that economy was very high.

It should also be noted this lesson does not just apply to ads. The same is true for viral and social hooks that need to be designed as part of the core loop to have an impact.

An ad LTV model based on actual ad performance to gain accuracy for user acquisition media buying.

Understanding Ad LTV per campaign is arguably our big learning on ad monetization. It is so easy to make bad decisions by becoming fixated on one or other metric, when ultimately the only metric that matters is LTV.

LTV calculation has been pivotal since the mobile app industry moved to free-to-play.

As a marketer in today’s industry, chasing profitable ROI and growth via targeted paid campaigns is key. LTV is based on complex predictive models when it comes to IAP and developers have become quite sophisticated in predicting what a user will spend over their lifetime just by analysing their behaviour in the first few play sessions. Modelling LTV from IAPs at a user level is not trivial but it is well understood in 2018 and any game developer inherently has the data to do so because they need to associate a payment with a user account in order to deliver the relevant in-game item to the correct person.

However, when it comes to ad revenue, the ad LTV calculation is usually very basic. Historically we would crudely estimate how much revenue each individual user was generating from ads, and this was fine because it was a very small part of our business. Today, however, advertising is a $12M+/year business for us – only a small percentage of our overall revenue, but significant enough to justify investing in understanding it better and adapting our UA to it.

At launch, our crude ad LTV approach was to simply divide a country’s ad revenue by the number of ad views in that country, and then apportion the revenue per user depending on the number of ads they watched. In the case of Fastlane in the US, ad LTV for the first 2 weeks with this method was between $0.40 and $0.64 per user depending on the source of the user.

blog_adLTVOne
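To make that crude method concrete, here is a minimal sketch (the numbers, user IDs and class name are hypothetical, not our actual pipeline):

```csharp
using System;
using System.Collections.Generic;

class CrudeAdLtv
{
    static void Main()
    {
        // Hypothetical country-level totals for one platform.
        double countryAdRevenue = 10_000.0; // USD earned from ads in the country
        long countryAdViews = 500_000;      // total ad views in the same country

        // Crude assumption: every ad view in the country is worth the same amount.
        double revenuePerView = countryAdRevenue / countryAdViews;

        // Hypothetical per-user ad view counts.
        var adViewsPerUser = new Dictionary<string, int>
        {
            { "user_a", 40 },
            { "user_b", 5 },
            { "user_c", 0 },
        };

        // Apportion revenue to each user purely by how many ads they watched.
        foreach (var entry in adViewsPerUser)
        {
            double userAdLtv = entry.Value * revenuePerView;
            Console.WriteLine($"{entry.Key}: ${userAdLtv:F2}");
        }
    }
}
```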

However, this misses the point that most advertisers are bidding on performance, not on views, and that all views in a given country/platform are not necessarily equal in terms of revenue. We have since moved away from that model and are now attributing revenue based on true ad performance, data that we receive as part of our partnership with Unity. This gives us an ad LTV between $0.23 and $0.73 per UA channel for Fastlane in the US – a much bigger spread.

blog_adLTVTwo
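We won’t detail the exact data schema here, but conceptually the switch looks something like this minimal sketch, assuming impression-level revenue is available (names and values are hypothetical):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class PerformanceBasedAdLtv
{
    static void Main()
    {
        // Hypothetical impression-level data: one entry per ad view, with the revenue
        // the network actually reported for that impression (performance-priced ads
        // can be worth far more than the country average).
        var impressions = new List<(string UserId, double Revenue)>
        {
            ("user_a", 0.012),
            ("user_a", 0.035),
            ("user_b", 0.004),
        };

        // Attribute revenue per user from real impression values
        // instead of a flat revenue-per-view figure.
        var adLtvPerUser = impressions
            .GroupBy(i => i.UserId)
            .ToDictionary(g => g.Key, g => g.Sum(i => i.Revenue));

        foreach (var entry in adLtvPerUser)
            Console.WriteLine($"{entry.Key}: ${entry.Value:F3}");
    }
}
```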

This was a eureka moment for the team, and it allows us to tailor our UA bids even more closely to the specific media sources that bring us higher-value users – as we’ve been doing for years with IAP.

We’re now tracking our ad monetization performance not only by the number of views per user per placement, but also by the actions our users take after watching an ad, in order to maximise ad ARPDAU.

Next step: scaling up UA based on this data to deliver supersonic growth

In addition to getting better-performing ad placements in-game and improving LTV, we were also getting better at optimising our UA campaigns.

During this time we managed to increase our oCVR% (installs-to-impressions ratio) thanks to smart ASO and creative iteration from our in-house team, plus playable ads from partners, which allowed us to reach much more scale at a reduced direct CPI of $0.52 from October onwards ($0.34 including organics).

oCVR% on our main video UA channel per month and per geo:

blog_oCVR

(oCVR% increased by 35-75% in the space of a few months depending on the geo, which allowed us to scale and improve ROI)

All these improvements in oCVR% and ad monetization led to the growth we’re seeing today, multiplied by the rocket fuel that is smart UA spend.

blog_ROI

(Direct UA ROI has improved. Weekly UA investment has increased 5x with our key ad partners, and it shows no sign of slowing down any time soon – quite the opposite, actually)

We’re now spending north of $250k a week on marketing with our lean team of two, growing week on week, while being profitable in under a month at a $0.52 CPI. We know more about ad LTV per campaign, we have higher LTV, lower CPIs, long-term partners and better creative iteration, and there is no going back.

The lessons we will take forward to our future games are:

  1. Design your game with ads in mind from the beginning if that’s the business model you choose. Use the pillars we used for Fastlane (or adapt them for the new game) and, in particular, design moments where you can effectively push your players to interact with an ad. Your CPM, and ultimately your revenue, will reflect that.
  2. A/B test and track all the impacts of any changes to the in-game ads implementation, and focus your KPIs on LTV improvements, not views.
  3. Attribute ad LTV precisely at a user level so you can target UA campaigns at the kind of people who will generate more revenue from interacting with ads.

Introducing Space Ape Varsity

SpaceApe-Varsity_Jacket.png

Space Ape Varsity is our new program housing any projects we kick off in the collegiate space. Through mentorship and knowledge-sharing, our goal is to build relationships with educational institutions and their student bodies to inspire and support future leaders of our industry.

Our first project under the Varsity umbrella is Masterclasses.

Over the month of March we held a series of Masterclasses focusing on different disciplines within the games industry to give students an insider’s look at the space they will soon be entering. Through six tailor-made sessions students learned about a variety of disciplines ranging from Game Design and Development to UI/UX, Marketing and Community.

We partnered with London universities to deliver the lectures to their students in person, and also livestreamed the sessions on Twitch’s front page. Each class was followed by a Q&A with both the students present at the lecture and those watching online. More than 10,000 people tuned into the Masterclasses live over three weeks.

Space Ape is an advocate for education and training in games, computer science, and the myriad specialties that make up our vibrant industry. We’ve already had so much positive feedback from students. We’ve just wrapped up our first round of classes for this semester, and we’ll be looking to cover more topics and disciplines in the fall. Thank you to the University of Greenwich, University College London, the NUEL, NACE and Twitch for all their support this semester.

You can find a synopsis, complete slides and videos for each of the six Masterclasses below. For more info email varsity@spaceapegames.com

Creative Engineering 101

Tom Mejias

Tom Mejias is a Client Engineer at Space Ape Games and a whiz at prototyping new titles. During the hour, Tom gave an overview of the games industry and the engineering roles that exist within it, as well as some in-depth guidance, tips and tricks for specializing in the role of Creative Engineer.

Tom’s slides

Watch Tom’s class

Screen Shot 2018-03-21 at 18.20.59

You can find all the Masterclasses here:

https://www.twitch.tv/collections/S-kGIX1YGRVAMw

Designing for Competition

Andrew Munden

Andrew Munden leads Live-Ops at Space Ape and has been a competitive gamer since his teens. In his class students learn about designing for a competitive environment and why features that seem ‘fun’ aren’t always good for the player.

Andrew’s slides

Screen Shot 2018-03-21 at 18.21.14


Game Design for Modern Times

Adam Kramarzewski

Adam Kramarzewski is a Game Designer at Space Ape with 11 years of experience in the industry and a new book just about to be published. He gives students an unfiltered insight into the production practices, responsibilities, and challenges facing Game Designers in the modern game development scene.

Adam’s slides

Watch Adam’s class

Screen Shot 2018-03-21 at 18.21.24


 

High-Performance Team Management

Pablo Calvo

Pablo Calvo heads up Social Media at Space Ape Games and has previously worked in esports as a team manager and coach. In this widely applicable lecture he discusses high-performance teams and the skills learned in competitive play that can be transferred to work and study.

Pablo’s slides

Watch Pablo’s class

Screen Shot 2018-03-21 at 18.21.30


UI/UX: Building Player Experiences

Adam Sullivan & Lissa Capeleto

Adam Sullivan heads up UI/UX at Space Ape. He and fellow UI artist Lissa Capeleto take students behind the visual language of games. In their class Adam and Lissa share their insights about how to build meaningful player experiences – UI and UX are much more than buttons or layout.

Adam and Lissa’s slides

Watch Adam and Lissa’s class here

Screen Shot 2018-03-21 at 18.21.35


 

Communities: Bridging the Gap

Deborah Mensah-Bonsu

Deborah Mensah-Bonsu heads up content at Space Ape. Join her as she delves into the world of the players. There’s no game without the player community – where do you find it, how do you build it, and how can you help it grow? She shares her tips for empowering players, using content to connect, and setting a community up to thrive.

Deborah’s slides

Watch Deborah’s class

Screen Shot 2018-03-21 at 18.21.41


 

Creative Engineering: The Source Of Your Imagination

In another instalment of our technical events series, today we hosted Creative Engineering: The Source of Your Imagination.

In this jam-packed event we heard from Tom Mejias, Bill Robinson and Matteo Vallone.


Tom Mejias spoke about how we decide which projects to start and which architectures we use to get them off the ground. He described our fail-fast philosophy on prototyping and the razors with which we judge our prototypes.

His slides outlining his approaches and learnings are here:


Bill Robinson gave us an insight into how animation curves can be used for game balancing with his Multi-Curve editor. He also introduced UIAnimSequencer – a tool to quickly add juicy transitions and animations within Unity.
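As a rough illustration of the general idea behind curve-driven balancing (a hedged sketch, not Bill’s Multi-Curve editor or UIAnimSequencer; the class and field names are hypothetical), a Unity AnimationCurve can act as a designer-tunable balancing knob:

```csharp
using UnityEngine;

// Minimal sketch: designers shape the curve in the Inspector
// instead of balance values being hard-coded.
public class EnemyBalance : MonoBehaviour
{
    // x = player level, y = damage multiplier. Values here are placeholders.
    [SerializeField]
    private AnimationCurve damageMultiplierByLevel = AnimationCurve.Linear(1f, 1f, 50f, 3f);

    public float GetDamage(float baseDamage, int playerLevel)
    {
        // Sample the curve so balancing stays data-driven and tweakable without code changes.
        return baseDamage * damageMultiplierByLevel.Evaluate(playerLevel);
    }
}
```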

You can see his slides including his video demonstration here:


Matteo Vallone revealed how to make your game stand out and give it the best chance of success in the market. As a former Google Play Store Manager, he gave valuable insight into making a big impact with your game launch. Now an early-stage game investor, he described how to maximise your game’s discoverability by building a beta community, engaging with app store teams and partnering with influencers.


 

We are always looking for talented game developers at Space Ape Games. If you’ve been inspired by hearing about how we work, have a look at our careers page.

A video of the whole event will be posted here shortly. Follow @SpaceApeGames for all the latest announcements.

Discover our games on the Space Ape Games site.