Crowd Rendering on mobile with Unity


One of the artists came to me one day and asked, “What’s the best way to render a crowd in our game?”. For a mobile-first studio, that’s not an easy problem. Back in my console days we could reach for instancing and GPU simulations, but our game needs to support OpenGL ES 2.0 and run on devices with far less bandwidth and power. Crowd systems inherently consume a lot of processing power: you have to animate thousands of people, and each of those people needs to look different.

We took a look at what options were available to us. We could simply render a lot of skinned meshes, but this would be expensive on both the CPU and the GPU, as we would need to skin each mesh and then submit it to the GPU. We could use Unity’s sprite system to render a billboarded crowd, but as the camera angle changes the sprites would have to be re-sorted. After a while, we realised we needed to come up with a custom solution.


2D or 3D?

Our crowds were going to be displayed at quite a distance from the camera, on devices with small screens, so fidelity was not a major concern. Since rendering thousands of 3D skinned meshes on mobile wasn’t really an option, we chose to stick to 2D crowds.


Crowd Placement

We need crowd placement to be quick and easy. We don’t want our art team spending hours painstakingly placing GameObjects inside scenes to mark where each person should spawn. Ideally, an artist should be able to define a region or area where they want people to spawn, hit a button, and watch it all come to life.

We gave the artists the ability to spawn crowds inside bounding boxes, around a sphere and at points on a mesh. We found that 95% of the time the art team would choose to spawn crowds using a bounding box.
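As a sketch of what the bounding-box option might look like (the names `CrowdSpawner`, `SpawnInBounds` and `count` are our illustration, not the production code):

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class CrowdSpawner
{
    // Pick uniformly random positions inside an artist-authored bounding box.
    public static List<Vector3> SpawnInBounds(Bounds bounds, int count)
    {
        var positions = new List<Vector3>(count);
        for (var i = 0; i < count; ++i)
        {
            positions.Add(new Vector3(
                Random.Range(bounds.min.x, bounds.max.x),
                Random.Range(bounds.min.y, bounds.max.y),
                Random.Range(bounds.min.z, bounds.max.z)));
        }
        return positions;
    }
}
```

The artist drags out a box, presses a button, and a tool like this generates the spawn positions offline.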


Crowds Up Close

One of the biggest challenges with crowd rendering is having enough variation within the crowd so that it looks believable. This means people in the crowd need different animations, clothing colours, skin colours, hairstyles and so on. Characters that are duplicated also require offset animations so that they look less like clones. You soon realise that people don’t focus on one person in the crowd; they focus on the crowd as a whole. As long as there is enough variation and movement in there, it looks pretty convincing.

We allow the artists to randomise:

  • Sprites
  • Position
  • Rotation
  • Colour
  • Animation offsets
  • Scale
  • Movement
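A hedged sketch of how that per-member randomisation could be baked at mesh-build time (the struct, the method names and the value ranges are illustrative assumptions, not the exact production code):

```csharp
using UnityEngine;

public struct CrowdMember
{
    public Vector3 Position;
    public float Rotation;        // random yaw, in degrees
    public Color Tint;            // random clothing/skin tint
    public float AnimationOffset; // de-synchronises duplicated sprites
    public float Scale;
}

public static class CrowdRandomiser
{
    // Illustrative: randomise one crowd member's attributes before baking
    // them into the generated mesh (vertex colours, UV channels, etc.).
    public static CrowdMember Randomise(Vector3 position)
    {
        return new CrowdMember
        {
            Position = position,
            Rotation = Random.Range(0.0f, 360.0f),
            Tint = Color.Lerp(Color.white, Random.ColorHSV(), 0.5f),
            AnimationOffset = Random.value,
            Scale = Random.Range(0.9f, 1.1f),
        };
    }
}
```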


Our games still target older Android devices that only support OpenGL ES 2.0. To reduce CPU overhead from issuing too many draw calls, we knew that we would have to batch as many people in the crowd as possible. For this reason, we decided that every person within a crowd region would be batched together into one draw call, but this obviously introduces a problem…


As soon as you batch everything together you lose any ability to sort individual people within the crowd. So we had to come up with a flexible sorting solution for the artists. We ended up allowing the art team to sort characters in the group along a specific axis (e.g. along the z-axis) or by distance from an object. The latter proved to be the most used option.

[SerializeField] private Transform SortTransform;

private int SortUsingTransform(Vector3 a, Vector3 b)
{
    Vector3 origin = SortTransform.position;

    float dstToA = Vector3.SqrMagnitude(origin - a);
    float dstToB = Vector3.SqrMagnitude(origin - b);

    // Sort back-to-front: members furthest from the origin come first.
    return dstToB.CompareTo(dstToA);
}


var crowdPositions = new List<Vector3>();
// generate crowd positions, then sort them before building the mesh
crowdPositions.Sort(SortUsingTransform);

Our crowds were used within a stadium, and our camera is always in the centre of the stadium, looking out towards the crowd. Therefore we are able to sort the members of each crowd group by their distance from the centre of the stadium. Every so often you may spot one character rendering in front of another, but again our crowds are so far from the camera that the chances of you seeing this are very, very slim.


We do all of our billboarding within the vertex shader. We generate four vertices for each crowd member, all initially located at the centre of the quad. We bake a scale into the vertex data, and this scale is used along with the UVs to offset each vertex from the centre and align the quad to the camera.

inline float2 GetCorner(in float3 uvs)
{
    return (uvs.xy * uvs.z);
}

inline float4 Billboard(in float4 vertex, in float3 uvs)
{
    // All four vertices of the quad are baked at its centre.
    float3 center = vertex.xyz;
    float3 eyeVector = ObjSpaceViewDir(vertex);

    float3 upVector = float3(0, 1, 0);
    float3 sideVector = normalize(cross(eyeVector, upVector));

    float3 pos = center;
    float3 corner = float3(GetCorner(uvs), 0.0f);

    pos += corner.x * sideVector;
    pos += corner.y * upVector;

    return float4(pos, 1.0f);
}

You can see that the UVs are a float3, not the usual float2. The first two components are standard UV texture coordinates, and the third component is the scale of the billboard.

private readonly Vector2[] uvs = new[]
{
   new Vector2(1.0f, 1.0f),
   new Vector2(0.0f, 1.0f),
   new Vector2(0.0f, 0.0f),
   new Vector2(1.0f, 0.0f),
};

var uv = new List<Vector3>(vertCount);
for (var n = 0; n < numberOfCrowdPositions; ++n)
{
    float scale = Random.Range(minScale, maxScale);
    uv.Add(new Vector3(uvs[0].x, uvs[0].y, scale));
    uv.Add(new Vector3(uvs[1].x, uvs[1].y, scale));
    uv.Add(new Vector3(uvs[2].x, uvs[2].y, scale));
    uv.Add(new Vector3(uvs[3].x, uvs[3].y, scale));
}


The artists weren’t happy that the crowd didn’t blend nicely with the rest of the scene; it looked flat and a bit out of place. So we developed a bit of code that would bake data from the light probes in the scene into each vertex of the crowd. All of our crowd meshes are generated offline, then loaded at runtime.

private Color ProbeColor(Vector3 worldPos, Vector3 worldNormal)
{
   SphericalHarmonicsL2 sh;
   // GetInterpolatedProbe expects a world-space position;
   // 'rend' is the crowd's Renderer, used as the interpolation context.
   LightProbes.GetInterpolatedProbe(worldPos, rend, out sh);

   var directions = new[] { worldNormal.normalized };
   Color[] results = new Color[1];
   sh.Evaluate(directions, results);

   return results[0];
}
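The per-vertex result can then be written into the generated mesh’s colour channel. A minimal sketch, assuming the crowd mesh and its vertices are already built (`mesh` and the choice of an up-facing normal are our assumptions):

```csharp
// Bake one interpolated light-probe colour per vertex into the crowd mesh.
// 'ProbeColor' is the method above; 'mesh' is the generated crowd mesh.
Vector3[] vertices = mesh.vertices;
Color[] colors = new Color[vertices.Length];

for (var i = 0; i < vertices.Length; ++i)
{
    // Billboards always face the camera, so a world-up normal is a
    // reasonable approximation for sampling the probes.
    colors[i] = ProbeColor(transform.TransformPoint(vertices[i]), Vector3.up);
}

mesh.colors = colors;
```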



In the end, we created a crowd system that fit our needs exactly. We had to cut some corners in terms of visuals to meet the demands of our target platforms, but we managed to do so, and our solution had virtually no impact on performance.

9 thoughts on “Crowd Rendering on mobile with Unity”

    • We did look into using the Unity particle system, but it didn’t offer the flexibility we needed and it would have added additional runtime overhead. Our VFX artist also pointed out that placing the particles would have been awkward for the art team: particles are spawned completely randomly, and there’s no way to ensure that only one particle is placed at each spot.
      In terms of performance, we would have to pay the CPU cost of every particle system’s update function, whereas all of our crowd system variables are simulated on the GPU. And particles within each particle system would need to be sorted, each and every frame, which we wanted to avoid. In terms of raw numbers, we have tens of thousands of people in our crowd; rendering and simulating all of them with the particle system wasn’t really feasible. On top of this, we would lose the static batching we have implemented. Unity’s particle systems also use a single light probe for the entire system, whereas we were able to bake the lighting into the crowd per vertex.


  1. Hi! Awesome article. We have the same need for a large-scale crowd system at a stage arena, and it all needs to work on mobile. We have considered doing this with either Shader Graph and billboards or via the new ECS (DOTS) skinned mesh renderer. We use URP (LWRP) in our project. Would you consider sharing your code, or some parts of it, for some more inspiration? Many thanks!


  2. Any reason for sorting manually? I assume you disabled depth sorting to do this. Can you confirm?
    If so, any reason for doing it manually?


    • When all of the crowd members are in a single mesh, it’s impossible to dynamically sort the crowd billboards individually. Imagine viewing the crowd mesh from the front, and then again from the back: depending on how you have manually sorted the mesh, in one of those perspectives the crowd will render incorrectly, with billboards at the back obscuring those at the front. We chose not to sort dynamically because we can predict our camera perspective, and it reduces runtime cost (both computation and bandwidth).


  3. Hi! This is a really great solution! I’m working on a similar mobile project, where I have my crowd as one static texture, and it would be great if it looked more realistic. Unfortunately, I guess I’m not experienced enough to reproduce something like this. I see someone already asked if you can provide some source code, and my question is whether anything has changed, or maybe whether your solution is on the Asset Store so I can purchase it. I know this post was published about two years ago, but I hope. Anyway, thanks for making this post, it’s a really awesome solution!

