Mentoring – you might be doing it already


MCV Women in Games Awards at Facebook May 11, 2018.

Do you remember all of your good teachers, both in- and outside of the classroom? The ones who inspired you, pushed you, believed in you, called you on your BS? I do, and they made all the difference.

Last week I was honoured to win the MCV Women in Games Award for Career Mentor of the Year. I didn’t have exposure to this industry when I was growing up, so I feel blessed to have the chance to be a part of it now. I think it’s up to all of us to make the opportunities available in this incredible industry accessible to those trying to follow in our footsteps, and apparent to those who may not have even considered it as an option.


My boss and Space Ape mentor Mickey.

I’m proud to be part of a studio which takes that seriously. We set up our Varsity Program for students earlier this year, partnering with local universities to deliver lectures about disciplines in games. We livestreamed the lectures on Twitch, drawing more than 16 thousand live views. One of the students I met through the program is now interviewing with us for a part-time position over the summer, and we’re looking forward to next semester.


We partnered with UCL and the University of Greenwich to deliver six lectures.


Us and some of the students following a lecture at the University of Greenwich.

But even before these outreach efforts, we had been mentoring talent internally for years.

We hold Universities at lunchtime where we teach each other about different aspects of game development and the broader industry. To further build on our experience every Ape gets a yearly £1500 training budget, to spend however they see fit to develop their skill-set. We also hold monthly Ape Spaces, days dedicated to fostering creativity and brainstorming new game ideas as a company.

I wanted to take this chance to highlight just a few of our success stories within the company.


George Yao is the PM for one of our upcoming titles, which grew out of an Ape Space game jam.

Graduating with a Finance degree back in 2010, George never thought he would have the opportunity to work in the games industry.

“It wasn’t a thought that ever crossed my mind even though I grew up loving and playing games,” he says.

George didn’t just love playing: he held the number one world rank in Clash of Clans for seven consecutive months.

“At the time, I didn’t understand the potential impact from pro-gaming. For me, I just played a game that I enjoyed and due to my competitive nature, I strived to be the best. After retiring from Clash of Clans, Simon Hade (COO) contacted me from a start-up mobile games studio out in London.”


George with a player from Team Secret, where he acts as Media Director.


You can find out more about George’s journey and his involvement with esports at @JorgeYao87.

After consulting for Space Ape for a few months, he was interviewed and officially hired full-time as a VIP community manager. Alongside his career at Space Ape, George now manages the pro esports team Team Secret.

“Being a self-starter and having strong mentorship from management, I became a Live Operations Manager within six months and a product manager and owner within two years. Space Ape not only opened the doors but also fostered my career growth every step of the way.”


Vicki is the Vision Holder for one of our upcoming titles, also born out of an Ape Space game jam.

Vicki is a Lead Artist and Vision Holder for one of our new games. After she started as a 3D artist she was quickly exposed to game design, management, pitching and other areas of development.

“We are huge on our knowledge-sharing culture, and with our density of talent Space Ape is a great place to learn and grow,” she says. “I’m always learning new things in the Universities we hold at lunch. I don’t think I would be as equipped to be a Lead Artist if I had gone anywhere else.”


Art from our first title Samurai Siege, and (above) art from our second game Rival Kingdoms.

Vicki found agency through working in a small team and set the artistic vision for one of our most promising new titles.


Johnathan went from Games Analyst to Game Lead in two years.

Johnathan began his career at Space Ape as a Games Analyst, keeping his finger on the pulse of trends in the market.

“What’s really impressed me about Space Ape is their willingness to give people the opportunity to prove themselves in new roles. The training budget also allowed me to get the resources I needed to develop my skills. There is a strong culture of promoting from within and it’s a true meritocracy.”

Fast-forward two years and he’s now the Product Owner of one of our most successful titles.


Johnathan used his training budget to develop some of the skills required to become a PO.

“When I joined Space Ape having changed career, I never imagined I’d be running a game team just two years later! If you’re passionate and productive, they will make sure you get the opportunity to put your new skills into practice.”

I can think of a dozen other examples off the top of my head, from Alex and Ioannis who journeyed from QA to Product Owners, to Raul and Keedoong who started as CS agents and now head up entire departments in CS and Localisation.

From Pro-Gamer to Product Owner, George and his team are now getting ready to soft-launch his dream title, which was actually born out of an Ape Space game jam.

“As long as you have a long-term vision and the traits that embody the company culture, your goals will come to fruition,” he says.


For more info or to get involved with our Varsity Program:

I’ve watched my colleagues grow into various roles and thrive. I feel incredibly lucky to work in an environment that allows for, and encourages that kind of growth. I’m personally excited about using the talent we’ve fostered in-house to reach, build and hopefully inspire the talent waiting to be tapped in the wider community.

Fastlane: a growth engine fueled with ads

How holistic experimentation with ad monetisation, amplified by smart UA, took Fastlane from 170,000 to 700,000 DAU in 4 months. And growing.

  • Fastlane has reached 16M installs, is approaching a $30M run rate, and is on an explosive growth trajectory 10 months after launch
  • Fastlane is an evolved arcade shooter game available for free on iOS and Android phones
  • From $5,000/day to $45,000/day from ads in 4 months: the lessons learnt from our holistic iterations and our partnership with Unity Ads
  • We are setting a new benchmark for ads at $0.13 ad arpdau in the US
  • Our ad LTV – lifetime value – is now based on true ad performance, for greater accuracy
  • We multiplied our User Acquisition budgets by 5x with our lean team of two, and we are profitable within a month at a $0.52 direct CPI


16M installs, approaching a $30M run rate, 700k DAU, and on an explosive growth trajectory.

Fastlane was developed in 6 months by a team of 8 people. The team’s thesis was that there was a gap in the market between hyper-casual and midcore titles: a gap where casual, addictive gameplay could meet $0.25+ arpdau in Tier 1 geos, marrying IAP and ads while maintaining good retention (12% d28) and attracting more than 100,000 new users daily.

We feel we’ve built a replicable growth engine with Fastlane. Better – we have improved our ad monetisation stack and our understanding of ad LTV – lifetime value – and forged long-term partnerships that will shape our future strategies.

(Fastlane’s daily active user base and revenue have been growing week on week at an explosive rate since November 2017 – and the game is more profitable than ever)

Fastlane’s stats by mid-March 2018, 10 months after launch:

  • 16M installs, approaching a $30M run rate
  • 700,000 DAU (up from 170k in November 2017)
  • 2.5M+ daily video views
  • $80,000 daily bookings (IAP + ads), with highs approaching $100,000/day
  • $45,000 daily bookings from ads alone (up from $5k/day in November 2017)

This article sums up the main learnings from our ad monetisation implementation and partnership, which significantly increased our LTV. It also explains how global UA with key partners amplified its impact and led us, in 4 months, to profitably:

  • 4x DAU
  • 4.5x revenue
  • 5x marketing spend while more profitable than before

Fastlane: Road to Revenge, an evolved mobile arcade shooter

Fastlane was launched in mid-2017, a period of low-risk growth and calculated bets for the studio. Since then, we have joined forces with Supercell and are committed to making our mark on the gaming ecosystem, defining a category hit and making a game that people will be talking about in 10 years’ time.

Despite not being our genre-defining game, Fastlane was, and is, a great learning ground for us in many aspects, including how to automate live-ops in a casual game, how to integrate third-party content from YouTubers to a Kasabian soundtrack, ad monetization and user acquisition.

I’m pleased to be able to share some of these lessons in this blog post.

Inspired by classic arcade shooters from the ’80s like Spy Hunter and 1942, Fastlane: Road to Revenge is a one-handed retro arcade shooter with RPG elements, designed to be played in short bursts. Players chase high scores in multiplayer leagues and leaderboards, collect, upgrade and customise exotic cars and unlock devastating vehicle transformations!

The game presents a huge motley crew of characters–many played by some of YouTube’s biggest gaming personalities–as well as powerful vehicle upgrades, outrageous events and a fully customisable soundtrack with Apple Music integration.

From $5,000/day to $45,000/day from ads in 4 months: the lessons learnt

An iterative approach

The success we’ve had in the last few months on Fastlane is the result of 6 months of iteration and experimentation by the dev and marketing teams working closely together, co-located.

Fastlane was not our first attempt at in-game ads. We had included rewarded ad units in both Samurai Siege and Rival Kingdoms, but in both cases the features were added post-launch and not inserted into the core economy of the game, and therefore were not additive.

In Fastlane we committed from the beginning to design the economy specifically for ad monetisation. This meant being very clear that we would create value for both players and advertisers. That seems like common sense, but previously our approach to in-game ads was to focus solely on the player experience. Of course, no one is going to pay to advertise in your game if no players ever engage with the ads and ultimately install your advertisers’ (often a competitor’s) games. Once you start from the position that you want your players to tap on these ads, you approach ad unit design very differently. Rather than focusing on how to make the ad experience cause minimal disruption to your gameplay, you focus on how to ensure that once your players leave your game, they come back. This was a very different mindset and the fundamental reason why Fastlane’s ad implementation has been so successful.

It also meant the team implemented ads with the LTV components and player happiness as our top concerns, making sure we chased the big picture rather than increasing one parameter (views) while degrading others (retention, IAP) in the process.

That was all fine in theory, but initially it was merely a hypothesis, so we tested it in beta. Below is the outcome of a test we ran in beta where we forced interstitial ads after every race. The result was pretty clear: it vastly increased the number of interstitial views per day, as well as ad revenue, but user retention dropped from day 7. The overall result was negative, as expected, but it was a good exercise for us to go through as a studio, and each subsequent hypothesis was tested in a similar way.

We were not willing to grow our short-term revenue while hurting our long-term retention. We made no compromise, canned that idea and tested some more.

We A/B tested different approaches to ads with a strict data-driven approach, playing with caps, frequencies, placements, formats and providers to end up with the design you see today. 6 months after the game’s global release we eventually found a winning formula for that stage of the game’s life cycle: an optional rewarded ad at the end of almost every race, plus an interstitial that appears only if you neither make an IAP nor watch a rewarded ad.
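As a sketch, the post-race rule described above can be expressed as a simple decision function. This is a hypothetical illustration with names of our own choosing, not Fastlane’s actual code:

```python
def post_race_ads(made_iap: bool, watched_rewarded_ad: bool) -> dict:
    """Decide which ad placements apply at the end of a race.

    An optional rewarded ad is offered after almost every race; an
    interstitial is shown only to players who neither made an IAP nor
    watched a rewarded ad.
    """
    return {
        "offer_rewarded": True,  # the player chooses whether to watch
        "show_interstitial": not (made_iap or watched_rewarded_ad),
    }
```

A paying player or a rewarded-ad viewer never sees a forced interstitial, which is the part of the design that protects retention.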

We also entered into an exclusivity partnership with Unity Ads in December 2017, using their unified auction to monetise our entire inventory – a move that has proven pivotal in our growth journey.

This new setup increased ad arpdau to $0.13 net in the US and $0.18 in CN, while we more than tripled our scale to over 2.5M daily video views globally.

Here is our current ad performance per main geo, in terms of weekly views and CPM:


(US and CN are leading both in video ads actual CPM payouts and weekly impressions)

In addition to significantly increasing ad arpdau, we could see confidently in the data that the impact on retention and the IAP cannibalisation was more than offset by the increase in ad revenue. LTV improved by 40% overall.

Fastlane’s arpdau – average revenue per daily active user – in the US:


(while there was a 15% cannibalisation of IAP, the increase from ads led to a net increase of 40% in overall arpdau)
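The net effect is easy to check with back-of-the-envelope arithmetic. The pre-ads IAP arpdau below is an assumed figure for illustration; only the $0.13 ad arpdau and the 15%/40% figures come from our data:

```python
iap_before = 0.24               # assumed IAP-only arpdau before ads ($)
iap_after = iap_before * 0.85   # 15% IAP cannibalisation
ads = 0.13                      # measured US ad arpdau ($)
total_after = iap_after + ads
lift = total_after / iap_before - 1  # fractional change in overall arpdau
```

With these numbers the lift comes out at roughly 39%, in line with the ~40% improvement we observed.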

Our 4 ad design pillars

We had 4 pillars that guided our ad design methodology on Fastlane.

Pillar 1 – Ads must work for the player. Rewards need to feel desirable and part of the core loop, yet complementary to IAP bundles.

Pillar 2 – Ads must be displayed at the right moment in the player’s session so that they do not negatively impact retention. In other words, show the ads when the player would be ending their session anyway.

Pillar 3 – Ads must be part of the game’s world. They need to feel natural for the player. They need to add to the world.

Pillar 4 – The ad implementation must work for advertisers and drive installs. You should look at creating a placement where players will want to interact with the ads, in order to reach the highest CPMs possible, not necessarily the highest number of views. Culturally this was the hardest pillar to implement, as it is counter-intuitive to design to drive people to play competitors’ games!

(The 4 ad pillars that the Fastlane product team lived by during the ad implementation and experimentation)

Fastlane’s key findings

Here are our main findings specific to Fastlane:

» Rewarded ads > Interstitial ads for player retention and CPM

85% of our ad revenue is from rewarded ads and it does not hurt retention as the player has the choice.

» End of a race/session is the best placement for maximising CPM and player’s engagement with ads on Fastlane.

We want people interacting with the ads. In order to encourage that behaviour we found that a rewarded video at the end of a session generated 25% higher CPM than giving the option to watch an ad at the start of a session.

» Giving significant rewards to a player for watching an ad does increase the engagement rate

And it does not cannibalise IAP bundles if the economy is ready for it. But your rewards must be set at levels that players would not otherwise buy with IAPs.

» Be upfront and unapologetic.

Watching ads is a clear value exchange and part of the core aesthetic of the game. Not offering an option to pay to remove ads worked better for us. A Fastlane player should WANT ads. TV shows have been designed around ad breaks for years, and our game is too, as it’s the business model we chose from the start.

(Our ad implementation makes sure ads feel natural and enhance the brand and the game world)

A different approach than our previous titles

This methodology differed vastly from the approach we took for Samurai Siege and Rival Kingdoms, where the ad feature was added more than 6 months after each game’s release rather than designed with specific sinks and taps for ad-rewarded currencies in mind. This resulted in the rewards being either insignificant or cannibalising IAP in strategy games. Furthermore, our strategy games had an arpdau of $1-2 from IAP, so the bar for in-game ads to be impactful in that economy was very high.

It should also be noted this lesson does not just apply to ads. The same is true for viral and social hooks that need to be designed as part of the core loop to have an impact.

An ad LTV model based on actual ad performance to gain accuracy for user acquisition media buying.

Understanding ad LTV per campaign is arguably our biggest learning on ad monetization. It is so easy to make bad decisions by becoming fixated on one metric or another, when ultimately the only metric that matters is LTV.

LTV calculation has been pivotal since the mobile app industry moved to free-to-play.

As a marketer in today’s industry, chasing profitable ROI and growth via targeted paid campaigns is key. LTV is based on complex predictive models when it comes to IAP and developers have become quite sophisticated in predicting what a user will spend over their lifetime just by analysing their behaviour in the first few play sessions. Modelling LTV from IAPs at a user level is not trivial but it is well understood in 2018 and any game developer inherently has the data to do so because they need to associate a payment with a user account in order to deliver the relevant in-game item to the correct person.

However, when it comes to ad revenue, the ad LTV calculation is usually very basic. Historically we would crudely estimate how much revenue each individual user was generating from ads, and this was fine because it was a very small part of our business. Today, however, advertising is a $12M+/year business for us – only a small percentage of our overall revenue, but significant enough that we would invest in understanding it more and adapt our UA to it.

At launch, our crude ad LTV approach was to simply divide a country’s ad revenue by the number of ad views in that country, and then apportion the revenue per user depending on the number of ads they watched. In the case of Fastlane in the US, ad LTV for the first 2 weeks with this method was between $0.40 and $0.64 per user depending on the source of the user.
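That crude method amounts to a flat revenue-per-view rate, apportioned by each user’s view count. A minimal sketch, with hypothetical numbers and function names of our own:

```python
def crude_ad_ltv(country_revenue: float, country_views: int,
                 views_per_user: dict) -> dict:
    """Apportion a country's ad revenue to users by their share of views."""
    rate = country_revenue / country_views  # same $/view for every user
    return {user: views * rate for user, views in views_per_user.items()}
```

Every view is priced identically here, so two users with the same view count always get the same estimated LTV, regardless of whether they ever engaged with an ad.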


However, this misses the point that most advertisers are bidding on performance, not on views, and that all views in a given country/platform are not necessarily equal in terms of revenue. We have since moved away from that model and are now attributing revenue based on true ad performance, data we receive as part of our partnership with Unity. This gives us an ad LTV between $0.23 and $0.73 for Fastlane in the US per UA channel – a much bigger spread.
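With per-impression payout data from the ad network, attribution instead sums what each user’s views actually earned. Another hypothetical sketch, under the same assumed naming:

```python
def performance_ad_ltv(payouts_per_user: dict) -> dict:
    """Attribute ad revenue from the actual payout of each impression,
    so users who engage with high-value ads are worth more."""
    return {user: sum(payouts) for user, payouts in payouts_per_user.items()}
```

Two users with identical view counts can now have very different values, which is exactly why the per-channel spread widens.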


This was a eureka moment for the team, allowing us to further tailor our UA bids to the specific media that bring higher-value users – as we’ve been doing for years with IAP.

We’re now tracking our ad monetization performance not only by the number of views per user per placement, but also by the actions our users trigger after watching an ad, in order to maximise ad arpdau.

Next, we scaled up UA based on this data to deliver supersonic growth.

In addition to getting better-performing ad placements in-game and improving LTV, we were also getting better at optimising our UA campaigns.

During this time we managed to increase our oCVR% (installs/impressions ratio) thanks to smart ASO and creative iteration from our in-house team, plus playable ads from partners, which allowed us to reach much greater scale at a reduced direct CPI of $0.52 from October onwards ($0.34 including organics).

oCVR% on our main video UA channel per month and per geo:


(oCVR% increased by 35-75% in the space of a few months depending on geos, which allowed us to scale, and improve ROI)

All these improvements in oCVR% and ad monetization led to the growth we’re seeing today, multiplied by the rocket fuel that is smart UA spend.

(Direct UA ROI has improved. Weekly UA investment was multiplied by 5x with our key ad partners. And it shows no sign of slowing down any time soon – quite the opposite, actually)

We’re now spending north of $250k a week on marketing with our lean team of two, growing week on week, while remaining profitable within a month at a $0.52 CPI. We know more about ad LTV per campaign, we have higher LTV, lower CPIs, long-term partners and better creative iteration, and there is no turning back.

The lessons we will be taking forward to our future games will be:

  1. Design your game with ads in mind from the beginning if that’s the business model you choose. Use the pillars we used for Fastlane (or adapt them for the new game) and, in particular, design moments when you can effectively push your players to interact with an ad. Your CPM, and ultimately your revenue, will reflect that.
  2. A/B test and track all the impacts of any changes to the in-game ads implementation, and focus your KPIs on LTV improvements, not views.
  3. Attribute ad LTV precisely at a user level so you can target UA campaigns at the kind of people who will generate more revenue from interacting with ads.

Introducing Space Ape Varsity


Space Ape Varsity is our new program housing any projects we kick off in the collegiate space. Through mentorship and knowledge-sharing, our goal is to build relationships with educational institutions and their student bodies to inspire and support future leaders of our industry.

Our first project under the Varsity umbrella is Masterclasses.

Over the month of March we held a series of Masterclasses focusing on different disciplines within the games industry to give students an insider’s look at the space they will soon be entering. Through six tailor-made sessions students learned about a variety of disciplines ranging from Game Design and Development to UI/UX, Marketing and Community.

We partnered with London universities to deliver the lectures to their students in person, and also livestreamed the sessions on Twitch’s front page. Each class was followed by a Q&A with both the students present at the lecture and those watching online. More than 10 thousand people tuned into the Masterclasses live over three weeks.

Space Ape is an advocate for education and training in games, computer science, and the myriad specialties that make up our vibrant industry. We’ve already had so much positive feedback from students. We’ve just wrapped up our first round of classes for this semester and we’ll be looking to cover more topics and disciplines in the fall. Thank you to the University of Greenwich, University College London, the NUEL, NACE and Twitch for all their support this semester.

You can find a synopsis, complete slides and videos for each of the six Masterclasses below. For more info email

Creative Engineering 101

Tom Mejias

Tom Mejias is a Client Engineer at Space Ape Games and a whiz at prototyping new titles. During the hour Tom gave an overview of the games industry and the engineering roles that exist within it as well as some in depth guidance, tips and tricks for specializing in the role of Creative Engineer.

Tom’s slides

Watch Tom’s class


You can find all the Masterclasses here:

Designing for Competition

Andrew Munden

Andrew Munden leads Live-Ops at Space Ape and has been a competitive gamer since his teens. In his class, students learn about designing for a competitive environment and why features that seem ‘fun’ aren’t always good for the player.

Andrew’s slides



Game Design for Modern Times

Adam Kramarzewski

Adam Kramarzewski is a Game Designer at Space Ape with 11 years of experience in the industry and a new book just about to be published. He gives students an unfiltered insight into the production practices, responsibilities, and challenges facing Game Designers in the modern game development scene.

Adam’s slides

Watch Adam’s class




High-Performance Team Management

Pablo Calvo

Pablo Calvo heads up Social Media at Space Ape Games and has previously worked in esports as a team manager and coach. In this widely applicable lecture he discusses high-performance teams and the skills learned in competitive play that can be transferred across work and study.

Pablo’s slides

Watch Pablo’s class



UI/UX: Building Player Experiences

Adam Sullivan & Lissa Capeleto

Adam Sullivan heads up UI/UX at Space Ape. He and fellow UI artist Lissa Capeleto take students behind the visual language of games. In their class Adam and Lissa share their insights into how to build meaningful player experiences. UI and UX – much more than buttons or layout.

Adam and Lissa’s slides

Watch Adam and Lissa’s class here




Communities: Bridging the Gap

Deborah Mensah-Bonsu

Deborah Mensah-Bonsu heads up content at Space Ape. There’s no game without the player community: where do you find it, how do you build it, and how can you help it grow? Join her as she delves into the world of the players and shares her tips for empowering players, using content to connect and setting a community up to thrive.

Deborah’s slides

Watch Deborah’s class




Creative Engineering: The Source Of Your Imagination

In another instalment of our technical events series, today we hosted Creative Engineering: The Source of Your Imagination.

In this jam-packed event we heard from Tom Mejias, Bill Robinson and Matteo Vallone.

Tom Mejias spoke about how we decide which projects to start, and which architectures we use to get them off the ground. He described our fail fast philosophy on prototyping, and the razors with which we judge our prototypes.

His slides outlining his approaches and learnings are here:

Bill Robinson gave us an insight into how animation curves can be used for game balancing with his Multi-Curve editor. He also introduced UIAnimSequencer – a tool to quickly add juicy transitions and animations within Unity.

You can see his slides including his video demonstration here:

Matteo Vallone revealed how to make your game stand out and give it the best chance of success in the market. As a former Google Play Store Manager he gave valuable insight into making a big impact with your game launch. Now an early-stage game investor, he described how to maximise your game’s discoverability by building a beta community, engaging with app store teams and partnering with influencers.


We are always looking for talented game developers at Space Ape Games. If you’ve been inspired by hearing about how we work, have a look at our careers page.

A video of the whole event will be posted here shortly. Follow @SpaceApeGames for all the latest announcements.

Discover our games on the Space Ape Games site.

Deep Reinforcement Learning for Small Teams

On Thursday, October 12th, we hosted a tech event at our HQ to share some of the shiny new toys we’ve been building.

The office was jam-packed, so we’ve written up our talks for those that couldn’t make it. We’ve got more events in the pipeline, so be sure to follow us on Twitter (@SpaceApeGames) so you can get a heads-up before the next event fills up.

This is what we talked about this time:

  • Scalability & Big Data Challenges In Real Time Multiplayer Games, by Yan Cui and Tony Yang, Space Ape Games
  • Advanced Machine Learning For Small Teams, by Atiyo Ghosh and Dennis Waldron, Space Ape Games
  • Serverless: The Next Evolution of Cloud Computing, by Dr. Steve Turner, Amazon Web Services

Check out Tony and Yan’s post on creating a real-time multiplayer stack!

Dennis and I talked about our recent adventures with reinforcement learning (see video at the bottom of this post). We had an ambitious agenda:

  • How reinforcement learning can help our customers get what they want, when they want it.
  • An overview of DeepMind’s deep-Q learning algorithm, and how we adapted it to our use case.
  • How we used a serverless stack to minimise friction in building, maintaining and training the model. We are a small team, busy building new things: low-maintenance stacks are our friends.
  • How our choice of stack determined our choice of deep learning framework.

It’s a lot of material to cover in a short talk, but we managed to answer some questions at the pub afterwards. For those of you with questions who couldn’t make it there, leave a note in the comments 🙂

Tackling scalability challenges in realtime multiplayer games with Akka and AWS

We hosted a tech event at our HQ last week and welcomed over 200 attendees to join us for an evening of talks and networking. It was an absolute blast to meet so many talented people all at once! We plan to host a series of similar events in the future so keep coming back here or follow us on Twitter (@SpaceApeGames) to listen for announcements.

We had three talks on the night, covering a range of interesting topics:

  • Scalability & Big Data Challenges In Real Time Multiplayer Games, by Yan Cui and Tony Yang, Space Ape Games
  • Advanced Machine Learning For Small Teams, by Atiyo Ghosh and Dennis Waldron, Space Ape Games
  • Serverless: The Next Evolution of Cloud Computing, by Dr. Steve Turner, Amazon Web Services

The recording of Tony’s and my talk on building realtime multiplayer games is now online (see the end of the post), with the accompanying slides.

In this talk we discussed the market opportunity for realtime multiplayer games, the technical challenges one has to face, and the tradeoffs we need to keep in mind when making those decisions.

  • do you deploy infrastructure globally or run it from one (AWS) region?
  • do you build your own networking stack vs using an off-the-shelf solution?
  • do you go with a server authoritative approach or implement a lock-step system?
  • how do you write a highly performant multiplayer server on the JVM?
  • how do you load test this system?
  • and many more.

Over the next few weeks we’ll publish the rest of the talks, so don’t forget to check back here once in a while 😉

Building a Custom Terraform Provider for Wavefront

At Space Ape we’re increasingly turning to Golang for creating tools and utilities, for example – De-comming EC2 Instances With Serverless and Go. Inevitably we’ll need to interact with our metric provider – Wavefront. To this end, our colleague Louis has been working on a Go client for interacting with the Wavefront API, which allows us to query Wavefront and create resources such as Alerts and Dashboards. Up until now, we’ve been configuring these components by hand, which worries us – what happens if they disappear or are changed? How do we revert to a known good version or restore a lost Dashboard?

We were set to start creating our own tool for managing Wavefront resources, but as luck would have it Hashicorp released version 0.10.0 of Terraform which splits providers out from the main Terraform code base and allows you to load custom (not managed by Hashicorp) providers without recompiling Terraform.

So we set about creating a custom provider and have so far implemented Alerts, Alert Targets and Dashboards and fully intend to continue adding functionality to both the SDK and the provider in the future.

Now creating an Alert is as simple as:

resource "wavefront_alert" "a_terraform_managed_alert" {
 name                   = "Terraform Managed Alert"
 target                 = ""
 condition              = "ts()"
 display_expression     = "ts()"
 minutes                = 4
 resolve_after_minutes  = 4
 additional_information = "This alert is triggered because..."
 severity               = "WARN"

 tags = [
   "terraform",
 ]
}
You can find the latest released version, complete with binary here.

Creating our own provider for Wavefront means that we get all the benefits of Terraform (resource graphs, plans, state, versioning and locking) with just a little effort required from us. Hashicorp provides a number of helper methods which make writing and testing the provider relatively simple.

Another benefit of writing a provider is that we can use the import functionality of Terraform to import our existing resources into state. Hopefully Hashicorp will improve this to generate Terraform code soon; in the meantime, it shouldn’t be too difficult to script turning a state file (JSON) into Terraform’s HCL.

Using the Provider

Terraform is clever enough to go and fetch the officially supported providers for you when you run terraform init. Unfortunately, with custom providers it’s a little more complicated. You need to build the binary (we upload a compiled binary with each of our releases) and place it in ~/.terraform.d/plugins/darwin_amd64/ (or the equivalent for your system). Now when you run terraform init it will be able to find the plugin. After this the setup is pretty simple:

provider "wavefront" {
 address = ""
 token   = "wavefront_token"
}

You can export the address and token as environment variables (WAVEFRONT_ADDRESS and WAVEFRONT_TOKEN respectively) to avoid committing them to source control (we highly recommend you do this for the token!).

Writing your own Provider

If you fancy having a go at writing your own provider then this blog post by Hashicorp is a good way to get started. I’d also recommend taking a look at the Hashicorp supported providers and using them as a reference when writing your own.


How to load test a realtime multiplayer mobile game with AWS Lambda and Akka

Tencent’s King of Glory is one of the top grossing games worldwide in 2017 so far.

Over the last 12 months, we have seen a number of team-based multiplayer games hit the market as companies look to replicate the success of Tencent’s King of Glory (known as Arena of Valor in the west) which is one of the top grossing games in the world in 2017.

Even our partner Supercell has recently dipped into the genre with Brawl Stars, which offers a different take on the traditional MOBA (Multiplayer Online Battle Arena) formula.

Supercell’s Brawl Stars offers a different experience to the traditional MOBA format, it is built with mobile in mind and prefers simple controls & maps, as well as shorter matches.

Here at Space Ape Games, we have been exploring ideas for a competitive multiplayer game, which is still in prototype so I can’t talk about it here. However, I can talk about how we use AWS Lambda to load test our homegrown networking stack.

Why Lambda?

The traditional approach of using EC2 servers to drive the load testing has several problems:

  • slow to start : any sizeable load test requires many EC2 instances to generate the desired load. Since it costs you to keep these EC2 instances around, it’s likely that you’ll only spawn them when you need to run a load test, which means there’s a 10–15 min lead time before every test just to wait for the EC2 instances to be ready.
  • wastage : when the load test is short-lived (say, < 1 hour) you can incur a lot of wastage because EC2 instances are billed by the hour with a minimum charge of one hour (per-second billing is coming to non-Windows EC2 instances in Oct 2017, which would address this problem).
  • hard to deploy updates : to update the load test code itself (perhaps to introduce new behaviours for the bot players), you need to invest in infrastructure for updating the load test code on the running EC2 instances. Whilst this doesn’t have to be difficult (after all, you probably already have a similar infrastructure in place for your game servers), it’s yet another distraction that I would happily avoid.

AWS Lambda addresses all of these problems.

It does introduce its own limitations — especially the 5 min execution time limit. However, as I have written before, you can work around this limit by writing your Lambda function as a recursive function and taking advantage of container reuse to persist local state from one invocation to the next.
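As a rough sketch of that pattern (in Python for brevity; our actual functions run on the JVM, and the handler, payload fields and constants here are illustrative, not our real code), the function carries its state forward in the payload when it re-invokes itself; container reuse can additionally persist heavier local state between invocations:

```python
import json

SAFETY_MARGIN_MS = 10_000  # recurse while at least this much time remains

def handler(event, context):
    # Local state is carried from one invocation to the next via the payload.
    state = event.get("state", {"matches_played": 0})
    target = event.get("target_matches", 1)

    # Do units of work until we get close to the 5 min execution limit.
    while context.get_remaining_time_in_millis() > SAFETY_MARGIN_MS:
        state["matches_played"] += 1          # stand-in for one simulated match
        if state["matches_played"] >= target:
            return state                      # done, stop recursing

    # Out of time but not done: asynchronously re-invoke ourselves with the state.
    payload = json.dumps({"state": state, "target_matches": target})
    # boto3.client("lambda").invoke(FunctionName=context.function_name,
    #                               InvocationType="Event", Payload=payload)
    return state
```

The async ("Event") invocation type matters: it lets the current invocation return immediately instead of waiting for its successor.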

I’m a big fan of the work the Nordstrom guys have done with the serverless-artillery project. Unfortunately we’re not able to use it here because the game (the client app written in Unity3D) converses with the multiplayer server in a custom protocol via TCP, and in the future that conversation would happen over Reliable UDP too.


Our multiplayer server is written in Scala with the Akka framework. To help us optimize our implementation we collect lots of metrics about the Akka system as well as the JVM — GC, heap, CPU usage, memory usage, etc.

The Kamon framework is a big help here; it made quick work of getting us insight into the running of the Akka system: no. of actors, no. of messages, how much time a message spends waiting in the mailbox, how much time we spend processing each message, etc.

All of these data points are sent to Wavefront, via Telegraf.

We collect lots of metrics about the Akka system and the JVM.

We also have a standalone Akka-based load test client that can simulate many concurrent players. Each player is modelled as an actor, which simulates the behaviour of the Unity3D game client during a match:

  1. find a multiplayer match
  2. connect to the multiplayer server and authenticate itself
  3. play a 4 minute match, sending inputs 15 times a second
  4. report “client side” telemetries so we can collect the RTT (Round-Trip Time) as experienced by the client, and use these telemetries as a qualitative measure for our networking stack

In the load test client, we use the t-digest algorithm to minimise the memory footprint required to track the RTTs during a match. This allows us to simulate more concurrent players in a memory-constrained environment such as a Lambda function.

AWS Lambda + Akka

We can run the load test client inside a Java8 Lambda function and simulate 100 players per invocation. To simulate X concurrent players, we can create X/100 concurrent executions of the function via SNS (which has a one-invocation-per-message policy).

To create a gradual ramp-up in load, a recursive Orchestrator function gradually dials up the no. of concurrent executions by publishing more messages into SNS, each triggering a new recursive load test client function.
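The Orchestrator's ramp-up boils down to a simple calculation, sketched here in Python (the function name and constant are illustrative; in the real function each entry in the plan becomes that many sns.publish calls before it recurses):

```python
PLAYERS_PER_INVOCATION = 100  # each load test client invocation simulates 100 players

def ramp_up_plan(target_players, ramp_steps):
    """How many new SNS messages (one message = one new load test client
    function) the Orchestrator should publish at each step of a linear ramp."""
    total = -(-target_players // PLAYERS_PER_INVOCATION)  # ceiling division
    plan, published = [], 0
    for step in range(1, ramp_steps + 1):
        desired = total * step // ramp_steps   # cumulative target at this step
        plan.append(desired - published)       # publish only the delta
        published = desired
    return plan
```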

A LoadTest function triggered by API Gateway allows us to easily kick off a load test from a Jenkins pipeline.

Using the push-pull pattern (see this post for detail), we can track the progress of all the concurrent load test client functions. When they have all finished simulating their matches, we’ll kick off the Aggregator function.

The Aggregator function would collect the RTT metrics published by the load test clients and produce a report detailing the various percentile RTTs.

{
  "loadTestId": "62db5790-da53-4b49-b673-0f60e891252a",
  "status": "completed",
  "successful": 43,
  "failed": 2,
  "metrics": {
    "client-interval": {
      "count": 7430209,
      "min": 0,
      "max": 140,
      "percentile80": 70.000000193967,
      "percentile90": 70.00001559848,
      "percentile99": 71.000000496589,
      "percentile99point9": 80.000690623146,
      "percentile99point99": 86.123610689566
    },
    "RTT": {
      "count": 744339,
      "min": 70,
      "max": 320,
      "percentile80": 134.94761466541,
      "percentile90": 142.64720935496,
      "percentile99": 155.30086042676,
      "percentile99point9": 164.46137375328,
      "percentile99point99": 175.90215268392
    }
  }
}

If you would like to learn more about the technical challenges in developing successful mobile games, come join us for an evening of talks, drinks, food and networking in our office on the 12th Oct.

We’re running a free event in partnership with AWS where we will talk about:

  • the opportunities and challenges in building a realtime multiplayer game
  • data science and machine learning
  • serverless with AWS Lambda (by Dr Steve Turner from AWS)

Get your free ticket here!

The problems with DynamoDB Auto Scaling and how it might be improved

Here at Space Ape Games we developed some in-house tech to auto scale DynamoDB throughput and have used it successfully in production for a few years. It’s even integrated with our LiveOps tooling and scales up our DynamoDB tables according to the schedule of live events. This way, our tables are always provisioned just ahead of that inevitable spike in traffic at the start of an event.

Auto scaling DynamoDB is a common problem for AWS customers, I have personally implemented similar tech to deal with this problem at two previous companies. I’ve even applied the same technique to auto scale Kinesis streams too.

When AWS announced DynamoDB Auto Scaling we were excited. However, the blog post that accompanied the announcement illustrated two problems:

  • the reaction time to scaling up is slow (10–15 mins)
  • it did not scale sufficiently to maintain the 70% utilization level

Notice the high no. of throttled operations despite the scaling activity. If you were scaling the table manually, would you have settled for this result?

It looks as though the author’s test did not match the kind of workload that DynamoDB Auto Scaling is designed to accommodate.

In our case, we also have a high write-to-read ratio (typically around 1:1) because every action the players perform in a game changes their state in some way. So unfortunately we can’t use DAX as a get-out-of-jail free card.

How DynamoDB Auto Scaling works

When you modify the auto scaling settings on a table’s read or write throughput, it automatically creates/updates CloudWatch alarms for that table — four for writes and four for reads.

As you can see from the screenshot below, DynamoDB auto scaling uses CloudWatch alarms to trigger scaling actions. When the consumed capacity units breach the utilization level on the table (which defaults to 70%) for 5 consecutive minutes, it will scale up the corresponding provisioned capacity units.

Problems with the current system, and how it might be improved

From our own tests we found that DynamoDB’s lacklustre performance at scaling up is rooted in two problems:

  1. The CloudWatch alarms require 5 consecutive threshold breaches. When you take into account the latency in CloudWatch metrics (which are typically a few minutes behind), scaling actions can occur up to 10 mins after the specified utilization level is first breached. This reaction time is too slow.
  2. The new provisioned capacity units are calculated from consumed capacity units rather than the actual request count. The consumed capacity units are themselves constrained by the provisioned capacity units, even though it’s possible to temporarily exceed the provisioned capacity units with burst capacity. What this means is that once you’ve exhausted the saved burst capacity, the actual request count can start to outpace the consumed capacity units, and scaling up is not able to keep pace with the increase in actual request count. We will see the effect of this in the results from the control group later.

Based on these observations, we hypothesize that you can make two modifications to the system to improve its effectiveness:

  1. trigger scaling up after 1 threshold breach instead of 5, in line with the mantra of “scale up early, scale down slowly”.
  2. trigger scaling activity based on actual request count instead of consumed capacity units, and calculate the new provisioned capacity units using actual request count as well.
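The second modification's capacity calculation can be sketched in one line (a Python sketch; the function name and signature are ours, for illustration):

```python
import math

def new_provisioned_units(actual_requests_per_sec, utilization_level):
    """Hypothesis 2: size the table from the actual request count so scaling
    keeps pace even after the saved burst capacity has been exhausted."""
    return math.ceil(actual_requests_per_sec / utilization_level)
```

For example, 210 actual writes/s at a 70% utilization level yields 300 provisioned write units, regardless of how many of those requests were throttled.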

As part of this experiment, we also prototyped these changes (by hijacking the CloudWatch alarms) to demonstrate their improvement.

Testing Methodology

The most important thing for this test is a reliable and reproducible way of generating the desired traffic patterns.

To do that, we have a recursive function that will make BatchPut requests against the DynamoDB table under test every second. The items per second rate is calculated based on the elapsed time (t) in seconds so it gives us a lot of flexibility to shape the traffic pattern we want.

Since a Lambda function can only run for a max of 5 mins, when context.getRemainingTimeInMillis() is less than 2000 the function will recurse and pass the last recorded elapsed time (t) in the payload for the next invocation.
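For illustration, the Bell Curve schedule used in the tests below can be expressed as a rate function of the elapsed time t (a Python sketch; the real function also issues the BatchPut requests at this rate):

```python
def writes_per_second(t):
    """Bell Curve traffic shape as a function of elapsed seconds: 15 min
    steady at 25 writes/s, a 45 min linear climb to 300 writes/s, then a
    15 min linear drop back to 25 writes/s."""
    if t < 900:                                   # steady trough
        return 25.0
    if t < 3600:                                  # ramp up over 45 mins
        return 25.0 + (t - 900) * (300.0 - 25.0) / 2700
    if t < 4500:                                  # drop off over 15 mins
        return 300.0 - (t - 3600) * (300.0 - 25.0) / 900
    return 25.0                                   # back at the trough
```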

The result is a continuous, smooth traffic pattern you see below.

We tested with 2 traffic patterns we see regularly.

Bell Curve

This should be a familiar traffic pattern for most — a slow & steady buildup of traffic from the trough to the peak, followed by a faster drop off as users go to sleep. After a period of steady traffic throughout the night things start to pick up again the next day.

For many of us whose user base is concentrated in the North America region, the peak is usually around 3–4am UK time — the more reason we need DynamoDB Auto Scaling to do its job and not wake us up!

This traffic pattern is characterised by a) steady traffic at the trough, b) slow & steady build up towards the peak, c) fast drop off towards the trough, and repeat.

Top Heavy

This sudden burst of traffic is usually precipitated by an event — a marketing campaign, a promotion by the app store, or in our case a scheduled LiveOps event.

In most cases these events are predictable and we scale up DynamoDB tables ahead of time via our automated tooling. However, in the unlikely event of an unplanned burst of traffic (and it has happened to us a few times) a good auto scaling system should scale up quickly and aggressively to minimise the disruption to our players.

This pattern is characterised by a) a sharp climb in traffic, b) a slow & steady decline, c) staying at a steady level until the anomaly finishes and traffic returns to the Bell Curve again.

We tested these traffic patterns against several utilization level settings (default is 70%) to see how it handles them. We measured the performance of the system by:

  • the % of successful requests (ie. consumed capacity / request count)
  • the total no. of throttled requests during the test

These results will act as our control group.

We then tested the same traffic patterns against the 2 hypothetical auto scaling changes we proposed above.

To prototype the proposed changes we hijacked the CloudWatch alarms created by DynamoDB auto scaling using CloudWatch events.

When a PutMetricAlarm API call is made, our change_cw_alarm function is invoked and replaces the existing CloudWatch alarms with the relevant changes, e.g. setting the number of evaluation periods to 1 for hypothesis 1.

To avoid an invocation loop, the Lambda function will only make changes to the CloudWatch alarm if the EvaluationPeriods has not been changed to 1 already.

The change_cw_alarm function changed the breach threshold for the CloudWatch alarms to 1 min.

For hypothesis 2, we have to take over the responsibility of scaling up the table, as we need to calculate the new provisioned capacity units using a custom metric that tracks the actual request count. Hence the AlarmActions for the CloudWatch alarm are also overridden here.
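A minimal sketch of how the hijack might transform the intercepted alarm parameters (Python for brevity; the field values and custom metric name are illustrative, and the real function would pass the result on to put_metric_alarm):

```python
def change_cw_alarm_params(alarm_params, scaling_lambda_arn=None):
    """Transform the parameters of an intercepted PutMetricAlarm call.
    Hypothesis 1 only needs EvaluationPeriods; passing scaling_lambda_arn
    also applies hypothesis 2."""
    if alarm_params.get("EvaluationPeriods") == 1:
        return None                            # already modified: avoid a loop

    params = dict(alarm_params)
    params["EvaluationPeriods"] = 1            # hypothesis 1: breach once, then scale
    if scaling_lambda_arn is not None:         # hypothesis 2
        params["MetricName"] = "ActualRequestCount"    # custom metric (assumed name)
        params["AlarmActions"] = [scaling_lambda_arn]  # we do the scaling ourselves
    return params
```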

Results (Bell Curve)

The test is set up as follows:

  1. table starts off with 50 write capacity units
  2. traffic holds steady for 15 mins at 25 writes/s
  3. traffic then increases to peak level (300 writes/s) at a steady rate over the next 45 mins
  4. traffic drops off back to 25 writes/s at a steady rate over the next 15 mins
  5. traffic holds steady at 25 writes/s

All the units in the diagrams are SUM/min, which is how CloudWatch tracks ConsumedWriteCapacityUnits and WriteThrottleEvents, but I had to normalise ProvisionedWriteCapacityUnits (which is tracked as a per-second unit) to make them consistent.

Let’s start by seeing how the control group (vanilla DynamoDB auto scaling) performed at different utilization levels from 30% to 80%.

I’m not sure why the total consumed units and total request count metrics didn’t match exactly when the utilization is between 30% and 50%, but seeing as there were no throttled events I’m going to put that difference down to inaccuracies in CloudWatch.

I make several observations from these results:

  1. At 30%-50% utilization levels, write ops are never throttled — this is what we want to see in production.
  2. At 60% utilization level, the slow reaction time (problem 1) caused writes to be throttled early on as the system adjusted to the steady increase in load, but it was eventually able to adapt.
  3. At 70% and 80% utilization levels, things really fell apart. The growth in the actual request count outpaced the growth in consumed capacity units, and more and more write ops were throttled as the system failed to adapt to the new level of actual utilization (as opposed to the “allowed” utilization measured by consumed capacity units, ie problem 2).

Hypothesis 1 : scaling after 1 min breach

Some observations:

  1. At 30%-50% utilization levels, there’s no difference in performance.
  2. At 60% utilization level, the early throttled writes we saw in the control group are now addressed, as we decreased the reaction time of the system.
  3. At 70%-80% utilization levels, there is negligible difference in performance. This is to be expected, as the poor performance in the control group is caused by problem 2, so improving reaction time alone is unlikely to significantly improve performance in these cases.

Hypothesis 2 : scaling after 1 min breach on actual request count

Scaling on actual request count and using actual request count to calculate the new provisioned capacity units yields amazing results. There were no throttled events at 30%-70% utilization levels.

Even at 80% utilization level both the success rate and total no. of throttled events have improved significantly.

This is an acceptable level of performance for an auto scaling system, one that I’d be happy to use in a production environment. Although I’ll still err on the side of caution and choose a utilization level at or below 70%, to give the table enough headroom to deal with sudden spikes in traffic.

Results (Top Heavy)

The test is set up as follows:

  1. table starts off with 50 write capacity units
  2. traffic holds steady for 15 mins at 25 writes/s
  3. traffic then jumps to peak level (300 writes/s) at a steady rate over the next 5 mins
  4. traffic then decreases at a rate of 3 writes/s per minute

Once again, let’s start by looking at the performance of the control group (vanilla DynamoDB auto scaling) at various utilization levels.

Some observations from the results above:

  1. At 30%-60% utilization levels, most of the throttled writes can be attributed to the slow reaction time (problem 1). Once the table started to scale up the no. of throttled writes quickly decreased.
  2. At 70%-80% utilization levels, the system also didn’t scale up aggressively enough (problem 2). Hence we experienced throttled writes for much longer, resulting in a much worse performance overall.

Hypothesis 1 : scaling after 1 min breach

Some observations:

  1. Across the board the performance has improved, especially at the 30%-60% utilization levels.
  2. At 70%-80% utilization levels we’re still seeing the effect of problem 2 — not scaling up aggressively enough. As a result, there’s still a long tail to the throttled write ops.

Hypothesis 2 : scaling after 1 min breach on actual request count

Similar to what we observed with the Bell Curve traffic pattern, this implementation is significantly better at coping with sudden spikes in traffic at all utilization levels tested.

Even at 80% utilization level (which really doesn’t leave you with a lot of head room) an impressive 94% of write operations succeeded (compared with 73% recorded by the control group). Whilst there is still a significant no. of throttled events, it compares favourably against the 500k+ count recorded by the vanilla DynamoDB auto scaling.


I like DynamoDB, and I would like to use its auto scaling capability out of the box, but it just doesn’t quite match my expectations at the moment. I hope this post provides sufficient proof (as you can see from the data above) that there is plenty of room for improvement with relatively small changes needed from AWS.

Feel free to play around with the demo, all the code is available here.