Deep Reinforcement Learning for Small Teams

On Thursday, October 12th, we hosted a tech event at our HQ to share some of the shiny new toys we’ve been building.

The office was jam-packed, so we’ve written up our talks for those that couldn’t make it. We’ve got more events in the pipeline, so be sure to follow us on Twitter (@SpaceApeGames) so you can get a heads-up before the next event fills up.

This is what we talked about this time:

  • Scalability & Big Data Challenges In Real Time Multiplayer Games, by Yan Cui and Tony Yang, Space Ape Games
  • Advanced Machine Learning For Small Teams, by Atiyo Ghosh and Dennis Waldron, Space Ape Games
  • Serverless: The Next Evolution of Cloud Computing, by Dr. Steve Turner, Amazon Web Services

Check out Tony and Yan’s post on creating a real-time multiplayer stack!

Dennis and I talked about our recent adventures with reinforcement learning (see video at the bottom of this post). We had an ambitious agenda:

  • How reinforcement learning can help our customers get what they want, when they want it.
  • An overview of Deep Mind’s deep-Q learning algorithm, and how we adapted it to our use case.
  • How we used a serverless stack to minimise friction in building, maintaining and training the model. We are a small team, busy building new things: low maintenance stacks are our friends.
  • How our choice of stack determined our choice of deep learning framework.

It’s a lot of material to cover in a short talk, but we managed to answer some questions at the pub afterwards. For those of you with questions who couldn’t make it there, leave a note in the comments 🙂

Tackling scalability challenges in realtime multiplayer games with Akka and AWS

We hosted a tech event at our HQ last week and welcomed over 200 attendees to join us for an evening of talks and networking. It was an absolute blast to meet so many talented people all at once! We plan to host a series of similar events in the future, so keep coming back here or follow us on Twitter (@SpaceApeGames) to hear about upcoming events.

We had three talks on the night, covering a range of interesting topics:

  • Scalability & Big Data Challenges In Real Time Multiplayer Games, by Yan Cui and Tony Yang, Space Ape Games
  • Advanced Machine Learning For Small Teams, by Atiyo Ghosh and Dennis Waldron, Space Ape Games
  • Serverless: The Next Evolution of Cloud Computing, by Dr. Steve Turner, Amazon Web Services

The recording of the talk Tony and I gave on building realtime multiplayer games is now online (see the end of the post), along with the accompanying slides.

In this talk we discussed the market opportunity for realtime multiplayer games, the technical challenges you have to face, and the tradeoffs to keep in mind when making key decisions:

  • do you deploy your infrastructure globally, or run it from a single (AWS) region?
  • do you build your own networking stack, or use an off-the-shelf solution?
  • do you go with a server authoritative approach or implement a lock-step system?
  • how do you write a highly performant multiplayer server on the JVM?
  • how do you load test this system?
  • and many more.

Over the next few weeks we’ll publish the rest of the talks, so don’t forget to check back here once in a while 😉

How to load test a realtime multiplayer mobile game with AWS Lambda and Akka

Over the last 12 months, we have seen a number of team-based multiplayer games hit the market as companies look to replicate the success of Tencent’s King of Glory (known as Arena of Valor in the West), one of the top grossing games in the world in 2017.

Even our partner Supercell has recently dipped into the genre with Brawl Stars, which offers a different take on the traditional MOBA (Multiplayer-Online-Battle-Arena) formula: built with mobile in mind, it prefers simple controls & maps, as well as shorter matches.

Here at Space Ape Games, we have been exploring ideas for a competitive multiplayer game, which is still at the prototype stage, so I can’t talk about it here. However, I can talk about how we use AWS Lambda to load test our homegrown networking stack.

Why Lambda?

The traditional approach of using EC2 servers to drive the load testing has several problems:

  • slow to start : any sizeable load test would require many EC2 instances to generate the desired load. Since it costs you to keep these EC2 instances around, it’s likely that you’ll only spawn them when you need to run a load test, which means there’s a 10–15 minute lead time before every test just to wait for the EC2 instances to be ready.
  • wastage : when the load test is short-lived (say, < 1 hour) you can incur a lot of wastage because EC2 instances are billed by the hour with a minimum charge for one hour (per-second billing is coming to non-Windows EC2 instances in Oct 2017, which would address this problem).
  • hard to deploy updates : to update the load test code itself (perhaps to introduce new behaviours to bot players), you need to invest in the infrastructure for updating the load test code on the running EC2 instances. This doesn’t have to be difficult (you probably already have a similar infrastructure in place for your game servers), but it’s yet another distraction that I would happily avoid.

AWS Lambda addresses all of these problems.

It does introduce its own limitations — especially the 5 min execution time limit. However, as I have written before, you can work around this limit by writing your Lambda function as a recursive function and taking advantage of container reuse to persist local state from one invocation to the next.
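To make the pattern concrete, here is a minimal sketch of that recursion in Go. It is purely illustrative: our real load test client runs on the Java 8 runtime (see below), and the state fields and 30-second safety margin here are invented. The idea is to do as much work as the time budget allows, then invoke yourself asynchronously with the remaining work as the payload.

package main

import (
	"context"
	"encoding/json"
	"os"
	"time"

	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	awslambda "github.com/aws/aws-sdk-go/service/lambda"
)

// State is whatever needs to survive from one invocation to the next.
type State struct {
	MatchesRemaining int `json:"matchesRemaining"`
}

func handler(ctx context.Context, state State) error {
	deadline, _ := ctx.Deadline()

	for state.MatchesRemaining > 0 {
		// ... simulate one match here ...
		state.MatchesRemaining--

		// Close to the execution time limit? Recurse: invoke ourselves
		// asynchronously with the remaining work, then exit cleanly.
		if time.Until(deadline) < 30*time.Second && state.MatchesRemaining > 0 {
			payload, _ := json.Marshal(state)
			svc := awslambda.New(session.Must(session.NewSession()))
			_, err := svc.Invoke(&awslambda.InvokeInput{
				FunctionName:   aws.String(os.Getenv("AWS_LAMBDA_FUNCTION_NAME")),
				InvocationType: aws.String("Event"), // async, fire-and-forget
				Payload:        payload,
			})
			return err
		}
	}
	return nil
}

func main() {
	lambda.Start(handler)
}

Because the recursive invocation is asynchronous, the current invocation returns immediately, and container reuse means anything cached outside the handler can carry over into the next invocation.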

I’m a big fan of the work the Nordstrom guys have done with the serverless-artillery project. Unfortunately we’re not able to use it here because the game (the client app written in Unity3D) converses with the multiplayer server in a custom protocol via TCP, and in the future that conversation would happen over Reliable UDP too.

Akka

Our multiplayer server is written in Scala with the Akka framework. To help us optimize our implementation we collect lots of metrics about the Akka system as well as the JVM — GC, heap, CPU usage, memory usage, etc.

The Kamon framework is a big help here; it made quick work of getting us insight into the running of the Akka system — no. of actors, no. of messages, how much time a message spends waiting in the mailbox, how much time we spend processing each message, etc.

All of these data points are sent to Wavefront, via Telegraf.

We also have a standalone Akka-based load test client that can simulate many concurrent players. Each player is modelled as an actor, which simulates the behaviour of the Unity3D game client during a match:

  1. find a multiplayer match
  2. connect to the multiplayer server and authenticate itself
  3. play a 4 minute match, sending inputs 15 times a second
  4. report “client side” telemetries so we can collect the RTT (Round-Trip Time) as experienced by the client, and use these telemetries as a measure of the quality of our networking stack

In the load test client, we use the t-digest algorithm to minimise the memory footprint required to track the RTTs during a match. This allows us to simulate more concurrent players in a memory-constrained environment such as a Lambda function.

AWS Lambda + Akka

We can run the load test client inside a Java 8 Lambda function and simulate 100 players per invocation. To simulate X concurrent players, we can create X/100 concurrent executions of the function via SNS (which has a one-invocation-per-message policy).

To create a gradual ramp up in load, a recursive Orchestrator function will gradually dial up the no. of concurrent executions by publishing more messages into SNS, each triggering a new recursive load test client function.
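Roughly, the fan-out looks like this. The sketch below is in Go purely for illustration (the real load test client functions run on the Java 8 runtime), and the topic environment variable and message format are made up; the real Orchestrator also recurses and ramps up in small increments rather than publishing everything at once.

package main

import (
	"fmt"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sns"
)

const playersPerInvocation = 100

// rampUp publishes one SNS message per load test client invocation we want;
// each message triggers one Lambda execution simulating 100 players.
func rampUp(totalPlayers int) error {
	svc := sns.New(session.Must(session.NewSession()))
	invocations := (totalPlayers + playersPerInvocation - 1) / playersPerInvocation

	for i := 0; i < invocations; i++ {
		msg := fmt.Sprintf(`{"players":%d,"batch":%d}`, playersPerInvocation, i)
		_, err := svc.Publish(&sns.PublishInput{
			TopicArn: aws.String(os.Getenv("LOAD_TEST_TOPIC_ARN")), // hypothetical env var
			Message:  aws.String(msg),
		})
		if err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// e.g. add another 1,000 concurrent simulated players
	if err := rampUp(1000); err != nil {
		panic(err)
	}
}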

A LoadTest function, triggered by API Gateway, allows us to easily kick off a load test from a Jenkins pipeline.

Using the push-pull pattern (see this post for detail), we can track the progress of all the concurrent load test client functions. When they have all finished simulating their matches, we’ll kick off the Aggregator function.

The Aggregator function collects the RTT metrics published by the load test clients and produces a report detailing the various RTT percentiles:

{
  "loadTestId": "62db5790-da53-4b49-b673-0f60e891252a",
  "status": "completed",
  "successful": 43,
  "failed": 2,
  "metrics": {    
    "client-interval": {      
      "count": 7430209,
      "min": 0,
      "max": 140,
      "percentile80": 70.000000193967,
      "percentile90": 70.00001559848,
      "percentile99": 71.000000496589,
      "percentile99point9": 80.000690623146,
      "percentile99point99": 86.123610689566
    },    
    "RTT": {      
      "count": 744339,
      "min": 70,
      "max": 320,
      "percentile80": 134.94761466541,
      "percentile90": 142.64720935496,
      "percentile99": 155.30086042676,
      "percentile99point9": 164.46137375328,
      "percentile99point99": 175.90215268392
    }
  }
}

If you would like to learn more about the technical challenges in developing successful mobile games, come join us for an evening of talks, drinks, food and networking in our office on the 12th Oct.

We’re running a free event in partnership with AWS where we will talk about:

  • the opportunities and challenges in building a realtime multiplayer game
  • data science and machine learning
  • serverless with AWS Lambda (by Dr Steve Turner from AWS)

Get your free ticket here!

De-comming EC2 Instances With Serverless and Go

Nothing is certain but Death and Taxes, goes the old adage, and EC2 instances are not exempt. You will have to pay for them, and at some point they will die. Truly progressive outfits embrace this fact and pick off unsuspecting instances Chaos-Monkey-style; others wait for that obituary email from Amazon. Either way, we all have to make allowances for those dearly departed instances, and tidy them up once they are gone.

This article describes one way of doing so automatically, using Lambda, the Serverless Framework, and Go.

Why?

Why Lambda? The obvious benefit is that there is no need to run and maintain a host to watch for dying instances. It also integrates nicely with Cloudwatch Events, which are the best way to get notified of them.

Why Serverless? The Serverless Framework is an open-source effort to provide a unified way of building entire serverless architectures. Originally designed specifically for Lambda, it is gaining increasing support for other providers too. Beyond just deploying Lambda functions, it allows you to manage all of the supporting infrastructural components (e.g. IAM, ELBs, S3 buckets) in one place, by supplementing your Lambda code with Cloudformation templates.

Why Go? Aside from it being one of our operational languages (along with Ruby), this is perhaps the hardest one to answer, as AWS don’t actually support it natively (yet). However some recent developments have made it more attractive: in Go 1.8, support was added for plugins. These are Go programs that are compiled as shared modules, to be consumed by other programs. The guys at eawsy, with their awesome aws-lambda-go-shim, immediately saw the potential this had for running Go code from a Python Lambda function. No more spawning a process to run a binary; instead have Python link the shared module and call it directly. Their Github page suggests that this is the second fastest way of executing a Lambda function, faster even than NodeJS, the serverless poster-boy!

It is this shim that we have used to build our EC2 Decommissioner, and we have also borrowed heavily from this idea (we found that we just needed a bit more flexibility, notably in pulling build-time secrets from Vault, which is outside the scope of this article).

How?

Cloudwatch Events are a relatively recent addition to the AWS ecosystem. They allow us to be notified of various events through one or more targets (e.g. Lambda functions, Kinesis streams).

Pertinently for this application, we can be told when an EC2 instance enters the terminated state, and the docs tell us the event JSON received by the target (in our case a Lambda function) will look like this:

{
   "id":"7bf73129-1428-4cd3-a780-95db273d1602",
   "detail-type":"EC2 Instance State-change Notification",
   "source":"aws.ec2",
   "account":"123456789012",
   "time":"2015-11-11T21:29:54Z",
   "region":"us-east-1",
   "resources":[
      "arn:aws:ec2:us-east-1:123456789012:instance/i-abcd1111"
   ],
   "detail":{
      "instance-id":"i-abcd1111",
      "state":"terminated"
   }
}

The detail is in the…detail, as they say. The rest is just preamble common to all Cloudwatch Events. But here we can see that we are told the instance-id, and the state to which it has transitioned.

So we just need to hook up a Lambda function to a specific type of Cloudwatch Event. This is exactly what the Serverless Framework makes easy for us.

Note: the easiest way to play along is to follow the excellent instructions detailed here; below, we configure the setup in a semi-manual fashion to illustrate what is going on. Either way you’ll need to install the Serverless CLI.

Create a directory to house the project (let’s say serverless-ec2). Then create a serverless.yml file with contents something like this:

service: serverless-ec2
package:
  artifact: handler.zip
provider:
  name: aws
  stage: production
  region: us-east-1
  runtime: python2.7
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "ec2:DescribeTags"
      Resource: "*"
functions:
  terminate:
    handler: handler.HandleTerminate
    events:
      - cloudwatchEvent:
          event:
            source:
              - "aws.ec2"
            detail-type:
              - "EC2 Instance State-change Notification"
            detail:
              state:
                - terminated

This config describes a service (analogous to a project) called serverless-ec2.

The package section specifies that the handler.zip file is the artifact containing Lambda function code that is uploaded to AWS. Ordinarily the framework takes care of the zipping for us, but we will be building our own artifact (more on that in a moment).

The provider section specifies some AWS information, along with an IAM Role that will be created, that allows our function to describe EC2 tags.

Finally, the functions section specifies a function, terminate, that is triggered by a Cloudwatch Event from the ‘aws.ec2’ source, with an additional filter applied to match only those events that have a ‘state’ of ‘terminated’ in the detail section of the event (see above). The function is handled by the handler.HandleTerminate function, which is the name of the Go function we will write.

So let’s go ahead and write it. First, run the following to grab the runtime dependency:

go get -u -d github.com/eawsy/aws-lambda-go-core/...

Then we are good to compose our function; create a handler.go with the following content:

package main

import (
	"log"

	"github.com/eawsy/aws-lambda-go-core/service/lambda/runtime"
)

// CloudwatchEvent represents an AWS Cloudwatch Event
type CloudwatchEvent struct {
	ID     string `json:"id"`
	Region string `json:"region"`
	// Detail holds the event-specific fields, e.g. "instance-id" and "state"
	Detail map[string]string `json:"detail"`
}

// HandleTerminate decomissions the terminated instance
func HandleTerminate(evt *CloudwatchEvent, ctx *runtime.Context) (interface{}, error) {
	log.Printf("instance %s has entered the '%s' state\n", evt.Detail["instance-id"], evt.Detail["state"])
	return nil, nil
}

Some points to note:

  • Your Handle* functions must reside in the main package, but you are free to organise the rest of your code as you wish. Here we have declared HandleTerminate, which is the function referenced in serverless.yml.
  • The github.com/eawsy/aws-lambda-go-core/service/lambda/runtime package provides access to a runtime.Context object that allows you the same access to the runtime context as the official Lambda runtimes (to access, for example, the AWS request ID or remaining execution time).
  • The return value will be JSON marshalled and sent back to the client, unless the error is non-nil, in which case the function is treated as having failed.

Perhaps the most important piece of information here is how the event data is passed into the function. In our case this is the Cloudwatch EC2 Event JSON as shown above, but it may take the form of any number of JSON events. All we need to know is that the event is automatically JSON unmarshalled into the first argument.

This is why we have defined a CloudwatchEvent struct, which will be populated neatly by the raw JSON being unmarshalled. It should be noted that there are already a number of predefined type definitions available here; we are just showing this for explanatory purposes.

The rest of the function is extremely simple: it just uses the standard library’s log package to log that the instance has been terminated (you should use this over fmt as it plays more nicely with Cloudwatch Logs).

With our code in place we can build the handler.zip that will be uploaded by the Serverless Framework. This is where things get a little complicated. Thankfully, the chaps at eawsy have provided us with a Docker image (with Go 1.8 and the tools used in the build process installed). They also provide a Makefile (with an alternative one here) that you should definitely use; again, what follows is just to demystify the process:

Run:

docker pull eawsy/aws-lambda-go-shim:latest

docker run --rm -it -v $GOPATH:/go -v $(pwd):/build -w /build eawsy/aws-lambda-go-shim go build -buildmode=plugin -ldflags='-s -w' -o handler.so

This builds our code as a Go plugin (handler.so) from within the provided Docker container. Next, run:

docker run --rm -it -v $GOPATH:/go -v $(pwd):/build -w /build eawsy/aws-lambda-go-shim pack handler handler.so handler.zip

This runs a custom ‘pack’ script that creates a zip archive (handler.zip) that includes our recently compiled handler.so along with the Python shim required for it to work on AWS. The very same handler.zip referenced in the serverless.yml above!

The final step then is to actually deploy the function, which is as simple as:

sls deploy

Once the Serverless tool has finished doing its thing, you should have a function that logs whenever an EC2 instance is terminated!

Clearly, you want to do more than just log the terminated instance. But the actual decommissioning is subjective. For instance, amongst other things, we remove the instance’s Route53 record, delete its Chef node/client, and remove any locks it might be holding in our Consul cluster. The point is that this is now just Go code – you can do with it whatever you wish.
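To give a flavour of what that can look like, here is a sketch of the Route53 part only (the Chef and Consul steps are omitted). The hosted zone, record name and record value are placeholders; in practice you would look the existing record up first, since Route53 needs the full record in order to delete it.

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/route53"
)

// removeRoute53Record deletes the A record that was created for the instance
// when it was provisioned. Called from HandleTerminate with values looked up
// from the instance's tags.
func removeRoute53Record(instanceID, hostedZoneID, recordName, recordValue string) error {
	svc := route53.New(session.Must(session.NewSession()))

	_, err := svc.ChangeResourceRecordSets(&route53.ChangeResourceRecordSetsInput{
		HostedZoneId: aws.String(hostedZoneID),
		ChangeBatch: &route53.ChangeBatch{
			Comment: aws.String(fmt.Sprintf("decommissioning %s", instanceID)),
			Changes: []*route53.Change{
				{
					Action: aws.String("DELETE"),
					ResourceRecordSet: &route53.ResourceRecordSet{
						Name: aws.String(recordName),
						Type: aws.String("A"),
						TTL:  aws.Int64(300),
						ResourceRecords: []*route53.ResourceRecord{
							{Value: aws.String(recordValue)},
						},
					},
				},
			},
		},
	})
	return err
}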

Note that if you require access to anything inside your VPC as part of the tidying-up process, you need to explicitly state the VPC and subnets/security groups in which Lambda functions will run. But don’t worry, the Serverless tool has you covered.

AWS Lambda – build yourself a URL shortener in 2 hours

An interesting requirement came up at work this week where we discussed potentially having to run our own URL Shortener because the Universal Links mechanism (in iOS 9 and above) requires a JSON manifest at

https://domain.com/apple-app-site-association

Since the OS doesn’t follow redirects, this manifest has to be hosted on the URL shortener’s root domain.

Owing to a limitation with our attribution partner, they’re currently not able to shorten links when you have Universal Links configured for your app. Whilst we could switch to another vendor, it would mean more work for our (already stretched) client devs, and we really like our partner’s support for attributions in links.

Which brings us back to the question

“should we build a URL shortener?”

swiftly followed by

“how hard can it be to build a scalable URL shortener in 2017?”

Well, turns out it wasn’t hard at all 

Lambda FTW

For this URL shortener we’ll need several things:

  1. a GET /{shortUrl} endpoint that will redirect you to the original URL
  2. a POST / endpoint that will accept an original URL and return the shortened URL
  3. an index.html page where someone can easily create short URLs
  4. a GET /apple-app-site-association endpoint that serves a static JSON response

all of which can be accomplished with API Gateway + Lambda.

Overall, this is the project structure I ended up with:

  • using the Serverless framework’s aws-nodejs template
  • each of the above endpoints has a corresponding handler function
  • the index.html file is in the static folder
  • the test cases are written in such a way that they can be used both as integration as well as acceptance tests
  • there’s a build.sh script which facilitates running
    • integration tests, eg ./build.sh int-test {env} {region} {aws_profile}
    • acceptance tests, eg ./build.sh acceptance-test {env} {region} {aws_profile}
    • deployment, eg ./build.sh deploy {env} {region} {aws_profile}

[Screenshot: ape-shortener project structure]

GET /apple-app-site-association endpoint

Seeing as this is a static JSON blob, it makes sense to precompute the HTTP response and return it every time.

[Screenshot: the apple-app-site-association handler]
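The project itself is built on the Serverless framework’s aws-nodejs template, so the real handler is Node.js; the sketch below shows the same idea in Go with a placeholder manifest: build the response once, outside the handler, and return it verbatim on every invocation.

package main

import (
	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// appSiteAssociation is a placeholder for the real Universal Links manifest.
const appSiteAssociation = `{"applinks":{"apps":[],"details":[]}}`

// precomputed once per container; every invocation returns the same response
var response = events.APIGatewayProxyResponse{
	StatusCode: 200,
	Headers:    map[string]string{"Content-Type": "application/json"},
	Body:       appSiteAssociation,
}

func handler(_ events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	return response, nil
}

func main() {
	lambda.Start(handler)
}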

POST / endpoint

For an algorithm to shorten URLs, you can find a very simple and elegant solution on StackOverflow. All you need is an auto-incremented ID, like the ones you normally get with an RDBMS.

However, I find DynamoDB a more appropriate DB choice here because:

  • it’s a managed service, so no infrastructure for me to worry about
  • OPEX over CAPEX, man!
  • I can scale reads & writes throughput elastically to match utilization level and handle any spikes in traffic

But DynamoDB has no concept of an auto-incremented ID, which the algorithm needs. Instead, you can use an atomic counter to simulate one (at the expense of an extra write-unit per request).

[Screenshot: simulating an auto-incremented ID with an atomic counter]

[Screenshot: the atomic counter item in the DynamoDB table]
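Here is a sketch of that shorten step, again in Go rather than the project’s Node.js, and with assumed table and attribute names: bump the counter with an UpdateItem ADD expression, then encode the new value (base62 in this sketch) to get the short path.

package main

import (
	"fmt"
	"strconv"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

const alphabet = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

// base62 turns the auto-incremented ID into a short, URL-friendly token.
func base62(n int64) string {
	if n == 0 {
		return "0"
	}
	s := ""
	for n > 0 {
		s = string(alphabet[n%62]) + s
		n /= 62
	}
	return s
}

// nextID bumps the atomic counter item and returns the new value.
func nextID(svc *dynamodb.DynamoDB, table string) (int64, error) {
	out, err := svc.UpdateItem(&dynamodb.UpdateItemInput{
		TableName:                 aws.String(table),
		Key:                       map[string]*dynamodb.AttributeValue{"id": {S: aws.String("counter")}},
		UpdateExpression:          aws.String("ADD #c :incr"),
		ExpressionAttributeNames:  map[string]*string{"#c": aws.String("counterValue")},
		ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{":incr": {N: aws.String("1")}},
		ReturnValues:              aws.String("UPDATED_NEW"),
	})
	if err != nil {
		return 0, err
	}
	return strconv.ParseInt(*out.Attributes["counterValue"].N, 10, 64)
}

func main() {
	svc := dynamodb.New(session.Must(session.NewSession()))
	id, err := nextID(svc, "ape-shortener") // hypothetical table name
	if err != nil {
		panic(err)
	}
	fmt.Println("short path:", base62(id))
}

The extra UpdateItem call on the counter item is the additional write-unit per request mentioned above.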

GET /{shortUrl} endpoint

Once we have the mapping in a DynamoDB table, the redirect endpoint is a simple matter of fetching the original URL and returning it as part of the Location header.

Oh, and don’t forget to return the appropriate HTTP status code, in this case a 308 Permanent Redirect.

[Screenshot: the redirect handler]
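Sketched in Go with assumed table and attribute names (shortUrl as the key, longUrl as the stored original), the redirect handler is little more than a GetItem and a Location header:

package main

import (
	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

var ddb = dynamodb.New(session.Must(session.NewSession()))

func redirect(req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	shortURL := req.PathParameters["shortUrl"]

	out, err := ddb.GetItem(&dynamodb.GetItemInput{
		TableName: aws.String("ape-shortener"), // hypothetical table name
		Key: map[string]*dynamodb.AttributeValue{
			"shortUrl": {S: aws.String(shortURL)},
		},
	})
	if err != nil {
		return events.APIGatewayProxyResponse{StatusCode: 500}, err
	}
	if len(out.Item) == 0 {
		return events.APIGatewayProxyResponse{StatusCode: 404}, nil
	}

	// 308 Permanent Redirect, with the original URL in the Location header
	return events.APIGatewayProxyResponse{
		StatusCode: 308,
		Headers:    map[string]string{"Location": *out.Item["longUrl"].S},
	}, nil
}

func main() {
	lambda.Start(redirect)
}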

 

GET / index page

Finally, for the index page, we’ll need to return some HTML instead (and a different content-type to go with the HTML).

I decided to put the HTML file in a static folder, which is loaded and cached the first time the function is invoked.

[Screenshot: the index page handler]
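A Go sketch of the same idea (the original handler is Node.js): read static/index.html on the first invocation, cache it in a package-level variable, and serve it with a text/html content type.

package main

import (
	"io/ioutil"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// cached HTML; empty until the first invocation in this container
var indexHTML string

func index(_ events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	if indexHTML == "" {
		// loaded once, then reused thanks to container reuse
		bytes, err := ioutil.ReadFile("static/index.html")
		if err != nil {
			return events.APIGatewayProxyResponse{StatusCode: 500}, err
		}
		indexHTML = string(bytes)
	}

	return events.APIGatewayProxyResponse{
		StatusCode: 200,
		Headers:    map[string]string{"Content-Type": "text/html"},
		Body:       indexHTML,
	}, nil
}

func main() {
	lambda.Start(index)
}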

 

Getting ready for production

Fortunately I have had plenty of practice getting Lambda functions to production readiness, and for this URL shortener we will need to:

  • configure auto-scaling parameters for the DynamoDB table (we have an internal system for managing the auto-scaling side of things)
  • turn on caching in API Gateway for the production stage

Future Improvements

If you put in the same URL multiple times you’ll get back different short-urls; one optimization (for storage and caching) would be to return the same short-url instead.

To accomplish this, you can:

  1. add a GSI to the DynamoDB table on the longUrl attribute to support efficient reverse lookups
  2. in the shortenUrl function, query the GSI to find existing short url(s)

I think it’s better to add a GSI than to create a new table here because it avoids having “transactions” that span across multiple tables.
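A sketch of that reverse lookup in Go, assuming a GSI named longUrl-index and the same (hypothetical) table and attribute names as before:

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

// findExistingShortURL queries the GSI on longUrl and returns the existing
// short URL if this long URL has been shortened before, or "" otherwise.
func findExistingShortURL(svc *dynamodb.DynamoDB, longURL string) (string, error) {
	out, err := svc.Query(&dynamodb.QueryInput{
		TableName:              aws.String("ape-shortener"), // hypothetical table name
		IndexName:              aws.String("longUrl-index"), // hypothetical GSI name
		KeyConditionExpression: aws.String("longUrl = :u"),
		ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{
			":u": {S: aws.String(longURL)},
		},
		Limit: aws.Int64(1),
	})
	if err != nil || len(out.Items) == 0 {
		return "", err
	}
	return *out.Items[0]["shortUrl"].S, nil
}

func main() {
	svc := dynamodb.New(session.Must(session.NewSession()))
	existing, _ := findExistingShortURL(svc, "https://example.com/some/long/path")
	fmt.Println("existing short url:", existing)
}

If the query returns an item, shortenUrl can hand back that short url instead of incrementing the counter and writing a new item.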

Useful Links