High Performance and Parallelism With the Feel of a Scripting Language

October 21, 2013 - 10 minute read

Sometimes you need a quick-and-dirty tool to get something done. You're not looking for a long-term solution; you just have a simple job to do. As a programmer, when you have that kind of job you naturally reach for a scripting language. But sometimes that throwaway tool also needs to do highly parallel operations with good performance characteristics. Until recently it seemed like you had to choose one or the other. And then came Go.

A Use Case

We've been working on an application that provides APIs for other apps. Those APIs need to be fast and to scale to many concurrent users. We needed a way to push a lot of traffic to this API while ensuring that it would access a wide swath of the data in the database. We didn't want the same request being made over and over, which would let the database end up in an unrealistic state where all the data was cached. There are a number of existing tools for this kind of performance testing, but watching some of them run didn't give us much confidence that they were really making requests in parallel like we needed. We also wanted to be able to run these tests easily from many different client computers at once, so that we could be sure the client machines and internet connections were not the bottleneck.

How Does Go Fit That Use Case?

Write-Once (Compile a Few Times) and Run Anywhere

One of the advantages of Go is that it is easy to cross-compile to other architectures and operating systems. This property made it easy to write a little application that we could run at the same time on Mac OS and Linux. Just like a scripting language, it was write-once and run anywhere. Of course we had to compile it for each of the different operating systems, but that is incredibly easy with Go. Unlike most scripting languages, once a Go binary is compiled for an OS, nothing else needs to be installed to run it. There's no management of different versions or libraries. A Go binary is entirely self-contained: no Go runtime needs to be installed for the application to run, and all of the dependencies are statically linked in. Simply copy the binary to the appropriate machine and execute it. You can't get much simpler than that.

$ brew install go --cross-compile-common
$ GOOS=linux go build myapp.go
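
For example, building binaries for both of the platforms we were targeting looks something like this (the output names are just illustrative):

$ GOOS=darwin GOARCH=amd64 go build -o myapp-darwin myapp.go
$ GOOS=linux GOARCH=amd64 go build -o myapp-linux myapp.go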

Libraries for All The Things

Go ships with a large number of good libraries as standard. These include support for writing HTTP clients and servers, for accessing databases (although the drivers themselves are not included), for parsing command line arguments, for encoding and decoding JSON, for cryptography, and for regular expressions. Basically, it includes a lot of the libraries you need for creating applications, whether it's something you want to maintain forever or a throwaway app.

// Define a boolean command line flag -h.
flag.BoolVar(&help, "h", false, "help")
// Make an HTTP GET request.
resp, err := http.Get("http://example.com/")
// Decode a JSON response body into a struct.
var exampleResp MyJsonResponse
decoder := json.NewDecoder(resp.Body)
err = decoder.Decode(&exampleResp)
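
To make that concrete, here's a minimal, self-contained sketch tying those pieces together. The URL and the response shape (MyJsonResponse) are made up for illustration:

package main
import (
  "encoding/json"
  "flag"
  "fmt"
  "net/http"
)
// MyJsonResponse is a hypothetical response shape, just for illustration.
type MyJsonResponse struct {
  Message string `json:"message"`
}
func main() {
  var url string
  flag.StringVar(&url, "u", "http://example.com/", "url to fetch")
  flag.Parse()
  resp, err := http.Get(url)
  if err != nil {
    fmt.Println("request failed:", err)
    return
  }
  defer resp.Body.Close()
  var exampleResp MyJsonResponse
  decoder := json.NewDecoder(resp.Body)
  if err := decoder.Decode(&exampleResp); err != nil {
    fmt.Println("decode failed:", err)
    return
  }
  fmt.Println("got:", exampleResp.Message)
}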

Concurrent Design and Parallel Execution

Goroutines allow a program to execute a function concurrently with other running code. Channels allow for different goroutines to communicate by passing messages to each other. Those two things together allow for a simple means of structuring code with a concurrent design.

ch := make(chan int)
// Consume ints from the channel in a separate goroutine.
go func() {
  for {
    val := <-ch
    fmt.Printf("Got an int: %v\n", val)
  }
}()
// Send a couple of values to the consumer.
ch <- 1
ch <- 2

In addition to easy mechanisms for implementing a concurrent design, your program also needs to be able to do actual work in parallel. Go can run many different goroutines in parallel, and a simple function call gives you control over how many execute at the same time.

runtime.GOMAXPROCS(25)
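
Here's a toy example (not from our load tester) that shows several goroutines doing CPU-bound work at once, assuming a machine with at least a few cores:

package main
import (
  "fmt"
  "runtime"
  "sync"
)
func main() {
  runtime.GOMAXPROCS(4) // allow up to 4 goroutines to execute simultaneously
  var wg sync.WaitGroup
  for i := 0; i < 4; i++ {
    wg.Add(1)
    go func(n int) {
      defer wg.Done()
      total := 0
      // Burn some CPU so the parallelism is observable.
      for j := 0; j < 100000000; j++ {
        total += j % (n + 1)
      }
      fmt.Printf("worker %v done: %v\n", n, total)
    }(i)
  }
  wg.Wait()
}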

Put The Pieces Together

Bringing together those libraries and a concurrent design allows us to easily create a program that meets our needs for testing these APIs.

Below is a simple application that makes GET requests to a specific URL. It lets you specify the URL, the number of requests to make, and how many to run concurrently. It uses many of the libraries mentioned above for handling HTTP, parsing command line arguments, calculating the duration of requests, and so on. It also uses goroutines to make multiple simultaneous requests, with a channel to communicate the results back to the main program.

package main
import (
  "flag"
  "fmt"
  "io/ioutil"
  "net/http"
  "runtime"
  "sync"
  "time"
)
var help bool
var count int
var concurrent int
var url string
var client *http.Client
func init() {
  client = &http.Client{}
  flag.BoolVar(&help, "h", false, "help")
  flag.IntVar(&count, "n", 1000, "number of requests")
  flag.IntVar(&concurrent, "c", runtime.NumCPU() + 1, "number of concurrent requests")
  flag.StringVar(&url, "u", "http://127.0.0.1:5000/", "url")
  flag.Parse()
}
func main() {
  if help {
    flag.Usage()
    return
  }
  fmt.Printf("Concurrent: %v\n", concurrent)
  runtime.GOMAXPROCS(concurrent + 2) // room for the dispatcher and result-handling goroutines as well
  runChan := make(chan int, concurrent)
  resultChan := make(chan Result)
  var wg sync.WaitGroup
  success_cnt := 0
  failure_cnt := 0
  var durations []time.Duration
  var min_dur time.Duration
  var max_dur time.Duration
  // Run the stuff
  dur := duration(func() {
    // setup to handle responses
    go func() {
      for {
        r := <-resultChan
        durations = append(durations, r.Duration)
        min_dur = min(min_dur, r.Duration)
        max_dur = max(max_dur, r.Duration)
        // 200s and 300s are success in HTTP
        if r.StatusCode < 400 {
          success_cnt += 1
        } else {
          fmt.Printf("Error: %v; %v\n", r.StatusCode, r.ErrOrBody())
          failure_cnt += 1
        }
        wg.Done()
      }
    }()
    // setup to handle running requests
    wg.Add(count)
    go func() {
      for i := 0; i < count; i++ {
        <-runChan
        fmt.Printf(".")
        go func() {
          resultChan <- Execute()
          runChan <- 1
        }()
      }
    }()
    // tell N number of requests to run, but this limits the concurrency
    for i := 0; i < concurrent; i++ {
      runChan <- 1
    }
    wg.Wait()
  })
  fmt.Printf("\n")
  fmt.Printf("Success: %v\nFailure: %v\n", success_cnt, failure_cnt)
  fmt.Printf("Min: %v\nMax: %v\n", min_dur, max_dur)
  fmt.Printf("Mean: %v\n", avg(durations))
  fmt.Printf("Elapsed time: %v\n", dur.Seconds())
}
func avg(durs []time.Duration) time.Duration {
  total := float64(0)
  for _, d := range durs {
    total += d.Seconds()
  }
  return time.Duration((total / float64(len(durs))) * float64(time.Second))
}
func min(a time.Duration, b time.Duration) time.Duration {
  // A zero value for a means "not yet set", so b wins in that case.
  if a != 0 && a < b {
    return a
  }
  return b
}
func max(a time.Duration, b time.Duration) time.Duration {
  if a > b {
    return a
  }
  return b
}
func Execute() Result {
  var resp *http.Response
  var err error
  dur := duration(func() {
    resp, err = http.Get(url)
  })
  if err != nil {
    return Result{dur, -1, err, ""}
  }
  defer resp.Body.Close()
  var body string
  if b, err := ioutil.ReadAll(resp.Body); err == nil {
    body = string(b)
  } else {
    body = ""
  }
  return Result{dur, resp.StatusCode, nil, body}
}
type Result struct {
  Duration time.Duration
  StatusCode int
  Err error
  Body string
}
func (r *Result) ErrOrBody() string {
  if r.Err != nil {
    return r.Err.Error()
  }
  return r.Body
}
func duration(f func()) time.Duration {
  start := time.Now()
  f()
  return time.Since(start)
}
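
Running it looks something like this, assuming the source is saved as main.go:

$ go build -o loadtest main.go
$ ./loadtest -u http://127.0.0.1:5000/ -n 1000 -c 50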

The app we wrote started out a lot like this: easy and straightforward. As we needed to add more tests, we started refactoring out types to separate the core of the load testing and timing calculations from the actual requests being run. Go provides function types, higher-order functions, and a lot of other abstractions that make those refactorings quite elegant.
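
As a small taste of that direction (the names here are hypothetical, not the code we ended up with), a named function type can stand in for "the request to run", and Execute above already satisfies it:

// RequestFunc is a hypothetical function type for any request to be tested.
type RequestFunc func() Result
// loadTest runs fn count times and collects the results, separating the
// test loop from the specific request being made, e.g. loadTest(100, Execute).
func loadTest(count int, fn RequestFunc) []Result {
  results := make([]Result, 0, count)
  for i := 0; i < count; i++ {
    results = append(results, fn())
  }
  return results
}

But that's for a different post...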