Bloggity Blog

We love our work so much, we want to tell the world about it.

Elixir vs Ruby Showdown - Phoenix vs Rails

Written by Chris McCord

Phoenix vs Rails

This is the second post in our Elixir vs Ruby Showdown series. In this latest installment, we're exploring the performance of the Phoenix and Rails web frameworks when put up against the same task. Before we get into code samples and benchmark results, let's answer a few common questions about these kinds of tests:

tl;dr Phoenix showed 10.63x the throughput of Rails when performing the same task, with a fraction of the CPU load


Isn't this apples to oranges?

No. These tests are a direct comparison of our favorite aspects of Ruby and Rails with Elixir and Phoenix. Elixir has the promise to provide the things we love most about Ruby: productivity, metaprogramming, elegant APIs, and DSLs, but much faster, with a battle-tested concurrency and distribution model. The goals of this post are to explore how Elixir can match or exceed our favorite aspects of Ruby without sacrificing elegant APIs and the productive nature of the web frameworks we use.

Are benchmarks meaningful?

Benchmarks are only as meaningful as the work you do upfront to make your results as reliable as possible for the programs being tested. Even then, benchmarks only provide a "good idea" of performance. Moral of the story: never trust benchmarks, always measure yourself.
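In Ruby, for instance, measuring yourself takes only a few lines with the standard library's Benchmark module. A minimal sketch (the workloads here are illustrative, not the benchmarks from this post):

```ruby
require "benchmark"

# Time two equivalent workloads over many iterations so that
# per-iteration noise averages out.
n = 100_000

concat = Benchmark.realtime { n.times { "Hello, " + "world" + "!" } }
interp = Benchmark.realtime { n.times { "Hello, #{'world'}!" } }

puts format("concatenation: %.4fs", concat)
puts format("interpolation: %.4fs", interp)
```

Benchmark.realtime returns elapsed wall-clock seconds, which is usually what you want when comparing end-to-end throughput.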

What are we comparing?

Elixir Phoenix Framework

  • Phoenix 0.3.1
  • Cowboy webserver (single Elixir node)
  • Erlang 17.1

Ruby on Rails

  • Rails 4.0.4
  • Puma webserver (4 workers - 1 per cpu core)
  • MRI Ruby 2.1.0

We're measuring the throughput of an "equivalent" Phoenix and Rails app where specific tasks have been as isolated as possible to best compare features and performance. Here's what we are measuring:

  1. Match a request from the webserver and route it to a controller action, merging any named parameters from the route
  2. In the controller action, render a view based on the request Accept header, contained within a rendered parent layout
  3. Within the view, render a collection of partial views from data provided by the controller
  4. Views are rendered with a pure language templating engine (ERB, EEx)
  5. Return the response to the client

That's it. We're testing a standard route-matching, view-rendering stack that goes beyond a Hello World example. Both apps render a layout, view, and collection of partials to test real-world throughput of a general web framework task. No view caching was used, and request logging was disabled in both apps to prevent IO overhead. The wrk benchmarking tool was used for all tests, both against localhost and remotely against Heroku dynos to rule out wrk overhead on localhost. Enough talk, let's take a look at some code.



Phoenix

defmodule Benchmarker.Router do
  use Phoenix.Router
  alias Benchmarker.Controllers

  get "/:title", Controllers.Pages, :index, as: :page
end

Rails

Benchmarker::Application.routes.draw do
  root to: "pages#index"
  get "/:title", to: "pages#index", as: :page
end


Phoenix (request parameters can be pattern-matched directly in the second argument)

defmodule Benchmarker.Controllers.Pages do
  use Phoenix.Controller

  def index(conn, %{"title" => title}) do
    render conn, "index", title: title, members: [
      %{name: "Chris McCord"},
      %{name: "Matt Sears"},
      %{name: "David Stump"},
      %{name: "Ricardo Thompson"}
    ]
  end
end


Rails

class PagesController < ApplicationController

  def index
    @title = params[:title]
    @members = [
      {name: "Chris McCord"},
      {name: "Matt Sears"},
      {name: "David Stump"},
      {name: "Ricardo Thompson"}
    ]
    render "index"
  end
end


Phoenix (EEx)

    <h4>Team Members</h4>
      <%= for member <- @members do %>
          <%= render "bio.html", member: member %>
      <% end %>
<b>Name:</b> <%= @member.name %>

Rails (ERB)

    <h4>Team Members</h4>
      <% for member in @members do %>
          <%= render partial: "bio.html", locals: {member: member} %>
      <% end %>
<b>Name:</b> <%= member[:name] %>
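Because ERB and EEx are pure language templating engines, the same parent-template-plus-partials rendering can be reproduced outside either framework. A minimal sketch with Ruby's standard ERB library (template strings and names here are illustrative, not taken from either app):

```ruby
require "erb"

# Data the "controller" would provide.
members = [{name: "Chris McCord"}, {name: "Matt Sears"}]

# A partial template and a parent template, compiled by plain ERB.
bio   = ERB.new("<b>Name:</b> <%= member[:name] %>")
index = ERB.new(<<~TEMPLATE)
  <h4>Team Members</h4>
  <% members.each do |member| %>
    <%= bio.result_with_hash(member: member) %>
  <% end %>
TEMPLATE

# Render the collection of partials inside the parent template.
puts index.result_with_hash(members: members, bio: bio)
```

Each framework compiles its templates to plain language code like this ahead of time, which is why no template-parsing cost shows up per request.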

Localhost Results

Phoenix showed 10.63x the throughput, with a much more consistent standard deviation of latency. Elixir's concurrency model really shines in these results. A single Elixir node is able to use all the CPU and memory resources it requires, while our Puma webserver must start a Rails process for each CPU core to achieve concurrency.

Phoenix
  req/s: 12,120.00
  Stdev: 3.35ms
  Max latency: 43.30ms

Rails
  req/s: 1,140.53
  Stdev: 18.96ms
  Max latency: 159.43ms


$ mix do deps.get, compile
$ MIX_ENV=prod mix compile.protocols
$ MIX_ENV=prod elixir -pa _build/prod/consolidated -S mix phoenix.start
Running Elixir.Benchmarker.Router with Cowboy on port 4000

$ wrk -t4 -c100 -d10s --timeout 2000 ""
Running 10s test @
  4 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     8.31ms    3.53ms  43.30ms   79.38%
    Req/Sec     3.11k   376.89     4.73k    79.83%
  121202 requests in 10.00s, 254.29MB read
Requests/sec:  12120.94
Transfer/sec:     25.43MB


$ bundle
$ RACK_ENV=production bundle exec puma -w 4
[13057] Puma starting in cluster mode...
[13057] * Version 2.8.2 (ruby 2.1.0-p0), codename: Sir Edmund Percival Hillary
[13057] * Min threads: 0, max threads: 16
[13057] * Environment: production
[13057] * Process workers: 4
[13057] * Phased restart available
[13185] * Listening on tcp://

$ wrk -t4 -c100 -d10s --timeout 2000 ""
Running 10s test @
  4 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    21.67ms   18.96ms 159.43ms   85.53%
    Req/Sec   449.74    413.36     1.10k    63.82%
  11414 requests in 10.01s, 25.50MB read
Requests/sec:   1140.53
Transfer/sec:      2.55MB

Heroku Results (1 Dyno)

Phoenix showed 8.94x more throughput, again with a much more consistent standard deviation of latency and with 3.74x less CPU load. We ran out of available socket connections when trying to push the Phoenix dyno harder to match the CPU load seen by the Rails dyno. It's possible the Phoenix app could have more throughput available if our client network links had higher capacity. The standard deviation is particularly important here against a remote host. The Rails app struggled to maintain consistent response times, hitting 8+ second latency as a result. In real world terms, a Phoenix app should respond much more consistently under load than a Rails app.

Phoenix
  req/s: 2,691.03
  Stdev: 139.92ms
  Max latency: 1.39s

Rails
  req/s: 301.36
  Stdev: 2.06s
  Max latency: 8.36s

Phoenix (Cold)

$ ./wrk -t12 -c800 -d30S --timeout 2000 ""
Running 30s test @
  12 threads and 800 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   317.15ms  139.55ms 970.43ms   81.12%
    Req/Sec   231.43     66.07   382.00     63.92%
  83240 requests in 30.00s, 174.65MB read
  Socket errors: connect 0, read 1, write 0, timeout 0
Requests/sec:   2774.59
Transfer/sec:      5.82MB

Phoenix (Warm)

$ ./wrk -t12 -c800 -d180S --timeout 2000 ""
Running 3m test @
  12 threads and 800 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   318.52ms  139.92ms   1.39s    82.03%
    Req/Sec   224.42     57.23   368.00     68.50%
  484444 requests in 3.00m, 0.99GB read
  Socket errors: connect 0, read 9, write 0, timeout 0
Requests/sec:   2691.03
Transfer/sec:      5.65MB



Phoenix dyno memory metrics sampled during the warm run:

sample#memory_pgpgin=204996pages sample#memory_pgpgout=196379pages

Rails (Cold)

$ ./wrk -t12 -c800 -d30S --timeout 2000 ""
Running 30s test @
  12 threads and 800 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.85s     1.33s    5.75s    65.73%
    Req/Sec    22.68      7.18    61.00     69.71%
  8276 requests in 30.03s, 18.70MB read
Requests/sec:    275.64
Transfer/sec:    637.86KB

Rails (Warm)

$ ./wrk -t12 -c800 -d180S --timeout 2000 ""
Running 3m test @
  12 threads and 800 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     3.07s     2.06s    8.36s    70.39%
    Req/Sec    24.65      9.97    63.00     67.10%
  54256 requests in 3.00m, 122.50MB read
  Socket errors: connect 0, read 1, write 0, timeout 0
Requests/sec:    301.36
Transfer/sec:    696.77KB





Elixir provides the joy and productivity of Ruby with the concurrency and fault-tolerance of Erlang. We've shown we can have the best of both worlds with Elixir and I encourage you to get involved with Phoenix. There's much work to do for Phoenix to match the robust ecosystem of Rails, but we're just getting started and have very big plans this year.

Both applications are available on GitHub if you want to recreate the benchmarks. We would love to see results on different hardware, particularly hardware that can put greater load on the Phoenix app.

Shoutout to Jason Stiebs for his help getting the Heroku applications set up and remotely benchmarked!

Elixir vs Ruby Showdown - Part One

Written by Chris McCord

We've taken a huge interest in Elixir here at the Littlelines office this year. I gave a 3.5-hour intro to Elixir workshop at RailsConf in April, and have been busy building Phoenix, an Elixir web framework. Earlier this week, I put together Linguist, an Elixir internationalization library, and was shocked at how little code it required after taking a look at the Ruby implementation. By using Elixir's metaprogramming facilities, I was able to define function heads that pattern match on each I18n key. This approach simply generates a function per I18n key, whose function body returns the translation with any required interpolation. Let's see it in action.
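The approach, one generated function per key, has a rough Ruby analogue using define_method. This is a toy illustration of the idea, not how Linguist or the i18n gem is actually implemented:

```ruby
# Generate one lookup method per translation key at load time,
# so runtime lookup is a plain method dispatch rather than a
# hash walk.
class I18nToy
  TRANSLATIONS = {
    "foo"                => "bar",
    "flash.notice.alert" => "Alert!"
  }.freeze

  TRANSLATIONS.each do |key, value|
    method_name = "t_#{key.tr('.', '_')}"
    define_method(method_name) { value }
  end

  def t(key)
    public_send("t_#{key.tr('.', '_')}")
  end
end

puts I18nToy.new.t("flash.notice.alert")  #=> Alert!
```

Elixir goes one step further: because the generated function heads pattern match on the key string, the dispatch itself is handled by the BEAM's pattern matching engine at no extra cost.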

tl;dr The Elixir implementation is 73x faster than Ruby's i18n gem

edit: Joel Vanderwerf put together a Ruby implementation in response to this post that runs in 3.5s, making the Elixir implementation 2.18x as fast (gist).

defmodule I18n do
  use Linguist.Compiler, locales: [en: [
    foo: "bar",
    flash: [
      notice: [
        alert: "Alert!",
        hello: "hello %{first} %{last}"
      ]
    ]
  ]]
end

iex> I18n.t("en", "flash.notice.alert")
"Alert!"
iex> I18n.t("en", "flash.notice.hello", first: "chris", last: "mccord")
"hello chris mccord"

By calling use Linguist.Compiler, the above code would expand at compile time to:

defmodule I18n do
  def t("en", "foo") do
    t("en", "foo", [])
  end
  def t("en", "foo", bindings) do
    "bar"
  end

  def t("en", "flash.notice.alert") do
    t("en", "flash.notice.alert", [])
  end
  def t("en", "flash.notice.alert", bindings) do
    "Alert!"
  end

  def t("en", "flash.notice.hello") do
    t("en", "flash.notice.hello", [])
  end
  def t("en", "flash.notice.hello", bindings) do
    ((("hello " <> Dict.fetch!(bindings, :first)) <> " ") <> Dict.fetch!(bindings, :last)) <> ""
  end
end

Notice that in the last function definition, the interpolation is handled entirely by string concatenation instead of relying on regex splitting and replacement at runtime. This level of optimization isn't possible in our Ruby code.
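For contrast, here is what runtime interpolation looks like in Ruby, similar in spirit to the regex-based substitution the i18n gem performs on every call (a simplified sketch, not the gem's actual code):

```ruby
# Runtime interpolation: every call scans the template with a
# regex and substitutes bindings -- work the Elixir version
# moved to compile time.
def interpolate(template, bindings)
  template.gsub(/%\{(\w+)\}/) { bindings.fetch(Regexp.last_match(1).to_sym) }
end

puts interpolate("hello %{first} %{last}", first: "chris", last: "mccord")
#=> hello chris mccord
```

The regex scan and hash fetches happen on every translation lookup, which is exactly the per-call cost the compiled Elixir version avoids.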

Ruby's implementation requires a complex algorithm to split the I18n keys into a Hash to allow performant lookup at runtime. Since our Elixir implementation just produces function definitions, we let the Erlang virtual machine's highly optimized pattern matching engine take over to look up the I18n value. The result is strikingly less code for equivalent functionality. Not only do we get less code, we also get a 73x speed improvement over Ruby 2.1.0. Here are a few benchmarks we ran to see how the Elixir implementation compared to Ruby:


Elixir

defmodule Benchmark do

  defmodule I18n do
    use Linguist.Compiler, locales: [
      en: [
        foo: "bar",
        flash: [
          notice: [
            alert: "Alert!",
            hello: "hello %{first} %{last}",
            bye: "bye now, %{name}!"
          ]
        ],
        users: [
          title: "Users",
          profiles: [
            title: "Profiles"
          ]
        ]
      ]
    ]
  end

  def measure(func) do
    func
    |> :timer.tc
    |> elem(0)
    |> Kernel./(1_000_000)
  end

  def run do
    measure fn ->
      Enum.each 1..1_000_000, fn _ ->
        I18n.t("en", "foo")
        I18n.t("en", "users.profiles.title")
        I18n.t("en", "flash.notice.hello", first: "chris", last: "mccord")
        I18n.t("en", "flash.notice.hello", first: "john", last: "doe")
      end
    end
  end
end


Ruby

en:
  foo: "bar"
  flash:
    notice:
      alert: "Alert!"
      hello: "hello %{first} %{last}"
      bye: "bye now %{name}!"
  users:
    title: "Users"
    profiles:
      title: "Profiles"

class Benchmarker
  def self.run
    Benchmark.measure do
      1_000_000.times do |i|
        I18n.t("flash.notice.hello", first: "chris", last: "mccord")
        I18n.t("flash.notice.hello", first: "john", last: "doe")
      end
    end
  end
end

Benchmark Results *

  • Elixir (0.14.1) Average across 10 runs: 1.63s
  • Ruby (MRI 2.1.0) Average across 10 runs: 118.62s

That's a 73x speed improvement for Elixir over Ruby, with the same top-level API! With careful use of metaprogramming, we were able to produce a clean implementation with compile-time optimized lookup. Elixir provides metaprogramming abilities well beyond what we can dream up as Rubyists.

*never trust benchmark results, always measure yourself

5 Reasons Why Rubyists Will Love Swift

Written by Matt Sears

At Littlelines, we are very excited by Apple's announcement last week of their brand new programming language for building iOS and Mac apps called Swift. As developers, we get very curious when new languages are announced, and this was no exception. For the past week, I've been buried in books, articles, and screencasts on all things Swift. Along the way, I've recognized a few things about Swift that I really like, and they just so happen to be some of the same things I love about Ruby.

1. String Interpolation

Oh how we love our string interpolation in Ruby. Anything we can do to avoid concatenating strings together with plus (+) signs, we'll do.

name = "Matt"
puts "Hello there, #{name}."

#=> Hello there, Matt.

In Swift we can do the same thing by wrapping our variables or constants in parentheses and escaping them with a backslash, e.g. \(variable). Swift supports expressions inside the parentheses as well.

let name = "Matt"
println("Hello there, \(name).")

//=> Hello there, Matt.

2. Optional Binding & Implicit Returns

We've long enjoyed a similar pattern in Ruby: assigning a variable and testing it in the same if condition, so we know it contains a value before using it. This is a great way to maintain clean control flow in our code.

if current_user = find_current_user
  # current_user is set and truthy here
end

In Swift, we can do something very similar by extracting the value into a constant or variable in a single action.

if let currentUser = findCurrentUser() {
  // currentUser is unwrapped and ready to use here
}

3. Keyword Arguments

Keyword arguments were introduced in Ruby in version 2.0. Before the 2.0 release, we had to "emulate" keyword arguments by passing a hash of arguments like this:

def foo(options = {})
  options = {bar: 'bar'}.merge(options)
  puts "#{options[:bar]} #{options[:buz]}"
end

If you have been coding Ruby for a while, you probably saw something like this a lot. But it's not a very clean solution, and we can't easily set default values. So in version 2.0, keyword arguments were introduced, and now we can write something like this:

def say_hello(name: "Matt")
  puts "Hello there, #{name}"
end

Much better. This is a nice improvement, and we can do the same thing in Swift. In addition, we can ensure the arguments are of a specific type (more on that later). For example, we can force the arguments to be a String. If we pass anything other than a String, the compiler will flag an error.

func sayHello(name: String) {
  println("Hello there, \(name)")
}

sayHello("Matt")
//=> "Hello there, Matt"

4. Type Inference

Ruby is a dynamically typed language, so we can assign a variable anything we want: strings, integers, floats, it doesn't matter.

name = "Matt"
name = 23
name = 45.00

Swift is a type-safe language, so if your code expects a String and you pass it an integer, the compiler will complain. However, Swift doesn't require us to specify a type everywhere. With Swift's type inference, much of the work of specifying the type is done for us. For example, take the following snippet:

var name = "Matt"
name = 3.14159  //=> Compiler Error: Can't convert!

Since we assigned name a literal value of "Matt", Swift infers it to be of type String. If we try to assign a value to name that is not a String, the compiler will flag it.

5. Closures

As Rubyists, we love closures and use them a lot. Ruby provides several kinds of closures: Procs, Lambdas, and Blocks in particular. Let's take blocks for example:

def say_hello(&block)
  block.call
end

say_hello { puts "Hello there" }

#=> "Hello there"

This is just one example of closures in Ruby. We're simply passing a self-contained block of functionality that the closure can capture and store a reference to. This is known as closing over, hence the name "closures". In the Ruby example above, we're passing the puts statement to the closure, but puts isn't called until inside the block. In Swift, it looks very similar:

func sayHello(task: () -> ()) {
  task()
}

sayHello { println("Hello there.") }

//=> "Hello there."

Again, we're passing the println statement, but it won't be called until it's invoked inside the sayHello function.
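The same deferred execution is easy to see in plain Ruby: a captured block does nothing until it is explicitly called (an illustrative sketch):

```ruby
# The string-producing code is captured here, not executed.
greeting = proc { "Hello there." }

# Nothing has run yet; the body executes only on #call.
puts greeting.call
#=> Hello there.
```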

In addition, Swift also allows us to remove a lot of the syntactic noise (which we love in Ruby). Take, for example, the sort function:

var fruits = ["Orange", "Apple", "Banana"]

fruits.sort({(a: String, b: String) -> Bool in
    return a < b
})

//=> ["Apple", "Banana", "Orange"]

We can accomplish the same thing with the help of the type inference and implicit returns that we touched on earlier:

fruits.sort({a, b in a < b })

Swift also has implicit arguments, so we can accomplish the same sort with even less code:

fruits.sort{ $0 < $1 }
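Ruby offers a similar sliding scale of terseness for the same sort (using the fruits array from above; illustrative):

```ruby
fruits = ["Orange", "Apple", "Banana"]

# Fully explicit block arguments:
fruits.sort { |a, b| a <=> b }

# For a natural ordering, the block can be dropped entirely:
p fruits.sort
#=> ["Apple", "Banana", "Orange"]
```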

So there you have it. Hopefully I've illustrated how we can enjoy writing Swift the same way we enjoy writing Ruby. I encourage you to check out the official Swift Language Guide, available for free on iBooks. There are also quite a few videos on Swift available from Apple from this past week's WWDC14 event.