Benchmarking Go, Node, Python & PyPy

I was recently introduced to [[http://golang.org/|Go]], and it seems to offer some great features. I wrote a few simple benchmarks to evaluate Go against [[http://nodejs.org/|Node.js]] and [[https://www.python.org/|Python]]/[[http://pypy.org/|PyPy]] in terms of speed. The results initially surprised me, but a user comment prompted me to update Go, since a lot of optimizations have landed since version 1.0.2 (the current stable release is 1.2.2). The recommendation was well founded and greatly improved Go's times. Go was still beaten in a few instances, but I likely just need to learn the nuances of the language.

//–Old foreword below for historical accuracy–//

The results actually surprised me a bit, because there was no clear winner. Overall, I would have to call Python the fastest language: it took second place in nearly every test it did not win. It was also interesting that PyPy does very well with numeric work, but falls short in most other areas.

Even after these benchmarks, Go seems to be the most viable language in terms of concurrency. I love Node's [[http://strongloop.com/strongblog/node-js-event-loop/|event loop]], but your code quickly ends up as a tangled mess of anonymous functions. Python has a [[https://docs.python.org/2/library/multiprocessing.html|multiprocessing]] module that works quite well, but it is clumsy in a lot of cases. Go provides [[http://www.golangbootcamp.com/book/concurrency#sec-goroutines|goroutines]] and [[http://www.golangbootcamp.com/book/concurrency#sec-channels|channels]], which are an interesting take on concurrency that requires very little extra consideration during development.

//–End old foreword–//
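
To make the goroutines-and-channels point above concrete, here is a minimal sketch of the model (my own illustration, not part of the benchmarks below): two goroutines each sum half of a slice and hand their partial results back over a channel.
{{{ lang=go
package main

import "fmt"

// sum adds up a slice and sends the total over the channel.
func sum(nums []int, out chan<- int) {
    total := 0
    for _, n := range nums {
        total += n
    }
    out <- total // blocks until main is ready to receive
}

func main() {
    nums := []int{1, 2, 3, 4, 5, 6}
    out := make(chan int)

    // Fan out: each half of the slice is summed in its own goroutine.
    go sum(nums[:len(nums)/2], out)
    go sum(nums[len(nums)/2:], out)

    // Fan in: receive both partial sums; order does not matter.
    a, b := <-out, <-out
    fmt.Println(a + b) // 21
}
}}}
The runtime handles the scheduling; the channel is the only synchronization the code has to spell out, which is what makes the model feel so low-friction.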

[[[TOC]]]

==Go==
Make sure to compile and run this once (`go run ./go_benches.go`) before running the benchmarks, or the first test will be skewed.
{{{ lang=go
/* Go benchmarks to compare against Python and Node.js
*
* @author Dave Lasley, dave@dlasley.net
* @website https://blog.laslabs.com
* @file go_benches.go
*/

package main

import (
    "bytes"
    "flag"
    "fmt"
)

/* Test str concatenation, generic method */
func strConcat() string {
    var str string
    for i := 0; i < 100000; i++ {
        str += "A"
    }
    return str
}

/* Language specific recommended way of str concat */
func strConcatRecommended() string {
    var buffer bytes.Buffer
    for i := 0; i < 100000; i++ {
        buffer.WriteString("A")
    }
    return buffer.String()
}

/* Int math+reassignment */
func intBench() int {
    var test int
    for i := 0; i < 100000; i++ {
        test += i
    }
    return test
}

/* Array assignments */
func arrBench() [100][100]int {
    var test [100][100]int
    // Loop 1000 iterations of assignments to all elements
    for i := 0; i < 1000; i++ {
        for j := 0; j < 100; j++ {
            for k := 0; k < 100; k++ {
                test[j][k] = i + j + k
            }
        }
    }
    return test
}

/* Slice assignments */
func sliceBench() []int {
    test := make([]int, 100)
    // Loop 100,000 iters of 100 assignments of all elements
    for i := 0; i < 100000; i++ {
        for j := 0; j < 100; j++ {
            test[j] = j + i
        }
    }
    return test
}

/* Fib */
func fib(n int) int {
    if n < 2 {
        return n
    }
    return fib(n-2) + fib(n-1)
}

func main() {
    str := flag.Bool("str", false, "str concat test")
    buffer := flag.Bool("strrec", false, "buffer write test")
    int_ := flag.Bool("int", false, "int test")
    arr := flag.Bool("arr", false, "array test")
    slice := flag.Bool("slice", false, "slice test")
    fib_ := flag.Bool("fib", false, "fib test")
    flag.Parse()

    if *str {
        fmt.Println(strConcat())
    }
    if *buffer {
        fmt.Println(strConcatRecommended())
    }
    if *int_ {
        fmt.Println(intBench())
    }
    if *arr {
        fmt.Println(arrBench())
    }
    if *slice {
        fmt.Println(sliceBench())
    }
    if *fib_ {
        fmt.Println(fib(40))
    }
}
}}}

==Node.js==
{{{ lang=javascript
#!/usr/bin/env node
/* Node.js benchmarks to compare against Go and Python
 *
 * @author Dave Lasley, dave@dlasley.net
 * @website https://blog.laslabs.com
 * @file node_benches.js
 */

/* Test str concatenation, generic method */
function strConcat(){
    var str = '';
    for (var i = 0; i < 100000; i++){
        str += 'A';
    }
    return str;
}

/* Int math+reassignment */
function intBench(){
    var test = 0;
    for (var i = 0; i < 100000; i++){
        test += i;
    }
    return test;
}

/* Multidimensional Array assignments */
function arrBench(){
    var test = [], test_ = [];
    for (var i = 0; i < 1000; i++){
        for (var j = 0; j < 100; j++){
            for (var k = 0; k < 100; k++){
                test_[k] = i + j + k;
            }
            test[j] = test_;
        }
    }
    return test;
}

/* Single dimensional array assignments */
function sliceBench(){
    for (var i = 0; i < 100000; i++){
        var test = [];
        for (var j = 0; j < 100; j++){
            test.push(j + i);
        }
    }
    return test;
}

/* Fib test */
function fib(n){
    if (n < 2){
        return n;
    }
    return fib(n-2) + fib(n-1);
}

var args = process.argv.slice(2);
switch (args[0]) {
    case '--str':
        console.log(strConcat());
        break;
    case '--int':
        console.log(intBench());
        break;
    case '--arr':
        console.log(arrBench());
        break;
    case '--slice':
        console.log(sliceBench());
        break;
    case '--fib':
        console.log(fib(40));
        break;
}
}}}

==Python/PyPy==
{{{ lang=python
#!/usr/bin/pypy
# -*- coding: utf-8 -*-
##
# Python benchmarks to compare against Go and Node.js
#
# @author Dave Lasley, dave@dlasley.net
# @website https://blog.laslabs.com
# @file py_benches.py

from optparse import OptionParser


def strConcat():
    # Test str concatenation, generic method
    str_ = ''
    for i in xrange(0, 100000):
        str_ += 'A'
    return str_


def strConcatRecommended():
    # Language specific recommended way of str concat
    return ''.join(['A' for _ in xrange(0, 100000)])


def intBench():
    # Int math+reassignment
    test = 0
    for i in xrange(0, 100000):
        test += i
    return test


def arrBench():
    # Multidimensional Array assignments
    for i in xrange(0, 1000):
        test = []
        for j in xrange(0, 100):
            test.append([i+j+k for k in xrange(0, 100)])
    return test


def sliceBench():
    # Single dimensional array assignments
    for i in xrange(0, 100000):
        test = []
        for j in xrange(0, 100):
            test.append(j+i)
    return test


def fib(n):
    # Implement fibonacci
    if n < 2:
        return n
    return fib(n-2) + fib(n-1)


if __name__ == '__main__':
    parser = OptionParser()
    parser.add_option('-s', '--str', dest='str', action="store_true", default=False, help='str concat test')
    parser.add_option('-r', '--strrec', dest='strrec', action="store_true", default=False, help='str join test')
    parser.add_option('-i', '--int', dest='int', action="store_true", default=False, help='int test')
    parser.add_option('-a', '--arr', dest='arr', action="store_true", default=False, help='array test')
    parser.add_option('-d', '--slice', dest='slice', action="store_true", default=False, help='slice test')
    parser.add_option('-f', '--fib', dest='fib', action="store_true", default=False, help='fib test')
    options, args = parser.parse_args()

    if options.str:
        print strConcat()
    if options.strrec:
        print strConcatRecommended()
    if options.int:
        print intBench()
    if options.arr:
        print arrBench()
    if options.slice:
        print sliceBench()
    if options.fib:
        print fib(40)
}}}

==Controller==
Each language has its own benchmarking mechanisms that are considered accurate. The problem is that these may have slightly different implementations, so I felt a single controller was necessary to properly evaluate the differences between languages. I wrote said controller in Python:
{{{ lang=python
#!/usr/bin/pypy
# -*- coding: utf-8 -*-
##
# Controller for benchmarks
#
# I have determined that it will be more accurate to use the same timing method for all benches
#
# @author David Lasley
# @website https://blog.laslabs.com
# @file bench_controller.py

from __future__ import division

import time
import csv
import logging

from subprocess import check_output

logging.basicConfig(level=logging.DEBUG)

def timeIt(func, loops=10):
    times = []
    for i in xrange(0, loops):
        start = time.time()
        func()
        times.append((time.time() - start) * 1000)
    return times

test_definitions = {
    'str concat': {
        'node': ['node', './node_benches.js', '--str'],
        'pypy': ['pypy', './py_benches.py', '--str'],
        'python': ['python', './py_benches.py', '--str'],
        'go': ['go', 'run', './go_benches.go', '-str'],
    },
    'recommended str concat': {
        'node': ['node', './node_benches.js', '--str'],  #< Node's recommended is str+=str_
        'pypy': ['pypy', './py_benches.py', '--strrec'],
        'python': ['python', './py_benches.py', '--strrec'],
        'go': ['go', 'run', './go_benches.go', '-strrec'],
    },
    'int tests': {
        'node': ['node', './node_benches.js', '--int'],
        'pypy': ['pypy', './py_benches.py', '--int'],
        'python': ['python', './py_benches.py', '--int'],
        'go': ['go', 'run', './go_benches.go', '-int'],
    },
    'multi-d arr tests (fixed length golang)': {
        'node': ['node', './node_benches.js', '--arr'],
        'pypy': ['pypy', './py_benches.py', '--arr'],
        'python': ['python', './py_benches.py', '--arr'],
        'go': ['go', 'run', './go_benches.go', '-arr'],
    },
    'single-d arr tests (slice in golang)': {
        'node': ['node', './node_benches.js', '--slice'],
        'pypy': ['pypy', './py_benches.py', '--slice'],
        'python': ['python', './py_benches.py', '--slice'],
        'go': ['go', 'run', './go_benches.go', '-slice'],
    },
    'fibonacci': {
        'node': ['node', './node_benches.js', '--fib'],
        'pypy': ['pypy', './py_benches.py', '--fib'],
        'python': ['python', './py_benches.py', '--fib'],
        'go': ['go', 'run', './go_benches.go', '-fib'],
    },
}

with open('/tmp/bench_results.csv', 'w') as fh:
    w = csv.writer(fh)
    for test_name, test_set in test_definitions.iteritems():
        logging.info('Beginning %s tests' % test_name)
        w.writerow([test_name])
        for lng, cmd in test_set.iteritems():
            logging.info('Running %s' % lng)
            w.writerow([lng] + timeIt(lambda: check_output(cmd)))
        w.writerow([])  #< blank
}}}

==The Results==
For my tests, I used the following versions:

|= Language |= Version |
| Go | go1.2.2 |
| Node.js | v0.11.14-pre |
| Python/PyPy | 2.7.4 |

Each row below lists the ten timed runs for a given language, in milliseconds, as recorded by the controller.

** recommended str concat **

|= node | 59.99112129 | 67.76809692 | 68.49098206 | 87.81909943 | 71.92587852 | 60.00208855 | 95.99804878 | 91.69507027 | 41.71395302 | 58.16984177 |
|= python | 29.15120125 | 25.42495728 | 29.11901474 | 31.02397919 | 30.90906143 | 39.27588463 | 33.57982635 | 31.39519691 | 27.6081562 | 21.58117294 |
|= pypy | 35.27402878 | 32.18197823 | 48.72894287 | 30.77101707 | 47.76883125 | 34.89899635 | 36.69404984 | 42.68407822 | 49.04198647 | 58.57610703 |
|= go | 262.3169422 | 253.9041042 | 257.7459812 | 269.9151039 | 267.0440674 | 224.8828411 | 258.8560581 | 283.6902142 | 257.532835 | 276.6370773 |

** multi-d arr tests (fixed length golang) **

|= node | 138.8771534 | 141.5431499 | 125.0460148 | 114.6330833 | 121.655941 | 110.918045 | 128.9660931 | 120.3701496 | 114.9089336 | 103.9888859 |
|= python | 640.6638622 | 676.3451099 | 544.3031788 | 777.8539658 | 719.8348045 | 613.1420135 | 621.7420101 | 708.4009647 | 830.1188946 | 717.7159786 |
|= pypy | 198.1470585 | 118.5650826 | 115.8950329 | 116.9779301 | 106.6081524 | 123.4230995 | 119.3599701 | 118.8399792 | 120.0230122 | 117.6080704 |
|= go | 311.5470409 | 326.9798756 | 306.0839176 | 293.6029434 | 269.1380978 | 253.4139156 | 288.7370586 | 328.4268379 | 296.3681221 | 240.8840656 |

** single-d arr tests (slice in golang) **

|= node | 132.4520111 | 139.1201019 | 152.1511078 | 147.8271484 | 160.2649689 | 124.4721413 | 143.3520317 | 156.0280323 | 155.5659771 | 116.8549061 |
|= python | 883.0759525 | 899.2409706 | 1039.23893 | 956.5742016 | 973.6499786 | 1071.118116 | 1173.737049 | 1185.379028 | 962.4710083 | 1092.193842 |
|= pypy | 157.82094 | 145.6198692 | 174.3578911 | 169.7480679 | 158.244133 | 128.9889812 | 131.467104 | 123.1899261 | 134.8230839 | 148.804903 |
|= go | 293.6689854 | 247.9851246 | 286.0591412 | 310.8739853 | 278.069973 | 269.8729038 | 294.1191196 | 266.8190002 | 286.0610485 | 238.9888763 |
** str concat **

|= node | 48.42591286 | 59.42797661 | 61.02490425 | 61.6350174 | 54.77595329 | 57.43503571 | 56.75506592 | 64.34988976 | 56.11419678 | 55.35697937 |
|= python | 33.90216827 | 34.02519226 | 34.17897224 | 34.07692909 | 34.82508659 | 34.24286842 | 34.51299667 | 34.48319435 | 34.17778015 | 34.37805176 |
|= pypy | 732.2180271 | 730.1030159 | 667.4458981 | 720.0639248 | 613.6038303 | 615.9689426 | 577.4409771 | 562.2119904 | 601.3510227 | 579.4260502 |
|= go | 1602.607012 | 1497.264862 | 1519.823074 | 1528.020859 | 1487.414122 | 1490.914106 | 1577.569008 | 1647.518873 | 1518.991947 | 1554.651976 |

** fibonacci **

|= node | 1867.120028 | 1707.561016 | 1836.390018 | 1908.078909 | 1670.866966 | 1685.119867 | 1758.877039 | 1681.353092 | 1803.344965 | 1693.615913 |
|= python | 37720.66522 | 36329.70405 | 34938.38787 | 37395.29514 | 38313.84993 | 39367.03801 | 38027.74191 | 39836.77411 | 38381.70695 | 39898.633 |
|= pypy | 15490.13901 | 14594.594 | 13039.69288 | 13491.85491 | 13261.80506 | 13530.72 | 13109.36284 | 13404.52719 | 13851.68004 | 15174.24798 |
|= go | 1187.971115 | 1081.583977 | 1059.811115 | 1235.404015 | 1228.081942 | 1217.715025 | 1280.277967 | 1149.573088 | 1209.995031 | 1068.785191 |

** int tests **

|= node | 56.04815483 | 55.95517159 | 60.06979942 | 53.06196213 | 57.4889183 | 55.74083328 | 56.06102943 | 55.05800247 | 73.02999496 | 77.44002342 |
|= python | 38.56801987 | 29.40201759 | 35.03012657 | 29.36983109 | 30.00593185 | 36.32903099 | 32.7000618 | 28.79595757 | 33.01596642 | 36.20100021 |
|= pypy | 49.50404167 | 50.21190643 | 48.82097244 | 49.39198494 | 48.97403717 | 52.81281471 | 49.15213585 | 29.21509743 | 30.32207489 | 35.92514992 |
|= go | 242.7301407 | 328.2020092 | 257.4050426 | 298.3369827 | 296.8220711 | 311.5959167 | 257.5638294 | 232.5921059 | 283.1189632 | 286.3800526 |

==Download Files==
* [[https://blog.laslabs.com/user-files/uploads/benches.zip]]


==Comments==

9 responses to “Benchmarking Go, Node, Python & PyPy”

  1. Daniel

    FYI go1.0.2 is pretty old, a lot of optimizations went into go1.2 and the recent go1.3 (still in beta).

    1. David Lasley

      That was the Ubuntu std install, it makes sense that it’s out of date. I’m updating now and will provide new results, thanks for letting me know!

  2. Stuart

    I think you have some copy-paste errors in your controller. You appear to be running the string test for python in the int test, multi-d array and single-d array tests:

    'int tests': {
        'node': lambda: check_output(['node', './node_benches.js', '--int']),
        'pypy': lambda: check_output(['pypy', './py_benches.py', '--int']),
        'python': lambda: check_output(['python', './py_benches.py', '--str']),
        'go': lambda: check_output(['go', 'run', './go_benches.go', '-int']),
    },
    'multi-d arr tests (fixed length golang)': {
        'node': lambda: check_output(['node', './node_benches.js', '--arr']),
        'pypy': lambda: check_output(['pypy', './py_benches.py', '--arr']),
        'python': lambda: check_output(['python', './py_benches.py', '--str']),
        'go': lambda: check_output(['go', 'run', './go_benches.go', '-arr']),
    },
    'single-d arr tests (slice in golang)': {
        'node': lambda: check_output(['node', './node_benches.js', '--slice']),
        'pypy': lambda: check_output(['pypy', './py_benches.py', '--slice']),
        'python': lambda: check_output(['python', './py_benches.py', '--str']),
        'go': lambda: check_output(['go', 'run', './go_benches.go', '-slice']),
    },

    1. Dave Lasley

      Good catch, thanks! I was just about to re-run the tests with the new go version, perfect timing :)

  3. Isaac Gouy

    “I wrote a few simple benchmarks to…”

    The benchmarks game shows measurements for programs that are more than 5 lines of code (but still small enough that you can check through the source code) —

    http://benchmarksgame.alioth.debian.org/u64/benchmark.php?test=all&lang=v8&lang2=go&data=u64

    http://benchmarksgame.alioth.debian.org/u64/benchmark.php?test=all&lang=v8&lang2=python3&data=u64

    1. Dave Lasley

      Nice, these are much better than mine. Thank you for providing them!

      1. Isaac Gouy

        Hopefully you’ll be able to do something more interesting than the benchmarks game!

  4. jcrubino

    These are great tests, but you are testing Go compile time + run time by not compiling the binary first:
    $ go build
    $ ./go_bench

  5. jcrubino

    I just read your inline note about running go build first before the tests.

    This will still invoke the go command-line tool to check the binary, and the difference on my system between running “go run…” and ./go_bench --int is a ~50x delta.
