Benchmark collection of random problem + language combinations.
It comes with a framework that makes it easy to contribute, run, and visualize new solutions. This website is auto-generated from the results.
Disclaimer: This is a fun project; not too much thought went into the experiment design, and some results are obviously flawed.
Notes: The figure is based on the median run time of the largest problem size. The numbers correspond to the ranks within each benchmark. Mouse over a cell to see relative run times. The color scale uses green for the minimum run time of a benchmark, gray for the median run time, and red for the point three standard deviations above the median.
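The site does not show its color-mapping code, but the rule described above can be sketched in Python. This is an illustrative sketch only: the function name and the exact RGB endpoints for green, gray, and red are assumptions, not the framework's actual values.

```python
import statistics

def runtime_color(t, times):
    """Map a run time to an RGB color on a green-gray-red scale:
    green = benchmark minimum, gray = median run time,
    red = median + 3 standard deviations (clamped beyond that)."""
    lo = min(times)
    mid = statistics.median(times)
    hi = mid + 3 * statistics.stdev(times)

    def lerp(a, b, f):
        f = min(max(f, 0.0), 1.0)  # clamp interpolation factor to [0, 1]
        return tuple(round(x + (y - x) * f) for x, y in zip(a, b))

    # Illustrative endpoint colors (assumed, not the site's exact palette).
    green, gray, red = (0, 200, 0), (128, 128, 128), (220, 0, 0)
    if t <= mid:
        return lerp(green, gray, (t - lo) / (mid - lo) if mid > lo else 0.0)
    return lerp(gray, red, (t - mid) / (hi - mid) if hi > mid else 1.0)
```

Anchoring red at three standard deviations above the median keeps a single extreme outlier from washing out the color differences among the remaining results.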
The framework makes it easy to quickly run a set of benchmarks and generates output to visualize the results. All HTML on this page is auto-generated.
Each benchmark problem is split into stages, i.e., the solution is computed in several steps, and each step is measured individually (implementations are responsible for timing each step themselves). The total run time is obtained by adding up the run times of all stages.
In some cases, splitting a solution into several steps may feel slightly non-idiomatic and inefficient, but it has the benefit of disentangling, for instance, I/O from computation. Moreover, it sometimes reveals interesting results, such as a language being particularly fast in one step while slow in another.
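The staging convention above can be sketched as follows. Note that `StageTimer` and its methods are hypothetical names for illustration, not the framework's actual API; the point is only that each stage is timed individually and the total is the sum of the stages.

```python
import time

class StageTimer:
    """Hypothetical helper: each solution times its own stages,
    and the total run time is the sum of all stage times."""

    def __init__(self):
        self.stages = {}  # stage name -> elapsed seconds

    def measure(self, name, func, *args):
        """Run one stage, record its elapsed time, return its result."""
        t0 = time.perf_counter()
        result = func(*args)
        self.stages[name] = time.perf_counter() - t0
        return result

    def total(self):
        """Total run time = sum over all measured stages."""
        return sum(self.stages.values())

# Usage sketch: separating I/O from computation lets each show up
# as its own entry in the results.
timer = StageTimer()
data = timer.measure("io", lambda: list(range(1000)))
result = timer.measure("compute", lambda xs: sum(x * x for x in xs), data)
```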
Each benchmark is performed with three problem sizes: currently, each problem comes in a small, medium, and large variant.
You can run all benchmarks yourself or even create your own set of benchmarks. The main framework is written in Python and should be reproducible on any UNIX-based system.
TODO: extend documentation
Contributions of any kind are highly welcome: GitHub Repository
In particular, it would be nice to see (more) implementations for languages such as Nim, Rust, Go, Haskell, Clojure, Kotlin, Julia, R, Crystal, Racket, Lua, Ruby, Java...
This project is licensed under the terms of the MIT license.
All benchmarks were performed on a PC with the following specs:
Property | Value |
---|---|
OS | Linux |
Distribution | Ubuntu 14.04.5 LTS |
Kernel | 3.13.0-110-generic |
CPU | Intel(R) Core(TM) i5-4670 CPU @ 3.40GHz |
Number of cores | 4 |
L1 data cache size | 32K |
L1 instruction cache size | 32K |
L2 cache size | 256K |
L3 cache size | 6144K |
Memory | 7930.1 MB |
Summary of compiler/language versions used:
Compiler/runtime | Version |
---|---|
GCC | gcc (Ubuntu 4.8.5-2ubuntu1~14.04.1) 4.8.5 |
Clang | clang version 3.8.0-2ubuntu3~trusty4 (tags/RELEASE_380/final) |
JVM | Java(TM) SE Runtime Environment (build 1.8.0_74-b02) |
Python | Python 2.7.6 |
Go | go version go1.7.4 linux/amd64 |
Rust | rustc 1.15.1 (021bd294c 2017-02-08) |
Nim | Nim Compiler Version 0.16.1 (2017-02-18) [Linux: amd64] |