Welcome to TechEmpower Framework Benchmarks (TFB)
If you're new to the project, welcome! Please feel free to ask questions here. We encourage new frameworks and contributors to ask questions. We're here to help!
This project provides representative performance measures across a wide field of web application frameworks. With much help from the community, coverage is quite broad, and we are happy to broaden it further with contributions. The project presently includes frameworks in many languages, including C and others. The current tests exercise plaintext responses, JSON serialization, database reads and writes via the object-relational mapper (ORM), collections, sorting, server-side templates, and XSS countermeasures. Future tests will exercise other components and involve greater computation.
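To make the simplest test types concrete, here is a minimal sketch of the responses a framework implementation is expected to produce for the plaintext and JSON tests: a "Hello, World!" plaintext body, and a serialized `{"message": "Hello, World!"}` object. The function names below are illustrative only and use Python's standard library; they are not part of any particular framework under test — see the test requirements for the authoritative rules.

```python
import json

def plaintext_response() -> str:
    # The plaintext test returns this exact body (served as text/plain).
    return "Hello, World!"

def json_response() -> str:
    # The JSON test serializes a freshly instantiated object on each request.
    return json.dumps({"message": "Hello, World!"})
```

A verifier in the suite checks these payloads byte-for-byte (modulo whitespace in the JSON case), which is why the exact message text matters.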
Read more and see the results of our tests on Amazon EC2 and physical hardware. For descriptions of the test types that we run, see the test requirements section.
If you find yourself in a directory or file whose purpose you're unsure of, check out the file structure section in our documentation, which briefly explains the use of the relevant directories and files.
Quick Start Guide
Clone the repository.
$ git clone https://github.com/TechEmpower/FrameworkBenchmarks.git
Move into the deployment/vagrant directory.
$ cd FrameworkBenchmarks/deployment/vagrant
Turn on the VM (takes at least 20 minutes).
$ vagrant up
Enter the VM.
$ vagrant ssh
Move into the FrameworkBenchmarks directory in the VM.
vagrant@TFB-all:~$ cd ~/FrameworkBenchmarks
Run a test.
vagrant@TFB-all:~/FrameworkBenchmarks$ tfb --mode verify --test beego
Add a New Test
Once you open an SSH connection to your vagrant box, start the new test initialization wizard.
vagrant@TFB-all:~/FrameworkBenchmarks$ tfb --new
This will walk you through the entire process of creating a new test to include in the suite.
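The wizard generates the necessary scaffolding for you, but it helps to know roughly what it produces: each test is described by a benchmark_config.json file in its framework directory. The sketch below is illustrative only — the framework name, port, and field values are assumptions shown for orientation; the wizard's generated output and the official documentation are authoritative.

```json
{
  "framework": "myframework",
  "tests": [{
    "default": {
      "json_url": "/json",
      "plaintext_url": "/plaintext",
      "port": 8080,
      "approach": "Realistic",
      "classification": "Micro",
      "language": "Python",
      "webserver": "None",
      "os": "Linux",
      "database_os": "Linux",
      "display_name": "myframework"
    }
  }]
}
```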
Our official documentation can be found at frameworkbenchmarks.readthedocs.org. If you find any errors or areas for improvement in the docs, feel free to submit a pull request or an issue to the documentation repository.
Results of continuous benchmarking runs are available in real time here.
If you have a results.json file that you would like to visualize, you can do that here (it will be visualized using the metadata from the last known round; if you are adding a new test, it will not visualize anything). You can also attach a runid parameter to that URL, where runid is a run listed on tfb-status, like so: https://www.techempower.com/benchmarks/#section=test&runid=fd07b64e-47ce-411e-8b9b-b13368e988c6
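The URL above is simply the results page with a runid appended to the URL fragment. A small helper like the following (purely illustrative, not part of the TFB toolset) shows how such a URL is composed from a run ID taken from tfb-status:

```python
def results_url(runid: str) -> str:
    # Builds a results-visualization URL for a given tfb-status run ID.
    return f"https://www.techempower.com/benchmarks/#section=test&runid={runid}"

# For example, the run shown above:
print(results_url("fd07b64e-47ce-411e-8b9b-b13368e988c6"))
```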
The community has consistently helped in making these tests better, and we welcome any and all changes. Reviewing our contribution practices and guidelines will help to keep us all on the same page. The contribution guide can be found in the TFB documentation.