Today's premise is this: you have a test harness which uses a database,
and it has enough test cases that a full test run is so slow you cringe
at the very thought of launching it.
We discuss a lifehack-style solution to this problem: putting the DBMS which
will be used by our test harness completely on a RAM disk, so it'll
operate from memory, which is much faster than the hard drive.
The main issue is this: since you need only the test datasets on the ramdisk,
and only for the duration of the test run, you will need separate DBMS instances
working on the ramdisk, not the ones already installed on your system.
In the end you'll get a special preparation script which you launch
before your tests. After that, your test suite will run against isolated MySQL
and MongoDB instances on top of the ramdisk.
If your test suite contains a large number of integration tests using databases,
this will greatly speed up the test run. In one reported case the run time
dropped from 1:30 to 18 seconds, five times faster.
You need a *nix system, as MySQL Sandbox (see below) works only there.
OS X will probably do, too. The system should have a bash-like shell (obviously)
and Perl installed. The kernel should support tmpfs. Your unprivileged
user should be able to mount filesystems, or you'll need to patch the script
to use sudo at the mount step (assuming your user can sudo).
For an isolated MySQL instance you need the MySQL distribution downloaded from
the website; for an isolated MongoDB instance, the MongoDB distribution.
Note, however, that the whole MongoDB server
is contained in just a single binary file of about 8 MB.
Of course, as we will work completely in memory, you have to be sure that you
have enough RAM to store your (presumably test) datasets.
Making a ramdisk is very simple on recent Linux:
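A minimal sketch; the mount-table identifier, directory, and size are assumptions you should adapt:

```shell
RAMDISK_NAME=db_sandbox_ram   # identifier shown in the mount table (assumed name)
RAMDISK_DIR=/tmp/db_sandbox   # directory that will become RAM-backed (assumed path)
mkdir -p "$RAMDISK_DIR"
# tmpfs keeps the filesystem contents entirely in memory:
sudo mount -t tmpfs -o size=1024m "$RAMDISK_NAME" "$RAMDISK_DIR"
```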
RAMDISK_NAME is an identifier for the mount table. RAMDISK_DIR is the
directory which will be turned into a RAM-based filesystem.
After the mount, anything you put into RAMDISK_DIR will be placed
in memory, without any interaction with the physical hard drive.
Of course, this means that after unmounting the ramdisk, everything that was
in it will be lost.
Shutting down the ramdisk
Just unmount the created mount point:
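For example (the directory is an assumption matching the mount sketch above):

```shell
RAMDISK_DIR=${RAMDISK_DIR:-/tmp/db_sandbox}  # assumed mount point
sudo umount "$RAMDISK_DIR"
```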
Note that you probably should stop all running services which still use the
ramdisk prior to unmounting!
Isolated MySQL instance
We'll use the MySQL Sandbox project to launch isolated MySQL instances.
For it to work you need the MySQL distribution downloaded from the website.
MySQL Sandbox is installed by running cpan MySQL::Sandbox
as root, and then you run the sandbox factory itself as follows:
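A sketch of the launch; the ramdisk path, package path, port, and directory name are all assumptions:

```shell
RAMDISK_DIR=/tmp/db_sandbox                             # assumed ramdisk path
MYSQL_PACKAGE=/path/to/mysql-X.Y.Z-linux-x86_64.tar.gz  # downloaded distribution
MYSQL_PORT_DESIRED=5601                                 # assumed port
MYSQL_DIRNAME=msb_test                                  # assumed sandbox subfolder name
# SANDBOX_HOME controls where the sandboxed instance is placed:
export SANDBOX_HOME="$RAMDISK_DIR/sandboxes"
make_sandbox "$MYSQL_PACKAGE" -- \
    --sandbox_port="$MYSQL_PORT_DESIRED" --sandbox_directory="$MYSQL_DIRNAME"
```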
It's a one-liner split into two lines for readability.
Note that you need root privileges only to install the MySQL Sandbox application
itself, all further communication with it will be done from unprivileged account,
most possibly the same under which you launch the test suite.
We need to set the SANDBOX_HOME variable before launching the sandbox factory,
because that's how we control where it puts the sandboxed MySQL instance.
By default it uses $HOME/sandboxes, which is probably not what you need.
RAMDISK_DIR is the same directory as the one we prepared in the previous section.
MYSQL_PACKAGE is the full path to the MySQL distribution package downloaded
from the website. Please note that MySQL Sandbox will unpack it into the same
directory and will essentially use the unpacked contents to launch the sandboxed
MySQL. So you'll probably want to move the package to the ramdisk too, to speed
up launching and running the MySQL server itself; note, however,
that the unpacked 5.6.0 contents are about 1 GB in size.
Remember the MYSQL_PORT_DESIRED value you use here, because you'll need
it to configure your test suite to point at the correct MySQL instance.
MYSQL_DIRNAME is the least important parameter here; it's just the name of the
subfolder under SANDBOX_HOME in which this particular sandbox will be placed.
After make_sandbox has finished its routine, you can check that your sandbox is
indeed working by running:
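Each sandbox directory contains a "use" wrapper script that opens a mysql client session against that sandbox; a sketch (paths are the assumed values from above):

```shell
SANDBOX_HOME=${SANDBOX_HOME:-/tmp/db_sandbox/sandboxes}  # assumed
MYSQL_DIRNAME=${MYSQL_DIRNAME:-msb_test}                 # assumed
# The "use" script accepts normal mysql client arguments:
"$SANDBOX_HOME/$MYSQL_DIRNAME/use" -e 'SELECT VERSION()'
```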
Connection to Isolated MySQL Instance
You should use the following credentials to connect to the sandboxed MySQL:
- host : '127.0.0.1'
- port : the MYSQL_PORT_DESIRED value you chose
- username : 'msandbox'
- password : 'msandbox'
Please note that you must use 127.0.0.1 as the host, and not localhost
as usual, because of the sandbox's internal security configuration.
Shutting Down the Isolated MySQL Instance
To shut down the sandboxed MySQL, issue the stop command, or a more forceful kill:
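The sandbox ships with helper scripts for both; a sketch (paths are the assumed values from above):

```shell
SANDBOX_HOME=${SANDBOX_HOME:-/tmp/db_sandbox/sandboxes}  # assumed
MYSQL_DIRNAME=${MYSQL_DIRNAME:-msb_test}                 # assumed
"$SANDBOX_HOME/$MYSQL_DIRNAME/stop"       # graceful shutdown
"$SANDBOX_HOME/$MYSQL_DIRNAME/send_kill"  # forceful: kills mysqld outright
```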
These commands are needed mostly to stop the running daemon; after unmounting
the ramdisk, all sandbox data will be purged out of existence.
Isolated MongoDB instance
The MongoDB server is contained in just a single binary file, so this will be
much easier than MySQL.
You'll need the MongoDB distribution downloaded from the website, too.
This time, unpack it to some directory.
After that, you can launch a separate instance of MongoDB with the following command:
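A sketch; the binary location, data directory, and port are assumptions (the variables are explained below):

```shell
MONGODB_BIN=/opt/mongodb/bin/mongod   # assumed unpack location
RAMDISK_DIR=/tmp/db_sandbox           # assumed ramdisk path
MONGODB_DIR="$RAMDISK_DIR/mongo"      # assumed data directory
MONGODB_PORT_DESIRED=27018            # assumed port
mkdir -p "$MONGODB_DIR"
# --fork daemonizes (and requires --logpath);
# --pidfilepath lets us stop the server cleanly later:
"$MONGODB_BIN" --port "$MONGODB_PORT_DESIRED" --dbpath "$MONGODB_DIR" \
    --fork --logpath "$MONGODB_DIR/mongod.log" \
    --pidfilepath "$MONGODB_DIR/mongod.pid"
```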
MONGODB_BIN is the /bin/mongod path preceded by the full path to the unpacked
MongoDB distribution. Here you can even use your system MongoDB package, in
case you have it installed.
MONGODB_DIR is a path to a directory under RAMDISK_DIR into which this MongoDB
instance should put its files; it can be just a simple subdirectory.
As with MySQL, MONGODB_PORT_DESIRED is a crucial parameter for specifying the
correct MongoDB instance to connect to. Remember it, as you will need to set
it in your test suite.
Connecting to Isolated MongoDB Instance
By default MongoDB does not enforce any usernames or passwords, so you just
need the hostname and port parameters:
- host : 'localhost'
- port : the MONGODB_PORT_DESIRED value you chose
For example, for the PHP Mongo extension, you get a connection to this instance
by passing "mongodb://localhost:<your port>" to the Mongo constructor.
Shutting Down the Isolated MongoDB Instance
As you provided the --pidfilepath command-line argument when launching the
MongoDB server, the following command should do the trick:
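A sketch (the data directory is the assumed path from above):

```shell
MONGODB_DIR=${MONGODB_DIR:-/tmp/db_sandbox/mongo}  # assumed path
# Kill the daemon named in the pidfile, then remove the pidfile:
kill "$(cat "$MONGODB_DIR/mongod.pid")" && rm "$MONGODB_DIR/mongod.pid"
```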
Essentially, we are feeding the kill command the contents of the pidfile
and removing the pidfile afterwards.
Bash scripts to automate the sandbox starting and stopping
There is a GitHub repository with the example scripts nicely laid out
along with the comments.
There are three scripts:
db_sandbox_properties.sh: this holds all the variable parameters you need to properly set up the sandboxes.
db_sandbox_start.sh: run this script before your test suite.
It has been updated with a command to copy the table schema from a source MySQL
instance to the sandbox MySQL instance, so if you have an accessible, up-to-date
MySQL instance with the schema for your tests, this script will copy the schema
to the sandbox, and you will have a DB ready to accept test data.
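The schema copy can be sketched like this; the source host, credentials, and database name are assumptions:

```shell
SANDBOX_HOME=${SANDBOX_HOME:-/tmp/db_sandbox/sandboxes}  # assumed
MYSQL_DIRNAME=${MYSQL_DIRNAME:-msb_test}                 # assumed
# --no-data dumps only the table definitions (the schema), not the rows,
# and the sandbox's "use" script pipes them into the sandboxed MySQL:
mysqldump --no-data -h source-db.example.com -u dbuser -p'dbpass' testdb \
    | "$SANDBOX_HOME/$MYSQL_DIRNAME/use" testdb
```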
NOTE that your unprivileged user will need sudo rights to successfully
mount the ramdisk. If you do not have them, you can hack the mount step
in any way you see sufficient to successfully mount the ramdisk.
db_sandbox_stop.sh: run this script when you don't need the sandbox anymore.
It stops both MySQL and MongoDB and unmounts the ramdisk (note that you'll
need sudo rights for this, too).
Suppose we want to write the following test scenario:
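Reconstructed here as a sketch; the exact wording is an assumption based on the steps discussed below:

```gherkin
Feature: Friends search
  Scenario: Finding a friend
    Given I am in the Friends section
    When I search for "My Friend"
    Then I should see "My Friend" in the search results
```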
Let's define these steps in our FeatureContext. The first step we can define
with the following regexp:
/^I am in the Friends section$/, because we really don't
need a FeatureContext method containing a long switch that enumerates
every possible section of the site.
The second step we can define with the following regexp:
/^I(?: should)? see "([^"]*)" in the search results$/
It should be obvious why we use a custom test step instead of the predefined test steps, writing something like 'I should see "My Friend" in ".search-wrapper form input[role="search"]" element'.
Then, someday, we will surely want to write the following scenario:
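For example, something like this; the section and item names are assumptions:

```gherkin
Scenario: Finding a product
  Given I am in the "Shop" section
  When I search for "Some Product"
  Then I should see "Some Product" in the search results
```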
And here we have another "search results", which should be found by a completely
different selector and which is located on a different page.
So this is a context-dependent statement: the term "search results" depends
on which "section" we mentioned previously. This comes straight from linguistics.
To be able to use this natural-language feature, we need to implement it somehow.
I'll use the abbreviation CDTS instead of the longer "context-dependent test step".
Fortunately, Behat has a feature with exactly this purpose: subcontexts.
Unfortunately, it does not work the way we need in order to use CDTS properly.
In an ideal world, we can do this:
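A sketch of what that could look like; this is the approach that does not actually work, and the method and class names other than useContext are assumptions:

```php
/**
 * @Given /^I am in the Friends section$/
 */
public function iAmInTheFriendsSection()
{
    // Ideally, we could register the subcontext on demand, and its step
    // definitions would become available from this point on:
    $this->useContext('friends_section', new FriendsSectionContext());
}
```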
and this would load the FriendsSectionContext and all the CDTS definitions in
it, like the following:
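The subcontext could look something like this; the selector and method names are assumptions, and assertElementContainsText is taken from the Mink context:

```php
class FriendsSectionContext extends BehatContext
{
    /**
     * @Then /^I(?: should)? see "([^"]*)" in the search results$/
     */
    public function iShouldSeeInTheSearchResults($text)
    {
        // The "search results" selector specific to the Friends section
        // (selector is an assumption):
        $this->getMainContext()
             ->assertElementContainsText('.friends-search-results', $text);
    }
}
```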
By calling useContext with a different context class, we would get a different
definition for the /^I should see "([^"]*)" in the search results$/ test step.
Unfortunately, Behat cannot load test step definitions from subcontexts
at runtime. Apparently, this is because it has to parse the regexps in the
docblocks accompanying the definitions, or something like that. So we are forced
to load all our subcontexts right in the constructor.
Apart from being horribly inefficient, this prevents us from defining test
steps with the same regexp across several separate subcontexts.
The workaround for this problem is as follows:
- add a property to the FeatureContext which will hold a reference to the current subcontext; name it "location_context" or so,
- make the context-setting ('I am in the "..." section') test step set "location_context" to the subcontext needed (you can get the subcontext with a call to getSubcontext()),
- move the context-dependent logic into "normal" subcontext methods, which should have the same name across all subcontexts,
- register all subcontexts with useContext under meaningful aliases like "friends_section", "shop_section", etc.,
- define the context-dependent test step, like 'I should see "..." in search results', in the main FeatureContext,
- in the definition of this step, get the context-dependent logic needed by calling the relevant method on the subcontext the "location_context" property currently points at.
So, we need our context-setting test steps to be like this:
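A sketch; names other than useContext and getSubcontext are assumptions:

```php
public function __construct(array $parameters)
{
    // Register every section subcontext up front, under meaningful aliases:
    $this->useContext('friends_section', new FriendsSectionContext());
    $this->useContext('shop_section', new ShopSectionContext());
}

/**
 * @Given /^I am in the "([^"]*)" section$/
 */
public function iAmInTheSection($section)
{
    // Point "location_context" at the subcontext for the named section:
    $this->location_context =
        $this->getSubcontext(strtolower($section) . '_section');
}
```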
Assuming 'friends_section' is an alias of the FriendsSectionContext set in the
constructor, after this test step our "location_context" points at the
FriendsSectionContext, and, say, its search-results method
will do exactly what we need in the "Friends" section.
Then, the context-dependent test step will be like this, getting the
location-dependent logic from the "location_context" set previously:
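A sketch of the context-dependent step itself; the delegated method name, assertSearchResultsContain, is an assumption, and only needs to be identical across all subcontexts:

```php
/**
 * @Then /^I(?: should)? see "([^"]*)" in the search results$/
 */
public function iShouldSeeInTheSearchResults($text)
{
    // Delegate to the identically-named method on whichever subcontext
    // the "location_context" property currently points at:
    $this->location_context->assertSearchResultsContain($text);
}
```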
The main point is this: when we want to check that something appears in the
"search results" entity on some different page, we can use the same test step
in our .feature files; we just explicitly name the needed section somewhere
above in the text. This makes the .feature files a lot more human-readable.
This concludes the explanation of how to use this linguistic technique in your
test suites.