Saving time with property-based testing

Posted by Dan Octavian on Saturday, January 4, 2020




Writing tests takes a lot of time, but as our code grows, quality becomes a concern and more and more bugs are likely to creep in, so we can’t put it off for much longer. Also, when implementing algorithms under tight time constraints, such as in algorithm coding challenges, exhaustively thinking through all the edge cases can get tricky.

In this article we showcase how property-based testing can save precious developer time when writing tests, and how it comes in handy when you need to find bugs in record time. We do this by expressing our test cases as properties to be verified, instead of typing out lots of concrete test cases.

A use-case: a Javascript linked-list for code challenges

To do this we consider a useful case: a minimal doubly-linked list implementation in Javascript with 0 dependencies that you can easily copy-paste into code-challenge environments (eg. leetcode) if you are dealing with performance issues when using a standard Javascript Array.

Firstly, why would you need a doubly-linked list implementation in Javascript when you have the built-in Array type? As illustrated well in this StackOverflow post, if you are dealing with a large list of elements where you would like to attach items at both ends of the list, the built-in Array type exhibits linear O(N) runtime complexity for .unshift() method calls. This method inserts an element at the beginning of the array. Since the Array is represented as a contiguous region in memory, making “room” for this new element requires allocating a new region in memory and copying all existing array items over.

This array implementation sensibly favors the .push() method call, pre-allocating more memory than needed to accommodate the array growing in size through appends to its end. If, however, you are implementing an algorithm where appending at the beginning of a list and at the end are both frequent operations and your list becomes very large (think 100k+ elements), the performance of .unshift() can become a problem and you may need a linked-list implementation to get O(1) runtime complexity for both. This comes with the known trade-off that random access (eg. give me the element at position 11000) becomes an O(N) operation.
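You can observe the asymmetry yourself with a rough micro-benchmark (not part of the article's code; exact timings vary by engine, but the shape of the difference is what matters):

```javascript
// Rough micro-benchmark: prepending N items with unshift() costs O(N) per
// call (O(N^2) total), while appending N items with push() is O(1) amortized.
function buildWithPush (n) {
  const a = []
  for (let i = 0; i < n; i++) a.push(i)    // amortized O(1) per call
  return a
}

function buildWithUnshift (n) {
  const a = []
  for (let i = 0; i < n; i++) a.unshift(i) // O(current length) per call
  return a
}

const n = 10000
let t = Date.now()
buildWithPush(n)
console.log('push:', Date.now() - t, 'ms')

t = Date.now()
buildWithUnshift(n)
console.log('unshift:', Date.now() - t, 'ms')
```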

For the purpose of this article we implement a doubly-linked list with a minimalist, 0-dependency implementation that you can easily reuse in code-challenge environments (if the rules of the contest allow) when your algorithm has the performance requirements outlined above. The implementation is a class called DoublyLinkedList that can be found on Github, supporting a subset of the methods of the Array type.
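To make the data structure concrete, here is a minimal sketch of the idea (this is an illustration, not the actual Github implementation): each node carries a value plus prev/next pointers, and the list tracks head, tail and length so that both ends can be modified in O(1).

```javascript
// Minimal sketch of a doubly-linked list with Array-compatible
// push/unshift return values (both return the new length).
class DoublyLinkedList {
  constructor (arr = []) {
    this.head = null
    this.tail = null
    this.length = 0
    for (const val of arr) this.push(val)
  }

  push (val) {    // O(1) append at the end
    const node = { val, prev: this.tail, next: null }
    if (this.tail) this.tail.next = node
    else this.head = node
    this.tail = node
    return ++this.length
  }

  unshift (val) { // O(1) prepend at the beginning
    const node = { val, prev: null, next: this.head }
    if (this.head) this.head.prev = node
    else this.tail = node
    this.head = node
    return ++this.length
  }

  toArray () {    // O(N) walk from head to tail
    const out = []
    for (let n = this.head; n; n = n.next) out.push(n.val)
    return out
  }
}
```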

Property-based testing - easy mode

The DoublyLinkedList supports a subset of the interface of the Javascript Array type with the same behavior as the Array type (but not the same performance). It is also a piece of pure code, and by pure we mean no I/O (input/output) is executed that can change results from run to run. That makes it a perfect candidate for property-based testing. For the full test code check out this file on Github.

We pick the jsverify library for expressing properties and jest as a test framework.

Here’s how we can test that the length property of the DoublyLinkedList is implemented correctly:

    jsc.property('has correct length', jsc.array(jsc.int32), arr => {
      const list = new DoublyLinkedList(arr)
      return lodash.isEqual(list.length, arr.length)
    })

This property is saying: given an array of arbitrary length with arbitrary elements of type int32, a list created from that array will have the same length as the array.

Behind the scenes, jsverify randomly generates many arrays with various elements and various lengths and tests for a return value of true and no exceptions thrown for each.
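Conceptually, the runner boils down to something like this simplified, hand-rolled sketch (this is not jsverify's actual code; the real library also shrinks failing inputs to minimal counterexamples before reporting them):

```javascript
// Simplified sketch of a property-based test runner: generate random inputs,
// run the predicate on each, and report the first failing input.
function checkProperty (generate, predicate, runs = 100) {
  for (let i = 0; i < runs; i++) {
    const input = generate()
    if (predicate(input) !== true) {
      return { ok: false, counterexample: input }
    }
  }
  return { ok: true }
}

// Generate an int32 array of random length with random elements.
const randomInt32Array = () =>
  Array.from({ length: Math.floor(Math.random() * 50) },
             () => (Math.random() * 2 ** 32) | 0)

// A trivial always-true property, checked 100 times.
console.log(checkProperty(randomInt32Array, arr => arr.length >= 0))
```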

Another easy one is the includes method:

    jsc.property('check for includes', jsc.array(jsc.int8), jsc.int8, (arr, elemToCheck) => {
      const list = new DoublyLinkedList(arr)
      return list.includes(elemToCheck) === arr.includes(elemToCheck)
    })

This property says that given an arbitrary array of int8 elements and an arbitrary int8 element to check for inclusion, the predicate includes(elemToCheck) is answered in the same way by both the Array and the list created from that array.

Property-based testing - spicing things up

We could move forward and write such tests for the other methods such as unshift and push. In real usage of the list, however, there is more likely going to be an arbitrary sequence of addition and removal operations at both ends of the list, after which integrity needs to be preserved. That is a more realistic scenario.

Let’s see how we can do that with a property:

    jsc.property('supports arbitrary sequences of array mutation operations',
      jsc.array(jsc.int8), jsc.array(arbitraryArrayOperations), (arr, operations) => {
        arr = arr.slice(0, 100)
        operations = operations.slice(0, 1000)
        const list = new DoublyLinkedList(arr)
        for (let i = 0; i < operations.length; i++) {
          const operation = operations[i]
          const r1 = list[operation.op](operation.val)
          const r2 = arr[operation.op](operation.val)
          if (!lodash.isEqual(r1, r2) || !lodash.isEqual(list.toArray(), arr)) {
            return false
          }
        }
        return true
      })

This property is saying that given an arbitrary array and an arbitrary sequence of up to 1000 operations, each taking 0 or 1 parameters (jsc.array(arbitraryArrayOperations)), applying the operations to both the list and the array gives the same return values, and the array and the list contain the same elements after each application.

An operation is a tuple { op, val } where op is a method name (eg. pop) and val is a parameter to be passed into the method (may be undefined).
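Applying such a tuple is just dynamic method dispatch, as in the following small example against a plain Array (the helper name apply is ours, for illustration only):

```javascript
// Applying an { op, val } tuple via dynamic dispatch. Note that calling
// arr.pop(undefined) behaves exactly like arr.pop(), since pop ignores
// its arguments -- which is why an undefined val is harmless.
const arr = [1, 2, 3]
const apply = (target, { op, val }) => target[op](val)

console.log(apply(arr, { op: 'push', val: 4 })) // 4 (new length)
console.log(apply(arr, { op: 'pop' }))          // 4 (the popped value)
console.log(arr)                                // [ 1, 2, 3 ]
```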

We are effectively building our own arbitrary data generator here, since jsverify does not provide one out of the box. It goes like this:

    const arbitraryArrayOperations = jsc.bless({
      generator: jsc.generator.bless(function () {
        switch (jsc.random(0, 3)) {
          case 0: return {
            op: 'push',
            val: jsc.random(0, 100)
          }
          case 1: return {
            op: 'pop'
          }
          case 2: return {
            op: 'shift'
          }
          case 3: return {
            op: 'unshift',
            val: jsc.random(0, 100)
          }
        }
      })
    })

This generates 1 of the 4 operations we are testing for (push, pop, shift, unshift) with an accompanying concrete value for the push and unshift methods which add new elements.

With this test and generator we capture relatively complex behaviour for 4 of the methods with a few tens of lines of code and in less than 30 minutes!

Beware though: it is essential that the properties you write are as strong as possible. If the properties are too weak they may leave bugs hidden. For example, the clause lodash.isEqual(list.toArray(), arr), checked in the property above after every operation, is essential. Leaving it out initially meant that the test was not capturing a situation where the next pointer of the tail element of the list and the prev pointer of the head of the list were not nullified when doing pop and shift, as shown in this commit.
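A self-contained, hypothetical sketch of that bug class (not the actual DoublyLinkedList code from Github) shows why a return-value-only property would pass while the list is silently corrupted:

```javascript
// Hypothetical buggy pop: it returns the right value, so a property that
// only compares return values passes -- but it forgets to sever the old
// tail, so the removed node is still reachable when walking the list.
function makeList (arr) {
  let head = null, tail = null
  for (const val of arr) {
    const node = { val, prev: tail, next: null }
    if (tail) tail.next = node; else head = node
    tail = node
  }
  return { head, tail }
}

function buggyPop (list) {
  const node = list.tail
  list.tail = node.prev
  // BUG: list.tail.next is never set to null here
  return node.val
}

function toArray (list) {
  const out = []
  for (let n = list.head; n; n = n.next) out.push(n.val)
  return out
}

const list = makeList([1, 2, 3])
console.log(buggyPop(list)) // 3 -- matches Array.prototype.pop()
console.log(toArray(list))  // [ 1, 2, 3 ] -- the element is still there!
```

This is exactly the kind of divergence that the toArray() comparison clause catches after every operation.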


When confronted with a nearly “ideal” case like this one, property-based testing not only saves us developer time when writing the tests but also simplifies the test code, which saves developer time when reading the codebase.

For comparison purposes, the test file explained above also contains standard test cases with assertions (scroll down past the property-based block). We can see how we’re letting jsverify do all that work of generating test cases for us, since the concrete tests written by hand can easily be generated automatically.

This approach can be very useful in code-challenge type of environments where, in most cases, I/O is absent and the evaluation has very thorough test cases that verify edge cases which you might accidentally omit given time limitations.

However, real-world code is often messier than we would wish, reducing the opportunity for easily defining properties that give us near-perfect test coverage. A good approach here is to look for pieces of functionality that are complex enough to warrant their own function, and to construct more generic pure functions that encode the logic, which can potentially be re-used or simply tested much more easily.
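As a hypothetical illustration of that refactoring (the function names here are ours, not from the article's codebase), consider splitting an I/O-bound task into an impure shell and a pure core; only the core needs property-based tests:

```javascript
// Impure, hard to property-test (depends on the filesystem):
//   const fs = require('fs')
//   const sumFromFile = path => aggregate(fs.readFileSync(path, 'utf8'))

// Pure core, easy to property-test: parse newline-separated numbers
// and sum them, ignoring lines that are not numeric.
function aggregate (text) {
  return text.split('\n')
    .map(Number)
    .filter(n => !Number.isNaN(n))
    .reduce((a, b) => a + b, 0)
}

console.log(aggregate('1\n2\n3')) // 6
```

A property for the pure core could then assert, for example, that for any generated array of numbers, aggregate(arr.join('\n')) equals the sum of arr.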