The time a script takes to run with Ghost Inspector can be frustrating, particularly while you are authoring or debugging tests and need quick feedback. There’s also the 10-minute maximum run time to consider. In this article, we outline methods for reducing suite run time.
Analyse the time between steps
Once test script authoring has been completed, analyse a successful test run, paying particular attention to the timestamps on each step. Look for significant time gaps to help you focus on where to optimise your script.
Remove Pause steps and use explicit waits
Ghost Inspector already hints at this in the UI for Pause steps, and covers it in the documentation. In almost all cases, there should be no need for a Pause step: an explicit wait through an assertion achieves the same result without burning the full pause duration on every run. There may be occasions where the Pause step is still useful – for example, waiting for an email you have just triggered to be delivered – but the step type should seldom be used.
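In Ghost Inspector the explicit wait is simply an assertion step, but the underlying idea is tool-agnostic. As a minimal sketch in plain Python (the `wait_for` helper and the simulated element are illustrative, not part of Ghost Inspector), an explicit wait polls for a condition and proceeds the moment it holds, whereas a pause always sleeps for its full duration:

```python
import time

def wait_for(condition, timeout=10.0, poll_interval=0.25):
    """Poll `condition` until it returns truthy or `timeout` elapses.

    This is the essence of an explicit wait: we continue as soon as the
    condition holds, instead of always sleeping for a fixed duration
    the way a Pause step does.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll_interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Simulate an element that "appears" roughly 0.5s into the run.
appeared_at = time.monotonic() + 0.5
element_present = lambda: time.monotonic() >= appeared_at

# Returns shortly after the element appears – no fixed 10s pause needed.
wait_for(element_present, timeout=5.0)
```

If the element typically appears in half a second, the explicit wait costs half a second; a Pause step sized for the worst case costs its full duration on every single run.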
Remove optional steps
If you have optional steps, you’ll notice when reviewing your script run that the entire timeout duration is spent waiting for that optional selector to appear. Not only does this waste precious run time, but it also suggests a possible flaw in your test: you should almost always know whether to expect an element to be present. If you only care whether one element or another is present (for an assertion step, perhaps), consider using backup selectors to save time.
If the presence of the element is critical to the success of the journey, then you should know under what conditions it will be present, and can use conditional steps to assert it only under those conditions. Conditional steps are perhaps the most significant recent addition to Ghost Inspector – they let us describe the conditions the software under test uses to determine the path taken. We’ll cover these in more detail in a future article.
Permit tests to run concurrently
You should never rely on the tests in your suite running sequentially – this adds a huge execution-time overhead. Usually, concurrency is prevented because the site under test changes as a result of one test script, and another script is sensitive to those changes. While we won’t go into detail here (perhaps a future article will), it should be possible to make all your scripts resilient to the context in which they run. Let’s consider a couple of example scenarios:
The problem – All tests use the same customer account. Running the tests concurrently results in the basket continuously changing, becoming a moving target.
The solution – Give each script its own unique account. There shouldn’t be a need for the actions in one script to influence another – they should be standalone.
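The unique-account approach can be sketched in plain Python (the helper, the `gi-test` prefix, and the email domain are illustrative assumptions; the registration call itself is left out):

```python
import uuid

def unique_test_account(prefix="gi-test"):
    """Generate throwaway credentials so concurrent scripts never share state."""
    run_id = uuid.uuid4().hex[:12]
    return {
        "email": f"{prefix}+{run_id}@example.com",
        "password": uuid.uuid4().hex,  # random per-run password
    }

account = unique_test_account()
print(account["email"])  # e.g. gi-test+3f9c...@example.com
```

Because every run registers (or seeds) its own account, a basket modified by one script can never become a moving target for another.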
The problem – A selector in Script A relies upon the order placed in the journey being the most recent when we access the order in the back office. If Script B runs at the same time and completes its order before Script A reaches the back office, the wrong order will be accessed.
The solution – Avoid selectors that depend on position or recency. Capture a unique identifier for the order (such as the order number) when it is placed, and use that to locate it in the back office.
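The usual remedy for this second scenario is to stop keying on recency and instead key on something unique to the journey, such as the order number captured earlier in the script. Sketched in plain Python with a hypothetical list of back-office rows (none of these names are Ghost Inspector APIs):

```python
# Hypothetical back-office order list, newest first. Under concurrency,
# Script B's order may land on top of Script A's.
orders = [
    {"number": "ORD-1002", "customer": "script-b@example.com"},
    {"number": "ORD-1001", "customer": "script-a@example.com"},
]

def find_order_by_position(rows):
    """Fragile: 'the most recent order' is a moving target under concurrency."""
    return rows[0]

def find_order_by_number(rows, order_number):
    """Robust: key on the unique order number captured at checkout."""
    return next(row for row in rows if row["number"] == order_number)

# Script A captured "ORD-1001" when it placed the order, so it still
# finds its own order even though Script B's order is now on top.
assert find_order_by_number(orders, "ORD-1001")["customer"] == "script-a@example.com"
```

In a Ghost Inspector script the same idea translates to extracting the order number into a variable at checkout and using it in the back-office selector, rather than selecting the first row.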
Review your global step timings
At Script and Suite level, there are test defaults for step timing. These determine a built-in delay between every step (Step Delay) and how long the test waits for an element to be located (Element Timeout).
The impact of the Step Delay multiplies with the number of steps in the script, so we want the lowest value possible. This is our implicit wait value. If we’re using explicit waits to ensure the element of interest is present on the page, then we shouldn’t need a Step Delay at all; increasing the implicit wait / Step Delay setting should really be reserved for debugging scripts.
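The multiplication is worth making concrete. With illustrative numbers (a 50-step script; the figures are examples, not Ghost Inspector defaults):

```python
steps = 50

def step_delay_overhead(step_delay_ms):
    """Total time added to a run purely by the implicit per-step delay."""
    return steps * step_delay_ms / 1000  # seconds

print(step_delay_overhead(250))   # 12.5 seconds of pure waiting
print(step_delay_overhead(1000))  # 50.0 seconds – a big slice of the 10-minute cap
```

A delay that feels negligible on a single step quietly adds tens of seconds across a whole script, and minutes across a suite.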
If the Element Timeout is a high value and the script fails due to a failed validation or interaction step, we have to wait for the full timeout before we receive feedback that the test has failed. This is not ideal, as our aim should be to receive results as soon as possible. While it may be tempting to reduce the timeout to a very low value, note that our explicit wait verification steps are also bound by this threshold. We are therefore aiming to set this value as low as possible without compromising the successful running of our scripts. Looking at the step timings for your explicit/fluent wait validation steps may indicate the value to choose.
Remove unnecessary steps from your script
One thing to consider, particularly if you created your script with the recorder plugin, is how lean your script is. While you wouldn’t expect any of your assertion steps to be extraneous, some of your interaction steps may be. Take a look through your script step by step – a good opportunity to perform this analysis is while commenting each step (if you haven’t already done so).