Ghost Inspector offers a limited implementation of Screenshot Comparison. This gives us the opportunity to implement some form of visual regression testing without using a specialist tool (e.g. BackstopJS, Percy). In this post, we look at the merits of the functionality and consider some examples of potential usage.
Summary
- Recommendations
- Worked Example
- Ghost Inspector feature documentation link – https://ghostinspector.com/docs/comparing-screenshots/
Recommendations
Expect the tests to require some oversight
The baseline screenshot is critical to the value of the comparison, and may need to be updated fairly frequently. Every failure should be scrutinised – decide whether there is a genuine failure (an unexpected change) or whether the baseline screenshot can simply be updated. If you find you are doing this frequently, consider the advice below, or question whether the test is yielding any value.
Check common elements separately
If there are elements that are present on most pages (for example, the header and footer), it makes sense to check these in isolation. You can then exclude these elements from the other comparisons so a failure in a common element doesn’t also fail all other tests. As with all automation, we should always be looking to limit maintenance and the level of manual intervention wherever possible.
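One way to keep this organised is to maintain a single list of common-element selectors and derive each page-level exclusion list from it. A minimal Python sketch – the selector names here are hypothetical, not taken from any real site:

```python
# Common elements get their own dedicated comparisons, so page-level
# comparisons should always exclude them. Selectors are hypothetical.
COMMON_SELECTORS = ["header.site-header", "footer.site-footer"]

def page_comparison_exclusions(extra_exclusions=None):
    """Build the exclusion list for a page-level screenshot comparison.

    Common elements are always excluded because they are covered by
    their own comparisons; page-specific dynamic content is appended.
    """
    return COMMON_SELECTORS + list(extra_exclusions or [])

exclusions = page_comparison_exclusions(["div.ad-banner"])
```

This way, a change to a common element fails exactly one test, rather than every comparison that happens to include it.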
Exclude dynamic content from the comparison
You’re going to give yourself a serious maintenance overhead if the page/area you’re comparing is constantly changing. If dynamic content is a constant cause of comparison failure, then you need to accept that it’s not helpful to have that content included in the comparison. Thankfully, GI lets you exclude a list of selectors from the comparison (it will make them invisible, retaining the page layout).
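To make the exclusion behaviour concrete, here is a sketch of what hiding an element while retaining the page layout amounts to in CSS terms – `visibility: hidden` rather than `display: none`. This is an illustration of the effect described above, not Ghost Inspector’s actual implementation, and the selectors are placeholders:

```python
def exclusion_css(selectors):
    """Generate CSS that hides each excluded element while preserving
    the page layout (visibility, not display, so the element still
    occupies its space in the rendered page)."""
    rules = [f"{sel} {{ visibility: hidden !important; }}" for sel in selectors]
    return "\n".join(rules)

css = exclusion_css(["iframe.ad-slot", "div.live-score-ticker"])
```

Because the layout is preserved, excluding a block of dynamic content still lets the comparison catch shifts in the elements around it.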
Keep the tolerance as low as possible
What’s the value in a test if the tolerance is set at a level where small changes are ignored? Ideally, you should aim for a tolerance as close to zero percent as possible. If your tests require a tolerance approaching or greater than 5%, you should question whether the comparison is valuable – that 5% change could potentially be critical. Sometimes the page inexplicably shifts a few pixels horizontally, causing a false failure; it can be tempting to bypass this by increasing the tolerance, but running the comparison in a different browser (Firefox, for example) can often minimise the shift instead.
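To see why even a small tolerance can hide a lot, it helps to think of the comparison as a percentage of changed pixels. A minimal sketch of that idea, using plain lists of grayscale values rather than real screenshots (this is an illustration of the concept, not Ghost Inspector’s actual algorithm):

```python
def diff_percentage(baseline, candidate):
    """Percentage of pixels that differ between two same-sized
    grayscale frames, given as lists of rows of 0-255 ints."""
    total = sum(len(row) for row in baseline)
    changed = sum(
        1
        for b_row, c_row in zip(baseline, candidate)
        for b, c in zip(b_row, c_row)
        if b != c
    )
    return 100.0 * changed / total

def comparison_passes(baseline, candidate, tolerance_pct=0.0):
    """A comparison passes when the diff is within the tolerance."""
    return diff_percentage(baseline, candidate) <= tolerance_pct

# 100-pixel frame where exactly one pixel has changed: a 1% diff.
baseline = [[0, 0, 0, 0] for _ in range(25)]
candidate = [[0, 0, 0, 0] for _ in range(24)] + [[255, 0, 0, 0]]
```

With a zero tolerance this fails; with a 5% tolerance it passes – along with any other change touching up to 5% of the page, which on a large page is a lot of pixels.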
Don’t just check the desktop breakpoint
It’s probably fairly obvious, but there’s no reason to limit yourself to a single breakpoint. Generally, it’s best to prioritise the main differences between breakpoints (navigation changes, for example). While Ghost Inspector stores baseline screenshots for each resolution the test is run at, it’s generally cleaner to create a new suite per breakpoint, setting the capture resolution at the suite level. In many cases, you’ll be able to start the new suite as a duplicate of the first to save some time.
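The duplicate-then-override workflow can be sketched as follows. The suite structure here is a hypothetical stand-in (Ghost Inspector suites are configured through its UI/API, not this shape) – the point is simply that only the capture resolution changes between breakpoint suites:

```python
import copy

def duplicate_suite(suite, name, width, height):
    """Start a new breakpoint suite as a copy of an existing one,
    then override only the name and the capture resolution."""
    new = copy.deepcopy(suite)
    new["name"] = name
    new["viewport"] = {"width": width, "height": height}
    return new

# Hypothetical desktop suite; the mobile suite shares its tests.
desktop = {
    "name": "Header - desktop",
    "viewport": {"width": 1366, "height": 768},
    "tests": ["header comparison", "footer comparison"],
}
mobile = duplicate_suite(desktop, "Header - mobile", 375, 667)
```

Keeping the suites otherwise identical means a test fix only has to be understood once, even if it then needs applying to each breakpoint suite.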
Consider specialist tools
While Ghost Inspector provides a great entry point to figuring out how to get value from visual comparison, it’s ultimately quite a limited implementation. Once you’ve decided you’re getting value from the tests, and would like to invest further, perhaps look at the numerous script- and configuration-based tools available specifically to handle visual regression testing. This article from softwaretestinghelp.com covers a variety of options.
Worked Example
Finding common elements
Let’s use mancity.com as our example. As with many sites, the header and footer appear on all pages. There is a slight variation in the header – the current section is highlighted, but let’s decide we aren’t concerned with this.
We could just focus on the content area of the header, but let’s use the full container to make sure we also compare alignment. We could use the whole header element, but the ad at the top is dynamic, and doesn’t always appear, so it’s better to rule it out.
For the footer, there are two sections we’re interested in. Both have static content, so we can be general with our targeting.
Finding dynamic content to exclude
Let’s take a look at an article to see how we might want to approach the content. First of all, we want to make sure we’re excluding the header and footer, as we already cover those separately. We can see there’s a container for the article content, so let’s focus on that.
While we now have our target, there’s most likely a large amount of dynamic content we need to exclude from the comparison to avoid continual failure. While there are many examples in a mancity.com article, let’s just focus on a few here. The first is common on content sites – an ad served by Google in an iframe. We know that this is going to be subject to frequent change. There are plenty of options for a selector to exclude the ad, but let’s just pick an easy one.
There are a couple of blocks at the base of an article designed to display related and popular content. Both of these are likely to be dynamic. In the case of the Further Reading section, we might as well keep the section heading in the comparison, as it’s unlikely to change, but the article link tiles need to be excluded.
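Pulling the worked example together, the article comparison boils down to one target selector and a list of exclusions. The selectors below are hypothetical placeholders standing in for the real mancity.com markup shown in the screenshots:

```python
# Hypothetical selectors for the article comparison described above.
ARTICLE_TARGET = "div.article-content"

ARTICLE_EXCLUSIONS = [
    "header.site-header",        # covered by its own comparison
    "footer.site-footer",        # covered by its own comparison
    "iframe[id^='google_ads']",  # Google ad iframe, changes on every load
    ".further-reading a",        # article tiles only; the heading stays in
    ".popular-content a",        # popular-content tiles, also dynamic
]
```

Note the Further Reading exclusion targets only the link tiles, so the static section heading remains part of the comparison.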
Looking for key differences between breakpoints
The site navigation and footer are likely candidates for layout changes on most sites, and mancity.com is no different. There are other layout changes within the article, but let’s just focus on these two elements for this example.
Configuring the tests in Ghost Inspector
Suite settings
Header/navigation configuration
Header/navigation steps
Header/navigation mobile variation
Footer configuration
Footer steps
Footer mobile variation

Article configuration
Article steps
Article baseline screenshot with exclusions
