Automated QA for Mobile Apps

With answers to the toughest challenges we’ve solved

Who wants to spend their day working on the same task over and over? Not me! Fortunately, automation in mobile QA is on the rise, and when done properly it is a long-term saver of both time and money. Here at Timehop & Nimbus, our QA testing is fully automated, but it wasn’t easy to get here.

With millions of daily users across our Android and iOS hybrid apps, we needed a scalable, cross-platform solution. Mobile testing is paramount to our success because any downtime or bug-related issue could leave millions of users unable to access their memories or, even worse, cause them to lose their streaks! Furthermore, automating our testing was a necessity given our fast pace of feature deployment and rapid user growth. In this post, I’ll talk about some of the biggest challenges we encountered while setting up our automated framework and walk through the solutions we came up with.

The layers of Timehop’s iOS app showcasing elements that can be called for testing



Cross-platform challenge

In the mobile world, we support not only Android and iOS devices but also their many screen resolutions and OS versions. This leads to a combinatorial matrix of device configurations that must be supported, and without automation, testing at this scale would be impossible.

The challenge started with creating this framework on an existing app with millions of daily active users (more on that later!). Timehop needed a framework that would minimize redundancy while maximizing test coverage. Writing one test suite that runs on both platforms, with allowances for minor platform-specific differences, is key to efficiency: write a test once and it can run on any OS version and screen size.

An early blocker we encountered was how to refer to the elements on screen. Each platform has its own way to target UI elements: Android relies on the content description, while iOS uses the accessibility ID. We decided to use Appium in our framework for its ability to manage both. And because we reuse layouts on Android, we could not simply apply an arbitrary ID to an element and expect it to be selected correctly, since multiple instances of that control could be on screen at the same time.

We created a simple function that takes a unique, agreed-upon naming format and mutates it based on the OS being tested at the time.

// Builds a platform-appropriate selector from an agreed-upon
// "ScreenName" + "ElementName" pair.
export function control (screen, name) {
    if (driver.isAndroid) {
        // Convert camel case to snake case, e.g. 'ShareButton' -> 'share_button'.
        const id = '.*' + name.split(/(?=[A-Z])/).join('_').toLowerCase();
        const prefix = screen.split(/(?=[A-Z])/).join('_').toLowerCase();
        // UiAutomator selector: match the screen via its content description,
        // then narrow down to the element via its resource ID.
        return 'android=new UiSelector().descriptionStartsWith("' + prefix +
            '").fromParent(new UiSelector().resourceIdMatches("' + id + '"))';
    }
    if (driver.isIOS) {
        // iOS accessibility ID selector, e.g. '~MemoriesScreenShareButton'.
        return '~' + screen + name;
    }
}


The structure of this accessibility ID was crucial, since it has to be distinctive enough to prevent collisions yet clear enough to identify the element you want to call. We decided to use the name of the page where the element resides, followed by the canonical name of said element, followed by its type.
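To make the convention concrete, here is a hedged example of how the helper resolves a hypothetical element (the screen and element names are illustrative, not our actual identifiers):

// Hypothetical usage of control() inside an async WebdriverIO test body.
const selector = control('MemoriesScreen', 'ShareButton');

// On iOS this resolves to the accessibility ID selector:
//   ~MemoriesScreenShareButton
// On Android it resolves to the UiAutomator selector:
//   android=new UiSelector().descriptionStartsWith("memories_screen")
//     .fromParent(new UiSelector().resourceIdMatches(".*share_button"))
const shareButton = await driver.$(selector);
await shareButton.click();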


Parity challenge

The benefit of having a cross-platform framework is that we can proactively test any number of devices prior to releasing our app to the world. A single test can run against, say, an iPhone 6 on old firmware without us physically having that device in hand. Now, who wouldn’t want that?
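For context, here is a rough sketch of the per-platform Appium capabilities such sessions might use; every device name, OS version, and app path below is an illustrative assumption, not our actual configuration.

// Illustrative Appium capabilities for one iOS and one Android session.
const iosCaps = {
    platformName: 'iOS',
    deviceName: 'iPhone 6',
    platformVersion: '10.3',      // the 'old firmware' scenario
    automationName: 'XCUITest',
    app: '/path/to/Timehop.app',  // placeholder path
};

const androidCaps = {
    platformName: 'Android',
    deviceName: 'Pixel 3',        // placeholder device
    platformVersion: '9',
    automationName: 'UiAutomator2',
    app: '/path/to/timehop.apk',  // placeholder path
};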

Here is a simple test that logs a user into our Timehop app on two devices in parallel.
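Below is a minimal sketch of what that test could look like, assuming a WebdriverIO + Appium setup (where describe and it are the test runner’s globals) and the control() helper from earlier; the module path, screen names, element names, and credentials are all placeholders:

import { control } from './helpers'; // assumed location of control()

// Hypothetical login test; it runs unchanged against an iOS or Android session.
describe('Login', () => {
    it('logs an existing user in', async () => {
        const email = await driver.$(control('LoginScreen', 'EmailTextField'));
        await email.setValue('qa@example.com');

        const password = await driver.$(control('LoginScreen', 'PasswordTextField'));
        await password.setValue('not-a-real-password');

        const submit = await driver.$(control('LoginScreen', 'SubmitButton'));
        await submit.click();

        // Landing on the memories feed confirms the login succeeded.
        const feed = await driver.$(control('MemoriesScreen', 'FeedContainer'));
        await feed.waitForDisplayed({ timeout: 10000 });
    });
});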



Since Timehop comes in iOS and Android flavors, which have their inherent differences, issues often arise when creating a single test suite. Each platform behaves differently, and some design elements, copy, and interactions are better tailored to their native platform. That said, whenever possible, maintaining parity of the product, interactions, and design elements across platforms will help keep your test suites running smoothly.

Side-by-side screenshot of iOS and Android performing a test in parallel


Aspiring to achieve and maintain parity is also a great way to discover bugs between your apps. Reducing the discrepancies between operating systems makes for more efficient test run times and creates a benchmark for each build to be measured against. Starting with the product specification, developers on both platforms should be building to the same set of standards, and it is the responsibility of the QA team to ensure that this is done correctly.

If you are planning on creating an app for multiple platforms, having parity in mind from the beginning results in a more consistent product and a more streamlined automated test suite.


Adding unique element locators to a pre-existing app

This might be the biggest challenge companies face. Most apps are not created with automated QA in mind, so the time it takes to go back into the code and assign a unique element locator, content description, and accessibility ID to each element can seem tedious and daunting.
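As one hedged example: if a hybrid app happens to be built with React Native (just one possibility; fully native views would need contentDescription and accessibilityIdentifier set instead), labeling an element for both platforms can be as small as this:

import React from 'react';
import { TouchableOpacity, Text } from 'react-native';

// Hypothetical component; the identifiers follow the naming convention above.
export function ShareButton({ onShare }) {
    return (
        <TouchableOpacity
            // testID becomes the accessibility ID on iOS.
            testID="MemoriesScreenShareButton"
            // accessibilityLabel becomes the content description on Android.
            accessibilityLabel="memories_screen_share_button"
            onPress={onShare}>
            <Text>Share</Text>
        </TouchableOpacity>
    );
}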

The Timehop app has been live in the App Store and Play Store for several years, which means there were hundreds of elements that needed to be labeled. The easier-to-find ones were simple to update; however, some dynamic elements posed a challenge that took valuable engineering time and purpose-built functions to solve. Done correctly, this leaves the QA engineer with an app full of elements that can be tested efficiently on every pull request and app release.

The more you can test, the more stable your app will be! Taking things a step further, update the development process to ensure that any new element is labeled with the metadata necessary to target it in an automated test. Making this a requirement going forward streamlines your team’s ability to move a feature from QA to production.

One of the most impactful outcomes of creating an automated testing framework has been the ability to catch issues before builds are released to QA for testing. This maintains the quality of an app while minimizing the cost of development, leaving more resources to focus on improving the user experience and building new features. If you are looking to add automation to your company's QA team, I hope some of this information is helpful to you. 
