Podcast

110: Mobile Testing Coverage Optimization with Eran Kinsbruner

By Test Guild


Coming up with a solid plan for the right test coverage mix for your mobile app testing efforts can sometimes feel like a black art. And as with all black arts, you usually end up paying the price in bad results.


On today’s show we’ll test talk with Eran Kinsbruner, Director and Mobile Technical Evangelist at Perfecto, about mobile testing coverage optimization using a free online resource called the Digital Test Coverage Optimizer. Eran will also share some of his best tips to ensure you’re consistently creating mobile test automation awesomeness.

About Eran


Eran has been in the testing space, mostly mobile, for the last 17 years. Along the way he's been a CTO for mobile testing and a director of QA. Eran also worked at Sun Microsystems back in the J2ME era, then moved to Symbian and other platforms. For the last couple of years, he's been the Director and Mobile Technical Evangelist at Perfecto, working on thought-leadership material and hands-on investigation of mobile testing, from a functional testing perspective as well as non-functional testing: performance, user-condition testing, and so forth.

Quotes & Insights from this Test Talk

  • Organizations today that are struggling with coverage also take analytics into account. They either do it through a real-user monitoring solution, or they simply embed an SDK into the mobile application which reports back to a dashboard. They analyze that dashboard on an ongoing basis and see which devices and which operating systems are hitting their websites or servers on a daily basis, and from which geographies. Based on this data, they are making decisions. So that's another way of looking into the coverage conversation. [See the first sketch after this list.]
  • First of all, in all of these geographies the end users have different usage scenarios, and they are using different network carriers. You have different carriers, different operators operating in the UK and the US, a different LTE connection and a different 4G connection. It does matter where your users are operating and which devices are there, because, as I mentioned, I can assure you that the Vodafone UK flavor of Android 6 or 5 will be different from what you will see from Verizon here in the US. They are tweaking it, and once they are tweaking it, it's not exactly the same.
  • So you have some market events which are kind of predictable; nothing changes them. September 2016, more or less, is the iOS 10 release. I know it today, five or six months in advance. I already have a heads-up: I know that if I am about to launch my mobile application, let's say two or three months from today, I might want to wait for June (June is just around the corner). I'm going to see the iOS 10 beta, which is public, in just a few weeks. I can already start testing on this iOS beta and make sure that whenever I deploy my application, at least I have seen some snapshot of how it works on the next generation of Apple. Same with Android N, Android's next release. We already have two different previews which Google has made available, and you can already execute them on real devices. All the Google Nexus devices today are able to support the developer preview of Android N, which is going to launch in October. [See the second sketch after this list.]
  • I think it works as long as you are recording your performance test from a mobile device, where you have the user agent itself, so you know that this was recorded on a Samsung device or an iPhone, and you manage it correctly within the load test framework. But again, I agree with you: there are a lot of practitioners out there who are challenged by this scenario, and my recommendation to them is to start small. Take one scenario on one mobile device, see that it works and doesn't mix with or confuse your other existing test scenarios from web or from other performance testing, then scale it up and try to analyze it on an ongoing basis. [See the third sketch after this list.]
  • There is another approach, which I am also seeing a lot, and this is pre-production monitoring. It's still done in production-like environments, but in a synthetic way. This is the difference between real-user monitoring and synthetic monitoring: synthetic monitoring does the same thing, but in a closed lab where you can control everything and mimic the production environment. Once the application is ready, you set up a very small number of devices; you don't need too many. You take two Androids, two iPhones, and so forth, and you continuously run your test scenarios, 24/7, on the most important transactions. You collect the data. The data can be performance degradation: you put some thresholds, and if they are exceeded you get an alert. If the application crashes, you get an alert, and you get it on an ongoing basis. [See the fourth sketch after this list.]
  • So on top of the other pieces you talked about (coverage, monitoring, and non-functional testing), there is also the robustness of your test automation. At the end of the day, the test cycles are very limited in time; I'm hearing of cycles which need to run within 48 hours, and sometimes even less. For these test scenarios or test execution cycles to be executed on an ongoing basis, on every commit the developer is doing, you need to have robustness in your test automation, and one way of getting this robustness is to put more validations into your test automation scripts. There is no doubt that using object IDs, either through XPath or other identifiers, is the best way today in mobile testing. But you also want to make sure that you are visually able to analyze the expected next screen before you are moving to the next test. If you put some prerequisites between the test scenarios and put some validation points on a visual basis, you get two in one: on the one hand, the ability to transition safely to the next test step. For example, after login you want to make sure your login was successful, so you do a visual validation. If it didn't succeed, you simply stop the execution and report the defect back to the engineers, to the developers. [See the fifth sketch after this list.]
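
The first sketch, for the analytics quote above: a minimal, hypothetical Python example of the kind of usage event an embedded analytics SDK might report back to a dashboard. The endpoint URL and field names are placeholders for illustration, not a real Perfecto API.

```python
import requests  # pip install requests

# Hypothetical dashboard endpoint -- a placeholder, not a real API.
ANALYTICS_ENDPOINT = "https://example.com/analytics/events"

def report_usage_event(device_model, os_name, os_version, country):
    """Send one usage event so the dashboard can aggregate which
    devices, OS versions, and geographies actually hit the app."""
    payload = {
        "device_model": device_model,  # e.g. "Samsung Galaxy S7"
        "os_name": os_name,            # e.g. "Android"
        "os_version": os_version,      # e.g. "6.0.1"
        "country": country,            # e.g. "GB"
    }
    response = requests.post(ANALYTICS_ENDPOINT, json=payload, timeout=5)
    response.raise_for_status()

report_usage_event("Samsung Galaxy S7", "Android", "6.0.1", "GB")
```

Aggregating these events over time is what lets a team rank devices and OS versions by real traffic rather than by guesswork.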
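The second sketch, for the beta/preview quote: pointing a test at a developer-preview OS build is mostly a matter of capabilities. This assumes the classic desired-capabilities API of the Appium Python client; the device name, platform version, and app path are placeholders.

```python
from appium import webdriver  # pip install Appium-Python-Client

# Placeholder capabilities targeting a developer-preview build;
# adjust to whichever preview your lab devices are enrolled in.
caps = {
    "platformName": "Android",
    "platformVersion": "N",            # Android N developer preview
    "deviceName": "Nexus 6P",          # Nexus devices support the preview
    "app": "/path/to/app-under-test.apk",
}

driver = webdriver.Remote("http://localhost:4723/wd/hub", caps)
try:
    pass  # run a smoke scenario against the preview build here
finally:
    driver.quit()
```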
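The third sketch, for the performance-testing quote: replaying one recorded transaction with the device's own user agent so the server sees realistic mobile traffic. The URL and user-agent string are placeholders; swap in whatever your own recording captured.

```python
import requests  # pip install requests

# A recorded mobile user agent (here, a Samsung Galaxy S7 placeholder).
MOBILE_UA = ("Mozilla/5.0 (Linux; Android 6.0.1; SM-G935F) "
             "AppleWebKit/537.36 (KHTML, like Gecko) "
             "Chrome/50.0.2661.86 Mobile Safari/537.36")

session = requests.Session()
session.headers["User-Agent"] = MOBILE_UA

# Start small: one scenario, one device profile.
resp = session.get("https://example.com/checkout", timeout=10)
print(resp.status_code, resp.elapsed.total_seconds())
```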
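The fourth sketch, for the synthetic-monitoring quote: a bare-bones 24/7 loop that runs a key transaction, checks it against a threshold, and raises an alert. The threshold, interval, and alert hook are assumptions to be tuned per project.

```python
import time

RESPONSE_THRESHOLD_SECONDS = 3.0  # placeholder threshold

def run_key_transaction():
    """Drive the most important transaction (e.g. login + search) on a
    lab device -- via Appium or similar -- and return its duration."""
    start = time.monotonic()
    # ... execute the scenario on the device here ...
    return time.monotonic() - start

def alert(message):
    print("ALERT:", message)  # wire to email/Slack/pager in practice

while True:  # continuous run on a small, fixed pool of lab devices
    elapsed = run_key_transaction()
    if elapsed > RESPONSE_THRESHOLD_SECONDS:
        alert(f"Transaction took {elapsed:.1f}s, above threshold")
    time.sleep(60)  # pause between synthetic runs
```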
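The fifth sketch, for the robustness quote: a visual checkpoint between test steps, assuming a driver that exposes get_screenshot_as_png() (as Appium and Selenium drivers do). The byte-for-byte comparison is deliberately crude; real projects would use a fuzzy image diff.

```python
def visual_checkpoint(driver, baseline_path):
    """Compare the current screen to a stored baseline screenshot.

    Works with any Appium/Selenium driver exposing
    get_screenshot_as_png(). An exact byte compare is the crudest
    possible check; swap in a perceptual diff for production use.
    """
    current = driver.get_screenshot_as_png()
    with open(baseline_path, "rb") as f:
        baseline = f.read()
    return current == baseline

# Hypothetical use after the login step:
#   if not visual_checkpoint(driver, "baselines/post_login.png"):
#       raise AssertionError("Post-login screen mismatch; stopping "
#                            "the run and reporting the defect")
```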

Tool to help optimize your #mobile #testing coverage: http://tools.perfectomobile.com/test-coverage-optimizer/

Resources

Connect with Eran

May I Ask You For a Favor?

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page.

Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

Sponsored by Sauce Labs

Test Talks is sponsored by the fantastic folks at Sauce Labs. Try it for free today!
