How to Debug Your Test Script With Perfecto
By Julius Mong | October 8, 2020 | DevOps, Automation

Your web and mobile app testing is only as good as your test scripts. To create and maintain failsafe tests that highlight anomalies quickly, it is vital to debug your test scripts in a systematic way. You need a protocol in place for your test engineers to follow when debugging test scripts. This ensures a consistent approach to building and sustaining a test framework that will only grow and scale over time as new OS versions, browser versions, and device models are rolled out. Keep reading to learn how to debug your test scripts.

Table of Contents
1 - Quality Test Scripts Are Critical
2 - How to Debug With IntelliJ
3 - Debug Across Layers in Test Development
4 - Prep Your Test Scripts for Success
5 - Try Testing With Perfecto

Quality Test Scripts Are Critical

The success of a test automation strategy depends on the quality of its test suites. It is vital that the tests themselves are fail-proof and of high quality, so that you can make quick decisions based on success rates in CI/CD pipeline runs.

A high success rate gives QA managers and automation engineers confidence in the test automation platform. It allows them to trust that any issues reported are genuine functionality defects rather than "noise" in the form of scripting errors, backend issues, and the like.

What this means is that the tests themselves should also be thoroughly, for lack of a better word, tested. These tests should gracefully handle scenarios that could otherwise cause noise in the results, reducing the number of false negatives they raise.

How to Debug With IntelliJ

Most IDEs let you debug in much the same way; in our example we will use IntelliJ. Debug mode means running the code as you normally would, but stopping at breakpoints in the code. This allows you to step through it one line at a time, jumping into and out of the various functions or conditions your code defines, so you can see exactly where your code goes when it encounters different situations (OS, device, etc.).

To enter Debug mode in IntelliJ, execute the test with Debug rather than Run.

Before you start to debug, place markers around your code where you want execution to pause. To place a breakpoint in most IDEs that support debugging, simply click on the space between the line number and the line of code.

Once your code is running in debug mode, it executes up to your first breakpoint. From then on you can choose to Step Over, Step Into, Step Out, and so on. You can also instruct the code to Run to Cursor if you want to jump ahead, among other methods you can apply while debugging.

In our example, we have code in place to identify the type of driver being used. In another example, we make our code behave differently on iOS and Android when zooming into a map view on the AUT (application under test). We can make this even more failsafe by testing it on different iOS and Android versions (and, in Android's case, even specific model families and brands). This ensures the application under test behaves exactly as intended across all target platforms.
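This branching pattern can be sketched in a few lines. The example below assumes the standard Appium Java client; the class name, the gesture hints in the comments, and the logging are illustrative placeholders rather than Perfecto's own sample code.

```java
import io.appium.java_client.AppiumDriver;
import io.appium.java_client.android.AndroidDriver;
import io.appium.java_client.ios.IOSDriver;

// Minimal sketch: identify the concrete driver type at runtime and branch
// the zoom behavior per platform. The gesture details are left as comments
// because they differ between UiAutomator2 (Android) and XCUITest (iOS).
public class PlatformAwareZoom {

    public static void zoomIntoMap(AppiumDriver driver) {
        if (driver instanceof AndroidDriver) {
            // Android path: e.g. a pinch/spread gesture tuned for the Android
            // map view, possibly varied further by OS version or device model.
            System.out.println("Running Android-specific zoom");
        } else if (driver instanceof IOSDriver) {
            // iOS path: XCUITest exposes its own pinch/zoom commands, so the
            // coordinates and scale factors usually need separate tuning.
            System.out.println("Running iOS-specific zoom");
        } else {
            // Fail loudly instead of producing "noise" in the results.
            throw new IllegalStateException(
                    "Unsupported driver type: " + driver.getClass().getName());
        }
    }
}
```

Placing a breakpoint inside each branch and running the test in Debug mode on an iOS device and then on an Android device is a quick way to confirm that the right path is taken on every target platform.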
Debug Across Layers in Test Development

Now that you have a better idea of how to debug, let's discuss debugging code during test development. This allows you to put in failsafe code as you expand your test suites to handle various potential pitfalls.

In test management, always apply Murphy's law: anything that can potentially go wrong will go wrong. It is never far-fetched to assume that your test will fail in the most unexpected configuration, be it OS version, browser version, device model, etc. It is therefore important to increase test coverage and test against the widest possible collection of target platforms for your application, and on real devices.

Ask any QA engineer and they will happily tell you how many times they have had to battle with developers over issues that only occur on a real device and cannot be reproduced on an emulator or simulator. The process is time-consuming and frustrating for both parties. It is therefore vital that you develop test scripts that work across a wide range of OS versions, browsers, devices, and environmental variables. Here, we visualize these factors as a cube.

Imagine you have a suite of 100 tests designed to work on Chrome 85 on Windows 10. You then need to test on Firefox, Safari, Edge, and their corresponding N-6 versions, so you now have a few more layers of coverage. Now imagine you also need to cover Windows 8.1, macOS versions, and all of the other browsers you need to support on those macOS platforms, plus the N-6 versions of each.

The test coverage therefore expands quickly into the form of a cube. Add on top extra layers of environmental elements, such as poor or lost connectivity and fluctuating bandwidths, and you have a behemoth of a cube to manage that will only ever increase in size.

By debugging your code across these layers during test development, you will quickly see and learn how different layers of the cube can produce such different behaviors.

📕 Related Resource: Learn how to Move Your Development Forward With Reverse Debugging

Prep Your Test Scripts for Success

It is easy to assume that tests that work on one platform will work on another version of the same platform (e.g., iOS 13 tests will work on iOS 14, and so on). In reality, that is not always the case.

It is also common for one failed test to cause subsequent tests in your pack to fail as well: an event that the next test assumed had happened never did, perhaps because the app crashed or the device froze. It is therefore vital that no single test in your pack assumes it starts in a state set up by a previous test. In other words, avoid any possibility of problems from one test affecting subsequent ones.

By debugging your tests across a range of devices, walking through the code step by step, and seeing the result of each line as it executes, you can visualize what your code does and why it behaves differently on each layer of the cube. This helps you put in further failsafe code to reset the device and/or app to a known state before each test starts.
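One common way to enforce that reset is a hook that runs before every test and forces the app back to a known starting point. The sketch below uses TestNG with the Appium Java client; the package identifier is a placeholder, and how the driver itself is created (locally or against a cloud device) is left out.

```java
import io.appium.java_client.android.AndroidDriver;
import org.testng.annotations.BeforeMethod;

// Sketch of a per-test reset: every test starts from a freshly activated app
// rather than from whatever state the previous test happened to leave behind.
public class BaseMobileTest {

    // Created once in a suite-level setup (not shown here).
    protected AndroidDriver driver;

    // Placeholder package/bundle identifier for the application under test.
    private static final String APP_ID = "com.example.app";

    @BeforeMethod
    public void resetAppState() {
        try {
            // Close the app if it is still running from a previous test.
            driver.terminateApp(APP_ID);
        } catch (Exception e) {
            // The app may already be closed or the session unhealthy;
            // either way, we still attempt a clean relaunch below.
        }
        // Bring the app back to the foreground at its launch screen.
        driver.activateApp(APP_ID);
    }
}
```

The same hook is also a natural place to set a breakpoint and step through in Debug mode when a particular device keeps starting tests in an unexpected state.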
With Perfecto's debugging abilities, mobile app developers can use DevTunnel to connect to a real Android device in the cloud as if it were connected to a local PC, making it debuggable via, for example, the Chrome inspector, ADB shell commands, and IDEs. As an example, a real Samsung Galaxy S10 can be debugged through Chrome's developer tools once the DevTunnel connection is initiated.

Ultimately, you want a list of all the variables in your test scenario across all possible platforms, and code that handles every one of them at each point where they occur.

That does not necessarily mean uninstalling and reinstalling the app on each test run, which would be overkill and unnecessarily lengthen execution time. It simply means assuming nothing will be in the correct state when each test starts. You need a protocol to make sure each test starts in a prepped, consistent state.

Try Testing With Perfecto

Debugging test scripts and executing them across platforms is easier with Perfecto.

Perfecto gives you access to real devices that you can test in the cloud as if they were connected by USB. With full access to the device, as well as its properties, you can test all kinds of device configurations and see how your code works in real time.

And with Perfecto's DevTunnel, you can debug apps remotely in the cloud, line by line, all from your IDE. And you don't need to waste time recreating the test environment to do it.
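For completeness, here is a rough sketch of what opening a session against a Perfecto cloud device can look like with the Appium Java client. The cloud URL pattern and the securityToken capability follow Perfecto's public documentation, but the cloud name, token handling, and device model are placeholders, and newer Appium/Selenium stacks may require these vendor capabilities to be nested under a perfecto:options block.

```java
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.remote.DesiredCapabilities;

import java.net.URL;

// Sketch: open a session on a real Android device hosted in a Perfecto cloud.
// "<your-cloud>" and the token are placeholders for your own tenant.
public class PerfectoCloudSession {

    public static AndroidDriver connect() throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("securityToken", System.getenv("PERFECTO_TOKEN")); // never hard-code tokens
        caps.setCapability("platformName", "Android");
        caps.setCapability("model", "Galaxy S10"); // request a specific real device model

        URL cloud = new URL("https://<your-cloud>.perfectomobile.com/nexus/perfecto/wd/hub");
        return new AndroidDriver(cloud, caps);
    }
}
```

Once the session is open, the driver behaves just as it would against a locally connected device, so the same breakpoints and step-through workflow described above apply without change.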
Give it a try. Start your free two-week trial of Perfecto.

Julius Mong
Senior Sales Engineer, Perfecto

Julius has over 20 years' experience in software development, consulting, business development, digital marketing, pre- and post-sales, and QA operations across the software, consumer product, digital media, and digital marketing industries. He specializes in helping enterprises optimize their QA strategies and turn testing into an asset rather than a liability by "shifting testing left."