
Bachelor thesis, 16 ECTS | Information Technology

2017 | LIU-IDA/LITH-EX-G--17/066--SE

Functional and Security testing of a Mobile Application

Sara Westberg

Supervisor : Simin Nadjm-Tehrani

Examiner : Nahid Shahmehri


Copyright

The publishers will keep this document online on the Internet - or its possible replacement - for a period of 25 years starting from the date of publication barring exceptional circumstances. The online availability of the document implies permanent permission for anyone to read, to download, or to print out single copies for his/her own use and to use it unchanged for non-commercial research and educational purposes. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional upon the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement. For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its www home page: http://www.ep.liu.se/.

© Sara Westberg

Students in the 5-year Information Technology program complete a semester-long software development project during their sixth semester (third year). The project is completed in mid-sized groups, and the students implement a mobile application intended to be used in a multi-actor setting, currently a search and rescue scenario. In parallel they study several topics relevant to the technical and ethical considerations in the project. The project culminates in demonstrating a working product and a written report documenting the results of the practical development process, including requirements elicitation. During the final stage of the semester, students create small groups and specialise in one topic, resulting in a bachelor thesis. The current report represents the results obtained during this specialisation work. Hence, the thesis should be viewed as part of a larger body of work required to pass the semester, including the conditions and requirements for a bachelor thesis.

Abstract

A mobile application has been developed to be used for assistance in crisis scenarios. To assure the application is dependable enough to be used in such scenarios, the application was put under test. This thesis investigates different approaches to functional testing and security testing. Five common methods of generating test cases for functional testing have been identified, and four were applied on the application. The coverage achieved for each method was measured and compared. For this specific application under test, test cases from a method called decision table-based testing scored the highest code coverage. Nine bugs related to functionality were identified. Fuzz testing is a simple security testing technique for efficiently finding security flaws, and was applied for security testing of our application. During the fuzz test, system security properties were breached. An unauthorized user could read and alter asset data, and the fuzz test also affected the system's availability. Our overall conclusion was that, with more time, creating functional tests for smaller components of the application might have been more effective in finding faults and achieving coverage.

Acknowledgments

Nilsson and Filip Polbratt - for doing the excellent job of developing, together with us, the system that has been put to test in this thesis. We especially want to thank our supervisor Simin Nadjm-Tehrani for the feedback and help during this project, and Mikael Asplund for the feedback during the start of the project. We also want to thank Rickard Hellenberg and Oskar Gustafsson for their opposition on this thesis.


Contents

Abstract
Acknowledgments
Contents
List of Figures
List of Tables

1 Introduction
1.1 Aim
1.2 Problem statement
1.3 Approach
1.4 Delimitations
1.5 Structure

2 Software testing
2.1 Software testing in general
2.2 Android testing
2.3 Functional testing
2.4 Coverage
2.5 Security testing
2.6 Related work

3 Test implementation
3.1 The application under test
3.2 Selected frameworks and approach
3.3 Functional testing
3.4 Security testing

4 Results
4.1 Functional testing
4.2 Coverage reports
4.3 Security testing

5 Discussion
5.1 Results
5.2 Method
5.3 The work in a wider context

6 Conclusion

Bibliography

A Appendix: Functional tests in Espresso - Source code
A.1 Authentication test
A.2 Map test
A.3 Contact test

B Appendix: Fuzz test - Source code
B.1 Fuzz test

List of Figures

1.1 Application view over the map activity
2.1 A pyramid of the testing levels
3.1 Application view when creating a new pin
3.2 Fuzz test flow chart
4.1 Invalid database entries from the fuzzer

List of Tables

2.1 Decision table - Example
3.1 Decision table - Map test, add pin
3.2 Decision table - Map test, remove pin
3.3 Identified test cases - Map test, add pin
3.4 Identified test cases - Map test, remove pin
3.5 Decision table - Authentication test
3.6 Identified test cases - Authentication test
3.7 Decision table - Call test
4.1 Bug report from functional testing
4.2 Coverage report - Whole application
4.3 Coverage report - Map activity
4.4 Coverage report - Authentication activity
4.5 Coverage report - Service class
4.6 Vulnerability report for the security testing

1 Introduction

Smartphones are becoming more and more advanced and are used in increasingly advanced situations. Nowadays they are used not only in the entertainment sector but also in more critical domains. The increasing complexity makes testing mobile applications very hard [1], since the number of possible input combinations grows rapidly. Making sure that your application is dependable before pushing an update is not an easy task, but a necessary one. By testing the application's functionality, faults can be found and the application will become more dependable. However, there are many different testing methods to choose from. Therefore, we will investigate how well different testing methods actually work.

This will be done by performing tests on an application that has been developed alongside this thesis. The application under test (AUT) consists of an Android application that depends on a server connected to a database. The purpose of the application is to be a mock-up of a tool used by the Swedish Defence Forces for communication and information sharing during a crisis scenario. Figure 1.1 shows a view of the AUT, where pins are placed on a map, representing different tasks for the Swedish Defence Forces.

Testing can never tell when all of the faults have been found. But if the AUT passes tests that cover all of the source code with relevant variations of input, it is a sign that most errors due to coding faults, given the input assumptions, have been considered. Coverage is a metric that will be used in this thesis; it is often used in software testing and makes it possible to measure how much of the source code has been tested. A systematic way of performing automatic testing that covers all of the AUT's functionality needs to be identified, because of the low coverage that is achieved when only end-user testing is used. This is seen in a study where 7-user testing of some popular apps resulted in only 30% coverage of app screens and only 6% of the apps' functionality [2], which is considered a bad result.

More advanced software also leads to more vulnerabilities in a system. These vulnerabilities can be exploited by an attacker who wants to get access to the system [3]. This type of attack is a threat to the system since the system assets would be at risk, which calls for identifying and removing as many vulnerabilities as possible to increase the security of the system.


Figure 1.1: Application view over the map activity

1.1 Aim

In this thesis, we will test the AUT with respect to functionality and security. To test the functionality, we will analyze different approaches to functional testing with the goal of achieving more than 70% coverage. When testing the security of the system, we will focus on the assets and their exposure to breaches of confidentiality and integrity. An unauthorized user should not be able to read or alter data. We will find and use approaches to constructing tests that make sure the AUT does not compromise the confidentiality and integrity of the system.

1.2 Problem statement

In this thesis we will:

1. Investigate how to construct functional test cases systematically to achieve a high enough coverage.

2. Identify an efficient way of testing a system's confidentiality and integrity.

3. Perform functional and security tests on our system, then categorize and analyze the results from the implemented tests.

We will try to answer what we mean by "high enough" in chapter 2, but for now, our aim is to achieve 70% coverage.

1.3 Approach

To identify the faults in our application's functionality we have performed functional tests, aiming to get high enough coverage. The coverage was measured in terms of instructions run and branches taken. Random input generation tests were also performed, to make a comparison between the different functional tests based on how much coverage they achieved.


To identify vulnerabilities in the system we performed a simple penetration test, aiming to get access to or alter asset data as an unauthorized user. An efficient method is needed because we have limited knowledge of security testing, and thus require an easy-to-implement method that can still find flaws in a system's security.

1.4 Delimitations

Software security is typically defined as the confidentiality, integrity and availability of a system. However, in this thesis we will only design test cases for the first two, confidentiality and integrity.

1.5 Structure

The rest of the thesis is structured as follows. Chapter 2 will present background theory of testing and related work. Chapter 3 will describe the AUT as well as how the tests were constructed and how the results were collected. Chapter 4 will present the results from the tests. In chapter 5 the results and method will be discussed. Lastly, in chapter 6 the answers to the problem statement, as well as our ideas for future work, will be given.

2 Software testing

In this chapter, we will present some background theory of software testing. We will present some common methods for identifying test cases for functional testing, describe how to evaluate tests, and briefly cover security testing. Lastly, we will review work related to this thesis.

2.1 Software testing in general

The definition of software testing can vary quite a lot depending on who you ask. One of the most popular definitions is Glenford Myers's: "Testing is the process of executing a program with the intent of finding errors". James Bach's definition is "Testing is questioning a product to evaluate it". The latter is more relevant to the modern use of software testing, since testing can be used for quality assurance of a product [4]. The stakeholders of a product must know what quality it possesses, and to find that out, empirical experiments are carried out [4].

Figure 2.1: A pyramid of the testing levels.

Testing is usually performed at different levels, where the test cases have specified areas to cover. The most common levels used are unit testing, integration testing, system testing and acceptance testing [5], which can be seen in figure 2.1. A unit is the smallest piece of software that is tested. What a unit is can differ between programming languages (for example, a unit can be a class in Java or C++, or a function in C), but if the tester thinks of it and tests it as one "thing", it is a unit [4]. With integration testing, the smaller units are tested when assembled together [5]. System testing is the highest level of integration testing, which includes the complete product that is to be delivered [5]. Acceptance testing is a process that should determine whether the stakeholder should accept the product [5].

System testing includes many testing techniques, to list a few: functional, usability, security, reliability and performance [5]. In this thesis, we will perform tests on the system level with a focus on functional and security testing. From now on we will focus on those subcategories.

Popular software testing jargon often includes white box, grey box and black box testing. These represent strategies for constructing tests and are not testing techniques in themselves. White box testing uses full knowledge of the inner workings of the code, while black box testing uses requirements and specifications of the system to construct tests [4]. Grey box is something in between. Unit testing is a form of white box testing, while functional testing usually is black box or sometimes grey box.

When testing a system, it might have preconditions that do not allow invalid input, and therefore there is no need to test invalid input. This could be the case if the system only allows the user to press certain buttons to interact with it. In almost any system that accepts user input, this is not the case. Then it might be a good idea to test invalid input, and hope that the system responds with some sort of error message. This is called negative testing [6]. Testing the system with valid input and expecting an action according to the specifications is called positive testing [6].

2.2 Android testing

Android tests are built on top of the JUnit framework. The Android Testing Support Library provides a set of APIs which allows you to build and run test code for your applications, and has the ability to do functional user interface (UI) testing through Espresso. Espresso [7] is a testing framework that is integrated in the Android development environment, with a very intuitive syntax that makes writing tests easy. The possibility to run tests directly in the Android Studio IDE makes it easy to integrate them into your development flow.

Already when starting to develop an application that is to be tested, the structure of the code should be considered. To make a testable code architecture you need to split up parts of the application. A common pattern for this is the Model-View-Presenter (MVP) architecture. The model provides and stores the internal data, the view handles the display of data, and the presenter coordinates the UI with the data and ensures that they are in sync [8].

The cycle for proper Android testing should start with testing the smaller units and move upwards towards integration testing. In the beginning, the focus should be on making sure that each component works as intended. As tests succeed, the confidence in the system increases and higher levels of testing can be used. The larger tests incorporating the UI should be built upon the well-covering unit tests to get high coverage [9].
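To make this concrete, below is a minimal sketch of what such an Espresso UI test can look like. The activity and the view IDs (LoginActivity, R.id.username, and so on) are hypothetical illustrations, not taken from the AUT; the final check is also an example of a negative test as described in section 2.1.

import static android.support.test.espresso.Espresso.onView;
import static android.support.test.espresso.action.ViewActions.click;
import static android.support.test.espresso.action.ViewActions.closeSoftKeyboard;
import static android.support.test.espresso.action.ViewActions.typeText;
import static android.support.test.espresso.assertion.ViewAssertions.matches;
import static android.support.test.espresso.matcher.ViewMatchers.withId;
import static android.support.test.espresso.matcher.ViewMatchers.withText;

import android.support.test.rule.ActivityTestRule;
import android.support.test.runner.AndroidJUnit4;
import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(AndroidJUnit4.class)
public class LoginActivityTest {

    // Launches the (hypothetical) activity before each test.
    @Rule
    public ActivityTestRule<LoginActivity> activityRule =
            new ActivityTestRule<>(LoginActivity.class);

    @Test
    public void invalidPassword_showsErrorMessage() {
        // Type a user name and an invalid password, then press the login button.
        onView(withId(R.id.username)).perform(typeText("user"), closeSoftKeyboard());
        onView(withId(R.id.password)).perform(typeText("wrong"), closeSoftKeyboard());
        onView(withId(R.id.login_button)).perform(click());

        // Negative test: expect an error message rather than a crash.
        onView(withId(R.id.error_text)).check(matches(withText("Invalid credentials")));
    }
}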

2.3 Functional testing

Since functional testing is a form of black box testing, the tester does not require full knowledge of the system to construct the tests. The tester just needs to know what the system's output will be given a specific input. The test cases carry out interactions with the system that are supposed to fulfill an intended functionality. However, a system can usually take a lot of different combinations of inputs, and the number of test cases rapidly gets out of hand. Even a simple program that takes an integer as input would mean a range from -2^31 to 2^31 - 1 in Java, a total of 2^32 = 4294967296 possible inputs [10]. And that only accounts for the accepted inputs. What happens if the user tries to enter a larger number, or even a text string? Minimizing the number of test cases while still assuring that most faults are found is a challenge. There are different methods that can be used to identify the most relevant test cases [10].

Equivalence class testing is a technique that assumes test cases can be divided into equivalence classes. Test cases belong to the same equivalence class if the tester believes that [5]:

• They all test the same thing.
• If one test catches a bug, the others probably will too.
• If one test doesn't catch a bug, the others probably won't either.

Consider a system that accepts an integer between 1 and 10 as input. If the integer is below 6 it returns false, otherwise true. Inputs 2 and 3 would then be in the same equivalence class, both expecting the result false. The equivalence class testing technique implies identifying all possible equivalence classes and designing a test case for each [5]. In this example, the test cases could be input 3 expecting false and input 7 expecting true. Equivalence class testing is especially suitable if the input variables are independent of each other. It is also preferable if the variables refer to physical or logical quantities [10].

Boundary value testing is a technique based on equivalence class testing, with the assumption that most faults occur at the boundaries of a system's possible inputs and equivalence classes. Therefore, test cases are created with input on the boundaries of the system. As in equivalence class testing, equivalence classes are identified and test cases are created on the boundaries, with one value above and one below [5]. Using the same example as above, there are two equivalence classes, inputs 1-5 and 6-10. The inputs for the test cases would then be 1, 2, 4, 5, 6, 7, 9, 10.
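As an illustration, the equivalence class and boundary value test cases identified above could be written as plain JUnit tests. The method isSixOrAbove is a hypothetical stand-in for the system under test, not part of the AUT:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class BoundaryValueTest {

    // Hypothetical system under test: returns false for 1-5, true for 6-10,
    // and signals an error for input outside the accepted range.
    static boolean isSixOrAbove(int n) {
        if (n < 1 || n > 10) {
            throw new IllegalArgumentException("input out of range: " + n);
        }
        return n >= 6;
    }

    @Test
    public void equivalenceClasses() {
        // One representative test case per equivalence class.
        assertFalse(isSixOrAbove(3));
        assertTrue(isSixOrAbove(7));
    }

    @Test
    public void boundaryValues() {
        // Test on and next to every boundary: 1, 2, 4, 5, 6, 7, 9, 10.
        int[] inputs = {1, 2, 4, 5, 6, 7, 9, 10};
        boolean[] expected = {false, false, false, false, true, true, true, true};
        for (int i = 0; i < inputs.length; i++) {
            assertEquals(expected[i], isSixOrAbove(inputs[i]));
        }
    }
}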

An addition to boundary value testing is robustness testing, where you also add test cases slightly beyond the boundaries [10]. This adds negative test cases. In the example above, test cases 0 and 11 would then be added.

Decision table-based testing is a method that identifies test cases by creating tables with conditions as inputs and actions as outputs [10][5]. When certain conditions are fulfilled, the system is expected to perform a certain action according to the specifications. Each condition is labeled with a C and a number and can have the value true (T), false (F) or does not matter (-). Each action is labeled with an A and a number and is marked with an X if it is to be executed, and left empty if it should not be executed. Using the previous example, a decision table can be created as in table 2.1, with three conditions and three actions. One last action is added to indicate impossible combinations of conditions.

Table 2.1: Decision table - Example

C1: Number between 1-5?                      T   T   F   F
C2: Number between 6-10?                     T   F   T   F
C3: Number less than 1 or greater than 10?   T   F   F   T
A1: Return false                                 X
A2: Return true                                      X
A3: Return error message                                 X
A4: Impossible combination                   X

Decision table-based testing is an excellent method to cover most cases that are likely to happen without redundancy of test cases [10]. It is especially useful if the input variables are dependent on each other [10].
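A decision table maps naturally onto a test suite in which each rule column becomes one test case. Below is a sketch of table 2.1 as JUnit tests, reusing the hypothetical isSixOrAbove from the previous example (assertThrows requires JUnit 4.13 or later):

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertThrows;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class DecisionTableTest {

    // Rule 1 (all three conditions true) is an impossible combination (A4),
    // so it gets no test case.

    @Test
    public void rule2_numberBetween1And5_returnsFalse() {
        // C1 = T, C2 = F, C3 = F -> A1: return false.
        assertFalse(BoundaryValueTest.isSixOrAbove(3));
    }

    @Test
    public void rule3_numberBetween6And10_returnsTrue() {
        // C1 = F, C2 = T, C3 = F -> A2: return true.
        assertTrue(BoundaryValueTest.isSixOrAbove(8));
    }

    @Test
    public void rule4_numberOutOfRange_returnsError() {
        // C1 = F, C2 = F, C3 = T -> A3: return error message.
        assertThrows(IllegalArgumentException.class,
                () -> BoundaryValueTest.isSixOrAbove(11));
    }
}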

2.4. Coverage

Random testing is, as the name implies, a method that utilizes randomness in the generation of test cases. Instead of analyzing boundaries and normal values, you let a random input generator create the test cases. A positive effect of this approach is that any bias from the tester is reduced, and test cases that might not have been thought of can be found [10]. However, there is a high risk that many redundant tests will be performed, and knowing when to stop testing is not obvious [10].

State-transition testing is, like decision table-based testing, an excellent method to create test cases. The method involves identifying the various states and each of the possible transitions between the states by drawing a state-transition diagram. States are drawn as circles or boxes and transitions are drawn as arrows. From the diagram, test cases can be designed by:

1. Visiting all states.

2. Taking all paths.

3. Visiting each transition.

The first method is not optimal, since coverage will usually be low. The second method is impossible if the diagram has loops, since the possible paths would then be unlimited. The preferred method is the third, since it can be achieved without much struggle and still generates test cases with a reasonable amount of coverage. For better coverage, the actual input values used in the transitions would have to be generated by the methods above [5].
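As a sketch of the third method, consider a hypothetical two-state login flow (not the AUT's actual state machine): the transition table is written down explicitly, and a loop exercises each transition once, comparing the reached state against the expected one.

import java.util.Arrays;
import java.util.List;

public class TransitionCoverageDemo {

    // Hypothetical state machine under test: a login screen with two states.
    static String next(String state, String event) {
        if (state.equals("LOGGED_OUT") && event.equals("login_ok"))   return "LOGGED_IN";
        if (state.equals("LOGGED_OUT") && event.equals("login_fail")) return "LOGGED_OUT";
        if (state.equals("LOGGED_IN")  && event.equals("logout"))     return "LOGGED_OUT";
        throw new IllegalArgumentException("no transition for " + state + "/" + event);
    }

    public static void main(String[] args) {
        // One test case per arrow in the diagram: {from, event, expected to}.
        List<String[]> transitions = Arrays.asList(
                new String[]{"LOGGED_OUT", "login_fail", "LOGGED_OUT"},
                new String[]{"LOGGED_OUT", "login_ok",   "LOGGED_IN"},
                new String[]{"LOGGED_IN",  "logout",     "LOGGED_OUT"});

        for (String[] t : transitions) {
            String actual = next(t[0], t[1]);
            System.out.printf("%s --%s--> %s : %s%n",
                    t[0], t[1], actual, actual.equals(t[2]) ? "PASS" : "FAIL");
        }
    }
}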

2.4 Coverage

Coverage is a metric often seen in software testing, used to describe the amount of the source code that has been executed during a test suite. It can also be described as a measurement of the adequacy of a test suite, and is often used as a quality evaluator [11]. Achieving high coverage often means fewer bugs go unnoticed, but most of all it means fewer untested parts of the source code.

Instruction coverage provides information about the amount of byte code that has been executed or missed. When the source code of a program is compiled into instructions, it is easy to measure the total size and monitor how many of the instructions are run in total. Even if high instruction coverage is achieved, it does not necessarily mean that most paths of the program are taken. A single instruction branch could be left unchecked, containing a big flaw, and the suite would still score high instruction coverage.

Branch coverage examines all branches of the program, at each and every decision point. The aim of branch coverage can therefore be seen as trying to execute all reachable code of the program at least once.

Creating tests that cover all of the code is more likely to find faults than tests that do not. But devoting time to developing meaningful test cases is probably more effective than writing tests that merely cover all of the code. Setting a coverage goal for a product release could even be counter-productive, since humans are goal-oriented and might focus more on achieving high coverage than on detecting defects [5].
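The difference between the two metrics can be demonstrated on a small, self-contained example. In the hypothetical absoluteValue below, a test suite that only executes the positive-input call runs most instructions (high instruction coverage) but takes only one of the two branches at the decision point (50% branch coverage):

public class CoverageDemo {

    static int absoluteValue(int x) {
        if (x < 0) {          // decision point: two branches
            return -x;        // branch A, only taken for negative input
        }
        return x;             // branch B, taken for non-negative input
    }

    public static void main(String[] args) {
        // A suite containing only this call misses branch A entirely:
        // most instructions run, but only one of two branches is taken.
        System.out.println(absoluteValue(5));

        // Adding a negative input covers the remaining branch.
        System.out.println(absoluteValue(-5));
    }
}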

2.5 Security testing

The attributes confidentiality, integrity and availability are central indicators of computer security [12]; thus, security testing covers the following concepts:


• The confidentiality of the system is ensured.
• The integrity of the system is maintained.
• Information is kept available for the authorized personnel.

Security testing is a way to find security vulnerabilities in a system that could threaten these concepts. There are different types of security testing, one of them being penetration testing. Penetration testing is a way to test the vulnerabilities of a system by looking at its security defences through the eyes of a hacker. In other words, the penetration testers use the same techniques and tools as a hacker would to penetrate a system, but do so in a controlled manner [13]. Another popular method for security testing is vulnerability assessment, a technique to identify and quantify the security vulnerabilities in a given system. The vulnerabilities are then eliminated or mitigated to make the system more secure.

Fuzz testing

Fuzz testing, or fuzzing, is a brute force approach to finding vulnerabilities in a system by feeding it invalid or semi-valid input [14]. It is a form of random testing used to find weaknesses of systems. In other words, it is used to find flaws that compromise the system's security or cause any unwanted behaviour. The idea is simply to try semi-valid input against a system and observe how it responds. Semi-valid data should be just valid enough to enter the system without immediately being discarded (by a parser, for example, if you type in characters when an integer is expected), but still invalid enough to possibly cause problems [14]. Fuzzing is usually a black box testing technique, but can still be used against software with accessible source code.

To produce input data, fuzzers can use different techniques. The most common techniques are generating input or mutating existing valid input. To generate data, the fuzzer is given a set of rules or a description of how the data should look, and generates data accordingly, often with random functions [14]. Mutation works in the way that some valid data is given to the fuzzer, which changes parts of it in order to produce new, untested data [14].
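Below is a minimal sketch of a mutation-based fuzzer. The target parseMessage and the seed format are hypothetical stand-ins for the AUT's input handler, and the mutation simply overwrites a few random bytes of a valid seed so the input stays semi-valid:

import java.nio.charset.StandardCharsets;
import java.util.Random;

public class MutationFuzzer {

    // Hypothetical target: stands in for the system's input parser.
    static void parseMessage(byte[] input) {
        String s = new String(input, StandardCharsets.UTF_8);
        // ... real parsing of the message would happen here ...
        if (!s.startsWith("PIN:")) {
            throw new IllegalArgumentException("malformed message");
        }
    }

    public static void main(String[] args) {
        byte[] seed = "PIN:12.34,56.78,rescue".getBytes(StandardCharsets.UTF_8);
        Random random = new Random(42);

        for (int run = 0; run < 10000; run++) {
            // Mutate a copy of the valid seed: overwrite a few random bytes so
            // the input keeps its length and most of its structure.
            byte[] mutated = seed.clone();
            for (int i = 0; i < 3; i++) {
                mutated[random.nextInt(mutated.length)] = (byte) random.nextInt(256);
            }
            try {
                parseMessage(mutated);
            } catch (IllegalArgumentException expected) {
                // Cleanly rejecting invalid input is the desired behaviour.
            } catch (RuntimeException crash) {
                // Any other exception indicates a potential flaw worth reporting.
                System.out.println("run " + run + " crashed: " + crash);
            }
        }
    }
}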

2.6 Related work

built using the Espresso framework. A recommendation in the thesis was that future testers could look for testing oracles or semi-oracles to use for testing, since running tests on your own code is not optimal. They also recommended looking for an automatic exploration method to get more coverage. The intent of our project has been to follow in their footsteps by making more extensive tests, presenting coverage reports and testing the security.

Some of the studies were clearly trying to promote their own tools and work in general, but could still be helpful to us when conducting our own work. In one study, Azim and Neamtiu [2] developed a tool for automatic exploration of Android applications. The tool used two different algorithms, Depth-first Exploration and Targeted Exploration, which were compared against each other by measuring how much coverage they achieved. Depth-first Exploration launched a start activity, extracted all UI elements, and for each transition to another activity it traversed to the new activity, making a new search and going back to the last activity visited when no new transition was found. This way the tool could systematically explore an application, mimicking user interaction. Targeted Exploration took a list of activities as argument and launched them all, as a fast way of exploring all the activities, also reaching activities that are not easily navigated to during normal use. The tool achieved 59-64% activity coverage and 30-36% method coverage on 25 popular Android applications.



Unlike the mentioned work, our thesis will present the coverage of one specific application