Award Abstract # 2007718
Collaborative Research: SHF: Small: Reuse and Migration of GUI Tests

NSF Org: CCF
Division of Computing and Communication Foundations
Recipient: THE UNIVERSITY OF TEXAS AT SAN ANTONIO
Initial Amendment Date: July 27, 2020
Latest Amendment Date: May 25, 2021
Award Number: 2007718
Award Instrument: Standard Grant
Program Manager: Sol Greenspan
sgreensp@nsf.gov
 (703)292-7841
CCF
 Division of Computing and Communication Foundations
CSE
 Directorate for Computer & Information Science & Engineering
Start Date: August 1, 2020
End Date: July 31, 2024 (Estimated)
Total Intended Award Amount: $250,000.00
Total Awarded Amount to Date: $266,000.00
Funds Obligated to Date: FY 2020 = $250,000.00
FY 2021 = $16,000.00
History of Investigator:
  • Xiaoyin Wang (Principal Investigator)
    Xiaoyin.Wang@UTSA.EDU
  • Jianwei Niu (Co-Principal Investigator)
Recipient Sponsored Research Office: University of Texas at San Antonio
1 UTSA CIR
SAN ANTONIO
TX  US  78249-1644
(210)458-4340
Sponsor Congressional District: 20
Primary Place of Performance: University of Texas at San Antonio
One UTSA Circle
San Antonio
TX  US  78249-1644
Primary Place of Performance Congressional District: 20
Unique Entity Identifier (UEI): U44ZMVYU52U6
Parent UEI: X5NKD2NFF2V3
NSF Program(s): Software & Hardware Foundations
Primary Program Source: 01002021DB NSF RESEARCH & RELATED ACTIVITIES
01002122DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 7923, 7944, 9251
Program Element Code(s): 779800
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Software applications with Graphical User Interfaces (GUIs) have become essential in people's daily lives, and sufficient testing is necessary to ensure their quality. When performed manually, GUI testing is a costly and tedious process that requires many human testers to explore the user interface and check whether the output is as expected. Existing automated testing techniques, in contrast, are less effective because they lack the domain knowledge that human testers typically possess. In this project, the investigators will explore the reuse and migration of manual GUI tests as an alternative route that complements existing automatic GUI-testing research. The intuition behind the project is that developers tend to use similar GUI designs in different platform versions of the same application and in different applications within the same domain. Therefore, it is possible to reuse the exploration sequences, input values, and expected outputs of existing tests, with adaptations that account for subtle implementation differences between applications. The project is expected to enhance the coverage and productivity of GUI-testing processes, leading to GUI applications with higher quality and fewer defects. Additionally, the incorporated training and education activities will provide opportunities for participants to acquire research experience and become highly qualified researchers and practitioners.

In this project, the PIs will answer the research question of whether and how existing GUI tests can be reused in automatic GUI testing with the necessary adaptations. In particular, the investigators will generate GUI-code embeddings that represent the semantics of GUI views and develop novel techniques to map GUI views between different applications. The investigators will also study how input-value constraints and event-sequence constraints in existing GUI tests can be extracted as domain knowledge, how such knowledge can be translated across platform and application boundaries, and how the translated knowledge can be incorporated into the automatic GUI-test generation process of the target application. Moreover, the investigators will develop techniques to identify the potential reusability of existing test oracles by measuring their fitness with the new context, and techniques to create new test oracles by summarizing common behaviors of software applications in the same domain. The findings of this project are intended to shed light on the more general problem of reusing and migrating other kinds of test cases, such as unit tests and integration tests, and to inform solutions to the open problem of creating meaningful test oracles.
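To make the view-mapping idea concrete, the following Python sketch (an illustration only, not the investigators' actual technique) embeds each GUI view from the text of its widget labels and pairs every source-app view with the most similar target-app view by cosine similarity. The hash-bucket embedding, the embed_view and map_views helpers, and the example view data are all hypothetical placeholders for the learned GUI-code embeddings described above.

import math

DIM = 64  # embedding size; arbitrary for this sketch

def embed_view(widget_texts):
    # Hash widget labels into a fixed-size bag-of-words vector -- a crude
    # stand-in for the learned GUI-code embeddings proposed in the project.
    vec = [0.0] * DIM
    for token in " ".join(widget_texts).lower().split():
        vec[hash(token) % DIM] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def map_views(source_views, target_views):
    # Pair each source-app view with its most similar target-app view.
    mapping = {}
    for name, widgets in source_views.items():
        src = embed_view(widgets)
        mapping[name] = max(target_views,
                            key=lambda t: cosine(src, embed_view(target_views[t])))
    return mapping

# Hypothetical widget labels from two shopping apps.
source = {"LoginScreen": ["username", "password", "sign in"],
          "CartScreen": ["item", "quantity", "checkout"]}
target = {"SignInActivity": ["email", "password", "log in"],
          "BasketActivity": ["product", "qty", "checkout", "total"]}

print(map_views(source, target))
# expected: {'LoginScreen': 'SignInActivity', 'CartScreen': 'BasketActivity'}

A real implementation would presumably learn view embeddings from GUI layouts and code rather than hashing labels; the nearest-neighbor mapping step is the part this sketch is meant to illustrate.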

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


Wang, Xiaoyin. "VRTest: An Extensible Framework for Automatic Testing of Virtual Reality Scenes." 2022 IEEE/ACM 44th International Conference on Software Engineering: Companion Proceedings (ICSE-Companion), 2022. https://doi.org/10.1109/ICSE-Companion55297.2022.9793753
Rafi, Tahmid and Zhang, Xueling and Wang, Xiaoyin. "PredART: Towards Automatic Oracle Prediction of Object Placements in Augmented Reality Testing." IEEE/ACM International Conference on Automated Software Engineering (ASE), 2022. https://doi.org/10.1145/3551349.3561160
Zhang, Xueling and Heaps, John and Slavin, Rocky and Niu, Jianwei and Breaux, Travis D. and Wang, Xiaoyin. "DAISY: Dynamic-Analysis-Induced Source Discovery for Sensitive Data." ACM Transactions on Software Engineering and Methodology, 2022. https://doi.org/10.1145/3569936
Zhao, Yan and Tang, Enyi and Cai, Haipeng and Guo, Xi and Wang, Xiaoyin and Meng, Na. "A Lightweight Approach of Human-Like Playtest for Android Apps." 29th IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER), 2022.
