Computing Reviews
Language-based software testing
Steinhöfel D., Zeller A. Communications of the ACM 67(4): 80-84, 2024. Type: Article
Date Reviewed: May 1 2024

Testing remains the prime technique for checking whether software satisfies its specified requirements. Owing to the huge volume of potential inputs and expected outputs in very large systems, the process is laborious and error-prone when conducted manually. To meet this challenge, the article proposes the use of bots, that is, automated robot processes that test the systems continuously, 24/7.

However, two hurdles remain. The first is the huge number of potential inputs: random testing is an obvious choice, but ensuring the diversity of the generated inputs still requires human judgment. The second is that we must assume the existence of a test oracle to verify the correctness of the outputs; again, human intervention is often needed.
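
To make the two hurdles concrete, here is a minimal sketch of my own (not taken from the article), in Python, with a hypothetical parse_date as the system under test: generating random inputs is mechanical, but judging the non-crashing outputs still calls for an oracle.

```python
import random
import string

def parse_date(s):
    """Hypothetical system under test: parse a 'YYYY-MM-DD' string."""
    year, month, day = s.split("-")
    return int(year), int(month), int(day)

# Hurdle 1: the input space is enormous, and purely random strings
# rarely exercise interesting behaviour.
def random_input(length=10):
    return "".join(random.choice(string.printable) for _ in range(length))

for _ in range(1000):
    s = random_input()
    try:
        result = parse_date(s)
    except Exception:
        continue  # a crash is easy to detect automatically...
    # Hurdle 2: ...but for non-crashing runs we still need a test oracle
    # to decide whether `result` is the correct interpretation of `s`.
```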

The authors propose an excellent solution to the problem by (a) specifying a grammar for the inputs and then (b) using their input specification language (ISLa) to specify the semantic properties of the input elements. They support their methodology with techniques such as program synthesis, dynamic invariants, and explainable artificial intelligence (AI). They provide a comprehensive solution to two common issues in software testing: the completeness of input coverage and the presence of undetected failures. Thus, the article is of great value to novice testers for its fundamental concepts and to experienced testers for its detailed treatment.
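
To convey the flavour of the approach, the following toy sketch of my own combines a context-free grammar with a semantic constraint. Note that it merely filters generated strings, whereas ISLa proper solves such constraints directly, and it does not use the actual ISLa API.

```python
import random

# Toy context-free grammar for 'YYYY-MM-DD' dates (illustrative only).
GRAMMAR = {
    "<date>":  [["<year>", "-", "<month>", "-", "<day>"]],
    "<year>":  [["<digit>", "<digit>", "<digit>", "<digit>"]],
    "<month>": [["<digit>", "<digit>"]],
    "<day>":   [["<digit>", "<digit>"]],
    "<digit>": [[d] for d in "0123456789"],
}

def generate(symbol="<date>"):
    """Expand a symbol by recursively choosing random productions."""
    if symbol not in GRAMMAR:
        return symbol  # terminal symbol
    expansion = random.choice(GRAMMAR[symbol])
    return "".join(generate(s) for s in expansion)

def satisfies_constraint(date):
    """Semantic constraint layered on top of the syntax, in the spirit of
    an ISLa-style specification: month in 1..12 and day in 1..31."""
    _, month, day = date.split("-")
    return 1 <= int(month) <= 12 and 1 <= int(day) <= 31

# Keep only inputs that are both syntactically and semantically valid.
test_inputs = [d for d in (generate() for _ in range(200)) if satisfies_constraint(d)]
print(test_inputs[:5])
```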

I have a modest suggestion: the authors may wish to consider the work of Tsong Yueh Chen, who has just been selected for the 2024 ACM SIGSOFT Outstanding Research Award. His work on adaptive random testing (ART) shows that no testing method can be expected to reveal failures using fewer than half the test cases required by random testing, and that ART comes close to this optimum [1]. His work on metamorphic testing (MT) [2] shows that we may alleviate the oracle problem by comparing the inputs and outputs of multiple executions of the same software. Alastair F. Donaldson and his team at Imperial College London have developed metamorphic fuzzers that uncovered numerous bugs in compilers. In short, ART and MT together may ease the workload of the bots.
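
To illustrate the kind of relation MT exploits, here is a minimal sketch of my own (not Chen's or Donaldson's tooling), using a sorting routine as a stand-in for the program under test: instead of checking one output against an oracle, we check a relation between the outputs of two related runs.

```python
import random

def system_under_test(xs):
    """Hypothetical program under test: here, simply a sorting routine."""
    return sorted(xs)

def metamorphic_check(trials=1000):
    """Metamorphic relation: permuting the input must not change the output.
    No oracle for the 'correct' sorted order is needed; we only compare
    two executions of the same program on related inputs."""
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        ys = xs[:]
        random.shuffle(ys)
        if system_under_test(xs) != system_under_test(ys):
            return False, (xs, ys)  # relation violated: a likely defect
    return True, None

print(metamorphic_check())
```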

Reviewer: T. H. Tse
Review #: CR147755
1) Chen, T. Y.; Merkel, R. An upper bound on software testing effectiveness. ACM Transactions on Software Engineering and Methodology 17, 3 (2008), Article No. 16.
2) Chen, T. Y.; Tse, T. H. New visions on metamorphic testing after a quarter of a century of inception. In Proc. ESEC/FSE 2021. ACM, 2021, 1487–1490.
Categories: Testing And Debugging (D.2.5); General (D.2.0); General (D.0)
Other reviews under "Testing And Debugging":
Software defect removal. Dunn R., McGraw-Hill, Inc., New York, NY, 1984. Type: Book (9789780070183131). Date: Mar 1 1985
On the optimum checkpoint selection problem. Toueg S., Babaoglu O. SIAM Journal on Computing 13(3): 630-649, 1984. Type: Article. Date: Mar 1 1985
Software testing management. Royer T., Prentice-Hall, Inc., Upper Saddle River, NJ, 1993. Type: Book (9780135329870). Date: Mar 1 1994
more...
