University of Southampton Institutional Repository

Benchmarking as empirical standard in software engineering research

Hasselbring, Wilhelm (2021) Benchmarking as empirical standard in software engineering research. Chitchyan, Ruzanna, Li, Jingyue, Weber, Barbara and Yue, Tao (eds.) In EASE '21: Proceedings of the 25th International Conference on Evaluation and Assessment in Software Engineering. Association for Computing Machinery. pp. 365-372. (doi:10.1145/3463274.3463361).

Record type: Conference or Workshop Item (Paper)

Abstract

In empirical software engineering, benchmarks can be used for comparing different methods, techniques and tools. However, the recent ACM SIGSOFT Empirical Standards for Software Engineering Research do not include an explicit checklist for benchmarking. In this paper, we discuss benchmarks for software performance and scalability evaluation as example research areas in software engineering, relate benchmarks to some other empirical research methods, and discuss the requirements on benchmarks that may constitute the basis for a checklist of a benchmarking standard for empirical software engineering research.
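As a minimal illustration of the kind of comparison the abstract describes (not code from the paper itself), the sketch below uses Python's standard `timeit` module to benchmark two hypothetical implementations of the same task on an identical workload. The function names and workload are assumptions chosen purely for illustration.

```python
import timeit

# Two hypothetical implementations of the same task: summing squares up to n.
def sum_squares_loop(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def sum_squares_builtin(n):
    return sum(i * i for i in range(n))

def benchmark(fn, n=10_000, repeat=5, number=100):
    """Time `fn(n)` on a fixed workload.

    timeit.repeat returns one total time per repetition; the minimum is
    commonly taken as the least noise-affected estimate.
    """
    return min(timeit.repeat(lambda: fn(n), repeat=repeat, number=number))

if __name__ == "__main__":
    # Both candidates run against the same workload, so the timings
    # are directly comparable.
    for fn in (sum_squares_loop, sum_squares_builtin):
        print(f"{fn.__name__}: {benchmark(fn):.4f} s")
```

Even a toy benchmark like this already raises the methodological questions the paper addresses: choice of workload, repetition to control for measurement noise, and how to summarize the resulting distribution of timings.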

This record has no associated files available for download.

More information

Published date: 21 June 2021
Venue - Dates: 25th Evaluation and Assessment in Software Engineering Conference (EASE 2021), Virtual/Online, Norway, 2021-06-21 - 2021-06-24
Keywords: Benchmarking, Empirical software engineering, Empirical standards

Identifiers

Local EPrints ID: 488777
URI: http://eprints.soton.ac.uk/id/eprint/488777
PURE UUID: 6d14bc34-5dbb-4fd7-94f8-088a1593de47
ORCID for Wilhelm Hasselbring: orcid.org/0000-0001-6625-4335

Catalogue record

Date deposited: 05 Apr 2024 16:38
Last modified: 10 Apr 2024 02:15

Contributors

Author: Wilhelm Hasselbring
Editor: Ruzanna Chitchyan
Editor: Jingyue Li
Editor: Barbara Weber
Editor: Tao Yue
