University of Southampton Institutional Repository

A configurable method for benchmarking scalability of cloud-native applications

Henning, Sören
e09ef4ea-8a2f-4d11-903b-db51d6371fcb
Hasselbring, Wilhelm
ee89c5c9-a900-40b1-82c1-552268cd01bd

Henning, Sören and Hasselbring, Wilhelm (2022) A configurable method for benchmarking scalability of cloud-native applications. Empirical Software Engineering, 27 (6), [143]. (doi:10.1007/s10664-022-10162-1).

Record type: Article

Abstract

Cloud-native applications constitute a recent trend for designing large-scale software systems. However, even though several cloud-native tools and patterns have emerged to support scalability, there is no commonly accepted method to empirically benchmark their scalability. In this study, we present a benchmarking method that allows researchers and practitioners to conduct empirical scalability evaluations of cloud-native applications, frameworks, and deployment options. Our benchmarking method consists of scalability metrics, measurement methods, and an architecture for a scalability benchmarking tool, particularly suited for cloud-native applications. Following fundamental scalability definitions and established benchmarking best practices, we propose to quantify scalability by performing isolated experiments for different load and resource combinations, which assess whether specified service level objectives (SLOs) are achieved. To balance usability and reproducibility, our benchmarking method provides configuration options that control the trade-off between overall execution time and statistical grounding. We perform an extensive experimental evaluation of our method’s configuration options for the special case of event-driven microservices. For this purpose, we use benchmark implementations of the two stream processing frameworks Kafka Streams and Flink and run our experiments in two public clouds and one private cloud. We find that, independent of the cloud platform, it only takes a few repetitions (≤ 5) and short execution times (≤ 5 minutes) to assess whether SLOs are achieved. Combined with our findings from evaluating different search strategies, we conclude that our method allows benchmarking scalability in reasonable time.
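To illustrate the measurement approach described in the abstract, the following minimal Python sketch iterates over load and resource combinations, runs a few repeated isolated experiments per combination, and records whether a specified SLO is met. It is only an illustration of the idea, not the authors' actual tool or API; the function names (run_experiment, slo_met), the SLO threshold, and the default parameters are hypothetical placeholders.

```python
from itertools import product
from statistics import median

def run_experiment(load, resources, duration_s):
    """Hypothetical placeholder: deploy the system under test with `resources`
    instances, apply `load` for `duration_s` seconds, and return an observed
    SLO metric (e.g., a lag or latency measurement)."""
    raise NotImplementedError

def slo_met(observation, threshold):
    """Return True if the observed metric satisfies the SLO threshold."""
    return observation <= threshold

def benchmark(loads, resource_amounts, repetitions=5, duration_s=300, threshold=2000):
    """Assess, for every load/resource combination, whether the SLO is achieved.

    Defaults mirror the abstract's finding that few repetitions (<= 5) and
    short execution times (<= 5 minutes) suffice for an SLO assessment.
    """
    results = {}
    for load, resources in product(loads, resource_amounts):
        observations = [run_experiment(load, resources, duration_s)
                        for _ in range(repetitions)]
        results[(load, resources)] = slo_met(median(observations), threshold)
    return results
```

The search strategies mentioned in the abstract would replace the exhaustive loop over all combinations, for example by searching per load for the smallest resource amount that still satisfies the SLO.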

This record has no associated files available for download.

More information

e-pub ahead of print date: 6 August 2022
Additional Information: Publisher Copyright: © 2022, The Author(s).
Keywords: Benchmarking, Cloud-Native, Performance engineering, Scalability

Identifiers

Local EPrints ID: 488884
URI: http://eprints.soton.ac.uk/id/eprint/488884
ISSN: 1382-3256
PURE UUID: f862051e-f2d5-4330-999d-e490c3cc636e
ORCID for Wilhelm Hasselbring: orcid.org/0000-0001-6625-4335

Catalogue record

Date deposited: 09 Apr 2024 10:03
Last modified: 10 Apr 2024 02:15

Contributors

Author: Sören Henning
Author: Wilhelm Hasselbring


