Organizing for permanent beta: performance measurement before vs performance monitoring after release of digital services

Journal Article (2022)
Author(s)

Kim E. van Oorschot (BI Norwegian Business School)

Henk A. Akkermans (Tilburg University)

Luk N. Van Wassenhove (INSEAD, Europe)

Yan Wang (TU Delft - Research Data and Software)

Research Group
Research Data and Software
Copyright
© 2022 Kim E. van Oorschot, Henk A. Akkermans, Luk N. Van Wassenhove, Y. Wang
DOI related publication
https://doi.org/10.1108/IJOPM-03-2021-0211
Publication Year
2022
Language
English
Issue number
3
Volume number
43 (2023)
Pages (from-to)
520-542
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Purpose: Due to the complexity of digital services, companies are increasingly forced to offer their services “in permanent beta”, requiring continuous fine-tuning and updating. Complexity makes it extremely difficult to predict when and where the next service disruption will occur. The authors examine what this means for performance measurement in digital service supply chains.

Design/methodology/approach: The authors use a mixed-method research design that combines a longitudinal case study of a European digital TV service provider with a system dynamics simulation analysis of that service provider's digital service supply chain.

Findings: With increased levels of complexity, traditional performance measurement methods, focused on the detection of software bugs before release, become fragile or futile. The authors find that monitoring the performance of the service after release, with fast mitigation when service incidents are discovered, appears to be superior. This requires organizational change, as traditional methods such as quality assurance become less important.

Research limitations/implications: The performance of digital services needs to be monitored by combining automated data collection about the status of the service with data interpretation based on human expertise. Investing in human expertise is as important as investing in automated processes.

Originality/value: The authors draw on unique empirical data collected from a digital service provider's struggle with performance measurement of its service over a period of nine years. The authors use simulations to show the impact of complexity on staff allocation.
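The paper's calibrated system dynamics model is not reproduced here, but the staff-allocation trade-off the abstract describes can be illustrated with a minimal sketch. In the toy Python model below, every stock, flow, and parameter (bug_inflow, detect_rate, mitigate_rate, qa_share, complexity) is an illustrative assumption, not taken from the article: a fixed staff is split between pre-release bug detection (QA) and post-release incident mitigation (monitoring), and rising complexity erodes pre-release detection.

# Minimal system dynamics sketch (Euler integration) of the staff-allocation
# trade-off described in the abstract. All structure and numbers are
# illustrative assumptions, not the authors' calibrated model.

def simulate(qa_share, complexity, weeks=104, dt=1.0):
    """Return accumulated customer incident-weeks (lower is better)."""
    staff = 10.0                    # fixed headcount (assumed)
    bug_inflow = 5.0                # new bugs introduced per week (assumed)
    undetected = 0.0                # stock: latent bugs not yet released
    incidents = 0.0                 # stock: live service incidents
    incident_weeks = 0.0            # accumulated customer exposure
    detect_rate = 0.5 / complexity  # QA catch rate, degraded by complexity
    mitigate_rate = 0.8             # incidents fixed per monitor per week

    for _ in range(int(weeks / dt)):
        qa_staff = qa_share * staff
        monitor_staff = (1.0 - qa_share) * staff
        caught = min(undetected, detect_rate * qa_staff * dt)  # found pre-release
        escaped = 0.1 * undetected * dt                        # released as incidents
        fixed = min(incidents, mitigate_rate * monitor_staff * dt)
        undetected += bug_inflow * dt - caught - escaped
        incidents += escaped - fixed
        incident_weeks += incidents * dt
    return incident_weeks

# At low complexity a QA-heavy split wins; at high complexity the
# monitoring-heavy split produces fewer incident-weeks.
for complexity in (1.0, 4.0):
    score, share = min((simulate(s, complexity), s) for s in (0.2, 0.5, 0.8))
    print(f"complexity={complexity}: best qa_share={share} ({score:.0f} incident-weeks)")

Under these assumptions, the best split shifts from QA-heavy at low complexity to monitoring-heavy at high complexity, mirroring the abstract's finding that post-release monitoring with fast mitigation becomes superior as complexity grows.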

Files

10_1108_IJOPM_03_2021_0211.pdf
(pdf | 1.74 MB)
- Embargo expired on 01-07-2022
License info not available