Evaluating and comparing language workbenches

Existing results and benchmarks for the future

Journal Article (2015)
Author(s)

Sebastian Erdweg (Technische Universität Darmstadt)

Tijs van der Storm (Centrum Wiskunde & Informatica (CWI))

Markus Voelter (Voelter.de)

Laurence Tratt (King’s College London)

Remi Bosman (Sioux)

William R. Cook (The University of Texas at Austin)

A.W. Gerritsen (Sioux)

Angelo Hulshout (Delphino Consultancy)

Steven Kelly (MetaCase)

Alex Loh (The University of Texas at Austin)

G.D.P. Konat (TU Delft - Programming Languages)

Pedro J. Molina (Icinetic)

Martin Palatnik (Sioux)

Risto Pohjonen (MetaCase)

Eugen Schindler (Sioux)

Klemens Schindler (Sioux)

Riccardo Solmi (External organisation)

V.A. Vergu (TU Delft - Programming Languages)

E. Visser (TU Delft - Programming Languages)

Kevin van der Vlist (Sogyo)

G.H. Wachsmuth (TU Delft - Programming Languages)

Jimi van der Woning (Young Colfield)

Research Group: Programming Languages
DOI: https://doi.org/10.1016/j.cl.2015.08.007
Publication Year: 2015
Language: English
Volume: 44
Pages (from-to): 24-47

Abstract

Language workbenches are environments for simplifying the creation and use of computer languages. The annual Language Workbench Challenge (LWC) was launched in 2011 to allow the many academic and industrial researchers in this area an opportunity to quantitatively and qualitatively compare their approaches. We first describe all four LWCs to date, before focussing on the approaches used, and results generated, during the third LWC. We give various empirical data for ten approaches from the third LWC. We present a generic feature model within which the approaches can be understood and contrasted. Finally, based on our experiences of the existing LWCs, we propose a number of benchmark problems for future LWCs.

No files available. Metadata-only record.