Evaluating Macroscopic DTA Models – For Who, When and How?

Abstract

Over the past few decades, transport authorities worldwide have relied on transport models to test policy interventions and simulate their outcomes as part of ex-ante analysis. Within the domain of traffic assignment, attention has increasingly shifted towards dynamic representations of traffic, which have proved more accurate than their static counterparts. This has put Dynamic Traffic Assignment (DTA) models at the forefront of development. Building on classical traffic flow theory, macroscopic DTA models simulate aggregated traffic analogously to the flow of fluids or gases. This aggregation enables high-speed computation and the ability to reach a stable equilibrium state within feasible model run times. Given the large number of macroscopic DTA models developed worldwide, the model user faces the problem of selecting the right model for the right application. The current research aims to answer this problem through the design, development, and validation of an evaluation framework for macroscopic DTA models. The objective evaluation of the models is performed through a set of Measures of Performance (MoPs). The subjective side of the evaluation captures the differences in importance attached to model features, which vary across model users and application domains. Three macroscopic DTA models popular in the Netherlands are used to apply the framework: MARPLE (Model for Assignment and Regional Policy Evaluation), StreamLine: MaDAM (Macroscopic Dynamic Assignment Model), and StreamLine: eGLTM (event-based Generalized Link Transmission Model). The results show that for a strategic planning application, both MARPLE and StreamLine: eGLTM proved to be the better alternatives, as they performed considerably better in achieving a stable state of convergence.
However, as the time horizon of the application shortens, as is the case in tactical and operational planning, the final score of StreamLine: MaDAM improves substantially owing to its more accurate link-level propagation and queuing. The evaluation scores also reveal the fundamental trade-off between model complexity and computational speed. Variations across model users further validate the original hypothesis that the right choice of model depends primarily on the person using it and the application it is deployed for.