Log-Based Behavioral System Model Inference Using Reinforcement Learning


Abstract

System behavior models are highly useful to a system's developers, as they aid in system comprehension, documentation, and testing. Although methods to obtain such models exist, e.g., profiling, tracing, source-code inference, and existing log-based inference methods, they cannot successfully be applied to large, real-time systems. Profiling and tracing add overhead that may alter the system's behavior, and source-code inference does not scale to systems of this magnitude. Existing log-based approaches also suffer from the intrinsic scalability issues of deriving a minimal model, as proven by Gold. In this work, this issue is tackled by applying Reinforcement Learning to the model inference problem. First, an initial model is created from the traces; then Q-Learning is applied to shrink this model into a concise and accurate representation of the system. The approach is evaluated using log traces produced by the XRP Ledger Consensus Protocol. Its effectiveness is assessed based on the accuracy and conciseness of the inferred models, as well as the execution time of the inference algorithm. Results show that the Q-Learning implementation used in this work is not able to converge to consistent action values. These results might be implementation-specific, meaning future work should experiment with and extend the current implementation of the algorithm, or they might indicate that assumptions made in this work about the underlying systems do not hold. Future work should apply this approach to a different system so as to assess its feasibility.
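To make the approach concrete, the sketch below illustrates one plausible way to cast model shrinking as a Q-Learning problem: states are inferred automata and actions are candidate state merges. This is a minimal illustration under assumed interfaces (`candidate_merges`, `reward_fn`, and the hyperparameter values are all hypothetical), not the thesis's actual implementation.

```python
import random
from collections import defaultdict

# Illustrative Q-Learning loop for shrinking an inferred model.
# States: hashable automaton representations; actions: candidate merges.
ALPHA = 0.1    # learning rate (assumed value)
GAMMA = 0.9    # discount factor (assumed value)
EPSILON = 0.2  # exploration rate (assumed value)

def q_learn_merge(initial_model, candidate_merges, reward_fn, episodes=100):
    """Learn which state merges to apply to shrink initial_model.

    initial_model    -- hashable representation of the initial automaton
    candidate_merges -- fn(model) -> list of (action, next_model) pairs
    reward_fn        -- fn(model, next_model) -> float; e.g., reward
                        fewer states, penalize lost trace accuracy
    """
    Q = defaultdict(float)  # Q[(model, action)] -> estimated action value
    for _ in range(episodes):
        model = initial_model
        moves = candidate_merges(model)
        while moves:
            # Epsilon-greedy selection over the available merges.
            if random.random() < EPSILON:
                action, nxt = random.choice(moves)
            else:
                action, nxt = max(moves, key=lambda m: Q[(model, m[0])])
            reward = reward_fn(model, nxt)
            next_moves = candidate_merges(nxt)
            best_next = max((Q[(nxt, a)] for a, _ in next_moves), default=0.0)
            # Standard Q-Learning update rule.
            Q[(model, action)] += ALPHA * (
                reward + GAMMA * best_next - Q[(model, action)]
            )
            model, moves = nxt, next_moves
    return Q
```

In such a formulation, the reward function is what trades conciseness against accuracy, which is consistent with the evaluation criteria the abstract names (accuracy, conciseness, and execution time).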