Model Repair with Quality-Based Reinforcement Learning

By: Ludovico Iovino, Angela Barriga, Adrian Rutle, Rogardt Heldal


Domain modeling is a core activity in Model-Driven Engineering (MDE), and these models must be correct: a large number of artifacts, such as instance models, transformations, and editors, may be built on top of them. Like any other software artifact, domain models are subject to errors introduced during the modeling process, and a number of existing tools reduce the burden of manually dealing with correctness issues in models. However, although various approaches have been proposed over the past decade to support the quality assessment of modeling artifacts, the quality of the automatically repaired models has not been the focus of repair processes. In this paper, we propose integrating an automatic, quality-model-based evaluation of domain models with a framework for personalized and automatic model repair. The framework uses reinforcement learning to find the best sequence of actions for repairing a broken model.
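To make the idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of how tabular Q-learning could search for a repair sequence. It assumes a toy encoding where a broken model is a tuple of outstanding error flags, each repair action clears one error, and the reward blends repair progress with an assumed per-action quality cost, so the learned sequence also accounts for model quality. All names (ERRORS, QUALITY_COST, step) are illustrative assumptions.

```python
import random

# Toy encoding (assumption): a broken model is a tuple of error flags.
ERRORS = ("missing_type", "dangling_ref", "bad_multiplicity")
ACTIONS = tuple(range(len(ERRORS)))  # action i repairs error i

# Hypothetical quality penalty per repair action: some fixes degrade
# model quality (e.g. deleting a dangling reference loses information).
QUALITY_COST = {0: 0.1, 1: 0.5, 2: 0.2}

def step(state, action):
    """Apply a repair action; return (next_state, reward)."""
    if not state[action]:                    # error already fixed:
        return state, -1.0                   # penalize the useless action
    nxt = tuple(e if i != action else False for i, e in enumerate(state))
    reward = 1.0 - QUALITY_COST[action]      # fix bonus minus quality cost
    if not any(nxt):                         # model fully repaired
        reward += 5.0
    return nxt, reward

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Standard tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {}                                   # (state, action) -> value
    for _ in range(episodes):
        state = (True, True, True)           # start from a fully broken model
        while any(state):
            if rng.random() < eps:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
            nxt, reward = step(state, action)
            best_next = max(q.get((nxt, a), 0.0) for a in ACTIONS)
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return q

def repair_sequence(q):
    """Greedy rollout of the learned policy over the applicable actions."""
    state, seq = (True, True, True), []
    while any(state):
        applicable = [a for a in ACTIONS if state[a]]
        action = max(applicable, key=lambda a: q.get((state, a), 0.0))
        seq.append(ERRORS[action])
        state, _ = step(state, action)
    return seq

seq = repair_sequence(train())
print(seq)  # an ordering of the three repairs learned by the agent
```

Because the discount factor makes early rewards count more, the agent tends to schedule cheap, high-quality repairs first, which is the intuition behind coupling the reward signal to a quality model.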


Keywords: MDE, Machine Learning, Model Repair, Quality Evaluation

Cite as:

Ludovico Iovino, Angela Barriga, Adrian Rutle, Rogardt Heldal, "Model Repair with Quality-Based Reinforcement Learning", Journal of Object Technology, Volume 19, no. 2 (July 2020), pp. 17:1-21, doi:10.5381/jot.2020.19.2.a17.


This article is accompanied by a video produced by the author(s).

The JOT Journal   |   ISSN 1660-1769   |   DOI 10.5381/jot   |   AITO   |   Open Access   |    Contact