Reinforcement learning models to optimize compiler phase ordering for specific applications: review of selected cases
DOI:
https://doi.org/10.1234/casi.v1i1.6

Keywords:
Reinforcement learning model, compiler, phase ordering, optimization, applications

Abstract
In this study, we review applications of reinforcement learning (RL) models to optimizing compiler phase ordering, a crucial aspect of compiler optimization. The study examines several prominent RL-based models, including Machine Learning Guided Optimization (MLGO), Autophase, DeepTune, NeuroVectorizer, and COBAYN, highlighting their key contributions, methodologies, limitations, and potential improvements. While RL-based approaches have delivered significant advances on compiler tasks such as phase ordering and loop vectorization, the review identifies common challenges, including task-specific optimization, dependence on predefined sequences, limited adaptability, and lack of interpretability. The paper also discusses gaps in the existing literature, emphasizing the need for more generalizable models, dynamic learning capabilities, and greater transparency in optimization decisions. Future research should focus on developing scalable, adaptable, and interpretable RL models that integrate seamlessly into modular compiler frameworks, paving the way for more efficient and adaptive compiler optimization in real-world applications.
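To make the phase-ordering formulation concrete, the sketch below casts pass selection as an episodic RL problem: the state is the sequence of passes chosen so far, the action is the next pass to apply, and the reward stands in for a measured improvement after compilation. This is a minimal tabular Q-learning illustration, not the setup of any of the reviewed models; the pass names, the `apply_pass_and_measure` reward stub, and all hyperparameters are hypothetical placeholders. A real setup would invoke the compiler (e.g., LLVM's `opt`) and measure code size or runtime.

```python
import random
from collections import defaultdict

# Hypothetical pass vocabulary; a real agent would choose from the
# compiler's actual optimization passes.
PASSES = ["inline", "gvn", "licm", "simplifycfg", "loop-unroll"]
MAX_STEPS = 5  # fixed episode length: number of passes to schedule

def apply_pass_and_measure(sequence):
    """Placeholder reward: stands in for compiling with `sequence`
    applied and returning the measured improvement (e.g., over -O0)."""
    rng = random.Random(hash(tuple(sequence)))  # stationary fake signal
    return rng.random()

def train(episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.2):
    """Tabular Q-learning over states = the pass sequence chosen so far."""
    q = defaultdict(float)  # (state, action) -> estimated value
    for _ in range(episodes):
        seq = []
        for step in range(MAX_STEPS):
            state = tuple(seq)
            # Epsilon-greedy action selection over the pass vocabulary.
            if random.random() < epsilon:
                action = random.choice(PASSES)
            else:
                action = max(PASSES, key=lambda a: q[(state, a)])
            seq.append(action)
            reward = apply_pass_and_measure(seq)
            # Bootstrap from the next state, except at episode end.
            next_state = tuple(seq)
            best_next = 0.0 if step == MAX_STEPS - 1 else max(
                q[(next_state, a)] for a in PASSES)
            q[(state, action)] += alpha * (
                reward + gamma * best_next - q[(state, action)])
    # Greedy rollout of the learned policy yields the final pass order.
    seq = []
    for _ in range(MAX_STEPS):
        state = tuple(seq)
        seq.append(max(PASSES, key=lambda a: q[(state, a)]))
    return seq

if __name__ == "__main__":
    print("Learned pass order:", train())
```

The reviewed systems differ mainly in how they replace the pieces stubbed out here: richer program representations in place of the raw pass sequence as state, learned policy or value networks in place of the Q-table, and real compile-and-measure loops in place of the synthetic reward.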
License
Copyright (c) 2024 Otobong Anietie Udoh, Peace Okafor

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.