Reinforcement learning models to optimize compiler phase ordering for specific applications: review of selected cases

Authors

  • Otobong Anietie Udoh Department of Computer and Robotics Education, University of Uyo, Uyo, Nigeria
  • Peace Okafor Department of State Service, Ebonyi State Command, Abakaliki, Nigeria

DOI:

https://doi.org/10.1234/casi.v1i1.6

Keywords:

Reinforcement learning model, compiler, phase ordering, optimization, applications

Abstract

In this study, we review applications of reinforcement learning (RL) models to compiler phase ordering, a crucial aspect of compiler optimization. The review examines several prominent RL-based approaches, including Machine Learning Guided Optimization (MLGO), Autophase, DeepTune, NeuroVectorizer, and COBAYN, highlighting their key contributions, methodologies, limitations, and potential improvements. While RL-based approaches have delivered significant advances on compiler tasks such as phase ordering and loop vectorization, the review identifies common challenges, including task-specific optimization, dependency on predefined pass sequences, limited adaptability, and lack of interpretability. The paper also discusses gaps in the existing literature, emphasizing the need for more generalizable models, dynamic learning capabilities, and greater transparency in optimization decisions. Future research should focus on developing scalable, adaptable, and interpretable RL models that integrate seamlessly into modular compiler frameworks, paving the way for more efficient and adaptive compiler optimization in real-world applications.

Published

2024-12-30

How to Cite

[1]
O. A. Udoh and P. Okafor, “Reinforcement learning models to optimize compiler phase ordering for specific applications: review of selected cases”, Comp Appl Sci Impact, vol. 1, no. 1, pp. 36–51, Dec. 2024.