There has been increasing interest in Explainable Artificial Intelligence (XAI) in recent years. Complex machine learning algorithms, such as deep neural networks, can predict outcomes accurately but provide little insight into how a decision was made or which factors influenced it. This lack of transparency is a major issue in high-stakes decision-making scenarios, where understanding the reasoning behind a decision is crucial. XAI aims to address the "black box" problem in machine learning models, where the decision-making process is opaque and humans cannot understand how the model arrived at a particular decision or prediction. Evolutionary and metaheuristic techniques offer promising avenues for achieving explainability in AI systems, and ongoing research continues to explore their potential. Our work is a concise literature review of how these techniques can be adopted to achieve explainability in AI systems. We highlight contributions of evolutionary and metaheuristic techniques to several approaches for achieving explainability, such as counterfactual explanations, local surrogate modelling, and the development of transparent models; an illustrative sketch of the counterfactual approach follows below.
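To make the counterfactual approach concrete, the following is a minimal sketch, not drawn from any specific surveyed method, of how an evolutionary algorithm can search for a counterfactual explanation: given a black-box classifier and an instance x, a population of perturbed copies of x is evolved until the predicted class flips while the perturbation stays small. The fitness weighting (0.1), mutation scale (sigma), and population settings are illustrative assumptions.

```python
# Illustrative sketch: evolutionary search for a counterfactual explanation.
# All hyperparameters here are assumptions chosen for demonstration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy black-box model: a random forest trained on synthetic data.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def fitness(candidate, x, target_class):
    """Reward candidates that (a) move toward the target class and (b) stay close to x."""
    prob = model.predict_proba(candidate.reshape(1, -1))[0, target_class]
    distance = np.linalg.norm(candidate - x)
    return prob - 0.1 * distance  # 0.1 is an arbitrary sparsity/validity trade-off

def evolve_counterfactual(x, target_class, pop_size=50, generations=100, sigma=0.3):
    """Simple truncation-selection evolution with Gaussian mutation."""
    population = x + rng.normal(0, sigma, size=(pop_size, x.size))
    for _ in range(generations):
        scores = np.array([fitness(c, x, target_class) for c in population])
        parents = population[np.argsort(scores)[-pop_size // 2:]]      # keep the best half
        children = parents + rng.normal(0, sigma, size=parents.shape)  # mutate survivors
        population = np.vstack([parents, children])
    scores = np.array([fitness(c, x, target_class) for c in population])
    return population[scores.argmax()]

x = X[0]
original_class = model.predict(x.reshape(1, -1))[0]
cf = evolve_counterfactual(x, target_class=1 - original_class)
print("original class:", original_class,
      "counterfactual class:", model.predict(cf.reshape(1, -1))[0])
print("feature changes:", cf - x)
```

Because the search treats the model purely as a black box (only `predict_proba` is queried), the same scheme applies to any classifier, which is precisely what makes population-based metaheuristics attractive for this family of XAI methods.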