Advancing Transparent and Human-Centered Artificial Intelligence: A Comprehensive Review of Explainable AI Theories, Methods, and Applications

Authors

  • Dr. Eleanor Whitfield, Department of Computer Science, University of Edinburgh, United Kingdom

Keywords

Explainable Artificial Intelligence, Model Interpretability, Transparency, Counterfactual Explanations

Abstract

Explainable Artificial Intelligence (XAI) has emerged as a central paradigm in contemporary artificial intelligence research, driven by the growing deployment of machine learning systems in high-stakes domains such as healthcare, finance, governance, and autonomous systems. While predictive accuracy has traditionally dominated the evaluation of machine learning models, increasing concerns regarding opacity, accountability, fairness, trust, and ethical compliance have exposed fundamental limitations of black-box approaches. This article presents a comprehensive research study that synthesizes and critically elaborates on the theoretical foundations, taxonomies, methodologies, and application-driven implications of XAI, based strictly on established scholarly literature. Drawing from foundational surveys, conceptual frameworks, and domain-specific studies, the article examines explainability from multiple perspectives, including technical model interpretability, human-centered explanation effectiveness, causality and counterfactual reasoning, knowledge-based representations, and stakeholder-oriented requirements. Particular attention is given to the tension between model complexity and interpretability, the distinction between intrinsic and post-hoc explanations, and the evolving role of XAI in regulated and safety-critical environments. Methodologically, the study adopts a structured qualitative synthesis approach, integrating comparative analysis and conceptual reasoning to uncover patterns, gaps, and unresolved challenges within the existing body of work. The results highlight that explainability is not a singular technical property but a socio-technical construct shaped by context, audience, and purpose. The discussion extends these findings by addressing limitations of current XAI methods, including evaluation ambiguity, the potential for misleading explanations, and insufficient alignment with human reasoning. The article concludes by proposing future research directions toward responsible, human-aligned, and causally grounded XAI systems. Overall, this work contributes an in-depth, theoretically rich, and integrative perspective intended to guide researchers, practitioners, and policymakers toward more transparent and trustworthy artificial intelligence.
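The counterfactual reasoning the abstract refers to can be made concrete with a minimal sketch: a counterfactual explanation answers "what is the smallest change to this input that flips the model's decision?" The toy credit-scoring model, feature names, and greedy single-feature search below are hypothetical illustrations, not methods taken from the article; practical generators (surveyed in refs. 4, 8, and 17) instead optimize a trade-off between sparsity, proximity, and plausibility.

```python
def predict(x):
    """Toy black-box classifier: approve (1) if a linear score clears zero.
    The coefficients and features are invented for illustration only."""
    score = x["income"] - 1.2 * x["debt"] + 0.5 * x["years_employed"] - 2.0
    return 1 if score > 0 else 0

def counterfactual(x, feature, step=0.5, max_steps=100):
    """Greedily increase one feature until the model's decision flips.

    Returns (modified input, total change applied), or None if no flip
    occurs within max_steps.
    """
    original = predict(x)
    cf = dict(x)
    for _ in range(max_steps):
        cf[feature] += step
        if predict(cf) != original:
            return cf, cf[feature] - x[feature]
    return None

applicant = {"income": 1.0, "debt": 1.5, "years_employed": 2.0}
print(predict(applicant))                              # rejected: 0
cf, delta = counterfactual(applicant, "income")
print(delta)                                           # income increase needed to flip: 2.0
```

The returned delta is itself the explanation ("the application would have been approved had income been 2.0 units higher"), which is why counterfactuals are often described as contrastive and human-aligned: they state a change rather than a feature-weight table.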

References

1. Arrieta, A.B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., García, S., Gil-López, S., Molina, D., Benjamins, R., et al. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115.

2. Burkart, N., & Huber, M.F. (2021). A survey on the explainability of supervised machine learning. Journal of Artificial Intelligence Research, 70, 245–317.

3. Cambria, E., Malandri, L., Mercorio, F., Mezzanzanica, M., & Nobani, N. (2023). A survey on XAI and natural language explanations. Information Processing & Management, 60(1), 103111.

4. Chou, Y.-L., Moreira, C., Bruza, P., Ouyang, C., & Jorge, J. (2022). Counterfactuals and causability in explainable artificial intelligence: Theory, algorithms, and applications. Information Fusion, 81, 59–83.

5. Došilović, F.K., Brčić, M., & Hlupić, N. (2018). Explainable artificial intelligence: A survey. Proceedings of the International Convention on Information and Communication Technology, Electronics and Microelectronics, 210–215.

6. Gerlings, J., Shollo, A., & Constantiou, I. (2021). Reviewing the need for Explainable Artificial Intelligence (XAI). Proceedings of the Hawaii International Conference on System Sciences.

7. Gerlings, J., Jensen, M.S., & Shollo, A. (2022). Explainable AI, but explainable to whom? An exploratory case study of xAI in healthcare. In Handbook of Artificial Intelligence in Healthcare, Volume 2, 169–198.

8. Guidotti, R. (2022). Counterfactual explanations and how to find them: Literature review and benchmarking. Data Mining and Knowledge Discovery, 1–55.

9. Holzinger, A., Saranti, A., Molnar, C., Biecek, P., & Samek, W. (2022). Explainable AI methods—A brief overview. International Workshop on Extending Explainable AI beyond Deep Models and Classifiers, 13–38.

10. Jung, J., Lee, H., Jung, H., & Kim, H. (2023). Essential properties and explanation effectiveness of explainable artificial intelligence in healthcare: A systematic review. Heliyon, 9, e16110.

11. Langer, M., Oster, D., Speith, T., Hermanns, H., Kästner, L., Schmidt, E., Sesing, A., & Baum, K. (2021). What do we want from Explainable Artificial Intelligence (XAI)? A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Artificial Intelligence, 296, 103473.

12. Linardatos, P., Papastefanopoulos, V., & Kotsiantis, S. (2021). Explainable AI: A review of machine learning interpretability methods. Entropy, 23(1), 18.

13. Loyola-Gonzalez, O. (2019). Black-box vs. white-box: Understanding their advantages and weaknesses from a practical point of view. IEEE Access, 7, 154096–154113.

14. Rai, A. (2020). Explainable AI: From black box to glass box. Journal of the Academy of Marketing Science, 48, 137–141.

15. Ribeiro, M.T., Singh, S., & Guestrin, C. (2016). Why should I trust you?: Explaining the predictions of any classifier. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144.

16. Shukla, O. (2025). Explainable Artificial Intelligence Modelling for Bitcoin Price Forecasting. Journal of Emerging Technologies and Innovation Management, 1(1), 50–60.

17. Stepin, I., Alonso, J.M., Catala, A., & Pereira-Farina, M. (2021). A survey of contrastive and counterfactual explanation generation methods for Explainable Artificial Intelligence. IEEE Access, 9, 11974–12001.

18. Tiddi, I., & Schlobach, S. (2022). Knowledge graphs as tools for explainable machine learning: A survey. Artificial Intelligence, 302, 103627.

19. Tjoa, E., & Guan, C. (2020). A survey on explainable artificial intelligence (XAI): Toward medical XAI. IEEE Transactions on Neural Networks and Learning Systems, 32, 4793–4813.

20. Yang, G., Ye, Q., & Xia, J. (2022). Unbox the black-box for the medical Explainable AI via multi-modal and multi-centre data fusion. Information Fusion, 77, 29–52.

Published

2025-12-19

How to Cite

Advancing Transparent and Human-Centered Artificial Intelligence: A Comprehensive Review of Explainable AI Theories, Methods, and Applications. (2025). International Journal of Advance Scientific Research, 5(07), 101-108. https://sciencebring.com/index.php/ijasr/article/view/1051
