On June 25th, Professor Hani Hagras from the University of Essex, UK, was invited to the Huangdao Forum (Mathematical and Physical Sub-forum) and delivered an academic lecture titled "Towards True Explainable Artificial Intelligence for Real-World Applications."
Professor Hagras explained that the rapid development of artificial intelligence (AI) in recent years has brought significant benefits, but the black-box nature of many models has also raised widespread concern. While current mainstream deep learning models perform extremely well, their decision-making processes lack transparency, posing major risks in critical fields such as finance and healthcare. He therefore argued for a stronger focus on enhancing the transparency of AI systems through explainable artificial intelligence (XAI), so that their decisions are both reliable and understandable to humans.
Professor Hagras pointed out that achieving XAI rests on three pillars: causality, transparency, and simplicity. This requires models to reveal their internal logic, ensure traceable decision-making, and provide easily interpretable explanations. Fuzzy logic systems, particularly those built on type-2 fuzzy sets, excel in this regard: they can express decisions in terms close to natural human language, significantly improving trust in the model.
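To make the fuzzy-logic point concrete, the sketch below shows how an interval type-2 fuzzy inference step can emit a linguistic explanation alongside its numeric output. It is a minimal illustration only: the toy credit-risk variables, the membership functions, and the simplified Nie-Tan style type reduction are assumptions chosen for the example, not details taken from Professor Hagras's lecture.

```python
# Minimal interval type-2 fuzzy inference sketch. All labels, rules,
# and numbers are illustrative assumptions, not the lecturer's models.

def tri(x, a, b, c):
    """Triangular membership: rises from a to a peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

class IT2Set:
    """Interval type-2 fuzzy set: lower and upper triangular membership
    functions bounding the footprint of uncertainty."""
    def __init__(self, label, lower, upper):
        self.label, self.lower, self.upper = label, lower, upper
    def grade(self, x):
        return tri(x, *self.lower), tri(x, *self.upper)  # (lower, upper)

# Illustrative linguistic terms for a toy credit-risk example.
low_income  = IT2Set("income is LOW",     (0, 15, 35),     (0, 15, 45))
high_income = IT2Set("income is HIGH",    (30, 60, 90),    (20, 60, 100))
high_debt   = IT2Set("debt ratio is HIGH",(0.4, 0.7, 1.0), (0.3, 0.7, 1.0))

# Rules: (antecedent sets, crisp consequent score, readable conclusion).
rules = [
    ([low_income, high_debt], 0.9, "risk is HIGH"),
    ([high_income],           0.2, "risk is LOW"),
]

def infer(inputs):
    """inputs maps each antecedent set to its crisp input value."""
    num = den = 0.0
    explanation = []
    for antecedents, score, conclusion in rules:
        lo = hi = 1.0
        for s in antecedents:
            l, u = s.grade(inputs[s])
            lo, hi = min(lo, l), min(hi, u)   # min t-norm firing interval
        f = (lo + hi) / 2                     # Nie-Tan style type reduction
        if f > 0:  # every fired rule doubles as a human-readable reason
            explanation.append(
                f"IF {' AND '.join(s.label for s in antecedents)} "
                f"THEN {conclusion} (fired at {f:.2f})")
        num += f * score
        den += f
    return (num / den if den else 0.0), explanation

score, why = infer({low_income: 22, high_income: 22, high_debt: 0.65})
print(f"risk score = {score:.2f}")
print("\n".join(why))
```

The point of the sketch is the last two lines: the system's output is not only a number but the list of fired linguistic rules, which is the kind of near-natural-language explanation the lecture attributed to type-2 fuzzy systems.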
Professor Hagras emphasized that XAI has already demonstrated immense value in practice. In finance, it strengthens the credibility of credit scoring by making the key factors behind a decision visible; in medical diagnostics, it helps interpret brain activity patterns; and in genomics research, it accelerates the development of cancer drugs. In fields such as engineering optimization and resource allocation, XAI techniques have likewise delivered notable gains in efficiency.
After the lecture, Professor Hagras engaged in discussion with faculty and students, answering questions in detail and offering targeted, insightful advice drawn from his expertise.
Professor Hani Hagras is a Professor in the School of Computer Science and Electronic Engineering at the University of Essex, where he directs the Centre for Computational Intelligence and leads the AI Research Group. He is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and of the Institution of Engineering and Technology (IET), and a Principal Fellow of the UK Higher Education Academy (PFHEA). He is also a Fellow of the Artificial Intelligence Industry Alliance (AIIA) and of the Asia-Pacific Artificial Intelligence Association (AAIA). His research focuses on explainable AI (XAI) and data science, with applications spanning seven major fields, including finance, cyber-physical systems, and neuroscience. He has published over 500 papers in international journals, conferences, and books. His accolades include the IEEE Transactions on Fuzzy Systems Outstanding Paper Award (2004, 2010), the Global Telecom Business Award (2015, 2017), and appointment as an IEEE Computational Intelligence Society Distinguished Lecturer (2016). He has also won Best Paper Awards at major international conferences, including the IEEE International Conference on Fuzzy Systems (2006, 2014) and the UK Workshop on Computational Intelligence (2012). Through XAI technology, his work has driven paradigm shifts in industrial intelligent control and the deployment of trustworthy AI.
[Editor: Sijia Wang]