  • The Second Xiangya Hospital of Central South University, No. 139, Renmin Road Central, Changsha, Hunan, China.
  • School of Life Sciences, Central South University, Changsha, Hunan, China.
  • College of Computer Science and Engineering, Jishou University, Jishou, Hunan, China.
  • Key Laboratory of Medical Information Research, The Third Xiangya Hospital, Central South University, College of Hunan Province, Changsha, Hunan, China.
  • Clinical Research Center for Cardiovascular Intelligent Healthcare, Changsha, Hunan, China.
  • Big Data Institute, Central South University, Changsha 410083, China.
  • Background. Artificial intelligence (AI) has developed rapidly, and its applications extend to clinical decision support systems (CDSS) for improving healthcare quality. However, the interpretability of AI-driven CDSS poses significant challenges to widespread adoption. Objective. This study is a review of the knowledge-based and data-based CDSS literature on interpretability in health care. It highlights the relevance of interpretability for CDSS and the areas for improvement from technological and medical perspectives. Methods. A systematic search was conducted for interpretability-related literature published from 2011 to 2020 and indexed in five databases: Web of Science, PubMed, ScienceDirect, Cochrane, and Scopus. Journal articles that focus on the interpretability of CDSS were included for analysis. Experienced researchers also participated in manually reviewing the selected articles for inclusion/exclusion and categorization. Results. Based on the inclusion and exclusion criteria, 20 articles from 16 journals were finally selected for this review. Interpretability, which means a transparent model structure, a clear relationship between input and output, and explainability of artificial intelligence algorithms, is essential for applying CDSS in the healthcare setting. Methods for improving the interpretability of CDSS include ante-hoc methods for knowledge-based AI and white-box models, such as fuzzy logic, decision rules, logistic regression, and decision trees, and post-hoc methods for black-box models, such as feature importance, sensitivity analysis, visualization, and activation maximization. A number of factors, such as data type, biomarkers, human-AI interaction, and the needs of clinicians and patients, can affect the interpretability of CDSS. Conclusions. The review explores the meaning of the interpretability of CDSS and summarizes the current methods for improving interpretability from technological and medical perspectives.
The results contribute to the understanding of the interpretability of AI-based CDSS in health care. Future studies should focus on establishing a formalism for defining interpretability, identifying the properties of interpretability, and developing an appropriate and objective metric for interpretability; in addition, users' demand for interpretability and how to express and provide explanations are also directions for future research.
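As an illustration of the two families of methods contrasted in the abstract, the sketch below (not taken from the review; the dataset, models, and parameters are assumptions for demonstration) pairs an ante-hoc interpretable model, a logistic regression whose coefficients directly expose the input-output relationship, with a post-hoc explanation of a black-box model via permutation feature importance, using scikit-learn on a public clinical-style dataset:

```python
# Illustrative sketch only: ante-hoc vs. post-hoc interpretability.
# Dataset, model choices, and hyperparameters are hypothetical examples,
# not methods endorsed by the reviewed studies.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Ante-hoc: a white-box model is interpretable by construction; each
# coefficient states how a feature shifts the log-odds of the outcome.
white_box = LogisticRegression(max_iter=5000).fit(X_train, y_train)
coef = dict(zip(X.columns, white_box.coef_[0]))

# Post-hoc: permutation importance explains an already-trained black-box
# model by measuring how much the test score drops when one feature's
# values are shuffled, breaking its relationship with the outcome.
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(
    black_box, X_test, y_test, n_repeats=10, random_state=0
)
ranked = sorted(
    zip(X.columns, result.importances_mean), key=lambda t: -t[1]
)

# Report the five features the black-box model relies on most.
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

The design contrast matters for CDSS: the ante-hoc model's explanation is the model itself, while the post-hoc explanation is an approximation computed after the fact and can be sensitive to choices such as the number of shuffling repeats.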