Artificial intelligence in health watch (Veille en intelligence artificielle en santé)
January 2025

Purpose of the watch
To feed a strategic vision while providing concrete tools to teams in the réseau de la Santé et des Services sociaux, through articles on technologies and knowledge likely to support the development and integration of artificial intelligence (AI) in health. The CEIAVD has chosen to publish only open-access articles to guarantee their availability to a wide audience. Their selection follows an editorial process. AI applied to the clinical domain covers too broad a spectrum to be addressed exhaustively by our watch, which is intentionally more general.

Would you like us to share an article or event? ceiavd@ssss.gouv.qc.ca

Legend: 🍁 Canada 💙 Québec 🛠️ Tools ⭐ Recommended 💡 Of interest

🍁 Regulating professional ethics in a context of technological change
This study seeks to identify the challenges that health professionals face in the context of technological change, and whether regulators' codes of ethics and guidance are sufficient to help workers navigate these changes.

The bias algorithm: how AI in healthcare exacerbates ethnic and racial disparities – a scoping review
This scoping review examined racial and ethnic bias in artificial intelligence health algorithms (AIHA), the role of stakeholders in oversight, and the consequences of AIHA for health equity.

Explainable AI (XAI) in healthcare: Enhancing trust and transparency in critical decision-making
This research explores the application of XAI techniques in healthcare, focusing on critical areas such as disease diagnostics, predictive analytics, and personalized treatment recommendations.

Ethical Decision-Making in Artificial Intelligence: A Logic Programming Approach
The paper proposes integrating ethical reasoning into AI systems using Continuous Logic Programming (CLP).
It emphasizes transparency and accountability in automated decision-making, addressing concerns such as algorithmic bias, data privacy, and ethical dilemmas in healthcare and autonomous systems. CLP offers a systematic framework for ethical AI.

🛠️ The Coalition for Health AI (CHAI) applied model card
This model card documents an AI solution for health applications, embedded within an AI system. It meets HTI-1 criteria for predictive DSIs and supports CHAI's Responsible AI Principles. The draft includes complete documentation, a fillable template, and an example, open for public feedback until January 22, 2025. See amc-schema on GitHub.

💡 Trustworthy AI for the enterprise: How to train an LLM to follow a code of conduct
Outlining clear guidelines for model behaviour is essential for ensuring the responsible use of AI and building systems that enterprises and their customers can trust. In this post, I share how AI21 utilised the OECD's AI Principles to develop an AI Code of Conduct to promote the safe and responsible use of AI21's large language models (LLMs) in the enterprise.

Strategies for creation of data reserve and stress testing of medical AI products
Medical AI (MAI) products must undergo rigorous stress tests to ensure safety, accuracy, and efficiency before real-world implementation. These tests identify potential weaknesses and optimize MAIs. Independent bodies should administer stress tests to avoid bias and ensure fairness, enhancing trust in MAIs.

AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking
This study investigates the relationship between AI tool usage and critical thinking skills, focusing on cognitive offloading as a mediating factor. Utilising a mixed-method approach, we conducted surveys and in-depth interviews with 666 participants across diverse age groups and educational backgrounds.
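A model card such as the CHAI applied model card above is, in essence, structured documentation attached to a deployed health-AI solution. As a rough sketch only (the field names and figures below are hypothetical illustrations, not CHAI's actual amc-schema), such a card can be represented and checked for completeness like this:

```python
# Illustrative only: hypothetical fields, not the official CHAI schema.
model_card = {
    "name": "sepsis-risk-predictor",           # hypothetical solution name
    "intended_use": "Flag adult inpatients at elevated sepsis risk",
    "out_of_scope": ["pediatric patients", "outpatient settings"],
    "training_data": {"source": "de-identified EHR records", "years": "2018-2023"},
    "performance": {"auroc": 0.87, "sensitivity": 0.81},  # hypothetical figures
    "known_limitations": ["performance not validated on rural populations"],
    "maintenance": {"owner": "clinical AI team", "review_cycle_months": 12},
}

def missing_fields(card, required=("name", "intended_use",
                                   "performance", "known_limitations")):
    """Return the required documentation fields absent from a card."""
    return [f for f in required if f not in card]

print(missing_fields(model_card))  # an empty list: all required fields present
```

The value of a fillable template, as in the CHAI draft, is precisely that such completeness checks can be automated before a card is published.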
Developing artificial intelligence solutions takes considerable time, and it is essential to stay informed of ongoing developments in order to anticipate the potential impact of these technologies on strategy in the health domain. Below are two articles to help you discover autonomous AI.

The rise of 'AI agents': What they are and how to manage the risks
By 2027, half of companies using generative AI will have launched AI agents, transforming industries and productivity. These autonomous systems perform complex tasks with minimal supervision. The World Economic Forum and Capgemini's white paper explores their capabilities, implications, and strategies for mitigating associated risks.

Agentic AI vs Generative AI: Understanding the Key Differences and Impacts
In this blog series, we will break down the differences between Agentic AI and Generative AI, looking at how each affects industries and the future of technology. In this first post, we'll start by exploring what sets these two types of AI apart.

Developing healthcare language model embedding spaces
This study aims to develop and evaluate efficient methods for adapting smaller LLMs to healthcare-specific datasets and tasks. We seek to identify pre-training approaches that can effectively instil healthcare competency in compact LLMs under tight computational budgets, a crucial capability for responsible and sustainable deployment in local healthcare settings.

GDD: Generative Driven Design - Reflective generative AI software components as a development paradigm
Generative Driven Design (GDD) is a development paradigm in which AI tools automate software creation. Unlike traditional methods, GDD integrates codebase architecture and refactoring into the automation process, aiming to reduce "bot rot" and improve code quality by making code highly GenAI-able and incorporating automation as a core element.
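The agentic/generative distinction discussed in the two agent articles above comes down to a control loop: an agent repeatedly chooses and executes tools until a goal is reached, rather than producing a single output. A deliberately toy sketch (the "policy" below is a hard-coded rule standing in for an LLM planner, and the tools are pretend functions, not any real API):

```python
# Toy agent loop: a stand-in "policy" picks tools until the goal state is reached.
def lookup_patient_count(state):
    state["count"] = 42          # pretend database call (hypothetical)
    return state

def draft_summary(state):
    state["summary"] = f"{state['count']} patients triaged today."
    return state

TOOLS = {"lookup": lookup_patient_count, "summarize": draft_summary}

def policy(state):
    """Stand-in for an LLM planner: decide the next tool, or stop."""
    if "count" not in state:
        return "lookup"
    if "summary" not in state:
        return "summarize"
    return None  # goal reached

def run_agent(state, max_steps=5):
    for _ in range(max_steps):   # bounded autonomy: never loop forever
        action = policy(state)
        if action is None:
            return state
        state = TOOLS[action](state)
    return state

print(run_agent({})["summary"])
```

The `max_steps` bound illustrates one of the risk-management strategies the white paper discusses: autonomous systems need explicit limits on how long they may act without supervision.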
Large Concept Models: Language Modeling in a Sentence Representation Space
The paper explores an architecture that operates on higher-level semantic representations called concepts. It uses SONAR, a sentence embedding space, to perform autoregressive sentence prediction. The model shows impressive zero-shot generalization performance across multiple languages.

What is NVIDIA Cosmos?
NVIDIA Cosmos™ is a platform of state-of-the-art generative world foundation models (WFM), advanced tokenizers, guardrails, and an accelerated data processing and curation pipeline built to accelerate the development of physical AI systems such as autonomous vehicles (AVs) and robots. A presentation video is available.

Enhancing the intelligibility of decision trees with concise and reliable probabilistic explanations
This work focuses on enhancing the intelligibility of decision trees using concise and reliable probabilistic explanations. A greedy algorithm is proposed to derive these explanations, which are compared to a state-of-the-art SAT encoding. Experiments on binary and multi-class classification, including text classification, demonstrate significant gains in interpretability.

Leveraging artificial intelligence to reduce diagnostic errors in emergency medicine: Challenges, opportunities, and future directions
In this paper, we discuss opportunities and challenges for artificial intelligence (AI) to enhance the accuracy of ECs' diagnostic decision making via clinician-oriented, patient safety–centric implementations.

Improving triage performance in emergency departments using machine learning and natural language processing: a systematic review
This review analyzes studies on ML and/or NLP algorithms for ED patient triage.
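The decision-tree intelligibility entry above proposes deriving concise probabilistic explanations greedily. As a loose pure-Python illustration of that general idea (this is not the authors' algorithm, and the data, features, and threshold are invented), one can greedily drop conditions from a decision path as long as an estimated precision on a reference sample stays above a reliability threshold:

```python
# Illustrative toy, not the paper's algorithm: greedily drop conditions from a
# decision path while an estimated precision stays above a reliability threshold.
def satisfies(x, conds):
    return all(x[f] <= t if op == "<=" else x[f] > t for f, op, t in conds)

def precision(conds, sample, label):
    """Fraction of reference points matching `conds` that carry `label`."""
    hits = [y for x, y in sample if satisfies(x, conds)]
    return sum(1 for y in hits if y == label) / len(hits) if hits else 0.0

def greedy_explanation(conds, sample, label, threshold=0.9):
    conds = list(conds)
    for c in list(conds):                         # try dropping each condition
        trial = [k for k in conds if k != c]
        if precision(trial, sample, label) >= threshold:
            conds = trial                         # redundant condition: drop it
    return conds

# A path predicting label 1; the blood-pressure condition is redundant here,
# so the explanation shrinks to the age condition alone.
sample = [({"age": 40, "bp": 130}, 1), ({"age": 45, "bp": 110}, 1),
          ({"age": 60, "bp": 110}, 0)]
path = [("age", "<=", 50), ("bp", "<=", 120)]
print(greedy_explanation(path, sample, 1))
```

The shorter explanation remains "reliable" in the probabilistic sense that its precision on the reference sample stays above the chosen threshold, which is the trade-off the paper's title refers to.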
Benefits and harms associated with the use of AI-related algorithmic decision-making systems by healthcare professionals: a systematic review
This systematic review assesses the benefits and harms associated with AI-related algorithmic decision-making (ADM) systems used by healthcare professionals, compared to standard care.

How Artificial Intelligence is altering the nursing workforce
The paper explores the impact of AI on the nursing workforce, highlighting both opportunities and challenges. AI can relieve nurses of routine tasks, allowing more time for patient care and professional development. However, concerns about job displacement persist. The ethical integration of AI in nursing education and practice is emphasized, urging nurses to engage proactively with AI to enhance patient care and advance their roles.

Adapting Artificial Intelligence Concepts to Enhance Clinical Decision-Making: A Hybrid Intelligence Framework
We developed a conceptual framework designed to integrate AI-driven hybrid intelligence into clinical practice to enhance decision-making. This framework focuses on adapting key AI concepts, such as backpropagation, quantization, and avoiding overfitting, to help clinicians better interpret complex medical data and improve diagnosis and treatment planning.

Machine learning approaches for the discovery of clinical pathways from patient data: A systematic review
This review provides a comprehensive overview of the literature concerning the use of machine learning methods for clinical pathway discovery from patient data.
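The hybrid-intelligence framework above adapts AI concepts such as quantization for clinical audiences. As a quick standalone illustration of what quantization means (unrelated to that paper's specific framework, with invented weight values), reducing weight precision trades a small, bounded numeric error for memory savings:

```python
# Standalone illustration of symmetric weight quantization: map floats to
# small integers plus one scale factor, then reconstruct and measure the error.
def quantize(weights, bits=8):
    qmax = 2 ** (bits - 1) - 1                    # 127 for signed int8
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.12, -0.53, 0.99, -0.07]              # invented example weights
q, scale = quantize(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(max_err)  # round-trip error stays below scale / 2
```

Each weight is stored as one small integer instead of a float, which is why quantization shrinks model memory; the cost is the rounding error bounded by half the scale factor.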
Implementation of Machine Learning Applications in Health Care Organizations: Systematic Review of Empirical Studies
This work aimed to (1) examine the characteristics of ML-based applications and the implementation process in clinical practice, using the Consolidated Framework for Implementation Research (CFIR) for theoretical guidance, and (2) synthesize the strategies adopted by health care organizations to foster successful implementation of ML.

Mortality prediction using data from wearable activity trackers and individual characteristics: An explainable artificial intelligence approach
This study uses novel data sources, such as wearable activity tracking devices, combined with explainable artificial intelligence methods to enhance the accuracy and interpretability of mortality predictions.

Two publications to inspire us.

France: L'IA et l'avenir du service public - Rapport thématique #2 - IA et santé
This report explores the impact of artificial intelligence (AI) on France's public health service. It examines the benefits, risks, and deployment scenarios of AI, while underlining the importance of an ethical and regulatory framework for its use.

The U.S. Department of Health and Human Services (HHS) Artificial Intelligence Strategic Plan
The AI Strategic Plan provides a framework and roadmap to ensure that HHS fulfills its obligation to the Nation and pioneers the responsible use of AI to improve people's lives. To increase accessibility and utility to the broad set of readers across the sector, we provide a high-level Overview along with the full Strategic Plan.

From theory to practice: Harmonizing taxonomies of trustworthy AI
The paper evaluates various trustworthy AI frameworks, highlighting their differences and proposing a harmonized approach.
It focuses on the Department of Veterans Affairs, offering a comprehensive perspective on trustworthy AI principles to guide federal agencies in developing AI strategies that align with institutional needs and priorities.

Australia: Voluntary AI Safety Standard
The standard consists of 10 voluntary guardrails that apply to all organisations throughout the AI supply chain. They include testing, transparency, and accountability requirements across the supply chain. They also explain what developers and deployers of AI systems must do to comply with the guardrails.

⭐ Tackling algorithmic bias and promoting transparency in health datasets: the STANDING Together consensus recommendations
The STANDING Together recommendations aim to address algorithmic bias in AI health technologies by promoting transparency in health datasets. They provide guidelines for documenting and using health datasets to identify and mitigate biases, ensuring that AI technologies are safe and effective for all population groups.

Disclaimer

Use of English
To preserve the integrity of meaning, we chose not to translate article titles and summaries. You can use the automatic translation tools available in some browsers to translate articles written in English: right-click on the page and select "Translate to French". Note that using these tools may introduce differences in meaning compared with the original content.

Use of AI
Some summaries in this watch, generated with AI tools to reduce word count, may contain inaccuracies. The original articles were not read in full by the author of this watch, so the CEIAVD declines any responsibility for inaccuracies or omissions. The summaries are meant to orient readers to the content. For complete and accurate information, consult the original articles.
Hyperlinks
The watches include links to websites created and maintained by third parties to make it easier to consult the articles. The CEIAVD neither endorses nor guarantees, explicitly or implicitly, the accuracy or completeness of the content of these hyperlinks or the opinions expressed there. The CEIAVD declines any responsibility for these external websites.