Artificial Intelligence in Health Watch

December 2024

Purpose of the watch

To feed a strategic vision while providing concrete tools to the teams of the health and social services network, through articles on technologies and knowledge likely to support the development and integration of artificial intelligence (AI) in health.

The CEIAVD has chosen to publish only open-access articles, to guarantee their availability to a wide audience. Articles are selected through an editorial process. AI applied to the clinical domain spans too broad a spectrum to be covered exhaustively by our watch activity, which is intended to be more generalist.

Would you like us to share an article or an event? ceiavd@ssss.gouv.qc.ca

🍁 Canada             💙 Québec                🛠️ Tools                ⭐ Recommendation

 

💡 Watch update – corrected links

We noticed that some hyperlinks in the previous mailing did not work correctly. The problem has been fixed, and we are resending the watch with working links.

Thank you for your understanding, and enjoy the year-end break!

 

🚀Launch of the Plan directeur sur l'IA en santé 2024-2027 (Master Plan for AI in Health)

The MSSS unveils a plan that aligns technological advances with the needs of the health and social services network, while ensuring responsible and ethical use of artificial intelligence.

📌16 concrete actions to:
✅ Support performance
✅ Create value
✅ Promote research and innovation
✅ Govern responsible use

The plan charts a clear path toward ethical use of artificial intelligence, in compliance with laws and regulations.

📄 Discover the plan here 👇
https://publications.msss.gouv.qc.ca/msss/document-003840/

 

See the list of college and university training programs in the Resources section of our website

 

💙Launch of the inter-university diploma in generative AI in health
January 2025 - Virtual - $

The ÉIAS is proud to be part of the new inter-university diploma in generative AI in health, available as of January 2025! This fully online program will allow learners from around the world to benefit from the teaching of some thirty high-level experts.
💡Program information          👉 Registration

 

💙École d'été interdisciplinaire en numérique de la santé (EINS) – Interdisciplinary Summer School in Digital Health
May 26-30, 2025 - In person - $
Université de Sherbrooke

💡Program information         👉 Registration
 

💙Towards Substantive Equality in Artificial Intelligence: Transformative AI Policy for Gender Equality and Diversity
The project’s overall objective is to contribute to ensuring that AI ecosystems have the appropriate tools, frameworks, and resources to incorporate effective Diversity and Gender Equality (DGE) strategies throughout the AI life cycle.
 

Methodology for the Risk and Impact Assessment of Artificial Intelligence Systems from the Point of View of Human Rights, Democracy and the Rule of Law (HUDERIA methodology)

The methodology requires regular reassessments to ensure that the AI system continues to operate safely and ethically as circumstances and the technology evolve. This approach guarantees citizens protection against emerging risks throughout the AI system's life cycle.

Exploring bias risks in artificial intelligence and targeted medicines manufacturing
In this paper, we explore bias risks in targeted medicines manufacturing. Targeted medicines manufacturing refers to the act of making medicines targeted to individual patients or to subpopulations of patients within a general group, which can be achieved, for example, by means of cell and gene therapies.

Machine Learning in Health Care: Ethical Considerations Tied to Privacy, Interpretability, and Bias
Machine learning models hold great promise with medical applications, but also give rise to a series of ethical challenges. In this survey we focus on training data, model interpretability and bias and the related issues tied to privacy, autonomy, and health equity.

Privacy-preserving explainable AI: a survey
This article presents the first thorough survey about privacy attacks on model explanations and their countermeasures. Our contribution to this field comprises a thorough analysis of research papers with a connected taxonomy that facilitates the categorization of privacy attacks and countermeasures based on the targeted explanations.

AI Safety and Automation Bias - The Downside of Human-in-the-Loop
Automation bias is the tendency to overly rely on automated systems, leading to increased risks of accidents and errors. The article examines three case studies to understand and mitigate these biases. It proposes a three-level framework to analyze the roles of users, technical design, and organizations in influencing automation biases.
Have you heard of automation bias? It is an individual's tendency to rely too heavily on an automated system. This can lead to an increased risk of accidents, errors and other adverse outcomes when individuals and organizations favour the system's output or suggestion, even in the face of contradictory information!

A Roadmap of Explainable Artificial Intelligence: Explain to Whom, When, What and How?

The article bridges the gap in explainable AI (XAI) by presenting a roadmap linking stakeholders' needs to XAI methods. It outlines explanation needs across the AI system lifecycle and offers guidelines for selecting suitable methodologies. The roadmap enhances XAI's practical use by guiding the fulfillment of diverse explanation requirements.

 

Compressing Large Language Models using Low Rank and Low Precision Decomposition
This work introduces CALDERA: Calibration Aware Low-Precision DEcomposition with Low-Rank Adaptation, which compresses LLMs by leveraging the approximate low rank structure inherent in these weight matrices.

Machine learning approaches for the discovery of clinical pathways from patient data: A systematic review
This review provides a comprehensive overview of the literature concerning the use of machine learning methods for clinical pathway discovery from patient data.

Enhancing Structured Query Language Injection Detection with Trustworthy Ensemble Learning and Boosting Models Using Local Explanation Techniques
This paper presents a comparative analysis of several decision models for detecting Structured Query Language (SQL) injection attacks, which remain one of the most prevalent and serious security threats to web applications.

Attribute Relevance Score: A Novel Measure for Identifying Attribute Importance
This study introduces a novel measure for evaluating attribute relevance, specifically designed to accurately identify attributes that are intrinsically related to a phenomenon, while being sensitive to the asymmetry of those relationships and noise conditions

The explosion in time series forecasting packages in data science
In the rest of this post, we’ll look at the new(ish) time series packages that are around, who built them, and what they might be good for.

Navigating the landscape of multimodal AI in medicine: a scoping review on technical challenges and clinical applications
This scoping review explores multimodal AI in healthcare. It highlights how integrating diverse data sources improves clinical decision-making, with a 6.2% AUC gain over unimodal models. Challenges include data heterogeneity and coordination. Recommendations aim to advance multimodal AI's clinical adoption and maturity across medical disciplines.

Data-driven robust flexible personnel scheduling
This paper addresses the flexible personnel scheduling problem under uncertain demand using robust optimization techniques. It introduces a distributionally robust model with a Wasserstein ambiguity set and a robust satisficing model, both leveraging historical data. Practical applications, algorithmic tractability, and real-world case studies highlight the models’ effectiveness in uncertain environments.

 

Data Validation with Pydantic
Pydantic is a Python library focused on parsing, not validation. It ensures the types and constraints of output models rather than input data. While validation isn't its primary purpose, Pydantic allows custom validation, including field-specific checks using decorators like field_validator. Suggested by Mazid Osseni, AI advisor at the CEIAVD
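A minimal sketch of the parse-then-validate behaviour described above, assuming Pydantic v2 (where field_validator lives); the Patient model and its age check are illustrative examples, not taken from the article:

```python
# Illustrative sketch (assumes Pydantic v2): type coercion (parsing) runs first,
# then an optional custom field-specific check layered on top.
from pydantic import BaseModel, ValidationError, field_validator

class Patient(BaseModel):
    name: str
    age: int  # the string "42" is *parsed* into the int 42, not rejected

    @field_validator("age")
    @classmethod
    def age_in_plausible_range(cls, v: int) -> int:
        # Custom validation: runs after coercion has already produced an int
        if not 0 <= v <= 130:
            raise ValueError("age out of plausible range")
        return v

p = Patient(name="Ada", age="42")   # input type differs from the model's
print(p.age, type(p.age).__name__)  # coerced output: 42 int

try:
    Patient(name="Bob", age=200)    # coerces fine, but fails the custom check
except ValidationError as err:
    print("rejected:", err.errors()[0]["msg"])
```

This illustrates the article's point: the model guarantees what comes out (an int within range), not that what went in was already an int.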

Smart data-driven medical decisions through collective and individual anomaly detection in healthcare time series
This study presents a novel method for detecting both collective and individual anomalies in healthcare data through time series analysis using unsupervised machine learning.

Overcoming poor data quality: Optimizing validation of precedence relation data
Insufficient data quality hinders the use of decision support systems (DSS) in business, particularly in project scheduling and assembly line balancing. Experts can validate data, but their time is limited. We propose an optimization approach to the data validation problem (DVP), modeling it as a dynamic program to maximize the removal of unnecessary precedence relations, significantly improving business outcomes.

 

💙The Role of AI in Hospitals and Clinics: Transforming Healthcare in the 21st Century
As healthcare systems worldwide grapple with rising costs, limited access, and the increasing demand for personalized care, artificial intelligence (AI) is emerging as a transformative solution. This review explores the urgent need to leverage AI to address these challenges and provides a critical assessment of its integration across various healthcare domains.

💙Jasper, the cooking robot that feeds our seniors

Current State of Community-Driven Radiological AI Deployment in Medical Imaging
This paper explores the intersection of AI and medical imaging, emphasizing the need for standards in radiology workflows and the challenges of integrating AI in clinical settings. It introduces a taxonomy of AI use cases, highlights real-world hospital implementations, and discusses MONAI's role in supporting AI adoption in radiology.

Demystifying Large Language Models for Medicine: A Primer
This paper outlines guidelines for integrating Large Language Models (LLMs) into healthcare, covering task alignment, model selection, prompt engineering, fine-tuning, and deployment. It emphasizes ethical and regulatory compliance, bias monitoring, and task optimization. The aim is to help healthcare professionals use LLMs effectively and safely in clinical practice.


Leveraging Artificial Intelligence and Data Science for Integration of Social Determinants of Health in Emergency Medicine: Scoping Review
This scoping review examines the potential of AI and data science for modeling, extraction, and incorporation of SDOH data specifically within EDs, further identifying areas for advancement and investigation.

Forecasting severe respiratory disease hospitalizations using machine learning algorithms
In this study, we explore the capability of forecasting models to predict the number of hospital admissions in Auckland, New Zealand, within a three-week time horizon. Furthermore, we evaluate probabilistic forecasts and the impact on model performance when integrating laboratory data describing the circulation of respiratory viruses.

Use of Artificial Intelligence tools in supporting decision-making in hospital management
This study investigates the current state of decision-making in hospital management, explores the potential benefits of AI integration, and examines hospital managers’ perceptions of AI as a decision-support tool.

Generative artificial intelligence, patient safety and healthcare quality: a review
This review serves as a primer on foundation models’ underpinnings, upsides, risks and unknowns—and how these new capabilities may help improve healthcare quality and patient safety.

Artificial intelligence and the health workforce: Perspectives from medical associations on AI in health
The OECD conducted a survey of medical associations' perspectives on the integration of AI tools. The survey aimed to contribute to the discussion on AI from the perspective of healthcare providers, whose roles are critical to health systems.

AI at work: A practical guide to implementing and scaling new tools
For a new report, the World Economic Forum and PwC asked more than 20 ‘early adopters’ about the lessons they have learned while promoting job augmentation and workforce productivity growth with GenAI.

 

💡🤔How can we integrate AI into our health system and ensure its long-term sustainability? We've tracked down a few articles to feed your reflections!

The Data Strategy Choice Cascade - What your data strategy should look like
This article demystifies data strategy, an essential component for any organization striving to become data-driven and stay competitive in today's digital world.

The Root Cause of Why Organizations Fail With Data & AI
This article shows what strategic groundwork is necessary to activate your company’s data assets.

Create a center of excellence to maximize the ROI of AI investments
A comprehensive strategy for AI integration, including effective data governance, can help healthcare organizations ensure a strong, long-term return on investment.

How Memorial Hermann Health System is prioritizing AI governance
The Houston provider is gaining big efficiencies from an array of AI use cases, says Chief Digital Officer Eric Smith. An AI Governance Council ensures that transparency, safety and human-centeredness stay top priorities.

Accountability for all artificial intelligence used by a health system lies with the Chief AI Officer
To hold this hot new position, executives must have a very diverse set of skills, spanning both clinical practice and artificial intelligence, though there need not be a balance, says UC San Diego Health Chief Health AI Officer Dr. Karandeep Singh.

Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure
This product is the culmination of considerable dialogue and debate among the Artificial Intelligence Safety and Security Board (the Board), a public-private advisory committee established by DHS Secretary Alejandro N. Mayorkas. The Board identified the need for clear guidance on how each layer of the AI supply chain can do its part to ensure that AI is deployed safely and securely in U.S. critical infrastructure.

Bridging the Data Literacy Gap - The Advent, Evolution, and Current state of “Data Translators”
The article highlights the growing importance of data translators in bridging the divide between technical teams and decision-makers. These professionals play a crucial role in interpreting complex data, making it accessible and actionable for non-technical stakeholders, thereby fostering informed decision-making within organizations.

💙🍁Advancing scaling science in health and social care: a scoping review and appraisal of scaling frameworks
Using a scoping review, we examined the attributes of scaling frameworks for health innovations, comparing their similarities and differences.

🍁Report of the Digital Health Interoperability Task Force
This recommendations report presents the barriers to implementing interoperable digital health solutions within the health network and proposes solutions to overcome them, along with recommendations to foster the adoption and use of these solutions.

🍁Guide on AI in health for cybersecurity specialists
Inforoute (Canada Health Infoway) has developed a new guide that provides advice on building a trusted infrastructure to create a safe and responsible context for the adoption of AI in the health sector.

🍁Artificial intelligence procurement toolkit
Inforoute presents this toolkit to guide healthcare organizations through the complex challenges of AI procurement while complying with regulations and ethical practices and ensuring patient safety.

 

Public sector AI Playbook - A Resource from the AI Strategy for the Government
This playbook provides public officers, especially non-technical officers, with a guide on how AI can be adopted in their area of work, and shares a range of AI projects implemented throughout the public service.

The potential of generative AI for the public sector: current use, key questions and policy considerations
This policy brief aims to examine the current usage, possible benefits, challenges and future prospects for the use of generative AI in the public sector.

 

Disclaimer

Use of English
To preserve the integrity of meaning, we have chosen not to translate article titles and summaries. You can use the automatic translation tools available in some browsers to translate English-language articles: right-click on the page and select « Traduire en français » ("Translate to French"). Note that using these tools may introduce differences in meaning from the original content.

Use of AI
Some of the summaries in this watch were generated with AI tools to reduce word count and may contain inaccuracies. The original articles were not read in full by the author of this watch, and the CEIAVD therefore declines any responsibility for inaccuracies or omissions. The summaries are intended to orient readers to the content; for complete and accurate information, consult the original articles.

Hyperlinks
The watch bulletins include links to websites created and maintained by third parties, to make the articles easier to consult. The CEIAVD neither endorses nor guarantees, explicitly or implicitly, the accuracy or completeness of the content of these hyperlinks or of the opinions expressed there. The CEIAVD declines any responsibility for these external websites.