Artificial Intelligence in Health Watch

September 2024

Purpose of the watch

To nurture a strategic vision while providing concrete tools to the teams of the health and social services network, through articles on the technologies and knowledge likely to support the development and integration of artificial intelligence (AI) in health.

The CEIAVD has chosen to publish only open-access articles so that they remain available to a wide audience. Their selection follows an editorial process. AI applied to the clinical domain covers too broad a spectrum to be addressed exhaustively by our watch, which is intended to be more general in scope.

Would you like us to share an article or an event? ceiavd@ssss.gouv.qc.ca

 

Énoncé de principes pour une utilisation responsable de l'intelligence artificielle par les organismes publics - Ministère de la cybersécurité et du numérique (MCN) 💙
This statement aims to provide public bodies with guidance on the management of information resources. The principles it sets out contain all the elements required for the responsible use of artificial intelligence by such bodies.

Premières réflexions en intelligence artificielle responsable pour les non-spécialistes en intelligence artificielle - Ministère de la Santé et des Services sociaux (MSSS) 💙
Building on work carried out in Québec on AI, the Premières réflexions en IA responsable (PRIAR) propose guidelines for the advent of AI technologies.
This is the MSSS's first publication on AI, a great milestone! 🎉

Responsible AI Question Bank: A Comprehensive Tool for AI Risk Assessment
This study introduces our Responsible AI (RAI) Question Bank, a comprehensive framework and tool designed to support diverse AI initiatives. By integrating AI ethics principles such as fairness, transparency, and accountability into a structured question format, the RAI Question Bank aids in identifying potential risks, aligning with emerging regulations like the EU AI Act, and enhancing overall AI governance.

Rolling in the deep of cognitive and AI biases
We tackle AI fairness by focusing on human cognitive biases. Drawing from cognitive science, we map how these biases affect AI's lifecycle and uncover hidden influences. Our approach detects fairness levels and their interplay, aiming to deepen AI fairness studies and expose underlying biases.

 

 

Vector researchers present their work at ACL 2024

Code Hallucination
In this study, we introduce various code hallucinations created manually via large language models. We detail HallTrigger, a method for efficiently inducing arbitrary code hallucinations. It utilizes three dynamic aspects of LLMs to elicit hallucinations without accessing the model's architecture or parameters. Tests on well-known blackbox models indicate HallTrigger's effectiveness and its significant influence on software development.

Looking into Black Box Code Language Models
Recent studies on code language models (LMs) primarily focus on performance metrics, often treating LMs as inscrutable entities, while only a few explore the role of attention layers. This research delves into the feed-forward layers of code LMs, using Codegen-Mono and PolyCoder, and languages like Java, Go, and Python, to understand concept organization, editability, and layer roles in output generation. Findings reveal lower layers grasp syntax, upper layers grasp abstract concepts, and editing within feed-forward layers is feasible without affecting performance, offering insights for improved code LM understanding and development.

Les différentes philosophies d’implémentation en RL
This article was written by Ludovic Denoyer, a research scientist at FAIR who focuses mainly on various machine learning problems, in particular reinforcement learning and human-machine interaction; he was previously a professor at Sorbonne Universités. The post shows how to make programming reinforcement learning algorithms simpler by applying the principles of supervised learning, through the "RLStructures" library.
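
To give a concrete sense of writing RL training code with supervised-learning ergonomics, here is a minimal REINFORCE-style sketch in plain PyTorch and Gymnasium. It is not the RLStructures API; the environment, network, and hyperparameters are illustrative choices only.

# Minimal sketch: a REINFORCE-style loop written like a supervised training loop.
# This is NOT the RLStructures API; it only illustrates the "treat RL batches like
# supervised batches" idea. Environment and hyperparameters are placeholders.
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

for episode in range(200):
    obs, _ = env.reset()
    log_probs, rewards = [], []
    done = False
    while not done:
        logits = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        obs, reward, terminated, truncated, _ = env.step(action.item())
        done = terminated or truncated
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)

    # Compute discounted returns, then a "supervised-looking" loss over the episode.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + 0.99 * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).sum()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()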

ChartGemma: Visual Instruction-tuning for Chart Reasoning in the Wild
ChartGemma, a new model for chart understanding, overcomes the limitations of prior models by training directly on chart images, not just data tables. This approach captures detailed visual trends, leading to superior performance in summarization, question answering, and fact-checking across various benchmarks. ChartGemma's effectiveness is further proven through extensive real-world chart analysis. GitHub

Blog: A Lifecycle Management Approach toward Delivering Safe, Effective AI-enabled Health Care

In this article, we will focus on the potential of leveraging lifecycle management (LCM) to address the unique challenges of generative AI in health care, with practices to help ensure these systems meet real-world needs while managing their inherent risks across the software lifecycle.

Dreaming is All You Need
This research introduces two novel deep learning models, SleepNet and DreamNet, to balance supervised and unsupervised learning. SleepNet seamlessly integrates supervised learning with unsupervised "sleep" stages using pre-trained encoder models. Dedicated neurons within SleepNet are embedded in these unsupervised features, forming intermittent "sleep" blocks that facilitate exploratory learning. Building on SleepNet, DreamNet employs full encoder-decoder frameworks to reconstruct the hidden states, mimicking the human "dreaming" process.

Query languages for neural networks
We introduce a database-inspired approach for interpreting neural network models using declarative query languages. We compare black-box query languages, which only access the network’s input-output function, with white-box languages that treat the network as a weighted graph. We show that the white-box approach can subsume the black-box approach, demonstrating this for feedforward neural networks with piecewise linear activation functions.
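
To illustrate the distinction, a black-box query may only call the network's input-output function, while a white-box query can inspect the weights as a graph. The tiny sketch below shows both styles; the query semantics are invented for illustration and are not the paper's languages.

# Minimal sketch of the black-box vs white-box distinction described above.
# The queries are illustrative assumptions, not the paper's query languages.
import numpy as np

# A tiny feedforward ReLU network: x -> ReLU(W1 x + b1) -> W2 h + b2
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.5])

def f(x):
    """Input-output function of the network (all a black-box query may use)."""
    h = np.maximum(0.0, W1 @ x + b1)
    return W2 @ h + b2

# Black-box style query: "is the output positive on this finite set of inputs?"
inputs = [np.array([0.0, 0.0]), np.array([1.0, 1.0]), np.array([-2.0, 0.5])]
print(all(f(x)[0] > 0 for x in inputs))

# White-box style query: treat the network as a weighted graph and ask a
# structural question, e.g. "does any hidden unit receive only non-negative weights?"
print(any((W1[i] >= 0).all() for i in range(W1.shape[0])))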

What is CyclOps? 🛠️
The toolkit is designed to help researchers and practitioners adopt the Vector Institute's AI trust and safety principles. Specifically, it focuses on the evaluation and monitoring of AI systems developed for clinical applications, where robustness, safety, and fairness are critical. The primary goal of CyclOps is to improve transparency, accountability, and trust in AI systems developed for clinical applications.

 

Data Quality–Driven Improvement in Health Care: Systematic Literature Review
This review aims to investigate how existing research studies define, assess, and improve the quality of structured real-world health care data. ⭐
It is often said that doing AI requires quality data. So how do we make sure that we are producing quality data? What mechanisms should be put in place? An article to get your thinking started.
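
As a concrete, hypothetical starting point, the sketch below shows the kind of automated checks (completeness, uniqueness, conformance, plausibility) often applied to structured health data; the column names, code sets, and thresholds are invented for illustration.

# Hypothetical sketch of basic data-quality checks on structured health data.
# Column names, value ranges, and thresholds are invented for illustration only.
import pandas as pd

df = pd.DataFrame({
    "patient_id": [1, 2, 2, 4],
    "age": [34, 130, 57, None],          # 130 is implausible, None is missing
    "sex": ["F", "M", "X", "F"],         # "X" is outside the expected code set
    "admission_date": ["2024-01-03", "2024-02-30", "2024-03-11", "2024-04-01"],
})

report = {
    # Completeness: share of missing values per column
    "completeness": df.isna().mean().to_dict(),
    # Uniqueness: duplicated identifiers
    "duplicate_ids": int(df["patient_id"].duplicated().sum()),
    # Conformance: values outside the allowed code set
    "invalid_sex_codes": int((~df["sex"].isin(["F", "M"])).sum()),
    # Plausibility: ages outside a clinically plausible range
    "implausible_ages": int(((df["age"] < 0) | (df["age"] > 120)).sum()),
    # Conformance: dates that do not parse (e.g. 2024-02-30)
    "unparseable_dates": int(pd.to_datetime(df["admission_date"], errors="coerce").isna().sum()),
}
print(report)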

The impact of data governance on data warehouse performance
This article explores the impact of data governance on data warehouse performance. It highlights the key practices that can lead to optimal results.
An interesting article, given the arrival of data lakes in several institutions!

To use AI effectively, clinicians must learn proper data hygiene
In her AI classes for medical students, Stéphanie Allassonnière, professor of applied mathematics at the Université Paris Cité, first teaches them how to properly collect and clean data to get the most accurate responses out of AI tools.


Implementing privacy preserving record linkage: Insights from Australian use cases
The objective is to describe the use of privacy preserving linkage methods operationally in Australia, and to present insights and key learnings from their implementation.
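
One widely used building block for privacy-preserving record linkage, not necessarily the exact method of the Australian programs described here, is to encode identifiers as Bloom filters locally and compare only the encodings with a Dice coefficient. The sketch below illustrates that general technique with made-up names and parameters.

# Sketch of Bloom-filter-based privacy-preserving record linkage: identifiers are
# encoded into bit positions before leaving each institution, and only the encodings
# are compared. Parameters (filter size, hash count, bigrams) are illustrative;
# production systems would use keyed hashes and hardened encodings.
import hashlib

FILTER_SIZE = 256
NUM_HASHES = 4

def bloom_encode(name: str) -> set[int]:
    """Encode the character bigrams of a name into Bloom-filter bit positions."""
    bigrams = [name[i:i + 2] for i in range(len(name) - 1)]
    bits = set()
    for gram in bigrams:
        for k in range(NUM_HASHES):
            digest = hashlib.sha256(f"{k}:{gram}".encode()).hexdigest()
            bits.add(int(digest, 16) % FILTER_SIZE)
    return bits

def dice_similarity(a: set[int], b: set[int]) -> float:
    """Dice coefficient between two Bloom encodings (1.0 = identical)."""
    return 2 * len(a & b) / (len(a) + len(b))

# Two institutions encode names locally and share only the encodings.
enc_a = bloom_encode("marie tremblay")
enc_b = bloom_encode("marie tremblai")   # likely the same person, small typo
enc_c = bloom_encode("jean gagnon")

print(dice_similarity(enc_a, enc_b))  # high score -> candidate match
print(dice_similarity(enc_a, enc_c))  # low score  -> non-match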

Privacy-Preserving Prediction of Postoperative Mortality in Multi-Institutional Data: Development and Usability Study
This study explores whether using homomorphic encryption (HE) to integrate encrypted multi-institutional data enhances predictive power in research, focusing on the feasibility of integration across institutions and determining the optimal size of hospital data sets for improved prediction models.
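
For readers new to homomorphic encryption, the toy sketch below shows the basic idea using the python-paillier (phe) package, an additively homomorphic scheme: each hospital encrypts a local value, an aggregator sums the ciphertexts without decrypting them, and only the key holder decrypts the total. It is a generic illustration, not the study's pipeline.

# Toy illustration of additively homomorphic encryption with the `phe` package
# (python-paillier). Not the study's actual pipeline; values are made up.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each hospital encrypts a local statistic (e.g., number of postoperative deaths).
hospital_counts = [12, 7, 19]
encrypted = [public_key.encrypt(c) for c in hospital_counts]

# An aggregator can sum ciphertexts without ever seeing the plaintext values.
encrypted_total = sum(encrypted[1:], encrypted[0])

# Only the holder of the private key can decrypt the aggregate result.
print(private_key.decrypt(encrypted_total))  # -> 38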

Unlocking biomedical data sharing: A structured approach with digital twins and artificial intelligence (AI) for open health sciences
Data sharing promotes scientific progress. However, not all data can be shared freely due to privacy issues. This work is intended to foster FAIR sharing of sensitive data, exemplified in the biomedical domain, via an integrated computational approach that allows scientists without coding experience to utilize and enrich individual datasets.

 

Medical artificial intelligence for clinicians: the lost cognitive perspective
We argue that clinicians are contextually motivated, mentally resourceful decision makers, whereas AI models are contextually stripped, correlational decision makers, and discuss misconceptions about clinician–AI interaction stemming from this misalignment of capabilities.
🤔 What is the impact of AI on professionals' decision-making processes?

How interoperability helps healthcare organizations accelerate digital progress

Anne Snowdon, chief scientific research officer at HIMSS, explains that interoperability is fundamental to enabling data-informed patient care decisions and is required for the use of AI and advanced analytics tools.

 

The Use of Deep Learning and Machine Learning on Longitudinal Electronic Health Records for the Early Detection and Prevention of Diseases: Scoping Review
This study aims to provide a scoping review of the evidence on how the use of ML on longitudinal EHRs can support the early detection and prevention of disease. The medical insights and clinical benefits generated were investigated by reviewing applications across a variety of diseases.


A modeling framework for evaluating proactive and reactive nurse rostering strategies — A case study from a Neonatal Intensive Care Unit
This paper introduces a general framework for evaluating proactive and reactive rostering strategies and studying the interaction between them. The framework comprises three main components: a nurse rostering model incorporating proactive strategies, a simulation model accounting for uncertainty in nurse absences and healthcare demand, and a nurse rerostering model implementing reactive strategies.
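
To give a flavour of what a (much simplified) rostering model looks like, here is a tiny integer-programming sketch using PuLP; the nurses, shifts, demand figures, and the "reserve" buffer standing in for a proactive strategy are all invented for illustration and are far simpler than the paper's framework.

# Tiny nurse-rostering sketch with PuLP: assign nurses to shifts so that demand is
# covered, with one extra "reserve" nurse per shift as a crude proactive buffer.
# Nurses, shifts, demand, and the buffer rule are illustrative, not the paper's model.
import pulp

nurses = ["N1", "N2", "N3", "N4", "N5"]
shifts = ["day", "night"]
demand = {"day": 2, "night": 1}
reserve = 1  # proactive buffer against absences

prob = pulp.LpProblem("nurse_rostering", pulp.LpMinimize)
x = pulp.LpVariable.dicts("assign", (nurses, shifts), cat="Binary")

# Objective: minimize total assignments (a stand-in for cost / overtime).
prob += pulp.lpSum(x[n][s] for n in nurses for s in shifts)

# Cover demand plus the reserve buffer on every shift.
for s in shifts:
    prob += pulp.lpSum(x[n][s] for n in nurses) >= demand[s] + reserve

# Each nurse works at most one shift per day.
for n in nurses:
    prob += pulp.lpSum(x[n][s] for s in shifts) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for s in shifts:
    assigned = [n for n in nurses if pulp.value(x[n][s]) == 1]
    print(s, assigned)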

An AI-based prognostic model for postoperative outcomes in non-cardiac surgical patients utilizing TEE: A conceptual study
The primary objective of this study was to assess the potential of artificial intelligence techniques, in conjunction with transesophageal echocardiography (TEE) examinations, to forecast postoperative mortality outcomes in patients undergoing moderate-to-high-risk noncardiac surgeries.

Artificial intelligence prediction of In-Hospital mortality in patients with dementia: A multi-center study
Predicting mortality is very important for care planning in hospitalized patients with dementia, and artificial intelligence has the potential to serve as a solution; however, its usefulness for this task remains unclear. This study was conducted to elucidate the matter.

Feasibility of forecasting future critical care bed availability using bed management data
This project aims to determine the feasibility of predicting future critical care bed availability using data-driven computational forecast modelling and routinely collected hospital bed management data.
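
As a minimal illustration of the forecasting task (not the study's models), the sketch below builds a synthetic daily series of free critical-care beds and evaluates a simple seasonal-naive baseline over a 14-day horizon.

# Minimal sketch of a bed-availability forecast from routine bed-management counts.
# Data and the weekly seasonal-naive baseline are illustrative; the study's models
# are not specified here.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
days = pd.date_range("2024-01-01", periods=120, freq="D")
# Synthetic daily count of free critical-care beds with a weekly pattern.
available = 10 + 3 * np.sin(2 * np.pi * days.dayofweek.to_numpy() / 7) + rng.normal(0, 1, len(days))
series = pd.Series(available.round(), index=days, name="free_beds")

train, test = series[:-14], series[-14:]

# Seasonal-naive baseline: tomorrow looks like the same weekday last week.
forecast = np.tile(train[-7:].to_numpy(), 2)[: len(test)]

mae = np.mean(np.abs(test.to_numpy() - forecast))
print(f"Seasonal-naive MAE over the 14-day horizon: {mae:.2f} beds")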

A pre-trained language model for emergency department intervention prediction using routine physiological data and clinical narratives
Emergency rooms demand fast, accurate decisions for patient care. Our study aims to reduce diagnostic errors by developing predictive models that use triage data—symptoms and vital signs—to ensure critical tests and treatments are timely. This approach seeks to enhance traditional diagnostic methods and improve patient outcomes.

Prediction of patient flow in the emergency department using explainable artificial intelligence
This study aims to develop two machine learning (ML) models to assist in early and accurate resource allocation in EDs. The first model predicts patient admission at the time of triage, while the second predicts the specialty of care needed, as indicated by the initial ward transfer.
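
As an illustration of the general pattern rather than the study's actual models or features, the sketch below trains a gradient-boosting classifier on synthetic triage data to predict admission and uses SHAP values to explain individual predictions.

# Illustrative sketch: predict admission at triage and explain predictions with SHAP.
# Features, data, and model choice are synthetic stand-ins, not the study's setup.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "age": rng.integers(18, 95, n),
    "heart_rate": rng.normal(85, 15, n),
    "systolic_bp": rng.normal(125, 20, n),
    "triage_level": rng.integers(1, 6, n),   # 1 = most urgent
})
# Synthetic label: older, more urgent patients are more likely to be admitted.
logit = 0.03 * (X["age"] - 60) - 0.8 * (X["triage_level"] - 3)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# SHAP attributes each prediction to the triage features that drove it.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])
print(pd.DataFrame(shap_values, columns=X.columns).round(3))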

 

 

The EU AI Act was published in the Official Journal (OJ) of the European Union on 12 July 2024.


Assessing the State of AI Policy

This work provides an overview of the landscape of AI legislation and directives at the international and U.S. state, city, and federal levels. It also reviews relevant business standards and technical society initiatives. An overlap and gap analysis is then performed, resulting in a reference guide that includes recommendations and guidance for future policy making.

Governing with artificial intelligence: Are governments ready?
The OECD’s policy paper assesses the opportunities and challenges associated with integrating AI in the public sector. It outlines the significant benefits AI can bring alongside the risks and policy issues that must be managed. Here, we summarise the paper’s main findings and policy recommendations.

 
 

Do you like planning ahead? Check out the Events section of our website.

Colloque sur la cybersécurité du Centre de recherche en données massives
September 16 - In person - $

Panel virtuel de septembre : Les essentiels de la santé numérique pour les populations mal desservies - Inforoute Santé du Canada
September 17, 12:00 p.m. to 1:00 p.m. - Virtual - Free

Webinaire - Adaptabilité des modèles fondations - Institut intelligence et données (IID)
September 27 - Hybrid - Free

Rencontre de l’écosystème : santé et IA  - CRIM
October 2 - In person - $

Colloque Cybersécurité et protection des données personnelles - Groupe le Point en santé et services sociaux
October 10, 2024 - In person - $

Symposium International CITADEL - De la donnée au déploiement : optimisation en santé - Citadel - CR CHUM
October 15 & 16 - In person - $

Digital Health Interoperability Task Force’s recommendation report and Infoway’s 2023 Physicians Survey - Inforoute Santé du Canada
October 15 - Virtual - Free

Le traitement de données massives : la fondation de l’IA - Université Laval
October 15, 12:00 p.m. to 1:00 p.m. - Virtual - Free

 

 

Disclaimer

Use of English
To preserve the integrity of their meaning, we chose not to translate article titles and abstracts. You can use the machine translation tools available in some browsers to translate the English articles: right-click on the page and select "Translate to French". Note that using these tools may introduce differences in meaning compared with the original content.

Use of AI
Some summaries in this watch were generated with AI tools to reduce word count and may contain inaccuracies. The original articles were not read in full by the author of this watch, so the CEIAVD declines any responsibility for inaccuracies or omissions. The summaries are intended to orient readers to the content; for complete and accurate information, please consult the original articles.

Hyperlinks
The watch bulletins include links to websites created and maintained by third parties, to make it easier to consult the articles. The CEIAVD neither endorses nor guarantees, explicitly or implicitly, the accuracy or completeness of the content of these hyperlinks or of the opinions expressed there. The CEIAVD declines any responsibility for these external websites.