Ethical Questions Concerning the Use of Artificial Intelligence in Health: An Epistemological Perspective

Fixed-term contract · PhD thesis · 36 months · Bac+5 / Master's · IHRIM and Oxford · Lyon (France) · €2135 gross/month

Start date: 1 September 2021

Keywords

artificial intelligence · health data · epistemology

Description

Doctoral Fellowship 2021–24: Ethical Questions Concerning the Use of Artificial Intelligence in Medicine and Health Management: Contributions of an Epistemological Approach


Doctoral Fellowship offered within the framework of the project CNRS 80 Prime 2021: “Ethical Design of Artificial Intelligence Models in Patient Management and Treatment Plans” (ED-AIM)


Disciplinary background: medical ethics / applied ethics / philosophy of science / computer science.

Period: 3-year contract (36 months), September 2021 – August 2024. Admission to the Doctoral School of Philosophy at the University of Lyon (ED 487), with affiliation to the CNRS institute IHRIM (UMR 5317, ENS de Lyon). The candidate is required to undertake prolonged research stays at the Maison Française d’Oxford (UMIFRE 11 / USR 3129 CNRS, Oxford) for a total duration of at least 12 months.

Profile: Master 2, MPhil or equivalent in applied ethics, medical ethics, or philosophy of science. Some competence in computer science is required. The thesis may be written in English or French, but the ability to communicate and collaborate in both languages is necessary. Non-French students are welcome to apply.

Supervision: co-supervised by Mogens LÆRKE (MFO UMIFRE 11 Oxford / IHRIM ENS de Lyon) and Thomas GUYET (Inria).

Context: ED-AIM is an interdisciplinary CNRS collaboration between the Maison Française d’Oxford (USR 3129 CNRS), the Institut de recherche en informatique et systèmes aléatoires (UMR 6074 IRISA, Rennes 1), the Institut d’histoire des représentations et des idées dans les modernités (IHRIM, UMR 5317, ENS de Lyon), and the Institute of Biomedical Engineering at Oxford University.

Deadline: 5 July 2021.

Shortlisted candidates will be invited to an online interview during the first half of July. For further information, please write to mogens.laerke@cnrs.fr and/or thomas.guyet@irisa.fr

Topic: In recent years, machine learning, a subfield of artificial intelligence (hereafter AI), has generated considerable interest in health care circles due to its potential to help improve efficiency and safety at all levels of our health care systems, from diagnostics to the organisation of care. Our societies attempt to anticipate the changes that AI may bring (European Council, 2020) and to put safeguards in place to ensure that it abides by our moral principles, principles of professional conduct, and social norms. Ideally, ethical notions should be integrated into these tools at their very inception (ethical design). Machine learning models do, however, have well-known weaknesses that are currently being investigated in the field of computer science. First, machine learning models can be subject to different kinds of bias. Second, AI models are mostly black-box models (Wang et al. 2020), and users of such systems lose their ability to contest automatic decisions. The use of black-box models raises important ethical problems. For example, to what extent do such models embody values, inasmuch as they entail actions and choices and can inadvertently contain design or implementation errors that result in unforeseen consequences and unfair outcomes (Keskinbora 2019; Chen et al. 2019)? Should we embrace decisions made on the basis of black-box models whose outputs cannot be clearly interpreted and explained? To what extent can and should patients and practitioners trust these models? How does the use of AI affect patients’ trust in the treatments offered and in the health workers offering them? By approaching these questions specifically in relation to AI, the doctoral candidate should bring both an ethical and an epistemological perspective to these issues within the development of science and technology in health management.

The multidisciplinary context and supervision of the thesis offer support in the domains of AI and its concrete application to health management. The project may involve the application and use of AI tools (either already existing tools, or tools that can be appropriately adapted to the research question), or experiments or surveys concerning the actual use of such tools in health management.

References

Geis, J. R., Brady, A. P., Wu, C. C., Spencer, J., Ranschaert, E., Jaremko, J. L., ... & Kohli, M. (2019). Ethics of artificial intelligence in radiology: summary of the joint European and North American multisociety statement. Canadian Association of Radiologists Journal 70(4): 329-334.

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence 1(9): 389-399.

European Council (2020). European framework on ethical aspects of artificial intelligence, robotics and related technologies.

Wang, F., Kaushal, R., & Khullar, D. (2020). Should health care demand interpretable artificial intelligence or accept black box medicine? Annals of Internal Medicine 172(1): 59-61.

Keskinbora, K. H. (2019). Medical ethics considerations on artificial intelligence. Journal of Clinical Neuroscience 64: 277-282.

Chen, I. Y., Szolovits, P., & Ghassemi, M. (2019). Can AI help reduce disparities in general medical and mental health care? AMA Journal of Ethics 21(2): 167-179.

Application

Procedure: cover letter explaining the applicant's interests and motivations (1 page); CV (2 pages max); thesis project (3 pages max). The complete dossier must be submitted on the CNRS employment portal at the following address: https://emploi.cnrs.fr/Offres/Doctorant/UMR5317-ANNMOT-001/Default.aspx

Deadline: 5 July 2021

Contacts

Thomas Guyet

thomas.guyet@irisa.fr

Offer published 17 June 2021, displayed until 9 July 2021