Overview
This study investigates the acceptability of artificial intelligence (AI) as a diagnostic support tool among patients with localized prostate cancer and healthcare providers, as well as their willingness to share health data for AI development.
Background AI tools in healthcare show promising potential, especially in improving diagnostic accuracy and personalizing treatment. However, successful implementation depends not only on technical performance but also on the acceptability of AI among its users, both patients and professionals. Prior research has shown that acceptability varies depending on context, disease severity, the task performed by the AI, and the user population.
Objectives Assess patients' acceptability of AI as a diagnostic support in prostate cancer.
Explore patients' willingness to share health data for developing clinical AI.
Assess healthcare providers' acceptability of AI in this diagnostic context.
Methodology Design: A cross-sectional, mixed-method, multinational study (Belgium, Italy, Spain).
Quantitative Phase: Online questionnaire, using adapted theoretical frameworks (Value Perception Model, NASSS-AI, TFA).
Qualitative Phase: Will follow based on quantitative findings.
Participants: Adults diagnosed with localized prostate cancer. Recruitment via hospitals, social media, and patient associations.
Data Collected: Personal and health information, attitudes toward AI, willingness to share data.
Ethics Approved by ethics committees in each participating country.
Informed consent obtained digitally before participation.
Data anonymized and GDPR-compliant.
Description
- Introduction 1.1. Acceptability of artificial intelligence 1.1.1. Artificial intelligence in healthcare Healthcare professionals face a growing number of increasingly complex challenges. The nature of health problems is changing and workloads are increasing, which makes care management more difficult and requires constant updating of knowledge. At the same time, the implementation of artificial intelligence (AI) tools in healthcare is expanding rapidly and offers a promising opportunity to meet the needs of an evolving medical sector.
Many studies have explored the feasibility of clinical AI by evaluating, among other things, the technical performance of AI systems. However, technical performance is not the only challenge facing the implementation of clinical AI. Studies have shown that the acceptability of clinical AI by healthcare professionals and patients affects its effective adoption by these users. Ignoring this factor could waste resources by leaving available AI systems underused.
The acceptability of clinical AI by patients and healthcare professionals has been studied in different contexts, but most studies concern the acceptability of AI in healthcare in general. This subject has already been systematically summarised and shows generally good acceptability of AI. Studies in specific health contexts, on the other hand, are rarer, and AI acceptability in these contexts cannot be inferred simply by extrapolating from AI acceptability in healthcare in general. Indeed, the acceptability of AI differs according to the context (disease diagnosed, severity of the disease, consequences of the AI's decision, complexity of decision-making) and to the tasks that the AI performs (diagnosis, choice of treatment, prognosis). In addition, AI acceptability factors differ depending on the population studied (patients, healthcare professionals, researchers and healthcare managers) because of differences in needs, preferences and the context of use. It is therefore important to study AI acceptability in the precise context to which the study refers.
1.1.2. Acceptability of AI in the prostate cancer diagnosis process The implementation of AI tools in the prostate cancer (PCa) diagnosis process will improve the information extracted from medical images and the creation of predictive models. This represents a significant advance in optimizing the diagnosis of PCa and predicting its aggressiveness, with the final objective of personalizing treatment by adapting it to the biological characteristics of the tumour. It will help reduce the need for prostate biopsies, thereby increasing individuals' compliance and encouraging them to get tested before symptoms appear. Moreover, early detection of PCa dramatically improves the treatment success rate.
Given the opportunity presented by the implementation of AI in the PCa diagnosis process, it is essential to study the acceptability of this implementation for patients and healthcare professionals, who would be the final users of AI in the PCa context, as well as the conditions for this implementation. Although some studies have examined this acceptability, research in this area remains limited and requires further investigation.
A recent study assessed patients' trust in AI and their perception of urologists relying on AI. It showed that, compared with AI, patients had more confidence in physicians' ability to make the right diagnosis, consider the latest research, individualise the communication of the diagnosis and explain the information in an understandable way. Patients reported higher trust in a diagnosis made by AI under a physician's supervision than in one made by AI alone.
A second study explored men's fears and expectations of AI diagnosis and treatment in the context of PCa, through three representations of AI imagined by the participants: AI as a tool to help healthcare providers, AI as an advanced machine, and AI as an entity that would replace healthcare providers. The last representation was the worst-case scenario for participants. In all three, participants saw AI in a positive light, but the more AI took the place of the physician, the more negatively participants viewed it.
1.1.3. The concept of acceptability The concept of acceptability of clinical AI is not well defined. Although frameworks exist to explain and predict individuals' behavioural intention to use and accept a technological innovation, these models are either not adapted to the healthcare context or treat AI in the same way as digital health technologies. However, it is essential to take account of the healthcare context and of the complexity of AI compared with digital technologies, which operate in a simpler way. It is therefore necessary to better define the concept of acceptability of clinical AI. Some studies have modified existing technology acceptance frameworks to address these limitations, such as the two following frameworks.
The first framework statistically models patient acceptance of AI-based clinical decision support devices. Its authors extended a theoretical framework of technology acceptance that includes perceptions of value (VAM), adapting it to the healthcare sector. This framework evaluates the perceived risks and benefits of AI-based clinical decision support devices to measure their acceptability. The authors suggest that future studies should add further factors to their model, including social influence, which is used in some other theoretical frameworks of acceptance, to increase its explanatory power (the R² of the model is 0.80). In addition, the authors state that gender, annual household income, education level, employment, perceived technical knowledge about AI technology and familiarity with an AI-based service had a significant effect on the intention to use AI-based clinical decision support devices.
Second, a recent study in The Lancet systematically summarised qualitative studies on the acceptance of AI for diagnostic purposes by stakeholders (patients, clinicians, researchers and healthcare managers). The authors extended the existing NASSS framework to create a new model adapted to AI diagnostics: the NASSS-AI framework. This framework is structured into six domains, twenty sub-domains and forty-three themes grouping the elements frequently discussed by the studies included in the review.
Several factors not included in the theoretical models mentioned above have been identified by other studies as influencing patient acceptability of clinical AI. These factors are age, previous exposure to similar tools, geographical location, the diagnosed disease, the severity of the disease and trust in the healthcare system.
Moreover, a systematic review investigated how information about AI was provided to participants in studies assessing patients' acceptability of AI in healthcare. It indicates that most studies cite limited patient knowledge of AI in the context of their research as a limitation. Additionally, several studies report that the type of information provided to patients at the beginning of a study influences the results. Consequently, an "intervention coherence" factor will be added to our research framework. This factor comes from the Theoretical Framework of Acceptability (TFA), which studies the acceptability of healthcare interventions.
1.2. Data altruism - data provision Data altruism is understood as the voluntary provision of personal and non-personal data, based on the consent of data subjects or the permission of natural and legal persons, without seeking a reward and for objectives of general interest.
The development of clinical AI requires the use of a large amount of health data. The use of this data is inevitable, because it enables the tools to progress and evolve. As explained above, clinical AI must be accepted by patients and by healthcare providers, but it must also be fed with patient data. Patients must therefore agree to provide their health data.
Although patients' willingness to provide personal health data has been widely studied in the context of secondary uses in general (data sharing for clinical research, public health, epidemiology, etc.), it has not been well studied in the context of AI in particular. Some studies assess the willingness to provide health data with a single question. A study on AI in skin cancer diagnosis showed that 88% of respondents would make their health data anonymously available for the development of AI-based medical applications.
A pilot study explored patients' perspectives on providing data for the purpose of developing AI for general practice. Participants in this study felt that the development of AI for general practice was a goal for which they were willing to provide their data. They said they wanted to help manage disease and help their general practitioner (GP) by providing their data. Their willingness to provide data depended on the trust relationship they had with their GP.
1.2.1. The concept of willingness to provide health data No validated framework for studying patients' willingness to provide their health data for secondary uses or for AI development purposes has been identified in the literature. A recent systematic review also concluded that there was no validated measurement tool to determine this willingness.
A study developed a conceptual model of patients' willingness to provide health data for research purposes. This framework is based on the validated value perception framework, which evaluates the perceived benefits and risks of data provision. The authors added several factors specific to data provision for research purposes, such as the type of final users, the type of data collected, the context, etc. These factors can be understood as conditions for providing health data.
2. Objectives The objective of this study is to better understand the acceptability of AI, and its conditions, by people with prostate cancer and healthcare providers in the context of prostate cancer diagnosis and AI as support for healthcare providers. This study also aims to better understand people's willingness to provide their health data for clinical AI development and the conditions surrounding this willingness.
Objectives of this study are:
- Determine the acceptability of people with prostate cancer regarding the implementation of AI as a support to healthcare providers in the diagnosis of prostate cancer and explore the conditions for this implementation.
- Determine people's willingness to provide their health data for the development of clinical AI and explore the conditions for this willingness.
- Determine the acceptability of healthcare providers regarding the implementation of AI as a support for them in the diagnosis of prostate cancer and explore the conditions for this implementation.
- Methods 3.1. Type of study and research process The study is a cross-sectional, observational, mixed-method, sequential study comprising an initial quantitative phase that will feed into a second qualitative phase. The study is multinational, recruiting participants from Belgium, Italy and Spain.
Each country submits the protocol to its ethics committee where required. The design of the study is presented in Figure 1. This protocol pertains to the quantitative phase. The protocol for the qualitative phase will be submitted to the ethics committee at a later date, as the analysis of the quantitative-phase questionnaire is necessary for developing the qualitative phase.
Figure 1: study design 3.2. Frameworks of the quantitative phase Two frameworks are used, corresponding to the two objectives of this phase: determining the acceptability of people with prostate cancer regarding the implementation of AI as a support to healthcare providers in the diagnosis of prostate cancer, and determining people's willingness to provide their health data for the development of clinical AI.
To determine the acceptability of people with prostate cancer regarding the implementation of AI as a support to healthcare providers in the diagnosis of prostate cancer, a framework was created on the basis of the existing frameworks presented above (the framework based on perception of values, NASSS-AI and TFA). The framework based on perception of values was used as a basis for weighing perceived risks against perceived benefits. Social influence was added to this model, as suggested by its authors. A "conditions to use" factor and its sub-dimensions were also added, inspired by the NASSS-AI framework and the previously cited framework. Finally, the "intervention coherence" factor was added, in line with the TFA.
The final framework used to investigate the acceptability of people with prostate cancer regarding the implementation of AI as a support to healthcare providers in the diagnosis of prostate cancer is presented in Figure 2.
Figure 2: Theoretical model of the acceptability of people with prostate cancer regarding the implementation of AI as a support to healthcare providers in the diagnosis of prostate cancer.
To determine people's willingness to provide their health data for AI development purposes, a framework was created on the basis of the existing value perception framework, which balances perceived risks and perceived benefits. A "conditions" factor was added to explore this willingness. The final framework used to study people's willingness to provide their health data for the development of clinical AI is presented in Figure 3.
Figure 3: Theoretical model of people's willingness to provide their health data for the development of clinical AI.
3.3. Population characteristics of the quantitative phase The study population consists of people with localized prostate cancer.
The inclusion criteria are defined as follows:
• over 18 years;
• diagnosed with localized prostate cancer;
• consent to the study.
The exclusion criteria are defined as follows:
• not speaking French;
• diagnosed with metastatic cancer from the outset;
• terminally ill;
• people with intellectual disability, dementia or altered state of consciousness.
3.4. Sample of the quantitative phase The chosen sampling method is volunteer sampling. Any individual meeting the inclusion criteria and not meeting the exclusion criteria may choose to participate in the study, which involves completing a self-administered online questionnaire.
The questionnaire will be distributed through the following channels:
• A poster and flyer campaign in the urology departments of the Elipse network. The posters and flyers will display the link and QR code for the questionnaire. The co-investigator of this study (E. Koshmanova) will be available in one or more hospitals of the Elipse network to help people access the questionnaire, without intervening in its completion. She will also have a laptop or tablet available for participants who wish to take part in the study but do not have the necessary equipment.
• Distribution of the questionnaire link and QR code on social media, using an image version of the poster.
• Distribution of the link and QR code, with or without the image version of the poster, via patient associations. These associations will share them through their networks or mailing lists. Associations may also display the poster and/or flyers in their facilities if deemed relevant.
No sample size calculation was performed, as the study constraints require the inclusion of a minimum of 50 participants per country and a maximum of 100 participants per country.
To access the questionnaire, people use the questionnaire link or scan the QR code; no account is needed. The first page that appears is the questionnaire home page, on which participants are invited to read the information and consent form and then consent to the study. The information and consent form specifies that consent is not collected directly on the form itself but via the first section of the questionnaire (a transcription of the informed consent form). This blocks access to the rest of the questionnaire for people who do not consent to the study, i.e. those who do not tick all the statements.
If participants provide their consent, the questionnaire becomes accessible and participants are asked to confirm that they meet the study's inclusion criteria and do not meet its exclusion criteria. If they do not confirm these elements, the questionnaire terminates; if they do, the questionnaire starts.
Given that regulations allow participants to withdraw their consent from the study at any time, a unique code will be assigned to them. Each participant will form this code using the following identification key: first letter of the last name + last letter of the last name + first letter of the first name + the last two digits of the year of birth.
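As a minimal illustrative sketch (the name "Jean Dupont" is a hypothetical example, and the function assumes non-empty Latin-script names), the identification key described above could be computed as follows:

```python
def participant_code(last_name: str, first_name: str, birth_year: int) -> str:
    """Build the pseudonymization code from the identification key:
    first letter of the last name + last letter of the last name
    + first letter of the first name + last two digits of the birth year."""
    last = last_name.strip().upper()
    first = first_name.strip().upper()
    return f"{last[0]}{last[-1]}{first[0]}{birth_year % 100:02d}"

# Hypothetical participant "Jean Dupont", born in 1956
print(participant_code("Dupont", "Jean", 1956))  # → DTJ56
```

Because the code is derived by the participant rather than stored alongside identifying data, the research team can locate and delete a withdrawing participant's record without ever holding their name.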
At the end of the online form, participants have the option of clicking on a link if they wish to take part in the second (qualitative) phase of this study. This link takes them to a second online questionnaire asking for their contact data. The second questionnaire is not linked to the first. Using the contact data, individuals can be contacted to take part in the second phase of the study, but only after the ethics committee has approved that phase.
In the event that participant recruitment proves more effective through a paper-based approach, a paper version of the questionnaire identical to the online version will be used. In such cases, participants will be asked to complete the paper questionnaire and sign a paper version of the informed consent form prior to participation. The responses will then be entered online into REDCap by the research team.
Each participant can ask questions of the co-investigator (E. Koshmanova) at any time. Her contact information is provided in the information and consent form and at the beginning of the questionnaire.
3.5. Studied parameters of the quantitative phase
Several parameters will be collected with the first online questionnaire:
• Information relating to individuals: age, perceived income, employment, education, perceived health status, country of residence, severity of prostate cancer (how long ago the diagnosis was made, whether it is being treated), previous care experience, previous exposure to similar tools, frequency of healthcare use, trust in healthcare systems.
• Information on people's skills: health literacy, e-health literacy, perceived technical knowledge of AI technology, familiarity with an AI-based service.
• Questions relating to the acceptability to people with prostate cancer of the implementation of AI as a support to healthcare providers in the diagnosis of prostate cancer.
• Questions relating to the willingness of people with prostate cancer to provide their health data for the development of clinical AI.
Several data will be collected with the second online questionnaire:
• first and last name;
• telephone number;
• country of residence;
• postal code.
This questionnaire is independent of the first. 3.6. Data collection tool of the quantitative phase Online questionnaires will be shared with participants via the REDCap platform. 3.6.1. Questionnaire number 1: The online questionnaire aims to assess both the acceptability of people with prostate cancer regarding the implementation of AI as a support to healthcare providers in the diagnosis of prostate cancer and their willingness to provide their health data for the purposes of developing clinical AI. The questionnaire also asks about certain participant characteristics, as mentioned in point 3.5. The questionnaire is introduced by an explanation of how AI is involved in care in the context of this study.
The questionnaire is based on two existing questionnaires. First, the questionnaire associated with the framework based on perception of values was used. Second, the questionnaire associated with the framework studying the willingness to provide health data for research purposes served as an inspiration (19).
3.6.2. Questionnaire number 2: The aim of this questionnaire is to contact participants who want to take part in the second phase of this study (qualitative phase).
3.7. Calendar Figure 4: Gantt chart. 4. Ethical concerns of the quantitative phase The study presents no risks or disadvantages for participants. No change in their usual clinical management will be necessary. No collected information will be disclosed to healthcare staff.
Surveys will be developed using the REDCap platform, a software package designed for the overall management of studies. It meets the regulatory requirements for hosts of health studies and is supervised by the Clinical Trial Center (CTC) of the CHUL.
An information and consent form will be completed electronically by participants before they take part in the study. Participants can withdraw their consent at any time by contacting the principal investigator (D. Waltregny).
All data collected will be pseudonymized with the unique code assigned to each participant, formed by the participant using the following identification key: first letter of the last name + last letter of the last name + first letter of the first name + the last two digits of the year of birth.
In addition, contact data will be collected, processed, and stored in accordance with the General Data Protection Regulation (GDPR).
The study will begin only after approval by the Ethics Committee.
Eligibility
Inclusion Criteria:
- over 18 years;
- diagnosed with localized prostate cancer;
- consent to the study.
Exclusion Criteria:
- not speaking French;
- diagnosed with metastatic cancer from the outset;
- terminally ill;
- people with intellectual disability, dementia or altered state of consciousness.