Introduction
Diabetic retinopathy (DR) is a leading cause of blindness in working-age adults.1 Early detection of DR with timely intervention is critical in preventing vision loss. Yet many patients, especially those living in underserved communities, do not receive the recommended DR screening.2–4
Traditionally, DR screening requires referral to an eye clinic, trained ophthalmic staff to perform imaging, and an eye specialist to interpret the findings. This process is dependent on resources and eye provider availability and contributes to increased burden on the healthcare system.5,6 Fully autonomous artificial intelligence (AI)-based detection of DR, approved by the Food and Drug Administration in the US in 2018,7,8 has the potential to dramatically improve screening rates and decrease preventable blindness. Because fully autonomous AI-based DR screening can diagnose referrable DR at the point of care without the input of eye care providers, it can broaden access to care and improve DR screening among underserved populations, especially in resource-limited settings.9
However, as more algorithms are approved and AI-based DR screening is increasingly incorporated into healthcare systems,10 it is important to consider the perspectives of patients, the stakeholders most directly impacted by the technology. Prior studies have examined patient perspectives on AI in DR screening, but most have been conducted outside the US.11–14
The purpose of this study is to examine patient perspectives on AI-based DR screening, including awareness, trust, perceived benefits and drawbacks, and receptivity to the technology, at a tertiary academic medical center in an urban environment. Findings from our study can help inform strategies that optimize the implementation and uptake of DR screening programs using AI-based tools.
Materials and Methods
Study Design and Population
This was a prospective cohort study of adult participants (≥ 18 years old) with diabetes seen at the Wilmer Eye Institute at Johns Hopkins Hospital in the resident ophthalmology clinic (Patient Access Center for the Eye [PACE]) and the retina clinics in East Baltimore and Greenspring Station between August 22, 2024 and April 25, 2025. Informed consent was obtained in English or Spanish. The study adhered to the Declaration of Helsinki and was approved by the Institutional Review Board of the Johns Hopkins Hospital (IRB#00431841).
AI Screening
All participants underwent pharmacologic pupil dilation with 1% tropicamide and 2.5% phenylephrine prior to a comprehensive ophthalmologic examination as part of a routine scheduled eye care visit. Fundus photographs of both eyes were then captured in a dark room using the Aurora AEYE, a handheld, non-mydriatic fundus camera equipped with AEYE-DS, artificial intelligence software designed to detect more-than-mild diabetic retinopathy.15–17
AI Acceptability Survey
Following imaging, participants completed a 20-minute orally administered survey in English or Spanish with assistance from a research team member (Z.R., J.A.M., G.Z., S.Y.). The survey included eighteen questions designed to assess the patient’s experience with the handheld retinal camera and their perspectives on the integration of AI in DR screening. The survey was developed based on a review of the literature on AI acceptance in healthcare and refined through expert consultation for content and face validity.11,12 The survey comprised six dichotomous (Yes/No) questions (1–6) evaluating baseline awareness of AI and its applications in healthcare, and twelve questions (7–18) rated on a 3-point Likert scale (Agree/Disagree/Neither Agree nor Disagree) assessing domains including awareness of AI in healthcare, trust in AI systems, perceived efficiency of AI, preferences for personal interaction, and overall receptivity to AI in DR screening (Supplemental Material). For participants unfamiliar with AI, the research team member provided a brief explanation.
Statistical Analysis
Sociodemographic data for the participants were extracted from the electronic health records. Participants’ medical comorbidities were also extracted, as previously described.18–20 The Charlson Comorbidity Index represents the overall health of a person by measuring the severity of comorbid diseases, with higher scores representing sicker participants.18 The Diabetes Complications Severity Index quantifies long-term effects of diabetes on body systems.19 The census block group of the participant’s residential address was extracted and linked to the 2019 Area Deprivation Index (ADI) at the national level as previously described.21 ADI is a measure of neighborhood disadvantage based on the American Community Survey, where a higher ADI percentile rank indicates greater socioeconomic disadvantage.22 Descriptive statistics are reported for the overall group and stratified across demographic (sex, age, race/ethnicity) and neighborhood characteristics (ADI).
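The stratified descriptive tabulation described above can be sketched in a few lines of pandas. This is an illustrative sketch only: the column names (`age_group`, `trust_accuracy`) and the helper function are hypothetical, not the study’s actual variable names or analysis code.

```python
import pandas as pd

def stratified_agreement(df: pd.DataFrame, item: str, by: str) -> pd.DataFrame:
    """Count (n) and percent (pct) of participants answering 'Agree'
    to one survey item, stratified by a demographic column."""
    out = (
        df.assign(agree=df[item].eq("Agree"))  # True where response is 'Agree'
          .groupby(by)["agree"]
          .agg(n="sum", pct="mean")            # named aggregation: count and proportion
    )
    out["pct"] = (out["pct"] * 100).round(1)   # proportion -> percent
    return out

# Toy data: three hypothetical participants across two age bands.
toy = pd.DataFrame({
    "age_group": ["<45", "<45", ">65"],
    "trust_accuracy": ["Agree", "Disagree", "Agree"],
})
print(stratified_agreement(toy, "trust_accuracy", "age_group"))
```

The same pattern, looped over survey items and over each stratifying column (sex, age band, race/ethnicity, ADI quartile), would reproduce tables of the form reported in the Results and Supplemental Tables.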
Results
A total of 100 participants completed the survey and were included in the study. The mean age was 60 years (range 33–93 years) (Table 1). About half of the participants were female (N=52, 52%). Participants were 24% Hispanic (N=24), 20% non-Hispanic Black (N=20), 31% non-Hispanic White (N=31), and 25% of other (N=25) race/ethnicity. Most of the participants (N=62, 62%) were seen at the PACE clinic.
Table 1 Baseline Demographics of the Participants
Awareness
Most participants (N=78, 78%) were aware of AI, its implementation in healthcare (N=70, 70%), and its use in making a diagnosis (N=70, 71%). However, many were not aware of its application in diagnosing eye diseases (N=45, 46%) (Figure 1). Awareness of AI varied by race and ethnicity, with Hispanic participants having the lowest awareness (N=13, 54%) compared to non-Hispanic White (N=28, 90%), non-Hispanic Black (N=17, 85%), and other (N=20, 80%) participants (Supplemental Table 1).
Figure 1 Graph of participant responses to the artificial intelligence (AI) acceptability survey across the categories of awareness of AI in healthcare, trust in AI systems, efficiency of AI, personal interaction, and overall receptivity of AI in diabetic retinopathy (DR) screening.
Trust
Overall, most participants felt that AI could improve accuracy and minimize human errors (N=77, 77%) and could keep personal information confidential (N=76, 77%) (Figure 1). However, most participants (N=83, 83%) reported they would trust AI more if it were supervised by a doctor. When asked whether they would trust results delivered by AI as much as those of a trained health professional, responses were mixed: 51% (N=51) agreed, but 35% (N=35) disagreed. Participants also had mixed responses when asked if they worry that AI will make doctors lazy or less attentive: 51% (N=51) disagreed with the statement, whereas 33% (N=33) agreed with it.
Trust also varied by subgroup (Supplemental Table 2). Younger participants tended to have more trust that AI could improve accuracy and minimize human errors: 94% (N=15) of those under 45 years, 78% (N=32) of those aged 45–65 years, and 70% (N=30) of those over 65 years. However, this did not translate to major differences in comfort with the use of AI in eye exams, which was 81% (N=13) in the under 45 years group, 76% (N=31) in the 45–65 years group, and 90% (N=38) in the over 65 years group. Trust varied substantially across race and ethnic groups, with non-Hispanic Black participants having the least trust in AI improving accuracy (N=10, 50%) compared to non-Hispanic White participants (N=26, 84%), Hispanic participants (N=23, 96%), and other race and ethnicity groups (N=18, 72%). Non-Hispanic Black participants also had less trust that AI would keep information confidential (N=11, 55%) compared to non-Hispanic White participants (N=25, 81%), Hispanic participants (N=21, 91%), and those of other race/ethnicity (N=18, 72%). Non-Hispanic Black participants were also least likely to trust AI results as much as those of providers (N=5, 25%) compared to non-Hispanic White participants (N=14, 45%), Hispanic participants (N=18, 75%), and those of other race/ethnicity (N=14, 56%).
Efficiency
Overall, participants preferred getting results sooner with AI (N=61, 61%), and many were not concerned that computer errors would be more harmful than human errors (N=45, 45%) (Figure 1). The preference for earlier results varied by race and ethnicity, with Hispanic participants having the highest preference (N=19, 79%) compared to non-Hispanic White participants (N=20, 64%), non-Hispanic Black participants (N=9, 45%), and those of other race/ethnicity (N=13, 52%) (Supplemental Table 3).
Personal Interaction
Participants thought they would be able to spend more time with the doctor if AI were making the diagnosis (N=68, 69%), but many still preferred a human-led screening program even if it meant waiting longer for the results (N=63, 64%) (Figure 1). The majority of participants (N=94, 94%) believed doctors would always remain responsible for diagnosing, even if AI evaluated the scans, with low variability across the subgroups (90–96% across all race/ethnicity groups) (Supplemental Table 4).
Receptivity
At the conclusion of the survey, 76% (N=76) of participants reported being comfortable with AI being used as part of the eye exam, and most (N=92, 92%) were satisfied with the AI-based retina screening (Figure 1). However, only 31% (N=31) agreed that AI-based retina screening can replace a doctor’s visit. Younger participants were more likely to agree that AI-based retina screening could replace a doctor’s visit, with agreement reported by 50% of patients under 45 years (N=8), 29% of those aged 45–65 years (N=12), and 25% of those over 65 years (N=11). There were also differences by race and ethnicity, with Hispanic participants being more likely to agree that AI screening could replace doctor visits (N=13, 54%) compared to non-Hispanic Black (N=3, 15%), non-Hispanic White (N=7, 23%), and other (N=8, 32%) participants (Supplemental Table 5).
Discussion
In this prospective cohort study of participants seen at an urban academic medical center who underwent autonomous AI-based DR screening, we identified several important insights into participants’ perspectives on this technology. Overall, most participants were aware of AI, but fewer were aware of its application in diagnosing eye diseases. Participants generally trusted AI, but most reported they would trust it more if it were supervised by a doctor. Although many valued the potential increase in efficiency and getting results sooner, most still preferred a human-led screening program, even if it meant waiting longer for the results. The majority of participants believed doctors would always remain responsible for diagnosing, even if AI evaluated the images. Most did not believe that AI-based DR screening could replace a doctor’s visit. After experiencing AI-based DR screening with the Optomed Aurora AEYE, most participants were comfortable with AI being used as part of the eye exam and satisfied with AI-based retinal screening. Exploratory subgroup analyses suggest that these perspectives could differ substantially by sociodemographic characteristics.
The more prominent subgroup findings were differences by age, race, and ethnicity. Younger participants overall had more trust that AI could improve the accuracy of screening compared to older participants. This variation in trust by age is consistent with prior studies.12,23 Fritsch et al investigated perspectives on AI in healthcare and reported that elderly participants were more skeptical of the new technology than younger participants.23
There were also striking differences by race and ethnicity. Overall, Hispanic participants were more trusting of AI technology. Surprisingly, despite having the lowest awareness of AI, 96% of Hispanic patients agreed that AI could improve screening accuracy, and 54% believed that AI screening could potentially replace doctor visits. The reasons for this higher trust are not fully clear, but may reflect translation or communication nuances, distinct cultural perspectives toward technology in healthcare, or a greater openness to AI intervention in certain communities.24 In contrast, non-Hispanic Black participants had very low trust and receptivity, similar to prior studies.25 Implementation of new technology should be sensitive to the lived experience of marginalized communities that have faced biases and inequities in research and healthcare, and should not exacerbate mistrust.26,27 Failing to address mistrust of technology could widen disparities as AI-powered tools are adopted, leading certain communities to benefit less from these advancements.
Overall, there was high acceptance of AI-based retinal screening, with most participants comfortable with its use as part of the eye exam. This is consistent with studies from other countries.11–14 A study conducted in New Zealand reported that 78% of participants were comfortable with the use of AI in eyecare.12 Similarly, Shah et al and Keel et al reported 96% satisfaction after AI-based screening in India and Australia, respectively.11,28 Together, these findings highlight broad receptivity to AI-based DR screening across diverse settings.
However, there is a mismatch between the intended use of fully autonomous AI screening and participants’ understanding of what that use would mean for the physician’s role. Fully autonomous AI-based DR screening is intended to identify referrable DR without input from physicians.29,30 Yet the majority of participants did not feel that AI-based retinal screening could replace a doctor’s visit and believed that humans should always remain responsible for the diagnosis. This perspective has been reported in prior studies as well: studies conducted in Denmark, China, and New Zealand also report a high percentage of participants (40–88%) trusting AI more if it were supervised by a doctor.12,13,31 Similar to our study, only a little over half of participants trusted the results delivered by the AI eye exam as much as those of a trained health professional.13,31 Safe and responsible use of this technology requires addressing this mismatch between patients’ understanding and the intended use of the technology.
There are several limitations to this study. Participants were recruited from a tertiary academic referral center in an urban setting, and the sample size limits the generalizability of these perspectives. Given the sample size, the subgroup analyses were exploratory and designed to generate hypotheses; therefore, inferential statistics were not performed. AI was explained to participants who were not already aware of the technology, so reports of awareness of AI could be artificially inflated. We did not collect data on patients’ education level or health literacy, which may affect their understanding and acceptance of AI. The non-mydriatic camera is intended for use without dilation, but in the context of the study, participants were already dilated for their routine eye examination; using the camera as intended could have further improved receptivity. Finally, because the survey was administered orally by the study team, there is a possibility of interviewer-related errors in the collected data.
Conclusion
As AI technologies are developed and integrated into healthcare, understanding diverse patients’ perspectives is essential.32 We found that while participants were generally comfortable with the use of AI as part of the eye examination, most would trust AI more if it were supervised by a doctor and did not believe that AI-based DR screening could replace a doctor’s visit. Implementation of autonomous AI-based DR screening should thoroughly address these concerns to promote acceptance, for example, by explaining how AI systems work, the safeguards in place, and the ongoing role of clinicians.
Ethics Approval
This research project was approved by the Johns Hopkins Institutional Review Board (IRB00431841). All participants provided written informed consent to participate and gave approval for quotations from their transcripts to be published. Collected data have been stored securely in an IRB-approved folder.
Acknowledgments
We would like to acknowledge Anushka Tambade, BS (Johns Hopkins University) for her assistance with this study. The handheld fundus camera (Aurora AEYE) used in this study was provided by Optomed and AEYE Health, which approved the study’s conception. However, the sponsors had no role in the analysis or interpretation of the results.
Author Contributions
All authors made a significant contribution to the work reported, whether that is in the conception, study design, execution, acquisition of data, analysis and interpretation, or in all these areas; took part in drafting, revising or critically reviewing the article; gave final approval of the version to be published; have agreed on the journal to which the article has been submitted; and agree to be accountable for all aspects of the work.
Funding
This project was supported by the Shelter Foundation, a Career Development Award from the Research to Prevent Blindness (CXC), K23 award from the NIH/NEI (award number K23EY033440) (CXC), and an unrestricted grant from Research to Prevent Blindness (Wilmer Eye Institute). Dr. Cai is the Jonathan and Marcia Javitt Rising Professor of Ophthalmology.
Disclosure
CXC reports grants from Regeneron Pharmaceuticals Inc; personal fees from Optomed USA Inc, Boehringer Ingelheim, and 4D Molecular Therapeutics, Inc., outside of the submitted work. The authors report no other conflicts of interest in this work.
References
1. Yau JWY, Rogers SL, Kawasaki R, et al. Global prevalence and major risk factors of diabetic retinopathy. Diabetes Care. 2012;35(3):556–9. doi:10.2337/dc11-1909
2. Lim JI, Kim SJ, Bailey ST, et al. Diabetic retinopathy preferred practice pattern®. Ophthalmology. 2025;132(4):P75–P162. doi:10.1016/j.ophtha.2024.12.020
3. Increase the proportion of adults with diabetes who have a yearly eye exam — D04. Available from:
4. Hatef E, Vanderver BG, Fagan P, Albert M, Alexander M. Annual diabetic eye examinations in a managed care Medicaid population. Am J Manag Care. 2015;21(5):e297–e302.
5. Teo ZL, Tham Y-C, Yu M, et al. Global prevalence of diabetic retinopathy and projection of burden through 2045: systematic review and meta-analysis. Ophthalmology. 2021;128(11):1580–1591. doi:10.1016/j.ophtha.2021.04.027
6. Meng Y, Liu Y, Ma Y, et al. Global, regional, and national burden of blindness due to diabetic retinopathy, 1990–2021. Ophthalmol Ther. 2025;14(10):2599–2615. doi:10.1007/s40123-025-01230-y
7. Digital Diagnostics. Digital Diagnostics. 2021. Available from:
8. DE NOVO CLASSIFICATION REQUEST FOR IDX-DR. Available from:
9. Huang JJ, Channa R, Wolf RM, et al. Autonomous artificial intelligence for diabetic eye disease increases access and health equity in underserved populations. NPJ Digit Med. 2024;7(1):196. doi:10.1038/s41746-024-01197-3
10. Shah SA, Sokol JT, Wai KM, et al. Use of artificial intelligence–based detection of diabetic retinopathy in the US. JAMA Ophthalmol. 2024;142(12):1171–1173. doi:10.1001/jamaophthalmol.2024.4493
11. Shah P, Mishra D, Shanmugam M, Vighnesh MJ, Jayaraj H. Acceptability of artificial intelligence-based retina screening in general population. Indian J Ophthalmol. 2022;70(4):1140–1144. doi:10.4103/ijo.IJO_1840_21
12. Yap A, Wilkinson B, Chen E, et al. Patients perceptions of artificial intelligence in diabetic eye screening. Asia Pac J Ophthalmol. 2022;11(3):287–293. doi:10.1097/APO.0000000000000525
13. Krogh M, Germund Nielsen M, Byskov Petersen G, et al. Patient acceptance of AI-assisted diabetic retinopathy screening in primary care: findings from a questionnaire-based feasibility study. Front Med (Lausanne). 2025;12:1610114. doi:10.3389/fmed.2025.1610114
14. Chen Y, Song F, Zhao Z, et al. Acceptability, applicability, and cost-utility of artificial-intelligence-powered low-cost portable fundus camera for diabetic retinopathy screening in primary health care settings. Diabetes Res Clin Pract. 2025;223:112161. doi:10.1016/j.diabres.2025.112161
15. ClinicalTrials.gov. Available from:
16. Re: K240058 trade/device name: aeye-ds regulation number: 21 CFR 886.1100 regulation name: retinal diagnostic software device regulatory class: class II product code: PIB. 2024. Available from:
17. Optomed Aurora AEYE. Optomed US. 2022. Available from:
18. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40(5):373–383. doi:10.1016/0021-9681(87)90171-8
19. Glasheen WP, Renda A, Dong Y. Diabetes complications severity index (DCSI)-update and ICD-10 translation. J Diabetes Complications. 2017;31(6):1007–1013. doi:10.1016/j.jdiacomp.2017.02.018
20. See EJ, Jayasinghe K, Glassford N, et al. Long-term risk of adverse outcomes after acute kidney injury: a systematic review and meta-analysis of cohort studies using consensus definitions of exposure. Kidney Int. 2019;95(1):160–172. doi:10.1016/j.kint.2018.08.036
21. Tang T, Tran D, Han D, Zeger SL, Crews DC, Cai CX. Place, race, and lapses in diabetic retinopathy care. JAMA Ophthalmol. 2024;142(6):581. doi:10.1001/jamaophthalmol.2024.0974
22. Kind AJH, Buckingham WR. Making neighborhood-disadvantage metrics accessible — the neighborhood atlas. N Engl J Med. 2018;378(26):2456–2458. doi:10.1056/NEJMp1802313
23. Fritsch SJ, Blankenheim A, Wahl A, et al. Attitudes and perception of artificial intelligence in healthcare: a cross-sectional survey among patients. Digit Health. 2022;8:20552076221116772. doi:10.1177/20552076221116772
24. Kraft SA, Chopra S, Duran MC, et al. Perspectives of Hispanic and Latinx community members on AI-enabled mHealth tools: qualitative focus group study. J Med Internet Res. 2025;27:e59817.
25. Al-Haque E, Thompson G, Smith ADR, Johnson B. An investigation into black and brown communities’ engagement with data & technology. Proc AAAI/ACM Conf AI Ethics Soc. 2025;8(1):66–75. doi:10.1609/aies.v8i1.36531
26. Henrietta Lacks: science must right a historical wrong. Nature. 2020;585(7823):7. doi:10.1038/d41586-020-02494-z
27. Scharff DP, Mathews KJ, Jackson P, Hoffsuemmer J, Martin E, Edwards D. More than Tuskegee: understanding mistrust about research participation. J Health Care Poor Underserved. 2010;21(3):879–897. doi:10.1353/hpu.0.0323
28. Keel S, Lee PY, Scheetz J, et al. Feasibility and patient acceptability of a novel artificial intelligence-based screening model for diabetic retinopathy at endocrinology outpatient services: a pilot study. Sci Rep. 2018;8(1):4330. doi:10.1038/s41598-018-22612-2
29. Bhaskaranand M, Ramachandra C, Bhat S, et al. The value of automated diabetic retinopathy screening with the EyeArt system: a study of more than 100,000 consecutive encounters from people with diabetes. Diabetes Technol Ther. 2019;21(11):635–643. doi:10.1089/dia.2019.0164
30. Grzybowski A, Brona P, Lim G, et al. Artificial intelligence for diabetic retinopathy screening: a review. Eye (Lond). 2020;34(3):451–460. doi:10.1038/s41433-019-0566-0
31. Yang K, Zeng Z, Peng H, Jiang Y. Attitudes of Chinese cancer patients toward the clinical use of artificial intelligence. Patient Prefer Adherence. 2019;13:1867–1875. doi:10.2147/PPA.S225952
32. Nelson CA, Pérez-Chada LM, Creadore A, et al. Patient perspectives on the use of artificial intelligence for skin cancer screening: a qualitative study. JAMA Dermatol. 2020;156(5):501–512. doi:10.1001/jamadermatol.2019.5014
