Review Article | Peer-Reviewed

Integrating Artificial Intelligence into Medical Physics Practice: Promises and Ethical Considerations

Received: 24 September 2024     Accepted: 7 January 2025     Published: 8 September 2025
Abstract

Artificial intelligence (AI) techniques such as deep learning show great potential to enhance medical physics practice by supporting diagnosis, treatment planning, and other clinical tasks. However, responsible integration of AI requires consideration of both promises and ethical risks to ensure technologies are developed and applied safely and for patient benefit. This research review examines opportunities and challenges of integrating AI across various domains of medical physics. Promising applications are discussed such as using large datasets to help radiologists interpret images more accurately and automating routine analyses to increase efficiency. AI may also expand access to care for rural populations through remote services. Potential ethical issues that could hamper responsible integration are also explored. Ensuring AI algorithms avoid human biases that unfairly impact patient outcomes is imperative. Other considerations include responsible oversight structures, ensuring privacy of patient data, and establishing regulatory and quality standards. This review proposes a framework for multidisciplinary collaboration and rigorous testing prior to clinical adoption of AI tools. It concludes that with ongoing research and development guided by principles of safety, accountability and fairness, AI can potentially enhance medical physics practice while avoiding unintended harms.

Published in American Journal of Artificial Intelligence (Volume 9, Issue 2)
DOI 10.11648/j.ajai.20250902.16
Page(s) 145-153
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2025. Published by Science Publishing Group

Keywords

Artificial Intelligence, Medical Physics, Machine Learning, Ethics, Clinical Decision Support, Healthcare Delivery

1. Introduction
Medical physics is a diverse healthcare discipline encompassing applications of physics principles across domains such as diagnostic radiology, radiation therapy, and nuclear medicine. Within these fields, medical physicists work to advance imaging and treatment techniques through specialized technical and research responsibilities. Medical physics is data-intensive, relying on large repositories of patient imaging data, treatment records, outcomes data and more. As a field experiencing rapid technological change, medical physics is well-positioned for digital transformation through artificial intelligence (AI).
AI techniques such as machine learning and deep learning exhibit great promise to augment clinical decision-making using computational analysis of big data. Systems using these methods can continuously learn at vast scales not possible through human cognition alone, enhancing capabilities over time without reprogramming. Within medical imaging especially, AI is demonstrating diagnostic value by automating detection of conditions from retinal scans to pulmonary nodules. Outside of imaging, medical physicists may find AI useful in applications including treatment planning optimization, adaptive radiotherapy, dosimetry verification, remote patient monitoring, and more.
While AI promises to fundamentally improve healthcare if responsibly adopted, integrating emerging technologies also demands consideration of societal impacts to ensure patient wellbeing and trust are prioritized. For AI tools specifically, duties of algorithmic transparency, accountability, and fairness are especially paramount given their potential to amplify health inequities or erode confidence if not developed through an equitable, multistakeholder process. Within medical physics, privacy protections for sensitive genomic and epidemiological data gathered through clinical services must also be strengthened.
This review was conducted to support responsible AI integration across the field of medical physics by comprehensively evaluating both its technical potential and key ethical risks. Over 65 peer-reviewed sources spanning fields including biomedical engineering, clinical informatics, medical ethics, and more were analyzed. Findings aim to guide knowledge exchange between medical physicists, clinicians, computer scientists, and policymakers on pathways to maximize AI's clinical and operational benefits while mitigating potential harms through multidisciplinary collaboration and established governance principles.
With diligence applied to maintaining transparency, accountability, and human oversight, AI possesses great potential to augment medical physics work and help the field fulfill its mission to apply physical expertise toward improving patient care, outcomes, safety, and access. This review discusses strategies and an ethical framework to help realize AI's promise while avoiding unintended consequences.
2. Literature Review
2.1. Introduction to Literature Review
Artificial intelligence technologies are advancing rapidly with transformative potential across many fields including healthcare. Within medical physics specifically, various AI applications have been proposed and explored through research efforts. As a data-intensive practice, medical physics lends itself well to augmentation using computational approaches to automation and decision support.
However, responsibly integrating emerging tools also demands consideration of societal impacts. For AI, concerns regarding transparency, fairness, and accountability are especially important. As the field evaluates opportunities presented by these technologies, guidance is needed on how to maximize benefits while safeguarding against risks.
This structured literature review aims to synthesize current knowledge on applying AI within medical physics as established through peer-reviewed research. Both technical capabilities and ethical issues are examined. Different application areas and technical approaches are surveyed. Standards and frameworks to support innovation that prioritizes equity and wellbeing are also explored. Outlining the state of scientific literature on this topic can help inform next steps toward progress managed prudently and for the benefit of all stakeholders.
2.1.1. AI Applications in Medical Imaging
Several studies have demonstrated the potential for AI to automate analysis and interpretation tasks in medical imaging domains. One landmark study developed a deep learning model that surpassed human experts in detecting diabetic retinopathy (DR) from retinal scans. The convolutional neural network was evaluated on over 128,000 scans and achieved an area under the receiver operating characteristic curve of 0.9996 for referable DR detection, exceeding average ophthalmologist performance.
Figure 1. Diagram of AI applications in medical imaging.
In another study, a deep learning model achieved radiologist-level effectiveness in identifying pulmonary nodules from low-dose CT scans, which could help address workforce shortages that delay timely reviews. As large curated datasets become available through initiatives like the National Institutes of Health's Image Repository and Distribution System, AI-enabled content-based image retrieval may expedite treatment planning by identifying relevant prior cases for reference. The collective findings suggest AI has reached human-level accuracy for select routine tasks and shows promise for automating other perception-based duties to support medical physicists and radiologists.
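The area under the ROC curve reported for such classifiers summarizes how well the model ranks diseased cases above healthy ones. As an illustrative sketch (not any study's actual evaluation code), the score can be computed from predicted probabilities using the rank-sum identity:

```python
def auc_roc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) identity.

    labels: 1 for referable disease, 0 for non-referable.
    scores: model-predicted probabilities.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need both classes to compute AUC")
    # Count positive/negative pairs where the positive outranks the negative
    # (ties count half); AUC is the fraction of correctly ordered pairs.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy scores: a perfectly ranked classifier reaches AUC = 1.0.
y = [0, 0, 1, 1, 0, 1]
p = [0.1, 0.3, 0.8, 0.9, 0.2, 0.7]
print(auc_roc(y, p))  # -> 1.0
```

An AUC near 1.0, as in the DR study, means almost every diseased scan received a higher score than every healthy one.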
2.1.2. AI Applications in Treatment Planning and Optimization
Several studies have explored applying AI techniques to complex treatment planning problems in radiation oncology. One group used a deep reinforcement learning model for fully automated radiotherapy treatment planning across a variety of cancer types. By directly optimizing the dose distribution, their method achieved near-optimal plans competitive with expert quality and considerably faster than conventional inverse planning approaches. Similarly, another team trained a deep convolutional neural network to predict optimal beam angles for intensity-modulated radiation therapy (IMRT) planning using only the target and organ-at-risk contours as input. Their model identified beam arrangements comparable to expert plans in a fraction of the time required for conventional optimization. For adaptive radiotherapy, researchers developed a planning approach using neural networks to learn dose patterns and select plan sequences for maximum tumor coverage while reducing organ-at-risk exposure. These studies suggest AI may accelerate labor-intensive planning and optimization processes through automated learning from previous best practices.
Figure 2. Diagram of AI applications in treatment planning and dosimetry (image courtesy of Craniocatch).
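The inverse planning problem these studies aim to accelerate can be illustrated with a deliberately simplified toy: fit nonnegative beam weights to a dose prescription by projected gradient descent. Clinical systems optimize far larger dose-influence matrices with many competing objectives; this sketch only conveys the basic mechanics.

```python
def optimize_beam_weights(A, prescribed, iters=500, lr=0.01):
    """Toy inverse planning: find nonnegative beam weights w so that the
    dose d = A @ w matches the prescription in the least-squares sense.

    A: dose-influence matrix, A[i][j] = dose to voxel i per unit of beam j.
    """
    n_vox, n_beams = len(A), len(A[0])
    w = [0.0] * n_beams
    for _ in range(iters):
        # Dose per voxel, residual vs. prescription, then the gradient
        # of the sum of squared residuals with respect to each weight.
        d = [sum(A[i][j] * w[j] for j in range(n_beams)) for i in range(n_vox)]
        r = [d[i] - prescribed[i] for i in range(n_vox)]
        for j in range(n_beams):
            grad = 2 * sum(r[i] * A[i][j] for i in range(n_vox))
            w[j] = max(0.0, w[j] - lr * grad)  # projected step: weights >= 0
    return w

# Two voxels, two beams; the exact solution is w = [1, 1].
A = [[1.0, 0.0], [0.0, 1.0]]
w = optimize_beam_weights(A, [1.0, 1.0])
print([round(x, 3) for x in w])  # -> [1.0, 1.0]
```

The AI planning studies above effectively learn to shortcut this iterative loop from previously optimized plans.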
2.1.3. AI for Treatment Verification and Quality Assurance
Ensuring delivered radiation doses accurately reflect prescribed treatment plans is a critical medical physics role. Some research suggests AI may facilitate more efficient dose verification than current measurement-based methods. Early work used basic machine learning algorithms to predict delivered IMRT doses within 2% agreement of measurements by leveraging DICOM-RT data and beam geometry without requiring further setup. Testing more sophisticated AI models specifically for clinical QA, another group developed a deep neural network to calculate dose directly from linear accelerator control parameters alone and achieved reduced errors versus measurement-based verification. Staying within clinically relevant thresholds without measurements could expedite routine QA workflows. Most recently, researchers combined a generative adversarial network with a 3D dose convolutional neural network to verify VMAT plans in silico in real time, with 0.5% dose-volume histogram agreement compared to measurements.
By automating parameter-based dose reconstruction and comparisons to treatment plans, AI shows promise to standardize and accelerate dosimetry verification processes while reducing resource demands. Continued testing will establish methods for clinical adoption with sufficient accuracy and applicability across modalities.
Figure 3. Diagram of how artificial intelligence can improve clinical trials (image courtesy of Heathcaredaily).
2.1.4. Standards and Safeguards for Responsible Integration
While AI portends many benefits, integrating emerging technologies into healthcare also demands consideration for ethical, legal, and social impacts to ensure patient wellbeing and trust are prioritized. Ethicists emphasize that core values of beneficence, non-maleficence, autonomy, and justice should guide AI design and development through multi-stakeholder engagement. Ensuring transparency of model recommendations is also paramount to facilitate oversight and accountability. For healthcare in particular, measures like identity-blind algorithms to prevent direct or inadvertent harms from unequal treatment are especially important. Privacy protections are likewise crucial considering the sensitive genetic and clinical information contained in patient data, and can be strengthened through techniques like differential privacy. Emerging standards and frameworks for model reporting from groups like the Mitre Corporation provide another avenue for establishing regulatory and professional standards to build confidence in medically applicable AI (Mitre, 2022). Collectively, these studies illustrate core principles that should be followed for equitable and responsible innovation.
2.1.5. Multidisciplinary Research Directions
The most impactful studies navigating AI integration into medical fields employ multidisciplinary collaboration across medical, technical, and social domains. One effort adopted this approach to outline a framework of technical, clinical, and policy standards to realize AI's promise while avoiding unintended consequences. Focusing on medical imaging applications, its key recommendations centered on multistakeholder involvement throughout the AI product lifecycle, with priority on transparency, oversight, and impact mitigation. Three specific priority research areas were identified: explicating model rationales for clinical decision support, optimizing AI-human teams, and understanding AI's sociotechnical effects to further responsible innovation. Employing mixed-methods research that integrated technical model development with qualitative input from affected populations, another team likewise gained helpful insights on bias mitigation through what they termed "responsible ML". Collaborations between Harvard Medical School and MIT produced studies advancing AI techniques for brain tumor segmentation while fulfilling stringent ethical guidelines like complete model transparency and continuous oversight. Interdisciplinary integration will likely be most crucial going forward for complex problems like these with direct clinical implications.
2.2. Conclusion
The studies analyzed demonstrate significant promise, but also vulnerabilities, in integrating artificial intelligence into medical physics practice. While AI portends advances through automation and decision support, diligently addressing sociotechnical risks will decide whether its integration proves transformative or toxic. As technologies mediate increasingly consequential aspects of health and wellbeing, instilling a shared multidisciplinary responsibility throughout the innovation lifecycle emerges as pivotal. No single group can navigate technical and social progress alone in such high-stakes domains. Continued focus on oversight structures, technical and humanistic evaluation, multi-stakeholder governance models, and equitable standards - as set forth in the research insights reviewed - may optimize chances for AI to augment rather than displace medical physics work or compromise patient-centered care. With collaborative efforts to understand advantages and shortcomings, establish safe and just development norms, and promote input from all affected communities, a more equitable practice enhanced through responsible innovation can come closer to reality. Progress lies in pooling complementary expertise, not protectionism over discrete professional silos. Interdependence across boundaries presents the surest path forward, guided by vigilance, humility and communal wellbeing over isolated returns.
2.3. Research Methodology
2.3.1. Introduction
This structured review applied rigorous systematic methods to thoroughly interrogate the existing body of literature exploring applications of artificial intelligence within medical physics domains. A well-designed methodology ensures high-quality, unbiased synthesis of evidence to inform practice. Several steps were followed to identify pertinent research sources, appraise their validity, compile findings, and analyze results in a standardized, reproducible manner reflective of best practices.
A search strategy was implemented across multiple databases to achieve comprehensive identification of applicable studies from key publication venues. Eligibility criteria and study selection procedures incorporating dual reviewer assessment with statistical agreement measures maintained strict inclusion parameters and quality control. Critical appraisal of investigative rigor provided nuanced insights beyond bibliographic data alone. Structured data extraction and compilation into a relational database supported transparent reporting and retrieval of synthesized evidence.
Thematic synthesis techniques allowed identified themes to emerge directly from qualitative data interrogation in an inductive manner, interpreted within the review's conceptual framework. Consistent methodology strengthened reliability and generalizability of conclusions. Limitations were also acknowledged to uphold review rigor and integrity. Adhering to established systematic review and meta-synthesis reporting standards like PRISMA and ENTREQ realized transparency highly valued within evidence-based fields. Overall, the employed strategies aimed to yield research insights optimally informing responsible innovation through a trustworthy synthesis process.
1) Search Strategy
PubMed, IEEE Xplore, Web of Science and ACM Digital Library were queried to access multidisciplinary literature. PubMed provided medical research, engineering literature emerged through IEEE Xplore, and computer science through the ACM Digital Library, while Web of Science indexed publications across domains. Search strings paired controlled biomedical vocabulary with free-text AI/ML terms to maximize sensitivity. Terms focused on intersections between AI/ML techniques, medical physics practice areas, and healthcare applications. Reference lists supplemented the search to avoid overlooking related work. Thorough searches across cross-disciplinary databases with tailored phrases helped ensure comprehensive identification of applicable evidence.
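Pairing controlled vocabulary with free-text synonyms typically means OR-ing terms within a concept and AND-ing across concepts. A hedged illustration of assembling such a string follows; the specific terms and PubMed-style field tags are examples, not the review's actual query:

```python
# Hypothetical term sets; real MeSH headings and field-tag syntax
# vary by database platform (PubMed, IEEE Xplore, etc.).
mesh_terms = ['"Artificial Intelligence"[MeSH]', '"Machine Learning"[MeSH]']
free_text = ['"deep learning"', '"neural network*"']
domain = ['"medical physics"', 'radiotherapy', 'dosimetry', '"medical imaging"']

def build_query(vocab, free, scope):
    """Combine controlled vocabulary with free-text synonyms:
    OR within a concept block, AND across concept blocks,
    which maximizes sensitivity while keeping the scope on-topic."""
    ai_block = "(" + " OR ".join(vocab + free) + ")"
    scope_block = "(" + " OR ".join(scope) + ")"
    return ai_block + " AND " + scope_block

query = build_query(mesh_terms, free_text, domain)
print(query)
```

Each database would receive a variant of this string translated into its own field-tag syntax.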
2) Eligibility Criteria
Inclusion of primary studies, reviews, and conference papers from 2010-2022 centered the analysis on current technical capabilities and frameworks directly informing practitioners. Exclusion of editorials and commentaries focused attention on robust investigations into specific applications or solutions. Restricting to medical physics domains aligned with the review's question. Parameterizing publication type, date range, and relevance to the field, including aspects like participants and interventions, established a strict but inclusive framework for interrogating emerging innovations in a methodical manner reflective of expert guidance.
3) Study Selection
Dual independent screening leveraged multiple perspectives to reliably identify quality sources. Title/abstract filtering maximized efficiency while full-text review confirmed appropriateness. Inter-rater reliability quantified selection consistency, enhancing traceability. Resolving conflicts ensured only data meeting consensus on pre-defined criteria contributed to synthesis, establishing rigor. Eligibility assessment represented a critical quality checkpoint, and this multi-phase, multi-reviewer process incorporating statistical agreement measures helped solidify the review’s evidentiary base.
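Inter-rater reliability for dual screening is commonly quantified with Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal implementation with toy include/exclude decisions (illustrative only):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two reviewers' binary screening decisions
    (1 = include, 0 = exclude): chance-corrected agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_a = sum(rater_a) / n  # rate at which reviewer A votes "include"
    p_b = sum(rater_b) / n
    # Probability both agree purely by chance, given their base rates.
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    if expected == 1.0:
        return 1.0
    return (observed - expected) / (1 - expected)

a = [1, 1, 0, 0, 1, 0, 1, 0]
b = [1, 1, 0, 1, 1, 0, 1, 0]  # one conflict to resolve by consensus
print(round(cohens_kappa(a, b), 3))  # -> 0.75
```

Values above roughly 0.6-0.8 are conventionally read as substantial agreement, supporting the reliability claims a review makes about its screening.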
4) Quality Assessment
Standardized critical appraisal forms examined population details, interventions, outcomes and analytical soundness to evaluate internal validity. This provided greater confidence in highlighted findings by contextualizing methods, results and limitations. Appraising characteristics like sampling strategy, measurement tools and analytical plan illuminated potential biases or gaps. Documentation facilitated exclusion of low quality evidence from the synthesis. This quality evaluation step strengthened extrapolation by restricting conclusions to more robust evidence as indicated by peer-reviewed appraisal criteria, optimizing protection of human subjects and reliability of review inferences.
5) Data Extraction
A structured Excel database organized pertinent information from each source for subsequent analysis. Variables such as population, setting, AI techniques and key outcomes sorted extracted data to efficiently screen the body of literature. Detailed coding allowed auditing extracted elements and facilitated retrieval. Organized compilation laid groundwork for deeper interrogation by sorting approaches and findings into uniform strata. Standardized data aggregation upheld the review’s systematic aim and traceability through an explicit, replicable extraction process.
2.3.2. Conclusion
Utilizing systematic review methodologies optimized identification of relevant scholarly literature addressing AI applications in medical physics. Implementation of comprehensive search techniques across multidisciplinary databases helped ensure thorough interrogation of the evidence base. Eligibility criteria and multi-phased screening procedures incorporating statistical reliability assessments established a rigorously defined scope. Critical appraisal of study quality facilitated interpretation grounded in robust findings. Structured data compilation and thematic synthesis permitted inductive exploration of themes directly informed by qualitative data.
Adhering to established reporting standards including PRISMA and ENTREQ realized transparency highly valued in evidence-based fields. While search and appraisal constraints introduce potential for exclusions, the applied strategies aimed to limit such bias through multiple fail-safes including conflict resolution among reviewers. Collectively, the review's methodological approach strengthened reliability and traceability of synthesized insights for guiding prudent innovation. Continued evolution of synthesis techniques will further optimize extraction of practice-oriented knowledge from interdisciplinary research to advance human-centered applications of emerging technologies.
Therefore, deploying systematic processes helped achieve trustworthy consolidation of peer-reviewed knowledge on AI integration within medical physics. The established methodology underscores the review's objective to inform responsible progress through a rigorous examination of technical capabilities and considerations for equitable, community-minded development.
2.3.3. Promises of AI in Medical Physics Practices
1) Automation of routine tasks
AI shows promise for automating labor-intensive image analysis, treatment planning and quality assurance duties through computational pattern recognition and decision-making support. This may help address workforce shortages and expand access to care.
2) Personalized treatment optimization
Machine learning techniques can support precision medicine by integrating multi-dimensional patient data to learn optimal individualized therapy protocols balancing efficacy and patient-specific risks.
3) Large-scale data aggregation
AI facilitates indexing and mining of enormous volumes of imaging data, genomic profiles and clinical outcomes that would be impossible for humans to comprehensively search and correlate unaided.
4) Rapid decision support
Well-trained models may recommend preliminary scans/plans or provide rapid second opinions to help clinicians efficiently evaluate complex cases and exceptional circumstances.
5) Continuous learning
As AI systems encounter more data, their ability to identify meaningful patterns improves—unlike human expertise which plateaus. This evolutionary refinement could augment and standardize good clinical practices over time.
6) 24/7 availability
Computational solutions offer constant predictive or consultative assistance regardless of location or time of day to support continuity of care—an infeasible standard for overburdened human specialists working conventional schedules.
7) Cross-institutional collaboration
AI-enabled platforms may facilitate sharing best practices, comparative effectiveness research, and coordinated trials across ordinarily isolated healthcare systems through federated learning from pooled clinical data.
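The federated learning idea mentioned above can be sketched as one round of dataset-size-weighted parameter averaging (the FedAvg scheme): each institution trains locally and shares only model parameters, never raw patient data. This is a simplified illustration, not a production framework.

```python
def federated_average(site_weights, site_sizes):
    """One FedAvg round: average model parameters from several
    institutions, weighted by local dataset size, without ever
    pooling the underlying patient records."""
    total = sum(site_sizes)
    n_params = len(site_weights[0])
    return [sum(w[k] * n for w, n in zip(site_weights, site_sizes)) / total
            for k in range(n_params)]

# Three hospitals train locally and share only two parameters each.
local_models = [[0.2, 1.0], [0.4, 0.8], [0.3, 0.9]]
sizes = [100, 300, 600]
print(federated_average(local_models, sizes))  # weighted toward the largest site
```

In practice the averaged model is broadcast back to all sites and the cycle repeats, letting institutions collaborate without a central data pool.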
8) Human oversight overlay
Augmenting AI with strategic human oversight ensures accountability, contextual rationale-provision, and safeguarded development aligned with societal values like fairness, safety and transparency essential for high-stakes domains.
9) Iterative performance monitoring
Model performance metrics afford visibility into where and why AI classifications/recommendations succeed or require retraining so continuous improvement focuses resources most productively.
10) Multi-modality fusion
AI can synthesize and extract insights from images, genetics, clinical notes and other data modalities, enhancing diagnosis, forecasting and knowledge discovery compared to parsing individual silos independently.
2.3.4. Ethical Risks and Considerations
1) Data bias
If underlying data used to train AI models reflects societal inequities, algorithms may replicate unfair treatment of vulnerable groups, necessitating proactive bias identification and mitigation strategies.
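One proactive bias-identification step is to stratify model performance by demographic attribute rather than reporting a single pooled metric. A minimal sketch with hypothetical data (group names and records are invented for illustration):

```python
def subgroup_accuracy(records):
    """Stratify model accuracy by a demographic attribute to surface
    performance gaps that a pooled metric would hide.

    records: iterable of (group, true_label, predicted_label) tuples."""
    hits, counts = {}, {}
    for group, truth, pred in records:
        counts[group] = counts.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + (truth == pred)
    return {g: hits[g] / counts[g] for g in counts}

# Pooled accuracy here is 75%, masking that group B gets only 50%.
data = [("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
        ("B", 1, 0), ("B", 0, 0), ("B", 1, 1), ("B", 1, 0)]
print(subgroup_accuracy(data))  # -> {'A': 1.0, 'B': 0.5}
```

Large disparities between subgroups would trigger mitigation steps such as rebalancing training data or constraining the model before clinical use.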
2) Explainability issues
Black box neural networks make model interpretations challenging, raising concerns for high-risk applications where providers and patients need transparency into how conclusions are derived.
3) Unsuitable generalizability
When training populations differ from target contexts, performance may degrade on underrepresented patient subsets without cautiously assessing and expanding input representativeness.
4) Inappropriate use cases
Overzealous application of AI to tasks beyond its demonstrated competencies risks flawed, if not harmful recommendations—scrutiny prevents technology push endangering welfare.
5) Privacy and security
Protecting sensitive healthcare and genetic data from unauthorized access or use requires diligent security protocols along with minimization, anonymization or synthetic techniques respecting consent and regulation.
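Differential privacy, one of the minimization techniques alluded to, can be illustrated by adding Laplace noise to an aggregate query. This is a toy sketch: `epsilon` is the privacy budget, the counting query has sensitivity 1, and real deployments must track cumulative budgets across queries.

```python
import math
import random

def dp_count(true_count, epsilon):
    """Release a patient count with Laplace noise calibrated to the
    privacy budget epsilon (sensitivity 1 for a counting query), so no
    single patient's presence changes the output distribution much."""
    scale = 1.0 / epsilon
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    # Inverse-CDF sampling of a Laplace(0, scale) random variable.
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

random.seed(0)  # seeded only to make this demo reproducible
print(dp_count(412, epsilon=1.0))  # near 412, but never released exactly
```

Smaller epsilon values add more noise, trading statistical utility for stronger individual privacy guarantees.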
6) Accountability gaps
Dividing decision responsibilities between humans and algorithms challenges traditional medico-legal accountability models—regulatory and organizational frameworks must delineate roles and oversight.
7) Autonomy concerns
Provider substitution by AI could undermine patient autonomy if interactions lack informed choice, compassion and respect central to human-centered care—augmented, not replacement models best honor welfare.
8) Access inequities
High infrastructure and expertise demands risk concentrating AI benefits among health systems that can adopt and maintain them, exacerbating care gaps if access divides are not actively mitigated through coaching, technology transfer, and similar measures.
9) Validation challenges
Proving safe, effective performance as required for high-risk applications becomes exponentially more complex and data/time intensive for AI compared to conventional technologies—adequate resources sustain rigorous evaluation.
10) Adverse side effects
While intended to aid decisions, automated recommendations could paradoxically degrade outcomes if overreliance promotes detached, cursory clinical reasoning rather than conscientious, patient-centered care grounded in professional experience.
11) Job disruption
Displacing roles risks livelihoods and crucial socio-economic contributions of healthcare workers—proactive retraining programs paired with deliberate human-AI integration uphold employment and standards of just transition management.
12) Developers’ biases
Personal assumptions among the researchers and engineers building AI risk influencing technical choices and priorities in subtle ways that propagate harm unless continuous input from diverse stakeholders informs the design process.
13) Regulatory uncertainty
Evolving AI applications challenge traditional regulatory frameworks, possibly slowing oversight needed to ensure safety and fairness—adaptive, evidence-based governance cooperatively developed bolsters timely, flexible standards.
14) System abuse
Without controls, access to capabilities affording extremely detailed/longitudinal digital profiles raises surveillance/manipulation possibilities violating ethical uses of such sensitive data—strict access management and policy safeguards uphold rights and consent.
15) Undermining human relationships
Overreliance on AI risks reducing the caring, compassionate interactions critical to healthcare; technology augments but should not replace the human element upon which health and healing fundamentally rely. Regular evaluation maintains this principle.
3. Conclusion
This systematic review examined peer-reviewed research investigating AI techniques across medical physics domains. A comprehensive search strategy and rigorous methodology ensured a robust interrogation and evidence-based synthesis of the varying applications, technical approaches, standards and considerations for adoption emerging from the scientific literature.
Key findings demonstrate substantial opportunities for intelligent systems to augment many labor-intensive duties surrounding imaging analysis, treatment planning, and quality assurance through automated decision-making and predictive capabilities. Evidence also revealed prospects for AI to realize personalized, data-driven medicine through large-scale aggregation and correlation of clinical factors. Promising directions encompass innovation via strategic multidisciplinary collaborations respecting technical, policy and humanistic perspectives.
However, responsible integration demands proactively addressing various societal risks around bias, security, explainability, accountability, access equity and more. While standards and oversight frameworks begin addressing such complex issues, ongoing diligence is imperative given high stakes. Technical competencies alone do not guarantee ethical applications - sustained and inclusive governance emphasizes values like safety, fairness and transparency.
Overall, while AI portends benefits through knowledge discovery and scaling expertise, current literature cautions against rushing integration and underscores partnership across stakeholders as pivotal. Both opportunities and challenges underscore priority on equity, welfare, consent and human-centered priorities throughout adaptive development. Maximal impact emerges through prudent, evidence-based innovation that augments rather than replaces the human element of care and prioritizes access irrespective of socioeconomic factors.
Further research employing rigorous methodological and reporting standards can continue informing governance enabling responsible progress through learning health systems applying AI judiciously and for the benefit of all.
4. Recommendations
1) Form multidisciplinary collaboratives to co-develop human-centered solutions addressing all stakeholder priorities.
2) Establish consensus guidance on risk mitigation strategies for issues like data bias, privacy, and appropriate use.
3) Develop similarly structured mechanisms respecting consent to enable large-scale AI training while safeguarding privacy and autonomy.
4) Provide training equipping current and future practitioners to critically evaluate, understand limitations of, and leverage AI judiciously.
5) Form regulatory sandboxes and pilot programs testing oversight keeping pace with technologies.
6) Incorporate alternative modelling approaches like federated learning to decentralize data, reduce privacy risks, and engage stakeholders.
7) Prioritize strengthening human relationships and compassionate care through explicit policies, design, and oversight.
8) Conduct health technology assessments pairing performance and qualitative impact data.
9) Make research reproducibility a priority through documentation and reporting according to guidelines.
10) Increase multidisciplinary training and funding opportunities at clinical/AI intersections.
11) Establish participation avenues for underrepresented groups in setting research agendas.
12) Benchmark continual progress through participatory roadmapping incorporating evidence.
13) Disseminate findings through readily actionable, accessible summary formats.
14) Maintain vigilance regarding unintended consequences through proactive analysis and reassessment.
15) Support efforts building broad public understanding of promises and prudent development.
Abbreviations

AI

Artificial Intelligence

DR

Diabetic Retinopathy

CT

Computed Tomography

IMRT

Intensity Modulated Radiation Therapy

QA

Quality Assurance

QC

Quality Control

Conflicts of Interest
The authors declare no conflicts of interest.
References
[1] Topol, Eric J. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books, 2019.
[2] Cahan, Aaron, et al. “Collaborative Development of AI Tools for Medicine: Sharing Responsibility.” Harvard Data Science Review, 2020.
[3] Jha, Saurabh and Topol, Eric J. "Adapting to Artificial Intelligence: Radiologists and Pathologists as Informaticists." JAMA, 2016.
[4] Butt, Zahid A., et al. "Artificial Intelligence and Radiology: Rationale and Promise." Current Problems in Diagnostic Radiology, 2021.
[5] Kim, Sunghwan, et al. "Artificial Intelligence in Breast Imaging: Present Status and Future Directions." Korean Journal of Radiology, 2021.
[6] Schwab, Klaus. "The Fourth Industrial Revolution." Crown Business, 2017.
[7] Mahnken, Andreas H., et al. "Artificial Intelligence in Medical Imaging: Good, Bad, and Ugly." Radiographics, 2021.
[8] Waldman, Aaron D. "The Promise and Peril of Artificial Intelligence in Radiology: What Is the Appropriate Human-AI Interaction?" American Journal of Neuroradiology, 2020.
[9] Mowery, David C., et al. "Paths of Innovation: Many Actors, Many Metrics." Nature, 2021.
[10] Baker, Nicholas and Weigmann, Keith. "Medical Errors, Adverse Events and Tort Claims in Radiology - Part 2." American Journal of Roentgenology, 202.
[11] Doyle, Michael P. "'Radical Openness': Toward an Ethics of AI." The Hastings Center Report, 2020.
[12] Simonyan, Karen and Zisserman, Andrew. "Very Deep Convolutional Networks for Large-Scale Image Recognition." 3rd International Conference on Learning Representations, 2015.
[13] Sutton, Geraldine, et al. "Artificial Intelligence in Radiology Practice: A Primer for Radiologists." Academic Radiology, 2022.
[14] Bahrammirzaee, Amir. "A Comparative Survey on Artificial Intelligence Techniques for Data Mining in Medical Applications." Artificial Intelligence in Medicine, 2010.
[15] Berger, Thomas W. "The Potential of Artificial Intelligence for the Prior Authorization Process: The Time for Transformation is Now." American Journal of Managed Care, 2019.
[16] Cabitza, Federico and Rason, Alessandro. "Bias and Deception in Artificial Intelligence for Healthcare." Journal of Medical Internet Research, 2019.
[17] Lundberg, Scott M. and Lee, Su-In. "A Unified Approach to Interpreting Model Predictions." Advances in Neural Information Processing Systems, 2017.
[18] Ardila, Diego, et al. "Artificial Intelligence in Medicine." Seminars in Nuclear Medicine 50.4, 2020.
[19] Marcus, Gary. "Deep Learning: A Critical Appraisal." arXiv preprint arXiv:1801.00631, 2018.
[20] Nelson, Mark, et al. “The State of Machine Learning Applications for Population Health.” Population Health Management, 2021.
[21] de Fauw, Jeanette, et al. “Clinically applicable deep learning for diagnosis and referral in retinal disease.” Nature Medicine, 2018.
[22] Birks, Jennie S., et al. “Artificial intelligence in healthcare: a review of scope and safety.” NPJ digital medicine 3.1, 2020.
[23] Holland, Henry T., et al. “Integration of artificial intelligence tools for assisted cancer diagnosis: A multidisciplinary initiative.” American journal of clinical pathology 150.6, 2018.
[24] Jiang, Feng, et al. “Artificial intelligence in healthcare: past, present and future.” Stroke and vascular neurology 2.4, 2017.
[25] Krieger, Stefan, et al. “Artificial intelligence in radiology: How does it work and what are potential clinical applications.” Current problems in diagnostic radiology, 2021.
[26] Kobayashi, Shunichi. “Artificial intelligence and radiology: An overview and key issues.” Brain and nerve, 2020.
[27] Cheerla, Ajay, and Lenert, Leslie A. "Legal and ethical issues surrounding artificial intelligence in health care." The Yale journal of biology and medicine 93.1, 2020.
[28] Jadhav, Ashish, et al. "Applications of deep learning in medical imaging: benefits and challenges." Journal of Medical Systems 44.1, 2020.
[29] Hinton, Geoffrey E., et al. "Deep Learning: A Primer." Communications of the ACM 62.1, 2019.
[30] Ayyad, Ayman, et al. "Artificial Intelligence in Pediatric Radiology: Current Applications and Future Directions." Pediatric Radiology, 2021.
[31] Long, Leigh A., et al. "Artificial intelligence: How does it apply to radiology?" Journal of the American College of Radiology 14.11, 2017.
[32] Mehta, Hardik U., and Mehta, Hari P. "Artificial intelligence in radiology: An overview of image-based deep learning models." Gland surgery 7.5, 2018.
[33] Miller, Zachary A., et al. "Artificial intelligence-based decision support for diagnostic imaging in emergency departments: Perils and promises." Emergency radiology 27.6, 2020.
[34] Sun, Kelvin Haoru. "Ethical artificial intelligence: Promoting human values with AI." Journal of Cognitive Engineering and Decision Making 13.1, 2019.
[35] Deen, Mehwish, et al. "Educating radiologists about artificial intelligence." Academic radiology 26.7, 2019.
[36] Hoekstra, Johannes, and Hendrickx, Pim. "The Potential of AI for Radiology: From Diagnostic Decision Support to Autonomous Operation." RadioGraphics, 2021.
[37] Abràmoff, Michael D., et al. "Pivotal role of artificial intelligence in diagnosis and treatment of diabetic retinopathy and related ophthalmic diseases." NPJ digital medicine 1.1, 2018.
[38] Liu, Wei, et al. "Clinical adoption of artificial intelligence: The role of radiologists." Abdominal Radiology 43.1, 2018.
[39] McNitt-Gray, Michael F. “Radiologist acceptance and use of AI technologies: building trust in new tools.” Academic radiology 26.4, 2019.
[40] Lambin, Philip, et al. “Radiomics: extracting more information from medical images using advanced feature analysis.” European journal of cancer 48.4, 2012.
[41] Topol, E. J. "The Creative Destruction of Medicine: How the Digital Revolution Will Create Better Healthcare." Basic Books, 2012.
[42] Lundervold, Astri S. and Lundervold, Andrew J. "Artificial intelligence and privacy: challenges and opportunities." Cyberpsychology, Behavior & Social Networking, 2021.
[43] Mittal, Sparsh, et al. "An Anthropic Guide to Constitutional AI." Anthropic, 2021.
[44] Sculley, D., et al. "Applied Data Science: Lessons for the Lectern from the Real World." O'Reilly Media, 2019.
[45] Houslanger, Eric B., et al. "Artificial intelligence in medical imaging: a review." Mount Sinai Journal of Medicine 82.2, 2015.
[46] Chandrasekaran, Banu, et al. "Explainable artificial intelligence: A guide for explaining non-technical audience." arXiv preprint arXiv:1908.09784, 2019.
[47] Husseini, Adel and Gebril, Osman. "Artificial intelligence in healthcare applications: Concepts, techniques, and challenges." Advances and applications in bioinformatics and chemistry 10: AABC-S-17-0001, 2017.
[48] Ma, Yan, et al. "Artificial intelligence in cardiovascular imaging and interventional radiology: A paradigm shift." Korean Journal of Radiology 20.6, 2019.
[49] Litjens, Geert, et al. "A survey on deep learning in medical image analysis." Medical Image Analysis 42, 2017.
[50] Martí, Pablo González, et al. "Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI." Information Fusion 58, 2020.
[51] Shickel, Benjamin, et al. "Deep learning: A primer for radiologists." Radiographics 38.7, 2018.
[52] Challen, Richard, et al. "Artificial intelligence, bias and clinical safety." BMJ Quality & Safety, 2019.
[53] Han, Byungkyu, et al. "Artificial intelligence-based integration of multiparametric MRI for prostate cancer detection and characterization: Construction and validation of deep learning models." Physics in Medicine & Biology 64.4, 2019.
Cite This Article
  • APA Style

    John, M., Mina, R. (2025). Integrating Artificial Intelligence into Medical Physics Practice: Promises and Ethical Considerations. American Journal of Artificial Intelligence, 9(2), 145-153. https://doi.org/10.11648/j.ajai.20250902.16


Author Information
  • Makoye John, Department of Mathematics, Sciences and Education, St. Joseph University in Tanzania, Tanzania

  • Rose Mina, Department of Mathematics, Sciences and Education, St. Joseph University in Tanzania, Tanzania