MEDICINE, LAW & SOCIETY Vol. 18, No. 1, pp. 93–108, April 2025 https://doi.org/10.18690/mls.18.1.93-108.2025 CC-BY, text © Mrčela, Vuletić, 2025
This work is licensed under the Creative Commons Attribution 4.0 International License. This license allows reusers to distribute, remix, adapt, and build upon the material in any medium or format, so long as attribution is given to the creator. The license allows for commercial use. https://creativecommons.org/licenses/by/4.0

NAVIGATING CRIMINAL LIABILITY IN AN ERA OF AI-ASSISTED MEDICINE

Accepted 8. 1. 2025, Revised 29. 1. 2025, Published 11. 4. 2025

MARIN MRČELA,1 IGOR VULETIĆ2
1 Supreme Court of the Republic of Croatia, Zagreb, Croatia marin.mrcela@vsrh.hr
2 Josip Juraj Strossmayer University of Osijek, Faculty of Law, Osijek, Croatia ivuletic@pravos.hr, vuleticigor600@gmail.com
CORRESPONDING AUTHOR ivuletic@pravos.hr

Keywords: AI-assisted medicine, criminal liability, standard of care, accountability

Abstract: This paper explores the interplay between artificial intelligence (AI) and criminal liability within the healthcare sector, particularly in the context of medical malpractice. Adopting a multidisciplinary approach, the study evaluates current legal frameworks and their adequacy in addressing liability for errors involving AI-driven medical systems. Through an analysis of legal theory, case studies, and technological integration, the research highlights the complexities of assigning liability when errors arise from AI-assisted medical decision-making. The methodology includes a comparative legal analysis and a detailed examination of a real-world case involving AI-related treatment errors. Findings reveal that while existing legal frameworks are sufficient for holding humans accountable under the standard of care, they struggle with the unique challenges posed by AI's "black-box" nature. The study argues that further refinement of liability models is necessary, especially as AI systems gain greater autonomy. The paper concludes by offering a roadmap for balancing innovation in AI with the imperative to protect patient rights, emphasizing that liability frameworks must evolve in tandem with technological advancements.

1 Introduction

The modern period can be described as an era of rapidly evolving advanced technologies. Automation and the powerful incursion of AI-based technologies are steadily becoming part of everyday life. Medicine is one of the fields in which this phenomenon has taken on particular importance. In specific fields of medicine, robotic technology has gradually been introduced over the last thirty years with the aim of reducing the risk of mistakes and improving the quality of medical treatment (Camarillo, Krummel & Salisbury, 2004). In surgery, for instance, AI technology is used in various procedures to facilitate the surgeon's work and occasionally aid in diagnosis, selecting appropriate procedures, and determining their execution methods (Kinoshita & Komatsu, 2023). In some branches of medicine, such as radiology, there is even a prediction that AI will soon replace humans entirely. However, these forms of AI are currently still in the initial experimental stages. In medicine, AI technology is currently employed as an aid to human doctors, who still retain the role of making the final decisions (Syed & Zoga, 2018). However, the human's "dominant" position in the interaction with AI requires deeper understanding.
Specifically, the final decision made by the human doctor is based on specific inputs provided by AI. This can be vividly illustrated in the context of neurosurgical practice. Consider a robot-assisted brain tumor operation in which AI autonomously determines (maps) the coordinates for the entry of a biopsy needle. The system utilizes specialized software to navigate the direction of the biopsy needle's entry, significantly reducing procedure time and patient recovery time. The neurosurgeon therefore places complete reliance on this prior assessment by the AI system (Rotim, Splavski & Vrban, 2021). Advancements in the interaction between AI and humans can complicate criminal liability in the event of a treatment error. Specifically, if a human error is based on a prior incorrect input provided by AI, it may be difficult to establish the necessary standards of criminal responsibility. This becomes even more complex if a legal system does not regulate the criminal liability of legal entities, which automatically eliminates criminal responsibility for the companies that produced, programmed, or put a particular AI system into circulation. In other words, the interactive involvement of humans and AI in this scenario can complicate criminal liability, highlighting the need to clearly define appropriate standards that will enable criminal protection of people's health and patients' rights. As already noted in the literature, "AI-enabled surgical robot lawsuits are also on the rise" (Griffin, 2021).

It is important to note that the purpose of this paper is not to engage in a debate about the pros and cons of criminalizing medical errors, since other authors have already engaged in that debate (Dekker, 2007). Rather, it starts from the assumption that this standard is generally accepted, despite potential adverse effects such as defensive medicine. Here, we first outline the current state and modern trend of the integration of AI into patient treatment processes. Additionally, we present some predictions from the scientific literature about the direction of this development in the future. The second part of this paper discusses criminal liability for malpractice from the perspective of using AI systems in patient treatment. The thesis advocated is that the existing standard of care criterion still emphasizes human decision-making, thus precluding the exclusion of criminal liability by invoking the contribution of an AI system as having a decisive impact on guilt (culpability).

This study employs a multidisciplinary approach, combining legal analysis, case study examination, and a review of existing literature on AI in healthcare. The legal analysis focuses on comparative frameworks for criminal liability in medical malpractice, particularly in cases involving AI-assisted decision-making. A case study of a real-world incident involving an AI-driven medical device provides practical insights into how liability issues arise in practice. The literature review encompasses legal, technological, and ethical perspectives, highlighting the challenges posed by AI's opacity and evolving role in medical decision-making. By integrating these methods, the research aims to assess the adequacy of current legal standards and propose avenues for adapting liability frameworks to the growing presence of AI in healthcare.
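Before turning to the current state of AI in healthcare, the interaction pattern described in the neurosurgical example above can be made concrete in a minimal, purely hypothetical sketch: the AI component proposes an entry trajectory, while nothing is executed without the clinician's explicit confirmation. All names, types, and values below are invented for illustration and do not describe any real surgical navigation product.

```python
# Hypothetical sketch of an "AI proposes, clinician decides" workflow.
# All identifiers and values are invented for illustration only.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class TrajectoryProposal:
    entry_point_mm: Tuple[float, float, float]   # proposed skull entry point
    target_point_mm: Tuple[float, float, float]  # tumour target coordinates
    confidence: float                            # model's self-reported confidence

def ai_propose_trajectory(scan_id: str) -> TrajectoryProposal:
    # Stand-in for the planning component; in reality this would be the
    # vendor's navigation software operating on imaging data.
    return TrajectoryProposal((12.0, -4.5, 60.2), (18.3, 2.1, 41.7), 0.93)

def execute_biopsy(proposal: TrajectoryProposal, clinician_approved: bool) -> str:
    # The final decision remains with the human: nothing is executed
    # unless the surgeon has reviewed and confirmed the AI's input.
    if not clinician_approved:
        return "Plan rejected or pending review: no action taken."
    return f"Executing plan towards target {proposal.target_point_mm} mm."

proposal = ai_propose_trajectory("scan-001")
print(execute_biopsy(proposal, clinician_approved=True))
```

The sketch only makes explicit the division of roles on which the later liability discussion rests: the AI supplies the input, but the act that affects the patient is gated on a human decision.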
2 AI in Healthcare: Current State

In order to better understand the role of new technology in medicine, it is necessary first to differentiate between related but distinct concepts: AI, machine learning (ML), and deep learning (DL). AI refers to the broader concept of machines or systems being able to perform tasks that would typically require human intelligence. This can include tasks such as problem-solving, learning, perception, language understanding, and decision-making. AI encompasses a wide range of techniques, algorithms, and approaches aimed at creating intelligent systems that can mimic or simulate human cognitive abilities (Janiesch, Zschech & Heinrich, 2021). ML is a subset of AI that focuses on developing algorithms and techniques that allow computers to learn and make predictions or decisions based on data. Instead of being explicitly programmed to perform a specific task, ML algorithms learn from examples and experience, adjusting their parameters automatically to improve performance over time. In essence, ML algorithms enable computers to recognize patterns in data and make decisions or predictions without being explicitly programmed for every scenario (Janiesch, Zschech & Heinrich, 2021). DL is a subset of ML that uses neural networks with many layers (hence the term "deep") to learn from substantial amounts of data. These neural networks are inspired by the structure and function of the human brain, with interconnected nodes (neurons) organized in layers. DL algorithms are particularly effective at automatically extracting features from raw data, which makes them well-suited for tasks such as image recognition, natural language processing, and speech recognition (Janiesch, Zschech & Heinrich, 2021).

While all three technologies are essential in healthcare, DL, with its ability to handle complex, unstructured data and its remarkable performance in tasks such as medical imaging analysis, has gained significant prominence in recent years (Castiglioni et al., 2021). However, the choice of technology depends on the specific needs and requirements of each healthcare application, and often a combination of AI, ML, and DL techniques is used to address diverse challenges in healthcare delivery, diagnosis, treatment, and research (Ahmed et al., 2020). In medicine, there is an increasing presence of ML and DL systems known as AI foundation models, which are capable of predicting what comes next from existing data (Moor et al., 2023). One example of such software is the Mount Sinai healthcare research system, which enables the early detection of COVID-19 (see bmeiiadmin, 2020). Additionally, AI is present in the healthcare sector through the concept of digital health, which aims to implement models that enhance the medical decision-making process and thus improve human decision-making (Bennett & Hauser, 2013).

Currently, various forms of AI, ML, and DL in medicine are being utilized in the following five areas: 1) patient messaging and communication; 2) prediction models; 3) summarization of patient data and medical history; 4) virtual scribes; and 5) radiology imaging (Naik et al., 2024). These areas facilitate and expedite the handling and treatment process but simultaneously pose certain risks of compromising personal data privacy, misuse of personal data, and discrimination.
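How such a discrimination risk can arise is easiest to see in a deliberately simplified, hypothetical sketch: if a non-clinical attribute such as the submitting site happens to correlate with the outcome in the training data, a naively trained model will exploit it as a shortcut. The dataset, variable names, and model choice below are invented for illustration and do not describe any of the systems cited in this paper.

```python
# Hypothetical illustration only: a toy dataset in which the submitting
# site correlates with the outcome label. Nothing here refers to a real
# clinical system or real patient data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# "site" is a non-clinical feature (0 or 1); in this toy data it is
# strongly correlated with the label, e.g. because one centre happens
# to treat more advanced cases.
site = rng.integers(0, 2, size=n)
biology = rng.normal(size=n)  # a genuine clinical signal
label = (0.5 * biology + 2.0 * site
         + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

X_with_site = np.column_stack([biology, site])
X_biology_only = biology.reshape(-1, 1)

clf_shortcut = LogisticRegression().fit(X_with_site, label)
clf_clinical = LogisticRegression().fit(X_biology_only, label)

# The model given access to "site" leans heavily on it, which is the
# kind of shortcut that can disadvantage patients from particular centres.
print("coefficients with site feature:", clf_shortcut.coef_)
print("coefficient, biology only:     ", clf_clinical.coef_)
```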
For example, researchers at the University of Chicago found that computer programs trained to recognize cancer from large sets of data can also identify where the data came from.1 These programs learn to spot patterns in cancer images to make predictions about patients' outcomes. However, they sometimes take a shortcut by grouping patients based on where the data was submitted from, rather than looking at each patient's unique biology. This could create problems because it might overlook the needs of patients from minority groups who are more likely to go to certain medical centers. They might not get the right treatment because the program assumes they are similar to other patients from the same location (Hoffman & Podgurski, 2019).

1 For further information, see University of Chicago Medical Center (2021).

Here, a critical distinction must be drawn between AI as a tool requiring human oversight and fully autonomous AI systems, as this differentiation significantly impacts the attribution of liability. Presently, most AI systems in healthcare function as advanced tools that assist medical professionals by providing data-driven insights, diagnostic suggestions, or procedural guidance. In such cases, the standard of care and responsibility remain firmly grounded in human oversight. Healthcare providers retain full accountability for decisions and outcomes, as they are expected to validate AI-generated recommendations and ensure that AI systems operate correctly and within their intended scope (Gerke, Minssen & Cohen, 2020). In contrast, the emergence of fully autonomous AI systems, capable of making independent decisions without human intervention, presents a paradigm shift in liability frameworks. Such systems raise questions about the sufficiency of existing legal models, which are predominantly human-centered. In scenarios involving autonomous AI, liability may need to be redistributed across a broader spectrum, including AI developers, manufacturers, and regulatory bodies. This shift would require revisiting the traditional standard of care to accommodate the unique operational capabilities and risks associated with autonomous AI. The current research underscores the importance of preemptively addressing these distinctions to ensure legal clarity and maintain patient safety as AI technologies continue to evolve (Schweikart, 2021). It is crucial to note, however, that there are currently no models in medicine that would completely usurp the role of humans, meaning that medical practice remains predominantly human-oriented, and the standard of care is still the responsibility of humans and healthcare institutions.

3 Criminal Liability for Malpractice in AI Context

We will divide the following discussion into two parts. The first part briefly explains the theoretical framework, encompassing the main features of existing concepts of criminal liability for medical malpractice. Understanding these features is important in order to assess this paper's hypothesis that existing criminal malpractice law is sufficient to provide effective protection in AI-involved cases. The second part describes the facts of a recent healthcare practice case in the USA and evaluates the hypothesis against those facts.

3.1 Theoretical Framework

First, it is important to note that some authors have already examined how the use of AI in medicine is reshaping medical malpractice and tort law.
This literature highlights AI's potential to enhance healthcare, specifically through machine learning algorithms that analyze vast datasets to inform clinical decisions. This literature also underscores challenges in assigning liability due to AI's "black-box" nature, which obscures its decision-making processes. Traditional tort paradigms, such as negligence, product liability, and informed consent, struggle to address AI's complexities. Other studies explore how evolving legal frameworks might address these issues, proposing solutions such as recognizing AI as a legal entity, establishing joint liability for developers, and adapting the standard of care to include AI usage. Ultimately, these scholars stress the need to balance promoting AI's benefits with safeguarding patient rights as liability frameworks evolve to accommodate this transformative technology (Schweikart, 2021).

Other scholars emphasize the necessity for specialized legislation in this domain, arguing that traditional legal frameworks are insufficient to address the complexities of AI in healthcare. The unique nature of AI, particularly its reliance on "black-box" algorithms, creates significant challenges in attributing liability when harm occurs. These scholars contend that current legal doctrines, such as negligence and product liability, are often inadequate for evaluating cases involving AI, as they rely on the ability to trace decision-making processes, a requirement that is frequently impossible to satisfy with opaque AI systems. Additionally, questions arise about who should be held responsible in scenarios where AI systems malfunction or provide incorrect recommendations. Liability could potentially rest individually or jointly on developers, healthcare providers, or even manufacturers of the hardware used to support AI systems. However, traditional tort law struggles to accommodate the distributed nature of AI development, which often involves multiple entities contributing to different components of the system. This lack of clear accountability could result in significant gaps in patient protection and compensation. Some scholars propose frameworks that incorporate elements of joint or shared liability to address these gaps. For example, theories of common enterprise liability suggest holding all entities involved in the development and deployment of AI jointly responsible for any harm caused (Almemari, Al-Enizi & Madi, 2024). Others advocate for the establishment of new legal standards that consider the unique capabilities and limitations of AI, such as requiring physicians to validate AI recommendations or ensuring that developers adhere to rigorous data quality and transparency standards during the design phase (Park, Choi & Byeon, 2021). Moreover, ethical considerations intersect with these legal challenges, particularly in ensuring that AI technologies do not exacerbate existing inequalities in healthcare. Poorly designed algorithms, developed using non-representative data, risk introducing biases that disproportionately harm certain populations. Legal reforms must therefore include mechanisms to ensure fairness, transparency, and accountability in AI deployment. A growing consensus highlights the need for a specialized regulatory framework that goes beyond traditional tort law to govern the use of AI in healthcare.
Such frameworks could mandate independent validation of AI systems, establish more precise guidelines for informed consent when AI is used, and create liability insurance requirements for developers. These measures would help balance innovation with patient safety, ensuring that the benefits of AI can be realized without compromising ethical or legal standards.

The basis for further elaboration is the concept of medical error in criminal law, which needs to be clearly defined and distinguished from the concept of complication. A medical error in the descriptive sense is considered to be every procedure (act or omission) by a medical professional that deviates from the accepted rules of the medical profession and leads to unwanted health consequences for the patient. In contrast, a complication is considered to be an unwanted consequence of a medical procedure which, however, is performed in accordance with the rules of the profession and the necessary degree of care (Korošec, 2016). Thus, the constitutive premise for criminal liability is to establish a breach of the rules of the specific medical profession (the standard of care), whether written or unwritten. This is the starting point for proving the guilt of the perpetrator. The next requirement is determining the causality between such a breach and the consequences. After both objective elements have been established, it is necessary to prove the subjective element, which consists of the existence of a suitable form of guilt (Mrčela & Vuletić, 2017).

Criminalization of medical error is typical of the legal systems of continental Europe. This approach differs from the approach in the US, where malpractice cases are usually addressed within the framework of civil liability. However, it is important to note that in the US, healthcare is largely part of the private sector. Conversely, in Europe, healthcare is a significant pillar of the public sector, which leads to a different approach to malpractice. Therefore, it is not uncommon in European countries for criminal proceedings to be pursued alongside civil litigation in cases of medical errors. While approaches differ from one legal system to another, it is common to assess criminal liability through the criterion of the standard of care, i.e., the violation of such a standard by a physician that then results in unwanted consequences for the patient's health.

Some authors also examine the integration of robotics and AI into healthcare, emphasizing the legal challenges surrounding medical malpractice within the European regulatory framework. These authors advocate for the development of specialized legislation to address the complexities introduced by AI-mediated decision-making in medical practice. They argue that current European legal systems, while offering some degree of standardization, remain insufficient in clarifying the allocation of liability when autonomous AI systems contribute to adverse medical outcomes. Specifically, the authors highlight the potential for inequitable attribution of responsibility among healthcare professionals, institutions, and AI developers, given the opaque nature of AI operations and decision-making processes.
To mitigate these issues, the authors propose a legal framework tailored to the distinct characteristics of robotics and AI in healthcare, including clear guidelines on liability distribution, the recognition of AI's unique operational risks, and mechanisms to protect patients' rights. By advocating for this legislative evolution, the authors aim to foster legal certainty and accountability while promoting the ethical and effective deployment of AI technologies in modern medicine (De Micco et al., 2024). This paper focuses specifically on criminal liability, which will be discussed below.

In England, the most severe cases of medical errors are criminally prosecuted under the legal qualification of gross negligence manslaughter. The culpability aspect of gross negligence implies that the accused seriously breached the duty of care required of them. The assessment criteria were established in the 1994 case of R v Adomako (R v Adomako [1995] 1 AC 171 (HL)),2 in which an anesthesiologist was accused of failing to notice the disconnection of one of the tubes regulating the patient's airflow during surgery, resulting in a fatal outcome. According to these standards, criminal liability requires the cumulative fulfillment of the following elements: a serious breach of duty (the standard of care), conduct significantly departing from that of a reasonable doctor with the same competencies and experience, and the commission of one of four possible types of professional errors (Hubbeling, 2010).

A somewhat different model is employed by legal systems that include a specific criminal offense (often referred to as medical malpractice or similar). For example, the Croatian Criminal Code regulates this area within a specialized section (offenses against human health) and a specific criminal offense of careless medical treatment (Croat. nesavjesno liječenje; Article 181 CC). Such an approach is typical for countries of the former Yugoslavia and can also be found, for instance, in the criminal laws of Slovenia (Jakulin, 2020) and Serbia (Ćirić & Pajtić, 2019). Under Croatian law, establishing criminal liability requires proof that a medical professional acted in a manifestly careless manner, meaning that they committed a particularly severe violation of professional standards (the standard of care), significantly deviating from the norms of the relevant medical field. As forms of such breaches of professional standards, the law lists the application of obviously unsuitable means or methods of treatment, some other obvious failure to adhere to professional rules, or otherwise manifestly careless conduct. Furthermore, it is essential to demonstrate that such a violation led to a foreseeable consequence, resulting in harm to health, aggravation of an illness or, in the most severe cases, the death of the patient (Vuletić, 2019).

2 Available at: https://vlex.co.uk/vid/r-v-adomako-793554125 (30 January 2025).

Therefore, we may conclude that, although different legal systems regulate this issue differently, the common denominator is the criterion of breaching the standard of care as a necessary prerequisite for criminal liability. The standard of care is based on the conduct and diligence of an average person, or the conduct of an average professional with the same characteristics as the perpetrator in the same or comparable situation.
In other words, the standard of care criterion is human-centered, not machine-centered. AI software is legally considered only a tool under the control of the health professional (Gerke, Minssen & Cohen, 2020).

3.2 A Case Study: How AI Can Affect Criminal Liability for Malpractice

An incident that occurred at Evanston Hospital in Illinois is illustrative of a situation in which technology involved in treatment led to errors and negative consequences for the patient's health. It should be noted that the case ultimately did not end up in the judicial system, as no charges were filed and no lawsuit was initiated. However, the facts are remarkably interesting in the context of the discussion in this text, so we will use them to test the thesis.

Two patients experienced overdoses due to a malfunction in a medical device called a linear accelerator, used for a treatment called stereotactic radiosurgery (SRS). SRS is a type of radiation therapy designed to precisely target small tumors or abnormalities in the brain or spinal cord while minimizing damage to surrounding tissue. The device, manufactured by Varian Medical Systems, had been modified to perform SRS in addition to its standard radiation therapy function. However, these modifications led to communication problems between electronic components, causing serious safety issues (see Bogdanich & Rebelo, 2010). In both incidents at Evanston, the linear accelerator allowed radiation to leak outside a protective cone attachment meant to focus the radiation beam. This leakage occurred because the beam was four times larger than it should have been, resulting in healthy tissue being irradiated along with the targeted area. The device's design concealed the error from operators, as the settings were not clearly displayed on the computer screen, and the metal tray covering the cone's jaws prevented visual inspection. The errors stemmed from the complex nature of adapting linear accelerators for SRS, where the beam can be shaped using either computer-controlled leaves or a cone attachment. In this case, the cone attachment failed to contain the beam within its circumference, causing radiation to escape through the corners of the jaws and affect unintended areas. Despite efforts to ensure accuracy in SRS, the lack of necessary safety features and communication failures between components led to these dangerous incidents (Bogdanich & Rebelo, 2010). Overall, the accidents highlight the importance of thorough safety protocols and communication in medical device design and operation, especially when dealing with highly concentrated and intense forms of radiation therapy such as SRS. Such incidents underscore the need for ongoing vigilance and improvements in technology to minimize the risk to patients undergoing these treatments.

Determining whether this case constitutes a breach of the standard of care, or malpractice, would require a thorough investigation and assessment by legal and medical experts. However, based on the information provided, there are elements that could suggest a potential breach of the standard of care. The fact that the linear accelerator malfunctioned, allowing radiation to leak outside the intended treatment area, indicates a failure in the proper functioning of the medical device.
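In software terms, the safeguard the published account suggests was missing is a simple pre-treatment interlock. The sketch below is purely hypothetical (class names, fields, and the tolerance value are invented, and it does not describe the Varian system at issue); it only illustrates the kind of check that compares the planned field with the mounted cone, surfaces the settings to the operator, and blocks delivery when they do not match.

```python
# Hypothetical sketch of a pre-treatment software interlock. All names
# and values are invented for illustration and do not describe any real
# radiotherapy system.
from dataclasses import dataclass

@dataclass
class PlannedBeam:
    field_diameter_mm: float      # beam field size requested by the plan

@dataclass
class ConeAttachment:
    aperture_diameter_mm: float   # physical opening of the mounted cone

def beam_within_cone(plan: PlannedBeam, cone: ConeAttachment,
                     tolerance_mm: float = 0.5) -> bool:
    """Return True only if the planned field fits inside the cone aperture."""
    return plan.field_diameter_mm <= cone.aperture_diameter_mm + tolerance_mm

def verify_before_delivery(plan: PlannedBeam, cone: ConeAttachment) -> None:
    # Refuse to proceed, and make the actual settings visible to the
    # operator, instead of silently delivering a field larger than the cone.
    if not beam_within_cone(plan, cone):
        raise RuntimeError(
            f"Interlock: planned field {plan.field_diameter_mm} mm exceeds "
            f"cone aperture {cone.aperture_diameter_mm} mm; delivery blocked."
        )

# Example: a field four times larger than the cone aperture is blocked here.
try:
    verify_before_delivery(PlannedBeam(field_diameter_mm=40.0),
                           ConeAttachment(aperture_diameter_mm=10.0))
except RuntimeError as err:
    print(err)
```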
Additionally, the lack of necessary safety features, such as those that could have prevented radiation leakage, raises questions about the adequacy of the equipment and its maintenance. Furthermore, the inability of operators to detect the error due to the design of the machine suggests a failure in the system for monitoring and ensuring treatment accuracy. Considering that the machine had been modified to perform SRS, several decisive facts would need to be established at various levels to assess the potential liability of the individuals operating the machine. First, whether they knew or should have known that the machine was modified. If the answer is affirmative, whether they were or should have been aware of the implications of that modification, and especially whether they ensured that, even with that modification, the machine could operate without danger to the patient; for instance, whether they checked or tested the machine before use. If they knew about the modification but took no action to ensure beforehand that the machine was fit for use without harm to the patient, then their criminal liability could be discussed. This is particularly true if it was their first time using such a modified machine, as it is a reasonable assumption that they should have checked settings that were not visible on the computer screen. The mere fact that settings were not visible should have been a cause for alarm, because without visible (safety) settings the operator should not operate the modified machine.

If it can be shown that these issues resulted from negligence, oversight, or failure to follow established protocols, it could be argued that the healthcare providers involved did not meet the standard of care expected in their field. Ultimately, whether this constitutes malpractice would depend on various factors, including the specific circumstances of the case, the applicable legal standards, and any evidence of negligence or deviation from accepted medical practices. An in-depth analysis by legal and medical professionals would be necessary to make a definitive determination. However, what needs to be emphasized here is that this concerns the misapplication of the standard of care from a human perspective, as, ultimately, humans remain responsible for the proper functioning of the system. The individual cannot be absolved of criminal liability based on AI system interference, because they are responsible for maintaining the proper functioning of that system at all times. Every AI is "only as good as the humans programming it and the system in which it operates" (Kocher & Emanuel, 2019). In this sense, we agree with the assertion that only "once ML diagnosticians…are shown to be superior, existing medical malpractice law will require superior ML-generated medical diagnostics as the standard of care in clinical settings" (Froomkin et al., 2019).

4 Conclusion

This paper addresses current criminal law trends regarding the integration of AI into healthcare, with a focus on the issue of criminal liability for medical malpractice. We conclude that existing models of such liability currently meet the needs of practice, because current AI, ML, and DL systems used in healthcare have not yet come to dominate human decision-making. Accordingly, the standard of care, as a key and arguably universal criterion for assessing criminal liability, still implies the standard of care applied by humans (rather than machines).
It seems that there will not be significant changes in the criminal liability paradigm in the near future. Therefore, the role of AI is to assist and facilitate, not to take over treatment (or responsibility for treatment). The fundamental reason lies in the fact that medical procedures are still perceived as distinctly human skills.

The need for a new system of criminal liability can arise only if healthcare becomes AI- or ML-centered in the future. In that case, it will be necessary to seek new models of liability. We believe that, in such cases, it would be crucial to distinguish between medical procedures where AI is engaged because it performs actions beyond human capability (such as those requiring precision or speed unachievable by a human) and cases where AI is used to expedite processes, reduce costs, and enhance efficiency. In the former, AI engagement serves the interest of patients' health, and the responsibility of the individuals behind the AI system should cease once the AI is properly tested and approved by the relevant authorities. In the latter scenario, however, human health becomes secondary, with profit or cost reduction being the primary objectives. Therefore, in such instances, liability should be introduced, through new offenses of abstract endangerment, for merely deploying AI into use (even if duly certified), with an additional objective criterion for penalization: that the AI made an error and endangered or harmed the health or life of a patient. In this sense, it would represent a specific form of prior culpability, or culpability due to assuming a risk that was not assumed solely in the interest of safeguarding public goods. In our view, only such a model would be optimal in preventing criminal responsibility from becoming entirely unprovable in practice. This becomes even more pertinent if, in the future, AI gains more authority and a higher degree of autonomy in performing routine medical treatments and procedures.

Acknowledgement
The research for this paper was conducted within ‘Artificial Intelligence and Criminal Law (IP-PRAVOS-18)’, a project funded by the Faculty of Law Osijek.

References
Ahmed, Z., Mohamed, K., Zeeshan, S., & Dong, X. (2020). Artificial intelligence with multi-functional machine learning platform development for better healthcare and precision medicine. Database, 2020, 1–35.
Almemari, A., Al-Enizi, Z., & Madi, R. (2024). Establishing Liability in Medical Malpractice Due to Artificial Intelligence and Robotics Based Diagnostic and Therapeutic Interventions. 2024 Global Digital Health Knowledge Exchange & Empowerment Conference (gDigiHealth.KEE), Abu Dhabi, United Arab Emirates, pp. 1–9. https://doi.org/10.1109/gDigiHealth.KEE62309.2024.10761723
Bennett, C. C., & Hauser, K. (2013). Artificial intelligence framework for simulating clinical decision-making: A Markov decision process approach. Artificial Intelligence in Medicine, 57(1), 9–19.
bmeiiadmin (2020). Mount Sinai First in U.S. to Use Artificial Intelligence to Analyze Coronavirus (COVID-19) Patients. Retrieved from: https://bmeiisinai.org/2020/06/mount-sinai-first-in-u-s-to-use-artificial-intelligence-to-analyze-coronavirus-covid-19-patients/ (January 7, 2025).
Bogdanich, W., & Rebelo, K. (2010). A Pinpoint Beam Strays Invisibly, Harming Instead of Healing. The New York Times, Dec. 28, 2010.
Retrieved from: https://www.nytimes.com/2010/12/29/health/29radiation.html (January 7, 2025).
Camarillo, D. B., Krummel, T. M., & Salisbury, J. K. (2004). Robotic technology in surgery: Past, present, and future. The American Journal of Surgery, 188(4), 2.
Castiglioni, I., Rundo, L., Codari, M., Di Leo, G., Salvatore, C., Interlenghi, M., Gallivanone, F., Cozzi, A., D'Amico, N. C., & Sardanelli, F. (2021). AI applications to medical images: From machine learning to deep learning. Physica Medica, 83, 9–24.
Ćirić, J., & Pajtić, M. (2019). Lekarske greške – od zaboravljene gaze do izvađenog plućnog krila. In I. Stevanović & N. Vujičić (Eds.), Kazneno pravo i medicina (p. 219). Institut za kriminološka i sociološka istraživanja.
Dekker, S. (2007). Criminalization of medical error: Who draws the line? ANZ Journal of Surgery, 77, 831–837.
Dekker, S. (2011). The criminalization of human error in aviation and healthcare. Safety Science, 49, 121–127.
De Micco, F., Grassi, S., Tomassini, L., Di Palma, G., Ricchezze, G., & Scendoni, R. (2024). Robotics and AI into healthcare from the perspective of European regulation: who is responsible for medical malpractice? Frontiers in Medicine, 11.
Froomkin, A. M., Kerr, I. R., & Pineau, J. (2019). When AIs Outperform Doctors: Confronting the Challenges of a Tort-Induced Over-Reliance on Machine Learning. Arizona Law Review, 61, 33–99.
Gerke, S., Minssen, T., & Cohen, G. (2020). Ethical and legal challenges of artificial intelligence-driven healthcare. In A. Bohr & K. Memarzadeh (Eds.), Artificial Intelligence in Healthcare (pp. 295–336). Academic Press.
Griffin, F. (2021). Artificial Intelligence and Liability in Health Care. Health Matrix: Journal of Law-Medicine, 31, 65–106.
Hoffman, S., & Podgurski, A. (2019). Artificial intelligence and discrimination in health care. Yale Journal of Health Policy, Law, and Ethics, 19, 1–49.
Hubbeling, D. (2010). Criminal prosecution for medical manslaughter. Journal of the Royal Society of Medicine, 103, 216–218.
Jakulin, V. (2020). Criminal Offences against Public Health under the Criminal Code of the Republic of Slovenia. Medicine, Law & Society, 13(1), 45–66.
Janiesch, C., Zschech, P., & Heinrich, K. (2021). Machine learning and deep learning. Electronic Markets, 31, 685–695.
Kinoshita, T., & Komatsu, M. (2023). Artificial Intelligence in Surgery and Its Potential for Gastric Cancer. Journal of Gastric Cancer, 23(3), 400–409.
Kocher, B., & Emanuel, Z. (2019). Will robots replace doctors? Brookings. Retrieved from: https://www.brookings.edu/articles/will-robots-replace-doctors/ (January 30, 2025).
Korošec, D. (2016). Criminal Law Dilemmas in Withholding and Withdrawal of Intensive Care. Medicine, Law & Society, 9(1), 21–39.
Moor, M., Banerjee, O., Abad, Z. S. H., et al. (2023). Foundation models for generalist medical artificial intelligence. Nature, 616, 259–265.
Mrčela, M., & Vuletić, I. (2017). Granice nehajne odgovornosti za kazneno djelo nesavjesnog liječenja. Zbornik radova Pravnog fakulteta u Splitu, 54(3), 685–704.
Naik, K., Goyal, R. K., Foschini, L., Chak, C. W., Thielscher, C., Zhu, H., Lu, J., Lehár, J., Pacanowski, M. A., Terranova, N., Mehta, N., Korsbo, N., Fakhouri, T., Liu, Q., & Gobburu, J. (2024). Current Status and Future Directions: The Application of Artificial Intelligence/Machine Learning for Precision Medicine. Clinical Pharmacology & Therapeutics, 115(4), 673–686.
Park, S. H., Choi, J., & Byeon, J. S. (2021).
Key principles of clinical validation, device approval, and insurance coverage decisions of artificial intelligence. Korean Journal of Radiology, 22(3), 442–453.
Pranka, D. (2021). The Price of Medical Negligence – Should it Be Judged by the Criminal Court in the Context of the Jurisprudence of the European Court of Human Rights? Baltic Journal of Law & Politics, 14(1), 124–152.
R v Adomako [1995] 1 AC 171 (HL).
Rotim, K., Splavski, B., & Vrban, F. (2021). The Safety and Efficacy of Robot-Assisted Stereotactic Biopsy for Brain Glioma: Earliest Institutional Experiences and Evaluation of Literature. Acta Clinica Croatica, 60(2), 296–302.
Schweikart, S. J. (2021). Who will be liable for medical malpractice in the future? How the use of artificial intelligence in medicine will shape medical tort law. Minnesota Journal of Law, Science and Technology, 22(2), 1–22.
Syed, A. B., & Zoga, A. C. (2018). Artificial Intelligence in Radiology: Current Technology and Future Directions. Seminars in Musculoskeletal Radiology, 22(5), 540–545.
University of Chicago Medical Center (2021). Artificial intelligence models to analyze cancer images take shortcuts that introduce bias. ScienceDaily, 22 July 2021. Retrieved from: www.sciencedaily.com/releases/2021/07/210722113043.htm (January 27, 2025).
Vuletić, I. (2019). Medical Malpractice as a Separate Criminal Offense: a Higher Degree of Patient Protection or Merely a Sword Above the Doctors' Heads? The Example of the Croatian Legislative Model and the Experiences of its Implementation. Medicine, Law & Society, 12(2), 39–60.

Povzetek v slovenskem jeziku
Članek obravnava prepletanje umetne inteligence (UI) in kazenske odgovornosti v zdravstvenem sektorju, zlasti v smislu zdravniških napak. Študija z multidisciplinarnim pristopom ocenjuje sedanje pravne okvire in njihovo ustreznost pri obravnavi odgovornosti za napake, ki vključujejo na umetni inteligenci temelječe medicinske sisteme. Skozi analizo pravne teorije, študije primerov in tehnološko integracijo, raziskava izpostavlja kompleksnost določanja odgovornosti pri napakah, ki nastanejo pri medicinskem odločanju s pomočjo UI. Metodologija vključuje primerjalno pravno analizo in podroben pregled dejanskega primera, ki vključuje napake nastale pri zdravljenju z UI. Ugotovitve razkrivajo, da so obstoječi pravni okviri sicer zadostni za uveljavljanje odgovornosti ljudi v skladu s standardom oskrbe, da pa se vendarle spopadajo z edinstvenimi izzivi, ki jih predstavlja „črna škatla“ UI. Študija trdi, da je treba modele odgovornosti še dodatno izpopolniti, zlasti ker sistemi UI pridobivajo vse večjo avtonomijo. Dokument se zaključi s predlogom načrta za uravnoteženje inovacij na področju UI z nujno obveznostjo zaščite pacientovih pravic, pri čemer je poudarjeno, da se morajo okviri odgovornosti razvijati vzporedno s tehnološkim napredkom.