83 2591-2259 / This is an open access article under the CC-BY-SA license https://creativecommons.org/licenses/by-sa/4.0/ DOI: 10.17573/cepar.2024.2.04 1.01 Original scientific article The Role of Automated Decision- Making in Modern Administrative Law: Challenges and Data Protection Implications Grega Rudolf Information Commissioner, Republic of Slovenia University of Ljubljana, Faculty of Law, Slovenia (PhD student) grega.rudolf@ip-rs.si https://orcid.org/0000-0001-9449-6905 Polonca Kovač University of Ljubljana, Faculty of Public Administration, Slovenia polonca.kovac@fu.uni-lj.si http://orcid.org/00-0002-7743-0514 Received: 24. 9. 2024 Revised: 17. 10. 2024 Accepted: 24. 10. 2024 Published: 27. 11. 2024 ABSTRACT Purpose: The integration of artificial intelligence (AI) in automated de- cision-making (ADM) represents a transformative moment in public ad- ministration. This paper explores the incorporation of ADM systems into administrative procedures, focusing on their impact on personal data pro- tection and the fundamental principles underpinning administrative law. Design/Methodology/Approach: Utilising a combination of descriptive, normative, and doctrinal research methods, the study draws on recent regulatory initiatives, analyses selected ADM use cases in Slovenia and abroad, and closely examines the 2023 Schufa case decided by the Court of Justice of the European Union (CJEU). By combining theoretical per- spectives with practical insights, the research provides a comparative analysis within the context of EU and Slovenian legal frameworks. Findings: The study assesses how ADM systems interact with, and poten- tially reshape, key principles of administrative and data protection law. It presents a clear picture of the legislative, organisational, and technologi- cal changes required to ensure that ADM systems align with existing legal frameworks. Academic Contribution to the Field: By offering valuable guidance for public administration professionals, the paper enhances the understand- ing of implementing ADM technologies in administrative practice. Its Rudolf, G., Kovač, P. (2024). The Role of Automated Decision-Making in Modern Administrative Law: Challenges and Data Protection Implications. Central European Public Administration Review, 22(2), pp. 83–108 Central European Public Administration Review, Vol. 22, No. 2/2024 84 Grega Rudolf, Polonca Kovač insights assist policymakers and legislators in crafting regulations that embrace the benefits of AI while ensuring these systems are subject to proper oversight. Research/Practical/Social Implications: The deployment of ADM sys- tems must align with legal principles to maintain transparency, account- ability, and the protection of fundamental rights. This paper highlights the importance of not only understanding the legal implications but also ensuring that ADM technologies uphold standards of good governance. Originality/Value: This research extends the boundaries of established legal frameworks and raises critical questions about how core principles of administrative and data protection law can adapt to new technologies. The challenge lies in leveraging AI to increase efficiency while ensuring these innovations respect individual rights, safeguard the public interest, and uphold standards of good administration and governance. 
Keywords: administrative law, administrative procedures, artificial intelligence, automated decision-making, good administration, legal principles, personal data protection JEL: K23 1 Introduction The increasing incorporation of artificial intelligence (AI) in public administra- tion, particularly through automated decision-making systems, marks a criti- cal juncture in the evolution of public governance. These AI technologies, de- signed to streamline administrative processes and improve decision-making accuracy, significantly alter how data is being processed and managed, funda- mentally reshaping the administrative landscape, both in quantity of informa- tion and speed by which information can be processed (Galetta and Hofmann, 2023). Yet, this transformation raises essential legal and ethical questions. At the core of this shift is the challenge of balancing technological progress with the protection of fundamental rights and adherence to established principles of administrative (procedural) law, the rule of law, and personal data protec- tion (Galetta and Hofmann, 2023; Enqvist and Naarttijärvi, 2023). As AI ac- celerates the pace of information processing, it is imperative that legal safe- guards, such as the principles of legality, proportionality, and participation, remain intact to guide this digital transformation responsibly. While the European Union has made strides in adapting its legal framework to modern technological developments, particularly through instruments like the General Data Protection Regulation (GDPR) 1 and the Artificial Intelligence Act (AI Act), 2 a significant gap persists in integrating these regulations with- 1 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ L 119. 2 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) Central European Public Administration Review, Vol. 22, No. 2/2024 85 The Role of Automated Decision-Making in Modern Administrative Law: Challenges and Data Protection Implications ing the remit of administrative (procedural) law. There remains a parallelism rather than an integration between the fundamental principles of administra- tive (procedural) law and the automation of administrative decision-making. This disconnect is partly due to multilevel regulation at both EU and national levels, whereas the administrative procedures are generally regulated by national administrative procedure acts (Dragos, 2023; Kovač, 2016), such as Slovenia’s General Administrative Procedure Act (GAPA), 3 while the GDPR is complemented by the national Personal Data Protection Act (PDPA-2). 4 The coexistence of these multi-layered legal frameworks demands a careful rec- onciliation to ensure that AI-driven decision-making processes not only com- ply with existing legal standards but also bolster the democratic principles that form the foundation of public administration. Although AI brings substantial opportunities for improving public govern- ance, it also introduces complex challenges that necessitate careful and vigi- lant oversight. 
Constant adaptation of both ethical standards and regulatory frameworks is vital to effectively manage these challenges. Balancing the po- tential benefits of AI with the need to maintain fundamental legal and demo- cratic values is critical—especially regarding administrative decision-making, which requires thorough regulation under administrative procedural law. Addressing these challenges is imperative to ensure that AI enhances public administration while respecting citizens’ rights and maintaining public trust. As digital transformation continues to unfold, it becomes increasingly neces- sary for legislators and policymakers to adapt legal frameworks in a way that keeps pace with technological advancements while ensuring robust protection of individual rights. A thoughtful and coherent alignment of AI technologies with the core legal principles is essential to foster a future where innovation enhances, rather than undermines, the integrity of administrative governance. A review of the studies carried out to date in this area shows that there has already been extensive research, particularly on the use of AI and its impact on the protection of personal data (Goldsteen et al., 2022; Hamon et al., 2022; Rhahla et al., 2021, etc.), fundamental elements of the rule of law and ad- ministrative law (Palmiotto, 2024; Ranchordas, 2024; Enqvist and Naarttijärvi, 2023; Carlsson, 2023; Finck, 2019; Reis et al., 2019), and administrative proce- dures (Galetta and Hofmann, 2023; Parycek et al., 2023; Carlsson, 2023; Grim- melikhuijsen, 2023). However, there is still limited research on the direct im- pact of AI on the principles of administrative procedures and the protection of personal data, their appropriate balancing, and the concrete implications of using automated decision-making in administrative procedures. In the con- text of personal data protection, balancing such rights with other fundamen- tal rights or principles, such as transparency, remains a constant challenge, highlighting collisions between interrelated constitutional rights (cf. Galetta et al., 2015; Kovač, 2022). From this perspective, finding the relevant balance 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelli- gence Act), OJ L. 3 Official Gazette of the RS, No. 80/99 and amendments. 4 Official Gazette of the RS, No. 163/22. Central European Public Administration Review, Vol. 22, No. 2/2024 86 Grega Rudolf, Polonca Kovač is crucial for the effective role of authorities and the proper conduct of ad- ministrative procedures. The aim of this paper is to fill this gap and examine in depth the multifac- eted interplay between administrative (procedural) law and the protection of personal data in the context of the integration of AI systems for automated decision-making in administrative procedures. The goal is to propose appro- priate ways and views on reconciling technological progress with legal regula- tion in this area. These findings may be useful for policymakers in the fields of data protection and administrative procedural law. The lessons learned, par- ticularly regarding the proposal to harmonise the regulatory framework, can serve as a basis for developing and understanding similar solutions in other comparable EU Member States. 2 Research Design, Questions and Methods Applied This study aims to assess the intersection of AI in automated decision-making with principles of administrative law and personal data protection. 
In order to do so, the research draws on a diverse range of methods, including descriptive, normative, and dogmatic approaches, supported by a comprehensive review of secondary and complementary literature and legal sources. The focus is on how AI impacts public administration, with particular attention to the implications for administrative procedural law and data protection principles. The methodology further incorporates content analysis, synthesis, compilation, and the axiological method, ensuring an in-depth exploration of the researched subject. Central to the research is an evaluation of the CJEU's decision in the Schufa case (CJEU, C-634/21, December 2023), which provides a relevant case study for examining the practical implications of AI in administrative decision-making.

In addressing the research questions, the paper adopts a triangulation approach, combining multiple perspectives and methodologies for enhanced objectivity. This involves the integration of literature analysis, case law evaluation, and comparative studies. The selected methods ensure a holistic understanding of how AI systems affect the core principles of administrative procedural law and data protection. The main research questions guiding this research are:
– How does the use of AI in automated decision-making impact core principles of administrative law and data protection within administrative procedures?
– To what degree do existing EU regulations, such as the GDPR, and national laws like Slovenia's PDPA-2 and GAPA, successfully uphold individual rights while ensuring efficient public administration?

The methodological framework for this research rests on a qualitative approach (see Figure 1), complemented by doctrinal and case law analysis. The study begins by defining the key concepts and principles affected by AI-driven automation and follows with a comparative evaluation of current regulatory frameworks in the EU and Slovenia. The research relies on credible academic sources, including peer-reviewed journals, legal monographs, case commentaries, and relevant case law. Sources were selected based on their relevance to the research objectives related to AI and administrative law, their contribution to understanding the intersection of AI and legal frameworks, and their potential to provide both theoretical and practical insights. Particular emphasis was given to materials that address current challenges and regulatory developments to ensure a comprehensive exploration of the topic.

Figure 1. Basic research steps with methods applied: (1) definition of the research problem (AI and ADM in administrative and data protection law), objectives, research questions and methods; (2) theoretical part: descriptive, normative-dogmatic analysis, legal framework review, and comparative method; (3) empirical part: case law analysis, content synthesis and compilation; (4) interpretation of results through the initial research questions (axiological-sociological method), definition of discussion points. Source: own.

Building on this foundation, the structure of the paper unfolds in a logical sequence. Section 3 provides a critical examination of the existing legal framework governing automated decision-making systems and their practical applications.
This sets the stage for the discussion of how these systems interact with established principles of administrative (procedural) law and personal data protection, including an analysis of the Schufa case in Section 4. Section 5 delves deeper into key constitutional principles that serve as safeguards for European democratic standards, while Section 6 shifts the discussion to potential developments in regulatory frameworks and risk management strategies for the responsible implementation of ADM in public administration. Lastly, the conclusion in Section 7 summarizes the key findings and suggests pathways for future research and regulatory reform.

Through a combination of normative and axiological methods, this study seeks to contribute to a deeper understanding of how AI reshapes legal principles and affects the balance between efficiency in public administration and the protection of individual rights.

3 AI in Administrative Procedures: An Overview of the Slovenian Legal Framework and Their Application

Slovenian administrative procedures, much like those in most EU countries, are governed by a combination of sector-specific laws and the General Administrative Procedure Act (GAPA), which functions as a subsidiary framework (lex generalis) to those sector-specific laws, except when it comes to fundamental principles. These foundational principles, rooted in constitutional guarantees, provide consistency across various administrative areas and authorities, serving as anti-fragmentation mechanisms (Kovač, 2022). Among these nine principles, some are considered sub-principles of others; for example, the assessment of evidence is part of the broader principle of substantive truth. The majority of these principles are also reflected in GAPAs in the broader region and are an integral part of good governance. The principle of legality stands out as particularly critical, supported by complementary principles such as decision-making autonomy, the right to be heard, the right to appeal, and the pursuit of substantive truth. Moreover, Slovenian administrative law not only follows national legal standards but is also aligned with European Union guidelines on good administration, especially regarding the emphasis on public participation, legal protection, and balancing the right to access information with data protection safeguards (Galetta, 2015; Kovač, 2016; Galetta and Hofmann, 2023; Roehl, 2023). At the EU level, these guarantees are codified in the Charter of Fundamental Rights, with Article 41 ensuring the right to good administration, which offers procedural rights such as the right to be heard, access to file, and the obligation for administrative bodies to provide reasons for their decisions.

Legal principles, in general, serve as value-based criteria that are drawn from legal theory, case law, and both constitutional and international guarantees. They guide the interpretation and application of codified legal rules, providing a framework for applying substantive law and ensuring proper interpretation of procedural discretion. These principles become particularly relevant when assessing the legality of administrative acts, where failure to adhere to them may form the basis for legal challenges.
In such regard, GAPA principles are instrumental in navigating the complex interplay between pub- lic interest and the legally protected interests of private parties to the proce- dure, ensuring that public interest is prioritized when conflicts arise. At the same time, these principles help uphold the fundamental rights of individuals, striking a balance that safeguards both public administration’s efficiency and individuals’ legal protections. As administrative procedures increasingly adapt to new technologies, such as AI-driven decision-making, these principles must continue to evolve to maintain their relevance in the face of modern chal- lenges to governance and personal data protection (on the relation between EU and national regulation in this area see Kovač, 2016). This is particularly relevant to the topic under consideration, as it requires individual countries to find a balance between general/supranational common regulations and the specifics of individual administrative traditions and areas, status of public ad- ministration, and the country’s political objectives. By highlighting key principles in Slovenian law and their relevance at the EU level, and while also noting their indirect link to automated administrative decision-making, we can draw the following table. The analysis indicates that the Slovenian GAPA aligns with all relevant standards of the CJEU case law and operationalises rights enshrined in the Constitution of the Republic of Slovenia (ConRS) 5. However, the hierarchy and significance of these principles can be unclear, leading to difficulties in their interpretation in practice, par- 5 Official Gazette of the RS, No. 33/91-I and amendments. Central European Public Administration Review, Vol. 22, No. 2/2024 89 The Role of Automated Decision-Making in Modern Administrative Law: Challenges and Data Protection Implications ticularly when conflicts arise, such as between access to personal data and privacy (as under the Freedom of Information Act, FOIA). 6 Table 1: Overview of administrative principles in CJEU case law and Slovenian general regulations Principles and fundamental rights according to CJEU Slovenian sources Rule of law, legality and protection of public interest, legal certainty, legitimate expectations ConRS (2, 3–, 15, 120, 153, 155, 158), GAPA (6, 7, 15, 42–55, 83, 237, 224, 225, 237, 282) Impartiality, equality ConRS (14, 22), GAPA (6, 12, 35–9, 237) Proportionality, fairness, due care ConRS (2–, 22, 23, 25), GAPA (7–) Right to be heard (fair hearing) and participatory democracy ConRS (3, 21, 22, 34, 44), GAPA (9, 146, 237) Access to (one’s) file ConRS (22), GAPA (82) Transparency, access to information ConRS (39), GAPA (82), FOIA Data protection and quality (18, 19) ConRS (38), GAPA (82, 74–80, 164–201), GDPR and PDPA-2 Reasons for decisions ConRS (22, 25), GAPA (214, 237) Reasonable time ConRS (23), GAPA (14, 222, 256) Effective remedy ConRS (25, 157), GAPA (13, 215, 229–81) Good administration Indirectly, throughout ConRS and GAPA Source: Kovač, 2022; Galetta et al., 2015. In this context, it is important to emphasise that the protection (and qual- ity) of personal data constitutes a fundamental principle of administrative law and a key pillar of European identity (more in Rudolf and Kovač, 2023). AI systems of automated decision-making that process personal data are sub- ject to the stringent rules of the GDPR. 
This includes the general principles of data protection outlined in Article 5 and the requirements for a relevant legal basis for processing specified in Article 6, in parallel with Article 9 in cases where special categories of data are processed. Notably, Article 22 of the GDPR guarantees data subjects the right not to be subject to a decision based solely on automated processing, including profiling, which produces le- gal effects concerning or significantly affects them. Additionally, Article 15(1) 6 This is also indicated by the case law of the Slovenian Supreme Court, e.g. case I Up 168/2017, 5 March 2019, stating that the principle of proportionality under the GDPR and the Consti- tution prevails over sector-specific laws, or case X Ips 4/2020, 27 May 2020, stating that the provisions of the sector-specific (criminal or administrative), albeit general procedural law, prevail over a systemic law on public information or the protection of personal data. Central European Public Administration Review, Vol. 22, No. 2/2024 90 Grega Rudolf, Polonca Kovač (h) of the GDPR provides data subjects with the right to obtain information about the logic involved in any automated decision-making process, as well as the significance and the envisaged consequences of such processing. These provisions underscore the importance of protecting personal data and ensure that such systems are deployed in accordance with the values of transpar- ency, fairness, and respect for individuals’ autonomy. Nevertheless, despite the relatively ambiguous legal framework surrounding the integration of AI systems into administrative decision-making, there are several examples of application of these systems both in Slovenia and internationally. The use of AI systems is already transforming public administration in Slove- nia and beyond, enhancing the efficiency of public tasks and services. While Slovenia remains relatively cautious compared to other countries (e.g. Den- mark, Finland, Hungary, Netherlands, Estonia, Spain, USA, cf. Kuziemski and Misuraca, 2020; Kovač, 2022; Ranchordas, 2024; della Cananea and Parona, 2024), there are notable examples of automated decision-making systems be- ing implemented. These systems are sometimes introduced with limited re- gard for, or even in defiance of, existing legal frameworks, with shortcomings often only becoming apparent when mistakes or abuses occur (Babšek and Kovač, 2023). In Slovenia, for example, the ‘e-Welfare’ system automates the processing of social benefits applications. The Slovenian Financial Administra- tion employs machine learning to detect tax evasion by analysing tax data for patterns of fraud. AI is also used in the allocation of agricultural subsidies. Additionally, chatbots are increasingly used to enhance public service delivery and citizen engagement through personalised virtual interactions. Mass tax and social procedures are particularly suited for automated decision- making due to their potential to improve efficiency, transparency, and equal- ity before the law. However, both in Slovenia and internationally, there have been significant issues associated with these practices, including violations of fundamental human rights due to discriminatory algorithms, lack of transpar- ency, inadequate legal safeguards during IT system failures and problems with accountability in multilevel decision-making processes, e.g. 
in the areas of mi- grations and asylum (see Palmiotto, 2024; Algorithm Watch, 2020; Tangi et al., 2022; Benjamin, 2023), social welfare (see Babšek and Kovač, 2023), employ- ment (see Kuziemski and Misuraca, 2020) etc. While the use of AI systems is diverse and innovative, it can also raise concerns regarding the principles of good administration, including adherence to the principles of proportionality, the right to be heard, the obligation to provide reasons for decisions, and, con- sequently, legal protection (Galetta and Hofmann, 2023; Ranchordas, 2024). 4 The Role of AI in Shaping Key Principles of Administrative Procedure and Data Protection The integration of AI into public administration marks a pivotal shift in the modernisation of public governance (Kovač, 2016; Reis et al., 2019; Galetta and Hofmann, 2023; Roehl, 2023). Although AI promises numerous advantag- es, such as reducing administrative barriers and accelerating administrative Central European Public Administration Review, Vol. 22, No. 2/2024 91 The Role of Automated Decision-Making in Modern Administrative Law: Challenges and Data Protection Implications decision-making processes, it also raises critical concerns about maintaining core administrative law principles. These include transparency in decision- making, fairness in interactions with parties to the procedure, and ensuring that individuals participation in processes driven by AI. The central aim of ad- ministrative procedures is to achieve a balance between public and private interests, safeguarding the rights of the weaker party in any given situation. That in mind, the introduction of AI into these processes, presents several challenges. One of the foremost issues is how to harness the potential of AI tools without disturbing the fine balance between public and private inter- ests. At the same time, it is vital to provide all participants as parties to the procedure with the procedural protections necessary to uphold their rights. It is not sufficient that authorities simply reach decisions; these decisions must also be accepted, and trusted by the affected persons. Given that administrative decision-making is inherently linked to individual- ised decision-making on an administrative matter, the use of AI in this context must ensure that, in addition to the principles of administrative law, also the principles concerning the protection of personal data are adhered to. These include core principles such as legality, fairness, transparency, purpose limita- tion, data minimisation, accuracy, storage limitation, integrity, confidentiality, and accountability. These rules are not just procedural requirements; they act as ethical standards guiding the responsible use of AI in automated decision- making. Adherence to these principles ensures that AI technologies uphold individual rights while complying with the legal safeguards and obligations in place. The convergence of these principles creates a complex framework that public administration must navigate to ensure fairness, accountability, and transparency in modern public governance. The deployment of AI in administrative procedures also touches upon deeper concerns regarding the rule of law, particularly the separation of powers. This principle, aimed at preventing the concentration of power by ensuring the in- dependent operation of legislative, executive, and judicial branches, becomes increasingly relevant as AI blurs the lines between these roles. 
Moreover, the delegation of tasks on AI supervision between EU and national authorities fur- ther underscores the importance of this principle, requiring clear boundaries and checks on the use of AI within administrative procedures (Benjamin, 2023). The role of AI in automating decision-making processes, traditionally the do- main of humans, blurs these divisions, raising issues of accountability and con- trol. The potential encroachment on the separation of powers intersects with the principle of legality. The non-transparency of AI makes it difficult to verify whether the results of these systems – whether used as recommendations or as legally binding sources – are valid. This compromises both the legitimacy of decisions and legal certainty of those affected (Grimmelikhuijsen, 2023; Gal- etta and Hofmann, 2023). The use of outdated or inaccurate data, including invalid legal frameworks or other inaccurate data, can lead to decisions that are not only inaccurate but de facto wrong, which undermines the principles of substantive truth and (substantive) legality. When using AI, any inaccuracy in input data is even more critical, as it can cause a chain reaction of wrong de- Central European Public Administration Review, Vol. 22, No. 2/2024 92 Grega Rudolf, Polonca Kovač cisions based on inaccurate assumptions. This raises the question of whether accuracy should be prioritised over the explainability of the results. The abil- ity of AI to learn dynamically and produce new or different results each time leads to unpredictability, thus undermining legal certainty and the reliability of authoritative decisions that the principle of trust in the law is supposed to guarantee (Carlsson, 2023; Cetina Presuel and Martinez Sierra, 2022). In addition, AI’s capacity to interpret evidence based on predefined algorithms and patterns can significantly limit the scope of human discretion in assessing and evaluating evidence in administrative procedure, thus compromising the principle of the free assessment of evidence, which requires consideration of nuances and context provided by human judgement. Paramount in such respect, particularly regarding the processing of personal data, is the principle of legality. According to this principle, AI technologies must operate within the legal constraints and bases outlined in Articles 6 and 9 of the GDPR. Ensuring that AI systems – which often process personal data in complex and sometimes non-transparent ways – comply with the legal re- quirements is essential to protect the rights of individuals and maintain trust in both decision-makers and AI systems (Grimmelikhuijsen, 2023). In this con- text, legality acts as a safeguard to ensure that personal data is not used ar- bitrarily or without a clear legal basis. The challenges concerning legality and legal certainty are also closely linked to the risks that the use of AI systems entails in terms of equality before the law. Algorithmic biases can lead to dis- criminatory outcomes, thus undermining the principle of equality before the law. It is crucial to ensure that AI systems, particularly when integrated into (administrative) decision-making, are designed and assessed in terms of their understanding and in terms of ensuring fairness and equality of treatment. 7 In this context, the principle of proportionality benefits from AI’s ability to tailor results and decisions to specific situations and cases. 
However, this benefit depends on the quality of input data and the fairness and ethicality of the algorithms used to process such (Finck, 2019). Control over preventing un- desirable effects from AI-generated results is crucial. However, the principles of purpose limitation and data minimisation, as derivatives of the principle of proportionality in personal data protection, face numerous challenges in the use of AI, which requires large amounts of data for learning and decision- making. Their advancement must therefore ensure that personal data are collected and processed solely for the explicit and legitimate purposes for which they were collected (purpose limitation) and that the personal data processed are adequate, relevant, and limited to what is necessary for the purposes pursued (data minimisation) (Goldsteen et al., 2022). All of the above is also related to the principles of openness and transpar- ency, which are essential for ensuring democratic accountability (see also a comparison of EU values and rules as opposed to the Chinese approaches, in Kovač and Rudolf, 2022). Pursuing explainable AI and raising public awareness 7 This is discussed in more detail by Ranchordas (2024), who even advocates for the introduc- tion of ‘digital constitutionalism’. Equality is particularly important for vulnerable groups, such as the socially disadvantaged, where parties are, by definition, less informed, less educated, and less empowered to protect their rights (cf. Babšek and Kovač, 2023). Central European Public Administration Review, Vol. 22, No. 2/2024 93 The Role of Automated Decision-Making in Modern Administrative Law: Challenges and Data Protection Implications of its role in decision-making are key to maintaining trust in administrative decisions and human decision-makers. This enhances transparency in public governance and encourages public participation and engagement (Grimme- likhuijsen, 2023). The author explores the challenge of transparency by distin- guishing between (i) accessibility, which refers to the availability of the algo- rithmic code, and (ii) explainability, which focuses on the ability to explain the functioning of AI algorithms and their impact on decisions in an understand- able way. According to the research, the explainability of AI systems is more important for fostering trust, not only in the AI algorithm itself but especially in the human decision-makers who use AI in their decision-making. This dual aspect of trust – both in the technology and in the individuals behind it – high- lights the manifold implications of the transparency of algorithms for the pro- tection of personal data. The use of AI in individual administrative procedures challenges the participa- tion and involvement of both the public and the parties to the procedure. It hinders the right to be heard by limiting the parties’ ability to understand and influence the outcome of the administrative decision (Kovač, 2016; Gal- etta and Hofmann, 2023; Enqvist and Naarttijärvi, 2023). When AI assists in or even guides decision-making (either in whole or in part), the parties involved in these procedures find it more difficult to control their requests, monitor the collection and balancing of evidence, and influence the assessment and adoption of the final decision. This is particularly problematic if the reasoning behind the AI-generated decision remains unclear, preventing parties from understanding how the conclusions that contributed to and influenced the final decision were reached. 
This also raises concerns about the accountabil- ity of administrative decisions. If parties are unaware that AI is being used or cannot understand the AI decision-making process, their ability to challenge administrative decisions and effectively pursue remedies and judicial review is compromised. This affects the rights of individuals and undermines the integ- rity of and trust in the law. Uncritical acceptance of AI risks undermining fair and just procedures and the rights of the individuals who are the subjects of such decisions. As regards the fairness of algorithms and related processing, there are also concerns about the potential for AI-generated results to pro- duce biased or erroneous outcomes, which further complicates the possibility of integrating AI systems into administrative procedures. Legal discussions also often overlook the wider societal impacts and risks associated with er- rors and biases in algorithms (Carlsson, 2023; Ranchordas, 2024). Individuals seeking to challenge automated decisions therefore face several procedural obstacles, including the complexity and non-transparency of AI systems and their decision-making processes, which blur the way decisions are made and limit access to the information needed to challenge them effectively. As regards the confidentiality of administrative processes, AI’s reliance on large databases for learning and decision-making increases the risks of disclosure or misuse of personal and confidential information. It is therefore crucial to implement robust measures to protect personal and confidential information from the procedure and to develop AI technologies that respect the essence Central European Public Administration Review, Vol. 22, No. 2/2024 94 Grega Rudolf, Polonca Kovač of the right to (informational) privacy (Hamon et al., 2022; Rhahla et al., 2021; Rudolf and Kovač, 2023). This includes limited use of personal data, aligned with purpose limitation and data minimisation. Prolonged data storage, in par- ticular when unnecessary, increases the risk of misuse and security breaches. In conclusion, the impact of AI on the principles of administrative (procedural) law and data protection law is profound, necessitating a synchronised ap- proach to technology introduction and legal framework enhancement. The relevant principles are interrelated; emphasising one can interfere with the other. Thus, ways must be found to bridge these dilemmas to minimise col- lisions or justify the predominance of one principle over another based on the specific circumstances of a case. The opportunities AI offers for better (public) governance are substantial, but they come with complex challenges that require vigilant and careful oversight, ethical consideration, and constant adaptation of both legal frameworks and practices. Balancing the benefits of AI with the need to uphold fundamental legal and democratic standards is therefore crucial, which is particularly evident in the regulation of administra- tive decision-making through administrative procedural law, as illustrated by the CJEU case below. The Schufa case (CJEU, C-634/21, 7 December 2023) is relevant to the topic at hand, even though the case concerns the decision of a private sector en- tity, specifically German bank, using AI to assess a client’s creditworthiness (credit score). This mechanism often forms the basis for decision-making by authorities in administrative procedures when granting rights to parties. 
The decision raises several issues, such as the need for transparency, accounta- bility, and the protection of personal data. The analysis of this case aims to highlight how these principles can lead to the responsible deployment of AI technologies, respecting legal principles and safeguards, while building trust in AI systems. The analysis also seeks to link theoretical guidance with practi- cal insights to offer a pragmatic approach to the integration of AI systems into administrative practice. The dispute arose when Schufa made a prediction about a data subject’s cred- itworthiness using AI and passed that prediction to the bank, which subse- quently refused to grant a loan. The data subject claimed legal protection before the supervisory authority in the German state of Hessen, followed by an action before the Administrative Court of Wiesbaden. The court stayed the procedure and referred a preliminary question to the CJEU. The CJEU consid- ered whether Schufa’s automated credit scoring fell within the scope of “au- tomated individual decision-making” as defined in Article 22(1) of the GDPR. This provision prohibits any decision based solely on automated processing, including profiling, which produces legal effects or similarly significantly af- fects the data subject, unless the specific conditions of Article 22(2) of the GDPR are met. Having established the adequacy of the request, the CJEU as- sessed whether the three cumulative conditions for the application of Article 22 of the GDPR were met in the processing of personal data at issue. The Court held that (i) there was a “decision”, as this term also covers the result Central European Public Administration Review, Vol. 22, No. 2/2024 95 The Role of Automated Decision-Making in Modern Administrative Law: Challenges and Data Protection Implications of the calculation of the data subject’s creditworthiness; (ii) the decision was “based solely on automated processing, including profiling”; and (iii) the deci- sion produced “legal effects concerning the person at issue or affect him or her similarly significantly”. The CJEU ruled that it was apparent from the very wording of the question referred that “the action of the third party to whom the probability value is transmitted draws ‘strongly’ on that value” and that, according to the factual findings of the referring court, an insufficient prob- ability value leads, in almost all cases, to the refusal of that bank to grant the loan applied for. The Court’s judgment thus takes the view that preparatory documents (i.e., those offered by AI systems as a result of processing) can also be considered autonomous decisions under the provisions of Article 22(1) of the GDPR. In line with Recital 71 and Article 22(2)(b) of the GDPR, the CJEU emphasised the need for protective measures to respect the rights and free- doms of data subjects when automated decision-making is carried out. These (additional) measures include the right for data subject (a) to obtain human intervention, (b) to express their point of view, and (c) to challenge the deci- sion taken in their regard. In this context, the question arises whether automated draft decisions or related results generated by AI systems in administrative procedures should also be considered as a prohibited type of data processing, particularly in cases where all conditions under Article 22(1) of the GDPR (see points i-iii above) are met. 
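For illustration only, the cumulative test applied by the CJEU in Schufa can be summarised in a short sketch. The following Python fragment is an expository aid rather than part of the judgment or of any deployed system; the data structure and field names are hypothetical assumptions made for this example.

```python
# Illustrative sketch only: the three cumulative conditions of Article 22(1) GDPR
# as read by the CJEU in Schufa (C-634/21). Field names are hypothetical.

from dataclasses import dataclass


@dataclass
class AdmOutput:
    is_decision: bool                  # (i) a "decision", which per Schufa can include a score drawn on "strongly" by a third party
    solely_automated: bool             # (ii) based solely on automated processing, including profiling
    legal_or_significant_effect: bool  # (iii) legal effects or similarly significant effects on the data subject


# Safeguards that Article 22(2)(b) and Recital 71 require where a law lifts the prohibition
REQUIRED_SAFEGUARDS = (
    "right to obtain human intervention",
    "right to express one's point of view",
    "right to contest the decision",
)


def article_22_applies(output: AdmOutput) -> bool:
    """The three conditions are cumulative: if any one is missing,
    the processing falls outside Article 22(1) GDPR."""
    return (output.is_decision
            and output.solely_automated
            and output.legal_or_significant_effect)


credit_score = AdmOutput(is_decision=True, solely_automated=True,
                         legal_or_significant_effect=True)

if article_22_applies(credit_score):
    print("Prohibited unless authorised by EU or Member State law providing:")
    for safeguard in REQUIRED_SAFEGUARDS:
        print(" -", safeguard)
```

The point of the sketch is that the conditions operate conjunctively: if any one is absent, Article 22(1) does not apply, whereas if all are present, the processing is prohibited unless an authorising law laying down the listed safeguards exists.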
Given the nature of the administrative relationship, where the rights and obligations of individuals vis-à-vis the authorities are deter- mined in administrative procedures, it is reasonable to assume that the mere consideration by a human decision-maker of the results, predictions, opin- ions, or assessments of AI systems as part of evidentiary materials within fact-finding and evidence-taking procedures significantly influences its con- duct and the final decision. However, the distinction between different types of documents or outputs (e.g., a prediction or assessment as evidence versus a draft decision) remains unclear, raising questions about whether such use constitutes a decision based solely on automated data processing. In light of the Schufa decision, the processing and use of AI in administrative proce- dures would, as a rule, be prohibited, at least under Slovenia’s current legal framework. 8 This directly collides with or undermines fundamental principles such as legality, proportionality, trust, data quality and substantive truth, 9 the right to be heard and be given reasons for a decision, and effective legal protection. This is not only in contravention of the GDPR but also the EU Charter of Fundamental Rights and the principles enshrined in the Slovenian Constitution and the GAPA. 8 Unless compliance with the conditions set out in Article 22(2)(b) of the GDPR is ensured – namely, that such processing is authorised by Union or Member State law to which the con- troller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests. 9 According to the CJEU, the operator of an AI system must apply appropriate mathematical or statistical procedures, along with technical and organisational safeguards, to minimise the risks of error and correct inaccuracies. Additionally, these measures must ensure the protec- tion of personal data by considering potential risks and preventing discriminatory consequen- ces that could affect an individual’s interests and rights (cf. Grimmelikhuijsen, 2023). Central European Public Administration Review, Vol. 22, No. 2/2024 96 Grega Rudolf, Polonca Kovač In Slovenia, according to constitutional provisions, such a legal basis should be established by law (see Article 38(2) of the Constitution) or by a hierarchi- cally superior legal basis (e.g. EU law). In addition to the conditions mentioned above, such regulation should also fulfil the conditions set out in Article 6(2) of the PDPA-2. Such a law should specify the processing of personal data, the types of personal data to be processed, the categories of data subjects, the purpose of the processing, and the retention period of the personal data or the period for periodic review of the need for retention. Where possible, it should also specify the users of the personal data, the specific processing operations and procedures, and other measures to ensure lawful, fair, and transparent processing. Currently, no such provision exists in the Slovenian legal order that explicitly allows such processing while meeting the strict cri- teria of the national legal framework. The Schufa decision thus sets a precedent by setting clear criteria for auto- mated decisions, including the right to human intervention, the right to chal- lenge automated decisions, the right to be heard, and the right to transpar- ency in the decision-making process. 
By establishing these standards, the Court underscores the imperative to bolster legal frameworks to protect individual rights amid the growing prevalence of automated decision-making. This ruling not only reflects the increasing importance of data protection in our digital era (cf. Rudolf and Kovač, 2023) but also points to potential shortcomings in existing legal frameworks to adequately address the challenges posed by automation. It serves as a compelling call to action for legislators to modernize and strengthen laws, ensuring they effectively safeguard individuals against the risks associated with automated procedures.

5 Constitutional Principles of Democracy as Safeguards against Arbitrary AI

As public administrations increasingly adopt digital tools to enhance efficiency and streamline decision-making, the need for robust legal frameworks to safeguard democratic values has become more urgent. This shift toward automation, particularly through ADM systems, presents both opportunities and significant challenges. In response to these challenges, the European Law Institute (ELI) adopted a Charter of Fundamental Constitutional Principles of a European Democracy (2024), which provides critical guidance on how to integrate emerging technologies into governance structures without undermining core democratic principles.

Figure 2 below illustrates the complex intertwining of principles related to good governance with those related specifically to ADM, as outlined by the ELI Charter. While ADM promises efficiency, it also risks eroding transparency, accountability, and fundamental rights (cf. Grimmelikhuijsen, 2023; Galetta and Hofmann, 2015). Legal certainty and the rule of law are particularly vulnerable, as opaque algorithms complicate the ability to challenge decisions, potentially undermining individuals' rights. Similarly, ADM systems can dilute accountability by distancing human oversight, which calls into question the adequacy of current regulatory frameworks. The integration of ADM also raises broader concerns about bias and discrimination, challenging non-discrimination and equality before the law. These issues expose gaps in the governance of ADM systems, suggesting that mere compliance with existing principles may be insufficient to prevent democratic erosion (cf. Enqvist and Naarttijärvi, 2023; Ranchordas, 2024). Effective regulation of ADM requires not only adherence to these principles but also a proactive approach to anticipate and address the unique risks ADM introduces, ensuring the preservation of democratic values in the face of rapid technological change.

Figure 2. Interconnecting principles of good governance with ADM. Source: own; based on ELI, 2024.

Among the listed principles, Principle 32 directly addresses ADM and serves as the focal point for understanding how the Charter seeks to regulate the integration of ADM systems into public administration. This principle acknowledges the transformative potential of ADM systems but insists that these systems must not operate without legal and constitutional safeguards.
It establishes that ADM systems must be transparent, accountable, and subject to meaningful human oversight, while ensuring that they do not unduly limit access to judicial protection, reinforcing the importance of preserving avenues for individuals to contest decisions that affect their rights (Benjamin, 2023; Finck, 2019).

Principle 32's call for transparency is central to mitigating the risks associated with ADM systems. Without full transparency, the use of ADM systems risks violating Principle 7's mandate that all administrative actions be governed by a clear legal basis. Transparency, separately outlined in Principle 20, is essential not only for maintaining public trust but also for allowing individuals to understand how decisions that impact their rights are made. However, the challenge with ADM systems lies in making this transparency meaningful. Disclosing the technical details of an algorithm or the data it processes may not be sufficient for the average individual to fully understand how a decision was reached. Therefore, Principle 32 should be expanded to require that transparency in ADM systems includes not merely technical details but clear, understandable explanations of how decisions are made and the logic behind them. This approach aligns with the right to legal certainty, a fair trial and an effective remedy, and resonates strongly with the CJEU judgment in the Schufa case.

This need for deeper transparency is also echoed in the Advocate General's Opinion in Case C-203/22, where the AG stressed that under the GDPR, transparency must not be reduced to superficial technical explanations. Instead, individuals must be given meaningful insights into the logic of automated decision-making to understand how such systems operate. The AG further highlighted that the GDPR's right to explanation is meant to empower individuals, not just inform them superficially, so they can make informed decisions about seeking legal protection when their rights are affected (De la Tour, 2024).

Another key aspect of Principle 32 is its insistence on accountability and human oversight in ADM systems. While this reflects a recognition of the limitations of automated processes, the principle's current formulation leaves questions about the extent and depth of this oversight. Human oversight, in many cases, can become merely procedural, where humans simply "rubber stamp" decisions made by machines without truly engaging in a meaningful review. To be effective, human oversight must go beyond formalities and involve a substantive review of the ADM system's decision-making process, ensuring that any biases, errors, or injustices are identified and corrected (cf. Grimmelikhuijsen, 2023; Hamon et al., 2022; Cetina Presuel and Martinez Sierra, 2022). Without strong human involvement, ADM systems can produce outcomes that perpetuate existing biases or inequalities, directly conflicting with Principle 26's emphasis on non-discrimination and Principle 27's emphasis on the protection of fundamental rights.

In this regard, Principle 32 should also be critically evaluated in light of Principle 19's broader framework of accountability. While the principle highlights the need for ADM systems to be accountable, the specific mechanisms for ensuring this accountability are left vague.
Public administrations that rely on ADM systems should be required to establish clear lines of accountability, including external audits and oversight bodies that can review ADM decisions and hold public institutions accountable for their use. This ties back to Principle 22 on anti-corruption and the need for mechanisms that prevent the misuse of power. In the context of ADM, without external checks and robust accountability systems, there is a real risk that ADM systems could be misused or that public administrations could hide behind the opacity of algorithms to avoid responsibility for unjust decisions.

Despite the strong framework established by Principle 32, significant gaps remain, particularly in relation to data protection. ADM systems typically depend on the processing of large volumes of personal data, bringing them directly under the scope of the GDPR and national data protection laws. The GDPR offers individuals robust protections, including the right not to be subject to decisions based solely on automated processing unless specific safeguards are in place. Yet Principle 32, and indeed the Charter as a whole, fails to explicitly incorporate these vital data protection rights and safeguards. This omission represents a critical gap, especially in the context of administrative procedures, where the handling of personal data and privacy concerns are paramount in the introduction and operation of ADM systems. This lack of alignment with the GDPR also weakens Principle 27's commitment to the protection of fundamental rights, particularly the right to data protection and privacy.

The lack of emphasis on data protection within Principle 32 is particularly concerning given the increasing role that ADM systems are playing in areas where individuals' rights are most vulnerable. For example, in social welfare decisions, ADM systems could determine eligibility for benefits, while in immigration, they may influence decisions about asylum or residency (cf. Carlsson, 2023; Babšek and Kovač, 2023). In these contexts, personal data is not just a byproduct of the decision-making process; it is the basis of the decision itself. Without adequate safeguards in place, individuals could face significant harm, with limited legal recourse to challenge or correct erroneous or biased decisions. By expanding Principle 32 to incorporate data protection safeguards, or even by introducing a standalone principle of data protection as a key tenet of European democracy, the ELI Charter would be better equipped to navigate the challenges of integrating ADM systems into public administration. ADM systems must not only be transparent and accountable but also operate in full compliance with data protection regulations to ensure that individuals' fundamental rights are upheld. Without these protections, the risk of undermining democratic governance through the misuse of ADM systems remains significant (cf. Kovač and Rudolf, 2022). In the digital age, where automated systems and algorithms drive many public administrative processes, safeguarding data protection is not merely a legal obligation but a democratic imperative.
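To make the call for clear lines of accountability more concrete, the following sketch illustrates one possible form of an ADM audit trail that an external auditor or oversight body could query. It is a hypothetical illustration in Python; the field names and the simple "rubber stamp" heuristic are assumptions made for this example, not requirements of the ELI Charter or the GDPR.

```python
# Illustrative sketch only: an auditable trail of ADM-assisted decisions that an
# oversight body could review. Field names and the heuristic are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AdmAuditEntry:
    case_id: str
    model_version: str
    machine_recommendation: str
    final_decision: str
    reviewing_official: str
    review_reasons: str = ""          # substantive reasons recorded by the human reviewer
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())


audit_log: list[AdmAuditEntry] = []

audit_log.append(AdmAuditEntry(
    case_id="2024-001", model_version="benefits-model-0.3",
    machine_recommendation="reject", final_decision="reject",
    reviewing_official="official_17",
    review_reasons="",                # adopted without any recorded reasoning
))


def possible_rubber_stamps(log: list[AdmAuditEntry]) -> list[str]:
    """Flag cases where the official adopted the machine output without recording
    any substantive reasons - a signal for external audit, not proof of a defect."""
    return [e.case_id for e in log
            if e.final_decision == e.machine_recommendation and not e.review_reasons.strip()]


print(possible_rubber_stamps(audit_log))   # ['2024-001']
```

Such a record would not by itself guarantee substantive human review, but it would at least make the absence of recorded reasoning visible to auditors and oversight bodies.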
6 Discussion: Key Considerations and Future Trends

Public administration, as a vital social subsystem, plays a pivotal role in upholding the rule of law, serving as the mechanism through which public policies are designed, implemented, and enforced. In this capacity, administrative authorities make crucial decisions that govern the relationship between the state and individuals. With the advent of AI and the broader push for digitalisation, there is no doubt that these technologies offer significant opportunities to streamline processes and enhance decision-making efficiency. However, adopting automation without careful consideration poses risks. Reckless implementation can erode the fundamental principles of administrative procedure and compromise the protection of personal data. Law should not be understood as a mere barrier; rather, it provides the essential structure for administrative bodies to intervene in individual rights in a lawful and controlled manner, preventing arbitrariness and ensuring decisions are made in the public interest. Therefore, the modernisation of legal frameworks must always go hand in hand with technological reforms (Kovač and Rudolf, 2022; Galetta and Hofmann, 2023).

Proponents of rapid digitalisation often argue that law is an obstacle that hinders development, but they overlook that the law also provides the necessary foundation for legitimate administrative decision-making. Ignoring legal boundaries can lead to misuse of power, unequal treatment, and destabilisation of social order, ultimately undermining the benefits that AI and automation promise. Instead of seeing laws as impediments, it is crucial to recognize their role in providing both limits and empowerment to administrative authorities as they process personal data and determine the rights and obligations of individuals. By ensuring compliance with legal principles, public administration not only upholds the rule of law but also builds trust in its decision-making processes.

Although administrative procedures were traditionally perceived as rigid structures ensuring consistency in governance, societal changes demand a more adaptable and flexible approach. Administrative processes must therefore evolve to meet the complexity of modern governance, where flexibility is necessary to address a diverse and dynamic social order (Carlsson, 2023; Dragos, 2023). This evolution highlights the balance between maintaining the core values of administrative law and embracing innovation in a way that strengthens, rather than weakens, public trust in the system. The evolution of administrative procedures, and thus public governance as a whole, reflects the necessity for these procedures to remain relevant and effective amidst the challenges posed by accelerating technological progress, increasing globalisation, and changing social values. In this context, traditional rigid frameworks are increasingly being supplemented or replaced by more flexible, responsive, and participatory approaches. These approaches ensure that administrative procedures not only support and safeguard the rule of law but also meet modern expectations of transparency, efficiency, and participatory processes.
Consequently, the traditional function of administrative procedural law to control bureaucracy is being expanded to include data and risk management, as well as predictive rather than merely reactive decision-making (see further Ranchordas, 2024).

In this context, the rapid development of AI stands out as a key factor that could significantly impact the future functioning of public administration and the role of civil servants as personalised holders of executive power. The pace at which AI technologies are developing often outstrips existing legal frameworks, necessitating their continuous adaptation to address new ethical and legal considerations, dilemmas, and risks. AI technologies, capable of processing large amounts of data, performing predictive analyses, and automating decision-making processes, are revolutionising the delivery of public services and the way individuals' rights and obligations are determined. However, integrating such systems into administrative procedures is not without its challenges. The incorporation of AI into public administration raises several significant regulatory challenges, particularly in ensuring the accountability and transparency of decision-making processes (Grant et al., 2023; Grimmelikhuijsen, 2023; Galetta and Hofmann, 2023; Cetina Presuel and Martinez Sierra, 2022). This includes ensuring that well-established procedural and constitutional principles are upheld alongside the principles of personal data protection (Rhahla et al., 2021; Goldsteen et al., 2022; Kovač and Rudolf, 2022).

To meet these challenges, it is essential that today's legal frameworks specifically address all dimensions of introducing automated decision-making in public administration. This orientation requires not only minor revisions of existing regulations but also a forward-looking approach, including defining guidelines on the use and protection of personal data in administrative procedures. The current legal framework for AI, consisting of a set of European and other international guidelines, represents an important step towards creating a comprehensive regulatory framework for both the development and use of AI systems (see ELI, 2022; Jančova and Fernandes, 2022).

The Slovenian GAPA remains underregulated in this respect, despite the country aligning its PDPA-2 with the GDPR in late 2022. However, there are two systemic problems here that remain completely or largely unresolved. First, the GAPA has been in force since 1999 and has never been significantly modernised despite the rapid evolution of the administrative environment, including digitalisation (Kovač, 2022; Dragos, 2023). There is virtually no mention of digitisation or even automated decision-making, the necessary guarantees, or legal protection, with only a few minor and comparatively outdated rules on e-communication and the automated issuing of certificates or signing of decisions issued by the IT system. Moreover, the competent decision-makers do not even feel the need to include these rules in the GAPA,10 even though other EU countries are either comprehensively amending their laws (e.g. Hungary, Croatia, Romania, Finland; see della Cananea and Parona, 2024) or find practices to be radically ahead of regulation (e.g. Netherlands, USA, Estonia; see Ranchordas, 2024).

10 This can be inferred from the strategies and published plans of the Slovenian Government and Ministry of Public Administration, which in 2023 established the baselines but left the regulation to be defined by sector-specific regulations, case law, and a potential new law following the completion of the analysis of comparative regimes, expected to be concluded in 2026.
Second, the implementation of the GDPR, together with national laws, has already revealed procedural shortcomings at the EU level, particularly concerning the principles of good administration in cross-border decision-making on the processing of personal data. This led to the drafting in 2023 of what is known as the "GDPR's cross-border procedural regulation",11 which strengthens certain rights of the parties, particularly regarding good administration, such as the right to appeal, access to data, and alternative dispute resolution. This underscores the importance of an additional and substantively higher level of regulation at both the EU and national levels.

11 See the Proposal for a Regulation laying down additional procedural rules relating to the enforcement of GDPR (July 2023); https://ec.europa.eu/commission/presscorner/detail/en/ip_23_3609.

Moreover, the European Commission published a proposal in 2021 and adopted a general AI Act in 2024, a landmark move to address the rapidly evolving field of AI in society. This initiative represents an ambitious effort by the EU to establish a coherent set of rules for the development, deployment, and use of AI technologies, to protect the fundamental human rights of individuals, and to guide AI development in a coherent and human-centred way (see further Palmiotto, 2024). Notably, the AI Act focuses primarily on the use of AI systems rather than the technology itself. The main added value of the AI Act, apart from its direct and uniform applicability across the EU, is the incorporation of a system of different levels of risk, which is crucial in the social welfare sphere or, more generally, in terms of authoritative interventions and public services. It adopts a pyramidal approach based on minimal, limited, high, or unacceptable risk, which translates into prohibited, partly permitted, or relatively free use of AI, with corresponding conformity assessments and ex post surveillance. According to Article 6 of the AI Act, the use of an AI system is not considered high-risk if the AI system is intended to perform narrow procedural tasks, improve the results of previously completed human activities, detect decision-making patterns without replacing human assessment, or perform preparatory tasks for assessments. Nevertheless, any AI system dealing with the profiling of natural persons is always classified as high-risk. Pursuant to Article 7 of the AI Act, risk assessments must therefore take into account, inter alia, the extent to which persons who are potentially harmed or suffer an adverse impact are in a vulnerable position in relation to the deployer of the AI system, in particular due to an imbalance of power, knowledge, economic or social circumstances, or age.

The introduction of automated decision-making systems in administrative procedures, as foreseen in the above Act, represents a key moment in the digital transformation of public administration and administrative procedures. Although the Act lays down some essential foundations for regulating such systems, balancing innovation in AI technologies and human rights protection, it highlights the pressing need for further legal development within national frameworks.
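To make the risk-classification logic summarised above more tangible, the following minimal Python sketch encodes the Article 6 derogations and the profiling override exactly as described in this section. It is an illustrative simplification under stated assumptions, not an implementation of the AI Act: the names ADMUseCase and is_high_risk are hypothetical, the sketch presumes a use case already falling within a high-risk area such as essential public services, and a real legal assessment involves far more context than these boolean flags can capture.

from dataclasses import dataclass

@dataclass
class ADMUseCase:
    # Hypothetical flags mirroring the Article 6 derogations summarised in the text.
    performs_narrow_procedural_task: bool = False
    improves_prior_human_activity: bool = False
    detects_patterns_without_replacing_human_assessment: bool = False
    performs_preparatory_task: bool = False
    involves_profiling_of_natural_persons: bool = False

def is_high_risk(use_case: ADMUseCase) -> bool:
    # Assumption: the use case already falls within an area otherwise listed as high-risk.
    # Profiling of natural persons always keeps the system in the high-risk class.
    if use_case.involves_profiling_of_natural_persons:
        return True
    # Any one of the narrow-purpose derogations takes the deployment out of the high-risk class.
    derogations = (
        use_case.performs_narrow_procedural_task,
        use_case.improves_prior_human_activity,
        use_case.detects_patterns_without_replacing_human_assessment,
        use_case.performs_preparatory_task,
    )
    return not any(derogations)

# Example: a preparatory benefits-eligibility tool that profiles applicants remains high-risk.
print(is_high_risk(ADMUseCase(performs_preparatory_task=True,
                              involves_profiling_of_natural_persons=True)))  # True

The point of the sketch is simply that the profiling rule overrides the derogations, which is why ADM systems in social welfare or immigration contexts will usually remain in the high-risk tier.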
In this context, national rules governing administrative procedure (in Slovenia, at least the GAPA and the PDPA-2) should be amended to take into account the nuances of using these systems as future co-pilots in managing and deciding in these procedures, or even as (relatively) autonomous agents making fully automated decisions. These changes should specifically address the integration of AI technologies into procedures, particularly to ensure fair process and the principles that protect individuals from authorities, with due consideration of the principles of fairness, accountability, and transparency throughout the procedure.

In addition to the above, the regulatory framework for the future use of AI systems for automated decision-making should ensure that preparatory decisions or results generated by AI systems in administrative procedures (including predictions, assessments, or other documents relied on by officials in decision-making) are considered intermediate substantive decisions in the procedure. This would ensure adequate legal protection against such decisions and the resulting entitlements, thereby upholding the principles of both data protection and administrative law (transparency, accuracy, etc.). The following conditions should also apply to these decisions (see the illustrative sketch after this list):

– A party to the procedure should be aware that such a decision is being taken. This supports the principles of transparent processing of personal data, openness of public administration, and transparent management of administrative procedures. It also aligns with the protection of the rights of the parties, legality, and the party's right to be heard. The party should be informed not only of the existence of the automated decision but also of the data deemed legally relevant by that decision and the process of weighing these facts, including the weight assigned by the algorithm to each data item in generating the result;

– A party to the procedure should have the opportunity to comment on and challenge the result, as well as make other submissions or challenge the inaccuracy of both the input and the output data. This aligns with the principles of accuracy of personal data, substantive truth, legality, the right to be heard, and the protection of the rights of the parties and the public interest;

– A party to the procedure should be able to reject the automated decision, at least in part, or request that an official, i.e., a human decision-maker, substantively participate in the decision-making process. This would ensure that the automated decision/prediction/assessment does not apply to them, consistent with the principles of protection of the rights of the parties and the public interest, the right to be heard, and legality;

– A party to the procedure should have the possibility of legal protection and thus to challenge the automated decision. This upholds the principles of legality in administrative procedure and effective legal protection. If AI-generated results (e.g. recommendations, assessments, opinions) are used in administrative decision-making, it should be further specified that an appeal against a contested decision (in this case, an AI-generated result) suspends the procedure until the appeal is decided by the appellate body or the court in an administrative dispute. To preserve the principles of legality and effective legal protection, it is essential that the parties whose final decision will, in its essence (compare paragraphs 61 and 62 of the Schufa case (CJEU C-634/21)), be based on the AI-generated result can effectively challenge such a result before the final decision is made. Given the significant reliance of individuals on AI-generated decisions (known in theory as automation bias; see Parycek et al., 2023), it is crucial to establish whether the disputed AI result is based on correct facts. Only then can the final decision rest on the substantive truth of the case, reducing the need to resort to legal remedies.
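As a purely illustrative aid, the sketch below models the safeguards proposed above as a simple record that an administrative information system might keep for each AI-generated intermediate result. The structure and names (IntermediateADMDecision, may_finalise) are assumptions made for this example and do not correspond to the GAPA, the PDPA-2, or any existing system; the point is only to show how notification, the right to be heard, a requested human review, and a suspensive appeal could be tracked as preconditions for finalising the decision.

from dataclasses import dataclass, field

@dataclass
class IntermediateADMDecision:
    # Hypothetical record for an AI-generated preparatory result treated as an
    # intermediate substantive decision in the procedure.
    relied_on_data: dict                       # legally relevant inputs and the weight given to each
    party_notified: bool = False               # transparency: the party knows the result exists
    party_comments: list = field(default_factory=list)  # right to be heard
    human_review_requested: bool = False
    human_review_completed: bool = False
    appeal_pending: bool = False               # a lodged appeal suspends the procedure

def may_finalise(decision: IntermediateADMDecision) -> bool:
    # The final decision may rest on this result only if the safeguards are satisfied.
    if not decision.party_notified:
        return False                           # the party must first be informed
    if decision.appeal_pending:
        return False                           # appeal suspends the procedure
    if decision.human_review_requested and not decision.human_review_completed:
        return False                           # requested human review must actually take place
    return True

# Example: the party has been notified but has asked for human review that is still outstanding.
result = IntermediateADMDecision(
    relied_on_data={"income": 0.6, "household_size": 0.4},
    party_notified=True,
    human_review_requested=True,
)
print(may_finalise(result))  # False

Such a record also illustrates why the paper treats these results as intermediate decisions: each precondition maps directly onto a principle (transparency, the right to be heard, human oversight, effective legal protection) rather than onto a purely technical design choice.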
In addition to respecting the principles and rules of administrative law, national rules on the protection of personal data must be enhanced regarding the use of AI systems in public administration. This is necessary to address the challenges of handling the vast volume, speed, and variety of data these systems process. Automated decision-making should already be covered by the principle of legality. This includes establishing clear legal bases for processing personal data at all stages of using these systems (from development to deployment), pursuing the principle of minimum necessary data processing, and ensuring that individuals' data protection rights, such as the right to access personal data and the right to object to processing, are duly respected. Adapting future regulatory frameworks to the unique capabilities of AI in personal data processing is essential. This involves reinforcing the principles of accountability and transparency of automated decision-making and addressing ethical and legal challenges, such as discrimination, in line with the AI Act. However, gaps remain regarding the digitalisation of administrative law at a systemic level, such as the lack of an EU Regulation on administrative procedures or some other general administrative act or digital rights code (Jančova and Fernandes, 2022). Such regulatory gaps lead to negative consequences for society, e.g. a lack of legal certainty, a disproportionate burden on citizens, and a lack of awareness about administrative injustice.

The answers to the two opening research questions are now apparent. Regarding the principles of administrative law and personal data protection most affected by the introduction of AI systems for automated decision-making in administrative procedures, it is evident that it is difficult to single out a specific principle, as they are inherently intertwined. However, the integration of automated decision-making systems into administrative procedures significantly impacts the fundamental principles of administrative law. Additionally, the implications for personal data protection are profound, as AI systems require stringent measures to prevent misuse and breaches of personal data protection, raising concerns about personal data protection and the further use of such data in decision-making.
Among the traditional administrative-procedural principles underpinned by EU personal data protection safeguards, the principles of legality, the right to be heard, the duty to give reasons for the decision, and legal or judicial protection are particularly important (Dragos, 2023; Galetta et al., 2015; Kovač, 2016).

Unfortunately, the national legislator in Slovenia has not (yet) addressed these dilemmas, unlike neighbouring and other countries (for instance, Finland or Croatia, although some, like Hungary, might be too efficiency-oriented at the expense of basic safeguards). This leaves it to the courts to decide on a case-by-case basis instead of through a systemic approach. Regarding the sufficiency of the existing legal framework, significant gaps remain in how these systems should be used in administrative procedures, particularly since the national legal order, at least in Slovenia, does not explicitly regulate their use. Therefore, in response to the second research question on the extent to which EU regulations, particularly the GDPR and the Slovenian PDPA-2 and GAPA, adequately regulate the balance between respect for the principles of personal data protection and effective public administration in Slovenia, it can be stated that developments in Slovenia are markedly slow and unambitious in practice. However, key developments at the EU level, particularly the adoption of the AI Act, offer hope. For Slovenia, the primary safeguard against automated decisions is currently Article 22 of the GDPR, which prohibits such decisions unless adequate safeguards are provided, such as the intervention of a human decision-maker, transparency of processing, and the possibility to express one's point of view and to challenge the decision.

The balance of power between AI systems and human decision-makers is central to the debate on AI's role in public governance. While AI can provide valuable insights and efficiency in conducting administrative procedures, the ultimate responsibility for decisions, especially those with significant implications for the rights and obligations of individuals, must remain with human decision-makers (cf. Palmiotto, 2024). AI should be used 'merely' as a tool to enhance human judgement, not replace it, thus preserving the human touch that is essential for ethical and responsible decision-making and the exercise of power. Successfully integrating good foreign practices into the Slovenian (administrative) legal environment requires a thorough, systematic, and well-thought-out approach. A comprehensive review of the extensive existing practices of such systems in public administration is needed, assessing the consistency of these uses with the existing legal framework, which covers the rules of administrative law, personal data protection, and broader constitutional guarantees. By ensuring procedural adjustments and robust safeguards, the legal framework should prevent AI systems from undermining existing legal principles, which, at least in part, still need to be developed.
By fostering a legal environment where technological progress and fundamental legal principles coexist harmoniously, it can be ensured that AI systems, with their automated ability to make predictions, assessments, or even decisions, do not undermine the rule of law and good administration but, together with other rules and principles, reinforce these guarantees.

7 Conclusion

Artificial intelligence, once a concept of science fiction, is now shaping the reality of public administration. As AI technologies become increasingly interconnected within administrative processes, we find ourselves at the threshold of a new era. This transformation comes with plenty of potential: AI can streamline decision-making, enhance the efficiency of administrative procedures, and fundamentally reshape how personal data is being protected (or even eroded). Its ability to process large volumes of information quickly and offer predictive insights provides a vision of a future where administrative work is more responsive, flexible and efficient.

To answer the research question regarding the impact of AI use in general, and ADM in particular, on the principles of administrative law as well as data protection law, one can first establish a mutual interdependence of the two areas. Namely, deciding upon data protection rights is an administrative matter falling within the scope of fundamental principles. On the other hand, personal data is by nature always used in administrative procedures, which gives rise to a requirement to follow the basic principles of both otherwise rather autonomous legal fields. Further, the automation of decision-making and the digitalisation of public administration clearly affect the so-called traditional principles, which must necessarily be modernised by acknowledging the impact of AI and ADM and their peculiarities, such as AI explainability and continued human accountability for the decisions made. However, national regulation in particular, at least in Slovenia, lags behind these developments, with EU law, e.g. the AI Act, pushing it forward.

However, AI progress brings along important challenges that cannot be ignored. AI's role in automated decision-making poses significant questions about how we uphold the rule of law, ensure fairness, and protect individuals' rights, particularly regarding personal data protection. What is at stake is not just the efficiency of public administration, but the foundational principles that have long governed the relationship between the rulers and the ruled. Introducing AI without proper safeguards could weaken those principles, risking arbitrariness and undermining trust in administrative systems and processes. To address these challenges, the regulatory framework for AI must establish several key safeguards. Firstly, individuals must be informed when an AI-driven decision is being made, including the specific data inputs that were used and their role in reaching the decision. Secondly, individuals must have the opportunity to comment on or contest these decisions, ensuring transparency and accuracy of the outcomes. Thirdly, human oversight must be preserved, ensuring that AI outputs are not implemented without meaningful human review, thereby maintaining procedural fairness.
Lastly, robust legal frameworks must be established to allow individuals to effectively challenge AI-driven decisions, thus upholding the principle of legality and ensuring comprehensive judicial oversight.

As public administrations increasingly turn to AI, it is therefore crucial that these innovations are guided by law and not just by technological possibilities. The task ahead is to integrate AI in ways that enhance, rather than undermine, transparency, fairness, and accountability. Only by carefully navigating this balance can we ensure that AI strengthens public administration while preserving the fundamental values of justice, transparency and personal data protection that form the bedrock of democratic governance.

References

Algorithm Watch (2020). Automating Society Report 2020. At , accessed 3 August 2024.
Babšek, M. and Kovač, P. (2023). The Covid-19 Pandemic as a Driver of More Responsive Social Procedures: between Theory and Practices in Slovenia. NISPAcee Journal of Public Administration and Policy, 16(1), pp. 1–32. https://doi.org/10.2478/nispa-2023-0001.
Benjamin, J. (2023). Safeguarding the Right to an Effective Remedy in Algorithmic Multi-Governance Systems: An Inquiry in Artificial Intelligence-Powered Informational Cooperation in the EU Administrative Space. Review of European Administrative Law, (2), pp. 9–36.
Carlsson, V. (2023). Legal Certainty in Automated Decision-making in Welfare Services. Public Policy and Administration, 0(0). https://doi.org/10.1177/09520767231202334.
Cetina Presuel, R. and Martinez Sierra, J. M. (2022). The Adoption of Artificial Intelligence in Bureaucratic Decision-making: A Weberian Perspective. Digital Government: Research and Practice.
De la Tour, R. (2024). Opinion of Advocate General Richard de la Tour in Case C-203/22, Court of Justice of the European Union. At , accessed 20 August 2024.
Della Cananea, G. and Parona, L. (2024). Administrative procedure acts in Europe: An emerging "common core"? The American Journal of Comparative Law, avae016, pp. 1–56. https://doi.org/10.1093/ajcl/avae016.
Dragos, D. C. (2023). Administrative Procedure. In A. Farazmand, ed., Global Encyclopaedia of Public Administration, Public Policy, and Governance. Springer Nature, pp. 363–369.
Enqvist, L. and Naarttijärvi, M. (2023). Discretion, Automation, and Proportionality. In M. Suksi, ed., The Rule of Law and Automated Decision-Making: Exploring Fundamentals of Algorithmic Governance. Cham: Springer International Publishing, pp. 147–178.
European Law Institute (ELI). (2022). Model Rules on Impact Assessment of Algorithmic Decision-Making Systems used by Public Administration. Report of the European Law Institute.
European Law Institute (ELI). (2024). Charter of Fundamental Constitutional Principles of a European Democracy. Report of the European Law Institute.
Finck, M. (2019). Automated Decision-Making and Administrative Law. In P. Cane et al., eds., Oxford Handbook of Comparative Administrative Law. Oxford: Oxford University Press; Max Planck Institute for Innovation and Competition Research Paper No. 19-10.
Galetta, D. U. and Hofmann, H. C. H. (2023). Evolving AI-based Automation – The Continuing Relevance of Good Administration. European Law Review, 48(6), pp. 617–635.
Galetta, D. U. et al. (2015). The General Principles of EU Administrative Procedural Law.
Brussels: European Parliament. https://doi.org/10.2861/641578.
Goldsteen, A. et al. (2022). Data Minimization for GDPR Compliance in Machine Learning Models. AI and Ethics, 2, pp. 477–491. https://doi.org/10.1007/s43681-021-00095-8.
Grimmelikhuijsen, S. (2023). Explaining Why the Computer Says No: Algorithmic Transparency Affects the Perceived Trustworthiness of Automated Decision-making. Public Administration Review, 83(2), pp. 241–262.
Hamon, R. et al. (2022). Bridging the Gap Between AI and Explainability in the GDPR: Towards Trustworthiness-by-Design in Automated Decision-Making. IEEE Computational Intelligence Magazine, 17(1), pp. 72–85. https://doi.org/10.1109/MCI.2021.3129960.
Jančova, L. and Fernandes, M. (2022). Digitalisation and Administrative Law. European Parliament, Brussels.
Kovač, P. (2016). The Requirements and Limits of the Codification of Administrative Procedures in Slovenia According to European Trends. Review of Central and East European Law, 41, pp. 427–461. https://doi.org/10.1163/15730352-04103007.
Kovač, P. (2022). Traditional and European Oriented Principles in the Codification of Administrative Procedures in Central Eastern Europe. Croatian and Comparative Public Administration, 22(1), pp. 9–36. https://doi.org/10.31297/hkju.22.1.6.
Kovač, P. and Rudolf, G. (2022). Social Aspects of Democratic Safeguards in Privacy Rights: A Qualitative Study of The European Union and China. Central European Public Administration Review, 20(1), pp. 7–32. https://doi.org/10.17573/cepar.2022.1.01.
Kuziemski, M. and Misuraca, G. (2020). AI Governance in the Public Sector: Three Tales from the Frontiers of Automated Decision-making in Democratic Settings. Telecommunications Policy, 44(6). https://doi.org/10.1016/j.telpol.2020.101976.
Palmiotto, F. (2024). When Is a Decision Automated? A Taxonomy for a Fundamental Rights Analysis. German Law Journal, 25, pp. 210–236. https://doi.org/10.1017/glj.2023.112.
Parycek, P., Schmid, V. and Novak, A. S. (2023). Artificial Intelligence (AI) and Automation in Administrative Procedures: Potentials, Limitations, and Framework Conditions. Journal of the Knowledge Economy, pp. 1–26.
Ranchordas, S. (2024). The Invisible Citizen in the Digital State: Administrative Law Meets Digital Constitutionalism. In J. De Poorter, C. Oirsouw and G. van der Schyff, eds., European Yearbook of Constitutional Law (forthcoming).
Reis, J., Santo, P. E. and Melão, N. F. (2019). Impacts of Artificial Intelligence on Public Administration: A Systematic Literature Review. 14th Iberian Conference on Information Systems and Technologies (CISTI), pp. 1–7.
Rhahla, M., Allegue, S. and Abdellatif, T. (2021). Guidelines for GDPR Compliance in Big Data Systems. Journal of Information Security and Applications, 61(C).
Roehl, U. B. (2023). Automated Decision-making and Good Administration: Views from Inside the Government Machinery. Government Information Quarterly, 40(4).
Rudolf, G. and Kovač, P. (2023). Procedural Challenges of Cross-border Cooperation and Consistency in Personal Data Protection in the EU. NISPAcee Journal of Public Administration and Policy, 16(2), pp. 143–170. https://doi.org/10.2478/nispa-2023-0017.
Tangi, L. et al. (2022). AI Watch, European Landscape on the Use of Artificial Intelligence by the Public Sector. Publications Office of the European Union, Luxembourg.