ZBORNIK ZNANSTVENIH RAZPRAV
Ljubljana Law Review
2024, Letnik / Vol. LXXXIV

Zbornik znanstvenih razprav, letnik LXXXIV, 2024
Ljubljana Law Review, Vol. LXXXIV, 2024

Izdala / Published by: Pravna fakulteta Univerze v Ljubljani, za Založbo Pravne fakultete v Ljubljani Irena Kordež, direktorica / Director

The publication is financially supported by the Slovenian Research Agency (Javna agencija za raziskovalno dejavnost RS).

Uredniški odbor / Editorial Board: dr. Maruša Tekavčič Veber (sekretarka / Secretary), dr. Samo Bardutzky, dr. Aleš Galič, dr. Mojca M. Plesničar, dr. Tilen Štajnpihler Božič

Mednarodni uredniški svet / International Editorial Council: dr. Bojan Bugarič (University of Sheffield), dr. Janko Ferk (University of Klagenfurt; Regional Court of Klagenfurt), dr. Katja Franko Aas (Faculty of Law, University of Oslo), dr. Velinka Grozdanić (Faculty of Law, University of Rijeka), dr. Tatjana Josipović (Faculty of Law, University of Zagreb), dr. Claudia Rudolf (Institute of European Union Law, International Law and Comparative Law, Faculty of Law, University of Vienna), dr. dres. h. c. Joseph Straus (Max Planck Institute for Innovation and Competition)

Odgovorni urednik / Editor-in-Chief: dr. Luka Mišič

Articles are peer-reviewed.

Oblikovanje naslovnice / Cover design: Rok Marinšek
Jezikovni pregled in grafična priprava / Proofreading and layout: Dean Zagorac
Tisk / Printed by: Litteralis, d.o.o.
Cena zvezka skupaj z DDV / Price (VAT incl.):
41,30 EUR
Prvi natis / First press run: 200 izvodov / copies, Ljubljana 2024
Naročila / Orders: telefon: 01 42 03 113, faks: 01 42 03 115, www.pf.uni-lj.si
UDK 34(497.4)(05)
ISSN 1854-3839
ISSN (spletna izdaja / on-line edition): 2464-0077
Spletni strani / Web pages: http://zbornik.pf.uni-lj.si, http://journal.pf.uni-lj.si

Content of this publication is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International licence (http://creativecommons.org/licenses/by-nd/4.0/).

Kazalo / Table of Contents

9 – Samo Bardutzky: Zaslužni profesor dr. Igor Kaučič – sedemdesetletnik / Professor Emeritus Dr Igor Kaučič: Celebrating 70 Years
17 – Jure Spruk: Ideološke premise ameriškega pravnega realizma / Ideological Premises of American Legal Realism
39 – Timotej F. Obreza: Privid pravne konstrukcije. O duhu in porah pravnega (spo)znanja / The Phantasm of Legal Construction.
On the Spirit and Pores of Legal Knowledge
65 – Luka Vavken: (Ne)priznavanje privilegija zoper samoobtožbo pravnim osebam s poudarkom na enoosebni gospodarski družbi / (Non-)Recognition of the Privilege Against Self-Incrimination for Legal Persons, with an Emphasis on Single-Member Companies
87 – Urh Šelih: Izbrani vidiki pravice do izjave v azilnih postopkih / Selected Aspects of the Right to be Heard in Asylum Procedures
109 – Polona Brumen: Pisma iz Tokia / Letters from Tokyo
125 – Anže Mediževec: The Right of Self-defence in the Earth's Orbit / Pravica do samoobrambe v Zemljini orbiti

Agora: Selected Aspects of Intersections Among Artificial Intelligence, Law, and the Right to Life / Agora: Izbrani vidiki presečišč med umetno inteligenco, pravom in pravico do življenja

159 – Vasilka Sancin: Agora: Selected Aspects of Intersections Among Artificial Intelligence, Law, and the Right to Life / Agora: Izbrani vidiki presečišč med umetno inteligenco, pravom in pravico do življenja
167 – Yuval Shany: To Use AI or Not to Use AI? Autonomous Weapon Systems and Their Complicated Relationship with the Right to Life / Uporabljati umetno inteligenco ali ne? Avtonomni orožni sistemi in njihovo zapleteno razmerje s pravico do življenja
189 – Joana Gomes Beirão, Jan Wouters: Towards an International Legal Framework for Lethal Artificial Intelligence Based on Respect for Human Rights: Mission Impossible? / Na poti do mednarodnega pravnega okvira za smrtonosno umetno inteligenco, ki temelji na spoštovanju človekovih pravic: misija nemogoče?
217 – Maruša T. Veber: Artificial Intelligence and Humanitarian Assistance: Reassessing the Role of State Consent / Umetna inteligenca in humanitarna pomoč: preučitev vloge soglasja držav
255 – Anže Singer: Artificial Intelligence in Space: Overview of the European Space Agency and its Role in the AI Environment / Umetna inteligenca v vesolju: pregled Evropske vesoljske agencije in njena vloga v okolju umetne inteligence
279 – Iva Ramuš Cvetkovič: AI—A Possible Solution to the Threats Against Human Lives Arising from Space Objects? / UI – možna rešitev za grožnje človekovemu življenju, ki prihajajo iz objektov v vesolju
307 – Kristina Čufar: AI Software/Hardware as Mind/Body Problem. Global Supply Chains, Shadow Workers, and Wasted Lives / Programska/strojna oprema UI kot problem uma/telesa. Globalne dobavne verige, delavci v senci in zavržena življenja
335 – Povzetki / Abstracts

© The Author(s) 2024
DOI: 10.51940/2024.1.9-16
UDK: 34:929Kaučič I.

Samo Bardutzky*

Professor Emeritus Dr Igor Kaučič: Celebrating 70 Years (Zaslužni profesor dr. Igor Kaučič – sedemdesetletnik)

Let me begin this short piece, dedicated to the personal jubilee of Professor Igor Kaučič, by saying that it is an honour to be the one to write it. First of all because such pieces marking the personal jubilees of our professors are a fine faculty tradition. Through this tradition, as through other faculty customs, we form and strengthen ourselves again and again as a small but closely knit academic community in which generations intertwine, learning from one another and making sure that the knowledge we develop is refined and passed on. Part of this tradition is also the custom that the article published on a seventieth birthday is written by a younger colleague, a student or mentee of the jubilarian, a member of his chair, or a researcher in the field to which the jubilarian has devoted his life's work. Thus the main reason I am honoured to write this short text is that this honour has fallen to me as a student, mentee, co-worker, and colleague of Professor Kaučič.
Although in writing this article I had to review quite a few records and verify many a fact that found its way into this account of the Professor's work, I nevertheless also write it very personally. I write it as someone who attended the Professor's lectures in the first year of law studies and took his oral examination, and who later answered before a committee, of which Professor Kaučič was a member, at the then master's examination. I took that examination at a time when, as an assistant in constitutional law, I was running tutorials in the Professor's course. In the years that followed, I wrote my doctoral dissertation under his careful mentorship. A good decade later, in the winter semester in which I am writing these lines, I lecture on Mondays in Constitutional Procedural Law, a course the Faculty did not include in its curriculum until Professor Kaučič pushed the initiative through. And every Tuesday, when I teach first-year students the basics of constitutional law, I leaf once again through the textbook Ustavno pravo, written by Professor Kaučič and his co-authors, Professors Grad and Zagorc.

The lines that follow, in which I attempt to present the Professor's work, achievements, and endeavours in condensed form, are therefore not merely an expression of respect for the jubilarian. I understand them also as an expression of thanks, in my own name and in the name of all those who, like me, build their knowledge, thinking, and professional work on everything they could learn from Professor Kaučič's lectures and writings. I have tried to structure these lines so that I first outline the Professor's career and academic path (1.), and then briefly present his research (2.), teaching (3.), and professional work (4.).

Zbornik znanstvenih razprav – letnik LXXXIV, 2024 / Ljubljana Law Review – Vol. LXXXIV, 2024, pp. 9–16. ISSN 1854-3839, eISSN 2464-0077.
* Associate Professor, Chair of Constitutional Law, Faculty of Law, University of Ljubljana, samo.bardutzky@pf.uni-lj.si.
1. An Outline of the Jubilarian's Path

Igor Kaučič was born on 31 October 1954 in Ljubljana. He graduated from the Faculty of Law of the University of Ljubljana in 1982. He obtained the title of Master of Legal Sciences in 1989 with the defence of his master's thesis Institucija šefa države v ustavnem sistemu Socialistične republike Slovenije, written under the mentorship of Prof. Dr Ciril Ribičič. He received his doctorate in 1993 under the mentorship of Prof. Dr Majda Strobl with the thesis Referendum v ustavnorevizijskem postopku. He was initially an assistant at the then Higher School for Internal Affairs; in 1989 he joined the Faculty of Law, where he became Assistant Professor (docent) of Constitutional Law in 1994. In 1999 he was first elected Associate Professor, and in 2015 Full Professor. Between 2006 and 2014 Professor Kaučič headed our chair. At the Faculty he also served as president of the management board and twice as vice-dean: between 2004 and 2006 as vice-dean for economic affairs and research, and between 2012 and 2014 as vice-dean for study affairs. He was also a member of the university disciplinary committee (1994–1999) and of the Commission of the Republic of Slovenia for Awards and Recognitions for Research Work (2004–2006).

In 2022 the University of Ljubljana conferred on Professor Kaučič the title of Professor Emeritus, because for many years he had "with his diligent and excellent teaching, research and leadership work contributed to the development and functioning of both the Faculty and the University of Ljubljana", while at the same time "at a high professional level [...] he was continuously and actively involved in building the Slovenian state order, both in the amendments of the Constitution and in the adoption of systemic legislation."

2. The Jubilarian's Research Work

The scope of Professor Kaučič's research over the past 35 years is attested by the figures in bibliographic databases: an enviable 48 original scientific articles, 26 independent scientific contributions in monographs, and the authorship or co-authorship of 13 scientific monographs.
Alongside this we can place an admirable list of the Professor's conference contributions in Slovenia and abroad. If we turn to the content of the Professor's bibliography, a closer acquaintance with his extensive opus reveals that on his research path he has pursued above all three sets of research questions, three narrower themes of constitutional law, which I present below.

2.1. Direct Democracy

The most extensive list of the jubilarian's works undoubtedly concerns the various institutes of direct democracy in Slovenian and comparative law, in particular the referendum, whose changing role in the Slovenian constitutional system Professor Kaučič has been following ever since the Constitution took effect in 1991. The jubilarian devoted himself to the referendum already in his first scientific monograph, Referendum in sprememba ustave (1994). In articles and chapters he has given scholarly coverage, beyond the constitutional-revision referendum, to the various forms of referendum established in the Slovenian system (Pravna ureditev lokalnega referenduma, 2020; Referendumsko odločanje o mednarodnih povezavah, 2003; Ustavna ureditev referenduma o mednarodnih povezavah, 2003). He devoted most attention to the legislative referendum (for instance in the scientific monograph Kaučič et al., Zakonodajni referendum: pravna ureditev in praksa v Sloveniji, 2010) and to the constitutional dilemmas it opens up (Ustavne omejitve in prepovedi zakonodajnega referenduma, 2014; Zavrnitveni zakonodajni referendum, 2013; Referendum na zahtevo parlamentarne opozicije, 2010), which he also treated comparatively (Zakonodajni referendum v nekaterih evropskih državah s posebnim poudarkom na švicarski ureditvi, with Miro Cerar, 2003). He also commented on the regulation of the legislative referendum in the 2011 (eds. Šturm and Avbelj) and 2019 (ed. Avbelj) editions of the Commentary on the Constitution of the Republic of Slovenia.
He has also presented the Slovenian regulation of direct democracy and its specific features to an international readership with publications in foreign languages, for instance in the articles Referendum challenges in the Republic of Slovenia (with Bruna Žuber, 2019), Il referendum nella Repubblica slovena (1999), Formen der unmittelbaren Demokratie in Slowenien (2004), and Novo ustavno uređenje zakonodavnog referenduma u Sloveniji (2014), and with the chapter on Slovenia in the monograph The Legal Limits of Direct Democracy: A Comparative Analysis of Referendums and Initiatives Across Europe, published in co-authorship with Bruna Žuber in 2021.

2.2. Constitutional Revision

The second theme to receive the jubilarian's attention is the amendment of the Constitution. His first scientific monograph, Referendum in sprememba ustave, published in 1994 and mentioned above, stands at the intersection of the study of direct democracy and of constitutional revision. In the years that followed, the jubilarian contributed many more times to the debate on constitutional revision. Some of his contributions were cross-cutting, offering an abstract treatment of the regulation of the constitutional-revision procedure in Slovenia, in particular through the comparative method, for instance Prepovedi in omejitve revizije ustave (1992) and Postopek spreminjanja ustave v našem in primerjalnem pravu (1991), as well as Pravo in ustavne reforme (2007). On the other hand, the jubilarian also dealt with concrete constitutional-revision projects, for example in the article Izmene Ustava Republike Slovenije radi uključenja u Evropsku uniju (2010) and in the chapter Spreminjanje Ustave Republike Slovenije: 1991–2021, published in 2021. In the abstract, the jubilarian also dealt with the institute of the constitutional act as the central legal act of the Slovenian constitutional-revision process (Ustavni zakon v slovenskem ustavnem sistemu, 2001; Pravna narava preambule ustavnega zakona, 2012) and commented on Chapter IX of the Constitution in two editions of the Commentary on the Constitution (2011, 2019).

2.3.
The Head of State

In his research the jubilarian has frequently turned his attention to questions of the organisation of the state, in particular also to parliamentary law, for instance in the works Poslanska imuniteta (1993), Zaupnica vladi (1994), Postopek parlamentarne preiskave (1995), Omejitev ali odprava poslanske imunitete (2005), and Razmerje zakonodajne in izvršilne oblasti do nadzornih institucij (2007). But the jubilarian's attention has certainly been captured most often by the function of the head of state, that is, the President of the Republic.

He addressed the question of the constitutional role of the head of state even before the adoption of the new Constitution, in connection with the then collective republican head of state, when he prepared his master's thesis on this subject and also wrote Aktualna vprašanja volitev predsedstva SR Slovenije (1989) and Pristojnosti in razmerja predsedstva Republike Slovenije v organizaciji oblasti (1991). After the adoption of the Constitution in 1991, Professor Kaučič examined the newly established regulation of the President of the Republic (Volitve predsednika republike and Obtožba predsednika republike zaradi kršitve ustave ali hujše kršitve zakona, both 1992), and he has returned regularly to questions of the President's role and position, for example in the contributions Predsednik republike in sodstvo (2002), Predsednik republike med ustavo in politično prakso (2006), Vloga predsednika republike v parlamentarnem sistemu (2011), Pristojnosti predsednika republike pri oblikovanju vlade (2012), and Predlagalne pristojnosti Predsednika republike (2014), in 2017 with the chapter Predsednik republike in varstvo ustavnosti zakonov, and in 2011 and 2019 with commentaries on the articles of subchapter IV.C of the Constitution. It should be emphasised in particular that in 2016 the jubilarian also edited the extensive scholarly volume Ustavni položaj predsednika republike, which is at the same time the first Slovenian publication to present in a rounded way the views of constitutional law experts on the key questions of the constitutional regulation of the head of our state. In this volume Professor Kaučič published no fewer than three chapters (Volitve predsednika Republike Slovenije, Pristojnosti predsednika republike v postopkih volitev in imenovanj, and Predsednik republike in volitve ustavnih sodnikov), thereby consolidating the significance of his invaluable contribution to the study of the constitutional role of the head of state.

2.4. Other Themes of Constitutional Law

The above attempt at a structured presentation of the themes to which the jubilarian has devoted himself most does not mean that Professor Kaučič has not researched other questions of constitutional law. With his contributions he takes part in the debate on numerous constitutional dilemmas that have opened up in the first four decades of the validity of the Slovenian Constitution of 1991. Classic constitutional questions are addressed, for instance, in the discussions Načela ustavne ureditve razmerja med državo in cerkvijo v Sloveniji (2001) and Ustavnopravni temelji izrednega stanja (2024). The field of human rights protection is covered by La tutela del diritto constituzionale ad un salubre ambiente di vita in Slovenia (1998) and by the chapter on Slovenia in the volume Fundamental Rights in Europe and North America (ed. Weber), written together with Janez Šinkovec and Arne Mavčič. With quite a few of his published works the jubilarian has also contributed to building the literature on constitutional adjudication, in particular in connection with the election of judges of the Constitutional Court (Spremembe položaja in volitev sodnikov Ustavnega sodišča, 2007; Izbor sudija ustavnih sudova u Sloveniji i komparativno, 2014; Volitve ustavnih sodnikov in tripartitne kandidatne liste, 2023) and with the legal effects of Constitutional Court decisions (Ugotovitvene odločbe Ustavnega sodišča in Državni zbor, 2011).

3.
The Jubilarian as Teacher and Mentor

At the Faculty, Professor Kaučič is known as a lecturer popular among students, above all for his lively lecturing style and the vivid comparisons he draws on when explaining the institutes of constitutional law. To this day it happens that a former classmate of mine from the Faculty recalls with a smile some witty remark of Professor Kaučič's that comes to mind whenever he hears of this or that state organ or rule of constitutional law.

For many years Professor Kaučič lectured at the Faculty, together with Professor Emeritus Dr Franc Grad, on the central course of our chair, Constitutional Law. The two professors were also the driving forces behind the development of the study literature for this central, compulsory course. In addition to participating in the creation of the textbook Državna ureditev Slovenije (co-authored also by Professor Ivan Kristan and Professor Ciril Ribičič), the two professors wrote the textbooks Ustavno pravo Slovenije (1997) and Ustavna ureditev Slovenije, first published in 1999. These were intended primarily for students of non-law programmes, but the latter in particular, with its concise and clearly presented information, also became a favourite among first-year students of the Faculty of Law. In 2016 a new work appeared, written by the jubilarian together with Professor Grad and Professor Saša Zagorc: the extensive monograph Ustavno pravo, which finally gave Slovenian lawyers a rounded unit of study literature on constitutional law intended specifically for law students.

Nor was it only the students of our faculty who learned constitutional law from the Professor. Professor Kaučič still lectures on constitutional law at the Faculty of Law of the University of Maribor. At the University of Ljubljana he lectured for many years on the course Ustavna ureditev Slovenije to future social workers; he also taught at the Faculty of Criminal Justice and Security of the University of Maribor and at the Faculty of Public Administration of the University of Ljubljana.
In addition, Professor Kaučič held courses in the postgraduate programmes of our faculty: before the Bologna reform he held the compulsory course Constitutional Law and elective courses in the then scientific master's programme in Constitutional Law, while after the reform he took over lectures in the so-called Bologna master's programme within the State Law module and held courses in the renewed doctoral programme.

For the jubilarian, teaching was never limited to lectures and examinations. Professor Kaučič was also extremely popular among students as an understanding and fair mentor who knows how to listen to a student and to advise when needed. It is no surprise, then, that so many students completed their studies with the successful defence of a final thesis written under the jubilarian's mentorship. Between 1989 and 2016, while the "old", pre-Bologna university programme was running, Professor Kaučič supervised no fewer than 179 diploma theses. Before the reform of studies, 11 students wrote the more extensive master's thesis leading to the title of Master of Legal Sciences under his mentorship. After the Bologna reform they were joined by a further 25 students with master's theses and seven students with diploma theses written under his mentorship. Professor Kaučič was also the mentor of seven doctoral dissertations successfully defended between 2006 and 2024.

4. The Jubilarian's Professional Work

As the lines above show, the jubilarian has travelled an enviable career path as a researcher and teacher of constitutional law. Throughout this time he has also made sure that his extensive knowledge and profound understanding of constitutional law are available for resolving the concrete questions that arise for state organs and others in the everyday life of Slovenian constitutional law.
The legislative proposals drafted after their drafters consulted Professor Kaučič are far too numerous to be listed in such a short piece. The same goes for the consultations organised by state organs on open questions of the legislative regulation of constitutional institutes, at which the jubilarian participated with his contributions. By way of example, however, we can note that in every term of the National Assembly of the Republic of Slovenia since 2000 the jubilarian has taken part in one of the expert groups established at the Constitutional Commission of the National Assembly, thus also watching over the projects to amend the highest legal act in the country. He also cooperated with the National Assembly when its new Rules of Procedure (PoDZ-1) were being prepared, chairing the working group that charted the new regulation at the heart of Slovenian parliamentary law. Between 1996 and 2000 he was a member of the State Electoral Commission.

The jubilarian's professional work also includes membership of editorial boards: of the journals Revus (2003–2010) and Lex Localis (since 2013), and of the Faculty of Law publishing house's book series Scripta (since 2002). His bibliography contains both professional and popular articles, and to this list of publications we must add that Professor Kaučič is by no means known as a constitutional law expert in Slovenia only among (constitutional) lawyers. His clear explanations of the institutes of constitutional law, which journalists ask him for, help many a television viewer and radio news listener understand the demanding legal aspects of current political debates. This is also why many of the jubilarian's younger colleagues, when mentioning in conversation that we work in constitutional law, are often asked in reply whether we happen to know Professor Kaučič. It is thanks precisely to the Professor's calm and logical explanations that constitutional principles and institutes have become more familiar to many a Slovenian.
Professor Kaučič also works for the strengthening and development of Slovenian constitutional law, beyond university lecturing and beyond the publication of research findings in books and articles, within our central professional association, the Slovenian Constitutional Law Association (Društvo za ustavno pravo Slovenije). Within the association, over which he currently also presides, he has organised and taken part in numerous public debates through which the association follows the development of constitutional law. The jubilarian is similarly active at the central professional event of our profession, the Days of Slovenian Lawyers, where he regularly organises the constitutional law sections and thus ensures that the wider legal public, too, becomes acquainted with the dilemmas occupying constitutional lawyers. The significance of the jubilarian's work in the professional life of Slovenian lawyers is shown also by the fact that in 2013 the Union of Associations of Slovenian Lawyers named him Lawyer of the Year.

* * *

How can one even conclude this condensed account of Professor Kaučič's rich opus in teaching, research, and professional work? Perhaps with the observation that even after retiring and receiving the title of Professor Emeritus he continues to build Slovenian constitutional law at no slower a pace. In the year of his round anniversary I sat on the committee for the defence of the doctorate of Andreja Krabonja, written under Professor Kaučič's mentorship. Forthcoming is the Professor's monograph, which we urgently need, since it will give us the first rounded account of the Slovenian referendum regulation after the 2013 amendment of the Constitution and the more extensive amendments of the Referendum and Popular Initiative Act that followed it.

Our conversations in recent weeks and months often revolve around three topics. First, the three constitutional-revision projects currently before our constitution-maker, since in this term of the National Assembly Professor Kaučič has taken on the demanding role of coordinator of all three expert groups of the Constitutional Commission of the National Assembly.
Then, the delivery of lectures for our master's students, where this year we again jointly teach the course Constitutional Procedural Law. And, when we manage, the future activities of the Slovenian Constitutional Law Association. The jubilarian thus remains active with his written contributions, his lectures at the Faculty, his professional participation in amending our highest legal act, and his leadership of our association.

If I was able to open this short piece with expressions of respect and thanks for all his past work, let me close it with sincere wishes, in the name of colleagues at the chair and at the Faculty, that the jubilarian may long continue to shape Slovenian constitutional law scholarship so successfully and fruitfully.

© The Author(s) 2024
DOI: 10.51940/2024.1.17-37
UDK: 340.12:165.82(73)
Znanstveni članek / Scientific article

Jure Spruk*

Ideological Premises of American Legal Realism (Ideološke premise ameriškega pravnega realizma)

* Doctor of Political Science, doctoral student at the Faculty of Law, University of Ljubljana.

Abstract

The author examines American legal realism and its ideological traces. From the standpoint of legal theory, American legal realism comprises a theory of judicial decision-making that developed, especially in the 1920s and 1930s, as a response to Langdell's legal formalism and to formalist judicial decision-making. Instead of legal rules, the American legal realists placed at the centre of their analysis of judicial decision-making the facts derived from concrete cases, and they were interested less in the internal logic of legal reasoning than in its consequences. Placing the theoretical emphases of American legal realism in the social context of their emergence reveals their ideological implications. From an ideological standpoint, the critique of formalism amounted to a critique of classical liberal ideological constructs, above all the neutral and free market, which were rapidly concentrating social power in the hands of individuals and corporations to the detriment of less privileged social groups.
The critique of the exact calculation of the correctness of judicial decisions was in fact a critique of the natural inevitability of the market as a just mediator between the interests of materially unequal individuals and social groups.

Keywords: American legal realism, ideology, theory of law.

Zbornik znanstvenih razprav – letnik LXXXIV, 2024 / Ljubljana Law Review – Vol. LXXXIV, 2024, pp. 17–37. ISSN 1854-3839, eISSN 2464-0077.

1. Introduction

The present discussion is devoted to American legal realism, the orientation in legal theory in which resistance to the formalist conception of law, as established in the second half of the nineteenth century under the leadership of Harvard professor Christopher Langdell, condensed at the beginning of the twentieth century. More precisely, the core of the discussion will be devoted to explaining the simultaneous existence of seemingly opposing categories, the ideological and the real, that is, to explaining the ideological admixtures in the ostensibly empirical-descriptive realist picture of law as it emerged from the theoretical foundations of American legal realism. The question that arises here, then, is how much of the ideological the real can bear in the theory of American legal realism. The emphasis of the discussion will be on clarifying the relation between the real and the ideological, the opening thesis being that the real does not challenge the ideal because the latter cannot be detected in the former; rather, it challenges it because the real and the ideological are both contextually conditioned. The criteria of the real and the ideological are always tied to concrete social relations. The contextual conditioning of the real and the ideological means that they are built up out of concrete material and intellectual factors that arise as products of the political, economic, and social interconnectedness of society. Law as a social phenomenon simply cannot completely escape all these forms of interconnectedness. Fundamentally, American legal realism challenged formalism because of its decontextualisation of law, i.e.
its disregard of the causes and consequences of the legal regulation of social relations. For it is only by taking into account the consequences of legal (read: judicial) decision-making that law is placed in its "natural" environment: human society with all its conjunctive and disjunctive processes. Langdell's infatuation with legal science presupposed the isolation of law from society and its biases; instead of the social consequences of law, it focused above all on the perfected logic of conceptual derivations. American legal realism fundamentally rejected such remoteness of law from life, in whose name the interests of underprivileged social groups (too) often suffered. We can understand law as legal science only in so far as we manage to ignore one of the key emphases of legal realism, according to which behind the creation, amendment, and application of law there always stands a human being with all of his or her responsibility. The legal realists could equally have directed this point against the most eminent twentieth-century attempt at the methodological perfection of legal science, Kelsen's pure theory of law, even though Kelsen himself understood law as a social phenomenon,1 and legal science as a social science2 or a humanistic science.3

1 Kelsen, 2005, p. 20.
2 Ibid., p. 10.
3 Ibid., p. 27.

The key to recognising the ideological premises of American legal realism lies in the instrumental dimension of law. Its importance was emphasised by almost all the pioneers of this orientation in legal theory, which we may attribute to the intrinsic connection between instrumentalism, will, means, ends, and consequences. Legal realism is in this respect interdisciplinary, since it strives for precisely what Kelsen reproached traditional legal science for:

"Today there is hardly a professional field into whose pastures the lawyer would hesitate to intrude. He is even convinced that by borrowing from other disciplines he will actually enhance his scientific reputation.
In the process, however, legal science itself is being lost."4

The legal realists view the above reproach with a good measure of scepticism, and the more sardonic among them might recognise in it a handy starting point for mocking and cynical comments. To be a (legal) realist means that in explaining a chosen phenomenon one tries to encompass as many factors of influence as possible; in the case of American legal realism this means taking into account the extra-legal factors influencing (judge-made) law, be they political, economic, social, or psychological. What follows is a discussion of the ideological premises of American legal realism, intended to show how, at the level of legal theory, the real and the ideological, the material and the ideational, coincide.

2. The Real and the Ideological: Irreconcilable Opposition or Useful Alliance?

The real basically means something tangible, perceptible, actual. As such, it is the product of a rough description that takes no account of value judgements. If we are realistic, we are interested in the actual state of affairs and not in the state we would wish for. Of a realistic person we say that he stands firmly on the ground, since he is not easily seduced by fine words or professed constructive intentions. The realist waits above all for deeds, and in planning relies only on already existing factors. In legal theory we distinguish two basic categories of law: law de lege lata and law de lege ferenda. The legal realist (like the legal positivist) is evidently interested in the former, that is, in law as it actually is. Kelsen, for example, saw his pure theory of law as a radically realistic theory of law because it seeks neither to legitimise law as just nor to disqualify it as unjust.5 He was interested only in the norm, in itself, as it exists without human accretions. Kelsen's ontological realism points to his central aim: to establish law as a science of (legal) norms. In the legal context, the only reality for Kelsen was positive law.
Ni ga zanimalo, kaj se dogaja v naravni realnosti, temveč ga je zanimalo, kaj naj se zgodi glede na pozitivno pravo, torej v pravni realnosti, zato je tako ameriškemu kot tudi skandinavskemu (Alf Ross) realizmu zaradi njunega behaviorizma odrekal realistično poreklo.6 Sklic na Kelsnovo teorijo prava je tukaj priročen, saj pokaže na konstrukcijo normativne realnosti, ki je samozadostna in se zato odmika od vsakršnih socialnih, političnih, ekonomskih in drugih vplivov. Ameriški pravni realizem (tako kot tudi skandinavski realizem) realnost pojmuje bistveno širše, namesto pravne realnosti ga zanima družbena realnost, katere del je pravo. Kelsnovi čisti teoriji prava delamo veliko krivico, če jo skušamo na silo umestiti v družbeno realnost, saj se tedaj izkaže za v resnici nerealno teorijo.7 Čista teorija prava je lahko realna zgolj v izključno pravnem univerzumu, ki pa je vselej del večjega družbenega univerzuma, kar pa je Kelsen seveda razumel. Kelsen s čisto teorijo prava ni želel razložiti vzročno-posledičnega delovanja prava, zato s sociološkega vidika ne gre za realistično teorijo. Kelsnova čista teorija prava je toliko očiščena, da je od prava kot socialne resničnosti ostala le še formalna struktura prava.8 Ta vsebinska izpraznjenost je pravzaprav temeljni prvi pogoj čistosti pravne teorije, ki ne pristaja na naravnopravniške kriterije moralnosti prava ali pravnorealistični pragmatizem oziroma instrumentalizem. Za ameriške pravne realiste je bila zato Kelsnova teorija sterilni intelektualni dosežek, ki se je namesto življenju prava posvečal njegovim formalnologičnim razsežnostim.9 Nerealistična teorija je kontradiktoren pojem, saj je smisel teorije prav v tem, da opiše obstoječ pojav.
4 Prav tam, str. 19.
5 Prav tam, str. 30.
6 Bindreiter, 2013, str. 116.
Realnost teorije je vedno znova pogojena z okoljem, v katerem določimo predmet proučevanja, na kar nas opominja Kelsnova čista teorija prava, ki se omeji na svet prava in pravne norme. Ameriški pravni realizem je v osnovi teorija o sodniškem odločanju, pri čemer drugače kot Kelsen za svoje izhodišče vzame širše okolje, v katerega skuša umestiti predmet proučevanja. Ameriški pravni realisti so skušali sodniško odločanje umestiti v družbo kot celoto, zato jih ni zanimala zgolj pravna razsežnost tega pojava, temveč predvsem njegova politična, ekonomska in psihološka razsežnost, kar se je kazalo tudi v izboru metodoloških pristopov k proučevanju sodniškega odločanja, pri katerih v ospredju ni bil toliko vpliv pravne norme, temveč bolj vpliv zunajpravnih kriterijev odločanja.10
Zato je ameriški pravni realizem primer interdisciplinarnega pristopa k proučevanju prava, ki so ga na začetku 20. stoletja v ZDA razvijali zlasti na univerzah Yale in Columbia. Interdisciplinarno povezovanje prava in družboslovnih ved je za pravne realiste tudi danes najustreznejša metodološka izbira, saj z njo celoviteje zajamejo nasprotja, ki zaznamujejo sodobne družbe. Ta nasprotja so materialna in idejna, upravljati pa mora z njimi praktično vsaka demokratična politična skupnost, pri čemer ima zelo pomembno vlogo pravo kot niz obvezujočih in učinkovitih normativnih vodil, ki so podprta z državno prisilo. Kot ključna predpostavka pravnih realistov se kaže prepričanje o tem, da sodniki v resnici ne presojajo zgolj o zmagi in porazu v konkretnih sodnih primerih, temveč ob tem upoštevajo tudi javnopolitične kriterije odločanja, tako imenovane policy kriterije, pri katerih imajo znatno vlogo sodnikove ideološke premise, iz katerih
7 Kot primer lahko navedemo Kelsnovo vztrajanje pri zanikanju obstoja pravnih praznin v pravnem sistemu.
Nemški svobodnopravniki in ameriški pravni realisti so (z nekaj pretiravanja) prepričljivo pokazali, da je razumevanje prava kot popolnega sistema pomanjkljivo.
8 Pavčnik, 2015, str. 37.
9 Telman, 2010, str. 354.
10 To nikakor ne pomeni, da so ameriški pravni realisti pravne norme razumeli kot popolnoma irelevantne priveske sodniškega odločanja.
je mogoče razbrati njegovo razumevanje politične skupnosti kot celote. Pojem ideologija tukaj označuje hierarhično urejene vrednote, kot so na primer svoboda, enakost, individualizem, kolektivizem itd., na podlagi katerih nastajajo vizije življenja v urejeni politični skupnosti, kar zajema tudi razumevanje vertikalnih in horizontalnih razmerij med ljudmi. Ideologija nam pomaga poiskati odgovore na vprašanja, kot so, kdo smo ter kaj želimo in kako to doseči. V tem smislu je ideologija vselej produkt idejnega oziroma intelektualnega napora, usmerjenega v urejanje medčloveških odnosov. Bolj ko so slednji nestabilni in nepredvidljivi, večjo moč dobi ideologija. Taka opredelitev ideologije se očitno odmika od bolj znanih opredelitev, denimo de Tracyjeve znanosti o idejah ali Marxove lažne zavesti, tj. ideologije niti ne povzdiguje niti je ne stigmatizira, pokaže pa, da je ideologija nujen sestavni del družbenopolitične strukture. Med realnim in ideološkim ni vnaprejšnjega nasprotja, realno se vselej skuša umestiti v dominantno ideologijo, medtem ko se vsaka ideologija skuša uveljaviti kot realnost. Ideološka zasnova je doktrinarna, zato ideologija realnost predstavlja kot neizogibno, saj le tako lahko realno usmerja proti spremembi ali ohranitvi obstoječega stanja. V tem pogledu vse ideologije delujejo po enakem načelu.11 Za analitične namene je vsekakor dobrodošlo, da razlikujemo med realnim in ideološkim, na ravni družbe pa se obe kategoriji zlijeta ena v drugo, ko je govora o dominantni ideološki shemi, ki osmišlja dano realnost.
V vsakdanjem političnem življenju je ideologija hrbtna stran pragmatizma,12 tj. profanim oblikam boja za oblast nudi idejno oporo. Ob tem se vnovič vračam k čisti teoriji prava, s katero je Kelsen želel pravno znanost očistiti političnih in ideoloških elementov. Da mu je to uspelo, je Kelsen verjel, ker so različni politični akterji tistega časa čisto teorijo prava prištevali k ideološki zakladnici svojih nasprotnikov. »Toda,« je ugotavljal Kelsen, »prav to še bolje kakor teorija sama dokazuje njeno čistost«.13
3. Teoretično jedro ameriškega pravnega realizma
Osrednji avtorji ameriškega pravnega realizma – Roscoe Pound, Karl Llewellyn in Jerome Frank14 – niso tvorili enotne teoretične šole. Med njimi je bilo kar nekaj pomembnih razlik, ki pa jih v tej razpravi ni mogoče podrobneje obrazložiti. Namen tega razdelka je predstaviti teoretično jedro ameriškega pravnega realizma, ki se nanaša na njegovo metodološko (razlagalno) razsežnost. Skupna značilnost pravzaprav vseh pravnih realistov (ameriških, skandinavskih ali italijanskih) je njihov razlagalni skepticizem (zmerni ali skrajni), tj.
11 Kar druži različne ideologije, je njihov odklonilen odnos do ideološkosti per se, s čimer se kaže naravnanost ideološkega, da se zlije z realnostjo. Vse glavne moderne ideološke doktrine – liberalizem, konservativizem in socializem – v ideologijo projicirajo negativne vplive na družbo. Paradoksalno posamezna ideologija postane dominantna tedaj, ko se ji uspe umestiti kot neideološka realnost.
12 Heywood, 2012, str. 3.
13 Kelsen, 2005, str. 11.
14 Seznam z naštetimi imeni ni izčrpen, saj bi lahko dodali tudi imena, kot so Oliver Wendell Holmes, Benjamin Cardozo, John Chipman Gray, Underhill Moore, Max Radin, Hermann Oliphant, Thurman Arnold, Robert Hale in Arthur Corbin.
metodološki pristop, ki kot temeljno predpostavko sprejema pomensko odprtost pravnih norm, zavrača pa predpostavko o enem in edinem pravilnem odgovoru na pravna vprašanja, katere najbolj znani zagovornik tudi danes ostaja Ronald Dworkin.15 Mauro Barberis meni, da je prav razlagalni skepticizem oziroma teorija o razlaganju tista značilnost, ki pravni realizem napravlja za samostojni nauk, zlasti v razmerju do pravnega pozitivizma, s katerim ga družita etični subjektivizem in ločevanje prava in morale.16 Ključna metodološka zahteva pravnih realistov je dosledno razlikovanje med vrednotami in dejstvi.17 Ameriški pravni realizem primarno namenja pozornost sodniškemu pravu,18 na vsebino katerega vplivajo tudi vrednote posameznih sodnikov, kadar upoštevajo javnopolitične kriterije odločanja in kadar se v procesu odločanja zatečejo k moralnim in političnim teorijam. Ob omembi sodniškega prava je treba dodati pojasnilo o nastajanju sodniškega prava. Ustvarjanje prava,19 kar sodniki počnejo v večini primerov, ne vodi mehanično do nastanka sodniškega prava, saj slednje nastane šele, kadar posamična pravna pravila, ki izhajajo iz konkretnega sodnega spora, začnejo učinkovati splošno in abstraktno.20 Osrednje mesto pri nastajanju sodniškega prava, kot izhaja iz teorije pravnega realizma, imajo dejstva in ne pravne norme. Brian Leiter kot glavno maksimo pravnega realizma navaja deskriptivno tezo o sodniškem odločanju: sodnike pri odločanju bolj kot pravne norme stimulirajo dejstva konkretnih primerov.21 To pravzaprav pomeni, da sodnik bolj kot zakonu ali predhodnim sodnim odločbam (stare decisis) sledi dejstvom primera in iz njih izpeljani pravičnosti. V tem pogledu proces sojenja bolj kot postavitev silogističnega razmerja zaznamuje pragmatično »tehtanje« dejstev. Z vidika nastajanja sodniškega prava je pomembna zlasti skepsa pravnih realistov do doktrine stare decisis.
Pravni realisti poudarjajo svobodo posameznih sodnikov pri upoštevanju predhodnih sodnih odločb, zato zanje razlikovanje med ratio decidendi in obiter dictum niti ni bistveno.22 Kljub temu pa pravni realizem (v svoji radikalni obliki) sojenje, tj. reševanje konkretnih sporov, pojmuje kot splošnejšo funkcijo od zakonodajne funkcije.23 Pravni realisti precedenčnega prava ne zavračajo a priori, vsekakor pa drži, da pravo razumejo kot izrazito dinamičen pojav, ki naj drži korak z razvojem družbe. Hessel Yntema je ključne hipoteze ameriškega pravnega realizma strnil v štiri točke: 1. koncipiranje prava kot sredstva za doseganje ciljev, 2. koncipiranje prava kot družbenega pojava, 3. koncipiranje vzajemnih sprememb tako v pravu kot tudi v družbi in 4.
15 Teorijo pravnega realizma sta poleg Dworkina zavrnila tudi Lon Fuller in H. L. A. Hart. Za Fullerjevo kritiko glej The Law in Quest of Itself (1940), za Hartovo kritiko glej The Concept of Law (1961) in za Dworkinovo kritiko glej Law's Empire (1986).
16 Barberis, 2011, str. 37.
17 Prav tam, str. 39.
18 John Chipman Gray je kot edino pravo označil sodniško pravo, torej pravo je zgolj tisto, kar odločijo sodniki. V tem pogledu je zakon pravni vir, sodna odločba pa pravo. Grayevo favoriziranje sodišč pred zakonodajalcem je odraz njegove osredotočenosti na ameriški sistem common law. Podobno kot Gray je pravo razumel tudi enfant terrible ameriškega pravnega realizma Jerome Frank.
19 Riccardo Guastini akt interpretacije pravne norme označuje za akt odločanja oziroma akt volje in ne za kognitivni oziroma spoznavni akt. Za več glej v Guastini, 2013.
20 Novak, 2023, str. 341.
21 Leiter, 2007, str. 21.
koncipiranje pravnega raziskovanja kot znanstvene dejavnosti.24
Razen točke 4, ki je še najbližje Llewellynu, navedene hipoteze nakazujejo skupno izhodišče kritične analize osrednjega antagonista – pravnega formalizma, ki ga je poosebljal Christopher Langdell skupaj s sodelavci na Univerzi Harvard. Formalizem je v osnovi metodologija, pri kateri je pomembna zlasti rigoroznost logike, ki se dviga nad instrumentalnostjo prava in njegovimi družbenimi posledicami. Mogoče najznamenitejši stavek v zgodovini pravne misli, ki ga je Oliver Wendell Holmes zapisal v razpravi The Common Law iz leta 1881, je deloval kot spodbuda za pionirje ameriškega pravnega realizma: življenja prava ni zaznamovala logika, temveč izkušnja. Zaznane potrebe časa, prevladujoče moralne in politične teorije, javnopolitične intuicije in predsodki posameznih sodnikov so k zakonom, s katerimi se vlada ljudem, prispevali veliko več kot silogizem.25 Sporočilna moč Holmesovega stavka je tako pomembna zato, ker je z njim artikuliral zavračanje podobe prava kot tesno zaprtega in samozadostnega sistema, hkrati pa pokazal, da se v pravu kot človeški stvaritvi prepletajo elementi subjektivnega in objektivnega.
Ameriški pravni realisti so v svojih razlagah prava resda dajali prednost iz konkretnih primerov izvedenim dejstvom pred pravnimi pravili, vendar to ne pomeni, da so vpliv slednjih v celoti zanikali. Tako je na primer Karl Llewellyn, nesporni voditelj ameriških pravnih realistov, pojasnil, da njegov namen ni niti v odvzemanju pomembnosti materialnih pravnih pravil in pravic niti v izključitvi obeh kategorij iz polja prava.26 Izključitev pravnih pravil in pravic iz polja prava Llewellynu resda težko pripišemo,27 je pa obema kategorijama očitno odvzel pomembnost, ki jima jo je namenil formalizem. Za Llewellyna pravna pravila niso bila osrednja referenca v pravnem diskurzu, zato je predlagal, da se jih nadomesti s kontaktnim območjem med sodniškim vedenjem in vedenjem
22 Cross in Harris, 1991, str. 49–50.
23 Barberis, 2016, str. 8.
24 Yntema, 1960, str. 323.
25 Holmes, 1880, str. 1.
26 Llewellyn, 1930/1993, str. 56.
27 Llewellyn je v pravnih pravilih prepoznal avtoritativna pravila, ki uradnim osebam (beri: sodnikom) sporočajo, kaj naj storijo. Te osebe tovrstna pravila bodisi popolnoma ignorirajo bodisi jih delno ali v celoti upoštevajo.
laikov.28 Povedano drugače, Llewellyn je predlagal, da se pravni diskurz pomakne od normativizma k pragmatizmu. Prevzemanje behavioristične paradigme je pravnim realistom služilo prav v tem pogledu, tj. jedro pravnega diskurza so pomaknili od neoprijemljivih norm k empirično preverljivemu vedenju konkretnih ljudi, ki omogoča napovedovanje. Pragmatizem pravnih realistov se je kazal v njihovem glavnem cilju – napovedovanju, kaj bo dejansko storilo sodišče.29 Pravo, na katerega se osredotoča pravni realizem, je zato sodniško pravo, ki je v primerjavi z zakonskim pravom prožnejše pri iskanju rešitev sodnih sporov. Pravni pragmatizem se od pravnega formalizma razlikuje v perspektivi, tj. formalistično pojmovanje sojenja je vezano na pogled nazaj k zapisani normi ali precedensu, pragmatizem pa to dvoje uporablja zgolj kot izhodišče za svoje urejanje bodočih družbenih interakcij. Teorija pravnega realizma v središče postavlja podobo sodnika, ki se odziva na dane družbene pogoje, kar pomeni, da ga bolj kot dogmatsko izvedena pravilnost pravnega razlogovanja zanimajo širše posledice vsebinske odločitve, ki jo sprejmejo v konkretnem sporu.
Pragmatični sodnik torej manj pozornosti namenja vprašanju usklajenosti dejanskega stanja in semantičnega dosega pravnega pravila, saj ga zanima zlasti namen pravnega pravila, iz katerega razbira posledice posameznih rešitev.30 Namensko interpretacijo pravnih pravil sestavljata subjektivni in objektivni namen, pri čemer se subjektivni namen osredotoča na avtorjeve neposredne namene, objektivni namen pa preseže konkretnega avtorja in se osredotoča bodisi na namen razumnega avtorja (ožji vidik) bodisi na namen, ki ga izvede iz temeljnih vrednot pravnega sistema (širši vidik).31 Za pravni pragmatizem velja zanimanje zlasti za širši vidik namenske interpretacije, s pomočjo katere konkretne sodne odločitve umešča v kontekst celotne politične skupnosti.
Pojmovno oziroma analitično pravoznanstvo, ki je skupaj z zgodovinsko šolo prava tvorilo glavni tok pravoznanstva ob koncu 19. stoletja, z vidika pravnega realizma zagreši kapitalno napako prav v svojem koncipiranju pravnega sistema kot celovitega in zaprtega sistema, iz katerega je mogoče logično izpeljati rešitev na prav vsako pravno vprašanje. Gre za predstavo o pravnem sistemu, ki so jo tedaj ustvarjali zlasti kodifikaciji naklonjeni nemški pravniki, kot prvi pa so jo odkrito kritizirali nemški svobodnopravniki, ki so jim nato sledili še ameriški pravni realisti. Nemški svobodnopravniki so se v svoji kritiki lotili ideje popolnega in celovitega pravnega sistema, ki ga obvladuje formalna logika, ameriški pravni realisti pa so kot svojega osrednjega antagonista določili domnevni formalizem občega prava (common law).
Kritika pravnih realistov je bila mogoče res pretirana glede na to, da večji del pravnikov, profesorjev in sodnikov ni sledil Poundovemu prikazu sojenja kot mehanične dedukcije,32 vendar pa ostaja dejstvo, da je bilo v času poglabljanja socialne neenakosti kar nekaj družbeno odmevnih primerov odločeno po metodologiji togega ločevanja prava od družbe.33 Aharon Barak sodniku v demokratičnih političnih sistemih namenja vlogo vmesnega člena med pravom in družbo,34 to pa lahko uspešno opravlja, če mu je dopuščeno, da k pravu pristopa dinamično. Opravljanje te vloge prinaša veliko naporov, če vemo, da mora biti pravo stabilno, vendar hkrati gibljivo, kot je pravno dinamiko opisal Roscoe Pound.35 Ameriški pravni realizem pravo razume izrazito dinamično, saj v njem prepoznava predvsem sredstvo za dosego družbenih ciljev. Kot pravi Benjamin Cardozo, modrost pri izbiri poti ni možna, če ne vemo, kam nas pot vodi.36
Ameriški pravni realizem je v osnovi teorija o sodniškem odločanju. Odgovoriti skuša na vprašanje, po kakšnih kriterijih sodniki dejansko presojajo konkretne sodne spore. Besedna zveza »dejansko presojajo« nakazuje realistično pomikanje onkraj vpliva pravnih pravil in precedensov pri sodniškem odločanju. Pravni realisti v tem pogledu proučujejo meje prava, ki jih dogmatična pravna znanost v svojem poudarjanju avtonomije prava le stežka priznava. Ker za pravne realiste pravo ni neprodušno zaprt sistem norm, so se (vsaj nominalno) usmerili v širše zastavljene metodologije družboslovnih znanosti, na primer sociologije in ekonomije, s čimer bi pravo iztrgali izolaciji samoreferenčnega sistema in ga tako približali družbeni stvarnosti. Ob tem lahko parafraziramo znano Marxovo misel iz 11. teze o Feuerbachu – sodniki naj se ne omejijo na mehanično aplikacijo obstoječih pravnih pravil, temveč naj pravna pravila prilagajajo obstoječim okoliščinam.
28 Prav tam.
29 Leiter, 2007, str. 52.
30 Posner, 2008, str. 243.
31 Barak, 2006, str. 126.
Le tako lahko pravo sledi danim potrebam ljudi in se ob tem izogne pastem mističnega konservativizma, revolucionarne vneme ali popreproščenega večinskega načela odločanja.37 Tukaj je potreben vnovični opomnik, da pravni realisti (v veliki večini) pravnih pravil niso šteli za popolnoma nepomembne priveske sodniškega odločanja, kar je razvidno tudi iz njihovega osredotočanja na prizivno sodniško odločanje, znotraj katerega naj bi bilo več manevrskega prostora za diskrecijsko odločanje sodnikov. Slednjega ne smemo enačiti s samovoljnim odločanjem, saj tudi diskrecijsko odločanje sodnikov poteka znotraj določenih institucionalnih okvirov. H. L. A. Hart diskrecijo opisuje kot vmesno točko med osebno kaprico in jasnim metodološkim vodilom pri uporabi pravnega pravila.38 Do te točke pridemo vselej, kadar se razumni in dobronamerni ljudje ne glede na svojo stopnjo informiranosti ne morejo zediniti o pravilnem rezultatu.39 Diskrecijsko odločanje se zdi neizogibno v okoliščinah razumnega nesoglasja, ki se v procesu pravnega odločanja zaradi (relativne) nedoločnosti pravnih pravil pojavlja zlasti v primerih obravnave tako imenovanih mejnih primerov, ko se pravna razlaga sooči z lastnimi omejitvami. Diskrecijsko odločanje je vprašljivo zlasti z vidika vladavine prava, ki med drugim predvideva obstoj pravnih pravil, ki so jasna, predvidljiva in določna. Gre torej za vprašanje pravne varnosti, ki se tesno prepleta z zaupanjem ljudi v pravo.
32 Tamanaha, 2010, str. 28. Tamanaha mehanično pravoznanstvo znotraj sistema common law označuje za mit.
33 Vzorčni primer je Lochner v New York (1905).
34 Barak, 2006, str. 5.
35 Pound, 1923/2013, str. 1.
36 Cardozo, 1921, str. 102.
37 Calabresi, 2002–2003, str. 2120.
38 Hart, 2013, str. 658.
Prav zaupanje v pravo je ključni moment v odpravljanju negotovosti ljudi tako v vertikalnih kot tudi horizontalnih razmerjih, s čimer se krepi koordinacijski potencial prava v družbi.40 Za pravne realiste splošna koordinacijska funkcija prava ni vprašljiva, vendar pravu hkrati odrekajo popolno predvidljivost in s tem stabilnost.
Veliki izziv pravnih realistov je bil v iskanju odgovora na vprašanje, kako pravo napraviti bolj predvidljivo, zato so se metodološko pomaknili onkraj pravne dogmatike. Kljub interdisciplinarnosti pravnim realistom ni uspelo razviti prepričljivega teoretičnega modela za napovedovanje, kaj bo sodišče storilo v posameznih primerih, kar velja tudi za teoretične naslednike pravnega realizma, kot sta ekonomska analiza prava ali psihologija pravnega odločanja. Pravna dogmatika uči, da sodniki prava ne ustvarjajo, ampak ga le iščejo. Ustvarjati pomeni stopati po še ne prehojenih poteh, na katerih ni mogoče najti trdnih opornih mest. Iskanje je aktivnost, pri kateri ni potrebe po ustvarjalnosti. Vztrajnost in natančnost običajno zadostujeta, saj iščemo nekaj, kar že obstaja. Za pravne realiste ni dvoma – sodniki pravo ustvarjajo in zato pravo ni toliko predvidljivo, kot to zahteva dogmatično pravoznanstvo. Jerome Frank v prepričanju, da sodniki prava ne ustvarjajo, vidi le mit, ki ga vzdržujejo ljudje, za katere velja, da jih njihove subjektivne potrebe po stabilnosti ohranjajo v otroškem svetu, v katerem so popolnoma varni.41 V Frankovi kritiki je prisoten jasen odtis Freudove psihoanalize, tj. stabilna pravna pravila so ekvivalent za avtoritativnega očeta v primarni družini. Občutek stabilnosti in predvidljivosti je enako pomemben na ravni družine kot tudi na ravni družbe, odsotnost avtoritete pa je lahko vir močnih frustracij. Zaščitniški oče in stabilno (ter pravično) pravo imata zato podobno funkcijo – prvi krepi občutek varnosti v družini, drugo ga krepi v družbi.
Ameriški pravni realisti so stabilnost in predvidljivost zamenjali za ustvarjalnost. Max Radin v sodnikovi arbitrarnosti (sic!) ni zaznal težav, saj naj bi sodniki s preostalimi ljudmi delili občutek za pravično, ki bi ga uveljavljali tudi brez usmerjanja pravnih pravil.42 Joseph Hutcheson je znaten vpliv pri iskanju pravičnih rešitev pripisoval sodnikovi intuiciji,43 saj iz primera izluščena dejstva presegajo vpliv domnevno ustreznih pravnih pravil na sodbo. Felix Cohen je pot iz Jheringovih nebes pravnih načel premagoval z empiricizmom, tj. z redefinicijo pravnih načel na podlagi konkretnih sodb, kot sta to predlagala že Holmes in Hohfeld.44
Realnost, kot nam jo naslika ameriški pravni realizem, je negotova, vendar obvladljiva. Odgovora na izziv pravne nestabilnosti in nepredvidljivosti ne nudi niti dedukcija niti indukcija niti analogno sklepanje niti stare decisis. Ameriški pravni realisti so, podobno kot nemški svobodnopravniki, v kritiki pravnega formalizma ponekod pretiravali (na primer Frank ali Moore), toda njihova vloga ni bila ikonoklastična. Razlagalni skepticizem ne pomeni samovolje, gotovo pa pomeni, da je razlagalno polje, ki je na voljo razlagalcu, širše, kot je to pripravljena prenesti pravna dogmatika. Pri tem so znatne razlike med različnimi pravnimi panogami, tj. razlagalno polje je na primer v kazenskem pravu precej ožje kot v pogodbenem pravu. Ni naključje, da je bila velika večina ameriških pravnih realistov tako imenovanih civilistov, tj. strokovnjakov na področju zasebnega prava.
39 Prav tam, str. 664.
40 Spruk, 2022, str. 148.
41 Frank, 1949, str. 35.
42 Radin, 1925/1993, str. 198.
43 Hutcheson, 1929/1993, str. 204.
Med pravnimi realisti bi težko našli takega, ki bi pravnim pravilom absolutno odrekal vpliv na sodniško odločanje, vsekakor pa je velika zasluga pravnega realizma v dekonstrukciji podobe prava, za katero je značilna že skoraj teološko navdahnjena transcendentalnost. Pravo prihaja od človeka za človeka; ljudje, ki pravo ustvarjajo in uporabljajo, niso posvečeni, so ljudje kot vsi drugi.
4. Ideološke primesi ameriškega pravnega realizma
V pravnoteoretičnem smislu je ameriški pravni realizem nastal kot odziv na formalistično pojmovanje prava, kakršno se je v 19. stoletju ustalilo na Univerzi Harvard. Langdellova zasnova pravne znanosti je bila za pravne realiste preozka, saj so se namesto na notranjo logiko prava osredotočili na njegove posledice. Natančneje, pravne realiste so zanimale družbene posledice prava, zato so jih bolj kot pravna dogmatika pritegnile družboslovne vede, kot so ekonomija, sociologija ali psihologija. Teoretične nastavke ameriškega pravnega realizma lahko najdemo že pri Poundu in Grayu (Harvard), vendar so glavni protagonisti realističnega gibanja izšli iz univerz Yale in Columbia. William Twining kot vodilne pravne realiste med letoma 1914 in 1931 našteva Llewellyna, Corbina, Hohfelda, Moora, Cooka in Oliphanta.45 Ameriških pravnih realistov ni povezoval nikakršen skupni program, jih pa lahko razumemo kot gibanje v smislu medosebno povezanih posameznikov, ki so delili kompleksne ideje s poudarkom na nezadovoljstvu s tedaj obstoječim intelektualnim miljejem prava na splošno in pravnega izobraževanja konkretno.46 Obdobje hitrih procesov urbanizacije in industrializacije je zahtevalo prilagoditve, poenotenje in poenostavitve pravnih virov v sicer federalno razdeljenih ZDA, pri čemer je zlasti na področju zasebnega prava prevladoval vpliv sodišč.47 Gibanje ameriških pravnih realistov tako lahko razumemo kot eno od faz v odgovoru ameriških pravnikov na izzive poenotenja, sistematizacije in modernizacije ameriškega prava.48
V tem delu razprave želim razviti tezo, po kateri ameriški pravni realizem ni zgolj specifična pravna teorija, temveč lahko znotraj njega prepoznamo idejne oziroma ideološke premise, ki gredo onkraj teorije o sodniškem odločanju. Te razsežnosti so tiste, ki najbolj prepričljivo pokažejo na vpetost pravnih realistov v dane družbene okoliščine, ki so bile zaznamovane z naraščanjem socialnih razlik in z veliko gospodarsko krizo iz leta 1929. Dotlej prevladujoča pravna in politična doktrina je utrjevala podobo prava in države kot nevtralnih institucij, ki posredujeta med formalno enakopravnimi subjekti. Formalistično pojmovanje prava in klasično liberalna teorija minimalne države hodita z roko v roki. Ideološko jedro ameriškega pravnega realizma tvori predrugačena konceptualizacija razmerja med državo in posameznikom, tj. nadomestitev restriktivne vloge države tako na polju civilnega prava kot tudi javnega prava s posegajočo državo, ki aktivno gradi javni interes. Državni intervencionizem predpostavlja oblikovanje interesa, ki gre onkraj interesa posameznika, tj. gre za interes politične skupnosti kot celote. Na ravni ideologije gre za pomik od klasičnega liberalizma k modernemu liberalizmu, ki državne vloge regulatorja trga ne dojema več kot a priori grožnje posameznikovi svobodi. Na področju civilnega prava je bila največja sprememba vezana na razumevanje pogodbene svobode, ki so ji bile nadete omejitve javnega interesa.
44 Cohen, 1935/1993, str. 216.
45 Twining, 2012, str. 26. Razen Oliphanta (Columbia) so bili v navedenem obdobju vsi našteti povezani z univerzo Yale.
46 Prav tam.
V zadevi Lochner iz leta 1905 sodišče teh omejitev še ni priznalo, saj je pogodbeno svobodo povzdignilo nad javni interes do zdravega in varnega delovnega okolja.49 Znatnejši premiki na tem področju so se začeli sprožati v 30. letih dvajsetega stoletja, tj. neposredno po velikem gospodarskem zlomu in vpeljavi Rooseveltovega državnega programa New Deal med letoma 1933 in 1939. Holmesovo odklonilno ločeno mnenje v zadevi Lochner so med znamenite razprave povzdignili pravni realisti, ki so pritrjevali njegovi drzni tezi: splošna določila ne določajo izida konkretnih zadev. Pravni realisti so bili realisti v toliko, kolikor so jezikovni pomen pravnih pravil podredili družbenim izzivom, kot so jih razumeli sodniki.
47 Prav tam, str. 4–5. Ameriške pravne fakultete so se tradicionalno osredotočale na sodniško pravo, kar je izhajalo iz prevladujoče metode študija konkretnih primerov (case study), osredotočanja večjega dela pravne zgodovine na razvoj sodniškega prava in preučevanja narave sodnega procesa v okviru pravoznanstva.
48 Prav tam, str. 7.
49 Zadeva Lochner v New York v zgodovini prava velja za paradigmatični primer formalističnega sodniškega odločanja. Zakon zvezne države New York iz leta 1895 (Bakeshop Act) je lastnikom pekarn prepovedal zaposlovanje delavcev za več kot 10 ur na dan oziroma 60 ur na teden. Na podlagi zakona je newyorško sodišče leta 1899 lastnika pekarne Josepha Lochnerja obsodilo na plačilo kazni v znesku 50 dolarjev. Obsodbo so potrdila vsa prizivna sodišča v zvezni državi New York, zato je sledila pritožba na ameriško vrhovno sodišče, kjer pa je večina sodnikov presodila, da so bile Lochnerju z obsodbo kršene pravice iz 14. amandmaja k ustavi. V obrazložitvi se je večina sodnikov sklicevala na pogodbeno svobodo delodajalca in delojemalca in zavrnila argument javne politike zaščite zdravja zaposlenih v visoko rizičnih gospodarskih panogah.
To pomeni, da se pravna pravila interpretirajo v sozvočju z aktualnimi političnimi, socialnimi ali ekonomskimi problemi, nikakor pa ne ločeno od njih. Tukaj se pokaže razumevanje prava kot sredstva za doseganje družbenih ciljev, kar izhaja že iz Jheringovega in Poundovega sociološkega pravoznanstva, ki v osrčje pravne analize postavita kategorijo interesa.
Prehod od srednjega veka k moderni dobi je potekal tudi skozi rekonceptualizacijo interesa, ki je v temelju prinesla zamenjavo korporativizma z individualizmom. Interes postane sredinska kategorija med neučinkovitim razumom in razdiralno strastjo, vase vnese dobre lastnosti obeh, tj. trdoživo strast samoljubja in razum na pogon moči strasti.50 V srednjem veku je bil poglavitni smoter prava harmonično vzdrževanje družbenega statusa quo, v moderni dobi pa se pravo prilagodi idealizirani podobi družbe svobodnih in medsebojno tekmujočih posameznikov, ki se je dokončno izčistila šele v 19. stoletju.51
Rudolf von Jhering je v razpravi Pravo kot sredstvo za dosego cilja celotno poglavje namenil vprašanju, kako egoizem usmeriti v altruizem. Jhering v iskanju odgovora prepozna načelo, ki prežema celotno življenje posameznika: vsak posameznik naj osebne namene uskladi z interesi drugih ljudi.52 Pri tem se postavi vprašanje nasprotja interesov. Funkcija prava v tem primeru je v prepoznavi različnih interesov, določitvi prevladujočih interesov ter zagotovitvi njihovega učinkovitega izvrševanja in harmonizacije z javnim interesom.53 Za pravne realiste je bil formalizem iz 19. stoletja tako zelo moteč, ker je bil ovira prilagajanju prava danim družbenim okoliščinam.
Močna kritika, ki jo je zoper formalizem usmeril zlasti Llewellyn, je ciljala na predstavo o avtonomni pravni znanosti, ki ni pristajala na prilagajanje pravnih pravil in precedensov družbeno želenim ciljem.54 Poglavitna težava pravnih realistov s formalističnim odločanjem je bila prav v tem, da formalizem ne priznava izbiranja med več mogočimi rešitvami, temveč izbrano rešitev predstavi kot edino možno rešitev. To se je pokazalo tudi v že omenjeni zadevi Lochner, ki ni postala tarča kritike toliko zaradi napačne izbire rešitve, ampak zato, ker je bila očitna politična, moralna, socialna ali ekonomska izbira predstavljena kot edina možna rešitev.55 Glavni očitek formalizmu s strani pravnih realistov je bil torej prav v tem, da posebej v zadevah, o katerih presojajo prizivna sodišča, pravo ne daje unikatnih rešitev, zato sodniki lahko ali morajo med več možnimi (pravnimi) rešitvami izbirati na podlagi zunajpravnih kriterijev.56
50 Hirschman, 2002, str. 47.
51 Pound, 1997, str. 13–14.
52 Jhering, 1913/2012, str. 28.
53 Gordon, 1981, str. 1042.
54 Sebok, 1998, str. 106.
55 Schauer, 1988, str. 512.
56 Schauer, 2012, str. xi.
Postrevolucionarno obdobje v Združenih državah Amerike je minilo v znamenju transformacije pravnega sistema, v okviru katere je zavezanost k regulaciji materialne pravičnosti ekonomske izmenjave izzvenela.57 Zaščitniško, regulativno in paternalistično podobo naravnega in običajnega prava iz 18. stoletja je zamenjala podoba prava, ki je reflektirala obstoječo politično in ekonomsko moč posameznikov in korporacij na škodo kmetov, delavcev, potrošnikov in drugih manj vplivnih družbenih skupin.58 Do sredine 19.
stoletja je bila opisana transformacija končana; dotlej prožno in instrumentalno pojmovanje prava je postalo nezaželeno, takoj ko so bili doseženi poglavitni cilji nosilcev trgovinske in industrijske dejavnosti.59 Nekoč progresivna ideološka paradigma nosilcev ekonomskega razvoja se je preobrazila v konservativno branjenje obstoječih razmerij moči. Pri tem je bila ključna vloga države kot osrednje pravnopolitične institucije z zakonodajno, sodno in izvršilno močjo. Določitev vloge države v politični skupnosti je izrazito ideološke narave, saj so v njej razgrnjeni temeljni postulati razmerja med državno oblastjo in državljani, ki ga nikakor ne moremo izpeljati iz naravno ali božansko ustvarjenih principov. Delovanje države je vezano na upravljanje javnega oziroma skupnega prostora, znotraj katerega so parcialni interesi podrejeni splošnemu oziroma javnemu interesu. Ločitev na javno in zasebno sfero človekovega življenja oziroma na javno in zasebno pravo je ključen moment v določitvi dosega državne avtoritete, ki primarno brani javni interes, oziroma v določitvi obsega prostora negativne svobode posameznika, v katero država ne sme posegati.60 Na področju trgovinskega prava se je v 19. stoletju v Združenih državah Amerike razvila izrazito restriktivna doktrina poseganja države v substantivne standarde pravičnega trgovanja.61 Slednji so veljali za nekakšno atavistično ostalino preživetih časov, ko prosti trg še ni obstajal. Klasični liberalizem je prosti trg povzdignil na piedestal vrednot, saj je v njem prepoznal glavni instrument človekove svobode, v regulatorni moči države pa neupravičeno represijo. Na področju prava je posameznikovo svobodo na prostem trgu najbolje odražala pogodbena svoboda, ki predpostavlja formalno enakopravne posameznike, ki brez poseganja države prostovoljno sklepajo pogodbene odnose.
Iz klasičnega liberalizma izpeljana doktrina laissez-faire izključuje ekonomsko vlogo države in v maniri ekonomskega individualizma nasprotuje vsakršni regulaciji delovnih procesov, kot je denimo regulacija delovnega časa, dela otrok ali varnega delovnega okolja.62 Velik del pravnih realistov se je v nasprotju s tem zavzemal za državno regulacijo ekonomskih aktivnosti, saj so v prostem trgu prepoznali instrument zatiranja in ne osvobajajoče institucije.63
57 Horwitz, 1977, str. 253.
58 Prav tam.
59 Prav tam, str. 254.
60 Morris Cohen je ločitev na javno in zasebno sfero kritično analiziral skozi enačaj med zasebno lastnino in oblastjo. Podrobneje v Cohen, 1927.
61 Prav tam.
62 Heywood, 2012, str. 48.
Jure Spruk – Ideološke premise ameriškega pravnega realizma
Za pravne realiste je bila pogodbena svoboda v okoliščinah velike materialne neenakosti le priročen mit, ki so ga pri življenju ohranjali privilegirani sloji. Institucionalna ekonomija je prostemu trgu zaradi neenakih pogajalskih izhodišč pogodbenih strani pripisovala prisilno naravo.64 Pravni realisti so se formalizmu na sodiščih zoperstavili prav s to ključno predpostavko institucionalne ekonomije.65 Spoznanje, da je pravo vsaj delno ideološko pogojeno, je lično urejeno podobo prava kot celovitega, popolnega in logično povezanega sistema zamajalo v temeljih. Mit o notranji konsistentnosti in zunanji stabilnosti prava se je začel podirati, saj se življenjskih dejstev pač ne da spremeniti, s čimer je nastopila doba negotovosti.66 Ali kot je zapisal pravni zgodovinar Grant Gilmore: vsak Blackstone mora imeti svojega Benthama in vsak Langdell mora imeti svojega Llewellyna.67 Kar je združevalo ameriške pravne realiste, je bil njihov cilj ustvarjanja dinamičnega pravoznanstva kot osnove, na kateri bi lahko reševali problem prilagoditve pravnega sistema potrebam 20.
stoletja.68 Morton Horwitz najpomembnejšo zapuščino pravnega realizma prepoznava v zasaditvi dvoma v popolno ločenost prava in politike ter morale.69 Ameriška klasična pravna misel od revolucije naprej avtonomnost prava povezuje s preprečevanjem tiranije večine.70 Ameriških pravnih realistov ne gre brati v smeri preprostega izenačevanja prava in politike, so pa opozarjali na poroznost meje med njima. Ideja vladavine prava v osnovi pomeni obstoj nekaterih procesnih načel, ki naj bodo spoštovana zato, da bomo lahko vsaj z določeno mero verjetnosti rekli, da nam vlada pravo in ne ljudje. Pravni realisti so bili do sintagme vladavina prava precej zadržani, saj so v njej prepoznali zavajajoče sporočilo o vladavini brezosebnih pravil brez vpliva in odgovornosti konkretnih ljudi, ki ta pravila v praksi uporabljajo. Njihovo kritiko vladavine prava je treba razumeti v kontekstu kritike abstrakcij, ki so tako v pravu kot tudi v ekonomiji ali politiki ovira na poti do pristne demokratične ureditve.71 Številni pravni realisti so se odkrito postavili na stran administrativne države, navsezadnje so nekateri med njimi aktivno sodelovali
63 Duxbury, 1995, str. 105.
64 Pravnik in ekonomist Robert Hale je dokazoval, da celoten sistem laissez-faire temelji na prisilnem omejevanju posameznikove svobode ne glede na vsakršne formule enakih priložnosti ali enakih pravic. Podrobneje glej v Hale, 1923.
65 Prav tam, str. 106.
66 Gilmore, 1977, str. 68.
67 Prav tam.
68 Twining, 2012, str. 8.
69 Horwitz, 1992, str. 193.
70 Prav tam.
71 Purcell, 1969, str. 436.
v programu New Deal, s katerim je predsednik Roosevelt načrtoval sanacijo velike ekonomske krize iz leta 1929.72 Prepoznavne značilnosti pravnega realizma – nezaupanje v tradicionalna pravna pravila, funkcionalni pristop k pravu in empiricizem – so se izkazale za združljive s poudarki New Deala na pragmatizmu, javni politiki in eksperimentiranju.73 Izvrševanje »novega dogovora« je zahtevalo predrugačen metodološki pristop k interpretaciji temeljnega pravnega in političnega dokumenta v državi – ustave, ki je nastala v 18. stoletju in jo je bilo po logiki vpeljave družbenih reform treba prilagoditi razmeram v 20. stoletju. Originalizem in stare decisis sta bolj kot korenite družbene spremembe obetala ohranjanje statusa quo, zato se je kot najbolj logična izbira izkazala namenska razlaga ustave, ki v svoji instrumentalno-pragmatični razsežnosti bolje zaobjame dinamiko prilagajanja prava spremenjenim družbenim razmeram. Tako se na primer Llewellynova teorija ustavnega prava s poudarki na prožnosti in eksperimentiranju ni zgolj slučajno prilegala New Dealu, temveč se je vsaj deloma neposredno sklicevala na ta prenoviteljski program.74 Prožnost ustave, kot jo je zagovarjal Llewellyn, je v danih okoliščinah pomenila razširitev pristojnosti države pri poseganju v ekonomijo in krepitev administrativne vloge države, katerih cilj je bila izgradnja trdnejših temeljev socialne države. Razkorak med zmagovalci in poraženci kapitalistične igre prostega trga je bil namreč preprosto prevelik, dokončna socialna devastacija ne bi koristila nikomur in zdi se, da so šli pionirji ameriškega pravnega realizma s tem prepričanjem v mislih na okope teorije prava in države.
Odtujenost sodniškega prava od stanja v družbi, ki se je tako nazorno pokazala v zadevi Lochner, je nekatere pravne realiste usmerila v favoriziranje zakonskega prava, s katerim bi zakonodajalec potrebne reforme dosegel hitreje in zanesljiveje, toda zavedanje o odločujoči vlogi sodnika, ki razsoja v konkretnih zadevah, je vendarle ostalo. Ameriški pravni realizem je pravna teorija prav zato, ker v središče postavlja sodnika. Upor pravnih realistov zoper Langdellov formalizem je v širšem pogledu pomenil upor zoper konservativno ideološko zaledje večjega dela nosilcev sodne oblasti. Ameriški pravni realizem uči, da popolnoma avtonomna področja v družbi ne obstajajo. Ne obstaja popolnoma avtonomno pravo in ne obstaja popolnoma avtonomen trg, ki deluje po načelu nevidne roke. Družba je visoko kompleksna entiteta, v kateri se številna področja med seboj prepletajo. Da bi bolje razumeli pojav sodniškega odločanja, so se pionirji ameriškega pravnega realizma ozrli tudi k drugim družboslovnim vedam, od ekonomije in sociologije do psihologije. K naštetim družboslovnim vedam so se pravni realisti ozrli iz preprostega razloga – njihova podoba sodnika ne vključuje subsumpcijskega avtomata, ki sodbe ustvarja, ne da bi ob tem zaznaval lastno okolico.
72 Na primer, Thurman Arnold, Felix Cohen, Herman Oliphant in Jerome Frank so v okviru državne administracije aktivno sodelovali pri oblikovanju in izvrševanju programa New Deal.
73 Curtis, 2015, str. 172–173.
74 Prav tam, str. 178–179.
5. Sklep
Ali pravu z razgrinjanjem njegovih ideoloških razsežnosti odvzemamo veljavo in pomen? Lahko ameriške pravne realiste označimo za ikonoklaste, ki so uničili sveto podobo prava? Spoznavanje in priznavanje ideoloških razsežnosti pravu per se ne odvzema niti veljave niti pomena.
Le kratek ekskurz k primerjalnemu pravu zadošča, da lahko zaznamo tako procesne kot tudi materialne razlike med posameznimi pravnimi tradicijami v državah z različnimi prevladujočimi ideologijami. Slovenski pravni red, ki deluje v okoliščinah ideološke prevlade liberalne demokracije, se v temelju razlikuje od pravnega reda Islamske republike Iran, ki deluje pod ideološkim okriljem konservativne teokracije, ali pravnega reda Demokratične ljudske republike Koreje, kjer prevladuje totalitarno ustrojena komunistična ideologija. Neideološko pravo je pravo brez vrednot, iz katerih vznikajo medčloveški odnosi. Pravo, morala, politika in ekonomija imajo lastne vrednotne temelje, kar pomeni, da gre za produkte človeškega uma, ki so v vsakdanjem življenju tesno prepleteni. Neideološko je lahko zgolj pravo, ki je tesno izolirano od politične skupnosti, ki naj bi ji pripadalo. Takšno pravo v realnem svetu pač ne obstaja. Pravni realizem je pravna teorija, ki jo bolj kot rigoroznost interne logike pravnega sistema zanimajo njegove družbene posledice. Biti pravni realist pomeni pravo razumeti instrumentalno, tj. v smislu doseganja družbenih ciljev prek prava. Pravni realizem v pravu prepoznava osrednji družbeni institut, saj poudarja njegovo funkcijo medija, skozi katerega politična skupnost sploh lahko deluje. Pravni realisti resda s skepso pogledujejo proti načelu vladavine prava, vendar ne zato, ker bi nasprotovali osnovni ideji, ki je za njim, temveč zato, ker se zavedajo vseh mogočih zlorab, povezanih s tem načelom. Ena od temeljnih funkcij prava je tudi oblastna funkcija, s katero državna oblast vzpostavlja vertikalna razmerja do posameznikov. Oblastna funkcija prava se najočitneje kaže v javnem pravu, ameriški pravni realisti pa so skušali pokazati, da oblastna funkcija prava ne deluje zgolj na polju javnega prava, temveč tudi na polju zasebnega prava. Tu korenini njihova kritika razdelitve prava na javno in zasebno.
Velikega pomena za teorijo države je teza pravnih realistov, da oblasti ne izvršujejo zgolj nosilci državne oblasti, temveč tudi posamezniki. Prvi jo izvršujejo skozi državne institucije, drugi skozi svojo moč na »nevtralnem in svobodnem« trgu. Za ameriške pravne realiste trg ni ne nevtralen ne svoboden, saj se na njem srečujejo ljudje z izrazito neenakimi pogajalskimi močmi. Nevtralen in svoboden trg je tipičen ideološki konstrukt, ki ga je ustvaril klasični liberalizem zlasti v 19. stoletju. Ta se je že v izhodišču zadovoljil s formalno enakostjo ljudi, ki naj ponudi izhodišče za svobodno tekmo na trgu. Čas, v katerem se je vzpenjal ameriški pravni realizem, je bil zaznamovan s socialnimi neravnovesji. V prvih desetletjih 20. stoletja se je pokazalo, da na trgu preveč ljudi izgublja in premalo ljudi zmaguje. Temeljne ideološke premise ameriškega pravnega realizma je mogoče izpeljati iz predrugačene vloge države na polju zasebnega prava, na katerem so se oblikovala lastninska razmerja moči med formalno enakopravnimi posamezniki. Koncept posegajoče države predpostavlja regulatorno funkcijo države, s katero se blažijo posledice materialne neenakosti, ki prevladuje na trgu. Ameriških pravnih realistov ne smemo razumeti kot ikonoklastov, ki so razbili sveto podobo prava, kot so jo naslikali formalisti; zavedanje o »političnosti« prava je obstajalo že pred njimi, na kar prepričljivo pokaže Brian Tamanaha,75 vendar jim ne smemo odrekati zaslug, da so kot intelektualno gibanje to zavedanje postavili v središče pravne analize. Z vidika ocenjevanja ideoloških premis ameriškega pravnega realizma je bila kritika, ki so jo njegovi pionirji usmerili zoper Langdellov formalizem, kritika konservativne socialne javne politike.
Pravni realisti so razumeli, da pot k družbenim spremembam vodi skozi spremembo pravne metode – od rigorozne formalistične logike do namenske razlage pravnih pravil.
75 Tamanaha, 2010.
Literatura
Barak, A. (2006) The Judge in a Democracy. Princeton in Oxford: Princeton University Press.
Barberis, M. (2011) ‘Pravo in morala danes’ (prevod Mariza Žgur, Matija Žgur in Andrej Kristan), Revus. Revija za ustavno teorijo in filozofijo prava 16, str. 13–53.
Barberis, M. (2016) ‘For a truly realistic theory of law’ (prevod Paolo Sandro), Revus. Revija za ustavno teorijo in filozofijo prava 29, str. 7–14.
Bindreiter, U. (2013) ‘The Realist Hans Kelsen’, v: L. Duarte d’Almeida, J. Gardner in L. Green (ur.) Kelsen Revisited. New Essays on the Pure Theory of Law. Oxford in Portland: Hart Publishing, str. 101–129.
Calabresi, G. (2002–2003) ‘An Introduction to Legal Thought: Four Approaches to Law and to the Allocation of Body Parts’, Stanford Law Review 55, str. 2113–2151.
Cardozo, B. (1921) The Nature of the Judicial Process. New Haven: Yale University Press.
Cohen, F. (1935/1993) ‘Transcendental Nonsense and the Functional Approach’, v: W. Fisher, M. Horwitz in T. Reed (ur.) American Legal Realism. Oxford: Oxford University Press, str. 212–227.
Cohen, M. (1927) ‘Property and Sovereignty’, Cornell Law Quarterly 13, str. 8–30.
Cross, R., in Harris, J. W. (1991) Precedent in English Law (4. izdaja). Oxford: Oxford University Press.
Curtis, M. (2015) ‘Realism Revisited: Reaffirming the Centrality of the New Deal in Realistic Jurisprudence’, Yale Journal of Law & Humanities 27, str. 157–200.
Duxbury, N. (1995) Patterns of American Jurisprudence. Oxford: Oxford University Press.
Dworkin, R. (1986) Law’s Empire. Cambridge: The Belknap Press of Harvard University Press.
Frank, J. (1949) Law and the Modern Mind. London: Stevens & Sons Limited.
Fuller, L. (1940/1999) The Law in Quest of Itself.
New Jersey: The Lawbook Exchange.
Gilmore, G. (1977) The Ages of American Law. New Haven in London: Yale University Press.
Gordon, R. (1981) ‘Historicism in Legal Scholarship’, Yale Law Journal 90, str. 1017–1056.
Guastini, R. (2013) ‘Redefinicija pravnog realizma’ (prevod Milan Franić), Revus. Revija za ustavno teorijo in filozofijo prava 19, str. 83–96.
Hale, R. (1923) ‘Coercion and Distribution in a Supposedly Non-Coercive State’, Political Science Quarterly 38, str. 470–494.
Hart, H. L. A. (1961/2012) The Concept of Law (3. izdaja). Oxford: Oxford University Press.
Hart, H. L. A. (2013) ‘Discretion’, Harvard Law Review 127, str. 652–665.
Heywood, A. (2012) Political Ideologies (5. izdaja). London: Palgrave MacMillan.
Hirschman, A. (2002) Strasti in interesi. Ljubljana: Založba Krtina.
Holmes, O. W. (1881/1951) The Common Law (44. natis). Boston: Little, Brown and Company.
Horwitz, M. (1977) The Transformation of American Law 1780–1860. Cambridge in London: Harvard University Press.
Horwitz, M. (1992) The Transformation of American Law 1870–1960. Oxford: Oxford University Press.
Hutcheson, J. (1929/1993) ‘The Judgment Intuitive: The Function of the »Hunch« in Judicial Decision’, v: W. Fisher, M. Horwitz in T. Reed (ur.) American Legal Realism. Oxford: Oxford University Press, str. 202–204.
Jhering, R. (1913/2012) Law as a Means to an End. Forgotten Books.
Kelsen, H. (2005) Čista teorija prava. Ljubljana: Cankarjeva založba.
Leiter, B. (2007) Naturalizing Jurisprudence. Essays on American Legal Realism and Naturalism in Legal Philosophy. Oxford: Oxford University Press.
Llewellyn, K. (1930/1993) ‘A Realistic Jurisprudence – The Next Step’, v: W. Fisher, M. Horwitz in T. Reed (ur.) American Legal Realism. Oxford: Oxford University Press, str. 53–58.
Novak, A. (2023) ‘Pojem in pojavnosti sodniškega prava’, v: A. Novak in M. Pavčnik (ur.) Sodniško pravo.
Ljubljana: GV Založba, str. 323–367.
Pavčnik, M. (2015) Čista teorija prava kot izziv. Ljubljana: GV Založba.
Posner, R. (2008) How Judges Think. Cambridge: Harvard University Press.
Pound, R. (1997) Social Control Through Law. London in New York: Routledge.
Pound, R. (1923/2013) Interpretations of Legal History. Cambridge: Cambridge University Press.
Purcell, E. (1969) ‘American Jurisprudence between the Wars: Legal Realism and the Crisis of Democratic Theory’, The American Historical Review 75 (2), str. 424–446.
Radin, M. (1925/1993) ‘The Theory of Judicial Decision: Or How Judges Think’, v: W. Fisher, M. Horwitz in T. Reed (ur.) American Legal Realism. Oxford: Oxford University Press, str. 195–201.
Schauer, F. (1988) ‘Formalism’, The Yale Law Journal 97 (4), str. 509–548.
Schauer, F. (2012) ‘Foreword’, v: W. Twining: Karl Llewellyn and the Realist Movement (2. izdaja). Cambridge: Cambridge University Press, str. ix–xxiv.
Sebok, A. (1998) Legal Positivism in American Jurisprudence. Cambridge: Cambridge University Press.
Spruk, J. (2022) ‘Pravna država in pravno načelo zaupanja v pravo’, Zbornik znanstvenih razprav LXXXII, str. 121–155.
Tamanaha, B. (2010) Beyond the Formalist–Realist Divide. Princeton in Oxford: Princeton University Press.
Telman, J. (2010) ‘A Path Not Taken: Hans Kelsen’s Pure Theory of Law in the Land of Legal Realists’, Valparaiso University, Law Faculty Publications, str. 353–376.
Twining, W. (2012) Karl Llewellyn and the Realist Movement (2. izdaja). Cambridge: Cambridge University Press.
Yntema, H. (1960) ‘American Legal Realism in Retrospect’, Vanderbilt Law Review 14 (1), str. 317–330.
© The Author(s) 2024
Znanstveni članek
DOI: 10.51940/2024.1.39-64
UDK: 340.1
Timotej F. Obreza*
Privid pravne konstrukcije
O duhu in porah pravnega (spo)znanja
Povzetek
Pravniki pri svojem delu razmišljajo na podoben način.
Prek vzorcev védenja, ki jih pridobijo z ustrezno izobrazbo in poznejšim delovanjem na posameznem pravnem področju, privzemajo ustaljeno miselno shemo. Avtor predstavi tezo, da je takšno miselno izhodišče smiselno razumeti kot privid pravne konstrukcije, ki se pravnikom pri njihovem delovanju utira pred očmi. Pri tem »privid« napoveduje dejstvo, da je ta podoba namišljena, čeprav nujna, »pravna konstrukcija« pa označuje zasnovo (rezultat) in snovanje (ustvarjalni proces) pravnega sveta. S prividom si pravniki ne samo zagotovijo dostop do specifične resnice, prek katere spoznavajo in razvijajo svet prava, temveč pridobijo tudi privilegij, s katerim to področje védnosti monopolizirajo in monetizirajo. Ključna pri tem je priučena spoznavna metoda, ki tvori »ogrodje« pravne konstrukcije in je sestavljena iz enote, tehnike in vrline pravne sporočilnosti. Avtor te tri elemente na kratko ponazori, posebej pa poudari tudi nekatere pomanjkljivosti pravnega razmišljanja. Kot ključen je prepoznan metodološki pluralizem, ki šele omogoči celovitejše razumevanje sveta okrog nas: spoznavno sintezo. Rdeča nit, ki ideji privida pravne konstrukcije usodno botruje, zadeva sprejemanje pomembnosti pravnega dela na eni in odgovorno koriščenje privilegija pravnega znanja na drugi strani.
Ključne besede
privid, pravna konstrukcija, pravna vednost, metodološki pluralizem, odgovoren monopol znanja.
* Pravna fakulteta Univerze v Ljubljani, Katedra za teorijo in sociologijo prava, asistent, elektronski naslov: timotej.obreza@pf.uni-lj.si.
Zbornik znanstvenih razprav – letnik LXXXIV, 2024 / Ljubljana Law Review – Vol. LXXXIV, 2024 • pp. 39–64 • ISSN 1854-3839 • eISSN 2464-0077
1. Namesto začetka konec: očrt nekega miselnega izhodišča
Pravna znanost1 se pogosto trudi biti ekskluzivna.
Visokoleteči izrazi, nedostopnost platforme znanja in abstraktnost vsebine so vsaj nekateri od dejavnikov, ki k temu pripomorejo. Poleg tega je pogosto partikularna in kulturno pogojena. Čeprav o tem, kar bom tukaj poimenoval »tipično pravno razmišljanje«, na tak ali drugačen način pišejo na različnih koncih sveta, so med njimi pomembne razlike. K pojmu prava in arhitektoniki pravne vednosti pristopajo nekoliko drugače: prisvajajo si različne resnice in ponujajo različne trditve o tem, kaj naj bi pravo sploh bilo. V prispevku obravnavam oba momenta in si prizadevam vsaj deloma doseči, da trditve o pravnih resnicah niso povsem borne in nedostopne. Prizadevam pa si tudi, da izmed možnih teoretskih nastavkov vsaj v zametku opredelim tistega, za katerega menim, da ima določeno vrednost za pravno znanje samo. Smo v trenutku zgodovine, ko lahko trdimo, da so velike tradicionalne smeri pravnega mišljenja presežene.2 Pravni realizmi, pravni naturalizmi in pravni pozitivizmi sami zase ne prepričajo; preprosto ne nudijo dovolj trdnih izhodišč, da bi lahko privolili v njihovo samozadostnost. Postreči je treba z nečim »primernejšim«.3 Nadalje, razpravljanje o globinah pojavnosti, ki nas obdajajo, je tvegano početje. Še tako bujna domišljija težko prebije vrtenje v brezkončnem krogu ničkolikokrat prežvečenih položajev, nelukrativnosti poklica pa se s tem običajno pridruži še zavedanje o majhnosti lastnega intelektualnega obstoja. Stojimo na ramenih velikanov: biti skromen pred mogočnimi gorami zgodovine idej in nakopičene vednosti pa je morda največ, kar lahko z zretjem v globino sploh dosežemo. Zakaj vztrajati? V prispevku se poskušam na zagate skromno in vsaj deloma odzvati. Bolj kot za izvirnost gre pri njem za očrt, napoved nekaterih lastnih stremljenj: pravni znanosti že dobro poznane elemente bom obravnaval v okviru tega, čemur pravim privid pravne konstrukcije.
Zatrjeval bom, da se s skupnimi napori pravnikov zgodovinsko razvija in utemeljuje specifična pravna normativnost,4 ki obstaja predvsem kot hoten privid; nekakšna miselna maska, ki pravniku omogoči dostopanje do specifične resnice. S pomočjo tega miselnega eksperimenta si lahko olajšamo kompleksno »kognitivno-računsko« operacijo spoznavanja in mišljenja prava oziroma pravnega sistema. Z njim lahko obravnavamo to, kar pravnike druži: miselni duh, prežet z idejo pravne konstrukcije, ki je postavljena nadnje in ki ji zdaj bolj zdaj manj služijo.5 Biti pravnik pomeni biti skromen ob pogledu na privid normativne gore, na katero se z odrekanjem in potrpežljivostjo vzpenja, vendar odločno zre z njenega vrha.6 Privzeti duha pravnega spoznanja torej, vendar se hkrati zavedati njegovih nujnih por.
1 Pravne znanosti tukaj ne enačim s pravno doktrino oziroma pravno dogmatiko. Prim. Guastini, 2015, točka 4.
2 Teza ni nujno zgolj senzacionalistična. Glej na primer Somek in Forgó, 1996, str. 160 in nasl.
3 Neznanka ostaja, komu ta naloga pripada.
4 Ne gre torej za empirično vprašanje, temveč za vprašanje miselne sheme, ki je delujočemu pravniku eksistencialno nujna in ki jo kot umetno privzame. Ta deluje in je prisotna, kot da je resnična (prim. Vaihinger, 1922, str. 129–143, 154–160 in 771–790).
2. Imeti privide in zreti …
2.1. Previdno s prividnim
S prividom pravne konstrukcije merim na miselno shemo, rezultat, ki nastane s pomnjenjem, usvajanjem, razumevanjem, skratka: sintetiziranjem pravnega znanja. Ta miselni nastavek, ki ga privzamemo, oziroma maska, ki si jo nadenemo, predstavlja hkrati gonilo in tudi cilj ideala pravne urejenosti družbenega življenja. Pojmujem ga kot idealno-tipsko kategorijo – tudi sicer v ospredje jemljem prav idealno razsežnost7 pravnega znanja – in ga najprej asociiram z izobraženim pravnikom.
S pravom se v naši družbi namreč praviloma ukvarjajo prav ti: osebe, ki so prestale kurz določenega vednostnega in ideološkega8 uokvirjanja.9 To ni nepomembno, celo več, ključno je, da lahko prividu pripišem dve pomembni lastnosti: da je hoten in učinkovit. Nadalje predlagam, da pojem privida ne nosi negativne konotacije; bolj kot za »vidno zaznavo brez stvarne podlage« gre pri njem morda za »izmišljeno, fantazijsko podobo česa«.10 Kot tak ima pomembno kolektivno razsežnost: njegov obstoj in reprodukcija zadevata najprej tiste, ki pravno delujejo, njihovo delovanje pa je poleg tega usmerjeno v
5 To vprašanje, ki ima po svojem bistvu opraviti s poklicno oziroma etično dolžnostjo pravnika, je sicer povsem ločeno od vprašanja, ali sploh obstaja splošna (moralna) dolžnost, da pravo spoštujemo. Strinjam se z Novakom (2003, str. 393 in nasl.), ki to možnost zanika in predstavi koncept pripravljenosti sprejeti pravo (prav tam, str. 403–404).
6 Prispodobo normativne gore najdemo že pri Pitamicu (1917, str. 344).
7 Drugačen – materialističen – poudarek razvija Komel (2023, zlasti str. 175–186), ko govori o praktični vednosti pravnih delavcev.
8 Tukajšnje razumevanje pojma ideologije je blizu opredelitvi, ki jo poda Višković (1976, str. 49), kjer ga razlikuje od naziranja, značilnega za »marksistično in državljansko« literaturo. Moja delovna definicija torej nasprotuje Pašukanisovi (2014, str. 29), ki trdi, da »pravne kategorije nimajo nobenega drugega pomena razen ideološkega«. Prim. tudi Kelsen, 1979 (1931), str. 57 in nasl. S pojmom se je pri nas sicer na več mestih ukvarjal Cerar, zlasti 2001, 2006, 2009 in 2011.
9 Prim. Bourdieu, 1986, zlasti str. 18–19 (prevod dela je v slovenskem jeziku pred kratkim izšel pri reviji Problemi). O ideološki funkciji postulata kompletnosti pravnega sistema, ki jo bo tukajšnji koncept privida impliciral, govorita tudi Alchourrón in Bulygin, 1971, str. 176.
10 Geslo »privid«, Slovar slovenskega knjižnega jezika.
korist skupnosti, vseh nas torej.11 Privid je odločilen za vzpostavitev »pravniškega ceha« in »pravne družbe«. Če privid privzamemo kot družbeno nujen, je lahko to morda pretirano udobno. Utegne namreč nekritično privilegirati status quo, ista teza pa lahko po drugi strani izpade tudi pompozno, poetično, celo poduhovljeno.12 Tukaj se do opazk ograjujem. Poudariti želim le ključno pozitivno plat oziroma paradigmo tega konceptualnega ustroja: pravniškemu delu – in s tem našemu prividu – je specifična, morda inherentna določena zgodovinsko razvijajoča se vizija,13 ki ob krajevni, časovni in vsebinski kontingentnosti14 vsakokratnega pravnega izvajanja in vrednotenja stremi k določenemu cilju. Ta cilj lahko pojmujemo na različne načine. Eni bodo za nas »bolj«, drugi »manj« ustrezni; eni si bodo prizadevali za opisno točnost, drugi za predpisovalno pravilnost, tretji pa bodo ti dve kategoriji hote ali nehote združevali. Gre za nerazrešljivo argumentativno bojno stanje, ki pa volens nolens vedno znova oživlja, morda celo osmišlja, samo razmišljanje o nedosegljivem.15 Zato se moram zateči k poenostavitvam. Ne pristajam, denimo, na tezo o inherentni moralnosti prava,16 saj jo štejem za pretirano naivno in škodljivo nadzgodovinsko, čeprav ontološko mamljivo. Tudi ne na tezo o zaprtosti in samodoločnosti pravnega simbolnega koda17 – ta je brezupna in spoznavno omejujoča, čeprav trdoživa – ali na tezo o pravu kot suverenovem ukazu,18 ki je empirično in normativno neustrezna, čeprav zelo pragmatična. Glavna kvaliteta, »cilj« prava oziroma pravnega razmišljanja zato ni niti izključno moralna niti izključno sistemska niti izključno zaukazujoča. Za primarni razločevalni element in »cilj« pravniškosti tukaj štejem že samo ohranjanje in prevpraševanje privida o pravni urejenosti družbenih položajev.
Kako pravno opredelimo položaj n in zakaj prav tako? Do katerih pravnih posledic to vodi? Ustanovili smo institucijo in miselni nastavek, ki te odgovore zagotavlja.
11 V zvezi z dojemanjem prava na psihološki ravni glej na primer Cerar in Matić, 2001, str. 330–332.
12 Ob tem se spominjam pogovora z dekanom frankfurtske Pravne fakultete Thomasom Vestingom, ki je diskurzivno teorijo svojega kolega Habermasa označil za pretirano preroško in priestlike. Cilj teze o prividu, drugače, ni biti priestlike, temveč stvari karseda preprosto opisati take, kakršne so. S podobnim tonom berem tudi Kelsnovo (1962, str. 316) trditev, da je naravnopravni nauk – v nasprotju s pozitivističnim, ki da je realističen – idealistični pravni nauk. Preveč preroški torej.
13 Prim. Furlan (2002, str. 86), kjer pravi, da »[p]ravni instituti ne žive v nas kot neka aritmetična suma abstraktnih pravil, marveč neka enotna podoba.« V zvezi z nastajanjem in razvijanjem pravnih pojmov prim. Obreza, 2022, zlasti str. 87–89.
14 Podoben poudarek napravi Furlan, 2002, str. 95–102.
15 »Noch suchen die Juristen eine Definition zu ihrem Begriff vom Recht« (»Pravniki še vedno iščejo definicijo svojega pojma prava«, prevod T. F. O.) cinično pravi Kant (2000, str. 759, opomba).
16 Fuller, 1963, str. 33 in nasl.
17 Luhmann, 2004, zlasti str. 76–140; glej tudi Luhmann, 2000, str. 56–58.
18 Austin, 1995 (1832), Lecture I, str. 21; prim. Kelsen, 1923, str. 213 in nasl. Po Bobbiu (2011, str. 50) je pravni pozitivizem smiselno obravnavati s treh različnih vidikov, in sicer bodisi kot metodo, teorijo ali ideologijo. Zame bo v nadaljevanju bistven metodološki vidik – v tem delu torej Austinovo tezo sprejemam.
V različnem kraju, ob različnem času in pri različnem vsebinskem vprašanju bomo oboje, tako pravo kot tudi pravno rešitev, koncipirali na različna načina.19 Toda neodvisno od tega bo štelo predvsem to, da je v družbi prisoten miselni nastavek, privid pravniškosti, ki ga deloma ohranimo in ohranjamo, deloma pa prevprašujemo in razvijamo, in na podlagi katerega običajno z grožnjo sile preoblikujemo našo družbeno resničnost in posegamo vanjo. Privid ima torej statično in dinamično komponento. Na opisan način deluje delujoči pravnik: oseba, ki ima razgled po normativnih planotah in ki mora svoje argumente podati jasno in prepričljivo. Njegova vloga pri tem ni poljubna: sodeloval naj bi pri skupnem družbenem projektu, ki si zdaj bolj in zdaj manj prizadeva biti »pravičen«.20 Toda končnih odgovorov se moramo na kateremkoli področju bati. Kažejo bodisi na vnaprejšnjo dogmatično samozadostnost miselnega sistema, ki se že po definiciji zadovolji z danim naborom spremenljivk, ali pa na golo površnost pri izvajanju dokaza, ki ne vpne zadostnega dvoma. Običajno nam tako ne preostane nič drugega kot skromna relativna spoznavna zadovoljivost, ki jo je možno operacionalizirati.21 Podobno velja za opis pravnega delovanja: največ, kar lahko zanesljivo opredelimo, je spoznavna shema, skozi katero zaznavamo in izvajamo projekt pravnega. Ne glede na nevšečnost teze si je zato v naslednjem namenoma provokativnem primeru vredno priznati: tudi nacistični pravnik se je ukvarjal s pravom.22 In presneto dobro ga je moral poznati, da ga je lahko zastavil na svoj, »boljši« način.23 Pravno delovati tako ne pomeni nujno biti moralen ali etičen, čeprav bi si morda tega želeli. Pomeni na podlagi privida pravne konstrukcije soustvarjati – ohranjati in prevpraševati – določeno družbeno resničnost. Pri tem pa je izbira vsakega posameznika in posameznice, kaj bo zanj ali zanjo pojem prava sploh predstavljal. Zaslužek? Gotovo.24 Prestiž? Mogoče.25 Integriteto?
Usodno.26
19 V zvezi z nekaterimi epistemološkimi pogoji opredeljevanja prav(neg)a, ki sicer časovni, krajevni in vsebinski kontingentnosti nesporno botrujejo, glej Novak, 2001, str. 86–92.
20 Element vsakršnega pravnega razmišljanja bo stremljenje k tej ali oni pravičnosti; težavo, kot rečeno, vidim v tem, da je tudi pravičnost kategorija, ki se v času, prostoru in po vsebini spreminja. Nujno je, da je pravo razumno napredno družbeno gonilo in da imajo pravniki močen občutek za pravičnost. Toda to je lahko le relativen preskriptivni element, ki sheme pravnega razmišljanja ne konstituira, temveč spremlja. Praktičen razum bo vedno podajal različne univerzalizacije pravičnega.
21 O zgolj »približni« naravi spoznavanja smisla norme oziroma abstraktnih pojavov nasploh glej tudi Furlan, 2002, str. 89 in nasl.
22 Hart, 1958, str. 624–629.
23 Prim. Dahm in Schaffstein, 1933; Schaffstein, 1934, str. 614–627; Schmitt, 1934, str. 47–53; Fraenkel, 2019.
24 Somek, 2006, str. 9 in nasl.; prim. Somek in Forgó, 1996, str. 148–152.
25 Prav tam.
26 Dworkin, 1986, str. 176–224.
2.2. Še dve plati prividnega
Pravnikov privid, ločeno od vprašanja etičnega naboja, opravlja več funkcij. V grobem bi poudaril dva momenta. Prvi zadeva specifičen vid, lasten habitusu posameznega pravnega delavca.27 Disciplina, s katero se bo strokovnjak ukvarjal, in mesto v pravosodnem sistemu, ki ga bo zavzemal, bosta pomembno sodoločala usmeritev in nabor njegovega dojemanja ter zatrjevanja o pravu. Tukajšnje opazke bodo morda ob priznanju, da jih dajem brez ustrezne lastne empirične analize, videti toliko bolj medle.
And yet it seems to me precise enough to observe that the internalised phantasm will mean to a public prosecutor, say, above all an instrument, a way of fighting criminality in society; to an attorney, a condition for winning a client; and to a judge, a source of rational material with which he will try to secure the most correct and just decision possible. Professional differentiation yields a nuanced understanding and evaluation of legal phenomena. The mental scheme – the phantasm – on whose basis lawyers act adapts accordingly, which allows legal work to proceed more efficiently and the goals of the individual professional groups to be realised.28

Yet differentiation would wither if every legal actor were not united by a minimal common denominator: identification with the meaningfulness of the legal game. With this, volens nolens, a second, "external" moment of the lawyer's phantasm is inescapably connected. The social consequences of the differentiation of labour and of the legal way of seeing are the monopolisation of legal knowledge and the lucrativeness of legal expertise. In the hands of a handful of individuals lies the tool with which they lift the veil in the general service of legality.29

2.3. Seeing Deliberately

In its most distilled form, we can think of legal activity as the learned application of categories of the ought to social events: if circumstances O1 occur, let legal consequence P1 follow; if O1 and not P1, then let sanction S1 follow. The phantasm presupposes a decision in favour of a specific gaze: it is the cognitive condition of legal events, the adoption of the internal perspective of the legal order through which we survey the normative plateaus.30 We take ourselves to be players participating in the legal game. Before us lies now nothing, now the material world, now a complex legal design. The phantasm is therefore willed: if a lawyer wishes to act legally, he is compelled to summon the learned scheme.

In itself the phantasm has neither positive nor negative value; it merely reflects the state of a society that assigns law a special role. But if it is willed, it can also be overlooked. Following the phantasm too closely can blind us and cut off contact with reality.31 Fixation on the phantasm of the legal hampers the cognitive synthesis that always draws also on the inductive-causal method proper to the world of facts, and not merely on the deductive-normative method proper to the world of legal categories.32 At the individual level, the lawyer's phantasm is therefore only a point of departure through which the synthesis of social totality proceeds. If the phantasm is not sharp enough, however, legal answers will be loose and unreliable: it is crucial, then, that it also be effective. Securing a unified and sharp phantasm, which makes it possible to cognise and build up legal normativity, must be the first aim of legal education – an "ideological formula" that co-creates society and effectively paints the mental schemes of young colleagues.

3. … Into Legal Construction

3.1. "The Legal That Is Constructed"

Defining law is as a rule a thankless, if not impossible, task. It contains an inherent non sequitur, since the aim and content of any such exercise will always depend on numerous expressed or unexpressed circumstances – variables that conflict across different definitions.33 For instance: what we want to do with law at all, what our intimate attitude towards rules is, which (legal) branch and discipline we practise, and so on. I therefore accept certain simplifications.

When I speak of the phantasm of legal construction, I mean those legal sentences that are the object of the process of memorising, internalising, understanding and synthesising – the sentences, that is, which are the central explicit carriers of normative content in the legal order, through which law "communicates" with us and which combine, or which we combine, into legal rules.34 The totality of legal sentences at a given moment and place I call legal knowledge, or legal knowing.35 The central property of the latter is that it is specifically lawyerly and that it grounds the formation of arguments and claims of valid law. For a lawyer to take part effectively in the legal game, it is crucial precisely that he has internalised a critical mass of knowledge through which he knows and comes to know "the law" – those sentences.

27 Cf. Bourdieu, 1986, pp. 3–5. For present purposes, the concept of habitus makes sufficiently clear how the patterns of action of the professionally differentiated lawyer are conditioned. At the analytical level, Komel's notion of practical knowledge (2023, esp. pp. 175–186), which goes a step further in conceiving legal work, is in fact more promising.
28 Cf. Komel, 2023, pp. 182–183.
29 We can either disregard this fact, because it is not legal stricto sensu, or pose it as a problem. The mechanisms of the welfare state are certainly invaluable. But the elite, too, must be accessible: the power of the phantasm carries social responsibility.
30 For understanding events in the realm of the ought, the emphasis of the Merkl-Kelsen theory of the hierarchical structure of the legal order is key (Kelsen, 1960, pp. 72–73, 228 ff.; Merkl, 1931, pp. 272 and 279). It supplies a premise fateful for the phantasm: thinking the dynamic unity of the legal order. Somek (1996, pp. 104–105) stresses Merkl's thesis that the organs creating law must necessarily be bound to a unified understanding of law; not only does this protect the continuity of law "inwardly", it also enables "outwardly" the conditions of its objectivisation and thereby its identification. In this he distances himself, however, from Hart's distinction between the internal and the external (ibid., p. 106, n. 613).
31 Cf. Forgó, 2023, pp. 462–465; Pavčnik, 2004, p. 83.
32 Pitamic, 1917, p. 340; see also Pavčnik, 2008, pp. 40–41.
33 Cf. Novak, 2001, esp. pp. 86–92.
34 I do not equate legal sentences here with legal rules; cf. Kelsen, 1960, pp. 72 ff.
These sentences are characterised, first, by their normative nature.36 Knowledge of Lombrosian criminological theories, of the potentials of working-class struggle, or of the complexity of the Palestinian-Israeli conflict is stricto sensu of no use. Each of the three alternatives undeniably signals strong involvement in, or an indication of, legal regulation or legal sanctioning, but only indirectly – as a social fact present at the possible application of a legal norm.

Second, they must reflect valid law.37 Knowledge of the specifics of Turkish constitutional law, of the South Korean regime of juvenile prosecution, or of Yugoslav self-management is stricto sensu of no use to a lawyer. The phantasm must bear on applicability "here and now", which also means that it anticipates a certain economy of thought.38 If it does not carry the type of message that is key for the legal worker, the phantasm is not usable.39

3.2. "The Construction That Is Legal"

"Lawyers are like worms that live only on rotten wood," and of their science it holds that "three corrective words from the legislator and whole libraries become waste paper."40

In the history of legal thought on the Continent, the term legal constructivism usually designates those theoretical attempts of the first half of the 19th century that described law predominantly as the process, or the product, of forming legal-scientific concepts.41 Authors often trace the roots of this movement to civil-law pandectism, whose method was later transferred to the field of public law as well; among its main representatives, the (early) Rudolf von Jhering, Georg Jellinek, Carl Friedrich von Gerber and Paul Laband are frequently named. Characteristic of "this" legal constructivism is the combining of the existing elements and qualities of legal institutes, as given in the legal material, and the creation of new ones. Typically, Jhering, for instance, saw the lawyer's role in ordering posited law into special legal bodies (Ger. Rechtskörper).42 In this sense the movement was marked by an endeavour to which a good deal of stigma attached over time,43 for it came to be subsumed under the collective notion of conceptual jurisprudence.44

From this current I distinguish at least two other phenomena. On the one hand, there is the mode of interpreting law in concrete cases characteristic of courts in legal systems of the common law tradition, where construction denotes an approach to interpreting statutory law different from mere interpretation. The latter focuses on the text alone, while the former carries out, on the basis of the text and other sources, a more demanding synthesis, often with precedential effect.45

On the other hand, mention must also be made of more recent constructivist stances, which, although appearing under the same term, legal constructivism, differ importantly from the "original" understanding. This can be problematic: not only are its authors, usually of Anglo-Saxon analytical inspiration, often unaware of this key historical tradition,46 but in recent decades numerous parallels to, or even direct connections with, the constructivist movements of other disciplines – psychology, sociology and philosophy, for instance – have begun to be folded into the notion of legal constructivism.47 I am reserved about this. Out of a once legal-scientific project that contemplates, conceives and develops legal content there have thus emerged sometimes barely intelligible attempts at constructivist stances which, on the wings of currents such as social and radical constructivism, confront law with various premises of the individual's subjective perception of the external world, or of the influence of legal institutions on the design of social reality.

I neither wish nor am able to carry out a more detailed analysis of constructivist premises here. By the phantasm of legal construction I mean rather the movement as it developed in the 19th century (and as the early Kelsen takes it up), although I do not draw on it exhaustively. More precisely, I mean above all two things: first, construction as the more or less firm edifice of legal knowledge (legal sentences), which lawyers above all develop historically and mentally adopt through education and practice; second, construction as the specific act of cognising and handling legal knowledge (legal sentences), through which lawyers above all take part in the process of creating – "constructing" – further legal knowledge.48

Within the ambition of this contribution, law is best understood precisely as a construction, a mental and conceptual edifice and edification, both of which reflect a certain historical engagement and are neither arbitrary nor immutable.49 And it is the phantasm of this construction that guides, accompanies and empowers the lawyer. I do not thereby claim that it is predetermined and settled, merely legal-scientific and abstract, or tied only to posited law. Such an understanding would risk the fate that Kirchmann cynically foretells in the quotation above.

35 For the sake of analytical clarity, I propose distinguishing this term from the phrase "knowledge about law", which concerns more general, more distant perspectives.
36 The notion of normativity is ambiguous. In some places it is used as the opposite of the purely descriptive (the evaluative), elsewhere as the opposite of the factual (the ought). See, for example, Engisch, 2010, p. 197. I use it in the second sense. I draw particular attention also to the concept of normative orders. In legal theory (and more broadly), this is dealt with extensively in more recent Frankfurt thought; see, for example, the work and platform Normative Ordnungen, developed under the baton of Rainer Forst and Klaus Günther.
37 Kelsen, 1923, pp. 7–11. The notion of validity (Eng. validity, Ger. Geltung) is one of the more important points of divergence between philosophical currents. For positivists it typically suffices that a norm is posited; for realists, only that judges apply it. Sander (1923, p. 11) illustrates this cynically: "validity [for the dogmatic jurist] is not effectiveness, but the hypothesis of effectiveness" (trans. T. F. O.). Guastini (1996, pp. 376–377) further distinguishes instructively between the validity and the existence of a legal rule, tying the first to the rule's conformity with all, and the second to its conformity with only some, of the rules governing its creation and content (Eng. rules of change).
38 Pitamic, 1917, pp. 346–347. For Pitamic it is crucial, among other things, that we choose both the starting point and the path of our cognition economically, thereby "preventing superfluous constructions in the realm of the ought".
39 It would be hard to accept a thesis defining valid law by statistical predictability, typically e.g. Holmes, 1881, pp. 1 ff. The mental scheme of possible legal arguments in a given case will never be determined by a superficial prediction.
40 Kirchmann, 1848, p. 23 (trans. T. F. O.).
41 Paulson, 1996, p. 799. In his contribution, Paulson presents Kelsen's early period, which he names critical constructivism. Somek (1992, pp. 176 ff.) mentions three functions of the constructivist stance – reductive, grammatical and productive.
42 Seinecke, 2013, pp. 260–268.
43 Jhering (1899, pp. 347 ff.) lists, as negative notions associated with the movement, for example Lückenlosigkeit des Rechts, innere Fruchtbarkeit, logische Expansionskraft, Subsumtionsidealismus, lebensferner Kult des Logischen and Konstruktivismus.
44 Somek, 1992, p. 175; also Haferkamp, entry Begriffsjurisprudenz in Enzyklopädie zur Rechtsphilosophie; Haferkamp, 2004, p. 79.
45 Pitamic, 1956, p. 201, esp. note 10; Scalia, 2018, pp. 14 ff.
46 By way of example, Lee, 2010.
47 See, for example, Niet, 2021.
Nor is any single thing exclusively relevant to legal construction: not a concrete set of judicial decisions, a legal-dogmatic system, the corpus of statutory and sub-statutory law, or the list of constitutionally protected human rights. The phantasm on whose basis the lawyer acts, and for whose effectiveness he is educated, anticipates the existence, cognition and development of the valid normative legal sentences present in a given place and time, which he can use in his work. It supplies him with the posture with which he does whatever it is he does: provides a reliable legal solution, defends a firm legal position, shapes a convincing legal decision. Legal construction, imaginarily erected above us all, is a phantasm that we at once adopt and break through.

4. Manifestations of the Gazing Spirit

In courtrooms, in attorneys' and notaries' offices, and in the lecture halls of (positive-)law courses – to define very simply a few key places where the phantasm of legal construction manifests itself in its purest form – a specific spirit of thought is present. There, all the troubles of human social life are reduced to legal categories; the argument of the norm unfolds: the language of law reigns. It is subject to a specific gaze – in observing and understanding the world it adopts a typically lawyerly prism.50 My aim is to show that this typical lawyerliness – regardless of its possible inaccessibility, which is stricto sensu not a problem for legal theory and legal philosophy, indeed perhaps even their first condition – follows a certain symbolic code.

48 Cf. Dworkin, 1986, p. 52: "Roughly, constructive interpretation is a matter of imposing purpose on an object or practice in order to make of it the best possible example of the form or genre to which it is taken to belong."
49 This definition seeks to counter "postmodernist" premises that would deny law legitimacy or artificially place it in the field of apparatuses of power.
It follows theses – and moves – that, given their properties, are now more and now less justifiedly rooted within legal culture and legal knowledge.51

The task looks simple at first sight: all I have to do is copy out the material I use in the courses I teach. Yet serious difficulties can arise very quickly the moment someone doubts the scientific or ideological credibility of the phantasm: "Why accept this kind of theorising at all, when it is so very obvious that law is merely a reflection of social circumstances, and that investigating and uncovering those is what is truly worthwhile?" At that point, all the vaunted lawyerliness acquires the aftertaste of harsh social life, and my attempt is ultimately defeated. The "spirit" will dissipate at the intervention of its catchers, who feign no such idealism. Are we today really all legal realists? I simply have to copy in a cunning way.

I shall resort to three notorious manifestations of the gaze I am describing: the questions of language, of technique, and of the internalisation of the normative-legal. I call them notorious above all because they are the typical sites both of acknowledging and of criticising events in the realm of the ought. At the same time – and this is key – they form the load-bearing frame of legal construction.52

4.1. Seeking a Common Language

The European Court of Human Rights (ECtHR) wrote in one of its decisions:

P1 "In view of the above findings, the Court considers that the applicants could reasonably have foreseen that their conduct in the disputed waters would, under the applicable Croatian legislation, constitute minor offences."53

At first sight the passage is not inaccessible. To a layperson, especially an educated one, its content will at least on the surface be easy to make out: the "Court" examined and established something; the applicants allegedly acted contentiously in certain waters; and this is prohibited under Croatian legislation. Some would simplify it further still: "Slovenia has lost to Croatia before the whole of Europe."

Where does the rabbit hide? The lawyerly perspective lends the passage an additional, artificial, yet not arbitrary level of abstraction. The dimension supplied by the phantasm is the basis of legal knowing. The expressions "reasonably have foreseen", "disputed waters", "applicable Croatian legislation" and "constitute minor offences" will be placed by the lawyer into his mental scheme, within which they occupy a specifically allotted conceptual place. It is thus not merely the arbitrary phrase "foresight that is reasonable", but the ascription and requirement of a specific attitude that the applicants had to, and were able to, hold towards their own conduct. Nor is the requirement arbitrary: it derives most directly from Article 7 of the European Convention on Human Rights (ECHR), which enshrines the principle of legality.54 The principle of legality is one of the foundations of criminal law, and of the law of minor offences generally. In minor-offence law the essential requirement is that prohibited conduct be given to the individual in advance: in a clear (lex certa) and strict (lex stricta) manner. To pass the test of the principle of legality, foreseeability must be reasonable, i.e. meet a certain standard of apprehension.

50 The "typical lawyerly prism" compels us to be participants in the legal game. Distinct from it is the perspective of the observer, who sees through legal thinking and places it within a cognitive scheme distant from it – an impractical one.
51 Two perhaps "purest" (though different) premises in this sense are Kelsen's pure theory and Luhmann's systems theory.
52 Legal construction thus has an external and an internal element. The external consists of the three listed elements of legal knowing; the internal, of the very content reflected by the legal sentences.
53 ECtHR judgment in Chelleri and Others v. Croatia, nos. 49358/22, 49562/22 and 54489/22, 16 April 2024, para. 162.
The Court, after all, has dealt with this repeatedly in its rich case law; it has formed a corpus of definitions and distinctions that (co-)constitute the content of the legal concepts employed.55

Much the same holds for the concepts "disputed waters" (it is a matter of international-law definition exactly where the border between Croatia and Slovenia runs), "applicable Croatian legislation" (this can be established only through the formula of recognition, which I mention below) and "constitute minor offences" (in the typology of legal rules of national legislations, a minor offence is a specific violation usually punished by a fine, and in this case it presupposes the normative structure of the Republic of Croatia, which validly defines it and regulates it more or less systematically). Without the appropriate mental template we do not (come to) know what is going on in the normative world at all; its content is simply not accessible to us.

We are dealing with "legal" or "normative" language.56 It is the medium of legal (lawyerly) communication: it forms the central unit of message in which the thinking spirit of the legal expresses itself.57 Legal concepts, whether created in or imported into legal discourse, describe the world of the ought. Their exclusive creators, however, are neither judges, professors, legislators nor the addressees themselves; it is a matter of statistical or empirical, not evaluative, analysis who contributes to the legal conceptual repertoire at all. Something of a consensus – at least on the Continent – is that this privilege belongs predominantly to legislators,58 which is why one often (perhaps even as a rule) also speaks of statutory language.59

Investigating and taking positions on the origin of legal language does not interest me here. In this discussion I treat it as incidental, since I focus on the quality of the unit of message as such, not on the ways it comes to be established. I am similarly reserved about typologising the kinds or sub-varieties of "legal" language. Despite the suspicion I harbour, however, it would be naive to deny the following: established distinctions – say, between the core and the marginal meanings of concepts,60 or between concepts that are the exclusive product of legal thinking61 and those that are also used, in a different form, in general colloquial language – are almost unavoidable. This is partly evident from the example above, where the highlighted concepts in the Court's quotation form a special mass of message. As such, they are accessible above all to one who has internalised the phantasm: usually, the educated lawyer.

4.2. The Technê of Legal Form

The previous example stressed the question of language; the next brings to the fore the question of legal technique:62

P2 "[As regards the non-subsumability of safe rooms under the statutory description 'makes premises available'], it must be stressed that the method of teleological reduction is admissible in criminal law as well and does not contravene the principle of legality."63

The passage is an example of a scholarly position on the criminality of establishing so-called safe rooms, in which addicts would be enabled to inject drugs safely and under supervision.

54 Article 7(1) of the ECHR.
55 See, for example, the ECtHR cases Baranowski v. Poland, no. 28358/95, and Groppera Radio AG and Others v. Switzerland, no. 10890/84.
56 Cf. Pavčnik, 2021, pp. 115 ff.; in this connection, particular mention should also be made of the Slovenian legal terminological dictionary (Pravni terminološki slovar), which we received in 2018.
57 "Law speaks its own language," says Engisch (2010, p. 139) (trans. T. F. O.). Somek (2019, p. 216) goes somewhat further, asserting that law is infinite and self-determining, since it sets its own boundaries. Some authors regard legal dogmatics itself as the language of law (cf. Jestaedt, 2014, p. 9). As a working definition of the latter I may cite Bulygin's (1993, pp. 193–194), who defines legal dogmatics as a complex activity of three phases: identification and systematisation of legal norms, and modification or transformation of legal systems.
The author forms and uses legal language on the basis of a specific method, which for him, in treating the contested question, is one of the possible paths to its solution. The pattern along which an educated lawyer will reason can be predicted in the following steps. Having posed the question whether establishing a safe room is a criminal offence, he first checks what the applicable legislation provides. Once the relevant legal provision has been identified – in that case, Article 197(1) of the then applicable Criminal Code (KZ)64 – the interpretation of the individual statutory elements follows. In our case, the problematic mode of commission was "whoever makes premises available". Next comes the question of subsumption: the safe room can be subsumed under these statutory elements; we place the species under the genus. A safe room is, after all, a space that someone makes available. Is it, though? The author of the passage disagrees; he argues that in such a case it is possible and sensible to apply the method of teleological reduction. With it he will, in bonam partem, break through the linguistic meaning of the criminal norm and conclude: non sequitur. Establishing safe rooms will no longer be understood as a criminal offence, since it manifestly contradicts the purpose of the punitive norm. What are we dealing with?

58 See Pavčnik, 2020, pp. 395 ff.
59 Ibid.
60 Hart, 1994, p. 123.
61 Law arising from the practical and scientific activity of lawyers was named Juristenrecht by Puchta. See Haferkamp, 2004, p. 141.
62 Cf. Kelsen, 1911, pp. 25 ff.; Pavčnik, 2022a, p. 23. Specifically on the technique adopted by legal dogmatics as a specific type of legal discourse, e.g. van Hoecke, 1983, esp. pp. 217–225; Bulygin, 1983, pp. 199 ff.; Bumke, 2014; Bydlinski, 1991, pp. 8–50.
63 Ambrož, 2006, p. 235.
In a legal argument, the unit of legal message is often joined by a certain formula of message: the construction of normative language has its own structure of logical connections – more or less settled or accepted techniques with which we help ourselves in ordering and understanding legal knowledge. If legal language (P1) is the medium of the message, I regard legal technique (P2) as its binding agent. I divide it principally into the formula of recognition of the valid normative legal order, the formula of interpretation of a legal rule, and the formula of its application. All three rest on cognitive simplification or reduction, and all are directed at bare know-how, the "muscular-cognitive memory" of legal activity.

4.2.1. "You Know It When You See It"

By the formula of recognition I mean the patterns of recognising and justifying the validity of a legal rule; the key step here is establishing whether a (general or individual) legal norm belongs to the legal order valid hic et nunc, within which the lawyer operates. Whatever the certainty with which we can predict its possible (non-)application in a concrete case – say, by invoking the assessment that it is "manifestly constitutionally questionable and therefore invalid", or the empirical fact that "the courts usually do not apply it at all" – under this formula what matters above all is whether we can validly invoke it in the legal world at all.

4.2.2. Semantic Acrobatics

A recognised legal rule can be interpreted and understood by the lawyer in various ways.65 The formula of interpretation is relevant above all when we face a case whose legal solution is not entirely obvious. At that point, at the latest, we reach for the settled canon of legal interpretations and legal arguments,66 a set of analytical tools for confronting legal language. In the example above, the author reached for the method of teleological reduction, a specific mental step of breaking through the meaning of a given text. A safe room, which is certainly a "space" in ordinary language, will not be counted as a "space" in the given normative context. For precisely this case we make an exception (an "exceptional state of facts").67 In interpreting legal rules we may arrive at different (even mutually exclusive) solutions, which ground the argument in a given case and moment now better and now worse. The formula of interpretation does not tell us which solution is correct; it can only anticipate the set of possible mental steps that we will, for this or that reason, choose in a given case. Once we decide for one of the options, however, the ambivalence ends: at that point, at the latest, we must be aware that this is for us the best possible – i.e. the correct – solution.

4.2.3. Applying What Has Been Established

The clarity and precision, in short the optimality, of our interpretation must also be disclosed to the addressee. Closely tied to the formula of interpretation is the formula of application of a legal rule, which resolves the dialectic of the ought and the is present in the concrete case. By way of example I cite here the – admittedly problematic – logical schema that legal theory usually prescribes for the application of legal rules: the legal syllogism. Under it, we will recognise in the abstract statutory state of facts ZDS1 the concrete life case KDS1, which leads to the envisaged legal consequence PP1.

4.3. Accepting the Paradigm

The last piece of the scheme I am describing concerns the evaluative substrate that pervades each legal order. Although to act legally does not (necessarily) mean to be moral or ethical – that concerns the question of the individual's own posture, and of which, or whose, morality or ethics we have in mind – it is inevitable that a legal order is accompanied, beyond the legal sentences that substantively constitute it, by a unifying element. Unlike the previous two, this one is prescriptive: along with the unit and the formula of legal message, the phantasm of legal construction also internalises and reproduces a certain virtue. It follows that even the most excellent command of legal language and its logical specifics cannot replace certain threads of the political-ethical fabric proper to the instinct of legal action and regulation. This is the domain of the indescribable, the unsayable: the latent. It is an argument that includes the following thesis: if we are to acknowledge, or successfully adopt, the medium and technique of legal communication at all, we must first, for the most part, identify with the normative-legal discourse I am describing. The practising lawyer thereby ascribes an appropriate degree of legitimacy to the system within which he operates, and to his own activity as such.68 He is assured an answer to the question: "Why am I doing this at all?"

Let me illustrate. In a conversation between two senior-year students in the faculty lobby I caught the following claim:

P3 "The matter is clear: for general legal acts to become valid, the executive body must publish them properly. This is a basic requirement of the principle of legality, which follows from the Constitution itself."

Participation in the legal game thus presupposes acceptance of its – latent – rules. The rules of the legal game of the day differ from legal rules stricto sensu, i.e. those rules that include legal sentences. Both can vary with time, place and political order. Yet there is an important difference between them. If legal rules are usually expressed explicitly, say through the provision "Administrative bodies perform their work independently within the framework and on the basis of the Constitution and laws",69 then the rules of the legal game itself – the "ethos" – are usually expressed implicitly. We do this indirectly, for instance, when we claim that "[t]he principle of legality, the principle of democracy and the principle of the separation of powers [...] admit no exceptions: they must be respected even when a grave disease rages or when the governing politics lacks the necessary majority in parliament", or that "the Constitution is our lowest common denominator, the basis of our social coexistence. We should all strive not to deny it respect."70

By observing and internalising the patterns of knowledge and ethics through which we were educated, we ascribe a minimal measure of legitimacy to legal language and legal technique. I believe that otherwise we will not manage to do lawyerly work well.71 The argument under P3 is therefore not "merely" formal-legal after all. It reflects a deliberation on, and a commitment to, the idea that the exercise of authority should proceed in a settled, responsible, and legally ordered manner. The lawyer's professional – and therewith, volens nolens, also political72 – activity anticipates a posture that is considerably more than merely technical-professional. It reflects the acceptance of the paradigm within which this individual stands.

5. A Few Contentious (Porous) Phantasms

Legal thinking is necessarily cognitively deficient. With it we can operationalise only a very limited set of variables, which will never be able to capture (in advance) the full creativity and liveliness of either normative or factual events. As to the first, this means that in many a case even the best lawyer, whatever role he performs, will not be able to give a reliably predictable – "correct" – answer. As to the second, the lawyerly mental scheme ultimately concerns only a very narrow slice of social life, one that by no means allows cognitive penetration through the further layers of the truth out there.73 I must therefore address some of the most obvious disputes (and pores) around legal thinking; in what follows I present two type-arguments in this connection.

5.1. The Norm of Surprise

With the first argument – I will call it the argument of normative surprise – I claim that positions in which reliable advance legal answers are not possible are necessary to the legal order. Two situations above all are at issue. On the one hand, the very sequence of legal sentences (constituting a legal rule) may be semantically open.

64 It read: "Whoever induces another to use a narcotic drug or gives him a drug for his own or another's use, or whoever makes premises available for the use of a narcotic drug, or otherwise enables another to use a narcotic drug, shall be punished by imprisonment of three months to five years."
65 Novak (2022, p. 284) crucially stresses in this connection that numerous circumstances, including legal tradition and specific doctrinal treatment, mean that "interpretation cannot be understood merely as a process of understanding a text" (emphases omitted), and that within one legal order there exist different approaches to interpreting a text – which he names interpretive pluralism. I see a similarity with Esser's (1972, esp. pp. 116–141) concept of Vorverständnis. Since both hint at a kind of "sedimented legal knowledge", conceptual parallels could perhaps also be sought in Polanyi (2022).
66 Pavčnik, 2022, esp. pp. 57–104, also pp. 167–185; Bydlinski, 1991, pp. 436 ff.
67 Pavčnik, 2022, pp. 177–179.
68 It is no coincidence that we have recently been seeing ever more initiatives for researching precisely the foundations of law. Jhering's "Warum?" can thus be understood as a reminder of an evergreen problem that must be contextualised and given meaning again and again – especially in times of (excessive) social differentiation and specialisation.
69 Article 120(2) of the Constitution of the Republic of Slovenia (Uradni list RS, nos. 33/91-I, 42/97 – UZS68, 66/00 – UZ80, 24/03 – UZ3a, 47, 68, 69/04 – UZ14, 69/04 – UZ43, 69/04 – UZ50, 68/06 – UZ121,140,143, 47/13 – UZ148, 47/13 – UZ90,97,99, 75/16 – UZ70a and 92/21 – UZ62a).
70 Concurring opinion of Judge Dr Špelca Mežnar to Order U-I-210/21 of the Constitutional Court of the Republic of Slovenia of 30 September 2021, p. 2.
71 This might occur in the (hypothetical) case of a judge who expressly refused to adjudicate according to valid law.
72 Somek (1992, p. VW II) implies this when he says that his task will be "[...] Rechtstheorie zu einem gewissen Abschluß zu bringen und die Theorie der Rechtsanwendung auf den Boden der politischen Philosophie zurückzuholen" ("[...] to bring legal theory to a certain conclusion and to return the theory of the application of law to the ground of political philosophy", trans. T. F. O.).
V tem primeru bo pravnikova naloga ta, da izmed možnih rešitev izbere najboljšo in jo ustrezno utemelji. Vzrok za to težavo je lahko denimo nedoločnost oziroma nedoločljivost uporabljenega pravnega pojma, pa tudi logično nejasna sintaksa posameznih pravnih stavkov. Po drugi strani so lahko presenečenja le odraz kompleksnosti in nepredvidljivosti »družbenega laborato- rija«.74 Nekatere situacije bodo pravno tako zapletene, da jih bo obvladala šele skupina specialistov z različnih področij. Obe situaciji nas lahko spravita v zadrego: kako obravnavati pomanjkljivost? Normativna presenečenja niso napaka v matrici. So del »vednostnega dizajna«, ki omo- Philosophie zurückzuholen.« (»[…] pravno teorijo pripeljati do določenega zaključka in teorijo upo- rabljanja prava vrniti na področje politične filozofije«, prevod T. F. O.). 73 Če dosledno pristajamo na dihotomijo Sein in Sollen, pravna shema »resničnega« družbenega življenja sploh ne zadeva. 74 Namenoma se izogibam izrazu pravna praznina. Čeprav je v pravni teoriji prav konceptu oziroma problemu pravnih praznin namenjeno ogromno pozornosti – nekateri teoretski nastavki denimo pravne praznine zanikajo, drugi priznavajo – ga tukaj štejem za preozkega. Če s pravno praznino razumemo družbeni položaj, ki bi zaradi svoje pomembnosti moral biti pravno urejen, pa ni, potem s tem ne zajamemo situacij, kjer pravni red neko družbeno situacijo sicer ureja, vendar je pravna re- šitev zaradi njene kompleksnosti, morda celo prelomnosti, le stežka doumljiva. Gre za presenečenje, ki ga moramo mukoma, toda metodično razčleniti. 56 Zbornik znanstvenih razprav – letnik LXXXIV, 2024 LjubLjana Law Review, voL. LXXXiv, 2024 goča prevajanje strogo družbenega v strogo najstveno. Privid pravne konstrukcije operira s predpostavko, da sta cilj in smisel pravnega razmišljanja šele v zagotavljanju ustrezne miselne sheme, ki nam bo zdaj omogočila brezhibno orientacijo po obstoječem in zdaj – vezano – kreiranje novega normativnega sveta. 
Presenečenja pravzaprav anticipira. S tem pa se zoperstavlja pretiranemu formalizmu. Pravne konstrukcije namreč ne opredeljuje kot »vse, kar sploh je« in svojega obstoja ne utemeljuje s svetostjo pojmovnih nebes.75 Za svoje reproduciranje ne potrebuje poslušnega, temveč ustvarjalnega posameznika: graditelja novega normativnega sveta.76

5.2. Materialističen rez

Za delujočega pravnika je ključno, da s prividom pravne konstrukcije usvoji določeno metodo. Šele takrat bo lahko pravo spoznaval, poznal, razumel in uporabljal. S tem pa mu bo tako na praktični kot tudi teoretični ravni vedno znova umanjkala neka perspektiva, ki bi te uokvirjene procese umestila v kontekst,77 izpostavila kot kulturno in zgodovinsko pogojene, skratka: prebila onkraj njihove praktične relevantnosti »tukaj in zdaj«. Ne bi torej obravnavala usodnega dejstva, da je miselni duh pravnega tako na »mikro« kot tudi »makro« ravni vedno in tudi nujno del vladajočih ekonomskih,78 političnih in drugih determinant. Manjkal bi del enačbe. Tovrstno (s)pornost pravnega razmišljanja orisujem še s pomočjo argumenta metodološke ukleščenosti.

Pri privzemanju privida bo najprej umanjkala predvsem izkušnja79 družbeno-pravnega dogajanja, prek katere bi bila miselna shema preizkušena in diferencirana. »V praksi je namreč vse drugače,« odmeva ob inavguracijskih sestankih mladih odvetniških pripravnikov. Da bi v okviru posamezne pravne obrti šele lahko delovali, bo ključno, da pridobljene vzorce pravnega (spo)znanja modificiramo oziroma kalibriramo. Privid pravne konstrukcije s tem ne bo okrnjen, temveč dopolnjen: prestajal bo vedno znova razvijajoče se zaporedje spoznavnih sintez, vezanih na zakonitosti (pravnikove) partikularne življenjske situacije.

V teoretičnem smislu bo metodološka ukleščenost preprečevala seznanitev in sodelovanje z drugimi sistemi znanja,80 kar vodi do dveh nujnih posledic. Pravni metodološki redukcionizem je nujen za razumevanje in delovanje znotraj normativnega sveta. Brez tako ločenega diskurza ne gre resno izpeljati družbenega projekta, ki gradi s »pravom« in na njegovi podlagi kot privida avtonomne celote. Sistem tega preprosto ne bi dopuščal. Pri tem je sicer na vsakem od nas odločitev, ali bomo uresničevanje tega projekta tudi sprejeli ali zavrnili. Toda v nobenem primeru ne bo mogoče trditi, da prek usvojenega privida opisujemo več kot zgolj to, kar naj se zgodi. Opis družbenega dogajanja na način, da ga zvedemo na najstvene kategorije, ne ponudi opisa tega, kar poteka onkraj in mimo opisanega miselnega duha. Nujno je torej, da na drugi strani sprejmemo metodološki pluralizem: šele izstopanje iz najstvenih vzorcev bo omogočilo pogled na celostno sliko, ki jo s skupnimi močmi ustvarjamo.81

5.2.1. Silogizem – non sequitur

Argument metodološke ukleščenosti lahko hkrati razumemo kot hibo in odliko. Odločitev za prvo ali drugo bo pretežno odvisna od (oddaljenosti) perspektive, ki jo za opisovanje s pravom povezanega dogajanja privzemamo, kar pa je mogoče ponazoriti na problemu pravnega odločanja.

Silogistično sklepanje je v pravu najbolj plastična, redukcionistična formula, po kateri naj bi se miselno ravnal pravnik, ko je soočen s konkretnim primerom. Gre za logični obrazec, katerega conclusio je rezultat deduktivnega sklepa na podlagi dveh premis. Ta vključuje subsumpcijo relevantnih lastnosti konkretnega življenjskega primera pod abstraktno in splošno pravilo. Obrazec določa, da iz dejanskega stanja izluščimo tip ravnanja, ki ga predvidevajo vnaprej opredeljene najstvene kategorije: je torej izčiščen logični korak, v okviru katerega zgolj prepoznamo pravno rešitev.82

Prednost tega nastavka je predvsem v njegovi didaktičnosti in praktičnosti. Nudi prikladno formo, ki jo pravna teorija privzame in na podlagi katere kompleksnost pravnega dogajanja zvede na zgolj dve premisi: tip ravnanja lahko v vnaprej podanih kategorijah bodisi prepoznamo ali pa ne. Če prvo, potem obsodba; če drugo, potem oprostitev. Ker premisi anticipirata deduktiven sklep, s tem poenostavimo še način, s katerim do uporabe pravnega pravila bodisi pridemo ali pa ne. S tem pa prezremo vse druge logične možnosti in procese, ki spremljajo posameznikovo odločanje.83

Po drugi strani pa ima silogizem precejšnjo vrednost v kontekstu teorije argumentacije, ki posebno pozornost namenja pravni obrazložitvi.84 Z ubiranjem sodnikovega vidika si ta prizadeva doseči, da je odločevalčeva miselna pot prenesena na navzven racionalno in preverljivo raven: brez ustreznega podajanja trditev in razlogov zanje pravni diskurz preprosto ne more steči.

Težave se pojavijo vsaj na dveh ravneh. Prva je ta, da silogistično sklepanje pravni metodi pravzaprav ne zadosti v celoti.85 Zgornji P2 – pa tudi drugi primeri, v katerih gre za »zapolnjevanje nepopolnosti v pravnih virih«86 – zajema prav situacijo, ko bomo od jezikovnega pomena odstopili, zaradi česar bo deduktiven sklep logično nemogoč. Druga pa zadeva primere, ko pravno sklepanje opazujemo od zunaj, brez privzetja »pravne perspektive«. Pravna igra lahko takrat v resnici deluje precej izprijeno. Kar bo denimo iz pravnoteoretičnega gledišča racionalno silogistično utemeljevanje kazni, bo morda iz empiričnega, psihološkega oziroma kriminološkega gledišča predvsem prelivanje stereotipov s strani lačnega, podplačanega sodnika nad obsojenčevo usodo.

75 Prim. Jhering, 1899, str. 245 in nasl.
76 Tak graditelj deluje na podlagi Bobbiove (1997, str. 42–44) teze o možnosti izpopolnjevanja pravnega reda (angl. rule of closure).
77 Pri tem pa ne bo šlo zgolj za seznanitev s svetom tam zunaj per se, temveč bo lahko tako tudi lažje razumel, da in kako dejanske okoliščine vplivajo na pravno razlago samo (tj. uporabo pravne tehnike). Glej Novak, 2022, str. 286.
78 Posebno pozornost je treba nameniti Pašukanisovemu (1917) poskusu aplikacije marksizma na »buržoazno« razumevanje pravne forme, naloga katere je razkrinkanje razrednosti pravne logike (str. 41), njena predpostavka pa prisotnost blagovno-denarnega gospodarstva (str. 57).
79 V našem prejšnjem skupnem prostoru poskuša to težavo obravnavati integralna teorija prava. Višković (1976, str. 53) denimo pravi, da »definicija prava treba da bude istodobno radikalno iskustvena« (poudarek izpuščen) (»definicija prava mora biti sočasno radikalno izkustvena«, prevod T. F. O.). Največji zagovornik sicer modificirane integralne teorije prava v našem pravnem prostoru je Pavčnik.
80 Integralna teorija prava poskuša obravnavati tudi to zagato, zaradi česar Višković (prav tam, str. 57) vztraja, da mora integralna teorija prava zgraditi svojo metodologijo, in sicer takšno, ki bo zagotovila celostno pravno spoznanje marksističnega tipa. To mu ni uspelo.
81 Prim. Kantorowicz, 1911, str. 21–23; Pitamic, 1917, str. 366–367. Nemški prostor v tej razpravi sicer zelo veliko, če ne največ pozornosti namenja pravni dogmatiki, kar v začetku 20. stoletja problematizirajo zlasti svobodnopravniki. Glej na primer Kantorowiczevo (1917, str. 29) trditev: »Dogmatik ohne Soziologie ist leer, Soziologie ohne Dogmatik ist blind« (»Dogmatika brez sociologije je prazna, sociologija brez dogmatike pa slepa«, prevod T. F. O.).
82 Furlan (1933, str. 48) je zapisal, da smo s tem »[v] individualnosti konkretnega primera [...] prepoznali splošni lik norme.« Vendar je do tega kritičen. Ne samo, da »je [to] neko intuitivno zrenje, neka neposredna danost, [ki] [...] ni istovetna z razumskimi danostmi«, samo spoznanje je »šele analiza med dvema sintezama«. Osrednjega pomena je zanj akt prepoznave, ki je »torej osnova vsakega pravnega sklepanja in vsake primembe prava.«
Pluralizem metode lahko prav tukaj prepreči morebiten konflikt: če nastalo »zadrego« postavimo kot znanstveno vprašanje, pogojeno z metodološko, disciplinsko in s tem spoznavno razlikujočimi se nastavki, oba sistema ohranita »svojo resnico«; pri tem pa bo politično vprašanje, kaj s takim znanjem sploh početi.

6. Namesto konca začetek: sprejeti nujno zlo

V prispevku sem nekaj besed namenil konceptu, ki sem ga poimenoval privid pravne konstrukcije. Pod njim sem združil že dobro poznane teoretične elemente. Zatrjeval sem, da pravniki prek ohranjanja in razvijanja specifične miselne drže zagotavljajo reproduciranje normativnega sveta, pri čemer sem opredelil nekaj ključnih mest, ki to držo spoznavno bodisi tvorijo ali pa ji poskušajo nasprotovati. Ob tem sem poudaril, da taka zasnova pridobivanja vednosti vodi do njene monopolizacije – »le« pravniku je omogočen vpogled v abstraktne kategorije najstva –, kar pa se hkrati kaže tudi na trgu pravičnosti: dobra pravna storitev stane, sleherniku pa pogosto ni na voljo. Pravno razmišljanje sem pri tem opredelil kot nesamoumevno, potencialno problematično, vendar nepogrešljivo.

Toda zagotavljanje sinopsisa ne bo moj zadnji korak. Želim poudariti ambicijo, ki je prispevku pravzaprav ves čas botrovala. Pravno delovanje in podoba prava sta morda prepogosto na preizkušnji. Tako pravniku kot tudi pravnemu naslovniku – sleherniku – se morda prepogosto zdi, da pravo ne zagotavlja tega, kar naj bi najverjetneje bilo eno od njegovih ključnih vodil: služiti človeku. »To je igra za elito,« lahko beremo.

83 Denimo indukcijo in predvsem tudi abdukcijo, ki sta nujno prisotni.
84 Prim. Furlan, 1933, str. 49. Glej tudi Pavčnik, 2022, str. 268 in nasl. Za »hermenevtično rekonstrukcijo« silogizma glej Pavčnik, 2008.
85 Štejemo pa ga lahko – z zadržkom – za »temeljno ogrodje«. Glej Pavčnik, 1990, str. 327.
86 Glej Pavčnik, 2022, str. 167–185.
S prispevkom sem se te problematike želel dotakniti na dva načina. Prvič, pravnemu razmišljanju, kljub ali pa morda prav zaradi njegove specifičnosti, moramo zaupati. Drugič, pravno razmišljanje se mora udejanjati prek skupne vizije, ki v ospredje ne postavlja samega sebe, temveč znotraj skupnosti odgovorno prevzema breme dobrega argumenta.

Privid pravne konstrukcije je morda le eden od načinov, kako lahko pravno miselno shemo koncipiramo. Toda cilj našega delovanja, ne glede na obliko, ki jo privzema, je lahko v svojem bistvu samo eden: ohranjati in razvijati boljši pravni in s tem dejanski svet.

Literatura

Alchourrón, C. E., in Bulygin, E. (1971) Normative Systems. Dunaj in New York: Springer-Verlag.
Ambrož, M. (2006) ‘Varne sobe za injiciranje drog – tudi v Sloveniji’, Revija za kriminalistiko in kriminologijo 57(3), str. 232–239.
Austin, J. (1995) The Province of Jurisprudence Determined. Cambridge: Cambridge University Press.
Banovič, D. (2023) Realistička teorija prava: ogledi o pravnom realizmu, konvencionalizmu i naturalizaciji u pravu. Sarajevo: Pravni fakultet Univerziteta.
Bobbio, N. (1997) ‘The Science of Law and the Analysis of Language’ v: Pintore, A., in Jori, M. (ur.) (1997) Law and Language: The Italian Analytical School. Liverpool: Deborah Charles Publications, str. 21–50.
Bobbio, N. (2011) Giusnaturalismo e positivismo giuridico. Prefazione di Luigi Ferrajoli. Edizione digitale: Biblioteca universale Laterza.
Bourdieu, P. (1986) ‘La force du droit: Eléments pour une sociologie du champ juridique’, Actes de la Recherche en Sciences Sociales 64, str. 3–19.
Bulygin, E. (1983) ‘Legal Dogmatics and the Systematisation of Law’, v: Eckhoff, T., Friedman, M. L., in Uusitalo, J. (ur.) (1983) Vernunft und Erfahrung im Rechtsdenken der Gegenwart. Berlin: Duncker & Humblot, str. 193–210.
Bumke, C. (2014) ‘Rechtsdogmatik.
Überlegungen zur Entwicklung und zu den Formen einer Denk- und Arbeitsweise der deutschen Rechtswissenschaft’, Juristen Zeitung 13, str. 641–650.
Bydlinski, F. (1991) Juristische Methodenlehre und Rechtsbegriff. Zweite, ergänzte Auflage. Dunaj in New York: Springer-Verlag.
Cerar, M., in Auersperger Matić, A. (2001) (I)racionalnost modernega prava. Ljubljana: Bonex.
Cerar, M. (2006) ‘Ideološki vidiki razmerja med (demokratično) politiko in pravom’, Uprava (Ljubljana) 4(2-3), str. 161–180.
Cerar, M. (2009) ‘The Relationship Between Law and Politics’, Annual Survey of International & Comparative Law 15(1), str. 19–41.
Cerar, M. (2011) ‘The Ideology of the Rule of Law’, ARSP: Archiv für Rechts- und Sozialphilosophie / Archives for Philosophy of Law and Social Philosophy 97(3), str. 393–404.
Dahm, G., in Schaffstein, F. (1933) Liberales oder autoritäres Strafrecht. Hamburg: Hanseatische Verlagsanstalt.
Dugar, G., Fajfar, T., Humar, M., Žagar Karer, M., Novak, A., Tičar, L., in Jemec Tomazin, M. (ur.) (2018) Pravni terminološki slovar. Ljubljana: Založba ZRC, ZRC SAZU.
Dworkin, R. (1986) Law’s Empire. Cambridge: Harvard University Press.
Engisch, K. (2010) Einführung in das juristische Denken. Elfte Auflage. Stuttgart: Kohlhammer Verlag.
Enzyklopädie zur Rechtsphilosophie: IVR (Deutsche Sektion) und Deutsche Gesellschaft für Philosophie, .
Slovar slovenskega knjižnega jezika, .
Esser, J. (1972) Vorverständnis und Methodenwahl in der Rechtsfindung. Rationalitätsgrundlagen richterlicher Entscheidungspraxis. Frankfurt na Majni: Athenäum Fischer Taschenbuch Verlag.
Forgó, N. (2023) ‘Jenseits der Rechtsdogmatik’, v: Bezemek, C. (ur.) (2023) Rechtsdogmatik: Stand und Perspektiven. Dunaj: MANZ’sche Verlags- und Universitätsbuchhandlung, str. 459–468.
Forst, R., in Günther, K. (ur.) (2021) Normative Ordnungen. Frankfurt na Majni: Suhrkamp Verlag.
Fuller, L. L.
(1963) The Morality of Law. New Haven in London: Yale University Press.
Fraenkel, E. (2019) Dvojna država. Ljubljana: cf. založba.
Frank, J. (1931) ‘What Courts Do in Fact: Part One’, Illinois Law Review 26, str. 645–666.
Furlan, B. (1933) ‘Teorija pravnega sklepanja’, Zbornik znanstvenih razprav 10, str. 29–53.
Guastini, R. (1996) ‘Fragments of a Theory of Legal Sources’, Ratio Juris: An International Journal of Jurisprudence and Philosophy of Law 9(4), str. 364–386.
Guastini, R. (2015) ‘Realistični pogled na pravo in (s)poznavanje prava’, Revus – Journal for Constitutional Theory and Philosophy of Law 27, str. 35–44.
Haferkamp, P.-H. (2004) Georg Friedrich Puchta und die »Begriffsjurisprudenz«. Frankfurt na Majni: Vittorio Klostermann.
Hart, H. L. A. (1958) ‘Positivism and the Separation of Law and Morals’, Harvard Law Review 71(4), str. 593–629.
Hart, H. L. A. (1994) The Concept of Law. Oxford: Oxford University Press.
Van Hoecke, M. P. (1983) ‘La Systematisation dans la Dogmatique Juridique’, v: Eckhoff, T., Friedman, M. L., in Uusitalo, J. (ur.) (1983) Vernunft und Erfahrung im Rechtsdenken der Gegenwart. Berlin: Duncker & Humblot, str. 217–230.
Holmes, O. W. Jr. (1881) The Common Law. Boston: Little, Brown and Company.
Jestaedt, M. (2014) ‘Wissenschaft im Recht. Rechtsdogmatik im Wissenschaftsvergleich’, Juristen Zeitung 69(1), str. 1–12.
Jhering von, R. (1899) Scherz und Ernst in der Jurisprudenz. Eine Weihnachtsgabe für das juristische Publikum. Leipzig: Breitkopf und Härtel.
Kant, I. (2000) Kritik der reinen Vernunft. Band I und II. Frankfurt na Majni: Suhrkamp Verlag.
Kantorowicz, H. (1911) Rechtswissenschaft und Soziologie. Verhandlungen des Ersten deutschen Soziologentages zu Frankfurt am Main. Tübingen: Verlag von J. C. B. Mohr.
Kelsen, H. (1911) Über Grenzen zwischen juristischer und soziologischer Methode.
Vortrag gehalten in der Soziologischen Gesellschaft zu Wien. Tübingen: Verlag von J. C. B. Mohr.
Kelsen, H. (1923) Hauptprobleme der Staatsrechtslehre. Entwickelt aus der Lehre vom Rechtssatze. Tübingen: Verlag von J. C. B. Mohr.
Kelsen, H. (1960) Reine Rechtslehre. Zweite, vollständig neu bearbeitete und erweiterte Auflage. Dunaj: Österreichische Staatsdruckerei.
Kelsen, H. (1962) ‘Naturrechtslehre und Rechtspositivismus’, Politische Vierteljahresschrift 3(4), str. 316–327.
Kelsen, H. (1979) La teoria generale del diritto e il materialismo storico. Introduzione e Traduzione di Francesco Riccobono. Roma: Christengraf (prevod Kelsnovega spisa Allgemeine Rechtslehre im Lichte materialistischer Geschichtsauffassung iz leta 1931).
Kirchmann von, J. H. (1848) Die Werthlosigkeit der Jurisprudenz als Wissenschaft: ein Vortrag, gehalten in der juristischen Gesellschaft zu Berlin. Berlin: Julius Springer Verlag.
Komel, S. (2023) ‘Praktična vednost in praktična vednost pravnih delavcev’, Problemi: revija za kulturo in družbena vprašanja 9-10, str. 159–193.
Leiter, B. (2008) ‘Naturalizing Jurisprudence: Three Approaches’, Public Law and Legal Theory Working Papers 246, str. 1–15, (dostop: julij 2024).
Luhmann, N. (2000) ‘Die Rückgabe des zwölften Kamels: zum Sinn einer soziologischen Analyse des Rechts’, Zeitschrift für Rechtssoziologie 21(1), str. 3–60.
Luhmann, N. (2004) Law as a Social System. Oxford in New York: Oxford University Press.
Merkl, A. J. (1931) ‘Prolegomena einer Theorie des rechtlichen Stufenbaus’, v: Verdross, A. (ur.) (1931) Gesellschaft, Staat und Recht. Festschrift Hans Kelsen zum 50. Geburtstag gewidmet. Frankfurt na Majni: Sauer und Auvermann, str. 252–294.
Nieto, E. C. (2021) ‘The Foundations of Legal Constructivism’ v: Fabra-Zamora, J. L., in Rosas Villa, G. (ur.) (2021) Conceptual Jurisprudence: Methodological Issues, Classical Questions and New Approaches.
Cham: Springer Nature Switzerland AG, str. 295–319.
Novak, A. (2001) ‘O definiciji prava’, Zbornik znanstvenih razprav 61, str. 81–101.
Novak, A. (2003) Narava in meje zavezujoče moči prava. Ljubljana: Pravna fakulteta Univerze v Ljubljani.
Novak, A. (2022) ‘Interpretativni pluralizem’ v: Novak, A., in Pavčnik, M. (ur.) (2022) Pravne panoge in metodologija razlage prava. Ljubljana: Lexpera (GV Založba), str. 275–304.
Novak, A. (2023) ‘Pojem in pojavnosti sodniškega prava’, v: Novak, A., in Pavčnik, M. (ur.) (2023) Sodniško pravo. Ljubljana: Lexpera (GV Založba), str. 323–367.
Obreza, T. F. (2022) Pravnoteoretične in praktične razsežnosti pojma pravne dobrine. Ljubljana: samozaložba.
Paulson, S. L. (1996) ‘Hans Kelsen’s Earliest Legal Theory: Critical Constructivism’, The Modern Law Review 59(6), str. 797–812.
Pavčnik, M. (1990) ‘Okrog »pravnega silogizma«: odgovor prof. Lukiću in odpiranje novih vprašanj’, Zbornik za teoriju prava 4, str. 327–334.
Pavčnik, M. (2004) ‘Die (Un)produktivität der Positivistischen Jurisprudenz’, v: Himma, E. K. (ur.) (2004) Law, Morality, and Legal Positivism: Proceedings of the 21st World Congress of the IVR. Wiesbaden: Franz Steiner Verlag, str. 81–91.
Pavčnik, M. (2008) ‘Das Hin- und Herwandern des Blickes’, Slovenian Law Review 5(1-2), str. 31–44.
Pavčnik, M., in Novak, A. (2020) Teorija prava. Prispevek k razumevanju prava. Šesta, pregledana in dopolnjena izdaja (s poglavjem Aleša Novaka). Ljubljana: Lexpera (GV Založba).
Pavčnik, M. (2021) Razumevanje prava. Ljubljana: Lexpera (GV Založba).
Pavčnik, M. (2022) Argumentacija v pravu. Od življenjskega primera do pravne odločitve. Četrta, pregledana in dopolnjena izdaja. Ljubljana: Lexpera (GV Založba).
Pavčnik, M. (2022a) ‘Enotnost v različnosti, različnost v enotnosti’, v: Novak, A., in Pavčnik, M. (ur.) (2022) Pravne panoge in metodologija razlage prava. Ljubljana: Lexpera (GV Založba), str. 17–35.
Pitamic, L.
(1917) ‘Denkökonomische Voraussetzungen der Rechtswissenschaft’, Österreichische Zeitschrift für öffentliches Recht 3, str. 339–367.
Pitamic, L. (1956) ‘Naturrecht und Natur des Rechtes’, Österreichische Zeitschrift für öffentliches Recht NF 7, str. 190–207.
Polanyi, M. (2022) Razsežnost tihe vednosti. Ljubljana: Krtina.
Pritrdilno ločeno mnenje sodnice dr. Špelce Mežnar k sklepu Ustavnega sodišča RS U-I-210/21 z dne 30. septembra 2021.
Radbruch, G. (1946) ‘Gesetzliches Unrecht und übergesetzliches Recht’, Süddeutsche Juristen-Zeitung 1(5), str. 105–108.
Sander, F. (1923) Kelsens Rechtslehre – Kampfschrift wider die normative Jurisprudenz. Tübingen: Mohr (Paul Siebeck) Verlag.
Scalia, A. (2018) A Matter of Interpretation. Princeton: Princeton University Press.
Schaffstein, F. (1934) ‘Nationalsozialistisches Strafrecht. Gedanken zur Denkschrift des Preußischen Justizministers’, Zeitschrift für die gesamte Strafrechtswissenschaft 53, str. 603–628.
Schmitt, C. (2023) Über die drei Arten des rechtswissenschaftlichen Denkens. Vierte, korrigierte Auflage. Berlin: Duncker & Humblot.
Seinecke, R. (2013) ‘Rudolf von Jhering anno 1858: Interpretation, Konstruktion und Recht der sog. „Begriffsjurisprudenz“’, Zeitschrift der Savigny-Stiftung für Rechtsgeschichte: Germanistische Abteilung 130, str. 238–280.
Somek, A. (1992) Rechtssystem und Republik. Über die politische Funktion des systematischen Rechtsdenkens. Dunaj in New York: Springer-Verlag.
Somek, A. (1996) Der Gegenstand der Rechtserkenntnis: Epitaph eines juristischen Problems. Baden-Baden: Nomos.
Somek, A., in Forgó, N. (1996) Nachpositivistisches Rechtsdenken: Inhalt und Form des positiven Rechts. Dunaj: WUV-Universitätsverlag.
Somek, A. (2006) Rechtliches Wissen. Frankfurt na Majni: Suhrkamp Verlag.
Somek, A. (2019) ‘Ex facto ius oritur’ v: Bersier Ladavac, N., Bezemek, C., in Schauer, F. (ur.)
(2019) The Normative Force of the Factual: Legal Philosophy Between Is and Ought. Cham: Springer Nature Switzerland AG, str. 121–134.
Somek, A. (2021) Knowing What the Law Is: Legal Theory in a New Key. Oxford: Hart Publishing.
Vaihinger, H. (1922) Die Philosophie des Als Ob: System der theoretischen, praktischen und religiösen Fiktionen der Menschheit auf Grund eines idealistischen Positivismus. Leipzig: Felix Meiner Verlag.
Visković, N. (1976) Pojam prava. Prilog integralnoj teoriji prava. Split: Biblioteka Pravnog fakulteta u Splitu.

© The Author(s) 2024
Znanstveni članek
DOI: 10.51940/2024.1.65-86
UDK: 347.7:347.9:343.1

Luka Vavken*

(Ne)priznavanje privilegija zoper samoobtožbo pravnim osebam s poudarkom na enoosebni gospodarski družbi

Povzetek

Prispevek analizira vprašanje, ali je treba v kaznovalnih postopkih priznati pravico do privilegija zoper samoobtožbo ne le fizičnim, temveč tudi pravnim osebam. Ker pregon kaznivih dejanj oziroma prekrškov, v katerem nastopajo pravne osebe, bolj kot pregon fizičnih oseb temelji na materialnih, torej neverbalnih dokazih, uvodni deli razprave obravnavajo vprašanje dometa privilegija zoper samoobtožbo. Ta v sodobni pravni dogmatiki in sodni praksi ne zajema le testimonialnih dokazov, temveč tudi materialne dokaze oziroma dokumentarno gradivo, nad katerim ima osumljenec kontrolo. Ker je kaznovalni očitek – zaradi sistema limitirane akcesorne odgovornosti pravnih oseb – fizični (odgovorni) osebi pravne osebe praviloma vsebinsko prepleten z očitkom pravni osebi, privilegij zoper samoobtožbo, ki ga uživa domnevni storilec kaznivega dejanja oziroma prekrška, pogosto hkrati varuje pred izpovedovanjem in izročanjem dokumentarnega gradiva v svojo škodo tudi pravno osebo. Ne pa vselej! Avtor zavzema stališče, da bi bilo treba v slednjem primeru pravnim osebam priznati samostojno pravico do privilegija zoper samoobtožbo.
Še zlasti, kadar je osumljena oziroma obdolžena enoosebna gospodarska družba, pri kateri se s podelitvijo privilegija zoper samoobtožbo dejansko varuje pred izpovedovanjem (ravnanjem) v svojo škodo njenega »lastnika« – edinega družbenika.

Ključne besede

privilegij zoper samoobtožbo, jamstva poštenega postopka, pravna oseba, enoosebna družba z omejeno odgovornostjo, odgovornost pravnih oseb za kazniva dejanja, odgovornost pravnih oseb za prekrške, limitirana akcesorna odgovornost.

* Doktor pravnih znanosti, pravosodni svetnik I. na Vrhovnem sodišču Republike Slovenije, vavken.luka@gmail.com.

Zbornik znanstvenih razprav – letnik LXXXIV, 2024 • Ljubljana Law Review, Vol. LXXXIV, 2024 • pp. 65–86 • ISSN 1854-3839 • eISSN 2464-0077

1. Uvod

Privilegij zoper samoobtožbo je eno najzahtevnejših procesnih jamstev, ki odpira številna vprašanja v pravni dogmatiki in sodni praksi. Je odraz spoznanja, da je v vertikalnem razmerju med državo in posameznikom, ki se izoblikuje v kaznovalnem postopku,1 posameznik šibkejša stranka, ki ima pravico, da se svobodno in zavestno odloči, ali bo v celoti sodeloval z organi pregona ali bo ostal povsem pasiven. Med tema dvema skrajnostma je širok razpon bolj ali manj aktivnih ravnanj posameznika, na katerega je osredotočen sum storitve kaznivega dejanja ali prekrška.

Osrednji predmet prispevka je upravičenec, torej tisti, ki mu pripada pravica do privilegija zoper samoobtožbo. Je to le fizična oseba, ki je primarno nosilka človekovih pravic in svoboščin, katerih končni cilj je varstvo človekovega dostojanstva v kaznovalnem postopku, ali pa je mogoče oziroma celo nujno privilegij zoper samoobtožbo priznati tudi pravnim osebam?
To ključno vprašanje prispevka bo obravnavano na treh ravneh: na ravni pravne dogmatike, stališč sodne prakse najvišjih sodišč v državi in končno na praktični ravni kazenskih oziroma prekrškovnih postopkov zoper pravne osebe. Cilj razprave je torej predvsem podati temeljito analizo možnosti priznavanja samostojne pravice do privilegija zoper samoobtožbo pravnim osebam, ki so osumljene kaznivega dejanja oziroma prekrška.

Kaznovalni pregon pravnih oseb bolj kot pregon fizičnih oseb temelji na dokumentarnem gradivu, torej na neverbalnih dokazih. Zato bo prvi korak pri iskanju odgovora na vprašanje priznavanja privilegija pravnim osebam analiza dometa privilegija zoper samoobtožbo. Ta problematika je obravnavana v uvodnem delu razprave, ki išče odgovor na vprašanje, ali privilegij obsega le izjave osumljenca ali pa se njegov domet razteza tudi na materialne dokaze, nad katerimi ima osumljenec kontrolo.

Slednjič se bo razprava od splošnejših vprašanj, ki so povezana s privilegijem zoper samoobtožbo pravnim osebam, zožila na jedrno in čisto konkretno vprašanje nujnosti priznavanja privilegija zoper samoobtožbo v primeru, ko se v kaznovalnem postopku znajde ena od najbolj priljubljenih in razširjenih oblik gospodarske družbe – enoosebna gospodarska družba. Nujni pogoj za ta del razprave je dobro poznavanje razvoja, zlasti pa notranje zgradbe enoosebne gospodarske družbe, ki bo prikazan na primeru enoosebne družbe z omejeno odgovornostjo. Ta je v primerjavi s preostalimi gospodarskimi družbami tako specifična, da argumentov glede (ne)priznavanja pravice do privilegija zoper samoobtožbo pravnim osebam, ki veljajo za preostale gospodarske družbe, zanjo ni mogoče uporabiti.

1 Z besedno zvezo »kaznovalni postopek« merimo na kazenski in prekrškovni postopek.

2.
Namen privilegija zoper samoobtožbo in iz njega izhajajoče pravice

Privilegij zoper samoobtožbo (angl. privilege against self-incrimination) je procesno jamstvo kazenskega postopka, ki obdolžencu omogoča, da ne postane del dokazov zoper samega sebe, izpove zoper samega sebe ali prizna krivde.2 Bistvo privilegija zoper samoobtožbo je v tem, da morajo organi pregona v najširšem smislu obdolžencu pustiti, da je povsem pasiven oziroma da se sam zavestno, razumno in povsem prostovoljno odloča, ali bo z njimi sodeloval.3 Končni namen privilegija zoper samoobtožbo je zagotoviti, da ima obdolženec položaj subjekta v kazenskem postopku.4 S privilegijem zoper samoobtožbo se torej varuje obdolženčeva svobodna volja, ali bo v kaznovalnem postopku izpovedoval ali raje molčal, s čimer se zagotavlja spoštovanje njegovega osebnega dostojanstva.5

Iz privilegija zoper samoobtožbo izhajajo tri temeljne pravice:6
– pravica do molka, ki obdolžencu zagotavlja, da o svoji vpletenosti v kazenskopravno relevantni dogodek pove zgolj toliko, kolikor sam prostovoljno in zavestno hoče povedati. Pravica do molka je vsebinsko ožja od privilegija zoper samoobtožbo, saj se nanaša zgolj na tako imenovano akustično prisilo, to je prisilo, da obdolženec nekaj pove, privilegij pa je, kot bo predstavljeno v nadaljevanju, po svojem dometu precej širši;7
– pravica do pravnega pouka, ki se nanaša na pravni pouk o pravici neinkriminirati samega sebe. Za udejanjanje privilegija zoper samoobtožbo je nujna, saj zagotavlja njegovo učinkovitost. Prvi pogoj, da bo prava neuki obdolženec lahko uresničeval svoj privilegij zoper samoobtožbo, je, da je o njem ustrezno informiran. Pravni pouk, s katerim se obdolženca opozori na pravico do privilegija zoper samoobtožbo, mora biti tak, da bo njegova odločitev o tem, ali bo uveljavljal pravico do molka oziroma pasivnosti, v celoti odvisna od njegove svobodne volje;8
– pravica do seznanitve s procesnim gradivom.
The defendant must be enabled to take a substantive decision on whether to invoke the privilege against self-incrimination; he must therefore be allowed to know what he is accused of and on what factual basis the accusation rests. Only acquaintance with the case file enables the autonomy of the defendant's will to achieve the desired procedural effect at all.9

2 Compare Horvat, 2004, p. 27; and Šošić, 2023, p. 90.
3 Compare, for example, the decision of the Constitutional Court of the Republic of Slovenia Up-134/97-17 of 14 March 2002 and the judgments of the European Court of Human Rights in Jalloh v. Germany, no. 54810/00, of 11 July 2006, and Allan v. the United Kingdom, no. 48539/99, of 5 November 2002.
4 Thus Zupančič, 1996, p. 29.
5 Compare, for example, the decision of the Constitutional Court of the Republic of Slovenia Up-1293/08-24 of 6 July 2011, para. 31.
6 In more detail Žaucer, 2013, pp. 319–321.
7 In more detail Šugman, 2000, pp. 166 and 249.
8 Compare, for example, the decision of the Constitutional Court of the Republic of Slovenia Up-134/97-17 of 14 March 2002.
9 Compare Gorkič, 2011, p. 108 et seq.

The privilege against self-incrimination is closely connected with the presumption of innocence, laid down in Article 27 of the Constitution of the Republic of Slovenia (Constitution) and Article 6 of the European Convention for the Protection of Human Rights and Fundamental Freedoms (ECHR).10 Since the presumption of innocence means, among other things, that the burden of proof lies with the prosecutor, the defendant is not required to do anything in his own defence.11 This means that he may also remain silent, while the state prosecutor must prove all the elements of the criminal offence and convince an independent court even if the defendant remains entirely passive. The right to silence is the bulwark that prevents the burden of proof from being shifted onto the defendant.12

3. The Constitutional, Convention, EU and Statutory Dimensions of the Privilege Against Self-Incrimination

The ECHR does not expressly mention the privilege against self-incrimination or the right to silence.
Nevertheless, the European Court of Human Rights (ECtHR) has recognised it in its case law as part of the right to a fair trial under Article 6 of the ECHR.13 The ECtHR has repeatedly emphasised that the right to silence and the right not to incriminate oneself lie at the heart of a fair procedure and are generally recognised international standards. Their fundamental rationale is to protect the defendant against improper compulsion by the state and to prevent abuses. According to the ECtHR, the privilege does not protect against every self-incrimination, but only where evidence would be obtained from the defendant under compulsion, that is, against his will. This covers situations in which the defendant is coerced under threat of sanctions, where physical or psychological pressure is exerted on him, and where evidence is obtained from him by subterfuge.14

In contrast to the provisions of the ECHR, the privilege against self-incrimination is expressly and relatively thoroughly regulated in European Union law. Directive (EU) 2016/343 of the European Parliament and of the Council15 provides that the right of an individual not to incriminate himself is an important

10 The presumption of innocence comprises three aspects in particular: (1) that a person is considered innocent until proven guilty; (2) that guilt must be proven by the state prosecutor and not by the accused person; and (3) that in case of doubt, where guilt has not been indisputably proven, the court must acquit the accused person. Compare, for example, the decisions of the Constitutional Court of the Republic of Slovenia U-I-6/93 of 1 April 1994, para. 15; and Up-124/16 of 27 October 2017, para. 7.
11 Compare, for example, the ECtHR judgments in John Murray v. the United Kingdom, no. 18731/91, of 8 February 1996, and Telfner v. Austria, no. 33501/96, of 20 March 2001.
12 In more detail on the presumption of innocence see Zobec, 2019, pp. 268–269.
13 Compare, for example, the judgments in Funke v. France, no. 10828/84, of 25 February 1993, John Murray v. the United Kingdom, no. 18731/91, of 8 February 1996, and Jalloh v. Germany, no. 54810/00, of 11 July 2006, among numerous others.
14 Compare, for example, Saunders v. the United Kingdom, no. 19187/91, of 17 December 1996.
15 Directive (EU) 2016/343 of the European Parliament and of the Council of 9 March 2016 on the strengthening of certain aspects of the presumption of innocence and of the right to be present at the trial in criminal proceedings,

aspect of the presumption of innocence. As regards establishing a violation of the right to silence or of the right of an individual not to incriminate himself, the Directive refers to the ECtHR's interpretation of the right to a fair trial under the ECHR.

The privilege against self-incrimination is not expressly written into Article 29 of the Constitution, which lays down the fundamental guarantees in criminal proceedings. In the fourth indent of Article 29, the Constitution provides that a person charged with a criminal offence must be guaranteed, with full equality, the right not to be compelled to testify against himself or his close relatives, or to confess guilt. Under this constitutional provision the defendant has the right to silence. The settled case law of the Constitutional Court of the Republic of Slovenia shows that the Court interprets the fourth indent of Article 29 of the Constitution as a provision guaranteeing the privilege against self-incrimination.16

The text of the Constitution is broader than the interpretation of the privilege against self-incrimination in the case law of the ECtHR. In the ECtHR's judgments the emphasis is exclusively on protection against incriminating oneself, not also against incriminating one's close relatives.17 In the Slovenian legal order, then, the content of the privilege that also protects against compulsion to testify against our close relatives is elevated to the constitutional level.18

The privilege against self-incrimination is also implemented at the statutory level. The third paragraph of Article 5 of the Criminal Procedure Act (ZKP)19 provides that the defendant is not obliged to present a defence or to answer questions, and that if he does present a defence, he is not obliged to incriminate himself or his close relatives or to confess guilt.
Like the other rights under this Article, the defendant enjoys the privilege against self-incrimination from the moment suspicion of having committed a criminal offence is focused on him, and thus already in the pre-trial procedure.20

In misdemeanour proceedings the privilege against self-incrimination is reflected in the second paragraph of Article 55 of the Minor Offences Act (ZP-1),21 under which the misdemeanour authority, upon establishing or dealing with a misdemeanour and before issuing a misdemeanour decision, notifies the offender

recitals 24–32 and Article 7.
16 Compare the decision of the Constitutional Court of the Republic of Slovenia Up-134/97 of 14 March 2002, para. 10, and numerous later decisions.
17 Compare Polajžar and Stajnko, 2020, p. 151.
18 Such a regulation seems sensible in view of Slovenia's recent history. In the post-war period numerous individuals were convicted because, for example, they failed to report to the authorities their relatives who were hiding from the new regime or preparing to flee abroad, or whom they merely supported, for instance by giving them food or clothing. Over the past thirty years the Supreme Court has, in proceedings upon a request for the protection of legality, acquitted numerous individuals who had been convicted of such offences.
19 Official Gazette of the Republic of Slovenia, nos. 176/21 – official consolidated text and 96/22.
20 The privilege against self-incrimination thus extends to the gathering of information under the fourth paragraph of Article 148 of the ZKP, to questioning under Articles 148.a, 203 and 227 of the ZKP, and to statements on guilt at the pre-trial hearing or in plea negotiations under the first paragraph of Article 450.a of the ZKP. Spontaneous statements made by the defendant to the law enforcement authorities, by contrast, are not covered by the privilege against self-incrimination. Compare, for example, the decision of the Constitutional Court of the Republic of Slovenia Up-1293/08-24 of 6 July 2011 and the judgment of the Supreme Court of the Republic of Slovenia I Ips 500/2008 of 9 July 2009.
21 Official Gazette of the Republic of Slovenia, nos. 29/11 – official consolidated text, 21/13, 111/13 and 74/14.
of the misdemeanour and, among other things, instructs him that he may make a statement on the facts and circumstances of the misdemeanour, that he is not obliged to do so or to answer questions, and that if he does make a statement or answer questions, he is not obliged to incriminate himself or his close relatives. Under the first paragraph of Article 114 of the ZP-1, the misdemeanour defendant receives a similar legal instruction before being examined by the court in the regular procedure.22

The privilege against self-incrimination is a right of positive status, since the state authorities are obliged to advise the defendant of the right to silence whenever he is to make a statement on the accusations or might in any way incriminate himself.23

4. The Scope of the Privilege Against Self-Incrimination

For the discussion on recognising the privilege against self-incrimination for legal persons, the question whether the privilege covers only the suspect's statements or also other (material or documentary) evidence in the suspect's possession is of cardinal importance. The prosecution of criminal offences and misdemeanours involving legal persons rests, more than the prosecution of natural persons, precisely on material, that is, non-verbal evidence. The operations of legal persons generate numerous documents (contracts, balance sheets, invoices, electronic documents that cannot be obtained without the prosecuting authorities entering the legal person's information system, and the like) which have the character of documentary or electronic evidence in punitive proceedings. Without obtaining them and presenting them in court, it is practically impossible to decide on the defendant's guilt.

4.1. In Legal Doctrine

The question of what evidentiary material the privilege against self-incrimination covers is not a simple one. From the scholarly and scientific works published a decade or more ago it can be discerned that the privilege against self-incrimination reliably extends to so-called testimonial evidence, that is, evidence obtained from the defendant through his statement.
On the other hand, scholarship was and remains unanimous that the privilege does not cover evidence for whose acquisition the defendant's consent is not required.24 Horvat states that the privilege against self-incrimination does not apply to acts in which the defendant is merely an object of criminal proceedings and evidence against him can be obtained without his statement. Among such evidence he lists, for example, the taking of fingerprints, the taking of a buccal swab for DNA analysis, a physical examination, and the taking of blood and other bodily fluids.25 Selinšek, too, although on the basis of an analysis of the ECtHR's case law she argues that the privilege against self-incrimination must be understood very broadly as regards the types of proceedings to which it extends and relatively broadly as regards the types of evidence, limits its scope to so-called testimonial evidence, that is, to the individual's statement.

22 Under the first paragraph of Article 114 of the ZP-1, the defendant is told what he is charged with and then asked what he can state in his defence; he is also told that he is not obliged to present a defence or to answer questions, and that if he does present a defence, he is not obliged to incriminate himself or his close relatives or to admit liability for the misdemeanour.
23 Compare Bošnjak and Žaucer Hrovatin, 2019, pp. 288–289.
24 In contemporary legal doctrine, particularly on the basis of the findings of neuroscience, the boundary between these two groups of evidence is increasingly blurred. An example is so-called brain fingerprinting, during which the suspect is entirely passive – only the electrical waves emitted by his brain are recorded, and he need not answer any questions. Scholars nevertheless regard this as testimonial evidence protected by the right to silence, because what really interests the investigators are the suspect's thoughts, not his brain as such. In more detail Hafner, 2018, pp. 155–164.
She continues that the privilege against self-incrimination does not guarantee the individual absolute protection against becoming a source of incriminating material against himself, but prevents only those forms of compulsion that affect his will to make particular statements.26

In more recent Slovenian legal doctrine a trend towards broadening the scope of the privilege against self-incrimination can be observed. Gorkič notes that the privilege undisputedly relates to statements about an event.27 He goes on to ask whether the privilege against self-incrimination can also be understood more broadly, that is, whether it protects the alleged perpetrator also against compulsion to make a different, non-verbal contribution to the evidentiary material. With regard to the duty to surrender documents, he observes that the defendant's conduct can be treated as conduct under his voluntary control. In this respect the surrender of a document resembles the defendant's statement and differs from, for example, the taking of blood, a mucosal swab and other acts which the defendant must merely endure passively.28 He concludes that the surrender of a document or other object lies somewhere in between: it is a voluntary act of the defendant (like his statement), but one in which the defendant can no longer influence its content, that is, shape the information conveyed to the prosecuting authorities. In this respect the surrender of a document or other object resembles the situation in which the prosecuting authorities obtain evidence from the defendant's body.

Žaucer Hrovatin goes a step further. She states explicitly that the right to silence covers all voluntary acts of disposal aimed at obtaining evidentiary material, not only the defendant's statements. The privilege thus covers both testimonial and documentary evidence in whose acquisition the defendant voluntarily participates.29 Šošić is even clearer when he concludes that the privilege against self-incrimination extends not only to the defendant's statements but also to the surrender of objects, documents and other incriminating material evidence.30
25 Thus Horvat, 2004, p. 27.
26 Compare Selinšek, 2010, p. 305.
27 Compare Gorkič, 2014, p. 375.
28 Ibid., p. 376.
29 Compare Žaucer, 2013, p. 324.
30 Compare Šošić, 2023, p. 93.

4.2. In Case Law

In the case law of the ECtHR the scope of the privilege against self-incrimination is unclear.31 From Saunders, two conditions can nevertheless be distilled which must be cumulatively fulfilled for obtained evidence to violate the privilege against self-incrimination. The first condition is that the evidence is obtained against the defendant's will.32 The second condition is that the evidentiary material depends on the defendant's will.

More specific is Directive (EU) 2016/343 of the European Parliament and of the Council: although Article 7, which governs the right to silence and the right of an individual not to incriminate himself, does not define the scope of the privilege against self-incrimination, its preamble makes it unambiguously clear that the privilege relates to the defendant's statements and to documentary material that may lead to self-incrimination.33

Older Slovenian case law from which one could infer a distinction between evidence covered by the privilege against self-incrimination and evidence that is not is very scant.34 The Constitutional Court of the Republic of Slovenia has as a rule limited the scope of the privilege to testimonial or communicative statements.35 On the other hand, already in decision Up-134/97 of 14 March 2002, for example, it held that the prosecuting authorities in the broadest sense must allow the defendant to remain entirely passive, that is, to decide consciously, rationally and voluntarily whether or not to cooperate with them. Such a holding could suggest that the scope of the privilege against self-incrimination extends beyond testimonial statements alone.

The clearest and most unambiguous answer on the scope of the privilege against self-incrimination was given by the Supreme Court of the Republic of Slovenia in judgment I Ips 88506/2010 of 15 October 2020.
This decision is a landmark as regards the scope of the privilege against self-incrimination, because in it the Supreme Court of the Republic of Slovenia for the first time unambiguously held that the privilege against self-incrimination relates not only to the suspect's statements but also to the surrender of material evidence.36

31 Compare Mekše, 2023, p. 26.
32 The lack of clarity and semantic openness of this case has also been pointed out in legal theory. In more detail see, for example, Redmayne, 2007, pp. 214–216.
33 Recital 25 of Directive (EU) 2016/343 of the European Parliament and of the Council of 9 March 2016 on the strengthening of certain aspects of the presumption of innocence and of the right to be present at the trial in criminal proceedings states that suspects and accused persons who are asked to make statements or answer questions should not be compelled to produce evidence or documents or to provide information which may lead to self-incrimination.
34 Compare Mekše, 2023, p. 26.
35 Compare, for example, the decision of the Constitutional Court of the Republic of Slovenia Up-1678/08-13 of 14 October 2010.
36 In this case the convicted person was found guilty of the criminal offence of violent conduct and of three criminal offences of damaging another's property, to the detriment of his former partner. The facts of the case were as follows: two police officers stopped the convicted person's vehicle and called on him to hand over the objects he had on him. Immediately after this call, the convicted person pulled a mobile phone out of his pocket and threw it onto the car seat. The officers seized the mobile phone and then searched his vehicle. Almost a month before the police patrol stopped the vehicle, the police had received from the injured party a report of several criminal offences of endangering safety. On the basis of that report, the police officers had, as they stated, 'for some time been paying attention to the convicted person with the aim of establishing whether he was in fact driving behind the injured party and threatening her by mobile phone'. Immediately before stopping the convicted person's vehicle and conducting the procedure, the officers noticed that he had bought two mobile phone top-up cards at a petrol station. They also verified this information at the petrol station.

In this case the Supreme Court carried out its assessment in two steps.
In the first step it held that at the time the convicted person's vehicle was stopped and the police procedure conducted, suspicion that he had committed a criminal offence was already focused on him. Because of that focused suspicion, the convicted person should have been informed of the privilege against self-incrimination during the police procedure, that is, before his mobile phone was seized.

In the second step the Supreme Court examined whether the officers' call to hand over objects and the seizure of the mobile phone constituted a violation of the privilege against self-incrimination. It held that the privilege primarily relates to testimonial or communicative statements, but that a distinction must be drawn between physical evidence originating in or from the defendant's body, which can be obtained from him independently of his will, and physical evidence – that is, objects in the defendant's possession that could incriminate him.37 The Supreme Court clearly and definitively concluded that objects in the defendant's possession that could incriminate him fall, alongside testimonial statements, within the scope of the privilege against self-incrimination.38

5. The Privilege Against Self-Incrimination for Natural Persons Only, or for Legal Persons Too?

This question depends in its essence on how one views the legal person. For those legal theorists who deny its legal personality, arguing that the legal person is an abstraction, an artificial construct, a collectivity of people which then as such outwardly appears as a legal subject, there can be no doubt.39 For them the legal person is not a real phenomenon, because only a human being can be a bearer of rights and duties. A different answer to the problem of the privilege against self-incrimination for legal persons would be given by theorists who advocate the reality of legal persons. For them the legal person is a real, not merely an imagined, person. Like a natural person, it
37 The prosecuting authorities can obtain such evidence in two ways: (1) the court, on the motion of the state prosecutor's office, issues a warrant for a personal or house search; and (2) the defendant voluntarily hands the objects over to the prosecuting authorities at their request, the precondition of voluntariness being that he has been informed of his rights under the fourth paragraph of Article 148 of the ZKP.
38 With its decision the Supreme Court did not interfere with the plain view doctrine, which allows police officers to seize objects that are not directly connected with the criminal offence for which a warrant for a house or personal search or a search of an electronic device was issued and are not being sought in the search, but which point to another criminal offence prosecuted ex officio. It did not interfere with that doctrine because, as already noted, no warrant to search the suspect's vehicle had been issued in the case at hand. In more detail Vavken, 2022, p. 33.
39 These include, for example, the fiction theory and the theories denying legal personality. In more detail Pavčnik, 2016, pp. 163–164.

is a legal subject because the quality of legal personality is recognised in it as such. Pavčnik, for example, takes the view that the nature of legal persons is stretched between social reality, which confirms legal persons, and the human value judgment that this quality is also to be recognised in certain socio-material substrates.
For him their recognition is artificial in that it creates a new legal quality, while on the other hand it is an expression of the real interests and needs that condition legal personality.40

Whichever view of the legal person one finds more persuasive, the fact remains that the Slovenian legal order provides that a legal person is liable for civil delicts and infringements of rights as well as for misdemeanours and, under certain conditions, even for criminal offences.41 The question of recognising the privilege against self-incrimination for the legal person is therefore highly relevant in our legal system.

5.1. (Non-)Recognition of the Privilege Against Self-Incrimination for the Legal Person at the Level of Legal Doctrine and the Decisions of the Constitutional Court and the Supreme Court of the Republic of Slovenia

In legal doctrine we encounter three theories that attempt to resolve the problem of whether the privilege against self-incrimination should also be recognised for legal persons.42 The subjective theory denies the privilege against self-incrimination to the legal person on the ground that a legal person has no personality rights for the privilege to protect. It proceeds from the position that the privilege against self-incrimination is intended to protect the rights of the defendant precisely because he is a natural person, a subject of human rights and human dignity. According to the objective theory, the privilege against self-incrimination is a procedural guarantee of criminal procedure that operates in its own right, objectively and independently of the subject of the proceedings, and must therefore also be recognised for legal persons. More intricate than these theories is the theory of guilt. It denies the privilege to the legal person on the reasoning that, just as a legal person has, for example, no right to vote, it also has no right to a legitimate conviction, and therefore cannot enjoy protection of its will as a condition for achieving a legitimate conviction. Under the theory of guilt, the legal person does not define the normative conditions of its own criminal conviction, because it simply does not have that right.43

The Slovenian case law of the highest courts in the country leans towards the subjective theory. In decision U-I-108/99 of 20
March 2003 the Constitutional Court of the Republic of Slovenia took the position that the privilege against self-incrimination is granted (only) to natural persons and thus does not extend to legal persons. It reasoned that a legal person forms its will

40 Ibid., pp. 164–165.
41 Liability for criminal offences was introduced by the Criminal Code (KZ) of 1994 and is retained by the currently applicable Criminal Code (KZ-1). It can be established only for certain criminal offences and on the conditions that the perpetrator committed the criminal offence in the name and on the account or for the benefit of the legal person, and that the statute governing the liability of legal persons for criminal offences specifies that the legal person is liable for it.
42 In more detail von Freier, 2010, p. 126 et seq.; and Žaucer, 2013, pp. 329–336.
43 Compare Žaucer, 2013, p. 336.

through its representatives, and that in criminal proceedings its representatives act in the name of the legal person and on its account. What is essential is that they do not exercise their own rights and obligations but the rights and obligations of the legal person. They also dispose of the legal person's records and documents as its legal representatives, not as their owners or holders. The Court further substantiated its decision with the interpretation that, under the fourth indent of Article 29 of the Constitution, no one is obliged to testify against himself or his close relatives. On the basis of (evidently purely) textual interpretation of that provision it held that only someone who appears in the proceedings himself and is capable of forming his own will can testify against himself. It concluded that it is also obvious that only a natural person can have close relatives.

In connection with this decision of the Constitutional Court it must be pointed out that the position on the privilege against self-incrimination of legal persons was adopted in the context of reviewing the constitutionality of the tax procedure. This means that the Constitutional Court did not expressly rule on the question of the privilege of legal persons in criminal proceedings.
The dilemma of how the privilege against self-incrimination operates when a legal person finds itself in criminal proceedings was resolved, more than ten years after the Constitutional Court's decision, by the Supreme Court. In case I Ips 96123/2010 of 13 February 2014 it held that the privilege against self-incrimination applies only to natural persons, not only in tax but also in criminal proceedings. In its reasoning it relied on the Constitutional Court's position in the decision cited above. To the Constitutional Court's reasoning it added that already a textual interpretation of the fourth paragraph of Article 148 of the ZKP shows that the provision does not apply to a legal person, since a legal person may, under the conditions laid down in the Liability of Legal Persons for Criminal Offences Act (ZOPOKD),44 be liable for a criminal offence committed by another person, but cannot itself commit a criminal offence or participate in one.

The positions of the two highest courts in the country have drawn criticism in legal doctrine. Žaucer, for example, observes that denying the privilege against self-incrimination to the legal person in practice means permitting coercion against one or more natural persons who will represent the legal person in criminal proceedings.45 For her there is thus no doubt that denying the privilege, in the reality of criminal proceedings, amounts to compelling the accused legal person (in fact its representatives) to make statements – and, moreover, to their subsequent monetary punishment if its representatives refuse to answer questions that would incriminate the legal person.46 Šošić voices a similar criticism. He finds that the position denying the legal person the privilege is questionable and outdated in light of the Constitutional Court's later decision U-I-40/12-31 of 11 April 2013, in which it held that in interpreting constitutional provisions their purpose and legal nature must be taken into account, and it must be assessed whether and to what extent they can also apply to legal persons.
He concludes his objection to the non-recognition of the privilege for legal persons with the reflection that there is no substantive reason why, on the one hand, criminal liability is enforced against the legal person while, on the other hand, it is denied one of the fundamental constitutional guarantees of criminal procedure.

44 Official Gazette of the Republic of Slovenia, nos. 59/99, 12/00, 50/04, 65/08 and 57/12.
45 Žaucer, 2013, p. 341.
46 Ibid., p. 339.

5.2. (Non-)Recognition of the Privilege Against Self-Incrimination for the Legal Person in Living Law – the Practice of Punitive Proceedings

In the Slovenian legal order the liability of a legal person for criminal offences is in principle accessory or derivative. This means that the legal person can be liable only alongside the direct perpetrator of the criminal offence. The perpetrator's conduct is the formal condition for establishing the substantive grounds of the legal person's liability.47 Accessoriness, however, is not absolute but limited. The legal person can be liable for criminal offences even where the perpetrator is not criminally responsible.48 Such a situation arises, for example, where the direct perpetrator is insane, acts under a mistake of fact or law, acts in extreme necessity or under force or threat, is protected by immunity or (most commonly in practice) has already been convicted of the same criminal offence. In such cases the exclusion of the perpetrator's responsibility does not affect the legal person, which may be found liable on the basis of the principle of limited accessory liability of the legal person.

Limited accessory liability applies not only to the liability of legal persons for criminal offences but also for misdemeanours. The first paragraph of Article 14
of the ZP-1, which, unlike the KZ-1, regulates the liability of legal persons for misdemeanours comprehensively and systematically, prescribes the accessory liability of the legal person; this is relativised in the second paragraph of the same Article, which establishes the possibility of the legal person's principal liability where the perpetrator of the misdemeanour is not liable for it or cannot be identified.49

From the perspective of possible recognition of the privilege against self-incrimination, what matters is the procedural consequence of the (primarily) accessory liability of the legal person, reflected in the rule that unified proceedings are as a rule conducted against the legal and the natural (responsible) person.50 The requirement of

47 Under Article 42 of the KZ-1 and Article 4 of the ZOPOKD, the formal condition for the legal person's liability is that the perpetrator commits the criminal offence in the name, on the account or for the benefit of the legal person.
48 In more detail Šepec, 2019, pp. 1359–1360.
49 In more detail Selinšek, 2018, pp. 100–103.
50 Under Article 27 of the ZOPOKD, proceedings against the legal person for the same criminal offence are as a rule initiated and conducted together with the proceedings against the perpetrator. Separate proceedings against the legal person are possible only where proceedings against the perpetrator cannot be initiated or conducted for reasons laid down by statute, or where proceedings against him have already been conducted. In unified proceedings a single indictment is filed and a single judgment issued against the legal person and the defendant. Article 70 of the ZP-1 provides similarly: unified proceedings are as a rule conducted against the legal and the responsible person, and proceedings solely against the legal person are conducted only where factual or legal obstacles prevent proceedings against the responsible person. The unity of misdemeanour proceedings against the legal and the responsible person also follows from the second paragraph of Article 80 of the ZP-1, which determines jurisdiction over misdemeanours committed by a legal person and its responsible person: for both persons, the court with jurisdiction over the legal person is competent. The rule on the unified conduct of proceedings against the legal and the responsible person also applies, under the second paragraph of Article 58 of the ZP-1, in the fast-track misdemeanour procedure.
unified proceedings is a consequence of the principle of procedural economy. The essential reason for the unity of the proceedings is the intertwining of the accusations against the natural (responsible) and the legal person, the necessary consequence of which is the interconnected determination of the liability of the legal person and its responsible person.

The practical consequence of the unified conduct of proceedings and the substantive intertwining of the accusations against the natural and the legal person is that in practice there should be no particular difficulty with the legal person's privilege against self-incrimination. The natural (responsible) person who finds herself in punitive (criminal or misdemeanour) proceedings together with the legal person will of course enjoy the privilege against self-incrimination with respect to the accusation concerning her own conduct. Since the accusation against the natural (responsible) person is as a rule substantively intertwined with the accusation against the legal person, the privilege against self-incrimination enjoyed by the responsible person – the perpetrator of the criminal offence or misdemeanour – will in fact simultaneously protect against testifying and surrendering documentary material to his own detriment and to the detriment of the legal person. But not always: in the case of internally more complex companies, the prosecuting authorities may demand evidentiary material not from the suspected responsible person but from some third person employed in the company (for example the head of the legal department, an accountant and the like) on whom suspicion of having committed the criminal offence or misdemeanour is not focused. In that case the legal person is left without the protection afforded by the privilege against self-incrimination.

A similar procedural situation also arises where, for reasons laid down by statute, punitive proceedings are conducted against the legal person alone.
V tem primeru zakoniti zastopnik pravne osebe – glede na (restriktivni) stališči Ustavnega sodišča in Vrhovnega sodišča – ne uživa pravice do privilegija zoper samoobtožbo, vezanega na ugotavljanje odgovor- nosti pravne osebe. Tako procesno situacijo (sicer ne v fazi kazenskega postopka, tem- več ob policijskem zbiranju obvestil) je že obravnavalo Vrhovno sodišče v zadevi I Ips 96123/2010 z dne 13. februarja 2014. V tej zadevi je policija v predkazenskem postopku od pravne osebe, na katero je bil osredotočen sum storitve kaznivega dejanja, zbirala obvestila – med drugim tudi od njenega direktorja. Vrhovno sodišče je ugotovilo, da v času zbiranja obvestil sum storitve kaznivega dejanja ni bil osredotočen na zakonitega za- stopnika – direktorja pravne osebe, temveč le na pravno osebno. Sklenilo je, da je policija ravnala zakonito, ker direktorju pravne osebe ni dala pravnega pouka po določbi četrtega odstavka 148. člena ZKP. To pa zato, ker: prvič v času zbiranja obvestil, kot je bilo že rečeno, sum nanj še ni bil osredotočen. In drugič, ker je privilegij zoper samoobtožbo dan le fizičnim, ne pa tudi pravnim osebam. Tukaj se začne razprava zoževati k bistvu v uvodu postavljenega problema. Ali je presoja obeh najvišjih sodišč v državi, ki pravnim osebam na splošno odrekata pravico do privilegija zoper samoobtožbo, skladna z bistvom in namenom privilegija tudi tedaj, ko v pravni osebi nastopa le en družbenik. Za odgovor na to vprašanje je treba najprej vorno osebo velja v skladu z določbo drugega odstavka 58. člena ZP-1 tudi v hitrem postopku o prekršku. 78 Zbornik znanstvenih razprav – letnik LXXXIV, 2024 LjubLjana Law Review, voL. LXXXiv, 2024 predstaviti osnovne značilnosti enoosebne gospodarske družbe. Te bodo prikazane na primeru enoosebne družbe z omejeno odgovornostjo. 6. 
Enoosebna družba z omejeno odgovornostjo Gospodarske družbe s pravnim poslom ustanovijo pravni subjekti zaradi opravljanja pridobitvene dejavnosti in drugih družbenih ciljev na trgu, pravni red pa jim z vpisom v register podeli oziroma prizna lastnost pravne osebe.51 V slovenskem pravnem redu velja načelo numerus clausus, ki pomeni, da je mogoče ustanoviti le tisto obliko pravnih oseb, ki jo pravni red izrecno določa in pravno ureja. Med takimi družbami je tudi družba z omejeno odgovornostjo, ki je kapitalska gospodarska družba, katere osnovni kapital sestavljajo osnovni vložki družbenikov.52 6.1. Razvoj enoosebne družbe z omejeno odgovornostjo Družba z omejeno odgovornostjo je bila prvič uvedena v Nemčiji leta 1892.53 Postala je najbolj priljubljena in razširjena oblika gospodarske družbe. V preteklosti je bila namenjena zlasti manjšim poslovnim podvigom z manjšim številom družbenikov. Njen osnovni kapital je razmeroma nizek, kar manjšim podjetnikom, ki nimajo veliko kapitala, omogoča, da ustanovijo tako družbo. V sodobnem času digitalizacije poslovanja in globalnega financiranja novih projektov postaja družba z omejeno odgovornostjo vse bolj sprejemljiva tudi za velike poslovne podjeme.54 Posebna oblika družbe z omejeno odgovornostjo je enoosebna družba z omejeno odgovornostjo, katere bistvo je, da so vsi poslovni deleži družbe v rokah ene osebe. V nasprotju z družbo z omejeno odgovornostjo, ki ima že 130-letno tradicijo,55 je enoosebna družba z omejeno odgovornostjo razmeroma nov institut, ki je bil v slovenski pravni prostor uveden šele leta 1993 po zgledu nemške zakonodajne ureditve, ki je tako družbo prvič dopustila šele leta 1980.56 Pravo je živ organizem, ki se nenehno spreminja, praviloma »od spodaj navzgor«, kar pomeni, da so dejanski družbeni pojavi tisti, ki vplivajo na pravno regulacijo posameznega pravnega instituta. Na začetku je bilo ustanavljanje enoosebnih družb prepovedano.
Kljub temu se je razširila praksa ustanavljanja takih družb s tako imenovanimi slamnatimi možmi, ki so se z družbeno pogodbo zavezali, da bodo takoj po vpisu družbe v sodni 51 Primerjaj Korže, 2015, str. 73. 52 Tako 471. člen ZGD-1. 53 Imenovala se je die Gesellschaft mit beschränkter Haftung – GmbH. 54 Smiselno primerjaj Bratina, 2018, str. 1035. 55 Prav tam. 56 Podrobneje o tem Cepec, Ivanc, Kežmah in Rašković, 2010, str. 17. register prenesli svoj poslovni delež na enega družbenika. Ustanovitelj je torej preskrbel družbenike, ki so prevzeli in vplačali deleže, pozneje pa je te deleže od njih pridobil in tako postal edini družbenik družbe z omejeno odgovornostjo.57 Druga možnost za ustanovitev enoosebne družbe, ki je prav tako pomenila izigravanje pravnih pravil o prepovedi njene ustanovitve, je bila, da so slamnati družbeniki ohranili zgolj simbolični vložek v družbi z omejeno odgovornostjo. Take družbe so bile že od svoje ustanovitve pravzaprav enoosebne družbe z omejeno odgovornostjo.58 Obstoj enoosebne družbe z omejeno odgovornostjo je moralo pravo najprej priznati v primeru, ko sta družbo z omejeno odgovornostjo sestavljala dva družbenika, od katerih je eden umrl. Pozneje so pravni redi na podlagi dejanskih družbenih potreb začeli priznavati ustanovitev enoosebne družbe z omejeno odgovornostjo. S tem je pravo pristalo na personifikacijo podjetniškega substrata, ki je postal sredstvo za doseganje cilja enega samega ustanovitelja.59 6.2. Posebnosti enoosebne družbe z omejeno odgovornostjo Kot je bilo že rečeno, je bistvena značilnost enoosebne družbe, da so vsi poslovni deleži v rokah ene osebe. Za enoosebno družbo gre tudi v primeru, ko ima vse poslovne deleže skupnost oseb ali so vsi poslovni deleži skupna lastnina več oseb.60 V Sloveniji je enoosebna družba z omejeno odgovornostjo urejena v določbah od 523. do 526.
člena Zakona o gospodarskih družbah (ZGD-1).61 Za vsa vprašanja, ki niso urejena s temi posebnimi pravili, se uporabljajo splošne določbe, ki veljajo za vse gospodarske družbe, in seveda določbe, ki se uporabljajo za družbo z omejeno odgovornostjo z več družbeniki, razen tistih, ki niso združljive z naravo enoosebne družbe z omejeno odgovornostjo ali kadar zakon izrecno določa drugačno ureditev.62 V svojem bistvu in namenu je enoosebna družba z omejeno odgovornostjo primerljiva s samostojnim podjetnikom. Bistvena razlika med njima je, da je enoosebna družba z omejeno odgovornostjo kapitalska družba, kar pomeni, da družba za svoje obveznosti odgovarja z vsem svojim premoženjem. S tem se zagotovi ločitev premoženja fizične osebe od premoženja družbe kot pravne osebe.63 Enoosebna družba zaradi svoje narave nima skupščine družbenikov, ki jo ima običajna družba z omejeno odgovornostjo, imeti pa mora direktorja oziroma poslovodjo. 57 Prav tam. 58 Primerjaj Puharič, 1999, str. 962. 59 Tako Prelič, Zabel, Ivanjko, Podgorelec in Kobal, 2000, str. 508 in nasl. 60 Prav tam. 61 Uradni list RS, št. 42/06. 62 Primerjaj Cepec, Ivanc, Kežmah in Rašković, 2010, str. 19. 63 Primerjaj Bratina in Jovanović, 2009, str. 228. Praviloma in v veliki večini primerov je poslovodja hkrati tudi edini družbenik. Ustanovitelj kot edini družbenik enoosebne družbe z omejeno odgovornostjo odloča o vseh vprašanjih, ki zadevajo upravljanje družbe, razen če družbi postavi prokurista ali poslovnega pooblaščenca.64 V najpogostejši situaciji, ko je edini ustanovitelj enoosebne družbe z omejeno odgovornostjo tudi njen poslovodja, lahko ta sklepa pravne posle tudi s samim seboj, edini pogoj je, da so sklenjeni v pisni obliki. V praksi je gospodar enoosebne družbe njen edini ustanovitelj in družbenik, ki samostojno odloča o pravni usodi družbe.
Z drugimi besedami: enoosebna družba z omejeno odgovornostjo65 je samo posebna oblika upravljanja premoženja družbenika. 6.3. Jedro problema – ali je privilegij zoper samoobtožbo treba odreči tudi pravnim osebam z le enim družbenikom oziroma delničarjem? Enoosebna družba z omejeno odgovornostjo je tako kot vse druge gospodarske družbe pravna oseba, zato se na prvi pogled ustvari vtis, da tudi zanjo velja pravilo, ki sta ga vzpostavili Ustavno sodišče in Vrhovno sodišče: torej, da se pravica iz četrte alineje 29. člena Ustave nanaša le na fizične, ne pa tudi na pravne osebe. Vendar, kot bomo videli v nadaljevanju razprave, vprašanje ni tako preprosto in enoznačno, kot se zdi na prvi pogled. V nadaljevanju je treba nameniti pozornost procesni situaciji, ko se kaznovalni postopek vodi ločeno zoper fizično oziroma odgovorno osebo in enoosebno gospodarsko družbo. Kadar postopek zoper njiju poteka hkrati, velja (zaradi vsebinske prepletenosti kaznovalnih očitkov fizični in pravni osebi) enako kot za vse druge gospodarske družbe: privilegij, ki ga uživa fizična oseba pred samoobremenjevanjem, praviloma varuje tudi pravno osebo. Edini družbenik družbe z omejeno odgovornostjo, ki je v veliki večini primerov tudi njen direktor oziroma poslovodja, bo v enotno vodenem kazenskem oziroma prekrškovnem postopku nastopal kot obdolženec in hkrati v skladu z določbo drugega odstavka 32. člena ZOPOKD tudi kot zakoniti zastopnik obdolžene pravne osebe.66 V taki procesni situaciji bo privilegij zoper samoobtožbo, ki ga uživa fizična oseba, zaradi vsebinske prepletenosti očitkov, naslovljenih na pravno in fizično osebo, zakonitega zastopnika enoosebne družbe de facto varoval tudi pred izpovedovanjem zoper pravno osebo. Drugače je v procesni situaciji, kadar bi bil v predkazenskem postopku sum osredotočen le na (enoosebno) gospodarsko družbo, ne pa tudi na njenega edinega družbenika67 64 Primerjaj določbo 505. člena ZGD-1.
65 Mutatis mutandis velja enako tudi za enoosebno delniško družbo. 66 V skladu z navedeno določbo zastopnik obdolžene pravne osebe ne more biti tisti, zoper katerega teče postopek zaradi istega kaznivega dejanja, razen če je edini član obdolžene pravne osebe. 67 Taka situacija je v praksi skoraj docela nemogoča, ker enoosebno družbo z omejeno odgovornostjo praviloma kot poslovodja zastopa njen edini družbenik. oziroma kadar se kaznovalni postopek vodi samo zoper enoosebno gospodarsko družbo, ne pa tudi zoper fizičnega storilca kaznivega dejanja oziroma prekrška.68 V tem primeru menim, da mora enoosebni družbenik, ki sam sicer ni pod kaznovalno obtožbo in bo v postopku nastopal (le) kot zastopnik pravne osebe, uživati pravico do privilegija zoper samoobtožbo, ki ga bo ščitil pred izpovedovanjem v škodo »svoje družbe«. Z drugimi besedami: enoosebni družbi z omejeno odgovornostjo mora biti priznan privilegij zoper samoobtožbo. Za razpravo o tej problematiki je treba upoštevati sodbo Vrhovnega sodišča RS v zadevi I Ips 35999/2015 z dne 23. marca 2017, v kateri je sodišče, kakor je samo navedlo, z namenom poenotiti sodno prakso odstopilo od svojega že sprejetega stališča, ki ga je zavzelo v sodbi I Ips 266/2007 z dne 13. decembra 2007, da storilec in oškodovanec ne moreta biti ista oseba, zaradi česar ne more priti do oškodovanja družbenika kot temeljnega pogoja za obstoj kaznivega dejanja zlorabe položaja v primeru enoosebne gospodarske družbe.69 Vrhovno sodišče je torej v sodbi I Ips 35999/2015 z dne 23. marca 2017 odstopilo od svoje dotedanje prakse in presodilo, da lahko (edini) družbenik stori kaznivo dejanje zlorabe položaja po 244. členu KZ tudi v enoosebni gospodarski družbi. S tem je zavzelo stališče popolne ločenosti premoženja enoosebne gospodarske družbe in premoženja njenega ustanovitelja ter hkrati edinega družbenika.
Presodilo je, da premoženje družbe ni le navzven, temveč tudi v razmerju do njenega edinega družbenika tuje premoženje. Poudarilo je, da je enoosebna gospodarska družba samostojna pravna oseba, katere premoženje je ločeno od premoženja družbenika in namenjeno opravljanju gospodarske dejavnosti. Navedena odločitev Vrhovnega sodišča je v pravni teoriji sprožila večinoma70 negativne odzive. Argumenti, ki jih v nadaljevanju navajam kot kritiko stališča Vrhovnega sodišča o popolni ločenosti premoženja enoosebne gospodarske družbe in njenega edinega družbenika, so hkrati argumenti, ki podpirajo stališče, da mora enoosebna družba z omejeno odgovornostjo v kaznovalnem postopku zoper njo uživati svoj samostojen privilegij zoper samoobtožbo. Sgueglia Detiček se sprašuje, kako lahko edini družbenik sploh v celoti in sam razpolaga s premoženjem, če upoštevamo stališče o strogi ločenosti premoženja družbe od premoženja njenega edinega družbenika.71 Šepec ugotavlja, da v enoosebni družbi, v kateri je edini družbenik hkrati tudi direktor, nimamo notranjih razmerij med družbeniki, zato je zaradi združene funkcije lastništva in poslovodstva 68 Do take situacije v praksi lahko pride v že prej navedenih primerih, ko je na primer fizična oseba za to kaznivo dejanje že obsojena, če je bila v času storitve kaznivega dejanja neprištevna, če je ravnala v situaciji skrajne sile in podobno. 69 Vrhovno sodišče je skušalo stališče, navedeno v sodbi I Ips 141/2006 z dne 24. maja 2007, ki ga je ponovilo v sodbi I Ips 266/2007 z dne 13. decembra 2007, omiliti z navedbo, da je bilo to stališče v obeh sodbah zapisano kot obiter dictum. 70 Ne pa v celoti. Zanjo se zavzema na primer Kozina, 2019, str. 991–993. 71 Primerjaj Sgueglia Detiček, 2023, str. 28. oškodovanje lastne družbe le fiktivno.
To pomeni, da posameznik z oškodovanjem enoosebne družbe v resnici oškoduje le svoje lastno premoženje, ki ga je pravno pretvoril v samostojno pravno osebo in ga lahko z likvidacijo svoje družbe preoblikuje nazaj v lastno premoženje. Poudarjata, da edini lastnik družbe ne more zlorabiti zaupanja do samega sebe, kakor tudi ne more zlorabiti premoženja, ki ga lahko s postopkom likvidacije družbe zakonito pretvori v svoje lastno premoženje. Skleneta, da je trditev, da lahko edini družbenik – njen lastnik – oškoduje »svojo« pravno osebo, katere usoda je v celoti v njegovih rokah, nesmiselna in meji na argumentum ad absurdum.72 Še konkretnejši je Šošić, ki absurdnost stališča Vrhovnega sodišča pojasni na konkretnem primeru, v katerem fizična oseba, ki lahko s svojimi denarnimi sredstvi prosto razpolaga in jih celo uniči, ne da bi zato trpela kakršnekoli kazenske ali druge pravne posledice, ta ista finančna sredstva vloži v ustanovitev enoosebne družbe z omejeno odgovornostjo. Še istega dne kot direktor in hkrati edini družbenik razpolaga s premoženjem novoustanovljene družbe na primer tako, da iz njega krije svoje zasebno potovanje. Po stališču Vrhovnega sodišča si taka »kršitev« formalnih pravil gospodarskega statusnega prava zasluži do pet let zaporne kazni.73 Stališče o strogi ločenosti premoženja enoosebne gospodarske družbe in njenega družbenika, ki ga je zavzelo Vrhovno sodišče v navedeni sodbi, ni uporabno za presojo dileme, ali je treba enoosebni gospodarski družbi v kaznovalnem postopku priznati pravico do privilegija zoper samoobtožbo, še iz enega razloga.
Namen sodbe Vrhovnega sodišča, ki iz same odločbe sicer ni neposredno razviden, je namreč očitno bil varovanje potencialnih upnikov ali morebitnih drugih upravičencev do premoženja družbe (na primer državnega proračuna, v družbi zaposlenih delavcev, njenih poslovnih partnerjev oziroma upnikov) pred potencialnimi zlorabami njenega edinega družbenika, ne pa ustvarjanje povsem neživljenjske situacije, v kateri bi bila vez med premoženjem edinega družbenika in premoženjem »njegove« pravne osebe povsem pretrgana. Ta vez zagotovo obstaja, in to ne le v ekonomskem smislu; tudi praktično, življenjsko gledano je tako močna, da je po splošnem občutku pravičnosti mogoče premoženje edinega družbenika, ki ga je vložil v pravno osebo, šteti za njegovo lastno premoženje. Končno je od razlage besedne zveze »ni dolžan izpovedati zoper sebe«, ki je srčika privilegija zoper samoobtožbo v smislu, da osumljenec ni dolžan sodelovati z organi pregona, odvisno, ali je enoosebni gospodarski družbi v kaznovalnem postopku treba (neposredno in njej lastno, ne le posredno prek privilegija zoper samoobtožbo, ki ga uživa odgovorna oseba pravne osebe) priznati to procesno jamstvo. Menim, da je treba v tem primeru vsebino privilegija zoper samoobtožbo razlagati široko. V smislu, da je posamezniku v kaznovalnem postopku treba dopustiti, da je povsem pasiven ne le v svojih voljnih ravnanjih, s katerimi lahko škoduje neposredno svoji osebi, torej svojim osebnostnim pravicam, temveč tudi v tistih ravnanjih, ki lahko povzročijo škodo njegovi 72 Primerjaj Sgueglia Detiček in Šepec, 2021, str. 771. 73 Primerjaj Šošić, 2020, str. II–VII. premoženjski sferi.
Privilegij zoper samoobtožbo tako ne varuje le pred prisilnim sodelovanjem osumljenca z organi pregona, katerega končna posledica bi bila lahko obsodilna sodba, temveč varuje tudi pred sodelovanjem, ki bi lahko škodljivo vplivalo na njegove premoženjske pravice. Kazenski postopek zoper pravno osebo ima seveda lahko zanjo in s tem neposredno za njenega edinega družbenika negativne premoženjske učinke v smislu izreka denarne kazni, odvzema premoženja, prenehanja pravne osebe in prepovedi razpolaganja z vrednostnimi papirji, katerih imetnica je pravna oseba,74 v prekrškovnem postopku pa tudi v smislu izločitve družbe iz postopkov javnega naročanja.75 Ker je tako, je vsebino privilegija zoper samoobtožbo v primeru enoosebne gospodarske družbe treba razširiti tudi na premoženjske pravice posameznika. Če edinemu družbeniku gospodarske družbe v kazenskem postopku to možnost odvzamemo s tem, da ga silimo v izpovedovanje (ravnanje), s katerim bo povzročil škodo svojemu premoženju, dejansko posegamo najmanj v njegovo ustavno pravico do zasebne lastnine iz 33. člena Ustave. Taki široki razlagi privilegija končno pritrjuje tudi določba 238. člena ZKP, po kateri priča med drugim ni dolžna odgovarjati na posamična vprašanja, če je verjetno, da se bo s tem spravila v znatno materialno škodo. Če torej pravna ureditev priči priznava pravico, da odkloni odgovor na vprašanje, če bi z njim povzročila znatno škodo svojemu premoženju, je še toliko bolj na mestu, da se v kaznovalnem postopku, ki se vodi le zoper pravno osebo, zakonitemu zastopniku enoosebne gospodarske družbe z omejeno odgovornostjo, ki je hkrati tudi njen edini družbenik, omogoči, da s svojim ravnanjem ne škoduje družbi in s tem dejansko svojemu premoženju. Utemeljenega razloga za razlikovanje med pričo, ki ji ni treba odgovarjati na vprašanja, če bi s tem znatno škodovala svojemu premoženju, in »lastnikom« enoosebne družbe z omejeno odgovornostjo preprosto ni videti.
Zato je na mestu sklep, da je enoosebni gospodarski družbi treba priznati pravico do privilegija zoper samoobtožbo. V smislu priznavanja tega ustavnega jamstva je namreč premoženjski substrat družbe treba obravnavati kot premoženje njenega edinega družbenika. S podelitvijo privilegija zoper samoobtožbo enoosebni gospodarski družbi se namreč njen »lastnik« – edini družbenik – dejansko varuje pred izpovedovanjem (ravnanjem) v svojo škodo. 7. Sklep o priznavanju privilegija zoper samoobtožbo pravnim osebam Pravo je dinamičen organizem, ki se zlasti pod vplivom živega, sodniškega prava (angl. case law) nenehno spreminja, dopolnjuje in nadgrajuje. Pravne osebe v sodobnem pravu vse bolj postajajo nosilke določenih človekovih pravic, na primer pravice do zasebnosti.76 Zagotavljanje varstva človekovih pravic pravnim osebam ni samo sebi namen, 74 Primerjaj 12. člen ZOPOKD. 75 Primerjaj drugi odstavek 4. člena ZP-1. 76 Primerjaj na primer odločbo Ustavnega sodišča RS v zadevi U-I-40/12 z dne 11. aprila 2013. temveč je namenjeno varstvu pravic posameznikov, ki se skrivajo za pravnimi osebami.77 Razvoj prava bo šel prej ali slej v smer, da se pravnim osebam samostojno, ne le posredno prek njihove odgovorne osebe, zagotovi pravica do privilegija zoper samoobtožbo. Če namreč država sprejme, da je pravna oseba lahko kazensko odgovorna, ji navsezadnje mora zagotoviti tudi pravice, ki pripadajo obdolžencem v kaznovalnem postopku. Končni cilj vsakega kaznovalnega (kazenskega ali prekrškovnega) postopka je najti pravilno in zakonito odločitev, ki bo sprejeta v poštenem postopku. Pri doseganju tega cilja morata tako postavodajalec kot tudi sodna praksa najti pravo (so)razmerje med varstvom pravic na eni strani in učinkovitostjo postopka na drugi strani.
Menim, da priznanje privilegija zoper samoobtožbo pravnim osebam v kaznovalnih postopkih ne bi bilo posebno pogubno za učinkovitost postopka. Že ob trenutno veljavnem stališču sodne prakse, ki pravnim osebam odreka pravico do privilegija zoper samoobtožbo, zaradi akcesornosti vodenja postopka zoper pravno osebo in praviloma vsebinske prepletenosti očitkov odgovorni osebi pravne osebe in pravni osebi privilegij, priznan odgovorni osebi, v veliki večini primerov varuje pred samoobdolžitvijo tudi pravno osebo. Samostojno priznan privilegij pravni osebi bi v praksi dejansko vplival na učinkovitost postopkov le v primerih, ko bi se kaznovalni postopek iz enega od zakonsko določenih razlogov vodil le zoper pravno osebo, ali pa v primeru notranje bolj razvejanih pravnih oseb, ko organi pregona ne bi zahtevali izjav oziroma dokaznega gradiva od osumljene odgovorne osebe, temveč od neke tretje osebe v gospodarski družbi, na katero sum storitve prekrška oziroma kaznivega dejanja ni osredotočen. Trenutna sodna praksa pravnim osebam na splošno odreka pravico do privilegija zoper samoobtožbo. Vendar pa je z veliko mero občutljivosti in kritičnosti treba premisliti, ali je tako stališče ustrezno. Še zlasti v primeru, ko se sum storitve kaznivega dejanja osredotoča na enoosebno gospodarsko družbo. V razpravi so navedeni argumenti, ki upoštevaje specifično notranjo zgradbo enoosebne gospodarske družbe kažejo, da je edina Ustavi prijazna razlaga kazenskega oziroma prekrškovnega postopka taka, da je treba enoosebni gospodarski družbi priznati samostojno pravico do privilegija zoper samoobtožbo takoj, ko se nanjo osredotoči sum storitve kaznivega dejanja oziroma prekrška. V nasprotnem primeru siljenje direktorja oziroma zakonitega zastopnika enoosebne družbe k izpovedovanju oziroma ravnanju zoper »njegovo pravno osebo« v resnici ne bi bilo nič drugega kot prisila k izpovedovanju zoper njega samega. 77 Tako Tratar, 2017, str. 134.
Literatura Bošnjak, M., in Žaucer Hrovatin, M. (2019) v: Avbelj, M. (ur.) Komentar Ustave Republike Slovenije. Nova Gorica: Nova univerza, Evropska pravna fakulteta. Bratina, B., in Jovanović, D. (2009) D.o.o. – družba z omejeno odgovornostjo: z vzorci aktov za njeno delovanje. Maribor: De Vesta. Bratina, B. (2018) ‘Novosti pri poslovanju družbe z omejeno odgovornostjo’, Podjetje in delo, letnik 44, št. 6-7, str. 1034–1044. Cepec, J., Ivanc, T., Kežmah, U., in Rašković, M. (2010) Pot v podjetništvo, s.p. ali d.o.o. Ljubljana: GV Založba. von Freier, F. (2010) ‘Selbstbelastungsfreiheit für Verbandspersonen?’, Zeitschrift für die gesamte Strafrechtswissenschaft, letnik 122, št. 1. Gorkič, P. (2011) ‘Razpravna sposobnost obdolženca v kazenskem postopku’, Zbornik znanstvenih razprav – Ljubljana Law Review, str. 93–116. Gorkič, P. (2014) ‘Edicijska dolžnost domnevnega storilca v kazenskem postopku’, Pravnik, letnik 69, št. 5-6, str. 373–389. Hafner, M. (2018) Pomen in uporaba izsledkov nevroznanosti v kazenskem pravu, doktorska disertacija. Ljubljana: Pravna fakulteta Univerze v Ljubljani. Horvat, Š. (2004) Zakon o kazenskem postopku (ZKP): s komentarjem. Ljubljana: GV Založba. Korže, B. (2015) Pravo družb in poslovno pravo. Ljubljana: Uradni list RS. Kozina, J. (2019) v: Korošec, D., Filipčič, K., in Devetak, H. (ur.) Veliki znanstveni komentar posebnega dela Kazenskega zakonika (KZ-1), 2. knjiga. Ljubljana: Uradni list Republike Slovenije in Pravna fakulteta Univerze v Ljubljani. Mekše, V. (2023) Razlikovanje med testimonialnimi in telesnimi dokazi pri dometu privilegija zoper samoobtožbo, magistrsko delo. Ljubljana: Pravna fakulteta Univerze v Ljubljani. Pavčnik, M. (2016) Teorija prava. Ljubljana: IUS Software (GV Založba). Polajžar, A., in Stajnko, J.
(2020) ‘Privilegij zoper samoobtožbo kot ahilova peta sekcijskega merjenja hitrosti’, Varstvoslovje, letnik 22, št. 2, str. 137–157. Prelič, S., Zabel, B., Ivanjko, Š., Podgorelec, P., in Kobal, A. (2000) Družba z omejeno odgovornostjo. Ljubljana: GV Založba. Puharič, K. (1999) ‘Enoosebne družbe v ZGD’, Podjetje in delo, letnik 25, št. 6-7, str. 961–965. Redmayne, M. (2007) ‘Rethinking the Privilege Against Self-Incrimination’, Oxford Journal of Legal Studies, letnik 27, št. 2, str. 209–232. Sgueglia Detiček, A., in Šepec, M. (2021) ‘Kaznivo dejanje zlorabe položaja v enoosebni družbi z omejeno odgovornostjo’, Podjetje in delo, letnik 47, št. 5, str. 753–773. Sgueglia Detiček, A. (2023) Zloraba položaja s strani lastnikov poslovnih deležev, magistrsko delo. Maribor: Pravna fakulteta Univerze v Mariboru. Selinšek, L. (2010) ‘Učinek privilegija zoper samoobtožbo na dopustnost dokazov iz nekazenskih postopkov’, Pravnik, letnik 65, št. 5-6, str. 301–326. Selinšek, L. (2018) Zakon o prekrških (ZP-1) s komentarjem. Ljubljana: Lexpera (GV Založba). Šepec, M. (2019) ‘Odgovornost pravnih oseb za kazniva dejanja: zakonska ureditev in aktualne dileme’, Podjetje in delo, letnik 45, št. 8, str. 1355–1371. Šošić, M. (2020) ‘Zloraba položaja ali zaupanja pri gospodarski dejavnosti kljub soglasju družbenikov’, Pravna praksa, letnik 39, št. 15-16, str. II–VII. Šošić, M. (2023) Zakon o kazenskem postopku (ZKP): s komentarjem. Ljubljana: Lexpera (GV Založba). Šugman, K. (2000) Dokazne prepovedi v kazenskem postopku. Ljubljana: Bonex. Tratar, B. (2017) ‘Pravne osebe kot nosilke človekovih pravic’, Dignitas, št. 75–76, str. 115–136. Vavken, L. (2022) ‘Ekskluzija dokazov – (nova) metodologija presoje?’, Pravosodni bilten, letnik 43, št. 1, str. 23–41. Zobec, B. (2019) v: Avbelj, M. (ur.) Komentar Ustave Republike Slovenije. Nova Gorica: Nova univerza, Evropska pravna fakulteta.
Zupančič, B. M. (1996) ‘Med državo in posameznikom: privilegij zoper samoobtožbo’, Pravnik, letnik 51, št. 1-3, str. 19–44. Žaucer, M. (2013) ‘Nekateri vidiki priznavanja privilegija zoper samoobtožbo pravnim osebam’, Pravnik, letnik 68, št. 5-6, str. 317–343. © The Author(s) 2024 Znanstveni članek DOI: 10.51940/2024.1.87-107 UDK: 314.15:343.123.11:341 Urh Šelih* Izbrani vidiki pravice do izjave v azilnih postopkih Povzetek Avtor obravnava pomen pravice do izjave v azilnih postopkih s poudarkom na njeni vlogi pri oceni tveganja vračanja in dodelitvi mednarodne zaščite. Njegova temeljna teza je, da so izjave prosilcev za azil, pridobljene skozi osebni pogovor, ključna podlaga za nadaljnje ravnanje in odločanje pristojnih organov. Še posebej pomembne so v primerih, ko prosilci nimajo drugih trdnih dokazov, kar je pogost pojav. Prispevek se osredotoča na pravne vidike in zahteve pravice do izjave, kot izhajajo iz prava Evropske unije, Evropske konvencije o človekovih pravicah in nacionalnega prava. Glavni namen prispevka je predstavitev nekaterih ključnih vidikov pravice do izjave, še zlasti v kontekstu procesnih zahtev, ki jih določajo evropski in nacionalni pravni viri. Med te vidike spadajo tudi procesne pravice prosilcev, kot je pravica do komentiranja poročila o osebnem pogovoru in dajanja pripomb na ugotovitve pristojnih organov glede verodostojnosti izjav in dokazov. Avtor se osredotoča na pomembnejše in problematične vidike pravice do izjave, ki zahtevajo posebno pozornost pri obravnavi prošenj za azil. Hkrati navaja, da se pravica do izjave ne izčrpa z osebnim pogovorom, temveč zahteva tudi možnost dodatnih informacij in popravljanja napak ter podajanja pripomb na ugotovitve pristojnih organov. Avtor analizira tudi relevantno sodno prakso, ki podpira tezo o pomembnosti pravice do izjave v azilnih postopkih. Ključne besede pravica do izjave, mednarodna zaščita, azil, sodelovalna dolžnost, procesna direktiva.
* Asistent na Pravni fakulteti Univerze v Ljubljani. Zbornik znanstvenih razprav – letnik LXXXIV, 2024 Ljubljana Law Review – Vol. LXXXIV, 2024 • pp. 87–107 ISSN 1854-3839 • eISSN: 2464-0077 Uvodna pojasnila Pravica do izjave je nepogrešljiva v številnih (nacionalnih) postopkih, v azilnih postopkih pa je njen pomen le še potenciran. Zavedati se moramo namreč, da imajo izjave prosilcev bistveno vlogo pri oceni tveganja vračanja in s tem pri dodelitvi mednarodne zaščite. Izjave prosilcev, ki jih uradna oseba pridobi skozi osebni razgovor, so temelj tako za nadaljnje ravnanje organa kot tudi za odločitve, sploh kadar prosilec nima drugih »trdnejših« dokazov, kar pa je bolj pravilo kot izjema. Namen tega prispevka je predstaviti temeljne vidike oziroma zahteve pravice do izjave, ki izhajajo tako iz prava Evropske unije kot tudi Evropske konvencije o človekovih pravicah in nacionalnega (ustavnega) prava. Prispevek ni izčrpna analiza vseh vidikov pravice do izjave, zato sem se osredotočil na tiste, ki se mi zdijo pomembnejši in/ali so med bolj problematičnimi. V azilnih postopkih so osebne izjave prosilcev ključnega pomena, saj so pogosto edini vir informacij glede njihovega preganjanja ali resnega tveganja preganjanja, s katerim bi se soočili ob vrnitvi v izvorno državo. V številnih primerih so prosilci prisiljeni zapustiti svoje domove v naglici, ne da bi lahko vzeli s seboj kakršnekoli dokumente, ki bi lahko podkrepili njihove zgodbe. Poleg tega so lahko ti dokumenti zaradi političnih ali vojnih razmer v izvornih državah težko dosegljivi ali celo uničeni.
Zaradi teh okoliščin se v azilnih postopkih pogosto zgodi, da so osebne izjave prosilcev edini dokaz, ki ga lahko predložijo.1 Pravica do izjave torej ni le formalnost, ampak nujen element azilnega postopka, saj to zahtevata tako Ustava kot tudi mednarodno pravo človekovih pravic, kot bo to predstavljeno v nadaljevanju. Omogoča prosilcem, da predstavijo svojo zgodbo, pojasnijo svoje strahove in tveganja ter odgovarjajo na vprašanja uradnih oseb, ki odločajo o njihovi (pravni) usodi. Zato je bistveno, da so postopki za zbiranje teh izjav pošteni, nepristranski in temeljiti. Pomembno je tudi, da so uradne osebe, ki izvajajo pogovore, ustrezno usposobljene, da lahko zaznajo in razumejo kulturne, jezikovne in psihološke vidike, ki lahko vplivajo na izjave prosilcev. 1. Splošno o pravici do izjave v mednarodnem pravu človekovih pravic Pravica do izjave ima veliko vidikov in jo lahko najdemo v širokem naboru pravnih virov. V tem prispevku se bom omejil zlasti na pravico do izjave v kontekstu prava EU, dopolnjeno z Evropsko konvencijo o človekovih pravicah, ki pa bo na nekaterih mestih predstavljena še v kontekstu nacionalnega prava in pripadajoče sodne prakse. Temelj pravice do izjave, ki izvira iz pravice do dobrega upravljanja, je določen v Listini EU z 41., 47. in 48. členom. Pogosto je obravnavana kot sestavni del pravice do 1 Glej na primer že UNHCR, 1984. obrambe2 in pravice do učinkovitega pravnega sredstva.3 Pravica do izjave je v upravnem postopku zagotovljena v 41. členu, pravico do izjave v sodnem postopku pa zagotavlja 47. člen Listine. Nekatere odločitve Sodišča EU, ki jih bom obravnaval oziroma s pomočjo katerih bom razlagal vsebino pravice, se nanašajo zgolj na pravico do izjave v upravnem postopku, tj. 41. člen Listine.
Vendar pa je razlaga Sodišča EU pomembna na vseh ravneh azilnega postopka (tudi sodnega), saj pravico šteje za temeljno pravico oziroma splošno načelo prava EU že vse od leta 1963.4

Pravica do izjave splošno zagotavlja vsaki osebi možnost, da koristno in učinkovito poda svoje stališče (se izjavi) v upravnem in morebitnem sodnem postopku pred sprejetjem vsake odločbe ali ukrepa, ki bi lahko negativno vplivala na njene interese,5 glede dokazov, na katerih temelji odločitev,6 ter relevantnih dejstev in okoliščin zadeve.7 Iz tega posredno izhaja, da mora biti subjektu(!) postopka znano, na katerih dokazih bo organ utemeljil odločitev,8 in/ali je osnutek odločitve posredovan subjektu (ne pa nujno tudi končna odločitev).9 Kot je bilo že omenjeno, mora biti pravica do izjave subjektu postopka zagotovljena tudi pred sprejemom odločbe oziroma ukrepa, prav tako pa mora biti subjektu dodeljen primeren rok, v katerem lahko učinkovito predstavi svoja stališča pred organom.10

Namen pravice je v tem, da se pristojnemu organu omogoči, da pred odločitvijo v zadevi ustrezno upošteva celoto tako imenovanih upoštevnih elementov.11 Posledično mora organ ustrezno upoštevati pripombe, ki jih je podal subjekt med postopkom, pri tem pa skrbno in nepristransko preučiti vse upoštevne elemente obravnavane zadeve in svojo odločitev podrobno obrazložiti.12

2 Zalar, 2023, str. 150.
3 Module 3, 2019, str. 5 in 32.
4 Sodba Sodišča EU v zadevi M. Maurice Alvis proti Svetu Evropske gospodarske skupnosti, št. 32-62, z dne 4. julija 1963, točka 1.
5 Sodba Sodišča EU v zadevi C-277/11 MM proti Minister for Justice, Equality and Law Reform Ireland z dne 22. novembra 2012, točka 87.
6 Sodba Sodišča EU v zadevi C-32/95 P Komisija proti Lisrestal et al. z dne 24. oktobra 1996, točka 21.
7 Sodba Sodišča EU v zadevi C-269/90 Technische Universität München proti Hauptzollamt München-Mitte z dne 21. novembra 1991, točka 25.

V skladu s sodno prakso Sodišča EU se spoštovanje
pravice do izjave zahteva tudi, če s pravno ureditvijo ali področjem, ki se uporablja, to ni izrecno predpisano ali zahtevano.13 Pravica do izjave v splošnem pomenu ni absolutna, torej je lahko omejena. Omejitve in način izvršitve pravice morajo biti v skladu z 52. členom Listine EU, upoštevati morajo cilje splošnega interesa in glede na zasledovani cilj ne smejo pomeniti pretiranega posega, ki bi ogrožal bistvo zagotovljenih pravic.14 Omejitve pravice do izjave in njihova dopustnost se lahko razlikujejo glede na okoliščine postopka, zato jih je treba obravnavati v kontekstu specifičnega (u)pravnega področja.15

8 Sodba Sodišča EU v zadevi T-228/02 Organisation des Modjahedines du peuple d'Iran proti Council z dne 12. decembra 2006, točka 93.
9 Sodba Sodišča EU v zadevi C-462/98 P Mediocurso proti Komisiji z dne 21. septembra 2000, točka 42.
10 Sodba Sodišča EU v zadevi C-51/92 P Hercules Chemicals proti Komisiji z dne 8. julija 1999, točki 78–79.
11 Sodba Sodišča EU v zadevi C-249/13 Khaled Boudjlida proti Préfet des Pyrénées-Atlantiques z dne 11. decembra 2014, točka 37.
12 Prav tam, točka 38.

2. Pravica do izjave v kontekstu azilnih postopkov

Kot sem že navedel zgoraj, prosilci v številnih azilnih zadevah nimajo osebnih dokumentov in drugih dokazov, s katerimi bi lahko podprli svoje trditve o preganjanju in posledično prošnjo za mednarodno zaščito kot tako. Ne sme nas torej presenetiti dejstvo, da je ocena verodostojnosti prosilčevih izjav pogosto odločilna za izid zadeve.
Izjave prosilca imajo torej bistveno vlogo pri oceni, ali prosilcu grozi preganjanje v primeru vrnitve v izvorno državo.16 Bistveno je, da imajo prosilci realne možnosti, da organom predstavijo svoje vidike in razloge, s katerimi utemeljujejo prošnjo, za kar je navadno potreben osebni pogovor, poleg tega pa mora imeti prosilec možnost izreči se o sklepih organa zoper njega in glede dokazov, na katerih temelji oziroma bo temeljila odločba o mednarodni zaščiti.17 Po stališču Sodišča EU je izjava prosilca v nekaterih primerih le »izhodišče« za postopek presoje dejstev in okoliščin.18 Organi pa morajo zaradi nevarnosti preganjanja v primeru vrnitve prosilca v izvorno državo posamezno zadevo presojati strogo.

Naštetemu standardu ni mogoče zadostiti brez prisotnosti prosilca v državi članici. Prvi pogoj za učinkovito izvrševanje pravice do izjave je pravica prosilca, da ostane na ozemlju države, kar pa nastavlja temelje za učinkovito zagotavljanje prepovedi vračanja.19

13 MM., točka 86.
14 Sodba Sodišča EU v zadevi C-28/05 Dokter in ostali proti Minister van Landbouw, Natuur en Voedselkwaliteit z dne 15. julija 2006, točka 75.
15 Zalar, 2023, str. 151.
16 UNHCR, 2013, str. 28.
17 Sodba Sodišča EU v zadevi C-517/17 Milkiyas Addis proti Bundesrepublik Deutschland z dne 16. julija 2020, točki 68 in 69.
18 Glej na primer sodbo Sodišča EU v zadevi C-238/19 EZ proti Bundesrepublik Deutschland z dne 19. novembra 2020, točka 52.
19 MM., točka 28.

2.1. Dolžnost države, da sodeluje s prosilcem (med osebnim pogovorom)

Za učinkovito zagotavljanje in izvrševanje pravice do izjave med postopkom na prvi stopnji odločanja ni dovolj, da država izvede osebni pogovor. Pravica do izjave državo obvezuje tudi, da v sodelovanju s prosilcem obravnava ustrezne elemente prošnje.20 Za razumevanje tega vidika pravice do izjave je pomembna Kvalifikacijska direktiva 2011/95/EU, natančneje določba prvega odstavka 4.
člena, po kateri lahko države članice naložijo prosilcu dolžnost, da čim prej predloži vse potrebne elemente za utemeljitev prošnje. Države članice pa morajo v sodelovanju s prosilcem obravnavati ustrezne elemente prošnje. Določba torej vsebuje obveznosti komuniciranja, tako za državo članico kot tudi za prosilca.21 Ta določba nalaga tudi tako imenovano sodelovalno dolžnost (angl. obligation to cooperate). Že na podlagi jezikovne razlage lahko ugotovim, da se določeno aktivno ravnanje zahteva tako od prosilca kot tudi od države oziroma da je ta dolžnost deljena, kot to izhaja že iz povzetka Kvalifikacijske direktive.22

Na podlagi prvega odstavka 4. člena je mogoče od prosilca pričakovati, da predloži vse dokaze (elemente) za utemeljitev prošnje, ki so mu dostopni. Pravica do izjave pa na tej točki zahteva, da mu uradna oseba v osebnem pogovoru postavlja (vsa) pravno relevantna vprašanja in podvprašanja. Prosilec namreč ne more vedeti, kdo so lahko subjekti preganjanja v pravnem smislu, katere okoliščine so pomembne z vidika posameznega razloga za preganjanje ali celo kateri so sami razlogi za preganjanje,23 niti mu ni vedno to dobro razloženo.
Prosilec lahko predstavi svoj strah pred vrnitvijo, ne more pa vedeti, katere okoliščine so pomembne in pravno relevantne za utemeljevanje stopnje ogroženosti ali intenzivnosti preganjanja ter s tem za uspeh njegove prošnje.24 Ob prevajanju ali pa zaradi socialno-kulturnih razlik lahko pride do nesporazumov med uradno osebo in prosilcem, ki lahko pomembno vplivajo na postopek, če uradna oseba tega ne upošteva in pomembnih dejstev ne razčisti docela med osebnim pogovorom oziroma jih ob sodelovanju s prosilcem ne preveri dovolj.25 Šele dosledno spoštovanje obveznosti komuniciranja zagotavlja, da prosilec dovolj dobro razume, kaj država šteje za »potrebne elemente« prošnje.26 Urad Visokega komisarja Združenih narodov za begunce (UNHCR) dolžnost države razlaga še širše: če prosilec ne more predložiti potrebnih relevantnih dokazov, mora uradna oseba s pomočjo vseh razpoložljivih sredstev predložiti relevantne dokaze v podporo prošnji.27 Če uradna oseba take dokaze oziroma informacije predloži oziroma predloži druge, samostojno pridobljene dokaze, jih mora vnovič obravnavati v sodelovanju s prosilcem.28

20 Zalar, 2023, str. 148.
21 Noll, 2005, str. 4.
22 Povzetek Direktive 2011/95/EU, 2018.
23 Sodba Sodišča EU v zadevi C-604/12 H. N. proti Minister for Justice, Equality and Law Reform and Others z dne 8. maja 2014, točka 34.
24 Zalar, 2023, str. 148.
25 Prav tam.
26 Noll, 2005, str. 300.

Zakon o mednarodni zaščiti (ZMZ-1)29 ni v celoti prenesel določbe prvega odstavka 4. člena Kvalifikacijske direktive v slovenski pravni red. Prvi odstavek 21. člena ZMZ-1 sicer navaja, da »mora prosilec navesti vsa dejstva in okoliščine, ki utemeljujejo njegov strah pred preganjanjem ali resno škodo«, o sodelovalni dolžnosti pa ni mogoče zaslediti nič. Dolžnost sodelovanja prosilca pri utemeljevanju prošnje sicer z vidika 18.
člena Ustave sama po sebi niti ni sporna.30 Nacionalno zakonodajo, kot je ZMZ-1, ki v naš pravni red prenaša Kvalifikacijsko direktivo, je treba razlagati ob upoštevanju sodne prakse Sodišča EU, ki prvi odstavek 4. člena Kvalifikacijske direktive razlaga tako, da mora pristojni organ prosilcu omogočiti, da se celostno izjavi o dejstvih, s katerimi bi lahko utemeljil svojo prošnjo.31 Če informacije, ki jih predloži prosilec, zaradi kateregakoli razloga niso popolne, ažurne ali upoštevne, mora organ »aktivno sodelovati s prosilcem, da se zberejo vse informacije, ki so potrebne za obravnavo njegove prošnje«.32 Še več, tudi po sodni praksi Sodišča EU mora aktivno sodelovanje omogočiti, da se zberejo vsi dokazi, ki podpirajo prošnjo.33 Sodelovalna dolžnost pa se ne nanaša samo na osebni pogovor, ampak tudi na celoten postopek na prvi stopnji. V nasprotju s tem prvostopenjski organ pogosto ne omenja in ne razčiščuje informacij o izvorni državi med pogovorom34 in večino časa nameni zbiranju informacij o bioloških podatkih prosilca ter poti, ki jo je ubral, namesto ugotavljanju razlogov za mednarodno zaščito,35 kar bi vsekakor moralo biti prednost postopka.

2.2. Vezanost organa na trditveno podlago prosilca

V zvezi s sodelovalno dolžnostjo se odpira problematika, ali navajanje okoliščin, ki ne utemeljujejo pogojev za mednarodno zaščito, od pristojnega organa zaradi nevarnosti kršitve prepovedi vračanja zahteva postavljanje vprašanj, ki bi prosilca vodila k navajanju drugih morebitno upoštevnih okoliščin glede preganjanja ali resne škode. V okviru tega se tudi sprašujemo: ali lahko ustreznost vprašanj ocenjujemo brez upoštevanja okoliščin,

27 UNHCR, 2004, str. 13.
28 Noll, 2005, str. 299.
29 Uradni list RS, št. 16/17 – uradno prečiščeno besedilo, 54/21 in 42/23 – ZZSDT-D.
30 Odločitev Ustavnega sodišča RS U-I-292/09-9, Up-1427/09-16 z dne 20. oktobra 2011, točka 16.
31 MM., točke 64–67.
32 Sodba Sodišča EU v zadevi C-560/14 M.
proti Minister for Justice and Equality Ireland and the Attorney General z dne 9. februarja 2017, točka 48.
33 MM., točka 66.
34 UNHCR, 2010, str. 151.
35 Prav tam, str. 150.

ki jih je prosilec sam navedel kot razlog za mednarodno zaščito? Sprašujemo se torej, ali so nacionalni organi v svojem ravnanju vezani na trditveno podlago prosilca ali jim sodelovalna dolžnost nalaga kaj več. V skladu s sodno prakso Sodišča EU morajo namreč pristojni organi ugotoviti najprimernejši status glede na položaj prosilca,36 zato je razumno od njih pričakovati, da bodo v sklopu vprašanj zaradi nevarnosti preganjanja pokrili tudi druge razloge preganjanja, ne zgolj tiste, ki se najprej ponudijo v ospredje, zlasti če se pozneje izkaže, da ti niso izkazani.

Odgovor deloma daje tudi sodba velikega senata Evropskega sodišča za človekove pravice (ESČP) v zadevi F.G. proti Švedski, ki se nanaša na iranskega državljana, ki je na Švedskem zaprosil za azil zaradi svojega političnega udejstvovanja. Prosilec je uvodoma navedel, da se je spreobrnil iz islama v krščansko vero kmalu po prihodu na Švedsko, vendar je pozneje izjavil, da se na versko spreobrnitev ne želi sklicevati kot na razlog za azil, saj je zadevo štel za strogo zasebno. Švedski organi tako niso upoštevali slednjega razloga in niso nadalje preverjali, ali bi mu na podlagi verske spreobrnitve lahko v izvorni državi grozilo preganjanje oziroma resna škoda v smislu Kvalifikacijske direktive.
V zvezi s postopkovnim vidikom zadeve je UNHCR pred ESČP poudaril, da obveznosti iz Konvencije o beguncih od državnega organa zahtevajo, da ugotovi vsa ustrezna dejstva v kontekstu obravnave prošnje.37 Zato ugotavljanje, ali prosilec utemeljeno pričakuje preganjanje ali mu grozi druga resna škoda, temelji na dejstvih, ki so bistvena za prošnjo za azil, na primer tudi na dejstvih, ki jih je prosilec predložil, vendar je zahteval, da se ne upoštevajo zaradi njihove zasebne narave, ali če jih je prosilec štel za nepomembna.38

ESČP splošno razlikuje med dvema vrstama prošenj za azil glede na naravo tveganja. Pri prvi tveganje izhaja iz splošnih in dobro znanih razmer v izvorni državi. Kadar se država članica sreča s tako prošnjo, mora na lastno pobudo oceniti tveganje.39 Druge vrste prošenj pa temeljijo na osebnem tveganju in prosilec mora tako tveganje predstaviti ter utemeljiti.40 Metodološko razlikovanje je pomembno, saj splošno gledano od države ni mogoče pričakovati, da bo sama odkrila ta individualni razlog, če se prosilec nanj ne sklicuje.41 Sodišče pa je v tej zadevi poudarilo še, da mora država ob upoštevanju absolutne narave pravic, zagotovljenih v 2. in 3. členu Evropske konvencije o človekovih pravicah (EKČP), ter ranljivosti prosilcev za azil individualno tveganje oceniti ex proprio motu, ne glede na to, ali se je prosilec odločil za sklicevanje na ta dejstva, če je bila z njimi seznanjena.42

36 H.N., točka 34.
37 ESČP v zadevi F.G. proti Švedski, št. 43611/11, z dne 23. marca 2016, točka 109.
38 Prav tam.
39 Prav tam, točka 126.
40 Prav tam, točka 127.
41 Prav tam.
42 Prav tam.

Glede tega in pritožnikove verske spreobrnitve je ESČP ugotovilo, da organi nikoli niso temeljito preučili tveganja zaradi spreobrnitve, čeprav so se zavedali, da bi bil prosilec ob vrnitvi v izvorno državo lahko žrtev ravnanja, ki krši 3.
člen EKČP.43 Posameznik se s svojim ravnanjem ne more odpovedati absolutni zaščiti, ki jo zagotavljata 2. in 3. člen EKČP, oziroma si je to po stališču ESČP »težko predstavljati«.44 Nacionalni organi so zato po uradni dolžnosti dolžni oceniti vse informacije, s katerimi so se seznanili, preden odločijo o prošnji, čeprav posameznik tega ne želi oziroma ne glede na prosilčevo ravnanje.45 V sodbi J. K. proti Švedski je sodišče (podobno) navedlo, da so pristojni organi dolžni po uradni dolžnosti upoštevati vse relevantne informacije, ki jih imajo, ne zgolj trditve, ki jih poda prosilec, in dokaze, na katere se sklicuje.46

Ravnanje v Sloveniji v veliko primerih ne zadosti navedenim standardom, je pa v sodni praksi Upravnega sodišča mogoče zaznati odločitve, ki so skladne z opisanimi standardi.47 Prvostopenjski organi se ob odločanju pogosto (nepravilno) sklicujejo na stališča Vrhovnega sodišča48 in navajajo, da iz njegove sodne prakse izhaja, da:

»sta vsebina in širina upoštevanih okoliščin, ki jih ugotavlja pristojni organ, definirani s prosilčevimi navedbami, saj je obseg presoje vezan na trditveno podlago«.49

Vrhovno sodišče je namreč zavzelo stališče, da izjava prosilca opredeljuje okvir odločanja upravnega organa in da ni naloga upravnega organa, da išče morebitne druge razloge za mednarodno zaščito.50

Kot sem ugotovil, to stališče drži le pod pogojem, da je uradna oseba skozi postopek v celoti zadostila prosilčevi pravici do izjave in izpolnila obveznost sodelovanja s prosilcem tako, da je v sodelovanju s prosilcem razčistila dejansko stanje ter morebitne nesporazume in mu postavila vsa pravno relevantna vprašanja ob ugotavljanju razmer v izvorni državi,51 kar pa je redkost v praksi prvostopenjskih organov. Sodna praksa Ustavnega sodišča tej tezi sledi, saj iz nje izhaja, da se glede na zahteve, »ki izhajajo iz 18.
člena Ustave in mednarodnopravnih instrumentov, v postopku obravnave prošenj za mednarodno zaščito ni mogoče izogniti presoji okoliščin, ki so pomembne z vidika spoštovanja načela nevračanja.«52

43 Prav tam, točka 156.
44 Prav tam.
45 Prav tam.
46 ESČP v zadevi J. K. in drugi proti Švedski, št. 59166/12, z dne 23. avgusta 2016, točke 87, 83 in 90.
47 Glej na primer sodbo Upravnega sodišča RS I U 481/2020-9 z dne 5. maja 2021.
48 Glej odločbi Vrhovnega sodišča RS I Up 41/2016 z dne 2. marca 2016 in I Up 322/2016 z dne 22. februarja 2017.
49 Zalar, 2023, str. 149.
50 Sklep Vrhovnega sodišča RS I Up 41/2016 z dne 2. marca 2016, točki 34 in 35.
51 Zalar, 2023, str. 149.
52 Odločba Ustavnega sodišča RS Up-1427/09-16 z dne 20. oktobra 2011, točka 18.

Obseg ugotavljanja teh dejstev in informacij je sicer najprej odvisen od navedb in izjav prosilca, vendar pa mora pristojni organ tudi sam zbrati vse potrebne podatke in ni vezan samo na prosilčeve navedbe in dokaze v okviru preučevanja odločevalskih elementov.53 Nevednost in neukost prosilca ne sme vplivati na zagotovitev pravic, ki mu pripadajo po zakonu, na kar mora pristojni organ paziti po uradni dolžnosti.54

2.3. Pravica do izjave po osebnem pogovoru

Že prej sem nakazal, da se pravica do izjave ne izčrpa z osebnim pogovorom. Postavlja pa se vprašanje, kako naj se ta uresničuje v nadaljevanju azilnega postopka. Prizadeval si bom razložiti, da pravica do izjave od držav zahteva, da te nudijo možnost, da prosilec po opravljenem pogovoru doda informacije oziroma popravi napake ter da poda pripombe na ugotovitve pristojnega organa v zvezi s sklepi o (ne)verodostojnosti njegovih izjav in vrednosti dokazov, na katerih utemeljuje prošnjo in tveganje za preganjanje.

2.3.1. Pravica komentirati oziroma dati pripombe na poročilo pogovora

Procesna direktiva od držav v 17.
členu zahteva, »da se o vsakem osebnem razgovoru sestavi podroben in stvaren zapisnik z vsemi bistvenimi elementi, ali da se osebni razgovor dobesedno zapiše«,55 lahko pa države pogovor tudi zvočno ali avdiovizualno posnamejo.56 Države članice morajo nadalje zagotoviti, da ima prosilec ob koncu osebnega pogovora ali v določenem roku, preden organ za presojo izda odločbo, možnost, da izrazi pripombe in zagotovi pojasnila glede morebitnih napačnih prevodov. Zato države članice zagotovijo, da je prosilec v celoti seznanjen z vsebino zapisnika ali bistvenimi elementi dobesednega zapisa (transkripta). Prosilcu pa komentiranja ni treba omogočiti, kadar ima na voljo dobesedni prepis in je bil pogovor tudi posnet. Države članice morajo za konec še zahtevati, da prosilec potrdi, da vsebina zapisnika ali dobesedni zapis odraža pogovor.57 V Sloveniji je zapisnik pisen in se pogovor ne snema,58 zato niti ne moremo govoriti o poročilu v smislu Procesne direktive. Zapisnik se po opravljenem pogovoru prebere prosilcu, ki s podpisom potrdi njegovo ustreznost.59

53 Prav tam.
54 Odločba Ustavnega sodišča RS U-I-238/06-19 z dne 7. decembra 2006, točka 11.
55 Direktiva 2013/32/EU Evropskega parlamenta in Sveta z dne 26. junija 2013 o skupnih postopkih za priznanje ali odvzem mednarodne zaščite (prenovitev), Uradni list EU, št. L 180, prvi odstavek 17. člena.
56 Prav tam, drugi odstavek 17. člena.
57 Prav tam, tretji odstavek 17. člena.
58 ZMZ-1, 37. člen; in UNHCR, 2010, str. 153.
59 UNHCR, 2010, str. 168.

Vemo, da so izjave prosilca ključni dokaz v azilnem postopku.
Za kakovost odločitve je torej zelo pomembno, da je poročilo o osebnem pogovoru popolno in da so informacije v njem točne,60 saj bo to temelj odločitve pristojnega organa.61 Pravica do izjave in sodelovalna dolžnost zahtevata, da imajo prosilci dejansko možnost pregledati poročila in predložiti morebitne popravke, pojasnila ali druge informacije in dokazila.62 Kadar organ za presojo na tej točki meni, da so nekatere izjave prosilca, ki so napisane v poročilu, nedosledne, nejasne ali nepopolne, ga mora na to opozoriti, saj na podlagi tega lahko sklepa, da je prosilec neverodostojen, ta pa mora imeti možnost, da se glede tega izjasni.63 Prosilec mora posledično imeti dostop do poročila ali dobesednega prepisa, preden je sprejeta odločitev o prošnji.64 Ta pravica je omejena v pospešenih postopkih, saj se lahko organ odloči, da bo dostop omogočil šele hkrati z izdajo odločbe.65 To pa lahko pomeni kršitev pravice do izjave,66 zlasti ko se zavedamo, da obstaja velika možnost, da pogovor ne odraža vidikov in strahov prosilca v celoti in da bo prosilec morebiti izpostavljen preganjanju v izvorni državi, če bo odločitev organa negativna. Sodelovalna dolžnost, kot jo razume Sodišče EU v sodni praksi, nalaga organu dolžnost, da prosilca seznani z nedoslednostmi, iz česar logično izhaja, da mora biti prosilcu poročilo o pogovoru na voljo, saj lahko zaradi dolgotrajnih postopkov pozabi podrobnosti svojih izjav in tako tvega, da bo ob branjenju svojih pravic organu izpadel nedosleden.

2.3.2. Pravica komentirati ugotavljanje dejstev in oceno tveganja

Po osebnem pogovoru organ ugotovi dejstva in oceni verodostojnost prošnje ter prosilca. Omenil sem že, da je bistveno to, da se prosilcu omogoči, da se seznani z vsemi pomembnimi neskladji, ki bi lahko bila podlaga za oceno prosilčeve (ne)verodostojnosti, in se o tem izreče, tako da se lahko obravnava relevantnost, časovna ustreznost in točnost pridobljenih informacij.67

Nato pristojni organ začne fazo ocene tveganja preganjanja ob morebitni vrnitvi v izvorno državo ali ob premestitvi v tretjo državo. V azilnih postopkih se dogaja, da organ v oceno tveganja vključi dokaze, ki jih je pridobil sam zunaj pogovora in o katerih se prosilec ni imel možnosti izjaviti.68 Zato se postavlja vprašanje, ali mora imeti prosilec možnost, da se izjavi o ugotovitvah upravnega organa glede presoje dejstev in ocene tveganja.

60 Reneman, 2013, str. 170.
61 UNHCR, 2013, str. 124.
62 Prav tam.
63 Prav tam.
64 Procesna direktiva, peti odstavek 17. člena.
65 Prav tam.
66 Reneman, 2013, str. 171.
67 UNHCR, 2011, str. 26; Battjes, 2006, str. 226.

Evropski pravni okvir, konkretneje Procesna direktiva, ne vsebuje izrecnih določb, ki bi se nanašale na pravico prosilca, da komentira take sklepe organa. Je pa za nas tukaj vnovič pomembna Kvalifikacijska direktiva in sodelovalna dolžnost, ki izhaja iz prvega odstavka 4. člena. Spomnimo, da ta zahteva, da države članice ocenijo tveganje »v sodelovanju s prosilcem«. Battjes na podlagi tega meni,69 da pravica prosilca, da je seznanjen z oceno tveganja in dokaznim sklepom ter da poda pripombe v zvezi z njima, izhaja prav iz Kvalifikacijske direktive, zato bi morale države članice prosilcu omogočiti sodelovanje pri ocenjevanju dokazov in oceni tveganja. Noll gre tukaj še korak dlje, saj meni, da se sodelovalna dolžnost razteza čez celoten postopek oziroma do sprejema odločitve.70 Prvi odstavek 4. člena namreč državi nalaga tudi obveznost obravnave (angl. duty to assess) ustreznih elementov.
Pri vseh elementih, ki jih je predložil prosilec, je treba opraviti presojo ustreznosti. Elementi, ki so ustrezni, pa so nadalje podvrženi še dokazni oceni, katere rezultat mora imeti prosilec možnost komentirati, da lahko izpolni svoje trditveno breme. To pomeni, da mora biti prosilcu omogočen dostop do informacij in dokaznih ocen, ki jih je pristojni organ uporabil v zadevi. V nasprotnem primeru ne moremo zagotoviti, da je prosilec subjekt dokaznega postopka, kar pa nasprotuje sodelovalni dolžnosti in jedru pravice do izjave same. Posledično lahko to pomeni tudi, da je treba tajno dokazno gradivo, s katerim prosilec ni seznanjen, izključiti iz podlage za odločitev v zadevi.71

Kadar organ sprejme negativno odločitev, to lahko pomeni, da so informacije ali izjave, ki jih je podal prosilec, v neskladju s tistimi, ki jih je organ pridobil po uradni dolžnosti (na primer glede razmer v izvorni državi).72 Procesna garancija biti seznanjen s takimi sklepi organa pa v povezavi s 16. členom Procesne direktive zajema tudi taka neskladja ali nedoslednosti in ne zgolj nedoslednosti med prosilčevimi izjavami, kot to rade razlagajo države članice.73 Podobno je že tudi Sodišče EU odločilo, da lahko pravica do izjave pomeni, da mora organ subjektu postopka predložiti osnutek odločitve in mu omogočiti, da se o odločitvi izreče, preden je ta sprejeta, sploh kadar pravica do izjave ni bila spoštovana med postopkom.74 Namen teh določb Kvalifikacijske direktive je v tem, da omogočajo razjasnitev nedoslednosti s strani prosilca in s tem morebitnih nesporazumov, ki lahko nastanejo zaradi kompleksne in edinstvene situacije azilnega postopka.75 UNHCR meni, da je del postopka ugotavljanja in ocene dejstev tudi to, da se prosilcu pred sprejemom končne odločitve omogoči, da pojasni (olajševalne) okoliščine v zvezi z očitanimi nedoslednostmi.76 To je bistveno za razumevanje sledečega.

68 UNHCR, 2013, str. 128.
69 Battjes, 2006, str. 226.
70 Noll, 2005, str. 4.
71 Prav tam, str. 304.
72 Tukaj naj samo omenim, da informacije o izvorni državi nikoli ne morejo biti popolne in da pogosto ne vključujejo različnih vidikov situacije.
73 Zalar, 2023, str. 152.
74 Mediocurso, točka 42; in M.

V zadevi MM je Sodišče EU, nekoliko drugače od doslej navedenega, prek predhodnega vprašanja zavzelo stališče, da se sodelovalna dolžnost razteza samo na prvo fazo obravnavanja dejstev in okoliščin.77 To obravnavanje namreč poteka na dveh ločenih stopnjah. Prva se nanaša na ugotovitev dejanskih okoliščin, ki lahko pomenijo dokaze v utemeljitev prošnje, druga pa se nanaša na pravno presojo teh dokazov in subsumpcijo pod ustrezne pravne norme, v okviru katere se na podlagi dejstev, ki so značilna za obravnavano zadevo, ugotovi, ali so izpolnjeni vsebinski pogoji za priznanje mednarodne zaščite, določeni s Kvalifikacijsko direktivo.78 Menim pa, da lahko odločitev sodišča razlagamo nekoliko drugače oziroma je v izogib zmotnemu razumevanju pravice do izjave potrebna natančnejša analiza. Namen Kvalifikacijske direktive namreč ni zagotavljanje procesnih garancij prosilcu.79 Zato, čeprav sodelovalna dolžnost ne bi prišla v poštev na drugi stopnji obravnavanja, je vedno treba spoštovati pravico do izjave, ki ima bistveno splošnejši pomen. Sodišče tako tudi v zadevi MM ni izključilo možnosti, da dejstvo, da prosilec ni imel možnosti odgovoriti na elemente, na katerih namerava organ za presojo utemeljiti svojo odločitev, lahko krši pravico do izjave,80 čeprav ti elementi niso bili zajeti v sodelovalno dolžnost glede na konkretno fazo postopka.
Pravica do izjave namreč zahteva, da se prosilcu predstavijo glavne ugotovitve organa za presojo, na katerih namerava ta utemeljiti svojo odločbo.81 Če prosilec ni mogel predložiti pripomb o takih dokazih in sklepih med postopkom (med osebnim pogovorom, v odgovoru na poročilo o osebnem pogovoru itd.), mu je to treba omogočiti, preden se sprejme negativna odločitev, drugače je njegova pravica do izjave kršena. To velja zlasti, če prosilčevih pripomb ni mogoče v celoti upoštevati v instančni (pritožbeni) fazi zaradi omejene sodne presoje po nacionalni zakonodaji,82 in ko upoštevamo možnost preganjanja beguncev (vsaj) v izvornih državah ter strogo oziroma rigorozno presojo, ki jo mora opraviti organ.83

75 Sodba Sodišča EU v združenih zadevah C-148/13, C-149/13 in C-150/13 A, B in C proti Staatssecretaris van Veiligheid en Justitie z dne 2. decembra 2014, točke 60–62.
76 UNHCR, 2013, str. 125.
77 MM., točka 64.
78 Prav tam.
79 Prav tam, točke 71–73.
80 Reneman, 2013, str. 173.
81 Prav tam.
82 Prav tam.
83 Sodbi ESČP v zadevah Soering proti Združenemu kraljestvu, št. 14038/88, z dne 7. julija 1989, točka 91; Paposhvili proti Belgiji, št. 41738/10, z dne 13. decembra 2016, točka 187.

Na podlagi napisanega menim, da stroge presoje v smislu procesnih zahtev 4. člena Listine EU (in 3. člena EKČP) v povezavi s pravico do izjave84 na noben način ni mogoče opraviti brez tega, da se razčistijo vse nedoslednosti in nesporazumi, ki lahko bistveno vplivajo na izid postopka, to pa je mogoče samo ob (intenzivnem) sodelovanju prosilca. Smiselno enako lahko izpeljemo tudi iz postopkov vrnitve tujca.
Pravica do izjave nacionalnemu pristojnemu organu v teh postopkih nalaga obveznost, da pred zaslišanjem v zvezi s sprejetjem odločbe obvesti vračano osebo, da namerava zoper njo sprejeti odločbo o vrnitvi, in ji predhodno posreduje dejstva, na katerih bo utemeljil to odločbo, ter dodeli rok za premislek, preden poda svoja stališča, če prej ni mogla ustrezno in učinkovito podati svojega mnenja o razlogih, s katerimi bi bilo mogoče v skladu z nacionalnim pravom utemeljiti, zakaj naj nacionalni organ ne sprejme odločbe o vrnitvi.85 Iz tega izhaja, da je glede neuporabe sodelovalne dolžnosti na drugi stopnji treba dopustiti izjemo (tj. da prosilec komentira dokazno oceno) v primerih, »kadar državljan tretje države ni mogel razumno domnevati, katera dejstva so mu očitana«.86

»Ker se omejitve pravice do izjave v postopkih vračanja vežejo na krog pravno zavarovanih dobrin, kot so prepoved nečloveškega ravnanja, koristi otrok, varstvo družinskega življenja, zdravstveno stanje, kar je ožje od pravno zavarovanih dobrin v postopku odločanja o prošnji za mednarodno zaščito, je razumno sklepati, da omenjena izjema, ko je treba dejstva, ki se očitajo tujcu in na podlagi katerih namerava pristojni organ sprejeti odločbo, predhodno posredovati prosilcu za mednarodno zaščito, da nanje odgovori in da lahko relevantna dejstva preveri. Zato je treba takšne elemente predočiti prosilcu za mednarodno zaščito v okviru osebnega razgovora ali pa pisno.«87

Podobno stališče lahko izpeljemo tudi iz sodne prakse ESČP. V zadevi IM je ESČP ugotovilo kršitev 13.
člena EKČP, ker nacionalni organi niso omogočili prosilcu, da bi se pisno ali ustno izrekel o ugotovitvah države glede točnosti in verodostojnosti njegovih izjav, niti mu ni bila dana možnost, da pojasni domnevne neskladnosti in predloži manjkajoče dokumente, ki bi bili potrebni za postopek.88 Po mnenju ESČP mora imeti prosilec možnost pojasniti neskladja v svojih izjavah in podati pripombe na informacije, ki nasprotujejo prosilčevim navedbam ter posledično sejejo dvom o verodostojnosti njegove prošnje,89 in imeti možnost izpodbijati ugotovitve organa glede pristnosti njegovih dokumentov.90

84 J. K., točka 98.
85 Boudjlida, točka 69.
86 Prav tam, točka 56.
87 Zalar, 2023, str. 152.
88 IM., točka 147. Glej tudi sodbe ESČP v zadevah S. H. H. proti Združenemu kraljestvu, št. 60367/10, z dne 29. januarja 2013, točka 71; Collins in Akaziebie proti Švedski, št. 23944/05, z dne 8. marca 2007; J. K., točka 93; in F. G., točka 113.

Iz vsega tega izhaja, da je odtegnitev pravice prosilca, da je soočen z nedoslednostmi v njegovih izjavah ali v njegovih izjavah v primerjavi z informacijami o izvorni državi, nesorazmerna omejitev pravice do izjave, kadar prosilec nima očitno neutemeljenega zahtevka glede varstva pred mučenjem in nečloveškim ravnanjem v smislu 4. člena Listine (in 3. člena EKČP) ter prvega odstavka 52. člena Listine EU.91 Taka omejitev ne spoštuje jedra pravice do obrambe in pravice do izjave ter ne ustreza ciljem splošnega interesa Unije.

Slovenski pravni red vsebuje splošno določbo, ki (vsaj de iure) neposredno rešuje navedeno zagato. Peta točka tretjega odstavka 146. člena Zakona o splošnem upravnem postopku (ZUP)92 namreč določa, da mora uradna oseba, ki vodi postopek, stranki na ustni obravnavi ali zunaj nje pisno oziroma ustno na zapisnik omogočiti, da se seznani z uspehom dokazovanja ter da se o tem izreče.
Vendar pa sta slovensko Upravno sodišče in Vrhovno sodišče zadržani do uporabe te splošne določbe v azilnih postopkih,93 čeprav bi jo glede na 33. člen ZMZ-1 morali uporabljati. Vrhovno sodišče je v eni od zadev celo odločilo, da ima prosilec to pravico le, če jo izrecno zahteva.94 Upravno sodišče pravico do izjave (in pravico do obrambe) pravilno obravnava kot splošno načelo znotraj prava EU in je v številnih azilnih zadevah izpeljalo razlago, skladno z navedeno sodno prakso, da pravica prosilca biti soočen z nedoslednostmi zajema tako neskladja med izjavami prosilca in informacijami o izvorni državi (ne glede na način pridobitve) kakor tudi morebitna neskladja med samimi prosilčevimi izjavami.95 Žal pa se razumevanje in razlaga 16. člena Procesne direktive v povezavi z določilom prvega odstavka 4. člena Kvalifikacijske direktive razlikujeta, ko pogledamo prakso Vrhovnega sodišča. Slednje se pogosto opira zgolj na jezikovno razlago omenjenih določb, iz katere izhaja, da ima prosilec pravico do soočenja le, če gre za neskladja med njegovimi izjavami samimi, ne pa tudi med njegovimi izjavami in stanjem v izvorni državi.96 To za prosilca lahko pomeni, da sploh nima možnosti komentirati ugotovitve pristojnega organa o razmerah v izvorni državi oziroma

89 Sodbi ESČP v zadevah RC proti Švedski, št. 41827/07, z dne 9. marca 2010, točka 50; in Collins in Akaziebie.
90 Sodba ESČP v zadevi Matsiukhina in Matsiukhin proti Švedski, št. 31260/04, z dne 21. junija 2005.
91 Zalar, 2023, str. 152.
92 Uradni list RS, št. 24/06 – uradno prečiščeno besedilo, 105/06 – ZUS-1, 126/07, 65/08, 8/10, 82/13, 175/20 – ZIUOPDVE in 3/22 – ZDeb.
93 Zalar, 2023, str. 152.
94 Sklep Vrhovnega sodišča RS I Up 262/2017 z dne 17. januarja 2018, točka 15.
95 Sodba Upravnega sodišča RS I U 433/2016-12 z dne 24. avgusta 2016.
96 Zalar, 2023, str. 153; in odločitve Vrhovnega sodišča RS v zadevah I Up 283/2016 z dne 3. novembra 2016, točke 26–31, I Up 208/2016 z dne 1.
februarja 2017, točka 10 in I Up 96/2018 z dne 20. junija 2018, točka 17. 101 Urh Šelih – Nekateri vidiki pravice do izjave v azilnih postopkih jih izpodbijati, kar pa je diametralno nasprotno esenci pravice do izjave. Vrhovno sodišče namreč meni, da se predmetno vprašanje nanaša na drugo fazo ugotavljanja dejstev in okoliščin, kot jo je postavilo Sodišče EU v omenjeni sodbi M, v kateri ne gre več za so- delovalno dolžnost, ampak za suvereno dokazno oceno pristojnega organa, zunaj dometa pravice do izjave.97 Še več, argumentiramo lahko, da razlaga Vrhovnega sodišča lahko poseže v načelo non refoulement, ki se v azilnih postopkih lahko popolnoma uresničuje zgolj z intenzivnim sodelovanjem prosilca in je zato tesno povezano s pravico do izjave. 2.3.3. Dokazni standard in dokazno breme Dejstva, ki jih je v azilnem postopku treba »dokazati«, so tista, ki se nanašajo na osebne izkušnje prosilca, zaradi katerih naj bi se bal preganjanja, in ozadje zadeve, zaradi katerega ni želel izrabiti zaščite oblasti v izvorni državi. Obstajati mora utemeljen strah pred preganjanjem, ki pa mora biti resničen, kar se oceni glede na njegov osebni položaj ter predložene dokaze in razmere v izvorni državi.98 Težava, ki se pojavi, je: kdaj je prosi- lec izkazal, da je njegov strah resničen? Vnovič, odločitev države glede (ne)dodelitve mednarodne zaščite navadno temelji na izpovedi prosilca in oceni verodostojnosti, saj ti pogosto nimajo drugih (fizičnih) doka- zov, s katerimi bi podprli svoje trditve o preganjanju. Ni pa mogoče vedno ugotoviti, ali prosilec govori resnico, zato je pomembno, kakšen dokazni standard naj velja v azilnih zadevah, ne da bi ta pomenil nedosegljivo nalogo za prosilca in mu tako efektivno one- mogočil pravico do azila ter s tem pomenil tveganje za kršitev vračanja. Ko govorimo o dokaznem standardu v azilnih zadevah, se postavljata dve vprašanji: 1. Kako verjetno (tvegano) mora biti preganjanje oziroma resna škoda? 2. 
Katere kazalnike naj upoštevamo pri ocenjevanju tveganja?99 Ker Kvalifikacijska direktiva ne vsebuje določb glede dokaznega standarda,100 se po- nuja preprosta, toda nepravilna rešitev, da lahko države članice uvedejo lastne zahteve glede standarda dokazovanja. Vendar pravo EU ob upoštevanju pravice do izjave, prepo- vedi vračanja in pravice do azila prepoveduje, da države določijo tak dokazni standard, ki bi spodkopaval učinkovito izvrševanje navedenih pravic.101 Pa poglejmo, kakšen pravni okvir ponujajo mednarodni pravni akti. Kvalifikacijska direktiva v točki f 2. člena določa, da je oseba upravičena do subsidiarne zaščite, kadar obstajajo utemeljeni razlogi za prepričanje, da bi se zadevna oseba v primeru vračanja soočila z utemeljenim tveganjem, da utrpi resno škodo. Ta določba in vsebina pojma resne škode sta osnova za določanje dokaznega standarda v praksi ESČP in Komisije za 97 Zalar, 2023, str. 153. 98 Gorlick, 2003, str. 9. 99 Reneman, 2013, str. 185. 100 Noll, 2005, str. 3. 101 Reneman, 2013, str. 185. 102 Zbornik znanstvenih razprav – letnik LXXXIV, 2024 LjubLjana Law Review, voL. LXXXiv, 2024 preprečevanje mučenja,102 ki bi jo bilo treba upoštevati, ko razlagamo dokazni standard v luči prava EU, saj morajo države članice načelo vračanja spoštovati skladno s svojimi mednarodnimi obveznostmi.103 Prav tako pa moramo ob razlagi navedenega upoštevati 19. člen Listine EU, ki določa, da se: »nihče se ne sme odstraniti, izgnati ali izročiti državi, v kateri obstaja zanj resna nevarnost, da bo podvržen smrtni kazni, mučenju ali drugemu nečloveškemu ali ponižujočemu ravnanju ali kaznovanju.« Na prvi pogled 19. člen zahteva višjo stopnjo predvidljivosti nevarnosti vračanja kot ESČP, ki zahteva realno tveganje,104 vendar pa že iz tretjega odstavka 52. člena Listine EU izhaja, da prepoved vračanja, ki jo določa pravo EU, vsebuje sodno prakso ESČP v zvezi s 3. členom Konvencije,105 čemur pritrjuje tudi Sodišče EU. 
Nadalje Kvalifikacijska direktiva določa, da je preganjanje v preteklosti resen znak prosilčevega utemeljenega strahu pred preganjanjem,106 razen če obstajajo tehtni razlogi organa za prepričanje, da se tako preganjanje ali resna škoda ne bosta ponovila.107
Navsezadnje pa se je treba zavedati, da je dolžnost države, da v sodelovanju s prosilcem oceni ustrezne elemente prošnje. Ta norma, kot izhaja iz prvega odstavka 4. člena Kvalifikacijske direktive, ni fakultativna in s tem ni odvisna od morebitnih specifičnih pravil nacionalnega dokaznega postopka niti od tega, na kateri strani je dokazno breme.108 Reneman povzema stališče UNHCR in trdi, da prosilcu utemeljenega strahu ni treba »dokazati« v klasičnem pomenu besede, ampak naj se v azilnih zadevah uporablja test razumne stopnje verjetnosti.109 UNHCR v povezavi s tem še navaja, da naj se strah pred preganjanjem šteje za utemeljenega, če lahko prosilec razumno dokaže, da je njegovo bivanje v izvorni državi postalo zanj nevzdržno zaradi navedenih razlogov.110 Nadalje naj niti ne bi bilo treba, da je preganjanje verjetnejše kot ne.111 Prosilec svoje dokazno breme izpolni s tem, da navede dejstva, ki so pomembna za odločitev o prošnji, če drugih dokazov ne poseduje.112 Glede na posebnosti azilnih postopkov in položaja prosilcev ima uradna oseba skupaj s prosilcem dolžnost, da ugotovi in oceni vsa relevantna dejstva. To se doseže tako, da se uradna oseba seznani z razmerami v izvorni državi, usmerja prosilca pri podajanju informacij in da navedbe, ki jih je mogoče, preveri.113
Podobne standarde je postavilo tudi ESČP. Najprej je treba poudariti, da možnost potencialnega preganjanja samodejno ne pomeni kršitve 3. člena EKČP.114 Nadalje ESČP ob razlagi 3. člena EKČP od prosilca ne zahteva, da nesporno dokaže, da bo ob vrnitvi v izvorno državo obravnavan na način, ki krši omenjeni člen.115 Dokazno breme ne sme biti tako, da ovira vsebinsko preučitev domnevne nevarnosti kršitve, in se mora presojati od primera do primera.116 Zahtevati od prosilca, da predloži nesporne dokaze o nevarnosti preganjanja, bi prosilcu naložilo očitno nesorazmerno breme in pomenilo, da mora prosilec dokazati bodoči dogodek, kar pa je seveda nemogoče.117 Ves čas postopka mora pristojni organ upoštevati, da prosilec morda ne more predložiti dokazov, njihova odsotnost pa ne more biti odločilen argument za zavrnitev prošnje.118
Pristojni organ naj raje preuči predvidljive posledice odstranitve za vsakega posameznega prosilca posebej in v vsakem primeru.119 ESČP prav tako pri presoji tveganja upošteva dejstvo, da je bil prosilec že preganjan v preteklosti, kar pa ne pomeni samodejno, da je obstoj prihodnjega tveganja verjeten.120
Glede na navedeno je načeloma prosilec tisti, ki mora utemeljiti, da obstajajo utemeljeni razlogi za prepričanje, da bo izpostavljen resnični nevarnosti preganjanja oziroma da mu grozi resna škoda, glede na to, da je edino on tisti, ki lahko poda informacije o osebnih okoliščinah,121 in predložiti morebitne dokaze, če jih ima.122 Vendar to še ne pomeni, da mora prosilec izkazati, da je preganjanje (ali resna škoda) bolj verjetno kot ne.123 Zaradi posebne situacije prosilcev je pogosto treba v dvomu šteti, da so njihovi predloženi dokazi in navedbe verodostojni.124 Ko prosilec izpolni naštete dolžnosti (dokazno breme), se to prevali na pristojni organ, da ovrže vsakršen dvom o zatrjevanem preganjanju ali resni škodi.125 Drugače pa je z dokazovanjem splošnih razmer v izvorni državi, ki je breme pristojnega organa, saj ima ta precej lažji dostop do informacij o razmerah v izvorni državi in mora dejstva ugotoviti po uradni dolžnosti.126
V sodbah Upravnega sodišča je mogoče zaznati vpeljavo navedenih standardov, zlasti kadar je bistvo spora ocena (ne)verodostojnosti prosilca. Sodišče v takem primeru v povezavi z varstvom pravice iz 3. člena EKČP opravlja strogo sodno presojo.127 To v slovenski sodni praksi še ne pomeni, da sodišče »z vso strogostjo« išče odgovor na vprašanje, ali so se dogodki resnično zgodili tako, kot jih opisuje tožnik.128 Dokazni standard naj bo, ob upoštevanju sodnih praks drugih držav članic in UNHCR, precej nižji od 50 odstotkov,129 kar pomeni, da je tudi nižji od standarda tako imenovanega ravnotežja med dvema različnima možnostma in standarda onkraj razumnega dvoma, ki velja v kazenskem pravu. Upravno sodišče pravilno ugotavlja, da gre za standard razumne verjetnosti, da bi do preganjanja prišlo.130 Podobno tudi Ustavno sodišče razlikuje med subjektivnim in objektivnim elementom. Subjektivni element nalaga dolžnost prosilcu, da utemelji prošnjo, pri čemer prosilcu ne sme biti naloženo pretežko breme, objektivni pa organu, da preveri izjave prosilca z vidika objektivnih dejstev in informacij o izvorni državi.131
3. Sklep
V azilnih postopkih so osebne izjave prosilcev ključnega pomena, saj so pogosto edini vir informacij glede njihovega pregona ali resnega tveganja, s katerim bi se soočili ob vrnitvi v izvorno državo.
102 Prav tam, str. 186.
103 Člen 21 Direktive 2011/95/EU Evropskega parlamenta in Sveta z dne 13. decembra 2011 o standardih glede pogojev, ki jih morajo izpolnjevati državljani tretjih držav ali osebe brez državljanstva, da so upravičeni do mednarodne zaščite, glede enotnega statusa beguncev ali oseb, upravičenih do subsidiarne zaščite, in glede vsebine te zaščite (prenovitev), Uradni list EU, št. L 337.
104 Battjes, 2006, str. 115.
105 Reneman, 2013, str. 187.
106 Četrti odstavek 4. člena Kvalifikacijske direktive.
107 Tako tudi drugi odstavek 23. člena ZMZ-1. Glej tudi J. K., točka 102.
108 Noll, 2005, str. 303–304.
109 Reneman, 2013, str. 187. Tak standard je mogoče zaznati tudi v praksi Zgornjega doma parlamenta Združenega kraljestva, primeroma glej zadevo R. proti Secretary of State for the Home Department, Ex parte Sivakumaran and Conjoined Appeals (UN High Commissioner for Refugees Intervening), [1988] AC 958, [1988] 1 All ER 193, [1988] 2 WLR 92, [1988] Imm AR 147, 16. december 1987.
110 UNHCR, 2019, točka 53.
111 UNHCR, 1998, točka 17.
112 Gorlick, 2003, str. 5.
113 UNHCR, 1998, točka 6.
114 Sodba ESČP v zadevi Dzhaksybergenov proti Ukrajini, št. 12343/10, z dne 10. februarja 2011, točka 35.
115 Glej tudi Soering.
116 Sodba ESČP v zadevi MSS proti Belgiji in Grčiji, št. 30696/09, z dne 21. januarja 2011, točka 389.
117 Sodba ESČP v zadevi Rustamov proti Rusiji, št. 11209/10, z dne 3. julija 2012, točka 117.
118 J. K., točka 92.
119 Sodba ESČP v zadevi Sufi in Elmi proti Združenemu kraljestvu, št. 8319/07 in 11449/07, z dne 28. junija 2011, točka 249.
120 Sodba ESČP v zadevi Salah Sheekh proti Nizozemski, št. 1948/04, z dne 11. januarja 2007, točki 146–147. Glej tudi R. C., točka 55.
121 Sodba ESČP v zadevi M. K. in drugi proti Poljski, št. 40503/17, 42902/17, 43643/17 ..., z dne 23. julija 2020, točka 170.
122 F. G., točka 11. Tako tudi Sodišče EU v zadevah C-411/10 in C-493/10 N. S. in M. E. proti Refugee Applications Commissioner and Minister for Justice, Equality and Law Reform z dne 21. decembra 2011, točka 94, ter C-71/11 in C-99/11 Bundesrepublik Deutschland proti Y in Z z dne 5. septembra 2012.
V številnih primerih so prosilci prisiljeni zapustiti svoje domove v naglici, ne da bi lahko vzeli s seboj kakršnekoli dokumente, ki bi lahko podkrepili njihove zgodbe. Poleg tega so lahko ti dokumenti zaradi političnih ali vojnih razmer v izvornih državah težko dosegljivi ali celo uničeni. Zaradi teh okoliščin se v azilnih postopkih pogosto zgodi, da so osebne izjave prosilcev edini dokaz, ki ga lahko predložijo.
Pravica do izjave torej ni le formalnost, ampak nujen element azilnega postopka. Prosilcem omogoča, da predstavijo svojo zgodbo, pojasnijo svoje strahove in tveganja ter odgovarjajo na vprašanja uradnih oseb, ki odločajo o njihovi (pravni) usodi. Zato je bistveno, da postopki za zbiranje teh izjav in drugih dokazov potekajo v sodelovanju obeh strani postopka, ob ustreznih dokaznih pravilih in standardih. Pomembno je tudi, da so uradne osebe, ki izvajajo pogovore, ustrezno usposobljene, da lahko zaznajo in razumejo kulturne, jezikovne in psihološke vidike, ki lahko vplivajo na izjave prosilcev.
123 Sodba ESČP v zadevi Saadi proti Italiji, št. 37201/06, z dne 28. februarja 2008, točka 140.
124 Zalar, 2023, str. 156.
125 Sodba ESČP v zadevi W. proti Franciji, št. 1348/21, z dne 30. avgusta 2022, točka 67.
126 J. K., točka 98.
127 Sodba Upravnega sodišča RS I U 787/2012-4 z dne 29. avgusta 2012, točke 75–79.
128 Sodba Upravnega sodišča RS I U 411/2015-57 z dne 24. aprila 2015, točka 118.
129 Prav tam.
130 Prav tam.
131 Odločba Ustavnega sodišča RS U-I-292/09-9, Up-1427/09-16 z dne 20. oktobra 2011, točka 15.
Literatura
ACTIONES Handbook on the Techniques of Judicial Interactions in the Application of the EU Charter, Module 3 (2019) Right to an effective remedy, (dostop: 1.
maj 2024).
Bardutzky, S., Fajdiha, M., Grief, M., Lipovec Čebron, U., Regvar, U., Samobor, A., Zagorc, S., Zalar, B., in Zorn, J. (2023) Uvod v pravo migracij in mednarodne zaščite. Ljubljana: Založba Pravne fakultete.
Battjes, H. (2006) European Asylum Law and International Law. Leiden/Boston: Immigration and Asylum Law and Practice in Europe.
Gorlick, B. (2003) ‘Common Burdens and Standards: Legal Elements in Assessing Claims to Refugee Status’, International Journal of Refugee Law, letnik 15, št. 3, str. 357–376.
Noll, G. (2005) Evidentiary Assessment and the EU Qualification Directive, New Issues in Refugee Research, Working Paper št. 117, Ženeva: UNHCR.
Reneman, A. M. (2013) EU Asylum Procedures and the Right to an Effective Remedy. Meijers-reeks 208, Faculty of Law, Leiden University.
R. proti Secretary of State for the Home Department, Ex parte Sivakumaran and Conjoined Appeals (UN High Commissioner for Refugees Intervening), [1988] AC 958, [1988] 1 All ER 193, [1988] 2 WLR 92, [1988] Imm AR 147, 16. december 1987.
UNHCR (1984) Executive Committee Meetings, Identity Documents for Refugees EC/SCP/33, (dostop: 14. september 2024).
UNHCR (1998) Note on Burden and Standard of Proof in Refugee Claims, (dostop: 10. maj 2024).
UNHCR (2004) Annotated Comments on the EC Council Directive 2004/83/EC of 29 April 2004 on Minimum Standards for the Qualification and Status of Third Country Nationals or Stateless Persons as Refugees or as Persons who otherwise need International Protection and the Content of the Protection granted, (dostop: 3. maj 2024).
UNHCR (2010) Improving Asylum Procedures, (dostop: 3. maj 2024).
UNHCR (2011) Building in Quality: A Manual on Building a High Quality Asylum System, (dostop: 5. maj 2024).
UNHCR (2013) Beyond Proof: Credibility Assessment in EU Asylum Systems, (dostop: 4. maj 2024).
UNHCR (2019) Handbook on Procedures and Criteria for Determining Refugee Status, (dostop: 10. maj 2024).
© The Author(s) 2024
Kratki znanstveni članek
DOI: 10.51940/2024.1.109-124
UDK: 341.3/.4:341.645(5-11)
Polona Brumen*
Pisma iz Tokia
Povzetek
Z analizo primarnih in sekundarnih virov v angleškem jeziku avtorica predstavi nekatere značilnosti Mednarodnega vojaškega sodišča za Daljni vzhod, ki je po koncu druge svetovne vojne dve leti in pol zasedalo v Tokiu. Opre se na uradno, pogosto tajno korespondenco nekaterih članov sodišča, saj so njihove misli, izražene domačim institucijam, zelo zanimiv vpogled v delovanje sodišča, njegove značilnosti, meddržavno sestavo in tudi dileme, s katerimi so se pri sprejemanju končne odločitve soočali sodniki. Čeprav je sodišče delovalo na osnovi Statuta Mednarodnega vojaškega sodišča za Daljni vzhod in takrat veljavnega mednarodnega prava, sodniki ob multilateralni prisotnosti in sočasni aplikaciji značilnosti različnih pravnih sistemov niso zmogli v celoti izstopiti iz svojih pravnih tradicij; v mnogoplastnih okoliščinah dolgotrajnega dela daleč od doma je med udeleženci na lokaciji – in v njihovih odnosih z vodilnimi v domačih državah – prihajalo do nepričakovanih, dotlej nepoznanih zapletov in merjenj moči na različnih ravneh. Glavna posledica tega je bila neenotnost razsodbe: sprejeta večinska sodba je 25 obtožencev spoznala za krive, sedem jih je obsodila na smrtno kazen. Od enajstih članov senata so trije podali (delno) ločeno odklonilno mnenje, predsednik senata pa je na koncu uradno vložil le izjavo o nestrinjanju z višino kazni. Tako je glavni doprinos tega prispevka razkritje neenotnosti sodnega senata, saj nasprotuje običajno razširjenim predstavam, da je bilo predmetno sojenje »ameriška predstava«.
Ključne besede
Mednarodno vojaško sodišče za Daljni vzhod, mednarodni odnosi, mednarodno pravo, ločeno mnenje, diplomatska korespondenca.
* Univerzitetna diplomirana japonologinja in sinologinja, študentka doktorskega študijskega programa Pravo. Samostojna prevajalka in tolmačka, sodna tolmačka za japonski jezik; elektronska pošta: pilonajp@yahoo.co.jp.
Zbornik znanstvenih razprav – letnik LXXXIV, 2024 • Ljubljana Law Review – Vol. LXXXIV, 2024 • pp. 109–124
ISSN 1854-3839 • eISSN: 2464-0077
1. Uvod
Po kapitulaciji v drugi svetovni vojni1 so Japonsko okupirale zavezniške sile pod poveljstvom ameriškega generala Douglasa MacArthurja.2 Okupacija Japonske se je končala s Pogodbo iz San Francisca (The Treaty of San Francisco), mirovno pogodbo z Japonsko, ki je začela veljati 28. aprila 1952. Osem mesecev in pol po japonski kapitulaciji je v glavnem mestu te otoške države na skrajnem vzhodu evrazijske celine v stavbi nekdanjega Ministrstva za vojno začelo delovati Mednarodno vojaško sodišče za Daljni vzhod.3
Ta prispevek skozi delček (pretežno uradne) korespondence udeležencev iz prvih vrst predstavi okolje in okoliščine, v katerih so delovali, delovne razmere in težave, s katerimi so bili soočeni, ter s tem na inovativen način približa delovanje enega najpomembnejših sojenj v zgodovini.4 Kljub drugačnim, običajno razširjenim5 pogledom na delovanje predmetnega sodišča, v katerem so večino tožilskega osebja sestavljali ameriški strokovnjaki, sodniškega pa strokovnjaki iz držav britanskega Commonwealtha, slednje predstavi kot mednarodno, multilateralno organizacijo, ki je na Japonskem – vsaj na začetku – delovala sredi popolnega uničenja.6
1 S podpisom Instrumenta predaje (Instrument of Surrender) 2. septembra 1945, ki ga je zahtevala Potsdamska deklaracija, sprejeta 26. julija 1945.
2 Vrhovni poveljnik za zavezniške sile (Supreme Commander for the Allied Powers – SCAP).
3 International Military Tribunal for the Far East (IMTFE). S Posebnim razglasom (Special Proclamation) je 19. januarja 1946 Statut Mednarodnega vojaškega sodišča za Daljni vzhod (Charter of the International Military Tribunal for the Far East; v nadaljevanju »Statut«) razglasil vrhovni poveljnik za zavezniške sile. Statut je bil tri dni pred vložitvijo obtožnice (29. aprila 1946) spremenjen tako, da se je sodniški senat okrepil na 11 članov. Že delegiranim predstavnikom iz Avstralije, Francije, Kanade, Kitajske, Nizozemske, Nove Zelandije, Sovjetske zveze, Velike Britanije in Združenih držav sta se pridružila še predstavnika s Filipinov in iz Indije.
4 »Najdaljše kazensko sojenje v zgodovini« (predsednik sodišča Avstralec William Webb v zasebni korespondenci, 17. avgust 1948; Sedgwick, 2012, str. 91), »sojenje stoletja« (predsednik sodišča; Kaufman, 2013, str. 770) je trajalo od 3. maja 1946 do 12. novembra 1948, ko se je končalo po osmih dneh branja sodbe in z izrekom kazni preostalim 25 obtožencem. Če ni navedeno drugače, so prevodi avtoričini.
5 Dower, 1999; Minear, 1971.
6 »[W]hen we entered Yokohama proper, there was little to be seen in the dark at all, except for shell of buildings here and there which had withstood the fire. The rest was mostly devastation at ground level.« (»[K]o smo vstopili v mesto Yokohama, v temi praktično ni bilo ničesar videti, razen tu in tam ogrodij stavb, ki so še stale po požarih. Preostalo je bilo skoraj vse popolnoma uničeno.«) (Harold J. Evans, asistent novozelandskega sodnika Northcrofta, v zasebni korespondenci po prihodu februarja 1946). »The streets were flat. It had been bombed completely. That was the biggest impression; to see the devastation.« (»Ulice so bile zravnane z zemljo. Bilo je popolnoma zbombardirano. To je pustilo najmočnejši vtis, videti opustošenje na lastne oči.«) (Elaine B. Fischel, asistentka dveh ameriških zagovornikov, v zasebni korespondenci po prihodu aprila 1946). Oba navedka sta iz Sedgwick, 2012, str. 3 in 7.
Na 1212. (od 1218.)
strani besedila sodbe izvemo, da se indijski sodnik ne strinja z večinsko sodbo in da je podal izjavo o svojih razlogih za nestrinjanje, da sta člana senata iz Francije7 in Nizozemske podala delno odklonilno mnenje v zvezi z večinsko sodbo in v arhiviranih izjavah predstavila svoje razloge za navedeno, da je predstavnik Filipinov podal ločeno pritrdilno mnenje, predsednik senata iz Avstralije pa zaradi strinjanja z večinsko sodbo, ugotovljenimi dejstvi, pravno podlago in pristojnostjo sodišča ni vložil ločenega mnenja,8 ampak le kratko izjavo v zvezi z višino kazni za nekatere obsojence iz naslova kazenske neobravnave cesarja.9
Po kratkem pritožbenem postopku na ameriškem Vrhovnem sodišču, ki se je razglasilo za nepristojno, in po izvršitvi smrtnih kazni 23. decembra leta 1948 je »najpomembnejše sojenje v izpričani zgodovini«10 skupaj s preganjanjem mednarodne kazenske odgovornosti posameznika za nekaj desetletij poniknilo, do vnovičnega vznika skupaj z nürnberškim »bratrancem« na začetku 90. let, ko so bila obnovljena prizadevanja za ustanovitev mednarodnega kazenskega sodišča skozi obstoj ad hoc mednarodnih kazenskih sodišč za nekdanjo Jugoslavijo in Ruando.11 Drugače kot v primeru nürnberškega predhodnika, Mednarodnega vojaškega sodišča, dokument sodbe Mednarodnega vojaškega sodišča za Daljni vzhod – enako velja tudi za dokumente ločenih mnenj – ni bil uradno objavljen. Do objave je prišlo šele leta 1977, ko je sodbo in ločena mnenja za objavo pod okriljem Univerze v Amsterdamu skupaj s sodelavci pripravil nizozemski sodnik.12 Edini dokument izmed navedenih, ki je v doglednem času postal dostopen širši javnosti, je bilo ločeno odklonilno mnenje indijskega sodnika Radhabinoda Pala, dokument, ki po dolžini presega dokument sodbe sodišča in ga je sodnik na lastne stroške izdal leta 1953 v Kalkuti.13
Glede na številčnost osebja in veliko število obtožencev Mednarodnega vojaškega sodišča za Daljni vzhod ter medijsko pozornost, ki je je bilo sojenje zaradi pritiska domače javnosti v zavezniških državah vsaj v začetni fazi deležno, je najverjetneje tudi upravičeno vprašanje, zakaj.
7 Francoski sodnik Henri Bernard se ni strinjal z načinom izvajanja sodnega postopka, izrazil je dvom v izvor sodišča samega in v imuniteto, ki je bila podeljena japonskemu cesarju Shōwa. Minear, 1971, str. 33 in 125; Varadarajan, 2015, str. 797.
8 The President’s Judgement – [splet]. Izhajajoč iz lastne pravne tradicije in prepričanja, da naj predsednik sodišča pripravi vodilno sodbo (angl. leading judgment), je sodnik Webb svoje besedilo začel pripravljati že jeseni leta 1946. Članom senata je predložil tudi drugi osnutek svoje sodbe, ki pa so ga – predvsem sodniki večine – razcefrali. V besedilu na 658 straneh, ki ga hranijo v Australian War Memorial v Canberri, predsednik sodišča argumentirano pojasni opravljeno analizo očitanih hudodelstev, raziskanih dejstev in okoliščin ter jih še zlasti, ko se odloča za višino kazni za obtožence, primerja s kaznimi, ki so bile določene v Nürnbergu. Cohen, 2020, str. 251; Higurashi, 2022, str. 207; Sedgwick, 2012, str. 316; von Lingen, 2018, str. 119.
9 Judgment of the Military Tribunal for the Far East, November 1948 in Judgment IMTFE. Separate Opinion of the President. Razen glavnega besedila večinske sodbe navedeni dokumenti v okviru zasedanja vojaškega sodišča niso bili prebrani. Komentator britanskega medija, ki je poročal o ugotovitvah tokijskega sodišča, je zapisal, da je na Japonskem razširjeno mnenje, da zato, ker se sodnikom mudi domov za božične praznike. Dyer, 2018, str. 100.
10 Nizozemski sodnik B. V. A. Röling na prvi mednarodni konferenci o predmetnem vojaškem sodišču, ki je leta 1983 potekala v Tokiu. Futamura, 2005, str. 32.
11 Totani, 2020, str. 156.
12 Futamura, 2005, str. 29–30; Röling, 1994, str. 6 in 81.
Zakaj je sojenje, ki je bilo glavni ukrep tranzicijske pravičnosti za eno od poraženk v drugi svetovni vojni, za več desetletij utonilo v pozabo? V okviru vrednotenjske razprave bom skušala ponuditi tudi nekaj iztočnic za odgovor na to vprašanje.
2. Pisma
2.1. »Buitenlandsche Zaken (Foreign Affairs Ministry) has actually very little interest in the Tokyo process.«14
Tako se je glasil del vsebine pisma, ki ga je nizozemski sodnik B. V. A. Röling 6. julija 1948 poslal vodji divizije za diplomatske zadeve na Ministrstvo za zunanje zadeve Nizozemske. Želel je pojasnila v zvezi s to navedbo, ki mu jo je po srečanju z nizozemskim tožilcem v Haagu nekaj dni pred tem posredovala žena. Minister za zunanje zadeve je oktobra 1947 – kot kaže, je takrat že bilo širše znano, da se nizozemski predstavnik v celoti ne strinja z večinskim mnenjem – Rölinga rotil, naj ne stori ničesar, kar bi javnosti lahko dalo vedeti, da se ne strinja s sodbo, saj to ne bi bilo sprejemljivo za družine žrtev japonskega nasilja: antantnega v Evropi in v nizozemski koloniji15 na otočju, ki je leta 1949 končalo večletni proces osamosvajanja v državo Indonezijo. Pritisk ministra k podpisu sodbe je bil najmočnejši poleti leta 1948, ko je slednji Rölingu pripisal, da s svojim načrtovanim delovanjem na kocko postavlja nacionalne interese oziroma ugled nizozemske vlade.16
Obtožbe o japonski agresiji je Röling zavrnil zaradi pomanjkanja preteklih primerov, prav tako pa se tudi ni zmogel poistovetiti z izumljanjem mednarodne pravne prakse in norm sodišča v Tokiu, katerega del je bil. Na začetku julija 1948 je izrazil globoko nestrinjanje oziroma celo zgražanje nad osnutkom sodbe, rekoč, da s tem ne želi povezovati svojega imena. Röling ni mogel sprejeti kriminalizacije zločinov proti miru in načrtovanja ter izvajanja agresivne vojne,17 ki jo je kriminaliziral Statut Mednarodnega vojaškega sodišča za Daljni vzhod (v nadaljevanju »Statut«), saj da pred letom 1939 ali pa tudi še pred letom 1943 to nista bili hudodelstvi v okviru sporazumov mednarodnega prava, kar v večinski sodbi nakazuje na kršitev načela nullum crimen, nulla poena sine lege praevia. Kakor situacijo pojasni Röling, v danem primeru nullum crimen, nulla poena sine lege praevia ne pomeni načela pravice oziroma pravičnosti, ker da, če bi bilo tako, v Statutu ne bi mogli inkriminirati dejanj za nazaj ex post facto. Posledično je bilo po njegovem mnenju načelo nullum crimen zgolj pravilo politike oziroma smernica (angl. rule of policy).18
Ko je sicer že bilo prepozno,19 je minister novembra 1948 nekoliko bolj spravno navedel, da na Rölinga niso želeli izvajati pritiska, ampak da so ga želeli le obvestiti o tedanjem haaškem mnenju v zvezi s celotno zadevo v Tokiu.
13 Palova ločena odklonilna sodba, kot jo imenuje sodnik sam, s katero je oprostil vse obtožence, je imela zelo pomembno vlogo pri izoblikovanju japonske percepcije bremena krivde za vojno (Nakazato, 2016). Poglobljena raziskava je pokazala, da je prevod ločenega mnenja indijskega sodnika v japonski jezik že junija 1948 pripravilo Vrhovno poveljstvo (Cheng, 2019, str. 109; Higurashi, 2022, str. 218). Kopije prevoda Palove sodbe v japonski jezik so bile že med izrekom sodbe na voljo v sodni dvorani, kmalu za tem pa so začele pronicati tudi v javnost. Prva tiskana izdaja, v japonsko korist prilagojenega besedila, z naslovom Teorija o japonski nedolžnosti: resnična sodba je bila izdana po uradnem končanju okupacije – v prvi polovici leta 1952. Babovic, 2019, str. 135; Ushimura, 2007, str. 218.
14 »Buitenlandsche Zaken (MZZ) ima pravzaprav zelo malo interesa za Tokijski proces.« Sedgwick, 2012, str. 308.
15 Minister za zunanje zadeve v pismu sodniku Rölingu, 28. oktober 1947. Prav tam, str. 309.
16 Prav tam, str. 309–310.
Prav tako je še dodal, da javnosti ne zanimajo specifični pravniški in akademski odtenki postopkovnih podrobno- sti, temveč da javnost zanima zgolj obsodba. Izraz nestrinjanja z večinsko sodbo v obliki objave ločenega odklonilnega mnenja je za nizozemskega sodnika Rölinga bil dokaz, da je bila odločitev Mednarodnega sodišča za Daljni vzhod sprejeta skozi razumski proces in da sodba ni bila vnaprej določena. Ostati tiho je bil po njegovem mnenju najjasnejši prikaz nepravičnosti.20 2.2. »This is not very happy.«21 Indonezija se v predelanem gradivu pojavlja pretežno znotraj že navedene perspekti- ve, in sicer kot nizozemska kolonija, ki je bila v času Mednarodnega vojaškega sodišča za Daljni vzhod država in statu nascendi. Sicer je imela tudi Indonezija med osebjem preu- čevanega mednarodnega sodišča svoja predstavnika, ki sta pomagala raziskovati japonske aktivnosti na navedenem otočju in sta delovala v okviru nizozemske divizije. Podobne predstavnike pri sodišču so imele tudi druge kolonije, pri čemer je šlo, kot kaže, za ime- 17 Dyer, 2018, str. 30; Lowe, 2007, str. 146–148; Varadarajan, 2015, str. 797–798. 18 Dyer, 2018, str. 62 in 63. 19 Ker bi javno nasprotovanje sodnikov na izrečeno sodbo lahko povzročilo zmanjšanje pomena sod- nega procesa v Tokiu, je Vrhovno sodišče Združenega kraljestva na vrhovnega poveljnika za zavez- niške sile generala MacArthurja že maja 1947 naslovilo prošnjo, naj zato posreduje pri indijskem sodniku Palu in nizozemskem sodniku Rölingu. Že pred tem se je namreč razvedelo za obstoj obeh ločenih mnenj, prošnjo za pomoč pa sta na London naslovila britanski sodnik Patrick in novoze- landski sodnik Northcroft. Cheng, 2019, str. 107. 20 Sedgwick, 2012, str. 308–311. 21 »To pa ni najbolj razveseljivo.« Britansko zunanje ministrstvo v pismu Uradu za zvezo (UKLIM) v Tokiu, 14. junija 1946. Prav tam, str. 254. 114 Zbornik znanstvenih razprav – letnik LXXXIV, 2024 LjubLjana Law Review, voL. 
LXXXiv, 2024 novanja in neke vrste nadzor nad delom sodelavcev sodišča iz nastajajočih držav s strani kolonialnih uradnikov in njihovo usmerjanje. Primeroma lahko navedemo potencialno pričanje voditelja neodvisne Burme (Mjanmar)22 – človeka, ki je imel veliko politično moč v enem od »japonskih satelitov« –, na katerega se navezuje naslovni citat tega razdelka. Najobsežnejše japonsko hudo- delstvo v Burmi je bilo trpinčenje vojnih ujetnikov,23 ki so gradili železnico med Burmo in Tajsko. Ideja o morebitnem pričanju burmanskega oblastnika, ki je bil v pridržanju v tokij- skem zaporu Sugamo, se je v zvezi s predlogom Združenih držav Amerike, da bodo sodile največjim japonskim vojnim zločincem, pojavila že jeseni leta 1945. Takrat je vodja bri- tanske pisarne v Burmi in avtor naslovnega citata L. B. Walsh Atkins predlagal, da z Ba Mawom račune poravnajo sami. Njegovo pričanje naj bi bilo namreč nezaželeno, ker bi lahko v zadrego spravilo tako tožilstvo kot tudi vlado njegovega veličanstva.24 Kot predstavnik Burme je v Tokiu deloval generalni pravobranilec U. E. Maung,25 ki je delo opravljal na strani tožilstva.26 Ena od njegovih nalog je bilo službeno potovanje v domačo državo, kjer je zbiral dokaze o japonskih hudodelstvih. V poročilo o delu 22 Ba Maw: predsednik začasne vlade v letih 1937–1939 in voditelj države v letih 1943–1945, (dostop: 30. marec 2024). Udeleženec Konference velike Vzhodne Azije, na kateri so se konec leta 1943 v Tokiu sestali voditelji že navedene Burme, Mandžurije, reorganizirane nacionalne vlade Republike Kitajske, Tajske, Druge filipinske republike in Svobodne Indije, voditelji od matičnih oziroma kolonialnih držav od- druženih političnih entitet, vključenih v poskus vseazijskega gibanja. Greater East Asia Co-Prosperity Sphere, (dostop: 30. marec 2024). 
23 Še en primer hudih zlorab vojnih ujetnikov, ki je prav tako postal eden od poglavitnejših razlogov za to, da so bili nekateri obtoženci spoznani za krive konvencionalnih vojnih hudodelstev proti ujetnikom – ker niso spoštovali relevantnih konvencij, so bili zaradi opustitvenih kaznivih dejanj spoznani za krive v skladu s 55. točko obtožnice (Totani, 2010, str. 154; Judgment IMTFE, 1948, str. 1144) – je bil tako imenovani Bataanski pohod smrti na Filipinih. Državo je poleg pomočnika tožilstva na sojenju zastopal sodnik Delfin Jaranilla, preživeli udeleženec tega pohoda. Sodnik Jaranilla je pripadal sedemčlanski večini; spisal je pritrdilno ločeno mnenje, v katerem je izrazil podporo ameriški uporabi jedrskega orožja.
24 Sedgwick, 2012, str. 253.
25 »Very reliable and capable colleague.« (»Zelo zanesljiv in sposoben sodelavec.«) Kot ga je označil britanski tožilec Comyns Carr v dopisu državnemu pravobranilcu 9. junija 1946. Comyns Carr je v dopisu 2. oktobra 1946 prav tako izrazil podporo indijskima kolegoma, za katera se je razvedelo, da se z misije zbiranja dokazov iz domovine ne bosta vrnila; »[I] would have been pleased to have them back.« (»Veselilo bi me, če bi se vrnila.«) Prav tam, str. 263.
26 Vodil ga je kontroverzni Joseph B. Keenan iz Združenih držav, ki ga bomo nekoliko podrobneje spoznali v spodnjem razdelku. Kot dodatek k temu pa je zanimivo njegovo pismo indijski vladi 26. avgusta 1946, v katerem je tudi on izrazil željo po podaljšanju mandata tožilca Govinde Menona z besedami: »I know you will be pleased to learn that his work has been very much to my satisfaction.« (»Vem, da vas bo razveselila informacija o tem, da me s svojim delom zelo navdušuje.«) Prav tam, str. 263.

Polona Brumen – Pisma iz Tokia

je Maung 31. julija 1946 zapisal tudi, da ob obisku domovine od guvernerja ni prejel dovoljenja za Ba Mawovo pričanje, čeprav so Ba Mawa v zvezi z njegovo kolaboracijo z Japonci želeli izprašati ameriški in filipinski tožilci.
Šlo je zlasti za vztrajanje Walsh Atkinsa, da se je treba Ba Mawovemu pričanju izogniti, prav tako pa tudi za pomisleke avtorja obtožnice, tožilca Arthurja Comyns Carra iz Združenega kraljestva. Rezultate morebitnega pričanja burmanskega voditelja je tudi sam označil za škodljive in na glavnega pravobranilca v Londonu naslovil vprašanje, kako naj se do situacije politično opredeli.27 Posledično je britanska vlada – kot kaže – kolonialnim oblastem v Burmi naročila, kako je oziroma ni treba ravnati: guverner v Rangunu ni izdal dovoljenja za pričanje, o čemer je pravobranilec Maung poročal v Tokio.

Sedgwick navaja, da je šlo pri zapletu z onemogočanjem pričanja za dvojni imperialni standard,28 saj naj bi bilo predmetno sojenje usmerjeno proti propadlemu japonskemu kolonialnemu projektu, in sicer v izvedbi kolonialnih velesil.29 Ba Mawa so po izpustitvi iz zapora Sugamo predali britanskemu Uradu za zvezo, ki je s pomočjo britanskega veleposlaništva poskrbel za Ba Mawov prevoz z britanskim letalom v Burmo.30

2.3. »We need more […] The best we have are Japanese prisoners of war.«31

V predhodnih dveh razdelkih sem predstavila mednarodni značaj osebja, ki je izvajalo in usmerjalo postopke Mednarodnega vojaškega sodišča za Daljni vzhod, prav tako sem poudarila geografske razsežnosti tega sodnega procesa. Delovanje Mednarodnega vojaškega sodišča za Daljni vzhod so omogočali »sodniki in njihove tajnice ter pomočniki, tožilci in njihovo osebje iz enajstih držav, ameriški in japonski zagovorniki ter njihovo osebje, sodni uradniki in njihovo osebje, administracija, jezikovno in dokumentarno osebje, oddelek za medije, vojaška policija in storitveno osebje, skupno vsaj 600 oseb.«32

27 11. junij 1946. Prav tam, str. 253.
28 Oziroma tako imenovani elephant in the (court)room, slon v sodni dvorani. Dyer, 2018, str. 54.
29 Sedgwick, 2012, str. 216, 230, 250, 251 in 253.
Navedena iztočnica ponuja nove možnosti za vpogled v delovanje Mednarodnega vojaškega sodišča za Daljni vzhod, vendar tak vidik tokrat ni v središču našega zanimanja.
30 University of Virginia Law Library [splet].
31 »Potrebujemo jih več. […] Najboljše, kar imamo, so japonski vojni ujetniki.« Tožilec Maurice Reed v dopisu v London, marec 1946. Sedgwick, 2012, str. 154.
32 Novozelandski tožilec James Robinson aprila 1947 v pismu ameriškemu tožilcu Walterju McKenzieju. McKenzie je pomagal pripraviti uvodne ugotovitve tožilstva, kljub temu pa je postal eden od mnogih, za katere so bile stresne delovne razmere v multilateralnih okoliščinah deljenja povojne pravice previsoka ovira, da bi postopku in situ prisostvovali do njegovega končanja; novembra 1946 se je vrnil domov. Od poletja tistega leta je menda bolj kot delu pozornost namenjal rekreaciji: potovanjem, golfu, ribolovu in nakupovanju. Prav tam, str. 68–69 in 102.

Prav tako je o razsežnostih stavbe,33 v kateri je zasedalo vojaško sodišče, v podobnem slogu Svetovalni komisiji za Daljni vzhod34 poročal glavni tožilec Keenan. Oba navedena opisa sta elementarna opora, ko si poskušamo predstavljati okolje, okoliščine in z njimi povezane organizacijske ter logistične zahteve preučevanega projekta.

Uradna jezika sodnega procesa sta bila angleški in japonski, ki sta bila izmenično konsekutivno tolmačena. Tolmačenje med angleščino in japonščino je po potrebi potekalo na treh stopnjah: tolmač – nadzorni tolmač – jezikovni arbiter.35 V zapisniku je skupno navedenih 27 različnih tolmačev, skozi celotno sojenje pa jih je delala le peščica; trije so bili prisotni na več kot 200 zasedanjih sodišča.
Jezikovno razsodišče je v celotnem obdobju sojenja naredilo 443 jezikovnih popravkov, kar v povprečju pomeni nekaj več kot enega dnevno.36

Naslovna misel tega razdelka je izjava britanskega tožilca Comyns Carra, naslovljena na London v začetnem stadiju oziroma pred začetkom sojenja, ko je tožilstvo pripravljalo obtožnico.37 Konec novembra 1946, ko je tožilstvo končalo predstavitev svoje vsebine, se je na pravni oddelek vrhovnega poveljstva s podobno prošnjo za nujen dostop do 25 prevajalcev38 in petih tolmačev obrnila obramba. Kot je navedla, ogromne količine dela ne bodo mogli opraviti brez dodatnega osebja. Če brez nadaljnjega odlašanja ne bi zmogli predelati enormne količine dokumentov oziroma če ne bi prejeli dodatne delovne sile, bi bila s tem Vrhovnemu poveljstvu, prav tako pa tudi sodišču in obrambi, povzročena velikanska sramota.39

Najpomembnejše opravilo jezikovnih strokovnjakov je bil prevod sodbe, ki je skoraj tri mesece potekal v močno zastraženi Rezidenci Hattori (ki je bila v času zasedanj sodišča namenjena namestitvi narodnostno mešane tožilske ekipe in kjer so se poglabljali neuradni odnosi ter sklepala nova mednarodna prijateljstva). Dokument s 300.000 besedami je prevajalo 35 prevajalcev (od tega 26 Japoncev in 9 Američanov japonskega rodu). Njihovo delo je nadziral profesor mednarodnega prava s Tokijske univerze. Poleg prevajalcev je v tem obdobju v Rezidenci Hattori bivalo tudi strojepisno in mimeografsko osebje40 (v nasprotju s prevajalci so bile to povečini ženske), storitveno osebje in vzdrževalci. Štiriindvajset ur dnevno jih je varovalo 30 vojaških policistov.41

33 »Gre za zelo veliko stavbo, ki je bila v celoti predana v uporabo Mednarodnemu vojaškemu sodišču za Daljni vzhod. Vrste hodnikov, ki se odpirajo ena za drugo, vodijo do soban, kjer na stotine delavcev pridno dela, pregleduje, preučuje in organizira dokaze ter jih pripravlja za predstavitev sodišču v primerni obliki.« (14. junij 1946). Prav tam, str. 102.
34 Far Eastern Commission (FEC). Ustanovljena na konferenci Sovjetske zveze, Velike Britanije in Združenih držav Amerike v Moskvi decembra 1945 z namenom »vključitve zavezniških držav, ki so aktivno sodelovale v vojni proti Japonski, v izdelavo politik za Japonsko« skozi »pripravo priporočil sodelujočih vlad pri izdelavi politik, načel in standardov, s katerimi bi Japonska lahko izpolnila obveznosti, ki so ji bile naložene z Instrumentom predaje«. Borton, 1947, str. 256.
35 Jezikovni arbiter je zaradi teatralnega učinka sedel v bližini tožilske ekipe, kjer je običajno oznanjal dognanja veččlanskega jezikovnega razsodišča v zvezi z rešitvijo nastalih (med)jezikovnih zagat v dvorani.
36 Takeda, 2007a, str. 57, 69, 158 in 194; Takeda, 2007b, str. 14; Watanabe, 2009, str. 59–65. Watanabe je skozi podrobno raziskovalno analizo zapisnika izpeljala zelo zanimive rezultate tolmačenja navzkrižnega zaslišanja glavnega obtoženca: v skupno šestih dneh pričanja so nadzorni tolmači s popravki posegli 35-krat v 845 tolmaških enot v angleški jezik in 161-krat v 1.178 tolmaških enot v japonski jezik. Jezikovno razsodišče je med zaslišanjem generala Hidekija Tōjōja odločalo o štirih primerih. Prav tam, str. 67, 70 in 72.
37 Vsi tolmači in nadzorni tolmači na Mednarodnem vojaškem sodišču za Daljni vzhod so bili (po narodnosti) Japonci, saj je bilo poznavanje japonskega jezika med nejaponci takrat preslabo, da bi lahko prevzeli tako pomembno zadolžitev, čeprav je ameriška vojska v pričakovanju konfrontacije z Japonsko že novembra leta 1941 začela skrivni obveščevalni projekt načrtnega nadgrajevanja znanja japonskega jezika med svojimi uslužbenci: v projektu je sodelovalo 58 oseb japonskega rodu in dva belca. Takeda, 2007b, str. 14.
38 30.000 strani dokaznega gradiva, med njimi 779 zapriseženih in drugih izjav, ter zaslišanje 419 prič, ki so izpovedovale med 818 zasedanji oziroma 417 sodnimi dnevi, je prevedlo ali pretolmačilo v angleški ali japonski jezik skupno 230 jezikovnih delavcev, od tega 175 na tožilski in 55 na strani obrambe. Zapisnik sojenja v angleškem jeziku obsega 48.488 strani. Takeda, 2007a, str. 50, 65 in 172. Na začetku sojenja je imelo tožilstvo 102 prevajalca, obramba pa 3. Futamura, 2005, str. 134.
39 Takeda, 2007a, str. 65.
40 Prav tam, str. 66 in 172.
41 Sado Matsuzawa, upravnik Rezidence Hattori, je v zasebnem pismu nekdanjemu ameriškemu tožilcu McKenzieju 30. julija 1948 skupno število ocenil na 180 ljudi. Sedgwick, 2012, str. 110.
42 »Tōjōjevo navzkrižno zaslišanje Keenana.« Asistent novozelandskega sodnika Evans sekretarju ministrstva za zunanje zadeve A. D. McIntoshu, 9. januar 1948. Prav tam, str. 70.
43 Christmas Humphreys, eden od britanskih članov tožilstva, ki je postal navdušenec nad japonskim budizmom zen, je v Via Tokyo, izpovedi osebne izkušnje, ki je izšla že na začetku leta 1948, navedel, da je bila obilica poslovilnih zabav ena od svetlejših točk sodelovanja na IMTFE. Prav tam, str. 33, 34, 69 in 100.
44 Ob zasebnem obisku obrambe je navedel, da je sojenje farsa, in ekipo pohvalil za dobro opravljeno delo, ob neki drugi priložnosti pa, da ne želi biti del tega [procesa]. Slednje vzbuja pomisleke, da bi tudi ameriški sodnik lahko vložil ločeno odklonilno mnenje, česar pa njegova vlada oziroma SCAP gotovo ne bi zlahka sprejela. Dyer, 2018, str. 40–41. Uradni razlog za Higginsov odhod je bila smrt njegovega naslednika in posledična potreba po vrnitvi domov. Sedgwick, 2012, str. 79.
45 Prav tam, str. 143.

2.4. »Tōjō's cross-examination of Keenan.«42

Veliko članov osebja Mednarodnega vojaškega sodišča za Daljni vzhod je zapustilo Tokio.43 Odhajali so iz različnih razlogov. Eden od prvih, vidnejših primerov je bil prvoimenovani ameriški sodnik John P. Higgins.44 Higginsov odhod junija 1946 je med drugim povzročil tudi zaplet glede imenovanja naslednika, saj za tak primer ni bila predvidena ustrezna pravna podlaga. Nekateri so odhajali zaradi razočaranja nad trajanjem postopkov, saj so pričakovali, da se bodo ti rešili prej. Pred začetkom sojenja je bilo namreč predvideno, da se bodo postopki končali v šestih mesecih.45 Drugi so zaradi nestrinjanja z načinom dela znotraj tožilstva – pa tudi znotraj obrambe – odstopali s svojih položajev in se vračali domov. Nekatere so odpoklicale institucije, ki so jih imenovale. Spet tretji so kljub nejevolji še naprej opravljali svoje naloge, čuteč dolžnost in odgovornost do države, ki jih je napotila na delo na Japonsko, ne glede na to pa so se hkrati pogajali za odpoklic.

Grožnje z odhodom oziroma prošnje za odpoklic so na uradnike svojih držav naslovili tudi člani jedra »tihe večine«, to so sodniki iz Kanade, Nove Zelandije in Velike Britanije. Čeprav so se na koncu vdali v formalno strinjanje z (ob)sodbo in jo v dobršnem delu pomagali tudi napisati, je bila pot do takega razpleta vse prej kot konsenzualna. »Prepričan sem, da obtoženi niso bili in ne morejo biti deležni pravičnega sojenja,«46 je v dopisu državnemu sekretarju na ministrstvu za zunanje zadeve Kanade sredi marca 1947 zapisal kanadski sodnik E. Stuart McDougall in dodal: »Ali nisem opravil svoje dolžnosti in imam pravico biti umaknjen […]?«47 V istem dopisu je predlagal tudi, naj Kanada sojenje zapusti, saj da opravičevanje maščevanja proti uspešnemu nasprotniku ne bo koristilo nikomur.
Tri mesece pozneje, ko odgovora še ni prejel – in ker sta novozelandski sodnik Erima Northcroft48 in predstavnik Velike Britanije William Patrick49 v istem obdobju na domače institucije naslovljene prošnje za razrešitev z dolžnosti prejela negativne odgovore – se je tudi sam odločil nadaljevati, če bi mu le zdravje dopuščalo.50

Nekateri sodelavci tokijskega procesa so odšli nenadoma, kot na primer ameriški tožilec John Fihelly, ki se je dve leti pripravljal na navzkrižno zaslišanje Hidekija Tōjōja.51 Takrat se je zgodil eden od večjih incidentov tega sodnega postopka, ki je postal povod za različne korespondence – tako uradne kot tudi zasebne – prav tako pa tudi za senzacionalistično obravnavo »novice leta« v ameriških medijih.52 Navzkrižno zaslišanje je začel vodja tožilstva Keenan, in ko je besedo želel predati Fihellyju, ga je ustavil predsednik Webb, ki ni dovolil odstopanja od postopkovnih pravil, čeprav se je obramba vnaprej

46 Dyer, 2018, str. 26.
47 Prav tam, str. 47–48.
48 Odgovor novozelandskega predsednika vrhovnega sodišča dober mesec pozneje se je glasil, da odstop ne pride v poštev, saj bi lahko obtožencem omogočil upravičeno pritožbo, Nova Zelandija pa se mora ogniti škandalom. Prav tam, str. 44.
49 Pravno mnenje britanskega zunanjega ministrstva se je glasilo, da je treba doseči obsodbo zločinov proti miru, saj bodo le tako načela, sprejeta v Nürnbergu, lahko potrjena in bodo s tem dobila legitimnost. Sellars, 2010, str. 1097. Odstop njihovega sodnika seveda ni prišel v poštev. Dyer, 2018, str. 48.
50 Prav tam, str. 47–48.
51 V obdobju, ki ga zajema obtožnica (1928–1945), je zasedal različna visoka mesta znotraj japonskega državnega in vojaškega aparata. V času napada na Pearl Harbour je bil kot predsednik vlade in minister za vojno v njenem samem vrhu. Bil je glavni osumljenec, »obraz« sovražnika.
52 New York Herald Tribune je objavil članek z naslovom We Made Tōjō a Hero, v katerem je bilo med drugim zapisano tudi, da je bilo v tokijski sodni dvorani mogoče videti, kako je Tōjō obesil Keenana. Sedgwick, 2012, str. 69.

strinjala s predlogom, da bosta Tōjōja zaslišala dva tožilca. Keenan nalogi, ki jo je zato moral izvesti sam, z vsebinskega vidika ni bil dorasel, Tōjō pa ga je v mnogoterih prvinah premagal tudi besedno.53 Kot izhaja iz naslova tega razdelka, je to zaslišanje postalo širše poznano, kar je v zadrego spravilo tudi vrhovnega poveljnika MacArthurja, ki je navzkrižnemu zaslišanju v osnovi nasprotoval, izhajajoč iz dejstva, da je Tōjō pred tem že podal zapriseženo izjavo. Tožilec Fihelly se po predmetnem zasedanju v sodno dvorano ni več vrnil; pred odhodom domov pa si je še na kratko oddahnil v Šanghaju.54

3. Sklep

V pričujočem prispevku sem predstavila nekatere vidike Mednarodnega vojaškega sodišča za Daljni vzhod, ki je po koncu druge svetovne vojne dve leti in pol v Tokiu sodilo japonskim vojnim hudodelcem. Čeprav je splošno sprejeto prepričanje, da je šlo za sodišče ZDA in držav Commonwealtha, ki naj bi imele ključno vlogo pri končni odločitvi, pa je bilo v prispevku prikazano, da so imele tudi številne druge države pomembno vlogo pri delovanju sodišča, pri čemer so bila v ozadju prisotna številna trenja. Skozi dopisovanje B. V. A. Rölinga z nizozemskim ministrstvom za zunanje zadeve sem predstavila del razdeljenega sodnega senata. V dopisih med britanskimi uradniki sem predstavila delovanje imperialne velesile pri usmerjanju sodelavcev sodišča in prič, ki so prihajale iz koloniziranih držav, hkrati pa tudi eno od prvih prisotnosti55 članov iz koloniziranih držav v globalni, multilateralno organizirani meddržavni zasedbi.
S prošnjo britanskega tožilstva za več delavcev, ki bodo pomagali pri reševanju medjezikovnih tegob, sem predstavila obseg ter količino osebja sodišča in opisala nekaj značilnosti jezikovnih delavcev, ki so posredovali pri komunikaciji med vpletenimi stranmi.

53 V avtobiografskem intervjuju, ki ga je izvedel njegov znanec Antonio Cassese, je nizozemski sodnik Röling navedel, da je Tōjō ob odgovarjanju uporabljal slog (v japonskem jeziku obstaja več ravni (ne)formalnosti govora), ki ga uporablja nadrejeni, ko govori s podrejeno osebo, v skladu s tradicionalnimi hierarhičnimi pravili kategorizacije njunega odnosa v družbi. Röling, 1994, str. 34. Navedbe mi ni uspelo potrditi. Pomislimo, da so k Tōjōjevi »zmagi na ravni besed« pripomogli tudi posegi nadzornih tolmačev, ki so izrečena tolmačenja luščili v njihovo pomensko bistvo. Glej opombo 36 zgoraj. Watanabe, 2009. Kajti, »Translations cannot be made from the one language into the other with the speed and certainty which can be attained in translating one Western speech into another. Literal translation from Japanese into English or the reverse is often impossible. To a large extent nothing but a paraphrase can be achieved, and experts in both languages will often differ as to the correct paraphrase.« (»Prevodov iz enega jezika v drugega se ne da opraviti s hitrostjo in gotovostjo, ki ju je mogoče doseči pri prevajanju enega zahodnega govora v drugega. Dobesedni prevod iz japonščine v angleščino ali obratno je pogosto neizvedljiv. Večinoma ni mogoče doseči drugega kot parafraziranje, pri čemer se mnenja strokovnjakov za oba jezika v zvezi s tem, katera parafraza je pravilna, pogosto razlikujejo.«) (Sodba Mednarodnega vojaškega sodišča za Daljni vzhod, 1948, str. 17; Zapisnik, str. 48429).
54 Lowe, 2007, str. 141–144; Sedgwick, 2012, str. 67–70.
55 Sedgwick, 2012, str. 250.
Z novozelandskim dopisom sem predstavila način, na katerega so prisotni dojeli navzkrižno zaslišanje glavnega osumljenca, ki ga je izvedel glavni tožilec, in kako se je o incidentu govorilo zunaj sodne dvorane.

Če poskušamo odmisliti pravico zmagovalcev,56 ki je eden od poglavitnih povodov za negativno vrednotenje obravnavanega vojaškega sodišča – čeprav naj bi bila slednja boljša od slabih možnosti tranzicijske pravičnosti, glede na to, da so se Angleži zavzemali za zunajsodno kaznovanje poraženih nasprotnikov s strelskim vodom57 – kritiki Mednarodnega vojaškega sodišča za Daljni vzhod pogosto poudarjajo opustitve, izpustitve preganjanj nekaterih kaznivih dejanj, celo amnestije. Mednje večinoma spada zlasti izvzem kazenske odgovornosti vrhovnega poveljnika japonskih vojaških sil, cesarja, v imenu katerega je bila podpisana japonska predaja. Kazenskega pregona pred Mednarodnim vojaškim sodiščem za Daljni vzhod ni bilo deležno japonsko delovanje v od leta 1895 anektiranem Tajvanu (Republika Kitajska, nekdanja Formoza), delovanje v od leta 1910 anektirani Koreji, razvoj kemičnega in biološkega orožja ter povezani poskusi v laboratorijih na severovzhodu Kitajske, geografsko in številčno obsežno spolno suženjstvo in gospodarsko izkoriščanje okupiranih območij s strani japonskih industrijskih konglomeratov.58

Zakaj je najzloglasnejše testiranje bakteriološkega in biološkega orožja na ljudeh in živalih v Enoti 731 in Enoti 100, ki sta delovali v s pomočjo Japonske osamosvojeni državi Mandžuriji, v okviru Mednarodnega vojaškega sodišča za Daljni vzhod ostalo neobravnavano, čeprav je bil glavni tožilec Keenan seznanjen z rezultati preiskave in ugotovitvami preliminarnih zaslišanj sodišča v Habarovsku?59 Poleg navedb, da sodba, ki je bila izdana ob koncu procesa v Habarovsku, vključuje tudi pričevanja, da sta enoti v Mandžuriji delovali na podlagi ukazov japonskega cesarja, nizozemski član senata Röling v že navedenem intervjuju izpove še, da je tožilstvo v Tokiu v
nekem trenutku celo predložilo dokument o biološkem vojskovanju. Njegovo obravnavo je predsednik senata Webb zavrnil, saj naj navedena tematika ne bi spadala v pristojnost sodišča (v okvirih, začrtanih s Statutom).60

Prav tako je z vidika sodobnega vojskovanja in njegovih posledic zelo zanimiva razsežnost pridelave in (pre)prodaje opija na območjih Kitajske pod japonsko okupacijo in Koreje, kot tudi precejšen obseg, ki je tej tematiki namenjen v besedilu sodbe. Beseda opij je v sodbi navedena 78-krat.

56 Minear, 1971.
57 Gallant, 2008, str. 5–8; Lowe, 2007, str. 138.
58 Futamura, 2005, str. 141–145; Kaufman, 2013, str. 758, 774–775 in 779–780; Sellars, 2010, str. 1093 in 1100.
59 Naj bo odgovor na postavljeno vprašanje – ko v zadnjih letih naše življenjske vzorce, gospodarstvo in prehranjevalne navade, vključno z določenimi pravicami, oblikujejo smernice za globalno obvladovanje širitve bakterij in virusov – v premislek prepuščen vsakemu od nas, ki živimo v teh razburljivih časih.
60 Röling, 1994, str. 48 in 49.

Za del besedil, s katerimi smo se spoznali v tem prispevku – čeprav je šlo za dopisovanje med vladnimi uradniki ali pa za dopisovanje med uradniki različnih držav – si izposodimo kar McIntoshevo oznako undiplomatic dialogue. To nas posledično privede tudi do zaključka, ki ga je ponudila že Totani,61 da razvoj sodnih postopkov v Tokiu le ni bil tako močno (iz)oblikovan s tožilsko agendo Združenih držav Amerike, kot je to običajno predvidevano. Po drugi strani pa površinski pregled možnega nabora tematik, ki v obtožnico niso bile vključene, razkrije še eno možnost, in sicer da ima vnaprej predvideni vsebinski zaključek še eno plat.

Ameriški asistentki odvetnikov obrambe Elaine Fischel je mati po vrnitvi povedala, da je videti izredno, izredno slabo. V pismih domov je večkrat govorila o tem, da si ne more prav oddahniti, ker veliko zadev poteka sočasno. Konec leta 1948 je v starosti 27 let doživela obsežno krvavitev.
Izkazalo se je, da gre za napredovano aktivno tuberkulozo, ki jo je pahnila v dolgotrajno okrevanje. »V to sojenje ne verjamem, sprašujem se, kaj je pravzaprav dokazalo. […] Naša generacija tega najverjetneje ne bo izvedela,«62 se je glasilo njeno (o)vrednotenje Mednarodnega vojaškega sodišča za Daljni vzhod skozi prvoosebno izkušnjo. Imela je še kako prav, saj je kmalu po koncu sojenja izbruhnila hladna vojna. Pretežno zato je sredi 50. let 20. stoletja Japonska postala ameriška zaveznica, najpomembnejša v tem delu sveta. Razprave o Mednarodnem vojaškem sodišču za Daljni vzhod so do ustanovitve mednarodnih kazenskih sodišč za nekdanjo Jugoslavijo in Ruando bolj ali manj poniknile. Razveseljuje pa dejstvo, da je preučevanje Mednarodnega vojaškega sodišča za Daljni vzhod v zadnjih dveh desetletjih dobilo nov zalet. Vsak avtor posebej znova in znova poudari dejstvo, da gre za premalo preučevan, spregledan del mednarodnega kazenskega prava. Tudi sami se nadejamo, da bomo v prihodnje z njega odstrnili še kakšno tančico.

61 Totani, 2010, str. 158.
62 Sedgwick, 2012, str. 343.

Literatura

Babovic, A. (2019) The Tokyo Trial, Justice, and the Postwar International Order. Singapur: Palgrave Macmillan.
Borton, H. (1947) 'United States Occupation Policies in Japan Since Surrender', Political Science Quarterly, let. 62, št. 2, str. 250–257 (dostop: 23. avgust 2020).
Cheng, Z. (2019) 'Chapter 4: The Declaration of Judgment' v A History of War Crimes Trials in Post 1945 Asia-Pacific. Singapur: Palgrave Macmillan.
Cohen, D. (2020) 'The "President's Judgment" and Its Significance for the Tokyo Trial' v Dittrich, V. E., in drugi (ur.) The Tokyo Tribunal: Perspectives on Law, History and Memory. Bruselj: Torkel Opsahl Academic Epublisher, str. 251–274 (dostop: 26. januar 2021).
Dower, J. D. (1999) Embracing Defeat: Japan in the Wake of World War II. New York: W. W. Norton.
Dyer, L. N.
(2018) Victor's Justice, Victim's Justice: The Role of 'Class A' War Crimes in Shaping the Legacy of the Tokyo Tribunal. Ontario: Queen's University Kingston (magistrsko delo) (dostop: 3. december 2019).
Futamura, M. (2005) Revisiting the 'Nuremberg Legacy': Societal Transformation and the Strategic Success of International War Crimes Tribunals. London: King's College London (doktorska disertacija) (dostop: 3. december 2019).
Gallant, K. S. (2008) 'Chapter 3: Nuremberg and Tokyo' v The Principle of Legality in International and Comparative Criminal Law. Cambridge: Cambridge University Press (dostop: 3. december 2019).
Higurashi, Y. (2022) The Tokyo Trial: War Criminals and Japan's Postwar International Relations. Tokio: Japan Publishing Industry Foundation for Culture (JPIC).
Kaufman, Z. D. (2013) 'Transitional Justice for Tōjō's Japan: The United States Role in the Establishment of the International Military Tribunal for the Far East and other Transitional Justice Mechanisms for Japan after World War II', Emory International Law Review, let. 23, št. 1, str. 755–798 (dostop: 25. maj 2020).
von Lingen, K. (2018) Transcultural Justice at the Tokyo Tribunal: The Allied Struggle for Justice, 1946–48. Leiden in Boston: Brill.
Lowe, P. (2007) 'An Embarrassing Necessity: The Tokyo Trial of Japanese Leaders, 1946–48' v Melikan, R. A. (ur.) Domestic and International Trials, 1700–2000: The Trial in History – Volume II. Manchester: Manchester University Press, str. 137–156 (dostop: 25. maj 2020).
Minear, R. H. (1971) Victors' Justice: The Tokyo War Crimes Trial. Princeton: Princeton University Press.
Nakazato, N. (2016) Neonationalist Mythology in Postwar Japan: Pal's Dissenting Judgment at the Tokyo War Crimes Tribunal. Lanham [etc.]: Lexington Books.
Röling, B. V. A. (1994) The Tokyo Trial and Beyond: Reflections of a Peacemonger. Cambridge, UK: Polity.
Sedgwick, J.
(2012) The Trial Within: Negotiating Justice at the International Military Tribunal for the Far East. Vancouver: The University of British Columbia (dostop: 5. december 2019).
Sellars, K. (2010) 'Imperfect Justice at Nuremberg and Tokyo', European Journal of International Law, let. 21, št. 4, str. 1085–1102 (dostop: 3. december 2019).
Sung, Y. C. (1967) 'The Tokyo War Crimes Trial', The Quarterly Journal of the Library of Congress, let. 24, št. 4, str. 309–318 (dostop: 23. avgust 2020).
Takeda, K. (2007a) Sociopolitical Aspects of Interpreting at the International Military Tribunal for the Far East (1946–1948). Španija: Universitat Rovira i Virgili (doktorska disertacija) (dostop: 3. december 2019).
Takeda, K. (2007b) 'Nisei Linguists during World War II and the Occupation of Japan', ATA Chronicle, let. 36, št. 1, str. 14–17 (dostop: 5. avgust 2020).
The International Military Tribunal for the Far East (1948) Judgment of 4 November 1948 (dostop: 15. marec 2020).
The International Military Tribunal for the Far East (1948) The IMTFE Charter (dostop: 15. marec 2020).
The International Military Tribunal for the Far East (1948) Separate Opinion of the President (dostop: 29. avgust 2020).
Totani, Y. (2010) 'The Case Against the Accused' v Tanaka, Y., McCormack, T., in Simpson, G. (ur.) Beyond Victor's Justice? The Tokyo War Crimes Trial Revisited. Leiden in Boston: Martinus Nijhoff Publishers, str. 147–161 (dostop: 3. december 2019).
Totani, Y. (2020) 'Individual Responsibility at the Tokyo Trial' v Dittrich, V. E., in drugi (ur.) The Tokyo Tribunal: Perspectives on Law, History and Memory. Bruselj: Torkel Opsahl Academic Epublisher, str. 155–175 (dostop: 26. januar 2021).
University of Virginia Law Library. Dr. Ba Maw is Handed Over to British Here; Is Returning to Burma News Article (dostop: 30. marec 2024).
Ushimura, K.
(2007) Pal’s „Dissentient Judgment“ Reconsidered: Some Notes on Postwar Japan‘s Responses to the Opinion. Japan Review, št. 19, str. 215–223, (dostop: 25. maj 2020). Varadarajan, L. (2015) The Trials of Imperialism: Radhabinod Pal’s Dissent at the Tokyo Tribunal. European Journal of International Relations, št. 21, str. 793–815, (dostop: 23. avgust 2020). War Crimes Documentation Initiative (1948). The President’s Judgement, (dostop: 25. marec 2024). Watanabe, T. (2009) ‘Interpretation at the Tokyo War Crimes Tribunal: An Overview and Tojo’s Cross-Examination’, Traduction, terminologie, rédaction, let. 22, št. 1, str. 57–91, (dostop: 3. december 2019). 125 © The Author(s) 2024 Scientific Article DOI: 10.51940/2024.1.125-156 UDC: 347.85:341.229:347:355 Anže Mediževec* The Right of Self-defence in the Earth’s Orbit Abstract The increasing presence of non-State actors in space raises a plethora of legal questions, including those related to the use of force, especially in the context of the right of self-de- fence. The first aim of this article is to explain the legal basis for resorting to force in the exercise of self-defence in space, specifically in the Earth’s orbit. The second goal is to contribute to the legal framework concerning how States may exercise self-defence against attacks committed by non-State actors in space. In this regard, the author distinguishes between the rules of attribution of the use of force to a State and the “unwilling or unable” doctrine. It is suggested that the latter may be transposed into the space domain, mutatis mutandis, by a re-conceptualisation of the notion of a State’s “territory”, shifting from its sovereignty-based foundation towards State jurisdiction. 
Further on, in the realm of the rules of attribution of conduct to a State, the author compares the ARSIWA rules of State responsibility with the strict responsibility regime of the Outer Space Treaty (OST), to clarify which system applies when addressing State responsibility for the use of force by non-State actors in space. Three solutions are offered in this regard. The first rests on the premise that space law, specifically Article VI OST, may be seen as lex specialis in relation to ARSIWA. The second supports the view that the general rules of State responsibility in ARSIWA should apply, as they are secondary rules of international law, whereas Article VI OST encompasses primary rules. The third approach offers a combined reading of Article VI OST and ARSIWA, based on a systematic interpretation of the norms contained therein, to preserve the purpose of the secondary rules on State responsibility. Key words Outer Space Treaty, Article IV OST, Article VI OST, Peaceful Purposes, National Activities, Self-defence in Space, Strict Responsibility Regime. * LL.B. & LL.M. (University of Ljubljana), LL.M. (College of Europe/Collège d’Europe), Faculty of Law, University of Ljubljana, Department or International Law, Teaching Assistant and Researcher, E-mail: anze.medizevec@pf.uni-lj.si. Zbornik znanstvenih razprav – letnik LXXXIV, 2024 LjubLjana Law Review – voL. LXXXiv, 2024 • pp. 125–156 ISSN 1854-3839 • eISSN: 2464-0077 126 Zbornik znanstvenih razprav – letnik LXXXIV, 2024 LjubLjana Law Review, voL. LXXXiv, 2024 “We will engage terrestrial targets somedayships, airplanes, land targets—from space. We will engage targets in space, from space […] [The] missions are already assigned, and we’ve written the concepts of operations”. Gen. Joseph W. Ashy, United States Air Force (USAF), 1996 1. 
A New Era in the Usage of Space

Since 12 April 1961, when Yuri Gagarin ventured into space aboard Vostok 1 as the first human in history—opening new dimensions of human ingenuity and broadening the horizons of the imaginable—a plethora of questions on space has remained unanswered. While scientists are searching for the elements of a complete theory of the universe, the so-called quantum theory of relativity,1 to provide us with an explanation of our surroundings, legal scholars and practitioners are trying to define the contours of the juris spatialis internationalis. As explained by General Joseph W. Ashy of the USAF back in 1996, States globally seek to develop and improve their space-related capabilities in the area of the use of force, with theoretical models of such usage being perfected as a matter of national strategic importance.2 It is noteworthy that, upon establishing the United States Space Force (USSF), former President of the United States of America (USA) Donald J. Trump stated that "Space is the new warzone".3 With this, the USA's plans for the militarisation of space became visible.4 The creation of the USSF naturally did not occur in a vacuum: it is said that this establishment is a direct response to the perceived threats arising from the Great Power Competition (GPC) in the space domain. Here, the aim is to shape the USSF in such a way as to ensure US military supremacy in space to deter, and if need be, prevail over competitors in the era of the GPC.5 Besides the USA, the Russian Federation (Russia) and the People's Republic of China (China) are seen as major competitors building up their space-related capabilities in the use of force, with Russia having established its own Space Forces on 10 August 1992, and China forming the People's Liberation Army Aerospace Force rather recently, on 19 April 2024.6 The deployment of gear into space that could be used in the exercise of force is no longer a purely futuristic idea.
At the outset, it is important to distinguish the terms militarisation and weaponisation of space when discussing a possible resort to force in space. Although both terms are intertwined and a clear-cut distinction is difficult to ascertain, militarisation usually implies the presence of military assets in space, while weaponisation refers to the actual usage of those means for military activities.7 In contemporary debates on the weaponisation of space, the term "space weapons" most commonly refers to: a) any weapons (whether land-, sea-, or air-based) that can damage a satellite or interfere with its functioning;8 and b) any space-based weapons which are intended to attack objectives (i.e., targets) in space or on the ground.9 Throughout this article, whenever space weapons are mentioned, the term reflects these definitions.

Currently, the possibility of anti-satellite use of force is, worryingly, no longer a distant prospect but rather an ever-closer reality. Concerns have already been expressed, for example, regarding the Russian "nesting doll" satellite, possibly capable of performing kinetic attacks and serving as a model of a counterspace weapon to attack other satellites in low Earth orbit.10 Interestingly, Russia deployed this presumed new counterspace weapon into exactly the same orbit in which a U.S. government satellite operates.

1 Hawking, 2016, pp. 187–204.
2 Ramey, 2018, p. 193.
3 BBC, 2019.
4 Schladebach, 2020, pp. 60–61.
5 United States Space Force, 2024; Savoy & Staguhn, 2022, p. 1.
6 China Aerospace Studies Institute, 2024, p. 2.
Other weapons are also in development, such as lasers to dazzle satellites, as well as "grappling" satellites that can be used to grab and tow other satellites out of their orbit.11 Thus, the concept of space warfare as it is currently most widely understood gravitates around the destruction of unmanned military assets in space, most commonly satellites in the Earth's orbit.12 With ongoing technological developments in the field, other military tactics will likely follow, continuously expanding the scope of the use of force.

Historically, space was considered State-ruled, but this is beginning to change rapidly. Nowadays, several actors are active in space, including manufacturers, launch providers and spacecraft operators. For example, in the USA, numerous corporations (e.g., Space Exploration Technologies Corporation (SpaceX), Virgin Galactic, and Lockheed Martin Commercial Launch Services) have been granted licences for space launches, with SpaceX alone having received 369 licences.13 The presence of private parties in space, together with the activities they undertake there, raises a plethora of questions.14 One of those is the potential use of force by private actors in space—whether via an anti-satellite system they operate or through the use of kinetic energy projectiles against another space object—be it following the orders of a State or of their own accord.

It is true that, until now, no clear case of the use of force in space has been established, whether by a State or a non-State actor. However, given the increasing global reliance on space systems, as well as the rising militarisation and weaponisation of space, especially in the Earth's orbit, an evolution of space into a distinct theatre of military operations appears highly likely, rendering the study of the use of force in space a pressing necessity.15

In light of the above, this article aims to tackle a topic of growing importance in the space domain: the use of force, especially in the context of self-defence, and the challenge of attributing responsibility for the use of force by non-State actors to a given State in space. Since, for mostly technical and scientific reasons, military development is limited to the Earth's orbit—at least for the moment and in the near future—the analysis undertaken shall likewise be restricted to the Earth's orbit, leaving the use of force in other areas of outer space only marginally addressed. The first aim of this article is to explain the legal basis for the resort to force in the exercise of self-defence in space, specifically in the Earth's orbit. The second goal is to contribute to the framework concerning how States may exercise self-defence against attacks committed by non-State actors in space. In this regard, the article distinguishes between the ARSIWA model of responsibility of States for the use of force committed by non-State actors and the "unwilling or unable" doctrine. In the realm of the rules of attribution of conduct to a State, the article delves into the comparison between the ARSIWA16 model of State responsibility and the strict responsibility regime of the Outer Space Treaty (OST),17 to illuminate which system applies when addressing the use of force by non-State actors in space.

2. Resorting to the Use of Force in Space

At the outset, it is beneficial to first briefly sketch the international space law rules relevant to the topic of this article. The analysis that unfolds throughout the article will be based on this legal framework.

7 Tripathi, 2013, pp. 193–194; Krepon & Clary, 2003, pp. 29–30.
8 The latter are also known as anti-satellite (or ASAT) weapons.
9 In this group, space-based ballistic missile defence interceptors and ground-attack weapons are included. See: Boothby, 2017, pp. 206–213; Wright et al., 2006, p. 1.
10 Hadley & Gordon, 2024.
11 Ibid.
12 Ramey, 2018, p. 190.
13 Federal Aviation Administration, 2023.
14 One challenge is to define the scope of liability of private operators in case catastrophic accidents occur in space resulting in death. Precisely the lack of a sufficiently clear and coherent international legal framework on the civil liability of private operators is one of the main reasons why States have adopted their own national laws in this area, with international organisations following their lead, such as the European Union (EU) with its upcoming EU Space Law; see: Ziemblicki & Oralova, 2021, pp. 1–2; Stefoudi, 2024. As regards the space-related activities of the EU, see: Cross, 2021, pp. 31–46.
15 Ramey, 2018, p. 205. See also: Buchan, 2023.
16 Articles on Responsibility of States for Internationally Wrongful Acts (ARSIWA), Annex to General Assembly resolution 56/83 of 12 December 2001, as corrected by document A/56/49(Vol. I)/Corr.4.
17 Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, including the Moon and Other Celestial Bodies (the Outer Space Treaty, OST), Washington, Moscow & London, 27 January 1967, 610 UNTS 205.

Already in the 1960s, the United Nations General
Assembly (UNGA) adopted several resolutions tackling what was to become the legal regime for space activities undertaken by States or private entities.18 For example, in Resolution 1962 (XVIII), entitled Declaration of Legal Principles Governing the Activities of States in the Exploration and Use of Outer Space,19 the UNGA Member States concluded that the activities of States in the use of outer space must be carried out in accordance with international law, most notably the UN Charter, in the interest of international peace and security.20 Such resolutions were considered as an expression of instant custom shortly after their adoption.21

The legal framework of international space law, composed of its most significant treaties, dates back to 1967 with the OST,22 followed by the Convention on International Liability for Damage Caused by Space Objects,23 the Convention on Registration of Objects Launched into Outer Space,24 the Agreement on the Rescue of Astronauts, the Return of Astronauts and the Return of Objects Launched into Outer Space,25 and the Agreement Governing the Activities of States on the Moon and Other Celestial Bodies (Moon Agreement).26 Adopted in the Annex of UNGA Resolution 2222 (XXI), the OST, which was largely based on UNGA Resolution 1962 (XVIII), is generally considered to provide the principal regulatory mechanism for human activities in space.
The subsequent treaties mostly address specific issues of space law, thus clarifying the scope of the OST via a systematic interpretation of the norms contained therein.27 Since the OST represents the foundation of international space law, it naturally follows to begin assessing the possibilities for the use of force in space within the contours of the OST.28 It is argued by some legal scholars that, by virtue of Article I of the OST—which states that the exploration and use of outer space "shall be carried out for the benefit and in the interests of all countries"—virtually any form of military activity is prohibited, however exercised and for whichever purpose.29 Indeed, already in its Preamble, the OST recognises that UNGA Resolution 110 (II) of 3 November 1947, condemning any form of propaganda designed or likely to provoke or encourage any threat to the peace, breach of the peace or act of aggression, is applicable to outer space.

18 Yun, 2018, pp. 1–3.
19 UNGA Resolution 1962 (XVIII), 13 December 1963.
20 Ibid., § 4.
21 Shaw, 2017, p. 405.
22 Washington, Moscow & London, 27 January 1967, 610 UNTS 205.
23 Washington, Moscow & London, 29 March 1972, 961 UNTS 187.
24 New York, 12 November 1974, 1023 UNTS 15.
25 Washington, Moscow & London, 22 April 1968, 672 UNTS 119.
26 New York, 5 December 1979, 1363 UNTS 3.
27 Borgen, 2020.
28 Martinez et al., 2019, pp. 28–29; Delegation of the European Union to the United Nations in New York, 2023.
29 Shaw, 2017, p. 406 and the references listed in fn. 343.
Nevertheless, when it comes to regulating the use of force, Article IV of the OST is widely regarded as the backbone of the legal rules on the use of force in space.30 While it is true that, during the genesis of the OST, the idea of a peaceful usage of space was stressed on several occasions, one must distinguish between the text of the two paragraphs of Article IV to understand how the OST regulates the reliance on force in the space domain.31

2.1. The Prohibition of Weaponisation of Space as per OST

According to Article IV(1), States Parties to the OST undertake not to place in orbit around the Earth any objects carrying nuclear weapons or any other kinds of weapons of mass destruction (WMD). States Parties are also prohibited from installing such weapons on celestial bodies or from stationing such weapons in outer space in any other manner. The norm in paragraph 1 directly incorporates UNGA Resolution 1884 (XVIII), adopted unanimously on 17 October 1963, which called for such prohibitions. Hence, paragraph 1 unequivocally prohibits the placement of WMD in space. However, the norm does not prohibit States from placing weapons other than WMD into the Earth's orbit. This appears to be the first material limitation of the OST on the question of the militarisation and weaponisation of space, alongside the absence of a definition of what constitutes a WMD.32 Furthermore, nuclear weapons and other WMD which are not stationed in the Earth's orbit but merely pass through it also evade a clear-cut prohibition.
By virtue of this, weapons such as intercontinental ballistic missiles with a nuclear warhead (with an arc-like trajectory) and orbital bombs also fall outside the prohibition of Article IV(1).33 This legal loophole was historically addressed to a certain extent via treaties on the limitation of armaments, such as the SALT-II Treaty of 1979,34 Article VII of which states that "fractional orbital missiles" shall be removed and destroyed.35 Thus, worryingly, as per the text of Article IV(1) OST, nuclear weapons and WMD may still venture into the Earth's orbit, albeit only temporarily for the purposes of passage.36 On the other hand, stationing conventional weapons in space appears prima facie not to contravene the OST. Since conventional space weapons can be divided into three main categories (Earth-to-space, space-to-space, and space-to-Earth), with a further subdivision into kinetic and non-kinetic weapons with either temporary or permanent effects,37 the choice of weaponry not expressly prohibited by the OST seems rather broad.

To mitigate these concerns at least to a certain extent, Article IV(2) OST provides that the Moon and other celestial bodies must be used by the States Parties "exclusively for peaceful purposes". In this regard, the establishment of military bases, installations and fortifications, the testing of any type of weapons, as well as the conduct of military manoeuvres on celestial bodies, is directly prohibited, as it contravenes the idea of usage for a "peaceful purpose". On the other hand, relying on military personnel for scientific research or for any other "peaceful purposes" is expressly allowed. Thus, for the Moon and other celestial bodies, there exists an unambiguous prohibition of militarisation within the OST. The norm of Article IV(2) OST is also applicable, among others, to the question of the use of force in the Earth's orbit. This is the case since celestial bodies, mostly asteroids, may enter that orbit, while military activities technically could be exercised on such bodies if the need arose.38 In light of the above, a crucial question arises as regards the general weaponisation of outer space: how is the term "utilisation for peaceful purposes" to be understood?39

2.2. Utilisation of Space for "Peaceful Purposes" as per OST

The interpretation of the term "peaceful purposes" in the OST indeed forms one of the centrepieces of the discussion on the use of force in space. As explained by Su, it used to be understood that it might only fit the purpose of Article IV to interpret "peaceful" as "non-military".40 According to this notion, any military operation in space violates Article IV. This interpretation, however, does not sufficiently appreciate the distinction between militarisation and weaponisation of space. Schladebach reasons convincingly that Article IV must be interpreted41 in light of Article III OST.42 Article III provides that States Parties to the OST must exercise their activities in the exploration and use of outer space, including the Moon and other celestial bodies, in accordance with international law, including the UN Charter—all in the interest of maintaining international peace and security. By virtue of this invocation of the UN Charter, the OST makes a direct reference to the norms contained therein, which is especially pertinent in the context of the law of treaties. Article 31(3)(c) of the Vienna Convention on the Law of Treaties (VCLT) provides that any relevant rules of international law applicable in the relations between the parties must be taken into account when interpreting a treaty.43 Thus, by virtue of the reference to the UN Charter in Article III OST, this provision of the OST should be interpreted, where relevant, in the context of the UN Charter.

30 Lee, 2003, p. 93.
31 Dembling & Arons, 1967, pp. 425 and 432–435.
32 Ramuš Cvetkovič, 2024, p. 62.
33 Schladebach, 2020, p. 61.
34 Treaty Between the United States of America and The Union of Soviet Socialist Republics on the Limitation of Strategic Offensive Arms, Together With Agreed Statements and Common Understandings Regarding the Treaty, Signed at Vienna, 18 June 1979.
35 Schladebach, 2020, pp. 60–62.
36 Nevertheless, certain limitations on the usage of nuclear weapons stemming from general international law still exist, see for example: Legality of the Threat or Use of Nuclear Weapons, Advisory Opinion, I.C.J. Reports 1996 (Legality of the Threat or Use of Nuclear Weapons), p. 226, especially paras. 31, 79; Kütt & Steffek, 2015, pp. 402–407.
37 Gleason & Hays, 2020, p. 2; For more information on space weapons, see: Harrison, 2020.
38 Harris & D'Abramo, 2015, p. 302; Santos, 2021.
39 Sandeepa & Kiran, 2009, pp. 210–211.
40 Su, 2010, p. 258.
41 Gardiner, 2010, pp. 177–188 and 278–287.
42 Schladebach, 2020, p. 63.
Such an integrated understanding of the norms governing the use of force in space, linking the OST with the provisions of the Charter, was confirmed in the Declaration on International Cooperation in the Exploration and Use of Outer Space for the Benefit and in the Interest of All States, Taking into Particular Account the Needs of Developing Countries.44 In Article I, the Declaration stipulates that the use of outer space for peaceful purposes must be conducted in accordance with a joint interpretation of the provisions of international law, the UN Charter, and the OST.45 A systematic interpretation of the "peaceful purposes" diction of Article IV of the OST is, therefore, a prerequisite, since the OST must be interpreted and applied in the broader context of international law, including the UN Charter.46

Since Article III OST provides for the application of the UN Charter in relation to the notion of "peaceful purposes" in Article IV, the rules on the use of force of the UN Charter become especially relevant. As is widely known, States bear international responsibility in line with the prohibition in Article 2(4) of the UN Charter, and customary international law on the unlawful use of force or the threat of its use.47 Nevertheless, in a few situations, the wrongfulness of such usage is precluded—one of them being self-defence.48 Hence, Article III OST, via the UN Charter as a linking element, provides for the application of Article 51 on self-defence in the context of the OST, together with its customary international law elements.49 The cross-reference of Article III OST to the norm of Article 51 UN Charter on self-defence shows that the use of force is indeed possible in space, encompassing both the Earth's orbit and outer space in general, via the exercise of self-defence. Naturally, other conditions must be satisfied if a State wishes to invoke self-defence as a circumstance precluding wrongfulness, such as the customary international law conditions of necessity and proportionality.50 Here, the particularities of space warfare in the Earth's orbit ought to be considered. To indicate just one major challenge: as of 27 December 2024, there are 11,463 space objects orbiting Earth.51 Self-defence in the Earth's orbit could thus affect the assets of other States, as well as cause harm to the space environment. Both factors must be addressed in any assessment of the criteria of necessity and proportionality of self-defence undertaken in the space domain.52 Exercising self-defence against an attack involves a response with military force to counter the attack.

43 Vienna, 23 May 1969, 1155 UNTS 331; d'Aspremont, 2012, pp. 147–148.
44 Adopted in the Annex of UNGA Resolution 51/126, 13 December 1996.
45 Similarly, Section 3, titled "Peaceful purposes", of the recently adopted Artemis Accords stipulates that: "The Signatories affirm that cooperative activities under these Accords should be exclusively for peaceful purposes and in accordance with relevant international law": the Artemis Accords, Principles for Cooperation in the Civil Exploration and Use of the Moon, Mars, Comets, and Asteroids for Peaceful Purposes, 13 October 2020. See: Bartóki-Gönczy & Nagy, 2023, p. 888; Smith, 2021, pp. 679–680.
46 Lee, 2003, p. 94.
47 Armed Activities on the Territory of the Congo (Democratic Republic of the Congo v. Uganda), Judgment, I.C.J. Reports 2005 (Armed Activities on the Territory of the Congo), p. 168, § 148.
48 Wilmshurst, 2005, p. 4.
49 Merkouris, 2017, p. 138.
Thus, given that the right of self-defence is permitted in space, the term "peaceful purposes" in Article IV OST must not be understood as "non-military" or "without military force", but rather as "without aggression" or "without aggressive force".53 This is the case since, in light of Article 2(4) of the UN Charter, lawful self-defence precludes the wrongfulness (i.e., the "aggressiveness") of the use of force.54 An interpretation of the term "peaceful purposes" as "without military force" would thus deprive the reference in Article III OST to the UN Charter—especially Article 51 on self-defence—of its effect.55 Hence, if a State Party to the OST uses force in space within the bounds of lawful self-defence in accordance with the prerequisite conditions, such use of force is not prohibited in the Earth's orbit, nor in any other area of outer space. By virtue of this, the militarisation and weaponisation of space are precluded except for lawful acts of use of force in line with the UN Charter (which also includes the Chapter VII powers of the UN Security Council). An analogous conclusion can be made for self-defence as a norm of customary international law, given that Article III expressly refers to this source of international law, which continues to have its own separate existence despite being incorporated into Article 51 of the UN Charter.56

50 Oil Platforms (Islamic Republic of Iran v. United States of America), Judgment, I.C.J. Reports 2003 (Oil Platforms), p. 161, §§ 43 and 76; Kretzmer, 2013, pp. 239–240 and 273.
51 Orbiting now, 2024.
52 Similar to the impact in terms of environmental damage caused on Earth by the use of force, see: Legality of the Threat or Use of Nuclear Weapons, p. 226, §§ 28–33.
53 Tepper, 2024, p. 481; see also: Cheng, 2000, p. 107. For a different perspective, see: Friman, 2005.
54 ARSIWA, Art. 21: Self-defence; cf. Kreß, 2016, p. 412.
55 Schladebach, 2008, p. 220.
56 Military and Paramilitary Activities in and against Nicaragua (Nicaragua v. United States of America), Merits, Judgment, I.C.J. Reports 1986 (Military and Paramilitary Activities in and against Nicaragua), p. 14, § 178.

3. Self-Defence in the Earth's Orbit: The Rules of Attribution for the Use of Force in International Space Law

Until this point, it was established that, pursuant to Article IV of the OST, the use of force in space, including the Earth's orbit, is not prohibited if exercised in lawful self-defence. Naturally, as with all circumstances that may preclude the international wrongfulness of a State's act, one must scrutinise which standards apply to such self-defence for a State to successfully invoke it.57 This section will focus on the characteristics of the attack itself, against which self-defence can be exercised, specifically from the viewpoint of attribution of the use of force. Much ink has already been spilled on this question, yet the challenging part is whether it is suitable simply to transpose, by virtue of analogy, the existing rules on State responsibility in the realm of the use of force on Earth to the space domain.

According to the jurisprudence of the ICJ, self-defence is an inter-State question, with one State bearing responsibility for the attack and another State being subjected to the use of force. For the responsibility of a State to arise and for self-defence subsequently to be possible, the use of force must, therefore, be attributed to a given State—even if exercised by non-State actors.
The latter derives from the Wall Advisory Opinion, in which the Court opined that self-defence, as an inherent right of States, only becomes relevant if armed attacks are imputable to a foreign State.58 By this reasoning, a State is barred from using force in self-defence against another State if the threshold for attributing the initial armed attack to that State is not met.59 This refers to the classic, State-centric concept of an armed attack, which was upheld by the Court in the later case between the Democratic Republic of the Congo (DRC) and Uganda.60 In that case, the Court held on to the requirement that the notion of self-defence requires the involvement of another State—thus following the "State armed attack" concept. In its reasoning, the Court stated that it had not found satisfactory proof of the involvement, direct or indirect, of the Government of the DRC in the alleged attacks. Further, as the attacks did not emanate from armed bands or irregulars sent by the DRC or acting on behalf of the DRC, irrespective of the potency of the attack, they "remained non-attributable to the DRC".61 Consequently, Uganda was barred from successfully invoking self-defence to preclude the wrongfulness of its exercise of armed force against the DRC.

57 Farhang, 2013, p. 3.
58 Legal Consequences of the Construction of a Wall in the Occupied Palestinian Territory, Advisory Opinion, I.C.J. Reports 2004 (Legal Consequences of the Construction of a Wall in the Occupied Palestinian Territory), p. 136, § 139; Oil Platforms, p. 161, § 51: "[…] the United States has to show that attacks had been made upon it for which Iran was responsible".
59 Akande & Milanovic, 2015; O'Connell, 2013, pp. 380 and 383; Tladi, 2013, p. 572.
60 Starski, 2015, p. 462.
61 Ibid.; Armed Activities on the Territory of the Congo, p. 168, § 146.
The premise of the ICJ is thus that successful reliance on Article 51 of the UN Charter requires the initial use of force in the form of an armed attack against the assets of a State to be attributable to another State. On the other hand, in the past 23 years, a new debate has emerged on whether non-State actors per se (i.e., without their acts being attributed to a State) may also be regarded as capable of launching an armed attack in line with the dictum of Article 51 of the UN Charter.62 This newly emerged State practice appears to dispense with the requirement of attributing the use of force to a State.

Given the rapid development in space exploration and possible militarisation, a situation in which non-State actors per se exercise force against the assets of another State in space appears plausible. Thus, the concept of self-defence of States against attacks by non-State actors, whether attributable to a State or not, requires further examination in space law.

3.1. Self-Defence of States Against Non-State Actors: The Challenge of (Non-)Attribution

Globally, several States argue that the ordinary understanding of self-defence as recognised in the UN Charter is overly restrictive for the modern world.63 Within this traditional framework, the paramount criterion was attribution—the act of attributing an attack to a given State, whether force was utilised by State organs or non-State actors. The State to which force could be attributed would be seen as responsible for the use of force, rendering it a target of potential self-defence by the victim State.64 Nevertheless, since the attack of 11 September 2001 on the World Trade Center in New York, some States have expressed doubts over whether such an understanding of self-defence is still appropriate for the contemporary challenges of transnational terrorism.
For example, it was widely claimed that the USA acted in self-defence against Al-Qaida in 2001, while Iraq, France, the United Kingdom, and the USA asserted the right of self-defence (individual or collective) in their campaign against the Islamic State of Iraq and Syria (ISIS) in Syria.65 The situation in the immediate aftermath of 11 September 2001 seemed to suggest that the right of self-defence under Article 51 of the UN Charter is available against attacks committed by non-State actors without a need to attribute the actions of the non-State actor to a particular State.

62 For a discussion on the topic, see: O'Connell, Tams & Tladi, 2019; Brunnée & Toope, 2016; Korošec & Tekavčič Veber, 2016.
63 Wood, 2013, p. 355; see also the 2002 National Security Strategy with the references to "preventive action" therein, in: The White House, 2002.
64 Legal Consequences of the Construction of a Wall in the Occupied Palestinian Territory, p. 136, § 139; Military and Paramilitary Activities in and against Nicaragua, p. 14, § 195.
65 Paddeu, 2017, pp. 2–4; see also the summary record of the UNSC meeting of 20 November 2015, S/PV.7565, at p. 2 (France), 4 (USA), 9 (United Kingdom). See also the letters sent to the UNSC: UN Docs. S/2014/695 (USA); S/2014/851 (United Kingdom); S/2015/745 (France); Hakimi, 2015. See also several blog posts, e.g.: Ohlin, 2014; van Steenberghe, 2015.
After the Twin Towers attacks, the Security Council adopted Resolutions 1368 (2001) and 1373 (2001), recognising "the inherent right of individual or collective self-defence in accordance with the Charter".66 As indicated above, State practice, including that of the members of the North Atlantic Treaty Organization, the members of the Organization of American States and others, indeed supported such a right, in an apparent rift with the jurisprudence of the ICJ.67 In this context, the authors of the Chatham House study titled Principles of International Law on the Use of Force in Self-Defence concluded that necessary and proportionate action could be taken in a situation where the territorial State is itself "unable or unwilling" to take the required and necessary action.68 The drafters of the Leiden Policy Recommendations on Counter-Terrorism and International Law of 1 April 2010, as well as Daniel Bethlehem in his set of principles on the scope of a State's right of self-defence against attacks by non-State actors, reached a similar conclusion.69 Several States have been strongly advocating since 2001 for this new understanding of self-defence to be acknowledged as a binding part of international law,70 with supporters of the "unable or unwilling" standard arguing that it now forms a part of customary international law.71

Considering the abovementioned State practice and academic opinion, the standard of "unable or unwilling" demands further attention as to its scope and meaning. In this regard, it is helpful to refer to the precise examples offered in 2016 by the White House in its "Legal and Policy Framework" on the use of force.
The White House submitted that the standard of an unwilling or unable State might be invoked when a State "has lost or abandoned effective control over the portion of its territory" from which a given non-State actor operates; or, alternatively, where "a State is colluding with or harbouring a terrorist organisation operating from within its territory and refuses to address the threat posed by the group".72

In other words, on the basis of the "unwilling or unable" standard, self-defence could be exercised by States against non-State actors who have effectively displaced State control over the territory from which they operate. Alternatively, the standard covers situations where a State is providing a haven for terrorists and refuses to

66 UNSC Resolution 1368 (2001), S/RES/1368 (2001), 12 September 2001; UNSC Resolution 1373 (2001), S/RES/1373 (2001), 28 September 2001.
67 Wood, 2013, p. 356.
68 Wilmshurst, 2006, p. 963.
69 Leiden Policy Recommendations on Counter-Terrorism and International Law, 2010, p. 531; Bethlehem, 2012, pp. 770–777; Wilmshurst & Wood, 2013, pp. 390–395.
70 Brunnée & Toope, 2018, p. 282.
71 For a comprehensive analysis of this question, see: Williams, 2013, pp. 619–641; Corten, 2016; Jordan, 2024.
72 The White House, 2016, p. 10.
Anže Mediževec – The Right of Self-defence in the Earth's Orbit

act against them.73 Thus, instead of relying on the traditional link of attributing an attack by a non-State actor to a State, self-defence could be exercised as it is necessary "because the government of the State where the threat is located is unable or unwilling to prevent the use of its territory by the non-State actor for such attacks".74 Nevertheless, it must be emphasised that the idea of pursuing self-defence against non-State actors within the territory of States deemed "unwilling or unable" is far from being universally accepted.75 In the opinion of dissenting States, the most lenient view vis-à-vis this standard is that the use of force by States in self-defence against attacks by non-State actors stemming from the territory of a third State is "not unambiguously illegal".76 One of the main challenges of this concept is its vagueness regarding the parameters regulating when States can actually invoke it to exercise self-defence against non-State actors on the basis of Article 51 of the UN Charter. Namely, in the case of attacks launched by non-State actors from the territory of another State, the latter could be arbitrarily labelled either as unwilling or unable to act. While it is unreasonable to expect States to remain passive in the wake of an attack by non-State actors, it is likewise unreasonable to present a State that is willing but unable with a Catch-22 situation: either consent to the use of force by another State on your territory or be subjected to it regardless.77

73 Brunnée & Toope, 2018, p. 286.
74 The White House, 2016, p. 10.
75 Ruys & Verhoeven, 2005, p. 310; Paddeu, 2017, pp. 2–4; Wood, 2013, pp. 365–367.
76 Paddeu, 2017, pp. 2–4; for a similar position, see: Antonopoulos, 2008, pp. 159–180.
77 Ruys & Verhoeven, 2005, p. 310; Paddeu, 2017, p. 310; Brunnée & Toope, 2018, p. 285.

3.2. "Unable or Unwilling" States in Space and the Rules of Attribution of Conduct

When attempting to transpose the discussion on self-defence within "unwilling or unable" States into space, the challenges posed by this concept are amplified even further.
This is because, unlike on Earth, in space there exist no sovereign borders or territories of States.78 The elementary rule in this regard is Article II of the OST, which stipulates that outer space, including the Moon and other celestial bodies, is not subject to national appropriation by claim of sovereignty, by means of use or occupation, or by any other means. The dictum of Article II OST has been interpreted as precluding the possibility of property rights by any State, commercial entities, or private persons over any part of outer space.79 Thus, with no possibility of a defined "territory" in space, the "unwilling or unable" standard, even if part of (customary) international law, faces a structural obstacle. If there can be no notion of "territory" in line with the conventional understanding of a State's territory, how could States exercise self-defence against non-State actors acting from the "territory" of "unwilling or unable" States?

Does this imply that the act of force by a non-State actor must commence on Earth, on the territory of an unwilling or unable State, and then culminate in space against the assets of another State—for instance, a satellite? One potential way to overcome this predicament is to re-conceptualise, when in space, the concept of a State's "territory" away from its sovereignty foundations and towards territorial jurisdiction. Both sovereignty and jurisdiction imply a State's control over territory, but the notion of jurisdiction is broader. If outer space lies outside the traditional sovereignty rules of States, it is by no means entirely outside the rules on State jurisdiction. Sovereignty concerns a State's control in a given territory, while jurisdiction refers to the ability of States to prescribe and enforce law and order within their territory. By virtue of this, jurisdiction is ascribed according to three dimensions: jurisdiction to prescribe, jurisdiction to adjudicate, and jurisdiction to enforce, each of which has already been observed in space.80 In this sense, even the OST stipulates in Article VIII that those States on whose registry an object launched into outer space is carried shall retain jurisdiction and control over such object, and over any personnel thereof, while in outer space or on a celestial body.81 Additionally, it is possible for a State to exercise extraterritorial jurisdiction in outer space. Thus, while activities can be extraterritorial—including in space—they are not necessarily extrajudicial.82 It is within this context that an argument could be made for re-interpreting the concept of a State's "territory" as a part of the "unable or unwilling" standard, at least in space given its specific legal and factual circumstances, in accordance with the notion of territorial jurisdiction.

78 Sharma & Singh, 2012, p. 277; Lee, 2004, pp. 128–129; Pop, 2000, p. 275; Indo-Pacific Defense Forum, 2023.
79 Similar provisions can be found in Article 11, §§ 2 and 3 of the Moon Agreement: "2. The moon is not subject to national appropriation by any claim of sovereignty, by means of use or occupation, or by any other means" and "3. Neither the surface nor the subsurface of the moon, nor any part thereof or natural resources in place, shall become property of any State, international intergovernmental or non-governmental organization, national organization or non-governmental entity or of any natural person. The placement of personnel, space vehicles, equipment, facilities, stations and installations on or below the surface of the moon, including structures connected with its surface or subsurface, shall not create a right of ownership over the surface or the subsurface of the moon or any areas thereof"; see also: Schladebach, 2020, pp. 53–54; Svec, 2022, p. 2; Angels, 2024; for a discussion on the principle of non-appropriation as a rule of jus cogens, see: Cepelka & Gilmour, 1970, p. 46.
80 Sinclair & Patton, 2024, pp. 1–4; American Society of International Law, 2014, pp. 1–16.
81 See also Article VI OST, which provides similar duties of supervision in the area of State responsibility in space: "States Parties to the Treaty shall bear international responsibility for national activities in outer space, including the moon and other celestial bodies, whether such activities are carried on by governmental agencies or by non-governmental entities, and for assuring that national activities are carried out in conformity with the provisions set forth in the present Treaty. The activities of non-governmental entities in outer space, including the moon and other celestial bodies, shall require authorization and continuing supervision by the appropriate State Party to the Treaty".
82 Sinclair & Patton, 2024, pp. 1–4.

Hence, if a State obliged to exercise its jurisdiction or control
over its space object (used by a non-State actor) fails to exercise such control, and if the non-State actor proceeds to use force against the assets of another State, the State of jurisdiction could be deemed "unwilling or unable".83 The "unwillingness or inability" of a State to supervise or control the non-State actor using its space objects for the use of force is then equated to the "unwillingness or inability" of a territorial State to prevent its territory from being used as a launching ground for non-State actor attacks. This could then trigger the right of self-defence of the victim State against the State of jurisdiction.84 Such an understanding transposes, mutatis mutandis, the concept of non-State actor use of force from within the territory of an "unwilling or unable" State to the space domain while adjusting it to the realities of space. Focusing on the lack of jurisdictional supervision and control when applying the "unwilling or unable" doctrine in space would also reflect the 2016 White House "Legal and Policy Framework" on the use of force as far as the specific circumstances of space law permit.85

While the "unwilling or unable" legal framework of self-defence against attacks by non-State actors in space is still evolving, as shown above, the rules for attributing responsibility for the use of force to a State are more developed. Attribution of the use of force in space to a given State could be scrutinised on the basis of three different tests: an institutional, a functional, and an agency test.86 The institutional approach is reflected in Article 4 ARSIWA, which stipulates that the acts of de jure or de facto State organs are attributable to the State.87 The functional test attributes the use of force to the State if an act has been committed by entities empowered by that State to exercise governmental authority as per Article 5 ARSIWA.
Another option is the use of force exercised by an organ of another State placed at the disposal of the first State, in accordance with Article 6 ARSIWA. Lastly, the agency test, included in Article 8 ARSIWA, demonstrates that an ad hoc relationship between a State and a non-State actor performing an attack is sometimes required. In such a relationship, the State either instructs or directs the use of force by the non-State actor or exercises effective control over that non-State actor.88

For these tests enshrined in ARSIWA, the common thread is in linking the use of force to the responsibility of a State.89 However, given their specific characteristics, the attribution of acts by private, non-State parties in space to a given State appears to bear its own distinct predicaments—even outside the "unwilling or unable" doctrine. In the area of State responsibility in space, the challenge is to address the interplay between the ARSIWA rules on attribution of conduct to a State and the strict responsibility regime of States otherwise present in the space domain—most notably via Article VI OST.

3.3. The Attribution of Acts in Space: The Strict Responsibility Regime of the OST

Even though the space domain was once predominantly reserved for States and State-funded public procurement schemes involving private entities, this has dramatically changed in recent years, with companies such as Blue Origin, SpaceX, and Virgin Galactic paving the way for the presence of private parties in space on their own accord.90 Indeed, a surge may be observed in the development of satellite communications, space launches, and remote sensing capabilities by the private sector. It is expected that this trend will only increase in the future. In the USA, for example, the Department of Defense is increasingly reliant on commercial space systems, as they provide essential data for the armed forces.91 Thus, the commercial, private space sector plays a crucial role in Space Domain Awareness (SDA), the main goal of which is to provide knowledge of the space environment, including all operations in a given region of space, as well as the location of space objects and their missions.92 Nowadays, new technologies are being developed—among them weapons that could hit and destroy satellites in low-Earth and polar orbits—and private entities are at the forefront of such developments.93 Precisely this new involvement of private parties is central to the question of attributing the use of force in space, as it is becoming increasingly clear that non-State actors may become capable of exercising force in the space domain.

In contrast to the traditional rules on State responsibility encompassed in ARSIWA, Massingham and Stephens write that States appear unequivocally responsible for the actions undertaken by private parties in space.94 This understanding may be derived from Article VI of the OST, which provides that the States Parties to the OST "shall bear international responsibility for national activities in outer space, including the Moon and other celestial bodies, whether such activities are carried on by governmental agencies or by non-governmental entities, and for assuring that national activities are carried out in conformity with the provisions set forth" in the OST. Additionally, to erase any possibility of doubt, Article VI subjects the activities of non-governmental entities in outer space, including the Moon and other celestial bodies, to authorisation and continuing supervision by the appropriate State Party to the OST.95 With respect to Article VI and the question of attribution and responsibility, the reference to "non-governmental entities" in correlation with the imposition of responsibility upon a State for these private parties is of particular significance.

83 See, mutatis mutandis, Principle F of UNGA Resolution 37/92, Principles Governing the Use by States of Artificial Earth Satellites for International Direct Television Broadcasting, 10 December 1982, UN Doc. A/AC.105/572/Rev.1, p. 39; see also Principle B providing for the general application of the OST, including its Article VI, to the subject matter of the Resolution; von der Dunk, 2011, p. 5.
84 Provided, again, that the standard of "unwilling or unable" actually applies as part of binding international law.
85 The White House, 2016, p. 10.
86 Tsagourias, 2016, p. 805.
87 Müller & San Martin, 2024, § 9.
88 Application of the Convention on the Prevention and Punishment of the Crime of Genocide (Bosnia and Herzegovina v. Serbia and Montenegro), Judgment, I.C.J. Reports 2007 (Application of the Convention on the Prevention and Punishment of the Crime of Genocide), p. 43, §§ 398, 400, 402–406 and 413–414; United States Diplomatic and Consular Staff in Tehran, Judgment, I.C.J. Reports 1980, p. 3, § 58; Military and Paramilitary Activities in and against Nicaragua, p. 14, §§ 116–177 and Separate Opinion of Judge Ago, § 16.
89 As regards the exercise of attributing the use of force by non-State actors to a State on Earth, this notion has been extensively covered. See, for example, Lanovoy, 2017, pp. 563–585; Sancin et al., 2009, pp. 185–187.
90 Holmes, 2024; European Space Policy Institute, 2017, p. 3; NASA, 2014, pp. 11, 17 and 105.
91 Wehtje, 2023, fn. 31–32.
92 Ibid.; Baker-McEvilly et al., 2024, p. 1.
93 Thiruvanathapuram, 2010.
94 Massingham & Stephens, 2022, p. 18.
According to the genesis of the OST, Article VI, with its provision on "non-governmental entities" and State responsibility, reflects a compromise reached between the Union of Soviet Socialist Republics (USSR) and the USA at the time of intense negotiations over the OST. While the USSR argued that only States might venture into space and engage in space activities, the USA, with notable foresight, sensed the important future role that private parties would play in the space domain.96 The USA argued that States might wish to license a private firm to carry out certain activities in space.97 According to the drafting history of the OST, the main premise of Article VI was to establish a strict regime of responsibility for space-faring States for the activities of non-State entities, such as private companies, when it comes to "national activities" in outer space. This strict regime provides for the responsibility of a State regardless of whether the State was aware of the activity in question.98 The underlying reason for such a regime is the nature of space activities: since these activities are perceived as dangerous and capable of causing widespread damage, the desire was to avoid any potential impunity for the damage caused, as well as to ensure that States act with due diligence.99

Unsurprisingly, the definition of what constitutes a "national activity" has been interpreted broadly. In 2013, the General Assembly proposed in its Resolution 68/74, titled Recommendations on National Legislation Relevant to the Peaceful Exploration and Use of Outer Space, that the notion should include all activities conducted by a State's nationals or from its national territory.100 Hence, there exists a clear obligation for States, as part of their duty of due diligence, to supervise private parties and authorise their activities, usually by way of domestic legislation, licensing schemes, and other requirements.101 What remains to be addressed now is how the wording of Article VI OST could be applied in the event of private entities (i.e., non-State actors) using force in space.

95 Von der Dunk, 2011, pp. 5–8. Interestingly, according to Article VI OST, not only States but also international organisations bear the burden of responsibility for their activities carried out in outer space, including the moon and other celestial bodies. This provision may become particularly important for the EU in the development of its own space programme. See in this regard: European Union Agency for the Space Programme, 2024.
96 Committee on the Peaceful Uses of Outer Space Legal Sub-Committee, Summary Record of the Seventeenth Meeting, UN Doc A/AC.105/C.2/SR.17, 27 June 1963, p. 7; Massingham & Stephens, 2022, p. 18.
97 Committee on the Peaceful Uses of Outer Space Legal Sub-Committee, Record of the Twentieth Meeting, UN Doc A/AC.105/C.2/SR.20, 27 June 1963, p. 12.
98 Sancin, Grünfeld & Ramuš Cvetkovič, 2021, p. 50.
99 Ibid.; Massingham & Stephens, 2022, pp. 294–295.

3.4. Attribution of the Use of Force to a State in Space: The OST Strict Regime Versus ARSIWA

States are deemed responsible for the use of force by private parties if it can be concluded that the State exercised effective control over the non-State actor.102 The threshold for what constitutes effective control is quite high.
According to Article 8 ARSIWA, the standard is fulfilled and responsibility attributed to the State "if the person or group of persons is in fact acting on the instructions of, or under the direction or control of, that State in carrying out the conduct" concerned.103 This understanding reflects customary international law.104 In the Jan de Nul award, for example, rendered by a tribunal of the International Centre for Settlement of Investment Disputes (ICSID), the tribunal relied on the ICJ's reasoning in Nicaragua v. USA. It held that international jurisprudence prescribes a truly substantial benchmark for attributing the act of a person or entity to a State, as it requires two elements: (1) general control of the State over the person or entity, and (2) specific control of that State over the given, concrete act for which attribution is sought.105

When comparing the strict responsibility regime of Article VI OST with ARSIWA, the main difference lies in the question of attribution.106 Erhart and Boutovitskai submit that Article VI OST is clear in its wording and thus suggests a much lower threshold for attributing responsibility for any space activities by private parties to a given State.107 Thus, the main challenge is how to reconcile these two systems of norms.

100 UNGA Resolution 68/74, 11 December 2013, § 2; Massingham & Stephens, 2022, pp. 293–294.
101 Aoki, 2012, p. 397.
102 Crawford, 2013, p. 141.
103 ARSIWA, Article 8, Conduct directed or controlled by a State.
104 Ryngaert, 2021, p. 171.
105 Jan de Nul N.V. and Dredging International N.V. v. Arab Republic of Egypt, ICSID Case No. ARB/04/13, § 173; Boon, 2014, p. 19; Military and Paramilitary Activities in and against Nicaragua, p. 14, §§ 113–115.
106 ARSIWA, Article 2: "There is an internationally wrongful act of a State when conduct consisting of an action or omission: (a) is attributable to the State under international law".
In other words, could Article VI OST perhaps be understood as lex specialis in relation to the general international law rules enshrined in ARSIWA? On the other hand, since Article III OST expressly references the UN Charter as well as international law more broadly, would it not be appropriate to extend the general international law rules in ARSIWA to the space domain?108

Three potential solutions may be discerned to answer these queries. The first rests on the premise that space law, especially Article VI OST, may be seen as a lex specialis norm in relation to ARSIWA. The second supports the view that the general rules of State responsibility in ARSIWA should apply, as they are secondary rules of international law, whereas Article VI OST encompasses primary rules.109 The third approach offers a combined reading of Article VI OST and ARSIWA, based on a systematic interpretation of the norms contained therein, to preserve the purpose of the secondary law rules on State responsibility.

3.4.1. The Strict Responsibility Regime of Article VI OST as Lex Specialis

This approach rests on the premise that the ARSIWA rules on State responsibility were not intended to constitute a comprehensive code of secondary rules of international law.110 According to Article 55 ARSIWA, the rules on attribution enshrined therein do not apply where, and to the extent that, the conditions for the existence of an internationally wrongful act or the content or implementation of the international responsibility of a State are governed by special rules of international law. Thus, if another, lex specialis norm governs the rules of attribution, that more specific regime prevails over the Articles in ARSIWA.
Here, Article VI OST comes into play as a lex specialis provision in the field of attributing the acts of non-State actors in space to a given State.111 The argument offered in this regard is that the OST regime better accounts for the particularities of space activities carried out by non-State actors.112 It is argued that Article VI OST was drafted to establish an over-arching responsibility for acts conducted by private parties in space, thus rendering any and all space activities the responsibility of at least one State.113 The motive behind this normative choice of the OST drafters—to attribute all acts of non-State actors in space to a given State—was the inherently dangerous nature of space activities.114 Consequently, a conflict of norms may be observed in the area of secondary rules of international law: between the rules in ARSIWA and the strict regime of Article VI OST.115 Given that Article VI OST provides for a specialised, strict regime of attribution compared to the general rules on attribution enshrined in ARSIWA, the former may prevail over the latter on the basis of its specificity: lex specialis derogat legi generali (Article 55 ARSIWA). Hence, the initial conflict between both systems of norms is resolved, with the ARSIWA rules on attribution giving way to Article VI OST when assessing the responsibility of States for the acts of non-State actors in space.

3.4.2. Article VI OST as a Rule of Primary International Law

The second approach, which enjoys less support, departs from the application of Article VI OST as a lex specialis rule of attribution. This second interpretation is founded on a narrow reading of Article VI OST. According to this view, the strict responsibility regime applies to a State's own actions or omissions. Consequently, a State is deemed responsible for the acts of private parties in space when it has itself failed to supervise such acts or to authorise them. Article VI OST is thus seen as a primary norm, not as a secondary norm of international law, merely demanding of States a heightened duty of supervision and due diligence vis-à-vis non-State actors.116 Since, in this scenario, the secondary law rules on attribution are not superseded by a lex specialis norm, the activities undertaken by private entities in space would continue to be attributed to a given State on the basis of the customary international law rules of ARSIWA.117 In this sense, Article VI OST enshrines a stricter primary law norm of due diligence for the State concerned in supervising the activities of non-State actors. Thus, a State would be held responsible for the conduct of a non-State actor under its direction or control, or for conduct adopted or acknowledged by the given State, only if it failed to exercise diligent supervision.118 Following this approach, there is no conflict between ARSIWA and the OST regarding the attribution of responsibility to a State for the use of force by private parties in space. Article VI OST reflects a norm of primary law, while leaving the secondary law norms on attribution of conduct and responsibility for the activities of non-State actors to ARSIWA.

111 Li, 2023, p. 4; Zorzetto, 2012, pp. 61–64.
112 Koskenniemi, 2003, p. 10.
113 Cheng, 1998, pp. 14–16; von der Dunk, 2011, p. 3.
114 Marchisio, 2012, pp. 11 and 14–15.
115 Ramuš Cvetkovič, 2021, p. 20.
116 Ibid., p. 19; Grünfeld, 2022, p. 606; Arangio-Ruiz, 2017, pp. 126–127.
117 Dupuy, 2002, p. 464.
118 Grünfeld, 2022, p. 606.
3.4.3. Interpretation of "National Activities" in Light of the Purpose of ARSIWA

Provided that the standard of effective control in Article 8 ARSIWA was substituted by virtue of Article VI OST with a lower standard of attribution, classifying the use of force as a "national activity" would suffice—without needing to fulfil the effective control test—to attribute the use of force by non-State actors to a State. In a scenario where the threshold for attribution of responsibility for the use of force in space is easily met, States could be drawn much more quickly into an armed conflict by virtue of non-State actors using force in space. Alternatively, imposing a standard that might be unrealistic risks reducing Article VI OST to near-irrelevance in a situation involving the use of force by non-State actors.119

For these reasons, the last option (which enjoys the least support of all three) rests on a teleological and systematic interpretation of Article VI OST and ARSIWA, with the aim of safeguarding the purpose of the secondary law rules on attributing the conduct of non-State actors to a State. According to the purpose of these rules, only acts that can be attached to a State on the basis of an objective (i.e., real) link—whether legal, functional, or factual—may be attributed to that State.120 In this sense, it is important to recall the crucial role played by the principle of effectiveness in international law with regard to State responsibility. Based on the principle of effectiveness, when determining a State's responsibility for acts of non-State actors, it is necessary to establish the existence of a real link between the non-State actors performing the act and the State in question.121 The question then remains how to interpret the term "national activities" in Article VI OST so as to safeguard the purpose of the rules on State responsibility.
Article VI OST can be linked to Article 8 ARSIWA in the case of non-State actors by interpreting the notion of "national activities" in the light of the customary international law standard of effective control. This interpretive option may be viable given the explicit invocation of general international law in Article III OST. Therefore, it is beyond doubt that a substantial part of international law and the UN Charter applies to human activities in space.122 To incorporate fully the purpose of the customary international law rules on State responsibility into Article VI OST, and to ensure a systematic reading of the norms contained in ARSIWA and the OST via Article 31(3)(c) VCLT, the term "national activities" could be interpreted in line with the norm of Article 8 ARSIWA.

It is thus difficult to conceive that, e.g., the use of force stemming from a malfunction of a space object operated by non-State actors may be directly attributed to the responsibility of a State in the absence of a "real link"—just as it is equally inconceivable to do so on Earth on the basis of Article 8 ARSIWA and the rules on State responsibility.123 It follows from this reasoning, and from the purpose of the norm in Article 8 ARSIWA (to which Article III OST makes reference), that the use of force by private entities in space may be attributed to the State via Article VI OST if the force in question was instructed or exercised under the effective control of that State.

119 Ramey, 2018, pp. 230–231.
120 De Frouville, 2013, p. 261.
121 Report of the International Law Commission on the work of its fifty-third session, Commentaries on the draft ARSIWA, Yearbook of the International Law Commission, 2001, Vol. II, Part Two, as corrected, p. 47.
122 Breccia, 2016, pp. 20–21.
Such a use of force may be deemed a "national activity" in the terms of Article VI OST, and for such a use of force the responsibility of the State arises, thus leading to a possible invocation of Article 51 of the UN Charter. In other words, if a State instructs or effectively controls the exercise of force by a non-State actor in space, the latter exercise may be deemed a "national activity" in terms of Article VI OST for which the State could bear international responsibility.

4. Concluding Remarks

With contemporary developments in space technology, it may be observed—worryingly—that the notion of acts of force being committed in space no longer appears quite so remote. The possibility of the use of force in space raises questions regarding the weaponisation of space and the corresponding norms on the responsibility of States for internationally wrongful acts they commit, either in their capacity as States or via non-State actors. While current prospects for space warfare mostly involve the destruction of unmanned military assets in space—usually satellites traversing the Earth's orbit124—further militarisation of space will, in all likelihood, follow. In the new space era, non-State actors will play a crucial role.

To address some aspects of the emerging challenges related to the weaponisation of space, this article focused on an issue of increasing importance in the space domain: the use of force—especially in the context of self-defence—and the question of attributing responsibility for the use of force by non-State actors in space to a given State. With regard to the prospects for self-defence in the Earth's orbit, it has been shown that, by virtue of Article III OST, which cross-references the UN Charter, Article IV OST must be interpreted in connection with the Charter.
Hence, an integrated interpretation of the “peaceful purposes” wording in Article IV OST with the norms on self-defence in Article 51 of the UN Charter leads to the conclusion that the use of force in space is allowed when exercised in self-defence. The term “peaceful purposes” in Article IV OST is, therefore, to be understood in line with the prohibition of aggression (or “aggressive force”) in customary international law and Article 2(4) of the UN Charter, rather than as a blanket prohibition of any acts of force in space.125

After concluding that the use of force in self-defence in space is permitted, the article turns to the rules of attribution for the use of force exercised by non-State actors in space to a State. Here, a distinction is drawn between two models: the traditional, ARSIWA model on attribution of acts to a State, and the doctrine of “unwilling or unable”, which departs from the standard criterion of attribution.

123 In Military and Paramilitary Activities in and against Nicaragua, the ICJ pronounced that for a “conduct to give rise to legal responsibility of the United States, it would in principle have to be proved that that State had effective control of the military or paramilitary operations in the course of which the alleged violations were committed”: Application of the Convention on the Prevention and Punishment of the Crime of Genocide, p. 43, §§ 399–400; Breitwieser-Faria, 2021, pp. 84–85; de Frouville, 2013, p. 261; Corfu Channel case, Judgment of April 9th, 1949: I.C.J. Reports 1949, p. 18; see also Olleson, 2001, p. 40; Massingham & Stephens, 2022, p. 295.
124 Ramey, 2018, p. 190.
Regarding the latter doctrine, the “unwillingness or inability” of a spacefaring State to supervise or control a non-State actor using its space objects to exercise force in space could be linked to the “unwillingness or inability” of a territorial State to prevent its territory from being used as a launching ground for non-State actor attacks. This could then trigger the victim State’s right of self-defence against the State of jurisdiction. Such an interpretation transposes, mutatis mutandis, the concept of non-State actor use of force from within the territory of an “unwilling or unable” State to the space domain, while adjusting it to the realities of space.

Departing from the developing “unwilling or unable” doctrine and focusing on the much more developed and accepted model of attribution of responsibility for acts undertaken by non-State actors to a State, the article examined the interplay between ARSIWA and the strict regime of Article VI OST. Three potential solutions were assessed to address the apparent conflict between these two sets of attribution of conduct rules. The first option rests on the premise that space law, particularly Article VI OST, may be regarded as a lex specialis system in relation to ARSIWA. By virtue of Article 55 ARSIWA, the rules on attribution in Article VI OST could thus supersede the norms of ARSIWA, in their capacity as a more specific set of secondary law rules. The second solution supports the view that the general rules of State responsibility in ARSIWA should apply as secondary rules of international law, whereas Article VI OST encompasses primary rules. Here, Article VI OST is understood as reflecting a high due diligence obligation of spacefaring States—a primary norm—rather than constituting a lex specialis set of secondary rules.
The third solution is founded on a combined reading of Article VI OST and ARSIWA via a systematic interpretation of the norms they contain, with a view to preserving the purpose of the secondary law rules on State responsibility. For this latter option, it is crucial to determine whether the use of force by non-State actors in space is always deemed a “national activity” within the meaning of Article VI OST. In light of the rules on State responsibility for internationally wrongful acts and relevant international jurisprudence, it was submitted that the use of force by private entities in space could be attributed to a State via Article VI OST only if the force in question was either instructed by or exercised under the effective control of that spacefaring State.

Of the three options presented, the lex specialis argument appears the most convincing and enjoys the broadest support in the literature. It follows from the inherently hazardous nature of space activities that the OST purposefully sets out a specific system of strict responsibility in Article VI, not only to ensure that States actively supervise non-State actors under their jurisdiction, but also to prevent impunity for damage caused in space. This creates a distinct set of secondary law norms intended to regulate the consequences of the acts and omissions of spacefaring States in space. Thus, the ARSIWA rules on State responsibility must give way to the strict regime of Article VI OST when assessing the responsibility of States for acts committed by non-State actors in space, including acts involving the use of force. By virtue of this, a more specific system of attribution rules supersedes a more general one.

125 Buchan, 2023.

Anže Mediževec – The Right of Self-defence in the Earth’s Orbit

References

Akande, D. & Milanovic, M.
(2015) ‘The Constructive Ambiguity of the Security Council’s ISIL Resolution’, EJIL:Talk!, 21 November 2015.
American Society of International Law (2014) ‘Jurisdictional, Preliminary, and Procedural Concerns’ in: Amann, D. (ed.) (2014) Benchbook on International Law, § II.A.
Angels (2024) Space Law Fundamentals, (accessed 19 October 2024).
Antonopoulos, C. (2008) ‘Force by Armed Groups as Armed Attack and the Broadening of Self-Defence’, Netherlands International Law Review 55(2), pp. 159–180.
Aoki, S. (2012) ‘The Standard of Due Diligence in Operating a Space Object’, Proceedings of the International Institute of Space Law 2012, pp. 392–405.
Arangio-Ruiz, G. (2017) ‘State Responsibility Revisited: The Factual Nature of the Attribution of Conduct to the State’, Quaderni della Rivista di diritto internazionale, Volume C – 2017, pp. 1–162.
Baker-McEvilly, B. et al. (2024) ‘A comprehensive review on Cislunar expansion and space domain awareness’, Progress in Aerospace Sciences 147, pp. 1–16.
Bartóki-Gönczy, B. & Nagy, B. (2023) ‘The Artemis Accords’, International Legal Materials 62(5), pp. 888–898.
BBC (2019) Trump: “Space is the world’s newest war-fighting domain”, (accessed 27 August 2024).
Bethlehem, D. (2012) ‘Principles Relevant to the Scope of a State’s Right of Self-Defense Against an Imminent or Actual Armed Attack by Nonstate Actors’, American Journal of International Law 106(4), pp. 770–777.
Boon, K.E. (2014) ‘Are Control Tests Fit For the Future? The Slippage Problem in Attribution Doctrines’, Melbourne Journal of International Law 15, pp. 1–48.
Boothby, B. (2017) ‘Space Weapons and the Law’, International Law Studies 93, pp. 179–214.
Borgen, C.J. (2020) ‘Space Power, Space Force, and Space Law’, Lieber Institute – West Point, Articles of War, 10 September 2020.
Breccia, P. (2016) ‘Article III of Outer Space Treaty and Its Relevance in the International Space Legal Framework’, Proceedings of the International Institute of Space Law 2016, pp. 17–35.
Breitwieser-Faria, Y. (2021) ‘State Responsibility for Breaches of Prevention Obligations: Is the Distinction between Obligations of Conduct and of Result Useful?’, Australian International Law Journal 28, pp. 75–90.
Brunnée, J. & Toope, S.J. (2018) ‘Self-Defence Against Non-State Actors: Are Powerful States Willing But Unable to Change International Law?’, The International and Comparative Law Quarterly 67(2), pp. 263–286.
Buchan, R. (2023) ‘Self-Defence as an Exception to the Principle of Non-Use of Force: Debunking the Myth’, EJIL:Talk!, 29 November 2023.
Cepelka, C. & Gilmour, J.H.C. (1970) ‘The Application of General International Law in Outer Space’, Journal of Air Law and Commerce 36(1), pp. 30–49.
Cheng, B. (1998) ‘Article VI of the 1967 Space Treaty Revisited: “International Responsibility”, “National Activity”, and “The Appropriate State”’, Journal of Space Law 26, pp. 7–32.
Cheng, B. (2000) ‘Properly Speaking, Only Celestial Bodies Have Been Reserved for Use Exclusively for Peaceful (Non-Military) Purposes, but Not Outer Void Space’, International Law Studies 75, pp. 81–117.
China Aerospace Studies Institute (2024) PLA Aerospace Power: A Primer on Trends in China’s Military Air, Space, and Missile Forces. United States of America: China Aerospace Studies Institute.
Corten, O. (2024) ‘The ‘Unwilling or Unable’ Test: Has It Been, and Could It Be, Accepted?’, Leiden Journal of International Law 29, pp. 777–799.
Crawford, J. (2002) ‘The ILC’s Articles on Responsibility of States for Internationally Wrongful Acts: A Retrospect’, American Journal of International Law 96(4), pp. 874–890.
Crawford, J. (2013) ‘Direction or control by the State’ in: State Responsibility: The General Part. Cambridge: Cambridge University Press, pp. 141–165.
Cross, M.K.D. (2021) ‘‘United Space in Europe’?
The European Space Agency and the EU Space Program’, European Foreign Affairs Review 26, pp. 31–46.
d’Aspremont, J. (2012) ‘The Systemic Integration of International Law by Domestic Courts: Domestic Judges as Architects of the Consistency of the International Legal Order’ in: Nollkaemper, A. & Fauchald, O.K. (eds.) (2012) The Practice of International and National Courts and the (De-)Fragmentation of International Law. Hart, pp. 141–165.
de Frouville, O. (2013) ‘Attribution of Conduct to the State: Private Individuals’ in: Crawford, J. et al. (eds.) (2013) The Law of International Responsibility. Oxford: Oxford University Press, pp. 257–280.
Delegation of the European Union to the United Nations in New York (2023) EU Statement – UN General Assembly 1st Committee: Outer Space, (accessed 27 August 2024).
Dembling, P.G. & Arons, D.M. (1967) ‘The Evolution of the Outer Space Treaty’, Journal of Air Law and Commerce 33, pp. 419–456.
Dupuy, P.-M. (2002) Droit International Public. Paris: Dalloz.
Erhart, L. & Boutovitskai, M. (2021) ‘Transforming Article VI of the Outer Space Treaty into an Effective Mechanism of Space Debris Mitigation’ in: Flohrer, T., Lemmens, S. & Schmitz, F. (eds.) (2021) Proc. 8th European Conference on Space Debris (virtual), Darmstadt, Germany, 20–23 April 2021. ESA Space Debris Office, pp. 1–15.
European Space Policy Institute (2017) ‘The Rise of Private Actors in the Space Sector’, July 2017, pp. 1–7.
European Union Agency for the Space Programme (2024) The EU Space Programme, (accessed 7 September 2024).
Farhang, C. (2015) ‘Self-Defence as a Circumstance Precluding the Wrongfulness of the Use of Force’, Utrecht Law Review 11(3), pp. 1–18.
Federal Aviation Administration (2023) Commercial Space Data, (accessed 6 September 2024).
Friman, L.J.
(2005) ‘War and Peace in Outer Space: A Review of the Legality of the Weaponization of Outer Space in the Light of the Prohibition on Non-Peaceful Purposes’, FYBIL 16, pp. 285–312.
Gardiner, R. (2010) Treaty Interpretation. Oxford: Oxford University Press.
Gleason, M.P. & Hays, P.L. (2020) ‘A Roadmap for Assessing Space Weapons’, Center for Space Policy and Strategy, October 2020, pp. 1–13.
Grünfeld, K. (2022) ‘NewSpace: The Star Wars Soldier of the Future?’, Proceedings of the International Institute of Space Law 2022, pp. 599–611.
Hadley, G. & Gordon, C. (2024) Russia’s New Counterspace Weapon Is in the Same Orbit as a US Satellite, (accessed 27 August 2024).
Hakimi, M. (2015) ‘Defensive Force against Non-State Actors: The State of Play’, International Law Studies 91, pp. 2–31.
Harris, A.W. & D’Abramo, G. (2015) ‘The population of near-Earth asteroids’, Icarus 257, pp. 302–312.
Harrison, T. (2020) International Perspectives on Space Weapons. Washington: CSIS.
Hawking, S. (2016) A Brief History of Time – From the Big Bang to the Black Holes. London: Transworld Publishers.
Holmes, O. (2024) Polaris Dawn astronauts complete first commercial spacewalk, (accessed 15 September 2024).
Indo-Pacific Defense Forum (2023) Sovereignty in Space – No one can own it, which drives competition, cooperation, (accessed 19 October 2024).
Jordan, L.V. (2024) ‘“Unwilling or Unable”’, International Law Studies 103, pp. 151–193.
Korošec, T. & Tekavčič Veber, M. (2016) ‘Pravica do samoobrambe zoper nedržavne akterje v luči boja proti terorizmu’, Zbornik znanstvenih razprav 76, pp. 41–68.
Koskenniemi, M. (2003) ‘The function and scope of the lex specialis rule and the question of ‘self-contained regimes’: An outline’, International Law Commission, Study Group on Fragmentation, pp. 1–10.
Krepon, M. & Clary, C. (2003) Space Assurance or Space Dominance?: The Case Against Weaponizing Space.
Washington: The Henry L. Stimson Center.
Kreß, C. (2016) ‘The State Conduct Element’ in: Kreß, C. & Barriga, S. (eds.) (2016) The Crime of Aggression: A Commentary. Cambridge: Cambridge University Press, pp. 412–564.
Kretzmer, D. (2013) ‘The Inherent Right to Self Defence and Proportionality in Jus Ad Bellum’, European Journal of International Law 24(1), pp. 235–282.
Kütt, M. & Steffek, J. (2015) ‘Comprehensive Prohibition of Nuclear Weapons: An Emerging International Norm?’, Nonproliferation Review 22, pp. 401–420.
Lanovoy, V. (2017) ‘The Use of Force by Non-State Actors and the Limits of Attribution of Conduct’, European Journal of International Law 28(2), pp. 563–585.
Lee, R.J. (2003) ‘The Jus Ad Bellum in Spatialis: The Exact Content and Practical Implications of the Law on the Use of Force in Outer Space’, Journal of Space Law 29(1-2), pp. 93–120.
Lee, R.L. (2004) ‘Article II of the Outer Space Treaty: Prohibition of State Sovereignty, Private Property Rights, or Both?’, Australian International Law Journal 11, pp. 128–142.
Leiden Policy Recommendations on Counter-Terrorism and International Law (2010), Netherlands International Law Review 57(3), pp. 531–550.
Li, D. (2023) ‘Cyber-attacks on Space Activities: Revisiting the Responsibility Regime of Article VI of the Outer Space Treaty’, Space Policy 63, pp. 1–13.
Marchisio, S. (2012) ‘The Legal Dimension of the Sustainability of Outer Space Activities: The Draft International Code of Conduct on Outer Space Activities’, Proceedings of the International Institute of Space Law 2012, pp. 3–22.
Martinez, P. et al. (2019) ‘Reflections on the 50th Anniversary of the Outer Space Treaty, UNISPACE+50, and Prospects for the Future of Global Space Governance’, Space Policy 47, pp. 28–33.
Massingham, E. & Stephens, D.
(2022) ‘Autonomous Systems, Private Actors, Outer Space and War: Lessons for Addressing Accountability Concerns in Uncertain Legal Environments’, Melbourne Journal of International Law 23, pp. 276–305.
Merkouris, P. (2017) ‘Interpreting the Customary Rules on Interpretation’, International Community Law Review 19, pp. 126–155.
Müller, D. & San Martin, I. (2024) ‘Attribution’, Jus Mundi, February 2024.
NASA (2014) Commercial Orbital Transportation Services: A New Era in Spaceflight. NASA.
O’Connell, M.E., Tams, C.J. & Tladi, D. (2019) Self-Defence Against Non-State Actors. Cambridge: Cambridge University Press.
O’Connell, M.E. (2013) ‘Dangerous Departures’, American Journal of International Law 107(2), pp. 380–386.
Ohlin, J.D. (2014) ‘The Unwilling or Unable Doctrine Comes to Life’, Opinio Juris, 23 September 2014.
Olleson, S. (2001) ‘The Impact of the ILC’s Articles on Responsibility of States for Internationally Wrongful Acts’, British Institute of International and Comparative Law, Preliminary Draft 2001, pp. 1–289.
Orbiting now (2024) Active Satellite Orbit Data, (accessed 26 October 2024).
Paddeu, F.I. (2017) ‘Use of Force against Non-State Actors and the Circumstance Precluding Wrongfulness of Self-Defence’, University of Cambridge Repository 2017, pp. 1–28.
Pop, V. (2000) ‘Appropriation in outer space: the relationship between land ownership and sovereignty on the celestial bodies’, Space Policy 16, pp. 275–282.
Ramey, R.A. (2018) ‘Armed Conflict on the Final Frontier: The Law of War in Space’ in: von der Dunk, F.G. (ed.) (2018) International Space Law. Cheltenham (UK) & Northampton (USA): Edward Elgar Publishing Limited, pp. 188–345.
Ramuš Cvetkovič, I. (2021) Space Law as Lex Specialis to International Law. Master Thesis, University of Ljubljana.
Ramuš Cvetkovič, I. (2024) ‘Two sides of the same coin? Examining the interrelation between the proposed new human right and the law governing outer space’, Digital War 5, pp. 59–65.
Report of the International Law Commission on the work of its fifty-third session: Commentaries on the draft ARSIWA (2001), Yearbook of the International Law Commission, 2001, Vol. II, Part Two, pp. 31–143.
Ruys, T. & Verhoeven, S. (2005) ‘Attacks by Private Actors and the Right of Self-Defence’, Journal of Conflict & Security Law 10(3), pp. 289–320.
Ryngaert, C. (2021) ‘Attributing Conduct in the Law of State Responsibility: Lessons from Dutch Courts Applying the Control Standard in the Context of International Military Operations’, Utrecht Journal of International and European Law 36(2), pp. 170–180.
Sancin, V. et al. (2009) Mednarodno pravo oboroženih spopadov. Ljubljana: Poveljstvo za doktrino, razvoj, izobraževanje in usposabljanje.
Sancin, V., Grünfeld, K. & Ramuš Cvetkovič, I. (2021) ‘Sodobni izzivi mednarodnopravnega urejanja vesolja’, Pravnik 138(1-2), pp. 45–84.
Sandeepa, B. & Kiran, M.V. (2009) ‘Anti Satellite Missile Testing: A Challenge to Article IV of the Outer Space Treaty’, NUJS Law Review 2(2), pp. 205–212.
Santos, R. (2021) Yes, We Can Weaponize an Asteroid, (accessed 27 August 2024).
Savoy, C.M. & Staguhn, J. (2022) ‘Global Development in an Era of Great Power Competition’, CSIS Briefs, March 2022, pp. 1–12.
Schladebach, M. (2008) ‘Schwerpunktbereich – Einführung in das Weltraumrecht’, Juristische Schulung 3, pp. 217–222.
Schladebach, M. (2020) Weltraumrecht. Tübingen: Mohr Siebeck.
Sharma, H. & Singh, P. (2012) ‘Territorial Sovereignty in the Outerspace: Spatial Issues’ in: Singh, R. et al. (eds.) (2012) Current Developments in Air and Space Law. Delhi: National Law University Press, pp. 272–282.
Shaw, M.N. (2017) International Law. Cambridge: Cambridge University Press.
Sinclair, J. & Patton, D. (2024) ‘SPACE and the Return of Rome’, Journal of Space Safety Engineering 2024, pp. 1–6.
Smith, W.A.
(2021) ‘Using the Artemis Accords to Build Customary International Law: A Vision for a U.S.-Centric Good Governance Regime in Outer Space’, Journal of Air Law and Commerce 86(4), pp. 661–700.
Starski, P. (2015) ‘Right to Self-Defense, Attribution and the Non-State Actor – Birth of the “Unable or Unwilling” Standard?’, Zeitschrift für ausländisches öffentliches Recht und Völkerrecht 75, pp. 455–501.
Stefoudi, D. (2024) ‘EU Space Law – Three reasons against, three reasons in favour’, EJIL:Talk!, 29 April 2024.
Su, J. (2010) ‘Use of Outer Space for Peaceful Purposes: Non-Militarization, Non-Aggression and Prevention of Weaponization’, Journal of Space Law 36(1), pp. 253–272.
Svec, M. (2022) ‘Outer Space, an Area Recognised as Res Communis Omnium: Limits of National Space Mining Law’, Space Policy 60, pp. 1–7.
Tepper, E. (2024) ‘The Laws of Space Warfare: A Tale of Non-Binding International Agreements’, Maryland Law Review 83(2), pp. 458–517.
The White House (2002) The 2002 National Security Strategy, (accessed 19 October 2024).
The White House (2016) Report on the Legal and Policy Frameworks Guiding the United States’ Use of Military Force and Related National Security Operations. Washington: The White House.
Thiruvanathapuram (2010) India readying weapon to destroy enemy satellites: Saraswat, (accessed 7 September 2024).
Tladi, D. (2013) ‘The Nonconsenting Innocent State: The Problem with Bethlehem’s Principle 12’, American Journal of International Law 107(3), pp. 570–576.
Tripathi, P.N. (2013) ‘Weaponisation and Militarisation of Space’, CLAWS Journal, Winter 2013, pp. 188–200.
Tsagourias, N. (2016) ‘Self-Defence against Non-state Actors: The Interaction between Self-Defence as a Primary Rule and Self-Defence as a Secondary Rule’, Leiden Journal of International Law 29, pp. 801–825.
United States Space Force (2024) Reoptimizing for Great Power Competition, (accessed 27 August 2024).
van Steenberghe, R. (2015) ‘From Passive Consent to Self-Defence after the Syrian Protest against the US-led Coalition’, EJIL:Talk!, 23 October 2015.
von der Dunk, F.G. (2011) ‘The Origins of Authorisation: Article VI of the Outer Space Treaty and International Space Law’, Space, Cyber, and Telecommunications Law Program Faculty Publications 69, pp. 1–19.
Wehtje, B. (2023) ‘Increased Militarisation of Space – A New Realm of Security’, Beyond the Horizon, 6 June 2023.
Williams, G.D. (2013) ‘Piercing the Shield of Sovereignty: An Assessment of the Legal Status of the ‘Unwilling or Unable’ Test’, UNSW Law Journal 36(2), pp. 619–641.
Wilmshurst, E. (2005) Principles of International Law on the Use of Force by States in Self-Defence. Chatham House, The Royal Institute of International Affairs.
Wilmshurst, E. (2006) ‘The Chatham House Principles of International Law on the Use of Force in Self-Defence’, International & Comparative Law Quarterly 55(4), pp. 963–972.
Wilmshurst, E. & Wood, M. (2013) ‘Self-Defense Against Nonstate Actors: Reflections on the “Bethlehem Principles”’, American Journal of International Law 107(2), pp. 390–395.
Wood, M. (2013) ‘International Law and the Use of Force: What Happens in Practice?’, Indian Journal of International Law 53, pp. 345–367.
Wright, D. et al. (2006) ‘An Introduction to Space Weapons’, Ensuring Space Security, Fact Sheet No. 1, pp. 1–2.
Yun, Z. (2018) ‘Space Commercialization and the Development of Space Law’, Oxford Research Encyclopedia of Planetary Science, 30 July 2018, pp. 1–20.
Ziemblicki, B. & Oralova, Y. (2021) ‘Private Entities in Outer Space Activities: Liability Regime Reconsidered’, Space Policy 56, pp. 1–11.
Zorzetto, S. (2012) ‘The Lex Specialis Principle and its Uses in Legal Argumentation. An Analytical Inquire’, Eunomía. Revista en Cultura de la Legalidad 3 (2012–2013), pp. 61–87.
Agora: Selected Aspects of Intersections Among Artificial Intelligence, Law, and the Right to Life

© The Author(s) 2024
DOI: 10.51940/2024.1.159-166
UDC: 342.7:004.8

Vasilka Sancin*

Agora: Selected Aspects of Intersections Among Artificial Intelligence, Law, and the Right to Life

1. Introduction

Artificial intelligence (AI) systems pose both immediate and long-term risks to human rights, necessitating AI governance that aligns with international norms and principles, including respect for human rights. Human rights are a well-established concept in legal discourse, and the right to life has, since the earliest codifications of human rights law, been recognised as the supreme right from which no derogation is permitted—even in situations of armed conflict or other public emergencies that threaten the life of a nation.1 The United Nations Human Rights Committee emphasises the right to life’s crucial significance for both individuals and society, asserting that “it is most precious for its own sake as a right that inheres in every human being, but it also constitutes a fundamental right, the effective protection of which is the prerequisite for the enjoyment of all other human rights and the content of which can be informed by other human rights.”2

Therefore, any new development influencing societal behaviour, including the development and use of new technologies such as those powered by AI systems, necessitates legal analysis. Such an analysis must necessarily include an assessment of potential human rights impacts, including on the right to life. This contribution aims to introduce the debate surrounding the various intersections among AI, law, and the right to life, as explored in the following contributions.
In contrast to the well-defined and extensively explained content of human rights in general—and the right to life in particular—through domestic and international laws, practices, and jurisprudence, the absence of legal definitions of AI suggests that analysis should turn to the practices that have emerged in this regard within various international organisations.

∗ PhD, Full Professor, Head of Department of International Law, Faculty of Law, University of Ljubljana, vasilka.sancin@pf.uni-lj.si; ORCID ID: orcid.org/0000-0002-1623-7278. The contribution is based on research conducted within the framework of the basic research project J5-3107 “Development and use of artificial intelligence in light of the negative and positive obligations of the state to guarantee the right to life” funded by the Slovenian Research and Innovation Agency.
1 Human Rights Committee, General Comment No. 36 on Article 6: right to life, § 2.
2 Ibid.

Zbornik znanstvenih razprav – letnik LXXXIV, 2024 • Ljubljana Law Review – vol. LXXXIV, 2024 • pp. 159–166 • ISSN 1854-3839 • eISSN: 2464-0077

2. Developments within International Organisations

In the absence of a universally codified definition of AI, one possible framing is the definition developed by the Organisation for Economic Co-operation and Development (OECD) in 2018 and revised in 2023.
This definition now considers an AI system as:

“a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments,”

adding that “different AI systems vary in their levels of autonomy and adaptiveness after deployment.”3

3 Grobelnik, Perset & Russel, 2024.

This definition also informed the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law,4 the first internationally legally binding treaty regulating AI, as well as Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (the Artificial Intelligence Act), which defines an AI system as:

“software developed using one or more techniques and approaches from Annex I and capable, for a defined set of objectives specified by a human, of generating outputs such as content, predictions, recommendations, or decisions that influence the environment with which they interact.”5

Both states and international organisations have addressed the impacts of AI on human rights. The OECD Council adopted recommendations as early as 2019, emphasising the need for trustworthy and responsible AI development and use.6 The United Nations Educational, Scientific and Cultural Organization (UNESCO) adopted the Recommendation on the Ethics of Artificial Intelligence in 2021, highlighting relevant values, principles, and implementation methods for AI governance.7 In October 2023,
the Secretary-General of the United Nations (UN) adopted a plan for digital cooperation and established the Advisory Body on Artificial Intelligence to conduct analyses and prepare recommendations for international AI governance.8 In September 2024, this Advisory Body issued its final report, Governing AI for Humanity.9

Further, in March 2024, the UN General Assembly (UNGA) adopted a resolution on reliable AI systems,10 followed by the Pact for the Future11 in September 2024, which includes multiple provisions addressing the increasing use of AI. Measure No. 30 acknowledges both the opportunities and risks posed by emerging technologies and calls for responsible and ethical research that upholds and promotes human rights. Additionally, this measure mandates the systematic incorporation of human rights considerations into regulatory and normative processes, highlighting the private sector’s role in adhering to ethical principles when developing new technologies. As will be discussed later, this Agora contributes significantly to the discussions needed to operationalise these goals.

The United Nations Human Rights Council (UNHRC), a subsidiary body of the UNGA, has also been actively engaged with issues related to AI’s impact on human rights.

4 Council of Europe, Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (2024), Article 2.
5 OJ L, 2024/1689, 12.7.2024.
6 OECD, Recommendation of the Council on Artificial Intelligence (2019).
7 UNESCO, Recommendation on the Ethics of Artificial Intelligence (2021).
In September 2024, UNHRC President Omar Zniber convened an informal discussion on new technologies, AI, and the digital divide.12 In his opening remarks, he stressed that the challenge of harnessing AI’s potential while safeguarding human rights is “one of the most pressing challenges of our time.” He emphasised the urgent need for clearer guidelines on the application of human rights protection standards in the digital age. AI development must be based on respecting and ensuring human rights to prevent the erosion of rights and the exacerbation of global inequalities.13

8 UNGA, Road map for digital cooperation: implementation of the recommendations of the High-level Panel on Digital Cooperation – Report of the Secretary-General, U.N. Doc. A/74/821 (2020).
9 UN AI Advisory Body, Governing AI for Humanity – Final Report (2024), (29. 1. 2025).
10 UNGA, Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development, A/78/L.49 (2024).
11 UNGA, Resolution adopted by the General Assembly on 22 September 2024, The Pact for the Future, A/RES/79/1 (2024).
12 UN Human Rights Council, High-Level Informal Presidential Discussion on New Technologies, Data, Artificial Intelligence, and the Digital Divide from a Human Rights Perspective: Summary Report (2024).
13 On file with the author.

3. The Importance of Legally Considering the Right to Life in the Era of AI

A common characteristic of new technologies, including AI systems, is that they enable and facilitate the synchronisation of online and physical space. They are not inert or neutral and often embody the values and prejudices of the organisations or individuals who create and use them.14 In recent years, there has been an avalanche of literature on AI, including within the legal domain.
This literature explores a broad range of topics, reflecting the complexities and societal impact of AI. It spans from foundational issues of regulation, governance, ethics, bias, liability, and accountability to more specialised areas, such as intellectual property, contract law, labour law, and autonomous weapons systems. A distinct and growing corpus of legal and academic literature focuses on AI and human rights, examining how AI intersects with, impacts, and challenges internationally recognised human rights frameworks, including those recognised in core human rights treaties at global and regional levels. Among the various human rights affected by the development and use of AI, the themes most often covered in this literature are the right to privacy, freedom of expression, equality and non-discrimination, freedom from arbitrary detention, and access to justice. It is, however, also quintessential to recognise AI’s increasing intersections with scenarios where life is directly or indirectly at stake. These technologies can both uphold and endanger the fundamental right to life, as recognised in international human rights frameworks, such as Article 6 of the International Covenant on Civil and Political Rights (ICCPR)15 and Article 2 of the European Convention on Human Rights (ECHR).16 The potential impacts of AI systems on various human rights have been acknowledged in recent European legislative developments, such as the risk-based approach in the EU AI Act,17 and the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law.18

The advances in science and the development of new technologies, including AI, are increasingly manifesting in various spheres, including individual life, society, the state, and the international community. Since the outbreak of the COVID-19 pandemic, reliance on AI in meeting daily basic needs and work commitments has grown drastically.
Automated decision-making has long influenced daily life, from route planning and online shopping to smartphone usage. In some countries, AI systems are already being used in policy-making, judicial processes, and administrative decision-making. It can thus be expected that in the future, data collection and (semi-)autonomous processing will enable the widespread use of sophisticated AI systems in areas of great importance for individuals and society (e.g., health, the judiciary, policy planning, and police control). Given the all-encompassing potential of AI technology, its development and use will inevitably encroach upon (and, in some cases, already encroach upon) fundamental human rights, including the right to life.

14 Sancin & Bobnar, 2024, p. 110.
15 International Covenant on Civil and Political Rights (opened for signature 16 December 1966, entered into force 23 March 1976) 999 UNTS 171.
16 European Convention for the Protection of Human Rights and Fundamental Freedoms (opened for signature 4 November 1950, entered into force 3 September 1953) ETS 5.
17 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), OJ L, 2024/1689, 12 July 2024 (AI Act).
18 Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, Council of Europe Treaty Series – No. [225], 2024.

Vasilka Sancin – Agora: Artificial Intelligence and the Right to Life: Selected Aspects of Intersections Among Artificial Intelligence, Law, and the Right to Life
Despite the fundamental nature of this right, there is currently no in-depth scientific research into the various aspects of the interaction between the development and use of AI technology and this fundamental right.

4. The Insights from Near and Far

The rapid advancement of AI has led to its integration into various sectors, including healthcare, finance, and national security. One of the most contentious debates revolves around the use of AI in the development of autonomous weapon systems (AWS). As Yuval Shany notes in his contribution To Use AI or Not to Use AI? Autonomous Weapon Systems and Their Complicated Relationship with the Right to Life, the increasing prevalence of AI technology developed or adapted for military use raises difficult questions about its compatibility with international law in general, and international human rights law (IHRL) in particular. Relying on the UN Human Rights Committee’s position, as reflected in General Comment No. 36, he examines the terms of the IHRL debate surrounding the introduction of AI technology into military contexts and its relationship to the right to life. He does so by engaging with three principal objections to introducing military AI into battlefield environments: the capacity of autonomous or semi-autonomous AI systems to properly apply international humanitarian law (IHL); concerns about the de facto lowering of humanitarian protection standards; and the ethical and legal implications of transferring certain life-and-death decisions from humans to machines. Yuval Shany thus engages with both the proponents of AWS and those who call for their prohibition. The former argue that AI-driven warfare has the potential to enhance precision, reduce human casualties, and ensure military efficiency, as machines, unlike human soldiers, are not driven by emotions, biases, or fatigue—factors that can lead to reckless decision-making on the battlefield.
They also contend that AI can process vast amounts of data in real time, identify threats with greater accuracy, and minimise collateral damage compared to human-controlled operations. Despite these potential benefits, AWS raise profound concerns about the right to life. Article 6 of the International Covenant on Civil and Political Rights explicitly states that “no one shall be arbitrarily deprived of his life,” placing a legal obligation on states to prevent unlawful killings, even during armed conflict.

Given the potential dangers posed by AWS, there have been increasing calls for global regulation. The United Nations and various human rights organisations advocate for strict controls or even a complete ban on fully autonomous lethal weapons. The 2021 UN Secretary-General’s Report on AI and Warfare urged states to adopt frameworks ensuring meaningful human control over AI-driven weapons. While some countries, including Russia and the United States, continue to develop AWS, others, such as Austria and Germany, support the prohibition of “killer robots.” The challenge lies in balancing military innovation with ethical responsibility, ensuring that AI remains a tool for protection rather than destruction. One thing is certain: as AI continues to evolve, the global community must ensure that technological advancements align with humanitarian values, rather than compromise them.

The next contribution, by Joana Gomes Beirão and Jan Wouters, titled Towards an International Legal Framework for Lethal Artificial Intelligence Based on Respect for Human Rights: Mission Impossible?, continues the critical debate on the potential use of AWS both within and beyond armed conflict, including in law enforcement.
It presents several international initiatives that have emerged in recent years aiming to establish both non-binding and binding rules for the development and use of AI based on respect for human rights. The authors focus on the OECD Recommendation on AI, the UNESCO Recommendation on the Ethics of AI, the INTERPOL and UNICRI Toolkit for Responsible AI Innovation in Law Enforcement, and the Council of Europe AI Convention.

Turning from the use of AWS to the emergence of AI systems in humanitarian assistance, Maruša T. Veber’s contribution, Artificial Intelligence and Humanitarian Assistance: Reassessing the Role of State Consent, offers an original insight into the complex web of international legal regimes involved. She analyses the notion of State consent to the delivery of humanitarian assistance supported by AI systems from the perspective of the general legal regime of humanitarian assistance and the specific rules derived from IHL and IHRL. She highlights the important distinction between strategic and operational consent to humanitarian assistance, arguing that valid reasons for withholding operational consent to AI-supported humanitarian assistance under IHL must be distinguished from the arbitrary withholding of strategic consent, which is prohibited whenever it amounts to a violation of other existing obligations of the State concerned (e.g., under international humanitarian law or human rights law). She explains that the non-consensual delivery of humanitarian assistance could be legally justified either through United Nations Security Council authorisation or by secondary rules of international law, particularly countermeasures.

The next two submissions share a common focus on space activities.
Anže Singer, in his contribution titled Artificial Intelligence in Space: Overview of the European Space Agency and Its Role in the AI Environment, discusses the importance of AI as an enabling technology for space missions, enhancing scientific output and mission efficiency. He examines relevant developments within the European Space Agency (ESA), which was recently joined by Slovenia, noting that while AI has been successfully implemented in some ESA activities, its use remains relatively rare in the space industry. This is largely because models developed within neural networks are not human-readable. He provides examples of successful AI application within the ESA’s own activities and explores concerns about the challenges that may arise in the AI and space sector.

Iva Ramuš Cvetkovič, in her contribution AI—A Possible Solution to the Threats against Human Lives Arising from Space Objects?, dives into the conundrum of threats posed to human lives in outer space, in airspace, and on Earth. Through an analysis of the existing international legal framework, she demonstrates its insufficiency in addressing these threats. Finally, she assesses the extent to which AI systems can be used to mitigate such threats and outlines the legal challenges that the use of AI in this context would bring. She evaluates whether AI-driven threat mitigation can be as effective as currently predicted.

The section concludes with Kristina Čufar’s contribution, AI Software/Hardware as Mind/Body Problem: Global Supply Chains, Shadow Workers, and Wasted Lives, in which she shifts the focus from ethical debates surrounding AI software-related issues to concerns related to AI hardware—an issue that has received significantly less attention in scholarly discourse.
She argues that understanding AI primarily as software, or an “artificial mind,” highlights only the supposedly new and exciting aspects of this technology, while ignoring the human and material costs of its fabrication. She proposes a conceptualisation of AI as both hardware and software, broadening the scope of ethical and legal issues that ought to be addressed through AI regulation. She argues that when the worldwide extraction of materials, labour, and data necessary for AI systems is seriously considered, AI emerges as yet another instance of colonial capitalism.

5. Conclusion

Legal scholarly exchanges on the complex legal issues involved in AI’s integration into various domains—warfare, humanitarian assistance, space exploration, and global supply chains—offer a unique opportunity to critically assess both the unprecedented opportunities and profound ethical dilemmas involved. The central challenge remains balancing technological progress with the protection of fundamental human rights, including the right to life. While AI has the potential to enhance precision and efficiency, and even to save lives, its unchecked development could jeopardise human dignity, create accountability gaps, and deepen global inequalities.

The development of autonomous weapon systems raises critical concerns regarding the right to life and the lack of legal oversight, making the adoption of a dedicated international legal framework an urgent necessity. Similarly, AI’s role in humanitarian assistance challenges traditional notions of state consent, requiring a reassessment of ethical deployment in crisis situations. In the space sector, AI holds promise for monitoring threats from space objects and optimising European Space Agency missions, yet its governance remains underdeveloped.
Moreover, AI’s role in global supply chains highlights concerns about shadow labour, ethical sourcing, and the mind-body dualism between AI software and hardware.

To ensure that AI systems advance human well-being and respect and protect the right to life, rather than endanger it, the global community must commit to multilateral regulation, ethical AI policies, and human-centred governance frameworks. AI should remain a tool for progress controlled by humans, not an unchecked force shaping an uncertain future.

References

Council of Europe, Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (2024).

Grobelnik, M., Perset, K., Russell, S. (2024) ‘What is AI? Can you make a clear distinction between AI and non-AI systems?’, (accessed 27 January 2024).

Human Rights Committee, General Comment No. 36 on Article 6: right to life (CCPR/C/GC/36).

OECD, Recommendation of the Council on Artificial Intelligence (2019).

Sancin, V., Bobnar, L. (2024) ‘The right to freedom of expression in the era of artificial intelligence systems’, Pravni letopis: zbornik Inštituta za primerjalno pravo pri Pravni fakulteti v Ljubljani, pp. 109–133.

UN AI Advisory Body, Governing AI for Humanity – Final Report (2024), (29. 1. 2025).

UNESCO, Recommendation on the Ethics of Artificial Intelligence, No. 61910 (2021), (28. 1. 2025).

UNGA, Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development, A/78/L.49 (2024).

UNGA, Resolution adopted by the General Assembly on 22 September 2024, The Pact for the Future, A/RES/79/1 (2024).

UNGA, Road map for digital cooperation: implementation of the recommendations of the High-level Panel on Digital Cooperation – Report of the Secretary-General, U.N. Doc. A/74/821 (2020).
UN Human Rights Council, High-Level Informal Presidential Discussion on New Technologies, Data, Artificial Intelligence, and the Digital Divide from a Human Rights Perspective: Summary Report (2024).

© The Author(s) 2024
Scientific Article
DOI: 10.51940/2024.1.167-188
UDC: 341.3:342.7:004.8, 623.09:004.8

Yuval Shany*

To Use AI or Not to Use AI? Autonomous Weapon Systems and Their Complicated Relationship with the Right to Life

Abstract

The increased prevalence of AI technology developed or adapted for military use raises difficult questions about the compatibility of this new technology with international law in general, and international human rights law (IHRL) in particular. The Human Rights Committee, the expert body entrusted with monitoring the application of the International Covenant on Civil and Political Rights, expressed its view in 2018 on the relationship between the emergence of new military AI and respect for the right to life. The article reviews the terms of the IHRL debate surrounding the introduction of AI technology into military contexts and its relationship to the right to life. Section one briefly reviews some actual and potential applications of AI in military contexts. Section two deals with three principal objections to introducing military AI to battlefield environments: the capacity of autonomous or semi-autonomous AI systems to properly apply international humanitarian law (IHL), concerns about the de facto lowering of standards of humanitarian protection, and the ethical and legal implications of transferring certain life-and-death decisions from humans to machines. Section three reviews, in light of these three principled objections, specific proposals by the ICRC to limit the use of AI in military contexts (limiting the scope and manner of use of autonomous weapon systems, and excluding unpredictable and lethal systems).
Section four reviews the main issues discussed in this article from the vantage point of the right to life under IHRL, as elaborated in General Comment No. 36.

Key words
autonomous weapon systems, right to life, international humanitarian law, human dignity, accountability, transparency, meaningful human control, ICRC, military AI.

* Hersch Lauterpacht Chair in Public International Law, The Hebrew University of Jerusalem. Prof. Shany served in 2013–2020 as a member of the Human Rights Committee.

The increased prevalence of AI technology developed or adapted for military use raises difficult questions about the compatibility of this new technology with international law in general, and international human rights law (IHRL) in particular.** This is because moving away from human discretion and agency towards decision-making by machines in contexts involving the use of lethal force implicates some of the most basic human rights, including the right to life. Indeed, the Human Rights Committee, the expert body entrusted with monitoring the application of the International Covenant on Civil and Political Rights,1 expressed its view in 2018 on the relationship between the emergence of new military AI and respect for the right to life. General Comment No. 36 on the Right to Life alludes, in the following language, to legal concerns relating to the development and use of autonomous weapon systems:

“65. States parties engaged in the deployment, use, sale or purchase of existing weapons and in the study, development, acquisition or adoption of weapons, and means or methods of warfare, must always consider their impact on the right to life.
For example, the development of autonomous weapon systems lacking in human compassion and judgement raises difficult legal and ethical questions concerning the right to life, including questions relating to legal responsibility for their use. The Committee is therefore of the view that such weapon systems should not be developed and put into operation, either in times of war or in times of peace, unless it has been established that their use conforms with article 6 and other relevant norms of international law.”2

The present article will review the terms of the IHRL debate surrounding the introduction of AI technology into military contexts and its relationship with the right to life. Due to time and space limitations, it will not deal with other human rights implicated by the use of AI in military contexts, including equality, privacy, and the emerging right not to be subject to automated decision-making.3 Furthermore, it will address, only to a limited degree, the parallel debate on the normative implications of military AI under international humanitarian law (IHL).4

** Thanks are due to Dr. Shereshevsky for his comments on an earlier draft of this article. The research for this article was supported by ERC Grant No. 101054745 (DigitalHRGeneration3).
1 International Covenant on Civil and Political Rights (ICCPR), 16 December 1966, 999 UNTS 171.
2 Human Rights Committee, General Comment No. 36: The Right to Life, UN Doc. CCPR/C/GC/36 (2018), § 65.
3 See, e.g., Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), Article 22, OJ L 119, 4 May 2016, p. 1.
4 For a comprehensive discussion of the legality of military AI under IHL, see Hua, 2019; Brenneke, 2018; Jensen & Alcala, 2019.
In section one, I will briefly review some actual and potential applications of AI in military contexts. These applications will serve as a factual background against which normative questions will be discussed in subsequent parts of the article. Section two deals with three principal objections to introducing military AI to battlefield environments: the capacity of autonomous or semi-autonomous AI systems to properly apply IHL, concerns about the de facto lowering of standards of humanitarian protection, and other ethical and legal implications of transferring certain life-and-death decisions from humans to machines. I will maintain in that part that, whereas some objections to military AI are compelling, others are contingent on the actual technological state of the art, on a distinction between lethal and non-lethal AI that is hard to maintain, and on an idealised—and, ultimately, unrealistic—portrayal of the qualities of human decision-making. Section three reviews, in light of these three principled objections, specific proposals by the ICRC to limit the use of AI in military contexts (namely, limiting the scope and manner of use of autonomous weapon systems, and excluding unpredictable and lethal systems). Finally, section four reviews the main issues discussed in this article from the vantage point of the right to life under IHRL, as elaborated in General Comment No. 36.

1. The Growing Use of AI in Military Contexts

The ‘AI revolution’—involving the transfer of decision-making power from human beings to computerised systems run by AI5—has not bypassed military organisations.
In fact, these organisations are proving to be a particular hotbed for the development of new AI technologies in light of the complex, multi-factor environments in which they operate; the vital need for speedy, precise and reliable decisions in military contexts; the possibility of increasing troop safety by placing machines rather than humans in the line of fire; and the considerable resources that security bodies can command—especially in an “arms race” context. Indeed, the world’s leading militaries have already introduced a number of sophisticated AI systems into their ranks, and increasingly rely on them in their operations.

Among the AI systems long in use by the US military, for example, one might mention the Joint Assistant for Development and Execution (JADE)—a set of software tools, employing AI technology, capable of quickly developing time-sensitive troop deployment plans on the basis of past and existing operational plans adapted to changing mission environments.6 In the field of air defence, the US Navy already makes use of the Aegis Ballistic Missile Defense (BMD) system, which automatically intercepts incoming missiles; its capacity is currently being upgraded by the introduction of AI technology to enable better identification of incoming threats and a faster selection of outgoing responses.7

In the field of offensive capabilities, a relatively straightforward weapon system used by the US Air Force is the High-Speed Anti-Radiation Missile (HARM) system, which is programmed to identify and target enemy air-defence systems.8 The increased reliance of such weapon systems on AI technology significantly enhances their loitering capacity.9 Other AI-based systems currently under development by the US military are Collaborative Operations in Denied Environments (CODE)—a weapon system consisting of autonomous aircraft that can fly in swarms, engage in long-term loitering over targets and carry out a variety of intelligence and targeting missions,10 and the Combined Joint All-Domain Command and Control (CJADC2)—an integrative system comprising data collection (sense), threat identification and response selection capacity (make sense), and reaction through AI-supported or controlled weapon systems (act).11 A final example is Project Maven—an AI-based imagery analysis software (which also utilises facial recognition technology), developed by the US Department of Defense from 2017 onwards, with the aim of designating targets for military attacks.12

Of course, while the US is a global leader in developing military AI, it is by no means the only developer and user of such technology. Other countries, such as China,13 Russia,14 France15 and Israel,16 also possess significant capacities in this field, and they, like the US, are expected to share these with their allies as well. This brief survey thus suggests that military AI does not represent a “weapon of the future”, but rather forms part of the current state of the art.

5 See, e.g., Makridaki, 2017.
6 Morgan et al., 2020.
Furthermore, the more sophisticated these weapon systems become—due to the evolution of their data collection, data storage, data analysis and overall functional capacities—the greater the tendency of military organisations might be to rely on them and to vest them with autonomous or semi-autonomous decision-making power. This process of substituting human decision-makers with machines, including in matters of life and death, nonetheless raises difficult ethical and legal concerns.

2. The Case Against LAWS

Most ethical discussions of military AI focus on the development, deployment and use of lethal autonomous weapon systems (LAWS), and most legal discussions concerning LAWS revolve around their compatibility with IHL.

7 Center for Strategic and International Studies, Maritime Security Dialogue: The Aegis Approach with Rear Admiral Tom Druggan, 22 November 2021, .
8 Hollings, 2021.
9 Ibrahim, 2022.
10 UAS Vision, DARPA Reveals Details of CODE Program, 2019, .
11 Department of Defense, Summary of the Joint All-Domain Command and Control (CJADC2) Strategy, March 2022, .
12 Brewster, 2021.
13 Morgan, 2020, pp. 60–82.
14 Ibid., pp. 83–99.
15 See, e.g., Manuel, 2022.
16 See, e.g., Min, 2022; Mimran, Pacholska, Dahan & Trabucco, 2024; Swoskin, 2024.
Although IHL is the specific branch of international law governing the conduct of hostilities, its norms are highly relevant to IHRL as well, given the considerable substantive overlap between IHL and IHRL, and their concurrent application in situations of armed conflict.17 The ethical and legal debates around LAWS have accompanied the lengthy—and, so far, inconclusive—process of negotiating an agreement on their development, deployment and use by a Group of Governmental Experts (GGE) convened by the contracting parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons (CCW).18

Although no uniform definition of LAWS exists, the literature tends to regard them as weapon systems with the autonomous capacity to identify and select targets and to apply lethal force to them.19 While most military AI systems currently in use do not involve fully autonomous weapon systems—since they still feature a human “in the loop” or “on the loop”—there is little doubt that the combined effect of technologies for target identification (such as those developed by Project Maven) and autonomous targeting capacity (such as that developed in CODE) could be harnessed to develop weapon systems capable of identifying and killing human beings with no human involvement (i.e., with humans “off the loop”). Furthermore, even activating existing AI weapon systems programmed to target military objects—such as radar stations—may lead to loss of human life.
Indeed, there is some anecdotal evidence that an attack carried out in 2020 by a Turkish-manufactured AI-powered drone on a militant convoy in Libya resulted in casualties.20 Finally, as explained below, the difference between autonomous AI and semi-autonomous AI (involving humans “in” or “on the loop”) might not be as sharp as it seems, since human control over sophisticated military AI systems is eroding across the board, and the ability of human operators to exercise effective oversight is increasingly called into question.21

At the heart of the ethical and legal debate on LAWS lie three main issues:
1. Concerns regarding their ability to apply IHL properly;
2. The humanitarian implications of losing the moderating influence of human involvement in battlefield decisions; and
3. Other ethical and legal questions associated with letting machines decide to kill humans.

Taken together, these factors arguably cast doubt on the compatibility of LAWS with international law, generally, and, as section four will show, with the right to life under IHRL in particular.

2.1. Law Application

One common criticism levelled against the development, deployment, and use of LAWS concerns the risk of mistakes in target identification, risk assessment and cost–benefit analysis, which could lead to the misapplication of IHL rules—especially the principles of distinction and proportionality.

17 See, e.g., Legality of the Threat or Use of Nuclear Weapons, 1996 ICJ 226, 240; Human Rights Committee, General Comment No. 36, § 64. See also Shany, 2023.
18 Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or Have Indiscriminate Effects, 10 October 1980, 1342 UNTS 137.
19 See, e.g., Taddeo & Blanchard, 2022. See also ICRC, 2022.
20 See Nasu, 2021.
Given the interrelationship between IHL and IHRL, such misapplication of IHL is also likely to entail a violation of IHRL.22 Arguably, the risk of misapplying IHL and IHRL might justify outlawing LAWS under existing law, regardless of the outcome of the negotiations under the auspices of the CCW GGE. Human Rights Watch—at the forefront of the “Stop Killer Robots” campaign—published a report in 2021 in which its researchers, together with researchers from a Harvard Law School law clinic, stated:

“It would be difficult for fully autonomous weapons systems, which would select and engage targets without meaningful human control, to distinguish between combatants and noncombatants as required under international humanitarian law […] [C]omplying with the principle of distinction frequently demands the ability to assess an individual’s conduct and intentions, not just appearance. Such assessments may require interpreting subtle cues in a person’s tone of voice, facial expressions, or body language or being aware of local culture […] Humans possess the unique capacity to identify with other human beings and are thus equipped to understand the nuances of unforeseen behavior in ways that machines, which must be programmed in advance, simply cannot.”23

In the same vein, the report contends that the principle of proportionality cannot be properly applied by a machine:

“First, because a machine would have trouble distinguishing military from civilian targets, it will face obstacles to assessing the military advantage and civilian harm that would come from a possible attack.

21 See, e.g., Renic & Schwartz, 2023.
22 See Human Rights Committee, General Comment No. 36, § 64.
23 Human Rights Watch and International Human Rights Clinic – Harvard Law School, 2021, p. 7.
Second, the proportionality principle involves a qualitative balancing test that requires the application of human judgment and moral and ethical reasoning […] human characteristics that machines seem unlikely to possess through their programming […] Third, proportionality requires contextual decisions at the moment of attack. The lawful response to a situation could change considerably by slightly altering the facts, and it would be impossible to pre-program a robot to be prepared for the infinite number of scenarios that it could face.”24

Comparable objections, focusing on the capacity of LAWS to apply IHL properly, have also been raised or discussed by other NGOs,25 UN officials26 and academic researchers in this field.27

It is hard to disagree that relying on LAWS to apply IHL in complex battlefield conditions may yield false negatives and false positives, leading to legal mistakes—if not outright violations—given the problems AI systems face when attempting to develop situational awareness and respond to unforeseen circumstances.28 Serious doubts also remain as to whether the difficult, multi-factored and value-laden act of balancing military necessity against humanitarian considerations that underlies IHL proportionality can be properly undertaken by an algorithm. Still, it is also difficult to deny that human beings applying IHL are prone to error, especially in the ‘fog of war’ and when responding to surprising developments on the ground; and some also commit intentional violations. Moreover, doubts regarding the feasibility of implementing the principle of proportionality in a fixed or predictable manner have been raised with regard to human decision-makers as well.29 At least from a rule-consequentialist point of view, a key question may be: who is the less accident-prone decision-maker—the human soldier or the AI-based weapon system? The answer to this question appears largely contingent on developments in the relevant AI technology, including its ability to predict and emulate human decision-making.

24 Ibid., p. 8.
25 See, e.g., Article 36, 2019.
26 See, e.g., Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, UN Doc. A/HRC/23/47 (2013), §§ 63–74 (presenting the main features of the debate around the ability of LAWS to properly apply IHL).
27 See, e.g., Sharkey, 2012, pp. 787 and 788–790 (arguing that LAWS lack the situational awareness and common-sense reasoning needed to apply the principles of distinction and proportionality). See also McFarland, 2015, pp. 1313 and 1335 (claiming that battlefield decisions by LAWS will be different from those reached by humans because they will be based on more general and pre-determined rules and on anticipated circumstances).
28 For a discussion of the difference between legal mistakes and legal violations, see Pacholska, 2023.
29 See, e.g., Statman et al., 2020.
Importantly, any such comparison ought to be made not between a machine and an idealised version of a perfect human being, but rather between a machine and a realistic version of a human being, whose decisions are likely to suffer from human imperfections, biases, and frailties.30

In the long run, as in other highly complex fields of decision-making requiring speedy reactions and multifaceted analysis (such as driving), it is unlikely that human decision-makers could keep pace with developments in machine sensory and analytical capacity, given the constant improvements in computer-based data collection and storage capacity, processing speed and power, and system resilience.31 As a result, machines are expected, sooner or later, to make better-informed, more accurate, and faster decisions than human soldiers regarding the choice of means and methods of warfare necessary to attain military objectives with greater efficiency, while inflicting the least possible extent of collateral damage.32 Furthermore, while there remain serious methodological difficulties in quantifying the many variables comprising IHL proportionality analysis, such difficulties are not likely to be insurmountable (and, as noted above, they also pose a serious challenge for human decision-making).33

It is also noteworthy that—unlike human soldiers—machines do not grow tired, frustrated, or confused; nor do they rely on inaccurate heuristics (or hunches) as decision-making short-cuts, as humans do.34 Rather, they are expected to strictly follow pre-determined rules of conduct—including IHL rules—even in the most stressful of circumstances (including when their own continued existence is on the line), and to apply them in the exact manner in which they were trained (for example, by studying past patterns of human conduct or drawing statistical predictions about future human decisions).
And the more sophisticated the algorithms, machine-learning capabilities, and training data available to military AI become, the smaller the likelihood of their involvement in deadly mistakes or legal violations (still, as explained below, the more sophisticated military AI becomes, the harder it is for humans to exercise effective control and monitoring over it).

In fact, replacing humans with machines in the line of fire may enable decision-makers to adopt higher standards of IHL protection than would otherwise have been possible. Such standards may include a “shoot second”35 or a “double check”36 rule of engagement, a “no civilian casualties” proportionality formulation,37 and “limited tolerance for error” settings in battlefield operations.38 Indeed, it has already been alleged that, once it is established that LAWS can offer higher levels of IHL protection than human soldiers, there may be a legal obligation to opt for the LAWS-based approach.39

30 Cf. Zerilli et al., 2019; Heller, 2023.
31 See, e.g., Korteling et al., 2021; Schmitt, 2013.
32 See, e.g., Winter, 2022, pp. 18–19.
33 See, e.g., Schuller, 2019; Winter, 2022, pp. 16–17.
34 See, e.g., Walker, 2021, pp. 10 and 16. For more information on reliance on heuristics, see Tversky & Kahneman, 1974.

2.2. The Moderating Impact of Human Involvement

Another cluster of objections to the application of LAWS to battlefield situations revolves around the inability of machines to exercise human compassion and discretion, and to moderate the application of IHL in circumstances where following the letter of the law would result in harsh consequences from a moral standpoint.
Examples of situations in which human compassion and discretion might provide a higher level of humanitarian protection than strict application of IHL include: using non-lethal weapons against child soldiers;40 choosing to capture rather than kill enemy combatants even in the absence of a legal obligation to do so;41 and refraining from targeting soldiers withdrawing from the battlefield under conditions in which they are unlikely to rejoin the armed conflict.42 Arguably, delegating decision-making in such cases from humans to machines that operate on the basis of “black letter” rules might result in the loss of the additional safeguards human soldiers sometimes afford as a matter of discretion, leading to an overall reduction in the level of humanitarian treatment in and around the battlefield.43 This sentiment, regarding a potential increase in the lethality of battlefield conditions due to the introduction of LAWS, appears to underlie some of the concerns voiced in paragraph 65 of General Comment No. 36, which reads into the right to life under IHRL certain humanitarian considerations that go beyond those found in the language of IHL rules.44

35 Geiss, 2016. But see Sassòli, 2014, pp. 308 and 336 (alleging that “conservative programming” is not likely to be sustainable, given the loss of military advantage).
36 See, e.g., Geiss, 2016.
37 Cf. Runkle, 2015.
38 Cf. Bellotti, 2021.
39 Cf. Jensen, 2020, pp. 26 and 55.
40 See, e.g., Barrett, 2019.
41 See, e.g., Schmitt, 2013.
42 See, e.g., Cook & Hamann, 1994.
43 See, e.g., Human Rights Watch, p. 9.
44 Human Rights Committee, General Comment No. 36, § 65.

Here too, doubts have been expressed in the literature concerning the comparative advantage of human beings over machines in affording enemy soldiers, civilians, and persons hors de combat humane treatment—over and above applicable legal obligations. Some commentators point out that while certain human emotions, such as compassion and empathy, may lead to higher standards of humanitarian treatment, other human emotions, such as fear, anger, or revenge, can generate the opposite result.45 Furthermore, although algorithms cannot experience emotions, they can be programmed to emulate emotion-driven human conduct or to follow a course of action deemed consistent with positive human emotions, such as compassion or empathy (e.g., they can be programmed never to target children with lethal weapons under any circumstances).46

2.3. Other Ethical and Legal Concerns

Even if LAWS are capable of affording an equivalent level of humanitarian protection to that afforded by human soldiers, the very delegation of decision-making from humans to machines raises difficult ethical and legal concerns for which no satisfactory technological solution appears available. First, referring decisions over life and death to a computer algorithm engaged in risk assessment and cost-benefit analysis, without effective human supervision and control, is difficult to reconcile with moral norms requiring respect for human dignity and life.47 Arguably, treating a human being as nothing more than a node generating data about risks arising from his/her predicted conduct, or about protections due by virtue of the low-risk category to which he/she belongs, is dehumanising in a profound sense.48

Second, delegating decisions to machines creates an agency problem, potentially resulting in a lack of moral responsibility and legal accountability.49 AI weapon systems do not have moral agency, and it is possible that none of the human actors involved in developing, introducing, and deploying them in specific theatres of hostilities will have a full grasp of the system’s shortcomings and the precise battlefield conditions in which it is deployed.
This hampers any attempt to assign ethical or legal responsibility for breaches of IHL or IHRL.50

Third, as with other applications of AI, military AI raises difficult questions of transparency—in particular, explainability and traceability.51 Difficulties in understanding the reasons underlying machine decisions, exercising control over them, and monitoring their operations further undermine the conditions for ethical and legal accountability, and ultimately weaken the rule of law.52

Simultaneously, one might acknowledge that even before the introduction of LAWS, military organisations had already come to rely on weapon systems subject to limited human control (e.g., torpedoes),53 on long-distance control (e.g., drones),54 and on big data for targeting decisions (e.g., “signature strikes” following long-term data collection to establish “patterns of life” for suspected militants).55 In other words, they have long employed weapons, means and methods of warfare featuring some of the same ethical and legal issues afflicting LAWS: delegating significant decision-making capacity to machines, and operating with a reduced sense of accountability and limited transparency.

45 See, e.g., Sassòli, 2014, p. 318; Price, 2016.
46 Cf. Xiao et al., 2016.
47 See, e.g., Asaro, 2012; Wagner, 2014.
48 See, e.g., Laitinen & Sahlgren, 2021, pp. 10–11.
49 See, e.g., Taddeo & Blanchard, 2022, p. 37; Human Rights Watch, Mind the Gap: The Lack of Accountability for Killer Robots, 2015.
50 See, e.g., Amoroso & Giordano, 2019.
51 See, e.g., Atherton, 2022.
Furthermore, as noted above, even military AI systems that leave humans the ultimate decision whether or not to use lethal force against a specific target (“humans in the loop” or “humans on the loop”) often constrain or shape human decision-making—through “black box”56 and “automation bias”57 features—in ways that render such human supervision and control merely nominal from a practical viewpoint. In other words, the increased reliance on military AI is leading to an erosion of human decision-making capacity across the board, and excessive reliance on distinctions between humans “in the loop”, “on the loop”, and “off the loop” might perpetuate an illusion of effective human supervision and control which bears little resemblance to reality. Put differently, it is questionable whether opposition to LAWS can be meaningfully distinguished, over time, from broader opposition to military AI—with all the operational implications such opposition might entail.

52 See, e.g., Rosengrün, 2022.
53 See, e.g., Work, 2021.
54 See, e.g., Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Philip Alston, Study on Targeted Killings, UN Doc. A/HRC/14/24/Add.6 (2010), § 84.
55 See, e.g., Gibson, 2021. The legality of such practices under IHL has, however, been challenged. See Heller, 2013.
56 See, e.g., Schwartz, 2018.
57 See, e.g., Cabitza, 2019, pp. 283 and 293.
58 For a discussion, see, e.g., Geiss & Zimmermann, 2017, p. 215.

3. The Position of the ICRC

It is against the background of the extensive discussion about the conformity of LAWS with IHL and IHRL that the position of the ICRC on the legality of LAWS is particularly interesting. This is both because of the pride of place the ICRC occupies in the field as guardian and promoter of IHL,58 and because its position directly engages with the relationship between LAWS, IHL, and broader humanitarian considerations, including other ethical and legal concerns.

In 2021, an ICRC policy paper proposed the following recommendations relating to the use of autonomous weapon systems (AWS):

1. Unpredictable AWS should be expressly ruled out, notably because of their indiscriminate effects. This would best be achieved through a prohibition on AWS that are designed or used in such a way that their effects cannot be sufficiently understood, predicted and explained.
2. In light of ethical considerations to safeguard humanity and to uphold IHL rules for the protection of civilians and combatants hors de combat, the use of AWS to target human beings should be ruled out. This would best be achieved through a prohibition on AWS that are designed or used to apply force against persons.
3. In order to protect civilians and civilian objects, uphold the rules of IHL, and safeguard humanity, the design and use of AWS that would not be prohibited should be regulated, including through a combination of:
– limits on the types of target, such as constraining them to objects that are military objectives by nature
– limits on the duration, geographical scope, and scale of use, including to enable human judgement and control in relation to a specific attack
– limits on situations of use, such as constraining them to contexts where civilians or civilian objects are not present
– requirements for human–machine interaction, notably to ensure effective human supervision, timely intervention, and deactivation.59

It is noteworthy that the overarching framework for the ICRC recommendations is a call on states to “adopt new binding rules” to give effect to the recommendations.60 In other words, the policy paper presents itself as a proposal for new lex ferenda.
It does not claim that LAWS are strictly banned by existing IHL (the ICRC does maintain, however, that “it is difficult to envisage realistic combat situations where LAWS use against persons would not pose a significant risk of IHL violations”).61 Furthermore, a central recommendation found in the policy paper—to ban the use of autonomous weapon systems to target human beings—is based first and foremost on ethical, and not strictly legal, considerations relating to human dignity.62

Other aspects of the ICRC position are grounded, however, in traditional legal considerations: concerns relating to unpredictability, accurate target selection and collateral harm mirror the concerns about the capacity of LAWS to properly apply IHL discussed in Part Two. The policy paper notes in this regard certain specific concerns relating to military operations in urban environments and the impact of unforeseen circumstances in military operations involving AWS engaged in long-term loitering. In addition, the policy paper raises concerns about legal accountability due to the limited capacity for understanding, predicting, and explaining the effects of autonomous weapon systems, the inadequate level of human control over them, and the broad scope of discretion afforded by such systems to algorithms.

59 ICRC Position On Autonomous Weapon Systems, 2021, p. 11.
60 Ibid.
61 Ibid., p. 9.
62 Ibid., p. 8.
It may be noted in this regard that the recommendation for a “human on the loop” in the ICRC position paper, including retaining the power to deactivate autonomous weapon systems, appears to go beyond the guiding principles adopted by the 2019 GGE on LAWS, which only alluded to placing human–machine interaction within an accountability framework, including a responsible chain of command and control.63 Whereas the GGE was unable to reach consensus on a definition of the “meaningful human control” standard,64 the ICRC proposed specific criteria for the exercise of such power of supervision and control.

The upshot of the position espoused in the policy paper is that the use of LAWS against human beings is considered by the ICRC to be unethical and legally problematic—though not clearly legally impermissible. In practical terms, barring a specific agreement relating to the development, deployment, and use of LAWS, the legal problems identified in the policy paper would need to be reviewed in the course of new weapon legality assessments, pursuant to Article 36 of the First Additional Protocol to the Geneva Conventions.65

63 Report of the 2019 session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, UN Doc. CCW/GGE.1/2019/3 (2019), Annex IV (“(c) Human-machine interaction, which may take various forms and be implemented at various stages of the life cycle of a weapon, should ensure that the potential use of weapons systems based on emerging technologies in the area of lethal autonomous weapons systems is in compliance with applicable international law, in particular IHL. In determining the quality and extent of human-machine interaction, a range of factors should be considered including the operational context, and the characteristics and capabilities of the weapons system as a whole; (d) Accountability for developing, deploying and using any emerging weapons system in the framework of the CCW must be ensured in accordance with applicable international law, including through the operation of such systems within a responsible chain of human command and control”).
64 Kwik, 2022.
65 Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts, 8 June 1977, Article 36, 1125 UNTS 3 (“In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party”).

As for the ethical issues raised, a fundamental dilemma that the ICRC policy paper avoids discussing is whether technological developments that ultimately result in better application of IHL by machines than by humans could justify resorting to them, despite the troubling implications of authorising machines to kill humans. Furthermore, one might ask whether placing “human on the loop” limits on the use of algorithms to ensure legal compliance and accountability would be tenable in the long run, given the growing gap between algorithmic and human capacity.
The better machines become at fast and complex decision-making, the less accessible and understandable their decisions will inevitably be to humans, and the less effective the supervision and control that humans can exercise over their operations.66 In the long run, difficult ethical and legal trade-offs between performance quality and the quality of supervision and control over performance may present themselves to policy-makers and their legal advisers.

66 Cf. Milmo, 2024 (citing Geoffrey Hinton: “how many examples do you know of a more intelligent thing being controlled by a less intelligent thing”).

4. General Comment 36

The doubts as to whether IHL clearly prohibits the use of LAWS, as discussed in previous sections, underscore the significance of broadening the scope of the analysis so as to include IHRL norms as well. The advantage of IHRL over IHL in this regard is that it explicitly and implicitly recognises many of the normative notions underlying concerns about the development, deployment, and use of LAWS—concerns for which IHL does not offer a dedicated vocabulary—such as humanitarian protection that goes beyond the strict requirements of IHL, and the ethical and legal implications of authorising machines to kill humans without effective supervision and control by human beings. In other words, IHRL offers protection both in situations governed by IHL (where IHRL provides overlapping protection) and in cases where IHL does not appear to constrain decision-making, thus inviting the application of humanitarian and other ethical and legal considerations.

An example of a broad ethical and legal consideration influencing the scope of protections afforded in and around the battlefield is the objection to permitting machines to kill humans, which is based on the notion of human dignity. This notion is found in Article 1 of the Universal Declaration of Human Rights67 and in the Preambles to both Covenants from 1966.68 On that basis, the Human Rights Committee explained
in General Comment No. 36 that the right to life “concerns the entitlement of individuals to be free from acts and omissions that are intended or may be expected to cause their unnatural or premature death, as well as to enjoy a life with dignity”.69 It could be claimed that the development, deployment and use of LAWS, involving the delegation of life-and-death decisions to machines lacking human agency, are prima facie incompatible with the right to life with dignity. Such an approach seems consistent with the development, outside the context of military AI, of a right not to be subject to automated decisions over significant matters.70

67 Universal Declaration of Human Rights, 10 December 1948, Article 1, GA Res. 217A (III) (1948) (“All human beings are born free and equal in dignity and rights”).
68 ICCPR, preamble (“Considering that, in accordance with the principles proclaimed in the Charter of the United Nations, recognition of the inherent dignity and of the equal and inalienable rights of all members of the human family is the foundation of freedom, justice and peace in the world; Recognizing that these rights derive from the inherent dignity of the human person”). See also International Covenant on Economic, Social and Cultural Rights, 16 December 1966, preamble, 993 UNTS 3.
69 Human Rights Committee, General Comment No. 36, § 3.
70 General Data Protection Regulation, Article 22.

In the same vein, the notions of transparency and accountability mentioned above are strongly related to procedural dimensions of IHRL protection. Here too, reviewing General Comment No. 36 could be instructive. The Comment reads into Article 6 a normative expectation to report, review and investigate certain lethal incidents;71 a recommendation for the evaluation and monitoring of the impact of certain weapons on the right to life;72 an obligation to effectively monitor and control the involvement of private actors in the application of lethal force;73 a duty to “take adequate measures of protection, including continuous supervision, in order to prevent, investigate, punish and remedy arbitrary deprivation of life by private entities”;74 and a requirement “to investigate and, where appropriate, prosecute the perpetrators of such incidents, including incidents involving allegations of excessive use of force with lethal consequences”.75 Although these specific obligations—some of which represent soft law and some hard law—were not formulated with a view to addressing the risks to the right to life posed by LAWS, they could apply thereto mutatis mutandis, and entail requirements of transparency and accountability for all cases involving the use of military AI.

71 Human Rights Committee, General Comment No. 36, § 13.
72 Ibid., § 14.
73 Ibid., § 15.
74 Ibid., § 21.
75 Ibid., § 27.

Indeed, the specific paragraph that addresses the challenge of autonomous weapon systems—paragraph 65 (whose text is provided in the Introduction to this article)—explicitly sets out an obligation to consider the impact on the right to life of all new weapons, and calls for a moratorium on the development, deployment and use of autonomous weapon systems until their compatibility with Article 6 of the ICCPR and other relevant norms of international law has been established.76 This formulation appears to be influenced by the legality assessment process found in Article 36 of the First Additional Protocol to the Geneva Conventions.

It is interesting to note that the Human Rights Committee singled out, in paragraph 65, several problematic features in the operation of autonomous weapon systems: lack of human compassion and judgment, and questions of legal responsibility. These issues relate to all three levels of criticism discussed in section two: proper law application (lack of judgment), additional humanitarian considerations (compassion), and other legal and ethical concerns (legal responsibility). Whereas under IHL, lex lata focuses only on the capacity to properly apply the law, the IHRL framework is, as explained above, broad enough to capture more abstract notions of human dignity, humanitarian protection, accountability and transparency. Still, even under IHRL, the Committee did not call for an outright ban on LAWS, but rather for extra caution in their development, deployment and use. It cannot be excluded that, once sufficient empirical data has been gathered concerning the ability of future versions of LAWS to comply with IHRL (and IHL), and especially after adequate safeguards concerning transparency, ex ante supervision, real-time control and ex post accountability have been put in place, they could be regarded as IHRL-compatible.

The ability of bodies such as the Human Rights Committee to continuously monitor states’ records in developing, deploying and using military AI during periodic reviews of state reports under relevant human rights instruments77 provides these bodies with a unique opportunity to fine-tune the interpretation and application of specific IHRL norms governing military AI. A similar contribution can be made by the work of UN special procedures operating under the auspices of the Human Rights Council.
One question that lies beyond the scope of the present discussion—but which might nonetheless be considered in the future by IHRL-applying bodies—is whether the growing reliance on military AI increases the propensity to resort to military force in ways that violate the prohibition against the use of force in international law and, by implication, Article 6 of the ICCPR.78

76 Ibid., § 65.
77 See, e.g., ICCPR, Article 40.
78 See Human Rights Committee, General Comment No. 36, § 75.

5. Conclusion

Military AI is already changing how armed forces operate, prompting a growing reliance on machines to replace humans in decision-making. While this development raises difficult ethical and legal issues—especially given doubts about the quality of machine performance, aversion to machines making fateful decisions for human beings, and the chronic problems of transparency and accountability afflicting the use of AI—military AI might over time also improve the quality of decisions in and around the battlefield, potentially resulting in better compliance with IHL and IHRL.

As a result, decision-makers might sooner or later face the dilemma of whether—after appropriate impact and risk assessments have been conducted—to develop, deploy and use LAWS as a cost-effective method to improve compliance with IHL and enhance humanitarian protections. Even then, the IHRL framework appears more conducive than the existing IHL framework to consolidating specific normative expectations relating to human dignity, transparency and accountability, possibly directing the field’s development towards patterns of machine–human interaction that provide safeguards against violations of applicable IHRL and IHL standards.

References

Amoroso, D. & Giordano, B.
(2019) ‘Who Is to Blame for Autonomous Weapons Systems’ Misdoings?’ in: Carpanelli, E. & Lazzerini, N. (eds.) Use and Misuse of New Technologies. Springer.
Article 36 (2019) Policy Note: Targeting People – Key issues in the regulation of autonomous weapons systems.
Asaro, P. (2012) ‘On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-making’, 94 International Review of the Red Cross 687.
Atherton, K. (2022) ‘Understanding the Errors Introduced by Military AI Applications’, Brookings Tech Stream, 6 May.
Barrett, R.C. (2019) ‘Humanising the Law of Targeting in Light of a Child Soldier’s Right to Life’, 27 The International Journal of Children’s Rights 3.
Bellotti, M. (2021) ‘Helping Humans and Computers Fight Together: Military Lessons from Civilian AI’, War On The Rocks, 15 March.
Brenneke, M. (2018) ‘Lethal Autonomous Weapon Systems and their Compatibility with International Humanitarian Law: A Primer’, Yearbook of International Humanitarian Law 59.
Brewster, T. (2021) ‘Project Maven: Startups Backed By Google, Peter Thiel, Eric Schmidt And James Murdoch Are Building AI And Facial Recognition Surveillance Tools For The Pentagon’, Forbes, 8 September.
Cabitza, F. (2019) ‘Biases Affecting Human Decision Making in AI-Supported Second Opinion Settings’ in: Torra, V., et al. (eds.) Modelling Decisions for Artificial Intelligence. Springer.
Center for Strategic and International Studies (2021) Maritime Security Dialogue: The Aegis Approach with Rear Admiral Tom Druggan, 22 November.
Cook, M.L. & Hamann, P.A. (1994) ‘The Road to Basra: A Case Study in Military Ethics’, 14 The Annual of the Society of Christian Ethics 207.
Department of Defense (March 2022) Summary of the Joint All-Domain Command and Control (CJADC2) Strategy.
Geiss, R.
(2016) Autonomous Weapons Systems: Risk Management and State Responsibility, submission to Third CCW meeting of experts on lethal autonomous weapons systems (LAWS), Geneva, 11–15 April.
Geiss, R. & Zimmermann, A. (2017) ‘The International Committee of the Red Cross: A Unique Actor in the Field of International Humanitarian Law Creation and Progressive Development’ in: Geiss, R., Zimmermann, A. & Haumer, S. (eds.) Humanizing the Laws of War: The Red Cross and the Development of International Humanitarian Law. Geneva: International Committee of the Red Cross.
Gibson, J. (2021) ‘Death by Data: Drones, Kill Lists and Algorithms’, E-International Relations, 18 February.
Heller, K.J. (2013) ‘“One Hell of a Killing Machine”: Signature Strikes and International Law’, 11 Journal of International Criminal Justice 89.
Heller, K.J. (2023) ‘The Concept of “The Human” in the Critique of Autonomous Weapons’, 15 Harvard National Security Journal 1.
Hollings, A. (2021) ‘America’s Loitering Radar-Hunting Missile Is Due For A Comeback’, Sandboxx, 14 December.
Hua, S.-S. (2019) ‘Machine Learning Weapons and International Humanitarian Law: Rethinking Meaningful Human Control’, 51 Georgetown Journal of International Law 117.
Human Rights Watch (2015) Mind the Gap: The Lack of Accountability for Killer Robots.
Human Rights Watch and International Human Rights Clinic – Harvard Law School (December 2021) Crunch Time on Killer Robots: Why New Law Is Needed and How It Can Be Achieved.
Ibrahim, A. (2022) Loitering Munitions as a New-Age Weapon System, Centre for Strategic and Contemporary Research, 5 December.
ICRC (2021) Position On Autonomous Weapon Systems.
ICRC (2022) What you need to know about autonomous weapons.
Jensen, E.T.
(2020) ‘The (Erroneous) Requirement for Human Judgment (and Error) in the Law of Armed Conflict’, 96 International Law Studies 26. 186 Zbornik znanstvenih razprav – letnik LXXXIV, 2024 LjubLjana Law Review, voL. LXXXiv, 2024 Korteling, J.E. (Hans), et al. (2021) ‘Human versus Artificial Intelligence’, Frontiers Artificial Intelligence (online edition), . Kwik, J. (2022) ‘A Practicable Operationalisation of Meaningful Human Control’, 11 Laws 43. Laitinen, A. & Sahlgren, O. (2021) ‘AI Systems and Respect for Human Autonomy’, Frontiers in Artificial Intelligence (online edition, 26 October), . Makridakis, S. (2017) ‘The Forthcoming Artificial Intelligence (AI) Revolution: Its Impact on Society and Firms’, 90 Futures 46. McFarland, T. (2015) ‘Factors Shaping the Legal Implications of Increasingly Auto- nomous Military Systems’, 97 International Review of the Red Cross 1313. Manuel, R. (2022) ‘French Military Approves Final Phase of Big Data and AI Platform Artemis’, The Defence Post, 15 July, . Milmo, D. (2024) ‘“Godfather of AI” Shortens Odds of the Technology Wiping Out Humanity Over Next 30 Years’, The Guardian, 27 December, . Mimran, T., Pacholska, M., Dahan, G., & Trabucco, L. (2024) ‘Beyond the Headlines: Combat Deployment of Military AI-Based Systems by the IDF’, Articles of War, 2 February, . Min, R. (2022) ‘Israel deploys AI-powered robot guns that can track targets in the West Bank’, Euronews, 17. October . Morgan, F.E., et al., (2020) Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World. RAND. Nasu, H. (2021) ‘The Kargu-2 Autonomous Attack Drone: Legal & Ethical Dimensions’, Articles of War, 10 June, . Pacholska, M. (2023) ‘Military Artificial Intelligence and the Principle of Distinction: A State Responsibility Perspective’, 56 Israel Law Review 3. Price, R. (2016) ‘In Defence of Killer Robots’, Insider, 24 June, . 187 Yuval Shany – To Use AI or Not Use AI? 
Autonomous Weapon Systems and Their Complicated Relationship with the Right to Life Renic, N.C. & Schwartz, E. (2023) ‘Inhuman-in-the-loop: AI-targeting and the Erosion of Moral Restraint’, Articles of War, 19 December, . Rosengrün, S. (2022) ‘Why AI is a Threat to the Rule of Law’, 1 Digital Society (online version) Article 10. Runkle, B. (2015) ‘The Obama Administration’s Human Shields: How the Obama administration is using the threat of civilian casualties to hold its fire aga- inst the Islamic State’, Foreign Policy, 30 November, . Sassóli, M. (2014) ‘Autonomous Weapons and International Humanitarian Law: Advantages, Open Technical Questions and Legal Issues to be Clarified’, 90 International Law Studies 308. Schmitt, M.N. (2013) ‘Autonomous Weapon Systems and International Humanitarian Law: A Reply to the Critics’, Harvard National Security Journal (online edi- tion), . Schmitt, M.N. (2013) ‘Wound, Capture, or Kill: A Reply to Ryan Goodman’s “The Power to Kill or Capture Enemy Combatants”’, 24 European Journal of International Law 855. Schuller, A.L. (2019) ‘Artificial Intelligence Effecting Human Decisions to Kill: The Challenge of Linking Numerically Quantifiable Goals to IHL Compliance’, 15 I/S: A Journal Of Law And Policy 105. Schwartz, E. (2018) ‘The (Im)possibility of Meaningful Human Control for Lethal Autonomous Weapon Systems’, Humanitarian Law & Policy, 29 August, . Shany, Y. (2023) ‘Human Rights Norms Applicable in the Situation of Armed Conflict: Beyond the Lex Generalis/Lex Specialis Framework’, 66 Japanese Yearbook of International Law 3. Sharkey, N.E. (2012) ‘The Evitability of Autonomous Robot Warfare’, 94 International Review of the Red Cross 787. Statman, D., et al. (2020) ‘Unreliable Protection: An Experimental Study of Experts’ In Bello Proportionality Decisions’, 31 European Journal of International Law 429. 188 Zbornik znanstvenih razprav – letnik LXXXIV, 2024 LjubLjana Law Review, voL. LXXXiv, 2024 Swoskin, E. 
(2024) ‘Israel has built an ‘AI Factory’ for War. It has unleashed it in Gaza’, Washington Post, 29 December, . Taddeo, M. & Blanchard, A. (2022) ‘A Comparative Analysis of the Definitions of Autonomous Weapons Systems’, 28(5) Science and Engineering Ethics 37. Talbot Jensen, E. & Alcala, R.T.P. (2019) The Impact of Emerging Technologies on the Laws of Armed Conflict. Oxford: Oxford University Press. Tversky, A. & Kahneman, D. (1974) ‘Judgement Under Uncertainty: Heuristics and Biases’, 185 Science 1124. UAS Vision (2019) DARPA Reveals Details of CODE Program. Wagner, M. (2014) ‘The Dehumanization of International Humanitarian Law: Legal, Ethical, and Political Implications of Autonomous Weapon Systems’, 47 Vanderbilt Journal of Transnational Law 1371. Walker, P. (2021) ‘Leadership Challenges from the Deployment of Lethal Autonomous Weapon Systems: How Erosion of Human Supervision Over Lethal Engagement Will Impact How Commanders Exercise Leadership’, 188 The RUSI Journal 10. Winter, E. (2022) ‘The Compatibility of Autonomous Weapons with the Principles of International Humanitarian Law’, 27 Journal of Conflict and Security Law 1. Work, R.O. (2021) Principles for the Combat Employment of Weapon Systems with Autonomous Functionalities. Center for a New American Security. Xiao, B., et al. (2016) ‘Computational Analysis and Simulation of Empathic Behaviors: A Survey of Empathy Modelling with Behavioral Signal Processing Framework’, 18 Current Psychiatry Reports 49. Zerilli, J., et al. (2019) ‘Transparency in Algorithmic and Human Decision Making: Is there a Double Standard?’ 32 Philosophy & Technology 661. 189 © The Author(s) 2024 Scientific Article DOI: 10.51940/2024.1.189-216 UDC: 341.3:342.7:004.8 341:623:004.8 Joana Gomes Beirão,* Jan Wouters** Towards an International Legal Framework for Lethal Artificial Intelligence Based on Respect for Human Rights: Mission Impossible? 
Abstract

This article considers the potential use of autonomous weapons both in and outside armed conflict, including in law enforcement. It analyses the phenomenon from the perspective of human rights law, with a particular focus on the right to life. For over a decade, the international community has debated whether technological advances pertaining to the development of autonomous weapons require the establishment of new rules within the framework of international humanitarian law. In contrast, consideration of such technology from a human rights law perspective has been limited, despite its implications for the right to life and other human rights. In parallel, several international initiatives have emerged in recent years aiming to establish non-binding and binding rules for the development and use of artificial intelligence (AI) based on respect for human rights. This article reviews four such initiatives: the OECD Recommendation on AI, the UNESCO Recommendation on the Ethics of AI, the INTERPOL and UNICRI Toolkit for Responsible AI Innovation in Law Enforcement, and the Council of Europe AI Convention. It examines the extent to which these initiatives address the specific concerns raised by autonomous weapons.

Key words

autonomous weapons, artificial intelligence, human rights, right to life, law enforcement.

* Junior researcher at the Leuven Centre for Global Governance Studies – Institute for International Law, America Europe Chair on Technology, Innovation and International Regulation, KU Leuven.
** Jean Monnet Chair ad personam and Full Professor of International Law and International Organizations; Director, Leuven Centre for Global Governance Studies; Coordinator, America Europe Chair on Technology, Innovation and International Regulation, KU Leuven.

1. Introduction

Autonomous weapons have long been the subject of discussion and disagreement over whether they can be used in compliance with existing rules of international humanitarian law (IHL) and whether new IHL rules should be created to prohibit or at least regulate them. As stated in 2014 by Christof Heyns, at the time UN Special Rapporteur on extrajudicial, summary or arbitrary executions, “[t]he legal debate about [autonomous weapons] that has emerged during the past few years has largely left human rights out of the picture, and focused primarily on IHL”.1 A decade later, the statement remains perfectly accurate. Building on the conclusions of his predecessor,2 Heyns recommended in 2013 that the United Nations (UN) Human Rights Council call on States to declare a moratorium on the development, acquisition, deployment, and use of lethal autonomous robots until an international framework could be established to regulate such technology.
He also proposed that the UN High Commissioner for Human Rights convene a high-level panel tasked with advancing the establishment of this framework.3 The following year, Heyns called on the international community to “adopt a comprehensive and coherent approach to autonomous weapons systems in armed conflict and in law enforcement, one which covers both the international humanitarian law and human rights dimensions”, stressing that “the various international agencies and institutions dealing with disarmament and human rights, such as the Convention on Certain Conventional Weapons and the Human Rights Council, each have a responsibility and a role to play” with regard to autonomous weapons.4

Despite such calls, echoed by civil society,5 discussions on the potential regulation of autonomous weapons have primarily taken place within the framework of IHL, specifically within the Group of Governmental Experts on Lethal Autonomous Weapons Systems, established under the UN Convention on Certain Conventional Weapons.6 At a time when the development7 and use8 of this technology are well under way, such discussions have to date yielded only modest results. Many questions regarding the application of existing rules to autonomous weapons remain unanswered, and new rules have not been established due to persistent difficulties in reaching consensus.9

In parallel, several initiatives have recently emerged to regulate artificial intelligence (AI), including by ensuring that its use respects human rights. As this technology progresses, the international community has considered whether non-binding or binding rules should be established to address the concerns it raises, including its potential impact on the enjoyment of human rights. Noteworthy among these initiatives are the OECD Recommendation on AI,10 the UNESCO Recommendation on the Ethics of AI,11 the INTERPOL and UNICRI Toolkit for Responsible AI Innovation in Law Enforcement,12 and the Council of Europe Framework Convention on AI and Human Rights, Democracy and the Rule of Law.13 Considering the attention the topic has received, further initiatives to establish an international legal framework for AI may emerge in the future.14

With these developments in mind, the present article considers the potential use of autonomous weapons in and outside armed conflict, including in law enforcement, analysing the phenomenon from the perspective of human rights law and focusing particularly on the right to life. Subsequently, we reflect on recent initiatives to regulate AI based on respect for human rights, examining to what extent they address the specific concerns autonomous weapons raise.

Before moving forward with our analysis, it is important to note that the concept of autonomous weapons, as “weapons that select and apply force to targets without human intervention”,15 includes both weapons that incorporate AI16 and weapons which do not use such technology to perform the autonomous selection and application of force. Nevertheless, the focus of this article is on autonomous weapons that incorporate AI, given the increased unpredictability as to how these machines select and apply force.17

1 Heyns, 2014b.
2 Alston, 2010, § 48.
3 Heyns, 2013, §§ 113–114.
4 Heyns, 2014c, § 89.
5 Docherty, 2014, p. 4.
6 Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, Geneva, 10 October 1980, UNTS 22495.
7 Alston, 2010, §§ 27–28.
8 Choudhury et al., 2021, §§ 63–64.
9 Reeves, Alcala & McCarthy, 2021, pp. 102 and 107–110.
10 OECD, Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449, 22 May 2019 (OECD Recommendation on AI).
11 UNESCO, Recommendation on the Ethics of Artificial Intelligence, SHS/BIO/PI/2021/1, 23 November 2021 (UNESCO Recommendation on the Ethics of AI).
12 UNICRI and INTERPOL, Toolkit for Responsible AI Innovation in Law Enforcement, June 2023 (INTERPOL and UNICRI Toolkit for Responsible AI Innovation in Law Enforcement).
13 Council of Europe, Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, 5 September 2024 (Council of Europe AI Convention).
14 To name but a few examples: ASEAN is developing a guide on AI governance and ethics, although little is currently known about the initiative (Potkin & Wongcha-um, 2023); the United Kingdom announced it will host “the first major global summit on AI safety”, which “will bring together key countries, leading tech companies and researchers to agree safety measures to evaluate and monitor the most significant risks from AI” (UK Prime Minister’s Office, 2023); the European Union and the United States are developing a voluntary AI code of conduct (Blenkinsop, 2023); and the UN Secretary-General has supported the proposal to establish an agency, inspired by the International Atomic Energy Agency, mandated to regulate AI (Guterres, 2023).

2. Autonomous Weapons and Human Rights Law

While the development and use of autonomous weapons clearly deserve careful consideration from the perspective of IHL, the same is also required from the perspective of human rights law, for at least three reasons.
Firstly, even if autonomous weapons were an exclusively military technology, human rights law remains applicable during armed conflicts alongside IHL.18 Although a thorough analysis of the relationship between IHL and human rights law is not possible here,19 it should be noted that international and regional courts, UN organs, treaty bodies and human rights special procedures have recognised that “both bodies of law apply to situations of armed conflict and provide complementary and mutually reinforcing protection”.20 In this regard, the International Court of Justice has held that:

“[T]he protection offered by human rights conventions does not cease in case of armed conflict, save through the effect of provisions for derogation of the kind to be found in Article 4 of the International Covenant on Civil and Political Rights. As regards the relationship between international humanitarian law and human rights law, there are thus three possible situations: some rights may be exclusively matters of international humanitarian law; others may be exclusively matters of human rights law; yet others may be matters of both these branches of international law”.21

For this reason, State conduct, such as the use of autonomous weapons, should be assessed considering both international human rights law (as lex generalis) and IHL (as lex specialis).22 In essence, the concurrent application of the two regimes means that human rights rules are to be interpreted in light of IHL.23

Secondly, military technologies regularly find their way outside armed conflict.24 As such, it cannot be ruled out that autonomous weapons may be used in peacetime, including in law enforcement.25 The incorporation of military technologies into law enforcement can already be seen in the increasing use of remote-controlled drones and robots by police (e.g., for bomb disposal,26 surveillance27 and border patrol28).29 Moreover, there is at least one recorded instance in which police used a remote-controlled robot to employ lethal force.30 It is important to note that this unprecedented action, which took place in Texas in 2016, did not result from an official policy change that would allow the use of robots to employ lethal force, but from a “creative” solution reached by police officers facing an extremely dangerous situation. An instance which is perhaps more indicative of the militarisation and depersonalisation of law enforcement is the proposal of the San Francisco Police Department to establish a new policy allowing (remote-controlled) lethal robots to be employed in extreme circumstances which pose an immediate risk to life.31 Advocates of such a policy argue that it could save police officers’ lives, since they would not have to be physically present in dangerous situations. Such reasoning applies to remote-controlled and autonomous robots alike. In conjunction with the (perceived) need to increase the efficiency of law enforcement, it is therefore possible that we will see a future push towards the incorporation of autonomous robots into policing, since the technology is capable of processing information and responding faster than humans piloting remote-controlled robots.32 For law enforcement purposes, States may be particularly willing to use so-called less-lethal autonomous weapons, as these are generally considered less dangerous and, hence, less controversial. However, such weapons also raise concerns from a human rights perspective, including with regard to the right to life.33 It should be recalled that the use of less-lethal weapons (such as tasers, rubber bullets and tear gas), whether employed directly by a police officer, remotely controlled or autonomously, may lead to the death of the targeted person(s) and/or innocent bystanders.34 Since IHL is not applicable outside an armed conflict, any rules which may be created within that field to prohibit or regulate autonomous weapons, including within the context of the Convention on Certain Conventional Weapons,35 would not apply to the use of this technology in law enforcement or other domestic settings such as private security. It is thus crucial to carefully assess the potential use of autonomous weapons in domestic settings from the perspective of human rights law.

15 Although there is no universally accepted definition of autonomous weapons, for the purpose of our analysis we consider the definition endorsed by the International Committee of the Red Cross (see: International Committee of the Red Cross, 2021, p. 5). For a comparative analysis of definitions of autonomous weapons, see: Taddeo & Blanchard, 2022.
16 Conceptualisations of AI differ, as there is currently no universally agreed-upon definition of this technology. The OECD defines an AI system as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments” (OECD Recommendation on AI, § I). UNESCO defines such systems as “information-processing technologies that integrate models and algorithms that produce a capacity to learn and to perform cognitive tasks leading to outcomes such as prediction and decision-making in material and virtual environments” (UNESCO Recommendation on the Ethics of Artificial Intelligence, § 2). The Council of Europe Committee on Artificial Intelligence defines an AI system as “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that may influence physical or virtual environments” (Article 2 of the Council of Europe AI Convention).
17 International Committee of the Red Cross, 2021, pp. 6–7.
18 Brehm, 2017, p. 25; Odon, 2022, pp. 85–89.
19 See, inter alia, Naert (2016).
20 Office of the United Nations High Commissioner for Human Rights, 2011, p. 1. See also: Droege, 2007, pp. 320–324.
21 International Court of Justice, Legal Consequences of the Construction of a Wall in the Occupied Palestinian Territory: Advisory Opinion, 9 July 2004, § 106.
22 International Court of Justice, Legal Consequences of the Construction of a Wall in the Occupied Palestinian Territory: Advisory Opinion, 9 July 2004, § 106; Odon, 2022, pp. 85–86.
23 European Court of Human Rights, Hassan v. the United Kingdom [GC], App. No. 29750/09, Judgement, 16 September 2014, §§ 102–104; Odon, 2022, pp. 85–86.
24 Amnesty International, 2015, p. 9.
25 Heyns, 2013, § 84; Heyns, 2014a, § 144; Heyns, 2014c, § 84; Marijan, 2023.
26 Allison, 2016.
27 Singapore Home Team Science and Technology Agency, 2021; Reuters, 2017.
28 The Guardian, 2014; U.S. Department of Homeland Security, 2022.
29 Heyns, 2014c, §§ 77–83; Marijan, 2023.
30 Sinder & Simon, 2016; Fund, 2016.
31 Derico & Clayton, 2022; Rodríguez, 2023.
Thirdly, the use of autonomous weapons may have far-reaching implications for human dignity and human rights.36 Some scholars argue that entrusting the decision to kill a human being to a machine constitutes a grave violation of human dignity, rendering the use of any technology capable of autonomously employing lethal force a priori unlawful.37 While the scope of the right to human dignity remains contentious, it is clear that the use of autonomous weapons in and outside armed conflict may impact the right to life and the right not to be subjected to cruel, inhuman, or degrading treatment.38 Moreover, considering the large-scale collection and processing of data required for the functioning of this technology, as well as concerns regarding bias, transparency, and explainability of algorithmic decisions, autonomous weapons may also affect the right to privacy, the right not to be discriminated against, and the right to an effective remedy.39 Given these concerns, the development and use of autonomous weapons deserve careful consideration from the perspective of human rights law.

32 Heyns, 2016, p. 359; Marijan, 2023.
33 Human Rights Committee, General Comment No. 36 on Article 6: right to life, § 14; Brehm, 2017, pp. 54–55.
34 Heyns, 2014c, § 69; Heyns, 2016, p. 361; Office of the United Nations High Commissioner for Human Rights, 2020, § 1.2.
35 Amnesty International, 2015, pp. 7–8.
36 Heyns, 2014a, § 144.
37 Heyns, 2016, pp. 369–371; Docherty, 2014, pp. 23–24; Brehm, 2017, pp. 63–65.
38 Brehm, 2017, pp. 69–70.
39 Ibid., pp. 56–68; Spagnolo, 2017, pp. 52–56; Spagnolo, 2019, pp. 59–61.

3. Autonomous Weapons and the Right to Life

From a human rights perspective, the most important implication of the use of autonomous weapons both in and outside armed conflict is the potential interference with the right to life.40 The Human Rights Committee has recognised the importance of considering this right when it comes to autonomous weapons. Referring to Article 36 of Additional Protocol I to the Geneva Conventions, the Committee held that:

“States parties engaged in the deployment, use, sale or purchase of existing weapons and in the study, development, acquisition or adoption of weapons, and means or methods of warfare, must always consider their impact on the right to life. For example, the development of autonomous weapon systems lacking in human compassion and judgment raises difficult legal and ethical questions concerning the right to life, including questions relating to legal responsibility for their use. The Committee is therefore of the view that such weapon systems should not be developed and put into operation, either in times of war or in times of peace, unless it has been established that their use conforms with article 6 [of the International Covenant on Civil and Political Rights] and other relevant norms of international law.”41

Importantly, the Human Rights Committee did not categorically state that the use of autonomous weapons in and outside armed conflict is a priori incompatible with the right to life. Instead, it noted that, from the perspective of the right to life, autonomous weapons are lawful if and to the extent that they can be used in accordance with the requirements of Article 6 of the International Covenant on Civil and Political Rights (ICCPR). We now turn to those requirements.
As “the supreme right” inherent to every human being, whose “effective protection […] is the prerequisite for the enjoyment of all other human rights”,42 the right to life is enshrined in all human rights treaties, as well as in Article 3 of the Universal Declaration of Human Rights and Article I of the American Declaration of the Rights and Duties of Man. Pursuant to Article 6 of the ICCPR and Article 4 of the American Convention on Human Rights (ACHR), “[n]o one shall be arbitrarily deprived of his life”. Both conventions explicitly state that the death penalty, when applied for the most serious crimes, does not constitute an arbitrary deprivation of life. Article 4 of the African Charter on Human and Peoples’ Rights also provides that no one may be arbitrarily deprived of their life, but without explicitly addressing whether the death penalty is to be considered an arbitrary deprivation of life.

40 Heyns, 2013, §§ 36 and 85; Spagnolo, 2019, p. 59.
41 Human Rights Committee, General Comment No. 36 on Article 6: right to life, § 65.
42 Ibid., § 2.

In a different formulation, Article 2 of the European Convention on Human Rights (ECHR) stipulates that “[n]o one shall be deprived of his life intentionally” except if sentenced to death by a court, or if the use of force is absolutely necessary to defend a person from unlawful violence, to effect a lawful arrest, to prevent the escape of a detainee or to quell a riot or insurrection. The ECHR allows States to derogate from the right to life, but only with respect to deaths resulting from lawful acts of war.43 Even though the ICCPR44 and ACHR45 allow no derogations from the right to life, deaths resulting from lawful acts of war are not considered arbitrary deprivations of life and thus do not contravene the right to life under these treaties.46 The same applies to situations in which the use of force is absolutely necessary to defend a person from unlawful violence, effect a lawful arrest, prevent the escape of a detainee or quell a riot or insurrection.47

Importantly, the rules governing the use of lethal force under human rights law are more stringent than those under IHL. Human rights law tolerates the use of lethal force only in exceptional circumstances, in accordance with the principles of legality, necessity and proportionality. Firstly, any use of lethal force must have a sufficient legal basis: it must be authorised and sufficiently regulated by law.48 Secondly, the use of lethal force must be strictly necessary to protect life or prevent serious injury from an imminent threat. In adhering to the principle of necessity, any alternatives to the use of lethal force must have been exhausted, unless they are not possible or adequate to protect the interest in question.49 Thirdly, the amount of force employed must be proportional to the interest protected.
Thus, the principle of proportionality requires that the amount of force employed does not exceed what is strictly necessary to respond to the threat.50

As recognised in the preamble of the UN Basic Principles on the Use of Force and Firearms by Law Enforcement Officials, “law enforcement officials have a vital role in the protection of the right to life, liberty and security of the person”.51 For this reason, “[t]he use of potentially lethal force for law enforcement purposes is an extreme measure that should be resorted to only when strictly necessary in order to protect life or prevent serious injury from an imminent threat”.52

43 ECHR, Article 15(2).
44 ICCPR, Article 4(2).
45 ACHR, Article 27(2).
46 Brehm, 2017, pp. 24–25.
47 Human Rights Committee, General Comment No. 36 on Article 6: right to life, § 10.
48 Ibid., § 11.
49 Ibid., § 12.
50 Ibid., § 12.
51 Basic Principles on the Use of Force and Firearms by Law Enforcement Officials, adopted by the Eighth United Nations Congress on the Prevention of Crime and the Treatment of Offenders, Havana, Cuba, 27 August to 7 September 1990.
52 Human Rights Committee, General Comment No. 36 on Article 6: right to life, § 12. See also: Article 3 of the Code of Conduct for Law Enforcement Officials, adopted by General Assembly resolution 34/169 of 17 December 1979.
The Basic Principles further specify that firearms shall only be used

“in self-defence or defence of others against the imminent threat of death or serious injury, to prevent the perpetration of a particularly serious crime involving grave threat to life, to arrest a person presenting such a danger and resisting their authority, or to prevent his or her escape, and only when less extreme means are insufficient to achieve these objectives”.53

Moreover, law enforcement officials “shall, as far as possible, apply non-violent means before resorting to the use of force and firearms. They may use force and firearms only if other means remain ineffective or without any promise of achieving the intended result”.54 When law enforcement officials do use force, they must exercise restraint, act in proportion to the seriousness of the offence and the legitimate objective to be achieved, minimise damage and injury, and ensure that medical assistance is rendered at the earliest possible moment.55

In addition to the prohibition of unlawful interference with the right to life, States have positive obligations pertaining to that right. States have the duty to protect the right to life, including by establishing an appropriate legal framework that ensures the full enjoyment of this right, protects it from foreseeable threats, establishes with sufficient precision the grounds on which lethal force may be used, and puts in place procedures to prevent, investigate and prosecute potential cases of unlawful deprivation of life.56 With regard to law enforcement, States must put in place “all necessary measures to prevent arbitrary deprivation of life by their law enforcement officials, including soldiers charged with law enforcement missions”.57 Such measures include adopting

“appropriate legislation controlling the use of lethal force by law enforcement officials, procedures designed to ensure that law enforcement actions are adequately planned in a manner consistent with the need to minimize the risk they pose to human life, mandatory reporting, review and investigation of lethal incidents and other life-threatening incidents, and supplying forces responsible for crowd control with effective, less-lethal means and adequate protective equipment in order to obviate their need to resort to lethal force”.58

53 Basic Principles on the Use of Force and Firearms by Law Enforcement Officials, § 9.
54 Ibid., § 4.
55 Ibid., § 5. On less-lethal weapons, see: Office of the United Nations High Commissioner for Human Rights, 2020, §§ 2.1–2.11.
56 Human Rights Committee, General Comment No. 36 on Article 6: right to life, §§ 18–20.
57 Ibid., § 13.
58 Ibid., § 13; Basic Principles on the Use of Force and Firearms by Law Enforcement Officials, §§ 1–3, 6–7, 11, 22–26; European Court of Human Rights, 2022, §§ 91–96. On less-lethal weapons, see: Office of the United Nations High Commissioner for Human Rights, 2020, §§ 3.1–4.8.2.

Considering the negative and positive obligations of the State with regard to the right to life, the development and use of autonomous weapons must be carefully assessed. A State intending to use autonomous weapons must ensure that any use of potentially lethal force therein complies with the principles of legality, necessity and proportionality.
However, it remains unclear whether autonomous weapons are capable of complying with these principles, since compliance requires contextual value judgements which machines may not be able to make reliably.59 To determine whether recourse to lethal force is necessary, autonomous weapons would need to assess in a limited time whether a person poses an imminent threat, including by ascertaining that person’s intent to kill or seriously injure another person, which may be particularly difficult for a machine to assess accurately.60 Similarly, the balancing exercise required to comply with the proportionality principle may be challenging for autonomous weapons to perform, since it requires an assessment, which has to be performed in a limited time, of the amount of force strictly needed to respond to the threat in question.61

Moreover, under human rights law, any use of force requires an individual assessment of the circumstances that justify recourse to force. Since autonomous weapons are programmed to some extent beforehand, the requirement to individuate the use of force may not be met.62 For this reason, and considering the doubts as to whether autonomous weapons can reliably make the value judgements necessary to assess the necessity and proportionality of using lethal force, some scholars argue that autonomous weapons which employ lethal force without meaningful human control contravene the right to life.63 Accordingly, in order to comply with the right to life, the use of autonomous weapons would need to comprise human agents who “remain constantly and actively (personally) engaged in every individual application of force”, essentially ruling out the use of fully autonomous weapons.64

Given the grave consequences of an erroneous assessment by an autonomous weapon—namely an unlawful deprivation of life—States must exercise particular caution with this technology.
Arguably, the aforementioned doubts regarding the ability of fully autonomous weapons to comply with the principles of necessity and proportionality provide sufficient reason for States to refrain from using such technology, at least while such doubts persist. Ensuring meaningful human control over the technology may contribute to ensuring compliance with the prohibition of unlawful interference with the right to life.65 However, many questions remain regarding the technical and operational requirements necessary to effectively operationalise the concept of meaningful human control.66 Thus, States must still exercise caution when assessing whether to use partially autonomous weapons. Ultimately, a death will be unlawful if it does not strictly comply with the principles of legality, necessity, and proportionality, regardless of whether force was directly employed by a person, by remotely-controlled technology, by a fully autonomous weapon or by a partially autonomous weapon. The use of autonomous weapons, regardless of their level of autonomy, does not excuse States from complying with the prohibition of unlawfully interfering with the right to life.

59 Heyns, 2014c, § 85; Spagnolo, 2017, p. 48.
60 Heyns, 2016, pp. 364–366.
61 Ibid.
62 Brehm, 2017, pp. 45–48; Heyns, 2016, pp. 370–371.
63 Kiai & Heyns, 2016, § 67(f); Heyns, 2016, pp. 374–376; Brehm, 2017, p. 48.
64 Brehm, 2017, p. 48; Asaro, 2012, p. 708.

199 Joana Gomes Beirão, Jan Wouters – Towards an International Legal Framework for Lethal Artificial Intelligence Based on Respect for Human Rights: Mission Impossible?
Furthermore, States intending to use autonomous weapons must also respect their positive obligations to ensure the right to life, including by establishing an appropriate legal framework regulating the use of autonomous weapons, ensuring they are designed to minimise the risk to human life, and adequately training the persons responsible for exercising control and oversight over the technology.67 A notable risk of using autonomous weapons is that humans who engage with them may overly rely on the machine’s assessments that the use of force is legal, necessary and proportional, thereby limiting their role to an automatic approval of the machine’s decisions.68 States must provide adequate training to avoid such risk, as well as ensure that there are sufficient human resources to effectively exercise control over the weapons. Additionally, States must investigate and prosecute potential cases of unlawful deprivation of life resulting from the use of autonomous weapons. However, it may be challenging for a State to fulfil such duties where the decision to use lethal force was made by an autonomous system without meaningful human control, as there will only be an indirect link between the actions of the persons involved (e.g., the public body which approved the use of autonomous weapons in a certain context, its developers, etc.) and the decision to kill.69 Even when human control and oversight are present, there is a risk that persons involved in the use of the system may claim that unlawful deprivations of life were caused by technical errors. The opacity of the technology may render such claims difficult or impossible to assess.70 If a State intends to use autonomous weapons while respecting its obligations under human rights law, it must ensure that responsibility for unlawful deaths is not evaded.

Overall, many questions remain regarding how States can ensure that they respect the right to life when using autonomous weapons. Considering the fundamental nature of this right, it is critical that the international community discusses the concerns autonomous weapons raise. The next section analyses the extent to which recent initiatives to regulate AI address these concerns.

65 Although discussions on the concept of meaningful human control have mostly concerned its role in ensuring compliance with international humanitarian law, many of the considerations therein can be applied to human rights law. Since a parallel can be drawn between the difficulty in ensuring that autonomous weapons comply with the principles of distinction and proportionality under international humanitarian law and the difficulty in ensuring that they comply with the principles of necessity and proportionality under human rights law, the concept of meaningful human control may be useful for both bodies of law.
66 Boutin & Woodcock, 2022, p. 2.
67 Spagnolo, 2019, p. 67.
68 For an explanation of the phenomenon of over-reliance on algorithmic decisions, known as automation bias, see: Jones-Jang & Park, 2022, p. 2.
69 Heyns, 2016, p. 373; Spagnolo, 2017, pp. 50–51.

4. Regulating AI but not lethal AI?

As “[t]he use of force against the human person, including the use of deadly or potentially deadly force by agents of the State, is a central human rights concern”,71 it would be expected that any initiative to regulate AI based on respect for human rights would carefully examine the concerns autonomous weapons raise with regard to the right to life.
Through this lens, this section reflects on the OECD Recommendation on AI, the UNESCO Recommendation on the Ethics of AI, the INTERPOL and UNICRI Toolkit for Responsible AI Innovation in Law Enforcement, and the Council of Europe AI Convention.

4.1. OECD Recommendation on AI

In May 2019, the OECD Council adopted a Recommendation on AI, which “aims to foster innovation and trust in AI by promoting the responsible stewardship of trustworthy AI while ensuring respect for human rights and democratic values”.72 Although devoid of binding force, the Recommendation is “an important political and moral commitment at the intergovernmental level”, recognising not only that AI may pose harm to human rights and democratic values but also that these concerns need to be addressed at both intergovernmental and national levels.73 The Recommendation was endorsed by all 36 OECD Members, as well as Argentina, Brazil, Colombia, Costa Rica, Peru, and Romania, and formed the basis of the G20 AI Principles adopted by G20 Leaders that same year.

The Recommendation sets forth five complementary principles for responsible stewardship of trustworthy AI: inclusive growth, sustainable development and well-being; human-centred values and fairness; transparency and explainability; robustness, security, and safety; and accountability.

70 Bo, Bruun & Boulanin, 2022, pp. 46–49.
71 Heyns, 2014c, § 65.
72 OECD Recommendation on AI, p. 3.
73 Yeung, 2020, p. 28.
Furthermore, it provides five recommendations regarding the development of national policies and international cooperation, namely investing in AI research and development, fostering a digital ecosystem for AI, shaping an enabling policy environment for AI, building human capacity and preparing for labour market transformation, and promoting international cooperation for trustworthy AI.

Of particular relevance to the subject of our analysis is the set of five principles for responsible stewardship of trustworthy AI, which sets forth that:

“a) AI actors should respect the rule of law, human rights and democratic values, throughout the AI system lifecycle. These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognised labour rights.
b) To this end, AI actors should implement mechanisms and safeguards, such as capacity for human determination, that are appropriate to the context and consistent with the state of art” (§ 1.2).

AI actors, i.e., any actors who play an active role in the lifecycle of an AI system, should further “commit to transparency and responsible disclosure regarding AI systems […] to enable those adversely affected by an AI system to challenge its outcome” and ensure that AI systems are “robust, secure and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable safety risk” (§ 1.3–1.4). Finally, “AI actors should be accountable for the proper functioning of AI systems and for the respect of the […] principles [set forth in the recommendation], based on their roles, the context, and consistent with the state of art” (§ 1.5).

Interestingly, the right to life is not mentioned anywhere in the document, nor are the specific concerns autonomous weapons raise reflected in its text.
In line with the declaration that the use of AI should respect human rights, the document does recommend that “mechanisms and safeguards” are implemented, and that safety and accountability are ensured. While these principles are relevant for the development and use of autonomous weapons, they are likely insufficient to ensure that the right to life is respected. Consider, for example, the recommendation to implement “capacity for human determination”. Designing and using an autonomous weapon that allows human intervention if a malfunction is detected but does not require prior human approval for the use of lethal force may not meet human rights law requirements for the use of lethal force, as detailed in section 3. Overall, the Recommendation does not significantly contribute to clarifying how States can ensure they respect the right to life when developing and using autonomous weapons.

4.2. UNESCO Recommendation on the Ethics of AI

In November 2021, the General Conference of UNESCO adopted a Recommendation on the Ethics of AI,

“a standard-setting instrument developed through a global approach, based on international law, focusing on human dignity and human rights, as well as gender equality, social and economic justice and development, physical and mental well-being, diversity, interconnectedness, inclusiveness, and environmental and ecosystem protection”.
The Recommendation addresses ethical issues concerning AI to the extent that they are within UNESCO’s mandate, focusing particularly on its central domains, namely education, science, culture, and communication and information (§ 1–3).74 The document sets forth a set of values and principles, operationalised in eleven policy areas: ethical impact assessment; ethical governance and stewardship; data policy; development and international cooperation; environment and ecosystems; gender; culture; education and research; communication and information; economy and labour; and health and social well-being.

For the subject of our analysis, the first value set forth in the Recommendation is of particular relevance, as it stresses that “[h]uman rights and fundamental freedoms must be respected, protected and promoted throughout the life cycle of AI systems”, and “[n]o human being or human community should be harmed or subordinated, whether physically, economically, socially, politically, culturally or mentally during any phase of the life cycle of AI systems”. The need to respect human dignity is emphasised: “persons should never be objectified, nor should their dignity be otherwise undermined” when interacting with an AI system (§ 13–16).

Among the principles set out in the Recommendation, three should be emphasised. Pursuant to the principle of proportionality and “do no harm”, the choice to use an AI system should be appropriate and proportional to the aim pursued and should not infringe on human rights. For this reason,

“[i]n scenarios where decisions are understood to have an impact that is irreversible or difficult to reverse or may involve life and death decisions, final human determination should apply” (§ 26).
The principle of human oversight and determination requires States to

“ensure that it is always possible to attribute ethical and legal responsibility for any stage of the life cycle of AI systems, as well as in cases of remedy related to AI systems, to physical persons or to existing legal entities” (§ 35).

While humans may decide to delegate certain decisions to AI systems, “an AI system can never replace ultimate human responsibility and accountability” and “[a]s a rule, life and death decisions should not be ceded to AI systems” (§ 36). Finally, pursuant to the principle of responsibility and accountability,

“ethical responsibility and liability for the decisions and actions based in any way on an AI system should always ultimately be attributable to AI actors corresponding to their role in the life cycle of the AI system” (§ 42).

Despite its soft law nature, the Recommendation deserves praise for explicitly considering the possibility of life and death decisions being delegated to AI systems and cautioning against it. While autonomous weapons are not specifically mentioned,75 the values and principles of the Recommendation point to the need to maintain human control, oversight and responsibility over this technology, whether used for law enforcement, defence or other purposes. In particular, the requirement that final human determination should apply to life-and-death decisions excludes the use of fully autonomous weapons.

74 Law enforcement is specifically mentioned in the Recommendation, which classifies it as a “human rights-sensitive use case” (UNESCO Recommendation on the Ethics of Artificial Intelligence, § 62).
75 When referring to decisions which may “have an impact that is irreversible or difficult to reverse or may involve life and death decisions”, the Recommendation only mentions social scoring and mass surveillance, stating that AI systems should not be used for such purposes (UNESCO Recommendation on the Ethics of Artificial Intelligence, § 26).

4.3. INTERPOL and UNICRI Toolkit for Responsible AI Innovation in Law Enforcement

In June 2023, INTERPOL and UNICRI released a Toolkit for Responsible AI Innovation in Law Enforcement. The foundation of the toolkit is a set of soft law principles “designed to guide law enforcement agencies across the world in integrating AI systems into their work in ways that align with good policing practices and AI ethics, and respect human rights”.76 Based on five core principles, the document argues that “responsible AI innovation in law enforcement consists of developing, procuring, and using AI systems in a way that is lawful, minimizes harm, respects human autonomy, is fair, and is supported by good governance”.77 The document reiterates that, as with any action carried out by law enforcement, the use of AI by police must respect human rights.78 For this reason,

“law enforcement agencies should ensure legitimacy, necessity, and proportionality whenever they engage with AI systems in ways that could have an impact on human rights”.79

76 UNICRI and INTERPOL Toolkit for Responsible AI Innovation in Law Enforcement: Principles for Responsible AI Innovation, p. 3.
77 Ibid., p. 6.
78 Ibid., p. 8.

Moreover, AI systems should “not pose a threat to the physical or mental well-being of individuals, their property or the environment”.80 AI systems must be “safe, meaning that they include sufficient safeguards to prevent unacceptable harm and minimize unintentional and unexpected harm”.81 Furthermore, the document stresses the importance of respecting human autonomy, which “requires that any decisions that impact humans are ultimately taken by humans, especially in a high-stakes context such as law enforcement”.82 Thus, “[e]nsuring human control and oversight of an AI system is […] essential to upholding human autonomy” and entails “protecting the independence and dignity of every individual or group that interacts with or is affected by the use of an AI system”.83 The need to uphold human control and oversight of AI systems in the law enforcement context is further stressed, “considering that the work of law enforcement agencies is at the very core of the functioning of society, justice and political systems, and therefore has a significant influence on individuals and their rights”.84

The document cautions against the use of “AI systems with a high degree of autonomy—meaning, those which are able to make decisions about the ‘real world’ and act on them without human supervision and intervention”, stating that they “are generally not recommended, as their decisions can have a direct impact on people’s lives”.85 Guidance is provided on how law enforcement agencies should ensure human control and oversight: they should “verify that the AI systems they currently use or intend to use are built with the functionalities needed to ensure that humans remain in charge during use, as well as to certify that the necessary organizational structures are in place to ensure that humans
have the last word regarding certain decisions”.86

Interestingly, the need to ensure human control and oversight over AI systems is explicitly related to accountability for decisions taken with the assistance of such systems. Essentially, it is argued that the personnel interacting with AI systems will be ultimately responsible for any decisions taken therein, and as such, they should ensure they maintain control and oversight over the systems.87 Moreover, law enforcement should “ensure that, when AI-supported decisions have an unjust negative impact, those affected are able to formally seek redress through adequate and accessible processes”.88 Mechanisms need to be “put in place to enable stakeholders to clearly determine who is responsible for the decisions made with the support of the AI system, and the consequences of those decisions”.89

Unlike the UNESCO Recommendation on the Ethics of AI, the principles put forth by INTERPOL and UNICRI do not explicitly consider the possibility of AI systems being used to make life-and-death decisions. Indeed, the potential use of autonomous weapons in law enforcement was not explicitly considered; neither were the concerns that such use raises. Nevertheless, it can be argued that the emphasis placed on ensuring human control and oversight over AI systems when they make decisions with significant impacts implies that human control, oversight and accountability should be maintained over autonomous weapons used in law enforcement.

79 Ibid., p. 9.
80 Ibid., p. 14.
81 Ibid.
82 Ibid., p. 20.
83 Ibid.
84 Ibid., p. 21.
85 Ibid.
86 Ibid., p. 20.
87 Ibid., p. 21.
88 Ibid., p. 32.
89 Ibid., p. 34.

4.4. Council of Europe AI Convention

Since 2019, the Council of Europe has been exploring the possibility of establishing a legal framework on the development, design and application of AI systems based on human rights, democracy and rule of law standards. Building upon its predecessor’s work,90 the Committee on Artificial Intelligence (CAI) was tasked with establishing an international negotiation process and elaborating such a framework by November 2023.91 The CAI brings together representatives of the 46 Member States of the Council of Europe and observer states (Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the United States of America and Uruguay), as well as representatives of other Council of Europe bodies, international organisations (including the European Union, the Organisation for Security and Co-operation in Europe, the Organisation for Economic Co-operation and Development, and the United Nations Educational, Scientific and Cultural Organisation), the private sector, civil society, and research and academic institutions.

90 The Ad Hoc Committee on Artificial Intelligence was mandated from 2019 to 2021 to “examine the feasibility and potential elements on the basis of broad multi-stakeholder consultations, of a legal framework for the development, design and application of artificial intelligence, based on Council of Europe’s standards on human rights, democracy and the rule of law” (Decision CM/Del/Dec(2019)1353/1.5-app adopted at the 1353rd meeting of the Ministers’ Deputies, 11 September 2019).
91 Council of Europe Committee of Ministers, Decision CM(2021)131-addfinal.
The work of the CAI culminated in the landmark adoption of the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law by the Committee of Ministers of the Council of Europe on 17 May 2024. The Convention is open for signature by the Member States of the Council of Europe, the non-member States which participated in its elaboration and the European Union. It will enter into force three months after five signatories, including at least three Member States of the Council of Europe, express their consent to be bound by the Convention.92

Although a full analysis of the Convention is not the aim of this article, it is necessary to provide a few contextual notes regarding the object and purpose of this treaty. According to its Explanatory Report, the Convention does not set out to regulate all AI systems, focusing instead on those systems which have the potential to interfere with human rights, democracy and the rule of law.93 As such, its provisions “aim to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law”.94 Importantly, the Convention does not intend to create new human rights obligations but rather “to facilitate the effective implementation of the applicable human rights obligations of each Party in the context of the new challenges raised by artificial intelligence”.95 To achieve this, the Convention sets forth legally binding obligations that Parties must give effect to through appropriate legislative, administrative or other measures.96 The drafters of the Convention intended for Parties to

“enjoy a certain margin of flexibility as to how exactly to give effect to the provisions of the […] Convention, in view of the underlying diversity of legal systems, traditions and practices among the Parties and the extremely wide variety of contexts of use of artificial intelligence systems in both public and private
sectors”.97

However, in giving effect to the Convention, Parties must take into account and tailor measures according to the level of risk posed by AI systems in different contexts of use.98 Of particular relevance among the obligations that Parties must give effect to are: the protection of human rights,99 respect for human dignity and autonomy,100 transparency and oversight,101 accountability and remedies,102 equality and non-discrimination,103 privacy and personal data protection,104 and reliability.105 To ensure effective implementation, the Convention foresees a follow-up mechanism and international co-operation.106

Specifically with regard to autonomous weapons and the right to life, two considerations should be highlighted.

92 Council of Europe AI Convention, Article 30.
93 Council of Europe, Explanatory Report to the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, 5 September 2024, § 12.
94 Council of Europe AI Convention, Article 1(1).
95 Council of Europe, Explanatory Report to the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, 5 September 2024, § 13; Article 21 Council of Europe AI Convention.
96 Council of Europe AI Convention, Article 1(2).
97 Council of Europe, Explanatory Report to the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, 5 September 2024, § 16.
98 Council of Europe AI Convention, Articles 1(2) and 16.
First, the scope of the AI Convention excludes “activities within the lifecycle of artificial intelligence systems related to the protection of […] national security interests” and “matters relating to national defence”.107 Thus, the Convention will apply to the design, development and application of autonomous weapons in law enforcement and other domestic settings, but not in national security or defence matters. Arguably, this limitation is a missed opportunity to positively influence the development and use of autonomous weapons in the defence field by clarifying (some of) the requirements such conduct must comply with to respect human rights, especially the right to life. While it is true that IHL is the specialised legal framework to be applied in the conduct of hostilities and that there are ongoing discussions to establish specific rules on autonomous weapons within that area, the difficulty in achieving consensus within the UN Group of Governmental Experts on Lethal Autonomous Weapons Systems suggests that such rules may not emerge in the foreseeable future.108 In this context, a human rights treaty such as the Council of Europe’s AI Convention could contribute to filling some of the legal gaps pertaining to the development and use of autonomous weapons in armed conflict. As discussed in section 2, human rights law, including the ECHR, continues to apply in armed conflict, with its rules being interpreted in light of IHL.109 If, for example, the Council of Europe’s AI Convention were to include a provision requiring States to ensure meaningful human control over autonomous weapons used in armed conflict, such provision would have to be interpreted in light of IHL, including the principles of military necessity, distinction, proportionality and precaution. Since there is an ongoing unsettled debate on whether States using autonomous weapons can comply with such principles if meaningful human control is not ensured,110 the aforementioned provision would hold particular weight. In this regard, one should, of course, be aware that, pursuant to its Statute, “[m]atters relating to national defence do not fall within the scope of the Council of Europe”.111 However, that does not exclude treaties developed within the Council of Europe from applying to matters related to defence. Notably, this is the case of the ECHR, which remains applicable in times of war, although the High Contracting Parties may derogate from some of their obligations “to the extent strictly required by the exigencies of the situation”.112 Indeed, the European Court of Human Rights has extensive case law on the application of the ECHR to State conduct in armed conflict.113

Secondly, even though the Convention applies to the design, development and application of autonomous weapons in law enforcement and other domestic settings, its text does not explicitly address the grave issues raised by this technology with regard to the right to life.

99 Ibid., Article 4.
100 Ibid., Article 7.
101 Ibid., Article 8.
102 Ibid., Articles 9, 14 and 15.
103 Ibid., Article 10.
104 Ibid., Article 11.
105 Ibid., Article 12.
106 Ibid., Articles 1(3), 23–26.
107 Ibid., Articles 3(2) and 3(4).
108 Reeves, Alcala & McCarthy, 2021, pp. 101–118.
109 European Court of Human Rights, Hassan v. the United Kingdom [GC], App. No. 29750/09, Judgement, 16 September 2014, § 102–104.
It is undeniable that the Convention obliges Parties to “adopt or maintain measures to ensure that the activities within the lifecycle of artificial intelligence systems are consistent with obligations to protect human rights”, which obviously includes the right to life.114 Moreover, Parties are obliged to tailor measures to the degree of risk posed by AI systems in different contexts of use, considering in particular the “severity and probability of the occurrence of adverse impacts on human rights […]”.115 Thus, any Party intending to use autonomous weapons in law enforcement would need to consider the extremely severe risks of unlawful deprivation of life discussed in section 3 of this article. However, Parties to the Convention enjoy a margin of appreciation of these risks. As long as Parties apply the general risk and impact management framework foreseen in the treaty,116 they may reach different decisions on whether the perceived benefits of this technology outweigh its risks, as well as on the conditions for its use. The Convention itself does not explicitly ban the use of autonomous weapons for law enforcement purposes or set forth limits for such use (such as the requirement of meaningful human control). These choices are left to the discretion of the Parties. Thus, a priori, it cannot be said that the obligations set forth by the Convention preclude Parties from using fully or partially autonomous weapons for law enforcement. Interestingly, the Convention foresees the possibility of Parties imposing bans or moratoriums on certain uses of AI systems which, for example, pose an unacceptable risk to human rights.117 However, it is up to the discretion of each Party to determine what is an unacceptable risk to human rights that would warrant the imposition of a ban or moratorium. Thus, Parties to the Convention may reach different decisions on whether a ban or moratorium on the use of autonomous weapons is necessary.

A similar logic applies to the obligation foreseen in the treaty to ensure that effective procedural safeguards are available where an AI system significantly impacts upon the enjoyment of human rights,118 as would be the case of the use of autonomous weapons in law enforcement.

110 See, for example: Amoroso & Tamburrini, 2020, pp. 188–189.
111 Statute of the Council of Europe, European Treaty Series, No. 1, 5 May 1949, Article 1(d).
112 ECHR, Article 15.
113 For a collection of case law of the European Court of Human Rights on the application of the ECHR to armed conflicts, see: European Court of Human Rights, 2023. One case of particular relevance to discussions on autonomous weapons is Streletz, Kessler and Krenz v. Germany [GC], App. nos. 34044/96, 35532/97 and 44801/98, Judgement, 22 March 2001. The Court considered the use of anti-personnel mines and automatic-fire systems by the German Democratic Republic (GDR) for border control, and held that this practice breached “the obligation to respect human rights and the other international obligations of the GDR, which, on 8 November 1974, had ratified the International Covenant on Civil and Political Rights, expressly recognising the right to life and to the freedom of movement” (§ 73). To reach this conclusion, the Court considered, among other elements, the “automatic and indiscriminate effect” of anti-personnel mines and automatic-fire systems (§ 73).
114 Council of Europe AI Convention, Articles 1(2) and 4.
115 Ibid., Articles 1(2) and 16.
According to the Explanatory Report, “[w]here an artificial intelligence system substantially informs or takes decisions impacting on human rights, effective procedural guarantees should, for instance, include human oversight, including ex ante or ex post review of the decision by humans”.119 However, what procedural safeguards are required for such impactful AI systems is left to the discretion of Parties. Ultimately, Parties to the Convention may reach different decisions on whether ex ante human review of the decision to use force is required.

On the one hand, the open-ended risk-based approach that underlies the Convention makes it suitable to be applied to a broad range of AI systems across the public and private sectors, including systems which have not yet been developed.120 On the other hand, the fundamental nature of the right to life and the grave risks posed by autonomous weapons arguably call for a red line to be drawn. The Convention thus missed an opportunity to unequivocally establish a ban or a moratorium on the use of autonomous weapons for law enforcement, or to set forth requirements for that use (such as the requirement of meaningful human control).

While it is clear that States must respect the right to life if they intend to use autonomous weapons, the issue at hand is whether and how they can ensure that such standards are met when using a technology that may not be able to reliably make the value judgments necessary to assess the necessity and proportionality of the use of lethal force. Arguably, the fundamental nature of the right to life calls for an unequivocal statement that decisions to kill should not be delegated to machines; hence, States must not employ autonomous weapons for law enforcement or at a minimum ensure meaningful human control over them. Overall, although the Convention should be praised for being the first AI human rights treaty ever to be adopted, it does not significantly contribute to clarifying whether and how States can ensure that they respect the right to life if they intend to develop or use autonomous weapons for law enforcement and other domestic purposes. This omission may be explained by the assumption that autonomous weapons will only be used in armed conflict, resulting in a tendency to only consider the right-to-life concerns this technology raises with regard to its military use.

To illustrate these observations, we briefly discuss two resolutions of the Parliamentary Assembly of the Council of Europe. The first resolution, adopted in October 2020, concerns the role of AI in police and criminal justice systems.

116 Ibid., Article 16.
117 Ibid., Article 16(4).
118 Ibid., Article 15.
119 Council of Europe, Explanatory Report to the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, 5 September 2024, § 103.
120 As recounted in the Explanatory Report, the provisions of the Convention “are purposefully drafted at a high level of generality, with the intention that they should be overarching requirements that can be applied flexibly in a variety of rapidly changing contexts” (§ 49). According to the same document, the open-ended risk-based approach underlying the Convention “is based on the assumption that the Parties are best placed to make relevant regulatory choices, taking into account their specific legal, political, economic, social, cultural, and technological contexts, and that they should accordingly enjoy a certain flexibility when it comes to the actual governance and regulation which accompany the processes” (§ 106).

Zbornik znanstvenih razprav – letnik LXXXIV, 2024 / Ljubljana Law Review, Vol. LXXXIV, 2024
It notes that AI applications for use by the police and criminal justice systems have been developed and introduced in many countries, and “include facial recognition, predictive policing, the identification of potential victims of crime, risk assessment in decision making on remand, sentencing and parole, and identification of ‘cold cases’ that could now be solved using modern forensic technology”.121 The Assembly expressed concerns over the use of such applications, in particular in light of the lack of transparency, unfairness, responsibility gaps, unsafety and disregard for privacy,122 and called on Member States to mitigate the risks of such applications seriously impacting human rights.123 The resolution does not consider the potential use of autonomous weapons by police and the concerns it raises with regard to the right to life.

The second resolution, adopted in January 2023, concerns the emergence of lethal autonomous weapons and their necessary apprehension through European human rights law. This resolution considers the risks associated with the development and use of lethal autonomous weapons in armed conflict and the need for such systems to comply with IHL and human rights law, especially the right to life.

121 Parliamentary Assembly of the Council of Europe, Resolution 2342 (2020) Justice by algorithm – the role of artificial intelligence in policing and criminal justice systems, § 6.
122 Ibid., § 7.
123 Ibid., § 9.
In order to meet the requirement that the right to life be protected by law, the Assembly stressed that States “must introduce a legal framework defining the limited circumstances in which the use of these weapons is authorised”.124 The Assembly further maintained that “[f]rom the viewpoint of international humanitarian law and human rights law, regulation of the development and above all of the use of [lethal autonomous weapon systems] is therefore indispensable” and that “[r]espect for the rules of international humanitarian and human rights law can only be guaranteed by maintaining human control […] over lethal weapons systems at all stages of their life cycle”.125 For this reason, the Assembly supported the adoption of non-binding and binding instruments by the UN Group of Governmental Experts on Lethal Autonomous Weapons Systems and invited its Member States to consider initiating such work at the Council of Europe if a consensus does not emerge within a reasonable period of time in that forum.126 This resolution does not consider the potential use of lethal autonomous weapons in law enforcement and other domestic contexts, nor the concerns such use raises regarding the right to life.

Given its fundamental importance, protecting the right to life should be an absolute priority when establishing a legal framework for AI based on human rights. Arguably, this includes regulating the potential use of autonomous weapons in and outside armed conflict and carefully considering their serious implications for the right to life. Although not specifically reflected in the text of the Council of Europe AI Convention, right-to-life considerations with regard to autonomous weapons can—and should—be taken into account by the Parties when implementing the risk-based approach foreseen in the treaty. Given the reporting obligation foreseen in the Convention,127 it will be interesting to see if Parties adopt and report on any measures in this regard.
Moreover, once the Conference of Parties is convened, it will be interesting to see if right-to-life considerations feature in discussions regarding the interpretation and application of the Convention or possibly regarding the supplementation of the Convention.128

124 Parliamentary Assembly of the Council of Europe, Resolution 2485 (2023) Emergence of lethal autonomous weapons systems (LAWS) and their necessary apprehension through European human rights law, § 6.4.
125 Ibid., § 7.
126 Ibid., §§ 14–18.
127 Council of Europe AI Convention, Article 24.
128 Ibid., Article 23.

5. Conclusion

The increasing use of AI across most, if not all, domains of human life raises legal and societal concerns that should be addressed proactively. This article does not in any way contest the need to ensure that the use of AI across sectors respects human rights, such as the right to privacy and the right not to be discriminated against. What is argued in this article is rather that the specific concerns raised by the possibility of machines autonomously making the decision to kill deserve the same careful consideration, if not more. Although errors are inevitable when using any technology, caution must be especially acute when such errors may lead to death.

Considering the implications of autonomous weapons for the right to life, this article analysed the different extents to which four recent initiatives to regulate AI considered the potential delegation of decisions on the use of lethal force to AI. While all initiatives stressed the importance of respecting human rights, none explicitly referred to the right to life or to the development and use of autonomous weapons. Only one initiative, the UNESCO Recommendation on the Ethics of Artificial Intelligence, explicitly considered and cautioned against the possibility of AI systems being used to make life-and-death decisions.
Arguably, the fundamental nature of the right to life requires that initiatives to regulate AI carefully consider such a possibility and unequivocally state that decisions to kill should not be delegated to machines. As “the supreme right” inherent to every human being, whose “effective protection […] is the prerequisite for the enjoyment of all other human rights”,129 it is crucial that the development and use of AI in and outside armed conflict is fully aligned with the negative and positive obligations of States in relation to the right to life.

Discussions on the creation of an international legal framework for AI based on respect for human rights will likely continue and intensify in the future, as technology progresses. If the difficulty in achieving consensus in the UN Group of Governmental Experts on Lethal Autonomous Weapons Systems is indicative, agreeing upon a framework which specifically addresses the concerns raised by autonomous weapons may prove challenging. Nevertheless, as precisely this technology entails the most serious consequences, the hope should be expressed that, regardless of the forum at hand, right-to-life considerations feature more prominently in future discussions to regulate AI.

129 Human Rights Committee, General Comment No. 36 on Article 6: right to life, § 2.

References

Allison, P.R. (2016) What does a bomb disposal robot actually do?, BBC, (accessed 15 June 2023).
Alston, P. (2010) Interim report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, UN Doc. No. A/65/321.
Amnesty International (2015) Autonomous weapons systems: five key human rights issues for consideration, (accessed 31 January 2024).
Amoroso, D., & Tamburrini, G.
(2020) ‘Autonomous Weapons Systems and Meaningful Human Control: Ethical and Legal Issues’, Current Robotics Reports 1, pp. 187–194.
Asaro, P. (2012) ‘On banning autonomous weapon systems: human rights, automation, and the dehumanization of lethal decision-making’, International Review of the Red Cross 94(886), pp. 687–709.
Blenkinsop, P. (2023) EU tech chief sees draft voluntary AI code within weeks, Reuters, (accessed 15 June 2023).
Bo, M., Bruun, L., & Boulanin, V. (2022) Retaining Human Responsibility in the Development and Use of Autonomous Weapons Systems: On Accountability for Violations of International Humanitarian Law Involving AWS, Stockholm International Peace Research Institute, October 2022.
Boutin, B., & Woodcock, T. (2022) Aspects of Realizing (Meaningful) Human Control: A Legal Perspective, Research paper series, Asser Institute Center for International and European Law.
Brehm, M. (2017) Defending the boundary: constraints and requirements on the use of autonomous weapon systems under international humanitarian and human rights law, Geneva Academy Briefing No 9.
Choudhury, M.R. et al. (2021) Letter dated 8 March 2021 from the Panel of Experts on Libya established pursuant to resolution 1973 (2011) addressed to the President of the Security Council, UN Doc. No. S/2021/229.
Derico, B., & Clayton, J. (2022) San Francisco to allow police ‘killer robots’, BBC, (accessed 13 June 2023).
Docherty, B. (2014) Shaking the Foundations: The Human Rights Implications of Killer Robots, Human Rights Watch.
Droege, C. (2007) ‘The interplay between international humanitarian law and international human rights law in situations of armed conflict’, Israel Law Review 40(2), pp. 310–355.
European Court of Human Rights (2022) Guide on Article 2 of the European Convention on Human Rights: Right to Life, (accessed 23 June 2023).
European Court of Human Rights (2023) Factsheet – Armed Conflicts, (accessed 23 June 2023).
Fung, B. (2016) Meet the Remotec Andros Mark V-A1, the robot that killed the Dallas shooter, The Washington Post, (accessed 15 June 2023).
Guterres, A. (2023) Press Conference: Secretary-General Urges Broad Engagement from All Stakeholders towards United Nations Code of Conduct for Information Integrity on Digital Platforms, UN Doc. No. SG/SM/21832.
Heyns, C. (2013) Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, UN Doc. No. A/HRC/23/47.
Heyns, C. (2014a) Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, UN Doc. No. A/HRC/26/36.
Heyns, C. (2014b) Presentation made at the informal expert meeting organized by the state parties to the Convention on Certain Conventional Weapons, 13–16 May 2014, Geneva, Switzerland, by Christof Heyns, Professor of human rights law, University of Pretoria, United Nations Special Rapporteur on extrajudicial, summary or arbitrary execution, (accessed 16 June 2023).
Heyns, C. (2014c) Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, UN Doc. No. A/69/265.
Heyns, C. (2016) ‘Human Rights and the use of Autonomous Weapons Systems (AWS) During Domestic Law Enforcement’, Human Rights Quarterly 38(2), pp. 350–378.
International Committee of the Red Cross (2021) ICRC position and background paper on autonomous weapon systems, (accessed 15 June 2023).
Jones-Jang, S.M., & Park, Y.J. (2022) ‘How do people react to AI failure? Automation bias, algorithmic aversion, and perceived controllability’, Journal of Computer-Mediated Communication 28(1), pp. 1–8.
Kiai, M., & Heyns, C.
(2016) Joint report of the Special Rapporteur on the rights to freedom of peaceful assembly and of association and the Special Rapporteur on extrajudicial, summary or arbitrary executions on the proper management of assemblies, UN Doc. No. A/HRC/31/66.
Marijan, B. (2023) Allowing Killer Robots for Law Enforcement Would Be a Historic Mistake, Centre for International Governance Innovation, (accessed 31 January 2024).
Naert, F. (2016) ‘Human rights and (armed) conflict’, in Wouters, J., De Man, Ph., & Verlinden, N. (eds.) Armed Conflicts and the Law, Oxford – Antwerp, Intersentia, pp. 187–218.
Odon, D.I. (2022) Armed conflict and human rights law: protecting civilians and international humanitarian law. London: Routledge.
Office of the United Nations High Commissioner for Human Rights (2011) International Legal Protection of Human Rights in Armed Conflict, UN Doc. No. HR/PUB/11/01.
Office of the United Nations High Commissioner for Human Rights (2020) UN Guidance on Less-Lethal Weapons in Law Enforcement, UN Doc. No. HR/PUB/20/1.
Potkin, L., & Wongcha-um, P. (2023) Exclusive: Southeast Asia to set ‘guardrails’ on AI with new governance code, Reuters, (accessed 15 June 2023).
Reeves, S.R., Alcala, R.T., & McCarthy, A. (2021) ‘Challenges in regulating lethal autonomous weapons under international law’, Southwestern Journal of International Law 27(1), pp. 101–118.
Reuters (2017) Robocop joins Dubai police to fight real life crime, (accessed 15 June 2023).
Rodríguez, G. (2023) SFPD may resubmit proposal for ‘killer robots’ after policy was blocked, reigniting debate, ABC News, (accessed 13 June 2023).
Sinder, S., & Simon, M. (2016) How robot, explosives took out Dallas sniper in unprecedented way, CNN, (accessed 15 June 2023).
Singapore Home Team Science and Technology Agency (2021) HTX Ground Robot on Trial at Toa Payoh Central to Support Public Officers in Enhancing Public Health and Safety, (accessed 15 June 2023).
Spagnolo, A.
(2017) ‘Human rights implications of autonomous weapon systems in domestic law enforcement: sci-fi reflections on a lo-fi reality’, QIL Zoom-in 43, pp. 33–58.
Spagnolo, A. (2019) ‘What Do Human Rights Really Say About the Use of Autonomous Weapons Systems for Law Enforcement Purposes?’, in Carpanelli, E., & Lazzerini, N. (eds.) Use and Misuse of New Technologies, Springer, pp. 55–72.
Taddeo, M., & Blanchard, A. (2022) ‘A Comparative Analysis of the Definitions of Autonomous Weapons Systems’, Science and Engineering Ethics 28(37).
The Guardian (2014) Half of US-Mexico border now patrolled only by drone, (accessed 15 June 2023).
UK Prime Minister’s Office (2023) Press release: UK to host first global summit on Artificial Intelligence, (accessed 15 June 2023).
U.S. Department of Homeland Security (2022) Feature Article: Robot Dogs Take Another Step Towards Deployment at the Border, (accessed 15 June 2023).
Yeung, K. (2020) ‘Introductory Note to the Recommendation of the Council on Artificial Intelligence (OECD)’, International Legal Materials 59(1), pp. 27–34.

© The Author(s) 2024
Scientific Article
DOI: 10.51940/2024.1.217-253
UDC: 341.33/.34:341.232:004.8

Maruša T. Veber*

Artificial Intelligence and Humanitarian Assistance: Reassessing the Role of State Consent

Abstract

The author analyses the notion of State consent in the delivery of humanitarian assistance supported by artificial intelligence (AI) systems from the perspective of the existing applicable international legal regimes, in particular, the general legal regime of humanitarian assistance and the specific rules deriving from international humanitarian law and international human rights law. She argues that the notion of consent lies at the heart of these rules, with a distinction made between strategic and operational consent to humanitarian assistance.
The former refers to a State’s general consent to the delivery of humanitarian assistance on its territory, while the latter refers to the consent required at the operational level for the delivery of a particular type of humanitarian assistance in a specific geographically defined area. It is argued that valid reasons for withholding operational consent to AI-supported humanitarian assistance under international humanitarian law must be distinguished from the arbitrary withholding of strategic consent. While withholding operational consent may be legally justified, the arbitrary withholding of strategic consent to humanitarian assistance is prohibited under the relevant international legal regimes when it amounts to a violation of other existing obligations of the State concerned (e.g., under international humanitarian law or human rights law). In such situations, the non-consensual delivery of humanitarian assistance could be legally justified either through United Nations Security Council authorisation or by secondary rules of international law, in particular countermeasures.

Key words

artificial intelligence, humanitarian assistance, consent, arbitrary withholding of consent, countermeasures.

* PhD, Assistant Professor and Associate Researcher, Department of International Law, Faculty of Law, University of Ljubljana, marusa.veber@pf.uni-lj.si.

Zbornik znanstvenih razprav – letnik LXXXIV, 2024 / Ljubljana Law Review, Vol. LXXXIV, 2024, pp. 217–253

1. Introduction**

Humanitarian assistance is increasingly being carried out by relying on digital information technologies, including artificial intelligence (AI).
AI systems,1 which typically draw on large amounts of data,2 including the biometric data of aid recipients, have the potential to significantly enhance the accuracy and effectiveness of aid delivery, while also helping to prevent the misuse of humanitarian aid. By making the distribution of aid conditional on the use of AI and biometric data, the organisations mandated to deliver aid in the aftermath of man-made or natural disasters aim to ensure that the assistance reaches those in need, thereby preventing it from being diverted and used for other purposes. However, the use of AI in a humanitarian context raises numerous important legal questions, including whether the aid-receiving State consents to the use of AI systems in its territory, and whether it might withhold consent due to concerns over the potential dual use of the collected data and the security of that data. Indeed, there have been instances where parties engaged in armed conflict have declined AI-supported humanitarian assistance provided by international humanitarian organisations, as evidenced by the situation in Yemen.

This paper analyses the notion of State consent in the delivery of humanitarian assistance supported by AI systems, viewed from the perspective of the applicable international legal regimes—particularly the general legal regime of humanitarian assistance, as well as specific rules derived from international humanitarian law and international human rights law. It argues that consent lies at the heart of the rules governing the provision of humanitarian assistance, with a distinction drawn between strategic and operational consent. Strategic consent refers to a State’s general consent to the delivery of humanitarian assistance on its territory, while operational consent refers to the consent required at the operational level for delivering a particular type of humanitarian assistance in a specific geographic area.
It is argued that valid reasons for withholding operational consent to AI-supported humanitarian assistance under international humanitarian law must be distinguished from the arbitrary withholding of strategic consent. While the withholding of operational consent may be legally justified, the arbitrary withholding of strategic consent to humanitarian assistance is prohibited under the relevant international legal regimes.

** This paper was prepared in the framework of a research project ‘Development and use of artificial intelligence in light of the negative and positive obligations of the state to guarantee the right to life (J5-3107)’, which is co-funded by the Slovenian Research Agency (ARIS). See also T. Veber, 2024; and T. Veber, 2025.
1 There is currently no uniform definition of AI. Arguably, the most authoritative definition was provided in the UNESCO Recommendation (2021), whereby AI systems are understood as “systems which have the capacity to process data and information in a way that resembles intelligent behaviour, and typically includes aspects of reasoning, learning, perception, prediction, planning or control”. UNESCO (2021), § 2. It is acknowledged, however, that the definition of AI will have to change over time in accordance with rapid technological developments. T. Veber, 2023, pp. 14–15.
2 Beduschi, 2022, pp. 1149–1169.
When analysing the consequences of arbitrarily withheld consent, this paper concludes that such withholding of consent cannot automatically confer legality on the non-consensual delivery of AI-supported assistance in such situations.3 Rather, it argues that in certain limited cases, where the arbitrary withholding of consent amounts to a violation of other existing obligations of the State (e.g., under international humanitarian law or international human rights law), the non-consensual delivery of humanitarian aid could be legally justified either by United Nations Security Council (UNSC) authorisation or by existing secondary rules of international law, in particular the law of countermeasures.

It is acknowledged that, apart from the issue of State consent, the question of the consent of individuals—namely the consent of recipients of humanitarian aid delivered with the support of AI—also arises in this context. Specifically, the use of AI in humanitarian assistance raises various questions relating to data protection and the right to privacy of the individuals concerned.4 However, an analysis of this topic lies beyond the scope of this paper. It must also be noted that this paper focuses solely on the delivery of aid by international organisations, such as UN specialised agencies, rather than the work of non-governmental organisations (NGOs) mandated with delivering aid. Since NGOs are not subjects of international law stricto sensu, they are primarily governed by national laws, and international legal rules apply differently to them than to international organisations.5 Finally, this paper focuses on the notion of consent in the context of AI-supported humanitarian assistance, a concept specific to the broader question of the possible non-consensual provision of humanitarian assistance under international law.
Following this introduction, the paper briefly presents the practice of using AI by international humanitarian organisations (section 2). Section 3 discusses the AI-specific legal regimes, while section 4 analyses the relevant international legal regimes governing the provision of humanitarian assistance under international law. The paper then outlines the modalities of and the distinction between withholding operational consent and withholding strategic consent to humanitarian assistance (section 5) and discusses the (il)legality of the non-consensual provision of humanitarian assistance under international law (section 6). Section 7 presents possible legal justifications for the non-consensual delivery of humanitarian assistance under the UNSC collective security regime and the secondary rules on responsibility, with special emphasis on the law of countermeasures. Finally, section 8 offers concluding remarks.

3 This argument is put forward, for example, by Barber, 2023.
4 For more on this, see: T. Veber, 2025. See also Narbel & Sukaitis, 2021; European Data Protection Board, 2022, p. 10; FRA, 2020; Wills, 2019; Kuner & Marelli, 2020, pp. 280–296.
5 Kuner, 2020, p. 81. Generally, on non-governmental organizations, see: Lindblom, 2009.

2. AI-Supported Humanitarian Assistance

Humanitarian assistance is increasingly provided through reliance on digital information technologies, including AI. For example, the World Food Programme (WFP), in partnership with the United Nations High Commissioner for Refugees (UNHCR), introduced an iris-scan payment system in a Jordanian refugee camp, enabling 76,000 Syrian refugees to purchase food from camp supermarkets using only an iris scan instead of cash, vouchers, or e-cards.6 This system connects with different databases within seconds (e.g., the UNHCR and bank databases), thereby enabling quick and efficient aid delivery.
By making the distribution of aid conditional on the use of AI and biometric data, these organisations aim to ensure that the assistance goes directly to those in need, preventing its diversion for other purposes.7 However, the use of AI in a humanitarian context also raises numerous important legal questions.

One particular concern is that AI systems in humanitarian assistance may be problematic because of the possible dual use of the data these systems collect. AI systems run on a variety of datasets and produce large amounts of data, which may help improve aid delivery. Yet, that same data can easily be used for other purposes and become a tool for surveillance, security checks, tracing, or deportation.8 The issue of data security and the potential compromise of sensitive data is particularly relevant since, in the past, cyberattacks on humanitarian organisations exposed the personal data of about 500,000 vulnerable people around the world.9 Moreover, requests have been made by different States to access biometric data on refugees held by humanitarian organisations, in order to use such data for security checks and deportation procedures.10

In addition, international organisations mandated with, for example, food assistance are increasingly relying on private commercial actors to support their humanitarian activities. One illustrative example is the WFP, which pledged to “become a digitally enabled and data-driven organization, with investments in new technology”,11 and recently partnered with Palantir to use its software to provide faster and more efficient food assistance.12 Palantir is a leading US company specialising in data analytics, which is also increasingly integrating AI into various aspects of its operation, ranking among the top AI software platforms.13 Palantir has, however, been the subject of criticism, with allegations that it provided controversial data-sifting software to US government agencies.14 In this context, the term “surveillance humanitarianism” is sometimes used to describe the potential widespread collection of data in a humanitarian context without adequate safeguards—an approach that may “inadvertently amplify the vulnerability of individuals in need of humanitarian aid”.15 Others refer to “techno-colonialism”, wherein practices of digital innovation “can lead to reproducing the colonial relationships of dependency and inequality amongst different populations around the world.”16

Due to these concerns, parties to an armed conflict have, on occasion, refused AI-supported humanitarian aid. For example, in 2019, the WFP decided to suspend the delivery of food aid in Yemen because of a disagreement about using technology that employed biometric data (via iris scans, fingerprints, or facial recognition) to support aid delivery to food recipients.17 The principal objection was the concern that utilising AI and collecting data could jeopardise the security of the State.18

6 WFP, 2016.
7 Reuters, 2019.
8 Martin et al., 2023, pp. 1363–1397.
9 Macdonald, 2022.
10 In the past, Bangladesh, Lebanon, Malaysia, and the US, for example, requested access to UNHCR biometric data on refugees. Martin et al., 2023, p. 1382.
11 WFP strategic plan (2022–2023), WFP/EB.2/2021/4-A/1/Rev.2, 12 November 2021, § 130.
12 Parker, 2019.
13 Businesswire, 2022.

3. The AI-Specific Legal Framework

In 2024, two legally binding AI-specific documents were adopted. The Council of Europe adopted the first international treaty regulating the development and use of AI systems, which stresses the need for the application of existing human rights obligations to the development and use of AI systems, and provides some concrete safeguards in this respect.
The Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (Framework Convention on AI)19 is based on the fol- lowing fundamental principles: human dignity and individual autonomy, equality and non-discrimination, respect for privacy and personal data protection, transparency and oversight, accountability and responsibility, and reliability and safe innovation.20 On the other hand, at the European Union (EU) level, the AI Act21 was adopted, governing the development and use of AI. In terms of substantive provisions, the AI Act is based on the so-called ‘risk-based’ approach. This means that it categorises AI 14 Martin et al., 2023, p. 1363; BBC, 2020. 15 Latonero, 2019, as cited in Beduschi, 2022, p. 1152. 16 Madianou, 2019, as cited in Beduschi, 2022, p. 1152. 17 Reuters, 2019; Welsh, 2019. 18 Martin et al., 2023, p. 1364. 19 Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, Council of Europe Treaty Series – No. [225], 2024. 20 More on this see T. Veber, 2025. 21 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 222 Zbornik znanstvenih razprav – letnik LXXXIV, 2024 LjubLjana Law Review, voL. LXXXiv, 2024 systems according to the level of risk they might pose from the perspective of health, safety, fundamental rights, the environment, democracy or the rule of law, into: prohib- ited AI practices, high-risk systems listed in Annex III, general-purpose AI models with systemic risk, and general-purpose AI models. 
While AI systems with unacceptable risks are prohibited, high-risk systems are subject to certain requirements in terms of data quality,22 transparency,23 human oversight,24 fundamental rights impact assessment25 and registration.26 Under the AI Act, certain biometric identification27 systems fall under prohibited practices, e.g. biometric categorisation systems that categorise individual natural persons28 and ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement (except in certain limited cases).29 On the other hand, uses of other types of biometrics (e.g., remote biometric identification systems) would have to comply with the requirements for high-risk AI systems.30
While these documents regulate the development and deployment of AI systems, including biometric systems, their relevance for the present paper is limited for the following two reasons:
1. Humanitarian international organisations, such as the WFP, are not parties to these treaties and, even in cases where the use of AI systems by humanitarian international organisations would fall under the material and territorial scope of these laws, the enforcement of these rules is foreclosed by the privileges and immunities to which IOs are entitled under international law;31
2. These two documents regulate AI products within their member States/signatories and provide safeguards concerning the protection of the human rights of individuals possibly affected by the use of AI. They do not explicitly address the possible non-consensual use of AI in the territory of a country affected by a humanitarian catastrophe of any sort.
and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), OJ L, 2024/1689, 12 July 2024 (AI Act).
22 AI Act, Article 10.
23 AI Act, Article 13.
24 AI Act, Article 14.
25 AI Act, Article 27.
26 AI Act, Article 49.
27 According to Article 3(35) AI Act, ‘biometric identification’ means the automated recognition of physical, physiological, behavioural, or psychological human features for the purpose of establishing the identity of a natural person by comparing biometric data of that individual to biometric data of individuals stored in a database.
28 AI Act, Article 5(1)(g).
29 AI Act, Article 5(1)(h).
30 AI Act, Section 2.
31 For a detailed analysis of this, see T. Veber, 2025. See also Kuner, 2019, p. 174.
At the United Nations (UN) level, which is most relevant to our discussions, AI is mainly being addressed through soft-law documents. Arguably, the most important development in this respect is the adoption of the ‘Principles for the Ethical Use of Artificial Intelligence in the United Nations System’ by the Inter-Agency Working Group on Artificial Intelligence in 2022.32 These principles aim to guide the design, development, deployment and use of AI by UN agencies through the following principles: do no harm; defined purpose, necessity and proportionality; safety and security; fairness and non-discrimination; sustainability; right to privacy, data protection and data governance; human autonomy and oversight; transparency and explainability; responsibility and accountability; and inclusion and participation. While these principles provide valuable guidance for the use of AI by UN agencies, including those mandated with the provision of humanitarian assistance, their primary aim is to safeguard the rights of individuals subject to AI systems. In this respect, no explicit legal obligations can be derived from these principles for international organisations providing AI-supported humanitarian assistance in terms of the consent of the concerned, aid-receiving State to the use of AI on its territory.
However, the foregoing does not mean that the delivery of AI-supported humanitarian aid by humanitarian international organisations remains unregulated as a legal lacuna. Activities of international organisations utilising AI in their humanitarian delivery missions are governed by the existing applicable international legal regimes, which are analysed in the remaining part of this paper.

4. International Legal Regimes Governing Humanitarian Assistance and the Issue of Consent

Deriving from the principle of sovereignty and its corollary, the principle of non-intervention,33 it is the primary responsibility of the affected State to ensure, organise, coordinate, and implement the protection of affected persons and the provision of humanitarian assistance in cases of natural disasters and other emergencies occurring on its territory, or on territory under its jurisdiction or control.34 When States possess adequate
32 UN, Principles for the Ethical Use of Artificial Intelligence in the United Nations System, 20 September 2022.
33 Articles 2(1) and 2(7) Charter of the United Nations (24 June 1945, entered into force 24 October 1945) 1 UNTS XVI (UN Charter); Declaration on Principles of International Law Concerning Friendly Relations and Cooperation among States in accordance with the Charter of the United Nations, UNGA Resolution 2625 (XXV), 24 October 1970, UN Doc. A/RES/2625(XXV).
34 Draft Article 10, Draft articles on the protection of persons in the event of disasters, YILC 2016, vol. II, Part Two; UNGA Resolution 46/182, 19 December 1991, UN Doc. 46/182, Annex, Guiding Principles, § 4. See also Institute of International Law, Resolution on Humanitarian Assistance, Bruges Session – 2003, 8 September 2003 (IIL Resolution 2003), p. 5.
capacities to respond to man-made or other humanitarian disasters, the issue of consent will generally not arise.
However, in cases where a disaster exceeds national response capacities and civilians in need are inadequately provided with essential supplies, the assistance of other actors, including international organisations as part of the international community, is warranted through the provision of impartial humanitarian assistance to the affected State.35 The question that arises here is whether the concerned State has an obligation to accept such humanitarian assistance and whether humanitarian assistance could potentially be provided in the absence of the affected State’s consent. In the context of AI-supported humanitarian assistance, the use of AI technology, such as iris scanning, may be the main reason for withholding consent.
Three relevant legal regimes apply to situations of the delivery of humanitarian assistance in cases of man-made or natural disasters: the general humanitarian assistance legal regime, the international human rights regime and the international humanitarian law regime. The first two will generally apply to humanitarian situations not involving an armed conflict. In times of armed conflict, however, a specific regime concerning the provision of humanitarian assistance exists under international humanitarian law. Even though the general humanitarian assistance legal regime and international human rights law continue to apply in such situations, in times of war, the human rights regime is to be interpreted in light of the rules of international humanitarian law, which are applicable as lex specialis.36

4.1. General Humanitarian Assistance Legal Regime

Humanitarian assistance in cases of both natural37 and man-made38 disasters and emergencies has continuously occupied the agenda of the UN, with UN General Assembly (UNGA) Resolution 46/182 (1991)39 outlining the guiding principles of humanitarian assistance as the cornerstone of this regime.
Among the key guiding principles embedded in this document are the principles of humanity, neutrality and impartiality.
35 Oxford Guidance on the Law Relating to Humanitarian Relief Operations in Situations of Armed Conflict, 2016, § 6.
36 Legal Consequences of the Construction of a Wall in the Occupied Palestinian Territory, Advisory Opinion (2004) ICJ Rep. 136, §§ 106–113.
37 UNGA Resolution 2034 (XX), 7 December 1965, UN Doc. A/RES/2034; UNGA Resolution 44/236, 22 December 1989, UN Doc. 44/236.
38 The Institute of International Law classified »disasters« as either natural disasters, man-made disasters of technological origin or disasters caused by armed conflicts or violence. IIL Resolution 2003. UNGA Resolution 2816 (XXVI), 14 December 1971, UN Doc. A/RES/2816.
39 UNGA Resolution 46/182, 19 December 1991, UN Doc. 46/182.
In terms of the consent of the affected State to external humanitarian assistance, the third guiding principle centres around the prior consent of the affected State:
“[T]he sovereignty, territorial integrity and national unity of States must be fully respected in accordance with the Charter of the United Nations.
In this context, humanitarian assistance should be provided with the consent of the affected country and in principle on the basis of an appeal by the affected country.”40
The issue of consent has also been debated within the International Law Commission’s (ILC) work on the Draft articles on the protection of persons in the event of disasters, which explicitly acknowledge that “the provision of external assistance requires the consent of the affected State.”41 Similarly, the notion of consent is central in the work of the Institute of International Law (IIL) on the matter, whereby its resolution on humanitarian assistance stresses that “States and organizations have the right to provide humanitarian assistance to victims in the affected States, subject to the consent of these States.”42 Accordingly, State sovereignty and the notion of consent seem to lie at the heart of the general humanitarian assistance regime.
Simultaneously, however, both the ILC and the IIL recognise that in cases of disasters exceeding national response capacities, the concerned State “shall seek”43 or “has a duty to seek”44 assistance from competent international organisations, third States and other actors. As explained in the commentaries, the ILC embedded this reasoning on the basis of the principle of sovereignty, which does not only confer rights upon States but also imposes certain duties.45 However, at the same time, the ILC, in the commentary, stressed that the term “seek” cannot be equated with a duty to give consent to humanitarian assistance but rather “entails the proactive initiation by an affected State of a process through which agreement may be reached.”46 Consent to humanitarian assistance, therefore, remains central to this regime, with the ILC acknowledging that “the provision of
40 UNGA Resolution 46/182, 19 December 1991, UN Doc. 46/182, Annex, Guiding Principles, § 3. See also UNGA Resolution 67/87, UN Doc.
A/RES/67/87, 26 March 2013 (“Emphasizing also the fundamentally civilian character of humanitarian assistance, and, in situations in which military capacity and assets are used to support the implementation of humanitarian assistance, reaffirming the need for their use to be undertaken with the consent of the affected State and in conformity with international law, including international humanitarian law, as well as humanitarian principles.”).
41 Draft articles on the protection of persons in the event of disasters (2016), Article 13.
42 IIL Resolution 2003, § IV(2).
43 IIL Resolution 2003, § III(3).
44 Draft articles on the protection of persons in the event of disasters (2016), Article 11.
45 Draft articles on the protection of persons in the event of disasters (2016), Commentary to Article 10, § 3.
46 Draft articles on the protection of persons in the event of disasters (2016), Commentary to Article 11, § 6.
external assistance requires the consent of the affected State [which] is fundamental to international law.”47

4.2. International Human Rights Law Regime

The obligation to provide humanitarian assistance to civilian populations in need also stems from the international human rights law regime, particularly from the provisions on the right to life deriving from Article 6 of the International Covenant on Civil and Political Rights (ICCPR)48 and the right to food as enshrined in Article 11 of the International Covenant on Economic, Social and Cultural Rights (ICESCR)49.
As already mentioned, it is generally confirmed that the obligations of States under international human rights law continue to apply in times of armed conflict,50 including in times of occupation.51
The right to life under Article 6(1) ICCPR is non-derogable under the ICCPR52 even in “time of public emergency which threatens the life of the nation,” which includes situations of armed conflict and other public emergencies.53 The Human Rights Committee explained States’ obligations deriving from this provision in its General Comment 36, whereby it affirmed that the right to life “should not be interpreted narrowly” and includes “the entitlement of individuals to be free from acts and omissions that are intended or may be expected to cause their unnatural or premature death, as well as to enjoy a life with dignity.”54 Moreover, it expressly recognised that the positive obligations of States (the duty to protect life) include taking “appropriate measures to address the general conditions in society that may give rise to direct threats to life or prevent individuals from enjoying their right to life with dignity”, including “widespread hunger and malnutrition and extreme poverty and homelessness.”55 In this respect, it confirmed that the right to life includes the obligation to ensure access to humanitarian assistance:
“The measures called for to address adequate conditions for protecting the right to life include, where necessary, measures designed to ensure access without delay by individuals to essential goods and services such as food, water, shelter, health care, electricity and sanitation, and other measures designed to promote and facilitate adequate general conditions, such as the bolstering of effective emergency health services, emergency response operations.”56
On the other hand, according to the International Covenant on Economic, Social and Cultural Rights, the right to food as embedded in Article 11 includes the obligation that:
“The States Parties will take appropriate steps to ensure the realization of this right, recognizing to this effect the essential importance of international co-operation based on free consent.”57
Unlike civil and political rights, economic, social and cultural rights may not be derogated from in times of emergency, which is compensated by the fact that they are subject to progressive realisation, i.e. dependent on the available resources of States. Therefore, even in emergencies, States have to do their best to work towards the progressive realisation of these rights and guarantee the minimum content of the core obligations.58 The Committee on Economic, Social and Cultural Rights expressly stressed in General Comment No. 12 on the Right to Adequate Food that violations of the right to food:
“can occur through the direct action of States or other entities insufficiently regulated by States. These include […] the prevention of access to humanitarian food aid in internal conflicts.”59
While indeed economic, social and cultural rights are subject to progressive realisation, States are under an obligation to ensure “minimum essential levels” of these rights.60 In this respect, the Committee also stressed the need to seek international assistance to secure available resources for the realisation of the right to food. For a State not to be in breach of Article 11 by failing to ensure “at the very least, the minimum essential level required to be free from hunger” in cases of natural or man-made disasters, it has to
47 Draft articles on the protection of persons in the event of disasters (2016), Commentary to Article 13, § 2.
48 International Covenant on Civil and Political Rights, 16 December 1966, UNTS, vol. 999, p. 171 (ICCPR).
49 International Covenant on Economic, Social and Cultural Rights, 16 December 1966, UNTS, vol. 993, p. 3 (ICESCR).
50 Legal Consequences of the Construction of a Wall in the Occupied Palestinian Territory, Advisory Opinion (2004) ICJ Rep. 136, §§ 106–113.
51 See, e.g., UN Human Rights Committee (UN HRC), General Comment No. 31, UN Doc. CCPR/C/Rev.1/Add.13 (2004), § 10; Akande & Gillard, 2016, p. 504.
52 ICCPR, Article 4(2).
53 See also UN HRC, General Comment No. 29: Article 4: Derogations during a State of Emergency, 31 August 2001, CCPR/C/21/Rev.1/Add.11.
54 UN HRC, General comment no. 36, Article 6 (Right to Life), 3 September 2019, CCPR/C/GC/36, § 3 (General comment no. 36).
55 UN HRC General comment no. 36, § 26.
56 Ibid.
57 ICESCR, Article 11(1).
58 Organization for Security and Co-operation in Europe Office for Democratic Institutions and Human Rights, Report on Violations of International Humanitarian and Human Rights Law, War Crimes and Crimes Against Humanity Committed in Ukraine (1 April – 25 June 2022), ODIHR.GAL/36/22/Corr.1, 14 July 2022, p. 83.
59 UN Committee on Economic, Social and Cultural Rights (CESCR), General Comment No. 12: The Right to Adequate Food (Art. 11 of the Covenant), 12 May 1999, E/C.12/1999/5, § 19.
60 UN CESCR, General Comment No. 3: The Nature of States Parties’ Obligations, UN Doc. E/1991/23, 14 December 1990, § 10.
“demonstrate that every effort has been made to use all the resources at its disposal in an effort to satisfy, as a matter of priority, those minimum obligations,” including that it has sought to obtain international support.61
Moreover, according to the Special Representative of the UN Secretary-General on internally displaced persons, the obligation to allow for third-party provision of humanitarian assistance also stems from other rights, such as the right to an adequate standard of living, health and education.62
Against this background, certain obligations concerning the provision of humanitarian assistance, including a duty to seek assistance from the international community, explicitly derive from international human rights law. To fulfil their international obligations towards individuals, States may, therefore, have to resort to international support in cases where their resources are inadequate to meet protection needs.63 If they fail to do so, they risk breaching their above-mentioned obligations under the ICCPR and ICESCR.
However, these obligations cannot be translated into a general obligation to give unconditional consent to the provision of humanitarian assistance on the territory of the concerned State to protect the right to life of its citizens and realise its progressive obligation under the right to food. As will be explained below, within the international human rights legal framework, the question of States’ human rights obligations and possible violations is to be determined against the background of the concrete circumstances of a situation, whereby the question of whether denial of consent to humanitarian assistance is necessary and proportionate to achieving legitimate ends is generally assessed in the context of arbitrariness.

4.3.
International Humanitarian Law Regime

The obligation to provide humanitarian assistance to the civilian population in times of armed conflict is one of the central aspects of international humanitarian law. It is acknowledged that under this regime different modalities of humanitarian assistance arise in different contexts: international armed conflict, non-international armed conflict, occupation, and the provision of humanitarian assistance on territories controlled by non-state actors.64 Outlining all modalities of the relevant rules governing the provision of humanitarian assistance in these contexts is beyond the scope of this paper. Rather, this section merely clarifies the role of the consent of the concerned State in the provision of humanitarian assistance, which seems to lie at the centre of the international humanitarian law regime.
61 UN CESCR General Comment 12, 1999, § 17.
62 “A State is deemed to have violated the right to an adequate standard of living, to health and to education, if authorities knew or should have known about the humanitarian needs but failed to take measures to satisfy, at the very least, the most basic standards imposed by these rights. State obligations thus include the responsibility to follow up on these situations of concern and assess relevant needs in good faith, and ensure that humanitarian needs are being met, by the State itself or through available assistance by national or international humanitarian agencies and organizations, to the fullest extent possible under the circumstances and with the least possible delay.” Report of the Representative of the Secretary-General on the human rights of internally displaced persons, UN Doc. A/65/282, 11 August 2010, § 69.
63 Draft articles on the protection of persons in the event of disasters (2016), Commentary to Article 11, § 3.
At the outset, it has to be explained that under the international humanitarian law regime, two different levels of consent exist:
1. At the strategic level, humanitarian international organisations have to seek the consent of the concerned State to enter the territory or territories in question (the so-called strategic consent); and
2. At the operational level, once this strategic consent has been obtained, the provision of humanitarian assistance is subject to the right of control by the parties to the conflict.65
In other words, parties to the conflict are to give operational consent to the provision of specific humanitarian aid in a certain geographic area and may prescribe technical arrangements for the passage of such humanitarian assistance, search humanitarian aid to verify the humanitarian nature of supplies, prevent convoys from affecting or being affected by military operations and ensure supplies meet health and safety standards.66 While this section primarily addresses strategic consent, operational consent is analysed in the following section.
In terms of strategic consent under international humanitarian law, parties to the conflict are obliged to allow free passage of humanitarian assistance to those in need.67 The International Committee of the Red Cross (ICRC) has emphasised on numerous occasions the importance of unimpeded access to humanitarian assistance by civilian populations in times of armed conflict, in accordance with the applicable rules of international humanitarian law.68 According to customary international humanitarian law applicable to international and non-international armed conflicts,
“[P]arties to the conflict must allow and facilitate unimpeded passage of humanitarian relief for civilians in need, which is impartial in character and conducted without any adverse distinction subject to their right of control.”69
64 Ryngaert, 2013, pp. 6–9.
65 See also Sharpe, 2023.
66 Oxford Guidance on the Law Relating to Humanitarian Relief Operations in Situations of Armed Conflict, 2016, §§ 65–72.
67 Henckaerts, 2005, Rule 55.
68 International Humanitarian Law Databases, Customary IHL, Rule 55, (accessed 30 April 2023).
69 Henckaerts, 2005, Rule 55.
The provision of humanitarian assistance in international70 and non-international71 armed conflicts is also governed by relevant treaty law. The Fourth Geneva Convention obliges States to allow for “the free passage of all consignments of essential foodstuffs,”72 whereby Additional Protocol I broadens this obligation to the “rapid and unimpeded passage of all relief consignments, equipment and personnel.”73 In times of occupation, the occupying power “shall agree to relief schemes on behalf of the respective population and shall facilitate them by all the means at its disposal.”74 The obligation to allow for and facilitate access to humanitarian relief for civilians in need is also enshrined in national military manuals and is supported by State practice.75
The question that often arises in situations of armed conflict is whether there exists an obligation of parties to a conflict to give strategic consent to humanitarian assistance and whether assistance could be provided without such strategic consent.
In this respect, relevant rules specifically regulating humanitarian assistance in non-international and international armed conflicts emphasise the central role of the consent of the affected State, whereby the obligation to allow for the free passage of humanitarian assistance is preconditioned by the “consent of the High Contracting Party concerned,”76 or is “subject to the agreement of the Parties concerned in such relief actions.”77 The requirement of consent also clearly stems from Additional Protocol II, which preconditions the delivery of humanitarian assistance in non-international armed conflicts on the explicit consent of the parties to the conflict. It has sometimes, therefore, been argued that, in relation to non-signatories of AP II, delivery of aid in non-international armed conflicts could be non-consensual.78 However, the requirement of consent can also be implied from other provisions and customary international law, as it is hard to imagine how parties to a conflict could make use of their right to “control”79 the provision of humanitarian assistance (operational consent) without previously giving strategic consent to such assistance.80
It is (probably) against this background that the ICRC concludes it is considered “self-evident” that a humanitarian organisation cannot “operate (in states) without the strategic consent of the party concerned,” both in international and non-international armed conflicts,81 and that some scholars talk about the “absolute” nature of the requirement of consent.82 This is not without problems and does not mean that States have no obligations concerning humanitarian assistance, nor that they can arbitrarily withhold consent. As will be explained in the following section, arbitrarily withholding consent to humanitarian assistance typically amounts to a violation of international law. It does, however, confirm that humanitarian organisations generally would not operate in affected States without their consent.
This seems to be endorsed by the UNSC, which in its resolutions often called for unimpeded access to humanitarian assistance in conflict situations,83 while at the same time also reaffirming the commitment of UN member States to respect the sovereignty, territorial integrity and political independence of the aid-receiving State,84 and urging all parties in a particular situation to facilitate the delivery of humanitarian assistance in accordance with international humanitarian law.85 The important role of strategic consent is also implied in relevant resolutions of the UNGA, whereby it called on affected States to facilitate the work of humanitarian organisations,86 not outlining, however, that they are legally obliged to do so unconditionally.
70 Geneva Convention Relative to the Protection of Civilian Persons in Time of War (Fourth Geneva Convention), 12 August 1949, 75 UNTS 287, Articles 23 and 59; Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), 8 June 1977, 1125 UNTS 3, Articles 69–71.
71 Common Article 3(2) of the Geneva Conventions; Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of Non-International Armed Conflicts (Protocol II), 8 June 1977, 1125 UNTS 609, Article 18.
72 Fourth Geneva Convention, 1949, Article 23.
73 Protocol I, 1977, Article 70(2). See also Protocol II, 1977, Article 18(2).
74 Fourth Geneva Convention, 1949, Article 59. See also Fourth Geneva Convention, 1949, Article 62.
75 International Humanitarian Law Databases, Customary IHL, Rule 55, (accessed 30 April 2023).
76 Protocol II, Article 18(2).
77 Protocol I, Article 70(1).
78 Barber, 2023, p. 2; Sproson & Olabi, 2023, p. 1 ff.; see also American Relief Coalition for Syria, 2022, pp. 25–36.
The question of consent under international humanitarian law is especially pertinent in situations where part of the territory of the conflict-affected State is controlled by non-state actors. In such situations, States are especially inclined, for military reasons, to deny humanitarian assistance in these areas, as was, for example, the case in Syria.87
79 Henckaerts, 2005, Rule 55.
80 A similar conclusion is reached in the Oxford Guidance on the Law Relating to Humanitarian Relief Operations in Situations of Armed Conflict, 2016, § 30.
81 Henckaerts & Doswald-Beck, 2005, Commentary to Rule 55, pp. 195–200. See also Report of the Secretary-General on the protection of civilians in armed conflict, UN Doc. S/2013/689, 22 November 2013, § 58.
82 Akande & Gillard, 2016, p. 489.
83 See, e.g., UNSC Resolution 853, 29 July 1993, UN Doc. S/RES/853 (1993). For more relevant UNSC resolutions see International Humanitarian Law Databases, Customary IHL, Rule 55, (accessed 30 April 2023).
84 See, e.g., UNSC Resolution 688, 5 April 1991, UN Doc. S/RES/688, § 3.
85 UNSC Resolution 2216 (2015), UN Doc. S/RES/2216 (2015), 14 April 2015.
86 UNGA Resolution 46/182, 19 December 1991, UN Doc. 46/182, Annex, Guiding Principles, § 6.
87 See below, section 5.3.
Views on such situations are diverse.
Some scholars contend that the host State in such situations is not concerned with humanitarian relief operations provided in opposition-held areas, especially when a non-state actor controls part of the territory of a State and takes over governance functions.88 The prevailing view, however, seems to be that the strategic consent “of the High Contracting Party concerned”89 as framed in Protocol II refers to the State party to the conflict.90 Deriving from the principle of sovereignty over a State’s territory, it is, therefore, the prerogative of a State to give consent, even on territories that it does not fully control. Although this is highly controversial in situations where the civilian population suffers from a lack of necessities, in practice, humanitarian organisations will typically require the consent of the concerned State to provide humanitarian assistance on territories controlled by non-state actors.91
Based on the foregoing, the notion of strategic consent seems to be at the heart of the international humanitarian law regime governing humanitarian assistance in times of armed conflict. However, as will be explained below, at the operational level, parties to the conflict may, for valid reasons (e.g., military necessity), deny operational consent to the provision of humanitarian assistance in a particular situation.92
It has been explained in this section that the notion of consent lies at the heart of the humanitarian assistance regimes. As a ‘hallmark’ of these regimes, lack of consent is often the major practical limitation to humanitarian relief operations.93 However, as is well known, the principle of sovereignty, from which the notion of consent derives, cannot be perceived as unlimited.94 In the context of humanitarian assistance, this translates into the prohibition of arbitrarily withholding consent.
In the following two sections, this paper will distinguish between occasions of legally justified withholding of operational consent on the one hand and the prohibition of arbitrarily withholding strategic consent to humanitarian assistance on the other. While States, as parties to the conflict, may rely on military necessity to withhold operational consent to humanitarian assistance, it will be explained that this is not possible at the strategic level. The final section will explain that while arbitrarily withholding consent at the strategic level typically amounts to a violation of international law, such a violation does not in itself render the non-consensual provision of humanitarian assistance per se lawful. Rather, the underlying violation (the arbitrary withholding of consent) triggers the application of relevant secondary rules of international law, under which the non-consensual provision of humanitarian assistance could be justified.
88 Bothe, 1982, p. 696; Barber, 2009, pp. 384–385.
89 Protocol II, Article 18(2).
90 See, e.g., Akande & Gillard, 2016, p. 17; Gal, 2017, p. 45.
91 This question was especially pertinent in the context of Syria. See Landgren et al., 2023.
92 Similarly, Ryngaert, 2013, pp. 6–9.
93 Ryngaert, 2013, p. 9.
94 Discussions on "sovereignty as responsibility" or "relative sovereignty" were especially brought forward in the context of the principle of the Responsibility to Protect. See Sancin, 2010, pp. 33–49.
Maruša T. Veber – Artificial Intelligence and Humanitarian Assistance: Reassessing the Role of State Consent
5. Withholding of Operational and Strategic Consent to Humanitarian Assistance
When analysing the possible withholding of consent to humanitarian assistance, one has to distinguish between the following two situations: 1.
Withholding of operational consent under international humanitarian law, whereby parties to a conflict may have valid legal reasons to withhold consent, for example because the humanitarian assistance does not comply with the technical modalities they have prescribed (a possibility embedded in most relevant primary rules on humanitarian assistance) or because of military necessity; and 2. The possible arbitrary withholding of strategic consent, which is prohibited under the analysed humanitarian assistance regimes.
5.1. Withholding of Operational Consent
It is generally acknowledged that States and parties to conflicts may withhold their operational consent to humanitarian assistance under certain circumstances. In this respect, consent to humanitarian assistance may be lawfully withheld in cases where humanitarian assistance is not aligned with the prescribed technical modalities established by the aid-receiving State. Consent may also be withheld for imperative reasons of security if, for example, foreign relief personnel could hamper military operations in the concerned State,95 as well as in cases of non-compliance with the principles of humanity, neutrality, impartiality,96 and non-discrimination by the external aid provider.
For example, under international humanitarian law, parties to the conflict may exercise control over the relief action.97 In this respect, the aid-receiving State generally has the right to "prescribe the technical arrangements" of the humanitarian assistance and may make its permission "conditional on the distribution of this assistance being made under the local supervision."98 Moreover, those providing humanitarian relief must not "exceed the terms of their mission" and "shall take account of the security requirements of the Party in whose territory they are carrying out their duties."99
95 Akande & Gillard, 2016, p. 499.
96 Stoffels, 2004, pp. 539–544.
97 See, e.g., Fourth Geneva Convention, 1949, Article 23; Protocol I, Article 70(3); Henckaerts, 2005, Rule 55.
98 Fourth Geneva Convention, Article 23; Protocol I, Article 70(3).
99 Protocol I, Article 71(4).
Similarly, the ILC explicitly recognised in its work on the protection of persons in the event of disasters that "the affected State may place conditions on the provision of external assistance."100 This is also confirmed in practice, whereby international organisations providing humanitarian assistance to affected States typically sign a specific agreement and negotiate these technical modalities of humanitarian assistance before they engage with a concerned State. In this sense, they also secure the advance consent of the concerned State.101
Determining the modalities of the provision of humanitarian assistance—i.e. whether it will include the use of AI—could arguably fall under this 'technical' category of valid legal reasons to deny humanitarian assistance. In situations where humanitarian assistance is rejected due to its AI component, as was the case in Yemen, the question of alternative means to provide humanitarian assistance arises. For example, could a UN agency assist without relying on AI technology? Arguably, denying the use of AI on the territory of a State cannot be seen as breaching rules on humanitarian assistance in instances where adequate alternative solutions exist. In this respect, the possibility of denying humanitarian assistance due to technical modalities, such as the use of AI, seems to be embedded in the primary rules governing humanitarian assistance and cannot be seen as a violation of these rules.
It has to be acknowledged, however, that technical arrangements have to be applied in good faith, whereby their imposition or effect must not be arbitrary.102 According to the ICRC, military necessity can only "be invoked in exceptional circumstances in order to regulate—but not prohibit—humanitarian access, and can only temporarily and geographically restrict the freedom of movement of humanitarian personnel."103 Unjustified withholding of operational consent would amount to a violation of relevant rules of international humanitarian law. However, as will be explained below, in instances where such denial would be all-encompassing and unjustified and would amount to serious violations of the State's other international obligations relating to the civilian population, and where the use of AI would be the only way to distribute such assistance or would be proportionally the most appropriate and efficient way to distribute it, one could discuss the issue of the arbitrary withholding of consent and the subsequent responsibility of the concerned State.
5.2. Arbitrary Withholding of Strategic Consent
The withdrawal of operational consent due to the technical modalities of humanitarian assistance has to be distinguished from the arbitrary withholding of strategic consent to humanitarian assistance.
100 Draft articles on the protection of persons in the event of disasters, 2016, draft Article 14.
101 See, e.g., Article XI (Assistance agreements), General Regulations and General Rules, WFP (2022) (accessed 20 April 2023).
102 Oxford Guidance on the Law Relating to Humanitarian Relief Operations in Situations of Armed Conflict, 2016, § 71.
103 ICRC, 2014, p. 364.
While the first is a right of States deriving from primary rules governing humanitarian assistance, strategic consent is a precondition for the delivery of humanitarian aid in the first place, and its arbitrary withholding is generally prohibited by these same primary rules.104 It seems to be generally accepted that strategic consent to humanitarian assistance cannot be arbitrarily withheld.105
The modalities of arbitrariness are, however, not generally defined under international law. Arbitrariness is, therefore, typically dependent on the circumstances of a concrete situation. Some guidance as to arbitrariness can be found under international human rights law, whereby the question of whether withholding of consent is necessary and proportionate to achieving legitimate ends is crucial.106 In this respect, arbitrariness has been understood as refusing consent in a manner that is "unjustified" and not "in pursuit of [a] legitimate aim"107; "unreasonable, unjust, lacking in predictability or […] otherwise inappropriate"108 or not pursued for "reasons that are valid and compelling."109 It is often argued that a withholding of consent that would violate other obligations of a State under international law should be regarded as arbitrary.110 According to Sivakumaran, refusal is arbitrary if it "results in the violation by a state of its obligations under international law concerning the civilian population in question (such as its human rights obligations), or if it violates the principle of necessity and proportionality, or if it discriminates against a particular group."111 In this sense, there seems to be agreement among scholars and institutions that if the withholding of consent results in mass atrocities, such as war crimes or crimes against humanity, it could arguably be considered arbitrary.112 In this respect, under international humanitarian law, a denial of humanitarian assistance intended to cause, contribute to, or perpetuate starvation would amount to a violation of the
prohibition of starvation of the civilian population as a method of warfare113 and may also amount to a war crime under international criminal law.114 Moreover, "intentional inflictions of conditions of life, inter alia the deprivation of access to food and medicine, calculated to bring about the destruction of part of a population" may amount to extermination as a crime against humanity.115 Similarly, the systematic rejection of humanitarian assistance in areas populated by a particular ethnic group would amount to a violation of the rule prohibiting adverse distinction under international humanitarian law116 and the prohibition of discrimination under international human rights law,117 and could possibly amount to a crime against humanity.118
The question of the withholding of consent to humanitarian assistance has been extensively addressed outside the AI context, especially in cases where it resulted in gross violations of international humanitarian law and international human rights law. This was the case in Ethiopia, where the Mengistu regime banned the movement of relief supplies during the famine that emerged in 1989,119 and more recently in the case of Syria's denial of consent to international humanitarian aid on the territories controlled by non-state actors.120 In these situations, it has been argued that arbitrary denial of humanitarian assistance violates the aforementioned rules of international humanitarian law and international human rights law governing humanitarian assistance.121 In the past, the Human Rights Committee has considered arbitrary denial of humanitarian assistance as violating international human rights obligations of States, including the right to life.122
104 International Humanitarian Law Databases, Customary IHL, Rule 55 (accessed 30 April 2023); Draft articles on the protection of persons in the event of disasters, 2016, draft Article 13; Institute of International Law, Santiago de Compostela Resolution, 1989, Article 5; see also IIL Resolution, 2003.
105 Report of the Secretary-General on the protection of civilians in armed conflict, UN Doc. S/2013/689, 22 November 2013, § 58; Oxford Guidance on the Law Relating to Humanitarian Relief Operations in Situations of Armed Conflict, 2016, section E; Akande & Gillard, 2016, pp. 489 ff; Henckaerts & Doswald-Beck, 2005, p. 197; IIL Resolution, 2003, § VIII; Draft articles on the protection of persons in the event of disasters, 2016, draft Article 13(2).
106 See also Akande & Gillard, 2016, pp. 498–499 and 505–507.
107 Sivakumaran, 2015, pp. 517–521.
108 Akande & Gillard, 2016, p. 22.
109 Gillard, 2013, p. 360.
110 Akande & Gillard, 2016, pp. 494–495.
111 Sivakumaran, 2015, p. 521.
112 Rottensteiner, 1999, pp. 555–581; IIL Resolution, 2003, § VIII.
113 Protocol I, Article 54(1); Protocol II, Article 14; Akande & Gillard, 2016, pp. 495–496.
114 Rome Statute of the International Criminal Court, UNTS 2187, 17 July 1998, EIF 1 July 2002, p. 3 (Rome Statute), Article 8(2)(b)(xxv).
115 Rome Statute, Articles 7(1)(b) and 7(2)(b); ILC Draft articles on Prevention and Punishment of Crimes Against Humanity, with commentaries, YILC 2019, vol. II, Part Two, p. 28. See also national criminal legislation, e.g., Criminal Code of the Republic of Slovenia, Official Gazette of the Republic of Slovenia, No. 50/12 – official consolidated version, 6/16 – corr., 54/15, 38/16, 27/17, 23/20, 91/20, 95/21, 186/21, 105/22 – ZZNŠPP and 16/23, Article 101.
116 Common Article 3 of the Geneva Conventions. See also Geneva Convention Relative to the Treatment of Prisoners of War (Third Geneva Convention), 12 August 1949, 75 UNTS 135, Article 16; Fourth Geneva Convention, Article 13; Protocol I, Article 75(1); Protocol II, Article 4(2).
117 See, e.g., ICCPR, Article 26; Akande & Gillard, 2016, p. 497.
118 Rome Statute, Article 7(1)(h).
119 International Humanitarian Law Databases, Customary IHL, Rule 55 (accessed 30 April 2023).
120 Ryngaert, 2013, pp. 5–19. See below, section 5.3.
121 UNSC, Statement by the President of the Security Council, UN Doc. S/PRST/2013/15, 2 October 2013. See also UNSC Resolution 2216 (2015), UN Doc. S/RES/2216 (2015), 14 April 2015; UNGA Resolution 68/182, UN Doc. A/RES/68/182, 30 January 2014, § 14.
122 See, e.g., Human Rights Committee, Concluding observations on the fourth periodic report of the Sudan, UN Doc. CCPR/C/SDN/CO/4, 19 August 2014, § 8.
As already explained, conditioning consent to humanitarian assistance on certain technical requirements, i.e. the non-use of a particular AI system, seems to be supported by the relevant primary rules of international law governing humanitarian assistance.
However, when such denial would amount to serious violations of the State's other international obligations relating to the civilian population, and where the use of AI would be the only way to distribute such assistance or would be proportionally the most appropriate and efficient way to distribute it, the issue of the arbitrariness of such withholding of consent comes to the forefront. The assessment of arbitrariness has to be made on a case-by-case basis, taking into consideration the above-mentioned elements, and is fraught with difficulty in the decentralised international legal reality. While on these occasions the arbitrariness of the withholding of consent seems self-evident, stemming from the gravity of the underlying breach, this is not always the case. Under international human rights law, there may be a need for a more nuanced balance between the provision of humanitarian assistance and the realisation of the right to life and the right to food on the one hand and, for example, the right to privacy, which the use of AI on the territory of the State concerned may undermine, on the other.
It is beyond the scope of this paper to provide a detailed overview of the cases and arguments relating to the arbitrary withdrawal of consent. For our discussion, it is important to determine what legal consequences stem from the arbitrary withdrawal of consent to humanitarian assistance: does it result in the per se legality of non-consensual humanitarian aid, or does it trigger the justification of the non-consensual provision of humanitarian aid under the secondary rules of international law? This is important because it essentially determines the legal analysis and course of action to be undertaken in such situations by international organisations.
5.3.
Legal Consequences of Arbitrarily Withholding Strategic Consent
Some argue that in instances of the arbitrary withholding of consent, the provision of non-consensual humanitarian assistance is to be considered per se lawful.123 These scholars seem to make the argument that there exists, at the level of the primary rules governing humanitarian assistance, a customary international legal rule allowing for the non-consensual provision of humanitarian relief in such cases. This view was voiced particularly strongly in the case of Syria, where, despite the deterioration of the humanitarian situation due to the ongoing conflict in the country, the Syrian government refused to give consent to 'cross-border' operations that would reach more than three million people located in remote areas. Already in 2014, a coalition of international lawyers made a statement arguing that "there is no legal barrier to the UN directly undertaking cross-border humanitarian operations"124 to opposition-controlled areas, because such operations meet all the conditions of legality, neutrality, impartiality, and non-discrimination, and due to Syria's prior arbitrary withholding of consent, which caused a serious humanitarian situation in the country.125
This view was reiterated in 2023, after a devastating earthquake in southern Turkey seriously affected thousands of people in northwest Syria and destroyed the border crossing between the two countries, Bab al-Hawa, the only crossing the UN Security Council (UNSC) has authorised for humanitarian assistance to the opposition-held territory in Syria. This caused significant delays in international aid deliveries.
123 Barber, 2023, p. 1; Barber, 2009, pp. 371–397; Stoffels, 2004, p. 536; American Relief Coalition for Syria, 2022, pp. 25 ff.
Some scholars126 and a group of eminent academics and professionals adopted the statement "There is Still No Legal Barrier to UN Cross-Border Operations in Syria Without a UN Security Council Mandate,"127 in which they argue, among other things, that the refusal to permit cross-border aid in this situation is unlawful because it is arbitrary, and that continuous cross-border provision of aid is necessary to prevent possible distress, strife, and starvation.128
Without a doubt, in such situations, where the lives of millions of people who rely on cross-border aid are put at risk, allowing the non-consensual delivery of (possibly AI-supported) humanitarian aid seems reasonable and humane. However, it is argued here that the legal analysis in these cases should nevertheless be nuanced. Rather than arguing that non-consensual humanitarian assistance is per se lawful in such situations, one should carefully analyse and apply the relevant primary and secondary rules of international law. In particular, the question that has to be addressed in such situations is whether a non-consensual, 'clandestine', AI-supported humanitarian assistance operation would be in line with the principles of sovereignty and territorial integrity and the principle of the prohibition of intervention in the internal affairs of a State, and, subsequently, whether a violation of these primary rules could be justified on the basis of the secondary rules of international law, such as countermeasures.
124 The Guardian, 2014.
125 Ibid.
126 Barber, 2023, p. 1.
6. Non-Consensual AI-Supported Humanitarian Aid and the Principles of Non-Intervention and Sovereignty
The provision of non-consensual humanitarian assistance is in tension with two fundamental principles of international law: the principle of sovereignty and its corollary, the principle of non-intervention. This conclusion is embedded in the primary
127 There is Still No Legal Barrier to UN Cross-Border Operations in Syria Without a UN Security Council Mandate, 2023.
128 Ibid. See also Barber, 2023; Sproson & Olabi, 2023.
rules governing humanitarian assistance, which, as explained above, are preconditioned on the consent of the concerned State.129
Discussions on the provision of humanitarian aid are generally centred on the principle of non-intervention as interpreted by the International Court of Justice (ICJ) in the Nicaragua case. This principle, often labelled controversial,130 is commonly perceived as consisting of two elements: (1) the act in question relates to the internal or external affairs of the targeted State, and (2) the act is coercive in nature.131 In the Nicaragua case, the Court analysed these two elements when discussing the support of the US to the Contras in the form of financial support, training, the supply of weapons, intelligence, and logistic support. The ICJ considered this support a clear breach of the principle of non-intervention due to its purpose, i.e. coercing Nicaragua and supporting the Contras in overthrowing the government. In contrast, however, the Court stressed that the provision of humanitarian assistance cannot be considered as violating the principle of non-intervention:
"The Court has however taken note that, with effect from the beginning of the United States governmental financial year 1985, namely 1 October 1984, the United States Congress has restricted the use of the funds appropriated for assistance to the contras to 'humanitarian assistance' (paragraph 97 above).
There can be no doubt that the provision of strictly humanitarian aid to persons or forces in another country, whatever their political affiliations or objectives, cannot be regarded as unlawful intervention, or as in any other way contrary to international law."132
The ICJ further stressed that there cannot be an intervention in internal affairs in cases where humanitarian assistance is limited to preventing and alleviating human suffering, protecting life and health, and ensuring respect for human beings, whereby it must be given without discrimination.133 This passage is commonly cited as confirming that the provision of humanitarian assistance cannot be considered as violating the principle of non-intervention.134 However, it is argued here that the ICJ's statement necessitates a more nuanced analysis.
The question that this passage raises is whether the ICJ was referring to the provision of humanitarian assistance without crossing the border of the concerned State (Nicaragua) or to humanitarian assistance including direct engagement in relief operations inside the country. Given the context in which the Court reached its conclusions—where it had previously found that US aid and support to the Contras (without crossing the border) was contrary to the principle of non-intervention due to the purposes of such aid, i.e. coercing Nicaragua in respect of matters in which each State is permitted by the principle of State sovereignty to decide freely, and the purpose of the Contras to overthrow the government of Nicaragua—one could deduce that the Court was referring to humanitarian assistance without crossing the border. Thus, at the centre of the decision was not the issue of consent and physical intrusion into the territory, but rather the purpose and nature of the aid provided by the US,135 without crossing the border.
An essential element of the principle of non-intervention is coercion, whereby a "prohibited intervention must constitute an attempt to coerce the targeted State by directly or indirectly interfering in the internal or external affairs of this State."136 In this respect, the element of 'coercion' in the principle of non-intervention has two facets: (1) coercion in a physical, direct sense, e.g. with the use of force, and (2) coercion in a non-physical, indirect sense, as was the case with, e.g., the provision of financial support.137 In cases of non-consensual humanitarian assistance, neither of these elements will typically be met, as the mere provision of impartial humanitarian assistance does not by itself constitute a coercive act aimed at affecting the freedom of decision of the targeted State.138 The Nicaragua decision, which asserts that humanitarian aid does not violate the principle of non-intervention because it does not seek to coerce the free will of the State concerned, must, therefore, be read in this context. Against this background, one could hardly conclude that the cited passage of the ICJ could be understood as permitting the cross-border provision of humanitarian relief inside the affected State without the consent of the concerned State under international law.139 Indeed, an offer of humanitarian assistance cannot be considered a violation of the principle of non-intervention, as it does not fulfil the 'coerciveness' criterion.
129 Gillard, 2013, p. 369; Stoffels, 2004, p. 535.
130 Jamnejad & Wood, 2009, p. 346.
131 Schmitt (ed.), 2017, p. 314.
132 Case Concerning Military and Paramilitary Activities in and Against Nicaragua (Nicaragua v. United States of America), Judgment (1986), ICJ Rep. 14, § 242 (Nicaragua case).
133 Ibid., § 243.
134 Barber, 2020, pp. 9–10.
However, for humanitarian assistance to be physically provided on the territory of a concerned State, that State's consent is necessary. This is because such a physical, non-consensual provision of humanitarian assistance would breach another principle of international law: the principle of sovereignty.
135 Nicaragua case, 1986, §§ 239–244.
136 Delerue, 2020, p. 235.
137 Nicaragua case, 1986, § 205.
138 Similarly Delerue in relation to cyber espionage and the principle of non-intervention: Delerue, 2020, p. 258.
139 For a similar conclusion, see Gillard, 2013, p. 370; Sproson & Olabi, 2023.
140 Nicaragua case, 1986, § 202.
141 Ibid., § 263.
A non-consensual, 'mere' physical intrusion into the territory of another State, without a coercive element and without the aim of affecting the internal or external affairs of the concerned State, is typically analysed in the context of the principle of sovereignty, which is a separate principle, albeit intrinsically related to the principle of non-intervention.140 According to this central141 principle of international law, States have supreme authority over their land, territory and appurtenances (e.g., internal waters, territorial seas, archipelagic waters, airspace, and subsoil).142 It is in the context of this principle that clandestine, non-consensual operations involving physical presence in the territory of a foreign State are typically discussed.143 This can also be buttressed by the fact that in the Nicaragua case, violations of sovereignty by way of physical intrusion—e.g.
by unauthorised overflights of Nicaragua's territory by aircraft belonging to or under the control of the government of another State (the US)—were addressed separately from the principle of non-intervention.144
Just as conducting espionage by State agents on the territory of another State violates the principle of sovereignty145 (and on some occasions, if the criteria are fulfilled, also the principle of non-intervention146), the non-consensual provision of AI-supported humanitarian assistance in the territory of the concerned State also amounts to a violation of the principle of sovereignty. According to Buchan, any "non-consensual incursion by one State into the territory of another State violates the rule of territorial sovereignty, regardless of whether that infraction produces damage."147 Similarly, Delerue concludes that "an unauthorised act by a State on the territory of the targeted State violates the territorial sovereignty of the latter"148 and that damage need not occur.149 Therefore, unauthorised physical intrusions, even of a neutral and humanitarian nature, violate the principle of sovereignty. It also has to be acknowledged, however, that whether the provision of humanitarian aid in a concrete case violates the principles of sovereignty and non-intervention must be assessed on a case-by-case basis.150
Finally, the non-consensual provision of humanitarian assistance with the use of AI also triggers another facet of the principles of non-intervention and sovereignty: the question of possible unauthorised, clandestine gathering of large amounts of data on the territory of a concerned State through AI systems. In the past, such clandestine activities were considered by States and courts to amount to a violation of the principles of sovereignty and non-intervention.
For example, in 2008, the Federal Court of Canada published its response to a request from the Canadian Security Intelligence Service (CSIS) to approve a warrant under Section 12 of the Canadian Security Intelligence Service Act 1984 to conduct surveillance against individuals located within the territory of other States. Under Canadian law, the Court could only issue the warrant if the activities being authorised were compliant with international law. In refusing to grant the warrant, the Court observed that:
"The intrusive activities […] are activities that clearly impinge upon the above-stated principles of territorial sovereign equality and non-intervention […] By authorizing such activities, the warrant would therefore be authorizing activities that are inconsistent with and likely to breach the binding customary principles of territorial sovereign equality and non-intervention, by the comity of nations. These prohibitive rules of customary international law […] have evolved to protect the sovereignty of nation states against interference from other states."151
In the context of cyber espionage, States such as Argentina, Bolivia, Brazil, Uruguay, and Venezuela condemned the clandestine activities of the US as unacceptable behaviour that violates their sovereignty.
142 Ibid., § 212; Delerue, 2020, p. 200.
143 Jennings & Watts, 2008, pp. 385–386.
144 See Nicaragua case, 1986, §§ 251 ff.
145 Delerue, 2020, p. 212.
146 Wright, 1962.
147 Buchan, 2021, p. 51.
148 Delerue, 2020, p. 212.
149 Ibid., pp. 213 and 215–219.
150 It has been argued, for example, with respect to the situation in Syria, that UN agencies are not providing cross-border aid. Sproson & Olabi, 2023, p. 4.
Against this background, scholars have concluded that 'pulling' data without the consent of a concerned State is contrary to international law, in particular the principle of sovereignty.152
To conclude, the non-consensual provision of humanitarian assistance is potentially in violation of two fundamental principles of international law: the principle of sovereignty and the principle of non-intervention. However, as will be explained in the last part of this paper, the consent of the concerned State could be substituted by a UNSC authorisation under Chapter VII of the UN Charter, or the non-consensual provision could be justified under the secondary rules of international law, countermeasures in particular.
7. Possible Legal Justifications for the Non-consensual Provision of Humanitarian Aid Under International Law
Indeed, while the arbitrary withholding of consent could be considered as violating the rules governing humanitarian assistance, in particular international humanitarian law and international human rights law, this unlawfulness itself does not justify the non-consensual delivery of humanitarian assistance at the level of the primary rules, owing to its tension with fundamental international legal principles. Rather, recourse to the UNSC or to the secondary rules of international law is necessary to justify non-consensual humanitarian assistance in cases of withholding of consent.
7.1. The Authorisation of the UNSC
In situations where humanitarian assistance is being arbitrarily withheld, deliberately denied or obstructed, and where such denial may constitute a threat to international peace and security, the UNSC may adopt appropriate measures under Chapter VII of the UN Charter153 to remedy the situation.
151 Federal Court, Canadian Security Intelligence Service Act (Re) (F.C.), SCRS-10-07, 2008 CF 301.
152 Buchan, 2021, pp. 54–55.
The role of the UNSC in substituting the consent of the State for the provision of humanitarian assistance has been extensively analysed in the context of the situation in Syria, where the Syrian government denied humanitarian access to civilians in opposition-controlled areas. In response, the UNSC first demanded that the Syrian authorities allow the delivery of humanitarian assistance.154 When they failed to do so, and as the humanitarian situation in Syria deteriorated, with the number of people in need of assistance exceeding 10 million, the UNSC, disturbed by the “continued, arbitrary and unjustified withholding of consent to relief operations,” adopted Resolution 2165 (2014),155 authorising UN agencies to provide humanitarian assistance in Syria through four designated international border crossings without the consent of the Syrian government.156 This resolution was continuously renewed until 2019; in 2020, however, due to the veto of Russia and China, which were concerned over the sovereignty of Syria,157 the authorisation for the cross-border humanitarian operation was reduced to a single international border crossing (from Turkey).158 This authorisation was considered necessary since the Syrian government restricted the delivery of humanitarian assistance to the areas not under its control.159

The UNSC authorisation to substitute the consent of a concerned State is a reasonable solution in cases of arbitrary withholding of consent by that State. However, in cases where the modalities and technical details for the provision of humanitarian assistance are in question and are the primary reason for denying AI-supported humanitarian assistance, it seems less likely that the UNSC would intervene.
Only in a situation where there is no appropriate technical alternative to the use of AI, and the consequences of refusing to allow the use of AI when distributing humanitarian assistance cause significant damage to the civilian population in need—amounting, for example, to starvation—would the adoption of a UNSC resolution be reasonable. Rather, the issue of technical modalities should be negotiated among the concerned parties, i.e. the humanitarian organisation and the State. It is only reasonable that the humanitarian organisation tries to comply with the technical requirements proposed by the host State to distribute assistance to those in need.

153 UNSC Resolution 1265, 17 September 1999, UN Doc. S/RES/1265; IIL Resolution 2003, § VII(3).
154 UNSC Resolution 2139, 22 February 2014, UN Doc. S/RES/2139. See also Presidential Statement of 2 October 2013, UN Doc. S/PRST/2013/15.
155 UNSC Resolution 2165, 14 July 2014, UN Doc. S/RES/2165.
156 UNSC Resolution 2165, § 2.
157 UNSC SC/14066, 20 December 2019, UN Doc. SC/14066.
158 See UNSC Resolution 2504, 10 January 2020, UN Doc. S/RES/2504; UNSC Resolution 2533, 13 July 2020, UN Doc. S/RES/2533.
159 Barber, 2020, p. 1.

7.2. Necessity

Under the secondary rules of international law, the most commonly proposed legal justifications for the provision of non-consensual humanitarian assistance are necessity and countermeasures, as circumstances precluding the wrongfulness of conduct that would otherwise not conform with the international obligations of the concerned humanitarian organisation.
In other words, because the non-consensual provision of humanitarian assistance arguably violates the principle of sovereignty, this last part of the paper addresses the question of whether international humanitarian organisations could justify such a breach by relying on necessity or countermeasures as circumstances precluding wrongfulness.

According to Article 25 ARIO, an international organisation may invoke necessity as a ground for precluding the wrongfulness of an act not in conformity with an international obligation of that organisation, where that act is

“the only means for the organization to safeguard against a grave and imminent peril an essential interest of its member States or the international community as a whole, when the organization has, in accordance with international law, the function to protect that interest.”160

Additionally, to be able to rely on necessity, an international organisation must not seriously impair an essential interest of the concerned State.
The idea of the ILC, when introducing this circumstance, was that it would be used in exceptional and limited cases, under narrowly defined conditions,161 where an irreconcilable conflict exists between an essential interest on the one hand and the obligation of the concerned State or international organisation invoking necessity on the other.162 The ILC, however, explicitly stated in the commentaries that (forcible) humanitarian intervention cannot be justified on the basis of necessity as a circumstance precluding wrongfulness.163 What is more, the ILC concluded that the plea of necessity should not be invocable by international organisations as widely as by States, and thereby limited the possibility for international organisations to rely on necessity to instances where the essential interest of their member States or the international community as a whole is at stake, and the organisation has, in accordance with international law, the function to protect that interest.164 Against this background, three conditions have to be met for an international organisation to invoke necessity:

1. there is a grave and imminent peril to an essential interest of its member States or the international community as a whole;
2. the concerned international organisation has the function to protect that interest; and
3. the course of action, i.e. non-consensual humanitarian assistance, is the only available way to safeguard that interest.

160 Articles on Responsibility of International Organizations (ARIO), YILC 2011, Vol. II, Part Two, Article 25(1)(a).
161 Commentary to Article 25 ARIO, § 1.
162 Commentary to Article 25, Articles on Responsibility of States for Internationally Wrongful Acts (ARSIWA), YILC 2001, Vol. II, Part Two, p. 80, §§ 1–2.
163 Commentary to Article 25 ARSIWA, § 21.
164 Commentary to Article 25 ARIO, p. 52, § 4.
There is, however, very little practice in which international organisations have relied on the notion of necessity to justify their actions.165

Regarding the first criterion, a situation in which the denial of humanitarian assistance leads to suffering and grave violations of the rights of the civilian population, possibly amounting to war crimes or crimes against humanity, could be considered to implicate obligations erga omnes, and thus the essential interest of the international community as a whole.166 Regarding the second condition, UN agencies are generally mandated with the provision of humanitarian assistance and have the function to protect the concerned essential interest (helping the civilian population in need and preventing humanitarian catastrophes167). While necessity has been considered by scholars as a possible legal basis for the provision of non-consensual humanitarian assistance,168 its application to AI-supported humanitarian assistance is questionable. In particular, it is the third condition that is problematic in our context, as typically an alternative to AI-supported humanitarian assistance will be available to international organisations. If there are other (non-AI) means by which humanitarian assistance could be delivered, even if they are more costly or less convenient, this condition will not be met.169

7.3. Countermeasures

Whether non-consensual humanitarian relief operations can be characterised as countermeasures has already been considered (albeit briefly) by other scholars.170 Countermeasures are one of the circumstances precluding wrongfulness, allowing for a response to a previous breach of international law with the adoption of measures that would otherwise themselves be contrary to international law.171 Countermeasures aim to ensure the cessation of the alleged breach and, where appropriate, to ensure reparation for injury.172 They are thus of a temporary or provisional character, aiming at the restoration of legality. If the responsible subject complies with its obligations of cessation and reparation, the countermeasures are to be discontinued and the performance of the obligation resumed.173

Countermeasures may be adopted by international organisations to protect their individual interest when they are injured by a previous internationally wrongful act,174 or to safeguard a general interest.175 It is the latter, more controversial type of countermeasures that could be relevant for our discussion. In brief, the idea of so-called third-party countermeasures is that States and international organisations are entitled to invoke responsibility and adopt countermeasures in instances where they are not directly injured by a prior breach of international law, that is, in response to violations of erga omnes (partes) obligations. Due to their fundamental character, these obligations are “the concern of all States”, because “all States can be held to have a legal interest in their protection”;176 they are owed to the international community as a whole.

165 Commentary to Article 25 ARIO, p. 51, § 2.
166 For discussions on whether ensuring the safety of the civilian population and the severe suffering of the civilian population amount to an essential interest, see: Commentary to Article 25 ARSIWA, p. 83; Barber, 2023, p. 3; Gillard, 2013, p. 373; Ryngaert, 2013, p. 15.
167 See, e.g., Article II (The purposes and functions of WFP), General Regulations and General Rules, WFP, 2022 (accessed 20 April 2023).
168 Barber, 2023, p. 3; American Relief Coalition for Syria, 2022, pp. 37–43.
169 Commentary to Article 25 ARSIWA, p. 83, § 15.
170 See, e.g., Akande & Gillard, 2016, pp. 54–55; Ryngaert, 2013, p. 15; Stoffels, 2004, p. 537.
The question of the legality of the adoption of countermeasures in response to violations of erga omnes obligations, as codified by the ILC, has received considerable attention among scholars.177 It is not the purpose of the present research to further explore the extensive debates on this question. It is important to note, however, that from 2001 onwards, also against the background of increased practice, acceptance of the legality of such measures has become increasingly firmly established amongst international lawyers178 and other important professional organisations in the field of international law.179 It is therefore premised here that such an entitlement exists in international law.

In cases of withholding of consent to humanitarian assistance, it is the civilian population that suffers and is, therefore, directly affected by the activities of its host State. While the civilian population as such has limited options for invoking the responsibility of this State, in cases where the denial of humanitarian assistance amounts to a violation of an erga omnes obligation (e.g. a crime against humanity, genocide or a war crime), it is the other actors of the international community that are entitled to react on such occasions through the adoption of countermeasures in the form of the non-consensual delivery of humanitarian assistance. As already explained, non-consensual humanitarian assistance generally amounts to a violation of the sovereignty of the concerned State. However, in situations where the denial of AI-supported humanitarian assistance leads to a violation of an erga omnes obligation, States and international organisations are entitled to provide non-consensual humanitarian assistance through the adoption of countermeasures.180

8. Concluding Remarks

Careful analysis of the relevant regimes governing humanitarian assistance reveals that the consent of the concerned State continues to have a central role in the general humanitarian assistance regime, the international human rights regime and the international humanitarian law regime. The notion of consent lies at the heart of these rules, and consequently the lack of consent is often the major practical limitation to humanitarian relief operations. This paper distinguished between two different types of consent to AI-supported humanitarian assistance: strategic consent and operational consent. The former refers to the general consent of a State to the delivery of humanitarian assistance on its territory, while the latter refers to the consent required at the operational level for the delivery of a particular type of humanitarian assistance in a specific geographically defined area.

171 Article 22 and Part Three, Chapter II ARSIWA; Article 22 and Part Four, Chapter II ARIO. Naulilaa Incident Arbitration (Portugal v. Germany), 31 July 1928, RIAA, Vol. 2 (UN publications, 1949), p. 1012; Gabčíkovo-Nagymaros Project (Hungary v. Slovakia), Judgment (1997) ICJ Rep. 7, pp. 55–56; Air Services Agreement of 27 March 1946 between the United States of America and France, 9 December 1978.
172 Articles 22 and 49 ARSIWA; Articles 22 and 51 ARIO.
173 Commentary to Article 49 ARSIWA, pp. 130–131, § 7.
174 Article 22 ARIO (and Article 22 ARSIWA); Article 42 ARSIWA and Article 43 ARIO.
175 Articles 48 and 54 ARSIWA and Articles 49 and 57 ARIO.
176 Barcelona Traction, Light and Power Company, Limited (Belgium v. Spain), Judgment (1970) ICJ Rep. 3, §§ 33–34.
177 Frowein, 1987; Alland, 2002; Tams, 2005, pp. 198–251; Gaja, 2011; Dawidowicz, 2017.
178 See T. Veber, 2022, p. 311, footnote 2375.
179 IIL, Resolution on Obligations erga omnes in international law, Krakow, 2005.
States can have valid legal reasons for withholding operational consent, for example because the humanitarian assistance offered does not comply with the technical modalities they prescribe, a possibility embedded in most relevant primary rules on humanitarian assistance. Against this background, States may validly withhold operational consent to AI-supported humanitarian assistance and request an alternative (non-AI) distribution of humanitarian assistance. Strategic consent, on the other hand, which is a prerequisite to the delivery of humanitarian aid to a particular State, cannot be arbitrarily withheld; withholding is arbitrary where the denial would amount to serious violations of the State’s other international obligations relating to the civilian population, and where the use of AI would be the only way to distribute such assistance or would be proportionally the most appropriate and efficient way to do so. In such an instance, the issue of the responsibility of the concerned State arises, and subsequently also the possibility of non-consensual AI-supported humanitarian assistance.

It has been explained that the provision of non-consensual humanitarian aid cannot be considered as per se legal under international law. Careful analysis of relevant ICJ case law and State practice reveals that the non-consensual provision of humanitarian assistance would amount to a violation of the principles of sovereignty and non-intervention. However, non-consensual humanitarian assistance could nevertheless be justified by a UNSC authorisation or under the secondary rules of international law, countermeasures in particular.

180 For a view that the law of countermeasures cannot be applied to non-consensual humanitarian assistance, see Stoffels, 2004, p. 536.
This latter possibility is limited to instances in which the denial of AI-supported humanitarian assistance would simultaneously lead to a violation of an erga omnes obligation, thereby triggering the entitlement to provide non-consensual humanitarian assistance through the adoption of countermeasures.

Maruša T. Veber – Artificial Intelligence and Humanitarian Assistance: Reassessing the Role of State Consent

References

Akande, D., & Gillard, E.-C. (2016) ‘Arbitrary Withholding of Consent to Humanitarian Relief Operations in Armed Conflict’, International Law Studies, Vol. 92, pp. 483–511.

Alland, D. (2002) ‘Countermeasures of General Interest’, European Journal of International Law, Vol. 13, No. 5, pp. 1221–1239.

American Relief Coalition for Syria (2022) 2014 is not 2022: why the continuation of UN-coordinated cross-border aid into Syria absent a UN Security Council resolution is lawful (accessed 20 April 2023).

Barber, R. (2009) ‘Facilitating humanitarian assistance in international humanitarian and human rights law’, International Review of the Red Cross, Vol. 91, No. 874, pp. 371–397.

Barber, R. (2020) ‘Does International Law Permit the Provision of Humanitarian Assistance Without Host State Consent? Territorial Integrity, Necessity and the Determinative Function of the General Assembly’ in: Gill, T.D., Geiß, R., Krieger, H., & Mignot-Mahdavi, R. (eds.) Yearbook of International Humanitarian Law, Vol. 23. The Hague, T.M.C. Asser Press, pp. 85–121.

Barber, R. (2023) ‘There wasn’t before, and now there even more definitely isn’t, any legal barrier to providing cross-border humanitarian assistance in northwest Syria’, EJIL:Talk!, 13 February 2023 (accessed 20 April 2023).

BBC (2020) ‘Palantir: The controversial data firm now worth £17bn’ (accessed 20 April 2023).

Beduschi, A. (2022) ‘Harnessing the potential of artificial intelligence for humanitarian action: Opportunities and risks’, International Review of the Red Cross, Vol. 104, No. 919, pp. 1149–1169.

Buchan, R. (2021) Cyber Espionage and International Law. Hart Publishing.

Businesswire (2022) ‘Palantir Ranked No. 1 in Worldwide Artificial Intelligence Software Study in Market Share and Revenue’ (accessed 20 April 2023).

Dawidowicz, M. (2017) Third-Party Countermeasures in International Law. Cambridge, Cambridge University Press.

Delerue, F. (2020) Cyber Operations and International Law. Cambridge, Cambridge University Press.

European Data Protection Board (2022) Guidelines 05/2022 on the use of facial recognition technology in the area of law enforcement (accessed 20 April 2023).

FRA (2020) Facial recognition technology: fundamental rights considerations in the context of law enforcement (accessed 20 April 2023).

Frowein, J. (1987) ‘Collective enforcement of international obligations’, Zeitschrift für ausländisches öffentliches Recht und Völkerrecht, Vol. 47, pp. 67–79.

Gaja, G. (2011) The Protection of General Interests in the International Community, The Hague Academy Collected Courses Online / Recueil des cours de l’Académie de La Haye en ligne, Vol. 364. Martinus Nijhoff Publishers.

Gal, T. (2017) ‘Territorial Control by Armed Groups and the Regulation of Access to Humanitarian Assistance’, Israel Law Review, Vol. 50, No. 1, pp. 25–47.

Gillard, E.-C. (2013) ‘The Law Regulating Cross-Border Relief Operations’, International Review of the Red Cross, Vol. 95, No. 890, pp. 351–382.

Henckaerts, J.-M. (2005) ‘Study on Customary International Humanitarian Law: A contribution to the understanding and respect for the rule of law in armed conflict’, International Review of the Red Cross, Vol. 87, No. 857, March 2005.

Henckaerts, J.-M., & Doswald-Beck, L. (2005) Customary International Humanitarian Law, Volume I: Rules. Cambridge, Cambridge University Press.

ICRC (2014) ‘ICRC Q&A and lexicon on humanitarian access’, International Review of the Red Cross, Vol. 96, No. 893, pp. 359–375.
Jamnejad, M., & Wood, M. (2009) ‘The principle of non-intervention’, Leiden Journal of International Law, Vol. 22, No. 2, pp. 345–381.

Jennings, R., & Watts, A. (eds.) (2008) Oppenheim’s International Law, Ninth edition. Oxford, Oxford University Press.

Kop, M. (2021) EU Artificial Intelligence Act: The European Approach to AI, Transatlantic Antitrust and IPR Developments.

Kuner, C. (2019) ‘International Organizations and the EU General Data Protection Regulation: Exploring the Interaction between EU Law and International Law’, International Organizations Law Review, Vol. 16, No. 1, pp. 158–191.

Kuner, C., & Marelli, M. (2020) Handbook on Data Protection in Humanitarian Action, Second edition. ICRC (accessed 20 April 2023).

Landgren, K., Redhal, G., Romita, P., & Thompson, S.K. (2023) ‘The Demise of the Syria Cross-Border Aid Mechanism’, Lawfare, 23 August 2023 (accessed 20 April 2023).

Lindblom, A.-K. (2009) Non-Governmental Organisations in International Law. Cambridge, Cambridge University Press.

Macdonald, A. (2022) ‘African nations must implement safeguards against humanitarian digital ID risks: researcher’, Biometric Update (accessed 20 April 2023).

Martin, A., Sharma, G., de Souza, S.P., Taylor, L., van Eerd, B., McDonald, S.M., Marelli, M., Cheesman, M., Scheel, S., & Dijstelbloem, H. (2023) ‘Digitisation and Sovereignty in Humanitarian Space: Technologies, Territories and Tensions’, Geopolitics, Vol. 28, No. 3, pp. 1363–1397.

Narbel, V.G., & Sukaitis, J. (2021) ‘Biometrics in humanitarian action: a delicate balance’, Humanitarian Law & Policy (accessed 20 April 2023).

Oxford Guidance on the Law Relating to Humanitarian Relief Operations in Situations of Armed Conflict (2016) (accessed 20 April 2023).

Parker, B. (2019) ‘New UN deal with data mining firm Palantir raises protection concerns’, The New Humanitarian (accessed 20 April 2023).
Reuters (2019) ‘U.N. food chief warns aid suspension in Yemen likely to start this week’ (accessed 20 April 2023).

Rottensteiner, C. (1999) ‘The denial of humanitarian assistance as a crime under international law’, International Review of the Red Cross, Vol. 81, No. 835, pp. 555–582.

Ryngaert, C. (2013) ‘Humanitarian Assistance and the Conundrum of Consent: A Legal Perspective’, Amsterdam Law Forum, Vol. 5, No. 2, pp. 5–19.

Sancin, V. (ed.) (2010) Lokalni zločinci univerzalni zločini: Odgovornost zaščititi [Local Criminals, Universal Crimes: The Responsibility to Protect]. Ljubljana, GV Založba.

Sandoz, Y., Swinarski, C., & Zimmermann, B. (eds.) (1986) Commentary on the Additional Protocols of 8 June 1977 to the Geneva Conventions of 12 August 1949. International Committee of the Red Cross.

Schertel Mendes, L. (2023) The road to regulation of artificial intelligence: the Brazilian experience, Internet Policy Review (accessed 20 April 2023).

Schmitt, M.N. (ed.) (2017) Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations, Prepared by the International Groups of Experts at the Invitation of the NATO Cooperative Cyber Defence Centre of Excellence. Cambridge, Cambridge University Press.

Sharpe, M. (2023) Humanitarian access to Gaza (accessed 21 February 2024).

Sproson, J., & Olabi, I. (2023) With No Judge or Jury, Who Will Decide the Fate of 4.1 million Aid-Dependent Syrians? A Comment on the Legality of UN-Coordinated Cross-Border Aid Operations in Syria (accessed 20 April 2023).

Stoffels, R.A. (2004) ‘Legal regulation of humanitarian assistance in armed conflict: Achievements and gaps’, International Review of the Red Cross, Vol. 86, No. 855, pp. 515–545.

T. Veber, M. (2022) Sanctions adopted by international organizations in the defence of the general interest, PhD Thesis, Faculty of Law, University of Ljubljana.

T. Veber, M. (2023) ‘Z umetno inteligenco podprta humanitarna pomoč in odgovornost zaščititi’ [AI-Supported Humanitarian Aid and the Responsibility to Protect], Pravna praksa, 29 June 2023, Vol. 42/1597, No. 25, pp. 14–15.

T. Veber, M. (2024) ‘AI-Supported Humanitarian Aid and the Right to Life: Highlighting Some of the Legal Challenges Faced by International Humanitarian Organizations’ in: Sancin, V. (ed.) Artificial Intelligence and Human Rights: From the Right to Life to Myriad of Diverse Human Rights Implications. Ljubljana, Litteralis (forthcoming).

T. Veber, M. (2025) ‘International Organizations and AI-Supported Humanitarian Aid: Navigating Through the Applicable (Data Protection) Legal Regimes’, International and Comparative Law Review (forthcoming).

Tams, C.J. (2005) Enforcing Obligations Erga Omnes in International Law. Cambridge, Cambridge University Press.

The Guardian (2014) ‘There is no legal barrier to UN cross-border operations in Syria’, 28 April 2014 (accessed 20 April 2023).

There is Still No Legal Barrier to UN Cross-Border Operations in Syria Without a UN Security Council Mandate (2023) (accessed 20 April 2023).

UNESCO (2021) Recommendation on the Ethics of Artificial Intelligence (accessed 1 January 2024).

Welsh, T. (2019) ‘Biometrics disagreement leads to food aid suspension in Yemen’, Devex (accessed 20 April 2023).

WFP (2016) WFP Introduces Iris Scan Technology To Provide Food Assistance To Syrian Refugees in Zaatari (accessed 20 April 2023).

Wills, T. (2019) Sweden: Rogue algorithm stops welfare payments for up to 70,000 unemployed, AlgorithmWatch (accessed 20 April 2023).

Wright, Q. (1962) ‘Espionage and the Doctrine of Non-Intervention in Internal Affairs’, Essays on Espionage and International Law, pp. 793–798.

© The Author(s) 2024
Scientific Article
DOI: 10.51940/2024.1.255-278
UDC: 341.226:347.8:004.8, 341.176:341.226

Anže Singer*

Artificial Intelligence in Space: Overview of the European Space Agency and its Role in the AI Environment

Abstract

ESA was established through the Convention for the establishment of a European Space Agency, signed on 30 May 1975, which entered into force on 30 October 1980. It currently has twenty-three Member States, and its mission is to shape the development of the European space capability and to ensure that investment in space is continued in the direction of bringing benefits to European citizens and the world. Artificial intelligence (AI) can be seen as intelligence exhibited by machines that can observe, perceive and act upon their environment to maximise their chance of success at a given goal. AI can be an important and enabling technology for space missions, bringing added value for scientific return and for the efficiency of the mission itself. The most successful AI implementations are still rarely used in the space industry today, as the models developed within neural networks are not human-readable. Despite the challenges, there are examples where AI is successfully being demonstrated in the space sector through ESA’s own activities. The fast-evolving field of space research and technology, AI, and the related applications are raising numerous doubts and debates while challenging the adequacy of traditional space law. There is a looming concern as to whether the legal framework is up to date to meet the challenges that may arise within the AI and space sector, and what can be done to meet those challenges accordingly and on time. Others also argue that, in addition to liability concerns, ensuring confidentiality and data protection are some of the more acute issues in the context of AI.

Key words: European Space Agency, space research, artificial intelligence, space law, legal challenges.

* Graduated in 2013 with a Master’s Degree in Law, Faculty of Law, University of Ljubljana, and currently holding a position as a Contracts Officer in the European Space Agency, in the European Space Research and Technology Centre (ESTEC), The Netherlands. E-mail: anze.singer@gmail.com. The views expressed herein are a collection of information already available in the public domain, and can in no way be taken to reflect the official opinion of the European Space Agency.

1. Introduction

The present article provides a general overview of the European Space Agency (ESA) as an intergovernmental organisation, together with its mandate, structure, and the main elements of its inner workings. It is essential to understand the purpose of ESA and its role in the current landscape of space research and technology, in which one of the topical areas of discussion is artificial intelligence (AI). The article introduces how ESA is either affected by or acting upon the current status of AI in space developments. The most important and visible projects of ESA that have been based on the use of AI, either in their development stages or in their operational phases, are presented.

The article discusses ESA’s current role in the international environment as one of the leading and most innovative space agencies in the world, as well as future challenges and solutions related to the use of AI in space, in which ESA may have to play a crucial role, not only technically, but equally in establishing and promoting the necessary legal framework. The objective of the article is to confirm the increasing use of AI in ESA’s activities and to establish whether or not the current special regulations are sufficient for the rapid developments of AI in space research and technology. If the latter fails to be the case, the article will present some of the solutions that could bring regulation to a satisfactory level on which society at large can rely and from which it can benefit. The article’s objective is achieved through collecting the necessary information on ESA, AI and the interconnection between the two, while analysing the currently available legislation on AI—namely AI in space—in international and regional environments through a comparative method and deduction.

1.1. Key definitions

European Space Agency: ESA is an international organisation with 23 Member States. By coordinating the financial and intellectual resources of its Members, it can undertake programmes and activities far beyond the scope of any single European country.1

Artificial Intelligence: the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.2

Space Law: the body of law governing space-related activities. Space law, much like general international law, comprises a variety of international agreements, treaties, conventions, and United Nations General Assembly resolutions, as well as rules and regulations of international organisations.3

1 ESA (without a date).
2 OxfordLanguages, 2024.
3 UN Office for Outer Space Affairs, 2024.

2. European Space Agency as an Intergovernmental Organisation

For over five decades, the European Space Agency (hereinafter ESA) has been—and continues to be—the heart of European space research and technology. It is an intergovernmental organisation that comprises twenty-three Member States4 and is Europe’s gateway to space.
It was established out of the merger between the previous European Space Research Organization (ESRO) and the European Launcher Development Organization (ELDO),5 on 30 May 1975,6 through the signature of the Convention for the establishment of a European Space Agency (hereinafter ESA Convention) by the ten founding Member States.7

Its mission is to shape the development of Europe’s space capability and to ensure that investment in space continues to deliver benefits to the citizens of Europe and the world, while its purpose is to promote cooperation among European States in space research and technology and their space applications, exclusively for peaceful purposes.8 By coordinating the financial and intellectual resources of its members, it can undertake programmes and activities far beyond the scope of any single European country.9 In addition to the mandate and purpose described above, ESA shall also facilitate the exchange of scientific and technical information related to space research and technology and their space applications.10

ESA’s activities and programmes fall into two categories: “mandatory” and “optional”. Programmes carried out under the General Budget and the Space Science programme budget are “mandatory”; they include the Agency’s basic activities (studies on future projects, technology research, shared technical investments, information systems and training programmes, as well as basic infrastructure and general services).11 All Member States contribute to these programmes on a scale based on their Gross National Product (GNP),12 and they amount to about 20% of the funding ESA receives from its Member States.13

The other programmes, known as “optional”, are only of interest to some Member States, which are free to decide on their level of involvement. Optional programmes cover areas such as Earth observation, telecommunications, integrated applications, human and robotic exploration, satellite navigation and space transportation. Similarly, the International Space Station and microgravity research are financed by optional subscriptions.14

A third source of ESA funding stems from third-party activities, whereby ESA manages space-related activities on behalf of organisations such as the European Union (EU) or Eumetsat. Examples include Galileo, parts of Copernicus, and the recurrent Meteosat and Metop satellites. The industrial participation in these activities is regulated by each respective agreement.15

For both mandatory and optional projects, the industrial activity in each Member State must be commensurate with that Member State’s funding of each project. In this sense, programmes are independent of each other, and excess activities in one may not compensate for a low (with respect to the funding) participation in another.16 Contributions are not only financial but can also be technological or industrial.

4 As of August 2017, the Member States are Austria, Belgium, the Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Luxembourg, the Netherlands, Norway, Poland, Portugal, Romania, Spain, Sweden, Switzerland, and the United Kingdom, while Slovenia officially became ESA’s 23rd full Member State on 1 January 2025. Slovakia, Latvia, and Lithuania are Associate Members. Canada takes part in some projects under a cooperation agreement. Bulgaria, Croatia, Cyprus, and Malta have cooperation agreements with ESA. ESA (without a date).
5 ESA Convention, Article XIX.
6 The ESA Convention entered into force on 30 October 1980.
7 Belgium, Denmark, France, Germany, Italy, the Netherlands, Spain, Sweden, Switzerland, and the United Kingdom.
8 ESA Convention, Article II.
9 ESA (without a date).
10 ESA Convention, Article III.
11 ESA (without a date).
Activities may be related to advanced research, project-related technology developments, production, operations, support, and services. ESA aims to make maximum use of the industrial and research potential and capabilities available in each Member State, whilst supporting the Member State's interests and priorities.17

To this end, ESA's industrial policy18 has four specific aims: to meet the European space programme requirements in a cost-effective manner;19 to improve the worldwide competitiveness of European industry;20 to ensure that all Member States participate in an equitable manner, with regard to their financial contribution, in implementing the European space programme;21 and to exploit the advantages of free competitive bidding in all cases.22 To monitor industrial policy, ESA permanently reviews the industrial potential and industrial structure in relation to the Agency's activities.23 A very distinctive characteristic of ESA and its industrial policy is the so-called geographical distribution,24 whereby the distribution of contracts placed by ESA should ideally result in all countries having an overall return coefficient (i.e. the ratio between a country's percentage share of the total value of all contracts awarded among Member States and its total percentage contributions) of 1.

The Council is ESA's governing body and provides the basic policy guidelines within which ESA develops the European space programme.

12 Ibid.; ESA Convention, Article XIII. 13 ESA (without a date); ESA Convention, Article V(1)(a). 14 ESA (without a date). 15 Ibid. 16 Ibid. 17 Ibid. 18 ESA Convention, Article VII and Annex V. 19 Ibid., Article VII(1)(a). 20 Ibid., Article VII(1)(b). 21 Ibid., Article VII(1)(c). 22 Ibid., Article VII(1)(d).
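The return coefficient described above is a simple ratio, and a small numerical sketch may make it concrete. The figures below are invented purely for illustration; they are not actual ESA contract or contribution data.

```python
def return_coefficient(contract_value, total_contracts, contribution, total_contributions):
    """Ratio of a country's share of awarded contract value to its share of
    contributions. Under ESA's geographical-distribution principle the ideal
    value for every country is 1."""
    contract_share = contract_value / total_contracts
    contribution_share = contribution / total_contributions
    return contract_share / contribution_share

# Hypothetical figures (EUR million) for three fictitious Member States.
contracts = {"A": 120.0, "B": 60.0, "C": 20.0}
contributions = {"A": 100.0, "B": 70.0, "C": 30.0}

total_contracts = sum(contracts.values())          # 200.0
total_contributions = sum(contributions.values())  # 200.0

for country in contracts:
    rc = return_coefficient(contracts[country], total_contracts,
                            contributions[country], total_contributions)
    print(country, round(rc, 2))
```

On these invented numbers, country A is "over-returned" (coefficient above 1) while B and C are under-returned, which is the kind of imbalance the policy aims to correct over time.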
Each Member State is represented on the Council and has one vote, regardless of its size or financial contribution.25 The executive branch of the organisation is vested in the Director General, who is the chief executive officer and the legal representative of ESA, and who is responsible for the execution of ESA's programmes and policies.26

ESA has a legal personality,27 and ESA, its personnel, experts and representatives of its Member States enjoy legal capacity, privileges and immunities.28 Furthermore, the agreements concerning ESA's establishments are implemented directly between ESA and the respective host nation(s).29 Its legal subjectivity is further defined in Annex I to the ESA Convention, where it is stated that ESA has a legal personality and specifically has, among other things, the ability to conclude agreements or contracts, acquire and dispose of movable and immovable property and be a party to legal proceedings.30

3. Artificial Intelligence within ESA

3.1. Definition and Branches of Artificial Intelligence

It is no surprise that AI can be an important and enabling technology for space missions, bringing added value for scientific return and for the efficiency of the mission itself. Latency, mission design, development, operations, data exploitation, proficiency and availability are all factors in space missions that can benefit from the introduction of AI.31

Already in 2003, with the vision that ESA is rich in data, the organisation started to consider the use of AI in very specific use cases, mainly focused on enhancing ground operations tasks, with some examples including spacecraft health management and support to operations decision processes.32

23 ESA (without a date). 24 ESA Convention, Annex V, Article IV. 25 ESA Convention, Article XI; ESA (without a date). 26 ESA Convention, Article XII. 27 Ibid., Article XV(1). 28 Ibid., Article XV(2). 29 Ibid., Article XV(3). 30 Ibid., Annex I, Article I. 31 Fratini, 2019, p. 6.

The initial proofs of concept for introducing AI in space were accomplished mostly in the area of space operations with acknowledged success, and eleven applications have been deployed in operations in support of seventeen missions. In addition, ESA holds a leading position worldwide in AI applications for early detection and anomaly investigation, with two patents.33

However, to properly contextualise the role of AI, specifically its role in the space sector, a first definition of AI had to be established. One such definition within the space research community was given in the AI for Earth Observation (AI4EO) research and development (R&D) community consultation, where AI can be seen as: "intelligence exhibited by machines that can observe, perceive and act upon their environment to maximize their chance of success at some goal. It refers to the capacity of an algorithm for assimilating information to perform tasks that are characteristic of human intelligence, such as recognizing objects and sounds, contextualizing language, learning from the environment, and problem solving."34

Furthermore, the term AI can be seen as: "comprising all techniques that enable computers to mimic intelligence, for example, computers that analyse data or the systems embedded in an autonomous vehicle.
Usually, artificially intelligent systems are taught by humans—a process that involves writing an awful lot of complex computer code."35

While the foundations for the AI definition have been laid, ESA has also introduced some of the AI technologies and identified areas of development, operation and exploitation of ESA space missions where these technologies might enhance existing approaches or even be game-changers.36

Four main application areas were identified in ESA for AI-oriented activities, namely the development area (design, operation concepts, quality assurance, etc.), the operations area (mission planning support and optimisation, automated operations, ground stations management, etc.), exploitation (Earth observation data, space science data, navigation science data, etc.) and other areas such as research, education and training, knowledge management, etc.37

The broad scope of domains where AI can be applied has also been recognised by the 2019 ESA Technology Strategy, which acknowledges the significance of the emerging AI sector:

32 Ibid., p. 7. 33 Ibid. 34 AI4EO Workshop, 2018, p. 3. 35 ESA, 2023. 36 Fratini, 2019, p. 4. 37 Ibid., p. 6.
"Given the real-life benefits already seen by early adopters, artificial intelligence (AI) is considered to be at the core of the next wave of digital disruption, accelerating competition and speeding up digital transformations."38

The leading position that ESA has in the AI field was reiterated through the aforementioned ESA Technology Strategy, since ESA has been at the cutting edge of academic AI research related to space, while European space industry and services capitalise on previously academic-driven research.39 The importance of AI for both the present and the future is evident from the plan for a 30% improvement in spacecraft development by 2023, where technology development from all competence domains will be critical to achieving this goal. This success largely depends on the further development of digital engineering, automation and artificial intelligence.40 AI is also envisaged to contribute significantly to achieving a 30% faster development and adoption of innovative technology.41

3.1.1. Machine Learning

AI can be achieved through machine learning (ML), which "simply" means teaching machines to learn for themselves. It is a way of "training" a relatively simple algorithm to become more complex using huge amounts of data fed into the algorithm, which then adjusts and improves itself over time.42

The AI for Earth Observation (AI4EO) workshop provided, among other terminology, the definition of the machine learning aspect of AI, identifying it as: "a branch of AI relying on algorithms that are capable of learning from both data and through human interactions (e.g. supervision) to enable prediction, but is also used for data mining (i.e. discovery of unknown properties and patterns).
ML is a field of statistical research for training computational algorithms that split, sort, transform a set of data in order to maximize the ability to classify, predict, cluster or discover new patterns in target datasets. ML is all about using computers to learn how to deal with problems without programming. In fact, ML generates models by taking some data for training a model, and then makes predictions."43

ML has been used for Earth Observation data exploitation even before the advent of cloud computing and big data technologies. Experiments based on the classification of satellite images with very shallow (two layers max) neural networks were performed using ERS-1/2,44 ENVISAT45 and Copernicus46 products for specific test sites. From the early 2000s onwards, there have been a consistent number of projects where simple neural networks were applied to ESA data, for example SAR/ASAR47 data for recognising oil spills and ships, data mining applied to optical data at different resolutions, inversion of atmospheric measurements, etc.48

38 ESA, 2019, p. 17. 39 Ibid. 40 Ibid., p. 25. 41 Ibid., p. 26. 42 ESA, 2023. 43 AI4EO Workshop, 2018, p. 3.

Tools based on neural networks have also been assessed for Earth Observation ground segments even before the deployment in space of the first Copernicus components. Activities have been kicked off (especially for optical missions) to enhance cloud and shadow detection by feeding simple neural networks with morphological information together with pixel-measured counts.49

3.1.2. Deep Learning

Another branch of AI is the so-called deep learning (DL), a specialised technique within machine learning, whereby the machine utilises multi-layered artificial neural networks to train itself on complex tasks such as image recognition.
This can happen via supervised learning (for example, feeding the system Moon and Earth pictures until it can successfully identify both) or via unsupervised learning (for example, the network finding the structure by itself). Some examples of DL include online translation services, and navigation systems for self-driving cars or spacecraft.50

The AI for Earth Observation (AI4EO) workshop defined deep learning as: "a type of ML algorithm that aim to solve the same kind of problems by mimicking the biological structure of the brain and construct hierarchical architectures of increasing sophistication […] Today, DL is reaching high-level accuracy going beyond human performance, holding promises that they could substitute handcrafted feature extraction, thereby enabling totally automatic image recognition of big data (including Earth Observation) and opening huge opportunities for new science and business."51

Researchers and engineers have attempted to apply these new DL techniques to problems related to vegetation and/or water monitoring, object recognition and land use classification from space, while the tools made available by ESA for the elaboration of Copernicus data were also based on the first neural network algorithms.52

44 ESA – earth online (without a date). 45 Ibid. 46 ESA (without a date). 47 ESA – earth online (without a date). 48 Fratini, 2019, p. 7. 49 Ibid., pp. 7 and 8. 50 ESA, 2023. 51 AI4EO Workshop, 2018, p. 3. 52 Fratini, 2019, p. 7.

3.2. Proven AI Application Examples in ESA Projects

There are examples where AI has been, and still is, successfully demonstrated in the space sector through ESA's own activities. Although it may seem like a big step to move from basic activities to real space applications, ESA is already starting to use AI in its space missions as well.53

3.2.1.
Data Handling: ɸ-sat-1 (Phisat-1)

ɸ-sat-1 (pronounced phisat-1) is artificial intelligence technology carried on one of the two CubeSats that make up the Federated Satellite Systems mission (FSSCat). The FSSCat mission is based on two CubeSats, each about the size of a shoebox, which use state-of-the-art dual microwave and multispectral optical sensors to measure, for example, soil moisture, ice extent, ice thickness, urban heat islands, and to monitor changes in vegetation and water quality. To take FSSCat to the next level, ESA worked with partners to develop ɸ-sat-1 to not only give FSSCat more spectral capabilities, but also to improve the efficiency of sending vast quantities of data back to Earth.54

ɸ-sat-1 is the first artificial intelligence to be carried on a European Earth observation mission. Its hyperspectral camera images in the visible, near-infrared and thermal-infrared parts of the electromagnetic spectrum, acquiring an enormous number of images of Earth. However, some images will not be suitable for use because of cloud cover. To avoid downlinking these less-than-perfect images, the ɸ-sat-1 artificial intelligence chip filters them out so that only usable data are returned. This will make the process of handling all these data more efficient, allowing users more timely access to information, ultimately benefiting society at large.55

3.2.2. Operations: Mars Express

Another example of AI application is Mars rovers, which can navigate around obstacles by autonomously finding their way across "unknown" fields.
Intelligent data transmission software on board the rovers removes human scheduling errors that might otherwise cause valuable data to be lost, and increases the volume of useful data that arrives from our planetary neighbour.56

In addition to the rovers exploring Mars on its surface, ESA's orbiter Mars Express has been using its sophisticated instruments since January 2004 to study the atmosphere, surface and subsurface of Mars, confirming the presence of water and looking for other signatures of life on and below the rocky terrain. The spacecraft generates huge volumes of scientific data, which must be downloaded to Earth at the right time and in the correct sequence. Otherwise, data packets can be permanently lost when the limited on-board memory is overwritten by newly collected data. Traditionally, data downloading was managed using human-operated scheduling software to generate command sequences sent to Mars Express, instructing it when to dump specific data packets.57

A new "smart" tool, dubbed MEXAR2 ("Mars Express AI Tool"), has been developed and has successfully passed initial testing and validation. It is now an integral part of the Mars Express mission planning system. MEXAR2 works by considering the variables that affect data downloading, including the overall science observation schedule for all Mars Express instruments, and then intelligently projecting which on-board data packets might later be lost due to memory conflicts. It then optimises the data download schedule and generates the commands needed to implement it. By doing so, the MEXAR2 tool has reduced the mission planning team's workload by about 50 percent compared to the old manual method. AI provides solutions for complex problems and has now entered the space mission operations field as a value-adding technology.

53 ESA, 2023. 54 ESA (without a date). 55 Ibid. 56 ESA, 2023.
Mars Express represents the very first European deep-space exploration mission to fly using an AI tool on the ground.58

3.2.3. Deep-Space Operations: Hera

Hera is the first probe to rendezvous with a binary asteroid system, to examine the aftermath of the first kinetic impact test of asteroid deflection,59 which was performed by NASA with its Double Asteroid Redirection Test (DART) in September 2022.60 ESA's Hera planetary defence mission will make use of AI as it steers itself through space towards an asteroid, taking a similar approach to self-driving cars. While most deep-space missions have a definitive driver back on Earth, Hera will fuse data from different sensors to build up a model of its surroundings and make decisions on board, all autonomously.61

3.2.4. Guidance, Navigation & Control (GNC), and Visual Operations: ClearSpace-1

"In more than 60 years of space activities, more than 6,050 launches have resulted in some 56,450 tracked objects in orbit, of which about 28,160 remain in space. Only a small fraction—about 4,000—are intact, operational satellites today."62

As the ever-increasing threat of space debris grows more pressing, ESA decided to tackle the issue with the world's first space debris removal mission, ClearSpace-1. Its objective is to be the first mission to remove a piece of space debris from orbit and to rendezvous with, capture and safely bring down a large derelict object for a safe atmospheric re-entry.

57 ESA, 2008. 58 Ibid. 59 ESA (without a date). 60 NASA, 2024. 61 ESA, 2023. 62 ESA (without a date).
The object in question is a 112 kg defunct rocket part, the Vespa upper stage (launched in 2013), with the target object altitude ranging from 664 to 801 kilometres.63 Crucial technologies include the advanced guidance, navigation and control systems, the robotic arms used to capture space debris, and vision-based artificial intelligence, equipped with an AI camera to locate the debris.64 All these cutting-edge technologies were developed as part of ESA's Clean Space initiative.65, 66

3.2.5. Satellite Autonomy: Transfer Lab

Furthermore, in relation to space debris, satellites orbiting Earth also require greater autonomy, as they need to make more frequent collision avoidance manoeuvres to evade increasing amounts of debris. In January 2021, ESA and the German Research Center for Artificial Intelligence (DFKI) established a technology transfer lab that works on AI systems for satellite autonomy and collision avoidance capabilities, among other aspects.67 The Transfer Lab at DFKI in Kaiserslautern creates a framework in which scientists from both organisations research AI systems for the interpretation of complex, extensive data from Earth observation, and for collision avoidance of satellites.68

3.2.6. Autonomous Image Processing: HyperScout Imager

The HyperScout imager, tested in orbit aboard the GomX-4B CubeSat, acquires and processes hyperspectral environmental imagery on an autonomous basis.69 It is a "linear variable filter" instrument, meaning each horizontal line of pixels it observes is seen at a different wavelength (from 400 to 1000 nanometres). The onward movement of the satellite allows the rapid build-up of a complete hyperspectral image. The instrument targets specific regions across the globe, aiming to highlight rapid changes such as flooding, fire hazards, or variations in vegetation or land cover and use between acquisitions.70

3.2.7.
Database Development & Exploitation: MiRAGE

As the institutional focal point for the European space sector and industry, ESA also supports start-up companies to increase their worldwide competitiveness. One example is an Italian start-up company, AIKO, which develops state-of-the-art AI for space applications for mission autonomy with their MiRAGE library. MiRAGE, also known as Mission Replanning through Autonomous Goal gEneration, is a software library that enables autonomous operations for space missions.71

By shifting decision-making on board the satellite, the MiRAGE library eliminates the decision-making loop in the ground Mission Control Centre for all the events that were accounted for during spacecraft development, including failures, events related to mission objectives, or events generated by other elements in a constellation. Operators can focus on more critical decisions requiring ground intervention. The result is increased efficiency, with mission objectives being fulfilled sooner, and downlink bandwidth being occupied only by relevant data.72

63 Ibid. 64 Abashidze, Ilyashevich & Latypova, 2022. 65 Space Explored, 2020. 66 ESA, 2014. 67 ESA, 2023. 68 German Research Center for Artificial Intelligence, 2021. 69 ESA, 2019, p. 17. 70 ESA, 2018.

3.2.8. Visualisation & Forecasting: Digital Twin of Earth

ESA is currently working towards a Digital Twin of Earth, a replica constantly fed with Earth observation data and artificial intelligence to help visualise and forecast natural and human activity on the planet to better understand Earth's past, present and future. In September 2020, ESA launched several Digital Twin Earth Precursor Activities to explore some of the main scientific and technical challenges in building a digital twin of Earth.
These activities included Forest, Hydrology, Antarctica, Food Systems, Ocean and Climate Hot Spots, each addressing a different scientific, technical and operational challenge regarding the Digital Twin of Earth, including the role of AI and consistent data, stakeholder engagement, scientific credibility and the role of sectorial models.73

3.3. Challenges for the Present and Future

Despite some of these useful applications of AI in space exploration and research, the most successful AI implementations based on machine learning (ML) or deep learning (DL) are rarely used in the space industry today, as the models developed within the neural network are not human-readable and have thus far been impossible to replicate.74

In addition, before ML and DL can fully take over the space sector, the complex models and structures needed must be improved, as must the reliability and adaptability required of the new software.75

AI, and in particular ML, still has some way to go before it is extensively used for space applications; however, it is already being implemented in new technologies.

71 Ibid. 72 Ibid. 73 ESA, 2021. 74 ESA, 2023. 75 Ibid.
One area in which the application of AI is being thoroughly investigated is satellite operations, specifically in support of large satellite constellations, which includes relative positioning, communication, end-of-life management, etc.76 Another example of ML application is approximating complex representations of the real world, namely in analysing massive amounts of Earth observation data or telemetry data from spacecraft, or when transmitting data from Mars rovers, which is essentially done through AI.77

Therefore, confidence is growing that AI can be of support for future space missions, although some critical concerns still loom, particularly where an application provides too many false positives, rendering it difficult to accept as an operational aid. Simultaneously, AI has made significant progress in recent years, aided by increased computational power and miniaturisation. These advances will enable the progressive introduction of AI on board spacecraft, in support of specific mission operational tasks.78

4. ESA and AI in the Legal Environment

4.1. ESA in the International Space Law Environment

The first section above elaborated on ESA's legal personality: ESA and its personnel, experts, and representatives of its Member States enjoy privileges and immunities. Its legal subjectivity is further defined in Annex I to the ESA Convention, where it is stated that ESA has a legal personality and specifically has, among other things, the ability to conclude agreements and contracts, including those that fall within international law.

Space law is a branch of international law which has the specificity of being influenced by other sources of law, both of a public and private character.
There are many laws and regulations that should also be applied, particularly in view of the increase in privatisation, and, as such, the law applicable to space activities is not and should not be limited only to outer space law.79

When discussing space law, it is important to bear in mind at all times three important points:
– the "territory" that space law regulates (outer space, including celestial bodies) is outside the sovereignty of states;
– outer space activities are to be conducted for the benefit of, and in the interests of, all states, irrespective of their degree of economic or scientific development;
– outer space activities are the "province of all humankind".80

Factors such as the increase in daily activities dependent on space technology, the national interests of the countries involved, and the commercialisation/privatisation of outer space have all resulted in the political will that implemented the treaties and principles we have in place today.81

The United Nations' Committee on the Peaceful Uses of Outer Space (COPUOS) was set up by the General Assembly of the United Nations in 1959 to govern the exploration and use of space for the benefit of all humanity. COPUOS was tasked with reviewing international cooperation in the peaceful uses of outer space, and for that very purpose had the merit of adopting five treaties and principles: the Outer Space Treaty;82 the Rescue Agreement;83 the Liability Convention;84 the Registration Convention;85 and the Moon Agreement.86

ESA has enjoyed special observer status in COPUOS since 197287 and has declared rights and obligations for three of the five treaties (i.e. the Rescue Agreement, the Liability Convention and the Registration Convention),88 and its accession to a treaty has the same consequences as an individual country acceding to or ratifying a treaty.

76 Ibid. 77 Ibid. 78 Fratini, 2019, p. 8. 79 ESA (without a date). 80 Ibid.
However, the concern remains, as with all international treaties, that not all countries are parties to these legal instruments. With the interdependence of law and technology, and the increased repercussions of space activities on the ground, it is hoped that more countries will become aware of the necessity for this legal framework set up by COPUOS, while "it is to be hoped that ESA will be an example for other international organisations."89

Another example of ESA leading the way is ESA's own "Resolution of the Council of the European Space Agency on the Agency's Legal Liability" of December 1977, which defines the consequences of the legal responsibilities of ESA in the event of injuries and damages caused by ESA to one of the Member States, legal or natural persons, or any other third party. This shows ESA's willingness and preparedness to address and implement the legal framework needed for activities undertaken within the space sector.

81 Ibid. 82 The Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, including the Moon and Other Celestial Bodies, 610 UNTS 205, entered into force on 10 October 1967. 83 The Agreement on the Rescue of Astronauts, the Return of Astronauts and the Return of Objects Launched into Outer Space, 672 UNTS 119, entered into force on 3 December 1968. 84 The Convention on International Liability for Damage Caused by Space Objects, 961 UNTS 187, entered into force on 1 September 1972. 85 The Convention on Registration of Objects Launched into Outer Space, 1023 UNTS 15, entered into force on 15 September 1976. 86 The Agreement Governing the Activities of States on the Moon and Other Celestial Bodies, 1363 UNTS 3, entered into force on 11 July 1984. 87 United Nations, Office for Outer Space Affairs (without a date). 88 Committee on the Peaceful Uses of Outer Space, Legal Subcommittee, 2022, p. 10. 89 ESA (without a date).
Considering that the liability aspect of entities is of major importance, especially in the international space law environment, it is also worth noting (with respect to ESA) the 2011 Draft Articles on the Responsibility of International Organizations (DARIO) adopted by the International Law Commission. Within its provisions, it is defined that every internationally wrongful act of an international organisation entails the international responsibility of that organisation,90 and the two elements which entail this international responsibility are that the wrongful act or omission is attributable to the organisation under international law, and that the act or omission constitutes a breach of an international obligation of that international organisation.91 DARIO concludes by stating that these draft articles do not apply where and to the extent that the internationally wrongful acts and international responsibilities are governed by special rules of international law.92 Lex specialis, in this case, could be the Liability Convention.

4.2. Current Legal Challenges for AI and Its Space Applications

4.2.1. Adequacy of the Current Legal Framework for AI in Space

While international space law provides relevant treaties and principles, more will need to be done in light of evolving space technologies, specifically with the emergence of AI in the last few decades. The fast-evolving field of space exploration, AI and related applications is raising various concerns about whether the legal framework is up to date to meet the challenges that may arise within the AI and space sector, and what can be done to address those challenges in time.
Experts in the field are in agreement when discussing the two sides of the coin that is AI: on the one hand, one cannot deny the benefits that AI and its evolution have brought during these last few decades, not only to our everyday life but to space missions as well, where huge potential nonetheless still awaits to be fully exploited. On the other hand, a number of issues are arising simultaneously with the increasing use of AI, thereby challenging the adequacy of traditional space law to address these problems.93

It is to be noted that, as it stands now: "the increasing autonomy of AI-deployed space objects, coinciding with the associated decreasing role of human 'control', does not sit squarely in sync with existing space law concepts, particularly with respect to liability for damage caused by space objects, and the obligations of states for continuing supervision of national activities in space as well as for controlling space objects".94

90 Article 3. 91 Article 4. 92 Article 64. 93 Bratu & Freeland, 2022. 94 Ibid.

In addition to the framework for liability for damages and its non-alignment with AI development and use, the introduction of AI systems into space activities entails other legal consequences and problems. First, as individual legal acts contain different approaches to defining AI, the main difficulty is to establish the essence of "artificial intelligence", since in order to regulate relations efficiently, it is necessary to understand what the subject and object of these relations are.95 Second, and in addition to the very essence of AI, its legal status is currently also not defined.
It is inappropriate to apply existing legal categories to determine the status of AI, given that its character is autonomous from humans and it makes decisions based on its ability to learn by itself.96 Third, the use of AI in space carries high risks on a large scale, which differ from those emerging in other aspects of everyday life. Fourth, the existing international treaties, specifically those regulating space activities, do not cover issues arising from the potential use of AI.97

Specifically, the latter—the current legal framework of international space law—has been the talking point of many specialised forums, one of which was organised by “Friends of Europe”, where the present and upcoming issues of this framework were addressed. Most participants agreed that the Outer Space Treaty has done a good job of keeping peace and order beyond the Earth’s atmosphere; however, the treaty dates back to 1967, when satellite technology was in its infancy. Now, with more and more satellites in orbit and an increasing number of countries managing space-based assets, there is a pressing need to upgrade space governance.98 The existing major treaties obviously do not contain provisions related to the use of AI, while acts of “soft law”, which play a large role in the regulation of space activities, likewise contain no provisions on the use of AI technologies.99 A separate problem could also emerge because not all states carrying out space activities have the level of development necessary to use AI.
The question arises whether it is possible to regulate the activities of states in the same way, or whether it is necessary to develop standards that take this difference into account.100

The debate further assessed the efficacy of the existing treaties and regulations and explored whether it was time for an upgrade and update, as well as how to craft an enforceable framework empowered to manage the warp-speed technological developments transforming the use of space in the 2020s. A broad agreement was reached that improvements in governing space are needed—and fast—while the Legal Counsel to ESA clearly stated that:

“the global system is fit for today, for what we have done so far but [the] question is, is it still fit for tomorrow, for the technological advances that we’re planning to do tomorrow? […] From what we see is happening, probably it will not be enough anymore, as we need more, with more precise principles, even regulations and […] more enforceable regulations and standards.”101

95 Abashidze, Ilyashevich & Latypova, 2022.
96 Ibid.
97 Ibid.
98 Friends of Europe, 2021.
99 Abashidze, Ilyashevich & Latypova, 2022.
100 Ibid.

The International Astronautical Federation (IAF) also gave the floor to this topic at its 36th IAA/IISL Joint Roundtable, where participants discussed the legal challenges of autonomous intelligent systems in space. The premise was that artificial intelligence-based autonomous systems for space operations are opening up a whole new set of questions about how they interact with existing legal concepts and technical standards. Very little human intervention will be required beyond the programming, and one of the first questions is the extent to which the laws—particularly space laws—governing these technologies on Earth are relevant and applicable to these activities in outer space.
It is becoming ever clearer that the growing reliance on autonomous technologies may require a fresh look at the traditional concepts behind the regulation of space activities, while recognising that the existing body of legal rules, regulations, and practices will eventually be impacted by these technical developments.102

“This will inevitably also include how AI technologies relate to the traditional understandings of legal responsibility and liability under national and international space law.”103

For example, if the machine is not considered a subject but only an object or instrument of the person who created it, this would not require any change to the existing legal framework as we know it. However, only recently has the question of the responsibility of the machine itself arisen, which would stir up these traditional approaches.104

The matter has been discussed in a similar manner across the community of legal experts, because the absence of special regulation in this area is inherently connected with the emergence of many difficult situations in the future:

“in particular, the increasing use of artificial intelligence technologies in space activities raises questions in areas such as data protection, transparency and non-discrimination, cybersecurity, intellectual property, international responsibility and liability, etc.”105

These areas are becoming increasingly intertwined with space applications, similarly to other everyday applications that society at large depends on, but which might not have been considered or linked to space research before. Certain ethical and social risks also arise with AI development, for example the use of AI with space technologies for purposes of law enforcement.106 Other examples include:

“‘facial recognition’; lack of transparency (the subject is not informed that his or her personal data is being collected); tracking and de-anonymizing data; lack of access rights, correction and deletion of data (the so-called black-box effect); bias and discrimination, and, as a result, unreliable results and many others”.107

Striking the right balance between using AI to prevent or solve crimes and avoiding violations of human rights should be a priority. To this end, elements such as discrimination and data protection are increasingly in the spotlight.108

101 Friends of Europe, 2021.
102 IAF, 2022.
103 Ibid.
104 Abashidze, Ilyashevich & Latypova, 2022.
105 Ibid.

4.2.2. Solutions to the Current Challenges and Are They Sufficient?

The effects of space activities are becoming far-reaching, no longer limited to a few select capable countries and organisations, but propagated all around the world. We are seeing an increase in public discourse surrounding AI and how to regulate it, and it is equally imperative that its “sub-chapter”, AI in space, receives the same level of attention. Should the absence of special regulations persist and lag behind the aggressive timeline of AI technological development, we risk maintaining a legal void the space sector cannot afford. To this end, issues related to the use of AI in space are increasingly being raised, and the need for a separate understanding of these issues at the international legal level is driven by the rapid development of technologies in this area, which can radically affect the process of space exploration and the diversification of types of space activities.
Some examples also include national legal initiatives at the level of individual states.109 As much as this is welcome and encouraged, it can potentially lead to the dominance of the interests of individual states when carrying out activities in outer space.110 In addition, there are initiatives from non-governmental organisations and academia, such as the Future of Life Institute, which adopted the Asilomar AI Principles; the University of Montreal, which prepared the Montreal Declaration for a Responsible Development of Artificial Intelligence; and Amnesty International and Access Now, which proposed the Toronto Declaration on the protection of the rights to equality and non-discrimination in machine learning systems. All these different initiatives should be carefully weighed to create a unified and harmonised approach, with priority given to adapting already existing norms and principles on the protection of human rights and data protection when using AI.111

“Given the interconnectedness of these areas of interstate cooperation, it is important to focus efforts on developing an intersectoral approach that will take into account the specifics of activities in outer space and, to the maximum extent, guarantee the observance and protection of human rights”.112

Among other ideas put forward on how to tackle the current legal challenges in governing AI in space are calls for a global regulatory body to replace the current system based on national laws and to add more efficiency to the treaties currently governing space, with the scale of private sector expansion highlighted as a particular problem.113 Making space a political and policy priority, matched with resources comparable to other global regions, is key. If initiating solutions at the global level proves too ambitious (due to lack of political will, prioritisation of other matters, etc.), the resolution may lie in initiating solutions at the regional level first, before propagating them to the global stage. To this end, the “Friends of Europe” forum and its participants were once again in agreement—this time about giving an effective political mandate to ESA and defining an EU-wide regulatory regime—both of which were considered essential foundation blocks for the future of space governance.114 It remains to be seen whether any specific political mandate will be given to ESA with respect to establishing a framework for AI in space, and in what form, but certainly ESA, with its innovative and leading role in the larger scientific community, and with its ample heritage, present achievements, and future vision, is in a very good position to address the subject matter.

106 Soroka & Kurkova, 2019, as cited in Abashidze, Ilyashevich & Latypova, 2022.
107 Gal et al., 2020, as cited in Abashidze, Ilyashevich & Latypova, 2022.
108 Abashidze, Ilyashevich & Latypova, 2022.
109 Examples include: Executive Order 13859, “Maintaining American Leadership in Artificial Intelligence”, 84 Fed. Reg. 3967 (Feb. 11, 2019); Order of the Government of the Russian Federation No. 2129-r of 19 August 2020 “On approval of the Concept for the development of regulation of relations in the field of artificial intelligence and robotics technologies for the period up to 2024”. Abashidze, Ilyashevich & Latypova, 2022.
110 Ibid.
On the EU side, one step towards defining an EU-wide regulatory framework has already been taken with the introduction of the “EU AI Act”, a proposed European law on AI, described by the EU as “the first law on AI by a major regulator anywhere.”115 Its purpose is clear from the title of the Act itself, which reads Regulation of the European Parliament and of the Council laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts (COM/2021/206). Despite this encouraging step, a reader of the document will quickly realise that the AI it addresses does not cover AI used in space exploration or space applications and, as such, its relevance for the emerging AI legal environment in the space sector remains doubtful. A similarly encouraging, but incomplete, notion can be seen in the European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)). This is a missed opportunity to take on board the imperative and topical technological and legal challenges that AI poses through its use in space research, especially given the increasing role the EU has in space through its Agency for the Space Programme (EUSPA) and the first integrated space programme created by the EU to support its space policy.116 Considering that neither legal source mentions activities in space, the search for an appropriate platform, competent legal entity, or the necessary legislation to address these shortcomings at the regional and international levels continues.

111 Ibid.
112 Ibid.
113 Friends of Europe, 2021.
114 Ibid.
115 The EU Artificial Intelligence Act.

4.2.3. Conclusion

Despite the shortcomings of the current status of the AI sector in international space law, one can conclude that AI is a critical technology for Europe’s space sector, whose growth should be accelerated by Europe strengthening its own AI capabilities. Not doing so, on the technical and legal side, would mean missing valuable opportunities for Europe to position itself in a rapidly changing AI landscape that is shaping the future. Some of the paths to be taken going ahead may lie in fostering coordination and communication among the various entities that either use or conduct research in AI within ESA, or in identifying strategic initiatives to spread an AI culture for innovation and to facilitate the spin-off of ESA AI technology to external actors or the spin-in of industrial AI technologies into ESA.117

The importance of AI is apparent, and the imperative that the legal aspects should follow the AI technical evolution—specifically in the space sector—is a responsibility the world should take on with full force. The gravity of the situation is perfectly described through the notion that:

“AI is in the midst of a true renaissance becoming an integral part of our society, deeply transforming the way we work, operate, and live. Within the report ‘AI for Earth’ presented at the 2018 World Economic Forum, AI is even coined to be the new ‘electricity’ of the 4th Industrial revolution.”118

116 EUSPA, 2021-2024.
117 Fratini, 2019, p. 7.
118 World Economic Forum WEF, 2018, p. 5, as cited in AI4EO Workshop, 2018, p. 3.

References

European Space Agency (2019) ESA Convention, (accessed 7 June 2024)
AI4EO Workshop (2018) Towards a European AI4EO R&I Agenda (Executive Summary), (accessed 7 June 2024)
European Space Agency (2022) ESA’s Technology Strategy, (accessed 7 June 2024)
Fratini, S. (2019) Artificial Intelligence in ESA. European Space Agency, reference ESA-TEC-RP-013019. Not currently available in the public domain, but made public, where applicable, as a reference document for individual ESA Tender Actions published on the esa-star Publication platform. (accessed 7 June 2024)
Committee on the Peaceful Uses of Outer Space, Legal Subcommittee (2022) Status of international agreements relating to activities in outer space as at 1 January 2022, (accessed 7 June 2024)
Abashidze, A.K., Ilyashevich, M., & Latypova, A. (2022) Artificial Intelligence and Space Law. Journal of Legal, Ethical and Regulatory Issues, 25(S3), 1–13, (accessed 7 June 2024)
Bratu, I. & Freeland, S. (2022) Artificial intelligence, space liability and regulation for the future: a transcontinental analysis of national space laws. IISL Colloquium on the Law of Outer Space (E7), 73rd International Astronautical Congress, (accessed 7 June 2024)
Gal, G.A., Santos, C., Rapp, L., Markovich, R., & Torre, L. (2020) Artificial intelligence in space, (accessed 7 June 2024)
Soroka, L., & Kurkova, K. (2019) Artificial Intelligence and Space Technologies: Legal, Ethical and Technological Issues. Advanced Space Law, 3, 131–139, (accessed 7 June 2024)
IAF (2022) 36th IAA/IISL Joint Roundtable: Autonomous Intelligent Systems in Space: Operational and Legal Challenges, (accessed 7 June 2024)
World Economic Forum WEF (2018) Harnessing AI for the Earth, (accessed 7 June 2024)
ESA (n.d.) ESA facts, (accessed 7 June 2024)
ESA (n.d.) Funding, (accessed 7 June 2024)
ESA (n.d.) ESA, an intergovernmental customer, (accessed 7 June 2024)
ESA (2023) Artificial intelligence in space, (accessed 7 June 2024)
ESA (n.d.) ERS, (accessed 7 June 2024)
ESA (n.d.) Envisat, (accessed 7 June 2024)
ESA (n.d.) Introducing Copernicus, (accessed 7 June 2024)
ESA (n.d.) ASAR, (accessed 7 June 2024)
ESA (n.d.) Artificial intelligence for Earth observation, (accessed 7 June 2024)
ESA (2008) Artificial intelligence boosts science from Mars, (accessed 7 June 2024)
ESA (n.d.) Hera, (accessed 7 June 2024)
NASA (2024) Double Asteroid Redirection Test (DART), (accessed 7 June 2024)
ESA (n.d.) About space debris, (accessed 7 June 2024)
ESA (n.d.) ClearSpace-1, (accessed 7 June 2024)
Space Explored (2020) ClearSpace to launch world’s first space debris removal mission in 2025, (accessed 7 June 2024)
ESA (2014) ESA Presents… Clean Space, (accessed 7 June 2024)
German Research Center for Artificial Intelligence (2021) AI for Spaceflight – ESA and DFKI Launch Joint Transfer Lab, (accessed 7 June 2024)
ESA (2018) First light from HyperScout imager, (accessed 7 June 2024)
ESA (2018) AIKO: Artificial Intelligence for Autonomous Space Missions, (accessed 7 June 2024)
ESA (n.d.) About space law, (accessed 7 June 2024)
United Nations Office for Outer Space Affairs (2024) Committee on the Peaceful Uses of Outer Space: Observer Organizations, (accessed 7 June 2024)
United Nations Office for Outer Space Affairs (1977) Resolution of the Council of the European Space Agency on the Agency’s Legal Liability (ESA/C/XXII/Res.3, 13 December 1977), (accessed 7 June 2024)
United Nations (2011) Draft articles on the responsibility of international organizations, (accessed 7 June 2024)
European Union (2021) Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, (accessed 7 June 2024)
Friends of Europe (2021) Governing Space, (accessed 7 June 2024)
EU Artificial Intelligence Act (2024) The EU Artificial Intelligence Act, (accessed 7 June 2024)
European Union (2021) European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)), (accessed 7 June 2024)
The EU Agency for the Space Programme (2021-2024) The EU Space Programme, (accessed 7 June 2024)
United Nations Office for Outer Space Affairs (1977) Space Law, (accessed 7 June 2024)
OxfordLanguages (2024) Definition of Artificial Intelligence, (accessed 7 June 2024)

© The Author(s) 2024
Scientific Article
DOI: 10.51940/2024.1.279-306
UDC: 341.229:004.8, 341.229:343.301

Iva Ramuš Cvetkovič*

AI—A Possible Solution to the Threats Against Human Lives Arising from Space Objects?

Abstract

With the 1957 launch of the satellite Sputnik I, the first space object reached outer space. Many more followed, and today space objects are considered an invaluable part of our everyday lives. Satellites and the data they provide are used for monitoring the environment through Earth observation, climate regulation, and natural disaster management, as well as for economic activities, for example, agriculture, transportation, communication, and several others.
Despite these numerous benefits, however, space objects pose threats to human lives in outer space, in airspace, and on Earth. The technological advancement of the 21st century, especially the increased use of artificial intelligence, brought hope that these threats would be minimised, mitigated, or even completely resolved. In this paper, I am going to evaluate whether such hope is reasonable and justified. To do this, I will, first, identify some examples of the threats to human lives arising from space objects and provide examples of when such threats have already materialised in reality. Second, I will present the applicable legal framework and then, third, evaluate it and show that it falls short in addressing those threats. Fourth, I will demonstrate how AI is planned to be used in mitigating these threats. Fifth, I will outline some of the new legal challenges such use of AI would bring and, against this background, finally assess whether such AI threat mitigation is going to be as effective as currently predicted.

Key words: AI, space technology, space debris, space objects, terrorism.

* PhD student at the Faculty of Law, University of Ljubljana, working at the Institute of Criminology at the Faculty of Law, researching topics at the intersection of law and technology, with a focus on new, under-researched technologies and their harmful effects on human rights, the environment, and society.

Zbornik znanstvenih razprav – letnik LXXXIV, 2024 • Ljubljana Law Review – Vol. LXXXIV, 2024 • pp. 279–306 • ISSN 1854-3839 • eISSN 2464-0077

On a Halloween night, I sat outside by the fire on the edge of a forest, and I saw something scary. Not a zombie, or a vampire, or a werewolf, or anything like that, but something even scarier—a bright spot of light above the trees that was moving noticeably slower than a shooting star, leaving behind a trail of burning little dots.
I assumed I had witnessed the last few seconds of the life of a space object, entering and burning up in the Earth’s atmosphere. Nothing happened, the evening went on, and now, in broad daylight, it probably sounds ridiculous that a distant object, which does not even exist anymore, could be scarier than a zombie, a vampire, or a werewolf. But a space object could, in fact, be considered deadlier than all three of these combined. In this article, I will therefore explain why I believe a Halloween costume of a satellite could easily beat all the aforementioned supernatural creatures, and furthermore examine whether artificial intelligence (hereinafter: AI) has the potential to become, mutatis mutandis, what garlic is to vampires.

I will, first, identify some examples of the threats space objects pose to human lives and provide examples of when such threats have already materialised in reality. Second, I will present the applicable legal framework and, third, evaluate it to show that it falls short in addressing these threats. Fourth, I will demonstrate how AI is planned to be used in mitigating those same threats. I will be using the term AI as an umbrella term to describe the new generation of technologies, mainly characterised by a certain degree of self-learning, automatization, and autonomy.1 Fifth, I will outline some of the new legal challenges such use of AI would bring, and lastly, against this background, I will assess whether such AI threat mitigation will likely be as effective as currently predicted.

1. Identifying the Threats

In this section I will present some examples of space objects with the potential of endangering human lives.
In line with the definition of a space object under Article I(d) of the Convention on International Liability for Damage Caused by Space Objects—the Liability Convention (hereinafter: LIAB)2 and the identical Article I(d) of the Convention on Registration of Objects Launched into Outer Space—the Registration Convention (hereinafter: REG)3—as including component parts of a space object, its launch vehicle, and parts thereof, this presentation will extend to threats arising from such parts as well. Additionally, as there is no requirement of functionality in the aforementioned definition, this presentation will also include non-functional space objects and their fragments, i.e., space debris.

1 Custers & Frosch-Villaronga, 2022, pp. 3–8.
2 Convention on International Liability for Damage Caused by Space Objects, 29 March 1972, 961 UNTS 187 (entered into force on 1 September 1972).
3 Convention on Registration of Objects Launched into Outer Space, 12 November 1974, 1023 UNTS 15 (entered into force on 15 September 1976).

1.1. Space Object Falling on Earth

The first identified threat to human lives is a space object crashing to the surface of the Earth, as demonstrated by the Kosmos 954 incident.
In 1978, the malfunctioning Soviet satellite Kosmos 954 re-entered the Earth’s atmosphere.4 Instead of burning up in the atmosphere, it intruded into Canadian airspace while failing to separate from its nuclear reactor containing about 50 kilograms of uranium-235.5 The satellite crashed onto the ground, littering Canadian territory with a 600-kilometre-long path of radioactive debris.6 During the aftermath of the incident—Operation Morning Light—a search of an area of more than 124,000 km² resulted in the finding of several intensely radioactive and potentially lethal pieces of the satellite.7 The fear arose that civilians might notice the pieces and bring them into their homes, thereby further spreading radioactive contamination, and that fear was magnified once the most radioactive piece of debris, which contained enough radiation to have potentially killed its holder within a few hours, was discovered.8 Luckily, the re-entry into the atmosphere was detected early on, and the area where the debris landed was sparsely populated. The effects of the Kosmos 954 crash would have been much more devastating if the debris had fallen onto densely populated areas, as many more people could have come into contact with radioactive material. Likewise, if the fall had remained undetected or hidden, the damage would not have been mitigated at all. Currently, there are more than 5,000 functioning satellites orbiting the Earth, more than 1,500 of which belong to the private US corporation SpaceX.9 In case of a malfunction, one of them could enter the Earth’s atmosphere and not burn up completely, and the debris could hit the Earth’s surface just like in the Kosmos 954 scenario. Moreover, pieces of a satellite could hit populated areas and result in the loss of human lives on Earth.

1.2. Collisions Between Space Objects

A second identified threat to human lives is the chance of collisions between two or more space objects. Such collisions have already occurred several times.
They often occur unintentionally, either as part of normal operation (low-speed collisions during rendezvous and docking) or as coincidental high-speed collisions. However, sometimes collisions are caused intentionally, with the clear goal of destroying the satellites—either to test anti-satellite weapons (hereinafter: ASAT tests) or to destroy satellites that may pose an active hazard to Earth, aircraft in flight, or other space objects.10 In all cases, the collisions result in an increased amount of space debris (see below). These collisions could, in some cases, result in the loss of human life. Such loss could occur in outer space if at least one of the space objects involved in the collision were carrying humans on board. An example of such a collision is the 1994 collision between the crewed Soyuz TM-17 spacecraft and the space station Mir, which was fortunately a minor collision and did not cause severe damage.11 Loss of life due to a space object collision could also occur on Earth or in its airspace. In this case, the loss of life would be caused by a space object indirectly; for example, where one or both of the space objects involved in a collision were crucial for the functioning of critical infrastructure, such as telecommunication or navigation systems. A detected collision between two satellites occurred in 2009, when the functioning commercial Iridium 33 satellite collided with the non-functional12 military Kosmos 2251 satellite.13 Luckily, no lives were lost due to this incident.

4 For the detailed description of the event see the Settlement of Claim between Canada and the Union of Soviet Socialist Republics for Damage Caused by “Cosmos 954”, 1981.
5 Weintz, 2015.
6 Patowary, 2020.
7 Shultz, 2010.
8 Weintz, 2015.
9 Union of Concerned Scientists, 2022; Kizer Witt, 2022.
However, this is no guarantee that space object collisions will not negatively affect human lives in the future. Moreover, despite not causing human casualties, the Iridium–Kosmos collision resulted in more than 1,800 pieces of new space debris, which spread throughout Earth’s orbit, further endangering human lives (explained next).14

1.3. Space Debris

A third identified threat to human lives from space objects is space debris. There exist many different technical definitions aiming to clarify this term; however, there is, as of yet, no universally adopted legal definition. Nonetheless, what is common to most of the existing definitions is that space debris is man-made material (space objects or fragments thereof) in outer space that is no longer functional.15 In Earth’s orbit, there are approximately 15,000 pieces of debris larger than 10 cm, about 200,000 pieces between 1 and 10 cm, and millions of pieces smaller than 1 cm, all travelling at incredibly high speeds.16

10 For more on the intentional collisions, definition and types of ASAT tests, as well as the challenges and damages related to or stemming from them, see Ramuš Cvetkovič, 2023.
11 Harland, 2022.
12 Because Kosmos 2251 was put out of function, it was technically considered space debris. For more on space debris, see section 1.3.
13 NASA, 2009; David, 2013.
14 NASA, 2009.
15 Sheer & Li, 2019, pp. 425–429.
16 Gregersen, 2022.

The incidents in which space objects are destroyed—collisions or ASAT tests—contribute enormously to the amount of space debris in orbit.17 The most obvious and direct threat that space debris poses to human life is that it significantly contributes to space object collisions.
Pieces larger than 1 cm can already affect a space object, whereas pieces larger than 10 cm can seriously damage or even destroy a space object.18 It has been reported that astronauts aboard the International Space Station (ISS) have already had to take shelter and cancel a spacewalk after a dangerous sudden increase of space debris caused by an ASAT test.19 The ISS, moreover, has had to perform avoidance manoeuvres to move out of the way of space debris several times, and has even been hit by a piece of debris that damaged its robotic arm.20 Space debris can, and already has, caused damage on Earth, as some pieces have hit houses in villages and, in at least one documented incident, even a person.21 It has been estimated that the risk of space debris resulting in human casualties upon its re-entry into the Earth’s atmosphere is increasing, noting that even a small piece of debris can cause extreme devastation if it hits places with a high population density in a relatively small area, such as large cities or airplanes.22 The amount of space debris is, furthermore, rapidly increasing—not only because more and more space objects are launched every year, but also due to the Kessler syndrome, a process of continuous fragmentation of existing pieces into ever-smaller fragments, which can, in the worst-case scenario, result in a dense debris cloud around the Earth.23 Such a dense presence of space debris negatively impacts the climate, as re-entering debris burning in the atmosphere contributes to stratospheric ozone (O3) depletion, and could therefore worsen the effects of climate change.24 Furthermore, it would likely cause the malfunction of all satellite services, impairing or completely disabling the functioning of critical infrastructure. In this way, space debris indirectly endangers human lives in the long term.

1.4. Misuse of Space Objects

The fourth identified threat is the misuse of space objects.
By this, I mean the use of space objects with the purpose of directly and specifically targeting human lives—either by an entity controlling the space object, or by another entity gaining control over the space object (for example, by means of cyber-attacks). In the past, such misuse has already occurred in practice. Although the technology that would enable direct strikes on humans from space objects does not yet exist, space objects can be, and already have been, indirectly used to target human lives. Data collected through space objects is already extensively used to determine military targets on Earth.25 This demonstrates that it is possible to endanger human lives by means of space technology, as such data could be misused for harmful or even unlawful purposes, such as, for example, the deliberate targeting of the civilian population.

A misuse of a space object occurs, moreover, when space objects become targets of cyber-attacks.26 Cyber-attacks are, for now, mostly aimed at obtaining satellite data, but attackers could also gain complete control over a satellite and turn it into a lethal weapon.27 In both cases, such cyber-attacks could result in losses of human lives. For example, it has been claimed that the cyber-attack on the KA-SAT satellite, providing communication services in Europe, including Ukraine, occurred on 24 February 2022—at the beginning of the Russian invasion of Ukraine.28 A cyber-attack disrupting communication at a time when evacuation of civilians was needed demonstrates how such attacks could easily contribute to mass losses of human lives.

17 David, 2013.
18 Chen, 2011, pp. 538–539.
19 Chow & Mitchell, 2021; Mukherjee, 2021.
20 McFall-Johnsen, 2021.
21 Byers et al., 2022, p. 1093; Zander, 2022.
22 Byers et al., 2022, pp. 1093–1095.
23 Kessler & Cour-Palais, 1978, pp. 2637–2646.
24 Ryan et al., 2022, pp. 10–11.

284 Zbornik znanstvenih razprav – letnik LXXXIV, 2024 / Ljubljana Law Review, Vol. LXXXIV, 2024
Even a mere disruption of a space object’s activity due to a cyber-attack could turn lethal if the satellite failed to perform essential services on which humans rely—for example, predictions of extreme weather conditions or natural disasters, enabling medical activity, water or electricity networks, traffic management, etc.—or if it crashed into another space object enabling such activities.29

Another threat to human lives arises when space objects are used for the purpose of a terrorist attack. The Liberation Tigers of Tamil Eelam (hereinafter: LTTE), considered a terrorist organisation by several states, hijacked an INTELSAT satellite and used it for publicity.30 This demonstrates that satellites and other space objects have already gained the interest of non-state actors linked to terrorism, and that they are likely to attract even greater interest from such entities in the future.

25 Lee & Steele, 2014, pp. 71–73; Bt & Cummings, 1991, pp. 46–52; Dunlap, 2021; Borowitz, 2022, pp. 1–4.
26 Puttré, 2022.
27 Hobe, 2019, p. 101.
28 See ESPI, 2022. See also Burgess, 2022; Viasat, 2022; Jewett, 2022.
29 Akoto, 2020.
30 Miller, 2019, p. 39; Stuart, 2015.

2. Outlining the Safeguards in the Applicable Legal Framework

In this section, I will outline the legal principles and rules applicable in outer space that contribute to minimising threats to human lives arising from space objects. The main focus will be on the principles enshrined in the Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, Including the Moon and Other Celestial Bodies—Outer Space Treaty (hereinafter: OST),31 the fundamental space law instrument, supplemented by other relevant provisions of space law treaties and customary international law.

2.1.
The Freedom of Exploration and Use, the Obligation to Carry out Exploration and Use of Outer Space for the Benefit and in the Interests of all Countries, and the Province of Humankind

Article I of the OST sets out important principles governing the use and exploration of outer space. It provides that the exploration and use of outer space are free to all states without discrimination of any kind. It further establishes that they shall be carried out for the benefit and in the interests of all countries, and that outer space shall be the province of all mankind. These two principles together form the so-called “common benefit clause”, conditioning and limiting the freedom of exploration to the common benefit of all states.32

For the topic at hand, namely threats to human lives arising from space objects, these principles carry special relevance as they provide fundamental guidance for conducting space activities. They entail that even though there is freedom to use and explore outer space, including the freedom to launch space objects, such freedom is subject to limitations, such as taking into account the freedoms, benefits, and interests of other states.33 Even though it has not been precisely determined what that means for every case of space activities in practice, it has been claimed that it can at least be concluded that the benefits and interests of other states are not respected when pursuing merely one state’s military objectives, such as, for example, conducting ASAT tests.34 This is further relevant for the threat posed by space debris, as it has been argued that the obligations in Article I preclude states from conducting their activities in a way that could potentially close access35 to outer space to others, meaning that they must reduce space debris emissions and actively work to keep outer space usable for all, including future generations.36 There already exist several soft-law mechanisms aimed at limiting the generation of space debris and ensuring greater sustainability of space activities,37 but they are usually not directly enforceable unless translated into a legally binding document, as has been done, for example, by Austrian38 and Slovenian39 national laws governing space activities. However, concrete and precise legally binding rules effectively defining the scope and governing the enforcement of all the principles enshrined in Article I of the OST have not yet been established. By aiming to eliminate obstacles to the freedom of use and exploration of outer space, and by putting forward the benefits and interests of all states, the obligations set out in Article I of the OST therefore to some extent increase the protection of human lives against the threats posed by space objects. However, it cannot be claimed that Article I in itself is a sufficient mechanism to fully eliminate such threats.

31 Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, Including the Moon and Other Celestial Bodies, 27 January 1967, 610 UNTS 205 (entered into force on 10 October 1967).
32 Hobe, 2009, p. 36.
33 Ibid., pp. 34–38.
34 Zedalis & Wade, 1978, pp. 466 and 480. For more on the (in)compatibility of the ASAT tests with the legal framework established by the OST, see Ramuš Cvetkovič, 2023.
35 It must be mentioned here that Article II of the OST explicitly prohibits national appropriation of outer space. It is yet to be determined whether disabling other states’ access to outer space could constitute de facto appropriation in contradiction with Article II.
36 Palmroth, Tapio & Soucek et al., 2021, p. 4; Niewęgłowski, 2021; Hobe, 2009, p. 43.
37 See, for example, UN COPUOS Guidelines for the Long-term Sustainability of Outer Space Activities, 2018, A/AC.105/2018/CRP.20 and IADC Space Debris Mitigation Guidelines, 2021, IADC-02-01.
38 See Article 5 of the Federal Law on the Authorisation of Space Activities and the Establishment of a National Registry (Outer Space Act), BGBl. I No. 132/2011 (Austria), which demands compliance with ‘state of the art and in due consideration of the internationally recognised guidelines for the mitigation of space debris’.
39 See Article 5 of the Space Activities Act, Official Gazette of the Republic of Slovenia, 43/22 (Slovenia), which states that space activities must ‘envisage measures for limiting the generation of space debris in accordance with the applicable UN Space Debris Mitigation Guidelines and for limiting adverse environmental effects on Earth or in outer space or adverse changes in the atmosphere’.

2.2. The Obligation to Carry on Activities in the Exploration and Use of Outer Space in Accordance with International Law

Article III of the OST dictates that activities in the exploration and use of outer space must be carried out in accordance with international law, including the Charter of the United Nations (hereinafter: UN Charter), in the interest of maintaining international peace and security and promoting international co-operation and understanding.
This Article, therefore, provides that international law applies to all human activities in outer space, including the launching, operation, and return of space objects.40 It can, moreover, be deduced that space objects shall not be used so as to jeopardise international peace and security.41

40 Ribbelink, 2009, pp. 64–66.
41 Ibid., pp. 66–67.
The applicability of international law through Article III, however, raises issues regarding the lex specialis42 nature of space law, as well as the scope of the applicability of international law.43 Even though there is no doubt that a substantial part of international law applies in outer space, its applicability in toto remains disputed.44 For now, it is agreed that not only the long-established rules of customary international law and the explicit rules enshrined in the UN Charter (including non-aggression, the prohibition of the use of force, self-defence and the peaceful settlement of disputes), but also subsequently and generally accepted principles—such as, for example, the precautionary principle—apply.45 It has additionally been established that specific sub-branches of international law, including human rights law, environmental law and international criminal law, apply as well, even though the scope of their application is not yet precisely determined.46 This could mean that, to rely on a specific rule or principle from these sub-branches of international law, it would first need to be established that such a rule is indeed applicable through Article III.47

In short, this provision demands that space objects are operated in accordance with international law. Thus, at least to a certain degree, human rights and environmental principles must be respected. This means that international legal safeguards aimed at protecting human lives principally apply to the handling of space objects; however, the precise scope of their application will have to be further defined to enable a direct and undisputed applicability of various rules and principles of international law in outer space through Article III.

2.3. Limitations on Weaponising Outer Space

Article IV(1) of the OST imposes certain limitations on the weaponisation of outer space.
It prohibits placing in orbit around the Earth any objects carrying nuclear weapons or any other kind of weapons of mass destruction, installing such weapons on celestial bodies, or stationing them in outer space in any other manner. It has been stressed that even though the provision expressly prohibits only space objects carrying such weapons, it must be understood as covering the weapons themselves as well.48

The issue arises in defining precisely which weapons are prohibited by Article IV(1). It is clear that this prohibition does not cover space objects carrying conventional weapons or military satellites.49 The use of satellite data for military activities on Earth is therefore not prohibited by Article IV.50 Less clear, on the other hand, is what precisely constitutes a “nuclear weapon” or a “weapon of mass destruction”. It has been claimed that all arms utilising atomic energy would qualify as nuclear weapons.51 Even though the concern has been expressed that, for a space weapon to qualify as a weapon of mass destruction, the number of human lives lost would have to be greater than in the case of a biological or chemical weapon,52 there is no reasonable ground for such a distinction, especially from the perspective of protecting human lives from the threats arising from space objects.

Further uncertainty arises as to whether the prohibition in Article IV(1) extends to all space objects capable of causing a nuclear reaction or severe devastation, or merely to those intended to be used as weapons.

42 For more on the lex specialis nature of space law, see Ramuš Cvetkovič, 2021.
43 Ribbelink, 2009, p. 67.
44 Ibid.
45 Ibid. For more on the concrete issue of the application of the precautionary principle to space activities, see Novak, 2022.
46 Ibid.
47 For more, see Hoe, Umar & Kamarudin, 2018, p. 336.
48 Schrogl & Neumann, 2009, p. 78; Gorove, 1973, p. 117.
Relying on an ordinary-meaning interpretation of the term “weapon”, in accordance with Article 31 of the Vienna Convention on the Law of Treaties (hereinafter: VCLT),53 it has been concluded that only objects intended to be used in warfare or in combat to attack and overcome an enemy, and that can cause a nuclear reaction (nuclear weapons) or widespread devastation and loss of life (weapons of mass destruction), are prohibited by Article IV(1).54 That means that space objects that contain a nuclear component not intended for attacking and overcoming an enemy—for example, using small atomic bombs for propulsion—do not qualify as prohibited.55 However, after the Kosmos 954 accident, the Principles Relevant to the Use of Nuclear Power Sources in Outer Space (hereinafter: NPS Principles)56 were accepted, aimed at regulating precisely these types of space objects (i.e. space objects with non-weaponised nuclear components). Even though the NPS Principles are soft-law guidelines of a non-binding nature, they introduced important safeguards, such as the obligation to conduct and make public a comprehensive safety assessment (Principle 4), notification of the re-entry of radioactive materials to Earth (Principle 5), and an emergency assistance responsibility (Principle 8).57

Another dilemma arising from Article IV(1) and the use of the word “weapon” concerns the definition of the word “enemy”. This dilemma was sparked by the debate on asteroid mitigation programmes and planetary defence.58 In that case, it is unclear whether targeting a natural object in outer space with the use of a space object carrying a nuclear device (thus transforming the space object into one carrying nuclear weapons prohibited under Article IV(1)) would constitute a breach of Article IV(1).59 The main question here is whether a natural threat could be considered the “enemy”, in the sense that the means to tackle such a threat would be considered a “weapon”.60 Even though such an interpretation is possible, the question whether the use of a space object with nuclear power in such a non-military way is allowed or prohibited by Article IV(1) remains unresolved.61

Article IV(2) of the OST regulates weaponisation on celestial bodies, stating that the Moon and other celestial bodies shall be used exclusively for peaceful purposes, prohibiting the establishment of military bases, installations and fortifications, the testing of any type of weapons, and the conduct of military manoeuvres on celestial bodies. However, the exclusively peaceful purposes clause pertains only to celestial bodies. It is therefore not applicable to space objects not situated on celestial bodies. That means that even though Article IV(2) prohibits the testing of weapons, this does not apply to ASAT tests aimed at satellites. It has been expressed that even Article IV(1) does not prohibit ASAT tests.62

As demonstrated by this analysis, despite the fact that Article IV of the OST provides certain safeguards against threats to human lives stemming from space objects, it does not explicitly and unambiguously eliminate all potential threats.

49 Schrogl & Neumann, 2009, p. 78; Zedalis & Wade, 1978, p. 459.
50 Hobe, 2019, p. 106.
51 Gorove, 1973, p. 115.
52 Ibid., pp. 115–116.
53 Vienna Convention on the Law of Treaties, 23 May 1969, 1155 UNTS 331 (entered into force on 27 January 1980).
54 Schrogl & Neumann, 2009, pp. 75–77.
55 Ibid., p. 76.
56 UN GA Res. 47/68, Principles Relevant to the Use of Nuclear Power Sources in Outer Space, 14 December 1992.
57 See Hobe, 2019, pp. 153–154.

2.4. Assistance to Astronauts

Article V of the OST contains several provisions aimed at protecting the life and health of astronauts.
Firstly, it dictates that States Parties to the Treaty shall regard astronauts as envoys of mankind in outer space and shall render them all possible assistance in the event of accident, distress, or emergency landing on the territory of another State Party or on the high seas (Article V(1) of the OST). Secondly, it adds that, in carrying on activities in outer space and on celestial bodies, the astronauts of one State Party shall render all possible assistance to the astronauts of other States Parties (Article V(2) of the OST). Lastly, it obliges States Parties to the Treaty to inform other States Parties or the Secretary-General of the United Nations immediately of any phenomena they discover in outer space, including the Moon and other celestial bodies, which could constitute a danger to the life or health of astronauts (Article V(3) of the OST).

58 Schrogl & Neumann, 2009, p. 76.
59 Ibid., pp. 76–77.
60 Ibid., p. 77.
61 Ibid., p. 77.
62 Ibid., p. 78.

It is clear that Article V, therefore, provides a certain level of protection to human lives from the threats posed by space objects. However, the extent of that protection depends on the interpretation of the text. In this regard, the question arises whether all humans in outer space fall under the definition of an “astronaut”, or merely trained professionals. This dilemma was non-existent at the beginning of the space era, when only professional astronauts were sent to outer space, but with the changed nature of space activities and the introduction of private actors, as well as space tourists, to outer space, three categories of space travellers have to be distinguished:
1. The classical professional astronaut, bearing certain fundamental responsibilities in relation to the spacecraft and its operation;
2.
The professional spaceflight participant, entrusted with a special job to perform in outer space that is not related to operating the spacecraft; and
3. The private spaceflight participant, travelling to outer space for leisure reasons and essentially paying for the trip.63

A subsequent dilemma concerns whether all three categories are entitled to the rights and obligations accorded to astronauts in Article V of the OST. Several authors agree that it is not appropriate to confer the status of an astronaut on completely untrained personnel travelling to outer space exclusively for private leisure purposes, especially in light of the fact that astronauts are considered envoys of mankind by the OST.64 It has also been agreed that categories 2 and 3 should not be blurred into one, as there are important distinctions between them.65 As the OST leaves us with little guidance on how to resolve the dilemma of which rights and obligations pertain to the three aforementioned categories of humans in outer space, the subsequent Agreement on the Rescue of Astronauts, the Return of Astronauts and the Return of Objects Launched into Outer Space—Rescue Agreement (hereinafter: ARRA)—must be consulted.66 This document, concretising Articles V and VIII of the OST, mentions “astronauts” in its Preamble (as a reference to earlier documents) but in the operative part of the text uses only the term “personnel of the spacecraft”.67

Views differ on whether the protection accorded to astronauts by both the OST and the ARRA applies to space tourists as well. Arguments following an ordinary-meaning interpretation conclude that an astronaut is only a highly trained, state-employed professional, whereas arguments following an object-and-purpose interpretation—which seeks the broadest possible human welfare—find that no distinction should be made.68 The second view is further supported by the Agreement Governing the Activities of States on the Moon and Other Celestial Bodies—Moon Agreement (hereinafter: MOON)69—which clarifies that, for the purposes of that document, any person, whether professional or private, on the Moon is regarded as an astronaut.70 It seems that, for the purposes of minimising the threat posed to human lives by space objects, the broad interpretation of the term “astronaut” is more suitable. On the other hand, equalising the status of space tourists with that of categories 1 and 2 might in effect cause more people to decide to go to outer space for private leisure purposes, which would lead to the launch of more and more space objects, generating more space debris and increasing the likelihood of collisions and pollution, thereby further endangering human lives in the long run.

An additional dilemma arising from the interpretation of the text of Article V of the OST is the threshold of the standard of “all possible assistance”.

63 Von der Dunk & Goh, 2009, pp. 97–98.
64 Ibid.
65 Ibid.
66 Agreement on the Rescue of Astronauts, the Return of Astronauts and the Return of Objects Launched into Outer Space, 19 December 1967, 672 UNTS 119 (entered into force on 3 December 1968).
67 Marboe, Neumann & Schrogl, 2013, pp. 33–35.
The difficulty of defining this standard is enhanced by the fact that Article V(1) of the OST states that “all possible assistance” must be rendered to astronauts in the event of accident, distress, or emergency landing, whereas in Article V(2) these qualifying circumstances are omitted and only the obligation to render “all possible assistance” remains.71 Even though the absence of these qualifying circumstances hints that the standard is set higher in Article V(2), the interpretation of Article V must, according to Article 32 of the VCLT, not lead to absurd or unreasonable results. The standard must therefore be interpreted in a somewhat more limited manner, namely, as an obligation to assist humans in threatening circumstances.72 The limitation on the interpretation, however, should be such that it still enables effective protection of human lives.

The standards of Article V of the OST are further concretised by the obligations set out in the ARRA. However, it must be noted that the ARRA contains similar standards—namely, “all possible steps” and “all necessary assistance” (Article II of the ARRA)—meaning that the dilemma regarding such thresholds is not completely resolved. The effect of Article V of the OST and the ARRA on preventing threats to human lives stemming from space objects is, for the reasons stated above, limited.

68 Ibid., p. 35.
69 Agreement Governing the Activities of States on the Moon and Other Celestial Bodies, 5 December 1979, 1363 UNTS 3 (entered into force on 11 July 1984).
70 Marboe, Neumann & Schrogl, 2013, p. 35.
71 Von der Dunk & Goh, 2009, p. 98.
72 Ibid., p. 100.

2.5.
State Responsibility for National Space Activities and the Obligation to Authorise and Continuously Supervise Space Activities

Article VI of the OST dictates that states bear international responsibility for national activities in outer space, whether such activities are carried on by governmental agencies or by non-governmental entities, and for ensuring that national activities are carried out in conformity with the provisions set forth in the OST. The term ‘national activities’ must be understood broadly, not merely as activities in the national interest, but in contrast to international activities carried out by international organisations.73 There exist two distinct views on the nature and effect of this rule.74 One view is that it is a rule of a secondary nature and that it therefore applies only in case of a breach of a primary international obligation.75 According to this view, Article VI is much broader than customary international law on state responsibility, as it ascribes international responsibility to a state without having to establish the condition of attributability.76 The second view supports a narrower interpretation, claiming that a state is only responsible for its own actions and omissions, meaning that it is only responsible for the actions of private actors when it fails to supervise them.77 It remains to be authoritatively determined which of the two views correctly captures the nature of the obligation in Article VI. As the first view is broader, and therefore extends state responsibility to private actors completely, it is more suitable for meeting the goal of protecting human lives from threats posed by space objects, as the possibility of bearing international responsibility represents a motivation for a higher standard of care.

Article VI of the OST also establishes that states must authorise and continuously supervise the activities of non-governmental entities in order for such activities to be permitted.
The detailed procedure of authorisation and continuous supervision is usually subject to particular national legislation, which regulates in further detail the safeguards aimed at protecting human lives from threats posed by space objects.78 States therefore have the opportunity to actively and extensively mitigate threats to human lives posed by space objects through their own national legislation, and should, when creating it, bear in mind these threats and the importance of effective continuous supervision of all national space activities. It must be noted, however, that when drafting their national legislation, states are forced to balance mechanisms for ensuring the safety of humans against the requirements of the private sector to maintain competitiveness and support the development of their economies.79 It is difficult to imagine that a single state would put forward very strict and limiting legislation, as this could drive private actors to seek to register and conduct their space activities in a state with more favourable space legislation. The main initiatives for increasing the protection of human lives will therefore probably have to be made at the international level.

73 See Gerhard, 2009, pp. 108–110.
74 For more on this, see Ramuš Cvetkovič, 2021, pp. 19–20.
75 Cassese, 2005, p. 244.
76 Hobe & Pellander, 2012, pp. 7–8.
77 Ibid., pp. 8–9; Marchisio, 2018, p. 201.
78 In Slovenia, this procedure is regulated in the Space Activities Act (Zakon o vesoljskih dejavnostih – ZVDej), Official Gazette of the Republic of Slovenia, No. 43/22.

2.6. Liability for Damages Caused by Space Objects

Article VII of the OST establishes liability to pay compensation for damages caused by space objects.
It dictates that a state that launches or procures the launching of an object into outer space, or from whose territory or facility an object is launched, is internationally liable for damage caused to another state or to its natural or juridical persons by such object or its component parts on the Earth, in air space, or in outer space, including the Moon and other celestial bodies.

International liability is further defined in the LIAB. Its provisions are relevant to threats to human lives arising from space objects, as loss of human life is recoverable damage under the LIAB. In particular, Article I(a) of the LIAB defines damage as including loss of life, personal injury, or other impairment of health.

Furthermore, the LIAB covers space object threats to humans both on Earth and in outer space, establishing two different liability regimes based on where the damage occurs. In case damage occurs on the surface of the Earth or to an aircraft in flight, Article II of the LIAB establishes absolute liability, dictating that the launching state of the space object causing the damage is absolutely liable to pay compensation. There is no need to prove that a state failed to perform a duty or satisfy a standard of care.80 Article II of the LIAB therefore governs situations where a space object re-enters from orbit and crashes onto the Earth or damages an airplane in flight.

In case damage occurs in outer space, Article III of the LIAB states that, in the event of damage being caused to a space object of one launching state, or to persons or property on board such a space object, by a space object of another launching state, the latter shall be liable only if the damage is due to its fault or the fault of persons for whom it is responsible.
This means that, in outer space, liability is not absolute but fault-based, whereby fault means a deviation from a legal duty or an applicable standard of care.81

The extent to which liability contributes to minimising threats to human lives posed by space objects is questionable. Even though international liability for damage caused by a space object increases states’ motivation to exercise due care to avoid potential damage, its main effect occurs ex post, after the damage is already done (and proven).

79 See Linden, 2016.
80 Hobe, 2019, p. 82.
81 Ibid., p. 83.

Furthermore, issues may arise regarding the establishment of causation between the damage and the space object. In certain cases, usually when the damage occurs on Earth (such as in the Kosmos 954 situation), the proximity of the object to the damage constitutes prima facie evidence that it was indeed that particular object that caused the immediate damage.82 In other cases, when the damage occurs in orbit, it will most likely be more difficult to establish causation, but that difficulty does not remove the requirement of proving the causal nexus between a space object and the damage.83 In addition, difficulties may arise in obtaining evidence for identifying the precise object and its launching state, especially when damage does not occur immediately but after some time has passed.84 This is especially problematic in the case of space debris, as it is extremely challenging, if not impossible, to determine which space object a small piece of debris once belonged to. A final issue regarding causation occurs in subsequent chains of events, when it is difficult to assess whether it was a space object that initiated the chain of events resulting in damage. This issue might arise in the case of ASAT tests or cyber or terrorist attacks on space objects, complicating the process of proving the causal nexus.
While the regulation of international liability plays an important role in establishing and encouraging respect for a required standard of care—since the sanction for its violation is often an obligation to pay compensation for damage—it cannot be seen as the sole means of minimising the threats to human lives arising from space technology.

2.7. Jurisdiction and Control Over a Space Object

Article VIII of the OST establishes that the state that has registered a space object in its national register obtains jurisdiction and control over that object, and over any personnel thereof, while in outer space or on a celestial body. Jurisdiction, in the sense of Article VIII of the OST, means the enforcement of laws and rules in relation to persons and objects, whereas control means the exclusive right and the possibility to supervise the activities of the space object and its personnel.85 The state of registry, therefore, directly impacts the activities of the space object and human lives, not only those of the personnel on board that space object but also of all others who could potentially be affected by it. It is therefore important that such a state makes use of the powers conferred by Article VIII and takes appropriate measures aimed at mitigating threats to human lives arising from space objects registered in its registry. The problem with national as well as international registrations is that they often do not occur in a timely manner, making the identification and tracking of space objects more difficult.86

82 Kerrest & Smith, 2009, p. 141.
83 Ibid.
84 Ibid., p. 142.
85 Schmidt-Tedd & Mick, 2009, p. 157.
86 Hertzfeld, 2021, pp. 238–240.

Article VIII thus has important implications for threats to human lives arising from space objects.
These can be positive if states use the jurisdiction and control conferred on them by Article VIII in a way that mitigates such threats, but they can also be negative if states decide to use their jurisdiction and control in the opposite way, for example to block the process of removing a dangerous space object or space debris.

2.8. Principles of Co-operation, Mutual Assistance and Due Regard and Prohibition of Harmful Contamination

Article IX of the OST provides several principles relevant to minimising threats to human lives arising from space objects. Firstly, it dictates that in the exploration and use of outer space, states shall be guided by the principle of co-operation and mutual assistance and shall conduct all their activities in outer space with due regard to the corresponding interests of all other states. Secondly, states shall conduct exploration of outer space, including the Moon and other celestial bodies, so as to avoid their harmful contamination. Thirdly, it provides for consultations in case of potentially harmful interference with the activities of other states in the peaceful exploration and use of outer space. It has been explained that the principle of co-operation and mutual assistance is not to be construed as a strict obligation but rather as a general principle that needs to be further detailed based on other international instruments dealing with the co-operative behaviour of states.87 In doing so, special attention needs to be paid to the threats mentioned above. The Kosmos 954 incident can, in this regard, serve as a case study for establishing a more efficient co-operation forum and for preparing better for similar future incidents. The principle of due regard is connected to the freedom of exploration and use of outer space, which is limited by the freedom of other states to do the same.
In that sense, it implies a certain standard of care, attention, or observance with which a state must act when conducting space activities.88 A state must, therefore, manage its space objects in a way that does not impair the corresponding interests of other states. It is unlikely that ASAT tests and especially cyber or terrorist attacks could ever fulfil this criterion, especially as such activities can endanger or cause the loss of human life. Lastly, the prohibition of harmful contamination is an important contribution to protecting human lives. It has been established that this provision is to be understood broadly, covering all possible kinds of harmful interference in outer space, irrespective of whether such interference is deliberate or unintentional.89 The question is, however, where the line should be drawn. Is every space object sent into outer space already harmful interference, or only those with nuclear components or radioactive materials on board? It has been argued that space debris may fit under the definition of harmful contamination.90 Even though such a view remains disputed—since it has often been highlighted that none of the space treaties explicitly prohibits space debris91—it is plausible, at least in certain cases of disproportionately large contamination, for example, in events that deliberately cause an explosion resulting in massive amounts of space debris, such as ASAT tests. It can be concluded that while Article IX contains important provisions that can contribute to the safety of space operations and, consequently, to the protection of human rights, its reach in achieving this goal is limited due to the openness of its terms as well as the absence of an enforcement mechanism.

87 Marchisio, 2009, pp. 174–175.
88 Ibid., p. 175.
89 Ibid., p. 176.

3.
Evaluation of the Legal Framework

As can be observed from the previous section, the legal framework applicable to outer space activities, and particularly to space objects, contains several safeguards aimed at ensuring the protection of human lives from threats arising out of space technology. However, several of the abovementioned principles are written in a broad and vague manner, which, on the one hand, ensures their broad acceptance as well as their flexibility, but, on the other hand, lacks precise obligations and, therefore, allows for different interpretations. The possibility of different interpretations creates an issue in terms of legal predictability and legal safety. Another issue regarding the legal framework is related to its enforcement and the absence of compulsory jurisdiction in international law. Thus, it can be concluded that the applicable legal framework alone is not sufficient to ensure maximum protection from threats to human lives arising from space objects. New, more concrete and more easily enforceable legal rules will need to be adopted in order to increase the effectiveness of the legal framework. The legal sphere, however, can only be part of the solution. The other part will have to come from improvements in the material and technical sphere, amongst which one of the most promoted is the inclusion of AI mechanisms into space technology.

90 See, for example, Alby et al., 2001.
91 Diaz, 1993, p. 377; Dennerley, 2018, p. 286.

4. AI as a Solution to the Threats to Human Lives Arising from Space Objects

Recently, AI has often been presented as a solution to the threats stemming from various types of technology, including space technology. In this section, I will present the ways in which AI is proposed and planned to be used to increase safety and resilience or
to improve the operation of space objects, as well as the ways in which AI is already used for such purposes. AI solutions have, to this day, been proposed for all of the abovementioned threats stemming from space technology. Regarding the threat of space objects falling on Earth, AI technologies can be used for recognising and identifying various artificial objects in the sky, as well as distinguishing them from each other. In this way, they can contribute to the foreseeability of potential accidents and to identifying the approximate place of the crash, and can thus serve as a basis for creating an evacuation plan, if required. AI has already been used for the recognition of artificial objects such as airplanes, as well as some natural occurrences or even objects in outer space.92 Secondly, AI could be used to prevent collisions between space objects. Namely, AI is said to be able to contribute to more effective monitoring of space objects—their positioning, communication, as well as their end-of-life management.93 Furthermore, AI is becoming an important tool for keeping a constant watch on satellites' equipment and functioning and, consequently, for promptly alerting in case of a malfunction or a threat of collision, and in certain cases even for directly mitigating this risk.94 SpaceX claimed it had already installed such an AI collision-preventing system on some of its satellites. However, subsequent reports put its efficiency into question, as they describe incidents of near crashes with other space objects.95 AI has additionally been ascribed an important role in space debris remediation. The European Space Agency (hereinafter: ESA) announced its plan to develop and launch the first space mission aimed at removing space debris, named ClearSpace-1, in 2026, which is going to be equipped with an AI camera used for locating debris.96 Regarding the misuse of space objects, the role of AI can have contradictory effects.
On the one hand, AI can improve efficiency and accuracy in detecting and preventing cyber-attacks; on the other hand, it can be exploited so as to make the attacks more efficient and accurate, and consequently also more devastating.97 Thus, it acts as both a means of conducting and a means of combating cyber or similar attacks. Against this background, it can be observed that AI is being increasingly deployed in space technology and that it is planned to play a role in ensuring greater safety for human lives. However, even if these technologies are developed and deployed accordingly, and they function as envisioned, the use of AI in space activities is going to open several new legal questions within a legal framework that already contains many rules open to interpretation. Therefore, to evaluate whether AI is indeed an effective solution to the threats arising from space objects, the use of AI in mitigating those threats must also be examined in light of the legal challenges it could bring. Through those identified legal challenges, a better evaluation of the overall effectiveness of AI in combating the threats arising from space technology can be conducted.

92 Bobrovsky et al., 2019, pp. 1–2; Amster, 2022; Instituto de Astrofísica e Ciências do Espaço, 2022.
93 Bratu, Lodder & van der Linden, 2021, p. 427.
94 Schmelzer, 2020; Miller, 2022.
95 Chatterjee, 2022.
96 ESA, 2020; Macaulay, 2020.
97 Zekos, 2022a, p. 368.

5. Some Legal Challenges Stemming from the Use of AI in Preventing Threats to Human Lives

This section aims to outline some legal challenges stemming from the previously described uses of AI in space activities.
The first legal challenge is related to the fact that AI necessarily entails a certain level of unpredictability.98 To that end, it can negatively affect the predictions and early detections of threats to human lives arising from space objects, especially in light of the fact that humans are prone to being overly trusting of decisions made by technology, based on a generalised, often unsubstantiated perception of its capabilities (the so-called machine heuristic).99 The second issue related to AI used in space technology is liability. The concept of liability in space law is related to damage caused by space objects, and, as mentioned above, the term space object includes its component parts as well as the launch vehicle and parts thereof. It is unclear, however, whether this would cover situations where damage would occur due to the AI connected to the functioning of such a space object. Would such damage still be recoverable under Article VII of the OST and the provisions of the LIAB? The answer to this question depends on whether AI can be considered a "component part" of a space object. In other words, must a "component part" of a space object necessarily be a physical component, or can it be "intangible" software? For now, the opinion has been expressed that AI as software should be considered a "component part" of a space object.100 In that case, damage is recoverable under these legal rules. If the opposite interpretation, namely that AI is not to be considered a component part of a space object, is accepted, the fact that AI was used in the functioning of a space object that caused damage will make it more difficult to establish causation, as a convenient argument will be available that the damage was caused by AI, not by the space object itself. In order to avoid such a situation, this dilemma must be resolved and liability for damage caused by AI must be established as well.

98 Chatzipanagiotis, 2020, p. 2.
99 On machine heuristic, see, for example, Sundar & Kim, 2019.
100 Chatzipanagiotis, 2020, p. 3.

Furthermore, AI will also affect the notion of "fault" in particular cases of damage occurring in outer space.101 Fault is established when states fail to perform the required amount of due diligence.102 In establishing the diligence standard, AI will thus make it more difficult to decide which risks were foreseeable and what kind of due diligence was, therefore, required. The proposed solution to this issue is the establishment of international rules of conduct.103 As AI could also be used as a weapon (or at least a part of one),104 the need for reinterpretation or rewriting of Article IV of the OST arises. Currently, Article IV(1) of the OST explicitly prohibits only nuclear weapons and weapons of mass destruction, but not autonomous weapons utilising AI. In any case, additional safeguards regarding the latter are needed, such as, for example, the obligation that, in cases involving the use of AI in autonomous weapons, the final decision on activating such a weapon should always remain in the hands of a human.105 However, even keeping the human in the loop can be insufficient when humans do not critically examine algorithmic suggestions but merely act as a "stamping machine".106 Therefore, significantly more attention has to be paid to stricter regulation of such uses of AI and to accountability for the loss of life they cause. Furthermore, the use of satellite data for powering AI tools which result in breaches of human rights, international humanitarian law, or other rules of international law (re)opens the question of the (il)legality of the space activity producing such data. Another issue related to the use of AI is that there are currently no comprehensive "rules of the road" governing traffic in outer space, including space objects.
The drafting of these principles is currently ongoing, but the authoritative version of the rules has not yet been adopted. In establishing these traffic rules, AI will, therefore, make the process more challenging for lawmakers, as it will bring a new component to an already complex issue. The last issue concerns the delicate relationship between transparency and the security of AI. As AI can be used to both improve and harm the security of space objects, concrete rules need to be developed in order to strengthen the security aspect of AI systems used for space objects, as in this way the space objects will become more resilient to terrorist and cyber-attacks.107 On the other hand, however, the regulation of AI technology must be mindful of the transparency requirement, to ensure the trustworthiness of AI systems.

101 Ibid.
102 Ibid.
103 Ibid., p. 7.
104 For an example of how AI (utilising, inter alia, satellite data) can be used as a weapon, see the report on the algorithmic weapons Lavender, Gospel and where is Daddy?, developed by Israel and used in its onslaught on Gaza. Abraham, 2024.
105 Martin & Freeland, 2020, p. 6.
106 For more on the dangers of the lack of critical assessment of AI-produced decisions, see Abraham, 2024.
107 Ibid., p. 7.

6. Conclusion: An (Im)perfect Solution

The brief analysis conducted throughout this article demonstrates that there exists a need to further mitigate threats to human lives arising from space objects, as the legal framework, despite containing several safeguards, can only play an effective role to a certain extent and often falls short of addressing concrete issues.
AI is often presented as a promising technology to address this gap and improve the protection of human lives; however, the analysis shows that it cannot in itself be perceived as a complete and comprehensive solution, and that it can even, in certain cases, bring more issues than it actually solves. Firstly, most of the AI technologies planned to be used to enhance the capabilities and safety of space technology are still being developed and are not yet completed. They are expected to be subject to several changes and improvements. In most cases, there is currently a lack of relevant data to test their efficiency. Secondly, the use of AI in mitigating the risks posed by space objects will open new legal dilemmas, some of which were identified in the previous section, which will add to those already present in the existing legal framework. The danger is that the legal framework could become even less effective in ensuring that technology is used in a human-friendly manner, as we can already observe in some of the current108 examples of the use of AI technologies as a weapon in war. Thus, AI solutions have to be consciously considered from all possible perspectives and then carefully implemented.109 Their potential effects have to be examined and the legal framework amended accordingly. This means that the use of AI to mitigate the risks posed to human lives by space objects is not the end, but the beginning of what could lead towards a solution to the problem of dangerous space objects. While AI carries the potential to improve space technology in this regard, further critical research will have to be carried out in both the technical and legal fields, to ensure that AI will indeed serve the purpose of protecting human lives.
To fulfil this goal, the developing AI regulatory framework and international space law will have to evolve in strong co-operation.110 Until then, space objects, even when enhanced or interacting with a certain type of AI technology, will remain a suitable "scary costume" for the Halloweens to come.

108 Abraham, 2024.
109 Zekos, 2022b, pp. 348–349.
110 See Ramuš Cvetkovič & Drobnjak, 2023.

References

Abraham, Y. (2024) 'Lavender': The AI machine directing Israel's bombing spree in Gaza, (accessed 7 January 2025).
Akoto, W. (2020) Hackers could shut down satellites – or turn them into weapons (accessed 3 November 2022).
Alby, F., et al. (2001) The European Space Debris Safety and Mitigation Standard. Proceedings of the 3rd European Conference on Space Debris, ESOC, Darmstadt, Germany, 19–21 March 2001, (accessed 1 December 2023).
Amster, K. (2022) The AI help to identify astronomical objects, (accessed 10 November 2022).
Bobrovsky, A.I., Galeeva, M.A., Morozov, A.V., et al. (2019) 'Automatic detection of objects on star sky images by using the convolutional neural network', Journal of Physics 1236, pp. 1–6.
Borowitz, M. (2022) The Military Use of Small Satellites in Orbit, (accessed 3 November 2022).
Bratu, I., Lodder, A.R., & van der Linden, T. (2021) 'Autonomous space objects and international space law: Navigating the liability gap', Indonesian Journal of International Law 18(3), pp. 423–446.
Bt, S.P., & Cummings, D. (1991) 'The first space war: The contribution of satellites to the Gulf War', RUSI Journal 136(4), pp. 45–53.
Burgess, M. (2022) A Mysterious Satellite Hack Has Victims Far Beyond Ukraine, (accessed 3 November 2022).
Byers, M., Wright, E., Boley, A., et al. (2022) 'Unnecessary risks created by uncontrolled rocket reentries', Nature Astronomy 6, pp. 1093–1097.
Cassese, A. (2005) International Law. New York: Oxford University Press.
Chatterjee, P. (2022) How SpaceX is using AI to advance its ambitions, (accessed 13 November 2022).
Chatzipanagiotis, M. (2020) 'Whose fault is it? Artificial Intelligence and Liability in International Space Law', 71st International Astronautical Congress (IAC) – The Cyberspace Edition, 12–14 October 2020.
Chen, S. (2011) 'The Space Debris Problem', Asian Perspective 35(4), pp. 537–558.
Chow, D., & Mitchell, A. (2021) Astronauts take shelter as debris passes dangerously close to space station, (accessed 31 October 2022).
Custers, B., & Frosch-Villaronga, E. (2022) 'Humanizing Machines: Introduction and Overview', in: Custers, B. & Frosch-Villaronga, E. (eds.) (2022) Law and Artificial Intelligence. The Hague: Springer, pp. 3–28.
David, L. (2013) Effects of Worst Satellite Breakups in History Still Felt Today, (accessed 31 October 2022).
Dennerley, J.A. (2018) 'State liability for space object collisions: The proper interpretation of 'fault' for the purposes of international space law', The European Journal of International Law 29(1), pp. 281–301.
Diaz, D. (1993) 'Trashing The Final Frontier: An Examination of Space Debris From a Legal Perspective', Tulane Environmental Law Journal 6(2), pp. 369–395.
Dunlap, C. (2021) Are commercial satellites used for intelligence-gathering in attack planning targetable?, (accessed 3 November 2022).
ESA (2020) Earth's first space debris removal mission, (accessed 13 November 2022).
ESPI (2022) ESPI Short Report 1 – The war in Ukraine from a space cybersecurity perspective, (accessed 20 January 2024).
Gerhard, M. (2009) 'Article VI' in: Hobe, S., et al. (eds.) (2009) Cologne Commentary on Space Law, Vol 1. Cologne: Carl Heymanns Verlag, pp. 103–125.
Gorove, S. (1973) 'Arms Control Provisions in The Outer Space Treaty: A Scrutinizing Reappraisal', Georgia Journal of International and Comparative Law 3, pp. 114–123.
Gregersen, E.
(2022) Space Debris, (accessed 31 October 2022).
Harland, D.M. (2022) Mir. Soviet-Russian Space Station, (accessed 27 October 2022).
Hertzfeld, H.R. (2021) 'Unsolved issues of compliance with the registration convention', Journal of Space Safety Engineering 8(3), pp. 238–244.
Hobe, S. (2009) 'Article I' in: Hobe, S., et al. (eds.) (2009) Cologne Commentary on Space Law, Vol 1. Cologne: Carl Heymanns Verlag, pp. 25–44.
Hobe, S., & Pellander, E. (2012) 'Space Law: a "Self-Contained Regime"?' in: Hobe, S. & Freeland, S. (2012) In Heaven as on Earth? The Interaction of Public International Law on The Legal Regulation of Outer Space. Bonn: Institute of Air and Space Law of the University of Cologne, pp. 1–12.
Hobe, S. (2019) Space Law. Baden-Baden: Nomos.
Zekos, G.I. (2022a) Political, Economic and Legal Effects of Artificial Intelligence. Cham: Springer.
Hoe, L.I., Umar, R., & Kamarudin, M.K.A. (2018) 'Article III of the 1967 Outer Space Treaty: A Critical Analysis', International Journal of Academic Research in Business and Social Sciences 8(5), pp. 326–338.
Instituto de Astrofísica e Ciências do Espaço (2022) Artificial intelligence helps in the identification of astronomical objects, (accessed 13 November 2022).
JAXA (1981) Settlement of Claim between Canada and the Union of Soviet Socialist Republics for Damage Caused by "Cosmos 954", (accessed 27 October 2022).
Jewett, R. (2022) Viasat Details KA-SAT Cyberattack that Affected Thousands of Modems in Ukraine, (accessed 3 November 2022).
Kerrest, A., & Smith, L.J. (2009) 'Article VII' in: Hobe, S., et al. (eds.) (2009) Cologne Commentary on Space Law, Vol 1. Cologne: Carl Heymanns Verlag, pp. 126–145.
Kessler, D.J., & Cour-Palais, B.G. (1978) 'Collision Frequency of Artificial Satellites: The Creation of a Debris Belt', 38 JGR Space Physics A6, pp. 2637–2646.
Kizer Witt, K.
(2022) Who owns all the satellites?, (accessed 26 October 2022).
Lee, R.J., & Steele, S. (2014) 'Military Use of Satellite Communications, Remote Sensing, and Global Positioning Systems in the War on Terror', Journal of Air Law and Commerce 79(1), pp. 69–112.
Linden, D. (2016) 'The Impact of National Space Legislation on Private Space Undertakings: Regulatory Competition vs. Harmonization', Journal of Science Policy & Governance 8(1).
Macaulay, T. (2020) AI to help world's first removal of space debris, (accessed 13 November 2022).
Marboe, I., Neumann, J., & Schrogl, K. (2013) 'Article I' in: Hobe, S., et al. (eds.) (2013) Cologne Commentary on Space Law, Vol II. Cologne: Carl Heymanns Verlag, pp. 38–47.
Marboe, I., Neumann, J., & Schrogl, K. (2013) 'Preamble' in: Hobe, S., et al. (eds.) (2013) Cologne Commentary on Space Law, Vol II. Cologne: Carl Heymanns Verlag, pp. 31–37.
Marchisio, S. (2009) 'Article IX' in: Hobe, S., et al. (eds.) (2009) Cologne Commentary on Space Law, Vol I. Cologne: Carl Heymanns Verlag, pp. 169–182.
Marchisio, S. (2018) 'Il Trattato sullo spazio: passato, presente e futuro', Rivista di diritto internazionale 1, pp. 205–213.
Martin, A.-S., & Freeland, S. (2020) 'The Advent of Artificial Intelligence in Space Activities: New Legal Challenges', Space Policy 55.
McFall-Johnsen, M. (2021) High-Speed Space Junk Risk Forces NASA Astronauts to Abandon Spacewalk, (accessed 31 October 2022).
Miller, G.D. (2019) 'Space Pirates, Geosynchronous Guerrillas, and Nonterrestrial Terrorists', Air and Space Power Journal, pp. 33–51.
Miller, M. (2022) UC engineers develop navigation to avoid collisions: UC's new system gets us closer to robots that can fix satellites or spacecraft in orbit, (accessed 13 November 2022).
Muhammad, A.N.
(2019) 'Revisiting U.S.–China Aggressive Use of Outer Space: A Comprehensive International Law Outlook Towards Military Activities in Outer Space', Indonesian Journal of International Law 16(4), pp. 473–503.
Mukherjee, S. (2021) Should we be worried about space debris? Scientists explain, (accessed 31 October 2022).
NASA (2009) The Collision of Iridium 33 and Cosmos 2251: The Shape of Things to Come, (accessed 31 October 2022).
Niewęgłowski, K. (2021) Space debris and obligations erga omnes – a legal framework for States' responsibility?, (accessed 8 November 2022).
Novak, Ž. (2022) Uporaba previdnostnega načela pri aktivnostih v vesolju, (accessed 7 January 2025).
Palmroth, M., Tapio, J., Soucek, A., et al. (2021) 'Toward Sustainable Use of Space: Economic, Technological, and Legal Perspective', Space Policy 57, pp. 6–12.
Patowary, K. (2020) Cosmos 954: The Nuke That Fell From Space, (accessed 26 October 2022).
Puttré, M. (2022) Satellites Are Likely Targets in the Next Major War, (accessed 4 November 2022).
Ramuš Cvetkovič, I. (2021) Space law as lex specialis to international law, (accessed 4 November 2022).
Ramuš Cvetkovič, I. (2023) 'Protisatelitski (ASAT) testi – kaj se skriva za masko prevencije?' in: Badalič, V. (ed.) (2023) Preventivna (ne)pravičnost: preprečevanje kriminalitete in družbene škode. Ljubljana: Inštitut za kriminologijo, pp. 139–153.
Ramuš Cvetkovič, I., & Drobnjak, M. (2023) 'As Above so Below: The Use of International Space Law as an Inspiration for Terrestrial AI Regulation to Maximize Harm Prevention' in: Završnik, A. & Simončič, K. (eds.) (2023) Artificial Intelligence, Social Harms and Human Rights. Springer, pp. 207–238.
Ribbelink, O. (2009) 'Article III' in: Hobe, S., et al. (eds.) (2009) Cologne Commentary on Space Law, Vol 1. Cologne: Carl Heymanns Verlag, pp. 64–69.
Ryan, R.G., Marais, E.A., Balhatchet, C.J., & Eastham, S.D. (2022) 'Impact of rocket launch and space debris air pollutant emissions on stratospheric ozone and global climate', Earth's Future 10, e2021EF002612.
Schmelzer, R. (2020) How Is AI Helping to Commercialize Space?, (accessed 14 November 2022).
Schmidt-Tedd, B., & Mick, S. (2009) 'Article VIII' in: Hobe, S., et al. (eds.) (2009) Cologne Commentary on Space Law, Vol 1. Cologne: Carl Heymanns Verlag, pp. 146–168.
Schrogl, K., & Neumann, J. (2009) 'Article IV' in: Hobe, S., et al. (eds.) (2009) Cologne Commentary on Space Law, Vol 1. Cologne: Carl Heymanns Verlag, pp. 78–93.
Sheer, A., & Li, S. (2019) 'Space Debris Mounting Global Menace Legal Issues Pertaining to Space Debris Removal: Ought to Revamp Existing Space Law Regime', Beijing Law Review 10, pp. 423–440.
Shultz, K. (2010) Operation Morning Light, (accessed 26 October 2022).
Stuart, J. (2015) Comment: Satellite industry must invest in cyber security, (accessed 4 November 2022).
Sundar, S., & Kim, J. (2019) Machine Heuristic: When We Trust Computers More than Humans with Our Personal Information, (accessed 15 January 2024).
Union of Concerned Scientists (2022) Satellite Database, (accessed 26 October 2022).
Viasat (2022) KA-SAT Network cyber attack overview, (accessed 4 November 2022).
Von der Dunk, F.G., & Goh, G.M. (2009) 'Article V' in: Hobe, S., et al. (eds.) (2009) Cologne Commentary on Space Law, Vol 1. Cologne: Carl Heymanns Verlag, pp. 94–102.
Weintz, S. (2015) Operation Morning Light: The Nuclear Satellite That Almost Decimated America, (accessed 27 October 2022).
Zander, F. (2022) What's the risk of being hit by falling space debris?, (accessed 20 January 2024).
Zedalis, R., & Wade, C. (1978) 'Anti-Satellite Weapons and the Outer Space Treaty of 1967', California Western International Law Journal 8, pp. 454–482.
Zekos, G.I.
(2022b) Advanced Artificial Intelligence and Robo-Justice. Cham: Springer.

© The Author(s) 2024
Scientific Article
DOI: 10.51940/2024.1.307-334
UDC: 004.8:17:342.7

Kristina Čufar*

AI Software/Hardware as Mind/Body Problem
Global Supply Chains, Shadow Workers, and Wasted Lives

Abstract

Artificial intelligence (AI) and other algorithm-based technologies have become part of everyday life over the last decade. While AI holds amazing potential and has already contributed positively to the human condition, it is also subject to fierce critique as it may, for example, reproduce bias and social injustices or increase dystopic forms of surveillance. While most scholarly, regulatory, and ethical debates focus on AI software-related issues, AI hardware receives far less attention. Understanding AI as software, as an artificial mind, highlights only the supposedly new and exciting aspects of this technology and ignores the human and material costs of its fabrication. This is consistent with the traditional mind-body dualism, which prioritises mind over body and thus skews our perception of the problem. To counter the dominant narratives, this article proposes a concept of AI as hardware/software to broaden the scope of ethical and legal issues that ought to be addressed through AI regulation. A holistic and systemic treatment of the AI phenomenon robs it of its perceived uniqueness. Once the worldwide extraction of materials, labour, and data necessary to set up AI machinery is seriously considered, AI stands out as yet another instance of colonial capitalism.

Key words: artificial intelligence, ethics, human rights, extractivism, colonialism.

* Research Associate, Institute of Criminology at the Faculty of Law Ljubljana; Assistant Professor, University of Ljubljana Faculty of Law: kristina.cufar@pf.uni-lj.si. The article was composed and accepted for publication in 2022 and it reflects the state of technological and regulatory frameworks at the time of writing.
Zbornik znanstvenih razprav – letnik LXXXIV, 2024 / Ljubljana Law Review, Vol. LXXXIV, 2024 • pp. 307–334 • ISSN 1854-3839 • eISSN 2464-0077

1. Introduction**

Artificial intelligence (AI) is the buzzword of the day. From advertisements and movie suggestions to police surveillance and healthcare, it seems that AI and other algorithm-based technologies have entered all spheres of human existence. This process is enveloped in a complex aura of dread and hope that mirrors the partial and confused understandings of what AI technology is and what it might become. In the popular imagination, as well as in scholarly and regulatory debates, AI software is the privileged object/subject of interest. AI software—or rather, its potential—excites human imagination much more than the material conditions that allow AI to exist and function. Excessive focus on AI (as) software thus distorts our understanding of contemporary technologies and their ethical and legal implications. In contrast, a holistic understanding of AI requires recognising that AI software cannot exist and function without hardware and that separating the two in ethical and regulatory debates obscures more than it illuminates. I propose a conception of AI as hardware/software to highlight the materiality of the phenomenon and allow for its thorough scrutiny. A materialist understanding of AI widens the scope of concern and brings the issue of the rights of inhabitants and environments of the Global South into the fold of AI ethics. Critical analysis of the disproportionate focus on software in the AI debates can benefit from reframing this problem as another instance of mind-body dualism.
This dualism has marked Western thought for centuries, and various strands of critical theory denounce mind-body dualism for prioritising mind over body and consequently contributing to an array of stereotypes and social hierarchies. The excessive focus on software, the artificial mind, has similar effects, as it often eclipses the physical realities sustaining it. Applying the critique of mind-body dualism to mainstream debates on AI thus provides a powerful prism for a systemic re-evaluation of techno-solutionist narratives concerning climate change and other pressing issues facing our planet. Furthermore, understanding AI as hardware/software erodes the hype surrounding AI's uniqueness. Treating the AI machine as a mere part of a more extensive technocapitalist apparatus is necessary to conduct a sober debate on the role of technology in the unfolding planetary drama. To provide a critical analysis of sidelining hardware issues and their consequences for (non)human beings and the environment, I depart from the right to life, understood as a right to a dignified life.1 AI weapon systems or fatalities on the streets where smart self-driving cars are tested, for example, are not the only instances where AI infringes on the right to life. The AI industry also encroaches on the right to life of people living in communities whose access to safe drinking water, moderately clean air, safe food, and other basic provisions is denied because of the extraction of resources, production facilities, or dumping grounds for machines that no longer serve us. Moreover, a dignified life denotes freedom from slavery and extreme exploitation.

To unpack the issues described above, it is necessary to understand the loose signifier AI. Accordingly, the article first briefly engages with various attempts to define AI technologies and proceeds with a short overview of existing regulatory strategies and their shortcomings (section 2). Defining and regulating a swiftly developing phenomenon focuses on differentiating it from all others: in the case of AI, the distinguishing factor is AI software capable of autonomous adjustments. The focus on software in regulatory debates sidelines concerns such as human rights violations and the destruction of the environment in the process of hardware production. A theoretical framework to explain this tendency, the prism of mind-body dualism and decolonial theory, is then provided (section 3). The argument that software issues eclipse a swarm of legal and ethical problems is illustrated by examples of the extraction of minerals, labour and data and the environmental implications of these practices (section 4). The conclusions bring different strands of the article together and propose that focusing on what makes AI the same as—rather than different from—all other technologies and consumer goods makes an important contribution to debates on AI (section 5).

Eroding the narrative of AI's uniqueness is one of the main focal points of this article. While research on AI-specific threats to human rights and other values is necessary and important, it is also important to consider the ways in which AI technology is entangled in the longstanding global system of extraction and consumption rooted in human rights violations. Before engaging with this argument, the following section addresses the elusive definition of AI.

** The article is based on research work of the author conducted at the Faculty of Law of the University of Ljubljana within the small basic research project titled Development and use of artificial intelligence in the light of negative and positive obligations of a State to ensure the right to life (J5-3107), co-financed by the Slovenian Research Agency.

1 As being alive is crucial for one's enjoyment of other human rights, the right to life has been interpreted as one of the most basic and all-encompassing human rights, despite the nominal absence of hierarchy among them. The boundaries of the right to life are porous and unclear, as is the case with any right. Nevertheless, dignified life implies that each human being should have access to basic provisions like drinking water, food, and shelter in an environment free from extreme pollution. See: Casey-Maslen & Heyns, 2021, pp. 11–15.

2. To Define is to Regulate

2.1. Defining AI: Mythology and Materiality

Scientific prose and regulatory interventions cannot escape the slippery terrain of notions that tend to denote, yet never capture, the essence of phenomena. Like many important concepts, including the very concept of intelligence, AI escapes a clear definition. AI is usually defined broadly, for example, as "the science and engineering of making intelligent machines, especially intelligent computer programs".2 Most discussions revolving around AI today focus on various forms of machine learning. Machine learning algorithmic tools automatically "learn" and adjust themselves over time without explicit human programming. Generally, AI systems are understood as machines somewhat similar to human intelligence,3 with human intelligence representing the yardstick in the field. Similarity to the human mind is recognised in the machine's ability to "learn" to identify patterns in the data and make predictions and decisions.
Critical scholars operating with a holistic understanding of the phenomenon warn that AI systems are neither artificial nor intelligent but embodied and profoundly political.4 AI might thus be considered a heavily mystified regime of truth based on knowledge extractivism and epistemic colonialism.5 Before engaging with these arguments, the AI concept needs to be further unpacked.

The flourishing of AI technologies in recent decades has been enabled by combining large amounts of data, sophisticated algorithms, and ever-rising computing power.6 More and more data are captured and extracted as technology proliferates. Algorithms, sets of instructions for computers to perform, require less and less pre-programming. Many of the ideas driving the development of AI systems today have been around for decades but could not be implemented due to a lack of computing power.7 Computing power has risen dramatically in recent decades, roughly doubling every two years.8 As computing power increases, computer chips are becoming smaller, and computer processing faster and faster. This perfect storm allowed for the AI spring we are currently experiencing.

Definitional open-endedness is one of the factors that hinders meaningful regulation of AI and could hardly be resolved in this article.9 The proposed conception of AI as hardware/software builds on the understanding of AI as embodied and thus aims towards a definition of AI that necessarily includes the mundane material aspects of the phenomenon. The definition of AI used in this article is rather broad: AI as hardware/software is not necessarily limited to machines that mimic neural networks but entails all contemporary technology necessary for the functioning and development of AI systems. While science fiction and news sensationalism contribute to utopic and dystopic ideas about AI's capabilities, most AI systems are not as intelligent as people think; in fact, a lot of work goes into hiding how "stupid" they are.
Different tricks, including paying people to pretend to be AI systems, are employed to maintain the illusion of machine autonomy.10 What goes under the name of AI is not intelligence in the sense of understanding but powerful statistical tools with a great capacity to perceive patterns in vast amounts of data. Accordingly, what exists at present may be referred to as weak or narrow AI systems that can perform complicated repetitive tasks that the machine was created to perform.11

Strong or general AI, or artificial general intelligence (AGI), does not exist (yet). The hypothetical AGI would amount to an artificial human mind capable of performing various tasks and understanding data. It would also have its own volition, reasons, and desires and would learn and develop like a human child. It is unclear when (and if) AGI will come to be, what it would actually be like, or what kind of consequences it would bring. Obsession with AGI or singularity, a rise of self-conscious, all-powerful, and incredibly intelligent machines, is a powerful myth that attracts a lot of attention, often at the expense of the problems AI technology is already causing for traditionally discriminated groups of the population.12 Fixation on building human-like AI is also pushing the industry to focus on developing tools that could replace human beings rather than developing AI tools that might complement and assist them.

2 McCarthy, 2007.
3 Scherer, 2016.
4 Crawford, 2021, pp. 7–9.
5 Joler & Pasquinelli, 2020.
6 Maclure, 2020.
7 Mitchell, 2019, pp. 27–66.
8 Shalf, 2020.
9 Buiten, 2019; Hoffmann & Hahn, 2020.
In practice, a collaboration between humans and AI technology is far more realistic and efficient since people are integral in training and explaining the workings of the machines, which can, in turn, assist humans with automatable tasks.13

Pushing the AGI mythology aside, its meagre approximation in the form of weak AI has become an integral part of our lives. AI systems are increasingly used in a broad spectrum of domains, from employment, education, healthcare, and welfare to warfare, from judiciary and law enforcement to advertisement and entertainment. Most AI today is developed for commercial reasons by corporate actors, and many issues associated with contemporary AI are rooted in increasing social inequalities and regulatory capture.14 While AI certainly has many exciting and valuable applications and potentials, it is but a human-made tool with human flaws: various instances of (intersectional) discrimination against women, people of colour, queer people, people with disabilities, and other marginalised groups are regularly reported.15 Social media using AI tools to hook users and moderate content have found themselves at the heart of debates about democracy, freedom of expression, and users' mental health.16 All these—and other—controversies make the regulation of AI technologies a pressing issue. Regulation lagging behind the developments on the ground is not unique to AI, but given the speed and role of technological development, the issue of AI regulation seems extremely acute, as the following subsection sketches out.

10 Crawford, 2021, pp. 63–69.
11 Searle, 2009.
12 Crawford, 2016.
13 Wilson & Daugherty, 2018.
14 Bryson, 2020.
15 Myers West, Whittaker & Crawford, 2019; Whittaker et al., 2019.
16 Rouvroy, Berns & Carey-Libbrecht, 2013; Balkin, 2017.

2.2. Regulating AI: Profits and Lives

Discrimination and other potential fundamental rights violations are some of the key issues driving AI regulatory development. It seems that the user, the principal and fungible subject of technocapitalism, presupposes a particular type of human embodiment; in the case of Western AI, a white, cis, non-disabled, affluent Western man. People who do not fit the image of this prototype user often experience difficulties when interacting with AI systems.17 AI developed by Chinese companies faces similar criticism of perpetuating gender stereotypes and race profiling of ethnic minorities.18 The prototype embodiment coincides with the identity parameters of those developing AI systems in their image, at the expense of other groups and other epistemologies.19 AI bias is a complex social issue rooted in the historical bias of software developers and people processing the data, non-representative and problematic datasets, and algorithmic bias instilled in the machine.20

AI bias is a pressing concern, as it threatens to reproduce and cement many of the existing injustices within our societies. AI bias and other risks, such as privacy concerns, the potential of AI for nudging and manipulating people, and threats to safety and security, contribute to the race to regulate AI amongst the most powerful actors in the field.21 Nevertheless, relatively few legislative efforts have been made thus far to regulate AI, and arguments that regulation will stifle development and arrest progress carry (too) much weight. The European Union (EU), unlike the United States (US) and China, is not home to important "Big Tech" corporations developing and marketing AI systems. The EU has a plan, though: it aims to become a significant player in the AI industry through regulation and governance.
Based on its soft-law guidelines addressing AI issues, the EU Commission proposed the so-called "AI Act", a mixture of instruments aiming to boost the development of the AI industry in the EU and instruments aspiring to address fundamental rights concerns.22 The final shape of this regulation, expected to be enacted in 2023, remains unclear, but it is a revolutionary step in the hard-law regulation of AI. Critics warn that the proposed Act repeats the EU's colonialist attitudes and that the balance between economic goals and ethical principles is fragile and likely to favour economic and security concerns.23 The human rights and the environment the EU is supposedly so eager to protect are those of its citizens-users, as the Act completely ignores the rights and environments harmed in the pre-stages of producing the final—high-value—AI products.

The EU is not alone in its zeal to comprehensively regulate AI; China is another trailblazer in AI regulation and development.24 If hard law regulating AI technologies is in its embryonic stages, the situation is starkly different when it comes to soft-law instruments. Here, the EU and China are far from the only actors: guidelines for ethical AI are mushrooming and being developed by nongovernmental organisations (NGOs), corporations, governments, transnational organisations, academic institutions, and others.25 Ethical guidelines have limited scope and differ depending on who is drafting them. Nevertheless, buzzwords like transparency, explainability, non-discrimination, safety, privacy, accountability, oversight, humans in the loop, and societal and environmental wellbeing consistently arise.

17 Buolamwini & Gebru, 2018; Shabbar, 2018.
18 See, e.g., Zhang, 2021; Mozur, 2019.
19 Abdilla et al., 2021.
20 Joler & Pasquinelli, 2020.
21 Smuha, 2021.
22 Veale & Borgesius, 2021.
AI ethical guidelines are mostly developed by actors in the Global North and predominantly focus on the possibility of ethical AI software, while the ethical pitfalls of AI's material dimensions remain largely overlooked.26 When discussing workers' rights, for example, the software threats to workers in the Global North are usually considered—for example, privacy, surveillance, or job loss due to automation.27 The debates on environmental wellbeing likewise risk overemphasising the software, for example, the enormous amounts of electricity needed to train machine learning algorithms.28 Thus, many important ethical issues remain overlooked in regulatory attempts.

Since the Global South is indispensable in the genesis of AI in the very material sense of providing cheap labour and raw materials, decolonial scholars are well aware of the destruction left in the wake of the digitalisation of global economies. Some examples of this destruction recorded in their work are discussed in the fourth section of this article. Decolonial scholars are vocal in assessing the indifference of leading AI designers and regulators to human and nonhuman life in the Global South. Yet, their work, just like the issues of the Global South, often remains overlooked. It must be stressed that the Global North versus Global South terminology presents yet another deceiving and oversimplifying binary that demands some unpacking. Both Global North and South are heterogeneous. Nevertheless, as political and economic power largely remains concentrated in the countries of the Global North, the Global South remains exploited, marginalised, racialised, and overlooked.

23 Carmel & Paul, 2022.
24 Wu, 2022; Roberts et al., 2021.
25 Jobin, Ienca & Vayena, 2019.
26 Ricaurte, 2022; Crawford, 2021, pp. 223–227.
27 Cf. Rodrigues, 2020.
28 Cf. Strubell, Ganesh & McCallum, 2019.
That said, it is essential to avoid replicating the stale image of the underdeveloped poor South versus the rich, injustice-free North. When using this dualist distinction, it is imperative to be aware of its limitations and stress the overwhelming complexity of the situation worldwide, and the existence of economic Souths in the geographical North and vice versa.29 The North-South distinction is nonetheless helpful in the context of contemporary or late capitalism, neoliberalism, or technocapitalism—or whatever one wishes to call it.30 I employ North-South terminology to stress that the current global economic and political system cannot function without extractivism and othering, or, in other words, cannot operate without the good old colonialist and patriarchal patterns. This reality is reflected in regulatory and ethics debates surrounding AI technologies. Contemporary capitalism is a system built upon and dependent on endless economic growth and consumption, reducing people and the environment to expendable resources.31 To entice the consumer in the Global North with the myths of clean and green technology, for example, the economic Souths must be kept far from view and discussion. Omitting the role and backstory of hardware in the everyday glorification of AI software is thus vital for the patterns of domination and extraction to remain undisturbed.
Our turbulent time, designated by Achille Mbembe as a time of planetary entanglement of fast capitalism, soft power warfare, and overflow of computational technologies, is not without history.32 Capitalism as a political and economic system could not come to be and function without colonialism—the European occupation and exploitation of the globe that began in the fifteenth century.33 As Walter Mignolo argues, Western modernity is unimaginable without coloniality, an intricate matrix of power that snakes from the Renaissance and Enlightenment to contemporary neoliberalism.34 As the system of dispossession and unequal redistribution of costs and profits continues in capitalism's techno-reincarnation, its racist and patriarchal underpinnings remain firmly in place. Moreover, technological advances, including the latest blossoming of AI research and industry, contribute to ever-intensified and accelerated connections, redistributions of power, and incipience of new fantastical myths. The bond between colonialism and capitalism thus remains central to understanding the role and implications of AI technologies in the global landscape. Through centuries, capitalism has morphed and transformed, just like colonialism has; yet, the two remain essentially interwoven, mutually dependent, and co-constitutive.

The logic of coloniality is apparent in the debates on AI regulation, which mostly ignore the actual costs and effects of contemporary technologies. Today's Other tends to elude our view just like the Enlightenment's Other, who disappeared from lofty debates about the rights of men, natural equality, and freedom.35 When it comes to the production of AI hardware, the communities and environments of the Global South are too often perceived as passive repositories of resources, as relatively inconsequential in the quest for ethical human-centred AI. If we seriously consider that the distinction between software and hardware is, to a large extent, artificial and obscuring, we might realise that some of the most pressing regulatory and ethical issues related to AI are not novel at all.

When considering the artificiality of the hardware-software distinction, it is imperative to keep in mind that the Global South is not just a synonym for hardware production but is also crucial in AI software development. Just like body and mind, hardware and software are not two separate eventualities. Since AI software receives a lot of attention, this article focuses on the hardware to highlight issues that too often remain in the background.

29 Png, 2022.
30 It is not my intention to engage in a profound analysis of naming the present stage of global capitalism: this article engages with capitalism in the broadest sense of the word, that is, capitalism as a political and economic order and ideology. As AI technology and its implications for human (and other) rights are at the forefront of the discussion, technocapitalism is especially fitting. Technocapitalism is Suarez-Villa's denominator for contemporary capitalism in which technology and science facilitate a range of transformations of (corporate) power. See: Suarez-Villa, 2009, pp. 1–7. Neoliberalism as the signifier of contemporary capitalist practices is also fitting. Neoliberalism is defined by Harvey as a political and economic theory and practice that promotes entrepreneurial freedom, property rights, individual liberty, free trade, and free markets as the modes of advancing human well-being. See: Harvey, 2007.
31 Jackson, 2021, pp. 1–161.
32 Mbembe, 2019, pp. 93–116.
33 Bhambra, 2021.
34 Mignolo, 2011, pp. 1–26.
The intention is not to present hardware issues as more pressing and consequential but to illuminate precisely the fact that software and hardware ought to be contemplated in conjunction. Therefore, my attempt to shift the focus from software to hardware also illustrates that such an exercise is, ironically, impossible, as the two perpetually intertwine. A meaningful debate on AI (ethics and regulation) must consider AI technology in its entirety or risk losing a vital piece of the puzzle in understanding how and why AI might positively contribute to life on planet Earth, as well as how AI harms all life on the planet and jeopardises human rights, including the very right to life. Before engaging with the interplay of rights and AI systems' lifecycles, the following section expands on the theoretical framework of mind-body dualism crucial for illuminating the preference for software in many AI debates.

3. Mind-Body Problem

In the Western tradition, the body is perceived as the passive temple or even the prison of the active mind and has accordingly enjoyed a lower status in the onto-epistemological hierarchy. Mind-body dualism was firmly established in the Age of Enlightenment and cemented through centuries with serious consequences for marginalised groups of the population.

35 Robertson, 2005; Carey & Festa, 2009.
Women, colonised people, people of colour, and others identified with the body and nature were long perceived as part of the material universe, that is, as passive, incapable of rational thought and institution-building, and were denied access to education and political participation.36 Such onto-epistemic orientations and classifications of people were used to justify colonialism, cultural genocide, oppression of women, and racialised slavery, and represent the foundations of modernity and global capitalism.37 The great minds of the Enlightenment imagined the privileged subject of knowledge and power in their own image: a white, affluent, educated man identified with reason, creativity, curiosity, invention, entrepreneurship, and so on. Nowadays, along these lines, the Global North, enchanted with the service economy and techno-solutionism, quickly identifies AI software, the artificial mind, with AI as a whole.

AI systems are constantly presented as artificial minds and continuously discussed in separation from AI hardware, the machine's body. However, just like traditional mind-body dualism, the software-hardware binary distorts our understanding of the phenomena and prioritises certain issues over others. Topics like AI replacing human workers, AI surveillance, privacy concerns, and discriminating AI systems are significant issues. And yet, issues like widespread destruction of the environment, displacement and impoverishment of communities, and child labour are paramount as well. Nevertheless, as they pertain to the materiality of the machine, they lack the aura of exciting novelty associated with AI technologies. Furthermore, these issues are not unique to AI technology but are the bitter leitmotif of global capitalism.
While different initiatives to address human rights abuses in global supply chains exist, they address only the symptoms of an inherently problematic system, are fraught with issues, and are often inefficient.38 AI technologies, as a part of the global political and economic regime, are entangled in the longstanding bricolage of inequalities and injustices that define global capitalism.

Despite the persistent mystification of AI as an intangible process, AI is very much embodied. AI is an assemblage of actions, interactions, relationships, matter, knowledge, and power. The much-celebrated digitalisation of economies is unimaginable without extractivism—the forceful removal of raw materials and life from the earth's surface, extraction of labour needed to produce electronic devices, and extraction of personal data performed in turn by these devices.39 Individual AI systems' supply chains are estimated to include tens of thousands of suppliers in over a hundred countries and take years to approximately disentangle.40 Even companies whose business model is built around ethically sourced and produced technological products can hardly guarantee more than "aiming to work towards responsible natural resource management."41 Therefore, it is, put mildly, challenging to ensure that the machines facilitating our relationships with AI software are ethical and free from contaminants like child labour, forced labour, conflict, destruction of habitats, and displacement. Furthermore, the AI industry contributes its fair share to global climate change, which represents another significant threat to human rights and the rights of other inhabitants of the planet.

36 See, e.g., Bray & Colebrook, 1998; Jenkins, 2005.
37 Walsh & Mignolo, 2018, pp. 177–210.
38 Alamgir & Banerjee, 2019; Anner, 2020.
39 Mezzadra & Neilson, 2017.
The full-scale environmental impacts of AI technologies and their contributions to climate change are seldom considered.42 The environmental burdens caused by the AI industry are unequally distributed between the Global North and South, as well as between economic Norths and Souths.43 The poor, marginalised, and racialised communities worldwide consume the least but are more adversely affected by the degradation of the environment and climate change-related weather events and are more likely to struggle to access basic provisions such as clean drinking water.44 The story of how the AI bodies/objects come to be, what it takes for these machines to operate, and what happens to them when they no longer serve us remains clouded by user ignorance and indifference. Nevertheless, this backstage process is crucial for understanding how AI intertwines with the present, future, and rights of human and nonhuman beings around the globe. The following section is composed of just a few examples that illustrate the wide array of ethical and legal issues that arise throughout the lifecycle of an AI system.

4. (Im)material AI?

4.1. Inception: Sweat and Minerals

AI, as we know it, would be impossible without an array of metals, minerals, and rare earth elements. Deposits of critical raw materials are scattered all over the globe. They are often subject to intense (geo)political frictions and competition between the traditional global economic powers of the Global North and those on the rise, most notably China.45 The electricity-powered digital economy, with AI at its centre, is propagated as a pathway to a sustainable future, prompting both nation-states and corporations to entertain ideas such as space mining to ensure the materials needed to enact this vision.46 Perhaps even more imminent is the desire for large-scale deep-seabed mining, which will bring about unimaginable consequences for the little-understood ecosystems of the deep seas and the planet in general.47 Nevertheless, the traditional forms of mineral extraction remain the norm across the world, from the lithium triangle in Bolivia, Chile, and Argentina to mass-scale production of rare-earth metals in China, from the goldmines in Australia and the USA to zinc mining in India. From the perspective of the Global South, streams of minerals and data flowing to the Global North are often unilateral: pouring from economically and politically weaker countries to those more powerful.

Extraction of materials like copper, gold, silver, aluminium, nickel, manganese, graphite, lithium, cobalt, europium, terbium, and many others composing AI and other hardware takes place around the planet, often in politically and economically fragile countries. Large-scale mining is conducted chiefly by transnational corporations and does not economically benefit the communities residing in the mining areas. To survive, these communities are often forced to engage in extremely dangerous small-scale artisanal mining in the proximity of official mines. Due to its unofficial character, criminal groups often abuse artisanal mining, which is thus associated with conflict, violence, and exploitation.48 Whether artisanal or corporate, extraction of minerals is perilous for human health, devastating for ecosystems, and water-intensive, contributing to wide-scale pollution and water scarcity.49 As individual devices are composed of a vast array of chemical elements extracted worldwide, the following lines provide only an illustrative example of cobalt extraction in the Democratic Republic of the Congo (DRC).

40 Crawford & Joler, 2018.
41 Fairphone, 2022.
42 Mulligan & Elaluf-Calderwood, 2022.
43 Islam & Winkel, 2017.
44 Bell, 2019.
45 Kalantzakos, 2019.
Climate change prompted demands for the abandonment of fossil fuels, yet the world order is organised around extreme consumption by privileged consumers, mostly residing in the Global North. This type of consumer wants it all: the comforts and vices of a privileged consumerist lifestyle, clean air, and green spaces in their immediate surroundings. This context is ripe for a greenwashing campaign presenting electricity as an ecologically friendly alternative to oil and coal, despite the fact that coal remains the dominant fuel used in global electricity production.50 Moreover, electricity is not only problematic because it is often produced with a high carbon footprint; the issue of electricity storage is also highly contentious. The demand for rechargeable and relatively short-lived lithium-ion batteries is growing, and their production is impossible without minerals whose extraction poses several ethical and legal issues.51

Cobalt, along with lithium, is one of the most notorious elements involved in this process, and its extraction is rapidly increasing. Cobalt, found in the battery of every (smart) device, is considered a critical raw mineral crucial in the transition to electricity-powered societies. The DRC and Zambia, the so-called Copperbelt, are home to the world's largest cobalt deposits. The DRC, a former Belgian colony, has a long history of extraction of copper, cobalt, and uranium for export.

46 Gilbert, 2021.
47 Levin, Amon & Lily, 2020.
48 Kaufmann & Côte, 2021.
49 Peña & Tapia, 2020.
50 International Energy Agency, 2022.
51 Crundwell, du Preez & Knights, 2020.
Today, the DRC produces almost 70% of the world's cobalt, 20–30% of which is extracted in artisanal mines.52 Human rights abuses in cobalt mining in the DRC were brought into the limelight by the 2016 Amnesty International report53 and the unsuccessful 2019 class-action lawsuit against Tesla, Apple, Google, and Microsoft, filed in the USA by the families of children killed or injured while mining cobalt.54
Harsh working conditions, child labour, and forced labour in artisanal mines contributed to the big mining companies' formalisation of unofficial mining operations. These moves, however, led to novel forms of dispossession and exploitation and did not provide safety for the miners.55 Cobalt extraction is not only problematic from the perspective of the exploitation of official and unofficial workers, widespread corruption, and conflict risks; it also causes widespread environmental contamination. The health of those residing near cobalt mines is severely affected, and the rates of congenital disorders are alarmingly high.56 As in the days of Belgian colonisation, cobalt extracted in the DRC allows for the bare survival of local communities, who bear the poisonous costs of the North's green transition, while the added value of the mineral is cashed in by corporations based in countries like the USA and China. The issues entangling cobalt production and the division of costs and profits of these operations are not unique to the DRC. Around the globe, communities are exploited, displaced, and harmed by the mining operations that make AI technology possible.
4.2. Flux: Voyages and Transformations
Once raw materials are extracted, they travel to the many production facilities where they are turned into diverse components, which travel to yet another set of production facilities where machines are constructed. Finished devices take another journey to reach their users, and, once disposed of, they take their final voyage.
This simplified description captures the essence of contemporary supply chains—where a single product repeatedly travels by sea, air, and land and encompasses the labour of thousands.
52 Calvão, Mcdonald & Bolay, 2021; Gulley, 2022.
53 Amnesty International, 2016.
54 Mining.com, 2021.
55 Calvão, Mcdonald & Bolay, 2021.
56 Van Brusselen et al., 2020.
The transport involved in the supply chains heavily contributes to climate change and is simultaneously threatened by the increasing frequency and ferocity of extreme weather events.57 Much transportation is carried out by ships using dizzying quantities of low-grade fuel, polluting the air and the oceans and contributing to an estimated 60,000 deaths worldwide.58 Millions of standardised containers roaming the globe represent the basic building blocks of the global capitalist economy. Hundreds of shipping containers are lost at sea every year, and the World Shipping Council reports a dramatic increase in lost containers in the years 2020 and 2021 due to weather events.59 Many of these containers emit toxins and litter seabeds and seashores.
Furthermore, seafaring is a highly hazardous occupation. Workers employed in the shipping industry spend long periods in relative isolation, are vulnerable to a high risk of (fatal) injury and physical and psychological illness, and are exposed to carcinogenic and other toxic materials.60 The shipping industry is involved in all spheres of consumption; it is not essential only to the manufacturing of AI hardware. Nevertheless, since AI technology has yet to assist in producing self-driving, self-loading, and ecologically friendly means of transportation, its development hinges on these harmful practices. Let us dwell on the workers' health, well-being, and survival for a moment longer.
Psychological distress and high suicide rates among seafarers are not isolated phenomena; taking one's own life might even be a radical means of protest against exploitation. A series of suicides by jumping among young migrant workers at Foxconn factories in Shenzhen, China, occurred between 2010 and 2011. In China, the Foxconn suicides were followed by a broader wave of worker suicides, as well as a public debate on labour conditions and factory management in the country.61 Meagrely paid workers producing Apple and other devices described illegally long working hours, abuse, discrimination, and the failure to report work-related accidents. The suicides highlighted the inequalities (re)produced in China's neoliberal economic blossoming and the state's complicity in this process.62 In response to accusations that its operation is basically a labour camp, Foxconn installed anti-jumping nets63 and included no-suicide clauses in the workers' contracts.64 The worker suicides shocked the world and (momentarily) brought some attention to the exploitation of labour in the production of electronic devices.
57 Ghadge, Wurtmann & Seuring, 2020.
58 Crawford & Joler, 2018.
59 World Shipping Council, 2022.
60 Bloor, Thomas & Lane, 2000.
61 Lin, Lin & Tseng, 2016.
62 Pun & Koo, 2015.
63 Ye, 2010.
64 Lee, 2011.
The reader has undoubtedly already realised that the story of the Foxconn suicides is meant to illustrate a much broader issue. Cheap factory labour is another prerequisite of AI technology, since individual users, corporations, and research facilities demand sophisticated hardware at affordable prices. Despite alarmist discourse on robots and AI replacing human workers, cheap human labour seems to be, for the time being at least, essential in creating the machine.
4.3.
Data: Ghosts and Clouds
While data might, at first glance, appear abstract and immaterial, it is a product and a resource driving the accumulation of capital by powerful actors in technocapitalist societies. Data extraction is another form of raw material extraction that involves dispossession, asymmetries of power, and colonialist techniques. Diverse, often vulnerable populations around the globe—for example, users of social networks, workers in Amazon's fulfilment centres, people with criminal records—are at the forefront of data extraction that does not benefit them and might, in fact, adversely affect their well-being.65 Users of seemingly free technological products are not compensated for the time and data that are essential for the functioning of Big Tech as we know it. Furthermore, the data-centric rationality at the heart of AI ideology also has colonial-flavoured epistemic dimensions, imposing dominant epistemological positions as universal modes of knowing at the expense of others.66 Data is, moreover, very material: it must be stored in physical locations and processed by human beings to serve its assigned role in the system.
More and more data are stored and processed in the cloud. Despite its ethereal name, cloud computing implies massive data centres—factories offering on-demand paid delivery of information technology resources such as computing power, data storage, processing, and distribution on remote computers. The transition to cloud computing is a transition towards the centralisation and commodification of an internet that was once imagined as free and decentralised cyberspace.67 Cloud computing also raises issues connected with data security and the surveillance of technology users.68 Moreover, data centres worldwide consume vast quantities of electricity to function and water to cool their numerous computers. Clouds can thus put public infrastructure and the environment under strain.
Furthermore, cloud computing, essential for contemporary AI and digital technologies, does not burden the environment only through its consumption of resources. A 2020 Greenpeace report details the role of cloud computing and AI tools offered by Google, Microsoft, and Amazon in facilitating and optimising the discovery, extraction, distribution, refining, and marketing of oil and gas.69 As such, cloud computing sits at the intersection of several ethical preoccupations concerning data, natural resources, and labour extraction.
65 Crawford, 2021, pp. 89–121; Delfanti & Frey, 2021.
66 Ricaurte, 2019.
67 Mosco, 2016.
68 Rachana et al., 2017.
69 Greenpeace, 2020.
Data is crucial for AI systems to "learn." Yet some tasks resist automation, and machine learning is not as spontaneous as it is made out to be. For the most part, machine learning and deep learning are supervised, meaning that human agents must label the datasets used in the process in advance, adjust learning parameters, and so on. All internet users get to participate in this process, for instance by improving AI machine vision each time we are asked to prove our humanity by clicking the correct images in Google's reCAPTCHA.70 Yet most of this work—and other work crucial for the development and functioning of AI systems—is performed by click-workers who remain invisible to the ordinary user. These "ghost workers" are often employed through crowd-work platforms and meagrely paid by the click.71 This is best exemplified by the cynical irony of Amazon's Mechanical Turk, a crowdsourcing marketplace where such a fragmented, precarious workforce can be outsourced. Mechanical Turk mimics AI by delegating micro-work—such as labelling, checking, assessing, and correcting machine-learning processes—to human workers around the globe.
The very name of the platform originates from an eighteenth-century chess-playing automaton built to impress the Habsburg Empress Maria Theresa.72 While the device appeared autonomous, it was actually just a casing hiding the human being operating it, creating an illusion of an intelligent machine. Platforms like Mechanical Turk have created a global digital on-demand workforce working on personal devices in homes or internet cafés. This hyper-flexible precariat reflects the colonialist and patriarchal structures at the heart of AI development, as many click-workers reside in the Global South.73 Furthermore, many click-workers are women who struggle to find more traditional forms of employment because of their role as caretakers.74 The relative invisibility of these shadow or ghost workers, predominantly members of vulnerable population groups, is once again veiled by the mythology of self-learning that pumps much of the AI-related hype.
People who are more likely to be excluded from the creative and visible jobs in AI software design due to their economic status, place of birth/residence, race, gender, and other (intersections of) markers of oppression are not only more likely to perform invisible labour but also more likely to be the subjects of experimentation with newly developed AI systems. The hype surrounding AI allows tech companies to test their products on the general public around the world.75
70 Lung, 2012.
71 Gray & Suri, 2019, pp. ix–xxxi.
72 Aytes, 2013.
73 Soriano, Cabalquinto & Panaligan, 2021.
74 Altenried, 2020.
75 Stilgoe, 2018; Wolf, Miller & Grodzinsky, 2017.
Yet again, some groups—namely, those residing in the Global South and the economic Souths of the Global North—are more vulnerable to the ethics dumping involved in AI systems' beta testing.
For instance, the infamous Cambridge Analytica software was beta tested in Nigerian and Kenyan elections before it was used in the United Kingdom (UK) and the USA, and New Zealand tested its predictive welfare algorithms on the Māori population.76 This short overview of data extraction, processing, and epistemology illustrates the internal contradictions destabilising mind-body dualism and its second coming in the software-hardware distinction. Data, like many key concepts of contemporary technologies, has been built up as the intangible new oil, despite the fact that it rests on the 'old' oil and the human bodies that make it intelligible.
4.4. Necropolitics: E-waste and Wasted Lives
As digitalisation advances, the lifespan of electronic devices is becoming shorter and shorter, while the demand for such devices is snowballing around the world. E-waste, an umbrella term for various discarded electronic equipment, is therefore a growing challenge. It is estimated that humanity produces e-waste equivalent in weight to around 5,000 Eiffel Towers every year, which makes e-waste an environmental and health concern of epic proportions.77 Simultaneously, one person's trash is another's treasure: e-waste recycling is an expanding multi-billion global industry.78 Trash and treasure are liminal concepts in this context, as their disentanglement involves confronting an array of complexities. E-waste is essentially a bundle of plastics, gold, silver, copper, aluminium, platinum, nickel, chromium, zinc, mercury, beryllium, lead, and many other elements. Only an estimated 17% of this waste is properly collected and formally recycled.79 The fate of the remaining global e-waste is unclear, probably decided outside the official collection systems.
A portion of this e-waste is illegally shipped to and informally recycled in Africa and Asia, using methods like open burning and acid stripping of metals, which release an array of toxins.80
The way e- and other waste is handled today is, in part, connected with the environmental justice struggles that emerged in the Global North in the 1970s and 80s and inadvertently contributed to the exportation of hazardous waste to the Global South.81 Recycling is usually understood as a positive practice that magically annihilates the negative contributions of hyper-consumption, yet the grim reality of (e-waste) recycling paints a less romantic picture.
76 Mohamed, Png & Isaac, 2020.
77 Parajuly et al., 2019.
78 Kaza et al., 2018.
79 Forti et al., 2020.
80 Rautela et al., 2021.
81 Little, 2021, pp. 16–21.
One of the infamous examples of the dark side of e-waste recycling is Agbogbloshie, a scrapyard with an adjoining informal settlement in Accra, Ghana. This formerly sacred place and green space for residents has gradually transformed into what is often depicted as a toxic high-tech hellscape,82 where an egg exceeds the European Food Safety Authority limits for the daily intake of chlorinated dioxins by 220-fold.83 Agbogbloshie has attracted the attention of media, photographers, researchers, and NGOs, culminating in research fatigue among its workers and residents.84 Most of the e-waste in this scrapyard originates in the EU and the USA, while some of it is created in Ghana and other African countries.85 Reducing the site to a dead zone and a dead-end of green narratives would flatten the complexity of activities, relationships, and struggles that define Agbogbloshie. The extraction of copper and aluminium from e-waste and the refurbishing of discarded digital devices for further use are important economic activities weaving the complex social fabric.
Yet Agbogbloshie residents are undoubtedly burdened by the personal and ecological costs of the unsustainable habits of people residing in the Global North.
Agbogbloshie and other e-waste dumps function as powerful illustrations of a social, political, and economic system that favours software over hardware, new over old, central over peripheral, rich over poor, and capital over labour. In this system, the emergence of countless e-dumps is unavoidable. Still, as long as they remain out of sight of the privileged populations, the e-dumps remain largely ignored. The throw-away culture at the heart of our economic model and its concept of economic progress is not only creating e- and other waste but is also persistently expanding the wastelands that make human lives increasingly difficult and put them at risk. In this process of displacement, not only are discarded items produced, but also countless "wasted lives" or "human waste", to borrow Zygmunt Bauman's term for human beings deemed excessive, redundant, and threatening in the prevailing model of economic progress and modernisation.86 The e-dumps thus symbolise not only the pivotal point where the lifecycle of one machine ends to discharge materials for a new one but also the wasted lives, stolen childhoods, opportunities, and living spaces of those unable to afford the latest electronic devices.
These wasted lives inspire dread in the Global North: no wonder the EU, with all its talk about ethical and human-centred AI, feels little reservation when protecting its "smart borders" with invasive AI technologies targeting third-country nationals.87 Again, the human being around which technology and rights are built is the EU citizen, a user
82 Little & Akese, 2019.
83 Petrlik et al., 2019.
84 Akese, 2020.
85 Little & Akese, 2019.
86 Bauman, 2013, pp. 1–41.
87 Jo Pesch, Dimitrova & Boehm, 2022; Broeders & Hampshire, 2013.
and consumer in need of protection not only from invasive AI technologies but also from human waste in the form of desolate migrants fleeing poverty, despair, drought, floods, toxicity, and other by-products of the capitalist system. This human waste is another class of subjectivity, whose rights weigh less than those of the users for whom EU legislation is drafted. In late capitalism, as before, colonial sovereignty encompasses the power to define who matters and who is disposable.88 Thus, an e-dump is far from the final chapter of an AI hardware's lifespan; it is but a repetition of the omnipresent re-establishment of borders, a site of what Mbembe terms "necropolitics": the drawing of the border between humans who get to live and those designated to social death, an expulsion from humanity and its rights.89
The concrete examples discussed above are far too few to highlight the full scale of the global destruction necessary to support the AI industry as we know it. Furthermore, the above descriptions are too loose and general to expose the full range of human and non-human beings affected and the full gravity and complexity of their stories. Nevertheless, these partial stories indicate that humans are all too present in the AI loop and that the related ethical issues cannot be ignored or excluded from the AI regulation debate, even if they are not AI (software) issues stricto sensu.
5. Conclusions
The systemic critique of the excessive focus on AI software in scholarly and regulatory debates carried out in this article strategically shifts the focus to AI hardware. At first glance, the software-hardware distinction appears to be a simple and logical epistemic binary, dividing programming and mechanical engineering. Yet there is a political dimension to this dualism: it allows us to ignore the continuation of the colonial patterns that define capitalism as a political and economic system.
Technocapitalism and its obsession with data, the service economy, and digitalisation are no exceptions: "postindustrial" societies rely on extractivism and the industrialisation of the peripheries. The prioritisation of software over hardware thus reflects a system prioritising capital over labour and (surveilled and exploited) users over (surveilled and exploited) producers.
AI hardware is not essentially different from computer hardware without an "intelligent" dimension. Furthermore, from the point of view of hardware production, transportation, and waste management, AI is not essentially different from all the other objects circulating in the global economy, like clothing, food, furniture, or toys. The exploitation of the Global South for the profits created in the Global North is a longstanding process and the foundation of the global capitalist political and economic order. Human rights, including the right to life, are all too often side-lined in relation to capital expansion and economic growth.
88 Mbembe, 2019, pp. 78–83.
89 Ibid.
Since so much debate focuses on what makes AI technology special and different from all other phenomena, this article highlights what makes AI technologies painfully familiar. Rejecting the glorification of AI software entails understanding AI as a paradigm of technocapitalism and inadvertently broadening its definition beyond advanced statistical models involving some kind of machine "learning" or "training" ability.
Despite its focus on hardware, this article does not claim that hardware issues are more important than software issues. The attempt to overturn the binary—that is, to isolate and highlight the hardware aspect, if only to demonstrate that it has been side-lined and devalued vis-à-vis software—is self-deconstructing from the get-go.
Neither software nor hardware can be treated in isolation, nor can either of these aspects be considered more important in defining AI. The examples provided throughout this article illustrate precisely the hopeless entanglement of the human rights issues that define AI as hardware/software. What the overturning of the binary achieves is precisely what feminist, critical race, and decolonial critics of mind-body onto-epistemic dualism continuously assert. Identifying the privileged pole of a hierarchical binary (mind, software) with that which is creative, interesting, and thus worthy of attention erases and depreciates the opposite pole (body, hardware). The resulting injury is multi-dimensional. First, it creates an illusion that the separation of the two poles is possible and simple, while the dualism is always somewhat artificial, as its two poles endlessly contaminate one another. Second, the devalued pole (body, hardware) is systematically ignored as the passive prerequisite of the active and creative pole (mind, software). In the case of AI, this means that AI software and its creators—the computer software engineers and tech entrepreneurs—are celebrated as creative, revolutionary explorers of uncharted lands. The harm caused by the hardware industry and the contribution of the human beings who perform non-programming labour are subsequently erased from the majority of AI discussions. The dualistic perception of software and hardware thus enables a privileged and highly homogenous group of human beings to reap enormous rewards for what is essentially a common undertaking, while huge costs are borne by the planet and all its inhabitants. In other words, AI mythologies forget about the hardware, the body, treating it as a given: a necessary but passive and taken-for-granted machine that hosts the active and amazing mind, the software. AI understood as software/hardware, on the other hand, robs the AI phenomenon of its exceptionality.
Instead, approaching AI as hardware/software places AI in the broader context of contemporary technology and the even broader context of the hyper-consumerism fuelling the global economy. What is lost through this operation is the hype, and what is gained is a more sober reckoning with the challenges of tomorrow.
AI as software/hardware invites the consideration that the threats to fundamental rights caused by AI software are hopelessly entangled with those posed by AI hardware. Transparent and fair AI cannot be a product of colonial displacement, dispossession, and ecocide. Instead of endless proliferation, a sustainable AI industry mindful of human rights inescapably implies fewer and more expensive, repairable, and long-lasting technological products. Technological interventions should be guided not by corporate greed and the (perceived) desires of privileged users but by the actual needs of humanity as a whole, with a sensibility for the needs of nonhuman entities. The speed of development of new technologies and beta testing should be slowed down and subjected to peer review, rigorous scientific ethics, and public regulation. Sustainable and human-centred technology also requires rethinking techno-solutionist narratives, which suggest that technology can solve our problems without sacrificing the privilege and unsustainable way of life that we, the inhabitants of the Norths of this world, consider our entitlement.
References
Abdilla, A., Kelleher, M., Shaw, R., et al. (2021) Out of the Black Box: Indigenous Protocols for AI. United Nations Educational, Scientific and Cultural Organization (UNESCO).
Akese, G.A. (2020) 'Researching Agbogbloshie: A Reflection on Refusals in Fieldwork Encounters', Feministische Geo-RundMail 12(5), pp. 52–55.
Alamgir, F.
& Banerjee, S.B. (2019) 'Contested Compliance Regimes in Global Production Networks: Insights from the Bangladesh Garment Industry', Human Relations 72(2), pp. 272–297.
Altenried, M. (2020) 'The Platform as Factory: Crowdwork and the Hidden Labour behind Artificial Intelligence', Capital & Class 44(2), pp. 145–158.
Amnesty International (2016) This Is What We Die for: Human Rights Abuses in the Democratic Republic of the Congo Power the Global Trade in Cobalt. Amnesty International.
Anner, M. (2020) 'Squeezing Workers' Rights in Global Supply Chains: Purchasing Practices in the Bangladesh Garment Export Sector in Comparative Perspective', Review of International Political Economy 27(2), pp. 320–347.
Aytes, A. (2013) 'Return of the Crowds: Mechanical Turk and Neoliberal States of Exception', in: Scholz, T. (ed.) Digital Labor: The Internet as Playground and Factory. Routledge, pp. 79–97.
Balkin, J. (2017) 'Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation', UCDL Rev. 51, p. 1149.
Bauman, Z. (2013) Wasted Lives: Modernity and Its Outcasts. John Wiley & Sons.
Bell, L. (2019) 'Place, People and Processes in Waste Theory: A Global South Critique', Cultural Studies 33(1), pp. 98–121.
Bhambra, G.K. (2021) 'Colonial Global Economy: Towards a Theoretical Reorientation of Political Economy', Review of International Political Economy 28(2), pp. 307–322.
Bloor, M., Thomas, M. & Lane, T. (2000) 'Health Risks in the Global Shipping Industry: An Overview', Health, Risk & Society 2(3), pp. 329–340.
Bray, A. & Colebrook, C. (1998) 'The Haunted Flesh: Corporeal Feminism and the Politics of (Dis)Embodiment', Signs: Journal of Women in Culture and Society 24(1), pp. 35–67.
Broeders, D. & Hampshire, J. (2013) 'Dreaming of Seamless Borders: ICTs and the Pre-Emptive Governance of Mobility in Europe', Journal of Ethnic and Migration Studies 39(8), pp. 1201–1218.
Bryson, J.J. (2020) 'The Artificial Intelligence of the Ethics of Artificial Intelligence: An Introductory Overview for Law and Regulation', in: Dubber, M.D., Pasquale, F. & Das, S. (eds.) The Oxford Handbook of Ethics of AI. Oxford University Press, pp. 3–25.
Buiten, M.C. (2019) 'Towards Intelligent Regulation of Artificial Intelligence', European Journal of Risk Regulation 10(1), pp. 41–59.
Buolamwini, J. & Gebru, T. (2018) 'Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification', in: Conference on Fairness, Accountability and Transparency, 21 January 2018, pp. 77–91.
Calvão, F., Mcdonald, C.E.A. & Bolay, M. (2021) 'Cobalt Mining and the Corporate Outsourcing of Responsibility in the Democratic Republic of Congo', The Extractive Industries and Society 8(4), p. 100884.
Carey, D. & Festa, L. (2009) 'Some Answers to the Question: "What is Postcolonial Enlightenment?"', in: Carey, D. & Festa, L. (eds.) The Postcolonial Enlightenment: Eighteenth-Century Colonialism and Postcolonial Theory. Oxford, New York: Oxford University Press, pp. 1–34.
Carmel, E. & Paul, R. (2022) 'Peace and Prosperity for the Digital Age? The Colonial Political Economy of European AI Governance', IEEE Technology and Society Magazine 41(2), pp. 94–104.
Casey-Maslen, S. & Heyns, C. (2021) The Right to Life under International Law: An Interpretative Manual. Cambridge, New York, Melbourne, New Delhi, Singapore: Cambridge University Press.
Crawford, K. & Joler, V. (2018) Anatomy of an AI System: The Amazon Echo as an Anatomical Map of Human Labor, Data and Planetary Resources, (accessed 27 April 2021).
Crawford, K. (2016) Artificial Intelligence's White Guy Problem, The New York Times, 25 June, (accessed 8 September 2021).
Crawford, K. (2021) Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven, London: Yale University Press.
Delfanti, A. & Frey, B.
(2021) 'Humanly Extended Automation or the Future of Work Seen through Amazon Patents', Science, Technology, & Human Values 46(3), pp. 655–682.
Fairphone (2022) Fair Materials, (accessed 11 October 2022).
Forti, V., Baldé, C.P., Kuehr, R., et al. (2020) The Global E-waste Monitor (GEM) 2020. United Nations University (UNU)/United Nations Institute for Training and Research (UNITAR) – co-hosted SCYCLE Programme, International Telecommunication Union (ITU) & International Solid Waste Association (ISWA), (accessed 17 October 2022).
Ghadge, A., Wurtmann, H. & Seuring, S. (2020) 'Managing Climate Change Risks in Global Supply Chains: A Review and Research Agenda', International Journal of Production Research 58(1), pp. 44–64.
Gilbert, A. (2021) Mining in Space Is Coming, (accessed 11 October 2022).
Gray, M.L. & Suri, S. (2019) Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Houghton Mifflin Harcourt.
Greenpeace (2020) Oil in the Cloud, (accessed 14 October 2022).
Gulley, A.L. (2022) 'One Hundred Years of Cobalt Production in the Democratic Republic of the Congo', Resources Policy 79, p. 103007.
Harvey, D. (2007) 'Neoliberalism as Creative Destruction', The ANNALS of the American Academy of Political and Social Science 610(1), pp. 21–44.
Hoffmann, C.H. & Hahn, B. (2020) 'Decentered Ethics in the Machine Era and Guidance for AI Regulation', AI & SOCIETY 35(3), pp. 635–644.
International Energy Agency (2020) World – World Energy Balances: Overview – Analysis, (accessed 12 October 2022).
International Energy Agency (2022) Executive Summary – Electricity Market Report – July 2022 – Analysis, (accessed 12 October 2022).
Islam, S.N. & Winkel, J. (2017) Climate Change and Social Inequality. Working Papers 152, United Nations, Department of Economics and Social Affairs, (accessed 11 October 2022).
Jenkins, L.
(2005) 'Corporeal Ontology: Beyond Mind-Body Dualism?', Politics 25(1), pp. 1–11.
Jo Pesch, P., Dimitrova, D. & Boehm, F. (2022) 'Data Protection and Machine-Learning-Supported Decision-Making at the EU Border: ETIAS Profiling Under Scrutiny', in: Gryszczyńska, A., Polański, P., Gruschka, N., et al. (eds.) Privacy Technologies and Policy. Springer International Publishing, pp. 50–72.
Jobin, A., Ienca, M. & Vayena, E. (2019) 'The Global Landscape of AI Ethics Guidelines', Nature Machine Intelligence 1(9), pp. 389–399.
Joler, V. & Pasquinelli, M. (2020) The Nooscope Manifested: AI as Instrument of Knowledge Extractivism, (accessed 12 October 2022).
Kaufmann, C. & Côte, M. (2021) 'Frames of Extractivism: Small-Scale Goldmining Formalization and State Violence in Colombia', Political Geography 91, p. 102496.
Kaza, S., Yao, L., Bhada-Tata, P., et al. (2018) What a Waste 2.0: A Global Snapshot of Solid Waste Management to 2050. World Bank Publications.
Lee, A. (2011) Apple Manufacturer Makes Employees Sign 'No Suicide' Pact: Report, (accessed 13 October 2022).
Levin, L.A., Amon, D.J. & Lily, H. (2020) 'Challenges to the Sustainability of Deep-Seabed Mining', Nature Sustainability 3(10), pp. 784–794.
Lin, T., Lin, Y. & Tseng, W. (2016) 'Manufacturing Suicide: The Politics of a World Factory', Chinese Sociological Review 48(1), pp. 1–32.
Little, P.C. & Akese, G.A. (2019) 'Centering the Korle Lagoon: Exploring Blue Political Ecologies of E-Waste in Ghana', Journal of Political Ecology 26(1), pp. 448–465.
Little, P.C. (2021) Burning Matters: Life, Labor, and E-Waste Pyropolitics in Ghana. Oxford University Press.
Lung, J. (2012) 'Ethical and Legal Considerations of reCAPTCHA', in: 2012 Tenth Annual International Conference on Privacy, Security and Trust, July 2012, pp. 211–216.
Maclure, J.
(2020) ‘The New AI Spring: A Deflationary View’, AI & SOCIETY 35(3), pp. 747–750. Mbembe, A. (2019) Necropolitics. Durham, London: Duke University Press Books. McCarthy, J. (2007) What is AI?, (accessed 11 October 2022). Mezzadra, S. & Neilson, B. (2017) ‘On the Multiple Frontiers of Extraction: Excavating Contemporary Capitalism’, Cultural Studies 31(2–3), pp. 185–204. Mignolo, W.D. (2011) The Darker Side of Western Modernity: Global Futures, Decolonial Options. Durham, London: Duke University Press Books. Mining.com (2021) Tesla, Apple, Google, Microsoft Dodge Congo Cobalt Class-Action, (accessed 12 October 2022). Mitchell, M. (2019) Artificial Intelligence: A Guide for Thinking Humans. Penguin UK. 332 Zbornik znanstvenih razprav – letnik LXXXIV, 2024 LjubLjana Law Review, voL. LXXXiv, 2024 Mohamed, S., Png, M.-T. & Isaac, W. (2020) ‘Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence’, Philosophy & Technology 33(4), pp. 659–684. Mosco, V. (2016) ‘After the Internet: Cloud Computing, Big Data and the Internet of Things’, Les Enjeux de l’information et de la communication 17/2(2), pp. 146–155. Mozur, P. (2019) One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority. The New York Times, (acces- sed 24 November 2022). Mulligan, C. & Elaluf-Calderwood, S. (2022) ‘AI Ethics: A Framework for Measuring Embodied Carbon in AI Systems’, AI and Ethics 2(3), pp. 363–375. Parajuly, K., Kuehr, R., Awasthi, A.K., et al. (2019) Future E-Waste Scenarios. Solving the E-waste Problem (StEP) Initiative; United Nations University; United Nations Environment Programme, (accessed 17 October 2022). Peña, P. & Tapia, D. (2020) White Gold, Digital Destruction: Research and Awareness on the Human Rights Implications of the Extraction of Lithium Perpetrated By the Tech Industry in Latin American Ecosystems. Global Information Society Watch, (accessed 15 July 2022). Petrlik, J., Adu-Kumi, S., Hogarh, J., et al. 
(2019) Persistent Organic Pollutants (POPs) in Eggs: Report from Africa. Accra-Yaounde-Gothenburg-Prague, IPEN, Arnika- Toxics and Waste Programme, CREPD-Centre de Recherche et d’Education pour le Développement. Png, M.-T. (2022) ‘At the Tensions of South and North: Critical Roles of Global South Stakeholders in AI Governance’, in: 2022 ACM Conference on Fairness, Accountability, and Transparency, New York, NY, USA, 21 June 2022, pp. 1434–1445. Pun, N. & Koo, A. (2015) ‘A “World-Class” (Labor) Camp/us: Foxconn and China’s New Generation of Labor Migrants’, Positions: East Asia Cultures Critique 23(3), pp. 411–435. Rachana, C.R., Banu, R., Ahammed, G.F.A., et al. (2017) ‘Cloud Computing – A Unified Approach for Surveillance Issues’, in IOP Conference Series: Materials Science and Engineering 225(1), p. 012073. Rautela, R., Arya, S., Vishwakarma, S., et al. (2021) ‘E-Waste Management and Its Effects on the Environment and Human Health’, Science of The Total Environment 773, p. 145623. 333 Kristina Čufar – AI Software/Hardware as Mind/Body Problem. Global Supply Chains, Shadow Workers, and Wasted Lives Ricaurte, P. (2019) ‘Data Epistemologies, the Coloniality of Power, and Resistance’, Television & New Media 20(4), pp. 350–365. Ricaurte, P. (2022) ‘Ethics for the Majority World: AI and the Question of Violence at Scale’, Media, Culture & Society 44(4), pp. 726–745. Roberts, H., Cowls, J., Morley, J., et al. (2021) ‘The Chinese Approach to Artificial Intelligence: An Analysis of Policy, Ethics, and Regulation’, AI & SOCIETY 36(1), pp. 59–77. Robertson, J. (2005) ‘Women and Enlightenment: A Historiographical Conclusion’ in: Knott, S. & Taylor, B. (eds.) Women, Gender and Enlightenment. London: Palgrave Macmillan UK, pp. 692–704. Rodrigues, R. (2020) ‘Legal and Human Rights Issues of AI: Gaps, Challenges and Vulnerabilities’, Journal of Responsible Technology 4, p. 100005. Rouvroy, A., Berns, T. & Carey-Libbrecht, L. 
(2013) ‘Algorithmic Governmentality and Prospects of Emancipation’, Reseaux 177(1), pp. 163–196. Scherer, M. (2016) ‘Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies’, Harvard Journal of Law & Technology 29(2), p. 353. Searle, J. (2009) ‘Chinese Room Argument’, Scholarpedia 4(8), p. 3100. Shabbar, A. (2018) ‘Queer-Alt-Delete: Glitch Art as Protest Against the Surveillance Cis- tem’, Women’s Studies Quarterly 46(3 & 4), pp. 195–212. Shalf, J. (2020) ‘The Future of Computing beyond Moore’s Law’, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 378(2166), p. 20190061. Smuha, N.A. (2021) ‘From a ‘Race to AI’ to a ‘Race to AI Regulation’: Regulatory Competition for Artificial Intelligence’, Law, Innovation and Technology 13(1), pp. 57–84. Soriano, C.R., Cabalquinto, E.C. & Panaligan, J.H. (2021) ‘Performing “Digital Labor Bayanihan”: Strategies of Influence and Survival in the Platform Economy’, Sociologias 23, pp. 84–111. Stilgoe, J. (2018) ‘Machine Learning, Social Learning and the Governance of Self- Driving Cars’, Social Studies of Science 48(1), pp. 25–56. Strubell, E., Ganesh, A. & McCallum, A. (2019) ‘Energy and Policy Considerations for Deep Learning in NLP’, arXiv preprint:1906.02243 [cs], (accessed 9 February 2021). Suarez-Villa, L. (2009) Technocapitalism: A Critical Perspective on Technological Innovation and Corporatism. Philadelphia: Temple University Press. 334 Zbornik znanstvenih razprav – letnik LXXXIV, 2024 LjubLjana Law Review, voL. LXXXiv, 2024 Van Brusselen, D., Kayembe-Kitenge, T., Mbuyi-Musanzayi, S., et al. (2020) ‘Metal Mining and Birth Defects: A Case-Control Study in Lubumbashi, Democratic Republic of the Congo’, The Lancet Planetary Health 4(4), pp. 158–167. Veale, M. & Borgesius, F.Z. 
(2021) ‘Demystifying the Draft EU Artificial Intelligence Act—Analysing the Good, the Bad, and the Unclear Elements of the Proposed Approach’, Computer Law Review International 22(4), pp. 97–112. Walsh, C.E. & Mignolo, W.D. (2018) On Decoloniality: Concepts, Analytics, Praxis. Durham: Duke University Press. West Myers, S., Whittaker, M. & Crawford, K. (2019) Discriminating Systems. New York: AI Now. Whittaker, M., Alper, M., Bennett, C.L., et al. (2019) Disability, Bias, and AI. New York: AI Now Institute. Wilson, H.J. & Daugherty, P.R. (2018) ‘Collaborative Intelligence: Humans and AI Are Joining Forces’, Harvard Business Review 96(4), pp. 114–123. Wolf, M.J., Miller, K.W. & Grodzinsky, F.S. (2017) ‘Why We Should Have Seen That Coming: Comments on Microsoft’s Tay “Experiment,” and Wider Implications’, The ORBIT Journal 1(2), pp. 1–12. World Shipping Council (2022) Containers Lost at Sea 2022 Update. World Shipping Council. Ye, J. (2010) Foxconn Installs Antijumping Nets at Hebei Plants. Wall Street Journal, 3 August, (accessed 13 October 2022). Zhang, P. (2021) The ‘CEO’ Is a Man: How Chinese Artificial Intelligence Perpetuates Gender Biases. South Moring China Post, (accessed 24 November 2022). Povzetki Abstracts 337 Zbornik znanstvenih razprav – letnik LXXXIV, 2024 – povzetki LjubLjana Law Review, voL. LXXXiv, 2024 – synopses Znanstveni članek DOI: 10.51940/2024.1.17-37 UDK: 340.12:165.82(73) Jure Spruk Ideološke premise ameriškega pravnega realizma Avtor obravnava ameriški pravni realizem in njegove ideološke sledi. S pravno teoretične- ga stališča ameriški pravni realizem zaobjema teorijo sodniškega odločanja, ki se je zlasti v dvajsetih in tridesetih letih 20. stoletja razvila kot odgovor na Langdellov pravni for- malizem in formalistično sodniško odločanje. 
Namesto pravnih pravil so ameriški pravni realisti v središče analize sodniškega odločanja postavili iz konkretnih primerov izvedena dejstva, bolj kot notranja logika pravnega razlogovanja pa so jih zanimale njegove posledice. Umestitev teoretičnih poudarkov ameriškega pravnega realizma v družbeni kontekst njihovega nastanka pokaže na njihove ideološke implikacije. Z ideološkega vidika je kritika formalizma pomenila kritiko klasično liberalnih ideoloških konstruktov, zlasti nevtralnega in svobodnega trga, ki so družbeno moč pospešeno zgoščali v rokah posameznikov in korporacij na škodo manj privilegiranih družbenih skupin. Kritika eksaktnega preračunavanja pravilnosti sodniških odločitev je bila dejansko kritika naravne neizogibnosti trga kot pravičnega posrednika med interesi materialno neenakih posameznikov in družbenih skupin.

Ključne besede
ameriški pravni realizem, ideologija, teorija prava.

Scientific article
DOI: 10.51940/2024.1.17-37
UDC: 340.12:165.82(73)

Jure Spruk
Ideological Premises of American Legal Realism

The author discusses American legal realism and its ideological traits. From a legal-theoretical standpoint, American legal realism can be understood as a theory of judicial decision-making that developed especially in the 1920s and 1930s in response to Langdell’s formalism and formalistic judicial decision-making. Instead of focusing on legal rules, American legal realists focused on the facts of concrete cases. They were less interested in the internal logic of legal reasoning and more concerned with its consequences. The social contextualisation of the main theoretical premises of American legal realism reveals their ideological implications.
From an ideological perspective, the critique of legal formalism signified a critique of classical liberal ideological constructs—particularly the notion of a neutral and free market—which concentrated social power in the hands of individuals and corporations at the expense of less privileged social groups. The critique of the exact calculations of the correctness of judicial decision-making was, in effect, a critique of the notion of a naturally inevitable market as a fair mediator between the interests of materially unequal individuals and social groups.

Key words
American legal realism, ideology, theory of law.

Znanstveni članek
DOI: 10.51940/2024.1.39-64
UDK: 340.1

Timotej F. Obreza
Privid pravne konstrukcije
O duhu in porah pravnega (spo)znanja

Pravniki pri svojem delu razmišljajo na podoben način. Prek vzorcev védenja, ki jih pridobijo z ustrezno izobrazbo in poznejšim delovanjem na posameznem pravnem področju, privzemajo ustaljeno miselno shemo. Avtor predstavi tezo, da je takšno miselno izhodišče smiselno razumeti kot privid pravne konstrukcije, ki se pravnikom pri njihovem delovanju utira pred očmi. Pri tem »privid« napoveduje dejstvo, da je ta podoba namišljena, čeprav nujna, »pravna konstrukcija« pa označuje zasnovo (rezultat) in snovanje (ustvarjalni proces) pravnega sveta. S prividom si pravniki ne samo zagotovijo dostop do specifične resnice, prek katere spoznavajo in razvijajo svet prava, pridobijo tudi privilegij, s katerim to področje védnosti monopolizirajo in monetizirajo. Ključna pri tem je priučena spoznavna metoda, ki tvori »ogrodje« pravne konstrukcije in je sestavljena iz enote, tehnike in vrline pravne sporočilnosti. Avtor te tri elemente na kratko ponazori, posebej pa poudari tudi nekatere pomanjkljivosti pravnega razmišljanja.
Kot ključen je prepoznan metodološki pluralizem, ki šele omogoči celovitejše razumevanje sveta okrog nas: spoznavno sintezo. Rdeča nit, ki ideji privida pravne konstrukcije usodno botruje, zadeva sprejemanje pomembnosti pravnega dela na eni in odgovorno koriščenje privilegija pravnega znanja na drugi strani.

Ključne besede
privid, pravna konstrukcija, pravna vednost, metodološki pluralizem, odgovoren monopol znanja.

Scientific article
DOI: 10.51940/2024.1.39-64
UDC: 340.1

Timotej F. Obreza
The Phantasm of Legal Construction
On the Spirit and Pores of Legal Knowledge

Lawyers tend to think about their work in a similar manner. Through the patterns of knowledge acquired during their education and subsequent work in a particular area of law, they adopt an established cognitive scheme. This paper argues that such a cognitive scheme can be understood as a phantasm of a legal construction laid out before the eyes of lawyers. While “phantasm” refers to the fact that this image is indeed imaginary—albeit necessary—“legal construction” refers both to the conception (the result) and the design (the creative process) of the legal sphere. Through this phantasm, lawyers not only secure access to a specific truth by which they learn and shape the world of law; they also gain the privilege of monopolising and monetising this field of knowledge. The key to this is the learned cognitive method, which forms the “frame” of legal construction and consists of the unit, technique and virtue of legal communicability. The paper briefly illustrates these three elements and highlights some of the shortcomings of legal reasoning. It then identifies methodological pluralism as a crucial precondition for a more comprehensive understanding of the world around us—a cognitive synthesis.
The central thread underlying the idea of the phantasm of legal construction, which the paper adopts, concerns accepting the importance of legal work on the one hand, and the responsible use of the privilege of legal knowledge on the other.

Key words
phantasm, legal construction, legal knowledge, methodological pluralism, responsible knowledge monopoly.

Znanstveni članek
DOI: 10.51940/2024.1.65-86
UDK: 347.7:347.9:343.1

Luka Vavken
(Ne)priznavanje privilegija zoper samoobtožbo pravnim osebam s poudarkom na enoosebni gospodarski družbi

Prispevek analizira vprašanje, ali je treba v kaznovalnih postopkih priznati pravico do privilegija zoper samoobtožbo ne le fizičnim, temveč tudi pravnim osebam. Ker pregon kaznivih dejanj oziroma prekrškov, v katerem nastopajo pravne osebe, bolj kot pregon fizičnih oseb temelji na materialnih, torej neverbalnih dokazih, uvodni deli razprave obravnavajo vprašanje dometa privilegija zoper samoobtožbo. Ta v sodobni pravni dogmatiki in sodni praksi ne zajema le testimonialnih dokazov, temveč tudi materialne dokaze oziroma dokumentarno gradivo, nad katerim ima osumljenec kontrolo. Ker je kaznovalni očitek – zaradi sistema limitirane akcesorne odgovornosti pravnih oseb – fizični (odgovorni) osebi pravne osebe praviloma vsebinsko prepleten z očitkom pravni osebi, privilegij zoper samoobtožbo, ki ga uživa domnevni storilec kaznivega dejanja oziroma prekrška, pogosto hkrati varuje pred izpovedovanjem in izročanjem dokumentarnega gradiva v svojo škodo tudi pravno osebo. Ne pa vselej! Avtor zavzema stališče, da bi bilo treba v slednjem primeru pravnim osebam priznati samostojno pravico do privilegija zoper samoobtožbo.
Še zlasti, kadar je osumljena oziroma obdolžena enoosebna gospodarska družba, pri kateri se s podelitvijo privilegija zoper samoobtožbo dejansko varuje njen »lastnik« – edini družbenik – pred izpovedovanjem (ravnanjem) v svojo škodo.

Ključne besede
privilegij zoper samoobtožbo, jamstva poštenega postopka, pravna oseba, enoosebna družba z omejeno odgovornostjo, odgovornost pravnih oseb za kazniva dejanja, odgovornost pravnih oseb za prekrške, limitirana akcesorna odgovornost.

Scientific article
DOI: 10.51940/2024.1.65-86
UDC: 347.7:347.9:343.1

Luka Vavken
(Non-)Recognition of the Privilege Against Self-Incrimination for Legal Persons, with an Emphasis on Single-Member Companies

The article analyses whether, in punitive proceedings, the right to the privilege against self-incrimination should be granted not only to natural but also to legal persons. Since the prosecution of criminal and minor offences involving legal persons depends more on material (i.e. non-verbal) evidence than does the prosecution of natural persons, the introductory sections of the discussion address the scope of the privilege against self-incrimination. In contemporary legal dogmatics and case law, this privilege does not cover only testimonial evidence but also material or documentary evidence over which the suspect has control. Because, due to the system of limited accessory liability of legal persons, the punitive allegation against the natural (responsible) individual within the legal entity is generally substantively intertwined with the allegation against the legal person, the privilege against self-incrimination enjoyed by the alleged perpetrator of a criminal or minor offence often simultaneously protects the legal person from testifying and submitting documentary material to its detriment. However, this is not always the case.
The author argues that, in such situations, legal persons should themselves be granted an independent right to the privilege against self-incrimination. This is particularly so when the suspected or accused entity is a single-member company, in which case granting the privilege against self-incrimination effectively protects its “owner”—the sole shareholder—from testifying (acting) to their own detriment.

Key words
privilege against self-incrimination, fair trial guarantees, legal person, single-member limited liability company, liability of legal persons for criminal offences, liability of legal persons for minor offences, limited accessory liability.

Znanstveni članek
DOI: 10.51940/2024.1.87-107
UDK: 314.15:343.123.11:341

Urh Šelih
Izbrani vidiki pravice do izjave v azilnih postopkih

Avtor obravnava pomen pravice do izjave v azilnih postopkih s poudarkom na njeni vlogi pri oceni tveganja vračanja in dodelitvi mednarodne zaščite. Njegova temeljna teza je, da so izjave prosilcev za azil, pridobljene skozi osebni pogovor, ključna podlaga za nadaljnje ravnanje in odločanje pristojnih organov. Še posebej pomembne so v primerih, ko prosilci nimajo drugih trdnih dokazov, kar je pogost pojav. Prispevek se osredotoča na pravne vidike in zahteve pravice do izjave, kot izhajajo iz prava Evropske unije, Evropske konvencije o človekovih pravicah in nacionalnega prava. Glavni namen prispevka je predstavitev nekaterih ključnih vidikov pravice do izjave, še zlasti v kontekstu procesnih zahtev, ki jih določajo evropski in nacionalni pravni viri. Med te vidike spadajo tudi procesne pravice prosilcev, kot je pravica do komentiranja poročila o osebnem pogovoru in dajanja pripomb na ugotovitve pristojnih organov glede verodostojnosti izjav in dokazov.
Avtor se osredotoča na pomembnejše in problematične vidike pravice do izjave, ki zahtevajo posebno pozornost pri obravnavi prošenj za azil. Hkrati navaja, da se pravica do izjave ne izčrpa z osebnim pogovorom, temveč zahteva tudi možnost predložitve dodatnih informacij, popravljanja napak in podajanja pripomb na ugotovitve pristojnih organov. Avtor analizira tudi relevantno sodno prakso, ki podpira tezo o pomembnosti pravice do izjave v azilnih postopkih.

Ključne besede
pravica do izjave, mednarodna zaščita, azil, sodelovalna dolžnost, procesna direktiva.

Scientific article
DOI: 10.51940/2024.1.87-107
UDC: 314.15:343.123.11:341

Urh Šelih
Selected Aspects of the Right to be Heard in Asylum Procedures

The paper discusses the importance of the right to be heard in asylum procedures, focusing on its role in both assessing the risk of refoulement and granting international protection. The main thesis is that the statements of asylum seekers, obtained through personal interviews, form a key basis for subsequent proceedings and decision-making by the competent authorities. These statements are particularly important in cases where applicants have no other evidence, which is a common occurrence. The article examines the legal aspects and requirements of the right to be heard as they arise under European Union law, the European Convention on Human Rights, and national law. Its main purpose is to present several key aspects of this right, particularly in the context of the procedural requirements laid down by European and national legal sources. These aspects include the procedural rights of applicants, such as the right to comment on the report of the personal interview and the right to comment on the competent authorities’ findings regarding the credibility of statements and evidence.
The article concentrates on the most significant and problematic aspects of the right to be heard that demand special attention in the processing of asylum applications. The author argues that the right to be heard is not confined to a personal interview but also requires the opportunity to provide additional information, correct errors, and comment on the findings of the competent authorities. The article additionally analyses relevant case law supporting the argument about the importance of the right to be heard in asylum proceedings.

Key words
right to be heard, international protection, asylum, duty to cooperate, Procedures Directive.

Kratek znanstveni članek
DOI: 10.51940/2024.1.109-124
UDK: 341.3/.4:341.645(5-11)

Polona Brumen
Pisma iz Tokia

Z analizo primarnih in sekundarnih virov v angleškem jeziku avtorica predstavi nekatere značilnosti Mednarodnega vojaškega sodišča za Daljni vzhod, ki je po koncu druge svetovne vojne dve leti in pol zasedalo v Tokiu. Opre se na uradno, pogosto tajno korespondenco nekaterih članov sodišča, saj so njihove misli, izražene domačim institucijam, zelo zanimiv vpogled v delovanje sodišča, njegove značilnosti, meddržavno sestavo in tudi dileme, s katerimi so se pri sprejemanju končne odločitve soočali sodniki. Kljub multilateralni prisotnosti in sočasni aplikaciji značilnosti različnih pravnih sistemov – čeprav je sodišče delovalo na osnovi Statuta Mednarodnega vojaškega sodišča za Daljni vzhod in takrat veljavnega mednarodnega prava – sodniki niso zmogli v celoti izstopiti iz svojih pravnih tradicij. V mnogoplastnih okoliščinah dolgotrajnega dela daleč od doma je med udeleženci na lokaciji – in v njihovih odnosih z vodilnimi v domačih državah – prihajalo do nepričakovanih, dotlej nepoznanih zapletov in merjenj moči na različnih ravneh.
Glavna posledica tega je bila neenotnost razsodbe: sprejeta večinska sodba je 25 obtožencev spoznala za krive, sedem jih je obsodila na smrtno kazen. Od enajstih članov senata so trije podali (delno) ločeno odklonilno mnenje, predsednik senata pa je na koncu uradno vložil le izjavo o nestrinjanju z višino kazni. Tako je glavni doprinos tega prispevka razkritje neenotnosti sodnega senata, saj nasprotuje običajno razširjenim predstavam, da je bilo predmetno sojenje »ameriška predstava«.

Ključne besede
Mednarodno vojaško sodišče za Daljni vzhod, mednarodni odnosi, mednarodno pravo, ločeno mnenje, diplomatska korespondenca.

Short scientific article
DOI: 10.51940/2024.1.109-124
UDC: 341.3/.4:341.645(5-11)

Polona Brumen
Letters from Tokyo

Through an analysis of primary and secondary sources in the English language, this article aims to shed light on certain characteristics of the International Military Tribunal for the Far East, which convened in Tokyo for two and a half years in the aftermath of World War II. It relies on official, often secret, correspondence from some tribunal members, as their thoughts—expressed to their home institutions—offer an extremely interesting insight into the tribunal’s affairs, its characteristics, its inter-state composition, and the dilemmas faced by the judges when reaching their final decision. Through the tribunal’s multilateral presence and the simultaneous application of elements derived from various legal systems—despite the tribunal’s workings being stipulated by the Charter of the International Military Tribunal for the Far East and existing international law—the judges were unable to detach themselves fully from their own legal traditions.
Amid the multi-layered circumstances of prolonged work far from home, this resulted in unexpected entanglements and power struggles on different levels, both among the participants on site and in their relations with leading figures in their home countries. The main effect of this turn of events was a non-uniform final decision: the majority judgement found the 25 accused guilty, condemning seven of them to the death penalty. Of the eleven members of the bench, three judges filed (partially) separate dissenting opinions, while the President eventually filed only a statement disagreeing with some of the penalties. The contribution of this article lies in disclosing the bench’s non-uniformity, since it contests the widespread perception that the trial in question was an “American show”.

Key words
International Military Tribunal for the Far East, international relations, international law, dissenting opinion, diplomatic correspondence.

Znanstveni članek
DOI: 10.51940/2024.1.125-156
UDK: 347.85:341.229:347:355

Anže Mediževec
Pravica do samoobrambe v Zemljini orbiti

Z razvojem vesoljske tehnologije so v vesolju poleg držav čedalje bolj prisotni tudi nedržavni akterji. Njihova vse večja prisotnost odpira številna pravna vprašanja, tudi v zvezi z uporabo sile, zlasti v okviru pravice do samoobrambe. Prvi cilj tega članka je pojasniti pravno podlago za uporabo sile pri izvajanju samoobrambe v vesolju, zlasti v Zemljini orbiti. Drugi cilj je prispevati k pravnemu okviru, kako lahko države izvajajo samoobrambo pred napadi nedržavnih akterjev v vesolju. Avtor razlikuje med pravili o pripisovanju uporabe sile državi in doktrino nepripravljenosti ali nezmožnosti države »gostiteljice«. Predlaga, da se slednja lahko smiselno prenese na področje vesolja s pomočjo rekonceptualizacije pojma »ozemlja« države od paradigme državne suverenosti v smeri državne jurisdikcije.
Nadalje avtor na področju pravil o pripisovanju ravnanja državi primerja določbe o odgovornosti države iz Členov o odgovornosti držav za mednarodno protipravna dejanja (ARSIWA) in režim objektivne odgovornosti iz Pogodbe o vesolju (OST). Njegov namen je pojasniti, kateri sistem pravil naj se uporablja pri obravnavi vprašanja odgovornosti države za uporabo sile s strani nedržavnih akterjev v vesolju. Glede tega ponudi tri rešitve. Prva temelji na predpostavki, da se pravo vesolja, konkretno VI. člen OST, lahko obravnava kot lex specialis v razmerju do sistema pravil po ARSIWA. Druga podpira stališče, da bi se morala uporabljati splošna pravila o odgovornosti držav iz ARSIWA, saj gre za sekundarna pravila mednarodnega prava, VI. člen OST pa zajema primarna pravila. Tretji pristop ponuja kombinirano razlago VI. člena OST in ARSIWA, ki temelji na sistematični razlagi tam vsebovanih norm, da se ohrani namen sekundarnih pravil mednarodnega prava o odgovornosti držav.

Ključne besede
Pogodba o vesolju (OST), IV. člen OST, VI. člen OST, miroljubni nameni, nacionalne dejavnosti, samoobramba v vesolju, režim objektivne odgovornosti.

Scientific article
DOI: 10.51940/2024.1.125-156
UDC: 347.85:341.229:347:355

Anže Mediževec
The Right of Self-defence in the Earth’s Orbit

Abstract
The increasing presence of non-State actors in space raises a plethora of legal questions, including those related to the use of force, especially in the context of the right of self-defence. The first aim of this article is to explain the legal basis for resorting to force in the exercise of self-defence in space, specifically in the Earth’s orbit. The second goal is to contribute to the legal framework concerning how States may exercise self-defence against attacks committed by non-State actors in space.
In this regard, the author distinguishes between the rules of attribution of the use of force to a State and the “unwilling or unable” doctrine. It is suggested that the latter may be transposed into the space domain, mutatis mutandis, by a re-conceptualisation of the notion of a State’s “territory”, shifting from its sovereignty-based foundation towards State jurisdiction. Further on, in the realm of the rules of attribution of conduct to a State, the author compares the ARSIWA rules of State responsibility with the strict responsibility regime of the Outer Space Treaty (OST), to clarify which system applies when addressing State responsibility for the use of force by non-State actors in space. Three solutions are offered in this regard. The first rests on the premise that space law, specifically Article VI OST, may be seen as lex specialis in relation to ARSIWA. The second supports the view that the general rules of State responsibility in ARSIWA should apply, as they are secondary rules of international law, whereas Article VI OST encompasses primary rules. The third approach offers a combined reading of Article VI OST and ARSIWA, based on a systematic interpretation of the norms contained therein, to preserve the purpose of the secondary rules on State responsibility.

Key words
Outer Space Treaty, Article IV OST, Article VI OST, Peaceful Purposes, National Activities, Self-defence in Space, Strict Responsibility Regime.

Znanstveni članek
DOI: 10.51940/2024.1.163-184
UDK: 341.3:342.7:004.8; 623.09:004.8

Yuval Shany
Uporabiti umetno inteligenco ali ne?
Avtonomni orožni sistemi in njihovo zapleteno razmerje s pravico do življenja

Razširjenost tehnologij umetne inteligence, razvitih ali prilagojenih za vojaško uporabo, odpira zahtevna vprašanja o skladnosti teh novih tehnologij z mednarodnim pravom nasploh ter še zlasti z mednarodnim pravom človekovih pravic. Odbor za človekove pravice, strokovno telo, ki je zadolženo za spremljanje izvajanja Mednarodnega pakta o državljanskih in političnih pravicah, je leta 2018 podal svoje mnenje o razmerju med pojavom nove vojaške umetne inteligence in spoštovanjem pravice do življenja. Članek preučuje razprave v okviru mednarodnega prava človekovih pravic v zvezi z uvajanjem tehnologij umetne inteligence v vojaške kontekste in njihovim razmerjem s pravico do življenja. V prvem delu na kratko predstavi nekatere dejanske in možne uporabe umetne inteligence v vojaških okoljih. V drugem delu obravnava tri glavne ugovore zoper uvajanje umetne inteligence v bojna območja: zmožnost avtonomnih ali polavtonomnih sistemov umetne inteligence, da delujejo skladno s pravili mednarodnega humanitarnega prava, pomisleke glede dejanskega znižanja standardov humanitarne zaščite ter etične in pravne posledice prenosa nekaterih odločitev o življenju ali smrti z ljudi na stroje. V tretjem delu – ob upoštevanju teh treh načelnih ugovorov – avtor preuči konkretne predloge Mednarodnega odbora Rdečega križa za omejitev uporabe umetne inteligence v vojaških okoljih (omejitev področja in načina uporabe avtonomnih orožnih sistemov ter izključitev nepredvidljivih in smrtonosnih sistemov). V četrtem delu so glavna vprašanja, ki jih obravnava ta članek, preučena z vidika pravice do življenja po mednarodnem pravu človekovih pravic, kot jo pojasnjuje Splošni komentar št. 36.

Ključne besede
avtonomni orožni sistemi, pravica do življenja, mednarodno humanitarno pravo, človekovo dostojanstvo, odgovornost, preglednost, smiselni človeški nadzor, Mednarodni odbor Rdečega križa, vojaška umetna inteligenca.
Scientific Article
DOI: 10.51940/2024.1.163-184
UDC: 341.3:342.7:004.8 623.09:004.8

Yuval Shany
To Use AI or Not to Use AI? Autonomous Weapon Systems and Their Complicated Relationship with the Right to Life

Abstract
The increased prevalence of AI technology developed or adapted for military use raises difficult questions about the compatibility of this new technology with international law in general, and international human rights law (IHRL) in particular. The Human Rights Committee, the expert body entrusted with monitoring the application of the International Covenant on Civil and Political Rights, expressed its view in 2018 on the relationship between the emergence of new military AI and respect for the right to life. The article reviews the terms of the IHRL debate surrounding the introduction of AI technology into military contexts and its relationship to the right to life. Section one briefly reviews some actual and potential applications of AI in military contexts. Section two deals with three principal objections to introducing military AI to battlefield environments: the capacity of autonomous or semi-autonomous AI systems to properly apply international humanitarian law (IHL), concerns about de facto lowering of standards of humanitarian protection, and the ethical and legal implications of transferring certain life-and-death decisions from humans to machines. Section three reviews, in light of these three principled objections, specific proposals by the ICRC to limit the use of AI in military contexts (limiting the scope and manner of use of autonomous weapon systems, and excluding unpredictable and lethal systems). Section four reviews the main issues discussed in this article from the vantage point of the right to life under IHRL, as elaborated in General Comment No. 36.
Key words
autonomous weapon systems, right to life, international humanitarian law, human dignity, accountability, transparency, meaningful human control, ICRC, military AI.

Znanstveni članek
DOI: 10.51940/2024.1.185-212
UDK: 341.3:342.7:004.8 341:623:004.8

Joana Gomes Beirão, Jan Wouters
Na poti do mednarodnega pravnega okvira za smrtonosno umetno inteligenco, ki temelji na spoštovanju človekovih pravic: misija nemogoče?

Avtorja obravnavata morebitno uporabo avtonomnega orožja tako v oboroženih spopadih kot tudi zunaj njih, vključno z odkrivanjem kaznivih dejanj in kazenskim pregonom (na primer v policijskih postopkih). Ta pojav proučujeta z vidika prava človekovih pravic, s posebnim poudarkom na pravici do življenja. Mednarodna skupnost že več kot desetletje razpravlja o tem, ali tehnološki napredek na področju razvoja avtonomnega orožja zahteva oblikovanje novih pravil v okviru mednarodnega humanitarnega prava. Na drugi strani pa je bila obravnava tovrstne tehnologije z vidika prava človekovih pravic doslej omejena, čeprav ima pomembne posledice za pravico do življenja in druge človekove pravice. Vzporedno s temi razpravami se je v zadnjih letih pojavilo več mednarodnih pobud, ki si prizadevajo oblikovati nezavezujoča in zavezujoča pravila za razvoj in uporabo umetne inteligence na podlagi spoštovanja človekovih pravic. Ta članek preučuje štiri take pobude: Priporočilo Organizacije za gospodarsko sodelovanje in razvoj (OECD) o umetni inteligenci, Priporočilo Unesca o etiki umetne inteligence, orodje Interpola in UNICRI za odgovorne inovacije umetne inteligence pri odkrivanju kaznivih dejanj in kazenskem pregonu ter Konvencijo Sveta Evrope o umetni inteligenci. Avtorja analizirata, koliko te pobude zajemajo konkretne pomisleke, ki jih sproža avtonomno orožje.
Ključne besede
avtonomno orožje, umetna inteligenca, človekove pravice, pravica do življenja, odkrivanje kaznivih dejanj in kazenski pregon.

Scientific Article
DOI: 10.51940/2024.1.185-212
UDC: 341.3:342.7:004.8 341:623:004.8

Joana Gomes Beirão, Jan Wouters
Towards an International Legal Framework for Lethal Artificial Intelligence Based on Respect for Human Rights: Mission Impossible?

This article considers the potential use of autonomous weapons both in and outside armed conflict, including in law enforcement. It analyses the phenomenon from the perspective of human rights law, with a particular focus on the right to life. For over a decade, the international community has debated whether technological advances pertaining to the development of autonomous weapons require the establishment of new rules within the framework of international humanitarian law. In contrast, consideration of such technology from a human rights law perspective has been limited, despite its implications for the right to life and other human rights. In parallel, several international initiatives have emerged in recent years aiming to establish non-binding and binding rules for the development and use of artificial intelligence (AI) based on respect for human rights. This article reviews four such initiatives: the OECD Recommendation on AI, the UNESCO Recommendation on the Ethics of AI, the INTERPOL and UNICRI Toolkit for Responsible AI Innovation in Law Enforcement, and the Council of Europe AI Convention. It examines the extent to which these initiatives address the specific concerns raised by autonomous weapons.

Key words
autonomous weapons, artificial intelligence, human rights, right to life, law enforcement.
Znanstveni članek
DOI: 10.51940/2024.1.213-249
UDK: 341.33/.34:341.232:004.8

Maruša T. Veber
Umetna inteligenca in humanitarna pomoč: preučitev vloge soglasja držav

Avtorica preučuje vlogo in pojem soglasja držav pri zagotavljanju humanitarne pomoči, ki jo podpirajo sistemi umetne inteligence, z vidika veljavnih mednarodnih pravil: splošnega pravnega režima, ki ureja humanitarno pomoč, in posebnih pravil, ki izhajajo iz mednarodnega prava človekovih pravic ter mednarodnega prava oboroženih spopadov. Avtorica ugotavlja, da ima pojem soglasja zadevne države osrednjo vlogo v teh pravilih, pri čemer razlikuje med strateškim soglasjem in operativnim soglasjem za zagotovitev humanitarne pomoči. Strateško soglasje se nanaša na splošno soglasje države za zagotavljanje humanitarne pomoči na njenem ozemlju, operativno soglasje pa se nanaša na soglasje, ki se zahteva na operativni oziroma tehnični ravni za zagotavljanje posamezne vrste humanitarne pomoči na geografsko opredeljenem območju. Avtorica zatrjuje, da je treba utemeljene razloge za zavrnitev operativnega soglasja za humanitarno pomoč, ki jo podpira UI, kot izhajajo iz mednarodnega prava oboroženih spopadov, razlikovati od samovoljne zavrnitve strateškega soglasja. V prvem primeru je zavrnitev operativnega soglasja lahko pravno upravičena, samovoljna zavrnitev strateškega soglasja za dostavo humanitarne pomoči pa je prepovedana. Zagotavljanje humanitarne pomoči brez soglasja zadevne države je pravno lahko upravičeno bodisi na podlagi dovoljenja Varnostnega sveta Združenih narodov bodisi na podlagi sekundarnih pravil mednarodnega prava, zlasti s pravili o protiukrepih.

Ključne besede
umetna inteligenca, humanitarna pomoč, samovoljna zavrnitev soglasja, protiukrepi.

Scientific Article
DOI: 10.51940/2024.1.213-249
UDC: 341.33/.34:341.232:004.8

Maruša T. Veber
Artificial Intelligence and Humanitarian Assistance: Reassessing the Role of State Consent

Abstract
The author analyses the notion of State consent in the delivery of humanitarian assistance supported by artificial intelligence (AI) systems from the perspective of the existing applicable international legal regimes, in particular the general legal regime of humanitarian assistance and the specific rules deriving from international humanitarian law and international human rights law. She argues that the notion of consent lies at the heart of these rules, with a distinction made between strategic and operational consent to humanitarian assistance. The former refers to a State’s general consent to the delivery of humanitarian assistance on its territory, while the latter refers to the consent required at the operational level for the delivery of a particular type of humanitarian assistance in a specific geographically defined area. It is argued that valid reasons for withholding operational consent to AI-supported humanitarian assistance under international humanitarian law must be distinguished from the arbitrary withholding of strategic consent. While withholding operational consent may be legally justified, the arbitrary withholding of strategic consent to humanitarian assistance is prohibited under the relevant international legal regimes when it amounts to a violation of other existing obligations of the State concerned (e.g., under international humanitarian law or human rights law). In such situations the non-consensual delivery of humanitarian assistance could be legally justified either through United Nations Security Council authorisation or by secondary rules of international law, in particular countermeasures.

Key words
artificial intelligence, humanitarian assistance, arbitrary withholding of consent, countermeasures.
Znanstveni članek
DOI: 10.51940/2024.1.251-274
UDK: 341.226:347.8:004.8 341.176:341.226

Anže Singer
Umetna inteligenca v vesolju: pregled Evropske vesoljske agencije in njena vloga v okolju umetne inteligence

Evropska vesoljska agencija (European Space Agency – ESA) je bila ustanovljena 30. oktobra 1980. Trenutno združuje triindvajset držav članic, njeno poslanstvo pa je oblikovanje razvoja evropskih vesoljskih zmogljivosti in zagotavljanje, da se naložbe v vesolje usmerjajo tako, da prinašajo koristi evropskim državljanom in svetu. Umetno inteligenco (UI) lahko razumemo kot inteligenco, ki jo izkazujejo stroji in ki lahko opazujejo, zaznavajo ter delujejo v svojem okolju, da maksimirajo verjetnost uspeha pri določenem cilju. UI je lahko pomembna in omogočitvena tehnologija za vesoljske misije, saj pripomore k povečanju znanstvenih rezultatov in k večji učinkovitosti same misije. Najuspešnejše izvedbe UI se v vesoljski industriji še vedno le redko uporabljajo, saj modeli, razviti znotraj nevronskih mrež, niso berljivi za ljudi. Kljub tem izzivom pa poznamo primere, v katerih ESA z lastnimi dejavnostmi uspešno dokazuje uporabo UI v vesoljskem sektorju. Hitro razvijajoče se področje vesoljskih raziskav in tehnologije, UI in z njimi povezane aplikacije odpirajo številne dvome ter razprave, hkrati pa postavljajo pod vprašaj ustreznost tradicionalnega vesoljskega prava. Prisotna je skrb, ali je pravni okvir dovolj posodobljen, da se lahko spoprime z izzivi, ki lahko nastanejo na področju UI in vesolja, ter kaj je treba storiti za ustrezno in pravočasno rešitev teh izzivov. Poleg vprašanj odgovornosti nekateri opozarjajo tudi, da sta zagotavljanje zaupnosti in varstvo podatkov med bolj perečimi temami v kontekstu UI.

Ključne besede
Evropska vesoljska agencija, vesoljske raziskave, umetna inteligenca, vesoljsko pravo, pravni izzivi.
Scientific Article
DOI: 10.51940/2024.1.251-274
UDC: 341.226:347.8:004.8 341.176:341.226

Anže Singer
Artificial Intelligence in Space: Overview of the European Space Agency and its Role in the AI Environment

The European Space Agency (ESA) was established on 30 October 1980. It currently has twenty-three Member States, and its mission is to shape the development of European space capability and to ensure that investment in space continues to bring benefits to European citizens and the world. Artificial intelligence (AI) can be seen as intelligence exhibited by machines that can observe, perceive and act upon their environment to maximise their chance of success at a given goal. AI can be an important and enabling technology for space missions, bringing added value for scientific return and for the efficiency of the mission itself. The most successful AI implementations are still rarely used in the space industry today, as the models developed within neural networks are not human-readable. Despite these challenges, there are examples where AI is successfully being demonstrated in the space sector through ESA’s own activities. The fast-evolving field of space research and technology, AI, and the related applications are raising numerous doubts and debates while challenging the adequacy of traditional space law. There is concern as to whether the legal framework is up to date to meet the challenges that may arise within the AI and space sector, and what can be done to meet those challenges adequately and on time. In addition to liability concerns, some also argue that ensuring confidentiality and data protection are among the more acute issues in the context of AI.

Key words
European Space Agency, space research, artificial intelligence, space law, legal challenges.
Znanstveni članek
DOI: 10.51940/2024.1.275-302
UDK: 341.229:004.8 341.229:343.301

Iva Ramuš Cvetkovič
UI – možna rešitev za grožnje človekovemu življenju, ki prihajajo iz objektov v vesolju

Z izstrelitvijo satelita Sputnik I leta 1957 je prvi vesoljski objekt dosegel vesolje. Sledili so mu še številni drugi, danes pa vesoljske objekte obravnavamo kot nepogrešljiv del našega vsakdana. Satelite in podatke, ki jih zagotavljajo, uporabljamo za spremljanje okolja s pomočjo opazovanja Zemlje, urejanje podnebja in upravljanje naravnih nesreč, pa tudi za gospodarske dejavnosti, denimo kmetijstvo, promet, komunikacije in še številne druge. Kljub številnim koristim pa vesoljski objekti pomenijo grožnjo človeškim življenjem v vesolju, v zračnem prostoru in na Zemlji. Tehnološki napredek 21. stoletja, še zlasti čedalje pogostejša uporaba umetne inteligence, je vzbudil upanje, da bodo te grožnje zmanjšane, omiljene ali celo v celoti odpravljene. Avtorica v članku presoja, ali je tako upanje razumno in upravičeno. Najprej opredeli nekaj primerov groženj človeškim življenjem, ki izhajajo iz vesoljskih objektov, ter navede primere, ko so se te grožnje že uresničile v praksi. Drugič, predstavi veljavni pravni okvir in ga nato v tretjem koraku oceni ter pokaže, da ne zadošča za obravnavo omenjenih groženj. V četrtem delu prikaže, kako je predvidena uporaba umetne inteligence za omilitev teh groženj. V petem delu oriše nekatere nove pravne izzive, ki bi se lahko pojavili ob taki uporabi umetne inteligence, in na tej podlagi končno presodi, ali bo taka omilitev groženj s pomočjo umetne inteligence res tako učinkovita, kot se trenutno napoveduje.

Ključne besede
umetna inteligenca, vesoljska tehnologija, vesoljski odpadki, vesoljski objekti, terorizem.
Scientific Article
DOI: 10.51940/2024.1.275-302
UDC: 341.229:004.8 341.229:343.301

Iva Ramuš Cvetkovič
AI—A Possible Solution to the Threats Against Human Lives Arising from Space Objects?

With the 1957 launch of the satellite Sputnik I, the first space object reached outer space. Many more followed, and today space objects are considered an invaluable part of our everyday lives. Satellites and the data they provide are used for monitoring the environment through Earth observation, climate regulation, and natural disaster management, as well as economic activities, for example, agriculture, transportation, communication, and several others. Despite these numerous benefits, however, space objects pose threats to human lives in outer space, in airspace, and on Earth. The technological advancement of the 21st century, especially the increased use of artificial intelligence, brought hope that these threats would be minimised, mitigated, or even completely resolved. In this paper, I evaluate whether such hope is reasonable and justified. To do this, I first identify some examples of the threats to human lives arising from space objects and provide examples of when such threats have already materialised. Second, I present the applicable legal framework and then, third, evaluate it and show that it falls short in addressing those threats. Fourth, I demonstrate how AI is planned to be used in mitigating these threats. Fifth, I outline some of the new legal challenges such use of AI would bring and, against this background, finally assess whether such AI threat mitigation is going to be as effective as currently predicted.

Key words
AI, space technology, space debris, space objects, terrorism.
Znanstveni članek
DOI: 10.51940/2024.1.303-330
UDK: 004.8:17:342.7

Kristina Čufar
Programska/strojna oprema UI kot problem uma/telesa
Globalne dobavne verige, delavci v senci in zavržena življenja

Umetna inteligenca (UI) in druge tehnologije, ki temeljijo na algoritmih, so se v zadnjem desetletju pretežno že vključile v naša vsakdanja življenja. Čeprav ima UI izjemen potencial in je že pripomogla k izboljšanju človekovega položaja, je hkrati deležna ostre kritike, saj lahko med drugim reproducira pristranskost in družbene krivice ali okrepi distopične oblike nadzora. Večina znanstvenih, regulativnih in etičnih razprav se osredotoča na izzive UI na ravni programske opreme, vidik strojne opreme UI pa je pogosto prezrt. Razumevanje UI kot programske opreme, tj. kot umetnega »uma«, poudarja zgolj domnevno nove in vznemirljive vidike te tehnologije, pri tem pa prezre človeške in materialne stroške njene izdelave. To sovpada s tradicionalnim dualizmom uma in telesa, ki um postavlja nad telo in tako izkrivlja naš pogled na celoten problem. Da bi se zoperstavili prevladujočim narativom, avtorica predlaga razumevanje UI kot strojne in tudi programske opreme, s čimer se želi razširiti obseg etičnih in pravnih vprašanj, ki bi jih morali zajeti pri regulaciji UI. Celostno in sistemsko obravnavanje pojava UI mu odvzame njegovo zaznano edinstvenost. Ko resno upoštevamo svetovni obseg pridobivanja surovin, dela in podatkov, ki so potrebni za vzpostavitev UI, postane jasno, da je UI zgolj še en primer kolonialnega kapitalizma.

Ključne besede
umetna inteligenca, etika, človekove pravice, ekstraktivizem, kolonializem.
Scientific Article
DOI: 10.51940/2024.1.303-330
UDC: 004.8:17:342.7

Kristina Čufar
AI Software/Hardware as Mind/Body Problem
Global Supply Chains, Shadow Workers, and Wasted Lives

Artificial intelligence (AI) and other algorithm-based technologies have become part of everyday life over the last decade. While AI holds amazing potential and has already contributed positively to the human condition, it is also subject to fierce critique as it may, for example, reproduce bias and social injustices or increase dystopic forms of surveillance. While most scholarly, regulatory, and ethical debates focus on AI software-related issues, AI hardware receives far less attention. Understanding AI as software, as an artificial mind, highlights only the supposedly new and exciting aspects of this technology and ignores the human and material costs of its fabrication. This is consistent with the traditional mind-body dualism, which prioritises mind over body and thus skews our perception of the problem. To counter the dominant narratives, this article proposes a concept of AI as hardware/software to broaden the scope of ethical and legal issues that ought to be addressed through AI regulation. A holistic and systemic treatment of the AI phenomenon robs it of its perceived uniqueness. Once the worldwide extraction of materials, labour, and data necessary to set up AI machinery is seriously considered, AI stands out as yet another instance of colonial capitalism.

Key words
artificial intelligence, ethics, human rights, extractivism, colonialism.