ISSN 2463-9281 / ISSN 2232-5204
Izzivi prihodnosti / Challenges of the Future
Letnik 10, številka 4, november 2025 / Volume 10, Issue 4, November 2025

KAZALO VSEBINE / TABLE OF CONTENTS
155 UPSKILLING OLDER EMPLOYEES IN THE ARTIFICIAL INTELLIGENCE ERA (Tinkara Žabar, Aleksander Janeš)
173 OBSTACLES TO THE IMPLEMENTATION OF INNOVATIONS IN START-UP COMPANIES (Nerman Ljevo, Sabina Šehić-Kršlak)
188 AI IS NOT A TOOL: THE IMPACT OF GROWING AI AGENCY ON THE FUTURE OF WORK (Alexander van Biezen)
DODATEK / APPENDIX

Izzivi prihodnosti / Challenges of the Future, Vol. 10, No. 4, November 2025
DOI: 10.37886/ip.2025.007

Upskilling Older Employees in the Artificial Intelligence Era

Tinkara Žabar, University of Primorska, Faculty of Management, tinkara.kodelja@fm.upr.si
Aleksander Janeš, University of Primorska, Faculty of Management, aleksander.janes@fm.upr.si

Abstract
Research Question (RQ): What is the effect of new technologies, with an emphasis on artificial intelligence (AI), on the need to upskill older employees (50+ years)?
Purpose: The purpose of the research was to carry out a systematic literature review of existing research on the effect of AI on the upskilling needs of older employees.
Method: We performed a systematic literature review across six academic search engines: ProQuest, Emerald, Sage Journals, Springer, ResearchGate, and Google Scholar.
Results: Artificial intelligence is significantly transforming the labor market, as it requires constant adaptation to new skills and knowledge. AI has a significant effect on older employees, who face greater challenges due to a possible lack of digital skills and sensitivity to change. In this context, training and further education are key mechanisms for ensuring that skills match the requirements of the work environment and the labor market. Organizations must adapt quickly to changing requirements by creating a culture of lifelong learning that encourages older and other employees to improve.
Training programs must be based on the specific needs and challenges faced by older employees.
Organization: The research emphasizes the importance of training older employees in the age of AI and encourages organizations to create a culture of lifelong learning as part of the organization's strategic directions and goals.
Society: The societal importance of the research lies in its insight into involving all age groups in improving knowledge, skills, and attitudes towards the use of modern technologies. Organizations and society itself bear the social responsibility to enable older employees to integrate successfully into the work environment in the AI era.
Originality: The research addresses the need to improve the skills of a specific age group in the age of AI, while simultaneously highlighting the importance of fostering a culture of lifelong learning in a rapidly changing world. The research findings provide guidelines for national-level training policymaking in the context of an aging workforce and new technologies.
Limitations/further research: The literature review was limited to six publicly available databases. In the article, older employees were considered to be people in the labor process older than 50 years. We must emphasize that older employees differ from each other in education, economic, social, and other circumstances. Further research should investigate the effect of new technologies with regard to these specific circumstances in this age group.
Keywords: knowledge society, upskilling, knowledge management, retraining, older employees, artificial intelligence, lifelong learning.
Received: 2025-06-25, revised: 2025-07-02, accepted: 2025-10-09
Original Review Paper
1 Introduction

Technological advances, particularly artificial intelligence (AI), are profoundly reshaping job roles and work structures. Organizations are integrating AI technology into their business to stay competitive. However, the introduction of such technologies into business processes exposes inequalities such as digital skills gaps, employment volatility, and the impact of automation and robotics, in parallel with processes of job destruction and creation (Cramarenco et al., 2023, pp. 732). These transformations underscore a pressing imperative for large-scale workforce upskilling, with projections indicating that by 2050, approximately 50% of all employees will require upskilling as a result of the use of emerging technologies in the workplace. To ensure meaningful employee engagement, organizations must provide learning opportunities that equip workers for these transitions. Such efforts contribute to the development of an inclusive and resilient knowledge-based economy in which all individuals can participate meaningfully (Li et al., 2023, pp. 1697). New technologies often have a stronger impact on older workers, as they tend to have less developed digital competencies for managing AI than their younger colleagues. In the literature, older employees are generally defined as individuals who are active in the labor market. The age at which someone is defined as an older employee is not universally agreed upon. Most studies refer to those aged 50 and above (Novak, 2023, pp. 23), with definitions typically starting between age 45 and 55 and the upper age limit often left unspecified (Krašovec, 2015, pp. 30). In our work, we refer to older employees as those who are over 50 years of age and actively participating in the labor force.
Despite the valuable professional experience that older workers possess, many of them lack the digital skills necessary to engage effectively with AI systems, making targeted retraining and upskilling efforts essential (Tiku, 2023; Chetty, 2023). A common misconception is that older people are disinterested in training. However, participation often depends on the training format. When learning formats are adapted to meet older employees' preferences, they are significantly more inclined to engage in further education (Zwick, 2015, pp. 146). Many researchers point out that creating a culture of lifelong learning in the workplace is crucial for fostering skills development in a rapidly changing world and labor market (Tiku, 2023; Li et al., 2023; Vuorenkoski et al., 2018; Pradhan and Saxena, 2023). Based on a systematic literature review, we aimed to answer the following research question: What is the effect of AI on the need to upskill older workers? In the fourth section, the main results will be outlined based on the articles included in the literature review.

2 Theoretical framework

Digital technologies, particularly AI, commonly defined as computer programs capable of performing tasks that typically require human reasoning, represent a key advance in technological development (Bruun and Duka, 2018, pp. 1). With these characteristics, AI plays a transformative role and significantly reshapes the nature of work by influencing job structures, business operations, and employee monitoring (Classen et al., 2018, pp. 23). One of the key contributions of AI is that it enables the automation of certain processes, such as repetitive, everyday tasks that do not require expert knowledge. This, in turn, allows employees to focus on tasks that require human attention and understanding (Haizir, 2022, pp. 7).
In parallel with technological advances in the workplace, the workforce is shrinking. The adoption of new technologies is therefore more significant in countries with rapidly aging populations, where innovation is often driven by the need to compensate for a shrinking workforce (Acemoglu and Restrepo, 2021). As digital technologies continue to evolve and become more integrated into the workplace, active participation in the labor market increasingly depends on possessing advanced digital competencies (Komp-Leukkunen et al., 2022, pp. 37–38). The integration of AI into companies also introduces challenges, such as creating new tasks and threatening job losses for employees with limited digital skills, which changes the skills and competencies needed in the labor market (OECD, 2024). As a result of the integration of AI into work processes, digital literacy has become one of the most desirable characteristics an employee can have (Bokek-Cohen, 2018, pp. 21). The integration of new technologies also represents a major shift in the labor market. Bruun and Duka (2018, pp. 3) researched the impact of the level of automation on the labor market and how the economy might adapt to it by 2038, using chess terminology. On this basis, they developed three scenarios:
- Stalemate: The AI revolution will be much smaller than expected and will not change the nature of work. The economy does not need to adapt; employment in 2038 will look the same as today. In this scenario, there is no cause for concern, and governments can continue business as usual.
- Check: Despite the wave of automation, the economy can move and adapt, allowing new jobs to be created to replace those lost. The transition may cause some initial chaos in the labor market, but after a period of discomfort, stability will follow.
- Checkmate: The AI revolution will lead to rapid job losses as the economy, governments, and individuals fail to keep up.
The main concern is that the economy might not adapt quickly enough, which would be reflected in an increased risk of technological unemployment and social instability.

It is difficult to determine which of these scenarios will occur in the future. According to Bruun and Duka (2018, pp. 5), it is best to prepare for the worst-case scenario, namely checkmate. One of the most effective strategies for preparing for unknown scenarios is investing in upskilling and skills development programs to keep people competitive as AI shapes the functioning of the labor market. These programs would enable the current working-age generation, who are most affected by AI, to acquire the relevant skills and operate more confidently in the labor market. It is assumed that future generations will need to be retrained several times in their working lives in order to keep pace with technological progress. Learning will therefore not end after tertiary education but will continue throughout an individual's life course (Bruun and Duka, 2018, pp. 10). Digital technologies are not only transforming the economy and sectors but also creating new jobs and demanding new skills. As a result, employers are seeking individuals who possess technical and digital skills, as well as skills that automation has not yet been able to replicate, such as cognitive and social skills (Lincoln, 2017, pp. 7). Taking the uncertainty of the impact of AI on jobs into account, the most effective response lies in developing systems that promote both individual capabilities and societal learning. A good level of basic skills and a broad knowledge base will thus be reflected in citizens' learning opportunities. However, in a rapidly changing economy, it is necessary to understand that skills that currently ensure high wages may soon lose their relevance (Vuorenkoski et al., 2018, pp. 39–40).
The present workforce is not yet ready to embrace new technology due to a lack of relevant skills. Consequently, organizations must invest in upskilling and encourage workers to incorporate AI into their everyday tasks (Pradhan and Saxena, 2023, pp. 181). New technologies impact age groups in distinct ways. Younger individuals generally possess stronger capabilities for understanding and adapting to technological developments compared to older age groups (Tiku, 2023, pp. 3). Older workers, particularly the low-skilled, are often described as having lower digital competencies. This can lead to a sense of threat from advances in automation, creating a skills mismatch and even leading to earlier retirement (Aisa et al., 2023, pp. 9). Besides the lack of digital skills, older workers also face age discrimination, which affects their employability (Lee et al., 2008). In response to these intersecting challenges, upskilling programs represent a key component of broad workforce strategies. Employees are generally interested in training programs but do not attend them due to various socio-economic barriers, such as the cost of education or family commitments (Bianco, 2021, pp. 9). The older population in the workplace has greater difficulty transitioning to new job roles due to automation, while adult learning participation is generally lower, making it harder for this age group to pursue upskilling and retraining (Nedelkoska and Quintini, 2018). There is also a lack of incentives for employers to invest in the training of older employees, compounded by low participation rates (Alcover et al., 2021).
It should be emphasized that the workplace has a significant impact on older workers in the process of digitization, through the implementation of information and communication technologies (ICT), the possibility to work remotely, the training offered, and the attitudes of employers and managers towards employees (Komp-Leukkunen et al., 2021, pp. 49). It is crucial to ensure an inclusive work environment for older employees by offering them skills development, career development opportunities, flexible education and training programs (Cramarenco et al., 2023, pp. 747), and intergenerational cooperation (Waligóra, 2024, pp. 103). However, the lack of basic skills remains a key barrier to the uptake of AI and poses a challenge for adults with limited digital skills. This reflects the need for tailored skills policies, as this age group, together with low-skilled workers and people living in rural areas, faces the lowest participation rates in training programs (OECD, 2024). Upskilling older employees enhances their competitiveness in the labor market (Trunkina et al., 2019, pp. 522). At the same time, organizations also benefit from investing in their employees. A failure to support older employees in adapting to new technologies risks the loss of valuable knowledge, experience, and perspective that this demographic brings to the workplace (Zwick, 2015, pp. 146). Adult education trainers must take into account the specific motivations of older employees that affect their participation in such programs. This age group wants flexible training programs that provide practical and immediately relevant knowledge (Zwick, 2015, pp. 146). In the case of specific training for the use of AI, such training should include specific knowledge that can be applied in the workplace of the current employer, while also developing critical thinking and metacognitive skills (Chetty, 2023, pp. 9).
Given the fast-paced evolution of skill requirements in the labor market due to technological advancement, fostering a culture of lifelong learning within organizations should be considered a strategic priority and integral to their long-term objectives (Lincoln, 2017; Bianco, 2021; Li et al., 2023). Embedding continuous learning into organizational culture supports workforce adaptability and enhances resilience in rapidly changing environments.

3 Method

The research is based on a systematic literature review, which allows a structured and comprehensive analysis of existing research on the chosen topic. The scientific research articles were retrieved from the following databases: ProQuest, Emerald, Sage Journals, Springer, ResearchGate, and Google Scholar. The keywords used to search the databases were "older employees" OR "older workers" AND "upskilling" OR "skills" AND "artificial intelligence" OR "AI" OR "artificial intelligence era". The data collection took place between November 2024 and January 2025. The total number of articles found by keyword was 15,888. In our literature review, we set inclusion criteria that guided our research collection. The inclusion criteria were that:
- the article was published between 2014 and 2024;
- the text was published in English;
- the full text was available; and
- the article focused on the topic of upskilling older workers in relation to new technologies, with a focus on AI.
The systematic literature review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) principles (Page et al., 2021). The keywords mentioned above were entered into the databases, and duplicate records (n=658) were removed from the search. This was followed by a screening of titles and abstracts based on the topic under consideration.
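The selection procedure described above can be sketched as a simple filtering pipeline. The sketch below is purely illustrative: the `Record` fields and the sample records are assumptions for demonstration, not data from this review.

```python
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    year: int
    language: str
    full_text: bool
    on_topic: bool

def prisma_screen(records):
    """Apply the review's inclusion criteria step by step, PRISMA-style."""
    # Step 1: remove duplicate records found across databases.
    seen, unique = set(), []
    for r in records:
        key = r.title.lower()
        if key not in seen:
            seen.add(key)
            unique.append(r)
    # Step 2: apply the inclusion criteria listed above:
    # published 2014-2024, in English, full text available, on topic.
    included = [r for r in unique
                if 2014 <= r.year <= 2024
                and r.language == "English"
                and r.full_text
                and r.on_topic]
    return unique, included

# Illustrative example with three hypothetical records, one a duplicate.
records = [
    Record("AI and older workers", 2023, "English", True, True),
    Record("AI and older workers", 2023, "English", True, True),  # duplicate
    Record("Unrelated study", 2010, "English", True, False),
]
unique, included = prisma_screen(records)
print(len(unique), len(included))  # 2 unique records, 1 included
```

In the actual review, the same two steps reduced 15,888 retrieved records (minus 658 duplicates) to the 13 publications analyzed.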
Publications that did not address our research objective were excluded, and we continued by reading the full texts. Thirteen publications were included in the final analysis according to the inclusion criteria and the research question. The publication collection procedures are shown in Figure 1.

Figure 1 The process of systematic selection of publications according to PRISMA

4 Results

Table 1 provides a synthesis of the findings from the 13 articles that met the inclusion criteria. The first column lists the authors and year of publication, followed by columns detailing the (original) title, methodology, research region, publication type, and a summary of key findings.

Table 1 Author and year of publication, title, methodology, research region, publication type, and key findings

Alcover et al., 2021. Aging-and-Tech Job Vulnerability: A Proposed Framework. Literature review and concept development; Europe; Journal Article.
Key findings:
- Older workers may lose their jobs or have their job quality reduced due to the interaction between ageing and job automation.
- Both individuals and organizations are responsible for reducing the vulnerability of older employees.

Bianco, 2021. Ageing Workers and Digital Future. Literature review and secondary data analysis; Europe; Journal Article.
Key findings:
- Two simultaneous processes are under way in developed countries: ageing populations and technological innovation.
- Higher-educated workers are more willing to participate in training programs.
- Lifelong learning keeps older workers competitive in the labor market.
Bokek-Cohen, 2018. Conceptualizing Employees' Digital Skills as Signals Delivered to Employers. Literature review (Spence's signaling theory); Israel; Journal Article.
Key findings:
- Digital skills act as a signal to the employer, informing them of the (future) employee's capabilities, adaptability, and potential within the organization.
- Older employees need to compensate for their proportionally higher age by learning new skills.

Casas and Román, 2024. The Impact of Artificial Intelligence in the Early Retirement Decision. Quantitative, secondary data from the European Survey on Health, Ageing and Retirement; Europe; Journal Article.
Key findings:
- AI has a dual impact on the decision to retire early, depending on workers' education and the nature of the occupation.
- More educated workers are better able to adapt to technologies.

Chetty, 2023. AI Literacy for an Ageing Workforce: Leveraging the Experience of Older Workers. Literature review; South Africa; Journal Article.
Key findings:
- The development of the digital economy has encouraged companies to train employees in AI literacy.
- Older employees are at a disadvantage compared to younger employees, who have more advanced knowledge of the latest technologies.
- AI can enable older workers to participate strategically in the digital economy.
- Training programs for working with AI for older employees should contain knowledge specific to their jobs.

Cramarenco et al., 2023. The Impact of AI on Employees' Skills and Well-being in Global Labor Markets: A Systematic Review. Literature review; Europe; Journal Article.
Key findings:
- Technological advancement is affecting jobs, leading to (digital) skills gaps and job volatility.
- Key solutions include continuous skills upgrading, financial support for lifelong learning, tax incentives for employers, grants for flexible training programs, and international partnerships to promote mobility.
Cros et al., 2021. Is the Obsolescence of the Skills of Older Employees an Inevitable Consequence of Digitalization? Literature review; Europe; Book Chapter.
Key findings:
- Digital technology is reforming the organization of workplaces and requires employees to develop new skills.
- This development is not a given for vulnerable groups.

Hughes et al., 2019. A Hiring Paradigm Shift through the Use of Technology in the Workplace. Literature review and panelist interviews; America; Book Chapter.
Key findings:
- The "greying" of workplaces is forcing a paradigm shift in workplace policies.
- The ageing workforce is forcing organizations to better understand the acceptance and retraining of an older workforce.
- Strategies to meaningfully engage an ageing workforce: learning and development, knowledge transfer, and career path development.
- Lifelong learning is becoming a growing trend across all ages.

Li et al., 2023. Does Artificial Intelligence Promote or Inhibit On-the-Job Learning? Human Reactions to AI at Work. Quantitative data from the China General Social Survey (CGSS); China; Journal Article.
Key findings:
- The impact of AI on workplace learning affects older workers, women, and those with lower levels of education, as well as those without employment contracts and with less job autonomy and work experience.
- Based on this, companies and governments should pay more attention to these employees and consider providing them with more opportunities for training and job protection.

Lincoln, 2017. An Ageing Workforce in The Digital Era: Older Workers, Technology and Skills. Quantitative and qualitative; Europe; Report.
Key findings:
- Elderly people are exposed to stereotypes and discrimination.
- Employers should provide more training and development for older people, create a lifelong learning culture, and tailor training programs for the older age group.
Morandini et al., 2023. The Impact of Artificial Intelligence on Workers' Skills: Upskilling and Reskilling in Organisations. Literature review; Europe; Journal Article.
Key findings:
- The age-related digital divide affects how individuals perform their jobs and how they upskill and reskill.
- Organizations that do not invest in upskilling older employees may lose the necessary skills, experience, and perspectives.

Tiku, 2023. AI-Induced Labor Market Shifts and Aging Workforce Dynamics: A Cross-National Study. Qualitative case study; Japan, USA, and India; Journal Article.
Key findings:
- Older employees can acquire and improve their digital skills under the right conditions.
- Upskilling and AI can lead to longer working lives for older employees.
- The USA, Japan, and India are trying to bridge the digital divide with strategies such as intergenerational cooperation, creating inclusive workplaces, and creating opportunities for lifelong learning.

Trunkina et al., 2019. Increasing the Competitiveness of Older Age Groups in the Digitalization Environment. Literature review (secondary analysis of statistical data); Russia; Conference Paper.
Key findings:
- Increasing the competitiveness of older people requires a system of training and retraining.
- Upskilling the older generation can lead to a paradigm shift in ageism, creating a perception of older people as a productive part of society.

The number of relevant publications analyzing the upskilling of the older workforce with respect to newer technologies, such as AI, has been increasing over the years. No relevant articles were found from 2014 to 2016. An increased number of publications was noted in 2023, which can be linked to the launch of one of the most well-known generative AI tools, ChatGPT, in 2022. The number of articles by year of publication is shown in Figure 2.
Figure 2 Number of publications based on the publication date (2017–2024)

The relevance of the topic is reflected in two simultaneous processes: an ageing population and new technological innovations entering the workplace (Alcover et al., 2021; Bianco, 2021; Nedelkoska and Quintini, 2018). Older employees often have less developed digital competencies, which can hinder their effective use of new technologies (Tiku, 2023, pp. 3). This increases the digital divide, which widens the gap between individuals in terms of access to and use of ICT (Dolničar et al., 2002, pp. 83). Upskilling older workers is crucial to maintaining their competitive advantage in the labor market (Trunkina et al., 2019, pp. 522) and to reducing the chances of early retirement (Casas and Román, 2024) by enhancing their adaptability to technological change. The introduction of new technologies in the workplace encourages organizations to create a culture of lifelong learning, which is key to adapting to the new labor market (Tiku, 2023; Li et al., 2023; Vuorenkoski et al., 2018; Pradhan and Saxena, 2023).

5 Discussion

AI can be understood as a socio-technical construct, shaped through the dynamic between users and machines or technology (Orr and Davis, 2020). Despite these findings, researchers, academics, and practitioners have also raised concerns about the lack of methods to incorporate accountability in socio-technical systems (Verdiesen et al., 2021). To address this gap, there is a need to link the design, production, and implementation phases of AI development, including its governance initiatives (Birkstedt et al., 2023, pp. 154). These developments are not occurring in isolation; rather, they are profoundly reshaping the nature of work, as technological advancements are raising the necessity for the acquisition of new
and more complex skill sets across the workforce (Lincoln, 2017; Alcover et al., 2021). In this context, the development of digital competencies emerges as a critical precondition for the effective adoption and management of technological tools and systems (Cramarenco et al., 2023). However, some vulnerable groups, such as older workers, are often left out of such training (Cros et al., 2021). This can even widen the digital divide, since this age group has, on average, less developed digital competencies than their younger counterparts (Tiku, 2023; Janeš et al., 2023). In the context of digital transformation, older employees are defined as 'digital immigrants' who had to learn how to use digital technologies later in life (Bokek-Cohen, 2018). Conversely, younger employees are often characterized as 'digital natives' due to their early exposure to technology, which is assumed to result in more advanced digital competencies (Tiku, 2023). Furthermore, they are expected to possess more up-to-date knowledge of digital tools and systems, giving them a relative advantage over older employees in the labor market (Chetty, 2023). Moreover, digital competencies serve as a signal of adaptability and employability to employers in the labor market (Bokek-Cohen, 2018; Alcover et al., 2021). Komp-Leukkunen et al. (2022) have framed digital competencies as a crucial part of general labor market access. It is important to recognize that the disadvantaged position of older employees in the labor market cannot be attributed solely to lower levels of digital competencies (Chetty, 2023; Cros et al., 2021; Morandini et al., 2023; Li et al., 2023), but also to challenges in accessing training due to various prejudices, stereotypes, biases, and age-related discrimination (Alcover et al., 2021; Lincoln, 2017).
Therefore, new digital technologies in the workplace encourage older employees to upskill in order to remain competitive in the evolving labor market (Alcover et al., 2021). Acquiring digital competencies not only enhances their employability but also serves as a signal to employers, indicating the worker's capabilities, flexibility, and potential for the organization (Bokek-Cohen, 2018). On this basis, older employees need to 'compensate' for their age by investing in learning new skills that may not have been needed previously. On the other hand, the ageing of the workforce is forcing organizations to better understand the needs of this age group and to empower them (Hughes et al., 2019). Companies and policymakers need to consider providing more opportunities to vulnerable groups such as older employees, women, and those with less autonomy. However, highly educated people are more willing to adapt and train in the use of new technologies (Bianco, 2021; Casas and Román, 2024). Training should also be tailored to the needs of the older age group (Chetty, 2023), as this is the only way to ensure that skills do not become obsolete (Cros et al., 2021). If older employees upskill, they will be able to participate in the digital economy as strategic decision-makers (Chetty, 2023). It is crucial to develop suitable systems for training and retraining employees, including the development and implementation of programs aimed at the active inclusion of the older population in the digital economy and knowledge society (Trunkina et al., 2019). Older employees are at a disadvantage compared to younger ones when looking for a new job (Morandini et al., 2023). With appropriate adjustments, older employees could gain advanced skills and relevant knowledge, prolonging the working lives of adults (Tiku, 2023).
However, this relationship appears to be influenced by workers' educational backgrounds. Casas and Román (2024) emphasize that exposure to AI reduces the likelihood of early retirement among more educated workers, suggesting that education plays a mediating role in how digital transformation affects labor market participation in later life. Based on that, there is a need for more structured and systematic research on how organizations can effectively adapt to technological changes in the workplace and support the continuous upskilling of older employees, considering their particular needs and backgrounds. Firstly, there should be rigorous empirical research on older employees and their attitudes toward AI and AI training, which would give employers practical guidelines on how to design and implement training programs effectively for the ageing workforce. Further research should prioritize longitudinal studies that explore the participation and experiences of older employees in the workplace and in training based on guidelines suggested by previous studies. Such studies would offer critical insight into the long-term effectiveness of age-sensitive interventions. As a key strategy for promoting the development of people in the knowledge society and competitiveness in the labor market, researchers (Bianco, 2021; Cramarenco et al., 2023; Hughes et al., 2019) suggest lifelong learning programs. Meanwhile, Tiku (2023) also highlights the importance of intergenerational cooperation and the creation of inclusive workplaces as a strategy for bridging the digital divide. It should be emphasized that intergenerational cooperation is valuable in both directions. Firstly, older employees can gain skills in using modern technologies from younger colleagues, which can significantly contribute to supporting work in organizational processes.
Secondly, younger employees can gain important insight and knowledge about the content of work that older employees have developed and mastered over the years. The adoption of such approaches and strategies holds the potential to challenge the "age paradox", fostering a shift in how older employees are perceived, not as passive or obsolete, but as active and valuable contributors to society (Trunkina et al., 2019). Despite the emphasis on the importance of adjusted strategies, none of the articles included in the literature review examined the personal views of older employees on new technologies. Based on that, we suggest a qualitative study that assesses the personal views of older employees about managing AI systems in the workplace. The literature review has enabled us to answer our research question: what is the impact of AI on the training needs of older employees? New technologies such as AI encourage older employees to constantly improve their skills, as upskilling remains one of the most effective strategies for maintaining competitiveness in the labor market. Companies are introducing new technologies into their operations, which can be understood as competitive pressure: companies must update their work processes with modern technologies or risk their continued viability. Moreover, with demographic changes, prolonged working lives, and the integration of AI in the workplace, training is particularly crucial for older employees, as they generally have less developed digital skills for managing new technologies. Training can also help them overcome fears, mistrust, and resistance when integrating modern technologies into the workplace and can serve as a strategy to prolong working lives.
To sum up, cultivating a culture of lifelong learning is imperative, as it supports continuous skill development among employees of all ages in response to the evolving nature of work. Furthermore, training programs should be tailored to the specific needs of participants, considering factors such as age, existing digital competencies, and the nature of their work. 6 Conclusion Nowadays, two processes are taking place simultaneously: rapid technological development and the aging of the workforce. This article investigates the impact of new technologies, with an emphasis on AI, on older employees and the need to upskill them. Generally, older employees have less developed digital competencies needed to manage new technologies, which are transforming jobs and creating new demands. Without upgrading their knowledge and skills, they may become uncompetitive in the labor market, face early retirement, or even job loss. Based on that, older employees must adapt to new forms of work and engage in upskilling or retraining initiatives. Older employees are not a homogeneous age group, as they differ in education, social, economic, and other circumstances. Higher-educated older employees tend to be more willing to participate in training programs. However, it is essential to emphasize that training programs must be tailored to the needs of older employees, as their effective participation depends on such adaptation. Moreover, investing in the training of older workers can significantly contribute to organizational efficiency and overall success. If organizations do not invest in training older employees, they may lose important knowledge and information that older employees possess. Additionally, training enables older employees to participate strategically in the digital economy and increase their competitiveness within the labor market.
Consequently, it is imperative for organizations to foster a culture of lifelong learning that extends beyond the older generation, equipping the entire workforce with relevant skills in the face of rapid technological progress. Furthermore, fostering intergenerational cooperation and cultivating inclusive workplaces are also important components of the knowledge society. The development of digital skills and knowledge of AI management can enable older employees to extend their working lives, increase their competitiveness in the labor market, and play a strategic role in decision-making. This systematic literature review focused on the impact of new technologies on a specific age group. The relatively low number of articles included in the research and the increase in the number of publications after 2022 indicate the relevance of the topic and the need for further research in this area. It should be emphasized that in the article, we consider older employees as people who are over 50 years of age. However, older employees are not a homogeneous group defined solely by chronological age. Building on these considerations, further research should study older employees in relation to their educational structure and the social, economic, and other factors that might influence their capacity to adapt to technological change. In parallel, further studies should examine how workplaces and training programs can be more effectively adapted to the specific needs of an ageing workforce within rapidly evolving technological environments. This would enable researchers to formulate practical guidelines for employers seeking to support digital inclusion across all age groups. In addition, qualitative research focusing on the personal experiences of older employees managing new technologies would offer valuable insights into the coping strategies, perceptions, and barriers faced by this specific age group.
The article makes an important contribution for managers, as it emphasizes the importance of creating a culture of lifelong learning to bridge the digital divide among employees and equip the workforce with the skills essential in the modern labor market. Older employees bring a wealth of experience and knowledge that can greatly benefit both organizations and society. By providing targeted training, their competitiveness and value in the labor market can be significantly enhanced. The research also raises the question of how training programs for this generation should be designed to effectively support their adaptation to new technologies. Furthermore, the findings have broader implications for policymakers, offering guidance for shaping national training strategies in response to the dual challenges of workforce ageing and technological advancement. The research also has some limitations. Firstly, we were limited to articles published only in the English language. Secondly, we have defined older employees as a homogeneous group and did not account for possible variation within this age group, which opens up a new research area. References 1. Acemoglu, D., & Restrepo, P. (2021). Demographics and automation. The Review of Economic Studies, 89(1), 1–44. 2. Aisa, R., Cabeza, J., & Martin, J. (2023). Automation and aging: The impact on older workers in the workforce. The Journal of the Economics of Ageing, 26, 1–12. 3. Alcover, C.-M., Guglielmi, D., Depolo, M., & Mazzetti, G. (2021). Aging-and-Tech Job Vulnerability: A proposed framework on the dual impact of aging and AI, robotics, and automation among older workers. Organizational Psychology Review, 11(2), 175–201. 4. Bianco, A. (2021). Ageing workers and digital future. Rivista trimestrale di scienza dell'amministrazione, 3(3), 1–22. 5. Birkstedt, T., Minkkinen, M., Tandon, A., & Mäntymäki, M. (2023). AI governance: themes, knowledge gaps and future agendas. Internet Research, 33(7), 133–167. 6. Page, M.
J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T., Mulrow, C.,… Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. British Medical Journal. 7. Bokek-Cohen, Y. (2018). Conceptualizing employees’ digital skills as signals delivered to employers. International Journal of Organization Theory & Behavior, 21(1), 17–27. 8. Bruun, E.P.G., & Duka, A. (2018). Artificial Intelligence, Jobs and the Future of Work: Racing with the Machines. Basic Income Studies, 13(2), 1–15. 9. Casas, P., & Román, C. (2024). The impact of artificial intelligence in the early retirement decision. Empirica, 51, 583–618. 10. Chetty, K. (2023). AI literacy for an ageing workforce: Leveraging the experience of older workers. OBM Geriatrics, 7(3), 1–17. 11. Classen, J., Wegemer, D., Patras, P., Spink, T., & Hollick, M. (2018). Anatomy of a Vulnerable Fitness Tracking System: Dissecting the Fitbit Cloud, App, and Firmware. In T. Ploetz & L. Mamykina (Eds.), Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2(1), (pp. 1–24). ACM Digital Library. 12. Cramarenco, R. E., Burcă-Voicu, M. I., & Dabija, D.-C. (2023). The impact of artificial intelligence (AI) on employees’ skills and well-being in global labor markets: A systematic review. Oeconomia Copernicana, 14(3), 731–767. 13. Cros, F., Bobillier Chaumon, M.-E., & Cuvillier, B. (2021). Is the obsolescence of the skills of older employees an inevitable consequence of digitalization? In M.-E. Bobillier Chaumon (Ed.), Digital transformations in the challenge of activity and work: Understanding and supporting technological changes (pp. 169–181). Retrieved 23 March 2024 from https://doi.org/10.1002/9781119808343.ch13. 14. Dolničar, V., Vukčevič, K., Kronegger, L., & Vehovar, V. (2002). Digitalni razkorak v Sloveniji. Družboslovne razprave, 18(40), 83–106. 15.
Hughes, C., Robert, L., Frady, K., & Arroyos, A. (2019). A Hiring Paradigm Shift through the Use of Technology in the Workplace. In E. Parry (Ed.), Managing Technology and Middle- and Low-skilled Employees: Advances for Economic Regeneration, 49–59. Retrieved 19 January 2024 from https://doi.org/10.1108/9781789730777. 16. Janeš, A., Madsen, S. S., Saure, H. I., Lie, M. H., Gjesdal, B., Thorvaldsen, S., … Klančar, A. (2023). Preliminary Results from Norway, Slovenia, Portugal, Turkey, Ukraine, and Jordan: Investigating Pre-Service Teachers’ Expected Use of Digital Technology When Becoming Teachers. Education Sciences, 13(8), 783. 17. Komp-Leukkunen, K., Poli, A., Hellevik, T., Herlofson, K., Heuer, A., … Motel-Klingebiel, A. (2022). Older workers in digitalizing workplaces: A systematic literature review. The Journal of Aging and Social Change, 12(2), 37–59. 18. Krašovec, S. J. (2015). Izobraževanje in usposabljanje starejših delavcev – mednarodna primerjava. Andragoška spoznanja, 21(2), 29–46. 19. Lee, C. C., Czaja, S. J., & Sharit, J. (2008). Training older workers for technology-based employment. Educational Gerontology, 35(1), 15–31. 20. Li, C., Zhang, Y., Niu, X., Chen, F., & Zhou, H. (2023). Does artificial intelligence promote or inhibit on-the-job learning? Human reactions to AI at work. Systems, 11(114), 1–26. 21. Lincoln, J. (2017). An Ageing Workforce in The Digital Era: Older Workers, Technology and Skills. Business in the Community, A Business in the Community report, supported by Tata Consultancy Services. UK: The Prince’s Responsible Business Network, Tata Consultancy Services. Retrieved 30 January 2025 from https://www.bitc.org.uk/wp-content/uploads/2022/12/bitc-report-age-ageing-worforce-digital-era-march20.pdf. 22. Morandini, S., Fraboni, F., De Angelis, M., Puzzo, G., Giusino, D., & Pietrantoni, L. (2023).
The impact of artificial intelligence on workers’ skills: Upskilling and reskilling in organisations. Informing Science, 26, 39–68. 23. Nedelkoska, L., & Quintini, G. (2018). Automation, skills use and training (Vol. 202). Paris: OECD Publishing. 24. Novak, V. (2014). Izzivi dolgožive družbe: staranje prebivalstva, trg dela in ravnanje s starejšimi zaposlenimi. In M. Bernik (Ed.), Transformacija kadrovskega managementa (pp. 19–44). Maribor: Univerza v Mariboru. 25. OECD. (2024). Training Supply for the Green and AI Transitions: Equipping Workers with the Right Skills, Getting Skills Right. Paris: OECD. 26. Orr, W., & Davis, J. L. (2020). Attributions of ethical responsibility by Artificial Intelligence Practitioners. Information, Communication and Society, 23(5), 719–735. 27. Pradhan, I.P., & Saxena, P. (2023). Reskilling workforce for the artificial intelligence age: Challenges and the way forward. In The Adoption and Effect of Artificial Intelligence on Human Resources Management, Part B (pp. 181–197). Emerald Publishing Limited. 28. Tiku, S. (2023). AI-Induced Labor Market Shifts and Aging Workforce Dynamics: A Cross-National Study of Corporate Strategic Responses in Japan, USA, and India. SSRN Electronic Journal. 29. Trunkina, L. V., Kipervar, E. A., & Mizya, M. S. (2019). Increasing the competitiveness of older age groups in the digitalization environment. In International Scientific and Practical Conference on Digital Economy (ISCDE 2019) (pp. 236–239). Atlantis Press. 30. Verdiesen, I., Tubella, A. A., & Dignum, V. (2021). Integrating comprehensive human oversight in drone deployment: a conceptual framework applied to the case of military surveillance drones. Information (Switzerland), 12(9), 1–13. 31. Vuorenkoski, V., Lehikoinen, A., Hakola-Uusitalo, T., & Urrila, P. (2018). Learning and skills in transition. In O. Koski & K. Husso (Eds.), Work in the age of artificial intelligence: Four perspectives on the economy, employment, skills and ethics. 
(pp. 37–46). Publications of the Ministry of Economic Affairs and Employment. 32. Waligóra, Ł. (2024). Employees' age diversity - between supportive workplaces and organizational outcomes. Katowice: University of Economics in Katowice. 33. Zwick, T. (2015). Training older employees: what is effective? International Journal of Manpower, 36(2), 136–150. *** Tinkara Žabar is a junior researcher and a doctoral student in the Management program at the Faculty of Management, University of Primorska. She completed her undergraduate studies in pedagogy and obtained a master's degree in sociology, specializing in human resources, knowledge, and organizational management. Before joining the Faculty of Management, she gained practical experience in human resource management by working in HR departments of various organizations as a student, deepening her understanding of HR processes. She actively participates in the development of Erasmus+ Sport programs, through which she has attended numerous international conferences. Additionally, she has enhanced her professional competencies through further education, including the HR Starter program and the Leadership Academy. Her research focuses on the impact of artificial intelligence on employees, analyzing changes in the work environment as well as the challenges and opportunities brought by digital transformation. Tinkara Žabar je mlada raziskovalka in doktorska študentka na programu Management na Fakulteti za management Univerze na Primorskem. Dodiplomski študij je zaključila na področju pedagogike, magistrirala pa iz sociologije s specializacijo iz upravljanja človeških virov, znanja in organizacij.
Pred zaposlitvijo na Fakulteti za management je kot študentka pridobivala praktične izkušnje na področju upravljanja človeških virov z delom v kadrovskih službah različnih organizacij, kjer je poglobila razumevanje kadrovskih procesov. Aktivno sodeluje pri razvoju programov Erasmus+ Sport, v okviru katerih se je udeležila številnih mednarodnih konferenc. Svoje strokovne kompetence je dodatno nadgrajevala z nadaljnjim izobraževanjem, med drugim v programih HR Starter in Akademija vodenja. Njeno raziskovalno delo je usmerjeno v proučevanje vpliva umetne inteligence na zaposlene, pri čemer analizira spremembe v delovnem okolju ter izzive in priložnosti, ki jih prinaša digitalna preobrazba. *** Aleksander Janeš is a Professor in the fields of Operational Management, Quality Management, and Business Excellence at the Faculty of Management, University of Primorska. As an experienced expert and researcher, he serves as the Program Director of the Master's in Management program, bringing over 29 years of professional experience. His research expertise and interests encompass various perspectives on project management systems and performance measurement, (green, blue, and sustainable) business models, business process management, as well as management tools in the fields of digitalization and process management, inclusive education and skills, and youth and media (https://orcid.org/0000-0001-5678-0737). Aleksander Janeš je profesor na področjih operativnega managementa, managementa kakovosti in poslovne odličnosti na Fakulteti za management Univerze na Primorskem. Kot izkušen strokovnjak in raziskovalec deluje tudi kot direktor magistrskega študijskega programa Management ter ima več kot 29 let delovnih izkušenj. 
Njegova raziskovalna strokovnost in interesna področja obsegajo različne vidike sistemov projektnega vodenja in merjenja uspešnosti, (zelene, modre in trajnostne) poslovne modele, management poslovnih procesov ter managerska orodja na področjih digitalizacije in procesnega managementa, vključujočega izobraževanja in razvoja kompetenc ter mladih in medijev. *** Povzetek Izpopolnjevanje starejših zaposlenih v dobi umetne inteligence Raziskovalno vprašanje (RV): Kakšen je učinek novih tehnologij, s poudarkom na umetni inteligenci (UI), na potrebo po izpopolnjevanju starejših zaposlenih (50+ let). Namen: Namen raziskave je bil opraviti sistematični pregled literature dosedanjih raziskav s področja učinka UI na potrebe po izpopolnjevanju starejših zaposlenih. Metoda: Opravili smo sistematični pregled literature v šestih akademskih iskalnikih, in sicer ProQuest, Emerald, Sage Journals, Springer, Research Gate ter v Google Učenjaku. Rezultati: Umetna inteligenca pomembno preoblikuje trg dela, saj zahteva nenehno prilagajanje novim spretnostim in znanju. Pomemben učinek ima UI na starejše zaposlene, ki so zaradi morebitnega pomanjkanja digitalnih veščin in občutljivosti na spremembe izpostavljeni večjim izzivom. V tem kontekstu sta izpopolnjevanje in dodatna izobrazba ključna mehanizma za zagotavljanje skladnosti spretnosti z zahtevami delovnega okolja in trga dela. Organizacije se morajo hitro prilagajati spreminjajočim se zahtevam z oblikovanjem kulture vseživljenjskega učenja, ki spodbuja starejše in ostale zaposlene k izpopolnjevanju. Ključno je, da izobraževalni programi temeljijo na specifičnih potrebah in izzivih, s katerimi se soočajo starejši zaposleni. Organizacija: Raziskava poudarja pomen izpopolnjevanja starejših zaposlenih v dobi umetne inteligence in spodbuja organizacije k ustvarjanju kulture vseživljenjskega učenja kot dela strateških usmeritev in ciljev organizacije.
Družba: Pomen raziskave za družbo se odraža v vpogledu v vključenost vseh starostnih skupin v možnosti izpopolnjevanja znanja, veščin in odnosa do uporabe sodobnih tehnologij. Organizacije in družba sama so namreč nosilci socialne odgovornosti, da starejšim zaposlenim omogočijo uspešno vključevanje v delovno okolje v dobi UI. Originalnost: Raziskava naslavlja potrebo po izpopolnjevanju specifične starostne skupine v dobi UI, pri čemer sočasno osvetljuje pomen ustvarjanja kulture vseživljenjskega učenja v hitro se spreminjajočem svetu. Omejitve/nadaljnje raziskovanje: Pregled literature je bil omejen na šest javno dostopnih baz podatkov. V članku so bili starejši zaposleni obravnavani kot vse osebe v delovnem procesu, starejše od 50 let. Pri tem je potrebno izpostaviti, da se starejši zaposleni med seboj razlikujejo glede na izobrazbo, ekonomske, socialne in druge okoliščine. Primerno bi bilo, da bi učinek novih tehnologij raziskali tudi glede na omenjene okoliščine pri tej starostni skupini. Ključne besede: družba znanja, izpopolnjevanje, management izobraževanja, prekvalifikacija, starejši zaposleni, umetna inteligenca, vseživljenjsko učenje. This work is licensed under a Creative Commons Attribution 4.0 International License. This journal is published by Faculty of Organisation Studies in Novo mesto. DOI: 10.37886/ip.2025.008 Obstacles to the Implementation of Innovations in Start-Up Companies Nerman Ljevo1, Sabina Šehić-Kršlak2 1 Faculty for Management and Business Economy, Bosnia and Herzegovina, ljevo.nerman@gmail.com 2 Center for Advanced Technologies, Bosnia and Herzegovina, ssabina83@yahoo.com Abstract Research Question (RQ): Start-up companies underpin the development of every economy. However, they are a particularly fragile part of the economy, because a large number of start-up companies fail in the first years of their development.
The results of previous research indicate that there are several reasons for the failure of start-ups, and one of the key ones is the difficulty of implementing innovations. In today's modern and turbulent times, in the embrace of globalization, innovations are an absolute must-have. Companies that are not innovation-oriented are doomed to failure. Even large, multinational companies pay for a lack of innovation, so this is a signal to start-ups that investing in innovation is among the most important items in their budget. However, there are numerous challenges and obstacles to the application of innovations in start-ups. The main problem of the research is to identify the most important obstacles and challenges to the application of innovations in Bosnian start-ups and the methods by which they can be overcome. Purpose: The main purpose of the research is to point out the obstacles and challenges faced by start-up companies in Bosnia and Herzegovina and to give adequate recommendations to start-ups and all interest groups on how to eliminate them effectively, in order to make the economy and business operations of these companies more dynamic and proactive. Method: The theoretical part of the paper gathers relevant findings from previous research studies on the issue under consideration. Through empirical research, it is determined what the most common obstacles and challenges faced by start-ups in Bosnia and Herzegovina are, what types of innovations are most often implemented in start-up companies in Bosnia and Herzegovina, and which methods and approaches can be used to overcome the mentioned obstacles. The research was done on a sample of 10 start-up companies in BiH.
Results: The results of the research show the obstacles and challenges to the implementation of innovations recorded in the sample of observed start-ups, the types of innovations most often implemented in start-ups in Bosnia and Herzegovina, and how to most effectively overcome obstacles to the implementation of innovations in start-ups, with the aim of better and more successful business. Organization: Through research of this type, managers of start-ups (or those who intend to become one) can more easily identify the basic obstacles to the implementation of innovations in their business and use the proposed methods to overcome them, in order to make their business more stable and sustainable. Society: Increasing the stability of start-ups brings long-term benefits for the entire society. Socially responsible business by start-ups can contribute to a better distribution of resources in society, which is desirable for all participants in society as well as for the environment. Originality: Research of this type represents one of the rare research studies on start-ups in Bosnia and Herzegovina. It is pioneering research in the context of challenges and obstacles to the implementation of innovations in start-ups in Bosnia and Herzegovina. Received: 2025-05-22, revised: 2025-06-19, accepted: 2025-11-06 as Original Research Paper. Limitations / further research: This study has several limitations, including a small sample size and difficulties in data collection. Future research is recommended to focus on the implementation of innovations across different industries in which start-up companies operate. Keywords: start-up, innovation, challenges, obstacles, implementation, recommendation, stability, overcome. 1 Introduction Start-up companies play a crucial role in the development of any economy.
They are important for job creation, increased competition, diversification, higher productivity, improved efficiency, and more. It can be said that start-ups are the backbone of any economy. Start-ups can also be described as drivers of innovation development. However, during their establishment, growth, and expansion, start-ups are often insufficiently recognized by the very structures that should provide them with better and more sustainable market conditions. As a result, start-up founders face numerous obstacles that limit their growth and development and complicate their survival in the market. These obstacles are particularly related to the innovation implementation process. First and foremost, it is necessary to identify the key barriers start-ups face in the context of innovation implementation and then work to minimize them. This paper presents research in the field of entrepreneurship and aims to highlight the most common barriers that start-ups encounter in the innovation implementation process—both generally and specifically in Bosnia and Herzegovina—and explore the methods and strategies that can be used to overcome these barriers. Additionally, the paper seeks to identify the key stakeholders who can contribute to minimizing the challenges faced by start-ups. The main problem of the research is the insufficient identification and understanding of the barriers that start-up companies face in the process of innovation implementation, particularly in the context of Bosnia and Herzegovina. Although start-ups are widely recognized as crucial drivers of innovation and economic development, they often encounter numerous challenges and barriers during their establishment, growth, and expansion. 
Therefore, this research in the field of entrepreneurship seeks to explore the nature and impact of these barriers, examine the strategies and methods used by start-ups to overcome them, and identify the key stakeholders who can contribute to improving the conditions under which start-ups operate. The ultimate goal is to generate findings that can help inform more supportive policies and practices aimed at fostering innovation and entrepreneurship in the region. 2 Theoretical framework Entrepreneurship is crucial for the growth of any economy. It creates job opportunities, supports rural development, drives technological progress, increases national income, promotes industrialization, and boosts exports (Sinha, 2023). The survival of startups is closely tied to innovation. Over time, technological innovation has attracted growing attention from academics, businesses, industries, entrepreneurs, and policymakers, all of whom have increasingly focused on building an entrepreneurial ecosystem that can support and integrate innovation (Nicolau & Bărbulescu, 2024). According to Fakhri and Bahoussa (2014), since the time of Schumpeter, innovation has been closely associated with startups and involves a range of factors such as technology, nature, society, government, and the broader community. All these factors can—and do—have an impact on the implementation of innovations by startups. Sopjani (2019) argues that innovations implemented by startups contribute to positive social transformation. Therefore, all stakeholders involved in the innovation implementation process should work to ensure that it is carried out as efficiently as possible.
Research conducted by Wetzel and Eiche (2024) on a sample of 152 startups shows that, at present, the most important factors for startups are developing an effective sales and customer service system, implementing financial planning, and adopting a sustainable pricing and costing model. Alawamleh et al. (2023), based on a sample of startups in Jordan, found that women encounter more innovation-related challenges than men. These challenges often relate to limited access to funding and a general lack of investment in startups. According to Banka et al. (2024), innovation challenges in startups often stem from a lack of understanding by investors and the high risk of business failure associated with implementing innovations. Wedajo and Kakuze (2020) point out that fear of digitalization or a lack of knowledge about digital tools can also pose barriers to innovation in startups. Supporting this view, Saksonova and Kuzmina-Merlino (2017) argue that startup founders must be ready to manage all business obligations digitally, including financial operations. Silva Nunes et al. (2021) emphasize the need to reform laws and public institution practices so they are fully aligned with the needs of innovation implementation in startups. This, they argue, is the only way to foster a strong entrepreneurial culture and spirit. Rajiani et al. (2023) go a step further, suggesting that governments should assist startups in implementing new technologies, thereby facilitating innovation development. Carle (2024) finds that smaller startups tend to offer better innovations than larger ones due to their agility and responsiveness to market changes. Based on research conducted on tech park startups in France, she concludes that a significant threat to startups arises when they either fail to innovate or are unable to effectively implement or commercialize their innovations. Venczel et al.
(2024) argue that startups must have a clear and precise methodology—essentially, a guiding algorithm—for implementing each new innovation. Kask and Linton (2025) highlight that knowledge is a critical factor in innovation implementation. Education and expertise can make the difference between the success and failure of innovative startup ventures. On the other hand, Kuckertz et al. (2020) point out the importance of motivation: individuals involved in innovation must be properly motivated, as motivation greatly influences whether the innovation process aligns with organizational goals. Another critical element is talent. According to Brunetti et al. (2020), talent is necessary for successful innovation implementation. A lack of talent can pose a serious challenge for startups. Hvolkova et al. (2019) suggest models for overcoming barriers to innovation implementation. One proposed model involves categorizing barriers based on their significance and focusing on eliminating the most critical ones to accelerate and intensify innovation efforts. This approach requires the involvement of all stakeholder groups connected to startups. Finally, Kusumaningtyas et al. (2023) propose a five-step model to facilitate innovation implementation in startups. The first step of this model emphasizes identifying profit opportunities from innovation—recognizing from the outset the potential to recover the time, effort, and resources invested. This return should come directly from the market. The main barriers to innovation implementation are summarized in Table 1.

Table 1
Main barriers to innovation implementation

Number | Category of Barrier | Description of Barrier | Sources
1 | Financial Barriers | Limited access to funding and lack of investment, especially for women in start-ups. | Alawamleh et al. (2023), Banka et al. (2024)
2 | Knowledge and Education | Lack of knowledge about digital tools and insufficient education on innovation management. | Wedajo & Kakuze (2020), Kask & Linton (2025)
3 | Motivation and Talent | Lack of motivation and talent necessary for successful innovation implementation. | Kuckertz et al. (2020), Brunetti et al. (2020)
4 | Institutional Barriers | Absence of adequate laws and public institution practices aligned with innovation needs. | Silva Nunes et al. (2021)
5 | Fear of Digitalization | Resistance or fear of digitalization that hinders the use of digital technologies in business. | Wedajo & Kakuze (2020), Saksonova & Kuzmina-Merlino (2017)
6 | Inadequate Methodology | Lack of clear and precise methods or algorithms for innovation implementation. | Venczel et al. (2024)
7 | Risk of Business Failure | High risk of failure associated with innovation implementation discourages investors. | Banka et al. (2024)
8 | Lack of Market Support | Insufficient ecosystem support, including lack of relevant stakeholders to support innovations. | Fakhri & Bahoussa (2014), Rajiani et al. (2023), Hvolkova et al. (2019)

The basic assumption of the research is that financial barriers are the most common form of obstacle for startups in Bosnia and Herzegovina. This will be elaborated on further in the paper. 3 Method This study was conducted as an exploratory cross-sectional study with a mixed quantitative-qualitative design. The primary aim of the research was to identify the obstacles and challenges faced by start-up companies in Bosnia and Herzegovina during innovation implementation and to provide recommendations for effectively overcoming these barriers, thus making the business operations and economy of these companies more dynamic and proactive. Data were collected from a sample of 10 start-up companies in Bosnia and Herzegovina. The inclusion criteria were that the company must be a start-up (i.e., no older than one year), located in Bosnia and Herzegovina, and actively engaged in some form of innovation.
The sample was selected using purposive sampling due to the specific nature of the population. The data collection instruments included an online questionnaire and semi-structured interviews with start-up founders. The questionnaire contained questions about the industry sector, number of employees, year of establishment, type of innovation implemented, perceived importance of innovation, and obstacles faced during the innovation implementation process. The interviews provided additional qualitative insights into participants’ experiences and challenges. The sampling process was defined by clear inclusion and exclusion criteria, excluding companies that did not meet the age requirement or were not actively innovating. Given the sample size and approach, the study’s limitations include potentially limited generalizability of the results. For data analysis, the statistical tools MS Excel and SPSS were used. Quantitative data were processed using descriptive statistics, while qualitative interview data were analyzed using content coding techniques to identify key themes and patterns. Ethical considerations included obtaining informed consent from all participants and ensuring the confidentiality and anonymity of respondents. The study was conducted following ethical guidelines for research involving human subjects, and institutional approval was obtained where applicable.

4 Results

The data describing the sample are presented in Table 2.

Table 2 Information about the sample

No. | Industry | Number of employees | Year of establishment
1 | Textile | 2 | 2024
2 | Law | 1 | 2024
3 | IT | 2 | 2024
4 | IT | 3 | 2024
5 | IT | 6 | 2024
6 | IT | 5 | 2024
7 | IT | 2 | 2024
8 | IT | 2 | 2024
9 | Food industry | 2 | 2024
10 | Cosmetics | 1 | 2024

As can be seen, the sample included 10 start-ups, all founded in 2024 (i.e., none are older than one year).
The majority of the companies operate in the field of information technology (60%), while the remaining companies belong to the textile sector, law, the food industry, and cosmetics. The highest number of employees was recorded in one IT company (6 employees), while the lowest was observed in two companies, each employing only one person. The types of innovations being implemented are presented in Figure 1.

Figure 1 Types of innovations

The research results indicate that the majority of startups in the sample (60%) are implementing technological innovations, 30% are engaged in process innovations, while only 10% are introducing product innovations. It is also important to note that the respondents were asked whether they consider innovation important for their business, and whether they find it profitable and effective for use in the market. All respondents (100%) stated that they believe their innovation is very important for their business, that it is very profitable, and that it is effective for broader market use. However, when it comes to the need for further development of their innovations, 60% of respondents believe that additional work is required. Figure 2 shows whether the startups in the sample have previously faced any issues in the process of implementing innovations.

Figure 2 Obstacles in implementing innovations

Based on the previous graph, it can be concluded that all 10 startups from Bosnia and Herzegovina included in the sample have encountered some form of obstacle during the implementation of innovations. The types of obstacles are shown in Figure 3.

Figure 3 Obstacles to the implementation of innovations

Based on this graph, it can be concluded that the majority of startups face a lack of financial resources when implementing their innovations.
It is evident that every innovation requires substantial investment, making financial issues the most common and significant challenge among the companies in the sample. Five respondents reported that the biggest problem in implementing innovations was a lack of financial support. This was followed by a lack of knowledge and a lack of vision, each mentioned by two respondents, while one respondent indicated that the greatest problem was insufficient planning. Through the analysis of the proposed statements, the aim was to identify potential solutions for overcoming the obstacles to innovation implementation. Respondents rated the statements on a scale of 1 to 5. The analysis is presented in Table 3.

Table 3 Statements

No. | Statement | Respondents' ratings | Mean
1 | Encouraging creativity (hackathons, brainstorming sessions) | 4 4 4 4 4 4 4 4 4 3 | 3.9
2 | Insisting on building a prototype | 4 4 4 4 4 4 4 4 3 1 | 3.6
3 | Introduction of agile methods (Scrum, Kanban) | 4 4 4 4 4 4 4 4 4 4 | 4.0
4 | Crowdsourcing and crowdfunding | 4 4 4 4 4 4 4 5 5 5 | 4.3
5 | Investing in technology, AI, digitalization | 4 4 4 4 4 4 4 5 5 5 | 4.3
6 | Mentoring and investment support | 4 4 4 4 4 4 5 5 5 5 | 4.4

Based on the table, it is evident that startups in Bosnia and Herzegovina most need knowledge and investment, that is, mentoring support and help in finding suitable sources of financing. On the other hand, they least need external motivation to implement innovations and work on them; they are already motivated enough to offer their innovations to the market.

5 Discussion

The study included a sample of ten start-ups from Bosnia and Herzegovina, all established in 2024, with the majority operating in the field of information technology. The analysis reveals that most of these start-ups are implementing technological innovations, while process and product innovations are less represented.
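As a sanity check on the arithmetic, the means reported in Table 3 can be recomputed directly from the raw ratings. The short Python sketch below is illustrative only; the dictionary keys and variable names are ours, not part of the authors' SPSS/Excel analysis, and the raw scores are copied from the table.

```python
# Raw ratings for each statement, copied from Table 3 (10 respondents, scale 1-5).
ratings = {
    "Encouraging creativity (hackathons, brainstorming sessions)": [4, 4, 4, 4, 4, 4, 4, 4, 4, 3],
    "Insisting on building a prototype": [4, 4, 4, 4, 4, 4, 4, 4, 3, 1],
    "Introduction of agile methods (Scrum, Kanban)": [4, 4, 4, 4, 4, 4, 4, 4, 4, 4],
    "Crowdsourcing and crowdfunding": [4, 4, 4, 4, 4, 4, 4, 5, 5, 5],
    "Investing in technology, AI, digitalization": [4, 4, 4, 4, 4, 4, 4, 5, 5, 5],
    "Mentoring and investment support": [4, 4, 4, 4, 4, 4, 5, 5, 5, 5],
}

# Descriptive statistic used in the paper: the arithmetic mean per statement.
means = {statement: sum(scores) / len(scores) for statement, scores in ratings.items()}

# List statements from highest to lowest mean rating.
for statement, mean in sorted(means.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{mean:.1f}  {statement}")
```

Running the sketch reproduces the Mean column, with "Mentoring and investment support" ranked highest, matching the interpretation in the text.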
Notably, all respondents emphasized that innovation is of vital importance for their business, describing it as both profitable and effective for broader market application. Nevertheless, 60% of the respondents expressed the need for further development of their innovations. All start-ups in the sample reported encountering certain obstacles during the innovation implementation process. The most frequently cited barrier was the lack of financial resources, reported by half of the respondents. This was followed by a lack of knowledge and a lack of vision (each mentioned by two respondents), and insufficient planning (mentioned by one respondent). These findings suggest that financial constraints represent a predominant challenge for young companies seeking to innovate. Furthermore, respondents identified the greatest need for support in the form of mentoring and access to investment or financing opportunities. In contrast, motivational support was deemed least necessary, as participants reported being highly self-motivated to develop and introduce their innovations to the market. The presented results indicate that startups are by no means immune to obstacles when implementing innovations: each of the startups in the sample encountered some form of obstacle when implementing its innovations. It is important to note that in half of the cases (50%) the lack of finance was the main obstacle to the implementation of innovations. Similar results were recorded by Mohnen et al. (2008) and García-Quevedo et al. (2018). It is important to point out that financial problems are not characteristic only of startups from Bosnia and Herzegovina, but also occur in other, much more developed countries. Based on these results, it can be concluded that the basic assumption of the research is confirmed. On the other hand, startups need professional and mentoring help, i.e.
additional knowledge in the field of management, in order to successfully run their organizations. Through the conversations with the startups, several ideas and directions emerged on how to improve the entrepreneurial climate in Bosnia and Herzegovina. The proposals include:

- Grants and subsidies: accumulating a pool of financial resources that would be used to invest in innovations.
- Technology parks and incubators: Bosnia and Herzegovina lacks spaces that would support startups, in physical but also in financial and mentoring terms; building technology parks and incubators is something the government should undertake in the coming period.
- A favorable regulatory environment: easier registration for startups would support the creation of an entrepreneurial climate.
- Support in education and training.
- Community and networking: startups need to be networked, which can be achieved precisely through the previously mentioned technology parks and incubators.
- Support in entering foreign markets: assistance with the internationalization of startups' business must be part of the government's vision.
- Legal aid and business angels: legal aid is necessary for all startups, as are connections with business angels, which can be fostered through the organization of competitions, presentations, and similar events.

It should be noted that the current institutional support for startups in Bosnia and Herzegovina is insufficient and that work is needed in this area. The difficulties that startups encounter, as recorded in this paper, can be alleviated institutionally, thereby facilitating their operations. The results and recommendations presented above can serve as significant input for government institutions on how to act to increase the number of startups and make the business sector of Bosnia and Herzegovina more developed.
6 Conclusion

This section presents the basic conclusions. First of all, it is important to emphasize that the analysis of secondary and subsequently primary data aimed to demonstrate the importance of start-ups for any economy, particularly highlighting the significance of the innovations they generate. Accordingly, the analysis sought to identify the key obstacles to innovation and underscore the importance of minimizing these barriers in order to enable start-ups to operate at their full creative potential. The research found that the lack of financial resources is one of the key barriers to innovation, and it is important that the government and all interest groups provide startups, whose growth the economy needs, with access to financial resources. This is possible through grants and subsidies, but also through cooperation with business angels and the provision of space through technology parks and incubators. In addition, the startups mentioned the need to work with mentors, as well as the need for knowledge and education, because this factor can be crucial when implementing innovations. Once the basic categories of obstacles have been defined, it is important to eliminate each of them, so that startups can work better and more efficiently. The results of this research can be used by the government and its institutions in Bosnia and Herzegovina, but also by the wider community, which can thereby make a step forward in creating an entrepreneurial environment and improve the status of startups in general. It is a process that benefits everyone.

References

1. Alawamleh, M., Francis, Y. H., & Alawamleh, K. J. (2023). Entrepreneurship challenges: The case of Jordanian start-ups. Journal of Innovation and Entrepreneurship, 12(21). https://doi.org/10.1186/s13731-023-00286-z
2.
Banka, M., Chmiel, N., Kostrzewski, M., Marczewska, M., Kowalski, A. M., Sedkiewicz, K., & Salwin, M. (2024). Understanding corporate concerns: Barriers and challenges in corporate–start-up collaboration. Journal of Open Innovation: Technology, Market, and Complexity, 10(4), 100388. https://doi.org/10.1016/j.joitmc.2024.100388
3. Brunetti, F., Matt, D. T., Pedrini, G., & Orzes, G. (2020). Digital transformation challenges: Strategies emerging from a multi-stakeholder approach. The TQM Journal, 32(4), 697–724. https://doi.org/10.1108/TQM-12-2019-0300
4. Carle, A. (2024). Implementation challenges of innovation policies fostering sustainability: Evidence from a French public grant for technological startups. Journal of Innovation Economics & Management, 43(1).
5. Fakhri, S., & Bahoussa, A. (2014). Obstacles of innovation among the entrepreneur: An empirical study. International Journal of Innovation and Applied Studies, 9(1), 393–400. http://www.ijias.issr-journals.org/
6. García-Quevedo, J., Segarra-Blasco, A., & Teruel, M. (2018). Financial constraints and the failure of innovation projects. Technological Forecasting and Social Change, 127, 127–140. https://doi.org/10.1016/j.techfore.2017.09.027
7. Hvolkova, L., Klement, L., Klementova, V., & Kovalova, M. (2019). Barriers hindering innovations in small and medium-sized enterprises. Journal of Competitiveness, 11(2), 51–67. https://doi.org/10.7441/joc.2019.02.04
8. Kask, J., & Linton, G. (2025). Navigating the innovation process: Challenges faced by deep-tech startups. In Contemporary Issues in Industry 5.0.
9. Kuckertz, A., Brändle, L., Gaudig, A., Hinderer, S., Morales Reyes, C. A., Prochotta, A., Steinbrink, K. M., & Berger, E. S. C. (2020). Startups in times of crisis – A rapid response to the COVID-19 pandemic. Journal of Business Venturing Insights, 13, e00169. https://doi.org/10.1016/j.jbvi.2020.e00169
10.
Kusumaningtyas, A., Bolo, E., Istianah, I., Chua, S., Wiratama, M., & Tirdasari, N. L. (2021). Why start-ups fail: Cases, challenges, and solutions. In Proceedings of the Conference Towards ASEAN Chairmanship 2023 (T-A-C 23 2021). Advances in Economics, Business and Management Research, 198.
11. Mohnen, P., Palm, F. C., Schim van der Loeff, S., & Tiwari, A. (2008). Financial constraints and other obstacles: Are they a threat to innovation activity? De Economist, 156(2), 201–214. https://doi.org/10.1007/s10645-008-9089-y
12. Nicolau, C., & Bărbulescu, O. (2024). Innovation challenges and benefits in high-tech start-ups: A quantitative analysis of university student entrepreneurs in Romania. Amazonia Investiga, 13(82), 222–235. https://doi.org/10.34069/AI/2024.82.10.18
13. Nunes, A. K. da S., Morioka, S. N., & Bolis, I. (2022). Challenges of business models for sustainability in startups. RAUSP Management Journal, 57(4), 382–400. https://doi.org/10.1108/RAUSP-10-2021-0216
14. Rajiani, I., Kot, S., Michałek, J., & Riana, I. G. (2023). Barriers to technology innovation among nascent entrepreneurs in deprived areas. Problems and Perspectives in Management, 21(3).
15. Saksonova, S., & Kuzmina-Merlino, I. (2017). Fintech as financial innovation – The possibilities and problems of implementation. European Research Studies Journal, 20(3A), 961–973.
16. Sinha, N. (2023). Challenges and opportunities faced by innovative entrepreneurs in India. International Journal of Innovative Research in Management and Political Science (IJIRMPS), 11(2).
17. Sopjani, X. (2019). Challenges and opportunities for startup innovation and entrepreneurship as tools towards a knowledge-based economy: The case of Kosovo (Master's thesis). Rochester Institute of Technology, RIT Digital Institutional Repository.
18. Venczel, T. B., Berényi, L., & Hriczó, K. (2024). The project and risk management challenges of start-ups. Acta Polytechnica Hungarica, 21(2).
19. Wedajo, B. T., & Kakuze, H. (2020).
Barriers in digital startup scaling: A case study of Northern Ethiopia (Master's thesis). Umeå University.
20. Wetzel, T., & Eiche, J. (2024). Challenges of start-ups—An analysis of individually tailored recommendations based on the development phases, branches, business models and founding teams. Open Journal of Business and Management, 12, 1556–1585. https://doi.org/10.4236/ojbm.2024.123084

Summary

Obstacles to the Implementation of Innovations in Start-up Companies

Research Question (RQ): Start-up companies provide important support for the development of every economy, yet at the same time they are one of its more vulnerable parts, as many start-ups fail within the first years of operation. Previous research shows that there are several reasons for start-up failure, one of the key ones being difficulties in implementing innovations. In today's turbulent times and under the conditions of globalization, innovation is a necessary condition for the survival and development of companies. Companies that are not innovation-oriented are, in the long run, doomed to fail. Even large multinational companies feel the consequences of a lack of innovativeness, which clearly indicates that investing in innovation is of exceptional importance for start-ups. Nevertheless, the implementation of innovations faces numerous challenges and obstacles. The main research problem is to determine the most important obstacles and challenges in implementing innovations in start-ups in Bosnia and Herzegovina and the methods by which they can be overcome.
Purpose: The main purpose of the research is to draw attention to the obstacles and challenges faced by start-ups in Bosnia and Herzegovina and to offer appropriate recommendations, both to start-ups and to all key stakeholders, on how to eliminate these obstacles as effectively as possible in order to make the economy more propulsive and proactive.
Method: The theoretical treatment of the problem draws on data from previous research in this field.
The empirical research identifies the most common types of obstacles and challenges faced by start-ups in Bosnia and Herzegovina, the types of innovations most frequently introduced in these companies, and the methods and approaches that can be used to overcome these obstacles. The research was carried out on a sample of ten start-ups in Bosnia and Herzegovina.
Results: The results of the research show which obstacles and challenges arise when implementing innovations in the observed start-ups, which types of innovations are most frequently implemented in start-ups in Bosnia and Herzegovina, and how to most effectively overcome obstacles to the implementation of innovations with the aim of improving their operations.
Organization: Research of this kind can help the leaders of start-ups, or those who intend to become leaders, to more easily identify the fundamental obstacles to implementing innovations in their operations and to apply the proposed methods for overcoming them, which contributes to more stable and sustainable business.
Society: Increasing the stability of start-ups brings long-term benefits for society as a whole. Socially responsible operation of start-ups can contribute to a better distribution of resources in society, which is beneficial for all social stakeholders as well as for the environment.
Originality: This research is one of the few studies on start-ups in Bosnia and Herzegovina. It is a pioneering study in the field of challenges and obstacles to the implementation of innovations in start-ups in this country.
Limitations/further research: This research has several limitations, including the small sample size and difficulties in data collection. Further research should focus on the implementation of innovations in the various industries in which start-ups operate.
Keywords: start-up, innovativeness, challenges, obstacles, implementation, recommendations, stability, overcoming barriers.
***

Nerman Ljevo is an Associate Professor at the Faculty of Management and Business Economics of the University of Travnik. He completed his doctoral studies at the Faculty of Economics, University of Tuzla, specializing in Management and Organization. He has published several dozen scientific articles and two books.

***

Sabina Šehić-Kršlak is a Professor at the Faculty of Management and Business Economics of the University of Travnik, the Faculty of Public Administration of the University of Sarajevo, and an employee of the Canton Sarajevo Center for Advanced Technologies. She completed her doctoral studies at the Faculty of Economics of the University of Banja Luka, specializing in Management and Organization. She has published several dozen scientific articles and three books.

***

This work is licensed under a Creative Commons Attribution 4.0 International License. This journal is published by the Faculty of Organisation Studies in Novo mesto. The journal is subsidised by the Slovenian Research and Innovation Agency (ARIS).
GLAVNA IN ODGOVORNA UREDNICA / EDITOR IN CHIEF
Ann Marie Gorenc Zoran

SOUREDNICA / ASSOCIATE EDITOR
Nadia Molek

UREDNIŠKI ODBOR / EDITORIAL BOARD
Boris Bukovec, Faculty of Organisation Studies in Novo mesto, Slovenia
Alois Paulin, Technical University Vienna, Austria
Juraj Marušiak, Slovak Academy of Science, Slovakia
Mario Ianniello, Udine University, Italy
Anisoara Popa, Danubius University, Romania
Raluca Viman-Miller, University of North Georgia, Georgia, USA
Anna Kołomycew, Rzeszów University, Poland
Jurgita Mikolaityte, Siauliai University, Lithuania
Patricia Kaplanova, Faculty of Organisation Studies in Novo mesto, Slovenia
Laura Davidel, University of Lorraine, France
Ana Železnik, Ljubljana University, Slovenia
Marko Vulić, Information Technology School - ITS ComTrade, Serbia
Vita Jukneviciene, Siauliai University, Lithuania
Mitja Durnik, Ljubljana University, Slovenia
Anca-Olga Andronic, Spiru Haret University, Romania
Razvan-Lucian Andronic, Spiru Haret University, Romania
Tine Bertoncel, Faculty of Organisation Studies in Novo mesto, Slovenia
Nadia Molek, Faculty of Organisation Studies in Novo mesto, Slovenia
Maja Meško, University of Maribor, Faculty of Organizational Sciences, Slovenia
Agnieszka Wedel-Domaradzka, Faculty of Law and Economics, Kazimierz Wielki University, Bydgoszcz, Poland
Armand Faganel, University of Primorska, Faculty of Management, Slovenia
Elżbieta Roszko-Wójtowicz, University of Lodz, Poland
Gorazd Justinek, New University, Faculty of Government and European Studies
Damyana Bakardzhieva, Anwar Gargash Diplomatic Academy, United Arab Emirates (UAE)
Bashar H. Malkawi, University of Arizona James E.
Rogers College of Law, USA
Kodzo Alabo, University of Ghana, Salt University College, Ghana
Robert Mudida, Strathmore University Business School, Nairobi, Kenya
Juan Carlos Radovich, Universidad de Buenos Aires, Argentina
Anurag Hazarika, Tezpur Central University, Assam, India & 21st Century Open University, USA

Naslov uredništva / Editorial address:
Fakulteta za organizacijske študije v Novem mestu
Ulica talcev 3
8000 Novo mesto, Slovenija

The CC BY-SA 4.0 license permits free use, adaptation, and distribution of the work, including for commercial purposes, provided that attribution is given and that derivative works are shared under the same license. This ensures openness and dissemination of the content under equal terms.

DOI: 10.37886/ip.2025.009

AI is not a Tool
The Impact of Growing AI Agency on the Future of Work

Alexander van Biezen1
1 Arcadia University, Belgium, alexander.vanbiezen@arcadiascholen.be

Abstract

Research Question (RQ): What are the underlying philosophical assumptions shaping current perceptions of artificial intelligence (AI) as a mere tool, and how do these assumptions influence our understanding of AI’s growing agency and its potential impact on the future of work?
Purpose: The paper aims to critically examine the widespread assumption that AI systems remain passive instruments entirely under human control. It explores how emerging forms of AI agency—understood as autonomous or semi-autonomous decision-making capacities—challenge this notion and what implications this shift entails for human labour, ethics, and social stability.
Methods: The study adopts a philosophical and conceptual methodology grounded in the philosophy of mind and the philosophy of science.
It draws on classical thought experiments (Searle’s Chinese Room, Jackson’s Mary, Penrose’s arguments on non-algorithmic consciousness) and integrates recent interdisciplinary debates on AI agency, autonomy, and consciousness. The analysis is based on a critical literature review combining philosophical, technological, and socio-political sources.
Results: Findings indicate that the assumption of AI as a “dumb tool” no longer holds. Evidence of growing AI autonomy demonstrates that decision-making processes once reserved for humans are increasingly being delegated to machines. This outsourcing of human agency risks creating social and ethical blind spots, potentially leading to unequal labour transformations and governance challenges. However, a managed transition toward human–AI cooperation could foster innovation and inclusion if grounded in ethical oversight and policy regulation.
Organization: For organizations, the study highlights the need to anticipate shifts in work structures and decision-making processes caused by AI systems with growing agency. It encourages managers and policymakers to design governance frameworks that maintain human oversight while enabling responsible collaboration with AI.
Society: At the societal level, the research underlines the urgency of open policy debates and ethical reflection on AI regulation. Addressing the implications of AI autonomy is essential to preserve human agency, democratic accountability, and social justice in the digital era.
Originality: The article contributes to bridging philosophical inquiry and socio-technical analysis by reframing AI not merely as a technological tool but as an emerging actor in human decision-making systems. It advances the concept of “AI agency” as a key lens for understanding the transformation of work.
Limitations / Further research: The study is conceptual and does not include empirical data.
Future research should investigate how organizations and workers experience AI agency in practice, possibly through ethnographic or organizational case studies, and explore policy instruments capable of mitigating risks related to automation and technocratic governance.
Keywords: artificial intelligence, AI agency, AI consciousness, workforce, future, philosophy of science, philosophy of mind.
Received: 2025-06-24, revised: 2025-07-02, accepted: 2025-11-05 as Original Research Paper

1 Introduction

Sometimes reality catches up with us faster than expected. Over the past few years, each time I mentioned the possibility of AI systems developing ‘agency’ to my students, I was met with blank stares or frowns. Today, ‘AI agent’ has become a new buzzword overnight. At the beginning of March 2025, a report from The Information announced that OpenAI, one of the leading AI research organizations, may be planning to charge up to $20,000 per month for specialized AI ‘agents’ (Palazzo & Weinberg, 2025). The prices vary: $2,000 a month for an AI agent at the level of a ‘high-income knowledge worker’ (Wiggers, 2025), about $10,000 a month for a software developer agent, and, at the top, a ‘PhD-level research’ (Edwards, 2025) agent that will cost you no less than $20,000 a month. The etymological origin of 'agency' is the Latin verb agere, which means “to do”, “to act”. The word 'agent' stems from the present participle agens, agentis, “one who acts” or “one who does an act” (i.e., an agent). The topic of agency has a much longer history in the field of ethics, predating the advent of AI. Giving a precise definition of what characterizes 'agency' is quite a challenging task for philosophy, as it is related to intentional action and goal-directed behavior and, eventually, to the intricate philosophical question of what it means to be a person.
According to a recent study from PULSE (Program on Understanding Law, Science, and Evidence) at the UCLA School of Law, there is still a significant lack of agreement on the definition of agency and, as a result, there is still no consensus on whether or not it is even possible to consider AI systems as agents (Newman et al., 2025). “(…) Depending on the perspective and definition (…) the agency of AI could be controversial, unimaginable, or an unquestionable truth (...)” (Newman et al., 2025). Especially the question of whether an AI system can be considered a goal-oriented entity remains controversial (Newman et al., 2025). For our discussion at hand, we take the option of slipping through the horns of this dilemma. As a working definition of 'agency' with regard to AI in this article, we simply mean that some AI systems are reaching a whole new level of decision-making capability and autonomous action. Whether or not these current AI systems can really be said to have their own goals and to be truly goal-oriented is not the key question in this respect. What matters here for our discussion is that these AI systems are acquiring a level of autonomous decision-making which makes it possible and tempting for us humans to transfer a growing part of human decision-making to these systems. Be that as it may, one thing is for sure: developments in AI are making giant leaps at an ever-increasing pace. In September 2024, the well-known historian Yuval Noah Harari published Nexus, on information networks from early history to AI today. In Nexus, Harari warns us: “AI isn’t a tool – it’s an agent” (Harari, 2024, p. XXII), meaning that AI is capable of processing information all by itself, and thereby has the capacity to replace humans in the making of decisions (Harari, 2024, p. XXII). Harari was heavily criticized for this warning, being dismissed by some as a doomsday prophet.
For instance, Don Lim, a seasoned IT specialist and formerly Chief AI Developer at Visdex, was very quick to reproach Harari in his article Why Yuval Noah Harari's AI Doomsday Prophecies Are Misleading (Lim, 2024) for being both alarmist and misinformed: "He also argues "AI is not a tool, but an agent." The current AI systems are far from autonomous entities; they are tools created, monitored, and controlled by humans. The idea that AI will become a self-governing force beyond human control is closer to science fiction than reality." (Lim, 2024) Some reproached him that his book Nexus is "based on shallow scholarship" (Ferguson, 2025), or even bluntly suggested that the real lesson we can learn from Harari is that "there's an incredible amount of money to be made with doomsday predictions" (Foreman, 2024).

But even Geoffrey Hinton, the so-called godfather of AI, who shared the 2024 Nobel Prize in Physics with John Hopfield for their foundational discoveries and inventions that enable machine learning with artificial neural networks, warns us that "AI systems may be more intelligent than we know and there's a chance the machines could take over" and "we're moving into a period when for the first time ever we may have things more intelligent than us" (Pelley, 2024). He had even left Google a year earlier, in May 2023, precisely because of his concerns about the many risks of AI (Douglas Heaven, 2023). If even the very brightest and most well-versed among us in the field of AI issue warnings about the rapid developments in AI and its possible unforeseen devastating consequences, maybe it is time to listen to what they have to say and, at least, to postpone our judgement, even if only for a moment.
Vincent Ginnis, professor of mathematics, physics and artificial intelligence at my alma mater, the Free University of Brussels (Vrije Universiteit Brussel), and at Harvard University, has become very concerned about the disconcerting lack of concern about the possible dangers of AI. In an eye-opening opinion piece (Ginnis, 2025) he mentions that at a recent AI safety conference in Paris, no one seemed to be concerned any longer with the dangers of AI. Instead, it was all about PR and power struggles. Once, Ginnis states, AI safety was about risks for humanity: the possible threat that millions of people might lose their jobs, the danger that misinformation and manipulation would spread at a scale undermining democracy, the possibility that AI systems might one day take decisions we no longer comprehend, let alone control. Instead, at the conference in Paris it was all about power. Who will get AI? Who controls it? Who is running ahead in the race? The focus, according to Ginnis, shifted from risks to geopolitics. He issues a firm, unequivocal warning: "Humanity is creating a technology that surpasses its own knowledge and is completely unprepared for it. The first step is simple: acknowledge what is happening. The threat is real, the acceleration is dangerous, and the priorities are misplaced. There is still a long way to go, but we have to start somewhere." (Ginnis, 2025, my translation)

Likewise, Koen Schoors, professor of economics at the University of Ghent (Belgium), points to the danger of the growing attraction of an AI-based technocracy, not hampered by the sluggishness of democratic decision-making. There is no shortage of politicians, he states, who are fed up with the inertia of the democratic model with regard to the developments in AI (Schoors, 2024, p. 189).
The question is not whether, or to what extent, Harari, Hinton and others are right in their warnings. What intrigues me the most, as a philosopher of science, is: why are so many people so adamant and resolute in brushing aside all these warnings? Why do we cling so tightly to the reassuring idea that AI is just a mere tool, totally under our control? What are the tacit assumptions which apparently make it very hard for us to find the blind spots in our rosy vision of AI?

2 Literature review: AI and philosophy of mind

2.1 The Chinese room thought experiment

When I was still a graduate student in philosophy, way back in the 1980s, I remember we were discussing John Searle's thought experiment of the Chinese room. Remember that 40 years ago, AI was still a very remote theoretical possibility, a popular theme in science fiction movies perhaps, but not something to be taken very seriously. Nevertheless, some philosophers gave it their attention, more often than not in order to concoct sophisticated arguments to show that artificial intelligence would not be possible. A machine, a computer, could never have a mind in the same way human beings can be said to have minds.

In brief, John Searle's argument (Searle, 1980 and 1984) goes as follows. Someone is sitting in a room and receives a sheet of paper with Chinese characters from the left, through a slit in the wall. This person then meticulously follows a highly detailed instruction table, akin to a computer program, to transform the Chinese messages into other messages with different characters on another sheet of paper. Once the conversion is complete, this person then sends this new sheet of paper out to the right as output (likewise, through a slit in the wall). To an outside observer, it seems as if the person inside the room understands Chinese. However, in reality, the person inside the room is just following instructions; he does not need to understand a single word of Chinese himself. Searle wanted to show with this thought experiment that two people can be functionally identical: a native Chinese speaker and the person inside the Chinese room. They both provide perfect answers to questions posed in Chinese. Yet, they have completely different mental states (one understands Chinese, while the other does not understand it at all). The bottom line of Searle's argument is: a computer will remain fundamentally different from a human being. A computer system will never "truly" understand what it is doing, whereas human beings obviously can.

2.2 Mary in the black-and-white room

Another famous example in this respect comes from the Australian philosopher Frank Jackson, who in 1986 published a very controversial and influential article, What Mary didn't know (Jackson, 1986). Even though it is almost forty years old by now, this thought experiment is still used today in discussions about the possibility of artificial general intelligence (AGI). In short, Jackson's thought experiment goes as follows. The experiment is about a fictional scientist in a distant future, Mary. In that distant future, both physics and neurophysiology have reached a final state. That means that Mary knows everything these sciences have to say about perceiving colors. But there is something special about Mary: she lives in a completely colorless room. Everything is black or white. So, Mary has never seen a color before in her entire life. Her knowledge of colors is therefore purely based on books about physiology, neurology, and the biochemistry of color perception. Now it gets exciting. One day, Mary finds a secret door to the outside world. The first thing she sees when she steps outside is a red apple. The question Jackson poses is: does Mary learn something new when she sees this apple?
Jackson answers: yes, she learns a new fact — she learns what it is like to experience the color 'red'. This phenomenon is referred to in philosophy of mind as 'qualia': 'individual instances of subjective, conscious experience' (qualia is Latin, the plural of quale, which literally means 'such as'). Jackson aims to conclude that knowledge about qualia fundamentally relies on subjective experiences, unlike knowledge about physical states in our brains. By subjective, we mean how someone experiences or judges something from a personal perspective.

Although Jackson's knowledge argument was originally meant to question the philosophical position of physicalism, in short, the view that "everything is physical", that there is nothing "above" the physical (van Biezen, 2016), this thought experiment has also become a classic in discussions about artificial general intelligence (Wang, 2023; Baron, 2025; Renard, 2024). What it boils down to is that Jackson's argument about Mary in the black-and-white room is used to demonstrate that there will always be an unbridgeable gap between the human mind and artificial intelligence. In short, AI is Mary stuck in the black-and-white room, never having seen a color in her entire life. The human being is Mary when she walks into the outside world and sees a red apple.

2.3 The Copernican trauma

Somehow, I cannot shed the impression that these arguments resonate with older arguments pertaining to the difference between human beings and other animals. Time and again, throughout the ages, people have tried to pinpoint demarcation criteria to demonstrate that there is a fundamental difference in kind between human beings and animals, and certainly not a difference in degree.
Part of the upheaval caused by the publication of Darwin's On the Origin of Species (Darwin, 1859) in 1859 was precisely that: the horrifying idea that human beings are just another animal in the tree of evolution. It seems that the gist of these arguments resurfaces in the discussions about the nature of artificial intelligence. Benjamin Bratton, philosopher of technology and professor at the University of California, San Diego, sees AI as the next phase in the series of Copernican decenterings of humanity's once presumed privileged position at the center of the world, a "Copernican trauma" (Bratton, 2024): "What is today called 'artificial intelligence' reveals that intelligence, cognition and even mind (…) are not what they seem to be, not what they feel like and not unique to the human condition (…) intelligence itself is artificializable." (Bratton, 2024)

2.4 Roger Penrose on minds and machines

One of the most famous defenders of the position that there is a fundamental chasm between the human mind and machines is the esteemed mathematician and mathematical physicist Sir Roger Penrose, who shared the 2020 Nobel Prize in Physics for his ground-breaking work on black hole formation in the 1960s (work he accomplished together with Stephen Hawking, who passed away in 2018 and so, unfortunately, could no longer be nominated). As early as 1989, in his book The Emperor's New Mind (Penrose, 1989), Penrose argued that human consciousness is essentially non-algorithmic, which implies that human consciousness could in principle never be implemented by a conventional Turing machine, i.e. the theoretical model which underlies every possible computer.

Penrose's argument is based on the so-called first incompleteness theorem of Kurt Gödel, undoubtedly one of the most important results in the foundations of mathematics of the twentieth century (for a comprehensive account, see Nagel & Newman, 1958). Put very simply: this theorem gives a mathematical proof that any consistent and sufficiently rich axiomatic system of ordinary arithmetic with natural numbers (0, 1, 2, 3, …) and basic arithmetic functions like addition (+) and subtraction (−) will always contain true statements about natural numbers which can neither be proved nor disproved within that formal system itself. In other words, these statements are true, but they cannot be derived from the axioms (that is, by a step-by-step procedure, i.e. an algorithm). Penrose's argument boils down to his claim that, contrary to that formal system, a sufficiently skilled mathematician (i.e. a human mind) is indeed capable of arriving at and formulating those true statements (which are unprovable and underivable from within that system). According to Penrose, Gödel's first incompleteness theorem tells us that no computer which works within a formal system F can prove the sentence G(F) = "This sentence cannot be proved in F." But, Penrose continues, we humans can just "see" the truth of G(F): if G(F) were false, then it would be provable, which leads to a paradox, an absurd result. So, the human mind is capable of doing something which no computer can do. Therefore, Penrose concludes, consciousness cannot be reducible to computation. This implies that machines can never have human-like consciousness and, therefore, truly human-like artificial intelligence would be forever out of reach. Needless to say, there has been a lot of discussion of Penrose's argument, but that would lead us far beyond the scope of this article (for a brave attempt at refuting Penrose's use of Gödel's theorems, see Krajewski, 2015).
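Penrose's Gödelian step can be stated schematically. The following is an informal reconstruction of the argument as summarized above, not a rigorous proof; in particular, it glosses over the consistency assumption on which critics such as Krajewski press:

```latex
% Informal schema of the Godelian argument as used by Penrose
\begin{align*}
&\text{Let } F \text{ be a consistent formal system rich enough for arithmetic.}\\
&\text{G\"odel constructs a sentence } G(F) \text{ with: } G(F) \leftrightarrow
  \text{``}G(F)\text{ is not provable in } F\text{''}.\\
&\text{Incompleteness: if } F \text{ is consistent, then } F \nvdash G(F)
  \text{ (and, under mild extra assumptions, } F \nvdash \neg G(F)\text{)}.\\
&\text{Penrose's step: a human mathematician, seeing that } F \nvdash G(F),\\
&\quad\text{concludes that } G(F) \text{ is true, something } F \text{ itself cannot establish.}\\
&\text{Claimed conclusion: human mathematical insight is not captured by any fixed } F.
\end{align*}
```

Note that the whole weight of the argument rests on the human mathematician being entitled to assert the consistency of F in the first place, which is exactly where most refutations attack.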
The only reason why I bring it up here is the question: why is one of the smartest persons on this planet so persistent in trying to prove that the human mind will always "surpass" the capabilities of any possible machine? The tacit assumption underlying all these approaches seems to be that artificial general intelligence systems should possess consciousness in order to be called "truly" intelligent. All these thought experiments and arguments serve the same strategy: machines, artificial intelligence, cannot possibly attain consciousness or "self-awareness", if you'd like, and, hence, machines, artificial intelligence, cannot possibly be called "really" intelligent.

2.5 "Can machines think?" (Alan Turing)

It is exactly this tacit assumption that intelligence somehow presupposes consciousness that I want to put into question. This is crucial to my argument, so I want to state it as explicitly as I can. For some reason or another, whatever it might be, some people still see computers as inanimate, dumb machines which are incapable of thinking or feeling anything. And because computers cannot think or feel anything, they are not capable of making any decisions on their own. This leads us to one of the key questions in the field of philosophy of artificial intelligence, the fascinating cross-border field between philosophy of mind and the philosophy of computer science (Brey & Søraker, 2009) that explores the implications of artificial intelligence for getting a grip on concepts like 'intelligence', 'consciousness', 'free will', etc. (for a comprehensive account of the current state of affairs in the philosophy of artificial intelligence, see Müller, 2025). This key question is: can something like artificial intelligence exist at all? As Alan Turing, the father of the computer and of AI, put it succinctly: can machines think? (Turing, 1950, p. 442).
Turing himself found this question simply "too meaningless" to deserve discussion: "The original question, "Can machines think?" I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted" (Turing, 1950, p. 442). The interesting question is not 'Can machines think?', but 'Can we let machines do things for which human beings would need intelligence?'. Saying that AI systems cannot "truly" think, that is to say, think as human beings do, is the equivalent of saying that planes cannot "truly" fly because they cannot flap their wings.

Admittedly, whether or not AI can really be called intelligent is still a controversial debate. Philosopher Luciano Floridi traces the issue to a conceptual choice: either we enlarge our definition of what intelligence actually is so that it also includes artificial forms of it, or we widen our conception of agency so that it also encompasses artificial forms of agency which do not necessarily presuppose intelligence (Floridi, 2025). Floridi is in favor of the latter option, viewing AI as "agency without intelligence" (Floridi, 2025). Others are more clearly opposed to AI agency and its implications, even while considering AI systems intelligent. As early as 2009, scholar Joanna Bryson unequivocally warned against the danger of humanising robots and declared that "robots are fully owned by us" (Bryson, 2009, p. 1). She argues: "In humanising them, we not only further dehumanise real people, but also encourage poor human decision making in the allocation of resources and responsibility" (Bryson, 2009, p. 1). Why, you might ask, is that so important? Because this reassuring and soothing position that AI cannot possibly be called "really" or "truly" intelligent entails the risk of creating a dangerous blind spot.
As long as we keep on seeing AI systems as a mere tool, as a "dumb" instrument, passively waiting for human instructions to do something, we put ourselves in danger by overlooking the fact that these systems are gaining more and more agency, that is to say, more and more autonomy and decision-making capabilities. If those rapidly increasing decision-making capabilities of AI systems remain in our blind spot, we run a serious risk of continuing to outsource human decision-making to the point where it is no longer under our control. And this blind spot is related to the deeply ingrained tacit assumption that intelligence presupposes consciousness. As Yuval Harari strikingly noted, the widespread conviction that computers and AI systems are simply not capable of making decisions assumes that making decisions is predicated on having consciousness (Harari, 2024, p. 201). However, Harari continues, the fact that in human beings, as in other mammals, intelligence is often accompanied by consciousness does not allow us to extrapolate from humans and other mammals to all possible entities (Harari, 2024, p. 201). Admittedly, the issue of what 'consciousness' precisely is remains very slippery. Nevertheless, as scholars Patrick Krauss and Andreas Maier recently pointed out, most biologists nowadays consider consciousness a gradual phenomenon which, in different levels of complexity, can also be found in animals (Krauss & Maier, 2025). In this respect, according to the integrated information theory of Giulio Tononi, the level of consciousness depends on the structure of the underlying substrate (i.e. the brain, for humans and other animals). The more coherent or connected a system is, the more conscious it is. In short, consciousness is related to the mutual interconnectedness of a system (Krauss & Maier, 2025).
To cut all these ramifications short, let us simply erase 'consciousness' from the equation. The rapid evolution of AI compels us to radically rethink our understanding of concepts like 'intelligence' and 'consciousness'. As Blaise Agüera y Arcas and James Manyika put it succinctly: "We're in paradigm-shifting territory" (Agüera y Arcas & Manyika, 2025). Meanwhile, we should focus on reducing our blind spot and getting past this pernicious stumbling block of continuing to see AI as a mere tool, a mere instrument which cannot operate without human supervision and which we have under our control.

3 Methodology

In this article, although I bring in elements from various disciplines, the methodology followed is that of philosophical argumentation, as the central thread of the questions examined is most closely related to philosophical issues in the field of philosophy of mind.

4 Findings: The risk of our blind spot

4.1 AI gaining in agency

AI gaining in agency, becoming more and more autonomous and capable of independent decision-making, is not about Terminator-like robots rampaging through the streets in a killer spree. As Mustafa Suleyman, the co-founder of DeepMind, one of the leading AI research laboratories, already pointed out in his book The Coming Wave (Suleyman & Bhaskar, 2023): "Many technologies and systems are becoming so complex that they're beyond the capacity of any one individual to truly understand them (…) In AI, the neural networks moving toward autonomy are, at present, not explainable. You can't walk someone through the decision-making process to explain precisely why an algorithm produced a specific prediction." (Suleyman & Bhaskar, 2023) When it comes to artificial general intelligence (AGI), the risk of the blind spot impeding us from seeing the real potential for danger becomes even greater.
When bringing up the mere possibility of artificial general intelligence, you easily get your share of disbelief and laughter, conjuring up the famous image of the super-intelligent HAL 9000 computer on board the spaceship U.S.S. Discovery in the motion picture 2001: A Space Odyssey (1968, Stanley Kubrick), refusing to open the air-lock in that very calm, soothing tone: "I'm sorry Dave, I'm afraid I can't do that". By brushing aside the possibility of artificial general intelligence, we run the risk of turning a blind eye to the real dangers of the rapid development of current AI technology.

The position we are in right now reminds me of the horrifying images of the 2004 tsunami in South-East Asia. Remember the footage of the receding water from the beaches, with people laughing at this strange phenomenon: small boats, sloops, suddenly lying dry on the sea floor, maybe some fish flopping in a remaining shallow pool, children still playing around. Even when they saw the wall of water looming in the distance, people's warning systems still did not seem to be triggered; they curiously kept on staring into the distance, instead of deciding to run to higher ground as quickly as possible.

Lest I be dismissed as another doomsday prophet, I want to add a critical note to the rosy picture of AI being hailed as the next Industrial Revolution, lifting the future of the human workforce to a whole new level. A bit like the lyrics of that Timbuk 3 song: "The future's so bright, I gotta wear shades". Admittedly, there are quite a number of economic and labor studies highlighting the risks of automation and AI for skills mismatch and workforce displacement. However, voices pleading for some caution are easily drowned out as overly pessimistic.

4.2 "AI is set to surpass us in speed and understanding" (Geoffrey Hinton)

Moreover, there are some recent developments taking an ugly turn.
At a recent conference, Geoffrey Hinton made a very interesting point, which reinforces my argument that we are put on the wrong track, set on the wrong foot so to speak, when we continue to convince ourselves that AI systems are just dumb tools, not capable of "really" understanding what they are doing. Hinton's point was that even with today's large language models (LLMs), there is a key difference between them and the way human memory works, and that is "AI's unmatched ability to share knowledge" (Saso, 2025). Human beings pass on information in small pieces, whereas AI has the ability to synchronize trillions of bits in the blink of an eye: "(…) It's no competition," he said. If intelligence is about learning and sharing knowledge, AI is set to surpass us in speed and understanding. It's a "very scary conclusion," Hinton said—a warning that highlights the need for consensus on AI's capabilities. (…)" (Saso, 2025)

In a recent paper in Science on how to manage extreme AI risks amid rapid technological progress, Hinton warns unequivocally: "(…) Increases in capabilities and autonomy may soon massively amplify AI's impact, with risks that include large-scale social harms, malicious uses, and an irreversible loss of human control over autonomous AI systems. (…)" (Bengio, Hinton et al., 2024)

Now, let us not get carried away by popular science-fiction ideas of evil AI systems taking over the world. The danger of AI I want to talk about does not come from systems like HAL 9000, the super-intelligent computer, eliminating all humans on board the U.S.S. Discovery. The danger I want to shed light on is coming from us. It is we who are putting ourselves in danger, because we are turning a blind eye to the growing agency of AI systems.
And we are doing so because we are held captive by an image: the image of an AI system as a mere tool, the iconic image of a computer as a simple box with a keyboard. Because we tacitly assume that agency presupposes consciousness, and since we are firmly convinced that AI systems cannot possibly acquire consciousness, they will not acquire autonomous agency either, so there is nothing to worry about, is there? However, reality is catching up with us fast. Even now, as we speak, AI systems analyze tremendous amounts of data and take decisions in fractions of a second, more and more without any human intervention. Just think about all the sophisticated algorithms which manage the content feed on social media. Think about algorithmic trading on financial markets, or medical diagnostics driven by AI systems. The point is: we rely more and more on AI systems to make decisions that were once the sole province of human beings.

4.3 The need for vigilance

As we are getting more and more comfortable with these new technologies, as we are embracing them more deeply, we tend to become less vigilant over the technology, not to say downright lazy with regard to the use of all those systems. The principle of 'least effort' is a strong predictor of human behavior (Anderson & Rainie, 2023). It makes me think of my former life, when I was still working as a computer scientist for IT companies. In those days, you had to be able to write sophisticated and elegant search queries in SQL-type languages to retrieve relevant information from a database. Nowadays, we just shout "Hey, Google!", followed by a simplified query, often a very inelegant one, and we accept the result at face value, without giving it a second thought. The near-infinite capacity of human beings to take things for granted will never fail to amaze me. Aren't we lulling ourselves to sleep too quickly?
A striking example of how this lazy attitude, blindly accepting what AI systems regurgitate, might lead us straight into disaster was reported recently by Bastian Leibe, professor at RWTH Aachen University in Germany (Leibe, 2025). When President Donald Trump of the United States announced his reciprocal tariffs on April 2nd, 2025, a number of people noticed that these proposed tariffs are not related to the actual tariffs those countries charge on imports from the United States. Instead, they have been shown to correspond to the United States' trade deficit divided by the United States' import volume from that country, which, according to economists, does not make any sense at all from an economic perspective (Leibe, 2025). Admittedly, at the time of writing this article (early April 2025), President Trump had just announced his tariff plan. No scientific analyses or academic articles on this topic had been published as yet, as the pattern had just been discovered and signaled by a few academic scholars well versed in the field, like Bastian Leibe. Since April 2nd, a lot of economists have been trying to find out how on earth someone could come up with such an insane strategy. Until, Leibe continues, someone found out that if you pose the question of tariff tables to current LLMs (like ChatGPT version 4o, Gemini 2.5 Pro et al.), they all propose tariffs which turn out to be very close to the tariffs in President Trump's list. Leibe concludes that the most likely explanation is that the Trump administration simply based its tariffs "on the unchecked outputs of an LLM" (Leibe, 2025): "This has real-world consequences. It is already sending economies into turmoil and it will cause worldwide harm and suffering. And it is sadly nothing that AI safety research could have prevented -- because the problem lay in front of the screen." (Leibe, 2025)

We see it happening all around us, as we speak. Society is becoming more and more complex.
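To make the tariff pattern Leibe describes concrete, the relation observers reconstructed can be written in a few lines of Python. The function name and the figures below are hypothetical, purely for illustration; the code implements only the relation described above (trade deficit divided by import volume), nothing more.

```python
def reconstructed_tariff(us_exports: float, us_imports: float) -> float:
    """The pattern observers reported in the April 2025 tariff list:
    the US trade deficit with a country divided by US imports from it."""
    trade_deficit = us_imports - us_exports
    return trade_deficit / us_imports

# Hypothetical country: the US imports 200 (bn $) and exports 50 (bn $)
rate = reconstructed_tariff(us_exports=50.0, us_imports=200.0)
print(f"{rate:.0%}")  # a 150/200 deficit ratio gives 75%
```

As economists noted, nothing in this quantity measures what the other country actually charges on US goods, which is precisely why the list struck observers as LLM-generated rather than economically motivated.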
We are delegating human decision-making to sophisticated systems which store our digital data and automate decision rules at an ever-increasing pace. In other words, we are being lured into outsourcing our cherished decision-making and autonomy to AI systems step by step, and as these smart systems, powered by sophisticated machine learning, rapidly increase their level of sophistication over, say, the next ten years, we might lose the ability to keep making decisions independently of these systems. Barry Chudakov, founder and principal of Certain Research, sees the relationship between human beings and AI systems as "(…) a struggle between the determined fantasy of humans to resist ('I'm independent and in charge and no, I won't give up my agency!') and the seductive power of technology designed to undermine that fantasy ('I'm fast, convenient, entertaining! Pay attention to me!')" (Anderson & Rainie, 2023)

5 Debate: Towards a Human–AI Symbiosis

What is the way forward? Is the future as bleak as some of these findings tend to suggest? No, I think not. But it is time we realized that we will have to face difficult questions, now that there is still time to do so. What are the things we, as human beings, really want agency over? What should we list as the conditions under which we will turn to AI to help us in making decisions? And, the most pernicious issue, under what conditions and tight control mechanisms are we prepared to outsource certain precisely defined decisions to AI systems? We do not have the luxury of turning a blind eye to these difficult questions. A very promising view of a possible way forward is offered by Somendra Narayan, a professor of Strategy and Innovation at the University of Amsterdam, The Netherlands.
His idea is that if we want to come to grips with the impact of AI on human agency, it is helpful to see this interaction in terms of a symbiosis (Narayan, 2024). Instead of looking at AI as "an external force eroding our autonomy", it would be better to look at it in terms of "an augmentation of human capability, a co-evolution where humans and machines are learning from each other and influencing each other's decision-making" (Narayan, 2024). Narayan admits: yes, AI presents certain risks to human agency, but also opportunities. In order to ensure that AI augments human autonomy instead of diminishing it, we must build systems "where transparency and ethical considerations are central to how AI operates" (Narayan, 2024). His core argument is: human agency does not exist in isolation. Human agency is part of a larger socio-technical system. Human decisions do not arise in isolation; they are shaped by both technological tools and societal structures. In his view, AI is simply the most recent layer in this system.

Tyler Suard, an AI researcher and developer, formerly at Apple and Meta, adds a critical note to this sunny idea of a fruitful human-AI combination. He gives several examples of experiments where AI turned out to perform better on its own than as part of a combined human-AI team. He concludes: if AI can indeed outperform human-AI teams, massive job loss will be lurking around the corner (Tyler, 2024). He calls for a reality check: we need to prepare for the possibility that AI will be a powerful independent entity in the workforce (Tyler, 2024). He pleads for governments and organizations to start working on policies and regulations that address potential job displacement. He thinks of "social safety nets, retraining programs, and incentives for industries that create new job opportunities" (Tyler, 2024).
I want to join this positive but cautious attitude towards the future of the human workforce (Boudry & Friederich, 2024). No, these developments certainly do not mean the end of human work. However, we must see to it that the transition to human-AI cooperation is not too disruptive, as this could threaten an inclusive and just division of labor, possibly even destabilizing society (van Biezen, 2024). In the past few years, the World Economic Forum has stressed time and again in its annual jobs report that analytical thinking and creative thinking are considered the most sought-after core skills for facing the challenges of the oncoming transformation of the labor market (van Biezen, 2024, p. 58). In other words, companies focus on very high-level profiles with abstract cognitive skills as critical for that “brave new world” (Huxley, 1932). But what about the medium-level and low-level occupations? We need to be aware of the urgency of scaffolding social protection and support for those who run the risk of being forced out. The research group Inclusive Society (department Research & Expertise) at UCLL University of Applied Sciences (Leuven, Belgium) attaches great importance to inclusion when it comes to the future of the human workforce. We focus on the question: how can we contribute to building a more inclusive and fairer world? From this perspective, we plead for continued policy debates on AI regulation that explicitly take into account the phenomenon of growing AI agency.

6 Conclusions

The study has shown that the prevailing assumption of artificial intelligence (AI) as a mere tool—an inert instrument incapable of independent reasoning or decision-making—no longer holds in light of current technological developments. Through a philosophical and conceptual analysis, it was demonstrated that AI systems are gaining increasing levels of autonomy and agency, allowing humans to outsource parts of their decision-making to machines.
This process, if left unchecked, risks creating social and ethical blind spots with potentially destabilizing consequences for labour structures and democratic decision-making. At the same time, the transition toward human–AI cooperation presents opportunities for innovation, inclusion, and new forms of collaboration—provided it is governed by ethical oversight and clear regulatory frameworks. The paper contributes to the interdisciplinary dialogue between philosophy, technology studies, and the social sciences by reframing the debate on AI not in terms of technical functionality but in terms of agency. It advances a conceptual bridge between classical philosophical arguments about consciousness and contemporary issues of algorithmic autonomy, offering a framework for understanding AI as an emergent actor within socio-technical systems. This philosophical contribution strengthens the conceptual foundations of ongoing debates on AI governance and the future of work. For management and organizations, the findings underline the importance of recognizing AI systems as active participants in decision-making processes rather than as passive tools. Managers and policymakers are encouraged to develop governance models that balance efficiency with ethical responsibility, ensuring human oversight, transparency, and accountability in the implementation of AI technologies. On a broader societal level, the paper highlights the need for continued public and policy debates on AI regulation, particularly regarding its implications for inclusion, social justice, and democratic control. The study is theoretical in nature and does not include empirical or quantitative data. Its findings are based on philosophical reasoning and conceptual synthesis, which limits its direct applicability to specific organizational contexts.
The rapid evolution of AI technologies also means that some empirical examples may quickly become outdated. Future research should empirically investigate how AI agency manifests in real organizational and societal contexts. Comparative studies across industries or sectors could provide insight into how different forms of AI autonomy affect human decision-making, trust, and accountability. Interdisciplinary research combining philosophy, organizational studies, and AI ethics would help to refine theoretical models and translate them into practical guidelines for governance.

References

1. Agüera y Arcas, B. & Manyika, J. (2025). AI Is Evolving – And Changing Our Understanding of Intelligence. In Noema, Berggruen Institute, https://www.noemamag.com/ai-is-evolving-and-changing-our-understanding-of-intelligence/.
3. Anderson, J. & Rainie, L. (2023). The Future of Human Agency. In Pew Research Center, https://www.pewresearch.org/internet/2023/02/24/the-future-of-human-agency/.
4. Baron, S. (2025). Are a Machine's Thoughts Real? The Answer Matters Now More Than Ever. In Science Alert, https://www.sciencealert.com/are-a-machines-thoughts-real-the-answer-matters-now-more-than-ever.
5. Bengio, Y., Hinton, G. et al. (2024). Managing Extreme AI Risks Amid Rapid Progress. In Science, Vol. 384, Issue 6698, pp. 842-845.
6. Boudry, M. & Friederich, S. (2024). The Selfish Machine. On the Power and Limitation of Natural Selection to Understand the Development of Advanced AI. In Philosophy of Science, PhilSci-Archive, preprint, https://philsci-archive.pitt.edu/23903/.
7. Boudry, M. (2025). The Selfish Machine. Will Humanity Be Subjugated by Superintelligent AIs? In Maarten Boudry’s Substack, https://maartenboudry.substack.com/p/the-selfish-machine.
8. Bratton, B. (2024). The Five Stages of AI Grief. In Noema, Berggruen Institute, https://www.noemamag.com/the-five-stages-of-ai-grief/.
9. Brey, Ph.
& Søraker, J. H. (2009). Philosophy of Computing and Information Technology. In Philosophy of Technology and Engineering Sciences, edited by Antonie Meijers, Amsterdam, Elsevier, pp. 1341–1407.
10. Bryson, J. (2009). Robots Should Be Slaves. Published at Joanna Bryson Publications, https://www.joannajbryson.org/publications/robots-should-be-slaves-pdf.
11. Darwin, C. (1859). On the Origin of Species by Means of Natural Selection or the Preservation of Favoured Races in the Struggle for Life. New York, D. Appleton and Company, 1861 (first edition 1859).
12. Dhondt, S. & Dessers, E. (eds.)(2022). Robot zoekt collega. Uitgeverij Lannoo. [In Dutch; English title: Robot Seeking Colleague].
13. Douglas Heaven, W. (2023). Deep learning pioneer Geoffrey Hinton quits Google. In MIT Technology Review, https://web.archive.org/web/20230501125621/https://www.technologyreview.com/2023/05/01/1072478/deep-learning-pioneer-geoffrey-hinton-quits-google/.
14. Edwards, B. (2025). What does “PhD-level” AI mean? OpenAI’s rumored $20,000 agent plan explained. In Ars Technica, https://arstechnica.com/ai/2025/03/what-does-phd-level-ai-mean-openais-rumored-20000-agent-plan-explained/.
15. Ferguson, N. (2025). The Doom Nexus. In Niall Ferguson’s Time Machine, https://niallferguson.substack.com/p/the-doom-nexus.
16. Floridi, L. (2025). AI as Agency without Intelligence: On Artificial Intelligence as a New Form of Artificial Agency and the Multiple Realisability of Agency Thesis. February 12, 2024. Available at http://dx.doi.org/10.2139/ssrn.5135645.
17. Foreman, J. T. (2024). How to Make it as a Doomsday Prophet. In The Metaphor, https://www.taylorforeman.com/p/how-to-make-it-as-a-doomsday-prophet.
18. Ginnis, V. (2025). Is er nog íémand bekommerd om de gevaren van AI? In De Standaard, 15 February 2025, https://www.standaard.be/cnt/dmf20250214_96655287 [In Dutch; English title: Is There Still Anyone Concerned About the Dangers of AI?].
19. Harari, Y. N. (2024). Nexus.
A Brief History of Information Networks from the Stone Age to AI. Vintage Publishing, Kindle Edition.
20. Huxley, A. (1932). Brave New World. Pdf edition, Coradella Collegiate Bookshelf, 2004, http://collegebookshelf.net.
21. Jackson, F. (1986). What Mary Didn’t Know. In The Journal of Philosophy, Vol. 83, No. 5 (May, 1986), pp. 291-295.
22. Kahneman, D. (2011). Thinking, Fast and Slow. New York, Farrar, Straus and Giroux.
23. Krajewski, S. (2015). Penrose’s Metalogical Argument is Unsound. In Ladyman, J. et al. (eds.)(2015). Road to Reality with Roger Penrose. Kraków (Poland), Copernicus Center Press, pp. 87-104.
24. Krauss, P. & Maier, A. (2025). De geest in de machine. In EOS Psyche & Brein, June 2025, pp. 20-25 [In Dutch; English title: The Ghost in the Machine].
25. Ladyman, J. et al. (eds.)(2015). Road to Reality with Roger Penrose. Kraków (Poland), Copernicus Center Press.
26. Leibe, B. (2025). Post on LinkedIn, https://www.linkedin.com/feed/update/urn:li:activity:7313873939691130880/.
27. Lim, D. (2024). Why Yuval Noah Harari’s AI Doomsday Prophecies Are Misleading. In Medium, https://medium.com/@don-lim/why-yuval-noah-hararis-ai-doomsday-prophecies-are-misleading-5541504ec3ab.
28. Molek, N., Pulinx, R. & van Biezen, A. (eds.)(2024). Analysis of the State of the Art on the Future of Human Workforce. Scientific Report. Transform, European Union.
29. Molek, N., van Biezen, A. & Velez, M. J. (2025). Book of Abstracts. International Interdisciplinary Conference Transform “The Future of Human Workforce”. Novo Mesto (Slovenia), FOS.
30. Müller, V. (2025). Philosophy of AI. A Structured Overview. In Smuha, N. (ed.)(2025). The Cambridge Handbook of the Law, Ethics and Policy of Artificial Intelligence. Cambridge University Press, pp. 40-58.
31. Nagel, E. & Newman, J. R. (1958). Gödel’s Proof. New York, New York University Press.
32. Narayan, S. (2024).
AI and the Future of Human Agency: Are We Outsourcing Decision-Making or Evolving with Machines? In Medium, https://medium.com/@narayan.somendra/ai-and-the-future-of-human-agency-are-we-outsourcing-decision-making-or-evolving-with-machines-78da6ba4475f.
33. Newman, S. et al. (2019). AI & Agency. In 2019 Summer Institute on AI and Society, in AI Pulse, 26 September 2019, https://aipulse.org/ai-agency/?pdf=417.
34. Palazzolo, S. & Weinberg, C. (2025). OpenAI Plots Charging $20,000 a Month For PhD-Level Agents. In The Information, https://www.theinformation.com/articles/openai-plots-charging-20-000-a-month-for-phd-level-agents.
35. Pelley, S. (2024). "Godfather of Artificial Intelligence" Geoffrey Hinton on the promise, risks of advanced AI. In CBS News, https://www.cbsnews.com/news/geoffrey-hinton-ai-dangers-60-minutes-transcript/.
36. Penrose, R. (1989). The Emperor’s New Mind. Concerning Computers, Minds and The Laws of Physics. Oxford, Oxford University Press.
37. Renard, V. et al. (2024). Mary Steps Out: Capturing Patient Experience through Qualitative and AI Methods. In NEJM AI, Vol. 1, No. 12, https://ai.nejm.org/doi/10.1056/AIp2400567.
38. Sapunov, G. (2023). Turing, “Intelligent Machinery. A Heretical Theory”, 1951. In Gonzo ML, https://gonzoml.substack.com/p/turing-intelligent-machinery-a-heretical?utm_campaign=post&utm_medium=web.
39. Saso, E. (2025). The path to safe, ethical AI: SRI highlights from the 2025 IASEAI conference in Paris. In Schwartz Reisman Institute for Technology and Society, University of Toronto, https://srinstitute.utoronto.ca/news/the-path-to-safe-ethical-ai.
40. Satyanarayan, A. & Jones, G. M. (2024). Intelligence as Agency: Evaluating the Capacity of Generative AI to Empower or Constrain Human Action. In An MIT Exploration of Generative AI – From Novel Chemicals to Opera, https://mit-genai.pubpub.org/pub/94y6e0f8/release/2.
41. Searle, J.
(1980). Minds, Brains and Programs. In Behavioral and Brain Sciences, 3, pp. 417-424.
42. Searle, J. (1984). Minds, Brains and Science. Cambridge, Mass., Harvard University Press.
43. Schoors, K. (2024). Alles wordt anders. Gent, Borgerhoff & Lamberigts. [In Dutch; English title: Everything Will Be Different].
44. Smuha, N. A. (ed.)(2025). The Cambridge Handbook of the Law, Ethics and Policy of Artificial Intelligence. Cambridge University Press.
45. Suard, T. (2024). The Future of Work: AI May Not Need Us After All. In Medium, https://medium.com/@ceo_44783/the-future-of-work-ai-may-not-need-us-after-all-5df8eae52ed9.
46. Suleyman, M. & Bhaskar, M. (2023). The Coming Wave. Technology, Power, and the Twenty-First Century’s Greatest Dilemma. New York, Crown.
47. Turing, A. (1950). Computing Machinery and Intelligence. In Mind, 59, pp. 433-460.
48. Turing, A. (1951). Intelligent Machinery. A Heretical Theory. https://gwern.net/doc/ai/1951-turing.pdf.
49. von Hoffman, C. (2025). Smarter AI means bigger risks – Why guardrails matter more than ever. In MarTech, https://martech.org/smarter-ai-means-bigger-risks-why-guardrails-matter-more-than-ever/.
50. van Biezen, A. F. (2016). A Case for Naturalism. In van Biezen, A. F., The Torch of Discovery, http://alexanderfvanbiezen.blogspot.com/2016/05/a-case-for-naturalism.html.
51. van Biezen, A. F. (2022). Top-Down Cosmology and Model-Dependent Realism. A Philosophical Study of the Cosmology of Stephen Hawking and Thomas Hertog. Brussels, VUB Press.
52. van Biezen, A. (2024). Emerging Skills for the Future Workforce. In Molek, N., Pulinx, R. & van Biezen, A. (eds.)(2024). Analysis of the State of the Art on the Future of Human Workforce. Scientific Report. Transform, European Union, pp. 50-62.
53. van Biezen, A. (2025a). Abstract of ‘AI is Not a Tool’. In Molek, N., van Biezen, A. & Velez, M. J. (2025). Book of Abstracts. International Interdisciplinary Conference Transform “The Future of Human Workforce”.
Novo Mesto (Slovenia), FOS, p. 8.
54. van Biezen, A. F. (2025b). AI is not just another tool. What keeps us in the blind spot? In van Biezen, A. F., The Torch of Discovery, https://alexanderfvanbiezen.blogspot.com/2025/04/ai-is-not-just-another-tool.html.
55. Verbinnen, L. (2025). AI-gebruik stijgt, maar ook onze bezorgdheid: ‘Techno-optimisme maakt plaats voor technorealisme’. In EOS Wetenschap, https://www.eoswetenschap.eu/technologie/ai-gebruik-stijgt-maar-ook-onze-bezorgdheid-techno-optimisme-maakt-plaats-voor?utm_source=ActiveCampaign&utm_medium=mail&utm_campaign=eos_515. [In Dutch; English title: AI Usage Rises, but So Does Our Concern: ‘Techno-Optimism Gives Way to Techno-Realism’.]
56. Walther, C. C. (2025). Hybrid Intelligence: The Future of Human-AI Collaboration. In Psychology Today, https://www.psychologytoday.com/us/blog/harnessing-hybrid-intelligence/202503/hybrid-intelligence-the-future-of-human-ai-collaboration.
57. Wang, X. (2023). The Possibility of Artificial Qualia. In Communications in Humanities Research, https://doi.org/10.54254/2753-7064/6/20230083.
58. Wiggers, K. (2025). OpenAI reportedly plans to charge up to $20,000 a month for specialized AI ‘agents’. In TechCrunch, https://techcrunch.com/2025/03/05/openai-reportedly-plans-to-charge-up-to-20000-a-month-for-specialized-ai-agents/.

***

Alexander van Biezen is a philosopher of science who graduated from the Free University of Brussels (Vrije Universiteit Brussel) as a Doctor of Philosophy and Moral Sciences. He specialized in the philosophy of cosmology, with a doctoral dissertation on the cosmological models of Stephen Hawking and Thomas Hertog. His book Top-Down Cosmology and Model-Dependent Realism (2022, VUB Press) is freely available online through the research portal of the Free University of Brussels (https://researchportal.vub.be/).
He has an additional background in religious studies and in computer science. Currently, he is employed as a teacher of philosophy and religion with the Arcadia school group in Aarschot, Belgium. His main areas of interest are cosmology and the philosophy of artificial intelligence.

Alexander van Biezen je filozof znanosti, doktoriral je na Svobodni univerzi v Bruslju (Vrije Universiteit Brussel) kot doktor filozofije in moralnih znanosti. Specializiral se je za filozofijo kozmologije, z doktorsko disertacijo o kozmoloških modelih Stephena Hawkinga in Thomasa Hertoga. Njegova knjiga Top-Down Cosmology and Model-Dependent Realism (2022, VUB Press) je prosto dostopna na spletu preko raziskovalnega portala Svobodne univerze v Bruslju (https://researchportal.vub.be/). Ima tudi dodatno izobrazbo na področju religijskih študij in računalništva. Trenutno je zaposlen kot učitelj filozofije in religije v skupini šol Arcadia v Aarschotu v Belgiji. Njegova glavna področja zanimanja sta kozmologija in filozofija umetne inteligence.

***

Povzetek

UI ni zgolj orodje: Vpliv naraščajoče tvornosti umetne inteligence na prihodnost dela

Raziskovalno vprašanje (RV): Katere so temeljne filozofske predpostavke, ki oblikujejo sodobno razumevanje umetne inteligence (UI) kot zgolj orodja, in kako te predpostavke vplivajo na naše dojemanje naraščajoče tvornosti UI ter njenega možnega vpliva na prihodnost dela?

Namen: Članek kritično preučuje razširjeno domnevo, da sistemi UI ostajajo pasivna orodja, popolnoma pod nadzorom človeka. Raziskuje, kako nove oblike tvornosti UI – razumljene kot avtonomne oziroma polavtonomne sposobnosti odločanja – izpodbijajo to predstavo in kakšne posledice ima ta premik za človeško delo, etiko in družbeno stabilnost.

Metoda: Raziskava uporablja filozofsko in konceptualno metodologijo, utemeljeno v filozofiji duha in filozofiji znanosti.
Opira se na klasične miselne poskuse (Searlovo »kitajsko sobo«, Jacksonovo »Mary v črno-beli sobi« in Penroseove argumente o nealgoritmični zavesti) ter vključuje sodobne interdisciplinarne razprave o tvornosti, avtonomiji in zavesti UI. Analiza temelji na kritičnem pregledu literature, ki združuje filozofske, tehnološke in družbenopolitične vire.

Rezultati: Ugotovitve kažejo, da predpostavka o UI kot »neumnem orodju« ne vzdrži več. Dokazi o naraščajoči avtonomiji UI potrjujejo, da se postopki odločanja, ki so bili nekoč izključno v domeni človeka, vse pogosteje prenašajo na stroje. Takšno postopno prenašanje človeške tvornosti lahko povzroči družbene in etične slepe pege ter vodi do neenakih transformacij dela in izzivov upravljanja. Vendar pa lahko nadzorovan prehod k sodelovanju med človekom in UI spodbuja inovativnost in vključenost, če temelji na etičnem nadzoru in ustreznih regulativnih okvirih.

Organizacija: Za organizacije raziskava poudarja potrebo po pravočasnem predvidevanju sprememb v delovnih strukturah in procesih odločanja, ki jih povzročajo sistemi UI z naraščajočo tvornostjo. Menedžerje in oblikovalce politik spodbuja k oblikovanju upravljavskih okvirov, ki ohranjajo človeški nadzor in hkrati omogočajo odgovorno sodelovanje z UI.

Družba: Na družbeni ravni raziskava poudarja nujnost odprtega političnega in etičnega dialoga o regulaciji UI. Naslavljanje posledic avtonomije UI je ključno za ohranjanje človeške tvornosti, demokratične odgovornosti in družbene pravičnosti v digitalni dobi.

Originalnost: Članek prispeva k povezovanju filozofske refleksije in družbeno-tehnične analize, saj ponovno opredeljuje UI ne zgolj kot tehnološko orodje, temveč kot nastajajočega akterja v sistemih človeškega odločanja. Razvija koncept »tvornosti UI« kot osrednjo analitično perspektivo za razumevanje preobrazbe dela.

Omejitve/nadaljnje raziskovanje: Raziskava je konceptualne narave in ne vključuje empiričnih podatkov.
Nadaljnje raziskave bi morale empirično preučiti, kako organizacije in delavci v praksi doživljajo tvornost UI – na primer s pomočjo etnografskih ali organizacijskih študij primerov – ter raziskati politične in regulativne instrumente, ki bi lahko omilili tveganja, povezana z avtomatizacijo in tehnokratskim upravljanjem.

Ključne besede: umetna inteligenca, tvornost UI, zavest UI, delovna sila, prihodnost dela, filozofija znanosti, filozofija duha.

This work is licensed under a Creative Commons Attribution 4.0 International License. This journal is published by the Faculty of Organisation Studies in Novo mesto.