Abstract
The rapid integration of artificial intelligence (AI) into recruitment is transforming how organizations identify, evaluate, and select job candidates. AI-driven recruitment systems enable firms to process large volumes of applicant data and increase the efficiency of hiring processes. However, the growing reliance on algorithmic decision systems also introduces significant governance challenges related to transparency, accountability, and candidate trust. This study examines AI-driven recruitment systems through the lens of algorithmic management and organizational governance. While existing research has focused primarily on technical performance and bias mitigation in automated hiring systems, relatively little attention has been devoted to the governance structures required to manage algorithmic decision-making within organizational recruitment processes. Addressing this gap, the paper develops the AI Recruitment Governance Framework (ARGF), a conceptual model that positions AI-driven recruitment as a form of algorithmic management and proposes a responsible AI governance architecture based on three core dimensions: transparency, accountability, and human oversight. The framework highlights governance mechanisms that enable organizations to maintain managerial responsibility and ethical oversight while leveraging the efficiency gains offered by AI technologies, and it provides a theoretical foundation for future empirical research. The study contributes to the emerging literature on responsible AI in human resource management by integrating insights from algorithmic management theory, HR governance research, and AI ethics scholarship.
The findings suggest that organizations should adopt hybrid recruitment models in which algorithmic screening is complemented by structured human oversight and clear governance mechanisms. Such approaches can enable organizations to benefit from AI-enabled recruitment while preserving fairness, transparency, and legitimacy in hiring decisions.
Published in: Science Discovery Artificial Intelligence (Volume 1, Issue 2)
DOI: 10.11648/j.sdai.20260102.12
Page(s): 69-77
Creative Commons: This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.
Copyright: Copyright © The Author(s), 2026. Published by Science Publishing Group
Keywords
Artificial Intelligence, Algorithmic Management, AI Recruitment Governance Framework (ARGF),
Responsible AI Governance, Human Resource Management, Algorithmic Decision-Making
1. Introduction
The rapid diffusion of Artificial Intelligence (AI) in organizational decision-making has profoundly reshaped human resource management (HRM) practices. In particular, AI-driven recruitment systems are increasingly used to automate candidate screening, evaluate applications, and support hiring decisions. Organizations adopt these tools primarily to improve efficiency, reduce time-to-hire, and enhance the scalability of recruitment processes in highly competitive labor markets [1, 4].
Recent technological developments, including machine learning and generative AI systems, have further expanded the capabilities of automated recruitment platforms. These technologies can process large volumes of applicant data, identify patterns across candidate profiles, and generate predictive assessments of candidate suitability. As a result, AI systems are increasingly embedded in recruitment pipelines across sectors such as finance, healthcare, and technology, where organizations manage large applicant pools and require rapid decision-making [4].
However, the growing reliance on AI in recruitment has also raised significant concerns regarding transparency, fairness, and accountability. AI-driven systems often operate as complex algorithmic models whose decision-making logic remains opaque to both candidates and hiring managers. This opacity can generate challenges related to explainability, particularly when automated screening decisions affect employment opportunities for applicants. In addition, algorithmic decision-making may inadvertently reproduce historical biases embedded in training datasets, potentially leading to discriminatory outcomes and undermining equal opportunity principles in hiring practices.
Beyond technical risks, the integration of AI into recruitment processes represents a broader organizational transformation that can be conceptualized through the lens of algorithmic management. Algorithmic management refers to the use of automated systems to monitor, evaluate, and guide organizational decision-making processes that were traditionally performed by human managers. Within recruitment contexts, algorithmic systems increasingly shape how candidate information is evaluated, prioritized, and filtered, effectively redistributing decision authority between human managers and algorithmic tools.
This shift introduces new governance challenges for organizations. While AI systems can enhance efficiency and consistency in recruitment decisions, they also raise questions regarding responsibility, oversight, and ethical accountability. Organizations must therefore ensure that AI-driven recruitment processes remain aligned with regulatory standards, ethical principles, and organizational governance structures. Regulatory developments such as the European Union Artificial Intelligence Act emphasize the need for transparency, human oversight, and risk management in the deployment of high-impact AI systems, including those used in employment contexts.
Despite the growing adoption of AI recruitment technologies, existing research often focuses on technical performance or bias mitigation while paying less attention to the governance structures required to manage algorithmic decision-making in organizational settings. Recent research has begun to address these limitations by developing governance-oriented AI frameworks in high-risk organizational contexts, particularly within public sector HR systems, where legitimacy and regulatory compliance play a central role in AI adoption [14].
This study contributes to the literature by introducing a governance framework that bridges algorithmic management research and responsible AI principles in organizational recruitment. In particular, the conceptualization of AI recruitment as a form of algorithmic management remains underdeveloped in HRM literature. This gap highlights the need for governance frameworks capable of balancing technological efficiency with accountability, transparency, and human oversight in recruitment processes.
This study therefore addresses the following research question: how can organizations design governance frameworks that ensure the responsible deployment of AI-driven recruitment systems?
In response to this challenge, this study develops the AI Recruitment Governance Framework (ARGF), a conceptual model that positions AI-driven recruitment at the intersection of algorithmic management, HRM practices, and responsible AI governance, and explains how organizations can govern algorithmic hiring systems through transparency, accountability, and human oversight. The ARGF provides a conceptual foundation that future studies can use to empirically examine governance mechanisms in AI-supported recruitment across industries and institutional contexts. By integrating insights from algorithmic management research, HR governance studies, and responsible AI principles, the study aims to contribute to the emerging literature on AI governance in organizational decision-making.
The remainder of this paper is structured as follows. Section 2 reviews the existing literature on AI in recruitment, algorithmic management, and responsible AI governance. Section 3 outlines the conceptual approach used to develop the proposed framework. Section 4 discusses the governance challenges associated with AI-driven recruitment systems. Section 5 presents a responsible AI governance framework designed to support accountable and transparent recruitment practices. Finally, Section 6 concludes by discussing implications for HR governance and future research directions.
Figure 1. AI-Driven Recruitment as Algorithmic Management Framework.
Conceptual representation of the interaction between candidate data processing, algorithmic screening, and human oversight within AI-supported recruitment processes. Governance layers ensure transparency, accountability, and responsible decision-making.
2. Literature Review
2.1. Artificial Intelligence in Recruitment
Artificial intelligence is increasingly integrated into human resource management (HRM) processes, particularly in recruitment and talent acquisition [15, 17]. AI-driven recruitment systems use machine learning algorithms, predictive analytics, and automated screening tools to evaluate job applicants and support hiring decisions. These technologies allow organizations to analyze large volumes of applicant data, identify relevant candidate profiles, and automate early stages of the hiring process [5].
Research on HR analytics suggests that AI-supported recruitment tools can significantly improve efficiency by reducing time-to-hire and enabling more scalable evaluation processes across large applicant pools [3]. By applying consistent evaluation criteria to candidate profiles, algorithmic recruitment systems are often presented as tools that may reduce human biases and improve decision consistency [4]. This position is also supported by recent evidence from the European banking sector, where AI-enabled recruitment was associated with greater process efficiency and improved candidate selection outcomes, although concerns regarding trust remained significant.
However, scholars have increasingly highlighted the risks associated with algorithmic hiring systems. One major concern relates to the reproduction of historical biases embedded in training datasets. When algorithms learn from past hiring decisions, they may inadvertently replicate discriminatory patterns present in organizational data. Additionally, the complex and opaque nature of many machine learning models can reduce transparency in hiring processes, making it difficult for both organizations and candidates to understand how recruitment decisions are produced.
As a result, while AI recruitment systems promise efficiency gains, they also introduce new governance challenges related to fairness, transparency, and accountability in hiring processes.
2.2. Algorithmic Management and Organizational Decision-Making
The concept of algorithmic management provides an important theoretical framework for understanding how AI technologies reshape managerial decision-making processes. Algorithmic management refers to the use of automated systems and algorithms to monitor, evaluate, and coordinate work activities that were traditionally performed by human managers [16].
Research on digital labor platforms has shown that algorithmic systems can increasingly perform core managerial functions such as performance evaluation, task allocation, and decision support [7]. In these contexts, algorithms influence how workers are monitored and how organizational decisions are implemented.
Kellogg, Valentine, and Christin [7] describe algorithmic management as a system in which algorithms structure and guide organizational behavior by embedding decision rules within technological infrastructures. Similarly, Lee et al. [8] demonstrate how algorithmic systems mediate interactions between workers and organizations by shaping information flows and performance evaluations. Möhlmann and Zalmanson [9] further highlight how algorithmic control mechanisms influence organizational authority and managerial responsibility.
This shift reflects the growing role of algorithmic management, where AI systems increasingly support or replace managerial decision-making processes [16].
Although algorithmic management has been widely studied in digital platform environments, its implications for human resource management practices—particularly recruitment—remain relatively underexplored. In recruitment contexts, algorithmic systems increasingly influence how candidate data is analyzed, prioritized, and filtered. This shift effectively redistributes decision authority between human managers and algorithmic systems, raising important governance questions regarding responsibility, oversight, and organizational accountability.
2.3. Responsible AI Governance
The growing deployment of artificial intelligence in organizational decision-making has generated increasing interest in responsible AI governance. Responsible AI frameworks emphasize the importance of developing governance structures that ensure AI systems operate in accordance with ethical principles such as fairness, transparency, accountability, and human oversight.
Floridi et al. argue that the responsible development and deployment of AI requires governance mechanisms that integrate ethical principles into technological design and organizational decision processes. Similarly, Jobin, Ienca, and Vayena identify transparency, accountability, and human oversight as key principles that underpin most global AI ethics guidelines.
These governance principles are particularly relevant in recruitment contexts, where algorithmic systems may directly influence employment opportunities and career trajectories, raising concerns about bias and fairness in algorithmic decision-making [17]. Ensuring responsible use of AI in recruitment therefore requires organizations to establish governance structures capable of supervising algorithmic decision processes.
Recent regulatory developments further highlight the importance of governance in AI deployment. The European Union Artificial Intelligence Act classifies AI systems used in employment and recruitment as high-risk technologies and requires organizations to implement risk management, transparency, and human oversight mechanisms when deploying such systems.
Despite increasing attention to responsible AI governance, relatively limited research has examined how governance mechanisms can be operationalized within AI-driven recruitment systems. Addressing this gap is essential for advancing theoretical understanding of governance mechanisms capable of regulating algorithmic hiring systems in organizational contexts.
Figure 2. Conceptual positioning of AI-driven recruitment and the AI Recruitment Governance Framework (ARGF) within the broader domains of algorithmic management and responsible AI governance.
As illustrated in Figure 2, the AI Recruitment Governance Framework (ARGF) integrates insights from algorithmic management, AI-driven recruitment, and responsible AI governance.
3. Conceptual Approach
This study adopts a conceptual research approach to analyze the governance challenges associated with AI-driven recruitment systems. Conceptual research is particularly appropriate when emerging technological phenomena reshape organizational practices faster than empirical evidence can fully capture. In such contexts, conceptual analysis enables scholars to synthesize insights from multiple theoretical perspectives and develop frameworks that guide future empirical investigation.
To enhance methodological transparency, the conceptual development of this study followed a structured literature selection approach. Relevant academic sources were identified through major databases including Scopus, Web of Science, and Google Scholar, focusing on publications related to AI-driven recruitment, algorithmic management, and responsible AI governance.
The selection criteria included peer-reviewed journal articles, influential theoretical contributions, and recent studies published between 2015 and 2025 to capture the most relevant developments in the field. Priority was given to studies addressing governance, ethics, and organizational implications of AI systems.
This structured approach supports the conceptual synthesis by ensuring that the framework is grounded in established academic literature while integrating emerging perspectives on AI governance in human resource management.
The rapid integration of artificial intelligence into recruitment processes represents a relatively recent organizational development. While a growing body of literature has examined the technical performance and bias risks of AI hiring systems, fewer studies have explored the broader governance implications of algorithmic decision-making within recruitment practices. A conceptual approach therefore provides a suitable methodological strategy for integrating insights from different research streams and identifying key governance mechanisms relevant to AI-driven recruitment.
The framework proposed in this study is developed through the synthesis of three main bodies of literature. First, research on AI-enabled recruitment and HR analytics provides insights into how organizations deploy machine learning systems to automate candidate screening and evaluation processes [3]. Second, the algorithmic management literature offers a theoretical lens for understanding how decision authority may shift from human managers to algorithmic systems within organizational contexts [8]. Third, research on responsible AI governance highlights the ethical and regulatory principles required to ensure that algorithmic systems operate in transparent, accountable, and socially responsible ways.
By integrating these three theoretical perspectives, the study identifies key governance challenges that emerge when algorithmic systems influence recruitment decisions. The synthesis of these research streams leads to the identification of three core governance dimensions that are particularly relevant for AI-driven recruitment: transparency, accountability, and human oversight.
The resulting framework conceptualizes AI-driven recruitment as a form of algorithmic management in which algorithmic systems support candidate evaluation while human decision-makers retain responsibility for final hiring outcomes. The framework therefore emphasizes hybrid governance structures that combine algorithmic decision support with human managerial oversight. Such structures enable organizations to benefit from the efficiency gains offered by AI technologies while maintaining ethical responsibility, regulatory compliance, and organizational legitimacy in hiring processes.
4. Governance Challenges in AI-Driven Recruitment
The increasing adoption of artificial intelligence in recruitment processes introduces a range of governance challenges that organizations must address to ensure the responsible deployment of algorithmic decision systems. While AI-driven recruitment tools can significantly improve efficiency and scalability in hiring processes, their integration into organizational decision-making structures also generates new risks related to transparency, accountability, and fairness in hiring outcomes.
From an organizational perspective, AI recruitment systems can be understood as a new domain of algorithmic governance risk. When algorithmic systems influence hiring decisions, organizations are exposed not only to operational risks but also to broader reputational, legal, and ethical challenges. Recruitment decisions directly affect individuals’ employment opportunities, making algorithmic hiring systems particularly sensitive from both regulatory and societal perspectives. Recent regulatory developments, such as the European Union Artificial Intelligence Act, classify AI systems used in employment and recruitment as high-risk technologies and require organizations to implement governance mechanisms that ensure transparency, risk management, and human oversight.
Within this regulatory context, governance frameworks such as the AI Recruitment Governance Framework (ARGF) may support organizations in operationalizing the transparency, accountability, and human oversight requirements emphasized by the EU AI Act.
One of the most significant governance challenges concerns transparency. Many AI recruitment tools rely on complex machine learning models that function as “black box” systems, making it difficult for decision-makers to understand how candidate evaluations are generated. Limited transparency may reduce trust in recruitment processes and create difficulties in explaining hiring decisions to candidates and regulators. Responsible AI frameworks therefore emphasize the need for mechanisms that make algorithmic decision processes more interpretable and communicable to relevant stakeholders.
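The transparency mechanisms discussed above can be illustrated with a minimal sketch. The example below is hypothetical (it is not drawn from the paper or any real screening product): it shows how a simple weighted scoring model can retain a per-criterion breakdown, so that recruiters and candidates can be told which documented criteria drove an evaluation. The criteria names and weights are invented for illustration.

```python
# Illustrative sketch of an interpretable screening score: each evaluation
# keeps its per-criterion contributions so the decision can be explained.
# Criteria and weights below are hypothetical examples.
from dataclasses import dataclass

# Hypothetical, documented decision criteria (weights sum to 1.0).
WEIGHTS = {"years_experience": 0.4, "skills_match": 0.4, "assessment": 0.2}

@dataclass
class ScoredCandidate:
    candidate_id: str
    total: float
    contributions: dict  # criterion -> weighted contribution

def score_candidate(candidate_id: str, features: dict) -> ScoredCandidate:
    """Score a candidate while preserving the breakdown used to explain it."""
    contributions = {
        criterion: WEIGHTS[criterion] * features[criterion]
        for criterion in WEIGHTS
    }
    return ScoredCandidate(candidate_id, sum(contributions.values()), contributions)

def explain(scored: ScoredCandidate) -> str:
    """Human-readable account of how the score was produced."""
    lines = [f"Candidate {scored.candidate_id}: total score {scored.total:.2f}"]
    for criterion, value in sorted(scored.contributions.items(), key=lambda kv: -kv[1]):
        lines.append(f"  {criterion}: {value:.2f}")
    return "\n".join(lines)

s = score_candidate("C-001", {"years_experience": 0.8, "skills_match": 0.9, "assessment": 0.7})
print(explain(s))
```

Real screening models are rarely this simple; for opaque models the same role is played by post-hoc explainability techniques, but the governance requirement is identical: the decision criteria must be documented and communicable.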
A second major governance challenge relates to accountability. When algorithmic systems participate in recruitment decisions, it may become unclear who is ultimately responsible for the outcomes of these decisions. Heavy reliance on automated screening systems may shift decision authority away from human managers toward algorithmic systems embedded within organizational infrastructures. Research on algorithmic management demonstrates how algorithmic systems can reshape managerial authority and influence organizational decision processes [8]. For this reason, organizations must establish governance structures that preserve human accountability for hiring decisions supported by algorithmic systems.
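One concrete accountability mechanism of the kind discussed above is an audit trail that records every algorithmic recommendation alongside the accountable human decision. The sketch below is an assumption-laden illustration, not the paper's system: field names and the example entry are hypothetical.

```python
# Hedged sketch of a recruitment audit trail: each record pairs the algorithmic
# recommendation with the named human decision-maker and their rationale, so
# responsibility for the final outcome remains traceable to a manager.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEntry:
    candidate_id: str
    algorithmic_recommendation: str  # e.g. "advance" or "reject"
    human_decision: str
    decided_by: str                  # the accountable manager
    rationale: str
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

class RecruitmentAuditTrail:
    def __init__(self):
        self._entries: list[AuditEntry] = []

    def record(self, entry: AuditEntry) -> None:
        self._entries.append(entry)

    def overrides(self) -> list[AuditEntry]:
        """Entries where the human decision departed from the algorithm."""
        return [e for e in self._entries
                if e.human_decision != e.algorithmic_recommendation]

trail = RecruitmentAuditTrail()
trail.record(AuditEntry("C-001", "reject", "advance", "hr.manager@example.org",
                        "Relevant experience not captured by the screening model"))
print(len(trail.overrides()))  # one recorded override
```

Reviewing the override rate over time also gives governance bodies a signal: a rate near zero may indicate rubber-stamping of algorithmic outputs rather than genuine oversight.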
A third critical governance challenge concerns human oversight. Although AI technologies can enhance decision support in recruitment processes, they should not fully replace managerial judgment. Excessive reliance on automated decision systems may lead organizations to delegate critical hiring decisions to algorithmic processes without sufficient human supervision. Responsible AI governance frameworks consistently identify human oversight as a central principle to ensure that algorithmic systems operate in accordance with ethical standards and organizational values.
Taken together, these governance challenges demonstrate that AI-driven recruitment systems must be embedded within robust organizational governance structures. Without appropriate governance mechanisms, algorithmic hiring technologies may expose organizations to fairness risks, reputational damage, and regulatory scrutiny. Addressing these challenges requires governance frameworks that integrate transparency, accountability, and human oversight into recruitment decision processes.
Furthermore, AI governance challenges may vary significantly across different institutional and cultural contexts. In emerging markets and multicultural environments, differences in regulatory frameworks, data availability, and organizational practices may influence how AI-driven recruitment systems are implemented and governed. These variations highlight the need for adaptable governance frameworks capable of addressing context-specific risks and ensuring legitimacy across diverse organizational settings.
5. Responsible AI Governance Framework
To address the governance challenges associated with AI-driven recruitment systems, this study proposes the AI Recruitment Governance Framework (ARGF), a conceptual governance model structured around three key dimensions: transparency, accountability, and human oversight.
The ARGF extends existing research on algorithmic management and responsible AI by integrating governance principles directly into organizational recruitment decision processes. This extension builds upon prior governance-oriented frameworks developed in regulated HR environments, where legitimacy constraints and institutional pressures shape AI deployment [14].
These governance principles are widely recognized in the responsible AI literature as essential mechanisms for ensuring that algorithmic systems operate in a fair, transparent, and socially responsible manner.
The governance architecture designed to operationalize the ARGF within organizational recruitment processes is illustrated in Figure 3.
To illustrate the practical application of the ARGF, consider a large financial institution implementing AI-driven recruitment tools for candidate screening. In such a context, transparency mechanisms may include explainable AI models that provide recruiters with insights into candidate scoring criteria. Accountability can be ensured by maintaining human decision authority over final hiring outcomes, supported by audit trails documenting algorithmic recommendations. Human oversight is operationalized through structured review processes, where hiring managers validate or override algorithmic outputs when necessary.
This illustrative example demonstrates how the ARGF can be applied in practice to balance efficiency gains with governance requirements in AI-supported recruitment environments.
Figure 3. AI Recruitment Governance Framework (ARGF) illustrating the interaction between AI recruitment systems, human oversight, and governance mechanisms supporting responsible hiring outcomes. Author’s elaboration.
Transparency represents a foundational requirement for responsible AI recruitment systems. Organizations must ensure that algorithmic recruitment tools provide understandable explanations regarding how candidate evaluations are generated. This may include the use of explainable AI techniques, documentation of algorithmic decision criteria, and communication mechanisms that inform candidates about the role of AI technologies in recruitment decisions. Increasing transparency can help organizations improve trust in algorithmic hiring processes and reduce concerns related to opaque decision-making systems.
Accountability constitutes a second critical dimension of responsible AI governance in recruitment. Even when AI systems assist in candidate screening or evaluation, organizations must maintain clear responsibility for hiring outcomes. Decision authority should remain with human managers who are accountable for final recruitment decisions. Establishing clear governance structures that define managerial responsibility for algorithmic decision processes is therefore essential for ensuring ethical and legally compliant recruitment practices [8].
Human oversight represents the third core governance dimension of responsible AI recruitment systems. Although AI technologies can significantly enhance decision support capabilities, they should not fully replace managerial judgment. Responsible AI governance frameworks emphasize the importance of maintaining meaningful human involvement in algorithmic decision processes, particularly in high-impact contexts such as employment decisions. Organizations should therefore implement mechanisms that allow human decision-makers to review algorithmic recommendations, challenge automated outcomes, and intervene when necessary.
In this context, different forms of human oversight may be distinguished, including “human-in-the-loop” approaches, where human decision-makers actively participate in each stage of the recruitment process, and “human-on-the-loop” models, where human supervision is exercised over algorithmic systems with the ability to intervene when necessary. The selection of the appropriate oversight model depends on the level of risk associated with the recruitment process and the regulatory context in which the organization operates.
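As a purely illustrative sketch (not part of the framework itself), the operational difference between the two oversight models can be expressed in a few lines of Python; all class names, function names, and the confidence threshold below are hypothetical choices for exposition:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Candidate:
    name: str
    ai_score: float  # hypothetical algorithmic suitability score in [0, 1]

def human_in_the_loop(candidates: List[Candidate],
                      reviewer: Callable[[Candidate], bool]) -> List[Candidate]:
    """Human-in-the-loop: a human reviewer confirms every
    algorithmic recommendation before it takes effect."""
    return [c for c in candidates if reviewer(c)]

def human_on_the_loop(candidates: List[Candidate],
                      reviewer: Callable[[Candidate], bool],
                      confidence_threshold: float = 0.8) -> List[Candidate]:
    """Human-on-the-loop: the system proceeds autonomously on
    high-confidence cases and escalates borderline ones to a human."""
    accepted = []
    for c in candidates:
        if c.ai_score >= confidence_threshold:
            accepted.append(c)      # automatic acceptance, no review
        elif reviewer(c):           # escalation to a human decision-maker
            accepted.append(c)
    return accepted
```

Under the first model every candidate passes through the human reviewer; under the second, human effort concentrates on low-confidence cases, mirroring the risk-based selection of oversight models described above.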
Together, these governance dimensions form the foundation of responsible AI recruitment practices. The key governance mechanisms that organizations can implement to support responsible AI deployment in recruitment processes are summarized in Table 1. These mechanisms include explainable AI models, clear documentation of algorithmic decision criteria, human-in-the-loop decision processes, and governance structures that ensure compliance with emerging regulatory frameworks such as the European Union Artificial Intelligence Act.
Table 1. Core governance dimensions and mechanisms of the AI Recruitment Governance Framework (ARGF). Author’s elaboration.
Governance Dimension | Key Risk Addressed | Governance Mechanisms | Organizational Responsibility |
Transparency | Lack of understanding of algorithmic decisions; limited explainability of candidate evaluations | Explainable AI models; documentation of algorithmic decision criteria; communication with candidates regarding AI use in recruitment | HR departments; AI development teams; compliance officers |
Accountability | Unclear responsibility for AI-supported hiring decisions | Human-in-the-loop decision processes; defined managerial responsibility for final hiring decisions; audit trails of algorithmic recommendations | HR managers; hiring committees; organizational leadership |
Human Oversight | Over-reliance on automated decision-making systems | Human review of algorithmic outputs; escalation procedures for contested decisions; periodic evaluation of AI recruitment tools | HR professionals; ethics committees; governance boards |
Regulatory Compliance | Misalignment with emerging AI regulations and ethical standards | AI governance policies; regular compliance audits; alignment with regulatory frameworks such as the EU Artificial Intelligence Act | Legal departments; compliance units; external auditors |
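To make one of the accountability mechanisms listed in Table 1 concrete, the following minimal Python sketch shows how an audit trail of algorithmic recommendations might be recorded. The record structure and field names are hypothetical illustrations, not a prescribed implementation:

```python
import json
from datetime import datetime, timezone

def log_recommendation(audit_trail: list, candidate_id: str,
                       ai_recommendation: str, final_decision: str,
                       decision_maker: str) -> dict:
    """Append one auditable record linking the algorithmic recommendation
    to the accountable human decision-maker and the final outcome."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "ai_recommendation": ai_recommendation,
        "final_decision": final_decision,
        "decision_maker": decision_maker,
        # An override flag makes human intervention visible to auditors.
        "overridden": ai_recommendation != final_decision,
    }
    audit_trail.append(record)
    return record

# Example: a hiring manager overrides an algorithmic rejection.
trail = []
entry = log_recommendation(trail, "C-1042", "reject",
                           "invite_to_interview", "hr_manager_07")
print(json.dumps(entry, indent=2))
```

Records of this kind support both the audit trails named under the accountability dimension and the compliance audits named under the regulatory dimension, since they document who was responsible for each AI-supported decision and whether the human deviated from the algorithm.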
By integrating transparency, accountability, and human oversight into recruitment decision processes, the ARGF enables organizations to develop hybrid governance models in which algorithmic systems support candidate evaluation while human decision-makers retain responsibility for final hiring outcomes. Such hybrid structures allow organizations to capture the efficiency gains offered by AI technologies while maintaining fairness, regulatory compliance, and organizational legitimacy in hiring practices.
The AI Recruitment Governance Framework (ARGF) may serve as both a reference model and an analytical lens for future empirical research, which could test the framework across industries, recruitment technologies, and organizational, institutional, and regulatory contexts.
6. Conclusion and Implications
Artificial intelligence is increasingly transforming recruitment processes across organizations. AI-driven recruitment systems enable firms to process large volumes of applicant data, automate candidate screening, and support hiring decisions with advanced predictive analytics. While these technologies offer significant efficiency advantages, they also introduce important governance challenges related to transparency, accountability, and fairness in hiring decisions.
This study examined the growing use of AI-driven recruitment systems through the lens of algorithmic management and responsible AI governance. Drawing on the literatures on AI-enabled recruitment, algorithmic management, and AI ethics, it developed a conceptual framework built on three key governance dimensions: transparency, accountability, and human oversight. These dimensions underpin governance mechanisms designed to ensure that algorithmic recruitment systems operate in a responsible and socially legitimate manner.
This study makes two contributions to the emerging literature on artificial intelligence in human resource management. First, it conceptualizes AI-driven recruitment as a form of algorithmic management in which algorithmic systems shape managerial decision processes within organizations. Second, it introduces the AI Recruitment Governance Framework (ARGF), a theoretical model that integrates principles from responsible AI research into organizational recruitment practices. By linking algorithmic management theory with responsible AI governance, the study provides a theoretical perspective on how organizations can manage the risks associated with algorithmic decision-making in hiring contexts.
Beyond its theoretical contributions, the study also provides practical implications for organizations adopting AI recruitment technologies. Organizations should implement governance mechanisms that ensure transparency in algorithmic decision processes, maintain clear managerial accountability for hiring outcomes, and preserve meaningful human oversight over algorithmic recommendations. Hybrid recruitment models, in which algorithmic screening is complemented by human decision-making, may represent a particularly effective approach to balancing efficiency gains with ethical responsibility and regulatory compliance.
As a conceptual study, the paper does not empirically validate the proposed framework; such validation represents an important direction for future research.
Finally, the study highlights several avenues for future research. Empirical studies are needed to examine how organizations implement governance mechanisms for AI-driven recruitment in practice and how these mechanisms influence recruitment outcomes, candidate trust, and organizational legitimacy. The ARGF may serve as a reference framework for comparative empirical studies investigating responsible AI governance in recruitment across sectors such as finance, healthcare, and public administration. Future research could also explore how different regulatory environments shape the governance of algorithmic hiring systems across industries and national contexts.
As artificial intelligence continues to reshape organizational decision-making processes, the development of effective governance frameworks will become increasingly important. By embedding transparency, accountability, and human oversight within AI-driven recruitment systems, organizations can leverage the benefits of algorithmic technologies while maintaining fairness, legitimacy, and trust in hiring practices.
These future research directions are summarized in Table 2.
Table 2. Future research directions derived from the AI Recruitment Governance Framework (ARGF), outlining key research opportunities for empirical investigation of AI governance in recruitment systems.
Research Area | Key Research Question | Suggested Method |
AI Governance in Recruitment | How do organizations implement governance mechanisms for AI-supported recruitment systems? | Case studies, qualitative interviews |
Transparency in Algorithmic Hiring | How does algorithmic transparency influence candidate trust in recruitment decisions? | Survey research, experimental studies |
Human Oversight in AI Recruitment | What level of human involvement improves decision quality in AI-assisted hiring processes? | Field experiments, organizational studies |
Regulatory Compliance | How do organizations align AI recruitment tools with emerging regulations such as the EU AI Act? | Policy analysis, comparative institutional research |
Cross-Industry Adoption | How do governance practices differ across sectors such as finance, healthcare, and public administration? | Comparative multi-industry studies |
Abbreviations
AI | Artificial Intelligence |
EU | European Union |
GDPR | General Data Protection Regulation |
HR | Human Resources |
HRM | Human Resource Management |
OECD | Organisation for Economic Co-operation and Development |
RBV | Resource-Based View |
TAM | Technology Acceptance Model |
P-AIHR | Public Artificial Intelligence Human Resources Governance Framework |
Acknowledgments
The author extends sincere appreciation to healthcare professionals, policy analysts, and academic colleagues who provided constructive feedback during the development of this framework. Their insights on regulatory alignment, workforce sustainability, and AI governance significantly contributed to strengthening the conceptual clarity and applied relevance of this study. The author also acknowledges the valuable academic discussions within the fields of human resource management, algorithmic governance, and responsible artificial intelligence, which helped refine the conceptual foundations of this research.
Author Contributions
Dawid Krystian Prestini: Conceptualization, Methodology, Formal Analysis, Investigation, Visualization, Writing – original draft, Writing – review & editing
Conflicts of Interest
The author declares no conflict of interest.
References
[1] European Commission (2024). Artificial Intelligence in Healthcare: Policy and Governance Perspectives. Brussels: European Commission. Available at: https://ec.europa.eu (Accessed: 31 March 2026).
[2] European Parliament and Council of the European Union (2024). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union, L series. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689 (Accessed: 31 March 2026).
[3] OECD (2023). OECD Framework for the Governance of Artificial Intelligence in Public Administration. Paris: Organisation for Economic Co-operation and Development. Available at: https://www.oecd.org (Accessed: 31 March 2026).
[4] Davenport, T. H. and Harris, J. G. (2017). Competing on Analytics: The New Science of Winning. Revised and updated edition. Boston: Harvard Business Review Press.
[5] Vrontis, D., Christofi, M., Pereira, V., Tarba, S., Makrides, A. and Trichina, E. (2022). Artificial intelligence, robotics, advanced technologies and human resource management: A systematic review. The International Journal of Human Resource Management, 33(6), 1237-1266. https://doi.org/10.1080/09585192.2020.1871398
[6] Mujtaba, D. F. and Mahapatra, N. R. (2024). Fairness in AI-driven recruitment: Challenges, metrics, and future directions. arXiv preprint. https://doi.org/10.48550/arXiv.2405.19699
[7] Kellogg, K. C., Valentine, M. A. and Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410. https://doi.org/10.5465/annals.2018.0174
[8] Lee, M. K., Kusbit, D., Metsky, E. and Dabbish, L. (2015). Working with machines: The impact of algorithmic and data-driven management on human workers. In: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI 2015). New York: ACM, pp. 1603-1612. https://doi.org/10.1145/2702123.2702548
[9] Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P. and Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689-707. https://doi.org/10.1007/s11023-018-9482-5
[10] Jobin, A., Ienca, M. and Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399. https://doi.org/10.1038/s42256-019-0088-2
[11] World Health Organization (2023). Ethics and Governance of Artificial Intelligence for Health. Geneva: World Health Organization. Available at: https://www.who.int/publications/i/item/9789240029200 (Accessed: 31 March 2026).
[12] Jaakkola, E. (2020). Designing conceptual articles: Four approaches. AMS Review, 10(1-2), 18-26. https://doi.org/10.1007/s13162-020-00161-0
[13] Prestini, D. K. (2026). AI Adoption and Recruitment Efficiency in European Banking: A Mixed-Method Analysis. Science Discovery Artificial Intelligence, 1(1), 1-6. https://doi.org/10.11648/j.sdai.20260101.11
[14] Prestini, D. K. (2026). AI-Enabled Workforce Governance in Public Healthcare: An Applied Legitimacy-Based Model for Polish Hospital HR Systems. Science Discovery Artificial Intelligence, 1(2), 64-68. https://doi.org/10.11648/j.sdai.20260102.11
[15] Dwivedi, Y. K., Hughes, L., Ismagilova, E., et al. (2023). Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research. International Journal of Information Management, 70, 102656. https://doi.org/10.1016/j.ijinfomgt.2023.102656
[16] Raisch, S. and Krakowski, S. (2021). Artificial intelligence and management: The automation-augmentation paradox. Academy of Management Review, 46(1), 192-210. https://doi.org/10.5465/amr.2018.0072
[17] Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K. and Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 1-35. https://doi.org/10.1145/3457607
[18] European Commission (2023). Ethics guidelines for trustworthy AI. https://doi.org/10.2759/346720
Cite This Article
Prestini, D. K. (2026). Algorithmic Management in AI-Driven Recruitment: The AI Recruitment Governance Framework (ARGF) for Responsible AI Governance. Science Discovery Artificial Intelligence, 1(2), 69-77. https://doi.org/10.11648/j.sdai.20260102.12