Within the context of accelerating digital transformation, the emergence of generative artificial intelligence (AI) has redefined the epistemic and ethical foundations of the humanities. The algorithmic turn challenges the traditional role of humanistic inquiry, raising questions about interpretation, creativity, and meaning in a technologically mediated world. This article explores how posthuman pedagogy can reclaim humanistic agency in the age of algorithms. Drawing on theoretical frameworks from posthumanism (Braidotti, Haraway) and metamodernism (Vermeulen & van den Akker), it argues that AI does not signal the decline of the humanities but their evolution toward a relational and ethically responsive practice. The study identifies three interrelated competencies – algorithmic literacy, digital empathy, and critical co-agency – as essential pillars for a transformative humanities curriculum. These competencies enable learners to engage critically and ethically with intelligent systems, interpret algorithmic processes as cultural texts, and act responsibly within distributed cognitive environments. Beyond its theoretical contribution, the article also addresses significant ethical and socio-cultural concerns, including algorithmic bias, epistemic injustice, and the need for reflexive digital ethics in higher education. It advocates for a metamodern orientation that balances skepticism with hope and critique with reconstruction. The conclusion emphasizes that the humanities must not resist technological mediation but reinterpret it as an opportunity for ethical imagination. Posthuman pedagogy thus envisions education as a co-creative process between human and algorithmic intelligences, grounded in empathy, reflection, and moral responsibility.
Keywords: posthumanism, metamodernism, digital humanities, AI ethics, algorithmic literacy, posthuman pedagogy
____________________
Revista de Pedagogie Digitala – ISSN 3008-2013
2025, Vol. 4, Nr. 1, pp. 73-79
https://doi.org/10.61071/RPD.2571
____________________
Introduction
The accelerating integration of generative artificial intelligence into academic and creative practice has prompted renewed debates about the relevance of the humanities in an age of automation. As algorithms increasingly simulate interpretation, judgment, and creativity, the humanities appear caught between obsolescence and transformation. Yet, rather than viewing this shift as a crisis of displacement, this article argues that it signals a pivotal moment for rethinking the humanities as relational and co-creative enterprises. The central question animating this study is therefore: How can the humanities reclaim humanistic agency through a posthuman pedagogy attuned to the algorithmic condition?
This inquiry is grounded in two complementary frameworks. First, posthumanism (Braidotti, 2019; Haraway, 2016) challenges anthropocentrism by recognizing cognition as distributed across human and nonhuman systems. Second, metamodernism (Vermeulen & van den Akker, 2010) articulates an affective stance of oscillation – between irony and sincerity, scepticism and hope – that mirrors the humanities’ current negotiation between technological displacement and ethical renewal. Together, these paradigms offer the conceptual basis for a posthuman pedagogy – a mode of teaching and inquiry that acknowledges technological co-agency while preserving ethical reflexivity.
By reframing interpretation as a process of collaborative cognition, posthuman pedagogy moves beyond the opposition between humanism and automation. It proposes that learning in the algorithmic age involves not mastery over technology but attunement to entanglement: the capacity to think, feel, and act responsibly within human-machine ecologies. The humanities, traditionally defined by critical interpretation and ethical reflection, thus acquire a renewed mission – to mediate understanding across species of intelligence.
This article develops this argument through four stages. The first situates the humanities at the algorithmic threshold, outlining their historical and epistemic redefinition. The second advances a theoretical framework for posthuman pedagogy as a response to algorithmic rationality. The third explores how competencies such as algorithmic literacy and digital empathy can operationalize this pedagogy in higher education. The final section reinterprets metamodernism as an ethical horizon through which the humanities can transform technological disruption into cognitive and moral renewal.
Ultimately, the article contends that reclaiming humanistic agency in the algorithmic era requires a metamodern ethics of relation – an orientation that balances critique with reconstruction, irony with sincerity, and autonomy with co-agency. Within this framework, the humanities remain indispensable not as guardians of a vanishing humanism, but as architects of a transformative and ethically conscious posthuman future.
1. Theoretical Framework: From Humanism to Posthuman Pedagogy
The humanities have historically been anchored in a humanist paradigm that placed reason, autonomy, and moral self-determination at the centre of knowledge. From the Renaissance ideal of Bildung to the Enlightenment’s celebration of rational subjectivity, “the human” functioned as the epistemic measure of value and truth. Yet twentieth-century theory exposed the limitations of this framework. Poststructuralist critique dismantled the universal subject, while cybernetics and information theory blurred the boundaries between human and machine cognition. Foucault’s (1970) pronouncement of the “death of man” captured this epistemic rupture: humanism’s sovereign subject was revealed to be a historical construct, not a metaphysical constant.
Contemporary posthumanism (Braidotti, 2019; Haraway, 2016) extends this critique by reconceiving subjectivity as relational, distributed, and co-constituted with technological, ecological, and material systems. Rather than opposing the human and the machine, posthumanism understands cognition as an entangled process of co-agency among biological and algorithmic actors. In this view, artificial intelligence is not an external threat but an internal extension of human cultural evolution – another iteration in a long history of cognitive prostheses that began with writing and computation (Hayles, 2017).
For the humanities, this recognition demands a shift from representational to relational epistemologies. Knowledge is no longer conceived as the accurate depiction of a separate world but as participation in networks of meaning that include nonhuman intelligences. Such an orientation transforms interpretation into an act of mediation rather than mastery. The role of the scholar or educator becomes that of translator between species of cognition – negotiating meaning, bias, and affect across algorithmic interfaces.
This relational turn also carries an ethical imperative. By dissolving the hierarchy that privileges human over nonhuman, posthumanism invites what Haraway calls response-ability – a capacity for situated and accountable engagement within shared ecologies of intelligence. It urges the humanities to cultivate attentiveness to how technological systems reproduce inequality or extend empathy. Within this frame, education is not the transmission of humanist ideals but the practice of learning with rather than about intelligent systems.
The concept of posthuman pedagogy thus emerges at the intersection of posthuman theory and educational practice. It redefines learning as a process of co-creation between human and algorithmic agents, guided by ethical reflexivity rather than instrumental control. Instead of resisting technological mediation, posthuman pedagogy seeks to understand how it reconfigures interpretation, creativity, and responsibility. It aligns with what Braidotti (2019) terms affirmative ethics: an ethic that transforms critique into creative adaptation.
Finally, the affective tone of this reconfiguration is best captured by metamodernism (Vermeulen & van den Akker, 2010) – a sensibility that oscillates between irony and sincerity, scepticism and hope. Whereas postmodernism dismantled grand narratives, metamodernism reconstructs meaning as if sincerity were still possible. This oscillatory stance offers the humanities an epistemic strategy for surviving technological acceleration without succumbing to nostalgia or nihilism. Within a metamodern horizon, the humanities can reclaim agency not by reaffirming human exceptionalism but by embracing ethical interdependence as their defining principle.
2. Analysis and Discussion: Algorithmic Rationalities and Posthuman Learning
The rapid diffusion of generative AI technologies – such as ChatGPT, Gemini, and DALL·E – has transformed not only creative and analytical practices but the epistemic foundations of interpretation itself. Algorithmic systems now produce text, sound, and images that simulate meaning without consciousness. This phenomenon has generated anxiety within humanistic disciplines: if interpretation and creativity can be automated, what remains distinctively human about the humanities?
To address this question, we must first understand the epistemology of algorithmic rationality. As Beer (2017) and Pasquale (2015) observe, algorithmic logic privileges prediction over understanding. It interprets the world probabilistically, transforming uncertainty into calculable risk. While such computation enhances efficiency, it also displaces the reflective judgment that traditionally defined humanistic reasoning. Knowledge becomes actionable but opaque – a process of correlation rather than comprehension.
In this sense, AI’s challenge to the humanities is not merely technological but hermeneutic. It exposes the tension between knowing through numbers and knowing through meaning. Whereas Enlightenment humanism valued critical reflection, algorithmic rationality instrumentalizes cognition, converting interpretation into optimization. The humanities’ response, therefore, must not be defensive but reconstructive – to reassert interpretation as a relational and ethical act in which understanding transcends mere prediction.
A posthuman pedagogy offers precisely this reconstructive approach. It does not reject algorithmic mediation; rather, it situates learning within it, cultivating reflective awareness of how technology shapes meaning. Learning in this framework is not mastery over tools but ethical co-creation with them. Students are invited to examine the processes by which generative models produce outputs – what they include, what they omit, and whose voices they amplify or erase. In doing so, interpretation becomes a practice of transparency and moral inquiry.
This pedagogical stance foregrounds three interdependent competencies that together form the architecture of posthuman learning: algorithmic literacy, digital empathy, and critical co-agency.
- Algorithmic literacy refers to understanding how algorithmic systems structure cognition and culture. It extends beyond technical fluency to encompass epistemic awareness – the capacity to read algorithms as cultural texts that embed ideological assumptions. As Wang (2025) suggests, algorithmic literacy means knowing not only how algorithms function but why they function as they do, and for whom. Within the classroom, this literacy is fostered when students analyse generative outputs critically, comparing algorithmic interpretations to human readings of the same material.
- Digital empathy adds an affective dimension. It encourages learners to reflect on the emotional and ethical implications of their interactions with AI – why, for instance, anthropomorphic chatbots evoke comfort or discomfort, and how personalization technologies shape affective experience. Through reflective dialogue exercises, students cultivate empathy not by projecting feelings onto machines but by recognizing how algorithms mediate their own affective responses.
- Critical co-agency integrates cognition and ethics. It teaches learners to act responsibly within human–machine assemblages, acknowledging both dependency and difference. Critical co-agency rejects naïve celebration of technological fusion and insists on epistemic friction – the awareness that humans and algorithms think differently. This friction is not a flaw but a pedagogical resource, sustaining the interpretive distance necessary for ethical reflection.
Together, these competencies reorient the humanities toward a relational epistemology. Knowledge is no longer a solitary human achievement but a distributed process involving data, code, emotion, and ethics. The educator becomes a mediator who designs spaces of informed entanglement – inviting students to experiment with AI while maintaining critical distance. Assignments might include reflective journals on AI-assisted writing, comparative analysis of algorithmic versus human aesthetic choices, or ethical audits of AI-generated content.
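An ethical audit of the kind mentioned above can begin very modestly. The sketch below is purely illustrative and not part of the article's proposed curriculum: it counts gendered pronouns in two short texts that stand in for model outputs a class might collect when prompting a generative system about different professions. The sample sentences and the word list are hypothetical stand-ins, not validated data or lexicons; the point is to give students a concrete, inspectable starting place for discussing what a model's outputs include, omit, or amplify.

```python
from collections import Counter
import re

# Illustrative word list for a classroom bias-audit exercise.
# This is a toy vocabulary, not a validated lexicon.
GENDERED_TERMS = {"she", "her", "hers", "he", "him", "his"}

def term_profile(text: str, vocabulary: set[str]) -> Counter:
    """Count how often terms from `vocabulary` occur in `text` (case-insensitive)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(t for t in tokens if t in vocabulary)

# Hypothetical stand-ins for AI-generated outputs collected by students.
nurse_sample = "She prepared the ward before her shift. He assisted her briefly."
engineer_sample = "He reviewed his design. He asked his colleague to check the load."

# Comparing the two profiles gives the class a concrete artifact to discuss:
# which associations does the (hypothetical) model reproduce, and why?
print("nurse:", dict(term_profile(nurse_sample, GENDERED_TERMS)))
print("engineer:", dict(term_profile(engineer_sample, GENDERED_TERMS)))
```

In a seminar setting, the numbers themselves matter less than the discussion they scaffold: students move from an intuition ("the outputs feel stereotyped") to an inspectable observation, which is precisely the shift from opacity to transparency that algorithmic literacy aims at.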
By embedding these competencies in higher education, posthuman pedagogy transforms the humanities into laboratories of relational understanding. Students learn that meaning is not static but negotiated; interpretation is not neutral but ethical; and intelligence, human or artificial, is always situated within contexts of power and responsibility.
Ultimately, the humanities’ engagement with AI reveals not their obsolescence but their renewed necessity. Only disciplines grounded in reflection and empathy can provide the moral vocabulary needed to navigate algorithmic culture. In this sense, posthuman learning becomes both a cognitive and ethical project – one that teaches not merely how to think about algorithms, but how to think with them responsibly.
3. Ethics and Implications: From Digital Empathy to Cognitive Justice
The algorithmic age has not only transformed how knowledge is produced but also how it is owned, circulated, and ethically negotiated. Generative AI systems – trained on vast datasets of human cultural production – reveal the deep entanglement of epistemic and moral questions: who is represented, who is excluded, and how responsibility is distributed in hybrid systems of cognition. For the humanities, this situation constitutes both a challenge and a calling – to translate technological awareness into ethical reflexivity and civic imagination.
3.1. Digital Empathy: Rehumanizing the Technological Interface
Digital empathy denotes the capacity to engage affectively and ethically with algorithmic others, recognizing that our emotions and judgments are continually mediated by data-driven systems. It does not imply that machines possess feelings; rather, it asks how human affect is shaped by algorithmic design. As Giannakos et al. (2024) note, interactions with generative AI reshape communication norms, moral intuitions, and notions of care.
Teaching digital empathy requires reintroducing the emotional dimension of learning into technologically mediated spaces. Students can be invited to reflect on their affective responses to AI-generated dialogue, images, or recommendations – examining how trust, intimacy, or unease emerge from machine-human interaction. In this way, emotion becomes a site of critical inquiry rather than manipulation. Empathy thus transforms from sentiment into response-ability (Haraway, 2016): a situated ethical awareness of entanglement within algorithmic environments.
3.2. Critical Co-agency: Responsibility within Distributed Intelligence
While digital empathy restores affective awareness, critical co-agency provides the ethical structure for acting within distributed cognition. It begins from the recognition that neither humans nor algorithms possess absolute autonomy; both participate in shared, asymmetrical processes of meaning-making. The ethical question, therefore, shifts from “Who is responsible?” to “How is responsibility enacted across human-machine systems?”
Critical co-agency cultivates epistemic friction – the capacity to collaborate with algorithms while maintaining critical distance. This friction prevents the absorption of human judgment into automated logic. In classroom practice, it can be operationalized through reflective assignments that document human-AI co-authorship, or ethical simulations that ask students to evaluate algorithmic bias and its cultural effects. These exercises make visible the moral choices embedded in design and use, turning ethical theory into experiential practice.
In this respect, posthuman pedagogy transcends compliance-based ethics (focused on plagiarism or data protection) and moves toward reflexive ethics – a continuous awareness of how one’s cognitive actions shape and are shaped by technological infrastructures. Reflexivity, rather than rule-following, becomes the mark of moral literacy in the algorithmic university.
3.3. Epistemic Justice: Decolonizing Algorithmic Knowledge
The ethical landscape of AI also exposes forms of epistemic injustice (Fricker, 2007; Medina, 2021). Algorithms trained predominantly on Western, English-language data reproduce testimonial and hermeneutic exclusions, marginalizing noncanonical epistemologies and minority cultures. As Mhlambi (2024) argues, decolonial AI ethics must challenge these asymmetries by questioning whose knowledge is encoded and whose experiences are rendered invisible.
Within a posthuman pedagogy, epistemic justice translates into cognitive plurality: the deliberate inclusion of diverse cultural narratives and linguistic corpora in AI literacy activities. Students can, for instance, analyse how generative models reproduce colonial tropes or compare algorithmic representations of texts across different languages. Such exercises develop what Braidotti (2019) calls a transversal ethics – thinking across differences without erasing them.
By linking digital empathy to critical co-agency and epistemic justice, the humanities can reclaim their ethical vocation in the digital sphere. They become not custodians of tradition but mediators of plurality, cultivating both sensitivity to cultural difference and accountability within technological entanglement.
4. Toward Cognitive and Ethical Citizenship
Ultimately, the ethical goal of posthuman pedagogy is to form what might be termed cognitive citizens – learners capable of navigating algorithmic systems with interpretive awareness, moral imagination, and civic responsibility. These citizens understand that algorithms are not neutral tools but cultural agents whose operations reflect and reproduce power. They also recognize their capacity to intervene – through critique, creativity, and care – in reshaping those systems toward greater equity and empathy.
In this sense, the humanities in the AI era perform a renewed ethical function: they teach not only how to interpret the world but how to inhabit it responsibly in the company of nonhuman intelligences. Digital empathy humanizes technology; critical co-agency enacts responsibility; and epistemic justice ensures inclusivity. Together, they define an emergent humanism – one that is relational, reflexive, and metamodern in spirit.
Conclusion: Metamodern Reenchantment and the Future of the Humanities
The algorithmic turn confronts the humanities with one of the most profound transformations in their history: the redefinition of interpretation, agency, and meaning in a world co-authored by humans and machines. Yet, as this article has argued, this transformation does not signify the erosion of the humanities but their renewal. When viewed through the lenses of posthuman pedagogy and metamodern ethics, the algorithmic condition becomes a catalyst for reimagining humanistic education as a relational and ethically attuned enterprise.
At the heart of this renewal lies metamodernism – a sensibility that oscillates between critique and reconstruction, irony and sincerity, despair and hope (Vermeulen & van den Akker, 2010). In this oscillation, the humanities discover a productive tension that sustains their vitality. Metamodern thought rejects both technophilic utopianism and nostalgic lament; it inhabits the space between, translating uncertainty into creative possibility. This affective stance enables educators and scholars to approach technological mediation not as a threat to meaning but as an opportunity for ethical imagination.
The posthuman pedagogy advanced here envisions learning as co-creation across human and algorithmic intelligences. Its triadic competencies – algorithmic literacy, digital empathy, and critical co-agency – form the cognitive and moral architecture of a renewed humanities curriculum. Together, they transform classrooms into laboratories of relational cognition, where knowledge is no longer transmitted from expert to novice but co-produced through reflective interaction. In such environments, students learn not only to use AI critically but to interrogate its cultural assumptions and ethical implications.
This pedagogical model reclaims what may be called a transformative humanism: an ethos grounded in care, adaptability, and moral imagination. It preserves the humanities’ interpretive depth while extending it into the algorithmic sphere. Within this framework, to educate is to cultivate the ability to think and feel with other intelligences – biological, artificial, and ecological. The human thus persists, not as a measure of dominance, but as a practice of responsibility within an expanding web of cognition.
Metamodern ethics reframes this responsibility as a form of ethical imagination – the courage to act as if meaning, truth, and empathy still matter in a world increasingly mediated by code. Such imagination is neither naïve nor sentimental; it is a disciplined act of faith in the possibility of renewal. When educators and scholars approach AI with metamodern openness, they transform technological disruption into a site of reflection, creativity, and moral growth.
Ultimately, reclaiming humanistic agency in the age of algorithms means redefining what it means to be human: not an autonomous subject, but a participant in networks of shared intelligence. The humanities endure because they teach the essential lesson that no system – however advanced – can replace the ethical and imaginative capacities through which humanity understands itself and its world. Their mission is not preservation but transformation: to ensure that as intelligence becomes more distributed, wisdom becomes more collective.
In embracing this metamodern vocation, the humanities do not fade in the glow of artificial intelligence – they illuminate it.
References
Ahmed, S. (2014). The cultural politics of emotion (2nd ed.). Edinburgh University Press. https://pratiquesdhospitalite.com/wp-content/uploads/2019/03/245435211-sara-ahmed-the-cultural-politics-of-emotion.pdf
Beer, D. (2017). The data gaze: Capitalism, power and perception. SAGE. https://doi.org/10.4135/9781526463210
Biesta, G. (2020). World-centred education: A view for the present. Routledge. https://doi.org/10.4324/9781003098331
Braidotti, R. (2019). Posthuman knowledge. Polity Press.
Burdick, A., Drucker, J., Lunenfeld, P., Presner, T., & Schnapp, J. (2022). Digital_Humanities. MIT Press. https://archive.org/details/DigitalHumanities_201701
Cadman, S. (2025). Humanism strikes back? A posthumanist reckoning with “AI ethics.” AI & Society. https://link.springer.com/article/10.1007/s00146-025-02339-1
de Mul, J. (2023). Teaching the paradox: Education in the posthuman era. Educational Philosophy and Theory, 55(5), 463–479. https://doi.org/10.1080/00131857.2022.2047927
Epstein, M. (2012). The transformative humanities: A manifesto. Bloomsbury. https://www.bloomsburycollections.com/monograph?docid=b-9781472542885
Floridi, L. (2024). The ethics of the infosphere: Philosophy in the age of information. Oxford University Press.
Foucault, M. (1970). The order of things: An archaeology of the human sciences. Vintage. https://monoskop.org/images/a/a2/Foucault_Michel_The_Order_of_Things_1994.pdf
Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford University Press.
Gamez, P. (2024). Posthumanism meets surveillance capitalism. Palgrave Macmillan. https://doi.org/10.1007/978-3-031-90770-8
Georgopoulou, M. S. (2025). Approaches to digital-humanities pedagogy: A systematic review. Digital Scholarship in the Humanities, 40(1), 121–140. https://academic.oup.com/dsh/article/40/1/121/7863444
Giannakos, M., et al. (2024). The promise and challenges of generative AI in education. International Journal of Information and Learning Technology. https://discovery.ucl.ac.uk/id/eprint/10199540/
Haraway, D. J. (2016). Staying with the trouble: Making kin in the Chthulucene. Duke University Press. https://doi.org/10.1215/9780822373780
Hayles, N. K. (2017). Unthought: The power of the cognitive nonconscious. University of Chicago Press. https://ageingcompanions.constantvzw.org/books/Unthought_N._Katherine_Hayles.pdf
Liu, J., Wang, Z., Xie, J., & Pei, L. (2024). From ChatGPT, DALL-E 3 to Sora: How generative AI has changed digital-humanities research and services. arXiv Preprint. https://arxiv.org/pdf/2404.18518
Medina, J. (2021). The epistemology of resistance: Gender and racial oppression, epistemic injustice, and resistant imaginations (2nd ed.). Oxford University Press. https://global.oup.com/academic/product/the-epistemology-of-resistance-9780199929023
Mhlambi, S. (2024). Decolonial AI ethics and global data justice. AI & Society. https://doi.org/10.1007/s00146-024-01684-7
Nath, R. (2023). From posthumanism to ethics of artificial intelligence. AI & Society, 38(4), 185–196. https://doi.org/10.1007/s00146-021-01274-1
Nayar, P. K. (2014). Posthumanism. Polity Press.
Nayar, P. K. (2022). Posthuman pedagogy: Ethics and relational learning. Pedagogy, Culture & Society, 30(4), 557–573. https://doi.org/10.1080/14681366.2020.1825404
Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press. https://tetrazolelover.at.ua/Frank_Pasquale-The_Black_Box_Society-The_Secret_Al.pdf
Vermeulen, T. (2023). Metamodernism and the future of the humanities. Cultural Politics, 19(2), 145–161. https://doi.org/10.1215/17432197-10526172
Vermeulen, T., & van den Akker, R. (2010). Notes on metamodernism. Journal of Aesthetics & Culture, 2(1), 1–14. https://doi.org/10.3402/jac.v2i0.5677
Wang, Z. (2025). A posthumanist approach to AI literacy. Journal of Writing Studies. https://www.sciencedirect.com/science/article/pii/S8755461525000209
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.
____________________
Acknowledgements
During the preparation of this study, the author used ChatGPT and Claude 4.5 to retrieve additional sources and to support the coherence and alignment of the information included. All AI-generated suggestions were independently selected, verified, and revised by the author. The author assumes full responsibility for the final content, interpretations, and academic quality of this publication.
Author:
Cristina-Georgiana Voicu
voicucristina2004@yahoo.fr
Titu Maiorescu Secondary School (Iași, Romania)
https://orcid.org/0000-0001-9299-6551
Author Biography:
Cristina-Georgiana Voicu holds a PhD in Philology and teaches English at Titu Maiorescu Secondary School in Iași. She is also a researcher in cognitive science with expertise in AI-enhanced learning, inclusive instructional design, and the development of digital skills in pre-university education. Her research interests also include responsible AI integration, digital ethics, and the design of learning environments that support equity, accessibility, and learner agency. She has published in national and international journals and contributes to various educational projects, including Erasmus+ initiatives focused on digital transformation and pedagogical innovation.
____________________
Received: 2.10.2025. Accepted and published: 17.11.2025
© Cristina-Georgiana Voicu, 2025. Published by the Institute for Education (Bucharest). This open access article is distributed under the terms of the Creative Commons Attribution Licence CC BY, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited:
Citation:
Voicu, C.-G. (2025). Reclaiming Humanistic Agency in the Age of Algorithms: Toward a Posthuman Pedagogy of the Humanities. Revista de Pedagogie Digitala, 4(1), 73-79. Bucharest: Institute for Education. https://doi.org/10.61071/RPD.2571