Call for Proposals: Special issue on Postplagiarism and Generativism: Human-AI Hybrid Approaches to Ethical Teaching, Learning, and Assessment

March 17, 2026

Special Issue Call for Papers

Postplagiarism and Generativism: Human-AI Hybrid Approaches to Ethical Teaching, Learning, and Assessment

For publication in the Journal of University Teaching and Learning Practice

Guest editors

Background

Every new technology brings with it societal and moral panic (Orben, 2020). When the Internet first became popular, concerns about plagiarism increased. Even though there is scant empirical evidence that the Internet was actually responsible for increases in rates of plagiarism, the perception that new technology resulted in more academic cheating persisted (Panning Davies & Howard, 2016).

Some plagiarism scholars have been emphatic that the majority of student plagiarism cases are not an intent to deceive, but rather a lack of academic literacy and poor academic practice, and have even advocated for disposing of plagiarism in academic misconduct policies in favour of increased student support (Howard, 1992; Jamieson & Howard, 2021). The idea that plagiarism could be decoupled from academic misconduct seems somewhat unlikely, but by the 2020s it was obvious to some that generative artificial intelligence (GenAI) would have an impact on writing, and by extension, on plagiarism (Mindzak & Eaton, 2021).

In response to these technological shifts, various frameworks have emerged to conceptualize academic integrity in the GenAI era. The postplagiarism framework, first introduced by Eaton (2021, 2023) and since discussed by scholars worldwide (Bali, 2023; Bagenal, 2024; Kenny, 2024), offers one approach. Other perspectives, such as Generativism (Pratschke, 2023), AI Literacy frameworks (Ng et al., 2021; Pretorius & Cahusac de Caux, 2024), and UNESCO’s Guidance for Generative AI in Education (2023), provide complementary or alternative viewpoints on similar phenomena.

Postplagiarism is based on six tenets (Eaton, 2023): (1) human-AI hybrid writing will become the norm; (2) creativity can be enhanced by AI; (3) AI can help to overcome language barriers; (4) we can outsource control of our writing to AI, but we do not outsource responsibility for what is written; (5) attribution remains important; and (6) historical definitions of plagiarism may require rethinking.

Empirical testing of these and related frameworks has shown differing levels of acceptance and application across educational contexts (Kumar, 2025).

Equity, Diversity, Inclusion, and Accessibility in a Postplagiarism Age

As higher education institutions aim to promote social justice through equity, diversity, and inclusion (EDI), GenAI's potential to either break down or reinforce barriers related to linguistic, cultural, socioeconomic, and ability differences requires critical examination.

Assessment practices should be designed proactively to enable all students to demonstrate their learning without being unfairly disadvantaged by their personal characteristics or circumstances (Tai et al., 2022). Similarly, McDermott (2024) highlights the importance of considering accessibility, equity, and inclusion in assessment and academic integrity.

GenAI offers opportunities to enhance equity by providing personalized support, overcoming language barriers, and assisting learners with diverse needs. However, without careful implementation, it may exacerbate existing inequities through unequal access to technology, algorithmic biases, or assessment designs that privilege certain ways of knowing and communicating.

In this special issue, we propose to examine the broader question: “How are pedagogies, learning, and teaching approaches evolving in response to GenAI, and what frameworks best support ethical academic practice in a postplagiarism landscape?”

We invite researchers and practitioners to submit their original research papers exploring the transformation of teaching, learning, and assessment in a GenAI age. We welcome both theoretical and empirical contributions, including positions that may present contrasting viewpoints. Potential topics of interest include, but are not limited to:

  • New developments in postplagiarism, generativism, and other emerging frameworks for understanding academic integrity in the GenAI era
  • Empirical studies testing these frameworks in different contexts and disciplines
  • The use of these frameworks to design or reform academic misconduct policies and procedures
  • The relationship between GenAI, academic literacies, and related competencies (e.g., digital literacy, information literacy)
  • Pedagogical approaches that embrace GenAI while maintaining academic integrity
  • Case studies of successful integration of GenAI into teaching, learning, and assessment
  • Critical perspectives on the limitations or challenges of current approaches to GenAI in education
  • Position papers presenting new or alternative frameworks for understanding GenAI in teaching and learning

We particularly encourage submissions that engage in dialogue with existing frameworks, offering either supportive evidence or critical alternatives. Our goal is to foster a robust debate about the future of teaching and learning in a GenAI (and even a post-GenAI) world.

We welcome submissions from both established researchers and early-career scholars from diverse academic and cultural backgrounds. All submissions will be peer-reviewed by an international panel of experts. Accepted papers will be published in a special issue of the Journal of University Teaching and Learning Practice.

Types of publications accepted into this Special Issue

The types of publications that are eligible for acceptance into this Special Issue include:

  • Research papers
  • Review articles (e.g., systematic review or meta-analysis)
  • Case studies and evidence-based good practice examples

Developing a high-quality proposal

We recommend the creation of a single document in Word (.doc or .docx) format that contains the following:

  • Proposed article title
  • Proposed authors' names, affiliations, and ORCID iDs
  • A clear evidence-based rationale for the line of inquiry proposed
  • Research question(s)
  • Proposed method (for both theoretical and empirical manuscripts)
  • Practice-based implications of the proposed research

The word limit for the proposal is 250 words (not including references); the proposal is designed to give the Editorial Team a sense of the rigour of the proposed manuscript and the possible implications of the research. The Editorial Team may return with an invitation to combine similar manuscripts. Acceptance of proposals does not guarantee acceptance of final manuscripts.

Timeline

  • Proposals due – April 30, 2026
  • Proposal acceptance notifications: May 14, 2026
  • Full articles due: August 31, 2026

Submit your abstract via this online form: https://forms.gle/6sKjc2jkKGWCtGgw7

For further information contact Professor Sarah Elaine Eaton, University of Calgary.

References

Bagenal, J. (2024). Generative artificial intelligence and scientific publishing: Urgent questions, difficult answers. The Lancet, 403(10432), 1118–1120. https://doi.org/10.1016/S0140-6736(24)00416-1

Bali, M. (2023, March 3). Are we approaching a postplagiarism era? https://blog.mahabali.me/educational-technology-2/are-we-approaching-a-postplagiarism-era/

Eaton, S. E. (2021). Plagiarism in higher education: Tackling tough topics in academic integrity. Bloomsbury.

Eaton, S. E. (2023). Postplagiarism: Transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. International Journal for Educational Integrity, 19(1), 1–10. https://doi.org/10.1007/s40979-023-00144-1

Howard, R. M. (1992). A plagiarism pentimento. Journal of Teaching Writing, 11(2), 233–245.

Orben, A. (2020). The Sisyphean cycle of technology panics. Perspectives on Psychological Science, 15(5), 1143–1157. https://doi.org/10.1177/1745691620919372


How AI Improved the Accessibility of my Slide Presentation with GenAI

February 17, 2026

I used Claude to help me improve the accessibility of a slide deck for an upcoming presentation. I uploaded the .pptx file and also uploaded a .pdf with instructions about how to make the slide deck compliant with accessibility standards.

I was not hopeful.

I asked Claude to revise the slide deck and provide an updated .pptx file that I could download. It did not work perfectly, and some of the AltText was lost. So, I asked Claude to provide the AltText for each slide and a detailed explanation of the changes. The result allowed me to make a few minor edits to the slide deck myself. The slides are now compliant with the organizational standards for a group I’ll be presenting to next week.

Ensuring slides are accessible has been an intimidating task for me in the past. I have always been afraid of “getting it wrong”. I would spend hours trying to figure out every detail (and things still would not be perfect).

In the end, I was satisfied with the results. Using AI for this has helped me to improve both my competence and confidence. The slides still may not be perfect, but they are better than they were… and better than I could have done on my own.

Have you tried using GenAI to help you improve the accessibility of your documents? If yes, what tips do you have?

______________

Share this post: How AI Improved the Accessibility of my Slide Presentation with GenAI – https://drsaraheaton.com/2026/02/17/how-ai-improved-my-presentations-accessibility-with-genai/

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.


Stop wasting my time! AI Agents Infiltrate Scholarly Publishing

February 6, 2026

As the Editor-in-Chief of the International Journal for Educational Integrity, I have witnessed (and become super frustrated with) threats to academic publishing and research integrity from GenAI. Don’t get me wrong, I am not opposed to AI, but I have been clear in my research and writing that technology can be used in good and helpful ways or in ways that are unethical and inappropriate. Recently, our editorial office received a manuscript with the file name ‘Blinded manuscript generated by artificial intelligence.’

My reaction was, “Are you kidding me?! Well, that’s bold!” Although the honesty of the title may be a rarity, the submission itself is symptomatic of a burgeoning crisis in academic publishing: the rise of ‘AI slop.’ Since the proliferation of large language models (LLMs), we have seen a dramatic increase in submissions. Now, I’m pretty sure that a portion of the manuscripts we are receiving are written entirely by AI agents or bots, sending submissions on behalf of authors.

ChatGPT-generated image. A puppet seated at a desk in an office, holding a printed document titled “Blinded manuscript generated by artificial intelligence.” The desk is covered with papers, a pair of glasses, a pen, and a coffee mug, with bookshelves and a bulletin board visible in the background.

As a journal editor, let me be clear: the volume of manuscripts you send out does not equate to their value to the readership. It is not that I oppose the use of AI wholesale, but I do object to manuscripts prepared and sent by bots, with no human involvement in the process. If a manuscript does not bring value to our readers, it gets an immediate desk rejection, and for good reason.

The Problem with AI Slop in Research

Academic journals exist to advance the frontiers of human knowledge. A manuscript is expected to contribute new and original findings to scholarship and science. AI-generated papers, by their very nature, struggle to meet this requirement.

  • Lack of Empirical Depth: AI excels at synthesizing existing information but cannot conduct original fieldwork, clinical trials, or archival research. It mimics the structure of a study without performing the substance of it.
  • Axiological Misalignment: There is a gap between the automated generation of text and the values-driven process of human inquiry. Research requires a commitment to truth, ethics, and accountability, qualities a machine cannot possess.
  • The Echo Chamber Effect: These submissions often present fabricated or corrupted citations or circular logic that offers little to no utility to the reader. They clutter the ecosystem without moving the needle on critical conversations.

Upholding the Integrity of the Record

Our editorial board remains committed to a rigorous peer-review process, but let’s be clear: the ‘publish or perish’ culture, now supercharged by GenAI, is threatening to overwhelm the very systems meant to ensure quality.

If an academic paper submitted for publication does not offer an original contribution, or if it lacks the human oversight necessary to guarantee its validity, it has no place in a scholarly journal. We are in a postplagiarism era where the focus must shift from merely detecting copied text to evaluating the originality of thought and the integrity of the research process. Postplagiarism does not mean that we throw out academic and research integrity or that ‘anything goes’. We recognize that co-creation with GenAI may be normal for some writers today. But having an AI agent write and submit manuscripts on your behalf wastes everyone’s time.

To our contributors: scholarship is a human endeavor. We value your insights, your unique perspectives, and your rigorous labour. Meanwhile, we will continue our commitment to quality, and I expect that the journal’s rejection rate will remain high as we focus on papers that bring value to our readership.

______________

Share this post: Stop wasting my time! AI Agents Infiltrate Scholarly Publishing – https://drsaraheaton.com/2026/02/06/stop-wasting-my-time-ai-agents-infiltrate-scholarly-publishing/



ChatGPT is in classrooms. What now?

February 2, 2026

“What should we be assessing exactly?” This was a question one of our research participants asked when we interviewed them as part of our project on artificial intelligence and academic integrity, sponsored by a University of Calgary Teaching Grant.

In an article published in The Conversation, we provide highlights of the results from our interviews with 28 educators across Canada, as well as our analysis of 15 years of research that looked at how AI affects education. (Spoiler alert: AI is a double-edged sword for educators and there are no easy answers.)

Alt text: Screenshot of The Conversation website showing a blurred smartphone screen with the ChatGPT app icon. Overlaid headline reads, “ChatGPT is in classrooms. How should educators now assess student learning?”
Screenshot from The Conversation.

We emphasize that, “in a post-plagiarism context, we consider that humans and AI co-writing and co-creating does not automatically equate to plagiarism.” Check out the full article in The Conversation.

For more detail on the methods and findings of our interviews, see the scholarly paper we published in Assessment and Evaluation in Higher Education.

I’d like to give a shoutout to all the project team members who worked with us on various aspects of this research: Robert (Bob) Brennan (Schulich School of Engineering, University of Calgary), Jason Wiens (Faculty of Arts, University of Calgary), Brenda McDermott (Student Accessibility Services, University of Calgary), Rahul Kumar (Faculty of Education, Brock University), Beatriz Moya (Instituto de Éticas Aplicadas, Pontificia Universidad Católica de Chile) and the student research assistants who helped along the way (who have now all successfully graduated and moved on to the next phase of their careers): Jonathan Lesage, Helen Pethrick, and Mawuli Tay.

Related posts:

What Should We Be Assessing in a World with AI? Insights from Higher Education Educators – https://drsaraheaton.com/2025/11/25/what-should-we-be-assessing-in-a-world-with-ai-insights-from-higher-education-educators/

______________

Share this post: ChatGPT is in classrooms. What now? https://drsaraheaton.com/2026/02/02/chatgpt-is-in-classrooms-what-now/



What Should We Be Assessing in a World with AI? Insights from Higher Education Educators

November 25, 2025

The arrival of generative AI tools such as ChatGPT has disrupted how we think about assessment in higher education. As educators, we’re facing a critical question: What should we actually be assessing when students have access to these powerful tools?

Our recent study explored how 28 Canadian higher education educators are navigating this challenge. Through in-depth interviews, we discovered that educators are positioning themselves as “stewards of learning with integrity” – carefully drawing boundaries between acceptable and unacceptable uses of chatbots in student assessments.

Screenshot of an academic journal article header from Assessment & Evaluation in Higher Education, published by Routledge. The article title reads: “What should we be assessing exactly? Higher education staff narratives on gen AI integration of assessment in a postplagiarism era.” Authors listed are Sarah Elaine Eaton, Beatriz Antonieta Moya Figueroa, Brenda McDermott, Rahul Kumar, Robert Brennan, and Jason Wiens, with institutional affiliations including University of Calgary, Pontificia Universidad Católica de Chile, Brock University, and others. The DOI link is visible at the top: https://doi.org/10.1080/02602938.2025.2587246.

Where Educators Found Common Ground

Across disciplines, participants agreed that prompting skills and critical thinking are appropriate to assess with chatbot integration. Prompting requires students to demonstrate foundational knowledge, clear communication skills, and ethical principles like transparency and respect. Critical thinking assessments can leverage chatbots’ current limitations – their unreliable arguments, weak fact-checking, and inability to explain reasoning – positioning students as evaluators of AI-generated content.

The Nuanced Territory of Writing Assessment

Writing skills proved far more controversial. Educators accepted chatbot use for brainstorming (generating initial ideas) and editing (grammar checking after independent writing), but only under specific conditions: students must voice their own ideas, complete the core writing independently, and critically evaluate any AI suggestions.

Notably absent from discussions was the composition phase – the actual process of developing and organizing original arguments. This silence suggests educators view composition as distinctly human cognitive work that should remain student-generated, even as peripheral tasks might accommodate technological assistance.

Broader Concerns

Participants raised important challenges beyond specific skill assessments: language standardization that erases student voice, potential for overreliance on AI, blurred authorship boundaries, and untraceable forms of academic misconduct. Many emphasized that students training to become professional communicators shouldn’t rely on AI for core writing tasks.

Moving Forward

Our findings suggest that ethical AI integration in assessment requires more than policies; it demands ongoing conversations about what makes learning authentic in technology-mediated environments. Educators need support in identifying which ‘cognitive offloads’ are appropriate, understanding how AI works, and building students’ evaluative judgment skills.

The key insight? Assessment in the AI era isn’t about banning technology, but about distinguishing between tasks where AI can enhance learning and those where independent human cognition remains essential. As one participant reflected: we must continue asking ourselves, “What should we be assessing exactly?”

The postplagiarism era requires us to protect academic standards while preparing students for technology-rich professional environments – a delicate balance that demands ongoing dialogue, flexibility, and our commitment to learning and student success.

Read the full article: https://doi.org/10.1080/02602938.2025.2587246

______________

Share this post: What Should We Be Assessing in a World with AI? Insights from Higher Education Educators – https://drsaraheaton.com/2025/11/25/what-should-we-be-assessing-in-a-world-with-ai-insights-from-higher-education-educators/
