Stop wasting my time! AI Agents Infiltrate Scholarly Publishing

February 6, 2026

As the Editor-in-Chief of the International Journal for Educational Integrity, I have witnessed (and become super frustrated with) threats to academic publishing and research integrity from Gen AI. Don’t get me wrong, I am not opposed to AI, but I have been clear in my research and writing that technology can be used in good and helpful ways or ways that are unethical and inappropriate. Recently, our editorial office received a manuscript with the file name ‘Blinded manuscript generated by artificial intelligence.’

My reaction was, “Are you kidding me?! Well, that’s bold!” Although the honesty of that file name may be a rarity, the submission itself is symptomatic of a burgeoning crisis in academic publishing: the rise of ‘AI slop.’ Since the proliferation of large language models (LLMs), we have seen a dramatic increase in submissions. Now, I’m pretty sure that a portion of the manuscripts we are receiving are written entirely by AI agents, or submitted by bots on behalf of authors.

ChatGPT-generated image. A puppet seated at a desk in an office, holding a printed document titled “Blinded manuscript generated by artificial intelligence.” The desk is covered with papers, a pair of glasses, a pen, and a coffee mug, with bookshelves and a bulletin board visible in the background.

As a journal editor, let me be clear: The volume of manuscripts you send out does not equate to value for the readership. It is not that I oppose the use of AI carte blanche, but I do object to manuscripts prepared and sent by bots, with no human interaction in the process. If a manuscript does not bring value to our readers, it gets an immediate desk rejection, and for good reason.

The Problem with AI Slop in Research

Academic journals exist to advance the frontiers of human knowledge. A manuscript is expected to contribute new and original findings to scholarship and science. AI-generated papers, by their very nature, struggle to meet this requirement.

  • Lack of Empirical Depth: AI excels at synthesizing existing information but cannot conduct original fieldwork, clinical trials, or archival research. It mimics the structure of a study without performing the substance of it.
  • Axiological Misalignment: There is a gap between the automated generation of text and the values-driven process of human inquiry. Research requires a commitment to truth, ethics, and accountability, qualities a machine cannot possess.
  • The Echo Chamber Effect: These submissions often present fabricated or corrupted citations or circular logic that offers little to no utility to the reader. They clutter the ecosystem without moving the needle on critical conversations.

Upholding the Integrity of the Record

Our editorial board remains committed to a rigorous peer-review process, but let’s be clear: the ‘publish or perish’ culture, now supercharged by Gen AI, is threatening to overwhelm the very systems meant to ensure quality.

If an academic paper submitted for publication does not offer an original contribution or if it lacks the human oversight necessary to guarantee its validity, it has no place in a scholarly journal. We are in a postplagiarism era where the focus must shift from merely detecting copied text to evaluating the originality of thought and the integrity of the research process. Postplagiarism does not mean that we throw out academic and research integrity or that ‘anything goes’. We recognize that co-creation with GenAI may be normal for some writers today. But having an AI agent write and submit manuscripts on your behalf wastes everyone’s time.

To our contributors: scholarship is a human endeavor. We value your insights, your unique perspectives, and your rigorous labour. In the meantime, we will continue with our commitment to quality, and I expect that the journal’s rejection rate will continue to be high as we focus on papers that bring value to our readership.

______________

Share this post: Stop wasting my time! AI Agents Infiltrate Scholarly Publishing – https://drsaraheaton.com/2026/02/06/stop-wasting-my-time-ai-agents-infiltrate-scholarly-publishing/

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.


ChatGPT is in classrooms. What now?

February 2, 2026

“What should we be assessing exactly?” This was a question one of our research participants asked when we interviewed them as part of our project on artificial intelligence and academic integrity, sponsored by a University of Calgary Teaching Grant.

In an article published in The Conversation, we provide highlights of the results from our interviews with 28 educators across Canada, as well as our analysis of 15 years of research that looked at how AI affects education. (Spoiler alert: AI is a double-edged sword for educators and there are no easy answers.)

Alt text: Screenshot of The Conversation website showing a blurred smartphone screen with the ChatGPT app icon. Overlaid headline reads, “ChatGPT is in classrooms. How should educators now assess student learning?”
Screenshot from The Conversation.

We emphasize that, “in a post-plagiarism context, we consider that humans and AI co-writing and co-creating does not automatically equate to plagiarism.” Check out the full article in The Conversation.

You can also check out the scholarly paper we published in Assessment and Evaluation in Higher Education, which goes into more detail about the methods and findings of our interviews.

I’d like to give a shoutout to all the project team members who worked with us on various aspects of this research: Robert (Bob) Brennan (Schulich School of Engineering, University of Calgary), Jason Wiens (Faculty of Arts, University of Calgary), Brenda McDermott (Student Accessibility Services, University of Calgary), Rahul Kumar (Faculty of Education, Brock University), Beatriz Moya (Instituto de Éticas Aplicadas, Pontificia Universidad Católica de Chile) and the student research assistants who helped along the way (who have now all successfully graduated and moved on to the next phase of their careers): Jonathan Lesage, Helen Pethrick, and Mawuli Tay.

Related posts:

What Should We Be Assessing in a World with AI? Insights from Higher Education Educators – https://drsaraheaton.com/2025/11/25/what-should-we-be-assessing-in-a-world-with-ai-insights-from-higher-education-educators/

______________

Share this post: ChatGPT is in classrooms. What now? https://drsaraheaton.com/2026/02/02/chatgpt-is-in-classrooms-what-now/

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.


What Should We Be Assessing in a World with AI? Insights from Higher Education Educators

November 25, 2025

The arrival of generative AI tools such as ChatGPT has disrupted how we think about assessment in higher education. As educators, we’re facing a critical question: What should we actually be assessing when students have access to these powerful tools?

Our recent study explored how 28 Canadian higher education educators are navigating this challenge. Through in-depth interviews, we discovered that educators are positioning themselves as “stewards of learning with integrity” – carefully drawing boundaries between acceptable and unacceptable uses of chatbots in student assessments.

Screenshot of an academic journal article header from Assessment & Evaluation in Higher Education, published by Routledge. The article title reads: “What should we be assessing exactly? Higher education staff narratives on gen AI integration of assessment in a postplagiarism era.” Authors listed are Sarah Elaine Eaton, Beatriz Antonieta Moya Figueroa, Brenda McDermott, Rahul Kumar, Robert Brennan, and Jason Wiens, with institutional affiliations including University of Calgary, Pontificia Universidad Católica de Chile, Brock University, and others. The DOI link is visible at the top: https://doi.org/10.1080/02602938.2025.2587246.

Where Educators Found Common Ground

Across disciplines, participants agreed that prompting skills and critical thinking are appropriate to assess with chatbot integration. Prompting requires students to demonstrate foundational knowledge, clear communication skills, and ethical principles like transparency and respect. Critical thinking assessments can leverage chatbots’ current limitations – their unreliable arguments, weak fact-checking, and inability to explain reasoning – positioning students as evaluators of AI-generated content.

The Nuanced Territory of Writing Assessment

Writing skills proved far more controversial. Educators accepted chatbot use for brainstorming (generating initial ideas) and editing (grammar checking after independent writing), but only under specific conditions: students must voice their own ideas, complete the core writing independently, and critically evaluate any AI suggestions.

Notably absent from discussions was the composition phase – the actual process of developing and organizing original arguments. This silence suggests educators view composition as distinctly human cognitive work that should remain student-generated, even as peripheral tasks might accommodate technological assistance.

Broader Concerns

Participants raised important challenges beyond specific skill assessments: language standardization that erases student voice, potential for overreliance on AI, blurred authorship boundaries, and untraceable forms of academic misconduct. Many emphasized that students training to become professional communicators shouldn’t rely on AI for core writing tasks.

Moving Forward

Our findings suggest that ethical AI integration in assessment requires more than policies; it demands ongoing conversations about what makes learning authentic in technology-mediated environments. Educators need support in identifying which ‘cognitive offloads’ are appropriate, understanding how AI works, and building students’ evaluative judgment skills.

The key insight? Assessment in the AI era isn’t about banning technology, but about distinguishing between tasks where AI can enhance learning and those where independent human cognition remains essential. As one participant reflected: we must continue asking ourselves, “What should we be assessing exactly?”

The postplagiarism era requires us to protect academic standards while preparing students for technology-rich professional environments – a delicate balance that demands ongoing dialogue, flexibility, and our commitment to learning and student success.

Read the full article: https://doi.org/10.1080/02602938.2025.2587246

______________

Share this post: What Should We Be Assessing in a World with AI? Insights from Higher Education Educators – https://drsaraheaton.com/2025/11/25/what-should-we-be-assessing-in-a-world-with-ai-insights-from-higher-education-educators/

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.


Breaking Barriers: Academic Integrity and Neurodiversity

November 20, 2025

When we talk about academic integrity in universities, we often focus on preventing plagiarism and cheating. But what if our very approach to enforcing these standards is unintentionally creating barriers for some of our most vulnerable students?

My recent research explores how current academic integrity policies and practices can negatively affect neurodivergent students—those with conditions like ADHD, dyslexia, Autism, and other learning differences. Our existing systems, structures, and policies can further marginalize students with cognitive differences.

The Problem with One-Size-Fits-All

Neurodivergent students face unique challenges that can be misunderstood or ignored. A dyslexic student who struggles with citation formatting isn’t necessarily being dishonest. They may be dealing with cognitive processing differences that make these tasks genuinely difficult. A student with ADHD who has trouble managing deadlines and tracking sources is not necessarily lazy or unethical. They may be navigating executive function challenges that affect time management and organization. Yet our policies frequently treat these struggles as potential misconduct rather than as differences that deserve support.

The Technology Paradox for Neurodivergent Students

Technology presents a particularly thorny paradox. On one hand, AI tools such as ChatGPT and text-to-speech software can be academic lifelines for neurodivergent students, helping them organize thoughts, overcome writer’s block, and express ideas more clearly. These tools can genuinely level the playing field.

On the other hand, the same technologies designed to catch cheating—especially AI detection software—appear to disproportionately flag neurodivergent students’ work. Autistic students or those with ADHD may be at higher risk of false positives from these detection tools, potentially facing misconduct accusations even when they have done their own work. This creates an impossible situation: the tools that help are the same ones that might get students in trouble.

Moving Toward Epistemic Plurality

So what’s the solution? Epistemic plurality, or recognizing that there are multiple valid ways of knowing and expressing knowledge. Rather than demanding everyone demonstrate learning in the exact same way, we should design assessments that allow for different cognitive styles and approaches.

This means:

  • Rethinking assessment design to offer multiple ways for students to demonstrate knowledge
  • Moving away from surveillance technologies like remote proctoring that create anxiety and accessibility barriers
  • Building trust rather than suspicion into our academic cultures
  • Recognizing accommodations as equity, not as “sanctioned cheating”
  • Designing universally, so accessibility is built in from the start rather than added as an afterthought

What This Means for the Future

In the postplagiarism era, where AI and technology are seamlessly integrated into education, we move beyond viewing academic integrity purely as rule-compliance. Instead, we focus on authentic and meaningful learning and ethical engagement with knowledge.

This does not mean abandoning standards. It means recognizing that diverse minds may meet those standards through different pathways. A student who uses AI to help structure an essay outline isn’t necessarily cheating. They may be using assistive technology in much the same way another student might use spell-check or a calculator.

Call to Action

My review of existing research showed something troubling: we have remarkably little data about how neurodivergent students experience academic integrity policies. The studies that exist are small, limited to English-speaking countries, and often overlook the voices of neurodivergent individuals themselves.

We need larger-scale research, global perspectives, and most importantly, we need neurodivergent students to be co-researchers and co-authors in work about them. “Nothing about us without us” is not just a slogan, but a call to action for creating inclusive academic environments.

Key Messages

Academic integrity should support learning, not create additional barriers for students who already face challenges. By reimagining our approaches through a lens of neurodiversity and inclusion, we can create educational environments where all students can thrive while maintaining academic standards.

Academic integrity includes and extends beyond student conduct; it means that everyone in the learning system acts with integrity to support student learning. Ultimately, there can be no integrity without equity.

Read the whole article here:
Eaton, S. E. (2025). Neurodiversity and academic integrity: Toward epistemic plurality in a postplagiarism era. Teaching in Higher Education. https://doi.org/10.1080/13562517.2025.2583456

______________

Share this post: Breaking Barriers: Academic Integrity and Neurodiversity – https://drsaraheaton.com/2025/11/20/breaking-barriers-academic-integrity-and-neurodiversity/

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.


AI Use and Ethics Among Jordanian University Students

November 19, 2025

A survey of 885 university students in Jordan found that respondents “generally viewed AI use for tasks such as translation, literature reviews, and exam preparation as ethically acceptable, whereas using AI to cheat or fully complete assignments was widely regarded as unacceptable.”

Check out the latest article in the International Journal for Educational Integrity by Marwa M. Alnsour, Hamzeh Almomani, Latifa Qouzah, Mohammad Q.M. Momani, Rasha A. Alamoush & Mahmoud K. AL-Omiri, “Artificial intelligence usage and ethical concerns among Jordanian University students: a cross-sectional study”.

Screenshot of the title page of a research article published in the International Journal for Educational Integrity. The article is titled “Artificial intelligence usage and ethical concerns among Jordanian University students: a cross-sectional study.” It is marked as “Research” and “Open Access” with a purple header. Authors listed are Marwa M. Alnsour, Hamzeh Almomani, Latifa Qouzah, Mohammad Q.M. Momani, Rasha A. Alamoush, and Mahmoud K. Al-Omiri. The DOI link and journal details appear at the top.

Synopsis

This cross-sectional study examined artificial intelligence usage patterns and ethical awareness among 885 higher education students across various disciplines. Findings showed how Jordanian university students engage with AI tools like ChatGPT in their academic work.

Key Findings

High AI Adoption: A substantial 78.1% of students reported using AI during their studies, with approximately half using it weekly or daily. ChatGPT emerged as the most popular tool (85.2%), primarily used for answering academic questions (53.9%) and completing assignments (46.4%).

Knowledge Gaps: Although 57.5% considered themselves moderately to very knowledgeable about AI, only 44% were familiar with ethical guidelines. Notably, 41.8% were completely unaware of principles guiding AI use, revealing a significant gap between usage and ethical understanding.

Disciplinary Differences: Science and engineering students demonstrated the highest usage rates and knowledge levels, while humanities students showed lower engagement but expressed the strongest interest in training. Health sciences students displayed greater ethical concerns, possibly reflecting the high-stakes nature of their field.

Ethical Perceptions: Students generally viewed AI use for translation, proofreading, literature reviews, and exam preparation as acceptable. However, 39.8% had witnessed unethical AI use, primarily involving cheating or total dependence on AI. Only 35% expressed concern about ethical implications, suggesting many may not fully recognize potential risks.

Demographic Patterns: Female students demonstrated higher ethical awareness than males. Older students and those in advanced programs (particularly PhD students) showed greater AI knowledge and ethical consciousness, with each additional year of age correlating with increased awareness scores.

Training Needs: More than three quarters (76.7%) of students expressed interest in professional training on ethical AI use, with 83.7% agreeing that guidance is necessary. However, 46.6% indicated their institutions had not provided adequate support (which should surprise exactly no one, since similar findings have been reported in other studies).

Implications

The authors call for Jordanian universities to develop clear, discipline-specific ethical guidelines and structured training programs. The researchers recommend implementing mandatory online modules, discipline-tailored workshops, and establishing dedicated AI ethics bodies to promote responsible use. These findings underscore the broader challenge facing higher education globally: ensuring students can leverage AI’s benefits while maintaining academic integrity and developing critical thinking skills.

______________

Share this post: AI Use and Ethics Among Jordanian University Students https://drsaraheaton.com/2025/11/19/ai-use-and-ethics-among-jordanian-university-students/

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.