Research Integrity Oversight in Canada: A Postplagiarism Perspective

April 11, 2026

The Canadian Panel on Responsible Conduct of Research (PRCR) is proposing substantive changes to Canada’s research integrity framework, and the public comment window closes April 17, 2026. If you care about research ethics in this country, you have days left to weigh in.

I want to flag a few things about these proposed changes and why they matter to those of us working in postplagiarism research.

The most consequential proposal is the removal of any statute of limitations on allegations of research misconduct. As attorney Minal Caron told Retraction Watch, the existing policy is silent on this question. The proposed language would require institutions to review allegations regardless of how much time has passed since the work was published, which would be a significant shift. It’s also a long-overdue one. Complainants often delay coming forward out of fear of retaliation, and a policy that turns away allegations on procedural grounds protects no one except those who benefit from institutional inaction.

The PRCR also proposes to require institutions to hold respondents accountable even after they have left, and to accept anonymous allegations and allegations already circulating in the public domain as grounds for review. These aren’t radical ideas. They’re basic conditions for a credible oversight system.

I’ve written and spoken at length about how postplagiarism requires us to rethink accountability in an age of AI. But accountability without enforcement infrastructure is a philosophical position, not a policy. These proposed changes represent a concrete attempt to build infrastructure. They will not resolve every tension in Canadian research oversight, and the critics quoted in the article are right to flag gaps, particularly around the vagueness of institutional RCR education requirements.

One of the scholars quoted in the Retraction Watch piece is Gengyan Tang, a PhD candidate and a member of our Postplagiarism Research Lab, who studies research integrity policy. His observation that the proposed language around RCR education is too ambiguous is precise and fair. Institutions can host an “Academic Integrity Week” and check a compliance box without delivering anything substantive. Policies that do not specify how education is to be delivered or evaluated leave too much room for performative compliance.

The Pruitt case, cited in the article as a catalyst for some of this reform momentum, is worth naming directly. Jonathan Pruitt was found to have fabricated and falsified data. The case exposed how the 2011 framework’s absence of relevant procedures allowed institutions to deflect rather than investigate. Requiring institutions to act regardless of elapsed time or an individual’s current affiliation is a direct response to that failure.

Postplagiarism, as a framework, asks us to think past the categories we have inherited. The academic integrity arms race that I have discussed in my research applies just as much to research misconduct oversight as it does to student cheating. Detection tools, policies, and procedures are only as good as the institutional will to apply them rigorously. These proposed changes push toward compulsion rather than discretion, which warrants close attention.

The comment period is open until April 17, 2026. If you work in research integrity, this is your chance: read the proposed revisions and submit feedback.

__________

Reposted from: Research Integrity Oversight in Canada: A Postplagiarism Perspective – https://postplagiarism.com/2026/04/11/research-integrity-oversight-in-canada-a-postplagiarism-perspective/


Interfacing with the Future: Reflections on the National Day of Learning 2026

April 1, 2026

On March 28, 2026, I had the pleasure of joining educators from across Canada for the National Day of Learning, hosted by Let’s Talk Science. This one-day, nationwide professional learning event brought together K–12 teachers, post-secondary educators, and policy leaders to explore some of the most pressing issues shaping education today, with artificial intelligence high on the agenda.

I was invited to deliver a session titled “Interfacing with the Future: Wearable AI and Academic Integrity for K–12 and Higher Ed.” What follows are a few reflections and key ideas from that conversation, hosted by Dr. Alec Couros.

Moving into the Postplagiarism Era

One of the central ideas framing my talk is postplagiarism: a reality in which artificial intelligence is no longer an external tool that students occasionally use, but one embedded into everyday life and learning.

Students are already engaging with AI in ways that challenge traditional notions of authorship, originality, and academic work. The question is no longer if students will use AI, but how.

This shift requires a corresponding change in how we think about academic integrity. Detection and surveillance, long relied upon as primary strategies, are no longer sufficient. Instead, we must rethink how we design learning environments that foster integrity from the ground up.

From Tools to Wearables: How AI is Advancing

A key focus of my presentation was the rapid evolution from AI tools to AI wearables — particularly smart glasses and other forms of cosmetically invisible interfaces. The talk was based, in part, on our recent article in Canadian Perspectives on Academic Integrity.

Wearable technologies integrate AI directly into our physical experience of the world. Rather than pulling out a device, users can access real-time information, transcription, and prompts seamlessly through their field of vision.

This shift introduces both opportunities and tensions:

  • Cognitive offloading: Learners can reduce mental load by accessing information instantly. (Phill Dawson has done some great work on cognitive offloading that I recommend reading.)
  • Enhanced presence: Wearables allow users to maintain eye contact and engagement without device distraction.
  • Efficiency gains: Tasks such as note-taking or translation can be automated in real time.

At the same time, these benefits come with real challenges, including information overload, privacy concerns, and technical limitations. More importantly for educators, they fundamentally disrupt assumptions about what it means to “know” something independently.

New Technology ≠ Cheating

One of the most important messages I emphasized is this: new technology does not automatically equal academic misconduct.

If a tool is permitted, then its use is not cheating. The real issue lies in unauthorized use or misuse in ways that create unfair advantage. 

We must also remain attentive to equity and accessibility. Some wearable technologies may be used as accommodations, making it essential that our integrity policies are inclusive and nuanced rather than rigid and punitive.

Designing for Integrity (Not Surveillance)

Rather than doubling down on detection, I encourage educators to shift their focus toward designing for integrity.

This means:

  • Prioritizing assessment validity: If an AI system can complete a task without genuine understanding, then the task itself needs to be rethought.
  • Moving beyond “gotcha” approaches: Surveillance-based strategies erode trust and are increasingly ineffective.
  • Supporting diverse learners: Students bring different technological access, needs, and experiences. Our designs must reflect that.
  • Building a culture of integrity: Integrity is not enforced; it is cultivated through meaningful learning experiences.

Bridging K–12 and Post-Secondary Education

Another key theme was the gap between K–12 and post-secondary expectations.

In K–12 environments, students are often encouraged to explore technology as part of their learning. In contrast, post-secondary institutions frequently operate under the assumption that students already understand complex academic integrity rules.

As AI continues to evolve, this gap becomes more pronounced. We need stronger alignment across educational sectors to ensure that students are supported, rather than being set up for failure, as they transition between systems. (Myke Healy has a great paper on the topic of GenAI in the K–12 context that is worth reading.)

Looking Ahead

If there is one takeaway from this experience, it is this: wearable AI is not a future scenario. It is already here.

As educators, we are being called to respond not with fear, but with thoughtful, research-informed approaches. The challenge is not simply to manage technology, but to reimagine teaching, learning, and assessment in ways that remain meaningful in an AI-integrated world.

Events like the National Day of Learning remind me of the power of community. Bringing educators together to share ideas, ask difficult questions, and explore new possibilities is essential as we navigate this rapidly changing landscape.

Thank you to Let’s Talk Science and to Dr. Alec Couros for the opportunity to be part of this important conversation, and to all the educators who continue to lead with curiosity, courage, and care.

______________

Share this post: Interfacing with the Future: Reflections on the National Day of Learning 2026 –  https://drsaraheaton.com/2026/04/01/interfacing-with-the-future-reflections-on-the-national-day-of-learning-2026/

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.


From Courtrooms to Classrooms: Smart Glasses and Integrity in a Postplagiarism Era

March 18, 2026

A London judge recently concluded that a witness was receiving coached answers through a pair of smart glasses connected to his mobile phone during cross-examination (Jacobs, 2026). The case involved a routine insolvency dispute, but the technology at the centre of the judge’s findings was anything but routine. The witness, who gave evidence through a Lithuanian interpreter, was found to have been receiving audio from an unidentified caller routed through smart glasses paired to his handset. Once the glasses were removed, his phone began broadcasting a voice from his jacket pocket. The judge rejected the witness’s testimony in full, describing it as unreliable and untruthful.

The incident is instructive for those of us working at the intersection of technology, integrity, and institutional policy. It demonstrates that smart glasses do not need advanced AI capabilities to compromise a formal proceeding. Simple Bluetooth audio connectivity was sufficient.

In our recent paper (Eaton et al., 2026), we examined the implications of AI-enabled smart glasses for teaching, learning, assessment, and academic integrity. One of our central arguments applies here: the reflexive instinct to treat wearable technology as a cheating device, while understandable, risks missing the structural challenge these technologies present to the systems designed to ensure honest participation.

Courts, like universities, depend on observable behaviours and verifiable evidence to assess credibility and ensure procedural fairness. As we noted, AI glasses can embed cognitive or communicative assistance into a user’s perceptual field in ways that leave no external trace (Eaton et al., 2026). The London case illustrates what happens when that assistance leaves a trace, but only because something went wrong: the interpreter heard voices, and the phone began playing audio at the wrong moment.

The question this case raises is not whether courts should ban smart glasses. A blanket prohibition would create its own problems, particularly for individuals who depend on wearable technology for vision correction or accessibility. We argued that institutional responses should focus on redesigning processes rather than policing devices (Eaton et al., 2026). For courts, this means developing protocols for the use of wearable technology during testimony, much as we recommended that educational institutions establish centralized accommodation protocols for AI-enabled devices.

The London ruling also reinforces our observation that enforcement models built around detection are fragile. The coaching was discovered through a combination of the interpreter’s alertness, call log records, and the witness’s inability to explain the contact saved as “abra kadabra” on his phone. These are investigative tools, not systemic safeguards. As smart glasses become more common and more discreet, relying on detection alone will prove insufficient in both courtrooms and classrooms.

What this case calls for is not alarm but preparation. Institutions responsible for the integrity of formal proceedings, whether legal or academic, need forward-looking frameworks that address the capabilities of wearable technology before the next incident occurs. The technology is not going away. Our systems must adapt.

References

Eaton, S. E., Kumar, R., Dahal, B., Tang, G., Ramazanov, F., & Moya Figueroa, B. A. (2026). AI smart glasses and the future of academic integrity in a postplagiarism era. Canadian Perspectives on Academic Integrity, 9(1), 1–5. https://doi.org/10.55016/ojs/cpai.v9i1/82885

Jacobs, S. (2026, March 17). A London judge says a witness was being coached in real time through smart glasses. TechSpot. https://www.techspot.com/news/111710-london-judge-witness-coached-real-time-through-smart.html

____________

Cross posted from:

From Courtrooms to Classrooms: Smart Glasses and Integrity in a Postplagiarism Era – https://postplagiarism.com/2026/03/18/from-courtrooms-to-classrooms-smart-glasses-and-integrity-in-a-postplagiarism-era/


Call for Proposals: Special Issue on Postplagiarism and Generativism: Human-AI Hybrid Approaches to Ethical Teaching, Learning, and Assessment

March 17, 2026

Special Issue Call for Papers

Postplagiarism and Generativism: Human-AI Hybrid Approaches to Ethical Teaching, Learning, and Assessment

For publication in the Journal of University Teaching and Learning Practice

Guest editors

Background

Every new technology brings with it societal and moral panic (Orben, 2020). When the Internet first became popular, concerns about plagiarism increased. Even though there is scant empirical evidence that the Internet was actually responsible for increases in rates of plagiarism, the perception that new technology resulted in more academic cheating persisted (Panning Davies & Howard, 2016).

Some plagiarism scholars have been emphatic that the majority of student plagiarism cases reflect not an intent to deceive, but a lack of academic literacy and poor academic practice, and have even advocated for removing plagiarism from academic misconduct policies in favour of increased student support (Howard, 1992; Jamieson & Howard, 2021). The idea that plagiarism could be decoupled from academic misconduct seems somewhat unlikely, but by the 2020s it was obvious to some that generative artificial intelligence (GenAI) would have an impact on writing, and by extension, on plagiarism (Mindzak & Eaton, 2021).

In response to these technological shifts, various frameworks have emerged to conceptualize academic integrity in the GenAI era. The postplagiarism framework, first introduced by Eaton (2021, 2023) and since discussed by scholars worldwide (Bali, 2023; Bagenal, 2024; Kenny, 2024), offers one approach. Other perspectives, such as Generativism (Pratschke, 2023), AI Literacy frameworks (Ng et al., 2021; Pretorius & Cahusac de Caux, 2024), and UNESCO’s Guidance for Generative AI in Education (2023), provide complementary or alternative viewpoints on similar phenomena.

Postplagiarism is based on six tenets (Eaton, 2023): (1) human-AI hybrid writing will become the norm; (2) creativity can be enhanced by AI; (3) AI can help to overcome language barriers; (4) we can outsource control of our writing to AI, but we do not outsource responsibility for what is written; (5) attribution remains important; and (6) historical definitions of plagiarism may require rethinking.

Empirical testing of these and related frameworks has shown differing levels of acceptance and application across educational contexts (Kumar, 2025).

Equity, Diversity, Inclusion, and Accessibility in a Postplagiarism Age

As higher education institutions aim to promote social justice through equity, diversity, and inclusion (EDI), whether GenAI breaks down or reinforces barriers related to linguistic, cultural, socioeconomic, and ability differences requires critical examination.

Assessment practices should be designed proactively to enable all students to demonstrate their learning without being unfairly disadvantaged by their personal characteristics or circumstances (Tai et al., 2022). Similarly, McDermott (2024) highlights the importance of considering accessibility, equity, and inclusion in assessment and academic integrity.

GenAI offers opportunities to enhance equity by providing personalized support, overcoming language barriers, and assisting learners with diverse needs. However, without careful implementation, it may exacerbate existing inequities through unequal access to technology, algorithmic biases, or assessment designs that privilege certain ways of knowing and communicating.

In this special issue, we propose to examine the broader question: “How are pedagogies, learning, and teaching approaches evolving in response to GenAI, and what frameworks best support ethical academic practice in a postplagiarism landscape?”

We invite researchers and practitioners to submit their original research papers exploring the transformation of teaching, learning, and assessment in a GenAI age. We welcome both theoretical and empirical contributions, including positions that may present contrasting viewpoints. Potential topics of interest include, but are not limited to:

  • New developments in postplagiarism, generativism, and other emerging frameworks for understanding academic integrity in the GenAI era
  • Empirical studies testing these frameworks in different contexts and disciplines
  • The use of these frameworks to design or reform academic misconduct policies and procedures
  • The relationship between GenAI, academic literacies, and related competencies (e.g., digital literacy, information literacy)
  • Pedagogical approaches that embrace GenAI while maintaining academic integrity
  • Case studies of successful integration of GenAI into teaching, learning, and assessment
  • Critical perspectives on the limitations or challenges of current approaches to GenAI in education
  • Position papers presenting new or alternative frameworks for understanding GenAI in teaching and learning

We particularly encourage submissions that engage in dialogue with existing frameworks, offering either supportive evidence or critical alternatives. Our goal is to foster a robust debate about the future of teaching and learning in a GenAI (and even a post-GenAI) world.

We welcome submissions from both established researchers and early-career scholars from diverse academic and cultural backgrounds. All submissions will be peer-reviewed by an international panel of experts. Accepted papers will be published in a special issue of the Journal of University Teaching and Learning Practice.

Types of publications accepted into this Special Issue

The types of publications that are eligible for acceptance into this Special Issue include:

  • Research papers
  • Review articles (e.g., systematic review or meta-analysis)
  • Case studies and evidence-based good practice examples

Developing a high-quality proposal

We recommend the creation of a single document in Word (.doc or .docx) format that contains the following:

  • Proposed article title
  • Proposed authors’ names, affiliations, and ORCID iDs
  • A clear evidence-based rationale for the line of inquiry proposed
  • Research question(s)
  • Proposed method (for both theoretical and empirical manuscripts)
  • Practice-based implications of the proposed research

The word limit for the proposal is 250 words (not including references); the proposal is designed to give the Editorial Team a sense of the rigour of the proposed manuscript and the possible implications of the research. The Editorial Team may return with an invitation to combine similar manuscripts. Acceptance of proposals does not guarantee acceptance of final manuscripts.

Timeline

  • Proposals due – April 30, 2026
  • Proposal acceptance notifications: May 14, 2026
  • Full articles due: August 31, 2026

Submit your proposal via this online form: https://forms.gle/6sKjc2jkKGWCtGgw7

For further information contact Professor Sarah Elaine Eaton, University of Calgary.

References

Bagenal, J. (2024). Generative artificial intelligence and scientific publishing: Urgent questions, difficult answers. The Lancet, 403(10432), 1118–1120. https://doi.org/10.1016/S0140-6736(24)00416-1

Bali, M. (2023, March 3). Are we approaching a postplagiarism era? https://blog.mahabali.me/educational-technology-2/are-we-approaching-a-postplagiarism-era/

Eaton, S. E. (2021). Plagiarism in higher education: Tackling tough topics in academic integrity. Bloomsbury.

Eaton, S. E. (2023). Postplagiarism: Transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. International Journal for Educational Integrity, 19(1), 1–10. https://doi.org/10.1007/s40979-023-00144-1

Howard, R. M. (1992). A plagiarism pentimento. Journal of Teaching Writing, 11(2), 233–245.

Orben, A. (2020). The Sisyphean cycle of technology panics. Perspectives on Psychological Science, 15(5), 1143–1157. https://doi.org/10.1177/1745691620919372


How AI Improved the Accessibility of my Slide Presentation with GenAI

February 17, 2026

I used Claude to help me improve the accessibility of a slide deck for an upcoming presentation. I uploaded the .pptx file and also uploaded a .pdf with instructions about how to make the slide deck compliant with accessibility standards.

I was not hopeful.

I asked Claude to revise the slide deck and provide an updated .pptx file that I could download. It did not work perfectly, and some of the AltText was lost. So, I asked Claude to provide the AltText for each slide and a detailed explanation of the changes. The result allowed me to make a few minor edits to the slide deck myself. The slides are now compliant with the organizational standards for a group I’ll be presenting to next week.
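As an aside for technically inclined readers: a .pptx file is simply a zip archive of XML parts, so you can audit which images still lack alt text using nothing but Python’s standard library. This sketch is my own illustration, not part of the Claude workflow I described above, and the `images_missing_alt_text` helper name is hypothetical; it relies on the fact that PowerPoint stores alt text in the `descr` attribute of a picture’s `cNvPr` element.

```python
import io
import zipfile
import xml.etree.ElementTree as ET

def images_missing_alt_text(pptx_file):
    """Report picture shapes in a .pptx whose cNvPr element lacks a
    'descr' attribute -- the attribute where PowerPoint stores alt text."""
    missing = []
    with zipfile.ZipFile(pptx_file) as zf:
        for part in zf.namelist():
            # Slide content lives in parts named ppt/slides/slideN.xml.
            if not (part.startswith("ppt/slides/") and part.endswith(".xml")):
                continue
            root = ET.fromstring(zf.read(part))
            for elem in root.iter():
                if elem.tag.endswith("}pic"):  # a picture shape
                    for child in elem.iter():
                        if child.tag.endswith("}cNvPr") and not child.get("descr"):
                            missing.append((part, child.get("name", "?")))
    return missing
```

On a real deck you would pass the file path; every tuple the function returns is an image that a screen reader will announce without a description.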

Ensuring slides are accessible has been an intimidating task for me in the past. I have always been afraid of “getting it wrong”. I would spend hours trying to figure out every detail (and things still would not be perfect).

In the end, I was satisfied with the results. Using AI for this has helped me to improve both my competence and confidence. The slides still may not be perfect, but they are better than they were… and better than I could have done on my own.

Have you tried using GenAI to help you improve the accessibility of your documents? If yes, what tips do you have?

______________

Share this post: How AI Improved the Accessibility of my Slide Presentation with GenAI – https://drsaraheaton.com/2026/02/17/how-ai-improved-my-presentations-accessibility-with-genai/
