Cheating, Clarity, and the Role of AI
How a Shared Scale Builds Trust and Expands Possibility in the Classroom
Three charts reveal where AI crosses the line, and a five-level scale resolves the ambiguity.
Artificial intelligence is changing how students learn, write, and problem solve. With that change has come confusion. Teachers worry about cheating, while we all struggle to understand what is acceptable and what crosses the line. Some policies try to ban AI completely, while others allow students to use it for brainstorming or feedback. The result is uncertainty, suspicion, and uneven expectations from classroom to classroom.
One way to bring clarity is to examine how teachers and students actually perceive AI use. Over the past year or so, I’ve collected survey data asking educators to judge whether different uses of AI count as “cheating.” The results reveal why conversations about AI are so challenging, and why schools need a common language for assessing AI work. These scenarios come from Matt Miller’s book, “AI for Educators: Learning Strategies, Teacher Efficiencies, and a Vision for an Artificial Intelligence Future.”
If you’d like to take the survey, here’s a link to the form. I’d be grateful if you’d pass it along to other educators, too!
Where Confusion Reigns
Let’s take a look at a few of the results. The first chart shows one of the most common but also most controversial practices: a student asks AI to create a response, then edits and submits it.
This chart highlights a core problem: when students revise AI-generated work, some view the process as dishonest. Others see it as a legitimate draft, no different from spellcheck or peer review. That lack of consensus leaves students unsure of what teachers expect and reveals a deeper issue: assignments are often given without a clear sense of which skills are being assessed. AI simply makes that gap visible.
The Question of Effort
The second chart shifts the focus to effort. Here, the student creates their own outline but relies on AI to draft the essay before editing and submitting.
This example shows how effort sits at the heart of the debate. Some argue that building an outline demonstrates ownership and learning, while others see handing off the draft essay as crossing a line. Ghostwriting has always raised questions about authenticity, and AI makes that tension more persistent, blurring the boundary between acceptable assistance and dishonest outsourcing. The disagreement suggests that perceptions hinge on how much intellectual labor the student is seen as doing versus passing off to the machine.
The Path to Clarity
The third chart offers a different picture. In this scenario, the student writes the assignment themselves, then runs it through AI for feedback.
Would you call this cheating or learning in your class? Tell me your grade/subject below.
This chart shows the clearest consensus: most respondents agree that using AI as a feedback tool is not cheating. In fact, it mirrors the way students have long sought help from peers, tutors, writing centers, or even the teacher’s famed red pen. The difference is that AI provides immediate, scalable feedback. Here, the data reveals a broadly shared understanding: AI can enhance learning without replacing it.
Taken together, these charts show more than just divided opinions; they highlight the confusion students face when adults don’t agree. And if educators can’t clearly articulate where AI belongs in learning, students are left to guess, usually at the risk of being wrong.
What’s needed isn’t rigid uniformity, but shared clarity. By making expectations explicit while leaving room for curiosity, we can give students both the boundaries and the freedom they need to navigate AI responsibly. And with that clarity, if a student chooses to step outside the boundaries, they know exactly where the line is, and that they are choosing to cross it.
Moving Toward a Shared Framework
The survey charts sharpen this need for clarity by highlighting three patterns. First, disagreement is sharpest when AI becomes the primary creator of student work. Second, perceptions shift depending on how much effort the student contributes before turning to AI. Third, there is broad agreement that AI as feedback, rather than as author, is acceptable.
These patterns echo an earlier tension: many assignments are created without teachers being fully clear on which skills they’re assessing. In an essay, for instance, is the priority the architecture of the piece (outline, syntax, semantics, word choice), or the content and ideas themselves? Is the student being assessed on the ability to write an essay, or to synthesize content? Both skills are worthy of assessment, but must they be assessed at the same time? It’s a little like giving me a quiz about a topic I’m well versed in, but in a language I don’t speak. Are you assessing my knowledge of the topic, or my ability to speak that language? Without that distinction, both students and teachers struggle to decide where AI belongs.
These challenges point directly to why tools like the AI Assessment Scale (AIAS) are useful. Rather than leaving teachers and students to navigate AI use on intuition, the scale offers a structured way to talk about fairness, authorship, and responsibility.
The AI Assessment Scale (AIAS) by Perkins, Furze, Roe, and MacVaugh is not just a classroom tool from TpT; it is grounded in research that looked closely at how students and teachers perceive fairness, authorship, and responsibility when AI is in the mix. It grew out of studies in academic integrity, digital literacy, and the psychology of effort. What makes it especially useful is that it does not attempt to collapse all AI use into a simple yes-or-no category. Instead, it acknowledges that there are levels of AI involvement, ranging from no AI use at all to partnered collaboration and even exploratory creativity. By putting these levels into a shared framework, it creates a common language for teachers and students to discuss what is allowed, what is not, and why. For more information, the research papers and other resources are available at aiassessmentscale.com
In practice, this means teachers can design assignments with intentional guardrails. For example, they might decide that a history essay should be written entirely without AI (Level 1), while a lab report, using data collected by the student, might allow AI to help polish language after the student drafts it (Level 3). Another assignment could even encourage extending the learning into unforeseen applications with AI (Level 5), treating it like a co-creator. Rather than relying on intuition or after-the-fact policing, the AIAS makes expectations explicit from the start. Students no longer have to guess at the invisible line between learning and cheating.
One way to see this in action is through CyberSandwich, an EduProtocol where students read and take notes on a text before pairing up to compare, contrast, and synthesize their learning. Then, each student crafts a summary of the content. If a teacher assigns the same article to every student, the emphasis is on close reading and collaborative meaning-making. If complementary articles are assigned, the emphasis shifts to comparison and integration. Layering the AIAS on top of this activity provides clarity about what role, if any, AI should play in the process.
For instance:
Level 1 (No AI): Students read the assigned text and write notes entirely independently, using no digital or AI tools other than to record their work. Next, students meet in pairs to compare and discuss their notes. After discussion, students write their summaries. This is completed in a supervised environment (e.g., in class, by hand or on locked-down devices). This meets AIAS Level 1 criteria by ensuring the summary is an unaided demonstration of comprehension and synthesis skills in a controlled setting.
Level 2 (AI Planning): Before reading, students use generative AI to produce a list of key questions or concepts to guide their reading. Or, they might use it to alter the reading level, tone-shift it, or otherwise make the reading more accessible. They then read and annotate the article independently, and continue the compare and contrast phase with their partner. Their final summary must be entirely their own work, without AI input. This leverages AI at the planning stage only, supporting ideation and reading focus while ensuring independent meaning-making and writing. AI use is scaffolded and explicitly separated from the final output.
Level 3 (AI Collaboration): After writing a first draft of their notes and summary, students use generative AI to suggest alternative phrasings, areas for improved coherence, or flag missed concepts. They critically evaluate these suggestions and revise their summaries accordingly. Students then submit a final summary alongside a brief reflection on how they used AI and what decisions they made. This reflects co-drafting and critical engagement, key features of Level 3. The reflective element develops evaluative judgment and supports transparency.
Level 4 (Full AI): Students provide the article to a generative AI tool and ask it to generate a summary, analyze key arguments, and highlight patterns across sections. Students then evaluate the output, refine or restructure it, and add their own insights. The final submission must include a brief commentary on how they directed and modified the AI output. This meets Level 4 expectations by requiring strategic prompting and critical use of AI, while maintaining a focus on demonstrating student learning and judgment.
Level 5 (AI Exploration): Students begin by asking AI to compare the article with another thematically related resource or topic that the AI or student recommends, then generate counterarguments or extensions. Students co-design the task’s final form, for example, they might decide to submit a debate script, podcast outline, or multimedia synthesis. The task ends with a presentation or write-up explaining the process, choices made, and the impact of generative AI in shaping their understanding. This embraces the co-creation and innovation at the heart of Level 5. It fosters deeper critical thinking, ethical reasoning, and creative integration of AI capabilities across tasks.
If you’re interested in learning more about CyberSandwich, watch for an upcoming guest article on it by Adam Moler!
Seen this way, the CyberSandwich is no longer a single rigid activity but a flexible container that adapts to the teacher’s instructional goals and the AIAS level set for the assignment. This approach not only clarifies expectations but also demonstrates to students how AI can be used responsibly across a spectrum, from simple editing to full creative exploration, without blurring into cheating.
Integrating the AI Assessment Scale into a Syllabus
One of the most practical ways for teachers to establish clarity around AI use is to embed the AI Assessment Scale (AIAS) directly into their syllabus. Academic honesty sections traditionally explain plagiarism and citation expectations; in today’s classrooms, they also need to address where and how AI is permitted. The AIAS gives teachers a ready-made framework: each assignment can be labeled with a level (1–5), so students know in advance whether they are expected to work without AI, allowed to use it for planning, or encouraged to explore creative applications, based on the teacher’s objectives for the assignment.
This integration serves two purposes. First, it makes academic integrity transparent: students no longer have to guess what “appropriate” AI use looks like because it is spelled out in plain language. Second, it positions AI not just as a risk for dishonesty but as a tool for learning. Teachers can highlight how different levels of the scale connect to different learning goals: for example, Level 1 (No AI) emphasizes recall and core skills, while Level 3 (AI Collaboration) stresses critical evaluation and revision. Over the course of a school year, assignments can rotate across levels so students learn to both safeguard their independent abilities and develop mature, responsible AI skills.
Ultimately, weaving the AIAS into a syllabus communicates that AI is part of the learning environment rather than an exception to it. Instead of policing students through suspicion, teachers create a shared framework that both supports honesty and helps students practice the kinds of AI literacy they will need well beyond the classroom.
Here’s an example of what it might look like in your syllabus:
“This course will use the AI Assessment Scale (AIAS) to set clear expectations for how artificial intelligence tools may or may not be used on assignments. Each task will indicate the AIAS level that applies:
Level 1 – No AI: You may not use AI in any form. Work must reflect only your own knowledge and skills.
Level 2 – AI Planning: You may use AI to brainstorm, outline, gather initial ideas, or otherwise make the material more accessible, but your work must be developed and refined independently.
Level 3 – AI Collaboration: You may use AI to assist with specific tasks such as drafting or revising, but you are responsible for evaluating, editing, and improving any AI-generated content.
Level 4 – Full AI: You may use AI extensively throughout your work, provided you remain in control of the ideas and demonstrate critical thinking in how you apply AI outputs.
Level 5 – AI Exploration: You are encouraged to use AI creatively to solve problems or explore new approaches, potentially co-designing with your instructor.
Unless otherwise specified, the default expectation is Level 2 (AI Planning), and you are encouraged to work at this level. If you are unsure how AI use applies to an assignment, you must ask the instructor before proceeding. Failure to follow AIAS Level guidelines will be considered Academic Dishonesty.”
Conclusion
The point of the AI Assessment Scale is not to prescribe one “right” way to use AI, but to make expectations transparent and purposeful. When teachers and students share a clear framework, the energy that might have been spent on uncertainty or suspicion can instead go into learning itself. The CyberSandwich example shows how even a familiar EduProtocol can take on new dimensions when paired with the AIAS, sometimes focusing on unaided comprehension, other times on partnered creativity.
In this way, the framework is both protective and expansive. It protects core skills by ensuring there are spaces where students demonstrate independent mastery of specific, identified skills. At the same time, it expands possibilities by showing how AI can be integrated ethically and creatively into classroom practice. Rather than treating AI as either a threat or a shortcut, the AIAS helps position it as a tool for thinking: one that requires intention, reflection, and clear boundaries.
Ultimately, moving toward a shared framework is less about drawing hard lines and more about creating a common map. With it, teachers can design assignments that align with their goals, students can navigate expectations with confidence, and schools can cultivate a culture where AI is not just managed, but meaningfully learned with.
Your turn: want a custom plan?
Comment with:
Grade + subject
Assignment type (essay, lab, discussion, etc.)
Skill to assess (write clearly, synthesize content, argument quality, etc.)
Your allowed AI level (if unsure, say “recommend”)
I’ll reply with an AIAS level, guardrails, and an EduProtocol adaptation you can use tomorrow.
The AI Assessment Scale (AIAS) is already being used in schools and universities worldwide to create consistency, spark productive conversations, and give students the clarity they need when working with AI. If you’d like to explore the framework in more depth, including full descriptions of the five levels, case studies, and ready-to-use assets, visit aiassessmentscale.com.