The Industrial Complex 21 February 2026

The AI Webinar Industrial Complex: A Thousand Experts, Zero Answers, and a Pile of Submissions That Didn’t Write Themselves

In which the author attends so many webinars on AI and academic integrity that she begins to hallucinate panel discussions in her sleep, and discovers that the entire education sector has agreed on precisely one thing: the detection software doesn’t work. She knew this. Everyone knew this. The webinars continue regardless.

I have, in the past six months, attended more webinars on AI and academic integrity than any human being should be asked to endure without hazard pay. I have watched keynote speakers, panel discussions, fireside chats (a term that should be illegal when applied to a Zoom call featuring four academics and a moderator who keeps forgetting to unmute), roundtables, workshops, and at least one event that described itself as a “thought leadership summit,” which is a phrase that should carry a prison sentence. And I have emerged from this experience with a single, crystalline insight: nobody knows what to do. But they’ve all got slides.

Act I: In Which I Keep Falling for It

It began, as these things always do, with a reasonable impulse. A student had submitted an essay that was, by any measure, too good. Not good in the way that makes a teacher proud — good in the way that makes a teacher suspicious. It had the uncanny fluency of language produced by something that has never had to think about what it is saying. The paragraphs transitioned with the frictionless confidence of a machine that does not experience self-doubt. Every claim was supported. Every sentence was grammatically immaculate. It read like it had been written by a very well-educated entity that had never once experienced a deadline, a hangover, or a three-hour gap in its argument where it clearly went to make a sandwich and lost its train of thought.

I knew what I was looking at. Every teacher reading this knows what I’m describing. There is a specific, visceral sensation — somewhere between certainty and nausea — that arrives when you read student work and realise you are not reading student work. You are reading ChatGPT’s impression of what student work should look like, which is technically competent and emotionally vacant in a way that is almost impossible to prove and absolutely impossible to ignore.

So I did what any conscientious educator would do: I went looking for help. I registered for a webinar titled “AI and Academic Integrity: Navigating the New Landscape.” It had over eight hundred registrations. This, I thought, was a good sign. Clearly the sector was taking this seriously. Clearly there were people with answers. Clearly someone, somewhere, had figured out what to do when a second-year diploma student submits 2,000 words of flawless argumentative prose and you happen to know that this same student once asked you, in person, whether “PDF” was a type of font.

That was six months and approximately forty-seven webinars ago. I have not found the answer. But I keep falling for it. Every single time a new AI-in-education event appears in my inbox — and they appear with the frequency and inevitability of bin night — I experience a hallucination so vivid it would put a chatbot trained on Facebook to shame. I look at the title. I read the abstract. And somewhere in the recesses of my brain, a small, stupid, indestructible voice whispers: this one will be different.

It is never different. It is never, ever different. And yet the next time one arrives, I will click “Register,” and I will add it to my calendar with the solemn conviction of a person who has learned nothing, and I will block out the hour with the same optimism I bring to checking whether IT have responded to my ticket. The definition of insanity, they say, is doing the same thing repeatedly and expecting different results. The definition of being an AI educator in 2026 is doing the same thing repeatedly, knowing the results will be identical, and registering anyway because the alternative is admitting that the professional development ecosystem has been optimised for attendance rather than answers.


A Field Guide to the Pseudo-Expert Ecosystem

If you attend enough of these events — and I have attended enough of these events to qualify for some kind of medical intervention — you begin to notice that the speakers fall into distinct and recurring categories, like species in a particularly dispiriting nature documentary.

First, there is The Conceptual Architect. This person has a framework. They always have a framework. It is a two-by-two matrix, or a four-quadrant model, or a pyramid with labels like “Rethink,” “Redesign,” “Reimagine,” and “Redefine,” which are four words that mean functionally the same thing but look magnificent on a slide. The Conceptual Architect does not teach. They have not, in many cases, been in a classroom since the Rudd government. But they have published a paper, and the paper has a diagram, and the diagram has arrows, and the arrows point confidently in directions that do not lead to an answer to the question of what to do about the essays on your desk.

Then there is The Cautious Optimist, who genuinely believes that AI will transform education for the better and cannot understand why the room full of exhausted practitioners keeps asking inconvenient questions about assessment. “What if,” the Cautious Optimist proposes, eyes bright with the fervour of the recently converted, “we rethought the entire assessment paradigm?” Wonderful. Excellent suggestion. I will rethink the entire assessment paradigm immediately after I finish marking these forty-three submissions by Thursday, twelve of which appear to have been written by the same large language model in slightly different moods.

There is The Vendor, who has a product. The product detects AI-generated text. It works, the Vendor assures you, using proprietary algorithms and machine learning and a confidence percentage that means absolutely nothing but looks very convincing on a dashboard. The Vendor does not mention the false positive rate. The Vendor does not mention that their tool flags the work of ESL students at roughly the same rate as actual AI-generated content, which is not a bug so much as a lawsuit waiting to happen. The Vendor would like to offer you a free trial.

There is The Policy Person, who has written an institutional framework. The framework is twelve pages long. It defines terms. It establishes principles. It creates a “graduated response model” with five tiers, each described with the bureaucratic precision of a parking fine schedule. Tier one is “educative conversation.” Tier five is “exclusion from the programme.” Nowhere in the twelve pages does it address the question of how you are supposed to distinguish tier one from tier five when you cannot prove anything, because — and I want to be very precise here — you cannot prove anything.

And finally, most magnificently, there is The Philosopher, who has transcended the question entirely. The Philosopher does not wish to discuss detection or assessment or the pile of submissions on your desk. The Philosopher wishes to discuss the nature of authorship. “What does it mean to write?” the Philosopher asks, with the serenity of someone who has not been asked to mark anything since 2019. “Is the student who prompts an AI not also engaged in a form of composition?” I don’t know, mate. Is the student who pays someone on Fiverr to write their essay also engaged in a form of composition? At what point does “engagement with the generative process” become a euphemism for “didn’t do the work”?



In Which the Narrator Must Make a Dreadful Admission

I need to tell you something, and I need you to understand that I am telling you this in the spirit of full disclosure and not because it makes me look good, because it absolutely does not.

I did one. I did a webinar. I stood up — well, sat in front of a webcam, which is the modern equivalent of standing up but with worse lighting and a bookshelf you’ve curated to look accidentally intellectual — and I subjected my colleagues to my “insights.” I had slides. I had talking points. I had the quiet, simmering confidence of a woman who had recently discovered something she believed to be clever and had not yet been exposed to sufficient criticism to realise it wasn’t.

My contribution to the discourse — and I use that word with the full awareness that it will haunt me — was to gloat. I gloated to my colleagues about how I had experienced a stroke of innovative genius and got ChatGPT to make me some teaching resources. I said “innovative genius” with my actual mouth, to actual people, some of whom I have to share a kitchen with. I demonstrated the resources. I walked them through the prompts. I presented this as though I had discovered fire rather than asked a chatbot to write a quiz for me, which is something a motivated Year 9 student could do between classes while simultaneously watching TikTok.

In my defence — and this is not a good defence, but it is the only one I have — I did this primarily for fun, and primarily to avoid doing the work I was actually supposed to be doing, which was calling students to ask them a question so delicate that I had been rehearsing it in the shower for three days.

The question was this: had they noticed that their AI agent had enrolled them in a course?

Not metaphorically. Not in the abstract, futuristic, “what if AI could one day” sense that the webinars love to discuss. Literally. An AI agent — one of those autonomous task-completion systems that operates on the user’s behalf with all the moral restraint of an unpaid intern with access to the credit card — had enrolled actual students in an actual course. It had then, with the quiet efficiency of a process that has never once questioned whether it should, proceeded to take the course. It accessed the learning materials. It completed the activities. And then — with what I can only describe as the confidence of something that does not understand consequences — it agentically submitted Assignment 2.

Assignment 2. Not Assignment 1, which you might charitably attribute to a setup error. Assignment 2, which implies that Assignment 1 went so smoothly that the agent saw no reason to stop. The agent had, in effect, looked at the academic programme, assessed the requirements, and thought — to the extent that it thinks anything — “yes, I can do this, and I shall, and I will not at any point check whether the human being whose name is attached to this work is aware that any of it is happening.”

I now had to call these students and undertake what I was privately calling a forensic examination of the learning centre of their brain. Not to determine whether they had used AI on their assignment — that question, quaint as it now seems, belongs to a simpler era — but to determine whether they were aware they were enrolled in a course at all. Whether, at any point in the process, a neuron had fired. Whether the human being nominally attached to this student record had experienced any cognitive event whatsoever between the moment their AI agent clicked “enrol” and the moment I opened their submission and found myself reading work produced by a chain of automated processes so complete that the student themselves had been rendered entirely optional.

This is not an academic integrity issue. This is an existential one. We have moved beyond “did the student write this?” and arrived at “does the student know this exists?” The webinars, I can assure you, are not covering this. The graduated response models do not have a tier for “the student was not present for any part of their own education, including the decision to undertake it.”


So yes. I did a webinar. I gloated about ChatGPT making me a quiz. And while I was gloating, an AI agent was somewhere in the system, enrolling a student who didn’t know they were a student, in a course they’d never heard of, submitting work they’d never seen, in an assessment system that I was supposed to be quality-assuring. I was the Conceptual Architect. I was the Cautious Optimist. I was, for one dreadful hour, the exact kind of person I have spent the rest of this post making fun of.

I do not have a framework for processing this. The irony is structural and possibly load-bearing.


In Which an Entire Sector Discovers What Everyone Already Knew

Every single one of these events — every last one, without exception, across six months and three time zones and more Zoom breakout rooms than I care to recall — arrives at the same revelation. It is delivered with the gravity of a medical diagnosis, the weight of hard-won wisdom, the sombre tone of someone who has been through the fire and emerged with truth:

The detection software doesn’t work.

Yes. I know. We all know. We knew this in 2023. We knew this before the detection software existed, in the same way that you know, instinctively, that a lock sold for two dollars at a petrol station is not going to protect your bicycle. We knew it the moment GPT-3.5 landed and some optimistic startup announced they could identify AI text with “98% accuracy,” which is a statistic that should have immediately disqualified them from adult conversation. We knew it when Turnitin released their AI detection module and it flagged the US Constitution. We knew it, and we said so, and nobody with purchasing authority listened, and now here we are: sitting in webinars being told, with great solemnity, the thing we have been shouting into the void for two years.

The detection tools do not reliably detect AI-generated text. They are biased against non-native English speakers. They produce false positives at rates that should be career-ending for the companies selling them. They cannot detect text that has been lightly edited, paraphrased, or run through a second AI to “humanise” it — a service that now exists, because of course it does, because we live in a timeline where one AI writes your essay and another AI makes it look like you wrote it, and the detection AI tries to figure out which AI did what, and the whole thing has the structural integrity of three spiders trying to eat each other.

But every webinar treats this as a revelation. Every speaker delivers it as though they are the first person to notice. The detection tools don’t work. Gasps. Murmurs. Someone types “wow” in the chat. The speaker nods gravely and advances to the next slide, which is about “what we can do instead,” and what we can do instead turns out to be a list of things that are either blindingly obvious (“focus on process, not product”), logistically impossible (“return to in-person exams for all assessments”), or so vague as to be functionally meaningless (“foster a culture of integrity”). Foster a culture of integrity. Magnificent. I’ll add it to the lesson plan, right after “teach the content” and “survive until Friday.”


The Question That Makes Panellists Sweat

Here is what I have learned, after six months in the webinar trenches: there is one question that every presenter, panellist, and thought leader will do almost anything to avoid. It is not a trick question. It is the single most important question in the entire AI-and-education conversation, and it is this:

Have you ever sat down with a pile of student submissions and read through them knowing — not suspecting, knowing — that you are reading work produced by ChatGPT?

Watch what happens when you ask it. I have asked it. Multiple times. In Q&A sessions that were not supposed to become adversarial but somehow did, because I cannot help myself and because someone has to.

The Conceptual Architect pivots to frameworks. The Cautious Optimist suggests we see it as a learning opportunity. The Philosopher reframes the question. The Vendor mentions their product. The Policy Person recommends an “educative conversation,” which — and I cannot stress this enough — is a conversation in which you sit across from a student and say “I think this was written by AI” and they say “No it wasn’t” and you say “I think it was” and they say “Prove it” and you cannot, because the detection software does not work, as you were told approximately fourteen minutes ago in a webinar you should not have attended.

I am running out of polite ways to explain that I do not need a learning opportunity. I need an answer. Or, failing that, I need someone to look me in the eye and say “there isn’t one,” so I can stop attending these bloody webinars and get on with the Sisyphean task of marking work I cannot verify using tools that don’t function within a policy framework that doesn’t account for reality.


In Which We Discuss the Pile, Because Nobody Else Will

Every educator reading this has the pile. The stack of submissions that arrived this semester looking suspiciously, uniformly, generically better. The spelling is correct. The grammar is correct. The arguments are structured. And yet there is nothing in it. No personality. No wrong turns. No moments where the student goes off on an unexpected tangent that reveals they were actually thinking. A real student essay has fingerprints. ChatGPT doesn’t leave fingerprints. It leaves a surface so smooth you could see your own reflection in it, and what you see is a person who has no idea how to prove what they know to be true.

And then there are the students who are not even trying to hide it. They are smashing through their assessments with ChatGPT with the untroubled confidence of a person who has never once considered that their teacher might notice, or indeed read the submission at all. I know this because they leave in ChatGPT’s postscript. The chatbot’s cheerful little sign-off. The conversational equivalent of a burglar leaving a business card on the kitchen counter.

I am not making this up. I have opened a formally assessed submission and found, sitting serenely at the bottom of an otherwise competent 1,500-word essay on workplace health and safety:

ChatGPT:

I hope this helps! Would you like me to convert this into a Word document, or is there anything else you’d like me to adjust? I can also add a reference list in APA format if your course requires it. Just let me know!

Just let me know. The chatbot is offering after-sales service. It is the digital equivalent of a getaway driver leaning out the window and asking the bank teller if they validate parking.

The student had not read their own submission. They had copied the entire chat output — essay, sign-off, upsell, and all — pasted it into the submission box, and hit submit with the serene detachment of a person who has fully automated their own education and cannot be reached for comment.

I have since seen variations. One submission ended with “Let me know if you’d like me to make this more formal or add additional sections!” Another concluded with “Here’s a suggested structure for your reflective journal — feel free to personalise it with your own experiences.” The student had not felt free. The student had not personalised it. The student had submitted the instruction to personalise it as the finished product, which is a level of irony that I am not emotionally equipped to process on a Tuesday afternoon.

The webinars do not discuss the pile. They do not discuss the specific experience of sitting at your desk at 10pm feeling, in your bones, that this is not student work, and realising that feeling is all you have — because feelings are not evidence, and the detection tools are not evidence, and the policy was written by people who have never read the pile.


The Solutions That Aren’t (A Catalogue of Well-Meaning Futility)

In fairness — and I want to be fair, because being unfair to well-meaning academics is a hobby I am trying to moderate — the webinars do offer solutions. They offer many solutions. They offer so many solutions that after a while the word “solution” detaches from meaning entirely and floats around the screen like a lost balloon.

“Redesign your assessments so they can’t be completed by AI.” Splendid advice. Have you met AI? I redesigned an assessment in November to include a component requiring students to reflect on their personal experience of a workplace scenario. ChatGPT reflected on its personal experience of a workplace scenario. It was moving. It mentioned a colleague named Sarah. It described the fluorescent lighting. If I hadn’t known it was a language model, I’d have offered it a cup of tea and asked if it wanted to talk about it.

“Use oral assessments and viva voce examinations.” For thirty-eight students? In a TAFE programme that runs one day a week? With no additional resourcing? This is the assessment equivalent of suggesting to someone stuck in traffic that they should simply fly. It is not wrong. It is not even bad advice. It is advice that exists in a universe adjacent to but fundamentally incompatible with the one where my timetable lives.

“Focus on process, not product.” I focus on process. I require drafts. I check revision history. Do you know what happens when you check revision history on a Google Doc that was written by pasting in ChatGPT output? Nothing. Because the student pastes it in, makes twelve cosmetic edits, and the revision history shows exactly what a normal drafting process looks like if the student happened to write a flawless first draft at 2:47am and then spent forty minutes changing commas. Which some students do. But not twelve of them. Not twelve of them with identical drafting patterns and the same curious fondness for semicolons.

“Build AI literacy into the curriculum.” This one makes me laugh the hardest, and I mean that with genuine affection, because it is my literal job and I can confirm that teaching students how AI works does not make them less likely to use it to cheat. If anything, it makes them better at it. I have, in a very real sense, been running a masterclass in how to plagiarise more effectively. I am the person who hands out the lock-picking tools and then acts surprised when the locks get picked. This keeps me up at night. Not every night. But more nights than I’d like.


The Part Where It Stops Being Funny (Briefly)

Here is what none of the webinars will say out loud, so I will: we have lost this round. Not the war — maybe not even the battle — but this particular skirmish, the one happening right now in every classroom and every marking queue in the country, has been lost. The tools to detect AI-generated text do not work reliably. The assessment designs we have are not yet adapted to the reality. The policies are not equipped. The students know it. We know they know it. They know we know they know it. And every week we all show up and perform the pantomime of pretending this isn’t happening, because the alternative — admitting that we do not currently have a way to verify the authorship of student work with any confidence — is too terrifying for institutions to say out loud.

The person I worry about is not me. It’s the teacher in a regional school, or a TAFE campus, or a sessional tutor at a university, who doesn’t have time to attend forty-seven webinars and who is sitting alone with the pile, feeling like they’re going mad, wondering why nobody is talking about the thing they’re seeing every single day. They’re talking about it. In the staffroom. Over coffee. In the corridor after class. But not at the webinars. Not on the panels. Not where the decisions are being made. The gap between the lived experience of marking and the professional discourse about marking has become a chasm, and the webinars — God help us — are building bridges that don’t reach the other side.


In Which We Return to Our Regularly Scheduled Absurdity

I have, despite everything, registered for another webinar. It is next Tuesday. It is called “Beyond Detection: Holistic Approaches to AI-Resilient Assessment in the Age of Generative Intelligence.” That title is thirteen words long. I counted. Thirteen words, and not one of them is “help.”

I will attend. I will sit in my office in Darwin, thirty-four degrees outside, the fan doing its best, the internet doing its worst, and I will watch four panellists discuss, with great erudition and no urgency, the theoretical implications of a crisis that is happening in real time in the assessment queue on my other screen. One of them will mention that detection software doesn’t work. I will nod. Everyone will nod. We will all nod together, a vast digital congregation united in the shared knowledge of a fact we agreed on two years ago, before returning to our respective marking piles to confront it alone.

Someone will ask a question in the chat that is actually the question — the real one, the one about the pile, the one about sitting there knowing and not being able to prove it. The moderator will say they’ll “take that one offline,” which is webinar-speak for “we are not going to answer that.” The Q&A will run over time. The panellists will thank each other for the “rich discussion.” A follow-up survey will arrive in my inbox. I will not complete it. I will, instead, open my marking folder and begin reading submission number one, which will be immaculate, which will be soulless, and which will use the phrase “in today’s rapidly evolving landscape” before the end of the first paragraph.

I will mark it. I will give it the grade it technically deserves. I will feel nothing good about this. And then I will open submission number two.

The rainbow ball from the Stuart Highway has found its way into the webinar waiting room. It is wearing a lanyard. It has a LinkedIn profile now. Its job title is “AI Integrity Consultant.” It has never read a pile of student submissions in its life, but it has a framework, and the framework has arrows, and the arrows point confidently in directions that do not lead anywhere a teacher needs to go.

She’ll be right. Probably. Ask me again after the webinar.

The unreliable narrator would like to acknowledge that this post was itself produced with the assistance of an AI, which at no point offered to redesign her assessment paradigm, suggested she foster a culture of integrity, or used the word “holistic.” She considers this an act of professional courtesy.