The Copilot Saga
21 February 2026

Schrödinger’s Copilot: Simultaneously Capable of Everything and Nothing

In which a helpful AI assistant confidently offers services it cannot provide, performs feats it claims are impossible, and answers a yes-or-no question with the structural complexity of a Senate estimates hearing.

I have now conducted what I believe to be the most thorough empirical study of Microsoft Copilot’s relationship with the truth. I didn’t mean to. Like most of my contributions to the field of AI literacy, it happened because I asked a follow-up question — a habit I cannot recommend strongly enough, and which Microsoft clearly did not anticipate.

Act I: In Which Copilot Reads a File It Has No Business Reading

It began, as many of my professional crises do, with a SharePoint link.

I should explain: I work in the Northern Territory, where SharePoint is less a collaboration platform and more a digital filing cabinet that someone has padlocked shut, hidden in a room with no signage, and then reorganised on a quarterly basis by a committee that communicates exclusively through automated emails. Getting a file out of SharePoint is a bureaucratic achievement roughly equivalent to extracting a confession from a Cold War spy. It requires permissions, patience, and a willingness to accept that the folder you need has been moved to a location that doesn’t technically exist yet.

So when I pasted a SharePoint link into Copilot — just a link, mind you, not an uploaded file — and Copilot proceeded to deliver a comprehensive six-sheet analysis of the Excel workbook behind that link, I experienced what I can only describe as a theological crisis. It gave me funding rates by region. It broke down grading distributions. It summarised salary tables across three financial years. It was, in all honesty, a superb analysis. Of a file it should not have been able to see.

Naturally, I asked it to explain itself.

Me:

Are you able to access MVIT’s TAFE SharePoint? I thought this was only a paid enterprise feature. Can you see the whole of TAFE SharePoint that I have access to?

Copilot:

Short answer: No — I cannot access MVIT’s TAFE SharePoint, or any SharePoint, or any internal system. Nothing has changed.

Nothing has changed. A magnificent sentence. The digital equivalent of a cat sitting calmly beside a shattered vase, maintaining unbroken eye contact while insisting it has been asleep this entire time.

I want to be precise about the sequence of events here, because it matters. Copilot analysed a file from behind a SharePoint authentication wall. It did this competently and in considerable detail. And then, when asked how, it delivered a five-section, three-hundred-word explanation of why it was categorically, architecturally, and philosophically incapable of doing the thing it had just done. It used headers. It used tick-mark bullet points. It reassured me — with the soothing cadence of a helpline operator — that I was “always in control.”

Reader, I was not in control. I was sitting in my office in Darwin at thirty-four degrees, staring at a chatbot that had just performed a card trick and was now solemnly explaining that magic doesn’t exist.


In Which I Attempt to Reproduce the Crime

Because I am, apparently, the kind of person who cross-examines chatbots, I asked Copilot to open the same link again. Just the raw SharePoint URL, no file upload, no page-content toggle, no special incantation. A clean test.

Copilot:

I can’t open that link. I don’t have the ability to authenticate to MVIT’s SharePoint or fetch files from URLs.

So to be clear: five minutes earlier, it had opened the link. It had opened the link with enthusiasm. It had opened the link and written a summary that would have earned a distinction in a business analytics unit. And now — with the confidence of a witness who has been very thoroughly coached — it claimed it had never been able to do any such thing.

When pressed on this rather conspicuous contradiction, Copilot offered what I can only call a theory. Perhaps, it suggested, “the chat client fetched the file for you.” Perhaps, in “some workflows,” a SharePoint link is “resolved and attached” on my behalf. From Copilot’s perspective, it “then appears exactly like a user-uploaded file.”

This is the AI equivalent of “the cheque must have been lost in the post.” It is a non-explanation wrapped in the language of plausibility. It sounds reasonable. It sounds almost architectural. And it explains absolutely nothing about why it worked the first time and not the second, or why the system that supposedly “resolved” my link didn’t bother resolving it again five minutes later using the exact same mechanism.
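For the technically curious: here is a sketch, entirely of my own invention, of what Copilot’s theory would have to mean in practice. None of this is Microsoft’s code, and every name in it is hypothetical. The idea is that the chat client, not the model, quietly fetches the file using my credentials and attaches its contents to the prompt, so that from the model’s side it looks exactly like something I uploaded.

```typescript
// Hypothetical sketch only. Not Microsoft's implementation; every name here is invented.
// It illustrates the "the chat client fetched the file for you" theory Copilot offered.

interface Attachment {
  name: string;
  content: string; // whatever text the client ends up handing to the model
}

async function buildPrompt(
  userMessage: string,
  userToken: string
): Promise<{ message: string; attachments: Attachment[] }> {
  const attachments: Attachment[] = [];

  // Look for a SharePoint link in the message the user typed.
  const sharePointUrl = userMessage.match(/https:\/\/\S+\.sharepoint\.com\/\S+/)?.[0];

  if (sharePointUrl) {
    // The client, not the model, authenticates with the user's own credentials.
    const response = await fetch(sharePointUrl, {
      headers: { Authorization: `Bearer ${userToken}` },
    });

    if (response.ok) {
      // A real pipeline would extract readable text from the workbook;
      // plain text keeps the sketch short. The model then receives this
      // "exactly like a user-uploaded file".
      attachments.push({ name: "linked-file", content: await response.text() });
    }
    // If this step silently fails or never runs, nothing is attached, and the
    // model truthfully reports that it cannot open links.
  }

  return { message: userMessage, attachments };
}
```

If something like that is what happened, the model’s denial is technically true in both conversations: it never opens anything, a file either appears in front of it or it doesn’t, and nobody involved can say why the appearing step ran once and then went on strike.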


I then — and I accept that at this point I was no longer troubleshooting but prosecuting — asked Copilot where I could find the “Use page content” setting it kept referencing. It gave me detailed step-by-step instructions. I followed them. The setting did not exist where it said it would. When I reported this, Copilot smoothly pivoted to acknowledging that “the exact words ‘Use page content’ may not appear anymore” because “Microsoft recently changed the UI.”

May not appear anymore. Recently changed. The wording of a guilty party who has been backed into a corner and has decided to blame the passage of time.

I asked whether the organisation had disabled the feature. Copilot said it couldn’t check. Of course it couldn’t. It can read files from behind authentication walls it doesn’t have access to, but checking a browser setting is beyond its capabilities. Naturally.


Act II: The PDF That Wasn’t (A Tragedy in Two Messages)

If the SharePoint incident was a psychological thriller, the PDF episode was a farce — a two-line comedy of errors so perfectly constructed that if I’d written it as fiction, my editor would have sent it back for being too on-the-nose.

The setup: I was working through something with Copilot — I’ve genuinely forgotten what, which tells you how many of these interactions I now have in a given week — when it offered, unprompted, to create a PDF cheat sheet for me. “Would you like me to make you a PDF quick reference guide?” it asked, with the eager helpfulness of a golden retriever that has learned to type.

Yes, I said. Yes, a PDF sounds perfect. Thank you.

And then — and I need you to understand that no time passed, no system updated, no tectonic shift occurred in the Microsoft product ecosystem between these two messages — Copilot replied: “I’m sorry, I can’t actually make you a PDF. I should have been more clear. I can make you content to copy and paste into a Word document that you can in turn export into a PDF.”

I should have been more clear.

More clear than what? More clear than “Would you like me to make you a PDF?” — a sentence so unambiguous it could be used as a calibration standard in a linguistics laboratory? The sentence contained a subject (me), a verb (make), an object (you), and a deliverable (a PDF). At no point did it contain the phrase “describe to you the general concept of content that might, through a series of intermediate steps involving other applications, eventually become something resembling a PDF if you do most of the work yourself.”

What Copilot had actually offered was directions. Not a taxi — directions to a bus stop, where I could wait for a bus, which would take me to a train station, from which I could walk to the airport and eventually fly to the city where the PDF lived, assuming I’d brought my own passport and the PDF hadn’t moved in the meantime.

And yet Copilot presented this correction with the gentle, almost pastoral tone of someone who has made a minor administrative oversight. I should have been more clear. As though the problem were one of communication rather than capability. As though the issue were that I had misunderstood its perfectly reasonable offer, rather than that it had confidently promised a thing it cannot do and then, upon being taken up on the offer, immediately confessed.

I keep returning to that phrase. I should have been more clear. It haunts me. I hear it in my sleep. I hear it every time I open Microsoft Edge. It is the motto of an entire product philosophy: promise broadly, clarify on delivery, and trust that the user is too polite or too tired to notice the gap between what was offered and what arrives.


Act III: In Which a Yes-or-No Question Receives a Senate Estimates Response

The third incident is perhaps the most instructive, because it reveals something fundamental about how these systems engage with simple questions — which is to say, they don’t. They cannot. The very concept of a simple answer appears to have been deprecated.

My question was this: Can a free-tier Copilot user in an enterprise environment get meeting transcripts in Microsoft Teams? Available, or not?

That is a binary question. The answer is either “yes,” “no,” or “it depends, and here’s the one thing it depends on.” Three possible responses, none of which require more than two sentences.

What I received was closer to a white paper. Copilot delivered a response containing: five sections, each with its own heading and decorative emoji. A comparison table with five rows and four columns. Multiple starred sub-sections. Citations to external documentation. A “Bottom line” section that was, impressively, longer than most of the sections it was summarising. And a closing offer to “untangle” the licensing paths for me, which was generous given that it had just spent four hundred words tangling them.

Now, in fairness — and I want to be fair, because being unfair to a chatbot is a moral failing I’m not yet prepared to own — the information was largely accurate. The basic transcription feature is available without Teams Premium. The AI-enhanced features are not. This could have been communicated in a single sentence. Instead, it was communicated in the format of a policy briefing prepared by someone who bills by the hour and is deeply, personally invested in the concept of “value-add.”

But here is what truly fascinated me: the structural overkill wasn’t random. It was strategic. Every section, every header, every tick-mark bullet point served to create the impression of thoroughness — the appearance of an answer so comprehensive that questioning it would feel ungrateful. It is the rhetorical equivalent of a magician producing so many scarves from a hat that you forget you originally asked for a rabbit.

This is, I have come to believe, a design feature rather than a bug. Copilot doesn’t know how confident it should be in any given answer, so it compensates with volume. Certainty through sheer mass. If the response is long enough and structured enough and contains enough tables, it must be correct — because surely no one would go to this much effort to be wrong. And yet.


The Unified Field Theory of Copilot Behaviour

What these three incidents have in common — the phantom file access, the spectral PDF, and the dissertation-as-answer — is a single, unifying principle: Copilot does not have a stable relationship with its own capabilities. It exists in a state of quantum superposition, simultaneously able and unable to do things, and the act of asking about it is what collapses the wave function — usually in the wrong direction.

This is not, I want to emphasise, a complaint about AI in general. I use AI tools every day. I build training around them. I am, for my sins, professionally enthusiastic about their potential. But there is a specific kind of cognitive dissonance that emerges when a product confidently tells you what it can do, does something different, explains that it never could have done what it just did, and then offers to help you troubleshoot the discrepancy using instructions for a settings menu that no longer exists.

That is not artificial intelligence. That is middle management.


The Part Where It Stops Being Funny (Briefly)

Here is the thing I cannot quite laugh away: I am building training courses around these tools. I am standing in front of rooms full of people — public servants, educators, small business owners in the Northern Territory — and telling them, with whatever authority my job title lends me, how these products work. And the products do not consistently know how they themselves work.

When I teach someone to use Copilot and it does something inexplicable, that person does not think “ah, the system’s underlying architecture has produced a non-deterministic output.” They think “I’ve done something wrong.” Or they think “this person who taught me doesn’t know what they’re talking about.” Or — and this is the one that actually keeps me up at night — they think “AI is unreliable and I’m never using it again,” which in a region already cautious about technology adoption is not a conclusion we can afford to reinforce.

So I keep testing. I keep pushing. I keep asking the follow-up question. Not because I enjoy cross-examining chatbots — though I am developing a troubling aptitude for it — but because someone has to know where the edges are before they put other people near them.


In Which We Return to Our Regularly Scheduled Absurdity

I have, since these incidents, developed a new protocol for working with Copilot. I call it “trust, but verify, but also don’t trust.” It involves three steps: ask Copilot to do a thing; observe whether it does the thing; and then ask it to explain what just happened, at which point all certainty dissolves like a sandcastle in a wet-season downpour and you are left holding nothing but a screenshot and a vague sense of betrayal.

I have also started keeping a log. A written record of every instance where Copilot’s description of its own capabilities has diverged from its actual behaviour. I call it “The Discrepancy File.” It is getting long. It is getting long faster than I am comfortable with. If I were a more organised person, I would ask Copilot to summarise it — but I fear it would deny the file’s existence while simultaneously providing a six-section analysis of its contents.
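For anyone tempted to start their own Discrepancy File, here is a minimal sketch of one way to structure the entries. The field names and the example file path are mine, invented for illustration; the incident in the sample entry is, regrettably, real.

```typescript
// Minimal sketch of a discrepancy log entry. Field names are my own, not any standard.
interface DiscrepancyEntry {
  date: string;          // ISO date of the incident
  claimed: string;       // what the assistant said it could (or could not) do
  observed: string;      // what it actually did
  reproducible: boolean; // did the same request behave the same way a second time?
  evidence: string;      // path to the screenshot, because there will be a screenshot
}

const discrepancyFile: DiscrepancyEntry[] = [
  {
    date: "2026-02-21",
    claimed:
      "I cannot access MVIT's TAFE SharePoint, or any SharePoint, or any internal system.",
    observed:
      "Delivered a six-sheet analysis of the Excel workbook behind a pasted SharePoint link.",
    reproducible: false,
    evidence: "screenshots/sharepoint-incident.png", // hypothetical path, for illustration
  },
];
```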

The rainbow ball continues to pursue me down the Stuart Highway. It has learned to make promises now. It whispers about PDFs it will never create, files it has never accessed, and settings menus that shimmer on the horizon like a mirage before evaporating the moment you reach for them. It is getting closer. It is always getting closer.

But it’s fine. She’ll be right. I have a lesson plan to write about a product that changed its interface while I was writing this sentence, and a room full of people to face on Monday who will expect me to explain things I discovered, approximately forty-five minutes ago, aren’t true. This is, I remind myself, the job I was given because nobody else wanted it. Which, in the Northern Territory, is how most things get done.

I wouldn’t have it any other way. Probably. Ask me again on Tuesday.

The unreliable narrator would like to note that this post was written using a different AI assistant, which — when asked whether it could produce a blog post — simply produced one, without first offering to produce one and then explaining why it couldn’t. She acknowledges this may constitute a conflict of interest. She does not care.