My Exploration Of How Copilot Can Help Blind Users – Part 2 – Has Access to M365 Copilot Influenced My Use of Other Adaptive Technologies?
The first article in this series, “My Exploration Of How Copilot Can Help Blind Users”, was written almost two years ago. My feelings and observations remain the same, and my use of AI has only increased over time. However, at work, as part of the pilot we are running, I was recently asked: has access to M365 Copilot influenced my use of other adaptive technologies? Since I spent time working through this question, I felt it would make a natural progression in my article series on Copilot from a blindness perspective.
So has it? Yes and no. The answer is more nuanced than a simple replacement story.
M365 Copilot hasn’t replaced my screen reader, magnification tools, or the accessibility features built into my system. I still use them every day. When I look at where it has really been impactful, what Copilot has replaced is the manual workarounds. It’s reduced how often I need to ask colleagues for help. It’s eliminated time spent navigating inherently inaccessible content.
To give a couple of examples of this: Much of my role at work involves working with other organisations on short timelines. This means regularly receiving documents from other areas that haven’t been formatted accessibly. I receive PDFs that are image-based scans rather than searchable text. I receive documents with poor heading structures, tables that don’t make sense with a screen reader, and files that were created without any accessibility consideration.
Previously, I either asked someone else to help me with the document, requested it in an accessible format when timelines allowed, or spent significant time working with it in its inaccessible form. I’d navigate through it trying to extract information. If it was a scanned image with complex graphics or charts, it was completely inaccessible to me. I’d either need to request a reformatted version (adding delay) or find workarounds.
Now I can upload these documents to Copilot and ask it to help me work with them as they exist. For scanned documents, Copilot can interpret complex graphics, describe images, or extract the text. For poorly structured documents, Copilot can help me understand the information and reorganize it. This is critical because I often don’t have the authority, or the time, to demand that other areas reformat their materials. I need to work with what exists.
Real example: another team recently shared a procedure document created in their preferred format with minimal accessibility consideration. It had a visual flow chart, inconsistent formatting, tables that were hard to parse, and no clear structure for a screen reader user. Rather than spending an hour navigating through it, I asked Copilot to summarize the key steps and help me understand the process. I could also ask it to work through the flow chart and give me a bulleted or numbered list of its steps. I got clarity in 15 minutes.
This isn’t perfect. Working with inaccessible content won’t always produce something workable, and the results have to be questioned. However, the alternative is being less efficient and less capable.
Sure, these barriers wouldn’t exist if organisations followed their own digital accessibility guidelines, purchased accessible systems, and created accessible content. The reality, however, is that according to one scan, 90 percent of the documents we share don’t meet even the most minimal accessibility compliance. That’s despite legal requirements, and despite tools for creating accessible documents being built into publishing systems (for example, the Accessibility Checker in Microsoft Word and M365).
To give another example of where Copilot shines: departmental updates, interdepartmental messages, and official communications often include complex formatting and referenced attachments. Email management in such settings means navigating long email chains with multiple layers of replies, understanding who said what, and tracking action items.
Copilot helps me manage this quickly. I ask it to summarize email threads, extract action items assigned to me, or clarify what decisions were made. An email thread that a sighted colleague understands by scanning in 2 minutes might require 10 to 15 minutes of screen reader navigation for me to fully work through.
What I haven’t mentioned yet but is equally important: much of my previous time consumption involved asking colleagues for help. “Can you describe what’s in this document?” “What does this table show?” “Can you help me understand the structure here?” Each question interrupts their work. Each represents a moment where I’m not independent. With Copilot, I’m resolving information access independently, which changes not just my time but my working relationships and my own sense of professional autonomy.
What Copilot has done is narrow the gap between how long work takes me and how long the same work takes sighted colleagues. Before Copilot, I was doing my job plus solving accessibility problems created by systems and documents that weren’t designed for me. This meant I regularly spent five or more hours of unpaid overtime a week just to stay competitive. Now I’m doing my job at roughly the same pace as everyone else, without the extra overtime or the embarrassing requests for help.
That’s the meaningful change: not that I’m using different tools, but that I’m spending my time on actual work rather than on access itself.