The Blind Spot

April 17, 2026

TL;DR
  • AI can give confident but incorrect answers when key context is missing
  • It does not indicate or warn about missing information
  • In construction (AEC), this can lead to serious real-world consequences
  • Errors can directly impact projects, safety, and costs
  • The real issue lies in incomplete inputs, not just flawed outputs
  • Firms should audit the inputs given to AI, not just review results
  • Ensuring full context is critical to making AI reliable and safe

What your AI model doesn’t know about your project — and why it won’t tell you.

A few months ago I uploaded my blood work results to Claude.

Multiple markers flagged. Claude gave me a careful read — and, with the appropriate caveat that this wasn’t medical advice, told me the results looked concerning. I almost booked an urgent appointment with my doctor.

Then I added one piece of information I’d forgotten to include: I had run a marathon two days before the lab test. Fed it the same results.

Different answer. Completely different read on identical data. The abnormal markers were suddenly explainable — expected, even, for someone two days post-race.

One sentence of context flipped the entire diagnosis.

Here’s what scared me more than the first answer: the AI didn’t know it was missing something. It read what I gave it and produced a confident response. No flag that said “I might be missing context.” No prompt asking what I’d been doing physically. Just a clean, thorough-looking answer based on incomplete information.

AEC Has a Version of This Problem

In Dodge Construction Network’s 2025 survey, 87% of contractors said they believe AI will have a meaningful impact on their business. I believe it too. AI is already being used for cost estimation, structural analysis, preconstruction planning, and spec reviews.

What almost nobody is talking about is what happens when the AI gets it wrong because it didn’t have the full picture.

Construction Dive ran a piece in March about AI hallucinations in infrastructure data. Not theoretical — actual AI outputs, containing errors the system had no awareness of, being fed into real construction workflows. The framing was simple: when AI hallucinates in a word processor, you get a bad paragraph. When it hallucinates in a construction data workflow, you get a bad building.

That’s not hyperbole. That’s load-bearing.

The Problem Isn’t the Output. It’s the Input.

My blood work story is actually the optimistic version of this problem. I knew what I’d done two days before the test. I could provide the missing context. I just forgot to.

In AEC, the missing context often isn’t that recoverable. The AI doesn’t know the soil report from the 1987 expansion is buried in a folder nobody indexed. It doesn’t know the client informally agreed to non-standard load specs in a meeting three months ago. It doesn’t know the building has a legacy HVAC configuration that changes every structural assumption in the model.

It reads what you give it. Produces a confident output. Moves on.

And here’s the part that should keep AEC leaders up at night: you don’t know what the AI doesn’t know. It won’t tell you. It won’t flag the gap. It will give you a clean, well-formatted, apparently complete answer — and the missing context will be invisible.

Same Data, Different Answer

The other thing my blood work experiment showed me: for the same inputs, I sometimes get different answers on different days. Same file. Same question. Different read.

I’ve tested this. It’s not consistent.
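
You can reproduce this in a few lines. Here is a minimal sketch, assuming the Anthropic Python SDK since Claude is what I was using; the model name and prompt are illustrative, and any provider’s API would show the same behavior. Most APIs sample with a nonzero temperature by default, which is part of why identical inputs can produce different outputs.

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    PROMPT = "Here are my blood test results: <paste results>. What stands out?"

    def ask(prompt: str) -> str:
        # Default sampling settings: the model is free to vary between runs.
        response = client.messages.create(
            model="claude-3-5-sonnet-20241022",  # illustrative; use whatever you deploy
            max_tokens=500,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.content[0].text

    first = ask(PROMPT)
    second = ask(PROMPT)

    # Identical input, two separate calls. Diff the answers yourself.
    print("Run 1 and run 2 match:", first == second)

Lowering the sampling temperature narrows that variance but does not reliably eliminate it. And it does nothing about missing context, which is the deeper problem.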

For personal health queries where I ultimately defer to my doctor, that inconsistency is manageable. I treat AI as a starting point for a conversation, not a verdict.

That’s not a luxury you have in construction. Timelines, bids, permits, contracts — these move fast. An AI-generated structural estimate doesn’t stay theoretical for long. Someone is going to act on it.

What the Disciplined Firms Are Building

The answer isn’t to slow down AI adoption. The 87% are right: AI will reshape the industry. The question is what accountability layer you build alongside it.

The firms getting this right aren’t just reviewing AI outputs. They’re auditing AI inputs. Before any AI-generated estimate or analysis goes to a client, someone asks: what context did we give this model? What’s sitting in this project that the AI didn’t see? Where might it have filled in a gap with something plausible — but wrong?

That’s a different kind of QA. It’s not checking the answer. It’s checking the question.
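
That audit can start as something embarrassingly simple. Below is a hypothetical sketch of a pre-flight check, not a description of any firm’s actual tooling: a checklist of project context an estimate prompt must carry before anyone sends it to a model. Every item and name in it is invented for illustration; a real list would come from your own document register.

    from dataclasses import dataclass, field

    # Hypothetical pre-flight audit: before an AI-generated estimate goes out,
    # verify that the prompt actually carried the context the project depends on.
    REQUIRED_CONTEXT = [
        "geotechnical / soil reports (all phases, including old expansions)",
        "client-approved deviations from standard load specs",
        "as-built mechanical configuration (HVAC, legacy systems)",
        "current code and permit constraints",
    ]

    @dataclass
    class PromptAudit:
        prompt_text: str
        provided_context: set = field(default_factory=set)

        def missing(self) -> list:
            # Items the model never saw. The model won't flag these; you have to.
            return [item for item in REQUIRED_CONTEXT
                    if item not in self.provided_context]

    audit = PromptAudit(
        prompt_text="Estimate structural steel for the Building C retrofit...",
        provided_context={"current code and permit constraints"},
    )

    for gap in audit.missing():
        print(f"AI never saw: {gap}")

The code is trivial on purpose. The point is that “what did we give the model?” becomes a gate someone has to pass, not a question someone might remember to ask.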

I eventually had a perfectly normal conversation with my doctor. Marathon recovery, as expected. Nothing urgent.

But I think about the version of that story where I didn’t know to add the context. Where I just took the first answer and acted on it.

In professional services, your clients are uploading their version of the blood work every day — incomplete information, missing history, unstated constraints. Your AI produces a confident answer. The question is whether anyone on your team knows to ask: what did we leave out?

In construction, the missing context doesn’t give you a wrong paragraph.

It gives you a wrong building.
