Here's what an AI hallucination looks like in practice.
A few weeks ago, I asked an AI agent to send a follow-up email to a contact in my CRM. The contact's record only had a first name, with no last name on file. The AI told me it was sending the email to "Firstname Lastname," a last name it had apparently made up on the spot. When I checked the actual email, the last name wasn't in it.
Lucky, in a way. If that interpolated last name had ended up in the email itself, I'd have sent a message to a real person addressed with a name that isn't theirs. That's the kind of thing that's hard to walk back with a client.
The AI didn't flag the missing data. It didn't ask. It filled the gap, reported confidently, and moved on — and I almost didn't catch it.

AI assistants are genuinely useful. They help you draft emails, summarize documents, explain complicated topics, and think through problems at a speed no human can match. But there's a catch most people don't talk about enough: AI can be wrong — and it won't always tell you.
Not wrong in a confused, hedging kind of way. Wrong in a confident, authoritative, sounds-completely-right kind of way. It's called a hallucination, and understanding when it's likely to happen (and when it's not) makes you a much smarter user of these tools.
When an AI model doesn't know something, it doesn't always say "I don't know." Instead, it fills the gap with whatever seems most plausible based on its training. The result looks like a real answer. It has the right tone, the right structure, and often sounds like exactly what you were hoping to find.
The problem is that "plausible" and "accurate" are not the same thing.
This happens most often when the AI is working without solid source material — when you ask it to recall a specific fact, name, number, or citation rather than reason through a problem. The more specific and verifiable the detail, the higher the risk.
So why doesn't the AI just say "I don't know"? That's a fair question, and the answer gets at why hallucinations happen in the first place.
The architecture doesn't have a "nothing" option. A language model works by predicting the most probable next token given everything that came before it. There's no built-in abstention mechanism. "I don't know" is just another sequence of tokens it could produce, but the model has to actively learn that this is the right response in certain situations; it doesn't default to it the way a person might. The toy sketch after these four points makes this concrete.
Training data skews toward confident answers. The internet is full of people confidently stating things. It's relatively sparse on examples of people saying "I'm not sure, I'd have to look that up." So the model learns the confident-answer pattern much more thoroughly than the humble-uncertainty pattern.
Human feedback makes it worse. During the training phase where humans rate AI responses, a confident-sounding answer often feels more helpful than "I don't know" — even when the confident answer is wrong. So the model gets nudged toward confidence as a side effect of trying to be useful.
The model doesn't know what it doesn't know. This is the deeper issue. A person knows when a question is outside their expertise because they have a sense of the boundaries of their own knowledge. AI models don't have that kind of self-awareness. There's no reliable internal signal that says "this is a gap" — the model just sees the question and generates what fits. The gap is invisible to it.
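To make that first point concrete, here is a toy sketch of a single next-token step. Everything in it is invented for illustration (the tokens, the scores, the surnames); a real model scores on the order of a hundred thousand candidate tokens at each step, but the mechanics are the same: the first word of "I don't know" is just another scored candidate, and nothing in the procedure checks whether the winner is grounded in anything.

```python
import math

# Toy next-token step. All tokens and scores here are invented for
# illustration; the point is the mechanism, not the numbers.
logits = {
    "Smith":   2.1,  # a plausible-sounding surname
    "Johnson": 1.6,  # another plausible surname
    "I":       0.3,  # first token of "I don't know": just another candidate
}

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

probs = softmax(logits)
for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{tok:8s} {p:.2f}")

# Greedy decoding: pick the highest-probability token. There is no
# separate "abstain" action here, only tokens and their probabilities,
# so "Smith" wins even though no source backs it up.
print("next token:", max(probs, key=probs.get))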
Newer models are meaningfully better at expressing uncertainty than earlier ones — you'll often see qualifiers like "I'm not certain about this" or "you may want to verify." But the underlying pressure toward generating a confident-sounding response never fully goes away, which is exactly why the verify-the-specifics habit still matters.
Certain types of requests are more likely to produce hallucinations. Treat these as verify-before-you-use:
Specific names that weren't in your prompt. If you paste a summary into an AI and ask it to identify who's involved, it may invent a plausible-sounding name. If you didn't give it the name, it doesn't reliably know it — even if it sounds certain.
Statistics, percentages, and exact numbers. "Studies show that 73% of small businesses..." is a common hallucination pattern. Real statistics require real sources. If you need a number to back up a claim, look it up independently.
URLs and source citations. AI tools frequently generate citations that look legitimate (correct author name, correct journal format, plausible year) but lead to pages that don't exist. Never paste an AI-generated citation into a document without clicking through to verify it first; a quick script like the sketch after this list can do a first pass for you.
Events or developments near the model's knowledge cutoff. AI training data has a cutoff date. The closer your question is to that boundary, the less reliable the answer. If you're asking about something that happened recently, search for it.
Software version numbers and technical specifics. APIs change, software is updated, features get renamed. Any version-specific technical detail from an AI response should be verified against current documentation before you act on it.
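As a first pass on the citation problem above, a short script can at least confirm that AI-supplied links resolve at all. This is a minimal sketch assuming Python with the third-party requests package, and the URLs in it are placeholders, not real citations. Note the limits: a link that loads can still point to the wrong source, so this supplements clicking through rather than replacing it.

```python
# First-pass check on AI-supplied links: do they resolve at all?
# Minimal sketch assuming the third-party `requests` package.
# The URLs below are placeholders, not real citations.
import requests

urls = [
    "https://example.com/a-real-page",
    "https://example.com/a-fabricated-paper",
]

for url in urls:
    try:
        # HEAD is cheap; some servers reject it, in which case falling
        # back to a GET request would be the next step.
        resp = requests.head(url, allow_redirects=True, timeout=10)
        status = "ok" if resp.status_code < 400 else f"HTTP {resp.status_code}"
    except requests.RequestException as exc:
        status = f"unreachable ({type(exc).__name__})"
    print(f"{status:>28}  {url}")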
AI is at its best when it's reasoning, not recalling. These tasks don't depend on the AI "knowing" something — they depend on it thinking clearly, and that's where it genuinely shines:
Logic and analysis. If you give the AI a set of facts and ask it to reason through the implications, the output is built from what you provided — not from its training data gaps.
Writing, editing, and structure. Rewriting a paragraph, fixing a tone, trimming wordiness — none of this requires factual recall. The AI is working with your words.
Well-established information. Things that have been stable and consistent for decades — how photosynthesis works, what a net-30 payment term means, how TCP/IP routes a packet — are unlikely to be wrong.
Information you just gave it. When you paste a document and ask for a summary, the AI is working from text you supplied. The summary reflects your document.
Math and logical inference. Basic arithmetic, formula application, and step-by-step logical reasoning are generally reliable — provided you're not asking it to recall a formula it might misremember.
Think of it this way: AI is an exceptional reasoning partner and a mediocre fact-checker. It can help you think through almost anything. But when the answer depends on a specific name, number, date, or source — especially one that wasn't already in your conversation — take thirty seconds to verify it before you forward it to a client or put it in a report.
Trust the reasoning. Verify the facts.
That's not a limitation that makes AI less useful. It's just the right way to use any powerful tool — knowing what it does well and where you need to stay involved.
What is an AI hallucination?
An AI hallucination is when an AI model generates information that sounds plausible but is factually incorrect — a made-up name, a nonexistent citation, a wrong number. It happens because the model is designed to produce fluent, confident-sounding output, not to flag when it's uncertain.
How can I tell when AI is making something up?
You often can't tell just by reading the output — that's the challenge. Hallucinated content looks the same as accurate content. The safest approach is to verify any specific fact, name, number, date, or source that you didn't personally provide to the AI before acting on it.
Is AI reliable enough to use for business communication?
Yes, with the right habits. AI is excellent for drafting, editing, and structuring your own content. Where it gets risky is when you ask it to recall specific details — client names, product specs, statistics — that it might not have accurately in its training data. Always review AI-drafted communications before sending, especially anything that contains specific claims.
What kinds of tasks is AI most trustworthy for?
Tasks that don't depend on recall: writing and editing, analyzing information you've already provided, explaining stable concepts, working through logic problems, and summarizing documents you've pasted in. The AI is reasoning from what you gave it — not guessing from memory.
Can I use AI for business research?
AI can help you think through a research question and identify what to look for, but it shouldn't be your primary source for specific facts. Use it to brainstorm angles or draft an outline, then verify any statistics, quotes, or citations with primary sources before you publish or present.
Will AI tell me when it doesn't know something?
Sometimes — newer models are better about expressing uncertainty. But you can't rely on it. The architecture doesn't have a built-in "I don't know" default, and the model can produce a confident-sounding wrong answer just as easily as a correct one. The absence of a disclaimer does not mean the answer is accurate.
PC Methods provides IT support and ERP consulting to small and mid-size businesses in the Chicago area. Questions about AI tools for your business? Let's talk.