Garbage In, Garbage Out:
Seven Myths About LLMs—and What's Really Behind Them.
Published on February 22, 2026
The question isn’t whether LLMs are good enough.
The question is whether we’re willing to work with them well enough.
Since I started working with Large Language Models (LLMs)—both in general and specifically in the context of technical documentation—I keep running into the same objections: unusable output, false information, generic writing style.
What gets overlooked: these experiences describe results, not causes. LLMs operate on a simple but fundamental principle—the quality of their output depends directly on the quality of the input. Vague instructions produce vague results. Missing context produces answers without context.
The myths below come up again and again. They’re all rooted in real problems—but the solution isn’t better technology. It’s better use of the technology we already have.
LLMs will make my job obsolete.
The fear is real and widespread: if Large Language Models can write text, structure documents, and prepare information—what’s left for technical writers? The logical conclusion seems clear: automation means job loss. Better to keep your distance from a technology that wants to replace you.
That math doesn’t add up: LLMs aren’t replacing technical writers—but technical writers who master LLMs will replace those who don’t. Anyone who avoids the technology today out of fear of job loss achieves exactly the opposite: becoming irrelevant.
The work doesn’t disappear—it shifts. Repetitive tasks like formatting, consistency checks, and basic translation get automated, making room for what humans can still do better than any LLM—at least for now:
- Instructing LLMs precisely, reviewing their output critically, and taking responsibility for accuracy—because catching what an LLM gets wrong requires knowing what good documentation looks like
- Developing strategic information architecture rooted in genuine audience understanding—not in what a model considers statistically plausible
- Extracting implicit expert knowledge from conversations—knowledge that exists nowhere in writing and that no LLM can draw on independently
- Deciding what information is missing in the first place—a question that demands domain knowledge, user perspective, and organizational judgment
- Bridging product, development, and users; applying established terminology consistently; and checking for coherence—work that requires contextual expertise and professional judgment, not text generation
The profession isn’t dying. But the way it’s practiced is. The question isn’t whether LLMs will change the work—it’s whether you’re one of the people shaping that change, or one of the people it happens to.
LLMs hallucinate sometimes—so you can’t trust them.
Anyone who works with Large Language Models will eventually experience this: a statement sounds plausible and convincing, but turns out to be wrong. The common conclusion: LLMs hallucinate occasionally, and that’s exactly why they’re unreliable. You can never be sure when they’re telling the truth and when they’re not.
This framing has it backwards: Large Language Models don’t hallucinate sometimes—they hallucinate by default. That’s inherent to how they work: they break text into tokens and reassemble them based on probabilities. Sometimes the result is close to reality. Sometimes it isn’t.
The deciding factor is context: the more relevant information an LLM receives—background details, clear instructions, reference material—the more likely a correct result becomes. Approaches like Retrieval-Augmented Generation (RAG), which connect the model to external documents and knowledge sources, use exactly this principle: they increase the likelihood that the output reflects reality by feeding the model targeted context.
An LLM isn’t a reference tool. It’s a tool that works with the information you give it. Understanding that means treating every output as a draft that requires review—and getting more reliable results than someone who either trusts it blindly or dismisses it entirely.
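The RAG principle described above can be sketched in a few lines. This is a deliberately minimal illustration, not a real implementation: plain keyword overlap stands in for the embeddings and vector store a production system would use, and all names and example documents are invented for the sketch.

```python
# Minimal sketch of the RAG principle: retrieve relevant reference
# material first, then place it in the prompt as explicit context.
# Keyword overlap stands in for real semantic retrieval here.

def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by how many words they share with the query."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Combine retrieved context and the question into one prompt."""
    context = "\n".join(retrieve(query, documents))
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

docs = [
    "The export function supports PDF and HTML output formats.",
    "Release notes are published every second Tuesday.",
]
prompt = build_prompt(
    "Which output formats does the export function support?", docs
)
```

The point of the sketch is the shape of the prompt: the model is told to rely on supplied material rather than on its training-data probabilities, which is exactly what raises the odds of a correct answer.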
You can always tell when something was written by AI.
“That’s obviously written by AI”—this claim comes up surprisingly often, usually with great confidence. Em-dashes, certain phrasing patterns, a particular smoothness: the supposed telltale signs are quickly listed. The belief solidifies: AI text is easy to spot and clearly distinguishable from human writing.
This confidence is a classic case of survivorship bias: what you’re actually spotting is poorly prompted text—generic, with no clear style instructions, relying on the most statistically common patterns from training data. These texts stand out because they fail to meet the standards of good writing.
The em-dash example is instructive. Often cited as a dead giveaway for AI text, em-dashes are actually a marker of polished writing style—found in academic publications, quality literature, and professional texts. LLMs use them because they appear frequently in high-quality writing.
That confidence in spotting AI text is a perceptual error: you’re judging the technology by its worst examples. Anyone who claims to immediately identify AI-written text is probably missing half of it—the good half.
LLMs always write in the same style and it sounds artificial.
“I’ve tried everything—it still sounds like AI.” Many people have this experience the first time they write with LLMs. No matter the task, no matter the topic: the text feels generic, formal, interchangeable. The conclusion seems obvious: LLMs have a fixed style that can’t be controlled.
LLMs actually write in whatever style you ask for: They can write poems, craft sermons, or mirror the tone of specific publications. They follow rules—and the clearer and more precise those rules are, the better the result fits.
What gets perceived as “typical AI style” is the result of vague instructions. Without clear guidance, LLMs fall back on the most statistically common patterns in their training data—which tend to be formal, elevated structures from academic texts and quality literature. But give them concrete style instructions—tone, audience, sentence length, example texts, or even excerpts from an existing style guide—and the output adapts accordingly. An LLM writing an informal blog post behaves fundamentally differently from one writing a technical specification. Understand that, and you no longer have an AI with a fixed style—you have a tool that adapts to yours.
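The difference between a vague request and a style brief can be made concrete. The sketch below assembles an explicit brief as a plain string; the function and field names are illustrative, and a real workflow would pass the result to whatever LLM interface you use.

```python
# Sketch: turning a vague "write about X" request into an explicit
# style brief. The structure is the point, not the exact field names.

def style_brief(topic: str, *, tone: str, audience: str,
                max_sentence_words: int, term_for_reader: str) -> str:
    """Assemble a prompt that pins the style down instead of leaving it open."""
    return "\n".join([
        f"Write a short text about: {topic}",
        f"Tone: {tone}",
        f"Audience: {audience}",
        f"Keep sentences under {max_sentence_words} words.",
        f"Always call the reader '{term_for_reader}', never a synonym.",
    ])

prompt = style_brief(
    "configuring the backup schedule",
    tone="informal, direct",
    audience="first-time administrators",
    max_sentence_words=18,
    term_for_reader="user",
)
```

Every line the brief adds removes one decision the model would otherwise make by falling back on its statistically most common patterns.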
LLMs don’t follow terminology and produce inconsistent terms.
Anyone working with defined terminology runs into this quickly: the LLM uses “user,” then “end user,” then “customer”—even though there should be one clear term. The result feels unprofessional and inconsistent. The conclusion: LLMs simply can’t handle terminology requirements.
The reality is different: Without an explicit instruction, an LLM falls back on the probabilities in its training data—and those include many variants. The problem isn’t a lack of capability; it’s a lack of information. Give an LLM a clear terminology list or concrete examples in context, and it will likely apply those terms more consistently than a human switching between multiple documents. LLMs follow instructions remarkably consistently, as long as you actually provide them.
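One way to operationalize this is to wrap the model call in a small terminology layer: the glossary is injected into the prompt as an explicit rule, and a checker flags banned variants in the output for human review. Everything here is a sketch under assumed names; the glossary entries are examples, not a recommendation.

```python
# Sketch: enforcing a terminology list around an LLM call.
# The glossary is injected into the prompt; the checker flags
# banned variants in generated text so a reviewer can catch them.

GLOSSARY = {
    "user": ["end user", "customer"],  # approved term -> banned variants
}

def terminology_instruction(glossary: dict[str, list[str]]) -> str:
    """Render the glossary as an explicit prompt rule."""
    lines = ["Use exactly these terms:"]
    for term, banned in glossary.items():
        variants = ", ".join(repr(b) for b in banned)
        lines.append(f"- Say '{term}', never {variants}.")
    return "\n".join(lines)

def find_violations(text: str, glossary: dict[str, list[str]]) -> list[str]:
    """Return every banned variant that appears in the generated text."""
    lowered = text.lower()
    return [b for banned in glossary.values() for b in banned if b in lowered]

draft = "The end user opens the settings dialog."
violations = find_violations(draft, GLOSSARY)
```

The check is deliberately dumb string matching; its job is not to be clever but to make terminology drift visible instead of letting it slip through review.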
LLMs don’t understand context and deliver irrelevant answers.
Anyone who’s asked an LLM to revise an existing document knows the disappointment: it delivers a generic summary that misses the point. Or it restructures the text but ignores important connections. Key information disappears while minor details get expanded. The conclusion: LLMs can’t work with existing material and don’t understand what actually matters.
Here too, the problem isn’t the LLM—it’s unclear instructions: LLMs can process context—but they don’t automatically prioritize the way humans would. They prioritize based on the criteria you explicitly give them. “Summarize this document” leaves the LLM to decide what’s important—and it falls back on statistical patterns that may not lead to the desired result. “Summarize this document in three paragraphs, focusing on the technical requirements for a developer audience. Leave out marketing claims and prioritize information about API integration” provides clear evaluation criteria.
The more precise the instruction and the more clearly you define the goal and what matters, the more targeted the result. LLMs can restructure, rewrite, condense, or expand documents—they can even create different versions for different audiences. But they need clear guidance about which criteria determine relevance and which aspects should take priority.
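The summarization prompt quoted above can be generated systematically rather than retyped each time. The sketch below is one possible shape for such a builder; the parameter names are invented, and the ellipsis stands in for the actual document text.

```python
# Sketch: a summary request that spells out its relevance criteria
# instead of leaving prioritization to the model's statistics.

def summary_request(document: str, *, paragraphs: int, focus: str,
                    audience: str, exclude: list[str]) -> str:
    """Build a summarization prompt with explicit evaluation criteria."""
    return "\n".join([
        f"Summarize the document below in {paragraphs} paragraphs.",
        f"Focus on: {focus}",
        f"Audience: {audience}",
        f"Leave out: {', '.join(exclude)}",
        "",
        "Document:",
        document,
    ])

prompt = summary_request(
    "...full document text...",
    paragraphs=3,
    focus="technical requirements, API integration",
    audience="developers",
    exclude=["marketing claims"],
)
```

Templating the criteria has a side benefit: the same relevance decisions get applied to every document in a series, which is itself a consistency gain.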
LLMs remember nothing and repeat the same mistakes.
Anyone who works repeatedly with an LLM will eventually hit this: style instructions given earlier get ignored. Mistakes that were corrected reappear. The conclusion: LLMs have no memory—and are therefore poorly suited for ongoing or complex work.
That’s true—but it’s not a flaw. It’s how they work: LLMs have no persistent memory between sessions. What isn’t in the available context doesn’t exist for them. But that context can be deliberately built—automatically or manually:
- Project environments with permanently stored documents and guidelines
- Memory features that automatically carry over preferences and working patterns across sessions
- Instruction files that load automatically at the start of every session
- Prompt templates with fixed style preferences baked in
- Session summaries used as context for the next conversation
“I already told you that” only works if that information is in the available context. Build that context deliberately, and you work more efficiently than someone who starts from scratch every time.
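Rebuilding context at the start of a session can be as simple as prepending two files to the current request: a standing-instructions file and the previous session's summary. The sketch below assumes those file names; any storage convention works, and the demo uses a throwaway directory in place of a real project folder.

```python
# Sketch: rebuilding session context from persistent files.
# File names are illustrative; any storage convention works.

import tempfile
from pathlib import Path

def build_session_context(workdir: Path, new_request: str) -> str:
    """Prepend standing instructions and the last session summary, if present."""
    parts = []
    instructions = workdir / "instructions.md"   # fixed style/terminology rules
    last_summary = workdir / "last_session.md"   # written at end of prior session
    if instructions.exists():
        parts.append("Standing instructions:\n" + instructions.read_text())
    if last_summary.exists():
        parts.append("Previous session summary:\n" + last_summary.read_text())
    parts.append("Current request:\n" + new_request)
    return "\n\n".join(parts)

# Demo with a temporary directory standing in for a project folder.
with tempfile.TemporaryDirectory() as d:
    workdir = Path(d)
    (workdir / "instructions.md").write_text("Always write 'user', never 'end user'.")
    (workdir / "last_session.md").write_text("Chapter 2 restructured; terminology fixed.")
    context = build_session_context(workdir, "Revise chapter 3 in the same style.")
```

The mechanism is trivial on purpose: the model has no memory, so the memory lives in files you control, and every session starts by reading them back in.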
The Real Challenge.
All of these myths share a common root: they describe symptoms of poor usage and interpret them as limitations of the technology.
- LLMs don’t hallucinate too much—they receive too little context.
- They don’t follow terminology—because none is given.
- They don’t write in a recognizable style—because no style instructions are provided.
- They don’t understand context—because no one defines what’s relevant.
- They remember nothing—because no memory is built for them.
The quality of the output is a direct reflection of the quality of the input.
This is an uncomfortable realization because it shifts the responsibility. It’s easier to dismiss the technology as inadequate than to question your own way of working. But that’s exactly where the opportunity lies: LLMs may not be fully transparent in how they work—what happens behind the prompt largely remains hidden. What you can control is the input. Learn to craft clear instructions, provide complete context, and define concrete expectations—and you’ll get results that don’t look like “AI.” They’ll look like good work.
The technology is evolving faster than any debate about it. What LLMs can’t do today, they’ll be able to do tomorrow—and anyone waiting for things to settle is waiting for a moment that won’t come.
Ask better questions. Get better answers.
Images created with Nano Banana (Google Gemini), with prompting assistance from Claude Opus 4.6.