The Rules

Before you ask AI anything, read this.

Chapter 1 of 3

What AI Tools Are Good At

AI tools — large language models specifically — are good at a narrow set of things, and spectacularly good at those:


What AI Tools Are Bad At

And here is where they fall apart:


The Traps

The Copy-Paste Trap

The model gives you a response. It looks right. You paste it into your code or document without reading it carefully. Two weeks later, someone finds a bug or a factual error, and you cannot explain why it is there.

The rule: never commit output you have not read and understood. The model is a collaborator, not an author. You are still the author.

The Verbosity Trap

You ask a simple question and get a 500-word response with an introduction, three sections, a conclusion, and phrases like “it is important to note that.” You did not need any of that. You needed two sentences.

The fix: be explicit about length and format in your prompt. “Answer in one paragraph.” “Give me the command, nothing else.” “Three bullet points, no explanation.” The model will match whatever format you set.
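Stating the format contract up front is mechanical enough to wrap in a helper. A minimal sketch (the `constrain` function is illustrative, not part of any real SDK):

```python
def constrain(question: str, format_rule: str) -> str:
    """Build a prompt that pins down length and format before the model answers.

    Hypothetical helper for illustration -- not any real API.
    """
    return f"{question}\n\nFormat: {format_rule}"

# The same question, three different format contracts:
print(constrain("How do I tail a log file?", "Give me the command, nothing else."))
print(constrain("What does idempotent mean?", "Answer in one paragraph."))
print(constrain("Why might a deploy fail?", "Three bullet points, no explanation."))
```

The constraint goes last so it is the freshest instruction the model sees, but any placement works as long as it is explicit.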

The Authority Trap

The response sounds authoritative and well-structured, so you assume it is correct. But structure is not accuracy. A model can produce a beautifully formatted wrong answer.

The rule: treat every factual claim as “probably right but possibly fabricated.” Verify against documentation, test the code, or ask someone who knows.
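To make "test the code" concrete: suppose the model hands you a date helper that looks clean and confident. A thirty-second check against a trusted source catches the problem before it ships. (The `days_in_month` function below is a made-up example of model output, bug included.)

```python
import calendar

# Model-suggested code, pasted verbatim (hypothetical example output).
# It reads authoritatively -- and silently ignores leap years.
def days_in_month(year: int, month: int) -> int:
    return [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]

# Verify against the standard library before committing:
for year, month in [(2023, 2), (2024, 2), (2024, 12)]:
    expected = calendar.monthrange(year, month)[1]  # trusted ground truth
    got = days_in_month(year, month)
    status = "ok" if got == expected else "WRONG"
    print(f"{year}-{month:02d}: model says {got}, stdlib says {expected} -> {status}")
```

The February 2024 case fails: the model's table says 28, the calendar says 29. A beautifully formatted wrong answer, caught in one loop.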

The Shortcut Trap

You use AI to skip the learning. Instead of understanding how Kubernetes networking works, you ask the model to generate the config. It works. You move on. Three months later, something breaks and you have no mental model for debugging it.

The balance: use AI to accelerate learning, not to bypass it. “Explain how Kubernetes service discovery works, then show me a config example” is better than “Give me the config.”
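To see what "explain first, then show me the config" buys you, consider the shape of the artifact in question. Here is a minimal Kubernetes Service manifest built as a plain dict so each field can carry a comment (a sketch for illustration; the names and ports are hypothetical, and kubectl accepts JSON manifests as well as YAML):

```python
import json

# A minimal Kubernetes Service manifest. If you understand service
# discovery, every field below has a reason; if you pasted it blind,
# they are just magic strings you cannot debug.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web"},  # becomes the DNS name: web.<namespace>.svc.cluster.local
    "spec": {
        "selector": {"app": "web"},  # which pods receive the traffic
        "ports": [{"port": 80, "targetPort": 8080}],  # cluster port -> container port
    },
}
print(json.dumps(service, indent=2))
```

When something breaks three months later, the mental model is knowing that the `selector` must match the pods' labels and that the `metadata.name` is what other services resolve via DNS. The config alone teaches neither.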


The Security Conversation

This deserves its own section because people get it wrong in both directions.

Too cautious: refusing to use AI tools at all because “everything is sensitive.” Most of the things you type into a chat are not sensitive. Asking how to write a bash loop, how to format a YAML file, or how to structure a postmortem template is fine.

Too careless: pasting entire codebases, database schemas, customer names, or internal architecture diagrams into a public model. This is where actual risk lives.

The practical approach:

