A seed does not know what it is going to become. It holds the information — the genetic instructions for roots and leaves and fruit — but it cannot express any of it without the right conditions: moisture, warmth, time. The potential is real. What it produces depends entirely on what you give it and what you ask of it.
I have spent years thinking about this in two contexts at once. In the garden, literally — tending seedlings from the grow-light shelf in March through the harvest in October, learning what each variety produces when given what it needs in Zone 6a on Saginaw Bay. And professionally, working with AI systems in emergency services, watching language models do more or less useful things depending entirely on the quality of attention brought to them.
The parallel is not a metaphor I invented. It is something I noticed over years of doing both, until it stopped feeling like a coincidence and started feeling like a principle.
What They Have in Common
Both require you to understand the system before you can work well with it. A tomato variety that performs beautifully in Georgia may struggle in Michigan’s short season — the same genetics, different conditions, different results. An AI model that handles one kind of question well may produce confident nonsense on a slightly different one. The solution in both cases is the same: learn the system. Work with what it actually is, not what you assume it is.
Both reward specificity. Vague instructions produce vague results. “Grow tomatoes” is not a plan. “Grow Cherokee Purple in a raised bed, started March 15th, transplanted May 18th into soil at 65°F, trained to a single stem on 8-foot stakes” is a plan, and the Cherokee Purple will respond to that specificity with clear signals about what it needs. A specific, well-constructed prompt produces a useful AI response for the same reason: precision reduces ambiguity, and ambiguity is where things go wrong.
Both require iteration. You do not get it right the first season. You do not get it right the first prompt. You try, you observe what comes back, you adjust. The Cherokee Purple that cracked every year because of uneven watering taught me about soil moisture management in a way no article could. The AI system I helped deploy in Saginaw County required months of testing and iteration before it handled the full range of non-emergency call types reliably. The iteration was the work.
The Practical Parallels
Feedback loops. A seed tells you what went wrong in its germination rate, its vigor, the color of its first leaves. A language model tells you what went wrong in the coherence of its output, the places where it went generic. Both require you to read the feedback and adjust. Neither announces its problems directly.
Selection pressure. A gardener who saves seed from the best plants each year is running a slow selection program. The plants that perform best in their specific conditions pass their characteristics forward. AI systems trained on specific, high-quality data develop stronger capabilities in those domains. What you put in determines what you get out — and the selection decisions you make accumulate.
The value of constraints. A small garden forces choices that a large one allows you to avoid. Which varieties are worth the space? Which experiments are worth running in a bed with only so many square feet? Constraints produce clarity. The same is true working with AI in a specific, bounded domain versus asking it to do everything. Narrow the scope; improve the output.
Patience with lag. You plant in May. You eat in August. There is a lag between action and result that requires trust in the process. AI development has a similar structure — the work happening now will shape what is possible years from now. Working at the frontier means getting comfortable with lag.
Where They Diverge
The garden is patient with you in a way that AI systems are not. A neglected tomato plant in July will tell you it was neglected — the wilted leaves, the curled edges, the slow fruit set — and if you respond, the plant will often recover. An AI system given a bad prompt does not tell you the prompt was bad. It gives you a confident answer and leaves the correction to you.
This is why the gardener’s habit of attention is the most useful single thing I brought to working with AI. The garden trained me to look for feedback in things that do not announce themselves. “This does not seem right” is a skill the garden develops across seasons of small corrections, and it is exactly the skill that working carefully with AI systems requires.
The garden does not make things up. If the Cherokee Purple fails to set fruit in July, it is because of something real — heat, drought, disease, root competition below the bed. The cause exists and can be found.
Language models hallucinate. They produce confident-sounding statements that are factually wrong, and they do not always signal when they are doing it. The gardener’s habit of verification — reading the actual leaf rather than assuming you know what the plant needs — is the right habit to bring to AI use. Read carefully, then verify anyway.
The garden also has a pace that cannot be accelerated. Seeds germinate when conditions are right, not when you want them to. AI tools operate at the speed of electricity. The combination — the patience the garden teaches and the speed AI enables — turns out to be more useful than either quality alone.
I grow Cherokee Purples. I work with language models. I save seeds and label envelopes and stand in the garden in the morning with coffee. I am not sure these things are different activities, at the level that matters.
More on the professional side of this work is at chrisizworski.com/ai. For context on how broadly AI has moved into different industries and sectors, the AI adoption statistics page at chrisizworski.com tracks the numbers.
