Why you need a logic model even when the funder doesn't require one
When a funder does not explicitly require a logic model, most programs skip building one. That decision looks efficient but is almost always expensive. A logic model is not a deliverable; it is an argument about why your program produces its results. Without that argument written down, every other piece of the evaluation drifts.
The argument, in five columns
A logic model in its simplest form is five columns: inputs, activities, outputs, short-term outcomes, long-term outcomes. Read left to right, it says: "If we put in these resources and do these specific things, we will produce these immediate countable results, which will lead to these near-term changes, which will lead to these longer-term impacts."
Read right to left, it does the more important work. It forces you to ask: "We want this long-term impact. What near-term change produces it? What immediate results produce that change? What activities produce those results? What resources do the activities need?" Working backward this way almost always reveals at least one broken link in the chain — an outcome you hope for but have no plausible mechanism for producing.
What happens when it is missing
We see the same three downstream failures in programs that skip the logic model:
Data collection drifts. Without a written chain, nobody on the team can say with certainty which data points are load-bearing. The program collects a lot — because collecting feels safe — and ends up with gigabytes of nothing much. Reporting season becomes an archaeological dig.
Activities expand. New initiatives get added to the program because they sound good. Without a logic model to check them against, there is no principled basis for saying no. The program becomes a Swiss Army knife that does everything adequately and nothing well.
Reports become hard to write. At the end of the cycle, the team has to reverse-engineer the program theory from whatever data they managed to collect. This is both exhausting and weak — reviewers can tell when a theory of change was constructed after the fact to explain the results.
The version that actually gets used
Academic logic models can run to several pages with assumptions, moderators, and contextual factors. These are useful for evaluators but rarely looked at by anyone else. The version that actually gets used has three properties:
It fits on one page. If it doesn't fit on one page, it won't get printed, and if it doesn't get printed, it won't get referenced during staff meetings or planning sessions.
It names specific instruments. Not "we will measure student engagement" — "we will use the Classroom Engagement Rubric administered by site coordinators twice per session." Specificity is what makes the logic model an operational tool rather than a philosophical one.
It is dated and versioned. Programs change. A logic model written in April and revisited in October is one that is being used. An undated logic model pinned to the wall is decoration.
A test you can run today
Ask three people on your team to write down, independently, the one sentence that says: "Our program works because ___." If you get three different sentences, your logic model is either missing or not actually serving as the shared argument it is supposed to be.
The goal is not a document. The goal is a shared, specific, written understanding of why your program produces its results. Funders don't always ask for it. You should build it anyway, because every other evaluation decision you make for the next three years will be easier when you have it.