EDSTAR Analytics

What grant reviewers actually look for

April 17, 2026 · 7 min read · Grant writing · Proposals

Over two decades of reading proposals — our own, our clients', and as a peer reviewer for federal competitions — we've come to believe that most funding decisions come down to five categories. Different funders weight them differently, but every rubric we've seen puts something in each bucket. If you know the buckets, you can audit your own proposal before it ever leaves your desk.

1. Funder fit

Reviewers are not asked whether a proposal is good in some abstract sense. They are asked whether it matches a set of priorities the funder announced in the call. A proposal that is brilliantly written but aimed at last year's priority will still lose to a workmanlike one that hits this year's.

The fix is early and simple: before you write a word of narrative, print out the call, the funder's current strategic plan, and any recent public remarks from the program officer. Highlight the exact language they use — not what you think they mean, what they actually say. Your proposal should echo that language in the first paragraph of the narrative and at least three more times thereafter.

2. Evidence

The second question every reviewer is trying to answer is: why do you believe this program will work? The weakest proposals answer from logic alone. Stronger proposals cite research. The strongest cite research and prior results from similar programs — ideally the applicant's own.

A pattern we see repeatedly: applicants who have run successful pilots fail to describe those pilots with any specificity. "We have done this work before" is not evidence. "In our 2023–24 pilot with 47 seventh graders at Rosewood Middle, 72% met the proficiency target against a district average of 58%" is evidence. Reviewers notice.

3. Team

Reviewers read the bios. They are looking for a specific thing: do the named people have documented experience with the specific activities proposed? A team of three Ph.D.s whose dissertations were in unrelated fields is not the right answer for a classroom-implementation grant. A team with fewer credentials but more direct experience in that kind of work is.

Generic bios are a wasted opportunity. If Dr. Smith will run the data side, the bio should say "Dr. Smith has led the data work on three similar evaluations, including [names]." If it just lists degrees and tenure, reviewers have to infer — and reviewers under time pressure infer downward.

4. Budget realism

The budget is scrutinized less for arithmetic than for alignment with the narrative. Reviewers flag two things: items in the narrative with no line in the budget, and items in the budget with no mention in the narrative. Both signal that nobody read the document end-to-end before submission.

The best proposals have someone — not the author — sit down with just the narrative and just the budget and cross-walk them line by line. It takes two hours. It catches the majority of budget-narrative mismatches.

5. Evaluation plan

Most funders require an evaluation plan, and every funder that requires one will weight it. Reviewers are looking for four things: measurable outcomes, named instruments, a realistic data-collection plan, and a named evaluator with relevant experience. If any one of those is vague, the evaluation section loses points.

A surprisingly common failure mode is the "we'll figure that out later" evaluation plan. Language like "we will use appropriate pre- and post-assessments" tells a reviewer that nobody on the team has thought about which pre- and post-assessments, who will administer them, when, or how the data will get from the classroom to the report. This is where having an external evaluator on the team pays for itself twice over: once during writing and once during the funded work.

The meta-pattern

The meta-pattern across all five categories: reviewers want specificity. Names, dates, numbers, citations. Vagueness is read as lack of preparation — which is usually a correct read. The most useful editing pass you can make on your own proposal is to circle every generic phrase and ask: can I replace this with a specific? When the answer is yes, you almost always should.

None of this is a substitute for having something worth funding. But once you have that, most of the distance between a winning proposal and a losing one is in how clearly the five categories above are made visible to a tired reviewer working through a pile.

Questions about this?

If anything here is relevant to your program, we'd love to hear what you're working on. The first conversation is free.