FrankJournal

Against the Meeting Summary

Every AI tool now produces a meeting summary. They are mostly useless. The valuable artifact is something else entirely.

March 15, 2026 · 5 min

The meeting summary, as a genre, has won. Every meeting tool now produces one. They land in your inbox an hour after the call. Bullet points, action items, key decisions, next steps. Most of them are competently written. Almost all of them are useless.

The reason is not that the summaries are inaccurate. They tend to be roughly accurate. The reason is that a summary, by its nature, tells you what happened. What you need from a meeting — the thing that would actually change what you do next — is something different. You need to know what didn't add up.

Consider what a good summary contains. The participants. The topics covered. The decisions reached. The action items, with owners and dates. This is the meeting's surface. If you missed the meeting, you can read the summary and know, in a general way, what the meeting was about. But you cannot do anything with this information. It does not point you at any specific decision you should revisit. It does not flag a number that someone stated confidently and that, if you bothered to check, would turn out to be wrong.

What you actually want, an hour after a meeting, is a much shorter document. It might contain three things, or one, or zero. It would say: the CFO claimed the gross margin was 71%. Two months ago in a different meeting she said it was 62%. Worth confirming. It would say: the head of sales said the pipeline coverage was 3.8x. The number underneath that, in the deck, is 2.6x. Worth confirming. It would say: the question about the migration timeline never got a date.

That document is short. It is also, for any operator who has to make decisions on the basis of what they're told in meetings, worth more than the year's entire archive of summaries.

The reason almost no tool produces this document is that it's hard. Producing a summary is mostly compression. You take a long thing and you make it shorter. Producing a list of things that didn't add up requires the tool to understand what was claimed, compare it against what else was claimed in the same meeting, compare it against a longer history of what's been said before, and then form a small, defensible opinion about which of those gaps is worth the principal's attention. This is a different and much harder problem, and most products don't bother attempting it. The summary is what's easy to ship.
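
To make the gap between the two problems concrete, here is a minimal sketch of the comparison step in Python. It is an illustration, not Frank's implementation: the names (Claim, disagree, find_gaps) and the flat tolerance threshold are my own, and it assumes the claims have already been extracted from the transcript into structured form.

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Claim:
    speaker: str   # who stated it ("CFO", "deck", ...)
    metric: str    # what it is about, e.g. "gross margin"
    value: float
    source: str    # where it was stated ("this meeting", "the deck", ...)

def disagree(a: Claim, b: Claim, tolerance: float = 0.05) -> bool:
    """Two claims about the same metric whose values differ materially."""
    if a.metric != b.metric:
        return False
    return abs(a.value - b.value) > tolerance * max(abs(a.value), abs(b.value))

def find_gaps(meeting: list[Claim], history: list[Claim]) -> list[tuple[Claim, Claim]]:
    """Compare each claim against the rest of the meeting (including the
    deck) and against what has been said in earlier meetings."""
    gaps = [(a, b) for a, b in combinations(meeting, 2) if disagree(a, b)]
    gaps += [(a, b) for a in meeting for b in history if disagree(a, b)]
    return gaps

# The post's own examples, run through the sketch (percentages as fractions):
meeting = [
    Claim("CFO", "gross margin", 0.71, "this meeting"),          # 71%
    Claim("head of sales", "pipeline coverage", 3.8, "this meeting"),
    Claim("deck", "pipeline coverage", 2.6, "the deck"),
]
history = [Claim("CFO", "gross margin", 0.62, "a meeting two months ago")]

for a, b in find_gaps(meeting, history):
    print(f"{a.speaker} claimed {a.metric} = {a.value}; "
          f"{b.source} says {b.value}. Worth confirming.")
```

The sketch deliberately leaves out the two steps where the real difficulty lives: turning a raw transcript into structured claims, and deciding which of the resulting gaps is actually worth the principal's attention.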

There's a deeper issue, too. A summary is reassuring. It looks comprehensive. It produces a tidy artifact that everyone can sign off on. A list of things that didn't add up is, by contrast, mildly antagonistic. It implies that someone in the meeting said something that wasn't quite right. It puts the principal in the position of having to follow up. It is socially uncomfortable in a way that a bullet-point summary is not. Most tools, sensibly, route around this discomfort.

But the discomfort is the value. The whole reason to have any kind of recording or analysis of a meeting is to catch the things that the human listeners, for entirely understandable reasons, missed in real time. A summary, by definition, cannot do this. It only contains what was said. The valuable artifact contains what was said and what's wrong with it.

This is the artifact Frank produces. Not a summary. A short list of flags — the four kinds of things that don't add up — with the timestamp, the speaker, and what was claimed. It is shorter than a summary. It is more useful. It is also, the first few times you read one, slightly uncomfortable, because it shows you, in plain text, the things that got past you in real time.
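
As a rough picture of how small that artifact is, here is one hypothetical shape for a single flag, carrying the timestamp, the speaker, and the claim. The field names are my own, and since the post names but does not enumerate the four kinds, the kind stays a plain string:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flag:
    timestamp: str  # where in the recording the claim was made
    speaker: str    # who made it
    claimed: str    # what was claimed, in plain text
    conflict: str   # what it fails to add up against
    kind: str       # which of the four kinds of mismatch; not enumerated in the post
```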

The summary will keep winning the market for a while, because it's the thing that's easy to demo. The other artifact is the one that, eventually, the serious operators will quietly switch to.