The Pixar Braintrust is a candid-feedback ritual in which a film-in-progress is reviewed by a group of experienced creative peers who have one job: to identify what is not working and say so directly. The honesty standard was specific — feedback was directed at the work, not the creator; it named actual story problems rather than offering vague reactions; and it was grounded in craft rather than personal taste.
This page carries a sourcing caveat. The Braintrust is documented primarily through Ed Catmull's memoir Creativity, Inc., which gives a clear account of the spirit and intent but is not a procedural guide. The specific format, the selection criteria for participants, and the facilitation discipline are reconstructed from that account and from secondary commentary. Use it as a lens for critique standards, not as a precise repeatable format.
What transfers is the honesty-about-the-work standard. The Braintrust question is: is the feedback specific enough to change the next iteration? Does it identify what the work actually needs, or does it offer support and mild concern? That standard is portable across creative, product, and concept review settings where the default is politeness and the work suffers for it.
Run it in the room
A clean first pass you can run
Participants
The work's creator plus 4–6 people who can give candid, specific feedback and have strong enough judgment to distinguish between what is not working and what to do about it. One facilitator. The quality of the Braintrust depends entirely on the quality of the reviewers.
Timing
90–120 minutes. The creator presents for no more than 20 minutes. The remaining time is for review, notes, and discussion.
Prep
Share the work in advance. Reviewers come with specific written notes — what is not working and why. General impressions formed in the room are less useful than specific observations prepared beforehand.
1. The creator presents the work without interruption. 15–20 minutes. The presentation should be honest about what is not yet working, not a defense of what is.
2. Reviewers give notes. A note names what is not working and where. It does not prescribe a solution. 'The problem stakes feel unclear by the end of the second act' is a note. 'You should restructure the second act' is a prescription that should be withheld.
3. The creator listens without defending. Clarifying questions only. The discipline is staying curious rather than protective.
4. After the session, the creator decides what to do with the notes. The Braintrust has no authority. It has candor, experience, and the creator's trust. The creator owns the work and the decisions about it.
You leave with
A set of specific, named problems in the work — not a list of solutions. The creator leaves with clarity about what is not working, not a prescription for how to fix it.
First failure point: Reviewers slip into prescription mode — telling the creator what to do rather than naming what is not working. Once the Braintrust starts offering solutions, the creator shifts from listening to evaluating options, and the diagnostic value of the session is lost.
What good looks like
If this is working, these are the signals you should be able to point to
The creator heard at least three observations they would not have surfaced themselves.
Feedback was directed at the work, and the creator did not need to defend it during the session.
The creator left with a clear understanding of the strongest problem in the current version.
At least one piece of feedback led to a structural change, not just a polish revision.
How it worked there
The conditions that made it hold
Pixar developed the Braintrust out of the experience of making films where the director had final creative authority but could benefit from honest peer reaction from people who had no stake in defending the current version. The format trusted directors to receive hard feedback because it was explicitly positioned as help, not judgment — and because the culture had built enough shared creative standards that 'the story has a pacing problem in act two' meant something specific and actionable.
The conditions behind that format: a small creative organization where the people in the room had long shared history, a culture where the work's quality was genuinely the primary interest of everyone present, and leadership that had invested explicitly in distinguishing candor from cruelty. These are not conditions that transfer by declaration. Organizations that try to run a Braintrust-style review without having established those preconditions will find that the room produces performance instead of critique — people saying harder-sounding things but still navigating the same status dynamics as before.
What not to copy · Failure modes
What goes wrong when this is copied
The focus lands on who is in the room rather than the standard the room enforces. The Braintrust is often described in terms of the creative authority it assembled — experienced directors, senior story people, Ed Catmull, John Lasseter. Teams reconstruct it by filling a room with senior people. But the mechanism was not seniority; it was the candor and work-focus standard those people held. A room of senior people who give vague, deferential, or status-mediated feedback has recreated the surface of the Braintrust without its function.
Bluntness is mistaken for the critique standard. The Braintrust's honesty was specific: directed at the work, not the person; aimed at identifying what was not working and why; grounded in craft standards rather than subjective reaction. Teams that interpret the Braintrust as permission for harsh or unfiltered feedback have not understood the discipline. Useful critique is not more brutal feedback — it is more specific feedback about the actual problem the work has.
The ritual is imported without the trust that makes candor safe. The Braintrust worked because the people in the room trusted each other and trusted that the feedback was directed at making the work better, not at establishing status or winning credit. That trust was built through years of shared creative work. A team that assembles a review session and declares it a Braintrust has created the structure without the precondition. The candor the format requires will not appear because the agenda says it should.
Weak signals to watch for
It is not a safe license to make feedback more brutal.
It is not documented publicly in enough detail to be presented as a step-perfect playbook.
Do not recreate the ritual as a room of senior opinions without shared standards.
Use this when the team needs a stronger critique standard before improving the review ritual itself.
Use this when you need a stronger critique standard and can accept that the format only works if the creator stays silent while feedback is being given.