Facilitation in the AI Era

Jigsaw (Google), 2025 — Ethnographic Research Report


What do expert facilitators actually do? Despite facilitation being central to deliberative democracy, peacebuilding, and organizational decision-making, the practice itself remains poorly understood—a “black box” that practitioners learn through apprenticeship rather than explicit instruction.

In early 2025, researchers at Jigsaw (a unit within Google focused on disinformation and online safety) interviewed 22 professional facilitators across six continents to document their mental models, techniques, and early perspectives on AI assistance. The study spans domains from corporate workshops to post-conflict reconciliation, capturing facilitation as practiced by people like Alice Siu at Stanford’s Center for Deliberative Democracy, practitioners at Build Up working in active conflict zones, and community facilitators in Australia and Europe.

What Facilitators Think They Do

One striking finding: there’s no agreed-upon definition of facilitation. Practitioners describe their role through different metaphors—conductor (actively directing the conversation), referee (enforcing rules), or container (holding space for whatever emerges). These aren’t just different words for the same thing; they reflect genuinely different philosophies about how much a facilitator should shape outcomes versus let groups find their own way.

This ambiguity matters for AI facilitation. If experts can’t agree on what good facilitation looks like, training AI systems to facilitate becomes a values question as much as a technical one. The WHoW (Why-How-Who) framework attempts to make these choices explicit.

Cross-Domain Patterns

Despite the diversity, some patterns emerge across contexts. Facilitators consistently emphasize psychological safety—creating conditions where participants feel able to speak honestly. They distinguish between managing the process (who speaks when, how long discussions run) and managing the content (what gets discussed, what conclusions emerge). Most see their role as primarily process-focused, though the boundary blurs in practice.

The study also captures facilitators’ early and often cautious perspectives on AI assistance. Some see potential for AI to handle logistics and note-taking, freeing human facilitators for relationship work. Others worry that AI could flatten the nuance that makes facilitation effective.

Why This Matters for OFL

This study provides ground truth that quantitative approaches miss. The WHoW Framework can tell us what moderation moves occur in conversations, but not why skilled practitioners choose them. The Fora corpus shows patterns in facilitator behavior, but not the reasoning behind those patterns. Ethnographic work like this fills the gap.

For building AI facilitators, the key insight may be the diversity itself. A single “best” facilitation style probably doesn’t exist—different contexts and cultures call for different approaches. This suggests AI facilitation systems should be configurable rather than one-size-fits-all.

The Research Team

The study was led by Ian Beacock, Emily Saltz, Beth Goldberg, and Thea Mann at Jigsaw, with contributions from facilitators including Wasim Almasri (ALLMEP), Cui Jia Wei (vTaiwan), Nicole Hunter (MosaicLab), and Emily Jenke (DemocracyCo).

For quantitative analysis of moderation, see the WHoW Framework. The Fora corpus from MIT provides data on facilitated dialogue. For OFL’s seminar with Alice Siu on deliberative polling, see the seminar notes.