LLMs don’t represent “everyone”—they represent a default human. Most models are tuned to a WEIRD (Western, Educated, Industrialized, Rich, Democratic) norm, making them fragile or misleading when used in global or diverse settings. In this workshop, participants will explore Synthetic Cultural Agents (SCAs): AI agents designed to simulate the values, risk preferences, and decision-making styles of real human populations.
Learn how to spot hidden bias, stress-test ideas, and build LLM-powered solutions that actually scale beyond a single assumed user. Perfect for hackathon teams building for real-world impact.
Challenge
Teams will put theory into action by comparing five powerful techniques for narrowing an LLM's probability space into a specific cultural manifold:
- Demographic Steering — Lock priors using structured JSON configs
- Recursive Memory & Reflection — Build agents with stable self-schemas
- Social RAG — Ground agents in localized, real-time epistemic bases
- Polyglot Persona — Treat language as a hidden cultural parameter
- Latent Space Fine-Tuning — Hard-code alignment via weights and reward models
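As a concrete illustration of the first technique, Demographic Steering, here is a minimal sketch of locking a model's priors with a structured JSON config rendered into a system prompt. The schema, field names, and `build_system_prompt` helper are all hypothetical examples for this sketch, not an official SCA specification:

```python
import json

# Hypothetical persona config: every field and value here is illustrative,
# chosen to show the idea of pinning demographic priors up front.
persona_config = {
    "country": "Japan",
    "age_bracket": "25-34",
    "education": "university",
    "risk_preference": "risk-averse",
    "language": "ja",
}

def build_system_prompt(config: dict) -> str:
    """Render a JSON persona config into a system prompt that fixes the
    agent's demographic priors before any user turn is processed."""
    return (
        "You are simulating a survey respondent with this profile:\n"
        + json.dumps(config, indent=2)
        + "\nAnswer every question consistently with this profile."
    )

prompt = build_system_prompt(persona_config)
# The resulting string would be passed as the system message to any
# chat-completion API; the call itself is omitted here.
print(prompt)
```

The design point is that the persona lives in structured data rather than free-form prose, so the same config can be versioned, swapped across populations, and audited for which priors were actually set.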
Key Takeaways
- A Social Science Lens on AI: See how technical design choices reshape model behavior, fidelity, and decision-making
- SCA Methods, Compared: Understand the real trade-offs between scalability, precision, and realism
- Research-Driven Innovation: Engage with foundational work (e.g., Argyle et al.) and cutting-edge advances in Polyglot Persona Drift and Synthetic Ethnography
Facilitators
Kseniia Biriukova and Monica Capra from the EconLLM Lab