Prompt Programming for Cultural Bias and Alignment of Large Language Models
Maksim Eren, Eric Michalak, Brian Cook et al. (4 authors)
2026-03-17
Culture shapes reasoning, values, prioritization, and strategic decision-making, yet large language models (LLMs) often exhibit cultural biases that misalign with target populations. As LLMs are increasingly used for strategic decision-making, policy support, and document engineering tasks such as summarization, categorization, and compliance-oriented auditing, improving cultural alignment is...