📚 Publication Alert: Systematic Non-responses in Online Surveys
What if people skip key questions in political surveys—or entire groups don't respond at all? In our latest research, we explore how fine-tuned Large Language Models (LLMs) can help fill these gaps. Using German voting behavior data, we show that LLMs trained on partial survey responses not only outperform traditional methods in cases of systematic non-response but also produce more balanced predictions than zero-shot models. This opens new doors for more robust, bias-aware survey analysis—powered by open-source AI. The future of political opinion research might just be (partly) synthetic.
The paper is now available online and can be accessed through OSF.