Meta-Rewarding Language Models: Self-Improving Alignment with LLM-as-a-Meta-Judge

Large Language Models (LLMs) are rapidly surpassing human knowledge in many domains. While improving these models traditionally relies on costly human data, recent self-rewarding mechanisms (Yuan et al., 2024) have shown that LLMs can improve by judging their own responses instead of relying on human labelers. However, existing methods have primarily focused on improving model responses rather than judgment capabilities, resulting in rapid saturation during iterative training. To address this issue, we introduce a novel Meta-Rewarding step to the self-improvement process, where the model judges its own judgements and uses that feedback to refine its judgment skills. Surprisingly, this unsupervised approach improves the model's ability to judge and follow instructions, as demonstrated by a win rate improvement of Llama-3-8B-Instruct from 22.9% to 39.4% on AlpacaEval 2, and 20.6% to 29.1% on Arena-Hard. These results strongly suggest the potential for self-improving models without human supervision.
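
To make the described loop concrete, the sketch below shows how one Meta-Rewarding iteration could assemble preference data for both roles the single model plays: the actor (whose responses are ranked by the judge) and the judge (whose judgements are ranked by the meta-judge). All function names and signatures here (actor, judge, meta_judge, meta_rewarding_pairs) are hypothetical placeholders chosen for illustration, not the paper's implementation, and the subsequent fine-tuning step (e.g. DPO on the resulting pairs) is omitted.

# Minimal sketch of one Meta-Rewarding iteration, assuming hypothetical
# callables for the three roles the same model plays. Illustrative only;
# not the paper's code.
import random
from typing import Callable, List, Tuple

Pair = Tuple[str, str, str]  # (context, chosen, rejected)

def meta_rewarding_pairs(
    prompts: List[str],
    actor: Callable[[str], str],                      # sample a response to a prompt
    judge: Callable[[str, str], Tuple[str, float]],   # (judgement text, score) for a response
    meta_judge: Callable[[str, str, str, str], int],  # pick the better of two judgements (0 or 1)
    n_responses: int = 4,
) -> Tuple[List[Pair], List[Pair]]:
    actor_pairs: List[Pair] = []   # preference pairs for improving responses
    judge_pairs: List[Pair] = []   # preference pairs for improving judgements
    for prompt in prompts:
        # 1. Actor: sample several candidate responses to the prompt.
        responses = [actor(prompt) for _ in range(n_responses)]
        # 2. Judge: score each response (LLM-as-a-Judge) and build an
        #    actor preference pair from the best- and worst-scored ones.
        scores = [judge(prompt, r)[1] for r in responses]
        best = responses[scores.index(max(scores))]
        worst = responses[scores.index(min(scores))]
        actor_pairs.append((prompt, best, worst))
        # 3. Meta-judge: compare two judgements of the same response and
        #    prefer the better one, yielding judge preference pairs.
        for response in responses:
            judgement_a, _ = judge(prompt, response)
            judgement_b, _ = judge(prompt, response)
            winner = meta_judge(prompt, response, judgement_a, judgement_b)
            chosen, rejected = (
                (judgement_a, judgement_b) if winner == 0 else (judgement_b, judgement_a)
            )
            judge_pairs.append((f"{prompt}\n{response}", chosen, rejected))
    return actor_pairs, judge_pairs

# Toy usage with stand-in functions; a real run would call the LLM in all
# three roles and then fine-tune on both pair sets.
if __name__ == "__main__":
    rng = random.Random(0)
    actor = lambda p: f"response-{rng.randint(0, 9)} to {p}"
    judge = lambda p, r: (f"judgement of {r!r}", rng.random())
    meta_judge = lambda p, r, a, b: rng.randint(0, 1)
    a_pairs, j_pairs = meta_rewarding_pairs(["What is DPO?"], actor, judge, meta_judge)
    print(len(a_pairs), "actor pairs;", len(j_pairs), "judge pairs")

The key point the sketch illustrates is that the meta-judge step produces training signal for the judging behavior itself, which is what distinguishes Meta-Rewarding from plain self-rewarding and counters the saturation of judgment quality during iterative training.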

Further reading