The Reward Modeling team at Anthropic develops techniques for teaching AI systems to understand and embody human values, as well as to advance AI capabilities. They are looking for engineers to join their efforts to push forward the science of reward modeling.