Description:
About the Team
We're building a future where AI systems are not only powerful but safe, aligned, and robust against misuse. Our team focuses on advancing practical safety techniques for large language models (LLMs) and multimodal systems, ensuring these models remain aligned with human intent and resist attempts to produce harmful, toxic, or policy-violating content. We operate at the intersection of model development and real-world deployment, with a mission to build systems that can proactively
Posted: Sep 8, 2025
From: dice.com