John Dang

AI Researcher

Cohere for AI


I’m a Research Scholar at Cohere on the Cohere for AI team, working on multilingual RLHF with Ahmet Üstün, Kelly Marchisio, Julia Kreutzer, and Sara Hooker! I’m also a recent MS in Computer Science graduate from UCLA and a machine learning researcher in the UCLA Machine Intelligence Group (MINT), advised by Professor Aditya Grover. I’m broadly interested in machine learning, and my current research focuses on the alignment of large foundation models. More specifically, my research is motivated by the following questions: (1) What are the current limitations of LLM alignment methods? and (2) How can we develop better ways to align LLMs with important human values, including but not limited to helpfulness, harmlessness, and fairness?

Previously, I’ve spent time interning at Motional (ML for autonomous driving), Skydio (ML for autonomous flight), and Amazon Web Services (software engineering). Before that, I completed my BS in Computer Science at UCLA, where I worked on computer vision and VR for robotics at the UCLA Center for Vision, Cognition, Learning, and Autonomy, and on ML for disease diagnosis in the Ozcan Research Group.

I’m passionate about improving accessibility in STEM education. As an undergraduate, I served as president of ACM AI at UCLA, the largest AI student organization at UCLA, dedicated to building a community of students interested in AI at UCLA and beyond through free workshops, events, and other outreach initiatives. I currently serve as an advisor, helping develop curriculum for and teach our AI course at underserved high schools in the LA area. I’ve also been a TA for multiple UCLA undergraduate introductory CS courses (CS 31/33). Outside of research, I love playing, writing, and producing music (vocals, guitar, piano), working out, and trying new food!

  • Machine Learning
  • Foundation Models / LLMs
  • AI Alignment
  • MS in Computer Science, Dec 2023

    University of California, Los Angeles (UCLA)

  • BS in Computer Science, Mar 2022

    University of California, Los Angeles (UCLA)


(2024). Aya 23: Open Weight Releases to Further Multilingual Progress. Preprint.


(2024). Group Preference Optimization: Few-Shot Alignment of Large Language Models. ICLR 2024.
