
Principal Applied Scientist

Microsoft
United States, Texas, Irving
7000 State Highway 161
Oct 22, 2025
Overview

The Sociotechnical Alignment Center (STAC) is an interdisciplinary team of researchers, applied scientists, and linguists within Microsoft Research New York City (NYC), focused on guiding the responsible development and deployment of generative AI systems. We specialize in the sociotechnical alignment of generative AI systems, with a particular focus on measuring risks, including risk systematization, risk annotation, dataset creation, and metric design. Our work bridges research and practice, drawing from computer science, linguistics, social science, and statistics to address some of the most complex challenges in responsible AI. As part of our mission, we collaborate closely with product teams and policy stakeholders to translate cutting-edge research into real-world impact.

We are looking for a Principal Applied Scientist to join our team, contribute to the development of foundational resources, drive research initiatives, and help shape the future of responsible AI. We welcome candidates with expertise in linguistics research, experience working with generative AI systems, and an interest in evaluation, measurement, and the sociotechnical alignment of AI technologies.

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees, we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.
Responsibilities

Your responsibilities include applying linguistics expertise to advance the sociotechnical alignment of generative AI systems. Specifically, you will:

- Integrate linguistic concepts into responsible AI research and applied science efforts.
- Develop resources and tooling to identify, measure, and mitigate AI risks.
- Build and validate annotation guidelines and datasets for risk evaluation.
- Collaborate with policy and engineering teams to systematize risk measurement approaches.
- Generalize evaluation methods across diverse systems, use cases, and deployment contexts.
- Contribute to interdisciplinary research projects that push the boundaries of risk measurement and model assessment.
- Embody our culture and values.