About TaskUs: TaskUs is a provider of outsourced digital services and next-generation customer experience to fast-growing technology companies, helping its clients represent, protect and grow their brands. Leveraging a cloud-based infrastructure, TaskUs serves clients in the fastest-growing sectors, including social media, e-commerce, gaming, streaming media, food delivery, ride-sharing, HiTech, FinTech, and HealthTech.
The People First culture at TaskUs has enabled the company to expand its workforce to approximately 45,000 employees globally. Presently, we have a presence in twenty-three locations across twelve countries, including the Philippines, India, and the United States.
It started with one ridiculously good idea to create a different breed of Business Process Outsourcing (BPO)! We at TaskUs understand that achieving growth for our partners requires a culture of constant motion, exploring new technologies, being ready to handle any challenge at a moment’s notice, and mastering consistency in an ever-changing world.
What We Offer: At TaskUs, we prioritize our employees' well-being by offering competitive industry salaries and comprehensive benefits packages. Our commitment to a People First culture is reflected in the various departments we have established, including Total Rewards, Wellness, HR, and Diversity. We take pride in our inclusive environment and positive impact on the community. Moreover, we actively encourage internal mobility and professional growth at all stages of an employee's career within TaskUs. Join our team today and experience firsthand our dedication to supporting People First.
Frontier models are reshaping every industry, but real‑world safety still lags behind the pace of innovation. TaskUs already delivers AI safety services to some of the world’s most ambitious model builders and app developers; we’re investing in original research that pushes the field forward and lands in production.
As our first Head of Applied Research, AI Safety, you will set the direction for an applied research team, spearhead high‑impact collaborations, and turn new ideas into tools clients use every day.
The impact you’ll make:
Define the agenda: Establish a research roadmap across alignment, robustness, interpretability, and policy, all prioritized for real‑world impact.
Publish & ship: Produce peer‑reviewed papers and production‑ready evaluation frameworks adopted by Fortune 500 clients.
Grow the practice: Mentor two additional researchers (and counting); instill rigorous methodology, fast iteration, and client focus.
Raise TaskUs’ profile: Build partnerships with leading universities, standards bodies, and open‑source communities—positioning TaskUs as a thought leader, not just a service vendor.
What you’ll do:
Conduct original research on model alignment, adversarial robustness, mechanistic interpretability, prompt‑safety evaluation, and fine‑tuning techniques (RLHF, RLAIF).
Prototype safety interventions on large language models: jailbreak detection, policy‑guided decoding, and drift monitoring at scale.
Develop reproducible benchmarks and open‑source tooling; contribute code in Python, PyTorch/JAX/TensorFlow, and evaluation libraries (e.g., LangChain, Tracr).
Lead client‑facing studies: Design red‑team exercises, custom audits, or bias/toxicity assessments in collaboration with Solutions Engineering and Delivery.
Manage external collaborations with academic labs and industry consortia; co‑author papers and organize joint workshops.
Set team cadence: Define quarterly OKRs, oversee experiment pipelines, review code and study designs, and ensure results land on schedule (weeks to months, not years).
Advise leadership on emerging AI safety standards (NIST AI RMF, EU AI Act, ISO 42001) and translate them into service offerings.
Experiences you’ll bring:
8+ years combined research & development in technology, including 3+ years focused on AI/ML safety, robustness, or trustworthy AI.
MS or PhD in Computer Science, Machine Learning, or a related field; peer‑reviewed publications at venues like NeurIPS, ICML, AIES, or FAccT.
Demonstrated success deploying or evaluating safety techniques on large language models in production or client settings.
Hands-on mastery of Python and at least one deep‑learning framework (PyTorch, TensorFlow, or JAX).
Experience leading small research teams or mentoring junior researchers; comfortable setting direction and giving feedback.
Track record of cross‑functional work with product, engineering, or services teams on tight timelines.
Core skills you'll need:
Research design & statistical rigor: hypothesis generation, experimental design, and reproducible analysis.
Safety evaluation expertise: adversarial testing, mechanistic interpretability, prompt evaluation, and fine‑tuning methods (RLHF/RLAIF).
Communication & evangelism: clear technical writing, conference speaking, and ability to explain complex ideas to clients and non‑experts.
Regulatory fluency: working knowledge of emerging frameworks (NIST AI RMF, EU AI Act, ISO 42001) and their practical implications.
Strategic agility: switch between deep work and rapid, client‑driven requests; prioritize for maximum impact.
Nice to have: Experience with multimodal models, formal verification, differential privacy, or safety‑related open‑source contributions.
Why you’ll love this role:
Remote-first flexibility: Work where you’re most productive!
Dual impact: Publish cutting‑edge research and see it deployed with real users weeks later.
Green‑field leadership: Build the research function from day one and shape its culture, tools, and agenda.
High‑caliber peers: Collaborate with seasoned engineers, solutions architects, and GTM leads who care deeply about safe AI.
Mission & momentum: Help solve one of the most important problems in tech while riding a rapidly growing services business.
Ready to push AI safety from theory to practice?
Apply today and chart the next frontier of trusted AI.
How We Partner To Protect You: TaskUs will neither solicit money from you during your application process nor require any form of payment in order to proceed with your application. Kindly ensure that you are always in communication with only authorized recruiters of TaskUs.
DEI: At TaskUs, we believe that innovation and higher performance come from people of all walks of life. We welcome applicants of different backgrounds, demographics, and circumstances. Inclusive and equitable practices are our responsibility as a business. TaskUs is committed to providing equal access to opportunities. If you need reasonable accommodations in any part of the hiring process, please let us know.
We invite you to explore all TaskUs career opportunities and apply at https://www.taskus.com/careers/.

