
NVIDIA

HPC and AI Software Architect

Sorry, this job was removed at 08:16 p.m. (GMT) on Wednesday, Jun 11, 2025
In-Office or Remote
5 Locations

NVIDIA has been redefining computer graphics, PC gaming, and accelerated computing for more than 25 years. Today, we lead in artificial intelligence, driving advances in natural language processing, computer vision, autonomous systems, and scientific research. We are looking for a forward-thinking HPC and AI Inference Software Architect to help shape the future of scalable AI infrastructure—focusing on distributed training, real-time inference, and communication optimization across large-scale systems. Join our world-class team of researchers and engineers building next-generation software and hardware systems that power the most demanding AI workloads on the planet. 

 

What you will be doing: 

  • Design and prototype scalable software systems that optimize distributed AI training and inference—focusing on throughput, latency, and memory efficiency. 

  • Develop and evaluate enhancements to communication libraries such as NCCL, UCX, and UCC, tailored to the unique demands of deep learning workloads. 

  • Collaborate with AI framework teams (e.g., TensorFlow, PyTorch, JAX) to improve integration, performance, and reliability of communication backends (see the sketch after this list).

  • Co-design hardware features (e.g., in GPUs, DPUs, or interconnects) that accelerate data movement and enable new capabilities for inference and model serving. 

  • Contribute to the evolution of runtime systems, communication libraries, and AI-specific protocol layers. 
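
The bullets above center on how AI frameworks drive communication libraries. As a purely illustrative sketch (not NVIDIA internal code), the snippet below shows the path a framework-level gradient all-reduce takes onto the NCCL backend through PyTorch's torch.distributed API; it assumes a multi-GPU node launched with torchrun, which sets RANK, LOCAL_RANK, WORLD_SIZE, MASTER_ADDR, and MASTER_PORT.

    # Minimal sketch: an all-reduce over the NCCL backend via torch.distributed.
    # Assumes launch with: torchrun --nproc_per_node=<num_gpus> this_script.py
    import os
    import torch
    import torch.distributed as dist

    def main():
        dist.init_process_group(backend="nccl")        # NCCL carries the GPU collective
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        # Each rank contributes a tensor; all_reduce sums it in place on every rank.
        grad = torch.ones(1 << 20, device="cuda") * dist.get_rank()
        dist.all_reduce(grad, op=dist.ReduceOp.SUM)

        if dist.get_rank() == 0:
            print("reduced value per element:", grad[0].item())
        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

Profiling and reshaping exactly this framework-to-library path (message sizes, overlap with compute, topology awareness) is representative of the optimization work described in the bullets above.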

 

What we need to see: 

  • Ph.D. or equivalent industry experience in computer science, computer engineering, or a closely related field. 

  • 2+ years of experience in systems programming, parallel or distributed computing, or high-performance data movement. 

  • Strong programming background in C++, Python, and ideally CUDA or other GPU programming models. 

  • Practical experience with AI frameworks (e.g., PyTorch, TensorFlow) and familiarity with how they use communication libraries under the hood. 

  • Experience in designing or optimizing software for high-throughput, low-latency systems (see the data-movement sketch after this list).

  • Strong collaboration skills in a multi-national, interdisciplinary environment. 
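
As an illustrative example of the high-throughput, low-latency data-movement experience mentioned above (a generic PyTorch pattern, not a statement about NVIDIA's stack), the sketch below overlaps host-to-device copies with compute by combining pinned host memory, a dedicated CUDA stream, and non-blocking transfers.

    # Illustrative only: overlap host-to-device copies with compute using
    # pinned (page-locked) host buffers and a dedicated copy stream.
    import torch

    device = torch.device("cuda")
    copy_stream = torch.cuda.Stream()

    # Pinned buffers allow the copies below to be truly asynchronous.
    host_batches = [torch.randn(4096, 4096).pin_memory() for _ in range(4)]
    weight = torch.randn(4096, 4096, device=device)

    results = []
    for batch in host_batches:
        with torch.cuda.stream(copy_stream):
            dev_batch = batch.to(device, non_blocking=True)   # async H2D copy
        # The compute stream waits only for this copy, not for unrelated work,
        # so the next iteration's copy can overlap with this iteration's matmul.
        torch.cuda.current_stream().wait_stream(copy_stream)
        results.append(dev_batch @ weight)

    torch.cuda.synchronize()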

 

Ways to stand out from the crowd: 

  • Expertise with NCCL, Gloo, UCX, or similar libraries used in distributed AI workloads. 

  • Background in networking and communication protocols, RDMA, collective communications, or accelerator-aware networking. 

  • Deep understanding of large model training, inference serving at scale, and associated communication bottlenecks. 

  • Knowledge of quantization, tensor/activation fusion, or memory optimization for inference. 

  • Familiarity with infrastructure for deployment of LLMs or transformer-based models, including sharding, pipelining, or hybrid parallelism (a minimal sharding sketch follows this list).
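
To make the sharding/hybrid-parallelism bullet concrete, here is a hypothetical, inference-oriented sketch of column-wise tensor parallelism: each rank stores a slice of a linear layer's output features, and the full activation is reassembled with an all_gather. It assumes an NCCL process group is already initialized (as in the earlier sketch); the class name is illustrative rather than a real library API, and it omits the custom autograd handling a training-ready implementation would need.

    # Hypothetical inference-time sketch of column-parallel (tensor-parallel) sharding.
    import torch
    import torch.distributed as dist
    import torch.nn as nn

    class ColumnShardedLinear(nn.Module):          # illustrative name, not a real API
        def __init__(self, in_features, out_features):
            super().__init__()
            world = dist.get_world_size()
            assert out_features % world == 0, "out_features must split evenly across ranks"
            self.local_out = out_features // world
            # Each rank holds only its slice of the weight matrix.
            self.weight = nn.Parameter(torch.randn(self.local_out, in_features))

        @torch.no_grad()
        def forward(self, x):
            partial = x @ self.weight.t()                      # this rank's output slice
            gathered = [torch.empty_like(partial) for _ in range(dist.get_world_size())]
            dist.all_gather(gathered, partial)                 # NCCL collective across ranks
            return torch.cat(gathered, dim=-1)                 # reassemble full feature dim

In production serving stacks this pattern is typically combined with quantized weights and communication/compute overlap, which is where the bottleneck analysis described in the bullets above comes in.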

At NVIDIA, you’ll work alongside some of the brightest minds in the industry, pushing the boundaries of what’s possible in AI and high-performance computing. If you're passionate about distributed systems, AI inference, and solving problems at scale, we want to hear from you. 
NVIDIA is at the forefront of breakthroughs in Artificial Intelligence, High-Performance Computing, and Visualization. Our teams are composed of driven, innovative professionals dedicated to pushing the boundaries of technology. We offer highly competitive salaries, an extensive benefits package, and a work environment that promotes diversity, inclusion, and flexibility. As an equal opportunity employer, we are committed to fostering a supportive and empowering workplace for all. 

What you need to know about the Manchester Tech Scene

Home to a £5 billion digital ecosystem, including MediaCity, which houses major players like the BBC, ITV, and Ericsson, Manchester is one of the U.K.'s top digital tech hubs. It is at the forefront of advancements in film, television, and emerging sectors such as e-sports, and it fosters a community of professionals dedicated to pushing creative and technological boundaries.