Software Engineer, ML Inference Compiler & Deployment, AI Frameworks
United States, California, Palo Alto
What to Expect
As a Software Engineer within our Autonomy teams, you will contribute to one of the most advanced and widely deployed AI platforms in the world, powering Autopilot and our humanoid robot, Optimus. In this role, you will be responsible for the internal workings of the AI inference stack and compiler running neural networks in millions of Tesla vehicles and in Optimus. You will collaborate closely with AI engineers and hardware engineers to understand the full inference stack and design the compiler to extract maximum performance from our hardware. The inference stack development is purpose-driven: deployment and analysis of production models inform the team's direction, and the team's work immediately impacts performance and the ability to deploy increasingly complex models. With a cutting-edge co-designed MLIR compiler and runtime architecture, and full control of the hardware, the compiler has access to traditionally unavailable features that can be leveraged via novel compilation approaches to generate higher-performance models.
What You'll Do
What You'll Bring
Compensation and Benefits
Benefits
Along with competitive pay, as a full-time Tesla employee, you are eligible for the following benefits from day 1 of hire:
Expected Compensation
$132,000 - $390,000/annual salary + cash and stock awards + benefits
Pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. The total compensation package for this position may also include other elements dependent on the position offered. Details of participation in these benefit plans will be provided if an employee receives an offer of employment.