Key Responsibilities
- Build and optimize Gen AI models for predictive scheduling, workload forecasting, and capacity planning.
- Collaborate with Product, Engineering, Data, and UX teams to design scalable AI solutions.
- Integrate AI-powered insights (e.g., Capacity Insights) into Jira workflows.
- Deploy, monitor, and maintain ML models that are secure, interpretable, and high-performing.
- Experiment with LLMs and prompt engineering for automation, reports, and summaries.
- Contribute to MLOps practices including model tracking, CI/CD, and governance.
Requirements
- 3+ years building and deploying AI/ML solutions.
- Strong software engineering background with Python, Git, and cloud platforms (AWS/GCP/Azure).
- Full-stack development skills: React, TypeScript, and Node.js.
- Database experience with relational and non-relational systems (e.g., Postgres, MongoDB).
- Skilled with PyTorch, TensorFlow, LLM stacks, and prompt engineering.
- Knowledge of MLOps, model monitoring, and interpretable AI systems.
- Excellent communication skills and a product-driven mindset.
Nice-to-Haves
- Familiarity with the Jira/Atlassian ecosystem.
- Knowledge of strategic portfolio management (SPM).
- Experience in predictive scheduling, Monte Carlo simulations, or capacity planning.
Why Join
- Remote-first with optional in-person meetups.
- Unlimited vacation in most locations.
- Comprehensive health, vision, dental, and savings plans.
- Perks: training reimbursement, WFH reimbursement, and social activities.
- Professional growth through mentorship, conferences, and courses.
- Chance to work on impactful products improving enterprise productivity.
About Tempo
Tempo is a leading provider of time, resource, budget, and portfolio management solutions used by 30,000+ customers worldwide, including one-third of the Fortune 500. Founded in 2007, Tempo has grown from a time-tracking tool into the #1 time management add-on for Jira and one of the most trusted names in the Atlassian ecosystem. With a culture built on innovation, collaboration, and inclusivity, Tempo helps organizations work smarter, not harder.