AI/ML Developer

Job Category: Technology and IT
Job Type: Remote
Job Location: United States
Company Name: hackajob

Are you passionate about building safe and trustworthy AI? hackajob is partnering with Leo Technologies to hire a Responsible AI Engineer who will design, test, and deploy evaluation systems for Large Language Models (LLMs) and generative AI. In this role, you’ll develop guardrails, ensure fairness, and implement ethical safeguards for AI solutions used in public safety and intelligence. If you have strong ML/AI experience and a commitment to responsible AI practices, this is your opportunity to shape the future of high-impact AI systems.


Responsible AI Engineer Responsibilities

As a Responsible AI Engineer, you will:

  • Build and maintain evaluation frameworks for LLMs and generative AI systems tailored to public safety and intelligence use cases

  • Design guardrails and alignment strategies to reduce bias, toxicity, hallucinations, and other ethical risks

  • Define and implement online/offline evaluation metrics (accuracy, consistency, interpretability, safety, model/data drift)

  • Develop continuous evaluation pipelines integrated with CI/CD and production monitoring systems

  • Stress test models against adversarial prompts, edge cases, and sensitive data scenarios

  • Research and integrate third-party evaluation frameworks, adapting them to regulated, high-stakes environments

  • Partner with customer-facing teams to ensure explainability, transparency, and auditability of AI outputs

  • Provide technical leadership in responsible AI practices and influence organization-wide standards

  • Contribute to DevOps/MLOps workflows for AI evaluation (Kubernetes experience is a plus)

  • Document best practices and share knowledge to foster responsible AI innovation


Responsible AI Engineer Requirements

To succeed as a Responsible AI Engineer, you should have:

  • Bachelor’s or Master’s in Computer Science, AI, Data Science, or related field

  • 3–5+ years of ML/AI engineering experience, including 2+ years in LLM evaluation, QA, or safety

  • Expertise in generative AI evaluation techniques: automated metrics, human-in-the-loop testing, adversarial testing, and red-teaming

  • Experience with bias detection, fairness approaches, and responsible AI design

  • Knowledge of LLM observability and monitoring tools such as Langfuse and LangSmith

  • Proficiency in Python and libraries such as LangGraph, Strands Agents, Pydantic AI, LangChain, Hugging Face, PyTorch, and LlamaIndex

  • Experience integrating evaluations into DevOps/MLOps workflows (Kubernetes, Terraform, ArgoCD, GitHub Actions)

  • Familiarity with cloud AI platforms (AWS, Azure) and best practices for deployment

  • Strong problem-solving skills for designing AI evaluation systems that hold up in real-world conditions

  • Excellent communication skills to translate findings for both technical and non-technical audiences


Why Join as a Responsible AI Engineer?

As a Responsible AI Engineer, you’ll work at the intersection of cutting-edge AI technology and ethical responsibility. Your work will directly influence how AI is deployed in high-stakes environments, ensuring safety, fairness, and trust. You’ll have the chance to collaborate with talented teams while leading efforts in building transparent, auditable, and responsible AI systems.


Apply for this position

Allowed Type(s): .pdf, .doc, .docx