
Senior Software Engineer

Omnitracs

Software Engineering
Bengaluru, Karnataka, India
Posted on Feb 17, 2026

Who We Are

Solera is a global leader in data and software services that strives to transform every touchpoint of the vehicle lifecycle into a connected digital experience. In addition, we provide products and services to protect life’s other most important assets: our homes and digital identities. Today, Solera processes over 300 million digital transactions annually for approximately 235,000 partners and customers in more than 90 countries. Our 6,500 team members foster an uncommon, innovative culture and are dedicated to successfully bringing the future to bear today through cognitive answers, insights, algorithms and automation. For more information, please visit solera.com.


Job Summary:

We are seeking an AI Engineer responsible for designing, building, and operating production-grade GenAI systems using foundation models, retrieval-augmented generation (RAG), and classical ML techniques. The role blends applied AI engineering with strong backend development, data handling, and system reliability. You will work closely with product and platform teams to deliver scalable, secure, and cost-efficient AI-powered features using open-source and commercial models.

Essential responsibilities and duties:

  • Design and build GenAI-powered applications using foundation models, RAG pipelines, and agent-based architectures.

  • Implement RAG systems end-to-end (a minimal sketch follows this list):

      ◦ Data ingestion and chunking

      ◦ Embedding generation and vector stores

      ◦ Retrieval strategies and re-ranking

      ◦ Prompt design and response grounding

  • Work with open-source and commercial LLMs, selecting models based on cost, latency, and quality trade-offs.

  • Build AI services using Java and Python, integrating with existing backend systems.

  • Develop robust prompting, parsing, and post-processing logic, including regex-based validation and structured output enforcement.

  • Design and optimise SQL and NoSQL data access patterns for AI-driven workflows.

  • Implement fine-tuning and adaptation strategies (LoRA, PEFT, prompt tuning) where appropriate; see the adapter configuration sketch after this list.

  • Use frameworks such as LangChain and LangGraph to orchestrate multi-step reasoning, tools, and agents.

  • Implement evaluation, monitoring, and observability for AI systems:

      ◦ Quality and hallucination detection

      ◦ Latency and cost tracking

      ◦ Drift and regression detection

  • Apply AI safety and data governance practices, including PII minimisation and auditability.

  • Collaborate with product, data, and platform teams to productionize AI features.

  • Continuously improve AI systems through experimentation, iteration, and performance tuning.

  • Apply core machine learning concepts, including supervised and unsupervised learning, bias–variance trade-offs, overfitting, and model evaluation.

  • Combine classical ML techniques with GenAI/LLM-based approaches to build hybrid AI systems.
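
To make the RAG bullet above concrete, here is a minimal, self-contained Python sketch of the listed steps (chunking, embedding, retrieval, grounded prompting). It is illustrative only: embed() is a hash-seeded placeholder for a real embedding model, the in-memory list stands in for a vector database, and re-ranking and ingestion connectors are omitted.

    # Minimal RAG sketch: naive chunking, placeholder embeddings, brute-force
    # retrieval, and a grounded prompt. embed() is a hash-seeded stand-in for a
    # real embedding model; the flat list stands in for a vector store.
    import hashlib
    import numpy as np

    def chunk(text: str, size: int = 200) -> list[str]:
        """Naive fixed-size chunking; production pipelines split on structure/tokens."""
        return [text[i:i + size] for i in range(0, len(text), size)]

    def embed(text: str, dim: int = 64) -> np.ndarray:
        """Placeholder embedding: a deterministic pseudo-random unit vector."""
        seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:8], "big")
        vec = np.random.default_rng(seed).standard_normal(dim)
        return vec / np.linalg.norm(vec)

    def build_index(docs: list[str]) -> list[tuple[str, np.ndarray]]:
        """'Vector store': a flat list of (chunk, embedding) pairs."""
        return [(c, embed(c)) for doc in docs for c in chunk(doc)]

    def retrieve(query: str, index: list[tuple[str, np.ndarray]], k: int = 3) -> list[str]:
        """Brute-force cosine-similarity retrieval; re-ranking would slot in here."""
        q = embed(query)
        ranked = sorted(index, key=lambda pair: float(q @ pair[1]), reverse=True)
        return [chunk_text for chunk_text, _ in ranked[:k]]

    def grounded_prompt(query: str, context: list[str]) -> str:
        """Prompt that grounds the answer in the retrieved context only."""
        joined = "\n---\n".join(context)
        return ("Answer using only the context below. If the answer is not in the "
                "context, say you don't know.\n\n"
                f"Context:\n{joined}\n\nQuestion: {query}\nAnswer:")

    docs = ["Telematics data includes GPS traces, engine events, and driver logs.",
            "RAG grounds LLM answers in documents retrieved at query time."]
    question = "What does RAG do?"
    print(grounded_prompt(question, retrieve(question, build_index(docs))))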
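
The fine-tuning bullet can be illustrated with a hedged configuration sketch using the Hugging Face transformers and peft libraries. The base model name and target_modules below are assumptions chosen for illustration; actual choices depend on the model architecture and task, and the training loop itself is omitted.

    # Sketch: attaching LoRA adapters to a causal LM with the peft library.
    # The base model name and target_modules are illustrative assumptions only;
    # match them to the architecture you actually fine-tune.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "mistralai/Mistral-7B-v0.1"            # illustrative base model
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)

    lora_config = LoraConfig(
        r=16,                                     # adapter rank
        lora_alpha=32,                            # scaling factor
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],      # attention projections (model-specific)
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)    # wrap base model with trainable adapters
    model.print_trainable_parameters()            # typically well under 1% of weights
    # ...training loop (e.g. transformers.Trainer or trl's SFTTrainer) goes here...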

Fleet/Telematics-Specific Skills:

  • Strong understanding of foundation models, embeddings, and transformer-based architectures.

  • Hands-on experience with RAG design patterns and trade-offs.

  • Experience working with open-source LLMs (e.g., LLaMA-family, Mistral, Qwen) and/or hosted models.

  • Practical knowledge of fine-tuning techniques and when to use them vs prompt engineering.

  • Strong prompt engineering skills, including structured outputs and guardrails.

Qualifications:

  • EDUCATION: Bachelor’s degree in Computer Science or equivalent

  • EXPERIENCE: 5+ years of experience building production AI-driven systems, with extensive hands-on work in GenAI over the last 1–3 years, including LLM integration, prompt engineering, orchestration frameworks, and inference optimisation.

Knowledge/Skills/Abilities:

  • Strong programming skills in Python and Java.

  • Solid experience with SQL and NoSQL databases for production workloads.

  • Experience using regex and rule-based techniques to validate and post-process LLM outputs (a small validation sketch appears at the end of this posting).

  • Familiarity with vector databases and search engines.

  • Experience building API-driven services and integrating AI into backend systems.

  • Exposure to AI governance, security, and compliance considerations.

  • Prior work on analytics, copilots, or decision-support systems.

  • Experience optimising LLM cost and performance at scale.

Tooling & Platforms:

  • Hands-on experience with LangChain, LangGraph, or similar orchestration frameworks.

  • Experience with model monitoring, evaluation, and logging tools.

  • Familiarity with CI/CD pipelines for AI workloads.

  • Experience deploying AI services in cloud or containerised environments.
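
To illustrate the regex and structured-output points above, here is a small validation sketch. The expected fields (claim_id, severity), the CLM-###### pattern, and the retry policy are invented for illustration and are not part of any real Solera or Omnitracs interface.

    # Sketch: validating and post-processing an LLM response into structured data.
    # The expected fields, the claim-ID pattern, and the retry policy are
    # illustrative assumptions, not a real product schema.
    import json
    import re

    CLAIM_ID = re.compile(r"^CLM-\d{6}$")              # e.g. CLM-004217
    SEVERITIES = {"low", "medium", "high"}

    def parse_llm_output(raw: str) -> dict:
        """Extract the first JSON object from a model response and validate it."""
        match = re.search(r"\{.*\}", raw, re.DOTALL)   # tolerate surrounding prose
        if not match:
            raise ValueError("no JSON object found in model output")
        data = json.loads(match.group(0))
        if not CLAIM_ID.match(str(data.get("claim_id", ""))):
            raise ValueError(f"invalid claim_id: {data.get('claim_id')!r}")
        if data.get("severity") not in SEVERITIES:
            raise ValueError(f"invalid severity: {data.get('severity')!r}")
        return data

    def call_with_validation(generate, prompt: str, retries: int = 2) -> dict:
        """Re-prompt the model (up to `retries` extra times) when validation fails."""
        for _ in range(retries + 1):
            try:
                return parse_llm_output(generate(prompt))
            except ValueError as err:
                prompt += (f"\nYour previous answer was invalid ({err}). "
                           "Return only a valid JSON object.")
        raise RuntimeError("model never produced valid structured output")

    def fake_llm(prompt: str) -> str:
        """Stub standing in for a real LLM call."""
        return 'Sure! {"claim_id": "CLM-004217", "severity": "high"}'

    print(call_with_validation(fake_llm, "Summarise the claim as JSON."))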