Lead Software Engineer
Omnitracs
Who We Are
Solera is a global leader in data and software services that strives to transform every touchpoint of the vehicle lifecycle into a connected digital experience. In addition, we provide products and services to protect life’s other most important assets: our homes and digital identities. Today, Solera processes over 300 million digital transactions annually for approximately 235,000 partners and customers in more than 90 countries. Our 6,500 team members foster an uncommon, innovative culture and are dedicated to successfully bringing the future to bear today through cognitive answers, insights, algorithms and automation. For more information, please visit solera.com.
The Role: Lead Software Engineer
We are seeking a highly skilled Lead Software Engineer to join our data engineering team and play a critical role in shaping our data platform and analytics ecosystem. This role is ideal for a hands-on technical leader who thrives on designing scalable, reliable data solutions, leading high-impact initiatives, and mentoring engineers to deliver production-ready data products.
As a Lead Software Engineer, you will be responsible for architecting, building, and optimizing robust ETL/ELT pipelines across Snowflake and AWS to support enterprise analytics, reporting, and business intelligence use cases. You will work closely with product, analytics, and business stakeholders to translate complex requirements into performant data models and trusted datasets.
You will also serve as a technical leader within the team, driving best practices in data engineering, code quality, security, and governance, while continuously improving platform performance, reliability, and cost efficiency. This role offers the opportunity to influence data architecture decisions, modernize existing pipelines, and contribute directly to the success of data-driven initiatives across the organization.
What You’ll Do
- Design, build, and maintain robust, scalable ETL/ELT data pipelines to support large-scale, enterprise data processing and analytics workloads (a minimal sketch of this pattern follows this list).
- Integrate and harmonize data from diverse structured and unstructured sources, including databases, streaming platforms, and external third-party APIs.
- Develop and optimize high-performance data processing solutions using Python, SQL, and distributed processing frameworks where applicable.
- Support, enhance, and troubleshoot existing ETL processes written in SQL and Python, ensuring reliability, data accuracy, and timely resolution of production issues.
- Collaborate with cross-functional teams to translate business and product requirements into well-designed data models, pipelines, and reusable data assets.
- Create, maintain, and enforce clear reporting specifications, data contracts, and process documentation as part of production-ready data deliverables.
- Partner with business and analytics stakeholders to deliver trusted reporting layers and dashboards, leveraging Looker and other BI tools.
- Lead code reviews, provide technical guidance, and mentor junior engineers, fostering a culture of quality, collaboration, and continuous improvement.
- Establish and enforce best practices for data quality, security, governance, and compliance across the data platform.
- Continuously optimize performance, cost, and scalability of cloud-based data platforms and data warehouse solutions.
- Proactively identify opportunities to modernize data architecture, improve pipeline efficiency, and adopt new tools or frameworks where they add measurable value.
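To make these responsibilities concrete, the sketch below shows the basic extract-transform-load shape they describe, in plain Python. It is a minimal illustration under assumptions, not a description of our stack: the API endpoint, the field names, and the SQLite target (standing in for a warehouse such as Snowflake) are all hypothetical.

```python
# Illustrative only: a minimal extract-transform-load sketch.
# The API endpoint, field names, and SQLite target are hypothetical;
# a pipeline in this role would load a cloud warehouse, not SQLite.
import json
import sqlite3
from urllib.request import urlopen

API_URL = "https://api.example.com/v1/events"  # hypothetical source

def extract(url: str) -> list[dict]:
    """Pull raw JSON records from a third-party API."""
    with urlopen(url) as resp:
        return json.load(resp)

def transform(records: list[dict]) -> list[tuple]:
    """Keep only well-formed records and normalize field types."""
    rows = []
    for r in records:
        if r.get("id") is None or r.get("ts") is None:
            continue  # drop malformed records; a real pipeline would log them
        rows.append((int(r["id"]), str(r["ts"]), float(r.get("value", 0.0))))
    return rows

def load(rows: list[tuple], db_path: str = "warehouse.db") -> None:
    """Idempotent load: INSERT OR REPLACE keyed on the record id."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, ts TEXT, value REAL)"
    )
    con.executemany("INSERT OR REPLACE INTO events VALUES (?, ?, ?)", rows)
    con.commit()
    con.close()

if __name__ == "__main__":
    load(transform(extract(API_URL)))
```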
What You’ll Bring
Required Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field.
- 8–12+ years of hands-on experience designing, building, and supporting enterprise-grade data engineering solutions.
- Proven experience working with cloud-based data platforms, including data warehouses and data lakes, at scale.
- Hands-on experience with SSAS cubes (tabular or multidimensional).
- Strong ability to translate business and analytical requirements into technical designs, data models, and production-ready solutions.
- Solid understanding of data architecture patterns (e.g., batch, streaming, ELT/ETL, lakehouse) and experience designing scalable, efficient, and maintainable data models.
- Advanced proficiency with Snowflake, Amazon Redshift, or similar cloud data warehouse technologies.
- Strong programming skills in Python and SQL, with working knowledge of PySpark or other distributed processing frameworks (a short PySpark sketch follows this list).
- Hands-on experience integrating data from multiple structured and unstructured sources, such as relational databases, NoSQL stores (e.g., MongoDB), streaming platforms (e.g., Kafka), and RESTful APIs.
- Experience supporting production data pipelines, including monitoring, troubleshooting, root-cause analysis, and performance tuning.
- Familiarity with version control systems (Git), CI/CD pipelines, and Agile/Scrum development practices.
- Experience delivering analytics and insights using BI and visualization tools such as Looker, Tableau, or Power BI.
- Strong problem-solving skills with a keen attention to detail and a bias toward automation and reliability.
- Excellent written and verbal communication skills, with the ability to collaborate effectively across technical and non-technical stakeholders.
- Strong time-management skills with the ability to manage multiple priorities in a fast-paced environment.
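For the distributed-processing and warehouse items above, the sketch below shows one common PySpark pattern: keeping the latest record per key in a raw feed and writing the result as partitioned Parquet. The paths, column names, and deduplication key are hypothetical; a production pipeline in this role would more likely land data in Snowflake or Redshift than in local Parquet.

```python
# Illustrative PySpark sketch: deduplicate a raw event feed and write
# partitioned Parquet. All paths and column names are hypothetical.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dedupe-events").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/events/")  # hypothetical path

# Keep the latest record per event_id, ordered by ingestion timestamp.
w = Window.partitionBy("event_id").orderBy(F.col("ingested_at").desc())
deduped = (
    raw.withColumn("rn", F.row_number().over(w))
       .filter(F.col("rn") == 1)
       .drop("rn")
)

# Partition by event date so downstream reads can prune efficiently.
(deduped
    .withColumn("event_date", F.to_date("ingested_at"))
    .write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/curated/events/"))
```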
Preferred Qualifications
- Experience working in AWS or other major cloud ecosystems.
- Exposure to real-time or near–real-time data processing and event-driven architectures.
- Experience implementing data quality checks, governance frameworks, and security best practices.
- Relevant industry certifications (e.g., AWS, Snowflake, Databricks) are a plus.