
DataOps Engineer

Placer.ai

Data Science
Ramat Gan, Israel
Posted on Feb 20, 2026

ABOUT PLACER.AI:

Placer.ai is transforming how organizations understand the physical world. Our location analytics platform provides unprecedented visibility into locations, markets, and consumer behavior. Placer empowers thousands of customers, from Fortune 500 companies to local governments and nonprofits, to make smarter, data-driven decisions.

What sets us apart? We've built the most advanced location intelligence platform in the market while maintaining an uncompromising commitment to privacy, proving that powerful analytics and responsible data practices can coexist.

Our growth reflects the market's demand: we reached $100M in annual recurring revenue within just 6 years of launching, achieved unicorn status with a $1B+ valuation in 2022, and continue to expand rapidly as one of North America's fastest-growing tech companies. We're creating a $100B+ market opportunity, and we're just getting started.

Named one of Forbes America's Best Startup Employers and a Deloitte Technology Fast 500 company, we're building a culture where innovation thrives, collaboration is the norm, and every team member contributes to reshaping how the world understands location.

SUMMARY:

We are looking for a DataOps Engineer to own the infrastructure that powers Placer's large-scale data processing platform. This is a platform-facing role sitting at the intersection of data engineering and infrastructure — you'll be the person who makes Spark run reliably and efficiently on Kubernetes, so that data engineers can build with confidence.

You understand data workloads deeply enough to make smart infrastructure decisions, and you have the production instincts to keep complex systems healthy at scale. If you get excited about shaving minutes off Spark job runtimes, right-sizing cluster autoscalers, and building the internal tooling that makes a data platform feel effortless, this role is for you.

RESPONSIBILITIES:

  • Design, deploy, and operate the Kubernetes-based infrastructure that runs Apache Spark and large-scale data processing workloads
  • Own the reliability, performance, and cost-efficiency of the data platform — including SLAs, autoscaling, resource quotas, and workload isolation
  • Manage Spark-on-K8s configurations, Airflow infrastructure, and Databricks integration; tune for throughput, latency, and cost
  • Build and maintain CI/CD pipelines and infrastructure-as-code for data platform components
  • Develop observability tooling — metrics, logging, alerting, and data quality dashboards — to proactively surface issues across the pipeline stack
  • Collaborate closely with Data Engineers to understand workload patterns and translate them into infrastructure decisions
  • Manage cloud storage (GCS/S3), Delta Lake, and Unity Catalog infrastructure
  • Drive platform improvements end-to-end: from design through deployment and ongoing ownership

REQUIREMENTS:

  • 5+ years of experience in a production infrastructure, SRE, or DevOps role
  • 2+ years of hands-on experience running data processing workloads (Apache Spark, Flink, or similar) in production
  • Strong Kubernetes experience, including Spark-on-K8s, autoscaling, resource management, and the broader K8s ecosystem
  • 2+ years with infrastructure-as-code tools (Terraform, Pulumi, or similar)
  • Proficiency in at least one general-purpose language — Python or Go preferred
  • Experience with workflow orchestration tools, particularly Apache Airflow
  • Solid understanding of cloud infrastructure — GCP preferred (GCS, GKE, IAM)
  • Strong observability skills: metrics pipelines, structured logging, alerting frameworks

OTHER REQUIREMENTS:

  • Familiarity with Delta Lake, Parquet, and columnar storage formats
  • Experience with data quality frameworks and pipeline lineage tooling
  • Knowledge of query optimization, partition strategies, and Spark performance tuning
  • Experience managing queues and databases (Kafka, PostgreSQL, Redis, or similar)

WHY JOIN PLACER.AI?

  • Join a rocketship! We are pioneering an entirely new market
  • Take a central and critical role at Placer.ai
  • Work with, and learn from, top-notch talent
  • Competitive salary
  • Excellent benefits

Placer.ai is committed to maintaining a drug-free workplace and promoting a safe, healthy working environment for all employees.

Placer.ai is an equal opportunity employer and has a global remote workforce. Placer.ai’s applicants are considered solely based on their qualifications, without regard to an applicant’s disability or need for accommodation. Any Placer.ai applicant who requires reasonable accommodations during the application process should contact Placer.ai’s Human Resources Department to make the need for an accommodation known.