Senior AI Data Pipeline Engineer

42dot

Software Engineering, Data Science
South Korea
Posted on Feb 9, 2026

Location

Pangyo (Software Dream Center), South Korea

Employment Type

Full time

Location Type

Hybrid

Department

Engineering / AI

We are looking for the best

At 42dot, our AI Data Pipeline Engineers architect and scale global data pipelines that ingest and process data from worldwide sources. You will design and operate high-throughput systems to reliably deliver petabyte-scale data to our large-scale GPU infrastructure, powering mission-critical AI workloads.

Responsibilities

  • Design and build high-performance, scalable data pipelines to support diverse AI and Machine Learning initiatives across the organization.

  • Architect and implement multi-region data infrastructure to ensure global data availability and seamless synchronization.

  • Develop flexible pipeline architectures that allow for complex branching and logic isolation to support multiple concurrent AI projects.

  • Optimize large-scale data processing workloads using Databricks and Spark to maximize throughput and minimize processing costs (a sketch of this kind of job appears after this list).

  • Maintain and evolve the containerized data environment on Kubernetes, ensuring robust and reliable execution of data workloads.

  • Collaborate with AI researchers and platform teams to streamline the flow of high-quality data into training and evaluation pipelines.
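
As a flavor of the Spark/Databricks optimization work above, here is a minimal PySpark sketch of a throughput-oriented batch job. The bucket paths, table layout, and tuning values are illustrative assumptions, not 42dot's actual pipeline.

```python
# Minimal sketch of a throughput-oriented Spark batch job; the bucket,
# schema, and partition counts are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("example-event-compaction")
    # Adaptive query execution lets Spark coalesce shuffle partitions
    # at runtime; the static setting below is just a starting point.
    .config("spark.sql.adaptive.enabled", "true")
    .config("spark.sql.shuffle.partitions", "2048")
    .getOrCreate()
)

# Read one day of raw events (hypothetical path and schema).
raw = spark.read.parquet("s3://example-bucket/raw/events/date=2026-02-09/")

# De-duplicate and stamp ingestion time before handoff to training jobs.
clean = (
    raw.dropDuplicates(["event_id"])
       .withColumn("ingested_at", F.current_timestamp())
       .repartition(512, "source_region")  # balance output file sizes
)

clean.write.mode("overwrite").partitionBy("source_region").parquet(
    "s3://example-bucket/curated/events/date=2026-02-09/"
)
```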

Qualifications

  • Extensive professional experience in building and operating production-grade data pipelines for massive-scale AI/ML datasets.

  • Strong proficiency in distributed processing frameworks, particularly Apache Spark and the Databricks ecosystem.

  • Deep hands-on experience with workflow orchestration tools like Apache Airflow for managing complex dependency graphs (see the DAG sketch after this list).

  • Solid understanding of Kubernetes and containerization for deploying and scaling data processing components.

  • Proficiency in distributed messaging systems such as Apache Kafka for high-throughput data ingestion and event-driven architectures.

  • Expert-level programming skills in Python for system-level optimizations.

  • Strong knowledge of cloud-native services and best practices for building secure and scalable data infrastructure.

  • Logical approach to problem-solving with the persistence to identify and resolve root causes in complex, large-scale systems.

  • Strong communication skills to effectively collaborate with cross-functional teams and external partners.
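
To make the "complex dependency graphs" qualification above concrete, below is a hypothetical Airflow DAG in which a shared ingestion stage fans out into two isolated project branches; the DAG id, task names, and callables are placeholders.

```python
# Hypothetical Airflow 2.x DAG sketch: one shared ingest/validate stage
# branching into isolated per-project feature tasks.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def _noop(**_):
    # Placeholder body; a real task would call into pipeline code.
    pass


with DAG(
    dag_id="example_multi_project_pipeline",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",  # "schedule" is the Airflow 2.4+ argument name
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_raw", python_callable=_noop)
    validate = PythonOperator(task_id="validate_schema", python_callable=_noop)
    featurize_a = PythonOperator(task_id="featurize_project_a", python_callable=_noop)
    featurize_b = PythonOperator(task_id="featurize_project_b", python_callable=_noop)
    publish = PythonOperator(task_id="publish_to_training", python_callable=_noop)

    # The two featurize branches run concurrently and stay logically isolated.
    ingest >> validate >> [featurize_a, featurize_b] >> publish
```

Keeping the branches as sibling tasks rather than separate DAGs is one way to share the upstream ingest while isolating per-project logic; at larger scale each branch might become its own DAG triggered downstream.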

Preferred Qualifications

  • Experience in architecting global, multi-region data pipelines and solving challenges related to cross-border data transfer and latency.

  • Practical experience or a strong interest in implementing distributed computing frameworks like Ray for AI workloads (see the sketch after this list).

  • Experience in building real-time or near-real-time pipelines using Spark Streaming or Flink.

  • Familiarity with Infrastructure as Code (IaC) tools such as Terraform to manage complex data environments.

  • Understanding of the end-to-end ML lifecycle (MLOps) and how data infrastructure supports model experimentation and deployment.
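
For the Ray item above, here is a minimal fan-out sketch that distributes a placeholder preprocessing step across workers; the function and data are illustrative only.

```python
# Hypothetical Ray sketch: fan a CPU-bound preprocessing step out
# across whatever workers the cluster (or local machine) provides.
import ray

ray.init()  # connects to an existing cluster, or starts a local one


@ray.remote
def preprocess(shard):
    # Placeholder transform; a real task would decode/clean records.
    return len(shard)


shards = [[f"record-{i}-{j}" for j in range(1_000)] for i in range(8)]
futures = [preprocess.remote(s) for s in shards]
print(sum(ray.get(futures)))  # total records processed across workers
ray.shutdown()
```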

Interview Process

  • Document screening → online coding test → video interview (approx. 1 hour) → on-site interview (approx. 3 hours) → final offer

  • The process may vary by role and is subject to change depending on scheduling and circumstances.

  • Interview schedules and results will be communicated individually via the email address registered with your application.

Additional Information

  • When submitting your resume, please omit information that may not be requested under the Fair Hiring Procedure Act, such as resident registration number, family relations, marital status, salary, photo, physical characteristics, and region of origin.

  • Please upload all files in PDF format, 30MB or smaller. (If you run into problems uploading your resume, please send it to recruit@42dot.ai along with the URL of the position you are applying for.)

  • After the interview process concludes, a reference check may be conducted with the applicant's consent.

  • Persons of national merit and those eligible for employment protection are given preferential treatment in accordance with applicable laws.

  • Holders of a disability registration certificate are given preferential treatment in accordance with the Act on the Employment Promotion and Vocational Rehabilitation of Persons with Disabilities.

  • 42dot does not accept resumes from search firms it has not engaged and pays no fees for unsolicited resumes.

※ Please be sure to review the information below before applying.