Senior Software Engineer: Platform
BigGeo
About BigGeo
BigGeo is redefining geospatial intelligence with an AI-ready Discrete Global Grid System (DGGS) that transforms how spatial data is captured, indexed, and monetized. Our platform powers mission-critical decisions across sectors where location intelligence drives outcomes—from large-scale infrastructure projects and environmental planning to logistics and emergency response. We are industry agnostic, unlocking possibilities for organizations that have yet to realize the value a system like ours can deliver.
Backed by Vivid Theory, a venture studio dedicated to building transformative technologies, we're a multidisciplinary, entrepreneurial team built for impact. We work quickly, push boundaries, and expect every team member to be both a thinker and a doer.
The Opportunity
We're seeking a Senior Platform Engineer to focus on high-performance backend systems built with modern statically compiled languages. This role emphasizes building reliable, secure, and performant infrastructure that powers our product offerings. If you're a developer who thrives on creating high-performance, observable systems and isn't afraid to dive deep into low-level optimizations while building reliable platform services, we want to hear from you!
Primary Responsibilities
• Design and implement efficient, reliable, secure, and observable backend systems
• Optimize code for performance and resource utilization
• Contribute to architectural decisions for distributed systems and big-data processing
• Write and maintain observable, instrumented code that enables effective system monitoring (see the sketch after this list)
• Lead the development of complex platform features
• Design and implement scalable data architectures
• Conduct thorough performance testing and optimization
• Mentor junior developers; promote and enforce best practices
• Lead initiatives to align platform development with business objectives, ensuring that all platform functionalities contribute positively to key outcomes and KPIs
• Facilitate a smooth transition of platform features to product teams, supporting seamless integration and effective use within product pipelines
• Continuously evaluate and optimize the platform to enhance user experience and deliver measurable business value, supporting overall company growth objectives
• Assume full ownership and accountability for strategic technology domains, with the ability to articulate their business value and organizational impact
• Drive DevOps practices and automation initiatives
• Monitor and analyze technical performance of internal systems
• Leverage existing CI/CD pipelines and tooling for efficient deployment workflows
• Support deployment and operational excellence
• Contribute to infrastructure-as-code initiatives
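To give candidates a concrete flavor of what we mean by observable, instrumented code, here is a minimal sketch in Go using the standard library's log/slog; the route, handler, and log fields are illustrative only, not part of our actual codebase.

```go
package main

import (
	"log/slog"
	"net/http"
	"os"
	"time"
)

// instrument wraps a handler with structured request logging so latency
// and request metadata can be shipped to a log pipeline for monitoring.
func instrument(logger *slog.Logger, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		next.ServeHTTP(w, r)
		logger.Info("request handled",
			"method", r.Method,
			"path", r.URL.Path,
			"duration_ms", time.Since(start).Milliseconds(),
		)
	})
}

func main() {
	logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
	mux := http.NewServeMux()
	// Hypothetical health endpoint, shown only to anchor the middleware.
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	http.ListenAndServe(":8080", instrument(logger, mux))
}
```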
Requirements
• Bachelor's degree in Computer Science, Software Engineering, Data Science, or a related field (or equivalent practical experience)
• Proven track record in high-performance backend development
• Proficiency in modern statically compiled languages
• Strong understanding of immutability principles and their application
• Expertise in writing efficient, reliable, and secure code
• Proficient with both manual memory management and automatic lifetime management techniques
• Strong understanding of computer architecture and efficient utilization of available resources (an allocation sketch follows this list)
• Strong knowledge of fundamental data structures and algorithms
• Understanding of performance trade-offs between algorithmic efficiency, distributed systems coordination, and I/O minimization in big data contexts
• Experience with modern observability patterns and practices
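As a rough illustration of what efficient resource utilization can look like in practice, here is a small Go sketch that reuses buffers through sync.Pool to cut allocation pressure on a hot path; the render helper and its payload are hypothetical.

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool reuses buffers across calls, trading a little bookkeeping
// for far fewer heap allocations under sustained load.
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

// render is a hypothetical hot-path formatter, shown for illustration.
func render(payload string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset()            // clear previous contents, keep capacity
	defer bufPool.Put(buf) // return the buffer for reuse
	buf.WriteString("processed: ")
	buf.WriteString(payload)
	return buf.String()
}

func main() {
	fmt.Println(render("tile 1234"))
}
```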
Backend Technology Stack Requirements
Core Languages & Frameworks
• Experience with modern statically compiled languages (Go, Rust, C++, or similar)
• Familiarity with testing frameworks and benchmarking tools (a test and benchmark sketch follows this group)
• Understanding of dependency management and build systems
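For reference, testing and benchmarking with Go's standard tooling looks roughly like the sketch below; normalizeKey is a hypothetical helper that exists only to anchor the example.

```go
package platform

import (
	"strings"
	"testing"
)

// normalizeKey is a hypothetical helper used only to show the style.
func normalizeKey(s string) string {
	return strings.ToLower(strings.TrimSpace(s))
}

// Table-driven unit test: each case documents expected behavior.
func TestNormalizeKey(t *testing.T) {
	cases := []struct{ in, want string }{
		{"  TileID ", "tileid"},
		{"region", "region"},
	}
	for _, c := range cases {
		if got := normalizeKey(c.in); got != c.want {
			t.Errorf("normalizeKey(%q) = %q, want %q", c.in, got, c.want)
		}
	}
}

// Benchmarked with the standard tooling: go test -bench=. -benchmem
func BenchmarkNormalizeKey(b *testing.B) {
	for i := 0; i < b.N; i++ {
		normalizeKey("  TileID ")
	}
}
```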
Databases & Data Storage
• Strong experience with relational databases (PostgreSQL, MySQL)
• Proficiency with NoSQL databases (MongoDB, Redis, Cassandra)
• Experience with time-series databases (InfluxDB, TimescaleDB, or Prometheus)
• Knowledge of database optimization, indexing strategies, and query performance tuning
• Experience with connection pooling and database driver optimization (a pooling sketch follows this group)
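Connection pooling with Go's database/sql might be tuned along these lines; the pool sizes shown are illustrative and workload-dependent, not recommendations, and any database/sql driver would work in place of the PostgreSQL one assumed here.

```go
package main

import (
	"database/sql"
	"time"

	_ "github.com/lib/pq" // assumed PostgreSQL driver; any database/sql driver works
)

func openPool(dsn string) (*sql.DB, error) {
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		return nil, err
	}
	// Pool sizing is workload-dependent; these values are placeholders.
	db.SetMaxOpenConns(25)                 // cap concurrent connections
	db.SetMaxIdleConns(25)                 // keep warm connections around
	db.SetConnMaxLifetime(5 * time.Minute) // recycle before server-side timeouts
	return db, db.Ping()
}
```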
Message Queues & Event Streaming
• Experience with Apache Kafka, RabbitMQ, or NATS
• Understanding of event-driven architectures and pub/sub patterns (a minimal sketch follows this group)
• Knowledge of message serialization formats (Protocol Buffers, Avro, MessagePack)
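The pub/sub pattern itself can be sketched in-process with Go channels, as below; a production system would delegate this to Kafka, RabbitMQ, or NATS, and the topic name here is made up.

```go
package main

import (
	"fmt"
	"sync"
)

// Broker is a minimal in-process pub/sub used only to illustrate the pattern.
type Broker struct {
	mu   sync.RWMutex
	subs map[string][]chan string
}

func NewBroker() *Broker {
	return &Broker{subs: make(map[string][]chan string)}
}

func (b *Broker) Subscribe(topic string) <-chan string {
	ch := make(chan string, 16) // buffered so Publish rarely blocks
	b.mu.Lock()
	b.subs[topic] = append(b.subs[topic], ch)
	b.mu.Unlock()
	return ch
}

func (b *Broker) Publish(topic, msg string) {
	b.mu.RLock()
	defer b.mu.RUnlock()
	for _, ch := range b.subs[topic] {
		ch <- msg // fan out to every subscriber
	}
}

func main() {
	b := NewBroker()
	events := b.Subscribe("tiles.updated")
	b.Publish("tiles.updated", "tile 1234 reindexed")
	fmt.Println(<-events)
}
```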
APIs & Communication Protocols
• Expertise in RESTful API design and implementation
• Experience with gRPC and Protocol Buffers
• Knowledge of GraphQL is a plus
• Understanding of API versioning, rate limiting, and authentication patterns (OAuth2, JWT); a rate-limiting sketch follows this group
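Rate limiting, for instance, might be applied as HTTP middleware along these lines, assuming the golang.org/x/time/rate package; the limits and route are placeholders, not recommendations.

```go
package main

import (
	"net/http"

	"golang.org/x/time/rate"
)

// rateLimit rejects requests beyond a steady rate with a burst allowance.
func rateLimit(next http.Handler) http.Handler {
	limiter := rate.NewLimiter(rate.Limit(100), 20) // 100 req/s, burst of 20 (illustrative)
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !limiter.Allow() {
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	// Hypothetical versioned endpoint, shown only to anchor the middleware.
	mux.HandleFunc("/v1/tiles", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	http.ListenAndServe(":8080", rateLimit(mux))
}
```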
Container & Orchestration
• Proficiency with Docker and containerization best practices
• Experience with Kubernetes (deployment, scaling, service mesh)
• Knowledge of Helm charts and Kubernetes operators
• Experience with container registries and image optimization
Cloud Platforms
• Hands-on experience with at least one major cloud provider (AWS, GCP, or Azure)
• AWS: ECS/EKS, Lambda, S3, RDS, ElastiCache, SQS/SNS
• GCP: GKE, Cloud Run, Cloud SQL, Pub/Sub, BigQuery
• Azure: AKS, Azure Functions, Cosmos DB, Service Bus
Infrastructure as Code
• Experience with Terraform or Pulumi (a Pulumi sketch follows this group)
• Knowledge of configuration management tools (Ansible, Chef, or similar)
• Experience with GitOps practices (ArgoCD, Flux)
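A Pulumi program in Go takes roughly this declarative shape; the bucket resource and names are illustrative only, and the sketch assumes the pulumi and pulumi-aws SDKs.

```go
package main

import (
	"github.com/pulumi/pulumi-aws/sdk/v6/go/aws/s3"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Declarative resource: Pulumi reconciles actual cloud state to this spec.
		bucket, err := s3.NewBucketV2(ctx, "tile-cache", nil) // hypothetical bucket
		if err != nil {
			return err
		}
		ctx.Export("bucketName", bucket.ID())
		return nil
	})
}
```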
CI/CD & DevOps Tools
• Experience working with CI/CD platforms (Jenkins, GitLab CI, GitHub Actions, CircleCI)
• Ability to effectively leverage existing CI/CD pipelines and deployment automation
• Knowledge of automated testing strategies (unit, integration, e2e)
• Familiarity with build processes and deployment workflows
Observability & Monitoring
• Experience with Prometheus and Grafana (an instrumentation sketch follows this group)
• Proficiency with distributed tracing (Jaeger, Zipkin, or OpenTelemetry)
• Knowledge of structured logging practices and tools
• Experience with APM tools (DataDog, New Relic, or Elastic APM)
• Understanding of SLIs, SLOs, and SLA definitions
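Exposing a latency SLI with the Prometheus Go client might look roughly like this; the metric name, buckets, and route are illustrative, and buckets would normally be chosen to match your SLO targets.

```go
package main

import (
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// reqDuration backs a latency SLI; bucket boundaries should track the SLO.
var reqDuration = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name:    "http_request_duration_seconds",
		Help:    "Request latency by path.",
		Buckets: prometheus.DefBuckets,
	},
	[]string{"path"},
)

func main() {
	prometheus.MustRegister(reqDuration)

	mux := http.NewServeMux()
	// Hypothetical endpoint, instrumented inline for brevity.
	mux.HandleFunc("/v1/tiles", func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		w.Write([]byte("ok"))
		reqDuration.WithLabelValues(r.URL.Path).Observe(time.Since(start).Seconds())
	})
	mux.Handle("/metrics", promhttp.Handler()) // scraped by Prometheus
	http.ListenAndServe(":8080", mux)
}
```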
Version Control & Collaboration
• Expert-level Git proficiency
• Experience with code review processes and branching strategies
• Familiarity with monorepo or microservices repository patterns
Nice to Haves
• A Master's degree, or relevant certifications in Distributed Systems, Big Data Processing, or Cloud Computing
• Experience with Rust (with tokio.rs) or Scala (with cats-effect) will be given top priority
• Experience with Go (Golang) including concurrency patterns, standard library, and popular frameworks
• Experience with any modern statically typed language (C++, Java, Kotlin)
• Background in big-data processing architectures (Spark, Flink, Hadoop)
• Experience with distributed systems and consensus algorithms (Raft, Paxos)
• Experience with high-performance data structures and lock-free programming (a lock-free sketch follows at the end of this section)
• Knowledge of geospatial data structures and algorithms (PostGIS, H3, S2 Geometry)
• Expertise in optimizing I/O operations and understanding of Linux kernel internals
• Familiarity with binary protocols and efficient serialization
• Experience with distributed eventing systems (e.g., NATS.io, Pulsar)
• Experience with service mesh technologies (Istio, Linkerd, Consul)
• Knowledge of caching strategies (Redis, Memcached, CDN optimization)
• Experience with load balancing and reverse proxy configuration (Nginx, HAProxy, Envoy)
• Familiarity with security best practices and compliance frameworks (SOC 2, GDPR, HIPAA)
• Experience with performance profiling tools (pprof, flamegraphs, perf)
• Knowledge of WebAssembly (Wasm) and its applications
• Contributions to open-source projects or maintaining libraries
• Experience with chaos engineering and resilience testing
• Passionate about code efficiency, reliability, and security
• Proactive in finding ways to improve existing systems
• Eager to learn, mentor and teach
• Strong problem-solving skills and critical thinking
• Excellent communication and teamwork abilities
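As one example of the lock-free programming mentioned above, here is a small Go sketch that maintains a high-water mark with compare-and-swap instead of a mutex; updateMax is a hypothetical function shown only to illustrate the technique.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// updateMax records a new high-water mark lock-free: concurrent writers
// race via compare-and-swap, so no goroutine ever blocks another.
func updateMax(max *atomic.Int64, v int64) {
	for {
		cur := max.Load()
		if v <= cur || max.CompareAndSwap(cur, v) {
			return // already high enough, or our CAS won
		}
		// CAS lost to a concurrent writer; reload and retry.
	}
}

func main() {
	var max atomic.Int64
	var wg sync.WaitGroup
	for i := int64(1); i <= 100; i++ {
		wg.Add(1)
		go func(v int64) {
			defer wg.Done()
			updateMax(&max, v)
		}(i)
	}
	wg.Wait()
	fmt.Println(max.Load()) // prints 100
}
```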