DevOps Engineer – Data Platform

Eneba • LT
Remote
About Eneba

At Eneba, we’re building an open, safe and sustainable marketplace for the gamers of today and tomorrow. Our marketplace supports close to 20 million active users (and growing fast!) and provides a level of trust, safety, and market accessibility that is second to none. We’re proud of what we’ve accomplished in such a short time and look forward to sharing this journey with you. Join us as we continue to scale, diversify our portfolio, and grow with the evolving community of gamers.

About the Role

We’re hiring a DevOps Engineer to join our Platform team and take full ownership of infrastructure and DevOps processes for our Data Platform. Today, data engineers at Eneba are stretched across both data pipeline development and infrastructure maintenance. You’ll step in to streamline, automate, and elevate the backend systems that power our analytics and data-driven products.

This is a hands-on and collaborative role where you’ll have the opportunity to reshape core infrastructure, focusing on AWS, CI/CD systems, database ops, and performance monitoring, while directly supporting some of the most critical projects in the company.

About the Team

You’ll be part of the Platform Engineering team, working closely with:

- Data engineers (your closest collaborators and internal customers)
- Backend and Frontend engineers building shared services
- DevOps colleagues building EKS clusters, migration pipelines, and automation tools

Responsibilities

  • Own and evolve cloud infrastructure for the Data Platform, with a focus on AWS services like RDS, S3, IAM, and VPC
  • Build and maintain CI/CD pipelines for data-related workloads (e.g., Airflow, dbt, ingestion pipelines, Terraform)
  • Collaborate with data engineers to orchestrate and scale data pipelines, improving reliability and observability
  • Manage database operations, including migrations, replication, backups, and performance tuning for PostgreSQL/MySQL
  • Implement and enforce IAM policies and security best practices for data infrastructure
  • Optimize data platform cost, availability, and scalability across environments
  • Monitor platform health and implement alerting for failures, bottlenecks, or data anomalies
  • Enable self-service infrastructure for the data team by creating reusable templates and automation tools
  • Champion infrastructure best practices across the data lifecycle

Must-have Requirements

  • 4+ years of DevOps or Infrastructure Engineering experience
  • Proven track record working with data engineering teams or on data-intensive platforms
  • Strong hands-on experience with AWS, especially:
      • RDS (PostgreSQL/MySQL): migration, replication, HA, tuning
      • S3 for data lakes
      • IAM, VPC, networking, and security best practices
  • Experience with CI/CD for data pipelines or infrastructure-as-code (e.g., deploying Airflow DAGs, dbt jobs)
  • Proficient in Terraform, GitLab CI/CD, and scripting (Bash, Python)
  • Deep understanding of monitoring and observability (e.g., Prometheus, OpenTelemetry, ELK)
  • Comfortable enabling cross-functional collaboration and self-service tooling

Nice-to-have Requirements

  • Experience with Databricks, Delta Lake, or S3-based data lakes
  • Familiarity with Apache Airflow, Step Functions, or other workflow orchestration tools
  • Exposure to Spark, EMR, or large-scale data processing environments
  • Background in data validation, schema evolution, or lineage tracking
  • Experience managing secrets and sensitive credentials in data environments (e.g., AWS Secrets Manager, Vault)