About the Role
We are looking for a Data Engineer to design, build, and maintain scalable data pipelines that support analytics, reporting, and AI/ML use cases. In this role, you'll work closely with engineering and product teams to ensure data is reliable, accessible, and production-ready, so teams can make faster decisions and build smarter systems.
Key Responsibilities:
- Design, build, and maintain ETL/ELT pipelines and data workflows across multiple sources
- Integrate data from APIs, databases, and third-party services with strong reliability and traceability
- Ensure data quality through validation, monitoring, and structured debugging of pipeline issues
- Optimize data models and transformations for analytics and downstream consumption
- Support analysts and ML engineers with clean, well-documented datasets and reliable delivery
- Document schemas, pipelines, and processing logic to keep data systems transparent and scalable
Qualifications & Requirements:
✅ 3+ years of experience in data engineering or a similar role
✅ Strong SQL skills and confidence working with relational databases
✅ Experience building pipelines using Python (or similar) and modern data tooling
✅ Familiarity with cloud platforms and data services (AWS, GCP, or Azure)
✅ Understanding of data warehousing, analytics workflows, and performance optimization
✅ Strong problem-solving skills, attention to detail, and ability to work cross-functionally
Why Join Us?
- Work on products at the intersection of AI, cloud, and automation
- Build data infrastructure that directly impacts real product decisions
- Join a technically strong team with modern tooling and ownership culture
- Flexible work arrangements and competitive compensation based on experience