Responsibilities:
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality data solutions.
- Define data warehouse requirements based on business metrics, establishing a proper naming convention prior to development.
- Design end-to-end ETL processes based on use cases and business requirements.
- Implement complex automated workflows and routines using workflow scheduling tools.
- Troubleshoot complex data issues and perform root cause analysis to proactively resolve product and operational problems.
Requirements:
- Bachelor’s degree or higher in Informatics, Information Systems, Computer Science, or Engineering.
- At least 2 years of proven experience as a Data Engineer or in a related field.
- Experience using Python for data processing.
- Demonstrated understanding of SQL, data modeling, and data warehousing (e.g., Redshift, BigQuery, Snowflake, Azure Synapse Analytics).
- Hands-on experience implementing ETL (or ELT) processes and data pipeline tools.
- Experience with data processing frameworks and schedulers such as Airflow.
Nice to have experience in any of the following:
- Cloud platforms such as GCP, AWS, and Azure.
- Software engineering workflows (version control with Git, CI/CD).
- Linux and Docker.