- Identify, propose, and implement advanced, best-of-breed solutions to complex Data Engineering problems at scale
- Lead the design, build and launch of efficient and reliable distributed data pipelines to move and transform data
- Design and develop new systems in partnership with software engineers to enable quick and easy data consumption
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Mentor the team and facilitate best-practice knowledge sharing
- Be a technology ambassador: share work through Git and blogs, and act as a spokesperson in customer and industry forums
**Requirements**:
- Possess a strong technology foundation:
  - Platform architecture, including containerization
  - Databases
  - Data pipeline engineering and architecture
  - Infrastructure deployment and management
- Have extensive hands-on experience across the data pipeline:
  - Ingestion: batch, streaming, or both
  - ETL/ELT
  - Repository architecture and implementation
- Deep expertise in, and the ability to mentor others in, at least one programming language (such as Python, Scala, or R), as well as SQL and Bash
- Proven ability to envision and build platforms that scale disproportionately
- Prior knowledge of CI/CD and agile development methodologies would be a plus
- Prior experience migrating legacy Hadoop, Teradata, or Cloudera environments to the cloud would be a big plus
- Great aptitude for collaboration combined with excellent communication and presentation skills
- An entrepreneurial and growth mindset combined with technical acumen and curiosity (must-haves)
- Have deep experience and associated certifications in **_at least one_** of the platform tracks below:
  - **Required**: AWS Glue, AWS Redshift, AWS S3, Athena, and Amazon MSK or Apache Kafka
  - **Preferred**: AWS Certified Data Analytics - Specialty or AWS Certified Big Data - Specialty
  - **Must Have**: Any combination of Snowflake-native tools (COPY, Snowpipe, Streams, etc.), third-party tools (dbt, Fivetran, Matillion, Immuta, Collibra, etc.), or open-source tools (Airbyte, Debezium, etc.)
  - **Preferred**: SnowPro Core & SnowPro Advanced: Architect
  - **Nice to Have**: AWS Data Pipeline, Lake Formation, EMR, Data Exchange, or any of the third-party data integration tools (dbt, Fivetran, Matillion, Immuta, Collibra, etc.) or open-source tools (Airbyte, Debezium, etc.)
  - **Nice to Have**: AWS Certified Solutions Architect - Professional or Associate
  - **Nice to Have**: AWS Glue, Informatica ETL, or other data integration and ETL/ELT tool sets
  - **Nice to Have**: SnowPro Advanced: Administrator or SnowPro Advanced: Data Engineer
Work Experience: **3-7 years**
Industry: **Cloud IT Company**
Salary: **9,000,000**
City: **Jakarta**
State/Province: **Jakarta Raya**
Country: **Indonesia**
Zip/Postal Code: **11470**