As a Data Engineer, you will design, build, and maintain scalable data pipelines and systems. Your expertise in dbt, GCP BigQuery, and Git, combined with your coding skills and experience working in an Agile environment, will be key to delivering reliable data to the business.
Key Responsibilities:
Data Pipeline Development: Design, develop, and maintain ETL/ELT pipelines that move data reliably from a variety of sources into our data warehouse using dbt and GCP BigQuery (a brief dbt model sketch follows this list).
Data Modelling: Implement and manage data models in BigQuery to support business analytics and reporting needs (see the illustrative table definition after this list).
Version Control: Use Git to manage changes to data pipelines, schemas, and related code.
Collaboration: Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions that meet their needs.
Performance Optimization: Monitor and optimize data pipeline performance, ensuring high availability and reliability.
Agile Practices: Participate in Agile ceremonies such as sprint planning, daily stand-ups, and retrospectives to ensure continuous improvement and timely delivery of projects.
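To give candidates a concrete feel for the pipeline work described above, here is a minimal sketch of a dbt staging model; the source and all table and column names (raw events, event_id, and so on) are hypothetical, not our actual schema.

    -- models/staging/stg_events.sql
    -- Minimal dbt staging model (illustrative; source and column names are hypothetical).
    -- Materialized as a view so it stays cheap to rebuild on every dbt run.
    {{ config(materialized='view') }}

    select
        event_id,
        user_id,
        event_type,
        timestamp(created_at) as created_at_utc  -- normalize raw timestamps to UTC
    from {{ source('raw', 'events') }}
    where event_id is not null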
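Data modelling in BigQuery typically pairs such models with partitioned, clustered tables; the DDL below is an illustrative example of that pattern, again with hypothetical names.

    -- Illustrative BigQuery DDL (dataset, table, and column names are hypothetical).
    -- Partitioning by event date and clustering by user_id keeps scans and
    -- query costs predictable for common reporting workloads.
    create table if not exists analytics.fct_events (
        event_id string not null,
        user_id string,
        event_type string,
        created_at_utc timestamp
    )
    partition by date(created_at_utc)
    cluster by user_id;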
Experience: 7-10 years of proven experience as a Data Engineer or in a similar role, with a strong background in data engineering.
Technical Skills:
Proficiency in dbt for data transformation and modelling.
Extensive experience with Google Cloud Platform (GCP), particularly BigQuery.
Strong knowledge of Git for version control.
Solid coding skills in languages such as SQL, Python, or Java.
Agile Environment: Experience working in an Agile development environment and a good understanding of its methodologies and practices.
Problem-Solving: Strong analytical and problem-solving skills.
Qualifications:
Certifications (preferred): Relevant certifications in GCP or dbt.
Additional Experience: Experience with additional GCP services (e.g., Dataflow, Pub/Sub, Composer) or other data tools.
Education: Bachelor’s degree in Computer Science, Engineering, Data Science, or a related field.