DESCRIPTION
This role is part of the rekindle returnship program.
Note: For more details on the Rekindle program, please visit https://www.amazon.jobs/en/landing_pages/rekindle
IES Shopping – Analytics and Science Team (AST) has a vision to embed a data culture deeply in our IES Shopping Experience organization, fostering invention through insights and building a robust data architecture to support business needs. We spin the insights flywheel by growing a pool of bar-raisers and diverse data professionals, which empowers us to continuously enhance our data capabilities across the disciplines of Data Engineering, Business Intelligence, Analytics, and Data Science.
Key job responsibilities
As a Data Engineer, you will work in a complex, large-scale data warehouse environment. You should be passionate about working with massive datasets and enjoy bringing them together to answer business questions. Expertise in dataset creation and management is essential.
You will build data analytics solutions to address increasingly complex business needs. This includes implementing and operating stable, scalable data flow solutions from production systems into end-user applications and reports. These solutions must be fault-tolerant, self-healing, and adaptive. You will face unique challenges around space, size, and speed, requiring the use of cutting-edge analytics patterns and technologies like AWS EMR, Lambda, Kinesis, and Spectrum.
Extracting structured and unstructured data from various sources, you will construct complex analyses and write scalable, high-performance code. Your data flow solutions will process data on Spark and Redshift, storing it in Redshift and S3 for reporting and ad-hoc analysis.
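The extract–transform–load flow described above can be sketched in miniature. This is an illustrative, hedged example only: the retry helper, the in-memory "sink", and the sample records are all hypothetical stand-ins for the production Spark/Redshift/S3 components the role actually works with.

```python
import time

def with_retries(fn, attempts=3, backoff_s=0.0):
    """Retry a flaky step -- a toy stand-in for the self-healing behaviour
    a production data flow into Redshift or S3 would need."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(backoff_s * attempt)

def extract(records):
    # Pretend these rows came from a production source system.
    return list(records)

def transform(rows):
    # Normalise and filter: keep well-formed rows, standardise casing.
    return [
        {"sku": r["sku"].strip().upper(), "qty": int(r["qty"])}
        for r in rows
        if r.get("sku") and str(r.get("qty", "")).isdigit()
    ]

def load(rows, sink):
    # In production this might be a COPY into Redshift or a write to S3;
    # here the sink is just an in-memory list.
    sink.extend(rows)
    return len(rows)

source = [
    {"sku": " b07xyz ", "qty": "3"},
    {"sku": "", "qty": "1"},          # malformed row: dropped by transform
    {"sku": "b08abc", "qty": "10"},
]
warehouse = []
loaded = with_retries(lambda: load(transform(extract(source)), warehouse))
print(loaded)  # → 2
```

At scale, each stage would be replaced by a distributed equivalent (Spark jobs, Kinesis consumers, Redshift COPY), but the shape of the pipeline and the need for retryable, idempotent steps stay the same.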
You should be detail-oriented, adept at solving unstructured problems, and able to work in a self-directed environment. Excellent business and communication skills are crucial for collaborating with stakeholders, defining key questions, and building data solutions that address their needs. You will own the customer relationship around data, ensuring high availability, low latency, and thorough documentation.
BASIC QUALIFICATIONS
– 1+ years of data engineering experience
– Experience with SQL
– Experience with data modeling, warehousing and building ETL pipelines
– Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
– Experience with one or more scripting languages (e.g., Python, KornShell)
PREFERRED QUALIFICATIONS
– Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
– Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, or DataStage