DESCRIPTION
The JP Economics team is a central science team working across a variety of topics in the JP Retail business and beyond. We work closely with JP business leaders to drive change at Amazon. We focus on solving long-term, ambiguous, and challenging problems, while providing advisory support to help solve short-term business pain points. Key topics include pricing, product selection, delivery speed, profitability, and customer experience. We tackle these issues by building novel economic/econometric models, machine learning systems, and high-impact experiments, which we integrate into business, financial, and system-level decision making. Our work is highly collaborative, and we regularly partner with JP-, EU-, and US-based interdisciplinary teams.
In this role, you will build production-grade machine learning models to serve a best-in-class shopping and delivery experience to millions of customers on Amazon. This requires you to formulate ambiguous business problems into solvable scientific problems, work with large-scale data pipelines, perform extensive data cleaning and exploration, train and evaluate your models in a robust manner, design and conduct live experiments to validate model performance, and automate model inference on AWS infrastructure.
The ideal candidate is an experienced data scientist or machine learning engineer who has built machine learning systems in production that deliver business impact at scale in a B2C industry. You are a self-starter who enjoys ambiguity in a fast-paced and ever-changing environment. You are extremely proficient in Python, SQL, and distributed computing frameworks. You have an excellent understanding of how machine learning models work under the hood. In addition, you may have worked with AWS infrastructure and causal uplift modeling techniques. You think big about the next game-changing opportunity but also dive deep into every detail that matters. You insist on the highest standards and are consistent in delivering results.
We are open to considering high-potential candidates with less experience for a more junior position.
Key job responsibilities
– Work with Product, Finance and Engineering to formulate business problems into scientific ones
– Build large-scale data pipelines for training and evaluating the models using PySpark/SparkSQL
– Extensively clean and explore the datasets
– Train and evaluate ML models in a robust manner
– Design and conduct live experiments to validate model performance
– Automate model inference and monitoring on AWS infrastructure
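To illustrate the kind of experiment analysis the responsibilities above involve, here is a minimal sketch of a two-proportion z-test for comparing conversion rates between the control and treatment arms of a live A/B experiment. The function name and the example numbers are hypothetical, and a production analysis would typically add power calculations and multiple-testing corrections:

```python
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for an A/B experiment.

    conv_*: number of conversions in each arm; n_*: number of users.
    Returns (z, p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical example: 5.0% vs 5.6% conversion, 10,000 users per arm
z, p = two_proportion_z_test(500, 10_000, 560, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

At these sample sizes a 0.6-point lift is borderline: the p-value lands just above the conventional 0.05 threshold, which is exactly the kind of result that motivates careful experiment design and sizing before launch.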
BASIC QUALIFICATIONS
– PhD, or Master’s degree and 5+ years of applied research experience
PREFERRED QUALIFICATIONS
– Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc.
– Experience with large-scale distributed systems such as Hadoop, Spark, etc.
– Strong understanding of statistical analysis (hypothesis testing and experiment design) and machine learning techniques for tabular data
– Experience in developing and implementing machine learning models for tabular data in production
– Publications in top-tier machine learning conferences