About The Role
Delivery Data Solutions (DDS) is a horizontal team responsible for transforming data across Delivery into meaningful datasets that power analytics, metrics, ML models and domain-team KPIs through real-time and batch processing. We drive optimal data resource utilization and data quality for the organization, and we provide visibility and standardization of core business metrics through the canonical datasets the team owns. The team is the center of excellence for data engineering practices across the Uber Delivery org: it builds efficient tools and processes for people working with data, designs and maintains a holistic view of Delivery data, and manages and optimizes Delivery data infrastructure resources.
What The Candidate Will Need / Bonus Points
What the Candidate Will Do
Build and maintain data pipelines and data products that power analytics, reporting and machine learning use cases across the Delivery organization.
Develop batch and real-time data processing workflows that transform large datasets into reliable and well-structured data assets.
Contribute to the development of core business metrics and analytical datasets used by product, data science and engineering teams.
Work closely with product engineers, data scientists and analysts to understand data requirements and implement scalable solutions.
Ensure data quality, reliability and timeliness across pipelines by following established data engineering best practices.
Support performance optimizations and infrastructure improvements that increase pipeline efficiency and maintain SLA commitments.
Participate in improving data engineering tools, processes and documentation within the team.
Basic Qualifications
Bachelor’s degree in Computer Science or a related technical field, or equivalent practical experience.
Experience coding using a general-purpose programming language such as Java, Python, Go or similar.
Experience working with data processing frameworks such as Spark, Hive or similar technologies.
Understanding of data warehousing concepts and analytical data modeling.
Experience writing data transformation logic, queries and scripts for data processing workflows.
Strong problem-solving skills and ability to work collaboratively with cross-functional teams.
Preferred Qualifications
Master’s degree in Computer Science or a related technical field, or equivalent practical experience.
Experience building data pipelines supporting analytics or machine learning workloads.
Experience working with distributed data processing systems and large datasets.
Understanding of data quality validation, monitoring and pipeline reliability practices.
Exposure to real-time or streaming data technologies is a plus.
Familiarity with marketplace, logistics or delivery domain datasets is a plus.