If you want to optimise the distribution network of India’s largest third-party logistics provider, our data team is the place to be. We are one of the country’s premier data science teams and have been featured widely in the press.
The team comprises leading experts in the fields of Operations Research and Particle Physics, along with budding engineers and scientists in Machine Learning and Artificial Intelligence. Over the past year, the data science team at Delhivery has built key capabilities in the following areas:
Maps: Our business relies heavily on understanding where our end customers live and how to reach them. This is a huge India-specific challenge, primarily because of the unstandardised way addresses are written and the poor quality of road and pincode data in smaller towns and villages. We process large volumes of location data from our devices on the ground to learn locality boundaries, the routes taken by ground staff, and more.
Machine Learning: We use a range of machine learning techniques to automate decision-making at the ground level, e.g., identifying whether a shipment is safe to fly based on its product description, or identifying which shipments have a high probability of being returned.
Discrete Optimisation: We rely on a range of optimisation methods to ensure our distribution network is designed for cost efficiency and scale, e.g., the Vehicle Routing Problem to optimise shipment collection from clients, and the Facility Location Problem to ensure our distribution centres are suitably located.
Simulation: The scale of our distribution network often makes problems intractable, with millions of variables interacting with each other over time. We are investing in an in-house simulator that lets us measure the impact of changes in the network over time and ultimately design a system that can be “self-aware”.
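To give a flavour of the kind of problem the Discrete Optimisation work above involves, here is a toy sketch of the (uncapacitated) Facility Location Problem in plain Python. All costs and instance sizes are invented for illustration; real instances are solved with proper MIP solvers rather than brute force:

```python
from itertools import combinations

# Hypothetical toy instance: 3 candidate distribution centres, 4 customers.
open_cost = [4, 3, 5]        # fixed cost of opening each candidate centre
serve_cost = [               # serve_cost[centre][customer]
    [2, 8, 7, 6],
    [6, 3, 9, 2],
    [5, 7, 2, 4],
]
n_centres = len(open_cost)
n_customers = len(serve_cost[0])

def total_cost(open_set):
    # Fixed opening costs plus, for each customer, the cheapest open centre.
    fixed = sum(open_cost[i] for i in open_set)
    service = sum(min(serve_cost[i][j] for i in open_set)
                  for j in range(n_customers))
    return fixed + service

# Brute-force search over every non-empty subset of candidate centres.
best = min(
    (frozenset(s)
     for r in range(1, n_centres + 1)
     for s in combinations(range(n_centres), r)),
    key=total_cost,
)
print(sorted(best), total_cost(best))  # which centres to open, and at what cost
```

The enumeration is exponential in the number of candidate sites, which is exactly why realistic network-design instances call for Mixed Integer Programming formulations and solvers instead.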
We are looking to expand our team to further our capabilities in the area of Operations Research.
Collaborate with cross-functional teams, including Engineering, Product, Operations, Sales, Marketing, Security, and Customer Service, to break down complex business problems and recommend data science products
Translate business processes into mathematical models
Take ownership of a project and be able to work independently with little supervision
Formulate and solve Operations Research problems aimed at optimising Delhivery’s supply chain network and automating daily operations
Use statistical methods for analysing large datasets
Code system prototypes in an object-oriented, scripting, or modeling language
Disseminate original research in peer reviewed journals and conferences
Degree (B.Tech, MS, PhD or equivalent) in Computer Science, Mathematics, Operations Research, Statistics or the Natural Sciences
1–7 years of work experience in data science and statistical modeling for the Data Scientist role; 3+ years for Senior Data Scientist
A very clear understanding of probability and statistics, an analytical approach to problem-solving, and the ability to think critically about a diverse array of problems
Practical and theoretical knowledge of OR techniques: Mathematical Programming (linear and non-linear), Graph Theory, Simulation, Convex Optimisation, the Transportation Problem, the Vehicle Routing Problem, the Facility Location Problem, Queuing Theory, Inventory Management, Forecasting Techniques, etc.
Knowledge of meta-heuristics such as Genetic Algorithms, Tabu Search, and Simulated Annealing would be beneficial
Understanding of Mixed Integer Programming techniques, with the ability to leverage commercially available solvers and libraries such as CPLEX, COIN-OR, Google OR-Tools, or R packages and adapt them as required
Familiarity with statistical methods such as hypothesis testing, forecasting, and time series analysis, gained through work experience or graduate-level education
Expertise in at least one of the following languages: Python, Java, C++
Experience with relational databases and with NoSQL databases such as MongoDB, Elasticsearch, or Redis, or with any graph database
Experience handling geospatial data, e.g., with tools such as PostGIS, will be appreciated
Skilled at data visualization and presentation
Good communication skills with both technical and business audiences
Experience with big data tools such as Spark and Hadoop is a plus
Publications in peer-reviewed journals will count in your favour
Most importantly, an inquisitive mind, a capacity for self-learning and abstraction, and a risk appetite for experimentation and failure
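As a small illustration of the meta-heuristics mentioned in the requirements, the sketch below applies Simulated Annealing to a toy travelling-salesman instance in plain Python. The coordinates, cooling schedule, and move operator are all invented for the example; it is a sketch of the technique, not production routing code:

```python
import math
import random

random.seed(0)  # fixed seed so the run is reproducible

# Hypothetical depot/stop coordinates for a 6-stop tour.
points = [(0, 0), (0, 5), (5, 5), (5, 0), (2, 3), (4, 1)]

def tour_length(order):
    # Total length of the closed tour visiting points in the given order.
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

current = list(range(len(points)))
best = current[:]
temp = 10.0
while temp > 1e-3:
    # Neighbour move: reverse a random segment of the tour (2-opt style).
    i, j = sorted(random.sample(range(len(points)), 2))
    candidate = current[:i] + current[i:j + 1][::-1] + current[j + 1:]
    delta = tour_length(candidate) - tour_length(current)
    # Always accept improvements; accept worse tours with Boltzmann probability,
    # which lets the search escape local optima while the temperature is high.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        current = candidate
        if tour_length(current) < tour_length(best):
            best = current[:]
    temp *= 0.995  # geometric cooling

print(best, round(tour_length(best), 2))
```

The same accept-or-reject skeleton carries over to larger neighbourhood moves and to other meta-heuristics; Tabu Search, for instance, swaps the temperature-based acceptance rule for a memory of recently visited moves.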