Picnic Technologies - Logistics Optimization at Scale

Project Summary

Type: Enterprise Role (4 Years)
Focus: Logistics Optimization & Event-Driven Architecture

Key Features:

  • Trip planning optimization algorithm reducing delivery trips by 5.3%
  • Truck scheduling framework cutting delivery costs by 4% fleet-wide
  • Food safety separation algorithm improving contamination prevention by 66%
  • Real-time event-driven system managing 50K–100K warehouse records
  • DHL API integration for end-to-end parcel return service
  • 4 years of continuous delivery in high-scale production environment

Over 4 years at Picnic Technologies, I architected and built backend systems powering last-mile delivery for thousands of daily orders across multiple warehouses. The challenge: optimize routing, scheduling, and safety compliance at massive scale while maintaining real-time visibility into warehouse operations. The solution: an event-driven microservices architecture that processes millions of events daily, enabling data-driven optimization of delivery operations.

The Problem

Last-mile delivery is the most expensive part of e-commerce. At Picnic's scale, the complexity multiplies:

  • Thousands of daily orders across multiple warehouses need optimal routing and scheduling
  • Real-time coordination required between warehouse operations, truck scheduling, and delivery routes
  • Food safety compliance demands strict separation protocols to prevent contamination
  • Cost optimization critical: every percentage point shaved off delivery costs compounds across thousands of deliveries
  • Operational visibility needed across 50K–100K warehouse records for live state management
  • Integration complexity with external partners like DHL for parcel returns

Traditional batch processing and monolithic architectures couldn't handle the scale, latency, and complexity requirements.

Architecture

flowchart TB
    subgraph orders [Order Management]
        orders_in[Orders<br/>Thousands Daily]
    end

    subgraph event_bus [Event Bus - Kafka]
        kafka[Kafka Event Stream<br/>Real-Time Processing]
    end

    subgraph services [Microservices]
        trip_planner[Trip Planning Service<br/>Route Optimization]
        truck_scheduler[Truck Scheduling Service<br/>Fleet Management]
        warehouse_state[Warehouse State Service<br/>50K-100K Records]
        food_safety[Food Safety Service<br/>Separation Algorithm]
        dhl_integration[DHL Integration Service<br/>Parcel Returns]
    end

    subgraph data [Data Layer]
        postgres[(PostgreSQL<br/>Persistent State)]
    end

    subgraph delivery [Delivery Operations]
        optimized_routes[Optimized Routes<br/>5.3% Fewer Trips]
        scheduled_trucks[Scheduled Trucks<br/>4% Cost Reduction]
        safe_orders[Safe Orders<br/>66% Better Compliance]
    end

    orders_in --> kafka
    kafka --> trip_planner
    kafka --> truck_scheduler
    kafka --> warehouse_state
    kafka --> food_safety
    kafka --> dhl_integration

    trip_planner --> postgres
    truck_scheduler --> postgres
    warehouse_state --> postgres
    food_safety --> postgres
    dhl_integration --> postgres

    trip_planner --> optimized_routes
    truck_scheduler --> scheduled_trucks
    food_safety --> safe_orders
    warehouse_state --> optimized_routes
    warehouse_state --> scheduled_trucks

Technical Approach

Trip Planning Optimization

Built a sophisticated trip planning algorithm that optimizes delivery routes across thousands of daily orders. The algorithm considers:

  • Geographic clustering: Groups orders by proximity to minimize travel distance
  • Time windows: Respects customer delivery time preferences
  • Vehicle capacity: Maximizes utilization while staying within weight/volume limits
  • Traffic patterns: Incorporates historical and real-time traffic data
  • Multi-warehouse coordination: Optimizes across warehouse boundaries when beneficial

The optimization runs continuously as new orders arrive, dynamically adjusting routes to maintain efficiency. Result: 5.3% reduction in delivery trips across all warehouses, translating to significant cost savings and reduced environmental impact.
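The core idea of capacity-constrained proximity grouping can be sketched in a few lines. This is a deliberately simplified illustration, not Picnic's actual algorithm: the `Order` record, the squared-distance proxy, and the fixed `VEHICLE_CAPACITY` are all assumptions, and a production planner would also handle time windows, traffic, and multi-warehouse coordination.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch: greedily cluster orders into trips by proximity,
// respecting a per-vehicle capacity limit. Names are illustrative.
class TripPlannerSketch {

    record Order(String id, double lat, double lon, double volume) {}

    static final double VEHICLE_CAPACITY = 10.0; // assumed volume units

    // Squared Euclidean distance as a cheap proximity proxy.
    static double dist2(Order a, Order b) {
        double dLat = a.lat() - b.lat(), dLon = a.lon() - b.lon();
        return dLat * dLat + dLon * dLon;
    }

    // Seeds each trip with the first unassigned order, then repeatedly adds
    // the nearest remaining order that still fits in the vehicle.
    static List<List<Order>> planTrips(List<Order> orders) {
        List<Order> remaining = new ArrayList<>(orders);
        List<List<Order>> trips = new ArrayList<>();
        while (!remaining.isEmpty()) {
            Order seed = remaining.remove(0);
            List<Order> trip = new ArrayList<>(List.of(seed));
            double load = seed.volume();
            while (true) {
                final Order anchor = trip.get(trip.size() - 1);
                final double room = VEHICLE_CAPACITY - load;
                Order next = remaining.stream()
                        .filter(o -> o.volume() <= room)
                        .min(Comparator.comparingDouble(o -> dist2(anchor, o)))
                        .orElse(null);
                if (next == null) break;  // nothing left that fits
                remaining.remove(next);
                trip.add(next);
                load += next.volume();
            }
            trips.add(trip);
        }
        return trips;
    }
}
```

Even this toy version shows why the trip count drops: nearby orders that individually would each trigger a trip get packed onto one vehicle up to its capacity.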

Real-Time Event Architecture

Architected an event-driven system using Kafka to handle the massive scale of warehouse operations. The system:

  • Processes 50K–100K warehouse records in real time for live state management
  • Decouples services through event-driven communication, enabling independent scaling
  • Maintains eventual consistency across distributed services while ensuring data integrity
  • Provides real-time visibility into warehouse operations for operations teams
  • Handles backpressure gracefully during traffic spikes

Each warehouse event (inventory changes, order status updates, truck arrivals/departures) flows through Kafka, allowing multiple services to react and coordinate without tight coupling. This architecture enabled rapid feature development and reliable operation at scale.
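The consume-and-apply pattern at the heart of this architecture can be shown without a broker. In the sketch below a plain in-memory queue stands in for a Kafka topic so the state-update logic is visible in isolation; the `Event` shape and event type names are illustrative assumptions, not Picnic's actual schema.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Broker-free sketch of an event-driven state service: events arrive in
// order and are applied idempotently to an in-memory live view. A real
// service would poll these from a Kafka consumer instead of a Queue.
class WarehouseStateSketch {

    record Event(String type, String entityId, String payload) {}

    private final Map<String, String> liveState = new HashMap<>();

    // Applies one event to the live view of warehouse state.
    void apply(Event e) {
        switch (e.type()) {
            case "UPSERT" -> liveState.put(e.entityId(), e.payload());
            case "DELETE" -> liveState.remove(e.entityId());
            default -> { /* unknown event types are skipped, not fatal */ }
        }
    }

    // Drains a queue of events, as a consumer poll loop would.
    void drain(Queue<Event> topic) {
        Event e;
        while ((e = topic.poll()) != null) apply(e);
    }

    int size() { return liveState.size(); }
    String get(String id) { return liveState.get(id); }
}
```

Because each service derives its own view from the same ordered stream, new consumers can be added without touching the producers, which is what makes the independent scaling described above possible.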

Food Safety Algorithm

Designed a food safety separation algorithm that prevents cross-contamination between incompatible product categories (e.g., raw meat and ready-to-eat items). The algorithm:

  • Analyzes order composition to identify contamination risks
  • Enforces separation rules at multiple stages: warehouse picking, truck loading, delivery routing
  • Tracks compliance across the entire order lifecycle
  • Prevents violations proactively rather than detecting them after the fact

The algorithm improved contamination prevention protocols by 66%, ensuring compliance with food safety regulations while maintaining operational efficiency.
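The proactive check reduces to a pairwise compatibility test over the product categories sharing a tote or truck compartment. The rule table and category names below are hypothetical examples; the real rule set is driven by food safety regulation.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

// Illustrative pairwise separation check: given the categories in one
// compartment, verify that no forbidden pair travels together.
class SeparationCheckSketch {

    // Hypothetical rule table: each category maps to the categories it
    // must never share a compartment with.
    static final Map<String, Set<String>> FORBIDDEN = Map.of(
            "RAW_MEAT", Set.of("READY_TO_EAT", "PRODUCE"),
            "CHEMICALS", Set.of("READY_TO_EAT", "PRODUCE", "RAW_MEAT"));

    // Compatibility is symmetric even though the rule table may list the
    // restriction on only one side of the pair.
    static boolean isCompatible(String a, String b) {
        return !FORBIDDEN.getOrDefault(a, Set.of()).contains(b)
                && !FORBIDDEN.getOrDefault(b, Set.of()).contains(a);
    }

    // True when every pair in the compartment is allowed together.
    static boolean compartmentIsSafe(List<String> categories) {
        for (int i = 0; i < categories.size(); i++)
            for (int j = i + 1; j < categories.size(); j++)
                if (!isCompatible(categories.get(i), categories.get(j)))
                    return false;
        return true;
    }
}
```

Running this check at picking, loading, and routing time is what turns contamination prevention from after-the-fact detection into a hard constraint on the plan.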

Truck Scheduling Framework

Created a truck scheduling system that optimizes fleet utilization across all warehouses. The framework:

  • Balances workload across available trucks and drivers
  • Minimizes idle time by coordinating warehouse operations with truck availability
  • Optimizes loading sequences to reduce time at warehouses
  • Considers driver schedules and labor regulations
  • Adapts dynamically to changing conditions (traffic, weather, order volume)

This framework achieved a 4% reduction in delivery costs fleet-wide by optimizing resource utilization and reducing operational inefficiencies.
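The workload-balancing part of such a framework can be sketched with a classic least-loaded heuristic: sort trips longest-first, then always hand the next trip to the truck with the smallest accumulated driving time. This is a simplified assumption of one sub-problem, not the full scheduler, which must also respect driver shifts, loading slots, and labor rules.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Sketch of fleet workload balancing (longest-processing-time rule):
// each trip goes to the currently least-loaded truck. Names illustrative.
class TruckSchedulerSketch {

    static class Truck {
        final String id;
        int minutesAssigned = 0;
        final List<Integer> trips = new ArrayList<>();
        Truck(String id) { this.id = id; }
    }

    // tripDurations are minutes per trip; sorting descending first places
    // long trips while the fleet is still evenly loaded.
    static List<Truck> schedule(List<Integer> tripDurations, int fleetSize) {
        PriorityQueue<Truck> byLoad = new PriorityQueue<>(
                Comparator.comparingInt((Truck t) -> t.minutesAssigned));
        List<Truck> fleet = new ArrayList<>();
        for (int i = 0; i < fleetSize; i++) {
            Truck t = new Truck("truck-" + i);
            fleet.add(t);
            byLoad.add(t);
        }
        List<Integer> sorted = new ArrayList<>(tripDurations);
        sorted.sort(Comparator.reverseOrder());
        for (int minutes : sorted) {
            Truck least = byLoad.poll();   // lightest-loaded truck
            least.trips.add(minutes);
            least.minutesAssigned += minutes;
            byLoad.add(least);             // re-queue with updated load
        }
        return fleet;
    }
}
```

Balanced loads mean less idle time per truck, which is exactly the lever the cost reduction above comes from.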

Results

Metric                     | Impact
---------------------------|------------------------------------------------------
Trip planning optimization | 5.3% fewer trips across thousands of daily deliveries
Truck scheduling framework | 4% reduction in delivery costs across all warehouses
Food safety algorithm      | 66% improvement in contamination prevention protocols
Real-time event system     | 50K–100K warehouse records managed in real time
DHL integration            | End-to-end parcel return service fully operational
Production reliability     | 4 years of continuous delivery in high-scale environment

These optimizations compound across thousands of daily deliveries, resulting in millions of euros in cost savings annually while improving service quality and compliance.

Tech Stack

Java · Kotlin · PostgreSQL · Kafka · Event-Driven Architecture · Microservices

Key Learnings

Building systems at Picnic's scale taught me that optimization isn't just about algorithms; it's about architecture. The event-driven approach let us optimize independently across multiple dimensions (routing, scheduling, safety) while maintaining system reliability. Small percentage improvements (5.3% fewer trips, 4% cost reduction) compound dramatically at scale, making data-driven optimization essential. The real challenge was maintaining real-time visibility and coordination across distributed services while processing millions of events daily, a problem that event-driven architecture solved elegantly.

Need help optimizing logistics or building event-driven systems?

I help companies build scalable backend systems and optimize operations at scale. Let's discuss your challenges.

Book Free Intro Call