Adventures in ‘Damage Land’ - how Amazon is automating damage detection of parcels
22 September 2022
Detecting parcel damage is tricky at Amazon’s scale, but the online retailer now has researchers training robots to help with the task.

In 2020, Sebastian Hoefer, senior applied scientist with the Amazon Robotics AI team, supported by his Amazon colleagues, successfully pitched a novel project to address this problem. The idea: combine computer vision and machine learning (ML) approaches in an attempt to automate the detection of product damage in Amazon fulfillment centers (FCs).
“You want to avoid damage altogether, but in order to do so you need to first detect it,” notes Hoefer. “We are building that capability, so that robots in the future will be able to utilise it and assist in damage detection.”
Damage in Amazon fulfillment centers can be hard to spot, unlike this perforation, seen here by a standard camera (left) and by Amazon's damage detection models (right).
The team set about working at an FC near Hamburg, Germany, called HAM2, in a section of the warehouse affectionately known as “Damage Land”. Damaged items end up there while decisions are made on whether such items can be sold at a discount, refurbished, donated or, as a last resort, disposed of.
The team set up a sensor-laden, illuminated booth in Damage Land, where associates placed items in a tray to be photographed.
Julia Dembeck, a senior operations manager at HAM2, says: “The results were amazing and got even better when associates shared their best practices on the optimal way to place items in the tray.”
Types of damage included crushes, tears, holes, deconstruction (e.g., contents breaking out of their container) and spillages.
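Amazon has not published how these categories are encoded. Purely as an illustrative sketch, a label schema along these lines might look like the following enumeration, with an explicit class for undamaged items:

```python
# Hypothetical label schema (not Amazon's published taxonomy): one way to
# encode the damage categories described above, should a model be trained
# on finer-grained labels than a binary damaged/undamaged flag.
from enum import Enum, auto

class DamageType(Enum):
    CRUSH = auto()
    TEAR = auto()
    HOLE = auto()
    DECONSTRUCTION = auto()  # e.g. contents breaking out of their container
    SPILLAGE = auto()
    UNDAMAGED = auto()       # non-damaged items, needed to learn the distinction
```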
The associates collected about 30,000 product images in this way, two-thirds of which were images of damaged items.
“We also collected images of non-damaged items because otherwise we cannot train our models to distinguish between the two,” says Hoefer. “Twenty thousand pictures of damage are not a lot in ‘big data’ terms, but it is a lot given the rarity of damage.”
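Amazon has not described its training pipeline in detail. As a minimal sketch, assuming a hypothetical folder layout in which the booth images are sorted into damaged and undamaged directories, a binary dataset of this kind could be assembled in PyTorch as follows:

```python
# Minimal sketch (not Amazon's pipeline): build a damaged/undamaged dataset
# from a hypothetical directory layout of booth images:
#
#   booth_images/
#       damaged/      # ~20,000 images
#       undamaged/    # ~10,000 images
#
# ImageFolder derives one label per subdirectory, giving the binary labels.
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),  # common input size for pretrained backbones
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("booth_images", transform=preprocess)

# Hold out 20% for validation. Note: splitting by image is a simplification;
# to test generalisation to products never seen in training, as described
# below, the split would need to be made by product instead.
train_size = int(0.8 * len(dataset))
train_set, val_set = random_split(dataset, [train_size, len(dataset) - train_size])

train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)
```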
With data in hand, the team first applied a supervised learning ML approach, a workhorse in computer vision. They used the data as a labelled training set that would allow the algorithm to build a generalisable model of what damage can look like. When put through its paces on images of products it had never seen before, the model’s early results were promising.
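The article does not name the architecture used. As an illustration of the supervised approach only, the sketch below fine-tunes a pretrained ResNet-18 on the binary labels from the loaders above, then measures accuracy on the held-out images:

```python
# Illustrative fine-tuning sketch, not Amazon's model: adapt a pretrained
# ResNet-18 to the two-class damaged/undamaged problem.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: damaged / undamaged

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One pass over the training images (train_loader from the sketch above).
model.train()
for images, labels in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

# Evaluate on held-out images the model did not see during training.
model.eval()
correct = total = 0
with torch.no_grad():
    for images, labels in val_loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.size(0)
print(f"Validation accuracy: {correct / total:.2%}")
```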
You can read a longer version of this article, penned by Sean O'Neill, on the Amazon Science website.