AI against “forgetting” at supermarket self-checkouts? Of course!
AI Eyes in the Supermarket: Balancing Security and Privacy at Self-Checkout
The bane of supermarket self-checkouts: fraud and oversights, euphemistically called “unknown losses”, or the heavy hand that misweighs fruit and vegetables. Random checks seem ineffective, and some stores even post an attendant at the self-checkout scales. The solution? AI, of course. Despite the GDPR? Yes, says the CNIL (the French data protection authority) in a May 2025 note.
This AI consists of cameras augmented with real-time analysis software. They are positioned high up so as to film only the checkout area, but that still takes in the loyalty card, the shopping basket, the products to be scanned, and inevitably the customer, preferably blurred. The algorithm is trained to recognize “events” (identifying or tracking products, people’s hands, or a person’s position relative to the checkout) and to verify that everything has been scanned correctly. In case of an anomaly, the idea is not to stop the customer but, more subtly, to schedule a check or to prompt the customer with an alert on the screen, proposes the CNIL, which does not want the system to become yet another surveillance tool. This could indeed work.
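The detection logic can be pictured with a minimal sketch, assuming a hypothetical event model in which the camera emits “pick” events (an item seen leaving the basket) and the till emits “scan” events; the names and the simplification are mine, not the CNIL’s:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    picked: int = 0    # items the camera saw leaving the basket
    scanned: int = 0   # items actually registered at the till

    def on_pick(self) -> None:
        self.picked += 1

    def on_scan(self) -> None:
        self.scanned += 1

    def anomaly(self) -> bool:
        # More picks than scans suggests something was not scanned.
        return self.picked > self.scanned

def close_transaction(t: Transaction) -> str:
    # Per the CNIL's approach: schedule a discreet human check
    # rather than blocking or "arresting" the customer.
    return "schedule_check" if t.anomaly() else "ok"
```

The real systems of course work on video, not counters, but the decision at the end of the transaction reduces to this kind of reconciliation.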
The issue is that these devices collect personal data: even with blurred or masked images, offending individuals remain re-identifiable, since staff will ultimately have to intervene with the person. And then there is the store’s ordinary CCTV footage, which is not blurred; the connection will quickly be made.
But supermarkets have a legitimate interest, says the CNIL, in processing this customer data (which exempts them from obtaining consent) in order to avoid losses caused by errors or theft at self-checkouts. Before venturing onto this somewhat slippery ground, the CNIL seeks to establish that no less intrusive alternative exists: there isn’t really one. It cites, for example, RFID tags that trigger alarms at the gates; that works in clothing stores, but makes no sense in supermarkets stocking thousands of items. And beware of a high rate of false positives, to which the CNIL is attentive, and rightly so: being wrongly accused of fraud is anything but pleasant for a customer. Too many false positives would undermine the legitimacy of the whole method.
Experiment, Test
Such an intrusive mechanism must be effective, so the CNIL advises retailers to test it first. Does the AI control, as implemented, actually reduce revenue losses? Can deterred fraud be distinguished from unintentional errors, so that staff intervention can be adapted? The camera’s field of view should be restricted as much as possible, recording limited to the transaction itself and stopped the moment staff intervene. The customer must be informed that such surveillance is taking place and given some control over its activation, even though it is mandatory (so they never feel filmed without their knowledge), and the system must not stage an “immediate arrest” in case of fraud. The data must not be kept for evidentiary purposes or used to build a blacklist of unwelcome customers. No audio recording either. Ah, if only every camera that spies on us proceeded like this! It is sound data minimization.
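Those data-minimization constraints could translate, in code, into something like the toy retention policy below; the class and its behavior are illustrative assumptions, not the CNIL’s specification:

```python
class TransactionBuffer:
    """Holds video frames only for the duration of one transaction."""

    def __init__(self) -> None:
        self._frames: list[bytes] = []
        self.active = False

    def start(self) -> None:
        self.active = True

    def add_frame(self, frame: bytes) -> None:
        if self.active:              # never record outside a transaction
            self._frames.append(frame)

    def end(self) -> None:
        # No evidentiary archive, no blacklist: everything is dropped
        # as soon as the transaction closes.
        self._frames.clear()
        self.active = False

    # Staff intervention stops recording exactly like the end of a
    # transaction does.
    staff_intervention = end
```

The point of the sketch is that deletion is the default behavior, not an afterthought: no code path keeps frames beyond the transaction.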
For the same reason, data analysis must be done locally: there is no reason to ship the data to a cloud, where it will sit forgotten until the day it leaks.
The customer can object to this data collection and processing; in principle that is simple, just provide staffed checkouts. But there must be enough of them that the wait is not too long, otherwise the right to object becomes difficult to exercise in practice, which the GDPR does not like. Some will recognize the famous nudge of R. Thaler (Nobel Prize 2017): offering a choice, with cognitive incentives steering people toward one option over the other (except that here the incentive, the waiting time, is too penalizing).
Reuse of Data for Training
Another classic question whenever AI is discussed: can the data be reused to train the algorithm, which would help reduce the number of false positives? This is more delicate: even with faces blurred, the data will contain many physical characteristics of hands and gestures that allow people to be recognized. The products handled and purchased can also help identify individuals. It would be healthy, says the CNIL, to give people the possibility to object to this reuse and, in all cases, to keep the data only for the time needed to improve the algorithm.
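As a rough illustration of what that could mean in practice, here is a hypothetical filter that drops opted-out customers and expires samples after a fixed retention window; the 30-day figure and the field names are assumptions of mine, not figures from the CNIL note:

```python
from datetime import datetime, timedelta

# Assumed retention window "necessary to improve the algorithm".
RETENTION = timedelta(days=30)

def select_training_samples(samples, opted_out_ids, now=None):
    """Keep only samples from non-objecting customers, within retention.

    Each sample is a dict with (assumed) keys "customer_id" and
    "captured_at"; opted_out_ids is a set of customer IDs who objected.
    """
    now = now or datetime.utcnow()
    return [
        s for s in samples
        if s["customer_id"] not in opted_out_ids
        and now - s["captured_at"] <= RETENTION
    ]
```

Running such a filter at the start of every training job, rather than once at collection time, means an objection lodged later still takes effect.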
Self-checkouts, like subway ticket punchers and highway tolls before them, are technologies that free a category of workers from arduous, repetitive, thankless tasks. But the opportunities for cheating often grow in parallel, and must then be prevented technically (jumping the barrier, for example). AI at self-checkouts is nothing new in this respect.
Without our realizing it, all these automations also reduce opportunities for social contact. The CNIL does not mention the alternative of humanly augmented surveillance, on site, at the self-checkouts: imagine an attendant who, while monitoring the checkouts, chats, discusses, recognizes regulars. That kind of social control prevents many frauds.
Given the thin margins supermarkets operate on, isn’t AI in the service of customers’ honesty, with all these precautions, a good thing?
Enjoyed this post by Thibault Helle? Subscribe for more insights and updates straight from the source.