United States Customs and Border Protection plans to spend $225,000 for a year of access to Clearview AI, a face recognition tool that compares images against billions of photos scraped from the internet.
The deal extends access to Clearview tools to Border Patrol's headquarters intelligence division (INTEL) and the National Targeting Center, units that collect and analyze data as part of what CBP calls a coordinated effort to "disrupt, degrade, and dismantle" people and networks viewed as security threats.
The contract states that Clearview provides access to "over 60+ billion publicly available images" and will be used for "tactical targeting" and "strategic counter-network analysis," indicating the service is meant to be embedded in analysts' day-to-day intelligence work rather than reserved for isolated investigations. CBP says its intelligence units draw from a "variety of sources," including commercially available tools and publicly available information, to identify people and map their connections for national security and immigration operations.
The agreement anticipates analysts handling sensitive personal data, including biometric identifiers such as face images, and requires nondisclosure agreements for contractors who have access. It does not specify what kinds of images agents will upload, whether searches may include US citizens, or how long uploaded images or search results will be retained.
The Clearview contract lands as the Department of Homeland Security faces mounting scrutiny over how face recognition is used in federal enforcement operations far beyond the border, including large-scale actions in US cities that have swept up US citizens. Civil liberties groups and lawmakers have questioned whether face-search tools are being deployed as routine intelligence infrastructure rather than as limited investigative aids, and whether safeguards have kept pace with that expansion.
Last week, Senator Ed Markey introduced legislation that would bar ICE and CBP from using face recognition technology altogether, citing concerns that biometric surveillance is being embedded without clear limits, transparency, or public consent.
CBP did not immediately respond to questions about how Clearview would be integrated into its systems, what types of images agents are authorized to upload, and whether searches may include US citizens.
Clearview's business model has drawn scrutiny because it relies on scraping photos from public websites at scale. Those images are converted into biometric templates without the knowledge or consent of the people photographed.
Clearview also appears in DHS's recently released artificial intelligence inventory, linked to a CBP pilot initiated in October 2025. The inventory entry ties the pilot to CBP's Traveler Verification System, which conducts face comparisons at ports of entry and other border-related screenings.
CBP states in its public privacy documentation that the Traveler Verification System does not use information from "commercial sources or publicly available data." It is more likely, at launch, that Clearview access would instead be tied to CBP's Automated Targeting System, which links biometric galleries, watch lists, and enforcement data, including files tied to recent Immigration and Customs Enforcement operations in areas of the US far from any border.
Clearview AI did not immediately respond to a request for comment.
Recent testing by the National Institute of Standards and Technology, which evaluated Clearview AI among other vendors, found that face-search systems can perform well on "high-quality visa-like photos" but falter in less controlled settings. Images captured at border crossings that were "not originally intended for automated face recognition" produced error rates that were "much higher, often in excess of 20 percent, even with the more accurate algorithms," federal scientists say.
The testing underscores a central limitation of the technology: NIST found that face-search systems cannot reduce false matches without also increasing the risk that the systems fail to recognize the correct person.
As a result, NIST says agencies may operate the software in an "investigative" setting that returns a ranked list of candidates for human review rather than a single confirmed match. When systems are configured to always return candidates, however, searches for people who are not already in the database will still generate "matches" for review. In those cases, the results will always be 100 percent incorrect.
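A minimal sketch can make the tradeoff concrete. The code below is purely illustrative, with made-up embeddings and function names (it reflects neither Clearview's nor CBP's actual system): a face search scores a probe image against an enrolled gallery, then either applies a score threshold or, in an "investigative" configuration, always returns the top-ranked candidates. In the latter mode, a probe of someone who is not in the gallery still yields a full candidate list, every entry of which is wrong.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gallery of 1,000 enrolled face embeddings (unit vectors).
gallery = rng.normal(size=(1000, 128))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

def search(probe, threshold=None, top_k=5):
    """Return (gallery index, cosine similarity) candidates for a probe."""
    probe = probe / np.linalg.norm(probe)
    scores = gallery @ probe
    ranked = np.argsort(scores)[::-1][:top_k]
    if threshold is not None:
        # Thresholded mode: raising the threshold suppresses false matches
        # but also rejects more genuine ones -- the tradeoff NIST describes.
        ranked = [i for i in ranked if scores[i] >= threshold]
    return [(int(i), float(scores[i])) for i in ranked]

# "Investigative" mode: no threshold, candidates are always returned.
# This probe belongs to no one in the gallery, so all five candidates
# returned for human review are necessarily incorrect.
unenrolled_probe = rng.normal(size=128)
print(search(unenrolled_probe))       # five candidates, all wrong
print(search(unenrolled_probe, 0.9))  # thresholded: likely an empty list
```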
