U.S. Immigration and Customs Enforcement (ICE) isn’t just kicking in doors anymore.
It’s quietly wiring up an AI-powered dragnet that watches what you post, where you stand, and who you stand with.

New documents show ICE has bought access to Zignal Labs, an AI social media monitoring platform, in a $5.7 million, five-year contract brokered through the government reseller Carahsoft. (The Lever) The tool can ingest over 8 billion public posts per day in more than 100 languages, using machine learning, computer vision, and OCR to turn social media into a live intelligence feed. (The Verge)

Civil liberties groups are calling the deal what it is:

an “assault on democracy and free speech.” (The Verge)


From Monitoring to Panopticon

Zignal Labs markets itself as “real-time intelligence.” In practice, that means:

  • Scraping public posts across major platforms
  • Geolocating images and videos attached to those posts
  • Using computer vision to identify emblems, uniforms, or patches in footage
  • Bundling it all into “curated detection feeds” that can be handed to ICE operators in the field (The Verge)

In one highlighted case, Zignal brags about analyzing a Telegram video from Gaza, identifying insignia on uniforms, and pushing tactical alerts to “operators on the ground.” (Wikipedia)

Translate that into the U.S. domestic context and you get the real picture:

  • A TikTok with geotagged street vendors
  • A Facebook photo from a protest
  • An Instagram reel from a community event

All of those become potential location beacons for ICE, mapped, filtered, and prioritized by AI.

This isn’t ICE’s first flirtation with social media surveillance. Years ago, police departments used tools like Geofeedia to track Black Lives Matter protests across Facebook, Twitter, and Instagram until the ACLU exposed the relationship and platforms shut down the data firehose. (aclu.org)

The difference now is scale.
Zignal and its ilk aren’t just dashboards for human analysts — they’re machine-driven pattern engines, designed to find targets in oceans of data that no human team could ever read.


A 24/7 AI Enforcement Machine

ICE isn’t stopping at software. Internal planning documents show the agency wants a round-the-clock social media surveillance team, staffed by nearly 30 private contractors split between targeting centers in Vermont and California. (WIRED)

Their stated mission:

sift Facebook, Instagram, TikTok, X, YouTube, and more for people who “pose a danger to national security, public safety, and/or otherwise meet ICE’s law enforcement mission.” (WIRED)

Those analysts won’t just look at a single person’s posts. Planning documents describe pulling in family members, friends, and coworkers to triangulate where a target lives, works, or sleeps. (WIRED)
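That kind of network expansion is, at its core, a graph traversal. A toy sketch, with an invented adjacency list standing in for the follower/tag relationships real "triangulation" would draw from many data sources:

```python
# Hypothetical sketch of expanding from one target to their network.
# The CONTACTS graph and names are invented for illustration.

CONTACTS = {  # assumed follower/tag relationships
    "target": ["sibling", "coworker"],
    "sibling": ["cousin"],
    "coworker": [],
    "cousin": [],
}

def expand(seed: str, hops: int) -> set[str]:
    """Collect every account within `hops` links of the seed account."""
    seen, frontier = {seed}, [seed]
    for _ in range(hops):
        frontier = [c for person in frontier
                    for c in CONTACTS.get(person, [])
                    if c not in seen]
        seen.update(frontier)
    return seen

network = expand("target", hops=2)
```

Two hops from one account already sweeps in a cousin who never interacted with the target directly — which is exactly why "pulling in family members, friends, and coworkers" scales so quickly.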

Layer that on top of ICE’s other tools and the picture gets darker:

  • Access to AI-enabled license plate readers (Flock Safety) via local police “favors,” even without a direct contract (404 Media)
  • A planned purchase of a system that tracks the locations of hundreds of millions of phones every day (404 Media)
  • A mobile app, Mobile Fortify, that lets ICE and CBP agents point a phone at someone’s face and match them to government databases containing more than 200 million images, including biometrics, with no meaningful way to refuse (aclu.org)

Social media becomes just one sensor in a broader integrated surveillance stack:
posts, faces, license plates, and phone locations all stitched together.
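Mechanically, "stitching together" is just a join across sensor streams on a shared identifier. A minimal illustration; the record shapes and the `subject_id` key are assumptions for the sketch, not any agency's actual schema:

```python
from collections import defaultdict

# Hypothetical sketch of fusing sensor streams into per-subject profiles.
# Each record is {"subject_id": ..., "sensor": ..., "value": ...}.

def fuse(*sensor_streams):
    """Merge per-sensor records into one profile per subject."""
    profiles = defaultdict(lambda: defaultdict(list))
    for stream in sensor_streams:
        for rec in stream:
            profiles[rec["subject_id"]][rec["sensor"]].append(rec["value"])
    return {sid: dict(sensors) for sid, sensors in profiles.items()}

posts  = [{"subject_id": "A", "sensor": "social", "value": "protest photo"}]
faces  = [{"subject_id": "A", "sensor": "face",   "value": "match 0.97"}]
plates = [{"subject_id": "A", "sensor": "plate",  "value": "XYZ-123 @ Main St"}]
phones = [{"subject_id": "A", "sensor": "phone",  "value": (40.71, -74.00)}]

profile = fuse(posts, faces, plates, phones)["A"]
```

Each stream is weak on its own; the join is what turns four partial views into one actionable file.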


Speech as Evidence, Dissent as Risk Score

The danger isn’t theoretical. The federal government is already using AI to police online speech.

In 2025, the State Department launched the “Catch and Revoke” program: AI tools comb through the social media of foreign students and visa holders to identify those who “appear to support Hamas or other designated terror groups” and revoke their visas. (Axios) Hundreds of visas have already been pulled under this initiative. (New York Post)

Civil liberties groups warn that this system:

  • conflates political protest with terrorism,
  • outsources nuanced judgment to pattern-matching algorithms, and
  • weaponizes immigration status against people who exercise their First Amendment rights. (Brennan Center for Justice)

ICE’s new panopticon sits directly on top of this political climate. When the same government that runs Catch and Revoke now acquires a tool that can scan 8 billion posts per day and prioritize “threats” for enforcement, the line between public expression and actionable intelligence starts to dissolve.

As David Greene of the Electronic Frontier Foundation notes, automated monitoring gives the government the ability to “monitor social media for viewpoints it doesn’t like on a scale that was never possible with human review alone.” (The Verge)

The predictable result:
People start self-censoring. They don’t post, don’t share, don’t attend, don’t speak.

That’s not “collateral damage.” That is the effect.


Why This Is Worse Than NSA-Style Bulk Surveillance

After the Snowden leaks, the public debate centered on mass metadata collection: who you called, when, from where. That was bad enough.

ICE’s AI stack is different in two crucial ways:

  1. Targeted Enforcement, Not Just Intelligence
    This isn’t about abstract national security analysis. It’s about finding bodies to detain and deport. When a conservative influencer tags ICE on a video of street vendors and agents later show up on that exact block, the connection is hard to miss. (The Verge)
  2. Privatized Infrastructure, Weak Oversight
    Tools like Zignal, Flock, Mobile Fortify, and phone-location aggregators are built and run by private contractors, not transparent government agencies. That creates:
    • murky legal boundaries,
    • trade-secret shields against disclosure, and
    • fragmented responsibility when abuses happen. (Wikipedia)

This “public–private surveillance partnership” lets the government claim it’s only using “public data” or “commercially available datasets,” sidestepping constitutional questions that would arise if agencies built the same systems in-house.


Who Should Be Worried?

Short answer: anyone who posts publicly.
But some groups are at special risk:

  • Immigrants and mixed-status families
    Every post, tag, or geotagged selfie can become a lead in an enforcement file.
  • Activists, organizers, and journalists
    Past systems like Geofeedia were explicitly pitched as tools to monitor protests and movements like Black Lives Matter. (The Guardian) Zignal’s marketing shows similar “real-time situational awareness” use cases.
  • Students and foreign nationals
    Catch and Revoke has already turned campus speech into a potential immigration risk. (Brennan Center for Justice)
  • Anyone near a target
    Friends, coworkers, and even bystanders caught in the background of a video may end up inside AI-built social and location graphs.

The message from this architecture is simple:
your online life is now part of the targeting system.


Where the Fight Goes From Here

None of this is inevitable. There are concrete pressure points:

  • Congress and state legislatures can restrict the use of AI-powered surveillance for immigration enforcement, mandate warrants for bulk location and biometric data, and ban the use of social media monitoring tools for First-Amendment-protected activity.
  • Courts can treat AI-enhanced collection as a meaningful intrusion, not a trivial extension of “public observation,” especially when it aggregates data at national scale.
  • Platforms can again cut off API access and data partnerships for tools marketed to law enforcement — as they did with Geofeedia in 2016 — and enforce bans on surveillance use cases. (aclu.org)
  • Users and communities can limit public metadata (geotags, real-time location), move sensitive organizing off mainstream platforms, and support digital rights groups challenging these contracts.

What ICE is building with Zignal Labs isn’t just another database.
It’s the blueprint for a normalized AI panopticon, where speaking in public — online or off — means being permanently scannable, sortable, and actionable.

And once that infrastructure exists, it won’t stay confined to immigration.




By Pressi Editor

If you quote this text, please include a link back to the original!