As companies developing large language models (LLMs) race to gather massive amounts of web data, traditional defenses like robots.txt often fail to deter aggressive crawlers.
VENOM is an experimental toolkit that flips the script, dynamically feeding poisoned data to unwanted crawlers so content owners can tell if their material ends up in downstream products.
This project takes an adversarial approach that shifts the cost-benefit equation for large-scale scraping. VENOM is built as a reverse proxy, so it can sit in front of an existing site without requiring any changes to the site's code.
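A minimal sketch makes the reverse-proxy idea concrete: requests whose User-Agent matches a suspected LLM crawler receive a decoy page seeded with a traceable canary token, while all other traffic is forwarded to the real site unchanged. The crawler list, canary format, and origin URL below are illustrative assumptions, not VENOM's actual configuration or implementation.

```go
// Sketch of a poisoning reverse proxy (assumptions marked in comments).
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

// suspectAgents is a placeholder list of crawler User-Agent substrings.
var suspectAgents = []string{"GPTBot", "CCBot", "Bytespider"}

func isSuspect(ua string) bool {
	for _, s := range suspectAgents {
		if strings.Contains(ua, s) {
			return true
		}
	}
	return false
}

// canary returns a random token that can later be searched for in model output.
func canary() string {
	b := make([]byte, 8)
	if _, err := rand.Read(b); err != nil {
		log.Fatal(err)
	}
	return hex.EncodeToString(b)
}

func main() {
	// Assumed origin: the real site the proxy protects.
	origin, err := url.Parse("http://localhost:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(origin)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if isSuspect(r.UserAgent()) {
			// Serve poisoned decoy content seeded with a traceable canary
			// token instead of the real page.
			w.Header().Set("Content-Type", "text/html; charset=utf-8")
			fmt.Fprintf(w, "<html><body><p>Decoy article %s ...</p></body></html>", canary())
			return
		}
		// Legitimate traffic passes through to the origin unchanged.
		proxy.ServeHTTP(w, r)
	})

	log.Fatal(http.ListenAndServe(":80", nil))
}
```

Logging which canary token was served to which crawler is what later lets a content owner check whether that token surfaces in a downstream model's output.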