This post was submitted on 12 Feb 2025
12 points (83.3% liked)


After dabbling in the world of LLM poisoning, I realised that I simply do not have the skill set (or brain power) to effectively poison LLM web scrapers.

I am trying to work with what I know and understand. I have fail2ban installed on my static web server. Is it possible, then, to get a massive list of IP addresses known to scrape websites and add them to the ban list?
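
Something like this rough sketch is what I have in mind (the blocklist URL is a placeholder, and "scrapers" would be whatever jail I set up):

    # rough sketch: pre-seed a fail2ban jail from a downloaded blocklist
    # the URL and the "scrapers" jail name are placeholders
    curl -s https://example.com/scraper-ips.txt | while read -r ip; do
        fail2ban-client set scrapers banip "$ip"
    done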

top 3 comments
[–] lungdart@lemmy.ca 8 points 1 week ago

Fail2ban is not a static security policy.

It's a dynamic firewall: it ties log patterns to time-boxed firewall rules.

You could auto-ban any source that hits robots.txt on a web server for an hour, for instance; there's a rough sketch below. I've heard AI data scrapers actually use robots.txt to find content to target rather than respecting it.
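
Roughly like this (untested; the "robots-trap" filter name is made up, and the log path assumes nginx's default combined access log):

    # /etc/fail2ban/filter.d/robots-trap.conf (hypothetical filter name)
    [Definition]
    failregex = ^<HOST> .*"GET /robots\.txt

    # /etc/fail2ban/jail.local (excerpt)
    [robots-trap]
    enabled  = true
    filter   = robots-trap
    logpath  = /var/log/nginx/access.log
    bantime  = 1h
    maxretry = 1

Bear in mind this also bans well-behaved crawlers, since the polite ones are exactly the ones that fetch robots.txt.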

[–] kn33@lemmy.world 3 points 1 week ago* (last edited 1 week ago)

You could try putting a Cloudflare proxy in front of it and using Turnstile (their CAPTCHA product) to help with it.

The truth is, though, if it's static content, then you have to stop them every time; once they get it once, they've got it. With how frequently they can try, that's going to be difficult.
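
The client-side part of Turnstile is just a script tag and a div, sketched below (YOUR_SITE_KEY is a placeholder). The catch is that the token has to be verified server-side against Cloudflare's siteverify endpoint, which is awkward on a purely static site.

    <!-- sketch: Turnstile widget markup; YOUR_SITE_KEY is a placeholder -->
    <script src="https://challenges.cloudflare.com/turnstile/v0/api.js" async defer></script>
    <div class="cf-turnstile" data-sitekey="YOUR_SITE_KEY"></div>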

[–] catloaf@lemm.ee 2 points 1 week ago

No. You'd have to ban all the cloud providers. Good luck enumerating them all.
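
To get a feel for why, here's a sketch for just one provider (AWS publishes its ranges as JSON; other providers use their own formats, and plenty don't publish at all):

    # sketch: drop traffic from AWS's published ranges with ipset
    ipset create cloudblock hash:net
    curl -s https://ip-ranges.amazonaws.com/ip-ranges.json \
      | jq -r '.prefixes[].ip_prefix' \
      | while read -r net; do ipset add cloudblock "$net"; done
    iptables -I INPUT -m set --match-set cloudblock src -j DROP

Repeat that, kept up to date, for every provider, and you've also banned every legitimate user of a VPN or self-hosted service running there.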