Mirror of https://github.com/superseriousbusiness/gotosocial.git (synced 2025-10-29 01:42:24 -05:00)
[chore/docs] fix relative link to scraper deterrence (#4111)
# Description

While working on the doc translation update, I found a broken link, so I'm opening this separate PR to keep it clean from the translation stuff. Marked as draft currently while checking for any other typos :)

Reviewed-on: https://codeberg.org/superseriousbusiness/gotosocial/pulls/4111
Co-authored-by: cdn0x12 <git@cdn0x12.dev>
Co-committed-by: cdn0x12 <git@cdn0x12.dev>
This commit is contained in:
parent 4d6408015b
commit bad427e7f0

2 changed files with 20 additions and 1 deletion
@@ -10,6 +10,6 @@ You can allow or disallow crawlers from collecting stats about your instance fro
 
 The AI scrapers come from a [community maintained repository][airobots]. It's manually kept in sync for the time being. If you know of any missing robots, please send them a PR!
 
-A number of AI scrapers are known to ignore entries in `robots.txt` even if it explicitly matches their User-Agent. This means the `robots.txt` file is not a foolproof way of ensuring AI scrapers don't grab your content. In addition to this you might want to look into blocking User-Agents via [requester header filtering](request_filtering_modes.md), and enabling a proof-of-work [scraper deterrence](scraper_deterrence.md).
+A number of AI scrapers are known to ignore entries in `robots.txt` even if it explicitly matches their User-Agent. This means the `robots.txt` file is not a foolproof way of ensuring AI scrapers don't grab your content. In addition to this you might want to look into blocking User-Agents via [requester header filtering](request_filtering_modes.md), and enabling a proof-of-work [scraper deterrence](../advanced/scraper_deterrence.md).
 
 [airobots]: https://github.com/ai-robots-txt/ai.robots.txt/
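For context, the paragraph changed above concerns how `robots.txt` matches crawlers: rules are grouped into per-User-Agent blocks that compliant crawlers are expected to honor. A minimal illustrative sketch (not part of this diff; `GPTBot` is just one well-known AI-scraper User-Agent, and the full list lives in the community-maintained ai.robots.txt repository linked in the diff):

```
# Illustrative excerpt: disallow everything for one AI scraper.
# GPTBot stands in for any entry from the ai.robots.txt list.
User-agent: GPTBot
Disallow: /
```

As the paragraph notes, compliance with such entries is voluntary, which is why the docs additionally point to requester header filtering and proof-of-work scraper deterrence.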