Wiederfrei crawler

Hi! I operate a small web crawler that may have crawled your website.

Here's the gist of it:

That's it, that's all. Thank you.

Abuse reports sent by complete morons who cannot write a robots.txt file will be dismissed as invalid (and their senders added to the list of idiots below). If you still have questions, you can send email to info@<this-domain>.

Hall of shame

Also known as the list of "ass-wipe web admins who don't know how the internet works". Seriously, this crawler has been operating for over 10 years now, and I have yet to receive a single valid abuse report.

brandzeichen.ch (2013-06-17)
Accused me of attacking the website after some Joomla security plugin reported accesses to 3 URLs that had a certain query parameter in them. I replied politely and asked why that would be an attack, but I never heard back. It's likely they had no idea what they were doing.
tcgrauholz.ch (2013-08-06)
Accused me of attacking the website, with no details whatsoever, so I asked them to explain. Apparently, they had received empty submissions on a form that was allegedly protected by a captcha. It turned out that the form could be submitted via GET requests even with all form values missing, and the captcha was 100% non-functional (it didn't matter what was entered into the field). I replied politely and explained all of this, but never heard back from them. Why are these people allowed to use the internet?
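For the curious, the broken pattern is easy to reconstruct. A crawler only ever issues GET requests for links it discovers; if a form handler accepts GET, requires no fields, and never verifies its captcha, then every fetch of the form's action URL registers as an "empty submission". Here is a minimal sketch of that pattern; all handler and function names are hypothetical, not the site's actual code:

    from urllib.parse import urlparse, parse_qs

    def save_submission(params: dict) -> None:
        # Stand-in for whatever the site did with form data.
        print("stored submission:", params)

    # Broken (what the site apparently did): no method check, no required
    # fields, no captcha verification. A bare GET stores an empty record.
    def handle_form_broken(method: str, url: str) -> str:
        params = parse_qs(urlparse(url).query)
        save_submission(params)  # stores {} when a crawler merely fetches the URL
        return "Thanks for your message!"

    # Fixed: require POST, a non-empty message, and a verified captcha.
    def handle_form_fixed(method: str, params: dict, captcha_ok: bool) -> str:
        if method != "POST" or not captcha_ok or not params.get("message"):
            return "Rejected."
        save_submission(params)
        return "Thanks for your message!"

    handle_form_broken("GET", "https://example.ch/contact?")  # an "empty submission"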
help.ch (2014-09-02)
The admin was upset because the "massive" crawling was against their terms and conditions and would cause outages to their services. I politely replied that my crawler had not agreed to any terms and conditions, so they didn't apply. I also explained that there was nothing "massive" about the crawling and that it would certainly not cause any outages. I never received a reply.
sanasis.ch (2015-12-01)
Complained about accesses. It turned out the robots.txt file was faulty: it forbade URLs beginning with /checkout/, but the actual URLs looked like /de/checkout/.... I replied politely, explaining the situation. I never heard from them again.
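For anyone wondering why the rule didn't apply: robots.txt Disallow rules match by path prefix, so /checkout/ only covers URLs whose path starts with exactly that string. A quick demonstration using Python's standard urllib.robotparser (the hostname and paths are made up for illustration):

    from urllib.robotparser import RobotFileParser

    # The faulty rule: it only matches paths that start with /checkout/,
    # but the shop's real URLs lived under /de/checkout/.
    rp = RobotFileParser()
    rp.parse("""
    User-agent: *
    Disallow: /checkout/
    """.splitlines())

    print(rp.can_fetch("*", "https://example.ch/checkout/cart"))     # False: blocked
    print(rp.can_fetch("*", "https://example.ch/de/checkout/cart"))  # True: not blocked

The fix would have been a one-line change, e.g. Disallow: /de/checkout/ (or one rule per language prefix).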
Unknown (2016-10-23)
This guy, Mr Simon Oberli, refused to tell me which domain he didn't want crawled and insisted that the crawler did not respect the robots.txt file. I had no way of verifying anything, since he completely refused to cooperate. He also misspelled "robots.txt" in two different ways in his emails, so I'm really not convinced he had a correct one in place.

This was the first time I exchanged more than just two emails with someone about the crawler, but unfortunately Mr Oberli kept being a complete douche bag: he made constant fun of my service and my websites' contents and refused any kind of cooperation to resolve, or even analyze, the situation. He also went on to explain how I should be operating my services, even though it was clear that he had not the slightest clue how they work. After seven emails, I had to conclude that this pathetic fool was a waste of my time and that we weren't going to resolve anything, so I stopped replying to his dumb rambling.
fotos.jacomet.ch and shop.bollwerkapotheke.ch (2017-12-09)
The "CEO of BitNinja Server Security" (an actual quote from the abuse email) complained about "malicious" requests. However, the sites in question did not have any robots.txt in place. I explained the situation and never heard back. After some research, I found out that BitNinja is well known for sending fraudulent abuse reports.
agrishop.ch (2021-11-23)
This person complained about the crawling of certain URLs, but did not have any robots.txt file in place. I explained the situation and never heard back.
vape.ch (2021-12-04)
This person complained about the crawling of certain URLs. There was a robots.txt file in place, but it did not forbid crawling the URLs in question. I explained the situation and never heard back.
cesarphoto.ch (2021-12-07)
This person complained about the crawling of certain URLs. There was a robots.txt file in place, but it did not forbid crawling the URLs in question. I explained the situation and never heard back.
www.chronowatch.ch (2021-12-15)
This person complained about the crawling of certain URLs. There was a robots.txt file in place, but it did not forbid crawling the URLs in question. I explained the situation and never heard back.
blog.nospy.ch (2021-12-28)
This person noticed the crawling of some URLs that they didn't wish to be crawled. They THEN made adjustments to their robots.txt file and THEN sent in an abuse report. It's unclear whether they were trying to be sneaky here. I explained the situation and never heard back.
uhren365.ch (2022-01-08)
This person complained about the crawling of certain URLs. There was a robots.txt file in place, but it did not forbid crawling the URLs in question. I explained the situation and never heard back.
play-zone.ch (2022-01-20)
This person complained about the crawling of certain URLs. There was a robots.txt file in place, but it did not forbid crawling the URLs in question. I explained the situation and never heard back.
suesse-ueberraschungen.ch (2022-01-25)
This person complained about the crawling of certain URLs. There was a robots.txt file in place, but it did not forbid crawling the URLs in question. I explained the situation and never heard back.
natura-punto.ch (2022-02-18)
This person complained about the crawling of certain URLs. There was a robots.txt file in place, but it did not forbid crawling the URLs in question. I explained the situation and never heard back.
helsinki-design.ch (2022-04-10)
This person complained about the crawling of certain URLs. There was a robots.txt file in place, which was entirely empty. I explained the situation and never heard back.
agmmobile.ch (2022-05-04)
This person complained about the crawling of certain URLs. There was a robots.txt file in place, and it was quite extensive: some rules forbade certain URLs for all user agents, others forbade all URLs for certain user agents. None of the rules applied to the URL in question and the user agent of the crawler. Dude, if you actually know how to write a robots.txt, then why not just extend it? I explained the situation and never heard back.
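To illustrate the mismatch: robots.txt rules are organized into groups, one per User-agent line, and a crawler only obeys the group that matches its own user agent, falling back to the * group otherwise. A sketch in the spirit of that file, using Python's urllib.robotparser (the file contents and the "WiederfreiBot" agent name are made up for illustration):

    from urllib.robotparser import RobotFileParser

    # In the spirit of the file described: some paths blocked for everyone,
    # everything blocked for one specific bot, and nothing covering the
    # reported URL for this crawler.
    rules = """
    User-agent: *
    Disallow: /admin/

    User-agent: SomeOtherBot
    Disallow: /
    """
    rp = RobotFileParser()
    rp.parse(rules.splitlines())

    print(rp.can_fetch("WiederfreiBot", "https://example.ch/products/x"))  # True
    print(rp.can_fetch("SomeOtherBot", "https://example.ch/products/x"))   # False

One added line in the * group (e.g. Disallow: /products/) would have kept every well-behaved crawler away from the URLs the admin cared about.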
erstehilfeshop.ch (2022-06-08)
This person complained about the crawling of certain URLs. There was no robots.txt file in place at all. I explained the situation.
hockeystore.ch (2022-06-10)
This person complained about the crawling of certain URLs. There was a robots.txt file in place which indeed disallowed crawling of the offending URL. However, the current version of the file had been put there within the 4 hours prior to the report, and the crawler caches the file for a while to avoid fetching it too frequently. This is an unfortunate race condition, as well as quite a douche bag move by the web admin. I explained the situation and never heard back.
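For context on that race: like most crawlers, this one fetches robots.txt once and then reuses the cached copy for a while instead of hammering the file before every single request. A minimal sketch of the pattern, assuming Python's urllib.robotparser and an illustrative four-hour TTL (this is not the crawler's actual code or cache lifetime):

    import time
    from urllib.robotparser import RobotFileParser

    CACHE_TTL = 4 * 3600  # illustrative: reuse a fetched robots.txt for four hours
    _cache: dict[str, tuple[float, RobotFileParser]] = {}

    def allowed(agent: str, url: str) -> bool:
        base = "/".join(url.split("/", 3)[:3])  # scheme://host
        cached = _cache.get(base)
        if cached is None or time.time() - cached[0] > CACHE_TTL:
            rp = RobotFileParser(base + "/robots.txt")
            rp.read()  # one network fetch, then reuse until the TTL expires
            cached = (time.time(), rp)
            _cache[base] = cached
        # A Disallow rule added after the cached copy was fetched stays
        # invisible until the TTL expires: exactly the race described above.
        return cached[1].can_fetch(agent, url)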
capacitasplus.ch (2022-11-06)
This person complained about the crawling of certain URLs. There was a robots.txt file in place which disallowed some URLs, but not the URL which was crawled. I explained the situation and never heard back.
epmorges-est.ch (2022-11-26)
This person complained about the crawling of certain URLs. There was a robots.txt file in place, which did not disallow crawling of the URL in question. I explained the situation and never heard back.
wunschmasse.ch (2023-03-14)
This person complained about the crawling of a certain URL. There was a robots.txt file in place which, at first glance, seemed to forbid crawling of this URL. However, the rule used a non-standard wildcard syntax, which is not part of the robots.txt standard. This usage is even explicitly documented as not supported. I explained the situation and never heard back.
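For reference: the original robots.txt convention matches rules as plain path prefixes; * and $ wildcards come from a later, Google-style extension. A strictly prefix-matching parser takes no special meaning from the asterisk, which is easy to demonstrate with Python's urllib.robotparser, itself a prefix matcher (the rule and URL below are made up for illustration):

    from urllib.robotparser import RobotFileParser

    # Wildcard-style rule: only meaningful under the Google-style extension.
    rp = RobotFileParser()
    rp.parse("""
    User-agent: *
    Disallow: /*.php
    """.splitlines())

    # A prefix matcher does not expand "*", so the rule blocks nothing useful:
    print(rp.can_fetch("*", "https://example.ch/index.php"))  # True: not blocked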

Obviously, I will consider supporting non-standard syntaxes if they prove to be common.