Update on Invalid Traffic

I believe the last update had everyone shaking in their boots with the announcement of Re:Library’s end of service should everything go haywire. Believe me, I was too. It hit us so suddenly I thought it was the end of the world. Ever since that announcement, we’ve been looking into ways to keep the bot traffic from hitting our site, from simple measures like disabling right-click, gating content behind CAPTCHAs, and setting up Cloudflare, to the more expensive route of professional bot mitigation services.
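
As an aside, the simplest of those measures really is simple. Here’s a minimal sketch of a right-click disable, assuming a WordPress setup (a commenter below guesses we run WordPress; the hook usage here is illustrative, not our actual code):

    <?php
    // Minimal sketch: disable the right-click context menu site-wide.
    // Assumes WordPress; hooks a tiny inline script into the footer.
    // This only deters casual copy-paste; it does nothing against bots.
    add_action('wp_footer', function () {
        echo '<script>document.addEventListener("contextmenu", '
           . 'function (e) { e.preventDefault(); });</script>';
    });

Anyone who turns off JavaScript or reads the page source bypasses it, which is why it sits at the bottom of the list.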

What I can say is that the professional services are hella expensive, way beyond what any small translation site owner could afford; they can range anywhere from 600 to 1,200 bucks a month! It’s no wonder no translator has opted for professional services, leaving content thieves free to keep stealing their translations. If we had to pay such a huge amount per month, we might as well not have ads on our site at all; it wouldn’t make a difference. In fact, we might even end up in the red if we employed their services.

Back to the main point: we were hit by lots of non-human traffic, got removed from our advertisement platform, looked for ways to filter out that “harmful” traffic, and have now finally brought it below a safe threshold and been re-accepted by the platform. Although we are safe for now, there are still some “issues” with our traffic, so we’re not completely out of the danger zone for the foreseeable future.

I’m not going to mention what methods we’ve employed, so the bots can’t learn anything from this post, but if you’ve encountered any problems with the site or have been falsely identified as a bot, please send a report to [email protected]. It would be much appreciated if you could include your IP address, User-Agent, browser type/version, and a screenshot of the page.

Thank you for your understanding and cooperation.

6 thoughts on “Update on Invalid Traffic”

  1. You guys use WordPress, right? Try looking at this: (link removed)
    I made this some time ago; I just updated it and started using it again on my site.
    The basic idea is that scrapers don’t run JavaScript when scraping, so I encode the content into unreadable text and decode it with JavaScript (see the sketch after this comment).
    Readers should see readable content on your site, but bots/scrapers (and browsers that don’t support JS) won’t.
    There are a lot of things that could be improved, but it already works. If you have knowledge of PHP and JS, you could improve on it.
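
For anyone curious how that idea might look in practice, here is a minimal sketch, assuming a WordPress-style content filter; the function name, markup, and hook usage are illustrative, not the commenter’s actual code. The server base64-encodes the chapter HTML, and a small inline script decodes it in the browser, so anything that doesn’t execute JavaScript sees only gibberish:

    <?php
    // Sketch of the encode-server-side / decode-client-side idea above.
    // Hypothetical: real code would live in a plugin and also handle
    // caching, RSS feeds, excerpts, and repeated script output.
    function relib_obfuscate_content(string $html): string {
        $encoded = base64_encode($html);
        return '<div class="enc" data-c="' . $encoded . '">'
             . 'Please enable JavaScript to read this chapter.</div>'
             . '<script>'
             . 'document.querySelectorAll(".enc").forEach(function (el) {'
             // atob yields raw bytes; TextDecoder turns them back into UTF-8
             . 'var b = Uint8Array.from(atob(el.dataset.c), function (c) { return c.charCodeAt(0); });'
             . 'el.innerHTML = new TextDecoder().decode(b);'
             . '});'
             . '</script>';
    }

    // Hypothetical WordPress wiring:
    // add_filter('the_content', 'relib_obfuscate_content');

The trade-off, as the comment itself admits, is that there’s plenty left to improve: readers with JS disabled see only the fallback message, and any scraper running a headless browser defeats it entirely.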
