Getting traffic without having shared my link

Hola,

I added Google Analytics to my site. I was planning to start sharing my links around this evening, but I already have 11 sessions even though I haven't shared my site anywhere. Has Google crawled my site because I added it to Analytics?

Any explanation for that?
 
It could be people checking out newly registered domains; there are many possible reasons.

Either way, I'd start by filtering out your own IP.
 
Another option is to install a browser extension that blocks Google Analytics tracking on the domains you specify. It's especially recommended if you have a dynamic IP address.
Just search for "analytics blocker" and pick the one you like.
 
There are a ton of spiders crawling the web, and they'll register as sessions. They'll look like a large share of your traffic at first, until you start getting real traffic in volume. You could block most of the spiders with your robots.txt, but they don't have to honor your request either.
 
Do you have an example of a robots.txt that does this?
 
As mentioned by an earlier poster, you'll want to filter out your own IP in Google Analytics. There's also a Chrome extension that stops your own visits from being registered, so even if you're always traveling it won't count you, as long as you're using Chrome. Here it is: https://tools.google.com/dlpage/gaoptout
 
If you want to see what's going on, check your web server logs; they'll tell you the whole story. On Ubuntu, the default Apache access log is /var/log/apache2/access.log. You can watch it close to real time by running this at the console:

sudo tail -f /var/log/apache2/access.log

To get back to the command prompt, just hit CTRL+C.

The log shows the time, IP, user agent, and so on, depending on how logging is configured. I check my server logs frequently because new bots crop up pretty much daily, and the user agents are handy to have for .htaccess rules and the like. You can also spot scraper-like activity (many requests in a short time, too fast for a human) and other things that contribute to slow response times.
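
If you just want a quick summary instead of watching the log scroll by, a couple of one-liners like these will do it (assuming the default Apache "combined" log format; adjust the path if your log lives elsewhere):

# Top 10 IPs by request count (a burst of hits from one IP is usually a bot or scraper)
awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head

# Most common user agents, handy when building .htaccess block rules
awk -F'"' '{print $6}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head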

Also, running something like piwik (http://piwik.org/) is VERY helpful for site analytics, but since you're already using Google Analytics, that might be a bit too much, as any extra processing overhead slows overall response.
 
Do you have an example of a robots.txt that does this?

Here's a real world example:

http://www.seobook.com/robots.txt

You can learn a lot from this one example, such as limiting how often bots can hit certain directories (via Crawl-delay), disallowing the crawling of specific files and even file extensions, blocking off entire directories and dynamic URLs, and shutting out a ton of bots, including stats tools, image scrapers, backlink checkers, email harvesters, and more.
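
If you just want the flavor of it, here's a minimal sketch of the kinds of directives that file uses (the bot name and paths below are made up for illustration, not copied from it):

User-agent: *
Crawl-delay: 10       # ask bots to wait 10 seconds between requests (not all crawlers honor this)
Disallow: /cgi-bin/   # block an entire directory
Disallow: /*.pdf$     # block a file extension (wildcard support varies by crawler)
Disallow: /*?         # block dynamic URLs with query strings

User-agent: BadScraperBot
Disallow: /           # block this (made-up) bot from the whole site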
 