4th December 2008

Webmaster Help Forum Googlers

With the retirement of the original Google Webmaster Help Group, my list of Helping Googlers now only points to archives of what Googlers have said and will not get any new information. With the introduction of the new Google Webmaster Help Forum, this list will include the active Googlers in the new forum.

These names were harvested from "Reintroducing your English Webmaster Help Google Guides," suggested by an astute webmaster. If you are interested in what they haven't found yet in Google, follow that link. The links I provide go to their Webmaster Help Forum profiles.

This list will be updated as new Googlers migrate to the new forum.

Note: With the new system, the profiles only show "questions" asked by the individual and not any answers (which, for Googlers, are what matter most to us), but I am told that feature has been requested.

posted in GWHF, GWHG | 1 Comment

23rd September 2008

Dynamic vs. Static URLs confusion

Nice URL.

What they said:

Google’s help document, “Creating a Google-friendly URL structure” currently says:

Consider organizing your content so that URLs are constructed logically and in a manner that is most intelligible to humans (when possible, readable words rather than long ID numbers). For example, if you’re searching for information about aviation, a URL like http://en.wikipedia.org/wiki/Aviation will help you decide whether to click that link. A URL like http://www.example.com/index.php?id_sezione=360&sid=3a5ebc944f41daa6f849f730f1, is much less appealing to users.

Overly complex URLs, especially those containing multiple parameters, can cause problems for crawlers by creating unnecessarily high numbers of URLs that point to identical or similar content on your site. As a result, Googlebot may consume much more bandwidth than necessary, or may be unable to completely index all the content on your site.

They also say in their "Dynamic Pages" help article:

If you're concerned that your dynamically generated pages are being ignored, you may want to consider creating static copies of these pages for our crawler.

What they do:

The articles above are found at the URLs:
http://www.google.com/support/webmasters/bin/answer.py?answer=76329&t [screenshot]
http://www.google.com/support/webmasters/bin/answer.py?answer=34431&ctx=sibling [screenshot]

I don't know about you, as you're probably smarter than me, but intuitively "76329" does not mean "Google-friendly URLs," and "34431" doesn't scream "click me for information on dynamic URLs."

What they say now:

In their latest blog post “Dynamic URLs vs. static URLs” they have taken a different position.

Providing search engines with dynamic URLs should be favored over hiding parameters to make them look static.

One recommendation is to avoid reformatting a dynamic URL to make it look static.

I don't know what to think now. I don't want to rip an author, as my own blog's tagline is "Terrible writing and mere conjecture," but this blog post looks like both. It appears they are trying to reassure people who cannot figure out URL rewriting that they shouldn't worry about it, but it is written so obtusely that anyone who cannot rewrite URLs surely isn't going to understand the article either. The fact that it contradicts all their previous documentation only confuses me further.
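
For the record, the URL rewriting at issue is not black magic. Here is a minimal sketch of how a readable URL can resolve to the same dynamic content; the slugs, IDs, and function names are all my own invention, not anyone's production code:

    # Hypothetical slug-to-ID table; a real site would keep this in its database.
    SLUGS = {
        "aviation": 360,
        "dynamic-vs-static-urls": 361,
    }

    def resolve(path):
        """Map a readable URL like /articles/aviation to the internal ID
        that index.php?id_sezione=360 would otherwise expose."""
        slug = path.rstrip("/").rsplit("/", 1)[-1].lower()
        return SLUGS.get(slug)  # None means hand off to the 404 page

    print(resolve("/articles/aviation"))  # -> 360

The reader sees words, the application still gets its number, and nobody has to look at a session ID.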

I think I'll wait for this shit storm to settle out, but for now I am going to abide by the old axiom of designing your site for users and not search engines. As a user, I am much more likely to understand what http://en.wikipedia.org/wiki/Aviation is about than http://www.example.com/index.php?id_sezione=360&sid=3a5ebc944f41daa6f849f730f1.

Since Google cannot figure out that a page which lists every article on the site is indeed a sitemap, I cannot believe that they can figure out how to handle session IDs and numeric references to pages either.

posted in Google | 2 Comments

3rd September 2008

Twitter Reciprocity

I'm sure the millions of readers heeded my warning about Twitter caving to mattcuttsean-like pressures and nofollowing everything on your profile back on 7/22/08. So this is no surprise to you, but Twitter has finally pulled the plug on that loophole.

The web-educated among you will add twitter.com to the well-maintained nofollow reciprocity list in your plug-ins, I'm sure.

As pointed out in my original post, I still find it stunning that @mattcutts offers @ev advice on furthering the nofollow carnage while ignoring the actually helpful advice that would #1) decrease their server load, and #2) decrease Google's own crawler load.

I guess we'll see whom Twitter is more interested in pleasing, its users (by reducing server load with a simple URL canonicalization fix) or Google (with their cure-all rel="nofollow"), by which gets fixed first.

I'm not sure if it's because lcase() is so hard for them to implement or because bowing to Google's pressure is more important for the eventual buyout price, but their problems persist, now with the added benefit of HTTPS versions! Nice.
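
For the curious, the fix really is about as simple as it sounds. A sketch, assuming a WSGI-style stack (my assumption; I have no knowledge of Twitter's actual setup):

    def canonicalize(app):
        """WSGI middleware (my sketch, not Twitter's code): 301 any
        mixed-case profile path to its lowercase canonical version."""
        def wrapper(environ, start_response):
            path = environ.get("PATH_INFO", "/")
            if path != path.lower():
                start_response("301 Moved Permanently",
                               [("Location", "https://twitter.com" + path.lower())])
                return [b""]
            return app(environ, start_response)  # already canonical
        return wrapper

One page, one URL: Googlebot stops crawling /JohnWeb and /johnweb as two separate documents, and the HTTPS duplicates fold in too, since the redirect target pins the scheme.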

As always, and of course, follow me on Twitter; I'll follow you back if you #1) update regularly and #2) don't use it primarily as a bastardized IM service with too many '@' twits.

posted in Google, Matt Cutts, Webmastering | 1 Comment

2nd September 2008

Google Chrome

I'd lose my Google fanboy status if I didn't mention it. It's a web browser. Made by Google. But apparently with less flexibility than the current 900 other browsers available, just more Googlier and faster.

This announcement, coupled with the recent Google Knol project (their Wikipedia), makes me giddy with excitement for the next Google innovation: Google Wheel Beta. This will be a much more Googlier wheel, available in only red or yellow, and of course roundier.

Which will be followed up with the much-rumored Google Ten Piece Hammer (pictures not available at time of publication).

posted in Google | 2 Comments

21st August 2008


This post is for Google. It's not really meant to be read by humans, but you can read it if you'd like. With Google's new more useful 404s, they promise such things as:

In addition to attempting to correct the URL, the 404 widget also suggests the following, if available:

  • a link to the parent subdirectory
  • a sitemap webpage
  • site search query suggestions and search box

I haven't seen them offer the sitemap page yet. Perhaps it's because I was clever and named it "articles" and not "sitemap," or because they hate it, since it's graybarred even though it's linked to on every page of the site. Either way, Googlebot, the sitemap is located at http://www.jlh-design.com/articles/. Seeing that it's just a giant list of all the posts and pages on the site, I would think it would have been quite transparent that, though the page is named "articles," it's actually a SITEMAP.
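
Since I'm talking to a robot anyway, here is roughly what I understand the widget to be doing, expressed as a sketch; this is my approximation of the behavior described above, not Google's actual code:

    from urllib.parse import urlsplit

    SITEMAP = "http://www.jlh-design.com/articles/"  # named "articles", but it IS a sitemap

    def suggestions(bad_url):
        """Approximate the 404 widget: offer the parent subdirectory,
        the sitemap page, and a site-search query for a missing URL."""
        parts = urlsplit(bad_url)
        path = parts.path.rstrip("/")
        parent = "%s://%s%s/" % (parts.scheme, parts.netloc, path.rsplit("/", 1)[0])
        query = path.rsplit("/", 1)[-1].replace("-", " ")
        return {"parent": parent, "sitemap": SITEMAP, "search": query}

    print(suggestions("http://www.jlh-design.com/2008/some-missing-post/"))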

posted in Google | 0 Comments

22nd July 2008

Get your twitter links while you can

Earlier today Dave Naylor outed the little-known Twitter fact that you can get a non-nofollowed link by adding a web address in the "One Line Bio" of your profile.


For a resulting profile page like this: [screenshot]

It's not going to last long, as internet officer-on-the-spot Matt Cutts has spotted it and taken action to stop the flow of link juice to people: [screenshot]

Since Matt is so interested in helping out Twitter, he may want to mention that some of their "capacity" issues may be due to Google crawling the non-canonical versions of URLs that exist throughout the site.

Notice the same page is indexed twice in Google, once as twitter/johnweb and once as twitter/JohnWeb: same content, same spelling, just different case.
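
If you want to check whether a given link is still followed, rather than squinting at view-source, here's a quick-and-dirty audit script; it's mine, it's naive about markup, and the profile URL is only an example:

    from html.parser import HTMLParser
    from urllib.request import urlopen

    class LinkAudit(HTMLParser):
        """Print every link on a page and whether it carries rel="nofollow"."""
        def handle_starttag(self, tag, attrs):
            if tag != "a":
                return
            a = dict(attrs)
            if "href" in a:
                rel = a.get("rel") or ""
                print("nofollow" if "nofollow" in rel else "FOLLOWED", a["href"])

    html = urlopen("http://twitter.com/johnweb").read().decode("utf-8", "replace")
    LinkAudit().feed(html)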

I guess we'll see whom Twitter is more interested in pleasing, its users (by reducing server load with a simple URL canonicalization fix) or Google (with their cure-all rel="nofollow"), by which gets fixed first.

posted in Google, Matt Cutts | 0 Comments

9th July 2008

Googlebot using Yahoo IP range for crawling?

Okay, the title may be jumping to conclusions but please help me understand this.

I noticed an odd referral today in my stats for this blog. It was for the search term [ip address]. It seemed a bit strange, so I checked it out.

The IP address belongs to Inktomi Corporation: [screenshot]

Every one of my single-post pages contains a little plug-in that shows the user's IP address, like: [screenshot]

So it would make sense that the IP address of the crawler would be added into the text of the page and returned for search results.
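
The plug-in itself is nothing fancy. In spirit it does something like this; a sketch of the idea only, not the actual WordPress plug-in:

    def visitor_ip(environ):
        """Return the requester's IP address, preferring X-Forwarded-For
        if a proxy set it. This string ends up printed in the page text,
        so whatever fetches the page bakes its own address into any cached copy."""
        forwarded = environ.get("HTTP_X_FORWARDED_FOR", "")
        if forwarded:
            return forwarded.split(",")[0].strip()
        return environ.get("REMOTE_ADDR", "unknown")

    # The template then renders: "Your IP address is " + visitor_ip(environ)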

The ODD thing, however, was that this search referral was from Google, with the Yahoo! IP address.

The Google search for [ip address] returns one of my pages in the 10th spot [screenshot], and clicking on the cache of that page shows the Yahoo! address stored in the cache: [screenshot]

To be sure this isn't normal behavior, the following thumbnail shows a cache of another page with the Google IP address: [screenshot]

So the question I have is: how does a Google cache get taken showing a Yahoo! IP address? I'm sure there is a logical explanation that I'm just missing, but I am hoping somebody out there can explain it to me.

Added After Initial Posting

After I initially posted this, I thought it would be a good idea to see whether this one page was an anomaly or whether other indexed pages showed the Yahoo! IP address; apparently the one shown above is the only one. Note that the other two URLs shown are this post and the home page, which had already been updated in the index by the time I went back and checked.

posted in Google, search | 1 Comment

19th June 2008

Google, please let us report paid links

In their ever-vigilant zeal to be perplexing and clear as mud on the issue, Google has many stances on the paid links situation.

Some official:

Buying or selling links that pass PageRank is in violation of Google’s webmaster guidelines and can negatively impact a site’s ranking in search results.

Some not so official:

We’ll be concentrating primarily on the sellers, but if you send us a site that appears to be buying links that pass PageRank it’s trivial for us to look up all the backlinks for that site to find potential sellers and work from there.

Whether or not they are "concentrating" on link buyers, it appears through many threads on the Google Webmaster Help Group that people are actually being penalized for buying links. The ones I've seen have been pretty obvious, either through sponsored themes, automated link networks, or the most obvious of all, sitewide footer links.

They do offer methods for buying links without getting into Google hot water: the much-maligned and oft-misapplied rel="nofollow" attribute, or a robots.txt block:

Links purchased for advertising should be designated as such. This can be done in several ways, such as:

  • Adding a rel=”nofollow” attribute to the <a> tag
  • Redirecting the links to an intermediate page that is blocked from search engines with a robots.txt file

Which is all well and good if you are running the site and have control over the links. But what if you are buying them? What Google is failing to recognize is that sometimes people may actually buy links because they want the traffic. Gasp. It is possible that a permanent link purchased for a set price will, in the long run, cost less per click than… let's say… an AdWords ad.
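
The first method is just a rel="nofollow" on the anchor tag. The second deserves a sketch; the /out/ directory and sponsor list here are my own invention, with robots.txt carrying "Disallow: /out/" so compliant crawlers never fetch the redirect:

    SPONSORS = {"acme": "http://www.example.com/"}  # invented advertiser list

    def out(environ, start_response):
        """WSGI handler for /out/<name>: send humans through a 302 to the
        paid advertiser. Since robots.txt disallows /out/, no PageRank
        flows, but the traffic the buyer paid for still arrives."""
        name = environ.get("PATH_INFO", "").strip("/").rsplit("/", 1)[-1]
        target = SPONSORS.get(name)
        if target is None:
            start_response("404 Not Found", [("Content-Type", "text/plain")])
            return [b"unknown sponsor"]
        start_response("302 Found", [("Location", target)])
        return [b""]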

I haven't mentioned the negative-SEO aspect yet, as I'm not convinced it's really a viable method, but it is often discussed. If Google is penalizing sites that buy links, the next thought in the room is usually, "Then I'll just buy my competition a bunch of links and report them!" First, I'm not 100% convinced they actually penalize the buying sites rather than just discounting the links from the sites that sold them; if that's the case, you are just paying for clicks to your competition, which is not generally a good business practice. Second, I'm not sure they'll react to all of the reports, so you may in fact be buying your competitor some links that will help them in the rankings PLUS the clicks, which is also not a sustainable plan. Either way, there are a fair number of webmasters out there worrying that someone else can buy links to their site and have it hurt them.

With all this in mind, given the desire to buy links for traffic (whose format you cannot control) and the logical concern that someone else could buy links to your site that might hurt you, I propose that Google institute a "Report My Paid Links" or "Disavow Links" feature in Webmaster Tools.

I envision this tool allowing a webmaster to list domains or pages that have linked to their verified domain which they do not want counted for or against them in ranking. It's a way for a webmaster to say that they've purchased links for traffic in a local directory, or perhaps a high-profile school newspaper, but don't want to give the impression that those links were purchased for PageRank manipulation. It would have the added benefit of letting a webmaster feel more at ease if they see spammy links pointing to their site that they want to disavow. Oh, perhaps the old idea that there is almost nothing a competitor can do to harm you still applies and those links won't actually hurt you, but it would be a good way to help put people at ease.
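
To be concrete, and to be clear that I am making this up, the report could be as simple as a text file per verified site; the format below is entirely hypothetical:

    REPORT = """\
    # links I bought for traffic, not PageRank
    domain: local-city-directory.example
    page: http://school-paper.example/sponsors.html
    """

    def parse_report(text):
        """Parse the made-up report format into (kind, value) pairs."""
        entries = []
        for line in text.splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            kind, _, value = line.partition(":")
            entries.append((kind.strip(), value.strip()))
        return entries

    print(parse_report(REPORT))  # [('domain', '...'), ('page', '...')]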

So I say: Google, please let me report paid links! Let me tell you which links I bought for traffic. Let me tell you, so that if somebody reports my site as a link buyer, you can see that I already told you about them, increasing your trust in me rather than taking the chance that some human reviewer gets it wrong. Let me have those links on record in case the link I bought, which was on a nofollowed page, is later changed by the webmaster without my knowledge.

Then again, if you are only going to punish the sellers and not the buyers, then say so, so we can put all this "Google bowling" nonsense behind us. :)

posted in Google, Paid Links | 2 Comments

23rd May 2008

Really Minty Fresh Indexing

It took all of 13 minutes for Google to pick up my previous post about my death threat and send a visitor for mayhissolrestinpeace. Very impressive. Expecting my first referral from Yahoo! late next week. :-)

posted in Google | 1 Comment

29th April 2008

Google friendly URL structure

From Google’s own Webmaster Help Center:

A site’s URL structure should be as simple as possible. Consider organizing your content so that URLs are constructed logically and in a manner that is most intelligible to humans (when possible, readable words rather than long ID numbers). For example, if you’re searching for information about aviation, a URL like http://en.wikipedia.org/wiki/Aviation will help you decide whether to click that link. A URL like http://www.example.com/index.php?id_sezione=360&sid=3a5ebc944f41daa6f849f730f1, is much less appealing to users.

This can be found on Google's UN-friendly URL: [screenshot]


I think Nelson captured it best when he said, "Ha-ha!" [image]


posted in Google | 1 Comment

25th April 2008

Spam Monkeys

Google has penalized some sites for buying links and others for selling links. Personal blogs with no revenue stream have their rankings stripped, while large brands carry on selling and buying links.


From the Navy Safety Center:

Start with a cage containing five monkeys. Inside the cage, hang a banana on a string and place a set of stairs under it. Before long, a monkey will go to the stairs and start to climb towards the banana. As soon as he touches the stairs, all of the other monkeys are sprayed with cold water. After a while, another monkey makes an attempt with the same result, and all the other monkeys are sprayed with cold water. Pretty soon, whenever a monkey starts up the stairs, the others will try to prevent it.

Now, put away the cold water. Remove one monkey from the cage and replace it with a new one. The new monkey sees the banana and wants to climb the stairs. To his surprise and horror, all the other monkeys attack him. After another attempt and attack, he knows that if he tries to climb the stairs he will be attacked.

Next, remove another of the original five monkeys and replace it with a new one. The newcomer goes to the stairs and is attacked. The previous newcomer takes part in the punishment with enthusiasm! Likewise, replace a third original monkey with a new one, then a fourth, then the fifth.

Every time the newest monkey takes to the stairs, he is attacked. Most of the monkeys that are beating him have no idea why they were not permitted to climb the stairs or why they are participating in the beating of the newest monkey. After replacing all the original monkeys, none of the remaining monkeys have ever been sprayed with cold water. Nevertheless, no monkey ever again approaches the stairs to try for the banana.

Why not? Because as far as they know, that’s the way it’s always been done around there.

They don't have to punish all the link buyers, not even the big ones; just get enough people talking about it and the rest will follow. If I were looking for monkeys, I'd find the most vocal ones, perhaps those involved in online forums, social media, and discussion groups. You wouldn't want to waste your time going after trusted newspapers that sell links for $195 a year, especially when they offer:

Your search engine rankings will also improve by receiving a link on our sites!

While you’ll be less dependent on people having to search for your site, your search engine rankings will be improved for those that do.

“PageRank interprets a link from Page A (our sites) to Page B (your site) as a vote for Page B by Page A. PageRank then assesses a page’s importance by the number of votes it receives. PageRank also considers the importance of each page that casts a vote, as votes from some pages are considered to have greater value, thus giving the linked page greater value.” – Google Support site

“The best way to ensure that Google finds your site is to have pages on other relevant sites to link to yours.” – Google Support site

~Hat tip to Wingnut for the monkey quote; the link-seller outing was my own doing.

posted in Google, Paid Links | 0 Comments

2nd April 2008

Audio and Transcripts of Google Webmaster Chat

I promise no more RickRolling, and apologize to the 100+ people yesterday who were redirected… this is real.

The Google Webmaster Chat was a huge success as reported in many prominent places such as Search Engine Roundtable.

The online chat offered several avenues of communication between the many Googlers who participated in the meeting and the 200+ webmasters.

  1. A free-for-all chat room type interface, the transcript of which can be read here. [PDF]
  2. A question and answer section where webmasters posted questions and Googlers answered, the transcript is located here. [PDF]
  3. Webmasters called in, or were called, to listen in on the presentation by a dozen or so Googlers in a conference call spanning Mountain View, Kirkland, and Zurich. The audio recording of that section was saved on this Brazilian SEO site.

My Portuguese skills are a little rusty (read: nonexistent), but according to the Google translation it appears the author recommends you download the files, which I did. To help save his bandwidth I've mirrored the files here; be sure to link to the original source in your own discussion.

Audio Chat Part 1 (mp3)

Audio Chat Part 2 (mp3)

Audio Chat Part 3 (mp3)

Note: The much debated PageRank sculpting discussion starts right at the beginning of part 3.

A transcribed version of the audio portion can be found here.

posted in GWHG, Google | 3 Comments

19th March 2008

Suggestion for Google Webmaster Tools

[screenshot]


posted in Google | 1 Comment

17th March 2008

MLSGB Penalty

I want to patent the term MLSGB Penalty, which is the My Link Scheme Got Busted Penalty.

The penalty presents itself as a sitewide deranking and no amount of on-page optimization or writing of reconsideration requests will fix it.

The problem is that your site, which built its reputation on crappy free directory listings, link exchanges, and automated text-link purchases, has been busted. Google has figured it out and systematically devalued the majority of your links. After all of those crap links are filtered out, what's left is not much, and the rankings you used to enjoy will not return until you can build your link profile back up to the level you were getting credit for, this time with real links. The MLSGB Penalty tends to be doled out in sectors, devaluing crap links in clusters.

Symptoms of this are:

  • Reconsideration requests go unanswered, even if you've removed other offending schemes.
  • Sitewide PageRank drops
  • Same number of pages indexed but long-tail results are way down
  • Reduction in crawl rate
  • Ability to get new pages indexed is reduced
  • Not ranking for your own domain name
  • Interlinked domains suffer the same consequences
  • Seen in common sectors
  • Decrease is sudden, if not overnight
  • Google's site:, link:, and Webmaster Tools data stay the same
  • No warning or letter from Google

Basically, the site was ranking falsely before, based on an improper link profile; now that the link profile has been updated, the symptoms above appear. You can't be reconsidered, because restoring the rankings would require giving you credit for the links again, which isn't going to happen.

The one key piece of anecdotal evidence here is the drop in ranking for the domain name, which generally returns eventually. Google has indicated that a drop in ranking for a domain name is a sign of a penalty, but if it returns eventually, I believe it points to another cause. Most worthless link directories link to sites with the domain name as the anchor text, which is quite unnatural in today's linking. It may have been all the rage back in 1996, when people actually did build link lists for humans, but not anymore; even news services use keyword anchor text to help add to the story. When Google figures out that the majority of your links came from directories and link exchanges and removes the credit, the domain ranking suffers because a key indicator, anchor text, has been removed. Eventually relevancy wins out and the site ranks again for its domain.

Just as a newly launched site will skyrocket to the top with a few good links, because Google wants to keep its index fresh with the latest trends, the same goes for the reverse: when a site loses a large percentage of its links all of a sudden, it follows that it should also drop in rank. These behaviors happen naturally in the wild; a new site splashes onto the scene and goes viral in days, or an old site shuts its doors or gets involved in a scandal, causing people to stop linking to it. Aggressive false link building mimics the natural quick link growth that genuinely popular sites enjoy, catapulting a site to the top, and Google's equally aggressive filtering mimics a drop in popularity.
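
To make the anchor-text point concrete, a toy calculation with invented numbers: if 70% of a site's anchors are the bare domain name, that looks like a directory-and-exchange profile, not a natural one.

    # Toy illustration only; the numbers are invented.
    anchors = ["example.com"] * 700 + ["blue widgets"] * 200 + ["good read"] * 100
    share = anchors.count("example.com") / len(anchors)
    print("domain-name anchor share: %.0f%%" % (share * 100))  # -> 70%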

How can you recover? Surely not through the same methods that got the site into this mess to begin with. More aggressive link building in crap directories and link exchanges on 'links.html' pages surely won't help, and may even aggravate the situation with another round of deranking. Reconsideration requests will go unanswered, as there is nothing to reconsider, just a thinly linked site ranking where it should. The only answer is to build links naturally at a pace the site deserves, and if the content is so poor that it won't get links no matter who you show it to, it may be time to start over.

I have no insight into whether this is manual or automated, but the quickness of the onset suggests manual to me. An automated method would slowly remove such links as they are found, whereas a manual review of a site's link profile would be quick and a one-time event. I think this penalty may have other manifestations, such as "going supplemental," the "minus (insert number of the day here) penalty," or even some of the recent uproar over paid links.

I don't want to give the impression that I believe the bad links to the site are actually harming anything, just that they used to count for something and no longer do. So before you go out and sign your competitor up for a million bestlittlewebsitedirectoryintheworld.com links, remember that they may enjoy that unnatural bump for a while, and with the extra traffic actually get a few real links.

posted in Google, SEO | 1 Comment

3rd March 2008

Why Spam Google Groups?

The Google Webmaster Help Group, which I participate in, has been inundated with spam lately from a certain spammer looking to push 2008 Peking Olympic Games souvenirs.

As seen below:

[screenshot: Olympics spam]

You will notice that the group's CMS nofollows all links in a post, which would make you wonder, "Why would someone go to so much effort to spam the groups?"

The answer is: Because it works. Well. Very well.

More and more people have been complaining lately that after discussing their site in GWHG, the thread will outrank their own site. Of course, some of the sites discussed in the group are indeed penalized and just about anything will outrank them, but it has been noted that the groups are getting more visible in the SERPs lately. The fact is that Groups material is indexed quickly and ranks quite well, irregardless (er, regardless) of content or value. Another aspect working for the Google Groups spammer is that Google has the groups in many languages, each on its own TLD, in essence replicating the spam on many more URLs than just the one it was planted on.

The spammer may not be getting any link love from the Google Groups spam pointing to 200836.com, but for his keyword phrase, "Peking 2008 Olympic Games," he's doing remarkably well in Google.

In the first 100 results, Google Groups spam occupies 23 positions (screenshot). To be fair to Google, they aren't the only target of this spammer, as some Yahoo! groups and other forums are also spammed, for a total of 40 of the top results (screenshot). By any standard, 40% of a search result being spam is not good.
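
If you want to reproduce the tally, it's easy enough to script; this assumes you've saved the first 100 results to a serp.html by hand, and the parsing is deliberately naive (it counts every href on the page, navigation included):

    import re

    html = open("serp.html", encoding="utf-8").read()
    urls = re.findall(r'href="(https?://[^"]+)"', html)
    spam = [u for u in urls if "groups.google." in u or "groups.yahoo." in u]
    print(len(spam), "group links out of", len(urls), "hrefs")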

With this Minty Fresh Spamdexing, the spammers no longer have to worry about the links they generate but rather use the forums themselves as doorways to their spam site, which, by the way, is indexed.

posted in GWHG, Google | 8 Comments
