31st July 2007

100% Supplemental Results Fix

no supplemental

A guy walks into the doctor’s office and says, “Doc, it hurts when I do this,” raising his arm over his head. The doctor takes a discerning look and says, “Don’t do that.”

Since the BigDaddy infrastructure update, through last year and up until now, Google has been improving the supplemental index to be not only fresher but also more comprehensive.

Matt Cutts has said, “We parse pages and index pages differently when they are in the supplemental index.” With the continued improvements to the supplemental index, the difference between it and the regular index has faded; so much so, in fact, that Google has announced that they will remove the familiar green tagging.

The distinction between the main and the supplemental index is therefore continuing to narrow. Given all the progress that we’ve been able to make so far, and thinking ahead to future improvements, we’ve decided to stop labeling these URLs as “Supplemental Results.” Of course, you will continue to benefit from Google’s supplemental index being deeper and fresher.

This should free up webmasters to work on the few metrics that actually matter: traffic, conversion, and visitor satisfaction.

Congratulations to Google on finally dumping the supplemental tag, and I hope you continue the house cleaning and remove the utterly useless link: operator.

posted in Google, Webmastering | 1 Comment

31st July 2007

Searching for JLH

I noticed an increase today in the search referrals for [JLH], which is explained by my new #1 ranking. Judging by the images that Google puts up before the search results, the searchers are severely disappointed when they land on my site to find out A) I’m not Jennifer Love Hewitt and B) I’m not even female.

JLH Search Results

I’m sorry to disappoint, I am a JLH, just not that one. If Jennifer would like to talk about it, feel free to have her contact me. :)

posted in Site News | 0 Comments

25th July 2007

SEO Tip: Avoid getting caught keyword stuffing

Matt Cutts just outed a spammer using a text box to keyword stuff his pages. It’s more of a defense of Google’s non-editorial process of indexing sites based on their value rather than the views espoused on the site. A fine notion, and well within Matt’s rights as a figurehead for the company he works for.

The story leads us to believe that the site in question has been banned from the index solely because Google discovered its dirty little tricks. It’s also a warning to other webmasters not to use such tactics, as 1) you may get yourself banned and 2) you may get publicly called out for it.

It’s a great little story, and we all sit and stare in awe at the great Google algorithm that can’t be so easily fooled by a keyword-stuffed text area. But can it be? Doing a search for an odd keyword combination from the text box ["poem grade powerweb"] (screenshot) gives us the following eight sites that use the exact same text box:

realimmortality . com
incrediblecures . com
eternallifedevices . com
superiching . com
liveforevernow . com
achieveimmortality . com
curecancerpill . com
immortaldevice . com

So yes, keyword stuffing is bad and you may get banned for it, but it also works. Whether or not Google can algorithmically find it is another question, as Matt’s example is surely not a good one to prove that it can.
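In principle the same trick scales beyond one query. Here is a minimal sketch, assuming you have already fetched the HTML of the candidate pages (the sample inputs below are made up, and this is certainly not Google’s method), of grouping pages that carry a byte-for-byte identical stuffed text box:

```python
# A sketch of finding a keyword-stuffing network the same way the
# ["poem grade powerweb"] search did: pages sharing one identical text box.
import hashlib
from collections import defaultdict
from html.parser import HTMLParser

class TextareaExtractor(HTMLParser):
    """Collect the text content of every <textarea> on a page."""
    def __init__(self):
        super().__init__()
        self.in_textarea = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "textarea":
            self.in_textarea = True
            self.blocks.append("")

    def handle_endtag(self, tag):
        if tag == "textarea":
            self.in_textarea = False

    def handle_data(self, data):
        if self.in_textarea:
            self.blocks[-1] += data

def fingerprint(block):
    """Hash a whitespace/case-normalized block so trivial edits don't hide a match."""
    normalized = " ".join(block.lower().split())
    return hashlib.sha1(normalized.encode()).hexdigest()

def shared_text_boxes(pages):
    """Map each text-box fingerprint to the domains carrying it (2+ domains only)."""
    groups = defaultdict(list)
    for domain, html in pages.items():
        parser = TextareaExtractor()
        parser.feed(html)
        for block in parser.blocks:
            if block.strip():
                groups[fingerprint(block)].append(domain)
    return {h: doms for h, doms in groups.items() if len(doms) > 1}

# Hypothetical sample pages for illustration only.
sample = {
    "realimmortality.com": "<textarea>poem grade powerweb immortality cure</textarea>",
    "incrediblecures.com": "<textarea>poem grade powerweb immortality cure</textarea>",
}
print(shared_text_boxes(sample))  # one fingerprint shared by both domains
```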

posted in SEO | 4 Comments

24th July 2007

Nofollow spam

Stop Nofollow

I’m in the midst of a complete review of Google’s Webmaster Guidelines, which got me thinking about something. Most search engine spamming techniques start out as useful methods of building a site, but when overused go past some invisible threshold into the domain of spamming.

Meta keywords used to be a way to indicate what a site was about; now they’re pretty much dismissed. Overuse of keywords in your content is now keyword stuffing. We’re encouraged to get sites to link to ours, but don’t pay or exchange for those links. Clearly any method that is used as an indicator of quality will be abused and eventually fall into the spam category.

This leads me to wonder when the overuse or misuse of nofollow will be seen as a signal of spam. Nofollow wasn’t developed as a tool to help a site rank, or even as an indicator of quality, but rather as a help to search engines that couldn’t figure out which links were garbage in blog comments, unattended forums, or guestbooks. With the proliferation of supplemental results being based solely on PageRank flow through a site, the use of nofollow for internal link management has increased.

The original PageRank idea was based on the observation that academic papers with the most references to them on a given subject tend to be the authorities on that subject. So let’s say you have an established site that claims to be an authority on just about every noun in the English language; it can’t control its content any more than it can control its links, so it nofollows all of them. That site will soon become the automatic authority on all subjects. The millions of sites that the wiki uses as its sources are no longer seen as sources; the wiki itself is seen as the authority.
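To make that concrete, here is a toy power-iteration PageRank (a rough sketch, not Google’s actual implementation) on a hypothetical three-node graph. Remove the wiki’s outbound edges, which is effectively what a blanket nofollow does, and its score rises while the scores of the sites it cites fall:

```python
# Toy PageRank via power iteration; a sketch only, with a made-up three-node web.
def pagerank(nodes, edges, damping=0.85, iters=100):
    """edges maps each node to the pages it links to with *followed* links."""
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iters):
        new = {node: (1 - damping) / n for node in nodes}
        for node in nodes:
            targets = edges.get(node, [])
            if targets:
                share = damping * rank[node] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # A page with no followed outlinks: spread its rank evenly.
                for t in nodes:
                    new[t] += damping * rank[node] / n
        rank = new
    return rank

nodes = ["wiki", "source_a", "source_b"]
# Both sources link to the wiki, and the wiki cites both of them back.
followed = {"source_a": ["wiki"], "source_b": ["wiki"], "wiki": ["source_a", "source_b"]}
# Same graph, but the wiki nofollows every external link it makes.
nofollowed = {"source_a": ["wiki"], "source_b": ["wiki"], "wiki": []}

print(pagerank(nodes, followed))    # wiki ~0.49, each source ~0.26
print(pagerank(nodes, nofollowed))  # wiki ~0.57, each source ~0.21
```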

I haven’t tested it myself, but you know I will. Is there a bump available for a site that all of a sudden disavows itself from all its external links? Doing so would turn naturally occurring reciprocal links into one-way links to the nofollow abuser.

The nofollow was introduced so that search engines would not be influenced by the works of disingenuous spammers. But is not the nofollowing of millions, perhaps billions, of otherwise genuine links also influencing the search results in a negative way? A site like Wikipedia, with its 3.2 million pages in Google’s index, has definitely affected the search results, in a negative way. Perhaps the use of nofollow on such a site has helped curb the attacks by spammers on the wiki, but that’s not Google’s problem, and it shouldn’t be mine either.

Sites that artificially link out with all nofollowed links should be seen for what they are: spam. Amazingly, some sites that don’t have any external links still rank in Google. Odd from a company that built its fortune on the concept of linking.

So come on, Google, when will nofollow abuse be added to the guidelines? Better yet, when will the abusers be punished?

posted in Webmastering | 6 Comments

17th July 2007

Google Nude Stuff

I’ve joined the ranks of Vanessa Fox in her dominance of the Nude search results.

Today in my logs I found an interesting search referral: [google nude stuff]. I currently rank #6, which is great, but I have a long way to go to beat Vanessa, who has the much-coveted indented result with two URLs.

You strive and you strive and then one day you get there and don’t know what to do with the success.

Thumbnail saved for eternity.

google-nude-stuff.png

posted in humor | 0 Comments

17th July 2007

GWHG Highlight: Overwhelmed

Today in my Google Webmaster Help Group highlights I am going to pick a thread from each sub-group to emphasise.

Crawling, Indexing, and ranking:

The archive for this group is currently unavailable.

We’re sorry for the inconvenience. Please try again shortly.

Google Webmaster Tools:

The archive for this group is currently unavailable.

We’re sorry for the inconvenience. Please try again shortly.

Sitemap Protocol:

The archive for this group is currently unavailable.

We’re sorry for the inconvenience. Please try again shortly.

Suggestions & Feature Requests - webmaster-related only please:

The archive for this group is currently unavailable.

We’re sorry for the inconvenience. Please try again shortly.

Random Chit-Chat:

The archive for this group is currently unavailable.

We’re sorry for the inconvenience. Please try again shortly.

It’s like driving up on a car accident: you wonder if you should call 911 or if a dozen other people already have. I’ve got to assume that someone is trying to get the hamster back on the wheel by now. I just hope they didn’t crash the motherboard on the Google Groups server, because parts for Commodore 64s aren’t that easy to come by anymore.

posted in GWHG, highlights | 2 Comments

16th July 2007

My Desktop

Cute Kids

posted in Personal | 0 Comments

13th July 2007

Site popularity: Displacement, velocity, and acceleration

I want to start out by saying that I haven’t wasted my time reading patents, nor do I have any inside knowledge of whether this is fact or not. It’s just pure theory, conjecture, hypothesis, observation, opinion…it’s a blog post.

We all know that one of the factors in a site’s ranking possibilities is the popularity of the site/page. The de facto measurement for this popularity is the number of links the site has. Links drive entire economic segments of internet marketing, from buying and selling them to simply creating them. At the center of the link popularity firestorm is Google’s PageRank, a measurement of a page’s importance, which is purely a calculation of the quantity and quality of the links pointing to that page.

Douglas Fairbanks is a really popular guy; he has millions of fans who paid their last nickel to see his movies. Unfortunately, Douglas is dead, and hasn’t made a movie since the 1930s. His popularity didn’t wane; his devoted fans were still fans. However, new things came along in the movies (like sound) and they became fans of those as well. The point being, like in life, being popular is a continuous effort. You cannot reach a certain amount of fame, or links, and then sit back and enjoy the ride.

Google keeps track of your links. We check for them using the link: operator, log into our webmaster tools account to check the links, go to Yahoo and use their Site Explorer, and wait for that quarterly PageRank update. By logging the links to a site they also collect another crucial metric that is rarely discussed: time. Somewhere in Google is a database that is logging: link X with a given PR to page Y, found on DD/MM/YY at HH:MM:SS. All of the online tools available track the quantity and, to a lesser extent, the quality of the links, but the time factor is not mentioned. Given the time factor, a whole host of calculation possibilities arises; I’m going to go over some of the implications.
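Nobody outside Google knows what that table actually looks like, but a minimal sketch of the record the paragraph imagines, with field names and weekly bucketing that are purely my assumptions, might be:

```python
# A hypothetical link-log record: "link X with a given PR to page Y,
# found on DD/MM/YY at HH:MM:SS". Field names are invented for illustration.
from collections import Counter
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class LinkEvent:
    source_url: str              # page the link was found on
    target_url: str              # page on your site being linked to
    source_pagerank: int         # whatever quality score the crawler assigns
    found: date                  # when the link was first seen
    lost: Optional[date] = None  # when the link disappeared, if it did

def weekly_net_links(events):
    """Net links gained per (year, ISO week): +1 when found, -1 when lost."""
    net = Counter()
    for e in events:
        net[e.found.isocalendar()[:2]] += 1
        if e.lost:
            net[e.lost.isocalendar()[:2]] -= 1
    return net
```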

Displacement

Displacement is the total number of links a site has; it’s the distance from zero along a straight line to the total. It’s the one factor we can gather some data on ourselves by using online tools. When evaluating a site’s performance problems, it’s usually the first place any forum observer goes, saying things like, “You don’t have enough links, get more to rank for anything,” or conversely, “I don’t know why you don’t rank, you’ve got 8000 links.” The displacement of your site’s links is the sum total of all the links you’ve received, less the ones you’ve lost, giving a snapshot of the site’s health. Older sites tend to have more links, since they’ve been around a while to gain them, as do popular or trendy sites, as they tend to get them quickly. Not-so-good sites, or sites about obscure subjects that nobody is interested in, tend to have fewer; new sites may have none.

Velocity

Using the time data of link acquisition, another variable can be calculated: the link velocity. Velocity is defined as the rate of change of displacement, given in units of displacement per time (MPH, m/s, ft/min), or for site popularity let’s say links/week, links/day, links/year, or links/site age. Velocity is the rate at which your site is gaining or losing links. It’s not easily viewed in any of the online tools or data given. Positive velocity is anything above gaining zero links per time period. If you’ve gotten one link in the last week, and not lost any, you’ve got positive link velocity. However, if you haven’t gotten any links this week but lost 3, you’ve got negative velocity. Velocity is a much better indication of how the site is currently doing than displacement. For example, if you’ve got a site with 10,000 links to it, normally we’d say that site is fairly popular, but if it’s only gaining 2 links a week at the moment, it really isn’t that popular any more. Sure, you still get your credit for having 10,000 links, but some consideration has to be given to what you’re doing today.
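In code the idea is simple once a week-by-week running total of links (the displacement series) has been reconstructed: velocity is just its first difference. A sketch with made-up numbers:

```python
# Velocity as the first difference of the weekly link total (links per week).
def velocity(displacement_series):
    """Week-over-week change in the link total: positive = gaining links."""
    return [b - a for a, b in zip(displacement_series, displacement_series[1:])]

# Hypothetical sites: a big but stalled one versus a small but growing one.
old_site = [10_000, 10_002, 10_003, 10_003, 10_000]
new_site = [120, 180, 260, 360, 480]
print(velocity(old_site))  # [2, 1, 0, -3]     -> barely moving, then slipping
print(velocity(new_site))  # [60, 80, 100, 120] -> small total, strong velocity
```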

Another calculation is overall velocity, or velocity calculated over the time frame of the entire event. Let’s consider two marathon runners. Both have run the entire distance of 26+ miles. The first runner completed the journey in 3:00 hours, the second took 7:00 hours. Our first runner has an overall velocity of 26/3, or a little over 8-1/2 MPH, while the second ran an average of about 3-3/4 MPH. In the web world they’d both have the same PageRank (26+ miles) but entirely different link velocities; one may have taken 3 years to get to a PageRank of 6 while the other has done it in 9 months.

Acceleration

Doing a further calculation on the links and time data, we can come up with the link acceleration, defined as the rate of change of velocity. Knowing the acceleration of an object tells us the trend that object is taking: is it gaining or losing velocity? So now for a web site we’ve got three parameters to look at: the total links, the velocity at which links are being gained or lost, and the rate at which that gain or loss is changing. For a given site that has 10,000 links to it, last week it may have gained 50 new links for a velocity of 50 links/week. The week prior the site gained 40 links, so the site is accelerating; its velocity has gone up by 10 links per week per week, showing an upward trend in popularity. On the other hand, let’s take the same site with 10,000 links that has gained 50 links this week, but last week it gained 80 links. The acceleration of the site is negative 30 links/week/week. It’s still fairly popular, it’s still gaining in popularity, but the rate at which it’s gaining popularity has slowed down. This isn’t something that is normally noticeable when doing a site evaluation, as the data just isn’t there for us to gather, unless it’s been gathered and logged with the time component included.
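Acceleration is then the second difference, the change in velocity from one week to the next. A sketch using the same two scenarios as above (a site that gained 40 then 50 links, and one that gained 80 then 50):

```python
# Acceleration as the second difference of the link total (links/week/week).
def velocity(displacement_series):
    return [b - a for a, b in zip(displacement_series, displacement_series[1:])]

def acceleration(displacement_series):
    """Change in velocity from one period to the next."""
    return velocity(velocity(displacement_series))

speeding_up  = [9_910, 9_950, 10_000]   # +40 links, then +50
slowing_down = [9_870, 9_950, 10_000]   # +80 links, then +50
print(acceleration(speeding_up))   # [10]  -> popularity trending up
print(acceleration(slowing_down))  # [-30] -> still gaining, but slowing down
```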

Comparison

I’ve discussed how a site’s link profile could be used to evaluate its current popularity and trends, but there is another consideration. Since Google has this information on all the web sites in its index, one would have to assume that they use it on a comparison basis. For a given search term, Google whirls and buzzes and comes up with a ranking based on its 11 secret herbs and spices, then comes to link evaluation. The first factor is raw popularity: how many links point to each of the sites relevant to the search. The second is velocity: how fast or slow has each site been gaining links. The third is acceleration: has that link gain or loss been speeding up or slowing down. Comparison is important because link popularity, velocity, and acceleration do not have the same weight in the ranking algorithm for all sectors. If you are searching for the history of WWII, one would assume older, more popular sites would rank higher, because the history of WWII hasn’t changed, the interest in the subject is pretty steady, and velocity and acceleration should follow the web as an aggregate (as the total number of available links grows, so should a site’s share). This is where an old established authority site would probably be unbeatable in the ranks. Other topics, however, may not have a history to consider; they are new, so velocity and acceleration would have to be weighted more.

Some implications and observations

So now that I’ve hopefully got you thinking about something other than just how many links you’ve got, let’s consider the implications of such ideas in regular search behaviors.

Google has yet to celebrate its 10th anniversary, and the internet does not seem to be going anywhere soon. If the algorithm were purely based on PageRank or total links, eventually all the web results would settle down to a select small group. Older established sites with their millions of links would continue to get millions of links and eventually be unbeatable. Well, this is obviously not the case, as new sites and trends pop up in popularity all the time. It is conceivable that in 20 years’ time, when we’ve got a real history to look at (30 years of Google), there will be sites that have millions (if not billions by that time) of links but yet don’t rank for anything at all. Sites that may be popular today will still have their links, but will not gain them as before and will be filtered to the bottom. Just like our friend Douglas Fairbanks: his fans didn’t stop loving him, they just started liking other things more. I’m looking forward to the day when I can look back at the internet with my grandkids and tell them about the days when every search turned up a wiki result. And I can show them the old and busted wiki site sitting there with 10 billion pages of content and 20 billion links, not showing up at all in search results because no one has linked to it in the last 10 years….ahh, one can dream.

It’s been observed by many that PageRank isn’t everything, and the primary proof of this is the search results page. It’s been pointed out in a million different places that a PageRank 2 page can outrank a PageRank 6 page. Beyond on-page factors such as content, the difference is link velocity and acceleration. The PageRank 2 page may not have as many total links at the moment, but it’s been getting them at a quicker pace than the PageRank 6 page.

Another phenomenon that is discussed often is the newness factor. Fresh sites and pages tend to get a bump in the SERPs, then settle down into a lower rank. A new page has no history, so when an acceleration calculation is done its acceleration is huge. If its displacement and velocity were zero last week, but this week it has 10 links, it has accelerated tremendously. In order for an established page that already has 100 links to match our new page’s relative growth, it would need to get something like 1,000 links in the same week.

I’ve read in some forums the theory that you shouldn’t get “too many links too fast.” I’ve always thought that was an odd theory, as it’s a natural phenomenon; when Apple announced the iPhone, I’d imagine it got a few links that day. However, where there may be a grain of truth to it is in unnatural linking. Let’s say you decide your site is lagging, so you take a break from content generation and go on a week-long link building campaign. You write to hundreds of sites asking them to take a look at your content, suggesting where they could benefit from linking to a page on your site. You also go and submit to a few hundred directories, and then go buy a couple hundred link ads. Initially you’ll probably see a substantial boost in traffic and probably rankings; you may even see some more green pixels in the toolbar on the next update, so you’re happy. You go back to business as usual, and then a month later you’re in WebmasterWorld whining that you’ve got the too-many-links-too-fast penalty. I’d suggest that there is no such penalty, just that you’ve made your site look like it’s losing popularity rather than gaining it. Sure, the site gets some credit for having more links than it used to, but when put into context with the temporal data it looks like it had a big gain in popularity one week and then the next week that popularity waned. Sure, you want to outpace your competitors in link building rate, but remember that slowing down in that link building is also a signal. A link building campaign unnaturally sets the bar higher for a site, and when that campaign stops you can no longer maintain the false popularity acceleration it portrayed. Once our webmaster quits whining in WebmasterWorld and moves on to other things, the site will eventually settle back into its natural link growth and probably regain its original rankings.

Spend any time watching webmastering forums and one recurring theme you’ll see is “I’ve done nothing and all of a sudden my rankings dropped,” also known as the -950 or whatever penalty. At this point many will head on over to Site Explorer and check out the links, and the site owner will point to a bunch of great links they’ve got on Microsoft’s home page, etc. What isn’t considered is acceleration. Remember there is negative acceleration, where the velocity is slowing down, and there is even negative velocity, where your link total is dwindling. If your competitors are gaining links at a regular pace but you’ve just lost some, your site may appear as if it’s penalized. Once again the problem lies in only looking at the link total and not knowing the link trending. If the site has 10,000 links and gains 100, it’s probably not going to be noticed by observing link: commands; 10,100 looks a lot like 10,000. On the other hand, if the site was normally getting 100 links a month but then in one day lost 500 links, an interesting thing happens. The popularity will appear pretty much the same; 9,500 links looks just as good in Site Explorer as 10,000 links. BUT the velocity will be negative, and the acceleration will be HUGELY negative, because the links were lost in a short period of time. Now 500 people rarely get together and decide to remove some links, but Google does it all the time. In Google’s never-ending quest to improve its ranking algorithm, they are always re-evaluating which links count and which don’t. If they’ve recently discovered that 500 of your links are footer links on sites you bought them from, and they simply discount those links, it may appear as a penalty because of the negative acceleration and velocity. No amount of writing reconsideration requests is going to get the site back into the rankings, because the effect of the negative link change will still be there. This also explains why some sites that suffer a sudden ranking drop come back into the rankings slowly. As time goes by, that negative spike in acceleration slowly fades into the site’s average. The site’s natural positive acceleration will slowly show that it’s again gaining in popularity, and the effects of losing the links all at once will be eliminated.

Blogs tend to get a bad rap for being able to rank fresh posts quite fast and then fading into obscurity. I think this has to do with a blog’s infrastructure and the link velocity and acceleration factors. When a new blog post is published it is shown on the front page of the blog, in a couple of categories, and probably in the archives. If the blog is remotely popular, many sites aggregate the feed and also publish the story on their front page, categories, etc. Unlike adding a new product under an existing category on your ecommerce site, a new blog post gets tons of link pop right out of the gate. Its link acceleration is huge. After some time goes by, acceleration stops, velocity goes to zero, and displacement stagnates. The blog post then fades into a ranking position much lower than the initial publication.

The circles I travel in tend to bring me to a lot of professional SEO sites and people. One of the main tenets of being an SEO is that SEO is not a one-time thing; you cannot just SEO a site and let it ride, you need continued SEO. Many have very good proof of this by being able to document sites that they used to work on: the client stopped using their services, and then slowly the site dropped into the abyss. Part of an SEO process is a link building campaign. The campaign can be as white-hat as possible, generating only natural links, but it is unnatural in that it outpaces the site’s natural abilities. Stopping this campaign will be seen as a negative acceleration and thus a slowdown in link velocity for the site. The key to a good link building campaign is not to outpace the natural link building of the site by too much. When link building is stopped, it cannot stop all at once. The site needs to wean itself off link building, slowly reducing its link building activities until its normal velocity is within the deadband of the unnatural link building. At that point the site can live on its own without a negative rankings drop.

In conclusion, I’d just like to sum up the fact that the link total for a site is not the only indication of its health. There may be other factors in a site’s popularity upswing or downswing in rank other than just the total links. Any unnatural link building, whether by the rules or not, can be seen by time-trended algorithms.

Coming up next: How you can monitor your own link velocity and acceleration, kind-of.

posted in SEO, Webmastering | 3 Comments

12th July 2007

GWHG Highlight: MFA (adsense) vs. MFA (affiliates)

Before I get started on the subject at hand, I’d like to point out a new commenter on this blog, Susan M, of Google fame. I appreciate her time and insight. I’m not an A-list-party-with-googlers-all-the-time kind of blogger, but you will see that the people who regularly comment here are all very much more intelligent than me, a theory which is only backed up by your presence.

There are two quite interesting threads in GWHG right now. One has gotten a significant amount of blog airplay because Adam Lasnik made a pretty revealing comment**. [As an aside, I have to volunteer that I responded** to it somewhat negatively, as the banned site is no more in violation of Google’s quality guidelines than other very popular sites. Popularity, as we all know from going to high school, is not an indication of quality.] The thread Adam is involved in was about an obvious MFA (made for adsense) site** that has been banned; the second thread, which hasn’t gotten any Googler play, is about an MFA (made for affiliates) site**, which is possibly under penalty.

I don’t have the answers for the two sites involved, but I did make a few observations while viewing them. If Google is working on cleaning up its index by removing sites of lower perceived value, I applaud them; there is a lot of junk out there. A lot of junk that they created, of course, as a secondary effect of adsense. If they really want to make a dent in the junk, I’d like to point out two sites that provide very little in the way of valuable content. The wiki is mostly information pulled together from other sites, and about.com is just a giant made-for-adsense trap taking advantage of subdomain spamming techniques. Spam doesn’t just mean using hidden text and links but also useless sites, a much more subjective assessment that’s probably not as easy to mechanize.

MFA (Made for Adsense)

These sites have no real purpose but to generate clicks on adsense ads. The designers put together content that will attract high-paying ads (the ads you get are contextual). Part of the TOS (terms of service) of Adsense is that you are not allowed to encourage clicks or even draw undue attention to the ads. The revenue model for being a successful adsense publisher is that you need people clicking on those ads; you don’t get paid for them viewing your site. The best way to get the ads clicked is to design the site to be less fulfilling than the ads. In order to make any money on adsense you need to design the site to be good enough to generate some traffic, but bad enough that the viewer doesn’t get what they came for and will go looking further, hopefully through the ad. If you write the world’s most definitive article on digital cameras, answering all the user’s possible questions perfectly, they won’t click your digital camera ads; why would they? If you write a vague article mentioning digital cameras enough to get some search traffic, but crappy enough that they won’t get any real answers, they are more than likely to click your ad looking for satisfaction. It’s an unfortunate fact about contextual ad publishing: the best sites as far as content goes don’t do well, the garbage ones do.

MFA (made for affiliates)

The model for building an affiliate site is different from getting paid for clicks. You only get paid when someone follows your affiliate link and then purchases an item. Contrary to adsense, you encourage people to click the ads or follow the links. But unlike adsense, you don’t get paid just for them clicking the ad; they need to purchase something, and you need to close the sale to get the payout. In affiliate-driven sites, the job of the content is to inspire the visitor to go somewhere else and purchase an item. Poor affiliate sites that are not successful may generate traffic, may generate clicks, but don’t close on the sale. The best affiliate sites give the consumer enough information to make an educated purchase decision. Affiliate marketing pretty much encourages good writing and research. The poor ones usually just copy content and republish it; those types of operations require millions of page views to be at all successful. Writing the same digital camera information site monetized by affiliate sales would require that your visitors from search engines be VERY satisfied with the information they received, so satisfied in fact that they are willing to go and buy the item.
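A back-of-the-envelope comparison, with entirely made-up numbers, shows why the two models reward opposite kinds of content: per-click payment favors the page that sends visitors away unsatisfied, while per-sale payment favors the page that convinces them to buy:

```python
# Rough revenue models for the two MFA flavors; every number here is invented.
def adsense_revenue(visits, ad_click_rate, earnings_per_click):
    """Paid whenever a visitor clicks an ad, whether or not they buy anything."""
    return visits * ad_click_rate * earnings_per_click

def affiliate_revenue(visits, link_click_rate, conversion_rate, commission):
    """Paid only when a referred visitor actually completes a purchase."""
    return visits * link_click_rate * conversion_rate * commission

visits = 10_000
# Thin, unsatisfying article: readers bail out through the ads, nobody is sold.
print(adsense_revenue(visits, 0.10, 0.50))          # roughly 500
print(affiliate_revenue(visits, 0.10, 0.01, 5.00))  # roughly 50
# Thorough review: fewer ad clicks, but the readers who click are ready to buy.
print(adsense_revenue(visits, 0.02, 0.50))          # roughly 100
print(affiliate_revenue(visits, 0.15, 0.10, 5.00))  # roughly 750
```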

The motivation for publishing both types of sites, of course, is remuneration, but the methods needed to be successful in either one inspire entirely different content creation styles. I back Google up in their quest to clean up the worst MFA (adsense) sites, as long as they get rid of the worst but very popular crap as well. I’d also hope they continue their assault on copied or scraped affiliate sites; we don’t need another site in the world publishing Amazon’s write-up for some SEO Books. On the other hand, if I am looking for some lawn care products I hope I find a site like that one, which provides a third-party point of view on many related products. It’s information I cannot find on Amazon’s site.

(Like the adsense and affiliate link drops? Ironic isn’t it?)

** Sorry for the nofollow, but I don’t link to places that have a policy of not linking out. Add me to the what we are reading blogroll (or any google domain for that matter) and I’ll be sure to remove all of the nofollows. :)

posted in GWHG, Webmastering, highlights | 1 Comment

10th July 2007

GWHG Highlight: Javascript

Google Groups

pfingo wonders:

When i click on the cache, i get a google error.

And Webado sharply notes:

Disable javascript and then go and visit your homepage at http:// www . pfingo . com/

You will see a blank page but viewing the source code you will see this: [code]

This is 404 page (not found) , so this is all that a robot will see.

For your human visitors you have the javascript redirection which is totally useless for robots.

Google is not a person. She doesn’t view your website with Firefox or Internet Explorer, which also means that when crawling your site your JavaScript is not going to be executed. If you use that script to redirect your visitors, Google is not going to see it.

When designing your site you must not only consider how it looks in many browsers but also how it works with features like JavaScript and Flash turned off. Not all people, including Google, browse with these features on. Using Firefox allows you to download add-ons to disable JavaScript, view as IE, turn off images, highlight external links, etc.
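If you want to check what a crawler that doesn’t run scripts actually receives, a rough sketch (my own approach, not a Google tool) is to pull the raw markup and look for client-side redirect patterns, which such a crawler will read but never follow:

```python
# Fetch a page the way a non-JavaScript client would (markup only) and flag
# script-based redirects. The URL below is a placeholder; adapt as needed.
import re
import urllib.request

def raw_html(url):
    """Return the raw markup; no scripts are executed."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")

def script_redirects(html):
    """JavaScript redirect patterns a script-less crawler will not act on."""
    patterns = [
        r"window\.location(?:\.href)?\s*=",
        r"location\.replace\s*\(",
        r"location\.assign\s*\(",
    ]
    return [p for p in patterns if re.search(p, html, re.IGNORECASE)]

html = raw_html("http://www.example.com/")  # hypothetical URL
hits = script_redirects(html)
if hits:
    print("The page relies on redirects a robot will not follow:", hits)
else:
    print("No script-based redirects found in the raw markup.")
```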

posted in SEO, highlights | 0 Comments

10th July 2007

GWHG Highlight: Hidden text and the reconsideration request

Google Groups

A thread was started on July 3, 2007 by the owner of a site who believes that Google has stopped indexing his/her site because:

About three weeks ago I turn[ed] a cookies feature on which would help to prevent abuse of the site. I believe this also cause all bots to stop crawling the site.

Google does mention that the use of cookies could be problematic, especially if they’re required to properly see the site.

Use a text browser such as Lynx to examine your site, because most search engine spiders see your site much as Lynx would. If fancy features such as JavaScript, cookies, session IDs, frames, DHTML, or Flash keep you from seeing all of your site in a text browser, then search engine spiders may have trouble crawling your site

Had the cookies caused a problem, it could have been diagnosed by using the Lynx browser.

That’s not why I am pointing out this thread.

Googler MattD steps in and points out some “old” pages of the site that contain a significant amount of hidden text (click the link to view the hidden text). Noteworthy in this discussion is the fact that MattD went beyond normal protocol and provided site-specific information. The danger of doing this is that everyone may come to expect this sort of personal treatment, which isn’t feasible and is the wrong assumption, but it is also a great milestone and an example that should be held up as a model for others to learn from. From this example I drew the following opinions.

  1. It’s good to have an idea of what you may have done to get in trouble, but don’t let that idea get in the way of other possibilities. Often having multiple people look at the site will get you differing views on things that you, the owner, are too close to the site to see as a problem.
  2. We don’t know how MattD knew what the site was in trouble for; was it a manual review or a signal in some of their wonder tools? Either way, they know. Remember Susan mentioned that a review of your site will probably include a deeper look at its overall practices.
  3. When submitting your reconsideration request you must be forthright and include ALL indiscretions, even the old ones, especially the old ones. More than likely a ban or penalty is not from what you did last night but from a while ago; a review of the entire site is in order, along with a recount of all the changes.
  4. It is entirely possible that the site’s and/or pages’ rankings were helped by the hidden text; after reconsideration the site may not regain its original position, since that effect is now gone.
  5. If you are penalized, it’s because Google has decided that you were attempting to fool the search algorithm. If you submit a reconsideration request that is incomplete and doesn’t include all problems, that could also be considered an attempt to deceive, though Adam Lasnik has said multiple reconsideration requests are not seen as a signal to be held against you. I wouldn’t assume that a 2nd or 3rd request would be aggregated with the previous one; more than likely a different person is reviewing it. If I were to submit an additional request with more information, I’d include the previous statements as well.
  6. This is always a problem with a 3rd party looking at a site. We are not always given all of the information available, access to all of the site’s pages on the server, or knowledge of what was done before. We only see the state the site is in now, without a context to put it in. Google, on the other hand, is the king of data storage and can contrast and compare the site’s multiple previous incarnations.

posted in SEO, highlights, reconsideration request | 2 Comments

9th July 2007

Pownce

I don’t get it

I’ve been playing with Pownce. Many thanks to Vanessa Fox: Nude Social Networking Queen Goddess for an invite. I haven’t used it enough to have an opinion yet, but I will soon enough.

Meanwhile, if you need an invite code, leave a comment and I’ll send you one until they are gone.

posted in web 2.0 | 0 Comments

8th July 2007

Thanks

I wanted to thank all the dozens of people who have commented and/or sent me personal notes regarding a small situation we had in Google Groups. I believe the issue is over, so let’s move on from there. As promised, I removed the post, as I don’t think we need a permanent record of the event. I also submitted my URL removal request; let’s see how quickly that works!

07/07/07 was indeed a notable day; I chose to celebrate it with a Prince concert in the Twin Cities.

posted in Site News | 9 Comments

2nd July 2007

johnweb has the minus 18 penalty

I keep reading that if you search for a domain name and it doesn’t show up as the #1 result, that is an indication of a penalty.

The owner of JohnWeb must be figuring he’s penalized then, because I sure see it showing up in a lot of search referrals. Ironically, this blog shows up as the #1 result, even though until now the term has not even been used on the site. The cached copy (as of 7-2-07) from the search results shows the familiar Google disclaimer, “These terms only appear in links pointing to this page: johnweb”

This site has probably gained links with the anchor text JohnWeb, as I created that as an online identity some time ago when joining something that wouldn’t allow my normal JLH login (too few characters, I think).

Is JohnWeb.com really penalized? I doubt it, since it’s just a parked domain that may or may not even have a history. I just think it’s been outranked by another site (this one) for a relatively obscure and non-competitive term.

Sometimes a cigar is just a cigar.

posted in SEO | 2 Comments

2nd July 2007

pa rum pum pum pum

drummer.jpg

…I am a poor boy too, pa rum pum pum pum. I have no gift to bring, pa rum pum pum pum. That’s fit to give the King…

My displeasure with Google’s handling of the Webmasters Help Group is well known, and I am weary from trying to influence change, so weary in fact that I’ve been pretty much reduced to occasional lurking with a random post or two. There are others though, much more disciplined and forgiving than I am. For these people still holding on to the dream, I commend them, and though I am poor in resources and cannot thank them financially, I would like to offer them a link drop, for whatever that is worth.

So congrats to the following for trying to keep at least some signal in the noise-filled shell of its former great self, the Google Webmasters Help Group.

Webado - Web Hosting and Design in Canada

Cass-Hacks - Powerful XHTML DHTML presentation and accessibility tools

Phil Payne - Website rescue, redesign and maintenance

Dockarl - BlixKrieg Wordpress Theme

JohnMu - Search Engine Tools

Sebastian - Links, Links, Links…

Red Cardinal - Search Engine Optimisation Ireland

Who did I miss?

posted in GWHG | 8 Comments
