SEO | JLH Design - Part 2

24th October 2007

Digital Point Members put on Suicide Watch



THIS IS AN EMERGENCY SEO/WEBMASTER BROADCAST.

Due to Google’s apparent assault on Paid Links resulting in some sites’ PageRank being reduced, Digital Point forum members will have to be guarded 24 hours a day for the foreseeable future. If you are near one, please remove their belt, shoelaces, and anything that could be fashioned into a sharp weapon. They should also be moved to the lowest floor in the building, and all windows should be boarded up.

This is not a drill. I repeat. This is not a drill.

posted in Paid Links | 2 Comments

24th October 2007

Rants: paid links and penalties


It’s my party and I’ll cry if I want to.  I’ve been reading a lot of ranting lately on Sphinn and the blogs, which got me into a ranting mood.  Let the games begin.

Even though Google has a ton of official blogs, discussion groups, webmaster guidelines, and press releases available, they decided they would be best served by sending Deep Throat down to the parking ramp to meet Danny Sullivan doing his best Carl Bernstein impression, to break the news that penalties are now given to link sellers. I can only guess they'd choose this chicken-shit cowardly approach because it has some plausible deniability if it really hit the fan. Beyond their poor choice of using unnamed sources, I have a couple of other issues bugging me.

As soon as one is assimilated into the collective, the first thing they teach the new drones is the gospel of "Don't Worry About PageRank". You'll see it spewed from every orifice of any Googler giving a speech, writing a blog, answering a question in a forum, or just plain pontificating from on high. It's the canned response for any and all questions regarding the green bar; its effect, its acquisition, its retention, its loss, its very existence. They are all told to say things like, "worry less about PageRank and more about creating unique and compelling content [and tools]." So if this PageRank is nothing to worry about, then why would docking some college newspaper's PageRank be a suitable punishment? If PageRank is no big deal and not to be worried about as much as content, why would they choose this as their punitive reaction? Maybe one should worry about PageRank just a little bit. You can't have it both ways: either it's not worth worrying about, or it is worth worrying about and something we should all fear losing. This leads me to another thought on the matter, that it's not punishment but rather an adjustment; more on that later.

It's clear that to battle the evildoers that sell and purchase links, some sort of punishment must be doled out. They can't have an outright ban on every site that sells a link, as Google would soon become a joke. If someone is searching for Stanford's newspaper they'd better find it. If not, then Google loses its relevancy. Sure, it wouldn't matter much if they just destroyed some nice lady in Colorado who's buying baby food with the money she makes from her site, but there would be plenty of high-profile cases that would just make them look silly. We as webmasters, marketers, SEOs, or just plain anyone who has any idea how the inner workings of search work have to step back from the scene for a moment. The VAST majority of Google's users, customers, and shareholders don't give a lick about paid links, hidden text, or cloaking. They just know that when they search for something they expect to see it. If Google banned Stanford for selling links and someone who wasn't in the know was told that was the reason, the response would be a great big, "So what?" The point is that while selling links goes against Google's webmaster guidelines, not listing the site selling the links goes against Google's core principle of returning the most relevant results.

That principle has its limits. In the case of a site known for facilitating the selling of links, it's so well known that when you use the Google toolbar to search for its name you'll get it listed as a suggestion as soon as you type [text-l] in the field. If you continue the query and type the whole [text-link-ads], you will not find the site listed. In this case, Google has decided that returning the most relevant result is not quite worth as much as punishing the offender. So I am quite confused about where that distinction gets made. Is it just academic institutions that get this exemption? Or if Matt Drudge started selling links, would he too get to be listed for his name and his site? I find it utterly priceless that Google is taking the moral high ground on this text-link-ad selling problem by not returning the site for its own name, but they are more than happy to take their money to show their ads in the results. In this case the most relevant result is required to pay for its position. This reminds me of a little story I read on the Stanford web site: "For example, we noticed a major search engine would not return a large airline's homepage when the airline's name was given as a query. It so happened that the airline had placed an expensive ad, linked to the query that was its name. A better search engine would not have required this ad, and possibly resulted in the loss of the revenue from the airline to the search engine. In general, it could be argued from the consumer point of view that the better the search engine is, the fewer advertisements will be needed for the consumer to find what they want. This of course erodes the advertising supported business model of the existing search engines. However, there will always be money from advertisers who want a customer to switch products, or have something that is genuinely new. But we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm." No, it's not an exact comparison or even a pretty close metaphor, but the idea that the most relevant result has to pay for its position is true in both cases. Larry and Sergey knew that was wrong way back then; oh my, how their little project has strayed.

I'm going to go out on a limb here and postulate that Google cannot detect 100% of the paid links 100% of the time. I've deduced this solely based on their behavior: 1) they still encourage people to tattle on their competitors and do their job for them, 2) they have to manually penalize sites by removing PageRank or knocking them down a few hundred notches in the results, and 3) if my buddy calls me tonight and tells me he'll buy me a shot and a beer if I link to him tomorrow, nowhere in that process is Google involved. If they were able to detect paid links there wouldn't be a need for penalties of any kind; they would just re-rank the index as if said links didn't exist. The fact that Deep Throat and Danny had to have that clandestine meeting is proof enough to me that Google's ability to detect paid links is completely flawed. By admitting that penalties for selling links exist, they are admitting that they cannot handle them algorithmically, which as you've heard before just isn't scalable. Sure, there isn't a shortage of third-world countries with people willing to work for $1 a day hand-checking sites, but at some point the web will become so large that even that isn't scalable for a company with billions and billions to spend.

I don't want to just hammer on Google; I give them a lot of credit, and they are still the best option available for sending free traffic to a site that isn't going to go viral on YouTube. Perhaps Google's inability to detect and devalue paid links isn't all that flawed, because not all paid endorsements are irrelevant. That is what we are after: relevancy. If you want Bill Clinton to speak at your college's commencement, be prepared to pay him handsomely for the honor. That does not make his speech to the leaders of tomorrow any less relevant. To get Jeff Gordon to use your motor oil and put a little sticker on his car it's going to cost you millions, but his endorsement would mean a lot more than the man on the street telling you what to buy. Then again, if Bill Clinton told us what oil to buy and Jeff Gordon wanted to tell us how to work in the global economy, no one would (or should) listen to them either. The point is that both of these men are experts in their field who demand a high amount of compensation for their limited time. The fact that they are paid does not render their opinion any less relevant. The same could be said of links. If Stanford links to an academic, the link should carry a lot of weight; then again, if they link to britney-spears-mesothelioma-nude-lawyer.info it shouldn't be considered an endorsement on that subject either.

In that sense the drones saying, "Don't worry about PageRank" are right. PageRank in its purest form, the sum of the weighted links to a page, shouldn't be worried about. The relevancy of the links should be, be they paid endorsements or pure out-of-the-goodness-of-their-hearts editorial links.

One final note on paid links. This includes some other webmaster guideline no-nos as well, like hidden text. We can all easily prove that Google's ability to detect either a paid link or hidden text is limited. Create a new page, buy a link, and see if it gets indexed; or create a page with some obscure hidden text and see if you can find it on Google. Even if they could detect 100% of the hidden text and paid links within a month of publishing, that month would be plenty of time for some people to make use of it. With domains costing pennies nowadays, the true black-hatter doesn't even care if a domain is banned, penalized, or blown up completely. By the time that month is over they've moved on to a hundred or a thousand other sites. Who's really getting caught up in this dragnet is the "honest" webmasters who think they are acting the way they should. They are trying to build a site for the long haul and really want to produce a good product, but are fed so much bad information that they truly think they are doing the right thing. It's the center of all the anger I have with Google right now: the utter lack of communication with the real webmaster. Daily, many webmasters approach the Google webmaster help group saying things like, "I've exchanged tons of links, bought links, and yet I lost my ranking." Not because they are trying to be sneaky but because they feel that is what they SHOULD do. They don't read searchengineland or listen to Rand's YouTube video of the week because they are busy running their sites. It's the actual honest webmasters who don't have the right information in front of them that are getting hurt, while the black-hats slip through the cracks, only to have Google help them by removing legitimate sites by the thousands.

I want to rant about PageRank funnelling and Google’s Green attitude, but that will have to wait for another post.  I’m tired.

posted in Google, SEO | 2 Comments

25th September 2007

Googlebot gave up


There's been some rumblings lately around the fact that the DMOZ home page was removed from the index. I don't pay too much attention to the DMOZ, but in this case it was interesting. I started to follow various threads in the webmastering/SEO community diligently, as I've seen this "lost my homepage" behavior many times in GWHG. I even made an appeal on behalf of the unfortunate webmasters, which was ignored.

Matt Cutts, the true ambassador to the webmaster, came through and answered the question, even taking time from electronic cat gadgets and their pedometers to do so.

Hey all, I dug into this a little bit with the help of a couple crawl folks. It looks like when Googlebot tried to fetch http://www.dmoz.org/, we got a 301 redirect back to http://www.dmoz.org/ . It looks like that self-loop has been going on for several days. We were last able to fetch the root page successfully on Sept. 10th, but from that point on DMOZ was returning these 301-to-itself pages, and after a few days Googlebot gave up on trying to fetch the url.

This makes sense: as Googlebot hit the page, it would get a 301 response saying that the new location was the very page it had just requested. When that information got to the normal process that handles 301s, it probably just faulted out. Since (normally) no other content is served with a 301, they would have to remove the page, as they'd have no data for it.

Here’s the odd thing

When I first heard of this, several days ago, I visited the DMOZ site and viewed it just fine. You can't really view a page that redirects to itself, as this example I've set up shows; what you get depends on your browser: Internet Explorer will just sit there and spin, Firefox will eventually give you an error message, and an online tool will let you know that there is an error.
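
If you want to check a URL for this sort of self-loop yourself, here's a rough sketch of the kind of test I mean (it assumes the third-party Python requests library; the URL is just the obvious example):

import requests

def check_self_redirect(url):
    # Fetch without following redirects so the raw 301 response stays visible.
    resp = requests.get(url, allow_redirects=False, timeout=10)
    location = resp.headers.get("Location", "")
    if resp.status_code in (301, 302) and location.rstrip("/") == url.rstrip("/"):
        print("%s redirects to itself (%d) - a crawler would eventually give up" % (url, resp.status_code))
    else:
        print("%s returned %d, Location: %s" % (url, resp.status_code, location or "none"))

check_self_redirect("http://www.dmoz.org/")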

Pure Conjecture

Matt Cutts has been doing this a long time and is probably the best at speaking around issues when he needs to (protecting secrets, toeing the company line, etc.), but he has never appeared to be anything less than truthful, so I will by default dispel the idea that he was giving us bad information. So how can it be that I don't see a 301 redirect, no one else ANYWHERE in all the discussions mentions that the page won't load, and yet Googlebot sees that behavior?

  1. All things considered, the simplest explanation is usually the best: perhaps the 301 redirect was in place only briefly, when Googlebot happened to visit the site, but not long enough for anyone else to take note of it.
  2. They somehow managed to return a 301 response code, but not the redirect location. This is something I tried to simulate on many platforms but could not. The browsers and tools I used all seemed to expect the redirect location and either defaulted to one or erred out. Google, on the other hand, doesn't actually CRAWL anything the way a browser would; Googlebot just hits the page and comes back with whatever it saw. I don't know enough about how the interwebby works to really say if this is a possibility or not; it is, after all, pure conjecture.
  3. They were cloaking their 301, showing it only to Googlebot (or other bots, for that matter) and not to regular users with a browser or to requests from outside Google's IP range.
  4. Perhaps the 301 was referrer-based, and when there was no referrer it showed the redirect. Googlebot, since she runs on a predetermined schedule of URLs to crawl, would not send a referrer. (A toy sketch of this idea follows the list.)
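
Just to make that last bit of conjecture concrete, here's a toy sketch (entirely hypothetical, standard-library Python, and certainly not what DMOZ actually runs) of a server that sends a 301 back to the requested URL only when the request arrives without a Referer header:

from http.server import BaseHTTPRequestHandler, HTTPServer

class RefererBasedRedirect(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.headers.get("Referer"):
            # Visitors who clicked a link somewhere see the page normally.
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<html><body>The regular home page</body></html>")
        else:
            # Referer-less requests (a bot working through a crawl schedule)
            # get a 301 pointing right back at the URL they asked for.
            self.send_response(301)
            self.send_header("Location", "http://" + self.headers["Host"] + self.path)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), RefererBasedRedirect).serve_forever()

A visitor arriving from a link would sail right through, while a scheduled, referrer-less crawler would see nothing but the loop, which would line up with what Matt described.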

Any other ideas that I am too simple to see?

posted in Google, Matt Cutts, SEO, Webmastering | 5 Comments

28th August 2007

Paid links: A scalable solution


Google has always been smart with respect to building solutions based on scalability. From the onset they have always wondered what would happen if they had to grow the solution at hand tenfold or even more. Scalability is so entrenched in the philosophy behind their algorithm that they even openly admit that sites submitted via a spam report are not removed or penalized; rather, they use that data as information to judge their algorithm against.

What amazes me regarding their battle with paid links is how non-scalable their recommended solution is (a quick sketch of the first option follows the list):

  • Adding a rel="nofollow" attribute to the <a> tag
  • Redirecting the links to an intermediate page that is blocked from search engines with a robots.txt file
  • [it used to say something about javascript but they took that out]
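
For what it's worth, the first option is at least easy to automate on the publisher's side. Here's a minimal sketch (assuming the BeautifulSoup library is installed, and that PAID_DOMAINS is your own, made-up list of advertisers) that tags known paid links with rel="nofollow":

from bs4 import BeautifulSoup

PAID_DOMAINS = {"example-advertiser.com"}  # hypothetical list of paying advertisers

def nofollow_paid_links(html):
    soup = BeautifulSoup(html, "html.parser")
    for a in soup.find_all("a", href=True):
        if any(domain in a["href"] for domain in PAID_DOMAINS):
            # Merge "nofollow" into whatever rel values the tag already has.
            rel = set(a.get("rel") or [])
            rel.add("nofollow")
            a["rel"] = sorted(rel)
    return str(soup)

print(nofollow_paid_links('<p><a href="http://example-advertiser.com/">Our sponsor</a></p>'))

Of course, that only helps the tiny minority of publishers who know to do it, which is exactly the problem.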

They even take it a step further, which is obviously also a step back in their fight against spam, when they ask for people to submit sites that sell links, supposedly for some hand-to-hand combat.

So what's the more scalable solution: Google tweaking its system to identify paid links on its own, or having millions and millions of webmasters modify their billions and billions of pages available on the web? Obviously it's much easier for Google if we all just bend over and do their job for them, but then again, how serious are they about this? Sure, it's available in the guidelines, at conferences, and if you read Matt Cutts' blog, but that probably reaches a very small percentage of the real content creators out there. All the pros will know about it, but the VAST majority of indexed content managers out there are going to miss the message.

Let's go back for a second to review why they think Paid Links are bad. What set Google apart from the rest of the "search engines" at the time was that they not only looked at the content on the page but also used the academic model of references in literature to vouch for the authority a page has on the subject. At the time of that original theory, the web was young, innocent, and not exploited nearly as much. So Google's rankings are based largely on the links to a page/site, and since most people want to rank higher so they get more traffic, the obvious optimization procedure is to get more links. Had they ranked sites based on the use of purple text, all websites would be using purple text today.

Back when the original ideas for Google came about, most of the links out there were actually votes for other sites. It was when "surfing" the web actually meant bouncing around from site to site based on the links of those sites. You didn't Google something, you surfed for it. In 1584, when Google came up with this idea, the barriers to getting online were much higher than they are now. From registration and hosting to easy content generation, it's gotten much easier to get your site online today; back then it was mostly academic institutions, geek squads, and corporations that had the resources to publish sites.

Well, the times they are a-changin'. Now you can buy a domain for pennies, hosting is next to free, and writing content has never been easier. There are so many millions of new links created every day that they have lost their value due to the sheer volume of links available. HOWEVER, there are some sites that have some value, traffic, authority, and PageRank, and links from those sites tend to be worth something, and BOOM, an economy of link selling is born.

Not straying too far from its original founders, who borrowed the reference system used in academic papers as a judge of quality, Google wants to borrow from the older, established media sources that must disclose paid endorsements. What's different, however, is that most of those media outlets are regulated by authorities. Being that Google is the only game in town when it comes to actual search traffic, they are the de facto authority to regulate the masses.

So how can Google get everyone on board, and let me repeat that, EVERYONE, not just the 0.0001% of publishers that read Matt's blog or the 10,000 subscribers to seoMOZ, but EVERYONE? If Google wants to regulate the web then they need to start regulating it and not just observing it. It's going to be painful, but if they really want to monitor all the links on the web, it will have to be done.

  1. The first thing to do is throw out all of the links known to this point. They are polluted; we have no way of knowing the intention of any of them, since they exist pre-regulation.
  2. In order to have the links count, they have to be registered, verified, and monitored by Google, so all websites will have to be removed from the index.
  3. After verifying ownership in your webmaster tools account, Google will crawl the site. They can then show you a list of all the external links on your site. You then select what type of link each one is: regular voting link, paid link, non-endorsed user-generated link, etc. After selecting the link attribute you will have to digitally sign an agreement attesting to the authenticity of your claim, enter the captcha, and submit. Repeat for the rest of the links on your site.
  4. After the links are verified and attested to, Google can then add them into the index as votes or non-votes.

Now we've got something with some teeth in it. In order to be included in Google's index you have to have agreed to their terms and signed a legally binding contract that they can hold you to.

  • We no longer have to worry about hidden links as they won’t be verified.
  • Links will only be bought and sold for traffic.
  • You can code your links any way you’d like.
  • User submitted link directories are all but dead.
  • Sitewide links will probably disappear due to the sheer labor required to insert them.
  • Sneaky little plug-in and theme developers that drop links all over the place will be wasting their time as the site owners probably won’t vouch for them.
  • Automatic text link building systems will grind to a halt, as whenever the links on a page change, the page will drop out of the index waiting to be verified.
  • As the publisher has to be verified by state-issued credentials, large false link networks built up by SEOs will have little value, as Google will be able to see that all of them are owned by the same person.
  • Comment spamming will disappear as people will just turn off their comments.

Now, until that is instituted, and since you've read to the end of this story and know about Google's stance on paid links, you are morally bound to nofollow all of your paid links and only buy nofollowed links. Granted, your competitors who didn't go to SES San Jose or read Matt Cutts' blog probably aren't doing that, but that's your problem, not Google's.

The only flaw in the system is that some people may actually LIE and say that a link they got paid for is a regular link. Oh my. Well, at least that's a sin of commission and not a sin of omission, like the millions of people currently not nofollowing their paid links.

posted in Paid Links | 2 Comments

16th August 2007

TIFKAS


no supplemental

I’m proud to announce: TIFKAS

T - The

I - Index

F - Formerly

K - Known

A - As

S - Supplemental

Inspired by Prince when he went by The Artist Formerly Known As Prince - TAFKAP


posted in SEO | 1 Comment

25th July 2007

SEO Tip: Avoid getting caught keyword stuffing


Matt Cutts just outed a spammer using a text box to keyword stuff his pages. It's more of a defense of Google's non-editorial process of indexing sites based on their value rather than the views espoused on the site. A fine notion, and well within Matt's rights as a figurehead for the company he works for.

The story leads us to believe that the site in question has been banned from the index solely because Google discovered its dirty little tricks. It's also a warning to other webmasters not to use such tactics, as 1) you may get yourself banned and 2) you may get publicly called out for it.

It's a great little story, and we all sit and stare in awe at the great Google algorithm that can't be so easily fooled by a keyword-stuffed text area. But can it? Doing a search for an odd keyword combination from the text box ["poem grade powerweb"] (screenshot) gives us the following eight sites that use the exact same text box:

realimmortality . com
incrediblecures . com
eternallifedevices . com
superiching . com
liveforevernow . com
achieveimmortality . com
curecancerpill . com
immortaldevice . com

So yes, keyword stuffing is bad and you may get banned for it, but it also works. Whether or not Google can algorithmically find it is another question as Matt’s example is surely not a good one to prove that it can.

posted in SEO | 4 Comments

13th July 2007

Site popularity: Displacement, velocity, and acceleration


I want to start out by saying that I haven't wasted my time reading patents, nor do I have any inside knowledge of whether this is fact or not. It's just pure theory, conjecture, hypothesis, observation, opinion…it's a blog post.

We all know that one of the factors in a site's ranking possibilities is the popularity of the site/page. The de facto measurement of this popularity is the number of links the site has. Links drive entire economic segments of internet marketing, from buying and selling them to just plain creating them. At the center of the link popularity firestorm is Google's PageRank, a measurement of a page's importance, which is purely a calculation of the quantity and quality of the links pointing to a page.

Douglas Fairbanks is a really popular guy; he has millions of fans that paid their last nickel to see his movies. Unfortunately, Douglas is dead, and hasn't made a movie since the 1930s. His popularity didn't wane; his devoted fans were still fans. However, new things came along in the movies (like sound) and they became fans of those as well. The point being, like in life, being popular is a continuous effort. You cannot reach a certain amount of fame, or links, and then sit back and enjoy the ride.

Google keeps track of your links. We check for them using the link: operator, log into our webmaster tools account to check the links, go to Yahoo and use their Site Explorer, and wait for that quarterly PageRank update. By logging the links to a site they also collect another crucial metric that is rarely discussed: time. Somewhere in Google is a database that is logging: link X with a given PR, pointing to page Y, found on DD/MM/YY at HH:MM:SS. All of the online tools available track the quantity and, to a lesser extent, the quality of the links; however, the time factor is not mentioned. Given the time factor, a whole host of calculation possibilities arises. I'm going to go over some implications.

Displacement

Displacement is the total number of links a site has; it's the distance from zero along a straight line to the total. It's the one factor we can gather some data on ourselves by using online tools. When evaluating a site's performance problems, it's usually the first place any forum observer goes, saying things like, "You don't have enough links, get more to rank for anything" or conversely, "I don't know why you don't rank, you've got 8,000 links". The displacement of your site's links is the sum total of all the links you've received, less the ones you've lost, giving a snapshot of the site's health. Older sites tend to have more links, since they've been around a while to gain them, as do popular or trendy sites, as they tend to get them quickly. Not-so-good sites, or sites about obscure subjects that nobody is interested in, tend to have fewer; new sites may have none.

Velocity

Using the time data of link acquisition, another variable can be calculated: the link velocity. Velocity is defined as the rate of change of displacement, given in units of displacement per time (MPH, m/s, ft/min), or for site popularity let's say Links/Week, Links/Day, Links/Year, or Links/Site Age. Velocity is the rate at which your site is gaining or losing links. It's not easily viewed in any of the online tools or data given. Positive velocity is anything above gaining zero links per time period. If you've gotten one link in the last week, and not lost any, you've got positive link velocity. However, if you haven't gotten any links this week but lost 3, you've got negative velocity. Velocity is a great indication of how the site is currently doing, much more than displacement. For example, if you've got a site with 10,000 links to it, normally we'd say that site is fairly popular, but if it's only gaining 2 links a week at the moment, it really isn't that popular any more. Sure, you still get your credit for having 10,000 links, but some consideration has to be given to what you're doing today.

Another calculation is overall velocity, or velocity calculated over the time frame of the entire event. Let's consider two marathon runners. Both have run the entire distance of 26+ miles. The first runner completed the journey in 3:00 hours, the second took 7:00 hours. Our first runner has an overall velocity of 26/3, or a little over 8-1/2 MPH, while the second ran an average of 3-3/4 MPH. In the web world they'd both have the same PageRank (26+ miles), but entirely different link velocities; one may have taken 3 years to get to a PageRank of 6 while the other has done it in 9 months.

Acceleration

Doing a further calculation on the links and time data, we can come up with the link acceleration, defined as the rate of change of velocity. Knowing the acceleration of an object tells us the trend that object is taking: is it gaining or losing velocity? So now for a web site we've got three parameters to look at: the total links, the velocity of gaining or losing links, and the rate at which that gain or loss is changing. For a given site that has 10,000 links to it, last week it may have gained 50 new links for a velocity of 50 links/week. The week prior the site gained 40 links, so the site is accelerating; its velocity has gone up by 10 links per week per week, showing an upward trend in popularity. On the other hand, let's take the same site with 10,000 links that has gained 50 links this week, but last week it gained 80 links. The acceleration of the site is negative 30 links/week/week. It's still fairly popular, it's still gaining in popularity, but the rate at which it's gaining popularity has slowed down. This isn't something that is normally noticeable when doing a site evaluation, as the data just isn't there for us to gather, unless it's been gathered and logged with the time attached.
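
To put some toy numbers on that (my own back-of-the-envelope sketch, not anything Google has published), here's the arithmetic run on weekly link-count snapshots for a hypothetical site:

# Weekly snapshots of a hypothetical site's total inbound links, oldest first.
weekly_link_totals = [9960, 10000, 10050]

displacement = weekly_link_totals[-1]                        # total links today
velocity = weekly_link_totals[-1] - weekly_link_totals[-2]   # links gained this week
prior_velocity = weekly_link_totals[-2] - weekly_link_totals[-3]
acceleration = velocity - prior_velocity                     # change in the weekly gain

print("Displacement: %d links" % displacement)               # 10050
print("Velocity: %d links/week" % velocity)                  # 50
print("Acceleration: %+d links/week/week" % acceleration)    # +10

Same site, same 10,000-and-change links, but the last two numbers are what tell you whether it's on the way up or on the way down.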

Comparison

I've discussed how a site's link profile could be used to evaluate its current popularity and trends, but there is another consideration. Since Google has this information on all the web sites in its index, one would have to assume that they use it on a comparison basis. For a given search term, Google whirls and buzzes and comes up with a ranking based on its 11 secret herbs and spices, then comes to link evaluation. The first factor is raw popularity: how many links point to the sites relevant to the search. The second is velocity: how fast or slow has each site been gaining links. And the third is acceleration: has that gaining or losing of links been trending up or down. Why comparison is important is because link popularity, velocity, and acceleration do not have the same weight in the ranking algorithm for all sectors. If you are searching for the history of WWII, one would assume older, more popular sites would rank higher, because the history of WWII hasn't changed, the interest in the subject is pretty steady, and velocity and acceleration should follow the web as an aggregate (as the amount of all available links grows, so should a site's share). This would be where an old, established authority site would probably be unbeatable in the ranks. Other topics, however, may not have a history to consider; they are new, so velocity and acceleration would have to be considered more.

Some implications and observations

So now that I’ve hopefully got you thinking about something other than just how many links you’ve got, let’s consider the implications of such ideas in regular search behaviors.

Google has yet to celebrate its 10th anniversary and the internet does not seem to be going anywhere soon. If the algorithm was purely based on PageRank or total links, eventually all the web results would settle down to a select small group. Older, established sites with their millions of links would continue to get millions of links and eventually be unbeatable. Well, this is obviously not the case, as new sites and trends pop up in popularity all the time. It is conceivable that in 20 years' time, when we've got a real history to look at (30 years of Google), there will be sites that have millions (if not billions by that time) of links but don't rank for anything at all. Sites that may be popular today will still have their links, but will not gain them as before and will be filtered to the bottom. Just like our friend Douglas Fairbanks, his fans didn't stop loving him, they just started liking other things more. I'm looking forward to the day when I can look back at the internet with my grandkids and tell them about the days when every search turned up a wiki result. And I can show them the old and busted wiki site sitting there with 10 billion pages of content and 20 billion links, not showing up at all in search results because no one has linked to it in the last 10 years….ahh, one can dream.

It's been observed by many that PageRank isn't everything, and the primary proof of this is the search results page. It's been pointed out in a million different places that a PageRank 2 page can outrank a PageRank 6 page. Other than on-page factors such as content, the difference is the link velocity and acceleration. The PageRank 2 page may not have as many total links at the moment, but it's been getting them at a quicker pace than the PageRank 6 page.

Another phenomenon that is discussed often is the newness factor. Fresh sites and pages tend to get a bump in the SERPs, then settle down into a lower rank. A new page has no history, so when an acceleration calculation is done, its acceleration is huge. If its displacement and velocity were zero last week, but this week it has 10 links, it has accelerated tremendously. In order for an established page that already has 100 links to match our new page's velocity relative to its size, it would need to get 1,000 links in the same week.

I've read in some forums the theory that you shouldn't get "too many links too fast." I've always thought that was an odd theory, as it's a natural phenomenon. When Apple announced the iPhone, I'd imagine it got a few links that day. However, where there may be a grain of truth to it is in unnatural linking. Let's say you decide your site is lagging, so you take a break from content generation and go on a week-long link building campaign for your site. You write to hundreds of sites asking them to take a look at your content, suggesting where they could benefit from linking to a page on your site. You also go and submit to a few hundred directories, and then go buy a couple hundred link ads. Initially you'll probably see a substantial boost in traffic and probably rankings; you may even see some more green pixels in the toolbar on the next update, so you're happy. You go back to business as usual, and then a month later you're in WebmasterWorld whining that you've got the too-many-links-too-fast penalty. I'd suggest that there is no such penalty, just that you've made your site look like it's losing popularity rather than gaining it. Sure, the site gets some credit for having more links than it used to, but when put into context with the temporal data, it looks like it had a big gain in popularity one week and then the popularity waned the next. Sure, you want to outpace your competitors in link building rate, but remember that slowing down in that link building is also a sign. A link building campaign unnaturally sets the bar higher for a site; when that campaign stops, you can no longer maintain the false popularity acceleration that it portrayed. Once our webmaster quits whining in WebmasterWorld and moves on to other things, the site will eventually settle back into its natural link growth and probably regain its original rankings.

Spend any time watching webmastering forums and one recurring theme you'll see is "I've done nothing and all of a sudden my rankings dropped," also known as the -950 or whatever penalty. At this point many will head on over to Site Explorer and check out the links, and the site owner will point to a bunch of great links they've got on Microsoft's home page, etc. What isn't considered is acceleration. Remember, there is negative acceleration as well, where the velocity is slowing down, and there is even negative velocity, where your link total is dwindling. If your competitors are gaining links at a regular pace but you've just lost some, your site may appear like it's penalized. Once again the problem lies in only looking at the link total and not knowing the link trending. If the site has 10,000 links and gains 100, it's probably not going to be noticed by observing link: commands; 10,100 looks a lot like 10,000. On the other hand, if the site was normally getting 100 links a month, but then in one day lost 500 links, an interesting thing happens. The popularity will appear pretty much the same; 9,500 links looks just as good in Site Explorer as 10,000 links. BUT the velocity will be negative, and the acceleration will be HUGELY negative because the links were lost in a short period of time. Now, 500 people rarely get together and decide to remove some links, but Google does it all the time. In Google's never-ending quest to improve its ranking algorithm, they are always re-evaluating which links count and don't count. If they've recently discovered that 500 of your links are footer links on sites you bought them from, and they simply discount those links as not counting, it may appear as a penalty because of the negative acceleration and velocity. No amount of writing reconsideration requests is going to get the site back into the rankings, because the effect of the negative link building will still be there. This also explains why some sites that suffer a sudden ranking drop come back into the rankings slowly. As time goes by, that negative spike in acceleration slowly fades into the site's average. The site's natural positive acceleration will slowly show that it's again gaining in popularity, and the effects of losing the links all at once will be eliminated.

Blogs tend to get a bad rap for being able to rank fresh posts quite fast and then fading into obscurity. I think this has to do with a blog's infrastructure and the link velocity and acceleration factors. When a new blog post is published it is shown on the front page of the blog, in a couple of categories, and probably in the archives. If the blog is remotely popular, many sites aggregate the feed and also publish the story on their front page, categories, etc. Unlike adding a new product under an existing category on your ecommerce site, a new blog post gets tons of link pop right out of the gate. Its link acceleration is huge. After some time goes by, acceleration stops, velocity goes to zero, and displacement stagnates. The blog post then fades into a ranking position much lower than at its initial publication.

The circles I travel in tend to bring me to a lot of professional SEO sites and people. One of the main tenets of being an SEO is that SEO is not a one-time thing; you cannot just SEO a site and let it ride, you need continued SEO. Many have very good proof of this by being able to document sites that they used to work on: the client stopped using their services, and then slowly the site dropped into the abyss. Part of an SEO process is a link building campaign. The campaign can be as white-hat as possible, only generating natural links; however, it is unnatural in that it outpaces the site's natural abilities. Stopping this campaign will be seen as a negative acceleration and thus a slowdown in link velocity for the site. The key to a good SEO link building campaign is not to outpace the natural link building of the site by too much. When link building is stopped, it cannot stop all at once. The site needs to wean itself off link building, slowly reducing its link building activities until its normal velocity is within the deadband of the unnatural link building. At that point the site can live on its own without a negative rankings drop.

In conclusion, I'd just like to sum up by saying that the link total for a site is not the only indication of its health. There may be other factors in a site's upswing or downswing in rank other than just the total links. Any unnatural link building, whether by-the-rules or not, can be seen by time-trended algorithms.

Coming up next: How you can monitor your own link velocity and acceleration, kind-of.

posted in SEO, Webmastering | 3 Comments

10th July 2007

GWHG Highlight: Javascript


Over at Google Groups, pfingo wonders:

When i click on the cache, i get a google error.

And Webado sharply notes:

Disable javascript and then go and visit your homepage at http:// www . pfingo . com/

You will see a blank page but viewing the source code you will see this: [code]

This is 404 page (not found) , so this is all that a robot will see.

For your human visitors you have the javascript redirection which is totally useless for robots.

Google is not a person. She doesn't view your website with Firefox or Internet Explorer, which also means that when crawling your site, your JavaScript is not going to be executed. If you use that script to redirect your visitor, Google is not going to see it.

When designing your site you must not only consider how it looks in many browsers, but also how it works with features like JavaScript and Flash turned off. Not everyone, including Google, browses with these features on. Using Firefox allows you to download add-ons to disable JavaScript, view as IE, turn off images, highlight external links, etc.
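
To see roughly what a crawler sees, a plain fetch with no JavaScript engine behind it makes the point. This little sketch (standard-library Python, example URL only) prints the raw markup, and any script-based redirect inside it simply never runs:

from urllib.request import Request, urlopen

req = Request("http://www.example.com/", headers={"User-Agent": "toy-crawler"})
with urlopen(req, timeout=10) as resp:
    print(resp.status, resp.headers.get("Content-Type"))
    # Raw markup only: no JavaScript is executed, so a script redirect is invisible.
    print(resp.read()[:500].decode("utf-8", errors="replace"))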

posted in SEO, highlights | 0 Comments

10th July 2007

GWHG Highlight: Hidden text and the reconsideration request


A thread was started on July 3, 2007 by the owner of a site who believes that Google has stopped indexing his/her site because:

About three weeks ago I turn[ed] a cookies feature on which would help to prevent abuse of the site. I believe this also cause all bots to stop crawling the site.

Google does mention that the use of cookies could be problematic, especially if they're required to properly see the site.

Use a text browser such as Lynx to examine your site, because most search engine spiders see your site much as Lynx would. If fancy features such as JavaScript, cookies, session IDs, frames, DHTML, or Flash keep you from seeing all of your site in a text browser, then search engine spiders may have trouble crawling your site

Had the cookies caused a problem, it could have been diagnosed by using the Lynx browser.
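
If Lynx isn't handy, the same kind of check can be faked with a cookie-less fetch; a rough sketch (third-party requests library, example URL only) would be:

import requests

# A brand-new request carries no cookies, which is how a crawler arrives.
resp = requests.get("http://www.example.com/", allow_redirects=False,
                    headers={"User-Agent": "toy-crawler"}, timeout=10)
print(resp.status_code)  # a cookie wall often shows up here as a redirect or an error
print(resp.text[:300])   # and this is all a bot would have had to index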

That’s not why I am pointing out this thread.

Googler MattD steps in and points out some "old" pages of the site that contain a significant amount of hidden text (click the link to view the hidden text). Noteworthy in this discussion is the fact that MattD went beyond normal protocol and provided site-specific information. The danger of doing this is that everyone may expect this sort of personal treatment, which isn't feasible and is the wrong assumption, but it is also a great milestone and an example that should be held up as a model for others to learn from. From this example I drew the following opinions.

  1. It's good to have an idea of what you may have done to get in trouble, but don't let that idea get in the way of other possibilities. Often, having multiple people look at the site will get you differing views on things that you, the owner, are too close to the site to see as a problem.
  2. We don't know how MattD knew what the site was in trouble for: was it a manual review or a signal in some of their wonder tools? Either way, they know. Remember, Susan mentioned that a review of your site will probably include a deeper look at its overall practices.
  3. When submitting your reconsideration request you must be forthright and include ALL indiscretions, even the old ones, especially the old ones. More than likely a ban or penalty is not from what you did last night but from a while ago; a review of the entire site is in order, along with a recounting of all the changes.
  4. It is entirely possible that the site's and/or pages' rankings were helped by the hidden text; after reconsideration, the site may not regain its original position, since that effect is now gone.
  5. If you are penalized, it's because Google has decided that you were attempting to fool the search algorithm. If you submit a reconsideration request that is incomplete and doesn't include all the problems, that could also be considered an attempt to deceive, though Adam Lasnik has said multiple reconsideration requests are not seen as a signal to be held against you. I wouldn't assume that filing a 2nd or 3rd request would be aggregated with the previous ones; more than likely a different person is reviewing it. If I were to submit an additional request with more information, I'd include the previous statements as well.
  6. This is always a problem with a 3rd party looking at a site. We are not always given all of the information available, access to all of the site's pages on the server, or knowledge of what was done before. We only see the state the site is in now, without a context to put it in. Google, on the other hand, is the king of data storage and can compare and contrast its multiple previous incarnations.

posted in SEO, highlights, reconsideration request | 2 Comments

2nd July 2007

johnweb has the minus 18 penalty


I keep reading that if you search for your domain name and it doesn't show up as the #1 result, that's an indication of a penalty.

The owner of JohnWeb must be figuring he's penalized, then, because I sure see that term showing up in a lot of my search referrals. Ironically, this blog shows up as the #1 result, even though until now the term has not even been used on the site. The cached copy (as of 7-2-07) from the search results shows the familiar Google disclaimer, "These terms only appear in links pointing to this page: johnweb"

This site has probably gained links to it with the anchor text JohnWeb, as I created that as an online identity some time ago when joining something that wouldn't allow my normal JLH login (too few characters, I think).

Is JohnWeb.com really penalized? I doubt it, since it’s just a parked domain that may or may not even have a history. I just think it’s been outranked by another site (this one) for a relatively obscure and non-competitive term.

Sometimes a cigar is just a cigar.

posted in SEO | 2 Comments

27th June 2007

Been caught stealing


I've been quoting from an excellent post by Susan. The first thing I'd like to highlight is that when you've been caught spamming, they probably take a deeper look at the site: how it ranks, and particularly how it got there.

But we must also take into account that besides the shady SEO techniques used on a particular site (for instance hidden text, redirecting doorway pages, etc) there are often off-site SEO techniques used such as creating artificial link popularity in order to gain a high position in Google’s SERPs

Secondly, after the spammy on-the-page factors are removed and the link profile is updated, they then re-rank the site once it is re-included.

…once those manipulations to make a site rank unnaturally high are removed, the site gains the position it merits based on its content and its natural link popularity…

The reason I want to highlight this is that it's a subject I see a lot from webmasters. They've been caught spamming, cleaned up the site, and then can't seem to figure out why they don't rank where they used to anymore. Susan, in a rare glimpse of Google being quite open, lays the groundwork for understanding this. As she explains, the reason some sites are penalized for spamming is because spamming works. By removing sites, Google has really always admitted this simple fact, but at least now it's in writing.

Site removal is also basically an admission that detection is not nearly as automatic as we'd like to believe. If the algorithm could easily detect spamming techniques, the logical conclusion would be that they'd just ignore them and let the site rank where it would without the technique. That's the same way they treat the keyword meta tag: an abused system of assigning sites keywords, used back in the early 1930s, ignored by any real search engine today, but religiously used by the flat-earth society of webmasters. I'd imagine there are too many instances of collateral damage for many types of automatic detection, for example CSS-driven menu systems that hide some text.

Spamming a search engine takes some sort of sophistication that your average joe-six-pack mom-n-pop shop doesn't normally exhibit, which is why I am glad to see Susan mention that they look deeper into how the site has been acting off the page as well. The logic is that if you've been sly enough to hide some text, cloak, or commit any of various other egregious violations, maybe your links should be looked at. The first and easiest place to look, for the company that has built an empire on data mining, is your links. They've got cache data from your site going back to Jamestown in the 17th century, and data on the rest of the web. Finding exchanged links is nothing for a bit of data mining, and going 3 or 4 levels deep for linking rings is probably just as easy. Now they simply make a note that all those links to your site don't count.

I think this is also a good indication of the manual nature of penalization, as I think we are talking about link exchanges, or attempts to manipulate search engine rankings artificially. It's not the same as reciprocal linking. Reciprocal linking is a natural phenomenon of the web; birds of a feather tend to flock together, as they say. For example, I've linked to search giants like Sebastian and Matt Cutts multiple times, and both have linked to me. That was not arranged for any ranking purposes, but rather came about as a consequence of the subjects covered; thus a reciprocal link in that case is a good thing. Those links back and forth probably do count for something. Only a manual review can detect the difference.

So what does all this mean? If you've been caught spamming, expect Google to put on the rubber gloves and give your site a real good look-see. Also expect that even though you have removed your spamming ways, your ranking will probably be affected because 1) the original spam worked and doesn't now, and 2) other factors, even off-site ones, will also be re-evaluated.

I think where this really impacts people is the ones that get busted for unnatural linking procedures. If you're running a site in a specific niche that has garnered most of its authority from unnatural linking between sites through exchanges, and Google busts up that linking ring, some people are not going to be happy. If your site's links were comprised mainly of industry-wide exchanges, expect a big drop; on the other hand, if your competition had a good amount of real links, expect them to weather the storm better than you. No amount of reconsideration requests is going to get your site back to where it was, and there is nothing any Googler can do to help, as the old links that helped are now gone.

The answer, as always with Google, is to obey the rules and get more links. Real ones.

Sponsor: Today’s post was brought to you by Jane’s Addiction, circa 1990, back in my college days.


I’ve been caught stealing
Once when I was 5
I enjoy stealing
It’s just as simple as that
Well, it’s just a simple fact
When I want something, I don’t want to pay for it
I walk right through the door
Walk right through the door
Hey all right!
If I get by, it’s mine
Mine all mine!
My girl, she’s one too
She’ll go and get her a shirt
Stick it under her skirt
She grabbed a razor for me
And she did it just like that
When she wants something, she don’t want to pay for it
She walk right through the door
Walk right through the door
Hey all right!
If I get by, it’s mine
Mine all mine!
We sat around the pile
We sat and laughed
We sat and laughed and waved it into the air!
And we did it just like that
When we want something, we don’t want to pay for it
We walk right through the door
Walk right through the door
Hey, all right!
If I get by, it’s mine, mine, mine, mine, mine, mine, mine. . .

posted in Google, Music, SEO | 5 Comments

20th June 2007

People ask questions


I don't normally do any actual content writing tips here; like most, I like to keep my secrets to myself :) But while going over search engine referrals this morning I thought of something I'd like to share, just because it's so basic and simple but effective.

There is an astonishing number of people who ask search engines questions, literally. I'm not sure if it's a holdover from the bygone days of the Ask Jeeves campaign or that people actually think Google answers questions, but they do. They will form their query not as a keyword search but as a formal question.

For this reason, I often include forms of the questions typically asked when writing a page that answers said question. If you've read my blog before you'll recognize my tendency to do this. It's not just another example of my poor writing skills but rather me looking for some long-tail search results.

For example, one that comes up often is, "Why don't I have a page rank in Google?", which when queried in Google leads to the page I linked to as the #1 result.

This applies to other types of sites as well. We spend all our time trying to rank for the holy-grail one-word term like "Widgets" or the more specific "Red Widgets". You'd be surprised how many real people search using full questions with an action word in them, for example: "How can I buy red widgets online?" or "What is the price of red widgets?". These semantic phrases are much easier to rank for than the one- or two-keyword phrases.

I'm not sure if this is purely a symptom of people personifying the machine or the searcher's attempt to dig down in the results. Often if you search for "red widgets" you'll get the manufacturer, an expired eBay auction, some eBay subdomain, a wiki page on red widgets, some review site with Amazon affiliate links, subdomain spam on 31sui38s.com, and then some real sites. The searcher isn't looking for eBay or Amazon links; they've probably already tried those, wiki gives them information, the manufacturer gives them information, etc., so they narrow it down with action words like "buy", "price", "order", "purchase" and question words like "Where", "How", or "Who".

While writing your articles and websites, think not only about including all the information that you as an expert searcher would use to find the page, but also what someone who isn't as well trained would use. It's important to include in the text the intent and purpose of the page as well as the content.

Tips

  • If your site is trying to sell red widgets, be sure the red widget sales page says that visitors can “buy” or “order” there and mentions the “price” of the “purchase”. In sales you always have to remember to ask for the order; the same is true on the internet.
  • If you are writing an informative article, try to prominently include the common question (or questions) the article will answer, along with the answer.
  • Dump the image-only “buy it now” or “add to cart” buttons and replace them with text links, or at least include ALT text.
  • Personalize the page rather than generalize it. A million sites may be trying to rank for “Make money online”, but far fewer are thinking of the average searcher who is looking for “How can I make money online?” (then sell them the ebook telling them to sell an ebook).
  • Everyone is looking for a bargain. Include bargain-hunting words like “sale”, “bargain”, “discount”, and “clearance”. Those are words that eBay, the wiki, and the manufacturer won’t have on their red widget pages, but you should.
  • Smart internet shoppers are also coupon clippers, so give them coupons; be sure to offer “promotional codes” or “coupons” on the sales page in addition to the cart.
  • Don’t feel like you have to trap your users by camouflaging a sales page as information. Information is easy to find and you don’t want those people anyway; you want people who are ready to buy, so trumpet that fact. The page should be as much about red widgets as it is about the purpose of the page: to sell red widgets.
  • If you’ve written the definitive information page on a subject, be sure to include all of the questions it attempts to answer, in the form they will be asked.
  • Write the way people speak. In grammar school you were chastised for not writing in formal language, which is fine advice if you are writing a book, but in the reality that is the internet, most searchers aren’t writers and most will search the way they talk.

posted in SEO | 0 Comments

19th June 2007

Where everyone knows your name…



I need a new forum to hang my hat in. I’ve enjoyed the Google Webmaster Help Group for quite a while now, but it’s losing its focus. The place has been inundated with trolls and left for dead by Google*. No longer is it the wonderful place it once was, full of insightful official information. That official information is now fed almost exclusively through non-official A-list SEO bloggers, or around the bar at their conferences.

I really enjoy the search engine standards discussions the most, so I’m trying to find a new active forum. I figure if Google isn’t going to be answering any more questions officially, I might as well make my contributions somewhere where someone who needs the money is gaining from them. I get the feeling that Google is busy counting their money and not looking to improve their webmaster relations or be bothered with supporting us little people.

I’ve tried Digital Point and WebmasterWorld. WMW has a higher quality of posters, but you can’t discuss real-world issues there, so their search engine forums are worthless. Digital Point will allow spirited discussion, but there are WAY too many idiotic questions, so the intelligent stuff gets hard to find. I’ve been lurking around cre8asiteforums, but the volume is so low it’s hard to get a discussion going; maybe it will improve after I get to know some of the players.

Any other suggestions? If you’ve got a favorite, please post a comment so it at least gets a link, and perhaps a new contributor (me). I love helping people and learning, not arguing and reading lies, so moderation is a must.

* I was putting together my final statistics for the group, but I got depressed and sad, so I scrapped it. Not only is Googler participation nil, but the once-strong regulars are backing away. It’s hard not to when, hour after hour, the same trolls spew their lies unchallenged.

posted in GWHG, SEO, Webmastering | 10 Comments

8th June 2007

Your neighborhood



[Image: car on blocks] Your site is judged not only on the content it provides and the sites that link to you, but also on the sites you link to. The subject of those links helps set the theme of your site, but it also defines your web neighborhood, which in most cases you want to keep good.

We’ve been told in the Webmaster Guidelines to avoid bad neighborhoods.

In particular, avoid links to web spammers or “bad neighborhoods” on the web, as your own ranking may be affected adversely by those links

If you must link to such a terrible place, then by all means use the comment-spam-paid-link-announcing-bad-neighborhood-no-page-rank-passing nofollow attribute in the links.

In the old days before electricity, Google used to announce to the world that a site sucked by gray-barring its visible PageRank or simply delisting the entire site.

When judging a site, you used to be able to visit it, check out its visible PageRank, run a site: command to see that it’s indexed, and be pretty well assured that the site was doing fine. With the emergence of penalties instead of bans, PageRank that doesn’t change, and gray bars showing up for all kinds of pages, the job of being a good linker has become more difficult.

With that said, I’ve come up with a short, quick list of things that can be done to check whether a site is still in Google’s good graces. Some of these items are based on pure conjecture by forum members (the infamous -30 and -950 debaters), so any one failure should not be taken as a clear indication of the site’s linkability, but it may be cause for more investigation.

  1. Check for PageRank; if the site is new, it may not have any yet.
  2. Check the number of pages indexed using the site: command.
    1. Compare this number to the URLs in their sitemap if possible.
    2. Crawl the site with GSiteCrawler (which used to have sitelinks but doesn’t now???) to see how many URLs they actually have, and compare with the indexed count.
  3. Check the supplemental count. Is it in line with what you’d expect given the main page’s visible PageRank? A PageRank 6 site should be able to hold more than 3 pages in the real index! :)
  4. Search for the domain name and see if it turns up as the first result.
  5. Does the home page show up first in the site: command?
  6. Most importantly, and often overlooked: if you are linking to a site because it is the authority on chickens, search for chickens. Where does it fall in the results?
  7. Building on #6, check the rankings for the keywords and keyword combinations the site is obviously targeting.
  8. Not all bad sites have been caught yet, so do a good check for compliance with the Webmaster Guidelines. The site may pass the sniff test on all other fronts, but if it’s got hidden links and text on it, it may be banned tomorrow, and you don’t want to be linking to it.
  9. Check their link profile using Site Explorer. If 90% of their links are from directories, irrelevant sites, or forum signatures, you know you’ve got something suspicious going on.
  10. Check the site’s basic maintenance quality. A site that has a good robots.txt file, no canonical issues, and no session IDs is probably more aware of search engine standards than one that doesn’t (a rough sketch for automating this and #11 follows the list).
  11. Check the META tags. Not that this is a sign of being banned, but for me it’s a sign of quality. If the site has 23 META tags, from 100 “keywords” to “google-pray” and “revisit-after”, you know you are dealing with an amateur who may walk into trouble in the future.
  12. Send Matt Cutts an email and ask him to check the site out. He usually gets back to me in a day or two. Here’s his email address, set up just for site health inquiries, in case you don’t have it already:

[Image: mattcuttsemail.png]
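Several of these checks can only be done by hand (PageRank, supplemental results, and rankings need the toolbar or manual queries), but a couple of the basic-maintenance items can be scripted. Below is a minimal sketch, not part of the original list, using only the Python standard library: it counts the URLs the site’s sitemap.xml claims (for 2.1), confirms a robots.txt exists (for #10), and counts META tags on the home page (for #11). The site URL is a hypothetical placeholder.

import urllib.request
import xml.etree.ElementTree as ET
from html.parser import HTMLParser
from urllib.error import HTTPError, URLError

def fetch(url):
    """Return the raw body of a URL, or None if it cannot be fetched."""
    try:
        return urllib.request.urlopen(url, timeout=10).read()
    except (HTTPError, URLError):
        return None

def sitemap_url_count(site):
    """How many <url> entries does /sitemap.xml claim? (checklist item 2.1)"""
    data = fetch(site.rstrip("/") + "/sitemap.xml")
    if data is None:
        return None
    # Count <url> elements regardless of the sitemap namespace.
    return sum(1 for el in ET.fromstring(data).iter() if el.tag.endswith("url"))

class MetaCounter(HTMLParser):
    """Count <meta> tags on a page (checklist item 11)."""
    def __init__(self):
        super().__init__()
        self.count = 0
    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            self.count += 1

def audit(site):
    print("sitemap URLs:", sitemap_url_count(site))
    print("robots.txt present:", fetch(site.rstrip("/") + "/robots.txt") is not None)
    home = fetch(site)
    if home is not None:
        counter = MetaCounter()
        counter.feed(home.decode("utf-8", "replace"))
        print("META tags on home page:", counter.count)

if __name__ == "__main__":
    audit("http://www.example.com/")  # the hypothetical site you are thinking of linking to

None of this replaces looking at the site yourself; it just flags the obvious amateur-hour stuff before you spend time on the manual checks.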

I’d be interested in hearing what you use to judge a linking partner. If you’ve got a list published, include it in the comments; all approved comments go non-nofollowed in 2 weeks!

Updates

JohnMu says, “My main tool for evaluating the value of a site before I link to it is the MSN linkfromdomain:-query. There’s nothing better — even if MSN only has 1/100th of the data that Google (or even Yahoo) has. It would be great to be able to check up on sites like that, eg: [linkfromdomain:othersite.com xxx] or better: [linkfromdomain:othersite.com seo]“

posted in Google, SEO, Webmastering | 1 Comment

6th June 2007

Other sites can hurt your ranking



Google still says that there is almost nothing a competitor can do to harm your ranking or have your site removed from their index. But what about a site that is not your competitor, one that you thought was your partner?

I’ll be using the poorly titled newest addition to the webmaster guidelines, “Why should I report paid links”, as a reference.

The first thing you read about your site being negatively impacted is where they clearly state that buying links is a violation of the webmaster guidelines and can result in penalties.

Buying links in order to improve a site’s ranking is in violation of Google’s webmaster guidelines and can negatively impact a site’s ranking in search results

Now this raises an interesting consideration. Assuming Google isn’t in your bank account, doesn’t have access to your credit card statements, and doesn’t review your tax returns, the only way they could divine that you’ve actually purchased a link is to conclude that a site linking to yours has sold that link. Previously we were told that those sites would lose their ability to pass PageRank, but the quoted paragraph above points to a much more proactive penalization of the linkee, not the linker.

Further down the page they expound a bit on what Google considers to be the correct way to buy links for traffic purposes only.

Not all paid links violate our guidelines. Buying and selling links is a normal part of the economy of the web when done for advertising purposes, and not for manipulation of search results. Links purchased for advertising should be designated as such. This can be done in several ways, such as:

* Adding a rel=”nofollow” attribute to the href tag
* Redirecting the links to an intermediate page that is blocked from search engines with a robots.txt file

The interaction of these two paragraphs represents a profound change from Google’s earlier stated stance that external sites can’t hurt you (almost).

They have now clearly stated that buying links can harm your site, and that if you do buy links, they should be constructed in a way that does not pass PageRank, such as with nofollow or through a redirect. Unfortunately for you, the link buyer, you have no control over how the webmaster you purchased your link from sets up her website.

Imagine a situation where you’ve done your due diligence and purchased a link for traffic from a site that nofollows all of its sold links. Three months go by and they decide to change their policy and remove all of the nofollows. You’re busy running your own website, you don’t have time to police the internet full time, and you don’t notice that your purchased link is no longer nofollowed. Google may have already tagged the linking site as a link seller [perhaps due to the abundance of nofollow!] and now sees your link, which is not properly designated as paid, and issues a penalty on your site.
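Since nobody has time to police the internet full time, the next best thing is a script that periodically re-checks the pages you bought links on. Here is a minimal sketch, not from the original post, that fetches each page, finds the links pointing back at your domain, and reports whether they still carry rel="nofollow". The page URL and domain are hypothetical placeholders.

import urllib.request
from html.parser import HTMLParser

class LinkAudit(HTMLParser):
    """Collect every <a href> pointing at `domain`, along with its rel attribute."""
    def __init__(self, domain):
        super().__init__()
        self.domain = domain
        self.findings = []  # list of (href, rel) pairs

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href") or ""
        if self.domain in href:
            self.findings.append((href, attrs.get("rel") or ""))

def check_page(page_url, my_domain):
    """Print whether each link from page_url back to my_domain is still nofollowed."""
    html = urllib.request.urlopen(page_url, timeout=10).read().decode("utf-8", "replace")
    parser = LinkAudit(my_domain)
    parser.feed(html)
    if not parser.findings:
        print(page_url, ": no link to", my_domain, "found at all; did it get pulled?")
    for href, rel in parser.findings:
        if "nofollow" in rel.lower():
            print(page_url, ": link to", href, "is still nofollowed")
        else:
            print(page_url, ": link to", href, "is NOT nofollowed; time to check on it")

if __name__ == "__main__":
    # Hypothetical examples: a page you bought a link on, and your own domain.
    for page in ["http://example-linkseller.com/sponsors.html"]:
        check_page(page, "yourdomain.com")

Run something like this weekly from cron and you’ll at least notice a seller’s policy change instead of finding out the hard way.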

We can’t have it both ways: either external sites can hurt you or they can’t, and either link buying can hurt you or it can’t; the two are not independent of each other.

I can foresee a sub-economy building out of this if it truly is the case: purchasing obviously paid links for your competitor on sites that don’t properly designate them as paid. On your site that sells links, offer a free one-time link to a non-indexed domain. You can prove to your new potential client that your site is deemed a link seller, because the newly purchased link should not get the new domain indexed. After that, charge a set rate to link to your client’s competitor without using nofollow, JavaScript, or a redirect. To expand the business even further, you could also add the option of letting the targeted site outbid the competitor to take the link down!

I’m hoping that this is just a case of sabre-rattling by Google and that the new paid links page simply hasn’t been thought through. As it is written now, it’s a complete policy shift from the stance that the link seller will have their ability to pass PageRank stripped. A simple change of subject in the two paragraphs above, from the link buyer to the link seller, would also solve this paradox, such as:

Selling links in order to manipulate a site’s ranking is in violation of Google’s webmaster guidelines and can negatively impact a site’s ranking in search results

and:

Not all paid links violate our guidelines. Buying and selling links is a normal part of the economy of the web when done for advertising purposes, and not for manipulation of search results. Links sold for advertising should be designated as such. This can be done in several ways, such as:

* Adding a rel=”nofollow” attribute to the href tag
* Redirecting the links to an intermediate page that is blocked from search engines with a robots.txt file

More discrepancies in the new webmaster guidelines to come soon…

posted in Google, Paid Links, SEO | 7 Comments
