JLH Design Blog: Webmastering, Google, and other stuff
17th February 2009

Beware of thinly veiled link requests


If you own a website you no doubt receive hundreds of link request emails, ranging from honest people to three-way exchanges, and even to outright deceptive scams.

I received the following email:

Hello,

Recently I visited your website http://www.jlh-design.com ; while visiting your site I noticed that you link to http://andybeard.eu at this address: http://www.jlh-design.com/2007/08/warnings-google-needs-to-incorporate/. As we are closely related to them, I would love to exchange links with your website, currently there are about 5,000 - 7,000 people per day that goto my site and search for information, Therefore I would to link to an excellent site like yours.

I have taken the liberty of adding your site to my home page: http://www.torontorealestatedirect.com to determine if it is of any benefit to you, if you have a stats program you can check it and let me know. By looking at my stats, it looks like today I have sent you 38 visitors but it may change by the time you receive this email.

Some website owners do not like when other sites link to them so I thought I might ask first. I think the information on your website could be useful to my visitors; and maybe you could receive some extra relevant traffic if you want. Please get back to me when you have a chance to let me know if its ok to link to your website like this.

Have a good week,

Melissa Thompson
——————————————————————————–

email: melissa.thompson@torontorealestatedirect.com
website: http://www.torontorealestatedirect.com
Ref: KNdNB

This email was sent to xxxxx, by melissa.thompson@torontorealestatedirect.com

| 108 Chestnut Street | Toronto | Ontario | Canada

Melissa seems like a nice enough person, wanting my permission to link to me and all… but let’s take a look at her offer a little further.

The first link in the email was actually to http://www.torontorealestatedirect.com/?pg=KNdNB

Note the parameter tagged onto the end.

Visiting that link, which wasn’t visible as the linked text in the email, will send your browser on the following little wild goose chase:

GET /?pg=KNdNB HTTP/1.1
Host: www.torontorealestatedirect.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.9.0.6) Gecko/2009011913 Firefox/3.0.6
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Cookie: estate=http%3A%2F%2Fwww.jlh-design.com

HTTP/1.x 302 Found
Date: Tue, 17 Feb 2009 23:55:51 GMT
Server: Apache/2.2.3 (CentOS)
X-Powered-By: PHP/5.1.6
X-Pingback: http://www.torontorealestatedirect.com/toronto/xmlrpc.php
Set-Cookie: estate=http%3A%2F%2Fwww.jlh-design.com; expires=Thu, 31-Dec-2015 07:00:00 GMT
Location: http://www.torontorealestatedirect.com
Content-Length: 0
Connection: close
Content-Type: text/html; charset=UTF-8
———————————————————-
http://www.torontorealestatedirect.com/

GET / HTTP/1.1
Host: www.torontorealestatedirect.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.9.0.6) Gecko/2009011913 Firefox/3.0.6
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Cookie: estate=http%3A%2F%2Fwww.jlh-design.com

HTTP/1.x 200 OK
Date: Tue, 17 Feb 2009 23:55:52 GMT
Server: Apache/2.2.3 (CentOS)
X-Powered-By: PHP/5.1.6
X-Pingback: http://www.torontorealestatedirect.com/toronto/xmlrpc.php
Set-Cookie: estate=http%3A%2F%2Fwww.jlh-design.com; expires=Thu, 31-Dec-2015 07:00:00 GMT
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8

What the cookie “estate” does (until it expires on 31-Dec-2015) is inject my domain into the page’s code:

<h2>Recommended Sites</h2>
<ul>
	<li><a title="Jlh-design" href="http://www.jlh-design.com">Jlh-design</a></li>
	<li><a title="Toronto Real Estate Board" href="http://www.torontorealestateboard.com/">Toronto Real Estate Board</a></li>
	<li><a title="Toronto Condos" href="http://www.toronto-condominium-homes.com/">Toronto Condos</a></li>
	<li><a title="Realtor" href="http://www.realtor.com/toronto/">Realtor</a></li>
</ul>
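
Piecing it together from the headers and that markup, the whole trick probably takes only a few lines of PHP. This is just my guess at how their script works (the lookup array, names, and output are made up, not their actual source), but it shows how cheap the con is:

<?php
// Hypothetical sketch of the scam (guessed from the headers above, not their real code).
// The ?pg= reference code from the email maps to the link target's URL:
$targets = array( 'KNdNB' => 'http://www.jlh-design.com' );

if ( isset( $_GET['pg'] ) && isset( $targets[ $_GET['pg'] ] ) ) {
    // Tag this visitor as a "partner" for years, then bounce them to the home page.
    setcookie( 'estate', $targets[ $_GET['pg'] ], mktime( 0, 0, 0, 12, 31, 2015 ) );
    header( 'Location: http://www.torontorealestatedirect.com' );
    exit;
}

// Later, in the home page template: only visitors carrying the cookie ever see their own link.
if ( isset( $_COOKIE['estate'] ) ) {
    $url = htmlspecialchars( $_COOKIE['estate'] );
    echo '<li><a href="' . $url . '">' . $url . '</a></li>';
}
?>

Everyone else who loads the home page, including Googlebot, never sees your “recommended” link at all.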

So whenever I’d visit the domain I’d see my fine link sitting there, thinking I got myself a sweet deal. This little spamming technique is just too crooked for me to let go, so I figured I’d warn any webmaster who happens across this and a link request from Melissa Thompson of torontorealestatedirect.com. I wonder how many of the 32,000 links are real?

posted in Webmastering | 12 Comments

10th December 2008

Rudy Arrested


By now you’ve heard of Illinois’ continued stellar record in politics with the arrest of their latest governor, Rod Blagojevich. While watching and reading the news I was almost mesmerized by that awful hair of his; I just knew I’d seen it before.

Rudy from the 1993 movie set in 1975 and Gov. Rod Blagojevich in 2007, with eerily similar hair.


posted in Personal | 0 Comments

4th December 2008

Webmaster Help Forum Googlers


With the retirement of the original Google Webmaster Help Group, my list of Helping Googlers now only points to archives of what Googlers have said and will not have any new information. With the introduction of the new Google Webmaster Help Forum, this list will include the active Googlers in the new forum.

These names were harvested from Reintroducing your English Webmaster Help Google Guides, suggested by an astute webmaster; if you are interested in what they haven’t found yet in Google, follow that link. The links I provide go to their Webmaster Help Forum profiles.

This list will be updated as new Googlers migrate to the new forum.

Note: With the new system the profiles only show “questions” asked by the individual and not any answers (which, for Googlers, is what matters most to us), but I am told that feature has been requested.

posted in GWHF, GWHG | 1 Comment

4th December 2008

New Google Webmaster Help Forum is now live


The new help format by Google has now been enabled for Webmaster Issues.

Located at: Webmasters Help

It’s been pre-seeded with frequently asked questions by Susan Moskwa already and appears to be open for business. No announcements yet on the fate of the old Google Webmaster Help Group.

For a list of participating Googlers see my constantly updated page of Google Webmaster Help Forum Googlers.

posted in GWHG | 0 Comments

3rd November 2008

Fresh New Googlers on GWHG


Update: 11/4/08 added Jayan

I’d like to introduce five (originally four) new Googlers to the Google Webmaster Help Group.

Jayan Blue G

Hey Guys,

My name is Jayan, and I work with Google’s Search Quality Team. My
personal blog is still a work in progress so I will definitely be
asking around for some help on that. I am looking forward to helping
around the group as much as possible and learning from you guys in the
process.

On the personal front: When not glued on to the internet, I spend most
of my waking hours catching up on movies and football games (Go
Arsenal!!). I am a self confessed Music Junkie/Gaming addict. I still
play Counter-Strike till my eyes actually hurt. I love reading,
traveling and indulge myself with adventure sports whenever I get the
opportunity. This was a recent skydiving video of mine -
http://www.youtube.com/watch?v=_mYgqhzrvyc . Oh, and did I mention I
work out of Google India’s Hyderabad office?

Cheers,
Jayan.

dLux Blue G

I am dLux, I work at Google Switzerland. I am a Googler in the Search
Quality Team.

My personal homepage is at www.dlux.hu, I like photography, especially
the beautiful mountains in Switzerland and sometimes I feel an urge to
sing, too. As you can see I originally came from Hungary, and I still
like that region very much.

I feel honoured to work in Google (it was my dream since I’ve finished
the university in 2002) and to help you guys who use our products!

Oliver Fisher Blue G

I’m Oliver Fisher, yet another Googler. I spend my time working on
Google’s anti-malware efforts (http://
googleonlinesecurity.blogspot.com/2008/10/malware-we-dont-need-no-
stinking.html).

I like long walks on the beach, romantic sunsets… Oh, wrong sort of
profile…

I work with Google’s Montreal engineering office but live in Ottawa.
Fortunately, many days I’m able to work from home - so No Pants Day
isn’t a special occasion for me. When not sitting in front of a
computer screen, I often sit in front of a blank wall practicing Zen.

For those already smitten, http://oliverfisher.blogspot.com is my
personal blog. Its PageRank is so low that I should be fired.

O.

Christopher W. Blue G

Hey gang,

I’m Chris from the Search Quality team, and I thought I’d stop by and
introduce myself before I start helping out in this group.

Most of my time online is in Google Reader, trying to keep up on new
music, politics, design, and, of course, webmasters. My site is mostly
a tumblelog-style collection of things I’m into, and hopefully a
showcase for some music and programming projects in the future.
Offline, I’m usually reading, seeing films, taking photos, playing
music, tinkering with something, or having adventures around San
Francisco.

Oh, and when I have a rare, spare moment from Search Quality, I work
on the Authors@Google team. You can check out our events on Youtube
( http://www.youtube.com/atgoogletalks ), if you’re curious.

I’m really looking forward to getting to know more of you, and helping
out with any issues you might be having.

See you around,
Chris

Adi Goradia Blue G

Hellooo Everyone,

I’m Adi a recent addition to the U2U as well as a member of the Search
Quality team here at Google.

Like my buddy Chris, I keep a close eye on new music and try to get
out to all the shows going on in San Francisco. Most recently I saw
the Notwist; if you have a chance, check out their song Gloomy Planets
(the song is much more lighthearted and uplifting than the title
suggests - http://www.youtube.com/watch?v=x2qKfzIpoQg ). Before this,
I studied computer arts and new media production, which ended up in a
lot of web projects — and that’s how I found myself here, working
with webmasters.

I’m excited to help out and I’m looking forward to learning some new
things from all of you.

- Adi

The timeless classic list of GWHG Googlers has been updated. This comes on the heels of the best month ever, with over 10,000 posts.

posted in GWHG | 0 Comments

24th September 2008

Hide those links


Reid Blue G from Google Webmaster Help Group fame, search quality fortune, and Google glory offered some more answers to more webmaster questions. You can watch the video for his answers.

Even More Webmaster Questions

He answered an interesting question that’s been tested several times by several people, but this is the first official mention I can recall.

….wanted to know if Google will follow links on a page using the “noindex” attribute in the “robots” meta tag. To answer this question, Googlebot will follow links on a page which uses the meta “noindex” tag, but that page will not appear in our search results…

What does that mean for you? Well, if you’ve got nosy competitors wandering around your link profile, as some like to do, you can still feed links to a site but keep that page out of the index and away from prying eyes (other than, of course, through navigation and other lesser search engines). It’s an old trick now but a good one to keep in the arsenal, especially for initial feeder links.
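
To make that concrete, a feeder page along these lines (the example.com URL is made up) stays out of the results while the links on it still get crawled and counted:

<html>
<head>
<!-- keep this page out of the index, but let Googlebot follow its links -->
<meta name="robots" content="noindex,follow" />
</head>
<body>
<a href="http://www.example.com/new-site/">The new site quietly being fed links</a>
</body>
</html>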

I originally hinted at this in my Don’t Use Robots.txt to Control Indexing post.

A follow-up question I’d have is whether pages that are blocked from being indexed still have to conform to the webmaster guidelines, or whether the site pays a price for having non-conforming pages that are not indexed. I’ll leave it to the reader to think of the loopholes that exist for either possible answer to that question.

posted in GWHG, SEO | 5 Comments

23rd September 2008

Dynamic Vs. Static URLs confusion


Nice URL.

What they said:

Google’s help document, “Creating a Google-friendly URL structure” currently says:

Consider organizing your content so that URLs are constructed logically and in a manner that is most intelligible to humans (when possible, readable words rather than long ID numbers). For example, if you’re searching for information about aviation, a URL like http://en.wikipedia.org/wiki/Aviation will help you decide whether to click that link. A URL like http://www.example.com/index.php?id_sezione=360&sid=3a5ebc944f41daa6f849f730f1, is much less appealing to users.

Overly complex URLs, especially those containing multiple parameters, can cause problems for crawlers by creating unnecessarily high numbers of URLs that point to identical or similar content on your site. As a result, Googlebot may consume much more bandwidth than necessary, or may be unable to completely index all the content on your site.

They also say on their “Dynamic Pages” help article:

If you’re concerned that your dynamically generated pages are being ignored, you may want to consider creating static copies of these pages for our crawler

What they do:

The articles above are found at the URLs:
http://www.google.com/support/webmasters/bin/answer.py?answer=76329&t [screenshot]
http://www.google.com/support/webmasters/bin/answer.py?answer=34431&ctx=sibling [screenshot]

I don’t know about you, as you’re probably smarter than me, but intuitively “76329” does not mean Google-friendly URLs, and “34431” doesn’t scream click me for information on Dynamic URLs.

What they say now:

In their latest blog post “Dynamic URLs vs. static URLs” they have taken a different position.

Providing search engines with dynamic URLs should be favored over hiding parameters to make them look static.

One recommendation is to avoid reformatting a dynamic URL to make it look static

I don’t know what to think now. I don’t want to rip an author, as my own blog tagline is “Terrible writing and mere conjecture”, but this blog post looks like both. It appears they are trying to help people who cannot figure out URL rewriting by telling them not to worry about it, but it is written so obtusely that anyone who cannot rewrite URLs surely isn’t going to understand that article. The fact that it contradicts all previous documentation only further confuses me.

I think I’ll wait for this shit storm to settle out, but for now I am going to abide by the old axiom of designing your site for users and not search engines. As a user, I am much more likely to understand what
http://www.google.com/support/webmasters/dynamic-pages/
is about than
http://www.google.com/support/webmasters/bin/answer.py?answer=34431&ctx=sibling
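
For anyone unsure what rewriting a URL actually means here, it’s the kind of one-line mapping you’d drop into an Apache .htaccess file so a readable address quietly serves the same dynamic script. A hypothetical sketch, borrowing the parameter from Google’s own example above:

RewriteEngine On
# Serve the readable URL from the existing dynamic script; the visitor never sees the query string.
RewriteRule ^articles/aviation/?$ /index.php?id_sezione=360 [L,QSA]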

Since Google cannot figure out that a page which lists every article on the site is indeed a sitemap, I cannot believe that they can figure out how to handle session IDs and numeric references to pages either.

posted in Google | 2 Comments

3rd September 2008

Twitter Reciprocity


I’m sure the millions of readers heeded my warning about Twitter caving to mattcuttsean-like pressures and nofollowing everything on your profile back on 7/22/08. So this is no surprise to you, but Twitter has finally pulled the plug on that loophole.

The web-educated amongst you will add twitter.com to your well-maintained nofollow reciprocity list in your plug-ins, I’m sure.

As pointed out in my original post, I still find it incredibly stunning that @mattcutts offers @ev advice on furthering the nofollow carnage while ignoring the actually helpful advice that would #1) decrease their server load, and #2) decrease Google’s own crawler load.

I guess we’ll see who Twitter is more interested in pleasing, judging by which gets fixed first: its users, with a simple URL canonicalization fix that reduces server load, or Google, with their cure-all rel=”nofollow”.

I’m not sure if it’s because lcase() is so hard for them to implement or that bowing to Google’s pressure is more important for the eventual buyout price, but their problems persist, now with the added benefit of HTTPS versions! Nice.

As always, and of course, follow me on Twitter; I’ll follow you back if you #1) update regularly and #2) don’t use it primarily as a bastardized IM service with too many ‘@’ twits.

posted in Google, Matt Cutts, Webmastering | 1 Comment

2nd September 2008

Google Chrome


I’d lose my Google fanboy status if I didn’t mention it. It’s a web browser. Made by Google. But apparently with less flexibility than the current 900 other browsers available, just more Googlier and faster.

This announcement, coupled with the recent Google Knol project (their take on Wikipedia), makes me giddy with excitement for the next Google innovation: Google Wheel Beta. This will be a much more Googlier wheel, available in only red or yellow, and of course roundier.

Which will be followed up with the much rumored Google Ten Piece Hammer (pictures not available at time of publication)

posted in Google | 2 Comments

21st August 2008

Sitemap


This post is for Google. It’s not really meant to be read by humans, but you can if you’d like. With the new, more useful 404s from Google, they promise such things as:

In addition to attempting to correct the URL, the 404 widget also suggests the following, if available:

  • a link to the parent subdirectory
  • a sitemap webpage
  • site search query suggestions and search box

I haven’t seen them offer the sitemap page yet. Perhaps it’s because I was clever and named it articles and not sitemap, or because they hate it since it’s graybarred even though it’s linked to on every page of the site. Either way, Googlebot, the sitemap is located at http://www.jlh-design.com/articles/. Seeing that it’s just a giant list of all the posts and pages on the site, I would think it would have been quite transparent that though the page is named articles, it’s actually a SITEMAP.

posted in Google | 0 Comments

21st August 2008

Giving Yahoo some love



This blog is quite Google-centric, but every once in a while I need to give Yahoo! some love. Well, today is the day. See how the word Yahoo! was highlighted in that first sentence? (Whoops, I did it again.) I didn’t need to do anything but type it; a plug-in took over and stylized it for me. It will do that automatically throughout the blog, in all posts and comments.
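
If you’re wondering how a plug-in like that works, under the hood it’s just a WordPress content filter. Here’s a rough sketch of the idea (not the plug-in’s actual source):

<?php
// Sketch of the idea behind a highlighter plug-in (not the real Yahoo! Highlighter code):
// wrap every occurrence of "Yahoo!" in a span the stylesheet can dress up.
function yahoo_highlight( $text ) {
    return str_replace( 'Yahoo!', '<span class="yahoo">Yahoo!</span>', $text );
}
add_filter( 'the_content', 'yahoo_highlight' );
add_filter( 'comment_text', 'yahoo_highlight' );
?>

The real plug-in presumably ships the matching CSS, but the filter is the whole trick.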

Yahoo! Highlighter Download

posted in Plug-ins | 0 Comments

1st August 2008

Don’t use Robots.txt to control indexing


It seems a day doesn’t go by in GWHG without someone concerned that some page they blocked in their robots.txt file is showing up in Google. Google’s handling of robots.txt is quite elaborate, well documented, and easily tested. Having said all of that, many do not fully understand the intent of robots.txt and the opportunity to use it for optimization of a web site.

Any discussion of robots.txt cannot be complete without the caveat that only GOOD robots follow it and it’s a very public file, so don’t expect it to keep out rogue bots or to serve as a security measure to keep stuff hidden. That being said, I’d like to talk about an obedient bot, Googlebot.

As elaborate or simple as your robots.txt may be, it accomplishes one thing: it directs the crawler where it can and cannot go, explicitly by disallowing some pages/folders or indirectly by only allowing certain pages and blocking others. Stopping the crawler from crawling a page should not be confused with giving it direction on what to do with that page. As a matter of fact, Google will indeed index URLs that are explicitly blocked by the robots.txt file. Since they cannot crawl them they really don’t know what’s on the page, so the URL will often be listed as URL-only, without a title or description (snippet). Sometimes, if they can find the information elsewhere, like the ODP, they’ll use that to help fill in the blanks.

I don’t know exactly what threshold exists for the decision to include a URL that’s blocked by robots.txt, but I’d imagine, as with anything Google, it has something to do with the quantity and quality of links pointing to it. That being said, and as anyone who’s trying to rank something in Google knows, those links are gold and not to be taken too lightly. Most honest-to-goodness real links start out in someone’s browser bar. They’ve navigated to a page and found it interesting enough to tell others about it by cutting-n-pasting the URL into some sort of HTML somewhere. It would be a crying shame if Google were to follow that link only to be blocked by a robots.txt and not be able to transfer any value to the site, other than to list the URL as URL-only in the search results, which will more than likely only ever be shown for a search on the anchor text, which may actually only be “click here“.

Say Matt Cutts really wants to rip into me with one of his famous debunking posts. In part of his article he really wants to show how often I speak of Google on this blog. To emphasize that fact he may link to an internal site search page like http://www.jlh-design.com/?s=google, which will find all the posts here that use the word Google. Being a good webmaster, I don’t want Google to return my search results in their search results, as we’ve been warned not to.

I could block all search results from being crawled in my robots.txt with something like this:

User-agent: *
Disallow: /?s=*

That will keep Google from crawling that URL. However, a link from Matt Cutts is prized and rare, so I may want to take advantage of it when it does come around.

The better option is to allow the URL to be crawled but stop Google from indexing it via a robots meta tag.

<meta name="robots" content="noindex,follow,noodp,noydir" />

The page that Matt linked to does contain all of my site’s navigation pointing to previous posts, the home page, categories, etc., that I’d like indexed and ranked. Allowing Google to crawl the page and follow the links while stopping it from being indexed accomplishes the goal of keeping it out of the index while still passing value to the site as a whole.
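
In WordPress terms, the easiest way I know to do this is a conditional in the theme’s header.php; a rough sketch (your theme or SEO plug-in may already handle it):

<?php
// In the theme's <head> (header.php): keep internal search results out of the index,
// but let Googlebot follow the navigation and post links on those pages.
if ( is_search() ) {
    echo '<meta name="robots" content="noindex,follow,noodp,noydir" />' . "\n";
}
?>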

For a fine example of this in the wild, let’s take the renowned SEO site SEOmoz, which has this in its robots.txt file.

User-agent: *
Disallow: /ugc/category/

Yet Google has 28 URL-only pages indexed currently. (screenshot)

So remember that robots.txt doesn’t stop a page from being indexed; it does, however, stop the page from passing any value to your site if Google can’t crawl it. Using the robots noindex meta tag will control indexing but allow crawling for discovery of other links on the page.

posted in SEO, Webmastering | 2 Comments

31st July 2008

Publish or Perish


Publish or perish is a term used in academia to describe the notion that one must publish on a consistent basis to sustain a career and prestige within one’s institution and among one’s colleagues. The concept was never more apparent than tonight, during a monthly review of this site’s statistics. I offer you this small snapshot:

Screen shot of awstats for JLH-Design.com

The trend is not what you’d like to see in the normal development of a site. Notice that uniques, visits, and pages are all down by at least 50% this month compared to last month, this after a reasonably steady natural growth rate.

Upon further inspection, the search engine traffic is right where it normally is and readers per post are within the normal range, but the large disparity is in the “other sites” category, normally (normal for me, anyway) the largest source of traffic: other sites, type-ins, bookmarks, social media, etc.

Admittedly, posting and quality of content have been down lately as other pressing needs and sites have become more important than this small blog, but the trend is an important lesson in web publishing. If you (by ‘you’ I mean I) are not putting forth the effort to publish new and compelling material, you’re also not spending enough time on promotion of the material. What can be more of an example than a loss of nearly 100,000 pageviews in a single month? Blog-type formats may be more susceptible to this, as the content tends to be timely in nature and relies much less on search engines supplying the visitors than normal information or commerce sites do.

In all the ongoing discussion of Google search results, links, optimisation, etc., I think what’s often lost in the discourse is the less-than-concrete concept of passion. When I write or publish something that I’m excited about, I get passionate about it and I want to share that passion with other readers. Saying something you believe in isn’t enough; you want others to hear it. Given the flakiness and uncontrollable nature of search referrals, I tend to promote ideas I’m passionate about through other means, and that can be seen in the site’s stats. It’s not all about Google when it comes to a site’s readership, involvement, and ultimately conversion; it’s about engaging the audience and bringing them to the site first.

I half expect to see next month’s search referrals down as well. With 100,000 fewer pageviews this month, that’s 100,000 fewer chances for someone to be inspired to provide a link and share the information. Negative link acceleration on a site can be the death knell for it in the natural rankings, and those tend to lag reality by a few weeks. It should be noted that the lack of publishing really started (or stopped, as the case may be) in June and continued in July; only now can the fallout be seen and graphed.

I’m not making any promises about being more engaging on this site in the near future, but I have made a mental note of the effects of passionate involvement and hope to further cultivate that in other projects.

posted in Site News, Webmastering | 1 Comment

28th July 2008

Cuil


Cuil.com debuted its search engine. I’m not going to declare it the next Google killer or a total failure based on one day’s results; however, I did find this interesting.

If you visit their section for webmasters they have:

If you would like Cuil to crawl your site and have it included in our index, please let us know

Where the “please let us know” is a link to an actual email address.

I doubt many remember this, but Google used to actually use email back when they were young (and not billionaires).

Before I am too quick to judge Cuil’s capabilities, I’ll keep in mind Google’s once humble beginnings.

posted in search | 1 Comment

24th July 2008

I have arrived!


I’ve been flattered with interviews, received recognition on Google’s webmaster blog, mentioned on industry leading sites like Search Engine Roundtable and Search Engine Land, linked to by Matt Cutts, and even made the BigList.

But today I have arrived. My fame is now official.

I have an insane cyber-stalker.

Please, I beg you, please, go read my #1 fan’s site by John H. Gohde (screenshot). Apparently somewhere he got the impression that I was an SEO. Okay, so he doesn’t have his facts straight, but it makes for good comedy. He spends most of his day searching for [JLH] on Google to see where I rank. I never knew of my desire to rank for JLH until this very moment, when I was trying to follow his posts.

I don’t know this guy from Adam, other than that he was one of the very few banned from Google Groups for being abusive to people. I see he’s on Sphinn now; I expect the mods there will have their hands full once he settles in and starts rambling and attacking people.

If they’re shooting at you, you know you’re doing something right. (The West Wing - the Midterms)

I can’t think of anyone in the online world I’d rather have not like me than someone who manages to get himself banned from both Google Groups and Wikipedia. That puts me in some good company.

posted in SEO | 12 Comments
