
Keyword Stuffing

Earlier today, I was working on an article for another site and needed some anecdotal statistics, which I obtained by taking a look at the computer services section on craigslist.  I noticed that a lot of the website designers and so-called SEO experts advertising there are still trying to get away with keyword stuffing.

Keyword stuffing is a spam method that was employed in the 90s and the first half of the past decade; unethical web designers would fill their keyword meta tags with a huge number of unrelated keywords in an attempt to get their page to show up at the top of the search results; this is why many search engines no longer use that tag. Another common technique was to simply put a big list of keywords at the bottom of the page, possibly using cloaking techniques to make them visible to search engines but not to the users.

One thing many people are not aware of is that Google actually does employ human beings to check websites that are flagged by its software and see whether they’re legitimate or spam; as you can imagine, sites that are marked as spam don’t have much luck getting Google traffic after that! Keyword stuffing is one of the main things that these people are instructed to look for.

So noticing that a number of the craigslist posts had keyword stuffing in them, I checked out the webpages they were linking to and found the same thing: keyword stuffing in the meta keyword tag, in the body of the page, or both. Shouldn’t a “professional web designer” know better than to use SEO techniques that Google has been penalizing for half a decade?

Now, don’t take this to mean that you shouldn’t use relevant terms on your own site! This post contains a number of terms that we’d probably like to rank for – professional web designer, SEO, etc. But notice that they’re not in a list at the bottom of the post and they’re not repeated over and over; rather, they’re being used naturally. If a Google engineer were to come check out this page, it would get an easy “not spam” classification, because everything on it is legitimately there for the purpose of providing useful information to the reader. We also use tags on many posts; if you look down a bit, you’ll see that this post is tagged with several terms to help Google (and our on-site search) tell what this page is about. The difference is largely in scale, as well as intent: we’ve added a small number of terms relevant to this page, rather than pulling out the thesaurus and emptying it out!

Cross-Browser Compatibility

One thing that causes a lot of headaches for web developers is getting sites to look good in every browser.

Notice that I did not say getting a site to look the same in every browser. While it’s possible to get a site to look more or less the same, there are always going to be minor differences, and it’s generally more trouble than it’s worth. As long as the site looks good and has consistent branding, there’s no reason it needs to look identical on every computer.

Once you realize that, web design becomes much less of a hassle. One method is to build out a basic site that will work in every browser, then add flourishes that will improve the viewing experience for people with modern browsers, without taking anything away from those who are still using older browsers.

HTML5 and CSS3 offer some great examples. Suppose you want to have some really nice-looking buttons. You can put in rounded corners and add a background gradient to make the buttons stand out. Visitors running Internet Explorer won’t see these effects; they’ll just get the regular old buttons, so to them it’s the same as if you hadn’t made any changes at all, while other visitors get an enhanced browsing experience.
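As a rough sketch of what that might look like (the class name and colors here are invented for illustration; the vendor-prefixed gradient syntax is what Firefox and WebKit-based browsers use):

```css
/* Hypothetical button style: a plain, working button everywhere,
   with CSS3 enhancements layered on top. */
.fancy-button {
    /* Solid fallback color that every browser understands */
    background-color: #3a7bd5;
    color: #fff;
    border: 1px solid #2a5db0;
    padding: 6px 12px;

    /* CSS3: rounded corners; older IE simply ignores these */
    -moz-border-radius: 6px;
    -webkit-border-radius: 6px;
    border-radius: 6px;

    /* CSS3: background gradient, again ignored where unsupported */
    background-image: -moz-linear-gradient(top, #5a9be5, #3a7bd5);
    background-image: -webkit-gradient(linear, left top, left bottom,
                                       from(#5a9be5), to(#3a7bd5));
}
```

Browsers that don’t understand a declaration just skip it, so older IE falls back to the solid background color and square corners.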

Or consider HTML forms. The new input types in HTML5 degrade gracefully; any browser that doesn’t know how to handle them simply treats them as text inputs instead. As a result, older browsers get the same experience they always have, but more modern browsers get a better presentation.

In short: design your sites to look good on every major browser. Then, if you like, use modern tools such as HTML5 and CSS3 so that, on modern browsers, the sites look even better.

Pagerank, Relevance, and the SERPs

We’ve previously talked about why pagerank doesn’t matter, but a number of people still attach a great deal of importance to it. It’s easy to see their point: plug pretty much any search term with any competition into Google, and the top results are likely to be mostly high-PR sites.

What many people don’t realize is that the high PR is a symptom of what’s making the page rank well, not a cause.

What does that mean? Well, suppose I have a website talking about topic X. This is a really good site, so I get a lot of links to it, many of which will have anchor text that says “X”, “page about X”, etc. All of these links are telling Google that my page is about X; additionally, each one is also a vote for the page. Thus, when somebody searches for X, Google sees that my page is both relevant and popular, and thus returns it close to the top of the search results.

However, all these pages that are linking to me are also passing on link juice, especially if some of them have a high page rank. As a result, the PR on my page goes up!

Just as one example, go to Google and search for the term pagerank doesn’t matter. I did that just now and looked at the top five results. (Granted, this isn’t a particularly competitive term.) Three are PR0 (including the page on this site, which in spite of being fairly new comes up second), one is PR3, and one is PR5. The PR5 page is in 5th place, behind the three PR0 pages! Interestingly enough, that PR5 site actually belongs to Matt Cutts (who heads the Google webspam team). Why is our page beating his? Because even though his page has better PageRank (due largely to being on a PR7 site), it’s talking about PageRank sculpting; our page is more relevant to the search term.

Would that page show up in the top five if it didn’t have PR5? Probably not; the PR does elevate the page’s importance over other pages with similar relevance but less link juice. This is, however, a good example of how relevance trumps pagerank.

Page Flow, Inline and Block Elements, and Relative Positioning

The HTML flow model is pretty simple: every element goes in a box, and the web browser stacks up those boxes when you view the page. Block boxes, which are generated by tags such as <p> and <h1>, get stacked on top of each other, while inline elements stay (surprise, surprise) in a line (unless the line reaches the edge of the container, in which case it wraps to the next line). Easy enough, right? Although elements have a default type (block or inline), you can change this in your CSS with the display property. You can also set display: none, which keeps the element from being displayed on the page at all. All of these elements, whether block or inline, are considered to have static positioning.
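For instance, here’s a quick sketch of overriding those defaults in a style sheet (the selectors are hypothetical):

```css
/* Make list items flow in a line, e.g. for a horizontal menu */
#menu li { display: inline; }

/* Make a span stack on its own line, like a block element */
span.standalone { display: block; }

/* Remove an element from the page display entirely */
.hidden { display: none; }
```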

Relative positioning works a bit differently: it does not remove the element from the flow. Your browser lays out the page as if the relatively positioned element were still where it’s “supposed” to be (its space in the flow stays reserved), but you can then shift the element away from that spot; this may result in it covering up other elements. Absolutely positioned elements, on the other hand, really are removed from the flow; as previously discussed, they’re placed relative to the closest positioned ancestor (if there is none, they’re placed relative to the page itself, effectively the html element). While other elements behave as if relatively positioned elements are still at their normal location, they act as if absolutely positioned elements do not exist. Positioned elements also participate in stacking, so they appear above or below anything they overlap (even other positioned elements); by default, elements that appear later in the file are painted on top, but you can override the stacking order using the z-index property.
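A minimal sketch of both kinds of positioning (class names invented for illustration):

```css
/* Relatively positioned: its original space stays reserved in the
   flow, but the box itself is drawn 10px down and 20px right */
.nudged {
    position: relative;
    top: 10px;
    left: 20px;
}

/* Absolutely positioned: removed from the flow and placed in the
   top-right corner of its closest positioned ancestor */
.badge {
    position: absolute;
    top: 0;
    right: 0;
    z-index: 2;  /* drawn above anything with a lower stack level */
}
```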

Tip: Ever end up with an unclickable link on your webpage? This may be caused by a transparent, absolutely positioned element which is covering up the link. IE7 and below have a bug that allows you to click through the above element, so you’ll actually see the behavior you want (a clickable link) in IE6 and IE7, but not in modern browsers.

CSS Programming and IE6, Part 2: The Box Model

Let’s start with a quick review of the CSS box model.

Suppose I have a div. From the outside in, I first have the margin, which is the distance from the surrounding elements. Then there’s a border. Inside the border is padding, and inside that is the content.

In a modern browser, the total width that the div takes up on the screen (from the border in) will be the combined widths of the content, padding, and border. For example, a div with a width of 80px, a 3px border, and 7px of padding will take up 100 pixels (80 + 3 + 3 + 7 + 7), as specified by the CSS standard. In IE6 (when it’s rendering in quirks mode, as it does when the page lacks a proper doctype), it will take up only 80 pixels, as Internet Explorer considers the border and padding to be part of the specified width. Accordingly, if you use the same style sheet for all browsers, your divs are likely to look much thinner when viewed with IE6.
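In CSS, the example above would be written like this:

```css
/* Standards-compliant browsers: on-screen width (border to border)
   = 80 + 7 + 7 + 3 + 3 = 100px.
   IE6's broken box model: everything fits inside the 80px. */
div.example {
    width: 80px;
    padding: 7px;
    border: 3px solid #000;
}
```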

The other major problem related to the box model is known as the double margin bug; when a box is floated against the edge of the containing div, IE6 will double the margin on that edge.  There are several hacks to fix this; without getting into why they work, just know that you can get rid of the error by adding one of two statements:

display: inline;
zoom: 1;

Neither should have any effect on other browsers; in fact, because zoom is a Microsoft-specific thing (nothing other than IE recognizes it), non-Microsoft browsers will ignore it completely (although it will make your style sheet fail a CSS standards check).
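For example, here’s how you might protect a hypothetical floated sidebar from the double margin bug:

```css
#sidebar {
    float: left;
    margin-left: 10px;   /* IE6 would render this as 20px... */
    display: inline;     /* ...without this fix; other browsers
                            ignore display: inline on floats */
}
```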

Meet Our Newest Staff Member

William Springer
Web Consultant

Since teaching himself to program at age eight, William has never been far from computers. After completing his bachelor’s and master’s degrees in computer science at the University of Colorado, William taught for two years before attending Colorado State University for his PhD.

After a number of years in academia, William was anxious to get out and do something; he decided to pursue his long-time interest in web design and joined his wife Brit at One Ear Productions, where he is currently specializing in CSS and SEO among other internet interests.

In his spare time, William enjoys playing heavy economic board games and reading science fiction. For the last few years he has also been very interested in digital photography, and is currently studying high dynamic range photography.

Partnership with Wild Critter Media

Moonlight Designs Studio is proud to announce its partnership with Wild Critter Media. Like us, Wild Critter Media provides the best solutions for its clients. We are excited to form this partnership, as we have been working together for the past few months, testing the waters. What makes it easy is that we have the same goals and ideas for our clients. We look forward to building a lifelong partnership and working on projects together.

If you would like us to provide a quote on your project, please contact us!

HTML5, Part V: Forms

When it comes to forms, one thing I hear a lot about is setting them up using PHP; HTML has basic form capability, but don’t you want them to do more?

With HTML5, forms can indeed do more, without the need for any fancy scripting. The best part is, all of the new  features degrade gracefully, which means you can use them now without spending a lot of time worrying about what they’ll do in legacy browsers.

Let’s start with the basic elements of an HTML form, which are the same in HTML 4 and 5:

<form>

<input name="name" type="type" value="value">

</form>

That’s all there is to it! Let’s take a look at each of these pieces. The name attribute identifies the input, which lets you refer to it later; this is particularly important when you’re using it to pass information to the next page. The type attribute tells the browser what type of input to use, such as a button or dropdown. Finally, the value attribute holds the default value; for a button, this would be the displayed text. Each of these attributes is optional.

The nice thing is that when a browser doesn’t understand the given type, or the type is not given, it simply displays it as a text field rather than giving unpredictable behavior. This means we can use new HTML5 input types, knowing that people with older browsers will still have a way to provide input.

How about an example? Suppose I want you to choose an even number between 8 and 22, with a default of 12. I can do that using the following code:

<form> <input type="number" min="8" max="22" step="2" value="12"> </form>

In an HTML5-compliant browser such as Opera, this shows up as a spinbox that allows only the even numbers between 8 and 22, and the form will refuse to submit if the user enters an illegal value. In older browsers, it shows as a simple text field with no built-in validation, so anything the user types will be submitted. (Changing the type to range produces a slider instead.)


So why use these special types instead of just having the user type into a plain text box? For one thing, the input can be optimized in various ways. If you’re viewing this page on an iPhone, you won’t get a spinbox, but your keyboard will default to numbers. A search field (another new type) may be functionally the same as a text box, but if you’re using Safari on a Mac, it’ll have rounded corners to look like the standard Mac search boxes; on both Safari and Chrome, once you start typing, a little x will appear to erase the field. While browsers don’t all handle these types in the same way, telling them what kind of data is expected allows for more appropriate data entry.
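As a quick sketch, a search field might look like this (the form’s action URL is made up for illustration):

```html
<form action="/search">
  <!-- HTML5 browsers may style this like a native search box;
       everything else falls back to a plain text field -->
  <input type="search" name="q">
  <input type="submit" value="Search">
</form>
```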

Aside from new types of input, you can also do new things with the old types. For example, one thing that drives me crazy is scripts that autofocus on an input box; when I check my email on the web I tend to have finished typing my username and be halfway through my password when the page finishes loading and moves the focus back to the username field. Now, instead of using javascript, we can stick with HTML and accomplish the same thing in a less annoying (and more consistent) way. Consider the following code:

<form> <input name="tellme" autofocus placeholder="Placeholder Text"> </form>

Depending on which browser you’re using, both, one, or none of the new attributes will take effect; whichever ones aren’t supported will simply be ignored. Autofocus (predictably) sets the focus to this field, while the placeholder text appears in light grey to tell you what you’re supposed to type.

Again, the nice thing is that this degrades gracefully, so it provides a better user experience for people with compatible browsers without annoying those who don’t see it. (Of course, you may want to detect compatibility and use javascript to provide the same functionality so that more people can see your site as designed.) Note that in the above example, we used autofocus and placeholder text together, which is kind of silly because putting focus on that field removes the placeholder text; it will come back if you click on something else, though, and remind you what needs to be typed there! Also, the autofocus brings this box into view as soon as you load the page, which means you have to scroll back up to get to the top of the article; not particularly good design, but it does demonstrate the power of this attribute.

SEO, Part V: Pagerank Doesn’t Matter

Wait, how can I tell you that pagerank doesn’t matter? Wasn’t I just talking about the importance of getting links from high-ranking sites? Didn’t I say that pagerank is a measure of how important your site is?

The above is all true. You do need to get links from high-ranking sites, and Google does use PR as a measure of usefulness. What it doesn’t tell you, however, is how relevant the page is. Let me explain.

Suppose you do a search for “how to shampoo the dog”. What would be more useful to you: a PR0 site with detailed dog-washing instructions, or a PR6 site selling dog shampoo? Obviously the former is a lot more useful to you, because it’s relevant to your query.


Similarly, while high pagerank is nice to have, your users don’t really care about it; they just want your site to give them what they need. For your part, you just want your site to come up at the top of the SERPs (Search Engine Results Pages) so your customers can find you. Thus, you want to get links from high-ranking, relevant sites because they count as a strong vote that your page is useful... provided they’re done right.

Next question: suppose you’re the owner of that webpage on how to shampoo a dog. Which would be more useful to you: a link from a PR2 page using the anchor text (that’s the words you click on) “how to shampoo your dog” or one from a PR6 page with the anchor text “click here”? Again, the former is more helpful; while the latter is better for increasing your pagerank, the anchor text only helps you to rank well for the term “click here” (which Adobe dominates), while the former tells Google what your site is about and helps them to return relevant results.

So do you care about getting a high PR on your sites? Well, yes – more pagerank is always preferable to less pagerank, and having a high-ranking site also speaks to the usability of new pages on that site that don’t yet have their own backlinks. Just remember: relevance trumps numbers!

In fact, Google has explicitly said that PageRank is just one of more than 200 signals they use to index and rank pages. Remember, what they’re trying to do is find relevant, high-quality content; everything else is just a means to that end.

Building a Blog, Part VI: The Importance of Permalinks

If you’re reading this from the main site page, go ahead and click through to the article. Now look up at the url for this page. What do you see? The link looks something like this:

http://blog.oneearproductions.com/2010/07/building-a-blog-part-vi-the-importance-of-permalinks

Now, if we were using the default WordPress settings, it would actually look like this:

http://blog.oneearproductions.com/?p=316

Which version tells Google what the page is about? Bingo! You always want to use the first type of link because (assuming you picked a good post title), it tells people and search engines what your post is about.


Doing this in WordPress is easy; just go to the Settings dropdown and click Permalinks. You’ll see a half-dozen options; we’re using year, month, and post title, but anything works as long as it includes the title. If you want to make your own custom style, use the last option; just be sure to include %postname% in your url template.
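For reference, a custom structure matching the year/month/title scheme we’re using would be entered in the Custom Structure box like this:

```text
/%year%/%monthnum%/%postname%/
```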

How does this affect your SEO results? Suppose Google is indexing this page (which will happen about two minutes after I hit “publish”). First it reads the url and sees the keywords blog and permalinks. Then it reads the page and sees those same words repeated again. Bingo: Google concludes that this webpage (that is, this post) must be related to those terms, and this is a relevant result for people who want to read more about them!

If I were trying to make this page show up in the first SERP, I’d start by making the page search-engine friendly (as well as user-friendly); a relevant url is one way to do that. After that, I’d get started building links to this page from relevant websites. Notice that the link in that last sentence uses anchor text that includes the keyword (links) for the page it’s linking to; that page contains the same keyword in the url and title (and, of course, in the body of the post). At this point, Google has a really good idea what that page is about!

Remember, the whole point of search engines is to help users find the most relevant results; accurately describing the content of your pages helps them match the correct page with the correct user. While some people try to abuse the system to get as many visitors as possible to worthless spam pages in the hopes of getting advertising money, when running a legitimate site you want to attract exactly those users that are looking for the content you can provide. Why waste bandwidth (and people’s time) on users who don’t want what you’re offering? Google, of course, is always on the lookout for spam sites (and has human evaluators, as well as software, to help find them), and will happily remove them from the search results. Stick with white hat SEO; don’t let delisting happen to you!