Category: Community Driven

Meet Our Newest Staff Member

William Springer
Web Consultant

Since teaching himself to program at age eight, William has never been far from computers. After completing his bachelor’s and master’s degrees in computer science at the University of Colorado, William taught for two years before attending Colorado State University for his PhD.

After a number of years in academia, William was eager to get out and do something; he decided to pursue his long-time interest in web design and joined his wife Brit at One Ear Productions, where he currently specializes in CSS and SEO, among other internet interests.

In his spare time, William enjoys playing heavy economic board games and reading science fiction. For the last few years he has also been very interested in digital photography, and is currently studying high dynamic range photography.

Partnership with Wild Critter Media

Moonlight Designs Studio is proud to announce its partnership with Wild Critter Media. Like us, Wild Critter Media provides the best solutions for its clients. We are excited to formalize this partnership after working together for the past few months to test the waters; what makes it easy is that we share the same goals and ideas for our clients. We look forward to building a lifelong partnership and working on projects together.

If you would like a quote on your project, please contact us!

Cascading Style Sheets

In the past, I haven't done a lot of CSS work; my interests lie more in the areas of SEO and content. We needed a WordPress programmer, though, and before I could start programming WordPress themes, I needed to brush up on my CSS and PHP. Fortunately, I had a decent book sitting around: O'Reilly's CSS: The Missing Manual. After a few hours of reading the book and playing with the CSS in some WordPress themes, it all makes sense; my only hesitation in recommending the book is that, because it was released last year, it contains fewer than a hundred pages on CSS3.

I've probably mentioned this before, but if you're not familiar with what Cascading Style Sheets can do, you need to run, not walk, over to the CSS Zen Garden and play with it a bit. Good practice for the web these days, and the principle HTML5 is built on, is the separation of style from content: HTML5 holds the content, while CSS tells the browser how to display it. What's really nice about this is that if your website uses an external style sheet, you can change the look and feel of every page at once (drastically, if desired) simply by editing one file.
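
For example, pointing a page at an external style sheet takes a single line in the document's head (style.css here is just a typical name; WordPress themes use it by convention):

<head>
<link rel="stylesheet" type="text/css" href="style.css">
</head>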

CSS is really pretty simple. Suppose that I wanted to change how the text in each of these posts is displayed. Let's say that, for some reason, I wanted the text to appear in bright orange. In WordPress, the posts are contained in the following tags: <div id="content"></div>. Thus, I would simply add the following code to my style.css file:

#content {
color: #FF7F00;
background: #FFFFFF url('images/background.jpg') no-repeat;
}

So what does all that mean? The hash mark before the name of the div means that this is an ID, which should appear at most once on each page; something that can appear multiple times should be a class instead, and is indicated with a period. In this case, #content is called a selector because it selects this particular part of the HTML. Everything between the curly braces is CSS that will apply to the contents of the content div; inside the braces I can have any number of declarations, and here I have just two. Each declaration is a property followed by a colon and then a value; some properties can take multiple values. For example, here I told the browser that it should render the content area with a white background, use the file background.jpg in the images directory as a background image, and that the image should not repeat. If I later decide to use a less hideous color combination, all I have to do is edit this one file.
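
For contrast, here's what a class version might look like; the class name highlight is made up for illustration, and the leading period is what marks it as a class selector:

/* applies to every element with class="highlight" */
.highlight {
color: #FF7F00;
}

You could then put class="highlight" on as many elements per page as you like, and they'd all turn orange.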

Aside from making my life a lot easier when I have to change something, this can also reduce the amount of space the code for a site takes up, as the formatting only has to be stored (and downloaded) once, rather than once for each page. This helps speed up the process of loading your site, which helps to keep Google and your customers happy.

The Google Sandbox

You may have heard people speak of the Google Sandbox as a dreadful place to be avoided. But what is it, exactly, and how can you avoid going there?

Google’s goal is to return the best possible search results. They have a number of rules for webmasters that basically boil down to: don’t screw with the search. For example, Google will penalize your site if they catch you buying or selling paid links.

Suppose you have a brand new site, just registered within the last few months, and it has thousands of links to it. What’s the most likely explanation – that this site is so incredible all these people have found out about it already, or that someone is trying to manipulate the search engine results? Google will assume the second, and they’ll put the site in the sandbox, which generally means it’s not going to show up when someone does a search for the targeted keywords. Considering Google’s share of the search market, this is a Bad Thing.

While the sandbox can be triggered automatically, Google employs people who actually go out and look at sites. If you have a new site that's been growing so quickly that it got put in the sandbox, but not so quickly that it couldn't just be the greatest site ever built, eventually someone from Google will come out and see if the site really does have the content the links say it has. For example, if One Ear Productions were a new site and suddenly had thousands of links talking about our awesome web design, someone might come check that the site really is talking about web design and doesn't just have some crappy computer-generated articles and a bunch of advertisements. Google isn't going to decide whether this really is the greatest web design company ever or whether we really are offering extremely informative articles on web design; they just want to know that this is a legitimate site and we're not trying to jerk them around.

Cliff notes? Publish high-quality content. Don't buy or sell links, don't give Google a reason to think you're doing that, and don't put a link to your brand new site in the footer of your 10,000-page site. Otherwise, you may find yourself sitting in the sandbox all alone.

HTML5, Part V: Forms

When it comes to forms, one thing I hear a lot about is setting them up using PHP; HTML has basic form capability, but don’t you want them to do more?

With HTML5, forms can indeed do more, without the need for any fancy scripting. The best part is, all of the new features degrade gracefully, which means you can use them now without spending a lot of time worrying about what they'll do in legacy browsers.

Let’s start with the basic elements of an HTML form, which are the same in HTML 4 and 5:

<form>

<input name="name" type="type" value="value">

</form>

That's all there is to it! Let's take a look at each piece. The name attribute identifies the input, which lets you refer to it later; this is particularly important when you're using it to pass information to the next page. The type attribute tells the browser what kind of input control to use, such as a button or a checkbox. Finally, the value attribute holds the default value; for a button, this would be the displayed text. Each of these attributes is optional.
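
To make that concrete, here's a small form using all three attributes; the field names, the action target, and the values are hypothetical, just to show the pattern:

<form action="contact.php" method="post">
<input name="email" type="text" value="you@example.com">
<input name="send" type="submit" value="Send">
</form>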

The nice thing is that when a browser doesn't understand the given type, or the type is not given, it simply displays the input as a text field rather than giving unpredictable behavior. This means we can use new HTML5 input types, knowing that people with older browsers will still have a way to provide input.

How about an example? Suppose I want you to choose an even number between 8 and 22, with a default of 12. I can do that using the following code:

<form> <input type="number" min="8" max="22" step="2" value="12"> </form>

If you're using an HTML5-compliant browser such as Opera, this renders as a spinbox that allows only the even numbers between 8 and 22, and the form will refuse to submit if the user enters an illegal value. In older browsers, it shows up as a simple text field, and the limits aren't enforced, so you'd still want to validate the value with JavaScript or on the server. The same code with the type changed to range renders as a slider instead.

So why use these special types instead of just having the user type a number? For one thing, the input can be optimized in various ways. If you're viewing this page on an iPhone, you won't get a spinbox, but your keyboard will default to numbers. A search field (another new type) may be functionally the same as a text box, but if you're using Safari on a Mac, it'll have rounded corners to match the standard Mac search boxes; on both Safari and Chrome, once you start typing, a little x will appear to erase the field. While browsers don't all handle these types in the same way, telling them what kind of data is expected allows for more appropriate data entry.
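
For example, a search box is declared like any other input (the field name here is arbitrary); browsers that don't recognize the type simply fall back to a plain text field:

<form>
<input name="query" type="search">
</form>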

Aside from new types of input, you can also do new things with the old types. For example, one thing that drives me crazy is scripts that autofocus on an input box; when I check my email on the web, I tend to have finished typing my username and be halfway through my password when the page finishes loading and moves the focus back to the username field. Now, instead of using JavaScript, we can stick with HTML and accomplish the same thing in a less annoying (and more consistent) way. Consider the following code:

<form> <input name="tellme" autofocus placeholder="Placeholder Text"> </form>

Depending on which browser you're using, both, one, or none of the new attributes will take effect; whichever ones aren't supported will simply be ignored. Autofocus (predictably) sets the focus to this field, while the placeholder text appears in light grey to tell you what you're supposed to type.

Again, the nice thing is that this degrades gracefully, so it provides a better user experience for people with compatible browsers without annoying those who don't see it. (Of course, you may want to detect compatibility and use JavaScript to provide the same functionality so that more people can see your site as designed.) Note that in the above example, we used autofocus and placeholder text together, which is a little silly because putting focus on the field removes the placeholder text; it does come back if you click on something else, though, and reminds you what needs to be typed there. Also, autofocus brings the field into view as soon as the page loads, which means the reader has to scroll back up to get to the top of the article; not particularly good design, but it does demonstrate the power of this attribute.
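
If you do go the detection route, one common trick (a rough sketch, not production code) is to create an input, assign it the new type, and check whether the browser kept it; unsupported types silently revert to text:

<script type="text/javascript">
var test = document.createElement("input");
test.setAttribute("type", "number");
// browsers without support for type="number" fall back to "text"
if (test.type !== "number") {
// add a JavaScript spinner or validation fallback here
}
</script>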

SEO, Part V: Pagerank Doesn’t Matter

Wait, how can I tell you that pagerank doesn’t matter? Wasn’t I just talking about the importance of getting links from high-ranking sites? Didn’t I say that pagerank is a measure of how important your site is?

The above is all true. You do need to get links from high-ranking sites, and Google does use PR as a measure of usefulness. What it doesn’t tell you, however, is how relevant the page is. Let me explain.

Suppose you do a search for “how to shampoo the dog”. What would be more useful to you: a PR0 site with detailed dog-washing instructions, or a PR6 site selling dog shampoo? Obviously the former is a lot more useful to you, because it’s relevant to your query.

Similarly, while high pagerank is nice to have, your users don't really care about it; they just want your site to give them what they need. You, in turn, just want your site to come up at the top of the SERPs (Search Engine Results Pages) so your customers can find you. Thus, you want to get links from high-ranking, relevant sites because they count as a strong vote that your page is useful... provided they're done right.

Next question: suppose you’re the owner of that webpage on how to shampoo a dog. Which would be more useful to you: a link from a PR2 page using the anchor text (that’s the words you click on) “how to shampoo your dog” or one from a PR6 page with the anchor text “click here”? Again, the former is more helpful; while the latter is better for increasing your pagerank, the anchor text only helps you to rank well for the term “click here” (which Adobe dominates), while the former tells Google what your site is about and helps them to return relevant results.
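
In HTML terms, the difference is nothing more than the text between the tags (the url below is a placeholder):

<a href="http://example.com/dog-shampoo">how to shampoo your dog</a>
<a href="http://example.com/dog-shampoo">click here</a>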

So do you care about getting a high PR on your sites? Well, yes – more pagerank is always preferable to less, and a high-ranking site also lends weight to new pages on that site that don't yet have their own backlinks. Just remember: relevance trumps numbers!

In fact, Google has explicitly said that the PageRank system is one of more than 200 signals they use to index and rank pages. Remember, what they're trying to do is find relevant, high-quality content; everything else is just a means to that end.

Building a Blog, Part VI: The Importance of Permalinks

If you’re reading this from the main site page, go ahead and click through to the article. Now look up at the url for this page. What do you see? The link looks something like this:

http://blog.oneearproductions.com/2010/07/building-a-blog-part-vi-the-importance-of-permalinks

Now, if we were using the default WordPress settings, it would actually look like this:

http://blog.oneearproductions.com/?p=316

Which version tells Google what the page is about? Bingo! You always want to use the first type of link because, assuming you picked a good post title, it tells people and search engines what your post is about.

Doing this in WordPress is easy; just go to the Settings menu and click Permalinks. You'll see a half-dozen options; we're using year, month, and post title, but anything works as long as it includes the title. If you want to make your own custom style, use the last option; just be sure to include %postname% in your url template.
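
For example, a custom structure matching the year/month/title format shown above looks like this:

/%year%/%monthnum%/%postname%/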

How does this affect your SEO results? Suppose Google is indexing this page (which will happen about two minutes after I hit “publish”). First it reads the url and sees the keywords blog and permalinks. Then it reads the page and sees those same words repeated again. Bingo: Google concludes that this webpage (that is, this post) must be related to those terms, and this is a relevant result for people who want to read more about them!

If I were trying to make this page show up in the first SERP, I'd start by making the page search-engine friendly (as well as user-friendly); a relevant url is one way to do that. After that, I'd get started building links to this page from relevant websites. Notice that the link in that last sentence uses anchor text that includes the keyword (links) for the page it's linking to; that page contains the same keyword in the url and title (and, of course, in the body of the post). At this point, Google has a really good idea what that page is about!

Remember, the whole point of search engines is to help users find the most relevant results; accurately describing the content of your pages helps them match the correct page with the correct user. While some people try to abuse the system to get as many visitors as possible to worthless spam pages in the hopes of getting advertising money, when running a legitimate site you want to attract exactly those users that are looking for the content you can provide. Why waste bandwidth (and people’s time) on users who don’t want what you’re offering? Google, of course, is always on the lookout for spam sites (and has human evaluators, as well as software, to help find them), and will happily remove them from the search results. Stick with white hat SEO; don’t let delisting happen to you!

SEO, Part IV: More on Links

In SEO Part II: Links, we discussed the importance of incoming links from high-ranking sites in the same area as yours. What other factors influence the value of incoming links?

Aside from relative popularity, some sites are considered by Google to be particularly authoritative; Wikipedia is one example, but there are others. Pages on Wikipedia can outrank pages on other sites that have a lot more useful information, due to the site’s importance. A vote from these “expert sites” is thus worth more than most. You can read more about this by Googling ‘Hilltop Algorithm’.

Other good sites to get backlinks from are those with .edu or .gov TLDs. While these pages don’t actually get a pagerank boost (at least, so says Google), they do tend to be considered more authoritative, so backlinks from them increase Google’s estimation of your site’s trustworthiness.

The value of links from web directories varies; generally you want to get links from human-created directories that are well organized and link to other good sites, while avoiding those that are automatically created. Of course, you’ll also want to check that the links aren’t using the nofollow attribute, so they pass along link juice.
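
The attribute is easy to spot in a directory's source; of these two otherwise identical links, only the first passes link juice:

<a href="http://example.com/">Example Site</a>
<a href="http://example.com/" rel="nofollow">Example Site</a>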

As previously mentioned, it’s helpful to be one of just a few links rather than one of thousands, so as to get the largest possible share of link juice. Also, remember that Google considers content closer to the top of the page to be more important; thus, it’s good if your link shows up close to the beginning of the HTML file (which is not the same as showing up at the top of the page when displayed – see the note on CSS in that last link!)

It can be discouraging to go to a lot of effort getting new links without seeing a corresponding increase in your pagerank, but remember that, like sites, links need to "age"; a backlink that's been around for a while is worth a lot more than a new link that could be gone tomorrow. Slowly building links over time tends to lead to a gradual increase in the pagerank of your site. But be sure to read part five about when pagerank doesn't matter!

SEO, Part II: Links

So now that we’ve mentioned a few of the things you shouldn’t do, what are some search-engine-approved ways of driving traffic to your site? We’ll focus on Google, since they have the majority of the search market in western countries, but most of what we’ll cover applies to other search engines as well. Today we’ll talk about links and their effect on how Google views your website.

Everyone knows that incoming links are key, but what people don't always realize is that the quality of those links is also important. Let's take a quick look at the Google pagerank algorithm.

Every web page indexed by Google has an ever-changing rank between 0 and 10 that indicates how important, useful, and trustworthy it is. Of the top 100 US sites according to Alexa, the average pagerank is about 7.5, and with the exception of the adult sites, all have a PR of at least 6; on the entire internet, there are fewer than two dozen sites with a PR of 10. (By a site's PR, we mean the PR of its home page; the PR10 sites generally have multiple pages with that rank.) Pagerank uses a logarithmic scale, so as the value decreases, the number of websites with that value increases exponentially. Note that the integer pagerank is only an approximation and can change whenever Google does a PR update, which generally happens at least once a quarter.

Google considers each link from a webpage (except those with the nofollow attribute) to be a "vote" for its destination; the total PR of the page is divided by the number of dofollow links. For example, if a PR6 page has 30 dofollow links, that page contributes a 0.2 "vote" to each page it links to.

People often try to abuse this system by buying hundreds or thousands of links, a practice frowned upon by Google; their official policy is that sponsored links should use the nofollow attribute. “Organic” links, however, can be very valuable. Let’s look at what does and doesn’t work.

Obviously, links from PR0 sites, particularly those that have a lot of outgoing links, aren’t worth much; ideally you want to have links from higher PR (4 or above) sites that don’t have many other links on the same page. Additionally, links are worth more if they come from a related site; if your site is about X and you get a link from a highly-rated site that is also about X, that link is worth more because Google considers the other site to be an authority on the subject. This can be a problem for ecommerce sites because related sites are likely to be your competitors and their owners won’t be interested in helping to increase your pagerank!

One thing Google watches for is the speed at which links are acquired; a site that goes from 8 incoming links to 20,000 incoming links overnight probably is paying for those links, so they’ll be discounted. Again, Google wants to see “organic” links – those that come about naturally because your site is useful to users of the referring site – and tries to reward them. Google also looks at the “neighborhood” your links are coming from; links from spammy sites tend to be discounted.

Another thing to watch for is the anchor text used for your link; a link of the form <a href="yoursite">relevant text</a> is worth a lot more than one that looks like <a href="yoursite">click here</a> (although Google does have algorithms in place to try to catch Googlebombs). Of course, if every incoming link to your site uses the same anchor text, that's another sign Google can use to catch paid links; it's also a wasted opportunity since you want to rank highly for multiple ways to phrase the same thing. Notice that webpages can even rank highly for terms that don't appear on them (as in the George Bush / miserable failure Googlebomb) if enough links use that phrase; for example, searching Google for the phrase "click here" will bring up the page to download Adobe Reader, due to the many, many sites that tell their users to "click here" to download it.

More recently, Google also started looking at a site's outgoing links; a site that mostly links to trustworthy sites such as Wikipedia is likely to be regarded more highly than one that links to spammy sites. In other words, Google judges you by the company you keep!

Tomorrow: how to code your webpages to make Google happy.

SEO, Part I: White Hat vs Black Hat

Today we’ll take a brief intermission from HTML5 to start our discussion of Search Engine Optimization, commonly known as SEO. SEO is the art of getting your website highly ranked in search engines, in order to increase traffic. Getting 50% or better clickthrough from users requires being one of the first seven results; in this series, we’ll discuss how to do that.

SEO is generally broken down into two types, white hat and black hat. White hat techniques are search-engine approved methods of improving your site’s rating; black hat techniques are likely to lead to a brief improvement, followed by a quick ban. In general, black hat techniques are those that increase traffic to your site but create a poor user experience, and are often considered unethical. Today we’ll discuss black hat techniques so that you can be aware of them. Again, you should NOT use these methods to increase traffic; the search engines have ways to detect them and will penalize your site.

Keyword spam
One popular technique is to place a number of keywords on the page such that the search engine will see them but users will not. Usually this is done by setting the words to be the same color as the page background or a very small font size, although there are a number of other methods. Spammers use either keywords that are related to the page (to artificially increase its rating) or unrelated ones (to suck in people for whom the page is not useful). Search engines hate this. As a general rule of thumb, don't do anything to make the page look different for humans and computers (but there are exceptions – we'll cover one in part III).
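
Purely as an illustration of what to avoid, the classic version looks something like this; the text is invisible to visitors but sits right in the HTML for a crawler to read (the tiny-font variant works the same way):

<!-- keyword spam: do NOT do this -->
<p style="color: #FFFFFF; background: #FFFFFF;">
cheap widgets best widgets buy widgets widgets on sale
</p>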

Cloaking
Cloaking is when, rather than displaying the same page differently to users and search engines, the webmaster actually serves them different pages, using methods such as JavaScript redirects and full-size (100%) frames.


Doorway pages
A doorway page setup is one where the main page appears to direct the user to one of a number of other pages based on some criteria, but every destination is actually the same. For example, the main page may contain a link to another page of the site for each state, but the pages do not actually contain any state-specific information; the point is just to get users to load more advertisements. Again, search engines consider this to be spam; you shouldn't have duplicate pages on your site. The term is also used for pages that are created to be visible only to search engines.


Computer-generated pages
You’ve probably seen webpages that seemed to be on-topic, but when you read them they were poorly written and didn’t make a lot of sense. Often these pages are computer-generated, pulling content from various sources. Not only do these pages tend to be fairly worthless to the user, they’re a strong spam signal for the search engine.

This isn't even close to being a comprehensive list, but hopefully you're getting the idea: don't try to trick the search engines. Next time: Google-approved SEO.