Setting Up 4 Key Customer Loyalty Metrics in Google Analytics

Customer loyalty is one of the strongest assets a business can have, and one that any business can aim to improve. However, improvement requires iteration and testing, and iteration and testing require measurement.
Traditionally, customer loyalty has been measured using customer surveys. The Net Promoter Score, for example, is based on the question "How likely is it that you would recommend our company/product/service to a friend or colleague?", answered on a scale of one to ten. Regularly monitoring metrics like this with any accuracy gets expensive (and/or annoying to customers), and it is never going to be hugely meaningful, as advocacy is only one dimension of customer loyalty. Even with a wider range of questions, there's also some risk that you end up tracking what your customers claim about their loyalty rather than their actual loyalty, although you might expect the two to be strongly correlated.
Common mistakes
Google Analytics and other similar platforms collect data that could give you more meaningful metrics for free. However, they don't always make them completely obvious – before writing this post, I checked to be sure there weren't any very similar ones already published, and I found some fairly dubious recurring recommendations. The most common of these was
using % of return visitors as a sole or primary metric for customer loyalty. If the percentage of visitors to your site who are return visitors drops, there are plenty of reasons that could be behind that besides a drop in loyalty—a large number of new visitors from a successful marketing campaign, for example. Similarly, if the absolute number of return visitors rises, this could be as easily caused by an increase in general traffic levels as by an increase in the loyalty of existing customers.
Visitor frequency is another easily misinterpreted metric; 
infrequent visits do not always indicate a lack of loyalty. If you were a loyal Mercedes customer, and never bought any car that wasn't a new Mercedes, you wouldn't necessarily visit their website on a weekly basis, and someone who did wouldn't necessarily be a more loyal customer than you.
The metrics
Rather than starting with the metrics Google Analytics shows us and deciding what they mean about customer loyalty (or anything else), a better approach is to decide what metrics you want, then work out how you can replicate them in Google Analytics.
To measure the various dimensions of (online) customer loyalty well, I felt the following metrics would make the most sense:
1. Proportion of visitors who want to hear more
2. Proportion of visitors who advocate
3. Proportion of visitors who return
4. Proportion of macro-converters who convert again
Note that a couple of these may not be what they initially seem. If your registration process contains an awkwardly worded checkbox for email signup, for example, it's not a good measure of whether people want to hear more. Secondly, "proportion of visitors who return" is not the same as "proportion of visitors who are return visitors."
1. Proportion of visitors who want to hear more
This is probably the simplest of the above metrics, especially if you’re already tracking newsletter signups as a micro-conversion. If you’re not, you probably should be, so see Google’s guidelines for event tracking using the
analytics.js tracking snippet or Google Tag Manager, and set your new event as a goal in Google Analytics.
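As an illustration, the signup event might be fired like this (a minimal sketch, assuming analytics.js is already loaded on the page; the element ID and the category/action/label values are placeholders of my own, not anything Google prescribes):

    // Fire a micro-conversion event when the (hypothetical) newsletter form
    // is submitted; the event can then be set as a goal in Google Analytics.
    document.getElementById('newsletter-form').addEventListener('submit', function () {
      ga('send', 'event', 'newsletter', 'signup', 'footer-form');
    });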
2. Proportion of visitors who advocate
It’s never possible to track every public or private recommendation, but there are two main ways that customer advocacy can be measured in Google Analytics: social referrals and social interactions. Social referrals may be polluted as a customer loyalty metric by existing campaigns, but these can be segmented out if properly tracked, leaving the social acquisition channel measuring only organic referrals.

Social interactions can also be tracked in Google Analytics, although surprisingly, with the exception of Google+, tracking them does require additional code on your site. Again, this is probably worth tracking anyway, so if you aren't already doing so, see Google's guidelines for analytics.js tracking snippets, or this excellent post for Google Tag Manager analytics implementations.
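For instance, a tweet button's callback might record a social interaction like this (again only a sketch; the network, action, and target values are illustrative):

    // Record a social interaction hit; these show up in GA's Social reports.
    function trackTweet(targetUrl) {
      ga('send', 'social', 'Twitter', 'tweet', targetUrl);
    }
    trackTweet('http://www.distilled.net/');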
3. Proportion of visitors who return
As mentioned above, this isn't the same as the proportion of visitors who are return visitors. Fortunately, Google Analytics does give us a feature to measure this.

Even though date of first session isn't available as a dimension in reports, it can be used as a criterion for custom segments. This allows us to start building a data set showing how many visitors who made their first visit in a given period have returned since.
There are a couple of caveats. First, we need to pick a sensible time period based on our frequency and recency data. Second, this data obviously takes a while to produce; I can’t tell how many of this month’s new visitors will make further visits at some point in the future.
In Distilled’s case, I chose 3 months as a sensible period within which I would expect the vast majority of loyal customers to visit the site at least once. Unfortunately, due to the 90-day limit on time periods for this segment, this required adding together the totals for two shorter periods. I was then able to compare the number of new visitors in each month with how many of those new visitors showed up again in the subsequent 3 months:

As ever with data analysis, the headline figure doesn’t tell the story. Instead, it’s something we should seek to explain. Looking at the above graph, it would be easy to conclude “Distilled’s customer loyalty has bombed recently; they suck.” However, the fluctuation in the above graph is mostly due to the enormous amount of organic traffic that’s been generated by
Hannah’s excellent blog post 4 Types of Content Every Site Needs.
Although many new visitors who discovered the Distilled site through this blog post have returned since, the return rate is unsurprisingly lower than that of some of the more business-orientated pages on the site. This isn't a bad thing—it's what you'd expect from top-of-funnel content like blog posts—but it's a good example of why it's worth keeping an eye out for this sort of thing if you want to analyse these metrics. If I wanted to dig a little deeper, I might start by segmenting this data to get a more controlled view of how new visitors are reacting to Distilled's site over time.
4. Proportion of macro-converters who convert again
While a standard Google Analytics implementation does allow you to view how many users have made multiple purchases, it doesn't allow you to see how those purchases fell across their sessions. Similarly, you can see how many users have had two sessions and two goal conversions, but you can't see whether those conversions happened in different visits: it's entirely possible that some had one accidental visit that bounced, and one visit with two different conversions (note that you cannot perform the same conversion twice in one session).
It would be possible to create custom dimensions for first (and/or second, third, etc.) purchase dates using internal data, but this is a complex and site-specific implementation. Unfortunately, for the time being, I know of no good way of documenting user conversion patterns over multiple sessions using only Google Analytics, despite the fact that it collects all the data required to do this.
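If you did go down that route, the client-side half might look something like this (purely a sketch: it assumes you have created a user-scoped custom dimension in the GA admin, and isFirstPurchase is a hypothetical flag supplied by your own backend):

    // On the purchase confirmation page: store the first purchase date in a
    // user-scoped custom dimension (dimension1 is an assumed slot).
    if (isFirstPurchase) {
      ga('set', 'dimension1', new Date().toISOString().slice(0, 10));
    }
    ga('send', 'event', 'checkout', 'purchase'); // the dimension is sent with this hit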
Contribute
These are only my favourite customer loyalty metrics. If you have any that you're already tracking or are unsure how to track, please share in the comments below.

Google Kills Author Photos in Search Results: What You Should Know

Google gives, and Google takes away.
Even so, it came as a surprise when John Mueller announced Google is dropping authorship photos from most search results.
This one hits particularly hard, as I’m known as the guy who
optimized his Google author photo. Along with many other SEOs, I constantly advise webmasters to connect their content writers with Google authorship. Up until now, would-be authors clamored to verify authorship, both for the potential of increased click-through rates and for the greater brand visibility of introducing real people into search results.
Update: As of June 29th, the MozCast feature graph shows traditional authorship snippets dropping to 0% of search results across all data centers. Previously, Google displayed authorship photos in 22% of all searches.
How are author photos changing?
The announcement means author photos in
most Google search results are going away. John Mueller indicated the change will roll out globally over the next few days.
Up until now, if you
verified your authorship through Google+, and Google chose to display it, you might have seen your author photo displayed in Google search results. This included both your author photo and your Google+ circle count.
Going forward, Google plans to only display the author’s name in the search snippet, dropping the photo and the circle count.

Google News adds a different twist. 
In this case, Google’s plans show them adding a small author photo next to Google News snippets, in addition to a larger news photo snippet. 
At this time, we're not sure how authorship in Google News will display in mobile results.
Why did Google drop author photos?
In his announcement, John Mueller said they were working to clean up the visual design of search results, and also to create a “better mobile experience and a more consistent design across devices.”
This makes sense in the way Google has
embraced mobile-first design. Those photos take up a lot of real estate on small screens. 
On the other hand, it also leaves many webmasters scratching their heads: most seemed to enjoy the author photos, and most of the web is moving towards a more visual experience.
John Mueller indicated that testing shows that “click-through behavior” with the new results
is about the same, but we don't know exactly what that means. One of the reasons authors liked the photos in search results was the belief that a good photo could result in more clicks (although this was never a certainty).
Will the new SERPs result in the same amount of clicks for authorship results? For now, it’s hard to say.
Critics argue that the one thing that will actually become more visible as a result of this change will be Google's ads at the top and sides of the page.
What isn't changing?
Despite this very drastic visual change in Google search results, several things
are not changing:
1. Authorship is still here
As Mark Traphagen eloquently
pointed out on Google+, the loss of photos does not mean Google authorship itself is going anywhere. 

“Google Authorship continues. Qualifying authors will still get a byline on search results, so Google hasn’t abandoned it.”
2. Authors’ names still appear in search results
In the new system, authors still get their name displayed in search results, which presumably still links through to their Google+ profile. Will this be enough to sway searchers into clicking a link? Time will tell.
3. Your rankings don't change
Authorship does not influence rankings for most search results (with exceptions for certain features, such as In-depth Articles). Sometimes the photo led to more clicks for some people, but the new change should not alter the order of results.
4. You must still verify authorship for enhanced snippets
Google isn’t changing the guidelines for establishing authorship. This can be accomplished either through
email verification or linking your content to your Google+ profile, and adding a link back to your website from your Google+ contributor section.
Tracking your authorship CTR
If you have authorship set up, you can easily track changes to your click-through rate using Google Webmaster Tools. Navigate to Labs > Author Stats to see how many times your author information has appeared in search results, along with the total number of clicks and average position.

In the example above, search results associated with my authorship receive around 50,000 impressions a day, with an average of 1,831 clicks, for an overall CTR of 3.6%.
If you track your CTR immediately before and after the Google authorship change (by adjusting the dates in Webmaster Tools), you can spot any changes caused by the shakeup.
Keep in mind that CTR is highly determined by rank, or average position. Small fluctuations in rank can mean a large difference in the number of clicks each URL receives.Is Google Authorship still worth it?
For many, scoring photos in search results was the
only incentive people had to verify authorship. Whether or not it increased click-through rates, it was an ego boost, and it was great to show clients. With the photos gone, it’s likely fewer people will work to get verified.
Even with the photos gone, there is still ample reason to verify authorship, and I highly recommend you continue to do so. Even though a byline is much less visible than a photo, across the hundreds or thousands of search impressions you receive each day, those bylines can make a measurable difference in your traffic, and may improve your online visibility.
Google continues to work on promoting authoritative authors in search results, and authorship is one of the better ways for Google to establish "identity" on the web. Google continues to make statements explaining how important identity in content is, as explained by Matt Cutts both publicly and in this rarely seen interview.
Facing the future
If Google begins to incorporate more “Author Rank” signals into its search algorithm, establishing yourself as a trusted authority now could pay off big down the road. Disappearing author photos today may someday be replaced by actual higher rankings for credible authors, but there are no guarantees. 
At this point, it’s hard to say exactly where the future of authorship lies, especially given the unknown future of Google+ itself.
Personally, I will be sad to see author photos disappear. Let’s hope for something better down the road.
More from across the web:
Google Removes Author Photos From Search: Why And What Does It Mean?

8 Ways to Use Email Alerts to Boost SEO – Whiteboard Friday

Link building is nowhere near dead, and some of the best link opportunities can be discovered by setting up email alerts for various things that are published on the web. In today’s Whiteboard Friday, Rand runs through eight specific types of alerts that you can implement today for improved SEO.

Howdy Moz fans, and welcome to another edition of Whiteboard Friday. Today we’re going to chat about email alerts and using them to help with some of your SEO efforts, specifically content identification, competitive intelligence, some keyword research, and, of course, a lot of link building because email alerts are just fantastic for this.

Now here’s what we’ve got going on. There are a number of tools that you can use to do email alerts. Obviously, Google Alerts, very well-known. It’s free. It does have some challenges and some limitations in scope, so you won’t be able to do everything that I’m going to talk about today.

There’s Fresh Web Explorer from Moz. Of course, if you’re a Moz Pro subscriber, you’ve probably used Fresh Web Explorer. And Fresh Web Explorer’s alerts functionality, in particular, is kind of my favorite Moz feature period right now.

We also have some very strong, good competitors in this space—Talkwalker, Mention.net, and Tracker—all of which have many of the features that I’m going to be talking about here. So whatever program you’re using, this stuff can help.

That being said, I am going to be talking in terms of the operators that you would use for Fresh Web Explorer specifically. Google Alerts has some of these operators but not all of them, and so do Talkwalker, Mention, and Tracker. They might not have all of these, or theirs might be slightly different. So make sure you take a look at how the search operators for each of those work before you go engaging in this.

The operators I’m going to specifically mention are the minus command, which removes. I think that works in all of them. That’s essentially saying show me this stuff, but don’t show me anything that contains this.

Link:, this works in plenty of them. That’s showing links to the URL specifically. RD: which in Fresh Web Explorer shows links to the root domain, and SD: which shows links to the subdomain.

Quotes, which matches something exactly, works in all of these. TLD, which shows only links from a given domain extension. If I want to see only German websites, I can put TLD:DE and see only sites from Germany. Then, site: which shows only results from a specific sub or root domain, as opposed to like SD or RD, which show links to a subdomain or root domain.

This will all make sense in a second. But what I want to impart is that you can be using these tools, these types of commands to get a ton of intelligence that’s updated daily.
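To make those operators concrete before we walk through the alerts themselves, here are a few illustrative queries in Fresh Web Explorer's syntax (the domains are the same examples used below; other tools' operators differ slightly):

    Links to a competitor, but not to me:   RD:dogvacay.com -RD:rover.com
    Brand mention without a link:           rover "dog sitting" -RD:rover.com
    Topic coverage from one specific site:  "dog sitting" site:humanesociety.org
    Competitor links from U.K. sites:       RD:petsitters.org TLD:co.uk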

What I love about alerts is whether you do it weekly, or you do it daily, however, whatever frequency works for you, the beautiful thing is it’s a constant nudge, a constant reminder to us as marketers to be concentrating on something like, oh, yeah, I should really be thinking about link building. I should really be thinking about what my competition’s writing about. I should really be thinking about what bloggers in this niche think about my keywords and who they’re talking about when they mention these keywords, all that kind of stuff.

That nudge phenomenon of just having the repetitive cycle is really important for marketers. I feel like it helps me a tremendous amount when I get my alerts every night just to remember oh, yeah, I should do this. I should take a look at that. It’s right in my email. I take care of it with the rest of my work. Very, very helpful.

#1: Links to my competitors, but not to me

I mean come on. It’s just a gimme. It’s an opportunity for a bunch of things. It shows you what types of keywords and content people are writing about in the field, and it almost always gives you a link opportunity or at least insight into how you might get a link from those types of folks. So I love this.

I’m going to imagine that I’m Rover.com. Rover is a startup here in Seattle. They essentially have a huge network. They’re sort of like Airbnb but for people who do dog sitting and pet sitting. Great little company.

Rover has got some competitors in the field, like DogVacay.com and PetSitters.org and some of these other ones. They might, for example, create an alert that is RD:dogvacay.com minus RD:rover.com: show me people who link to my competitor's domain, anywhere on my competitor's domain, but don't show me people who also link to me (and the same for PetSitters.org). This will show them a subset of folks who are linking to their competition but not linking to them. What a beautiful link building opportunity.

#2: Mentions my brand, but doesn’t link to me

Number two, another gimme and one that I’ve mentioned previously in some link building videos on Whiteboard Friday, places that mention my brand but don’t link to me. A number of these services can help you with this. Unfortunately, tragically, Google Alerts is the only one that can’t. But mentions my brand, doesn’t link to me, this is great.

In this case, because Rover's brand name is so generic, people might use it for a lot of different things; they're not always referring to the company Rover. So they might add a keyword in here: Rover, plus any mention of dog sitting, minus RD:rover.com. That means someone's talked about Rover, talked about dog sitting, and they didn't link to them.

This happens all the time. I have an alert set up for Moz that is "moz minus RD:moz.com," and actually for me I just put minus Morrissey as well, because the singer Morrissey is the most common thing that people mention with Moz. I think I have another one that's like "moz marketing minus RD:moz.com." Literally, every week I have at least some news sites or sites that have mentioned us but haven't linked to us. A comment or a tweet at them almost always gets us the link. This is great. I mean it's like free link building.

#3: Mentions my keywords, but doesn’t link to me

This is similar to the competitive one but a little broader in scope.

So I might, for example, say “dog sitting or pet sitting minus RD:rover.com.” Show me all the people in the space who are talking about dog sitting. What are they saying?

The nice thing is that with Fresh Web Explorer (and I think Talkwalker and Mention both do this), results are sorted in terms of authority. So you don't just get a bunch of random jumble. You can actually see the most authoritative sites.

Maybe it is the case that The Next Web is covering pet sitting marketplaces, and they haven’t written about Rover, but they’re mentioning the word “dog sitting.” That’s a great outreach point of view, and it can help uncover new content and new keyword opportunities too.

#4: Shows content produced by a competitor or news site on a topic related to me

For example, in the case of Rover.com, they might be a little creative and go, “Man, I really want to see whenever the Humane Society mentions dog sitting, which they do maybe once every two or three months. Let me just get a reminder of that. I don’t want to subscribe to their whole blog and read every post they put out. But I do really care when they talk about my topic.”

So you can set up an alert like dog sitting “site:humanesociety.org.” Perfect. Brilliant. Now I’m getting those content ideas. Potentially there are some outreach opportunities here, link building opportunities, keyword opportunities. Awesome.

#5: Show links coming from a geographic region

Let’s say, hey, I saw PetSitters.org is going international. They just opened up their UK branch. They haven’t actually, but let’s say that they did. I could create an alert like “RD:petsitters.org TLD:.co.uk.” Now it shows me all the people who are linking to PetSitters.org from the U.K. Since I know they just expanded there, I can start to target all those people who are coming out.

#6: Links to me or my site

This is very important for two reasons. One is so you know when new links are coming, where they’re coming from, that kind of stuff, which is cool to see. Sometimes you can forward those on, see what people are saying about you. That’s great.

But my favorite part of this is so I can thank those people, usually via Twitter, or so I can promote it on social media networks. Seriously, if someone’s going to go and say something nice about Rover and link to me, and it’s a third party news source or a blogger or something, I totally want to share that with my audience, because it reminds them of me and is also great promotional content that’s coming from someone else, an authoritative external voice. That’s wonderful. This can also be extremely helpful, by the way, to find testimonials for your business and press mentions that you might want to put on your site or in your conversion funnel.

#7: Find blogs that are writing about topics relevant to my business

This is pretty slick.

It turns out that most of these alerts systems will also look at the URL when they’re considering alerts, meaning that if someone has blog.domain.com, or domain.com/blog/whateverpost, you can search for the word “blog” and then something like “dog sitter.” Optionally, you could add things like site:wordpress.com, site:blogspot.com, so that you are getting more and more alerts that are showing you blogs that write about your topic, your keywords, that kind of stuff. This is pretty slick.

I especially like this one if you have a very broad topic area. I mean if you’re only getting a few results with your keywords anyway, then you can just keep an alert on that shows you everything. But if you have a very broad topic area, and dog sitting is probably one of those, you want to be able to narrow in on the blogs that you really care about or the types of sites that you really care about.

#8: Links to resources/data that I can compete with/offer a better version

I like this as a link building strategy, and I’ll use it on occasion. I don’t do it all the time, but I do care at certain points when we’re doing a campaign.

For example, a link to a resource or a piece of data that’s been collected out there on the Web that I can compete with or offer a better version of. Somebody, for example, is linking to the Wikipedia page on dog sitting or, let’s say, a statistics page from a Chamber of Commerce or something like that, and I have data that’s better, because I’ve done a survey of dog owners and pet sitting, and I’ve collected all this stuff. So I have more recent, and more updated, and more useful data than what Wikipedia has or this other resource.

I can reach out to these folks. I love seeing that. When you see these, these are often really good link targets, targets for outreach. So there’s just a lot of opportunity by looking at those specific resources and why people link to them and who.

So, with all of this stuff, I hope you’re going, setting up those alerts, getting your daily or weekly nudges, and improving your SEO based on all this stuff.

Thanks, everyone. See you again next week for another edition of Whiteboard Friday.

Take care.

Screen Size Matters: Adapting Content Strategy for Multiple Devices

The way we consume content is changing at a faster pace than at any time in history.
While the shift from print to digital was seismic from a structural perspective, things have not stopped moving ever since.
The growth of mobile, and now tablet use, is
altering the landscape once again and adding a layer of complexity that businesses have never had to deal with before.
Marketers have been talking about a "mobile-first" future for some time now, and while I believe that is the wrong way to look at it, there is little arguing against the stats.
Earlier this year, Facebook unveiled blockbuster ad figures that left even the most ardent fans surprised. Revenues of $2.5 billion for the Jan-March quarter were, for the first time, made up mostly of mobile advertising money, a clear sign of our changing media consumption habits.
It has also made the job of creating a great content strategy that much more complex, and "cracking it" requires a structured approach that begins with an understanding of the way in which we interact with our everyday devices.
Multi-screen usage
Getting to grips with what content to place where and when is the key aspect of the strategy construction process.
One of the best sources of data to inform that is a Google study from 2012, which you can
view in full here. In it we learn that there is a clear user journey across devices and at different times of the day.
To create a truly data-informed picture, however, we need information on variables such as:
- What device is used, and when, in the day
- The order of devices in the classic purchase funnel
- Which devices we use to access key content platforms
- How long we spend on each device and what we look at on them
With this knowledge, it becomes much easier to map out the right content.
Creating the Variation
Understanding what people are looking for when using these various devices will help you write, package, and distribute your content across different channels.
Critically, it will also ensure you have the key ingredient in any content strategy:
variation.
A varied approach to content creation will not only ensure you entertain and inform your audience in the right way across multiple devices, but will also improve the level at which you retain your existing audience and keep them coming back for more.
Device use: Timing
The way we interact with the content we consume changes throughout the day.

The chart below, from Google's study, breaks this down simply, explaining that our habits push us towards mobile content first thing in the morning, at lunchtimes, and on the way home from work.
We will then spend evenings browsing tablets and working “Simultaneously” (more on this later) with a second screen as we research purchases and spend from the comfort of our living rooms.

The study also makes clear that if you’re an ecommerce brand, the tablet is increasingly becoming your number one device in conversion terms, which highlights the need for great responsive site design and content that is easily consumed on such screens.
Great examples of businesses doing this well include 
Burton.com, a cool snowboard retailer, and United Pixelworkers, which manages to combine great animation with multi-device-friendly UX.
The person, NOT the device
While it is clearly critical to get the experience right for each device, the most important element is actually removing the constraints created by this kind of strategy and centering it on the customer once again.
The mobile internet has given the control back to the reader, or customer, and the way they consume content is on their terms.
And with that power back in the hands of those who are buying, as opposed to selling, your content has to
be available when and where necessary to be effective. 
If your customer cannot access it wherever they are, they will simply find somewhere else to go, and that is highly likely to be a competitor.
Start with the user: personas
To ensure you focus your strategy correctly, you need to start with your client or customer, and this means starting with clearly defining a set of content personas.
This is good practice for the wider marketing plan and begins with analysis and segmentation of existing customer data. Outside of that you can begin looking at social data (and a step-by-step guide to doing that can be
found in this ebook).
I also spoke recently on the importance of personas in content creation and you can get more background on that process
in this post, while Mike King also wrote this indispensable guide to understanding and segmenting your audience here on Moz a few months ago, and that should not be missed as part of this process.
In it there are examples of what you are trying to achieve through this process: a data-informed view of the one to four different groups of customers you have.
By being clear on the key details about those you wish to target with content, you can ensure the
ideas you create match the needs of the target audience, therefore putting them at the very centre of the process. An example of what this may look like is found below:
Plan for behaviour, not technology
Once you have a clear view of who it is you are targeting, the next piece of the jigsaw is to understand when and where they interact with which particular devices in their daily lives. This data can, and should, be included in the picture you paint for each persona.
General market data can certainly help with this, and what we do know, from key research on the subject, is that we generally use mobile devices at the beginning and end of the day:

When looking at what we then do during those times we can see here, courtesy of a
recent study on mobile usage by Salesforce, that a lot of that time is consumed by social media use:

Let's look in a little more detail now at the types of content that work, by device breakdown.
Device use: Content
The type of content we consume on each device varies widely, and serving it well requires a systematic approach to content production. For instance, you'll want to take into account how much time you actually have to grab and hold a reader's attention.
Let's take a look at how that breaks down below.
Desktop: Keep us productive and informed
Desktop consumption is one of the easiest to cater for, as it is the device we are most used to using and the one that has had the most time for brands to gain experience on.
All content forms work here, but longer form articles really come into their own, as do more interactive experiences.
This is also where you may wish to present richer, interactive content that makes the best use of browser capabilities and larger screen experiences. For example, a particularly rich and interactive product demonstration could be made available for desktop users, or an extremely streamlined product catalog could be provided for mobile users who need to make quick comparisons while on the go.
Mobile: Keep us connected
Mobile use peaks at key commuting times and traffic is more focused on small chunk browsing. We utilize our time here looking to catch up on news and social. That said, increasingly we are also using the device to make critical purchases. Only this month figures revealed that a third of all global travel transactions are made on mobile.
And while travel is clearly a sector that will always have high mobile penetration, it's a telling statistic and a sign of what is to come.
A lot of that mobile traffic will come in via social posts, so having a responsive site is still critically important. Making it easy for users to float between blog posts and the rest of the site will improve dwell time and content consumption.
The best content for this is bite-sized and easily digestible: list-based features, image-led posts, or news.
It is critical, of course, to ensure the responsive experience is good here and that navigation supports touch-screen conventions, such as swiping for the next article, and includes easy-to-use social sharing functionality.
Instructional content works well here and recipe sites are one of the best examples of how to do this well. A current favourite of mine is the
Jamie Oliver site. The use of tabs makes it very easy to switch between ingredients and step-by-steps, while thought has also been given to how easy that process is with fingers coated in flour!
Tablet: Keep us entertained
While this segment didn't even exist a couple of years ago, it is now a critical part of our everyday lives.
We rely on them for evening and weekend entertainment and because of that we also find ourselves, increasingly, doing lots of research on the device.
It therefore forms a key part of the buying process and is a focus device for those looking to sell something.
We also tend to leave more reviews via tablets than we do on other devices, so ensuring your review platform or experience is responsive is very important.
Device use: Conversion
When planning your content, think about when your consumers might be most ‘open’ to the various types of content you’re delivering and, critically, to the call to action it leads to.
We move between those devices in two main ways:
> Sequentially (moving from one device to another in sequence)
> Simultaneously (using multiple devices at the same time)
With many people starting a search in one place and finishing the activity on another device, it means that your message, design, and overall content experience should be as seamless and consistent as possible.
And given that we do jump around, having the ability to save something for later is also key. In other words, make it easy for people to get back to the same URL irrespective of which device they are on, which means making sure your menu structure is responsive and easy to use.
Structured variation
We know that different devices elicit different behaviours and that our content should still focus on the user, but how do you go about planning for that in the real world? The answer lies in adding a level of structure to your content plan that allows you to see whether or not you are ticking all the boxes.
That process starts, as most strategic plans should, with questions, and when designing the content plan you should always answer the following:
- Do I have content to suit each of the personas that make up our customer base?
- Have we thought of ideas for every content type relevant to our audience and brand (e.g. ebooks, infographics, articles)?
- Do we have ideas that will suit mobile, tablet, and desktop needs?
- Have we got a plan for the long tail based on what people are searching for?
- Do we know what evergreen content we need and how often we will revisit and improve it?
- Is our educational content designed for all learning types (e.g. kinesthetic, auditory)?
- Does our content and story translate to other platforms? For example, games, with good content or stories, can deliver more engagement.
The answer to each of the above should be yes BEFORE you attempt to pull the plan together. If any answer is no, loop back around and ensure you brainstorm ideas to fill that gap.
The key, therefore, to ensuring you have the right mix of content is to take a structured approach to idea creation (a subject I have
written about in the past for Moz).
If you have such a system in place that covers devices, along with an understanding of what kind of content works best on those devices, then you are in a great place.
The next step is to organize a calendar that is both realistic and structured to produce the right content flow. Getting it right will keep your audience both entertained and absorbed, improving engagement and return visits.
Testing the strategy
The process for creating content strategy is actually quite straightforward. As with anything, however, to do something well requires experience and skill.
To ensure you have the right balance of content for multi-device consumption you must first audit your existing site.
To do this you need to extract a list of relevant content from the site into an Excel document and categorize it by the device that content would primarily be consumed on. You can do this easily by classifying each piece based on the basic rules we explained earlier in the post around what we most use mobile, tablet, and desktop for.
In reality there is little point in gathering every single piece of content you have. The only content that really matters are those pieces being viewed on a monthly, or at least quarterly, basis. And where do you find those? Within Analytics.
The best way of doing this is as follows:
1. Go to your Google Analytics account.
2. Set the date range. This should be at least six months, but preferably longer.
3. Go to Dashboard > Behaviour > Site Content > Landing Pages.
4. Add a Secondary Dimension. You do this by selecting the dimension dropdown, shown below, and finding Device Category within the Users segment.
5. Select the number of landing pages you want to extract. The 'right' number here depends on the size of your site, but a good guide is to find the point at which there have been at least 10 visits to the page within the last six months. Use this as a cut-off point.
6. Download that data in CSV or Excel by using the Export feature below.
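If you prefer to pull the same report programmatically, a rough sketch against the Google Analytics Core Reporting API (v3) might look like this; the view ID and ACCESS_TOKEN are placeholders you would supply, and it assumes you have already set up OAuth access:

    // Landing pages with sessions broken down by device category: the
    // programmatic equivalent of the manual export described above.
    var params = {
      'ids': 'ga:12345678',          // placeholder view (profile) ID
      'start-date': '2014-01-01',
      'end-date': '2014-06-30',
      'metrics': 'ga:sessions',
      'dimensions': 'ga:landingPagePath,ga:deviceCategory',
      'sort': '-ga:sessions',
      'max-results': '1000'
    };
    var query = Object.keys(params).map(function (k) {
      return encodeURIComponent(k) + '=' + encodeURIComponent(params[k]);
    }).join('&');
    fetch('https://www.googleapis.com/analytics/v3/data/ga?' + query, {
      headers: { 'Authorization': 'Bearer ' + ACCESS_TOKEN } // placeholder token
    }).then(function (res) { return res.json(); })
      .then(function (data) { console.log(data.rows); }); // [page, device, sessions]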

Once you have a sense for what you do have it becomes very obvious where the focus should be in terms of devices. You can then simply front-load any ongoing creation strategy with those ideas to balance out the overall offering.
What is critical, however, is the ability to be able to then measure ongoing creation to ensure you get that balance right in the future, and to do that requires a little more work in Excel.
As you want to be creating content equally for all devices, you are looking for a constant flow of ideas, designed for each one. Testing this can be done in two ways; you can either create an editorial planner template that includes space specifically where you can record “by device” (and here’s one we made earlier for you!) or you can assign a number to each device and chart the ‘flow’ of content over time, a little like this:

To create this is a very simple process. I'm no Excel wizard and always prefer to make things as simple as possible as opposed to complicating without reason, so here's one way of doing it quickly:
1. Assign a number to each device. For this example, Mobile is "1," Tablet is "2," and Desktop is "3."
2. Create a chart in Excel where the X-axis is "Time" (by week is best) and the Y-axis is numbered 1-3.
3. Take your existing content strategy and replace titles or ideas with those numbers, so you end up with a list of dates and numbers, as you see in the screenshot above.
4. Turn this into a chart by highlighting the two columns of data and using the "Chart" option in Excel. Within this, select "line graph" and the program will create the chart for you.
What you are looking for here is something that closely resembles a heart monitor with “peaks” and “troughs.” This is clearly a generalist view as your business may be much more focused on mobile, for instance, in which case you should have fewer spikes in a chart that may look like this:

By moving content around you will be able to create the perfect strategy for your brand and also ensure that you are not missing a key opportunity in the process.

Sitemaps Best Practices Including Large Web Sites

One of the key Search Engine Optimization (SEO) strategies for web sites is to have high-quality sitemaps that help search engines discover and access all relevant content posted on the site. Sitemaps offer a really simple way for site owners to share information with every search engine about the content they have on their site, instead of having to rely solely on crawling algorithms (i.e., crawlers, robots) to find it.
The Sitemaps protocol, defined at www.sitemaps.org, is now widely supported. Many web sites and Content Management Systems (CMSs) offer sitemaps by default or as an option. Bing even offers an open-source server-side technology, the Bing XML Sitemap Plugin, for websites running on Internet Information Services (IIS) for Windows® Server, as well as Apache HTTP Server.
Best practices if you want to enable sitemaps
If you don’t have a sitemap yet, we recommend first that you explore if your web site or your CMS can manage this, or install a sitemap plugin.
If you have to, or want to, develop your own sitemaps, we suggest the following best practices:
First, follow the sitemaps reference at www.sitemaps.org. Common mistakes we see are people thinking that HTML sitemaps are Sitemaps, malformed XML Sitemaps, XML Sitemaps that are too large (the maximum is 50,000 links and up to 10 megabytes uncompressed), and links in sitemaps that are not correctly encoded.
Have relevant sitemaps linking to the most relevant content on your sites. Avoid duplicate links and dead links: a best practice is to generate sitemaps at least once a day, to minimize the number of broken links in sitemaps.
Select the right format:
Use an RSS feed to list, in real time, all new and updated content posted on your site during the last 24 hours. Avoid listing only the 10 newest links on your site: search engines may not visit the RSS feed as often as you want and may miss new URLs. (This can also be submitted inside Bing Webmaster Tools as a Sitemap option.)
Use XML Sitemap files and a sitemap index file to generate a complete snapshot of all relevant URLs on your site daily.

Consolidate sitemaps: Avoid too many XML Sitemaps per site and avoid too many RSS feeds. Ideally, have only one sitemap index file listing all relevant sitemap files and sitemap index files, and only one RSS feed listing the latest content on your site.
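For illustration, here is a minimal sitemap index following the www.sitemaps.org schema (the URLs and dates are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <sitemap>
        <loc>http://www.example.com/sitemap-001.xml</loc>
        <lastmod>2014-06-29</lastmod>
      </sitemap>
    </sitemapindex>

Each referenced sitemap file then lists the page URLs themselves:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://www.example.com/some-page</loc>
        <lastmod>2014-06-29</lastmod>
      </url>
    </urlset>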
Use sitemap properties and RSS properties as appropriate.
Tell search engines where your sitemap XML URLs and RSS URLs are located by referencing them in your robots.txt file or by publishing the location of your sitemaps in search engines' Webmaster Tools.
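In robots.txt, that reference is a single directive per sitemap or sitemap index (placeholder URL):

    Sitemap: http://www.example.com/sitemap-index.xml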
Scaling up sitemaps to very large sites
Interestingly, some sites these days are large… really large… with millions to billions of URLs. Sitemap index files and sitemap files can each hold up to 50,000 links, so with one sitemap index file you can list 50,000 x 50,000 links = 2,500,000,000 links. If you have more than 2.5 billion links… think first about whether you really need so many links on your site. In general, search engines will not crawl and index all of that. It's highly preferable that you link only to the most relevant web pages, to make sure that at least those relevant pages are discovered, crawled, and indexed. Just in case you do have more than 2.5 billion links, you can use 2 sitemap index files, or a sitemap index file linking to sitemap index files, offering up to 125 trillion links: so far that's still definitely more than the number of fake profiles on some social sites, so you'll be covered.
The main problem with extra-large sitemaps is that search engines are often not able to discover all the links in them, as it takes time to download all these sitemaps each day. Search engines cannot download thousands of sitemaps in a few seconds or minutes without over-crawling web sites; the total size of sitemap XML files can reach more than 100 gigabytes. Between the time we download the sitemap index file to discover the sitemap file URLs and the time we download those sitemap files, the sitemaps may have expired or been overwritten. Additionally, search engines don't download sitemaps at a specific time of day, so they are often not in sync with a site's sitemap generation process. Having fixed names for sitemap files does not solve the issue either, as the files, and so the URLs listed in them, can be overwritten during the download process.
To mitigate these issues, a best practice to help ensure that search engines discover all the links of your very large web site is to manage two sets of sitemap files: update sitemap set A on day one, update sitemap set B on day two, and continue alternating between A and B. Use one sitemap index file to link to Sitemaps A and Sitemaps B, or have 2 sitemap index files, one for A and one for B. This method gives search engines enough time (24 hours) to download a set of sitemaps that is not being modified, and so helps ensure that they have discovered all your site's URLs in the past 24 to 48 hours.
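As a rough sketch of that rotation logic (in JavaScript purely for illustration; regenerateSitemapSet is a hypothetical stand-in for your own generation job):

    // Alternate daily between sitemap sets A and B, so each set stays
    // stable for a full 24 hours while search engines download it.
    var daysSinceEpoch = Math.floor(Date.now() / 86400000);
    var setToUpdate = (daysSinceEpoch % 2 === 0) ? 'A' : 'B';
    regenerateSitemapSet(setToUpdate); // rewrite only that set's files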
Regards,
Fabrice Canel
Principal Program Manager
Bing Index Generation

Analyzing 11,555 Questions Asked by SEOs: The Moz Q&A Meta Study

Sometimes we don’t need to travel to exotic linked data sources to discover treasure troves of precious information about our audience’s desires, aspirations, fears, and complaints.

Sometimes that treasure is just as far away as a phone call to the customer care department.

Sometimes it is just a click away in the Q&A and/or Forum section of our site.

And sometimes it's just there, freely offered by our own competitors to everybody able to retrieve the correct information from them.

Understanding what our audience is really talking about, what the specific language is that they use, and what their topics and themes are can be easier than we may first think.

Be aware that I don't mean that extracting useful information about our audience is easy – that would trivialize the audience targeting work – but that nowadays, thanks to the social nature of the web, it is much easier to find valuable sources from which to retrieve information than it was just ten years ago.

For this reason, as I already said in 
my previous post, I asked the editorial team at Moz to let me analyze one year of Moz Q&A, with the purpose of identifying what the community was most frequently talking and asking about, and so trying to paint a better portrait of the community itself. Finally, I wanted to offer the Moz team insights that can help them provide a better experience to their users.

I don't know if I was able to identify the "100 most asked questions," as Rand asked, but the method I used, and that any of you can refine, is the correct one for producing that kind of list.

The Method

The first thing I did was
extract from the Moz database the following information related to questions published in the Moz Q&A between May 1, 2013, and April 28, 2014:
- The ID number of the questions (this is extremely important, because the same question may be published to a maximum of five categories and because, yes!, there are questions that are 100% identical in their phrasing);
- The date each question was asked;
- The URL for each question;
- The question itself (labeled "Title");
- The number of answers to each question;
- The number of thumbs up obtained by each question;
- The categories to which the questions were assigned.
From the database extract, it was not possible to retrieve other very important information, such as:
- The number of views (I had to manually scrape this information, as I don't have direct access to Moz's Google Analytics);
- The real category (I had to look those up manually and add them to my spreadsheet).
You are probably asking, “What is the real category?”
In the case of the Moz Q&A, the "real categories" are those that include the actual categories. They are an upper taxonomy level, which is shown to users when they are asking a question, but not when filtering the questions:

The “real categories” are necessary information, because they help organize the questions into very recognizable macro-topics.
In order to quickly and easily understand topics,
I decided to use Wordle to create word clouds. Wordle has the great option of letting you hide words that complicate your analysis, letting you focus on the relevant words.
Finally, to understand what the questions were that really mattered to the Moz community in the analyzed time-frame,
I followed these simple consecutive rules:
- Questions with more views matter more than questions with fewer views;
- Given equal views, questions with more answers matter more than questions with fewer answers.
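Expressed as code, that ranking is a simple two-key sort; a minimal sketch (the views and answers field names are my own assumptions about the spreadsheet columns):

    // Order questions by views, breaking ties by number of answers.
    questions.sort(function (a, b) {
      return (b.views - a.views) || (b.answers - a.answers);
    });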
I didn't take into account the number of thumbs up as a metric, for the simple reason that questions are very rarely thumbed up. My decision would have been different if I were also taking the answers into consideration.
For a more refined analysis, then, I’d recommend also considering the number of “Good Answers” and the presence or absence of a “Staff Endorsement.”
What other tools did I use for conducting my analysis? None but Excel.
Moz Q&A bird's eye view
Between May 2013 and April 2014, 26,775 questions were published in Moz Q&A, but if we eliminate the duplicates from those that were published in more than one category, there were
11,555 unique questions published.
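The de-duplication itself is straightforward once you have the question IDs; a quick sketch (rows and its id field are assumptions about the export format):

    // Count unique questions by ID, since the same question can be
    // published to up to five categories.
    var uniqueIds = {};
    rows.forEach(function (q) { uniqueIds[q.id] = true; });
    console.log(Object.keys(uniqueIds).length); // 11,555 unique questions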
First problem: Which number should I consider in my analysis? The raw number of questions or the one including the duplicates? The answer was easy: the raw number.
The reason is that it is impossible to know which category a user considered the “main” one when publishing a question in more than one category; any choice I made would have been totally subjective, and would have voided the analysis.
In certain cases, though, I preferred checking the de-duplicated list as well, in order to confirm my first impressions.

What are the Q&A users talking about?

The word cloud is quite clear. The Moz community is:
Obsessed with Google;
Composed mainly of SEOs (SEOs, Site, Ranking, Link…);
Asking primarily on-site questions; and
Interested in content, though mostly in its relation to SEO (as we will see later).
This is even clearer if we see how many questions have been asked during the 12 months I analyzed:

We can easily see how “The SEO Process,” which includes all the categories directly related to SEO in the Moz Q&A, stands far above all the others. 
If we hide the “The SEO Process” questions, we can better understand which other macro-topics Moz users are interested in:

Q&A is also the space where Moz users can publicly ask the Help Team questions about the Moz tools, and the specific nature of this category explains why “Moz Products” is the second most popular topic in the Q&A.
Then, two different but equally important points emerge from this graphic:
Despite the tireless efforts in evangelizing inbound marketing, the “Online Marketing” category, which includes all the inbound disciplines but SEO, is not really performing well in Q&A, as if the users (mostly SEOs) were still too worried about classic SEO issues;
“Local Marketing,” a category that was only created in January 2014, has quickly reached an interesting volume of questions. This could be telling us that Moz did well to create Moz Local, because local search marketers make up an important percentage of Moz users.
Be aware, then, that the decrease in the number of questions we see in the charts is not due to a diminished interest in SEO by the users but, as described in my previous post, to the design of the Moz.com site in comparison to the old SEOmoz.org one.

Digging into the data

The SEO Process

The SEO Process category comprises seven subcategories.
On-Page / Site Optimization (3,967 questions) and Technical SEO Issues (4,118 questions) are almost tied for first position, which clearly indicates that classic SEO is still the most important source of doubts for the Moz community.
A reason for the success of these categories, confirmed by the third position of Intermediate & Advanced SEO, could also be the increased difficulty of technical SEO, which has a steep learning curve—especially for the new generation of SEOs coming from the marketing/communication fields and not engineering/computer science.
Content & Blogging, which could be considered the “content marketing” side of the SEO Process, is only fifth, after the supposedly dead Link Building.
The Vertical SEO and Keyword Research categories come last, and while Keyword Research can be considered a smaller topical niche compared to much wider ones like Technical SEO, it’s quite surprising that questions about vertical searches (news, videos, images) are not more common. Sure, Local Search, which was the most important vertical, now has its own macro-category (Local Marketing), but I was quite surprised nevertheless.

In this Wordle related to The SEO Process category, I omitted the word Google, because it was dwarfing all the others in the word cloud, making the analysis difficult.
Looking at the word cloud, it is almost obvious that Moz users are especially concerned with these topics:
Duplicate Content;
Duplicate Pages;
Duplicate Site/Website;
Links/Backlinks.
If we associate the topics, we can understand that
two big fears are constant:
Panda (which, curiously, is not called out explicitly in the questions) and Penguin.
Users come to the Moz Q&A to find help for their penalized sites (drop, dropped, penalty, disavow, problem, manual…), because they have understood their site is at high risk of penalization, or simply because they need to vent their indignation.

Link Building
I want to start with the Link Building subcategory because it is a very good example of what I’ve just said above.

I removed the words “Link” and “Links” for better visibility of all the other words.
It’s interesting to see how the questions tend to be about penalty issues (Penguin, Penalty, Ranking, Disavow, Unnatural, Spam, Anchor…), about outdated tactics (press releases, directories…) or risky ones (e.g. buying old domains with strong link profiles), and about substantially blaming Google for letting other sites (especially competitors’ sites) rank well even with a supposedly spammy link profile, or for “killing” every link building option.
What doesn’t emerge from the word cloud are the frequently viewed and commented-on questions about tools, usually link analysis tools for Penguin recovery (e.g. Link Detox, Cognitive SEO).
In general, the sensation is that the users asking questions are usually new to the practice of link building. A constant trend, though, is evident: people ask for creative help because they are working in so-called boring niches, or because they are dealing with niches usually dominated by spammy link building practices. This trend should make us all reflect when writing about link building, because we tend to write as if everybody were dealing with big brands and big budgets, when clearly that is not the case.

Another useful exercise is seeing how very specific topics return over and over in the Q&A. Obviously, for this very granular kind of analysis, it would be better to also have the full question text in the dataset, and not only its title.
Let’s take “Penguin” as an example:

The spike we see in October coincides with the rollout of Penguin 2.1, and confirms the importance of Q&A for feeling the pulse of our audience almost in real time. For this reason, using tools like Fresh Web Explorer to monitor our keywords’ mentions in our own Q&A is essential for spotting hot trends and creating very timely content.
Finally, there’s a word that I totally missed and that, IMHO, should be one of the most relevant ones in the word cloud: outreach. And there are very few questions and discussions about strategy, too, which makes me very sad.

Technical SEO Issues
This is the king of all the categories of the Moz Q&A. And it is quite ironic: while technical SEO is losing visibility to other topics in the SEO-blog world, at the end of the day the most common questions asked by SEOs are about the most classic of SEO subjects.
But what are the topics that worry the Q&A users the most?

Duplication issues, and the related canonicalization issues, seem to represent a big portion of the SEOs’ worries when it comes to Technical SEO. Another classic cause for concern is a site’s migration.
And, clearly, SEOs are worried about optimizing their site for Google (I feel sorry for Bing, but this is the real world).
The presence of “Links” and link-related words is partly caused by the liberty given to users to publish questions in up to five categories; as a result, many questions that should fit almost exclusively in the Link Building subcategory are also present in Technical SEO Issues.
That said, there is also a good bunch of questions related to internal linking, especially in relation to information architecture, crawl budget management, and the noindexing of duplicated pages.
We can also find quite a few questions about
“Why is my site not indexed by Google?”
A smaller but relevant number of questions surround technical SEO issues generated by the most common CMS platforms (WordPress, Magento, Drupal, and Joomla); apart from WordPress, this is the kind of topic that is not given much consideration on the Moz and YouMoz blogs.
Finally, classic evergreen topics are htaccess and regular expressions: maybe Moz could think about a specific cheat sheet, or even create an htaccess generator better than the ones already available online.
The quality of the questions and answers here, then, is higher than in Link Building, even if the number of “newbie” questions is still large.
The engagement level of the community is greater, too, and good examples of this engagement can be found in this question about a migration gone wrong or this less-silly-than-it-seems question about the use of the meta keywords tag. Both confirm that the biggest part of the Moz community is still composed of SEOs.

On-Page / Site Optimization
On-Page / Site Optimization is the second most-used category in all of Moz Q&A, but this figure is strongly influenced by the fact that users tend to categorize their questions under both Technical SEO Issues and On-Page / Site Optimization.
For this reason, in order to better understand what can be attributed exclusively to this category, we must de-dupe the questions. The result is something like this:

The topical landscape shows us that users still tend to think of on-page / site optimization in terms of keywords and related keyword-centric topics (e.g. the title tag).
It’s quite surprising to see how barely present a hot topic like semantic search is; we almost don’t see words like schema, semantics, structured data, et al.

One of these two things is likely correct:
Users do not have any problem with semantic SEO (which I doubt); or
Semantic SEO is still in an “early-adopters” phase (which is what I believe).
If we analyze our Q&A sections to find new ideas, then this “absence” should point us toward creating better and more understandable content about semantic search, so as to educate our audience and be consistent with our mission.

Intermediate & Advanced SEO

This category suffers from the same problem as the previous one: users tend to file questions under Intermediate & Advanced SEO that really should be attributed to other categories.

For this reason, if we do not make a conscious de-duplication effort, the topics seem to be essentially identical to other categories.

The problem, then, is the lack of a clear definition of what is meant by Intermediate & Advanced SEO. Without defining this clearly, the concept of “advanced” depends entirely on the SEO education level of the users asking questions, and what emerges quite clearly is that the Moz Q&A public generally is not really advanced.

But if we decide that advanced stands for questions that experienced SEOs may also find difficult to answer, then we can see
interesting topics:

Ecommerce sites tend to be the most difficult to handle from an SEO perspective;
Duplicated content and canonicalization questions, even when the most basic ones are omitted, are still the most asked, especially in relation to product pages and blog posts/categories/tags;
Robots.txt, noindex, and the nuanced uses of rel=”canonical” can result in a sort of puzzle that is difficult to solve;
Information architecture, site structure, and crawlability tend to be asked about almost exclusively in this category.
A special mention must be made for infinite scrolling, parallax design, and SEO for Ajax in general: topics that can be discovered as relevant to the community only if we consider metrics like page views and number of comments. Their popularity and level of engagement confirm that there’s a space in the Moz Q&A for really advanced SEO questions; the problem is keeping them from sinking into a sea of basic ones.

Content & Blogging
The questions present in this category represent how
SEOs look at content:
as a method of ranking better.

This could lead to a discussion about how much SEOs have really understood the importance of Content Marketing (and blogging) as an inbound tactic for making your site/brand relevant to users, and hence able to earn popularity, shares, and links, rather than just another SEO task for ranking better on Google.
That’s not to say that Moz users aren’t aware of the real meaning of Content Marketing, but they still struggle to understand its effects on SEO. Good examples of this attitude are found in these two questions:
All this also explains why the most popular questions are related to the SEO
technical side of content optimization:
Rich snippets;
Indexation of non-HTML content (e.g. PDF files);
Authorship;
Indexation and duplicated content.
Or to content creation for link building (e.g. “guest blogging: good or bad?”).

Keyword Research and Vertical SEO
These are the Cinderellas of The SEO Process category.
This is due to their very specific nature, which is very clear to all users; as a result, we rarely find cross-posted topics like duplication or canonicalization here, even if they are still present.

In the case of keyword research, questions tend to be very specific, the most popular usually being about tools (
like this question) or keyword mapping (like in this other example).
Vertical SEO, instead, is particularly interesting because it effectively maps which verticals Moz users deal with most often:

The dominance of
Local Search is evident, and it justifies:
That Moz created a specific Q&A category for Local Marketing; and
That the number of questions posted in the Vertical SEO subcategory has plummeted since the Local Marketing category was created.

Video Search, with questions mostly about video hosting and YouTube optimization, is the second vertical in importance and frequency, followed by Image Search. News Search, instead, is almost absent, with just one question that explicitly asks about that topic!

Online Marketing

As I said before, the Online Marketing category includes all the inbound marketing disciplines except for SEO.

What emerges from the word cloud, though, is how
the unofficial title for this category should be “How to use other Online Marketing disciplines for SEO.” The overwhelming presence of Google and, secondarily, of SEO tells us just that.

Nine subcategories are present in the Online Marketing category:

As we can see, the interest SEOs have in each discipline determines the ranking of these subcategories. This explains why Social Media ranks first, immediately followed by Web Design, while a discipline like Email Marketing ranks last (tied with Affiliate Marketing).

The poor performance of Affiliate Marketing suggests that the SEOs working in that niche are largely not part of the Moz community, or that they don’t consider Moz their site of reference.

What we can conclude is that
Moz is mainly used by SEOs who employ other online marketing disciplines within a wider inbound marketing strategy, but whose main focus is the relationship between those disciplines and the SEO process rather than their specific intricacies.

A final observation: the Moz community is very practical, looking for tools that can make their professional lives easier, or for tips on how to use those tools better.
Social Media

Let’s take a look at the Social Media word cloud:

Google+, Facebook, Twitter, and YouTube are the social media platforms people are asking about most. Social networks like LinkedIn or Instagram are present, too, but their presence is almost symbolic.

Google+ is the most cited social network by far, and this should not surprise us if we remember that SEOs make up the vast majority of Moz users, and how important Google+ is for SEO.

The analysis of the questions about Twitter shows almost the same trend, but there are some that really could be taken as evidence for my theory that it is SEOs asking the questions, such as this question asking whether the same content tweeted by two different accounts could be considered duplicate content: no social media marketer could have even imagined asking this.
Web Design
It should not surprise us that Web Design is the second most asked-about online marketing discipline. Aside from the timeless love/hate relationship that SEOs have with web designers, the evolution of Google and the increasing importance of correct web development for SEO performance explain it.

In this word cloud I purposely deleted words like
Google, Design, and SEO in order to better see the real topics users discuss in this subcategory.
We see two trends:
Questions related to CMSs, especially WordPress (though Magento is also quite present);
Questions about the mobile web and responsive design.

Site speed and performance optimization emerges as a third topic if we examine the questions more deeply.
Generally, though, we again see SEOs asking the questions, and many times they categorize as Web Design questions they have also asked in some subcategory of The SEO Process. This may indicate that many users are convinced that, for instance, duplicated content issues are somehow related to poor site design (when, maybe, they should look more at information architecture).

Online Marketing Tools
I think the correct name for this category should be “SEO tools:”

If we look at the questions, and also take views and answers into consideration, what we see is that
the vast majority of questions are directly related to the SEO process. 
We have questions about 
Google Webmaster Tools, keyword tracking, Google Analytics (and other analytics needs, such as tracking phone calls, or alternative tools), the Google Places dashboard, and so on.
The only online marketing discipline that emerges with some force in this SEO-dominated landscape is AdWords. By contrast, we have a very small and dispersed presence of questions about tools for social media (which comes third as a topic) and other online marketing areas.
Is this a sign that SEOs:
Know about the importance of the other inbound marketing disciplines, but don’t deal with them directly? Or
Deal with those disciplines only sporadically, and therefore don’t feel the urgency of using specific tools for them?

The other Online Marketing subcategories
The remaining six online marketing subcategories generated fewer questions than the three previously described (1,019 vs. 1,291 questions). Moreover, many of their questions could be considered duplicates from other categories.
Some of these Online Marketing subcategories, then, generated fewer than 100 questions:
Affiliate Marketing: 54 questions;
Email Marketing: 56 questions.
A special mention, though, must be given to Paid Search Marketing and Internet Advertising:

We can easily see how AdWords dominates users’ attention, but we should not forget the emerging importance of native advertising and social advertising for link building purposes.

It would be interesting to match this interest in AdWords with the data collected by the Industry Survey Moz ran a few months ago. We would probably find that many SEOs also offer PPC services or (in the case of in-house SEOs) have AdWords as one of their tasks.
Again, the predominantly SEO nature of Moz users emerges.

Measuring & Testing
Aren’t we saying all the time that SEO and inbound marketing are data-driven Internet marketing disciplines? Yes, and search marketers are aware of the importance of measuring and testing, but nevertheless this category has only about 1/7 of the questions that “The SEO Process” has (2,127 questions vs. 16,015).
Five subcategories are present:

The
evident decline of Reporting over time made me wonder: could its declining interest be due to the fact that Moz users had been asking questions in this category about the Moz Pro / Moz Analytics reporting functions? Once Moz created a proper Moz Products category in Q&A, almost all of those questions disappeared from Reporting.
The chart seems to confirm and reassure us that the users of the Q&A are data-driven folks.
But is it telling us the real story?

The answer is:
not really.
This word cloud is clearly telling us that
“analytics” is a synonym for Google Analytics among Moz users.
Moreover, the great relevance of the word
Traffic should alert us. In fact, if we examine the Analytics questions one by one, we discover how frequently users refer to Google Analytics just because it was the tool that showed them a loss in organic traffic. Users, then, tend to publish these questions also in some of the most popular subcategories of The SEO Process category.
Again, the freedom given to Moz users makes it difficult to retrieve unique information at a subcategory level.
Difficult, but not impossible.
If we want to find questions that are completely devoted to analytics, then we must focus on the word Tracking. Doing so, we find the most interesting questions, mostly about Google Analytics implementation issues (how to set up goals with event tracking, ecommerce GA implementation issues, custom URL tracking, etc.).
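To give a flavor of what one of these implementation questions looks like in practice, here is a minimal, hedged sketch of sending a custom event hit to Universal Analytics through the Measurement Protocol; the property ID and client ID below are placeholders, not working values:

```python
import requests

# Minimal Measurement Protocol event hit (Universal Analytics).
payload = {
    "v": "1",             # protocol version
    "tid": "UA-XXXXX-Y",  # tracking/property ID (placeholder)
    "cid": "555",         # anonymous client ID (placeholder)
    "t": "event",         # hit type
    "ec": "newsletter",   # event category
    "ea": "signup",       # event action
    "el": "footer-form",  # event label (optional)
}

response = requests.post("https://www.google-analytics.com/collect", data=payload)
# Note: this endpoint returns 200 even for malformed hits; use the
# /debug/collect endpoint to validate a payload during development.
print(response.status_code)
```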
All these questions hardly find an answer in other sections of the Moz site, but clearly
they manifest a need. Maybe it’s time to create a very practical Google Analytics implementation guide or cheat sheet?

Research & Trends
Personally, this is my favorite Q&A category. Why? Because in it we can find questions about international search, alternative search sources, and a space for discussing the most advanced trends in search and everything related to audience targeting.

We could define it as a category devoted to strategy, but one that doesn’t forget to translate strategy into concrete tactics.
Unfortunately, not many Moz users share that enthusiasm: in these 365 days, they asked only 1,319 questions in this category, half of them limited to the “Search Engine Trends” subcategory:
Search Engine Trends
What are the Search Engine Trends Moz users discuss?
Personally, I could already imagine the answer, but let’s check what the word cloud tells us:

Looking at the word cloud, something doesn’t add up here.
Where are Hummingbird, Knowledge Graph, MyAnswers, semantics, and patents? Instead of those terms, we see: ranking(s), drop(ped), bad, traffic, update, penguin, duplicate and semantically related words.
If we look directly at the questions, what we observe is how
Search Engine Trends is practically a synonym for penalties, and, let me tell you openly, dear Moz community members: penalties are not a search engine trend.
Only three questions about Knowledge Graph have been asked in 12 months. Four about Hummingbird (two of which by people convinced Hummingbird penalized their sites!). A topic like Personalized Search—which should be talked and asked about here—is completely absent.

Something is wrong here. Probably the Search Engine Trends subcategory is just another category users file their questions under simply because the option is there. Or, maybe, Moz (and I include myself) still has not been able to create the right awareness about the importance of staying constantly updated on how search engines are evolving.
Or Moz users simply are more interested in finding immediate answers to very practical needs, preferably in the form of tips and tricks.

International Issues
This subcategory is substantially different. In this case, almost all the questions are really on topic and very specific, as is made clear by viewing the word cloud:

Topics like
localization vs. simple translation, the correct implementation of hreflang annotations, keyword research for multi-country sites, and how to deal with social media for multinational businesses are all present, at many levels of difficulty.
I am surely biased, but the International Issues subcategory is the best example of what a Q&A category should be: clear in its nature.
The other Research & Trends questions
I must admit that when I saw the word cloud of
Alternative Search Sources I laughed a lot:

GOOGLE?! Alternative search source?!
In all seriousness, apart from this obsession with the Big G, it’s interesting to notice the presence of Bing and Yahoo, and the very few questions about Baidu, Yandex, and Naver (only two!). It’s clear that Moz users spend 99% of their time on Google and allocate only a tiny amount of time to other search engines. It is also clear that SEO outside the classic American-focused search engines is not something they are concerned about (probably because they are not dealing with it).
Finally, if you return to the chart with questions asked in the Research & Trends category, it is interesting to see the
strong decrease in questions about Behavior and Demographics. Why? Because people aren’t really asking questions about those topics, and the biggest percentage of the questions classified as Behavior and Demographics are what I’ve defined as “duplicates” of other categories.

Community

Community is a Q&A category mostly meant as a space for discussing topics about the inbound marketing industry, not one where people ask for help.
Seeing that the only topic within Community that really matters to Moz users is White Hat / Black Hat SEO is quite depressing, but it reflects the worries SEOs have about practices like negative SEO, or about penalizations for spammy link building tactics used in the past.
And those same topics dominate the other subcategories, which are not formally about spam, link penalizations, and negative SEO:

To be sure, we can find words like MozCon and Articles, but they are just a few words among many irrelevant ones.
If I were Moz, I would seriously reconsider this category.

Business Development
The Business Development category has a very multi-faceted nature, where the common denominator is the practical life of a search marketer. Given that nature, a subcategory I wish existed here is one about how to deal with clients:

The questions present in this category, then, seem to suggest that it’s a
category mostly used by independent SEO consultants or owners of small SEO companies.
This may explain why only 504 questions have been asked in Business Development.
But, despite the small number of questions, this is the category with the
highest ratio of answers per question: 4.47.

Local Marketing
Local Marketing is a relatively new macro-category; it was created in January 2014.
Despite being new, it has been able to attract the attention of the many SEOs specializing in local search:

Local Strategy, Local Listings, and Website Optimization for Local Search are the most-used categories, and this interest is also reflected in the word cloud:

What surprised me was (finally!) seeing “schema” present in the word cloud.
It turns out that how to use Schema for local search is quite a hot topic that is able to create great engagement, 
like in this question.

The Moz Support Q&As
A
Q&A section, at least in Moz’s case, is also a place a company can use for offering customer service.
Aside from the obvious benefits, a great
advantage of using Q&A for this purpose is that the company itself can collect useful data about how its own products are perceived, their weakest points, and the needs its users are expressing.
Initially, the support side of the Moz Q&A was limited to two categories (Moz Products and Pro Application), but during the last twelve months Moz rationalized the questions, creating a taxonomy based on the different areas of Moz Analytics (Search, Social, Links, and Brand Mentions) and the stages of learning the tool. Finally, specific Q&A categories were created for all the other tools owned by Moz (Moz Local, OSE, Followerwonk, APIs).

The chart above speaks for itself:
the tools users are most concerned with are the ones most strictly related to classic SEO functions:
Search;
Links;
Other Tools (which includes tools like the Keyword Difficulty Tool, the Rank Tracker, and the Crawler test).
What a clear confirmation of what has been repeatedly said in this analysis:
Moz users are SEOs, maybe adopting inbound marketing as a way of thinking, but ultimately SEOs.
For this reason, we can say that Moz’s partial return to focusing more on SEO practitioners, even if under the inbound marketing philosophy, is well justified by the composition of its audience.
And what is its audience telling Moz about its products? This:

Moz subscribers’ concerns, doubts, and desires are mainly directed toward pure SEO tools:
Keyword tools;
More extended crawling functionalities (and some better clarity, as in the case of the duplicated content algorithm Moz applies to its crawler);
Links-related tools;
Better reporting functionalities.
Not that other inbound marketing facets of Moz Analytics are not considered useful, but they are not considered as essential as the SEO ones. 
One thing, though, clearly emerges from analyzing the Support Q&As: the strength and participation of the Moz community itself. In fact, the biggest percentage of the answers given to these questions come from Moz users.

Conclusions
The analysis of the Moz Q&A tells us many interesting things about the Moz community:
1. It is composed in its majority of SEOs;
2. A big part of the community is represented by SEOs who are beginners or have an intermediate knowledge of SEO;
3. Advanced SEOs tend to ask fewer questions, and when they do, it’s usually in very well-defined niche subcategories (e.g. international issues);
4. The Moz community is generally proactive: only 2,120 of 11,555 questions (de-duped count) received fewer than two answers;
5. Notwithstanding point 4, fewer than 500 questions were able to generate an ongoing discussion (10+ answers);
6. Users tend to turn to Q&A in cases of extreme necessity: penalties and (apparently) unsolvable technical issues;
7. Moz users look for and appreciate concrete, actionable tips more than discussions about the whys of search strategy;
8. SEO dominates and influences every Q&A category, which means that inbound marketing seems to be considered a new framework that includes SEO, while SEO itself substantially seems to keep the same functions it had before.
The analysis, to conclude this gigantic post, is telling us something we all need to reflect on: inbound marketing still hasn’t put down solid roots in the minds of search marketers, and despite what the majority of the Moz community says publicly, it seems it’s still thinking in terms of the old, classic SEO.

Image credit: Fear and Loathing in Las Vegas by Terry Gilliam – Universal Pictures

Basic Probability for Marketers: The Almighty P-Value

As someone who spends a lot of time dealing with maths (the joys of data vis development!), I spend a lot of time entrenched in statistics and probability. Whilst that’s great fun, or so I like to think, I’m always aware that there are a lot of people out there who were never really taught the why behind a lot of maths, and so I’ve decided to write a short series of introductory posts on both, and on the part they play in the life of a modern marketer. First, however, a disclaimer…
Disclaimer
Once you actually understand these, you’ll become deeply annoyed by the vast bulk of really bad statistical and probability-based reporting out there. I make no apologies for this.
The Outline
In this fourth post, we’re going to look at the p-value: the foundation of statistics. As always, don’t worry if you’ve not come across these terms before; we’ll break down each one to look at what it does, and why it matters.
p-value: The Root of Probability
In this, our first post on probability, we’re going to look at p-values, what they are, how they’re calculated, and what you can apply them to. So let’s start with a definition:
p-value
The probability of obtaining a result at least as extreme as the one observed, assuming the null hypothesis is true
Null Hypothesis
The hypothesis that no relationship exists between the variables being measured
Alternate Hypothesis
The hypothesis that assumes a relationship does exist between the variables
So a p-value should tell us how likely it is that a value should turn up, if the things we’re looking at happen to be unconnected. The beauty of this form of reasoning is that it’s what’s properly known as a reductio ad absurdum, or argument to absurdity. This form of argument shows something as true by showing that, if it were not the case, something truly bizarre or impossible would happen.
With our hypothesis, all we therefore have to do is show that our null hypothesis is probably false, which we can do by providing data that would seem to be beyond the bounds of acceptability if it were true.
Time for Examples
For our first example, let’s say that we’re rolling a dice. The dice in question has six sides, and our null hypothesis is that it’s not biased. We then roll the dice three times, and observe that we rolled a six every time. The probability of this happening is 1 in 216, or 0.0046. In testing we traditionally look for 95% or 99% confidence, or to put it another way, a p-value under 0.05 or 0.01. Our value of 0.0046 is below both of these, therefore we declare the dice biased. And here’s the first way we can mess up…
Sample Size
The problem with our first example lies in the design of our experiment. We only rolled the dice three times, and whilst it’s unlikely that we’d see three sixes, I’d be willing to say it’s not so rare an occurrence that we could reject our null hypothesis from just three rolls. If we were to roll the dice, say, 15 times, and see a vastly inflated number of sixes, we’d be far more capable of saying that the dice was biased. Let’s say we do roll the dice 15 times. There are 470,184,984,576 possible combinations of those rolls. We also know that with 15 rolls, we should expect to see the number 6 occur between 2 and 3 times if it’s a fair dice. In an absurdly perfect world, half the numbers would come up twice, with the other half occurring three times.
So what are the odds of us seeing, for example, 10 sixes? Well, if you work it out, it’s 0.00001996, or 1 in 50,103. In other words, if you were to roll that dice a total of 15 times, and then do it again and again and again, you’d expect to see your result once for every 50,103 times you repeated it. Needless to say, 0.00001996 is far less than either 0.05 or 0.01, so we can be pretty darn sure that our dice is biased. Again though, we can’t be certain. It could just be a complete fluke.
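Both of the dice probabilities above can be checked in a few lines of Python, using nothing beyond the standard library:

```python
from math import comb

# Three rolls, three sixes: (1/6)^3
p_three_sixes = (1 / 6) ** 3
print(round(p_three_sixes, 4))  # 0.0046, below both 0.05 and 0.01

# Exactly 10 sixes in 15 rolls: C(15, 10) * (1/6)^10 * (5/6)^5
p_ten_sixes = comb(15, 10) * (1 / 6) ** 10 * (5 / 6) ** 5
print(round(p_ten_sixes, 8))  # 0.00001996, about 1 in 50,103

# Strictly, a p-value covers results *at least* as extreme, i.e. the
# tail P(X >= 10); here the tail is of the same order of magnitude,
# so the conclusion doesn't change.
```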
There are of course other ways we can mess things up with testing, but we’ll be looking at those over the course of this short series. However, here are a couple of quick warnings, to make sure you’re not likely to mess up so badly straight away (for certain values of likely, of course!)…
Some Words of Warning
Always remember that a p-value tells you nothing about how likely the alternate hypothesis is (that is, how likely it is that a relationship actually exists in your data); it only tells you how unlikely your observed data would be if the null hypothesis were true.
Also, be very careful when looking at multiple factors in your data. By simply increasing the number of factors you look at, it becomes ever more likely that you’ll find one that appears to be having an effect, not because it actually is, but just because of random chance.
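A quick simulation makes the danger concrete: test 20 unrelated factors at the 0.05 level and the chance that at least one looks “significant” purely by luck is already about 64% (that’s 1 - 0.95^20). A minimal sketch:

```python
import random

FACTORS = 20
TRIALS = 10_000
false_alarms = 0

for _ in range(TRIALS):
    # Under a true null hypothesis, a p-value is uniform on [0, 1].
    p_values = [random.random() for _ in range(FACTORS)]
    if min(p_values) < 0.05:
        false_alarms += 1

print(false_alarms / TRIALS)  # ~0.64, i.e. roughly 1 - 0.95 ** 20
```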
There’s a relevant XKCD on exactly this: more comparisons can lead you to see things that aren’t there.

Come back next time, when we’ll look at the basics of Bayes, and start tying probability and statistics together!


Is Your Content Credible Enough to Share?

The author’s posts are entirely his or her own (excluding the unlikely event of hypnosis) and may not always reflect the views of Moz.

Insufficient credibility undermines digital marketing, particularly among SEOs who now produce or promote content as part of their job. People won’t share content that isn’t credible; they know the things they share reflect on them and impact their own credibility. While the importance of credibility gets mentioned in passing, little has been said about how to actually build it, until now.

Your Guide to Establishing Credibility
You build credibility by signaling to the reader that you can be trusted. The signals of trust can come from the author, the site, and from within the content itself. Each signal will appeal to different types of readers in different contexts, but they come together to make content that is credible enough to share.
Rand mentioned credibility in his
Content Marketing Manifesto as one of the things we need in order to build familiarity, linkability, and trust. Several studies have also shown credibility’s critical role in promoting and sharing. So, let’s build some credibility.

1. Establish expert credibility
Expert credibility comes from having knowledge others do not. People want experts they can understand and trust, especially when trying to understand complex or ambiguous topics like new technology, engineering, advanced science, or law.

Be an expert or hire an expert with insight
A Syracuse University
study found “insightful” content was most correlated with users’ estimation of a blog’s credibility. You can’t offer interesting insight on a subject you know very little about, so obviously you need to be an expert or hire one.
Unless your expert has breaking news, he or she needs to provide quality analysis and opinion to add any value. Most successful non-news content is opinion and/or analysis, whether verbal, graphical, or textual.
If you’re creating video or text content for your site, the expert should also be able to clearly express complex subjects in a way readers can understand and follow. If he can’t, then get a content writer to interview the expert and relay the information.

Source experts
Do not try to give your opinion as an expert in a field where you’re not one. It won’t work.
We’ve all read non-expert content on subjects where we’re knowledgeable. We know what expertly written content looks like and can easily detect pretenders. If you pretend to be an expert and get one little detail wrong, you’ll blow all your credibility with the people who actually understand and influence the discussion. They won’t link to or share that piece of content, and they may never share any of your content again. Don’t take that risk.
Rather than trying to fake expertise, try finding experts and incorporating their expertise into your post. Journalists have long understood this tactic. Even journalists who
are experts use quotations from other experts in both news and analysis pieces. The front page of the Washington Post’s technology print section is usually littered with quotation marks and according-tos.
People running blogs can easily get a quote from someone knowledgeable enough to have an opinion that matters. Experts with strong opinions usually want to share them.

Be passionate to build trust
The
Syracuse University study and this University of Pennsylvania study show that passion is key to judgments on credibility and sharing. Readers don’t just want an expert who can explain things; they want an expert who cares.
Experts who know what they’re talking about tend to have nuanced and sophisticated opinions about subjects they understand. Don’t undercut that understanding with a shallow piece of content. Expert pieces should be deep and thoughtful.
Legal experts who really care about
Citizens United vs. Federal Election Commission simply wouldn’t take the time to write a bland essay on what the ruling said and how it might impact the future of politics. SEO experts don’t want to report on the fact that Google penalized guest post networks. They care, and want to explain why it’s good or bad.
Expert opinion shouldn’t be confused with argument, and it doesn’t require you to start a firefight with anyone who’s taken the other stance.

Cite sources
Cite the sources for all your expert insight. Citing expert sources is the most obvious way to back up your claims and gain trust. Often citing a source is as simple as linking to the webpage from which you got your information.
Don’t use
weasel words like “it has been said” or “many people believe” to skirt the citation responsibility. Experienced writers and editors instinctively close the tab on any content attempting to unnecessarily blur its sources.

Show data
Sometimes, instead of breaking news, you can add to it with data. Data lends credibility to your post in a unique way because with numbers, your sources and methodology are more important than the author’s history and popularity. The data, if it’s compiled and analyzed correctly, speaks for itself.
For example, when the CableTV team heard about the potential Comcast/Time Warner merger, we knew simply sharing the news would be a waste of time. Every major news outlet would easily drown out our site, and opinion pieces were popping up everywhere. Instead, we crunched some numbers, comparing U.S. Census data with coverage data, producing a
coverage and population analysis people could see and understand. A few large news organizations used the data in ongoing analysis, Reddit’s founder (Alexis Ohanian) shared the post, and roughly 60,000 people ended up seeing it.

JavaScript libraries and HTML 5 tools are springing up everywhere to help non-technical users visualize data in interesting ways. Mapping examples include
SimpleMaps (used in our post), MapBox, Google Fusion Tables, etc. Graphing and other data options are all over, but this site is a good place to start. Compile data in-between writing stories related to your niche with Census data or any of these data sources so you’re ready to go when news hits. For more tips, Kane Jamison always has tips on data-driven content marketing, including the presentation below:

2. Harness hierarchical credibility
Hierarchical or rank-based credibility comes from a person’s position or title. High-ranking members of an organization have a better chance of being taken seriously simply by nature of their perceived authority, especially when the organization is well-known.

Have important people write important things
People lend more credibility to an article written by an unknown CEO than to one by an unknown writer, even if the writer knows more about the topic than the CEO. For better or worse, people are simply influenced by official job titles and standing within a hierarchy.
Your definition of what’s important may vary. Almost everything on the popular
42floors blog is written by a founder, while CEOs of larger companies will probably have less time and less interest in regular blogging.

Use executives for guest posts
I know – I’m the guy who wrote
guest posting had gone too far. Google thought so too based on its latest round of penalties. I believe, however, the lack of credibility and expertise in many guest articles was a major cause for Google’s (perhaps disproportionate) response to guest blogging networks.
Don’t waste an executive’s time on small unknown sites no one would ever read. Instead, consider pitching an article written by an executive or other well-known figure to well-known sites. Trulia is a good example with high-ranking members adding guest posts for
Google, The Wall Street Journal, and interviewing with sites like Business Insider. Moz, of course, is another place to see founders adding posts and video frequently.

Better job titles
If you want your content to be shared, make your authors experts in both title and in truth. Changing titles for title’s sake may sound stupid, but titles like managing editor, [subject] correspondent, [subject expert], or even [subject] writer have more gravitas than a plain “author” or “contributor.” Think about what the title says to a person reading your content (or email). The flip side: writers should actually be subject-matter experts.
You should also re-think giving quirky titles to everyone, as they can hurt credibility. I can’t imagine the Wall Street Journal quoting a “digital ninja” or “marketing cowboy” in their story – unless that story is about job titles.

Leadership quotes
You can also make use of another person’s position to lend credibility to your content. This works especially well if you’re looking for insight into a recent news event. Quotes from company executives, government officials, and other high-title positions give you something unique and show you’re not just another blogger summarizing the news built on someone else’s journalism.

3. Seek referent credibility
When someone trustworthy shares something with positive sentiment, we immediately trust the shared item. The referrer lends his or her credibility to the referee. The Moz audience will have no problem understanding referent credibility, as it’s the primary method Google uses to prioritize content that seems equally relevant to a user query. People also rely on referent credibility to decide whether a post is worth sharing. Those referrals build more credibility, and viral content is born. How do you get some referent credibility to radiate onto your content?

Publish on credible sites
This post will receive some measure of credibility simply by being published on the main Moz blog. Anything on or linked to from well-known sites and authors receives referent credibility.

Share referrals and testimonials
You’ll commonly see “as featured on” lists or testimonials from recognizable personalities. Testimonials from anyone at Google or Microsoft with an impressive-sounding position could go a long way for a B2B product. Referent credibility is the reason celebrity endorsements work.
Leveraging referent credibility in a press push generally works well if your company is involved in something newsworthy. Consider requesting and using quotes from relevant and well-known people in press releases or even outreach emails if you’ve done something worth announcing.
Analysis pieces are a little trickier: pointing out past coverage can lend some credibility to a blog post or press release, but it can also look a little desperate if done incorrectly. High relevance and low frequency are key. A good offline analogy is that person who mentions that time they met a celebrity every chance they get, whether it’s relevant or not. Name-droppers are trying (too hard) to build credibility, but it’s actually just sad and annoying. The same celebrity encounter might actually generate interest and credibility if it’s relevant to the conversation and you haven’t told the story to death. Feel free to talk about times well-known people shared or endorsed you, but make sure it’s relevant and don’t overdo it.

Appeal to credible people
When a well-known person shares your content, more links and shares often follow. Find credible people, see what they talk about and share, and then try to make something great that appeals to them. This idea has already been covered extensively
here on Moz.

4. Take advantage of associative credibility
People make associations between one trait and another, creating a
halo effect. For example, several studies (1, 2, 3) have found that attractive people often receive higher pay and are seen as more intelligent, when in reality there is no correlation. Users do the same thing with websites, so making your website look and feel like other credible sites is important.

Use trusted design as a guide
Don’t run in and steal the Times’ CSS file. I’m pretty sure that’s illegal. It’s also probably not going to work unless you’re running a national multi-channel newspaper. But you should be aware that people associate design elements on a site with the credibility of the site. You can help or hinder your credibility through web design in hundreds of ways. Start by looking at legitimate sites and incorporating some of their design elements into your own. Then check out some untrustworthy and unknown sites to see the difference and determine what to avoid.
Obviously you want your site to be unique, but be carefully unique. If you stray from trusted convention, know why you’re doing it. Maybe you want to
kill hamburger icons on mobile – just make sure you have a well-considered alternative.

When in doubt, test
Split tests tend to focus on conversion and sales, and too often the blog/news design gets dragged along for the ride. Given the importance of content and sharing on visibility, testing the impact of site elements on sharing should be as important as the tests we do on sales funnels.
You can test different design elements as they relate to sharing by creating posts and pages with a page-level variable and a canonical tag back to the original post. Always test URLs with variables against other URLs with variables to account for site owners manually removing them. This setup may also be useful for testing different content promotion channels and methods.
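As a rough illustration of that setup (the query parameter name and helper function here are hypothetical, not an established convention), each variant page gets a tagged URL while its canonical tag points back at the clean original:

```python
from urllib.parse import urlencode

def variant_url(base_url: str, test_id: str, variant: str) -> str:
    """Build a tagged URL for one variant of a sharing/design test.

    The page served at this URL should carry a canonical tag pointing
    at base_url, so search engines consolidate the variants.
    """
    return f"{base_url}?{urlencode({'v': f'{test_id}-{variant}'})}"

# Compare tagged URL against tagged URL (never against the bare original),
# so visitors stripping the parameter bias both arms equally.
print(variant_url("https://example.com/blog/post", "hero-image", "a"))
print(variant_url("https://example.com/blog/post", "hero-image", "b"))
```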

Tracking results externally requires a different URL. You may use a modified URL rather than a variable, but only for single-page tests. Note that results will be a little erratic with variables people might remove, but trends will still emerge.

Consider your domain name
You have probably read a news article and wanted to share it, but
then looked for a more reputable source to post to Reddit or Twitter.
Sometimes I’ll share content from a site I’ve never heard of, but usually I want the content I’m sharing to come from a site with a name that evokes trust. Everything in this article goes into a decision on whether to share, but domain name is a surprisingly large factor. When I post an article, I don’t want the first response or comment to be something snarky like, “Oh, according to goodbusinessnews4u.com – sounds legit.”
Domain will also impact click-through on social networks and social sharing sites. A couple years ago I wrote about
choosing the right domain for your branding strategy, and I think its message still holds true.
Domain name will also influence what content seems appropriate. You don’t want people asking, “Why is highspeedinternet.com writing about cooking recipes?” Make sure content strategy aligns with your domain and branding strategy.

Write like a writer; build profiles
You must have credibility in your writing if you want your content to be shared. Follow these simple tips:
Write clearly, hire writers, or don’t waste your time on text content. Even a credible expert will have a hard time being trusted enough to share unless they write clearly with native-level grammar.
Build author profiles, use full names, and use author images. Nothing says, “I’m not proud of this” like a partial name without an image.
Build a full section about your company. Be as specific as possible, and avoid vague statements on the value your site adds.
Craft headlines that are easy to follow; avoid trick/cute headlines unless you have a great reason for tricking or confusing users about what the content will deliver.
Be consistent with surrounding articles. Jumbled topics and unrelated surrounding articles make sites look inconsistent.

Avoid clip art and stock images
Just ask Ian Lurie
what he thinks about stock images. When I wrote “How Google’s Algorithm Silences Minority Opinions” I had the image in my head of Googlebot placing a gag on a user. Thankfully one of CLEARLINK’s talented designers had a better (and less aggressive) idea:

A Google logo would have been easy, but boring. The custom image added a strong visual to the argument, emphasizing key points: a computer algorithm silencing a person, the person not caring too much. It also sent the associative message to users that the site is legitimate enough to use unique images.
Most of us can’t get custom illustrations or photographs for each post, but you should consider it for high-investment pieces or pieces you think have a good shot at success.

Final thoughts
Unless you have inside information on a rumor or are willing to burn your credibility going forward, your content must project credibility. Smaller sites and lesser-known brands have the most to gain by understanding how users and journalists make judgments on credibility and working to improve those factors. You don’t necessarily need to employ every strategy and tactic, but the best coverage and links will always require a high level of credibility. 

Quintessential Seattle Places to Visit During #MozCon 2014

We’re gearing up for all of you to land in Seattle for 
MozCon! It’s just around the corner, July 14-16th, and as we do every year, we want to make sure you have a great time and get a chance to explore our city. (Or to just find a tasty place for dinner after a long day of learning.)

If you haven’t bought your ticket for MozCon, 
do it now! We’re quickly selling the last few tickets, and are over 93% sold out. Buy your ticket today, and sign up for a 30-day free trial to get the best deal, available to Moz subscribers. (If you cancel because Moz Pro isn’t for you, we’ll see you at MozCon regardless.)

What is your quintessential Seattle place?

This year, we asked Mozzers to name their quintessential Seattle place. They came up with a bunch of favorites, from breakfast spots to parks and more. Here’s what they had to say:

Dick’s Drive-In Restaurants

“Delicious hamburgers and fries. It’s cash only, and there’s almost always a line. How Seattle.”

– 
Joel Day

Quinn’s Pub

“The best burger north of 
Father’s Office in Santa Monica and always a solid taplist.”
– David Mihm

Editor’s note: Quinn’s Pub is also on our MozCrawl agenda.

The Market Theater Gum Wall

“I’m based in Mozlandia, and I love coming to Seattle and experiencing this great city. Gum wall is a truly gross tourist trap—actually, careful, you could indeed get trapped—and in the heart of tourism central, Pike Place Market. Still there’s a charm to such an offbeat (though heavily touristed) spot.”

– 
Peter Bray

Pie Bar

“Pie Bar = pie + booze. An array of whiskies. Local and craft beers on tap. Fresh pies, both savory and sweet, made daily. If they would let me move in, I’d just live there.”

– 
Jess Stipe

Black Bottle

“Ah-mazing food! Not bad for happy hour. Broccoli blasted – need I say more?”

– 
Stefanie Riehle

Petit Toulouse

“Petit Toulouse in Queen Anne is the quintessential Seattle favorite when it comes to Cajun/Creole food. Petit Toulouse does not fail to impress every time I have been there. The atmosphere is superb and the food is out of this world. Additionally, I would recommend the buttermilk beignets after a good meal.”

– Marcin Narozny

La Bete

“I feel like it is the secret Cap Hill restaurant that only the neighborhood tends to frequent. The service is always great; the ambiance is always perfect for whatever occasion you are celebrating (romance, friendship, new boots, hunger); and it’s a great place to sit at the bar, order a great glass of wine, and read by yourself. It’s just good.”

– 
Leah C. Tyler

Serious Pie

“This is the best pizza in the whole city. The community-style tables make for great conversation with strangers next to you. Great food, good beers. So fun.”

– 
Nicelle Herron

Belltown Pizza

“If you’re looking for pizza and are not into the odd California-style pizzas, this place has the best New York-style pizza in Seattle. Right off downtown, it’s the first pizza joint I found in Seattle and is still the best IMHO.”

– 
Phil Hildebrand

Kayaking on Lake Union

“You really get a feel for the Seattle landscape. Seeing the Space Needle, Gas Works Park, floating homes, wooden boats…all from a kayak on the water. Nothing better.”
– 
Jackie Immel

Revel

“Some of the best food and drinks you’ll find in the city (and that’s saying a lot), and their patio in the summer is amazing.”
– 
Rand Fishkin

Ballard Locks

“It’s great to hang out in the sun and watch the boats go through the locks, plus the fish ladder is fun and free! The added bonus is that you’re in Ballard so there are about fifty awesome breweries and bars at your fingertips.”
– 
Jamie Seefurth

World Spice Market

“The proprietor here makes her own blends of spices, and everything is freshly ground or grind-at-home. Best spices. Try the Advieh – yum!”
– 
Lisa Wildwood

Roux

“Matt Lewis of the Where Ya At Matt? food truck started a brick and mortar restaurant, and it is good. Very good. With an updated French Creole menu, he has taken it to the next level, and we locals love it. Keep in mind, it’s a bit of a trek from downtown, but there is plenty to explore throughout the rest of Fremont, making it well worth the trip.”
– 
Ben Simpson

Pier 66

“You can see the Space Needle, Puget Sound, Mt. Rainier (on a clear day), and the Great Wheel. Such an amazing view!”
– 
Chiaryn Miranda

Staple & Fancy

“Fresh ingredients, dishes perfectly made, and an amazing chef’s choice option.”

– 
Megan Singley

Biscuit Bitch at Caffe Lieto

“Whether you need a pick-me-up in the morning or after some late-night fun, Biscuit Bitch serves delicious Southern-style biscuits, with unique Seattle flair, and all the toppings you could want. I love their Bitchwitch sandwich. Just be prepared to eat it with a fork. They also have gluten-free options. Also make sure to use your favorite location check-in service and get a free sticker.”

– Erica McGillivray

Looking for more options? Don’t miss our mega post from last year, Rand’s personal recommendations, and Jon Colman’s Seattle coffee guide.

What’s new in Seattle?

Seattle’s Waterfront, photo by Rachel Sarai, Creative Commons licensed

We’re always discovering new places to eat and enjoy in Seattle, and here are a few that have opened up since last year’s MozCon:

Bars

Restaurants

Hanging out in Seattle longer than just for MozCon?

There are tons of great Seattle events happening around MozCon. Here are a few, plus some special deals just for MozCon attendees.

Want to see the MozPlex for yourself? We have office tours!
Come visit the MozPlex and see where all the Moz magic happens. Plus, you’ll get some fun swag.

Soccer fan? See the Sounders FC vs. Portland Timbers.
The Pacific Northwest’s biggest rivalry is on Sunday, July 13th at 7:00 p.m. Get Low-Upper Deck seats (normally $25) for $18. Make sure to join our MozCon Facebook Group and make plans to see the game with other MozCon attendees.

More of a baseball fan? See the A’s vs. Mariners on Sunday, July 13th at 1:10 p.m.
With the link above, get a special discount on Main Level tickets: normally $43-48 and now $25, just for you!

Need a ride around town? Uber has some Seattle deals.
All UberX rides are 25% off for the summer, and if you’re a first-time Uber customer, use the code SEAMOZ14 to get up to $30 off your first ride. Code expires 7/30/14.

Want to see some local music? Don’t miss GeekGirlCon’s annual concert, featuring local nerdcore acts, Sunday, July 13th at 6:30pm.
Come out and support a Seattle nonprofit and enjoy the nerdcore rap of Shubzilla, DJ K91, NY artist Sammus, local trio Death*Star, and Jonny Nero Action Hero, who mixes beats with his Nintendo gaming systems.

Love to run? The Run or Dye 5k is Saturday, July 12th in nearby Lake Stevens.
You can even run with fellow attendees, as Dana Tan is organizing a group to run and have some fun.

Interested in exploring some of Seattle’s neighborhoods and cultural celebrations?

Seattle’s Chinatown-ID Dragon Fest 2014, Saturday and Sunday, July 12th and 13th
40th Annual Ballard SeafoodFest, Saturday and Sunday, July 12th and 13th
Wedgwood Art Festival, Saturday and Sunday, July 12th and 13th
West Seattle Summer Fest, Friday through Sunday, July 11-13th
Polish Festival Seattle, Saturday, July 12th
Bastille Bash, Saturday, July 12th
Georgetown Garden Walk, Sunday, July 13th
White Center Jubilee Days Street Fair, Saturday and Sunday, July 19-20th

Can’t get enough beer? Head over to the peninsula for Bremerton Summer BrewFest, Saturday, July 12th.
No one loves beer more than the Pacific Northwest (okay, maybe Bavaria…), and if you’re looking for local brews, this is your best bet.

Wish to experience Etsy offline? Go to Urban Craft Uprising, Seattle’s largest indie craft show, for their summer edition, Saturday and Sunday, July 12th and 13th.
Shop local and find the perfect Seattle gift to bring home for your loved ones or yourself.

A foodie and staying after MozCon? The Bite of Seattle, the Northwest’s premier food festival, is Friday through Sunday, July 18-20th.
It’s a great way to try out a ton of different restaurants from around the area. I’m sure a few are on our must-eat lists.

Who doesn’t love local 4-H fairs? The King County Fair is Thursday through Sunday, July 17-20th in nearby Enumclaw.
Check out the mutton busting, 4-H exhibits, fried food, and the rides.

We hope you find some time to have fun and explore Seattle. Don’t forget to buy your MozCon ticket before we sell out!