Link Fixation – What To Know, What To Do

Why do you seek links? The answer to this question may tell you more about your future chances of success in organic search than you realize.

Link building is always a hot topic among SEOs – the need to collect links to their content. But why do people seek links in the first place? Once upon a time, links were needed to rank well. In the early days of search (5 – 10 years ago), when algorithms were less refined than they are today, links counted for much. And more links counted for more.

And links from .edu sites counted for even more, with links from .gov sites being the holy grail.

And as with so many things, those lines of thinking are so old, busted and trampled on, they are simply blind alleys today. Shortcuts to a dead end.

I recently sat in on a link building session at a conference and found it entertaining. The first presenter showcased all the links their company had built by contacting websites and either requesting the link or providing the site's content. Every example was at a legitimate, trustworthy website: national .org sites, well-known brands, etc.

The problem is that so many people have tried these tactics in the past – and abused them – that there just isn't the value there once was anymore. Like so many "tactics" for link building, if it's hand-curated in some way, it'll eventually fall into the bucket of "unearned," and when that happens – either at an individual level as applied to a single site, or at a broad level as a tactic employed by multiple sites – the value simply evaporates.

Large companies don't really link build anymore. Six years ago, when I was doing SEO at MSN, my decision was to forgo all external link building activities. They are simply impractical at that scale. More and more companies are moving in this direction. There is no return on a link building budget these days.

And that’s because the point behind the link was as a vote of confidence from one location to another. That’s been perverted to no end these days, so what’s an engine to do? Rely less on the signal.

Looming large in front of many SEOs today is a cliff. Those continuing to actively build links to boost search rankings could well find themselves wasting their company’s money, time and resources on a futile tactic that’s bit the dust.

Does this mean links are dead? Hardly. Links won’t ever die. But wrapping your head around the value they can provide is more important than ever.

Links will likely remain a small part of the algos for a while yet. But as other signals grow in value, importance and trustworthiness, older signals (such as those sent by links, for example) tend to lose importance. They count for less of the overall “decision pie” the algo reviews when determining value and rank.

So where is the value in links?

Where it's always been for years now: referral traffic. Now many will split hairs at this point (as proof of being a true SEO). What's the difference? The difference is in intent. And outcome. The difference is felt by your business. You won't really care about the links themselves, but develop links from locations capable of driving traffic directly to converting pages, and you'll care a great deal about that.

It's time businesses stopped fixating on tactics like link building. In the Air Force, the term "target fixation" refers to a situation where a pilot concentrates so hard on the enemy they're trying to shoot down that they lose situational awareness, allowing enemy planes to get behind them and take shots at them.

Too many SEOs have lost situational awareness. If you’re building links today, do so as part of a direct traffic acquisition strategy. Not to boost rankings.

Let’s review some link types:

  • Reciprocal links – still useful for driving referral traffic, useless for SEO
  • Guest posting – useful for building a reputation, largely pointless for boosting rankings
  • Widget links – maybe useful for referral traffic, dead end for SEO
  • Forum links – depends on context – if posted by a real person, a real forum member, with history, as part of an actual conversation, in context, maybe a bit of usefulness for SEO. Otherwise, as commonly deployed en masse and randomly, a dead end.
  • Blog comments – again, depends on context, but largely a dead end.
  • Inline content links – still useful, assuming the link is actually in the text, pointed to a relevant page and doesn’t exhibit obvious “low value” characteristics. Would a writer for CNN actually include a link to your sales page in their article? Unlikely.
  • Directory Links – useful for referral traffic, maybe, but almost no value for rankings.
  • Link schemes – just don’t. Unless you’re comfortable with a bullseye on your back.
  • Footer/header links – footer links are a dead end, and why in the name of all that’s useful in business would a company put a link to another site in their header? Unless they were, I don’t know, paid to do it?
  • Social media links – great for spreading the word and driving traffic, which has knock-on benefits.
  • Le Garbage – hidden links, paid-for links, incentivized links, linked pixels, etc. With a bullet labeled as “Le Garbage”, it should be obvious to avoid these, and yet…

And it doesn’t matter what is linked – text, images, videos, etc. Thinking a linked image will help where a text link won’t is a waste of time. So infographics won’t skate past where text links fail.

Now, with all those traditional link building efforts yielding little to no value today, why do people continue to invest time in them when that time could produce what’s really valuable?

The Ultimate Link Bait

Engaging content. Not surprising at all that after 15+ years of trying to find every advantage to outwit search algorithms, SEOs find themselves right back here. Content is what searchers seek. If you understand what types of content they actually engage with, and build it for them, you’ll be more successful than chasing links. Yes, it takes more work to produce winning content. Yes it’s more expensive. No one said running a business would be cheap and easy, though.

Duane Forrester
Sr. Product Manager
Bing

Handling User-Generated & Manufacturer-Required Duplicate Content Across Large Numbers of URLs

We know that Google tends to penalize duplicate content, especially when it’s something that’s found in exactly the same form on thousands of URLs across the web. So how, then, do we deal with things like product descriptions, when the manufacturers require us to display things in exactly the same way as other companies?

In today’s Whiteboard Friday, Rand offers three ways for marketers to include that content while minimizing the risk of a penalty.

Howdy Moz fans, and welcome to another edition of Whiteboard Friday. Today I’m going to be chatting a little bit about a very specific problem that a lot of e-commerce shops, travel websites, and places that host user-generated and user-review types of content experience with regards to duplicate content.

So what happens, basically, is you get a page like this. I’m at BMO’s Travel Gadgets. It’s a great website where I can pick up all sorts of travel supplies and gear. The BMO camera 9000 is an interesting one because the camera’s manufacturer requires that all websites which display the camera contain a lot of the same information. They want the manufacturer’s description. They have specific photographs that they’d like you to use of the product. They might even have user reviews that come with those.

Because of this, a lot of the folks, a lot of the e-commerce sites who post this content find that they’re getting trapped in duplicate content filters. Google is not identifying their content as being particularly unique. So they’re sort of getting relegated to the back of the index, not ranking particularly well. They may even experience problems like Google Panda, which identifies a lot of this content and says, “Gosh, we’ve seen this all over the web and thousands of their pages, because they have thousands of products, are all exactly the same as thousands of other websites’ other products.”

So the challenge becomes: How do they stay unique? How do they stand out from this crowd, and how can they deal with these duplicate content issues?

Of course, this doesn’t just apply to a travel gadget shop. It applies broadly to the e-commerce category, but also to categories where content licensing happens a lot. So you could imagine that user reviews of, for example, things like rental properties or hotels or car rentals or flights or all sorts of things related to many, many different kinds of verticals could have this same type of issue.

But there are some ways around it. It’s not a huge list of options, but there are some. Number one, you can essentially say, “Hey, I’m going to create so much unique content, all of this stuff that I’ve marked here in green. I’m going to do some test results with the camera, different photographs. I’m going to do a comparison between this one and other ones. I’m going to do some specs that maybe aren’t included by the manufacturer. I’ll have my own BMO’s editorial review and maybe some reviews that come from BMO customers in particular.” That could work great in order to differentiate that page.

Some of the time you don’t need that much unique content in order to be considered valuable and unique enough to get out of a Panda problem or a duplicate content issue. However, do be careful not to go way overboard with this. I’ve seen a lot of SEOs do this where they essentially say, “Okay, you know what? We’re just going to hire some relatively low-quality, cheap writers.” Maybe English – or the language of whatever country you’re trying to target – isn’t even their first language, and they write a lot of content that just all sits below the fold here. It’s really junky. It’s not useful to anyone. The only reason they’re doing it is to try and get around a duplicate content filter. I definitely don’t recommend this. Panda is built even more to handle that type of problem than this one, from Google’s perspective anyway.

Number two, if you have some unique content, but you have a significant amount of content that you know is duplicate and you feel is still useful to the user, you want to put it on that page, you can use iframes to keep it kind of out of the engine’s index, or at least not associated with this particular URL. If I’ve got this page here and I say, “Gosh, you know, I do want to put these user reviews, but they’re the same as a bunch of other places on the web, or maybe they’re duplicates of stuff that happened on other pages of my site.” I’m going to take this, and I’m going to build a little iframe, put it around here, embed the iframe on the page, but that doesn’t mean that this content is perceived to be a part of this URL. It’s coming from its own separate URL, maybe over here, and that can also work.
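The embed idea can be sketched in a few lines. This is a hypothetical illustration only – the helper name and the review URL are invented, and the point is simply that the duplicate content lives at its own URL and is pulled in via an iframe rather than printed into the product page itself:

```python
# Hypothetical sketch: duplicate review content lives at its own URL and is
# pulled into the product page via an iframe, so engines don't treat it as
# part of the product page's URL. The URL below is made up for illustration.
def review_iframe(review_url: str, height: int = 400) -> str:
    """Build the iframe snippet to drop into the product page template."""
    return f'<iframe src="{review_url}" height="{height}" title="User reviews"></iframe>'

print(review_iframe("/embeds/reviews/bmo-camera-9000"))
```

The reviews still render for visitors, but they are fetched from the separate embed URL rather than being duplicated in the product page's own HTML.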

Number three, you can take content which is largely duplicative and apply aggregation, visualization, or modifications to that duplicate content in order to build something unique and valuable and new that can rank well. My favorite example of this is what a lot of movie review sites, or review sites of all kinds, like Metacritic and Rotten Tomatoes do, where they’re essentially aggregating up review data, and all of the snippets, all of the quotes are coming from all of these different places on the web. So it’s essentially a bunch of different duplicates, but because they’re the aggregator of all of these unique, useful pieces of content and because they provide their own things like a metascore or a Rotten Tomatoes rating, or an editorial review of their own, it becomes something more. The combination of these duplicative pieces of content becomes more than the sum of its parts, and Google recognizes that and wants to keep it in their index.
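The Metacritic-style aggregation described above boils down to combining many duplicate snippets into one new data point that is unique to you. A minimal sketch, with invented review data:

```python
# Hypothetical sketch of the aggregation idea: many third-party review
# snippets (duplicates of content elsewhere on the web), each carrying a
# score, combined into a single "metascore" that only our site provides.
def metascore(reviews):
    """Average the 0-100 scores pulled from third-party reviews."""
    scores = [r["score"] for r in reviews]
    return round(sum(scores) / len(scores))

reviews = [
    {"source": "SiteA", "score": 88},
    {"source": "SiteB", "score": 74},
    {"source": "SiteC", "score": 93},
]
print(metascore(reviews))  # 85
```

The individual snippets are duplicative; the aggregate number, plus your own editorial layer, is the part that becomes more than the sum of its parts.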

These are all options. Then the last recommendation that I have is when you’re going through this process, especially if you have a large amount of content that you’re already launching with, start with those pages that matter the most. So you could go down a list of the most popular items in your database – the things that you know people are searching for the most, the things that you know you have sold the most of, or the pages that internal searches have led to the most. Great, start with those pages. Try and take care of them from a uniqueness and value standpoint, and, especially if you’re launching with a large amount of new content all at once, you can even keep these duplicative pages out of the index until you’ve gone through that modification process. Now you sort of go, “All right, this week we got these 10 pages done. Boom, let’s make them indexable. Then next week we’re going to do 20, and then the week after that we’ll get faster. We’ll do 50, 100, and soon we’ll have our entire 10,000-product page catalog finished and complete, all with unique, useful, valuable information that will get us into Google’s index and stop us from being considered duplicate content.”
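The staged rollout described above can be sketched as a small prioritization loop. Everything here is invented for illustration – the page records, field names, and popularity numbers are assumptions, not a real catalog:

```python
# Hypothetical sketch of the staged-rollout idea: keep still-duplicative
# product pages noindexed, work through them in order of popularity, and
# flip each page to indexable only once its content has been made unique.
def rollout_order(pages):
    """Sort pages so the most popular (most-searched, best-selling) come first."""
    return sorted(pages, key=lambda p: p["popularity"], reverse=True)

def robots_tag(page):
    """Noindex anything still carrying only the duplicate manufacturer copy."""
    return "index,follow" if page["unique_content"] else "noindex,follow"

catalog = [
    {"url": "/bmo-camera-9000", "popularity": 9800, "unique_content": True},
    {"url": "/travel-pillow",   "popularity": 120,  "unique_content": False},
    {"url": "/power-adapter",   "popularity": 4300, "unique_content": False},
]
for page in rollout_order(catalog):
    print(page["url"], robots_tag(page))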

All right everyone, hope you’ve enjoyed this edition of Whiteboard Friday. We’ll see you again next week. Take care.

Moz’s 2013 Year in Review: More Than You Ever Wanted to Know About Moz, and Then Even More

Time for the 2013 end-of-year wrap-up. You can read Rand’s 2012 wrap-up here. The “T” in TAGFEE is for “Transparency,” so let’s get this show started.

Moz 2013 A Year In Review

This is a long post. Here are some convenient links to whatever tickles your fancy:

Four Big Investments in 2013

Community & Customers

Inside Moz HQ

Technical Achievements

Looking Forward

If long-form blog posts aren’t your thing, I invite you to check out our Moz 2013 Year In Review Infographic extravaganza!

1. The rebrand from SEOmoz to Moz

We’d been planning the rebrand from SEOmoz to Moz for about 18 months before we executed. The original plan was to launch the rebrand simultaneously with the release of Moz Analytics. Ah, yes. What a lovely dream!

When it became clear that Moz Analytics wasn’t going to be ready before May, we decided to decouple the rebrand from the product launch.

The rebrand went amazingly well. Better than we anticipated. Most people weren’t surprised, which I think is a good thing; we had been seeding the idea of “Moz” for a long time. Recently, Search Engine Land released study results indicating that Moz is the second-most recognized marketing technology brand. Happily, 85% of people who listed us correctly used “Moz” instead of “SEOmoz.” Whoa!

Search Engine Land Most Recognized Brands

We suffered a slight dip in traffic after the switch from seomoz.org to moz.com, but managed to recover quickly. <high five> You can see our traffic stats in the Community and Customers section below.

2. The epic nine-month launch of Moz Analytics

We began launching Moz Analytics in May and are only now wrapping up the final phase of launch. Rebuilding the old PRO application from back to front was the biggest and highest-risk endeavor we’d ever undertaken. Including product planning, we worked on it for two+ years, and a team of engineers spent 18+ months building it.

The Moz Analytics launch deserves a thoughtful blog post on its own. I tried to capture the complexity, disappointment, and excitement when drafting this post, but it’s just not possible.

So I’ll just say this: Launch diverged dramatically from the plan. Some of it was forgivable naiveté about the complexity of the project, and some of it was just plain old stupid mistakes. All of the factors were valuable lessons.

I’m relieved and happy to say that we’re wrapping up the “launch phase” of Moz Analytics very soon. We’ve almost solved the critical bugs, and soon we’ll be launching some critical features (like monthly timeframes) that weren’t quite critical enough to block public release.

3. Building data centers in Virginia, Washington, and Texas

We create a lot of our own data at Moz, and it takes a lot of computing power. Over the years, we’ve spent many small fortunes at Amazon Web Services. It was killing our margins and adding to product instability. Adding insult to injury, we’ve found the service… lacking.

Building our private cloud

We spent part of 2012 and all of 2013 building a private cloud in Virginia, Washington, and mostly Texas.

This was a big bet with over $4 million in capital lease obligations on the line, and the good news is that it’s starting to pay off. On a cash basis, we spent $6.2 million at Amazon Web Services, and a mere $2.8 million on our own data centers. The business impact is profound. We’re spending less and have improved reliability and efficiency.

Our gross profit margin had eroded to ~64%, and as of December, it’s approaching 74%. We’re shooting for 80+%, and I know we’ll get there in 2014.

4. Growing the Moz team

We ended the year with 134 people on the team. In 2013, we brought in 47 new people, nearly a person each week. We also saw 16 people move on. That’s a tremendous amount of change and growth on top of a high-headcount growth year in 2012. It’s invigorating and humbling when I think of the talent we’ve brought together at Moz. Check out our Annual Report Infographic for lots of fun extras on the team.

Headcount Growth

Revenue grew 33% last year and ended at $29.3 million.

Gross Revenue

Product Revenue

That is off-plan performance. It could have been even worse. Quarters 1 and 2 were actually really strong, and in a subscription business, the year is made by Q1 and Q2 net adds. In the second half of the year, though, we lost momentum while in launch mode.

What happened? Many books will be written by many historians on what happened at Moz in 2013. (Okay, maybe not.)

Here is a list of contributing factors, in no particular order (like I said, this is really a much bigger topic than we can squeeze into this post):

  • Delayed launches of the rebrand and Moz Analytics (MA)
  • Customer unhappiness from some nasty bugs at launch
  • Customer unhappiness from some high-priority features not included at launch
  • Lower-quality leads than hoped through the invite list for MA
  • Compromised marketing funnel during the transition time from the legacy PRO app to MA
  • Mismatch between marketing materials and launch offering

It feels really good to have this launch nearly behind us. I’m looking forward to the next phase of the product development cycle: iterate, iterate, iterate. And we’ll take the lessons of this launch with us into all of our future projects.

Our Cost of Goods Sold came in at approximately $10.8 million.

The vast majority of this spend is Amazon Web Services, listed here as “Cloud Services” (see the section above on data centers for more context). While building our data centers and moving our systems over, we paid for both the data centers and Amazon. In 2014, we should see a substantial reduction in Cloud Services because we’ll have moved almost entirely off Amazon.

Our Gross Profit Margin for the year came in at 63% overall, but it was up to 70% and climbing in December. Data centers FTW!

For those of you curious about what we spend money on, feast your eyes:

I’ve included the expense as a percentage of revenue to better express how the business scales. Notice the growth in Personnel (headcount- and benefits-driven), Professional Contractors (the fleet to help us get Moz Analytics out the door), Office Expenses (doubling our square footage means more supplies and snacks!), and the arrival of the Data Center Depreciation line.

Yes, you’re reading that right. The percentage of revenue is greater than 100% when you add it up. We’re not profitable this year. We’re ending the year with a $5,754,925 EBITDA loss (that’s fancy accounting-speak for how much money was left over after you paid the bills).

We knew we were going to burn in 2013. That’s why we took the $18m Series B. This is a bigger loss than we planned on, though, and we’re disappointed we missed our goals.

The good news is that we have enough capital to keep growing the business, and we’re really excited to have the hairy launch behind us and a clean platform on which we can start iterating. We expect to be profitable in Q3 of this year.

At the end of the year, over 25,000 people had Pro subscriptions.

Over 21,000 of those accounts are paying subscribers.

Total Pro Subscriber breakdown

On the one hand, I’m disappointed because I don’t think we met our potential. We have the skills, resources and passion to build the best Inbound Marketing Analytics software on the planet. We’re on the path, but we’re not there yet. The whole team is picking up the pace. The destination keeps moving too, which is part of the fun.

It was a pretty amazing year for web traffic. We’re seeing a little dip here at the end of the year, but we averaged 2.9 million uniques per month. Wow.

Web Traffic Moz

Our organic search traffic rose by 28% this year, even with the little hit we took during the transition from SEOmoz.org to Moz.com.

Organic Search Traffic

Our community engagement metrics increased really nicely this year. (Thanks if y’all are still reading this!) Also, people LOVE Whiteboard Friday. Can you believe folks watched over 35,000 hours!?

I already mentioned the team growth in my Big Four above. What I didn’t talk about is how awesome it is to be a part of this team.

Total Charitable Donations

Can you believe that combined we donated over $100k to charity in 2013?! Mozzers gave $41k+ to their favorite charities. Moz contributed $63k+ as part of our 150% Moz Match program. That’s an average of ~$783 per Mozzer.

I’m really proud of this number. It shows me that I work with people who believe the world can be a better place and are willing to do something about it. It excludes the time Mozzers have donated; we should start tracking that, too.

The Annual Report has a complete list of the organizations we’ve helped.

It’s hard to capture what it feels like to work at Moz. This list helps:

We’ve already talked about building Moz Analytics and the new data centers. Those are major achievements, but they aren’t the only things we’ve been busy with. Au contraire.

Moz 2013 Deployments

Moz is an increasingly complicated business. We have loads of tools. Even if you’re a long-time Moz fan, I bet we have tools you’ve never seen. We’ve been making investments in some of our key tools and the back-end infrastructure to support complex web properties.

Launched Fresh Web Explorer in April 2013

We’re really excited to offer a simple, high quality mentions-tracking tool. This is one of our most powerful features, but is still a relatively hidden gem. Do yourself a favor and go play around with it. It’s better than Google Alerts, and way cheaper than enterprise-y mention trackers.

Relaunched GetListed in May 2013

We acquired GetListed in 2012 and did a complete back-to-front rebuild in early 2013. The rebuild improved scalability, reliability, speed, and allowed folks to log in to GetListed with their Moz accounts. The improvements laid the groundwork for an upcoming launch that we can’t wait to share with you! The new release will be a major step forward in listing management.

Followerwonk Improvements

Check out Social Authority Score and our Partnership with Buffer.

Open Site Explorer Improvements

We added Just Discovered Links to Open Site Explorer in May 2013. We’re working hard to get you fresh link data. There will be more to come in 2014.

Sexy Behind-The-Scenes Stuff

Well, we think it’s sexy. We rebuilt our billing system (rapture!), updated our email management back-end (joy!), and created a unified authentication system to tie all our different web properties together (hallelujah!).

Things are starting to move fast around here, and that’s a good thing. There is a renewed sense of energy. Launch is behind us, and we can focus on bug squashing, tuning, and adding some critical MA features like monthly timeframes, a keyword-not-provided solution, and a content section. We’re getting a little bit closer each month to our goal of 80+% gross profit margin (GPM). Moz Local v.1 is in alpha testing as I write.

For the next several months the company is focused on (1) Increasing retention by making happier customers, (2) Acquiring new customers by improving the funnel and driving high-quality traffic to the site, (3) Getting to 80% GPM, and (4) Launching and learning from our v.1 Local product.

Rand and I are settling into our new roles, and the whole team is starting to rock a faster, more iterative approach to building software. We’re also learning our way around our brand spanking new digs. If you’re in Seattle, you should come say “hi.”

Did I mention the Annual Report?

About SarahBird — Sarah is the CEO of Moz. She is happiest when creating inclusive environments for people to learn and do their best work. She spends her days spreading TAGFEE and making software that helps marketers understand and improve their inbound marketing efforts. Oh, and doing email. Lots of email. She also enjoys cookies, her adorable son Jack, her handsome hubby Eric, binge reading, and walking down the street.

The Hard and Fast Rule of Guest Blogging

My girlfriend is a professional athlete and trainer working from a gym near my home in London. Though she doesn’t know a thing about SEO (amazingly, I’ve resisted the temptation to even discuss the mechanics of what SEO is), I do think she’s a naturally clever marketer.

Take a look at this guest post, written on the 6th of January for Greenwich Mums. Greenwich Mums is a resource, review, jobs and loyalty site for local families in the Greenwich area. They have a sound editorial policy, and it’s a well-thought-of site in our local area.

Is there a link to my girlfriend’s website in this post?

greenwich-mums

Nope. Doesn’t mention it. Not once. Not even on her author page.

But it has been very positive for her enquiries, introducing new people to the first stage of her funnel – a one-to-one consultation – and bringing in several new clients.

I find this such a refreshing concept. Put simply, this marketing works. It’s helpful, transparent and obviously perfectly targeted. And it’s working perfectly well without a link. When a new enquiry comes in, the enquirer is simply asked, “How did you hear about us?”

What I’ve spent time thinking about is this. Am I brave enough to guest post without asking for a link?

It’s the acid test for my targeting. Am I delivering the right message to the right person in the right format at the right time? Do I get sales from my work regardless of the SEO implications of the coverage?

If we don’t link, what else can we do except pray?

Stop being so narrow-minded. What does this chart show?

followerwonk-follower-growth-richardbaxter

It’s Followerwonk follower count data for my Twitter profile.

Follower growth is measurable. So is Facebook Fan Page growth, so are SlideShare subscriber numbers, so are connections on LinkedIn – so are the many KPIs we use to assess the efficacy of our marketing efforts on a daily basis.
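The measurability point can be made concrete with a toy calculation. The follower counts below are invented, not real Followerwonk data – the idea is just that link-free channels still produce numbers you can track month over month:

```python
# Hypothetical sketch: given monthly follower counts (made-up numbers),
# compute absolute and percentage growth across the period.
def growth(counts):
    """Return (absolute change, percent change) across the series."""
    change = counts[-1] - counts[0]
    return change, round(100 * change / counts[0], 1)

followers = [4200, 4350, 4610, 4900]  # e.g. exported from a follower tracker
print(growth(followers))  # (700, 16.7)
```

Swap in Facebook fans, SlideShare subscribers, or email enquiries and the same arithmetic applies.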

The hard and fast rule of guest blogging

Would it measurably benefit your business without a link?

What if your guest blogging activity was focused on growing your Twitter followers, or getting email enquiries for new leads?

I think your SEO would be quite safe from a penalty.

Image credit: Piermario


Personas: The Art and Science of Understanding the Person Behind the Visit

The author’s posts are entirely his or her own (excluding the unlikely event of hypnosis) and may not always reflect the views of Moz.

Market segmentation is a basic tenet of marketing that has long been ignored by SEOs. And that’s okay, because for a long time working at the keyword level of abstraction was enough. In fact, you can still do SEO and marketing in any other channel without ever having the idea of market segmentation cross your mind, despite (not provided), Hummingbird, and a whole host of changes Google has forced of late.

That is… if you enjoy 0.04% conversion rates. Right.

There have been many posts about personas in the wake of the methods I’ve popularized for SEO, but nothing that truly walks through the process with data or gives context into how measurement has matured. In this post I’ll go into detail about these approaches, giving frameworks and step-by-step instructions on how to build and use personas.

There’s something in this post for everyone from beginners to advanced marketers. I feel that it’s important to give context to the discussion to clarify why developing and using data-driven personas is critical to the future of search and digital marketing in general. Use this table of contents as a way to navigate to precisely what you want to know. You’ll also find “Back to table of contents” links at the end of every section.

First, personas are a method of market segmentation wherein we collect a combination of qualitative and quantitative data to build archetypes of the members of our target audience. In other words we take data to tell a predictive story about our users based on past behaviors and attributes.

I mentioned keywords as a level of “abstraction;” Google has obscured that type of abstraction with (not provided), taking an otherwise perfect direct-response dataset and turning up the opacity. Nonetheless, it was always a representation of a person taking an action to fulfill a need. However, that abstraction removed us completely from those people and placed our focus clearly on the keyword and the Boolean idea of whether or not their visit on that keyword led to the completion of a task.

If the keyword-level of abstraction is a stick figure a persona is an action figure.

Over the past few years I’ve built methodologies in a land of marketing make-believe to develop personas and apply them to the intersection of search and social media, helping us understand the person behind the search. Much like the cartoons that action figures are modeled after, personas have a set of attributes that they are to (ahem) personify. Dictated by the business goals and the data that can be collected and analyzed, these attributes are typically a picture, demographics, psychographics, user needs and a user story, but they can be as in-depth or as vague as you want – as long as it’s actionable. For example, some people like to give each persona a “quote” that sums them up. Personas also come with a user journey, which is a collection of steps a user takes in fulfilling those needs.
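The attribute list above maps naturally onto a simple data structure. This is a hypothetical sketch, not a prescribed schema – every field value below is invented for illustration:

```python
# Hypothetical sketch of a persona as a data structure: picture,
# demographics, psychographics, needs, a summary quote, and a user journey.
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    picture: str              # path or URL to a representative photo
    demographics: dict        # age, location, income, ...
    psychographics: dict      # attitudes, motivations, ...
    needs: list
    quote: str                # the one line that sums the persona up
    journey: list = field(default_factory=list)  # steps taken to fulfill needs

budget_traveler = Persona(
    name="Budget Betty",
    picture="personas/betty.jpg",
    demographics={"age": "25-34", "location": "urban"},
    psychographics={"motivation": "value for money"},
    needs=["cheap flights", "honest reviews"],
    quote="I never book without comparing three sites first.",
    journey=["search", "compare", "read reviews", "book"],
)
print(budget_traveler.name)
```

Make it as rich or as lean as your data supports; the test is whether each field drives an actionable decision.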

Ultimately, though, you’re trying to tell the most actionable story with your data. Think of it as another layer to your analytics. The most important layer. The people layer.

People often ask me why they need to use personas. In my previous role of selling SEO services to people who talk about SEO and marketing as separate things, I experienced a lot of pushback. Fortunately there were far more instances where a CMO or VP was in the room, and speaking in terms of segments and market opportunity rather than just keywords, meta tags and guest posts helped us win the business.

But I digress.

One of the main reasons for using personas is that when you target everyone you actually target no one. The art of segmentation is about narrowing your focus in on the people in the market most likely to become your users/customers so you can better serve them. This applies not only to your product and/or service, but to your content as well.

Donald A. Norman of the Nielsen Norman Group explained it best when he declared “A major virtue of personas is the establishment of empathy and understanding the individual who uses the product.”

In the Content Strategy world one of the major concepts they push is “empathy.” How can we understand and then fight for the user to create the best possible content experience to fulfill their needs? Not just the right words, but the right structure, the right metadata, the right presentation.

User Experience professionals use that idea of empathy with personas to plan and build things that work for the target audience. For example, if our audience is people over 50, it may make sense to design a site with larger text.

In the world of marketing this is all a means to a specific end, of course, but ultimately we just want to know who we’re talking to so we can improve our rate of persuasion – or conversion.

Organic Search as a marketing channel is about just that – persuading people who have a specific intent to believe you can fulfill their needs. Building personas allows you to speak directly to those needs from as early as the page title and meta description.

Back to table of contents

The terms are often used interchangeably, but they can mean slightly different things. All of these concepts are abstractions of people, but the basic difference between the three lies in their specificity. A segment is the broadest concept of a person while a persona is the most specific snapshot of a user archetype.

For the purposes of this discussion the Smurfs will act as a way to make these ideas a little more real (whoa, meta). I tried to get G.I. Joe, but, they were busy fighting wars and stuff… yeah, anyway…

Segments

Segments are groupings of similar entities. You can (and should) quite literally segment by any set of rules in your data, as I discussed in my last Moz post. In the cartoon The Smurfs you had humans, animals and Smurfs. Each of those could be a segment. You could segment just the Smurfs themselves by the color of their mushroom homes. You could segment them based on things that happened on the show. Two segments could be “Those Gargamel Has Captured” and “Those Gargamel Has Not Captured.” You could segment by where they live in the Village: North Smurfs, West Smurfs, Southeast Smurfs. You could sub-segment any of these groups with any granularity you see fit, or combine criteria just like you would with standard clickstream data in Google Analytics.

The point is, although you can segment by anything you can track, will it be actionable? Popular actionable segments used every day are geographic, behavioral, seasonal, and benefit segments.
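The rule-based segmentation described above can be sketched in a few lines of Python. The user records and rules here are invented examples (Smurf-flavored, to match the post), not a real dataset:

```python
# Hypothetical sketch: a segment is just the subset of users matching
# a rule, and rules can be combined with any granularity you see fit.
users = [
    {"region": "North", "captured": True, "visits": 12},
    {"region": "West", "captured": False, "visits": 3},
    {"region": "Southeast", "captured": False, "visits": 7},
    {"region": "North", "captured": True, "visits": 1},
]

def segment(users, rule):
    """Return the subset of users that a rule selects."""
    return [u for u in users if rule(u)]

# Segment by a single attribute...
north = segment(users, lambda u: u["region"] == "North")

# ...or combine criteria, as you would in Google Analytics
engaged_free = segment(users, lambda u: not u["captured"] and u["visits"] >= 5)

print(len(north), len(engaged_free))  # → 2 1
```

The actionability question still applies: a rule is only worth writing if the resulting segment changes what you do.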

Nielsen PRIZM is a popular market segmentation system based on zip codes, where people are chunked into subsets according to their location, income and behavior. Nielsen builds this system on top of US Census data and sends surveys to a large sample of people to create 66 segments throughout the United States. Experian Simmons is similar, and maybe more interesting to inbound marketers given its connection to Hitwise, but Google has recently brought segmentation purely online and has the potential to supplant them all. More on that later.

Cohorts

Cohorts are groupings based on shared experience. The common vernacular for cohorts would be generations. In the Smurf Village you had three generations of Smurfs. The baby Smurfs (which for whatever reason included the only other female Smurf). Let’s call them Generation Next. You had the adult Smurfs like Jokey, Vanity, Brainy, and Smurfette’s cohort. Let’s call them Generation Now. A cohort that walked around believing shirts were optional.

And you had Papa Smurf and a few of his buddies. Let’s call them the Elder Smurfs.


Obviously, each individual in any of these groups is different from the next, but they are grouped by their shared temporal attitudes, cultural interests (e.g. fashion sense, music), and life experiences (Gargamel’s captures, the first appearance of Smurfette).

In the real world we have Baby Boomers, Generation X, and the ever-elusive Millennials. Baby Boomers were a generation defined by the post-World War II era of increasing affluence, the Civil Rights movement and the death of JFK. Generation X was a generation defined by rebellion, MTV, baggy pants, the dot-com bubble, the rise of grunge, Microsoft, and the death of Kurt Cobain. Millennials are defined by 9/11, job-hopping, Apple, Google, Facebook, free music, nerd glasses, tight jeans, everybody having a startup and the death of Michael Jackson.

Right now every big product-driven company is asking: how do we get Millennials to care about us?

Personas

Personas are specific archetypes of people in the target audience. The attributes identified across the group are collected to give birth to a single entity that represents these users. A persona has a descriptive name and is meant to be thought of like someone that actually exists. They are generally a composite of people that do exist.

In this case we will use individual Smurfs themselves as our personas. While some people in the 80s viewed the cartoon as communist, it can also be seen as an exercise in behavioral segmentation. Each character was clearly differentiated by what they specifically did or how they acted within the Smurf Village.

You had Brainy Smurf, the original hipster. He’s a bit of an introvert and likely to be found at a Barnes & Noble sipping a macchiato and discussing Sartre, injecting barbs of sardonic wit. He spends a lot of time updating his blog, and he’s a freelance copywriter for a multinational ad agency, but he only shops at the mall. Brainy prefers Facebook over Twitter, as he would rather have a long-form discussion where he can definitively disprove what you believe. He listens to NPR and of course is a Mac rather than a PC.

You had Smurfette. Well, you had two Smurfettes, each of which could be a persona.

The first Smurfette was a tomboy who just wanted to hang with the homies. After all, she was created by Gargamel as a way to distract and trap the Smurfs. She shopped at secondhand stores before it was in style. No, really.

Old Smurfette goes to open mics and loves to be around music. She enjoys vintage vinyl records and playing with her rescue cat. The Old Smurfette is a bit of a couch surfer who frequents SmurfBNB and eats at Baker Smurf’s restaurant rather than the big chains. You guessed it; Old Smurfette is a persona based on the female hipster Millennial cohort.

Later, after Papa Smurf turned her into a real Smurf, she got all high-end fashion on us, dyeing her hair blonde, wearing Diane von Smurfstenburg dresses and Christian Smurfboutin shoes. She’s more likely to be found at high-end establishments, but only goes out when invited. Smurfette would rather be shopping than go to a music night spot. She’s all about convenience over supporting her local community. Smurfette likes to see and be seen.

Then you had Jokey Smurf. His persona name would probably be Terrorist Tom because he loves to hand people presents that explode. In the context of marketing Jokey is the type of user who loves extreme sports, sites like Break.com, and the type of content that Red Bull creates. He’s highly likely to buy Ed Hardy clothing. Only the jeans, though, because males in his cohort don’t wear shirts. Jokey loves craft beer, Xbox One and action movies.

In the above cases I’ve taken what I know about the Millennial cohort and layered it into a story about the different Smurf characters based on things that could be observed on the show. As marketers building personas, we do this with regard to the context of our marketing programs. That is to say, we focus on elements of the story that are relevant to our goals rather than including every data point we can find.

A key distinction to be made in the context of inbound marketing programs is that between the buyer persona and the audience persona. The audience persona is typically someone looking to consume content for education or entertainment. These people are not actively looking to purchase a good or service and are better measured via KPIs having to do with the spread of that content, the authority it builds, or the building of community.

Conversely, buyer personas may also be looking to consume content, but only as a means to make the specific transaction to support their needs. There is frequently overlap between the two types of personas and a given user can also transition between the two types. Keep this in mind as you develop your personas.

Once this in-depth profile of the audience is created smart marketers ask questions and take actions with regard to how these personas would best be served to meet the business objective.

At Amazon, Jeff Bezos leaves a chair empty at meetings to signify that the customer persona is in the room listening to the decisions being made. At Experian they have developed a persona character and placed her on banners throughout the office and in the company newsletter to keep the customer top of mind. When I worked on LG, they sent out a poster of Wendy, their home appliances persona, and she came up often in our strategy meetings. At AirBNB they have a section of the office with the personas in storyboards on the wall, along with illustrations of those personas going through the user journey.

No matter what method you use, it is important to keep the consumer, customer, or user at the center of the marketing initiative. Don’t just build personas and forget they exist.

Back to table of contents

“Why should I care,” you say? Well, for some time I have touted this idea of the intersection of search and social media to match intent up with the person. This, and some of Google’s actions towards the end of 2011 (remember the consolidation of the privacy policies?), led me to the idea that they are using G+ to model users and apply a sliding scale of authority based on topical relevance, both for better search quality and to provide the Holy Grail of advertising opportunities. In fact, I believed the whole purpose was modeling beyond the keyword to make every dollar worth a lot more by marrying multiple data sets. It turns out this is exactly where Google wants to go with their marketing products and I’m basically just ahead of my time. ;]

Ian Lurie has also been talking about this extensively for the past few years through a concept he calls “random affinities,” which is similar to something I was (perhaps mistakenly) calling “co-relevance” when I built a tool for getting ahead of search demand with social media.

Forgive the quality of these screenshots, but in a recent video from Google featuring Forrester Research VP/Principal Analyst Nate Elliot, they discussed the concept of affinity and market segmentation. What he describes as Smart Affinity is what a methodology like Keyword-Level Demographics looks to harness. This is a capability that marketers in general have yet to embrace because it’s simply too complicated for most. Google is taking us there kicking and screaming.

Diya Jolly from Google gives some insight into why Google is obviously best suited for the job in her discussion of the data signals available across the Google ecosystem. The amount of data, combined with the sample size, gives Google probably the most robust and accurate model of user behavior, which could render other modes of advertising and market research nearly obsolete, or at least less effective.

I dug a little deeper into the process and found the “Inferring User Interests” patent where they discuss more in-depth how they figure out user interests. For example:

“In the situation where a first user lacks information in his profile, profiles of other users that are related to the first user can be used to generate online ads for display with the first user’s profile. For example, a first user Isaac may not have any information in his profile except that he has two friends-Jacob and Esau. The server system 104 can use information from Jacob and Esau’s profiles to infer information for Isaac’s profile. The inferred information can be used to generate online ads that are displayed when Isaac’s profile is viewed.”

How’s the saying go? When it’s free, you’re the product.

Affinity segments/categories

All the data we give Google for free has allowed them to roll out this new Affinity Segments product which is Google’s own new segmentation system.

In their own words:

“We use the main topics and themes from the page as well as data from third-party companies to associate interests with a visitor’s anonymous cookie ID, taking into account how often people visit sites of those categories, among other factors.

Google may use information that people provide to these partner websites about their gender, age, and other demographic or interest information. We may also use the websites people visit and third-party data to infer this information. For example, if the sites a person visits have a majority of female visitors (based on aggregated survey data on site visitation), we may associate the person’s cookie with the female demographic category.”

In typical Google fashion, aside from the video and a few articles on the AdWords support site, detailed information about these segments is pretty sparse. Luckily, I was able to get my hands on a deck with short user stories and targeting ideas for each segment. I’m sure your AdWords account manager would be able to furnish something like the below if you asked nicely.

Affinity Segments is the broad name for these targeting types, but in practice Google offers “Affinity Categories,” “In-market Buyers,” and “Other Categories” as targeting types in AdWords. Affinity Categories group users by broad interests, In-market Buyers are people actively looking to purchase, and Other Categories cover a variety of things. You’re likely to see Other Categories the most if you’re not in the US.

I appreciate that Google makes the distinction between “Affinity Categories” and “In-market Buyers” as this directly mirrors the approach I take in creating both “Audience Personas” and “Buyer Personas.” More on that later.

As an end user you can see which demographics and interests Google has associated with you in your Ad Settings. You can also opt-out or change your features as seen below.

However, the most important point for this persona discussion is that you can now measure everything in Google Analytics based on these segments.

Let that sink in for a second. Google has Google+ as an “identity platform,” which is pretty much a front end for data collection and modeling of people. They have Google Consumer Surveys so marketers can poll the audience, and I imagine at some point you’ll be able to ask questions by affinity segment. And now you have Google Analytics showing website actions in the context of those affinity segments. Google has just set itself up to disrupt the entire market research industry with end-to-end people modeling. If that doesn’t sell you at least on the power of segmentation, nothing will. This is completely unprecedented.

Back to table of contents

Ok, enough with the background; let’s get you building personas. There are many methods for developing personas and I will discuss several of them, but you should choose your approach based on the data and resources at your disposal. Again, what we will be doing is collecting data, segmenting it and telling a story about that segment. First I’ll outline the different processes, then we’ll walk through the creation of a persona for Moz leveraging data from the scraping post, Twtrland, Followerwonk, the community Q&A forum, and feature requests.

In my experience a combination of approaches yields the best personas; otherwise you’ll end up relying too much on your own assumptions. I also typically build four personas, with Googlebot – which AJ Kohn has aptly named the Blind Five Year Old – acting as the fifth, but you can build as many as you see fit.

Layering data

If you’ve seen me speak in the past year or so you’ve probably seen this image. When I was at my previous agency my market research lead Norris Rowley and I developed a methodology wherein we layered data from Nielsen Prizm and Experian Simmons to collect data on segments at scale.

When I say layering, I mean that we look for commonality between datasets, and if there is enough commonality or overlap we consider all features potentially valid for sub-segments. That is to say, if enough attributes of a Prizm Code and a MOSAIC Type are shared, we consider any data in one to be potentially valid for the other, and we applied this approach across all the available datasets. Whether or not that is scientifically sound can be debated, but remember that personas are hypotheses that will ultimately be validated or invalidated through measurement.
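This layering idea can be sketched as a simple set-overlap check. The attribute names and the overlap threshold below are assumptions for illustration, not real Prizm or MOSAIC values:

```python
# Minimal sketch: treat two segments from different providers as
# interchangeable when their attribute sets overlap enough.
prizm_code = {"urban", "age_25_34", "renter", "heavy_mobile"}
mosaic_type = {"urban", "age_25_34", "renter", "college_grad"}

def overlap(a: set, b: set) -> float:
    """Jaccard similarity: shared attributes / all attributes."""
    return len(a & b) / len(a | b)

THRESHOLD = 0.5  # assumption: tune to your tolerance for error

if overlap(prizm_code, mosaic_type) >= THRESHOLD:
    # All features of either segment become candidate attributes
    # of the merged sub-segment
    merged = prizm_code | mosaic_type
    print(sorted(merged))
```

Again, this is a hypothesis-generating shortcut: any attribute the merge produces still has to survive validation in your analytics.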

Since the Prizm and Simmons surveys deal mostly in offline behaviors we’d plug those data points into Social PPC inventories (Facebook Ads, Twitter Ads, LinkedIn Ads) to ensure that those segments were valid online. If they proved to be valid then we’d take that segment and build a persona.

I still believe this to be a solid approach, especially if you can use this data alongside some of Simmons’ other products that measure online usage behavior, as well as Google’s Affinity Segments.

No matter which method you use, you should start by determining the business objectives, which will then help determine the goals of your research. Then define how these personas will be used. Are you just looking to focus on your buyer personas, or will you be thinking about audience personas as well?

Qualitative research

With qualitative research you’re asking open-ended questions of small sample sizes to get a sense of the “how”s and the “why”s behind a specific problem. You’re typically looking at unstructured data to surface commonality among your user group, and any insights are then validated at scale through quantitative research processes. Qualitative research in our context is often user interviews, focus groups, content analysis, text mining, ethnography and affinity mapping.


Affinity mapping / affinity diagramming

When most people think of a persona-building exercise they think of this. Affinity mapping or affinity diagramming is the process of collecting everyone’s thoughts and segmenting them into meaningful groups. In the context of personas this is typically done in a several hour session of everyone writing their ideas of their customers on post-it notes with Sharpies, discussing them as a team and then grouping them.

This process is great for putting the consumer back in focus for the team and also for getting executive buy-in. However, it’s mostly based on assumptions, so I would not suggest doing only this when building personas, as your research may be attacked and biased by HiPPOs.

When you do this you want to get all the key stakeholders involved, especially the upper management team, but most importantly the people who deal with your customers or users on a regular basis, like your Sales or Help teams. The former helps with internal adoption; the latter helps get closest to the right answer. Many people building personas stop here to save time and resources, but when you do, the resulting profiles are typically known as “proto-personas.”

Affinity mapping is typically done in the following 90-minute rounds:

  1. Assumption Round One (Needs) – Each person spends 5-20 minutes jotting down a goal, activity, need, or problem for any user. This should exclude any attributes of the user; rather, it’s about what the user is trying to accomplish.

    An example assumption could be “User needing to make a confident decision on which LSAT prep course to enroll in.”

    Once everyone has comfortably collected their ideas, you go around the room and each person introduces one of their post-its. The entire team weighs in on how valid they believe the assumption to be. Those that are valid are placed on the board; those that are not are discarded. You continue until all post-its are on the board or in the trash. Throughout the process, groups start to emerge as assumptions begin to fit together. You can give the groups names if you’d like, but at this point it’s not that important, as names can be given later in the analysis phase.

  2. Assumption Round Two (Attributes) – Each person again spends 5-20 minutes jotting down information about the target audience, but this time they present attributes of the user groupings from the first round. Again, you’ll go around the room and everyone will share and discuss their assumptions. The ones that the group agrees are valid will then get added to the wall.

    To continue the example from above, an assumption could be “College graduate, 25-34, who is unhappy with their career.” Starting with the needs in the first round helps you really zero in on the demographics and psychographics of the people in this round. If you go the other way around, the parameters of the people may be too broad.

  3. Factoid Round – The final round of the exercise involves everyone in the group spending 15-30 minutes finding data points to back up the groups of assumptions. This data can come from any number of relevant sources, including analytics, sales data, and internal and external research. Again, the team discusses the data points, decides their validity and adds them to the groups.

    An example fact could be “20% of all signups for our LSAT prep course graduated college 4-11 years ago and reported in their registration that they want to make more money.”

    The factoid round helps perfect the user story based on quantifiable realities instead of just assumptions. It also allows you to potentially dump segments if there’s no data to back them up.
    ProTip: Although it sounds like a daunting procedure that requires in-person interaction, it can be done very effectively remotely using Mural.ly and Google+ Hangouts.

    The screenshot above is from a recent session I did with a startup called Trip.Me in Berlin. We got members of the marketing team, the CEO and the Operations team together on G+. We color-coded each round of assumptions and factoids with the virtual post-it notes and then the tool makes it easy to bring in links and content that supports any assumptions anyone on the team had. The Google+ effects made it a fun time for all.

  4. Build Personas – At this point you have all the data to build out the skeletal personas. Your goal should be to whittle all of these insights into 3-5 actionable personas. While you can make as many as you’d like, it’s difficult for teams to stay mindful of too many. We’ll go into more of how to formulate stories based on the data when I actually walk through the process, but at this point that is what you’ll do.

    These are often referred to as skeletons or proto-personas because they don’t have direct user research or a large wealth of quantitative data behind them. However for many people this is just fine because the team may be most invested in this type of persona, and that will help with adoption.

Focus groups

Focus groups are formal meetings with people from the target segment in which a moderator asks research questions to understand users and their needs. I’ve personally never run one of these, but the ones clients have conducted and shared with us have made useful inputs into the creation of personas. They help with determining questions and the need states of users. However, I often find that moderators lead the group on some of the questions, thereby invalidating responses and drawing biased conclusions.

The quality of the output from a focus group has everything to do with the experience and biases of the moderator, the quality of the questions, and most importantly the selection and attentiveness of the panelists. Another thing to be leery of is the dominance of one opinion in group settings, as people are often swayed by the loudest participant. Furthermore, the incentive offered for being involved may be the participants’ only reason to show up, in which case they won’t give thoughtful answers.

We’re about halfway through the post so I encourage you to take a break and watch Conan O’Brien go undercover and moderate a focus group about himself:

[embedded content]

User/customer interviews

These are similar to focus groups, except they involve a one-on-one or small one-on-two environment where you speak directly to a user or customer. For design, products or CRO this can be usability testing and eye tracking, or it can just be direct Q&A, as in the case of personas. All of the insights into how customers use the product can be valuable both to the personas and to determining the user journey.

Ethnographic research

Ad-hoc data collection is what I’ve been calling the method of using social listening, forum searches and keyword research to build personas, but I’ve come to learn that research like this – where you watch users act in their natural habitats – is called “ethnography,” or when it’s on the web, “netnography.”

This is a great way to build personas when you have few resources, because you can easily identify online communities or watch hashtags and specific representatives of your users on Twitter. Great tools for this include Topsy, Sysomos, Radian6, Google Discussion Search, the Keyword Planner, the Display Planner, Twtrland and Followerwonk.

The Display Planner, Quantcast, Compete, Twtrland and Followerwonk will all give you demographic data that helps you frame your personas. Where Twtrland bests Followerwonk is in its ability to infer interests from tweets and not just user bios. The Keyword Planner gives you the keywords associated with the site for use as the vocabulary to find your users in discussion search and eavesdrop on their conversations with Social Listening tools like R6, Topsy and Sysomos.

Naturally, you’ll need to do several iterations of looking at keywords and conversations to identify trends across your users. You can also use sites like Quora and Reddit, going as far as to pose questions to kickstart the conversation.

While the screenshot above is a good framework to work within, there’s no defined structure to ethnographic research. You’ll have to judge for yourself when your research questions have been answered. However, you should generally expect to do the following:

  • Collect examples of what you see users doing in their natural environments, called “field notes”
  • Analyze your notes to discover new questions, then reiterate
  • Look for shared patterns of belief, language and behavior
  • Write the ethnography, which in this case is the persona
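The "shared patterns of language" step above can be sketched as simple term counting over your field notes. The notes below are made-up examples (reusing the LSAT prep scenario from the affinity mapping section), not real listening data:

```python
# Hedged sketch: surfacing recurring language across field notes
# collected via social listening or forum searches.
from collections import Counter
import re

field_notes = [
    "Can't decide which LSAT prep course is worth the money",
    "Looking for an LSAT prep course with live tutoring",
    "Which prep course actually raises LSAT scores?",
]

# Throwaway words to ignore; extend for your own corpus
STOPWORDS = {"the", "an", "a", "is", "for", "with", "which", "can't"}

words = Counter(
    w
    for note in field_notes
    for w in re.findall(r"[a-z']+", note.lower())
    if w not in STOPWORDS
)

# Recurring terms hint at a shared need state across users
print(words.most_common(3))  # "lsat", "prep" and "course" each appear 3 times
```

At real scale you would do this per community or hashtag and compare the patterns across groups, which is exactly the reiteration step the list describes.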

Ethnographic research is both the easiest and the hardest of approaches: it only requires observation, but it is completely subjective, so it’s hard to convince people that the insights should stand on their own without quantitative research to back them up.

Quantitative research

If you’re reading Moz, you’re probably a data-driven marketer, so this end of the research spectrum will appeal to your sensibilities. Quantitative research is about using numbers and statistics to understand the behaviors of users empirically. The sample sizes are often quite large so that insights can be applied to broad populations of people.

Multiple-choice surveys

Polling your target audience allows you to ask precise questions. There are many options for this, but I prefer SurveyMonkey Audience for this type of work, simply because they collect demographic data explicitly from users, while Google infers it from user behavior. Survio is also a good choice for surveying non-US markets. Survey design is a science in itself, and SurveyMonkey has great resources on it, but the key thing to note here is that at this point you don’t want your surveys to be exploratory or open-ended in nature. You want your surveys to give users well-defined choices based on your qualitative research. The results will need to be cross-tabbed until insights are wrangled out and personas begin to appear.
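That cross-tabbing step looks something like the following pandas sketch. The age brackets and answer choices are invented for illustration, not real survey data:

```python
# Illustrative cross-tab of multiple-choice survey responses: the
# point where segments (and eventually personas) start to appear.
import pandas as pd

responses = pd.DataFrame({
    "age_bracket": ["25-34", "25-34", "35-44", "25-34", "35-44"],
    "main_need": ["rank tracking", "link data", "rank tracking",
                  "rank tracking", "link data"],
})

# Rows: who the respondents are; columns: what they say they need
xtab = pd.crosstab(responses["age_bracket"], responses["main_need"])
print(xtab)
```

A cell that dominates its row (here, 25-34 respondents choosing rank tracking) is the kind of concentration you would fold into a persona’s needs.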

Market segmentation tools

As I mentioned before, Experian Simmons and Nielsen, as well as tools like MRI and comScore, provide market segmentation based on surveyed panels and usage data. These tools are incredibly helpful for scaling the persona-building process by providing prebuilt segments as well as a wealth of data in the context of those segments.

These tools fall short when a specific question hasn’t been included. The providers are eager to take feedback and insights to add to their quarterly surveys, but even when they do, you are at least three months away from seeing your questions answered and fed into the system.

Analytics

Even without demographic tracking, your analytics can hold a wealth of knowledge, especially internal search, paid search and historical organic search keywords in the context of site actions. Looking at location data, as well as the times your users visit, can also help determine their attributes. Really, what you can pull from analytics is completely dependent on your setup.
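As a small example of "keywords in the context of site actions," here is a sketch of computing conversion rate per internal search term from an analytics export. The column names and terms are assumptions, not a real export format:

```python
# Hypothetical sketch: internal site-search terms joined to a site
# action (here, whether the visit converted).
import pandas as pd

searches = pd.DataFrame({
    "term": ["rank tracker", "rank tracker", "link index", "api"],
    "converted": [True, False, True, False],
})

# Conversion rate per internal search term hints at need states
rates = searches.groupby("term")["converted"].mean()
print(rates.sort_values(ascending=False))
```

Terms with high volume but low conversion are often the unserved needs that a persona’s user story should capture.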

User profiles

If your site has user profiles, especially ones that have collected data from social logins or other identity data providers, there is a wealth of data that users have explicitly set.

Internal data

Data on sales, calls, returns, reviews, users and transactions of all types can be leveraged to give parameters and color during the persona development process.

Publicly available studies

Every industry has public research and data that can be leveraged when building personas. For example Google has the Consumer Barometer where you can pull various data points.

I tend to use a combination of these approaches in my persona building, depending on what resources are available. In my client work I’ve found it best to start with an affinity mapping session and then prove or disprove those assumptions and gain additional insights with data from the other sources.

Back to table of contents

For this exercise we will be using data I’ve scraped from Moz in context with some social analysis and listening tools to build Mozzy Smurf. I’m calling this persona Mozzy Smurf just to keep with the theme of the post, but I generally like to give personas an alliterated name in the form of [adjective] [name]. For example, this persona might normally be called “Busy Bob.”

Naming is incredibly important because the adjective helps everyone who will use the personas recall their attributes much more easily, and the name portion helps us imagine them as a real person.

State our goal

One of Moz’s key business goals is to increase the number of free users who become monthly subscribers. Therefore, the goal of this persona exercise will be to discover a key segment of Moz’s audience that is very likely to share and link to content, but hasn’t purchased a Moz Analytics pro membership yet. Let’s get to the bottom line of how we can show Moz is valuable enough to pay for. The ultimate output will be the user story, user needs, psychographics, demographics and engagement insights.

Additionally, we’ll have all the values required to set up a segment to measure this persona in Google Analytics including which Affinity Segment best represents the persona in the data that we’ve collected. We’ll be using data from the Google Display Planner, Twtrland, Followerwonk, Moz Q&A, and data I scraped from Moz user profiles almost a year ago.

Demographic data

First, I’ll start by pulling demographic data from the Google Display Planner. If you remember the DoubleClick Ad Planner, this has replaced it. Starting from the demographic data allows me to determine what parameters are valid for the user segments I’m looking to discover. While the Display Planner will be the most relevant, we could have also pulled this data from sites like Compete and Quantcast. If there’s no data for your site, pull data on a high-performing competitor’s site.

Based on this data, most of the people that visit Moz are between 25 and 34, male, and use mobile devices. They are interested in SEO, Marketing, Advertising, and Loyalty Programs. By the same token, based on this data it’s also valid to build a segment that is 65+, female, a heavy tablet user, and interested in Loyalty Programs but not SEO. While this segment is valid, it’s not actionable for Moz, so we wouldn’t create a persona based on that combination. As we collect more data, we’ll zero in on the attributes that define our persona.

There’s one big caveat to this data: I’ve noticed when comparing it to client analytics that the devices data is typically way off. You must keep in mind that every analytics program measures differently, and ultimately your own analytics is the proving ground for any assumptions.

Another caveat is that since I’m so close to the Moz brand (and the 25-34, male, mobile devices segment is me), it’s easy for me to lean on my assumptions. This is the very reason I’ll need to pull data from a variety of sources in order to validate any hypotheses and get the most value out of this exercise.

User needs

Normally user needs are best surfaced qualitatively through user interviews, but as digital marketers we can discover the user needs we aren’t currently serving through internal search analytics and social listening. Before (not provided), we could also look at organic keywords, but now only PPC will surface that data.

Once needs are determined, we’ll be able to identify “need states,” which are the specific goals the user is looking to fulfill with their search and/or visit. An example need state could be “How do I find out the best software for rankings?” and this could be mapped to the awareness phase of the consumer decision journey. We’ll speak more about this when we get to the user journey.

In this case we already have a quantified user needs data set from the user profile data that Jiafeng Li already analyzed. While this information was pulled in early 2013, it’ll still work to illustrate the process. From the screenshot we see that the biggest segment of users with Basic accounts is the Business Owner, which we can assume means Small Business Owner in the case of Moz.

Some more key data points from the report are:

  • The largest single group of Basic users has been using Moz for less than a year though there are many that have been users for 2-7 years.
  • There is a large group of Business Owners that spend more than 50 hours a week on SEO and are Basic users.
  • Super Heavy Basic Users that are Business Owners are mostly interested in on-page optimization, link building, content & blogging, intermediate & advanced SEO, analytics, SEO technical issues, social media, keyword research, and entrepreneurship and web design – in that order.
  • Business Owners make up 22% of the entire sample of users.

Next, I’ll switch to netnographic research. I’ll take a random sampling of Moz Q&A threads, looking at popular questions in each of the categories that fit my audience to identify what their needs are. I’ll also look at the feature requests section of the site and finally do some social identification and listening.

In Moz Q&A there are filters that help with this process, allowing me to pull the questions with the most responses in each of the topics. Unfortunately, this is a relatively time-consuming process because I’ll need to double-check the profiles of the contributors to ensure they fit within my Basic user / small business segment. In the interest of time, I’ll only review the first page of results for each topic, looking at only the past 30 days, because I’m not sure whether the old private Q&A was merged into public Q&A when Moz made the change.

Next we’ll look at the explicitly requested user needs with regard to the Moz product; the feature requests section of the site provides just that. I’m sorting by the most popular feature requests and looking at the top 10. Again, this may not be completely scientifically sound because I’m looking at different windows of time for each dataset. Unfortunately, this is a hazard of netnography, but it’s worth keeping track of the dates of posts when you collect your data so you can decide the range you’ll be looking at after data collection is complete. A lot of this data will be captured in the form of screenshots, and if you’re using a tool like SnagIt it will keep track of the URL so you can refer back.
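Since each dataset was gathered over a different window, one way to make them comparable is to trim everything to a shared date range after collection. Here’s a minimal sketch, assuming each collected item carries a `postedAt` date (the field names and sample data are illustrative assumptions, not Moz data):

```javascript
// Hypothetical sketch: trim every netnographic data point (Q&A threads,
// feature requests, tweets) to one shared window so the datasets are
// comparable. Field names and dates below are made up for illustration.
function filterToWindow(dataPoints, startDate, endDate) {
  var start = new Date(startDate).getTime();
  var end = new Date(endDate).getTime();
  return dataPoints.filter(function (point) {
    var posted = new Date(point.postedAt).getTime();
    return posted >= start && posted <= end;
  });
}

var collected = [
  { source: 'qa',      postedAt: '2013-10-02', text: 'Multi-seat accounts?' },
  { source: 'zendesk', postedAt: '2013-06-15', text: 'iPhone app please' },
  { source: 'twitter', postedAt: '2013-10-20', text: 'Is Moz down?' }
];

// Keep only the past-30-days style window we settled on after collection.
var windowed = filterToWindow(collected, '2013-10-01', '2013-10-31');
```

Deciding the window once, after collection is complete, keeps every source honest rather than comparing a year of feature requests against a month of tweets.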

Then I review the people asking and contributing to the questions to see what they are specifically talking about.

Since the feature request app is on Zendesk I have to search for people’s Moz profiles for verification.

After this process I’ve found that the small business owner segment is largely underrepresented in the feature requests section of the site. Those who do give feedback are mostly agency, followed by in-house, and then independent consultants or agency owners. Naturally, Moz does proactively reach out to users for feedback, but the mom and pops that the GetListed.org acquisition was likely targeting are definitely underrepresented in the online conversation I was able to find.

Roughly, in the order of pain points that had the most business owners, we have:

  • Multi-seat accounts – Users have been incredibly vocal for the last couple of years about wanting to be able to associate multiple email addresses with an account so multiple users can login. The conversation has gotten a bit heated because the team hasn’t been able to deliver on the timelines due to other more pressing features, updates and the rollout of Moz Analytics. This was the biggest issue across all account types, but it was definitely dominated by agencies. This makes sense because business owners typically will not require multiple parties to login to their account.
  • The Value of Moz – Based on the insights I got from the segmentation, I went into this exercise assuming the biggest pain point would be a small business owner not understanding the value of Moz.

    These users seem to understand that there is some value in the Moz toolset, but they can’t quite justify the expense when they are a small fry.

  • Moz iPhone App – Some people want at least top-line metrics from Moz Analytics and Whiteboard Friday in a native phone app.

  • Cloning / Altering Campaigns – Users need to be able to change the domain name on their accounts without losing their historical data.

  • Analysis of More Competitors – Users need to compare more than 5 competitors; some are asking for as many as 15.

  • Moz Link Manager – Some users appear to be big fans of the toolset, but wish it had features of other tools so they could use Moz for everything.

From this I’ve found some specific user needs and validated that there are indeed users within the demographic that the Display Planner reported.

The next step is social listening. I’ll be leveraging free tools with keywords identified in the user needs collection phase, namely Twtrland and Twitter Search. Normally, I would have used Discussion Search, but it seems like Google killed it recently. Luckily, Twitter Search allows us to search by sentiment and return tweets that contain questions. The negative sentiment filter is a bit of a joke, though, because it just looks for a frowning emoticon rather than performing real sentiment analysis.

I’ll keep it simple and search for tweets with questions.

Immediately I find a user within our target group asking for a feature. It’s good old Justin Briggs asking for improvements to the workflow. Justin is no longer a small business owner, but he was until recently, so I’d consider his feedback valid. However, choosing him reveals my own bias and context, so I’ll throw this data point out.

Further searches through the tweets with question marks reveal more ephemeral questions regarding the status and uptime of Moz. That’s an insight in and of itself: Moz should do a better job of making the Application Status experience more visible. It took me 10 minutes to remember where it was, and I couldn’t find it by searching.

My next step is to review the users that fit my demographic data to look for commonalities. In this case I can use Twtrland to look at that specific subset of Followers. Twtrland has filters that allow me to set the gender, the age range and whether or not the user is an entrepreneur.

I’ll also take a quick peek at Graph Search on Facebook to see what type of people it returns when I look for Men who are not my friends and like Moz.

This allows me to review these people’s timelines to discover more common interests and develop more robust profiles. For example, I’ve noticed a lot of users that follow Moz also read Tim Ferriss books. Then I can go to Followerwonk and see that there is a user overlap of nearly 20k users which tells me this is a potentially valid data point.
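That overlap sanity check can be sketched as a quick calculation. The follower counts, the overlap figure, and the 5% threshold below are all assumptions; the idea is simply to ask whether the overlap is large relative to the smaller audience before treating a shared interest as a persona data point:

```javascript
// Hypothetical sketch: validate a shared-interest data point (e.g. Moz
// followers who also follow an author) by its share of the smaller audience.
function overlapShare(followersA, followersB, overlap) {
  return overlap / Math.min(followersA, followersB);
}

// Threshold of 5% is an assumption to tune against your own data.
function isValidDataPoint(followersA, followersB, overlap, threshold) {
  return overlapShare(followersA, followersB, overlap) >= threshold;
}

// Made-up counts: 200k followers, 300k followers, ~20k shared followers.
var valid = isValidDataPoint(200000, 300000, 20000, 0.05);
```

A raw overlap number like 20k means little on its own; normalizing by audience size is what makes it comparable across candidate interests.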

Bringing it all together

At this point we’ve discovered more than enough data points from the user’s ecosystem to tell a full story of Mozzy Smurf, so let’s do it.

Mozzy Smurf is an internet entrepreneur whose coffee table is littered with books like “The Lean Startup” and “The 4-Hour Work Week.” As a young male business owner in an always-on world he has little tolerance for lack of speed or agility in the tools he chooses to pay for.

Mozzy Smurf is a power user who prefers one tool over many, and he needs his data easily on the go. A fan of the Moz brand, he has learned a lot of what he knows from the thought leaders on the Moz blog and is busy putting it into practice on his business venture. Trying to get his business off the ground, he knows SEO is free traffic, but he also knows that it requires a large time and content commitment. While Mozzy Smurf subscribes to the philosophies in the books he’s read, he knows it’ll take hard work to get there, and he appreciates that.

Keeping the ball in the air takes a lot of traveling for Mozzy Smurf. He’s often found in the SmurfAir Lounge between connections on upgraded flights. Mozzy Smurf appreciates his loyalty programs and will pay money to get exactly what he wants, but only if he’s getting exactly what he wants.

Mozzy Smurf is an avid reader of the Moz blog and its long form content. He especially enjoys listening to Whiteboard Friday as he’s traveling. Mozzy Smurf wants to know the tactics that will get his business to profitability as soon as possible and he needs his team to be able to help out and monitor the progress.

Mozzy Smurf has been a follower of Moz for some time and considers himself a fan of the brand, but he expects more out of the software in the wake of their funding and doesn’t see enough value. He monitors the growth of the product, but finds it difficult to invest when there are so many features that he never uses.

Engagement Insights:

  • A la Carte Pricing Tier – Moz should consider an a la carte pricing tier for this user type, but only offer it after they cancel their account. Mozzy Smurf wants his account to scale up and down as required.
  • iOS/Android App or Phone-Optimized Views – Mozzy Smurf wants to check his Moz metrics at a glance when he’s about to hop on a plane or when he’s in a meeting. Moz should consider building a pared-down app that allows for a customized dashboard display.
  • Multi-Seat Accounts – Mozzy Smurf’s team needs to be able to login to his account when he’s on the road, but he doesn’t like them being so close to the password he uses for everything else. He also doesn’t quite understand that other users can’t access his credit card details.
  • Post the Moz Roadmap and Progress – Mozzy Smurf could believe in the Moz product more if he had more visibility into the overall roadmap and progress. While Moz does do some of this, a more robust experience integrated with the Moz Status experience may be exactly what Mozzy Smurf needs. He’s a savvy enough individual to know making great products takes time, but just wants to know what’s going on.
  • Gamified Feature Catch-up – Mozzy Smurf loves the features of Moz’s competitors, but hates having to use several different tools. Moz should consider polling this persona for the features they want the most. This experience could be delivered in a leaderboard fashion, and incentives could be offered to the users who picked the feature that rolls out next.
  • More Premium Gated Content – With the launch of Moz Analytics, Moz made a lot of the gated content free. Mozzy Smurf saw the value of a subscription decrease when Mozinars and Q&A became public. Moz may be well served by developing even more new gated content units and types.
  • Value-based On-boarding – Mozzy Smurf doesn’t know about all the awesome products in Moz and therefore doesn’t understand what his $99/month is getting him.
  • Tooltips at Login – Surfacing features, tricks, and content relevant to Mozzy Smurf’s interests as tooltips will help him see more value in the product.

Of the Affinity Segments Mozzy Smurf matches the Technophiles best and the following is how he’d be represented as a custom segment in Google Analytics.

We use the demographics we uncovered in our research as well as the most relevant Affinity Category. In the case of Moz, we’d also look to fire custom variables based on the user profiles so we can more accurately ensure that it’s Mozzy Smurf who’s visiting.
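As a sketch, that custom-variable tagging could look like this in the classic ga.js syntax used elsewhere in this post. The slot index, variable name, and value are assumptions for illustration:

```javascript
// Hypothetical sketch: tag the visit with the matched persona so it can be
// segmented later in GA. Scope 1 = visitor-level in classic Google Analytics.
var _gaq = _gaq || [];

function tagPersona(personaName) {
  _gaq.push(['_setCustomVar', 1, 'persona', personaName, 1]);
  _gaq.push(['_trackPageview']);
}

// Fire once the site has matched the logged-in profile to a persona.
tagPersona('mozzy-smurf');
```

With the visitor-level custom variable set, the Google Analytics segment can combine it with the demographic and Affinity Category conditions above.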

Now just like that we have a persona whose activity can be measured across the entire site. Normally there would be even more iterations of the research and deeper dives into every available data point, but this illustrates the process of collecting data and then telling a plausible story that we can act against.


The user journey is the path the user must take to fulfill a given need or meet a given goal (which can be a collection of needs). You may be familiar with the concept of the marketing funnel and the consumer decision journey; the user journey can mirror these or can be comprised of various steps that lie within these stages.

For designers this is typically just the path within the site, but in a marketing context these need states represent different phases that are associated with different actions, content, and even keywords. User journeys happen across various properties, devices, and time periods. For example, planning a vacation involves many steps: research places to go, search for things to do, plan the itinerary, find the best prices, book the trip, find out whether you need a visa, get the visa, buy things for the trip, pack bags, find transport to the airport, check into the flight, fly, get to the hotel, enjoy the trip, come home, post on social media about the trip, and review the trip on the travel agent’s site.

Some portions of the user journey are online, some are offline. All of the need states that are relevant to the business can be mapped to the consumer decision journey and your funnel for better measurement and optimization, but what’s important is understanding user needs and how to support them at all relevant stages in order to meet the business objectives.

The phase in the user journey or need state is what we’re looking to uncover per keyword with the persona-driven keyword research process.

I’ll go more in-depth on this in a future post, but the user journey will become relatively obvious during the persona creation exercises if you’re doing them right. As you interview, review, and empathize with the user, it’ll be clear what steps they are taking to meet their goals and which of those steps are large pain points.

In your content auditing process you should make sure that content fits the need states of the user journey, and your content plan should support any phases within the journey that have content gaps. For example, until recently Moz lacked robust documentation for the API. Users on a journey to leverage Moz data in their own applications or custom reporting systems were not able to find what they were looking for. Therefore the “How do I use Moz data in my application?” need state was not sufficiently met. The new docs are amazing. Shoutout to the Moz team for fixing the issue.


Now that we’ve built our personas we are able to add more layers of intelligence to our marketing efforts.

Lead scoring

I’ve never used Marketo, but I know their product is one of the leading tools for lead management. However, by identifying your personas using data sets like FullContact or Rapleaf, or even by having them sign in to your site with LinkedIn, and by tracking their actions over time, you can score the leads yourself. I’m sure I will follow up with a more detailed post on how to do this, including the code, but from a high level it goes as follows:

  1. First, develop values for different actions on the site. Visiting the pricing page several times is worth more than reading a blog post, while visiting the jobs page and looking at a specific job is worth negative points.
  2. Upon arrival log the following for the user in persistent cookies:
    • The channel the user came from
    • The landing page
    • Their first action
    • Score of actions
  3. Offer progressively aggressive opportunities for the user to log in and/or self-identify by using LinkedIn or creating an account.
  4. Once the user has self-identified apply the previously logged actions to the user and save them in the database and continue to log their score.
  5. Keep tracking the user’s actions across all possible channels until they meet a specified threshold, at which point send an email to your team and log them as a hot lead in Salesforce or whatever other CRM you use.
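The steps above could be sketched as follows. The action values, the threshold, and the field names are all assumptions to tune with your sales team; the cookie persistence and CRM hand-off are reduced to plain objects so the scoring logic itself is clear:

```javascript
// High-level sketch of the lead-scoring loop described above.
// All values here are assumptions, not a prescribed scoring model.
var ACTION_VALUES = {
  'view-pricing': 10,    // strong buying signal
  'read-blog-post': 2,   // mild engagement
  'view-job-listing': -5 // probably a job seeker, not a lead
};
var HOT_LEAD_THRESHOLD = 25; // tune with sales feedback

function createLead(channel, landingPage) {
  // In the browser, this state would live in persistent cookies until the
  // user self-identifies, then move server-side keyed to their account.
  return { channel: channel, landingPage: landingPage, actions: [], score: 0 };
}

function logAction(lead, action) {
  lead.actions.push(action);
  lead.score += ACTION_VALUES[action] || 0;
  return lead.score >= HOT_LEAD_THRESHOLD; // true => flag as a hot lead
}

var lead = createLead('organic', '/blog/personas');
logAction(lead, 'read-blog-post');
logAction(lead, 'view-pricing');
logAction(lead, 'view-pricing');
var isHot = logAction(lead, 'view-pricing'); // crosses the threshold
```

When `logAction` returns true, that’s the point at which you’d email the team and push the lead to the CRM.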

Lead scoring also needs to be a feedback loop between the marketing and sales teams because you’re ultimately trying to mathematically determine when a lead is worthwhile. Your salespeople will know better if your math is right or wrong.

Building effective business cases

It tends to resonate far more with clients when you make a business case in the context of a persona. “Our link building campaign will help us capture 40% more of the Mozzy Smurf persona, who converts at a rate of 13.4% when landing on our signup page and has a subscription retention rate of 42.3%” means a lot more to a client than “our link building campaign will get us to number one for the keyword ‘software as a service.’” I spoke about this at length in my MozCon talk in 2012.
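To see why the persona framing resonates, here is a back-of-the-envelope sketch of the math behind a claim like the one above. The baseline visitor count is a made-up assumption; the 40% lift, 13.4% conversion rate, and $99/month price come from figures elsewhere in this post:

```javascript
// Hypothetical projection for the Mozzy Smurf business case.
var baselineVisitors = 10000; // assumed current Mozzy Smurf visitors/month
var lift = 0.40;              // "capture 40% more" of the persona
var conversionRate = 0.134;   // 13.4% conversion on the signup page
var monthlyPrice = 99;        // $99/month subscription

var newSubscribers = Math.round(baselineVisitors * lift * conversionRate);
var monthlyRevenue = newSubscribers * monthlyPrice;
```

Framing the projection in subscribers and dollars per persona, rather than in rankings, is what makes the case concrete for a client.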

Content strategy and content marketing

Whenever I mention Content Strategy and Content Marketing in the same sentence, I usually start with some sort of remark about how they are not the same thing, and this will be no different. Content Strategy and Content Marketing are very different things, but what they should always share is a focus on the user.

The work products of both disciplines are much less effective without keeping the user at the center. With all the intelligence collected on users, it’s quite easy to make data-driven decisions about what content will resonate with your target audience, and to drive processes and workflows that support the inclusion of those users in ideation phases. You’ll also become acutely aware of the channels where those users hang out, so you can get direct feedback on any content you’re thinking about creating and easily identify the influencers who will spread your content.

Vetting content ideas through the lens of your users keeps the strategy in your Content Marketing and the marketing in your Content Strategy.

Legitimate guest posting

With all the discussion of the death of guest posting, the focus shifts back to high-quality guest posting opportunities that place your message in front of your target audience in order to drive referral traffic. If there is a good contextual link opportunity within that framework – awesome. Otherwise, guest posting should be vetted by more than just domain authority or PageRank. Once you’ve got a strong understanding of your audience, you’ll be able to compare their makeup with the makeup of a given site’s audience.

Consider the situation where I’m targeting the Mozzy Smurf segment and have two sites to choose from for a guest post: the Smurfs’ Village site and the Moz site. Using a tool like Twtrland or Followerwonk, I can pull demographic and interest data on their followers.

It’s clear from the demographic data that my audience is not strong in the Smurfs’ Village community.

It’s also clear that my audience is definitely here on the Moz site. Granted this is an obvious example, but the methodology holds true for sites that are not as obvious. For example, what if I made a SaaS toolset for managing outreach, but realized that it doubles as a good CRM system for entrepreneurs? Now I want to write a post about digital entrepreneurship in New York City and my choices are Forbes and Wall Street Journal. Which should I choose?

Your research now shows empirically that you are better off guest posting on Forbes than on WSJ. This gives your content the best opportunity to spread naturally. It’s cyclical, however, because without the research you may not have discovered that the topic is one your audience cares about, or you may not have realized that your biggest user opportunity is in NYC; but once you’ve got those inputs, the tools make it easy.

Outreach

I’ve spoken at length about persona-based outreach with the social media methodologies I’ve developed and shared. Personas help with the identification of link prospects at scale through tools like Followerwonk and Twtrland. They also help with creating contextual conversations with these people. The psychographics of the persona allow you to quickly identify topics that may be of interest to them in order to write outreach emails that will resonate.

Measurement

I went into a lot of detail on what is called cohort analysis in my last Moz post. The key takeaway is that cohort analysis gives us context that keywords never did. Consider this hypothetical case of my favorite ambiguous keyword “reading glasses.”

The keyword “reading glasses” is a funny one because it could mean one thing to me as someone who lives in New York City, but it could mean something entirely different to someone visiting Reading, Pennsylvania looking for souvenirs. Let’s compare two scenarios.

Explicit intent scenario

  • Users land on the “men’s reading glasses” landing page from Organic Search on the keyword [reading glasses]
  • Time On Site: 00:01
  • Bounce Rate: 100%

Implicit intent scenario

  • Cohort Lands on the “men’s reading glasses” landing page from Organic Search
  • Age: 25 – 34
  • Gender: Female
  • Location: Reading, Pennsylvania
  • Affinity Category: Do It Yourselfers
  • Time On Site: 00:31
  • Site Action: Visits Souvenir Reading Glasses for Women page
  • Bounce Rate: 50%

In the explicit intent scenario we knew that the user landed on the “men’s reading glasses” page using the keyword [reading glasses] from Organic Search, and they left immediately. We’re not all that sure why that happened; we just know that it did and that they must not have found what they were looking for.

However, in the implicit intent scenario, although we don’t have the keyword to tell us what this user is looking for, we know she is between 25 and 34 and landed on the “men’s reading glasses” page from Reading, PA. She’s of the DIY Affinity Category, she stayed on the site for 31 seconds, and her immediate action once she landed on the wrong page was to visit the Souvenir Reading Glasses for Women page. That tells me that this person most likely came in on the keyword [reading glasses] and that we’ve failed in our page title and meta description. We should also figure out how to build a better landing page that supports both ideas and drives the user straight to the conversion once they arrive. Ultimately, using personas gets us closer to the why behind what a given user type is doing.
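As a toy illustration (not a real model), that implicit-intent reasoning could be encoded as a simple rule-based classifier. Every signal and weight here is an assumption chosen to mirror the scenario above:

```javascript
// Toy sketch of implicit-intent classification for the ambiguous
// [reading glasses] example. Signals and weights are assumptions.
function inferIntent(visit) {
  var souvenirScore = 0;
  if (visit.location === 'Reading, PA') souvenirScore += 2;
  if (visit.nextPage && visit.nextPage.indexOf('souvenir') !== -1) souvenirScore += 3;
  if (visit.affinity === 'Do It Yourselfers') souvenirScore += 1;
  return souvenirScore >= 3 ? 'souvenir' : 'vision-correction';
}

var intent = inferIntent({
  location: 'Reading, PA',
  affinity: 'Do It Yourselfers',
  nextPage: '/souvenir-reading-glasses-women'
});
```

Even a crude rule set like this gets closer to the “why” than the keyword alone ever did.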

With (not provided) still on everyone’s mind as we rapidly approach 100% Secure Search in all the relevant engines, applying this idea gives us a more sophisticated form of measurement through implicit intent rather than explicit intent.

If you’re going to do this, you should implement the following code rather than the code that Google gives you, because Google’s code relies on the execution of DoubleClick to populate the data into GA. The problem is that many users have installed AdBlock and, as you can see in the Ad Settings above, users can opt out entirely (hat tip to Mike Pantoliano for schooling me on this). The data loss from using Google’s DoubleClick code alone would be far worse than the sampling issues that pop up in those infernal yellow indicators below the date in GA.

<script type="text/javascript">
   var _gaq = _gaq || [];
   _gaq.push(['_setAccount', 'UA-XXXXXXX-1'], ['_trackPageview']);

   // Attempt to load dc.js, the DoubleClick-enabled version of the tracker.
   (function() {
     var ga = document.createElement('script');
     ga.type = 'text/javascript';
     ga.async = true;
     ga.src = ('https:' == document.location.protocol ? 'https://' : 'http://') + 'stats.g.doubleclick.net/dc.js';
     var s = document.getElementsByTagName('script')[0];
     s.parentNode.insertBefore(ga, s);
   })();

   // If dc.js never executed (ad blockers, opted-out users), _gaq is still a
   // plain array at load time, so fall back to the standard ga.js tracker.
   // Either way, fire a non-interaction event recording which script loaded.
   window.onload = function() {
     if (_gaq.I == undefined) {
       _gaq.push(['_trackEvent', 'tracking_script', 'loaded', 'ga.js', undefined, true]);
       var ga = document.createElement('script');
       ga.type = 'text/javascript';
       ga.async = true;
       ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
       var s = document.getElementsByTagName('script')[0];
       s.parentNode.insertBefore(ga, s);
     } else {
       _gaq.push(['_trackEvent', 'tracking_script', 'loaded', 'dc.js', undefined, true]);
     }
   };
</script>

You should further consider leveraging Google Analytics Content Grouping in context with this so you can easily see which content types are performing for each persona type. And finally you should consider tracking site actions to infer intent through context of personas as well.
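For example, in the classic ga.js syntax used above, assigning a page to a content group and logging a persona-relevant site action might look like this. The slot index, group name, and event labels are illustrative assumptions:

```javascript
// Hedged sketch: assign the current page to a content group (slot 1) and
// record a persona-relevant site action as an event so intent can be
// inferred per persona. Group and event names are hypothetical.
var _gaq = _gaq || [];
_gaq.push(['_setPageGroup', 1, 'Whiteboard Friday']);
_gaq.push(['_trackPageview']);
_gaq.push(['_trackEvent', 'persona-action', 'video-play', 'mozzy-smurf']);
```

Crossing the content group with the persona segment is what lets you see which content types perform for each persona.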

Optimization

On a basic level, what Conversion Rate Optimizers do is look to understand the audience and make adjustments based on what will work better in persuading the people who are likely to take action. Google’s Content Experiments now allows you to set up your A/B tests based on segments (demographics, affinity categories, channels). So the first thing to do is build user personas atop Google’s Affinity Segments so they are inherently measurable; then you have a much better idea of which of your personas is performing.


Although there are many advanced applications that we’ve discussed, there are even more ways that personas can be leveraged as building blocks for more intelligent marketing.

Web psychology

Nathalie Nahai’s work is very much the next level of progression for what to do with personas. Applying cohort and socio-psychological principles to personas, specifically with regard to the user journey, is a more complicated layer that I’m not learned enough to tackle. However, filling in the blanks beyond what can be directly inferred from the data is precisely the way to make personas more actionable. Check out her MozCon talk.

Behavioral economics

The behavioral economics field in general is also highly applicable here, and Dan Ariely has written some great books on it. If you’ve ever wondered why people are more likely to buy something for $999.99 than for $1,000.00, then behavioral economics is the field for you. Start with A Marketer’s Guide to Behavioral Economics for an overview and then check out Dan’s books.

Dynamic targeting / personalization

The ability to personalize experiences based on the given persona is the true power of digital marketing. Despite the ugly light that the NSA has cast on tracking usage data, this is the same reason the user experience with Amazon is so incredible. As Christopher Butler discusses in his “Don’t Make Me Search” post, Amazon tracks all of your actions in the context of your user type and surfaces products you need before you think to search for them. Amazon believes in its ability to model users so effectively that it is taking steps to start shipping products to you before you even order them! This is the same type of customer modeling that, as we found out a few years back, Target used to figure out a girl was pregnant before her family knew.

As marketers we can easily harness this power with tools like Nudgespot and Keyword Lift.


Like I said, I’ve experienced a lot of pushback, especially among those who have more SEO than marketing experience. Here are some questions I’ve gotten on more than one occasion.

This isn’t SEO. Why do I need to do it?

If SEO is marketing then I’d argue that this is indeed SEO. You need to understand the user in order to fight for the user: their attention, their money, their time. Many optimized pages fail to perform against KPIs beyond rankings, largely because we stop at the keyword and forget there is a person seeking to fulfill a specific need behind it.

Also, I firmly believe that the future of Google is the personal algorithm, and we’re seeing the beginnings of it with Google Now and the like. Google will soon be using your identity in context with your search history and all the other affinity data it has collected to do the same thing Amazon does. Soon, creating content that speaks to specific audiences will be the only way to effectively get your message in front of them through Organic Search.

In fact, Bing is already rumored to be taking demographic data into account when ranking pages. That makes sense given the wealth of data they have via their Facebook and Twitter partnerships, and the Page-biased Search patent supports the idea with this statement:

“A page-biased search system can use demographic information to bias search results toward results associated with similar demographics. Demographic information of a user of the page-biased search system, of other viewers of a currently- or previously-viewed Web page or other suitable document or Web site, or a combination of these can be compared with demographic information of viewers of a Web page or document to be included in a set of search results. Web pages or other suitable documents to be included in a set of search results having demographics that are similar to demographics of a user or a currently- or previously-viewed Web pages or documents can be ranked more highly than other Web pages or other suitable documents.”

I think we can all agree that Google is rarely, if ever, behind. After all, these features are already built into the ad platform, and personalization of SERPs is already a very real thing.

My site’s doing well without this. Why should I build personas?

If your conversion rates across all KPIs are 100%, then I agree. Otherwise there are always ways to improve and better serve users. Understanding and segmenting the available market is a key first step to doing that.

My target audience is everybody, why do I need to segment?

Targeting everyone is targeting no one. Treating every visitor exactly the same limits your ability to get the most conversions. At the very least, users should be able to take separate paths through the site that speak directly to their needs. I would suggest reassessing who needs your product or service and developing content strategy with regard to those people rather than just keywords.

I don’t have any data, how can I build personas?

You can always do ethnographic research through social listening and reviewing communities where your users or targets congregate. If you have customers you always have data. If you don’t have customers you can always survey your targets or leverage competitive intelligence tools.

How long does it take to build personas?

That’s hard to say. How long did it take you to put together your first analytics report? How about your first SEO & social report? It depends how in-depth you’d like to go. As with anything, it takes a long time the first time, but you get better at spotting trends and faster with your tools as you master the process.

What if we or the client already has personas, do I use those?

That’s your judgment call to make. Norris and I always looked to vet a client’s personas before we’d use them. If they aren’t actionable for your context definitely don’t use them as-is. You can use them as inputs for your own research.

Debunking personas can go either way. Your client may get on board, admitting they always suspected their previous personas were inadequate, or they may push back against your new ones for reasons that are not necessarily logical.

I’m in a small niche, small or local biz. Do I need to do this?

Heavens, yes! I’d say that if you’re in a small niche you need this more than anyone, because the people looking to use your products or services will be especially fickle and looking for differentiation. If you’re a small or local business, you are in a better position to leverage market segmentation tools out of the box. PRIZM was especially made for your use case. The ZIP-code lookup tool is free for a certain number of queries to try out, and it will return the top 5 PRIZM codes in your ZIP code. This is especially useful if you’re working on local search campaigns.

Back to table of contents

Here are some smurfy resources for continued reading. There are many resources out there, but I feel these are the most actionable that I’ve come across.

Also, here are some things that I’ve written on related topics:

Go forth and fight for the users

So you’ve made it through the longest post I’ll probably ever write about anything. I’d love to hear how you’ve leveraged personas to do better inbound marketing. Or if you haven’t done it yet, I’d love to hear about how you plan to. I’d also be delighted to answer any questions around research and implementation.

Oh yeah, and I wrote this entire post in the voice of Gargamel. I dare you to go back and read it again.

New Top Strategies, Salaries and Tools: Announcing the 2014 Industry Survey Results

Late last year we set out to discover the top tools, tactics, and trends of the online marketing world. With the help of our partners, over 3,700 people participated in this year’s Industry Survey.

Today, Moz is proud to present the results.

Read the Industry Survey 2014 Results

Shifting demographics and salaries

This year’s data was analyzed by Dr. Pete Meyers, Moz’s Resident Marketing Scientist.

One area we’ve tracked for several years is the demographic makeup of professionals working in the online marketing industry. Among the shifts the survey has revealed is a rise in the number of women working in the field.

This year’s respondents were 28.3% female, up from 20.7% in 2010, although these numbers indicate we still have a long way to go.

The survey also examines salaries, as indicated by the graphic below showing the median salary by role of respondents around the world. The full data set (see below) contains even more granular information.

Median Salary by Role

Tools and strategies in 2014

In the age of (not provided), Google’s Hummingbird update, and changing practices in the world of link building and content marketing, the survey tracks both the shifting tactics and the tools inbound marketers use most to perform their jobs.

This year, we particularly wanted to know how marketers dealt with (not provided) keywords, as we’ve seen its prevalence expand to over 75% worldwide.

How do marketers deal with (not provided)?

This represents only a small sampling of the data analyzed by Dr. Pete. Check out the complete results for more insight.

Bonus: Build your own content with the full data download

Moz is making all the data collected public under a Creative Commons license. This means you are free to use it for research, creating visual assets, or even producing your own content from the raw data, as long as you follow the requirements of the Creative Commons license.

We only published a portion of the data for this year’s Industry Survey results, so the possibilities of what you can do with the remaining full data set are endless. You could segment the data by country, profession, salary or more, and publish the findings on your own site.

Get the full data download here.

Thanks to our partners and contributors

We firmly believe in collecting this data for the benefit of the entire industry, and this effort wouldn’t be possible without the help of our partners. A few of the companies that deserve special recognition:

Also a big thanks to Moz team members Dr. Pete, Derric Wise and Devin Ellis who produced the content, and finally Jackie Immel who worked tirelessly for months to bring the whole project together.

Read the Industry Survey 2014 Results

Hand Coding A Personal Website

Last year, I had something of an epiphany about web design. I realised I didn’t really know how anything worked.

Every website I’d created until then had relied on a CMS, namely WordPress. It was only when a bad plugin utterly botched the database tables leaving me helpless that I realised how little control I actually had over my precious creations. I count myself lucky that one of my friends is a wizard with PHP.

I’m not out to diss WordPress. It’s a fantastic piece of software and I still rely on it daily. However, it’s also a complex, dynamic, database-driven piece of kit – for a non-developer, gaining a real understanding of how your site works entails learning a lot of code far beyond what is actually necessary.

This prompted me to return to the fundamentals. I wanted to create a website from scratch, without using a CMS, and using only code I fully understood – or, where possible, that I wrote myself.

Here, I’ll be sharing some of the tips and advice that I found useful along the way. This guide will assume a firm grasp of HTML and CSS, and as such we won’t be covering the actual design process (although we will cover some great design tools and timesavers!). Instead, we’ll be exploring the nuts and bolts that hold your website together – the things that are most often taken for granted when relying on a CMS.

Is This For Me?

The good news is you don’t need to be a developer to hand-code a simple site of your own. And I mean a site that’s really your own, built just the way you want it and with full control from the ground up. What’s more, it can be beautiful, responsive, SEO-friendly, and you can blog with it. All that’s required is a healthy dose of persistence.

Before we begin, three caveats. I would only recommend giving this a try if:

  1. You’re a bit of a control freak

    If you like building things online, tweaking every last detail of a project until it’s perfect – even if that means learning something new – great. If, on the other hand, you like being able to seamlessly publish your writing with no technical hurdles (which is certainly no bad thing!), then this probably isn’t for you. It requires a lot of fiddling just to get things up-and-running.

  2. You’re not fussed about blogging on the go

    Personally, I’m quite content to write locally on my laptop and publish it later. But if that’s not your cup of tea, and you enjoy the flexibility afforded by the WordPress mobile app (or similar), then stick with that.

  3. You want something simple (but infinitely extensible)

    With sufficient time and effort, anything is possible. But I’d suggest starting small and taking it slowly, at least at first. If you require a multi-author blog with categories, tags, subscriptions, and search functionality all from day one, this isn’t for you.

Still here? Great! Let’s get started.

A Head Start: Boilerplates and Grids

We’re not going to be using a full framework like Foundation (as powerful as these can be). Instead, we’re going to be starting with something more fundamental – after all, the goal is to build something, rather than adapt something.

I would suggest checking out OpenDAWS (Open Digital Application Wireframing Styleset), by our very own Pete Wailes. In its own words, it’s a huge set of resets and helpful classes, as well as the logic for grid layouts, but without any of the constraints of a full framework. It’s not heavily pre-styled, it’s hugely extensible, and it’s free! Naturally I’m biased, but I’d say it’s ideally suited to our needs.

I’d also suggest reading up on the different kinds of grid system – it’s important to appreciate that a variety of different techniques can be used to achieve what is commonly referred to as a ‘responsive design’. I found Joni Korpi’s Frameless and Golden Grid System to be particularly helpful in understanding some of these principles.

Local Development

We want to develop our new site locally (i.e. on our own machine, before anything goes live). Installing a local web server is neither difficult nor time-consuming, and – because it simulates how your website will behave online – it will make the eventual transition to a live server extremely easy. You can configure things like 404 pages, clean URL structure, and redirects, and it means you can use root-relative URL paths.

I recommend using XAMPP to run a local Apache server – it’s cross-platform, takes less than 10 minutes to set up, and there are plenty of helpful tutorials out there. Once it’s up-and-running, you’ll be able to point your browser of choice to http://localhost to access web content in C:\xampp\htdocs, or wherever you installed it.

The Fun Starts

If you’re anything like me, you learn by doing, not by reading, so let’s get cracking. Download the current version of OpenDAWS (4.5.2 at time of writing), unzip the package, and place demo.html and the css and images folders into your local server’s web content folder.

A quick primer on OpenDAWS: it uses LESS, a CSS pre-processor and extension to the CSS language. If you’ve never used it before, don’t worry – it’s backwards compatible with CSS and uses the same syntax, meaning you can use all your existing code. It allows you to use operations, variables and nested rules, all of which are wonderful and will save you many hours of your life.
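
To make that concrete, here is a small example of those three features and how they save you time. The selectors, colours, and values are my own illustration, not OpenDAWS code:

```less
// Variables: define a value once, reuse it everywhere
@brand: #2a6496;
@gutter: 20px;

.site-header {
  background: @brand;
  padding: (@gutter / 2);          // Operations: compiles to padding: 10px;

  // Nested rules: these are scoped to .site-header when compiled
  a {
    color: lighten(@brand, 40%);   // built-in LESS colour function
    &:hover { text-decoration: underline; }
  }
}
```

Change @brand once and every rule that references it updates on the next compile – this is the sort of thing that makes the pre-processor worth the small learning curve.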

You need two things to get started: the first is a compiler, to convert your lovely LESS into regular CSS. I recommend trying SimpLESS – it’s free and super-easy to use. The second is a text-editor that supports the LESS syntax – the fantastic Sublime Text has an add-on package for doing just that.

You’ll notice that OpenDAWS comes with a series of stylesheets corresponding to different screen resolutions. These – along with your size-agnostic layout rules (style_0.less) – are imported into the main style.less stylesheet with media queries, which check the width of the user’s browser.

Perhaps I’m just thick, but it took me a little while to get my head around this – hopefully the diagram below will offer some clarity. Open the blue files and you’ll notice that they’re mostly left blank for us to edit.

[Diagram: how the OpenDAWS stylesheets fit together]

Compiled.css – The finished CSS stylesheet, output by your compiler.
Compiled.less – The main OpenDAWS styles and logic. You don’t need to edit this.
LESShat.less – Not fashion advice, but a mixin library which extends LESS even further. You don’t need to edit this.
Style.less – Blank for your own styles. Mixins, imports, classes, overrides, etc.
Size_0.less – Blank for your layout rules (non-size-specific).
Size_640_800.less, Size_800_up.less, etc – Blank for your size-specific layout rules, each file corresponding to a particular range of resolutions.
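
In code terms, the structure works out to something like the sketch below. This is an illustration rather than OpenDAWS’s actual source (the breakpoint values in particular are made up), but the shape is the same: the main stylesheet imports the size-specific sheets behind media queries that check the width of the user’s browser.

```less
// style.less -- illustrative sketch, not actual OpenDAWS source
@import "compiled.less";    // core OpenDAWS styles and grid logic
@import "lesshat.less";     // mixin library
@import "size_0.less";      // size-agnostic layout rules

// Size-specific rules only apply within their resolution range
@media screen and (min-width: 640px) and (max-width: 800px) {
  @import "size_640_800.less";
}
@media screen and (min-width: 800px) {
  @import "size_800_up.less";
}
```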

The best advice I can offer at this point is simply to start experimenting.  Familiarise yourself with the source code and behaviour of OpenDAWS by designing some nifty page layouts. Try creating a horizontal top menu bar that stacks vertically at tablet size and below. Design a page that completely changes at a particular resolution, its elements dynamically altering in width in response to the browser resizing.

Remember, this is your site! You’re not constrained by a framework, theme or template; create a layout to suit your needs.

Design Tools & Resources

When it comes to design, everyone has their preferred way of doing things. You know your favourite combination of image editor, code editor and browser, and you know the methodology that works best for you. Therefore, instead of re-treading old ground, I’ll simply offer some of the resources I keep bookmarked:

  • For colours, Adobe Kuler: Create a colour-swatch for your site.  You can copy and paste hex codes, utilize a variety of schemes (complementary, analogous, triadic), and browse popular combinations. Side note: don’t forget to set global variables for your colours in LESS, a massive timesaver that helps consistency.
  • For typography, Google Fonts: There’s a lot of chaff here, but plenty of great web fonts too. Experiment with combining different families and previewing the results.
  • For icons, Font Awesome: Huge collection of scalable vector icons, all customizable with CSS. Glorious.

HTML5 & Best Practices

One of the advantages of hand coding your own site is the ability to scrutinize every line of code that goes into it. You may as well take this opportunity to make your site a shining example of best practices, and sharpen your code to a razor edge.

I would suggest reading the HTML5 Boilerplate source code – it’s the combined work of hundreds of developers, chock full of best practices, and everything is heavily commented with explanatory notes (and often links to helpful articles). Supplement this with the HTML5 Doctor, a fantastic reference for learning about the latest HTML elements, and you should be set.

Server Configuration & .htaccess

Whilst almost everyone has heard of .htaccess and knows roughly what it does, far fewer have actually had to use it. The .htaccess file – note it is just ‘.htaccess’, not ‘htaccess.txt’ – is an ASCII text file that can be used to control the server hosting your site. Create it using your usual code editor, and save it to the root directory of your site. Be sure to test it using your local server.

I’ve listed some of the most common uses of .htaccess below – note that anything preceded by a hash (#) is a comment and will be ignored.

Custom error pages

# Return custom error pages by error type
ErrorDocument 404 /not-found.html
ErrorDocument 403 /forbidden.html

Redirects

# 301 redirect all WWW subdomain traffic to the non-WWW equivalent
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
RewriteRule ^ http://%1%{REQUEST_URI} [R=301,L]
</IfModule>

Security

# Block directory access
<IfModule mod_autoindex.c>
    Options -Indexes
</IfModule>
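
Clean URLs

One more common .htaccess job is the clean URL structure mentioned earlier. The snippet below is a sketch of one approach (test it on your local server first): it internally rewrites an extensionless request like /about to the file about.html, if that file exists.

```apache
# Serve extension-less URLs by internally mapping them to .html files
# (a sketch -- test on your local server before going live)
<IfModule mod_rewrite.c>
    RewriteEngine On
    # Only rewrite if the request isn't an existing file or directory...
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    # ...and a matching .html file does exist
    RewriteCond %{REQUEST_FILENAME}.html -f
    RewriteRule ^(.+)$ $1.html [L]
</IfModule>
```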

A lot has been written on how to use .htaccess (it’s an art in itself), so rather than reproduce it here, I’ll link to Nettuts comprehensive guide. I’d also recommend checking out the template .htaccess file in the HTML5 boilerplate, which contains dozens of well-commented examples of common techniques.

Getting Your Site on the Interwebs

When you’re happy with your new site and have tested it thoroughly in a variety of browsers, it’s time to stick it online. We’re going to use the free FTP client FileZilla to copy our site’s files to the web server.

Don’t yet have a hosting provider? No problem. I would recommend trying Tsohost – their cheapest monthly package includes 4 hosted websites and 20GB of monthly traffic, which is perfect for our purposes. Their cloud hosting platform is very easy to use, and they also offer well-priced domains in all the major TLDs, should you still be undecided as to where your site is going to live.

It takes a lot for me to sing the praises of a particular company, so understand that I don’t say this lightly: Tsohost are genuinely one of the nicest companies I have ever used, for anything. In the year that I’ve been with them, I’ve submitted dozens of support tickets and they’ve always gone out of their way to help me, even if the root cause of the problem was my own stupidity. So, if you’re in the market for hosting, give them a try.

Add your chosen domain to your Tsohost cloud dashboard, and hit FTP Accounts under Basic Management Tools. You’ll need to click the Add New FTP Account tab and choose a username and password. Leave Home Directory as it is, and click Create FTP Account. Next, fire up FileZilla, and in the fields at the top enter your host (ftp.yourdomain.com), username and password, then click Quick Connect. Drag and drop your site files from your local server folder into the ‘public_html’ folder on the web server, and you’re done!

Open your browser, navigate to your website, and check everything works as it should.

Where Next?

[Image: Lego spacewalk]
The world is your oyster. You’ve done the hard part – you’ve created a little corner of the internet that’s entirely your own. You’re familiar with every file and line of code that makes up your website, and you’ve taken a crucial step towards creating your own personal brand (if that’s what floats your boat).

What comes next is up to you. Every new feature you implement and bug you fix is a chance to learn how websites really work: rather than simply copying chunks of code into your project, take the opportunity to deconstruct the existing solution, understand how it works, and perhaps modify it to meet your needs. Having laid firm yet flexible foundations, freeing ourselves from pre-set options, there are no roadblocks except gaps in our own knowledge. And with thousands of great tutorials out there, these can easily be filled.

My site, in case you’d like to take a look, is at bennet.org. On my roadmap for the future are comments and RSS feeds, and I’m also considering utilising a templating tool to ease the process of creating new posts and pages.

Thanks for reading! If you found this guide helpful, please consider sharing it with your friends and followers. I’d love to hear from you in the comments.

Image Credit

Lego Earth-Mars Cycler and Lego Spacewalk, by Andrew Becraft


Applying Lessons from the Publishing Industry to SEO Consulting

The author’s posts are entirely his or her own (excluding the unlikely event of hypnosis) and may not always reflect the views of Moz.

“Search has been less and less relevant since Facebook released News Feed. Now we get the vast majority of our traffic via social, and about 1-2% from search”
– Chris Dannen from Fast Co Labs

“We benefited a ton from an early SEO audit thanks to IAC’s SEO pros, but once the right framework was in place, it’s been up to us as content creators to really dig deep into Google Analytics to determine where the opportunities lie…”
– Jordan Shakeshaft, editorial director of Life by DailyBurn

“None” was Shakeshaft’s response to a question about the role of SEO within her company. “There’s very little of actual value in it for us.” This from a respected British magazine.

In preparation for this post, I started thinking about publishers and their plans for 2014, specifically their growth strategies for the coming year. My thinking was that the publishing industry usually leads the way when it comes to new content techniques and products. As publishers blaze new trails, we as consultants have the opportunity to learn by proxy, observing what has worked and what has not, and applying those observations to our own clients’ content creation. In this post-Panda arena, the scramble to produce high-quality, compelling content is as real as ever, and lessons need to be learned fast. Let the publishing industry be your guide; come, walk with me.

In researching this post, I spoke to a combination of editors, industry analysts, and publishing company employees. The quotes are representative of my contacts and their responses, but they are in no way comprehensive for the publishing industry as a whole.

As per the quotes above, the sobering reality is that, at best, publishers see SEO as just one small part of their marketing strategy. Moz’s very own legend, Dr. Pete, has been trying to tell us this for a while, encouraging the search community to look beyond rankings. Our goal as consultants is to continue to add value in this altogether more varied landscape. The good news is that we can if we leverage our technical knowledge and use this to present some of the newer ideas, beyond our usual scope, to our clients.

This post is an examination of some of the other opportunities publishers are pursuing this year, along with my dreams for what they could be doing and some tips on how to present these ideas to clients.

What are publishers doing for growth?

1) Investing in site redesigns

The internet was all aflutter earlier this month, when The New York Times launched its site redesign. That project, in addition to generating buzz, traffic, and links, was the site’s first major redesign since 2006. The main visual changes include:

Changed fonts and font colours so the site more closely resembles the print edition; the links from the home page to the category pages, for example, are now black rather than blue.
  2. Article comments now appear on the right-hand side of the article, allowing comments to receive the same level of visibility as the article.
  3. Infinite scroll, rather than pagination.
  4. A much more minimal look on article pages with more white space.

This redesign freshens up the look of the page as a whole and the cleaner, sparer UI is more in keeping with what other publications are doing. This video from Fi talks us through its process for redesigning USAToday.com, which has several design features in common with the Times’ update.

The insight: Good design matters.

Your access point: When presenting ideas of this ilk to your clients, it is important to be in cahoots with the designers. Your aim is to collaborate in these projects, ideally from initial conception. The advantage of being an outsider weighing in on a site redesign is that you are invariably not bound by the limitations of a CMS or the like; you are free to see the site and where it stands in relation to industry competitors with a detached view. You can represent SEO and call on your experiences with redesigns to offer suggestions.

2) Embracing social

You probably already know that social networks are an increasingly important means of discovery, and amongst the under-45s, they are the most popular method of finding content. Social becomes more and more important as user groups get younger. For example, 44% of 18- to 24-year-olds rely on social, versus just 19% of users over 55. This is illustrated by this graph from the Reuters Institute’s Digital News Report 2013:

Clearly, if you wish to build long term trust with your users, social networks are critical for getting your content in front of younger users. It goes without saying that social networks are also now critical for engagement among all age groups.

What is surprising is the extent to which publishers are still missing this opportunity, whilst newer companies such as Upworthy and Buzzfeed are swooping in and winning traffic. This recent article from the Media Briefing visualizes how some of the media players are doing on Facebook, and the newsworthy part is that none of the more established players feature at all. In short, they are not getting it right. The winners in this particular data set are companies that have been formed within the last eight years (Buzzfeed was formed in 2006, Upworthy in 2012); the Huffington Post is the old guard here, and that is only nine years old.

The results are clear: Upworthy and Buzzfeed have mastered the sort of content that gets people sharing. Whilst the audience may eventually tire of cats in unlikely situations, photoshop-shaming, and listicles, you can be sure that both companies are investing time and effort in evolving their current strategy. Mark Suster expanded on this idea in a recent post, saying “I think companies like Upworthy can build really compelling businesses in the future – but I’m willing to bet serious cash … that it won’t be by sticking to the playbook [that is, writing content to generate as many social shares as possible] that has worked tremendously well to date.”

The insight: For all of the chatter about social networks, publishers are still not getting it right.

Your access point: Present working in social networks as a series of easy-to-implement A/B tests.

Using the Upworthy premise, as outlined below, clients have a quick, clean testing method that should give them confidence to test their social network content.

Upworthy produced a wildly popular slide deck back in 2012 that outlines some of their tactics, which makes for an interesting read. The key takeaway, regardless of the sort of content your client might produce, is the idea of testing multiple headlines. Upworthy writes 25 different headlines for a post, and then tests the headlines in two demographically similar cities within Facebook for an hour or so. They then push the headline with more shares.

This is both agile and data-driven; keep this example in mind, as it’s deliciously simple and reasonably easy to implement. It can also be applied to subheadings, images, and more. As consultants, A/B Testing is very much within the traditional scope of your work. By using this experience (and the client’s trust in this experience) you are moving into new terrain via a familiar method.
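
At its heart, picking the winner in a test like Upworthy’s is a comparison of share rates. The sketch below illustrates the idea; the headlines, counts, and lack of a significance check are all simplifications of my own, not Upworthy’s actual system.

```python
# Minimal sketch of picking a winning headline from an A/B test.
# Counts are invented; a real test should also check statistical
# significance before declaring a winner.

def share_rate(shares, impressions):
    # Shares per impression; 0.0 if the variant was never shown.
    return shares / impressions if impressions else 0.0

def pick_winner(variants):
    # variants maps each headline to a (shares, impressions) pair.
    return max(variants, key=lambda h: share_rate(*variants[h]))

test = {
    "Headline A: the emotional angle": (120, 4000),   # 3.0% share rate
    "Headline B: the factual angle": (35, 4100),      # ~0.85% share rate
}
print(pick_winner(test))
```

With 25 headlines instead of two, the mechanics are identical: run each variant for an hour in a demographically similar audience, then push the one with the highest share rate.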

Let social embrace you back

To approach the opportunities of social networks from another angle, Facebook and Twitter are both making a concerted effort to woo publishers. Facebook’s algorithm tweak in August 2013 increased the amount of traffic sent to news sites. Buzzfeed saw a 69% jump during this time, and they were not the only ones. In December 2013, Facebook gave us more insight.

“We’ve noticed that people enjoy seeing articles … and so we’re now paying closer attention to what makes for high-quality content and how often articles are clicked on …

“Starting soon, we’ll be doing a better job of distinguishing between a high-quality article on a website versus a meme photo hosted somewhere other than Facebook … this means that high-quality articles you or others read may show up a bit more prominently in your News Feed, and meme photos may show up a bit less prominently.”

(Is this the end of memes? Maybe so, if Facebook gets its way.)

The insight: Facebook is working to keep its users entertained with your content.

Your access: Leverage your Analytics prowess; you are an Analytics tiger!

Analyse your Facebook referral traffic, comparing August-December 2013 with the previous six-month period and with the same period in 2012, to assess how much impact the algorithm update had on your site. In the same article quoted above, Facebook claims to have increased traffic to media sites by an average of 170%. If you did not see a significant jump, it suggests the site is not sufficiently integrated with Facebook. Numbers in Facebook’s range (that 170%) are considerable; all publishers would love traffic increases on that scale. Let this be your way in to re-evaluating the site’s Facebook strategy.
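
The period-over-period comparison is simple arithmetic once the referral numbers are exported from Analytics. A sketch, with invented session counts:

```python
# Sketch: compare Facebook referral sessions period-over-period to gauge
# the impact of the August 2013 algorithm change. Session counts invented.

def pct_change(before, after):
    # Percentage change from the earlier period to the later one.
    return (after - before) / before * 100

feb_jul_2013 = 300_000   # hypothetical sessions, previous six months
aug_dec_2013 = 510_000   # hypothetical sessions, Aug-Dec 2013

change = pct_change(feb_jul_2013, aug_dec_2013)
print(f"{change:.0f}% change")  # prints "70% change" -- short of the claimed 170% average
```

A site seeing 70% growth against a claimed 170% average has a case for deeper Facebook integration; that gap is the opening for your recommendation.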

But wait, there’s more

Beyond sending more traffic to publishers, Facebook is also working with publishers to share its vast trove of data about what is trending so publishers can incorporate it into their stories. Facebook’s Public Feed API shares public data and is open to anyone. A second API, the Keyword Insights API, is only available to a select number of news organisations, such as CNN, the Today Show, and BSkyB, and allows them to search Facebook’s public data programmatically for anonymised keyword data. This data can be sliced by gender, current city, and age range. There are no plans yet to release it to a wider audience, but it seems inevitable that (if successful) it will be rolled out in the future. (Note: an email to Facebook about this has not yet been answered. I will update in the comments if I hear more.)

The insight: other publishers are working with Facebook, if only in the sense that they are incorporating new data sources for their users.

Your access: Gentle shaming (depending on the size of your client). The Keyword Insights API isn’t publicly available yet, but for anyone consistently producing content, you can present opportunities to get access to similar data. For example, try Mass Relevance, a Facebook Preferred Marketing Developer, which can provide insights and trends from Facebook, slicing data by a variety of metrics, including device.

What publishers could be doing

Now that we have a general sense of how some publishers are trying to grow, I’ve also compiled a short list of opportunities and ideas that have not been mentioned thus far. This list is based on stealing ideas from other industries, general common sense, and no small amount of wishful thinking.

1) Embracing Google products

Google’s range of products is staggering. For publishers this can lead to confusion about how to use the products available. To address this, Google has created Google Media Tools, a valuable hub designed to demystify many of the products in the roster, explaining everything from hot searches and trends to Google Earth to Google Crisis Response, and references examples of how publishers are using these products. For example, NBC Today uses Google Trends each Monday to give viewers a sense of what was popular over the weekend. At the Google For Media Summit, hosted earlier in January, attendees tweeted about BBC News’ integration with Hangouts.

Quick note: Make sure you get it right. This screengrab of a Google search for “bbc news” is from 22nd August 2013, not 2001…

Clearly, these products can be difficult to implement well, but do not give up. Again, referring to Dr Pete’s slide deck, as Google products increasingly appear in the search results, pure organic search results will be forced lower down the page. Embrace Google’s products to maximise your client’s chances of staying on the first page.

The insight: Competing for organic rankings is only ever going to get you so far. (Again, Dr Pete said so!) Encourage clients to embrace the suite of Google products out there, in the spirit of trying new things and also offering new products to the end users.

Your access point: Your expertise. Most people do not differentiate between Google Search, Google News, Google Local, Google Trends, etc. Anything to do with an internet search engine is your domain.

Your second access point: Training.

Offer your clients and their writers training in using these new products. As an experienced consultant, there will inevitably be a few training slide decks or “best practices” guides in your past. Use this didactic approach to showcase your knowledge and support the clients when they start to use them. As with the BBC example above, it might not be perfect immediately, but persevere.

2) Planning for change

“The pace of technological change will not abate, and to think of our current time as a transition between two eras, rather than a continuum of change is a mistake.”
Richard Gingras, Senior Director of News and Social Products at Google

The New York Times appears to have taken this advice seriously, for amidst the redesign fanfare, the most important feature is the Times’ decision to change the back end. I interpret this as a commitment to the future; this fluidity is admirable. As referenced in this Fast Co Labs summary of the redesign:

“The new system, however, is more dynamic. ‘We can continually iterate on the site and take advantage of the trends as we see them happening, rather than having to do a big unveil.’”

The insight: Change is the only constant. (This is probably true of more than just the technology used in the publishing industry.)

Your access point: This will be the toughest sell of any recommendation in this post. Persuading clients to invest money in the back-end system without any proven ROI is difficult. I’d welcome any ideas in the comments, but know this: it still has to be done. The best method I have so far is to use sites like The New York Times as a case study. The theory is that because they can ship new ideas quickly, they get more press (possibly with links), and maybe even more readers. By monitoring new products on The New York Times and tracking their search visibility with a tool like Searchmetrics, you should hopefully see traffic growth, and you can then present this data to your clients. The good news is that you don’t have to manually check the Times’ site every day; instead, sign up for the free email digests from Mediagazer, which monitors new product developments.

3) Understanding paywall models

Paywalls are starting to work, and you can be certain that your clients will be watching how competitors are starting to use them. As a consultant, it is important that you understand the variety of paywalls out there and how to implement them. These articles from SEO Book and Mashable are excellent resources to get you started. Google also has some limited information about using First Click Free, their solution for publishers wanting to charge for their content whilst still appearing in the search results. The goal in this instance is to develop an opinion on paywalls as well as an up-to-date idea of how your competitors are using them (and whether they are successful).
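The mechanics of First Click Free are simple in principle: a visitor clicking through from a search result sees the full article, while repeat direct visits are metered against an allowance. Here is a minimal sketch of that referrer-check logic; the function names and the three-click limit are my own illustration, not Google’s specification or any publisher’s actual implementation:

```python
from urllib.parse import urlparse

FREE_CLICKS = 3  # hypothetical monthly meter; pick a limit that suits your model

def is_search_referral(referrer):
    """Treat visits arriving from a search engine as 'first clicks'."""
    if not referrer:
        return False
    host = urlparse(referrer).netloc.lower()
    return any(engine in host for engine in ("google.", "bing.", "yahoo."))

def can_view_article(referrer, views_this_month):
    """First click free: search visitors always get the article;
    everyone else is metered against a monthly allowance."""
    if is_search_referral(referrer):
        return True
    return views_this_month < FREE_CLICKS

# A visitor clicking through from a Google result sees the article...
print(can_view_article("https://www.google.com/search?q=news", 10))  # → True
# ...while a direct visitor who is over the meter hits the paywall.
print(can_view_article(None, 10))  # → False
```

The design choice worth noting: the meter lives server-side against a visit count, so the paywall decision is a single cheap check per request rather than anything that affects how the page is crawled.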

The insight: As paywalls are beginning to pay off, you will be asked about them.

Your access point: Forward planning. By researching ahead of time, you will be ready with an opinion when asked (and you will be asked).

4) Putting their content to work

Publishers are in the enviable position of having plenty of content to play with; now it’s a question of putting that content to work. Here are a few ideas, some riskier than others.

i) Creating new page types

Creating new page types is a classic tactic to get more traffic. If this is what your client is looking for, look at different ways of categorizing your content.

As referenced in Sara Wachter-Boettcher’s Content Everywhere, the BBC Food pages tried this approach in 2011, introducing pages that organize their content by recipe and also by ingredient. This led to a 150,000 increase in organic traffic, and overall traffic doubled to 1.3 million visitors.
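The BBC approach boils down to exposing the same content through a second index: every recipe already carries ingredient tags, so each ingredient can become a landing page of its own. A toy sketch of that regrouping (the recipe inventory here is invented purely for illustration):

```python
from collections import defaultdict

# Hypothetical content inventory: each recipe already lists its ingredients.
recipes = [
    {"title": "Leek and potato soup", "ingredients": ["leek", "potato"]},
    {"title": "Potato gratin", "ingredients": ["potato", "cream"]},
    {"title": "Braised leeks", "ingredients": ["leek"]},
]

def pages_by_ingredient(recipes):
    """Invert recipe -> ingredients into ingredient -> recipes,
    i.e. one new landing page per ingredient."""
    pages = defaultdict(list)
    for recipe in recipes:
        for ingredient in recipe["ingredients"]:
            pages[ingredient].append(recipe["title"])
    return dict(pages)

pages = pages_by_ingredient(recipes)
print(pages["potato"])  # → ['Leek and potato soup', 'Potato gratin']
```

No new content is written; the same articles simply become reachable through a second, search-friendly axis of categorization.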

The insight: New page types lead to more traffic

Your access point: Grounding the creative task of thinking of new page types within standard information architecture best practices. Abby Covert, Information Architect extraordinaire, explains it well: there are five methods of categorizing (location, alphabet, time, category, and hierarchy). Use these as a starting point for inspiration when thinking about how to group your client’s content:

On this theme, I would love to see news publishers in particular tagging their content with zip codes. I think it would prove a useful resource for tourists, anyone looking to rent or buy in an area, historians, and even schools. This could become even more useful on portable devices if there were an opportunity to tie news stories of particular importance into existing map products. But I’m getting carried away.
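Taking the zip-code idea literally, tagging could start as simply as scanning story text for five-digit codes and indexing stories against them. A rough sketch under that assumption (the stories are invented, and a real system would want geocoded locations rather than a naive regex, which will also match any other five-digit number):

```python
import re
from collections import defaultdict

ZIP_RE = re.compile(r"\b\d{5}\b")  # naive: matches any five-digit run

# Hypothetical story archive for illustration.
stories = [
    {"headline": "Fire closes market", "body": "The blaze in the 98101 district..."},
    {"headline": "New park opens", "body": "Residents of 98101 and 98104 gathered..."},
]

def index_by_zip(stories):
    """Map each zip code mentioned in a story body to the story headlines."""
    index = defaultdict(list)
    for story in stories:
        for zip_code in set(ZIP_RE.findall(story["body"])):
            index[zip_code].append(story["headline"])
    return dict(index)

print(sorted(index_by_zip(stories)["98101"]))  # → ['Fire closes market', 'New park opens']
```

Once stories carry zip-code tags, the map-product tie-in mentioned above becomes a straightforward lookup from a location to its local news.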

Some news organisations are already trying new page types. The AP, frankly, has had some fun experimenting with archive page types to commemorate pivotal moments in history, using its own images and stories to add to the narrative.

ii) Partnering with new businesses

Partnering up with other businesses can be seen as risky because success cannot be guaranteed. One option would be to partner with some of the newer content creation services on the market. LinkedIn has just bought Pulse, a service that pulls in news it believes will be of interest to you based on your LinkedIn profile. There is also the wistful Kennedy app, which automatically supplies iPhone users with context when taking notes and writing deep thoughts.

The insight: Your client’s content can live on in different formats.

Your access point: Introducing this and other similar ideas to your client. For publishers, the value lies in being part of the potential newsfeed, as it is a valuable branding opportunity; you might also be able to generate revenue from supplying products like this with your content.

5) Looking to other niches within publishing and adapting their best ideas

The academic eBook publishing industry is in a stage of rapid change as it moves beyond the basic eBooks into much more exciting enhanced eBook territory. The broader industry themes are:

  • Interactivity
  • Socially-connected groups
  • Adaptive eBooks

Interactivity

Bookry, a Welsh company, is just one of the many companies out there building interactive components for eBooks. The company specializes in building widgets that let eBook users play with data tables, allowing them to see what a positive correlation coefficient looks like and how changing the data points changes the graph. By letting users play around with the data, you make them think about the material itself. The most obvious use is improving educational resources, but there’s no reason why it couldn’t be applied in a broader sense by all publishers.
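To make the widget idea concrete: the point of an interactive data table is that nudging one value visibly changes the statistic. This is not Bookry’s code, just a pure-Python illustration of the underlying effect, showing how moving a single data point shifts the correlation coefficient:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 10]                 # perfectly correlated data
print(round(pearson_r(xs, ys), 6))    # → 1.0

ys[4] = 3                             # the reader drags one point down...
print(round(pearson_r(xs, ys), 4))    # ...and the correlation visibly weakens
```

Seeing the number drop the instant a point moves is exactly the kind of interaction that makes a reader reason about the data rather than skim past a static chart.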

Socially-connected groups

The idea behind this is that eBook publishers are trying to encourage commentary and interaction with the course material. Most publishers already offer social sharing as a matter of course, but some eBook publishers are going a step further and developing products that store all the comments, notes, and questions in the cloud, in one place. This lets the user keep track of everywhere she has interacted, rather than only on the site of each comment, and is also useful for professors looking to grade a student on the quantity and quality of her interactions.

Adaptive eBooks

McGraw Hill launched what they call Smartbooks last year, designed to assess the reader’s understanding of the material and then adapt it based on her knowledge of the subject.

Another company, Knewton, based in New York, specializes in adaptive technology and offers education publishers the opportunity to personalise the reading experience. The effect on students’ pass rates has been impressive, which supports the idea that tailoring content to the user’s comprehension boosts retention. Any publisher or content-producing site looking to launch a body of work for a large audience of differing ages might find these developments interesting.

This is an extremely top-level summary of some of the developments in the eBook publishing sphere, as documented at the Digital Book World Conference held last week in New York.

The insight: Use developments in a related industry to inspire your clients. In eBook publishing, as per my example, the industry leaders are pushing content in new, exciting, immersive directions; adapt these ideas to suit your client’s content.

Your access point: Your expert curation skills. By taking the time to understand the broader industry trends, you can skim the very best ideas and present them as opportunities to your client. If you assume responsibility for tracking industry developments, you save your clients time and headspace whilst also expanding your sphere of influence.

Have you seen new publishing products, or been involved in building them? Do you have any strong opinions about where content creation is heading next? Please share in the comments below. For further reading on this subject, I’ve included a short list of resources that I have found helpful.

Resources

People

  • Tim O’Reilly – an ebook pioneer. He’s thinking at least two years ahead.

  • Charlie Melcher – of Melcher media, founder of the Future of StoryTelling mentioned above and also involved in Al Gore’s Our Choice app, as referenced in this Tedtalk.

  • Frank Rose – writes beautifully on immersive content; he will inspire you to think about the role the audience plays in telling a story.

  • Tim Pool – now at Vice magazine. Tim’s livestream of NY’s Occupy Wall Street has changed the perception of citizen journalism.

  • Jeff Jarvis – this post from 2008 has some thought provoking ideas.

  • Chris Danen – Fast Co Labs; tends to write about the future of media and often brings in Fast Co examples.

Products

Announcing the Brand New Beginner’s Guide to Social Media

I’m both honored and excited to announce the release of a second beginner’s guide from Moz: The Beginner’s Guide to Social Media.

The prevalence and importance of social media to web marketing can’t be overstated. To quote a few statistics from the guide itself, 72% of online adults use social networking sites, and YouTube now reaches more U.S. adults aged 18-34 than any cable network. With that kind of traffic, it’s no wonder marketers now use these networks to interact with their customers, and there’s plenty more data to prove it. Google searches for “social media” have seen a steady rise since early 2009:

Data from this year’s industry survey tell a similar story. In 2012, nearly 20% of respondents reported not using any social media tools; this year, that number was down to 11%. On top of that, 63% of respondents indicated that their demand for social media marketing has increased over the last year. Whether you’ve been in on the game from the very beginning or are just starting to wonder how social tools can apply to your own professional life, this guide was created to help take you to the next level. Click below to dive in, or keep reading for more details!

What’s inside

There’s something for everyone in here, from the fundamentals of how social media is used to details about individual platforms and overarching best practices. Here’s a list of the chapters you’ll find in the guide:

The first section of the guide talks about social media in general, offering a plethora of best practices, a clear sense of the value of social media, ways you can measure your success in this endeavor, and recommendations on how to get started. From there, we dive into individual sites, slicing and dicing each of the major social media platforms and offering a consistent set of topics about each.

Here’s a run-through of what you’ll find:


Key stats and demographics

How many people are using these networks, and what kinds of people are they? When it comes to figuring out which social networks are right for you to use, it helps to know who you might be able to reach by developing a presence. This section is designed to give you the who, what, where, and when of each platform. You’ll find infographics with statistics as well as some more general info.


How are people using the platform?

While the previous section covers who, what, where and when, this one covers the how and why of each platform. With uses ranging from establishing thought leadership to building customer advocates, this section (complete with innovative pro-tips) will give you a clearer picture of why you might want to choose one platform over another.


Strategies and tactics for success

Okay, you’ve decided to dive in. Success comes in different ways for different platforms, though, so how do you maximize your chances of seeing early results? This section is all about starting you in the right direction, making sure you can learn from the mistakes of everyone who came before you instead of from your own.


What success looks like

Many people learn better when they can see a great example of what they’re going for, and while there’s certainly no “best” way of going about your presence on any particular network, there’s a great deal you can learn by examining some of the biggest success stories for each network. The sites listed in this section are all continuously finding new and innovative practices, so checking back in on them once in a while will help keep you up-to-date.


Etiquette tips and guidelines

At their hearts, all of these networks are really just tools to facilitate different kinds of social interactions. For that reason, there exists an unwritten code of etiquette for each. Most of this code mirrors basic human etiquette, but in new environments it’s easy to make accidental slips. This section aims to point out some of the ways in which folks end up harming the trust and authority of their brands by ruffling their audiences’ feathers, reducing the chances of any accidental train wrecks.


Recommended tools

While the platforms themselves are full of functionality, there are other tools on the web that can really take your social presence to the next level, offering you everything from scheduling functionality to insight and analytics not offered by the networks. For each of the major platforms, we list our favorite tools and talk about how they can help your efforts.


Thank-yous

This guide would never have been possible if not for the absolutely tireless efforts of the following folks.

Thanks to Kristy Bolsinger for actually writing the guide, for her expertise, and for her patience during the editorial process; Rob Eagle for his energy and vision in creating graphics for the guide; Ashley Tate for wrangling the process and polishing the early drafts; Lindsay Wassell for her long hours of thorough edits; and the many Mozzers who worked to bring the guide to life (particularly Erica McGillivray, who worked so hard that I really need to bake her cookies).

About Trevor Klein — Trevor is the editorial specialist at Moz—a proud member of the content team. He manages the Moz Blog, helps craft and execute content strategy, and wrangles other projects in an effort to provide the most valuable content possible for the Moz community.