

What Is Data Visualization And How To Use It For SEO


Learn about data visualization, why it’s important, and how to use it for SEO.

Planning and executing an excellent SEO strategy is critical for any digital marketing campaign.

However, the effort requires data to tell the story in a way that resonates with our clients.

But poring through pools of numbers can be tedious and mentally exhausting. This is where data visualization comes in.

Data visualization takes your data (numbers) and places it in a visual context, such as a chart, graph, or map. It also helps create data stories that communicate insights with clarity.

Without visualizing data to extract insights, trends, and patterns, the chances of getting support from other departments plummet. The best data visualizations break down complicated datasets to present a concise and clear message.

Read on for more on data visualizing, its importance, and how to use it for your SEO campaign.

Types Of Data Visualizations

For years, the easiest way to build a data visualization was to add information to an Excel spreadsheet and transform it into a chart, graph, or table.

While still effective, the method has undergone a few updates in the last few decades.

The options today allow users to create elaborate data visualizations, including:

  • Bullet graphs.
  • Animated charts.
  • Radial trees.
  • Interactive charts.
  • Bubble clouds.
  • Data art.
  • Heatmaps.
  • Dashboards.
  • Infographics.

And many more.

[Screenshot, August 2022: an example of data visualization showing a website’s crawl hierarchy.]
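
Data visualization at its simplest is numbers mapped to marks. As a minimal sketch of the spreadsheet-to-chart idea, the snippet below turns a plain dictionary into a chart; the traffic figures are hypothetical, and the bars are rendered as text for brevity:

```python
# Hypothetical monthly organic-traffic figures
data = {"Jan": 1200, "Feb": 1350, "Mar": 1100, "Apr": 1600, "May": 1750, "Jun": 2100}

def ascii_bar_chart(series, width=40):
    """Render a mapping of label -> value as simple text bars."""
    peak = max(series.values())
    lines = []
    for label, value in series.items():
        bar = "#" * round(value / peak * width)
        lines.append(f"{label:>3} | {bar} {value}")
    return "\n".join(lines)

print(ascii_bar_chart(data))
```

In practice a charting library such as matplotlib would draw the image, but the principle is the same: the raw numbers become lengths the eye can compare at a glance.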

How To Choose The Right Visualization Type

Choose the right visualization type to communicate your message effectively.

Before getting started:

  1. Identify the key message you want to communicate and summarize it in a short sentence.
  2. Find the data you require to communicate your message and also consider simplifying it to make this message clearer.
  3. Consider the type of data you have, such as comparisons, trends, patterns, distribution, and geographical data.
  4. Consider what display type is simple and will capture the audience’s attention.
  5. Like all web content, your visualization should be accessible to all users.
  6. Consider what supporting information to include with the visualization so readers can understand and interpret the data.
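
The matching of data type to display type in steps 3 and 4 can be sketched as a simple lookup. The mapping below is one common rule of thumb, not an official taxonomy:

```python
# Rule-of-thumb mapping from the kind of data to a chart type;
# the entries are a common heuristic, not an official taxonomy
CHART_FOR = {
    "comparison": "bar chart",
    "trend": "line chart",
    "distribution": "histogram",
    "geographical": "map",
    "relationship": "scatter plot",
}

def suggest_chart(data_type):
    """Fall back to a plain table when no chart type clearly fits."""
    return CHART_FOR.get(data_type.lower(), "table")

print(suggest_chart("Trend"))        # line chart
print(suggest_chart("survey text"))  # table
```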

Importance Of Data Visualization

Modern companies are generating massive amounts of data through machine learning.

To be useful, that information must be sorted, filtered, and explained so it makes sense to stakeholders and business owners.

It’s easy to identify patterns and trends in your SEO strategy quickly using data visualization. Visualization makes it easy and fast to convey insights. Making data visualization a habit in your business offers several benefits.

Create Robust Value Propositions

It’s easy to tell stakeholders or clients how and why your products are good, but it’s not as easy for them to grasp what you are saying.

Visualizing your data is an excellent strategy for increasing buy-in for your ideas. It can also help transform site traffic into sales, contributing to a business’s success.

Enable Faster, Easier Communication

All businesses are looking to advertise their products to the public. But few people take the time to read tons of words.

Instead, simplify the message into visual content that’s easy to present.

More people will focus on visual data than on text. The visuals help capture the attention of and persuade potential customers, clients, and investors.

As a result, this helps drive traffic into your venture, which leads to success.

Analyze Patterns And Trends

Every industry moves in specific patterns and trends, and it’s your role to make decisions based on them.

Visualizing data streamlines the entire process of identifying current and future opportunities.

The data also helps make business owners prudent decision-makers aligned with the market situation.

Motivate Team Members

Business success depends on the effort team members put into the process. Each member of your organization is happy when the business makes development strides.

Data visualization can help identify the business’ initial position and the direction it’s heading.

The process can motivate the team members to work harder and elevate your business to greater heights.

Improve Customer Experience

Visualizing data plays a critical role in improving your customers’ experience. The data makes it easy to ensure customers are happy and their requirements are met.

Data visualization makes it easy to shape, filter, and disaggregate data on demand.

Data Visualization And SEO

Keyword search volume is one of the most significant pieces of SEO data contributing to a site’s ranking.

Keyword search volume is the number of times visitors search for a specific keyword in a particular time frame. The term also refers to the number of people interested in a keyword.

SEO data is also critical for understanding organic traffic, the number of people visiting your site through unpaid search results.

Page speed is another critical SEO practice that determines your website’s reliability.

Your online visitors don’t have the time to wait for a page to load. Further, page speed also affects your position on the search engine.

Using Data Visualization To Improve SEO

Visualizing your data has a significant impact on interpretation. Visualization can help represent search volume for different keyword sets you want to use in your next campaign.

Visualization tools can also present a detailed analysis of your site from the SEO point of view.

Presenting your content in charts and graphs helps the audience grasp every aspect of an SEO campaign.

Elevate SEO Capabilities

Data visualization can help elevate your SEO strategies in several ways. Here are the areas where visualization is most effective at boosting SEO.

Competitive Analysis

Working on your SEO strategy also means evaluating what competitors are doing. The analysis helps you understand what needs to be done and which areas to improve.

Visualization can help you:

  • Determine the social media strength of competitors.
  • Find top competitors for a keyword.
  • Analyze competitor backlink profiles.

Keyword Difficulty Distribution

The above is an example of using a bar chart to visualize the keyword difficulty distribution of current keyword rankings.
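
A distribution like this is straightforward to compute before charting it. The sketch below buckets a hypothetical set of difficulty scores (0–100) into 20-point ranges, which is exactly the data such a bar chart would plot:

```python
from collections import Counter

# Hypothetical difficulty scores (0-100) for currently ranking keywords
difficulties = [12, 18, 25, 31, 34, 38, 41, 45, 47, 52, 55, 61, 68, 74, 82]

def difficulty_distribution(scores):
    """Count scores in 20-point buckets: the data behind the bar chart."""
    labels = ["0-19", "20-39", "40-59", "60-79", "80-100"]
    counts = Counter(labels[min(s // 20, 4)] for s in scores)
    return {label: counts[label] for label in labels}

for label, n in difficulty_distribution(difficulties).items():
    print(f"{label:>6} | {'#' * n} ({n})")
```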

Backlink Analysis

Visualization also aids you in creating an effective link-building campaign.

Some items to analyze include:

  • Backlink geographic locations.
  • Quality of backlinks.
  • The distribution of backlink anchor text.
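
As a small illustration of the last item, anchor-text distribution can be tallied from a backlink export in a few lines of Python; the anchors below are made up for the example:

```python
from collections import Counter

# Hypothetical anchor texts pulled from a backlink export
anchors = [
    "example brand", "example brand", "click here", "best seo tools",
    "example brand", "https://example.com", "best seo tools", "click here",
]

distribution = Counter(anchors)
total = sum(distribution.values())
for anchor, count in distribution.most_common():
    print(f"{anchor:<25} {count:>2}  ({count / total:.0%})")
```

A heavily skewed distribution (one exact-match anchor dominating, for instance) is the kind of pattern this makes visible at a glance.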

Wrapping It Up

Data visualization is a vital contribution to the success of any business practice.

What makes visualization critical is its ability to convey complicated data sets visually.

Anything that can condense large amounts of data into infographics, charts, and graphs is a recipe for success.

It’s clear that incorporating visualization into your digital marketing operations elevates SEO capabilities, too.

Further, visualizing your data plays a major role in business development and SEO decisions.

If you are interested in the original article by Adam Heitzman, you can find it here.


Putting lipstick on a pig: Fix the website or fail at SEO


If a website is not performing well, optimizing it is just putting lipstick on a pig. Here’s what to do to ensure your site succeeds in SEO.

You’ve landed a new client and you’re digging into the website. One look tells you this is not going to be easy.

Sure, you could do some SEO activities to make the client happy. But there’s bigger fish to fry. 

The information is out of date, the copy is not well written, and the formatting is hard to read. The website looks old and the CMS is clunky.

The client has big expectations of your SEO program. What do you do?

There’s an old saying about putting lipstick on a pig. Sure, you can “do some SEO.” But you and I both know that we need to address the fundamental problems if we want to succeed in SEO and rank on Page 1 out of millions of results.

At this point, you need to get real with the client. This can be a hard conversation; what if they don’t have the budget for what you are proposing?

You have to prepare to walk away from the project or else get creative with the budget. Because neither one of you will win if you don’t get it right.

That said, there are two major categories you need to fix in any website before you kick an SEO program into high gear:

  1. The content on the website
  2. The technical back-end of the website

I’ll touch on what to look for in each category next.

Updating the content

A common lipstick-on-a-pig mistake is when a client asks for more and more new content while failing to address the content they already have on the website. I encourage clients to devote equal resources to updating their old content and creating new content.

How often to update the content on a website depends on the topic. In general, there are three things to consider:

  1. If the topic is evergreen. Some topics are evergreen, meaning the information can stay relevant for a long time. Of course, that doesn’t mean you can’t tweak and optimize evergreen pages for best results. It simply means that there will likely be less work.
  2. If the query deserves freshness. You will need the most up-to-date content for a website if it is targeting queries or keywords that require the freshest content in the search results. A social or political event is one example. Google discusses that here.
  3. If the topic is deemed “your money or your life” (YMYL). Google discusses YMYL topics in its Search Quality Rater guidelines (think financial or medical advice) and holds webpages that contain them to a higher standard.

In addition to updating the content itself, websites need a strategy for how they will organize the content to get the most SEO value from it — and to create a better user experience.

This includes things like the navigation and how you link pages internally. SEO siloing and internal linking best practices are foundational SEO strategies that will streamline your SEO program’s efforts if done well early on.

Updating the website

At first glance, does the website look trustworthy? Or does it have an outdated look and feel? What about the performance of the website — does it provide a good user experience?

These are the fundamentals we must get right before driving more traffic to a website. 

Some things to work on right away include:

  • Spider-friendly code: The website needs clean, streamlined code the search engine spiders can crawl with ease.
  • The content management system: That custom CMS the client built may have been great at first, but it has SEO problems, and you can’t easily update things inside it. There’s a reason WordPress is the most popular content management system: it is kept up to date.
  • Site speed: How fast a webpage loads impacts the user experience, which is why it’s a part of Google’s ranking algorithm.
  • Mobile usability: Most websites have a large share of visitors on mobile devices. Websites need to be responsive for both desktop and mobile users.
  • Robots.txt: Robots.txt can block unnecessary crawling to reduce the strain on a server and help bots more efficiently find good content.
  • XML sitemaps: It’s a best practice to tell search engines about the pages, images and videos on a website with an XML sitemap.
  • 301 redirects: 301s prevent error pages by redirecting old pages to newer, more relevant content as needed. This can improve user experience.
  • Fully qualified URLs: When you link internally, using the full URL starting with “https:” instead of a relative URL can fix certain crawl issues.
  • Canonical tags: The canonical link element tells search engines which version of a URL you want in the search results and can resolve duplicate content issues.
  • Server maintenance: Server diagnostic reports help you address common errors right away to improve the user experience.
  • Plugins: For security reasons, update all plugins.
  • The design: Make sure the web design/user interface is in keeping with modern functionality and trends. Usually, an update every three to five years works.
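
To make the robots.txt and XML sitemap items concrete, here is a minimal sketch; the domain and paths are placeholders, and real rules depend entirely on the site:

```
User-agent: *
Disallow: /cart/
Disallow: /internal-search/

Sitemap: https://www.example.com/sitemap.xml
```

The Disallow lines keep bots out of low-value pages to reduce crawl strain, while the Sitemap line points search engines at the canonical list of pages worth indexing.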

Of course, there are more ways to address the technical aspects of a website, but this is the minimum to give an old website some new life. For more, an SEO checklist can help.

Just say “no” to pigs

I believe as SEO professionals, we have a duty to advise potential clients in a way that sets them up for success.

If we know that what the client is asking for is merely putting lipstick on a pig, then we need to be upfront about it. Cosmetic changes can add curb appeal, and may be needed — but don’t let the work stop there if the site is still just a pig.

SEOs must go beyond the lipstick and help create a good user experience to succeed. Updated content, Core Web Vitals, and especially page speed are examples of how SEO changes the pig into something better.

And, we should be prepared to either walk away from a potential client who does not want to follow our advice, or get creative and know how to make the most impact with their budget.

That may mean you address the fundamental problems of the website bit by bit on a slower timeline before kicking it into high gear. It’s not the job of SEO to make a pig fly … it’s the job of SEO to transform a website so that it becomes an eagle.

If you are interested in the original article by Bruce Clay Inc, you can find it here.


Keyword rank tracking software – 6 tools compared


We have looked at some of the most popular rank tracking tools to see how they differ and what features each one offers for position tracking.

Reliable rank tracker software is crucial to developing a successful SEO strategy. Aside from tracking keyword rankings, SEO tools provide you with keyword research features and useful SEO metrics, which lets you pick the most effective keywords for your website.

With so many rank tracking tools in the market, how do you know which one is the best fit for you? This list includes some popular rank tracking tools, showing what scope of tasks they offer based on different pricing models.

1. Rank Tracker by SEO PowerSuite

Rank Tracker from SEO PowerSuite enables you to track keyword positions, analyze SERPs, research keywords, explore competitors, and make estimates for SEO and PPC campaigns.

The accuracy and volume of keyword data make it a decent competitor to industry-leading rank-checking tools such as Semrush or Ahrefs. Meanwhile, Rank Tracker is affordable for small businesses and cost-efficient for large SEO agencies.

Main features:

  • An unlimited number of keywords and projects.
  • Local rank tracking: the software supports over 500 search engines in all possible locations, including Yahoo, Bing, Baidu, Naver and others. 
  • Separate mobile rank tracking.
  • Automated ranking checks for regular monitoring of your keyword positions.
  • Custom alerts and keyword position reports straight to your or your clients’ email boxes.
  • SERP feature analysis: the tool tracks if any of your pages are represented in featured snippets, FAQs, People Also Ask, featured images and videos, etc.
  • 24 keyword research techniques: in addition to its own keyword database, Rank Tracker includes Google and YouTube suggestions, related searches, Amazon autocomplete tool, keyword gap, TF-IDF analysis, etc.
  • Integration with Google Analytics, Google Search Console and Keyword Planner.

Pricing plans: Free version or Professional $149/year, Enterprise $349/year, billed annually.

Rank Tracker’s free version is pretty generous. It allows checking an unlimited number of keywords and websites. You can compare your rankings with one competitor. Besides, you can research as many keywords as you want.

With the Professional edition, Rank Tracker allows you to record the SERP history and track rankings for up to five competitors. Besides, it allows scheduling keyword position checks.

The Enterprise version of Rank Tracker is great for reporting to SEO clients as it comes with the white-label feature to add your company name and logo to your reports.

2. Mangools

Mangools is web-based software providing various tools for tracking search results. With this tool, you can quickly monitor top gainers and losers in SERPs.

Main features

  • Mangools software calculates the proprietary Performance Index score to show your ranking success.
  • The Keyword Position Flow board shows the number of keywords that moved up or down across the SERP. There is a compact dashboard of distribution for your Keyword Positions.
  • SERPWatcher shows your keywords’ ranking performance, defined by the target country and user device type. 
  • SERPChecker checks rankings on the search results page and their quality (page authority and backlinks). The rank checker also provides a snapshot of the SERP.
  • Keyword research tools in Mangools include Related queries, Autocomplete, and Related questions.
  • Easy sharing of SEO reports on your search engine rankings. 

Pricing plans: Basic $358.80/year, Premium $478.80/year, Agency $958.80/year if billed annually; monthly billing is available.

Mangools’ pricing plans differ by the limit on keyword lookups per 24 hours. The Basic plan allows up to 100 lookups, the Premium plan 500, and the Agency plan 1,200. The number of tracked keywords also ranges from 200 to 700 to 1,500, respectively.

This SEO tool offers a fairly limited free trial.

3. SE Ranking

SE Ranking is a cloud-based platform for SEO and online marketing professionals.

Its rank checker has neat keyword dashboards to help you easily understand what’s going on in search results.

Main features

  • Web-based keyword tracking and site audit platform.
  • Allows tracking your site’s ranking positions on Google, Bing, Yahoo, Yandex and YouTube.
  • The tool also lets you track local and mobile keyword rankings.
  • This rank monitoring tool offers a 14-day free trial.

Pricing plans: Essential, Business, and Pro, monthly and yearly billing available.

The pricing of the SE Ranking online rank tracker is based on the frequency of your ranking checks: daily, every three days, or weekly. Besides, there is a limit of five search engines or locations per keyword.

The cheapest is the Essential edition, with weekly checks of 250 tracked keywords. It will cost $225 if billed annually. The most advanced Business plan, with daily checks for at least 2,500 keywords, will cost around $1,800 if billed yearly.

4. Ahrefs

Ahrefs is a popular SEO and marketing platform that lets you audit websites, research keywords, analyze backlinks, etc. The rank tracker tool in Ahrefs provides a pretty compact dashboard where you can grasp all your trends in keyword rankings at a glance.

Main features:

  • All-in-one web-based SEO software.
  • Ahrefs’ Rank Tracker lets you overview top traffic keywords and the SERP in detail, with each result accompanied by important metrics. 
  • SERP features appear next to each result, and you can unwrap and see all three ranking results in the local pack.
  • The filtering tags let you quickly filter the data you need.
  • The widely-known metric of Ahrefs is Domain Rating (DR), the score showing the authority of a domain based on its backlink profile.
  • In every plan, you will get weekly updates on your ranking progress.

Pricing plans: Lite $990/year, Standard $1,990/year, Advanced $3,990/year, Enterprise $9,990/year if billed annually; monthly billing is available.

Rank tracking history is available in all plans but Lite. Like most cloud-based rank tracking software tools, Ahrefs charges per keyword in the project. The Lite plan allows tracking up to 750 keywords per project; Standard — 2,000 keywords, Advanced — 5,000; and 10,000 keyword checks are available in Enterprise.

There is also a pay-as-you-go option to get additional keyword entries at $50 per 500 keywords. For additional costs, you can get daily checks of your keyword positions.

Ahrefs’ Rank Tracker tool allows an unlimited number of verified projects. But there are limits on tracking the unverified domains, starting with five in the cheapest plan.

5. Semrush

Semrush is an all-in-one web-based solution for a complete SEO and marketing workflow. Due to its costly pricing, it is more suitable for large SEO agencies.

Main features:

  • Organic Traffic Insights with all sorts of data: the number of ranking keywords, traffic fluctuation over the last 12 months, and traffic cost.
  • Branded vs. non-branded traffic trends, top organic keywords. 
  • The graph of organic keywords trends shows how many keywords entered the top 3-10 and up to 100 historically.
  • The free plan is available with a limit of 10 requests per day.
  • 7-day free trial.

Pricing plans: Pro $1,199/year, Guru $2,299/year, Business $4,499/year, if billed annually.

Similar to other powerful rank tracking tools, Semrush tracks search engine rankings in multiple countries and languages. However, these features, as well as keeping historical data, are available only with Guru and Business plans.

The Pro plan limit is 500 keywords tracked simultaneously across all projects, 1,500 for Guru, and 5,000 for Business (with keyword ranking progress updated daily). The number of projects is 5, 15, and 40, respectively. There are also limits on the number of keyword metric updates per month.

6. Wincher plugin

WordPress enthusiasts know the power of the Yoast SEO plugin. Now they can have even more with the Wincher integration for tracking keyword rankings.

Main features:

  • The plugin is used in combination with Yoast SEO.
  • Up to five key phrases tracked per post.
  • The trend of keyword ranking position over time.

Pricing plans: Starter $288/year, Business $588/year, Enterprise $2388/year, if billed annually; monthly billing is also available (the price in U.S. dollars is approximate since the payment is made in euros).

The free version of the Wincher rank tracking plugin is limited to one keyword per five posts, just to give you a slight taste of rank checking. The paid version allows reviewing organic ranking positions of up to 10,000 keywords, with five keyword phrases per post if used in combination with the paid Yoast SEO.

Rank checking in a meaningful way

Hopefully, this comparison list of rank tracking tools will help you figure out what software you need for your tasks. If you are new to rank tracking, read our detailed guide on how to track SEO results and what important metrics to consider.

If you are interested in the original article, you can find it here.


Why Brand Awareness Is The Fifth Pillar Of SEO


While traditional SEO techniques work on non-branded search, they are ineffective for branded search. Learn how to grow branded search by embracing brand awareness as the fifth SEO pillar.

Search engine optimization (SEO) is a marketing practice for increasing a website’s organic traffic through search engines.

It consists of techniques in four key areas: keyword and content, technical SEO, on-site SEO, and off-site SEO.

These four areas are typically considered the four pillars of SEO. They work together to help a website rank well on search engines.

However, even as extensive as these four pillars are, an SEO strategy isn’t complete if it ignores brand awareness.

In this article, you’ll learn why SEO marketers should consider brand awareness as the fifth pillar of SEO.

The First Four Pillars Of SEO

Before we look into the fifth pillar, let’s review the first four pillars of SEO:

Keyword And Content

Content rules – and keywords are the foundation of search.

A good piece of keyword-optimized content is the building block of an SEO strategy.

Technical SEO

Great content is insufficient if the website hosting it doesn’t have a sound technical foundation.

Technical SEO covers areas like indexability and performance of the website.

It ensures a website loads its pages fast, and search engines can easily crawl the content.

Notably, Google has developed a set of metrics called Core Web Vitals to measure a web page’s technical performance and usability.

On-Site SEO

This pillar helps search engines understand a page’s content through a better structure for the website and its pages.

Site navigation hierarchy, schema markup, page titles, meta descriptions, heading tags, and image alt text are tools to create an easy-to-understand website and page structure for search engine crawlers and visitors.

Off-Site SEO

Having great content and a great website is just the beginning.

A website can’t rank well on search engines if it lacks authority and doesn’t garner trust in its subject domain.

From the outset, Google has used the amount and quality of backlinks as an indicator of a website’s authority.

Nevertheless, even as far-reaching as these four areas appear in creating a search-optimized website, they can only help drive part of your website’s search traffic, i.e., the type of traffic coming from non-branded searches.

Non-Branded Vs. Branded Searches

What are non-branded searches, and how are they different from branded searches?

Branded Queries

They contain branded names in the search terms.

If you’re Apple Inc., the search term “apple” is a branded term.

Yes, Google knows you’re looking for the company founded by Steve Jobs and Steve Wozniak rather than the fruit. Moreover, “iPhone,” “iPad,” and “MacBook” are also branded terms.

Branded searches are conducted by people looking for information, especially about your brand or products.

Non-Branded Queries

On the other hand, non-branded queries don’t contain any branded name in the search terms. Again, for Apple Inc., “laptop,” “smartphone,” and “tablet” are non-branded terms related to its products.

Non-branded searches are from people who may not know about your brand or products but are looking for information about the type of products or solutions you offer.
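
The split can be sketched in code: a naive classifier that labels queries by whether they contain a brand term. The brand terms and sample queries below are illustrative only; real branded-query detection also has to handle misspellings and partial matches:

```python
# Hypothetical brand terms for an Apple-like company
BRAND_TERMS = {"apple", "iphone", "ipad", "macbook"}

def classify_query(query):
    """Label a query 'branded' if any word in it is a brand term."""
    words = query.lower().split()
    return "branded" if any(w in BRAND_TERMS for w in words) else "non-branded"

for q in ["iphone 14 review", "best smartphone 2022", "macbook air m2", "cheap laptop"]:
    print(f"{q}: {classify_query(q)}")
```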

With this in mind, for a brand as strong as Apple, you may think its search traffic is largely from branded searches. And, for the most part, you’d be correct.

According to Semrush, more than half of search traffic to Apple’s website comes from branded searches.

Why Is Branded Search Important?

Branded search traffic not only reflects the level of interest in a specific brand, but also has higher commercial intent and a higher conversion rate.

Generally speaking, non-branded search traffic feeds the upper part of the marketing funnel, and branded search traffic feeds the lower part of the funnel.

A brand needs to grow both types of traffic to maintain a healthy and growing business.

[Diagram created by author, August 2022: non-branded and branded traffic representing different parts of the marketing funnel.]

That said, most businesses don’t have Apple’s level of brand recognition.

What can marketers do to drive branded search traffic to a website?

Different Traffic Drivers For Non-Branded And Branded Searches

As illustrated by the formula below, search traffic is driven by two factors: keyword search volume and clickthrough rate (CTR) on the search engine results page (SERP).

A website with high aggregated keyword search volume and clickthrough rate will have high search traffic.

Search traffic = Keyword search volume * Clickthrough rate

Wait a minute! Where does keyword ranking fit into the equation?

Keyword ranking is, in fact, a critical factor in determining the clickthrough rate.

The higher your keywords rank, the higher the clickthrough rate you get.

According to Advanced Web Ranking, position #1 on Google can have a 38% CTR. CTR drops to about 5% on position #5 and stays around 1% or below after position #10.
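
Plugging those approximate CTR figures into the formula gives a quick, back-of-the-envelope traffic estimate; the ~1% fallback for unmapped positions is an assumption for illustration:

```python
# Approximate CTR by SERP position, using the figures quoted above
CTR_BY_POSITION = {1: 0.38, 5: 0.05, 10: 0.01}

def estimated_traffic(search_volume, position):
    """Search traffic = keyword search volume * clickthrough rate."""
    ctr = CTR_BY_POSITION.get(position, 0.01)  # assume ~1% beyond the mapped spots
    return round(search_volume * ctr)

# A keyword with 10,000 monthly searches:
print(estimated_traffic(10_000, 1))  # 3800 clicks at position 1
print(estimated_traffic(10_000, 5))  # 500 clicks at position 5
```

The gap between those two numbers is the whole argument for improving ranking: the same search volume yields nearly eight times the clicks at position 1 versus position 5.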

Bearing that in mind, how do the four pillars of SEO contribute to a website’s search traffic?

They help a website increase its search traffic in two ways:

  • Maximizing the aggregated keyword search volume through keyword research and targeting.
  • Improving keyword ranking to achieve a higher SERP clickthrough rate through valuable and keyword-optimized content, technical SEO, and on-site and off-site optimization.

Nevertheless, the problem is that these SEO techniques work mostly on non-branded searches.

They have limited effect on branded search, because branded search and non-branded search have different traffic drivers.

Non-Branded Traffic Driver

For non-branded search, a website can harvest a virtually unlimited number of keywords and an unbounded aggregated search volume.

The main lever of non-branded search traffic is improving your target keywords’ ranking to gain a higher clickthrough rate and capture a larger share of the search clicks.

Branded Traffic Driver

For branded search, assuming your website is already ranked No. 1 for your brand name (if not, you need to fix this problem first), ranking is generally not an issue.

As the brand owner, you always have an advantage on Google for your branded keywords.

The main lever of branded search traffic is simply increasing your branded keywords’ search volume.

However, the first four pillars of SEO have little effect on getting more people to search for your brand or your products.

As a result, they are ineffective on branded search.

How To Grow Brand Awareness & Branded Search

In short, branded search traffic results from a brand’s awareness and interest.

People wouldn’t search for your brand if they didn’t know or have any interest in your brand or your offerings.

To grow brand awareness and interest, you need to increase a brand’s visibility to its potential customers, develop authority, and garner trust for the brand.

Content marketing and the growth of non-branded search traffic could help increase brand awareness.

However, solely relying on people coming to your website to learn about your brand and offerings won’t take you very far.

To grow brand awareness at scale, marketers need to bring their brand to their potential customers. You can’t just wait for them to come to you.

Luckily, there are plenty of tools in the digital marketing arsenal to help marketers build brand awareness, including advertising, influencer marketing, customer marketing, and digital PR.


Advertising

At AdRoll, we classify advertising into two categories: retargeting and brand awareness campaigns.

As the names suggest, retargeting campaigns target people who have engaged with you (e.g., visited your website), and brand awareness campaigns target potential customers who have not yet interacted with you.

Marketers can choose from several targeting methods to bring their brand to potential customers.

Contextual Targeting

Contextual targeting is one of the oldest advertising targeting methods.

Think of a hotel chain placing its ads in a travel magazine. A brand can place ads on websites or mobile apps with content relevant to its products or services.

A big difference between contextual targeting and other targeting methods is that contextual targeting doesn’t rely on personal or behavioral data about the target audience.

It’s a more privacy-friendly way for marketers to find and connect with their potential customers.

With regulators and technology companies looking at ways to improve consumer privacy protection, the importance of contextual targeting to advertisers is likely to increase.

Demographic And Interest Targeting

Demographic and interest targeting leverages your knowledge of existing customers to find new customers.

Suppose your customers fit into any specific demographic segment or are interested in certain activities or subjects. In that case, you can bring your brand to potential customers by running ads targeting people with similar demographic characteristics or interests.

Lookalike Targeting

Lookalike targeting is similar to demographic and interest targeting.

But instead of the marketer manually defining the target audience segment based on a list of demographic or interest attributes, advertising platforms use machine learning technologies to find target audiences who look or behave similarly to the seed audience provided by marketers.

The seed audience is typically a subset of existing customers.
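Conceptually, and leaving the platforms' proprietary models aside, lookalike matching can be sketched as a similarity search over audience feature vectors: average the seed audience into a profile, then keep candidates close to it. Everything below (the feature vectors, the `lookalikes` helper, the 0.9 threshold) is hypothetical and for illustration only:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def lookalikes(seed, candidates, threshold=0.9):
    """Return candidate IDs whose features are close to the seed audience's centroid."""
    dims = len(next(iter(seed.values())))
    # Average the seed audience's vectors into a single "profile" vector.
    centroid = [sum(v[i] for v in seed.values()) / len(seed) for i in range(dims)]
    return [cid for cid, vec in candidates.items() if cosine(centroid, vec) >= threshold]

# Toy feature vectors, e.g., normalized interest/engagement scores.
seed = {"cust_1": [0.9, 0.1, 0.8], "cust_2": [0.8, 0.2, 0.9]}
candidates = {"user_a": [0.85, 0.15, 0.85], "user_b": [0.10, 0.90, 0.05]}
print(lookalikes(seed, candidates))  # ['user_a']
```

Real platforms train far richer models over behavioral data, but the core idea is the same: the seed audience defines the target, and the system finds its nearest neighbors.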

Influencer Marketing

Influencer marketing covers a broad range of tactics for leveraging someone who influences your target customers to promote your brand and products.

Even before the digital age, it was common for big brands to hire famous athletes or celebrities to endorse their products. Think Michael Jordan and Nike in the ‘80s.

Today, “influencers” are social media personalities who have built a following with a particular audience.

The vast number of influencers on social media also means influencer marketing is no longer a privilege available only to those brands with deep pockets.

Marketers can recruit influencers at very low or no cost by reaching out to those who have shown interest or already invested in the niche you serve, including your customers (more on that in the next section).

While most influencer marketing activity happens in the B2C sector, it also works for B2B.

Customer Marketing

When you shop at an ecommerce marketplace, such as Amazon, you might look at the product reviews before making a purchase decision.

The number and rating of a product’s reviews are also ranking factors for product searches on Amazon.

The more people review a product and the higher the review rating, the more visibility and traffic a product gets.

The same logic applies even if Amazon may not be the channel for your business.

For B2B SaaS providers, customer reviews on G2, Trustpilot, etc., play the same role.

For direct-to-consumer (D2C) brands, customer reviews and sharing on social media bring your brand and products to new customers and help establish trust for your brand.

Take Halfbike as an example.

This D2C company from Bulgaria has a team of brand ambassadors – their customers – all around the world to promote its brand and product simply by sharing their experiences on social media.

Some of their customers even created YouTube channels dedicated to Halfbikes.

Digital PR

Among all the strategies driving brand awareness, digital PR is the one most directly related to SEO. In fact, it’s often considered “link building 2.0.”

The main difference between link building and digital PR is that link building focuses on acquiring links from other websites.

In contrast, digital PR focuses on bringing your brand to your target audiences through stories published in relevant and high-quality publications.

The types of stories vary depending on the industries and subjects.

Take Facebook’s name change to Meta. In that context, the topic of how consumers perceive the metaverse, for instance, could make an interesting story for B2C marketers.

Because well-known publications usually have very strict link policies, digital PR prioritizes brand visibility and reach, whereas link acquisition is a secondary goal.

Brand Awareness Is The Fifth Pillar Of SEO

One of the goals of a comprehensive marketing strategy should be to grow a brand’s awareness – just as a comprehensive SEO strategy should aim at growing both non-branded and branded searches.

While non-branded search traffic is driven by keyword ranking, branded search traffic is mostly driven by the search volume of the branded keywords.

The more people are aware of and interested in a brand, the higher branded search traffic a brand gets.

Given the different growth drivers of branded and non-branded searches, SEO professionals need to include brand awareness as a pillar of SEO.

If you are interested in the original article by Wilson Lau, you can find it here.


How SEO & Graphic Design Can Become A Dream Team


Despite natural friction, SEO and graphic design go together. Here’s how to reconcile the differences, plus tips on how to compromise.

SEO pros and graphic designers don’t always see eye to eye – and that’s a shame.

Modern graphic designers often prefer clean designs with lots of white space, whereas SEO professionals are less concerned about the latter.

Generally speaking, SEO pros want content wherever we can get it.

After all, if the keyword or keyword phrase doesn’t appear on the page, the page won’t appear high in the search engine results.

Anyone who has worked on a website project knows that disagreements between SEO pros and graphic designers won’t be solved by designers citing specific design methodologies or SEO experts pointing to unconfirmed statistics.

I’m not a graphic designer, but, having worked closely with designers for over 20 years, I know a few tricks to help SEO experts and designers get what they want.

Below are some of the best tricks I’ve learned throughout my career.

Everything Doesn’t Have To Be Above The Fold

When it comes to content, I’ve found that both SEO professionals and designers tend to agree: the most crucial text and copy must be at the top of a page.

Google tells us this as well.

The page is about what the page is about – and it’s up to the website’s author to discern the essence of the page and communicate that to the intended audience.

And when it comes to websites, both SEO pros and designers need to keep the intended audience top of mind.

SEO pros need to remember that the intended audience is not Googlebot. In contrast, designers need to remember that the intended audience is not an art professor, nor is it the person approving the final design – well, to a point.

Typically, designers’ work must be reviewed and approved by somebody who oversees the site.

If an SEO pro wants to place content somewhere that looks off, this could delay approval of the overall design – and thus, designers might push back on the request.

I’ve found that good designers who are willing to compromise can typically incorporate changes to a design that works for the client, the designer, and the SEO pro.

At the end of the day, the look and feel of a site are extremely important for it to be successful.

But if you spend time and money building a beautiful site, you want to make sure people are visiting it.

So, designers and SEO experts should work closely to strike the right balance.

SEO pros can advise on the proper structure to get visitors to your site, and designers can make sure you’re not sending traffic to a site that doesn’t mesh well with its intended audience.

All the traffic in the world won’t make a difference if those visitors don’t take the desired action.

Break Up Copy

While both designers and site visitors might find giant blocks of text ugly and intimidating, SEO pros often love them.

We want pixels and pixels of text that the search engine spiders can feed on, to their heart’s content.

In my opinion, SEO pros are typically wrong when it comes to the pagination of copy.

As SEO experts, our job is to make sure that the content written for each page shows expertise, authority, and trust (E-A-T).

While the way the words are placed on the page does contribute somewhat to a page’s E-A-T, pagination is not the defining factor of E-A-T.

In fact, if we are being honest, E-A-T is more a concept than a hard and fast rule.

Most SEO pros know what E-A-T is when they see it, but defining it can be a daunting task.

But, once the research is complete and the copy is written, it’s time to trust the designer to do their job.

SEO pros can insist that the copy be present, but dictating text placement is akin to telling the pilot how to fly a plane just because you are a Platinum mileage traveler.

As long as it’s placed in a way that makes sense, our job is done.

Here are some tips I’ve found for breaking up copy without interfering with traditional designer duties:

  • Break up text with bulleted and numbered lists. Bulleted lists are excellent vehicles for topics and keyword phrases. And, they can break up a wall of text to make it less daunting for the end user.
  • Pull quotes are your friend. Pull quotes break up the page and can also emphasize key points to end users and search engine robots.
  • Use image captions. I’m an old newspaper editor, so I believe every image should have a caption – though many people don’t use captions for their images anymore. Captions are also great for additional keyword and keyword phrase placement. And no, ALT Text is not the same as forward-facing captions.

Compromise On Fonts And Images

Some SEO experts act like Maverick in “Top Gun”: We feel the need for speed.

Designers don’t always share, or fully understand, our obsession with how fast a site loads.

But, they can save themselves a ton of time, headaches, and arguments by using web-native fonts.

Designers should work to optimize images so they load quickly – and if they can’t get them fast enough, they may need to be loaded via a content delivery network (CDN).

Designers should use animation sparingly, as it typically can’t be read by search engines and distracts end users. The same can be said for excessive videos.

But, SEO pros must remember that a score of 100 on Google’s PageSpeed Insights tool isn’t necessary (anything above 90 is just an ego boost).

In Conclusion

SEO pros and site designers must work together to create websites that delight their intended audience.

Any organization that doesn’t have both an SEO expert and a graphic designer on the marketing team is most likely missing opportunities.

But, if they work together, the natural friction between SEO pros and designers can create some pretty brilliant diamonds.

If you are interested in the original article by Tony Wright, you can find it here.


Indexing and keyword ranking techniques revisited: 20 years later


Learn how keyword-based ranking techniques have evolved – from the ‘little old ladies’ to the vector space model, to today.

When the acorn that would become the SEO industry started to grow, indexing and ranking at search engines were both based purely on keywords.

The search engine would match keywords in a query to keywords in its index, which in turn mapped to the keywords that appeared on a webpage.

Pages with the highest relevancy score would be ranked in order using one of the three most popular retrieval techniques: 

  • Boolean Model
  • Probabilistic Model
  • Vector Space Model

The vector space model became the most relevant for search engines. 

In this article, I’m going to revisit the basic and somewhat simple explanation of the classic model I used back in the day (because it is still relevant in the search engine mix).

Along the way, we’ll dispel a myth or two – such as the notion of “keyword density” of a webpage. Let’s put that one to bed once and for all.

The keyword: One of the most commonly used words in information science; to marketers – a shrouded mystery

“What’s a keyword?”

You have no idea how many times I heard that question when the SEO industry was emerging. And after I’d given a nutshell of an explanation, the follow-up question would be: “So, what are my keywords, Mike?”

Honestly, it was quite difficult trying to explain to marketers that specific keywords used in a query were what triggered corresponding webpages in search engine results.

And yes, that would almost certainly raise another question: “What’s a query, Mike?”

Today, terms like keyword, query, index, ranking and all the rest are commonplace in the digital marketing lexicon. 

However, as an SEO, I believe it’s eminently useful to understand where they’re drawn from and why and how those terms still apply as much now as they did back in the day.

The science of information retrieval (IR) is a subset under the umbrella term “artificial intelligence.” But IR itself comprises several subsets, including library and information science.

And that’s our starting point for this second part of my wander down SEO memory lane. (My first, in case you missed it, was: We’ve crawled the web for 32 years: What’s changed?)

This ongoing series of articles is based on what I wrote in a book about SEO 20 years ago, making observations about the state-of-the-art over the years and comparing it to where we are today.

The little old lady in the library

So, having highlighted that there are elements of library science under the Information Retrieval banner, let me relate where they fit into web search. 

Seemingly, librarians are mainly identified as little old ladies. It certainly appeared that way when I interviewed several leading scientists in the emerging new field of “web” information retrieval (IR) all those years ago.

Brian Pinkerton, inventor of WebCrawler; Andrei Broder, Vice President of Technology and Chief Scientist with Alta Vista, the number one search engine before Google; and Craig Silverstein, Director of Technology at Google (and, notably, Google employee number one), all described their work in this new field as trying to get a search engine to emulate “the little old lady in the library.”

Libraries are based on the concept of the index card – the original purpose of which was to attempt to organize and classify every known animal, plant, and mineral in the world.

Index cards formed the backbone of the entire library system, indexing vast and varied amounts of information. 

Apart from the name of the author, the title of the book, the subject matter, notable “index terms” (a.k.a., keywords), etc., the index card would also record the location of the book. So, after a while, when you asked “the little old lady librarian” about a particular book, she could intuitively point you not just to the right section of the library, but probably to the very shelf the book was on – a personalized rapid retrieval method.

However, when I explained the similarity of that type of indexing system at search engines as I did all those years back, I had to add a caveat that’s still important to grasp:

“The largest search engines are index based in a similar manner to that of a library. Having stored a large fraction of the web in massive indices, they then need to quickly return relevant documents against a given keyword or phrase. But the variation of web pages, in terms of composition, quality, and content, is even greater than the scale of the raw data itself. The web as a whole has no unifying structure, with an enormous variant in the style of authoring and content far wider and more complex than in traditional collections of text documents. This makes it almost impossible for a search engine to apply strictly conventional techniques used in libraries, database management systems, and information retrieval.”

Inevitably, what then occurred with keywords and the way we write for the web was the emergence of a new field of communication.

As I explained in the book, HTML could be viewed as a new linguistic genre and should be treated as such in future linguistic studies. There’s much more to a Hypertext document than there is to a “flat text” document. And that gives more of an indication to what a particular web page is about when it is being read by humans as well as the text being analyzed, classified, and categorized through text mining and information extraction by search engines.

Sometimes I still hear SEOs referring to search engines “machine reading” web pages, but that term belongs much more to the relatively recent introduction of “structured data” systems.

As I frequently still have to explain, a human reading a web page while search engines text mine and extract information “about” that page is not the same thing as a human reading a web page while search engines are “fed” structured data.

The best tangible example I’ve found is to make a comparison between a modern HTML web page with inserted “machine readable” structured data and a modern passport. Take a look at the picture page on your passport and you’ll see one main section with your picture and text for humans to read and a separate section at the bottom of the page, which is created specifically for machine reading by swiping or scanning.

Quintessentially, a modern web page is structured kind of like a modern passport. Interestingly, 20 years ago I referenced the man/machine combination with this little factoid:

“In 1747 the French physician and philosopher Julien Offroy de la Mettrie published one of the most seminal works in the history of ideas. He entitled it L’HOMME MACHINE, which is best translated as “man, a machine.” Often, you will hear the phrase ‘of men and machines’ and this is the root idea of artificial intelligence.”

I emphasized the importance of structured data in my previous article, and I hope to write something that I believe will be hugely helpful in understanding the balance between human reading and machine reading. I totally simplified it this way back in 2002 to provide a basic rationalization:

  • Data: a representation of facts or ideas in a formalized manner, capable of being communicated or manipulated by some process.
  • Information: the meaning that a human assigns to data by means of the known conventions used in its representation.


  • Data is related to facts and machines.
  • Information is related to meaning and humans.

Let’s talk about the characteristics of text for a minute and then I’ll cover how text can be represented as data in something “somewhat misunderstood” (shall we say) in the SEO industry called the vector space model.

The most important keywords in a search engine index vs. the most popular words

Ever heard of Zipf’s Law?

Named after Harvard linguistics professor George Kingsley Zipf, it predicts that, as we write, we use familiar words with high frequency.

Zipf said his law is based on the main predictor of human behavior: striving to minimize effort. Therefore, Zipf’s law applies to almost any field involving human production.

This means we also have a constrained relationship between rank and frequency in natural language.

Most large collections of text documents have similar statistical characteristics. Knowing about these statistics is helpful because they influence the effectiveness and efficiency of data structures used to index documents. Many retrieval models rely on them.

There are patterns of occurrences in the way we write – we generally look for the easiest, shortest, least involved, quickest method possible. So, the truth is, we just use the same simple words over and over.

As an example, all those years back, I came across some statistics from an experiment where scientists took a 131MB collection (that was big data back then) of 46,500 newspaper articles (19 million term occurrences).

Here is the data for the top 10 words and how many times they were used within this corpus. You’ll get the point pretty quickly, I think:

Word frequency:

  • the: 1,130,021
  • of: 547,311
  • to: 516,635
  • a: 464,736
  • in: 390,819
  • and: 387,703
  • that: 204,351
  • for: 199,340
  • is: 152,483
  • said: 148,302
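As a quick sanity check, Zipf’s law implies that rank × frequency should be roughly constant. Running the quoted counts through a few lines of Python shows the products all land in the same rough band (about 1.1 million to 2.3 million), even though the raw frequencies themselves span almost an order of magnitude:

```python
# Word frequencies from the newspaper corpus quoted above.
freqs = [
    ("the", 1130021), ("of", 547311), ("to", 516635), ("a", 464736),
    ("in", 390819), ("and", 387703), ("that", 204351), ("for", 199340),
    ("is", 152483), ("said", 148302),
]

# Zipf's law predicts frequency ~ 1/rank, so rank * frequency should be
# roughly constant down the list.
for rank, (word, freq) in enumerate(freqs, start=1):
    print(f"{rank:>2}  {word:<4}  rank * freq = {rank * freq:,}")
```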

Remember, all the articles included in the corpus were written by professional journalists. But if you look at the top ten most frequently used words, you could hardly make a single sensible sentence out of them. 

Because these common words occur so frequently in the English language, search engines will ignore them as “stop words.” If the most popular words we use don’t provide much value to an automated indexing system, which words do? 

As already noted, there has been much work in the field of information retrieval (IR) systems. Statistical approaches have been widely applied because of the poor fit of text to data models based on formal logics (e.g., relational databases).

So, rather than requiring users to anticipate the exact words and combinations of words that may appear in documents of interest, statistical IR lets users simply enter a string of words that are likely to appear in a document.

The system then takes into account the frequency of these words in a collection of text, and in individual documents, to determine which words are likely to be the best clues of relevance. A score is computed for each document based on the words it contains and the highest scoring documents are retrieved.

I was fortunate enough to interview a leading researcher in the field of IR while researching the book back in 2001. At that time, Andrei Broder was Chief Scientist with Alta Vista (currently Distinguished Engineer at Google). We were discussing the topic of “term vectors,” and I asked if he could give me a simple explanation of what they are.

He explained to me how, when “weighting” terms for importance in the index, he may note the occurrence of the word “of” millions of times in the corpus. This is a word which is going to get no “weight” at all, he said. But if he sees something like the word “hemoglobin”, which is a much rarer word in the corpus, then this one will get some weight.

I want to take a quick step back here before I explain how the index is created, and dispel another myth that has lingered over the years. And that’s the one where many people believe that Google (and other search engines) are actually downloading your web pages and storing them on a hard drive.

Nope, not at all. We already have a place to do that, it’s called the world wide web.

Yes, Google maintains a “cached” snapshot of the page for rapid retrieval. But when that page content changes, the next time the page is crawled the cached version changes as well.

That’s why you can never find copies of your old web pages at Google. For that, your only real resource is the Internet Archive (a.k.a., The Wayback Machine).

In fact, when your page is crawled it’s basically dismantled. The text is parsed (extracted) from the document.

Each document is given its own identifier along with details of the location (URL) and the “raw data” is forwarded to the indexer module. The words/terms are saved with the associated document ID in which it appeared.

Here’s a very simple example I created 20 years ago, using two Docs and the text they contain.

Recall index construction

After all the documents have been parsed, the inverted file is sorted by terms:

In my example this looks fairly simple at the start of the process, but the postings (as they are known in information retrieval terms) to the index go in one Doc at a time. Again, with millions of Docs, you can imagine the amount of processing power required to turn this into the massive ‘term wise view’ which is simplified above, first by term and then by Doc within each term.
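To make the parse-and-post process concrete, here is a toy sketch in Python. The two Docs are invented stand-ins, not the originals from the book, and real engines shard this work across huge clusters:

```python
from collections import defaultdict

docs = {
    1: "the quick brown fox",
    2: "the lazy brown dog",
}

# Parsing: extract terms from each Doc and post them to the inverted
# file as term -> {doc_id: occurrence count}, one Doc at a time.
index = defaultdict(dict)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term][doc_id] = index[term].get(doc_id, 0) + 1

# After all Docs are posted, sorting by term gives the "term-wise view".
for term in sorted(index):
    print(term, index[term])
# brown {1: 1, 2: 1}
# dog {2: 1}
# fox {1: 1}
# lazy {2: 1}
# quick {1: 1}
# the {1: 1, 2: 1}
```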

You’ll note my reference to “millions of Docs” from all those years ago. Of course, we’re into billions (even trillions) these days. In my basic explanation of how the index is created, I continued with this:

Each search engine creates its own custom dictionary (or lexicon as it is – remember that many web pages are not written in English), which has to include every new ‘term’ discovered after a crawl (think about the way that, when using a word processor like Microsoft Word, you frequently get the option to add a word to your own custom dictionary, i.e. something which does not occur in the standard English dictionary).

Once the search engine has its ‘big’ index, some terms will be more important than others. So, each term deserves its own weight (value). A lot of the weighting factor depends on the term itself. Of course, this is fairly straight forward when you think about it, so more weight is given to a word with more occurrences, but this weight is then increased by the ‘rarity’ of the term across the whole corpus.

The indexer can also give more ‘weight’ to words which appear in certain places in the Doc. Words which appeared in the title tag <title> are very important. Words which are in <h1> headline tags or those which are in bold <b> on the page may be more relevant. The words which appear in the anchor text of links on HTML pages, or close to them are certainly viewed as very important. Words that appear in <alt> text tags with images are noted as well as words which appear in meta tags.

Apart from the original text “Modern Information Retrieval” written by the scientist Gerard Salton (regarded as the father of modern information retrieval) I had a number of other resources back in the day who verified the above. Both Brian Pinkerton and Michael Maudlin (inventors of the search engines WebCrawler and Lycos respectively) gave me details on how “the classic Salton approach” was used. And both made me aware of the limitations.

Not only that, Larry Page and Sergey Brin highlighted the very same in the original paper they wrote at the launch of the Google prototype. I’m coming back to this as it’s important in helping to dispel another myth.

But first, here’s how I explained the “classic Salton approach” back in 2002. Be sure to note the reference to “a term weight pair.”

Once the search engine has created its ‘big index’ the indexer module then measures the ‘term frequency’ (tf) of the word in a Doc to get the ‘term density’ and then measures the ‘inverse document frequency’ (idf) which is a calculation of the frequency of terms in a document; the total number of documents; the number of documents which contain the term.

With this further calculation, each Doc can now be viewed as a vector of tf x idf values (binary or numeric values corresponding directly or indirectly to the words of the Doc). What you then have is a term weight pair. You could transpose this as: a document has a weighted list of words; a word has a weighted list of documents (a term weight pair).
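That “term weight pair” calculation can be sketched numerically. The toy index and counts below are invented; the point is Broder’s earlier observation that a ubiquitous word like “of” earns no weight while a rare one like “hemoglobin” does:

```python
import math

# Toy inverted index: term -> {doc_id: term frequency}. Three Docs total.
index = {
    "hemoglobin": {1: 3},
    "of": {1: 12, 2: 9, 3: 15},
}
total_docs = 3

def tf_idf(term, doc_id):
    tf = index[term].get(doc_id, 0)   # occurrences of the term in this Doc
    df = len(index[term])             # number of Docs containing the term
    idf = math.log(total_docs / df)   # rarer across the corpus = higher idf
    return tf * idf

print(tf_idf("hemoglobin", 1))  # > 0: rare term, so it gets weight
print(tf_idf("of", 1))          # 0.0: appears in every Doc, idf = log(1) = 0
```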

The Vector Space Model

Now that the Docs are vectors with one component for each term, what has been created is a ‘vector space’ where all the Docs live. But what are the benefits of creating this universe of Docs which all now have this magnitude?

In this way, if Doc ‘d’ (as an example) is a vector then it’s easy to find others like it and also to find vectors near it.

Intuitively, you can then determine that documents, which are close together in vector space, talk about the same things. By doing this a search engine can then create clustering of words or Docs and add various other weighting methods.

However, the main benefit of using term vectors for search engines is that the query engine can regard a query itself as being a very short Doc. In this way, the query becomes a vector in the same vector space and the query engine can measure each Doc’s proximity to it.
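The “query as a very short Doc” idea can be sketched with cosine similarity, the usual proximity measure in the vector space model. The weight vectors below are made-up tf × idf values purely for illustration:

```python
import math

def cosine(query, doc):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * doc.get(term, 0.0) for term, w in query.items())
    nq = math.sqrt(sum(w * w for w in query.values()))
    nd = math.sqrt(sum(w * w for w in doc.values()))
    return dot / (nq * nd) if nq and nd else 0.0

# Docs already reduced to term -> tf*idf weight vectors.
doc_a = {"beethoven": 2.1, "symphony": 1.8, "fifth": 1.2}
doc_b = {"guitar": 2.0, "lesson": 1.5}

# The query is treated as just another (very short) Doc in the same space.
query = {"beethoven": 1.0, "symphony": 1.0}

print(cosine(query, doc_a))  # high: doc_a sits close to the query vector
print(cosine(query, doc_b))  # 0.0: no shared terms at all
```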

The Vector Space Model allows the user to query the search engine for “concepts” rather than a pure “lexical” search. As you can see here, even 20 years ago the notion of concepts and topics as opposed to just keywords was very much in play.

OK, let’s tackle this “keyword density” thing. The word “density” does appear in the explanation of how the vector space model works, but only as it applies to the calculation across the entire corpus of documents – not to a single page. Perhaps it’s that reference that made so many SEOs start using keyword density analyzers on single pages.

I’ve also noticed over the years that many SEOs, who do discover the vector space model, tend to try and apply the classic tf x idf term weighting. But that’s much less likely to work, particularly at Google, as founders Larry Page and Sergey Brin stated in their original paper on how Google works – they emphasize the poor quality of results when applying the classic model alone:

“For example, the standard vector space model tries to return the document that most closely approximates the query, given that both query and document are vectors defined by their word occurrence. On the web, this strategy often returns very short documents that are only the query plus a few words.”

There have been many variants to attempt to get around the ‘rigidity’ of the Vector Space Model. And over the years with advances in artificial intelligence and machine learning, there are many variations to the approach which can calculate the weighting of specific words and documents in the index.

You could spend years trying to figure out what formulae any search engine is using, let alone Google (although you can be sure which one they’re not using as I’ve just pointed out). So, bearing this in mind, it should dispel the myth that trying to manipulate the keyword density of web pages when you create them is a somewhat wasted effort.

Solving the abundance problem

The first generation of search engines relied heavily on on-page factors for ranking.

But the problem with purely keyword-based ranking techniques (beyond what I just mentioned about Google from day one) is something known as “the abundance problem”: the web grows exponentially every day, and so does the number of documents containing the same keywords.

And that poses the question on this slide which I’ve been using since 2002:

If a music student has a web page about Beethoven’s Fifth Symphony and so does a world-famous orchestra conductor (such as Andre Previn), who would you expect to have the most authoritative page?

You can assume that the orchestra conductor, who has been arranging and playing the piece for many years with many orchestras, would be the most authoritative. But working purely on keyword ranking techniques only, it’s just as likely that the music student could be the number one result.

How do you solve that problem?

Well, the answer is hyperlink analysis (a.k.a., backlinks).

In my next installment, I’ll explain how the word “authority” entered the IR and SEO lexicon. And I’ll also explain the original source of what is now referred to as E-A-T and what it’s actually based on.

Until then – be well, stay safe and remember what joy there is in discussing the inner workings of search engines!

If you are interested in the original article by Mike Grehan, you can find it here.
