Compiling a semantic kernel for SEO. How to build a semantic kernel for an online store: step-by-step instructions. Selecting search queries and checking their frequency

Before starting SEO promotion, you need to compile the site's semantic kernel: the list of search queries that potential customers use when looking for the products and services you offer. All further work, both internal optimization and work with external factors (links), is carried out in accordance with the query list fixed at this stage.

The final cost of promotion, and even the expected conversion rate (the number of inquiries to the company), also depends on how well the kernel is collected.

The more companies promote themselves on a chosen word, the higher the competition and, accordingly, the cost of promotion.

Also, when choosing the query list, do not rely only on your own ideas about which words your potential customers use; trust the opinion of professionals as well. Not all expensive and popular queries convert well, and promoting some words directly related to your business can simply be unprofitable, even if a perfect result in the form of position 1 is achievable.

A correctly formed semantic kernel, other things being equal, keeps the site firmly in the top search positions for a wide range of queries.

Principles of preparing semantics

Search queries are formed by people, the site's potential visitors, based on their goals. It is difficult to reduce this to the mathematical methods of statistical analysis built into search engine algorithms, especially since those algorithms are continuously refined and improved, and therefore change.

The most effective way to cover the maximum number of possible queries when forming the initial kernel is to look at the site from the position of the person who types the search query.

A search engine is designed to help a person quickly get to the source of information that best matches the query. It is focused, first of all, on quickly narrowing the results down to the few dozen options most relevant to the key phrase (word) of the query.

By forming the list of keywords that will become the basis of the site's semantics, you actually define the circle of its potential visitors.

Stages of collecting the semantic kernel:

  • First, compile a list of the main key phrases and words that occur in the site's subject area and characterize its target orientation. Here you can use up-to-date search engine statistics on query frequency for the topic in question. Besides the main word and phrase variants, you should also write out their synonyms and alternative names: washing powder - laundry detergent. The Yandex Wordstat service is well suited for this work.

  • Write down the component parts of the names of products or other objects of the query. Queries very often contain misspellings, typos, or words simply written incorrectly due to the poor literacy of some Internet users. Taking this feature into account can also attract an additional stream of visitors, especially in the case of new, unfamiliar names.
  • The most common queries, also called high-frequency queries, rarely lead a person to the desired site. Low-frequency queries, that is, queries with clarifications, work better. For example, the query "ring" gives one kind of top results, while "piston ring" already gives more specific information. It is better to focus on such queries when collecting the kernel: they attract targeted visitors, for example potential buyers, if it is a commercial site.
  • When compiling the list of keywords, it is also advisable to take into account widespread colloquial, so-called folk names that have become common and stable designations of certain objects, concepts, services, and so on, for example: cell phone - mobile phone - mobile. Taking such variants into account can in some cases noticeably increase the target audience.
  • In general, when drawing up the key list, it is better to orient yourself from the start on the target audience, that is, the visitors for whom the product or service is intended. A little-known name of an item (product, service) should not be the main option in the kernel, even if it is the one to be promoted: such words will be extremely rare in queries. It is better to use them with clarifications, or to use better-known close names or analogues.
  • When the semantics is ready, it should be passed through a number of filters to remove junk keys, that is, keys that bring visitors who are not the target audience.

Taking associated queries into account in semantics

  • The initial SEO list made up of the main keys should be supplemented with a number of auxiliary low-frequency queries: important ones that were simply missed during compilation. The search engine itself helps here: as you type key phrases from the list on the topic, it offers frequent related phrases in this direction for consideration.
  • For example, if you enter the phrase "computer repair", and then a second query about a matrix, the search engine will treat them as associated, that is, related in meaning, and will suggest various frequently encountered queries in this area. Such key phrases can expand the original semantics.
  • Knowing several major words from the kernel, you can expand it significantly through the search engine's associated phrases. When the search engine gives an insufficient number of such additional keys, you can obtain them using the thesaurus method: a set of concepts (terms) of a specific subject from the same conceptual area. Dictionaries and reference books can help here.

Logical scheme of compiling semantics for a site

Formation of the query list and the final editing

  • The key phrases forming the semantics collected in the first two steps require filtering. Among them there may be useless phrases that only bloat the kernel without bringing any tangible benefit in attracting the target audience. The phrases obtained by analyzing the site's target direction and extended with associated keys are called masks. This is an important list: it makes the site visible, that is, when the search engine processes a matching request, the site appears in the list of offered results.
  • Now you need to build search queries for each mask. To do this, use the search engine that the site is targeting, for example Yandex, Rambler, or Google. The list created for each mask is then subject to further editing and cleaning. This work is carried out on the basis of the actual information posted on the site, as well as search engine statistics.
  • Cleaning consists of removing unnecessary, uninformative, and harmful queries. For example, if phrases with the word "coursework" got into the list of a building materials site, they should be removed, since they are unlikely to expand the target audience. After cleaning and final editing, you obtain the actually working key queries, optimizing for which puts the site into the so-called visibility zone of the search engines. The search engine will then be able to show the desired page of the site for queries from the semantic kernel.

Summing up, we can briefly say that a site's semantics is determined by the full set of applicable formulations of search queries and their total frequency in query statistics.

All work on the formation and editing of semantics can be reduced to the following:

  1. analysis of the information posted on the site and the goals pursued by its creation;
  2. drawing up a general list of possible phrases based on the site analysis;
  3. forming an extended keyword list using associated queries (masks);
  4. forming a list of query variants for each mask;
  5. editing (cleaning) the list to exclude insignificant phrases.
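The five-step workflow above can be sketched as a tiny Python script. Everything in it (the seed phrases, the synonym table, the stop words) is invented for illustration; real data comes from the services described in this article.

```python
# Hypothetical sketch of the five-step workflow listed above.
# SEEDS, SYNONYMS and STOP_WORDS are invented examples, not real data.

SEEDS = ["manicure salon", "nail service salon"]     # step 2: general list
SYNONYMS = {"manicure salon": ["manicure studio"]}   # step 3: associated wording
STOP_WORDS = {"free", "coursework"}                  # step 5: junk markers

def expand(seeds, synonyms):
    """Step 3: extend every seed with its associated formulations (masks)."""
    out = []
    for seed in seeds:
        out.append(seed)
        out.extend(synonyms.get(seed, []))
    return out

def clean(phrases, stop_words):
    """Step 5: drop any phrase that contains a stop word."""
    return [p for p in phrases if not stop_words & set(p.lower().split())]

masks = expand(SEEDS, SYNONYMS)
kernel = clean(masks + ["free manicure coursework"], STOP_WORDS)
```

The rest of the article fills in the hard parts that this skeleton glosses over: where the phrases come from and how the cleaning is actually done.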

From this article you have learned what a site's semantic kernel is and how it is compiled.

Given the constant struggle of search engines against various manipulations of link factors, the correct structure of the site increasingly comes to the fore in search engine optimization.

One of the main prerequisites for a competent site structure is a detailed elaboration of the semantic kernel.

At the moment there is a fairly large number of general instructions on how to build a semantic kernel, so in this material we have tried to explain in more detail exactly how to do it, with minimal time spent.

We have prepared a guide that answers, step by step, the question of how to create a site's semantic kernel, with specific examples and instructions. By applying it, you will be able to independently create semantic kernels for the projects you promote.

Since this post is quite practical, a lot of the work will be performed in Key Collector, as it saves quite a lot of time when working with the semantic kernel.

1. Forming the seed phrases for collection

Expanding the phrases for parsing within one group

For each query group, it is highly desirable to expand the synonyms and other wordings right away.

For example, take the query "swimsuits" and collect additional rewordings of it using the following services.

WordStat.Yandex - right column

As a result, for the given initial phrase we can get another 1-5 different formulations, for which we will then need to collect queries within a single query group.

2. Collecting search queries from various sources

After we have identified all the phrases within one group, we move on to collecting data from various sources.

The optimal set of parsing sources for obtaining the fullest possible output data for the Runet is:

● WordStat.Yandex - left column

● Yandex + Google search suggestions (with enumeration of characters at the end and substitution of letters before the specified phrase)

Tip: if you do not use proxies in your work, then to keep your IP from being banned by the search engines, it is advisable to use the following delays between requests:

● In addition, it is also desirable to manually import data from the Prodvigator database.

For non-Runet projects we use the same sources, except for the WordStat.Yandex data and the Yandex search suggestion data:

● Google search suggestions (with enumeration of characters at the end and substitution of letters before the specified phrase)

● SEMrush - the relevant regional database

● Similarly, we use imports from the Prodvigator database.

In addition, if your site already collects search traffic, then for a general analysis of the search queries in your topic it is desirable to export all phrases from Yandex.Metrika and Google Analytics:

And for the analysis of a specific query group, you can use filters and regular expressions to select exactly the queries needed for that group.

3. Cleaning the queries

After all the queries have been collected, the resulting semantic kernel needs a preliminary cleaning.

Cleaning with ready-made stop-word lists

It is advisable to immediately take advantage of ready-made stop-word lists, both general ones and ones specific to your topic.

For example, for commercial topics these will be phrases such as:

● free, download, ...

● abstracts, Wikipedia, wiki, ...

● used, old, ...

● work, profession, vacancies, ...

● dream, sleep, ...

● and other words of this kind.

In addition, immediately clean out the names of all the cities of Russia, Ukraine, Belarus, and so on.

After we have loaded the full list of our stop words, we select the "independent of the stop word's form" match type and click "Mark phrases in the table":

In this way we remove the phrases with obvious minus words.
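The marking step can be approximated in a few lines of Python. This is only a rough sketch outside Key Collector: the phrases and stop words are made up, and a whole-word regex stands in for KC's "independent of word form" matching (which also catches morphological variants).

```python
import re

# Rough sketch of stop-word marking; phrases and stop words are made up.
stop_words = ["free", "download", "wikipedia", "used"]
phrases = [
    "buy swimsuit moscow",
    "swimsuit photo free download",
    "used swimsuit",
]

# Match any stop word as a whole word anywhere in the phrase.
# (Key Collector's match type is stricter: it also covers word forms.)
pattern = re.compile(r"\b(" + "|".join(map(re.escape, stop_words)) + r")\b")

marked = [p for p in phrases if pattern.search(p)]    # phrases to delete
kept = [p for p in phrases if not pattern.search(p)]  # the cleaned kernel
```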

After clearing out the obvious stop words, the semantic kernel must be reviewed manually.

1. One of the quick ways: when we come across a phrase with a word that clearly does not suit us, for example a brand we do not sell, we:

● click the icon to the left of such a phrase,

● select the stop word,

● select a list (it is advisable to create a separate list and name it accordingly),

● if necessary, immediately highlight all the phrases that contain the specified stop word,

● add it to the stop words.

2. The second way to quickly identify stop words is to use the "Group analysis" functionality, where phrases are grouped by the words they contain:

Ideally, to be able to return to a given word later, it is desirable to add all marked words to a dedicated stop-word list.

As a result, we get a list of words to send to the stop-word list:

But it is desirable to look through this list quickly as well, to make sure that no words that are not actually stop words end up in it.

In this way you can quickly go through the main stop words and remove the phrases that contain them.

Cleaning implicit duplicates

● We sort this column in descending order of frequency

As a result, we keep only the most frequent phrase in each such subgroup and delete everything else.
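The same subgroup cleanup can be sketched in Python: phrases made of the same words in a different order are grouped together, and only the highest-frequency wording survives. The phrases and frequencies below are invented for illustration.

```python
from collections import defaultdict

# Invented frequencies for three wordings of one query plus an unrelated one.
freq = {
    "buy swimsuit moscow": 500,
    "moscow buy swimsuit": 40,
    "swimsuit buy moscow": 15,
    "swimsuit photo": 900,
}

# Implicit duplicates share the same set of words, so the word set is the key.
groups = defaultdict(list)
for phrase in freq:
    groups[frozenset(phrase.split())].append(phrase)

# Keep only the most frequent wording in each subgroup, delete the rest.
kept = [max(subgroup, key=freq.get) for subgroup in groups.values()]
```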

Cleaning phrases that carry no particular semantic load

In addition to the cleaning described above, you can also remove phrases that carry no particular semantic load and will not noticeably affect the search for phrase groups for creating separate landing pages.

For example, for online stores you can delete phrases that contain the following keywords:

● buy,

● sale,

● online store, ...

To do this, we create another stop-word list, enter these words into it, mark the matching phrases, and delete them from the list.

4. Query grouping

After we have cleaned out the most obvious garbage and unsuitable phrases, we can start grouping the queries.

This can be done manually, or you can enlist some help from the search engines.

Collecting the SERPs from the desired search engine

In theory, it is better to collect results for the desired region from Google:

● Google understands semantics well enough

● it is easier to collect from, as it does not ban the various proxies as readily

Nuance: even for Ukrainian projects it is better to collect results from Google.ru, since the sites there are better structured, and we will therefore get a much better breakdown into landing pages.

Such data can be collected both in Key Collector and with the help of other tools.

If you have a lot of phrases, dedicated tools for collecting SERP data are clearly needed. Optimal collection speed and reliability are shown by a bundle of A-Parser plus proxies (both paid and free).

After we have collected the SERP data, we group the queries. If you collected the data in Key Collector, you can do the grouping of phrases right in it:

We are not very fond of how KC does this, so we have our own tools that produce significantly better results.

As a result, with such grouping we can quickly combine queries with different wording but the same user problem:

Ultimately, this substantially reduces the final processing of the semantic kernel.

If you cannot collect the SERPs with the help of proxies, you can use various services:

They will help you group the queries quickly.

After such clustering based on SERP data, it is in any case necessary to carry out a further detailed analysis of each group and combine the groups that are similar in meaning.
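A minimal sketch of this kind of SERP-based clustering, assuming invented top-result data: two queries fall into one group when their collected SERPs share at least a threshold number of URLs.

```python
# Invented top-result sets; a real run would use collected SERP data.
OVERLAP_MIN = 3  # URLs two SERPs must share to merge the queries

serps = {
    "swimsuit buy": {"a.com", "b.com", "c.com", "d.com"},
    "buy a swimsuit": {"a.com", "b.com", "c.com", "e.com"},
    "swimsuit photo": {"x.com", "y.com", "z.com"},
}

groups = []
for query, urls in serps.items():
    for group in groups:
        if len(urls & group["urls"]) >= OVERLAP_MIN:
            group["queries"].append(query)  # same user problem, one page
            group["urls"] |= urls
            break
    else:
        groups.append({"queries": [query], "urls": set(urls)})
```

Here "swimsuit buy" and "buy a swimsuit" merge into one group because their SERPs overlap, while "swimsuit photo" stays separate.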

For example, such groups of queries should ultimately be combined onto one page of the site.

The most important thing: each individual page on the site must address one user need.

After such processing of the semantics, at the output we should get the most detailed possible site structure:

● informational queries

For example, in the case of swimsuits we can build the following site structure:

each page of which will have its own Title, Description, text (where needed) and products / services / content.

As a result, once we have worked through all the queries in detail, we can begin the detailed collection of all key queries within each group.

To quickly collect the phrases in Key Collector, we:

● select the seed phrases for each group,

● go, for example, to suggestion parsing,

● choose distribution by groups,

● select "Copy phrases from Yandex.WordStat" from the drop-down list,

● press Copy,

● and begin collecting data from the next source, using the same phrases already distributed across the groups.

The results

Let's look at the numbers now.

For the topic "swimwear", we initially collected more than 100,000 different queries from all sources.

At the query-cleaning stage, we managed to reduce the number of phrases by 40%.

After that, we collected frequencies in Google AdWords and kept for analysis only the queries with a frequency greater than 0.

We then grouped the queries based on the Google SERPs and obtained about 500 query groups, within which we carried out a detailed analysis.

Conclusion

We hope that this guide will help you collect semantic kernels for your sites much faster and better, and that it answers, step by step, the question of how to collect a semantic kernel for a site.

Happy semantic kernel collecting, and, as a result, high-quality traffic to your sites. If you have any questions, we will be happy to answer them in the comments.


We have released a new book, "Content Marketing on Social Networks: How to Get into Subscribers' Heads and Make Them Fall in Love with Your Brand."

The semantic kernel (abbreviated SK) is a specific list of keywords that best describes the subject of the site.

Why do you need to make a semantic site core

  • the semantic kernel characterizes the site: it is thanks to it that the indexing robots determine not only the naturalness of the text but also its subject, in order to place the page in the appropriate search section. Obviously, the robots work fully autonomously once the page's address enters the search engine's database;
  • a competently composed SK is the semantic basis of the site and reflects a structure suitable for SEO promotion;
  • each site page is accordingly tied to a certain part of the web resource;
  • thanks to the semantic kernel, the promotion strategy in the search engines is formed;
  • based on the semantic kernel, you can estimate how much promotion will cost.

Basic rules for compiling the semantic kernel

    To compile an SK, you will need to collect sets of keywords. You therefore need to assess your capacity for promoting high- and mid-frequency queries. If you need the maximum number of visitors and the budget allows it, use high- and mid-frequency queries. If not, use mid- and low-frequency queries.

    Even with a large budget it makes no sense to promote a site only on high-frequency queries. Such queries are often too general and carry an unspecific semantic load, such as "listen to music", "news", "sport".

When selecting search queries, a variety of indicators are analyzed that correspond to the search phrase:

  • number of hits (frequency);
  • number of impressions without morphological changes of the phrase;
  • pages returned by the search engine for the search query;
  • pages in the search top for the key query;
  • estimated cost of promotion for the query;
  • keyword competition;
  • predicted number of click-throughs;
  • bounce rate (leaving the site right after following the link) and seasonality of the service;
  • geo-dependence of the keyword (geographic location of the company and its clients).

How can you collect a semantic kernel

In practice, the selection of the semantic kernel can be carried out by the following methods:

    Competitors' sites can be a source of keywords for the semantic kernel. There you can quickly pick out keywords, as well as determine the frequency of their "environment" with the help of semantic analysis. To do this, perform a semantic assessment of the page text: the most frequently mentioned words form the morphological core;

    We recommend forming your own semantic kernel based on the statistics of special services. Use, for example, Yandex Wordstat, the statistics system of the Yandex search engine. There you can see the frequency of a search query and also find out what else users search for together with the given keyword;

    The systems' "suggestions" appear when you start typing a search phrase into the search box. These words and phrases can also be added as related ones;

    Closed databases of search queries, such as the Pastukhov base, can also be a source of keywords for the SK. These are special data arrays containing information about effective combinations of search queries;

    The site's internal statistics can also be a source of data on users' search queries. They contain information about the traffic source and show where the reader came from, how many pages he viewed, and which browser he used.

Free tools for compiling semantic kernel:

Yandex.WordStat is a popular free tool used in compiling the semantic kernel. With this service you can find out how many times visitors have entered a certain query into the Yandex search engine. It also makes it possible to analyze the dynamics of demand for the query by month.

Google AdWords is among the most used systems for compiling a site's semantic kernel. With Google's Keyword Planner you can calculate and forecast the impressions of specific queries in the future.

Yandex.Direct is used by many developers to select the most profitable keywords. If advertisements are later placed on the site, this approach will bring the resource's owner a good profit.

Slovoeb is the younger brother of Key Collector, also used to compile a site's semantic kernel. It works on Yandex data. Among its advantages are an intuitive interface and its accessibility not only to professionals but also to beginners who are just starting out in SEO analytics.

Paid tools for the preparation of the semantic kernel:

The Pastukhov base, according to many specialists, has no competitors. It displays queries that neither Google nor Yandex shows. There are many other features inherent in Max Pastukhov's bases, among them a convenient software shell.

SpyWords is an interesting tool that allows you to analyze competitors' keywords. With it you can carry out a comparative analysis of the semantic kernels of the resources of interest, as well as obtain the data on competitors' PPC and SEO campaigns. It is a Russian-language resource, and there will be no problems dealing with its functionality.

A paid program created specifically for professionals. It helps to build a semantic kernel by identifying current queries. It is used to estimate the cost of promoting a resource for the keywords of interest. Besides its high efficiency, this program is notably easy to use.

SEMrush allows you to determine the most effective keywords based on data from competing resources. With it you can select low-frequency queries characterized by a high level of traffic. As practice shows, it is very easy to promote a resource to the first positions of the results with such queries.

SEOlib is a service that has won the trust of optimizers. It has quite extensive functionality, allows you to competently compile a semantic kernel, and performs the necessary analytical tasks. In free mode you can analyze 25 queries per day.

Prodvigator allows you to collect a primary semantic kernel in literally a few minutes. This service is mainly used to analyze competing sites and to select the most effective key queries. Word analysis is available for Google in Russia or for Yandex in the Moscow region.

The semantic kernel comes together quite quickly if you use such sources and databases as hints.

The following processes should be highlighted.

- Based on the site's content and the relevant topics, key queries are selected that most accurately reflect the semantic load of your web portal.
- From the selected set, excess queries are excluded, possibly those that may worsen the resource's indexing. Keyword filtering is carried out based on the results of the analysis described above.
- The resulting semantic kernel should be evenly distributed among the site's pages; if necessary, texts with a certain topic and volume of keywords are commissioned.

An example of collecting the semantic kernel using the Wordstat Yandex service

For example, you are promoting a nail service salon in Moscow.

We think up and select all sorts of words that suit the subject of the site.

The company's activity

  • manicure salon;
  • nail service salon;
  • studio nail service;
  • manicure Studio;
  • pedicure Studio;
  • studio nail design.

General names of the services

- pedicure;
- manicure;
- nail extension.

Now we go to the Yandex service and enter each query, having first selected the region in which we are going to promote.

We copy all the words from the left column into Excel, plus the auxiliary phrases from the right one.

We delete the unnecessary words that do not suit the topic. Below, the suitable words are highlighted in red.

The figure of 2320 impressions shows how many times people entered this query, not only in its pure form but also as part of other phrases. For example: manicure and price in Moscow, price for manicure and pedicure in Moscow, etc.

If you enter our query in quotation marks, you will see a different figure, which counts only the key phrase itself, in any word forms. For example: manicure prices, manicure price, etc.

If you enter the same query in quotation marks with exclamation marks, we will see how many times users entered exactly the query "manicure price".
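The three match types can be summarized in a small Python snippet. Only the broad figure of 2320 comes from the example above; the phrase and exact counts are hypothetical placeholders, since real values must be taken from WordStat itself.

```python
# Only the broad count 2320 is from the example above; the other two
# figures are hypothetical placeholders for real WordStat values.
counts = {
    "manicure price":      2320,  # broad: the phrase inside any other phrase
    '"manicure price"':     480,  # quotes: only these words, any form/order
    '"!manicure !price"':   310,  # quotes + "!": exact form and word order
}

# Whatever the real values, the three figures always nest.
broad, phrase, exact = counts.values()
assert broad >= phrase >= exact
```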

Next, we distribute the list of words across the site's pages. For example, we will leave the high-frequency queries on the main page and the main sections of the site, such as: manicure, nail service studio, nail extension. We distribute the mid- and low-frequency queries over the remaining pages, for example: manicure and pedicure prices, gel nail extension with design. The words should also be divided into groups by meaning.

  • Main page - studio, nail service salon, etc.
  • Sections - pedicure, manicure, prices for manicure and pedicure.
  • Pages - nail extension, hardware pedicure, etc.

What errors can be made when compiling the kernel

When compiling a semantic kernel, no one is immune from errors. The most common include the following:

  1. There is always a danger of choosing ineffective queries that bring the minimum number of visitors.
  2. When re-promoting a site, you should not completely change the content placed on it. Otherwise all the previous parameters will be reset, including the ranking in the search results.
  3. You should not use queries that are grammatically incorrect in Russian: search robots have learned to recognize such queries well and, treating them as spam, drop the page from the search.

We wish you good luck in promoting your site!

Fast navigation on this page:

Like almost all other webmasters, I build the semantic kernel with the KeyCollector program; it is definitely the best program for compiling a semantic kernel. How to use it is a topic for a separate article, although there is plenty of information about it on the Internet. I recommend, for example, the manual by Dmitry Sidash (sidash.ru).

Since the question was asked about an example of compiling a kernel, here is one.

List of keys

Suppose our site is dedicated to British cats. I enter the phrase "British cat" into the "List of phrases" field and click the "Parse" button.

I get a long list of phrases that begins with the following (phrase and frequency are shown):

British cats - 75553
British cats photo - 12421
British fold cat - 7273
British cat cat - 5545
Cats of British breed - 4763
British shorthair cat - 3571
British cats - 3474
British cats price - 2461
Blue British cat - 2302
British fold cat photo - 2224
British cats - 1888
British cats character - 1394
Buy British cat - 1179
British cats buy - 1179
Long-haired British cat - 1083
Pregnancy of a British cat - 974
British chinchilla cat - 969
Cats of British breed photo - 953
British cat cattery Moscow - 886
Colors of British cats photo - 882
British cats care - 855
British shorthair cat photo - 840
Scottish and British cats - 763
British cats names - 762
Blue British cat photo - 723
Photo of blue British cat - 723
Black British cat - 699
What to feed British cats - 678

The list itself is much longer; I have given only its beginning.

Key grouping

Based on this list, my site will have articles about the varieties of these cats (fold, blue, shorthair, longhair), an article about their pregnancy, about what to feed them, about names, and so on down the list.

One main query of this kind is taken for each article (= the topic of the article). However, an article is not limited to a single query: other suitable queries, as well as the different variations and word forms that can be found lower in the Key Collector list, are also added to it.

For example, the word "fold" yields the following keys:

British fold cat - 7273
British fold cat photo - 2224
British fold cat price - 513
Breed of cats British fold - 418
British blue fold cat - 224
Scottish fold and British cats - 190
Cats of British fold breed photo - 169
British fold cat photo price - 160
British fold cat buy - 156
British fold blue cat photo - 129
British fold cats character - 112
British fold cat care - 112
British fold cats - 98
British shorthair fold cat - 83
Colors of British fold cats - 79

To avoid over-optimization (and over-optimization can result from using too many keys in the text, in the title, and so on), I would not take all of them with the main query included, but it does make sense to use the individual words from them in the article (photo, buy, character, care, etc.), so that the article ranks better for a large number of low-frequency queries.

Thus, a group of keywords that we will use in the article about fold cats is formed. Groups of keywords for other articles are formed in the same way, and that is the answer to the question of how to create a site's semantic kernel.
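The grouping described here amounts to collecting every phrase that contains the article's topic word. A minimal sketch, with a shortened sample of the list above:

```python
# Shortened sample of the "fold" list above; frequencies omitted.
phrases = [
    "british fold cat",
    "british fold cat photo",
    "british fold cat price",
    "british blue cat",
    "pregnancy british cat",
]

def article_group(phrases, topic_word):
    """Collect every phrase that mentions the article's topic word."""
    return [p for p in phrases if topic_word in p.split()]

fold_group = article_group(phrases, "fold")
```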

Frequency and competition

There is another important point concerning exact frequency and competition: they must be collected in Key Collector. To do this, check all the queries and, on the "Yandex.Wordstat frequencies" tab, click "Collect frequencies '!'". This gives the exact frequency of each phrase (that is, with exactly this word order and in these word forms), a much more precise indicator than the overall frequency.

To check the competition in the same Key Collector, click "Get data for PS Yandex" (or for Google), then click "Calculate KEI from the available data". As a result, the program will collect how many main pages for a query are in the top 10 (the more there are, the harder it is to break in) and how many pages in the top 10 contain such a title (likewise, the more there are, the harder it is to get into the top).

Next, you need to act on the basis of your strategy. If we want to create a comprehensive site about cats, exact frequency and competition are not so important to us. If we only need to publish a few articles, then we take the queries that have the highest frequency and at the same time the lowest competition, and write articles based on them.
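The "few articles" strategy can be sketched as a simple scoring rule: rank queries by frequency divided by competition (for example, the number of main pages in the top 10). The numbers below are illustrative, not collected data.

```python
# Illustrative numbers only; real values come from Key Collector.
queries = [
    {"phrase": "british fold cat",  "exact_freq": 7273, "competition": 9},
    {"phrase": "british cat names", "exact_freq": 762,  "competition": 2},
    {"phrase": "british cat black", "exact_freq": 699,  "competition": 8},
]

def score(q):
    # +1 keeps zero-competition queries from dividing by zero
    return q["exact_freq"] / (q["competition"] + 1)

# Highest frequency at the lowest competition comes first.
best_first = sorted(queries, key=score, reverse=True)
```

Any monotonic combination of the two indicators would do; the division is just one simple, transparent choice.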


The semantic kernel of a site is the complete set of keywords matching the subject of the web resource, by which users can find it in a search engine.



For example, the fairy-tale character Baba Yaga would have the following semantic kernel: Baba Yaga, Baba Yaga tales, Baba Yaga Russian fairy tales, Baba Yaga with a mortar fairy tale, Baba Yaga with a mortar and a broom, evil woman wizard, Baba Yaga hut on chicken legs, etc.

What is the site semantic kernel

Before you start work on promotion, you need to find all the keys by which target visitors may look for the site. Based on the semantics, the structure is drawn up, the keys are distributed across pages, meta tags, document titles and image descriptions are written, and an anchor list is developed for work with the link mass.

When drafting semantics, you need to solve the main problem: determine what information should be published to attract a potential client.

Compiling the key list also solves another important task: for each search phrase you define a relevant page that will fully answer the user's question.

This task is solved in two ways:

  • You create a site structure based on the semantic kernel.
  • You distribute selected terms on the finished resource structure.

Types of key queries (KZ) by number of shows

  • LF - low-frequency. Up to 100 shows per month.
  • MF - mid-frequency. From 101 to 1,000 shows.
  • HF - high-frequency. More than 1,000 shows.
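The classification above maps directly to a tiny helper. This is just a sketch of the thresholds as stated; real projects may draw the LF/MF/HF boundaries differently depending on the niche.

```python
def frequency_band(shows_per_month: int) -> str:
    """Classify a query by monthly shows using the thresholds
    from the list above (illustrative boundaries)."""
    if shows_per_month <= 100:
        return "LF"   # low-frequency
    if shows_per_month <= 1000:
        return "MF"   # mid-frequency
    return "HF"       # high-frequency

print(frequency_band(50))    # LF
print(frequency_band(500))   # MF
print(frequency_band(5000))  # HF
```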

According to statistics, 60-80% of all phrases and words are LF. Working with them during promotion is cheaper and easier. Therefore, you should compile the maximum possible core of phrases and constantly supplement it with new LF queries. HF and MF should not be ignored either, but the main emphasis goes on expanding the list of low-frequency queries.

Types of KZ by search type

  • Informational are used when searching for information. "How to fry potatoes" or "how many stars are in the sky".
  • Transactional are used to perform an action. "Order a down shawl", "download Vysotsky's songs".
  • Navigational are used to search for something tied to a particular company or site. "MVIDEO bread maker" or "Svyaznoy smartphones".
  • Others - everything else, where it is impossible to understand the final search goal. For example, the request "Napoleon cake": perhaps the person is looking for a recipe, or maybe they want to buy the cake.
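A naive version of this intent split can be sketched with keyword markers. The marker sets below are assumptions for illustration; a real classifier would need far richer signals, which is exactly why the "others" bucket exists.

```python
# Assumed marker words, for illustration only.
TRANSACTIONAL = {"buy", "order", "download", "price"}
INFORMATIONAL = {"how", "why", "what", "recipe"}

def query_intent(query: str, brands: set = frozenset()) -> str:
    """Guess query intent from marker words; 'brands' is the
    caller's list of known company/site names (navigational)."""
    words = set(query.lower().split())
    if words & TRANSACTIONAL:
        return "transactional"
    if words & INFORMATIONAL:
        return "informational"
    if words & brands:
        return "navigational"
    return "other"

print(query_intent("order down shawl"))                            # transactional
print(query_intent("how to fry potatoes"))                         # informational
print(query_intent("svyaznoy smartphones", brands={"svyaznoy"}))   # navigational
print(query_intent("napoleon cake"))                               # other
```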

How to make semantics

You need to identify the main terms of your business and the needs of users. For example, laundry customers are interested in washing and cleaning.

Then define the tails and specifiers (queries of more than 2 words) that users add to the main terms. This will widen your coverage of the target audience and reduce the frequency of the terms (washing plaids, washing jackets, etc.).
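Combining main terms with tails is mechanical enough to script. A minimal sketch, with made-up laundry terms and modifiers:

```python
from itertools import product

def expand(terms, modifiers):
    """Combine base terms with modifier 'tails' to widen the core.
    An empty modifier keeps the bare term in the list."""
    return [f"{t} {m}".strip() for t, m in product(terms, modifiers)]

keys = expand(["washing plaids", "washing jackets"], ["", "price", "near me"])
for k in keys:
    print(k)
```

In practice the generated combinations are only candidates: their real frequency still has to be checked in Wordstat, and nonsense phrases removed.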

Manually semantic kernel collection

Yandex Wordstat.

  • Select the web resource region.
  • Enter a key phrase. The service will show the number of requests with this keyword over the last month and a list of "related" terms that visitors were also interested in. Keep in mind that if you enter, for example, "buy windows" in quotes, you get results for the exact entry of the keyword. If you enter the key without quotes, you get overall results: requests such as "buy windows in Voronezh" and "buy a plastic window" will also be counted in this figure. To narrow and clarify the indicator, use the "!" operator, placed before each word: !buy !windows. You will receive a number covering queries such as "buy plastic windows" or "buy and order windows", but with the words "buy" and "windows" counted only in those exact forms. To obtain an absolute indicator for the request "buy windows", combine the operators: enter "!buy !windows" in quotes. You will get the most accurate data.
  • Collect the words from the left column and analyze each of them. Compile the initial semantics. Pay attention to the right column, which contains the queries users entered before or after the words from the left column. You will find many suitable phrases there.
  • Go through the "Query history" tab. On the chart you can analyze seasonality and the popularity of phrases in each month. Working with Yandex search suggestions also gives good results: each key query is entered into the search field, and the semantics is expanded based on the pop-up suggestions.
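The operator syntax described above is easy to get wrong by hand, so as a small sketch, exact-form Wordstat queries can be generated programmatically:

```python
def exact_form(phrase: str) -> str:
    """Wrap a phrase in Yandex Wordstat operators: quotes fix the
    word count, '!' before each word fixes its exact form."""
    return '"' + " ".join("!" + w for w in phrase.split()) + '"'

print(exact_form("buy windows"))  # "!buy !windows"
```

The output string is what you paste into Wordstat (or feed to a collection tool) to get the exact frequency rather than the broad total.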

Google Keyword Planner

  • Enter the main HF request.
  • Select "Get options".
  • Select the most relevant options.
  • Repeat this action with each selected phrase.

Exploring competitor sites

Use this method as a supplement, to verify that a given KZ was chosen correctly. The tools BuzzSumo, SearchMetrics, SEMrush and AdVS will help you here.

Programs for the preparation of the semantic kernel

Consider some of the most popular services.

  • Key Collector. If you are compiling a very large semantics, you cannot do without this tool. The program collects semantics via Yandex Wordstat, gathers that search engine's search suggestions, filters key queries by stop words, very low frequency and duplicates, determines the seasonality of phrases, studies statistics from counters and social networks, and selects a relevant page for each request.
  • Slovoeb. A free counterpart of Key Collector. The tool collects keywords, groups and analyzes them.
  • AllSubmitter. Helps to choose key queries and shows competitors' sites.
  • Keyso. Analyzes the visibility of a web resource and its competitors and helps in compiling the semantic kernel.

What to take into account when selecting key phrases

  • Frequency indicators.
  • Most of the KZ must be LF, the rest MF and HF.
  • Pages relevant to the requests.
  • Competitors in the top.
  • Competitiveness of the phrase.
  • The predicted number of transitions.
  • Seasonality and geo-dependence.
  • KZ with errors.
  • Associative keys.

Proper semantic kernel

First of all, you need to settle the terminology: "keywords", "keys", "key or search queries" are the words or phrases with which potential customers of your site look for the information they need.

Make the following lists: categories of goods or services (hereinafter KZ), the names of their brands, commercial tails ("buy", "order", etc.), synonyms, transliterations into Latin (or into Cyrillic, respectively), professional jargon ("keyboard" - "klava", etc.), specifications, words with possible typos and errors (a misspelled "Orenburg", etc.), and bindings to a locality (city, streets, etc.).

When working with the lists, rely on the KZ from the promotion agreement, the structure of the web resource, its information and price lists, competitors, and previous SEO experience.

Then proceed to expanding the semantics by combining the phrases selected at the previous step, either manually or using services.

Form a list of stop words and remove unsuitable KZ.
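A sketch of that filtering step, with an assumed stop-word list (real projects compile their own, e.g. "free" for a paid service):

```python
# Assumed stop words, for illustration only.
STOP_WORDS = {"free", "torrent", "diy"}

def filter_queries(queries, stop_words=STOP_WORDS):
    """Drop any query containing a stop word."""
    return [q for q in queries
            if not (set(q.lower().split()) & stop_words)]

clean = filter_queries(["buy windows", "windows free download", "order windows"])
print(clean)  # ['buy windows', 'order windows']
```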

Group the KZ by relevant pages. For each key, the most relevant page is selected or a new document is created. It is desirable to do this work manually. For large projects there are paid services like Rush Analytics.

Go from larger to smaller. First distribute the HF across the pages. Then do the same with the MF. LF can be added to the pages that already have HF and MF distributed to them, and separate pages can also be picked for them.
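As a toy sketch of that distribution, queries can be assigned from the highest frequency down, using word overlap with each page's topic words as a crude stand-in for real relevance checks (all page paths and numbers below are invented):

```python
def assign_to_pages(queries, pages):
    """queries: list of (phrase, shows); pages: dict page -> set of
    topic words. Assign from highest frequency down, picking the
    page with the greatest word overlap."""
    assignment = {}
    for phrase, shows in sorted(queries, key=lambda q: -q[1]):
        words = set(phrase.lower().split())
        best = max(pages, key=lambda p: len(words & pages[p]))
        assignment[phrase] = best
    return assignment

pages = {"/windows": {"windows", "plastic"}, "/doors": {"doors"}}
queries = [("buy plastic windows", 1000), ("order doors", 200)]
result = assign_to_pages(queries, pages)
print(result)  # {'buy plastic windows': '/windows', 'order doors': '/doors'}
```

This is exactly the part the text recommends doing manually: word overlap cannot see that two differently worded queries mean the same thing, which is why paid clustering services exist for large projects.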
After analyzing the first results, we can see that:

  • the promoted site is not visible for all the stated keywords;
  • the documents in the results are not the ones you assumed to be relevant;
  • promotion is hindered by the improper structure of the web resource;
  • several web pages are relevant for some KZ;
  • there are not enough relevant pages.

When grouping a KZ, work with all possible partitions on the web resource, fill every page with useful information, do not create duplicate text.

Common errors when working with KZ

  • only the obvious semantics was selected, without word forms, synonyms, etc.;
  • the optimizer distributed too many KZ to one page;
  • the same KZ is distributed to different pages.
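The last error in the list is easy to detect automatically once the key-to-page mapping exists. A minimal sketch (page paths are invented):

```python
from collections import defaultdict

def duplicate_keys(page_keys):
    """page_keys: dict page -> list of queries assigned to it.
    Return the queries that ended up on more than one page."""
    seen = defaultdict(set)
    for page, keys in page_keys.items():
        for k in keys:
            seen[k].add(page)
    return {k: sorted(p) for k, p in seen.items() if len(p) > 1}

dupes = duplicate_keys({
    "/a": ["buy windows"],
    "/b": ["buy windows", "order windows"],
})
print(dupes)  # {'buy windows': ['/a', '/b']}
```

Each query the function reports makes two of your own pages compete with each other in the search results, so one of the pages should give the key up.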

In these cases ranking worsens, the site may be punished for over-optimization, and if the web resource also has an incorrect structure, it will be very difficult to promote.

It does not matter how exactly you pick up the semantics. With the right approach you will obtain a correct semantic kernel, which is necessary for successful website promotion.