A cluster of old computers running Windows. A desktop cluster. Checking network connections and name resolution

Today, the business processes of many companies are completely tied to information
technologies. As organizations grow more dependent on the operation of their computer
networks, the availability of services at any time and under any load becomes critical.
A single computer can provide only a basic level of reliability and scalability;
the maximum level is achieved by combining two or more computers into a unified
system - a cluster.

What is a cluster

Clusters are used in organizations that need round-the-clock, uninterrupted
availability of services, where any outage is undesirable or unacceptable. They are
also used where a load spike is possible that the main server cannot handle on its own;
in that case additional hosts, which usually perform other tasks, help compensate.
For a mail server processing tens or hundreds of thousands of messages per day, or for
a web server running an online store, the use of a cluster is highly desirable. To the
user such a system remains completely transparent: the whole group of computers looks
like a single server. Using several, even inexpensive, computers gives significant
advantages over a single powerful server: even distribution of incoming requests,
increased fault tolerance (when one element fails, its load is picked up by the other
systems), scalability, and convenient maintenance and replacement of cluster nodes,
among much else. The failure of one node is detected automatically and the load is
redistributed; to the client all of this goes unnoticed.

Win2K3 features

Generally speaking, some clusters are designed to increase data availability, others
to provide maximum performance. In the context of this article we are interested in
MPP (Massive Parallel Processing) clusters, in which applications of the same type run
on several computers, providing scalability of services. There are several technologies
that allow the load to be distributed between multiple servers: traffic redirection,
address translation, DNS Round Robin, and special programs working at the application
level, such as web accelerators. Unlike Win2K, clustering support is built into Win2K3
from the start, and two types of clusters are supported, differing in the applications
they serve and the specifics of the data:

1. NLB (Network Load Balancing) clusters - provide scalability and high availability of
services and applications based on the TCP and UDP protocols, combining up to 32 servers
with the same data set into one cluster, all running the same applications. Each request
is handled as a separate transaction. They are used for sets of rarely changing data,
such as WWW, ISA, Terminal Services and similar services.

2. Server clusters - can combine up to eight nodes; their main task is to ensure
application availability in case of failure. They consist of active and passive nodes.
A passive node is idle most of the time, acting as a reserve for the main node. For
individual applications it is possible to configure several active servers and
distribute the load between them. Both nodes are connected to a single data store.
A server cluster is used for large volumes of frequently changing data (mail, file and
SQL servers). Such a cluster can only consist of nodes running the Enterprise or
Datacenter editions of Win2K3 (the Web and Standard editions do not support server
clusters).

Microsoft Application Center 2000 (and only it) had one more kind of cluster - CLB
(Component Load Balancing), which made it possible to distribute COM+ applications
across several servers.

NLB clusters

When load balancing is used, a virtual network adapter is created on each host, with its
own IP and MAC address independent of the real ones. This virtual interface presents the
cluster as a single node; clients contact it at the virtual address. All requests are
received by every cluster node, but each request is processed by only one of them. The
Network Load Balancing Service runs on all nodes and, using a special algorithm that
requires no data exchange between nodes, decides whether a given node should process a
request or not. The nodes exchange heartbeat messages that indicate their availability.
If a host stops sending heartbeats, or a new node appears, the remaining nodes start a
convergence process and redistribute the load anew. Balancing can be implemented in one
of two modes:

1) unicast - unicast mode, in which the cluster's virtual MAC address is used instead of
the adapter's physical MAC. In this case the cluster nodes cannot exchange data with each
other using MAC addresses, only over IP (or through a second adapter that is not part of
the cluster);

2) multicast - multicast mode, in which the cluster MAC is a multicast address added
alongside the adapter's real one, so the nodes can still reach each other directly at
their real addresses.

Within one cluster, only one of these modes may be used.

Several NLB clusters can be configured on one network adapter by specifying individual
port rules. Such clusters are called virtual. This makes it possible to assign, for each
application, node or IP address, specific computers in the primary cluster, or to block
traffic for one application without affecting traffic for other programs running on the
same node. Conversely, the NLB component can be bound to several network adapters, which
allows a number of independent clusters to be configured on each node. You should also
know that configuring a server cluster and NLB on the same node is impossible, because
they work with network devices in different ways.

The administrator can build a kind of hybrid configuration that has the advantages of
both methods, for example by creating an NLB cluster and configuring data replication
between the nodes. But replication is not performed continuously, only from time to time,
so the information on different nodes will differ for some period.

Let us finish with the theory here, although one could talk about building clusters for
a long time, listing capabilities and ways of scaling up, and giving various
recommendations and options for specific implementations. We leave all these subtleties
and nuances for self-study and move on to the practical part.

Setting up an NLB cluster

No additional software is required to set up an NLB cluster; everything is done with the
tools already present in Win2K3. To create, maintain and monitor NLB clusters, use the
"Network Load Balancing Manager" component, found under "Administrative Tools" in the
"Control Panel" (command: nlbmgr). Since the "Network Load Balancing" component is
installed as a standard Windows network driver, NLB can also be installed through the
"Network Connections" component, where it is available. But it is better to use only the
first option: using the NLB Manager and "Network Connections" at the same time may lead
to unpredictable results.

The NLB Manager lets you configure and manage several clusters and nodes at once from a
single place.
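
For quick checks, the same information is available from the command line; a minimal
sketch, assuming a default Win2K3 installation where the legacy wlbs.exe utility is still
present alongside the Manager:

rem Launch the Network Load Balancing Manager GUI
nlbmgr

rem Show the state of the local node and whether the cluster has converged
wlbs query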

It is also possible to install an NLB cluster on a computer with a single network adapter
bound to the "Network Load Balancing" component, but in this case, in unicast mode, the
NLB Manager on this computer cannot be used to control other nodes, and the nodes
themselves cannot exchange information with each other.

Now start the NLB Manager. There are no clusters yet, so the window contains no
information. Select "New" from the "Cluster" menu and start filling in the fields of the
"Cluster Parameters" window. In the cluster IP configuration section, enter the cluster's
virtual IP address, subnet mask and full name. The virtual MAC address is set
automatically. Just below, select the cluster mode: unicast or multicast. Pay attention
to the "Allow remote control" checkbox - Microsoft documentation strongly recommends
leaving it disabled to avoid security problems. Instead, use the Manager or other
remote-control tools, for example Windows Management Instrumentation (WMI). If you do
decide to use it, take all the proper measures to protect the network, additionally
closing UDP ports 1717 and 2504 on the firewall.

After filling in all the fields, click "Next". In the "Cluster IP Addresses" window, add
any additional virtual IP addresses that this cluster will use. In the next window,
"Port Rules", you can set load balancing for one port or a group of ports, for all or for
selected IP addresses, over the UDP or TCP protocol, and also block access to certain
cluster ports (which does not replace a firewall). By default the cluster handles
requests on all ports (0-65535); it is better to limit this list to only those that are
really needed. Although, if you don't feel like fiddling with it, you can leave
everything as it is. By the way, in Win2K all traffic directed to the cluster was by
default processed only by the node with the highest priority; the other nodes stepped in
only when the main one failed.

For example, for IIS you only need to enable ports 80 (HTTP) and 443 (HTTPS). You can
also arrange things so that, for example, secure connections are processed only by
certain servers on which the certificate is installed. To add a new rule, click "Add";
in the dialog box that appears, enter the node's IP address, or leave the "All" checkbox
if the rule applies to everyone. In the port range fields "from" and "to", set the same
value - 80. The key field is "Filtering mode" - it determines who will process the
request. Three filtering modes are available: "Multiple hosts", "Single host" and
"Disable this port range". Selecting "Single host" means that traffic directed to the
selected IP (of a computer or of the cluster) with the specified port number will be
processed by the active node with the lowest priority value (more on that just below).
Selecting "Disable..." means that such traffic will be dropped by all cluster members.

In the "Multiple nodes" mode, you can additionally specify the option.
determining the similarity of customers to direct traffic from the specified client to
The same cluster node. Three options are possible: "No", "One" or "Class
C. " The choice of the first means that any request will be answered
knot. But it should not be used if the UDP protocol is selected in rule or
"Both". When election of other points, the similarity of customers will be determined by
Specific IP or Class S. Network Range

So, for our rule for port 80 we choose "Multiple hosts - Class C". The rule for 443 is
filled in the same way, but we use "Single host", so that the client is always answered
by the main node with the lowest priority value. If the Manager detects an incompatible
rule, a warning message is displayed and a corresponding entry is also written to the
Windows event log.
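
Once the rules are saved, it is worth checking what the node has actually applied; a
minimal check, again assuming the legacy wlbs.exe utility:

rem Dump the full local NLB configuration, including the configured port rules
wlbs display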

Next, connect to a node of the future cluster by entering its name or real IP, and choose
the interface that will be connected to the cluster network. In the "Host Parameters"
window, select the priority from the list, specify the network settings, and set the
initial state of the node (started, stopped, suspended). The priority is at the same time
a unique node identifier: the smaller the number, the higher the priority. The node with
priority 1 is the master server, which receives packets first and acts as the routing
manager.

Check box "Save the status after rebooting a computer" allows in case
Failure or rebooting this node to automatically enter it into operation. After clicking
On the "ready" in the dispatcher window will be recorded about a new cluster in which
There is one node.
The next node is also simple. Select in the Add Node menu or
"Connect to an existing", depending on which computer
Connection is performed (it is already included in the cluster or not). Then in the window
Specify the name or address of the computer, if enough rights to connect, new
The node will be connected to the cluster. First time icon in front of his name will be
differ, but when the process of convergence is completed, it will be the same as
The first computer.

Since the Manager displays the properties of a node as of the moment it was connected, to
check the current state you should select the cluster and choose "Refresh" from the
context menu. The Manager will connect to the cluster and show the updated data.

After the NLB cluster is installed, do not forget to change the DNS record so that name
resolution now points to the cluster IP.
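
If the zone lives on a Windows DNS server, the record can also be switched from the
command line; a sketch, assuming the dnscmd utility from the Windows Support Tools, where
the server, zone, host and addresses (dns1, example.local, www, 192.168.0.10/50) are
placeholders rather than values from this article:

rem Remove the old A record that pointed at the single server
dnscmd dns1 /RecordDelete example.local www A 192.168.0.10 /f

rem Add an A record pointing at the cluster's virtual IP
dnscmd dns1 /RecordAdd example.local www A 192.168.0.50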

Changing the server load

In this configuration, all servers are loaded evenly (except with the "Single host"
option). In some cases it is necessary to redistribute the load, placing most of the work
on one of the nodes (for example, the most powerful one). Once created, the rules can be
changed at the cluster level by selecting "Cluster Properties" from the context menu that
appears when you click on its name. All the settings discussed above are available here.
The "Host Properties" menu item provides slightly more options. In "Host Parameters" you
can change the priority value for the selected node. In "Port Rules" you cannot add or
delete a rule - that is available only at the cluster level. But by choosing to edit a
specific rule, you get the opportunity to adjust some settings. Thus, with the "Multiple
hosts" filtering mode set, the "Load weight" parameter becomes available, allowing the
load on a specific node to be redistributed. By default the "Equal" checkbox is checked,
but in "Load weight" you can specify a different load value for a particular node, as a
percentage of the total cluster load. If the "Single host" filtering mode is activated,
a new parameter, "Handling priority", appears in this window. Using it, you can arrange
for traffic to one port to be processed first by one cluster node, and traffic to another
port by a different node.

Journaling events

As already mentioned, the "Network load balancing" component writes all
Actions and cluster changes to Windows event log. To see them
Select "View Events - System", the NLB includes WLBS messages (from
Windows Load Balancing Service, as this service was called NT). Besides, in
The dispatcher window displays the latest messages containing error information.
and all changes in the configuration. By default, this information is not
persists. To be recorded in a file, select "Parameters -\u003e
Log parameters ", check the" Enable logging "checkbox and specify the name
file. The new file will be created in your account subdirectory in Documents
and settings.
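
To pull only the NLB entries out of the system log from the command line, something like
the following can be used; a sketch, assuming the eventquery.vbs script shipped with
Win2K3:

rem List System log events whose source is WLBS (the NLB service)
cscript %windir%\system32\eventquery.vbs /l system /fi "Source eq WLBS"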

Configuring IIS with replication

A cluster is all well and good, but without a service it makes no sense. So let's add IIS
(Internet Information Services). The IIS server is part of Win2K3, but to minimize the
possibility of attacks on the server, it is not installed by default.

IIS can be installed in two ways: through the "Control Panel" or through the server roles
management wizard. Let's consider the first. Go to "Control Panel - Add or Remove
Programs", choose "Add/Remove Windows Components". Now go to the "Application Server"
item and check everything you need under "IIS Services". By default the server's working
directory is \inetpub\wwwroot. Once installed, IIS can serve static documents.
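
As for the "with replication" part, in the simplest case the content directory can be
mirrored periodically from the node where updates are made to the other nodes; a rough
sketch, assuming robocopy from the Windows Resource Kit and a share named wwwroot$ on the
second node (both are assumptions, not something configured above):

rem Mirror the web content to the second node; /MIR also replicates deletions
robocopy C:\inetpub\wwwroot \\node2\wwwroot$ /MIR /R:2 /W:5

rem Or schedule it to run every 15 minutes
schtasks /create /sc minute /mo 15 /tn "wwwroot replication" /tr "robocopy C:\inetpub\wwwroot \\node2\wwwroot$ /MIR /R:2 /W:5"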

I built my first "cluster" of single-board computers almost immediately after the Orange Pi PC microcomputer began to gain popularity. It could be called a "cluster" only with a big stretch, because from a formal point of view it was just a local network of four boards that "saw" each other and could get online.

The device took part in the [Email Protected] project and even managed to compute something. But, unfortunately, no one flew in to pick me up from this planet.
However, in all the time spent fiddling with wires, connectors and microSD cards I learned a lot. For example, I found out that you should not trust the stated power-supply rating, that it would be nice to distribute the load in terms of consumption, and that the wire cross-section matters.

And yes, the "collective me" of the power management system had to "collect, for the simultaneous start of five single-port specimens may require the starting current of about 8-10a (5 * 2)! It is a lot, especially for BP made in the cellars of the country, where we love to order all sorts of ... Interesting gadgets.

I'll start, perhaps, with that. The task comes down to relatively simple actions: after a specified period of time, sequentially switch on 4 channels supplying 5 volts. The easiest way to implement this is an Arduino (of which every self-respecting geek has a surplus) and a wonder-board from Ali with 4 relays.

And you know, it even worked.

However, the "refrigerator-style" click when starting caused some rejection. First, when it was rushed with meals, it was necessary to put condensers, and in the second, the whole design was rather large.

So one day I simply replaced the relay block with transistor switches based on the IRL520.

This solved the interference issue, but since the MOSFET switches the "zero" (low side), I had to abandon the brass legs in the rack to avoid accidentally connecting the boards' grounds.

And so, the solution replicates perfectly, and two clusters are already working stably without any surprises. Just as planned.

But let's get back to power. Why buy power supplies for a tangible sum when there are plenty of available ATX units literally under your feet?
Moreover, they provide all the voltages (5, 12, 3.3), rudimentary self-diagnostics and the possibility of software control.

Well, I won't go on at length here - there is already an article about controlling an ATX PSU via Arduino.

Well, have all the pills arrived and the stamps been stuck on? It's time to put it all together.

There will be one head node that connects to the outside world over WiFi and gives "Internet" to the cluster. It will be powered from the ATX standby voltage.

In fact, TBNG is responsible for distributing the Internet, so, if desired, the cluster nodes can be hidden behind TOR.

Also, there will be a home-made board connected via I2C to this head node. It can switch each of the 10 worker nodes on and off. In addition, it can control three 12 V fans to cool the whole system.

The operating scenario is this: when the ATX is plugged into 220 V, the head node starts. When the system is ready for operation, it sequentially switches on all 10 nodes and the fans.
When the switch-on process is complete, the head node goes around each worker node and asks how it is feeling, what the temperature is, so to speak. If one of the racks gets hot, we increase the airflow.
Well, and on the shutdown command, each of the nodes will be carefully powered down and de-energized.

I drew the circuit board myself, so it looks terrible. However, a well-trained person took on the tracing and manufacturing, for which many thanks to him.

Here it is in the assembly process

Here is one of the first sketches of the layout of the cluster components. Made on a piece of squared paper and immortalized via Office Lens on the phone.

The whole construction is mounted on a sheet of textolite bought on occasion.

This is what the arrangement of the nodes inside looks like. Two racks of five boards each.

Here you can see the controlling Arduino. It is connected to the head Orange Pi PC via I2C through a level converter.

And here is the final (current) version.

So, all that remains is to write a few utilities in Python to conduct all this music: switch things on and off, and adjust the fan speed.

I won't bore you with the technical details - it looks like this:

#!/usr/bin/env sh

echo "Starting ATX board..."
/home/zno/i2creobus/i2catx_tool.py --start
echo "Setting initial fan values..."
/home/zno/i2creobus/i2creobus_tool.py --fan 0 --set 60
/home/zno/i2creobus/i2creobus_tool.py --fan 1 --set 60
/home/zno/i2creobus/i2creobus_tool.py --fan 2 --set 60
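
Under the hood those utilities simply talk I2C to the Arduino from the head node. A rough idea of what that looks like at the shell level with the standard i2c-tools package; the bus number (0), slave address (0x20) and register layout here are assumptions for illustration, not the author's actual protocol:

# List devices answering on I2C bus 0 (the controller should show up at its address)
i2cdetect -y 0

# Write 0x01 to a hypothetical "power on node 3" register of the controller at 0x20
i2cset -y 0 0x20 0x03 0x01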

Since we already have 10 nodes, we enlist Ansible, which will help, for example, shut down all the nodes correctly, or run a temperature monitor on each of them.

---

- hosts: workers
  roles:
    - webmon_stop
    - webmon_remove
    - webmon_install
    - webmon_start
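
Running the playbook, and the kind of ad-hoc checks mentioned above, looks roughly like this; the inventory file name and the sysfs temperature path are assumptions typical of an Armbian-based Orange Pi setup:

# Apply the playbook to the workers group
ansible-playbook -i hosts.ini site.yml

# Ad-hoc: ask every worker for its SoC temperature
ansible workers -i hosts.ini -m shell -a "cat /sys/class/thermal/thermal_zone0/temp"

# Ad-hoc: shut all workers down cleanly
ansible workers -i hosts.ini -m shell -a "shutdown -h now" --become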

I am often accused of a dismissive tone; people say it is just a local network of single-board computers (as I mentioned at the very beginning). In general, I don't care much about other people's opinions, but perhaps let's add some glamour and organize a Docker Swarm cluster.
The task is very simple and takes less than 10 minutes. Then we start a Portainer instance on the head node, and voila!
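
A minimal sketch of that step, with a placeholder address (192.168.1.10 for the head node) rather than the author's actual network:

# On the head node: initialise the swarm
docker swarm init --advertise-addr 192.168.1.10

# On each worker: join using the token printed by the previous command
docker swarm join --token <token> 192.168.1.10:2377

# Back on the head node: run Portainer as a service to get a web UI for the swarm
docker service create --name portainer --publish 9000:9000 \
  --constraint node.role==manager \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  portainer/portainer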

Now you can hang scaling tasks on it. For example, at the moment a Verium Reserve cryptocurrency miner is running in the cluster, and quite successfully. I hope the next "to the moon" will pay for the electricity it has eaten ;) Or else I'll reduce the number of nodes involved and mine something else, like TurtleCoin.

If you want a payload, you can throw Hadoop into the cluster or set up load balancing for web servers. There are plenty of ready-made images on the Internet, and enough training material. Well, and if there is no Docker image, you can always build your own.

What did this teach me? Overall, the technology "stack" is very broad. Judge for yourself: Docker, Ansible, Python, leveling up Arduino (forgive me, Lord, it should not be mentioned at night), and of course shell. As well as KiCad and working with a contractor :).

What could be done better? A lot. On the software side, it would be nice to rewrite the control utilities in Go. On the hardware side - make it more steampunkish: the lead picture at the beginning raises the bar perfectly. So there is something to work on.

Roles performed:

  • Head node - Orange Pi PC with USB WiFi.
  • Worker nodes - Orange Pi PC2 x 10.
  • Network - 100 Mbit [Email Protected]
  • Brain - an Arduino clone based on ATmega8 + a level converter.
  • Heart - an ATX power controller with a power supply.
  • Soft (soul) - Docker, Ansible, Python 3, a little shell and a little bit of laziness.
  • Time spent - priceless.

In the course of the experiments, a couple of Orange Pi PC2 boards were lost due to mixed-up power wiring (they burned very beautifully), and another PC2 lost its Ethernet (that is a separate story in which I do not understand the physics of the process).

That seems to be the whole story "from the top". If anyone finds it interesting, ask questions in the comments, and vote there as well (the plus has its own button for that). The most interesting questions will be covered in new notes.
Thank you for reading to the end.

First, decide which components and resources will be required. You need one head node, at least a dozen identical compute nodes, an Ethernet switch, a power distribution unit and a rack. Determine the wiring and cooling capacity you need, as well as the floor space. Also decide which IP addresses you want to use for the nodes, which software you will install, and which technologies will be needed to create parallel computing capacity (more on that below).

  • Although "iron" is expensive, all the programs given in the article are distributed free of charge, and most of them are open source.
  • If you want to find out how fast your supercomputer can be theoretically, use this tool:

Assemble the nodes. You will either need to assemble the compute nodes yourself or purchase pre-assembled servers.

  • Choose server enclosures that make the most rational use of space and energy and provide efficient cooling.
  • Or you can "recycle" a dozen or so used, somewhat outdated servers - even if their combined weight exceeds that of new components, you will save a decent amount. All processors, network adapters and motherboards should be identical so that the computers work well together. Of course, don't forget RAM and hard drives for each node, as well as at least one optical drive for the head node.
  • Install the servers in the rack. Start from the bottom so that the rack is not top-heavy. You will need a friend's help - assembled servers can be very heavy, and sliding them into the rails that hold them in the rack is quite difficult.

    Install the Ethernet switch next to the rack. It is worth configuring it right away: set a jumbo frame size of 9000 bytes, set the static IP address you chose in step 1, and turn off unnecessary protocols such as SMTP.

    Install the power distribution unit (PDU). Depending on the maximum load your nodes draw, you may need 220 volts for a high-performance computer.

  • When everything is installed, proceed to configuration. Linux is in effect the standard system for high-performance (HPC) clusters: not only is it ideal as an environment for scientific computing, but you also don't have to pay to license the system for hundreds or even thousands of nodes. Imagine how much it would cost to install Windows on all the nodes!

    • Start by installing the latest BIOS version for the motherboard and the manufacturer's software, which should be the same on all servers.
    • Install your preferred Linux distribution on all nodes, and a distribution with a graphical interface on the head node. Popular choices: CentOS, OpenSUSE, Scientific Linux, Red Hat and SLES.
    • The author highly recommends using Rocks Cluster Distribution. In addition to installing all the tools a cluster needs, Rocks provides an excellent mechanism for quickly "rolling out" many copies of the system onto identical servers using PXE boot and Red Hat's "Kickstart" procedure.
  • Install a message passing interface, a resource manager and the other necessary libraries. If you did not install Rocks in the previous step, you will have to install the necessary software manually to set up the parallel computing logic.

    • To begin with, you will need a portable batch system, for example the Torque Resource Manager, which lets you split up and distribute tasks across several machines.
    • Add the Maui Cluster Scheduler to Torque to complete the installation.
    • Next, you need to install a message passing interface, which is needed so that individual processes on each compute node can share data. Open MPI is the simplest option.
    • Don't forget multithreaded math libraries and compilers that will "build" your programs for distributed computing. Did I already say you should just install Rocks?
  • Connect the computers into a network. The head node sends computing tasks to the worker nodes, which in turn must return the results and also send messages to each other. And the faster all this happens, the better.

    • Use a private Ethernet network to connect all the nodes into the cluster.
    • The head node can also act as an NFS, PXE, DHCP, TFTP and NTP server on this Ethernet network.
    • You must separate this network from the public one to make sure its packets do not overlap with other traffic in the LAN.
  • Test the cluster. The last thing you should do before giving users access to the computing capacity is test its performance. The HPL (High Performance Linpack) benchmark is a popular option for measuring the speed of computation in a cluster. You need to compile it from source with the highest degree of optimization your compiler allows for the architecture you have chosen.

    • You should, of course, compile with all the optimization settings available for the platform you have chosen. For example, with AMD CPUs, compile with Open64 and the appropriate optimization level (a rough build-and-run sketch follows this list).
    • Compare the results with Top500.org to match your cluster against the 500 fastest supercomputers in the world!
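
What the build and the test run might look like in the simplest case is sketched below; the architecture makefile name (Make.Linux_PII_CBLAS is one of the stock examples shipped with HPL), the node and core counts and the job parameters are assumptions, not measured values:

# Build HPL against the MPI and BLAS libraries installed earlier
cd hpl-2.3
cp setup/Make.Linux_PII_CBLAS Make.linux
make arch=linux

# A minimal Torque/PBS job that runs HPL on 2 nodes with 4 cores each
cat > hpl.pbs << 'EOF'
#PBS -N hpl-test
#PBS -l nodes=2:ppn=4
cd $PBS_O_WORKDIR/bin/linux
mpirun -np 8 ./xhpl
EOF
qsub hpl.pbs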

    Creating a cluster based on Windows 2000/2003. Step by step

    A cluster is a group of two or more servers working together to ensure uninterrupted operation of a set of applications or services, and perceived by the client as a single entity. Cluster nodes are interconnected using network hardware, shared resources and server software.

    Microsoft Windows 2000/2003 supports two clustering technologies: load balancing clusters and server clusters.

    In the first case (load balancing clusters), the Network Load Balancing service gives services and applications a high level of reliability and scalability by combining up to 32 servers into a single cluster. Requests from clients are distributed among the cluster nodes in a transparent way. If a node fails, the cluster automatically changes its configuration and switches the client to any of the available nodes. This cluster configuration mode is also called active-active mode, when one application runs on several nodes.

    A server cluster distributes its load among the servers of the cluster, with each server carrying its own load. If a node in the cluster fails, applications and services configured to run in the cluster are transparently restarted on one of the free nodes. Server clusters use shared disks to exchange data inside the cluster and to provide transparent access to the cluster's applications and services. They require special hardware, but this technology provides a very high level of reliability, since the cluster itself has no single point of failure. This cluster configuration mode is also called active-passive mode. An application in the cluster runs on one node, with the shared data located on external storage.

    The cluster approach to the organization of the internal network gives the following advantages:

    High availability: if a service or application fails on a cluster node configured to work in a cluster, the cluster software restarts this application on another node; users experience only a short delay in some operation, or do not notice the server failure at all. Scalability: for applications running in a cluster, adding servers to the cluster means an increase in its capabilities - fault tolerance, load distribution, and so on. Manageability: administrators can use a single interface to manage applications and services, set the reaction to a cluster node failure, distribute the load among the cluster nodes and remove the load from nodes for maintenance work.

    In this article I will try to pull together my experience of building clustered Windows-based systems and give a small step-by-step guide to creating a two-node server cluster with shared data storage.

    Software Requirements

    • Microsoft Windows 2000 Advanced (Datacenter) Server or Microsoft Windows 2003 Server Enterprise Edition installed on all servers of the cluster.
    • An installed DNS service. Let me explain a little. If you build a cluster based on two domain controllers, it is much more convenient to use the DNS service, which you install anyway when creating Active Directory. If you create a cluster based on two servers that are members of a Windows NT domain, you will have to use either the WINS service or enter the name-to-address mappings of the machines into the hosts file.
    • Terminal Services for remote server management. Not required, but if Terminal Services are present, it is convenient to manage the servers from your workplace.

    Hardware requirements

    • It is better to select the hardware for the cluster nodes based on the Cluster Service Hardware Compatibility List (HCL). According to Microsoft's recommendations, the hardware must be tested for compatibility with Cluster Services.
    • Accordingly, you will need two servers, each with two network adapters, and a SCSI adapter with an external interface for connecting an external disk array.
    • An external array with two external interfaces. Each of the cluster nodes is connected to one of the interfaces.

    Comment: to create a two-node cluster it is not at all necessary to have two absolutely identical servers. After a failure of the first server you will have a little time to analyze and restore the operation of the main node, while the second node keeps the system as a whole running. However, this does not mean that the second server will stand idle. Both cluster nodes can calmly mind their own business and solve different tasks. But we can configure a certain critical resource to work in the cluster, thereby increasing its fault tolerance.

    Network Settings Requirements

    • A unique NetBIOS name for the cluster.
    • Five unique static IP addresses: two for the network adapters on the cluster (private) network, two for the network adapters on the public network, and one for the cluster itself.
    • A domain account for the Cluster Service.
    • All cluster nodes must be either member servers in the domain or domain controllers.
    • Each server must have two network adapters: one for connecting to the public network, the second for data exchange between the cluster nodes (the private network).

    Comment: according to Microsoft's recommendations your server should have two network adapters, one for the public network, the second for data exchange inside the cluster. Is it possible to build a cluster on a single interface? Probably yes, but I have not tried it.

    Cluster installation

    When designing the cluster, you must understand that by using a single physical network both for cluster traffic and for the local network, you increase the probability of failure of the whole system. Therefore it is highly desirable to use one subnet, set apart as a separate physical network element, for cluster data exchange, and another subnet for the local network. This increases the reliability of the whole system.

    In the case of a two-node cluster, one switch is used for the public network. The two cluster servers can be connected to each other directly with a crossover cable, as shown in the figure.

    Installation of a two-node cluster can be divided into 5 steps:

    • Installing and configuring nodes in a cluster.
    • Installing and configuring a shared resource.
    • Verifying the disk configuration.
    • Configuring the first cluster node.
    • Configuring the second node in the cluster.

    This step-by-step guide will allow you to avoid errors during installation and save a lot of time. So, let's begin.

    Installing and configuring nodes

    Let's simplify the task a little. Since all cluster nodes must be either domain members or domain controllers, we will make the 1st cluster node the root holder of the AD (Active Directory) directory, and the DNS service will run on it. The 2nd cluster node will be a full domain controller.

    I am ready to skip the installation of the operating system, assuming you should not have any problems with it. But I do want to clarify the configuration of the network devices.

    Network settings

    Before starting the installation of the cluster and Active Directory, you must complete the network settings, which I would divide into 4 stages. For name resolution in the network it is desirable to have a DNS server with already existing records about the cluster servers.

    Each server has two network cards. One network card will serve for data exchange between the cluster nodes, the second will serve clients in our network. Accordingly, we will call the first one Private Cluster Connection, and the second one Public Cluster Connection.

    The network adapter settings are identical for both servers. Accordingly, I will show how to configure one network adapter and give a table with the network settings of all 4 network adapters on both cluster nodes. To configure a network adapter, perform the following steps:

    • My Network Places → Properties
    • Private Cluster Connection → Properties → Configure → Advanced

      This item needs explanation. The point is that, following Microsoft's strong recommendations, the speed of all network adapters of the cluster nodes must be set manually to the optimal value, as shown in the following figure.

    • Internet Protocol (TCP/IP) → Properties → Use the following IP: 192.168.30.1

      (For the second node, use the address 192.168.30.2.) Enter the subnet mask 255.255.255.252. As the DNS server address for both nodes, use the address 192.168.100.1.

    • Additionally, on the Advanced → WINS tab, select Disable NetBIOS over TCP/IP. This step is omitted for the network adapters of the public network.
    • Do the same with the network card for the LAN, Public Cluster Connection. Use the addresses given in the table. The only difference in the configuration of the two network cards is that the Public Cluster Connection does not need NetBIOS over TCP/IP turned off on the WINS tab.

    To configure all the network adapters on the cluster nodes, use the following table:

    Node    Network name                  IP address       Mask               DNS server
    1       Public Cluster Connection     192.168.100.1    255.255.255.0      192.168.100.1
    1       Private Cluster Connection    192.168.30.1     255.255.255.252    192.168.100.1
    2       Public Cluster Connection     192.168.100.2    255.255.255.0      192.168.100.1
    2       Private Cluster Connection    192.168.30.2     255.255.255.252    192.168.100.1
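
    The same addressing can also be applied from the command line; a sketch for node 1, assuming the connections have already been renamed to the names used above (netsh is built into Win2K/2K3):

    rem Public interface of node 1
    netsh interface ip set address name="Public Cluster Connection" static 192.168.100.1 255.255.255.0
    netsh interface ip set dns name="Public Cluster Connection" static 192.168.100.1

    rem Private (cluster) interface of node 1
    netsh interface ip set address name="Private Cluster Connection" static 192.168.30.1 255.255.255.252
    netsh interface ip set dns name="Private Cluster Connection" static 192.168.100.1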

    Installing Active Directory.

    Since my article does not aim to describe the installation of Active Directory, I will omit this item; quite a lot of recommendations and books have been written about it. Choose a domain name, such as mycompany.ru, install Active Directory on the first node, and add the second node to the domain as a domain controller. When you have done all that, check the configuration of the servers and of Active Directory.

    Setting up the Cluster Service user account

    • Start → Programs → Administrative Tools → Active Directory Users and Computers
    • Add a new user, for example ClusterService.
    • Check the boxes: User Cannot Change Password and Password Never Expires.
    • Also add this user to the Administrators group and give him the Log on as a service right (the rights are assigned in the Local Security Policy and the Domain Controller Security Policy).
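
    The same account can also be prepared from the command line; a sketch with a placeholder password, assuming the ntrights.exe utility from the Windows Resource Kit for the last step:

    rem Create the domain account and add it to the local Administrators group
    net user ClusterService P@ssw0rd /add /domain
    net localgroup Administrators ClusterService /add

    rem Grant the "Log on as a service" right (Resource Kit tool)
    ntrights +r SeServiceLogonRight -u ClusterService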

    Setting up an external data array

    When configuring an external data array in a cluster, remember that before installing Cluster Service on the nodes you must first configure the disks on the external array, and only then install the cluster service, first on the first node and only then on the second. If the installation order is violated, you will fail and will not reach the goal. Can it be fixed? Probably yes. When an error appears, you will have time to correct the settings. But Microsoft is such a mysterious thing that you never know which rake you are going to step on. It is easier to have step-by-step instructions in front of you and not forget to press the buttons. Step by step, configuring the external array looks like this:

    1. Both servers must be turned off; the external array is turned on and connected to both servers.
    2. Turn on the first server. We get access to the disk array.
    3. Check that the external disk array was created as Basic. If this is not the case, convert the disk using the Revert to Basic Disk option.
    4. Create a small partition on the external disk through Computer Management → Disk Management. Per Microsoft's recommendations it must be at least 50 MB; I recommend creating a partition of 500 MB or a bit more. That is quite enough to hold the cluster data. The partition must be formatted in NTFS.
    5. On both cluster nodes this partition will be assigned the same letter, for example Q. Accordingly, when creating the partition on the first server, select the item Assign the following drive letter - Q.
    6. You can partition the rest of the disk as you wish. Of course, it is highly desirable to use the NTFS file system. For example, when configuring the DNS and WINS services, their main databases will be moved to a shared disk (not the system volume Q, but the second one you created). And for security reasons it will be more convenient for you to use NTFS volumes.
    7. Close Disk Management and check access to the newly created partition. For example, you can create a text file test.txt on it, write to it and delete it. If everything went fine, we are done with the configuration of the external array on the first node.
    8. Now turn off the first server. The external array must remain turned on. Turn on the second server and check access to the created partition. Also check that the drive letter assigned to the first partition is identical to the one we chose, that is Q.

    This completes the configuration of the external array.

    Cluster Service Software Installation

    Configuration of the first cluster node

    Before starting the installation of the Cluster Service software, all cluster nodes must be turned off and all external arrays must be turned on. Let us move on to the configuration of the first node. The external array is on, the first server is on. The entire installation process takes place using the Cluster Service Configuration Wizard:


    Configuration of the second cluster node

    To install and configure the second cluster node, the first node must be turned on and all network drives must be enabled. The procedure for setting up the second node very much resembles the one I described above, with minor changes. For this, use the following instructions:

    1. In the Create or Join a Cluster dialog box, select The second or next node in the cluster and click Next.
    2. Enter the cluster name we set earlier (in the example it is MyCluster) and click Next.
    3. After the second node connects to the cluster, the Cluster Service Configuration Wizard automatically takes all settings from the main node. To start the Cluster Service, use the account name we created earlier.
    4. Enter the account password and click Next.
    5. In the next dialog box, click Finish to complete the installation.
    6. The Cluster Service will be started on the second node.
    7. Close the Add/Remove Programs window.

    To install additional cluster nodes, use the same instructions.
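
    Once both nodes are in the cluster, its state can also be checked from the command line; a quick sketch, assuming the cluster.exe tool installed together with the Cluster Service (MyCluster is the example name used above):

    rem List the nodes and their status (Up / Down / Paused)
    cluster MyCluster node /status

    rem List the resource groups and which node currently owns them
    cluster MyCluster group /status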

    Postscript, thanks

    So that you do not get confused by all the stages of the cluster installation, here is a small table reflecting all the main stages.

    Step    Node 1    Node 2    External array