The following page and subsequent installation guide section is for TeamConnect 5.2. For a copy of the TeamConnect 5.0 guide (applicable to 5.0.x releases), please download this PDF from the online help site.
The following configuration details cover a basic, functional installation of Elasticsearch. Some cases may require more robust configurations; users should contact Mitratech Support if unable to proceed through the following configuration.
The configuration file is the same for both Windows and Linux.
Many of these fields can be filled in through the installer; however, viewing the configuration file in a text editor provides better visibility and access to custom configuration options.
1. Open the elasticsearch.yml file located in elasticsearch-5.3.0/config. This is the configuration file for Elasticsearch.
To access the configuration file on Linux, use a text editor such as vim or nano. For Windows users, simply open the config file with your text editor of choice.
The following properties must be uncommented and set:
cluster.name: Example Cluster
Because multicast is disabled, you must provide the entry points into the cluster by specifying the server locations here. The example below shows a configuration with two nodes. If you have only one server, list just that server; do not duplicate it or include a placeholder "hostname2:port" entry.
discovery.zen.ping.unicast.hosts: ["hostname1:port", "hostname2:port"]
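Taken together, a minimal set of uncommented required properties might look like the following sketch. The cluster name, hostnames, and ports are placeholders; substitute your own values:

```yaml
# Name shared by every node that should join this cluster
cluster.name: Example Cluster

# Entry points into the cluster; with a single server, list only that host
discovery.zen.ping.unicast.hosts: ["hostname1:9300", "hostname2:9300"]
```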
The following properties are commented out and set to default values (defaults listed in the table below). To enter custom values for these properties, uncomment them and replace the default value.
Property | Default Value | Recommendation, if available
---|---|---
network.host | 0.0.0.0 | Recommended: set this to the IP address where Elasticsearch is/will be running
http.port | 9200 |
transport.tcp.port | 9300 |
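If you choose to override the defaults above, the uncommented entries might look like this sketch (the IP address shown is a placeholder for your own server's address):

```yaml
# Bind Elasticsearch to the address where it is running
network.host: 192.168.1.10

# HTTP API port (default 9200)
http.port: 9200

# Inter-node transport port (default 9300)
transport.tcp.port: 9300
```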
Note: Elasticsearch may refuse to start on your Linux host if all of its bootstrap checks are not met. Although this workaround is not recommended, users can bypass the system call filter check by adding the following line to the config/elasticsearch.yml file:
bootstrap.system_call_filter: false
The following properties are optional, but may be useful for instances with multiple nodes:
This is the name of this specific node. If it is not set, Elasticsearch generates a name automatically.
node.name: Node1
This allows the node to be master eligible. You will need to manually add this property in if desired. For further detail, please see the Master Node sections below.
node.master: true
This allows the node to store data. The default is true.
node.data: true
Set this to true if the server is in a Linux environment. It locks the memory for Elasticsearch so that the JVM does not start swapping.
bootstrap.memory_lock: true
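For a multi-node cluster, the optional properties above might be combined as in the following sketch; the node name is an arbitrary placeholder:

```yaml
node.name: Node1             # human-readable name for this node
node.master: true            # eligible to be elected master
node.data: true              # stores and indexes data (the default)
bootstrap.memory_lock: true  # Linux only: prevents the JVM from swapping
```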
Master nodes are nodes that are in charge of maintaining the state of the cluster. All nodes within the cluster report to the master node.
There can be only one Master Node at a time, but there can be multiple Master Eligible Nodes that can take its place should something go wrong.
There is a known issue with having more than one Master Eligible Node, called split brain. The scenario plays out as follows:
There are 2 Master Eligible Nodes in the cluster.
A node loses communication (does not crash).
The lost node now believes it is in a cluster with no Master, so it elects itself as Master.
The communication is regained between the nodes, and there are now 2 Master Nodes.
Data is sent to one node for indexing, while search requests are sent to another node that does not hold the recently indexed information. This causes data corruption.
In order to remedy this, Elasticsearch has a setting called discovery.zen.minimum_master_nodes. This allows you to set the minimum number of Master Eligible Nodes that need to be present for a Master Node to be elected. The idea is that if you have 3 Master Eligible Nodes, you can set this setting to "2". If one node gets lost, the cluster will still be up and running because it has 2 Master Eligible Nodes. The one node that lost communication will try to elect itself as master but won't be able to because it needs at least one more Master Eligible Node in the cluster to become Master.
A general rule of thumb is to set this to (number of master-eligible nodes / 2) + 1, using integer division. For example, with 3 Master Eligible Nodes: (3 / 2) + 1 = 2.
This setting cannot help if you have only 2 Master Eligible Nodes in the cluster: setting it to 2 means that if one node goes down, the entire cluster is inoperable, while setting it to 1 does not protect against split brain.
Dedicated Master Nodes
If the cluster becomes too large, it becomes difficult for a combined data/master node to maintain the state of the cluster and perform the regular work of a data node. In these cases, it becomes useful to have Dedicated Master Nodes.
A Dedicated Master Node is a node that has node.data: false & node.master: true. Since a master node is only in charge of maintaining the state of the cluster, it is fairly lightweight; thus, it can be allocated less memory than a normal node. This reduces the risk of the Master Node crashing and making the cluster inoperable.
Because there is already a Dedicated Master Node, other nodes in the cluster can also be relieved of their burden as Master Eligible Nodes (i.e. node.data: true & node.master: false).
A good configuration for larger clusters is to have the proper number of Master Eligible Nodes that are Dedicated Masters, with an equal (or greater) number of data nodes underneath them, and with the Master Eligible Nodes serving as the entry point into the cluster (discovery). An example configuration would be:
•3 Master Eligible Nodes that are Dedicated Masters with discovery.zen.ping.unicast.hosts pointing to them.
•6 Data nodes.
•discovery.zen.minimum_master_nodes: 2
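As a sketch of the example above, the elasticsearch.yml for a Dedicated Master Node and for a data node could differ as follows. The hostnames are placeholders for your own master-eligible servers:

```yaml
# --- Dedicated Master Node (one of three) ---
cluster.name: Example Cluster
node.master: true    # eligible to be elected master
node.data: false     # does not store data
discovery.zen.ping.unicast.hosts: ["master1:9300", "master2:9300", "master3:9300"]
discovery.zen.minimum_master_nodes: 2
```

```yaml
# --- Data node (one of six) ---
cluster.name: Example Cluster
node.master: false   # relieved of master duties
node.data: true      # stores and indexes data
discovery.zen.ping.unicast.hosts: ["master1:9300", "master2:9300", "master3:9300"]
discovery.zen.minimum_master_nodes: 2
```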