OpenSpecimen can be deployed in a clustered setup to provide high availability and high-throughput processing of requests. Typically, OpenSpecimen nodes are deployed behind a load balancer/proxy, which routes user requests to an appropriate node based on load, availability, etc.
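For illustration, the load balancer in front of the cluster can be as simple as a reverse proxy. Below is a minimal nginx sketch, not an OpenSpecimen-mandated configuration; the upstream name, listen port, and node addresses (taken from the example cluster config later in this page) are assumptions to adapt to your environment:

```nginx
# Hypothetical nginx reverse proxy for a two-node OpenSpecimen cluster
upstream openspecimen_cluster {
    least_conn;               # route each request to the least loaded node
    server 10.0.1.1:8080;
    server 10.0.1.2:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://openspecimen_cluster;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Any proxy (HAProxy, AWS ALB, etc.) works equally well; the only requirement is that it can reach every node's HTTP port.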
The configuration of an OpenSpecimen cluster involves letting each node know about the presence of the other nodes in the cluster. This is required to propagate localised knowledge of actions, like updates to cached metadata, to all the cluster nodes. This ensures all the cluster nodes have the same view of the metadata and data, and behave in exactly the same deterministic manner. As a consequence, requests can be routed to any node using any strategy that fits the user traffic pattern, which helps in achieving scalability.
As a first step, every node in the cluster should be given a unique name to identify it within the cluster. The name should not contain any whitespace characters. Preferably, the node name should be assigned only once and not changed thereafter.
Edit the openspecimen.properties file located in the $TOMCAT_HOME/conf directory and add a property node.name as below:
# VM1 - /usr/local/openspecimen/tomcat/conf/openspecimen.properties
node.name=lion

# VM2 - /usr/local/openspecimen/tomcat/conf/openspecimen.properties
node.name=panther
Restart OpenSpecimen
Ensure the log files are created using the os.{node.name}.log pattern, e.g. os.lion.log, os.panther.log.
All the cluster nodes should share the same data directory. There should be only one data directory, shared across all the nodes in the cluster using a file sharing mechanism such as SAMBA or NFS. All the nodes should have read/write access to the data directory.
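As a sketch, the shared data directory could be an NFS export mounted at the same path on every node. The NFS server address and the paths below are assumptions for illustration; substitute your own:

```shell
# On each node, mount the export (hypothetical server 10.0.1.10
# exporting /exports/os-data; the local mount point is also an assumption):
sudo mount -t nfs 10.0.1.10:/exports/os-data /usr/local/openspecimen/data

# Or persist the mount across reboots via an /etc/fstab entry:
# 10.0.1.10:/exports/os-data  /usr/local/openspecimen/data  nfs  rw,hard  0 0
```

Whatever mechanism is used, verify from each node that a file written by one node is immediately visible to the others.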
Navigate to Home → Settings → Search for Cluster
Upload a JSON file like the one below:
{
  "notifTimeout": 60,
  "notifErrorRcpts": ["john.doe@krishagni.com", "jane.doe@krishagni.com"],
  "secret": "TopSecret!@3",
  "nodes": [
    { "name": "lion", "url": "http://10.0.1.1:8080/openspecimen/" },
    { "name": "panther", "url": "http://10.0.1.2:8080/openspecimen/" }
  ]
}
Attribute | Description
---|---
nodes | The list of OpenSpecimen nodes in the cluster.
secret | A secret known only to the OpenSpecimen nodes. Used for trusted communication between the nodes in the cluster.
notifTimeout | Max. amount of time, in seconds, that a node waits to receive the acknowledgement from the other nodes in the cluster for its broadcast event.
notifErrorRcpts | List of email IDs to whom the cluster error notifications are sent.
The preferred way to start up the nodes in a cluster is to start one node at a time, ensure the node is up and functional, and then move on to the next node in the cluster. Do not attempt to start all nodes of the cluster at the same time or concurrently.
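The one-node-at-a-time startup above can be sketched as a small shell helper that polls each node until it responds before moving on. The node addresses, Tomcat path, and attempt counts are assumptions carried over from the earlier examples, not part of OpenSpecimen itself:

```shell
#!/bin/sh
# wait_for N CMD...: run CMD until it succeeds, at most N attempts,
# sleeping between attempts. Returns non-zero if N attempts all fail.
wait_for() {
  attempts=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$attempts" ] && return 1
    sleep 5
  done
}

# Rolling start (commented out; the ssh targets and paths are hypothetical):
# for node in 10.0.1.1 10.0.1.2; do
#   ssh "$node" /usr/local/openspecimen/tomcat/bin/startup.sh
#   wait_for 60 curl -sf "http://$node:8080/openspecimen/" >/dev/null \
#     || { echo "node $node did not come up" >&2; exit 1; }
# done

# Trivial demonstration: a command that succeeds immediately needs one attempt.
wait_for 3 true
```

The same helper works for the upgrade procedure below: bring every node down first, then upgrade and restart them one at a time, waiting for each to answer before touching the next.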
A small amount of downtime is required for a smooth upgrade of OpenSpecimen. When upgrading, bring down all the nodes. Upgrade one node at a time, ensure the node is up and functional, and then move on to upgrade the next node in the cluster. Do not attempt to run multiple versions of OpenSpecimen against the same database schema; otherwise, the behaviour is unspecified.