Many projects rely on Apache Zookeeper for cluster coordination, and Apache NiFi is no different. But what makes Apache NiFi different is that it comes with an embedded Zookeeper, so you don’t need to set up a dedicated Zookeeper cluster if you are just testing or don’t have any other system that needs Zookeeper. Sooner or later, however, you will feel the limitations of that approach and decide that now is the time to invest in a dedicated Zookeeper cluster, and that it will be worth the effort and cost.
To migrate NiFi from the embedded Zookeeper to an external one, NiFi provides a script in its toolkit called `zk-migrator`. It exports NiFi’s data from Zookeeper as a JSON file and imports it into the new cluster. One major downside to this approach is that it requires stopping all the flows on your NiFi cluster. But what if we don’t want to, or can’t, stop our flows?
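For reference, a typical run with the toolkit looks roughly like this; the hostnames, the `/nifi` root path, and the file path are placeholders, so check the NiFi Toolkit Guide for the exact options in your version:

```sh
# Export NiFi's data from the old (embedded) Zookeeper into a JSON file
./bin/zk-migrator.sh -r -z old-node1:2181/nifi -f /tmp/zk-source-data.json

# With the flows stopped, import the exported data into the new ensemble
./bin/zk-migrator.sh -s -z new-node1:2181/nifi -f /tmp/zk-source-data.json
```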
In this article I will discuss two ways to move from the embedded Zookeeper to an external one without stopping the NiFi flows: one for when the external Zookeeper runs on the same nodes as NiFi (each NiFi node will run an external Zookeeper instead of an embedded one), and one for when Zookeeper moves to a totally new set of nodes. In both examples the NiFi cluster is a 3-node cluster, the embedded Zookeeper runs on all 3 nodes, and the new Zookeeper cluster is targeted to also be a 3-node cluster.
Example 1: Move from embedded Zookeeper to external Zookeeper on the same node
On a 3-node Zookeeper cluster, at least 2 nodes must be online to keep the cluster available, since Zookeeper needs a majority quorum. We will use this mechanism to our advantage and swap the nodes out one at a time.
- Disconnect a NiFi node from the GUI; you may also need to offload it
- Stop the NiFi process on that node (`sh nifi.sh stop`)
- Edit the `nifi.properties` file and set `nifi.state.management.embedded.zookeeper.start` to `false`. This tells the node not to start the embedded Zookeeper
- Copy the `zookeeper.properties` file in your `$NIFI_HOME/conf` directory to your `$ZOOKEEPER_HOME/conf` directory. We will set up the external Zookeeper with the same configuration as the embedded one
- The embedded Zookeeper usually sets its `dataDir` to `$NIFI_HOME/state/zookeeper`. If you are going to separate Zookeeper from NiFi, it is normally a good idea to move that directory and its contents to a new place. Whether you decide to move it or not, make sure that the `zookeeper.properties` the external Zookeeper will use contains the correct full path (see the configuration sketch after this list)
- Start the external Zookeeper with the configuration file that you copied
- Wait a moment until the new Zookeeper node joins and syncs with the cluster, then start the NiFi node
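To make these edits concrete, here is a rough sketch of the two files on one node; hostnames, ports, and paths are illustrative and will differ in your environment.

```properties
# nifi.properties: stop starting the embedded Zookeeper.
# The connect string can stay exactly as it was, since the external
# Zookeeper runs on the same hosts and ports as the embedded one did.
nifi.state.management.embedded.zookeeper.start=false
nifi.zookeeper.connect.string=nifi-node1:2181,nifi-node2:2181,nifi-node3:2181
```

```properties
# zookeeper.properties (copied to $ZOOKEEPER_HOME/conf): the same
# ensemble as the embedded setup, with dataDir pointing at the moved
# (or original) state directory.
clientPort=2181
dataDir=/var/lib/zookeeper
tickTime=2000
initLimit=10
syncLimit=5
server.1=nifi-node1:2888:3888
server.2=nifi-node2:2888:3888
server.3=nifi-node3:2888:3888
```

One detail worth remembering: `dataDir` must contain the node’s `myid` file. The embedded Zookeeper already has one under `$NIFI_HOME/state/zookeeper`, so move it along with the rest of the directory.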
And that’s it. After you check the logs and make sure that this node’s NiFi and Zookeeper are working as intended, move on to the next node in your cluster and repeat the same steps.
Example 2: Move from embedded Zookeeper to external Zookeeper on new nodes
The last example was fairly easy, because we replaced the embedded Zookeeper with an external Zookeeper on the same nodes with the same configuration; this made the old nodes connect to the new Zookeeper automatically, since it runs on the same IPs with the same ports. This example takes it up a notch by introducing new nodes to the mix. Shout out to Mike, as his guide helped a lot.
To be honest, this example is not entirely free of downtime, but the downtime is minimal, as you will see. Also, after reading this you might think that it is not worth the trouble and you could just take the NiFi cluster offline for the migration process; this might be true, but I am midway through the article, so I might as well continue.
As things will get very complicated, let us label our nodes. Nodes A, B and C represent the old nodes, which contain the embedded Zookeeper. Nodes X, Y and Z represent the new nodes running just Zookeeper. When I shut down an old node, for example C, I implicitly also mean to set `nifi.state.management.embedded.zookeeper.start` to `false` and `nifi.zookeeper.connect.string` to the new nodes. And when I say a node points to specific nodes, I mean to change the servers in its `zookeeper.properties` to only those servers (see the sketch below).
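As an illustration, “point node X to A,B,C,X,Y” would mean a server list roughly like the following in X’s `zookeeper.properties` (hostnames are placeholders, and each `server.N` id must match the `myid` file in that node’s `dataDir`):

```properties
# zookeeper.properties on node X for the first step: the ensemble
# spans the three old nodes (A,B,C) and the two new ones (X,Y)
server.1=nodeA:2888:3888
server.2=nodeB:2888:3888
server.3=nodeC:2888:3888
server.4=nodeX:2888:3888
server.5=nodeY:2888:3888
```

Now let’s start the process, assuming that the cluster leader is node B: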
- Start nodes X and Y, pointing them to A,B,C,X,Y
- Start node Z, pointing to A,B,C,X,Y,Z
- Restart node C to point to B,C,X,Y,Z
- Shut down node A
- Restart nodes X,Y,Z pointing to B,C,X,Y,Z
- Shut down node B
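After shutting down B, you need to find out which node took over as leader. One way to check, assuming `nc` is available and the `srvr` four-letter command is whitelisted (it is by default in recent Zookeeper releases):

```sh
# Ask each remaining Zookeeper node for its role; the one that
# reports "Mode: leader" is the current cluster leader
for host in nodeC nodeX nodeY nodeZ; do
  echo -n "$host: "
  echo srvr | nc "$host" 2181 | grep Mode
done
```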
Now, when you shut down the cluster leader, you will have one of two scenarios: either a new node becomes the cluster leader (in this case let it be node Z), or node C becomes the cluster leader. Only if C is the cluster leader will you have to face minor downtime. Let’s explore each scenario separately.
If Z is the cluster leader:
- Restart X,Y to point to X,Y,Z
- Shut down C
- Restart Z to point to X,Y,Z
If C is the cluster leader:
- Restart X,Y,Z to point to C,X,Y,Z
- Shut down C
- You will have to restart X,Y,Z to remove C from the ensemble: the current configuration still lists 4 servers and needs 3 of them for a quorum, so with C already gone the cluster cannot tolerate any further failure. Unfortunately, with each restart the cluster drops below quorum and is unavailable until the restarted node is running again, hence some very minor downtime
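Whichever scenario you end up in, the final `zookeeper.properties` on X, Y and Z lists only the three new servers; keeping the same server ids as before avoids having to touch each node’s `myid` file:

```properties
# final ensemble after the migration: only the new nodes remain
server.4=nodeX:2888:3888
server.5=nodeY:2888:3888
server.6=nodeZ:2888:3888
```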
So, there you have it. Not the easiest way, but if you have to keep your NiFi cluster online, it’s your best hope. If you need to move to a 5-node external Zookeeper cluster instead of 3, I would suggest moving to 3 first and then scaling up to 5.