Recently we had to migrate all the data from one Redis server to another. There are a few ways this can be done. However, we needed to migrate the data without downtime or any interruptions to the service.
We decided the best course of action was a three-step process:
- Update the service to send all write commands (SET, DEL, etc.) to both servers. Read commands (GET, etc.) would continue to go to the first server.
- This would suffice for all the keys with a TTL, but would not guarantee that all of the non-expirable data made it over. So we then ran a step to iterate through all of the keys and DUMP/RESTORE them into the new server. This would certainly recopy keys that already existed, but that was fine with us.
- Once the new Redis server looked good, we could make the appropriate changes to the application to point solely to the new Redis server.
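In shell terms, the dual-write phase of step 1 looks something like the sketch below. The helper names (dual_write, read_old) are made up for illustration, and echo stands in for redis-cli so the routing is visible without a live server; in our service this routing lived behind the Go client interface.

```shell
# echo stands in for redis-cli here; with real servers these would be
# something like OLD="redis-cli -h old.example.com" (hypothetical hosts).
OLD="echo old:"
NEW="echo new:"

# Write commands (SET, DEL, ...) are fanned out to both servers.
dual_write() { $OLD "$@"; $NEW "$@"; }

# Read commands (GET, ...) still go only to the old server.
read_old() { $OLD "$@"; }

dual_write SET session:42 alice   # sent to both servers
read_old GET session:42           # sent to the old server only
```

Because the fan-out lives in one place, cutting over in step 3 is just a matter of pointing both helpers at the new server.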
Now, steps 1 and 3 were actually quite easy because we already had our own Redis client interface, behind which it was easy to route reads and writes to separate servers without any change in application logic. Thanks, Go interfaces!
Step 2 is the bulk of the work and has a few gotchas:
- The DUMP command returns the key's value in a Redis-specific serialisation format, which lets you load any key into the same or a different server. However, this is a binary format, and care needs to be taken to handle it correctly in Bash. More details about this later.
- DUMP serialises the value but not the expiry, so you have to remember to (separately) request the TTL so that it can be included in the RESTORE.
- RESTORE expects the TTL in milliseconds, so use the PTTL command rather than TTL (which returns whole seconds; if you do use TTL, multiply the result by 1000). In our case the application did not require millisecond precision, so either command would have worked as a source.
- DUMP and RESTORE both only work on a single key, so we have to scan all keys in Redis and issue an individual DUMP, a TTL lookup and a RESTORE for each one. Fortunately redis-cli provides a way to do non-blocking key scanning, as well as specifying a pattern. More details about this later.
- Both TTL and PTTL return special negative values rather than an expiry in some cases: -1 if the key exists but has no expiry set, and -2 (on Redis 2.8 and later) if the key does not exist at all. RESTORE, however, only accepts 0 (meaning no expiry) or a positive duration.
- One other annoying thing to note is that RESTORE will not let you restore a key that already exists; it returns a BUSYKEY error instead. RESTORE does accept an extra REPLACE argument to avoid this, but it must be placed after the binary data, which (as we will see) is not possible when using the redis-cli tool with the -x argument. The easiest way around this is to DEL the key beforehand.
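Taken together, the TTL gotchas boil down to a three-way decision per key. A minimal sketch, where restore_ttl is a hypothetical helper that maps a TTL/PTTL reply onto the ttl argument for RESTORE (printing nothing means the key should be skipped entirely):

```shell
# Map a TTL/PTTL reply onto the ttl argument for RESTORE.
#   -2  : the key does not exist; nothing to restore.
#   -1  : the key has no expiry; RESTORE expresses that as 0.
#   >=0 : a real expiry, passed through unchanged.
restore_ttl() {
    case "$1" in
        -2) ;;              # print nothing: caller should skip this key
        -1) echo 0 ;;
        *)  echo "$1" ;;
    esac
}

restore_ttl -1      # prints 0
restore_ttl 45000   # prints 45000
restore_ttl -2      # prints nothing
```

This is the same decision the case statement makes in the full migration loop below.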
```shell
export OLD="redis-cli -h foo.0001.usw2.cache.amazonaws.com"
export NEW="redis-cli -h bar.clustercfg.usw2.cache.amazonaws.com"
```
I have obfuscated the original Redis hostnames. The important part is that you provide the CLI with any arguments you need to interact with the server.
Before we continue we should check that both servers are accessible and responding:
```shell
$OLD PING
# PONG
$NEW PING
# PONG
```
Great! Now let's push the big red button. Or, at least look at it:
```shell
for KEY in $($OLD --scan); do
    # --raw DUMP prints the binary payload followed by a trailing newline,
    # which head -c-1 strips off.
    $OLD --raw DUMP "$KEY" | head -c-1 > /tmp/dump
    TTL=$($OLD --raw PTTL "$KEY")
    case $TTL in
        -2)
            # The key expired between the scan and now, so make sure it is
            # gone from the new server as well.
            $NEW DEL "$KEY"
            ;;
        -1)
            # No expiry: RESTORE expresses that as a ttl of 0.
            $NEW DEL "$KEY"
            cat /tmp/dump | $NEW -x RESTORE "$KEY" 0
            ;;
        *)
            # A real expiry: PTTL already returns the milliseconds that
            # RESTORE expects.
            $NEW DEL "$KEY"
            cat /tmp/dump | $NEW -x RESTORE "$KEY" "$TTL"
            ;;
    esac
    echo "$KEY (PTTL = $TTL)"
done
```
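One detail promised earlier: --scan also takes a --pattern option, using the same glob syntax as the KEYS command, in case you only want to migrate a subset of the keyspace. A sketch (session:* is a hypothetical namespace, and the function is only defined here, not run):

```shell
# List only the keys in one namespace, still using non-blocking SCAN
# under the hood. $OLD is the redis-cli alias defined above.
scan_namespace() {
    $OLD --scan --pattern "$1"
}

# Usage: for KEY in $(scan_namespace 'session:*'); do ... done
```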
I should also mention that we were only copying around 320k keys, and Bash seemed to have no problem with that. Note, though, that the for loop expands the entire key list in memory before iterating, so with a huge number of keys this approach may not work.
It might be helpful to read my other blog post: Deleting a Huge Number of Keys in Redis.
If you see an error similar to "(error) MOVED 660 10.1.174.29:6379" it's because you are running against a cluster and will need to add the "-c" option to your redis-cli command.
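With our setup that just means baking -c into the destination alias (hostname obfuscated as before):

```shell
export NEW="redis-cli -c -h bar.clustercfg.usw2.cache.amazonaws.com"
```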