You may be familiar with the Lernaean Hydra. Its many-headed complexity made it a fitting name for a gLite service that is used to encrypt/decrypt data.
This service is based on Shamir's Secret Sharing algorithm, where the encryption/decryption key is split into X parts, and any Y of them (where Y <= X) are needed to reconstruct the key.
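To make the threshold idea concrete, here is a minimal sketch of Shamir's scheme over a prime field. This is an illustration only, not the Hydra implementation; the prime and share format are assumptions for the demo.

```python
# Minimal sketch of Shamir's Secret Sharing over GF(p).
# The prime below is chosen for the demo; real deployments differ.
import random

P = 2**61 - 1  # a Mersenne prime large enough for the demo secret


def split(secret, n, k):
    """Split `secret` into n shares; any k of them reconstruct it."""
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            y = (y * x + c) % P
        shares.append((x, y))
    return shares


def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret


shares = split(123456789, 3, 2)   # 3 shares, threshold 2
print(reconstruct(shares[:2]))    # any 2 shares suffice -> 123456789
```

With fewer than Y shares, the polynomial is underdetermined and the secret is information-theoretically hidden, which is exactly why a 3-server deployment with a 2-of-3 threshold tolerates the loss of one server.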
A requirement for data encryption had been raised some years earlier, so we had deployed 3 gLite Hydra servers (each holding a part of every user's key, with only 2 of them required for encryption/decryption operations) with clear geographic and administrative separation.
A software update to one of them led to a "funny" situation where no new keys could be registered and no old ones could be unregistered. (These are the only operations that require all the servers to be up and responding.) The tool provided to (re)configure the service had the very interesting behaviour of dropping every DB table and re-creating them from the predefined schema.
A re-configuration of the updated server left us in an "everything just doesn't work" state, which we had to resolve under pressure from the user community. Note that if the service had simply stayed broken, users could have lost many human/CPU hours: they would only have been able to produce encrypted output that they could not decrypt.
Analysis of the DB on another gLite Hydra instance gave us an idea of how this service stores its data. By sheer luck, the actual keys had not been deleted by the configuration script; only the relation between users and keys had been lost.
A copy of the user database and some reverse engineering of the relation DB on a working Hydra instance were enough to recover the service at (almost?) no cost.
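The recovery idea can be sketched as follows. All table and column names here are invented for illustration; the real Hydra schema differs, and this only shows the shape of the fix: the key material survived, so rebuilding the dropped user-to-key mapping table (using a working replica as a reference) restores the service.

```python
# Hypothetical sketch of the recovery: keys and users survived, only the
# mapping table was dropped. Schema names are invented for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE keys  (key_id  INTEGER PRIMARY KEY, key_part TEXT);
CREATE TABLE users (user_id INTEGER PRIMARY KEY, dn TEXT);
-- the relation table that the re-configuration tool dropped
CREATE TABLE user_keys (user_id INTEGER, key_id INTEGER);
""")
db.execute("INSERT INTO keys VALUES (1, 'share-a')")
db.execute("INSERT INTO users VALUES (1, '/DC=org/CN=alice')")

# Rebuild the mapping by mirroring the rows found on a still-working
# replica, matched on identifiers that both instances share.
db.execute("INSERT INTO user_keys VALUES (1, 1)")

row = db.execute("""
    SELECT u.dn, k.key_part
    FROM users u
    JOIN user_keys uk ON u.user_id = uk.user_id
    JOIN keys k       ON k.key_id  = uk.key_id
""").fetchone()
print(row)  # ('/DC=org/CN=alice', 'share-a')
```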
That reminded me of the common Murphy's-law corollary: the backup you have is either unreadable at the moment you need it, or was last updated BEFORE your critical data was stored.