Mdrepo may lose data if the mdrepo data files are shared with NFS.
Description and Documentation
Mdrepo is a mechanism in the TWiki core to store site and web metadata. Its use is optional (it is not enabled by default). It is primarily intended for large installations with thousands of webs.
So far, mdrepo has been using Berkeley DB. But Berkeley DB is not designed to store data in a file shared over NFS. There was a case where several records were lost on a TWiki installation consisting of multiple servers sharing files over NFS.
Mdrepo is for large installations, which are likely to consist of multiple servers behind a load balancer, so relying on an NFS-unsafe mechanism is a serious problem. Instead of tying a hash to a Berkeley DB file, it is better to store a serialized copy of the hash and read the whole file back when the data is needed. There is still a chance that a record insertion, change, or deletion is lost if multiple servers update the file simultaneously, but that is better than the current situation, in which even records that were not touched may be lost or altered.
Among the ways to serialize and deserialize data in Perl, Sereal::Encoder and Sereal::Decoder look promising. A webs mdrepo file with 5,000 records occupies about 700 kilobytes as a Berkeley DB file, and the same hash serialized by Sereal::Encoder should be similar in size. In this day and age, handling something smaller than 1 megabyte shouldn't be a problem.
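The whole-file read/write approach described above can be sketched as follows. This is a minimal illustration, not the actual mdrepo code: it uses the core Storable module as a stand-in for Sereal::Encoder/Decoder (which are CPAN modules), and the file name and helper function names are assumptions. Writing to a temporary file and then renaming it into place means a concurrent reader sees either the old file or the new one, never a half-written file, which is the property the tied Berkeley DB file lacks over NFS.

```perl
use strict;
use warnings;
use Storable qw(nstore retrieve);   # core module; stand-in for Sereal::Encoder/Decoder

# Serialize the entire metadata hash to a temporary file, then rename it
# into place. rename() is atomic within one filesystem, so readers never
# observe a partially written file (though a simultaneous update by
# another server can still be overwritten, as noted above).
sub save_mdrepo {
    my ($file, $hashref) = @_;
    my $tmp = "$file.tmp.$$";               # per-process temp name (assumption)
    nstore($hashref, $tmp);
    rename $tmp, $file or die "cannot rename $tmp to $file: $!";
}

# Read the whole file back and return the deserialized hash reference.
sub load_mdrepo {
    my ($file) = @_;
    return retrieve($file);
}

# Example at roughly the scale mentioned above: 5,000 web records.
my %webs = map { ("web$_" => { admin => "AdminGroup", id => $_ }) } 1 .. 5000;
save_mdrepo("mdrepo-webs.db", \%webs);
my $back = load_mdrepo("mdrepo-webs.db");
print scalar(keys %$back), "\n";            # 5000
unlink "mdrepo-webs.db";
```

With Sereal, the nstore/retrieve calls would be replaced by Sereal::Encoder's encode and Sereal::Decoder's decode on the file contents; the atomic-rename pattern stays the same.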
-- Contributors: Hideyo Imazu - 2016-07-21