As of 1.5, we have a great database schema maintenance tool in place (sql/changes/) which remembers which changes have already been applied to the schema and which still have to be applied. This new tool has gotten rid of the truckload of warnings and errors that the repeated loading of Fixes.sql gave us on 1.3 and 1.4.
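To make the discussion concrete, here's a rough sketch (in Python, with purely illustrative names — this isn't the actual tool's code) of the core idea: each change file is remembered by name and content hash, so re-running the tool only applies files it hasn't seen in that exact form before.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Content hash used to remember an applied change file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def pending_changes(changes_dir: Path, applied: dict[str, str]) -> list[Path]:
    """Return change files not yet applied, in sorted (file name) order.

    `applied` maps file name -> hash of the version that was applied;
    both the directory layout and this mapping are illustrative.
    """
    pending = []
    for f in sorted(changes_dir.glob("*.sql")):
        if applied.get(f.name) != sha256_of(f):
            pending.append(f)
    return pending
```

Because the hash of the file contents is what's remembered, an unchanged file is silently skipped — which is exactly what killed the repeated-Fixes.sql noise.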
Preparing to merge my MC branch to master for 1.6, I'm thinking that the sql/changes/ infrastructure works perfectly to handle schema changes.
However, if a schema change is accompanied by the need to run a data migration, the sql/changes/ infrastructure no longer works so well, because I expect we'll want to improve on data migrations as we gain knowledge of the various data sets out there.
Regardless of the technical limitations of the sql/changes/ infrastructure, I'm also envisaging that we'll want to do "pre-migration checks": if users have non-compliant data, they should be able to correct it before the data migration (to a changed schema) is executed. While pre-migration checks are definitely an easy extension to the sql/changes/ infrastructure, it currently offers no good way to report feedback from those checks, and the other requirement (letting users fix their data) can't be met at all.
My current thinking is that schema improvement becomes a continuous, incremental process instead of the one-off huge jumps we had on 1.3 and 1.4, *if* we find a way to address these requirements:
1. Run pre-checks on schema changes
2. Allow users to adjust / add / delete data in order for the pre-checks to be satisfied
3. Run post-schema-change data migrations
4. Allow data migration for the same change to be tweaked (i.e. different data migrations == different sha-sums)
5. Allow additional data migrations to be run *iff* (if-and-only-if) a specific data migration has been run in the past (my idea here is to allow the data to be "upgraded" according to progressive insight)
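The five requirements above could be sketched as a small migration runner — again purely hypothetical Python, with made-up names (`Migration`, `run`, `history`), just to show that the requirements compose cleanly:

```python
import hashlib

class Migration:
    """One data migration tied to a schema change (illustrative)."""
    def __init__(self, name, sql, pre_check=None, requires=None):
        self.name = name
        self.sql = sql                  # req. 3: post-schema-change data migration
        self.pre_check = pre_check      # req. 1: returns offending rows, or []
        self.requires = requires        # req. 5: name of a migration that must have run

    @property
    def sha(self):
        # req. 4: a tweaked migration has different contents, hence a new sha
        return hashlib.sha256(self.sql.encode()).hexdigest()

def run(migrations, history, execute):
    """Apply pending migrations; `history` maps name -> sha of applied versions."""
    for m in migrations:
        if history.get(m.name) == m.sha:
            continue                    # already applied in this exact form
        if m.requires and m.requires not in history:
            continue                    # req. 5: prerequisite migration never ran
        if m.pre_check:
            bad = m.pre_check()
            if bad:
                # req. 1/2: surface the offending rows so the user can fix them
                raise RuntimeError(f"{m.name}: fix these rows first: {bad}")
        execute(m.sql)
        history[m.name] = m.sha
    return history
```

The interesting property is that req. 4 and 5 fall out of keeping a name→sha history rather than a flat "has this file run" flag: tweaking a migration changes its sha (so it reruns), and a follow-up migration can key off whether a specific earlier one is in the history.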
I would really love to hear your thoughts on the topic!