
Re: Managing database change: improvements needed




Hi John,

Thanks for your reply. I went to the Sqitch mailing list for a bit more information, hence my delayed response.
Some more info:

It seems Sqitch wants to address exactly the issue of multiple branches receiving the same patches, and of upgrading from one branch to another, which is what we want to support: https://github.com/theory/sqitch/issues/200


I reviewed some of this, and the Sqitch Postgres tutorial... it sounds to me like we're not at the level of complexity where this kind of problem arises.

That's correct. However, on users@, Rob Ransbottom describes how he is working a few of his own requirements into his local install. These requirements also include changes to the database.

While his use-case strictly isn't in *our* list of requirements, I'd like to be able to support people adding their own enhancements in their own fork and merging the changes of the main project into their own install. If we can support the roll-out of our changes without simply throwing away theirs, that would be my ideal situation. With Sqitch's change-oriented roll-out, we seem to go a long way already, because I imagine we'd stop doing full schema replacements.
 
-- the main thing we should keep in mind is to always declare dependencies; then we can make use of a "union merge" if we don't want to resolve merge conflicts manually.

Right. I think that's indeed the best solution for our own development process.
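
For the record, a minimal sketch of how that could be set up: Git's built-in "union" merge driver can be enabled for the plan file through .gitattributes (the plan file's path here is illustrative):

    # .gitattributes: on conflict, keep the lines from both sides
    sqitch.plan merge=union

This keeps both branches' additions when the plan file is merged; combined with explicitly declared dependencies, Sqitch should still be able to work out a valid deployment order.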

Regards,

Erik.

On Wed, Aug 26, 2015 at 9:13 PM, Erik Huelsmann <..hidden..> wrote:
Hi all,

There are a few issues that I have with the way we currently handle database schema changes.

Current situation

============

We have a main schema file for the table definitions and a number of modules with stored procedures grouped by "subject".

In addition, there's a "Fixes" file, which contains all incremental updates to the schema since some version.

When I want to change the schema as part of our development, I need to change the schema definition files *and* add the schema changes to the Fixes.sql file.
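
To illustrate the duplication (table, column, and file names are only examples):

    -- in the full schema definition (e.g. Pg-database.sql):
    CREATE TABLE batch (
      id serial PRIMARY KEY,
      description text,
      approved_by integer        -- the newly added column
    );

    -- and again, as an increment, in Fixes.sql:
    ALTER TABLE batch ADD COLUMN approved_by integer;

Forgetting either half leaves new installs and upgraded installs with different schemas.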


I agree this is less than ideal. I think we need to think of a comprehensive tooling approach; there are a few reasons we haven't done this yet (mainly that getting the tooling right is a lot of effort, and in the past other urgent things needed that effort more). However, I think this is growing in urgency as we continue development.

Right. I'm merely summarizing for those who didn't know the current situation and to support my line of reasoning. (I don't think you think that I am, but for the record: I'm not criticizing the current situation.)

As usual, I'll comment on how it's done elsewhere (Drupal) and my experience with other upgrade systems...

Drupal ends up with a dual approach to schema upgrades: each module provides a base schema that is up to date with the current version, plus an arbitrary series of update scripts, each with a "schema level". The update scripts are skipped for new installs. The "system" table tracks the schema level for each module; on updates, Drupal runs all scripts with a greater schema level than the one currently installed, updating the stored level as each script completes successfully.
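
From memory, the bookkeeping boils down to something like this in Drupal 7 (module name and level are illustrative):

    -- what level is module 'mymodule' at?
    SELECT schema_version FROM system WHERE name = 'mymodule';
    -- run every mymodule_update_N() where N exceeds that level, then:
    UPDATE system SET schema_version = 7002 WHERE name = 'mymodule';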

There is no rollback with this system, and while you can declare some module updates as dependent on particular schema updates in other modules, this is cumbersome and not widely used. As a result, there are often race-condition-type issues on upgrades, where one update can't complete because it depends on an upgrade to another module that has not yet run.

I know there was talk of switching to a hash-based system rather than a simple integer sequence; I have not yet dug into Drupal 8 to see whether there are any improvements there.

I did consult briefly on a Microsoft project on Azure -- Drupal on SQL Server -- and they had some interesting tools for managing schema updates that seem comparable to what Sqitch is attempting to do.


Ok. So what we have now come up with looks a lot like what Drupal is doing, then. Not that I want to copy other projects wherever we can, but if we can delegate some of the thinking, that would be a most welcome form of load-sharing :-)

I think your story confirms we're on the right track to look at Sqitch. Thanks!

BTW, I think that for LedgerSMB it's essential to be able to roll back to earlier versions. Our current approach to that is half-hearted, because we simply load the modules from the older version; there are no guarantees in place that this actually works.
 

Problems with the current approach

==========================

The current approach has (IMO) several problems:

* Loading the "Fixes.sql" file, even in a production environment, produces a truckload of ERRORs, because the schema may already contain some or all of the fixes in the file, leading to aborted transactions

* Making adjustments in multiple places can lead to incompleteness on either side

* Fixes.sql is loaded twice: before and after loading the stored procedures, making it unsuitable for some kinds of fixes

* In order to change the types or parameter lists of stored procedures, they need to be dropped before being (re)created, resulting in DROP TYPE and DROP FUNCTION statements all over the place.

Especially the high number of ERRORs in the log file when creating a new database is disconcerting to new users. I myself run into the DROP TYPE and DROP FUNCTION issue more and more often, as my refactorings for 1.5 and 1.4-mc reach deeper into the system.
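
To make the last point concrete, a sketch of what such a change looks like today (type and function names are hypothetical):

    -- changing a function's return type first requires dropping the type;
    -- the CASCADE also drops every other function returning it:
    DROP TYPE IF EXISTS invoice_summary CASCADE;
    CREATE TYPE invoice_summary AS (id integer, total numeric);
    CREATE OR REPLACE FUNCTION invoice__summary(in_id integer)
    RETURNS invoice_summary
    LANGUAGE sql
    AS $$ SELECT in_id, 0::numeric; $$;  -- body stubbed for illustration

Every function dropped by the CASCADE then has to be re-created as well, which is why the drops spread through the module files.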

Additional requirements

=================

* Besides solving the problems above, some users have expressed the desire to extend LedgerSMB locally. However, the way we currently upgrade the system removes all customizations (at least in 1.2->1.3 and 1.3->1.4; maybe not in 1.4->1.5). If our changed approach could help address this requirement, all the better.

* The chosen solution should preferably work with the branching-and-merging strategy we use in our development cycle.

Agreed on all the above. 

Good. Then we have good reasons to change :-) 
 

Possible solutions

=============

One thing I think we should do is separate development from production when loading the database. That is to say: creating a new database is a different step from upgrading a production database, which in turn is again very different from working on a development database. Our processes should provide the best fit for each.


I am not sure I understand what you mean by this.  Do you mean that we need an extra build step?  I would not be opposed to that.  But if that isn't what you mean, could you be more specific?

Well, other than the standard Perl steps "copy to blib" and "manify POD", we don't really have build steps of our own. I'm indeed proposing that we have a build step. I envision it to be something like this:

* Create PostgreSQL database
* Play Sqitch scripts
* Dump schema from the database

and then consider the dumped schema a build artifact which doesn't need version control. This step would be executed before creating a distribution archive.
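
A minimal sketch of that build step (database name, target, and output path are illustrative):

    createdb lsmb_build
    sqitch deploy db:pg:lsmb_build
    pg_dump --schema-only lsmb_build > Pg-database.sql   # the build artifact

New installs would load the dump; only existing databases would be upgraded through Sqitch.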

The pre-distribution step should - if we go with Sqitch - also include compiling a Sqitch bundle, so we don't distribute development progress, branches, or whatever else might be in the Sqitch repository (which, as far as I understand it, is stored in the development tree).

I'm not quite clear on all of this. Are you suggesting two different Sqitch steps -- one for the data schema/model and a different one for the stored functions? Is this a recipe for building a clean database, which would destroy all data on reversion -- and thus something you would only run to get a clean environment? And would you then have a different set of Sqitch scripts for all the functions that change/evolve within a major release, including non-destructive schema changes? If that's what you mean, it sounds good -- otherwise I'm lost as to the purpose of this.


Yes, it's a recipe for building a clean database. The clean database build would become part of the project's "build" step (a step to be executed before release). That database can then be dumped, and the dump used to create new databases after the release has been deployed to our users' systems.
The idea is that loading a schema definition file, including loads of stored procedures, is much faster than starting from a base schema and running a slew of upgrades (even if the DB is empty). It's the same idea as you describe for Drupal, which skips the updates on initial schema creation.

Proposed solution to the duplicate change problem

=====================================


[snip] 
Getting this tooling to work right is not a minor project, however. It definitely cannot be done in a minor release because of the additional dependencies, and I think we'd need a policy against adding extensions to core in major versions. However, it would also provide a nice platform for third-party integration and reuse.

Each of our modules could be an extension and separately versioned.

Ok. I like the technical side of that, but I'm not sure we need it at this point. I *do* see value in it for optional modules -- say, the way we used to have Budgetting as an extension in 1.3 (although that solves only 20% of the problem, because hooking Budgetting up to the web application was actually a bigger problem than getting the tables installed).


I would generally be in favor of splitting up modules into independent versions, but agree that's a lot of work that probably isn't as much a priority now -- but thinking about it now is certainly appropriate. I wonder what lessons we could take from the Budgeting module to define extension points so that adding other optional modules is not so painful...


I'm thinking that here - again - we should look at how Drupal and other systems (Joomla?) approach the problem; with our development resources we can't afford to reinvent the wheel, and some of these projects have been doing this for so long that we'll never catch up with their accumulated experience.

For branching and merging, the extensions would have a number specific to the development head, which could be re-numbered during the build process. For modules, all we really need to do is drop extension/create extension whenever the version number has changed.
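
In PostgreSQL terms, that module refresh could look like this (extension name and version are hypothetical):

    DROP EXTENSION IF EXISTS lsmb_invoicing;
    CREATE EXTENSION lsmb_invoicing VERSION '1.4.16';
    -- or, when an upgrade script exists for the version pair:
    -- ALTER EXTENSION lsmb_invoicing UPDATE TO '1.4.16';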

We'd probably want to reserve a number series for "next in version series."  I am thinking that using 999 would be a good one, so 1.4.999 would be "whatever next comes in 1.4" and this would allow us to install and run out of the development tree.


-1 on using 1.4.999 as "next 1.4 version". If you're on 1.4.15, developing for 1.4.16, that interim development at 1.4.999 ends up greater than the release! I would prefer more established semantics, mostly like we've been doing: 1.4.15 -> 1.4.16-dev -> 1.4.16-alpha1 -> 1.4.16-beta1 -> 1.4.16-rc1 -> 1.4.16.

Right. We switched to that recently; before, we kept each release's version number for the entire series of commits until the next release. I'm opposed to doing that, because people should be able to depend on a canonical definition of released versions.
 
Drupal has taken to adding "+5" to indicate 5 commits past whatever the most recent release is -- e.g. 1.4.16-beta1+5. Don't use 1.4.999, unless that's a dev release of 1.5.0!

Ok. Even there I'd prefer 1.5.0-dev -- which is what 'master' reports now, btw.

Hmmm. Sqitch - like version control systems - does this by itself (in this case by generating a hash, like Git and other DVCSes).

As for Sqitch, I have no experience with it yet. It's supposed to work well with branching and merging. One thing it *does* do is integrate with version control tools, and it means to integrate with the project's repository. Ideally, when merging and branching, no additional processing is required to integrate the changes from branches. However, I'm not exactly sure that's what happens (given https://groups.google.com/forum/#!searchin/sqitch-users/merge/sqitch-users/GXqgt7nJ_1k/Vvg-r1HOEqMJ), but I think the referenced thread is about reordering changes which have already been partially deployed. What I like about Sqitch is that it integrates with the VC system but is VC-system agnostic: some of us can use it with Git while others keep using Hg-Git as they currently do.

Ok so if we ship our storage schema as an extension (or group of extensions) we still need to have ordered versioning going on.

Well, it seems to me that that's what Sqitch does, *unless* you have a database which is running some development version. From tag to tag (which would be from release to release), ordering doesn't seem to matter. The problem the page seems to refer to - but I should probably subscribe to the mailing list and ask this question - is a database which is left in an intermediate state.

My reading of the tutorial is that each individual step may declare a dependency on some earlier step. The discussion mentioned above is more about handling merges from multiple branches.

Exactly. So, if people stay with released versions, there are no problems. If we take care to only merge branch commits at the end of the sqitch plan file, there's not even a problem for people running development versions. Problems only start to pop up when people maintain their own branch of database changes and merge in changes from master regularly: then Sqitch will detect that some changes happen out of order - if the ordering gets broken, of course.
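
For illustration, an excerpt of what the dependency declarations look like in a sqitch.plan (change names, dates, and author are made up):

    appschema 2015-08-27T10:00:00Z Erik Huelsmann <erik@example.com> # add base schema
    batches [appschema] 2015-08-27T10:05:00Z Erik Huelsmann <erik@example.com> # add batch tables

The name in square brackets declares the dependency; Sqitch refuses to deploy a change before its dependencies.
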
I am thinking that sqitch may allow us to do the grunt work of the schema change scripts as part of the build phase (we'd have to strip out begin/commit statements, since create/alter extension adds those), but it seems like it would be very helpful in managing that side.

I do see the value of this -- but it looks like an awful lot of grunt work creating these schema change scripts, if you also have to carefully create matching revert and verify scripts for every change!

Well, it's probably a lot of work, but our current situation is really very similar: right now, I have to be very careful to add the same change/new definition to the Fixes.sql file *and* to the Pg-database.sql file. *Or* I have to make sure I drop the function and its type in the right place, because that specific type gets used in multiple places, and dropping the type cascades into dropped functions, etc.

With Fixes.sql being run twice, it's not exactly the right place to start dropping things. But if I need to drop functions in the module files, how long should the DROP statements for older versions remain there? Five versions of the function? One major release? Ten patch releases? Two major releases?
 
Seems like a lot of discipline to pull this off -- certainly something that's worth doing if we aim to deliver a quality package.

I do see what you mean regarding the forward/reverse scripts. However, even if we were to leave the reverse scripts empty and just skip that step (which I'm really very much against doing), I *still* see benefits in switching to a tool that provides a database change process.

I am curious about the tools that can supposedly detect the differences between two database schemas and generate a script to convert one into the other. I think I ran across a couple of modules that claimed to do this for SQL Server. I have no experience with them, or knowledge of their reliability. But I'm lazy ;-)

Sqitch maintains a few tables in which it records the updates/upgrades it performs. To assess roll-out order, it builds a chain of hashes over all changes (much like Git does for commits). So Sqitch isn't verifying the schema itself (other than through the verify scripts); it simply keeps its own administration in sync and uses that to determine what to roll out and what not.
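
For example, on PostgreSQL (assuming Sqitch's default registry schema, "sqitch"), the deployed changes can be listed with:

    SELECT change, committed_at
      FROM sqitch.changes
     ORDER BY committed_at;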

If we don't have something like that, Sqitch seems like a great tool in the arsenal -- quite approachable/understandable, and something we should be able to easily hook up in Travis.

Yes: with Sqitch being Perl, we should be able to hook this into the Travis CI build quite simply. I'd advocate doing that, as well as designing a few tests which verify upgrades from some assumed-good version to the latest commit that Travis is testing.
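
A sketch of what such a check could run (target name illustrative; 'revert -y' skips the confirmation prompt):

    sqitch deploy db:pg:lsmb_test
    sqitch verify db:pg:lsmb_test
    sqitch revert -y db:pg:lsmb_test

A deploy/verify/revert round-trip like this would also keep the revert scripts honest.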

Yeah. Maybe this too is a question to ask on Sqitch's mailing list.

What I *am* sure about is that (1) will be completely change-oriented. *Maybe* we can use Sqitch with the current module-based source code organization.

If we go with extensions, we don't need to. We just bump the version number on each change of a module and keep a catalog of what is required (and tests to verify, of course). We can then drop/create extensions as needed for the procedural modules, and generate migration scripts for the storage extension(s). The big difference is that instead of one db version, we would have a bunch of separately versioned components, and likely a requirement that these match on login.

Hmm. Somehow that doesn't sound like a really attractive prospect: more things that (apparently) can go wrong.
... and now do we need our own dependency manager? :-)

I'm growing quite fond of Composer in the PHP world, and ran across Carton for managing Perl dependencies when putting together the Docker image. Carton definitely looks helpful for managing the installation of specific versions of CPAN modules to get to a consistent state. I wonder if that could be extended to cover Postgres extensions...

With these things not sorted out yet, I'm not sure we should go there now. One thing I think we *can* use Sqitch for, even if extensions come a few years from now, is to maintain the SQL scripts that upgrade one extension version to the next. In that sense, extensions are another layer of complexity on top of the solution Sqitch offers: of course they allow upgrades to run upgrade scripts, but those scripts still need to be developed, and that's exactly where Sqitch comes in...

If we're going to go with Sqitch, I don't think it's a good idea to switch just then and there; rather, we should test it to build up some experience and then choose a well-defined point in time to start using it.

Agreed.  Maybe start on plugins?

Actually, I was thinking of starting with 1.5, *after* a database has been upgraded from 1.4. However, that would mean *everybody* needs to migrate through 1.5.0, because that's the moment we move to Sqitch. (Or we could choose 1.6, or ...?)
 

So, there are two proposals here. What do you say?

I don't see how we can get away from doing some things ourselves, so I guess the question is what and where to reuse other work. 

So my preference would be sqitch and pg extensions.  In my mind this would require a two-step build process for dbs:

1.  Build the schema extensions using sqitch
2.  Install all pg extensions using make.

Ok. Is this on releases only, or with our day-to-day work on the tree?

My view on using Sqitch for everything in our schema would be to replace the current schema loading with a call to Sqitch's API. Of course we have to write that ourselves, but it's not in the category I'd call "doing something ourselves", because all we have to do is create the code that delegates to the other project's code. But figuring out *how* to do that -- yes, that's our work.
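
Even if we end up calling Sqitch as a library, the effect would be roughly the same as running (target URI illustrative):

    sqitch deploy db:pg://postgres@localhost/new_company_db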

The first is a developer-only task (sort of like running YACC is when building PostgreSQL -- not even required when installing from source packages) so there isn't a user-visible dependency there.

The second, though, does impose some user dependencies, in that pg_config and the other pg development tools must be installed before the build can be done. Of course, for those installing debs, this would happen when building the debs rather than when installing them.

If we use Sqitch only, we also need it as a dependency, both for developers and for end users. However, we probably don't need to teach our users about it if we simply use it as a library.

Not sure if you've looked at the Docker builds I put together a couple months ago. They're at https://hub.docker.com/r/ledgersmb/ledgersmb/ .

At least for that installation path (which I think is becoming more popular by the minute), the Sqitch approach is far easier to support, if I understand these options correctly... I can easily install Sqitch inside the container and script whatever we need to update the schemas in the database. However, I explicitly did not build Postgres into those images, assuming you would connect the LSMB container to an external Postgres container (or server). So installing an extension in Postgres becomes a much more challenging proposition -- I'm assuming you need access to the Postgres environment to be able to install an extension? If you can install a Postgres extension over a TCP connection to Postgres, I suppose we can script that -- but at least from my standpoint, Sqitch sounds much easier to deal with.

Another potential challenge -- can you install extensions in a cloud-provisioned Postgres server, such as Amazon RDS?

Unless I completely misunderstand how you install extensions, I think they would cause some significant challenges to those who want to install in a cloud or container environment.


Good questions. I'll search for the answers, but if they point in the direction you're suggesting, then I think we're much better off with Sqitch and much less so with extensions.

So that is my preference and why.

Thanks for your response!

With my reaction above, I'm wondering how to get to a next step. We probably need to be practical in some sense and just start *somewhere*. But the question would be *where* and *with what* -- where the *what* question seems to be a choice between (as we now know them) (a) Sqitch or (b) Sqitch+extensions.

It does strike me that there's still a question of what schema changes destroy data.

The intent is to create schema changes which *don't* destroy data, of course. But if people change our schema, and we roll out schema updates on upgrade just the way we do for the 1.2->1.3 and 1.3->1.4 upgrades, then those changes get lost: we throw away the existing schema and build a bright and shiny new one, filling it with only the data we *know* to exist in the old schema -- which excludes the customizations.
 
From the Sqitch tutorial, I'm not seeing why we need a build step as described earlier -- couldn't our setup simply run the Sqitch deployment under the hood? Unless that's slow enough that flattening it to a single database dump file we can quickly load is a big enough win...

That's my assumption, yes. Also, having a build step would allow generating the schema documentation files that we now keep in the repository, turning them into build outputs included in the release and no longer part of the repository. Another thing that could benefit from a build step is producing an optimized Dojo build. Maybe there are other things we could do in a build step that we can't or shouldn't do when everything is versioned.
 
that is, other than the consideration of how we safely provide tools to roll database schema changes backwards and forwards on production databases, without losing data?

How does Sqitch track what has been deployed, and what has not? (Just thinking with my Docker hat on -- is this important metadata that needs to get preserved when lsmb containers get destroyed/recreated?)

Sqitch records the state of the rollout in the database itself. So, recreating the container is not a problem at all.

Overall I like the looks of Sqitch and would favor that approach...

Thanks for your comments. I hope my explanations above only reinforced your position.


Regards,
 
Cheers,
John Locke
http://www.freelock.com






--
Bye,

Erik.

http://efficito.com -- Hosted accounting and ERP.
Robust and Flexible. No vendor lock-in.