
Re: Database updates



On Thu, Jul 14, 2011 at 2:31 PM, John Locke <..hidden..> wrote:
> Hi,
>
>
> On 07/13/2011 03:15 PM, Chris Travers wrote:
>> Ok, two pieces of quick feedback.
>>
>> First right now I would highly recommend that if you have to follow
>> schema changes you join the ledger-smb-commits list on sourceforge.
>> That will help a lot right there.
>
> Well, with git I'm not having any trouble seeing what's changed. What I do:
>
> 1. create a tag of my current local branch
> 2. merge changes from upstream
> 3. git diff <tagname> sql/ and get all the schema changes.
>
> Problem is, there have been quite a few! And so I've just been relying
> on the upgrade scripts -- but some of these have failed, and I don't
> think they're complete.
>
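
In concrete commands, that workflow is roughly the following (the tag
name and the "upstream" remote name are just examples):

git tag pre-merge                 # 1. tag the current local branch
git pull upstream master          # 2. merge changes from upstream
git diff pre-merge sql/           # 3. show all the schema changes
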
>> Secondly, a tool to compare schemas will be helpful, and I would like
>> to create one in the near future.  In the meantime, as a simple
>> discussion draft, let me post the following bash code.
>>
>> rm ../testdump1.sql ../testdump2.sql
>> for line in `grep -i "^create table" sql/Pg-database.sql | sed -e 's/(.*$//' | awk '{ print $3 }'`
>> do
>>     pg_dump -U postgres -s -t $line database1 | grep -v GRANT | grep -v REVOKE >> ../testdump1.sql
>> done
>>
>> for line in `grep -i "^create table" sql/Pg-database.sql | sed -e 's/(.*$//' | awk '{ print $3 }'`
>> do
>>     pg_dump -U postgres -s -t $line database2 | grep -v GRANT | grep -v REVOKE >> ../testdump2.sql
>> done
>>
>> diff ../testdump1.sql ../testdump2.sql
>>
>> If this sort of result is found to be acceptable, I could probably
>> port the first two parts to Perl pretty easily.  Not sure about diffs
>> though.
>
> That sounds like a good start.
>
> I'm thinking, though, that Pg-database.sql is still too big and
> monolithic for this -- and the resulting dump file ought to be where you
> do the initial schema load, don't you think?

Ok, so if we are dealing with a single database server, we could, I
suppose, load the schema into a fresh database, take a schema-only dump
of it, and then compare that against a schema dump from production
(filtering out SET commands and comments).
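
Something like this, as a rough sketch (the scratch/production database
names and the exact filters are placeholders):

# Load the canonical schema into a scratch database.
createdb -U postgres lsmb_scratch
psql -U postgres -d lsmb_scratch -f sql/Pg-database.sql

# Schema-only dumps with SET commands, comments, blank lines, and ACL
# noise stripped, so the diff shows structural differences only.
schema_dump () {
    pg_dump -U postgres -s "$1" | grep -v -E '^(SET |GRANT |REVOKE |--|$)'
}

schema_dump lsmb_scratch    > /tmp/schema_new.sql
schema_dump lsmb_production > /tmp/schema_prod.sql

diff -u /tmp/schema_new.sql /tmp/schema_prod.sql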

That won't give you everything, though.  For example, menu changes: if
you have any custom menu entries (which are often created by addons),
you can't simply import the differing records, so the update scripts
have to be relied on there.  And typically the way this works is to run
the update scripts on a new database, then dump the menu tables and
copy them back into Pg-database.sql.
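
Something along these lines (treat the menu table names and the
location of the update scripts as assumptions to adjust):

# Load the stock schema into a brand-new database.
createdb -U postgres lsmb_fresh
psql -U postgres -d lsmb_fresh -f sql/Pg-database.sql

# Apply the update scripts (adjust the path to wherever yours live).
for fix in sql/fixes/*.sql; do
    psql -U postgres -d lsmb_fresh -f "$fix"
done

# Dump the menu tables as data-only COPY statements, ready to be pasted
# back into Pg-database.sql.
pg_dump -U postgres -a -t menu_node -t menu_attribute -t menu_acl \
    lsmb_fresh > /tmp/menu_tables.sql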



>
> Maybe have a separate file, sort of like an ini file or parsable
> "loadorder" that can identify the type of relation and the name. e.g.

I would REALLY like to avoid having multiple authoritative sources of
the database schema if possible.  It's very easy for those to get out
of sync.
>
> table acc_trans
> table account
> type xxxx_xx_xx
> view chart
>
> ... etc.
>
> This could then be fed to the dump script, instead of parsing
> Pg-database.sql...

If that's what we want, I think we'd want to automatically generate it
from the SQL files in question rather than maintain it separately.
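
For instance, something like this could spit out a loadorder file
straight from Pg-database.sql (sketch only: it assumes the CREATE
statements start at the beginning of a line and only handles tables,
views, and types):

grep -i -E '^create (table|view|type) ' sql/Pg-database.sql \
    | sed -e 's/(.*$//' \
    | awk '{ print tolower($2), $3 }' > loadorder

That keeps the list in definition order, so foreign keys still load
against tables defined earlier, and there is only one authoritative
source.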

>
> Then we can have the tables ordered in a way that will actually load
> with foreign keys to tables previously defined, etc.
>
> And as long as that's consistent, then I think diff is fine for now for
> seeing what's changed... can just run the dump script, direct the output
> to a tmp file, and compare against the one in the code tree.

Also, there is a second issue here: column ordering is not guaranteed
when ALTER TABLE commands are used.  So no matter how we do it, there
will be false positives that have to be reviewed and dismissed.
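
One way to cut those down would be to compare column lists pulled from
the catalogs in a fixed order instead of diffing raw dumps; roughly
(reusing the database1/database2 names from the script above):

# List every column as "table column type", sorted by name rather than
# ordinal position, so columns reordered by ALTER TABLE compare equal.
for db in database1 database2; do
    psql -U postgres -d "$db" -A -t -c "
        SELECT table_name || ' ' || column_name || ' ' || data_type
          FROM information_schema.columns
         WHERE table_schema = 'public'
         ORDER BY table_name, column_name;" > /tmp/columns_$db.txt
done

diff -u /tmp/columns_database1.txt /tmp/columns_database2.txt

It still would not catch defaults or constraints, but it removes the
ordering noise.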

Best wishes,
Chris Travers