
Re: State of Perl-based database setup utilities for LedgerSMB 1.3



On Fri, May 27, 2011 at 2:23 PM, David F. Skoll <..hidden..> wrote:
> On Fri, 27 May 2011 14:01:28 -0700
> Chris Travers <..hidden..> wrote:
>
> [...]
>
>> In an upgrade you'd want to run all relevant production-safe tests on
>> your database right?  Wouldn't that require a test harness?
>
> No, I think you'd want to run the tests if you are building a
> new upgrade release.  But in some distant future if LedgerSMB is packaged
> as a deb or rpm, there's no reason to run any unit tests as part of the
> installation of the deb or rpm.  At least, I don't think any other
> Perl modules that are packaged as debs or rpms run unit-tests as part
> of installation.

What about installing add-ons into a database?  Do we care if
something has changed and a third-party add-on accidentally overloads
a native, LSMB-defined function?
>
> [...]
>
>> Let's go over a hypothetical problem here......
>
>> Suppose LedgerSMB 1.3 is accidentally installed into template1 and
>> nobody notices.
>
> That should be a Can't Happen scenario because ledgersmb-init-database
> should prevent that.

How sure can we be that there are no functions overloading the ones we
automatically map arguments to?  In PostgreSQL even template0 can be
altered.  We could simply declare any alterations to template0 or
template1 entirely unsupported, but I am hardly convinced that would
be safe in practice.

One of the real reasons this question arises is the automatic SQL
generation that happens when we call stored procedures.  In essence,
we look up the function name in the system catalogs, pull out the
argument names, and map those to the input data.  Because Perl is a
weakly typed language, there is a fundamental inability to discern
between overloaded functions of the same name and schema whose
argument lists differ.
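To make the failure mode concrete, here is a minimal sketch (in Python,
with made-up function names and a dictionary standing in for the
pg_proc/pg_namespace catalogs -- this is not LedgerSMB's actual code) of
mapping catalog argument names onto input data, and why dispatching by
name alone breaks down as soon as an overload appears:

```python
# Simulated system-catalog rows: (schema, function name) -> one
# argument-name list per overload.  In the real system this would be a
# query against pg_proc joined to pg_namespace.
CATALOG = {
    ("public", "invoice_post"): [
        ["in_id"],                            # single signature: unambiguous
    ],
    ("public", "customer_save"): [
        ["in_id", "in_name"],                 # overload 1
        ["in_id", "in_name", "in_discount"],  # overload 2 (e.g. from an add-on)
    ],
}

def map_arguments(schema, name, data):
    """Look up a function's argument names and map request data onto them.

    Raises if the name is overloaded, because the name alone cannot
    select a signature -- the brittleness point described above.
    """
    overloads = CATALOG[(schema, name)]
    if len(overloads) > 1:
        raise LookupError(
            "%s.%s is overloaded (%d signatures); cannot choose one "
            "by name alone" % (schema, name, len(overloads))
        )
    argnames = overloads[0]
    return [data.get(a) for a in argnames]
```

The point is that an add-on merely CREATE-ing a second function with the
same name and a different argument list silently turns a unique lookup
into an ambiguous one, with no change to the calling code.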

So here you have a brittleness point which is outside the developers'
control once the software is deployed.  We currently believe the
overall tradeoff in reliability and robustness is worth it, but that
only holds true if it is reasonable to run unit tests against
production databases when appropriate.
>
>> We could document "we highly recommend you go into the
>> sql/modules/test directory and run all relevant files there" but I am
>> wondering if we really should run automated tests on the database
>> after it is set up.
>
> Here's what we do for our software: For every database schema change
> from the beginning of time until now, we have a pg_dump sample
> database.  As part of our unit-tests, we run tests to make sure that
> any previous schema can be upgraded to the current schema and we also
> make sure that the so-upgraded schema matches the schema that would be
> produced by a brand-new installation.
>
> Once those regression-tests pass, there's no need to run them on deployment
> systems because We Know It Will Work.  (Well, OK, there was a change from
> PostgreSQL 8.3 to 8.4 or 9.0 in how automatically-created indexes were
> named that threw us off, but that was easily worked around.)
>
> LedgerSMB can run into trouble because it hasn't had a proper installation
> script until now.  But once it has that, upgrades should be correct almost
> by construction.
>
> If you want to have the automated tests available, that might not
> be a bad idea, but they should be fairly easy to separate out as a separate
> deb or rpm (you could have ledgersmb, ledgersmb-tests, etc.)
> It's obviously fine for ledgersmb-tests to depend on the Perl testing
> framework.

In which case, I think we'd have to highly recommend the installation
of ledgersmb-tests, given the concerns mentioned above, would we not?
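For what it's worth, the "upgraded schema must match a fresh install"
check quoted above could be sketched roughly as follows.  A real
implementation would diff `pg_dump -s` output from the upgraded and
freshly created databases; this toy version (helper names are mine,
and splitting on semicolons ignores function bodies and string
literals) only shows the normalization step that keeps cosmetic
differences from masking real ones:

```python
def normalize_schema(dump_text):
    """Reduce a schema dump to a sorted list of whitespace-collapsed
    statements, with SQL line comments and blank lines stripped."""
    stmts = []
    for raw in dump_text.split(";"):  # naive: breaks on ';' in bodies
        lines = [line.split("--")[0] for line in raw.splitlines()]
        stmt = " ".join(" ".join(lines).split())
        if stmt:
            stmts.append(stmt)
    return sorted(stmts)

def schemas_match(upgraded_dump, fresh_dump):
    """True if the two dumps describe the same set of statements."""
    return normalize_schema(upgraded_dump) == normalize_schema(fresh_dump)
```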

Best Wishes,
Chris Travers