> I think this is a better long term solution as for many scenarios it may
> be impossible to properly remove entries from the database due to the
> Audit Features we have.
Drupal has a tremendous amount of variation between sites, and lots of
configuration that ends up in the database. This certainly colors my
perspective -- and that's why I think it's important to be able to run
BDD tests on a copy of any production database.
I'm not sure that's the same for LedgerSMB -- but it would certainly
help track down issues if people customize their database in ways we
don't expect.
What we're really talking about here is how to set up test data --
whether we ship a test database already containing data our tests rely
upon, or have those dependencies created when running the tests.
I pretty strongly advocate the latter -- create the configurations/data
we are testing for at the start of a test run, if they don't already
exist. And make it safe to re-run a test on the same database.
I don't mind cleaning up test data when a test fails during
development, but as long as tests complete, it should be possible to
run them multiple times against the same db.
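To illustrate what I mean by "safe to re-run": a setup step that looks
for the data a scenario needs and only creates it when it's missing.
A rough sketch below -- Python/psycopg2 purely for illustration, and
the table/column names are invented rather than the real LedgerSMB
schema -- just to show the shape of the idea:

    # Sketch only: create the data a scenario depends on if it isn't
    # there yet, so the same test can run repeatedly against one db.
    import psycopg2

    def ensure_test_customer(conn, name="BDD Test Customer"):
        """Return the id of the test customer, creating it if needed."""
        with conn.cursor() as cur:
            cur.execute("SELECT id FROM customer WHERE name = %s", (name,))
            row = cur.fetchone()
            if row:
                return row[0]   # left over from an earlier run -- reuse it
            cur.execute("INSERT INTO customer (name) VALUES (%s) RETURNING id",
                        (name,))
            customer_id = cur.fetchone()[0]
        conn.commit()
        return customer_id

    # A "Given a customer ... exists" step would call this at the start
    # of the scenario instead of assuming shipped fixture data.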
>> > Additionally, John and I were talking about supporting test infrastructure
>> > and we agree that it would be tremendously helpful to be able to see
>> > screenshots of failing scenarios and maybe to be able to see screenshots of
>> > various points in non-failing tests too. Since Travis storage isn't
>> > persistent, we were thinking that we'd need to collect all screenshots as
>> > "build artifacts" and upload them into an AWS S3 account for inspection.
>>
>> Email to ticket system?
>> Or S3...
> Michael makes a really good point here.
> Perhaps the easiest way of capturing the screenshots is not to use S3,
> but have a github project (eg: ledgersmb-bdd-results) that we can raise
> a ticket against for failing builds with associated screenshots attached.
> At the same time we could use "git annex" to store all screenshots for a
> test in a new git branch (or just simply a tag) in the
> ledgersmb-bdd-results project repository.
>
> Storing "good" results probably should only be done if a specific flag
> is passed in the PR commit message.
> While all screenshots (good and bad) should be stored if a single test
> fails.
However we store them, I suggest we at least store "good" results --
especially screenshots -- for each release. That allows comparing one
version against the next, and gives you a place to go back to when you
want to see "what did this look like in version x?"
S3 storage seems to be built into many test runners like Travis; I'm
guessing that's the fastest/easiest way to get up and running.
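If we go that route, the config is pretty small. A sketch of what I
believe the Travis artifacts addon looks like (untested; the region,
paths and target layout are placeholders, and the AWS credentials
would live in encrypted ARTIFACTS_KEY / ARTIFACTS_SECRET /
ARTIFACTS_BUCKET environment variables):

    # .travis.yml fragment (sketch) -- push screenshots to S3 after a run
    addons:
      artifacts:
        s3_region: "us-east-1"                # placeholder region
        paths:
          - "$TRAVIS_BUILD_DIR/screenshots"   # wherever the BDD run writes them
        target_paths:
          - "$TRAVIS_REPO_SLUG/$TRAVIS_BUILD_NUMBER"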
The Matrix project uses Jenkins as a test runner, and the runs are
public, so you can access artifacts just by visiting their Jenkins
instance, no login necessary. Can Travis do the same?