Re: Deciding on a default company setup for BDD tests
- Subject: Re: Deciding on a default company setup for BDD tests
- From: David G <..hidden..>
- Date: Fri, 22 Jan 2016 09:54:59 +0800
Hi All,
I agree with Michael's comments, with a couple of extra thoughts inline
below.
On 19/01/16 09:13, Michael Richardson wrote:
> Erik Huelsmann <..hidden..> wrote:
> > Chris, John and I have been slowly working our way to creating infrastructure
> > on which we can base browser-based BDD tests. We had some problems with race
> > conditions between the HTML/JS renderer (PhantomJS) and the expectations
> > being tested in the test-driver (Selenium::Driver). However, these have been
> > fixed as of this morning.
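For reference, the pattern that cures those races is an explicit wait:
poll until the renderer has actually produced the element, rather than
asserting against it immediately. A minimal sketch, assuming a PhantomJS
WebDriver endpoint on the default port (the URL and selector here are
illustrative, not our actual step code):

    use strict;
    use warnings;
    use Selenium::Remote::Driver;
    use Selenium::Waiter qw(wait_until);

    my $driver = Selenium::Remote::Driver->new(browser_name => 'phantomjs');
    $driver->get('http://localhost:5762/login.pl');

    # wait_until re-runs the block (swallowing "not found" errors) until
    # it returns something truthy or the default timeout expires.
    my $login_box = wait_until {
        $driver->find_element('input[name="login"]', 'css')
    };
    die "login box never appeared" unless $login_box;

    $driver->quit;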
>
> WOOHOO!
> Before PhantomJS became available, with the firefox plugin, I found it best
> to run it all under Xnest or Xvnc, so that I could control the screen
> resolution. Otherwise, whether or not certain things displayed depended upon
> the size of the display.... With PhantomJS that shouldn't be an issue, I think.
>
> > Earlier today, I merged the first feature file (2 tests) to 'master'. This
> > feature file does nothing more than just navigate to /setup.pl and /login.pl
> > and verify that the credentials text boxes are displayed.
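For anyone who hasn't read it yet, a feature file along those lines looks
roughly like this (the step wording below is illustrative, not the merged
file's exact text):

    Feature: Login page
      Scenario: Credential fields are displayed
        Given a running LedgerSMB instance
        When I navigate to "/login.pl"
        Then I should see the "username" text box
        And I should see the "password" text box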
>
> > Now that we're able to create feature files and write step files (and we know
> > what we need to do to prevent these race conditions), I'm thinking that we
> > need to devise a generally applicable structure for how tests are initialized,
> > torn down, cleaned up, and so on.
>
> Yes.
>
> > John and I were talking about how we'd like tests to clean up behind themselves,
> > removing database objects that have been added in the testing process, such as
> > databases, (super/login) roles, etc...
>
> yes, also one might sometimes like to write the test to validate that the
> resulting database objects exist.
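That check is a one-line query against the catalogs, e.g. (role name
illustrative, $dbh connected as in the teardown sketch above):

    my ($found) = $dbh->selectrow_array(
        q{SELECT 1 FROM pg_roles WHERE rolname = ?}, undef, 'bdd_test_login');
    die 'expected login role is missing' unless $found;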
>
> I suggest a basic set of infrastructure, including logins, a few customers
> and some transactions. Ideally, one would then start a transaction and open
> the HTTP port within the transaction...
>
> > To start with the first and foremost question: do we want our tests to run
> > successfully on a copy of *any* company (as John stated he would like, on IRC)
> > or do we "design" the company setups we want to run our tests on, from
> > scratch, as I was aiming for? (Note that I wasn't aiming for regenerating all
> > setup data on each scenario or feature; I'm just talking about making sure we
> > *know* what's in the database -- we'd still run on a copy of a database set
> > up according to this plan).
>
> By *any* company, you mean, I could run it against (a copy of) my database?
> I think that is not useful to focus on right now.
I agree that it's probably not a good thing to focus on right now, but I
think it's worth keeping in mind so the tests aren't written in a way
that rules it out as a future possibility.
In the long run, rather than being designed to run against a *live*
database, I think the tests should, when pointed at a "non test"
database, copy it to a new database named ${name}-bdd-test and run
against the copy; a sketch of the copy step follows below.
I think this is the better long-term solution because, for many
scenarios, the audit features we have may make it impossible to properly
remove entries from the database afterwards.
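Copying the database is cheap in PostgreSQL, since CREATE DATABASE can
clone from a template. A sketch of the copy step (names illustrative; the
source database must have no other open connections while it is cloned):

    use strict;
    use warnings;
    use DBI;

    my $company = 'mycompany';
    my $test_db = "$company-bdd-test";

    my $dbh = DBI->connect('dbi:Pg:dbname=postgres', 'postgres', undef,
                           { RaiseError => 1, AutoCommit => 1 });
    # Clone the company database wholesale into the throwaway test copy.
    $dbh->do(qq{CREATE DATABASE "$test_db" WITH TEMPLATE "$company"});
    $dbh->disconnect;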
>
> > Additionally, John and I were talking about supporting test infrastructure
> > and we agree that it would be tremendously helpful to be able to see
> > screenshots of failing scenarios and maybe to be able to see screenshots of
> > various points in non-failing tests too. Since Travis storage isn't
> > persistent, we were thinking that we'd need to collect all screenshots as
> > "build artifacts" and upload them into an AWS S3 account for inspection.
>
> Email to ticket system?
> Or S3...
Michael makes a really good point here.
Perhaps the easiest way of capturing the screenshots is not to use S3,
but to have a GitHub project (e.g. ledgersmb-bdd-results) against which
we can raise a ticket for each failing build, with the associated
screenshots attached.
At the same time we could use "git annex" to store all screenshots for a
test run in a new git branch (or simply a tag) in the
ledgersmb-bdd-results repository.
Storing "good" results should probably only happen when a specific flag
is passed in the PR commit message, while all screenshots (good and bad)
should be stored whenever a single test fails.
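Whatever the storage backend, the capture itself is already covered by
Selenium::Remote::Driver; the open question is only where to hook it. A
sketch of a failure hook (the sub name and file layout are assumptions,
not existing code):

    # Call this from the BDD harness's scenario-failure hook.
    sub screenshot_on_failure {
        my ($driver, $scenario_name) = @_;
        (my $safe = $scenario_name) =~ s/\W+/_/g;
        # capture_screenshot() saves the current viewport as a PNG.
        $driver->capture_screenshot("artifacts/$safe.png");
    }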
>
> --
> ] Never tell me the odds! | ipv6 mesh networks [
> ] Michael Richardson, Sandelman Software Works | network architect [
> ] ..hidden.. http://www.sandelman.ca/ | ruby on rails [