On 24/01/16 05:51, John Locke wrote:
OK, so you have set the end date to the past; you then re-run the
test, which will either be skipped because the customer already
exists, or fail with an error creating the customer.
On 01/23/2016 08:12 AM, Erik wrote:
What I have in mind is along the lines of "orders that get created
get closed", "invoices that get created get fully paid", that sort
of thing. So when your test expects to see one open invoice, it
doesn't then see two the next time.
I think it's reasonable to say that running tests on a production
database will change your overall balances (so don't do that!),
but I find that during testing, especially when trying to resolve
a thorny issue I don't understand, there are lots of small,
iterative, incremental changes. I don't want to have to wipe and
reload the database every time, especially when I don't get it
right the first time.
I think the main point here is that for a lot of the setup steps,
the step definitions should check whether the data already exists
before creating it -- particularly things like test accounts, test
customers, test parts, test warehouses, etc.
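For instance, an idempotent background step could look something like this (a sketch only; the step phrasing and names are hypothetical, not existing step definitions):

```gherkin
Background:
  # Each of these steps creates the record only if it is missing,
  # and passes without changing anything when it already exists.
  Given a customer "BDD Test Customer" exists
  And a part "bdd-test-part" exists
  And a warehouse "BDD Test Warehouse" exists
```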
And this will need to be split out into features -- e.g.:
Feature: create a customer and vendor
-- this feature should test the interface for creating customers
and vendors, and should not rely upon background steps to set
these up, because the interface is what is being tested. At the
end, it should delete the customers and vendors created. (Hmm, not
seeing how this is possible... maybe set the end date for the
customer to the past?)
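A minimal sketch of that feature (step names hypothetical; the end-dating cleanup is the idea floated above, not a settled approach):

```gherkin
Feature: create a customer and vendor

Scenario: create a customer through the interface
  When I create a customer named "BDD Customer"
  Then the customer "BDD Customer" should exist

  # Cleanup so the feature can be re-run: end-date the
  # customer rather than deleting it.
  When I set the end date of customer "BDD Customer" to "2000-01-01"
  Then the customer "BDD Customer" should be inactive
```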
Either way you can't rerun the test (and possibly others that expect
that customer to exist) on the same database.
I think Erik's suggestion that we simply drop the DB and re-clone
before running a set of tests is the most reliable option here.
Feature: create parts/services
I haven't tried this, but I would expect it to subtly change the
process even if it is just a case of needing a single checkbox in a
-- this feature tests the interface for adding/editing parts. In
its background steps, it creates the appropriate income/COGS
accounts that will be used. The setup steps for the background
create the accounts if they do not exist, and succeed without
changing anything if they do exist -- for example:
| accno | name         | flags           |
| 2410  | COGS - parts | AR_paid,AP_paid |
At the end of the feature, mark all created parts obsolete, so the
next test run can re-insert parts with the same SKUs, etc.
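Putting that together, the parts feature could be sketched roughly like this (hypothetical step names; only the account table above comes from the proposal itself):

```gherkin
Feature: create parts/services

Background:
  # Create the supporting accounts only if they do not already exist.
  Given the following accounts exist:
    | accno | name         | flags           |
    | 2410  | COGS - parts | AR_paid,AP_paid |

Scenario: add a part
  When I add a part "bdd-part-001" with COGS account "2410"
  Then the part "bdd-part-001" should be saved

  # Cleanup: mark the part obsolete so the next run can
  # re-insert the same SKU.
  When I mark the part "bdd-part-001" obsolete
```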
Surely this makes the integrity of the tests more difficult to
guarantee? That's aside from the fact that I don't see any way of
then testing the "create account" step more than once, unless you
are going to use a random account number/name generator.
Feature: create sales orders
-- this feature would put the parts and customers it uses into the
background section, using steps that populate parts, accounts, and
customers as before -- create them if they don't exist, pass
without changing anything if they do exist.
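As a sketch (again with hypothetical step names), such a feature might read:

```gherkin
Feature: create sales orders

Background:
  # Idempotent supporting data: created if missing,
  # left untouched if already present.
  Given a customer "BDD Test Customer" exists
  And a part "bdd-part-001" exists

Scenario: enter and close a sales order
  When I create a sales order for "BDD Test Customer"
  And I add 1 unit of part "bdd-part-001"
  And I save the order
  Then the order should be open
  # Cleanup: orders that get created get closed, so the
  # next run does not see an extra open order.
  When I close the order
  Then the order should not appear in open orders
```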
In other words, I'm proposing that each feature tests one module
(or workflow), and uses background steps to provide the necessary
supporting data. It should be possible to run each feature
multiple times in the same database: whatever we're actually
testing should be cleaned up sufficiently to run again without
throwing errors/failures, while the supporting data used in each
feature is allowed to persist for future runs.
And each of those background data steps needs to have its own
feature to test that the interface works correctly -- and these
features do need to clean up for future runs...
This kind of testing, I think, reaches the limits of BDD. We're
not going to be able to verify that the math is handled correctly
through every phase, on copies of different databases, through
BDD tests alone.
We have unit tests for testing individual module functionality,
and BDD is good for user interface testing... We might need
another layer for the business logic testing -- integration
testing. For those kinds of tests, having a clean/well-known
starting point for the database seems necessary.
Ledger-smb-devel mailing list