Hi,

On 01/23/2016 08:12 AM, Erik Huelsmann wrote:
What I have in mind is along the lines of "orders that get created get closed", "invoices that get created get fully paid", that sort of thing. So when your test expects to see one open invoice, it doesn't then see two the next time.

I think it's reasonable to say that running tests on a production database will change your overall balances (i.e. don't do that!), but I find that during testing, especially when trying to resolve a thorny issue I don't understand, there are lots of small, incremental changes. I don't want to have to wipe and reload the database every time, especially when I don't get it right the first time.

I think the main point here is that for a lot of the setup steps, the step definitions should check whether the data already exists before creating it -- particularly things like test accounts, test customers, test parts, test warehouses, etc. And this will need to be split out into features, e.g.:

Feature: create a customer and vendor -- this feature should test the interface for creating customers and vendors, and should not rely upon background steps to set these up, because the interface itself is what's under test. At the end, it should delete the customers and vendors it created. (Hmm, I'm not seeing that this is possible... maybe set the end date for the customer to the past?)

Feature: create parts/services -- this feature tests the interface for adding/editing parts. In its background steps it creates the appropriate income/COGS accounts that will be used. The setup steps for the background create the accounts if they do not exist, and succeed without changing anything if they do exist -- for example:

  Background:
    Given accounts:
      | accno | name         | flags           |
      | 2410  | COGS - parts | AR_paid,AP_paid |

(or whatever)... At the end of the feature, mark all created parts obsolete, so the next test run can re-insert the same SKUs, etc.

Feature: create sales orders -- this feature would put the parts and customers it uses into the background section, using steps that populate parts, accounts, and customers as before: create them if they don't exist, pass without changing anything if they do exist. A rough sketch of such a feature follows below.

In other words, I'm proposing that each feature tests one module (or workflow) and uses background steps to provide the necessary supporting data. It should be possible to run each feature multiple times in the same database -- whatever we're actually testing should be cleaned up sufficiently to run again without throwing errors or failures, while the supporting data used in each feature persists for future runs. And each of those background data steps needs its own feature to test that the corresponding interface works correctly -- and those features do need to clean up after themselves for future runs...
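To make that concrete, here's a rough sketch of what the sales order feature might look like. Everything in it -- the step phrasings, the customer name, the SKU, the account number and flags -- is a placeholder for illustration, not a reference to existing step definitions:

  Feature: Create sales orders
    The Background populates supporting data idempotently: each
    step creates its record only if it doesn't already exist, and
    passes without changing anything if it does.

    Background:
      Given accounts:
        | accno | name         | flags           |
        | 2410  | COGS - parts | AR_paid,AP_paid |
      And a customer "Test Customer"
      And a part with SKU "TEST-PART-1"

    Scenario: Enter and close a sales order
      When I create a sales order for "Test Customer" with 1 x "TEST-PART-1"
      Then I should see one open sales order for "Test Customer"
      # Clean up the thing under test: close the order, so the next
      # run sees exactly one open order again instead of two.
      When I close the sales order
      Then I should see no open sales orders for "Test Customer"

That way "orders that get created get closed", while the accounts, customer, and part stick around for future runs.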
This kind of testing, I think, reaches the limits of BDD. We're not going to be able to verify through BDD that the math is handled correctly through every phase, on copies of different databases. We have unit tests for testing individual module functionality, and BDD is good for user interface testing... We might need another layer for the business logic -- integration testing. For those kinds of tests, having a clean, well-known starting point for the database seems necessary.
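Just to sketch the idea (the @fresh-database tag and the hook behind it are hypothetical -- nothing like them exists today), such integration features could be marked with a tag whose hook restores a known database snapshot before the feature runs:

  @fresh-database
  Feature: Posting a sales invoice updates the balances
    # A hook bound to @fresh-database would reload a well-known
    # database snapshot before this feature runs, so the balance
    # assertions below start from a clean, predictable state.

    Scenario: Post an invoice and check the trial balance
      Given a posted sales invoice for 100.00 to "Test Customer"
      Then the trial balance should show 100.00 under "Accounts Receivable"

Cheers,

John Locke
http://www.freelock.com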