
LedgerSMB Scalability Bottleneck -- proposal



One of the issues that LedgerSMB can hit in larger environments is that long-running operations can trigger client timeouts, especially when large amounts of data are passed back and forth.

In one case, I am running into issues where approx. 4,000 invoices are being paid at once.  Even when the work is consolidated into stored procedures as much as possible, the overhead in the db is causing the web pages to time out.

My proposal is to offer a queue system for mass transaction processing.  This would allow one to queue thousands of transactions, which could then be entered into batches or posted in the background.  A message could be delivered to the user when this completes.
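To make the shape of this concrete, here is a minimal sketch of the queue-and-worker pattern in Python (purely illustrative -- the real implementation would be a resident Perl script and PostgreSQL stored procedures, and all names below are hypothetical):

```python
# Illustrative sketch only: transactions are queued by the web request,
# a background worker posts them, and a completion message is recorded
# for the user to pick up later.
import queue
import threading

pending = queue.Queue()           # holding area for queued batches
messages = []                     # stands in for a user-message table
messages_lock = threading.Lock()

def worker():
    """Post queued batches in the background, then record a message."""
    while True:
        batch = pending.get()
        if batch is None:         # sentinel: shut the worker down
            break
        posted = len(batch["transactions"])   # real code would post here
        with messages_lock:
            messages.append(
                f"Batch {batch['id']}: {posted} transactions posted")
        pending.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()

# The web request handler would just enqueue and return immediately:
pending.put({"id": 1,
             "transactions": [f"invoice-{n}" for n in range(4000)]})
pending.join()                    # in reality the user polls for messages
pending.put(None)
t.join()
print(messages[0])
```

The key property is that the web request only pays the cost of the enqueue; the 4,000-invoice posting happens off the request/response cycle entirely.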

From a user perspective, a mass operation (such as a mass payment) would bring the user to a screen stating that the request had been queued.  The user could then move on to other work and receive an alert when the batch finishes.  When JavaScript is enabled, a JavaScript alert would appear on the next screen loaded after the batch finishes posting.

From a technical perspective, we would insert the records, in the format the application expects to process them, into a holding table, save a batch record in another queue table, and then use the notify framework to let a resident Perl script know to initiate the process.  That script would simply kick off a set of asynchronous PostgreSQL stored procedures.  When these procedures finish, they would add a message to a message queue table, which could be checked in a variety of ways, including JavaScript alerts and a special message-checking template.
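The table flow above can be sketched as follows.  This uses Python with sqlite3 so it is self-contained and runnable; the real system would use PostgreSQL, with NOTIFY/LISTEN waking the resident Perl script instead of the direct process_batch() call below.  Table and column names here are made up for illustration, not LedgerSMB's actual schema:

```python
# Sketch of the holding-table / batch-queue / message-table flow.
# sqlite3 stands in for PostgreSQL; NOTIFY is simulated by calling
# process_batch() directly.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE pending_transactions (  -- holding table, app-native format
        batch_id INTEGER, payload TEXT);
    CREATE TABLE batch_queue (           -- one row per queued batch
        batch_id INTEGER PRIMARY KEY, status TEXT);
    CREATE TABLE user_messages (         -- checked by JS alerts / template
        body TEXT);
""")

def enqueue_batch(batch_id, payloads):
    """Web-facing side: stash the records and the batch marker."""
    db.executemany("INSERT INTO pending_transactions VALUES (?, ?)",
                   [(batch_id, p) for p in payloads])
    db.execute("INSERT INTO batch_queue VALUES (?, 'queued')", (batch_id,))
    # In PostgreSQL: NOTIFY here, and the Perl listener picks it up.

def process_batch(batch_id):
    """Background side: stands in for the asynchronous stored procedures."""
    n = db.execute(
        "SELECT count(*) FROM pending_transactions WHERE batch_id = ?",
        (batch_id,)).fetchone()[0]
    db.execute("UPDATE batch_queue SET status = 'posted' WHERE batch_id = ?",
               (batch_id,))
    db.execute("INSERT INTO user_messages VALUES (?)",
               (f"Batch {batch_id} posted: {n} transactions",))

enqueue_batch(1, [f"payment-{i}" for i in range(4000)])
process_batch(1)
print(db.execute("SELECT body FROM user_messages").fetchone()[0])
```

The message table is the only contract between the background side and the UI, which is what lets the JavaScript alert and the message-checking template share one mechanism.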

Does anyone have any better ideas?

Best Wishes,
Chris Travers