
Re: LedgerSMB Scalability Bottleneck-- proposal



Hi Chris,

I have some familiarity with the db structure, but by no means a complete
knowledge of its design. From what little I know, it seems to me that the
database structure is the main problem for scalability. (I know little to
no Perl, so I can't comment on optimisations there.)

It looks like everything is stored in one main table (acc_trans) instead
of sub-tables. One of the things I like about this is that it makes
understanding the application design much easier, and therefore lowers the
learning curve for customisation. If one looks at TinyERP or Openbravo,
there are over 150 tables!  The downside is scalability.
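
To make the single-ledger-table idea concrete, here is a minimal sketch
using Python's built-in sqlite3. The column names are hypothetical and
simplified, not the actual LedgerSMB schema; the point is only that every
transaction type lands in one table as balanced debit/credit line pairs:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE acc_trans_demo (
        trans_id   INTEGER,   -- invoice/payment this line belongs to
        chart_id   INTEGER,   -- account being debited or credited
        amount     NUMERIC,   -- negative = debit, positive = credit
        transdate  TEXT
    )
""")
# Invoices, payments, journal entries: all become rows in the same table.
conn.executemany(
    "INSERT INTO acc_trans_demo VALUES (?, ?, ?, ?)",
    [
        (1, 10, -100.00, "2008-03-01"),  # AR invoice: debit receivables
        (1, 20,  100.00, "2008-03-01"),  #             credit income
        (2, 30, -100.00, "2008-03-02"),  # payment:    debit cash
        (2, 10,  100.00, "2008-03-02"),  #             credit receivables
    ],
)
# Double-entry sanity check: each transaction's lines sum to zero.
rows = conn.execute(
    "SELECT trans_id, SUM(amount) FROM acc_trans_demo GROUP BY trans_id"
).fetchall()
print(rows)
```

The upside for the reader is that one query surface covers the whole
ledger; the downside is that every posting, report, and mass operation
contends on this one growing table.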

Still, 4,000 transactions is not really a huge amount of processing for a
database. It should handle that with no problem. Maybe the Perl routines
could be optimised?

Regards

Mark

> One of the issues that LedgerSMB can run into in larger environments is
> that large processes can hit client timeout issues, especially when
> large amounts of data are passed back and forth.
>
> In one case, I am running into issues where approx. 4k invoices are being
> paid at once.  Even when these are consolidated into stored procedures as
> much as possible, the overhead in the db is causing the web pages to time
> out.
>
> My proposal is to offer a queue system for mass transaction processing.
> This would allow one to queue thousands of transactions which could then
> be
> entered into batches or posted in the background.  A message could be
> delivered to the user when this completes.
>
> From a user perspective, a mass operation (such as a mass payment) would
> bring you to a screen stating that the request had been queued.  The user
> could then get on with other work and receive an alert when the batch
> finishes.  When JavaScript is enabled, a JavaScript alert would appear on
> the next screen loaded after the batch finishes posting.
>
> From a technical perspective, we would just throw the records, in the
> format the application expects to process them, into a holding table,
> save a batch record in another queue table, and then use the notify
> framework to let a resident Perl script know to initiate the process.
> That script would simply initiate an asynchronous set of PostgreSQL
> stored procedures.  When these procedures finish, they would add a
> message to a message-queue table, which could be checked in a variety of
> ways, including JavaScript alerts and a special message-checking
> template.
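
The flow Chris describes above can be sketched roughly as follows. This is
a minimal, self-contained illustration using Python's built-in sqlite3 in
place of PostgreSQL, with a direct function call standing in for the
LISTEN/NOTIFY wake-up of the background worker; all table and column names
here are hypothetical, not LedgerSMB's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Holding table: records queued in the format the app would post.
    CREATE TABLE payment_queue (
        batch_id   INTEGER,
        invoice_id INTEGER,
        amount     NUMERIC
    );
    -- Batch record the background worker picks up.
    CREATE TABLE batch (
        id     INTEGER PRIMARY KEY,
        status TEXT DEFAULT 'pending'
    );
    -- Message-queue table the UI checks for completion alerts.
    CREATE TABLE user_message (
        batch_id INTEGER,
        message  TEXT
    );
""")

def enqueue_mass_payment(payments):
    """Web request path: queue the work and return immediately."""
    cur = conn.execute("INSERT INTO batch DEFAULT VALUES")
    batch_id = cur.lastrowid
    conn.executemany(
        "INSERT INTO payment_queue VALUES (?, ?, ?)",
        [(batch_id, inv, amt) for inv, amt in payments],
    )
    # With PostgreSQL, a NOTIFY here would wake the resident worker
    # instead of the caller invoking it directly.
    return batch_id

def run_worker(batch_id):
    """Background path: post the batch, then leave a user message."""
    n = conn.execute(
        "SELECT COUNT(*) FROM payment_queue WHERE batch_id = ?",
        (batch_id,),
    ).fetchone()[0]
    # ... the real posting logic (stored procedures) would run here ...
    conn.execute(
        "UPDATE batch SET status = 'posted' WHERE id = ?", (batch_id,)
    )
    conn.execute(
        "INSERT INTO user_message VALUES (?, ?)",
        (batch_id, f"Batch {batch_id}: {n} payments posted"),
    )

# The 4,000-invoice case from the thread: queue, process, then alert.
batch = enqueue_mass_payment([(i, 10.0) for i in range(1, 4001)])
run_worker(batch)
msg = conn.execute(
    "SELECT message FROM user_message WHERE batch_id = ?", (batch,)
).fetchone()[0]
print(msg)
```

The web page only ever pays the cost of the inserts into the holding and
batch tables; the expensive posting happens after the HTTP response has
already gone back, which is what sidesteps the client timeout.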
>
> Does anyone have any better ideas?
>
> Best Wishes,
> Chris Travers
> _______________________________________________
> Ledger-smb-users mailing list
> ..hidden..
> https://lists.sourceforge.net/lists/listinfo/ledger-smb-users
>