
Re: Proof of concept structs for 2.0



On Fri, Mar 12, 2010 at 7:29 PM, Adam Thompson <..hidden..> wrote:
> Chris Travers wrote:
>> LedgerSMB::Web::Request is designed specifically to provide HTTP
>> wrappers for workflow scripts.  Thick clients wouldn't use it.
>
> I'm not sure this is really the best way to handle workflow - what about
> workflow in the non-web (CLI, thick client, etc.) scenario?  OTOH, I
> don't have any better suggestions right now.  For that matter, I'm not
> even sure that the workflow concept (à la LSMB) is valid in non-web cases.

Well, the workflow concept is different outside a web-app
environment.  The basic concept seems applicable in some cases, but I
can't see how a single implementation could work across both
environments.

This might be something where the workflow scripts could eventually be
broken into a top-half and a bottom-half.

The workflow scripts are also designed to be lightweight, meaning
that if the wheel has to be reinvented somewhat, it isn't a lot of
reinventing.  The real reason, though, was to make the system easy to
customize.

> Interesting... I'm accustomed to simply using transaction isolation
> levels to accomplish this.  Since Pg defaults to READ COMMITTED,
> wouldn't converting the affected SQL to use ...FOR UPDATE solve the same
> problem?  I get that batch runs typically can't all be run inside a
> single transaction, but I think in almost all cases I've worked around
> that by using some sort of checkpointing scheme, even at the expense of
> performance.

There are two things we get through the current system.  The first is
that we can lock records across transactions for the minimal set of
workflows that need this (bulk payment processing).  The second is
that we can see which user has a record locked.  Both are much harder
to do in other environments.  Note that a lock doesn't affect
everything; it is rarely used outside things like bulk payment
processing.

Basically, the locking is only used for things like preventing two
people from paying the same invoice :-)  Most other operations are
designed to be fully safe under concurrent use.  For example, if two
people try to approve a batch at the same time, the batch will not be
posted to the books twice or anything like that.

(This locking is hence designed as a system for controlling access
to certain transactions in workflows where no other proper solution
exists.)
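A minimal sketch of how such an application-level lock might be taken
(the table and column names here are hypothetical illustrations, not
LedgerSMB's actual schema):

```sql
-- Hypothetical sketch: claim an invoice for payment processing.
-- The UPDATE only succeeds when nobody else holds the lock, so two
-- users cannot both claim the same invoice.
UPDATE invoice
   SET locked_by = :session_id
 WHERE id = :invoice_id
   AND locked_by IS NULL;
-- Check the affected row count: 1 means we acquired the lock,
-- 0 means another session already holds it (and locked_by says who).
```

Because the lock is just a column value, it survives across database
transactions, which a plain SELECT ... FOR UPDATE cannot do.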

>  Horrid memories of Clipper apps permanently locking files
> that could never be unlocked probably causes this bias.

Early SQL-Ledger 2.8 occasionally had this problem too.  We solve it
in a different way, though it has something of a performance cost: we
use a locked_by column which references the session table.  When a
session is deleted, the locked_by fields referencing it are set to
NULL.  This means that locks disappear when the session times out.
There's also a session management interface for forcibly terminating
users' sessions.
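In schema terms, that release-on-logout behavior falls out of a
foreign key declared with ON DELETE SET NULL.  A simplified sketch
(the real tables naturally carry more columns than this):

```sql
CREATE TABLE session (
    id    serial PRIMARY KEY,
    token text NOT NULL
);

CREATE TABLE invoice (
    id        serial PRIMARY KEY,
    -- When the session row is deleted (logout, timeout cleanup, or an
    -- admin terminating the session), PostgreSQL clears locked_by
    -- automatically, releasing the lock with no extra application code.
    locked_by integer REFERENCES session (id) ON DELETE SET NULL
);
```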

I see no reason to change this behavior in 2.0 at the moment,
especially since these locks only affect bulk payment processing.


>> First, ideally the authentication mechanism for a given installation
>> should be admin-defined.  For example, if you use HTTP_KRB5, you
>> really don't want users confused if they get authentication prompts so
>> it would be rather concerning if this were to change.
>
> That's my point - if a user switched auth methods in the middle of a
> session, I would assume that was an attack of some sort, not a valid
> sequence of events.

That's a sensible suggestion and I think it needs to be implemented.
>
>> struct LedgerSMB::Session, {
>>      id     =>  '$',
>>      token  =>  '$',
>>      dbclient =>  '$', # Pg type inet, connection of client connecting to Pg
>>      type =>  '$', #type of session (web, etc)
>>      auth_module =>  '$', #name of auth module
>> };
>> This could be subclassed for web interfaces by also adding a remote_addr field:
>> struct LedgerSMB::Session::Web, {
>
> That makes more sense to me, seems more obvious and more likely to help
> ensure correctness of implementation.  A change in auth_module would
> thus represent a new session altogether - which is sane IMHO, as an
> attack in that mode would be indistinguishable from an administrative
> configuration change or simply a valid connection from another web
> instance or another type of client.

Further, it allows for different approaches to handling the session
itself.  For example, there's no reason one couldn't require only
one session per user (we know the username when we retrieve the
session, so it's not part of the data structure itself, but it
would be part of the database structure).  This could be
accomplished with a simple CREATE UNIQUE INDEX statement.
Alternatively, one could allow multiple sessions per user.
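For instance, assuming the session table carries a username column
(a hypothetical name for illustration), the one-session-per-user
policy would be a one-liner:

```sql
-- Hypothetical: a second concurrent session for the same user would
-- violate the index and be rejected at INSERT time.
CREATE UNIQUE INDEX session_one_per_user ON session (username);
```

Dropping the index restores the multiple-sessions-per-user behavior,
so the policy stays an installation-level choice rather than code.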

Best Wishes,
Chris Travers