First, great thoughts here. I think we are fast approaching total agreement.
Lots of snips in the below bit, not marked :-)
On Mon, Nov 21, 2011 at 8:42 AM, John Locke <..hidden..> wrote:
> Ah, yes, I wasn't thinking through the transaction handling over
> multiple requests.
> If we skip doing transactions for more than single request actions (like
> using the shipping field to partially ship an order and generate an
> invoice), then I don't think we get into that much trouble.
> Drupal does provide a batch API, which can be used to iterate through
> large result sets while performing some action. For the user interface,
> it processes a configurable number of items (generally 50 or so) and then
> updates its progress bar. For calls from script, you typically process
> the first batch and then leave the rest of the processing to a cron job
> that processes a batch at a time. One approach to bulk operations...
Now that I'm starting to look at Node.js and related technologies (Perl Object Environment, for example), I can see what you are thinking about. More thoughts on that below.
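As an aside, the batch pattern you describe could be sketched roughly like this (just a strawman; the function names and chunk size are invented, not anything Drupal or LedgerSMB actually provides):

```javascript
// Sketch of chunked batch processing: walk a large result set a chunk
// at a time, reporting progress after each chunk. The first chunk can
// run in the web request; a cron job would drain the rest.
function makeBatch(items, chunkSize) {
  let pos = 0;
  return {
    // Process one chunk; returns progress as a fraction (0..1).
    step(processItem) {
      const end = Math.min(pos + chunkSize, items.length);
      for (; pos < end; pos++) processItem(items[pos]);
      return items.length === 0 ? 1 : pos / items.length;
    },
    done() {
      return pos >= items.length;
    },
  };
}

// Example: 120 items, chunks of 50.
const batch = makeBatch(Array.from({ length: 120 }, (_, i) => i), 50);
const seen = [];
const progress = batch.step((x) => seen.push(x));
```

The point being that the same `step` call works whether the caller is a progress-bar UI or a cron job.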
>> Now, if this wasn't over HTTP, we could make the application far more
>> responsive, and lead to better productivity, but as it is, we are
>> fairly limited. This is one reason why I see the database-level API
>> as such a big deal. It allows one to tune performance by removing the
>> limitations of HTTP in this regard.
>> If we can do it, I would much prefer to confine these problems to the
>> web application than I would to export them to every other application
>> that might interface to LedgerSMB. I would therefore suggest that
>> running LedgerSMB over HTTP is a bit kludgy, and that long-term I
>> would like to see the web app be secondary to a desktop app directly
>> connecting to the database in most environments.
> Have you looked at node.js?
Just starting to look into it and related technologies.
>> One thing to keep in mind is that the db-level interface has taken a
>> lot of inspiration from things I think SOAP and friends do well
>> (discoverability, etc). Additionally you can do things which are
>> (IMNSHO) insane to try to do over HTTP, like control over
>> database transactions.
> Discoverability is the one aspect of SOAP I like...
> I would say node is to XMPP what REST is to SOAP -- a much simpler,
> friendly way to handle long-running connections. It is basically a
> callback-oriented pattern for handling large numbers of simultaneous
> long-running connections.
Ok. So we are talking about a network server here, and presumably for a long-running connection we'd have to use an existing protocol or design our own, correct? In which case we may be:
1) largely in agreement,
2) asking different questions (which is a good thing).
>> That doesn't mean, however, that there can't be a common framework
>> that runs stateless over HTTP and statefully over something like
>> XMPP where there are questionable cases.
> +1. I think this is a good approach -- start with RESTful services for
> resource access, creation/deletion, etc.
Exactly what I was going to propose.
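To make that concrete, the RESTful layer could be little more than a verb-plus-path dispatcher over resources. The paths and handlers below are invented, not a proposal for actual LedgerSMB URLs:

```javascript
// Minimal REST-style dispatcher: map (method, path) onto resource
// handlers, with :params captured from the path.
const routes = [];

function route(method, pattern, handler) {
  // Turn '/invoice/:id' into a regex with a named capture group.
  const re = new RegExp(
    '^' + pattern.replace(/:(\w+)/g, '(?<$1>[^/]+)') + '$'
  );
  routes.push({ method, re, handler });
}

function dispatch(method, path) {
  for (const r of routes) {
    const m = method === r.method && path.match(r.re);
    if (m) return r.handler(m.groups || {});
  }
  return { status: 404 };
}

route('GET', '/invoice/:id', ({ id }) => ({ status: 200, body: { id } }));
route('POST', '/invoice', () => ({ status: 201 }));
```

Creation, retrieval, and deletion of resources all fit this shape; it's the multi-step, stateful operations that don't, which is where the other transports come in.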
> And for batch operations, we could add a long-running connection like
> node or xmpp or access to the database API -- given what I currently
> see, node is the hot one right now, and what I would be most interested
> in working with.
In other words, we need an API that can be encapsulated in other transports. I like it :-)
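The shape I have in mind is roughly this: a transport-agnostic core, with each transport reduced to decoding a request, calling the core, and encoding the reply. Everything here is invented for illustration:

```javascript
// "One API, many transports": the core knows nothing about HTTP or
// XMPP; adapters wrap it.
const core = {
  ping: () => 'pong',
  add: ({ a, b }) => a + b,
};

// A stateless, HTTP-style adapter: one call per request, no memory.
function httpStyleCall(method, args) {
  if (!(method in core)) return { error: 'no such method' };
  return { result: core[method](args) };
}

// A stateful, connection-style adapter wraps the same core but can
// also keep per-connection state (sessions, transactions, sequence
// numbers) for as long as the connection lives.
function makeConnection() {
  const state = { calls: 0 };
  return (method, args) => {
    state.calls++;
    return { seq: state.calls, ...httpStyleCall(method, args) };
  };
}
```

The stateless adapter is what the RESTful interface would use; the stateful one is where things like transaction control could live.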
> I don't think we need per-request tokens -- at least not unpredictable
> ones. I think a counter or something to handle replays might be
> worthwhile (mainly for unreliable connections/resent traffic).
Ok, I see the concern now.
Will respond with some ideas later.
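As a quick strawman in the meantime, the counter idea could work something like this (all names invented):

```javascript
// Replay guard via a per-client monotonic counter: the server rejects
// any request whose counter it has already accepted, so resent or
// replayed traffic is dropped.
function makeReplayGuard() {
  const lastSeen = new Map(); // clientId -> highest counter accepted
  return function accept(clientId, counter) {
    const prev = lastSeen.get(clientId) || 0;
    if (counter <= prev) return false; // replay or resend: reject
    lastSeen.set(clientId, counter);
    return true;
  };
}
```

One caveat: a strict "greater than last seen" rule also rejects legitimate out-of-order delivery, so an unreliable transport might want a small window instead of a single high-water mark.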
> I suppose we could try skipping sessions altogether -- it's just that in
> my experience, something has always come up that necessitated using
> sessions in the API and it's been trivial to support since the
> underlying web application has it already.
I think where sessions are required, something like node.js can give them to us with less overhead concern than we have with HTTP: connection state handling is explicit, and we can outsource it to the operating system below.
I want to think about this one a little more.
>>> Well, server-to-server is certainly the first step. And easiest to adapt
>>> to just about any interface we develop. But today we're doing most web
>>> services for iOS or Android apps. Think about the POS or an inventory
>>> module being available as an app for an Android phone.
>> I think we'd need more details to see what the relevant costs and
>> benefits of using a web service in such an app would be.
>> The questions in my mind become:
>> 1) Is this an environment where the db-level API is appropriate and
>> likely to be available?
>> 2) If not, is this an environment where the document/resource
>> metaphor of HTTP makes sense and where the systems can be loosely
>> coupled? If so, web services are a good choice.
>> 3) If not, then are there other approaches to encapsulating one of
>> the above API's in another protocol that does make sense?
> Part of my thinking is ease of implementation. I'd rather see something
> workable very soon, than something perfect but not for years. Providing
> a relatively simple wrapper for existing functionality seems like the
> shortest path to getting something in place.
> I do think http has the most widespread support. Is there a postgres
> driver for iOS? And for me, there's a comfort issue here -- I am not
> that comfortable allowing the Internet direct access to Postgres --
> perhaps it's secure enough, but I'm not that experienced securing it.
1: JDBC appears to work with iOS.
2: I would suggest there are differences in both use cases and security concerns between mobile access via wifi and via the public internet. This deserves more discussion.
>> I agree that everything should be an API. I am just less convinced
>> that everything in LedgerSMB should be an API over HTTP.
> I would think the web application ideally should get ported to use the
> web services as its API, basically as the first client. If we abstract
> all the data processing out of the web client and into a web service,
> then any other application can do everything the web client can do.
> Nothing prevents us from adding more web services over other transports
> with additional functionality.
So here's my thinking:
1) Maybe the web services portion should, for now, be a subset of what the application can do: essentially a method for atomic, asynchronous operations based on a concept of document processing. One option would be to have the HTTP transport accept document identifiers and track them, so that a web service client can later check back and see whether an identified document was processed.
2) Over other transports, this would be optional and split off.
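The tracking idea in (1) could look roughly like this: a POST hands in a document and gets back an identifier, and the client polls later to see whether it has been processed. Statuses and names are invented for illustration:

```javascript
// Document submission with deferred processing: submit returns an id,
// a worker (cron job or queue consumer) processes pending documents,
// and status() lets a client check back later.
function makeDocumentQueue() {
  const docs = new Map();
  let nextId = 1;
  return {
    submit(doc) {
      const id = String(nextId++);
      docs.set(id, { doc, status: 'pending' });
      return id;
    },
    process(id, worker) {
      const entry = docs.get(id);
      if (!entry || entry.status !== 'pending') return false;
      entry.result = worker(entry.doc);
      entry.status = 'processed';
      return true;
    },
    status(id) {
      const entry = docs.get(id);
      return entry ? entry.status : 'unknown';
    },
  };
}
```

This keeps each HTTP request atomic while still allowing work to complete asynchronously, which is the property I care about.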
>>>>>>> 2. Since companies are separate databases, where do we put the name of
>>>>>>> the company in the URL? <prefix>/store.pl/<company>/<etc...>?
>>>>>> What do you think of the above proposal?
>>>>> I suggest we include the company in the body of the login, and then it's
>>>>> represented in the session cookie. If an external application needs to
>>>>> work with multiple companies, it can manage multiple session ids.
>>>> This is contrary to REST ideals, correct? Not that departures from
>>>> that are necessarily out, but I would prefer to see some justification
>>>> for the added complexity of requiring state handling and cookies.
>>> Well, yes, it is contrary to REST ideals -- but there's definitely room
>>> in REST for actions as well as resources. And I was thinking while
>>> writing this up about what might be an effective way of supporting
>>> transactions -- complete with begin transaction, commit, and rollback posts.
>> I don't see any sane way of handling database transaction controls
>> over HTTP. I think any attempt to do so would significantly reduce
>> the robustness of the controls on the server for anyone accessing the
>> However if the same API can be encapsulated over XMPP, then the
>> problem goes away entirely and now you can be sure of the state enough
>> to expose transaction controls safely.
> Yes, I'd just suggest looking at Node.js/Socket.io as an alternative to
Or Perl Object Environment, etc.
Anyway here is my current thinking:
1) Let's have a RESTful web services interface based on a document exchange metaphor, but with reasonably atomic units of processing. In the longer run, these can be grouped so that larger, more complex documents can be posted and retrieved, basically as a document model.
2) Let's build things with the idea of re-use over connection-oriented, stateful protocols, whatever people want to use that for later. That way people can build solutions over the API using whatever protocols suit their needs. A transaction-control interface can be implemented by the transport handler.
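To illustrate the transport-handler idea: a stateless transport wraps every call in its own transaction, while a stateful one could expose begin/commit/rollback directly. The "database" below is a plain object standing in for Postgres; everything is invented:

```javascript
// Toy store with transaction semantics: writes go to a pending copy
// until commit; rollback discards them.
function makeStore() {
  let committed = {};
  let pending = null;
  return {
    begin() { pending = { ...committed }; },
    set(k, v) { (pending || committed)[k] = v; },
    commit() { if (pending) { committed = pending; pending = null; } },
    rollback() { pending = null; },
    get: (k) => committed[k],
  };
}

// Stateless (HTTP-style) handler: one call, one transaction. A
// stateful transport would instead hand begin/commit/rollback to the
// client over its long-lived connection.
function atomicCall(store, fn) {
  store.begin();
  try {
    fn(store);
    store.commit();
  } catch (e) {
    store.rollback();
    throw e;
  }
}
```

So the core API stays the same; only the transport decides how much transaction control the client sees.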
Does this sound reasonable?