
Re: Web Services API: URL naming proposal



First some general notes.

One of the real headaches in web app programming is state handling.
With a thick client, you open a connection, perform your operations,
commit your changes, and then close the connection when you log off.
With a web application, a single atomic unit of work may span several
network connections.  As a result a lot of state has to be tracked
across those connections, and in some of our workloads a large
majority of database processing time is actually spent managing the
application state.  A lot of this time could be cut out if we didn't
have to make sure that this information persisted when one database
connection was closed and a new one opened.

I will give you an example.  One of my customers pays a large number
of invoices per week; probably as many as 5000 invoices may be paid
in a single batch payment workflow.  I have good profiling
information on where the web app spends its time.  The basic
selection takes only a couple of seconds (2-3), but once we add the
need to track that someone has selected these invoices, we end up
with about 45-50 seconds of database time for the actual selection
for payment.  The XHTML document generation in some cases takes
another couple of minutes.  We can't use cursors to page through
results because the cursors can't survive the connection teardown
process.  Consequently very little performance tuning is possible at
present.

Now, if this weren't over HTTP, we could make the application far
more responsive and improve productivity, but as it is, we are fairly
limited.  This is one reason why I see the database-level API as such
a big deal: it allows one to tune performance by removing the
limitations HTTP imposes in this regard.

If we can do it, I would much prefer to confine these problems to the
web application rather than export them to every other application
that might interface to LedgerSMB.  I would therefore suggest that
running LedgerSMB over HTTP is a bit kludgy, and that long-term I
would like to see the web app become secondary to a desktop app
connecting directly to the database in most environments.

On Sun, Nov 20, 2011 at 9:33 PM, John Locke <..hidden..> wrote:
> Hi,
>
> On 11/20/2011 04:24 PM, Chris Travers wrote:
>> I think John's points here raise some important questions I'd like to
>> raise here for further discussion:
>>
>> On Sun, Nov 20, 2011 at 9:30 AM, John Locke <..hidden..> wrote:
>>> On authentication, yes we can use http auth headers, but do we want to
>>> explicitly require a session token, too? We're starting to delve into
>>> OAuth -- which adds a layer of complexity but also can take away the
>>> need for the remote system to collect the password at all. This seems
>>> like a good option to support.
>> A couple questions:
>> 1)  For an API aimed at other applications, why have a session token?
>> What does it buy us?  In the main application we use a session token
>> to enforce a full round trip in some cases (XSRF prevention), and to
>> handle discretionary locking in long workflows (actually more
>> properly, timing out of such locks).  Neither of those general
>> requirements apply to a job from an OS Commerce installation which is
>> feeding sales orders into LedgerSMB.
>
> Well, think mobile app for a minute, or desktop app. There may well be
> many cases where you want round trip, transaction handling, and
> anti-XSRF (or at least anti-replay) prevention.

Ok, so we have layers of APIs here, and in a lot of ways a web
services API is almost always going to be the wrong choice for a
desktop app, for the reasons I have explained above.  I could see
mobile applications (in the sense of mobile wifi devices) using the
db interface as well, though ones connecting over the public internet
probably would not.

One thing to keep in mind is that the db-level interface has taken a
lot of inspiration from things I think SOAP and friends do well
(discoverability, etc.).  Additionally you can do things which are
(IMNSHO) insane to try to do over HTTP, like controlling database
transactions.
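
Just to make that concrete, here is roughly what client-side
transaction control looks like against the db-level API.  This is
only a sketch: the stored procedure name (payment__post_batch) and
the connection details are made up for the example, not actual
LedgerSMB names.

    use strict;
    use warnings;
    use DBI;

    # Placeholder credentials and batch id for the example.
    my ($dbuser, $dbpass, $batch_id) = ('neil', 'secret', 104);

    my $dbh = DBI->connect('dbi:Pg:dbname=mycompany', $dbuser, $dbpass,
                           { AutoCommit => 0, RaiseError => 1 });

    # The client, not the server, decides where the transaction ends.
    my $sth = $dbh->prepare('SELECT * FROM payment__post_batch(?)');
    $sth->execute($batch_id);
    my $results = $sth->fetchall_arrayref({});

    if (@$results) {       # stands in for real validation of the batch
        $dbh->commit;      # one atomic unit of work
    } else {
        $dbh->rollback;    # nothing half-applied, nothing to clean up
    }
    $dbh->disconnect;

Over a stateless HTTP API the commit has to happen before the
response goes out, which is exactly the control we give up.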

>
> That is what I hate most about SOAP -- having to do multiple calls and
> manage state. But to a certain extent, it seems unavoidable.

Ever looked at SOAP over XMPP?  For that matter, although it would no
longer be RESTful, I don't see any reason why RESTful approaches
couldn't be encapsulated in XML stanzas and sent over XMPP.  XMPP
could then handle the state and you'd no longer have to worry about
it.

Also, if you are willing to pass HTTP through your firewall to your
accounting server, I am not sure XMPP would be out of the question.
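
As a rough illustration (the namespace, addresses, and element names
below are invented for the example, not any existing protocol), a
request could ride inside an IQ stanza something like this:

    <iq type='set' id='lsmb-req-1' to='ledgersmb.erp.example.com'>
      <request xmlns='urn:example:ledgersmb:api'
               method='POST' resource='/api/1.3/customer/224/note'>
        {"subject":"Follow up","note":"Called re: open invoices"}
      </request>
    </iq>

Because the stream stays open, the server could tie locks and
transaction state to the stream's lifetime instead of handing out
session tokens.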
>
> It's probably not a big deal to make a remote application pass the
> company in the URL (instead of in a login session), but slightly easier
> I would think to omit (in client implementation), to simply pass in with
> authentication.
>
> In many ways, the web application front end is a model for other
> applications that might call the web service -- ideally everything in
> the web application should be reflected in the web service.

Given the headaches dealing with application state we already have to
worry about (including providing an interface for clearing
discretionary locks when someone gets called away while running a
selection for payment), I would highly recommend exercising some
discretion in what we push out to web services.  I would instead
focus on finding more performance-friendly ways of closing that gap,
even when that means ruling out HTTP as a transfer protocol.

Fortunately the areas where this is most likely to be an issue are
also the areas where the database API is likely to be available.

That doesn't mean, however, that there can't be a common framework
which runs statelessly over HTTP and statefully over something like
XMPP for the questionable cases.
>
> In my experience, most web services still do make use of session
> handling, it's not at all an uncommon approach.

I guess I am questioning the need for it.  We don't really have forms
to submit, and requiring server-side tokens is going to mean more API
calls rather than fewer (i.e. you'd have to get a token for each
resource you intend to post).  This means additional latency.

>>
>> 3)  Does the added complexity make sense with general use cases?  I am
>> assuming we are primarily interested in a web services API for
>> server->server integration since a piece of software used primarily by
>> the end user would be more likely to just call the db-level API (which
>> would provide greater control over db transactions, and the like than
>> one would get from a web services interface)?
>
> Well, server-to-server is certainly the first step. And easiest to adapt
> to just about any interface we develop. But today we're doing most web
> services for iOS or Android apps. Think about the POS or an inventory
> module being available as an app for an Android phone.

I think we'd need more details to see what the relevant costs and
benefits of using a web service in such an app would be.

The questions in my mind become:

1)  Is this an environment where the db-level API is appropriate and
likely to be available?

2)  If not, is this an environment where the document/resource
metaphor of HTTP makes sense and where the systems can be loosely
coupled? If so, web services are a good choice.

3)  If not, then are there other approaches to encapsulating one of
the above APIs in another protocol that does make sense?


>
> The recent thread by a Google engineer praising Amazon for making
> everything an API applies here. If you haven't read it:
> https://plus.google.com/112678702228711889851/posts/eVeouesvaVX
>
I agree that everything should be an API.  I am just less convinced
that everything in LedgerSMB should be an API over HTTP.

>
>>>>> 2. Since companies are separate databases, where do we put the name of
>>>>> the company in the URL? <prefix>/store.pl/<company>/<etc...>?
>>>>>
>>>> What do you think of the above proposal?
>>> I suggest we include the company in the body of the login, and then it's
>>> represented in the session cookie. If an external application needs to
>>> work with multiple companies, it can manage multiple session ids.
>> This is contrary to REST ideals, correct?  Not that departures from
>> that are necessarily out, but I would prefer to see some justification
>> for the added complexity of requiring state handling and cookies.
>
> Well, yes, it is contrary to REST ideals -- but there's definitely room
> in REST for actions as well as resources. And I was thinking while
> writing this up about what might be an effective way of supporting
> transactions -- complete with begin transaction, commit, and rollback posts.

I don't see any sane way of handling database transaction controls
over HTTP.  I think any attempt to do so would significantly reduce
the robustness of the controls on the server for anyone accessing the
database.

However, if the same API can be encapsulated over XMPP, then the
problem goes away entirely, and you can be sure enough of the
connection state to expose transaction controls safely.

>
> I'm not entirely opposed to putting the company in the URL -- it's
> certainly a viable approach. However, given the complex structure of
> entity/eca/customer objects alone, having the ability to wrap that in a
> transaction might be desirable...

I don't see how you can possibly have a database transaction safely
span multiple HTTP requests, hence my suggestion to explore XMPP for
these areas.
>
> And I think leveraging the current session handling in the app can
> reduce opening up new security vulnerabilities. Not suggesting we build
> anything new for this, just use what we've already got.
>>
>> Also I suspect (though I will defer to others here) that debugging an
>> incorrect company name may be easier if that shows up in the url in
>> the access logs.
>
> This could easily be printed in a debug log. Not seeing how this is any
> more complex than debugging issues in the current app...
>>
Fair enough.

>>> So one thing is identifying supported formats for the data -- I suggest
>>> we support JSON, multi-part form (e.g. URL encoded like a regular form
>>> post) that returns HTML, and some relatively simple XML. Type can then
>>> be specified via "Accept" header, and also by adding a suffix to the
>>> URL. For example:
>>>
>>> http://myhost/ledgersmb/api/1.3/customer/224.json
>>> http://myhost/ledgersmb/api/1.3/customer/224.xml
>> But those formats are not all entirely equivalent, are they?  JSON
>> and XML are close and could easily be supported together, but they
>> allow nested data structures while form submissions are flat, right?
>> If we support form type submissions as a full API, then this means we
>> have to choose between added maintenance of two very different data
>> representations and forcing the xml and json to the least common
>> denominator, correct?  This becomes a bigger deal as time goes on and
>> more stored procedures expect some form of nesting in argument lists.
>>
>> This being said, I like the use of extensions here, and I think the
>> overall idea is sound.  Now, if we are to do this, I would suggest we
>> go with a plugin model for file types, i.e. require a parser which
>> converts the incoming file into a Perl hashref, so that we can add
>> other file types if we ever have to.  That way if someone really needs
>> plain form submission handling we have an avenue to support that in
>> the future, even if the API might be more complex in order to handle
>> it.
>
> Totally agree. Plugin model for handling the format is the way to go.
> That way if somebody wanted a particular XML dialect, it could be added
> on as well.
>
> JSON is all I'm interested in actually using, though XML is handier for
> testing...
>
> Form posts can be built much like the current web app -- using indexes
> for fields with repeating values (e.g. on the invoice forms, qty_1,
> qty_2, description_1, description_2, price_1, price_2, etc). Cumbersome
> but not that difficult to support.

If we go with a plugin model we can also decide to set it aside for
now until someone needs it.
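
For what it's worth, the plugin contract could be as thin as "body
in, hashref out."  A minimal sketch, with a hypothetical package name
and purely illustrative layout:

    package LedgerSMB::API::Format::JSON;   # hypothetical name
    use strict;
    use warnings;
    use JSON;

    sub mime_type { 'application/json' }

    # Request body in, Perl hashref out; this is all a plugin does.
    sub parse {
        my ($class, $body) = @_;
        return decode_json($body);
    }

    # Perl hashref in, response body out.
    sub render {
        my ($class, $data) = @_;
        return encode_json($data);
    }

    1;

A form-post plugin would just need to unflatten the qty_1/qty_2 style
fields into the same hashref shape before handing it on.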

>
> Main point of abstracting this out though is that the plugin just needs
> to convert it to an appropriate object with the defined properties set

That is true.
>
>>> POST/PUT type should get specified by Content-Type header.
>>>
>>> I suggest we start with the base entity structure that mostly maps to
>>> the database structure, then add "sugar" path shortcuts to make this
>>> easier to use. e.g. all of the below might map to the same item:
>>>
>>> http://myhost/ledgersmb/api/1.3/entity?eca=224
>>> http://myhost/ledgersmb/api/1.3/entity/eca/224
>>> http://myhost/ledgersmb/api/1.3/eca/224
>>> http://myhost/ledgersmb/api/1.3/entity/eca/customer/224
>>> http://myhost/ledgersmb/api/1.3/customer/224
>>> http://myhost/ledgersmb/api/1.3/customer?meta=557 (which might redirect
>>> to the actual eca id)
>> Ok, if you mean the structure of the db api then I would entirely
>> agree (I think the physical structure of the DB is beside the point).
>> So let's do this:
>> 1)  A flat namespace for primary types/identifiers below the API base URL
>> 2)  Open discussion for what other shortcuts should be available for
>> these, and whether they provide additional checks (I am assuming
>> customer would check the entity_class but eca would not?  Which entity
>> class would it check?  If grabbing from entity/...  should we require
>> an entity_id we can check?  That sort of thing)
>
> Yes, exactly. This can start with the raw types/identifiers that map
> relatively straightforwardly to the db schema. By providing versions, we
> can add or change functionality as identified, without having to resolve
> all these issues up front.

Agreed entirely.
>>
>>
>>>
>>> ... then add more "sugar" methods to get related items:
>>>
>>> http://myhost/ledgersmb/api/1.3/customer/224/invoice?poststartdate=2011-01-01&poststartoper=gte
>>>
>>> ... might return a collection of invoice objects for customer 224 with a
>>> post date greater than/equal to January 1, 2011.
>> These can map to db-based search routines, correct?
>
> Yes, exactly... when I think of REST, I'm thinking of these methods:
>
> Index - GET with resource name but no id -- can pass various search
> parameters
> Create - POST to resource name with no id, data in body
> Read - GET with resource id
> Update - PUT with resource id, data in body
> Delete - DELETE with resource id
>
> ... and then whatever actions to support, using POST with specific
> resource paths associated with the action, and varying data necessary to
> process the action in the body.
>
> The "sugar" methods are most commonly Indexes of related objects.
>
> With JSON, one standard for identifying related objects is passing a
> $ref property with the resource URL so you can load the entire related
> object with another GET.
>
> e.g.
>
> {"id":"234",
> "url":"http://mycompany/api/1.3/customer/234","name":"Sample company",
> ... "contacts":[{"id":"567","type":"email","value":"..hidden..",
> "$ref":"http://mycompany/api/1.3/contact/567"},{"id":"568","type":"phone","value":"800-555-1234"}]}
>
>
>>
>>> http://myhost/ledgersmb/api/1.3/customer/224/invoice?status=open
>>>
>>> ... might return all open invoices for the customer.
>> That becomes syntactic sugar above the above?
>
> ... Yes, that might be a Sugar method that's equivalent to:
>
> http://myhost/ledgersmb/api/1.3/ar/invoice?customer_id=234&status=open
>
>
>>> ... and it would come back with the ECA id set, and a Location: header
>>> set with the resource URL for that item.
>>  The ECA id would be in the location header, however, right?  I guess
>> what I am wondering is if we are going to return the ECA id as well,
>> shouldn't we return the whole object?  Or wouldn't it be better to
>> just issue a redirect to the new object so that default values can be
>> pulled?
>
> Yes, I suggest doing both -- returning the whole object as rewritten by
> the server, as well as adding a header to the final URL.


I like the idea of returning the whole object.

I am thinking through the security, and I think we'll have to have
some way of authenticating clients as well as users.  I don't see
another way around XSRF issues that doesn't break a web services
model.  Amazon Web Services does this with a preshared key approach.
I would personally prefer client-side certificates with a
configurable CN root, and would ban use of any of these same certs in
the web app.
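
As a sketch of the preshared-key variant (the header names, signing
string, and values below are invented for illustration, not a
worked-out design):

    use strict;
    use warnings;
    use Digest::SHA qw(hmac_sha256_hex);
    use HTTP::Request;

    my ($client_id, $client_secret) = ('pos-terminal-1', 'preshared secret');

    my $method = 'POST';
    my $path   = '/ledgersmb/api/1.3/customer';
    my $body   = '{"name":"Sample company"}';
    my $ts     = time();

    # Sign the parts of the request that matter; the server recomputes
    # the same digest with the secret it holds for this client id.
    my $sig = hmac_sha256_hex(join("\n", $method, $path, $ts, $body),
                              $client_secret);

    my $req = HTTP::Request->new($method => "https://myhost$path");
    $req->header('X-LSMB-Client-Id' => $client_id);
    $req->header('X-LSMB-Timestamp' => $ts);
    $req->header('X-LSMB-Signature' => $sig);
    $req->content($body);

Client-side certificates would push the same client check down to the
TLS layer, which is part of why I lean that way.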

Best Wishes,
Chris Travers