[ts-gen] Downstream architecture (2nd)

Bill Pippin pippin at owlriver.net
Fri Aug 28 20:53:35 EDT 2009


Ken,

With respect to your questions about downstream architectural
approaches to the IB tws api, this post focuses on the following
question: 

    What should an IB tws api program bring to the table; what
    responsibilities, in contrast, should downstream programs
    concentrate on; and how can we avoid code duplication between
    the two?
________________________________________________________________________

A.  What should an IB tws api program bring to the table?

One natural trap for IB tws api [hereafter api] program developers
is to add an empty layer to the api, that is, to more or less publish
a duplex connection that accepts requests and responds with messages,
where those events are essentially equivalent to those of the IB tws
api.

At first thought this is in essence what an api program should
do, yet ... where's the beef?  What's the point, and how is the
downstream client any better off than without such layering?
It's tempting to ignore this problem and just define a library of
request procedures plus a queued message retrieval operation.  That
approach has been tried many times, and it leads to code similar to
IB's sample client; duplicating IB's tws sample client is not the
point of the trading-shim.

I'll start, instead, with protocol issues.  The IB tws api uses
null-terminated tokens that, when parsed, give numbered events that
vary in length and by version, and are mostly deterministic after the
initial index, with the exceptions of combo bag contract descriptions,
history query answer detail lines, and market scanner detail lines.
Although the language is simple to parse given a correct initial
starting point and exact knowledge of the number of attributes per
event, there is one glaring lacuna, the lack of a message delimiter,
and so one serious problem:

    How can a client recover synchronization, once given a client
    mistake during parsing --- how much should that client discard,
    and how can resynchronization be verified?

I'll leave a detailed answer to the above for another time, if and
when I discuss the shim's parser, and for now note only that the
shim discards just one token, is *very* robust in the presence of
the resulting syntax errors while it tries to resync, and uses the
relative rarity of false matches between data and event index --
event version pairs to provide a strong, and with repeated matches
increasingly trustworthy, indication of successful synchronization.  

The key point here, then, for downstream clients, is what is
provided: the shim adds newlines, and so parsing events is easy
for downstream users.  It also makes sense to convert the
null-terminated tokens of the api from binary to text form, by
replacing the ASCII NUL character with something in the 95-character
printable subset, and we've chosen the vertical bar.
One service of an api program, then, is as follows:

    An api program should convert messages from binary to text,
    say by replacing NULs with vertical bars, and delimit the
    events in its output stream, say by adding trailing newlines.

There's more the shim can and does do with message output, but the
rest is icing on the cake.  Presumably IB has defined the api in
binary, newline-free terms for reasons of efficiency, yet the
resulting language, though trivially easy to tokenize, is
unnecessarily hard to parse, and an api program can fix that. 
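
To make the above concrete, here is a minimal python sketch of that
conversion step, not the shim itself: it reads the NUL-separated token
stream and writes bar-delimited, newline-terminated records.  The
FIELD_COUNTS table is a placeholder, and the sketch punts on the
variable-length events and on resynchronization.

    # Hypothetical (event id, version) -> token count table; the real
    # api has variable-length events this sketch does not handle.
    FIELD_COUNTS = {("1", "6"): 10}          # illustrative values only

    def relay(upstream, downstream):
        """Copy NUL-separated tokens from the upstream socket to the
        downstream as '|'-separated, newline-terminated records."""
        buf = b""
        tokens = []
        while True:
            data = upstream.recv(4096)
            if not data:
                break                        # upstream closed
            buf += data
            while b"\0" in buf:
                tok, _, buf = buf.partition(b"\0")
                tokens.append(tok.decode("ascii", "replace"))
                if len(tokens) >= 2:
                    need = FIELD_COUNTS.get((tokens[0], tokens[1]))
                    if need is not None and len(tokens) == need:
                        downstream.write("|".join(tokens) + "\n")
                        downstream.flush()
                        tokens = []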

Now, for requests.  There are three issues here: resource limits,
timing, and keying; the first two are tightly coupled.

The api limits market data lines, for the typical user, to 100
symbols; market depth subscriptions, to three; and history queries in
flight to some low number, possibly even as low as one, although in
practice higher values seem to work.  There are also limits for the
market scanner, but since that part of the api is not implemented in
the shim, I'll leave it aside for now.
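
As a toy illustration of tracking such caps, a few lines of python;
the numbers below are the typical-account values mentioned above, not
universal constants.

    LIMITS = {"market_data": 100, "market_depth": 3, "history": 1}

    class Limits:
        def __init__(self):
            self.in_use = {kind: 0 for kind in LIMITS}

        def acquire(self, kind):
            """Return False when the cap is reached; caller defers."""
            if self.in_use[kind] >= LIMITS[kind]:
                return False
            self.in_use[kind] += 1
            return True

        def release(self, kind):
            self.in_use[kind] -= 1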

There is also rate limiting.  Requests must not arrive faster than
fifty per second; history queries, faster than one every ten seconds;
and for orders, although there is no published limitation beyond the
overall one-per-20-millisecond average request rate limit, anecdotal
evidence suggests that parent-child oca groups are not reliably
linked if related orders arrive less than 300 milliseconds apart.
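
A minimal pacing sketch under those constraints follows; the interval
values mirror the figures just quoted and are working assumptions, not
IB-published guarantees for every account.

    import time

    # minimum spacing, in seconds, between requests of each kind
    MIN_INTERVAL = {"request": 0.020, "history": 10.0, "order": 0.300}

    class Pacer:
        def __init__(self):
            self.last = {}

        def wait(self, kind):
            """Sleep just long enough to honor the spacing for 'kind'."""
            now = time.monotonic()
            earliest = self.last.get(kind, 0.0) + MIN_INTERVAL[kind]
            if now < earliest:
                time.sleep(earliest - now)
            self.last[kind] = time.monotonic()

A client of such a pacer would call wait("request") before every send,
and additionally wait("history") or wait("order") where those stricter
spacings apply.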

As for keying: subscriptions, queries, and orders are identified by
numbers, on the client side natural, or counting, numbers, with the
sequence base determined by the IB tws at startup via an unsolicited
next id message.  The api program must either track the mapping from
tick ids to contracts and orders, or else the downstream must perform
that tracking itself.  If the api program tracks such mappings, it's
also natural to save a history of them for orders.

    An api program should track subscription and resource limits,
    either rate-limit requests itself, or else provide fine timing
    control over request sends to the downstream client, and  
    translate between tick and order ids, on the one hand, and
    contracts and orders, on the other.  It should also journal
    those order-related events that include order id numbers.
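
The id bookkeeping can be as simple as the following sketch; the names
are illustrative, and the base value is whatever the unsolicited next
id message announces at connect time.

    class IdMap:
        def __init__(self, next_id):
            self.next_id = next_id      # base announced by the tws
            self.by_id = {}             # id -> contract or order record

        def allocate(self, what):
            """Reserve the next id for a subscription or order."""
            rid = self.next_id
            self.next_id += 1
            self.by_id[rid] = what
            return rid

        def lookup(self, rid):
            """Map an id seen in an upstream event back to its object."""
            return self.by_id.get(rid)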

Once given a datastore for the journal, it's convenient for the
api program to use it for other purposes, such as symbol management.
And, once given such additional database info, the api program can
serve as an amplifier from simple, brief commands to the more verbose
events that define api requests.

    An api program may choose to map from simple commands to elaborate
    api requests, hiding api complexity from the downstream client.
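
By way of illustration only, such an amplifier might look like the
python sketch below; the command syntax and field set are hypothetical,
and in particular this is not the shim's actual command language or the
exact wire layout of a market data request.

    def amplify(command, next_ticker_id, contract_db):
        """Expand a terse command into the fields of an api request."""
        verb, symbol = command.split()          # e.g. "watch IBM"
        if verb != "watch":
            raise ValueError("unknown command: " + command)
        row = contract_db[symbol]               # contract row from the db
        return {
            "ticker_id": next_ticker_id,
            "symbol":    row["symbol"],
            "sec_type":  row["sec_type"],
            "exchange":  row["exchange"],
            "currency":  row["currency"],
        }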

In addition to the duplex socket from the upstream, an api program
must also accept downstream input, and without blocking in such a way
as to increase latency for either downstream client or upstream IB
tws.  Although threads with message queues or locking can be used here,
the bsd-style select() system call is a robust, well-known, and
efficient method to multiplex IO handling, and the shim uses select()
for input.  It should be clear that such an api program is a reactive
system, and formally more complex than a pure filter.
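
A skeletal select() loop for that purpose might look like this; the
handler callbacks stand in for the real event and request processing.

    import select

    def event_loop(upstream_sock, downstream_sock, on_event, on_command):
        while True:
            readable, _, _ = select.select(
                [upstream_sock, downstream_sock], [], [])
            for sock in readable:
                data = sock.recv(4096)
                if not data:
                    return                  # a peer closed; stop the loop
                if sock is upstream_sock:
                    on_event(data)          # bytes from the IB tws
                else:
                    on_command(data)        # command text from the client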
________________________________________________________________________

B.  What responsibilities should downstream programs concentrate on?

The client downstream relates market data and orders.  It is the
locus of order strategy and position management.

As a result, it must be able to:

    * ask for market data;
    * interpret such data;
    * generate orders; and
    * manage positions.

Although the above is probably obvious, note the conclusion that
follows, given the next topic: the roles of the api and client
programs are mutually exclusive, so the shim should *not* decide any
of the tasks above.

I'll talk more about using a scripting language with the shim in my
next post, and for now, continue on to the key question that drives
architectural choices:
________________________________________________________________________

C.  How do we avoid code duplication between the api program and client?

Once converted to text and split into records, api events are easy to
parse in perl/python/ruby; each language has some means to split lines
of text into tokens given some delimiter, and so event objects are simple
to build.  Such code typically runs to at most a few lines.
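
For instance, in python the whole parsing step is on the order of:

    def parse_event(line):
        """Split one newline-terminated, bar-delimited shim record."""
        fields = line.rstrip("\n").split("|")
        return fields[0], fields[1], fields[2:]   # id, version, payload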

The key issue is event *understanding*, and this is a problem that
cries out for table-driven code.  Ideally, class definitions --- and
I'm thinking in particular of ruby here --- and token labels should
be derived from database tables.  The specification of events then
has a single point of truth --- what Kernighan calls the SPOT
principle --- shared by the api program and its clients.

This brings us back to the need for versioned event tables in the shim's
database, mentioned earlier this week, and again in the previous post. 
Once given these event-attribute tables, compact event parsing and
construction for ruby or other modern scripting languages falls right
out.
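
A rough python sketch of that table-driven construction follows; the
EVENT_ATTRS dictionary merely stands in for the versioned event tables
in the database, and its ids and field names are illustrative, not the
actual schema.

    from collections import namedtuple

    EVENT_ATTRS = {
        ("1", "6"): ("TickPrice",
                     ["ticker_id", "tick_type", "price", "size"]),
        ("3", "6"): ("OrderStatus",
                     ["order_id", "status", "filled", "remaining"]),
    }

    def build_event(line):
        fields = line.rstrip("\n").split("|")
        event_id, version, values = fields[0], fields[1], fields[2:]
        name, attrs = EVENT_ATTRS[(event_id, version)]
        cls = namedtuple(name, attrs)       # class derived from the table
        return cls(*values[:len(attrs)])

In ruby the same idea falls out of Struct.new driven by the same table.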

The other tasks of the downstream client involve processing those
events, and accumulating state, whether about markets in general or
one's positions in particular.   Here the separation between api
program and downstream is clear cut, so that code duplication is
unlikely.

To recap, given separate api and downstream programs, code duplication 
does occur in a logical sense; both must read and construct events; and
in addition, both must use a select() loop or other means to multiplex
IO --- this latter task is unavoidable for any reactive system, that
is, any program more than some trivial filter.  Granted.

Still, once the api program has converted the binary api events to
newline-terminated text records, parsing is easy, table driven object
construction simplifies most of the rest of the logically duplicative
input, and a [main] select() loop is part of the price of admission
for any reactive system.

Thanks,

Bill
 

