New Async NDB Tutorial

(last update: 2/16/2012)


See the Official NDB Docs

NOTE: This document replaces the NDB Tutorial, Part Deux, which is hopelessly out of date.


Now that you’re comfortable using NDB’s synchronous API, let’s explore NDB’s async facilities. We’re going to learn about async APIs, Futures, and tasklets.

[Style note: I’m writing “synchronous” in full, but “async” abbreviated, to emphasize the difference. The App Engine Documentation team will probably not copy this style...]

Hello Async World

Let's start with an example; we'll go into specifics in the next section.

Fire-and-forget: write an entity without waiting for the write to complete:

class MyRequestHandler(webapp.RequestHandler):

    @ndb.toplevel
    def get(self):
        acct = Account.get_by_id(users.get_current_user().user_id())
        acct.view_counter += 1
        acct.put_async()  # Ignoring the Future this returns
        # ...read other stuff from the datastore...


Note the @ndb.toplevel decorator -- without this you run the risk of not writing the entity at all.

Alternate version without the @ndb.toplevel decorator:

class MyRequestHandler(webapp.RequestHandler):

    def get(self):
        acct = Account.get_by_id(users.get_current_user().user_id())
        acct.view_counter += 1
        future = acct.put_async()
        # ...read other stuff from the datastore...
        future.get_result()  # Wait for completion

(Both examples have the weakness that if you don't block for datastore access at all, the write may not be initiated until after the request handler completes, thereby missing the point of fire-and-forget. [TBD: Choose a first example that doesn't have this problem? Or add an API to force sending the RPC?])

Using Async APIs and Futures

Let’s start with the async APIs. Almost every synchronous NDB API that interacts with the datastore has an async counterpart. The async API function and method names are always formed by appending ‘_async’ to the synchronous name, and the arguments are always the same as for the synchronous version. For example, given entity.put() and key.get(), we also have entity.put_async() and key.get_async(). The return value of an async method is always either a Future or a list of Futures. (Whether the async version of an API that returns a list returns a list of Futures or a single Future whose result is a list depends on the API; e.g. get_multi_async() returns a list of Futures, but fetch_async() returns a Future whose result is a list.)

A Future is an object that maintains state for an operation that has been initiated but may not yet have completed; all async APIs return one or more Futures. (Read the Wikipedia article for some conceptual background on Futures.) At any point you can ask the Future for the result of the operation; the Future will then block, if necessary, until the result is available, and then give it to you. To ask a Future for its result, you call its get_result() method. (However, tasklets give you an alternative way of accessing a Future’s result. We’ll get to these later.)

The get_result() method returns the same value that the synchronous version of the same API would return. In fact, under the hood all the synchronous APIs are implemented by calling the corresponding asynchronous API and then calling the get_result() method on the Future(s) returned. Check for yourself in the NDB source code: the _put() method is implemented as follows:

def _put(self, **ctx_options):
    return self._put_async(**ctx_options).get_result()

(Don’t worry, we’re not going to review the code of _put_async(). :-)

NOTE: You must call get_result() to get a Future's result value (or use yield in a tasklet to obtain the same effect, as discussed later). In some other programming languages, Futures automatically coerce themselves into the desired result; however, NDB Futures don't behave this way. In the terminology of the Wikipedia article, NDB Futures are explicit.

What happens if the operation raises an exception? That depends on the reason for the exception. If the exception is complaining about an obvious problem with an argument (e.g., passing a non-list to get_multi()), the _async() method will raise it immediately. But if the exception is detected by the server (or, in some cases, by the support code that sends your requests to the server and processes the responses), the _async() method will just return a Future, and the exception will be raised when you call get_result() on it. Don’t worry too much about this; it all ends up behaving quite naturally. Perhaps the biggest difference is that if a traceback gets printed, you’ll see various bits and pieces of the low-level asynchronous machinery exposed, such as the event loop (in fact, you’ll see this even when a synchronous API raises an exception). We’ll discuss the event loop briefly at the end of the tutorial.
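To make the "explicit future" behavior concrete, here is a minimal sketch in plain Python. This is illustrative only; it is not NDB's Future class, and a real NDB Future would block on the event loop rather than raise when the result isn't ready yet:

```python
class SimpleFuture(object):
    """Toy explicit future: the result never coerces automatically."""

    _PENDING = object()  # sentinel meaning "operation not finished yet"

    def __init__(self):
        self._result = self._PENDING
        self._exception = None

    def set_result(self, result):
        self._result = result

    def set_exception(self, exc):
        self._exception = exc
        self._result = None

    def done(self):
        return self._result is not self._PENDING or self._exception is not None

    def get_result(self):
        # A real NDB Future would run the event loop here until done.
        if not self.done():
            raise RuntimeError('result is not ready yet')
        if self._exception is not None:
            # Server-side errors surface here, not at the _async() call.
            raise self._exception
        return self._result
```

Note that nothing happens until you ask: `fut.get_result()` is the explicit step that delivers either the value or the deferred exception.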

So what can you do with async APIs and Futures? Later, we’ll see how you can create tasklets which use them to implement a lightweight concurrency mechanism. But first, let’s see how we can use async APIs to our advantage without using tasklets.

Let’s assume you are writing a simple guestbook app, which requires the user to log in, and manages Account entities to record registered users. Once the user is logged in, you want to present them with a page showing the most recent guestbook posts (i.e., Guestbook entities); this page should also display the nickname of the logged-in user, which we get from the Account. In a synchronous world, the core of your request handler would look something like this:

uid = users.get_current_user().user_id()
acct = Account.get_by_id(uid)  # I/O action 1
qry = Guestbook.query().order(-Guestbook.post_date)
recent_entries = qry.fetch(10)  # I/O action 2
# Followed by rendering some HTML based on this data

There are two independent I/O actions here: getting the Account entity, and fetching the 10 most recent Guestbook entities. Using the synchronous API, we can’t combine these actions; our only choice is whether to get the Account entity or the Guestbook entities first. But using the async API, we can:

uid = users.get_current_user().user_id()
acct_future = Account.get_by_id_async(uid)  # Start I/O action 1
qry = Guestbook.query().order(-Guestbook.post_date)
recent_entries_future = qry.fetch_async(10)  # Start I/O action 2
acct = acct_future.get_result()  # Complete I/O action 1
recent_entries = recent_entries_future.get_result()  # Complete I/O action 2
# Now invoke the HTML rendering code

In this version, we first create two Futures (acct_future and recent_entries_future), and then wait for them. It doesn’t really matter in which order we wait; we can’t start rendering the response until we have both results (at least, that’s how most templating libraries work), and the server works on both requests in parallel. Also note that the _async() calls do more than create a Future object: they also send the request to the server, so the server can start working on the request right away. The server responses may come back in an arbitrary order; the Future object contains the information necessary to link the responses to their corresponding requests, so that e.g. acct_future.get_result() will always return the requested Account object. (If this seems obvious to you, don’t think too hard about it. :-)

Graphically, the difference between the two versions can be shown as follows:

So the total (real) time spent in the second version is roughly equal to the maximum time across the operations, while the total time spent in the first version exceeds the sum of the operation times. Clearly, the win is greater if you can run more operations in parallel.
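The max-versus-sum effect is easy to simulate in plain Python. In this sketch, threads merely stand in for the datastore server working on two RPCs at the same time (NDB itself achieves this overlap without threads), and the numbers are made-up latencies:

```python
import threading
import time

OP_TIMES = [0.1, 0.1]  # pretend latencies of two independent datastore ops

def sequential():
    # Synchronous style: each operation waits for the previous one.
    for t in OP_TIMES:
        time.sleep(t)

def parallel():
    # Async style: both operations are in flight at once.
    threads = [threading.Thread(target=time.sleep, args=(t,))
               for t in OP_TIMES]
    for th in threads:
        th.start()
    for th in threads:
        th.join()

def timed(func):
    start = time.time()
    func()
    return time.time() - start

seq_elapsed = timed(sequential)  # roughly sum(OP_TIMES)
par_elapsed = timed(parallel)    # roughly max(OP_TIMES)
```

With equal latencies the parallel version takes about half the wall-clock time of the sequential one.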

Another thing you can do with Futures is to do some computation while the server is working on a query or some other I/O operation. The basic idea is the same:

recent_entries_future = qry.fetch_async(10)
# Do your computation here
recent_entries = recent_entries_future.get_result()

Of course, this is only worth it when your computation takes a considerable amount of time compared to the query (say, 10-100 msec) and does not depend on having access to the query results.

By the way, if you want to know how long your queries take, or how many I/O operations your app is doing per request, consider using Appstats. This tool can show charts similar to the drawing above based on instrumentation of a live app.

List of Async APIs

Here’s a table listing all async APIs in NDB that are derived from a corresponding synchronous API. I am using some shorthands in the signatures: key is a Key instance, entity is a Model instance, qry is a Query instance, and qit is a query iterator; [x] denotes a list of x.

Synchronous version | Async version | Async return value
key.get(**options) | key.get_async(**options) | Future -> entity or None
key.delete(**options) | key.delete_async(**options) | Future -> None
entity.put(**options) | entity.put_async(**options) | Future -> key
Model.get_or_insert(name, **options) | Model.get_or_insert_async(name, **options) | Future -> entity
Model.allocate_ids(size, max, parent, **options) | Model.allocate_ids_async(size, max, parent, **options) | Future -> (integer, integer)
Model.get_by_id(id, **options) | Model.get_by_id_async(id, **options) | Future -> entity or None
ndb.transaction(callback, **options) | ndb.transaction_async(callback, **options) | Future -> whatever callback returns
ndb.get_multi([key], **options) | ndb.get_multi_async([key], **options) | [Future -> entity or None]
ndb.put_multi([entity], **options) | ndb.put_multi_async([entity], **options) | [Future -> key]
ndb.delete_multi([key], **options) | ndb.delete_multi_async([key], **options) | [Future -> None]
qry.fetch(limit, **options) | qry.fetch_async(limit, **options) | Future -> [entity or key]
qry.get(**options) | qry.get_async(**options) | Future -> entity or key or None
qry.count(limit, **options) | qry.count_async(limit, **options) | Future -> integer
qry.fetch_page(page_size, **options) | qry.fetch_page_async(page_size, **options) | Future -> (results, more, cursor), **options) |, **options) | Future -> [whatever callback returns]
qit.has_next() | qit.has_next_async() | Future -> boolean

Note that not all APIs have an async version. For example, there are no async versions of qry.iter() or iter(qry). This is by design; Python iterators are not particularly friendly to async APIs, and you're usually better off using qry.map_async(). In a later section we'll discuss how to loop over a query iterator using async APIs.

Async APIs on the Context Object

Contexts offer some additional asynchronous APIs. Recall from part one that Contexts contain state such as the context cache, various cache policies, transaction state, and default configuration options. You access the current Context by calling ndb.get_context(). Context objects offer the following async APIs; these don't have '_async' in their name but they do return a Future. (For brevity, I don't indicate parameter defaults here; e.g. in the memcache_get() call, all parameters except key are optional.) Most of these interface to memcache, but urlfetch is also supported, and in the future I plan to support the blobstore API here as well.




memcache_get(key, for_cas, namespace, use_cache) | Async, auto-batching memcache get()
memcache_gets(key, namespace, use_cache) | Async, auto-batching memcache.Client().gets()
memcache_set(key, value, time, namespace, use_cache) | Async, auto-batching memcache set()
memcache_add(key, value, time, namespace) | Async, auto-batching memcache add()
memcache_replace(key, value, time, namespace) | Async, auto-batching memcache replace()
memcache_cas(key, value, time, namespace) | Async, auto-batching memcache.Client().cas()
memcache_delete(key, seconds, namespace) | Async, auto-batching memcache delete()
memcache_incr(key, delta, initial_value, namespace) | Async, auto-batching memcache incr()
memcache_decr(key, delta, initial_value, namespace) | Async, auto-batching memcache decr()
urlfetch(url, payload, method, headers, allow_truncated, follow_redirects, validate_certificate, deadline, callback) | Async urlfetch() [New in NDB 0.9.6]

For completeness, here are the async Context APIs implementing the datastore API. NOTE: you should not use these directly, because they don't always call the model-specific hook methods documented in part one; always use the datastore APIs documented in part one instead. These all return Futures. Note that some of these take "options", not "**options", meaning that they don't take keyword arguments to specify options; instead they take a datastore_rpc.Configuration object as an argument.



get(key, **options) | Async, auto-batching key.get()
put(entity, **options) | Async, auto-batching entity.put()
delete(key, **options) | Async, auto-batching key.delete()
allocate_ids(key, size, max, **options) | Async, auto-batching Model.allocate_ids()
map_query(query, callback, options) | Used to implement
iter_query(query, callback, options) | Used to implement qry.iter() / iter(qry)
transaction(callback, **options) | Used to implement ndb.transaction()

Finally, for completeness, here are some other public Context APIs. These do not return Futures. (There are more, but those have all been documented in part one already, e.g. set_cache_policy().)

flush() | Flush all auto-batcher queues, sending their work to the servers, and waiting until all the responses have been received
in_transaction() | True if this is a transactional Context
clear_cache() | Clear the context cache

The Future Object API

Futures have some useful methods besides get_result(). Here's a complete list of Future methods (any others should be considered undocumented and may change or disappear): [TBD: convert to tables]

done() | True if the result is available, False otherwise (never blocks)
wait() | Wait until the result is available
check_success() | Wait until the result is available; raise the operation's exception if it raised one
get_result() | Wait until the result is available; then return it, raising the operation's exception if it raised one
get_exception() | Wait until the result is available; then return the exception the operation raised, or None
get_traceback() | Wait until the result is available; then return the traceback for the operation's exception, or None
add_callback(callback, *args, **kwds) | Call callback(*args, **kwds) when the result becomes available

There are also some useful class methods:

Future.wait_any(futures) | Wait until at least one of the given Futures is done, and return that Future
Future.wait_all(futures) | Wait until all of the given Futures are done

Do not attempt to construct Futures yourself -- you should always use one of the documented async functions and methods that returns a Future.

Using Tasklets

Now let's look at some more sophisticated uses of async APIs, introducing tasklets. As an illustrative example, let's consider the following schema:

class Account(ndb.Model):
    email = ndb.StringProperty()
    nickname = ndb.StringProperty()

    def nick(self):
        return self.nickname or  # Whichever is non-empty

class Message(ndb.Model):
    text = ndb.StringProperty()
    when = ndb.DateTimeProperty()
    author = ndb.KeyProperty()  # references Account

and let's assume we want to render this by displaying the message time, text and author's nickname. (This is taken from a real-life example, in Rietveld.) Using synchronous APIs, and substituting Python print statements for actual template expansion, we could write the following code:

qry = Message.query().order(-Message.when)
for msg in qry.fetch(20):
    acct =
    print 'On %s, %s wrote:' % (msg.when, acct.nick())
    print msg.text

Unfortunately, if we look at this with Appstats (see above), we'll find that it is very inefficient: there's one Query request followed by up to 20 separate Get requests (fewer if some messages have the same author, since subsequent requests for the same key will be satisfied from the context cache). For example you might see the following "staircase pattern":

In the synchronous world, the best we can do is to rewrite this code to prefetch all the Accounts:

qry = Message.query().order(-Message.when)
msgs = qry.fetch(20)
accts = ndb.get_multi([ for msg in msgs])
for msg, acct in zip(msgs, accts):
    print 'On %s, %s wrote:' % (msg.when, acct.nick())
    print msg.text

However, this can get cumbersome, for example if there are other things to be prefetched, or if it is complicated to figure out which things to prefetch given a list of Messages.

Anyway, let's say we'd like to rewrite this code using async APIs and tasklets. We'll use the method and write a callback for it. The callback will be a tasklet; the call itself is synchronous, but the callbacks may run out of order.


@ndb.tasklet
def callback(msg):
    acct = yield
    raise ndb.Return('On %s, %s wrote:\n%s' % (msg.when, acct.nick(), msg.text))

qry = Message.query().order(-Message.when)
outputs =, limit=20)
for output in outputs:
    print output

The return value of is a list of the return values of the callback calls. This list is returned synchronously, and its order corresponds to the order in which the query results were received (not to the order in which the callbacks returned, which may be different).

If we run this and look at Appstats output, we'll see the following:

This means that magically, those 6 separate Get requests have been collapsed into a single Get request (with 6 keys, returning 6 entities; in practice this is much faster than 6 separate requests). Wow. How is that done?

Here's how. Whenever a callback hits the "yield" expression, it makes an async get call and then gets suspended until the result is available (I'll explain how in a minute). Contradicting slightly what I wrote earlier, the get_async() call doesn't actually immediately send an RPC to the datastore server with that one key in it: instead, it passes the key on to an actor behind the scenes called the autobatcher. The autobatcher collects multiple keys and bundles them up in a single batch RPC to the server; it does this in such a way that as long as there is more work to do (e.g. another callback may run) it keeps collecting keys, and as soon as one of the results is needed, it sends the batch RPC.

(When the synchronous API is used, the autobatcher usually sends the RPC immediately, because the synchronous API immediately waits for the result, by calling get_result() on some Future.)
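The autobatcher idea can be modeled with a few lines of plain Python. This is a sketch of the concept only (not NDB's actual implementation, and the dict-based "futures" are throwaway stand-ins): get_async() merely queues the key, and the first demand for any result flushes the whole queue as one batch request:

```python
class ToyAutoBatcher(object):
    """Conceptual sketch of auto-batching; not NDB's real autobatcher."""

    def __init__(self, backend):
        self._backend = backend  # a dict standing in for the datastore
        self._pending = {}       # key -> list of waiting "futures"
        self.batches_sent = 0    # how many batch RPCs we made

    def get_async(self, key):
        # No RPC yet: just queue the key and hand back a pending future.
        future = {'done': False, 'value': None}
        self._pending.setdefault(key, []).append(future)
        return future

    def get_result(self, future):
        # Needing any one result forces the whole queue out as one batch.
        if not future['done']:
            self._flush()
        return future['value']

    def _flush(self):
        self.batches_sent += 1  # one batch RPC covering every queued key
        for key, futures in self._pending.items():
            value = self._backend.get(key)
            for future in futures:
                future['done'] = True
                future['value'] = value
        self._pending.clear()

# Queue three gets; only one "batch RPC" goes out when a result is needed.
batcher = ToyAutoBatcher({'a': 1, 'b': 2, 'c': 3})
futures = [batcher.get_async(k) for k in ('a', 'b', 'c')]
results = [batcher.get_result(f) for f in futures]
```

Three get_async() calls, one batch: that is exactly the collapse the Appstats chart shows.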

What would happen if the callback wasn't written as a tasklet? A synchronous callback would be somewhat shorter:

def callback(msg):
    acct =
    return 'On %s, %s wrote:\n%s' % (msg.when, acct.nick(), msg.text)

There is no @ndb.tasklet decorator, no yield, no get_async() call (just a synchronous get() call), and a return statement is used instead of the mysterious raise ndb.Return(). The map() call will work with such a callback, but when we look at the Appstats output, we'll see the old staircase pattern. Why? Because we're not yielding, we're not giving the NDB scheduler a chance to run another callback. (I'll discuss the scheduler in more detail later as well.)

The advantage of using tasklets is even stronger when we run a query that returns more results. Let's suppose we request 100 results, in batches of 25. This can be expressed as follows:

outputs =, limit=100, batch_size=25)

Now look at the Appstats output. Using the tasklet callback, we might see the following:

But using the synchronous callback, we would see something like the following (showing only the behavior for the first two batches, to save some space -- the full chart would have 3 Next calls and up to 100 Get calls).

(Note that even here there is overlap between the Next request and some of the Get requests. This is because the low-level query implementation -- below NDB, even! -- uses async requests to get the batches, which provides a handsome speed-up for synchronous queries whose results don't fit in a single batch. But this doesn't prevent the Get requests from forming an inefficient staircase pattern.)

Parallel Queries, Parallel Yield

Another common application is running multiple queries concurrently. Let's say we have a page that displays both the contents of your shopping cart and a list of special offers. Our schema might look like this:

class Account(ndb.Model):
    pass  # (properties omitted)

class InventoryItem(ndb.Model):
    name = ndb.StringProperty()

class CartItem(ndb.Model):
    account = ndb.KeyProperty(kind=Account)
    inventory = ndb.KeyProperty(kind=InventoryItem)
    quantity = ndb.IntegerProperty()

class SpecialOffer(ndb.Model):
    inventory = ndb.KeyProperty(kind=InventoryItem)


Now let's write a function that, given an Account object, fetches all related cart items as well as 10 special offers. It should also prefetch the inventory items corresponding to those items. Synchronously we could do this as follows:

def get_cart_plus_offers(acct):
    cart = CartItem.query(CartItem.account == acct.key).fetch()
    offers = SpecialOffer.query().fetch(10)
    get_multi([item.inventory for item in cart] +
              [offer.inventory for offer in offers])
    return cart, offers

The get_multi() call whose result is ignored here deserves some explanation: it serves to populate the context cache using a single efficient batch Get RPC. Once an entity is in the context cache, other code running in the same Context can read that entity given its key without doing any I/O. For example, here is some code that would print the names of all special offers:

def print_offers(offers):
    for offer in offers:
        print offer.inventory.get().name

Without the prefetching get_multi() call in get_cart_plus_offers(), each of the calls to offer.inventory.get() would make a synchronous request to the datastore (or, if you're lucky, to memcache -- but still, this would be 10 synchronous RPCs, which we know is bad).

But once the get_multi() call has been made, all those entities are loaded into the context cache, and those same offer.inventory.get() calls are satisfied immediately from the context cache, without doing any I/O or blocking.
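The effect of the prefetching call can be modeled with a toy cache (again a sketch, not NDB's Context class): a batch get_multi() costs one round trip and fills the cache, after which the individual gets are free, whereas a cold cache pays one round trip per get:

```python
class ToyContext(object):
    """Sketch of a context cache; not NDB's real Context."""

    def __init__(self, datastore):
        self._datastore = datastore  # dict standing in for the datastore
        self._cache = {}
        self.rpc_count = 0

    def get(self, key):
        if key not in self._cache:
            self.rpc_count += 1  # cache miss: one RPC for this single key
            self._cache[key] = self._datastore[key]
        return self._cache[key]

    def get_multi(self, keys):
        missing = [k for k in keys if k not in self._cache]
        if missing:
            self.rpc_count += 1  # a single batch RPC for all missing keys
            for k in missing:
                self._cache[k] = self._datastore[k]
        return [self._cache[k] for k in keys]

store = dict((i, 'inventory-%d' % i) for i in range(10))

ctx = ToyContext(store)
ctx.get_multi(list(range(10)))           # prefetch: one batch RPC
names = [ctx.get(i) for i in range(10)]  # all served from the cache

cold = ToyContext(store)
for i in range(10):
    cold.get(i)                          # no prefetch: ten separate RPCs
```

The prefetching context makes one round trip where the cold one makes ten.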

However, get_cart_plus_offers() is completely synchronous, so the time it takes is equal to (making up a notation on the fly :-) T(CartItem) + T(SpecialOffer) + T(get_multi).

Can we do better? We certainly can! The simplest solution first creates Futures for the two queries and then waits for them:

def get_cart_plus_offers(acct):
    cart_future = CartItem.query(CartItem.account == acct.key).fetch_async()
    offers_future = SpecialOffer.query().fetch_async(10)
    cart = cart_future.get_result()
    offers = offers_future.get_result()
    get_multi([item.inventory for item in cart] +
              [offer.inventory for offer in offers])
    return cart, offers

This overlaps the two queries, but the get_multi() call is still separate (because it depends on the query results -- no Future in the world can fix that for us :-), so the total time is MAX(T(CartItem), T(SpecialOffer)) + T(get_multi). That's the best we can do unless there are other operations that we could do concurrently, but let me improve the flexibility of the solution.

Suppose we sometimes need the cart, sometimes the offers, and sometimes both. And in all cases we want the inventory items prefetched. We can easily write two separate functions to get the cart and the offers:


def get_cart(acct):
    cart = CartItem.query(CartItem.account == acct.key).fetch()
    get_multi([item.inventory for item in cart])
    return cart

def get_offers(acct):
    offers = SpecialOffer.query().fetch(10)
    get_multi([offer.inventory for offer in offers])
    return offers

def get_cart_plus_offers(acct):
    cart = get_cart(acct)
    offers = get_offers(acct)
    return cart, offers

Sadly, this version of get_cart_plus_offers() doesn't combine the get_multi() calls, and we end up doing 4 RPCs instead of 3. Luckily, using tasklets we can get this back to 3. Let's see how. We convert each function to a tasklet, changing all synchronous calls into async calls with yield. Let's also stick to the convention that functions returning a Future (such as tasklets) have a name ending in '_async':


@ndb.tasklet
def get_cart_async(acct):
    cart = yield CartItem.query(CartItem.account == acct.key).fetch_async()
    yield get_multi_async([item.inventory for item in cart])
    raise ndb.Return(cart)

@ndb.tasklet
def get_offers_async(acct):
    offers = yield SpecialOffer.query().fetch_async(10)
    yield get_multi_async([offer.inventory for offer in offers])
    raise ndb.Return(offers)

@ndb.tasklet
def get_cart_plus_offers_async(acct):
    cart, offers = yield get_cart_async(acct), get_offers_async(acct)
    raise ndb.Return(cart, offers)

This seems like a big refactoring, and it is -- but the changes are almost all purely mechanical:

- Decorate each function with @ndb.tasklet.
- Append _async to the function's name and switch each NDB call to its async version.
- Put yield in front of each call that now returns a Future.
- Replace "return <value>" with "raise ndb.Return(<value>)".

The only time we had to think was the taskletization (not a real word :-) of get_cart_plus_offers(). The straightforward translation according to these rules would have been:



@ndb.tasklet
def get_cart_plus_offers_async(acct):
    cart = yield get_cart_async(acct)
    offers = yield get_offers_async(acct)
    raise ndb.Return(cart, offers)

Alas, this version waits for the result of get_cart_async() before calling get_offers_async(), so it fails to parallelize our queries -- and hence it also fails to combine the get_multi_async() calls. That's where parallel yield comes in handy: "yield X, Y, ..." takes two or more expressions each producing a Future, blocks until all have their results ready, and then returns the list of results. It is equivalent to, but shorter than, the following block, which first assigns the Futures to local variables and then waits for each of them in turn:

    cart_future = get_cart_async(acct)
    offers_future = get_offers_async(acct)
    cart = cart_future.get_result()
    offers = offers_future.get_result()

If we run the version with the parallel yield and look at the Appstats output, we'll see the desired result: two parallel Query RPCs followed by a single batch Get RPC.
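The start-both-then-wait-for-both shape is not specific to NDB. Here is the same pattern sketched with the standard library's concurrent.futures (added to Python 3.2; in 2012 this existed as the 'futures' backport). The two fetch functions are made-up stand-ins for the queries, and the worker threads are something NDB itself deliberately avoids:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_cart():
    return ['cart item']      # stand-in for the CartItem query

def fetch_offers():
    return ['special offer']  # stand-in for the SpecialOffer query

with ThreadPoolExecutor(max_workers=2) as pool:
    cart_future = pool.submit(fetch_cart)      # start both operations...
    offers_future = pool.submit(fetch_offers)
    cart = cart_future.result()                # ...then wait for each
    offers = offers_future.result()
```

The key discipline is the same as with NDB Futures: submit everything first, then collect results.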

The beauty of parallel yield is that it can parallelize any set of calls as long as they return Futures. For example, you can write another function that calls get_cart_plus_offers_async() in parallel with a query for sub-accounts (assuming an ancestor relationship between Accounts):


@ndb.tasklet
def get_more_stuff_async(acct):
    sub_accts, (cart, offers) = yield (Account.query(ancestor=acct.key).fetch_async(),
                                       get_cart_plus_offers_async(acct))
    raise ndb.Return(sub_accts, cart, offers)

Everything gets parallelized beautifully here. It is, however, important to make sure that all such functions are tasklets and use async calls with yield instead of synchronous calls. For example, the following variant would be sub-optimal:


def get_more_stuff(acct):
    sub_accts = Account.query(ancestor=acct.key).fetch()
    cart, offers = get_cart_plus_offers_async(acct).get_result()
    return sub_accts, cart, offers

We can easily formulate a rule about when to use parallel yield: two or more yields can be combined into a single parallel yield whenever the operations are independent, i.e. neither one needs the result of the other.

For example, this code can be parallelized (the two lines can be placed in either order):

foos = yield Foo.query().fetch_async()
bars = yield Bar.query().fetch_async()

or, equivalently, with a parallel yield:

foos, bars = yield Foo.query().fetch_async(), Bar.query().fetch_async()

However, this code cannot be parallelized (the second line depends on the first):

foo = yield Foo.query().get_async()
bars = yield Bar.query( == foo.key).fetch_async()

A final note about parallel yield: in other languages this is sometimes known as a barrier.

Query Iterators

I mentioned query iterators before. If you really want to iterate over query results in a tasklet, use the following pattern:

qry = Model.query(<filters>)
qit = qry.iter(<options>)
while (yield qit.has_next_async()):
    entity =
    # Do something with entity

This is the tasklet-friendly equivalent of the following:

qry = Model.query(<filters>)
qit = qry.iter(<options>)
for entity in qit:
    # Do something with entity

The three lines involving the iterator in the first version (creating qit, the while test, and the call) are the tasklet-friendly equivalent of the single for statement in the second version. This is somewhat sub-optimal, but it can't be helped: because tasklets can only be suspended at a yield keyword, a for-loop won't let other tasklets run while it is blocked waiting for the next result.

There's also a reason why the API design introduces has_next() and has_next_async(), instead of making an async version of the next() method. The next() method may raise StopIteration, so in order to properly terminate the loop you would have to write something like this, i.e. 6 lines instead of 3:


qry = Model.query(<filters>)
qit = qry.iter(<options>)
while True:
    try:
        entity = yield qit.next_async()
    except StopIteration:
        break
    # Do something with entity

You may ask, why would you want to use a query iterator at all, instead of just fetching all the entities using qry.fetch_async() and then using a plain for-loop over the returned list?

For one, you might have so many entities that they don't all comfortably fit in the available RAM. Another reason might be that you're doing some kind of search that cannot quite be expressed as a query, e.g. you need some kind of correlation between different property values, and you might want to break out of the loop once you find a matching entity (or have collected a given number of matching entities).

Especially the latter case (stopping before the loop is exhausted) cannot easily be done using qry.map_async(). Even in the former case (avoiding loading everything in RAM at once) setting up the proper callback for qry.map_async() might be more work than using a query iterator.

Just remember that there's usually more than one way to skin a cat -- e.g. if you have to sort through thousands or millions of entities, it might make more sense to use the App Engine map/reduce library.

Using map_async()


Using Async Urlfetch with NDB

Here's another example of doing multiple things simultaneously. If you remember that the Context object has an async urlfetch() method and paid attention in the section above about parallel queries and parallel yield, you should have no problem coming up with the code for running a query and a urlfetch call in parallel. Let's build on the shopping cart example, and add a feature where the shopping cart tries to see if the user has an Amazon account (using a completely unrealistic made-up web API for accessing the Amazon account info :-):


@ndb.tasklet
def get_amazon_account_async(acct):
    ctx = ndb.get_context()
    result = yield ctx.urlfetch('' %
    if result.status_code == 200:
        raise ndb.Return(result.content)
    # else return None

We can run this in parallel with get_cart_plus_offers_async():


@ndb.tasklet
def get_cart_offers_amazon_async(acct):
    (cart, offers), amazon = yield (get_cart_plus_offers_async(acct),
                                    get_amazon_account_async(acct))
    raise ndb.Return(cart, offers, amazon)

This will run both queries and the prefetch concurrently with the urlfetch request; usually the datastore is much faster so the entire thing will run in approximately the (real) time it takes the urlfetch request to complete.

Using Future.wait_any()

Sometimes you want to make multiple urlfetch calls and return whenever the first one completes. You can do this using the Future.wait_any() class method:

def get_first_ready(urls):
    ctx = ndb.get_context()
    futures = [ctx.urlfetch(url) for url in urls]
    first = ndb.Future.wait_any(futures)
    return first.get_result().content

A downside of this approach is that there is no convenient way to turn this into a tasklet, since parallel yield is defined as "wait for all Futures to complete". Maybe an API to express that can be added in the future...

Tasklet Decorators

[TBD: difference between @ndb.toplevel, @ndb.synctasklet, @ndb.tasklet]

Decorating a generator function with @ndb.tasklet changes it so that when you call it, instead of returning a generator object, it returns a Future. Whereas generator objects are typically used to iterate over, Futures are generally used to wait for completion of the tasklet. When the tasklet completes, it can "return" a value to the Future using "raise ndb.Return(<value>)". This value will be the return value of the yield expression that waits for the Future. (Why use raise? That's answered at the end of this document.)
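As a rough illustration of what the decorator does, here is a miniature version of the generator-to-Future machinery. This is a deliberately simplified sketch: no event loop, no real scheduling, only one yielded future at a time, and it is not NDB's actual code. The names MiniFuture, Return, tasklet, and ready are all made up for this example:

```python
class Return(Exception):
    """Mini version of ndb.Return: delivers a tasklet's result."""

class MiniFuture(object):
    def __init__(self):
        self._done = False
        self._result = None
        self._callbacks = []

    def set_result(self, result):
        self._done = True
        self._result = result
        for callback in self._callbacks:
            callback(result)
        self._callbacks = []

    def add_callback(self, callback):
        if self._done:
            callback(self._result)
        else:
            self._callbacks.append(callback)

    def get_result(self):
        if not self._done:
            raise RuntimeError('not done')  # NDB would run the event loop
        return self._result

def tasklet(func):
    """Mini tasklet decorator: calling the generator returns a MiniFuture."""
    def wrapper(*args, **kwds):
        fut = MiniFuture()
        gen = func(*args, **kwds)

        def advance(send_value=None):
            try:
                yielded = gen.send(send_value)  # run until the next yield
            except Return as stop:
                fut.set_result(stop.args[0] if stop.args else None)
            except StopIteration:
                fut.set_result(None)  # generator ended without Return
            else:
                # The generator yielded a future; resume when it's done.
                yielded.add_callback(advance)

        advance()
        return fut
    return wrapper

def ready(value):
    """Helper: an already-completed future."""
    fut = MiniFuture()
    fut.set_result(value)
    return fut

@tasklet
def double_async(x):
    value = yield ready(x)  # looks synchronous, suspends the generator
    raise Return(value * 2)
```

Calling double_async(21) returns a MiniFuture, and get_result() on it yields 42: the yield suspends the generator, the Return exception delivers the final value to the Future, just as the text describes for NDB.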

Under the Hood; Theory of Tasklets

[TBD: Scheduler. Event loop. Tasklets are not threads.]

Tasklets don't use threads. A tasklet can only run when another tasklet explicitly yields.

The overall effect is that we can write things like

result = yield some_function_async(args)

and if we squint a little it looks almost like a synchronous call.

Programming Tips

[TBD: Debugging. Warn against yield in lambdas.]

[TBD: Larger example, e.g. deleting lots of things based on queries.]


Random Notes

[TBD: Move this to a separate document. ATM these are just some points that came up during earlier reviews that I don't want to lose.]

Why Contexts?

Contexts are mostly an advanced feature now; you can use them to use a different cache policy for some part of your code. But mostly they exist for the benefit of transactions. Really advanced users can create multiple concurrent transactions as well. There are also people talking of handling multiple HTTP requests concurrently in a single process; that would also be a good use for multiple contexts.

Why raise Return(<value>) instead of yield Return(<value>) or yield <value>?

This is controversial, but I strongly oppose using yield to return a final value. There are several differences between yield and raise, but perhaps the strongest is that a yield suspended inside a try/finally does *not* execute the finally clause before exiting (it only gets run when the generator is closed), whereas raise inside try/finally does run the finally clause before exiting. Also, Emacs auto-dedents after a raise, but not after a yield. And the future (PEP 380) is that the semantics of "return value" in a generator are going to be those of "raise StopIteration(value)".
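The try/finally difference is easy to verify in plain Python:

```python
log = []

def gen_with_yield():
    try:
        yield 'value'        # suspends here, *inside* the try/finally
    finally:
        log.append('yield finally')

def func_with_raise():
    try:
        raise ValueError('value')  # leaves the try/finally right away
    finally:
        log.append('raise finally')

g = gen_with_yield()
next(g)                    # generator suspended; its finally has NOT run yet
suspended_log = list(log)  # still empty at this point
g.close()                  # only closing the generator triggers the finally

try:
    func_with_raise()      # the finally runs before the exception escapes
except ValueError:
    pass
```

While the generator sits suspended at the yield, its finally clause has not run; the raise-based version executes its finally immediately.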

Does auto-batching really happen?

If you use map_query() and your callbacks block to get an entity from the datastore, these gets will indeed be batched together, because query results themselves come in batches. For each batch of results (typically 20), map_query() first creates a tasklet for each result; each tasklet then runs until its first blocking operation (a yield), and once there are no more immediately-runnable generators the auto-batcher (which runs at a lower priority) kicks in, collects all the get requests, and puts them in a single batch (a different batch than the query batch).
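The shape of that interaction can be sketched with a toy scheduler and batcher. None of this resembles ndb's real code; ToyBatcher, run, and handle are names invented for the illustration, and a plain dict stands in for the datastore:

```python
from collections import deque

# Toy sketch of the auto-batching idea: tasklets queue up individual
# get requests, and only once no tasklet can make progress does the
# batcher flush them all as a single batched fetch.

class ToyBatcher:
    def __init__(self, datastore):
        self.datastore = datastore
        self.pending = []   # (key, toy-future) pairs awaiting a fetch
        self.batches = []   # keys fetched per flush, for illustration

    def get_async(self, key):
        fut = {}            # toy future: result appears under "result"
        self.pending.append((key, fut))
        return fut

    def flush(self):
        keys = [k for k, _ in self.pending]
        self.batches.append(keys)
        for key, fut in self.pending:
            fut["result"] = self.datastore[key]  # one batched lookup
        self.pending = []

def run(tasklets, batcher):
    # Toy scheduler: advance every tasklet to its next yield, then
    # flush the batcher once nothing can run without a result.
    queue = deque(tasklets)
    while queue:
        still_running = deque()
        for gen in queue:
            try:
                next(gen)
                still_running.append(gen)
            except StopIteration:
                pass
        if batcher.pending:
            batcher.flush()
        queue = still_running

def handle(key, batcher, out):
    fut = batcher.get_async(key)
    yield                   # block; the batcher flushes while we wait
    out.append(fut["result"])

datastore = {"k1": "v1", "k2": "v2", "k3": "v3"}
batcher = ToyBatcher(datastore)
out = []
run([handle(k, batcher, out) for k in datastore], batcher)
print(batcher.batches)  # [['k1', 'k2', 'k3']] -- one batch of three gets
```

All three gets land in a single batch because the batcher only runs once every tasklet has blocked.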

TODO: Add a lifetime for cache keys

It'll be a per-context setting for sure. A problem with making it per-key would be that it reduces batching opportunities since memcache put_multi() takes a single timeout for all the keys you are putting. But time will tell if there are compelling use cases for per-key variable timeouts; if they are needed it's not hard to add them.

Why is the name parameter to get_or_insert() not called id?

Because it doesn't make sense to use get_or_insert() with numeric ids. The whole point of get_or_insert() is that it either creates the entity or returns the one that already exists, using an id that you provide. But this only makes sense with a string id: integer ids are assigned by the datastore. If the id is an integer and you know its value, the entity must already have been created and assigned that id by the datastore, so you should just use key.get().
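The semantics can be illustrated with a toy version over a plain dict standing in for the datastore (store and get_or_insert here are hypothetical names, not ndb's implementation):

```python
# A toy illustration of get_or_insert() semantics: the caller names
# the entity with a string, and the call is create-or-fetch.

store = {}

def get_or_insert(name, **defaults):
    # The caller must supply a *string* name. Integer ids are handed
    # out by the datastore itself, so they make no sense here.
    assert isinstance(name, str)
    if name not in store:
        store[name] = dict(defaults)   # create with the given defaults
    return store[name]                 # ...or return the existing one

a = get_or_insert("alice", views=0)
b = get_or_insert("alice", views=99)   # already exists: defaults ignored
print(a is b, a["views"])  # True 0
```

The second call's defaults are ignored precisely because the entity already exists; that is the contract that makes the call idempotent.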

Why don't you use api_version in app.yaml to indicate the db version to use?

In part because that parameter is tied to a very different mechanism in the App Engine runtime, which would require us to maintain two copies of the entire runtime library (not just the db subtree). Also because that parameter is global, i.e. applies to the entire application, not just to the db library. But mostly because it would just cause more confusion -- someone reading the source code of your application trying to help you debug a problem might not realize that you are using a different version of the library. By requiring the application to use a different import path, it is always clear to the reader which version of the API is being used. Finally, this allows an application to migrate from the old to the new API piecemeal -- you could keep using the old API for old code while starting to use the new API for new code, in the same application (as long as you don't mix the old and the new API for the same model).

Notes on Async APIs in General

Async calls do not use threads; it's all event-driven.

You are not charged for CPU time when you're blocked waiting for the next async call to complete. Of course the real time clock ticks on, but this does not affect what you are charged. The real time taken by requests may factor into how future requests will be scheduled, but using async APIs most likely makes your total elapsed real time smaller, which will improve your scheduling. :-)

Under the hood, all App Engine APIs are asynchronous (event-driven). However, the documented "stable" API we published is synchronous. This means that it is easy for us (the App Engine team) to add an async variant for any existing API. We will do so as time permits and as the need becomes clear; if you need a specific API to have an async version, please write up your use case and file it as a feature request in our issue tracker, then start lobbying for others to star it. We do not recommend reverse-engineering our code (it's all in the SDK) and building your own asynchronous version of an API; we cannot guarantee that the undocumented low-level API on which you would have to build will remain stable.