In-Place Python Script Reloading


In order to improve the speed of development iteration, several languages have support for modifying functions and data structures without terminating the runtime environment.

This type of feature is generally unnecessary when developing simple command-line scripts in Python, and Python's built-in reload function is often adequate when iterating in the interactive shell.

Unfortunately, for longer-lived applications such as web servers or persistent simulations, the built-in reload function suffers from several limitations.  In particular, only a single module may be reloaded at a time, and objects and other modules may continue to refer to elements of the old module, causing inconsistent behavior.
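The stale-reference problem can be reproduced in a few lines (the module name and contents below are invented for the demonstration):

```python
import importlib
import os
import sys
import tempfile

sys.dont_write_bytecode = True          # always re-execute source on reload

# Create a throwaway module on disk.
moddir = tempfile.mkdtemp()
path = os.path.join(moddir, "demo_mod.py")
with open(path, "w") as f:
    f.write("class Greeter:\n    def greet(self):\n        return 'old'\n")
sys.path.insert(0, moddir)

import demo_mod
g = demo_mod.Greeter()                  # instance created before the reload

# Edit the module and reload it with the built-in mechanism.
with open(path, "w") as f:
    f.write("class Greeter:\n    def greet(self):\n        return 'new'\n")
importlib.invalidate_caches()
importlib.reload(demo_mod)

print(g.greet())                        # still 'old': g holds the old class
print(isinstance(g, demo_mod.Greeter))  # False: the class object was replaced
```

After the reload, demo_mod.Greeter is a brand-new class object, but the existing instance g still refers to the old one.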

This document proposes an extended reloading mechanism that addresses some of these limitations.  The general approach is to fix up modified types and functions in place through an extension to the gc module.


Rollback Importer

For some simple use cases, there is an existing documented approach to reloading: the Rollback Importer, currently used by PyUnit.  The approach relies on establishing a checkpoint at some point during execution (before the modules which will be reloaded are imported for the first time).  To reload a module, the module state is rolled back to the checkpoint, and modules can then be imported again in a clean state.
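The core of the checkpoint-and-rollback idea can be sketched in a few lines (an illustration of the technique, not PyUnit's actual implementation):

```python
import sys

class RollbackImporter:
    """Forget every module imported after the checkpoint."""

    def __init__(self):
        # Checkpoint: record which modules are already loaded.
        self._checkpoint = set(sys.modules)

    def rollback(self):
        # Evict modules imported since the checkpoint; a subsequent
        # import will re-execute them in a clean state.
        for name in list(sys.modules):
            if name not in self._checkpoint:
                del sys.modules[name]
```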

This approach has two main limitations.  First, a checkpoint must be established, and any state changes in modules that occur after the checkpoint will be lost when the module state is rolled back.  This means that modules which store long-term state are ineligible for reloading using this method.

Second, a rollback importer does nothing to address references to classes or functions defined in reloaded modules that are stored outside of the modules that are rolled back.  As a result the checkpoint must be selected conservatively.

Late Binding

According to this talk, this strategy is used by CCP for EVE Online.  EVE Online has discarded the built-in Python module framework and replaced it with a name-based system, in which classes, functions and constants are referred to by a fully qualified name that is independent of the file in which the object is declared.  As long as references to objects are looked up by name each time they are used, there is no risk of retaining references to stale objects.

Unfortunately this approach introduces a performance penalty when looking up late bound references.  It also does nothing to address fixing up references to objects that are not late bound, for example in local stack frames.
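The heart of such a name-based scheme is a helper that resolves a fully qualified name on every use (the function name here is ours):

```python
import importlib

def resolve(qualified_name):
    # Resolve "package.module.attr" freshly each time it is used, so the
    # caller never holds a direct reference that can go stale.
    module_name, _, attr = qualified_name.rpartition(".")
    module = importlib.import_module(module_name)
    return getattr(module, attr)
```

Because each call re-resolves the name, a reload of the defining module is picked up automatically, at the cost of a lookup on every use.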

Dependency-based Reloading

Another reloading strategy is based on identifying the modules that depend on the module to be reloaded.  After a module is reloaded all modules that depend on it are reloaded (recursively).  The goal of this approach is to eliminate references to the old module by reloading any modules that may contain these stale references.
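The first step of this strategy, identifying which loaded modules depend on a given module, can be approximated by scanning module namespaces (a heuristic sketch; a real implementation would track imports more precisely):

```python
import sys

def _module_of(value):
    # Defensive attribute access: arbitrary objects may misbehave.
    try:
        return getattr(value, "__module__", None)
    except Exception:
        return None

def dependents(target):
    # Loaded modules whose namespace refers to `target` itself or to an
    # object (class, function) defined in `target`.
    found = set()
    for name, mod in list(sys.modules.items()):
        if mod is None or mod is target:
            continue
        for value in getattr(mod, "__dict__", {}).values():
            if value is target or _module_of(value) == target.__name__:
                found.add(name)
                break
    return found
```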

This approach addresses one fundamental case of inter-module references: reloading a module that contains a class will correctly reload a module containing a derived class.  A dependency-based reloader does not address references that leak outside a module, for example when an object is passed into a function in another module which then stores it locally.  

Dealing with state stored within a module (such as caches) must be handled on a case-by-case basis.  An additional shortcoming is that reloading a module which is heavily referenced can cause many other modules to reload.  This has performance implications, and exacerbates the problem of needing to preserve module state.


Overview

In-place reloading has four stages:

  1. Reloading a module or set of modules
  2. Identifying objects in the old and new modules that are logically identical
  3. Remapping references from old objects to new objects
  4. Fixing up invariants

Detecting when files have changed in order to initiate the reload process is currently out of scope for this proposal.  The functionality described here would be located in a new module, tentatively named hot (e.g., hot.reload).

Each of these stages will be described in more detail in the following sections.


Reloading

We first save a copy of the objects in the old version of the module, and then reload the module using the built-in Python reloading mechanism.  If an exception is raised during the import process, the module is rolled back to its previous state and the remainder of the process is not run.

It will also be possible to reload a set of modules simultaneously.  Each module will be reloaded individually using the built-in reload, and the remaining stages will happen in one pass.
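A sketch of this save-and-rollback step, built on the standard importlib.reload (the name checked_reload is ours):

```python
import importlib

def checked_reload(module):
    # Save a shallow copy of the module namespace so we can roll back
    # if re-executing the module raises.
    saved = dict(vars(module))
    try:
        return importlib.reload(module)
    except Exception:
        vars(module).clear()
        vars(module).update(saved)
        raise
```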

Mapping Old to New

During this stage we determine which objects in the old module are logically identical to objects in the new module.  There are several ways that this could be accomplished, so we will start with a relatively simple approach.  All classes and functions declared in the relevant modules with identical names will be correlated.  The same process is applied recursively to the contents of correlated classes.

This approach could be extended in the future to also support remapping other types of objects.  Care must be taken not to correlate objects that should not be remapped (for instance, integers are unsafe to remap).

The output of this process is a dict-like object mapping old objects to new objects (unlike a dict, it is keyed by Python object id() rather than hash(), to support non-hashable objects as keys).
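The identity-keyed mapping and the name-based correlation can be sketched as follows (the names IdMapping and correlate are ours):

```python
import inspect

class IdMapping:
    """Dict-like mapping keyed by object identity (id()), so that
    non-hashable objects can be used as keys."""

    def __init__(self):
        # id(old) -> (old, new); keeping `old` alive guarantees its id
        # is not reused by another object.
        self._data = {}

    def __setitem__(self, old, new):
        self._data[id(old)] = (old, new)

    def __getitem__(self, old):
        return self._data[id(old)][1]

    def __contains__(self, old):
        return id(old) in self._data

def correlate(old_ns, new_ns, mapping):
    # Pair classes and functions with identical names; recurse into the
    # contents of correlated classes.
    for name, old in old_ns.items():
        new = new_ns.get(name)
        if new is None or new is old:
            continue
        if inspect.isclass(old) and inspect.isclass(new):
            mapping[old] = new
            correlate(vars(old), vars(new), mapping)
        elif inspect.isfunction(old) and inspect.isfunction(new):
            mapping[old] = new
    return mapping
```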

Remapping References

All references to old instances now need to be replaced with references to new instances.  This requires an extension to the Python C API.  We propose adding a new method to the gc module:

gc.remap(mapping, ignore=None)

Update references to heap objects using the supplied mapping.

References inside objects in ignore are skipped.

This method will use the visitor mechanism to traverse all objects that are tracked by the GC.  The signature for visitor methods must be changed:

- typedef int (*visitproc)(PyObject *, void *);
+ typedef int (*visitproc)(PyObject **, void *);

The visit function used by the remap function will actually change the value of members that are visited.  Here is an example of how this method might be written:

static int
remapvisit(PyObject **obj, PyObject *mapping)
{
    if (*obj && PyMapping_HasKey(mapping, *obj)) {
        /* The new reference returned by PyObject_GetItem is */
        /* taken by (*obj).  The old reference is decremented only */
        /* after the new value has been assigned. */
        PyObject *old_value = *obj;
        PyObject *value = PyObject_GetItem(mapping, *obj);
        (*obj) = value;
        Py_DECREF(old_value);
    }
    return 0;
}


Under typical conditions the mapping will hold references to all keys, so the last reference to a key will not be removed until the remapping process has finished.  Note that a limitation of using the GC traverse mechanism to remap references is that the target of weak refs will not be remapped.  For reloading this is not an issue because we do not expect weak refs to classes and functions declared at module scope to be a common use case.
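Although the proposed gc.remap must be implemented in C, its effect can be approximated in pure Python with gc.get_referrers, patching only dict values and list items (a rough illustration, far less complete than the real traversal):

```python
import gc

def remap_approx(mapping, ignore=()):
    # Crude Python approximation of the proposed gc.remap(): for each
    # old object, find its referrers and patch dict values and list
    # items in place.  (The real mechanism would also handle instance
    # attributes, tuples, cells, and so on via tp_traverse.)
    for old, new in mapping.items():
        for ref in gc.get_referrers(old):
            if ref is mapping or any(ref is skip for skip in ignore):
                continue
            if isinstance(ref, dict):
                for key, value in list(ref.items()):
                    if value is old:
                        ref[key] = new
            elif isinstance(ref, list):
                for i, value in enumerate(ref):
                    if value is old:
                        ref[i] = new
```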

Another consequence of this approach is that Py_VISIT now takes the address of the op argument, and so needs to be called on the actual member that is being visited, for instance:

PyObject *x = self->x;
Py_VISIT(x);            /* Bad! */
Py_VISIT(self->x);      /* Good */

This requirement necessitates several minor changes for remapping to work correctly, mainly in dictobject.c and typeobject.c.  Other C extension modules which are not part of the standard Python distribution may also require changes to support remapping properly.  The signature of the Py_VISIT macro is unchanged so in typical cases extension modules will not require code changes, but they will require a recompile because the signature of traverseproc is changed (via visitproc).

As a note, the gc.remap function is extremely powerful and dangerous (e.g., gc.remap({True: False})), but the same is true of many of the methods in the gc module, such as gc.get_referrers.  In any environment where security is important, access to the gc module should already be restricted.

Fixing Invariants

Remapping may invalidate invariants by changing the contents of immutable objects.  In particular, the hash values of dictionary keys may be changed by the reloading process, which can invalidate dictionaries.  For example:

>>> class A:
...     def __hash__(self): return 3
...
>>> class B:
...     def __hash__(self): return 4
...
>>> a = A()
>>> d = {a: None}
>>> hash(a)
3
>>> a in d
True
>>> gc.remap({A: B})
>>> hash(a)
4
>>> a in d
False
>>> a in list(d)
True
At this point the dictionary is in an invalid state.  This is a somewhat contrived case, but similar cases could easily arise with dictionaries keyed by types defined in modules that are reloaded.

There are several approaches we could take to solve this problem.  The initial implementation will use a simple option: after reloading any module, all dictionaries and dictionary-like objects will be rehashed.  We will use the list of all classes registered as subclasses of collections.Set and collections.Mapping (set, frozenset, dict, and many containers in collections) plus _weakrefset.WeakSet.  Another potential approach involves determining which objects are used as keys in dictionaries, recording their hash values before and after remapping, and rehashing only the dictionaries that need to be repaired.
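Rehashing a dictionary amounts to rebuilding it so each entry lands in the slot matching its current hash value; a minimal sketch (the broken-lookup behavior shown in the test is CPython-specific):

```python
def rehash_dict(d):
    # Rebuild the dict in place: pull the entries out (by reference) and
    # reinsert them, so they are stored under their current hash values.
    items = list(d.items())
    d.clear()
    d.update(items)
```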

Additionally, if a module defines a __reload__ function, we will call this function on the reloaded module with the old module data before generating the mapping from old module data to new.  This will allow arbitrary overriding of reloading behavior in the hopefully few cases where this is necessary.
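The hook could be wired in roughly like this (a sketch; reload_with_hook is our name, and the real implementation would run the hook before generating the mapping):

```python
import importlib

def reload_with_hook(module):
    # Proposed behavior (sketch): save the old namespace, reload, then
    # let a module-level __reload__ hook migrate state from the old
    # namespace into the freshly executed module.
    old_namespace = dict(vars(module))
    new_module = importlib.reload(module)
    hook = getattr(new_module, "__reload__", None)
    if hook is not None:
        hook(old_namespace)
    return new_module
```

A module could use this, for example, to carry a cache across reloads instead of losing it.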


Limitations

The rules defined above for which objects are remapped are fairly basic: only classes and functions are remapped.  One consequence is that other instances will not be remapped, even if identified by the same name.  This is intentional for many types of instances; for example, we would not want to remap all instances of the “one” singleton to the “two” singleton.  On the other hand, there may be some types of instances that should be remapped.

As a general rule, users of this reloading functionality are discouraged from using module level singleton objects.  There are several reasons to avoid these, including problems with unit testing and circular imports.  As a future extension, we could provide a mechanism to register types that should participate in remapping.  For the first pass, we believe the __reload__ mechanism will be sufficient.

Remapping strings is generally unsafe because interning can cause identical string references to exist in unrelated contexts.  Likewise, remapping cached atomic objects such as small integers is not a good idea.

Remapping of executing functions in threads, tasklets or generators that are currently yielding is not fully supported: a currently running instance of the original method will run to completion, and the new version will only run the next time the method is called.  Working around this limitation in a general way would be very difficult, due to the difficulty of identifying where the instruction pointer should be placed in the remapped method.  A fairly simple approximate approach might be to raise a ReloadException in suspended contexts, and then, at the interpreter level, re-call the last exited method once all stale methods have left the stack.

Reloading a class does not re-run the __init__ method on instances of the class, so new members that are added will not be present on old instances of the class.  To work around this issue, new members can be (temporarily) accessed using getattr to check for presence of the member, or alternatively by using a property that wraps checking for the member.  If this is problematic, a future extension could also allow adding a hook to a class that would be called on all instances when the class is reloaded.  This could use a class __reload__ function, or possibly a decorator (the decorator name below is illustrative):

class Foo:
    @hot.on_reload
    def upgrade(self):
        ...
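The effect, and the getattr workaround, can be simulated by swapping an instance's class by hand (a stand-in for what in-place remapping does to instances; the class names are invented):

```python
class WidgetV1:
    def __init__(self):
        self.size = 10

w = WidgetV1()                  # instance created before the "reload"

class WidgetV2:
    def __init__(self):
        self.size = 10
        self.color = "red"      # member added in the new version

w.__class__ = WidgetV2          # remap the instance to the new class;
                                # __init__ is NOT re-run

# Old instances lack the new member, so access it defensively:
color = getattr(w, "color", "red")
```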


Due to these and other limitations, reloading is intended as a development aid, and not a feature intended for use in production software.


Performance

We have done some early performance testing of this feature.  The performance of gc.remap() scales roughly with the number of references in the Python GC heap.  Remapping takes roughly 190 ns per reference on a 2.53 GHz Pentium 4 processor, or about 1.9 s for a heap with 10 million references.  It is difficult to estimate the expected typical number of references in a heap, but performance does not seem like a significant obstacle at this time.

The modifications to the visitproc signature have a minor performance impact on the garbage collector.  In a release Python build calling gc.collect on a heap with 30 million references is about 2% slower.  On a 2.13 GHz Core 2 processor, the gc.collect time per reference increases from about 12.75 ns to 13 ns per reference.


Alternatives

Instead of modifying the Py_VISIT macro and using tp_traverse for remapping, we could require C extensions to implement a third gc function (in addition to tp_traverse and tp_clear).  This has the advantage of eliminating the performance impact on the normal GC, and also makes supporting gc.remap explicit.  The primary disadvantage is that C extension types would need to implement another gc support method, which in nearly all cases will be structured identically to the traverse method.


Acknowledgements

Much helpful feedback on this proposal was provided by Jon Parise, Max Rebuschatis, Ryan Seiff and Geoff Lay.