Checkpointing

Actors provides a simple mechanism for taking user-defined checkpoints/copies of variables and restoring them on demand. This can be used together with actor and task supervision to restart failed computations from a previously saved state.

Why do we need checkpointing in addition to supervision? If, for example, a computing node fails, a supervisor does not receive an error message containing the last state of the failed actor. To recover, it must be able to restart the actor on another node with a state that was saved somewhere else.

This is experimental and in active development!

We conceive actor-based checkpointing as sending copies of variables over a link (a Channel) to a checkpointing actor. There is no stopping of an actor/task/process/application by an outer system and no taking of copies of an actor's (shared) state.

The user must checkpoint critical variables regularly and restore them within `init!` and restart callbacks (described in supervision). This enables computations to recover from node failures. We still have to develop appropriate patterns for using this kind of checkpointing.
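
A minimal sketch of this pattern could look as follows. The exact signatures are assumptions based on the API table below: we assume `checkpointing()` starts a checkpointing actor and returns a link to it, `checkpoint(cp, key, value)` stores a copy of `value` under `key`, and `restore(cp, key)` returns the last stored copy; `work` and `myinit` are hypothetical user functions.

```julia
using Actors

cp = checkpointing()           # assumed: start a checkpointing actor

# hypothetical worker behavior checkpointing its critical variable
function work(x, cp)
    x = x + 1                  # some computation step
    checkpoint(cp, :x, x)      # regularly save a copy of the critical variable
    return x
end

# hypothetical init!/restart callback restoring the last saved value
myinit(cp) = restore(cp, :x)
```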

Multi-level Checkpointing

Checkpointing actors can be grouped into a hierarchy of several levels. This allows their subordinate actors or tasks to be restarted from a common state and resembles multi-level checkpointing:

Multi-level checkpointing ... uses multiple types of checkpoints that have different levels of resiliency and cost in a single application run. The slowest but most resilient level writes to the parallel file system, which can withstand a failure of an entire machine. Faster but less resilient checkpoint levels utilize node-local storage, such as RAM, Flash or disk, and apply cross-node redundancy schemes. [1]

All checkpointing actors hold their checkpoints in an in-memory `Dict`, which they can save to a file and reload. They are isolated from one another, and their location (e.g. a worker pid) can be chosen according to redundancy and efficiency considerations.

A hierarchy of checkpointing actors may look like the following:

(figure: checkpointing hierarchy)

C1 .. C3 take level 1 checkpoints from their connected worker actors A1 .. A17. C4 is at level 2 and regularly updates its dictionary with the checkpoints of actors A1 .. A10. C5 regularly updates from C3 and C4. Depending on the failure and the supervisory strategy, the system can recover from checkpoints at various levels. C5 might, for example, save its checkpoints to disk, so the whole system could be restarted from them.
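
Such a hierarchy could be set up roughly as follows. This is only a sketch: the argument conventions are assumptions, namely that `checkpointing(level)` starts an actor at the given level and that `register_checkpoint(lower, higher)` registers a lower-level actor with a higher-level one.

```julia
using Actors

c1 = checkpointing(1)          # level 1, close to worker actors A1 .. A5
c2 = checkpointing(1)          # level 1, for A6 .. A10
c4 = checkpointing(2)          # level 2, possibly on another node

register_checkpoint(c1, c4)    # c4 aggregates the checkpoints held by c1
register_checkpoint(c2, c4)    # ... and by c2
```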

Level One

A checkpointing actor at level=1 operates as a server for worker actors or tasks. On checkpoint it stores values from current computations under a key and can restore them on demand. It usually resides on the same node as its clients and can take frequent and inexpensive checkpoints.
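
For illustration, a level-1 actor could be used like this (the key names are arbitrary, and the signatures of `checkpoint`, `restore` and `get_checkpoints` are assumptions):

```julia
using Actors

c1 = checkpointing(1)                 # assumed: start a level-1 checkpointing actor

checkpoint(c1, :temperature, 21.5)    # store values under keys
checkpoint(c1, :pressure, 1.013)

restore(c1, :temperature)             # ⇒ 21.5, the last stored copy
get_checkpoints(c1)                   # ⇒ the actor's in-memory Dict of checkpoints
```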

Higher levels

Checkpointing actors at levels > 1 aggregate and save checkpoints from the actors below them, periodically or on demand. They usually reside on other nodes and save checkpoints to a file system. If desired, the highest-level actor can save and serve the checkpoints of an entire distributed application.
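
A higher-level actor might then be driven like this. Again only a sketch: the interval unit, the signatures, and the file name are assumptions.

```julia
using Actors

c4 = checkpointing(2)                  # level-2 actor as in the hierarchy above

set_interval(c4, 30)                   # assumed: pull checkpoints every 30 s
start_checkpointing(c4)                # begin periodic aggregation
# ...
save_checkpoints(c4, "level2.chkpnt")  # hypothetical file name
stop_checkpointing(c4)

# later (e.g. after a restart) the checkpoints can be reloaded from the file:
load_checkpoints(c4, "level2.chkpnt")
```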

Checkpointing API

| API function | brief description |
|:-------------|:------------------|
| `checkpointing` | start a checkpointing actor |
| `checkpoint` | tell it to take a checkpoint |
| `restore` | tell it to restore the last checkpoint |
| `register_checkpoint` | register it to another, higher-level one |
| `@chkey` | build a checkpointing key |
| `set_interval` | set the checkpointing periods |
| `get_interval` | get the checkpointing periods |
| `start_checkpointing` | start periodic checkpointing |
| `stop_checkpointing` | stop periodic checkpointing |
| `get_checkpoints` | get (a `Dict` of) all checkpoints |
| `save_checkpoints` | tell it to save the checkpoints to a file |
| `load_checkpoints` | tell it to load them from a file |