Contiki

Introduction

Contiki is an open-source, multitasking, event-driven operating system designed for networked embedded devices. Its lightweight footprint makes it suitable for memory-constrained microcontrollers.

Contiki gathers several independent modules: an event-driven, thread-like multitasking environment built on the protothread library, the uIP TCP/IP stack (v4 and v6), and the Rime stack, a set of wireless sensor network protocols.

Contiki is primarily designed for networking applications, but its event-driven kernel alone can be used for any other kind of application.

You can visit the official website for more information.

This page describes the main functionality of Contiki and what works on the WSN430 platform. The Contiki source files ported to the WSN430 platform can be found in the download section. For setup instructions and examples, see the Contiki example page.

Events

The Contiki kernel is event-driven. The idea of such a system is that every execution of a part of the application is a reaction to an event. The entire application (kernel + libraries + user code) may contain several processes that will execute concurrently.

The different processes usually execute for some time, then wait for events to happen. While waiting, a process is said to be blocked. When an event happens, the kernel executes the corresponding process, passing it information about the event. The kernel is responsible for activating processes when the events they are waiting for happen.

Events can be classified into three kinds:

  • timer events: a process may set a timer to generate an event after a given time; it blocks until the timer expires and then continues its execution. This is useful for periodic actions, or for networking protocols involving e.g. synchronization;
  • external events: peripheral devices connected to I/O pins of the microcontroller with interrupt capabilities may generate events when they trigger interrupts. A push-button, a radio chip or a shock-detecting accelerometer are a few examples of devices that can generate interrupts, and thus events. Processes may wait for such events and react accordingly;
  • internal events: any process can post events to any other process, or to itself. This is useful for interprocess communication, e.g. informing a process that data is ready for computation.

Events are said to be posted: an interrupt service routine, for instance, posts an event to a process when it executes. Events carry the following information:

  • process: the process addressed by the event; it can be either one specific process or all registered processes;
  • event type: the type of the event. The user can define event types so that processes can differentiate them, such as one for when a packet is received and one for when a packet is sent;
  • data: optionally, some data may be passed along with the event to the process.

This is the main principle of the Contiki operating system: events are posted to processes, and processes execute when they receive them, until they block waiting for another event.

Processes

Processes are Contiki's equivalent of tasks. The process mechanism uses the underlying protothread library (website), which in turn uses the local continuation library (website). Refer to the given links for more information.

A process is a C function, typically containing an infinite loop and some blocking macro calls. Since the Contiki event-driven kernel is not preemptive, each process, once executed, runs until it blocks waiting for an event. Several macros are defined for the different blocking possibilities. This allows programming state machines as a sequential flow of control. Here is the skeleton of a Contiki process, as provided by the Contiki website:

#include "contiki.h"
 
/* The PROCESS() statement defines the process' name. */
PROCESS(my_example_process, "My example process");
 
/* The AUTOSTART_PROCESS() statement selects what process(es)
   to start when the module is loaded. */
AUTOSTART_PROCESSES(&my_example_process);
 
/* The PROCESS_THREAD() contains the code of the process. */
PROCESS_THREAD(my_example_process, ev, data)
{
  /* Do not put code before the PROCESS_BEGIN() statement -
     such code is executed every time the process is invoked. */
  PROCESS_BEGIN();
  /* Initialize stuff here. */
  while(1) {
    PROCESS_WAIT_EVENT();
    /* Do the rest of the stuff here. */
  }
  /* The PROCESS_END() statement must come at the end of the
     PROCESS_THREAD(). */
  PROCESS_END();
}

Being a skeleton, this code obviously does nothing: it just waits for an event, over and over again.

There are some special considerations to keep in mind when programming with protothreads (and thus Contiki processes):

  • local variables are not preserved: when a process calls a blocking macro, the process function actually returns, letting the kernel call other processes. When an event is posted to it, the kernel calls the same process function again, which jumps right back to where it previously returned. Thus, if local variables of the process (not declared static) were assigned values before blocking, those values are not guaranteed to still hold when execution continues after the block! A good workaround is to use static variables in the process function;
  • don't use switch statements: protothreads use local continuations to find their way back to their state after returning, which is implemented with a switch statement. While the corresponding case statements can be placed almost anywhere (inside an if or a while block), they cannot be mixed with another switch statement. It is therefore better not to use switch statements inside a process function.

Please look at the examples section for some simple applications showing the use of processes.

uIP TCP/IP stack

Contiki contains a lightweight TCP/IP stack called uIP (uIP website). It implements RFC-compliant IPv4, IPv6, TCP and UDP (the latter two compatible with both IPv4 and IPv6). uIP is heavily optimized: only the required features are implemented. For instance, there is a single buffer for the whole stack, used both for received packets and for packets to be sent.

Application API

There are two ways to program an application on top of the uIP stack:

  • raw API: the uIP raw API is well suited to implementing one simple application, e.g. a simple 'echo' server that listens on some TCP port and sends back every piece of data it receives. It becomes harder to use, however, when a more fully-featured application is desired, or when two such applications must run together. Even the TCP connection state machine alone is already somewhat painful to handle;
  • protosocket API: the protosocket library builds on the protothread library to offer a more flexible way of programming TCP/IP applications. It provides an interface similar to standard BSD sockets and allows programming the application within a process.

Refer to the examples section for some explained networking examples.

Lower Layers

Having a functional TCP/IP stack and applications running on top of it is good, but not enough: the uIP stack requires a lower layer (in the OSI model sense) in order to communicate with peers. We distinguish two types of peers:

  • nodes: communication between nodes is achieved over a wireless link. The uIP stack needs to be able to send and receive packets. Depending on the uIP version, Contiki follows different directions.
    • For IPv6, Contiki chose a route-over configuration. uIP6 therefore uses a simple MAC layer called sicslowmac: besides the header compression provided by the 6LoWPAN module, it just forwards packets to/from the radio.
    • For IPv4, however, Contiki chose a mesh-under configuration, implemented with the Rime communication stack. Rime provides mesh routing and route discovery, so uIP uses it to forward packets across the network. From the IP point of view, all the nodes of the sensor network form a local subnetwork, even though multiple radio hops may be required.
  • gateways: to reach a network entity outside the wireless sensor network, a gateway is required: a system that links the wireless sensor network to another network. It is typically a PC in most experiments, although it could be any embedded system. The connection between a PC and a mote is a serial link, and IP packets are exchanged over it using SLIP, which stands for Serial Line IP. On the computer side, a program must run to bridge the serial line and a network interface. The exact functionality depends on the uIP stack version.
    • With uIPv6, the node is loaded with a very simple program that forwards every packet from the radio to the serial link and vice versa. It performs no address comparison and runs no IP stack, apart from the 6LoWPAN header compression/decompression mechanism. From the PC's point of view, this node is simply seen as an Ethernet network interface; the PC does all the work.
    • With uIPv4, it works differently. The node connected to the PC acts as a gateway, with the full IP stack in it. Every time it has a packet to send, it checks the destination IP address: if it belongs to the wireless sensor network subnet range, the packet is sent over the radio; otherwise it is sent to the PC over the serial link. The PC runs a program that creates an IP network interface.

Rime stack

The Rime stack provides a hierarchical set of wireless network protocols, ranging from simple anonymous broadcast to mesh network routing. Implementing a complex protocol (say, multihop mesh routing) is split into several parts, where the more complex modules build on the simpler ones.

Here is the overall organization of the Rime protocols:

[Figure: the Rime hierarchical organization]

And here is a brief description of the different modules of Rime:

  • abc: the anonymous broadcast; it just sends a packet via the radio driver, receives all packets from the radio driver, and passes them to the upper layer;
  • broadcast: the identified broadcast; it adds the sender address to the outgoing packet and passes it to the abc module;
  • unicast: this module adds a destination address to the packets passed to the broadcast module. On the receiving side, if the packet's destination address doesn't match the node's address, the packet is discarded;
  • stunicast: the stubborn unicast; when asked to send a packet to a node, it sends it repeatedly with a given time period until asked to stop. This module is usually not used as is, but by the next one;
  • runicast: the reliable unicast; it sends a packet using the stunicast module while waiting for an acknowledgment packet, and stops the repeated transmission once the acknowledgment is received. A maximum number of retransmissions must be specified in order to avoid infinite sending;
  • polite and ipolite: these two modules are almost identical. When a packet has to be sent within a given time frame, the module waits for half of that time while checking whether it receives the same packet it is about to send. If it does, the packet is not sent; otherwise the packet is sent. This is useful for flooding techniques, to avoid unnecessary retransmissions;
  • multihop: this module requires a route table function. When about to send a packet, it asks the route table for the next hop and sends the packet to it using unicast. When it receives a packet, if the node is the destination, the packet is passed to the upper layer; otherwise it asks the route table again for the next hop and relays the packet to it.

MAC layer

As we've seen, the uIPv6 stack uses a very simple MAC layer called sicslowmac, and there is no real alternative there. The uIPv4 stack, however, uses Rime as its immediate lower layer, which in turn uses a selectable MAC layer.

There are a few MAC layers implemented in Contiki:

  • the nullmac protocol, which, as its name suggests, does nothing but forward packets from the upper layer to the radio driver, and vice versa;
  • the xmac protocol, a preamble-sampling protocol in which nodes periodically listen on the radio channel for a short amount of time;
  • the lpp protocol, a probing protocol in which nodes periodically send a small message announcing that they are listening, listen for a short time afterwards, and go back to sleep.

Working so far

What has been ported and tested so far on the WSN430 platform follows:

  • kernel: the basic Contiki kernel works without problems, including process handling, the various timer mechanisms, etc. Furthermore, all the drivers required by Contiki should be provided by the WSN430 drivers package. In the original Contiki packaging those drivers were part of the system, but for maintainability reasons they have been separated;
  • uIP: the uIP stack works without problems in both its IPv4 and IPv6 versions. Both TCP and UDP are available, but care should be taken with UDP because of the Maximum Segment Size, which is usually set to 128 octets. Communication works directly between nodes over the radio link, or with a PC through a node acting as gateway;
  • Rime: the wireless sensor network protocol set has been partly tested; the simpler modules, such as abc, broadcast, unicast, ipolite, stunicast, runicast and multihop, are working. The complete communication stack, i.e. using the mesh module, has been shown to work with a small number of nodes in the same radio neighborhood. Actual multihop forwarding and routing have not been tested;
  • MAC: the four previously mentioned MAC protocols have been configured for the WSN430 platform: nullmac, xmac, lpp and sicslowmac.

Examples

See the dedicated Contiki Example Page for the list of examples and their explanations.

Download

Here are the archives of the Contiki OS ported to the WSN430, along with the examples:

 
os/contiki.txt · Last modified: 2010/08/25 12:39 by burindes
 