I’ve spent the past week bringing up the ATXMega boards I put together and porting Chibi over to them to test the radio modules. Doing so got me thinking about how I chose the parts and what the landscape for wireless sensor nodes is starting to look like.

The original idea of wireless sensor nodes was that they would be like dust: inconspicuous, ubiquitous, and something you could essentially sprinkle all over the place to monitor an area you’d like to keep tabs on. It was expected that wireless sensor nodes would be small, lean on memory, and extremely low power. Fast forward about seven years, and real deployments of wireless sensor networks are going out, but the usage scenarios are much different from what was first envisioned.

It feels like wireless sensor nodes are going down two different paths. On the one hand, you still have extremely resource-constrained nodes used for specific applications, such as environmental monitoring or proprietary applications where the network and use cases are extremely well defined.

On the other hand, large-scale deployments like the US smart grid (as well as the conversion to smart meters in other countries) are showing that widespread adoption will set a floor on the resources required for a wireless node. One of the biggest resource consumers in a large-scale deployment of wireless nodes is protocol standardization.

There are currently many different standardization efforts for wireless sensors, ranging from internationally recognized standards bodies like the IETF, IEEE, IEC, ANSI, and ISA to industry consortia like the Zigbee Alliance and Bluetooth SIG. This is a very good thing for wireless sensor networks, but standardizing a protocol also requires generalizing it to cover the different use cases it might be deployed in. That generalization is where you start seeing resource requirements like RAM and flash creep up.

It’s also obvious that security is playing a huge role in the actual deployment of large-scale wireless sensor networks, and you can bet that the security requirements will only increase as WSN technology matures. And finally, the application layers of the protocol stacks are growing as the standards are modified to accommodate an increasing number of device types, technologies, and caching schemes, as well as to integrate existing standards such as HTTP (or some variant like CoAP), XML, etc.

If you haven’t noticed yet, the recent press releases about upcoming products from wireless sensor SoC manufacturers show that the devices are getting much bigger. The original devices from back in the early 2000s mostly ran on 8-bit MCUs with 4 to 8 kB RAM and flash sizes varying from 32 to 128 kB. I remember when an MCU with 128 kB flash and 8 kB RAM was considered overkill for a wireless sensor node. These days, you can see a steady march of manufacturers beefing up RAM sizes, flash sizes, and MCU speeds.

As an example, Ember recently released their EM35x chips, which use a 32-bit ARM Cortex-M3 MCU with 12 kB RAM and 192 kB flash; their original SoC had a 16-bit MCU, 5 kB RAM, and 128 kB flash. Atmel recently introduced their first wireless sensor SoC (uhhh…by that I mean a real integrated chip as opposed to two die glued together) with 16 kB RAM and 128 kB flash; their previous multi-chip module series used MCUs with 8 kB RAM. Dust Networks, founded by Kris Pister, the guy that coined the phrase "smart dust", will also be introducing an integrated wireless node featuring an ARM Cortex-M3, a massive 512 kB flash, and an equally massive 72 kB RAM. Contrast this with one of the original wireless transceiver SoCs, Freescale’s MC1321x, announced back in 2004: it featured 1 to 4 kB RAM and 16 to 60 kB flash running on an 8-bit HCS08 microcontroller.

In my opinion, one of the things that caught many wireless SoC manufacturers off guard was the constantly increasing RAM requirements of the WSN protocol stacks. The default build of Atmel’s BitCloud requires around 9 kB RAM and 90 kB flash, which I think are fairly reasonable numbers for a Zigbee stack. The Contiki 2.4 IPv6 webserver build requires around 11 kB RAM and 53 kB flash. You can see the craftsmanship in the small flash size, but even the amazing developers on the Contiki project still need an ever-increasing amount of RAM.

This leads me to the point of this post, which is that the WSN industry is still in a state of flux as people try to get a handle on what the resource boundaries are. WSN SoC manufacturers also look like they’re racing to catch a moving target as the memory requirements keep creeping up. I don’t think anyone has a clear picture of what things will look like when the dust settles. That’s why I decided to make modular WSN platforms, and it’s definitely why I chose the ATXMega and its external memory bus…

P.S. ...the people involved in WSN are completely nuts. That’s probably why it’s so fun…