The Mystery of the Increasing RAM
Written by Akiba   
Wednesday, 16 June 2010

I’ve been spending the past week bringing up the ATXMega boards that I put together and porting Chibi over to them for testing the radio modules. While doing so, it also got me thinking about how I chose the parts and what the landscape for wireless sensor nodes is starting to look like.

The original idea of wireless sensor nodes was that they would be like dust. They’d be inconspicuous, ubiquitous, and you could essentially just sprinkle a couple all over the place to monitor some area that you’d like to keep tabs on. It was expected that wireless sensor nodes would be small, lean on memory resources, and extremely low power. Fast forward about seven years and we see that there are some real deployments going out with wireless sensor networks and the usage scenarios are much different than what was first envisioned.

It feels like wireless sensor nodes are going down two different paths. On the one hand, you still have extremely resource-constrained nodes that are being used for specific applications, like environmental monitoring or proprietary systems where the network and use cases are extremely well defined.

On the other hand, large scale deployments like the US smart grid (as well as the conversion over to smart meters in other countries) are showing that widespread adoption will put a limit on the minimum amount of resources required for a wireless node. One of the biggest resource consumers of a large scale deployment of wireless nodes is protocol standardization. 

There are currently many different standardization efforts for wireless sensors, ranging from the internationally recognized standards bodies like the IETF, IEEE, IEC, ANSI, and ISA to the more proprietary ones like Zigbee and Bluetooth. This is a very good thing for wireless sensor networks, but standardizing a protocol also requires generalizing it to account for the different use cases it might find itself deployed in. That generalization is where you start seeing the resource requirements like RAM and flash creep up.

It’s also obvious that security is playing a huge role in the actual deployment of large scale wireless sensor networks, and you can bet that the security requirements will only increase as WSN technology matures. And finally, the application layer of the protocol stacks is growing as the standards are modified to accommodate an increasing number of device types, technologies, and caching, as well as integrate existing standards such as HTTP (or some variant like CoAP), XML, etc.

If you haven’t noticed yet, the recent press releases about upcoming products from wireless sensor SoC manufacturers are showing that the devices are getting much bigger. The original devices from back in the early 2000s mostly ran on 8-bit MCUs with 4 to 8 kB RAM and flash sizes varying from 32 to 128 kB. I remember when an MCU with 128 kB flash and 8 kB RAM was considered overkill for a wireless sensor node. These days, you can see a steady march of manufacturers beefing up the RAM sizes, flash sizes, and MCU speeds.

As an example, Ember recently released their EM35x chips which use the ARM Cortex M3 32-bit MCU with 12 kB RAM and 192 kB flash. Their original SoC had a 16-bit MCU, 5 kB RAM, and 128 kB flash. Atmel recently introduced their first wireless sensor SoC (uhhh…by that I mean a real integrated chip as opposed to two die glued together) with 16 kB RAM and 128 kB flash. Their previous multi-chip module series used MCUs with 8 kB RAM. Dust Networks, founded by Kris Pister, the guy who coined the phrase "smart dust", will also be introducing an integrated wireless node which features an ARM Cortex M3, a massive 512 kB flash, and an equally massive 72 kB RAM. You can contrast this with one of the original wireless transceiver SoCs from Freescale, the MC1321x announced back in 2004. It featured 1 to 4 kB RAM and 16 to 60 kB flash running on an HCS08 8-bit microcontroller.

In my opinion, one of the things that caught many wireless SoC manufacturers off guard was the constantly increasing RAM requirements of the WSN protocol stacks. The default build of Atmel’s Bitcloud requires around 9 kB RAM and 90 kB flash which I think are fairly reasonable numbers for a Zigbee stack. The Contiki-2.4 IPv6 webserver build requires around 11 kB RAM and 53 kB flash. You can see the craftsmanship in the small flash size, but even the amazing developers on the Contiki project still need an ever increasing amount of RAM.

This leads me to the point of this post which is that the WSN industry is still in a state of flux as people try to get a handle on what the resource boundaries are. Also, WSN SoC manufacturers look like they’re racing to catch a moving target as the memory requirements keep creeping up. I don’t think anyone has a clear picture of what things will look like when the dust settles. That's why I decided to make modular WSN platforms and it's definitely why I chose the ATXMega and its external memory bus….

P.S. ...the people involved in WSN are completely nuts. That’s probably why it’s so fun…
Comments (4)
This is really nuts all right...
written by André, June 16, 2010
I'll start by noting that the original Ember SoC (EM250/260) is a 16-bit processor: a XAP2B core from Cambridge Consultants.

On topic now, I'm getting involved in a project where I'm facing a hard-to-answer question: "Should this application use a 32-bit MCU, or an 8-bit one?" :-/
I understand that it's more flexible and better for manufacturers to make chips that cover a broader usage range.

The stack size argument you use is quite valid, and the ZigBee stacks out there seem to be getting quite big...
However, in a paper entitled "The 6LoWPAN Architecture" by Geoff Mulligan, there's a table saying that a 6LoWPAN router implementation may require only 22 kB of flash with a mesh topology and 4 kB of RAM.
Will we be seeing that big of a change?
written by Akiba, June 16, 2010
Oops...totally forgot that XAP was a 16-bit MCU. Sorry about that. I just made the correction.

I think that the paper is probably still valid. However 6LoWPAN pretty much refers to the IPv6 layer, translation, and 802.15.4 MAC. You'll need to think about the RPL IPv6 routing protocol, as well as the upper layers which will probably have UDP and other technologies like CoAP. There is also the security implementation which is still being defined and of course, the actual application.

You also don't want to go right up to the RAM boundary because the dynamic RAM usage also needs to be taken into account.
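One common way to see how close you actually get to that boundary is the classic stack-painting trick: fill the stack region with a known pattern at boot, then scan later for the first overwritten word to find the high-water mark. The sketch below simulates the region with a plain array so it's self-contained; on real hardware the bounds would come from linker-script symbols, and the painting would happen in early startup code.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Simulated stack region -- on a real MCU these bounds would come
 * from the linker script, and painting would run in early startup. */
#define STACK_FILL  0xA5
#define STACK_WORDS 256

static uint32_t stack_region[STACK_WORDS];

/* Fill the whole region with the known byte pattern. */
void stack_paint(void)
{
    memset(stack_region, STACK_FILL, sizeof(stack_region));
}

/* Bytes of stack actually touched so far, assuming the stack grows
 * downward (from the top of the region toward index 0): the first
 * non-painted word marks the deepest excursion. */
size_t stack_high_water(void)
{
    size_t i = 0;
    while (i < STACK_WORDS && stack_region[i] == 0xA5A5A5A5u)
        i++;
    return (STACK_WORDS - i) * sizeof(uint32_t);
}
```

Run the node under its worst-case traffic for a while, read the watermark, and you know how much of the "free" RAM was ever really free.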
mc13224v still has plenty of RAM left
written by Mariano Alvira, June 17, 2010
I think Freescale got it right with the mc13224v in going with all RAM (and mirroring in the code from serial flash on boot). With 96 kB RAM, I still have 30 kB free after loading Contiki IPv6/RPL, and that's with a 30-node routing table.

Plenty of room left for whatever else you need.

12 kB on the EM35x just seems puny.
written by Akiba, June 17, 2010
Ha ha ha...stop bragging. You just got lucky their engineer accidentally designed in serial flash ;)

Actually, Mariano at Redwire is contributing a lot of work to Contiki using the MC13224. You can also pick up some nodes to try out here:

