Best practices for managing RAM usage in high-level applications

Although the Azure Sphere OS uses the Linux kernel as a base, it is important to remember that you are still writing applications for an embedded device with significant RAM constraints. Applying good embedded programming practices will help you create reliable Azure Sphere applications.

Important

To get accurate RAM usage information for your application, it is important that you run your app without debugging. Running your app under the debugger will result in inflated RAM usage, because RAM consumed by the debugging server is included in the reported RAM usage statistics. For more information on memory statistics for applications running on the attached device, see Memory use in high-level applications.

Here are some best practices to follow:

  • Allocate memory upfront (ideally statically) and leave it allocated for the lifetime of your application whenever possible. This greatly increases the determinism of your application's RAM usage and reduces the risk of memory-footprint growth and fragmentation over your application's lifetime.
  • When dynamic allocation is absolutely necessary:
    • Minimize the frequency of heap memory allocations and deallocations performed by the application to reduce the risk of heap fragmentation, for example by using chunk-allocation or memory-pool techniques (see the static memory-pool sketch after this list).
    • When possible, wrap calls to malloc() with calls to memset() to force pages to commit (see the sketch after this list). This helps ensure that if an allocation causes your application to exceed its RAM limit, the OS terminates it immediately and predictably. Waiting to access allocated pages introduces the risk of a delayed out-of-memory crash, which is harder to reproduce and diagnose.
    • Enable heap memory allocation tracking while in development mode.
  • Avoid using Log_Debug with large strings, and remove these calls (for example, with an #ifdef) when not in development mode; the logging-macro sketch after this list shows one approach. Log_Debug causes temporary buffers to be allocated, leading to sudden bursts in RAM usage when it is used with large strings.
  • Use the EventLoop API whenever possible for periodic asynchronous tasks (such as interacting with peripherals) instead of creating threads. Creating threads causes the Linux kernel to allocate additional memory attributed to your application, and it reduces the determinism of your app by increasing the probability that the OS scheduler switches between multiple distinct operations, any of which may cause your application to exceed its RAM limit. Many of the Azure Sphere sample applications, such as GPIO_HighLevelApp, demonstrate how to use EventLoop; see also the EventLoop timer sketch after this list.
  • Avoid premature use of memory caches for values that can be recomputed at runtime.
  • When using libcurl:
    • Tune the maximum socket buffer sizes. The Azure Sphere OS allocates socket buffers that are attributed to your application's RAM usage, so reducing these buffer sizes can be a good way to reduce your application's RAM footprint. Note that making the socket buffers too small will adversely affect libcurl's performance. Instead, tune the maximum buffer sizes for your scenario:

          #include <sys/socket.h>

          #include <curl/curl.h>

          static int sockopt_callback(void *clientp, curl_socket_t curlfd, curlsocktype purpose)
          {
              // Example value only: choose the maximum buffer size (in bytes) that fits your scenario.
              int size = 16 * 1024;
              setsockopt(curlfd, SOL_SOCKET, SO_SNDBUF, &size, sizeof(size));
              setsockopt(curlfd, SOL_SOCKET, SO_RCVBUF, &size, sizeof(size));
              return CURL_SOCKOPT_OK;
          }

          // Place the following along with other calls to curl_easy_setopt
          curl_easy_setopt(curl, CURLOPT_SOCKOPTFUNCTION, &sockopt_callback);


      See the CURLOPT_SOCKOPTFUNCTION libcurl documentation.

      • The higher-level CURLOPT_BUFFERSIZE and CURLOPT_UPLOAD_BUFFERSIZE options can be tuned in a similar way (see the buffer-size sketch after this list).

      • Libcurl also supports overriding its internal memory functions by using curl_global_init_mem and passing in callback functions for malloc, free, realloc, strdup, and calloc. This functionality enables you to keep track of dynamic allocations or even alter behavior. For example, you could allocate a pool of memory upfront and then use these callbacks to allocate libcurl memory from that pool, which can be an effective technique for setting guardrails and increasing the determinism of your application (see the curl_global_init_mem sketch after this list). See the curl_global_init_mem libcurl documentation for more information on how to use these callbacks.

        Note

        This callback mechanism does not cover all memory allocations caused by libcurl, only those made directly by libcurl itself. Specifically, allocations made by wolfSSL underneath are not tracked.
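
The following sketches illustrate several of the techniques above. They are minimal examples rather than production code, and the helper names in them (PoolAlloc, AllocateAndCommit, DEBUG_LOG, and so on) are illustrative, not part of any Azure Sphere or libcurl API.

First, a static memory-pool sketch that covers both upfront allocation and chunk allocation: the pool is allocated statically, so its RAM cost is fixed and visible at build time, and fixed-size blocks are handed out from it instead of calling malloc() and free() repeatedly.

    #include <stdalign.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define BLOCK_SIZE  64   // Size of each fixed chunk, in bytes (multiple of 16 for alignment).
    #define BLOCK_COUNT 32   // Number of chunks in the pool.

    // The whole pool lives in static storage for the lifetime of the application,
    // so it cannot fragment the heap or grow unexpectedly.
    static alignas(max_align_t) uint8_t pool[BLOCK_COUNT][BLOCK_SIZE];
    static bool blockInUse[BLOCK_COUNT];

    // Returns a fixed-size block from the static pool, or NULL if the pool is exhausted.
    static void *PoolAlloc(void)
    {
        for (size_t i = 0; i < BLOCK_COUNT; i++) {
            if (!blockInUse[i]) {
                blockInUse[i] = true;
                return pool[i];
            }
        }
        return NULL;
    }

    // Returns a block to the pool; ptr must have been obtained from PoolAlloc.
    static void PoolFree(void *ptr)
    {
        for (size_t i = 0; i < BLOCK_COUNT; i++) {
            if (ptr == (void *)pool[i]) {
                blockInUse[i] = false;
                return;
            }
        }
    }
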
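Next, a minimal sketch of wrapping malloc() with memset() so that every allocated page is committed at allocation time; AllocateAndCommit is an illustrative helper name.

    #include <stdlib.h>
    #include <string.h>

    // Zero-fills the buffer immediately so all of its pages are committed now.
    // If this allocation pushes the application over its RAM limit, the OS
    // terminates it here rather than at some later, harder-to-diagnose access.
    static void *AllocateAndCommit(size_t size)
    {
        void *buffer = malloc(size);
        if (buffer != NULL) {
            memset(buffer, 0, size);
        }
        return buffer;
    }
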
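A logging-macro sketch that compiles Log_Debug calls out of non-development builds; DEBUG_LOG and DEVELOPMENT_BUILD are illustrative names that you would define in your own code and build configuration.

    #include <applibs/log.h>

    // In development builds, DEBUG_LOG forwards to Log_Debug; in all other
    // builds it expands to nothing, so no temporary buffers are allocated.
    #ifdef DEVELOPMENT_BUILD
    #define DEBUG_LOG(...) Log_Debug(__VA_ARGS__)
    #else
    #define DEBUG_LOG(...) ((void)0)
    #endif

    // Usage: DEBUG_LOG("Sensor payload: %s\n", largeJsonString);
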
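An EventLoop timer sketch that runs a periodic task on the single event-loop thread instead of creating a new thread. It assumes the <applibs/eventloop.h> API (EventLoop_Create, EventLoop_RegisterIo, EventLoop_Run) combined with a POSIX timer file descriptor, as the Azure Sphere samples do; check the applibs header for the exact callback signature. Error handling is omitted for brevity.

    #include <errno.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/timerfd.h>

    #include <applibs/eventloop.h>

    // Invoked by the event loop whenever the timer file descriptor becomes readable.
    static void TimerEventHandler(EventLoop *el, int fd, EventLoop_IoEvents events, void *context)
    {
        uint64_t expirations = 0;
        // Consume the expiration count so the descriptor stops signaling.
        if (read(fd, &expirations, sizeof(expirations)) == -1) {
            return;
        }
        // Do the periodic work (for example, poll a peripheral) here.
    }

    int main(void)
    {
        EventLoop *eventLoop = EventLoop_Create();

        // One-second periodic timer backed by a timer file descriptor.
        int timerFd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK | TFD_CLOEXEC);
        struct itimerspec period = {.it_value = {.tv_sec = 1}, .it_interval = {.tv_sec = 1}};
        timerfd_settime(timerFd, 0, &period, NULL);

        // Dispatch timer expirations on the event-loop thread; no extra thread is created.
        EventRegistration *reg =
            EventLoop_RegisterIo(eventLoop, timerFd, EventLoop_Input, TimerEventHandler, NULL);

        while (true) {
            // Block until one event has been processed; retry if interrupted by a signal.
            if (EventLoop_Run(eventLoop, -1, true) == EventLoop_Run_Failed && errno != EINTR) {
                break;
            }
        }

        EventLoop_UnregisterIo(eventLoop, reg);
        EventLoop_Close(eventLoop);
        close(timerFd);
        return 0;
    }
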
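A buffer-size sketch for the higher-level libcurl options; the 16 KB value is purely illustrative, and the right size depends on your scenario.

    #include <curl/curl.h>

    // Reduce libcurl's per-handle transfer buffers from their defaults to a
    // size that fits the workload; values are in bytes.
    static void TuneCurlBufferSizes(CURL *curl)
    {
        curl_easy_setopt(curl, CURLOPT_BUFFERSIZE, 16384L);        // receive (download) buffer
        curl_easy_setopt(curl, CURLOPT_UPLOAD_BUFFERSIZE, 16384L); // upload buffer
    }
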
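Finally, a curl_global_init_mem sketch that routes libcurl's allocations through custom callbacks. Here the callbacks only count allocations; the same hooks could instead serve requests from a pre-allocated pool to put a hard cap on libcurl's heap usage. As the note above says, allocations made by wolfSSL are not covered.

    #include <stdlib.h>
    #include <string.h>

    #include <curl/curl.h>

    // Number of allocations requested directly by libcurl (excludes wolfSSL); illustrative only.
    static size_t curlAllocations = 0;

    static void *TrackedMalloc(size_t size)
    {
        curlAllocations++;
        return malloc(size);
    }

    static void TrackedFree(void *ptr)
    {
        free(ptr);
    }

    static void *TrackedRealloc(void *ptr, size_t size)
    {
        return realloc(ptr, size);
    }

    static char *TrackedStrdup(const char *str)
    {
        size_t len = strlen(str) + 1;
        char *copy = TrackedMalloc(len);
        if (copy != NULL) {
            memcpy(copy, str, len);
        }
        return copy;
    }

    static void *TrackedCalloc(size_t nmemb, size_t size)
    {
        curlAllocations++;
        return calloc(nmemb, size);
    }

    int main(void)
    {
        // Must be called before any other libcurl function.
        curl_global_init_mem(CURL_GLOBAL_ALL, TrackedMalloc, TrackedFree,
                             TrackedRealloc, TrackedStrdup, TrackedCalloc);

        // ... create easy handles and perform transfers as usual ...

        curl_global_cleanup();
        return 0;
    }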