This page briefly explains how to integrate AI models converted with RUHMI, and their inference process, into your AI application.
You need to follow the steps below:
RUHMI outputs the following files, including the converted AI model:
{output_directory}/converted/build/MCU/compilation/src/
.
├── ethosu_common.h
├── hal_entry.c # HAL entry example
├── model.c # AI Model file
├── model.h
├── model_io_data.c # AI Model I/O data file
├── model_io_data.h
├── sub_0000_command_stream.c # Ethos-U55 subgraph generated C source code
├── sub_0000_command_stream.h
├── sub_0000_invoke.c
├── sub_0000_invoke.h
├── sub_0000_io_data.c
├── sub_0000_io_data.h
├── sub_0000_model_data.c
├── sub_0000_model_data.h
├── sub_0000_tensors.c
├── sub_0000_tensors.h
└── ...
.
├── compute_sub_0000.c # CPU subgraph generated C source code, including the inference process
├── compute_sub_0000.h
├── ...
├── kernel_library_int.c # kernel library if CPU subgraphs are present
├── kernel_library_int.h
├── kernel_library_utils.c
├── kernel_library_utils.h
├── model_io_data.c # AI Model I/O data file
└── model_io_data.h
The following explains how to implement AI models and the inference process based on an AI application generated by RUHMI.
Follow the procedure that matches your use case: running the model on the Ethos-U55, or running CPU subgraphs.
First, initialize the SDRAM with R_BSP_SdramInit(true) if your project uses it, then initialize the Ethos-U driver by calling RM_ETHOSU_Open() as shown below.
/* Ethos-U Initialization */
int status = FSP_SUCCESS;
status = RM_ETHOSU_Open(&g_rm_ethosu0_ctrl, &g_rm_ethosu0_cfg);
if (status != FSP_SUCCESS) {
/* Error Handling */
}

If RM_ETHOSU_Open() fails, check the heap size setting.

Next, run inference by calling RunModel(), which is defined in model.h. Copy the input data using the functions declared in model_io_data.h as below.

#include "model.h"
#include "model_io_data.h"
...
memcpy(GetModelInputPtr_input0(), model_input0, model_input_SIZE0);
memcpy(GetModelInputPtr_input1(), model_input1, model_input_SIZE1);
/* Run inference */
RunModel(false);

After inference completes, stop the Ethos-U by calling RM_ETHOSU_Close().

/* Stop Ethos */
status = RM_ETHOSU_Close(&g_rm_ethosu0_ctrl);
if (status != FSP_SUCCESS) {
/* Error Handling */
}

If your model contains CPU subgraphs, open compute_sub_0000.h and check the size of the input and output buffers.

/* File: compute_sub_0000.h */
void compute_sub_0000(
/* buffer for intermediate results */
uint8_t* main_storage, /* should provide at least <intermediate_buffers_size> bytes of storage */
/* inputs */
const int8_t <input_name>[XXX],
/* outputs */
int8_t <output_name>[YYY]
);

Declare input and output buffers with the sizes shown in the header:

/* Input buffer */
static int8_t input_buffer[XXX];
/* Output buffer */
static int8_t output_buffer[YYY];

Next, open compute_sub_0000.h and check the size of the intermediate buffer.

/* File: compute_sub_0000.h */
enum BufferSize_sub_0000 {
kBufferSize_sub_0000 = <intermediate_buffers_size>
};

Then declare the intermediate buffer using this constant:

#include "compute_sub_0000.h"
...
/* Intermediate buffer */
static uint8_t compute_buffer[kBufferSize_sub_0000];

Finally, run inference by calling compute_sub_0000().
#include "compute_sub_0000.h"
/* Input buffer */
static int8_t input_buffer[XXX];
/* Output buffer */
static int8_t output_buffer[YYY];
/* Intermediate buffer */
static uint8_t compute_buffer[kBufferSize_sub_0000];
/* Run inference */
compute_sub_0000(compute_buffer, input_buffer, output_buffer);

Click “Build” in the AI Application view of AI Navigator to build your project.
Then, click “Run on the Board” in the AI Navigator menu and click “Run the AI”.
That concludes the explanation of how to integrate the converted AI models into your project.
For more information, please refer to the RUHMI GitHub repository.