CoreFlow 1.0.0
A modern orchestration and execution runtime
Extension: AI/ML

The OpenVX EdgeAI Vendor Extension.

Files

file  all.hpp
 CoreVX single-include header for C++ development.
 
file  circular_queue.hpp
 Circular queue implementation.
 
file  execution_queue.hpp
 Execution queue implementation.
 

Classes

class  CircularQueue< T, MaxDepth >
 Circular queue implementation.
 
class  ExecutionQueue< T, MaxDepth >
 Execution queue implementation.
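
The CircularQueue< T, MaxDepth > interface itself is documented in circular_queue.hpp rather than on this page. As a rough, self-contained sketch of what a fixed-capacity circular queue of this shape typically provides (the member names push/pop/size here are illustrative assumptions, not the extension's actual API):

```cpp
#include <array>
#include <cstddef>
#include <optional>

// Minimal sketch of a fixed-capacity circular (ring) queue.
// NOTE: the real CircularQueue<T, MaxDepth> in circular_queue.hpp may
// differ; push/pop/size are assumed names for illustration only.
template <typename T, std::size_t MaxDepth>
class CircularQueue {
public:
    bool push(const T& value) {
        if (count_ == MaxDepth) return false;        // queue full
        buffer_[(head_ + count_) % MaxDepth] = value;
        ++count_;
        return true;
    }

    std::optional<T> pop() {
        if (count_ == 0) return std::nullopt;        // queue empty
        T value = buffer_[head_];
        head_ = (head_ + 1) % MaxDepth;              // advance, wrapping at MaxDepth
        --count_;
        return value;
    }

    std::size_t size() const { return count_; }

private:
    std::array<T, MaxDepth> buffer_{};
    std::size_t head_ = 0;   // index of the oldest element
    std::size_t count_ = 0;  // number of stored elements
};
```

Because MaxDepth is a compile-time parameter, storage is a fixed std::array and no allocation happens at runtime, which is the usual reason a runtime like this uses a bounded circular queue rather than std::deque.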
 

Enumerations

enum  vx_kernel_ext_e {
  VX_KERNEL_ORT_CPU_INF = VX_KERNEL_BASE(VX_ID_EDGE_AI, VX_LIBRARY_KHR_BASE) + 0x1,
  VX_KERNEL_AIS_CHATBOT = VX_KERNEL_BASE(VX_ID_EDGE_AI, VX_LIBRARY_KHR_BASE) + 0x2,
  VX_KERNEL_LITERT_CPU_INF = VX_KERNEL_BASE(VX_ID_EDGE_AI, VX_LIBRARY_KHR_BASE) + 0x3,
  VX_KERNEL_TORCH_CPU_INF = VX_KERNEL_BASE(VX_ID_EDGE_AI, VX_LIBRARY_KHR_BASE) + 0x4
}
 Define Edge AI Kernels.
 

Detailed Description

The OpenVX EdgeAI Vendor Extension.

Enumeration Type Documentation

◆ vx_kernel_ext_e

#include <vx_corevx_ext.h>

Define Edge AI Kernels.
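
These kernel IDs follow the standard OpenVX enum-composition scheme: VX_KERNEL_BASE(vendor, library) packs the vendor ID and library ID into the upper bits of the enum, and each kernel adds a small per-library offset. A minimal sketch of that arithmetic is below; the VX_KERNEL_BASE definition mirrors the OpenVX headers, but the actual value of VX_ID_EDGE_AI is vendor-assigned and the 0x123 used here is only a placeholder:

```cpp
#include <cstdint>

// Enum-composition scheme as defined in the OpenVX headers (vx_types.h):
// vendor ID in bits [31:20], library ID in bits [19:12], kernel offset below.
#define VX_KERNEL_BASE(vendor, lib) (((vendor) << 20) | ((lib) << 12))

// Placeholder values: VX_LIBRARY_KHR_BASE is 0x0 in the OpenVX headers, but
// VX_ID_EDGE_AI is vendor-assigned -- 0x123 is NOT its real value.
enum { VX_ID_EDGE_AI = 0x123, VX_LIBRARY_KHR_BASE = 0x0 };

enum vx_kernel_ext_e {
    VX_KERNEL_ORT_CPU_INF    = VX_KERNEL_BASE(VX_ID_EDGE_AI, VX_LIBRARY_KHR_BASE) + 0x1,
    VX_KERNEL_AIS_CHATBOT    = VX_KERNEL_BASE(VX_ID_EDGE_AI, VX_LIBRARY_KHR_BASE) + 0x2,
    VX_KERNEL_LITERT_CPU_INF = VX_KERNEL_BASE(VX_ID_EDGE_AI, VX_LIBRARY_KHR_BASE) + 0x3,
    VX_KERNEL_TORCH_CPU_INF  = VX_KERNEL_BASE(VX_ID_EDGE_AI, VX_LIBRARY_KHR_BASE) + 0x4
};

// Vendor and library IDs occupy disjoint bit ranges, so every kernel ID is
// globally unique as long as the per-library offset stays below 0x1000.
static_assert(VX_KERNEL_ORT_CPU_INF == 0x12300001, "vendor bits + offset");
static_assert((VX_KERNEL_TORCH_CPU_INF >> 20) == VX_ID_EDGE_AI,
              "vendor ID is recoverable from the high bits");
```

This layout is why a vendor extension can define its own kernels without colliding with Khronos-defined ones: the runtime can recover the vendor ID from the high bits of any kernel enum it is handed.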

Enumerator
VX_KERNEL_ORT_CPU_INF 

The ONNX Runtime CPU Inference kernel.

Parameters
[in]   vx_array          The input char array.
[in]   vx_object_array   The input tensor object array.
[out]  vx_object_array   The output tensor object array.
See also
Kernel: ORT Inference
VX_KERNEL_AIS_CHATBOT 

The AI Model Server Chatbot kernel.

Parameters
[in]   vx_array   The input char array.
[out]  vx_array   The output char array.
See also
Kernel: AI Chatbot
VX_KERNEL_LITERT_CPU_INF 

The LiteRT CPU Inference kernel.

Parameters
[in]   vx_array          The input char array.
[in]   vx_object_array   The input tensor object array.
[out]  vx_object_array   The output tensor object array.
See also
Kernel: LiteRT Inference
VX_KERNEL_TORCH_CPU_INF 

The Torch CPU Inference kernel.

Parameters
[in]   vx_array          The input char array.
[in]   vx_object_array   The input tensor object array.
[out]  vx_object_array   The output tensor object array.
See also
Kernel: ExecuTorch Inference