NVDLA has a full software ecosystem, spanning everything from compiling a trained network to running inference on the device. Part of this ecosystem is the on-device software stack, which is included in the NVDLA open source release; additionally, NVIDIA will provide a full training infrastructure for building new models that incorporate Deep Learning, and for converting existing models into a form usable by the NVDLA software. In general, the software associated with NVDLA is grouped into two parts: the Compiler library, which performs model conversion, and the Runtime environment, the on-device software that loads and executes compiled neural networks on NVDLA. The general flow is shown in the figure below.
umd: User Mode Driver
  apps
    compiler: Compiler sample application
    runtime: Runtime sample application
  core
    runtime: Runtime environment
    compiler: Compiler library
    common: Implementation shared between runtime and compiler, such as loadable and logging
  include: Application Programming Interface
  external: External modules used in UMD, such as flatbuffers
  make: Make files
  port
    linux: Portability layer for Linux
  utils: Utility functions
kmd: Kernel Mode Driver
  Documentation: Device tree bindings for the NVDLA device
  firmware: Core DLA hardware programming, including the HW layer scheduler
  include: Core Engine Interface
  port
    linux: Portability layer for Linux
prebuilt: Prebuilt binaries
regression
  flatbufs: Pre-generated loadables for sanity tests
  golden: Golden results
  scripts: Scripts used for test execution
  testplan: Test plans
scripts: General scripts
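The compiler/runtime split described above can be exercised with the two sample applications. A minimal sketch of the flow is shown below; the model and image file names are placeholders, and the exact option set and default loadable name may differ between NVDLA software releases, so consult the sample applications' help output for the version in use.

```shell
# On the host: the compiler sample application converts a Caffe model
# into an NVDLA loadable. lenet.prototxt / lenet.caffemodel are
# placeholder file names for illustration.
./nvdla_compiler --prototxt lenet.prototxt --caffemodel lenet.caffemodel

# On the target: the runtime sample application loads the generated
# loadable and runs inference on an input image. The loadable file name
# here is a placeholder for whatever the compiler step produced.
./nvdla_runtime --loadable output.nvdla --image input.pgm
```

The loadable file is the hand-off point between the two halves of the stack: the Compiler library serializes the converted network into it, and the Runtime environment deserializes it and programs the hardware through the KMD.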