Overview
========
A CMSIS-NN implementation of an object detector based on the ARM ML example [1].

A 32x32 pixel color image is used as input to a simple 3-layer convolutional
neural network (CNN). Each convolution layer is followed by a ReLU activation
and a pooling layer. The last layer is a fully-connected layer that classifies
the input image into one of 10 output classes: "airplane", "automobile", "bird",
"cat", "deer", "dog", "frog", "horse", "ship" and "truck". The CNN used in this
example is based on the CIFAR-10 example from Caffe [2].

The example model implementation needs 87 KB to store weights,
40 KB for activations and 6 KB for storing the im2col data.
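A rough sketch of where the 87 KB weight figure comes from, assuming the layer geometry of the Caffe cifar10_quick model that the ARM example is derived from (three 5x5 convolutions with 32, 32 and 64 filters, each followed by a 3x3 pool with stride 2, then a fully-connected layer over the resulting 4x4x64 activations); the exact geometry is an assumption, not stated in this readme:

```python
# Assumed layer geometry: (kernel_h, kernel_w, in_channels, out_channels)
conv_layers = [
    (5, 5, 3, 32),    # conv1: 32x32x3  -> 32x32x32, pool -> 16x16x32
    (5, 5, 32, 32),   # conv2: 16x16x32 -> 16x16x32, pool -> 8x8x32
    (5, 5, 32, 64),   # conv3: 8x8x32   -> 8x8x64,   pool -> 4x4x64
]
weights = sum(kh * kw * cin * cout for kh, kw, cin, cout in conv_layers)
weights += 4 * 4 * 64 * 10          # fully connected: 1024 inputs -> 10 classes
biases = 32 + 32 + 64 + 10          # one bias per output channel / class
total_kib = (weights + biases) / 1024   # q7 format: one byte per value
print(round(total_kib, 1))              # ~87.5 KiB, consistent with the 87 KB above
```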

First, a static ship image is used as input, regardless of whether a camera is
connected. Then, if a camera and an LCD are connected, images from the camera
are processed at runtime and displayed on the LCD.

HOW TO USE THE APPLICATION:
To classify an image, place an object in front of the camera so that it fits in
the white rectangle in the middle of the LCD.
Note: The semihosting implementation causes a slower or discontinuous video
experience. Select UART in 'Project Options' to use an external debug console
via UART (virtual COM port).

[1] https://github.com/ARM-software/ML-examples/tree/master/cmsisnn-cifar10
[2] https://github.com/BVLC/caffe

Files:
  cifar10.c - example source code
  ship.bmp - downscaled picture of the object to recognize
    (source: https://en.wikipedia.org/wiki/File:Christian_Radich_aft_foto_Ulrich_Grun.jpg)
  ship.h - image file converted into a C language array of RGB values
    using Python with the OpenCV and Numpy packages:
    import cv2
    import numpy as np
    img = cv2.imread('ship.bmp')
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    with open('ship_image.h', 'w') as fout:
      print('#define SHIP_IMG_DATA {', file=fout, end='')
      img.tofile(fout, ',', '%d')
      print('}\n', file=fout)
  weights.h - neural network weights and biases generated by scripts available at [1]
  parameter.h - parameters of the neural network generated by scripts available at [1]
  timer.c - timer function source code
  timer.h - timer function declarations
  image.c - image processing function source code
  image.h - image processing function declarations
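The BMP must match the network's 32x32 input size before conversion; in practice cv2.resize(img, (32, 32)) handles this. Purely as an illustration of the idea, a dependency-light nearest-neighbour downscale (resize_nearest is a hypothetical helper, not part of the example sources):

```python
import numpy as np

def resize_nearest(img, out_h=32, out_w=32):
    """Downscale by sampling the nearest source pixel for each output pixel."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return img[rows[:, None], cols]

photo = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a loaded photo
small = resize_nearest(photo)
print(small.shape)  # (32, 32, 3)
```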


Toolchain supported
===================
- IAR Embedded Workbench  8.50.1
- Keil MDK  5.30
- GCC ARM Embedded  9.2.1
- MCUXpresso  11.2.0

Hardware requirements
=====================
- No camera input
    - Mini/micro USB cable
    - EVK-MIMXRT1060 board
    - Personal computer

- Camera input
    - Mini/micro USB cable
    - EVK-MIMXRT1060 board
    - Personal computer
    - Camera MT9M114
    - Liquid crystal display RK043FN02H-CT

Board settings
==============
- No camera input
    - No special settings are required

- Camera input
    - Move jumper J1 to position 1-2
    - Connect external 5V power supply to J2 connector

Prepare the Demo
================
1. Connect a USB cable between the host PC and the OpenSDA USB port on the target board. 
2. Connect a camera to the J35 connector. (Skip this step if the offline version is used.)
3. Connect an LCD display to A1-A40 and B1-B6. (Skip this step if the offline version is used.)
4. Open a serial terminal with the following settings:
   - 115200 baud rate
   - 8 data bits
   - No parity
   - One stop bit
   - No flow control
5. Download the program to the target board.

Running the demo
================
The log below shows the output of the demo in the terminal window (compiled with ARM GCC):

CIFAR-10 object recognition example using CMSIS-NN.
Detection threshold: 60%

Static data processing:
----------------------------------------
     Inference time: 55 ms    
     Detected:  ship       (99%)
----------------------------------------


Camera data processing:
----------------------------------------
     Inference time: 60 ms    
     Detected:  deer       (86%)
----------------------------------------
 
----------------------------------------
     Inference time: 61 ms    
     Detected:  ship       (86%)
----------------------------------------

----------------------------------------
     Inference time: 60 ms    
     Detected:  airplane   (93%)
----------------------------------------
 
----------------------------------------
     Inference time: 63 ms    
     Detected:  automobile (87%)
----------------------------------------

----------------------------------------
     Inference time: 59 ms    
     Detected:  frog       (99%)
----------------------------------------

----------------------------------------
     Inference time: 62 ms    
     Detected:  cat        (70%)
----------------------------------------

Customization options
=====================

