I have a Keras model that runs inference on a Raspberry Pi (with a camera). The Raspberry Pi has a very slow CPU (1.2 GHz) and no CUDA GPU, so the model.predict() stage
takes a long time (~20 seconds). I am looking for ways to reduce this as much as possible. I have tried:
- Overclocking the CPU (+200 MHz), which gained a few seconds.
- Using float16 instead of float32 (see the sketch after this list).
- Reducing the input image size as much as possible.
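
For reference, here is roughly how the float16 step can be done with TensorFlow Lite post-training quantization (a minimal sketch, assuming TensorFlow 2.x and a standard Keras .h5 model; the file names are placeholders):

```python
import tensorflow as tf

# Load the existing trained model; no retraining is needed.
model = tf.keras.models.load_model("model.h5")

# Convert to TensorFlow Lite with float16 post-training quantization:
# weights are stored as float16, roughly halving the model size.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()

with open("model_fp16.tflite", "wb") as f:
    f.write(tflite_model)
```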
Is there anything else I can do to speed up inference? Is there a way to simplify model.h5, trading some accuracy for speed? I have had success with simpler models, but for this project I need to rely on the existing trained model, so I can't train one from scratch.
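
To frame the question, one direction I am considering is running the converted model through the TFLite interpreter instead of model.predict(). This is an untested sketch, assuming the model_fp16.tflite file produced above; on the Pi, the lighter tflite_runtime package can stand in for full TensorFlow:

```python
import numpy as np
import tensorflow as tf  # on the Pi: import tflite_runtime.interpreter as tflite

# Load the converted model into the TFLite interpreter.
interpreter = tf.lite.Interpreter(model_path="model_fp16.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy frame matching the model's expected input shape and dtype;
# in practice this would be the preprocessed camera image.
frame = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()
prediction = interpreter.get_tensor(output_details[0]["index"])
```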