I have successfully implemented PredictionIO engine templates. I can deploy the engine with
$ pio deploy -- --driver-memory xG
But how can I run a recommendation (or any other) engine as a service? I also want to write all log output to a specified file for reference in case something goes wrong.
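For example, is something like the following the right approach? This is only a sketch of what I'm after, using nohup to keep the process running after I log out and shell redirection to capture output (pio-deploy.log is just a name I made up):

$ nohup pio deploy -- --driver-memory xG > pio-deploy.log 2>&1 &

Or is there a recommended way to supervise the deployed engine, e.g. via an init/service manager?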
It was also mentioned that for small deployments it is better not to use a distributed configuration. I have a JSON-formatted dataset for a text classification template, about 2 MB in size, and it takes about 8 GB of memory to prepare and deploy it. Does this fall into the "small deployment" category?