How to start the prediction engine as a service

I have successfully implemented PredictionIO engine templates. I can deploy the engine with

$ pio deploy -- --driver-memory xG

But how can I run a recommendation (or any other) engine as a service? I also want all log entries written to a specified file so I can refer to them if there are problems.
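For example, under a Unix-like shell the deploy process can be detached from the terminal and its output redirected to a file; this is just a generic shell sketch, and engine.log and the 4G value are placeholders, not PredictionIO defaults:

# keep the engine running after logout; capture stdout and stderr in engine.log
$ nohup pio deploy -- --driver-memory 4G > engine.log 2>&1 &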

It was also mentioned that for small deployments it is better not to use a distributed configuration. I have a JSON-formatted dataset for a text classification template, about 2 MB in size, and it takes about 8 GB of memory to prepare and deploy it. Does this fit the "small deployment" category?
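For reference, the memory for the preparation/training step can be raised the same way as for deployment, since everything after -- is handed through to spark-submit; the 8G figure below is simply the amount from my case, not a recommendation:

# give the Spark driver more memory for the training step
$ pio train -- --driver-memory 8G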

1 answer

pio deploy already runs the engine as a service; its log entries go to pio.log, which you can check when there are problems. As for memory, 8 GB for a 2 MB dataset is a lot, but it still fits the "small deployment" category.
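To confirm the deployed engine really is serving, you can send it a test query: by default a deployed engine listens on port 8000 and accepts JSON POSTs at /queries.json. The query fields depend on the template, so the text field below is only an assumed example for a text classifier:

# query the deployed engine; adjust the JSON fields to your template's query schema
$ curl -H "Content-Type: application/json" -d '{ "text": "example document to classify" }' http://localhost:8000/queries.json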

