How to use secondary user actions to improve recommendations with Spark ALS?

Is there a way to use secondary user actions derived from a user clickstream to improve recommendations when using Spark MLlib ALS?

I reviewed the explicit and implicit feedback examples given here: https://spark.apache.org/docs/latest/mllib-collaborative-filtering.html , both of which use the same RDD of ratings for the train() and trainImplicit() methods.

Does this mean I should call trainImplicit() on the same model object with an RDD of (user, item, action) for each secondary user action? Or should I train several models, get recommendations from each action, and then combine them linearly?

For added context, the crux of the question is whether Spark ALS can model secondary actions the way Mahout's spark-itemsimilarity does. Any pointers will help.
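For reference, here is a minimal PySpark sketch of the two training calls the linked docs describe, applied to the same RDD of ratings. The data and hyperparameters are placeholders, not a recommendation for either approach.

```python
from pyspark import SparkContext
from pyspark.mllib.recommendation import ALS, Rating

sc = SparkContext(appName="als-example")

# Toy data already reduced to (user, item, strength) triples.
ratings = sc.parallelize([
    Rating(1, 10, 5.0),  # e.g. a buy
    Rating(1, 11, 3.0),  # e.g. a view
    Rating(2, 10, 5.0),
])

# Explicit feedback: the strengths are treated as ratings.
explicit_model = ALS.train(ratings, rank=10, iterations=10, lambda_=0.01)

# Implicit feedback: the strengths are treated as confidence that the
# user prefers the item; alpha scales that confidence.
implicit_model = ALS.trainImplicit(ratings, rank=10, iterations=10,
                                   lambda_=0.01, alpha=0.01)

print(implicit_model.recommendProducts(1, 3))
```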

+4
1 answer

Disclaimer: I work on Mahout's Spark ItemSimilarity (spark-itemsimilarity).

ALS takes a single matrix of (user, item, strength) values, so it cannot ingest several different actions directly. The usual workaround is to weight the actions before feeding them to ALS, for example buy = 5, view = 3. But ALS then treats a view as nothing more than a weak buy: it has no way of knowing that the 3 comes from a different action, it only sees a weaker preference for the same item. So what does a 3 mean to ALS? Three fifths of a purchase? A half-hearted buy? ALS cannot distinguish "a different kind of signal" from "a weaker version of the same signal", and that is exactly the distinction that matters for secondary actions.
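To make the objection concrete, here is a hedged sketch of the weighting approach described above: every event type is collapsed into one number before ALS ever sees it, so the model can only interpret a view as a weaker buy. The `events` RDD, action names, and weights are illustrative assumptions, not part of the original question.

```python
from pyspark.mllib.recommendation import ALS, Rating

# Hypothetical mapping from action type to a single "strength" value.
ACTION_WEIGHTS = {"buy": 5.0, "view": 3.0}

# events: RDD of (user_id, item_id, action) parsed from the clickstream (assumed).
weighted = events.map(
    lambda e: Rating(e[0], e[1], ACTION_WEIGHTS.get(e[2], 0.0))
).filter(lambda r: r.rating > 0)

# ALS now sees a view as nothing but a buy with lower confidence;
# the fact that it was a different action is lost.
model = ALS.trainImplicit(weighted, rank=10, iterations=10, alpha=0.01)
```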

The deeper problem is that a secondary action may or may not correlate with the primary one. In e-commerce, does a "view" really indicate intent to buy? Sometimes yes, sometimes no: people view far more items than they ever purchase, often ten or more views for every buy. That is why most e-commerce recommenders (including ALS trainImplicit) are trained on the conversion action alone, usually "buy", and simply ignore the rest of the clickstream.
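A sketch of what that conventional setup looks like, under the same assumed `events` RDD as above: keep only the conversion events and train implicit ALS on purchase counts alone. The alpha value is just a commonly used illustration.

```python
from pyspark.mllib.recommendation import ALS, Rating

# events: RDD of (user_id, item_id, action) from the clickstream (assumed).
buys = events.filter(lambda e: e[2] == "buy") \
             .map(lambda e: ((e[0], e[1]), 1.0)) \
             .reduceByKey(lambda a, b: a + b) \
             .map(lambda kv: Rating(kv[0][0], kv[0][1], kv[1]))

# Implicit ALS over purchase counts only; views are discarded entirely.
buy_model = ALS.trainImplicit(buys, rank=10, iterations=10, alpha=40.0)
```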

To answer your question directly: if you stay with ALS, neither option works well. Calling trainImplicit() repeatedly with a different action's data does not accumulate the actions into one model, and training a separate model per action and blending the results linearly leaves you guessing at the blend weights. ALS fundamentally expects one kind of signal: one user, one item, one (preference) strength per interaction. Anything else has to be squeezed into that shape, and the meaning of the secondary action is lost in the process.
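For completeness, here is a sketch of the "train several models and combine linearly" option the question mentions, assuming `buys` and `views` are per-action Rating RDDs built as above. The blend weights are arbitrary guesses, which is precisely the weakness being pointed out.

```python
from pyspark.mllib.recommendation import ALS

# buys, views: RDDs of Rating built separately per action (assumed).
buy_model = ALS.trainImplicit(buys, rank=10, iterations=10)
view_model = ALS.trainImplicit(views, rank=10, iterations=10)

def blended_score(user, item, w_buy=0.7, w_view=0.3):
    # The blend weights are pure guesswork; nothing in the data tells
    # you how much a view should count relative to a buy.
    return (w_buy * buy_model.predict(user, item) +
            w_view * view_model.predict(user, item))
```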

Mahout's Spark ItemSimilarity was built for exactly this case. It uses cross-occurrence: the secondary action is compared against the primary (conversion) action, and only the co-occurrences that correlate significantly with the primary action are kept (Mahout applies a log-likelihood ratio test for this). In other words, a "view" only contributes to a recommendation to the extent that views have actually predicted purchases in your data. At query time you pass in the user's history for both actions and get back items ranked against that combined history, so the secondary action improves the recommendations instead of polluting them.
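A toy illustration of the cross-occurrence idea in plain NumPy (not the Mahout API): the secondary action only enters through its co-occurrence with the primary action. Mahout additionally keeps only the entries that pass an LLR significance test, which is omitted here for brevity.

```python
import numpy as np

# Rows are users, columns are items.
P = np.array([[1, 0, 1],    # primary action, e.g. buys
              [0, 1, 1],
              [1, 1, 0]])
V = np.array([[1, 1, 0],    # secondary action, e.g. views
              [0, 1, 1],
              [1, 0, 1]])

cooccur = P.T @ P        # items bought together
cross_cooccur = P.T @ V  # views that co-occur with buys
# (Mahout keeps only the co-occurrences that pass an LLR test.)

# Score items for one user from both of that user's histories.
h_buy, h_view = P[0], V[0]
scores = cooccur @ h_buy + cross_cooccur @ h_view
print(np.argsort(-scores))  # items ranked for user 0
```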

If you want a packaged recommender that supports both Mahout's Spark ItemSimilarity and MLlib ALS, have a look at PredictionIO (Apache PredictionIO).

+10

Source: https://habr.com/ru/post/1598040/

