I think an attractive way to frame this is in terms of priors and likelihoods over moods. Naive Bayes is a model of the likelihood (how likely am I to see exactly this tweet, given that it is positive?). You are asking about the prior probability that the next tweet will be positive, given the sequence of moods observed so far. There are several ways to model this:
- The most naive way: the fraction of the user's past tweets that were positive is the probability that the next one will be positive.
- However, this ignores recency. You can build a transition model: from each possible previous mood, the probability that the next tweet is positive, negative, or neutral. That gives you a 3x3 transition matrix, and the conditional probability that the next tweet is positive given that the last one was positive is the pos -> pos transition probability. This can be estimated from counts; assuming the process is Markov (the last mood is all that matters) is the key modelling assumption.
- These transition models can be made progressively richer: the current "state" can be the moods of the last two, or even the last n, tweets, which gives more specific predictions at the cost of more and more parameters in the model. You can counteract the data sparsity with smoothing schemes, parameter tying, etc.
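The baseline in the first bullet is just a frequency count. A minimal sketch (the mood labels `"pos"`/`"neg"`/`"neu"` and the history list are hypothetical):

```python
from collections import Counter

def baseline_prior(moods):
    """Fraction of past tweets carrying each mood label."""
    counts = Counter(moods)
    total = len(moods)
    return {mood: c / total for mood, c in counts.items()}

history = ["pos", "neg", "pos", "neu", "pos"]
print(baseline_prior(history))  # {'pos': 0.6, 'neg': 0.2, 'neu': 0.2}
```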
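For the third bullet, here is one way a higher-order model with smoothing might look: the context is the last n moods, and add-alpha smoothing keeps unseen contexts from producing zero probabilities (the function names and labels are my own, for illustration):

```python
from collections import defaultdict

STATES = ["pos", "neg", "neu"]

def ngram_model(moods, n=2, alpha=1.0):
    """Estimate P(next mood | last n moods) with add-alpha smoothing."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(moods) - n):
        ctx = tuple(moods[i:i + n])
        counts[ctx][moods[i + n]] += 1

    def predict(ctx):
        ctx = tuple(ctx)
        total = sum(counts[ctx].values()) + alpha * len(STATES)
        return {s: (counts[ctx][s] + alpha) / total for s in STATES}

    return predict

predict = ngram_model(["pos", "pos", "neg", "pos", "pos", "neg", "pos"], n=2)
print(predict(["pos", "neg"]))  # smoothed distribution over the next mood
```

With n states and order k, the raw model has on the order of n^k contexts, which is exactly the parameter blow-up the bullet describes; smoothing and tying are ways to spend fewer effective parameters.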
As a last point, I think @Anony-Mousse's observation that the prior is only weak evidence is correct: whatever came before, I expect the likelihood function (what is actually in the tweet in question) to dominate. If you have the text of each tweet, consider a CRF, as @Neil McGuigan suggests.
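To see why the likelihood dominates, combine the two sources of evidence with Bayes' rule: posterior ∝ prior × likelihood. A sketch with made-up numbers, where the prior comes from a history model and the likelihood from something like Naive Bayes over the tweet text:

```python
def posterior(prior, likelihood):
    """Combine a mood prior with a per-tweet likelihood via Bayes' rule.
    Both arguments are dicts mapping mood -> probability/score."""
    unnorm = {m: prior[m] * likelihood[m] for m in prior}
    z = sum(unnorm.values())
    return {m: p / z for m, p in unnorm.items()}

prior = {"pos": 0.5, "neg": 0.3, "neu": 0.2}          # weak evidence from history
likelihood = {"pos": 0.01, "neg": 0.20, "neu": 0.05}  # strong evidence from the text
print(posterior(prior, likelihood))  # "neg" wins despite the prior favouring "pos"
```

Even though the prior favours "pos", the much sharper likelihood pulls the posterior to "neg", which is the sense in which the sequence history is weak evidence.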