Disclaimer :)
Having implemented such a thing for my product, and sharing many of your problems and technologies (especially an SPA with Backbone talking to a 100% stateless backend), I can tell you my opinion. To be clear, this is not meant to be "the answer" but rather a conversation starter; I expect to learn from the discussion that follows, since I think I still have something to figure out on this topic myself.
First of all, I think you should go 100% stateless. And by 100%, I mean 100% :). Not only should your API layer be stateless, but the entire application (with the exception of the client, of course). Moving sessions to another layer (e.g. Redis) just moves the problem around; it does not solve it. Everything (especially scaling) will be much simpler, and you will thank yourself later for this decision.
So yes, authentication is required for each request. But this does not mean that you have to hit the provider every time. One of the things I learned is that letting the user authenticate through FB / GitHub / whatever (from now on, the remote service) is just a means to ease the pain of signing up / signing in, nothing more. You still have to maintain your own user database. Of course, each user will be linked to a "remote" user, but right after authentication your application should deal with "its" user, not the "remote" one (for example, the GitHub user).
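To make that mapping concrete, here is a minimal sketch of the kind of user record I mean, in TypeScript; the field names are hypothetical choices of mine, not a prescription:

```typescript
// A possible shape for "my" user, linked to the remote identity.
// The app always works with `id`; the remote fields are only used to
// match an incoming GitHub/FB login to an existing local user.
interface AppUser {
  id: string;              // e.g. "MyApp_def456" - primary key in MY database
  remoteProvider: string;  // e.g. "github"
  remoteId: string;        // e.g. "GitHub_abc123" - the provider's user id
  displayName: string;
  createdAt: Date;
}
```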
Implementation
Here is what I implemented:
My API methods always require an authentication token. The authentication token is a hash that represents a user of my system, so when I call POST /api/board?name=[a_name]&auth=[my_token] , I know who is calling, I can check permissions, and I can associate the newly created board object with the correct user.
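As an illustration, this is roughly what that per-request check could look like with an Express-style middleware. Express, the `auth` query parameter name and the `verifyAuthToken` helper are my assumptions here (one possible implementation of the helper is sketched in the Notes below), not necessarily what you use:

```typescript
import express from "express";
// Hypothetical helper; one possible implementation is sketched in the Notes section.
import { verifyAuthToken } from "./auth";

const app = express();

// Runs on every /api call: no session lookup, only the token carried by the request.
app.use("/api", (req, res, next) => {
  const token = req.query.auth as string | undefined;
  const userId = token ? verifyAuthToken(token) : null;
  if (!userId) {
    return res.status(401).json({ error: "invalid or missing auth token" });
  }
  res.locals.userId = userId; // every handler now knows who is calling
  next();
});

app.post("/api/board", (req, res) => {
  // The new board is associated with the authenticated user, not with a session.
  const board = { name: req.query.name, ownerId: res.locals.userId };
  res.status(201).json(board);
});
```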
This token has nothing to do with the remote service's tokens. The logic that computes it is specific to my application. But it maps to my user, which in turn maps to the remote user, so no information is lost if I ever need it.
Here is how I authenticate the user through a remote service. I implement the remote authentication flow as specified in the service documentation. Usually this is OAuth or something OAuth-like, which means that in the end I get an authToken that represents the remote user. This token has two purposes (a sketch of the flow follows the list below):
- I can use it to call API methods on the remote service, acting on behalf of the user
- I have a guarantee that the user is who they claim to be, at least as far as the remote service is concerned.
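For example, with GitHub the server-side part of the flow looks roughly like this. This is a sketch assuming GitHub's standard OAuth web flow and Node's global fetch; adapt it to whatever your provider's documentation actually says:

```typescript
// Sketch of the server-side half of GitHub's OAuth web flow.
// GITHUB_CLIENT_ID / GITHUB_CLIENT_SECRET are assumed to be configured by you.
export async function exchangeGitHubCode(
  code: string
): Promise<{ remoteId: string; login: string; accessToken: string }> {
  // 1. Exchange the temporary `code` for an access token (the "remote" authToken).
  const tokenRes = await fetch("https://github.com/login/oauth/access_token", {
    method: "POST",
    headers: { "Content-Type": "application/json", Accept: "application/json" },
    body: JSON.stringify({
      client_id: process.env.GITHUB_CLIENT_ID,
      client_secret: process.env.GITHUB_CLIENT_SECRET,
      code,
    }),
  });
  const { access_token } = await tokenRes.json();

  // 2. Ask GitHub who this user is - this is the identity guarantee.
  const userRes = await fetch("https://api.github.com/user", {
    headers: {
      Authorization: `Bearer ${access_token}`,
      Accept: "application/vnd.github+json",
    },
  });
  const remoteUser = await userRes.json();

  return {
    remoteId: String(remoteUser.id), // e.g. becomes "GitHub_abc123" in my DB
    login: remoteUser.login,
    accessToken: access_token,
  };
}
```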
Once your user has authenticated with the remote service, you load or create the corresponding user in your own system. If the user with remote_id: GitHub_abc123 is not in your system, you create it; otherwise you load it. Let's say this user has id: MyApp_def456 . You also create an authToken with your own logic that represents the user MyApp_def456 and hand it to the client (cookies are ok !!)
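Put together, the "login with GitHub" callback on my side ends up doing roughly this. Again a sketch: `findUserByRemoteId`, `createUser` and `issueAuthToken` are hypothetical helpers standing in for your own persistence layer and token logic, and `exchangeGitHubCode` is the OAuth sketch shown above:

```typescript
import express from "express";
// Hypothetical helpers: your own persistence layer and token logic.
import { findUserByRemoteId, createUser } from "./users";
import { issueAuthToken } from "./auth";
// The OAuth sketch shown earlier.
import { exchangeGitHubCode } from "./github";

const app = express();

app.get("/auth/github/callback", async (req, res) => {
  const remote = await exchangeGitHubCode(req.query.code as string);

  // Load-or-create "my" user for this remote identity.
  let user = await findUserByRemoteId("github", remote.remoteId);
  if (!user) {
    user = await createUser({
      remoteProvider: "github",
      remoteId: remote.remoteId,   // e.g. "GitHub_abc123"
      displayName: remote.login,
    });                            // gets its own id, e.g. "MyApp_def456"
  }

  // Issue MY token for MY user and hand it to the client. A cookie is fine,
  // or the SPA can keep it and append it as the `auth` parameter on each call.
  const token = issueAuthToken(user.id);
  res.cookie("auth", token, { secure: true, httpOnly: true });
  res.redirect("/");
});
```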
Back to point 1 :)
Notes
Authentication is performed on every request, and that means dealing with hashes and cryptographic functions, which are slow by design. Now, if you use bcrypt with a cost of 20 for this, it will kill your application. I use bcrypt to store and check user passwords at login, but a much lighter algorithm for the authToken (personally, a hash in the SHA-256 family). These tokens can be short-lived (well below the average time needed to crack them) and are cheap enough to compute on a server machine. There is no single right answer: try different approaches, measure, and decide. What I am sure of is that I would rather have this kind of problem than session problems. If I need to compute more hashes, or compute them faster, I add CPU power. Sessions in a clustered environment mean memory issues, load balancing, and sticky sessions or other moving parts (Redis).
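To illustrate the split I mean between the two algorithms (heavy bcrypt only at login, something much cheaper on every request), here is a sketch. The token format here (userId + expiry + HMAC-SHA256 signature) is just one simple stateless option and an assumption of mine, not the only way to do it; `bcrypt` is the npm package of that name:

```typescript
import bcrypt from "bcrypt";
import { createHmac, timingSafeEqual } from "crypto";

// Assumed server-side secret; rotating it invalidates all outstanding tokens.
const TOKEN_SECRET = process.env.TOKEN_SECRET ?? "change-me";
const TOKEN_TTL_MS = 1000 * 60 * 60; // short-lived: 1 hour, tune to taste

// Heavy hashing only when storing / checking the password at login.
export const hashPassword = (pw: string) => bcrypt.hash(pw, 12);
export const checkPassword = (pw: string, hash: string) => bcrypt.compare(pw, hash);

// Cheap per-request token: "userId.expiry.signature", verified with a single HMAC.
export function issueAuthToken(userId: string): string {
  const expires = Date.now() + TOKEN_TTL_MS;
  const payload = `${userId}.${expires}`;
  const sig = createHmac("sha256", TOKEN_SECRET).update(payload).digest("hex");
  return `${payload}.${sig}`;
}

export function verifyAuthToken(token: string): string | null {
  const [userId, expires, sig] = token.split(".");
  if (!userId || !expires || !sig) return null;
  const expected = createHmac("sha256", TOKEN_SECRET)
    .update(`${userId}.${expires}`)
    .digest("hex");
  const valid =
    sig.length === expected.length &&
    timingSafeEqual(Buffer.from(sig), Buffer.from(expected));
  if (!valid || Date.now() > Number(expires)) return null;
  return userId; // caller now knows which of MY users is making the request
}
```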
Obviously, HTTPS is absolutely required, since the authToken is always passed as a parameter.