Suppose you have a peer-to-peer system that can be queried. You need to:
- reduce the total number of requests over the network (by replicating "popular" items broadly and keeping "similar" items together)
- avoid excessive storage use on each node
- ensure good availability even for moderately rare items in the face of node downtime, hardware failure, and users leaving the network (and possibly make rare items discoverable for archivists / historians)
- avoid queries that return no matches during network partitions
Given these requirements:
- Are there any standard approaches? If not, are there any reputable but experimental studies? I am familiar with some distribution schemes, but I have not seen anything that really addresses learning for reliability.
- Am I missing any obvious criteria?
- Is anyone interested in working on this problem? (If so, I am happy to share the open-source code of a fairly capable simulator that I put together this weekend, and to offer generally worthless advice.)
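To make the requirements concrete, here is a toy sketch (all names and parameters are my own, not an established scheme) of popularity-weighted replication on a consistent-hash ring: popular items get more replicas to cut network traffic, while a floor on the replica count keeps rare items reachable despite churn and failures.

```python
import hashlib
import math
from bisect import bisect_right

class Ring:
    """Minimal consistent-hash ring: an item maps to the next node clockwise."""
    def __init__(self, nodes, vnodes=50):
        # Each node gets several virtual points on the ring for balance.
        self._ring = sorted(
            (self._h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes)
        )
        self._keys = [k for k, _ in self._ring]

    @staticmethod
    def _h(s):
        return int(hashlib.sha256(s.encode()).hexdigest(), 16)

    def replicas(self, item, count):
        """Return `count` distinct nodes for `item`, walking the ring."""
        out, seen = [], set()
        i = bisect_right(self._keys, self._h(item))
        while len(out) < count:
            _, node = self._ring[i % len(self._ring)]
            if node not in seen:
                seen.add(node)
                out.append(node)
            i += 1
        return out

def replication_factor(requests_per_day, floor=3, scale=2):
    """Replicate popular items more widely, but never below `floor`,
    so rare items stay available despite node churn and failures."""
    return floor + int(scale * math.log10(1 + requests_per_day))

ring = Ring([f"node{i}" for i in range(20)])
rare = ring.replicas("obscure-archive-item", replication_factor(0))
hot = ring.replicas("viral-video", replication_factor(100000))
print(len(rare), len(hot))  # prints "3 13"
```

The logarithmic scaling is arbitrary; the open question in the post is precisely what function (learned or fixed) should map observed demand and failure rates to a placement policy.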
@cdv: I have now watched the video, and it is very good. Although I don't feel it is well suited to a flexible distribution strategy, it certainly gets 90% of the way there. Moreover, your questions point out useful differences with this approach, touch on some of my remaining problems, and give me directions for further study. So I am tentatively accepting your answer, although I still consider the question open.