I'm going to go a bit off topic here, and this overlaps somewhat with Gaurav Mantri's answer. It is based on a design I came up with while building something very similar at my current job.
Azure Blob Storage
When a tenant is created, randomly select a storage account from the account pool and save its namespace along with the tenant information.
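For example (a minimal sketch; the pool contents, class and property names are illustrative, not from the original answer):

```csharp
using System;

public class Tenant
{
    public Guid Id { get; set; }
    // The "namespace" saved with the tenant: which storage account holds its data.
    public string StorageAccountName { get; set; }
}

public class TenantProvisioner
{
    // Hypothetical fixed pool of pre-created storage accounts.
    static readonly string[] AccountPool = { "tenantstore01", "tenantstore02", "tenantstore03" };
    static readonly Random Rng = new Random();

    public Tenant CreateTenant()
    {
        var tenant = new Tenant
        {
            Id = Guid.NewGuid(),
            // Random pick spreads tenants across the pool.
            StorageAccountName = AccountPool[Rng.Next(AccountPool.Length)]
        };
        // Persist the tenant record (including StorageAccountName) in your tenant database here.
        return tenant;
    }
}
```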
Provide an API for creating containers, where container names are composed of the tenant identifier, Guid.ToString("N"), plus <resourcename>. You do not need to sell these to your users as containers; they can be folders, work sets or file areas, whatever name you find fitting.
Provide an API for storing documents in these containers. (A sketch of both operations follows below.)
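Roughly like this, assuming the classic Microsoft.WindowsAzure.Storage SDK (the answer does not name an SDK, so that choice is mine):

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public class TenantBlobService
{
    readonly CloudBlobClient _client;

    public TenantBlobService(string connectionString)
    {
        _client = CloudStorageAccount.Parse(connectionString).CreateCloudBlobClient();
    }

    // Guid "N" format is 32 lowercase hex chars, so with a short, lowercase
    // resource name the result stays within the container naming rules
    // (lowercase, max 63 chars). E.g. 0f8fad5bd9cb469fa16570867728950einvoices
    static string ContainerName(Guid tenantId, string resourceName) =>
        tenantId.ToString("N") + resourceName.ToLowerInvariant();

    public async Task<CloudBlobContainer> CreateContainerAsync(Guid tenantId, string resourceName)
    {
        var container = _client.GetContainerReference(ContainerName(tenantId, resourceName));
        await container.CreateIfNotExistsAsync();
        return container;
    }

    public async Task StoreDocumentAsync(Guid tenantId, string resourceName, string blobName, Stream content)
    {
        var container = await CreateContainerAsync(tenantId, resourceName);
        await container.GetBlockBlobReference(blobName).UploadFromStreamAsync(content);
    }
}
```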
This means that you can simply grow the storage account pool as you get more tenants, and take accounts that are filling up out of the pool.
The advantage of this is that you do not need two systems for your data, using both table storage and blob storage. Blob storage already has a way to present data as a directory/file hierarchy.
Extension points
Blob storage API broker
In addition to the design above, I made OWIN middleware that sits between clients and blob storage, basically just forwarding requests from clients on to blob storage. This piece is decoupled, so it is not required: you could instead hand out normal SAS tokens and let clients talk directly to blob storage. But it makes it easy to hook in when actions are performed on files. Each tenant gets its own endpoint, files/tenantid/<resourcename>/
Such an API broker also lets you plug in whatever token authentication system you already use to authenticate and authorize incoming requests, and then sign the requests to blob storage inside the API.
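A rough sketch of such a broker (the ITenantStore lookup, URL scheme and class names are my assumptions; the answer only describes the idea):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Owin;

public interface ITenantStore
{
    // Returns the storage account name saved when the tenant was created.
    string GetStorageAccountName(Guid tenantId);
}

public class BlobBrokerMiddleware : OwinMiddleware
{
    static readonly HttpClient Http = new HttpClient();
    readonly ITenantStore _tenants;

    public BlobBrokerMiddleware(OwinMiddleware next, ITenantStore tenants) : base(next)
    {
        _tenants = tenants;
    }

    public override async Task Invoke(IOwinContext context)
    {
        // Expect paths like /files/{tenantid}/{resourcename}/{blobpath}
        var segments = context.Request.Path.Value.Trim('/').Split(new[] { '/' }, 4);
        if (segments.Length < 4 || segments[0] != "files" || !Guid.TryParse(segments[1], out var tenantId))
        {
            await Next.Invoke(context);
            return;
        }

        // This is the hook point: authorize the caller against the tenant here,
        // then sign the outgoing request (e.g. append a SAS token to blobUrl).
        var account = _tenants.GetStorageAccountName(tenantId);
        var container = tenantId.ToString("N") + segments[2];
        var blobUrl = $"https://{account}.blob.core.windows.net/{container}/{segments[3]}";

        var response = await Http.GetAsync(blobUrl, HttpCompletionOption.ResponseHeadersRead);
        context.Response.StatusCode = (int)response.StatusCode;
        context.Response.ContentType = response.Content.Headers.ContentType?.ToString();
        await response.Content.CopyToAsync(context.Response.Body);
    }
}
```

Registered at startup with something like `app.Use<BlobBrokerMiddleware>(tenantStore);` (only GET forwarding is shown; writes and deletes would follow the same pattern).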
Blob storage metadata
Using the API broker extension described above, combined with metadata, you can actually take it one step further: modify incoming requests to always include metadata, and filter the XML returned from blob storage before sending it back to clients, to hide certain containers or blobs. For example, when a user deletes a blob, instead set x-ms-meta-status: deleted and filter such blobs out whenever blobs/containers are listed. You can then run routines that actually delete the data behind the scenes.
You need to be careful here, since you do not want to put too much logic in this layer; it adds a penalty to every request. But done smartly it can work very nicely.
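The soft-delete part could look roughly like this (again a sketch assuming the classic Microsoft.WindowsAzure.Storage SDK):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Blob;

public static class SoftDelete
{
    public static async Task MarkDeletedAsync(CloudBlockBlob blob)
    {
        blob.Metadata["status"] = "deleted"; // sent to the service as x-ms-meta-status
        await blob.SetMetadataAsync();
    }

    public static async Task<List<CloudBlockBlob>> ListVisibleAsync(CloudBlobContainer container)
    {
        var visible = new List<CloudBlockBlob>();
        BlobContinuationToken token = null;
        do
        {
            // Request metadata so the status field is populated on each item.
            var segment = await container.ListBlobsSegmentedAsync(
                null, true, BlobListingDetails.Metadata, null, token, null, null);
            visible.AddRange(segment.Results
                .OfType<CloudBlockBlob>()
                .Where(b => !b.Metadata.ContainsKey("status") || b.Metadata["status"] != "deleted"));
            token = segment.ContinuationToken;
        } while (token != null);
        return visible;
    }
}
```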
These extensions would also let your users create "empty" subfolders inside a container: place a zero-byte file with status: hidden, which is likewise filtered out. (Remember that blob storage can only show a virtual folder if there is something in it.) The same could also be achieved with table storage.
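For instance (the placeholder blob name ".folder" is my convention, not from the answer):

```csharp
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Blob;

public static class VirtualFolders
{
    public static async Task CreateEmptyFolderAsync(CloudBlobContainer container, string folderPath)
    {
        // A zero-byte placeholder makes the virtual folder appear in listings;
        // the broker filters the placeholder itself out via status: hidden.
        var placeholder = container.GetBlockBlobReference(folderPath.TrimEnd('/') + "/.folder");
        placeholder.Metadata["status"] = "hidden";
        await placeholder.UploadFromByteArrayAsync(new byte[0], 0, 0);
    }
}
```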
Azure Search
Another important extension point is that each blob can be indexed in Azure Search to make its content findable, and this is probably my favorite. I don't see any good solution based only on blob storage or table storage that gives you good search functionality, or to some degree even good filtering. With Azure Search you can give users a truly rich experience for finding their content again.
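As a sketch, pushing a document into an index on every blob write could look like this (the Microsoft.Azure.Search SDK and the index schema are my assumptions; any indexing route works):

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

public class BlobDocument
{
    public string Id { get; set; }       // key field of the index
    public string TenantId { get; set; } // used to filter results per tenant
    public string Path { get; set; }
    public string Content { get; set; }  // extracted text of the blob
}

public class BlobSearchIndexer
{
    readonly ISearchIndexClient _index;

    public BlobSearchIndexer(string serviceName, string adminKey, string indexName)
    {
        var serviceClient = new SearchServiceClient(serviceName, new SearchCredentials(adminKey));
        _index = serviceClient.Indexes.GetClient(indexName);
    }

    // Upsert one document; call this from the broker's write path.
    public Task IndexBlobAsync(BlobDocument doc) =>
        _index.Documents.IndexAsync(IndexBatch.MergeOrUpload(new[] { doc }));
}
```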
Snapshots
Another extension is to take a snapshot automatically every time a file changes. This becomes even easier with the API broker; otherwise, monitoring the storage logs is an option.
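From the broker's write path this is a one-liner before the overwrite, for example:

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Blob;

public static class Snapshots
{
    public static async Task UploadWithSnapshotAsync(CloudBlockBlob blob, Stream newContent)
    {
        // Preserve the current version as a point-in-time snapshot before overwriting.
        if (await blob.ExistsAsync())
            await blob.CreateSnapshotAsync();
        await blob.UploadFromStreamAsync(newContent);
    }
}
```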
These ideas come from a project that I started and wanted to share, but since I'm busy at work over the coming months, I don't see myself releasing the project before the summer holidays give me time to finish it. The motivation for the project is to provide a NuGet package that lets other developers quickly set up the API broker I mentioned above and configure a multi-tenant storage solution.
Please upvote this answer if you read this and believe such a project could save you time in your current development process. That way I can see whether I should spend more time on the project or not.
I think Gaurav Mantri's answer is a better fit for the question above, but I wanted to share my ideas on the topic.