I am updating some of the legacy code in one of our web applications. The application allows the user to upload a spreadsheet, which we will process as a background job.
Each of these uploads creates a new table to store the spreadsheet data, so the number of tables in my SQL Server 2000 database will grow rapidly - thousands of tables in the short term. I am worried that this may not be what SQL Server is optimized for.
It would be easier to leave this mechanism as it is, but I do not want to leave behind a time bomb that will go off later; better to fix it now if it needs fixing (the obvious alternative being one large table with a key that links each record to its upload batch).
Could this architecture cause performance problems as the number of tables grows? And if so, would upgrading to a later version of SQL Server mitigate the problem?
Edit: additional information in response to questions:
- Each of these tables has the same schema. There is no reason it could not have been implemented as one large table; it just wasn't (a rough sketch of what I have in mind is below).
- Dropping old tables is also an option; the data is only needed for a month or two, no longer.
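
To make the alternative concrete, here is a rough sketch of the single-table design I have in mind. All table and column names are invented for illustration; the real columns would mirror whatever the per-upload tables currently share.

```sql
-- Hypothetical schema: one batch header table plus one shared data table,
-- instead of one new table per upload.
CREATE TABLE UploadBatch (
    BatchId     INT IDENTITY(1,1) PRIMARY KEY,
    UserId      INT      NOT NULL,
    UploadedAt  DATETIME NOT NULL DEFAULT GETDATE()
)

CREATE TABLE SpreadsheetRow (
    RowId    INT IDENTITY(1,1) PRIMARY KEY,
    BatchId  INT NOT NULL REFERENCES UploadBatch(BatchId),
    -- placeholder columns; the real ones would match the existing schema
    ColumnA  VARCHAR(255) NULL,
    ColumnB  VARCHAR(255) NULL
)

-- Index so reads for a single batch do not scan the whole table
CREATE INDEX IX_SpreadsheetRow_BatchId ON SpreadsheetRow (BatchId)

-- Periodic cleanup instead of dropping tables:
-- remove batches older than roughly two months
DELETE FROM SpreadsheetRow
WHERE BatchId IN (
    SELECT BatchId FROM UploadBatch
    WHERE UploadedAt < DATEADD(month, -2, GETDATE())
)

DELETE FROM UploadBatch
WHERE UploadedAt < DATEADD(month, -2, GETDATE())
```

The cleanup statements at the end are how I imagine the month-or-two retention would work in the single-table version, in place of dropping whole tables.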