As of 2017:
I have a long-running stored procedure that works intensively in TempDB. With TempDB on the OS disk, which is how Azure configures it by default on a DSv2 machine (SSD-backed), the procedure completed in about a minute and a half.
Moving TempDB to the temporary storage drive, and changing nothing else, brought the same run down to 57 seconds, roughly a 33% performance improvement. I repeated the run in both configurations, and the times consistently landed (give or take) around those numbers.
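For reference, relocating the TempDB files is done with ALTER DATABASE. A minimal sketch, assuming the default logical file names (tempdev, templog) and a D:\TempDb target directory, both of which you should verify against your own instance:

```sql
-- Point the TempDB files at the temporary storage drive.
-- The logical names below are the SQL Server defaults; check yours with:
--   SELECT name, physical_name FROM sys.master_files
--   WHERE database_id = DB_ID('tempdb');
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'D:\TempDb\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'D:\TempDb\templog.ldf');
```

The change takes effect on the next service restart; TempDB is recreated from scratch at startup, so there is nothing to copy over.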
Putting TempDB on temporary storage requires special consideration for how SQL Server starts, because the temporary drive is wiped when the VM is redeployed or deallocated. There are two approaches. One is to point the files at the root of D: and grant the SQL Server service account local administrator permissions. This is a scenario to consider if something other than the automatic service startup already starts the SQL Server process; otherwise it tends to raise eyebrows.
The second option is to configure the SQL Server service to start manually, write a PowerShell script that starts it, and register that script as a scheduled task that runs at startup. In the PowerShell script, first verify that the TempDB directory exists on the temporary storage drive (recreating it if necessary) before starting SQL Server.
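A minimal sketch of such a startup script. The directory D:\TempDb and the default-instance service names (MSSQLSERVER, SQLSERVERAGENT) are assumptions; adjust them to your setup:

```powershell
# Hypothetical startup script: recreate the TempDB folder on the (possibly
# wiped) temporary drive, then start the manually-configured services.
$tempDbDir = 'D:\TempDb'   # assumed TempDB file location

# The temporary drive comes back empty after a redeploy, so the folder
# must exist before SQL Server tries to create the TempDB files in it.
if (-not (Test-Path $tempDbDir)) {
    New-Item -ItemType Directory -Path $tempDbDir | Out-Null
}

# Start the database engine first, then the agent.
Start-Service -Name 'MSSQLSERVER'
Start-Service -Name 'SQLSERVERAGENT'
```

Register the script as a scheduled task triggered "At startup", running under an account allowed to start services; the SQL Server service account also needs write access to the directory.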
The documentation was already linked in another answer, but it has been updated since 2017, and it stops short of officially recommending this kind of setup for TempDB beyond moving it off the OS partition. It does, however, say:
If your workload makes heavy use of TempDB (for example, for temporary objects or complex joins), storing TempDB on the D drive can result in higher TempDB throughput and lower TempDB latency.
My experience confirms that last line.