If you have only one core, then the only way multithreading can help you is if chunks of the work depend on something other than the processor, so that one thread can do some work while another is waiting for data from the disk or network.
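For example, here is a minimal sketch of that overlap in Java (the file name "data.bin" is just a placeholder): one thread spends its time blocked on disk I/O while the main thread keeps the single core busy with computation.

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class OverlapDemo {
    public static void main(String[] args) throws Exception {
        // One thread spends most of its time blocked on disk I/O...
        Thread reader = new Thread(() -> {
            try {
                byte[] data = Files.readAllBytes(Path.of("data.bin")); // placeholder file
                System.out.println("Read " + data.length + " bytes");
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        reader.start();

        // ...while the main thread uses the core for computation in the meantime.
        long sum = 0;
        for (long i = 0; i < 100_000_000L; i++) {
            sum += i;
        }
        System.out.println("Computed " + sum);

        reader.join();
    }
}
```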
If your application has a graphical interface, it can benefit from multithreading in that, although the processing will not complete any faster (in fact slightly slower, though probably imperceptibly so unless the task is very long), the application can still respond to user input in the meantime.
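A rough sketch of that pattern with Swing (any GUI toolkit works the same way): the long task runs on its own thread so the event dispatch thread stays free to handle input, and the result is pushed back to the UI thread when it is done.

```java
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;

public class ResponsiveUi {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Demo");
            JButton button = new JButton("Start long task");
            button.addActionListener(e -> {
                button.setText("Working...");
                // Run the long task off the event dispatch thread so the UI keeps responding.
                new Thread(() -> {
                    try {
                        Thread.sleep(5000); // stand-in for the real work
                    } catch (InterruptedException ex) {
                        Thread.currentThread().interrupt();
                    }
                    // Hand the result back to the event dispatch thread.
                    SwingUtilities.invokeLater(() -> button.setText("Done"));
                }).start();
            });
            frame.add(button);
            frame.setSize(300, 100);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }
}
```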
If you have two or more cores, then you can also gain with processor-bound operations, though parallelising them can range from trivial to impossible depending on what kind of operation it is. This is not relevant to your case, but it is worth bearing in mind in case the code you write is later run on a multi-core system.
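At the trivial end of that range, a sketch of what a processor-bound gain looks like: an embarrassingly parallel sum split across one worker per core (the numbers and the way the work is split are arbitrary, just for illustration).

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSum {
    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        long n = 1_000_000_000L;
        long chunk = n / cores;
        List<Future<Long>> parts = new ArrayList<>();

        // Split a trivially parallel sum into one chunk per core.
        for (int i = 0; i < cores; i++) {
            long start = i * chunk;
            long end = (i == cores - 1) ? n : start + chunk;
            parts.add(pool.submit(() -> {
                long sum = 0;
                for (long v = start; v < end; v++) sum += v;
                return sum;
            }));
        }

        long total = 0;
        for (Future<Long> part : parts) total += part.get();
        pool.shutdown();

        System.out.println("Sum = " + total + " using " + cores + " threads");
    }
}
```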
Raising the priority, though, is probably a bad idea, especially if you have only one core (one advantage of multi-core systems is that people messing with priorities can't do as much damage).
All threads have priorities, which are a factor of the priority of their process and their priority within that process. A low-priority thread in a high-priority process outranks a high-priority thread in a low-priority process.
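In Java, for instance, you can only set the within-process part; the process priority is set from outside (Task Manager, `nice`, and so on), and how the JVM's 1-10 range maps onto OS priorities is platform-dependent. A sketch:

```java
public class PriorityDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable spin = () -> {
            long count = 0;
            long end = System.currentTimeMillis() + 2000;
            while (System.currentTimeMillis() < end) count++;
            System.out.println(Thread.currentThread().getName() + " counted " + count);
        };

        Thread low = new Thread(spin, "low");
        Thread high = new Thread(spin, "high");
        low.setPriority(Thread.MIN_PRIORITY);   // priority within this process only;
        high.setPriority(Thread.MAX_PRIORITY);  // the process priority is set by the OS/user

        low.start();
        high.start();
        low.join();
        high.join();
    }
}
```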
The scheduler hands out processor time slices in a round-robin fashion among the highest-priority threads that need to run. If there is processor time left over (which in your case means there are zero threads at that priority that need to run), it hands out slices at the next priority down, and so on.
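A toy model of that rule (not any real scheduler, just the idea): slices go round-robin among the runnable threads at the highest non-empty priority level, and a lower level only gets a turn once the levels above it have nothing to run. Here the two priority-10 threads never block, so the priority-5 thread never runs.

```java
import java.util.ArrayDeque;
import java.util.Map;
import java.util.Queue;
import java.util.TreeMap;

public class ToyScheduler {
    public static void main(String[] args) {
        // priority -> queue of runnable threads (higher number = higher priority)
        TreeMap<Integer, Queue<String>> runnable = new TreeMap<>();
        runnable.computeIfAbsent(10, k -> new ArrayDeque<>()).add("workerA");
        runnable.computeIfAbsent(10, k -> new ArrayDeque<>()).add("workerB");
        runnable.computeIfAbsent(5, k -> new ArrayDeque<>()).add("backgroundThread");

        for (int slice = 0; slice < 6; slice++) {
            // Always serve the highest priority level that has runnable threads.
            Map.Entry<Integer, Queue<String>> top = runnable.lastEntry();
            Queue<String> queue = top.getValue();
            String next = queue.poll();
            System.out.println("slice " + slice + " -> " + next + " (priority " + top.getKey() + ")");
            queue.add(next); // round-robin within the priority level
        }
    }
}
```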
Most of the time, most threads aren't doing much anyway, as can be seen from the fact that CPU usage on most systems is usually below 100% (hyperthreading skews this a bit: the internal scheduling within the cores means a hyperthreaded system can be fully saturated while apparently running at only about 70%). Either way, things normally get done, and a thread that suddenly has a lot to do will get through it at normal priority in pretty much the same time it would at a higher one.
However, while the benefit to that busy thread of a higher priority is usually little or nothing, the cost is great. Since it is the only thread that gets any processor time, all other threads are stuck, so every other process freezes for a while. Eventually the scheduler notices that they have all been waiting for around 3 seconds, and fixes this by boosting them all to the highest priority and giving them larger slices than normal. Now we get a burst of activity, as the threads that got no time are suddenly the highest-priority threads wanting processor time. There is a spurt of every thread except the high-priority one running, the system stops churning, though there are probably still plenty of applications showing "Not Responding" in their title bars. It is far from ideal, but it is an effective way of coping with a thread of higher-than-usual priority hogging the core for so long.
The threads gradually drop back down in priority, and we are back to the situation where the one thread with the raised priority is the only one that gets to run.
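That anti-starvation boost can be modelled in the same toy style (the numbers are made up: "waited 4 slices" stands in for the roughly 3 seconds above, and 31 stands in for the top of the priority range). A thread that has waited too long is temporarily bumped to the top, runs, and then drops back to its base priority.

```java
import java.util.Comparator;
import java.util.List;

public class StarvationBoost {
    static class Task {
        final String name;
        final int basePriority;
        int priority;
        int waited = 0; // consecutive slices without processor time

        Task(String name, int basePriority) {
            this.name = name;
            this.basePriority = basePriority;
            this.priority = basePriority;
        }
    }

    public static void main(String[] args) {
        List<Task> tasks = List.of(
                new Task("greedyHighPriority", 15),
                new Task("normalWorker", 8),
                new Task("backgroundWorker", 4));

        for (int slice = 0; slice < 12; slice++) {
            // The highest-priority task gets the slice.
            Task chosen = tasks.stream()
                    .max(Comparator.comparingInt(t -> t.priority))
                    .orElseThrow();
            System.out.println("slice " + slice + " -> " + chosen.name
                    + " (priority " + chosen.priority + ")");

            for (Task t : tasks) {
                if (t == chosen) {
                    t.waited = 0;
                    t.priority = t.basePriority; // any boost decays once it has run
                } else if (++t.waited >= 4) {
                    t.priority = 31;             // starved: boosted to the top for a turn
                }
            }
        }
    }
}
```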
For added fun, if our high-priority thread depends in any way on services provided by the lower-priority threads, it can end up waiting for them. Hopefully in a way that means it blocks and stops doing any damage, but probably not.
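A sketch of that dependency shape, the classic priority inversion: the raised-priority thread blocks on a lock held by a low-priority thread, so it only makes progress when the low-priority thread does. (On an idle multi-core desktop you will not actually see starvation; this just shows the structure.)

```java
import java.util.concurrent.locks.ReentrantLock;

public class InversionSketch {
    private static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        Thread lowHolder = new Thread(() -> {
            lock.lock();
            try {
                busyWork(3000); // low-priority work done while holding the lock
            } finally {
                lock.unlock();
            }
        }, "low");
        lowHolder.setPriority(Thread.MIN_PRIORITY);

        Thread highWaiter = new Thread(() -> {
            lock.lock(); // blocks until the low-priority thread lets go
            try {
                System.out.println("high-priority thread finally got the lock");
            } finally {
                lock.unlock();
            }
        }, "high");
        highWaiter.setPriority(Thread.MAX_PRIORITY);

        lowHolder.start();
        Thread.sleep(100); // let the low-priority thread grab the lock first
        highWaiter.start();

        lowHolder.join();
        highWaiter.join();
    }

    private static void busyWork(long millis) {
        long end = System.currentTimeMillis() + millis;
        long count = 0;
        while (System.currentTimeMillis() < end) count++;
    }
}
```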
All in all, raised thread priorities should be applied with great care, and raised process priorities even more so. They are really only valid for work that exits quickly and is either important to the progress of other threads (for example, some OS processes run at a higher priority, the .NET finalizer thread runs at a higher priority than the rest of its process, and so on) or where millisecond-level delays really matter (genuinely intensive media work).
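As a rough illustration of the kind of use that can be justified (the "audio buffer" here is only a placeholder): a thread that does a tiny, latency-sensitive piece of work on each tick and is blocked the rest of the time, with a modest bump rather than the maximum priority.

```java
public class TimingCriticalTick {
    public static void main(String[] args) throws InterruptedException {
        Thread tick = new Thread(() -> {
            try {
                for (int i = 0; i < 50; i++) {
                    long start = System.nanoTime();
                    // ... fill a (hypothetical) audio buffer here: microseconds of work ...
                    long micros = (System.nanoTime() - start) / 1_000;
                    System.out.println("tick " + i + " took " + micros + " µs");
                    Thread.sleep(10); // blocked most of the time
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "tick");

        tick.setPriority(Thread.NORM_PRIORITY + 1); // a modest bump, not MAX_PRIORITY
        tick.start();
        tick.join();
    }
}
```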