Why doesn't my multi-threaded Java program max out all the cores on my machine?

I have a program that builds a data model in memory and then creates the number of threads specified on the command line to run several string-checking algorithms against an input set and this data model. The work is divided among the threads along the input row set, and each thread then iterates over the same in-memory instance of the data model (which is never modified again, so there are no synchronization problems).
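For reference, here is a minimal sketch of the setup described above; DataModel, its check method, and the inline row list are illustrative stand-ins, not names from the original program:

    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class StringCheckRunner {

        // Stand-in for the real data model; built once, only read afterwards.
        static final class DataModel {
            boolean check(String row) {
                return row != null && !row.isEmpty(); // placeholder for the real algorithms
            }
        }

        public static void main(String[] args) throws InterruptedException {
            int threads = Integer.parseInt(args[0]);          // thread count from the command line
            DataModel model = new DataModel();                // shared, never modified again
            List<String> rows = List.of("a", "b", "c", "d");  // stand-in for the real input set

            ExecutorService pool = Executors.newFixedThreadPool(threads);
            int chunk = (rows.size() + threads - 1) / threads; // ceiling division
            for (int t = 0; t < threads; t++) {
                int from = Math.min(rows.size(), t * chunk);
                int to = Math.min(rows.size(), from + chunk);
                List<String> slice = rows.subList(from, to);
                // Each task only reads 'model' and its own slice, so no locking is needed.
                pool.submit(() -> slice.forEach(model::check));
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
        }
    }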

I run this on a 64-bit Windows 2003 server with two quad-core processors, and when I run 10 threads the Windows Task Manager shows the CPUs never reaching full utilization (they don't look especially taxed). Is this normal behavior?

It also seems that with 7 threads all the same work gets done in the same time period, so would you recommend running with 7 threads instead?

Should I run it with many more threads? ... Although I suppose this could be harmful, as the JVM would do more context switching between threads.

Alternatively, should I run it with fewer threads?

Alternatively, what would be the best tool I could use to measure this? ... Can a profiling tool help me here, and is one of the various profilers better at detecting bottlenecks (assuming I have one here) than the rest?
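One low-overhead way to answer the "am I actually CPU-bound?" part of this question is the standard java.lang.management API (available since Java 5). This is a sketch, not from the original post: if a worker's accumulated CPU time lags far behind elapsed wall-clock time, that thread is spending its life blocked rather than computing.

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class ThreadCpuReport {
        public static void main(String[] args) {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            if (!mx.isThreadCpuTimeSupported()) {
                System.out.println("Per-thread CPU time is not supported on this JVM.");
                return;
            }
            for (long id : mx.getAllThreadIds()) {
                long cpuNanos = mx.getThreadCpuTime(id);  // -1 if the thread has already died
                ThreadInfo info = mx.getThreadInfo(id);   // null if the thread has already died
                if (cpuNanos >= 0 && info != null) {
                    System.out.printf("thread %-30s cpu=%.2fs state=%s%n",
                            info.getThreadName(), cpuNanos / 1e9, info.getThreadState());
                }
            }
        }
    }

If the workers show up mostly in BLOCKED or WAITING states, the bottleneck is contention or I/O, not the CPUs. A thread dump (jstack <pid>, or Ctrl+Break in the console window on Windows) or a profiler such as VisualVM, JProfiler, or YourKit will point at the same thing with little extra work.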

Edit: the data comes from SQL Server 2005.

The first thing to check is whether the threads are actually doing pure computation. If they spend any time on I/O (reading the input set, hitting the database, logging, writing results), they will block, and the cores will sit partly idle no matter how many threads you start.

If the work really is CPU-bound, running more threads than cores just adds context-switching overhead; with two quad-core processors, about 8 worker threads is the natural starting point, which also fits your observation that 7 threads finish the job in the same time as 10.

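To make that last point concrete, a small sketch (the class and method names are illustrative) that derives the pool size from the hardware instead of hard-coding a thread count:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class PoolSizing {
        // For purely CPU-bound work, one thread per core is the usual starting point;
        // Runtime.availableProcessors() reports 8 on two quad-core processors.
        public static ExecutorService newCpuBoundPool() {
            int cores = Runtime.getRuntime().availableProcessors();
            return Executors.newFixedThreadPool(cores);
        }
    }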

Source: https://habr.com/ru/post/1740335/