To better understand threading in Java, I wrote the following code:
public class SimpleRunnableTest {
    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        Thread t1 = new Thread(new TT1());
        t1.start();
        Thread t2 = new Thread(new TT2());
        t2.start();
        t2.join();
        t1.join();
        long end = System.currentTimeMillis();
        System.out.println("end-start=" + (end - start));
    }
}

class TT1 implements Runnable {
    public void run() {
        try {
            Thread.sleep(5000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

class TT2 implements Runnable {
    public void run() {
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
The idea is this: if I executed Thread.sleep(5000) and Thread.sleep(1000) sequentially in the main thread, the total time would be 6 seconds; but since I use two threads, it takes only 5 seconds on a multi-core machine, and that's it. But my question is:
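To make the comparison concrete, here is the sequential baseline I have in mind (the class name is my own; it just runs the two sleeps one after the other in the main thread):

```java
// Sequential baseline: both sleeps run back-to-back in a single thread,
// so the elapsed time is the sum of the two durations (~6000 ms).
public class SequentialSleepTest {
    static long run() throws InterruptedException {
        long start = System.currentTimeMillis();
        Thread.sleep(5000); // task A
        Thread.sleep(1000); // task B
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("end-start=" + run());
    }
}
```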
Why does the result remain 5 seconds even on a single-core CPU? Of course, threads are used, but isn't that just concurrency simulated by time-division multiplexing?
My understanding of time-division multiplexing: suppose Thread.sleep(5000) is task A and Thread.sleep(1000) is task B, and each can be broken into slices: A1, A2, A3 and B1, B2.
Sequential execution: A1, A2, A3, B1, B2
Time-division multiplexing (single core, two threads): A1, B1, A2, B2, A3
If so, why does the first take 6 seconds but the second only 5?
Am I right here?
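To probe my time-slicing picture, I also tried replacing the sleeps with genuinely CPU-bound work (class, method names, and iteration counts below are my own; actual durations depend on the machine). Unlike Thread.sleep(), which blocks the thread without occupying the core, these loops have to consume real CPU cycles to finish:

```java
// Sketch: two CPU-bound tasks instead of two sleeps.
public class BusySpinTest {
    // Burn CPU for a fixed amount of work (not wall-clock time).
    static long spin(long iterations) {
        long acc = 0;
        for (long i = 0; i < iterations; i++) {
            acc += i;
        }
        return acc;
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        Thread t1 = new Thread(() -> spin(2_000_000_000L)); // task A: larger workload
        Thread t2 = new Thread(() -> spin(400_000_000L));   // task B: smaller workload
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        long elapsed = System.currentTimeMillis() - start;
        // On a single core the slices interleave, so the wall time approaches
        // the SUM of both workloads; on multiple cores the tasks overlap and
        // the wall time approaches the larger workload alone.
        System.out.println("end-start=" + elapsed);
    }
}
```

If my understanding is correct, this CPU-bound version should behave differently on one core than the sleep-based version does.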