Why is `Time.utc` slower in a forked process in Ruby on OS X (and not Python)?

I saw the question Why does Process.fork make stuff slower in Ruby on OS X? and was able to determine that Process.fork was not, in general, making tasks slower.

However, it does seem to make Time.utc, in particular, much slower.

    require 'benchmark'

    def do_stuff
      50000.times { Time.utc(2016) }
    end

    puts "main: #{Benchmark.measure { do_stuff }}"

    Process.fork do
      puts "fork: #{Benchmark.measure { do_stuff }}"
    end

Here are some results:

    main: 0.100000   0.000000   0.100000 (  0.103762)
    fork: 0.530000   3.210000   3.740000 (  3.765203)
    main: 0.100000   0.000000   0.100000 (  0.104218)
    fork: 0.540000   3.280000   3.820000 (  3.858817)
    main: 0.100000   0.000000   0.100000 (  0.102956)
    fork: 0.520000   3.280000   3.800000 (  3.831084)

One clue might be that the above takes place on OS X, while on Ubuntu there seems to be no difference:

    main: 0.100000   0.070000   0.170000 (  0.166505)
    fork: 0.090000   0.070000   0.160000 (  0.169578)
    main: 0.090000   0.080000   0.170000 (  0.167889)
    fork: 0.100000   0.060000   0.160000 (  0.169160)
    main: 0.100000   0.070000   0.170000 (  0.170839)
    fork: 0.100000   0.070000   0.170000 (  0.176146)

Can anyone explain this oddity?

Further research:

@tadman suggested this might be a bug in the macOS / OS X time code, so I wrote a similar test in Python:

    from os import fork
    from timeit import timeit

    print(timeit("datetime.datetime.utcnow()", setup="import datetime"))

    if fork() == 0:
        print(timeit("datetime.datetime.utcnow()", setup="import datetime"))

Again, on Ubuntu, the timings are the same for the forked and main processes. On OS X, however, the forked process is actually slightly faster than the main process, which is the opposite of the behavior in Ruby.

This makes me think the source of the slowdown is in the Ruby implementation, not in OS X itself.

1 answer

As it turns out, the slowdown occurs roughly equally in two function calls within time.c, in the gmtime_with_leapsecond function. The two functions are tzset and localtime_r.
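If tzset and localtime_r really are the bottleneck, then any Ruby call that reaches them should show the same post-fork penalty, not just Time.utc. Here is a minimal sketch of that check (my own illustration, not part of the original investigation), using Time.local, which also converts through the C library's local-time machinery:

    require 'benchmark'

    # Time.local converts via the C library (tzset/localtime_r), so if
    # those calls are the bottleneck it should show the same post-fork
    # penalty that Time.utc does.
    def do_local_stuff
      50000.times { Time.local(2016) }
    end

    puts "main: #{Benchmark.measure { do_local_stuff }}"

    Process.fork do
      puts "fork: #{Benchmark.measure { do_local_stuff }}"
    end

    Process.wait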

This discovery led me to the question Why is tzset() a lot slower after forking on Mac OS X?, of which the present question might be said to be a duplicate.

There are two answers there, neither of them accepted, suggesting root causes related to either:

  • the async-signal-safety of tzset and localtime / localtime_r, or
  • Apple's use of a passive notification registry that does not remain valid in the fork'd process.

The fact that the slowdown occurs only in years with known leap seconds (as the asker of the other question discovered) is evidently because Ruby does not call gmtime_with_leapsecond at all when it knows the year has no leap seconds.
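If that is correct, a forked benchmark should be slow only for years the leap-second table covers, and fast for years far beyond it. A hypothetical way to see this (the specific years are my assumption, chosen to fall inside and beyond the table):

    require 'benchmark'

    Process.fork do
      # 1973 falls inside Ruby's built-in leap second table, so it should
      # take the gmtime_with_leapsecond path; 2100 lies far beyond the
      # table, so Ruby should skip that path and stay fast.
      [1973, 2100].each do |year|
        puts "fork #{year}: #{Benchmark.measure { 50000.times { Time.utc(year) } }}"
      end
    end

    Process.wait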

I'm not sure why no such slowdown is seen in Python. One possible explanation is that my test script, using fork and utcnow, may not produce a child process that ever calls tzset or localtime / localtime_r.
