Set string value in shared c_wchar_p in subprocess?

I have this situation:

The main process spawns some subprocesses that must write their results to a shared object with both string and numeric fields. The numeric values come through fine, but the string values are lost.

    import multiprocessing as mp
    from ctypes import Structure, c_double, c_wchar_p, c_int

    # shared obj class
    class SharedObj(Structure):
        _fields_ = [('name', c_wchar_p),
                    ('val', c_double)]

    def run_mp(values, lock, s):
        for i in range(s, len(values), 2):
            lock.acquire()
            values[i].name = str(i)  # write the string value in the shared obj
            values[i].val = float(i)
            print("tmp: %d" % i)
            lock.release()

    def main():
        # creating the shared obj and mutex
        values = mp.Array(SharedObj, [SharedObj() for i in range(10)])
        lock_j = mp.Lock()

        # creating two sub-processes from the function run_mp
        p1 = mp.Process(target=run_mp, args=(values, lock_j, 0))
        p2 = mp.Process(target=run_mp, args=(values, lock_j, 1))

        p1.start()
        p2.start()

        p1.join()
        p2.join()

        for v in values:
            print()
            print("res name: %s" % v.name)
            print("res val: %f " % v.val)

    if __name__ == '__main__':
        main()

As a result, the c_double field of the shared object is written correctly, but the string generated in the run_mp subprocesses ( values[i].name = str( i ) ) is lost in the main process.

Is there a method for storing strings generated in a subprocess?

The output of this code is shown below. The string read back in the main process is completely random garbage:

    tmp: 0
    tmp: 2
    tmp: 3
    tmp: 4

    res name:     羍    羍
    res val: 0.000000

    res name:     羍    羍
    res val: 1.000000

    res name:
    res val: 2.000000
    ....
2 answers

The problem is that c_wchar_p is a raw pointer: each subprocess stores a pointer into its own address space, and that address is meaningless in the main process, which is why you read back garbage. A fixed-size character buffer, which is shared by value, works instead. Here is a slightly modified version of your code:

    #!/usr/bin/env python

    import multiprocessing as mp


    def run_mp(values):
        for c_arr, c_double in values:
            c_arr.value = 'hello foo'
            c_double.value = 3.14


    def main():
        lock = mp.Lock()
        child_feed = []
        for i in range(10):
            child_feed.append((
                mp.Array('c', 15, lock=lock),
                mp.Value('d', 1.0 / 3.0, lock=lock)
            ))

        p1 = mp.Process(target=run_mp, args=(child_feed,))
        p2 = mp.Process(target=run_mp, args=(child_feed,))

        p1.start()
        p2.start()

        p1.join()
        p2.join()

        for c_arr, c_double in child_feed:
            print()
            print("res name: %s" % c_arr.value)
            print("res val: %f" % c_double.value)


    if __name__ == '__main__':
        main()

Take a look at http://docs.python.org/library/multiprocessing.html ; it includes an example of using an array of characters.

There is also the mmap module for sharing memory ( http://docs.python.org/library/mmap.html ), but with it you have to synchronize access yourself, possibly using semaphores. If you prefer a simpler approach, just use pipes; a sketch follows.
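For illustration, here is a minimal sketch of the pipe-based approach (this code is not from the original answer; the worker function and the hard-coded range are just examples). Each subprocess sends its (name, val) results back over its end of a Pipe as ordinary picklable objects, so strings arrive intact without any shared ctypes buffers:

    # Minimal pipe-based sketch (illustrative): each worker sends
    # (name, val) tuples back to the parent over its own pipe.
    import multiprocessing as mp


    def worker(conn, start):
        # Results travel through the pipe as ordinary picklable
        # Python objects -- strings survive this trip intact.
        for i in range(start, 10, 2):
            conn.send((str(i), float(i)))
        conn.close()


    def main():
        pipes = []
        procs = []
        for start in (0, 1):
            parent_conn, child_conn = mp.Pipe()
            p = mp.Process(target=worker, args=(child_conn, start))
            p.start()
            child_conn.close()  # close the parent's copy so recv() can hit EOF
            pipes.append(parent_conn)
            procs.append(p)

        for conn in pipes:
            try:
                while True:
                    name, val = conn.recv()
                    print("res name: %s  res val: %f" % (name, val))
            except EOFError:  # raised once the worker closes its end
                pass

        for p in procs:
            p.join()


    if __name__ == '__main__':
        main()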


I did not want to use multiprocessing.Array because of its requirement to specify the size ahead of time. Instead, what works for me is a unicode object shared through a multiprocessing Manager. This was tested with Python 2.6 and several processes.

    >>> shared_str = multiprocessing.Manager().Value(unicode, 'some initial value')
    >>> shared_str.value
    'some initial value'
    >>> shared_str.value = 'some new value'
    >>> shared_str.value
    'some new value'

To solve the asker's specific problem of sharing both strings and numbers, a picklable object can be created to hold them, and that object stored in the Value instead; see the sketch below.
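A minimal sketch of that idea (assumed code, not from the original answer; the Result class and its field names are illustrative): wrap the string and the number in one picklable object and store it in a managed Value.

    # Illustrative sketch: one picklable object holds both the string
    # and the number, and is stored whole in a managed Value.
    import multiprocessing


    class Result(object):
        def __init__(self, name, val):
            self.name = name
            self.val = val


    def worker(shared, i):
        # Assigning .value replaces the whole object; the Manager proxy
        # pickles it and ships it to the manager process.
        shared.value = Result(str(i), float(i))


    def main():
        manager = multiprocessing.Manager()
        shared = manager.Value(Result, Result('initial', 0.0))

        p = multiprocessing.Process(target=worker, args=(shared, 42))
        p.start()
        p.join()

        res = shared.value
        print("res name: %s" % res.name)
        print("res val: %f" % res.val)


    if __name__ == '__main__':
        main()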

Of course, going through a Manager adds proxy and pickling overhead; if that is a concern, the shared-memory approaches above are an alternative.

