How to use 'for' parallel loop in Octave or Scilab?

I have two for loops running in my Matlab code. The inner loop is parallelized using Matlabpool on 12 processors (the maximum Matlab allows on a single machine).

I do not have a license for distributed computing. Please tell me how to do this in Octave or Scilab. I just want to parallelize the for loop ONLY.

The links that come up in a Google search are broken.

+6
4 answers

parfor is not yet implemented in Octave. The keyword is accepted, but it is merely a synonym for for ( http://octave.1599824.n4.nabble.com/Parfor-td4630575.html ).

The pararrayfun and parcellfun functions of the parallel package are handy on multi-core machines. They are often a good replacement for a parfor loop.

See http://wiki.octave.org/Parallel_package for an example. To install, run (only once)

 pkg install -forge parallel 

and then, once per session,

 pkg load parallel 

before using the functions.
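For example, here is a minimal sketch (the function and data are made up for illustration) that maps a function over a vector, spreading the calls across several local processes:

 pkg load parallel

 # Apply @(x) x^2 to each element of 1:10, distributing the
 # calls over 4 processes; pararrayfun takes the number of
 # processes first, then the function, then the array(s).
 results = pararrayfun (4, @(x) x^2, 1:10);
 disp (results)   # -> 1 4 9 16 25 36 49 64 81 100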

+9

In Scilab, you can use parallel_run:

 function a = g(arg1)
     a = arg1 * arg1
 endfunction

 res = parallel_run(1:10, g);

Limitations

  • parallel_run uses only one core on Windows platforms.
  • Currently, parallel_run only handles arguments and results that are matrices of real scalar values; the types argument is not used.
  • You should not rely on side effects such as modifying variables from an outer scope: only the data stored in the result variables will be copied back to the calling environment.
  • Macros called by parallel_run cannot use the JVM.
  • Changing the stack size (via gstacksize() or stacksize()) must not occur during a call to parallel_run.
+2

In GNU Octave, you can use the parfor construct:

 parfor i = 1:10
   # do stuff that may run in parallel
 endparfor

For more information: help parfor

0
  1. For a list of free and open-source alternatives to MATLAB-SIMULINK, please check out the alternatives page or my answer here. For alternatives to SIMULINK specifically, see this post.

  2. You should consider the difference between vectorized, parallel, concurrent, asynchronous, and multi-threaded computing. Without going into details, vectorized programming is a way to avoid loops. For example, the map function and list comprehensions in Python, ndarray arithmetic in NumPy, and matrix arithmetic in Scilab are vectorized computations. It is about how you write the code, not how the computer processes it. Parallel computing, mostly used for GPU computing (data parallelism), is when you perform a huge amount of arithmetic on large arrays using the computational units of a GPU. There is also task parallelism, which mostly refers to running a task on several threads, each handled by a separate CPU core. Concurrent or asynchronous is when you have only one computational unit, but it performs several tasks at the same time without blocking the processor. Basically, like a mother who cooks, cleans, and takes care of her child at the same time, but does only one job at any given moment :)

  3. Given the above, the FOSS world has something for each of these. For Scilab specifically, check this page. There is an MPI interface for distributed computing (multithreading / concurrency across multiple computers), OpenCL interfaces for GPU / data-parallel computing, and an OpenMP interface for multithreading / task concurrency. The parallel_run or feval proposed in the other answers is not parallelism, but a way to vectorize a regular function; NumPy does the same with the numpy.vectorize method. Basically, all Scilab arithmetic is already vectorized (the sketch below illustrates the difference between a loop and a vectorized expression).
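To make the loop-versus-vectorized distinction concrete, here is a minimal Octave sketch (Octave is used since the question mentions it; the variable names are made up for illustration):

 x = 1:100000;               # sample data

 # Loop version: one scalar operation per iteration.
 y1 = zeros (size (x));
 for i = 1:numel (x)
   y1(i) = x(i)^2;
 endfor

 # Vectorized version: a single array expression, no explicit loop.
 y2 = x .^ 2;

 assert (isequal (y1, y2))   # both give the same result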

0

Source: https://habr.com/ru/post/972881/

