How to speed up fractal generation using numpy arrays?

Here is a little script I wrote to create fractals using Newton's method.

import numpy as np
import matplotlib.pyplot as plt

f = np.poly1d([1, 0, 0, -1])  # x^3 - 1
fp = np.polyder(f)

def newton(i, guess):
    if abs(f(guess)) > .00001:
        return newton(i + 1, guess - f(guess) / fp(guess))
    else:
        return i

pic = []
for y in np.linspace(-10, 10, 1000):
    pic.append([newton(0, x + y * 1j) for x in np.linspace(-10, 10, 1000)])

plt.imshow(pic)
plt.show()

I use numpy arrays, but I still loop over every element of the 1000-by-1000 grid to apply newton(), which acts on a single guess rather than on the whole array.

My question is: How can I change my approach to better take advantage of numpy arrays?

PS: If you want to try the code without waiting too long, it is better to use a 100-by-100 grid.

Additional background:
See Newton's method for finding the zeros of a polynomial.
The main idea of the fractal is to follow guesses in the complex plane and count the number of iterations needed to converge to a zero. That is what the recursion in newton() does, ultimately returning the number of steps. Each guess in the complex plane is a pixel in the picture, colored by the number of steps to convergence. From such a simple algorithm you get these beautiful fractals.
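For readers who want the iteration spelled out: a single guess z is repeatedly replaced by z - f(z)/f'(z) until |f(z)| falls below the tolerance. A minimal standalone sketch (an iterative variant of the recursive newton() above, my own illustration):

```python
import numpy as np

f = np.poly1d([1, 0, 0, -1])   # f(z) = z^3 - 1
fp = np.polyder(f)             # f'(z) = 3z^2

# Newton step: z_next = z - f(z)/f'(z), starting from one complex guess
z = 2.0 + 0.0j
for step in range(10):
    if abs(f(z)) <= 1e-5:
        break
    z = z - f(z) / fp(z)
print(step, z)  # converges to the real root z = 1 in a handful of steps
```

The step counter at the break is exactly the pixel value the script above records.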


I worked from Lauritz V. Thaulow's code and was able to get a quite significant speedup with the following:

import numpy as np
import matplotlib.pyplot as plt
from itertools import count

def newton_fractal(xmin, xmax, ymin, ymax, xres, yres):
    yarr, xarr = np.meshgrid(np.linspace(xmin, xmax, xres),
                             np.linspace(ymin, ymax, yres) * 1j)
    arr = yarr + xarr
    ydim, xdim = arr.shape
    arr = arr.flatten()
    f = np.poly1d([1, 0, 0, -1])  # x^3 - 1
    fp = np.polyder(f)
    counts = np.zeros(shape=arr.shape)
    unconverged = np.ones(shape=arr.shape, dtype=bool)
    indices = np.arange(len(arr))
    for i in count():
        f_g = f(arr[unconverged])
        new_unconverged = np.abs(f_g) > 0.00001
        counts[indices[unconverged][~new_unconverged]] = i
        if not np.any(new_unconverged):
            return counts.reshape((ydim, xdim))
        unconverged[unconverged] = new_unconverged
        arr[unconverged] -= f_g[new_unconverged] / fp(arr[unconverged])

N = 1000
pic = newton_fractal(-10, 10, -10, 10, N, N)
plt.imshow(pic)
plt.show()

At N = 1000, I get a time of 11.1 seconds with Lauritz's code and 1.7 seconds with this code.

There are two main speedups here. First, I used meshgrid to speed up the creation of the numpy array of input values. This is actually a fairly significant part of the speedup at N = 1000.
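The meshgrid trick can be seen in isolation. This toy 3x3 sketch (my own illustration, not the answer's code) shows how multiplying one axis by 1j and summing the two meshgrid outputs builds the whole complex grid with vectorized operations instead of a Python double loop:

```python
import numpy as np

xs = np.linspace(-1, 1, 3)
ys = np.linspace(-1, 1, 3)

# re varies along columns, im along rows; their sum is the complex grid
re, im = np.meshgrid(xs, ys * 1j)
grid = re + im
print(grid)
# row 0 holds y = -1: [-1-1j, 0-1j, 1-1j], and so on up the rows
```

Every pixel of the fractal image corresponds to one entry of this grid, so building it at C speed matters at 1000x1000.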

The second speedup comes from performing the calculation only on the unconverged portion of the array. Lauritz used masked arrays for this before realizing that they were slowing things down. I haven't looked at them for quite some time, but I do remember masked arrays being a source of slowness in the past. I believe it's because they are largely implemented in pure Python on top of numpy arrays, rather than written almost entirely in C like regular numpy arrays.
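The unconverged-only bookkeeping is the subtle part of the code above, so here is a minimal sketch of the same pattern on its own (a toy update rule, my own illustration): boolean indexing shrinks the working set each pass, and the trick `unconverged[unconverged] = still` maps the shrunken mask back onto the full array.

```python
import numpy as np

values = np.array([5.0, 0.5, 3.0, 0.2])
counts = np.zeros(values.shape)
unconverged = np.ones(values.shape, dtype=bool)
indices = np.arange(len(values))

for i in range(10):
    work = values[unconverged]            # operate only on unconverged entries
    still = work > 1.0                    # toy rule: "converged" means <= 1.0
    # record the pass number for entries that just converged
    counts[indices[unconverged][~still]] = i
    if not still.any():
        break
    unconverged[unconverged] = still      # shrink the mask in place
    values[unconverged] = work[still] / 2.0  # toy update step (halving)

print(counts)  # pass number at which each entry converged
```

The real code does the same thing with the Newton update in place of the halving, so each pass touches fewer and fewer pixels as the image converges.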


Here is my stab at it. It's about 16 times faster.

import numpy as np
import matplotlib.pyplot as plt
from itertools import count

def newton_fractal(xmin, xmax, ymin, ymax, xres, yres):
    arr = np.array([[x + y * 1j for x in np.linspace(xmin, xmax, xres)]
                    for y in np.linspace(ymin, ymax, yres)], dtype="complex")
    f = np.poly1d([1, 0, 0, -1])  # x^3 - 1
    fp = np.polyder(f)
    counts = np.zeros(shape=arr.shape)
    for i in count():
        f_g = f(arr)
        converged = np.abs(f_g) <= 0.00001
        counts[np.where(np.logical_and(converged, counts == 0))] = i
        if np.all(converged):
            return counts
        arr -= f_g / fp(arr)

pic = newton_fractal(-10, 10, -10, 10, 100, 100)
plt.imshow(pic)
plt.show()

I'm no numpy expert; I'm sure those who are can optimize this further, but it's already a huge improvement in speed.

EDIT: It turns out that masked arrays don't help at all; removing them gave a 15% speedup, so I've taken the masked arrays out of the solution above. Can anyone explain why masked arrays didn't help?


I vectorized the newton function and got it approximately 85 times faster at 200x200 pixels, 144 times faster at 500x500 pixels, and 148 times faster at 1000x1000 pixels:

import numpy as np
import matplotlib.pyplot as plt

f = np.poly1d([1, 0, 0, -1])  # x^3 - 1
fp = np.polyder(f)

def newton(i, guess):
    a = np.empty(guess.shape, dtype=int)
    a[:] = i
    j = np.abs(f(guess)) > .00001
    if np.any(j):
        a[j] = newton(i + 1, guess[j] - np.divide(f(guess[j]), fp(guess[j])))
    return a

npts = 1000
x = np.linspace(-10, 10, npts)
y = np.linspace(-10, 10, npts)
xx, yy = np.meshgrid(x, y)
pic = np.reshape(newton(0, np.ravel(xx + yy * 1j)), [npts, npts])
plt.imshow(pic)
plt.show()

Well, I fixed the infinite loop in Justin Peel's code by adding a maximum-iteration condition; the code now handles polynomials such as z^4 - 1 without looping forever. If anyone knows a better way to fix this, please let us know. My fix may slow down execution somewhat, but it works. This is the code:

#!/usr/bin/python
# -*- coding: utf-8 -*-
import numpy as np
import itertools
import matplotlib.pyplot as plt

__author__ = 'Tobal'
__version__ = 1.0

def newton_fractal(f, xmin, xmax, ymin, ymax, xres, yres, tolerance, maxiters):
    yarr, xarr = np.meshgrid(np.linspace(xmin, xmax, xres),
                             np.linspace(ymin, ymax, yres) * 1j)
    arr = yarr + xarr
    ydim, xdim = arr.shape
    arr = arr.flatten()
    fp = np.polyder(f, m=1)
    counts = np.zeros(shape=arr.shape)
    unconverged = np.ones(shape=arr.shape, dtype=bool)
    indices = np.arange(len(arr))
    iters = 0
    for i in itertools.count():
        f_g = f(arr[unconverged])
        new_unconverged = np.abs(f_g) > tolerance
        counts[indices[unconverged][~new_unconverged]] = i
        if not np.any(new_unconverged) or iters >= maxiters:
            return counts.reshape((ydim, xdim))
        iters += 1
        unconverged[unconverged] = new_unconverged
        arr[unconverged] -= f_g[new_unconverged] / fp(arr[unconverged])

pic = newton_fractal(np.poly1d([1., 0., 0., 0., -1.]), -10, 10, -10, 10,
                     1000, 1000, 0.00001, 1000)
plt.imshow(pic, cmap=plt.get_cmap('gnuplot'))
plt.title(r'$Newton^{\prime} s\;\;Fractal\;\;Of\;\;P\,(z) =z^4-1$',
          fontsize=18, y=1.03)
plt.tight_layout()
plt.show()

I am using PyCharm 5 with Anaconda Python 3, and the IDE reports a warning on the expression np.any(new_unconverged):

Expected type 'Union[ndarray, iterable]', got 'bool' instead

This warning also appears with Justin Peel's original code, and I don't know how to resolve it. I am very interested in this issue. See Newton fractal.
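For what it's worth, np.any returns a numpy.bool_ scalar rather than the builtin bool, which is presumably what confuses PyCharm's type stubs; the code itself is correct. A quick check (my own sketch), including a bool() wrap that silences some type checkers:

```python
import numpy as np

flags = np.array([True, False, True])
result = np.any(flags)
print(type(result))   # numpy's bool scalar type, not the builtin bool
print(bool(result))   # explicit conversion to a plain Python bool
```

So the warning looks like a false positive from the IDE's stubs rather than a bug in either version of the code.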

