Analytical intersection of two cubic expressions

I am trying to find the analytic intersection of two cubic curves whose coefficients are defined in two separate functions in the code below.

We construct the curves; it is easy to see that there is an intersection:

[plot of Curve 1 and Curve 2 over P from -5.0 to 12.5]

Enlarged version:

[zoomed-in plot around the intersection]

However, sym.solve does not find the intersection; that is, the final print statement reports an empty list of solutions:

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.font_manager import FontProperties
    import sympy as sym

    def H_I(P):
        return (-941.254840173) + (0.014460465765)*P + (-9.41529726451e-05)*P**2 + (1.23485231253e-06)*P**3

    def H_II(P):
        return (-941.254313412) + (0.014234188877)*P + (-0.00013455013645)*P**2 + (2.58697027372e-06)*P**3

    fig = plt.figure()

    # Linspace for plotting the curves:
    P_lin = np.linspace(-5.0, 12.5, 10000)

    # Plotting the curves:
    p1, = plt.plot(P_lin, H_I(P_lin), color='black')
    p2, = plt.plot(P_lin, H_II(P_lin), color='blue')

    # Labels:
    fontP = FontProperties()
    fontP.set_size('15')
    plt.legend((p1, p2), ("Curve 1", "Curve 2"), prop=fontP)
    plt.ticklabel_format(useOffset=False)

    plt.savefig('2_curves.pdf', bbox_inches='tight')
    plt.show()
    plt.close()

    # Solving the intersection:
    P = sym.symbols('P', real=True)
    sol = sym.solve(H_I(P) - H_II(P), P)
    print('sol_ H_I(P) - H_II(P) =', sol)
3 answers

Problem

Your assumption that the solution is real, combined with SymPy's poor handling of numerical noise. If you drop the real=True assumption, you get the following code:

    import sympy as sym

    def H_I(P):
        return (-941.254840173) + (0.014460465765)*P + (-9.41529726451e-05)*P**2 + (1.23485231253e-06)*P**3

    def H_II(P):
        return (-941.254313412) + (0.014234188877)*P + (-0.00013455013645)*P**2 + (2.58697027372e-06)*P**3

    P = sym.symbols('P')
    sol = sym.solve(H_I(P) - H_II(P), P)
    sol = [x.evalf() for x in sol]
    print(sol)

with output:

[-6.32436145176552 + 1.0842021724855e-19*I, 1.79012202335501 + 1.0842021724855e-19*I, 34.4111917095165 - 1.35525271560688e-20*I]

You can access the real part of a solution x with sym.re(x).
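
For instance, a minimal sketch of extracting the real part of one of these roots, rebuilt by hand as a SymPy expression from the numbers printed above:

```python
import sympy as sym

# One of the roots from the output above, reconstructed as a SymPy expression
z = sym.Float('1.79012202335501') + sym.Float('1.0842021724855e-19') * sym.I

print(sym.re(z))  # 1.79012202335501
```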

Solution

If you know your required numerical accuracy, I think the easiest way to collect your real results is something like this piece of code:

    def is_close(a, b, tol):
        if abs(a - b) < tol:
            return True
        else:
            return False

    P = sym.symbols('P')
    sol = sym.solve(H_I(P) - H_II(P), P)
    sol = [complex(x.evalf()) for x in sol]

    real_solutions = []
    for x in sol:
        if is_close(x, x.real, 10**(-10)):
            real_solutions.append(x.real)

    print(real_solutions)

Since you asked: whether to write is_close() as a separate function is a question of taste. It is not necessary, and depends on your future goals. I wrote is_close() as a function for the sake of generality: you might reuse this code for other polynomials, or use the function in a different context, so why not write it in a clear and reusable way? Its purpose is simply to tell me whether the variable x and its real part re(x) are the same up to a certain numerical accuracy, i.e. whether the imaginary part is negligible. Note that you may also want to chop negligibly small real parts, which I forgot to handle here.
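
The same idea can be expressed a little more directly by testing the imaginary part itself. This is just a sketch: the helper name is_negligible_imag and the 1e-10 tolerance are my own choices, and the roots are copied from the output above:

```python
def is_negligible_imag(z, tol=1e-10):
    """True if the imaginary part of z is below tol in magnitude."""
    return abs(complex(z).imag) < tol

# The three roots from the output above, as plain Python complex numbers
roots = [complex(-6.32436145176552, 1.0842021724855e-19),
         complex(1.79012202335501, 1.0842021724855e-19),
         complex(34.4111917095165, -1.35525271560688e-20)]

real_solutions = [z.real for z in roots if is_negligible_imag(z)]
print(real_solutions)  # [-6.32436145176552, 1.79012202335501, 34.4111917095165]
```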

Edit

The small imaginary parts are usually leftovers of operations on complex numbers that occur somewhere in the solving process. Since SymPy treats them as exact, it does not discard them. evalf() gives you a numerical evaluation, or approximation, of an exact solution; it is not about better precision. Consider, for example:

    import sympy as sym

    def square(P):
        return P**2 - 2

    P = sym.symbols('P')
    sol2 = sym.solve(square(P), P)
    print(sol2)

This code prints:

[-sqrt(2), sqrt(2)]

and not floating-point numbers, as you might have expected. That solution is exact and perfectly precise; however, in my opinion it is not convenient for further calculations. That is why I use evalf() on every SymPy result. If you apply a numerical evaluation to all the results in this example, the output looks like this:

[-1.41421356237310, 1.41421356237310]

Why is it not convenient for further calculations, you may ask? Remember your first code. The first root found was

-6.32436145176552 + 0.e-19*I

Yes, the imaginary part is zero for any practical purpose. However, if you evaluate sym.im(x) == 0, the output is False. Computers and the word "exact" are a delicate combination; be careful.
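
To see this concretely, here is a small sketch; the tiny imaginary coefficient is copied from the output above:

```python
import sympy as sym

# A root with a numerically tiny, but nonzero, imaginary part
x = sym.Float('-6.32436145176552') + sym.Float('1.0842021724855e-19') * sym.I

print(sym.im(x) == 0)  # False: the imaginary part is tiny, but not exactly zero
```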

Solution 2

If you only want to get rid of the small imaginary parts without imposing an explicit numerical accuracy, you can simply pass the chop=True keyword argument to the numerical evaluation, i.e. .evalf(chop=True). This effectively discards negligibly small numbers, and in your case simply cuts off the imaginary part. Given that you are not interested in any imaginary parts anyway, as you stated in your comments, this is probably the best solution for you. For completeness, here is the corresponding code:

    P = sym.symbols('P')
    sol = sym.solve(H_I(P) - H_II(P), P)
    sol = [x.evalf(chop=True) for x in sol]

But remember that this is not so different from my first approach, had you also implemented the "chopping" for the real part. There is one difference, however: you have no control over the accuracy this imposes. If you never work with other polynomials, this may be fine. The following code should highlight the problem:

    def H_0(P):
        return P**2 - 10**(-40)

    P = sym.symbols('P')
    sol = sym.solve(H_0(P), P)
    sol_full = [x.evalf() for x in sol]
    sol_chop = [x.evalf(chop=True) for x in sol]
    print(sol_full)
    print(sol_chop)

Even though these roots are perfectly fine, and still exact after using evalf(), they are chopped to zero because they are too small. That is why I would hesitate to call chop=True the simplest, most general solution of all time. Instead, look at your polynomials first and work out the numerical accuracy you actually need.


Let's start with the first result:

    sol = sym.solve(H_I(P) - H_II(P), P)
    print('sol_ H_I(P) - H_II(P) =', sol)

which prints the following:

[-6.32436145176552 + 0.e-19*I, 1.79012202335501 + 0.e-19*I, 34.4111917095165 - 0.e-20*I]

I totally agree with your evalf() strategy because it provides better accuracy:

    evalf_result = [x.evalf() for x in sol]
    print('[x.evalf() for x in sol] = ', evalf_result)

which gives:

[-6.32436145176552 + 1.0842021724855e-19*I, 1.79012202335501 + 1.0842021724855e-19*I, 34.4111917095165 - 1.35525271560688e-20*I]

Your solution then applies Python's built-in complex function, which converts the above result into a friendlier list of plain complex numbers, in which the symbols I are replaced by j:

    complex_evalf_result = [complex(x.evalf()) for x in sol]
    print('complex(x.evalf()) for x in sol = ', complex_evalf_result)

which gives the following:

[(-6.324361451765517+1.0842021724855044e-19j), (1.7901220233550066+1.0842021724855044e-19j), (34.41119170951654-1.3552527156068805e-20j)]

Since type(x) returns complex for each x in complex_evalf_result, you can now use x.real to get the real part of each element. That was your strategy, and I agree with it.

After applying the evalf and complex functions, you then implement the is_close functional approach, which I found very interesting:

"If the absolute difference between a number and its real part is less than 1e-10, then discard the imaginary part."

This condition holds exactly when the imaginary part is smaller in magnitude than 1e-10. For example, for

[(-6.324361451765517+1.0842021724855044e-19j), (1.7901220233550066+1.0842021724855044e-19j), (34.41119170951654-1.3552527156068805e-20j)]

It happens:

abs((-6.324361451765517+1.0842021724855044e-19j) - (-6.324361451765517)) = 1.0842021724855044e-19,

which is far less than 1e-10.
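
This is easy to verify directly in Python:

```python
# The first root, as a plain Python complex number
z = -6.324361451765517 + 1.0842021724855044e-19j

# Distance between the number and its real part, i.e. |imaginary part|
diff = abs(z - z.real)
print(diff)          # 1.0842021724855044e-19
print(diff < 1e-10)  # True
```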

Your function essentially discards the imaginary part (please forgive me if there is another use for this function and I am simply not seeing it).

So why not use this simpler solution?

    import numpy as np
    import sympy as sym
    from sympy.functions import re

    # H_I and H_II as defined in the question
    P = sym.symbols('P')

    # Crude intersection:
    sol = sym.solve(H_I(P) - H_II(P), P)
    print('sol_ H_I(P) - H_II(P) =', sol)
    print()

    # Use of evalf to obtain better precision:
    evalf_result = [x.evalf() for x in sol]
    print('[x.evalf() for x in sol] = ', evalf_result)
    print()

    # Now, let's grab the real part of evalf_result:
    real_roots = []
    for x in evalf_result:
        each_real_root = re(x)
        real_roots.append(each_real_root)

    print('real_roots = ', real_roots)

This directly prints:

real_roots = [-6.32436145176552, 1.79012202335501, 34.4111917095165]

Following this strategy, it happens that:

1) there is no need to call Python's built-in complex: once evalf has done its job, the real part can be extracted simply with re(x).

2) there is no need to pass our intersection results through the is_close function to discard the imaginary part.

Please tell me if there is something that I misunderstand, or something that you do not quite agree with - I am very happy to discuss :) All your help was amazing, thank you very much!


To find the roots of a polynomial, use the dedicated roots function instead of the general-purpose solve.

    sol = sym.roots(H_I(P) - H_II(P), P)

This returns the roots as a dictionary with multiplicities, {-6.32436145176552: 1, 1.79012202335501: 1, 34.4111917095165: 1}

Most often, it’s easier to get a list of roots (multiple roots are repeated, if any):

    sol = sym.roots(H_I(P) - H_II(P), P, multiple=True)

returns [-6.32436145176552, 1.79012202335501, 34.4111917095165]
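
Putting this together with the definitions from the question, a self-contained sketch looks like this:

```python
import sympy as sym

# Cubic coefficients copied from the question
def H_I(P):
    return (-941.254840173) + (0.014460465765)*P + (-9.41529726451e-05)*P**2 + (1.23485231253e-06)*P**3

def H_II(P):
    return (-941.254313412) + (0.014234188877)*P + (-0.00013455013645)*P**2 + (2.58697027372e-06)*P**3

P = sym.symbols('P')

# Flat list of the three intersection roots
sol = sym.roots(H_I(P) - H_II(P), P, multiple=True)
print(sol)
```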


If this equation were not polynomial, I would recommend using SciPy solvers such as fsolve instead of SymPy. SymPy is not the right tool for finding a numerical solution to an equation full of floating-point coefficients. It is intended for symbolic mathematics, and symbolic math and floating-point numbers do not mix very well.
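
This can also be cross-checked purely numerically, without SymPy at all. The sketch below uses NumPy's np.roots on the coefficient differences (NumPy rather than SciPy's fsolve, just to keep the example dependency-light); the coefficients are copied from the question:

```python
import numpy as np

# Coefficients of H_I(P) - H_II(P), highest power first
coeffs = [1.23485231253e-06 - 2.58697027372e-06,
          -9.41529726451e-05 - (-0.00013455013645),
          0.014460465765 - 0.014234188877,
          -941.254840173 - (-941.254313412)]

# Numerical roots of the cubic; keep only (numerically) real ones
roots = np.roots(coeffs)
real_roots = sorted(float(r.real) for r in roots if abs(r.imag) < 1e-9)
print(real_roots)
```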


Source: https://habr.com/ru/post/1274250/

