Optimization of a simple pinball solver

I am trying to solve a programming problem and have only partially succeeded; I need to optimize my solution. First, the parameters:

  • You are given some line segments int x1, y1, x2, y2.
  • You are given the starting point int px, py = INT_MAX for your ball.
  • You must simulate the ball dropping from px, py straight down (toward lower y values), as sketched in code after this list:
    • When it hits a line, it rolls along that line to the line's lower endpoint.
    • When it no longer intersects any line, print px.
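
To make the geometry concrete, here is a minimal sketch of that single-step rule in C++. The names (Seg, heightAt, hits) and the convention that (xbot, ybot) is the lower endpoint are my own assumptions, not part of the problem statement:

    #include <algorithm>

    // Endpoints stored so that (xbot, ybot) is the lower one.
    struct Seg { int xtop, ytop, xbot, ybot; };

    // Height of the (non-vertical) segment above x: y = k*x + m.
    double heightAt(const Seg& s, double x) {
        double k = double(s.ytop - s.ybot) / (s.xtop - s.xbot);
        return s.ybot + k * (x - s.xbot);
    }

    // A ball falling along x == px from height py hits this segment iff
    // the segment spans px horizontally and crosses x == px below py.
    bool hits(const Seg& s, double px, double py) {
        return std::min(s.xtop, s.xbot) <= px && px <= std::max(s.xtop, s.xbot)
            && heightAt(s, px) < py;
    }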

So here is what I came up with (a full sketch in code follows this list):

  • Read the lines into a list of tuples int xtop, ytop, xbot, ybot.
  • Sort the lines by ybot, highest first. This lets us permanently skip lines the ball has already passed.
  • Maintain an index imin such that lines[i].ybot >= py for all i < imin, and scan from imin until an intersection is found:
    • Compute the vertical intersection yi = k·px + m (the line y = kx + m evaluated at x = px).
    • Set px, py = lines[i].{x,y}bot.
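
Putting those steps together, the approach might look like the following sketch (assuming non-vertical segments; Seg and heightAt repeat the earlier sketch, and solve and the remaining names are mine):

    #include <algorithm>
    #include <climits>
    #include <vector>

    struct Seg { int xtop, ytop, xbot, ybot; };   // (xbot, ybot) is lower

    static double heightAt(const Seg& s, double x) {
        double k = double(s.ytop - s.ybot) / (s.xtop - s.xbot);
        return s.ybot + k * (x - s.xbot);
    }

    static int solve(std::vector<Seg> lines, double px) {
        // Sort by ybot, highest first: once a line's lowest point is at or
        // above the ball, it can never be hit again and stays skipped.
        std::sort(lines.begin(), lines.end(),
                  [](const Seg& a, const Seg& b) { return a.ybot > b.ybot; });

        double py = double(INT_MAX);
        std::size_t imin = 0;
        for (;;) {
            // advance imin past lines whose lowest point is at or above py
            while (imin < lines.size() && lines[imin].ybot >= py) ++imin;

            // linear scan: highest line crossed by x == px strictly below py
            const Seg* hit = nullptr;
            double bestY = 0;
            for (std::size_t i = imin; i < lines.size(); ++i) {
                const Seg& s = lines[i];
                if (px < std::min(s.xtop, s.xbot) ||
                    px > std::max(s.xtop, s.xbot))
                    continue;
                double yi = heightAt(s, px);      // yi = k*px + m
                if (yi < py && (!hit || yi > bestY)) { bestY = yi; hit = &s; }
            }
            if (!hit) break;        // the ball falls past everything
            px = hit->xbot;         // roll to the lower endpoint
            py = hit->ybot;
        }
        return int(px);
    }

Each bounce can rescan nearly all remaining lines, which is exactly where the quadratic behavior below comes from.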

The problem is that this has time complexity ∈ Θ(n²) - IOW, this sucks for large inputs.

One idea is to use something like a k-d tree, but then the question is whether it would be very expensive to work out which lines go into which half-spaces.

1 answer

Treat every drop as a point-location query. The 2n segment endpoints split the x-axis into at most 2n + 1 vertical slabs. Inside one slab the segments that cross it do not intersect each other, so they are totally ordered by height; a binary search over that order finds the first segment below the ball (if any) in O(log n) time.

To build the slabs, sweep over the endpoint x-coordinates left to right, keeping a balanced search tree of the segments that currently cross the sweep line, ordered by height. Storing each slab's contents separately would take O(n²) in the worst case, but making the tree persistent keeps all 2n + 1 versions around for O(n log n) total work, so there is no blow-up.

The ball rolls off each segment at most once, so the whole simulation issues at most n + 1 queries.

Total: O(n log n).
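
Here is a rough sketch of that idea in C++, with the slabs stored naively (explicit per-slab lists, i.e. the O(n²)-space variant; the persistent-tree refinement changes the storage, not the query). All names are mine, and degenerate cases such as the ball landing exactly on a slab boundary are glossed over:

    #include <algorithm>
    #include <vector>

    struct Seg { int xtop, ytop, xbot, ybot; };   // (xbot, ybot) is lower

    static double heightAt(const Seg& s, double x) {
        double k = double(s.ytop - s.ybot) / (s.xtop - s.xbot);
        return s.ybot + k * (x - s.xbot);
    }

    struct Slabs {
        std::vector<double> xs;               // slab boundaries, sorted
        std::vector<std::vector<Seg>> slab;   // segments crossing each slab,
                                              // sorted bottom to top
        void build(const std::vector<Seg>& all) {
            for (const Seg& s : all) {
                xs.push_back(s.xtop);
                xs.push_back(s.xbot);
            }
            std::sort(xs.begin(), xs.end());
            xs.erase(std::unique(xs.begin(), xs.end()), xs.end());
            slab.assign(xs.size() + 1, {});
            for (std::size_t i = 0; i + 1 < xs.size(); ++i) {
                double mid = (xs[i] + xs[i + 1]) / 2;
                for (const Seg& s : all)
                    if (std::min(s.xtop, s.xbot) <= mid &&
                        mid <= std::max(s.xtop, s.xbot))
                        slab[i + 1].push_back(s);
                // segments inside one slab do not cross, so ordering them by
                // height at the slab midpoint is a consistent vertical order
                std::sort(slab[i + 1].begin(), slab[i + 1].end(),
                          [mid](const Seg& a, const Seg& b) {
                              return heightAt(a, mid) < heightAt(b, mid);
                          });
            }
        }

        // Highest segment strictly below (px, py), or nullptr -- O(log n).
        // A px exactly on a boundary is assigned to the slab on its right.
        const Seg* firstBelow(double px, double py) const {
            std::size_t i =
                std::upper_bound(xs.begin(), xs.end(), px) - xs.begin();
            const std::vector<Seg>& v = slab[i];
            const Seg* best = nullptr;
            std::size_t lo = 0, hi = v.size();
            while (lo < hi) {                 // binary search over the order
                std::size_t m = (lo + hi) / 2;
                if (heightAt(v[m], px) < py) { best = &v[m]; lo = m + 1; }
                else                           hi = m;
            }
            return best;
        }
    };

With that structure the simulation is just repeated queries: while firstBelow(px, py) returns a segment, move to its lower endpoint. The ball rolls off each segment at most once, so there are at most n + 1 queries of O(log n) each.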


Source: https://habr.com/ru/post/1527535/

