Bundle block adjustment flow

I am working on setting up a bundle block adjustment to find

  • X, Y, Z coordinates of the tie points
  • Corrected values of the camera parameters (exterior orientation, etc.)
  • Corrected measurement values

In my opinion, the bundle block adjustment process is carried out in the following steps (camera calibration given):

  • Collect tie points (x, y for each pair of overlapping images) and ground control points (image x, y and the corresponding ground X, Y, Z for each image)
  • Compute initial exterior orientation parameters (camera pose) for each view
  • Compute an initial real-world position for each tie point using the camera poses
  • Run a sparse bundle adjustment step using all of these initial values and the other parameters as inputs
  • Use the output of the sparse bundle adjustment as the accurate real-world positions, exterior orientation parameters and corrected measurements
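Conceptually, the adjustment step above minimizes reprojection error. Here is a minimal pure-Python sketch of that residual for a single camera; the pinhole model and all names are my own illustration, not taken from any particular library:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project(point3d, rotation, translation, focal, cx, cy):
    """Project a world point into an image with a simple pinhole camera.

    rotation is a 3x3 matrix (list of rows), translation a 3-vector;
    focal, cx, cy are the known interior orientation parameters.
    """
    xc = dot(rotation[0], point3d) + translation[0]
    yc = dot(rotation[1], point3d) + translation[1]
    zc = dot(rotation[2], point3d) + translation[2]
    return (focal * xc / zc + cx, focal * yc / zc + cy)

def reprojection_residuals(points3d, observations, rotation, translation,
                           focal, cx, cy):
    """Residuals (observed minus projected image coordinates) that the
    adjustment drives toward zero, shown here for one camera only."""
    residuals = []
    for p3d, (u, v) in zip(points3d, observations):
        pu, pv = project(p3d, rotation, translation, focal, cx, cy)
        residuals.extend([u - pu, v - pv])
    return residuals
```

A sparse bundle adjustment package minimizes the sum of squares of residuals like these over all camera poses and 3D points simultaneously, exploiting the sparsity of the Jacobian.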

The first thing I want to ask is whether this flow is correct. There are many structure-from-motion estimation methods out there, so I cannot be sure about it.

As I look through various resources, I find that there are libraries that perform parts of the bundle block adjustment operation. For each step:

  • Image processing libraries such as OpenCV can be used to collect tie points automatically.
  • cvFindExtrinsicCameraParams2 can be used for spatial resection (but it requires 4 points, while for bundle block adjustment it is mentioned that 3 ground control points per view are enough. Should I use a different method instead, for example pose estimation from stereo images?)
  • Real-world positions can be calculated using OpenCV's triangulation and projection methods.
  • SBA or SSBA is suitable for this operation.
  • N/A
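For the triangulation step, OpenCV's triangulatePoints uses a linear (DLT) method; to illustrate what an initial 3D position looks like, here is a minimal pure-Python midpoint triangulation from two viewing rays. This is my own sketch of the general technique, not OpenCV's implementation; it assumes the rays are already expressed in world coordinates and are not parallel:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def midpoint_triangulate(c1, d1, c2, d2):
    """Approximate a 3D point from two viewing rays (midpoint method).

    c1, c2: camera centers; d1, d2: ray directions through the matched
    image points. Finds the closest points on the two rays and returns
    their midpoint. Degenerate (parallel) rays are not handled.
    """
    w = [c1[i] - c2[i] for i in range(3)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b  # zero when the rays are parallel
    s = (b * e - c * d) / denom  # parameter along ray 1
    t = (a * e - b * d) / denom  # parameter along ray 2
    p1 = [c1[i] + s * d1[i] for i in range(3)]
    p2 = [c2[i] + t * d2[i] for i in range(3)]
    return [(p1[i] + p2[i]) / 2.0 for i in range(3)]
```

With noise-free rays the two closest points coincide and the result is exact; with real measurements the midpoint is only a starting value, which is exactly what the subsequent bundle adjustment refines.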

My other question is: if the flow mentioned above is right, are the comparable libraries enough to implement the entire flow? (A recommendation for each piece would be even better.)

I am new to this area, so I would appreciate any help on this matter. Thanks!

1 answer

You have described the standard stereo approach. Rather than using the computer vision terms (extrinsic, intrinsic), I suggest you search using the terms interior and exterior orientation. This is a good approach if you have a finite number of overlapping images, and it has the advantage of some well-established error estimation methods.
Here is the basic math:

http://itee.uq.edu.au/~elec4600/elec4600_lectures/1perpage/uq1.pdf
http://itee.uq.edu.au/~elec4600/elec4600_lectures/1perpage/uq2.pdf

  • cvFindExtrinsicCameraParams2 can be used for spatial resection (but it requires 4 points, while for bundle block adjustment it is mentioned that 3 ground control points per view are enough)

The reason cvFindExtrinsicCameraParams2 requires four control points is that with only three the pose is not uniquely determined: three points leave the solution ambiguous (the classic three-point resection problem admits multiple poses), and a fourth point is needed to disambiguate. If you do not have enough control, you may need an alternative approach (or an additional sensor) to estimate the initial camera pose.
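For reference, here is the counting argument behind that, written with the standard collinearity equations from photogrammetry (notation is mine):

```latex
% Collinearity equations: each ground control point (X, Y, Z) observed at
% image coordinates (x, y) contributes two equations in the six exterior
% orientation unknowns (X_c, Y_c, Z_c, \omega, \varphi, \kappa):
x - x_0 = -f\,\frac{r_{11}(X - X_c) + r_{12}(Y - Y_c) + r_{13}(Z - Z_c)}
               {r_{31}(X - X_c) + r_{32}(Y - Y_c) + r_{33}(Z - Z_c)}
\qquad
y - y_0 = -f\,\frac{r_{21}(X - X_c) + r_{22}(Y - Y_c) + r_{23}(Z - Z_c)}
               {r_{31}(X - X_c) + r_{32}(Y - Y_c) + r_{33}(Z - Z_c)}
```

Three control points give 2 × 3 = 6 equations for the 6 unknowns, so the system is square, but because the equations are nonlinear it can still have several distinct solutions; the fourth point (or any additional observation) selects the correct one.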


Source: https://habr.com/ru/post/910166/

