You are trying to implement a subset of the affine transformations of the plane. In your case you only need to combine translation and scaling (zooming) of the drawing plane. The full machinery of planar affine transformations involves 3×3 matrices, but for now I will only give the minimum needed for your problem. Feel free to research the topic online; there is plenty of literature on the subject.
First of all, let's declare a 2D vector class with a few operators:
    class vector2D {
    protected:
        float x, y;

    public:
        vector2D(const vector2D &v) : x(v.x), y(v.y) { }
        vector2D(float x, float y) : x(x), y(y) { }

        vector2D operator +(const vector2D &v) const { return vector2D(x + v.x, y + v.y); }
        vector2D operator -(const vector2D &v) const { return vector2D(x - v.x, y - v.y); }
        vector2D operator *(float f) const { return vector2D(x * f, y * f); }
        bool operator ==(const vector2D &v) const { return x == v.x && y == v.y; }

        const vector2D &operator =(const vector2D &v) { x = v.x; y = v.y; return *this; }
    };
Feel free to use your own vector class if you already have one. Note that this interface is not necessarily optimal; I want to focus on the algorithms rather than on performance.
Now let's look at the viewing transformations.
We will call zf the scaling factor, trans the translation part of the transformation, and origin the position of the view origin within the window. You mentioned that your coordinate system is centered in the window, so the center of the window will be the origin. The conversion from the view coordinate system to window coordinates can be decomposed into two separate stages: one that scales and translates the displayed objects, which we will call the modelview, and one that translates from view coordinates to window coordinates, which we will call the projection. If you are familiar with 3D rendering, you can see this as a mechanism similar to the one used by OpenGL.
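For the snippets below I will assume this state is kept in a few globals; the names match the ones used in this answer, but the initial values are only an example (for a view centered in the window, origin would be (width / 2, height / 2)):

    // Transformation state shared by the functions below (example values).
    float    zf     = 1.0f;                  // current zoom factor
    vector2D trans  = vector2D(0.0f, 0.0f);  // current translation, in view coordinates
    vector2D origin = vector2D(0.0f, 0.0f);  // window position of the view origin, e.g. (width / 2, height / 2)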
The projection can be described as a simple translation from the view coordinate system (centered on origin) to window coordinates (whose origin is the upper left corner of the window).
    vector2D project(const vector2D &v) {
        return v + origin;
    }
The modelview combines the translation and the scaling (for the moment the UI code only drives the scaling, at arbitrary points of the view).
    vector2D modelview(const vector2D &v) {
        return trans + (v * zf);
    }
I will let you organize these functions and the corresponding data (zf, origin, trans) in whatever way is most convenient for you.
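Just to make the pipeline explicit, the two stages compose as follows (worldToScreen is merely a convenience name I am introducing here):

    // Full transform from model coordinates to window coordinates:
    // scale and translate first (modelview), then shift into window space (project).
    vector2D worldToScreen(const vector2D &v) {
        return project(modelview(v));
    }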
Next, let's see how the various data should be changed by the user interface.
Basically, you need to translate the coordinates of each point from the coordinate system centered on your view to the one centered on the zoom point, scale them there, and then translate them back to the view-centered system. Every object you want to display must undergo this transformation.
The formula is then:

    v' = (v - zp) * s + zp

where zp is the zoom point, s is the scaling factor, v is the coordinate of the point in the view-centered system, and v' is the resulting zoomed point.
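As a quick sanity check of that formula, the zoom point itself must stay fixed, since (zp - zp) * s + zp = zp. In code it could look like this (zoomAboutPoint is just an illustrative helper name):

    // Scale a point v (given in view coordinates) about the zoom point zp by factor s.
    vector2D zoomAboutPoint(const vector2D &v, const vector2D &zp, float s) {
        return (v - zp) * s + zp;
    }
    // Example: zooming by s = 2 about zp = (1, 0) maps (2, 0) to (3, 0),
    // while (1, 0) itself stays where it is.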
If you want to zoom repeatedly at different places, you need to combine the existing transformation with each new scaling factor and center:
If c is the new scaling center (in view coordinates), t is the current translation, z is the current scaling factor, and z2 is the new scaling factor, then the new global transformation is:

    t' = t * z2 + c * (1 - z2)
    z' = z * z2
This comes from translating the coordinate system to the scaling center, applying the new scaling to the existing transformation, and translating back: t' = (t - c) * z2 + c, which expands to the formula above.
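If you want to convince yourself, here is a small check (checkZoomUpdate is a name I am introducing, and the exact floating-point comparison is only for illustration): the world point currently displayed at the zoom center must still be displayed there after the update.

    // Sanity check for the composition formulas above.
    bool checkZoomUpdate(const vector2D &t, float z, const vector2D &c, float z2) {
        vector2D w  = (c - t) * (1.0f / z);      // world point currently shown at c
        vector2D t2 = t * z2 + c * (1.0f - z2);  // t' = t * z2 + c * (1 - z2)
        float    zn = z * z2;                    // z' = z * z2
        return t2 + w * zn == c;                 // should hold, up to rounding
    }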
As for the scaling center, be careful: the mouse input will be in window coordinates, and must therefore be converted back to your view coordinate system (centered on origin). The following unproject function does just that:
    vector2D unproject(const vector2D &v) {
        return v - origin;
    }
Finally, here is a simple implementation of a function that updates the modelview transformation from a mouse-wheel input:
    void onMouseWheel(float mouseX, float mouseY, bool zoom_in) {
        float z2 = zoom_in ? 1.1f : 1.0f / 1.1f;
        vector2D m(mouseX, mouseY);
        // Convert the mouse position from window coordinates to view coordinates.
        vector2D c = unproject(m);
        // Compose the new zoom with the current transformation:
        // t' = t * z2 + c * (1 - z2),  z' = z * z2.
        trans = trans * z2 + c * (1.0f - z2);
        zf = zf * z2;
    }
As you can see, this is more or less generic code that you will have to adapt to the specifics of the Win32 API.
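For instance (a minimal, untested sketch, assuming a standard window procedure with the usual hwnd, wParam and lParam parameters, and #include &lt;windowsx.h&gt; for GET_X_LPARAM / GET_Y_LPARAM), hooking it up could look roughly like this:

    // Inside the window procedure's message switch. WM_MOUSEWHEEL reports the
    // cursor position in screen coordinates, so convert it to client coordinates
    // before passing it to onMouseWheel.
    case WM_MOUSEWHEEL: {
        POINT pt = { GET_X_LPARAM(lParam), GET_Y_LPARAM(lParam) };
        ScreenToClient(hwnd, &pt);
        bool zoom_in = GET_WHEEL_DELTA_WPARAM(wParam) > 0;
        onMouseWheel((float)pt.x, (float)pt.y, zoom_in);
        InvalidateRect(hwnd, NULL, TRUE);  // repaint with the updated transform
        return 0;
    }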