Architectural GUI Implementation Tips

So, I'm building a GUI with an SVG editor inside the application I'm working on. Here are some examples of the logic it needs:

  • If the user right-clicks on the canvas, a new node must be created, and subsequent nodes must be connected to it by lines, forming a polygon.
  • If the user presses the left button on a node, I must move the entire set of polygons accordingly, following the mouse position.
  • The user can delete nodes.
  • Selected nodes must be colored differently.
  • The user can select multiple nodes by holding SHIFT and clicking on them.

Etc.

I already implemented all of this, but I didn't like the end result, mainly because I had to use a lot of flags for state management (mouse clicked && left button && not moving yet? do this), and of course the code could be more elegant (a simplified sketch of what I mean follows the list of options below). So I did a little research and came up with the following options:

  • Pipeline: I would create classes that handle each logical event separately, and use a priority order to decide what is handled first, what gets passed on, and how the event propagates to the subsequent elements of the pipeline.

  • MVC: This is the most common answer, but how I could use it to make the code cleaner is still very vague to me.

  • State machine: That would be nice, but managing the granularity of the state machine seems tricky.
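
For illustration, a minimal sketch (TypeScript, with made-up names) of the kind of flag-based handling I mean: every handler has to consult and reset the right combination of booleans.

    class EditorCanvas {
      private leftButtonDown = false;
      private shiftDown = false;
      private isDragging = false;
      private selectedNodeIds = new Set<string>();

      onMouseDown(e: MouseEvent): void {
        this.leftButtonDown = e.button === 0;
        this.shiftDown = e.shiftKey;
      }

      onNodeClicked(nodeId: string): void {
        if (!this.shiftDown) this.selectedNodeIds.clear(); // plain click replaces the selection
        this.selectedNodeIds.add(nodeId);
      }

      onMouseMove(e: MouseEvent): void {
        // "left button down && not moving yet && something selected? start a drag"
        if (this.leftButtonDown && !this.isDragging && this.selectedNodeIds.size > 0) {
          this.isDragging = true;
        }
        if (this.isDragging) {
          // ...move the selected nodes by (e.movementX, e.movementY)...
        }
      }

      onMouseUp(_e: MouseEvent): void {
        // every branch has to remember to reset the right flags
        this.leftButtonDown = false;
        this.isDragging = false;
      }
    }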

So I'm asking the SO gurus for tips on how to write better and happier code.

+6
4 answers

I propose separating the logic that maps interface inputs to a specific operation into dedicated objects. Let's call them Sensor objects. Without knowing your implementation language I'll keep this general, but you should get the idea.

    OperationSensor
        + OnKeyDown
        + OnKeyPress
        + OnKeyUp
        + OnLeftMouseDown
        + OnLeftMouseUp
        + OnNodeSelect
        + OnNodeDeselect
        + OnDragStart
        + OnDragStop

Let's say you have a central class that consolidates all of the various user interface inputs, UiInputManager. It uses language-specific mechanisms to listen for keyboard and mouse input. It also detects basic gestures, for example recognizing that a mouse press followed by movement is a logical "drag".

    UiInputManager
        // event listeners
        + keyboard_keydownHandler
        + keyboard_keyupHandler
        + mouse_leftdownHandler
        + mouse_rightdownHandler
        // active sensor list, can be added to or removed from
        + Sensors

UiInputManager is NOT responsible for knowing which operations these inputs trigger. It simply notifies its sensors, in whatever way suits your language:

    foreach sensor in Sensors
        sensor.OnDragStarted()

or, if the sensors listen for logical events raised by the UiInputManager:

    RaiseEvent DragStarted
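
To make that concrete, here is a rough TypeScript sketch of the plumbing (the names and signatures are mine, purely illustrative):

    interface Command {
      execute(): void;
    }

    // Base sensor: subclasses override only the notifications they care about.
    abstract class OperationSensor {
      constructor(protected issueCommand: (cmd: Command) => void) {}
      onKeyDown(_e: KeyboardEvent): void {}
      onLeftMouseDown(_e: MouseEvent): void {}
      onLeftMouseUp(_e: MouseEvent): void {}
      onRightMouseDown(_e: MouseEvent): void {}
      onDragStarted(_e: MouseEvent): void {}
      onDragStopped(_e: MouseEvent): void {}
      onOperationCompleted(): void {}
    }

    class UiInputManager {
      private sensors: OperationSensor[] = [];

      addSensor(s: OperationSensor): void { this.sensors.push(s); }
      removeSensor(s: OperationSensor): void {
        this.sensors = this.sensors.filter(x => x !== s);
      }

      // Raw DOM listeners call these; the manager fans each logical event out to every sensor.
      keyDown(e: KeyboardEvent): void {
        for (const s of this.sensors) s.onKeyDown(e);
      }
      dragStarted(e: MouseEvent): void {
        for (const s of this.sensors) s.onDragStarted(e);
      }
      // ...same pattern for the remaining notifications...

      // Called when an issued command has been processed (see below).
      operationCompleted(): void {
        for (const s of this.sensors) s.onOperationCompleted();
      }
    }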

You now have the plumbing to route input into subclasses of OperationSensor. Each OperationSensor contains the logic for only one operation. If it finds that its operation's criteria are met, it creates the corresponding Command object and passes it back.

    // Ctrl + zooms in, Ctrl - zooms out
    ZoomSensor : OperationSensor
        override OnKeyDown {
            if keyDown.Char = '+' && keyDown.IsCtrlDepressed
                base.IssueCommand(new ZoomCommand(changeZoomBy := 10))
            elseif keyDown.Char = '-' && keyDown.IsCtrlDepressed
                base.IssueCommand(new ZoomCommand(changeZoomBy := -10))
        }
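
In TypeScript the same sensor might look roughly like this, building on the sketch above (ZoomCommand is a hypothetical command; how zooming is actually applied is up to your application):

    class ZoomCommand implements Command {
      constructor(private changeZoomBy: number) {}
      execute(): void { /* ...adjust the SVG viewBox/scale by this.changeZoomBy... */ }
    }

    // Ctrl+'+' zooms in, Ctrl+'-' zooms out.
    class ZoomSensor extends OperationSensor {
      override onKeyDown(e: KeyboardEvent): void {
        if (!e.ctrlKey) return;
        if (e.key === '+') this.issueCommand(new ZoomCommand(10));
        else if (e.key === '-') this.issueCommand(new ZoomCommand(-10));
      }
    }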

I would recommend having the command objects flow from the sensors back to the UiInputManager, which can then hand them off to the command-processing subsystem. This gives the manager a chance to notify the sensors when the operation completes, allowing them to reset their internal state if necessary.
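
Continuing the sketch above, the wiring could look something like this; processCommand stands in for whatever command-processing subsystem (execution, undo stack, etc.) you already have:

    class EditorController {
      private manager = new UiInputManager();

      constructor(private processCommand: (cmd: Command) => void) {
        // Every sensor gets a callback that routes its commands back through here.
        const issue = (cmd: Command) => this.onCommandIssued(cmd);
        this.manager.addSensor(new ZoomSensor(issue));
        // ...register the other sensors the same way...
      }

      private onCommandIssued(cmd: Command): void {
        this.processCommand(cmd);          // hand off to the command-processing subsystem
        this.manager.operationCompleted(); // give every sensor a chance to reset its state
      }
    }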

Multi-step operations can be handled in two different ways. You can either implement a small state machine inside the OperationSensor, or have a "step 1" sensor create a "step 2" sensor and add it to the list of active sensors, possibly even removing itself from the list. When "step 2" completes, it can re-add the "step 1" sensor and remove itself.
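
The second approach could look roughly like this (a hypothetical polygon-drawing example built on the same base classes; the Escape key is just an arbitrary "finish the polygon" trigger):

    class CreateNodeCommand implements Command {
      constructor(private x: number, private y: number) {}
      execute(): void { /* ...add a node to the model at (this.x, this.y)... */ }
    }

    // Step 2: a polygon is being built; right-clicks append nodes to it.
    class PolygonInProgressSensor extends OperationSensor {
      constructor(issue: (cmd: Command) => void,
                  private manager: UiInputManager,
                  private previous: OperationSensor) {
        super(issue);
      }
      override onRightMouseDown(e: MouseEvent): void {
        this.issueCommand(new CreateNodeCommand(e.offsetX, e.offsetY)); // append the next node
      }
      override onKeyDown(e: KeyboardEvent): void {
        if (e.key === 'Escape') {                  // polygon finished
          this.manager.addSensor(this.previous);   // re-add "step 1"
          this.manager.removeSensor(this);         // ...and retire "step 2"
        }
      }
    }

    // Step 1: waits for the right-click that starts a new polygon.
    class PolygonStartSensor extends OperationSensor {
      constructor(issue: (cmd: Command) => void, private manager: UiInputManager) {
        super(issue);
      }
      override onRightMouseDown(e: MouseEvent): void {
        this.issueCommand(new CreateNodeCommand(e.offsetX, e.offsetY)); // first node of the polygon
        this.manager.addSensor(
          new PolygonInProgressSensor(this.issueCommand, this.manager, this));
        this.manager.removeSensor(this);           // step aside until the polygon is closed
      }
    }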

+6

A little late, but I would like to add that the general pattern for this is the Mediator pattern, in which you move the complexity of the interactions between the different nodes into a separate mediator class (for example, a ConnectionCreator class). See the Design Patterns book by Gamma et al.: http://ebookbrowse.com/addison-wesley-gamma-helm-johnson-vlissides-design-patterns-elements-of-reusable-object-oriented-pdf-d11349017
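
For what it's worth, a minimal sketch of that idea (the names are mine, not from the book): nodes only report clicks to the mediator, and the mediator owns the connection logic.

    interface GraphNode {
      id: string;
    }

    // Mediator: nodes don't know about each other; this class decides
    // when two clicks amount to "create a connection".
    class ConnectionCreator {
      private pending?: GraphNode;

      nodeClicked(node: GraphNode): void {
        if (!this.pending) {
          this.pending = node;               // first endpoint chosen
        } else if (this.pending.id !== node.id) {
          this.connect(this.pending, node);  // second endpoint: create the edge
          this.pending = undefined;
        }
      }

      private connect(a: GraphNode, b: GraphNode): void {
        console.log(`connecting ${a.id} -> ${b.id}`);
      }
    }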

+3

Martin Fowler has a good write-up on MVC and related patterns. You could also take a look at the Command pattern to let the user interface elements know how they should behave (i.e. whether clicking on a node should move it, delete it, etc.).
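
As a tiny illustration of the Command idea (hypothetical names): each gesture just produces a command object, and the rest of the code only knows how to execute, record, and possibly undo commands.

    interface EditorCommand {
      execute(): void;
      undo(): void;
    }

    class MoveNodeCommand implements EditorCommand {
      constructor(private node: { x: number; y: number },
                  private dx: number, private dy: number) {}
      execute(): void { this.node.x += this.dx; this.node.y += this.dy; }
      undo(): void    { this.node.x -= this.dx; this.node.y -= this.dy; }
    }

    // A simple undo stack: the UI decides which command a click maps to,
    // everything else just runs whatever command it is handed.
    const history: EditorCommand[] = [];
    function run(cmd: EditorCommand): void {
      cmd.execute();
      history.push(cmd);
    }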

+2

For user interfaces, MVC is pretty much the standard these days. Very briefly, the M (model) holds the state, the V (view) displays the visual elements, and the C (controller) dispatches incoming user actions such as mouse clicks. The goal is that the model does not care directly about presentation, with the possible exception of firing events.

I would probably put the smarts in the model. The model would know when a node is selected, which neighboring nodes form a polygon, any state machines, and so on. This design gives you several advantages. It does not depend on the UI implementation details, so you can make big changes to the presentation without disturbing the core functionality. It also makes unit testing easier.
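
A rough sketch of what "smarts in the model" could look like (purely illustrative names): the model owns selection and polygon membership and fires change events; the view only redraws in response.

    type Listener = () => void;

    // Model: owns the data and the "smarts" (selection, polygon membership),
    // and knows nothing about SVG or the DOM beyond firing change events.
    class DiagramModel {
      private nodes = new Map<string, { x: number; y: number; polygonId?: string }>();
      private selected = new Set<string>();
      private listeners: Listener[] = [];

      onChange(l: Listener): void { this.listeners.push(l); }
      private changed(): void { for (const l of this.listeners) l(); }

      addNode(id: string, x: number, y: number, polygonId?: string): void {
        this.nodes.set(id, { x, y, polygonId });
        this.changed();
      }

      toggleSelect(id: string, additive: boolean): void {
        if (!additive) this.selected.clear();   // plain click: replace the selection
        if (this.selected.has(id)) this.selected.delete(id);
        else this.selected.add(id);
        this.changed();
      }

      isSelected(id: string): boolean { return this.selected.has(id); }

      nodesInPolygon(polygonId: string): string[] {
        return Array.from(this.nodes.entries())
          .filter(([, n]) => n.polygonId === polygonId)
          .map(([id]) => id);
      }
    }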

0

Source: https://habr.com/ru/post/902166/

