My game uses Unity 5's new UI with a Canvas. The game itself receives touches for shooting via OnMouseDown() functions on several GameObjects with 2D colliders marking the touchable areas, and I can set the priority of the various touchable areas by changing the GameObjects' position.z.
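For context, here is a minimal sketch of the kind of setup described above, assuming a GameObject with a 2D collider that fires on tap/click; the class and method names (ShootArea, Shoot) are hypothetical placeholders, not my actual code:

```csharp
using UnityEngine;

// Sketch: a touchable area backed by a 2D collider that triggers shooting on tap/click.
[RequireComponent(typeof(BoxCollider2D))]
public class ShootArea : MonoBehaviour
{
    // Unity calls this when a click/touch raycast hits this object's collider.
    void OnMouseDown()
    {
        Shoot();
    }

    void Shoot()
    {
        Debug.Log("Shoot triggered on " + gameObject.name); // placeholder action
    }
}
```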
However, after adding the UI, any touch on a UI element (button, panel, etc.) not only triggers the UI element (where applicable) but also passes through it and triggers the touch areas underneath. It is very odd that pressing a button both presses the button and fires the shooting action behind the (visual) UI layer.
One way I can think of is to add a collider to the UI elements and, at runtime, convert their position and size to world space and adjust the position.z value so the collider absorbs all touches that land on the UI (roughly sketched below). However, this seems very ugly and fragile.
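Roughly, the workaround I mean would look something like this hypothetical script (UITouchBlocker, uiElement, and blockingZ are illustrative names, not real code, and it assumes the Canvas corners map sensibly into the camera's world space):

```csharp
using UnityEngine;

// Sketch of the "ugly" workaround: stretch a BoxCollider2D over a UI element's
// world-space rect and place it at a z in front of the gameplay touch areas,
// so it intercepts the click/touch before any ShootArea behind it.
[RequireComponent(typeof(BoxCollider2D))]
public class UITouchBlocker : MonoBehaviour
{
    public RectTransform uiElement;  // the panel/button to cover (assigned in the Inspector)
    public float blockingZ = -1f;    // z closer to the camera than any touchable area

    void LateUpdate()
    {
        // Get the UI element's four world-space corners and fit the collider to them.
        Vector3[] corners = new Vector3[4];
        uiElement.GetWorldCorners(corners);

        Vector3 center = (corners[0] + corners[2]) * 0.5f;
        Vector2 size = new Vector2(corners[2].x - corners[0].x, corners[2].y - corners[0].y);

        transform.position = new Vector3(center.x, center.y, blockingZ);
        GetComponent<BoxCollider2D>().size = size;
    }

    // Empty handler: being the nearest collider, it receives the click and
    // nothing behind it does.
    void OnMouseDown() { }
}
```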
Is there an elegant way to let all UI elements (mainly panels) swallow the touches? Thanks!