How does a game engine that models objects as “component collections” work at run time?

I am writing a lightweight game engine, and while doing some research I came across a number of compelling articles on implementing game objects using a “collection of components” model rather than an “inheritance from concrete classes” model. There are many advantages:

  • objects can be composed using data-oriented design methods, allowing designers to come up with new objects without involving a programmer;
  • compile-time dependencies are usually smaller, allowing code to compile faster;
  • the engine as a whole becomes more general;
  • unforeseen consequences of changing a concrete class somewhere in an inheritance hierarchy can be avoided;
  • etc.

But there are parts of the system that remain opaque to me. The first is how the components of a single object communicate with each other. For example, suppose an object that models a bullet in a game is implemented in terms of these components:

  • a bit of geometry for visual representation
  • position in the world
  • volume used to collide with other objects
  • other things

During rendering, the geometry must know the object’s position in the world in order to display itself correctly, but how does it find that position among all its sibling components in the object? And during the update, how does the collision volume find the object’s position in the world to test for intersections with other objects?

I think my question reduces to this: we have objects that consist of several components, each of which implements a bit of functionality. What is the best way to make them work together at runtime?

+4
7 answers

Composite architectures typically rely on interfaces. A component is then an implementation plus data, which lets developers reuse an existing implementation with different data, e.g. using the same projectile code once for a rocket and once for an arrow. The flexibility lies in being able to configure such combinations outside the actual runtime.

At runtime, objects receive and provide the necessary information through interfaces. For example, an object would be given a reference to an origin and a direction in order to position itself in the world. For the actual drawing, I would assume that some kind of graphics context is passed in, and that the framework takes care of setting up the correct offset/projection for the current object.
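
A minimal sketch of how that could look. All the names here (Vector3, Model, GraphicsContext, Positioned) are my own illustrations, not from any particular engine:

    // Illustrative placeholder types, not from a real engine.
    class Vector3 { float x, y, z; }
    class Model { /* mesh data */ }

    interface GraphicsContext {
        void drawModel(Model model, Vector3 origin, Vector3 direction);
    }

    // The interface through which any component can ask "where am I?".
    interface Positioned {
        Vector3 getOrigin();
        Vector3 getDirection();
    }

    // One projectile visual, reused with different data: construct it
    // once with a rocket mesh and once with an arrow mesh.
    class ProjectileVisual {
        private final Positioned position; // provided via the interface
        private final Model model;

        ProjectileVisual(Positioned position, Model model) {
            this.position = position;
            this.model = model;
        }

        void draw(GraphicsContext ctx) {
            ctx.drawModel(model, position.getOrigin(), position.getDirection());
        }
    }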

+1

Another serious reason to implement this strategy is the ability to compose an object’s behavior from behavior components, allowing you to reuse behavior in multiple game objects.

So, for example, you have a game object base class with these properties: burnable, movable, live. By default, all of them are null references. If you want your object to be burnable, you set:

object.burnable = new Burnable(object); 

Now, when you want to burn the object, use:

 if (object.burnable != null) { object.burnable.burn(); } 

And the burnable behavior modifies the game object in whatever way burning should affect it.
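
Filled out as a compilable sketch (class and method names beyond Burnable are my own choices):

    // Behavior slots: null means "this object does not have the behavior".
    class GameObject {
        Burnable burnable; // null by default: object cannot burn
        Movable movable;   // null by default: object cannot move

        void ignite() {
            // Only objects that were given a Burnable behavior react.
            if (burnable != null) {
                burnable.burn();
            }
        }
    }

    class Burnable {
        private final GameObject owner;

        Burnable(GameObject owner) { this.owner = owner; }

        void burn() {
            // Change the owner however burning should affect it:
            // swap its sprite, reduce hit points, spawn particles, ...
        }
    }

    class Movable { /* analogous movement behavior */ }

A crate would get crate.burnable = new Burnable(crate) at creation time, while a rock would leave the slot null, so ignite() becomes a no-op for it.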

+1

I have seen (and tried) several ways of implementing this:

1) Components do not exist in a vacuum, but are collected in an “entity” (or “game object”). Every component holds a reference to its entity, so your collision component can do something like GetEntity()->GetComponent("Position")->GetCoords() (with null checks and so on as appropriate; the details depend on the language and framework).

In this case it is sometimes convenient to place some common data directly in the entity (position, unique identifier, active/inactive status); there is a trade-off between building something “clean” and general, and something fast and efficient.
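
A minimal sketch of this first option. The class names and the string-keyed lookup are illustrative, not from a specific engine:

    import java.util.HashMap;
    import java.util.Map;

    // The entity owns its components and hands out back-references.
    class Entity {
        private final Map<String, Component> components = new HashMap<>();

        void addComponent(String name, Component c) {
            components.put(name, c);
            c.entity = this;
        }

        Component getComponent(String name) {
            return components.get(name); // may be null; callers must check
        }
    }

    abstract class Component {
        Entity entity; // back-reference, set by Entity.addComponent

        Entity getEntity() { return entity; }
    }

    class PositionComponent extends Component {
        float x, y, z;

        float[] getCoords() { return new float[] { x, y, z }; }
    }

    class CollisionComponent extends Component {
        void update() {
            PositionComponent pos =
                (PositionComponent) getEntity().getComponent("Position");
            if (pos != null) {
                float[] coords = pos.getCoords();
                // ... test the collision volume at coords against others ...
            }
        }
    }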

2) There is no entity, only components (this is what I use in my own lightweight game engine). In this case components must be explicitly connected to other components, so your “collision” and “graphics” components might each hold a pointer to the “position” component.
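
A sketch of this second option, reusing the PositionComponent from the previous sketch; the wiring happens once when the object is assembled:

    // No entity: the dependency is passed in explicitly at assembly time.
    class GraphicsComponent {
        private final PositionComponent position;

        GraphicsComponent(PositionComponent position) {
            this.position = position;
        }

        void render() {
            float[] coords = position.getCoords();
            // ... draw the geometry at coords; no lookup through an
            // entity is needed ...
        }
    }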

+1

I have always found Kyle Wilson’s blog an interesting source from someone who works in this field and seems to give it a lot of thought. This entry in particular may be of interest: http://gamearchitect.net/2008/06/01/an-anatomy-of-despair-aggregation-over-inheritance/ . It is not the key point of the article, but he essentially says that they (while developing Fracture) kept separate hierarchies: one of game objects, and a scene graph for the visual representation. Personally I think this is a very sound design, but I am not an expert in this area.

+1

Could you give the components a reference back to their game object?

That would make it possible to find the world position by going back to the game object and then drilling down to its world position.

0

I have also been studying this, and I have come up with a solution that I find fast and non-intrusive.

The point of a component-based design is that you can add and remove components whenever you want at runtime, with little or no coupling; components can coexist without even knowing about each other.

So how does the “render” component know where it should draw, or worse, what it should draw? My answer is that every component has access to a common table of properties stored in their container (usually the game object, or entity).

For example, suppose a game object has a movement component, a render component, and a character component.

The render component asks its container for properties named "positionx", "positiony", and "positionz" (if it is a 3D game), and then asks for a "rendermodel" property. Based on these, it draws the model returned by rendermodel at positionx, positiony, positionz. The position properties are updated by the movement component, which in turn uses a "speed" property on the container that is set by the character component based on keyboard input.

If any of these properties do not exist, they are created at the time they are requested and initialized to a valid default (for example, 0).
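
A minimal sketch of such a shared property table. The property names come from the description above; the class and method names are my own:

    import java.util.HashMap;
    import java.util.Map;

    class GameObject {
        private final Map<String, Object> properties = new HashMap<>();

        // Missing properties are created on first request and
        // initialized to the supplied default, as described above.
        @SuppressWarnings("unchecked")
        <T> T getProperty(String name, T defaultValue) {
            return (T) properties.computeIfAbsent(name, k -> defaultValue);
        }

        void setProperty(String name, Object value) {
            properties.put(name, value);
        }
    }

    class CharacterComponent {
        void handleInput(GameObject container, boolean forwardKeyDown) {
            container.setProperty("speed", forwardKeyDown ? 1.0f : 0.0f);
        }
    }

    class MovementComponent {
        void update(GameObject container, float dt) {
            float speed = container.getProperty("speed", 0.0f);
            float x = container.getProperty("positionx", 0.0f);
            container.setProperty("positionx", x + speed * dt);
        }
    }

    class RenderComponent {
        void render(GameObject container) {
            float x = container.getProperty("positionx", 0.0f);
            float y = container.getProperty("positiony", 0.0f);
            float z = container.getProperty("positionz", 0.0f);
            Object model = container.getProperty("rendermodel", null);
            // ... draw model at (x, y, z) ...
        }
    }

Note that the three components never reference each other; they only share the container.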

In my opinion, components should never query another component directly, since components should stay as generic as possible.

0

This sounds a little over-engineered; what do you gain by making position an abstract component of the object instead of a fundamental property?

But if you really want to do this, I think you could set up a dependency graph in which everything is explicitly connected. So (for example) the collision volume has a location input that is connected to the output of the position component. Take a look at the internals of Maya to see how this can work.
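
A toy sketch of that input/output wiring; using Supplier as the “plug” is my own choice here, not how Maya implements it:

    import java.util.function.Supplier;

    class PositionComponent {
        float x, y, z;

        float[] output() { return new float[] { x, y, z }; }
    }

    class CollisionVolume {
        // The "location" input; connected when the graph is assembled.
        Supplier<float[]> locationInput;

        void update() {
            float[] location = locationInput.get();
            // ... test the volume at this location against other volumes ...
        }
    }

    // Wiring, done once when the object graph is built:
    //   PositionComponent position = new PositionComponent();
    //   CollisionVolume volume = new CollisionVolume();
    //   volume.locationInput = position::output;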

But then again, IMHO, this is very close to over-engineering.

-1

Source: https://habr.com/ru/post/1277741/

