Here's the setup of the question: suppose I have an abstract class for objects, let's call it Object. This definition includes a two-dimensional position and dimensions. Let it also have a pure virtual method virtual void Render(Backend& backend) const = 0 used for rendering.
Now I specialize the inheritance tree and add the Rectangle and Ellipse classes. Assume they have no properties of their own, but each overrides virtual void Render. Let's say I've implemented these methods, so Render for Rectangle actually draws some kind of rectangle, and likewise for Ellipse.
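For concreteness, here is roughly what I have in mind (a sketch; the member names and the Backend class are just placeholders):

```cpp
class Backend; // assumed: provides the actual drawing primitives

class Object {
public:
    Object(double x, double y, double w, double h)
        : x_(x), y_(y), w_(w), h_(h) {}
    virtual ~Object() = default;

    virtual void Render(Backend& backend) const = 0;

protected:
    double x_, y_; // two-dimensional position
    double w_, h_; // dimensions
};

class Rectangle : public Object {
public:
    using Object::Object;
    void Render(Backend& backend) const override { /* draw a rectangle via the backend */ }
};

class Ellipse : public Object {
public:
    using Object::Object;
    void Render(Backend& backend) const override { /* draw an ellipse via the backend */ }
};
```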
Now I add a class named Plane, defined as class Plane : public Rectangle, which has a private member std::vector<Object*> plane_objects;
Right after that I add a method to add some object to my plane.
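So Plane looks roughly like this, and the whole question is what the parameter of AddObject should be (again a sketch, not the final code):

```cpp
#include <vector>

class Plane : public Rectangle {
public:
    using Rectangle::Rectangle;

    // The question: what should the parameter type be here?
    void AddObject(/* Object&? Object*? something else? */);

private:
    std::vector<Object*> plane_objects;
};
```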
And here the question arises. If I declare this method as void AddObject(Object& object), I run into a problem: I can't copy the object without knowing its dynamic type (there are no virtual constructors), so I would have to write something like plane_objects.push_back(new Object(object));, when it really needs to be push_back(new Rectangle(object)) for rectangles, new Ellipse(...) for ellipses, and so on.
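In other words, the body of the reference version would have to look something like this (a sketch of the problem, assuming Plane declared void AddObject(Object&); the broken lines are commented out):

```cpp
void Plane::AddObject(Object& object) {
    // Doesn't work: Object is abstract, and even if it weren't, copy-constructing
    // through the base type would slice off the derived part:
    // plane_objects.push_back(new Object(object));

    // What I actually need is a copy of the dynamic type, which I can't spell
    // here without knowing it:
    // plane_objects.push_back(new Rectangle(static_cast<Rectangle&>(object))); // only right for rectangles
}
```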
If I implement this method as void AddObject(Object* object), it looks nicer, but then elsewhere it leads to calls like plane.AddObject(new Rectangle(params));, and that usually turns into a mess, because it is no longer clear which part of my program should free the allocated memory. Should the Plane free it when it is destroyed? Why would it? How can it be sure that AddObject was only ever called as AddObject(new Something)?
I think the problems caused by using the second approach can be solved with smart pointers, but I'm sure there should be something better.
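For completeness, this is what I mean by the smart-pointer variant (a sketch using std::unique_ptr, C++14 for std::make_unique; it at least makes the ownership transfer explicit):

```cpp
#include <memory>
#include <utility>
#include <vector>

class Plane : public Rectangle {
public:
    using Rectangle::Rectangle;

    // Taking unique_ptr by value makes the ownership transfer explicit.
    void AddObject(std::unique_ptr<Object> object) {
        plane_objects.push_back(std::move(object));
    }

private:
    std::vector<std::unique_ptr<Object>> plane_objects; // Plane clearly owns these
};

// Call site:
// plane.AddObject(std::make_unique<Rectangle>(0, 0, 10, 5));
```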
Any ideas?