I know there is a similar question here, but I think mine expands on it a bit more.
I recently worked on an application that had been in production for about a year without any problems and with no real plans for expansion. The application has several dependencies and uses DI, but without a container.
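For context, hand-rolled DI of the kind described might look roughly like this minimal Python sketch. The class and function names (`SqlRepository`, `ReportService`, `compose`) are invented for illustration; the real application's types will differ:

```python
# Hypothetical classes for illustration; the real application's types differ.
class SqlRepository:
    def load(self, key):
        return f"row:{key}"

class ReportService:
    # Constructor injection: the dependency is passed in, not created here.
    def __init__(self, repository):
        self.repository = repository

    def build(self, key):
        return f"report({self.repository.load(key)})"

# "Poor man's DI": the whole object graph is wired by hand at the entry point.
def compose():
    return ReportService(SqlRepository())

service = compose()
print(service.build("42"))  # report(row:42)
```

This style works fine for a small graph: there is a single composition point and no container to learn or maintain.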
Now I am rolling out the application to a much wider audience within the company, and this led me to introduce an IoC container. The problem was the overhead of retrofitting a container into code where, as I had assumed earlier, one would never be needed.
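To make the retrofitting overhead concrete, here is a toy container sketch in Python. This is not a real library; `register` and `resolve` are invented names, and a production container would add lifetimes, auto-wiring, and so on. Retrofitting means replacing each hand-wired construction site with registrations like these:

```python
class Container:
    """Toy IoC container: maps a key to a factory function."""
    def __init__(self):
        self._factories = {}

    def register(self, key, factory):
        # factory receives the container so it can resolve its own dependencies
        self._factories[key] = factory

    def resolve(self, key):
        return self._factories[key](self)

# Hypothetical application classes, for illustration only.
class Repository:
    def load(self, key):
        return f"row:{key}"

class Service:
    def __init__(self, repository):
        self.repository = repository

container = Container()
container.register("repository", lambda c: Repository())
container.register("service", lambda c: Service(c.resolve("repository")))

service = container.resolve("service")
print(type(service.repository).__name__)  # Repository
```

The wiring moves from plain constructor calls into registration code, which is exactly the overhead that feels heavy when the application was never designed with a container in mind.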
My specific questions, going forward:

When planning and coding small applications that seemingly will not grow much, should I implement a container anyway, on the expectation that scenarios like this one may arise, so that the infrastructure already exists when the application does expand?

And is it a sign of poor design if retrofitting the container becomes cumbersome once the application grows beyond its original intent?
EDIT:
I apply SOLID principles (as far as possible) and make heavy use of interfaces in my applications; this question is about the use of an IoC container, not about DI itself. The application mentioned above does DI in its own hand-rolled style, and it is adding a container to it that raised this question.