but later I read about the garbage collector, and that setting an object to null makes it run more often.
No, that is pure myth. The best approach is to assume the GC has been tuned to run as often as necessary to strike the best balance between performance and memory usage.
Setting a reference to null merely signals that you no longer need the object; it does not oblige the GC to do anything. An object becomes eligible for collection as soon as it is unreachable, whether or not you null anything out.
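To make the reachability point concrete, here is a minimal sketch (the class and method names are my own, invented for illustration). The buffer allocated inside the helper is never set to null, yet it becomes eligible for collection the moment the method returns, because no strong reference to it survives the stack frame:

```java
import java.lang.ref.WeakReference;

public class GcDemo {
    // Hypothetical helper: allocates a buffer and returns only a weak
    // reference to it. `data` is never nulled out, but once the method
    // returns, the buffer is unreachable through any strong reference
    // and therefore collectible.
    static WeakReference<byte[]> makeGarbage() {
        byte[] data = new byte[1024];
        return new WeakReference<>(data);
    }

    public static void main(String[] args) {
        WeakReference<byte[]> ref = makeGarbage();
        System.gc(); // only a hint; the JVM decides when to actually collect
        System.out.println("collected: " + (ref.get() == null));
    }
}
```

Note that `System.gc()` is only a request: the weak reference may or may not be cleared at any given moment, which is exactly the point — collection timing belongs to the JVM, not to your code.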
Update
To tune application performance, you must measure the behavior of the whole application, which means you must first write the whole application (or a realistic end-to-end prototype). Micro-optimization does not work.
So the best approach is to let the GC do what it was designed for: make it easier for you to write clear, simple, correct code by managing memory automatically. Then, once you have tested your application on the target machine or device and can see where performance needs tuning, it will be easy to make the necessary changes without breaking anything.
Performance optimization should be driven by measurement, and measurement should be done on a realistic prototype of the complete product. So in your first implementation pass, concentrate on writing simple code; then measure, and apply dirty hacks only in the places where they are really necessary.
Keep in mind that those places can differ depending on the device you are running on! A hack that speeds things up on one device may slow them down on another, so you cannot blindly follow rules in your code. You have to measure.
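The "measure, don't guess" advice can be sketched as a tiny timing helper (the `Timing` class below is my own invention, not a standard API). Naive wall-clock timing like this is only good for spotting large differences on the actual target device; for rigorous microbenchmarks you would use a dedicated harness such as JMH, which handles warm-up, JIT compilation, and dead-code elimination:

```java
public final class Timing {
    private Timing() {}

    // Crude wall-clock timing of a task, in milliseconds. Good enough for
    // comparing candidate implementations in a realistic prototype on the
    // target device; not a substitute for a real benchmark harness.
    public static long measureMillis(Runnable task) {
        long start = System.nanoTime();
        task.run();
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```

Running the same measurement on each target device is how you discover that a given hack helps on one device and hurts on another.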