Yes, if they are independent of each other, it would indeed be more efficient, since access to one does not block access to the other. But you also risk a deadlock if that independence turns out to be false.
Assuming that `_someProperty1 = new List<SomeObject1>();` isn't the real code assigning `_someProperty1` (it's hardly worth lazy-loading otherwise, is it?), the question becomes: can the code that fills `SomeProperty1` ever call the code that fills `SomeProperty2`, or vice versa, through any code path, no matter how bizarre?
If only one of them can call the other, there can be no deadlock; but if they can both call each other (or 1 can call 2, 2 call 3 and 3 call 1, etc.), then a deadlock definitely can happen.
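To make the cycle concrete, here is a minimal sketch (class and member names are illustrative, not from your code) of two lazily loaded properties, each with its own lock, where the hazard lives in the loaders:

```csharp
using System.Collections.Generic;

public class Caches
{
    private readonly object _lock1 = new object();
    private readonly object _lock2 = new object();
    private List<string> _list1;
    private List<string> _list2;

    public List<string> Property1
    {
        get
        {
            lock (_lock1)            // thread A takes _lock1...
            {
                if (_list1 == null)
                    _list1 = LoadList1();
                return _list1;
            }
        }
    }

    public List<string> Property2
    {
        get
        {
            lock (_lock2)            // ...while thread B takes _lock2.
            {
                if (_list2 == null)
                    _list2 = LoadList2();
                return _list2;
            }
        }
    }

    private List<string> LoadList1()
    {
        // DANGER: if this ever reads Property2 (through any code path),
        // thread A now wants _lock2 while holding _lock1. If LoadList2 can
        // likewise reach Property1, the two threads deadlock.
        return new List<string> { "a" };
    }

    private List<string> LoadList2()
    {
        return new List<string> { "b" };
    }
}
```

As written the loaders don't cross-call, so this version is safe; the comments mark exactly where adding a cross-call would create the lock-ordering cycle.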
As a rule, I'd start with wide locks (one lock for everything that needs locking) and then narrow the locks as an optimization only where needed. Once you have, say, 20 methods that require locking, assessing their safety becomes much harder (and you also start filling memory with nothing but lock objects).
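A sketch of that "start wide" approach, with hypothetical names: one lock object guards every lazily loaded member, so there is no lock-ordering cycle to analyze. (In C#, `lock` is reentrant for the same thread, so a loader that touches a sibling property on the same thread does not self-deadlock.)

```csharp
using System.Collections.Generic;

public class WideLocked
{
    private readonly object _lock = new object();   // one lock for everything
    private List<string> _list1;
    private List<string> _list2;

    public List<string> Property1
    {
        get
        {
            lock (_lock)
            {
                if (_list1 == null)
                    _list1 = new List<string> { "a" };  // stand-in for the real load
                return _list1;
            }
        }
    }

    public List<string> Property2
    {
        get
        {
            lock (_lock)
            {
                if (_list2 == null)
                    _list2 = new List<string> { "b" };  // stand-in for the real load
                return _list2;
            }
        }
    }
}
```

The trade-off is that loading `Property1` now blocks readers of `Property2` for the duration, which is exactly the inefficiency that splitting the locks buys back.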
Note that your code also has two problems:
Firstly, you aren't locking in your setter. Maybe that's fine (you just want the lock to prevent multiple heavy calls to the load method, and don't really care about overwrites racing between `set` and `get`), or maybe it's a disaster.
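If the races do matter, the fix is for the setter to take the same lock as the getter. A minimal sketch (names illustrative, not from your code):

```csharp
using System.Collections.Generic;

public class LockedProperty
{
    private readonly object _lock = new object();
    private List<int> _items;

    public List<int> Items
    {
        get
        {
            lock (_lock)
            {
                if (_items == null)
                    _items = new List<int> { 1, 2, 3 };  // stand-in for the heavy load
                return _items;
            }
        }
        set
        {
            lock (_lock)   // same lock, so a set can't interleave with a lazy load
            {
                _items = value;
            }
        }
    }
}
```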
Secondly, depending on the processor it runs on, double-checked locking as you've written it can suffer from read/write reordering, so you must either make the field `volatile` or use a memory barrier. See http://blogs.msdn.com/b/brada/archive/2004/05/12/130935.aspx
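For illustration, here is the `volatile` variant of double-checked locking (names are placeholders): the volatile field ensures the write made under the lock is published correctly to threads that take the fast path without it.

```csharp
using System.Collections.Generic;

public class DoubleChecked
{
    private readonly object _lock = new object();
    private volatile List<int> _items;   // volatile prevents the reordering problem

    public List<int> Items
    {
        get
        {
            if (_items == null)          // first check: no lock taken
            {
                lock (_lock)
                {
                    if (_items == null)  // second check: under the lock
                        _items = Load();
                }
            }
            return _items;
        }
    }

    private static List<int> Load()
    {
        return new List<int> { 1, 2, 3 };  // stand-in for the heavy load
    }
}
```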
Edit:
It's also worth considering whether the lock is really necessary at all.
Note what the operation actually consists of:
1. Do a bunch of work.
2. Create an object based on that work.
3. Assign that object to the field.
Steps 1 and 2 happen entirely within a single thread, and step 3 happens atomically. The advantage of the lock is therefore:
1. If carrying out steps 1 and/or 2 has its own threading problems, and they aren't protected by their own locks, the lock is 100% necessary.
2. If it would be catastrophic for something to act on a value obtained from steps 1 and 2, and then act on a value from a repeat of steps 1 and 2, the lock is 100% necessary.
3. The lock prevents the waste of steps 1 and 2 being carried out multiple times.
So, if we can rule out cases 1 and 2 as problems (a little analysis is required, but it's often possible), then only the waste in case 3 remains to worry about. Now, maybe that is a big concern. But if it happens rarely, and the wasted work isn't that large when it does, the cost of the lock can outweigh the gain from the lock.
If in doubt, locking is probably the safer approach, but it may be that simply living with the occasional wasteful operation is better.
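That lock-free alternative can be sketched with `Interlocked.CompareExchange` (names are illustrative): two racing threads may both build the list, but exactly one result is ever published, and the loser's copy is simply discarded.

```csharp
using System.Collections.Generic;
using System.Threading;

public class RacyLazy
{
    private List<int> _items;   // no lock object at all

    public List<int> Items
    {
        get
        {
            if (_items == null)
            {
                var candidate = Load();  // possibly wasted work if another thread wins
                // Publish candidate only if _items is still null; otherwise keep
                // the value the winning thread already published.
                Interlocked.CompareExchange(ref _items, candidate, null);
            }
            return _items;               // always the single published instance
        }
    }

    private static List<int> Load()
    {
        return new List<int> { 1, 2, 3 };  // stand-in for the heavy load
    }
}
```

This is the same trade the built-in `Lazy<T>` makes in its publication-only mode: duplicate work is tolerated, duplicate results are not.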