Yes, you are correct that Subversion has this problem. In fact, it is even worse than you think. Subversion works on a per-file basis when it determines whether your working copy is out of date, so you can get into the following situation:
A changes the default values in foo() and re-runs the experiment. Say the change only affects results/output-0001.dat.
A commits this as SVN revision 2.
B revises another piece of code and re-runs the experiment. Since B does not have A's change, only results/output-1000.dat changes from the re-run.
B commits this as SVN revision 3.
B can commit without updating because his changes did not overlap with A's changes. But now SVN revision 3 does not match the working copy on machine A or on machine B! If Professor C comes along and checks out SVN revision 3, he sees:
results/output-0001.dat with the results from A, and results/output-1000.dat with the results from B.
This is deeply confusing: that combination of results never existed together in any working copy.
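A rough command-line sketch of how this plays out, assuming a hypothetical repository URL and that A and B each start from a checkout of revision 1:

# On machine A: change the defaults in foo(), re-run, commit
$ svn commit -m "New defaults in foo(); updates results/output-0001.dat"   # becomes revision 2

# On machine B (still at revision 1, never ran 'svn update'):
$ svn commit -m "Tweak other code; updates results/output-1000.dat"        # becomes revision 3

# On machine C: checking out revision 3 yields a mix neither A nor B ever saw
$ svn checkout -r 3 http://example.com/svn/project/trunk project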
The key concept that allows this is mixed-revision working copies. Subversion allows files in your working copy to be at different revisions. When you create revision 2 with the change to foo.c, that file is marked as revision 2; the other files in the working copy remain at revision 1. This lets you selectively roll parts of your working copy back to an older revision for debugging, and it lets you commit a file without updating, as long as no one else has touched that file.
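A small illustration of what mixed revisions look like in practice, using the file names from the example above:

$ svn status --verbose foo.c results/output-0001.dat   # shows the working revision of each item
$ svn update -r 1 foo.c                                # roll just foo.c back to revision 1
$ svn commit -m "Regenerate one output file" results/output-1000.dat   # commit one file without updating the rest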
Tools such as Mercurial and Git prevent this because they model history as a DAG (directed acyclic graph). Each commit becomes a new node in the graph, and you must make an explicit merge commit to combine two divergent sets of changes. In the scenario above, B would try to push his change and Mercurial would abort. He then does:
$ hg pull
$ hg merge
All three versions of the results are now stored in history.
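For completeness, a sketch of what B's whole session might look like (the push target is whatever remote the repository normally uses):

# On machine B
$ hg commit -m "Tweak other code; regenerate results/output-1000.dat"
$ hg push                     # aborts: it would create a new remote head (A's commit is missing here)
$ hg pull                     # fetch A's commit
$ hg merge                    # combine the two heads in B's working copy
$ hg commit -m "Merge A's new defaults with my changes"
$ hg push                     # now succeeds; the history records the merge explicitly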