The following classes are a minimal representation of my real scenario against a legacy database. I can add new columns to its tables, but that is all I can do: the database has over 300 tables and is used by many other legacy applications that won't be migrated to NHibernate, so moving away from compound keys is not an option.
public class Parent
{
    public virtual long Id { get; protected set; }

    ICollection<Child> children = new HashSet<Child>();
    public virtual IEnumerable<Child> Children { get { return children; } }

    public virtual void AddChildren(params Child[] children)
    {
        foreach (var child in children)
            AddChild(child);
    }

    public virtual Child AddChild(Child child)
    {
        child.Parent = this; // keep both ends of the inverse association in sync
        children.Add(child);
        return child;
    }
}

public class Child
{
    public virtual Parent Parent { get; set; }
    public virtual int ChildId { get; set; }

    ICollection<Item> items = new HashSet<Item>();
    public virtual ICollection<Item> Items { get { return items; } }

    long version; // mapped via Reveal.Member below

    // equality is based on the composite key (PARENT_ID, CHILD_ID)
    public override int GetHashCode()
    {
        return ChildId.GetHashCode() ^ (Parent != null ? Parent.Id.GetHashCode() : 0);
    }

    public override bool Equals(object obj)
    {
        var c = obj as Child;
        if (ReferenceEquals(c, null))
            return false;
        return ChildId == c.ChildId && Parent.Id == c.Parent.Id;
    }
}

public class Item
{
    public virtual long ItemId { get; set; }
    long version; // mapped via Reveal.Member below
}
Here's how I mapped them to an existing database:
public class MapeamentoParent : ClassMap<Parent>
{
    public MapeamentoParent()
    {
        Id(_ => _.Id, "PARENT_ID").GeneratedBy.Identity();
        HasMany(_ => _.Children)
            .Inverse()          // Child is responsible for persisting the association
            .AsSet()
            .Cascade.All()
            .KeyColumn("PARENT_ID");
    }
}

public class MapeamentoChild : ClassMap<Child>
{
    public MapeamentoChild()
    {
        // legacy compound key: (PARENT_ID, CHILD_ID)
        CompositeId()
            .KeyReference(_ => _.Parent, "PARENT_ID")
            .KeyProperty(_ => _.ChildId, "CHILD_ID");
        HasMany(_ => _.Items)
            .AsSet()
            .Cascade.All()
            .KeyColumns.Add("PARENT_ID")
            .KeyColumns.Add("CHILD_ID");
        Version(Reveal.Member<Child>("version"));
    }
}

public class MapeamentoItem : ClassMap<Item>
{
    public MapeamentoItem()
    {
        Id(_ => _.ItemId).GeneratedBy.Assigned();
        Version(Reveal.Member<Item>("version"));
    }
}
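For reference, the session factory is built with a standard Fluent NHibernate bootstrap, roughly like this (a minimal sketch; the dialect and connection string are placeholders, not my real configuration):

using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;

var sessionFactory = Fluently.Configure()
    .Database(MsSqlConfiguration.MsSql2008
        .ConnectionString("...")) // placeholder
    .Mappings(m => m.FluentMappings
        .AddFromAssemblyOf<MapeamentoParent>()) // picks up all three ClassMaps
    .BuildSessionFactory();

using (var session = sessionFactory.OpenSession())
{
    // the insert code below runs against a session opened like this
}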
This is the code I use to insert a parent with three children, plus one item attached to the first child:
using (var tx = session.BeginTransaction())
{
    var parent = new Parent();
    var child = new Child() { ChildId = 1 };
    parent.AddChildren(
        child,
        new Child() { ChildId = 2 },
        new Child() { ChildId = 3 });
    child.Items.Add(new Item() { ItemId = 1 });
    session.Save(parent);
    tx.Commit();
}
These are the SQL statements generated for the previous code:
-- statement
Statements #4, #5 and #6 are extraneous/redundant: all of that information was already sent to the database by the batched INSERTs in statement #2.
This would be the expected behavior if the Parent mapping did not set Inverse() on the HasMany (one-to-many) relationship.
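For contrast, this is what the non-inverse version of that mapping would look like (a sketch only, not the mapping I actually use); without Inverse(), the Parent side owns the association, so NHibernate is expected to issue separate statements to write the PARENT_ID key column after inserting the children:

// Hypothetical non-inverse variant: the collection side owns the
// association, so NHibernate itself fixes up PARENT_ID on each Child.
HasMany(_ => _.Children)
    .AsSet()
    .Cascade.All()
    .KeyColumn("PARENT_ID"); // note: no .Inverse() here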
In fact, it gets even weirder when we get rid of the one-to-many relationship from Child to Item as follows:
Remove the collection from Child and add a Child property to Item:
public class Child
{
    public virtual Parent Parent { get; set; }
    public virtual int ChildId { get; set; }

    long version;

    public override int GetHashCode()
    {
        return ChildId.GetHashCode() ^ (Parent != null ? Parent.Id.GetHashCode() : 0);
    }

    public override bool Equals(object obj)
    {
        var c = obj as Child;
        if (ReferenceEquals(c, null))
            return false;
        return ChildId == c.ChildId && Parent.Id == c.Parent.Id;
    }
}

public class Item
{
    public virtual Child Child { get; set; }
    public virtual long ItemId { get; set; }
    long version;
}
Change the Child and Item mappings to remove the HasMany of Items and add a composite-key reference from Item back to Child:
public class MapeamentoChild : ClassMap<Child>
{
    public MapeamentoChild()
    {
        CompositeId()
            .KeyReference(_ => _.Parent, "PARENT_ID")
            .KeyProperty(_ => _.ChildId, "CHILD_ID");
        Version(Reveal.Member<Child>("version"));
    }
}

public class MapeamentoItem : ClassMap<Item>
{
    public MapeamentoItem()
    {
        Id(_ => _.ItemId).GeneratedBy.Assigned();
        References(_ => _.Child).Columns("PARENT_ID", "CHILD_ID");
        Version(Reveal.Member<Item>("version"));
    }
}
Change the code to the following (note that we now need to call Save explicitly for the Item):
using (var tx = session.BeginTransaction())
{
    var parent = new Parent();
    var child = new Child() { ChildId = 1 };
    parent.AddChildren(
        child,
        new Child() { ChildId = 2 },
        new Child() { ChildId = 3 });
    var item = new Item() { ItemId = 1, Child = child };
    session.Save(parent);
    session.Save(item);
    tx.Commit();
}
The resulting SQL statements:
-- statement #1
INSERT INTO [Parent] DEFAULT VALUES;
select SCOPE_IDENTITY()

-- statement #2
INSERT INTO [Child] (version, PARENT_ID, CHILD_ID) VALUES (1, 1, 1)
INSERT INTO [Child] (version, PARENT_ID, CHILD_ID) VALUES (1, 1, 2)
INSERT INTO [Child] (version, PARENT_ID, CHILD_ID) VALUES (1, 1, 3)

-- statement #3
INSERT INTO [Item] (version, PARENT_ID, CHILD_ID, ItemId) VALUES (1, 1, 1, 1)
As you can see, there are no extraneous/unnecessary UPDATE statements anymore, but the object model is no longer shaped the way I want it: I do not want Item to hold a reference back to Child, and I DO need a collection of items on Child.
I cannot find any way to prevent the unwanted/unnecessary UPDATE statements other than removing the HasMany relationship from Child entirely. It seems that once Child is the "many" side of an inverse one-to-many relationship (and is therefore responsible for persisting itself), NHibernate no longer honors the inverse setting when Child is also the "one" side of another one-to-many relationship...
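The closest compromise I can sketch (and I have not verified that it actually eliminates the extraneous UPDATEs) is to keep the back-reference on Item but hide it behind Child's API, so the Items collection can be mapped with Inverse() and Item, through its composite-key reference, becomes responsible for writing its own foreign key:

public class Child
{
    public virtual Parent Parent { get; set; }
    public virtual int ChildId { get; set; }

    ICollection<Item> items = new HashSet<Item>();
    public virtual IEnumerable<Item> Items { get { return items; } }

    // the back-reference on Item is only ever set here, so the rest
    // of the domain code never touches Item.Child directly
    public virtual Item AddItem(Item item)
    {
        item.Child = this;
        items.Add(item);
        return item;
    }

    // ... version field, Equals/GetHashCode as before ...
}

public class MapeamentoChild : ClassMap<Child>
{
    public MapeamentoChild()
    {
        CompositeId()
            .KeyReference(_ => _.Parent, "PARENT_ID")
            .KeyProperty(_ => _.ChildId, "CHILD_ID");
        HasMany(_ => _.Items)
            .Inverse()   // Item owns the association via its composite-key reference
            .AsSet()
            .Cascade.All()
            .KeyColumns.Add("PARENT_ID")
            .KeyColumns.Add("CHILD_ID");
        Version(Reveal.Member<Child>("version"));
    }
}

MapeamentoItem would keep the References(_ => _.Child).Columns("PARENT_ID", "CHILD_ID") mapping from the second variant above. The idea is that Inverse() moves ownership of the association to Item, which already knows its full composite key, so NHibernate should not need a second statement to fix up the key columns.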
Either way, this is driving me crazy. I cannot accept these extra UPDATE statements without a well-reasoned explanation :-) Does anyone know what is going on here?