CTP5 EF Code First vs. LINQ-to-SQL

Well, I must be doing something wrong here, because the execution times I'm getting are shockingly different. I was considering using EF Code First (CTP5) in an existing project, so I ran some performance tests to see how it compares to LINQ-to-SQL. I'm using MSpec to run the tests against a remote development database.

Here are my tests:

    public class query_a_database_for_a_network_entry_with_linq : ipmanagement_object
    {
        protected static NetINFO.IPM_NetworkMaster result;

        Because of = () =>
        {
            var db = new NetINFODataContext();
            result = db.IPM_NetworkMasters.SingleOrDefault(c => c.NetworkID == 170553);
        };

        It should_return_an_ipm_networkmaster_object = () =>
            result.ShouldBeOfType(typeof(NetINFO.IPM_NetworkMaster));

        It should_return_a_net_ou_object_with_a_networkid_of_4663 = () =>
            result.IPM_OUIDMaps.First().NET_OU.NET_OUID.ShouldEqual(4663);
    }

    public class query_a_database_for_a_network_entry_with_entity_code_first : ipmanagement_object
    {
        protected static NetInfo.Core.Models.CTP.IPM_NetworkMaster result;

        Because of = () =>
        {
            var db = new NetInfo.Core.Models.CTP.NetInfoDb();
            result = db.IPM_NetworkMasters.SingleOrDefault(c => c.NetworkID == 170553);
        };

        It should_return_an_ipm_networkmaster_object = () =>
            result.ShouldBeOfType(typeof(NetInfo.Core.Models.CTP.IPM_NetworkMaster));

        It should_return_a_net_ou_object_with_a_networkid_of_4663 = () =>
            result.NET_OUs.First().NET_OUID.ShouldEqual(4663);
    }

As you can see, with the LINQ-to-SQL DataContext I cannot navigate a many-to-many relationship directly; I have to go through the mapping table (IPM_OUIDMaps). Being able to skip that is one of the things I like about Entity Framework. However, when I run these tests, the LINQ-to-SQL test takes no more than 4 seconds (the database is remote), while the EF Code First test takes almost 8 seconds every time. I'm not sure why there is such a huge difference. Here are excerpts from my POCO classes and my DbContext:

DbContext:

    public class NetInfoDb : DbContext
    {
        public NetInfoDb() : base("NetINFOConnectionString") { }

        public DbSet<IPM_NetworkMaster> IPM_NetworkMasters { get; set; }
        public DbSet<IPM_NetworkType> IPM_NetworkTypes { get; set; }
        public DbSet<NET_OU> NET_OUs { get; set; }

        protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            modelBuilder.Entity<IPM_NetworkMaster>()
                .HasMany(a => a.NET_OUs)
                .WithMany(b => b.IPM_NetworkMasters)
                .Map(m =>
                {
                    m.MapRightKey(a => a.NET_OUID, "NET_OUID");
                    m.MapLeftKey(b => b.NetworkID, "NetworkID");
                    m.ToTable("IPM_OUIDMap");
                });
        }
    }

IPM_NetworkMaster:

    public class IPM_NetworkMaster
    {
        public int NetworkID { get; set; }
        <snip>
        public virtual ICollection<NET_OU> NET_OUs { get; set; }
    }

NET_OU:

    public class NET_OU
    {
        public int NET_OUID { get; set; }
        <snip>
        public virtual ICollection<IPM_NetworkMaster> IPM_NetworkMasters { get; set; }
    }
1 answer

As already mentioned, you need to profile your queries. Assuming you are using SQL Server, you can simply fire up SQL Server Profiler and compare the queries and their execution plans.
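If you don't have Profiler handy, you can also capture the generated SQL from code. A minimal sketch, using the context types from the question; LINQ-to-SQL's DataContext exposes a Log writer, and in EF Code First CTP5 calling ToString() on a query returns the SQL it will execute:

    // LINQ-to-SQL: echo every generated command to the console.
    var l2s = new NetINFODataContext();
    l2s.Log = Console.Out;
    var master = l2s.IPM_NetworkMasters.SingleOrDefault(c => c.NetworkID == 170553);

    // EF Code First: ToString() on the query shows the generated SQL
    // without executing it.
    var ef = new NetInfo.Core.Models.CTP.NetInfoDb();
    var query = ef.IPM_NetworkMasters.Where(c => c.NetworkID == 170553);
    Console.WriteLine(query.ToString());

Comparing the two SQL texts (and their plans) will tell you quickly whether the difference is in the database or in the application layer.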

As with any performance question, you must measure first. In your scenario you need to go further: measure each step with both technologies and make sure you are comparing apples with apples. If you can rule out the generated SQL as the cause, you will then have to profile the application code to track down the bottleneck.
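One apples-to-apples pitfall in particular: EF builds its model and compiles mapping views on the first query against a context type, so timing a cold run penalizes EF with a one-time startup cost. A sketch (context type taken from the question) that times a warm query instead:

    using System.Diagnostics;

    var db = new NetInfo.Core.Models.CTP.NetInfoDb();

    // Warm-up: the first query pays the one-time model-building cost.
    db.IPM_NetworkMasters.SingleOrDefault(c => c.NetworkID == 170553);

    var sw = Stopwatch.StartNew();
    var result = db.IPM_NetworkMasters.SingleOrDefault(c => c.NetworkID == 170553);
    sw.Stop();
    Console.WriteLine("Warm query: {0} ms", sw.ElapsedMilliseconds);

If the warm timings converge, the 4-second gap you saw was startup overhead rather than query cost.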

I suspect the generated queries will turn out to be the culprit.
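Another likely contributor: the second assertion (result.NET_OUs.First()) lazy-loads the navigation property, which costs an extra round trip to the remote server. Eager loading pulls the related rows in a single query. A sketch, assuming CTP5 supports the string-based Include overload:

    var db = new NetInfo.Core.Models.CTP.NetInfoDb();
    var result = db.IPM_NetworkMasters
                   .Include("NET_OUs")   // join NET_OUs into the same query
                   .SingleOrDefault(c => c.NetworkID == 170553);

Over a high-latency link, collapsing two round trips into one can matter more than the shape of either individual query.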


Source: https://habr.com/ru/post/1340390/

