I am currently working on software that must implement CRUD operations on two tables with a master-detail architecture. The header table has about half a million rows and the detail table about a million rows.
Loading all of that data into a DataSet would be crazy; the data can also change, and I am not interested in keeping a local copy of the database. What I care about is that the software runs smoothly. A DataSet may not be the best solution, but I have to use one to fit in with other pieces of the software.
At first I thought of using a typed DataSet with some methods like GetNext(), GetFirst(), GetByCod(), but I'm not sure this is the best solution... I ran a small test and it didn't perform very well.
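To make the question concrete, this is roughly the kind of server-side paging I am considering instead of a one-shot fill (a minimal sketch, just my guess; HeaderTable, Cod, and Description are placeholder names, and ROW_NUMBER() is available from SQL Server 2005 on):

    // Requires: using System.Data; using System.Data.SqlClient;
    // Fill a small DataSet with one page of header rows at a time,
    // instead of loading all 500k rows up front.
    private DataSet LoadHeaderPage(string connectionString, int pageIndex, int pageSize)
    {
        const string sql = @"
            SELECT Cod, Description
            FROM (
                SELECT Cod, Description,
                       ROW_NUMBER() OVER (ORDER BY Cod) AS RowNum
                FROM HeaderTable
            ) AS Paged
            WHERE RowNum BETWEEN @First AND @Last;";

        using (SqlConnection con = new SqlConnection(connectionString))
        using (SqlDataAdapter da = new SqlDataAdapter(sql, con))
        {
            da.SelectCommand.Parameters.AddWithValue("@First", pageIndex * pageSize + 1);
            da.SelectCommand.Parameters.AddWithValue("@Last", (pageIndex + 1) * pageSize);

            DataSet ds = new DataSet();
            da.Fill(ds, "Header");   // Fill opens and closes the connection itself
            return ds;
        }
    }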
I’m interested in learning how other developers handle this, what the best practices are, and what the “best choice” is for operations on large amounts of data.
I am using Visual Studio 2008 and SQL Server 2005.
ADDED: When you talk about using a SqlDataReader, do you mean something like this?
    using (SqlConnection con = new SqlConnection(CON))
    {
        con.Open();
        SqlCommand cmd = new SqlCommand("SELECT * FROM TABLE");
        cmd.Connection = con;
        SqlDataReader rd = cmd.ExecuteReader();

        BindingSource bindingSource = new BindingSource();
        bindingSource.DataSource = rd;
        bindingNavigator1.BindingSource = bindingSource;
        txtFCOD.DataBindings.Add("Text", bindingSource, "FIELD");
    }
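My doubt is that a SqlDataReader is forward-only, so I suspect binding it directly like that would not let the BindingNavigator move back and forth. Is the idea to copy each page from the reader into a DataTable first? Something like this (just my guess; TABLE and FIELD are placeholders):

    using (SqlConnection con = new SqlConnection(CON))
    using (SqlCommand cmd = new SqlCommand("SELECT * FROM TABLE", con))
    {
        con.Open();
        using (SqlDataReader rd = cmd.ExecuteReader())
        {
            // Copy the rows into an in-memory table, since the
            // reader itself cannot be navigated backwards.
            DataTable page = new DataTable();
            page.Load(rd);

            BindingSource bindingSource = new BindingSource();
            bindingSource.DataSource = page;
            bindingNavigator1.BindingSource = bindingSource;
            txtFCOD.DataBindings.Add("Text", bindingSource, "FIELD");
        }
    }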