Best practices with a large dataset in C#

I am currently working on the design and implementation of software that has to perform CRUD operations on two tables with a master-detail architecture. The header table has about half a million rows and the detail table about a million rows.

Filling all of this data into a DataSet would be crazy; besides, the data can change, and I am not interested in keeping a local copy of the database. What I care about is that the software runs smoothly. Although a DataSet may not be the best solution, I have to use one to fit in with the other pieces of the software.

At first I thought of using a typed DataSet and some methods like GetNext(), GetFirst(), GetByCod(), but I'm not sure whether this is the best solution... I ran a small test and the performance wasn't very good.

I'm interested in learning how other developers handle this, what the best practices are, and what the "best choice" is for operations on big data.

I am using Visual Studio 2008 and SQL Server 2005.

ADDED: When you talk about using a SqlDataReader, do you mean something like this?

    using (SqlConnection con = new SqlConnection(CON))
    {
        con.Open();
        SqlCommand cmd = new SqlCommand("SELECT * FROM TABLE");
        cmd.Connection = con;
        SqlDataReader rd = cmd.ExecuteReader();
        BindingSource bindingSource = new BindingSource();
        bindingSource.DataSource = rd;
        bindingNavigator1.BindingSource = bindingSource;
        txtFCOD.DataBindings.Add("Text", bindingSource, "FIELD");
    }
2 answers

I don't think there is any sensible way to manage a DataSet that large in memory.

You need a DataReader, not a DataSet.
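
As a minimal sketch of that streaming approach (the connection string CON and the HEADER table with its COD and DESCRIPTION columns are placeholders borrowed from the question's style, not a known schema):

    // requires: using System.Data; using System.Data.SqlClient;
    using (SqlConnection con = new SqlConnection(CON))
    using (SqlCommand cmd = new SqlCommand("SELECT COD, DESCRIPTION FROM HEADER", con))
    {
        con.Open();
        // CloseConnection releases the connection as soon as the reader is disposed.
        using (SqlDataReader rd = cmd.ExecuteReader(CommandBehavior.CloseConnection))
        {
            while (rd.Read())
            {
                // Only the current row is held in memory; read it, use it, move on.
                int cod = rd.GetInt32(0);
                string description = rd.GetString(1);
                // ... use cod and description here ...
            }
        }
    }

Unlike a DataSet, the reader never materializes all 500,000 rows on the client; it streams them one at a time over the open connection.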

A local copy of the database is one way to get quick responses from your application when the amount of data is really large, but you will run into problems with synchronization (replication), concurrency, etc.

The best practice is to fetch from the server only the data the user actually needs. You should use server-side processing, stored procedures, etc.
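
For example, here is a hedged sketch of fetching a single record through a stored procedure; the procedure name GetHeaderByCod and its parameter are hypothetical, invented for illustration:

    using (SqlConnection con = new SqlConnection(CON))
    using (SqlCommand cmd = new SqlCommand("GetHeaderByCod", con)) // hypothetical procedure
    {
        cmd.CommandType = CommandType.StoredProcedure;
        // Only the one row the user asked for crosses the wire.
        cmd.Parameters.AddWithValue("@Cod", requestedCod); // requestedCod: value from the UI
        con.Open();
        using (SqlDataReader rd = cmd.ExecuteReader())
        {
            if (rd.Read())
            {
                // Bind or display the single header row here.
            }
        }
    }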

I still don't know what data you want to manipulate or what the purpose of your application is, but there is another drawback to keeping a lot of data on the client side: your application will need a lot of RAM and a fast CPU. Your machine may be fast enough to handle it, but think about what happens when someone installs your application on a tablet with a 1 GHz Atom processor. It will be a disaster.


There should rarely be a scenario in which you need to retrieve all of the data at once.

You may consider the following:

  • Use views to serve specific, smaller result sets.
  • Consider paging with the ROW_NUMBER() OVER() function introduced in SQL Server 2005 (a sketch follows this list).
  • Do not use DataSets for large amounts of data. DataReaders are much more efficient in that case.
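
One possible sketch of the paging idea with ROW_NUMBER() OVER(); the table and column names are placeholders and the page size is arbitrary:

    string sql =
        "SELECT COD, DESCRIPTION FROM ( " +
        "    SELECT COD, DESCRIPTION, " +
        "           ROW_NUMBER() OVER (ORDER BY COD) AS RowNum " +
        "    FROM HEADER " +
        ") AS Paged " +
        "WHERE RowNum BETWEEN @First AND @Last";

    int pageSize = 100;   // rows per page shown in the UI
    int pageIndex = 0;    // zero-based page the user navigated to

    using (SqlConnection con = new SqlConnection(CON))
    using (SqlCommand cmd = new SqlCommand(sql, con))
    {
        cmd.Parameters.AddWithValue("@First", pageIndex * pageSize + 1);
        cmd.Parameters.AddWithValue("@Last", (pageIndex + 1) * pageSize);
        con.Open();
        using (SqlDataReader rd = cmd.ExecuteReader())
        {
            while (rd.Read())
            {
                // At most pageSize rows ever reach the client.
            }
        }
    }

The BindingNavigator from the question can then drive pageIndex instead of scrolling through the whole table.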

Personally, I think you should avoid loading large amounts of data into memory unless you have full control over how much is loaded and when it is released. Remember that when you process data on the server side, you are using resources that other processes may need.

You should always try to work with smaller chunks at a time, and hold them for as short a time as possible. This keeps your process from tying up resources for extended periods.
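
One way to follow that advice, sketched under the assumption that HEADER has a numeric key COD (keyset batching, my own suggestion rather than something from the answer above):

    int lastCod = 0;
    bool more = true;
    while (more)
    {
        more = false;
        // A fresh, short-lived connection per batch; nothing is held between rounds.
        using (SqlConnection con = new SqlConnection(CON))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT TOP 500 COD, DESCRIPTION FROM HEADER " +
            "WHERE COD > @LastCod ORDER BY COD", con))
        {
            cmd.Parameters.AddWithValue("@LastCod", lastCod);
            con.Open();
            using (SqlDataReader rd = cmd.ExecuteReader())
            {
                while (rd.Read())
                {
                    lastCod = rd.GetInt32(0);
                    more = true;
                    // Process one row of the 500-row batch here.
                }
            }
        }
    }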


Source: https://habr.com/ru/post/909072/

