SQL Server errors importing CSV file despite using varchar(MAX) for each column

I am trying to insert a large CSV file (several gigs) into SQL Server, but as soon as I go through the import wizard and finally try to import the file, I get the following error report:

  • Error 0xc02020a1: Data Flow Task 1: Data conversion failed. The data conversion for column "Title" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.". (SQL Server Import and Export Wizard)

  • Error 0xc020902a: Data Flow Task 1: The "Source - Train_csv.Outputs[Flat File Source Output].Columns[Title]" failed because truncation occurred, and the truncation row disposition on "Source - Train_csv.Outputs[Flat File Source Output].Columns[Title]" specifies failure on truncation. A truncation error occurred on the specified object of the specified component. (SQL Server Import and Export Wizard)

  • Error 0xc0202092: Data Flow Task 1: An error occurred while processing file "C:\Train.csv" on data row 2. (SQL Server Import and Export Wizard)

  • Error 0xc0047038: Data Flow Task 1: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on Source - Train_csv returned error code 0xC0202092. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure. (SQL Server Import and Export Wizard)

I created a table to load the file into first, and I set every column to varchar(MAX), so I don't understand how I can still be getting this truncation problem. What am I doing wrong?
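Before re-running the wizard, it can help to measure the actual maximum length of each column in the CSV, to see which columns overflow the wizard's default source width. A minimal sketch (the file path and delimiter are assumptions; adjust for your data):

```python
import csv

def max_column_lengths(path, encoding="utf-8", delimiter=","):
    """Report the longest value seen in each column of a CSV file.

    Useful for spotting which source columns exceed the wizard's
    default 50-character flat-file width before importing.
    """
    with open(path, newline="", encoding=encoding) as f:
        reader = csv.reader(f, delimiter=delimiter)
        header = next(reader)
        longest = {name: 0 for name in header}
        for row in reader:
            for name, value in zip(header, row):
                longest[name] = max(longest[name], len(value))
    return longest
```

Any column whose reported length exceeds the width configured in the wizard is a candidate for the truncation error above.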

+46
sql sql-server csv
03 Sep '13 at 19:30
6 answers

In the SQL Server Import and Export Wizard, you can configure the source data types on the Advanced tab (they become the output data types when creating a new table, but otherwise they are only used to process the source data).

The data types are annoyingly different from the MS SQL types: it is not VARCHAR(255) but DT_STR, with the output column width set to 255. For VARCHAR(MAX) it is DT_TEXT.

So, on the "Data Source" step, on the Advanced tab, change the data type of any offending columns from DT_STR to DT_TEXT (you can select multiple columns and change them all at once).

[Screenshot: Import and Export Wizard, Data Source, Advanced tab]
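The correspondence described above can be kept as a small lookup when deciding which SSIS type to pick. A sketch based on the mappings stated in this answer (DT_WSTR/DT_NTEXT are the usual Unicode counterparts; treat this as a reference note, not an exhaustive list):

```python
# SSIS flat-file type -> SQL Server type, per the answer above.
SSIS_TO_SQL = {
    "DT_STR":   "varchar(n)",    # width set on the Advanced tab
    "DT_WSTR":  "nvarchar(n)",   # Unicode counterpart of DT_STR
    "DT_TEXT":  "varchar(MAX)",  # use for columns that overflow DT_STR
    "DT_NTEXT": "nvarchar(MAX)", # Unicode counterpart of DT_TEXT
}

def sql_type_for(ssis_type, width=None):
    """Return the SQL Server type name for a given SSIS column type."""
    sql = SSIS_TO_SQL[ssis_type]
    if width is not None:
        sql = sql.replace("(n)", f"({width})")
    return sql
```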

+121
03 Sep '13 at 20:04

This answer may not apply universally, but it fixed the instance of this error that I encountered while importing a small text file. The flat file provider was importing based on fixed 50-character text columns in the source, which was incorrect. No amount of remapping the destination columns affected the problem.

To solve the problem, in the "Choose a Data Source" step for the flat file provider, after selecting the file, a "Suggest Types..." button appears under the list of input columns. After clicking this button, even without making any changes in the dialog that opens, the flat file provider re-scanned the source CSV file and correctly determined the field lengths in the source file.

Once this was done, imports continued without further problems.

+1
Jan 21 '16 at 17:13

I think this is a bug; apply the workaround and then try again: http://support.microsoft.com/kb/281517 .

In addition, go to the "Advanced" tab and confirm that the target columns are varchar(MAX).

0
Sep 03 '13 at 19:48

The Advanced editor did not solve my problem; instead I had to edit the .dtsx file via Notepad (or your favorite text/XML editor) and manually replace the attribute values with

length="0" dataType="nText" (I use Unicode)

Always back up your .dtsx file before editing it in text/XML mode.

As of SQL Server 2008 R2.
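The manual edit described above can be applied in bulk with a small script rather than by hand. This is only a sketch: the `length`/`dataType` attribute names and ordering are taken from the answer, and real .dtsx files vary by SSIS version, so inspect yours (and keep the backup) before running anything like this.

```python
import re

def widen_columns(dtsx_xml):
    """Rewrite flat-file column attributes to unbounded Unicode text.

    Replaces every length="<n>" dataType="..." pair with
    length="0" dataType="nText", as described in the answer.
    Attribute names and order are assumptions; check your .dtsx first.
    """
    return re.sub(
        r'length="\d+"\s+dataType="\w+"',
        'length="0" dataType="nText"',
        dtsx_xml,
    )
```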

0
Jun 16 '15 at 11:59

Go to the "Advanced" tab and, under the column data type, change the data type from DT_STR to DT_TEXT, with the column width left at 255. You can then check that it works fine.

0
Jun 19 '15 at 21:30

Cause: The Jet OLE DB provider reads a registry key to determine how many rows to read in order to guess the type of a source column. The default value for this key is 8, so the provider scans the first 8 rows of the source data to determine the column data types. If any field looks like text and its length is more than 255 characters, the column is typed as a memo field. Thus, if no value in the first 8 rows of the data source is longer than 255 characters, Jet cannot determine the true nature of the column: it treats it as VARCHAR(255) and fails to read data from any row where that column is longer.

Fix: The solution is to sort the comment column in descending order, so that the longest values land within the first 8 sampled rows. In SQL Server 2012, you can instead update the types on the Advanced tab in the import wizard.
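The sampling behaviour described above, and why sorting fixes it, can be sketched with a toy model (purely illustrative; Jet's actual type-guessing heuristics are more involved):

```python
def guess_width(values, sample_rows=8, threshold=255):
    """Mimic sample-based type guessing: inspect only the first
    `sample_rows` values and choose memo (unbounded) vs. varchar(255)."""
    sampled = values[:sample_rows]
    if any(len(v) > threshold for v in sampled):
        return "memo"          # long text seen inside the sample window
    return "varchar(255)"      # sample looked short, so a cap is chosen

# First 8 rows are short; row 9 holds a 1000-character comment.
column = ["short"] * 8 + ["x" * 1000]
guessed = guess_width(column)  # picks varchar(255); row 9 would truncate

# Sorting longest-first puts the long value inside the sample window.
fixed = guess_width(sorted(column, key=len, reverse=True))  # picks memo
```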

0
Sep 01 '15 at 6:56


