SSIS Script Component: Microsoft.SqlServer.Dts.Pipeline.BlobColumn

I'm fighting with a C# Script Component. What I'm trying to do is take a column that is ntext in my original source, split it on the pipe delimiters, and write the resulting array to a text file. When I run my component, my output is as follows:

 DealerID,StockNumber,Option
 161552,P1427,Microsoft.SqlServer.Dts.Pipeline.BlobColumn

I've tried working with the GetBlobData method but couldn't get it right. Any help is greatly appreciated! Here is the full script:

    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        string vehicleoptionsdelimited = Row.Options.ToString();
        //string OptionBlob = Row.Options.GetBlobData(int ;
        //string vehicleoptionsdelimited = System.Text.Encoding.GetEncoding(Row.Options.ColumnInfo.CodePage).GetChars(OptionBlob);
        string[] option = vehicleoptionsdelimited.Split('|');
        string path = @"C:\Users\User\Desktop\Local_DS_CSVs\";
        string[] headerline = { "DealerID" + "," + "StockNumber" + "," + "Option" };
        System.IO.File.WriteAllLines(path + "OptionInput.txt", headerline);
        using (System.IO.StreamWriter file = new System.IO.StreamWriter(path + "OptionInput.txt", true))
        {
            foreach (string s in option)
            {
                file.WriteLine(Row.DealerID.ToString() + "," + Row.StockNumber.ToString() + "," + s);
            }
        }
    }
2 answers

Try using

 BlobToString(Row.Options) 

using this function:

    private string BlobToString(BlobColumn blob)
    {
        string result = "";
        try
        {
            if (blob != null)
            {
                result = System.Text.Encoding.Unicode.GetString(
                    blob.GetBlobData(0, Convert.ToInt32(blob.Length)));
            }
        }
        catch (Exception ex)
        {
            result = ex.Message;
        }
        return result;
    }

Adapted from: http://mscrmtech.com/201001257/converting-microsoftsqlserverdtspipelineblobcolumn-to-string-in-ssis-using-c
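
As a minimal sketch, here is how that helper would slot into the question's `Input0_ProcessInputRow` (the column and buffer names come from the question; the rest of the file-writing logic is unchanged and omitted). Note that `Encoding.Unicode` assumes the ntext data is UTF-16, which is the normal case for ntext; if your source column has a different code page, decode with `Encoding.GetEncoding(Row.Options.ColumnInfo.CodePage)` instead:

    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        // Decode the blob instead of calling ToString(), which only
        // returns the type name "Microsoft.SqlServer.Dts.Pipeline.BlobColumn".
        string vehicleoptionsdelimited = BlobToString(Row.Options);
        string[] option = vehicleoptionsdelimited.Split('|');
        // ... write DealerID, StockNumber and each option out as before ...
    }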


Another very simple solution to this problem, since it is a common PITA, is to route the output into a Derived Column component and cast your blob to DT_STR or DT_WSTR as a new column.

Pull the output of that into your script component, and the data will come in as an extra column in the pipeline, ready for parsing.

This will only work if your data fits within the cast's declared length: DT_STR tops out at 8000 characters and DT_WSTR at 4000.
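
For reference, the expression in the Derived Column transformation would look something like this (the column name `Options` is taken from the question; 4000 is the DT_WSTR maximum, so pick a length that fits your data):

    (DT_WSTR, 4000)Options
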


Source: https://habr.com/ru/post/1337401/

