SQL Server will happily store "£" in a varchar or nvarchar column using the Latin1_General_CI_AS collation. I see this working every day in the software I support.
I suspect the problem is how the text file is encoded and read. "£" has the value 163 in both Windows-1252 and Unicode. In extended ASCII (for example, DOS code page 850), however, 156 is "£" and 163 is "ú". Does your code attempt to convert the CSV's text encoding before sending the data to SQL Server? If the CSV is encoded as UTF-8, no conversion from extended ASCII is needed.
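A quick way to see the mismatch (using Python here purely for illustration):

```python
# '£' is byte 163 (0xA3) in Windows-1252, matching its Unicode code point...
assert '£'.encode('cp1252') == bytes([163])

# ...but in DOS code page 850 the same character is byte 156,
assert '£'.encode('cp850') == bytes([156])

# and byte 163 decodes to 'ú' instead.
assert bytes([163]).decode('cp850') == 'ú'
```

So if the importer treats Windows-1252 (or UTF-8) bytes as code page 850, every "£" in the file comes out as the wrong character.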
UPDATE
Looking at MSDN, it appears that the BULK INSERT command performs character-set conversion, and OEM is the default if CODEPAGE is not specified:
CODEPAGE = { 'ACP' | 'OEM' | 'RAW' | 'code_page' }
That default is definitely not what you want here. Ideally you would specify UTF-8 (CODEPAGE = '65001'), but MSDN says that UTF-8 is not supported.
Instead, I suggest you change the encoding of your CSV file to Windows-1252 and then use CODEPAGE = 'ACP' when importing the data.
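A minimal sketch of that conversion step (the file names are hypothetical; adjust them to your paths):

```python
def reencode_csv(src_path: str, dst_path: str) -> None:
    """Re-encode a UTF-8 CSV as Windows-1252 so that BULK INSERT
    with CODEPAGE = 'ACP' interprets characters like '£' correctly."""
    with open(src_path, 'r', encoding='utf-8') as src, \
         open(dst_path, 'w', encoding='cp1252', newline='') as dst:
        dst.write(src.read())
```

After converting, the import would look something like `BULK INSERT dbo.MyTable FROM 'C:\data\output_1252.csv' WITH (CODEPAGE = 'ACP', FIELDTERMINATOR = ',')` (table and path names are placeholders). Note that re-encoding to Windows-1252 will fail for any characters in the CSV that have no representation in that code page.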