Python Encoding Conversion

I wrote a Python script that processes CSV files with non-ASCII characters encoded in UTF-8. However, the encoding of the output file gets corrupted. So from this input:

"d\xc4\x9bjin hornictv\xc3\xad"

I get this in the output:

"d\xe2\x99\xafjin hornictv\xc2\xa9\xc6\xaf"

Can you suggest where an encoding error may occur? Have you seen this behavior before?
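
For illustration, this looks like the general shape of a wrong-codec round trip: bytes that are already UTF-8 get decoded with some other codec and then re-encoded, which yields different, mangled bytes. A minimal sketch (latin-1 below is only an example codec, not necessarily the one at fault here):

# -*- coding:utf-8 -*-
# Minimal illustration of the wrong-codec failure mode: UTF-8 bytes decoded
# with another codec and re-encoded come out as different, mangled bytes.
# latin-1 is only an example codec, not necessarily the one at fault here.
raw = "d\xc4\x9bjin hornictv\xc3\xad"            # UTF-8 bytes for u"dějin hornictví"
mangled = raw.decode("latin-1").encode("utf-8")
print repr(mangled)                              # no longer the original UTF-8 bytes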

EDIT: I am using the standard library csv module with the UnicodeWriter class provided in the docs. I am using Python version 2.6.6.
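
To rule the writer in or out, here is a quick check I would run (a sketch only; it assumes UnicodeWriter is importable exactly as in the script below, and writer_test.csv is just a scratch file):

# -*- coding:utf-8 -*-
# Sketch: feed the problem string straight to UnicodeWriter to check whether
# the corruption happens in the writer or earlier, in the MARC reader.
import csv
from UnicodeWriter import UnicodeWriter # same helper as in the script below

outputFile = open("writer_test.csv", "wb")
writer = UnicodeWriter(outputFile, delimiter = ",", quoting = csv.QUOTE_MINIMAL)
writer.writerow([u"d\u011bjin hornictv\u00ed"]) # decoded form of the input above
outputFile.close()

print repr(open("writer_test.csv", "rb").read()) # expect the original UTF-8 bytes back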

EDIT 2: Code for reproducing behavior:

#!/usr/bin/env python
#-*- coding:utf-8 -*-

import csv
from pymarc import MARCReader # The pymarc package, available on PyPI: http://pypi.python.org/pypi/pymarc/2.71
from UnicodeWriter import UnicodeWriter # The UnicodeWriter from: http://docs.python.org/library/csv.html

def getRow(tag, record):
  if record[tag].is_control_field():
    row = [tag, record[tag].value()]
  else:
    row = [tag] + record[tag].subfields
  return row

inputFile = open("input.mrc", "r")
outputFile = open("output.csv", "wb")
reader = MARCReader(inputFile, to_unicode = True)
writer = UnicodeWriter(outputFile, delimiter = ",", quoting = csv.QUOTE_MINIMAL)

for record in reader:
  if bool(record["001"]):
    tags = [field.tag for field in record.get_fields()]
    tags.sort()
    for tag in tags:
      writer.writerow(getRow(tag, record))

inputFile.close()
outputFile.close()

Input data is available here (large file).

2 answers

Try passing force_utf8 = True to the MARCReader constructor:

reader = MARCReader(inputFile, to_unicode = True, force_utf8 = True)

If that does not help, you can inspect the raw values by decoding them manually with strict error handling:

string.decode("utf-8", "strict")
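
As an illustration (not part of the original answer), "strict" decoding fails loudly on malformed UTF-8 instead of silently substituting characters, which makes it easier to see where bad bytes enter the pipeline:

# Sketch: strict decoding reports the offending bytes and their offset
# instead of silently mangling them. The sample below is valid UTF-8,
# so here the try branch simply prints the decoded text.
sample = "d\xc4\x9bjin hornictv\xc3\xad" # placeholder raw field value
try:
    print repr(sample.decode("utf-8", "strict"))
except UnicodeDecodeError as e:
    print "bad bytes at offset %d: %r" % (e.start, sample[e.start:e.end])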

You can open the file as UTF-8 with the codecs module:

import codecs
codecs.open('myfile.txt', encoding='utf8')
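
A short usage sketch (myfile.txt is a placeholder path): reading through codecs.open yields unicode objects directly, so no manual .decode() is needed afterwards.

import codecs

f = codecs.open("myfile.txt", encoding = "utf8") # decodes transparently on read
for line in f:
    print repr(line) # each line is already a unicode string
f.close()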
