I keep three concepts in mind: the code page the cmd instance itself works with (the one chcp changes), the encoding the console uses to read input and render output, and the encoding the .bat file is saved in.
The simplest scenario for me: the first two are in the same encoding, say CP850, and I save my .bat in that same encoding as well (in Notepad++, menu Encoding → Character sets → Western European → OEM 850).
But suppose someone hands me a .bat in a different encoding, say CP1252 (in Notepad++, menu Encoding → Character sets → Western European → Windows-1252).
Then I would change the code page of the cmd instance with chcp 1252.
This changes the encoding that cmd uses to talk to other processes, but not the encoding of the input device or of the output console.
So my cmd instance will effectively send CP1252 bytes through the STDOUT file descriptor, but garbled text appears because the console still decodes them as CP850 (é shows up as Ú).
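To see where things stand before and after that switch, a quick sketch of my own (chcp with no argument simply reports the active code page; the exact message text depends on the Windows locale):

    rem Show the code page this cmd instance is currently using
    chcp
    rem Switch the instance to Windows-1252 so it matches the .bat file
    chcp 1252
    rem The console window keeps rendering with its original code page, so a
    rem CP1252 "é" (byte 0xE9) is drawn as the CP850 character at 0xE9, which is "Ú"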
Then I modify the file as follows:
    @echo off
    perl -e "use Encode qw/encode decode/;" -e "print encode('cp850', decode('cp1252', \"ren -hélice hélice\n\"));"
    ren -hélice hélice
First, I turn echo off, so no command is written to the output unless it is printed explicitly, either with echo ... or with perl -e "print ...".
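As a minimal illustration of that behaviour (my own example, not part of the original script):

    @echo off
    rem With echo off, this command line itself is never printed
    echo Only this explicitly printed line reaches the console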
Then, every time I need to output something, I use this template:
perl -e "use Encode qw / encode decode /;" -e "print encode ('cp850', decode ('cp1252', \" ren -hélice hélice \ n \ "));"
In it I replace ren -hélice hélice with the actual text I want to display.
I also replace cp850 with my console's code page and cp1252 with the encoding the .bat file is saved in.
And a little further down I put the command I actually want to run.
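For example, a hypothetical adaptation (the code pages and the message are my own illustration): console running CP437, script saved as Windows-1252, a different message to show, and the real command right below it:

    @echo off
    perl -e "use Encode qw/encode decode/;" -e "print encode('cp437', decode('cp1252', \"Renaming -hélice to hélice...\n\"));"
    ren -hélice hélice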
In short, I split the problematic line into two halves: the output half and the actual command half.
The first half makes sure that "é" ends up displayed as "é" by transcoding it; this is necessary for every line that prints output, since the console and the file use different encodings.
For the second half, the real command (not echoed thanks to @echo off), having the same encoding in both chcp and the .bat text is enough to guarantee that its characters are interpreted correctly.
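If you want to verify that first half at the byte level (a check of my own, not part of the original answer): CP1252 stores é as byte 0xE9, while CP850 stores it as 0x82, and the one-liner performs exactly that rewrite before the text reaches the console:

    rem Prints 130 (hex 82), the CP850 byte for "é"; without the transcoding the byte
    rem would stay 233 (hex E9), which a CP850 console draws as "Ú"
    perl -e "use Encode qw/encode decode/;" -e "print ord(encode('cp850', decode('cp1252', qq{\xE9}))), qq{\n};"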