That would be a direct violation of the HTML specifications. To them, the significant markup characters are ASCII characters; characters such as U+FF1C FULLWIDTH LESS-THAN SIGN "＜" are just data characters with no special meaning. A browser would need extra code to treat the fullwidth characters as their ASCII counterparts (either via a special mapping or, for example, by normalizing to NFKD or NFKC), and there is no more reason to expect browsers to do that than there is to expect them to start treating "[" as "<".
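To make that concrete, here is a minimal Python sketch (Python is just an illustration; the same holds in any language) showing that the fullwidth sign is a distinct code point, and that it only folds to ASCII "<" if you explicitly apply compatibility normalization, which HTML parsers do not do:

```python
import unicodedata

fullwidth = "\uFF1C"  # FULLWIDTH LESS-THAN SIGN
ascii_lt = "<"        # U+003C LESS-THAN SIGN

# Distinct code points: an HTML parser sees markup only in the ASCII one.
print(fullwidth == ascii_lt)                     # False
print(unicodedata.name(fullwidth))               # FULLWIDTH LESS-THAN SIGN

# NFKC (compatibility) normalization folds the fullwidth form to plain
# ASCII -- but browsers do not apply it when parsing HTML.
print(unicodedata.normalize("NFKC", fullwidth))  # <
```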
Thus, a blog post that claims otherwise is simply describing a possibility someone has imagined, without any real basis. You can usually tell this from the links and demos provided (that is, from their absence).
Of course, there are real security problems around Unicode characters that look alike, but those arise when people mistake one character for another, even though the characters are internally completely different: for example, "＜" for "<" (and therefore read a line in the HTML source as a script element even though it is not), or "а" for "a" (a Cyrillic letter for the Latin letter with the same appearance). That is, people may see the characters as the same even though programs see them as different.
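The same kind of sketch shows the homoglyph point: the Cyrillic and Latin letters render identically in most fonts, yet a program distinguishes them, and even compatibility normalization does not unify them:

```python
import unicodedata

latin_a = "a"          # U+0061 LATIN SMALL LETTER A
cyrillic_a = "\u0430"  # U+0430 CYRILLIC SMALL LETTER A

# Visually identical, but different code points:
print(latin_a == cyrillic_a)                     # False
print(hex(ord(latin_a)), hex(ord(cyrillic_a)))   # 0x61 0x430
print(unicodedata.name(cyrillic_a))              # CYRILLIC SMALL LETTER A

# Unlike the fullwidth case, NFKC does not map homoglyphs together:
print(unicodedata.normalize("NFKC", cyrillic_a) == latin_a)  # False
```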