You should have some rationale for each data type used.
nvarchar(255) (in SQL Server) stores up to 255 Unicode (UTF-16) characters — 510 bytes plus overhead.
Of course, you can store Unicode data encoded as UTF-8 in varchar columns — one varchar character per byte of the encoded source (UTF-8 uses multiple bytes for wide characters). Plain ASCII data then takes only 1 byte per character, so you avoid the double-byte overhead. This has many drawbacks, not least that the database can no longer help much with collation, comparison, and other character manipulations, because to it the data is just opaque bytes. But, as I said, it is possible.
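To make the storage difference concrete, here is a small T-SQL sketch (the literals are illustrative) — DATALENGTH returns the number of bytes, so the same five ASCII characters cost twice as much as nvarchar:

```sql
-- Byte cost of the same ASCII string in varchar vs. nvarchar.
SELECT
    DATALENGTH(CAST('hello'  AS varchar(255)))  AS varchar_bytes,   -- 5 bytes
    DATALENGTH(CAST(N'hello' AS nvarchar(255))) AS nvarchar_bytes;  -- 10 bytes
```

For genuinely wide characters, varchar cannot hold them at all (they are mangled by code-page conversion), which is the whole trade-off discussed here.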
I recommend char or varchar of an appropriate length for things like account numbers that cannot be stored numerically (because of leading zeros), license plate numbers, account numbers containing letters, postal codes, phone numbers, etc. These are columns that NEVER contain wide characters — they are typically restricted to Latin letters and digits, sometimes punctuation — and they are often heavily indexed. There is no need for the overhead of an extra NUL byte per character in the tables, in the indexes, and in the database engine's working set.
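A hypothetical schema sketch of these recommendations (table and column names are my own invention, not from any particular system):

```sql
-- Narrow single-byte types for code-like columns; nvarchar where
-- wide characters are genuinely possible.
CREATE TABLE dbo.Customer (
    CustomerId  int IDENTITY(1,1) PRIMARY KEY,
    AccountNo   char(10)      NOT NULL,  -- fixed length, leading zeros preserved
    PostalCode  varchar(10)   NOT NULL,  -- letters/digits only, heavily indexed
    Phone       varchar(20)   NULL,      -- digits and punctuation only
    FullName    nvarchar(100) NOT NULL,  -- names may contain wide characters
    Address     nvarchar(200) NULL
);

-- Index on the single-byte column stays half the size it would be as nvarchar.
CREATE INDEX IX_Customer_PostalCode ON dbo.Customer (PostalCode);
```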
I recommend nvarchar for things like names and addresses, where wide characters are possible — even if no such use is expected in the near future.
I almost never use nchar — I have never needed short fixed-length codes (the cases where I would choose char) that required wide characters.
In all cases, the choice of length (or max) deserves real thought. I would definitely not use max for names or addresses, and the overhead becomes obvious under benchmarking. I have seen casting to varchar(length) in the intermediate stages of queries improve performance dramatically.
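A sketch of that last point (table and column names are hypothetical): casting an unbounded column down to a bounded varchar before the expensive step keeps the intermediate rows narrow, so sorts and hash aggregates get smaller memory grants.

```sql
-- Hypothetical: Notes is nvarchar(max); cast it to a bounded varchar
-- before grouping so the GROUP BY operates on narrow rows.
SELECT
    CAST(Notes AS varchar(100)) AS NoteHead,
    COUNT(*)                    AS Cnt
FROM dbo.Orders
GROUP BY CAST(Notes AS varchar(100));
```

Note the cast truncates to 100 characters and discards wide characters, so this is only safe where that loss is acceptable for the intermediate result.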