The word “semantics” is ambiguous, and you are running into two slightly different meanings of it in these different contexts.
The first meaning is about how the compiler interprets the code you entered. But there are different levels of interpretation here. Syntax is one level, where interpretation simply determines that n1*n2 means you want to perform a multiplication. But there is also a higher level of interpretation: if n1 is an integer and n2 is a floating-point number, what is the result? What if you cast it back to an integer: is it rounded, truncated, something else? These are “semantic” questions rather than syntactic ones, but someone decided that, yes, the compiler can answer most of them.
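To make those questions concrete, here is a minimal Python sketch (Python only because it is the language this answer references later; n1 and n2 are just the names from above):

```python
n1 = 7      # an integer
n2 = 2.5    # a floating-point number

result = n1 * n2       # the language decided: int * float produces a float
print(type(result))    # <class 'float'>
print(result)          # 17.5

# Getting back to an integer is yet another semantic decision:
print(int(result))     # 17 -- int() truncates toward zero
print(round(result))   # 18 -- round() rounds to the nearest integer (ties to even)
```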
They also decided that there are limits on what the compiler can (and should!) interpret. For example, it may decide that casting to int means truncation rather than rounding, but it cannot decide what you actually want when you try to multiply an array by a number.
(Sometimes people decide that it CAN, though. In Python, [1] * 3 == [1, 1, 1].)
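Both points are easy to check interactively; this snippet just demonstrates them:

```python
# The language decided that converting a float to int truncates toward zero,
# it does not round.
assert int(3.9) == 3
assert int(-3.9) == -3

# It also decided what "multiply a list by a number" means: repetition.
assert [1] * 3 == [1, 1, 1]

# But it refuses to guess when the intent is genuinely ambiguous:
try:
    [1, 2, 3] * 2.5
except TypeError as exc:
    print(exc)   # can't multiply sequence by non-int of type 'float'
```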
The second meaning covers much broader ground. Suppose the result of this operation is going to be sent to a peripheral device that can accept values from 0x000 to 0xFFF, and you multiply 0x7FF by 0x010: you have obviously made a semantic error. The designers of the peripheral had to decide whether and how to handle this. You, as the programmer, can also add your own sanity checks. But the compiler has no idea about these external semantic constraints, or about how you might want to enforce them (filter user input? return an error? truncate? wrap around?), which is what the second quote is saying.
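As a sketch of what such a sanity check might look like, assuming a hypothetical 12-bit peripheral (MAX_DEVICE_VALUE, send_to_device and checked_send are invented names for illustration, not a real API):

```python
MAX_DEVICE_VALUE = 0xFFF  # hypothetical peripheral accepts 0x000..0xFFF

def send_to_device(value: int) -> None:
    """Stand-in for the real I/O; here it just prints what would be written."""
    print(f"wrote 0x{value:03X}")

def checked_send(value: int) -> None:
    # The compiler knows nothing about this constraint; enforcing it, and
    # choosing a policy (raise? clamp? wrap?), is entirely up to the program.
    if not 0 <= value <= MAX_DEVICE_VALUE:
        raise ValueError(f"0x{value:X} is outside the device range 0x000-0xFFF")
    send_to_device(value)

result = 0x7FF * 0x010    # 0x7FF0 -- arithmetically fine, semantically wrong
try:
    checked_send(result)
except ValueError as exc:
    print(exc)            # one possible policy; clamping or wrapping are others
```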