Is there any explanation why we cannot put semicolons between CSS declaration blocks?

When you put semicolons between CSS rules, the rule following the semicolon is ignored. This can lead to some very strange results. MDN has a JSFiddle that demonstrates this effect quite clearly.

This is the initial state, and this is the result after a semicolon is added to the end of the first rule.
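For anyone without the fiddle at hand, here is a minimal sketch of the effect (the selectors and colors are hypothetical, not taken from the MDN example):

    .rule-a { color: red; };    /* stray semicolon after the block */
    .rule-b { color: blue; }    /* silently dropped: the parser reads "; .rule-b"
                                   as a single invalid prelude and discards this
                                   entire rule along with its braces */

In this sketch, elements matching .rule-a render red as expected, while .rule-b elements keep their default color and no error appears anywhere.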

Fortunately, it is essentially universal practice to omit semicolons after CSS rule blocks.

My question is: why is this so? I have heard that it is because omitting them saves space (in this case, exactly one character per CSS rule). But this reasoning, though true, seems strange. I could not find details on how much space each character takes up in a CSS file, but if it is similar to JS, this SO answer tells us that each character is approximately 16 bits, or 2 bytes. This means we would save 2 bytes per rule.

According to this list of average connection speeds by country, the average connection speed is 5.1 megabits/s. Since omitting semicolons saves exactly one character per rule, and each character is 16 bits, we can work out the average number of rules it would take to save one second:

    5,100,000 (bits/second) / 16 (bits saved/rule)
        = (5,100,000 / 16) [(bits × rules) / (second × bits)]
        = 318,750 (rules/second)

And therefore, at the average connection speed, it would take roughly 300,000 rules' worth of omitted semicolons to save the user one second.

Of course, there are more effective ways to reduce load time for the user, such as minification/uglification of CSS and JS. Or shortening CSS property names: since they are much longer than one character and can appear many times, shortening them could save orders of magnitude more bytes than shaving off the trailing semicolon.
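As a rough illustration of that point (the rule itself is made up), a minifier removes far more than one character per rule:

    /* before minification */
    .sidebar-navigation {
        background-color: #ffffff;
    }

    /* after minification: the same rule, dozens of bytes smaller */
    .sidebar-navigation{background-color:#fff}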

More important than the bytes saved, in my opinion, is how confusing this is for the developer. Many of us have learned the habit of following closing braces with semicolons:

    returnType/functionDec functionName(arguments) {
        // ...function body
    };

a VERY common pattern found in many languages (including JavaScript), and it is not at all hard to imagine a developer typing

    cssRuleA {
        /* style rules */
    };
    cssRuleB {
        /* style rules */
    };

as a side effect of this habit. The console will log no errors, and the developer will have no indication that a mistake was made beyond the styles not displaying correctly. The truly insidious part is that even though the semicolon after cssRuleA causes the error, cssRuleA will work fine; it is cssRuleB that will fail to display correctly, even though there is nothing wrong with it. The fact that

  • the mistake logs no errors in the console, and
  • the style that fails to display is never the one actually at fault

can cause particular problems in large projects, where style/interface problems can have many different possible roots.

Is there something inherent in CSS that makes this convention the clear choice? Is there something in the specs that I missed which explains why CSS behaves this way? Personally, I tried to work out whether omitting semicolons is faster from a finite automata/grammars point of view, but I could not definitively determine whether it is or not.

1 answer

In CSS, rules come in the form of either blocks or statements, but never both at once. A block is a piece of code surrounded by a pair of curly braces. A statement is a piece of code that ends with a semicolon.

An empty rule is not a valid CSS rule, because it cannot be parsed as either a qualified rule or an at-rule. It stands to reason, then, that a lone ; between two blocks is invalid for the same reason that a block without a prelude (either a selector list, or an at-keyword followed by an optional prelude) is invalid: because it cannot be parsed into anything meaningful.

Only at-rules may take the form of statements and therefore end with a semicolon (examples include @charset and @import); qualified rules never do. So when a stray semicolon is encountered, if the parser is not already in the middle of parsing an at-rule, it is treated as part of a qualified rule, and everything up to and including the next matching set of curly braces is consumed and discarded. This is described briefly in section 2.2 of css-syntax-3 (it says the text is non-normative, but only because the normative rules are defined in the grammar itself).
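A hedged sketch of the distinction (the file name and selectors are invented for illustration):

    @charset "utf-8";           /* statement at-rule: ends with a semicolon */
    @import url("base.css");    /* statement at-rule: ends with a semicolon */

    p { color: green; };        /* qualified rule: the trailing ";" is stray... */
    h1 { color: purple; }       /* ...so "; h1" becomes one invalid prelude and
                                   this entire rule is consumed and discarded */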

And the reason error handling takes such an eager approach in CSS is mostly down to selector error handling: if it were conservative, browsers might end up parsing selectors in completely unexpected ways. For example, if IE6, which does not understand >, were to ignore only the p > in p > span {...} and treat everything from span onward as valid, that rule would match every span element in IE6, while matching only the appropriate subset of elements in supporting browsers. (A similar problem actually existed in IE6 with chained class selectors: .foo.bar was treated as .bar.) You might therefore think of it not as liberal error handling, but as conservative application of CSS rules. It is better not to apply a rule when in doubt than to apply it with unexpected results.
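To make that concrete (the declaration is hypothetical):

    /* Intended: style only spans that are direct children of paragraphs. */
    p > span { background: yellow; }

    /* If a browser that did not understand ">" recovered by discarding only
       the unparseable part, it would effectively apply the far broader rule: */
    span { background: yellow; }

    /* Invalidating the whole rule instead means the style is simply not
       applied, which is the safer failure mode. */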

Anyone who told you that this is for performance reasons is just making it up.

