When you put semicolons between CSS rules, the rule following the semicolon will be ignored. This can lead to some very strange results. MDN has a JSFiddle which can be used to demonstrate this effect quite clearly.
This is the initial state, and this is the result after a semicolon is added to the end of the first rule.
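A minimal sketch of the effect (the selectors and declarations are just placeholders for illustration):

div { color: blue; }; /* note the stray semicolon after the closing brace */
p { color: red; }     /* this entire rule is silently dropped */

The div rule still applies, but the p rule never takes effect: the parser rolls the stray semicolon into the next rule's selector, making that selector invalid, so the whole rule is discarded.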
Fortunately, it is an essentially universal practice to omit semicolons after a CSS block.
My question is: why is this the case? I have heard that it is because it saves space (in this case, exactly one character per CSS rule). But this reasoning, though true, seems odd. I could not find details on how much space each character takes up in a CSS file, but if it is similar to JS, this SO answer tells us that each character is approximately 16 bits, or 2 bytes. This means we would save 2 bytes per rule.
According to this list of average connection speeds by country, the average of those average connection speeds is 5.1 megabits/s. Since we save exactly 1 character per rule by omitting semicolons, and each character is 16 bits, we can calculate, for the average connection, how many rules it takes to save one second:
5,100,000 (bits/second) / 16 (bits saved/rule)
= (5,100,000 / 16) [(bits/second) * (rules/bit)]
= 318,750 (rules/second)
And therefore, based on the average of the average connection speeds, it would take about 300,000 rules' worth of omitted semicolons to save one second.
Of course, there are far more effective ways to cut load time for the user, such as minification/uglification of CSS/JS. Or consider shortening CSS property names: since they are much longer than 1 character and can appear many times, abbreviating them could save orders of magnitude more bytes than shaving off the trailing semicolon.
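For a sense of scale, here is a small hypothetical rule before and after typical minification (whitespace stripped, the color value shortened, and the last declaration's semicolon dropped), which saves far more than one byte per rule:

.button {
    background-color: #ffffff;
}

/* minified: */
.button{background-color:#fff}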
More important than the saved bytes, in my opinion, is how confusing this is for the developer. Many of us have learned the habit of following closing brackets with semicolons:
returnType/functionDec functionName(arguments){
};
This is a VERY common pattern found in many languages (including JavaScript), and it is not at all hard to imagine a developer typing
cssRuleA{ }; cssRuleB{ };
as an accidental result of this habit. The console will not log any errors, so the developer will have no indication that a mistake has been made other than the styles not displaying correctly. The truly INSIDIOUS part is that even though cssRuleA is the rule causing the error, it will work fine; cssRuleB is the rule that fails to display, even though there is nothing wrong with it. The fact that
- no errors are logged to the console, and
- the rule that fails to display is never the one actually at fault in this situation
can cause particular problems in large projects, where style/interface problems can have many different possible root causes.
Is there some factor inherent to CSS that makes this convention sensible? Is there something in a spec or white paper that I missed that explains why CSS behaves this way? Personally, I tried to work out whether excluding semicolons makes parsing faster from a Finite Automata / Grammars standpoint, but I could not definitively determine whether it does.
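For reference, here is a simplified paraphrase of the ruleset production from the CSS 2.1 core grammar (Appendix G). Note that the closing brace alone terminates a rule, and semicolons appear only as declaration separators inside the block, so a semicolon between rules is simply never expected, and error recovery swallows it together with the next selector:

ruleset
  : selector [ ',' S* selector ]*
    '{' S* declaration? [ ';' S* declaration? ]* '}' S*
  ;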