Basically, it boils down to tradeoffs.
One of your questions has an example from Linus himself:
[...] CVS, ie it really ends up being pretty much oriented to a "one file at a time" model.
Which is nice in that you can have a million files, and then only check out a few of them - you'll never even see the impact of the other 999,995 files.
Git fundamentally never really looks at less than the whole repo. Even if you limit things a bit (ie check out just a portion, or have the history go back just a bit), git ends up still always caring about the whole thing, and carrying the knowledge around.
So git scales really badly if you force it to look at everything as one huge repository. I don't think that part is really fixable, although we can probably improve on it.
And yes, then there's the "big file" issues. I really don't know what to do about huge files. We suck at them, I know.
Just as you won't find a data structure with O(1) index access and O(1) insertion, you won't find a content tracker that does everything fantastically.
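To make that analogy concrete, here is a rough Python timing sketch (my own illustration, nothing to do with Git's internals): a plain list gives cheap access by index but pays for insertion at the front, while a deque flips that tradeoff.

    # Illustrating the "pick your tradeoff" point with built-in containers.
    # Exact numbers vary by machine; the asymmetry is what matters.
    import timeit
    from collections import deque

    n = 100_000
    lst = list(range(n))
    dq = deque(range(n))

    # list: O(1) index access, O(n) insertion at the front
    print("list  index :", timeit.timeit(lambda: lst[n // 2], number=1_000))
    print("list  insert:", timeit.timeit(lambda: lst.insert(0, 0), number=1_000))

    # deque: O(1) insertion at either end, O(n) access in the middle
    print("deque index :", timeit.timeit(lambda: dq[n // 2], number=1_000))
    print("deque insert:", timeit.timeit(lambda: dq.appendleft(0), number=1_000))

Neither container is wrong; each just optimizes for a different operation, which is exactly the position Git is in.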
Git deliberately chose to be better at some things, to the detriment of others.
Disk usage
Since Git is a DVCS (distributed version control system), everybody has a copy of the entire repo (unless you use the relatively recent shallow clone).
This has some really nice benefits, which is why DVCSs like Git have become insanely popular.
However, a 4 TB repo on a central server with SVN or CVS is manageable, whereas if you use Git, nobody will be thrilled to carry that around.
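The shallow clone mentioned above is the usual escape hatch when the full history is what hurts. A minimal sketch, assuming you only need the latest snapshot (the URL is a placeholder):

    # Fetch only the most recent commit instead of the full history.
    # --depth 1 is a standard git option; the URL below is hypothetical.
    import subprocess

    repo_url = "https://example.com/some/huge-repo.git"  # placeholder
    subprocess.run(["git", "clone", "--depth", "1", repo_url], check=True)

If you later need the rest of the history, git fetch --unshallow will pull it down.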
Git has some nifty mechanisms for minimizing the size of your repo by creating delta chains ("diffs") across files. Git isn't constrained by paths or commit order in creating them, and they really do work quite well... kind of like gzipping the entire repo.
Git puts all these little diffs into packfiles. Delta chains and packfiles make retrieving objects take a little longer, but they are very effective at minimizing disk usage. (There are those tradeoffs again.)
That mechanism doesn't work as well for binary files, since they tend to differ quite a bit even after a "small" change.
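For intuition, here is a rough sketch of why deltas pay off for text. It uses difflib for readability; Git's real packfile deltas are a compact binary format, so treat this as an illustration of the idea only.

    # A one-line edit to a text file can be stored as a tiny delta against
    # the previous version instead of a second full copy.
    import difflib

    v1 = ["line %d: some configuration value\n" % i for i in range(1000)]
    v2 = list(v1)
    v2[500] = "line 500: some UPDATED configuration value\n"  # small edit

    delta = list(difflib.unified_diff(v1, v2, "v1", "v2"))

    print("full copy of v2:", sum(len(line) for line in v2), "bytes")
    print("delta v1 -> v2 :", sum(len(line) for line in delta), "bytes")

For a typical binary asset (an image, an archive, a compiled artifact), a comparably "small" logical change tends to rewrite a large share of the bytes, so the delta ends up nearly as big as the file itself and the packfile barely shrinks.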
History
When you check in a file, you have it forever and ever. Your grandchildren's grandchildren's grandchildren will download your cat gif every time they clone your repo.
This, of course, isn't unique to Git, but being a DVCS makes the consequences more significant.
And while files can be deleted, Git's content-based design (each object id is a SHA of its contents) makes removing them difficult, invasive, and destructive to history. In contrast, I can delete an obsolete binary from an artifact repository or an S3 bucket without affecting the rest of my content.
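You can see that content-addressing directly. A minimal sketch of how Git derives a blob's id (SHA-1 over a short header plus the raw bytes), which you can cross-check with git hash-object:

    # Compute the id Git assigns to a file's contents stored as a blob:
    # SHA-1 of the header "blob <size>\0" followed by the content itself.
    import hashlib

    def git_blob_id(data: bytes) -> str:
        header = b"blob %d\x00" % len(data)
        return hashlib.sha1(header + data).hexdigest()

    # Matches `printf 'hello\n' | git hash-object --stdin`
    print(git_blob_id(b"hello\n"))  # ce013625030ba8dba906f756967f9e9ca394464a

Because commit and tree ids are in turn derived from the ids they contain, purging a file from history means rewriting every descendant commit (for example with git filter-repo), which changes ids for everybody who has cloned the repo.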
Complexity
Working with really large files requires a lot of careful work to make sure you minimize your operations and never load the whole thing into memory. This is extremely difficult to do reliably when building a program with as complex a feature set as Git's.
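As a toy example of that kind of care, here is a sketch of hashing a file of arbitrary size without ever holding it all in memory (the path is a placeholder):

    # Hash a potentially huge file in fixed-size chunks so memory use stays
    # flat no matter how large the file is. The path below is a placeholder.
    import hashlib

    def sha1_of_file(path: str, chunk_size: int = 1024 * 1024) -> str:
        h = hashlib.sha1()
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                h.update(chunk)
        return h.hexdigest()

    # print(sha1_of_file("/path/to/huge-file.bin"))

Now multiply that discipline across diffing, delta compression, packing, and network transfer, and it becomes clear why "just handle big files better" is not a small patch.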
Conclusion
Ultimately, developers who say "don't put large files in Git" are a bit like those who say "don't put large files in databases". They don't like it, but any alternative has its own disadvantages (Git integration in the one case, ACID compliance and foreign keys in the other). In reality, it usually works okay, especially if you have enough memory.
It just doesn't work as well as it does for the things it was designed for.