It's not quite right to say "databases created at the time used the hierarchical model."
Firstly (nit-pick), it's database management systems that do or don't use a particular physical structure. "Databases" are database designs, which can use all sorts of abstractions: Entity-Relationship modeling has been and remains popular as a design tool regardless of the eventual physical platform.
Secondly, at the time hierarchical models were common for large "big iron" databases, indexed-sequential was far more common on the machines in wider use (for example DEC PDP-8/-11, IBM System/34, /36, ICL 1900/ME29, Honeywell DPS4/DPS7).
You could say that indexed-sequential organization on disk grew out of punched-card and magnetic-tape systems, which used batch update-by-copy. That's where the "sequential" comes from.
You say you don't want to ask about the physical implementation; but the answer is all about the physical implementation. Reading a disk sequentially is far more efficient than random access (which needs a head seek). That's why memory, in contrast to disk, is called "Random Access Memory". (This was long before memory became so cheap that we could hold a whole database in it.)
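For a feel of the gap, here's a minimal Python sketch that reads the same scratch file sequentially and then at random offsets. It's purely illustrative: the file name and sizes are made up, and on a modern SSD with a warm OS cache the difference is far smaller than it was on the spinning disks of that era.

```python
# Illustrative only: sequential vs random reads of the same scratch file.
import os
import random
import tempfile
import time

BLOCK = 4096          # read in 4 KiB "pages"
NUM_BLOCKS = 16_384   # 64 MiB file

# Build a throwaway data file.
path = os.path.join(tempfile.mkdtemp(), "scratch.dat")
with open(path, "wb") as f:
    f.write(os.urandom(BLOCK) * NUM_BLOCKS)

def sequential_read():
    with open(path, "rb") as f:
        while f.read(BLOCK):
            pass

def random_read():
    offsets = [random.randrange(NUM_BLOCKS) * BLOCK for _ in range(NUM_BLOCKS)]
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)      # each seek is what cost so dearly on old drives
            f.read(BLOCK)

for name, fn in [("sequential", sequential_read), ("random", random_read)]:
    t0 = time.perf_counter()
    fn()
    print(f"{name:10s}: {time.perf_counter() - t0:.3f}s")
```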
Similarly, a hierarchical model was organized to make the commonly used access paths quick. A hierarchy places closely related nodes on the same patch of physical disk, so it was easy to get from a Customer to that Customer's Orders to those Orders' line items.
The downside was navigating across the hierarchy - for example, finding all the order lines for part P5432 regardless of which Customer/Order they belong to. (And if you want the Customers who order P5432, you have to navigate "upwards" through the hierarchy. If it's all on the same patch of disk, hopefully you don't have to seek too far / maybe it's even in a disk block already loaded into RAM.)
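To make that trade-off concrete, here's a toy sketch with entirely hypothetical data, using nested Python dicts to stand in for the physically clustered records: the designed-in access path is a cheap direct walk, while the cross-cutting query has to visit everything.

```python
# Toy picture of a hierarchical (IMS-style) layout: Customer -> Orders -> Lines.
db = {
    "C001": {
        "orders": {
            "O1001": {
                "lines": [
                    {"part": "P5432", "qty": 2},
                    {"part": "P9999", "qty": 1},
                ],
            },
        },
    },
    "C002": {
        "orders": {
            "O1002": {
                "lines": [{"part": "P5432", "qty": 7}],
            },
        },
    },
}

# The designed-in access path is trivial: customer -> order -> lines.
for line in db["C001"]["orders"]["O1001"]["lines"]:
    print("C001/O1001:", line)

# The cross-cutting query ("all order lines for P5432, whoever ordered it")
# has no fast path: you must walk the entire hierarchy.
hits = [
    (cust, order, line)
    for cust, c in db.items()
    for order, o in c["orders"].items()
    for line in o["lines"]
    if line["part"] == "P5432"
]
print(hits)
```

In the real systems that full walk meant seeking across the whole database, not just looping over an in-memory dict.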
Similarly, an indexed-sequential organization favored one particular index - the primary key. If you wanted to search by Customer name rather than Customer number, that needed a "secondary index", with all sorts of ugly organization to keep the index buckets physically near the data. And the notorious "bucket overflow", which could bring the machine grinding to a halt when you corrected one tiny spelling mistake in a name and thereby moved it to a completely different initial-letter position.
(By the way, NoSQL databases that are key-value stores with only a single key seem to walk straight into all these secondary-indexing traps. They need a second key-value store to provide the alternative index, with all sorts of fun keeping the two in sync. Back to the future!)
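Here's a small sketch of that trap, with made-up data; the same two-dictionaries picture covers both the old secondary index and its NoSQL reincarnation.

```python
# One mapping holds the data keyed by Customer number, the other maps
# name -> number. Every update must keep the two in step by hand.
customers_by_number = {            # primary store, keyed by the "real" key
    "C001": {"name": "Smith", "city": "Leeds"},
    "C002": {"name": "Zhang", "city": "York"},
}
number_by_name = {                 # the "secondary index" / second store
    "Smith": "C001",
    "Zhang": "C002",
}

def rename_customer(number: str, new_name: str) -> None:
    """Correct a spelling mistake - and remember to fix the index too."""
    old_name = customers_by_number[number]["name"]
    customers_by_number[number]["name"] = new_name
    # Forget these two lines and the alternative access path silently rots.
    del number_by_name[old_name]
    number_by_name[new_name] = number

rename_customer("C001", "Smyth")
print(number_by_name)   # {'Zhang': 'C002', 'Smyth': 'C001'}
```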
The biggest problem Codd had in getting the Relational Model implemented was convincing IBM that the model could efficiently support queries over several different "access paths". You'll see that much of his early writing talks about abstracting "navigation" away from the query writer/programmer. In fact there were a great many trade-offs in the original System R, because
a) the IBM engineers simply did not understand the mathematical abstractions Codd was talking about;
b) they were scared that it would perform like a dog and they'd lose their jobs.
[Ahem! Personal opinion: but the team did get together again afterwards, and there are some reminiscences somewhere on the Internet.] Those trade-offs persist to this day in SQL; which, frankly, is a mess and should have been killed off as merely an interesting proof of concept.
So how did Codd's model win out (or rather the SQL model, which isn't really Codd's)?
Disk technology improved - especially seek times.
Somebody figured out hashed indexing and B-trees, and kept all of a table's indexes in storage separate from the actual data, instead of trying to hold everything in magtape-style serial order (see the sketch after these points).
Larry Ellison got wind of what was going on and poached members of the IBM engineering team to build the same thing at Oracle. Meanwhile, Michael Stonebraker started Ingres.
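The sketch promised above: a rough, hypothetical illustration of keeping the indexes separate from the data, with Python dicts standing in for the hashed or B-tree structures a real engine would use.

```python
# Rows live in an unordered "heap"; each index is its own structure holding
# nothing but (key -> row position). Adding another index never disturbs how
# the rows are physically laid out.
rows = [                                # arrival order, not key order
    {"number": "C002", "name": "Zhang"},
    {"number": "C001", "name": "Smith"},
]

# Two independent indexes over the same table.
idx_by_number = {row["number"]: pos for pos, row in enumerate(rows)}
idx_by_name = {row["name"]: pos for pos, row in enumerate(rows)}

def insert(row: dict) -> None:
    rows.append(row)                    # data goes wherever there is room
    pos = len(rows) - 1
    idx_by_number[row["number"]] = pos  # every index gets a pointer; nothing moves
    idx_by_name[row["name"]] = pos

insert({"number": "C003", "name": "Okafor"})
print(rows[idx_by_name["Okafor"]])      # lookup by either key is one probe
print(rows[idx_by_number["C001"]])
```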
The race was on! No time to stop and fix things up. Implement what you've got (i.e. the SQL proof of concept) and rush it to market, ready or not. (Sound like a familiar story?)
Your points about the superiority of the relational model are all well made. They mostly follow from normalization techniques. I would say, though, that these were not well understood in the late '70s/'80s. Design practice tended to take hierarchical or indexed-sequential data models and simply carry them over to "flat" tables. In particular there was a tendency to build "wide" tables, so that everything we know about a Customer sits on a single patch of disk, rather than partitioning it vertically. (For fear of the performance cost of joining the partitions back together.) That meant lots of inapplicable or "unknown" fields - which is the abomination that is SQL's NULL.
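A minimal sqlite3 sketch of that habit (table and column names invented for illustration): the "wide" design parks every optional fact in a nullable column, while the normalized design gives those facts their own table and simply has no row for what it doesn't know.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# The 1980s habit: one wide row per customer, NULLs for whatever is unknown.
con.execute("""
    CREATE TABLE customer_wide (
        number TEXT PRIMARY KEY,
        name   TEXT NOT NULL,
        phone1 TEXT,      -- NULL if we don't know the phone
        phone2 TEXT,      -- NULL for almost everyone
        fax    TEXT       -- NULL for everyone after about 1995
    )
""")
con.execute("INSERT INTO customer_wide VALUES ('C001', 'Smith', NULL, NULL, NULL)")

# The normalized alternative: a fact we don't know simply has no row,
# so there is nothing to be NULL in the base tables.
con.execute("CREATE TABLE customer (number TEXT PRIMARY KEY, name TEXT NOT NULL)")
con.execute("""
    CREATE TABLE customer_phone (
        number TEXT REFERENCES customer(number),
        kind   TEXT NOT NULL,
        phone  TEXT NOT NULL,
        PRIMARY KEY (number, kind)
    )
""")
con.execute("INSERT INTO customer VALUES ('C001', 'Smith')")

# Recovering the wide shape is a join - the operation whose cost people feared.
for row in con.execute("""
    SELECT c.number, c.name, p.phone
    FROM customer c LEFT JOIN customer_phone p ON p.number = c.number
"""):
    print(row)
```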
So your "improvements" have so far been only partly achieved. One day, perhaps, we will see a DBMS built for the Relational Model. For now, we have to put up with SQL.