Because bytes should not be treated as strings, and strings should not be treated as bytes. Python 3 gets this right, however confusing it may seem to a new developer.
In Python 2.6, if I read data from a file opened with the 'r' flag, the text is assumed to be in the current locale's encoding by default and comes back as a str, while the 'rb' flag gives a sequence of bytes. Indexing the two kinds of data gives completely different results, and a method that takes a str cannot be sure whether it has been handed bytes or text. This gets worse because for ASCII data the two are often interchangeable, so code that works in simple test cases or in English locales breaks as soon as it meets non-ASCII characters.
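For example, in Python 3 the two open modes hand back genuinely different objects. A minimal sketch, assuming a UTF-8 file whose name and contents are invented for illustration:

```python
import os
import tempfile

# Write some non-ASCII text to a throwaway file (name and contents are invented).
path = os.path.join(tempfile.mkdtemp(), "example.txt")
with open(path, "w", encoding="utf-8") as f:
    f.write("naïve")

# 'r' decodes the stored bytes into a str (here with an explicit encoding;
# by default the locale's preferred encoding is used).
with open(path, "r", encoding="utf-8") as f:
    text = f.read()

# 'rb' returns the raw bytes exactly as they sit on disk.
with open(path, "rb") as f:
    raw = f.read()

print(type(text), text)    # <class 'str'> naïve
print(type(raw), raw)      # <class 'bytes'> b'na\xc3\xafve'

# Indexing is completely different: a str yields a one-character str,
# bytes yield an int, and the lengths do not even agree.
print(text[2], len(text))  # ï 5
print(raw[2], len(raw))    # 195 6
```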
Thus, there was a conscious effort to keep bytes and strings distinct: one is a sequence of plain bytes, the other a string of Unicode characters stored with whatever encoding preserves O(1) indexing for the data it holds (ASCII, UCS-2, or UTF-32, depending on the data, I think).
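In practice that separation means the two types never mix silently; crossing the boundary is always an explicit encode or decode step. A short sketch (the sample string is arbitrary):

```python
text = "héllo"               # str: a sequence of Unicode characters
data = text.encode("utf-8")  # bytes: a sequence of raw byte values

# Indexing returns a character for str, an int for bytes.
print(text[1], data[1])      # é 195

# Python 3 refuses to guess an encoding: mixing the two types is an error.
try:
    text + data
except TypeError as exc:
    print(exc)               # e.g. can only concatenate str (not "bytes") to str

# Going back from bytes to text is likewise explicit.
print(data.decode("utf-8") == text)  # True
```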
In Python 2, the unicode string existed to disambiguate text from raw bytes, but many users treated str as text anyway.
Or, to quote the Benevolent Dictator For Life:
Python's current string objects are overloaded. They serve to hold both sequences of characters and sequences of bytes. This overloading of purpose leads to confusion and bugs. In future versions of Python, string objects will be used for holding character data. The bytes object will fulfil the role of a byte container. Eventually the unicode type will be renamed to str and the old str type will be removed.
tl;dr version: Forcing the separation of bytes and str makes programmers conscious of the difference; it brings short-term frustration but better code in the long run. This was a deliberate choice made after years of experience: being made aware of the difference up front saves you days in the debugger later.