I know that several answers here say that you cannot compare pointers unless they point into the same object, but that is a red herring, and I will try to explain why. One of your pointers points to the start of your array and the other points to its end, so they do point into the same object. A language lawyer might say that if your third pointer points outside the object, the comparison is undefined, so x >= array.start could be true for any x. But that is not a problem here, since at the moment of the comparison C++ cannot know whether the array is embedded in an even larger object. Also, if your address space is linear, as it is on today's platforms, a pointer comparison will be implemented as an (un)signed integer comparison, since any other implementation would be slower. Even in the days of segments and offsets, comparing (far) pointers was implemented by first normalizing the pointers and then comparing them as integers.
What it all boils down to is this: if your compiler is any good, comparing the pointers without worrying about signedness should work as long as all you care about is whether the pointer points inside the array, because the compiler has to make pointer comparisons signed or unsigned depending on which of the two boundaries (0 or 80..00h) a C++ object is allowed to straddle.
Different platforms behave differently in this matter, so C++ has to leave it to the platform. There are even platforms on which neither the addresses around 0 nor those around 80..00h can be mapped, or on which they are already occupied when the process starts. In that case it does not matter which convention the compiler picks, since no object can straddle either boundary.
This can sometimes cause compatibility issues, though. For example, Win32 pointers are unsigned. It used to be that of the 4 GB address space, only the lower half (more precisely 10000h...7FFFFFFFh, because of the NULL pointer assignment partition) was available to applications; the high addresses were reserved for the kernel. This tempted some people into storing addresses in signed variables, and their programs kept working, since the high bit was always 0. But then came the /3GB switch, which made almost 3 GB (more precisely 10000h...BFFFFFFFh) available to applications, and such programs would crash or behave erratically.
You explicitly state that your program will be Windows-only, which uses unsigned pointers. Still, you may change your mind in the future, and using intptr_t or uintptr_t is bad for portability. I also wonder whether you should be doing this at all... if you are indexing into an array, it is probably safer to compare the indexes instead. Suppose, for example, that you have a 1 GB array at 1500000h...41500000h, made up of 16384 elements of 64 kB each. Suppose you accidentally look up index 80000 - clearly out of range. On a 32-bit system the pointer computation 1500000h + 80000 * 10000h overflows and wraps around to 39D00000h, which lies inside the array, so your pointer check will let it pass even though it should not.