In an application based on AR markers, the images (or their corresponding image descriptors) to be recognized are provided in advance. In this case, you know exactly what the application will look for when it receives camera data (camera frames). Most modern image-recognition AR applications are marker-based. Why? Because it is much easier to detect things that are hardcoded into your application.
A markerless AR application, on the other hand, recognizes things that were not provided to it in advance. This scenario is much harder to implement, because the recognition algorithm running in your AR application must identify patterns, colors, or other features that may appear in camera frames. For example, if your algorithm is capable of identifying dogs, the AR application can trigger AR actions whenever a dog is detected in a camera frame, without you having to supply images of every dog in the world when developing the application (an exaggeration, of course; in practice you would prepare a training database, for example).
In short: in a marker-based AR application that uses image recognition, the marker can be an image or its corresponding descriptors (features + keypoints). Typically, an AR marker is a black-and-white (square) image, such as a QR code. These markers are easy to recognize and track, so performing the recognition (and possibly tracking) does not require much processing power on the end device.
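To make "descriptors (features + keypoints)" concrete, here is a minimal sketch of how a marker's precomputed descriptors could be matched against descriptors extracted from a camera frame. This is an illustration, not a real AR library's API: real systems use binary descriptors such as ORB or BRIEF compared by Hamming distance; here each descriptor is just an integer standing in for a short bit string.

```python
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two integer-encoded descriptors."""
    return bin(a ^ b).count("1")

def match_descriptors(marker_desc, frame_desc, max_dist=20):
    """Greedy nearest-neighbour matching.

    Returns (marker_index, frame_index) pairs whose Hamming distance
    is below max_dist. Real matchers add a ratio test and use spatial
    data structures, but the idea is the same.
    """
    matches = []
    for i, d in enumerate(marker_desc):
        best_j, best_dist = None, max_dist + 1
        for j, f in enumerate(frame_desc):
            dist = hamming(d, f)
            if dist < best_dist:
                best_j, best_dist = j, dist
        if best_j is not None:
            matches.append((i, best_j))
    return matches

# Toy data: the frame contains the marker's features, shuffled and noisy.
marker = [0b10110010, 0b01101100, 0b11110000]
frame  = [0b11110001, 0b10110010, 0b00001111]
print(match_descriptors(marker, frame))  # → [(0, 1), (1, 2), (2, 0)]
```

If enough descriptor matches are found, the marker is considered recognized in the frame.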
There is no need for an accelerometer or compass in a marker-based application. The recognition library can calculate the placement matrix (rotation and translation) of the detected image relative to your device's camera. Once you know this, you know how far away the recognized image is and how it is rotated relative to your device's camera. And from that moment, AR begins... :)
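To illustrate what a "placement matrix (rotation and translation)" gives you, here is a small sketch in plain Python. The function names and the single-axis rotation are my own simplification, not any particular library's API: a real library returns a full 3x3 rotation from pose estimation, but the distance calculation from the translation part works the same way.

```python
import math

def placement_matrix(yaw_rad, t):
    """4x4 homogeneous transform: rotation about the Y axis by yaw_rad,
    plus a translation t = (tx, ty, tz) in camera coordinates."""
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return [
        [  c, 0.0,   s, t[0]],
        [0.0, 1.0, 0.0, t[1]],
        [ -s, 0.0,   c, t[2]],
        [0.0, 0.0, 0.0, 1.0],
    ]

def distance_to_camera(matrix):
    """Length of the translation column = how far the marker is
    from the camera center."""
    tx, ty, tz = matrix[0][3], matrix[1][3], matrix[2][3]
    return math.sqrt(tx * tx + ty * ty + tz * tz)

# A marker half a meter in front of the camera, turned 30 degrees:
M = placement_matrix(math.radians(30), (0.0, 0.0, 0.5))
print(round(distance_to_camera(M), 3))  # → 0.5
```

This matrix is exactly what you hand to your rendering layer to draw virtual content on top of the marker.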