The difficulty is this: different databases of annotated images use different sets of landmarks. For example, the IMM database has nearly 60 landmarks, while BioID has about 17. Some landmarks are "shared" between databases, and some are not.
I would like advice on how such data should be represented in Haskell. The challenge is to work with several image databases, train on them with the same tools, and be able to compare the results of predictors trained on different databases against each other.

Here is some pseudocode sketching what I mean:

    -- data
    data FaceIMM   = FaceIMM   LeftEye RightEye Nose Mouth Chin
    data FaceBioID = FaceBioID LeftEye RightEye LeftNoseTip RightNoseTip NoseTop Mouth
    ...
    -- training
    predictor <- train confParameters landmarkDescriptors positionValues
    fitter    <- meanShifter . predictors
    ...
    -- detection
    fitBioID = fitterBioID face
    fitIMM   = fitterIMM face
    ...
    -- comparison
    errorBioID = distance (fitBioID - truth)
    errorIMM   = distance (fitIMM - truth)
    compare errorBioID errorIMM
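One possible direction (a sketch, not a definitive design — all names here, including the `Landmark` constructors and the `Annotation` type, are my own illustrative choices) is to keep a single shared vocabulary of landmark names and make each annotated face a partial map, so that every database fills in only the landmarks it actually provides. Cross-database comparison then falls out of `Map.intersectionWith`:

```haskell
import qualified Data.Map.Strict as Map
import Data.Map.Strict (Map)

-- One shared vocabulary of landmark names. The constructors are
-- illustrative; extend with whatever FRANCK/IMM/BioID actually define.
data Landmark = LeftEye | RightEye | NoseTip | MouthLeft | MouthRight | Chin
  deriving (Eq, Ord, Show, Enum, Bounded)

type Point = (Double, Double)

-- An annotated face is a *partial* map: each database fills in only
-- the landmarks it provides, so one type covers all databases.
type Annotation = Map Landmark Point

-- A BioID-style annotation that happens to lack the Chin landmark.
bioidExample :: Annotation
bioidExample = Map.fromList
  [ (LeftEye,  (30.0, 40.0))
  , (RightEye, (70.0, 40.0))
  , (NoseTip,  (50.0, 60.0)) ]

-- Mean Euclidean error between a fit and the ground truth, computed
-- only over the landmarks present in both annotations -- which is
-- exactly what makes cross-database comparison possible.
fitError :: Annotation -> Annotation -> Double
fitError fit truth
  | Map.null common = 0
  | otherwise       = sum (Map.elems common) / fromIntegral (Map.size common)
  where
    common = Map.intersectionWith dist fit truth
    dist (x1, y1) (x2, y2) =
      sqrt ((x1 - x2) ^ (2 :: Int) + (y1 - y2) ^ (2 :: Int))
```

With this shape, a predictor trained on IMM and one trained on BioID can both be scored with the same `fitError`, because the function automatically restricts itself to the landmarks the two annotations have in common.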
Just to be clear, I already have "train" and "fit" functions, which currently take and return plain lists. But I want to do better.
I do not expect a fully polished data structure — just something that will help me begin to approach this problem.
EXTRA: In the future, I would also like to do:
take the "intersection" of two image databases' landmark sets and train a predictor with a smaller number of landmarks, but a larger amount of training data;
take the "union" (merge) of the two image databases' landmark sets and train another predictor with the largest number of landmarks, but probably a smaller amount of training data, since only images annotated with all of those landmarks can be used.
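If annotations are partial maps keyed by a shared `Landmark` type, both regimes reduce to set operations on `Map.keysSet`. A sketch under that assumption (the type definitions and the two example faces are made up for illustration):

```haskell
import qualified Data.Map.Strict as Map
import Data.Map.Strict (Map)
import qualified Data.Set as Set
import Data.Set (Set)

data Landmark = LeftEye | RightEye | NoseTip | Mouth | Chin
  deriving (Eq, Ord, Show)

type Point = (Double, Double)
type Annotation = Map Landmark Point

-- Hypothetical one-face samples from two databases.
immFace, bioidFace :: Annotation
immFace   = Map.fromList [(LeftEye, (1, 1)), (RightEye, (2, 1)), (Chin, (1.5, 3))]
bioidFace = Map.fromList [(LeftEye, (1, 1)), (RightEye, (2, 1)), (NoseTip, (1.5, 2))]

-- "Intersection" regime: keep only the landmarks both databases share,
-- so every image from either database can contribute training data.
sharedKeys :: Set Landmark
sharedKeys = Map.keysSet immFace `Set.intersection` Map.keysSet bioidFace

restrict :: Set Landmark -> Annotation -> Annotation
restrict ks ann = Map.restrictKeys ann ks

-- "Union" regime: the merged landmark set; only annotations that cover
-- all of these landmarks remain usable, hence the smaller data set.
allKeys :: Set Landmark
allKeys = Map.keysSet immFace `Set.union` Map.keysSet bioidFace

covers :: Set Landmark -> Annotation -> Bool
covers ks ann = ks `Set.isSubsetOf` Map.keysSet ann
```

For the intersection-trained predictor you would map `restrict sharedKeys` over both databases; for the union-trained one you would filter with `covers allKeys`, which is where the reduction in usable data comes from.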
FRANCK: link to franck database
IMM: link to IMM database
BioID: link to the BioID database