- I want to get maximum performance from a process with many variables, many of which cannot be controlled.
- I cannot run thousands of experiments, so it would be nice if I could run hundreds of experiments and
- vary the parameters I can control
- collect data on many performance indicators
- "correct", as far as possible, for those parameters that I could not control.
- Separate the "best" values ββfor the things that I can control, and start all over again.
It seems like this would be called data mining: looking at a lot of data that doesn't appear to be connected at first, but shows correlations after some effort.
So... where do I start looking for the algorithms, concepts, and theory behind this kind of thing? Even useful search terms would be helpful.
Background: I'm into ultra-marathon cycling and keep a journal of every ride. I'd like to log more data, and after a few hundred rides be able to pull out information about how I perform.
Everything changes, though: routes, environment (temperature, pressure, noise, solar load, wind, drafting, etc.), fueling, ratios, weight, hydration, and so on. I can control several of these, but riding the same route 20 times to test a new fueling regimen would be depressing, and it would take years to complete all the experiments I'd like to do. I can, however, record all of this and much more (bike telemetry FTW).