I have been working on a fuzzy logic SDK for the past 3 months, and it has come to the point where I need to start heavily optimizing the engine.
As with most "utility" or "needs" based AI systems, my code works by placing various advertisements around the world, comparing those advertisements against the attributes of the various agents, and "scoring" the ads [per agent] accordingly.
This, in turn, produces very repetitive graphs for most single-agent simulations. However, once multiple agents are taken into account, the system becomes very complex and much more difficult for my computer to simulate (since agents can broadcast advertisements to each other, effectively creating an NP-hard problem).
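For concreteness, here is a minimal sketch of the scoring loop described above. Everything in it (the `Advertisement`/`Agent` shapes and the weighted-sum score) is a hypothetical reconstruction for illustration, not the SDK's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Advertisement:
    # How strongly this ad appeals to each attribute,
    # e.g. {"hunger": 0.8, "energy": 0.1}
    appeal: dict[str, float] = field(default_factory=dict)

@dataclass
class Agent:
    # Current need level per attribute: 0.0 (satisfied) to 1.0 (urgent)
    needs: dict[str, float] = field(default_factory=dict)

def score_ad(agent: Agent, ad: Advertisement) -> float:
    """Score one ad for one agent: appeal weighted by current need."""
    return sum(weight * agent.needs.get(attr, 0.0)
               for attr, weight in ad.appeal.items())

def best_ad(agent: Agent, ads: list[Advertisement]) -> Advertisement:
    """Pick the highest-scoring ad for this agent."""
    return max(ads, key=lambda ad: score_ad(agent, ad))
```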
Below: an example of the system's repetitiveness, calculated across 3 attributes for a single agent:
And below: an example of the same system calculated for 3 attributes and 8 agents:

(It collapses at the start and recovers shortly afterwards. This is the best example I could produce that would fit in a single image, since recovery is usually very slow.)
As both examples show, even as the number of agents increases, the system is still highly repetitive and therefore wastes valuable computation time.
I am trying to restructure the program so that, during periods of high repetitiveness, the Update function simply replays the repeating portion of the line graph instead of recomputing it.
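One way to structure that replay, sketched below under the assumption that the engine exposes a single expensive `simulate_step` callback and that some external detector decides when the output has become periodic (both names are hypothetical):

```python
import itertools

class UpdateLoop:
    def __init__(self, simulate_step):
        self.simulate_step = simulate_step  # the expensive real update
        self.replay = None                  # playback iterator when stable

    def mark_stable(self, recorded_period):
        """Call once the detector decides the graph is repeating;
        recorded_period is one full cycle of previously computed outputs."""
        self.replay = itertools.cycle(list(recorded_period))

    def mark_unstable(self):
        """Call when a collapse is detected; resume real simulation."""
        self.replay = None

    def update(self, state):
        if self.replay is not None:
            return next(self.replay)      # cheap playback, no simulation
        return self.simulate_step(state)  # full fuzzy-logic evaluation
```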
Although my fuzzy logic code can pre-calculate when the system will collapse and/or stabilize, doing so is extremely hard on my processor. I believe machine learning would be the best approach, since it seems that once the system has finished its initial setup, the instability periods are always roughly the same length. (They occur at semi-random times, however. I say "semi" because the onset is usually easy to spot from distinctive patterns in the graph; but, like the instability length, those patterns vary widely from configuration to configuration.)
Obviously, if the unstable periods all have the same duration, then once I know when the system will collapse, it is much easier to work out when it will reach equilibrium.
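Given that observation, it may be worth trying a trivial baseline before any heavy ML: per configuration, keep a running mean of the observed instability durations and predict re-stabilization as collapse onset plus that mean. A sketch (all names hypothetical):

```python
class InstabilityDurationEstimator:
    """Running mean of instability durations for one configuration."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def observe(self, duration: float):
        """Record one completed instability period (collapse -> equilibrium)."""
        self.n += 1
        self.mean += (duration - self.mean) / self.n

    def predict_equilibrium(self, collapse_time: float) -> float:
        """Given a detected collapse onset, estimate when stability returns."""
        return collapse_time + self.mean
```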
As a side note about this system: not all configurations are 100% stable during their repetition periods.
This is shown very clearly in this graph:

Thus, a machine learning solution would have to distinguish between pseudo-collapses and total collapses.
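If labeled examples of both dip types can be logged, this reduces to binary classification over a window of the graph around each dip. Below is a hedged sketch using scikit-learn's `RandomForestClassifier` as one reasonable default; the hand-picked window features are an assumption and would need validating against the real data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(window: np.ndarray) -> np.ndarray:
    """Summarize a fixed-length slice of the utility graph around a dip.
    These four features are guesses, not validated choices."""
    return np.array([
        window.min(),                   # depth of the dip
        window.var(),                   # how violent the swing is
        window[-1] - window[0],         # net drift over the window
        np.abs(np.diff(window)).mean()  # average step-to-step change
    ])

def train(dips: list[np.ndarray], labels: list[int]) -> RandomForestClassifier:
    """dips: one graph window per labeled event;
    labels: 0 = pseudo-collapse, 1 = total collapse."""
    X = np.stack([window_features(w) for w in dips])
    clf = RandomForestClassifier(n_estimators=200)
    return clf.fit(X, np.asarray(labels))
```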
How viable would an ML solution be here? Can anyone recommend any algorithms or implementation approaches that would work best?
As for available resources: the scoring code does not map to parallel architectures at all (because of the explicit relationships between agents), so if I need to dedicate one or two CPU threads to these calculations, so be it. (I would prefer not to use the GPU for this, since the GPU is already heavily taxed by the non-AI parts of my program.)
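Dedicating a thread to the predictor could look roughly like the following, with a queue handing graph windows off the main update thread (the stub model stands in for whatever classifier or estimator ends up being used; in a C++ engine this maps to a worker thread plus a concurrent queue):

```python
import queue
import threading

class StubModel:
    """Placeholder for whatever trained predictor ends up being used."""
    def predict(self, window):
        return 0  # e.g. 0 = pseudo-collapse, 1 = total collapse

samples = queue.Queue(maxsize=1024)  # main update thread pushes windows here
predictions = queue.Queue()          # worker pushes verdicts back

def predictor_worker(model):
    while True:
        window = samples.get()
        if window is None:           # sentinel: shut the worker down
            break
        predictions.put(model.predict(window))

worker = threading.Thread(target=predictor_worker,
                          args=(StubModel(),), daemon=True)
worker.start()
```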
Although it most likely won't matter, the system the code runs on has 18 GB of RAM to spare at runtime, so a potentially memory-heavy solution would certainly be viable. (Although I would prefer to avoid that if possible.)