# Load and process the Test 1 runs (all learning parameters, 1000 ticks)
dataResult <- processDataTest1(path, prefix, names)

Analysis of all learning parameters. Runs with 1000 ticks

With these variables held constant: munsell, radius 9, chordal, tolerance 0.2

In this analysis we determine which exploration factor is the best option for running the experiments.

# Load and process the Test 2 runs (exploration factor analysis)
dataResult <- processDataTest2(path, prefix, names)

Charts

Observation

Overall, the exploration factors 20 and 40 achieve quality stability earlier than the others.


Analysis of all learning parameters. By Decreasing Factor. Exploration rate set to 40

With these variables held constant: munsell, radius 9, chordal, tolerance 0.2

In this analysis we determine which decreasing factor is the best option for running the experiments. We ran the simulation 10 times with each of the decreasing factor values: 20, 40, 60, 80, and 100.

# Load and process the Test 3 runs (decreasing factor analysis)
dataResult <- processDataTest3()
## [1] "data/Test_03_ByDecreasingFactor_Radius12_1000Ticks_ L_Rate1_Exploration40/munsell, radius, 12, chordal, tole 0.2, lRate 1, decr 20, expl 40, run 6.csv.txt"
## [1] 2
## [1] "Empty File"

## Warning: Removed 961 row(s) containing missing values (geom_path).

Observation

Overall, the decreasing factor 80 achieves quality stability earlier than the others.


By Average

Analysis of decreasing factor slopes
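
As a sketch of how these slopes could be computed, assuming a long-format dataResult with hypothetical columns decr, tick, and quality: we average the ten runs at each tick and then fit a line per decreasing factor.

library(dplyr)

# Average the ten runs at each tick, then fit a line per decreasing factor;
# the slope estimates how fast quality grows under that setting.
slopes <- dataResult %>%
  group_by(decr, tick) %>%
  summarise(meanQ = mean(quality, na.rm = TRUE)) %>%
  group_by(decr) %>%
  summarise(slope = coef(lm(meanQ ~ tick))[["tick"]])
slopes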



Analysis of Tolerance and Radius of Interaction Domain on Learning Quality

Having calibrated the learning parameters of the learning agent, we seek to understand to what extent the quality of learning is affected by the radius of the interaction domain and the tolerance to imperfect actions. The values of these two variables are the same for all the agents. As a reminder, the radius of the domain determines the chance of interacting with others, whereas the tolerance to imperfection defines whether or not an agent acts given the reward of performing that action.

Our intuition is that the quality of learning is negatively affected by the tolerance, because the agent’s assessment of the model quality depends strongly on the responsive actions of all the surrounding interactants. We also expect the quality of learning to benefit from the larger number of interactants encompassed by a greater domain, because the model is then informed by more hues.

For this study, we collected data from the learning agent under 5 conditions of the radius variable, which defines the size of the agent’s domain, and 5 conditions of the tolerance variable. The data collected are the relative learning quality (the quality perceived by the agent), the absolute learning quality (the similarity between the agent’s color model and the model the other agents are using), and the social viscosity.
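
As a sketch of the layout this design implies (the numeric levels, the pooled name, and the column names below are placeholders, not the project’s actual identifiers):

# 5 radius levels x 5 tolerance levels, 10 runs each; the numeric levels
# below are placeholders for the values used in the experiments.
conditions <- expand.grid(radius    = c(3, 6, 9, 12, 15),
                          tolerance = c(0.1, 0.2, 0.3, 0.4, 0.5),
                          run       = 1:10)

# The pooled data frame would then hold one row per tick per run, joined
# to its condition, with the three dependent measures as columns:
#   radius, tolerance, run, tick, relativeQuality, absoluteQuality, viscosity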

We would also like to understand the effect of the learning agent on the global social viscosity.

To do so, we compute three ANOVAs, one for each dependent variable: relative learning quality, absolute learning quality, and social viscosity.

Since we have ten runs for each independent condition, we are looking for a mechanism to summarize them into a single dataset while preserving as much information as possible. Our best guess is to derive a Markov chain of quality probabilities from the pooled datasets. The result would be a sequence of probabilities accounting for the chances of reaching a given quality level given the preceding qualities, which in turn responded to the contextual circumstances in which the agent learned. We are not ready to make this analysis at this time and will keep looking for the right way to do it. We anticipate that we may need to discretize the quality values in order to assign probabilities to each q value.
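
As a first step toward that idea, the sketch below discretizes a quality series and estimates first-order transition probabilities; the bin width, the assumed [0, 1] range of q, and the stand-in series are all placeholders.

# `q` stands in for one run's quality time series; in practice it would
# come from the pooled data.
q <- runif(1000)

# Discretize quality into ten bins, assuming q lies in [0, 1].
state <- cut(q, breaks = seq(0, 1, by = 0.1), include.lowest = TRUE)

# Count transitions between consecutive ticks and row-normalize to get
# first-order Markov probabilities P(next state | current state).
trans <- table(current = head(state, -1), nxt = tail(state, -1))
probs <- prop.table(trans, margin = 1)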

In the meantime, we will take a simple average of the q values in each set of ten runs and compute the ANOVAs.
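
A sketch of that computation, with stand-in data in place of the pooled results and hypothetical column names:

library(dplyr)

# Stand-in for the pooled long data: one row per tick per run, tagged with
# its condition (levels and quality values here are placeholders).
pooled <- expand.grid(radius = c(3, 6, 9, 12, 15),
                      tolerance = c(0.1, 0.2, 0.3, 0.4, 0.5),
                      run = 1:10, tick = 1:100)
pooled$relativeQuality <- runif(nrow(pooled))

# Collapse each run's time series to a single mean q, keeping the ten runs
# per condition as replicates for the ANOVA.
runMeans <- pooled %>%
  group_by(radius, tolerance, run) %>%
  summarise(q = mean(relativeQuality)) %>%
  ungroup()

# One ANOVA per dependent variable; relative quality is shown here, and the
# absolute quality and viscosity would be treated the same way.
fit <- aov(q ~ factor(radius) * factor(tolerance), data = runMeans)
summary(fit)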