I'm in the process of reimplementing the Fuzzy Miner plugin in R as part of a simulation study for my PhD dissertation. I've successfully implemented the functionality I need, but I can't replicate the plugin's calculations for the relative importance metric in the Concurrency Filter. I'm unsure whether this is a bug in the Fuzzy Miner code or whether the calculation is supposed to work the way the code does it.
Essentially, the processRelationPair() method reads the aggregate binary significance from the graph object, computes the relative importance for the two orders of the event pair, and then applies logic to decide which edges, if any, to keep. The problem is that the method also writes to that same graph object, which means that all subsequent calculations are affected by the edges eliminated so far.
For example, if edges (2,3) and (3,2) are eliminated, the calculation of sigSourceOutAcc and sigTargetInAcc for every pair processed afterwards is affected, because the graph object now has those two edge values set to 0. This obviously produces inconsistent and incorrect results: the order in which the events are listed determines which edges are still present at the beginning and end of the processing loop.
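To make the order dependence concrete, here is a minimal self-contained sketch (my own simplification, NOT the actual ProM code): the significance matrix, the relative importance formula, and the "remove both if both directions are weak" rule are stand-ins for the real aggregate binary significance and thresholds, but they reproduce the structural problem that processing pairs in a different order yields different surviving edges.

```java
// Sketch (not ProM code) of how mutating the graph inside the filtering
// loop makes the Concurrency Filter's outcome depend on pair order.
public class ConcurrencyFilterSketch {

    // Hypothetical relative importance of edge (a, b): the edge's share of
    // a's total outgoing significance plus its share of b's total incoming
    // significance (analogues of sigSourceOutAcc and sigTargetInAcc).
    static double relativeImportance(double[][] sig, int a, int b) {
        double outAcc = 0.0, inAcc = 0.0;
        for (int k = 0; k < sig.length; k++) {
            outAcc += sig[a][k];
            inAcc  += sig[k][b];
        }
        return sig[a][b] / outAcc + sig[a][b] / inAcc;
    }

    // Simplified decision rule: if both directions fall below the preserve
    // threshold, remove both edges -- in the SAME matrix that later pairs
    // will be evaluated against.
    static void processRelationPair(double[][] sig, int a, int b, double preserve) {
        if (relativeImportance(sig, a, b) < preserve
                && relativeImportance(sig, b, a) < preserve) {
            sig[a][b] = 0.0;
            sig[b][a] = 0.0;
        }
    }

    static double[][] copy(double[][] m) {
        double[][] c = new double[m.length][];
        for (int i = 0; i < m.length; i++) c[i] = m[i].clone();
        return c;
    }

    public static void main(String[] args) {
        double[][] base = {
            {0.0, 0.4, 0.1},
            {0.4, 0.0, 0.1},
            {0.1, 0.1, 0.0}
        };
        double preserve = 0.8;

        // Order A: pair (0,2) first, then (1,2).
        double[][] a = copy(base);
        processRelationPair(a, 0, 2, preserve);
        processRelationPair(a, 1, 2, preserve);

        // Order B: pair (1,2) first, then (0,2).
        double[][] b = copy(base);
        processRelationPair(b, 1, 2, preserve);
        processRelationPair(b, 0, 2, preserve);

        // Same data, same threshold, different surviving edges.
        System.out.println("order A keeps (1,2): " + (a[1][2] > 0)); // true
        System.out.println("order B keeps (1,2): " + (b[1][2] > 0)); // false
    }
}
```

In order A, removing (0,2) and (2,0) first shrinks event 2's incoming accumulator, which pushes (1,2) above the threshold; in order B the same pair (1,2) is evaluated against the full matrix and gets removed. Same input, two different results.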
As I understand Guenther's PhD dissertation (and the paper that presents the Fuzzy Miner), the relative importance metric is defined over ALL outgoing edges of the source and ALL incoming edges of the target. Shouldn't processRelationPair() read directly from metricsRepository and only write to graph, so that edge values are never overwritten mid-computation?
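For what it's worth, the fix I have in mind would look roughly like this (again my own simplified stand-in, not the actual ProM API): phase 1 decides every removal against an untouched, read-only set of significance values (playing the role of metricsRepository), and phase 2 applies all removals to the graph, so no pair's decision can be influenced by an earlier pair's removal.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the proposed two-phase approach: read-only decisions first,
// graph mutations second, making the result independent of pair order.
public class TwoPhaseConcurrencyFilter {

    // Same hypothetical relative importance as before: the edge's share of
    // the source's outgoing and the target's incoming significance.
    static double relativeImportance(double[][] sig, int a, int b) {
        double outAcc = 0.0, inAcc = 0.0;
        for (int k = 0; k < sig.length; k++) {
            outAcc += sig[a][k];
            inAcc  += sig[k][b];
        }
        return sig[a][b] / outAcc + sig[a][b] / inAcc;
    }

    // Phase 1: collect removals; 'sig' is only read, never written.
    static List<int[]> collectRemovals(double[][] sig, double preserve) {
        List<int[]> removals = new ArrayList<>();
        for (int a = 0; a < sig.length; a++) {
            for (int b = a + 1; b < sig.length; b++) {
                if (sig[a][b] == 0.0 || sig[b][a] == 0.0) continue; // no conflict pair
                if (relativeImportance(sig, a, b) < preserve
                        && relativeImportance(sig, b, a) < preserve) {
                    removals.add(new int[] {a, b});
                    removals.add(new int[] {b, a});
                }
            }
        }
        return removals;
    }

    // Phase 2: apply every removal to the mutable graph at once.
    static void apply(double[][] graph, List<int[]> removals) {
        for (int[] e : removals) graph[e[0]][e[1]] = 0.0;
    }
}
```

With the three-event matrix from my earlier example, both conflict pairs (0,2) and (1,2) are now judged against the same unmodified values, so both are removed no matter in which order the loop visits them.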