I came across a research paper on the integration of ProM and Hadoop to process huge files instead of a static log.
I've looked through the plugins but found none for this integration.
Can anybody explain the process? Are the plugins re-implemented in Hadoop, or is there a connection between the two technologies, and if so, how is it done?
I've read a paper but didn't get it.
Comments
Dear Muna,
The research paper you refer to, i.e., https://pdfs.semanticscholar.org/88b8/6ac0207119d0aedb0e45b622b344dfe23050.pdf (which also has a more academic, sadly lesser-known "sister paper", i.e., https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7406336), is quite a few years old and is based on a rather old version of Hadoop.
The core idea of the implementation is simple: establish a connection to a pre-defined Hadoop cluster and run a pre-implemented algorithm (alpha miner / heuristics miner) on an event log that is stored on the cluster.
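For illustration only (this is not the actual ProM Hadoop package code, which you can find in the repository linked below), a minimal sketch of that first step, reading an event log stored on a pre-defined cluster via the standard HDFS client API, could look like this; the cluster address and log path are hypothetical placeholders:

import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HadoopEventLogReader {

    public static void main(String[] args) throws Exception {
        // Hypothetical namenode address; in the plugin this would come from
        // the user's cluster configuration.
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode.example.org:8020");

        try (FileSystem fs = FileSystem.get(conf);
             BufferedReader reader = new BufferedReader(
                     new InputStreamReader(fs.open(new Path("/logs/eventlog.xes"))))) {
            // At this point the event log would be parsed and handed to a
            // discovery algorithm (e.g., alpha miner or heuristics miner).
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}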
As the paper primarily served as a proof of concept and is not really useful in research, the corresponding ProM package has been disabled.
However, the source code is still accessible through https://svn.win.tue.nl/repos/prom/Packages/Hadoop/Trunk/.
Hope this helps.
Sebastiaan.
(Posted by Eric Verbeek on behalf of Sebastiaan van Zelst).