My understanding is that much of the data collected at the Large Hadron Collider is similar to that in the first image below, and that the vast majority of it contains little of specific and immediate interest.
As I understand it, data is recorded at about 700 MB/s, and roughly 15 petabytes are collected per year.
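As a rough sanity check on those two figures (my own back-of-envelope arithmetic, not an official CERN number), they seem broadly consistent with each other, given that the accelerator doesn't deliver beam year-round:

```python
# Back-of-envelope check: how long would the experiments need to record
# at 700 MB/s to accumulate ~15 PB in a year?

rate_bytes_per_s = 700e6        # quoted recording rate, 700 MB/s
yearly_volume_bytes = 15e15     # quoted yearly volume, ~15 PB

recording_seconds = yearly_volume_bytes / rate_bytes_per_s
recording_days = recording_seconds / 86400

print(f"Implied recording time: {recording_seconds:.2e} s (~{recording_days:.0f} days)")
# ~2.1e7 s, i.e. roughly 250 days of continuous recording per year.
```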
I assume this is far too much data to examine manually, as the second image suggests.
What automated methods are used by physicists to cull the vast amount of data collected and find results that are of interest?
I'm not referring to the vast number of computers throughout the world that are aggregated to do the data reduction; what I'm looking for is the methods themselves. I'm particularly interested in whether AI or neural networks are being used, and if so, how. What papers have been published on this?
If someone could help with the tags on this question, I'd appreciate it.
[Image by Lucas Taylor / CERN, http://cdsweb.cern.ch/record/628469, CC BY-SA 3.0]
[Image by Fermilab, http://history.fnal.gov/brochure_src/events_study.html]