Every year, up to 30 petabytes of data are captured from the Large Hadron Collider (LHC) at CERN, the European Organization for Nuclear Research. One petabyte of this data is processed offline every day using 11,000 servers with 100,000 processor cores. Even this huge volume represents only a very small fraction of the raw data generated by sensors in the collider's trigger system, which observes 40 million events per second. For the Compact Muon Solenoid (CMS) [1] experiment, the design and construction of a large portion of the Level-1 trigger for muon detection will become even more challenging, with nearly one billion collisions occurring per second as a result of the increased luminosity from proposed upgrades to the LHC. To meet the very strict requirements of the experiment, efficient algorithms must be conceived and implemented to perform the required physics analytics very quickly. In this work, we present our approach to applying deep learning to identify rarely produced particles (such as the Higgs boson) within data dominated by background noise. Because low latency is essential in the trigger electronics, a fast and efficient discrimination system translates to less data being stored and, subsequently, reduced processing time. We also examine how a generalized version of our approach could advance the state of the art in experimenting with deep learning models for research in high-energy physics.
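The abstract's core idea, a learned discriminator that separates rare signal events from background-dominated data, can be sketched in miniature. The paper's actual network architecture, input features, and datasets are not specified here; the example below is a hypothetical illustration using synthetic Gaussian "signal" and "background" events and a tiny one-hidden-layer network trained with plain gradient descent on binary cross-entropy.

```python
import numpy as np

# Hypothetical sketch: synthetic signal/background discrimination.
# Feature dimensions, shifts, and network size are illustrative assumptions,
# not the configuration used in the paper.
rng = np.random.default_rng(0)

n = 2000
bkg = rng.normal(0.0, 1.0, size=(n, 4))   # "background" events near the origin
sig = rng.normal(1.5, 1.0, size=(n, 4))   # "signal" events, shifted features
X = np.vstack([bkg, sig])
y = np.concatenate([np.zeros(n), np.ones(n)])

# One hidden layer (tanh), sigmoid output giving P(signal | features).
W1 = rng.normal(0, 0.1, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(500):
    h = np.tanh(X @ W1 + b1)              # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()      # predicted signal probability
    # Gradients of mean binary cross-entropy loss.
    d_out = (p - y)[:, None] / len(y)
    dW2 = h.T @ d_out; db2 = d_out.sum(0)
    d_h = (d_out @ W2.T) * (1.0 - h**2)
    dW1 = X.T @ d_h; db1 = d_h.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

acc = float(((p > 0.5) == y).mean())
print(f"training accuracy: {acc:.3f}")
```

In a real trigger setting the discriminator's output would gate which events are kept, so inference latency, not training time, is the binding constraint; this sketch only shows the classification step itself.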