This query concerns only resources held in libraries.
32 results
    • Article

    A Theory of Life as Information-Based Interpretation of Selecting Environments

    Rohr, David
    Biosemiotics, 2014, Vol.7(3), pp.429-446 [Peer Reviewed Journal]
    Springer Science & Business Media B.V.
    Available
    Title: A Theory of Life as Information-Based Interpretation of Selecting Environments
    Author: Rohr, David
    Subject: Information ; Interpretation ; Semiosis ; Charles Sanders Peirce ; DNA ; Brain ; Language
    Description: This essay employs Charles Peirce’s triadic semiotics in order to develop a biosemiotic theory of life that is capable of illuminating the function of information in living systems. Specifically, I argue that the relationship between biological information structures (DNA, brains, and human languages), selecting environments, and the adapted bodily processes of living organisms is aptly modelled by the irreducibly triadic relationship between Peirce’s sign, object, and interpretant, respectively. In each instance of information-based semiosis, the information structure (genome, brain, or language) is a complex informational sign that represents the informational object (the present environment according to the respects in which recurrent features of the selecting environment have proved salient over the course of a history of natural selection); and the bodily, behavioral, mental, or intellectual processes that are organized by the informational sign to more or less accurately interpret the present environment constitute a complex informational interpretant—a living, interpreting organism. The essay begins by discussing the precise sense in which this biosemiotic theory is based upon Charles Peirce’s semiotic theory. Next, the theory is developed at length in relation to genetic information structures. Finally, I present a brief outline of how the theory applies to neural and linguistic information structures. The essay concludes with a reflection upon the anti-reductionist implications of the theory.
    Is part of: Biosemiotics, 2014, Vol.7(3), pp.429-446
    Identifier: 1875-1342 (ISSN); 1875-1350 (E-ISSN); 10.1007/s12304-014-9201-4 (DOI)

    • Article

    Data processing and online reconstruction

    Rohr, David
    arXiv.org, Nov 28, 2018
    Available
    Title: Data processing and online reconstruction
    Author: Rohr, David
    Contributor: Rohr, David (pacrepositoryorg)
    Subject: Data Processing ; Computation ; Reconstruction ; Data Compression ; Farms ; Experiments ; Accelerators
    Description: In the upcoming upgrades for Run 3 and 4, the LHC will significantly increase Pb-Pb and pp interaction rates. This is accompanied by upgrades of all experiments (ALICE, ATLAS, CMS, and LHCb), covering both the detectors and the computing. The online processing farms must employ faster, more efficient reconstruction algorithms to cope with the increased data rates, and data compression factors must increase to fit the data into the affordable capacity for permanent storage. Due to different operating conditions and aims, the experiments follow different approaches, but there are several common trends, such as more extensive online computing and the adoption of hardware accelerators. This paper gives an overview of and compares the data processing approaches and the online computing farms of the LHC experiments today in Run 2 and for the upcoming LHC Runs 3 and 4.
    Is part of: arXiv.org, Nov 28, 2018
    Identifier: 2331-8422 (E-ISSN)

    • Article

    Tracking performance in high multiplicities environment at ALICE

    Rohr, David
    Cornell University
    Available
    Title: Tracking performance in high multiplicities environment at ALICE
    Author: Rohr, David
    Subject: Physics - Instrumentation And Detectors ; Nuclear Experiment
    Description: In LHC Run 3, ALICE will increase the data-taking rate significantly to 50 kHz continuous readout of minimum-bias Pb-Pb events. This challenges the online and offline computing infrastructure, requiring it to process 50 times as many events per second as in Run 2 and to increase the data compression ratio from 5 to 20. Such high data compression cannot be achieved with lossless ZIP-like algorithms alone; it must instead use results from online reconstruction, which in turn requires online calibration. These important online processing steps are the most computing-intensive ones and will use GPUs as hardware accelerators. The new online features are already under test during Run 2 in the High Level Trigger (HLT) online processing farm. The TPC (Time Projection Chamber) tracking algorithm for Run 3 is derived from the current HLT online tracking and is based on the Cellular Automaton and the Kalman Filter (a toy sketch of the Kalman-filter fit step follows this record). The HLT has deployed online calibration for the TPC drift time, which needs to be extended to the calibration of space-charge distortions. This requires online reconstruction for additional detectors such as the TRD (Transition Radiation Detector) and the TOF (Time Of Flight). We present prototypes of these developments, in particular a data compression algorithm that achieves a compression factor of 9 on Run 2 TPC data, and the efficiency of online TRD tracking. We give an outlook on the challenges of TPC tracking with continuous readout. Comment: 6 pages, 8 figures, contribution to LHCP2017 conference
    Identifier: 1709.00618 (ARXIV ID)
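
    The record above describes a tracker built on a Cellular Automaton seed finder followed by a Kalman-filter fit. As a rough, self-contained illustration of the Kalman half only, the C++ sketch below propagates a straight-line track state (position, slope) from pad row to pad row and updates it with measured cluster positions. All names, step sizes, and measurement errors are assumptions made for the example; this is not the ALICE code.

        // Hypothetical, minimal sketch of a Kalman-filter track-fit step:
        // a straight-line track y(x) with state (y, slope) is propagated to the
        // next pad row and updated with the measured cluster position there.
        #include <cstdio>

        struct Track {
            double y, t;        // position and slope dy/dx
            double C[2][2];     // covariance of (y, t)
        };

        // Propagate the state over a step dx (transport matrix F = [[1, dx], [0, 1]]).
        void propagate(Track& s, double dx) {
            s.y += s.t * dx;
            double c00 = s.C[0][0] + 2 * dx * s.C[0][1] + dx * dx * s.C[1][1];
            double c01 = s.C[0][1] + dx * s.C[1][1];
            s.C[0][0] = c00; s.C[0][1] = s.C[1][0] = c01;
        }

        // Update with a measurement m of y having variance sigma2 (H = [1, 0]).
        void update(Track& s, double m, double sigma2) {
            double S = s.C[0][0] + sigma2;               // innovation variance
            double K0 = s.C[0][0] / S, K1 = s.C[1][0] / S;
            double r = m - s.y;                          // residual
            s.y += K0 * r; s.t += K1 * r;
            double c00 = (1 - K0) * s.C[0][0], c01 = (1 - K0) * s.C[0][1];
            double c11 = s.C[1][1] - K1 * s.C[0][1];
            s.C[0][0] = c00; s.C[0][1] = s.C[1][0] = c01; s.C[1][1] = c11;
        }

        int main() {
            Track trk{0.0, 0.0, {{100.0, 0.0}, {0.0, 1.0}}};  // loose initial covariance
            const double dx = 1.0, sigma2 = 0.01;             // assumed row pitch / cluster error
            const double hits[] = {0.10, 0.21, 0.29, 0.42};   // toy cluster positions
            for (double m : hits) { propagate(trk, dx); update(trk, m, sigma2); }
            std::printf("fitted y=%.3f slope=%.3f\n", trk.y, trk.t);
            return 0;
        }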

    • Article

    Tracking performance in high multiplicities environment at ALICE

    Rohr, David
    arXiv.org, Sep 2, 2017
    Available
    Title: Tracking performance in high multiplicities environment at ALICE
    Author: Rohr, David
    Contributor: Rohr, David (pacrepositoryorg)
    Subject: Calibration ; Data Compression ; Radiation Counters ; Computation ; Cellular Automata ; Lossless ; Space Charge ; Algorithms ; Tracking ; Reconstruction ; Compression Ratio ; Accelerators ; Kalman Filters ; Radiation Detectors
    Description: In LHC Run 3, ALICE will increase the data-taking rate significantly to 50 kHz continuous readout of minimum-bias Pb-Pb events. This challenges the online and offline computing infrastructure, requiring it to process 50 times as many events per second as in Run 2 and to increase the data compression ratio from 5 to 20. Such high data compression cannot be achieved with lossless ZIP-like algorithms alone; it must instead use results from online reconstruction, which in turn requires online calibration. These important online processing steps are the most computing-intensive ones and will use GPUs as hardware accelerators. The new online features are already under test during Run 2 in the High Level Trigger (HLT) online processing farm. The TPC (Time Projection Chamber) tracking algorithm for Run 3 is derived from the current HLT online tracking and is based on the Cellular Automaton and the Kalman Filter. The HLT has deployed online calibration for the TPC drift time, which needs to be extended to space-charge distortions...
    Is part of: arXiv.org, Sep 2, 2017
    Identifier: 2331-8422 (E-ISSN)

    • Article

    Data processing and online reconstruction

    Rohr, David
    Cornell University
    Available
    Title: Data processing and online reconstruction
    Author: Rohr, David
    Subject: Physics - Instrumentation And Detectors
    Description: In the upcoming upgrades for Run 3 and 4, the LHC will significantly increase Pb-Pb and pp interaction rates. This is accompanied by upgrades of all experiments (ALICE, ATLAS, CMS, and LHCb), covering both the detectors and the computing. The online processing farms must employ faster, more efficient reconstruction algorithms to cope with the increased data rates, and data compression factors must increase to fit the data into the affordable capacity for permanent storage. Due to different operating conditions and aims, the experiments follow different approaches, but there are several common trends, such as more extensive online computing and the adoption of hardware accelerators. This paper gives an overview of and compares the data processing approaches and the online computing farms of the LHC experiments today in Run 2 and for the upcoming LHC Runs 3 and 4. Comment: 6 pages, 0 figures, contribution to LHCP2018 conference
    Identifier: 1811.11485 (ARXIV ID)

    • Several versions

    The L-CSC cluster: Optimizing power efficiency to become the greenest supercomputer in the world in the Green500 list of November 2014

    Rohr, David, Neskovic, Gvozden, Lindenstruth, Volker
    arXiv.org, Nov 28, 2018 [Peer Reviewed Journal]

    • Article

    Optimized HPL for AMD GPU and multi-core CPU usage

    Bach, Matthias, Kretz, Matthias, Lindenstruth, Volker, Rohr, David
    Computer Science - Research and Development, 2011, Vol.26(3), pp.153-164 [Peer Reviewed Journal]
    Springer Science & Business Media B.V.
    Available
    Title: Optimized HPL for AMD GPU and multi-core CPU usage
    Author: Bach, Matthias; Kretz, Matthias; Lindenstruth, Volker; Rohr, David
    Subject: Heterogeneous computing ; Linpack ; HPL ; DGEMM ; CALDGEMM ; GPGPU
    Description: The installation of the LOEWE-CSC ( http://csc.uni-frankfurt.de/csc/?51 ) supercomputer at the Goethe University in Frankfurt led to the development of a Linpack implementation that can fully utilize the installed AMD Cypress GPUs. At its core, a fast DGEMM for combined GPU and CPU usage was created. The DGEMM library is tuned to hide all DMA transfer times and thus maximize the GPU load. A work-stealing scheduler was implemented to add the remaining CPU resources to the DGEMM (a toy sketch of the GPU/CPU work split follows this record). On the GPU, the DGEMM achieves 497 GFlop/s (90.9% of the theoretical peak). Combined with the 24-core Magny-Cours CPUs, 623 GFlop/s (83.6% of the peak) are achieved. The HPL benchmark was modified to perform well with one MPI process per node. The modifications include multi-threading, vectorization, use of the GPU DGEMM, cache optimizations, and a new lookahead algorithm. A Linpack performance of 70% of the theoretical peak is achieved, and this performance scales linearly to hundreds of nodes.
    Is part of: Computer Science - Research and Development, 2011, Vol.26(3), pp.153-164
    Identifier: 1865-2034 (ISSN); 1865-2042 (E-ISSN); 10.1007/s00450-011-0161-5 (DOI)
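
    As a toy illustration of the combined GPU+CPU DGEMM idea described above, the sketch below divides the rows of the output matrix between the GPU and the CPU cores in proportion to their sustained DGEMM rates, so that both sides finish at roughly the same time. The performance figures are the ones quoted in the abstract; the matrix size and the row-wise split are assumptions for the example, and no real DGEMM library API is used.

        // Hypothetical sketch of a GPU/CPU work split for a combined DGEMM.
        // In the real library the GPU part is further tiled so that DMA transfers
        // of one tile overlap with the computation of the previous one, and idle
        // CPU cores steal tiles from a shared queue (work stealing).
        #include <cstdio>

        int main() {
            const double gpuGflops = 497.0;          // sustained GPU DGEMM (from abstract)
            const double cpuGflops = 623.0 - 497.0;  // CPU share of the combined 623 GFlop/s
            const int m = 40960;                     // assumed number of output rows

            // Split output rows proportionally to sustained performance.
            int gpuRows = static_cast<int>(m * gpuGflops / (gpuGflops + cpuGflops));
            int cpuRows = m - gpuRows;

            std::printf("GPU computes %d rows, CPU computes %d rows of %d\n",
                        gpuRows, cpuRows, m);
            return 0;
        }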

    • Article

    Track Reconstruction in the ALICE TPC using GPUs for LHC Run 3

    Rohr, David, Gorbunov, Sergey, Schmidt, Marten, Shahoyan, Ruben
    arXiv.org, Nov 28, 2018
    Available
    Title: Track Reconstruction in the ALICE TPC using GPUs for LHC Run 3
    Author: Rohr, David; Gorbunov, Sergey; Schmidt, Marten; Shahoyan, Ruben
    Contributor: Shahoyan, Ruben (pacrepositoryorg)
    Subject: Calibration ; Collisions ; Radiation Counters ; Time Dependence ; Computation ; Source Code ; Cellular Automata ; Accelerators ; Kalman Filters ; Data Reduction ; Accident Reconstruction
    Description: In LHC Run 3, ALICE will increase the data-taking rate significantly to continuous readout of 50 kHz minimum-bias Pb-Pb collisions. The reconstruction strategy of the online-offline computing upgrade foresees a first synchronous online reconstruction stage during data taking that enables detector calibration, followed by a calibrated asynchronous reconstruction stage. We present a tracking algorithm for the Time Projection Chamber (TPC), the main tracking detector of ALICE. The reconstruction must yield results comparable to the current offline reconstruction and meet time constraints similar to those of the current High Level Trigger (HLT), while processing 50 times as many collisions per second as today. It is derived from the current online tracking in the HLT, which is based on a Cellular Automaton and the Kalman filter (a toy sketch of Cellular-Automaton-style neighbour linking follows this record), and we integrate missing features from offline tracking for improved resolution. The continuous TPC readout and overlapping collisions pose new challenges: conversion to spatial coordinates...
    Is part of: arXiv.org, Nov 28, 2018
    Identifier: 2331-8422 (E-ISSN)
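
    The abstract above refers to a Cellular-Automaton-based seed finder. The heavily simplified C++ sketch below illustrates the general idea on assumed toy data: clusters on consecutive pad rows are linked to their nearest neighbour on the next row within a search window, and chains of links are then followed to form track-seed candidates. It illustrates the principle only and is not the ALICE TPC tracker.

        // Hypothetical, highly simplified Cellular-Automaton-style seeding:
        // neighbour finding per row, then chain following to build seeds.
        #include <cstdio>
        #include <vector>
        #include <cmath>

        struct Cluster { float y; int link = -1; };     // link: index on the next row

        int main() {
            const float window = 1.0f;                  // assumed search window
            // toy data: clusters per pad row (one coordinate per cluster for brevity)
            std::vector<std::vector<Cluster>> rows = {
                { {1.0f}, {5.0f} }, { {1.2f}, {5.3f} }, { {1.5f}, {5.5f} }, { {1.9f} } };

            // 1) neighbour finding: link every cluster to the closest cluster on the next row
            for (size_t r = 0; r + 1 < rows.size(); ++r)
                for (auto& c : rows[r]) {
                    float best = window;
                    for (size_t j = 0; j < rows[r + 1].size(); ++j) {
                        float d = std::fabs(rows[r + 1][j].y - c.y);
                        if (d < best) { best = d; c.link = (int)j; }
                    }
                }

            // 2) chain following: walk the links row by row to form seed candidates
            for (size_t i = 0; i < rows[0].size(); ++i) {
                std::printf("seed:");
                int r = 0, idx = (int)i;
                while (idx >= 0) {
                    std::printf(" (row %d, y=%.1f)", r, rows[r][idx].y);
                    idx = rows[r][idx].link;
                    ++r;
                }
                std::printf("\n");
            }
            return 0;
        }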

    • Article

    ALICE HLT TPC Tracking of Pb-Pb Events on GPUs

    Rohr, David, Gorbunov, Sergey, Szostak, Artur, Kretz, Matthias, Kollegger, Thorsten, Breitner, Timo, Alt, Torsten
    Journal of Physics: Conference Series, 2012, Vol.396(1), p.012044 (8pp) [Peer Reviewed Journal]
    IOPscience (IOP Publishing)
    Available
    Title: ALICE HLT TPC Tracking of Pb-Pb Events on GPUs
    Author: Rohr, David; Gorbunov, Sergey; Szostak, Artur; Kretz, Matthias; Kollegger, Thorsten; Breitner, Timo; Alt, Torsten
    Subject: Physics
    Description: The online event reconstruction for the ALICE experiment at CERN requires the capability to process central Pb-Pb collisions at a rate of more than 200 Hz, corresponding to an input data rate of about 25 GB/s. The reconstruction of particle trajectories in the Time Projection Chamber (TPC) is the most compute-intensive step. The TPC online tracker implementation combines the principles of the Cellular Automaton and the Kalman filter. It has been accelerated by the use of graphics cards (GPUs). Pipelined processing allows the tracking on the GPU to run in parallel with data transfer and preprocessing on the CPU. So that CPU pre- and postprocessing can keep pace with the GPU, the pipeline uses multiple threads (a toy sketch of such a pipeline follows this record). Splitting the tracking into multiple phases that first search for short local track segments improves data locality and makes the algorithm well suited to running on a GPU. Thanks to dedicated optimizations, this approach is not inferior to a global one. Because of non-associative floating-point arithmetic, a bit-for-bit comparison of the GPU and CPU trackers is infeasible. A track-by-track and cluster-by-cluster comparison shows a concordance of 99.999%. With current hardware, the GPU tracker outperforms the CPU version by about a factor of three, leaving the processor still available for other tasks.
    Is part of: Journal of Physics: Conference Series, 2012, Vol.396(1), p.012044 (8pp)
    Identifier: 1742-6588 (ISSN); 1742-6596 (E-ISSN); 10.1088/1742-6596/396/1/012044 (DOI)
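
    As a minimal sketch of the pipelining described above, the C++ example below runs preprocessing, a stand-in for the GPU tracking stage, and postprocessing in separate threads connected by small thread-safe queues, so different events sit in different stages concurrently. The GPU stage is merely simulated on the CPU here; the queue design, event counts, and all names are assumptions for the example.

        // Hypothetical three-stage pipeline: preprocessing -> "GPU" tracking -> postprocessing.
        #include <condition_variable>
        #include <cstdio>
        #include <mutex>
        #include <optional>
        #include <queue>
        #include <thread>

        template <typename T>
        class Queue {                                   // minimal thread-safe FIFO
            std::queue<T> q_;
            std::mutex m_;
            std::condition_variable cv_;
            bool done_ = false;
        public:
            void push(T v) { { std::lock_guard<std::mutex> l(m_); q_.push(std::move(v)); } cv_.notify_one(); }
            void close()   { { std::lock_guard<std::mutex> l(m_); done_ = true; } cv_.notify_all(); }
            std::optional<T> pop() {
                std::unique_lock<std::mutex> l(m_);
                cv_.wait(l, [&] { return !q_.empty() || done_; });
                if (q_.empty()) return std::nullopt;
                T v = std::move(q_.front()); q_.pop(); return v;
            }
        };

        struct Event { int id; };

        int main() {
            Queue<Event> toGpu, toPost;

            std::thread pre([&] {                       // CPU preprocessing / data transfer
                for (int i = 0; i < 8; ++i) toGpu.push(Event{i});
                toGpu.close();
            });
            std::thread gpu([&] {                       // stand-in for the GPU tracking stage
                while (auto e = toGpu.pop()) toPost.push(*e);
                toPost.close();
            });
            std::thread post([&] {                      // CPU postprocessing
                while (auto e = toPost.pop()) std::printf("event %d reconstructed\n", e->id);
            });

            pre.join(); gpu.join(); post.join();
            return 0;
        }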

    • Several versions

    Online Reconstruction and Calibration with feed back loop in the ALICE High Level Trigger

    Rohr, David, Shahoyan, Ruben, Zampolli, Chiara, Krzewicki, Mikolaj, Wiechula, Jens, Gorbunov, Sergey, Chauvin, Alex, Schweda, Kai, Lindenstruth, Volker
    arXiv.org, Dec 26, 2017 [Peer Reviewed Journal]