This query concerns only resources held in libraries.
14 results
    • Article

    FPGA Fault Tolerance in Particle Physics Experiments

    Gebelein, Jano; Engel, Heiko; Kebschull, Udo
    it - Information Technology, 2010, Vol.52(4), pp.195-200 [Peer Reviewed Journal]
    Walter de Gruyter GmbH
    Available
    Title: FPGA Fault Tolerance in Particle Physics Experiments
    Author: Gebelein, Jano; Engel, Heiko; Kebschull, Udo
    Publisher: Oldenbourg Wissenschaftsverlag GmbH
    Subject: Reliability ; Testing ; Fault-Tolerance ; Error-Checking ; Redundant Design
    Description: The behavior of matter under extreme physical conditions is the focus of many high-energy-physics experiments. For this purpose, high-energy charged particles (ions) are collided with each other, creating energy or baryon densities similar to those at the beginning of the universe or those found in the center of neutron stars. In both cases a plasma of quarks and gluons (QGP) is present, which decays into hadrons within a short period of time. The particles formed in this process allow statements about the beginning of the universe when captured by large detectors, but they also cause hardware failures on a massive scale within the detector's electronic devices. This article is about methods to mitigate the radiation susceptibility of Field Programmable Gate Arrays (FPGA), enabling them to be used within particle detector systems to directly obtain valid data in the readout chain or to serve as the Detector Control System (DCS).
    German abstract (translated): Many high-energy-physics experiments investigate the behavior of matter under extreme physical conditions. To this end, charged particles (ions) are brought to collision at high energy, creating energy or baryon densities that correspond to the conditions in the early universe shortly after the Big Bang or to those in the center of neutron stars; cf. the Compressed Baryonic Matter (CBM) experiment. In both cases a plasma of quarks and gluons (QGP) forms, which decays back into hadrons within a very short time. The particles released in this process, once captured in detectors, permit important statements about the origin of the universe and the structure of the strong interaction. On the other hand, they also cause massive errors in the electronics required to capture them. This article describes methods by which Field Programmable Gate Arrays (FPGA) can be made more tolerant to errors of this kind, so that they can also be used inside detectors for data readout or as a Detector Control System (DCS).
    Is part of: it - Information Technology, 2010, Vol.52(4), pp.195-200
    Identifier: 1611-2776 (ISSN); 10.1524/itit.2010.0591 (DOI)
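The abstract above names redundant design as a mitigation technique for radiation-induced upsets in FPGAs. A minimal sketch of the standard idea, triple modular redundancy (TMR) with periodic scrubbing, is shown below; the class and method names are invented for illustration and are not taken from the article.

```python
# Hypothetical sketch (not from the article): triple modular redundancy (TMR)
# masks a single-event upset by majority-voting three redundant copies of a
# value, and periodic scrubbing repairs the corrupted copy.

def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority of three redundant register copies."""
    return (a & b) | (a & c) | (b & c)

class TMRRegister:
    """Three redundant copies of one value; a vote masks one upset."""
    def __init__(self, value: int):
        self.copies = [value, value, value]

    def upset(self, index: int, flipped_bit: int) -> None:
        # Simulate a radiation-induced single-event upset in one copy.
        self.copies[index] ^= 1 << flipped_bit

    def read(self) -> int:
        return majority_vote(*self.copies)

    def scrub(self) -> None:
        # Scrubbing rewrites all copies from the voted (correct) value.
        v = self.read()
        self.copies = [v, v, v]

reg = TMRRegister(0b1010)
reg.upset(1, 3)                 # one copy is corrupted
assert reg.read() == 0b1010     # the vote still returns the correct value
reg.scrub()                     # repair the corrupted copy
assert reg.copies == [0b1010] * 3
```

In a real FPGA design the voter is instantiated in logic and the configuration memory is scrubbed externally; this Python model only illustrates why a single upset cannot propagate past the vote.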

    • Article

    Exploiting the ALICE HLT for PROOF by scheduling of virtual machines

    Meoni, Marco; Boettger, Stefan; Zelnicek, Pierre; Lindenstruth, Volker; Kebschull, Udo
    Journal of Physics: Conference Series, 2011, Vol.331(7), p.072054 (6pp) [Peer Reviewed Journal]
    IOPscience (IOP Publishing)
    Available
    Title: Exploiting the ALICE HLT for PROOF by scheduling of virtual machines
    Author: Meoni, Marco; Boettger, Stefan; Zelnicek, Pierre; Lindenstruth, Volker; Kebschull, Udo
    Subject: Physics;
    Description: The HLT (High-Level Trigger) group of the ALICE experiment at the LHC has prepared a virtual Parallel ROOT Facility (PROOF) enabled cluster (HAF - HLT Analysis Facility) for fast physics analysis, detector calibration and reconstruction of data samples. The HLT-Cluster currently consists of 2860 CPU cores and 175TB of storage. Its purpose is the online filtering of the relevant part of data produced by the particle detector. However, data taking is not running continuously and exploiting unused cluster resources for other applications is highly desirable and improves the usage-cost ratio of the HLT cluster. As such, unused computing resources are dedicated to a PROOF-enabled virtual cluster available to the entire collaboration. This setup is especially aimed at the prototyping phase of analyses that need a high number of development iterations and a short response time, e.g. tuning of analysis cuts, calibration and alignment. HAF machines are enabled and disabled upon user request to start or complete analysis tasks. This is achieved by a virtual machine scheduling framework which dynamically assigns and migrates virtual machines running PROOF workers to unused physical resources. Using this approach we extend the HLT usage scheme to running both online and offline computing, thereby optimizing the resource usage.
    Is part of: Journal of Physics: Conference Series, 2011, Vol.331(7), p.072054 (6pp)
    Identifier: 1742-6588 (ISSN); 1742-6596 (E-ISSN); 10.1088/1742-6596/331/7/072054 (DOI)
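The abstract describes a scheduler that assigns PROOF worker VMs to unused physical nodes and migrates them away when a node is reclaimed for online processing. The toy model below sketches that placement/migration logic; all names are invented for illustration and do not reflect the paper's actual framework.

```python
# Hypothetical sketch (names invented, not from the paper): place PROOF
# worker VMs on idle cluster nodes, and migrate them away when a node is
# reclaimed for online HLT data taking.

def schedule(nodes: dict, requested_vms: int) -> dict:
    """Map VM ids to idle nodes; nodes is {name: 'idle' | 'online'}."""
    idle = [n for n, state in nodes.items() if state == "idle"]
    return {f"vm{i}": idle[i] for i in range(min(requested_vms, len(idle)))}

def migrate(placement: dict, reclaimed: str, nodes: dict) -> dict:
    """Move VMs off a node reclaimed for data taking, if capacity remains."""
    nodes[reclaimed] = "online"
    idle = [n for n, s in nodes.items()
            if s == "idle" and n not in placement.values()]
    moved = dict(placement)
    for vm, node in placement.items():
        if node == reclaimed:
            moved[vm] = idle.pop() if idle else None  # None: VM suspended
    return moved

nodes = {"n1": "idle", "n2": "idle", "n3": "idle", "n4": "online"}
placement = schedule(nodes, 2)          # vm0 -> n1, vm1 -> n2
placement = migrate(placement, "n1", nodes)
assert placement["vm0"] == "n3"         # vm0 migrated off the reclaimed node
```

The point of the design, as the abstract states, is that the same hardware serves online filtering and offline PROOF analysis, so a reclaimed node triggers migration rather than job loss.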

    • Article

    Autonomous system management for the ALICE High-Level-Trigger cluster using the SysMES framework

    Boettger, Stefan; Breitner, Timo; Kebschull, Udo; Lara, Camilo; Ulrich, Jochen; Zelnicek, Pierre
    Journal of Physics: Conference Series, 2011, Vol.331(5), p.052003 (6pp) [Peer Reviewed Journal]
    IOPscience (IOP Publishing)
    Available
    Title: Autonomous system management for the ALICE High-Level-Trigger cluster using the SysMES framework
    Author: Boettger, Stefan; Breitner, Timo; Kebschull, Udo; Lara, Camilo; Ulrich, Jochen; Zelnicek, Pierre
    Subject: Environmental Monitoring ; Management ; Autonomous ; Clusters ; Systems Management ; Failure ; Fault Tolerance ; Manuals ; Atomic and Molecular Physics (General) (So) ; Physics of Metals (MD) ; Physics (General) (Ah);
    Description: The ALICE HLT cluster is a heterogeneous computer cluster currently consisting of 200 nodes. This cluster is used for on-line processing of data produced by the ALICE detector during the next 10 or more years of operation. A major management challenge is to reduce the number of manual interventions in case of failures. Classical approaches such as monitoring tools lack mechanisms to detect situations with multiple failure conditions and to react to such situations automatically. We have therefore developed SysMES (System Management for networked Embedded Systems and Clusters), a decentralized, fault-tolerant tool-set for autonomous management. It comprises a monitoring facility for detecting the working states of the distributed resources, a central interface for visualizing and managing the cluster environment, and a rule system for coupling the monitoring and management aspects. We have developed a formal language in which an administrator can define complex spatial and temporal conditions for failure states and the corresponding reactions. For the HLT we have defined a set of rules for known and recurring problem states, such that SysMES takes care of most of the day-to-day administrative work.
    Is part of: Journal of Physics: Conference Series, 2011, Vol.331(5), p.052003 (6pp)
    Identifier: 1742-6588 (ISSN); 1742-6596 (E-ISSN); 10.1088/1742-6596/331/5/052003 (DOI)
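The abstract describes a rule system that couples monitored failure conditions to automatic reactions. The fragment below sketches that coupling in miniature; the `Rule`/`evaluate` names and the rule structure are assumptions for illustration, not the real SysMES API or its formal rule language.

```python
# Hypothetical sketch: a SysMES-style rule couples a monitored condition
# (possibly combining several metrics) to an automatic reaction, so that a
# recurring failure state is handled without manual intervention.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]   # evaluated against current metrics
    reaction: Callable[[dict], str]     # returns a description of the action

def evaluate(rules: list, metrics: dict) -> list:
    """Fire every rule whose condition holds; return the actions taken."""
    return [r.reaction(metrics) for r in rules if r.condition(metrics)]

# Example rule: restart the readout daemon on any node whose disk is nearly
# full AND whose daemon has stopped reporting (a multi-condition failure
# state that a plain threshold monitor would report as two separate alarms).
disk_full_daemon_dead = Rule(
    name="restart-readout",
    condition=lambda m: m["disk_used_pct"] > 95 and not m["daemon_alive"],
    reaction=lambda m: f"restart readout on {m['node']}",
)

actions = evaluate([disk_full_daemon_dead],
                   {"node": "hlt-042", "disk_used_pct": 97,
                    "daemon_alive": False})
assert actions == ["restart readout on hlt-042"]
```

The real framework additionally supports temporal conditions (e.g. a state persisting over a window) and spatial ones spanning multiple nodes, which this single-metrics-snapshot sketch omits.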

    • Several versions

    Intrusion Prevention and Detection in Grid Computing - The ALICE Case

    Gomez, Andres; Lara, Camilo; Kebschull, Udo
    arXiv.org, Apr 20, 2017 [Peer Reviewed Journal]

    • Several versions

    FPGA implementation of a 32x32 autocorrelator array for analysis of fast image series

    Buchholz, Jan; Krieger, Jan; Mocsár, Gábor; Kreith, Balázs; Charbon, Edoardo; Vámosi, György; Kebschull, Udo; Langowski, Jörg
    arXiv.org, Dec 7, 2011 [Peer Reviewed Journal]

    • Article

    Kommerzielle Großrechner als Ausbildungsaufgabe an Universitäten und Fachhochschulen

    Kebschull, Udo; Spruth, Wilhelm G.
    Informatik-Spektrum, 2001, Vol.24(3), pp.140-144 [Peer Reviewed Journal]
    Springer Science & Business Media B.V.
    Available
    Title: Kommerzielle Großrechner als Ausbildungsaufgabe an Universitäten und Fachhochschulen
    Author: Kebschull, Udo; Spruth, Wilhelm G.
    Subject: Großrechner ; Engineering ; Computer Science;
    Description (translated from German): Commercial mainframes still suffer from an image of outdated technology and of being a discontinued line. Little known is the renaissance that has taken place in recent years. Numerous new developments have led to machines of the S/390 architecture in particular occupying a leading technological position. This applies to the hardware as well as to the OS/390 operating system and its subsystems.
    Is part of: Informatik-Spektrum, 2001, Vol.24(3), pp.140-144
    Identifier: 0170-6012 (ISSN); 1432-122X (E-ISSN); 10.1007/s002870100161 (DOI)

    • Several versions

    ANaN — ANalyse And Navigate: Debugging Compute Clusters with Techniques from Functional Programming and Text Stream Processing

    Adler, Alexander; Kebschull, Udo
    EPJ Web of Conferences, Vol.245 [Peer Reviewed Journal]

    • Article

    Autonomous Multicamera Tracking on Embedded Smart Cameras

    Quaritsch, Markus; Kreuzthaler, Markus; Rinner, Bernhard; Bischof, Horst; Strobl, Bernhard
    EURASIP Journal on Embedded Systems, 2007, Vol.2007, 10 pages [Peer Reviewed Journal]
    Hindawi Journals
    Available
    Title: Autonomous Multicamera Tracking on Embedded Smart Cameras
    Author: Quaritsch, Markus; Kreuzthaler, Markus; Rinner, Bernhard; Bischof, Horst; Strobl, Bernhard
    Contributor: Kebschull, Udo
    Description: There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is very challenging since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multicamera tracking method on distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device which is equipped with a multiprocessor computation and communication infrastructure. Our multicamera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which observes the object. Thus, no central coordination is required resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multicamera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated on tracking persons at our campus.
    Is part of: EURASIP Journal on Embedded Systems, 2007, Vol.2007, 10 pages
    Identifier: 1687-3955 (ISSN); 1687-3963 (E-ISSN); 10.1155/2007/92827 (DOI)
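The abstract's central mechanism is a decentralized handover: one tracker instance per object migrates to whichever adjacent camera observes the object, with no central coordinator. The sketch below models only that migration decision; all names are invented, and the real system performs the tracking itself with CamShift and the migration with a mobile agent system.

```python
# Hypothetical sketch: decentralized handover of a per-object tracker
# between adjacent smart cameras. No central coordinator is involved; the
# tracker itself decides where to migrate.

class Tracker:
    def __init__(self, object_id: str, camera: str):
        self.object_id = object_id
        self.camera = camera          # the camera currently hosting us

    def handover(self, neighbors: dict, observations: set) -> str:
        """Migrate to an adjacent camera that currently observes the object.

        neighbors:    {camera: [adjacent cameras]} (the camera topology)
        observations: set of (camera, object_id) pairs seen this frame
        """
        for cam in neighbors.get(self.camera, []):
            if (cam, self.object_id) in observations:
                self.camera = cam     # the tracking instance moves with
                break                 # the object; no central decision
        return self.camera

neighbors = {"cam1": ["cam2"], "cam2": ["cam1", "cam3"], "cam3": ["cam2"]}
t = Tracker("person7", "cam1")
# The object leaves cam1's field of view and appears in cam2:
assert t.handover(neighbors, {("cam2", "person7")}) == "cam2"
```

Because each handover is a purely local decision between adjacent cameras, the scheme scales with the size of the camera network, which is the scalability argument the abstract makes.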

    • Article

    Ein Branch & Bound-Ansatz zur Verdrahtung von Field Programmable Gate-Arrays

    M, Ulrich; Herrmann, Paul; Steffen, Matthias; Kebschull, Udo; Spruth, Wilhelm G.
    it - Information Technology, 1999, Vol.41(6), pp.17-24 [Peer Reviewed Journal]
    Walter de Gruyter GmbH
    Available
    Title: Ein Branch & Bound-Ansatz zur Verdrahtung von Field Programmable Gate-Arrays
    Author: M, Ulrich; Herrmann, Paul; Steffen, Matthias; Kebschull, Udo; Spruth, Wilhelm G.
    Publisher: OLDENBOURG WISSENSCHAFTSVERLAG
    Is part of: it - Information Technology, 1999, Vol.41(6), pp.17-24
    Identifier: 1611-2776 (ISSN); 2196-7032 (E-ISSN); 10.1524/itit.1999.41.6.17 (DOI)

    • Several versions

    Design Considerations for Scalable High-Performance Vision Systems Embedded in Industrial Print Inspection Machines

    Fürtler, Johannes; Rössler, Peter; Brodersen, Jörg; Nachtnebel, Herbert; Mayer, Konrad J; Cadek, Gerhard; Eckel, Christian
    EURASIP Journal on Embedded Systems, 2007, Vol.2007, 10 pages [Peer Reviewed Journal]