This query concerns only resources held in libraries.
34 results
    • Article

    A reflection on software engineering in HEP

    Carminati, Federico
    Journal of Physics: Conference Series, 2012, Vol.396(5), p.052018 (11pp) [Peer Reviewed Journal]
    IOPscience (IOP Publishing)
    Available
    Title: A reflection on software engineering in HEP
    Author: Carminati, Federico
    Subject: Software ; Computer Programs ; Communities ; Reflection ; Software Development ; Software Engineering ; Atomic and Molecular Physics (General) (So) ; Physics of Metals (MD) ; Physics (General) (Ah);
    Description: High Energy Physics (HEP) has been making very extensive use of computers to achieve its research goals. Fairly large program suites have been developed, maintained and used over the years, and it is fair to say that, overall, HEP has been successful in software development. Yet, HEP software development has not used classical Software Engineering techniques, which have been invented and refined to help the production of large programmes. In this paper we will review the development of HEP code with its strengths and weaknesses. Using several well-known HEP software projects as examples, we will try to demonstrate that our community has used a form of Software Engineering, albeit in an informal manner. The software development techniques employed in these projects are indeed very close in many aspects to the modern tendencies of Software Engineering itself, in particular the so-called “agile technologies”. The paper will conclude with an outlook on the future of software development in HEP.
    Is part of: Journal of Physics: Conference Series, 2012, Vol.396(5), p.052018 (11pp)
    Identifier: 1742-6588 (ISSN); 1742-6596 (E-ISSN); 10.1088/1742-6596/396/5/052018 (DOI)

    • Several versions

    Summary of the ACAT Round Table Discussion: Open-source, knowledge sharing and scientific collaboration

    Carminati, Federico, Perret-Gallix, Denis, Riemann, Tord
    arXiv.org, Jul 2, 2014 [Peer Reviewed Journal]

    • Article

    Rethinking particle transport in the many-core era towards GEANT 5

    Apostolakis, John, Brun, René, Carminati, Federico, Gheata, Andrei
    Journal of Physics: Conference Series, 2012, Vol.396(2), p.022014 (11pp) [Peer Reviewed Journal]
    IOPscience (IOP Publishing)
    Available
    Title: Rethinking particle transport in the many-core era towards GEANT 5
    Author: Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei
    Subject: Computer Simulation ; Prototyping ; Transport ; Hardware ; Detectors ; Tasks ; Particles (of Physics) ; Computational Efficiency ; Atomic and Molecular Physics (General) (So) ; Physics of Metals (MD) ; Physics (General) (Ah);
    Description: Detector simulation is one of the most CPU-intensive tasks in modern High Energy Physics. While its importance for the design of the detector and the estimation of the efficiency is ever increasing, the number of events that can be simulated is often constrained by the available computing resources. Various kinds of “fast simulations” have been developed to alleviate this problem; however, while successful, these are mostly “ad hoc” solutions which do not completely replace the need for detailed simulations. One of the common features of both detailed and fast simulation is the inability of the codes to fully exploit the parallelism increasingly offered by the new generations of CPUs. In the coming years it is reasonable to expect an increase in the needs for detector simulation on one side, and in the parallelism of the hardware on the other, widening the gap between the needs and the available means. In past years, and indeed since the beginning of simulation programs, several unsuccessful efforts have been made to exploit the “embarrassing parallelism” of simulation programmes. After a careful study of the problem, and based on long experience with simulation codes, the authors have concluded that an entirely new approach has to be adopted to exploit parallelism. The paper will review the current prototyping work, encompassing both detailed and fast simulation use cases. Performance studies will be presented, together with a roadmap to develop a new full-fledged transport program that efficiently exploits parallelism for the physics and geometry computations, while adapting the steering mechanisms to accommodate detailed and fast simulation in a single framework.
    Is part of: Journal of Physics: Conference Series, 2012, Vol.396(2), p.022014 (11pp)
    Identifier: 1742-6588 (ISSN); 1742-6596 (E-ISSN); 10.1088/1742-6596/396/2/022014 (DOI)
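
    The abstract above argues that detailed and fast simulation codes fail to exploit the “embarrassing parallelism” of transporting many independent particles. As a purely illustrative sketch (not the GEANT 5 prototype described in the paper), the C++ fragment below dispatches a basket of independent tracks to a small pool of worker threads; the Track structure, the transportStep() physics stub and the chunk size are all hypothetical.

        #include <algorithm>
        #include <atomic>
        #include <cstddef>
        #include <thread>
        #include <vector>

        struct Track {            // hypothetical minimal track state
            double x, y, z;       // position
            double px, py, pz;    // direction (unit vector)
            double step;          // step length to propagate
        };

        // Propagate one track by a straight-line step (placeholder for real physics).
        void transportStep(Track& t) {
            t.x += t.px * t.step;
            t.y += t.py * t.step;
            t.z += t.pz * t.step;
        }

        // Transport a "basket" of independent tracks with a pool of worker threads:
        // each worker atomically claims the next chunk, so no two threads ever touch
        // the same track and the track data needs no locking.
        void transportBasket(std::vector<Track>& basket, unsigned nWorkers) {
            std::atomic<std::size_t> next{0};
            const std::size_t chunk = 64;             // hypothetical chunk size
            std::vector<std::thread> workers;
            for (unsigned w = 0; w < nWorkers; ++w) {
                workers.emplace_back([&] {
                    for (;;) {
                        const std::size_t begin = next.fetch_add(chunk);
                        if (begin >= basket.size()) break;
                        const std::size_t end = std::min(begin + chunk, basket.size());
                        for (std::size_t i = begin; i < end; ++i) transportStep(basket[i]);
                    }
                });
            }
            for (auto& w : workers) w.join();
        }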

    • Article

    Using a source-to-source transformation to introduce multi-threading into the AliRoot framework for a parallel event reconstruction

    Lohn, Stefan B, Dong, Xin, Carminati, Federico
    Journal of Physics: Conference Series, 2012, Vol.396(5), p.052049 (11pp) [Peer Reviewed Journal]
    IOPscience (IOP Publishing)
    Available
    Title: Using a source-to-source transformation to introduce multi-threading into the AliRoot framework for a parallel event reconstruction
    Author: Lohn, Stefan B; Dong, Xin; Carminati, Federico
    Subject: Sockets ; Programming ; Links ; Transformations ; Gain ; Hierarchies ; Parallel Processing ; Manuals ; Atomic and Molecular Physics (General) (So) ; Physics of Metals (MD) ; Physics (General) (Ah);
    Description: Chip multiprocessors are going to support massive parallelism through many additional physical and logical cores. Performance improvements can no longer be obtained by increasing the clock frequency, because the technical limits have almost been reached. Instead, parallel execution must be used to gain performance. Resources like main memory, the cache hierarchy, the bandwidth of the memory bus, or the links between cores and sockets are not going to improve as fast. Hence, parallelism can only result in performance gains if memory usage is optimized and communication between threads is minimized. Moreover, concurrent programming has become a domain for experts: implementing multi-threading is error-prone and labor-intensive, and a full reimplementation of the whole AliRoot source code is unaffordable. This paper describes the effort to evaluate the adaptation of AliRoot to the needs of multi-threading and to provide the capability of parallel processing by using a semi-automatic source-to-source transformation, addressing the problems described above and providing a straightforward way of parallelization with almost no interference between threads. This keeps the approach simple and reduces the required manual changes in the code. In a first step, unconditional thread-safety is introduced to bring the original sequential, thread-unaware source code into a position to utilize multi-threading. Afterwards, further investigations have to be performed to identify candidate classes that are useful to share amongst threads. In a second step, the transformation then changes the code to share these classes and finally verifies that no invalid interferences between threads remain.
    Is part of: Journal of Physics: Conference Series, 2012, Vol.396(5), p.052049 (11pp)
    Identifier: 1742-6588 (ISSN); 1742-6596 (E-ISSN); 10.1088/1742-6596/396/5/052049 (DOI)
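
    The description above outlines a two-step strategy: first make the legacy code unconditionally thread-safe, then decide which classes can be shared between threads. The C++ sketch below illustrates that idea only in miniature, under the assumption (not taken from the paper) that per-event scratch state becomes thread-local while read-only calibration data stays shared; GeometryCache, Calibration and reconstructEvent are hypothetical names, not AliRoot classes.

        #include <thread>
        #include <vector>

        struct GeometryCache { int lastVolumeId = -1; };   // mutable per-event scratch data
        struct Calibration   { double gain = 1.0; };       // read-only after initialisation

        // Step 1: unconditional thread-safety. A former global
        //   GeometryCache gCache;
        // becomes one private instance per thread, so concurrent events
        // cannot interfere through it.
        thread_local GeometryCache gCache;

        // Step 2: classes identified as safe to share (here: read-only calibration
        // data) stay as a single instance visible to all threads.
        const Calibration gCalibration{};

        void reconstructEvent(int eventId) {
            gCache.lastVolumeId = eventId;                 // touches thread-private state only
            double signal = 42.0 * gCalibration.gain;      // reads shared, immutable state
            (void)signal;
        }

        int main() {
            std::vector<std::thread> workers;
            for (int ev = 0; ev < 4; ++ev)
                workers.emplace_back(reconstructEvent, ev);  // one event per thread
            for (auto& w : workers) w.join();
        }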

    • Article

    Stochastic monotony signature and biomedical applications

    Demongeot, Jacques, Galli Carminati, Giuliana, Carminati, Federico, Rachdi, Mustapha
    Comptes rendus biologies, December 2015, Vol.338(12), pp.777-83 [Peer Reviewed Journal]
    MEDLINE/PubMed (U.S. National Library of Medicine)
    Available
    Title: Stochastic monotony signature and biomedical applications
    Author: Demongeot, Jacques; Galli Carminati, Giuliana; Carminati, Federico; Rachdi, Mustapha
    Subject: Comparaison de Fonctions ; Comparison of Functions ; Monotony Statistical Test ; Signature de Monotonie Aléatoire ; Stochastic Monotony Signature ; Test Statistique de Monotonie ; Stochastic Processes
    Description: We introduce a new concept, the stochastic monotony signature of a function, made of the sequence of signs that indicate whether the function is increasing or constant (sign +) or decreasing (sign -). If the function results from the averaging of successive observations with errors, the monotony sign is a random binary variable, whose density is studied under two hypotheses for the distribution of errors: uniform and Gaussian. Then, we describe a simple statistical test allowing the comparison between the monotony signatures of two functions (e.g., one observed and the other as reference) and we apply the test to four biomedical examples, coming from genetics, psychology, gerontology, and morphogenesis.
    Is part of: Comptes rendus biologies, December 2015, Vol.338(12), pp.777-83
    Identifier: 1768-3238 (E-ISSN); 26563556 Version (PMID); 10.1016/j.crvi.2015.09.002 (DOI)
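
    The signature defined in the abstract is simply the sequence of signs of successive differences of an observed function, with '+' covering the increasing-or-constant case. The short C++ sketch below computes such a sign sequence for two toy series and counts how often they agree; the paper's statistical test and its error models are not reproduced here, and the sample values are invented.

        #include <iostream>
        #include <string>
        #include <vector>

        // Sign of each successive difference: '+' if non-decreasing, '-' if decreasing.
        std::string monotonySignature(const std::vector<double>& samples) {
            std::string signature;
            for (std::size_t i = 1; i < samples.size(); ++i)
                signature += (samples[i] >= samples[i - 1]) ? '+' : '-';
            return signature;
        }

        int main() {
            std::vector<double> observed  = {1.0, 1.2, 1.1, 1.5, 1.5, 1.4};
            std::vector<double> reference = {1.0, 1.1, 1.0, 1.3, 1.6, 1.4};

            const std::string sObs = monotonySignature(observed);
            const std::string sRef = monotonySignature(reference);

            // Count positions where the two signatures agree; the paper's test would
            // assess whether this level of agreement is statistically significant.
            std::size_t agree = 0;
            for (std::size_t i = 0; i < sObs.size(); ++i)
                if (sObs[i] == sRef[i]) ++agree;

            std::cout << "observed:  " << sObs << "\n"
                      << "reference: " << sRef << "\n"
                      << "agreement: " << agree << "/" << sObs.size() << "\n";
        }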

    • Article

    Identifying composite crosscutting concerns with scatter-based graph clustering

    Huang, Jin, Carminati, Federico, Betev, Latchezar, Zhu, Jianlin, Lu, Yansheng
    Wuhan University Journal of Natural Sciences, 2012, Vol.17(2), pp.114-120 [Peer Reviewed Journal]
    Springer Science & Business Media B.V.
    Available
    Title: Identifying composite crosscutting concerns with scatter-based graph clustering
    Author: Huang, Jin; Carminati, Federico; Betev, Latchezar; Zhu, Jianlin; Lu, Yansheng
    Subject: software engineering ; aspect mining ; link analysis ; undirected graph clustering
    Description: Identifying composite crosscutting concerns (CCs) is a research task and challenge in aspect mining. In this paper, we propose a scatter-based graph clustering approach to identify composite CCs. Inspired by state-of-the-art link analysis techniques, we propose a two-state model to approximate how CCs tangle with core modules. According to this model, we obtain scatter and centralization scores for each program element. In particular, the scatter scores are used to select CC seeds. Furthermore, to identify composite CCs, we adopt a novel similarity measure and develop an undirected graph clustering to group these seeds. Finally, we compare our approach with previous work and illustrate its effectiveness in identifying composite CCs.
    Is part of: Wuhan University Journal of Natural Sciences, 2012, Vol.17(2), pp.114-120
    Identifier: 1007-1202 (ISSN); 1993-4998 (E-ISSN); 10.1007/s11859-012-0814-7 (DOI)
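
    As a rough, hypothetical illustration of the seed-selection idea mentioned in the abstract (the paper's two-state model, centralization score, similarity measure and clustering step are not reproduced), the C++ sketch below scores each program element by the number of distinct modules referencing it and flags widely scattered elements as crosscutting-concern seed candidates; all names and the threshold are invented.

        #include <cstddef>
        #include <iostream>
        #include <map>
        #include <set>
        #include <string>
        #include <vector>

        int main() {
            // element -> set of modules that reference it (toy usage graph)
            std::map<std::string, std::set<std::string>> refs = {
                {"Logger::log",  {"io", "net", "gui", "core", "db"}},
                {"Tracker::fit", {"reco"}},
                {"Auth::check",  {"net", "gui", "db"}},
            };

            const std::size_t seedThreshold = 3;  // hypothetical cut on the scatter score

            // Elements scattered across many modules are candidate crosscutting-concern
            // seeds; the paper then clusters such seeds by similarity into composite CCs.
            std::vector<std::string> seeds;
            for (const auto& [element, modules] : refs)
                if (modules.size() >= seedThreshold) seeds.push_back(element);

            for (const auto& s : seeds) std::cout << "CC seed candidate: " << s << "\n";
        }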

    • Article

    Vectorising the detector geometry to optimize particle transport

    Apostolakis, John, Brun, René, Carminati, Federico, Gheata, Andrei, Wenzel, Sandro
    arXiv.org, Dec 3, 2013
    Available
    Title: Vectorising the detector geometry to optimize particle transport
    Author: Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro
    Contributor: Wenzel, Sandro
    Subject: Algorithms ; Geometry ; Sensors ; Transport ; Navigation ; Computation ; Caching ; Optimization
    Description: Among the components contributing to particle transport, geometry navigation is an important consumer of CPU cycles. The tasks performed to get answers to "basic" queries such as locating a point within a geometry hierarchy or computing accurately the distance to the next boundary can become very computing-intensive for complex detector setups. So far, the existing geometry algorithms employ mainly scalar optimisation strategies (voxelization, caching) to reduce their CPU consumption. In this paper, we would like to take a different approach and investigate how geometry navigation can benefit from the vector instruction set extensions that are one of the primary sources of performance enhancements on current and future hardware. While on paper this form of microparallelism promises increasing performance opportunities, applying this technology to the highly hierarchical and multiply branched geometry code is a difficult challenge. We refer to the current work done to vectorise an important...
    Is part of: arXiv.org, Dec 3, 2013
    Identifier: 2331-8422 (E-ISSN)

    • Article

    Vectorising the detector geometry to optimize particle transport

    Apostolakis, John, Brun, René, Carminati, Federico, Gheata, Andrei, Wenzel, Sandro
    Cornell University
    Available
    Title: Vectorising the detector geometry to optimize particle transport
    Author: Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro
    Subject: Physics - Computational Physics ; High Energy Physics - Experiment
    Description: Among the components contributing to particle transport, geometry navigation is an important consumer of CPU cycles. The tasks performed to get answers to "basic" queries such as locating a point within a geometry hierarchy or computing accurately the distance to the next boundary can become very computing-intensive for complex detector setups. So far, the existing geometry algorithms employ mainly scalar optimisation strategies (voxelization, caching) to reduce their CPU consumption. In this paper, we would like to take a different approach and investigate how geometry navigation can benefit from the vector instruction set extensions that are one of the primary sources of performance enhancements on current and future hardware. While on paper this form of microparallelism promises increasing performance opportunities, applying this technology to the highly hierarchical and multiply branched geometry code is a difficult challenge. We refer to the current work done to vectorise an important part of the critical navigation algorithms in the ROOT geometry library. Starting from a short critical discussion about the programming model, we present the current status and first benchmark results of the vectorisation of some elementary geometry shape algorithms. On the path towards a full vector-based geometry navigator, we also investigate the performance benefits in connecting these elementary functions together to develop algorithms which are entirely based on the flow of vector data. To this end, we discuss core components of a simple vector navigator that is tested and evaluated on a toy detector setup. Comment: 7 pages, 3 figures, talk at CHEP13
    Identifier: 1312.0816 (ARXIV ID)
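
    Both records above describe moving geometry queries from a one-particle ("scalar") interface to an interface answering the same query for many particles per call, so that SIMD vector units can be exploited. The C++ sketch below is only a schematic illustration of that API change for a trivial axis-aligned-box safety-distance query; the Box type and function names are hypothetical and are not the ROOT geometry library interfaces discussed in the paper.

        #include <algorithm>
        #include <cstddef>

        struct Box { double lo[3]; double hi[3]; };  // axis-aligned box

        // Scalar API: safety distance (distance to the nearest face) for one point
        // assumed to lie inside the box.
        double safetyToOut(const Box& b, const double p[3]) {
            double s = b.hi[0] - p[0];
            for (int a = 0; a < 3; ++a) {
                s = std::min(s, p[a] - b.lo[a]);
                s = std::min(s, b.hi[a] - p[a]);
            }
            return s;
        }

        // Array API: one call answers the query for n points laid out as separate
        // x/y/z arrays (structure-of-arrays), a layout that SIMD units handle well.
        void safetyToOut(const Box& b, const double* x, const double* y, const double* z,
                         double* out, std::size_t n) {
            for (std::size_t i = 0; i < n; ++i) {
                double s = std::min(x[i] - b.lo[0], b.hi[0] - x[i]);
                s = std::min(s, std::min(y[i] - b.lo[1], b.hi[1] - y[i]));
                s = std::min(s, std::min(z[i] - b.lo[2], b.hi[2] - z[i]));
                out[i] = s;  // independent iterations: auto-vectorisation friendly
            }
        }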

    • Article

    Securing the AliEn File Catalogue - enforcing authorization with accountable file operations

    Schreiner, Steffen, Bagnasco, Stefano, Banerjee, Subho Sankar, Betev, Latchezar, Carminati, Federico, Datskova, Olga Vladimirovna, Furano, Fabrizio, Grigoras, Alina, Grigoras, Costin, Lorenzo, Patricia Mendez, Peters, Andreas Joachim, Saiz, Pablo, Zhu, Jianlin
    Journal of Physics: Conference Series, 2011, Vol.331(6), p.062044 (6pp) [Peer Reviewed Journal]
    IOPscience (IOP Publishing)
    Available
    Title: Securing the AliEn File Catalogue - enforcing authorization with accountable file operations
    Author: Schreiner, Steffen; Bagnasco, Stefano; Banerjee, Subho Sankar; Betev, Latchezar; Carminati, Federico; Datskova, Olga Vladimirovna; Furano, Fabrizio; Grigoras, Alina; Grigoras, Costin; Lorenzo, Patricia Mendez; Peters, Andreas Joachim; Saiz, Pablo; Zhu, Jianlin
    Subject: Storage Systems ; Messages ; Design Engineering ; Fraud ; Catalogues ; Tables (Data) ; Simplification ; Tracking ; Atomic and Molecular Physics (General) (So) ; Physics of Metals (MD) ; Physics (General) (Ah);
    Description: The AliEn Grid Services, as operated by the ALICE Collaboration in its global physics analysis grid framework, are based on a central File Catalogue together with a distributed set of storage systems and the possibility to register links to external data resources. This paper describes several identified vulnerabilities in the AliEn File Catalogue access protocol regarding fraud and unauthorized file alteration and presents a more secure and revised design: a new mechanism, called LFN Booking Table, is introduced in order to keep track of access authorization in the transient state of files entering or leaving the File Catalogue. Due to a simplification of the original Access Envelope mechanism for xrootd-protocol-based storage systems, fundamental computational improvements of the mechanism were achieved, as well as a reduction of up to 50% in the credential's size. By extending the access protocol with signed status messages from the underlying storage system, the File Catalogue receives trusted information about a file's size and checksum and the protocol is no longer dependent on client trust. Altogether, the revised design complies with atomic and consistent transactions and allows for accountable, authentic, and traceable file operations. This paper describes these changes as part and beyond the development of AliEn version 2.19.
    Is part of: Journal of Physics: Conference Series, 2011, Vol.331(6), p.062044 (6pp)
    Identifier: 1742-6588 (ISSN); 1742-6596 (E-ISSN); 10.1088/1742-6596/331/6/062044 (DOI)
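
    The key point of the revised design described above is that the catalogue learns a file's size and checksum from a signed storage-system report rather than from the client. The C++ sketch below is a conceptual, hypothetical rendering of that idea only: the message fields, the CatalogueEntry type and commitRegistration() are invented, and signature verification is left as a stub rather than reproducing AliEn's actual Access Envelope or xrootd protocol details.

        #include <cstdint>
        #include <string>

        struct StorageStatusMessage {      // report claimed to be signed by the storage element
            std::string lfn;               // logical file name being registered
            std::uint64_t size = 0;        // size observed by the storage element
            std::string checksum;          // checksum observed by the storage element
            std::string signature;         // signature over the fields above
        };

        struct CatalogueEntry { std::string lfn; std::uint64_t size; std::string checksum; };

        // Stub: a real implementation would verify the storage element's signature here.
        bool verifySignature(const StorageStatusMessage&) { return true; }

        // Register the file using only the trusted (signed) report, so a client cannot
        // commit forged size or checksum metadata into the catalogue.
        bool commitRegistration(const StorageStatusMessage& msg, CatalogueEntry& entry) {
            if (!verifySignature(msg)) return false;   // reject unauthenticated reports
            entry = {msg.lfn, msg.size, msg.checksum};
            return true;
        }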

    • Article

    Enhancing the AliEn web service authentication

    Zhu, Jianlin, Saiz, Pablo, Carminati, Federico, Betev, Latchezar, Zhou, Daicui, Lorenzo, Patricia Mendez, Grigoras, Alina Gabriela, Grigoras, Costin, Furano, Fabrizio, Schreiner, Steffen, Datskova, Olga Vladimirovna, Banerjee, Subho Sankar, Zhang, Guoping
    Journal of Physics: Conference Series, 2011, Vol.331(6), p.062048 (6pp) [Peer Reviewed Journal]
    IOPscience (IOP Publishing)
    Available
    Title: Enhancing the AliEn web service authentication
    Author: Zhu, Jianlin; Saiz, Pablo; Carminati, Federico; Betev, Latchezar; Zhou, Daicui; Lorenzo, Patricia Mendez; Grigoras, Alina Gabriela; Grigoras, Costin; Furano, Fabrizio; Schreiner, Steffen; Datskova, Olga Vladimirovna; Banerjee, Subho Sankar; Zhang, Guoping
    Subject: Certificates ; Access Control ; Running ; Web Services ; Xml ; Servers (Computers) ; Authentication ; World Wide Web ; Atomic and Molecular Physics (General) (So) ; Physics of Metals (MD) ; Physics (General) (Ah);
    Description: Web Services are an XML-based technology that allows applications to communicate with each other across disparate systems. Web Services are becoming the de facto standard enabling interoperability between heterogeneous processes and systems. AliEn2 is a grid environment based on web services. The AliEn2 services can be divided into three categories: Central services, deployed once per organization; Site services, deployed at each of the participating centers; and Job Agents, running automatically on the worker nodes. A security model to protect these services is essential for the whole system. Current web server implementations, such as Apache, are not suitable for use within the grid environment: Apache with mod_ssl and OpenSSL supports only X.509 certificates, whereas in the grid environment the common credential is the proxy certificate, used to provide restricted proxies and delegation. An authentication framework was adopted for AliEn2 web services to add to the Apache web server the ability to accept X.509 certificates and proxy certificates from the client side. The authentication framework can also allow the generation of access control policies to limit access to the AliEn2 web services.
    Is part of: Journal of Physics: Conference Series, 2011, Vol.331(6), p.062048 (6pp)
    Identifier: 1742-6588 (ISSN); 1742-6596 (E-ISSN); 10.1088/1742-6596/331/6/062048 (DOI)
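
    The last step the abstract mentions is generating access-control policies that limit which AliEn2 web services an authenticated identity may call. The C++ sketch below is a hypothetical illustration of that policy-lookup idea only; it uses no real Apache, mod_ssl or OpenSSL API, and the AuthenticatedClient and Policy types are invented.

        #include <map>
        #include <set>
        #include <string>

        struct AuthenticatedClient {
            std::string subject;     // distinguished name from the verified certificate
            bool viaProxy = false;   // true if the identity came through a proxy certificate
        };

        // Policy: certificate subject -> set of AliEn2 services it may call.
        using Policy = std::map<std::string, std::set<std::string>>;

        // Allow the call only if the verified subject has been granted that service.
        bool authorize(const Policy& policy, const AuthenticatedClient& client,
                       const std::string& service) {
            const auto it = policy.find(client.subject);
            return it != policy.end() && it->second.count(service) > 0;
        }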