Abstract:
This work belongs to the field of computational high-energy physics (HEP). The key methods used in this thesis to meet the challenges raised by the Large Hadron Collider (LHC) era experiments are object-orientation with software engineering, Monte Carlo simulation, cluster computing, and artificial neural networks. The first aspect discussed is the development of hadronic cascade models, used for the accurate simulation of medium-energy hadron-nucleus reactions up to 10 GeV. These models are typically needed in hadronic calorimeter studies and in the estimation of radiation backgrounds. Applications outside HEP include the medical field (such as hadron treatment simulations), space science (satellite shielding), and nuclear physics (spallation studies). Validation results are presented for several significant improvements released in the Geant4 simulation toolkit, and the significance of the new models for computing in the LHC era is estimated. In particular, we estimate the ability of the Bertini cascade to simulate the Compact Muon Solenoid (CMS) hadron calorimeter (HCAL). LHC test beam activity has a tightly coupled simulation-to-data-analysis cycle: typically, a Geant4 computer experiment is used to understand test beam measurements. Another aspect of this thesis is therefore a description of studies related to developing new CMS H2 test beam data analysis tools and performing data analysis on the basis of CMS Monte Carlo events. These events have been simulated in detail using Geant4 physics models, the full CMS detector description, and event reconstruction. Using the ROOT data analysis framework, we have developed an offline ANN-based approach to tag b-jets associated with heavy neutral Higgs particles, and we show that this kind of NN methodology can be successfully used to separate the Higgs signal from the background in the CMS experiment.
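The ANN-based b-tagging described above was built within the ROOT framework; as a minimal sketch of the underlying idea, the snippet below trains a single sigmoid neuron to separate two synthetic jet classes. The two input variables, their distributions, and all parameter values are invented for illustration and are not taken from the CMS analysis.

```python
import math, random

random.seed(42)

# Two hypothetical discriminating variables per jet (e.g. an
# impact-parameter significance and a secondary-vertex mass proxy);
# the Gaussian class distributions are synthetic.
def sample(n, mu):
    return [(random.gauss(mu, 1.0), random.gauss(mu, 1.0)) for _ in range(n)]

signal = sample(500, 1.5)      # b-jets (label 1)
background = sample(500, 0.0)  # light-quark/gluon jets (label 0)
data = [(x, 1) for x in signal] + [(x, 0) for x in background]
random.shuffle(data)

# Single sigmoid neuron trained by stochastic gradient descent --
# the simplest possible "neural network" discriminant.
w = [0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(300):
    for (x1, x2), y in data:
        p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        g = p - y                      # gradient of the cross-entropy loss
        w[0] -= lr * g * x1
        w[1] -= lr * g * x2
        b -= lr * g

def tag(x1, x2, cut=0.5):
    """Return True if the jet is tagged as a b-jet."""
    return 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b))) > cut

correct = sum(tag(x1, x2) == bool(y) for (x1, x2), y in data)
accuracy = correct / len(data)
print(f"tagging accuracy on training sample: {accuracy:.2f}")
```

A realistic analysis would of course use many more input variables, a hidden layer, and separate training and test samples; this only shows the signal/background separation mechanism.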
Abstract:
Diagnostic radiology represents the largest man-made contribution to population radiation doses in Europe. To keep the diagnostic-benefit-to-radiation-risk ratio as high as possible, it is important to understand the quantitative relationship between the patient radiation dose and the various factors that affect the dose, such as the scan parameters, scan mode, and patient size. Paediatric patients have a higher probability of late radiation effects, since longer life expectancy is combined with the higher radiation sensitivity of developing organs. Experience with particular paediatric examinations may be very limited, and paediatric acquisition protocols may not be optimised. The purpose of this thesis was to enhance and compare different dosimetric protocols, to promote the establishment of paediatric diagnostic reference levels (DRLs), and to provide new data on patient doses for optimisation purposes in computed tomography (with new applications for dental imaging) and in paediatric radiography. Large variations in radiation exposure in paediatric skull, sinus, chest, pelvic and abdominal radiography examinations were discovered in patient dose surveys. There were variations between different hospitals and examination rooms, between different-sized patients, and between imaging techniques, emphasising the need for harmonisation of examination protocols. For computed tomography, a correction coefficient was created that takes individual patient size into account in patient dosimetry. The presented patient size correction method can be used for both adult and paediatric purposes. Dental cone beam CT scanners provided adequate image quality for dentomaxillofacial examinations while delivering considerably smaller effective doses to the patient compared with multi-slice CT. However, large dose differences between cone beam CT scanners were not explained by differences in image quality, which indicated a lack of optimisation.
For paediatric radiography, a graphical method was created for setting the diagnostic reference levels in chest examinations, and the DRLs were given as a function of patient projection thickness. Paediatric DRLs were also given for sinus radiography. The detailed information about patient data, exposure parameters, and procedures provided tools for reducing patient doses in paediatric radiography. The mean tissue doses presented for paediatric radiography enable future risk assessments. The calculated effective doses can be used for comparing different diagnostic procedures, as well as for comparing the use of similar technologies and procedures in different hospitals and countries.
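Patient-size corrections for CT dose are commonly parameterised as an exponential function of effective patient diameter. The sketch below uses that common functional form with an illustrative attenuation coefficient; the actual coefficient and parameterisation derived in the thesis may differ.

```python
import math

def size_corrected_dose(displayed_ctdi_vol, patient_diameter_cm,
                        reference_diameter_cm=32.0, mu=0.039):
    """Scale a displayed CTDIvol to an individual patient's size.

    Assumes an exponential size dependence, dose ~ exp(mu * (d_ref - d)):
    smaller patients absorb a larger dose than the phantom-based display
    value suggests. The coefficient mu (per cm) is illustrative, not the
    value derived in the thesis.
    """
    return displayed_ctdi_vol * math.exp(
        mu * (reference_diameter_cm - patient_diameter_cm))

# A paediatric abdomen (20 cm effective diameter) receives a higher
# size-specific dose than the 32 cm body-phantom CTDIvol suggests:
adult = size_corrected_dose(10.0, 32.0)   # reference size: no correction
child = size_corrected_dose(10.0, 20.0)
print(f"phantom dose {adult:.1f} mGy, size-corrected child dose {child:.1f} mGy")
```

The same function applies to adults and children, mirroring the thesis's claim that one size-correction method can serve both groups.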
Abstract:
This thesis concerns the dynamics of nanoparticle impacts on solid surfaces. These impacts occur, for instance, in space, where micro- and nanometeoroids hit surfaces of planets, moons, and spacecraft. On Earth, materials are bombarded with nanoparticles in cluster ion beam devices, in order to clean or smooth their surfaces, or to analyse their elemental composition. In both cases, the result depends on the combined effects of countless single impacts. However, the dynamics of single impacts must be understood before the overall effects of nanoparticle radiation can be modelled. In addition to applications, nanoparticle impacts are also important to basic research in the nanoscience field, because the impacts provide an excellent case to test the applicability of atomic-level interaction models to very dynamic conditions. In this thesis, the stopping of nanoparticles in matter is explored using classical molecular dynamics computer simulations. The materials investigated are gold, silicon, and silica. Impacts on silicon through a native oxide layer and formation of complex craters are also simulated. Nanoparticles up to a diameter of 20 nm (315000 atoms) were used as projectiles. The molecular dynamics method and interatomic potentials for silicon and gold are examined in this thesis. It is shown that the displacement cascade expansion mechanism and crater crown formation are very sensitive to the choice of atomic interaction model. However, the best of the current interatomic models can be utilized in nanoparticle impact simulation, if caution is exercised. The stopping of monatomic ions in matter is understood very well nowadays. However, interactions become very complex when several atoms impact on a surface simultaneously and within a short distance, as happens in a nanoparticle impact. A high energy density is deposited in a relatively small volume, which induces ejection of material and formation of a crater.
Very high yields of excavated material are observed experimentally. In addition, the yields scale nonlinearly with the cluster size and impact energy at small cluster sizes, whereas in macroscopic hypervelocity impacts, the scaling is linear. The aim of this thesis is to explore the atomistic mechanisms behind the nonlinear scaling at small cluster sizes. It is shown here that the nonlinear scaling of ejected material yield disappears at large impactor sizes because the stopping mechanism of nanoparticles gradually changes to the same mechanism as in macroscopic hypervelocity impacts. The high yields at small impactor size are due to the early escape of energetic atoms from the hot region. In addition, the sputtering yield is shown to depend very much on the spatial initial energy and momentum distributions that the nanoparticle induces in the material in the first phase of the impact. At the later phases, the ejection of material occurs by several mechanisms. The most important mechanism at high energies or at large cluster sizes is atomic cluster ejection from the transient liquid crown that surrounds the crater. The cluster impact dynamics detected in the simulations are in agreement with several recent experimental results. In addition, it is shown that relatively weak impacts can induce modifications on the surface of an amorphous target over a larger area than was previously expected. This is a probable explanation for the formation of the complex crater shapes observed on these surfaces with atomic force microscopy. Clusters that consist of hundreds of thousands of atoms induce long-range modifications in crystalline gold.
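The nonlinear-to-linear transition in yield scaling can be made concrete as a change in the log-log scaling exponent m in Y ∝ n^m (m > 1 is nonlinear, m = 1 linear). The snippet below fits that exponent from invented yield numbers chosen only to display the trend; they are not output of the thesis's simulations.

```python
import math

# Hypothetical sputtering yields Y (atoms ejected) versus cluster size n
# at fixed energy per atom -- illustrative numbers, not simulation data.
sizes = [10, 100, 1000, 10000]
yields = [50, 2000, 80000, 800000]

def loglog_slope(xs, ys):
    """Least-squares slope of log Y vs log n: the exponent m in Y ~ n^m."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    mx = sum(lx) / len(lx)
    my = sum(ly) / len(ly)
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

small = loglog_slope(sizes[:2], yields[:2])    # small clusters: m > 1
large = loglog_slope(sizes[-2:], yields[-2:])  # large clusters: m -> 1
print(f"scaling exponent: small clusters {small:.2f}, large clusters {large:.2f}")
```

The fitted exponent dropping toward 1 at large n mirrors the thesis's finding that large impactors stop by the same mechanism as macroscopic hypervelocity impacts.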
Abstract:
Close to one half of the LHC events are expected to be due to elastic or inelastic diffractive scattering. Still, predictions based on extrapolations of experimental data at lower energies differ by large factors in estimating the relative rate of diffractive event categories at LHC energies. By identifying diffractive events, detailed studies of proton structure can be carried out. Combined forward physics objects (rapidity gaps, forward multiplicity, and transverse energy flows) can be used to classify proton-proton collisions efficiently. Data samples recorded by the forward detectors, with a simple extension, will allow first estimates of the single diffractive (SD), double diffractive (DD), central diffractive (CD), and non-diffractive (ND) cross sections. The approach, which uses the measurement of inelastic activity in forward and central detector systems, is complementary to the detection and measurement of leading beam-like protons. In this investigation, three different multivariate analysis approaches are assessed in classifying forward physics processes at the LHC. It is shown that with gene expression programming, neural networks and support vector machines, diffraction can be efficiently identified within a large sample of simulated proton-proton scattering events. The event characteristics are visualized by using the self-organizing map algorithm.
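As a toy illustration of classifying events from forward observables, the sketch below separates three invented event classes with a nearest-centroid rule. The observables, their distributions, and the classifier are stand-ins: the thesis uses gene expression programming, neural networks, and support vector machines on fully simulated events.

```python
import math, random

random.seed(1)

# Toy forward-physics observables per event: (largest rapidity gap,
# forward transverse-energy flow). The class-dependent Gaussian
# parameters below are invented for illustration only.
CLASSES = {
    "ND": (0.5, 8.0),   # non-diffractive: small gap, high forward activity
    "SD": (3.0, 3.0),   # single diffractive: one large rapidity gap
    "DD": (2.0, 1.5),   # double diffractive: gaps, little forward energy
}

def simulate(label, n=200):
    gap_mu, et_mu = CLASSES[label]
    return [((random.gauss(gap_mu, 0.5), random.gauss(et_mu, 0.8)), label)
            for _ in range(n)]

events = [ev for lbl in CLASSES for ev in simulate(lbl)]

# Nearest-centroid classifier: the simplest stand-in for the thesis's
# multivariate methods (GEP, NN, SVM).
centroids = {}
for lbl in CLASSES:
    pts = [x for x, l in events if l == lbl]
    centroids[lbl] = (sum(p[0] for p in pts) / len(pts),
                      sum(p[1] for p in pts) / len(pts))

def classify(x):
    return min(centroids, key=lambda l: math.dist(x, centroids[l]))

accuracy = sum(classify(x) == l for x, l in events) / len(events)
print(f"toy classification accuracy: {accuracy:.2f}")
```

Even this crude rule separates the toy classes well, which is why richer multivariate methods can exploit the same forward observables on realistic, overlapping distributions.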
Abstract:
The TOTEM collaboration has developed and tested the first prototype of its Roman Pots to be operated in the LHC. TOTEM Roman Pots contain stacks of 10 silicon detectors with strips oriented in two orthogonal directions. To measure proton scattering angles of a few microradians, the detectors will approach the beam centre to a distance of 10 sigma + 0.5 mm (= 1.3 mm). Dead space near the detector edge is minimised by using two novel "edgeless" detector technologies. The silicon detectors are used both for precise track reconstruction and for triggering. The first full-sized prototypes of both detector technologies as well as their read-out electronics have been developed, built and operated. The tests took place first in a fixed-target muon beam at CERN's SPS, and then in the proton beam-line of the SPS accelerator ring. We present the test beam results demonstrating the successful functionality of the system despite slight technical shortcomings to be improved in the near future.
Abstract:
This paper investigates the clustering pattern in the Finnish stock market. Using trading volume and time as factors capturing the clustering pattern in the market, the Keim and Madhavan (1996) and Engle and Russell (1998) models provide the framework for the analysis. The descriptive and parametric analyses provide evidence that an important determinant of the famous U-shape pattern in the market is the rate of information arrivals as measured by large trading volumes and durations at the market open and close. Specifically, 1) the larger the trading volume, the greater the impact on prices both in the short and the long run, thus prices will differ across quantities; 2) large trading volume is a non-linear function of price changes in the long run; 3) arrival times are positively autocorrelated, indicating a clustering pattern; and 4) information arrivals as approximated by durations are negatively related to trading flow.
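Finding 3 (positively autocorrelated arrival times) can be reproduced in miniature with an Engle-Russell-style autoregressive conditional duration recursion. The stripped-down ACD(1,0) form and the parameter values below are illustrative only, not estimates from the Finnish data.

```python
import random

random.seed(7)

# Simulate trade durations from a minimal ACD(1,0) recursion,
#   x_i = psi_i * eps_i,  psi_i = omega + alpha * x_{i-1},
# with unit-mean exponential innovations (Engle-Russell style).
omega, alpha = 0.3, 0.6
x = [1.0]
for _ in range(5000):
    psi = omega + alpha * x[-1]
    x.append(psi * random.expovariate(1.0))

def autocorr(series, lag=1):
    """Sample autocorrelation at the given lag."""
    n = len(series) - lag
    mu = sum(series) / len(series)
    num = sum((series[i] - mu) * (series[i + lag] - mu) for i in range(n))
    den = sum((s - mu) ** 2 for s in series)
    return num / den

rho = autocorr(x)
print(f"lag-1 autocorrelation of simulated durations: {rho:.2f}")
```

A positive lag-1 autocorrelation of the simulated durations is exactly the clustering signature the paper measures in actual transaction data.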
Abstract:
This paper is concerned with using the bootstrap to obtain improved critical values for the error correction model (ECM) cointegration test in dynamic models. In the paper we investigate the effects of dynamic specification on the size and power of the ECM cointegration test with bootstrap critical values. The results from a Monte Carlo study show that the size of the bootstrap ECM cointegration test is close to the nominal significance level. We find that overspecification of the lag length results in a loss of power. Underspecification of the lag length results in size distortion. The performance of the bootstrap ECM cointegration test deteriorates if the correct lag length is not used in the ECM. The bootstrap ECM cointegration test is therefore not robust to model misspecification.
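A minimal sketch of the bootstrap idea, assuming a drastically simplified one-regressor error correction term with no intercept and no lagged differences: generate data under the null of no cointegration, then resample residuals to build the null distribution of the error-correction t-statistic. This is far simpler than the dynamic models studied in the paper.

```python
import random

random.seed(3)

def ols_t(y, x):
    """Slope t-statistic for y = b*x + e (one regressor, no intercept)."""
    sxx = sum(v * v for v in x)
    b = sum(a * v for a, v in zip(y, x)) / sxx
    resid = [a - b * v for a, v in zip(y, x)]
    s2 = sum(r * r for r in resid) / (len(y) - 1)
    return b / (s2 / sxx) ** 0.5, resid

# Data generated under the null: y is a pure random walk, so the lagged
# level ("error-correction term") has no true explanatory power.
n = 200
y = [0.0]
for _ in range(n):
    y.append(y[-1] + random.gauss(0, 1))
dy = [y[t + 1] - y[t] for t in range(n)]
t_obs, resid = ols_t(dy, y[:n])

# Residual bootstrap: rebuild the series under the null from resampled
# residuals and collect the distribution of the t-statistic.
boot = []
for _ in range(500):
    dy_star = [random.choice(resid) for _ in range(n)]
    y_star = [0.0]
    for d in dy_star:
        y_star.append(y_star[-1] + d)
    t_star, _ = ols_t(dy_star, y_star[:n])
    boot.append(t_star)

boot.sort()
crit_5pct = boot[int(0.05 * len(boot))]   # lower-tail 5% critical value
print(f"observed t = {t_obs:.2f}, bootstrap 5% critical value = {crit_5pct:.2f}")
```

Rejection occurs when the observed t-statistic falls below the bootstrap critical value. The paper's point is that this bootstrap calibration only works when the ECM's lag length is correctly specified.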
Abstract:
Over the last few decades, literary narratology has branched out into a wide array of ‘post-classical’ narratologies that have borrowed concepts from cognitive psychology, sociology, anthropology, history, linguistics, and other disciplines. The question arises to what extent ‘classical’ narratological concepts can also be successfully exported to other disciplines which have an interest in narrative. In this article, I apply the concept of ‘focalization’ as well as David Herman’s insights into doubly-deictic ‘you’ in second-person narratives to an interview narrative and further materials from my empirical sociolinguistic study on general practitioners’ narrative discourse on intimate partner abuse. I consider how the narrative positioning of the GP as storyteller and ‘protagonist’ of his story corresponds with his social and professional positioning with regard to his patients in the context of intimate partner violence cases and vis-à-vis the interviewer during the research interview. Focalization and double deixis are shown to become part of a narrative strategy whereby the narrator distances himself from his own personal self in the narrative and at the same time tries to align the interviewer with his viewpoint.
Abstract:
Silicon strip detectors are fast and cost-effective, and have excellent spatial resolution. They are widely used in many high-energy physics experiments. Modern high-energy physics experiments, such as those at the LHC, impose harsh operating conditions on the detectors. The high radiation doses cause the detectors to eventually fail as a result of excessive radiation damage. This has led to a need to study radiation tolerance using various techniques. At the same time, a need has arisen to operate sensors approaching the end of their lifetimes. The goal of this work is to demonstrate that novel detectors can survive the environment foreseen for future high-energy physics experiments. To reach this goal, measurement apparatuses were built and used to measure the properties of irradiated detectors; the measurement data were analyzed and conclusions drawn. Three measurement apparatuses built as part of this work are described: two telescopes measuring the tracks of a particle accelerator beam and one telescope measuring the tracks of cosmic particles. The telescopes comprise layers of reference detectors providing the reference track, slots for the devices under test, the supporting mechanics, electronics, software, and the trigger system. All three devices work, and the differences between them are discussed. The reconstruction of the reference tracks and the analysis of the device under test are presented. Traditionally, silicon detectors have produced a very clear response to the particles being measured. In the case of detectors nearing the end of their lifetimes, this is no longer true. A new method that benefits from the reference tracks to form clusters is presented. The method provides less biased results compared to the traditional analysis, especially when studying the response of heavily irradiated detectors. Means to avoid false results in demonstrating the particle-finding capabilities of a detector are also discussed.
The devices and analysis methods are primarily used to study strip detectors made of Magnetic Czochralski silicon. The detectors studied were irradiated to various fluences prior to measurement. The results show that Magnetic Czochralski silicon has a good radiation tolerance and is suitable for future high-energy physics experiments.
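The reference-track-seeded clustering idea can be sketched as follows: instead of searching for strips above a seed threshold (which biases results toward large signals), sum the charge in a fixed window around the strip predicted by the telescope track. The signal values, noise level, and window size below are assumptions for illustration, not test-beam data.

```python
# Track-seeded clustering sketch. A heavily irradiated detector leaves a
# small, spread-out signal that a fixed seed-threshold search might miss
# entirely; the reference track tells us where to integrate the charge.

NOISE = 2.0   # ADC counts per strip, an assumed typical noise figure

def seeded_cluster(strip_signals, predicted_strip, half_window=2):
    """Sum the signal in strips [predicted - hw, predicted + hw]."""
    lo = max(0, predicted_strip - half_window)
    hi = min(len(strip_signals), predicted_strip + half_window + 1)
    return sum(strip_signals[lo:hi])

# Illustrative strip signals around a hit; no single strip clears a
# 3*NOISE seed cut, yet the summed charge in the window clearly does.
signals = [0.5, -0.3, 0.8, 2.5, 3.0, 2.0, 0.2, -0.6, 0.4, 0.1]
predicted = 4                          # strip index from the telescope fit

charge = seeded_cluster(signals, predicted)
print(f"seeded cluster charge: {charge:.1f} (seed threshold {3 * NOISE:.1f})")
```

Because the window position comes from the telescope and not from the signal itself, the estimated charge is not conditioned on fluctuating above a threshold, which is why the method is less biased for heavily irradiated sensors.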
Improving outcome of childhood bacterial meningitis by simplified treatment: Experience from Angola
Abstract:
Background Acute bacterial meningitis (BM) continues to be an important cause of childhood mortality and morbidity, especially in developing countries. Prognostic scales and the identification of risk factors for adverse outcome both aid in assessing disease severity. New antimicrobial agents or adjunctive treatments - except for oral glycerol - have essentially failed to improve BM prognosis. A retrospective observational analysis found paracetamol beneficial in adult bacteraemic patients, and some experts recommend slow β-lactam infusion. We examined these treatments in a prospective, double-blind, placebo-controlled clinical trial. Patients and methods A retrospective analysis included 555 children treated for BM in 2004 in the infectious disease ward of the Paediatric Hospital of Luanda, Angola. Our prospective study randomised 723 children into four groups, to receive a combination of cefotaxime infusion or boluses every 6 hours for the first 24 hours and oral paracetamol or placebo for 48 hours. The primary endpoints were 1) death or severe neurological sequelae (SeNeSe), and 2) deafness. Results In the retrospective study, the mortality of children with blood transfusion was 23% (30 of 128) vs. 39% without blood transfusion (109 of 282; p=0.004). In the prospective study, 272 (38%) of the children died. Of the 451 survivors, 68 (15%) showed SeNeSe, and 12% (45 of 374) were deaf. Whereas no difference between treatment groups was observable in the primary endpoints, the early mortality in the infusion-paracetamol group was lower, with the difference (Fisher's exact test) from the other groups at 24, 48, and 72 hours being significant (p=0.041, 0.0005, and 0.005, respectively). Prognostic factors for adverse outcomes were impaired consciousness, dyspnoea, seizures, delayed presentation, and absence of electricity at home (Simple Luanda Scale, SLS); the Bayesian Luanda Scale (BLS) also included abnormally low or high blood glucose.
Conclusions New studies concerning the possible beneficial effect of blood transfusion, and concerning longer treatment with cefotaxime infusion and oral paracetamol, and a study to validate our simple prognostic scales are warranted.
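The Simple Luanda Scale, as described above, counts bedside risk factors. A sketch of such a count-based score is shown below; the equal weighting and the example patient are illustrative assumptions, and the published scale must be consulted before any clinical use.

```python
# Risk factors named in the abstract for the Simple Luanda Scale (SLS).
# Equal weights are an illustrative assumption, not the validated scale.
RISK_FACTORS = [
    "impaired_consciousness",
    "dyspnoea",
    "seizures",
    "delayed_presentation",
    "no_electricity_at_home",
]

def sls_score(patient):
    """Number of risk factors present (0-5); higher implies worse prognosis."""
    return sum(bool(patient.get(f)) for f in RISK_FACTORS)

# Hypothetical example patient:
patient = {
    "impaired_consciousness": True,
    "dyspnoea": False,
    "seizures": True,
    "delayed_presentation": True,
    "no_electricity_at_home": False,
}
score = sls_score(patient)
print(f"SLS score: {score}/5")
```

The abstract notes that the Bayesian Luanda Scale additionally incorporates abnormally low or high blood glucose, which a weighted variant of such a function could accommodate.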