74 results for Detector alignment and calibration methods (lasers, sources, particle-beams)


Relevance:

100.00%

Publisher:

Abstract:

Apart from common cases of differential argument marking, referential hierarchies affect argument marking in two ways: (a) through hierarchical marking, where markers compete for a slot and the competition is resolved by a hierarchy, and (b) through co-argument sensitivity, where the marking of one argument depends on the properties of its co-argument. Here we show that while co-argument sensitivity cannot be analyzed in terms of hierarchical marking, hierarchical marking can be analyzed in terms of co-argument sensitivity. Once hierarchical effects on marking are analyzed in terms of co-argument sensitivity, alignment patterns relative to referential categories can be examined in exactly the same way as in cases of differential argument marking and indeed under any other condition on alignment (such as tense or clause type). As a result, instances of hierarchical marking of any kind turn out not to present a special case in the typology of alignment, and there is no need to posit an additional non-basic alignment type such as “hierarchical alignment”. While hierarchies are thus not needed for descriptive and comparative purposes, we also cast doubt on their relevance in diachrony: examining two families for which hierarchical agreement has been postulated, Algonquian and Kiranti, we find only weak and very limited statistical evidence that agreement paradigms have been shaped by a principled ranking of person categories.

Relevance:

100.00%

Publisher:

Abstract:

The physics program of the NA61/SHINE (SHINE = SPS Heavy Ion and Neutrino Experiment) experiment at the CERN SPS consists of three subjects. In the first stage of data taking (2007-2009), measurements of hadron production in hadron-nucleus interactions needed for neutrino (T2K) and cosmic-ray (Pierre Auger and KASCADE) experiments will be performed. In the second stage (2009-2010), hadron production in proton-proton and proton-nucleus interactions, needed as reference data for a better understanding of nucleus-nucleus reactions, will be studied. In the third stage (2009-2013), the energy dependence of hadron production properties will be measured in p+p and p+Pb interactions and in nucleus-nucleus collisions, with the aim of identifying the properties of the onset of deconfinement and finding evidence for the critical point of strongly interacting matter. The NA61 experiment was approved at CERN in June 2007. The first pilot run was performed during October 2007. Calibrations of all detector components have been performed successfully and preliminary uncorrected spectra have been obtained. Track reconstruction and particle identification of a quality similar to NA49 have been achieved. The data and new detailed simulations confirm that the NA61 detector acceptance and particle identification capabilities cover the phase space required by the T2K experiment. This document reports on the progress made in the calibration and analysis of the 2007 data.

Relevance:

100.00%

Publisher:

Abstract:

High-resolution and highly precise age models for recent lake sediments (last 100–150 years) are essential for quantitative paleoclimate research. These are particularly important for sedimentological and geochemical proxies, where transfer functions cannot be established and calibration must be based upon the relation of sedimentary records to instrumental data. High-precision dating for the calibration period is most critical, as it directly determines the quality of the calibration statistics. Here, as an example, we compare radionuclide age models obtained on two high-elevation glacial lakes in the Central Chilean Andes (Laguna Negra: 33°38′S/70°08′W, 2,680 m a.s.l. and Laguna El Ocho: 34°02′S/70°19′W, 3,250 m a.s.l.). We show the different numerical models that produce accurate age-depth chronologies based on 210Pb profiles, and we explain how to obtain reduced age-error bars at the bottom part of the profiles, i.e., typically around the end of the 19th century. In order to constrain the age models, we propose a method with four steps: (i) sampling at irregularly-spaced intervals for 226Ra, 210Pb and 137Cs depending on the stratigraphy and microfacies, (ii) a systematic comparison of numerical models for the calculation of 210Pb-based age models: constant flux constant sedimentation (CFCS), constant initial concentration (CIC), constant rate of supply (CRS) and sediment isotope tomography (SIT), (iii) numerical constraining of the CRS and SIT models with the 137Cs chronomarker of AD 1964, and (iv) step-wise cross-validation with independent diagnostic environmental stratigraphic markers of known age (e.g., volcanic ash layers, historical floods and earthquakes). In both examples, we also use airborne pollutants such as spheroidal carbonaceous particles (reflecting the history of fossil fuel emissions), excess atmospheric Cu deposition (reflecting the production history of a large local Cu mine), and turbidites related to historical earthquakes. Our results show that the SIT model constrained with the 137Cs AD 1964 peak performs best over the entire chronological profile (last 100–150 years) and yields the smallest standard deviations for the sediment ages. Such precision is critical for the calibration statistics and, ultimately, for the quality of the quantitative paleoclimate reconstruction. The systematic comparison of CRS and SIT models also helps to validate the robustness of the chronologies in different sections of the profile. Although surprisingly poorly known and under-explored in paleolimnological research, the SIT model has great potential for paleoclimatological reconstructions based on lake sediments.
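
As a concrete illustration of how a 210Pb-based chronology is derived, the sketch below implements the CRS (constant rate of supply) age equation t(z) = (1/λ) ln(A(0)/A(z)), where A(z) is the cumulative unsupported 210Pb inventory below depth z and λ is the 210Pb decay constant. The profile values are hypothetical placeholders, and the SIT model and the 137Cs constraining discussed above are not reproduced here.

```python
import numpy as np

# Hypothetical 210Pb profile (illustrative values, not data from the study):
# slice midpoint depth (cm), unsupported 210Pb activity (Bq/kg), and dry
# mass per unit area of each slice (g/cm^2).
depth_mid = np.array([0.5, 1.5, 2.5, 3.5, 4.5])
activity = np.array([350.0, 210.0, 120.0, 70.0, 40.0])
dry_mass = np.array([0.30, 0.35, 0.40, 0.40, 0.45])

LAMBDA = np.log(2.0) / 22.3  # 210Pb decay constant (1/yr), half-life 22.3 yr

# Unsupported 210Pb inventory per slice: Bq/kg * g/cm^2 * 1e-3 kg/g -> Bq/cm^2
slice_inv = activity * dry_mass * 1e-3
total_inv = slice_inv.sum()                # A(0): whole-core inventory
below = total_inv - np.cumsum(slice_inv)   # A(z) below the bottom of each slice

# CRS age at the bottom of each slice: t = (1/lambda) * ln(A(0) / A(z))
with np.errstate(divide="ignore"):
    age_yr = np.where(below > 0, np.log(total_inv / below) / LAMBDA, np.inf)

for z, t in zip(depth_mid + 0.5, age_yr):
    print(f"bottom of slice at {z:.1f} cm: {t:6.1f} yr before coring")
```

In practice the deepest slices carry the largest age errors (A(z) approaches zero there), which is exactly why the abstract stresses constraining the models with the 137Cs AD 1964 chronomarker.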

Relevance:

100.00%

Publisher:

Abstract:

Modern mixed alluvial-bedrock channels in mountainous areas provide natural laboratories for understanding the time scales at which coarse-grained material has been entrained and transported from its sources to the adjacent sedimentary sink, where these deposits are preserved as conglomerates. This article assesses the shear stress conditions needed for the entrainment of the coarse-bed particles in the Glogn River, which drains the 400 km2 Val Lumnezia basin, eastern Swiss Alps. In addition, quantitative data are presented on sediment transport patterns in this stream. The longitudinal stream profile of this river is characterized by three ca 500 m long knickzones where channel gradients range from 0.02 to 0.2 m m−1, and where the valley bottom is confined to a <10 m wide gorge. Downstream of these knickzones, the stream is flat, with gradients <0.01 m m−1 and widths ≥30 m. Measurements of the grain-size distribution along the trunk stream yield a mean D84 value of ca 270 mm, whereas the mean D50 is ca 100 mm. The consequences of the channel morphology and the grain-size distribution for the time scales of sediment transport were explored using a one-dimensional step-backwater hydraulic model (Hydrologic Engineering Center River Analysis System, HEC-RAS). The results reveal that, along the entire trunk stream, a two to ten year return period flood event is capable of mobilizing both the D50 and D84 fractions where the Shields stress exceeds the critical Shields stress for the initiation of particle motion. These return periods, however, vary substantially depending on the channel geometry and the pebble/boulder size distribution of the supplied material. Accordingly, the stream exhibits a highly dynamic boulder cover behaviour. It is likely that similar time scales were also at work when coarse-grained conglomerates were deposited in the geological past.
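
The mobility criterion used in this kind of analysis can be made concrete with a short sketch: the Shields stress τ* = ρw g h S / ((ρs − ρw) g D), estimated here from the depth-slope product, is compared against a critical value. The flow depths and the critical Shields number below are illustrative assumptions, not the HEC-RAS results of the study; only the grain sizes and gradients come from the abstract.

```python
# Depth-slope / Shields mobility check: tau* = rho_w*g*h*S / ((rho_s - rho_w)*g*D).
# Flow depths and the critical Shields number are illustrative assumptions;
# the study itself derived shear stresses with the HEC-RAS model.
RHO_W, RHO_S, G = 1000.0, 2650.0, 9.81   # water/sediment density (kg/m^3), gravity (m/s^2)
TAU_C_STAR = 0.045                       # assumed critical Shields number

def shields_number(depth_m: float, slope: float, grain_m: float) -> float:
    """Dimensionless Shields stress from the depth-slope product."""
    tau_b = RHO_W * G * depth_m * slope              # boundary shear stress (Pa)
    return tau_b / ((RHO_S - RHO_W) * G * grain_m)

for reach, slope, depth in [("knickzone", 0.10, 1.5), ("flat reach", 0.008, 2.0)]:
    for name, d in [("D50", 0.10), ("D84", 0.27)]:   # grain sizes from the abstract (m)
        ts = shields_number(depth, slope, d)
        state = "mobile" if ts > TAU_C_STAR else "stable"
        print(f"{reach:10s} {name}: tau* = {ts:.3f} -> {state}")
```

With these placeholder depths the steep knickzone mobilizes both fractions while the flat reach barely moves the D84, which mirrors the strong dependence on channel geometry described above.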

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND Epidemiological studies show that elevated levels of particulate matter in ambient air are highly correlated with respiratory and cardiovascular diseases. Atmospheric particles originate from a large number of sources and have a highly complex and variable composition. An assessment of their potential health risks and the identification of the most toxic particle sources would require a large number of investigations. For ethical and economic reasons, it is desirable to reduce the number of in vivo studies and to develop suitable in vitro systems for the investigation of cell-particle interactions. METHODS We present the design of a new particle deposition chamber in which aerosol particles are deposited onto cell cultures out of a continuous air flow. The chamber allows for the simultaneous exposure of 12 cell cultures. RESULTS Physiological conditions within the deposition chamber can be sustained constantly at 36-37°C and 90-95% relative humidity. Particle deposition within the chamber, and especially on the cell cultures, was determined in detail, showing that during a deposition time of 2 hr, 8.4% (24% relative standard deviation) of particles with a mean diameter of 50 nm [mass median diameter of 100 nm (geometric standard deviation 1.7)] are deposited on the cell cultures, which is equal to 24-34% of all charged particles. The average well-to-well variability of particles deposited simultaneously in the 12 cell cultures during an experiment is 15.6% (24.7% relative standard deviation). CONCLUSIONS This particle deposition chamber is a new in vitro system for investigating realistic cell-particle interactions at physiological conditions, minimizing stress on the cell cultures other than that from the deposited particles. Detailed knowledge of the particle deposition characteristics on the cell cultures allows reliable dose-response relationships to be evaluated. The compact and portable design of the deposition chamber allows for measurements at any particle source of interest.
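
To show how the reported deposited fraction translates into a cell-delivered dose, here is a back-of-envelope sketch. The aerosol concentration, flow rate and well growth area are invented placeholders; only the 2 hr exposure, the 8.4% deposited fraction and the 12 wells come from the abstract.

```python
# Back-of-envelope dose estimate for a flow-through deposition chamber.
# Assumed (not from the paper): aerosol concentration, flow rate, well area.
concentration = 1.0e5        # particles per cm^3 of sampled aerosol (assumed)
flow_rate = 1000.0           # chamber air flow in cm^3/min (assumed)
exposure_min = 120.0         # 2 hr deposition time, as reported
deposited_fraction = 0.084   # 8.4% of particles deposit on the cultures
n_wells = 12                 # simultaneous cell cultures, as reported
well_area_cm2 = 1.0          # growth area per well (assumed)

particles_in = concentration * flow_rate * exposure_min
dose_per_area = particles_in * deposited_fraction / (n_wells * well_area_cm2)
print(f"estimated dose: {dose_per_area:.2e} particles/cm^2 per well")
```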

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a summary of beam-induced backgrounds observed in the ATLAS detector and discusses methods to tag and remove background-contaminated events in data. Trigger-rate-based monitoring of beam-related backgrounds is presented. The correlations of backgrounds with machine conditions, such as residual pressure in the beam-pipe, are discussed. Results from dedicated beam-background simulations are shown, and their qualitative agreement with data is evaluated. Data taken during the passage of unpaired, i.e. non-colliding, proton bunches are used to obtain background-enriched data samples. These are used to identify characteristic features of beam-induced backgrounds, which are then exploited to develop dedicated background tagging tools. These tools, based on observables in the Pixel detector, the muon spectrometer and the calorimeters, are described in detail and their efficiencies are evaluated. Finally, an example of an application of these techniques to a monojet analysis is given, which demonstrates the importance of such event-cleaning techniques for some new physics searches.

Relevance:

100.00%

Publisher:

Abstract:

Many of the interesting physics processes to be measured at the LHC have a signature involving one or more isolated electrons. The electron reconstruction and identification efficiencies of the ATLAS detector at the LHC have been evaluated using proton–proton collision data collected in 2011 at √s = 7 TeV and corresponding to an integrated luminosity of 4.7 fb−1. Tag-and-probe methods using events with leptonic decays of W and Z bosons and J/ψ mesons are employed to benchmark these performance parameters. The combination of all measurements results in identification efficiencies determined with an accuracy at the few per mil level for electron transverse energy greater than 30 GeV.
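
The core of the tag-and-probe idea can be written down in a few lines: the efficiency is the fraction of unbiased probes passing the identification requirement. The probe counts below are hypothetical, and the background subtraction and systematic variations of the real ATLAS measurement are omitted.

```python
import math

# Minimal tag-and-probe sketch: efficiency as the fraction of probes passing
# the identification requirement, with a simple binomial uncertainty.
# Probe counts are hypothetical, not ATLAS data.
def tp_efficiency(n_pass: int, n_fail: int) -> tuple[float, float]:
    n = n_pass + n_fail
    eff = n_pass / n
    err = math.sqrt(eff * (1.0 - eff) / n)  # binomial error on the fraction
    return eff, err

eff, err = tp_efficiency(n_pass=95_430, n_fail=4_570)  # hypothetical Z->ee probes
print(f"identification efficiency = {eff:.4f} +/- {err:.4f}")
```

The per-mil precision quoted in the abstract comes from very large W, Z and J/ψ samples, where the statistical term above shrinks and systematic effects dominate.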

Relevance:

100.00%

Publisher:

Abstract:

Over the last years, interest in proton radiotherapy has been increasing rapidly. Protons provide superior physical properties compared with the photons used in conventional radiotherapy. These properties result in depth-dose curves with a large dose peak at the end of the proton track, and the finite proton range allows sparing of the distally located healthy tissue. These properties offer increased flexibility in proton radiotherapy, but also increase the demand for accurate dose estimation. To carry out accurate dose calculations, an accurate and detailed characterization of the physical proton beam exiting the treatment head is first necessary for both currently available delivery techniques: scattered and scanned proton beams. Since Monte Carlo (MC) methods follow each particle track, simulating the interactions from first principles, this technique is perfectly suited to accurately model the treatment head. Nevertheless, careful validation of these MC models is necessary. While pencil beam algorithms provide the advantage of fast dose computations, they are limited in accuracy. In contrast, MC dose calculation algorithms overcome these limitations and, due to recent improvements in efficiency, are expected to improve the accuracy of the calculated dose distributions and to be introduced into clinical routine in the near future.
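
The physical property driving all of this, the finite range of the proton peak, can be illustrated with the empirical Bragg-Kleeman rule R ≈ α E^p for protons in water (α ≈ 0.0022 cm/MeV^p, p ≈ 1.77). This is a textbook approximation, not the MC treatment-head modelling discussed in the abstract.

```python
# Bragg-Kleeman range rule for protons in water: R = alpha * E^p, with
# alpha ~ 0.0022 cm/MeV^p and p ~ 1.77 (an empirical textbook fit, not
# the Monte Carlo treatment-head model discussed above).
ALPHA = 0.0022   # cm / MeV^p
P = 1.77

def range_in_water_cm(energy_mev: float) -> float:
    """Approximate continuous-slowing-down range of a proton in water."""
    return ALPHA * energy_mev ** P

for e in (70, 150, 230):   # typical clinical beam energies (MeV)
    print(f"{e:3d} MeV proton: range ~ {range_in_water_cm(e):4.1f} cm in water")
```

A 150 MeV beam stops after roughly 16 cm of water; a few MeV of energy error shifts the peak by millimeters, which is why the abstract stresses accurate beam characterization.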

Relevance:

100.00%

Publisher:

Abstract:

The aim of this paper is to provide a review of the general processes related to plasma sources, their transport, energization, and losses in the planetary magnetospheres of our solar system. We provide background information as well as the most up-to-date knowledge from comparative studies of planetary magnetospheres, with a focus on the plasma supply to each region of the magnetospheres. This review also includes the basic equations and modeling methods commonly used to simulate the plasma sources of planetary magnetospheres. We describe the source processes in Sect. 1, followed by the transport and energization processes that supply those source plasmas to the various regions of the magnetosphere in Sect. 2. Loss processes are also important for understanding the plasma population in the magnetosphere, and Sect. 3 is dedicated to their explanation. In Sect. 4, we briefly summarize the basic equations and modeling methods, with a focus on plasma supply processes for planetary magnetospheres.

Relevance:

100.00%

Publisher:

Abstract:

Intraoperative laparoscopic camera calibration remains a challenging task. In this work we present a new method and instrumentation for intraoperative camera calibration. In contrast to conventional calibration methods, the proposed technique allows intraoperative laparoscope calibration from single perspective observations, resulting in a standardized scheme for calibrating in a clinical scenario. Results show an average displacement error of 0.52 ± 0.19 mm, indicating sufficient accuracy for clinical use. Additionally, the proposed method has been validated clinically by performing a calibration during surgery.

Relevance:

100.00%

Publisher:

Abstract:

INTRODUCTION: The aim of apical surgery is to hermetically seal the root canal system after root-end resection, thereby enabling periradicular healing. The objective of this nonrandomized prospective clinical study was to report the results of 2 different root-end preparation and filling methods, i.e., mineral trioxide aggregate (MTA) and an adhesive resin composite (Retroplast). METHODS: The study included 353 consecutive cases with endodontic lesions limited to the periapical area. Root-end cavities were prepared with sonic microtips and filled with MTA (n = 178), or, alternatively, a shallow concavity was prepared in the cut root face with subsequent placement of an adhesive resin composite (Retroplast) (n = 175). Patients were recalled after 1 year. Cases were defined as healed when no clinical signs or symptoms were present and radiographs demonstrated complete or incomplete (scar tissue) healing of previous radiolucencies. RESULTS: The overall rate of healed cases was 85.5%. MTA-treated teeth demonstrated a significantly (P = .003) higher rate of healed cases (91.3%) compared with Retroplast-treated teeth (79.5%). Within the MTA group, 89.5%-100% of cases were classified as healed, depending on the type of treated tooth. In contrast, more variable rates, ranging from 66.7%-100%, were found in the Retroplast group. In particular, mandibular premolars and molars demonstrated considerably lower rates of healed cases when treated with Retroplast. CONCLUSIONS: MTA can be recommended for root-end filling in apical surgery, irrespective of the type of treated tooth. Retroplast should be used with caution for root-end sealing in apical surgery of mandibular premolars and molars.
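
The reported group difference can be sanity-checked with a standard two-proportion (chi-square) test; the counts below are approximately reconstructed from the abstract's group sizes and percentages, and this is a sketch rather than the study's own analysis.

```python
# Two-proportion check of the reported difference in healed rates, using
# counts reconstructed from the abstract (about 162/178 healed for MTA,
# 139/175 for Retroplast); not the study's own statistical analysis.
from scipy.stats import chi2_contingency

table = [[162, 178 - 162],   # MTA: healed, not healed
         [139, 175 - 139]]   # Retroplast: healed, not healed

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # p on the order of the reported .003
```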

Relevance:

100.00%

Publisher:

Abstract:

Particle biokinetics is important in the hazard identification and characterization of inhaled particles. Such studies aim to convert external exposure into internal exposure or a biologically effective dose, and may thereby help to set exposure limits. Here we focus on the biokinetics of inhaled nanometer-sized particles in comparison to micrometer-sized ones. The presented approach ranges from inhaled particle deposition probability and retention in the respiratory tract to biokinetics and clearance of particles out of the respiratory tract. Particle transport into the blood circulation (translocation), towards secondary target organs and tissues (accumulation), and out of the body (clearance) is considered. The macroscopically assessed amount of particles in the respiratory tract and secondary target organs provides dose estimates for toxicological studies on the level of the whole organism. Complementarily, microscopic analyses at the individual particle level provide detailed information about which cells and subcellular components are the targets of inhaled particles. These studies help to shed light on the mechanisms and modes of action that eventually lead to adverse health effects from inhaled nanoparticles. We review current methods for macroscopic and microscopic analyses of particle deposition, retention and clearance. Existing macroscopic knowledge on particle biokinetics and microscopic views on particle-organ interactions are discussed, comparing nanometer- and micrometer-sized particles. We emphasize the importance of quantitative analyses and of the use of particle doses derived from real-world exposures.

Relevance:

100.00%

Publisher:

Abstract:

Background Surgical risk scores, such as the logistic EuroSCORE (LES) and the Society of Thoracic Surgeons Predicted Risk of Mortality (STS) score, are commonly used to identify high-risk or “inoperable” patients for transcatheter aortic valve implantation (TAVI). In Europe, the LES plays an important role in selecting patients for implantation with the Medtronic CoreValve System. What is less clear, however, is the role of the STS score in these patients and the relationship between the LES and STS. Objective The purpose of this study is to examine the correlation between LES and STS scores and their performance characteristics in high-risk surgical patients implanted with the Medtronic CoreValve System. Methods All consecutive patients (n = 168) in whom a CoreValve bioprosthesis was implanted between November 2005 and June 2009 at 2 centers (Bern University Hospital, Bern, Switzerland, and Erasmus Medical Center, Rotterdam, The Netherlands) were included for analysis. Patient demographics were recorded in a prospective database. Logistic EuroSCORE and STS scores were calculated on a prospective and retrospective basis, respectively. Results Observed mortality was 11.1%. The mean LES was 3 times higher than the mean STS score (LES 20.2% ± 13.9% vs STS 6.7% ± 5.8%). Based on the various LES and STS cutoff values used in previous and ongoing TAVI trials, 53% of patients had an LES ≥15%, 16% had an STS ≥10%, and 40% had an LES ≥20% or an STS ≥10%. The Pearson correlation coefficient revealed a moderate linear relationship between the LES and STS scores (r = 0.58, P < .001). Although the STS score outperformed the LES, both models had suboptimal discriminatory power (c-statistic, 0.49 for LES and 0.69 for STS) and calibration. Conclusions Clinical judgment and the Heart Team concept should play a key role in selecting patients for TAVI, whereas currently available surgical risk score algorithms should be used to guide clinical decision making.
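
For readers unfamiliar with the two performance measures quoted above, the sketch below computes a Pearson correlation and a c-statistic (the ROC area under the curve) from synthetic data; the generated scores and outcomes are placeholders, not the study's patient data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 168                                             # cohort size, as reported
les = rng.lognormal(mean=3.0, sigma=0.5, size=n)    # synthetic LES values (%)
sts = 0.3 * les + rng.normal(0.0, 2.0, size=n)      # loosely correlated synthetic STS
died = rng.random(n) < 0.111                        # ~11.1% observed mortality

# Pearson correlation between the two risk scores
r = np.corrcoef(les, sts)[0, 1]
print(f"Pearson r = {r:.2f}")

# c-statistic: probability that a randomly chosen death carries a higher
# score than a randomly chosen survivor (equivalent to the ROC AUC).
def c_statistic(score: np.ndarray, outcome: np.ndarray) -> float:
    pos, neg = score[outcome], score[~outcome]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)

print(f"c-statistic (LES) = {c_statistic(les, died):.2f}")
print(f"c-statistic (STS) = {c_statistic(sts, died):.2f}")
```

A c-statistic of 0.5 means the score ranks patients no better than chance, which is why the reported 0.49 for the LES amounts to essentially no discriminatory power in this cohort.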