998 results for precision experiment


Relevance:

70.00%

Publisher:

Abstract:

In the case of a violation of CPT and Lorentz symmetry, the minimal Standard-Model Extension (SME) of Kostelecky and coworkers predicts sidereal modulations of atomic transition frequencies as the Earth rotates relative to a Lorentz-violating background field. One method to search for these modulations is the so-called clock-comparison experiment, in which the frequencies of co-located clocks are compared as they rotate with respect to the fixed stars. In this work an experiment is presented in which polarized 3He and 129Xe gas samples in a glass cell serve as clocks, whose nuclear spin precession frequencies are detected with the help of highly sensitive SQUID sensors inside a magnetically shielded room. The unique feature of this experiment is that the spins precess freely, with transverse relaxation times of up to 4.4 h for 129Xe and 14.1 h for 3He. To be sensitive to Lorentz-violating effects, the influence of external magnetic fields is canceled via the weighted difference of the 3He and 129Xe frequencies or phases. The Lorentz-violating SME parameters for the neutron are determined from a fit to the phase-difference data of 7 spin precession measurements of 12 to 16 hours in length. The fit gives an upper limit for the equatorial component of the neutron parameter b_n of 3.7×10^(−32) GeV at the 95% confidence level. This value is not limited by the signal-to-noise ratio, but by the strong correlations between the fit parameters. To reduce the correlations and thereby improve the sensitivity of future experiments, it will be necessary to change the time structure of the weighted phase difference, which can be realized by increasing the 129Xe relaxation time.
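
The cancellation of magnetic field drifts follows from the fact that both precession frequencies scale with the same guiding field. A minimal sketch of the weighting, with gyromagnetic ratios γ and possible non-magnetic shifts δ (notation introduced here for illustration, not quoted from the thesis):

    \omega_{He} = \gamma_{He} B + \delta_{He}, \qquad
    \omega_{Xe} = \gamma_{Xe} B + \delta_{Xe}
    \quad\Longrightarrow\quad
    \Delta\omega = \omega_{He} - \frac{\gamma_{He}}{\gamma_{Xe}}\,\omega_{Xe}
                 = \delta_{He} - \frac{\gamma_{He}}{\gamma_{Xe}}\,\delta_{Xe}.

Any contribution proportional to B(t) drops out of the weighted difference, while frequency shifts of non-magnetic origin, such as the sought Lorentz-violating coupling, survive; applying the same weighting to the accumulated phases yields the weighted phase difference used in the fit.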

Relevance:

70.00%

Publisher:

Abstract:

Light pseudoscalar bosons, such as the axion originally proposed as a solution to the strong CP problem, would give rise to a new spin-dependent short-range interaction. In this thesis, an experiment is presented to search for an axion-mediated short-range interaction between a nucleon and the spin of a polarized bound neutron. This interaction causes a shift in the precession frequency of nuclear-spin-polarized gases in the presence of an unpolarized mass. To eliminate magnetic field drifts, co-located nuclear-spin-polarized 3He and 129Xe atoms were used. The free nuclear spin precession frequencies were measured in a homogeneous magnetic guiding field of about 350 nT using low-Tc SQUID detectors. The whole setup was housed in a magnetically shielded room at the Physikalisch-Technische Bundesanstalt (PTB) in Berlin. With this setup, long nuclear spin-coherence times, i.e. transverse relaxation times of 5 h for 129Xe and 53 h for 3He, could be achieved. The results of the last run in September 2010 are presented, which give new upper limits on the scalar-pseudoscalar coupling of axion-like particles in the axion-mass window from 10^(-2) eV to 10^(-6) eV. The laboratory upper bounds were improved by up to 4 orders of magnitude.
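
For orientation, the spin-dependent short-range interaction being searched for is commonly written in the Moody-Wilczek monopole-dipole form (a standard convention from the literature, not quoted from the thesis):

    V(r) = \frac{\hbar^2 g_s g_p}{8\pi m_n}\,(\hat{\sigma}\cdot\hat{r})
           \left(\frac{1}{r\lambda} + \frac{1}{r^2}\right) e^{-r/\lambda},
    \qquad \lambda = \frac{\hbar}{m_a c},

where g_s g_p is the scalar-pseudoscalar coupling constrained by the measurement, σ̂ is the neutron spin direction, m_n the neutron mass, and λ the interaction range set by the axion mass m_a; the quoted mass window of 10^(-6) eV to 10^(-2) eV corresponds roughly to ranges between 20 cm and 20 µm.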

Relevance:

40.00%

Publisher:

Abstract:

In general, the classical physics courses offered at universities lack examples of applications in chemistry and biology, which sometimes discourages undergraduate students in these areas from studying the physical concepts developed in the classroom. In this text, the analogy between the electrical and the mechanical oscillator is explored with a view to possible applications in chemistry and biology, and proves to be of great value because of its use in techniques for measuring mass variation with high precision, both directly and indirectly. These techniques are known as electrogravimetric techniques and are of special importance in applications involving biosensors. The text thus explores the electromechanical analogy in an interdisciplinary way involving physics, chemistry and biology. Based on this analogy, an experiment is proposed that can be applied at different conceptual levels of these disciplines, in both a basic and a more advanced approach.
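
The analogy referred to in the text can be made explicit by comparing the two equations of motion (a standard correspondence, written out here for completeness):

    m\ddot{x} + b\dot{x} + kx = F(t)
    \quad\longleftrightarrow\quad
    L\ddot{q} + R\dot{q} + \frac{q}{C} = V(t),

with the correspondences m ↔ L, b ↔ R, k ↔ 1/C and x ↔ q, so that both oscillators share the same form of resonance frequency, ω0 = √(k/m) = 1/√(LC). Electrogravimetric mass sensing exploits exactly this point: a small added mass Δm shifts the mechanical resonance, and the shift can be read out electrically with high precision.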

Relevance:

40.00%

Publisher:

Abstract:

This thesis deals with the development and construction of an experiment for the high-precision determination of the g-factor of electrons bound in highly charged ions. The g-factor of a particle is a dimensionless constant that describes the strength of its interaction with a magnetic field. In the case of an electron bound to a highly charged ion, it serves as one of the most accurate tests of bound-state quantum electrodynamics (BS-QED). The measurement is carried out in a triple Penning-trap system and is based on the continuous Stern-Gerlach effect. The first part of this work reviews the current state of knowledge on magnetic moments and motivates the experimental approach chosen here. The experimental requirements and the measurement techniques employed are then explained. The charge breeding of the ions, one of the central tasks of this work, is presented. Its realization is based on a field-emission point array, which also allows the cross-section for electron-impact ionization to be measured. The last part of the thesis is devoted to the design and construction of the Penning-trap system and to the implementation of the detection process. At present, the setup for the production of highly charged ions and for the corresponding measurement of the g-factor is complete, including the control program for the first data taking. Ion production and charge breeding will be the next steps.
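
In Penning-trap experiments of this kind the g-factor is typically extracted from the ratio of the Larmor (spin-precession) frequency to the ion's cyclotron frequency, so that the magnetic field cancels from the result; sketched here under the standard conventions (an assumption about the measurement principle, not a quotation from the thesis):

    g = 2\,\frac{\nu_L}{\nu_c}\cdot\frac{q_{ion}}{e}\cdot\frac{m_e}{m_{ion}},
    \qquad
    \nu_c = \frac{1}{2\pi}\,\frac{q_{ion} B}{m_{ion}},

where the continuous Stern-Gerlach effect provides the spin-state readout needed to trace out the Larmor resonance, and the ion's free cyclotron frequency is determined from the trap eigenfrequencies.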

Relevance:

40.00%

Publisher:

Abstract:

Hypernuclear physics is currently attracting renewed interest, due to the important role of hypernuclei spectroscopy (hyperon-hyperon and hyperon-nucleon interactions) as a unique tool to describe the baryon-baryon interactions in a unified way and to understand the origin of their short range.

Hypernuclear research will be one of the main topics addressed by the PANDA experiment at the planned Facility for Antiproton and Ion Research (FAIR). Thanks to the use of stored antiproton beams, copious production of double Λ hypernuclei is expected at the PANDA experiment, which will enable high-precision γ spectroscopy of such nuclei for the first time. At PANDA, excited states of Ξ⁻ hypernuclei will be used as a basis for the formation of double Λ hypernuclei. For their detection, a dedicated hypernuclear detector setup is planned. This setup consists of a primary nuclear target for the production of Ξ⁻ + anti-Ξ pairs, a secondary active target for the hypernuclei formation and the identification of associated decay products, and a germanium array detector to perform γ spectroscopy.

In the present work, the feasibility of performing high-precision γ spectroscopy of double Λ hypernuclei at the PANDA experiment has been studied by means of a Monte Carlo simulation. For this purpose, the design and simulation of the dedicated detector setup as well as of the mechanism to produce double Λ hypernuclei have been optimized together with the performance of the whole system. In addition, the production yields of double hypernuclei in excited particle-stable states have been evaluated within a statistical decay model.

A strategy for the unique assignment of various newly observed γ-transitions to specific double hypernuclei has been successfully implemented by combining the predicted energy spectra of each target with the measurement of two pion momenta from the subsequent weak decays of a double hypernucleus.

For the background handling, a method based on time measurement has also been implemented. However, the percentage of tagged events related to the production of Ξ⁻ + anti-Ξ pairs varies between 20% and 30% of the total number of produced events of this type. As a consequence, further considerations have to be made to increase the tagging efficiency by a factor of 2.

The contribution of the background reactions to the radiation damage of the germanium detectors has also been studied within the simulation. Additionally, a test to check the degradation of the energy resolution of the germanium detectors in the presence of a magnetic field has been performed. No significant degradation of the energy resolution or of the electronics was observed. A correlation between rise time and pulse shape has been used to correct the measured energy.

Based on the present results, one can say that γ spectroscopy of double Λ hypernuclei at the PANDA experiment appears feasible. A further improvement of the statistics is needed for the background-rejection studies. Moreover, a more realistic layout of the hypernuclear detectors has been suggested, using the results of these studies to achieve a better balance between the physical and the technical requirements.
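
For reference, the two-step production mechanism exploited here can be summarized as follows (standard reaction chain, written out for clarity):

    \bar{p} + p \to \Xi^- + \bar{\Xi}^+ \quad (\text{primary target}),
    \qquad
    \Xi^- + p \to \Lambda + \Lambda \;(+\,\sim 28\ \mathrm{MeV}) \quad (\text{capture in the secondary target}),

where the slowed-down Ξ⁻ is captured in a nucleus of the secondary active target and its conversion on a proton releases the two Λ hyperons that may remain bound as a double Λ hypernucleus.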

Relevance:

30.00%

Publisher:

Abstract:

iTRAQ (isobaric tags for relative and absolute quantitation) is a mass spectrometry technology that allows quantitative comparison of protein abundance by measuring peak intensities of reporter ions released from iTRAQ-tagged peptides by fragmentation during MS/MS. However, current data analysis techniques for iTRAQ struggle to report reliable relative protein abundance estimates and suffer from problems of precision and accuracy. The precision of the data is affected by variance heterogeneity: low signal data have higher relative variability; however, low abundance peptides dominate data sets. Accuracy is compromised as ratios are compressed toward 1, leading to underestimation of the ratio. This study investigated both issues and proposed a methodology that combines the peptide measurements to give a robust protein estimate even when the data for the protein are sparse or at low intensity. Our data indicated that ratio compression arises from contamination during precursor ion selection, which occurs at a consistent proportion within an experiment and thus results in a linear relationship between expected and observed ratios. We proposed that a correction factor can be calculated from spiked proteins at known ratios. Then we demonstrated that variance heterogeneity is present in iTRAQ data sets irrespective of the analytical packages, LC-MS/MS instrumentation, and iTRAQ labeling kit (4-plex or 8-plex) used. We proposed using an additive-multiplicative error model for peak intensities in MS/MS quantitation and demonstrated that a variance-stabilizing normalization is able to address the error structure and stabilize the variance across the entire intensity range. The resulting uniform variance structure simplifies the downstream analysis. Heterogeneity of variance consistent with an additive-multiplicative model has been reported in other MS-based quantitation including fields outside of proteomics; consequently the variance-stabilizing normalization methodology has the potential to increase the capabilities of MS in quantitation across diverse areas of biology and chemistry.
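
As an illustration of the proposed approach, the sketch below applies a generalised-log (arsinh-type) transform, which stabilizes the variance under an additive-multiplicative error model; it is a minimal stand-in, not the authors' implementation, and the offset and scale parameters would in practice be estimated from the data:

    import numpy as np

    def glog(y, a=0.0, b=1.0):
        """Generalised-log transform: arsinh((y - a) / b).

        Under an additive-multiplicative error model,
            y = alpha + mu * exp(eta) + eps,
        this transform gives approximately constant variance
        across the whole intensity range.
        """
        z = (y - a) / b
        return np.log(z + np.sqrt(z**2 + 1.0))  # identical to arsinh(z)

    # Toy usage: reporter-ion intensities spanning several orders of magnitude.
    intensities = np.array([50.0, 120.0, 1.5e3, 4.0e4, 2.5e5])
    stabilised = glog(intensities)

On the transformed scale, low- and high-intensity peptide measurements can be combined with uniform weighting, which is what simplifies the downstream protein-level summarization.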

Relevance:

30.00%

Publisher:

Abstract:

Since 1995 the eruption of the andesitic Soufrière Hills Volcano (SHV), Montserrat, has been studied in substantial detail. As an important contribution to this effort, the Seismic Experiment with Airgun-source Caribbean Andesitic Lava Island Precision Seismo-geodetic Observatory (SEA-CALIPSO) experiment was devised to image the arc crust underlying Montserrat and, if possible, the magma system at SHV using tomography and reflection seismology. Field operations were carried out in October-December 2007, with deployment of 238 seismometers on land supplementing seven volcano observatory stations, and with an array of 10 ocean-bottom seismometers deployed offshore. The RRS James Cook on NERC cruise JC19 towed a tuned airgun array plus a digital 48-channel streamer on encircling and radial tracks for 77 h about Montserrat during December 2007, firing 4414 airgun shots and yielding about 47 Gb of data. The main objectives of the experiment were achieved. Preliminary analyses of these data, published in 2010, generated images of heterogeneous high-velocity bodies representing the cores of volcanoes and subjacent intrusions, and of shallow areas of low velocity on the flanks of the island that reflect volcaniclastic deposits and hydrothermal alteration. The resolution of this preliminary work did not extend beyond 5 km depth. An improved three-dimensional (3D) seismic velocity model was then obtained by inversion of 181 665 first-arrival travel times from a more complete sampling of the dataset, yielding clear images to 7.5 km depth of a low-velocity volume that was interpreted as the magma chamber which feeds the current eruption, with an estimated volume of 13 km^3. Coupled thermal and seismic modelling revealed properties of the partly crystallized magma. Seismic reflection analyses aimed at imaging structures under southern Montserrat had limited success, and suggest subhorizontal layering interpreted as sills at a depth of between 6 and 19 km. Seismic reflection profiles collected offshore reveal deep fans of volcaniclastic debris and fault offsets, leading to new tectonic interpretations. This chapter presents the project goals and planning concepts, describes in detail the campaigns at sea and on land, summarizes the major results, and identifies the key lessons learned.

Relevance:

30.00%

Publisher:

Abstract:

Eutrophication of the Baltic Sea is a serious problem. This thesis estimates the benefit to Finns from reduced eutrophication in the Gulf of Finland, the most eutrophied part of the Baltic Sea, by applying the choice experiment method, which belongs to the family of stated preference methods. Because stated preference methods have been subject to criticism, e.g. due to their hypothetical survey context, this thesis contributes to the discussion by studying two anomalies that may lead to biased welfare estimates: respondent uncertainty and preference discontinuity. The former refers to the difficulty of stating one's preferences for an environmental good in a hypothetical context. The latter implies a departure from the continuity assumption of conventional consumer theory, which forms the basis for the method and the analysis. In the three essays of the thesis, discrete choice data are analyzed with the multinomial logit and mixed logit models. On average, Finns are willing to contribute to the water quality improvement. The probability of willingness increases with residential or recreational contact with the gulf, higher than average income, younger than average age, and the absence of dependent children in the household. On average, for Finns the relatively most important characteristic of water quality is water clarity, followed by the desire for fewer occurrences of blue-green algae. For future nutrient reduction scenarios, the annual mean household willingness to pay estimates range from 271 to 448 euros and the aggregate welfare estimates for Finns range from 28 billion to 54 billion euros, depending on the model and the intensity of the reduction. Of the respondents (N=726), 72.1% state in a follow-up question that they are either "Certain" or "Quite certain" about their answer when choosing the preferred alternative in the experiment. Based on the analysis of other follow-up questions and another sample (N=307), 10.4% of the respondents are identified as potentially having discontinuous preferences. In relation to both anomalies, respondent- and questionnaire-specific variables are found among the underlying causes, and a departure from standard analysis may improve the model fit and the efficiency of estimates, depending on the chosen modeling approach. The introduction of uncertainty about the future state of the Gulf increases the acceptance of the valuation scenario, which may indicate increased credibility of the proposed scenario. In conclusion, modeling preference heterogeneity is an essential part of the analysis of discrete choice data. The results regarding uncertainty in stating one's preferences and non-standard choice behavior are promising: accounting for these anomalies in the analysis may improve the precision of the estimates of the benefit from reduced eutrophication in the Gulf of Finland.
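
For context, in the conditional logit underlying these estimates the choice probabilities and the marginal willingness to pay take the standard form (generic notation, not specific to this thesis):

    P(i) = \frac{\exp(V_i)}{\sum_j \exp(V_j)},
    \qquad
    V_i = \beta_{cost}\, c_i + \sum_k \beta_k x_{ik},
    \qquad
    WTP_k = -\frac{\beta_k}{\beta_{cost}},

with the mixed logit additionally letting the β coefficients vary randomly across respondents, which is how the preference heterogeneity discussed above enters the analysis.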

Relevance:

30.00%

Publisher:

Abstract:

The TOTEM experiment at the LHC will measure the total proton-proton cross-section with a precision better than 1%, elastic proton scattering over a wide range in momentum transfer (-t ≈ p^2 theta^2) up to 10 GeV^2, and diffractive dissociation, including single, double and central diffraction topologies. The total cross-section will be measured with the luminosity-independent method, which requires the simultaneous measurement of the total inelastic rate and of elastic proton scattering down to four-momentum transfers of a few 10^-3 GeV^2, corresponding to leading protons scattered at angles of microradians from the interaction point. This will be achieved using silicon microstrip detectors, which offer attractive properties such as good spatial resolution (<20 um), fast response (O(10 ns)) to particles, and radiation hardness up to 10^14 "n"/cm^2. This work reports on the development of an innovative structure at the detector edge that reduces the conventional dead width of 0.5-1 mm to 50-60 um, compatible with the requirements of the experiment.
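
The luminosity-independent method combines the optical theorem with the simultaneously measured elastic and inelastic rates; in its usual form (natural units),

    \sigma_{tot} = \frac{16\pi}{1+\rho^2}\cdot\frac{(dN_{el}/dt)\big|_{t=0}}{N_{el} + N_{inel}},

where ρ is the ratio of the real to the imaginary part of the forward elastic amplitude, so that the machine luminosity cancels between the extrapolated elastic rate in the numerator and the total rate in the denominator.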

Relevance:

30.00%

Publisher:

Abstract:

Based on the scaling criteria for polymer-flooding reservoirs obtained in our previous work, in which gravity and capillary forces, compressibility, non-Newtonian behavior, adsorption, dispersion, and diffusion are considered, eight partial similarity models are designed. A new numerical approach to sensitivity analysis is suggested to quantify the dominance degree of the relaxed dimensionless parameters of a partial similarity model. A sensitivity factor quantifying the dominance degree of a relaxed dimensionless parameter is defined. By solving the dimensionless governing equations including all dimensionless parameters, the sensitivity factor of each relaxed dimensionless parameter is calculated for each partial similarity model; thus, the dominance degree of each relaxed parameter is quantitatively determined. Based on the sensitivity analysis, the effect coefficient of a partial similarity model is defined as the sum, over the relaxed dimensionless parameters, of the product of each parameter's sensitivity factor and its relative relaxation quantity. The effect coefficient is used as a criterion to evaluate each partial similarity model; the partial similarity model with the smallest effect coefficient can then be singled out as the best approximation to the prototype. Results show that the precision of a partial similarity model is determined not only by the number of satisfied dimensionless parameters but also by the relative relaxation quantities of the relaxed ones.
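
Written out, the evaluation criterion described above amounts to (notation introduced here for illustration, not taken verbatim from the paper):

    E = \sum_i S_i \,\frac{|\Delta\pi_i|}{\pi_i},

where π_i is a relaxed dimensionless parameter, |Δπ_i|/π_i its relative relaxation quantity, and S_i the sensitivity factor obtained from the numerical solution of the full set of dimensionless governing equations; the partial similarity model with the smallest effect coefficient E is selected as the best approximation to the prototype.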

Relevance:

30.00%

Publisher:

Abstract:

The brain encodes visual information with limited precision. Contradictory evidence exists as to whether the precision with which an item is encoded depends on the number of stimuli in a display (set size). Some studies have found evidence that precision decreases with set size, but others have reported constant precision. These groups of studies differed in two ways. The studies that reported a decrease used displays with heterogeneous stimuli and tasks with a short-term memory component, while the ones that reported constancy used homogeneous stimuli and tasks that did not require short-term memory. To disentangle the effects of heterogeneity and short-term memory involvement, we conducted two main experiments. In Experiment 1, stimuli were heterogeneous, and we compared a condition in which target identity was revealed before the stimulus display with one in which it was revealed afterward. In Experiment 2, target identity was fixed, and we compared heterogeneous and homogeneous distractor conditions. In both experiments, we compared an optimal-observer model in which precision is constant with set size with one in which it depends on set size. We found that precision decreases with set size when the distractors are heterogeneous, regardless of whether short-term memory is involved, but not when they are homogeneous. This suggests that heterogeneity, not short-term memory, is the critical factor. In addition, we found that precision exhibits variability across items and trials, which may partly be caused by attentional fluctuations.
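
A common way to formalize the two hypotheses being compared is a power-law dependence of encoding precision on set size (a generic parameterization, used here only to make the contrast explicit):

    J(N) = J_1\, N^{-\alpha},

where J is the precision (inverse variance) with which a single item is encoded, N is the set size, α = 0 corresponds to the constant-precision model, and α > 0 to precision decreasing with set size.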

Relevance:

30.00%

Publisher:

Abstract:

Looking for a target in a visual scene becomes more difficult as the number of stimuli increases. In a signal detection theory view, this is due to the cumulative effect of noise in the encoding of the distractors, and, potentially on top of that, to an increase of the noise (i.e., a decrease of precision) per stimulus with set size, reflecting divided attention. It has long been argued that human visual search behavior can be accounted for by the first factor alone. While such an account seems to be adequate for search tasks in which all distractors have the same, known feature value (i.e., are maximally predictable), we recently found a clear effect of set size on encoding precision when distractors are drawn from a uniform distribution (i.e., when they are maximally unpredictable). Here we interpolate between these two extreme cases to examine which of the two conclusions holds more generally as distractor statistics are varied. In one experiment, we vary the level of distractor heterogeneity; in another we dissociate distractor homogeneity from predictability. In all conditions in both experiments, we found a strong decrease of precision with increasing set size, suggesting that precision being independent of set size is the exception rather than the rule.

Relevance:

30.00%

Publisher:

Abstract:

In order to realize steady-state droplet evaporation, an image-feedback control system based on a DSP is designed. The system has three main functions: to capture and store droplet images during the experiment; to calculate droplet geometrical and physical parameters such as volume, surface area, surface tension and evaporation velocity with high precision; and to keep the droplet volume constant. The DSP drives an injection controller with PID control to inject liquid so as to keep the droplet volume constant. The evaporation velocity of the droplet can be calculated by measuring the injected volume during the evaporation. The hardware and software structure of the control system, key processing methods such as contour fitting, and experimental results are described.
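
A minimal sketch of the volume-holding loop described above (illustrative only; the gains, sampling period and the image-derived volume estimate are placeholders, not values from the paper):

    def pid_injection_step(v_target, v_measured, state,
                           kp=1.0, ki=0.1, kd=0.05, dt=0.1):
        """One PID update: returns the liquid volume to inject this cycle."""
        error = v_target - v_measured              # volume lost to evaporation
        state["integral"] += error * dt
        derivative = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        u = kp * error + ki * state["integral"] + kd * derivative
        return max(u, 0.0)                         # the injector cannot withdraw liquid

    # Toy usage: the injected volume per cycle then tracks the evaporation rate,
    # which is how the evaporation velocity can be inferred from the injected volume.
    state = {"integral": 0.0, "prev_error": 0.0}
    injected = pid_injection_step(v_target=2.0, v_measured=1.95, state=state)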

Relevance:

30.00%

Publisher:

Abstract:

Scepticism over stated preference surveys conducted online revolves around concerns over "professional respondents" who might rush through the questionnaire without sufficiently considering the information provided. To gain insight into the validity of this phenomenon and to test the effect of response time on choice randomness, this study makes use of a recently conducted choice experiment survey on the ecological and amenity effects of an offshore windfarm in the UK. The positive relationship between self-rated and inferred attribute attendance and response time is taken as evidence for a link between response time and cognitive effort. Subsequently, the generalised multinomial logit model is employed to test the effect of response time on scale, which indicates the weight of the deterministic relative to the error component in the random utility model. Results show that longer response time increases scale, i.e. decreases choice randomness. This positive scale effect of response time is further found to be non-linear and to wear off at some point, beyond which extremely long response times decrease scale. While response time does not systematically affect welfare estimates, longer response time increases the precision of such estimates. These effects persist when self-reported choice certainty is controlled for. Implications of the results for online stated preference surveys and further research are discussed.
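
The role of the scale parameter can be seen directly in the choice probabilities (generic formulation; the generalised multinomial logit additionally allows the scale to vary across respondents):

    P(i) = \frac{\exp(\lambda V_i)}{\sum_j \exp(\lambda V_j)},

where the scale λ is inversely related to the variance of the error component, so a larger scale makes choices more deterministic; this is the sense in which longer response times are found to decrease choice randomness.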

Relevance:

30.00%

Publisher:

Abstract:

In order to carry out high-precision machining of aerospace structural components with large size, thin walls and complex surfaces, this paper proposes a novel parallel kinematic machine (PKM) and formulates its semi-analytical theoretical stiffness model, considering gravitational effects, which is verified by stiffness experiments. From the viewpoint of topology, the novel PKM consists of two substructures, a redundant and an overconstrained parallel mechanism, that are connected by two interlinked revolute joints. The theoretical stiffness model of the novel PKM is established based upon the virtual work principle and the deformation superposition principle, after mapping the stiffness models of the substructures from joint space to operational space by Jacobian matrices and considering the deformation contributions of the interlinked revolute joints to the two substructures. Meanwhile, the component gravities are treated as external payloads exerted on the end reference point of the novel PKM by means of the static equivalence principle. This approach is validated by comparing the theoretical stiffness values with experimental stiffness values in the same configurations, which also indicates that an equivalent gravity can describe the actual distributed gravities with acceptable accuracy. Finally, on the basis of the verified theoretical stiffness model, the stiffness distributions of the novel PKM are illustrated and the contributions of the component gravities to the stiffness of the novel PKM are discussed.
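
The joint-space-to-operational-space mapping mentioned above follows the usual Jacobian-based stiffness transformation (standard small-deflection form, stated here for reference rather than taken from the paper):

    K_{op} = J^{T} K_q\, J,
    \qquad
    \delta x = K_{op}^{-1}\,(F_{ext} + W_g),

where K_q collects the joint and limb stiffnesses, J maps end-effector velocities to joint rates, F_ext is the external payload and W_g the equivalent gravity wrench applied at the end reference point; summing such contributions for the two substructures and the interlinked revolute joints gives the overall stiffness model.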