945 results for Precision Xtra®


Relevance: 10.00%

Abstract:

This thesis focuses on testing sleepiness quantitatively. The issue is relevant to policymakers concerned with traffic and occupational safety, as such testing provides a tool for safety legislation and surveillance. The findings of this thesis provide guidelines for a posturographic sleepiness tester. Sleepiness ensuing from staying awake for a mere 17 h impairs performance as much as the legal blood alcohol concentration limit of 0.5‰ does. Hence, sleepiness is a major risk factor in transportation and occupational accidents. Unlike breath testing for alcohol intoxication, however, the lack of convenient commercial sleepiness tests precludes testing impending sleepiness levels. Posturography is a potential sleepiness test, since clinical diurnal balance testing suggests the hypothesis that time awake could be estimated posturographically. Relying on this hypothesis, this thesis examines posturographic sleepiness testing for instrumentation purposes. Empirical results from 63 subjects, whose balance we tested with a force platform during up to 36 h of sustained wakefulness, show that sustained wakefulness impairs balance. The results show that time awake can be estimated posturographically with 88% accuracy and 97% precision, which validates our hypothesis. Results also show that balance scores tested at 13:30 hours serve as a threshold to detect excessive sleepiness. Analytical results show that test length has a marked effect on estimation accuracy: 18 s tests suffice to identify sleepiness-related balance changes, but trade off some of the accuracy achieved with 30 s tests. The procedure to estimate time awake relies on matching the subject's test score against a reference table (comprising balance scores tested during sustained wakefulness, regressed against time awake). Empirical results showed that sustained wakefulness explains 60% of the diurnal balance variations, whereas the time of day explains 40%. The latter fact implies that time-awake estimation must also rely on knowing the local times of both the test and the reference scores.
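A minimal numerical sketch of this table-lookup procedure (the linear score model and every number below are illustrative assumptions, not the thesis's actual reference data):

```python
import numpy as np

# Hypothetical reference table: balance scores regressed against time awake.
# A linear drift of the score is an illustrative assumption; the thesis uses
# empirically measured balance scores during sustained wakefulness.
hours_awake = np.arange(0, 37)               # reference times, 0..36 h
ref_scores = 10.0 + 0.8 * hours_awake        # assumed score drift per hour awake
slope, intercept = np.polyfit(hours_awake, ref_scores, 1)

def estimate_time_awake(test_score: float) -> float:
    """Invert the reference regression to map a balance score to hours awake."""
    return (test_score - intercept) / slope

# A subject whose balance score matches the 20 h reference entry:
print(round(estimate_time_awake(10.0 + 0.8 * 20), 1))  # -> 20.0
```

In practice the test and reference scores would also be indexed by local time of day, as the abstract notes, since 40% of the diurnal balance variation is explained by time of day rather than time awake.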

Relevance: 10.00%

Abstract:

Frequency multiplication (FM) can be used to design low-power frequency synthesizers: the VCO runs at a much reduced frequency while a power-efficient frequency multiplier restores the output frequency, which also eliminates the first few dividers. Quadrature signals can be generated by frequency-multiplying low-frequency I/Q signals; however, this also multiplies the quadrature error of those signals. Another way is to generate additional edges from the low-frequency oscillator (LFO) and develop a quadrature FM, but this makes the I/Q precision heavily dependent on process mismatches in the ring oscillator. In this paper we examine the use of fewer edges from the LFO and a single-stage polyphase filter to generate approximate quadrature signals, which are then followed by an injection-locked quadrature VCO to generate high-precision I/Q signals. Simulation comparisons with the existing approach show that the proposed method offers very good phase accuracy of 0.5° with only a modest increase in power dissipation for the 2.4 GHz IEEE 802.15.4 standard using UMC 0.13 µm RFCMOS technology.

Relevance: 10.00%

Abstract:

Currently, we live in an era characterized by the completion and first runs of the LHC accelerator at CERN, which is hoped to provide the first experimental hints of what lies beyond the Standard Model of particle physics. In addition, the last decade has witnessed a new dawn of cosmology, where it has truly emerged as a precision science. Largely due to the WMAP measurements of the cosmic microwave background, we now believe we have quantitative control of much of the history of our universe. These two experimental windows offer us not only an unprecedented view of the smallest and largest structures of the universe, but also a glimpse at the very first moments in its history. At the same time, they require theorists to focus on the fundamental challenges awaiting at the boundary of high energy particle physics and cosmology. What were the contents and properties of matter in the early universe? How is one to describe its interactions? What kind of implications do the various models of physics beyond the Standard Model have on the subsequent evolution of the universe? In this thesis, we explore the connection between supersymmetric theories in particular and the evolution of the early universe. First, we provide the reader with a general introduction to modern-day particle cosmology from two angles: on one hand by reviewing our current knowledge of the history of the early universe, and on the other hand by introducing the basics of supersymmetry and its derivatives. Subsequently, with the help of the developed tools, we direct the attention to the specific questions addressed in the three original articles that form the main scientific contents of the thesis. Each of these papers concerns a distinct cosmological problem, ranging from the generation of the matter-antimatter asymmetry to inflation, and finally to the origin, or very early stage, of the universe.
They nevertheless share a common factor in their use of the machinery of supersymmetric theories to address open questions in the corresponding cosmological models.

Relevance: 10.00%

Abstract:

Agricultural pests are responsible for millions of dollars in crop losses and management costs every year. In order to implement optimal site-specific treatments and reduce control costs, new methods to accurately monitor and assess pest damage need to be investigated. In this paper we explore the combination of unmanned aerial vehicles (UAV), remote sensing and machine learning techniques as a promising technology to address this challenge. The deployment of UAVs as a sensor platform is a rapidly growing field of study for biosecurity and precision agriculture applications. In this experiment, a data collection campaign is performed over a sorghum crop severely damaged by white grubs (Coleoptera: Scarabaeidae). The larvae of these scarab beetles feed on the roots of plants, which in turn impairs root exploration of the soil profile. In the field, crop health status could be classified according to three levels: bare soil where plants were decimated, transition zones of reduced plant density and healthy canopy areas. In this study, we describe the UAV platform deployed to collect high-resolution RGB imagery as well as the image processing pipeline implemented to create an orthoimage. An unsupervised machine learning approach is formulated in order to create a meaningful partition of the image into each of the crop levels. The aim of the approach is to simplify the image analysis step by minimizing user input requirements and avoiding the manual data labeling necessary in supervised learning approaches. The implemented algorithm is based on the K-means clustering algorithm. In order to control high-frequency components present in the feature space, a neighbourhood-oriented parameter is introduced by applying Gaussian convolution kernels prior to K-means. The outcome of this approach is a soft K-means algorithm similar to the EM algorithm for Gaussian mixture models. 
The results show that the algorithm delivers decision boundaries that consistently classify the field into three clusters, one for each crop health level. The methodology presented in this paper represents an avenue for further research towards automated crop damage assessments and biosecurity surveillance.
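A minimal sketch of this neighbourhood-smoothed K-means idea on a synthetic one-band strip (the kernel width, cluster seeds and crop values are illustrative assumptions, not the paper's data):

```python
import numpy as np

def gaussian_blur_1d(row, sigma=2.0, radius=6):
    """Gaussian convolution kernel applied to the feature, as in the paper's
    neighbourhood-oriented smoothing step (1-D here for brevity)."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    return np.convolve(row, k, mode="same")

def kmeans_1d(values, centers, iters=20):
    """Plain K-means on scalar features with fixed (deterministic) initial centers."""
    centers = np.asarray(centers, dtype=float)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for c in range(len(centers)):
            if np.any(labels == c):
                centers[c] = values[labels == c].mean()
    return labels

# Synthetic strip of a field: bare soil (0.0), transition (0.5), healthy (1.0).
rng = np.random.default_rng(0)
strip = np.concatenate([np.full(50, 0.0), np.full(50, 0.5), np.full(50, 1.0)])
strip += rng.normal(0, 0.05, strip.size)

smoothed = gaussian_blur_1d(strip)
labels = kmeans_1d(smoothed, centers=[0.0, 0.5, 1.0])
print(len(set(labels.tolist())))  # three crop-health clusters
```

Smoothing the feature before clustering suppresses the high-frequency components the paper mentions, so isolated noisy pixels do not break up the three spatially coherent crop-health zones.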

Relevance: 10.00%

Abstract:

In uplink OFDMA, carrier frequency offsets (CFOs) and/or timing offsets (TOs) of other users with respect to a desired user can cause multiuser interference (MUI). In practical uplink OFDMA systems (e.g., the IEEE 802.16e standard), the effect of this MUI is made acceptably small by requiring that frequency/timing alignment be achieved at the receiver with high precision (e.g., the CFO must be within 1% of the subcarrier spacing and the TO within 1/8th of the cyclic prefix duration in IEEE 802.16e), which is realized using complex closed-loop frequency/timing correction between the transmitter and the receiver. An alternative open-loop approach to handle the MUI induced by large CFOs and TOs is to employ interference cancellation techniques at the receiver. In this paper, we first analytically characterize the degradation in the average output signal-to-interference ratio (SIR) due to the combined effect of large CFOs and TOs in uplink OFDMA. We then propose a parallel interference canceller (PIC) for the mitigation of interference due to CFOs and TOs in this system. We show that the proposed PIC effectively mitigates the performance loss due to CFO/TO-induced interference in uplink OFDMA.
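A genie-aided numerical sketch of the cancellation principle (not the paper's actual PIC): with the interfering user's CFO-distorted contribution assumed perfectly known, subtracting it raises the desired user's output SIR. All parameters are illustrative:

```python
import numpy as np

N = 64                                   # subcarriers (illustrative choice)
rng = np.random.default_rng(1)

def user_time_signal(symbols, subcarriers, cfo):
    """Time-domain OFDM block of one user, rotated by its carrier frequency offset."""
    X = np.zeros(N, dtype=complex)
    X[subcarriers] = symbols
    x = np.fft.ifft(X) * N
    return x * np.exp(2j * np.pi * cfo * np.arange(N) / N)

subc_a, subc_b = np.arange(32), np.arange(32, 64)   # disjoint allocations
sym_a = rng.choice([1.0, -1.0], 32).astype(complex)  # BPSK, desired user
sym_b = rng.choice([1.0, -1.0], 32).astype(complex)  # BPSK, interferer
xa = user_time_signal(sym_a, subc_a, cfo=0.0)        # desired user, aligned
xb = user_time_signal(sym_b, subc_b, cfo=0.3)        # interferer, 30% CFO -> ICI
noise = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
r = xa + xb + noise

def output_sir(received):
    """Signal power over residual error power on the desired user's subcarriers."""
    Y = np.fft.fft(received) / N
    err = np.abs(Y[subc_a] - sym_a) ** 2
    return (np.abs(sym_a) ** 2).sum() / err.sum()

before = output_sir(r)
after = output_sir(r - xb)   # cancel the (here, perfectly known) interferer
print(after > before)        # -> True
```

The actual PIC must of course reconstruct `xb` from tentative symbol decisions rather than from known data; this sketch only isolates the SIR gain that cancellation of CFO-induced leakage provides.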

Relevance: 10.00%

Abstract:

The first quarter of the 20th century witnessed a rebirth of cosmology, the study of our Universe, as a field of scientific research with testable theoretical predictions. The amount of available cosmological data grew slowly from a few galaxy redshift measurements, rotation curves and local light element abundances into the first detection of the cosmic microwave background (CMB) in 1965. By the turn of the century the amount of data had exploded, incorporating new, exciting cosmological observables such as lensing, Lyman-alpha forests, type Ia supernovae, baryon acoustic oscillations and Sunyaev-Zeldovich regions, to name a few.

The CMB, the ubiquitous afterglow of the Big Bang, carries with it a wealth of cosmological information. Unfortunately, that information, delicate intensity variations, turned out to be hard to extract from the overall temperature. After the first detection, it took nearly 30 years before the first evidence of fluctuations in the microwave background was presented. At present, high precision cosmology is solidly based on precise measurements of the CMB anisotropy, making it possible to pinpoint cosmological parameters to one-in-a-hundred level precision. The progress has made it possible to build and test models of the Universe that differ in the way the cosmos evolved during some fraction of the first second after the Big Bang.

This thesis is concerned with high precision CMB observations. It presents three selected topics along a CMB experiment's analysis pipeline. Map-making and residual noise estimation are studied using an approach called destriping. The studied approximate methods are invaluable for the large datasets of any modern CMB experiment and will undoubtedly become even more so when the next generation of experiments reaches the operational stage.

We begin with a brief overview of cosmological observations and describe the general relativistic perturbation theory. Next we discuss the map-making problem of a CMB experiment and the characterization of residual noise present in the maps. Finally, the use of modern cosmological data is presented in the study of an extended cosmological model, the correlated isocurvature fluctuations. Currently available data indicate that future experiments are needed to provide more information on these extra degrees of freedom. Any solid evidence of isocurvature modes would have a considerable impact due to their power in model selection.
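The destriping approach mentioned above can be sketched in a minimal form: model the correlated noise as one unknown constant offset per data chunk and solve for the sky map and the offsets jointly by least squares. The toy scan pattern and sizes below are assumptions, not any experiment's data:

```python
import numpy as np

rng = np.random.default_rng(2)
npix, nsamp, chunk = 8, 400, 40           # toy map and scan sizes (assumptions)
nchunk = nsamp // chunk

sky = rng.normal(0, 1, npix)              # "true" sky map
point = rng.integers(0, npix, nsamp)      # which pixel each sample observes
offsets = rng.normal(0, 5, nchunk)        # slow noise: one offset per chunk
tod = sky[point] + np.repeat(offsets, chunk) + rng.normal(0, 0.01, nsamp)

# Design matrix: one column per map pixel plus one per chunk offset.
A = np.zeros((nsamp, npix + nchunk))
A[np.arange(nsamp), point] = 1.0
A[np.arange(nsamp), npix + np.arange(nsamp) // chunk] = 1.0
x, *_ = np.linalg.lstsq(A, tod, rcond=None)

est_map = x[:npix]
# The absolute level is degenerate between map and offsets; compare mean-removed maps.
resid = (est_map - est_map.mean()) - (sky - sky.mean())
print(np.abs(resid).max() < 0.1)          # -> True
```

Real destriping codes solve the same kind of normal equations iteratively for far longer baselines and billions of samples; the residual-noise characterization studied in the thesis concerns what this least-squares step leaves behind in the maps.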

Relevance: 10.00%

Abstract:

The electroweak theory is the part of the standard model of particle physics that describes the weak and electromagnetic interactions between elementary particles. Since its formulation almost 40 years ago, it has been experimentally verified to high accuracy and today it stands as one of the cornerstones of particle physics. The thermodynamics of electroweak physics has been studied ever since the theory was written down, and the features the theory exhibits under extreme conditions remain an interesting research topic even today. In this thesis, we consider some aspects of electroweak thermodynamics. Specifically, we compute the pressure of the standard model to high precision and study the structure of the electroweak phase diagram when finite chemical potentials for all the conserved particle numbers in the theory are introduced. In the first part of the thesis, the theory, methods and essential results from the computations are introduced. The original research publications are reprinted at the end.

Relevance: 10.00%

Abstract:

Quantum chromodynamics (QCD) is the theory describing the interaction between quarks and gluons. At low temperatures, quarks are confined, forming hadrons, e.g. protons and neutrons. However, at extremely high temperatures the hadrons break apart and the matter transforms into a plasma of individual quarks and gluons. In this thesis the quark-gluon plasma (QGP) phase of QCD is studied using lattice techniques in the framework of the dimensionally reduced effective theories EQCD and MQCD. Two quantities are of particular interest: the pressure (or grand potential) and the quark number susceptibility. At high temperatures the pressure admits a generalised coupling constant expansion, where some coefficients are non-perturbative. We determine the first such contribution, of order g^6, by performing lattice simulations in MQCD. This requires high precision lattice calculations, which we perform with different numbers of colors N_c to obtain the N_c-dependence of the coefficient. The quark number susceptibility is studied by performing lattice simulations in EQCD. We measure both flavor singlet (diagonal) and non-singlet (off-diagonal) quark number susceptibilities. The finite chemical potential results are obtained using analytic continuation. The diagonal susceptibility approaches the perturbative result above 20 T_c, but below that temperature we observe significant deviations. The results agree well with 4d lattice data down to temperatures of 2 T_c.

Relevance: 10.00%

Abstract:

Accelerator mass spectrometry (AMS) is an ultrasensitive technique for measuring the concentration of a single isotope. The electric and magnetic fields of an electrostatic accelerator system are used to filter out other isotopes from the ion beam. The high velocity means that molecules can be destroyed and removed from the measurement background. As a result, concentrations down to one atom in 10^16 atoms are measurable. This thesis describes the construction of the new AMS system in the Accelerator Laboratory of the University of Helsinki. The system is described in detail along with the relevant ion optics, and its performance and some of the 14C measurements made with it are reported. In the second part of the thesis, a novel statistical model for the analysis of AMS data is presented. Bayesian methods are used in order to make the best use of the available information. In the new model, instrumental drift is modelled with a continuous first-order autoregressive process. This enables rigorous normalization to standards measured at different times. The Poisson statistical nature of a 14C measurement is also taken into account properly, so that uncertainty estimates are much more stable. It is shown that, overall, the new model improves both the accuracy and the precision of AMS measurements. In particular, the results can be improved for samples with very low 14C concentrations or samples measured only a few times.
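A minimal sketch of drift normalization under a first-order autoregressive model; the two-standard linear interpolation below stands in for the thesis's full Bayesian treatment, and every ratio and parameter is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
phi, sigma = 0.98, 0.005                 # AR(1) drift parameters (assumptions)
ntimes = 11

# Instrumental drift: a first-order autoregressive process fluctuating around 1.
drift = np.empty(ntimes)
drift[0] = 1.0
for t in range(1, ntimes):
    drift[t] = 1.0 + phi * (drift[t - 1] - 1.0) + rng.normal(0, sigma)

true_ratio = 1.25e-12                    # hypothetical sample 14C/12C ratio
std_ratio = 1.0e-12                      # hypothetical (known) standard ratio
measured_sample = true_ratio * drift[5]  # sample measured mid-run
measured_std = std_ratio * drift[[0, 10]]  # standards bracketing the run

# Normalize: infer the drift at the sample's time from the standards
# measured at different times, then divide it out.
drift_est = np.interp(5, [0, 10], measured_std / std_ratio)
normalized = measured_sample / drift_est
print(abs(normalized / true_ratio - 1) < 0.05)   # -> True
```

Because the AR(1) process is continuous and strongly correlated in time, standards measured before and after a sample constrain the drift at the sample's own measurement time, which is what makes normalization across a run rigorous.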

Relevance: 10.00%

Abstract:

The increased availability of image capturing devices has enabled collections of digital images to expand rapidly in both size and diversity. This has created a constantly growing need for efficient and effective image browsing, searching, and retrieval tools. Pseudo-relevance feedback (PRF) has proven to be an effective mechanism for improving retrieval accuracy. An original, simple yet effective rank-based PRF mechanism (RB-PRF) that takes into account the initial rank order of each image to improve retrieval accuracy is proposed. This RB-PRF mechanism innovates by making use of binary image signatures to improve retrieval precision, promoting images similar to highly ranked images and demoting images similar to lower ranked images. Empirical evaluations on standard benchmarks, namely the Wang, Oliva & Torralba, and Corel datasets, demonstrate the effectiveness of the proposed RB-PRF mechanism in image retrieval.
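A minimal sketch of such a rank-based promote/demote rescoring (the weight, signatures and initial scores are invented for illustration, not the paper's actual mechanism or values):

```python
import numpy as np

def hamming_sim(a, b):
    """Similarity between binary image signatures: fraction of matching bits."""
    return np.mean(a == b)

def rb_prf(scores, sigs, top=1, bottom=1, w=0.2):
    """Rank-based PRF sketch: boost images similar to highly ranked ones and
    penalize images similar to lowly ranked ones (w is an assumed weight)."""
    order = np.argsort(-scores)
    new = scores.astype(float).copy()
    for i in range(len(scores)):
        for t in order[:top]:          # promote by similarity to top results
            new[i] += w * hamming_sim(sigs[i], sigs[t])
        for b in order[-bottom:]:      # demote by similarity to bottom results
            new[i] -= w * hamming_sim(sigs[i], sigs[b])
    return np.argsort(-new)

# Four images: index 2 shares its signature with the top-ranked image 0,
# while image 3 (lowest ranked) carries the complementary signature.
sigs = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
                 [1, 1, 0, 0, 1, 1, 0, 0],
                 [1, 1, 1, 1, 0, 0, 0, 0],
                 [0, 0, 0, 0, 1, 1, 1, 1]])
scores = np.array([0.9, 0.5, 0.45, 0.1])
print(rb_prf(scores, sigs).tolist())   # -> [0, 2, 1, 3]: image 2 overtakes image 1
```

No manual relevance labels are needed: the top and bottom of the initial ranking themselves act as the pseudo-positive and pseudo-negative feedback.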

Relevance: 10.00%

Abstract:

We report a set of measurements of particle production in inelastic $p\bar{p}$ collisions collected with a minimum-bias trigger at the Tevatron Collider with the CDF II experiment. The inclusive charged particle transverse momentum differential cross section is measured, with improved precision, over a range about ten times wider than in previous measurements. The former modeling of the spectrum appears to be incompatible with the high particle momenta observed. The dependence of the charged particle transverse momentum on the event particle multiplicity is analyzed to study the various components of hadron interactions. This is one of the observable variables most poorly reproduced by the available Monte Carlo generators. A first measurement of the event transverse energy sum differential cross section is also reported. A comparison with a Pythia prediction at the hadron level is performed. The inclusive charged particle differential production cross section is fairly well reproduced only in the transverse momentum range available from previous measurements. At higher momentum the agreement is poor. The transverse energy sum is poorly reproduced over the whole spectrum. The dependence of the charged particle transverse momentum on the particle multiplicity needs the introduction of more sophisticated particle production mechanisms, such as multiple parton interactions, in order to be better explained.


Relevance: 10.00%

Abstract:

At the Tevatron, the total $p\bar{p}$ cross-section has been measured by CDF at 546 GeV and 1.8 TeV, and by E710/E811 at 1.8 TeV. The two results at 1.8 TeV disagree by 2.6 standard deviations, introducing large uncertainties into extrapolations to higher energies. At the LHC, the TOTEM collaboration is preparing to resolve the ambiguity by measuring the total $pp$ cross-section with a precision of about 1%. As at the Tevatron experiments, the luminosity-independent method based on the Optical Theorem will be used. The Tevatron experiments have also performed a vast range of studies of soft and hard diffractive events, partly with antiproton tagging by Roman Pots, partly with rapidity gap tagging. At the LHC, the combined CMS/TOTEM experiments will carry out their diffractive programme with an unprecedented rapidity coverage and Roman Pot spectrometers on both sides of the interaction point. The physics menu comprises detailed studies of soft diffractive differential cross-sections, diffractive structure functions, rapidity gap survival and exclusive central production by Double Pomeron Exchange.

Relevance: 10.00%

Abstract:

We report a measurement of the top quark mass $M_t$ in the dilepton decay channel $t\bar{t}\to b\ell'^{+}\nu'_\ell\bar{b}\ell^{-}\bar{\nu}_{\ell}$. Events are selected with a neural network which has been directly optimized for statistical precision in top quark mass using neuroevolution, a technique modeled on biological evolution. The top quark mass is extracted from per-event probability densities that are formed by the convolution of leading order matrix elements and detector resolution functions. The joint probability is the product of the probability densities from 344 candidate events in 2.0 fb$^{-1}$ of $p\bar{p}$ collisions collected with the CDF II detector, yielding a measurement of $M_t = 171.2\pm 2.7(\textrm{stat.})\pm 2.9(\textrm{syst.})~\mathrm{GeV}/c^2$.

Relevance: 10.00%

Abstract:

A hybrid computer for structure factor calculations in X-ray crystallography is described. The computer can calculate three-dimensional structure factors of up to 24 atoms in a single run and can generate the scatter functions of well over 100 atoms using the Vand et al. or Forsyth and Wells approximations. The computer is essentially a digital computer with analog function generators, thus combining to advantage the economical data storage of digital systems and the simple computing circuitry of analog systems. The digital part serially selects the data, computes the arguments and feeds them into specially developed high-precision digital-analog function generators; their outputs, being d.c. voltages, are further processed by analog circuits, and finally the sequential adder, which employs a novel digital voltmeter circuit, converts them back into digital form and accumulates them in a dekatron counter which displays the final result. The computer is also capable of carrying out 1-, 2-, or 3-dimensional Fourier summation, although in this case the lack of sufficient storage space for the large number of coefficients involved is a serious limitation at present.
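The structure-factor sum such a machine evaluates, F(hkl) = Σ_j f_j exp[2πi(hx_j + ky_j + lz_j)], in a direct digital form; the two-atom body-centred cell and unit scattering factors below are illustrative assumptions, not data from the described computer:

```python
import numpy as np

def structure_factor(hkl, positions, f):
    """F(hkl) = sum_j f_j * exp(2*pi*i * (h*x_j + k*y_j + l*z_j)),
    with fractional atomic coordinates and per-atom scattering factors f_j."""
    phases = 2j * np.pi * (np.asarray(positions) @ np.asarray(hkl))
    return np.sum(np.asarray(f) * np.exp(phases))

# Toy body-centred cell: atoms at (0,0,0) and (1/2,1/2,1/2), f_j = 1 (assumed).
pos = [[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]]
f = [1.0, 1.0]
print(abs(structure_factor([1, 0, 0], pos, f)))  # ~0: the reflection is extinct
print(abs(structure_factor([2, 0, 0], pos, f)))  # -> 2.0: atoms scatter in phase
```

Each term of this sum is what the analog function generators produced as d.c. voltages; the sequential adder's accumulation corresponds to the `np.sum` over atoms.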