36 results for precision limit

at University of Queensland eSpace - Australia


Relevance: 20.00%

Abstract:

The effects of temporal precision constraints and movement amplitude on performance of an interceptive aiming task were examined. Participants were required to strike a moving target object with a 'bat' by moving the bat along a straight path (constrained by a linear slide) perpendicular to the path of the target. Temporal precision constraints were defined in terms of the time period (or window) within which contact with the target was possible. Three time windows were used (approx. 35, 50 and 65 ms) and these were achieved either by manipulating the size of the bat (experiment 1a), the size of the target (experiment 1b) or the speed of the target (experiment 2). In all experiments, movement time (MT) increased in proportion to movement amplitude but was only affected by differences in the temporal precision constraint if this was achieved by variation in the target's speed. In this case the MT was approximately inversely proportional to target speed. Peak movement speed was affected by temporal accuracy constraints in all three experiments: participants reached higher speeds when the temporal precision required was greater. These results are discussed with reference to the speed-accuracy trade-off observed for temporally constrained aiming movements. It is suggested that the MT and speed of interceptive aiming movements may be understood as responses to the spatiotemporal constraints of the task.
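
For readers wanting to see how the three manipulations map onto a single temporal window, a minimal sketch follows. It assumes the usual geometric relation for a bat moving perpendicular to the target's path (window = combined width of bat and target divided by target speed); the widths and speeds are illustrative assumptions, not the dimensions used in the experiments.

    def contact_window_ms(bat_width_mm, target_width_mm, target_speed_mm_per_s):
        # Time (in ms) during which the target overlaps the bat's line of motion,
        # i.e. the window within which contact is possible.
        return (bat_width_mm + target_width_mm) / target_speed_mm_per_s * 1000.0

    # Hypothetical values: enlarging the bat, enlarging the target, or slowing the
    # target all lengthen the window, mirroring experiments 1a, 1b and 2.
    print(contact_window_ms(30, 20, 1000))   # ~50 ms
    print(contact_window_ms(45, 20, 1000))   # ~65 ms
    print(contact_window_ms(30, 20, 1430))   # ~35 ms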

Relevance: 20.00%

Abstract:

A narrow absorption feature in an atomic or molecular gas (such as iodine or methane) is used as the frequency reference in many stabilized lasers. As part of the stabilization scheme, an optical frequency dither is applied to the laser. In optical heterodyne experiments, this dither is transferred to the RF beat signal, reducing the spectral power density and hence the signal-to-noise ratio relative to that obtained in the absence of dither. We removed the dither by mixing the raw beat signal with a dithered local oscillator signal. When the dither waveform is matched to that of the reference laser, the output signal from the mixer is rendered dither free. Application of this method to a Winters iodine-stabilized helium-neon laser reduced the bandwidth of the beat signal from 6 MHz to 390 kHz, thereby lowering the detection threshold from 5 pW of laser power to 3 pW. In addition, a simple signal detection model is developed which predicts similar threshold reductions.
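
A minimal numerical sketch of the dither-cancellation idea described above, under the assumption of a sinusoidal frequency dither: because the local oscillator carries the same dither phase as the beat signal, the difference-frequency component of the mixer output is dither free. The frequencies, deviation and sample rate are illustrative assumptions, not the parameters of the Winters laser system.

    import numpy as np

    fs = 1e6                                   # sample rate (Hz), illustrative
    t = np.arange(0, 0.02, 1 / fs)

    f_beat, f_lo = 50e3, 40e3                  # nominal beat and local-oscillator frequencies (Hz)
    f_dither, dev = 1e3, 5e3                   # dither rate and peak frequency deviation (Hz)

    # Common dither phase: the integral of a sinusoidal frequency modulation
    dither_phase = (dev / f_dither) * np.sin(2 * np.pi * f_dither * t)

    beat = np.cos(2 * np.pi * f_beat * t + dither_phase)   # dithered RF beat signal
    lo = np.cos(2 * np.pi * f_lo * t + dither_phase)       # LO with matched dither waveform

    mixed = beat * lo   # contains a clean line at f_beat - f_lo (dither phases cancel),
                        # plus a broadened sum-frequency term a low-pass filter would reject

    spec = np.abs(np.fft.rfft(mixed))
    freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
    print(freqs[np.argmax(spec)])              # ~10 kHz: the dither-free difference frequency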

Relevance: 20.00%

Abstract:

The Las Campanas Observatory and Anglo-Australian Telescope Rich Cluster Survey (LARCS) is a panoramic imaging and spectroscopic survey of an X-ray luminosity-selected sample of 21 clusters of galaxies at 0.07 < z < 0.16. Charge-coupled device (CCD) imaging was obtained in B and R of typically 2-degree-wide regions centred on the 21 clusters, and the galaxy sample selected from the imaging is being used for an on-going spectroscopic survey of the clusters with the 2dF spectrograph on the Anglo-Australian Telescope. This paper presents the reduction of the imaging data and the photometric analysis used in the survey. Based on an overlapping area of 12.3 deg^2, we compare the CCD-based LARCS catalogue with the photographic-based galaxy catalogue from the APM used as the input to the 2dF Galaxy Redshift Survey (2dFGRS), down to the completeness limit of the 2dFGRS/APM catalogue, b_J = 19.45. This comparison confirms the reliability of the photometry across our mosaics and between the clusters in our survey. It also provides useful information concerning the properties of the 2dFGRS/APM catalogue. The stellar contamination in the 2dFGRS/APM galaxy catalogue is confirmed as around 5-10 per cent, as originally estimated. However, using the superior sensitivity and spatial resolution of the LARCS survey, evidence is found for four distinct populations of galaxies that are systematically omitted from the 2dFGRS/APM catalogue. The characteristics of the 'missing' galaxy populations are described, reasons for their absence are examined, and the impact they will have on the conclusions drawn from the 2dF Galaxy Redshift Survey is discussed.

Relevance: 20.00%

Abstract:

In the limit state design (LSD) method, each design criterion is formally stated and assessed using a performance function. The performance function defines the relationship between the design parameters and the design criterion. In practice, LSD involves factoring up loads and factoring down calculated strengths and material parameters. This provides a convenient way to carry out routine probability-based design. The factors are statistically calculated to produce a design with an acceptably low probability of failure. Hence the ultimate load and the design material properties are mathematical concepts that have no physical interpretation. They may be physically impossible. Similarly, the appropriate analysis model is also defined by the performance function and may not describe the real behaviour at the perceived physical equivalent limit condition. These points must be understood to avoid confusion in the discussion and application of partial factor LSD methods.
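
As a minimal illustration of the partial-factor mechanics described above (not taken from the paper), the sketch below factors loads up and the nominal resistance down and then evaluates the design inequality. The factor values and the numbers are hypothetical.

    def lsd_check(dead_load, live_load, nominal_resistance,
                  gamma_dead=1.2, gamma_live=1.5, phi=0.9):
        # Partial-factor limit state check: factored demand versus factored capacity.
        # The factored quantities are design abstractions with an acceptably low
        # probability of being exceeded; they need not be physically realisable states.
        demand = gamma_dead * dead_load + gamma_live * live_load
        capacity = phi * nominal_resistance
        return demand <= capacity, demand, capacity

    ok, demand, capacity = lsd_check(dead_load=120.0, live_load=80.0, nominal_resistance=300.0)
    print(ok, demand, capacity)   # True, 264.0, 270.0 for these hypothetical inputs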

Relevance: 20.00%

Abstract:

A new method is presented to determine an accurate eigendecomposition of difficult low temperature unimolecular master equation problems. Based on a generalisation of the Nesbet method, the new method is capable of achieving complete spectral resolution of the master equation matrix with relative accuracy in the eigenvectors. The method is applied to a test case of the decomposition of ethane at 300 K from a microcanonical initial population, with energy transfer modelled by both Ergodic Collision Theory and the exponential-down model. It is demonstrated that quadruple precision (16-byte) arithmetic is required irrespective of the eigensolution method used. (C) 2001 Elsevier Science B.V. All rights reserved.
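
The generalised Nesbet algorithm itself is not reproduced here, but the quadruple-precision point can be illustrated with arbitrary-precision arithmetic. The sketch below diagonalises a small symmetric stand-in matrix with mpmath at roughly 34 significant digits (the precision of 16-byte floats); the matrix is a made-up example whose eigenvalues span many orders of magnitude, not an actual master-equation matrix.

    import mpmath as mp

    mp.mp.dps = 34   # ~ quadruple (16-byte) precision

    # Small symmetric stand-in for a symmetrised master-equation matrix (illustrative only).
    n = 6
    A = mp.zeros(n, n)
    for i in range(n):
        for j in range(i, n):
            val = mp.mpf(1) / (1 + abs(i - j)) * mp.power(10, -(i + j))
            A[i, j] = val
            A[j, i] = val

    E, Q = mp.eigsy(A)   # eigenvalues and orthogonal eigenvectors at working precision
    residual = mp.mnorm(A * Q - Q * mp.diag([E[i] for i in range(n)]), 1)
    print([mp.nstr(E[i], 8) for i in range(n)])   # eigenvalues spanning many orders of magnitude
    print(mp.nstr(residual, 5))                   # residual near the working precision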

Relevance: 20.00%

Abstract:

The role of shoot water status in mediating the decline in leaf elongation rate of nitrogen (N)-deprived barley plants was assessed. Plants were grown at two levels of N supply, with or without the application of pneumatic pressure to the roots. Applying enough pressure (balancing pressure) to keep xylem sap continuously bleeding from the cut surface of a leaf allowed the plants to remain at full turgor throughout the experiments. Plants from which N was withheld required a greater balancing pressure during both day and night. This difference in balancing pressure was greater at high (2.0 kPa) than low (1.2 kPa) atmospheric vapour pressure deficit (VPD). Pressurizing the roots did not prevent the decline in leaf elongation rate induced by withholding N at either high or low VPD. Thus low shoot water status did not limit leaf growth of N-deprived plants.

Relevance: 20.00%

Abstract:

We demonstrate that the time-dependent projected Gross-Pitaevskii equation (GPE) derived earlier [M. J. Davis, R. J. Ballagh, and K. Burnett, J. Phys. B 34, 4487 (2001)] can represent the highly occupied modes of a homogeneous, partially-condensed Bose gas. Contrary to the often held belief that the GPE is valid only at zero temperature, we find that this equation will evolve randomized initial wave functions to a state describing thermal equilibrium. In the case of small interaction strengths or low temperatures, our numerical results can be compared to the predictions of Bogoliubov theory and its perturbative extensions. This demonstrates the validity of the GPE in these limits and allows us to assign a temperature to the simulations unambiguously. However, the GPE method is nonperturbative, and we believe it can be used to describe the thermal properties of a Bose gas even when Bogoliubov theory fails. We suggest a different technique to measure the temperature of our simulations in these circumstances. Using this approach we determine the dependence of the condensate fraction and specific heat on temperature for several interaction strengths, and observe the appearance of vortex networks. Interesting behavior near the critical point is observed and discussed.
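
A generic 1D sketch of the kind of evolution described above, written as a split-step Fourier integrator with a momentum-space projector restricting the field to the low-energy, highly occupied modes. This is an illustrative re-implementation under assumed parameters (grid, cutoff, nonlinearity), not the authors' code; units are dimensionless.

    import numpy as np

    # 1D projected GPE: i dpsi/dt = [-(1/2) d^2/dx^2 + g |psi|^2] psi, with modes |k| > k_cut removed
    N, L, g, dt, steps = 256, 50.0, 0.1, 1e-3, 5000
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
    projector = np.abs(k) <= 0.5 * k.max()            # keeps only the highly occupied low-k modes

    rng = np.random.default_rng(1)
    psi_k = (rng.normal(size=N) + 1j * rng.normal(size=N)) * projector   # randomized initial field
    psi = np.fft.ifft(psi_k)

    half_kinetic = np.exp(-0.25j * k**2 * dt) * projector

    for _ in range(steps):
        # half-step kinetic (k-space), full-step nonlinearity (x-space), half-step kinetic
        psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))
        psi = psi * np.exp(-1j * g * np.abs(psi)**2 * dt)
        psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))

    print(np.sum(np.abs(psi)**2) * (L / N))           # total density within the projected subspace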

Relevance: 20.00%

Abstract:

The isotope composition of Pb is difficult to determine accurately due to the lack of a stable normalisation ratio. Double and triple-spike addition techniques provide one solution and presently yield the most accurate measurements. A number of recent studies have claimed that improved accuracy and precision could also be achieved by multi-collector ICP-MS (MC-ICP-MS) Pb-isotope analysis using the addition of Tl of known isotope composition to Pb samples. In this paper, we verify whether the known isotope composition of Tl can be used for correction of mass discrimination of Pb with an extensive dataset for the NIST standard SRM 981, comparison of MC-ICP-MS with TIMS data, and comparison with three isochrons from different geological environments. When all our NIST SRM 981 data are normalised with one constant Tl-205/Tl-203 of 2.38869, the following averages and reproducibilities were obtained: Pb-207/Pb-206 = 0.91461+/-18; Pb-208/Pb-206 = 2.1674+/-7; and Pb-206/Pb-204 = 16.941+/-6. These two sigma standard deviations of the mean correspond to 149, 330, and 374 ppm, respectively. Accuracies relative to triple-spike values are 149, 157, and 52 ppm, respectively, and thus well within uncertainties. The largest component of the uncertainties stems from the Pb data alone and is not caused by differential mass discrimination behaviour of Pb and Tl. In routine operation, variation of sample introduction memory and production of isobaric molecular interferences in the spectrometer's collision cell currently appear to be the ultimate limitation to better reproducibility. Comparative study of five different datasets from actual samples (bullets, international rock standards, carbonates, metamorphic minerals, and sulphide minerals) demonstrates that in most cases geological scatter of the sample exceeds the achieved analytical reproducibility. We observe good agreement between TIMS and MC-ICP-MS data for international rock standards but find that such comparison does not constitute the ultimate test for the validity of the MC-ICP-MS technique. Two attempted isochrons resulted in geological scatter (in one case small) in excess of analytical reproducibility. However, in one case (leached Great Dyke sulphides) we obtained a true isochron (MSWD = 0.63) age of 2578.3 +/- 0.9 Ma, which is identical to and more precise than a recently published U-Pb zircon age (2579 +/- 3 Ma) for a Great Dyke websterite [Earth Planet. Sci. Lett. 180 (2000) 1-12]. We regard the reproducibility of this age by means of an isochron as a robust test of accuracy over a wide dynamic range. We show that reliable and accurate Pb-isotope data can be obtained by careful operation of second-generation MC-ICP magnetic sector mass spectrometers. (C) 2002 Elsevier Science B.V. All rights reserved.
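
The Tl-based correction referred to above is normally applied with the exponential mass-fractionation law: a fractionation factor is obtained from the measured versus assumed Tl-205/Tl-203 ratio and then used to correct each Pb ratio. The sketch below assumes that law; the atomic masses are approximate and the measured ratios are made-up example values, not data from the paper.

    import math

    # Approximate isotope masses (u); treated here as assumptions for illustration
    M = {"Tl203": 202.9723, "Tl205": 204.9744,
         "Pb204": 203.9730, "Pb206": 205.9745, "Pb207": 206.9759, "Pb208": 207.9767}

    TL_TRUE = 2.38869   # assumed Tl-205/Tl-203 normalisation value, as quoted in the abstract

    def beta_from_tl(tl_measured):
        # Exponential-law fractionation factor from the measured Tl ratio
        return math.log(TL_TRUE / tl_measured) / math.log(M["Tl205"] / M["Tl203"])

    def correct(ratio_measured, num, den, beta):
        # Apply the exponential law to a measured ratio of isotopes num/den
        return ratio_measured * (M[num] / M[den]) ** beta

    beta = beta_from_tl(tl_measured=2.4105)            # hypothetical measured Tl ratio
    print(correct(0.9230, "Pb207", "Pb206", beta))     # corrected Pb-207/Pb-206
    print(correct(2.1890, "Pb208", "Pb206", beta))     # corrected Pb-208/Pb-206
    print(correct(17.10,  "Pb206", "Pb204", beta))     # corrected Pb-206/Pb-204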

Relevance: 20.00%

Abstract:

Results are reported of recently performed experiments in which two optical parametric amplifiers were set up to generate two independently quadrature squeezed continuous wave laser beams. The transformation of quadrature squeezed states into polarization squeezed states and into states with spatial quantum correlations is demonstrated. By utilizing two squeezed laser beams, a polarization squeezed state exhibiting three simultaneously squeezed Stokes operator variances was generated. Continuous variable polarization entanglement was generated and the Einstein-Podolsky-Rosen paradox was observed. A pair of Stokes operators satisfied both the inseparability criterion and the conditional variance criterion: values of 0.49 and 0.77, respectively, were observed, with entanglement requiring values below unity. The inseparability measure of the observed quadrature entanglement was 0.44. This value is sufficient for a demonstration of quantum teleportation, which is the authors' next experimental goal.

Relevance: 20.00%

Abstract:

The duration of movements made to intercept moving targets decreases and movement speed increases when interception requires greater temporal precision. Changes in target size and target speed can have the same effect on required temporal precision, but the response to these changes differs: changes in target speed elicit larger changes in response speed. A possible explanation is that people attempt to strike the target in a central zone that does not vary much with variation in physical target size: the effective size of the target is relatively constant over changes in physical size. Three experiments are reported that test this idea. Participants performed two tasks: (1) strike a moving target with a bat moved perpendicular to the path of the target; (2) press on a force transducer when the target was in a location where it could be struck by the bat. Target speed was varied and target size held constant in experiment 1. Target speed and size were co-varied in experiment 2, keeping the required temporal precision constant. Target size was varied and target speed held constant in experiment 3 to give the same temporal precision as experiment 1. Duration of hitting movements decreased and maximum movement speed increased with increases in target speed and/or temporal precision requirements in all experiments. The effects were largest in experiment 1 and smallest in experiment 3. Analysis of a measure of effective target size (standard deviation of strike locations on the target) failed to support the hypothesis that performance differences could be explained in terms of effective size rather than actual physical size. In the pressing task, participants produced greater peak forces and shorter force pulses when the temporal precision required was greater, showing that the response to increasing temporal precision generalizes to different responses. It is concluded that target size and target speed have independent effects on performance.

Relevance: 20.00%

Abstract:

We review progress at the Australian Centre for Quantum Computer Technology towards the fabrication and demonstration of spin qubits and charge qubits based on phosphorus donor atoms embedded in intrinsic silicon. Fabrication is being pursued via two complementary pathways: a 'top-down' approach for near-term production of few-qubit demonstration devices and a 'bottom-up' approach for large-scale qubit arrays with sub-nanometre precision. The 'top-down' approach employs a low-energy (keV) ion beam to implant the phosphorus atoms. Single-atom control during implantation is achieved by monitoring on-chip detector electrodes, integrated within the device structure. In contrast, the 'bottom-up' approach uses scanning tunnelling microscope lithography and epitaxial silicon overgrowth to construct devices at an atomic scale. In both cases, surface electrodes control the qubit using voltage pulses, and dual single-electron transistors operating near the quantum limit provide fast read-out with spurious-signal rejection.

Relevance: 20.00%

Abstract:

The use of presence/absence data in wildlife management and biological surveys is widespread. There is a growing interest in quantifying the sources of error associated with these data. Using simulated data, we show that false-negative errors (failure to record a species when in fact it is present) can have a significant impact on statistical estimation of habitat models. We then introduce an extension of logistic modeling, the zero-inflated binomial (ZIB) model, that permits the estimation of the rate of false-negative errors and the correction of estimates of the probability of occurrence for false-negative errors by using repeated visits to the same site. Our simulations show that even relatively low rates of false negatives bias statistical estimates of habitat effects. The method with three repeated visits eliminates the bias, but estimates are relatively imprecise. Six repeated visits improve precision of estimates to levels comparable to those achieved with conventional statistics in the absence of false-negative errors. In general, when error rates are less than or equal to 50%, greater efficiency is gained by adding more sites, whereas when error rates are greater than 50% it is better to increase the number of repeated visits. We highlight the flexibility of the method with three case studies, clearly demonstrating the effect of false-negative errors for a range of commonly used survey methods.
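
As a rough illustration of the zero-inflated binomial idea with repeated visits (a constant-parameter version of the site-occupancy likelihood, without the habitat covariates used in the paper), the sketch below fits the probability of occurrence psi and the per-visit detection probability p by maximum likelihood. All data and parameter values are simulated and hypothetical.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import binom

    def neg_log_lik(params, y, k):
        # Zero-inflated binomial: a site with y detections out of k visits contributes
        #   psi * Binom(y; k, p)            if y > 0
        #   (1 - psi) + psi * (1 - p)**k    if y == 0
        # so an all-zero history may reflect true absence or repeated missed detections.
        psi, p = 1 / (1 + np.exp(-np.asarray(params)))      # logit scale -> (0, 1)
        lik = psi * binom.pmf(y, k, p)
        lik = np.where(y == 0, lik + (1 - psi), lik)
        return -np.sum(np.log(lik))

    # Simulated presence/absence survey (hypothetical true values)
    rng = np.random.default_rng(0)
    n_sites, k, psi_true, p_true = 200, 3, 0.6, 0.5
    present = rng.random(n_sites) < psi_true
    y = np.where(present, rng.binomial(k, p_true, n_sites), 0)

    fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(y, k), method="Nelder-Mead")
    psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
    print(psi_hat, p_hat)   # estimates should be close to psi_true and p_true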