966 results for Algebra of Errors


Relevance: 90.00%

Abstract:

During locomotion, retinal flow, gaze angle, and vestibular information can contribute to one's perception of self-motion. Their respective roles were investigated during active steering: Retinal flow and gaze angle were biased by altering the visual information during computer-simulated locomotion, and vestibular information was controlled through use of a motorized chair that rotated the participant around his or her vertical axis. Chair rotation was made appropriate for the steering response of the participant or made inappropriate by rotating a proportion of the veridical amount. Large steering errors resulted from selective manipulation of retinal flow and gaze angle, and the pattern of errors provided strong evidence for an additive model of combination. Vestibular information had little or no effect on steering performance, suggesting that vestibular signals are not integrated with visual information for the control of steering at these speeds.
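
The additive combination referred to above can be illustrated with a minimal Python sketch; the cue values and weights below are hypothetical, not estimates from the study.

# Minimal sketch of an additive cue-combination model for steering (hypothetical weights).
# The perceived direction of self-motion is modelled as a weighted sum of the retinal-flow
# and gaze-angle cues, so a bias added to either cue propagates linearly into the steering
# error, which is the signature of additive combination reported above.

def perceived_direction(flow_cue_deg, gaze_cue_deg, w_flow=0.5, w_gaze=0.5):
    return w_flow * flow_cue_deg + w_gaze * gaze_cue_deg

# Unbiased cues: both signal 0 deg (on course), so no predicted steering error.
print(perceived_direction(0.0, 0.0))    # 0.0
# Biasing only the retinal-flow cue by +10 deg shifts the percept by w_flow * 10 deg.
print(perceived_direction(10.0, 0.0))   # 5.0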

Relevance: 90.00%

Abstract:

As we move through the world, our eyes acquire a sequence of images. The information from this sequence is sufficient to determine the structure of a three-dimensional scene, up to a scale factor determined by the distance that the eyes have moved [1, 2]. Previous evidence shows that the human visual system accounts for the distance the observer has walked [3,4] and the separation of the eyes [5-8] when judging the scale, shape, and distance of objects. However, in an immersive virtual-reality environment, observers failed to notice when a scene expanded or contracted, despite having consistent information about scale from both distance walked and binocular vision. This failure led to large errors in judging the size of objects. The pattern of errors cannot be explained by assuming a visual reconstruction of the scene with an incorrect estimate of interocular separation or distance walked. Instead, it is consistent with a Bayesian model of cue integration in which the efficacy of motion and disparity cues is greater at near viewing distances. Our results imply that observers are more willing to adjust their estimate of interocular separation or distance walked than to accept that the scene has changed in size.
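
A minimal Python sketch of reliability-weighted (Bayesian) cue combination consistent with the account above; the Gaussian noise values are illustrative only, not the cue reliabilities estimated in the study.

# Sketch of Bayesian (inverse-variance weighted) combination of two cues to scene scale.
# Each cue is treated as Gaussian; the combined estimate weights each cue by the inverse
# of its variance. If the motion and disparity cues become noisier at far viewing
# distances, they lose efficacy there and other assumptions (e.g. a fixed interocular
# separation) dominate, as the abstract suggests.

def combine(mu_a, var_a, mu_b, var_b):
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    return w_a * mu_a + (1.0 - w_a) * mu_b

# Near viewing: the disparity/motion cue is reliable (small variance) and dominates.
print(combine(mu_a=1.0, var_a=0.1, mu_b=2.0, var_b=1.0))   # close to 1.0
# Far viewing: the same cue is noisy, so the competing estimate dominates instead.
print(combine(mu_a=1.0, var_a=5.0, mu_b=2.0, var_b=1.0))   # close to 2.0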

Relevance: 90.00%

Abstract:

In this paper, data from spaceborne radar, lidar, and infrared radiometers on the “A-Train” of satellites are combined in a variational algorithm to retrieve ice cloud properties. The method allows a seamless retrieval from regions where both radar and lidar are sensitive to regions where only one of them detects the cloud. We first implement a cloud phase identification method, including identification of supercooled water layers using the lidar signal and temperature to discriminate ice from liquid. We also include rigorous calculation of the errors assigned in the variational scheme. When radiances are not assimilated, we estimate the impact of the microphysical assumptions on the algorithm by evaluating how changes in the area-diameter and density-diameter relationships affect the retrieved cloud properties. We show that changes to these assumptions affect the radar-only and lidar-only retrievals more than the radar-lidar retrieval, although the lidar-only extinction retrieval is only weakly affected. We also show that making use of the molecular lidar signal beyond the cloud as a constraint on optical depth, when ice clouds are sufficiently thin to allow the lidar signal to penetrate them entirely, improves the retrieved extinction. When infrared radiances are available, they provide an extra constraint and allow the extinction-to-backscatter ratio to vary linearly with height instead of being constant, which improves the vertical distribution of retrieved cloud properties.
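
The variational scheme weights each observation by its assigned error; as a generic illustration (not the algorithm of the paper), the Python sketch below solves a linear-Gaussian retrieval of this form, with a toy forward model standing in for the radar, lidar, and radiance operators.

import numpy as np

# Generic variational retrieval sketch: minimise
#   J(x) = (y - Hx)^T R^-1 (y - Hx) + (x - xa)^T B^-1 (x - xa)
# where x is the state (e.g. a profile of ice cloud properties), y the observations
# (radar, lidar, radiances), H a linearised forward model, R the observation-error
# covariance and B the prior covariance. All numbers below are illustrative.

def variational_retrieval(y, H, R, xa, B):
    Ri = np.linalg.inv(R)
    Bi = np.linalg.inv(B)
    A = H.T @ Ri @ H + Bi              # Hessian of J (normal equations)
    b = H.T @ Ri @ y + Bi @ xa
    x_hat = np.linalg.solve(A, b)      # minimiser (one step, since H is linear)
    return x_hat, np.linalg.inv(A)     # retrieval and its error covariance

H = np.array([[1.0, 0.5], [0.2, 1.0], [0.8, 0.8]])   # toy forward model
y = np.array([1.2, 0.9, 1.5])                        # toy observations
x, S = variational_retrieval(y, H, R=0.01 * np.eye(3), xa=np.zeros(2), B=np.eye(2))
print(x, np.sqrt(np.diag(S)))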

Relevance: 90.00%

Abstract:

Rainfall can be modeled as a spatially correlated random field superimposed on a background mean value; therefore, geostatistical methods are appropriate for the analysis of rain gauge data. Nevertheless, there are certain typical features of these data that must be taken into account to produce useful results, including the generally non-Gaussian mixed distribution, the inhomogeneity and low density of observations, and the temporal and spatial variability of spatial correlation patterns. Many studies show that rigorous geostatistical analysis performs better than other available interpolation techniques for rain gauge data. Important elements are the use of climatological variograms and the appropriate treatment of rainy and nonrainy areas. Benefits of geostatistical analysis for rainfall include ease of estimating areal averages, estimation of uncertainties, and the possibility of using secondary information (e.g., topography). Geostatistical analysis also facilitates the generation of ensembles of rainfall fields that are consistent with a given set of observations, allowing for a more realistic exploration of errors and their propagation in downstream models, such as those used for agricultural or hydrological forecasting. This article provides a review of geostatistical methods used for kriging, exemplified where appropriate by daily rain gauge data from Ethiopia.
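
As an illustration of the kriging approach reviewed here, the Python sketch below performs ordinary kriging of a few synthetic gauge values using an exponential variogram; the variogram parameters are arbitrary rather than fitted climatological values.

import numpy as np

# Ordinary kriging sketch with an exponential variogram (illustrative parameters).
def variogram(h, sill=1.0, rng=50.0, nugget=0.1):
    gamma = nugget + (sill - nugget) * (1.0 - np.exp(-h / rng))
    return np.where(h > 0, gamma, 0.0)       # gamma(0) = 0 by definition

def ordinary_krige(xy_obs, z_obs, xy_target):
    n = len(z_obs)
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    # Kriging system: [gamma_ij 1; 1 0] [w; mu] = [gamma_i0; 1]
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(xy_obs - xy_target, axis=-1))
    sol = np.linalg.solve(A, b)
    estimate = sol[:n] @ z_obs               # weights sum to 1 by construction
    kriging_variance = sol @ b               # estimation uncertainty
    return estimate, kriging_variance

xy = np.array([[0.0, 0.0], [30.0, 5.0], [10.0, 40.0]])   # gauge coordinates (km)
z = np.array([2.0, 5.0, 0.0])                            # daily rainfall (mm)
print(ordinary_krige(xy, z, np.array([15.0, 15.0])))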

Relevance: 90.00%

Abstract:

This paper analyzes the delay performance of the Enhanced relay-enabled Distributed Coordination Function (ErDCF) for wireless ad hoc networks under ideal conditions and in the presence of transmission errors. Relays are nodes capable of supporting high data rates for other, low-data-rate nodes. In an ideal channel, ErDCF achieves higher throughput and reduced energy consumption compared to the IEEE 802.11 Distributed Coordination Function (DCF). This gain is still maintained in the presence of errors. Relays are also expected to reduce delay; however, the delay behavior of ErDCF under transmission errors is not known. In this work, we present the impact of transmission errors on delay. It turns out that under transmission errors of sufficient magnitude to increase the number of dropped packets, packet delay is reduced. This is due to the increase in the probability of failure: the packet drop time increases, reflecting the throughput degradation.

Relevance: 90.00%

Abstract:

This investigation moves beyond the traditional studies of word reading to identify how the production complexity of words affects reading accuracy in an individual with deep dyslexia (JO). We examined JO’s ability to read words aloud while manipulating both the production complexity of the words and the semantic context. The classification of words as either phonetically simple or complex was based on the Index of Phonetic Complexity. The semantic context was varied using a semantic blocking paradigm (i.e., semantically blocked and unblocked conditions). In the semantically blocked condition words were grouped by semantic categories (e.g., table, sit, seat, couch), whereas in the unblocked condition the same words were presented in a random order. JO’s performance on reading aloud was also compared to her performance on a repetition task using the same items. Results revealed a strong interaction between word complexity and semantic blocking for reading aloud but not for repetition. JO produced the greatest number of errors for phonetically complex words in the semantically blocked condition. This interaction suggests that semantic processes are constrained by output production processes, and that these constraints are exaggerated when responses are derived from visual rather than auditory targets. This complex relationship between orthographic, semantic, and phonetic processes highlights the need for word recognition models to explicitly account for production processes.

Relevance: 90.00%

Abstract:

The Eyjafjallajökull volcano in Iceland erupted explosively on 14 April 2010, emitting a plume of ash into the atmosphere. The ash was transported from Iceland toward Europe, where mostly cloud-free skies allowed ground-based lidars at Chilbolton in England and Leipzig in Germany to estimate the mass concentration in the ash cloud as it passed overhead. The UK Met Office's Numerical Atmospheric-dispersion Modeling Environment (NAME) has been used to simulate the evolution of the ash cloud from the Eyjafjallajökull volcano during the initial phase of the ash emissions, 14–16 April 2010. NAME captures the timing and sloped structure of the ash layer observed over Leipzig, close to the central axis of the ash cloud. Relatively small errors in the ash cloud position, probably caused by the cumulative effect of errors in the driving meteorology en route, result in a timing error at distances far from the central axis of the ash cloud. Taking the timing error into account, NAME is able to capture the sloped ash layer over the UK. Comparison of the lidar observations and NAME simulations has allowed an estimate of the plume height time series to be made. The large variations in plume height must be included in the model input in order to accurately predict the ash cloud structure at long range. Quantitative comparison with the mass concentrations at Leipzig and Chilbolton suggests that around 3% of the total emitted mass is transported as far as these sites by small (<100 μm diameter) ash particles.

Relevance: 90.00%

Abstract:

At criminal trial, we demand that those accused of criminal wrongdoing be presumed innocent until proven guilty beyond any reasonable doubt. What are the moral and/or political grounds of this demand? One popular and natural answer to this question focuses on the moral badness or wrongness of convicting and punishing innocent persons, which I call the direct moral grounding. In this essay, I suggest that this direct moral grounding, if accepted, may well have important ramifications for other areas of the criminal justice process, and in particular those parts in which we (through our legislatures and judges) decide how much punishment to distribute to guilty persons. If, as the direct moral grounding suggests, we should prefer under-punishment to over-punishment under conditions of uncertainty, due to the moral seriousness of errors which inappropriately punish persons, then we should also prefer erring on the side of under-punishment when considering how much to punish those who may justly be punished. Some objections to this line of thinking are considered.

Relevance: 90.00%

Abstract:

The development of NWP models with grid spacing down to 1 km should produce more realistic forecasts of convective storms. However, greater realism does not necessarily mean more accurate precipitation forecasts. The rapid growth of errors on small scales, in conjunction with preexisting errors on larger scales, may limit the usefulness of such models. The purpose of this paper is to examine whether improved model resolution alone is able to produce more skillful precipitation forecasts on useful scales, and how the skill varies with spatial scale. A verification method is described in which skill is determined from a comparison of rainfall forecasts with radar using fractional coverage over different sized areas. The Met Office Unified Model was run with grid spacings of 12, 4, and 1 km for 10 days in which convection occurred during the summers of 2003 and 2004. All forecasts were run from 12-km initial states for a clean comparison. The results show that the 1-km model was the most skillful over all but the smallest scales (approximately <10–15 km). A measure of acceptable skill was defined; this was attained by the 1-km model at scales around 40–70 km, some 10–20 km less than that of the 12-km model. The biggest improvement occurred for heavier, more localized rain, despite it being more difficult to predict. The 4-km model did not improve much on the 12-km model because of the difficulties of representing convection at that resolution, which were accentuated by the spinup from 12-km fields.
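
The fractional-coverage comparison can be sketched as follows, assuming a Fractions Skill Score style formulation and using synthetic fields rather than the Unified Model and radar data of the paper: binary exceedance fields are converted to neighbourhood fractions and the two fraction fields are compared.

import numpy as np

# Sketch of a fractions-based verification: convert forecast and observed rain fields
# to binary exceedance fields, compute fractional coverage in n x n neighbourhoods,
# and score agreement between the two fraction fields (FSS-style formulation assumed).

def fractions(binary_field, n):
    ny, nx = binary_field.shape
    half = n // 2
    out = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            j0, j1 = max(0, j - half), min(ny, j + half + 1)
            i0, i1 = max(0, i - half), min(nx, i + half + 1)
            out[j, i] = binary_field[j0:j1, i0:i1].mean()
    return out

def fractions_skill(forecast, observed, threshold, n):
    pf = fractions((forecast >= threshold).astype(float), n)
    po = fractions((observed >= threshold).astype(float), n)
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 2.0, size=(50, 50))     # synthetic "radar" rainfall (mm)
fcst = np.roll(obs, 3, axis=1)               # forecast displaced by 3 grid lengths
for n in (1, 5, 11, 21):                     # skill rises as the neighbourhood grows
    print(n, round(fractions_skill(fcst, obs, threshold=4.0, n=n), 3))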

Relevance: 90.00%

Abstract:

We describe a one-port de-embedding technique suitable for the quasi-optical characterization of terahertz integrated components at frequencies beyond the operational range of most vector network analyzers. This technique is also suitable when the manufacturing of precision terminations to sufficiently fine tolerances for the application of a TRL de-embedding technique is not possible. The technique is based on vector reflection measurements of a series of easily realizable test pieces. A theoretical analysis is presented for the precision of the technique when implemented using a quasi-optical null-balanced bridge reflectometer. The analysis takes into account quantization effects in the linear and angular encoders associated with the balancing procedure, as well as source power and detector noise equivalent power. The precision in measuring waveguide characteristic impedance and attenuation using this de-embedding technique is further analyzed after taking into account changes in the power coupled due to axial, rotational, and lateral alignment errors between the device under test and the instruments' test port. The analysis is based on the propagation of errors after assuming imperfect coupling of two fundamental Gaussian beams. The required precision in repositioning the samples at the instruments' test-port is discussed. Quasi-optical measurements using the de-embedding process for a WR-8 adjustable precision short at 125 GHz are presented. The de-embedding methodology may be extended to allow the determination of S-parameters of arbitrary two-port junctions. The measurement technique proposed should prove most useful above 325 GHz where there is a lack of measurement standards.
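
The alignment-error analysis can be illustrated with the standard coupling expressions for two identical fundamental Gaussian beams; the waist radius, offsets, and tilts in the Python sketch below are assumed values for illustration, not the instrument parameters of the paper.

import math

# Power coupling between two identical fundamental Gaussian beams under small
# misalignments (standard quasi-optical approximations; illustrative numbers only):
#   lateral offset dx at the waist:  eta_lat  ~ exp(-(dx / w0)**2)
#   tilt theta between beam axes:    eta_tilt ~ exp(-(pi * w0 * theta / lam)**2)

w0 = 4.0e-3                      # beam waist radius (m), assumed
lam = 3.0e8 / 125e9              # wavelength at 125 GHz (m)

def power_coupling(dx, theta):
    eta_lat = math.exp(-(dx / w0) ** 2)
    eta_tilt = math.exp(-(math.pi * w0 * theta / lam) ** 2)
    return eta_lat * eta_tilt

# Repositioning errors of the device under test change the coupled power and hence the
# apparent reflection, so the coupling loss bounds the achievable measurement precision.
for dx, theta in [(0.0, 0.0), (100e-6, 0.0), (0.0, 2e-3), (100e-6, 2e-3)]:
    loss = 1.0 - power_coupling(dx, theta)
    print(f"dx = {dx * 1e6:5.0f} um, tilt = {theta * 1e3:3.1f} mrad, coupling loss = {loss:.4%}")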

Relevance: 90.00%

Abstract:

We present a new composite of geomagnetic activity which is designed to be as homogeneous in its construction as possible. This is done by only combining data that, by virtue of the locations of the source observatories used, have similar responses to solar wind and IMF (interplanetary magnetic field) variations. This will enable us (in Part 2, Lockwood et al., 2013a) to use the new index to reconstruct the interplanetary magnetic field, B, back to 1846 with a full analysis of errors. Allowance is made for the effects of secular change in the geomagnetic field. The composite uses interdiurnal variation data from Helsinki for 1845–1890 (inclusive) and 1893–1896 and from Eskdalemuir from 1911 to the present. The gaps are filled using data from the Potsdam (1891–1892 and 1897–1907) and the nearby Seddin observatories (1908–1910), and intercalibration is achieved using the Potsdam–Seddin sequence. The new index is termed IDV(1d) because it employs many of the principles of the IDV index derived by Svalgaard and Cliver (2010), inspired by the u index of Bartels (1932); however, we revert to using one-day (1d) means, as employed by Bartels, because the use of near-midnight values in IDV introduces contamination by the substorm current wedge auroral electrojet, giving noise and a dependence on solar wind speed that varies with latitude. The composite is compared with independent early data from European-sector stations (Greenwich, St Petersburg, Parc St Maur, and Ekaterinburg), as well as with the composite u index, compiled from 2–6 stations by Bartels, and the IDV index of Svalgaard and Cliver. Agreement is found to be extremely good in all cases except two. Firstly, the Greenwich data are shown to have gradually degraded in quality until new instrumentation was installed in 1915. Secondly, we infer that the Bartels u index is increasingly unreliable before about 1886 and overestimates the solar cycle amplitude between 1872 and 1883; this overestimate is amplified in the proxy data used before 1872. This is therefore also true of the IDV index, which makes direct use of the u index values.
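
The essential construction of an interdiurnal-variation index can be sketched in a few lines of Python (synthetic data; the real IDV(1d) processing also corrects for secular change and intercalibrates the contributing observatories, as described above).

import numpy as np

# One-day interdiurnal variation sketch: form daily means of a horizontal geomagnetic
# component, difference successive days, and average the absolute differences over a year.

def idv_1d_style_index(hourly_values, hours_per_day=24):
    daily_means = hourly_values.reshape(-1, hours_per_day).mean(axis=1)
    return np.mean(np.abs(np.diff(daily_means)))

rng = np.random.default_rng(1)
days = 365
hourly = 20000.0 + np.repeat(rng.normal(0.0, 10.0, days), 24) + rng.normal(0.0, 3.0, days * 24)
print(f"annual index: {idv_1d_style_index(hourly):.2f} nT")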

Relevance: 90.00%

Abstract:

We present a new reconstruction of the interplanetary magnetic field (IMF, B) for 1846–2012 with a full analysis of errors, based on the homogeneously constructed IDV(1d) composite of geomagnetic activity presented in Part 1 (Lockwood et al., 2013a). An analysis of the dependence of the commonly used geomagnetic indices on solar wind parameters is presented, which helps explain why annual means of interdiurnal range data, such as the new composite, depend mainly on the IMF, with only a very weak influence of the solar wind flow speed. The best results are obtained using a polynomial (rather than a linear) fit of the form B = χ · (IDV(1d) − β)^α, with best-fit coefficients χ = 3.469, β = 1.393 nT, and α = 0.420. The results are contrasted with the reconstruction of the IMF since 1835 by Svalgaard and Cliver (2010).
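
Applying the quoted best-fit form is a one-line transformation of an annual IDV(1d) value; the index value used in the Python sketch below is purely illustrative, not a value from the reconstruction.

# B = chi * (IDV(1d) - beta) ** alpha, with the best-fit coefficients quoted above.
CHI, BETA_NT, ALPHA = 3.469, 1.393, 0.420

def imf_from_idv1d(idv_1d_nT):
    return CHI * (idv_1d_nT - BETA_NT) ** ALPHA

print(f"IDV(1d) = 8.0 nT  ->  B = {imf_from_idv1d(8.0):.2f} nT")   # illustrative input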

Relevance: 90.00%

Abstract:

Recent advances in thermal infrared remote sensing include the increased availability of airborne hyperspectral imagers (such as the Hyperspectral Thermal Emission Spectrometer, HyTES, the Telops HyperCam, and the Specim aisaOWL), and it is planned that an increased number of spectral bands in the long-wave infrared (LWIR) region will soon be measured from space at reasonably high spatial resolution (by imagers such as HyspIRI). Detailed LWIR emissivity spectra are required to best interpret the observations from such systems. This includes the highly heterogeneous urban environment, whose construction materials are not yet particularly well represented in spectral libraries. Here, we present a new online spectral library of urban construction materials, including LWIR emissivity spectra of 74 samples of impervious surfaces derived using measurements made by a portable Fourier Transform InfraRed (FTIR) spectrometer. FTIR emissivity measurements need to be made carefully, otherwise they are prone to a series of errors relating to instrumental setup and radiometric calibration, which here relies on external blackbody sources. The performance of the laboratory-based emissivity measurement approach applied here, which in future can also be deployed in the field (e.g. to examine urban materials in situ), is evaluated. Our spectral library also contains matching short-wave (VIS–SWIR) reflectance spectra observed for each urban sample. This allows us to examine which characteristic LWIR (and shortwave) spectral signatures may in future best allow for the identification and discrimination of the various urban construction materials, which often overlap with respect to their chemical/mineralogical constituents. Hyperspectral or even strongly multi-spectral LWIR information appears especially useful, given that many urban materials are composed of minerals exhibiting notable reststrahlen/absorption effects in this spectral region. The final spectra and interpretations are included in the London Urban Micromet data Archive (LUMA; http://LondonClimate.info/LUMA/SLUM.html).
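
A simplified Python sketch of the blackbody-based calibration step underlying such measurements is given below; it neglects the reflected downwelling term that the full processing must handle, and every instrument value is invented.

import numpy as np

# Two-blackbody radiometric calibration sketch for an FTIR emissivity measurement
# (simplified: reflected downwelling radiance is neglected). Raw spectra are mapped to
# radiance with a gain/offset derived from hot and cold blackbody views; emissivity is
# then the calibrated sample radiance divided by the Planck radiance at the sample
# temperature.

H_PLANCK, C_LIGHT, K_BOLTZ = 6.626e-34, 2.998e8, 1.381e-23

def planck(wavelength_m, temp_k):
    a = 2.0 * H_PLANCK * C_LIGHT ** 2 / wavelength_m ** 5
    return a / (np.exp(H_PLANCK * C_LIGHT / (wavelength_m * K_BOLTZ * temp_k)) - 1.0)

def calibrate(raw_sample, raw_hot, raw_cold, t_hot, t_cold, wavelength_m):
    gain = (planck(wavelength_m, t_hot) - planck(wavelength_m, t_cold)) / (raw_hot - raw_cold)
    return planck(wavelength_m, t_cold) + gain * (raw_sample - raw_cold)

wl = np.linspace(8e-6, 14e-6, 5)                            # LWIR wavelengths (m)
raw_hot, raw_cold = np.full(5, 900.0), np.full(5, 300.0)    # invented instrument counts
raw_sample = np.full(5, 700.0)
radiance = calibrate(raw_sample, raw_hot, raw_cold, t_hot=340.0, t_cold=290.0, wavelength_m=wl)
emissivity = radiance / planck(wl, 335.0)                   # assumed sample temperature (K)
print(np.round(emissivity, 3))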

Relevance: 90.00%

Abstract:

With the prospect of exascale computing, computational methods requiring only local data become especially attractive. Consequently, the typical domain decomposition of atmospheric models means horizontally-explicit vertically-implicit (HEVI) time-stepping schemes warrant further attention. In this analysis, Runge-Kutta implicit-explicit schemes from the literature are analysed for their stability and accuracy using a von Neumann stability analysis of two linear systems. Attention is paid to the numerical phase to indicate the behaviour of phase and group velocities. Where the analysis is tractable, analytically derived expressions are considered. For more complicated cases, amplification factors have been numerically generated and the associated amplitudes and phase diagnosed. Analysis of a system describing acoustic waves has necessitated attributing the three resultant eigenvalues to the three physical modes of the system. To do so, a series of algorithms has been devised to track the eigenvalues across the frequency space. The result enables analysis of whether the schemes exactly preserve the non-divergent mode; and whether there is evidence of spurious reversal in the direction of group velocities or asymmetry in the damping for the pair of acoustic modes. Frequency ranges that span next-generation high-resolution weather models to coarse-resolution climate models are considered; and a comparison is made of errors accumulated from multiple stability-constrained shorter time-steps from the HEVI scheme with a single integration from a fully implicit scheme over the same time interval. Two schemes, “Trap2(2,3,2)” and “UJ3(1,3,2)”, both already used in atmospheric models, are identified as offering consistently good stability and representation of phase across all the analyses. Furthermore, according to a simple measure of computational cost, “Trap2(2,3,2)” is the least expensive.
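
As a minimal illustration of the amplification-factor calculation underlying such an analysis, the Python sketch below applies a first-order forward-backward IMEX step to the linear oscillation equation dy/dt = i(omega_s + omega_f) y, treating the fast frequency implicitly and the slow one explicitly; this stand-in scheme is far simpler than the Runge-Kutta IMEX schemes analysed in the paper.

import numpy as np

# Von Neumann-style amplification factor for a first-order IMEX (HEVI-like) scheme on
#   dy/dt = i*(omega_s + omega_f)*y,   slow term explicit, fast term implicit:
#   y^{n+1} = y^n + dt*(i*omega_s*y^n + i*omega_f*y^{n+1})
#   =>  A = (1 + i*dt*omega_s) / (1 - i*dt*omega_f)
# |A| <= 1 indicates stability; comparing arg(A) with the exact phase dt*(omega_s+omega_f)
# diagnoses the phase (and hence group-velocity) behaviour.

def amplification(dt, omega_s, omega_f):
    return (1.0 + 1j * dt * omega_s) / (1.0 - 1j * dt * omega_f)

dt = 100.0                                        # time step (s), illustrative
for omega_s, omega_f in [(1e-4, 1e-2), (5e-4, 1e-2), (1e-3, 1e-2)]:
    A = amplification(dt, omega_s, omega_f)
    phase_error = np.angle(A) - dt * (omega_s + omega_f)
    print(f"|A| = {abs(A):.4f}, phase error = {phase_error:+.4f} rad")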

Relevance: 90.00%

Abstract:

The evidence for anthropogenic climate change continues to strengthen, and concerns about severe weather events are increasing. As a result, scientific interest is rapidly shifting from detection and attribution of global climate change to prediction of its impacts at the regional scale. However, nearly everything we have any confidence in when it comes to climate change is related to global patterns of surface temperature, which are primarily controlled by thermodynamics. In contrast, we have much less confidence in atmospheric circulation aspects of climate change, which are primarily controlled by dynamics and exert a strong control on regional climate. Model projections of circulation-related fields, including precipitation, show a wide range of possible outcomes, even on centennial timescales. Sources of uncertainty include low-frequency chaotic variability and the sensitivity to model error of the circulation response to climate forcing. As the circulation response to external forcing appears to project strongly onto existing patterns of variability, knowledge of errors in the dynamics of variability may provide some constraints on model projections. Nevertheless, higher scientific confidence in circulation-related aspects of climate change will be difficult to obtain. For effective decision-making, it is necessary to move to a more explicitly probabilistic, risk-based approach.