49 results for position estimation
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
It is reported in the literature that distances from the observer are underestimated more in virtual environments (VEs) than under physical-world conditions. On the other hand, estimation of size in VEs is quite accurate and follows a size-constancy law when rich cues are present. This study investigates how estimation of distance in a CAVE™ environment is affected by poor and rich cue conditions, subject experience, and environmental learning when the position of the objects is estimated using an experimental paradigm that exploits size constancy. A group of 18 healthy participants was asked to move a virtual sphere, controlled with the wand joystick, to the position where they thought a previously displayed virtual cube (stimulus) had appeared. Real-size physical models of the virtual objects were also presented to the participants as a reference for real physical distance during the trials. An accurate estimation of distance implied that the participants assessed the relative size of sphere and cube correctly. The cube appeared at depths between 0.6 m and 3 m, measured along the depth direction of the CAVE. The task was carried out in two environments: a poor cue one with limited background cues, and a rich cue one with textured background surfaces. It was found that distances were underestimated in both poor and rich cue conditions, with greater underestimation in the poor cue environment. The analysis also indicated that factors such as subject experience and environmental learning were not influential. However, least-squares fitting of Stevens' power law indicated a high degree of accuracy during the estimation of object locations. This accuracy was higher than in other studies that were not based on a size-estimation paradigm. Thus, as an indirect result, this study appears to show that accuracy when estimating egocentric distances may be increased by using an experimental method that provides information on the relative size of the objects used.
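For reference, Stevens' power law models perceived magnitude as a power function of physical magnitude, here d_est = k * d_true^a, and the least-squares fit mentioned above is conventionally done as a linear regression in log-log space. The sketch below is a minimal illustration of that fitting step; the response values are hypothetical, not data from the study.

```python
# A minimal sketch (not the study's code) of fitting Stevens' power law,
# d_est = k * d_true**a, by linear least squares in log-log space.
import numpy as np

d_true = np.array([0.6, 1.0, 1.5, 2.0, 2.5, 3.0])       # stimulus depth (m)
d_est = np.array([0.58, 0.93, 1.38, 1.80, 2.21, 2.62])  # hypothetical responses (m)

# log(d_est) = log(k) + a * log(d_true): fit a straight line to the logs.
a, log_k = np.polyfit(np.log(d_true), np.log(d_est), 1)
k = np.exp(log_k)

print(f"exponent a = {a:.3f}, scale k = {k:.3f}")
# a close to 1 with k slightly below 1 would indicate accurate but
# uniformly underestimated distances, as reported in the abstract.
```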
Abstract:
A predictability index was defined as the ratio of the variance of the optimal prediction to the variance of the original time series by Granger and Anderson (1976) and Bhansali (1989). A new simplified algorithm for estimating the predictability index is introduced, and the new estimator is shown to be a simple and effective tool in applications of predictability ranking and as an aid in the preliminary analysis of time series. The relationship between the predictability index and the positions of the poles and the lag p of a time series that can be modelled as an AR(p) process is also investigated. The effectiveness of the algorithm is demonstrated using numerical examples, including an application to stock prices.
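Since the optimal one-step prediction has variance var(x) - var(e), where var(e) is the innovation variance, the index above reduces to 1 - var(e)/var(x). The sketch below estimates this quantity for an AR(p) model fitted by ordinary least squares; it illustrates the general idea only and is an assumption, not the paper's simplified algorithm.

```python
# Estimate the predictability index 1 - var(e)/var(x) of an AR(p) series
# by fitting the AR coefficients with ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)

def ar_predictability(x, p):
    """Fit AR(p) by least squares; return var(x_hat)/var(x) = 1 - var(e)/var(x)."""
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1.0 - resid.var() / y.var()

# Simulated example: x_t = 0.75*x_{t-1} - 0.5*x_{t-2} + e_t.
x = np.zeros(5000)
for t in range(2, len(x)):
    x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + rng.standard_normal()

print(f"estimated predictability index: {ar_predictability(x, 2):.3f}")
```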
Abstract:
Data assimilation is a sophisticated mathematical technique for combining observational data with model predictions to produce state and parameter estimates that most accurately approximate the current and future states of the true system. The technique is commonly used in atmospheric and oceanic modelling, combining empirical observations with model predictions to produce more accurate and well-calibrated forecasts. Here, we consider a novel application within a coastal environment and describe how the method can also be used to deliver improved estimates of uncertain morphodynamic model parameters. This is achieved using a technique known as state augmentation. Earlier applications of state augmentation have typically employed the 4D-Var, Kalman filter or ensemble Kalman filter assimilation schemes. Our new method is based on a computationally inexpensive 3D-Var scheme, where the specification of the error covariance matrices is crucial for success. A simple 1D model of bed-form propagation is used to demonstrate the method. The scheme is capable of recovering near-perfect parameter values and, therefore, improves the capability of our model to predict future bathymetry. Such positive results suggest the potential for application to more complex morphodynamic models.
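As a concrete picture of state augmentation: the uncertain parameter is appended to the state vector and updated by the same analysis equation as the state, and observations of the state correct the parameter only through the state-parameter cross-covariances in the background error covariance matrix B, which is why its specification is crucial. The following is a minimal 3D-Var-style sketch with made-up numbers, not the morphodynamic model or covariances of the study.

```python
# 3D-Var state augmentation: state x and parameter theta are stacked into
# one vector z = [x, theta] and analysed together in a single update.
import numpy as np

# Background: three state values plus one parameter, z_b = [x_b, theta_b].
z_b = np.array([1.0, 2.0, 3.0, 0.5])

# Background error covariance with state-parameter cross terms (assumed).
B = np.array([
    [0.10, 0.02, 0.00, 0.03],
    [0.02, 0.10, 0.02, 0.03],
    [0.00, 0.02, 0.10, 0.03],
    [0.03, 0.03, 0.03, 0.20],
])

# Observe the state only: H never touches theta directly.
H = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
])
R = 0.05 * np.eye(2)        # observation error covariance
y = np.array([1.3, 2.6])    # observations of x[0] and x[2]

# Closed form of the 3D-Var cost-function minimum:
# z_a = z_b + B H^T (H B H^T + R)^{-1} (y - H z_b)
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
z_a = z_b + K @ (y - H @ z_b)

print("analysed state:", z_a[:3])
print("analysed parameter:", z_a[3])  # updated despite never being observed
```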
Abstract:
Twenty-first-century climate change is projected to result in an intensification of the global hydrological cycle, but there is substantial uncertainty in how this will impact freshwater availability. A relatively overlooked aspect of this uncertainty pertains to how different methods of estimating potential evapotranspiration (PET) respond to a changing climate. Here we investigate the global response of six different PET methods to a 2 °C rise in global mean temperature. All methods suggest an increase in PET associated with a warming climate. However, differences of over 100% in the PET climate change signal are found between methods. Analysis of a precipitation/PET aridity index and regional water surplus indicates that, for certain regions and GCMs, the choice of PET method can actually determine the direction of projections of future water resources. As such, method dependence of the PET climate change signal is an important source of uncertainty in projections of future freshwater availability.
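To illustrate why the choice of method matters, the sketch below perturbs two common temperature-based PET formulations, Hargreaves and Hamon, by a uniform +2 °C and compares the change in PET and in a precipitation/PET aridity index. The formulas are standard textbook forms, but the site constants (radiation, day length, precipitation) are placeholders, and these two methods are not necessarily among the six used in the study.

```python
# Compare the +2 degC response of two temperature-based PET methods and the
# resulting precipitation/PET aridity index. Inputs are illustrative only.
import math

def pet_hargreaves(tmean, tmax, tmin, ra=12.0):
    """Hargreaves reference PET (mm/day); ra = extraterrestrial radiation
    expressed as mm/day of equivalent evaporation (placeholder value)."""
    return 0.0023 * ra * (tmean + 17.8) * math.sqrt(tmax - tmin)

def pet_hamon(tmean, daylen=12.0):
    """Hamon PET (mm/day); daylen = day length in hours (placeholder)."""
    esat = 0.611 * math.exp(17.27 * tmean / (tmean + 237.3))  # kPa
    return 29.8 * daylen * esat / (tmean + 273.2)

precip = 2.0  # mm/day, held fixed for illustration
for dT in (0.0, 2.0):
    hg = pet_hargreaves(20.0 + dT, 27.0 + dT, 13.0 + dT)
    hm = pet_hamon(20.0 + dT)
    print(f"+{dT:.0f} degC: Hargreaves PET {hg:.2f}, Hamon PET {hm:.2f} mm/day; "
          f"P/PET aridity {precip / hg:.2f} vs {precip / hm:.2f}")
```

With uniform warming the diurnal range is unchanged, so Hargreaves responds only through its linear temperature term while Hamon responds through the near-exponential saturation vapour pressure, giving roughly twice the relative signal: a small-scale analogue of the method dependence the abstract describes.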
Abstract:
Models developed to identify the rates and origins of nutrient export from land to stream require an accurate assessment of the nutrient load present in the water body in order to calibrate model parameters and structure. These data are rarely available at a representative scale and in an appropriate chemical form except in research catchments. Observational errors associated with nutrient load estimates based on these data lead to a high degree of uncertainty in modelling and nutrient budgeting studies. Here, daily paired instantaneous P and flow data for 17 UK research catchments covering a total of 39 water years (WY) have been used to explore the nature and extent of the observational error associated with nutrient flux estimates based on partial fractions and infrequent sampling. The daily records were artificially decimated to create 7 stratified sampling records, 7 weekly records, and 30 monthly records from each WY and catchment. These were used to evaluate the impact of sampling frequency on load estimate uncertainty. The analysis underlines the high uncertainty of load estimates based on monthly data and individual P fractions rather than total P. Catchments with a high baseflow index and/or low population density were found to return a lower RMSE on load estimates when sampled infrequently than those with a low baseflow index and high population density. Catchment size was not shown to be important, though a limitation of this study is that daily records may fail to capture the full range of P export behaviour in smaller catchments with flashy hydrographs, leading to an underestimate of uncertainty in load estimates for such catchments. Further analysis of sub-daily records is needed to investigate this fully. Here, recommendations are given on load estimation methodologies for different catchment types sampled at different frequencies, and on the ways in which this analysis can be used to identify observational error and uncertainty for model calibration and nutrient budgeting studies.
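The core of the decimation experiment can be pictured as follows: the daily concentration-flow product defines a "true" load, a sparser sampling schedule is drawn from it, and the spread of the scaled-up estimates measures the observational error. The sketch below is a minimal illustration with synthetic data and a simple averaging estimator; it does not reproduce the stratified schemes or estimators of the study.

```python
# Decimate a synthetic daily P load record to monthly sampling and measure
# the RMSE of the resulting annual load estimates against the "true" load.
import numpy as np

rng = np.random.default_rng(1)
n_days = 365
flow = rng.lognormal(mean=0.0, sigma=0.8, size=n_days)   # m3/s, synthetic
conc = 0.1 + 0.05 * flow + rng.normal(0, 0.02, n_days)   # mg/l, flow-related

daily_load = conc * flow * 86.4   # kg/day (mg/l * m3/s * 86400 s / 1000)
true_load = daily_load.sum()

# Monthly sampling: one random day per ~30-day block; the annual load is
# estimated by scaling the mean sampled daily load up to the full year.
estimates = []
for _ in range(500):
    idx = [rng.integers(i, min(i + 30, n_days)) for i in range(0, n_days, 30)]
    estimates.append(daily_load[idx].mean() * n_days)

rmse = np.sqrt(np.mean((np.array(estimates) - true_load) ** 2))
print(f"true load {true_load:.0f} kg; RMSE of monthly estimates {rmse:.0f} kg "
      f"({100 * rmse / true_load:.0f}%)")
```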
Abstract:
A method is presented which allows thermal inertia (the soil heat capacity times the square root of the soil thermal diffusivity, $C_h\sqrt{D_h}$) to be estimated remotely from micrometeorological observations. The method uses the drop in surface temperature, $T_s$, between sunset and sunrise, and the average night-time net radiation during that period, for clear, still nights. A Fourier series analysis was applied to analyse the time series of $T_s$. The Fourier series constants, together with the remote estimate of thermal inertia, were used in an analytical expression to calculate diurnal estimates of the soil heat flux, G. These remote estimates of $C_h\sqrt{D_h}$ and G compared well with values derived from in situ sensors. The remote and in situ estimates of $C_h\sqrt{D_h}$ both correlated well with topsoil moisture content. This method potentially allows area-average estimates of thermal inertia and soil heat flux to be derived from remote sensing, e.g. METEOSAT Second Generation, where the area is determined by the sensor's height and viewing angle.
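For a homogeneous soil, the standard harmonic-analysis result behind this kind of calculation is that each Fourier component of the surface temperature contributes a heat flux component scaled by the thermal inertia, amplified by $\sqrt{n\omega}$, and phase-shifted by $\pi/4$. The sketch below evaluates that textbook expression for illustrative Fourier coefficients and an assumed thermal inertia; it is a sketch of the standard theory, not necessarily the paper's exact analytical expression.

```python
# Diurnal soil heat flux from surface-temperature Fourier coefficients and
# thermal inertia Gamma = C_h * sqrt(D_h), for a homogeneous soil:
#   T_s(t) = T_mean + sum_n A_n sin(n*w*t + phi_n)
#   G(t)   = Gamma * sum_n A_n * sqrt(n*w) * sin(n*w*t + phi_n + pi/4)
import numpy as np

gamma = 1200.0                  # thermal inertia, J m-2 K-1 s-1/2 (assumed)
omega = 2.0 * np.pi / 86400.0   # diurnal angular frequency, s-1
amps = [8.0, 2.5, 0.8]          # A_n in K (illustrative amplitudes)
phases = [-2.0, 0.5, 1.1]       # phi_n in rad (illustrative)

t = np.linspace(0.0, 86400.0, 145)   # one day at 10-minute resolution
G = sum(
    gamma * A * np.sqrt(n * omega) * np.sin(n * omega * t + phi + np.pi / 4)
    for n, (A, phi) in enumerate(zip(amps, phases), start=1)
)
print(f"peak soil heat flux: {G.max():.0f} W m-2")
```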
Abstract:
This study evaluates computer-generated written explanations about drug prescriptions that are based on an analysis of both patient and doctor informational needs. Three experiments examine the effects of varying the type of information given about the possible side effects of the medication, and the order of information within the explanation. Experiment 1 investigated the effects of these two factors on people's ratings of how good they consider the explanations to be and of their perceived likelihood of taking the medication, as well as on their memory for the information in the explanation. Experiment 2 further examined the effects of varying information about side effects by separating out the contribution of number and severity of side effects. It was found that participants in this study did not “like” explanations that described severe side effects, and also judged that they would be less likely to take the medication if given such explanations. Experiment 3 therefore investigated whether information about severe side effects could be presented in such a way as to increase judgements of how good explanations are thought to be, as well as the perceived likelihood of adherence. The results showed some benefits of providing additional explanatory information.