158 results for sampling error


Relevance: 20.00%

Publisher:

Abstract:

We show that the four-dimensional variational data assimilation method (4DVar) can be interpreted as a form of Tikhonov regularization, a very familiar method for solving ill-posed inverse problems. It is known from image restoration problems that L1-norm penalty regularization recovers sharp edges in the image more accurately than Tikhonov, or L2-norm, penalty regularization. We apply this idea from stationary inverse problems to 4DVar, a dynamical inverse problem, and give examples for an L1-norm penalty approach and a mixed total variation (TV) L1–L2-norm penalty approach. For problems with model error where sharp fronts are present and the background and observation error covariances are known, the mixed TV L1–L2-norm penalty performs better than either the L1-norm method or the strong constraint 4DVar (L2-norm) method. A strength of the mixed TV L1–L2-norm regularization is that, when a simplified form of the background error covariance matrix is used, it produces a much more accurate analysis than 4DVar. The method thus has the potential in numerical weather prediction to overcome operational problems with poorly tuned background error covariance matrices.
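As a schematic of the regularization viewpoint (a sketch in generic 4DVar notation; the paper's exact formulation of the penalties may differ), strong-constraint 4DVar minimizes a Tikhonov-type cost in the initial state x_0, and the L1 and mixed TV L1-L2 variants alter only the background penalty:

% Strong-constraint 4DVar: quadratic (Tikhonov / L2) background penalty
J_{L2}(x_0) = \tfrac{1}{2}(x_0 - x_b)^{\mathsf T} B^{-1}(x_0 - x_b)
            + \tfrac{1}{2}\sum_{i=0}^{n} \bigl(y_i - H_i(x_i)\bigr)^{\mathsf T} R_i^{-1}\bigl(y_i - H_i(x_i)\bigr),
\qquad x_i = M_{0 \to i}(x_0)

% L1-norm penalty: replace the quadratic background term by an L1 norm
% of the (preconditioned) increment
J_{L1}(x_0) = \lambda \,\bigl\| B^{-1/2}(x_0 - x_b) \bigr\|_{1} + (\text{observation term as above})

% Mixed TV L1-L2 penalty: keep the L2 cost and add a total-variation (L1)
% term on the increment, D being a discrete first-difference operator,
% so that sharp fronts in the analysis are not smoothed away
J_{TV}(x_0) = J_{L2}(x_0) + \mu \,\bigl\| D\,(x_0 - x_b) \bigr\|_{1}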

Relevance: 20.00%

Publisher:

Abstract:

We investigate the error dynamics for cycled data assimilation systems, in which the inverse problem of state determination is solved at times $t_k$, $k = 1, 2, 3, \ldots$, with a first guess given by the state propagated via a dynamical system model from time $t_{k-1}$ to time $t_k$. In particular, for nonlinear dynamical systems that are Lipschitz continuous with respect to their initial states, we provide deterministic estimates for the development of the error $\|e_k\| := \|x_k^{(a)} - x_k^{(t)}\|$ between the estimated state $x_k^{(a)}$ and the true state $x_k^{(t)}$ over time. Clearly, an observation error of size $\delta > 0$ leads to an estimation error in every assimilation step. These errors can accumulate if they are not (a) controlled in the reconstruction and (b) damped by the dynamical system under consideration. A data assimilation method is called stable if the error in the estimate is bounded in time by some constant $C$. The key task of this work is to provide estimates for the error $\|e_k\|$, depending on the size $\delta$ of the observation error, the reconstruction operator $R_\alpha$, the observation operator $H$, and the Lipschitz constants $K^{(1)}$ and $K^{(2)}$ on the lower and higher modes controlling the damping behaviour of the dynamics. We show that systems can be stabilized by choosing $\alpha$ sufficiently small, but the bound $C$ will then depend on the data error $\delta$ in the form $c\,\|R_\alpha\|\,\delta$ with some constant $c$. Since $\|R_\alpha\| \to \infty$ for $\alpha \to 0$, the constant might be large. Numerical examples of this behaviour in the nonlinear case are provided using a (low-dimensional) Lorenz '63 system.
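To make the structure of such estimates concrete (a generic sketch under assumed notation, not necessarily the paper's sharpest bound), consider the cycled scheme $x_k^{(a)} = x_k^{(b)} + R_\alpha\bigl(y_k - H x_k^{(b)}\bigr)$ with background $x_k^{(b)} = M(x_{k-1}^{(a)})$ and observation error bounded by $\delta$. If the combined analysis and model step contracts the previous error by a factor $\Lambda < 1$, then

\|e_k\| \;\le\; \Lambda\,\|e_{k-1}\| + \|R_\alpha\|\,\delta
\quad\Longrightarrow\quad
\|e_k\| \;\le\; \Lambda^{k}\,\|e_0\| + \frac{\|R_\alpha\|\,\delta}{1 - \Lambda},

so the long-time bound has the form $c\,\|R_\alpha\|\,\delta$ with $c = 1/(1-\Lambda)$; decreasing $\alpha$ strengthens the contraction but inflates $\|R_\alpha\|$, which is exactly the trade-off described above.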

Relevance: 20.00%

Publisher:

Abstract:

A prediction mechanism is necessary for human visual motion tracking to compensate for the delay in the sensory-motor system. In a previous study, “proactive control” was discussed as one example of a predictive function of human beings, in which the motion of the hands preceded a virtual moving target in visual tracking experiments. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently visual tracking experiment in which a circular orbit is segmented into target-visible and target-invisible regions. The main results of this research were as follows. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm in the brain obtained from environmental stimuli is shortened by more than 10%. This shortening of the period of the rhythm in the brain accelerates the hand motion as soon as the visual information is cut off, and causes the hand motion to precede the target motion. Although the precedence of the hand in the blind region is reset by the environmental information when the target enters the visible region, the hand motion precedes the target on average when the predictive mechanism dominates the error-corrective mechanism.

Relevance: 20.00%

Publisher:

Abstract:

Iatrogenic errors and patient safety in clinical processes are an increasing concern. The quality of process information, in hardcopy or electronic form, can heavily influence clinical behaviour and decision-making errors. Little work has been undertaken to assess the safety impact of the clinical process planning documents that guide clinical actions and decisions. This paper investigates the clinical process documents used in elective surgery and their impact on latent and active clinical errors. Eight clinicians from a large health trust underwent extensive semi-structured interviews to understand their use of clinical documents and their perceived impact on errors and patient safety. Samples of the key types of document used were analysed. Theories of latent organisational and active errors from the literature were combined with the EDA semiotics model of behaviour and decision making to propose the EDA Error Model. This model enabled us to identify perceptual, evaluation, knowledge and action error types, and approaches to reducing their causes. The EDA Error Model was then used to analyse the sample documents and identify error sources and controls. The types of knowledge artefact structure used in the documents were identified and assessed in terms of safety impact. This approach was combined with an analysis of the questionnaire findings using existing error knowledge from the literature. The results identified a number of document and knowledge artefact issues that give rise to latent and active errors, as well as issues concerning medical culture and teamwork, together with recommendations for further work.

Relevance: 20.00%

Publisher:

Abstract:

The present study investigates the growth of error in baroclinic waves. It is found that stable or neutral waves are particularly sensitive to errors in the initial condition. Short stable waves are mainly sensitive to phase errors, and the ultra-long waves to amplitude errors. Analysis simulation experiments have indicated that the amplitudes of the very long waves usually become too small in the free atmosphere, due to the sparse and very irregular distribution of upper-air observations. This also applies to the four-dimensional data assimilation experiments, since the amplitudes of the very long waves are usually underpredicted. The numerical experiments reported here show that if the very long waves have these kinds of amplitude errors in the upper troposphere or lower stratosphere, the error is rapidly propagated (within a day or two) to the surface and the lower troposphere.

Relevance: 20.00%

Publisher:

Abstract:

Proactive motion in hand tracking and in finger bending, in which the body motion occurs prior to the reference signal, was reported by previous researchers when the target signals were shown to the subjects at relatively high speeds or frequencies. These phenomena indicate that the human sensory-motor system tends to choose an anticipatory mode rather than a reactive mode when the target motion is relatively fast. The present research was undertaken to study which mode appears in the sensory-motor system when two persons are asked to track the hand position of their partner at various mean tracking frequencies. The experimental results showed that a transition from a mutual error-correction mode to a synchronization mode occurred in the same region of tracking frequency as the transition from a reactive error-correction mode to a proactive anticipatory mode observed in the mechanical target-tracking experiments. The present research indicated that synchronization of body motion occurred only when both subjects of the pair operated in a proactive anticipatory mode. We also presented mathematical models to explain the behaviour of the error-correction mode and the synchronization mode.

Relevance: 20.00%

Publisher:

Abstract:

We develop a new sparse kernel density estimator using a forward constrained regression framework, within which the non-negativity and summing-to-unity constraints on the mixing weights can easily be satisfied. Our main contribution is to derive a recursive algorithm that selects significant kernels one at a time, based on the minimum integrated square error (MISE) criterion for both the selection of kernels and the estimation of mixing weights. The proposed approach is simple to implement and the associated computational cost is very low. Specifically, the complexity of our algorithm is of the order of the number of training data N, which is much lower than the order N² required by the best existing sparse kernel density estimators. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with accuracy comparable to that of the classical Parzen window estimate and other existing sparse kernel density estimators.
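In generic notation (an illustrative sketch; the paper's derivation may differ in detail), the estimator is a constrained kernel mixture and the selection criterion is the integrated square error between the estimate and the true density p:

\hat{p}(x) = \sum_{j=1}^{N} \beta_j \, K_\sigma(x, x_j),
\qquad \beta_j \ge 0, \quad \sum_{j=1}^{N}\beta_j = 1, \quad \text{most } \beta_j = 0 \ (\text{sparsity})

\mathrm{ISE}(\hat{p}) = \int \bigl(\hat{p}(x) - p(x)\bigr)^2 \, dx
 = \int \hat{p}(x)^2\,dx \;-\; 2\,\mathbb{E}_{p}\bigl[\hat{p}(x)\bigr] \;+\; \text{const}

For Gaussian kernels the first term is available in closed form and the expectation can be approximated by the sample mean over the training data, so each forward-regression step can add the single kernel that most reduces the estimated MISE and then re-solve for the constrained weights.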

Relevance: 20.00%

Publisher:

Abstract:

Aim: To examine the causes of prescribing and monitoring errors in English general practices and provide recommendations for how they may be overcome. Design: Qualitative interview and focus group study with purposive sampling and thematic analysis informed by Reason’s accident causation model. Participants: General practice staff participated in a combination of semi-structured interviews (n=34) and six focus groups (n=46). Setting: Fifteen general practices across three primary care trusts in England. Results: We identified seven categories of high-level error-producing conditions: the prescriber, the patient, the team, the task, the working environment, the computer system, and the primary-secondary care interface. Each of these was further broken down to reveal various error-producing conditions. The prescriber’s therapeutic training, drug knowledge and experience, knowledge of the patient, perception of risk, and their physical and emotional health were all identified as possible causes. The patient’s characteristics and the complexity of the individual clinical case were also found to have contributed to prescribing errors. The importance of feeling comfortable within the practice team was highlighted, as well as the safety of general practitioners (GPs) signing prescriptions generated by nurses when they had not seen the patient themselves. The working environment, with its high workload, time pressures and interruptions, and computer-related issues associated with mis-selecting drugs from electronic pick-lists and overriding alerts, were all highlighted as possible causes of prescribing errors, and these were often interconnected. Conclusion: This study has highlighted the complex underlying causes of prescribing and monitoring errors in general practices, several of which are amenable to intervention.

Relevance: 20.00%

Publisher:

Abstract:

The hybrid Monte Carlo (HMC) method is a popular and rigorous method for sampling from a canonical ensemble. The HMC method is based on classical molecular dynamics simulations combined with a Metropolis acceptance criterion and a momentum resampling step. While the HMC method completely resamples the momentum after each Monte Carlo step, the generalized hybrid Monte Carlo (GHMC) method can be implemented with a partial momentum refreshment step. This property seems desirable for retaining some of the dynamic information throughout the sampling process, similar to stochastic Langevin and Brownian dynamics simulations. It is, however, vital to the success of the GHMC method that the rejection rate in the molecular dynamics part is kept to a minimum; otherwise an undesirable Zitterbewegung is observed in the Monte Carlo samples. In this paper, we describe a method to achieve very low rejection rates by using a modified energy, which is preserved to high order along molecular dynamics trajectories. The modified energy is based on backward error analysis results for symplectic time-stepping methods. The proposed generalized shadow hybrid Monte Carlo (GSHMC) method is applicable to NVT as well as NPT ensemble simulations.
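As a rough illustration of the GHMC building blocks the paper starts from (a minimal sketch in generic notation, not the paper's GSHMC algorithm: the shadow-energy acceptance test and the associated reweighting are omitted, and the step size, trajectory length and refreshment angle below are arbitrary choices):

import numpy as np

def leapfrog(q, p, grad_U, eps, n_steps):
    # Velocity-Verlet / leapfrog integration of Hamiltonian dynamics.
    p = p - 0.5 * eps * grad_U(q)
    for _ in range(n_steps - 1):
        q = q + eps * p
        p = p - eps * grad_U(q)
    q = q + eps * p
    p = p - 0.5 * eps * grad_U(q)
    return q, p

def ghmc_step(q, p, U, grad_U, eps, n_steps, phi, rng):
    # Partial momentum refreshment: mix the old momentum with fresh Gaussian
    # noise; phi = pi/2 recovers the full resampling of standard HMC.
    p = np.cos(phi) * p + np.sin(phi) * rng.standard_normal(p.shape)
    # Molecular-dynamics proposal followed by a Metropolis test on
    # H(q, p) = U(q) + |p|^2 / 2.
    q_new, p_new = leapfrog(q, p, grad_U, eps, n_steps)
    dH = (U(q_new) + 0.5 * (p_new @ p_new)) - (U(q) + 0.5 * (p @ p))
    if np.log(rng.uniform()) < -dH:
        return q_new, p_new       # accept
    return q, -p                  # reject: keep the state, flip the momentum

# Usage: sample a 2-D standard Gaussian, U(q) = |q|^2 / 2.
rng = np.random.default_rng(0)
U = lambda q: 0.5 * (q @ q)
grad_U = lambda q: q
q, p = np.zeros(2), rng.standard_normal(2)
samples = []
for _ in range(5000):
    q, p = ghmc_step(q, p, U, grad_U, eps=0.2, n_steps=10, phi=0.3, rng=rng)
    samples.append(q)
samples = np.array(samples)
print(samples.mean(axis=0), samples.var(axis=0))   # both near (0, 0) and (1, 1)

The angle phi controls how much dynamic information is retained between steps (phi = pi/2 gives standard HMC, small phi stays close to continuous dynamics); GSHMC replaces the true energy in dH with a modified (shadow) energy that the symplectic integrator preserves to high order, which is what drives the rejection rate down.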

Relevance: 20.00%

Publisher:

Abstract:

As low-carbon technologies become more pervasive, distribution network operators are looking to support the expected changes in demand on low-voltage networks through the smarter control of storage devices. Accurate forecasts of demand at the single-household level, or for small aggregations of households, can improve the peak demand reduction brought about through such devices by helping to plan the appropriate charging and discharging cycles. However, before such methods can be developed, validation measures are required which can assess the accuracy and usefulness of forecasts of volatile and noisy household-level demand. In this paper we introduce a new forecast verification error measure that reduces the so-called “double penalty” effect, incurred by forecasts whose features are displaced in space or time, compared to traditional point-wise metrics such as Mean Absolute Error and p-norms in general. The measure that we propose is based on finding a restricted permutation of the original forecast that minimises the point-wise error according to a given metric. We illustrate the advantages of our error measure using half-hourly domestic household electrical energy usage data recorded by smart meters, and discuss the effect of the permutation restriction.
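A minimal sketch of the idea (the function name, window parameter w and the use of a linear assignment solver are illustrative assumptions, not the paper's implementation): within a displacement window of w time steps, find the one-to-one matching of forecast values to observation slots that minimises the absolute error, then average:

import numpy as np
from scipy.optimize import linear_sum_assignment

def adjusted_mae(forecast, actual, w=2):
    # Permutation-adjusted MAE: each forecast value may be matched to an
    # observation up to w time steps away; the best such matching is found
    # with a linear assignment solver.  Entries outside the window get a
    # large finite cost so they are never chosen (the identity matching is
    # always feasible and cheaper).
    forecast = np.asarray(forecast, dtype=float)
    actual = np.asarray(actual, dtype=float)
    n = len(forecast)
    big = 1e9 * (1.0 + np.abs(forecast).sum() + np.abs(actual).sum())
    cost = np.full((n, n), big)
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w + 1)
        cost[i, lo:hi] = np.abs(forecast[i] - actual[lo:hi])
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

# Example: a peak forecast one step late incurs a double penalty under plain
# MAE but is matched correctly by the adjusted measure.
actual = np.array([0, 0, 5, 0, 0, 0], dtype=float)
forecast = np.array([0, 0, 0, 5, 0, 0], dtype=float)
print(np.abs(forecast - actual).mean())      # plain MAE, about 1.67
print(adjusted_mae(forecast, actual, w=1))   # adjusted measure, 0.0

A plain point-wise MAE penalises the displaced peak twice (once where it was forecast and once where it occurred), whereas the restricted-permutation measure matches the peak to its observed position; widening w relaxes the restriction and eventually stops penalising displacement at all, which is the effect of the permutation restriction discussed in the abstract.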

Relevance: 20.00%

Publisher:

Abstract:

Serial sampling and stable isotope analysis performed along the growth axis of vertebrate tooth enamel record differences attributed to seasonal variation in diet, climate or animal movement. Because several months are required to obtain mature enamel in large mammals, modifications in the isotopic composition of environmental parameters are not instantaneously recorded, and stable isotope analysis of tooth enamel returns a time-averaged signal attenuated in its amplitude relative to the input signal. For convenience, stable isotope profiles are usually determined on the side of the tooth where the enamel is thickest. Here we investigate the possibility of improving the time resolution by targeting the side of the tooth where the enamel is thinnest. Observation of developing third molars (M3) in sheep shows that the tooth growth rate is not constant but decreases exponentially, while the angle between the first layer of enamel deposited and the enamel–dentine junction increases as a tooth approaches its maximal length. We also noted differences in the thickness and geometry of enamel growth between the mesial side (i.e., the side facing the M2) and the buccal side (i.e., the side facing the cheek) of the M3. Carbon and oxygen isotope variations were measured along the M3 teeth of eight sheep raised under controlled conditions. Intra-tooth variability was systematically larger along the mesial side, and the difference in amplitude between the two sides was proportional to the time of exposure to the input signal. Although attenuated, the mesial side records variations in the environmental signal more faithfully than the buccal side. This approach can be adapted to other mammals whose teeth show lateral variation in enamel thickness and could potentially be used as an internal check for diagenesis.

Relevance: 20.00%

Publisher:

Abstract:

Background: Expression microarrays are increasingly used to obtain large-scale transcriptomic information on a wide range of biological samples. Nevertheless, there is still much debate on the best ways to process data, to design experiments and to analyse the output. Furthermore, many of the more sophisticated mathematical approaches to data analysis in the literature remain inaccessible to much of the biological research community. In this study we examine ways of extracting and analysing a large data set obtained using the Agilent long oligonucleotide transcriptomics platform, applied to a set of human macrophage and dendritic cell samples. Results: We describe and validate a series of data extraction, transformation and normalisation steps which are implemented via a new R function. Analysis of replicate normalised reference data demonstrates that intra-array variability is small (only around 2% of the mean log signal), while inter-array variability from replicate array measurements has a standard deviation (SD) of around 0.5 log2 units (6% of the mean). The common practice of working with ratios of Cy5/Cy3 signal offers little further improvement in terms of reducing error. Comparison to expression data obtained using Arabidopsis samples demonstrates that the large number of genes in each sample showing a low level of transcription reflects the real complexity of the cellular transcriptome. Multidimensional scaling is used to show that the processed data identify an underlying structure which reflects some of the key biological variables that define the data set. This structure is robust, allowing reliable comparison of samples collected over a number of years and by a variety of operators. Conclusions: This study outlines a robust and easily implemented pipeline for extracting, transforming, normalising and visualising transcriptomic array data from the Agilent expression platform. The analysis is used to obtain quantitative estimates of the SD arising from experimental (non-biological) intra- and inter-array variability, and a lower threshold for determining whether an individual gene is expressed. The study provides a reliable basis for further, more extensive studies of the systems biology of eukaryotic cells.

Relevance: 20.00%

Publisher:

Abstract:

The optimal utilisation of hyper-spectral satellite observations in numerical weather prediction is often inhibited by incorrectly assuming independent interchannel observation errors. However, in order to represent these observation-error covariance structures, an accurate knowledge of the true variances and correlations is needed. This structure is likely to vary with observation type and assimilation system. The work in this article presents the initial results for the estimation of IASI interchannel observation-error correlations when the data are processed in the Met Office one-dimensional (1D-Var) and four-dimensional (4D-Var) variational assimilation systems. The method used to calculate the observation errors is a post-analysis diagnostic which utilises the background and analysis departures from the two systems. The results show significant differences in the source and structure of the observation errors when processed in the two different assimilation systems, but also highlight some common features. When the observations are processed in 1D-Var, the diagnosed error variances are approximately half the size of the error variances used in the current operational system and are very close in size to the instrument noise, suggesting that this is the main source of error. The errors contain no consistent correlations, with the exception of a handful of spectrally close channels. When the observations are processed in 4D-Var, we again find that the observation errors are being overestimated operationally, but the overestimation is significantly larger for many channels. In contrast to 1D-Var, the diagnosed error variances are often larger than the instrument noise in 4D-Var. It is postulated that horizontal errors of representation, not seen in 1D-Var, are a significant contributor to the overall error here. Finally, observation errors diagnosed from 4D-Var are found to contain strong, consistent correlation structures for channels sensitive to water vapour and surface properties.
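The abstract does not name the diagnostic, but a standard departure-based estimate of this kind (the Desroziers et al. diagnostic, assumed here for illustration) combines the background and analysis departures as follows:

% Observation departures for observation vector y, background x_b, analysis x_a:
d^{\,o}_{b} = y - H(x_b), \qquad d^{\,o}_{a} = y - H(x_a)

% Departure-based estimate of the observation-error covariance:
\mathbf{R} \;\approx\; \mathbb{E}\bigl[\, d^{\,o}_{a} \, (d^{\,o}_{b})^{\mathsf T} \,\bigr]

with the expectation approximated by averaging over many assimilation cycles; the diagonal of the resulting matrix gives the diagnosed error variances compared against the instrument noise, and the off-diagonal elements give the interchannel correlations.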