123 results for k-Error linear complexity
Abstract:
We show that the four-dimensional variational data assimilation method (4DVar) can be interpreted as a form of Tikhonov regularization, a very familiar method for solving ill-posed inverse problems. It is known from image restoration problems that L1-norm penalty regularization recovers sharp edges in an image more accurately than Tikhonov, or L2-norm, penalty regularization. We apply this idea from stationary inverse problems to 4DVar, a dynamical inverse problem, and give examples for an L1-norm penalty approach and a mixed total variation (TV) L1–L2-norm penalty approach. For problems with model error where sharp fronts are present and the background and observation error covariances are known, the mixed TV L1–L2-norm penalty performs better than either the L1-norm method or the strong constraint 4DVar (L2-norm) method. A strength of the mixed TV L1–L2-norm regularization is that, in the case where a simplified form of the background error covariance matrix is used, it produces a much more accurate analysis than 4DVar. The method thus has the potential in numerical weather prediction to overcome operational problems with poorly tuned background error covariance matrices.
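For orientation, a schematic of the penalties being compared (notation assumed here, not taken from the paper): writing δx = x − x_b for the departure from the background x_b, strong constraint 4DVar penalises δx in a B-weighted L2 norm, while the mixed approach adds an L1 term on the gradient of δx.

```latex
% Schematic cost functions (assumed notation): B is the background error
% covariance, y_i the observations at time t_i, R_i their error covariances,
% H_i the observation operator composed with the model propagated to t_i.
J_{L2}(x) = \tfrac{1}{2}\,\delta x^{T} B^{-1}\,\delta x
          + \tfrac{1}{2}\sum_{i}\big(y_i - H_i(x)\big)^{T} R_i^{-1}\big(y_i - H_i(x)\big)
J_{TV}(x) = J_{L2}(x) + \mu\,\lVert D\,\delta x\rVert_{1}
% D is a discrete gradient (total variation) operator and \mu > 0 a
% penalty weight.
```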
Abstract:
We investigate the error dynamics for cycled data assimilation systems, in which the inverse problem of state determination is solved at times t_k, k = 1, 2, 3, ..., with a first guess given by the state propagated via a dynamical system model from time t_{k−1} to time t_k. In particular, for nonlinear dynamical systems that are Lipschitz continuous with respect to their initial states, we provide deterministic estimates for the development over time of the error ||e_k|| := ||x_k^(a) − x_k^(t)|| between the estimated state x^(a) and the true state x^(t). Clearly, an observation error of size δ > 0 leads to an estimation error in every assimilation step. These errors can accumulate if they are not (a) controlled in the reconstruction and (b) damped by the dynamical system under consideration. A data assimilation method is called stable if the error in the estimate is bounded in time by some constant C. The key task of this work is to provide estimates for the error ||e_k|| depending on the size δ of the observation error, the reconstruction operator R_α, the observation operator H, and the Lipschitz constants K^(1) and K^(2) controlling the damping behaviour of the dynamics on the lower and higher modes. We show that systems can be stabilized by choosing α sufficiently small, but the bound C will then depend on the data error δ in the form c||R_α||δ for some constant c. Since ||R_α|| → ∞ as α → 0, this constant might be large. Numerical examples of this behaviour in the nonlinear case are provided using a (low-dimensional) Lorenz '63 system.
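One plausible schematic form of the recursive estimate described (assumed notation, not quoted from the paper), with M denoting the model propagator from t_{k−1} to t_k:

```latex
\lVert e_k \rVert \;\le\; \Lambda\,\lVert e_{k-1} \rVert \;+\; c\,\lVert R_\alpha \rVert\,\delta,
\qquad \Lambda \sim \lVert (I - R_\alpha H)\,M \rVert,
% stability (\lVert e_k \rVert \le C) follows when \Lambda < 1 uniformly in k,
% with C of the order of c\,\lVert R_\alpha \rVert\,\delta / (1 - \Lambda).
```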
Abstract:
A prediction mechanism is necessary in human visual motion tracking to compensate for the delay of the sensory-motor system. In a previous study, “proactive control” was discussed as one example of the predictive function of human beings, in which the motion of the hands preceded the virtual moving target in visual tracking experiments. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently-visual tracking experiment in which a circular orbit was segmented into target-visible and target-invisible regions. The main results of this research were as follows. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm in the brain, obtained from environmental stimuli, was shortened by more than 10%. This shortening of the period of the rhythm in the brain accelerates the hand motion as soon as the visual information is cut off, and causes the hand motion to precede the target motion. Although the precedence of the hand in the blind region is reset by the environmental information when the target enters the visible region, the hand motion precedes the target on average when the predictive mechanism dominates the error-corrective mechanism.
Abstract:
Iatrogenic errors and patient safety in clinical processes are an increasing concern. The quality of process information in hardcopy or electronic form can heavily influence clinical behaviour and decision-making errors. Little work has been undertaken to assess the safety impact of clinical process planning documents guiding clinical actions and decisions. This paper investigates the clinical process documents used in elective surgery and their impact on latent and active clinical errors. Eight clinicians from a large health trust underwent extensive semi-structured interviews to understand their use of clinical documents and their perceived impact on errors and patient safety. Samples of the key types of document used were analysed. Theories of latent organisational and active errors from the literature were combined with the EDA semiotics model of behaviour and decision making to propose the EDA Error Model. This model enabled us to identify perceptual, evaluation, knowledge and action error types and approaches to reducing their causes. The EDA Error Model was then used to analyse sample documents and identify error sources and controls. Types of knowledge artefact structures used in the documents were identified and assessed in terms of safety impact. This approach was combined with analysis of the questionnaire findings using existing error knowledge from the literature. The results identified a number of document and knowledge artefact issues that give rise to latent and active errors, as well as issues concerning medical culture and teamwork, together with recommendations for further work.
Abstract:
In cooperative communication networks, owing to the nodes' arbitrary geographical locations and individual oscillators, the system is fundamentally asynchronous. This can damage some of the key properties of space-time codes and lead to substantial performance degradation. In this paper, we study the design of linear dispersion codes (LDCs) for such asynchronous cooperative communication networks. Firstly, the concept of conventional LDCs is extended to a delay-tolerant version and new design criteria are discussed. Then we propose a new design method to yield delay-tolerant LDCs that reach the optimal Jensen's upper bound on ergodic capacity as well as the minimum average pairwise error probability. The proposed design employs a stochastic gradient algorithm to approach a local optimum, and is further improved by simulated-annealing-type optimization to increase the likelihood of reaching the global optimum. The proposed method allows for a flexible number of nodes, receive antennas and modulated symbols, and a flexible codeword length. Simulation results confirm the performance of the newly proposed delay-tolerant LDCs.
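As a generic illustration of the simulated-annealing-type refinement mentioned above (a sketch only, not the authors' design procedure; the objective and perturbation below are toy placeholders):

```python
import math
import random

def anneal(f, x0, perturb, t0=1.0, cooling=0.95, iters=2000):
    """Generic simulated annealing for minimising f: uphill moves are
    accepted with probability exp(-delta/T), letting the search escape
    local optima that a pure (stochastic) gradient descent would keep."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = perturb(x)
        fc = f(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# Toy usage: minimise a one-dimensional multimodal function.
f = lambda x: math.sin(5 * x) + 0.1 * x * x
perturb = lambda x: x + random.gauss(0.0, 0.3)
print(anneal(f, x0=3.0, perturb=perturb))
```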
Abstract:
We study a two-way relay network (TWRN), where distributed space-time codes are constructed across multiple relay terminals in an amplify-and-forward mode. Each relay transmits a scaled linear combination of its received symbols and their conjugates, with the scaling factor chosen based on automatic gain control. We consider equal power allocation (EPA) across the relays, as well as the optimal power allocation (OPA) strategy given access to instantaneous channel state information (CSI). For EPA, we derive an upper bound on the pairwise error probability (PEP), from which we prove that full diversity is achieved in TWRNs. This result is in contrast to one-way relay networks, where a maximum diversity order of only unity can be obtained. When instantaneous CSI is available at the relays, we show that the OPA which minimizes the conditional PEP of the worse link can be cast as a generalized linear fractional program, which can be solved efficiently using a Dinkelbach-type procedure. We also prove that, if the sum-power of the relay terminals is constrained, then the OPA will activate at most two relays.
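For reference, a Dinkelbach-type procedure solves max N(x)/D(x) with D > 0 by iterating the parametric problem max N(x) − λ·D(x); a minimal numerical sketch, with a placeholder grid search standing in for the paper's OPA subproblem:

```python
import numpy as np

def dinkelbach(N, D, candidates, tol=1e-9, max_iter=100):
    """Maximise the ratio N(x)/D(x) over a candidate set: at each step
    solve max_x N(x) - lam * D(x), then update lam to the achieved ratio.
    At the optimum the parametric maximum is zero."""
    lam = 0.0
    for _ in range(max_iter):
        vals = np.array([N(x) - lam * D(x) for x in candidates])
        i = int(np.argmax(vals))
        x = candidates[i]
        if vals[i] < tol:   # F(lam) ~ 0  =>  lam is the optimal ratio
            return x, lam
        lam = N(x) / D(x)
    return x, lam

# Toy usage: maximise (1 + 2x) / (1 + x^2) over a grid on [0, 5].
xs = np.linspace(0.0, 5.0, 10001)
print(dinkelbach(lambda x: 1 + 2 * x, lambda x: 1 + x * x, xs))
```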
Abstract:
The auditory brainstem response (ABR) is of fundamental importance to the investigation of auditory system behavior, though its interpretation has a subjective nature because of the manual process employed in its study and the clinical experience required for its analysis. When analyzing the ABR, clinicians are often interested in the identification of ABR signal components referred to as Jewett waves. In particular, the detection and study of the time when these waves occur (i.e., the wave latency) is a practical tool for the diagnosis of disorders affecting the auditory system. In this context, the aim of this research is to compare ABR manual/visual analyses provided by different examiners. Methods: The ABR data were collected from 10 normal-hearing subjects (5 men and 5 women, aged from 20 to 52 years). A total of 160 data samples were analyzed and a pairwise comparison between four distinct examiners was executed. We carried out a statistical study aiming to identify significant differences between the assessments provided by the examiners, using linear regression in conjunction with the bootstrap as a method for evaluating the relation between the responses given by the examiners. Results: The analysis suggests agreement among examiners, but reveals differences between assessments of the variability of the waves. We quantified the magnitude of the obtained wave latency differences: 18% of the investigated waves presented substantial differences (large and moderate), and of these 3.79% were considered not acceptable for clinical practice. Conclusions: Our results characterize the variability of the manual analysis of ABR data and the necessity of establishing unified standards and protocols for the analysis of these data. These results may also contribute to the validation and development of automatic systems that are employed in the early diagnosis of hearing loss.
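A minimal sketch of the linear-regression-with-bootstrap comparison described (the latencies below are synthetic placeholders, not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for wave latencies (ms) scored by two examiners.
examiner_a = rng.normal(5.6, 0.25, size=160)
examiner_b = examiner_a + rng.normal(0.0, 0.08, size=160)

def bootstrap_slope(x, y, n_boot=5000):
    """Bootstrap the slope of the regression y ~ x by resampling pairs;
    close agreement between examiners puts the interval near 1."""
    n, slopes = len(x), np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)        # resample with replacement
        slopes[b] = np.polyfit(x[idx], y[idx], 1)[0]
    return np.percentile(slopes, [2.5, 97.5])

print("95% CI for the slope:", bootstrap_slope(examiner_a, examiner_b))
```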
First order k-th moment finite element analysis of nonlinear operator equations with stochastic data
Abstract:
We develop and analyze a class of efficient Galerkin approximation methods for uncertainty quantification of nonlinear operator equations. The algorithms are based on sparse Galerkin discretizations of tensorized linearizations at nominal parameters. Specifically, we consider abstract, nonlinear, parametric operator equations J(α, u) = 0 for random input α(ω) with almost sure realizations in a neighborhood of a nominal input parameter α₀. Under some structural assumptions on the parameter dependence, we prove existence and uniqueness of a random solution u(ω) = S(α(ω)). We derive a multilinear, tensorized operator equation for the deterministic computation of k-th order statistical moments of the random solution's fluctuations u(ω) − S(α₀). We introduce and analyse sparse tensor Galerkin discretization schemes for the efficient, deterministic computation of the k-th statistical moment equation. We prove a shift theorem for the k-point correlation equation in anisotropic smoothness scales and deduce that sparse tensor Galerkin discretizations of this equation converge in accuracy vs. complexity which equals, up to logarithmic terms, that of the Galerkin discretization of a single instance of the mean field problem. We illustrate the abstract theory for nonstationary diffusion problems in random domains.
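In the notation of the abstract, the object computed deterministically is the k-th moment of the solution's fluctuation; schematically:

```latex
% k-th statistical moment of the fluctuation (schematic):
\mathcal{M}^{k} \;=\; \mathbb{E}\!\left[\big(u(\omega) - S(\alpha_0)\big)^{\otimes k}\right],
% which satisfies a deterministic k-point correlation equation obtained by
% tensorizing the linearization of J(\alpha, u) = 0 at the nominal input
% \alpha_0, and which is then discretized by sparse tensor Galerkin methods.
```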
Abstract:
Criteria are proposed for evaluating sea surface temperature (SST) retrieved from satellite infra-red imagery: bias should be small on regional scales; sensitivity to atmospheric humidity should be small; and sensitivity of retrieved SST to surface temperature should be close to 1 K K⁻¹. Their application is illustrated for non-linear sea surface temperature (NLSST) estimates. 233,929 observations from the Advanced Very High Resolution Radiometer (AVHRR) on Metop-A are matched with in situ data and numerical weather prediction (NWP) fields. NLSST coefficients derived from these matches have regional biases from −0.5 to +0.3 K. Using radiative transfer modelling we find that a 10% increase in humidity alone can change the retrieved NLSST by between −0.5 K and +0.1 K. A 1 K increase in SST changes the NLSST by <0.5 K in extreme cases. The validity of estimates of sensitivity by radiative transfer modelling is confirmed empirically.
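For reference, NLSST estimators typically take the following split-window form (a standard form given here for orientation; the paper's exact formulation and coefficients may differ):

```latex
\mathrm{SST} \;=\; a_0 + a_1\,T_{11} + a_2\,(T_{11} - T_{12})\,T_g
             \;+\; a_3\,(T_{11} - T_{12})\,(\sec\theta - 1)
% T_{11}, T_{12}: brightness temperatures in the 11 and 12 \mu m channels,
% T_g: first-guess SST, \theta: satellite zenith angle, a_0..a_3:
% coefficients regressed on matchups. The T_g factor is what makes the
% estimator "non-linear" and sets its sensitivity to true SST changes.
```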
Abstract:
Most of the operational Sea Surface Temperature (SST) products derived from satellite infrared radiometry use multi-spectral algorithms. They show, in general, reasonable performance, with root mean square (RMS) residuals around 0.5 K when validated against buoy measurements, but have limitations, particularly a component of the retrieval error that relates to such algorithms' limited ability to cope with the full variability of atmospheric absorption and emission. We propose to use forecast atmospheric profiles and a radiative transfer model to simulate the algorithmic errors of multi-spectral algorithms. In the practical case of SST derived from the Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG), we demonstrate that simulated algorithmic errors do explain a significant component of the actual errors observed for the non-linear (NL) split-window algorithm in operational use at the Centre de Météorologie Spatiale (CMS). The simulated errors, used as correction terms, significantly reduce the regional biases of the NL algorithm as well as the standard deviation of the differences with drifting buoy measurements. The availability of atmospheric profiles associated with observed satellite-buoy differences allows us to analyze the origins of the main algorithmic errors observed in the SEVIRI field of view: a negative bias in the inter-tropical zone, and a mid-latitude positive bias. We demonstrate how these errors are explained by the sensitivity of observed brightness temperatures to the vertical distribution of water vapour, propagated through the SST retrieval algorithm.
Abstract:
Optimal estimation (OE) improves sea surface temperature (SST) estimated from satellite infrared imagery in the “split-window”, in comparison to SST retrieved using the usual multi-channel (MCSST) or non-linear (NLSST) estimators. This is demonstrated using three months of observations of the Advanced Very High Resolution Radiometer (AVHRR) on the first Meteorological Operational satellite (Metop-A), matched in time and space to drifter SSTs collected on the global telecommunications system. There are 32,175 matches. The prior for the OE is forecast atmospheric fields from the Météo-France global numerical weather prediction system (ARPEGE), the forward model is RTTOV8.7, and a reduced state vector comprising SST and total column water vapour (TCWV) is used. Operational NLSST coefficients give mean and standard deviation (SD) of the difference between satellite and drifter SSTs of 0.00 and 0.72 K. The “best possible” NLSST and MCSST coefficients, empirically regressed on the data themselves, give zero mean difference and SDs of 0.66 K and 0.73 K respectively. Significant contributions to the global SD arise from regional systematic errors (biases) of several tenths of a kelvin in the NLSST. With no bias corrections to either prior fields or forward model, the SSTs retrieved by OE minus drifter SSTs have mean and SD of −0.16 and 0.49 K respectively. The reduction in SD below the “best possible” regression results shows that OE deals with structural limitations of the NLSST and MCSST algorithms. Using simple empirical bias corrections to improve the OE, retrieved minus drifter SSTs are obtained with mean and SD of −0.06 and 0.44 K respectively. Regional biases are greatly reduced, such that the absolute bias is less than 0.1 K in 61% of 10°-latitude by 30°-longitude cells. OE also allows a statistic of the agreement between modelled and measured brightness temperatures to be calculated. We show that this measure is more efficient than the current system of confidence levels at identifying reliable retrievals, and that the best 75% of satellite SSTs by this measure have negligible bias and retrieval error of order 0.25 K.
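The OE retrieval referred to has, in the usual linearised (Rodgers-style) form, the following update for the reduced state z = (SST, TCWV); notation assumed here:

```latex
\hat z \;=\; z_a \;+\;\big(K^{T} S_\epsilon^{-1} K + S_a^{-1}\big)^{-1}
             K^{T} S_\epsilon^{-1}\,\big(y - F(z_a)\big)
% z_a: prior state from the NWP fields, S_a: its error covariance,
% y: observed brightness temperatures, F: the forward model (RTTOV),
% K = \partial F / \partial z, S_\epsilon: observation error covariance.
% The fit of F(\hat z) to y also yields the agreement statistic used to
% screen reliable retrievals.
```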
Abstract:
Two recent works have adapted the Kalman–Bucy filter into an ensemble setting. In the first formulation, the ensemble of perturbations is updated by the solution of an ordinary differential equation (ODE) in pseudo-time, while the mean is updated as in the standard Kalman filter. In the second formulation, the full ensemble is updated in the analysis step as the solution of a single set of ODEs in pseudo-time. Neither requires matrix inversions, except for the frequently diagonal observation error covariance. We analyse the behaviour of the ODEs involved in these formulations. We demonstrate that they stiffen for large magnitudes of the ratio of background error to observational error variance, and that using the integration scheme proposed in both formulations can lead to failure. A numerical integration scheme that is both stable and computationally inexpensive is proposed. We develop transform-based alternatives for these Bucy-type approaches so that the integrations are computed in ensemble space, where the variables are weights (of dimension equal to the ensemble size) rather than model variables. Finally, the performance of our ensemble transform Kalman–Bucy implementations is evaluated using three models: the 3-variable Lorenz 1963 model, the 40-variable Lorenz 1996 model, and a medium-complexity atmospheric general circulation model known as SPEEDY. The results from all three models are encouraging and warrant further exploration of these assimilation techniques.
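A scalar illustration of the stiffness issue (a simplified assumed form: in pseudo-time the analysis covariance obeys dP/ds = −P Hᵀ R⁻¹ H P on s ∈ [0, 1], whose endpoint is the Kalman analysis covariance; with H = 1 this reduces to the ODE below):

```python
import numpy as np

def analysis_variance_ode(p0, r, n_steps):
    """Integrate dp/ds = -p**2 / r over s in [0, 1] with explicit Euler.
    The exact endpoint is p(1) = 1 / (1/p0 + 1/r), the Kalman analysis
    variance for a scalar state observed directly."""
    p, ds = np.float64(p0), 1.0 / n_steps
    for _ in range(n_steps):
        p = p - ds * p * p / r
        if not np.isfinite(p) or abs(p) > 1e12:
            break  # the integration has blown up
    return p

p0, r = 100.0, 1.0   # large background-to-observation variance ratio
print("exact:", 1.0 / (1.0 / p0 + 1.0 / r))
for n in (10, 100, 1000):
    print(n, "steps:", analysis_variance_ode(p0, r, n))
# Coarse explicit steps diverge (or collapse to zero); only fine steps
# approach the exact value, illustrating the stiffness for large p0/r.
```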
Abstract:
Background: Expression microarrays are increasingly used to obtain large-scale transcriptomic information on a wide range of biological samples. Nevertheless, there is still much debate on the best ways to process data, to design experiments and to analyse the output. Furthermore, many of the more sophisticated mathematical approaches to data analysis in the literature remain inaccessible to much of the biological research community. In this study we examine ways of extracting and analysing a large data set obtained using the Agilent long oligonucleotide transcriptomics platform, applied to a set of human macrophage and dendritic cell samples. Results: We describe and validate a series of data extraction, transformation and normalisation steps which are implemented via a new R function. Analysis of replicate normalised reference data demonstrates that intra-array variability is small (only around 2% of the mean log signal), while inter-array variability from replicate array measurements has a standard deviation (SD) of around 0.5 log2 units (6% of the mean). The common practice of working with ratios of Cy5/Cy3 signal offers little further improvement in terms of reducing error. Comparison to expression data obtained using Arabidopsis samples demonstrates that the large number of genes in each sample showing a low level of transcription reflects the real complexity of the cellular transcriptome. Multidimensional scaling is used to show that the processed data identify an underlying structure which reflects some of the key biological variables defining the data set. This structure is robust, allowing reliable comparison of samples collected over a number of years and by a variety of operators. Conclusions: This study outlines a robust and easily implemented pipeline for extracting, transforming, normalising and visualising transcriptomic array data from the Agilent expression platform. The analysis is used to obtain quantitative estimates of the SD arising from experimental (non-biological) intra- and inter-array variability, and a lower threshold for determining whether an individual gene is expressed. The study provides a reliable basis for further, more extensive studies of the systems biology of eukaryotic cells.
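A toy sketch of the replicate-based variability estimate described (synthetic log2 intensities; the study's actual pipeline is an R function and is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)
n_genes, n_arrays = 5000, 4

# Synthetic log2 signals: a fixed gene-level profile plus inter-array noise
# with SD 0.5 log2 units (the magnitude reported in the abstract).
profile = rng.uniform(6.0, 14.0, n_genes)
arrays = profile + rng.normal(0.0, 0.5, (n_arrays, n_genes))

# Estimate the inter-array SD per gene from the replicates, then summarise.
sd_per_gene = arrays.std(axis=0, ddof=1)
print("median inter-array SD (log2 units):", np.median(sd_per_gene))
```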
Abstract:
Discrete Fourier transform spread OFDM (DFTS-OFDM) based single-carrier frequency division multiple access (SC-FDMA) has been widely adopted owing to the lower peak-to-average power ratio (PAPR) of its transmit signals compared with OFDM. However, offset modulation, which has a lower PAPR than general modulation, cannot be directly applied to the existing SC-FDMA. Moreover, when pulse-shaping filters are employed to further reduce the envelope fluctuation of SC-FDMA transmit signals, the spectral efficiency degrades as well. In order to overcome such limitations of conventional SC-FDMA, this paper investigates for the first time cyclic-prefixed OQAM-OFDM (CP-OQAM-OFDM) based SC-FDMA transmission with adjustable user bandwidth and space-time coding. Firstly, we propose CP-OQAM-OFDM transmission with unequally-spaced subbands. We then apply it to SC-FDMA transmission and propose an SC-FDMA scheme with the following features: (a) the transmit signal of each user is offset-modulated single-carrier with frequency-domain pulse-shaping; (b) the bandwidth of each user is adjustable; (c) the spectral efficiency does not decrease with increasing roll-off factors. To combat both inter-symbol interference and multiple-access interference in frequency-selective fading channels, a joint linear minimum mean square error frequency-domain equalization using a priori information is developed with low complexity. Subsequently, we construct space-time codes for the proposed SC-FDMA. Simulation results confirm the effectiveness and low complexity of the proposed CP-OQAM-OFDM scheme.
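For orientation, a minimal sketch of the conventional DFTS-OFDM (SC-FDMA) transmit chain that the proposed scheme generalises (localised subcarrier mapping and illustrative parameter values assumed; the CP-OQAM-OFDM extension itself is not reproduced):

```python
import numpy as np

def scfdma_transmit(symbols, n_fft=64, start=8, cp_len=16):
    """Conventional DFTS-OFDM: DFT-precode M symbols, map them onto M
    contiguous subcarriers of an N-point IFFT, and prepend a cyclic prefix.
    The DFT spreading is what keeps the PAPR below that of plain OFDM."""
    m = len(symbols)
    freq = np.fft.fft(symbols) / np.sqrt(m)        # DFT spreading
    grid = np.zeros(n_fft, dtype=complex)
    grid[start:start + m] = freq                   # localised subcarrier mapping
    time = np.fft.ifft(grid) * np.sqrt(n_fft)      # OFDM modulation
    return np.concatenate([time[-cp_len:], time])  # cyclic prefix

rng = np.random.default_rng(1)
qpsk = (rng.choice([-1, 1], 16) + 1j * rng.choice([-1, 1], 16)) / np.sqrt(2)
tx = scfdma_transmit(qpsk)
print("PAPR (linear):", np.max(np.abs(tx) ** 2) / np.mean(np.abs(tx) ** 2))
```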
Abstract:
The optimal utilisation of hyper-spectral satellite observations in numerical weather prediction is often inhibited by incorrectly assuming independent interchannel observation errors. However, in order to represent these observation-error covariance structures, an accurate knowledge of the true variances and correlations is needed. This structure is likely to vary with observation type and assimilation system. The work in this article presents the initial results for the estimation of IASI interchannel observation-error correlations when the data are processed in the Met Office one-dimensional (1D-Var) and four-dimensional (4D-Var) variational assimilation systems. The method used to calculate the observation errors is a post-analysis diagnostic which utilises the background and analysis departures from the two systems. The results show significant differences in the source and structure of the observation errors when processed in the two different assimilation systems, but also highlight some common features. When the observations are processed in 1D-Var, the diagnosed error variances are approximately half the size of the error variances used in the current operational system and are very close in size to the instrument noise, suggesting that this is the main source of error. The errors contain no consistent correlations, with the exception of a handful of spectrally close channels. When the observations are processed in 4D-Var, we again find that the observation errors are being overestimated operationally, but the overestimation is significantly larger for many channels. In contrast to 1D-Var, the diagnosed error variances are often larger than the instrument noise in 4D-Var. It is postulated that horizontal errors of representation, not seen in 1D-Var, are a significant contributor to the overall error here. Finally, observation errors diagnosed from 4D-Var are found to contain strong, consistent correlation structures for channels sensitive to water vapour and surface properties.
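The post-analysis diagnostic described is consistent with the widely used background/analysis-departure statistics of Desroziers et al.; a self-contained sketch with synthetic departures (H = I and Gaussian errors assumed; none of the numbers come from the article):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
B = np.array([[0.5, 0.2], [0.2, 0.5]])   # background error covariance
R = np.array([[1.0, 0.6], [0.6, 1.0]])   # "true" observation error covariance

e_b = rng.multivariate_normal([0, 0], B, n)   # background errors
e_o = rng.multivariate_normal([0, 0], R, n)   # observation errors
d_b = e_o - e_b                               # background departures y - H(x_b)
K = B @ np.linalg.inv(B + R)                  # Kalman gain (H = I)
d_a = d_b @ (np.eye(2) - K).T                 # analysis departures y - H(x_a)

# Departure-based statistic: E[d_a d_b^T] recovers the observation error
# covariance R, including its interchannel correlation.
print(d_a.T @ d_b / n)
```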