924 results for error model
Abstract:
A new physics-based technique for correcting inhomogeneities present in sub-daily temperature records is proposed. The approach accounts for changes in the sensor-shield characteristics that affect the energy balance, which in turn depends on ambient weather conditions (radiation, wind). An empirical model is formulated that reflects the main atmospheric processes and can be used in the correction step of a homogenization procedure. The model accounts for the short- and long-wave radiation fluxes (including a snow cover component for the albedo calculation) of a measurement system, such as a radiation shield; one part of the flux is further modulated by ventilation. The model requires only cloud cover and wind speed for each day, but detailed site-specific information is necessary. The final model has three free parameters, one of which is a constant offset; the three parameters can be determined, e.g., from the mean offsets at three observation times. The model is developed using the example of the change from the Wild screen to the Stevenson screen in the temperature record of Basel, Switzerland, in 1966. It is evaluated against parallel measurements of both systems during a sub-period at this location, which were discovered during the writing of this paper. The model can be used in the correction step of homogenization to distribute a known mean step size to every single measurement, thus providing a reasonable alternative correction procedure for high-resolution historical climate series. It also constitutes an error model, which may be applied, e.g., in data assimilation approaches.
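As a rough illustration of how a three-parameter correction of this kind could be structured, the sketch below combines a radiative term, partly damped by ventilation, with a constant offset. The functional form, parameter values, and albedo numbers are assumptions made for illustration only, not the published model.

    def correction(cloud_cover, wind, snow=False, a=0.8, b=0.5, c=-0.1):
        """Daily temperature correction (K) from cloud cover (0-1) and wind speed (m/s)."""
        albedo = 0.7 if snow else 0.2                      # snow cover raises the surface albedo
        radiation = (1.0 - cloud_cover) * (1.0 + albedo)   # crude proxy for the radiative load on the shield
        return a * radiation / (1.0 + b * wind) + c        # ventilation damps the flux; c is the constant offset

Fitting a, b and c to the mean offsets at three observation times, as the abstract suggests, would fix all three degrees of freedom.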
Abstract:
We derive the additive-multiplicative error model for microarray intensities, and describe two applications. For the detection of differentially expressed genes, we obtain a statistic whose variance is approximately independent of the mean intensity. For the post hoc calibration (normalization) of data with respect to experimental factors, we describe a method for parameter estimation.
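For reference, the additive-multiplicative error model is commonly written in the following form (standard notation assumed here rather than quoted from the abstract): for a measured intensity Y at true expression level \mu,

    Y = \alpha + \mu e^{\eta} + \varepsilon, \qquad \eta \sim N(0, \sigma_\eta^2), \quad \varepsilon \sim N(0, \sigma_\varepsilon^2),

so that

    \operatorname{Var}(Y) \approx \mu^2 e^{\sigma_\eta^2}\left(e^{\sigma_\eta^2} - 1\right) + \sigma_\varepsilon^2.

A generalized-log (arsinh) transformation of Y then has approximately constant variance across the whole intensity range, which is what makes a differential-expression statistic built on it approximately independent of the mean intensity.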
Abstract:
Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to biased estimates. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the proxy response alone. This methodology is purpose-oriented, as the error model is constructed directly for the quantity of interest rather than for the state of the system. Also, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model, assessing the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
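A minimal sketch of the proxy-to-exact mapping under simplifying assumptions: FPCA is approximated by ordinary PCA on curves discretized on a common grid, and the score-to-score map is a plain linear regression. Function names, the number of components, and the regression choice are illustrative, not taken from the paper.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    def fit_error_model(proxy_curves, exact_curves, n_comp=3):
        """proxy_curves, exact_curves: (n_realizations, n_times) arrays from the learning set."""
        pca_p = PCA(n_components=n_comp).fit(proxy_curves)
        pca_e = PCA(n_components=n_comp).fit(exact_curves)
        reg = LinearRegression().fit(pca_p.transform(proxy_curves),
                                     pca_e.transform(exact_curves))   # proxy scores -> exact scores
        return pca_p, pca_e, reg

    def predict_exact(proxy_curve, pca_p, pca_e, reg):
        """Predict the exact response of a new realization from its proxy response alone."""
        scores = reg.predict(pca_p.transform(proxy_curve.reshape(1, -1)))
        return pca_e.inverse_transform(scores).ravel()

The learning-set residuals of this regression also give the diagnostic mentioned above: if they are large, the proxy is not informative enough or the learning set is too small.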
Abstract:
Performing experiments on small-scale quantum computers is certainly a challenging endeavor. Many parameters need to be optimized to achieve high-fidelity operations. This can be done efficiently for operations acting on single qubits, as errors can be fully characterized. For multiqubit operations, though, this is no longer the case, as in the most general case, analyzing the effect of the operation on the system requires a full state tomography for which resources scale exponentially with the system size. Furthermore, in recent experiments, additional electronic levels beyond the two-level system encoding the qubit have been used to enhance the capabilities of quantum-information processors, which additionally increases the number of parameters that need to be controlled. For the optimization of the experimental system for a given task (e.g., a quantum algorithm), one has to find a satisfactory error model and also efficient observables to estimate the parameters of the model. In this manuscript, we demonstrate a method to optimize the encoding procedure for a small quantum error correction code in the presence of unknown but constant phase shifts. The method, which we implement here on a small-scale linear ion-trap quantum computer, is readily applicable to other AMO platforms for quantum-information processing.
Abstract:
Energy efficiency improvement has been a key objective of China’s long-term energy policy. In this paper, we derive single-factor technical energy efficiency (abbreviated as energy efficiency) in China from multi-factor efficiency estimated by means of a translog production function and a stochastic frontier model on the basis of panel data on 29 Chinese provinces over the period 2003–2011. We find that average energy efficiency has been increasing over the research period and that the provinces with the highest energy efficiency lie on the east coast and those with the lowest in the west, with an intermediate corridor in between. In the analysis of the determinants of energy efficiency by means of a spatial Durbin error model, factors both in a province itself and in its first-order neighboring provinces are considered. Per capita income in the province itself has a positive effect. Furthermore, foreign direct investment and population density, in the province itself and in neighboring provinces, have positive effects, whereas the share of state-owned enterprises in Gross Provincial Product, in the province itself and in neighboring provinces, has a negative effect. From the analysis it follows that inflow of foreign direct investment and reform of state-owned enterprises are important policy levers.
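In its usual textbook form (notation assumed here, not quoted from the paper), the spatial Durbin error model used for this kind of analysis can be written as

    y = X\beta + W X\theta + u, \qquad u = \lambda W u + \varepsilon,

where y is provincial energy efficiency, X contains the determinants in the province itself, W is the first-order spatial weights matrix so that WX carries the same determinants in neighboring provinces, and the disturbance u follows a spatial autoregressive process with innovation \varepsilon.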
Inherent errors in pollutant build-up estimation in considering urban land use as a lumped parameter
Abstract:
Stormwater quality modelling results are subject to uncertainty. The variability of input parameters is an important source of overall model error. An in-depth understanding of the variability associated with input parameters can provide knowledge of the uncertainty associated with these parameters and consequently assist in the uncertainty analysis of stormwater quality models and in decision making based on modelling outcomes. This paper discusses the outcomes of a research study undertaken to analyse the variability related to pollutant build-up parameters in stormwater quality modelling. The study was based on the analysis of pollutant build-up samples collected from 12 road surfaces in residential, commercial and industrial land uses. It was found that build-up characteristics vary appreciably even within the same land use. Therefore, using land use as a lumped parameter would introduce significant uncertainty into stormwater quality modelling. Additionally, it was found that the variability in pollutant build-up can also be significant depending on the pollutant type. This underlines the importance of taking into account specific land use characteristics and targeted pollutant species when undertaking uncertainty analysis of stormwater quality models or when interpreting modelling outcomes.
Abstract:
Spatial navigation requires the processing of complex, disparate and often ambiguous sensory data. The neurocomputations underpinning this vital ability remain poorly understood. Controversy remains as to whether multimodal sensory information must be combined into a unified representation, consistent with Tolman's "cognitive map", or whether differential activation of independent navigation modules suffices to explain observed navigation behaviour. Here we demonstrate that key neural correlates of spatial navigation in darkness cannot be explained if the path integration system acts independently of boundary (landmark) information. In vivo recordings demonstrate that the rodent head direction (HD) system becomes unstable within three minutes without vision. In contrast, rodents maintain stable place fields and grid fields for over half an hour without vision. Using a simple HD error model, we show analytically that idiothetic path integration (iPI) alone cannot be used to maintain any stable place representation beyond two to three minutes. We then use a measure of place stability based on information-theoretic principles to prove that featureless boundaries alone cannot be used to improve localization above chance level. Having shown that neither iPI nor boundaries alone are sufficient, we then address the question of whether their combination is sufficient and, we conjecture, necessary to maintain place stability for prolonged periods without vision. We address this question in simulations and robot experiments using a navigation model comprising a particle filter and a boundary map. The model replicates published experimental results on place field and grid field stability without vision, and makes testable predictions including place field splitting and grid field rescaling if the true arena geometry differs from the acquired boundary map. We discuss our findings in light of current theories of animal navigation and neuronal computation, and elaborate on their implications and significance for the design, analysis and interpretation of experiments.
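To illustrate why pure idiothetic path integration degrades, the toy simulation below (our illustration, not the paper's model) lets the head-direction estimate drift as a random walk and accumulates the resulting position error along a random trajectory.

    import numpy as np

    def hd_drift_position_error(n_steps=1800, dt=0.1, speed=0.1, sigma_hd_deg=1.0, seed=0):
        """Position error over time when the heading error follows a random walk."""
        rng = np.random.default_rng(seed)
        true_hd = rng.uniform(0.0, 2.0 * np.pi, n_steps)                         # arbitrary true headings
        hd_err = np.cumsum(rng.normal(0.0, np.radians(sigma_hd_deg), n_steps))   # random-walk HD error
        step = speed * dt
        true_pos = np.cumsum(np.c_[np.cos(true_hd), np.sin(true_hd)] * step, axis=0)
        est_pos = np.cumsum(np.c_[np.cos(true_hd + hd_err), np.sin(true_hd + hd_err)] * step, axis=0)
        return np.linalg.norm(true_pos - est_pos, axis=1)

Because the heading error grows roughly with the square root of elapsed time, the integrated position estimate drifts away from the true position within minutes at the assumed noise level, consistent with the analytical argument summarized above.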
Abstract:
iTRAQ (isobaric tags for relative or absolute quantitation) is a mass spectrometry technology that allows quantitative comparison of protein abundance by measuring peak intensities of reporter ions released from iTRAQ-tagged peptides by fragmentation during MS/MS. However, current data analysis techniques for iTRAQ struggle to report reliable relative protein abundance estimates and suffer from problems of precision and accuracy. The precision of the data is affected by variance heterogeneity: low-signal data have higher relative variability; however, low-abundance peptides dominate data sets. Accuracy is compromised as ratios are compressed toward 1, leading to underestimation of the true ratio. This study investigated both issues and proposed a methodology that combines the peptide measurements to give a robust protein estimate even when the data for the protein are sparse or at low intensity. Our data indicated that ratio compression arises from contamination during precursor ion selection, which occurs at a consistent proportion within an experiment and thus results in a linear relationship between expected and observed ratios. We proposed that a correction factor can be calculated from spiked proteins at known ratios. We then demonstrated that variance heterogeneity is present in iTRAQ data sets irrespective of the analytical packages, LC-MS/MS instrumentation, and iTRAQ labeling kit (4-plex or 8-plex) used. We proposed using an additive-multiplicative error model for peak intensities in MS/MS quantitation and demonstrated that a variance-stabilizing normalization is able to address the error structure and stabilize the variance across the entire intensity range. The resulting uniform variance structure simplifies the downstream analysis. Heterogeneity of variance consistent with an additive-multiplicative model has been reported in other MS-based quantitation, including fields outside of proteomics; consequently, the variance-stabilizing normalization methodology has the potential to increase the capabilities of MS in quantitation across diverse areas of biology and chemistry.
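One simple way to formalize the linear compression described above (our illustrative reading, under the simplifying assumption that a fixed fraction c of each precursor's signal comes from co-isolated background with a ratio of roughly 1 and intensity comparable to the reference channel) is

    r_{\mathrm{obs}} \approx (1 - c)\, r + c,

so fitting this line to spiked proteins with known ratios r yields c, and measured ratios can then be de-compressed as

    r \approx \frac{r_{\mathrm{obs}} - c}{1 - c}.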
Abstract:
Path integration is a process with which navigators derive their current position and orientation by integrating self-motion signals along a locomotion trajectory. It has been suggested that path integration becomes disproportionately erroneous when the trajectory crosses itself. However, there is a possibility that this previous finding was confounded by effects of the length of the traveled path and the amount of turning experienced along the path, two factors that are known to affect path integration performance. The present study was designed to investigate whether the crossover of a locomotion trajectory truly increases errors of path integration. In an experiment, blindfolded human navigators were guided along four paths that varied in their lengths and turns, and attempted to walk directly back to the beginning of the paths. Only one of the four paths contained a crossover. Results showed that errors on the path containing the crossover were not always larger than those observed on the other paths, and the errors were attributable solely to the effects of longer path lengths or greater amounts of turning. These results demonstrate that path crossover does not always cause significant disruption of path integration processes. Implications of the present findings for models of path integration are discussed.
Abstract:
A single-source network is said to be memory-free if all of the internal nodes (those other than the source and the sinks) do not employ memory but merely send linear combinations of the incoming symbols (received on their incoming edges) on their outgoing edges. Memory-free networks with delay that use network coding are forced to perform inter-generation network coding, as a result of which some or all sinks require a large amount of memory for decoding. In this work, we address this problem by utilizing memory elements at the internal nodes of the network as well, which reduces the number of memory elements used at the sinks. We give an algorithm that employs memory at all the nodes of the network to achieve single-generation network coding. For fixed latency, our algorithm reduces the total number of memory elements used in the network to achieve single-generation network coding. We also discuss the advantages of employing single-generation network coding together with convolutional network-error correction codes (CNECCs) for unit-delay networks and illustrate the performance gain of CNECCs obtained by using memory at the intermediate nodes, using simulations on an example network under a probabilistic network error model.
Abstract:
In this paper, we revisit the combinatorial error model of Mazumdar et al. that models errors in high-density magnetic recording caused by lack of knowledge of grain boundaries in the recording medium. We present new upper bounds on the cardinality/rate of binary block codes that correct errors within this model. All our bounds, except for one, are obtained using combinatorial arguments based on hypergraph fractional coverings. The exception is a bound derived via an information-theoretic argument. Our bounds significantly improve upon existing bounds from the prior literature.
Abstract:
We investigate the problem of timing recovery for two-dimensional magnetic recording (TDMR) channels. We develop a timing error model for the TDMR channel that accounts for phase and frequency offsets together with noise. We propose a 2-D data-aided phase-locked loop (PLL) architecture for tracking variations in the position and movement of the read head in the down-track and cross-track directions, and analyze the convergence of the algorithm under non-separable timing errors. We further develop a 2-D interpolation-based timing recovery scheme that works in conjunction with the 2-D PLL. We quantify the efficiency of our proposed algorithms by simulations over a 2-D magnetic recording channel with timing errors.
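For orientation, a conventional one-dimensional second-order data-aided PLL already shows the two quantities being tracked; the sketch below is purely illustrative (the loop gains, variable names, and the reduction to one dimension are our assumptions, not the proposed 2-D architecture).

    import numpy as np

    def pll_track(errors, k_p=0.05, k_f=0.005):
        """errors: timing-error detector outputs; returns the phase estimate after each update."""
        phase, freq, history = 0.0, 0.0, []
        for e in errors:
            freq += k_f * e             # integral path tracks the frequency offset
            phase += k_p * e + freq     # proportional path plus accumulated frequency tracks the phase offset
            history.append(phase)
        return np.array(history)

The 2-D PLL described above tracks the corresponding offsets jointly in the down-track and cross-track directions, where the timing errors need not be separable.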
Abstract:
The spatial error structure of daily precipitation derived from the latest version 7 (v7) Tropical Rainfall Measuring Mission (TRMM) level 2 data products is studied through comparison with the Asian Precipitation Highly Resolved Observational Data Integration Toward Evaluation of the Water Resources (APHRODITE) data over a subtropical region of the Indian subcontinent for the seasonal rainfall of 6 years, from June 2002 to September 2007. The data products examined include v7 data from the TRMM Microwave Imager (TMI) radiometer and the precipitation radar (PR), namely, 2A12, 2A25, and 2B31 (combined data from PR and TMI). The spatial distribution of uncertainty in these data products was quantified based on performance metrics derived from the contingency table. For the seasonal daily precipitation over a subtropical basin in India, the 2A12 data product showed greater skill in detecting and quantifying the volume of rainfall than the 2A25 and 2B31 data products. Error characterization using various error models revealed that random errors from multiplicative error models were homoscedastic and that they better represented rainfall estimates from the 2A12 algorithm. Error decomposition techniques performed to disentangle systematic and random errors verify that the multiplicative error model representing rainfall from the 2A12 algorithm captured a greater percentage of the systematic error than those for the 2A25 or 2B31 algorithms. The results verify that, although the radiometer-derived 2A12 rainfall data are known to suffer from many sources of uncertainty, spatial analysis over the case study region of India shows that the 2A12 rainfall estimates are in very good agreement with the reference estimates for the data period considered.
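The multiplicative error models referred to above are commonly written (notation assumed here, not quoted from the paper) as

    R_{\mathrm{sat}} = \alpha\, R_{\mathrm{ref}}^{\beta}\, \varepsilon, \qquad \ln R_{\mathrm{sat}} = \ln\alpha + \beta \ln R_{\mathrm{ref}} + \ln\varepsilon,

so the systematic component is carried by the calibration parameters \alpha and \beta, while the random component \ln\varepsilon is additive in log space; homoscedasticity of \ln\varepsilon across rain rates is the property reported here for the 2A12 estimates.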
Abstract:
Quantifying scientific uncertainty when setting total allowable catch limits for fish stocks is a major challenge, but it has been a requirement in the United States since changes to national fisheries legislation. Multiple sources of error are readily identifiable, including estimation error, model specification error, forecast error, and errors associated with the definition and estimation of reference points. Our focus here, however, is to quantify the influence of estimation error and model specification error on assessment outcomes. These are fundamental sources of uncertainty in developing scientific advice concerning appropriate catch levels, and although a study of these two factors may not be all-inclusive, it is feasible with available information. For data-rich stock assessments conducted on the U.S. west coast, we report approximate coefficients of variation in terminal biomass estimates from assessments based on inversion of the assessment model’s Hessian matrix (i.e., the asymptotic standard error). To summarize variation “among” stock assessments, as a proxy for model specification error, we characterize variation among multiple historical assessments of the same stock. Results indicate that for 17 groundfish and coastal pelagic species, the mean coefficient of variation of terminal biomass is 18%. In contrast, the coefficient of variation ascribable to model specification error (i.e., pooled among-assessment variation) is 37%. We show that if a precautionary probability of overfishing equal to 0.40 is adopted by managers, and only model specification error is considered, a 9% reduction in the overfishing catch level is indicated.
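A back-of-envelope reconstruction of the quoted 9% reduction, under the assumption (ours, not stated in the abstract) that the overfishing level is treated as lognormally distributed with CV = 0.37 and the catch is set at the P* = 0.40 quantile:

    import numpy as np
    from scipy.stats import norm

    cv = 0.37
    sigma = np.sqrt(np.log(1.0 + cv**2))       # lognormal sigma implied by the CV
    buffer = np.exp(norm.ppf(0.40) * sigma)    # catch multiplier at the 0.40 quantile
    print(f"multiplier = {buffer:.3f} -> {100 * (1 - buffer):.0f}% reduction")  # about 9%

Under the same assumptions, the 18% estimation-error CV alone would imply only about a 4% reduction, which is why the choice of uncertainty measure matters.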
Abstract:
This paper describes the research and development of a large-workspace, high-precision five-axis laser machining robot system. With improving absolute accuracy as the premise, the structure of the large-workspace frame-type robot and error compensation methods for the high-precision robot are discussed. Finite element analysis was used to optimize the design of the robot body, ensuring the correctness of the design of this high-precision, large-scale laser machining robot. Based on measurement data, a robot error model was established and the systematic errors of the robot were compensated, with good results, guaranteeing the laser machining accuracy of the robot system.
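As an illustration of one common way to build such a measurement-based error model (the linear error term and the function names below are assumptions, not details from the paper), the positioning error can be fitted by least squares and subtracted from subsequent commands:

    import numpy as np

    def fit_position_error_model(commanded, measured):
        """commanded, measured: (n, 3) arrays of tool positions; error ~ commanded @ A + b."""
        X = np.hstack([commanded, np.ones((len(commanded), 1))])
        coef, *_ = np.linalg.lstsq(X, measured - commanded, rcond=None)
        return coef[:-1], coef[-1]                     # A (3x3) and offset b

    def compensate(target, A, b):
        return target - (target @ A + b)               # pre-correct the command by the predicted error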