22 results for "error analysis"
Abstract:
Growth codes are a subclass of Rateless codes that have found interesting applications in data dissemination problems. Compared to other Rateless and conventional channel codes, Growth codes offer improved intermediate performance, which is particularly useful in applications where partial data has some utility. In this paper, we investigate the asymptotic performance of Growth codes using the Wormald method, which was originally proposed for studying the Peeling Decoder of LDPC and LDGM codes. In contrast to previous works, the Wormald differential equations are formulated from the nodes' perspective, which enables a numerical solution for the expected asymptotic decoding performance of Growth codes. Our framework is appropriate for any class of Rateless codes that does not include a precoding step. We further study the performance of Growth codes with moderate and large codeblocks through simulations, and we use the generalized logistic function to model the decoding probability. We then exploit the decoding probability model in an illustrative application of Growth codes to error-resilient video transmission. The video transmission problem is cast as a joint source and channel rate allocation problem that is shown to be convex with respect to the channel rate. This illustrative application highlights the main advantage of Growth codes, namely their improved performance in the intermediate loss region.
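The generalized logistic (Richards) function mentioned in the abstract can be sketched as follows; the parameter values below (growth rate B, midpoint M, shape nu) are illustrative placeholders, not values fitted to Growth codes:

```python
import math

def generalized_logistic(x, A=0.0, K=1.0, B=25.0, M=1.0, nu=1.0):
    """Richards (generalized logistic) curve.  Here x stands for the
    ratio of received symbols to source symbols, and the output models
    the probability that a symbol is decoded.  A/K are the lower/upper
    asymptotes; B, M, nu are illustrative, not fitted values."""
    return A + (K - A) / (1.0 + math.exp(-B * (x - M))) ** (1.0 / nu)

# Decoding probability rises smoothly through the intermediate region
for r in (0.8, 1.0, 1.2):
    print(f"ratio {r:.1f}: P(decoded) ~ {generalized_logistic(r):.3f}")
```

In practice the parameters would be estimated from the simulated decoding curves, e.g. with a nonlinear least-squares fit.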
Abstract:
Approximate models (proxies) can be employed to reduce the computational cost of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal component analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the proxy response alone. This methodology is purpose-oriented, as the error model is constructed directly for the quantity of interest rather than for the state of the system. Moreover, the dimensionality reduction performed by FPCA provides a diagnostic of the quality of the error model, assessing both the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
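The proxy-to-exact error model described above can be sketched in miniature: project both sets of curves onto a leading principal component (a stand-in for full FPCA, which would keep several components) and fit a linear map between the scores. All curves, names and coefficients below are hypothetical illustrations, not the paper's data:

```python
import math

def first_component(curves):
    """Mean curve and leading principal component of sampled curves
    (lists of equal length), via power iteration on the covariance.
    A one-component stand-in for full FPCA."""
    n, m = len(curves), len(curves[0])
    mean = [sum(c[j] for c in curves) / n for j in range(m)]
    centred = [[c[j] - mean[j] for j in range(m)] for c in curves]
    v = [1.0] * m
    for _ in range(100):                      # power iteration
        w = [0.0] * m
        for row in centred:
            d = sum(r * x for r, x in zip(row, v))
            for j in range(m):
                w[j] += d * row[j]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return mean, v

def score(curve, mean, v):
    return sum((curve[j] - mean[j]) * v[j] for j in range(len(v)))

# Hypothetical learning set: "exact" responses are sine curves of
# varying amplitude; proxy responses are a damped, shifted version.
t = [i / 10 for i in range(1, 11)]
exact = [[a * math.sin(x) for x in t] for a in (0.8, 1.0, 1.2, 1.5)]
proxy = [[0.7 * y + 0.1 for y in c] for c in exact]

mean_e, v_e = first_component(exact)
mean_p, v_p = first_component(proxy)
s_e = [score(c, mean_e, v_e) for c in exact]
s_p = [score(c, mean_p, v_p) for c in proxy]

# Error model in score space: least-squares fit  exact ~ a*proxy + b
n = len(s_p)
sx, sy = sum(s_p), sum(s_e)
sxx = sum(x * x for x in s_p)
sxy = sum(x * y for x, y in zip(s_p, s_e))
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

def predict_exact(proxy_curve):
    """Predict the exact response from the proxy response alone."""
    s_hat = a * score(proxy_curve, mean_p, v_p) + b
    return [mean_e[j] + s_hat * v_e[j] for j in range(len(v_e))]
```

A new realization then only requires the cheap proxy run: `predict_exact(proxy_curve)` returns the reconstructed exact-model curve.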
Abstract:
Campylobacteriosis has been the most common food-associated notifiable infectious disease in Switzerland since 1995. Contact with and ingestion of raw or undercooked broilers are considered the dominant risk factors for infection. In this study, we investigated the temporal relationship between the disease incidence in humans and the prevalence of Campylobacter in broilers in Switzerland from 2008 to 2012. We use a time-series approach to describe the pattern of the disease by incorporating seasonal effects and autocorrelation. The analysis shows that the prevalence of Campylobacter in broilers, with a 2-week lag, has a significant impact on disease incidence in humans. Therefore, Campylobacter cases in humans can be partly explained by contagion through broiler meat. We also found a strong autoregressive effect in human illness, and a significant increase of illness during the Christmas and New Year's holidays. In a final analysis, we corrected for the sampling error of prevalence in broilers, and the results led to similar conclusions.
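The time-series structure described above (autoregression, a 2-week prevalence lag, seasonality, a holiday effect) can be sketched as a toy simulation; all coefficients below are illustrative placeholders, not the fitted values from the study:

```python
import math

def simulate_incidence(prevalence, weeks, phi=0.5, beta=0.8,
                       base=2.0, season_amp=1.0):
    """Toy generative version of the model structure: this week's
    human incidence depends on last week's incidence (autoregression),
    broiler prevalence two weeks earlier (the lag reported in the
    study), a yearly seasonal term, and a bump in the Christmas/New
    Year weeks.  Coefficients are illustrative, not fitted values."""
    y = [base]
    for t in range(1, weeks):
        season = season_amp * math.sin(2 * math.pi * t / 52)
        holiday = 1.5 if t % 52 in (0, 51) else 0.0   # year-end weeks
        prev_lag = prevalence[t - 2] if t >= 2 else prevalence[0]
        y.append(base + phi * y[-1] + beta * prev_lag + season + holiday)
    return y

# Higher broiler prevalence feeds through to higher human incidence
low = simulate_incidence([0.2] * 104, 104)
high = simulate_incidence([0.6] * 104, 104)
print(f"week 50 incidence: low prevalence {low[50]:.2f}, "
      f"high prevalence {high[50]:.2f}")
```

Fitting such a model to real counts would instead use a seasonal autoregressive regression (e.g. with the lagged prevalence as an exogenous term), but the dependency structure is the same.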
Abstract:
An in-depth study, using simulations and covariance analysis, is performed to identify the optimal sequence of observations to obtain the most accurate orbit propagation. The accuracy of the results of an orbit determination/improvement process depends on: tracklet length, number of observations, type of orbit, astrometric error, time interval between tracklets, and observation geometry. The latter depends on the position of the object along its orbit and the location of the observing station. This covariance analysis aims to optimize the observation strategy, taking into account the influence of the orbit shape, the relative object-observer geometry, and the interval between observations.
Abstract:
The Astronomical Institute of the University of Bern (AIUB) is conducting several search campaigns for space debris using optical sensors. The debris objects are discovered during systematic survey observations. In general, the result of a discovery consists of only a short observation arc, or tracklet, which is used to perform a first orbit determination so that the object can be observed again in subsequent follow-up observations. The additional observations are used in the orbit improvement process to obtain accurate orbits to be included in a catalogue. In order to obtain the most accurate orbit within the available time, it is necessary to optimize the follow-up observation strategy. In this paper, an in-depth study, using simulations and covariance analysis, is performed to identify the optimal sequence of follow-up observations that yields the most accurate orbit propagation for space debris catalogue maintenance. The main factors that determine the accuracy of the results of an orbit determination/improvement process are: tracklet length, number of observations, type of orbit, astrometric error of the measurements, time interval between tracklets, and the relative position of the object along its orbit with respect to the observing station. The main aim of the covariance analysis is to optimize the follow-up strategy as a function of the object-observer geometry, the interval between follow-up observations, and the shape of the orbit. This analysis can be applied to every orbital regime, but particular attention was dedicated to geostationary, Molniya, and geostationary transfer orbits. Finally, the case with more than two follow-up observations and the influence of a second observing station are also analyzed.
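The core of such a covariance analysis is propagating the post-fit state covariance through the linearized dynamics, P' = Φ P Φᵀ. A minimal sketch with a toy one-dimensional (along-track position, velocity) state is below; a real orbit study would use the full 6x6 state transition matrix of the orbital dynamics, and the numbers are illustrative only:

```python
def mat_mul(A, B):
    """Dense matrix product for small lists-of-lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def propagate_covariance(P, dt):
    """Linear covariance propagation P' = Phi P Phi^T for a toy
    (position, velocity) state with Phi = [[1, dt], [0, 1]].
    Real catalogue maintenance uses the 6x6 transition matrix."""
    Phi = [[1.0, dt], [0.0, 1.0]]
    return mat_mul(mat_mul(Phi, P), transpose(Phi))

# Illustrative post-fit covariance: 1 km^2 position, (1 m/s)^2 velocity
P0 = [[1.0, 0.0], [0.0, 1e-6]]
for dt in (3600.0, 86400.0):   # one hour vs one day to the follow-up
    P = propagate_covariance(P0, dt)
    print(f"dt = {dt:>7.0f} s -> position variance {P[0][0]:.1f} km^2")
```

The quadratic growth of the position variance with dt is what makes the interval between tracklets such a dominant factor in the follow-up strategy.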
Abstract:
Studies have shown that the discriminability of successive time intervals depends on the presentation order of the standard (St) and the comparison (Co) stimuli. Also, this order affects the point of subjective equality. The first effect is here called the standard-position effect (SPE); the latter is known as the time-order error. In the present study, we investigated how these two effects vary across interval types and standard durations, using Hellström’s sensation-weighting model to describe the results and relate them to stimulus comparison mechanisms. In Experiment 1, four modes of interval presentation were used, factorially combining interval type (filled, empty) and sensory modality (auditory, visual). For each mode, two presentation orders (St–Co, Co–St) and two standard durations (100 ms, 1,000 ms) were used; half of the participants received correctness feedback, and half of them did not. The interstimulus interval was 900 ms. The SPEs were negative (i.e., a smaller difference limen for St–Co than for Co–St), except for the filled-auditory and empty-visual 100-ms standards, for which a positive effect was obtained. In Experiment 2, duration discrimination was investigated for filled auditory intervals with four standards between 100 and 1,000 ms, an interstimulus interval of 900 ms, and no feedback. Standard duration interacted with presentation order, here yielding SPEs that were negative for standards of 100 and 1,000 ms, but positive for 215 and 464 ms. Our findings indicate that the SPE can be positive as well as negative, depending on the interval type and standard duration, reflecting the relative weighting of the stimulus information, as is described by the sensation-weighting model.
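The weighting idea behind these effects can be illustrated schematically. The sketch below is a simplified reading, not Hellström's full sensation-weighting model (which also weights internal reference levels): if the two interval magnitudes enter the comparison with unequal weights, the point of subjective equality shifts away from the standard, producing a time-order error.

```python
def perceived_difference(d1, d2, w1, w2, b=0.0):
    """Schematic weighted comparison of two interval magnitudes
    (first vs second presentation); w1, w2 are the weights, b a
    constant bias.  Illustrative simplification only."""
    return w1 * d1 - w2 * d2 + b

def pse(standard, w1, w2, b=0.0):
    """Comparison duration judged equal to a first-presented standard:
    solve  w1*standard - w2*pse + b = 0  for pse."""
    return (w1 * standard + b) / w2

# Equal weights: PSE equals the standard, no time-order error
print(pse(1000.0, 1.0, 1.0))
# Overweighting the first interval shifts the PSE above the standard
print(pse(1000.0, 1.05, 1.0))
```

Order-dependent weights of this kind also change the effective slope of the comparison, which is one way a standard-position effect on the difference limen can arise.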
Abstract:
The Dora-Maira whiteschists derive from metasomatically altered granites that experienced ultrahigh-pressure metamorphism at ~750 °C and 40 kbar during the Alpine orogeny. In order to investigate the P–T–time–fluid evolution of the whiteschists, we obtained U–Pb ages from zircon and monazite and combined those with trace element compositions and oxygen isotopes of the accessory minerals and coexisting garnet. Zircon cores are the only remnants of the granitic protolith and still preserve a Permian age, magmatic trace element compositions and δ18O of ~10 ‰. Thermodynamic modelling of Si-rich and Si-poor whiteschist compositions shows that there are two main fluid pulses during prograde subduction between 20 and 40 kbar. In Si-poor samples, the breakdown of chlorite to garnet + fluid occurs at ~22 kbar. A first zircon rim directly overgrowing the cores has inclusions of prograde phlogopite and HREE-enriched patterns, indicating zircon growth at the onset of garnet formation. A second main fluid pulse is documented close to peak metamorphic conditions in both Si-rich and Si-poor whiteschists, when talc + kyanite react to garnet + coesite + fluid. A second metamorphic overgrowth on zircon with HREE depletion was observed in the Si-poor whiteschists, whereas a single metamorphic overgrowth capturing phengite and talc inclusions was observed in the Si-rich whiteschists. Garnet rims, zircon rims and monazite are in chemical and isotopic equilibrium for oxygen, demonstrating that they all formed at peak metamorphism at 35 Ma, as constrained by the age of monazite (34.7 ± 0.4 Ma) and zircon rims (35.1 ± 0.8 Ma).
The prograde zircon rim in Si-poor whiteschists has an age that is indistinguishable, within error, from the age of peak metamorphic conditions, consistent with a minimum subduction rate of 2 cm/year for the Dora-Maira unit. Oxygen isotope values for zircon rims, monazite and garnet are equal within error at 6.4 ± 0.4 ‰, which is in line with closed-system equilibrium fractionation during prograde to peak temperatures. The resulting equilibrium Δ18Ozircon-monazite at 700 ± 20 °C is 0.1 ± 0.7 ‰. The in situ oxygen isotope data argue against an externally derived input of fluids into the whiteschists. Instead, fluid-assisted zircon and monazite recrystallisation can be linked to internal dehydration reactions during prograde subduction. We propose that the major metasomatic event affecting the granite protolith was related to hydrothermal seafloor alteration post-dating Jurassic rifting, well before the onset of Alpine subduction.
Keywords: High-pressure fluids · Whiteschists · U–Pb dating · Oxygen isotopes · Ion microprobe · Metasomatism