965 results for A posteriori error estimation


Relevance: 20.00%

Abstract:

The growth of five tibial variables (diaphyseal length, diaphyseal length plus distal epiphysis, condylo-malleolar length, sagittal diameter of the proximal epiphysis, maximum breadth of the distal epiphysis) was analysed using polynomial regression in order to evaluate their significance and capacity for age and sex determination during and after growth. Data were collected from 181 individuals (90♂ and 91♀) ranging from birth to 25 years of age and belonging to three documented collections from Western Europe. Results indicate that all five variables exhibit linear behaviour during growth, which can be expressed by a first-degree polynomial function. Significant sex differences were observed from age 15 onward in the two epiphyseal measurements and the condylo-malleolar length, suggesting that these three variables could be useful for sex determination in individuals older than 15 years. Strong correlations were identified between the five tibial variables and age. These results indicate that any of the studied tibial measurements is likely to serve as a useful basis for estimating sub-adult age in both archaeological and forensic samples.
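As a toy illustration of the approach described above, a first-degree polynomial can be fitted to a tibial measurement against age and then inverted for age estimation. The data, coefficients and function names below are invented stand-ins, not the study's measurements.

```python
import numpy as np

# Synthetic data standing in for a tibial variable that grows linearly
# with age (all values are invented for illustration).
rng = np.random.default_rng(0)
age = rng.uniform(0, 15, 200)                        # years
length = 70.0 + 18.0 * age + rng.normal(0, 5, 200)   # hypothetical length, mm

# First-degree polynomial (linear) fit, as reported for all five variables.
slope, intercept = np.polyfit(age, length, 1)
r = np.corrcoef(age, length)[0, 1]                   # correlation with age

def estimate_age(measured_length):
    """Invert the regression line to estimate sub-adult age."""
    return (measured_length - intercept) / slope
```

A strong correlation (r close to 1) is what makes the inversion usable for age estimation on new individuals.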

Relevance: 20.00%

Abstract:

We propose a novel formulation to solve the problem of intra-voxel reconstruction of the fibre orientation distribution function (FOD) in each voxel of the white matter of the brain from diffusion MRI data. The majority of state-of-the-art methods in the field perform the reconstruction on a voxel-by-voxel level, promoting sparsity of the orientation distribution. Recent methods have proposed a global denoising of the diffusion data using spatial information prior to reconstruction, while others promote spatial regularisation through an additional empirical prior on the diffusion image at each q-space point. Our approach reconciles voxelwise sparsity and spatial regularisation by defining a spatially structured FOD sparsity prior, where the structure originates from the spatial coherence of the fibre orientation between neighbouring voxels. The method is shown, on both simulated and real data, to enable accurate FOD reconstruction from a much lower number of q-space samples than the state of the art, typically 15 samples, even under quite adverse noise conditions.
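The voxelwise sparsity promotion mentioned above can be sketched as a non-negative L1-regularised least-squares problem. The ISTA-style solver below is a generic stand-in, not the paper's spatially structured prior, and all names and sizes are hypothetical.

```python
import numpy as np

# Solve  min_x ||A x - y||^2 / 2 + lam * ||x||_1  subject to x >= 0,
# where A maps FOD coefficients (one per candidate fibre direction) to the
# sampled q-space signal y, via projected ISTA.
def sparse_fod(A, y, lam=0.05, n_iter=1000):
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = np.maximum(x - step * (grad + lam), 0.0)  # shrink + project to >= 0
    return x
```

With only 15 q-space samples and many candidate directions the system is underdetermined, which is exactly why a sparsity prior is needed to make the reconstruction well-posed.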

Relevance: 20.00%

Abstract:

We study the theoretical properties that a matching function must satisfy in order to represent a labour market with frictions within a general equilibrium model with random matching. We analyse the Cobb-Douglas, CES and other functional forms for the matching function. Our results establish restrictions on the parameters of these functional forms that ensure the equilibrium is interior. These restrictions provide theoretical grounds for choosing among functional forms and allow the design of model misspecification tests in empirical work.
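As a toy illustration of the interiority restrictions discussed above, consider the Cobb-Douglas case. The functional form is standard, but the parameter values, function names and the check itself are hypothetical, not the paper's tests.

```python
# Cobb-Douglas matching function m(u, v) = A * u**alpha * v**(1 - alpha),
# with u unemployed workers and v vacancies. For an interior equilibrium the
# implied matching probabilities m/u and m/v must lie strictly in (0, 1),
# which restricts A and alpha; this check is illustrative only.
def cobb_douglas_matches(u, v, A=0.5, alpha=0.5):
    return A * u**alpha * v**(1 - alpha)

def is_interior(u, v, A=0.5, alpha=0.5):
    m = cobb_douglas_matches(u, v, A, alpha)
    return 0 < m / u < 1 and 0 < m / v < 1
```

With A = 0.5 the market at u = v = 1 is interior, while A = 1.5 would push the matching probabilities above one, violating interiority.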


Relevance: 20.00%

Abstract:

The aim of this thesis was to develop the cost-estimation process for the projects of the engineering unit under study, so that the unit's management would in future have more accurate cost information at its disposal. To make this possible, the unit's working practices, the cost structures of its projects and the cost attributes first had to be established, which was done by examining historical project cost data and interviewing experts. The result of the work is a cost-estimation process and model compatible with the unit's other processes. The estimation method and model are based on cost attributes, which are defined separately for the environment under study. The cost attributes are found by studying historical data, i.e. by analysing completed projects, their cost structures and the factors that drove the costs. Weights and weight ranges must then be defined for the cost attributes. The accuracy of the estimation model can be improved by calibrating it. The Goal-Question-Metric (GQM) method was used as the framework of the study.
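A minimal sketch of such an attribute-based model, assuming attribute values scaled to [0, 1] and multiplicative weight ranges calibrated from historical projects. The attribute names, ranges and figures below are invented, not the thesis's calibration.

```python
# Each cost attribute carries a calibrated multiplier range (lo, hi); the
# attribute's observed value in [0, 1] interpolates within that range, and
# the base cost is scaled by the product of the resulting multipliers.
def estimate_cost(base_cost, attributes, ranges):
    factor = 1.0
    for name, value in attributes.items():
        lo, hi = ranges[name]
        factor *= lo + (hi - lo) * value
    return base_cost * factor

# Invented calibration: complexity raises cost, team experience lowers it.
ranges = {"complexity": (0.9, 1.4), "team_experience": (1.2, 0.8)}
cost = estimate_cost(100_000, {"complexity": 0.5, "team_experience": 1.0}, ranges)
```

Recalibrating the (lo, hi) ranges as new projects complete is what the text describes as improving the model's accuracy over time.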

Relevance: 20.00%

Abstract:

Prediction filters are well-known models for signal estimation in communications, control and many other areas. The classical method for deriving linear prediction coding (LPC) filters is based on the minimization of a mean square error (MSE). Consequently, only second-order statistics are required, but the estimate is optimal only if the residue is independent and identically distributed (iid) Gaussian. In this paper, we derive the ML estimate of the prediction filter. Relationships with robust estimation of auto-regressive (AR) processes, with blind deconvolution and with source separation based on mutual information minimization are then detailed. The algorithm, based on the minimization of a high-order-statistics criterion, uses on-line estimation of the residue statistics. Experimental results highlight the interest of this approach.
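For contrast with the paper's ML criterion, the classical MSE solution can be sketched as an ordinary least-squares fit of the prediction coefficients. This is the generic second-order baseline the abstract refers to, not the authors' algorithm.

```python
import numpy as np

# MSE-based LPC: find coefficients a minimizing the squared prediction error
#   x[n] ~ a[0]*x[n-1] + a[1]*x[n-2] + ... + a[p-1]*x[n-p]
# by solving the least-squares (normal-equation) problem directly.
def lpc_mse(x, p):
    X = np.column_stack([x[p - 1 - k:len(x) - 1 - k] for k in range(p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a
```

On a Gaussian AR(p) process this least-squares estimate is also the ML estimate; the paper's point is that for non-Gaussian residues the two diverge and higher-order statistics are needed.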

Relevance: 20.00%

Abstract:

The present study evaluates the performance of four methods for estimating the regression coefficients used to make statistical decisions about intervention effectiveness in single-case designs. Ordinary least squares estimation is compared with two correction techniques dealing with general trend and one eliminating autocorrelation whenever it is present. Type I error rates and statistical power are studied for experimental conditions defined by the presence or absence of treatment effect (change in level or in slope), general trend, and serial dependence. The results show that empirical Type I error rates do not approximate the nominal ones in the presence of autocorrelation or general trend when ordinary or generalized least squares are applied. The techniques controlling trend show lower false-alarm rates, but prove insufficiently sensitive to existing treatment effects. Consequently, the use of the statistical significance of the regression coefficients for detecting treatment effects is not recommended for short data series.
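The Type I error inflation described above can be reproduced with a small Monte Carlo sketch: short AR(1) series with no true effect, tested for a slope with an ordinary t test. This is a hypothetical simulation in the spirit of the study design, not its actual conditions or code.

```python
import numpy as np

# Empirical Type I error of the OLS slope t-test on short autocorrelated
# series with no treatment effect. Parameters are illustrative.
def type1_rate(n=20, phi=0.6, n_sims=2000, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    rejections = 0
    for _ in range(n_sims):
        e = np.zeros(n)
        for i in range(1, n):             # AR(1) errors, no real trend
            e[i] = phi * e[i - 1] + rng.normal()
        coef = np.polyfit(t, e, 1)        # OLS line: slope, intercept
        b = coef[0]
        resid = e - np.polyval(coef, t)
        se = np.sqrt(resid.var(ddof=2) / ((t - t.mean()) ** 2).sum())
        if abs(b / se) > 2.101:           # two-sided t crit., 18 df, alpha=.05
            rejections += 1
    return rejections / n_sims
```

With positive autocorrelation the empirical rejection rate lands well above the nominal 5%, which is the study's warning against using these tests on short series.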

Relevance: 20.00%

Abstract:

This thesis evaluates software development practices through error analysis. The work presents the software development process, software testing, software errors, error classification and software process improvement methods. The practical part presents results from the error analysis of one software process and gives improvement ideas for the project. It was noticed that the classification of the error data in the project was inadequate, which made it impossible to use the error data effectively. The error analysis nevertheless showed deficiencies in the design and analysis phases, the implementation phase and the testing phase. The work gives ideas for improving both the error classification and the software development practices.
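The core of such an error analysis is tallying classified error reports by development phase to locate deficiencies. The sketch below uses invented phase names and records purely for illustration.

```python
from collections import Counter

# Hypothetical classified error reports; a real analysis would load these
# from the project's defect-tracking data.
errors = [
    {"id": 1, "phase": "design", "severity": "major"},
    {"id": 2, "phase": "implementation", "severity": "minor"},
    {"id": 3, "phase": "design", "severity": "minor"},
    {"id": 4, "phase": "testing", "severity": "major"},
]

# Count errors per phase; disproportionate counts point at weak phases.
by_phase = Counter(e["phase"] for e in errors)
```

The thesis's observation is that this kind of tally is only meaningful when the classification scheme itself is consistent, which was the project's main gap.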

Relevance: 20.00%

Abstract:

In the south-eastern sector of the Ebro Basin, the palaeomagnetic inclination obtained from the Oligocene alluvial successions is considerably lower than would be expected given the reference palaeolatitude calculated for this region during the Oligocene. This inclination error can arise from several factors, such as hydrodynamic control of the magnetic particles in the depositional environment, differential compaction of the sediment during burial, or tectonic deformation. This work focuses on its study in two dominantly alluvial successions whose magnetostratigraphy had previously been established. The alluvial and lacustrine lithofacies studied were grouped into five classes: grey sandstones, red and variegated sandstones, red silts, red mudstones and limestones. A correlation between phyllosilicate abundance and inclination error was demonstrated: lithofacies with a low percentage of phyllosilicates (limestones and grey sandstones) show errors of about 5°, statistically insignificant with respect to the reference inclination, whereas in materials with a higher percentage of phyllosilicates (silts and clays) the error can reach 25°. This has no bearing on the interpretation of magnetic polarities, but it does affect palinspastic and palaeogeographic reconstructions based on palaeolatitudes calculated from palaeoinclinations. The results demonstrate the need for caution when drawing conclusions based exclusively on this type of information.

Relevance: 20.00%

Abstract:

Our consumption of groundwater, in particular as drinking water and for irrigation, has increased considerably over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays we face many problems, ranging from water prospection to sustainable management and the remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of these approaches is the computational cost of performing complex flow simulations in each realization.
In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods use approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Error models are proposed to correct the approximate responses following a machine-learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational cost and increases the accuracy and robustness of the uncertainty propagation.
The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between proxy and exact response curves. In the second part of the thesis, this strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. The individual correction of each proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications.
The concept of a functional error model is useful not only in the context of uncertainty propagation but also, and perhaps even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, in which the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy coupled to an error model provides the approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to one-stage MCMC. An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires an iterative strategy, such that as new flow simulations are performed, the error model is improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which the methodology is applied to a saline-intrusion problem in a coastal aquifer.
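The first-part idea — learn, from a training subset where both responses are known, a map from proxy to exact response curves — can be sketched with ordinary PCA standing in for the FPCA used in the thesis. All data, sizes and names below are synthetic illustrations, not the thesis implementation.

```python
import numpy as np

# Learn a linear error model: project proxy curves onto their leading
# principal components, then regress the exact curves on those scores.
def fit_error_model(proxy_train, exact_train, n_comp=3):
    mean = proxy_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(proxy_train - mean, full_matrices=False)
    basis = Vt[:n_comp]                             # principal components
    scores = (proxy_train - mean) @ basis.T
    # least-squares regression of exact curves on proxy scores + intercept
    Z = np.column_stack([np.ones(len(scores)), scores])
    coef, *_ = np.linalg.lstsq(Z, exact_train, rcond=None)

    def correct(proxy):
        """Predict the exact response curve from an uncorrected proxy curve."""
        z = np.concatenate([[1.0], (proxy - mean) @ basis.T])
        return z @ coef

    return correct
```

The corrected curves can then replace the raw proxy responses in uncertainty propagation, or serve as the cheap first-stage evaluation in two-stage MCMC.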

Relevance: 20.00%

Abstract:

BACKGROUND: The heart relies on continuous energy production, and imbalances herein directly impair cardiac function. The tricarboxylic acid (TCA) cycle is the primary means of energy generation in the healthy myocardium, but direct noninvasive quantification of metabolic fluxes is challenging due to the low concentration of most metabolites. Hyperpolarized (13)C magnetic resonance spectroscopy (MRS) provides the opportunity to measure cellular metabolism in real time in vivo. The aim of this work was to noninvasively measure myocardial TCA cycle flux (VTCA) in vivo within a single minute. METHODS AND RESULTS: Hyperpolarized [1-(13)C]acetate was administered at different concentrations in healthy rats. (13)C incorporation into [1-(13)C]acetylcarnitine and the TCA cycle intermediate [5-(13)C]citrate was dynamically detected in vivo with a time resolution of 3 s. Different kinetic models were established and evaluated to determine the metabolic fluxes by simultaneously fitting the evolution of the (13)C labeling in acetate, acetylcarnitine, and citrate. VTCA was estimated to be 6.7 ± 1.7 μmol·g(-1)·min(-1) (dry weight), and was best estimated with a model using only the labeling in citrate and acetylcarnitine, independent of the precursor. The TCA cycle rate was not linear in the citrate-to-acetate metabolite ratio and thus could not be quantified using a ratiometric approach. The (13)C signal evolution of citrate, i.e. citrate formation, was independent of the amount of injected acetate, while the (13)C signal evolution of acetylcarnitine revealed a dependency on the injected acetate dose. The (13)C labeling of citrate did not correlate with that of acetylcarnitine, leading to the hypothesis that acetylcarnitine formation is not an indication of mitochondrial TCA cycle activity in the heart. CONCLUSIONS: Hyperpolarized [1-(13)C]acetate is a metabolic probe independent of pyruvate dehydrogenase (PDH) activity.
It allows the direct estimation of VTCA in vivo, which was shown to be neither dependent on the administered acetate dose nor on the (13)C labeling of acetylcarnitine. Dynamic (13)C MRS coupled to the injection of hyperpolarized [1-(13)C]acetate can enable the measurement of metabolic changes during impaired heart function.
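A toy version of the kinetic modelling: a one-compartment label model integrated at the 3 s resolution mentioned above and fitted by a grid search. The rate constants, signals and model structure are invented stand-ins for the paper's kinetic models, not its actual equations.

```python
import numpy as np

# One-compartment label kinetics: citrate 13C signal grows from the acetate
# precursor signal at rate k_in (a stand-in for a TCA-flux-like rate) and
# decays at rate k_out. Forward-Euler integration at 3 s steps.
def simulate_citrate(ace, k_in, k_out, dt=3.0):
    cit = np.zeros_like(ace)
    for i in range(1, len(ace)):
        cit[i] = cit[i - 1] + dt * (k_in * ace[i - 1] - k_out * cit[i - 1])
    return cit

# Fit k_in by minimizing the sum of squared errors over a candidate grid.
def fit_k_in(ace, cit_obs, k_out, grid):
    errs = [np.sum((simulate_citrate(ace, k, k_out) - cit_obs) ** 2)
            for k in grid]
    return grid[int(np.argmin(errs))]
```

Fitting the full time courses of several metabolites simultaneously, as the paper does, follows the same least-squares logic with a larger compartment model.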

Relevance: 20.00%

Abstract:

CONTEXT: Complex steroid disorders such as P450 oxidoreductase deficiency or apparent cortisone reductase deficiency may be recognized by steroid profiling using chromatographic mass spectrometric methods. These methods are highly specific and sensitive, and provide a complete spectrum of steroid metabolites in a single measurement of one sample, which makes them superior to immunoassays. The steroid metabolome during the fetal-neonatal transition is characterized by (a) the metabolites of the fetal-placental unit at birth, (b) the fetal adrenal androgens until its involution 3-6 months postnatally, and (c) the steroid metabolites produced by the developing endocrine organs. All these developmental events change the steroid metabolome in an age- and sex-dependent manner during the first year of life. OBJECTIVE: The aim of this study was to provide normative values for the urinary steroid metabolome of healthy newborns at short time intervals during the first year of life. METHODS: We conducted a prospective, longitudinal study measuring 67 urinary steroid metabolites in 21 male and 22 female healthy term newborns at 13 time points from week 1 to week 49 of life. Urine samples were collected from newborn infants before discharge from hospital and from healthy infants at home. Steroid metabolites were measured by gas chromatography-mass spectrometry (GC-MS), and steroid concentrations corrected for urinary creatinine excretion were calculated. RESULTS: 61 steroids showed age specificity and 15 showed sex specificity. The highest urinary steroid concentrations in both sexes were found for progesterone derivatives, in particular 20α-DH-5α-DH-progesterone, and for highly polar 6α-hydroxylated glucocorticoids. These steroids peaked at week 3 and had decreased by ∼80% at week 25 in both sexes. The decline of progestins, androgens and estrogens was more pronounced than that of glucocorticoids, whereas the excretion of corticosterone and its metabolites and of mineralocorticoids remained constant during the first year of life. CONCLUSION: The urinary steroid profile changes dramatically during the first year of life and correlates with the physiologic developmental changes of the fetal-neonatal transition. Detailed normative data for this period thus permit the use of steroid profiling as a powerful diagnostic tool.

Relevance: 20.00%

Abstract:

This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error-monitoring processing. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. The groups did not differ in trait or state anxiety. We found an enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. The groups did not differ in the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants' math anxiety scores, becoming more negative as the score increased. Moreover, using standardized low-resolution electromagnetic tomography (sLORETA), we found greater activation of the insula for errors on the numerical task than for errors on the non-numerical task, only in the HMA group. The results are interpreted in terms of the motivational significance theory of the ERN.