13 results for Inference
at Universidade do Minho
Abstract:
The assessment of existing timber structures is often limited to information obtained from non- or semi-destructive testing, as mechanical testing is in many cases not possible due to its destructive nature. The available data therefore provide only an indirect measurement of the reference mechanical properties of timber elements, often obtained through empirically based correlations. Moreover, the data must result from the combination of different tests, so as to provide a reliable source of information for a structural analysis. Even if general guidelines are available for each typology of testing, a global methodology is still needed for combining information from different sources and drawing inferences from that information in a decision process. In this scope, the present work presents the implementation of a probabilistic framework for the safety assessment of existing timber elements. This methodology combines information gathered at different scales and follows a probabilistic framework that allows the structural assessment of existing timber elements, with the possibility of inference on and updating of their mechanical properties through Bayesian methods. The framework is based on four main steps: (i) scale of information; (ii) measurement data; (iii) probability assignment; and (iv) structural analysis. In this work, the proposed methodology is implemented in a case study. Data were obtained through a multi-scale experimental campaign carried out on old chestnut timber beams, accounting for correlations of non- and semi-destructive tests with mechanical properties. Finally, different inference scenarios are discussed, aiming at the characterization of the safety level of the elements.
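The Bayesian updating step described in the abstract can be illustrated with a minimal conjugate-normal sketch: a prior on a mechanical property is updated with indirect (NDT-derived) estimates. All numbers below are hypothetical, not the chestnut-beam data.

```python
import numpy as np

def normal_update(mu0, tau0, data, sigma):
    """Conjugate update for a Normal mean: prior N(mu0, tau0^2),
    likelihood N(mu, sigma^2) with sigma known. Returns (mean, sd)."""
    n = len(data)
    xbar = float(np.mean(data))
    post_var = 1.0 / (1.0 / tau0 ** 2 + n / sigma ** 2)
    post_mean = post_var * (mu0 / tau0 ** 2 + n * xbar / sigma ** 2)
    return post_mean, post_var ** 0.5

# hypothetical prior for bending strength (MPa) and hypothetical
# indirect (NDT-based) strength estimates for a single element
prior_mean, prior_sd = 24.0, 5.0
ndt_estimates = [30.1, 28.4, 31.2, 29.5]
post_mean, post_sd = normal_update(prior_mean, prior_sd, ndt_estimates, sigma=4.0)
```

The posterior mean lands between the prior mean and the data mean, and the posterior standard deviation shrinks, which is the mechanism that lets new test data refine the safety assessment.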
Abstract:
"The idea that social processes develop in a cyclical manner is somewhat like a 'Lorelei'. Researchers are lured to it because of its theoretical promise, only to become entangled in (if not wrecked by) messy problems of empirical inference. The reasoning leading to hypotheses of some kind of cycle is often elegant enough, yet the data from repeated observations rarely display the supposed cyclical pattern. (...) In addition, various 'schools' seem to exist which frequently arrive at different conclusions on the basis of the same data." (van der Eijk and Weber 1987:271). Much of the empirical controversy around these issues arises because of three distinct problems: the coexistence of cycles of different periodicities, the possibility of transient cycles and the existence of cycles without fixed periodicity. In some cases, there are no reasons to expect any of these phenomena to be relevant. Seasonality caused by Christmas is one such example (Wen 2002). In such cases, researchers mostly rely on spectral analysis and Auto-Regressive Moving-Average (ARMA) models to estimate the periodicity of cycles. However, and this is particularly true in the social sciences, sometimes there are good theoretical reasons to expect irregular cycles. In such cases, "the identification of periodic movement in something like the vote is a daunting task all by itself. When a pendulum swings with an irregular beat (frequency), and the extent of the swing (amplitude) is not constant, mathematical functions like sine-waves are of no use." (Lebo and Norpoth 2007:73) In the past, this difficulty has led to two different approaches. On the one hand, some researchers dismissed these methods altogether, relying on informal alternatives that do not meet rigorous standards of statistical inference. Goldstein (1985, 1988), studying the severity of Great Power wars, is one such example.
On the other hand, there are authors who transfer the assumptions of spectral analysis (and ARMA models) into fundamental assumptions about the nature of social phenomena. This type of argument was produced by Beck (1991) who, in a reply to Goldstein (1988), claimed that only "fixed period models are meaningful models of cyclic phenomena". We argue that wavelet analysis, a mathematical framework developed in the mid-1980s (Grossman and Morlet 1984; Goupillaud et al. 1984), is a very viable alternative for studying cycles in political time-series. It has the advantage of staying close to the frequency-domain approach of spectral analysis while addressing its main limitations. Its principal contribution comes from estimating the spectral characteristics of a time-series as a function of time, thus revealing how its different periodic components may change over time. The rest of the article proceeds as follows. In the section "Time-frequency Analysis", we study in some detail the continuous wavelet transform and compare its time-frequency properties with the more standard tool for that purpose, the windowed Fourier transform. In the section "The British Political Pendulum", we apply wavelet analysis to essentially the same data analyzed by Lebo and Norpoth (2007) and Merrill, Grofman and Brunell (2011) and try to provide a more nuanced answer to the same question discussed by these authors: do British electoral politics exhibit cycles? Finally, in the last section, we present a concise list of future directions.
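The continuous wavelet transform the abstract refers to can be sketched directly in NumPy with a Morlet mother wavelet (this is an illustrative implementation, not the authors' code; scales and the test signal are arbitrary):

```python
import numpy as np

def morlet_cwt(x, scales, dt=1.0, omega0=6.0):
    """Continuous wavelet transform with a Morlet mother wavelet.
    Returns complex coefficients of shape (len(scales), len(x))."""
    out = np.empty((len(scales), len(x)), dtype=complex)
    for i, s in enumerate(scales):
        # wavelet sampled on a grid wide enough to hold its support
        t = np.arange(-4.0 * s, 4.0 * s + dt, dt)
        psi = np.pi ** -0.25 * np.exp(1j * omega0 * t / s - (t / s) ** 2 / 2)
        psi *= dt / np.sqrt(s)  # normalization over scale
        # for the Morlet wavelet psi(-t) = conj(psi(t)), so convolving
        # with psi equals correlating with the conjugated wavelet
        out[i] = np.convolve(x, psi, mode="same")
    return out

# a pure cycle of period 16 samples should peak near the matching scale
t = np.arange(512)
x = np.sin(2 * np.pi * t / 16)
scales = np.arange(4, 40)
power = np.abs(morlet_cwt(x, scales)) ** 2
best_scale = int(scales[np.argmax(power.mean(axis=1))])
```

Because the coefficients are indexed by both time and scale, the power surface shows when a periodic component is active, which is exactly what fixed-period spectral methods cannot do.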
Abstract:
PhD Thesis in Bioengineering
Abstract:
Extreme value theory (EVT) deals with the occurrence of extreme phenomena. The tail index is a very important parameter appearing in the estimation of the probability of rare events. Under a semiparametric framework, inference requires the choice of a number k of upper order statistics to be considered. This is the crux of the matter, and there is no definite formula for it, since a small k leads to high variance and large values of k tend to increase the bias. Several methodologies have emerged in the literature, especially concerning the most popular Hill estimator (Hill, 1975). In this work we compare through simulation well-known procedures presented in Drees and Kaufmann (1998), Matthys and Beirlant (2000), Beirlant et al. (2002) and de Sousa and Michailidis (2004) with a heuristic scheme considered in Frahm et al. (2005), used there in the estimation of a different tail measure but in a similar context. We will see that the new method may be an interesting alternative.
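The bias-variance trade-off in the choice of k is easy to see with the Hill estimator itself; a minimal NumPy sketch on a simulated exact Pareto sample (illustrative, not the paper's simulation design):

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the tail index gamma from the k upper
    order statistics of a positive, heavy-tailed sample x."""
    xs = np.sort(x)[::-1]                 # descending order
    logs = np.log(xs[:k + 1])
    return float(np.mean(logs[:k] - logs[k]))

rng = np.random.default_rng(0)
u = rng.uniform(size=5000)
x = u ** -0.5        # exact Pareto tail with gamma = 0.5 (alpha = 2)
estimates = {k: hill_estimator(x, k) for k in (50, 200, 1000)}
```

For an exact Pareto tail the estimator is unbiased at every k and only the variance changes (roughly gamma^2 / k); for real heavy-tailed data the bias grows with k, which is why the choice of k is "the crux of the matter".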
Abstract:
Extreme value models are widely used in different areas. The Birnbaum–Saunders distribution is receiving considerable attention due to its physical arguments and its good properties. We propose a methodology based on extreme value Birnbaum–Saunders regression models, which includes model formulation, estimation, inference and checking. We further conduct a simulation study to evaluate its performance. A statistical analysis of real-world extreme value environmental data using the methodology is provided as an illustration.
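For reference, the Birnbaum–Saunders distribution underlying these regression models has a simple closed-form CDF through its normal representation; a minimal sketch (not the authors' estimation code, parameter values are arbitrary):

```python
from math import erf, sqrt

def bs_cdf(t, alpha, beta):
    """Birnbaum-Saunders CDF via its normal representation:
    F(t) = Phi((sqrt(t / beta) - sqrt(beta / t)) / alpha), for t > 0."""
    z = (sqrt(t / beta) - sqrt(beta / t)) / alpha
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# the scale parameter beta is the median: F(beta) = 0.5 for any alpha
median_check = bs_cdf(1.0, 0.5, 1.0)
```

The scale parameter beta being the exact median (for any shape alpha) is one of the "good properties" that makes the distribution convenient in regression settings.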
Abstract:
Published in "AIP Conference Proceedings", Vol. 1648
Abstract:
A novel framework for probabilistic-based structural assessment of existing structures, which combines model identification and reliability assessment procedures while objectively considering different sources of uncertainty, is presented in this paper. A short description of structural assessment applications reported in the literature is given first. Then, the developed model identification procedure, supported by a robust optimization algorithm, is presented. Special attention is given to both experimental and numerical errors, which are considered in this algorithm's convergence criterion. An updated numerical model is obtained from this process. The reliability assessment procedure, which considers a probabilistic model for the structure under analysis, is then introduced, incorporating the results of the model identification procedure. The developed model is then updated, as new data are acquired, through a Bayesian inference algorithm that explicitly addresses statistical uncertainty. Finally, the developed framework is validated with a set of reinforced concrete beams which were loaded up to failure in the laboratory.
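The reliability-assessment step can be illustrated by a crude Monte Carlo estimate of the probability of failure for a simple limit state g = R - S (the lognormal resistance and load models below are illustrative, not the paper's beams):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
n = 200_000
# limit state g = R - S with lognormal resistance R and load effect S
# (illustrative parameter values)
R = rng.lognormal(mean=np.log(30.0), sigma=0.10, size=n)
S = rng.lognormal(mean=np.log(20.0), sigma=0.15, size=n)
pf = float(np.mean(R - S < 0.0))      # Monte Carlo failure probability
beta = -NormalDist().inv_cdf(pf)      # corresponding reliability index
```

In a framework like the one described, the distributions of R and S would be the updated probabilistic models coming out of the identification and Bayesian-updating steps rather than fixed assumptions.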
Abstract:
In this article, we develop a specification technique for building the multiplicative time-varying GARCH models of Amado and Teräsvirta (2008, 2013). The variance is decomposed into an unconditional and a conditional component such that the unconditional variance component is allowed to evolve smoothly over time. This nonstationary component is defined as a linear combination of logistic transition functions with time as the transition variable. The appropriate number of transition functions is determined by a sequence of specification tests, and for that purpose a coherent modelling strategy based on statistical inference is presented. The strategy relies heavily on Lagrange multiplier type misspecification tests, which are easily implemented as they are entirely based on auxiliary regressions. Finite-sample properties of the strategy and tests are examined by simulation. The modelling strategy is illustrated in practice with two real examples: an empirical application to daily exchange rate returns and another to daily coffee futures returns.
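The nonstationary component described above, a linear combination of logistic transition functions with (rescaled) time as the transition variable, can be sketched as follows; parameter values are illustrative:

```python
import numpy as np

def logistic_transition(t, gamma, c):
    """Logistic transition G(t; gamma, c) with slope gamma and
    location c, in rescaled time t/T in (0, 1]."""
    return 1.0 / (1.0 + np.exp(-gamma * (t - c)))

def unconditional_component(t, delta0, terms):
    """g(t) = delta0 + sum over l of delta_l * G(t; gamma_l, c_l)."""
    g = np.full_like(t, delta0, dtype=float)
    for delta, gamma, c in terms:
        g += delta * logistic_transition(t, gamma, c)
    return g

T = 1000
t = np.arange(1, T + 1) / T
# one smooth upward shift in the unconditional variance around t/T = 0.5
g = unconditional_component(t, 1.0, [(2.0, 25.0, 0.5)])
```

Each additional (delta, gamma, c) term adds one smooth level shift; the specification tests in the article decide how many such terms the data support.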
Abstract:
Genome-wide studies of African populations have the potential to reveal powerful insights into the evolution of our species, as these diverse populations have been exposed to intense selective pressures imposed by infectious diseases, diet, and environmental factors. Within Africa, the Sahel Belt extensively overlaps the geographical center of several endemic infections such as malaria, trypanosomiasis, meningitis, and hemorrhagic fevers. We screened 2.5 million single nucleotide polymorphisms in 161 individuals from 13 Sahelian populations, which together with published data cover Western, Central, and Eastern Sahel, and include both nomadic and sedentary groups. We confirmed the role of this Belt as a main corridor for human migrations across the continent. Strong admixture was observed in both Central and Eastern Sahelian populations, with North Africans and Near Eastern/Arabians, respectively, but was absent in Western Sahelian populations. Genome-wide local ancestry inference in admixed Sahelian populations revealed several candidate regions that were significantly enriched for non-autochthonous haplotypes, many of which were shown to be under positive selection. The DARC gene region in Arabs and Nubians was enriched for African ancestry, whereas the RAB3GAP1/LCT/MCM6 region in Oromo, the TAS2R gene family in Fulani, and the ALMS1/NAT8 in Turkana and Samburu were enriched for non-African ancestry. Signals of positive selection varied in terms of geographic amplitude. Some genomic regions were selected across the Belt, the most striking example being the malaria-related DARC gene. Others were Western-specific (oxytocin, calcium, and heart pathways), Eastern-specific (lipid pathways), or even population-restricted (TAS2R genes in Fulani, which may reflect sexual selection).
Abstract:
Inter-individual heterogeneity is evident in aging, and education level is known to contribute to this heterogeneity. Using a cross-sectional study design and network inference applied to resting-state fMRI data, we show that aging was associated with decreased functional connectivity in a large cortical network. On the other hand, education level, as measured by years of formal education, produced the opposite effect in the long term. These results demonstrate increased brain efficiency in individuals with a higher education level, which may mitigate the impact of age on brain functional connectivity.
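A minimal sketch of correlation-based functional connectivity on synthetic region time series (illustrative only, not the study's fMRI pipeline or its network-inference method):

```python
import numpy as np

rng = np.random.default_rng(2)
n_regions, n_timepoints = 10, 200
# synthetic "resting-state" signals: one shared component drives
# regions 0-4, so they should form a connected cluster
shared = rng.standard_normal(n_timepoints)
ts = rng.standard_normal((n_regions, n_timepoints))
ts[:5] += 1.5 * shared

conn = np.corrcoef(ts)                     # region-by-region correlations
adj = (np.abs(conn) > 0.4) & ~np.eye(n_regions, dtype=bool)
degree = adj.sum(axis=1)                   # edges per region
```

Summaries such as node degree over a thresholded connectivity matrix are a common way to quantify the kind of network-level differences the study reports between age and education groups.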
Abstract:
First published online: December 16, 2014.
Abstract:
Master's dissertation in Special Education (specialization in Specific Learning Disabilities)
Abstract:
In recent decades, increased interest has been evident in research on multi-scale hierarchical modelling in the field of mechanics, and also in the field of wood products and timber engineering. One of the main motivations for hierarchical modelling is to understand how properties, composition and structure at lower scale levels may influence, and be used to predict, the material properties on a macroscopic and structural engineering scale. This chapter presents the applicability of statistical and probabilistic methods, such as the Maximum Likelihood method and Bayesian methods, to the representation of timber's mechanical properties and to inference on those properties accounting for prior information obtained at different scales of importance. These methods allow the analysis of distinct timber reference properties, such as density, bending stiffness and strength, and hierarchically consider information obtained through non-destructive, semi-destructive or destructive tests. The basis and fundamentals of the methods are described, and recommendations and limitations are discussed. The methods may be used in several contexts; however, they require expert knowledge to assess the correct statistical fit and to define the correlation arrangement between properties.
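As a minimal illustration of the Maximum Likelihood step mentioned above, a lognormal model for timber density has closed-form ML estimates (the density sample below is hypothetical, not data from the chapter):

```python
import numpy as np

def lognormal_mle(x):
    """Closed-form ML estimates for a lognormal model: the ML estimates
    of mu and sigma are the mean and (biased) std of log(x)."""
    logs = np.log(x)
    return float(np.mean(logs)), float(np.std(logs))

rng = np.random.default_rng(3)
# hypothetical density sample (kg/m^3), lognormally distributed
density = rng.lognormal(mean=np.log(560.0), sigma=0.08, size=400)
mu_hat, sigma_hat = lognormal_mle(density)
```

In a hierarchical setting, a fit like this at one scale (e.g. density from clear specimens) would then serve as prior information for properties at the next scale, via the Bayesian updating the chapter describes.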