871 results for probabilistic refinement calculus
Abstract:
The behavior of pile foundations in non-liquefiable soil under seismic loading is considerably influenced by variability in the soil and seismic design parameters. Hence, probabilistic models for the assessment of seismic pile design are necessary. Deformation of pile foundations in non-liquefiable soil is dominated by the inertial force from the superstructure. The present study adopts a pseudo-static approach based on code-specified design response spectra. The response of the pile is determined by the equivalent-cantilever approach. The soil medium is modeled as a one-dimensional random field along the depth. The variability associated with undrained shear strength, design response spectrum ordinate, and superstructure mass is taken into consideration. The Monte Carlo simulation technique is adopted to determine the probability of failure and reliability indices for two pile failure modes, namely exceedance of the lateral displacement limit and of the moment capacity. A reliability-based design approach for the free-head pile under seismic force is suggested that enables a rational choice of pile design parameters.
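The Monte Carlo step described above can be sketched as follows. Every distribution, the toy pseudo-static displacement relation, and the 25 mm limit are illustrative assumptions for the sketch, not the paper's actual models:

```python
import math
import random
import statistics

def pile_failure_probability(n_trials=50_000, seed=1):
    """Hedged sketch: Monte Carlo estimate of the probability that a
    free-head pile exceeds an assumed lateral displacement limit."""
    rng = random.Random(seed)
    disp_limit_mm = 25.0                                 # assumed limit
    failures = 0
    for _ in range(n_trials):
        s_u = rng.lognormvariate(math.log(50.0), 0.25)   # undrained strength, kPa (assumed)
        sa = rng.lognormvariate(math.log(0.30), 0.30)    # spectral ordinate, g (assumed)
        mass = rng.normalvariate(100.0, 10.0)            # superstructure mass, t (assumed)
        # toy stand-in for the pseudo-static, equivalent-cantilever response:
        # displacement grows with inertial demand and shrinks with soil strength
        disp_mm = 40.0 * sa * mass / s_u
        if disp_mm > disp_limit_mm:
            failures += 1
    p_f = failures / n_trials
    beta = -statistics.NormalDist().inv_cdf(p_f)         # reliability index
    return p_f, beta

p_f, beta = pile_failure_probability()
```

Repeating this for a grid of candidate pile lengths and diameters, and keeping the designs whose reliability index clears a target value, is the kind of rational parameter choice the abstract refers to.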
Abstract:
An applicative language based on the λ-calculus is presented. The language, SLIPS (Small Language for Instruction Purposes), is described using the λ-calculus as a metalanguage. A call-by-need mechanism of function invocation eliminates the drawbacks of both call-by-name and call-by-value. The system has been implemented in Pascal.
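The call-by-need mechanism can be illustrated in Python with a memoizing thunk (a sketch of the evaluation strategy, not of SLIPS itself): the argument is not evaluated until first use, unlike call-by-value, and it is evaluated at most once, unlike call-by-name.

```python
class Thunk:
    """Suspend an expression; evaluate on first force, then cache (call-by-need)."""
    def __init__(self, expr):
        self._expr = expr
        self._evaluated = False
        self._value = None

    def force(self):
        if not self._evaluated:
            self._value = self._expr()
            self._evaluated = True
        return self._value

evaluations = []

def expensive():
    evaluations.append(1)   # count how many times the body actually runs
    return 21

arg = Thunk(expensive)
# Nothing has been evaluated yet (call-by-value would already have run it);
# using the argument twice still evaluates it only once (call-by-name would run it twice).
result = arg.force() + arg.force()
```

After the last line, `result` is 42 and the body has run exactly once, which is precisely the saving call-by-need offers over call-by-name.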
Abstract:
The situation normally encountered in the high-resolution refinement of protein structures is one in which the inaccurate positions of P out of a total of N atoms are known, whereas those of the remaining Q = N − P atoms are unknown. Fourier maps with coefficients (F_N − F'_P) exp(iα'_P) and (mF_N − nF'_P) exp(iα'_P), where F_N is the observed structure factor and F'_P and α'_P are the magnitude and phase angle of the structure factor calculated from the inaccurate atomic positions, are often used to correct the positions of the P known atoms and to determine those of the Q unknown atoms. A general theoretical approach is presented to elucidate the effect of errors in the positions of the known atoms on both the corrected positions of the known atoms and the positions of the unknown atoms derived from such maps. The theory also leads to the optimal choice of the parameters m and n used in the different syntheses. When the errors in the positions of the input atoms are systematic, their effects are not taken care of automatically by the syntheses.
Abstract:
Minimum Description Length (MDL) is an information-theoretic principle that can be used for model selection and other statistical inference tasks. There are various ways to apply the principle in practice. One theoretically valid way is to use the normalized maximum likelihood (NML) criterion, but due to computational difficulties this approach has not been used very often. This thesis presents efficient floating-point algorithms that make it possible to compute the NML for multinomial, Naive Bayes, and Bayesian forest models. None of the presented algorithms rely on asymptotic analysis, and for the first two model classes we also discuss how to compute exact rational-number solutions.
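As a concrete sketch of the NML criterion for the multinomial model, the normalizing term C(n, K) sums the maximized likelihood over all length-n data sequences with K categories. Direct enumeration over count vectors works for small n and K; the thesis is about far faster algorithms, which this toy version makes no attempt to reproduce:

```python
from math import factorial, prod

def multinomial_coef(counts):
    # number of length-n sequences with exactly these category counts
    out = factorial(sum(counts))
    for c in counts:
        out //= factorial(c)
    return out

def compositions(n, k):
    # all ordered splits of n into k non-negative parts
    if k == 1:
        yield (n,)
        return
    for first in range(n + 1):
        for rest in compositions(n - first, k - 1):
            yield (first,) + rest

def nml_normalizer(n, k):
    """C(n, k): sum over all length-n sequences of their maximized likelihood."""
    total = 0.0
    for counts in compositions(n, k):
        ml = prod((c / n) ** c for c in counts if c > 0)   # 0^0 := 1
        total += multinomial_coef(counts) * ml
    return total

def nml(counts):
    # NML of an observed count vector = its maximized likelihood / normalizer
    n = sum(counts)
    ml = prod((c / n) ** c for c in counts if c > 0)
    return ml / nml_normalizer(n, len(counts))
```

For instance, C(2, 2) = 2.5: the sequences HH and TT each contribute a maximized likelihood of 1, while HT and TH contribute 1/4 each.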
Abstract:
What can the statistical structure of natural images teach us about the human brain? Even though the visual cortex is one of the most studied parts of the brain, surprisingly little is known about how exactly images are processed to leave us with a coherent percept of the world around us, so we can recognize a friend or drive on a crowded street without any effort. By constructing probabilistic models of natural images, the goal of this thesis is to understand the structure of the stimulus that is the raison d'être for the visual system. Following the hypothesis that the optimal processing has to be matched to the structure of that stimulus, we attempt to derive computational principles, features that the visual system should compute, and properties that cells in the visual system should have. Starting from machine learning techniques such as principal component analysis and independent component analysis, we construct a variety of statistical models to discover structure in natural images that can be linked to receptive field properties of neurons in primary visual cortex, such as simple and complex cells. We show that by representing images with phase-invariant, complex cell-like units, a better statistical description of the visual environment is obtained than with linear simple cell units, and that complex cell pooling can be learned by estimating both layers of a two-layer model of natural images. We investigate how a simplified model of the processing in the retina, where adaptation and contrast normalization take place, is connected to the natural stimulus statistics. Analyzing the effect that retinal gain control has on later cortical processing, we propose a novel method to perform gain control in a data-driven way. Finally, we show how models like those presented here can be extended to capture whole visual scenes rather than just small image patches. By using a Markov random field approach we can model images of arbitrary size, while still being able to estimate the model parameters from the data.
Abstract:
By applying the theory of the asymptotic distribution of extremes, together with a certain stability criterion, to the question of the domain of convergence (in the probability sense) of the renormalized perturbation expansion (RPE) for the site self-energy in a cellularly disordered system, an expression has been obtained in closed form for the probability of nonconvergence of the RPE on the real-energy axis. Hence, the intrinsic mobility μ(E) as a function of the carrier energy E is deduced to be μ(E) = μ₀ exp(−exp((|E| − E_c)/Δ)), where E_c is a nominal 'mobility edge' and Δ is the width of the random site-energy distribution. Thus the mobility falls off sharply but continuously for |E| > E_c, in contradistinction to the notion of an abrupt 'mobility edge' proposed by Cohen et al. and Mott. The calculated electrical conductivity also shows a temperature dependence in qualitative agreement with experiments on disordered semiconductors.
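Reading the closed form as μ(E) = μ₀·exp(−exp((|E| − E_c)/Δ)) (division by Δ is assumed here, so the inner exponent is dimensionless), a quick numerical check shows the sharp-but-continuous falloff around E_c rather than an abrupt edge:

```python
import math

def mobility(E, mu0=1.0, Ec=1.0, Delta=0.1):
    """mu(E) = mu0 * exp(-exp((|E| - Ec)/Delta)); all values in arbitrary units."""
    return mu0 * math.exp(-math.exp((abs(E) - Ec) / Delta))

# Well inside the band the mobility is close to mu0, exactly at Ec it is mu0/e,
# and a few Delta beyond Ec it is vanishingly small yet never exactly zero.
inside, at_edge, outside = mobility(0.5), mobility(1.0), mobility(1.3)
```

The double exponential is what makes the falloff look edge-like experimentally while remaining a continuous function of E.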
Abstract:
The integration of stochastic wind power has accentuated a challenge for power system stability assessment. Since the power system is a time-variant system under wind generation fluctuations, pure time-domain simulations struggle to provide real-time stability assessment. As a result, the worst-case scenario is simulated, giving a very conservative assessment of system transient stability. In this study, a probabilistic contingency analysis based on a stability measure method is proposed to provide a less conservative contingency analysis that covers 5-min wind fluctuations and a subsequent fault. This probabilistic approach estimates the transfer limit of a critical line for a given fault with stochastic wind generation and active control devices in a multi-machine system. The approach achieves a lower computation cost and improved accuracy using a new stability measure and polynomial interpolation, and is feasible for online contingency analysis.
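The interpolation step can be sketched generically: evaluate the stability measure at a few candidate transfer levels, fit an interpolating polynomial, and search for the level where the measure crosses the stability threshold. The Lagrange interpolation and bisection below are generic stand-ins, not the study's particular stability measure:

```python
def lagrange(xs, ys, x):
    # value at x of the interpolating polynomial through the points (xs, ys)
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def transfer_limit(xs, measures, threshold=0.0):
    """Bisect for the transfer level at which the interpolated stability
    measure drops to the threshold (assumes a single crossing in range)."""
    lo, hi = xs[0], xs[-1]
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if lagrange(xs, measures, mid) > threshold:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With simulated measure samples [0.8, 0.3, -0.2] at transfer levels [0.0, 0.5, 1.0], the interpolant crosses zero at 0.8, which would be reported as the transfer limit for that contingency.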
Abstract:
We analyzed the development of 4th-grade students’ understanding of the transition from experimental relative frequencies of outcomes to theoretical probabilities, with a focus on the foundational statistical concepts of variation and expectation. We report students’ initial and changing expectations of the outcomes of tossing one and two coins, how they related the relative frequency from their physical and computer-simulated trials to the theoretical probability, and how they created and interpreted theoretical probability models. Findings include students’ progression from an initial apparent equiprobability bias in predicting outcomes of tossing two coins through to representing the outcomes of increasing numbers of trials. After observing the decreasing variation from the theoretical probability as the sample size increased, students developed a deeper understanding of the relationship between relative frequency of outcomes and theoretical probability, as well as their respective associations with variation and expectation. Students’ final models indicated increasing levels of probabilistic understanding.
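The students' trials can be mirrored in a short simulation: the relative frequency of "two heads" converges to the theoretical 1/4 (not the 1/3 that an equiprobable view of {two heads, two tails, one of each} would suggest), with the variation shrinking as the number of trials grows.

```python
import random

def rel_freq_two_heads(n_trials, seed=0):
    """Toss two fair coins n_trials times; return the relative frequency
    of both landing heads (theoretical probability 0.25)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        first = rng.random() < 0.5    # coin 1 is heads
        second = rng.random() < 0.5   # coin 2 is heads
        if first and second:
            hits += 1
    return hits / n_trials

# a small sample wanders; a large sample hugs the theoretical probability
small, large = rel_freq_two_heads(100), rel_freq_two_heads(100_000)
```

Plotting such relative frequencies against sample size is exactly the picture of decreasing variation around the theoretical probability that the students observed.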
Abstract:
Hole-doped perovskites such as La1-xCaxMnO3 present special magnetic and magnetotransport properties, and it is commonly accepted that the local atomic structure around Mn ions plays a crucial role in determining these peculiar features. Therefore experimental techniques directly probing the local atomic structure, like x-ray absorption spectroscopy (XAS), have been widely exploited to understand the physics of these compounds in depth. Quantitative XAS analysis usually concerns the extended region [extended x-ray absorption fine structure (EXAFS)] of the absorption spectra. The near-edge region [x-ray absorption near-edge spectroscopy (XANES)] of XAS spectra can provide detailed complementary information on the electronic structure and local atomic topology around the absorber. However, the complexity of the XANES analysis usually prevents a quantitative understanding of the data. This work exploits the recently developed MXAN code to achieve a quantitative structural refinement of the Mn K-edge XANES of LaMnO3 and CaMnO3, the end compounds of the doped manganite series La1-xCaxMnO3. The results derived from the EXAFS and XANES analyses are in good agreement, demonstrating that a quantitative picture of the local structure can be obtained from XANES in these crystalline compounds. Moreover, the quantitative XANES analysis provides topological information not directly achievable from EXAFS data analysis. This work demonstrates that combining the analysis of the extended and near-edge regions of Mn K-edge XAS spectra can provide a complete and accurate description of the local atomic environment of Mn in these compounds.
Abstract:
PURPOSE: To report the linkage analysis of retinitis pigmentosa (RP) in an Indian family. METHODS: Individuals were examined for symptoms of retinitis pigmentosa, and their blood samples were drawn for genetic analysis. The disorder was tested for linkage to 14 known adRP and 22 arRP loci using microsatellite markers. RESULTS: Seventeen individuals, including seven affected, participated in the study. All affected individuals had typical RP. The age of onset of the disease ranged from 8-18 years. The disorder in this family segregated either as an autosomal recessive trait with pseudodominance or as an autosomal dominant trait. Linkage to the autosomal recessive locus RP28 on chromosome 2p14-p15 was positive, with a maximum two-point lod score of 3.96 at theta=0 for D2S380. All affected individuals were homozygous for alleles at D2S2320, D2S2397, D2S380, and D2S136. Recombination events placed the minimum critical region (MCR) for the RP28 gene in a 1.06 cM region between D2S2225 and D2S296. CONCLUSIONS: The present data confirmed linkage of arRP to the RP28 locus in a second Indian family. The RP28 locus was previously mapped to a 16 cM region between D2S1337 and D2S286 in a single Indian family. Haplotype analysis in this family has further narrowed the MCR for the RP28 locus to a 1.06 cM region between D2S2225 and D2S296. Of 15 genes reported in the MCR, 14 genes (KIAA0903, OTX1, MDH1, UGP2, VPS54, PELI1, HSPC159, FLJ20080, TRIP-Br2, SLC1A4, KIAA0582, RAB1A, ACTR2, and SPRED2) are either expressed in the eye or retina. Further study needs to be done to test which of these genes is mutated in patients with RP linked to the RP28 locus.
Abstract:
We study how probabilistic reasoning and inductive querying can be combined within ProbLog, a recent probabilistic extension of Prolog. ProbLog can be regarded as a database system that supports both probabilistic and inductive reasoning through a variety of querying mechanisms. After a short introduction to ProbLog, we provide a survey of the different types of inductive queries that ProbLog supports, and show how it can be applied to the mining of large biological networks.
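ProbLog's distribution semantics can be illustrated with a toy world-enumeration in Python (a pedagogical sketch, not the ProbLog engine, which avoids exhaustive enumeration): independent probabilistic facts induce a distribution over possible worlds, and a query's probability is the total weight of the worlds in which it is derivable. The edge probabilities below are invented for the example.

```python
from itertools import product

# independent probabilistic facts, as in "0.8::edge(a,b)." in ProbLog syntax
facts = {"edge(a,b)": 0.8, "edge(b,c)": 0.7, "edge(a,c)": 0.1}

def path_a_c(world):
    # query path(a,c): a direct edge, or a two-step path through b
    return world["edge(a,c)"] or (world["edge(a,b)"] and world["edge(b,c)"])

def query_probability(query):
    """Sum the probability mass of every possible world where the query holds."""
    names = list(facts)
    total = 0.0
    for bits in product([True, False], repeat=len(names)):
        world = dict(zip(names, bits))
        weight = 1.0
        for fact, truth in world.items():
            weight *= facts[fact] if truth else 1.0 - facts[fact]
        if query(world):
            total += weight
    return total
```

Here P(path(a,c)) = 1 − (1 − 0.1)·(1 − 0.8·0.7) = 0.604; in the biological-network application each edge probability would come from experimental evidence rather than being assumed.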
Abstract:
A performance-based liquefaction potential analysis was carried out in the present study to estimate the liquefaction return period for Bangalore, India, through a probabilistic approach. In this approach, the entire range of peak ground acceleration (PGA) and earthquake magnitudes was used in the evaluation of the liquefaction return period. The seismic hazard analysis for the study area was carried out probabilistically to evaluate the peak horizontal acceleration at bedrock level. Based on the results of multichannel analysis of surface waves, the study area was found to belong to site class D. The PGA values for the study area were evaluated for site class D by considering the local site effects. The soil resistance of the study area was characterized using standard penetration test (SPT) values obtained from 450 boreholes. These SPT data, along with the PGA values obtained from the probabilistic seismic hazard analysis, were used to evaluate the liquefaction return period for the study area. Contour plots showing the spatial variation of the factor of safety against liquefaction and of the corrected SPT values required to prevent liquefaction for a return period of 475 years at depths of 3 and 6 m are presented in this paper. The entire process of liquefaction potential evaluation, from collection of earthquake data and identification of seismic sources through evaluation of seismic hazard to assessment of the liquefaction return period, was carried out on a probabilistic basis.
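The deterministic core of each liquefaction check can be sketched with the widely used Seed–Idriss simplified procedure (an assumption here; the study's exact SPT-based relations may differ): the cyclic stress ratio (CSR) computed from the PGA, compared against the cyclic resistance ratio (CRR) derived from corrected SPT blow counts, gives the factor of safety.

```python
def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, depth_m):
    """CSR = 0.65 * (a_max/g) * (sigma_v / sigma_v') * r_d, with the common
    linear depth-reduction approximation for r_d (assumed, not the study's)."""
    if depth_m <= 9.15:
        r_d = 1.0 - 0.00765 * depth_m
    else:
        r_d = 1.174 - 0.0267 * depth_m
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * r_d

def factor_of_safety(crr, csr):
    # FS > 1: liquefaction not expected; FS < 1: liquefaction expected
    return crr / csr

# illustrative values only: 3 m depth, a_max = 0.15 g,
# total stress 54 kPa, effective stress 39 kPa
csr = cyclic_stress_ratio(0.15, 54.0, 39.0, 3.0)
fs = factor_of_safety(0.20, csr)
```

In the probabilistic setting, this deterministic check is repeated over the full hazard curve of PGA and magnitude values, and the return period follows from the annual rates at which FS falls below 1 at each borehole location.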