997 results for Probabilities


Relevance: 10.00%

Abstract:

A set of predictor variables is said to be intrinsically multivariate predictive (IMP) for a target variable if all properly contained subsets of the predictor set are poor predictors of the target but the full set predicts the target with great accuracy. In a previous article, the main properties of IMP Boolean variables were analytically described, including the introduction of the IMP score, a metric based on the coefficient of determination (CoD) as a measure of predictiveness with respect to the target variable. It was shown that the IMP score depends on four main properties: logic of connection, predictive power, covariance between predictors, and marginal predictor probabilities (biases). This paper extends that work to a broader context, in an attempt to characterize properties of discrete Bayesian networks that contribute to the presence of variables (network nodes) with high IMP scores. We have found that there is a relationship between the IMP score of a node and its territory size, i.e., its position along a pathway with one source: nodes far from the source display larger IMP scores than those closer to the source, and longer pathways display larger maximum IMP scores. This appears to be a consequence of the fact that nodes with a small territory have a larger probability of having highly covariate predictors, which leads to smaller IMP scores. In addition, a larger number of XOR and NXOR predictive logic relationships has a positive influence on the maximum IMP score found in the pathway. This work presents analytical results based on a simple network structure and an analysis involving random networks constructed by computational simulation. Finally, results from a real Bayesian network application are provided. (C) 2012 Elsevier Inc. All rights reserved.
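
Where the abstract leans on the CoD and the IMP score, a minimal sketch may help. The code below is an illustration under stated assumptions, not the paper's implementation: `cod` estimates the error reduction of the optimal Boolean predictor over the best constant guess, and `imp_score` takes the full-set CoD minus the best CoD over all proper subsets; the article's exact IMP score definition may differ.

```python
from itertools import combinations

import numpy as np

def cod(samples, predictors, target):
    """Coefficient of determination: relative error reduction of the
    optimal predictor built on `predictors` over the best constant guess."""
    y = samples[:, target]
    eps0 = min(np.mean(y != 0), np.mean(y != 1))  # best constant predictor error
    X = samples[:, predictors]
    err = 0.0
    for x in set(map(tuple, X)):
        mask = (X == x).all(axis=1)
        yy = y[mask]
        # the optimal predictor outputs the majority target value per pattern
        err += min(np.mean(yy != 0), np.mean(yy != 1)) * mask.mean()
    return (eps0 - err) / eps0 if eps0 > 0 else 0.0

def imp_score(samples, predictors, target):
    """IMP-style score: full-set CoD minus the best CoD over all properly
    contained predictor subsets (a sketch, not the paper's exact metric)."""
    full = cod(samples, list(predictors), target)
    best_sub = max(cod(samples, list(s), target)
                   for r in range(1, len(predictors))
                   for s in combinations(predictors, r))
    return full - best_sub

# XOR example: each predictor alone is useless, together they are perfect
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(10_000, 2))
y = X[:, 0] ^ X[:, 1]
data = np.column_stack([X, y])
print(imp_score(data, (0, 1), 2))  # close to 1
```

The XOR target is the canonical high-IMP case mentioned in the abstract: subset CoDs are near zero while the full-set CoD is near one.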

Relevance: 10.00%

Abstract:

In this paper, the effects of uncertainty and expected costs of failure on optimum structural design are investigated by comparing three distinct formulations of structural optimization problems. Deterministic Design Optimization (DDO) allows one to find the shape or configuration of a structure that is optimum in terms of mechanics, but the formulation grossly neglects parameter uncertainty and its effects on structural safety. Reliability-Based Design Optimization (RBDO) has emerged as an alternative to properly model the safety-under-uncertainty part of the problem. With RBDO, one can ensure that a minimum (and measurable) level of safety is achieved by the optimum structure. However, results are dependent on the failure probabilities used as constraints in the analysis. Risk Optimization (RO) increases the scope of the problem by addressing the competing goals of economy and safety. This is accomplished by quantifying the monetary consequences of failure, as well as the costs associated with construction, operation and maintenance. RO yields the optimum topology and the optimum point of balance between economy and safety. Results are compared for some example problems. The broader RO solution is found first, and the optimum results are used as constraints in DDO and RBDO. Results show that even when optimum safety coefficients are used as constraints in DDO, the formulation leads to configurations which respect these design constraints and reduce manufacturing costs, but increase total expected costs (including expected costs of failure). When the (optimum) system failure probability is used as a constraint in RBDO, the solution likewise reduces manufacturing costs while increasing total expected costs. This happens when the costs associated with different failure modes are distinct. Hence, a general equivalence between the formulations cannot be established. Optimum structural design considering expected costs of failure cannot be controlled solely by safety factors or by failure probability constraints, but depends on the actual structural configuration. (c) 2011 Elsevier Ltd. All rights reserved.
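
The trade-off that RO formalizes can be written roughly as minimizing C_total(d) = C_construction(d) + P_f(d) · C_failure over the design d. The sketch below is a toy illustration, not the paper's model: the cost figures and the load and strength distributions are all assumed, the failure probability is estimated by crude Monte Carlo, and a single design variable is optimized.

```python
import numpy as np
from scipy.optimize import minimize_scalar

C_FAIL = 1e6       # assumed monetary consequence of failure
UNIT_COST = 100.0  # assumed manufacturing cost per unit area

def failure_probability(a, n=200_000, seed=0):
    """Crude Monte Carlo: failure when the random load S exceeds the
    capacity R(a). Note that very small probabilities are under-resolved
    at this sample size; a real RO study would use better estimators."""
    rng = np.random.default_rng(seed)
    R = a * rng.normal(50.0, 5.0, n)   # capacity, uncertain material strength
    S = rng.normal(1000.0, 300.0, n)   # load
    return np.mean(S > R)

def total_expected_cost(a):
    # construction cost + expected cost of failure
    return UNIT_COST * a + C_FAIL * failure_probability(a)

res = minimize_scalar(total_expected_cost, bounds=(10.0, 100.0), method="bounded")
print(f"optimal area ~ {res.x:.1f}, expected total cost ~ {res.fun:.0f}")
```

Running DDO or RBDO on the same toy problem would fix a safety factor or a target P_f up front; the RO formulation instead lets the failure-cost term decide how much safety is worth buying, which is exactly where the paper finds the formulations diverge.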

Relevance: 10.00%

Abstract:

The number of citations received by authors in scientific journals has become a major parameter to assess individual researchers and the journals themselves through the impact factor. A fair assessment therefore requires that the criteria for selecting references in a given manuscript be unbiased with regard to the authors or journals cited. In this paper, we assess approaches for citations considering two recommendations for authors to follow while preparing a manuscript: (i) consider the similarity of content with the topics investigated, lest related work be reproduced or ignored; (ii) perform a systematic search over the network of citations, including seminal or closely related papers. We use formalisms of complex networks for two datasets of papers from the arXiv and the Web of Science repositories to show that neither of these two criteria is fulfilled in practice. By representing the texts as complex networks we estimated a similarity index between pieces of text and found that the list of references did not contain the most similar papers in the dataset. This was quantified by calculating a consistency index, whose maximum value is one if the references in a given paper are the most similar in the dataset. For the areas of "complex networks" and "graphenes", the consistency index was only 0.11-0.23 and 0.10-0.25, respectively. To simulate a systematic search in the citation network, we employed a traditional random-walk search (i.e. diffusion) and a random walk whose transition probabilities are proportional to the number of ingoing edges of the neighbours. The frequency of visits to the nodes (papers) in the network had a very small correlation with either the actual list of references in the papers or the number of downloads from the arXiv repository. Therefore, the authors and users of the repository apparently did not follow the criterion related to a systematic search over the network of citations. Based on these results, we propose an approach that we believe is fairer for evaluating and complementing the citations of a given author, effectively leading to a virtual scientometry.
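
A minimal sketch of the degree-biased walk described above, assuming a directed citation graph held in networkx; the restart-at-dead-ends rule is my own simplification of how such a walk is usually kept ergodic, not necessarily the paper's choice.

```python
import random
from collections import Counter

import networkx as nx

def biased_walk_visits(G, steps=100_000, seed=0):
    """Random walk on a directed citation graph where the probability of
    moving to a neighbour is proportional to that neighbour's in-degree
    (its citation count); the walk restarts at a random node at dead ends."""
    rng = random.Random(seed)
    nodes = list(G.nodes)
    visits = Counter()
    v = rng.choice(nodes)
    for _ in range(steps):
        visits[v] += 1
        nbrs = list(G.successors(v))
        if not nbrs:
            v = rng.choice(nodes)  # dead end: restart uniformly
            continue
        weights = [G.in_degree(u) + 1 for u in nbrs]  # +1 avoids zero weights
        v = rng.choices(nbrs, weights=weights, k=1)[0]
    return visits

# Usage sketch on a toy random citation graph
G = nx.gnp_random_graph(200, 0.02, directed=True, seed=1)
print(biased_walk_visits(G).most_common(5))
```

The visit frequencies from such a walk are what the authors correlate with reference lists and download counts.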

Relevance: 10.00%

Abstract:

Questions: Does the spatial association between isolated adult trees and understorey plants change along a gradient of sand dunes? Does this association depend on the life form of the understorey plant?

Location: Coastal sand dunes, southeast Brazil.

Methods: We recorded the occurrence of understorey plant species in 100 paired 0.25-m² plots under adult trees and in adjacent treeless sites along an environmental gradient from beach to inland. Occurrence probabilities were modelled as a function of the fixed variables presence of a neighbour, distance from the seashore and life form, and a random variable, the block (i.e. the pair of plots). Generalized linear mixed models (GLMM) were fitted in a backward stepwise procedure using Akaike's information criterion (AIC) for model selection.

Results: The occurrence of understorey plants was affected by the presence of an adult tree neighbour, but the effect varied with the life form of the understorey species. A positive spatial association was found between isolated adult neighbours and young trees, whereas a negative association was found for shrubs. Moreover, a neutral association was found for lianas, whereas for herbs the effect of the presence of an adult neighbour ranged from neutral to negative, depending on the subgroup considered. The strength of the negative association with forbs increased with distance from the seashore. However, for the other life forms, the associational pattern with adult trees did not change along the gradient.

Conclusions: For most of the understorey life forms there is no evidence that the spatial association between isolated adult trees and understorey plants changes with distance from the seashore, as predicted by the stress gradient hypothesis, a common hypothesis in the literature on facilitation in plant communities. Furthermore, the positive spatial association between isolated adult trees and young trees identified along the entire gradient studied indicates a positive feedback that explains the transition from open vegetation to forest in subtropical coastal dune environments.
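
For readers unfamiliar with the model-selection step, here is a compact sketch. It is a simplification under assumptions: the data file and column names are hypothetical, and the random block effect is dropped so a plain logistic regression stands in for the GLMM; a dedicated mixed-model routine would be needed to reproduce the actual analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: occurrence (0/1), neighbour (0/1),
# distance (m from the seashore), life_form, block (plot-pair id).
df = pd.read_csv("dune_plots.csv")

# Fixed-effects stand-in for the paper's GLMM; candidate models are
# compared by AIC, mimicking one step of the backward stepwise procedure.
full = smf.logit("occurrence ~ neighbour * life_form * distance", df).fit()
reduced = smf.logit("occurrence ~ neighbour * life_form + distance", df).fit()
print(full.aic, reduced.aic)  # keep the model with the lower AIC
```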

Relevance: 10.00%

Abstract:

Deficiencies in calcium (Ca) and magnesium (Mg) are associated with various complications during pregnancy. To test the hypothesis that the status of these minerals is inadequate in pregnancy, a cross-sectional study of the dietary intake and status of Ca and Mg was conducted in pregnant women (n = 50) attending a general public university hospital in Brazil. Dietary intake was assessed from 4-day food records; levels of plasma Mg, erythrocyte Mg, and urinary Ca and Mg excretion were determined by flame atomic absorption spectroscopy; and type I collagen C-telopeptides were evaluated by enzyme-linked immunosorbent assay. Probabilities of inadequate Ca and Mg intake were exhibited by 58% and 98% of the study population, respectively. The mean levels of urinary Ca and Mg excretion were 8.55 and 3.77 mmol/L, respectively. Plasma C-telopeptides, plasma Mg, and erythrocyte Mg were within normal levels. Multiple linear regression analysis revealed positive relationships between urinary Ca excretion and both Ca intake (P = .002) and urinary Mg excretion (P < .001), and between erythrocyte Mg and Mg intake (P = .023). It is concluded that the Ca and Mg status of the participants was adequate even though the intake of Ca and Mg was lower than the recommended level. (C) 2012 Elsevier Inc. All rights reserved.

Relevance: 10.00%

Abstract:

Objectives: To evaluate the accuracy and probabilities of different fetal ultrasound parameters to predict neonatal outcome in isolated congenital diaphragmatic hernia (CDH).

Methods: Between January 2004 and December 2010, we prospectively evaluated 108 fetuses with isolated CDH (82 left-sided and 26 right-sided). The following parameters were evaluated: gestational age at diagnosis, side of the diaphragmatic defect, presence of polyhydramnios, presence of liver herniated into the fetal thorax (liver-up), lung-to-head ratio (LHR) and observed/expected LHR (o/e-LHR), observed/expected contralateral and total fetal lung volume ratios (o/e-ContFLV and o/e-TotFLV), ultrasonographic fetal lung volume/fetal weight ratio (US-FLW), observed/expected contralateral and main pulmonary artery diameter ratios (o/e-ContPA and o/e-MPA) and the contralateral vascularization index (Cont-VI). The outcomes were neonatal death and severe postnatal pulmonary arterial hypertension (PAH).

Results: Neonatal mortality was 64.8% (70/108). Severe PAH was diagnosed in 68 (63.0%) cases, of which 63 (92.6%) died neonatally (P < 0.001). Gestational age at diagnosis, side of the defect and polyhydramnios were not associated with poor outcome (P > 0.05). LHR, o/e-LHR, liver-up, o/e-ContFLV, o/e-TotFLV, US-FLW, o/e-ContPA, o/e-MPA and Cont-VI were associated with both neonatal death and severe postnatal PAH (P < 0.001). Receiver-operating characteristics curves indicated that measuring total lung volumes (o/e-TotFLV and US-FLW) was more accurate than considering only the contralateral lung size (LHR, o/e-LHR and o/e-ContFLV; P < 0.05), and that Cont-VI was the most accurate ultrasound parameter for predicting neonatal death and severe PAH (P < 0.001).

Conclusions: Evaluating total lung volumes is more accurate than measuring only the contralateral lung size. Evaluating pulmonary vascularization (Cont-VI) is the most accurate predictor of neonatal outcome. Estimating the probability of survival and of severe PAH allows classification of cases according to prognosis. Copyright (C) 2011 ISUOG. Published by John Wiley & Sons, Ltd.
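
A brief sketch of how such parameters are compared by ROC analysis; everything here is illustrative (the file names, the sign convention and the choice of parameters are assumptions, and a formal curve comparison such as DeLong's test is not included).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical arrays: one value per fetus, outcome 1 = neonatal death.
outcome = np.loadtxt("outcome.txt")  # 0/1 labels
params = {
    "LHR": np.loadtxt("lhr.txt"),
    "o/e-TotFLV": np.loadtxt("oe_totflv.txt"),
    "Cont-VI": np.loadtxt("cont_vi.txt"),
}

for name, values in params.items():
    # Lower values of these parameters predict death, so negate the
    # score to keep the AUC above 0.5 for an informative predictor.
    auc = roc_auc_score(outcome, -values)
    print(f"{name}: AUC = {auc:.2f}")
```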

Relevance: 10.00%

Abstract:

Statistical methods have been widely employed to assess the capabilities of credit scoring classification models in order to reduce the risk of wrong decisions when granting credit facilities to clients. The predictive quality of a classification model can be evaluated based on measures such as sensitivity, specificity, predictive values, accuracy, correlation coefficients and information-theoretical measures, such as relative entropy and mutual information. In this paper we analyze the performance of a naive logistic regression model (Hosmer & Lemeshow, 1989) and a logistic regression with state-dependent sample selection model (Cramer, 2004) applied to simulated data. As a case study, the methodology is also illustrated on a data set extracted from a Brazilian bank portfolio. Our simulation results revealed that there is no statistically significant difference in terms of predictive capacity between the naive logistic regression models and the logistic regression with state-dependent sample selection models. However, there is a strong difference between the distributions of the estimated default probabilities from these two statistical modeling techniques, with the naive logistic regression models always underestimating such probabilities, particularly in the presence of balanced samples. (C) 2012 Elsevier Ltd. All rights reserved.
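
For concreteness, a minimal sketch of the naive side of the comparison (the data file and covariate names are hypothetical; the state-dependent sample selection variant of Cramer (2004) requires a bespoke likelihood and is not reproduced here).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical portfolio data: `default` (0/1) plus borrower covariates.
df = pd.read_csv("portfolio.csv")

# Naive logistic regression (Hosmer & Lemeshow style): fit on the full
# sample and read off estimated default probabilities.
model = smf.logit("default ~ income + debt_ratio + age", df).fit()
pd_hat = model.predict(df)

# Comparing the distribution of estimated PDs across modelling choices
# (e.g., against a sample-selection variant) is where the paper finds the
# naive model systematically underestimating default probabilities.
print(pd_hat.describe())
```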

Relevance: 10.00%

Abstract:

The aim of this study was to evaluate the immunoexpression of MMP-2, MMP-9 and CD31/microvascular density in squamous cell carcinomas of the floor of the mouth and to correlate the results with demographic, survival, clinical (TNM staging) and histopathological variables (tumor grade, perineural invasion, embolization and bone invasion). Data from the medical records and diagnoses of 41 patients were reviewed. Histological sections were subjected to immunostaining using primary antibodies against human MMP-2, MMP-9 and CD31 and a streptavidin-biotin-immunoperoxidase system. Histomorphometric analyses quantified positivity for MMPs (20 fields per slide, 100-point grid, ×200) and for CD31 (microvessels <50 µm in the area of highest vascularization, 5 fields per slide, 100-point grid, ×400). The statistical design comprised the non-parametric Mann-Whitney U test (investigating the association between numerical variables and immunostaining), the chi-square frequency test (in contingency tables), Fisher's exact test (when at least one expected frequency was less than 5 in 2×2 tables), the Kaplan-Meier method (estimated probabilities of overall survival) and the log-rank test (comparison of survival curves), all with a significance level of 5%. There was a statistically significant correlation between immunostaining for MMP-2 and lymph node metastasis. Factors negatively associated with survival were N stage, histopathological grade, perineural invasion and immunostaining for MMP-9. There was no significant association between the immunoexpression of CD31 and the other variables. The intensity of immunostaining for MMP-2 can be indicative of metastasis in lymph nodes, and that for MMP-9 of a lower probability of survival.
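
A small sketch of the survival piece of that design using the lifelines library; the file and column names are hypothetical, and dichotomizing MMP-9 staining as high/low is my assumption about how the groups would be formed.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical columns: time (months), event (1 = death),
# mmp9_high (1 = strong MMP-9 immunostaining).
df = pd.read_csv("floor_of_mouth_scc.csv")
high, low = df[df.mmp9_high == 1], df[df.mmp9_high == 0]

# Kaplan-Meier estimate of overall survival in the MMP-9-high group
km = KaplanMeierFitter()
km.fit(high["time"], high["event"], label="MMP-9 high")
print(km.survival_function_.tail())

# Log-rank comparison of the two survival curves at the 5% level
res = logrank_test(high["time"], low["time"],
                   event_observed_A=high["event"],
                   event_observed_B=low["event"])
print(res.p_value)
```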

Relevance: 10.00%

Abstract:

CONTEXT AND OBJECTIVE: Epidemiology may help educators face the challenge of establishing content guidelines for medical school curricula. The aim was to develop learning objectives for a medical curriculum from an epidemiology database. DESIGN AND SETTING: Descriptive study assessing morbidity and mortality data, conducted in a private university in São Paulo. METHODS: An epidemiology database was used, with mortality and morbidity recorded as summaries of deaths and the World Health Organization's Disability-Adjusted Life Year (DALY). The scoring took into consideration probabilities for mortality and morbidity. RESULTS: The scoring yielded a classification of health conditions to be used by a curriculum design committee, based on its highest and lowest quartiles, which corresponded respectively to the highest and lowest impact on morbidity and mortality. Data from three countries were used for international comparison and showed distinct results. The resulting scores indicated topics to be developed through educational taxonomy. CONCLUSION: The frequencies of the health conditions and their statistical treatment made it possible to identify topics that should be fully developed within medical education. The classification also suggested limits between topics that should be developed in depth, including knowledge and the development of skills and attitudes, and topics that can be presented concisely at the level of knowledge.
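
As a rough illustration of the quartile idea only: the data file, column names and the equal weighting of deaths and DALYs below are all assumptions, not the study's actual scoring.

```python
import pandas as pd

# Hypothetical table: one row per health condition, with death counts
# and DALYs; the combined score here weights the two equally.
df = pd.read_csv("burden.csv")
score = df["deaths"].rank(pct=True) + df["dalys"].rank(pct=True)

# Quartile split: top quartile = develop in depth (knowledge, skills,
# attitudes); bottom quartile = present concisely at knowledge level.
df["priority"] = pd.qcut(score, 4, labels=["concise", "Q2", "Q3", "in depth"])
print(df.sort_values("priority").head())
```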

Relevance: 10.00%

Abstract:

This thesis presents Bayesian solutions to inference problems for three types of social network data structures: a single observation of a social network, repeated observations on the same social network, and repeated observations on a social network developing through time. A social network is conceived as a structure consisting of actors and their social interaction with each other. A common conceptualisation of social networks is to let the actors be represented by nodes in a graph, with edges between pairs of nodes that are relationally tied to each other according to some definition. Statistical analysis of social networks is to a large extent concerned with modelling these relational ties, which lends itself to empirical evaluation.

The first paper deals with a family of statistical models for social networks called exponential random graphs, which takes various structural features of the network into account. In general, the likelihood functions of exponential random graphs are only known up to a constant of proportionality. A procedure for performing Bayesian inference using Markov chain Monte Carlo (MCMC) methods is presented. The algorithm consists of two basic steps: one in which an ordinary Metropolis-Hastings updating step is used, and another in which an importance sampling scheme is used to calculate the acceptance probability of the Metropolis-Hastings step.

In the second paper a method for modelling reports given by actors (or other informants) on their social interaction with others is investigated in a Bayesian framework. The model contains two basic ingredients: the unknown network structure and functions that link this unknown network structure to the reports given by the actors. These functions take the form of probit link functions. An intrinsic problem is that the model is not identified, meaning that there are combinations of values of the unknown structure and the parameters in the probit link functions that are observationally equivalent. Instead of using restrictions to achieve identification, it is proposed that the different observationally equivalent combinations of parameters and unknown structure be investigated a posteriori. Estimation of parameters is carried out using Gibbs sampling with a switching device that enables transitions between posterior modal regions. The main goal of the procedures is to provide tools for comparisons of different model specifications.

Papers 3 and 4 propose Bayesian methods for longitudinal social networks. The premise of the models investigated is that overall change in social networks occurs as a consequence of sequences of incremental changes. Models for the evolution of social networks using continuous-time Markov chains are meant to capture these dynamics. Paper 3 presents an MCMC algorithm for exploring the posteriors of parameters for such Markov chains. More specifically, the unobserved evolution of the network in between observations is explicitly modelled, thereby avoiding the need to deal with explicit formulas for the transition probabilities. This enables likelihood-based parameter inference in a wider class of network evolution models than has been available before. Paper 4 builds on the proposed inference procedure of Paper 3 and demonstrates how to perform model selection for a class of network evolution models.
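
To make the ERGM step concrete, here is a schematic Metropolis-Hastings update under a flat prior. It is a sketch of the general idea, not the thesis's algorithm: `est_log_Z_ratio` is a placeholder for the importance-sampling estimate of the intractable normalizing-constant ratio that the first paper supplies.

```python
import numpy as np

def mh_step(theta, obs_stats, est_log_Z_ratio, rng, step=0.1):
    """One Metropolis-Hastings update of ERGM parameters, flat prior.

    The ERGM log-likelihood is theta . s(x) - log Z(theta). The ratio
    Z(theta)/Z(theta_new) is intractable, so est_log_Z_ratio(theta,
    theta_new) must return an estimate of log(Z(theta)/Z(theta_new)),
    e.g. obtained by importance sampling over simulated graphs."""
    theta_new = theta + rng.normal(0.0, step, size=theta.shape)
    log_alpha = ((theta_new - theta) @ obs_stats
                 + est_log_Z_ratio(theta, theta_new))
    if np.log(rng.uniform()) < log_alpha:
        return theta_new  # accept the proposal
    return theta          # reject and keep the current state
```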

Relevance: 10.00%

Abstract:

Atomic physics plays an important role in determining the evolution stages of a wide range of laboratory and cosmic plasmas. Therefore, the main contribution to our ability to model, infer and control plasma sources is knowledge of the underlying atomic processes. Of particular importance are reliable low-temperature dielectronic recombination (DR) rate coefficients. This thesis provides systematically calculated DR rate coefficients for lithium-like beryllium and sodium ions via ∆n = 0 doubly excited resonant states. The calculations are based on complex-scaled relativistic many-body perturbation theory in an all-order formulation within the single- and double-excitation coupled-cluster scheme, including radiative corrections. Comparison of DR resonance parameters (energy levels, autoionization widths, radiative transition probabilities and strengths) between our theoretical predictions and heavy-ion storage-ring experiments (CRYRING, Stockholm and TSR, Heidelberg) shows good agreement. The intruder state problem is a principal obstacle to general application of the coupled-cluster formalism to doubly excited states. Thus, we have developed a technique designed to avoid the intruder state problem. It is based on a convenient partitioning of the Hilbert space and a reformulation of the conventional set of pair equations. The general aspects of this development are discussed, and the effectiveness of its numerical implementation (within the non-relativistic framework) is selectively illustrated on autoionizing doubly excited states of helium.

Relevance: 10.00%

Abstract:

This thesis consists of four self-contained essays in economics.

Tournaments and unfair treatment. This paper introduces the negative feelings associated with the perception of being unfairly treated into a tournament model and examines the impact of these perceptions on workers' efforts and their willingness to work overtime. The effect of unfair treatment on workers' behavior is ambiguous in the model, in that two countervailing effects arise: a negative impulsive effect and a positive strategic effect. The impulsive effect implies that workers react to the perception of being unfairly treated by reducing their level of effort. The strategic effect implies that workers raise this level in order to improve their career opportunities and thereby avoid feeling even more unfairly treated in the future. An empirical test of the model using survey data from a Swedish municipal utility shows that the overall effect is negative. This suggests that employers should consider the negative impulsive effect of unfair treatment on effort and overtime when designing contracts and deciding on promotions.

Late careers in Sweden between 1970 and 2000. This essay studies Swedish workers' late careers between 1970 and 2000. The aim is to examine older workers' career patterns and whether they changed during this period. For example, is there a difference in career mobility or labor market exit between cohorts? What affects the late career, and does this differ between cohorts? The analysis shows that between 1970 and 2000 the late careers of Swedish workers comprised few job changes and consisted more of "trying to keep the job you had in your mid-fifties" than of climbing up the promotion ladder. There are no cohort differences in this pattern. A large fraction of the older workers also exited the labor market before the normal retirement age of 65. During the 1970s and the first part of the 1980s, 56 percent of the older workers made an early exit and the average drop-out age was 63. During the late 1980s and the 1990s, the share of older workers who made an early exit had risen to 76 percent and the average drop-out age had dropped to 61.5. Different factors affected the probability of an early exit between 1970 and 2000. For example, skills affected the risk of exiting the labor market during the 1970s and up to the mid-1980s, but not in the late 1980s or the 1990s. During the first period, older workers in the lowest occupations or with the lowest level of education were more likely to exit the labor market than more highly skilled workers. In the second period, older workers at all skill levels had the same probability of leaving the labor market.

The growth and survival of establishments: does gender segregation matter? We empirically examine the employment dynamics that arise in Becker's (1957) model of labor market discrimination. According to the model, firms that employ a large fraction of women will be relatively more profitable due to lower wage costs, and thus enjoy a greater probability of surviving and growing by underselling other firms in the competitive product market. In order to test these implications, we use a unique Swedish matched employer-employee data set. We find that female-dominated establishments do not enjoy any greater probability of surviving and do not grow faster than other establishments. Additionally, we find that establishments that are integrated in terms of gender, age and education levels are more successful than other establishments. Thus, attempts by legislators to integrate firms along all dimensions of diversity may have positive effects on the growth and survival of firms.

Risk and overconfidence: gender differences in financial decision-making as revealed in the TV game show Jeopardy. We have used unique data from the Swedish version of the TV show Jeopardy to uncover gender differences in financial decision-making by looking at the contestants' final wagering strategies. After ruling out empirical best responses, which do appear in Jeopardy in the US, a simple model is derived to show that risk preferences and the subjective and objective probabilities of answering correctly (individual and group competence) determine wagering strategies. The empirical model shows that, on average, women adopt more conservative and diversified strategies, while men's strategies aim for the greatest gains. Further, women's strategies are more responsive to the competence measures, which suggests that they are less overconfident. Together these traits make women more successful players. These results are in line with earlier findings on gender and financial trading.

Relevance: 10.00%

Abstract:

[EN] A new algorithm for evaluating the top event probability of large fault trees (FTs) is presented. This algorithm does not require any previous qualitative analysis of the FT. Indeed, its efficiency is independent of the FT logic, and it only depends on the number n of basic system components and on their failure probabilities. Our method provides exact lower and upper bounds on the top event probability by using new properties of the intrinsic order relation between binary strings. The intrinsic order enables one to select binary n-tuples with large occurrence probabilities without the need to evaluate them. This drastically reduces the complexity of the problem from exponential (2^n binary n-tuples) to linear (n Boolean variables)...
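
A minimal sketch of the bounding idea, under stated assumptions: for n independent components with failure probabilities p_i, the probability of a binary n-tuple factorizes, and summing the occurrence probabilities of tuples already evaluated against the FT logic yields exact lower and upper bounds without enumerating all 2^n states. The selection rule via the intrinsic order is only indicated here, not implemented.

```python
def tuple_prob(bits, p):
    """Occurrence probability of a binary n-tuple for independent
    components: product of p_i (bit 1) or 1 - p_i (bit 0)."""
    out = 1.0
    for b, pi in zip(bits, p):
        out *= pi if b else 1 - pi
    return out

def top_event_bounds(evaluated, p):
    """Exact bounds on the top event probability from a partial evaluation.

    `evaluated` maps binary tuples to True/False (top event occurs or not);
    unevaluated tuples contribute nothing to the lower bound and all of
    their mass to the upper bound. Selecting high-probability tuples first
    (e.g. via the intrinsic order) tightens the bounds fastest."""
    lower = sum(tuple_prob(t, p) for t, occurs in evaluated.items() if occurs)
    covered = sum(tuple_prob(t, p) for t in evaluated)
    return lower, lower + (1.0 - covered)

# Toy usage: top event = both components fail; (1,0) left unevaluated.
print(top_event_bounds({(1, 1): True, (0, 0): False, (0, 1): False},
                       [0.1, 0.2]))  # (0.02, 0.02 + P(1,0)) = (0.02, 0.1)
```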

Relevance: 10.00%

Abstract:

[EN] This paper deals with the study of some new properties of the intrinsic order graph. The intrinsic order graph is the natural graphical representation of a complex stochastic Boolean system (CSBS). A CSBS is a system depending on an arbitrarily large number n of mutually independent random Boolean variables. The intrinsic order graph displays its 2^n vertices (associated to the CSBS) from top to bottom, in decreasing order of their occurrence probabilities. New relations between the intrinsic ordering and the Hamming weight (i.e., the number of 1-bits in a binary n-tuple) are derived. Further, the distribution of the weights of the 2^n nodes in the intrinsic order graph is analyzed…
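
Under the usual assumption in this line of work that the bit probabilities satisfy p_1 ≤ … ≤ p_n < 1/2, a tiny enumeration shows the top-to-bottom layout by occurrence probability and how the Hamming weight tends to grow down the graph; the probability values below are made up for illustration.

```python
from itertools import product

n = 4
p = [0.05, 0.10, 0.15, 0.20]  # assumed bit probabilities, increasing, < 1/2

def occurrence_prob(u):
    """Probability of the binary n-tuple u for independent Boolean variables."""
    out = 1.0
    for bit, pi in zip(u, p):
        out *= pi if bit else 1 - pi
    return out

# List the 2^n states from top to bottom of the graph, i.e. by decreasing
# occurrence probability, alongside their Hamming weights.
for u in sorted(product((0, 1), repeat=n), key=occurrence_prob, reverse=True):
    print("".join(map(str, u)), f"{occurrence_prob(u):.4f}", "weight:", sum(u))
```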

Relevance: 10.00%

Abstract:

[EN] The intrinsic order is a partial order relation defined on the set {0,1}^n of all binary n-tuples. This ordering enables one to automatically compare binary n-tuple probabilities without computing them, just by looking at the relative positions of their 0s and 1s. In this paper, new relations between the intrinsic ordering and the Hamming weight (i.e., the number of 1-bits in a binary n-tuple) are derived. All theoretical results are rigorously proved and illustrated through the intrinsic order graph…
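
A sketch of the positional comparison the abstract alludes to. The matching condition below follows the intrinsic order criterion as commonly stated in this line of work (each (1,0) column of the matrix (u; v) must be matched to a distinct (0,1) column to its left, assuming bit probabilities ordered increasingly and below 1/2); treat the exact convention as an assumption.

```python
def intrinsically_geq(u, v):
    """Intrinsic order check: Pr(u) >= Pr(v) for every assignment with
    0 < p1 <= ... <= pn <= 1/2 iff each (1,0) column of the matrix (u; v)
    can be matched to a distinct (0,1) column somewhere to its left."""
    credit = 0  # unmatched (0,1) columns seen so far
    for a, b in zip(u, v):
        if a == 0 and b == 1:
            credit += 1        # a column that favours u
        elif a == 1 and b == 0:
            if credit == 0:
                return False   # a (1,0) column with no earlier match
            credit -= 1
    return True

# Example: columns of (u; v) are (0,1), (1,0), (0,0); the (1,0) is matched.
print(intrinsically_geq((0, 1, 0), (1, 0, 0)))  # True
print(intrinsically_geq((1, 0, 0), (0, 1, 0)))  # False
```

Note the weight relation mentioned above falls out of the same column count: every matched (1,0) column pairs with a (0,1) column, so u can never carry more 1-bits than v when u is intrinsically greater.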