975 results for Probabilistic graphical models
Abstract:
The idiomatic expression “In Rome be a Roman” can be applied to leadership training and development as well. Leaders who can act as role models inspire other future leaders in their behaviour, attitudes and ways of thinking. Based on two examples of current leaders in the fields of Politics and Public Administration, I support the idea that exposure to role models during their training was decisive for their career paths and current activities as prominent characters in their profession. Issues such as how students should be prepared for community or national leadership as well as cross-cultural engagement are raised here. The hypothesis of transculturalism and cross-cultural commitment as a factor of leadership is presented. Based on current literature on Leadership as well as the presented case studies, I expect to raise a debate focusing on strategies for improving leaders’ training in their cross-cultural awareness.
Abstract:
OBJECTIVE: To evaluate the potential advantages and limitations of the use of the Brazilian hospital admission authorization forms database and the probabilistic record linkage methodology for the validation of reported utilization of hospital care services in household surveys. METHODS: A total of 2,288 household interviews were conducted in the county of Duque de Caxias, Brazil. Information on the occurrence of at least one hospital admission in the year preceding the interview was obtained from a total of 10,733 household members. The 130 records of household members who reported at least one hospital admission in a public hospital were linked to a hospital database with 801,587 records, using an automatic probabilistic approach combined with an extensive clerical review. RESULTS: Seventy-four (57%) of the 130 household members were identified in the hospital database. Yet only 60 subjects (46%) had a record of hospitalization in the hospital database in the study period. Hospital admissions due to a surgical procedure were significantly more likely to have been identified in the hospital database. The low level of concordance seen in the study can be explained by the following factors: errors in the linkage process; a telescoping effect; and incomplete records in the hospital database. CONCLUSIONS: The use of hospital administrative databases and probabilistic linkage methodology may represent a methodological alternative for the validation of reported utilization of health care services, but some strategies should be employed to minimize the problems related to the use of this methodology in non-ideal conditions. Ideally, a single identifier, such as a personal health insurance number, and universal coverage of the database would be desirable.
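Probabilistic record linkage of the kind described above typically scores candidate record pairs field by field, in the spirit of the Fellegi-Sunter framework. A minimal sketch in Python; the field names and the m/u probabilities are invented for illustration and are not taken from the study:

```python
import math

# Hypothetical m/u probabilities per comparison field (assumed values):
# m = P(field agrees | records are a true match),
# u = P(field agrees | records are not a match).
FIELDS = {
    "name":       {"m": 0.95, "u": 0.05},
    "birth_date": {"m": 0.90, "u": 0.01},
    "mother":     {"m": 0.85, "u": 0.02},
}

def match_weight(agreements):
    """Composite Fellegi-Sunter weight: sum of log2 likelihood ratios,
    positive contributions for agreeing fields, negative for disagreeing."""
    w = 0.0
    for field, agrees in agreements.items():
        m, u = FIELDS[field]["m"], FIELDS[field]["u"]
        w += math.log2(m / u) if agrees else math.log2((1 - m) / (1 - u))
    return w

# A pair agreeing on name and birth date but not on mother's name still
# receives a clearly positive weight with these illustrative parameters.
w = match_weight({"name": True, "birth_date": True, "mother": False})
```

Pairs above an upper weight threshold are accepted as links, pairs below a lower threshold are rejected, and the band in between goes to clerical review, which matches the "automatic probabilistic approach combined with an extensive clerical review" used in the study.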
Abstract:
OBJECTIVES: To estimate the prevalence of occupational injuries and identify their risk factors among students in two municipalities. METHODS: A cross-sectional survey was conducted in public schools of the municipalities of Santo Antonio do Pinhal and Monteiro Lobato, Brazil. A stratified probabilistic sample was drawn from public middle and high schools of the study municipalities. A total of 781 students aged 11 to 19 years participated in the study. Students attending middle and high school answered a comprehensive questionnaire on living and working conditions, aspects of work injuries, and health conditions. Multiple logistic regression models were fitted to estimate risk factors for previous and current occupational injuries. RESULTS: Of the 781 students, 604 previously had or currently had jobs, and 47% reported previous injuries. Among current workers (n=555), 38% reported injuries on their current job. Risk factors for work injuries with statistically significant odds ratios >2.0 included attending evening school; working as a housekeeper, waiter or brickmaker; and working with potentially dangerous machines. CONCLUSIONS: The study results reinforce the need to restrict adolescent work and to support communities in implementing social promotion programs.
Abstract:
Long-term contractual decisions are the basis of efficient risk management. However, such decisions must be supported by a robust price forecast methodology. This paper reports a different approach to long-term price forecasting that aims to meet that need. Making use of regression models, the proposed methodology has as its main objective to find the maximum and minimum Market Clearing Price (MCP) for a specific programming period, with a desired confidence level α. Due to the complexity of the problem, the Particle Swarm Optimization (PSO) meta-heuristic was used to find the best regression parameters, and the results are compared with those obtained using a Genetic Algorithm (GA). To validate these models, results from realistic data are presented and discussed in detail.
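As a rough illustration of how PSO can search for regression parameters, here is a minimal sketch on a toy linear model. The data, swarm size, and inertia/acceleration coefficients are all assumptions for the example, not the paper's actual regression model:

```python
import random

random.seed(0)

# Toy data generated from y = 2x + 1 (illustrative only).
xs = [float(i) for i in range(10)]
ys = [2.0 * x + 1.0 for x in xs]

def sse(params):
    """Sum of squared errors of the linear model y = a*x + b."""
    a, b = params
    return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys))

def pso(n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm: each particle remembers its personal best
    position, and all velocities are attracted toward the global best."""
    pos = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=sse)
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if sse(pos[i]) < sse(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=sse)
    return gbest

a, b = pso()  # should recover parameters close to a = 2, b = 1
```

In the paper's setting, the fitness function would instead measure the regression fit of the MCP bounds at the chosen confidence level; the swarm mechanics stay the same.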
Abstract:
The paper proposes a methodology to increase the probability of delivering power to any load point by identifying new investments in distribution energy systems. The proposed methodology is based on statistical failure and repair data of distribution components and uses fuzzy-probabilistic modeling for the component outage parameters. The fuzzy membership functions of the outage parameters of each component are based on statistical records. A mixed integer nonlinear programming optimization model is developed to identify the adequate investments in distribution energy system components that increase the probability of delivering power to any customer in the distribution system at the minimum possible cost for the system operator. To illustrate the application of the proposed methodology, the paper includes a case study that considers a 180-bus distribution network.
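Fuzzy membership functions built from statistical records are often triangular, anchored at the minimum, modal, and maximum observed values. A minimal sketch of this construction for a component failure rate; the numeric values are hypothetical, not from the paper's data:

```python
def triangular_membership(x, a, b, c):
    """Triangular fuzzy number (a, b, c): membership rises linearly from a
    to full membership at the modal value b, then falls linearly to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical failure-rate records (failures/year) summarised by their
# minimum (0.1), mode (0.2), and maximum (0.4) as a triangular fuzzy number.
mu_mode = triangular_membership(0.20, 0.1, 0.2, 0.4)  # full membership
mu_low = triangular_membership(0.15, 0.1, 0.2, 0.4)   # partial membership
```

Each component's outage parameter is then carried through the optimization as a fuzzy number rather than a single crisp rate, which is what allows the model to reflect the imprecision in the statistical records.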
Abstract:
This paper presents a methodology for reconfiguring distribution networks in the presence of outages, in order to choose the reconfiguration with the lowest power losses. The methodology is based on statistical failure and repair data of the distribution power system components and uses fuzzy-probabilistic modelling for the component outage parameters. The fuzzy membership functions of the component outage parameters are obtained from statistical records. A hybrid method combining fuzzy sets and Monte Carlo simulation, based on the fuzzy-probabilistic models, captures both the randomness and the fuzziness of the component outage parameters. Once the system states are obtained by Monte Carlo simulation, a logic programming algorithm is applied to obtain all possible reconfigurations for every system state. A distribution power flow is then applied to evaluate line flows and bus voltages, to identify any overloading or voltage violation, and to select the feasible reconfiguration with the lowest power losses. To illustrate the application of the proposed methodology to a practical case, the paper includes a case study that considers a real distribution network.
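The Monte Carlo sampling of system states can be sketched as independent in-service/out-of-service draws per component. For brevity the sketch below uses crisp unavailabilities where the paper uses fuzzy-probabilistic ones, and the component names and rates are purely illustrative:

```python
import random

random.seed(1)

# Hypothetical components with crisp unavailabilities (probability of being
# out of service); in the paper's approach each would be a fuzzy number.
UNAVAILABILITY = {"line_1": 0.05, "line_2": 0.02, "transformer": 0.01}

def sample_state():
    """One Monte Carlo draw: each component is independently in service
    (True) with probability 1 - unavailability."""
    return {c: random.random() >= u for c, u in UNAVAILABILITY.items()}

def estimate_all_in_service(n=50_000):
    """Estimate the probability that every component is in service."""
    hits = sum(all(sample_state().values()) for _ in range(n))
    return hits / n

p = estimate_all_in_service()
# Analytic value for comparison: 0.95 * 0.98 * 0.99 ≈ 0.9217
```

Each sampled state would then be handed to the logic programming step to enumerate feasible reconfigurations, and to the power flow to rank them by losses.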
Abstract:
Master's degree in Radiotherapy.
Fuzzy Monte Carlo mathematical model for load curtailment minimization in transmission power systems
Abstract:
This paper presents a methodology which is based on statistical failure and repair data of the transmission power system components and uses fuzzy-probabilistic modeling for the component outage parameters. Statistical records are used to develop the fuzzy membership functions of the component outage parameters. The proposed hybrid method, combining fuzzy sets and Monte Carlo simulation based on the fuzzy-probabilistic models, captures both the randomness and the fuzziness of the component outage parameters. Once the system states are obtained by Monte Carlo simulation, a network contingency analysis is performed to identify any overloading or voltage violation in the network. This is followed by a remedial action algorithm, based on optimal power flow, which reschedules generation to alleviate constraint violations while avoiding any load curtailment if possible or, otherwise, minimizing the total load curtailment for the states identified by the contingency analysis. To illustrate the application of the proposed methodology to a practical case, the paper includes a case study for the IEEE 24-bus Reliability Test System (RTS) 1996.
Abstract:
This paper presents a methodology to choose the distribution network reconfiguration with the lowest power losses. The proposed methodology is based on statistical failure and repair data of the distribution power system components and uses fuzzy-probabilistic modeling for the component outage parameters. The proposed hybrid method, using fuzzy sets and Monte Carlo simulation based on the fuzzy-probabilistic models, captures both the randomness and the fuzziness of the component outage parameters. Once the system states are obtained by Monte Carlo simulation, a logic programming algorithm is applied to obtain all possible reconfigurations for each system state. An AC load flow is then applied to evaluate line flows and bus voltages, to identify any overloading or voltage violation, and to select the feasible reconfiguration with the lowest power losses. To illustrate the application of the proposed methodology, the paper includes a case study that considers a 115-bus distribution network.
Abstract:
Distributed generation, unlike centralized electrical generation, aims to generate electrical energy on a small scale as near as possible to load centers, interchanging electric power with the network. This work presents a probabilistic methodology conceived to assist electric system planning engineers in selecting the location of distributed generation, taking into account the hourly load changes along the daily load cycle. The hourly load centers, for each of the different hourly load scenarios, are calculated deterministically. These location points, properly weighted according to their load magnitude, are used to calculate the best-fit probability distribution. This distribution is used to determine the maximum-likelihood perimeter of the area where each distributed generation point should preferably be located by the planning engineers. This takes into account, for example, the availability and the cost of the land lots, which are factors of special relevance in urban areas, as well as several obstacles important for the final selection of the candidate distributed generation points. The proposed methodology has been applied to a real case, assuming three different bivariate probability distributions: the Gaussian distribution, a bivariate version of Freund's exponential distribution and the Weibull probability distribution. The methodology algorithm has been programmed in MATLAB. Results for the application of the methodology to a realistic case are presented and discussed in detail, and demonstrate the ability of the proposed methodology to efficiently determine the best location of the distributed generation points and their corresponding distribution networks.
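Fitting a bivariate Gaussian (one of the three distributions considered) to load centers weighted by load magnitude can be sketched with weighted maximum-likelihood estimates of the mean and covariance. The coordinates and load values below are invented for illustration:

```python
# Hypothetical hourly load centers (x, y in km) with their load magnitudes
# (MW) used as observation weights; all values are illustrative.
centers = [(1.0, 2.0, 10.0), (1.5, 2.5, 20.0), (2.0, 1.5, 15.0), (1.2, 2.2, 5.0)]

def weighted_gaussian_fit(points):
    """Weighted maximum-likelihood mean and covariance of a bivariate
    Gaussian, with the load magnitudes acting as weights."""
    w = sum(p[2] for p in points)
    mx = sum(p[0] * p[2] for p in points) / w
    my = sum(p[1] * p[2] for p in points) / w
    sxx = sum(p[2] * (p[0] - mx) ** 2 for p in points) / w
    syy = sum(p[2] * (p[1] - my) ** 2 for p in points) / w
    sxy = sum(p[2] * (p[0] - mx) * (p[1] - my) for p in points) / w
    return (mx, my), ((sxx, sxy), (sxy, syy))

mean, cov = weighted_gaussian_fit(centers)
```

The maximum-likelihood perimeter mentioned in the abstract would then be a constant-density contour (an ellipse, for the Gaussian case) of the fitted distribution, inside which candidate sites are sought.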
Abstract:
Background: A common task in analyzing microarray data is to determine which genes are differentially expressed across two (or more) kinds of tissue samples or samples subjected to different experimental conditions. Several statistical methods have been proposed to accomplish this goal, generally based on measures of distance between classes. It is well known that biological samples are heterogeneous because of factors such as molecular subtypes or genetic background that are often unknown to the experimenter. For instance, in experiments which involve molecular classification of tumors it is important to identify significant subtypes of cancer. Bimodal or multimodal distributions often reflect the presence of mixtures of subsamples. Consequently, there can be genes differentially expressed in sample subgroups which are missed if the usual statistical approaches are used. In this paper we propose a new graphical tool which identifies not only genes with up- and down-regulation, but also genes with differential expression in different subclasses, which are usually missed by current statistical methods. This tool is based on two measures of distance between samples, namely the overlapping coefficient (OVL) between two densities and the area under the receiver operating characteristic (ROC) curve. The methodology proposed here was implemented in the open-source R software. Results: The method was applied to a publicly available dataset, as well as to a simulated dataset. We compared our results with those obtained using some of the standard methods for detecting differentially expressed genes, namely the Welch t-statistic, fold change (FC), rank products (RP), average difference (AD), weighted average difference (WAD), moderated t-statistic (modT), intensity-based moderated t-statistic (ibmT), significance analysis of microarrays (samT) and area under the ROC curve (AUC).
On both datasets, the differentially expressed genes with bimodal or multimodal distributions were missed by the standard selection procedures. We also compared our results with (i) the area between the ROC curve and the rising diagonal (ABCR) and (ii) the test for not proper ROC curves (TNRC). We found our methodology more comprehensive, because it detects both bimodal and multimodal distributions and allows different variances in the two samples. Another advantage of our method is that the behavior of different kinds of differentially expressed genes can be analyzed graphically. Conclusion: Our results indicate that the arrow plot represents a new, flexible and useful tool for the analysis of gene expression profiles from microarrays.
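The two distance measures underlying the tool, the overlapping coefficient (OVL) and the area under the ROC curve (AUC), can be sketched as follows. The paper's implementation is in R; this Python sketch uses made-up expression values and illustrative normal densities:

```python
import math

def auc(group_a, group_b):
    """Empirical AUC via the Mann-Whitney statistic:
    P(a > b) + 0.5 * P(a == b) over all cross-group pairs."""
    wins = sum((a > b) + 0.5 * (a == b) for a in group_a for b in group_b)
    return wins / (len(group_a) * len(group_b))

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def ovl_normal(mu1, s1, mu2, s2, lo=-10.0, hi=10.0, n=10_000):
    """OVL = integral of min(f, g); approximated here on a grid."""
    step = (hi - lo) / n
    return sum(min(normal_pdf(lo + i * step, mu1, s1),
                   normal_pdf(lo + i * step, mu2, s2)) for i in range(n)) * step

# Made-up log-expression values for one gene in two sample groups.
a_vals = [2.1, 2.5, 3.0, 3.2]
b_vals = [1.0, 1.4, 1.9, 2.2]
roc_area = auc(a_vals, b_vals)          # near 1: groups well separated
overlap = ovl_normal(0.0, 1.0, 2.0, 1.0)  # two unit normals, means 2 apart
```

Genes with small OVL and AUC far from 0.5 are well separated between groups; plotting one measure against the other is the idea behind the arrow plot.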
Abstract:
We present a new dynamical approach to Blumberg's equation, a family of unimodal maps. These maps are proportional to Beta(p, q) probability density functions. Using the symmetry of the Beta(p, q) distribution and symbolic dynamics techniques, a new concept of mirror symmetry is defined for this family of maps. Kneading theory is used to analyze the effect of such symmetry in the presented models. The main result proves that two mirror-symmetric unimodal maps have the same topological entropy. Different population dynamics regimes are identified as the intrinsic growth rate is varied: extinctions, stabilities, bifurcations, chaos and Allee effect. To illustrate our results, we present a numerical analysis demonstrating the monotonicity of the topological entropy with the variation of the intrinsic growth rate, the existence of isentropic sets in the parameter space, and mirror symmetry.
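Since the Beta(2, 2) density is proportional to x(1 − x), the simplest member of this family reduces to the logistic map x ↦ r·x·(1 − x). A small sketch of two of the population regimes mentioned (extinction for r < 1, convergence to a stable fixed point for moderate r); the initial condition and growth rates are illustrative:

```python
def iterate_logistic(r, x0=0.3, n=1000):
    """Iterate the logistic map x -> r*x*(1-x), the Beta(2,2) member of
    the family, and return the state after n steps."""
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)
    return x

extinct = iterate_logistic(0.8)  # r < 1: population dies out, x -> 0
stable = iterate_logistic(2.5)   # converges to the fixed point 1 - 1/r = 0.6
chaotic = iterate_logistic(4.0)  # chaotic regime: no settling value
```

Sweeping r and recording the long-run behavior in this way reproduces the familiar route from stability through bifurcations to chaos that the numerical analysis in the paper explores for the whole Beta(p, q) family.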
Abstract:
We consider the quark sector of theories containing three scalar SU(2)_L doublets in the triplet representation of A_4 (or S_4) and three generations of quarks in arbitrary A_4 (or S_4) representations. We show that for all possible choices of quark field representations and for all possible alignments of the Higgs vacuum expectation values that can constitute global minima of the scalar potential, it is not possible to obtain simultaneously nonvanishing quark masses and a nonvanishing CP-violating phase in the Cabibbo-Kobayashi-Maskawa quark mixing matrix. As a result, in this minimal form, models with three scalar fields in the triplet representation of A_4 or S_4 cannot be extended to the quark sector in a way consistent with experiment. DOI: 10.1103/PhysRevD.87.055010.
Abstract:
We produce five flavour models for the lepton sector. All five models fit perfectly well, at the 1 sigma level, the existing data on the neutrino mass-squared differences and on the lepton mixing angles. The models are based on the type I seesaw mechanism, on a Z_2 symmetry for each lepton flavour, and either on a (spontaneously broken) symmetry under the interchange of two lepton flavours or on a (spontaneously broken) CP symmetry incorporating that interchange, or on both symmetries simultaneously. Each model makes definite predictions both for the scale of the neutrino masses and for the phase delta in lepton mixing; the fifth model also predicts a correlation between the lepton mixing angles theta_12 and theta_23.