176 results for Ruin probability
Abstract:
In recent years, there has been increasing interest in understanding the physical properties of collisionless plasmas, mostly because of the large number of astrophysical environments (e.g. the intracluster medium, ICM) containing magnetic fields that are strong enough to be coupled with the ionized gas and characterized by densities sufficiently low to prevent pressure isotropization with respect to the magnetic line direction. Under these conditions, a new class of kinetic instabilities arises, such as the firehose and mirror instabilities, which have been studied extensively in the literature. Their role in the turbulence evolution and cascade process in the presence of pressure anisotropy, however, is still unclear. In this work, we present the first statistical analysis of turbulence in collisionless plasmas using three-dimensional numerical simulations that solve the double-isothermal magnetohydrodynamic equations with the Chew-Goldberger-Low closure (CGL-MHD). We study models with different initial conditions to account for the firehose and mirror instabilities and to obtain different turbulent regimes. We found that the subsonic and supersonic CGL-MHD turbulence shows only small differences compared to the MHD models in most cases. However, in the regimes of strong kinetic instabilities, the statistics, i.e. the probability distribution functions (PDFs) of density and velocity, are very different. In the subsonic models, the instabilities cause an increase in the dispersion of density, while the dispersion of velocity is increased by a large factor in some cases. Moreover, the spectra of density and velocity show increased power at small scales, explained by the high growth rate of the instabilities. Finally, we calculated the structure functions of velocity and density fluctuations in the local reference frame defined by the direction of the magnetic lines. The results indicate that in some cases the instabilities significantly increase the anisotropy of the fluctuations. These results, even though preliminary and restricted to very specific conditions, show that the physical properties of turbulence in collisionless plasmas, such as those found in the ICM, may be very different from what has been widely believed.
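As a minimal sketch of the kind of statistics described above (density PDFs and direction-dependent second-order structure functions), the snippet below operates on synthetic periodic cubes; the field values, grid size, and the crude choice of the x axis as the mean-field direction are illustrative assumptions, not the authors' simulation pipeline.

```python
# Minimal sketch (not the authors' pipeline): density PDF dispersion and a
# second-order velocity structure function measured along and across the
# assumed mean magnetic-field direction, on synthetic periodic 3D data.
import numpy as np

rng = np.random.default_rng(0)
N = 64
rho = np.exp(rng.normal(0.0, 0.5, (N, N, N)))   # stand-in density cube
v = rng.normal(0.0, 1.0, (3, N, N, N))          # stand-in velocity field

# Density statistics: PDF of s = log(rho/<rho>) and its dispersion.
s = np.log(rho / rho.mean())
pdf, edges = np.histogram(s, bins=100, density=True)
print("sigma_s =", s.std())

def sf2(lag_axis, lag):
    """S2(l) = <|v(x+l) - v(x)|^2> for a lag along one grid axis."""
    dv = np.roll(v, -lag, axis=1 + lag_axis) - v
    return np.mean(np.sum(dv**2, axis=0))

# Crude local-frame split: take x as the mean-field direction, so lags along
# x are "parallel" and lags along y are "perpendicular".
for lag in (1, 2, 4, 8):
    print(lag, "S2_par =", sf2(0, lag), "S2_perp =", sf2(1, lag))
```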
Abstract:
The structural engineering community in Brazil faces new challenges with the recent occurrence of high-intensity tornados. Satellite surveillance data show that the area covering the south-east of Brazil, Uruguay and part of Argentina is one of the world's most tornado-prone areas, second only to the infamous tornado alley in the central United States. The design of structures subject to tornado winds is a typical example of decision making in the presence of uncertainty. Structural design involves finding a good balance between the competing goals of safety and economy. This paper presents a methodology to find the optimum balance between these goals in the presence of uncertainty. In this paper, reliability-based risk optimization is used to find the optimal safety coefficient that minimizes the total expected cost of a steel frame communications tower subject to extreme storm and tornado wind loads. The technique is not new, but it is applied to a practical problem of increasing interest to Brazilian structural engineers. The problem is formulated in the partial safety factor format used in current design codes, with an additional partial factor introduced to serve as the optimization variable. The expected cost of failure (or risk) is defined as the product of a limit state exceedance probability and a limit state exceedance cost. These costs include the costs of repairing, rebuilding, and paying compensation for injury and loss of life. The total expected failure cost is the sum of the individual expected costs over all failure modes. The steel frame communications tower that is the subject of this study has become very common in Brazil due to increasing mobile phone coverage. The study shows that the optimum reliability is strongly dependent on the cost (or consequences) of failure. Since failure consequences depend on the actual tower location, it turns out that different optimum designs should be used in different locations. Failure consequences are also different for the different parties involved in the design, construction and operation of the tower. Hence, it is important that risk is well understood by the parties involved, so that proper contracts can be made. The investigation shows that when non-structural terms dominate design costs (e.g. in residential or office buildings) it is not too costly to over-design; this observation is in agreement with the observed practice for non-optimized structural systems. In this situation, it is much easier to lose money by under-design. When structural material cost is a significant part of the design cost (e.g. a concrete dam or bridge), one is likely to lose significant money by over-design. In this situation, a cost-risk-benefit optimization analysis is highly recommended. Finally, the study also shows that under time-varying loads like tornados, the optimum reliability is strongly dependent on the selected design life.
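As a rough illustration of the reliability-based risk optimization described above, the total expected cost can be written as the construction cost plus the sum, over failure modes, of exceedance probability times exceedance cost, and minimized over the additional partial safety factor. The cost model, failure-probability curve, and numerical values below are invented for the example and are not taken from the paper.

```python
# Illustrative sketch of reliability-based risk optimization (hypothetical
# numbers): minimize total expected cost over an additional partial safety
# factor lam. Not the cost model used in the paper.
import numpy as np
from scipy.optimize import minimize_scalar

C0 = 100.0                        # baseline construction cost (arbitrary units)
failure_costs = [500.0, 2000.0]   # repair/rebuild and injury/loss-of-life modes

def construction_cost(lam):
    # A stronger design (larger partial factor) costs more material.
    return C0 * (1.0 + 0.3 * (lam - 1.0))

def pf(lam, mode):
    # Hypothetical exceedance probability, decreasing with the safety factor.
    base = [1e-2, 2e-3][mode]
    return base * np.exp(-3.0 * (lam - 1.0))

def total_expected_cost(lam):
    risk = sum(pf(lam, m) * c for m, c in enumerate(failure_costs))
    return construction_cost(lam) + risk

res = minimize_scalar(total_expected_cost, bounds=(1.0, 2.5), method="bounded")
print("optimal partial factor:", round(res.x, 3),
      "total expected cost:", round(res.fun, 2))
```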
Abstract:
The main goal of this paper is to establish some equivalence results on stability, recurrence, and ergodicity between a piecewise deterministic Markov process (PDMP) {X(t)} and an embedded discrete-time Markov chain {Theta(n)} generated by a Markov kernel G that can be explicitly characterized in terms of the three local characteristics of the PDMP, leading to tractable criterion results. First we establish some important results characterizing {Theta(n)} as a sampling of the PDMP {X(t)} and deriving a connection between the probability of the first return time to a set for the discrete-time Markov chains generated by G and the resolvent kernel R of the PDMP. From these results we obtain equivalence results regarding irreducibility, existence of sigma-finite invariant measures, and (positive) recurrence and (positive) Harris recurrence between {X(t)} and {Theta(n)}, generalizing the results of [F. Dufour and O. L. V. Costa, SIAM J. Control Optim., 37 (1999), pp. 1483-1502] in several directions. Sufficient conditions in terms of a modified Foster-Lyapunov criterion are also presented to ensure positive Harris recurrence and ergodicity of the PDMP. We illustrate the use of these conditions by showing the ergodicity of a capacity expansion model.
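For reference, a drift condition of the classical Foster-Lyapunov type, written here in its generic Meyn-Tweedie form and not necessarily in the modified form used in the paper, reads:

```latex
% Generic Foster--Lyapunov drift condition (Meyn--Tweedie form); the paper's
% modified criterion may differ in its exact statement.
\mathcal{A}V(x) \le -\beta V(x) + b\,\mathbf{1}_C(x), \qquad x \in E,
```

where V >= 1 is a measurable test function, \mathcal{A} is the extended generator of the process, C is a petite set, and beta > 0, b < infinity are constants; a condition of this type is the standard route to positive Harris recurrence and ergodicity.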
Abstract:
Diagnostic methods have been an important tool in regression analysis to detect anomalies, such as departures from the error assumptions and the presence of outliers and influential observations in the fitted models. Assuming censored data, we considered a classical analysis and a Bayesian analysis assuming non-informative priors for the parameters of the model with a cure fraction. The Bayesian approach was implemented using Markov chain Monte Carlo methods with Metropolis-Hastings algorithm steps to obtain the posterior summaries of interest. Some influence methods, such as the local influence, total local influence of an individual, local influence on predictions and generalized leverage, were derived, analyzed and discussed for survival data with a cure fraction and covariates. The relevance of the approach is illustrated with a real data set, where it is shown that, by removing the most influential observations, the decision about which model best fits the data is changed.
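A minimal sketch of the Metropolis-Hastings step mentioned above, applied to a toy one-parameter survival likelihood with a flat prior; the cure-fraction model, covariates, and censoring of the paper are not reproduced here.

```python
# Minimal random-walk Metropolis-Hastings sketch for a toy one-parameter
# posterior (illustrative only; not the paper's cure-fraction model).
import numpy as np

rng = np.random.default_rng(1)
data = rng.exponential(scale=2.0, size=50)   # toy survival times

def log_post(theta):
    if theta <= 0:
        return -np.inf
    # Exponential likelihood with a flat (non-informative) prior on theta > 0.
    return -len(data) * np.log(theta) - data.sum() / theta

theta, chain = 1.0, []
for _ in range(5000):
    prop = theta + 0.3 * rng.normal()        # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop                         # accept
    chain.append(theta)

print("posterior mean of theta:", np.mean(chain[1000:]))
```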
Resumo:
Consider N sites randomly and uniformly distributed in a d-dimensional hypercube. A walker explores this disordered medium going to the nearest site, which has not been visited in the last mu (memory) steps. The walker trajectory is composed of a transient part and a periodic part (cycle). For one-dimensional systems, travelers can or cannot explore all available space, giving rise to a crossover between localized and extended regimes at the critical memory mu(1) = log(2) N. The deterministic rule can be softened to consider more realistic situations with the inclusion of a stochastic parameter T (temperature). In this case, the walker movement is driven by a probability density function parameterized by T and a cost function. The cost function increases as the distance between two sites and favors hops to closer sites. As the temperature increases, the walker can escape from cycles that are reminiscent of the deterministic nature and extend the exploration. Here, we report an analytical model and numerical studies of the influence of the temperature and the critical memory in the exploration of one-dimensional disordered systems.
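A minimal numerical sketch of the stochastic walk described above; the Boltzmann-like exponential weighting of distances, the parameter values, and the tabu-list implementation of the memory are illustrative assumptions, not necessarily the authors' exact cost function or model.

```python
# Illustrative sketch of a stochastic walker with memory mu and temperature T
# on N sites in 1D. The exponential (Boltzmann-like) weighting of distances is
# an assumption for illustration, not necessarily the paper's cost function.
import numpy as np

rng = np.random.default_rng(2)
N, mu, T, n_steps = 100, 5, 0.2, 1000
sites = np.sort(rng.uniform(0.0, 1.0, N))   # random sites on the line

current = 0
recent = [current]                           # last mu visited sites (memory)
visited = {current}

for _ in range(n_steps):
    allowed = [i for i in range(N) if i not in recent]
    d = np.abs(sites[allowed] - sites[current])
    w = np.exp(-d / T)                       # closer sites are favored
    current = int(rng.choice(allowed, p=w / w.sum()))
    visited.add(current)
    recent.append(current)
    if len(recent) > mu:
        recent.pop(0)

print("fraction of sites explored:", len(visited) / N)
```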
Abstract:
The objective of this manuscript is to discuss the existing barriers to the dissemination of medical guidelines, and to present strategies that facilitate the adaptation of the recommendations into clinical practice. The literature shows that it usually takes several years until new scientific evidence is adopted in current practice, even when there is an obvious impact on patients' morbidity and mortality. There are examples where more than thirty years elapsed between the first case reports on the use of an effective therapy and its routine utilization; that is the case of fibrinolysis for the treatment of acute myocardial infarction. Some of the main barriers to the implementation of new recommendations are: lack of knowledge of a new guideline, personal resistance to change, uncertainty about the efficacy of the proposed recommendation, fear of potential side effects, difficulty in remembering the recommendations, the absence of institutional policies reinforcing the recommendation, and even economic constraints. In order to overcome these barriers, a strategy that involves a program with multiple tools is always best. This must include the implementation of easy-to-use algorithms, continuing medical education materials and lectures, electronic or paper alerts, tools to facilitate evaluation and prescription, and periodic audits to show results to the practitioners involved in the process. It is also fundamental that the medical societies involved with the specific medical issue support the program for its scientific and ethical soundness. The creation of multidisciplinary committees in each institution and the inclusion of opinion leaders with proactive and lasting attitudes are the key points for the program's success. In this manuscript we use the implementation of a guideline for venous thromboembolism prophylaxis as an example, but the concepts described here can easily be applied to any other guideline. Therefore, these concepts can be very useful for institutions and services that aim at improving the quality of patient care. Changes in current medical practice recommended by guidelines may take some time. However, with broader participation of opinion leaders and the use of the several tools listed here, they surely have a greater probability of reaching the main objectives: improvement of the medical care provided and of patient safety.
Abstract:
Background: The present work aims at applying decision theory to radiological image quality control (QC) in the diagnostic routine. The main problem addressed in the framework of decision theory is whether to accept or reject a film lot of a radiology service. The probability of each decision for a determined set of variables was obtained from the selected films. Methods: Based on a radiology service routine, a decision probability function was determined for each considered group of combined characteristics. These characteristics were related to film quality control. The parameters were framed in a set of 8 possibilities, resulting in 256 possible decision rules. In order to determine a general utility function to assess the decision risk, we used a single parameter called r. The payoffs chosen were: diagnostic result (correct/incorrect), cost (high/low), and patient satisfaction (yes/no), resulting in eight possible combinations. Results: Depending on the value of r, more or less risk is associated with the decision-making. The utility function was evaluated in order to determine the probability of a decision. The decision was made using the opinions of patients or administrators from a radiology service center. Conclusion: The model is a formal quantitative approach to decision making related to medical image quality, providing an instrument to discriminate what is really necessary in order to accept or reject a film or a film lot. The method presented herein can help to assess the risk level of an incorrect radiological diagnosis decision.
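A small sketch of the combinatorics behind the 8 payoff combinations and 256 decision rules mentioned above; the utility values and state probabilities below are invented placeholders standing in for the r-parameterized utility of the paper.

```python
# Illustrative sketch: three binary payoff attributes give 2**3 = 8 state
# combinations, and a decision rule assigns accept/reject to each combination,
# giving 2**8 = 256 possible rules. Utilities and probabilities are invented.
from itertools import product

states = list(product([0, 1], repeat=3))   # (diagnosis ok, low cost, satisfied)
assert len(states) == 8

rules = list(product([0, 1], repeat=len(states)))   # 0 = reject, 1 = accept
assert len(rules) == 256

# Hypothetical utility of accepting in each state (a stand-in for the
# r-parameterized utility) and a uniform state probability.
utility_accept = {s: s[0] + 0.5 * s[1] + 0.3 * s[2] - 1.0 for s in states}
p_state = {s: 1.0 / len(states) for s in states}

def expected_utility(rule):
    return sum(p_state[s] * (utility_accept[s] if accept else 0.0)
               for s, accept in zip(states, rule))

best = max(rules, key=expected_utility)
print("best rule:", best, "expected utility:", round(expected_utility(best), 3))
```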
Abstract:
Background: In a number of malaria endemic regions, tourists and travellers face a declining risk of travel-associated malaria, in part due to successful malaria control. Many millions of visitors to these regions are recommended, via national and international policy, to use chemoprophylaxis, which has a well recognized morbidity profile. To evaluate whether current malaria chemoprophylaxis policy for travellers is cost effective when adjusted for endemic transmission risk and duration of exposure, a framework based on partial cost-benefit analysis was used. Methods: Using a three-component model combining a probability component, a cost component and a malaria risk component, the study estimated the health costs avoided through the use of chemoprophylaxis, the costs of disease prevention (including adverse events and pre-travel advice) for visits to five popular high and low malaria endemic regions, and the malaria transmission risk, using imported malaria cases and numbers of travellers to malarious countries. By calculating the minimal threshold malaria risk below which the economic costs of chemoprophylaxis are greater than the avoided health costs, we were able to identify the point at which chemoprophylaxis would be economically rational. Results: The threshold incidence at which malaria chemoprophylaxis policy becomes cost effective for UK travellers is an accumulated risk of 1.13%, assuming a given set of cost parameters. The period travellers need to remain exposed to achieve this accumulated risk varied from 30 to more than 365 days, depending on the region's intensity of malaria transmission. Conclusions: The cost-benefit analysis identified that chemoprophylaxis use was not a cost-effective policy for travellers to Thailand or the Amazon region of Brazil, but was cost-effective for travel to West Africa and for those staying longer than 45 days in India and Indonesia.
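A toy version of the threshold calculation described above: prophylaxis becomes economically rational once the accumulated malaria risk times the avoided health cost per case exceeds the cost of prophylaxis, and the required exposure time follows from the region's incidence. All monetary and incidence figures below are invented placeholders (chosen only so that the toy threshold matches the 1.13% quoted above), not the study's UK cost parameters.

```python
# Toy threshold calculation (hypothetical numbers, not the study's UK cost
# parameters): prophylaxis is cost effective when the accumulated malaria risk
# times the avoided health cost exceeds the prophylaxis cost.
cost_prophylaxis = 113.0      # drugs + adverse events + pre-travel advice
cost_malaria_case = 10000.0   # avoided health cost per malaria case

threshold_risk = cost_prophylaxis / cost_malaria_case
print(f"threshold accumulated risk: {threshold_risk:.2%}")

# Days of exposure needed to accumulate that risk, for illustrative daily
# incidences among travellers in a high and a low transmission region.
for region, daily_incidence in [("high transmission", 4e-4),
                                ("low transmission", 1e-5)]:
    days = threshold_risk / daily_incidence
    print(region, "->", round(days), "days of exposure to reach the threshold")
```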
Abstract:
Hardy-Weinberg equilibrium (HWE) is an important genetic property that populations should have whenever they are not subject to adverse situations such as a complete lack of panmixia, excess of mutations, excess of selection pressure, etc. HWE has been evaluated for decades; both frequentist and Bayesian methods are in use today. While historically the HWE formula was developed to examine the transmission of alleles in a population from one generation to the next, the use of HWE concepts has expanded in human disease studies to the detection of genotyping error and of disease susceptibility (association); see Ryckman and Williams (2008). Most analyses focus on trying to answer the question of whether a population is in HWE. They do not try to quantify how far from equilibrium the population is. In this paper, we propose the use of a simple disequilibrium coefficient for a locus with two alleles. Based on the posterior density of this disequilibrium coefficient, we show how one can conduct a Bayesian analysis to verify how far from HWE a population is. Other coefficients have been introduced in the literature; the advantage of the one introduced in this paper is that, just like the standard correlation coefficients, its range is bounded and it is symmetric around zero (equilibrium) when comparing positive and negative values. To test the hypothesis of equilibrium, we use a simple Bayesian significance test, the Full Bayesian Significance Test (FBST); see Pereira, Stern and Wechsler (2008) for a complete review. The disequilibrium coefficient proposed provides an easy and efficient way to carry out the analysis, especially if one uses Bayesian statistics. A routine in R (R Development Core Team, 2009) that implements the calculations is provided for the readers.
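A minimal sketch of the Bayesian machinery described above: sample genotype frequencies from a Dirichlet posterior given biallelic genotype counts and examine the posterior of a disequilibrium-type quantity. The inbreeding-type coefficient used below is an illustrative stand-in, not necessarily the bounded, symmetric coefficient proposed in the paper, and this Python snippet is not the R routine the paper provides.

```python
# Illustrative Bayesian sketch (not the paper's R routine): Dirichlet posterior
# for (AA, Aa, aa) genotype frequencies and the posterior of a
# disequilibrium-type coefficient f = 1 - het_obs/het_exp (a stand-in only).
import numpy as np

rng = np.random.default_rng(3)
counts = np.array([30, 50, 20])                        # observed (AA, Aa, aa)
posterior = rng.dirichlet(counts + 1.0, size=20000)    # flat Dirichlet prior

p_AA, p_Aa, p_aa = posterior.T
p_A = p_AA + 0.5 * p_Aa                                # allele frequency of A
f = 1.0 - p_Aa / (2.0 * p_A * (1.0 - p_A))             # zero under HWE

print("posterior mean of f:", f.mean())
print("95% credible interval:", np.quantile(f, [0.025, 0.975]))
print("posterior probability that f > 0:", (f > 0).mean())
```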
Abstract:
The identification of genetic markers associated with chronic kidney disease (CKD) may help to predict its development. Because reduced nitric oxide (NO) bioavailability and endothelial dysfunction are involved in CKD, genetic polymorphisms in the gene encoding the enzyme involved in NO synthesis (endothelial NO synthase, eNOS) may affect the susceptibility to CKD and the development of end-stage renal disease (ESRD). We compared genotype and haplotype distributions of three relevant eNOS polymorphisms (T(-786)C in the promoter region, Glu298Asp in exon 7, and 4b/4a in intron 4) in 110 healthy control subjects and 127 ESRD patients. Genotypes for the T(-786)C and Glu298Asp polymorphisms were determined by the TaqMan(R) Allele Discrimination assay and real-time polymerase chain reaction. Genotypes for the intron 4 polymorphism were determined by polymerase chain reaction and fragment separation by electrophoresis. The software program PHASE 2.1 was used to estimate the haplotype frequencies. We considered significant a probability value of p < 0.05/(number of haplotypes), i.e. p < 0.05/8 = 0.0063. We found no significant differences between the groups with respect to age, ethnicity, and gender. CKD patients had higher blood pressure, total cholesterol, and creatinine levels than healthy control subjects (all p < 0.05). Genotype and allele distributions for the three eNOS polymorphisms were similar in both groups (p > 0.05). We found no significant differences in haplotype distribution between the groups (p > 0.05). The lack of significant associations between eNOS polymorphisms and ESRD suggests that eNOS polymorphisms may not be relevant to the genetic component of CKD that leads to ESRD.
Abstract:
The purpose of the present research was to investigate the effects of polymorphisms of luteinizing hormone receptor (LHR) and follicle-stimulating hormone receptor (FSHR) genes, evaluated by polymerase chain reaction-restriction fragment length polymorphism in European-Zebu composite beef heifers from six different breed compositions. The polymorphism site analysis from digestion with HhaI and AluI restriction endonucleases allowed the genotype identification for LHR (TT, CT and CC) and FSHR (GG, CG and CC) genes. A high frequency of heterozygous animals was recorded in all breed compositions for both genes, except in two compositions for LHR. The probability of pregnancy (PP) at first breeding was used to evaluate the polymorphism effect on sexual precocity. The PP was analyzed as a binary trait, with a value of 1 (success) assigned to heifers that were diagnosed pregnant by rectal palpation and a value of 0 (failure) assigned to those that were not pregnant at that time. Heterozygous heifers showed a higher pregnancy rate (67 and 66% for LHR and FSHR genes, respectively), but no significant effects were observed for the genes studied (P=0.9188 and 0.8831 for LHR and FSHR, respectively) on the PP. These results do not justify the inclusion of LHR and FSHR restriction fragment length polymorphism markers in selection programs for sexual precocity in beef heifers. Nevertheless, these markers make possible the genotype characterization and may be used in additional studies to evaluate the genetic structure in other bovine populations.
Abstract:
Context. Observations in the cosmological domain are heavily dependent on the validity of the cosmic distance-duality (DD) relation, eta = D_L(z)(1+z)^(-2)/D_A(z) = 1, an exact result required by the Etherington reciprocity theorem, where D_L(z) and D_A(z) are, respectively, the luminosity and angular diameter distances. In the limit of very small redshifts D_A(z) = D_L(z) and this ratio is trivially satisfied. Measurements of the Sunyaev-Zeldovich effect (SZE) and X-rays combined with the DD relation have been used to determine D_A(z) from galaxy clusters. This combination offers the possibility of testing the validity of the DD relation, as well as determining which physical processes occur in galaxy clusters via their shapes. Aims. We use WMAP (7-year) results, by fixing the conventional Lambda CDM model, to verify the consistency between the validity of the DD relation and different assumptions about galaxy cluster geometries usually adopted in the literature. Methods. We assume that eta is a function of the redshift parametrized by two different relations: eta(z) = 1 + eta_0 z and eta(z) = 1 + eta_0 z/(1+z), where eta_0 is a constant parameter quantifying the possible departure from the strict validity of the DD relation. In order to determine the probability density function (PDF) of eta_0, we consider the angular diameter distances from galaxy clusters recently studied by two different groups, assuming elliptical (isothermal) and spherical (non-isothermal) beta models. The strict validity of the DD relation will occur only if the maximum of the eta_0 PDF is centered on eta_0 = 0. Results. It was found that the elliptical beta model is in good agreement with the data, showing no violation of the DD relation (PDF peaked close to eta_0 = 0 at 1 sigma), while the spherical (non-isothermal) one is only marginally compatible at 3 sigma. Conclusions. The present results, derived by combining the SZE and X-ray surface brightness data from galaxy clusters with the latest WMAP (7-year) results, favor the elliptical geometry for galaxy clusters. It is remarkable that a local property like the geometry of galaxy clusters might be constrained by a global argument provided by the cosmic DD relation.
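A schematic of how an eta_0 PDF can be built from such data for the linear parametrization eta(z) = 1 + eta_0 z; the mock data arrays, error bars, and Gaussian likelihood below are placeholders and do not reproduce the cluster samples or analysis of the groups cited.

```python
# Schematic likelihood analysis for eta_0 (placeholder data, not the cluster
# samples used in the paper): given measured ratios eta_obs(z) with errors,
# build the normalized PDF of eta_0 for eta(z) = 1 + eta_0 * z.
import numpy as np

rng = np.random.default_rng(4)
z = np.linspace(0.05, 0.8, 25)                         # mock cluster redshifts
eta_obs = 1.0 + 0.05 * rng.normal(size=z.size)         # mock measured ratios
sigma = np.full(z.size, 0.05)                          # mock 1-sigma errors

eta0_grid = np.linspace(-0.5, 0.5, 1001)
chi2 = np.array([np.sum(((1.0 + e0 * z - eta_obs) / sigma) ** 2)
                 for e0 in eta0_grid])
pdf = np.exp(-0.5 * (chi2 - chi2.min()))
pdf /= np.trapz(pdf, eta0_grid)                        # normalized PDF of eta_0

print("peak of the eta_0 PDF:", round(eta0_grid[np.argmax(pdf)], 3))
```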
Abstract:
Aims. We derive lists of proper motions and kinematic membership probabilities for 49 open clusters and possible open clusters in the zone of the Bordeaux PM2000 proper motion catalogue (+11 degrees <= delta <= +18 degrees). We test different parametrisations of the proper motion and position distribution functions and select the most successful one. In the light of those results, we analyse some objects individually. Methods. We differentiate between cluster and field member stars, and assign membership probabilities, by applying a new and fully automated method based on parametrisations of both the proper motion and the position distribution functions, and on genetic algorithm optimization heuristics associated with a derivative-based hill climbing algorithm for the likelihood optimization. Results. We present a catalogue comprising kinematic parameters and associated membership probability lists for 49 open clusters and possible open clusters in the Bordeaux PM2000 catalogue region. We note that this is the first determination of proper motions for five open clusters. We confirm the absence of two distinct kinematic populations in the regions of 15 objects previously suspected to be non-existent.
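A minimal sketch of the membership-probability idea described above, assuming a two-component Gaussian mixture in proper-motion space (a sharp cluster plus a broad field) whose cluster responsibility is taken as the membership probability; the parameters are fixed by hand rather than fitted with a genetic algorithm and hill climbing, and the mixture form is an illustrative assumption, not the paper's full model (which also uses positions).

```python
# Illustrative membership-probability sketch (not the paper's method): a
# two-component Gaussian mixture in proper-motion space with hand-picked
# parameters; each star's cluster responsibility is its membership probability.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(5)
cluster = rng.normal([2.0, -5.0], 0.3, size=(80, 2))   # mock cluster stars
field = rng.normal([0.0, 0.0], 4.0, size=(400, 2))     # mock field stars
pm = np.vstack([cluster, field])                        # proper motions (mas/yr)

w_c = 0.15                                              # assumed cluster fraction
f_c = multivariate_normal([2.0, -5.0], 0.3**2 * np.eye(2)).pdf(pm)
f_f = multivariate_normal([0.0, 0.0], 4.0**2 * np.eye(2)).pdf(pm)

membership = w_c * f_c / (w_c * f_c + (1.0 - w_c) * f_f)
print("stars with membership probability > 0.9:", int((membership > 0.9).sum()))
```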
Abstract:
Aims. In this work, we describe the pipeline for the fast supervised classification of light curves observed by the CoRoT exoplanet CCDs. We present the classification results obtained for the first four measured fields, which represent one year of in-orbit operation. Methods. The basis of the adopted supervised classification methodology has been described in detail in a previous paper, as has its application to the OGLE database. Here, we present the modifications of the algorithms and of the training set that optimize the performance when applied to the CoRoT data. Results. Classification results are presented for the observed fields IRa01, SRc01, LRc01, and LRa01 of the CoRoT mission. Statistics on the number of variables and the number of objects per class are given, and typical light curves of high-probability candidates are shown. We also report on new stellar variability types discovered in the CoRoT data. The full classification results are publicly available.
Abstract:
We analyze the interaction between dark energy and dark matter from a thermodynamical perspective. By assuming that they have different temperatures, we study the possibility of a decay from dark matter into dark energy, characterized by a negative parameter Q. We find that, if at least one of the fluids has a nonvanishing chemical potential, for instance mu_x < 0 and mu_dm = 0, or mu_x = 0 and mu_dm > 0, the decay is possible, where mu_x and mu_dm are the chemical potentials of dark energy and dark matter, respectively. Using recent cosmological data, we find that, for a fairly simple interaction, the dark matter decay is favored with a probability of approximately 93% over the dark energy decay. This result comes from a likelihood analysis in which only the background evolution has been considered.