962 results for Maximum entropy method
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
In this paper, a new family of survival distributions is presented. It is derived by considering that the latent number of failure causes follows a Poisson distribution and that the time for these causes to be activated follows an exponential distribution. Three different activation schemes are also considered. Moreover, we propose the inclusion of covariates in the model formulation in order to study their effect on the expected number of causes and on the failure rate function. An inferential procedure based on the maximum likelihood method is discussed and evaluated via simulation. The developed methodology is illustrated with a real data set on ovarian cancer.
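As a sketch of this construction under the first-activation scheme (the notation below is assumed for illustration and is not necessarily the paper's): if the latent number of causes is $N \sim \mathrm{Poisson}(\theta)$ and the activation times are i.i.d. exponential with rate $\lambda$ and c.d.f. $F(t) = 1 - e^{-\lambda t}$, and failure occurs at the first activation (no failure when $N = 0$), the population survival function is

$$ S_{\mathrm{pop}}(t) \;=\; \sum_{n=0}^{\infty} \frac{e^{-\theta}\theta^{n}}{n!}\,[1 - F(t)]^{n} \;=\; \exp\{-\theta F(t)\} \;=\; \exp\{-\theta(1 - e^{-\lambda t})\}, $$

with cure fraction $S_{\mathrm{pop}}(\infty) = e^{-\theta}$; covariates can then enter through $\theta$ (expected number of causes) and $\lambda$ (failure rate).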
Abstract:
This article introduces generalized beta-generated (GBG) distributions. Sub-models include all classical beta-generated, Kumaraswamy-generated and exponentiated distributions. They are maximum entropy distributions under three intuitive conditions, which show that the classical beta generator skewness parameters only control tail entropy and an additional shape parameter is needed to add entropy to the centre of the parent distribution. This parameter controls skewness without necessarily differentiating tail weights. The GBG class also has tractable properties: we present various expansions for moments, generating function and quantiles. The model parameters are estimated by maximum likelihood and the usefulness of the new class is illustrated by means of some real data sets. (c) 2011 Elsevier B.V. All rights reserved.
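For orientation, a density of the form below (with parent c.d.f. $G$, p.d.f. $g$, and shape parameters $a, b, c > 0$; the notation is assumed here rather than taken from the paper) is consistent with the stated sub-models:

$$ f(x) \;=\; \frac{c}{B(a,b)}\, g(x)\, G(x)^{ac-1}\,\bigl[1 - G(x)^{c}\bigr]^{b-1}, $$

so that $c = 1$ recovers the classical beta-generated family, $a = 1$ the Kumaraswamy-generated family, and $b = 1$ an exponentiated distribution.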
Abstract:
Known as the "king of spices", black pepper (Piper nigrum), a perennial crop of the tropics, is economically the most important and the most widely used spice crop in the world. To understand its suitable bioclimatic distribution, maximum entropy-based ecological niche modeling was used to model the bioclimatic niches of the species in its Asian range. Based on known occurrences, bioclimatic areas with higher probabilities are mainly located on the eastern and western coasts of the Indian Peninsula, the east of Sumatra Island, some areas in the Malay Archipelago, and the southeastern coastal areas of China. Some undocumented places were also predicted as suitable areas. According to the jackknife procedure, the minimum temperature of the coldest month, the mean monthly temperature range, and the precipitation of the wettest month were identified as highly effective factors in the distribution of black pepper and could possibly account for the crop's distribution pattern. These climatic requirements have inhibited this species from dispersing and gaining a larger geographical range.
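An illustrative sketch only (file names, variables, and the presence/background setup are assumptions, and logistic regression on presence versus background points is used here as a closely related stand-in for MaxEnt rather than the authors' exact workflow):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical inputs: rows are locations, columns are bioclimatic variables
# (e.g. minimum temperature of the coldest month, mean monthly temperature
# range, precipitation of the wettest month). File names are assumptions.
presence = np.loadtxt("pepper_presence_bioclim.csv", delimiter=",")
background = np.loadtxt("background_bioclim.csv", delimiter=",")

X = np.vstack([presence, background])
y = np.concatenate([np.ones(len(presence)), np.zeros(len(background))])

# Presence/background logistic regression as a stand-in for MaxEnt: both fit
# an exponential-family model of relative environmental suitability.
model = LogisticRegression(max_iter=1000).fit(X, y)
suitability = model.predict_proba(background)[:, 1]  # relative suitability
print(suitability[:5])
```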
Abstract:
Accurate estimates of the penetrance rate of autosomal dominant conditions are important, among other issues, for optimizing recurrence risks in genetic counseling. The present work on penetrance rate estimation from pedigree data considers the following situations: 1) estimation of the penetrance rate K (brief review of the method); 2) construction of exact credible intervals for K estimates; 3) specificity and heterogeneity issues; 4) penetrance rate estimates obtained through molecular testing of families; 5) lack of information about the phenotype of the pedigree generator; 6) genealogies containing grouped parent-offspring information; 7) ascertainment issues responsible for the inflation of K estimates.
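As a hedged illustration of items 1) and 2) only (a textbook-style construction ignoring ascertainment, not necessarily the estimator used in the paper): for an autosomal dominant condition, among $n$ offspring of affected $\times$ unaffected matings the expected proportion of affected individuals is $K/2$, so with $r$ affected offspring

$$ \hat{K} \;=\; \frac{2r}{n}, \qquad p = \tfrac{K}{2}\,\big|\,r \;\sim\; \mathrm{Beta}(r+1,\; n-r+1) \ \text{(uniform prior)}, $$

and an exact credible interval for $K$ follows by doubling the corresponding posterior quantiles of $p$ (truncated at 1).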
Abstract:
A total of 46,089 individual monthly test-day (TD) milk yields (10 test-days) from 7,331 complete first lactations of Holstein cattle were analyzed. A standard multivariate analysis (MV), reduced-rank analyses fitting the first 2, 3, and 4 genetic principal components (PC2, PC3, PC4), and analyses that fitted a factor analytic structure with 2, 3, and 4 factors (FAS2, FAS3, FAS4) were carried out. The models included the random animal genetic effect and fixed effects of the contemporary groups (herd-year-month of test-day), age of cow (linear and quadratic effects), and days in milk (linear effect). The residual covariance matrix was assumed to have full rank. Moreover, 2 random regression models were applied. Variance components were estimated by the restricted maximum likelihood method. The heritability estimates ranged from 0.11 to 0.24. The genetic correlation estimates between TD obtained with the PC2 model were higher than those obtained with the MV model, especially for adjacent test-days at the end of lactation, where they were close to unity. The results indicate that, for the data considered in this study, only 2 principal components are required to summarize the bulk of the genetic variation among the 10 traits.
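For reference (the notation below is assumed, not the paper's): with $t = 10$ test-day traits, the reduced-rank principal-component models restrict the additive genetic covariance matrix to its leading eigen-structure, while the factor-analytic models add trait-specific variances,

$$ \mathbf{G}_{\mathrm{PC}m} \;\approx\; \sum_{i=1}^{m} \lambda_i\, \mathbf{q}_i \mathbf{q}_i^{\top}, \qquad \mathbf{G}_{\mathrm{FAS}m} \;=\; \boldsymbol{\Lambda}\boldsymbol{\Lambda}^{\top} + \boldsymbol{\Psi}, $$

where $\boldsymbol{\Lambda}$ is a $t \times m$ matrix of loadings and $\boldsymbol{\Psi}$ is diagonal; the abstract's conclusion corresponds to $m = 2$ already capturing most of the genetic variation.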
Abstract:
Native bees are important providers of pollination services, but there is accumulating evidence of their decline. Global changes such as habitat loss, invasions of exotic species, and climate change have been suggested as the main causes of pollinator decline. In this study, the influence of climate change on the distribution of 10 species of Brazilian bees was estimated with species distribution modelling. We used the Maxent (maximum entropy) algorithm and two different scenarios, an optimistic and a pessimistic one, for the years 2050 and 2080. We also evaluated the percentage reduction of species habitat under the future climate change scenarios through a Geographic Information System (GIS). Results showed that the total area of suitable habitats decreased for all species but one under the different future scenarios. The greatest reductions in habitat area were found for Melipona bicolor bicolor and Melipona scutellaris, which occur predominantly in areas originally covered by Atlantic Moist Forest. The species analysed have been reported to be pollinators of some regional crops, and the consequences of their decline for these crops need further clarification. (C) 2012 Elsevier B.V. All rights reserved.
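A minimal sketch of the habitat-reduction calculation under an assumed binary suitability threshold (file names, the 0.5 threshold, and equal-area grid cells are all assumptions for illustration):

```python
import numpy as np

# Hypothetical Maxent suitability rasters on the same grid, values in [0, 1].
current = np.load("suitability_current.npy")
future = np.load("suitability_2050_pessimistic.npy")

threshold = 0.5
cells_now = np.count_nonzero(current >= threshold)
cells_future = np.count_nonzero(future >= threshold)

# Percentage reduction of suitable habitat between scenarios.
reduction_pct = 100.0 * (cells_now - cells_future) / cells_now
print(f"Suitable habitat reduction: {reduction_pct:.1f}%")
```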
Abstract:
The aim of this work is to study the features of a simple replicator chemical model of the relation between kinetic stability and entropy production under the action of external perturbations. We quantitatively explore the different paths leading to evolution in a toy model where two independent replicators compete for the same substrate. To do so, the scenario originally described by Pross (J Phys Org Chem 17:312–316, 2004) is revisited and new criteria to define kinetic stability are proposed. Our results suggest that fast replicator populations are continually favored by strong stochastic environmental fluctuations capable of determining the global population, the former assumed to be the only evolutionary force acting. We demonstrate that the process is driven by strong perturbations only, and that population crashes may be useful proxies for these catastrophic environmental fluctuations. As expected, such behavior is particularly enhanced under very large-scale perturbations, suggesting a likely dynamical footprint in the recovery patterns of new species after mass extinction events in the Earth's geological past. Furthermore, the hypothesis that natural selection always favors the faster processes may give theoretical support to studies that claim the applicability of maximum principles such as the Maximum Metabolic Flux (MMF) or the Maximum Entropy Production Principle (MEPP), seen as the main goal of biological evolution.
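A minimal sketch of a toy model in this spirit (the rate equations, parameter values, and crash mechanism are illustrative assumptions, not the paper's exact formulation): two replicators draw on a shared substrate, rare strong perturbations crash both populations, and the faster replicator recovers first and tends to dominate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two replicators X1 (fast) and X2 (slow) consuming a shared substrate S,
# with rare strong environmental perturbations that crash both populations.
k1, k2 = 1.0, 0.5      # replication rate constants
d = 0.1                # decay rate (same for both replicators)
inflow = 1.0           # constant substrate inflow
dt, steps = 0.01, 100_000
S, X1, X2 = 1.0, 0.1, 0.1

for _ in range(steps):
    dX1 = (k1 * S - d) * X1
    dX2 = (k2 * S - d) * X2
    dS = inflow - (k1 * X1 + k2 * X2) * S
    X1, X2, S = X1 + dt * dX1, X2 + dt * dX2, S + dt * dS
    if rng.random() < 1e-4:                 # rare catastrophic fluctuation
        crash = rng.uniform(0.9, 0.99)      # fraction of population removed
        X1, X2 = X1 * (1 - crash), X2 * (1 - crash)

# The faster replicator recovers first after each crash and ends up dominating.
print(f"X1 = {X1:.3g}, X2 = {X2:.3g}, S = {S:.3g}")
```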
Abstract:
This thesis is the result of my experience as a PhD student taking part in the Joint Doctoral Programme at the University of York and the University of Bologna. In my thesis I deal with topics that are of particular interest in Italy and in Great Britain. Chapter 2 focuses on the empirical test of the existence of the relationship between technological profiles and market structure claimed by Sutton’s theory (1991, 1998) in the specific economic framework of hospital care services provided by the Italian National Health Service (NHS). In order to test the empirical predictions by Sutton, we identify the relevant markets for hospital care services in Italy in terms of both product and geographic dimensions. In particular, the Elzinga and Hogarty (1978) approach has been applied to data on patients’ flows across Italian Provinces in order to derive the geographic dimension of each market. Our results provide evidence in favour of the empirical predictions of Sutton. Chapter 3 deals with patient mobility in the Italian NHS. To analyse the determinants of patient mobility across Local Health Authorities, we estimate gravity equations in multiplicative form using a Poisson pseudo maximum likelihood method, as proposed by Santos-Silva and Tenreyro (2006). In particular, we focus on the scale effect played by the size of the pool of enrolees. In most cases our results are consistent with the predictions of the gravity model. Chapter 4 considers the effects of contractual and working conditions on self-assessed health and psychological well-being (derived from the General Health Questionnaire) using the British Household Panel Survey (BHPS). We consider two branches of the literature. One suggests that “atypical” contractual conditions have a significant impact on health while the other suggests that health is damaged by adverse working conditions. The main objective of our paper is to combine the two branches of the literature to assess the distinct effects of contractual and working conditions on health. The results suggest that both sets of conditions have some influence on the health and psychological well-being of employees.
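A minimal sketch of the Poisson pseudo-maximum-likelihood (PPML) estimation step in Chapter 3 (the file, column names, and covariates are assumptions for illustration; only the use of a Poisson GLM with a log link follows Santos Silva and Tenreyro, 2006):

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical patient-flow data between Local Health Authorities.
df = pd.read_csv("patient_flows.csv")  # columns: flow, log_distance, log_pop_o, log_pop_d

X = sm.add_constant(df[["log_distance", "log_pop_o", "log_pop_d"]])
# PPML: a Poisson GLM with log link estimates the multiplicative gravity
# equation consistently even when the flows are not truly Poisson; robust
# standard errors are customary.
ppml = sm.GLM(df["flow"], X, family=sm.families.Poisson()).fit(cov_type="HC1")
print(ppml.summary())
```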
Abstract:
The Standard Model (SM) of particle physics describes the fundamental building blocks of matter and their interactions very precisely. Despite this success, there are still open questions that the SM cannot answer. One test that is not yet complete is the measurement of the strength of the weak coupling between quarks. Neutral B and $\bar{B}$ mesons can transform into their antiparticles within their lifetime through a weak-interaction process. By measuring the B_s oscillation, the coupling V_td between the top (t) and down (d) quarks can be determined. All experiments carried out up to the end of 2005 provided only a lower limit on the oscillation frequency, Δm_s > 14.4 ps^-1. This thesis describes the measurement of the B_s oscillation frequency Δm_s in the semileptonic channel B_s → D_s^- μ^+. The data used come from proton-antiproton collisions recorded between April 2002 and March 2006 with the DØ detector at the Tevatron collider of the Fermi National Accelerator Laboratory at a centre-of-mass energy of $\sqrt{s}$ = 1.96 TeV. The data sets correspond to an integrated luminosity of 1.3 fb^-1 (620 million events). For this oscillation measurement, the quark content of the B_s meson at the time of production and at the time of decay was determined, and the decay time was measured. After reconstruction and selection of the signal events, the charge of the muon fixes the quark content of the B_s meson at the time of decay. In addition, the quark content of the B_s meson at the time of production was tagged: b quarks are produced in pairs in $p\bar{p}$ collisions, and the decay products of the second b hadron fix the quark content of the B_s meson at production. With a sensitivity of Δm_s^sens = 14.5 ps^-1, a lower limit on the oscillation frequency of Δm_s > 15.5 ps^-1 was determined. The maximum-likelihood method yielded an oscillation frequency of Δm_s = (20 +2.5 -3.0 (stat+syst) ± 0.8 (syst, k)) ps^-1 at a confidence level of 90%. The undetected neutrino momentum leads to the systematic uncertainty (syst, k). Together with the corresponding oscillation of the B_d meson, this result yields a significant measurement of the coupling V_td, in agreement with other experiments on the weak quark couplings.
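For context, the standard time-dependent mixing probabilities underlying such an oscillation analysis (textbook form, neglecting the width difference and assuming perfect tagging) are

$$ P_{\mathrm{unmix/mix}}(t) \;=\; \frac{1}{2\tau_{B_s}}\, e^{-t/\tau_{B_s}}\,\bigl[\,1 \pm \cos(\Delta m_s\, t)\,\bigr], $$

so the asymmetry between unmixed and mixed decays oscillates as $\cos(\Delta m_s t)$ and a likelihood fit to the measured decay times and tags extracts $\Delta m_s$.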
Abstract:
Perfusion CT imaging of the liver has the potential to improve the evaluation of tumour angiogenesis. Quantitative parameters can be obtained by applying mathematical models to the Time Attenuation Curve (TAC). However, accurate quantification of perfusion parameters remains difficult, owing, for example, to the algorithms employed, the mathematical model, the patient's weight and cardiac output, and the acquisition system. In this thesis, new parameters and alternative methodologies for liver perfusion CT are presented in order to investigate the sources of variability of this technique. First, the variability related to the mathematical model used to compute arterial Blood Flow (BFa) values was assessed. Results were obtained by implementing algorithms based on the "maximum slope method" and on a "dual-input one-compartment model". Statistical analysis of simulated data demonstrated that the two methods are not interchangeable, although the slope method is always applicable in the clinical context. The variability related to TAC processing in the application of the slope method was then analysed; comparison with manual selection allowed the best automatic algorithm for computing BFa to be identified. The consistency of a Standardized Perfusion Index (SPV) was evaluated and a simplified calibration procedure was proposed. Finally, the quantitative value of the perfusion maps was analysed: the ROI approach and the map approach provide related BFa values, which means that the pixel-by-pixel algorithm gives reliable quantitative results, and also in the pixel-by-pixel approach the slope method gives better results. In conclusion, the development of new automatic algorithms for a consistent computation of BFa, together with the analysis and definition of a simplified technique to compute the SPV parameter, represents an improvement in the field of liver perfusion CT analysis.
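A minimal sketch of the maximum slope computation (arrays, file names, and the baseline correction are assumptions for illustration; the method itself divides the peak rate of liver enhancement by the peak arterial enhancement):

```python
import numpy as np

# Hypothetical time-attenuation curves (TACs) in HU sampled at times t (s).
t = np.load("scan_times.npy")
tac_liver = np.load("tac_liver.npy")
tac_aorta = np.load("tac_aorta.npy")

# Maximum slope method: peak rate of liver enhancement divided by the peak
# (baseline-corrected) aortic enhancement.
max_slope = np.max(np.gradient(tac_liver, t))        # HU / s
peak_aorta = np.max(tac_aorta) - tac_aorta[0]        # HU

bfa = max_slope / peak_aorta                          # 1/s
print(f"BFa = {bfa * 6000:.1f} mL/min/100 mL")        # common rescaling
```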
Abstract:
Despite the scientific achievements of the last decades in the astrophysical and cosmological fields, the majority of the Universe's energy content is still unknown. A potential solution to the "missing mass problem" is the existence of dark matter in the form of WIMPs. Due to the very small cross section for WIMP-nucleon interactions, the number of expected events is very limited (about 1 event/tonne/year), thus requiring detectors with a large target mass and a low background level. The aim of the XENON1T experiment, the first tonne-scale LXe-based detector, is to be sensitive to WIMP-nucleon cross sections as low as 10^-47 cm^2. To investigate whether such a detector can reach this goal, Monte Carlo simulations are mandatory to estimate the background. To this aim, the GEANT4 toolkit has been used to implement the detector geometry and to simulate the decays from the various background sources, electromagnetic and nuclear. From the analysis of the simulations, the background level has been found to be fully acceptable for the experiment's purposes: about 1 background event in a 2 tonne-year exposure. Using the Maximum Gap method, the XENON1T sensitivity has been evaluated, and the minimum of the WIMP-nucleon cross section has been found at 1.87 x 10^-47 cm^2, at 90% CL, for a WIMP mass of 45 GeV/c^2. The results have been independently cross-checked using the Likelihood Ratio method, which confirmed them with an agreement within less than a factor of two, entirely acceptable considering the intrinsic differences between the two statistical methods. Thus, this PhD thesis shows that the XENON1T detector will be able to reach the designed sensitivity, lowering the limits on the WIMP-nucleon cross section by about 2 orders of magnitude with respect to current experiments.
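For reference, the maximum gap construction (stated here in Yellin's general form rather than from the thesis itself) excludes a signal expectation $\mu$ at 90% CL when

$$ C_0(x,\mu) \;=\; \sum_{k=0}^{m} \frac{(kx-\mu)^{k}\, e^{-kx}}{k!}\left(1+\frac{k}{\mu-kx}\right) \;\ge\; 0.90, \qquad m = \lfloor \mu/x \rfloor, $$

where $x$ is the largest expected number of signal events in any interval of the observable containing no observed events; the method thus sets limits without requiring a background model.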
Abstract:
The first part of this work deals with the solution of the inverse problem in the field of X-ray spectroscopy. An original strategy to solve the inverse problem using the maximum entropy principle is illustrated, and the code UMESTRAT is built to apply the described strategy in a semi-automatic way. The application of UMESTRAT is shown with a computational example. The second part of this work deals with the improvement of the X-ray Boltzmann model, by studying two radiative interactions neglected in current photon models. First, the characteristic line emission due to Compton ionization is studied. A strategy is developed that allows this contribution to be evaluated for the K, L and M shells of all elements with Z from 11 to 92. The single-shell Compton/photoelectric ratio is evaluated as a function of the primary photon energy, and the energies at which the Compton interaction becomes the prevailing ionization process for the considered shells are derived. Finally, a new kernel for XRF from Compton ionization is introduced. Second, the bremsstrahlung radiative contribution due to secondary electrons is characterized in terms of space, angle and energy for all elements with Z = 1–92 in the energy range 1–150 keV, using the Monte Carlo code PENELOPE. It is demonstrated that the bremsstrahlung radiative contribution can be well approximated by an isotropic point photon source. A data library comprising the energy distributions of bremsstrahlung is created, and a new bremsstrahlung kernel is developed which allows this contribution to be introduced into the modified Boltzmann equation. An example of application to the simulation of a synchrotron experiment is shown.
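As a sketch of the standard maximum-entropy formulation of such an inverse problem (generic form; the specific functional implemented in UMESTRAT is not given in the abstract): for a discretized spectrum $f_i > 0$ related to the data $d_j$ through a forward operator $A$, one selects the solution maximizing

$$ Q(f) \;=\; \alpha\, S(f) - \tfrac{1}{2}\chi^{2}(f), \qquad S(f) = -\sum_i f_i \ln\frac{f_i}{m_i}, \quad \chi^{2}(f) = \sum_j \frac{\bigl(d_j - (Af)_j\bigr)^{2}}{\sigma_j^{2}}, $$

where $m_i$ is a default model and $\alpha$ balances entropy against goodness of fit; the entropy term regularizes the otherwise ill-posed inversion.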