981 results for Compound Poisson Process
Abstract:
Background: Concerted evolution is normally used to describe parallel changes at different sites in a genome, but it is also observed in languages where a specific phoneme changes to the same other phoneme in many words in the lexicon—a phenomenon known as regular sound change. We develop a general statistical model that can detect concerted changes in aligned sequence data and apply it to study regular sound changes in the Turkic language family. Results: Linguistic evolution, unlike the genetic substitutional process, is dominated by events of concerted evolutionary change. Our model identified more than 70 historical events of regular sound change that occurred throughout the evolution of the Turkic language family, while simultaneously inferring a dated phylogenetic tree. Including regular sound changes yielded an approximately 4-fold improvement in the characterization of linguistic change over a simpler model of sporadic change, improved phylogenetic inference, and returned more reliable and plausible dates for events on the phylogenies. The historical timings of the concerted changes closely follow a Poisson process model, and the sound transition networks derived from our model mirror linguistic expectations. Conclusions: We demonstrate that a model with no prior knowledge of complex concerted or regular changes can nevertheless infer the historical timings and genealogical placements of events of concerted change from the signals left in contemporary data. Our model can be applied wherever discrete elements—such as genes, words, cultural trends, technologies, or morphological traits—can change in parallel within an organism or other evolving group.
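The abstract reports that the timings of the concerted changes closely follow a Poisson process model. Below is a minimal sketch of what that model implies, assuming the events are pooled onto a single time axis; the rate and time depth are illustrative values, not figures taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative values only (not from the paper): ~70 concerted changes
# spread over a time depth of roughly 2000 years.
rate = 70 / 2000.0      # events per year
t_max = 2000.0

# Under a homogeneous Poisson process, inter-event waiting times are
# i.i.d. Exponential(rate); their cumulative sums give the event timings.
waits = rng.exponential(1.0 / rate, size=200)
times = np.cumsum(waits)
times = times[times <= t_max]

print(f"simulated {times.size} concerted-change events")
print(f"mean waiting time: {waits.mean():.1f} yr (theory: {1.0 / rate:.1f} yr)")
```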
Abstract:
This thesis consists of three essays presenting extensions and applications of Real Options Theory, all of interest to economic policy makers in Brazil. The first offers an original analysis of bioprospecting, i.e. the exploitation of biological diversity for economic purposes. Two alternative structures are suggested for the design of the concession mechanism aimed at the sustainable use of Brazilian biodiversity: (i) a model of R&D projects with uncertain maturity, in which the intensity of the Poisson process governing the time to maturity depends explicitly on the level of biodiversity at the concession site; (ii) a Principal-Agent model, in which the State delegates the exercise of the investment option to the biotechnology research firm. The second essay develops the analogy between put options and import quotas. The parameters relevant for pricing the licences are now obtained endogenously, from the interaction between the importing firm and domestic producers. Finally, the third presents a pioneering analysis of the parallel (informal) market for precatório bonds (Brazilian court-ordered government debt). A valuation model for such bonds is constructed and proposed, based on the existing institutional framework on the subject in the central government as well as in the states and municipalities.
Abstract:
The sum of a random number of random variables, beyond its obvious conceptual and theoretical interest, resonates widely in the study of risk processes and branching processes. We reformulate Panjer's (1981) theory, which allows the iterative computation of the aggregate risk, using expected values of uniform random variables, describing an extension of the Panjer class and studying in detail the functional equation that characterizes it. We apply these ideas to the characterization of discrete randomness, exemplified by the behaviour of female birds that invest in promiscuity with multiple partners to guarantee the genetic diversity of their progeny, while taking care to keep up the appearance of fidelity so as to secure the partner's cooperation in the success of the brood. We present Laplace transforms and generating functions from a perspective that leads to a natural introduction of Pareto transforms, whose relevance we illustrate with examples.
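The Panjer (1981) recursion mentioned above computes the distribution of an aggregate (compound) sum iteratively from the severity distribution. A minimal sketch for the compound Poisson case, assuming severities supported on {1, 2, ...}; the intensity and severity probabilities are illustrative and not taken from the thesis.

```python
import math

def panjer_compound_poisson(lam, severity, s_max):
    """Aggregate-sum probabilities g[0..s_max] for a compound Poisson sum.

    lam      -- Poisson intensity of the claim count
    severity -- dict {k: P(individual claim = k)} with support on {1, 2, ...}
    """
    g = [0.0] * (s_max + 1)
    g[0] = math.exp(-lam)                      # P(S = 0) when f(0) = 0
    for s in range(1, s_max + 1):
        g[s] = (lam / s) * sum(k * severity.get(k, 0.0) * g[s - k]
                               for k in range(1, s + 1))
    return g

# Illustrative example: Poisson(2) claim count, severities 1 or 2 with equal probability.
probs = panjer_compound_poisson(2.0, {1: 0.5, 2: 0.5}, 10)
print(sum(probs))   # should be close to 1 for a large enough s_max
```

This is the Poisson special case (a = 0, b = λ) of the general (a, b, 0) recursion; severities with mass at zero require a modified starting value.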
Abstract:
This paper proposes a procedure to control on-line processes for attributes, using a Shewhart control chart with two control limits (a warning limit and a control limit) and based on a sequence of h inspections. The inspection procedure is based on Taguchi et al. (1989): when an item is inspected, if its number of non-conformities exceeds the control limit, the process is stopped and adjusted; the process is also stopped and adjusted if, in the last h inspections, all inspected items present a number of non-conformities between the warning limit and the control limit. Inspection is destructive, so inspected items are discarded. Properties of an ergodic Markov chain are used to obtain an expression for the average cost per item, and the aim is to determine four optimized parameters: the sampling interval between inspections (m); the constant W used to set the warning limit; the constant C used to set the control limit, with W ≤ C; and the length of the sequence of inspections (h). Numerical examples illustrate the proposed procedure.
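The average cost per item above is obtained from properties of an ergodic Markov chain. A minimal sketch of that general idea, computing a long-run average cost from the stationary distribution of an ergodic chain; the transition matrix and per-state costs are hypothetical placeholders, not the actual states of the control-chart model.

```python
import numpy as np

# Illustrative 3-state ergodic chain (placeholder transition probabilities,
# not the states of the control-chart model) and per-state costs per item.
P = np.array([[0.90, 0.08, 0.02],
              [0.50, 0.40, 0.10],
              [0.30, 0.20, 0.50]])
cost = np.array([0.1, 1.0, 5.0])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
pi = pi / pi.sum()

print("stationary distribution:", pi)
print("long-run average cost per item:", pi @ cost)
```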
Abstract:
The BTEX (benzene, toluene, ethylbenzene and xylene) mixture is an environmental pollutant with a high potential to contaminate water resources, especially groundwater. Bioremediation by microorganisms has often been used as a tool for removing BTEX from contaminated sites. The application of biological assays is useful for evaluating the efficiency of bioremediation processes and for identifying the toxicity of the original contaminants; it also allows the effects on test organisms of possible metabolites formed during biodegradation to be identified. In this study, we evaluated the genotoxic and mutagenic potential of five different BTEX concentrations in rat hepatoma tissue culture (HTC) cells, using comet and micronucleus assays, before and after biodegradation. A mutagenic effect was observed for the highest concentration tested and for its respective non-biodegraded concentration. Genotoxicity was significant for all non-biodegraded concentrations and not significant for the biodegraded ones. According to our results, BTEX is mutagenic at concentrations close to its water solubility and genotoxic even at lower concentrations, differing from some results reported for the mixture components when tested individually. Our results suggest a synergistic effect for the mixture and indicate that biodegradation is a safe and efficient methodology to be applied at BTEX-contaminated sites.
Abstract:
We consider a fully model-based approach for the analysis of distance sampling data. Distance sampling has been widely used to estimate abundance (or density) of animals or plants in a spatially explicit study area. There is, however, no readily available method of making statistical inference on the relationships between abundance and environmental covariates. Spatial Poisson process likelihoods can be used to simultaneously estimate detection and intensity parameters by modeling distance sampling data as a thinned spatial point process. A model-based spatial approach to distance sampling data has three main benefits: it allows complex and opportunistic transect designs to be employed, it allows estimation of abundance in small subregions, and it provides a framework to assess the effects of habitat or experimental manipulation on density. We demonstrate the model-based methodology with a small simulation study and analysis of the Dubbo weed data set. In addition, a simple ad hoc method for handling overdispersion is also proposed. The simulation study showed that the model-based approach compared favorably to conventional distance sampling methods for abundance estimation. In addition, the overdispersion correction performed adequately when the number of transects was high. Analysis of the Dubbo data set indicated a transect effect on abundance via Akaike’s information criterion model selection. Further goodness-of-fit analysis, however, indicated some potential confounding of intensity with the detection function.
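A minimal sketch of the thinned spatial point-process idea behind the model: points are generated from a homogeneous Poisson process and retained with a distance-dependent detection probability (here a half-normal function of the perpendicular distance to a transect). The intensity, transect location, and detection scale are illustrative, not values estimated from the Dubbo data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Homogeneous Poisson process on a 1 km x 1 km study area (intensity is illustrative).
intensity = 200.0                         # expected points per km^2
n = rng.poisson(intensity * 1.0)
x, y = rng.uniform(0, 1, n), rng.uniform(0, 1, n)

# Transect along x = 0.5; half-normal detection as a function of perpendicular distance.
sigma = 0.05                              # detection scale (km), illustrative
dist = np.abs(x - 0.5)
p_detect = np.exp(-dist**2 / (2 * sigma**2))

# Thinning: each point is observed independently with probability p_detect.
observed = rng.uniform(size=n) < p_detect
print(f"{n} points generated, {observed.sum()} detected")
```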
Abstract:
Reproducing Fourier's law of heat conduction from a microscopic stochastic model is a long-standing challenge in statistical physics. As was shown by Rieder, Lebowitz and Lieb many years ago, a chain of harmonically coupled oscillators connected to two heat baths at different temperatures does not reproduce the diffusive behaviour of Fourier's law, but instead a ballistic one with an infinite thermal conductivity. Since then, there has been a substantial effort from the scientific community to identify the key mechanism necessary to reproduce such diffusivity, which has usually revolved around anharmonicity and the effect of impurities. Recently, it was shown by Dhar, Venkateshan and Lebowitz that Fourier's law can be recovered by introducing an energy-conserving noise, whose role is to simulate the elastic collisions between the atoms and other microscopic degrees of freedom that one would expect to be present in a real solid. For a one-dimensional chain this is accomplished numerically by randomly flipping the sign of the velocity of an oscillator at the times of a Poisson process with a variable "rate of collisions". In this poster we present Langevin simulations of a one-dimensional chain of oscillators coupled to two heat baths at different temperatures. We consider both harmonic and anharmonic (quartic) interactions, which are studied with and without the energy-conserving noise. With these results we are able to map in detail how the heat conductivity k is influenced by both anharmonicity and the energy-conserving noise. We also present a detailed analysis of the behaviour of k as a function of the size of the system and the rate of collisions, which includes a finite-size scaling method that enables us to extract the relevant critical exponents. Finally, we show that for harmonic chains k is independent of temperature, both with and without the noise. Conversely, for anharmonic chains we find that k increases roughly linearly with the temperature of a given reservoir, while the temperature difference is kept fixed.
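A minimal sketch of the simulation ingredients described above, assuming a short harmonic chain with Langevin baths on the boundary oscillators and the energy-conserving noise implemented as velocity flips occurring with probability flip_rate*dt per step; all parameters are illustrative, not those used in the poster.

```python
import numpy as np

rng = np.random.default_rng(2)

N, k_spring, gamma = 32, 1.0, 0.5      # chain length, spring constant, bath coupling
T_left, T_right = 2.0, 1.0             # bath temperatures
flip_rate = 1.0                        # "rate of collisions" per oscillator
dt, steps = 1e-3, 100_000

q, v = np.zeros(N), np.zeros(N)
T_profile, samples = np.zeros(N), 0

for step in range(steps):
    # Harmonic nearest-neighbour forces with fixed walls at both ends.
    qp = np.concatenate(([0.0], q, [0.0]))
    f = k_spring * (qp[2:] - 2.0 * q + qp[:-2])

    # Langevin baths acting on the first and last oscillators (Euler-Maruyama).
    f[0] += -gamma * v[0] + np.sqrt(2.0 * gamma * T_left / dt) * rng.normal()
    f[-1] += -gamma * v[-1] + np.sqrt(2.0 * gamma * T_right / dt) * rng.normal()

    v += f * dt
    q += v * dt

    # Energy-conserving noise: each velocity flips sign with probability
    # flip_rate * dt per step (discretized Poisson process).
    flips = rng.uniform(size=N) < flip_rate * dt
    v[flips] *= -1.0

    if step > steps // 2:              # time-average the kinetic temperature
        T_profile += v**2
        samples += 1

print("kinetic temperature profile:", np.round(T_profile / samples, 2))
```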
Abstract:
In this paper, we study panel count data with informative observation times. We assume nonparametric and semiparametric proportional rate models for the underlying recurrent event process, where the form of the baseline rate function is left unspecified and a subject-specific frailty variable inflates or deflates the rate function multiplicatively. The proposed models allow the recurrent event processes and observation times to be correlated through their connections with the unobserved frailty; moreover, the distributions of both the frailty variable and observation times are considered as nuisance parameters. The baseline rate function and the regression parameters are estimated by maximizing a conditional likelihood function of observed event counts and solving estimation equations. Large sample properties of the proposed estimators are studied. Numerical studies demonstrate that the proposed estimation procedures perform well for moderate sample sizes. An application to a bladder tumor study is presented to illustrate the use of the proposed methods.
Abstract:
Recurrent event data are largely characterized by the rate function, but smoothing techniques for estimating the rate function have never been rigorously developed or studied in the statistical literature. This paper considers the moment and least squares methods for estimating the rate function from recurrent event data. Under an independent censoring assumption on the recurrent event process, we study statistical properties of the proposed estimators and propose bootstrap procedures for bandwidth selection and for the approximation of confidence intervals in the estimation of the occurrence rate function. It is shown that the moment method, without resmoothing via a smaller bandwidth, produces a curve with nicks occurring at the censoring times, whereas the least squares method has no such problem. Furthermore, the asymptotic variance of the least squares estimator is shown to be smaller under regularity conditions. However, in the implementation of the bootstrap procedures, the moment method is computationally more efficient than the least squares method because the former uses condensed bootstrap data. The performance of the proposed procedures is studied through Monte Carlo simulations and an epidemiological example on intravenous drug users.
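A minimal sketch of the moment-type idea: pooled event times are smoothed with a Gaussian kernel, with each event weighted by the inverse of the number of subjects still under observation at that event time. The simulated data, bandwidth, and censoring are illustrative; the paper's resmoothing and bootstrap steps are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate recurrent events for n subjects from a homogeneous Poisson process
# (true occurrence rate 2 per unit time) with independent uniform censoring.
n, true_rate = 50, 2.0
censor = rng.uniform(0.5, 1.0, size=n)
event_times = []
for c in censor:
    t = rng.exponential(1.0 / true_rate)
    while t <= c:
        event_times.append(t)
        t += rng.exponential(1.0 / true_rate)
event_times = np.array(event_times)

# Kernel-smoothed rate: each event weighted by the inverse of the risk-set size
# at that event time, then smoothed over a grid with a Gaussian kernel.
h = 0.1                                               # bandwidth, illustrative
grid = np.linspace(0.2, 0.8, 25)
kern = np.exp(-0.5 * ((grid[:, None] - event_times[None, :]) / h) ** 2)
kern /= h * np.sqrt(2.0 * np.pi)
at_risk = (censor[None, :] >= event_times[:, None]).sum(axis=1)
rate_hat = (kern / at_risk[None, :]).sum(axis=1)

print(np.round(rate_hat, 2))   # should fluctuate around the true rate of 2
```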
Abstract:
Light-frame wood buildings are widely built in the United States (U.S.). Natural hazards cause huge losses to light-frame wood construction. This study proposes methodologies and a framework to evaluate the performance and risk of light-frame wood construction. Performance-based engineering (PBE) aims to ensure that a building achieves the desired performance objectives when subjected to hazard loads. In this study, the collapse risk of a typical one-story light-frame wood building is determined using the Incremental Dynamic Analysis method. The collapse risks of buildings at four sites in the Eastern, Western, and Central regions of the U.S. are evaluated. Various sources of uncertainty are considered in the collapse risk assessment so that the influence of uncertainty on the collapse risk of light-frame wood construction can be evaluated. The collapse risks of the same building subjected to maximum considered earthquakes in different seismic zones are found to be non-uniform. In certain areas of the U.S., snow accumulation is significant, causing huge economic losses and threatening life safety. Few studies have investigated the snow hazard in combination with the seismic hazard. A Filtered Poisson Process (FPP) model is developed in this study, overcoming the shortcomings of the typically used Bernoulli model. The FPP model is validated by comparing the simulation results to weather records obtained from the National Climatic Data Center. The FPP model is applied in the proposed framework to assess the risk of a light-frame wood building subjected to combined snow and earthquake loads. Snow accumulation has a significant influence on the seismic losses of the building, and the Bernoulli snow model underestimates the seismic loss of buildings in areas with snow accumulation. An object-oriented framework is proposed in this study to perform risk assessment for light-frame wood construction. For home owners and stakeholders, risk expressed in terms of economic losses is much easier to understand than engineering parameters (e.g., inter-story drift). The proposed framework is used in two applications. One is to assess the loss of a building subjected to mainshock-aftershock sequences; aftershock and downtime costs are found to be important factors in the assessment of seismic losses. The framework is also applied to a wood building in the state of Washington to assess its loss under combined earthquake and snow loads. The proposed framework proves to be an appropriate tool for risk assessment of buildings subjected to multiple hazards. Limitations and future work are also identified.
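A minimal sketch of a filtered Poisson process of the general form S(t) = sum_i h(t - tau_i, Y_i): snowfall events arrive as a Poisson process, each deposits a random load, and each deposit decays exponentially after arrival as a stand-in for melting. All rates and parameters are illustrative, not calibrated to the National Climatic Data Center records used in the study.

```python
import numpy as np

rng = np.random.default_rng(4)

# Event arrivals over one winter season (days) as a homogeneous Poisson process.
season, arrival_rate = 120.0, 0.2          # ~0.2 snowfall events per day, illustrative
n_events = rng.poisson(arrival_rate * season)
arrivals = np.sort(rng.uniform(0, season, n_events))
loads = rng.exponential(5.0, n_events)     # deposited load per event (psf), illustrative
decay = 0.05                               # per-day exponential decay, stand-in for melting

# Filtered Poisson process: S(t) = sum_i Y_i * exp(-decay * (t - tau_i)) for t >= tau_i.
t = np.linspace(0, season, 1000)
response = loads[None, :] * np.exp(-decay * (t[:, None] - arrivals[None, :]))
response[t[:, None] < arrivals[None, :]] = 0.0
snow_load = response.sum(axis=1)

print(f"{n_events} events, peak ground snow load of roughly {snow_load.max():.1f} psf")
```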
Abstract:
The determination of the size and power of a test is a vital part of clinical trial design. This research focuses on the simulation of clinical trial data with time-to-event as the primary outcome. It investigates the impact of different recruitment patterns and time-dependent hazard structures on the size and power of the log-rank test. A non-homogeneous Poisson process is used to simulate entry times according to the different accrual patterns. A Weibull distribution is employed to simulate survival times according to the different hazard structures. The current study uses simulation methods to evaluate the effect of different recruitment patterns on size and power estimates of the log-rank test. The size of the log-rank test is estimated by simulating survival times with identical hazard rates for the treatment and control arms of the study, resulting in a hazard ratio of one. The power of the log-rank test at specific values of the hazard ratio (≠ 1) is estimated by simulating survival times with different, but proportional, hazard rates for the two arms of the study. Different shapes (constant, decreasing, or increasing) of the hazard function of the Weibull distribution are also considered to assess the effect of the hazard structure on the size and power of the log-rank test.
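A minimal sketch of the two simulation ingredients described above: entry times from a non-homogeneous Poisson process generated by thinning, and survival times from a Weibull distribution with proportional hazards between arms. The intensity function, Weibull parameters, and hazard ratio are illustrative, not those used in the study.

```python
import numpy as np

rng = np.random.default_rng(5)

# --- Accrual: non-homogeneous Poisson process by thinning -------------------
accrual_period = 24.0                          # months, illustrative
def intensity(t):                              # ramp-up recruitment pattern, illustrative
    return 10.0 * t / accrual_period           # patients per month

lam_max = 10.0                                 # upper bound on intensity(t)
entry_times = []
t = 0.0
while True:
    t += rng.exponential(1.0 / lam_max)        # candidate arrival from a rate-lam_max process
    if t > accrual_period:
        break
    if rng.uniform() < intensity(t) / lam_max: # accept with probability intensity(t)/lam_max
        entry_times.append(t)
entry_times = np.array(entry_times)

# --- Survival: Weibull times with proportional hazards ----------------------
shape, scale, hazard_ratio = 1.5, 20.0, 0.7    # illustrative
arm = rng.integers(0, 2, size=entry_times.size)            # 0 = control, 1 = treatment
hazard_mult = np.where(arm == 1, hazard_ratio, 1.0)
u = rng.uniform(size=entry_times.size)
# Invert S(t) = exp(-c * (t/scale)^shape) with proportional-hazards multiplier c.
surv_times = scale * (-np.log(u) / hazard_mult) ** (1.0 / shape)

print(f"{entry_times.size} patients accrued; median survival "
      f"control {np.median(surv_times[arm == 0]):.1f} vs "
      f"treatment {np.median(surv_times[arm == 1]):.1f} months")
```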
Abstract:
Animal tracking has been addressed by different initiatives over the last two decades. Most of them rely on satellite connectivity on every single node and lack energy-saving strategies. This paper presents several new contributions to the tracking of dynamic heterogeneous asynchronous networks (primary nodes with GPS and secondary nodes with a kinetic generator), motivated by the animal tracking paradigm with random transmissions. A simple approach based on connectivity and coverage intersection is compared with more sophisticated algorithms based on ad-hoc implementations of distributed Kalman-based filters that integrate measurement information using Consensus principles in order to provide enhanced accuracy. Several simulations varying the coverage range, the random behavior of the kinetic generator (modeled as a Poisson process) and the periodic activation of GPS are included. In addition, the study is complemented with hardware developments and implementations on commercial off-the-shelf equipment, which show the feasibility of these proposals on real hardware.
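A minimal sketch of the event-timing assumptions described above: a secondary node transmits at the random times of a Poisson process (exponential inter-arrival times driven by the kinetic generator), while a primary node activates its GPS periodically. The rates and periods are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

horizon = 3600.0                    # one hour of simulated operation (s), illustrative
kinetic_rate = 1 / 120.0            # secondary node: ~1 random transmission per 2 min
gps_period = 600.0                  # primary node: GPS fix every 10 min

# Secondary node: Poisson process -> exponential inter-arrival times.
gaps = rng.exponential(1.0 / kinetic_rate, size=100)
tx_times = np.cumsum(gaps)
tx_times = tx_times[tx_times <= horizon]

# Primary node: deterministic periodic GPS activations.
gps_times = np.arange(gps_period, horizon + 1e-9, gps_period)

print(f"secondary node: {tx_times.size} random transmissions in 1 h")
print(f"primary node:   {gps_times.size} periodic GPS fixes in 1 h")
```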
Abstract:
The seismic hazard of the Iberian Peninsula is analysed using a nonparametric methodology based on statistical kernel functions; the activity rate is derived from the catalogue data, both its spatial dependence (without a seismogenetic zonation) and its magnitude dependence (without using Gutenberg–Richter's law). The catalogue is that of the Instituto Geográfico Nacional, supplemented with other catalogues around the periphery; the quantification of events has been homogenised and spatially or temporally interrelated events have been suppressed to assume a Poisson process. The activity rate is determined by the kernel function, the bandwidth and the effective periods. The resulting rate is compared with that produced using Gutenberg–Richter statistics and a zoned approach. Three attenuation laws have been employed, one for deep sources and two for shallower events, depending on whether their magnitude was above or below 5. The results are presented as seismic hazard maps for different spectral frequencies and for return periods of 475 and 2475 yr, which allows constructing uniform hazard spectra.
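A minimal sketch of the zonation-free activity-rate idea: the rate at a site is estimated by summing kernel contributions from catalogue epicentres and dividing by an effective observation period. The synthetic catalogue, Gaussian kernel, fixed bandwidth, and single completeness period are illustrative simplifications; the paper's actual kernel, bandwidth, and effective periods are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic catalogue: epicentre coordinates (km) clustered around two sources.
epicentres = np.vstack([rng.normal([50, 50], 10, (200, 2)),
                        rng.normal([150, 80], 15, (100, 2))])
effective_period = 100.0            # years of (assumed complete) observation, illustrative
bandwidth = 20.0                    # kernel bandwidth (km), illustrative

def activity_rate(site, events, h, period):
    """Events per year per km^2 at `site`, from a 2D Gaussian kernel estimate."""
    d2 = np.sum((events - site) ** 2, axis=1)
    k = np.exp(-d2 / (2 * h**2)) / (2 * np.pi * h**2)
    return k.sum() / period

near = activity_rate(np.array([50.0, 50.0]), epicentres, bandwidth, effective_period)
far = activity_rate(np.array([200.0, 200.0]), epicentres, bandwidth, effective_period)
print(f"rate near a cluster: {near:.4f}  |  rate far from activity: {far:.6f}")
```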