972 results for Laplace-Metropolis estimator


Relevance: 10.00%

Abstract:

In this article, we extend the earlier work of Freeland and McCabe [Journal of Time Series Analysis (2004) Vol. 25, pp. 701–722] and develop a general framework for maximum likelihood (ML) analysis of higher-order integer-valued autoregressive processes. Our exposition includes the case where the innovation sequence has a Poisson distribution and the thinning is binomial. A recursive representation of the transition probability of the model is proposed. Based on this transition probability, we derive expressions for the score function and the Fisher information matrix, which form the basis for ML estimation and inference. Similar to the results in Freeland and McCabe (2004), we show that the score function and the Fisher information matrix can be neatly represented as conditional expectations. Using the INAR(2) specification with binomial thinning and Poisson innovations, we examine both the asymptotic efficiency and finite sample properties of the ML estimator in relation to the widely used conditional least squares (CLS) and Yule–Walker (YW) estimators. We conclude that, if the Poisson assumption can be justified, there are substantial gains to be had from using ML, especially when the thinning parameters are large.
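
The article's recursive representation is not reproduced here; as a minimal sketch, assuming binomial thinning and Poisson innovations, the INAR(2) transition probability can be written as a brute-force double convolution and plugged into a conditional log-likelihood (the parameter names alpha1, alpha2, lam are illustrative):

```python
import numpy as np
from scipy.stats import binom, poisson

def inar2_transition(k, x1, x2, alpha1, alpha2, lam):
    """P(X_t = k | X_{t-1} = x1, X_{t-2} = x2) under binomial thinning of the
    two lags and Poisson(lam) innovations: a convolution of two binomials and a Poisson."""
    p = 0.0
    for m in range(min(k, x1) + 1):
        for n in range(min(k - m, x2) + 1):
            p += (binom.pmf(m, x1, alpha1)
                  * binom.pmf(n, x2, alpha2)
                  * poisson.pmf(k - m - n, lam))
    return p

def inar2_cond_loglik(params, x):
    """Conditional log-likelihood of an observed count series x,
    conditioning on the first two observations."""
    alpha1, alpha2, lam = params
    ll = 0.0
    for t in range(2, len(x)):
        ll += np.log(inar2_transition(x[t], x[t - 1], x[t - 2],
                                      alpha1, alpha2, lam))
    return ll
```

Maximising this with a constrained optimiser (alpha1, alpha2 >= 0, alpha1 + alpha2 < 1, lam > 0) gives the ML estimates that the article compares with the CLS and Yule–Walker alternatives.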

Relevance: 10.00%

Abstract:

The conflict known as the "Troubles" in Northern Ireland began during the late 1960s and is defined by political and ethno-sectarian violence between state, pro-state, and anti-state forces. Reasons for the conflict are contested and complicated by social, religious, political, and cultural disputes, with much of the debate concerning the victims of violence hardened by competing propaganda-conditioned perspectives. This article introduces a database holding information on the location of individual fatalities connected with the contemporary Irish conflict. For each victim, it includes a demographic profile, home address, manner of death, and the organization responsible. Employing geographic information system (GIS) techniques, the database is used to measure, map, and analyze the spatial distribution of conflict-related deaths between 1966 and 2007 across Belfast, the capital city of Northern Ireland, with respect to levels of segregation, social and economic deprivation, and interfacing. The GIS analysis includes a kernel density estimator designed to generate smooth intensity surfaces of the conflict-related deaths by both incident and home locations. Neighborhoods with high-intensity surfaces of deaths were those with the highest levels of segregation (over 90 percent Catholic or Protestant) and deprivation, and they were located near physical barriers, the so-called peacelines, between predominantly Catholic and predominantly Protestant communities. Finally, despite the onset of peace and the formation of a power-sharing and devolved administration (the Northern Ireland Assembly), disagreements remain over the responsibility for and "commemoration" of victims, sentiments that still uphold division and atavistic attitudes between spatially divided Catholic and Protestant populations.
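
The article's fatality database and its particular kernel are not reproduced here; as an illustrative sketch only, a smooth intensity surface of point locations can be produced with scipy's Gaussian kernel density estimate (default bandwidth, synthetic coordinates standing in for incident locations):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic easting/northing coordinates standing in for incident locations.
rng = np.random.default_rng(0)
deaths_xy = rng.normal(loc=[[0.0, 0.0]], scale=0.5, size=(300, 2))

# Fit a 2-D Gaussian KDE and evaluate it on a grid to obtain a smooth
# intensity surface of death locations.
kde = gaussian_kde(deaths_xy.T)
gx, gy = np.mgrid[-2:2:100j, -2:2:100j]
intensity = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

# High-intensity cells can then be overlaid on segregation and deprivation
# layers in a GIS, as the article does for Belfast.
print(intensity.max())
```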

Relevance: 10.00%

Abstract:

The characterization and definition of the complexity of objects is an important but very difficult problem that has attracted much interest in many different fields. In this paper we introduce a new measure, called the network diversity score (NDS), which allows us to quantify structural properties of networks. We demonstrate numerically that our diversity score is capable of distinguishing ordered, random, and complex networks from each other and, hence, of categorizing networks with respect to their structural complexity. We study 16 additional network complexity measures and find that none of them has comparably good categorization capabilities. In contrast to many other measures suggested so far that aim to characterize the structural complexity of networks, our score is different for a variety of reasons. First, our score is multiplicatively composed of four individual scores, each assessing a different structural property of a network; that is, the composite score reflects the structural diversity of a network. Second, our score is defined for a population of networks rather than for individual networks. We show that this removes an unwanted ambiguity inherently present in measures that are based on single networks. To apply our measure in practice, we provide a statistical estimator for the diversity score, based on a finite number of samples.

Relevance: 10.00%

Abstract:

Little is known about the microevolutionary processes shaping within-river population genetic structure of aquatic organisms characterized by high levels of homing and spawning-site fidelity. Using a microsatellite panel, we observed complex and highly significant levels of intra-river population genetic substructure and isolation-by-distance in the Atlantic salmon stock of a large river system. Two evolutionary models, the member-vagrant and metapopulation models, have been considered to explain the mechanisms promoting genetic substructuring in Atlantic salmon. We show that both models can be used simultaneously to explain the patterns and levels of population structuring within the Foyle system, and that anthropogenic factors have had a large influence on the contemporary population structure observed. In an analytical development, we found that the frequently used estimator of genetic differentiation, F-ST, routinely underestimated genetic differentiation by a factor of three to four compared with the equivalent statistic, Jost's D-est (Jost 2008), although the two statistics showed a near-perfect correlation. Despite ongoing discussion regarding the usefulness of "adjusted" F-ST statistics, we argue that they could be useful for identifying and quantifying qualitative differences between populations, which are important from management and conservation perspectives as an indicator of biologically significant variation among tributary populations or a warning of critical environmental damage.
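
The microsatellite analysis of the Foyle data is not reproduced here; as a minimal sketch of the two statistics being contrasted, Nei's heterozygosity-based G_ST (a common F-ST analogue) and Jost's D_est (Jost 2008) can be computed from per-subpopulation allele frequencies for a single locus (the frequencies below are made up):

```python
import numpy as np

def gst_and_jost_d(freqs):
    """freqs: array of shape (n_subpops, n_alleles); rows are allele-frequency
    vectors for one locus. Returns (G_ST, Jost's D_est)."""
    freqs = np.asarray(freqs, dtype=float)
    n = freqs.shape[0]
    h_s = np.mean(1.0 - np.sum(freqs ** 2, axis=1))   # mean within-subpop heterozygosity
    p_bar = freqs.mean(axis=0)
    h_t = 1.0 - np.sum(p_bar ** 2)                    # total heterozygosity
    g_st = (h_t - h_s) / h_t
    d_est = (h_t - h_s) / (1.0 - h_s) * n / (n - 1)
    return g_st, d_est

# Two hypothetical tributary populations sharing a locus with three alleles.
print(gst_and_jost_d([[0.7, 0.2, 0.1],
                      [0.2, 0.5, 0.3]]))
```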

Relevance: 10.00%

Abstract:

In many applications in applied statistics, researchers reduce the complexity of a data set by combining a group of variables into a single measure using factor analysis or an index number. We argue that such compression loses information if the data actually have high dimensionality. We advocate the use of a non-parametric estimator commonly used in physics (the Takens estimator) to estimate the correlation dimension of the data prior to compression. The advantage of this approach over traditional linear data-compression approaches is that the data do not have to be linearized. Applying our ideas to the United Nations Human Development Index, we find that the four variables used in its construction have dimension three, so the index loses information.
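
The HDI application is not reproduced here; as a minimal sketch, Takens' maximum-likelihood estimator of the correlation dimension uses all interpoint distances below a cutoff r0 chosen by the analyst (the cutoff and the test data below are illustrative):

```python
import numpy as np
from scipy.spatial.distance import pdist

def takens_dimension(data, r0):
    """Takens' maximum-likelihood correlation-dimension estimate: the reciprocal
    of the mean of ln(r0 / r) over all interpoint distances r < r0."""
    d = pdist(np.asarray(data, dtype=float))
    d = d[(d > 0) & (d < r0)]
    return 1.0 / np.mean(np.log(r0 / d))

# Points filling a 2-D square should give an estimate close to 2.
rng = np.random.default_rng(1)
print(takens_dimension(rng.uniform(size=(2000, 2)), r0=0.1))
```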

Relevance: 10.00%

Abstract:

The problem of model selection for a univariate long-memory time series is investigated once a semiparametric estimator of the long-memory parameter has been used. Standard information criteria are not consistent in this case. A Modified Information Criterion (MIC) that overcomes these difficulties is introduced, and proofs of its asymptotic validity are provided. The results are general and cover a wide range of short-memory processes. Simulation evidence compares the new and existing methodologies, and empirical applications to monthly inflation and daily realized volatility are presented.
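
Neither the MIC nor the particular semiparametric estimator the paper builds on is reproduced here; as a sketch of the kind of first-step estimator involved, the Geweke–Porter-Hudak (GPH) log-periodogram regression estimates the long-memory parameter d from the lowest Fourier frequencies (the bandwidth choice m = sqrt(n) is a common illustrative default, not the paper's):

```python
import numpy as np

def gph_estimate(x, m=None):
    """Geweke–Porter-Hudak log-periodogram estimate of the long-memory parameter d."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(np.sqrt(n))                        # common bandwidth choice
    freqs = 2.0 * np.pi * np.arange(1, m + 1) / n  # lowest Fourier frequencies
    periodogram = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2.0 * np.pi * n)
    regressor = np.log(4.0 * np.sin(freqs / 2.0) ** 2)
    slope = np.polyfit(regressor, np.log(periodogram), 1)[0]
    return -slope                                  # estimated d

# White noise has d = 0, so the estimate should be near zero.
rng = np.random.default_rng(2)
print(gph_estimate(rng.standard_normal(2048)))
```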

Relevance: 10.00%

Abstract:

A two-thermocouple sensor characterization method for use in variable-flow applications is proposed. Previous offline methods for constant-velocity flow are extended using sliding data windows and polynomials to accommodate variable velocity. Analysis of Monte Carlo simulation studies confirms that the unbiased and consistent parameter estimator outperforms alternatives in the literature and has the added advantage of not requiring a priori knowledge of the time-constant ratio of the thermocouples. Experimental results from a test rig are also presented.
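
The sliding-window/polynomial extension for variable velocity is not reproduced here; as a constant-velocity baseline sketch only, two first-order sensors exposed to the same unknown gas temperature satisfy y1 + tau1*dy1/dt = y2 + tau2*dy2/dt, so the two time constants can be fitted by least squares (the synthetic test signal and sampling choices are illustrative):

```python
import numpy as np

def estimate_time_constants(t, y1, y2):
    """Least-squares estimates of (tau1, tau2) from two first-order sensors
    measuring the same unknown gas temperature:
        y1 - y2 = -tau1*dy1/dt + tau2*dy2/dt."""
    dy1 = np.gradient(y1, t)
    dy2 = np.gradient(y2, t)
    A = np.column_stack([-dy1, dy2])
    tau, *_ = np.linalg.lstsq(A, y1 - y2, rcond=None)
    return tau  # [tau1, tau2]

def lag(tg, t, tau):
    """Simple Euler simulation of a first-order thermocouple lag."""
    y = np.empty_like(tg)
    y[0] = tg[0]
    dt = t[1] - t[0]
    for k in range(1, len(t)):
        y[k] = y[k - 1] + dt / tau * (tg[k - 1] - y[k - 1])
    return y

# Synthetic check: a fluctuating gas temperature filtered by two different lags;
# the estimator approximately recovers the two time constants (0.05 s and 0.15 s).
t = np.linspace(0.0, 10.0, 2001)
tg = 300.0 + 20.0 * np.sin(2.0 * np.pi * 0.8 * t)
print(estimate_time_constants(t, lag(tg, t, 0.05), lag(tg, t, 0.15)))
```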

Relevance: 10.00%

Abstract:

The optimization of full-scale biogas plant operation is of great importance in making biomass a competitive source of renewable energy. The implementation of innovative control and optimization algorithms, such as Nonlinear Model Predictive Control, requires an online estimate of the operating state of the biogas plant. This state estimation allows optimal control and operating decisions to be made according to the actual state of the plant. In this paper such a state estimator is developed using a calibrated simulation model of a full-scale biogas plant, which is based on the Anaerobic Digestion Model No. 1. The use of advanced pattern recognition methods shows that model states can be predicted from basic online measurements such as biogas production, CH4 and CO2 content in the biogas, pH value, and substrate feed volume of known substrates. The machine learning methods used are trained and evaluated on synthetic data created with the biogas plant model by simulating a wide range of possible plant operating regions. Results show that the operating state vector of the modelled anaerobic digestion process can be predicted with an overall accuracy of about 90%. This facilitates the application of state-based optimization and control algorithms to full-scale biogas plants and therefore fosters the production of eco-friendly energy from biomass.
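
The ADM1-based plant model and the specific pattern recognition methods of the paper are not reproduced here; as a generic sketch of the approach (scikit-learn assumed available, the measurement/state relationships entirely made up), a multi-output regressor can be trained on simulated data to map online measurements to an operating-state vector:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for simulator output: each row holds online measurements
# (biogas flow, CH4 %, CO2 %, pH, feed volume); the target is a hypothetical
# 3-component operating-state vector.
rng = np.random.default_rng(3)
measurements = rng.uniform(size=(5000, 5))
states = np.column_stack([
    measurements[:, 0] * measurements[:, 1],           # made-up relationships
    np.log1p(measurements[:, 2] + measurements[:, 3]),
    measurements[:, 4] ** 2,
]) + 0.01 * rng.standard_normal((5000, 3))

X_tr, X_te, y_tr, y_te = train_test_split(measurements, states, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out synthetic data:", model.score(X_te, y_te))
```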

Relevance: 10.00%

Abstract:

This paper investigates a queuing system for QoS optimization of multimedia traffic consisting of aggregated streams with diverse QoS requirements transmitted to a mobile terminal over a common downlink shared channel. The queuing system, proposed for buffer management of aggregated single-user traffic in the base station of High-Speed Downlink Packet Access (HSDPA), allows optimum loss/delay/jitter performance for end-user multimedia traffic comprising delay-tolerant non-real-time streams and partially loss-tolerant real-time streams. In the queuing system, the real-time stream has non-preemptive service priority, but the number of its packets in the system is restricted by a constant; the non-real-time stream has no service priority but is allowed unlimited access to the system. Both types of packets arrive according to stationary Poisson processes, and service times follow general distributions that depend on the packet type. The stability condition for the model is derived. The queue-length distribution for both types of customers is calculated at arbitrary epochs and at service-completion epochs, the loss probability for priority packets is computed, and the waiting-time distribution for both packet types is obtained in terms of its Laplace–Stieltjes transform, from which the mean waiting time and jitter are computed. Numerical examples demonstrate the effectiveness of the queuing system for QoS optimization of buffered end-user multimedia traffic with aggregated real-time and non-real-time streams.
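
The paper's exact analysis (finite room for priority packets, full distributions via transforms) is not reproduced here; as a point of comparison, the classical unbounded non-preemptive priority M/G/1 mean waiting times are easy to compute and show the qualitative loss/delay trade-off between the two classes:

```python
def mg1_priority_mean_waits(lam1, lam2, es1, es2, es1_sq, es2_sq):
    """Mean waits for class 1 (high priority, e.g. real-time) and class 2
    (low priority) in a classical non-preemptive priority M/G/1 queue.
    lam*: arrival rates; es*: mean service times; es*_sq: second moments."""
    rho1, rho2 = lam1 * es1, lam2 * es2
    assert rho1 + rho2 < 1.0, "unstable: total load must be below 1"
    r = 0.5 * (lam1 * es1_sq + lam2 * es2_sq)   # mean residual work seen at arrival
    w1 = r / (1.0 - rho1)
    w2 = r / ((1.0 - rho1) * (1.0 - rho1 - rho2))
    return w1, w2

# Example: exponential service with mean 0.01 s for both classes (E[S^2] = 2*mean^2).
print(mg1_priority_mean_waits(30.0, 40.0, 0.01, 0.01, 2e-4, 2e-4))
```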

Relevance: 10.00%

Abstract:

We present results for a variety of Monte Carlo annealing approaches, both classical and quantum, benchmarked against one another for the textbook optimization exercise of a simple one-dimensional double well. In classical (thermal) annealing, the dependence upon the move chosen in a Metropolis scheme is studied and correlated with the spectrum of the associated Markov transition matrix. In quantum annealing, the path integral Monte Carlo approach is found to yield nontrivial sampling difficulties associated with the tunneling between the two wells. The choice of fictitious quantum kinetic energy is also addressed. We find that a "relativistic" kinetic energy form, leading to a higher probability of long real-space jumps, can be considerably more effective than the standard nonrelativistic one.
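
The path-integral quantum annealing and the Markov-matrix spectral analysis are not reproduced here; as a minimal sketch of the classical baseline, Metropolis thermal annealing on the symmetric double well V(x) = (x^2 - 1)^2 with a box-move proposal and a geometric cooling schedule (all parameter values illustrative) looks like this:

```python
import numpy as np

def anneal_double_well(n_steps=20000, step_size=0.5, t_start=2.0, t_end=1e-3):
    """Classical (thermal) Metropolis annealing on V(x) = (x^2 - 1)^2,
    a one-dimensional double well with minima at x = -1 and x = +1."""
    v = lambda x: (x * x - 1.0) ** 2
    rng = np.random.default_rng(4)
    temps = np.geomspace(t_start, t_end, n_steps)        # geometric cooling schedule
    x = rng.uniform(-2.0, 2.0)
    for temp in temps:
        x_new = x + step_size * rng.uniform(-1.0, 1.0)   # box-move proposal
        dv = v(x_new) - v(x)
        if dv <= 0.0 or rng.random() < np.exp(-dv / temp):
            x = x_new                                    # Metropolis acceptance
    return x

print(anneal_double_well())   # ends near one of the minima, x close to -1 or +1
```

The choice of proposal move corresponds to the "move chosen in a Metropolis scheme" whose influence the paper studies; a longer-tailed proposal plays a role analogous to the "relativistic" kinetic energy discussed for the quantum case.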

Relevance: 10.00%

Abstract:

We consider the local order estimation of nonlinear autoregressive systems with exogenous inputs (NARX), which may have different local dimensions at different points. By minimizing the kernel-based local information criterion introduced in this paper, strongly consistent estimates of the local orders of the NARX system at the points of interest are obtained. A modification of the criterion and a simple procedure for searching for its minimum are also discussed. The theoretical results derived here are tested on simulation examples.

Relevance: 10.00%

Abstract:

Research over the past two decades on the Holocene sediments from the tide-dominated west side of the lower Ganges delta has focused on constraining the sedimentary environment through grain-size distributions (GSD). GSD has traditionally been assessed using probability density function (PDF) methods (e.g. log-normal and log-skew-Laplace functions), but these approaches do not acknowledge the compositional nature of the data, which may compromise lithofacies interpretations. The use of PDF approaches in GSD analysis poses a series of challenges for the development of lithofacies models, such as equifinal distribution coefficients and the obscuring of empirical variability in the data. In this study a methodological framework for characterising GSD is presented based on compositional data analysis (CODA) combined with multivariate statistics, providing a statistically more robust analysis of the fine tidal-estuary sediments of the West Bengal Sundarbans than the alternative PDF approaches.
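
The study's full multivariate framework is not reproduced here; as a minimal sketch of the CODA step, grain-size class proportions can be mapped out of the simplex with the centred log-ratio (clr) transform before applying standard multivariate statistics (the proportions below are hypothetical):

```python
import numpy as np

def clr(compositions):
    """Centred log-ratio transform: log of each part divided by the geometric
    mean of its row, mapping compositional rows to unconstrained real space."""
    x = np.asarray(compositions, dtype=float)
    log_x = np.log(x)
    return log_x - log_x.mean(axis=1, keepdims=True)

# Hypothetical sand/silt/clay proportions for three samples (rows sum to 1).
gsd = np.array([[0.10, 0.55, 0.35],
                [0.25, 0.50, 0.25],
                [0.05, 0.40, 0.55]])
print(clr(gsd))   # ready for PCA, clustering, or other multivariate analysis
```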

Relevance: 10.00%

Abstract:

This paper presents a novel real-time power-device temperature estimation method that monitors the power MOSFET's junction temperature shift arising from thermal aging effects and incorporates the updated electrothermal models of power modules into digital controllers. The real-time estimator is emerging as an important tool for active control of device junction temperature as well as for online health monitoring of power electronic systems, but its thermal model fails to address the device's ongoing degradation. Because of the mismatch in coefficients of thermal expansion between the layers of a power device, repetitive thermal cycling causes cracks, voids, and even delamination within the device components, particularly in the solder and thermal-grease layers. Consequently, the thermal resistance of the power device increases, making it possible to use thermal resistance (and junction temperature) as a key indicator for condition monitoring and control purposes. In this paper, the device temperature predicted via threshold-voltage measurements is compared with the real-time estimates, and the difference is attributed to the aging of the device. The thermal models in the digital controllers are frequently updated to correct the shift caused by thermal aging effects. Experimental results on three power MOSFETs confirm that the proposed methodologies incorporate the thermal aging effects into the power-device temperature estimator with good accuracy. The developed adaptive technologies can be applied to other power devices such as IGBTs and SiC MOSFETs, and have significant economic implications.
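
The paper's electrothermal models and threshold-voltage measurement scheme are not reproduced here; as an illustrative sketch of why a thermal-resistance rise shifts the estimated junction temperature, a commonly used Foster-network form of the junction-to-case thermal impedance can be evaluated for a constant power step (the R and tau values, the power loss, and the 20% aging factor are all made up):

```python
import numpy as np

def junction_temperature(p_loss, t_case, r_th, tau, t):
    """Junction temperature for a constant power step using a Foster network:
    T_j(t) = T_case + P_loss * sum_i R_i * (1 - exp(-t / tau_i))."""
    r_th, tau = np.asarray(r_th, float), np.asarray(tau, float)
    z_th = np.sum(r_th * (1.0 - np.exp(-t / tau)))
    return t_case + p_loss * z_th

# Illustrative 3-element Foster model (degrees C per watt, seconds).
r_new = [0.10, 0.25, 0.40]
tau = [0.001, 0.05, 1.0]
r_aged = [r * 1.2 for r in r_new]    # hypothetical 20 % thermal-resistance rise

print("fresh device:", junction_temperature(50.0, 60.0, r_new, tau, t=10.0))
print("aged device :", junction_temperature(50.0, 60.0, r_aged, tau, t=10.0))
```

Updating the stored R values when aging is detected is the kind of model correction the paper's adaptive estimator performs.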

Relevance: 10.00%

Abstract:

This work proposes an extended version of the well-known tree-augmented naive Bayes (TAN) classifier in which the structure-learning step is performed without requiring features to be connected to the class. Based on a modification of Edmonds’ algorithm, our structure-learning procedure explores a superset of the structures considered by TAN, yet achieves global optimality of the learning score function very efficiently (quadratic in the number of features, the same complexity as learning TANs). A range of experiments shows that we obtain models with better accuracy than TAN and comparable to that of the state-of-the-art averaged one-dependence estimator (AODE) classifier.
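
The modified Edmonds' procedure of the paper is not reproduced here; as a sketch of the classical TAN baseline it extends, the feature tree can be built as a maximum spanning tree over class-conditional mutual information (networkx assumed available, discrete features assumed, data below made up):

```python
import numpy as np
import networkx as nx

def cond_mutual_info(xi, xj, c):
    """Empirical conditional mutual information I(Xi; Xj | C) for discrete arrays."""
    mi = 0.0
    for cv in np.unique(c):
        mask = c == cv
        pc = mask.mean()
        xi_c, xj_c = xi[mask], xj[mask]
        for a in np.unique(xi_c):
            for b in np.unique(xj_c):
                p_ab = np.mean((xi_c == a) & (xj_c == b))
                if p_ab > 0:
                    mi += pc * p_ab * np.log(
                        p_ab / (np.mean(xi_c == a) * np.mean(xj_c == b)))
    return mi

def tan_tree(X, y):
    """Classical TAN skeleton: maximum spanning tree over I(Xi; Xj | C)."""
    g = nx.Graph()
    d = X.shape[1]
    for i in range(d):
        for j in range(i + 1, d):
            g.add_edge(i, j, weight=cond_mutual_info(X[:, i], X[:, j], y))
    return list(nx.maximum_spanning_tree(g).edges)

rng = np.random.default_rng(5)
X = rng.integers(0, 3, size=(500, 4))   # made-up discrete features
y = rng.integers(0, 2, size=500)        # made-up class labels
print(tan_tree(X, y))
```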

Relevance: 10.00%

Abstract:

This paper considers inference from multinomial data and addresses the problem of choosing the strength of the Dirichlet prior under a mean-squared error criterion. We compare the Maximum Likelihood Estimator (MLE) and the most commonly used Bayesian estimators obtained by assuming a prior Dirichlet distribution with non-informative prior parameters, that is, with all parameters of the Dirichlet equal and summing to the so-called strength of the prior. Under this criterion, the MLE becomes preferable to the Bayesian estimators as the number of categories k of the multinomial increases, because the non-informative Bayesian estimators dominate only in a region that quickly shrinks as k grows. This can be avoided if the strength of the prior is not kept constant but is decreased with the number of categories. We argue that the strength should decrease at least k times faster than in the usual estimators.
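
The paper's analytical comparison is not reproduced here; as a quick Monte Carlo sketch, the mean-squared errors of the MLE and of the posterior mean under a symmetric Dirichlet prior with total strength s can be compared directly (the truth vector, sample size, and strength values are illustrative):

```python
import numpy as np

def mse_comparison(p_true, n_draws, strength, n_rep=5000, seed=6):
    """Monte Carlo MSE of the MLE versus the posterior mean under a symmetric
    Dirichlet prior with total strength s (each alpha_i = s / k)."""
    rng = np.random.default_rng(seed)
    k = len(p_true)
    counts = rng.multinomial(n_draws, p_true, size=n_rep)
    p_mle = counts / n_draws
    p_bayes = (counts + strength / k) / (n_draws + strength)
    mse = lambda est: np.mean(np.sum((est - p_true) ** 2, axis=1))
    return mse(p_mle), mse(p_bayes)

# A skewed 20-category multinomial; how the two estimators rank depends on how
# far the truth lies from the uniform distribution the prior shrinks towards.
p = np.arange(1, 21, dtype=float)
p /= p.sum()
for s in (1.0, 0.1):
    print(f"strength={s}: (MLE MSE, Bayes MSE) =", mse_comparison(p, 50, s))
```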