920 results for Sub-registry. Empirical Bayesian estimator. General equation. Balancing adjustment factor


Relevance:

40.00%

Publisher:

Abstract:

In an Arab oil-producing country in the Middle East such as Kuwait, the oil industry is the country's main and most important industry. Its importance stems from the significant role it plays in both the national economy and the global economy, and its criticality arises from its interconnection with national security and power in the Middle East region. Conducting this research in such a crucial industry therefore adds value to companies within it, as the study thoroughly investigated the main components of the TQM implementation process and identified which components significantly affect TQM implementation and the business results it yields. In addition, the oil sector is a large sector known for its wealth of employees with different national cultures and backgrounds; this culture-heterogeneous industry is thus a most appropriate environment in which to address a need in the literature to investigate the effects of national culture values on the TQM implementation process. Furthermore, this research developed a new conceptual model of the TQM implementation process in the Kuwaiti oil industry that applies generally to operations and production organizations in the Kuwaiti business environment and specifically to organizations in the oil industry; it also serves as a useful theoretical model for improving operations and production in the oil industries of other developing and developed countries. The findings thus narrow a gap in the literature, namely the limited amount of empirical research on TQM implementation in well-developed industries in Arab developing countries, and specifically in Kuwait, where no coherent national model existed for universal TQM implementation in the Kuwaiti oil industry in particular or the Kuwaiti business environment in general. Finally, the research framework, which emerged from the literature review, was validated with rigorous quantitative analysis tools, including SPSS and structural equation modeling; the quantitative findings from the questionnaires were supported by the qualitative findings from the interviews.

Relevance:

40.00%

Publisher:

Abstract:

According to the textbook approach, the developmental states of the Far East have been considered strong and autonomous entities. Although their bureaucratic elites have remained isolated from direct pressures stemming from society, state capacity has also been utilised to allocate resources in the interest of society as a whole. Yet society, by and large, has remained weak and subordinated to the state elite. The general perception of Sub-Saharan Africa (SSA), on the other hand, has been just the opposite. The violent and permanent conflict amongst rent-seeking groups for influence and authority over resources has culminated in a situation where states have become extremely weak and fragmented, while society – depending on the capacity of competing groups to mobilise resources and organise themselves, mostly on a regional or local level (resulting in local petty kingdoms) – has never had the chance to evolve as a strong player. In the context of SSA, state failure in the literature therefore refers not just to a weak and captured state but also to a non-functioning, and sometimes even non-existent, society. Recently, however, the driving forces of globalisation may have triggered serious changes in this status quo. Accordingly, our hypothesis is the following: globalisation, especially the dynamic changes in technology, capital and communication, has made the simplistic "strong state–weak society" (in Asia) and "weak state–weak society" (in Africa) categorisation somewhat obsolete. While our comparative study places a strong emphasis on empirical scrutiny, trying to uncover the dynamics of change in state–society relations in the two chosen regions both qualitatively and quantitatively, it also aims at complementing the meaning and essence of the concepts and methodology of stateness, state capacity and state–society relations, the well-known building blocks of the seminal works of Evans (1995), Leftwich (1995), Migdal (1988) and Myrdal (1968).

Relevance:

40.00%

Publisher:

Abstract:

The transducer function mu for contrast perception describes the nonlinear mapping of stimulus contrast onto an internal response. Under a signal detection theory approach, the transducer model of contrast perception states that the internal response elicited by a stimulus of contrast c is a random variable with mean mu(c). Using this approach, we derive the formal relations between the transducer function, the threshold-versus-contrast (TvC) function, and the psychometric functions for contrast detection and discrimination in 2AFC tasks. We show that the mathematical form of the TvC function is determined only by mu, and that the psychometric functions for detection and discrimination have a common mathematical form with common parameters emanating from, and only from, the transducer function mu and the form of the distribution of the internal responses. We discuss the theoretical and practical implications of these relations, which bear on the tenability of certain mathematical forms for the psychometric function and on the suitability of empirical approaches to model validation. We also present the results of a comprehensive test of these relations using two alternative forms of the transducer model: a three-parameter version that renders logistic psychometric functions and a five-parameter version using Foley's variant of the Naka-Rushton equation as the transducer function. Our results support the validity of the formal relations implied by the general transducer model, and the two versions that were contrasted account for our data equally well.
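
These relations are easy to exercise numerically. Below is a minimal sketch assuming an illustrative Foley-type parameterization mu(c) = a*c^p / (z + c^q) with unit-variance Gaussian internal noise; the parameter names and values are hypothetical, not the paper's fitted five-parameter model. Under these assumptions the 2AFC proportion correct is Phi((mu(c+dc) - mu(c)) / sqrt(2)), and the TvC function follows by solving for the threshold increment.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def mu(c, a=100.0, p=2.4, q=2.0, z=0.01):
    """Illustrative Foley-type transducer (hypothetical parameters)."""
    return a * c**p / (z + c**q)

def pc_2afc(c_ped, dc):
    """2AFC proportion correct at pedestal c_ped and increment dc,
    assuming unit-variance Gaussian internal responses in each interval."""
    return norm.cdf((mu(c_ped + dc) - mu(c_ped)) / np.sqrt(2.0))

def tvc(c_ped, target=0.75):
    """TvC function: threshold increment dc at which pc_2afc hits target."""
    return brentq(lambda dc: pc_2afc(c_ped, dc) - target, 1e-9, 1.0)

pedestals = [0.0, 0.005, 0.01, 0.02, 0.05, 0.1]
print([round(tvc(c), 5) for c in pedestals])  # dipper-shaped TvC curve
```

With p > q the sketch reproduces the familiar dipper shape: thresholds first fall and then rise with pedestal contrast, entirely determined by mu, as the abstract states.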

Relevance:

40.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even as the value of n grows enormously in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n = all" is thus of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
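
As a minimal sketch of the latent-class (PARAFAC) side of this correspondence (dimensions and seed are arbitrary; this is the standard factorization the chapter builds on, not the proposed collapsed Tucker decomposition):

```python
import numpy as np

rng = np.random.default_rng(0)
p, d, k = 4, 3, 2   # p categorical variables, d levels each, k latent classes

nu = rng.dirichlet(np.ones(k))                 # latent class weights
lam = rng.dirichlet(np.ones(d), size=(k, p))   # lam[h, j] = P(x_j = . | class h)

# PARAFAC / latent-class form: P(x_1..x_p) = sum_h nu_h * prod_j lam[h, j, x_j]
pi = np.zeros((d,) * p)
for idx in np.ndindex(*pi.shape):
    pi[idx] = sum(nu[h] * np.prod([lam[h, j, idx[j]] for j in range(p)])
                  for h in range(k))

assert np.isclose(pi.sum(), 1.0)  # a valid joint pmf of nonnegative rank <= k
```

The number of classes k bounds the nonnegative rank of the probability tensor pi, which is the quantity the chapter relates to the support of a log-linear model.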

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and we provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
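
As a hedged gloss (our notation, not necessarily the chapter's exact formulation): if optimality is measured by the Kullback-Leibler divergence from the exact posterior pi to a Gaussian q, the optimal Gaussian is the moment-matched one,

```latex
\hat{q}
= \operatorname*{arg\,min}_{q \,\in\, \{\mathcal{N}(m,\Sigma)\}} D_{\mathrm{KL}}(\pi \,\|\, q)
= \mathcal{N}\!\big(\mathbb{E}_{\pi}[\theta],\; \operatorname{Cov}_{\pi}(\theta)\big),
```

and finite-sample bounds on this divergence then control how much credible regions computed under the approximation can deviate from the exact ones.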

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
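
For a flavor of waiting-time-based inference on extremal dependence, here is a minimal sketch computing inter-exceedance times and the classical Ferro-Segers intervals estimator of the extremal index, a related (though not identical) waiting-time summary; the series, threshold and seed are arbitrary:

```python
import numpy as np

def interexceedance_times(x, u):
    """Waiting times (in index units) between exceedances of threshold u."""
    idx = np.flatnonzero(x > u)
    return np.diff(idx)

def intervals_estimator(T):
    """Ferro-Segers (2003) intervals estimator of the extremal index."""
    T = np.asarray(T, dtype=float)
    if T.max() <= 2:
        num, den = 2 * T.sum() ** 2, len(T) * (T ** 2).sum()
    else:
        num = 2 * ((T - 1).sum()) ** 2
        den = len(T) * ((T - 1) * (T - 2)).sum()
    return min(1.0, num / den)

rng = np.random.default_rng(1)
x = rng.standard_t(df=3, size=10_000)   # heavy-tailed toy series
T = interexceedance_times(x, np.quantile(x, 0.95))
print(intervals_estimator(T))           # ~1 for serially independent data
```

Clustered extremes shorten some waiting times and lengthen others, pulling the estimate below one; the chapter's paradigm generalizes this idea to tail dependence across multiple locations.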

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo, the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
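
A minimal sketch of an approximating kernel of the random-subset type, for a toy Gaussian-mean model with known unit variance and a flat prior (all names and tuning constants are hypothetical): the subsample log-likelihood ratio is rescaled by n/m to stand in for the full one, and the noise this injects into the acceptance step is precisely the kernel-approximation error such a framework must control.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(1.0, 1.0, size=100_000)   # toy data; sigma = 1 known
n, m = len(data), 1_000                      # full size vs subsample size

def loglike_diff_hat(th_new, th_old):
    """Subsample estimate of the full-data log-likelihood ratio (x n/m)."""
    sub = rng.choice(data, size=m, replace=False)
    return (n / m) * np.sum(0.5 * (sub - th_old) ** 2
                            - 0.5 * (sub - th_new) ** 2)

theta, chain = 0.0, []
for _ in range(5_000):
    prop = theta + 0.01 * rng.standard_normal()             # random-walk step
    if np.log(rng.random()) < loglike_diff_hat(prop, theta):  # flat prior
        theta = prop
    chain.append(theta)

# Roughly centered near the true mean 1.0, with variance inflated by the
# noisy acceptance ratio -- the approximation error being traded for speed.
print(np.mean(chain[2_000:]))
```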

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
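
A minimal sketch of the truncated-normal (Albert-Chib) data augmentation sampler in the regime described, for an intercept-only probit model with a flat prior (sizes and seed are illustrative); the lag-one autocorrelation of the draws illustrates the slow mixing:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(3)
n, n_success = 10_000, 20               # large n, few successes (rare events)
y = np.zeros(n); y[:n_success] = 1.0

beta, draws = 0.0, []
for _ in range(2_000):
    # z_i | y_i, beta: Normal(beta, 1) truncated to (0, inf) if y_i = 1 and
    # to (-inf, 0) if y_i = 0; bounds below are standardized about beta.
    lo = np.where(y == 1.0, -beta, -np.inf)
    hi = np.where(y == 1.0, np.inf, -beta)
    z = truncnorm.rvs(lo, hi, loc=beta, scale=1.0, random_state=rng)
    # beta | z: Normal(mean(z), 1/n) for an intercept-only design, flat prior.
    beta = rng.normal(z.mean(), 1.0 / np.sqrt(n))
    draws.append(beta)

d = np.array(draws[500:])
print(np.corrcoef(d[:-1], d[1:])[0, 1])  # lag-1 autocorrelation near 1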

Relevance:

40.00%

Publisher:

Abstract:

In this paper we study the notion of degree for submanifolds embedded in an equiregular sub-Riemannian manifold and we provide the definition of their associated area functional. In this setting we prove that the Hausdorff dimension of a submanifold coincides with its degree, as stated by Gromov. Using these general definitions we compute the first variation for surfaces embedded in low-dimensional manifolds and we obtain the partial differential equation associated with minimal surfaces. These minimal surfaces have several applications in the neurogeometry of the visual cortex.
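
As a hedged gloss of the notion of degree used here (standard in this literature, though conventions vary and this is our notation, not the paper's): with flag $\mathcal{H}^1 \subset \cdots \subset \mathcal{H}^s = TM$ and an adapted frame $(X_1, \dots, X_n)$ in which $X_j$ has weight $w(X_j) = i$ whenever $X_j \in \mathcal{H}^i \setminus \mathcal{H}^{i-1}$, one sets

```latex
\deg\!\big(X_{j_1} \wedge \cdots \wedge X_{j_m}\big) = \sum_{k=1}^{m} w(X_{j_k}),
\qquad
\deg_M(p) = \max\big\{ \deg(X_J) : X_J \ \text{appears in the tangent } m\text{-vector of } M \text{ at } p \big\},
```

and the degree of $M$ is $\max_p \deg_M(p)$; the statement attributed to Gromov identifies this number with the Hausdorff dimension of $M$ computed from the Carnot-Carathéodory distance.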

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study was to analyse the muscle balance between knee flexors and extensors (flexor-to-extensor ratio, FER) over a training season in under-20 soccer players. The sample comprised 15 subjects from the under-20 squad of Associação Atlética Ponte Preta, Campinas. The athletes followed a 29-week preparation macrocycle (MP) consisting of preparatory and competitive periods divided into four mesocycles: general stage (M1), special stage (M2), pre-competitive stage (M3) and competitive stage (M4). The FER of both limbs was determined on an isokinetic dynamometer using the peak torque (PT) obtained in three consecutive sets of five repetitions at 60°/s. Isokinetic assessment was carried out at four points during the MP, always at the end of each mesocycle (M1, M2, M3 and M4). For statistical analysis, the Friedman repeated-measures test was used, followed by the Wilcoxon test and the Mann-Whitney U test, with significance set at p<0.05. Knee flexor PT in both limbs was higher at M2 and M3 than at M1 and M4. Knee extensor PT at M1 was significantly lower than at all other time points (M2, M3 and M4) in both limbs. FER in both limbs was lower at M1 than at M2 and M3. Comparison of FER between limbs revealed no significant differences at any time point (M1, M2, M3 and M4). The results indicate changes in the magnitude of the FER, although within normal limits, and maintenance of the proportionality between limbs throughout the MP. These findings suggest that there are no sensitive periods for the occurrence of injuries due to muscle imbalance over the MP in under-20 soccer players.

Relevance:

30.00%

Publisher:

Abstract:

Using the solutions of the gap equations of the magnetic-color-flavor-locked (MCFL) phase of paired quark matter in a magnetic field, and taking into consideration the separation between the longitudinal and transverse pressures due to the field-induced breaking of the spatial rotational symmetry, the equation of state of the MCFL phase is self-consistently determined. This result is then used to investigate the possibility of absolute stability, which turns out to require a field-dependent "bag constant" to hold. That is, only if the bag constant varies with the magnetic field does a window exist in the magnetic field versus bag constant plane for absolute stability of strange matter. Implications for stellar models of magnetized (self-bound) strange stars and hybrid (MCFL core) stars are calculated and discussed.
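
For context, the longitudinal/transverse splitting referred to above usually takes the following form (a hedged sketch in standard conventions, with $\Omega$ the thermodynamic potential and $M = -\partial\Omega/\partial B$ the system magnetization; signs and conventions vary by reference):

```latex
p_{\parallel} = -\Omega, \qquad p_{\perp} = -\Omega - M B,
```

so the field-induced breaking of rotational symmetry enters the equation of state directly.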

Relevance:

30.00%

Publisher:

Abstract:

We analyze the irreversibility and the entropy production in nonequilibrium interacting particle systems described by a Fokker-Planck equation through a suitable master equation representation. The irreversible character is provided either by nonconservative forces or by contact with heat baths at distinct temperatures. The expression for the entropy production is deduced from a general definition, related to the probability of a trajectory in phase space and its time reversal, that makes no a priori reference to the dissipated power. Our formalism is applied to calculate the heat conductance in a simple system consisting of two Brownian particles, each in contact with a heat reservoir. We also show the connection between the definition of the entropy production rate and the Jarzynski equality.
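
A hedged sketch of the master-equation form behind such a definition (Schnakenberg's expression; the paper's precise formulation may differ): with transition rates $W(x' \mid x)$ and probabilities $P(x,t)$, the entropy production rate is

```latex
\Pi(t) = \frac{1}{2} \sum_{x,\,x'} \big[ W(x' \mid x) P(x,t) - W(x \mid x') P(x',t) \big]
\ln \frac{W(x' \mid x)\, P(x,t)}{W(x \mid x')\, P(x',t)} \;\ge\; 0,
```

which is built precisely from the ratio between the probability of a trajectory and that of its time reversal, with no reference to the dissipated power.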

Relevance:

30.00%

Publisher:

Abstract:

This paper deals with the H(infinity) recursive estimation problem for general rectangular time-variant descriptor systems in discrete time. Riccati-equation-based recursions for filtered and predicted estimates are developed using a data-fitting approach and game theory. In this approach, nature determines a state sequence seeking to maximize the estimation cost, whereas the estimator tries to find an estimate that brings the estimation cost to a minimum. A solution exists for a specified gamma-level if the resulting cost is positive. In order to present some computational alternatives to the H(infinity) filters developed, they are also rewritten in information form, along with the respective array algorithms.
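
For orientation, the generic gamma-level criterion in H(infinity) filtering can be sketched as follows (a standard state-space form in our notation; the descriptor-system version treated in the paper is more general):

```latex
\sup_{x_0,\, \{w_k\},\, \{v_k\}}
\frac{\sum_{k} \|\hat{z}_k - z_k\|^2}
{\|x_0 - \hat{x}_0\|^2_{P_0^{-1}} + \sum_{k} \big(\|w_k\|^2 + \|v_k\|^2\big)} < \gamma^2,
```

i.e., a bounded worst-case gain from disturbances to estimation error, which is exactly the game described: nature maximizes this cost while the estimator minimizes it.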

Relevance:

30.00%

Publisher:

Abstract:

This paper considers the recursive optimal linear estimation problem for discrete-time linear systems in its most general formulation. The system is allowed to be in descriptor form, rectangular, time-variant, and to have correlated dynamical and measurement noises. We propose a new expression for the recursive filter equations which has a simple and symmetric structure. Convergence of the associated Riccati recursion and stability properties of the steady-state filter are also provided.

Relevance:

30.00%

Publisher:

Abstract:

We study in detail the so-called beta-modified Weibull distribution, motivated by the wide use of the Weibull distribution in practice and by the fact that this generalization provides a continuous crossover towards cases with different shapes. The new distribution is important since it contains as special sub-models several widely known distributions, such as the generalized modified Weibull, beta Weibull, exponentiated Weibull, beta exponential, modified Weibull and Weibull distributions, among others. It also provides more flexibility to analyse complex real data. Various mathematical properties of this distribution are derived, including its moments and moment generating function. We examine the asymptotic distributions of the extreme values. Explicit expressions are also derived for the characteristic function, mean deviations, Bonferroni and Lorenz curves, reliability and entropies. The estimation of parameters is approached by two methods: moments and maximum likelihood. We compare by simulation the performance of the estimates from these methods. We obtain the expected information matrix. Two applications are presented to illustrate the proposed distribution.
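
A hedged sketch of the construction (the beta-G class applied to the modified Weibull baseline; the parameterization here is illustrative and may differ from the paper's): with baseline cdf $G$ and incomplete beta function ratio $I_y(a,b)$,

```latex
G(x) = 1 - \exp\!\big(-\alpha x^{\gamma} e^{\lambda x}\big), \qquad
F(x) = I_{G(x)}(a,b) = \frac{1}{B(a,b)} \int_{0}^{G(x)} t^{\,a-1} (1-t)^{\,b-1}\, dt, \quad x > 0,
```

so that, for instance, $a = b = 1$ recovers the modified Weibull and $\lambda = 0$ yields the beta Weibull, illustrating how the special sub-models arise.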

Relevance:

30.00%

Publisher:

Abstract:

Leaf wetness duration (LWD) is related to plant disease occurrence and is therefore a key parameter in agrometeorology. As LWD is seldom measured at standard weather stations, it must be estimated in order to ensure the effectiveness of warning systems and the scheduling of chemical disease control. Among the models used to estimate LWD, those that use physical principles of dew formation and dew and/or rain evaporation have shown good portability and sufficiently accurate results for operational use. However, the requirement of net radiation (Rn) is a disadvantage for operational physical models, since this variable is usually not measured over crops or even at standard weather stations. With the objective of proposing a solution for this problem, this study evaluated the ability of four models to estimate hourly Rn and their impact on LWD estimates using a Penman-Monteith approach. A field experiment was carried out in Elora, Ontario, Canada, with measurements of LWD, Rn and other meteorological variables over mowed turfgrass for a 58-day period during the growing season of 2003. Four models for estimating hourly Rn, based on different combinations of incoming solar radiation (Rg), air temperature (T), relative humidity (RH), cloud cover (CC) and cloud height (CH), were evaluated. Measured and estimated hourly Rn values were applied in a Penman-Monteith model to estimate LWD. Correlating measured and estimated Rn, we observed that all models performed well in terms of estimating hourly Rn. However, when cloud data were used the models overestimated positive Rn and underestimated negative Rn. When only Rg and T were used to estimate hourly Rn, the model underestimated positive Rn and no tendency was observed for negative Rn. The best performance was obtained with Model I, which presented, in general, the smallest mean absolute error (MAE) and the highest C-index. When measured LWD was compared to the Penman-Monteith LWD calculated with measured and estimated Rn, few differences were observed. Both precision and accuracy were high, with the slopes of the relationships ranging from 0.96 to 1.02 and R-2 from 0.85 to 0.92, resulting in C-indices between 0.87 and 0.93. The LWD mean absolute errors associated with Rn estimates were between 1.0 and 1.5 h, which is sufficient for use in plant disease management schemes.
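
A minimal sketch, with made-up arrays, of the accuracy statistics used above: the mean absolute error, Willmott's index of agreement d, and the confidence index c = r * d (assuming the C-index here is Camargo and Sentelhas' confidence index, which combines Pearson's r with d):

```python
import numpy as np

def mae(obs, est):
    """Mean absolute error."""
    return np.mean(np.abs(est - obs))

def willmott_d(obs, est):
    """Willmott's index of agreement (0-1; 1 = perfect)."""
    obar = obs.mean()
    return 1 - np.sum((est - obs) ** 2) / np.sum(
        (np.abs(est - obar) + np.abs(obs - obar)) ** 2)

def c_index(obs, est):
    """Confidence index: Pearson r times Willmott's d."""
    r = np.corrcoef(obs, est)[0, 1]
    return r * willmott_d(obs, est)

obs = np.array([4.0, 6.5, 7.0, 9.5, 12.0])   # e.g., measured LWD (h)
est = np.array([4.8, 6.0, 7.9, 9.1, 11.2])   # e.g., modelled LWD (h)
print(mae(obs, est), c_index(obs, est))
```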

Relevance:

30.00%

Publisher:

Abstract:

This paper applies hierarchical Bayesian models to price farm-level yield insurance contracts. The methodology considers temporal effects, spatial dependence and spatio-temporal models. One of the major advantages of this framework is that an estimate of the premium rate is obtained directly from the posterior distribution. These methods were applied to a farm-level data set of soybean yields in the state of Paraná (Brazil) for the period between 1994 and 2003. Model selection was based on a posterior predictive criterion. This study considerably improves the estimation of fair premium rates given the small number of observations.
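
A minimal sketch, with hypothetical numbers, of how a premium rate falls directly out of a posterior sample: for a coverage level lambda applied to the expected yield, the fair rate is the expected shortfall divided by the liability. The gamma draws below merely stand in for draws from the fitted model's posterior predictive distribution.

```python
import numpy as np

rng = np.random.default_rng(4)
# Stand-in for posterior predictive draws of farm-level yield (kg/ha);
# in practice these come from the fitted hierarchical spatio-temporal model.
y_pred = rng.gamma(shape=20.0, scale=150.0, size=50_000)

lam = 0.7                        # coverage level (illustrative)
y_c = lam * y_pred.mean()        # guaranteed yield
indemnity = np.maximum(y_c - y_pred, 0.0)
premium_rate = indemnity.mean() / y_c
print(round(premium_rate, 4))    # fair rate as a fraction of liability
```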

Relevance:

30.00%

Publisher:

Abstract:

Over the years, crop insurance programs have become the focus of agricultural policy in the USA, Spain, Mexico and, more recently, Brazil. Given the increasing interest in insurance, accurate calculation of the premium rate is of great importance. We address the crop-yield distribution issue and its implications for pricing an insurance contract, considering the dynamic structure of the data and incorporating spatial correlation in a hierarchical Bayesian framework. Results show that empirical (insurers') rates are higher in low-risk areas and lower in high-risk areas. Such methodological improvement is particularly important in situations of limited data.