967 results for Marginal distribution costs


Relevance:

100.00%

Publisher:

Abstract:

The effectiveness of vaccinating males against the human papillomavirus (HPV) remains a controversial subject. Many existing studies conclude that increasing female coverage is more effective than diverting resources into male vaccination. Recently, several empirical studies on HPV immunization have been published, providing evidence that marginal vaccination costs increase with coverage. In this study, we use a stochastic agent-based modeling framework to revisit the male vaccination debate in light of these new findings. Within this framework, we assess the impact of coverage-dependent marginal costs of vaccine distribution on optimal immunization strategies against HPV. Focusing on the two scenarios of ongoing and new vaccination programs, we analyze different resource allocation policies and their effects on overall disease burden. Our results suggest that if the costs associated with vaccinating males are relatively close to those associated with vaccinating females, then coverage-dependent, increasing marginal costs may favor vaccination strategies that entail immunization of both genders. In particular, this study emphasizes the necessity for further empirical research on the nature of coverage-dependent vaccination costs.
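The coverage-dependent marginal cost idea the abstract turns on can be made concrete with a toy sketch. The functional form and parameter values below are illustrative assumptions, not taken from the study:

```python
import numpy as np

# Hypothetical marginal cost curve: each additional increment of coverage
# costs more as the easy-to-reach population is exhausted.
# Functional form and parameters are illustrative, not from the study.
def marginal_cost(coverage, base=50.0, steepness=2.0):
    """Cost of the next dose at a given coverage level (0 <= coverage < 1)."""
    return base * (1.0 + steepness * coverage / (1.0 - coverage))

def total_cost(target_coverage, n_steps=10_000):
    """Programme cost: Riemann sum of the marginal cost from 0 to target."""
    dv = target_coverage / n_steps
    grid = np.arange(n_steps) * dv
    return float(marginal_cost(grid).sum() * dv)
```

Under an increasing curve like this, the last increments of female coverage become expensive relative to the first increments of male coverage, which is the trade-off the study quantifies.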

Relevance:

90.00%

Publisher:

Abstract:

The ejected mass distribution of Type Ia supernovae (SNe Ia) directly probes progenitor evolutionary history and explosion mechanisms, with implications for their use as cosmological probes. Although the Chandrasekhar mass is a natural mass scale for the explosion of white dwarfs as SNe Ia, models allowing SNe Ia to explode at other masses have attracted much recent attention. Using an empirical relation between the ejected mass and the light-curve width, we derive ejected masses Mej and 56Ni masses MNi for a sample of 337 SNe Ia with redshifts z < 0.7 used in recent cosmological analyses. We use hierarchical Bayesian inference to reconstruct the joint Mej-MNi distribution, accounting for measurement errors. The inferred marginal distribution of Mej has a long tail towards sub-Chandrasekhar masses, but cuts off sharply above 1.4 M⊙. Our results imply that 25-50 per cent of normal SNe Ia are inconsistent with Chandrasekhar-mass explosions, with almost all of these being sub-Chandrasekhar mass; super-Chandrasekhar-mass explosions make up no more than 1 per cent of all spectroscopically normal SNe Ia. We interpret the SN Ia width-luminosity relation as an underlying relation between Mej and MNi, and show that the inferred relation is not naturally explained by the predictions of any single known explosion mechanism.

Relevance:

90.00%

Publisher:

Abstract:

Lucas (1987) has shown a surprising result in business-cycle research: the welfare costs of business cycles are very small. Our paper makes several original contributions. First, in computing welfare costs, we propose a novel setup that separates the effects of uncertainty stemming from business-cycle fluctuations and economic-growth variation. Second, we extend the sample from which the moments of consumption are computed: the literature has chosen primarily to work with post-WWII data. For this period, actual consumption already reflects counter-cyclical policies and is potentially smoother than it would otherwise have been in their absence. We therefore also employ pre-WWII data. Third, we take an econometric approach and compute explicitly the asymptotic standard deviation of welfare costs using the Delta Method. Estimates of welfare costs show major differences between the pre-WWII and the post-WWII eras, reaching up to a factor of 15 for reasonable parameter values (β = 0.985 and φ = 5). For example, in the pre-WWII period (1901-1941), welfare cost estimates are 0.31% of consumption if we consider only permanent shocks and 0.61% of consumption if we consider only transitory shocks. In comparison, the post-WWII era is much quieter: welfare costs of economic growth are 0.11% and welfare costs of business cycles are 0.037%, the latter being very close to the estimate in Lucas (0.040%). Estimates of marginal welfare costs are roughly twice the size of the total welfare costs. For the pre-WWII era, marginal welfare costs of economic-growth and business-cycle fluctuations are respectively 0.63% and 1.17% of per-capita consumption. The same figures for the post-WWII era are, respectively, 0.21% and 0.07% of per-capita consumption.
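The order of magnitude of such welfare costs can be checked against the standard second-order approximation for CRRA utility. This is a textbook formula, not the paper's full setup, which additionally separates growth and cycle uncertainty:

```python
def lucas_welfare_cost(gamma, sigma):
    """Compensating consumption fraction for eliminating i.i.d. lognormal
    consumption risk: lambda ~= 0.5 * gamma * sigma**2, where gamma is the
    coefficient of relative risk aversion and sigma the standard deviation
    of log consumption around trend (second-order approximation)."""
    return 0.5 * gamma * sigma ** 2

# Illustrative numbers: gamma = 5 and a 2% consumption volatility give a
# welfare cost of 0.001, i.e. 0.1% of consumption.
cost = lucas_welfare_cost(5.0, 0.02)
```

The quadratic dependence on sigma is why the quieter post-WWII consumption series yields much smaller welfare cost estimates than the pre-WWII data.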

Relevance:

90.00%

Publisher:

Abstract:

Includes bibliography.

Relevance:

90.00%

Publisher:

Abstract:

In this paper, we propose a random intercept Poisson model in which the random effect is assumed to follow a generalized log-gamma (GLG) distribution. This random effect accommodates the overdispersion in the counts and induces within-cluster correlation. We derive the first two moments of the marginal distribution as well as the intraclass correlation. Even though numerical integration methods are, in general, required for deriving the marginal models, we obtain the multivariate negative binomial model from a particular parameter setting of the hierarchical model. An iterative process is derived for obtaining the maximum likelihood estimates of the parameters in the multivariate negative binomial model. Residual analysis is proposed, and two applications with real data are given for illustration.
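The negative binomial connection has a well-known univariate analogue: a Poisson count with a gamma-distributed random intercept is marginally negative binomial. A simulation sketch of that classic special case (the paper's GLG random effect is more general; parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
r, theta = 3.0, 2.0                        # gamma shape and scale (illustrative)
lam = rng.gamma(r, theta, size=200_000)    # random intercepts on the rate scale
y = rng.poisson(lam)                       # counts given the random effect

# Marginal moments of the Poisson-gamma mixture: mean r*theta and variance
# r*theta*(1 + theta), i.e. overdispersed relative to a Poisson.
mean_theory = r * theta                    # 6.0
var_theory = r * theta * (1.0 + theta)     # 18.0
```

The excess of the marginal variance over the marginal mean is exactly the overdispersion that the random intercept is meant to capture.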

Relevance:

90.00%

Publisher:

Abstract:

This paper considers statistical models in which two different types of events, such as the diagnosis of a disease and the remission of the disease, occur alternately over time and are observed subject to right censoring. We propose nonparametric estimators for the joint distribution of bivariate recurrence times and the marginal distribution of the first recurrence time. In general, the marginal distribution of the second recurrence time cannot be estimated due to an identifiability problem, but a conditional distribution of the second recurrence time can be estimated non-parametrically. In the literature, statistical methods have been developed to estimate the joint distribution of bivariate recurrence times based on data of the first pair of censored bivariate recurrence times. These methods are efficient in the current model because recurrence times of higher orders are not used. Asymptotic properties of the estimators are established. Numerical studies demonstrate that the estimators perform well with practical sample sizes. We apply the proposed method to a Danish psychiatric case register data set to illustrate the methods and theory.

Relevance:

90.00%

Publisher:

Abstract:

This dissertation studies technological change in the context of energy and environmental economics. Technology plays a key role in reducing greenhouse gas emissions from the transportation sector. Chapter 1 estimates a structural model of the car industry that allows for endogenous product characteristics to investigate how gasoline taxes, R&D subsidies and competition affect fuel efficiency and vehicle prices in the medium-run, both through car-makers' decisions to adopt technologies and through their investments in knowledge capital. I use technology adoption and automotive patents data for 1986-2006 to estimate this model. I show that 92% of fuel efficiency improvements between 1986 and 2006 were driven by technology adoption, while the role of knowledge capital is largely to reduce the marginal production costs of fuel-efficient cars. A counterfactual predicts that an additional $1/gallon gasoline tax in 2006 would have increased the technology adoption rate, and raised average fuel efficiency by 0.47 miles/gallon, twice the annual fuel efficiency improvement in 2003-2006. An R&D subsidy that would reduce the marginal cost of knowledge capital by 25% in 2006 would have raised investment in knowledge capital. This subsidy would have raised fuel efficiency only by 0.06 miles/gallon in 2006, but would have increased variable profits by $2.3 billion over all firms that year. Passenger vehicle fuel economy standards in the United States will require substantial improvements in new vehicle fuel economy over the next decade. Economic theory suggests that vehicle manufacturers adopt greater fuel-saving technologies for vehicles with larger market size. Chapter 2 documents a strong connection between market size, measured by sales, and technology adoption. Using variation in consumer demographics and purchasing patterns to account for the endogeneity of market size, we find that a 10 percent increase in market size raises vehicle fuel efficiency by 0.3 percent, as compared to a mean improvement of 1.4 percent per year over 1997-2013. Historically, fuel price and demographic-driven market size changes have had large effects on technology adoption. Furthermore, fuel taxes would induce firms to adopt fuel-saving technologies on their most efficient cars, thereby polarizing the fuel efficiency distribution of the new vehicle fleet.

Relevance:

80.00%

Publisher:

Abstract:

In the study of traffic safety, expected crash frequencies across sites are generally estimated via the negative binomial model, assuming time-invariant safety. Since the time-invariant safety assumption may be invalid, Hauer (1997) proposed a modified empirical Bayes (EB) method. Despite the modification, no attempts have been made to examine the generalisable form of the marginal distribution resulting from the modified EB framework. Because the hyper-parameters needed to apply the modified EB method are not readily available, an assessment is lacking on how accurately the modified EB method estimates safety in the presence of time-variant safety and regression-to-the-mean (RTM) effects. This study derives the closed-form marginal distribution, and reveals that the marginal distribution in the modified EB method is equivalent to the negative multinomial (NM) distribution, which is essentially the same as the likelihood function used in the random effects Poisson model. As a result, this study shows that the gamma posterior distribution from the multivariate Poisson-gamma mixture can be estimated using the NM model or the random effects Poisson model. This study also shows that the estimation errors from the modified EB method are systematically smaller than those from the comparison group method by simultaneously accounting for the RTM and time-variant safety effects. Hence, the modified EB method via the NM model is a generalisable method for estimating safety in the presence of time-variant safety and RTM effects.
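The mechanics of the basic (time-invariant) EB step can be illustrated with the scalar gamma-Poisson case, where the posterior mean is a weighted average of the prior mean and the site's observed count. The values below are illustrative, and this is the classical EB step, not Hauer's modified version:

```python
def eb_expected_crashes(observed, alpha, beta):
    """Posterior mean crash frequency for one site under a gamma(alpha, beta)
    prior (mean alpha/beta) and a Poisson likelihood with unit exposure:
    (alpha + observed) / (beta + 1), i.e. shrinkage toward the prior mean."""
    weight = beta / (beta + 1.0)
    return weight * (alpha / beta) + (1.0 - weight) * observed
```

Shrinking observed counts toward the prior mean is what corrects for regression-to-the-mean; the modified EB method of the abstract generalises this step by letting safety vary over time, which is what yields the negative multinomial marginal.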

Relevance:

80.00%

Publisher:

Abstract:

In this paper we consider the third-moment structure of a class of time series models. It is often argued that the marginal distribution of financial time series such as returns is skewed. Therefore it is of importance to know what properties a model should possess if it is to accommodate unconditional skewness. We consider modeling the unconditional mean and variance using models that respond nonlinearly or asymmetrically to shocks. We investigate the implications of these models on the third-moment structure of the marginal distribution as well as conditions under which the unconditional distribution exhibits skewness and a nonzero third-order autocovariance structure. In this respect, an asymmetric or nonlinear specification of the conditional mean is found to be of greater importance than the properties of the conditional variance. Several examples are discussed and, whenever possible, explicit analytical expressions are provided for all third-order moments and cross-moments. Finally, we introduce a new tool, the shock impact curve, for investigating the impact of shocks on the conditional mean squared error of return series.
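The role of an asymmetric conditional mean can be seen in a minimal simulation. The threshold response below is an illustrative stand-in for the class of models analysed, not one of the paper's specifications:

```python
import numpy as np

def skewness(x):
    """Sample third standardised moment."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    return float(((x - m) ** 3).mean() / x.std() ** 3)

rng = np.random.default_rng(1)
eps = rng.standard_normal(100_000)      # symmetric shocks
y = np.where(eps > 0, 1.5 * eps, eps)   # asymmetric conditional-mean response

# The shocks themselves are symmetric, but the asymmetric response
# induces positive skewness in the marginal distribution of y.
```

Even though the shock distribution has zero skewness, the asymmetric propagation to the level of the series produces an unconditionally skewed marginal, which is the mechanism the paper formalises.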

Relevance:

80.00%

Publisher:

Abstract:

In this paper we address the problem of the separation and recovery of convolutively mixed autoregressive processes in a Bayesian framework. Solving this problem requires the ability to solve integration and/or optimization problems over complicated posterior distributions. We thus propose efficient stochastic algorithms based on Markov chain Monte Carlo (MCMC) methods. We present three algorithms. The first one is a classical Gibbs sampler that generates samples from the posterior distribution. The two other algorithms are stochastic optimization algorithms that optimize either the marginal distribution of the sources, or the marginal distribution of the parameters of the sources and mixing filters, conditional upon the observation. Simulations are presented.
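The flavour of the first algorithm can be conveyed with the textbook Gibbs sampler for a correlated bivariate normal target, a toy stand-in for the posterior over sources and mixing-filter parameters:

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter=50_000, seed=0):
    """Alternately draw x | y and y | x for a standard bivariate normal
    with correlation rho; at stationarity the draws come from the target."""
    rng = np.random.default_rng(seed)
    x = y = 0.0
    samples = np.empty((n_iter, 2))
    s = np.sqrt(1.0 - rho ** 2)        # conditional standard deviation
    for t in range(n_iter):
        x = rng.normal(rho * y, s)     # full conditional of x given y
        y = rng.normal(rho * x, s)     # full conditional of y given x
        samples[t] = x, y
    return samples
```

The structure carries over directly: a Gibbs sampler only requires drawing from each full conditional in turn, which is what makes it attractive when the joint posterior over sources and filters is intractable but the conditionals are standard.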

Relevance:

80.00%

Publisher:

Abstract:

We introduce the concept of gene importance and show experimentally that it matters for the convergence of the Univariate Marginal Distribution Algorithm (UMDA). Building on this, we propose an evolutionary algorithm based on gene importance. The algorithm first ranks the genes making up a chromosome by importance, then converges the more important genes first, fixing the currently most important gene at each step until all genes have converged. Experimental results show that the algorithm converges faster and finds satisfactory solutions more readily.
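For reference, the baseline UMDA that the proposed gene-importance ordering modifies can be sketched on the standard OneMax benchmark. The gene-importance ranking itself is not reproduced here, and all parameter values are illustrative:

```python
import numpy as np

def umda_onemax(n_bits=30, pop=100, elite=50, generations=60, seed=0):
    """Univariate Marginal Distribution Algorithm on OneMax
    (maximise the number of 1-bits in a binary string)."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                      # marginal P(bit = 1)
    for _ in range(generations):
        pop_bits = rng.random((pop, n_bits)) < p  # sample population
        fitness = pop_bits.sum(axis=1)            # OneMax fitness
        best = pop_bits[np.argsort(fitness)[-elite:]]
        p = best.mean(axis=0)                     # re-estimate marginals
        p = np.clip(p, 0.02, 0.98)                # retain some diversity
    return p

final_p = umda_onemax()
```

Each generation samples from independent per-gene marginals and re-estimates them from the selected individuals; the paper's modification imposes an importance order on which marginals are driven to convergence first.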

Relevance:

80.00%

Publisher:

Abstract:

As the World Wide Web (Web) is increasingly adopted as the infrastructure for large-scale distributed information systems, issues of performance modeling become ever more critical. In particular, locality of reference is an important property in the performance modeling of distributed information systems. In the case of the Web, understanding the nature of reference locality will help improve the design of middleware, such as caching, prefetching, and document dissemination systems. For example, good measurements of reference locality would allow us to generate synthetic reference streams with accurate performance characteristics, would allow us to compare empirically measured streams to explain differences, and would allow us to predict expected performance for system design and capacity planning. In this paper we propose models for both temporal and spatial locality of reference in streams of requests arriving at Web servers. We show that simple models based only on document popularity (likelihood of reference) are insufficient for capturing either temporal or spatial locality. Instead, we rely on an equivalent, but numerical, representation of a reference stream: a stack distance trace. We show that temporal locality can be characterized by the marginal distribution of the stack distance trace, and we propose models for typical distributions and compare their cache performance to our traces. We also show that spatial locality in a reference stream can be characterized using the notion of self-similarity. Self-similarity describes long-range correlations in the dataset, which is a property that previous researchers have found hard to incorporate into synthetic reference strings. We show that stack distance strings appear to be strongly self-similar, and we provide measurements of the degree of self-similarity in our traces. 
Finally, we discuss methods for generating synthetic Web traces that exhibit the properties of temporal and spatial locality that we measured in our data.
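The stack distance trace at the heart of the temporal-locality model is straightforward to compute. A naive O(N·M) sketch follows (real traces would use a more efficient tree-based structure):

```python
def stack_distances(refs):
    """Convert a reference stream into a stack distance trace: each
    reference is replaced by the depth of the referenced item in an LRU
    stack (0 = most recently used), or infinity for first-time references.
    Strong temporal locality shows up as many small distances."""
    stack, out = [], []
    for r in refs:
        if r in stack:
            d = stack.index(r)        # depth in the LRU stack
            stack.pop(d)
        else:
            d = float("inf")          # first reference to this document
        stack.insert(0, r)            # move to top of the stack
        out.append(d)
    return out
```

The marginal distribution of this trace characterises temporal locality, while long-range correlations in the same string carry the spatial-locality (self-similarity) information the abstract describes.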