915 results for asymptotic optimality


Relevance: 10.00%

Abstract:

Asymptotic chi-squared test statistics for testing the equality of moment vectors are developed. The test statistics proposed are generalized Wald test statistics that specialize for different settings by inserting an appropriate asymptotic variance matrix of sample moments. Scaled test statistics are also considered for dealing with situations of non-iid sampling. The specialization will be carried out for testing the equality of multinomial populations, and the equality of variance and correlation matrices for both normal and non-normal data. When testing the equality of correlation matrices, a scaled version of the normal theory chi-squared statistic is proven to be an asymptotically exact chi-squared statistic in the case of elliptical data.
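As a concrete illustration, the generalized Wald construction plugs the difference of sample moment vectors into a quadratic form with the inverse of its asymptotic variance matrix. The sketch below is a minimal bivariate version; the function name and matrices are illustrative, not taken from the paper:

```python
def wald_statistic(m1, m2, V):
    # Generalized Wald statistic W = d' V^{-1} d for the difference d = m1 - m2,
    # where V is the asymptotic covariance matrix of d (2x2 case, explicit inverse).
    d = [m1[0] - m2[0], m1[1] - m2[1]]
    det = V[0][0] * V[1][1] - V[0][1] * V[1][0]
    Vinv = [[V[1][1] / det, -V[0][1] / det],
            [-V[1][0] / det, V[0][0] / det]]
    return sum(d[i] * Vinv[i][j] * d[j] for i in range(2) for j in range(2))
```

Under the null of equal moment vectors, W is compared against a chi-squared distribution with degrees of freedom equal to the dimension of the moment vector.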

Relevance: 10.00%

Abstract:

We address the problem of scheduling a multiclass $M/M/m$ queue with Bernoulli feedback on $m$ parallel servers to minimize time-average linear holding costs. We analyze the performance of a heuristic priority-index rule, which extends Klimov's optimal solution to the single-server case: servers select preemptively customers with larger Klimov indices. We present closed-form suboptimality bounds (approximate optimality) for Klimov's rule, which imply that its suboptimality gap is uniformly bounded above with respect to (i) external arrival rates, as long as they stay within system capacity; and (ii) the number of servers. It follows that its relative suboptimality gap vanishes in a heavy-traffic limit, as external arrival rates approach system capacity (heavy-traffic optimality). We obtain simpler expressions for the special no-feedback case, where the heuristic reduces to the classical $c \mu$ rule. Our analysis is based on comparing the expected cost of Klimov's rule to the value of a strong linear programming (LP) relaxation of the system's region of achievable performance of mean queue lengths. In order to obtain this relaxation, we derive and exploit a new set of work decomposition laws for the parallel-server system. We further report on the results of a computational study on the quality of the $c \mu$ rule for parallel scheduling.
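In the no-feedback case, the Klimov index of a class reduces to the product of its holding cost and service rate, so the $c \mu$ rule can be sketched in a few lines (the cost and rate values below are made up for illustration):

```python
# c*mu rule: serve preemptively the class with the largest index c_k * mu_k.
c = [4.0, 2.0, 1.0]    # holding cost per unit time, by class (illustrative)
mu = [0.5, 2.0, 1.0]   # service rate, by class (illustrative)
index = [ck * mk for ck, mk in zip(c, mu)]
priority = sorted(range(3), key=lambda k: -index[k])  # highest index first
```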

Relevance: 10.00%

Abstract:

We extend to score, Wald and difference test statistics the scaled and adjusted corrections to goodness-of-fit test statistics developed in Satorra and Bentler (1988a,b). The theory is framed in the general context of multisample analysis of moment structures, under general conditions on the distribution of observable variables. Computational issues, as well as the relation of the scaled and corrected statistics to the asymptotic robust ones, are discussed. A Monte Carlo study illustrates the comparative performance in finite samples of corrected score test statistics.
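A minimal sketch of the scaling idea, assuming the standard form of a Satorra-Bentler-type correction (divide the statistic by c = tr(U Γ)/df); the matrices below are placeholders, not data from the study:

```python
def scaled_statistic(T, U, Gamma, df):
    # Scaled test statistic T / c with scaling factor c = tr(U * Gamma) / df.
    # U and Gamma are square matrices given as nested lists (illustrative inputs).
    d = len(U)
    trace = sum(U[i][k] * Gamma[k][i] for i in range(d) for k in range(d))
    return T * df / trace
```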

Relevance: 10.00%

Abstract:

This paper studies the efficiency of equilibria in a productive OLG economy where the process of financial intermediation is characterized by costly state verification. Both competitive equilibria and Constrained Pareto Optimal allocations are characterized. It is shown that market outcomes can be socially inefficient, even when a weaker notion than Pareto optimality is considered.

Relevance: 10.00%

Abstract:

We present a polyhedral framework for establishing general structural properties on optimal solutions of stochastic scheduling problems, where multiple job classes vie for service resources: the existence of an optimal priority policy in a given family, characterized by a greedoid (whose feasible class subsets may receive higher priority), where optimal priorities are determined by class-ranking indices, under restricted linear performance objectives (partial indexability). This framework extends that of Bertsimas and Niño-Mora (1996), which explained the optimality of priority-index policies under all linear objectives (general indexability). We show that, if performance measures satisfy partial conservation laws (with respect to the greedoid), which extend previous generalized conservation laws, then the problem admits a strong LP relaxation over a so-called extended greedoid polytope, which has strong structural and algorithmic properties. We present an adaptive-greedy algorithm (which extends Klimov's) taking as input the linear objective coefficients, which (1) determines whether the optimal LP solution is achievable by a policy in the given family; and (2) if so, computes a set of class-ranking indices that characterize optimal priority policies in the family. In the special case of project scheduling, we show that, under additional conditions, the optimal indices can be computed separately for each project (index decomposition). We further apply the framework to the important restless bandit model (two-action Markov decision chains), obtaining new index policies that extend Whittle's (1988), and simple sufficient conditions for their validity. These results highlight the power of polyhedral methods (the so-called achievable region approach) in dynamic and stochastic optimization.

Relevance: 10.00%

Abstract:

We present a new unifying framework for investigating throughput-WIP (Work-in-Process) optimal control problems in queueing systems, based on reformulating them as linear programming (LP) problems with special structure: We show that if a throughput-WIP performance pair in a stochastic system satisfies the Threshold Property we introduce in this paper, then we can reformulate the problem of optimizing a linear objective of throughput-WIP performance as a (semi-infinite) LP problem over a polygon with special structure (a threshold polygon). The strong structural properties of such polygons explain the optimality of threshold policies for optimizing linear performance objectives: their vertices correspond to the performance pairs of threshold policies. We analyze in this framework the versatile input-output queueing intensity control model introduced by Chen and Yao (1990), obtaining a variety of new results, including (a) an exact reformulation of the control problem as an LP problem over a threshold polygon; (b) an analytical characterization of the Min WIP function (giving the minimum WIP level required to attain a target throughput level); (c) an LP Value Decomposition Theorem that relates the objective value under an arbitrary policy with that of a given threshold policy (thus revealing the LP interpretation of Chen and Yao's optimality conditions); (d) diminishing returns and invariance properties of throughput-WIP performance, which underlie threshold optimality; (e) a unified treatment of the time-discounted and time-average cases.
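The performance pair of a threshold policy can be illustrated with an M/M/1 queue that admits work only while WIP is below a threshold K, i.e., an M/M/1/K system. This is a simplified stand-in for the Chen-Yao intensity control model, using the standard M/M/1/K blocking and mean-queue-length formulas:

```python
def threshold_pair(lam, mu, K):
    # Throughput-WIP pair of an M/M/1 queue with admission threshold K
    # (an M/M/1/K system): blocking probability pK and mean WIP via
    # standard finite-buffer formulas.
    rho = lam / mu
    if rho == 1.0:
        pK = 1.0 / (K + 1)          # uniform stationary distribution
        wip = K / 2.0
    else:
        pK = (1 - rho) * rho ** K / (1 - rho ** (K + 1))
        wip = rho / (1 - rho) - (K + 1) * rho ** (K + 1) / (1 - rho ** (K + 1))
    throughput = lam * (1 - pK)     # admitted (and hence served) rate
    return throughput, wip
```

Sweeping K traces out the vertices of the threshold polygon described above.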

Relevance: 10.00%

Abstract:

In the fixed design regression model, additional weights are considered for the Nadaraya–Watson and Gasser–Müller kernel estimators. We study their asymptotic behavior and the relationships between new and classical estimators. For a simple family of weights, and considering the IMSE as global loss criterion, we show some possible theoretical advantages. An empirical study illustrates the performance of the weighted estimators in finite samples.
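A minimal sketch of a weighted Nadaraya–Watson estimator at a point x0, assuming a Gaussian kernel and multiplicative weights w_i; the exact weight family studied in the paper is not reproduced here:

```python
import math

def weighted_nw(x0, xs, ys, ws, h):
    # Nadaraya-Watson estimate at x0 with bandwidth h, where each design
    # point carries an additional multiplicative weight w_i (illustrative form).
    num = den = 0.0
    for x, y, w in zip(xs, ys, ws):
        k = math.exp(-0.5 * ((x - x0) / h) ** 2)  # Gaussian kernel
        num += w * k * y
        den += w * k
    return num / den
```

With all weights equal, this reduces to the classical Nadaraya–Watson estimator.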

Relevance: 10.00%

Abstract:

In this article we propose using small area estimators to improve the estimates of both the small and large area parameters. When the objective is to estimate parameters at both levels accurately, optimality is achieved by a mixed sample design of fixed and proportional allocations. In the mixed sample design, once a sample size has been determined, one fraction of it is distributed proportionally among the different small areas while the rest is evenly distributed among them. We use Monte Carlo simulations to assess the performance of the direct estimator and two composite covariant-free small area estimators, for different sample sizes and different sample distributions. Performance is measured in terms of Mean Squared Errors (MSE) of both small and large area parameters. It is found that the adoption of small area composite estimators opens the possibility of 1) reducing sample size when precision is given, or 2) improving precision for a given sample size.
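The mixed design described above can be sketched directly: a fraction f of the total sample is split evenly across areas and the remaining 1 - f proportionally to area sizes (function name and values are illustrative):

```python
def mixed_allocation(n, pop_sizes, f):
    # Mixed sample design: fraction f of the n units is allocated equally
    # across areas, the remaining 1 - f proportionally to area population.
    m = len(pop_sizes)
    total = sum(pop_sizes)
    return [f * n / m + (1 - f) * n * p / total for p in pop_sizes]
```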

Relevance: 10.00%

Abstract:

Eurymetopum is an Andean clerid genus with 22 species. We modeled the ecological niches of 19 species with Maxent and used them as potential distributional maps to identify patterns of richness and endemicity. All modeled species maps were overlapped in a single map in order to determine richness. We performed an optimality analysis with NDM/VNDM in a grid of 1° latitude-longitude in order to identify endemism. We found a highly rich area, located between 32° and 41° south latitude, where the richest pixels have 16 species. One area of endemism was identified, located in the Maule and Valdivian Forest biogeographic provinces, which extends also to the Santiago province of the Central Chilean subregion, and contains four endemic species (E. parallelum, E. prasinum, E. proteus, and E. viride), as well as 16 non-endemic species. The sympatry of these phylogenetically unrelated species might indicate ancient vicariance processes, followed by episodes of dispersal. Based on our results, we suggest a close relationship between these provinces, with the Maule representing a complex area.

Relevance: 10.00%

Abstract:

The well-known Minkowski's ?(x) function is presented as the asymptotic distribution function of an enumeration of the rationals in (0,1] based on their continued fraction representation. Besides, the singularity of ?(x) is clearly proved in two ways: by exhibiting a set of measure one in which ?′(x) = 0; and again by actually finding a set of measure one which is mapped onto a set of measure zero and vice versa. These sets are described by means of metrical properties of different systems for real number representation.
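For reference, ?(x) can be computed from the continued fraction x = [0; a1, a2, ...] via the standard series ?(x) = Σ_k (−1)^(k+1) 2^(1−(a1+⋯+ak)); the implementation below is an illustrative sketch, not code from the paper:

```python
def minkowski_q(x, depth=30):
    # Minkowski's ?(x) for x in (0,1] via its continued fraction expansion:
    # ?([0; a1, a2, ...]) = sum_k (-1)^(k+1) * 2^(1 - (a1 + ... + ak)).
    total, s, sign = 0.0, 0, 1.0
    for _ in range(depth):
        if x == 0:
            break
        a = int(1 / x)       # next partial quotient
        x = 1 / x - a        # fractional remainder
        s += a
        total += sign * 2.0 ** (1 - s)
        sign = -sign
    return total
```

Rationals terminate after finitely many partial quotients, e.g. ?(1/2) = 1/2 and ?(2/5) = 3/8.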

Relevance: 10.00%

Abstract:

A new direction of research in Competitive Location theory incorporates theories of Consumer Choice Behavior in its models. Following this direction, this paper studies the importance of consumer behavior with respect to distance or transportation costs in the optimality of locations obtained by traditional Competitive Location models. To do this, it considers different ways of defining a key parameter in the basic Maximum Capture model (MAXCAP). This parameter will reflect various ways of taking into account distance based on several Consumer Choice Behavior theories. The optimal locations and the deviation in demand captured when the optimal locations of the other models are used instead of the true ones are computed for each model. A metaheuristic based on GRASP and Tabu search procedures is presented to solve all the models. Computational experience and an application to a 55-node network are also presented.
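As a sketch of the GRASP side of such a metaheuristic, the greedy-randomized construction phase repeatedly picks a facility at random from a restricted candidate list of high-gain sites. The capture function, parameter names, and values here are hypothetical, not the paper's model:

```python
import random

def grasp_construct(candidates, capture, p, alpha=0.3, seed=0):
    # GRASP construction phase for a p-facility capture problem:
    # at each step, build a restricted candidate list (RCL) of sites whose
    # marginal capture gain is within alpha of the best, then pick one at random.
    rng = random.Random(seed)
    sol = set()
    while len(sol) < p:
        gains = {c: capture(sol | {c}) - capture(sol) for c in candidates - sol}
        best, worst = max(gains.values()), min(gains.values())
        threshold = best - alpha * (best - worst)
        rcl = [c for c, g in gains.items() if g >= threshold]
        sol.add(rng.choice(rcl))
    return sol
```

With alpha = 0 this reduces to a pure greedy construction; a Tabu search phase would then improve the constructed solution.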

Relevance: 10.00%

Abstract:

Although it is commonly accepted that most macroeconomic variables are non-stationary, it is often difficult to identify the source of the non-stationarity. In particular, it is well-known that integrated and short memory models containing trending components that may display sudden changes in their parameters share some statistical properties that make their identification a hard task. The goal of this paper is to extend the classical testing framework for I(1) versus I(0) + breaks by considering a more general class of models under the null hypothesis: non-stationary fractionally integrated (FI) processes. A similar identification problem holds in this broader setting, which is shown to be a relevant issue from both a statistical and an economic perspective. The proposed test is developed in the time domain and is very simple to compute. The asymptotic properties of the new technique are derived and it is shown by simulation that it is very well-behaved in finite samples. To illustrate the usefulness of the proposed technique, an application using inflation data is also provided.

Relevance: 10.00%

Abstract:

This paper studies the effects of uncertain lifetime on capital accumulation and growth and also the sensitivity of those effects to the existence of a perfect annuities market. The model is an overlapping generations model with uncertain lifetimes. The technology is convex and such that the marginal product of capital is bounded away from zero. A contribution of this paper is to show that the existence of accidental bequests may lead the economy to an equilibrium that exhibits asymptotic growth, which is impossible in an economy with a perfect annuities market or with certain lifetimes. This paper also shows that if individuals face a positive probability of surviving in every period, they may be willing to save at any age. This effect of uncertain lifetime on savings may also lead the economy to an equilibrium exhibiting asymptotic growth even if there exists a perfect annuities market.

Relevance: 10.00%

Abstract:

We study the statistical properties of three estimation methods for a model of learning that is often fitted to experimental data: quadratic deviation measures without unobserved heterogeneity, and maximum likelihood with and without unobserved heterogeneity. After discussing identification issues, we show that the estimators are consistent and provide their asymptotic distribution. Using Monte Carlo simulations, we show that ignoring unobserved heterogeneity can lead to seriously biased estimations in samples which have the typical length of actual experiments. Better small sample properties are obtained if unobserved heterogeneity is introduced. That is, rather than estimating the parameters for each individual, the individual parameters are considered random variables, and the distribution of those random variables is estimated.

Relevance: 10.00%

Abstract:

We consider an agent who has to repeatedly make choices in an uncertain and changing environment, who has full information of the past, who discounts future payoffs, but who has no prior. We provide a learning algorithm that performs almost as well as the best of a given finite number of experts or benchmark strategies, and does so at any point in time, provided the agent is sufficiently patient. The key is to find the appropriate degree of forgetting the distant past. Standard learning algorithms that treat recent and distant past equally do not have the sequential epsilon optimality property.
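The forgetting idea can be sketched with an exponentially weighted average forecaster whose cumulative losses are discounted by a factor beta, so that distant past matters less. The update form and parameter values are illustrative, not the paper's algorithm:

```python
import math

def discounted_weights(loss_history, eta=1.0, beta=0.9):
    # Weights over experts from discounted cumulative losses: each period the
    # running loss is multiplied by beta before adding the new loss, so
    # observations from the distant past are gradually forgotten.
    n = len(loss_history[0])
    disc = [0.0] * n
    for losses in loss_history:            # oldest period first
        disc = [beta * d + l for d, l in zip(disc, losses)]
    z = [math.exp(-eta * d) for d in disc]  # exponential weighting
    s = sum(z)
    return [zi / s for zi in z]
```

With beta = 1 this recovers the standard exponentially weighted forecaster, which treats all past losses equally.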