51 results for Quadratic multiple knapsack problem
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Planar polynomial vector fields which admit invariant algebraic curves, Darboux integrating factors or Darboux first integrals are of special interest. In the present paper we solve the inverse problem for invariant algebraic curves with a given multiplicity and for integrating factors, under generic assumptions regarding the (multiple) invariant algebraic curves involved. In particular, we prove that in this generic scenario the existence of a Darboux integrating factor implies Darboux integrability. Furthermore, we construct examples in which the genericity assumption does not hold and indicate that the situation differs for these.
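For context, a brief recap of the standard definitions behind these terms, as commonly stated in the Darboux-theory literature (background only, not this paper's results). For a planar polynomial vector field $X = P(x,y)\,\partial_x + Q(x,y)\,\partial_y$:

\begin{align*}
  &\text{invariant algebraic curve } f = 0: && Xf = P\,f_x + Q\,f_y = K f \ \text{ for a polynomial cofactor } K,\\
  &\text{Darboux first integral:} && H = f_1^{\lambda_1}\cdots f_k^{\lambda_k}, \quad \lambda_i \in \mathbb{C},\\
  &\text{Darboux integrating factor:} && R = f_1^{\lambda_1}\cdots f_k^{\lambda_k} \ \text{ with } \ \operatorname{div}(RP,\,RQ) = 0.
\end{align*}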
Abstract:
This paper proposes a heuristic for the scheduling of capacity requests and the periodic assignment of radio resources in geostationary (GEO) satellite networks with star topology, using the Demand Assigned Multiple Access (DAMA) protocol at the link layer, and Multi-Frequency Time Division Multiple Access (MF-TDMA) and Adaptive Coding and Modulation (ACM) at the physical layer.
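To make the setting concrete, here is a minimal Python sketch of a greedy DAMA-style assignment of capacity requests to the slots of an MF-TDMA frame. The data structures and the largest-demand-first priority rule are illustrative assumptions of mine, not the heuristic proposed in the paper.

# Hypothetical sketch: a greedy assignment of capacity requests to the
# (carrier, slot) grid of an MF-TDMA frame. Not the paper's heuristic.

def schedule_requests(requests, n_carriers, slots_per_frame):
    """Assign requests to (carrier, slot) pairs of one MF-TDMA frame.

    requests: list of (terminal_id, demanded_slots) tuples.
    Returns a dict terminal_id -> list of granted (carrier, slot) pairs.
    """
    # All free slots of the frame, scanned carrier by carrier.
    free = [(c, s) for c in range(n_carriers) for s in range(slots_per_frame)]
    plan = {}
    # Serve the largest demands first (one of many possible priority rules).
    for terminal, demand in sorted(requests, key=lambda r: -r[1]):
        granted, free = free[:demand], free[demand:]
        plan[terminal] = granted
    return plan

if __name__ == "__main__":
    frame = schedule_requests([("t1", 3), ("t2", 5), ("t3", 2)],
                              n_carriers=2, slots_per_frame=4)
    print(frame)   # t2 is served fully, t1 gets the remainder, t3 gets none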
Abstract:
When continuous data are coded to categorical variables, two types of coding are possible: crisp coding in the form of indicator, or dummy, variables with values either 0 or 1; or fuzzy coding, where each observation is transformed to a set of "degrees of membership" between 0 and 1, using so-called membership functions. It is well known that the correspondence analysis of crisp coded data, namely multiple correspondence analysis, yields principal inertias (eigenvalues) that considerably underestimate the quality of the solution in a low-dimensional space. Since the crisp data only code the categories to which each individual case belongs, an alternative measure of fit is simply to count how well these categories are predicted by the solution. Another approach is to consider multiple correspondence analysis equivalently as the analysis of the Burt matrix (i.e., the matrix of all two-way cross-tabulations of the categorical variables), and then perform a joint correspondence analysis to fit just the off-diagonal tables of the Burt matrix; the measure of fit is then computed as the quality of explaining these tables only. The correspondence analysis of fuzzy coded data, called "fuzzy multiple correspondence analysis", suffers from the same problem, albeit attenuated. Again, one can count how many correct predictions are made of the categories which have the highest degree of membership. But here one can also defuzzify the results of the analysis to obtain estimated values of the original data, and then calculate a measure of fit in the familiar percentage form, thanks to the resultant orthogonal decomposition of variance. Furthermore, if one thinks of fuzzy multiple correspondence analysis as explaining the two-way associations between variables, a fuzzy Burt matrix can be computed and the same strategy as in the crisp case can be applied to analyse the off-diagonal part of this matrix. In this paper these alternative measures of fit are defined and applied to a data set of continuous meteorological variables, which are coded crisply and fuzzily into three categories. Measuring the fit is further discussed when the data set consists of a mixture of discrete and continuous variables.
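As an illustration of the two coding schemes, the following Python sketch codes one continuous variable into three categories. The piecewise-linear ("triangular") membership functions with hinges at the minimum, median and maximum are a common convention, assumed here for illustration; the paper's own hinge choice may differ.

# Crisp vs. fuzzy coding of a continuous variable into three categories.
import numpy as np

def fuzzy_code(x, lo, mid, hi):
    """Degrees of membership (m_low, m_mid, m_high); they sum to 1."""
    x = np.clip(x, lo, hi)
    if x <= mid:
        m_mid = (x - lo) / (mid - lo)
        return (1 - m_mid, m_mid, 0.0)
    m_hi = (x - mid) / (hi - mid)
    return (0.0, 1 - m_hi, m_hi)

def crisp_code(x, lo, mid, hi):
    """Indicator (dummy) coding: 1 for the category with highest membership."""
    m = fuzzy_code(x, lo, mid, hi)
    out = [0, 0, 0]
    out[int(np.argmax(m))] = 1
    return tuple(out)

temps = np.array([2.0, 9.5, 14.0, 21.0, 27.5])        # e.g. temperatures
lo, mid, hi = temps.min(), np.median(temps), temps.max()
for t in temps:
    print(t, fuzzy_code(t, lo, mid, hi), crisp_code(t, lo, mid, hi))

Note how the fuzzy codes retain information about where an observation sits within its category, which is exactly what defuzzification exploits to recover estimates of the original data.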
Abstract:
The problems arising in commercial distribution are complex and involve several players and decision levels. One important decision is related to the design of the routes to distribute the products in an efficient and inexpensive way. This article deals with a complex vehicle routing problem that can be seen as a new extension of the basic vehicle routing problem. The proposed model is a multi-objective combinatorial optimization problem that considers three objectives and multiple periods, which models real distribution problems more closely. The first objective is cost minimization, the second is balancing work levels and the third is a marketing objective. An application of the model to a small example, with 5 clients and 3 days, is presented. The results of the model show the complexity of solving multi-objective combinatorial optimization problems and the conflicts between the several distribution management objectives.
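A rough Python sketch of how such a plan could be scored on three objectives follows. The concrete functional forms (total distance, spread of daily workloads, unmet visit frequencies as a marketing-style term) are stand-ins of mine; the abstract does not state the paper's exact objectives.

# Illustrative only: three stand-in objectives for a multi-period routing plan.

def evaluate_plan(routes_per_day, dist, visits_required):
    """routes_per_day: {day: [route]}, each route a list of node ids
    (0 = depot); dist: {(i, j): distance}; visits_required: {client: k}."""
    # Objective 1: total routing cost over all periods.
    cost = sum(dist[(r[i], r[i + 1])]
               for routes in routes_per_day.values()
               for r in routes for i in range(len(r) - 1))
    # Objective 2: balance of work levels (spread of daily route sizes).
    loads = [sum(len(r) for r in routes) for routes in routes_per_day.values()]
    balance = max(loads) - min(loads)
    # Objective 3: marketing-style term, e.g. unmet visit frequencies.
    visits = {}
    for routes in routes_per_day.values():
        for r in routes:
            for c in r:
                visits[c] = visits.get(c, 0) + 1
    marketing = sum(max(0, k - visits.get(c, 0))
                    for c, k in visits_required.items())
    return cost, balance, marketing

dist = {(0, 1): 4, (1, 2): 3, (2, 0): 5, (0, 2): 5, (1, 0): 4, (2, 1): 3}
plan = {"mon": [[0, 1, 2, 0]], "tue": [[0, 2, 0]]}
print(evaluate_plan(plan, dist, {1: 1, 2: 2}))   # -> (22, 1, 0)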
Resumo:
It is common in econometric applications that several hypothesis tests arecarried out at the same time. The problem then becomes how to decide whichhypotheses to reject, accounting for the multitude of tests. In this paper,we suggest a stepwise multiple testing procedure which asymptoticallycontrols the familywise error rate at a desired level. Compared to relatedsingle-step methods, our procedure is more powerful in the sense that itoften will reject more false hypotheses. In addition, we advocate the useof studentization when it is feasible. Unlike some stepwise methods, ourmethod implicitly captures the joint dependence structure of the teststatistics, which results in increased ability to detect alternativehypotheses. We prove our method asymptotically controls the familywise errorrate under minimal assumptions. We present our methodology in the context ofcomparing several strategies to a common benchmark and deciding whichstrategies actually beat the benchmark. However, our ideas can easily beextended and/or modied to other contexts, such as making inference for theindividual regression coecients in a multiple regression framework. Somesimulation studies show the improvements of our methods over previous proposals. We also provide an application to a set of real data.
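For reference, the classical Holm (1979) stepdown procedure, sketched below in Python, is the standard non-resampling baseline that stepwise methods of this kind improve upon; it controls the FWE but, unlike the authors' procedure, ignores the joint dependence of the test statistics.

# Sketch of Holm's (1979) stepdown procedure (baseline, not the authors').

def holm_stepdown(pvalues, alpha=0.05):
    """Return indices of rejected hypotheses, controlling the FWE."""
    order = sorted(range(len(pvalues)), key=lambda i: pvalues[i])
    rejected = []
    for step, i in enumerate(order):
        # Compare the smallest remaining p-value with alpha / (k - step).
        if pvalues[i] <= alpha / (len(pvalues) - step):
            rejected.append(i)
        else:
            break  # once one hypothesis survives, stop (stepdown logic)
    return rejected

print(holm_stepdown([0.001, 0.2, 0.012, 0.04], alpha=0.05))   # -> [0, 2]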
Abstract:
Consider the problem of testing k hypotheses simultaneously. In this paper, we discuss finite and large sample theory of stepdown methods that provide control of the familywise error rate (FWE). In order to improve upon the Bonferroni method or Holm's (1979) stepdown method, Westfall and Young (1993) make effective use of resampling to construct stepdown methods that implicitly estimate the dependence structure of the test statistics. However, their methods depend on an assumption called subset pivotality. The goal of this paper is to construct general stepdown methods that do not require such an assumption. In order to accomplish this, we take a close look at what makes stepdown procedures work, and a key component is a monotonicity requirement of critical values. By imposing such monotonicity on estimated critical values (which is not an assumption on the model but an assumption on the method), it is demonstrated that the problem of constructing a valid multiple test procedure which controls the FWE can be reduced to the problem of constructing a single test which controls the usual probability of a Type 1 error. This reduction allows us to draw upon an enormous resampling literature as a general means of test construction.
Abstract:
In the n-body problem a central configuration is formed when the position vector of each particle with respect to the center of mass is a common scalar multiple of its acceleration vector. Lindstrom showed for n = 3 and for n > 4 that if n − 1 masses are located at fixed points in the plane, then there are only a finite number of ways to position the remaining nth mass in such a way that they define a central configuration. Lindstrom leaves open the case n = 4. In this paper we prove the case n = 4 using as variables the mutual distances between the particles.
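In symbols, with masses $m_i$ and positions $q_i$, the standard defining equations (gravitational constant set to 1) read:

\begin{equation*}
  \ddot{q}_i = \sum_{j \neq i} \frac{m_j\,(q_j - q_i)}{\lVert q_j - q_i \rVert^{3}},
  \qquad
  q_i - c = k\,\ddot{q}_i \ \text{ for a common scalar } k,
  \qquad
  c = \frac{\sum_i m_i q_i}{\sum_i m_i}.
\end{equation*}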
Abstract:
It is known that, in a locally presentable category, localization exists with respect to every set of morphisms, while the statement that localization with respect to every (possibly proper) class of morphisms exists in locally presentable categories is equivalent to a large-cardinal axiom from set theory. One proves similarly, on one hand, that homotopy localization exists with respect to sets of maps in every cofibrantly generated, left proper, simplicial model category M whose underlying category is locally presentable. On the other hand, as we show in this article, the existence of localization with respect to possibly proper classes of maps in a model category M satisfying the above assumptions is implied by a large-cardinal axiom called Vopěnka's principle, although we do not know if the reverse implication holds. We also show that, under the same assumptions on M, every endofunctor of M that is idempotent up to homotopy is equivalent to localization with respect to some class S of maps, and if Vopěnka's principle holds then S can be chosen to be a set. There are examples showing that the latter need not be true if M is not cofibrantly generated. The above assumptions on M are satisfied by simplicial sets and symmetric spectra over simplicial sets, among many other model categories.
Abstract:
Using the continuation method, we prove that the circular and the elliptic symmetric periodic orbits of the planar rotating Kepler problem can be continued into periodic orbits of the planar collision restricted 3-body problem. Additionally, we also continue to this restricted problem the so-called “comet orbits”.
Abstract:
We say the endomorphism problem is solvable for an element W in a free group F if it can be decided effectively whether, given U in F, there is an endomorphism Φ of F sending W to U. This work analyzes an approach due to C. Edmunds and improved by C. Sims. We prove that when W is a two-generator word this approach yields an algorithm that solves the endomorphism problem in time polynomial in the length of U. This result gives a polynomial-time algorithm for solving, in free groups, two-variable equations in which all the variables occur on one side of the equality and all the constants on the other side.
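As a concrete instance of the last sentence (my illustration, with $W = x^2 y^3$): since an endomorphism is determined by the images of the generators, deciding whether some Φ sends W to U is exactly deciding solvability of a one-sided equation,

\begin{equation*}
  \exists\,\Phi \in \operatorname{End}(F):\ \Phi(W) = U
  \quad\Longleftrightarrow\quad
  X^{2} Y^{3} = U \ \text{ has a solution } X, Y \in F,
\end{equation*}

taking $X = \Phi(x)$ and $Y = \Phi(y)$.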
Abstract:
The paper is devoted to the study of a type of differential systems which usually appear in the study of some Hamiltonian systems with 2 degrees of freedom. We prove the existence of infinitely many periodic orbits on each negative energy level. All these periodic orbits pass near the total collision. Finally, we apply these results to study the existence of periodic orbits in the charged collinear 3-body problem.
Abstract:
The division problem consists of allocating an amount of a perfectly divisible good among a group of n agents with single-peaked preferences. A rule maps preference profiles into n shares of the amount to be allocated. A rule is bribe-proof if no group of agents can compensate another agent to misrepresent his preference and, after an appropriate redistribution of their shares, each obtain a strictly preferred share. We characterize all bribe-proof rules as the class of efficient, strategy-proof, and weak replacement monotonic rules. In addition, we identify the functional form of all bribe-proof and tops-only rules.
Abstract:
The division problem consists of allocating an amount M of a perfectly divisible good among a group of n agents. Sprumont (1991) showed that if agents have single-peaked preferences over their shares, the uniform rule is the unique strategy-proof, efficient, and anonymous rule. Ching and Serizawa (1998) extended this result by showing that the set of single-plateaued preferences is the largest domain, for all possible values of M, admitting a rule (the extended uniform rule) satisfying strategy-proofness, efficiency and symmetry. We identify, for each M and n, a maximal domain of preferences under which the extended uniform rule also satisfies the properties of strategy-proofness, efficiency, continuity, and "tops-onlyness". These domains (called weakly single-plateaued) are strictly larger than the set of single-plateaued preferences. However, their intersection, when M varies from zero to infinity, coincides with the set of single-plateaued preferences.
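To make the central object concrete, here is a small Python sketch of the classical uniform rule of Sprumont (1991) for single-peaked preferences, computed by bisection on the common bound; the extended uniform rule for (weakly) single-plateaued domains studied in the paper generalizes this, and the code below is only the single-peaked special case.

# Classical uniform rule: cap shares at a common bound lam chosen so the
# shares sum to M (excess demand), or pad them up to lam (excess supply).

def uniform_rule(peaks, M, tol=1e-9):
    """Shares for agents with peak amounts `peaks` when M is divided."""
    if sum(peaks) >= M:                       # excess demand: cap from above
        share = lambda lam: [min(p, lam) for p in peaks]
    else:                                     # excess supply: pad from below
        share = lambda lam: [max(p, lam) for p in peaks]
    lo, hi = 0.0, max(max(peaks), M)
    while hi - lo > tol:                      # bisect for sum(shares) == M
        lam = (lo + hi) / 2
        if sum(share(lam)) < M:
            lo = lam
        else:
            hi = lam
    return share((lo + hi) / 2)

print(uniform_rule([0.2, 0.5, 0.9], M=1.0))   # -> [0.2, 0.4, 0.4]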
Abstract:
Ever since the appearance of the ARCH model [Engle (1982a)], an impressive array of variance specifications belonging to the same class of models has emerged [e.g. Bollerslev's (1986) GARCH; Nelson's (1990) EGARCH]. This recent domain has achieved very successful developments. Nevertheless, several empirical studies seem to show that the performance of such models is not always appropriate [Boulier (1992)]. In this paper we propose a new specification: the Quadratic Moving Average Conditional Heteroskedasticity (QMACH) model. Its statistical properties, such as the kurtosis and the symmetry, as well as two estimators (Method of Moments and Maximum Likelihood), are studied. Two statistical tests are presented: the first tests for homoskedasticity and the second discriminates between the ARCH and QMACH specifications. A Monte Carlo study is presented in order to illustrate some of the theoretical results. An empirical study is undertaken for the DM-US exchange rate.