89 results for collection problems


Relevance: 20.00%

Abstract:

When using a polynomial approximating function, the most contentious aspect of the Heat Balance Integral Method is the choice of the power of the highest order term. In this paper we employ a method recently developed for thermal problems, in which the exponent is determined during the solution process by minimising an error function, to analyse Stefan problems. The solution requires no knowledge of an exact solution and generally produces significantly better results than all previous HBI models. The method is illustrated by first applying it to standard thermal problems. A Stefan problem with an analytical solution is then discussed and the results compared to the approximate solution. An ablation problem is also analysed and the results compared against a numerical solution. In both examples the agreement is excellent. A Stefan problem where the boundary temperature increases exponentially is analysed; this highlights the difficulties that can be encountered with a time-dependent boundary condition. Finally, melting with a time-dependent flux is briefly analysed, without using analytical or numerical results to assess the accuracy.
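
To make the idea concrete, here is a minimal sketch (not the paper's implementation) of exponent selection by error minimisation, applied to the classical fixed-temperature half-space problem: the usual HBIM profile u = (1 - x/δ)^n with penetration depth δ = sqrt(2n(n+1)αt) is assumed, the error function is taken as the integrated squared residual of the heat equation, and the exponent grid, evaluation time and erfc comparison are illustrative choices.

```python
import numpy as np
from scipy.special import erfc

alpha = 1.0   # thermal diffusivity (nondimensional)
t = 1.0       # evaluation time; the ranking of exponents is what matters

def delta(n):
    # HBIM penetration depth for u = (1 - x/delta)^n with u(0, t) = 1
    return np.sqrt(2.0 * n * (n + 1.0) * alpha * t)

def residual_norm(n, m=4001):
    # E(n) = integral over (0, delta) of (u_t - alpha * u_xx)^2 dx for the
    # assumed profile, evaluated numerically with the trapezoidal rule
    d = delta(n)
    ddot = n * (n + 1.0) * alpha / d            # from delta^2 = 2 n (n+1) alpha t
    x = np.linspace(0.0, d, m)[1:-1]            # keep away from the endpoints
    s = 1.0 - x / d
    u_t = n * s**(n - 1.0) * x * ddot / d**2    # time derivative of the profile
    u_xx = n * (n - 1.0) * s**(n - 2.0) / d**2  # second space derivative
    return np.trapz((u_t - alpha * u_xx) ** 2, x)

ns = np.linspace(2.0, 4.0, 401)
n_best = min(ns, key=residual_norm)
print("exponent minimising the residual:", round(float(n_best), 3))

# sanity check against the exact similarity solution erfc(x / (2 sqrt(alpha t)))
x = np.linspace(0.0, delta(n_best), 200)
u_hbim = (1.0 - x / delta(n_best)) ** n_best
u_exact = erfc(x / (2.0 * np.sqrt(alpha * t)))
print("max |HBIM - exact| on [0, delta]:", float(np.abs(u_hbim - u_exact).max()))
```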

Relevance: 20.00%

Abstract:

This paper studies conventional and refined heat balance integral methods for a number of phase change problems. These include standard test problems, with one and with two phase changes, which have exact solutions that allow us to test the accuracy of the approximate solutions. We also consider situations where no analytical solution is available and compare the results to numerical solutions. A quadratic profile is a popular choice for approximating the temperature, but we show that a cubic profile, seldom considered in the literature, is far more accurate in most circumstances. In addition, the refined integral method can give a further improvement, and we develop a variation on this method which turns out to be optimal in some cases. We assess which integral method is better for various problems, showing that the choice depends largely on the specified boundary conditions.

Relevance: 20.00%

Abstract:

The Statmedia 3 project has definitively consolidated the proposal for the courses Biostatistics (Biology), Data Analysis (Environmental Sciences) and Mathematical Statistics (Statistics Diploma), renewing part of the material created with Statmedia 2. Mathematics for Environmental Sciences and Introduction to Probability (Degree in Statistics) have also been included. The previous MQD project covered only practical sessions, whereas this project allows a diverse offering of individualized activities. Individualization means that each student receives a personalized case proposal with different data. Activities may or may not be scheduled as classroom sessions, but the key to their success is that students see their work recognized in the continuous assessment. Students' evaluation of Statmedia is very positive, and we observe an improvement in academic results. Statmedia 3 has involved a significant effort on the software side of the project; the mix of technologies we use is state of the art: Ajax, servlets and Java applets. We have also built an on-line assistant for designing documents and planning activities that eases the instructors' workload. Our front-line participation in the convergence process towards the EEES (European Higher Education Area) has allowed us to anticipate some changes, and has led the faculty of the Department of Statistics to adopt Statmedia as an essential methodology in its teaching plans. The project continues with a fourth consecutive MQD project, in which we will deploy the newly implemented technology. The main objective will be to provide the courses of the 7 degree programmes in which the department participates with individualized activities in the form of practical cases, problems and various tests. The collection of material stored in our library, built up over almost ten years of continuous work, together with the accumulated experience of how to use Statmedia most efficiently, began to be exploited in the new degree programmes in the 2009-2010 academic year.

Relevance: 20.00%

Abstract:

This paper discusses the use of probabilistic or randomized algorithms for solving combinatorial optimization problems. Our approach employs non-uniform probability distributions to add a biased random behavior to classical heuristics, so that a large set of alternative good solutions can be obtained quickly, in a natural way and without complex configuration processes. This procedure is especially useful in problems where properties such as non-smoothness or non-convexity lead to a highly irregular solution space, for which traditional optimization methods, both exact and approximate, may fail to reach their full potential. The results obtained are promising enough to suggest that randomizing classical heuristics is a powerful method that can be successfully applied in a variety of cases.
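
As an illustration of biased randomization (a toy example, not the paper's algorithms or test problems), the sketch below perturbs a classical greedy heuristic for an assumed knapsack instance: each step samples from the ratio-sorted candidate list with a geometric-style non-uniform distribution, so repeated cheap runs yield many distinct good solutions. The instance data and the parameter p are illustrative assumptions.

```python
import random

# toy knapsack instance (values, weights, capacity) -- illustrative data only
values  = [10, 13, 7, 8, 12, 4, 9, 6]
weights = [ 5,  7, 3, 4,  6, 2, 5, 3]
capacity = 15

def biased_greedy(p=0.3, rng=random):
    # candidates sorted by value/weight ratio, best first (the classical greedy order)
    order = sorted(range(len(values)), key=lambda i: values[i] / weights[i], reverse=True)
    load, total, chosen = 0, 0, []
    while order:
        # biased pick: position k is chosen with probability roughly p * (1 - p)^k,
        # so the greedy choice is the most likely but not the only possibility
        k = 0
        while k < len(order) - 1 and rng.random() > p:
            k += 1
        i = order.pop(k)
        if load + weights[i] <= capacity:
            load += weights[i]
            total += values[i]
            chosen.append(i)
    return total, chosen

# many fast biased-randomized runs, keeping the best solution found
best = max((biased_greedy() for _ in range(1000)), key=lambda s: s[0])
print("best value:", best[0], "items:", sorted(best[1]))
```

With p close to 1 the procedure collapses back to the deterministic greedy heuristic; smaller values of p explore the solution space more broadly.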

Relevance: 20.00%

Abstract:

The paper develops a stability theory for the optimal value and the optimal set mapping of optimization problems posed in a Banach space. The problems considered have an arbitrary number of inequality constraints involving lower semicontinuous (not necessarily convex) functions and one closed abstract constraint set. The perturbations considered lead to problems of the same type as the nominal one (with the same space of variables and the same number of constraints), where the abstract constraint set can also be perturbed. The spaces of functions involved in the problems (objective and constraints) are equipped with the metric of uniform convergence on bounded sets, while the space of closed sets is, coherently, equipped with the Attouch-Wets topology. The paper examines, in a unified way, the lower and upper semicontinuity of the optimal value function, and the closedness, lower and upper semicontinuity (in the sense of Berge) of the optimal set mapping. This paper can be seen as a second part of the stability theory presented in [17], where we studied the stability of the feasible set mapping (completed here with the analysis of the Lipschitz-like property).

Relevance: 20.00%

Abstract:

"Vegeu el resum a l'inici del document del fitxer adjunt"

Relevance: 20.00%

Abstract:

In this paper we present a new, accurate form of the heat balance integral method, termed the Combined Integral Method (CIM). The application of this method to Stefan problems is discussed. For simple test cases the results are compared with exact and asymptotic limits. In particular, it is shown that the CIM is more accurate than the second-order, large Stefan number, perturbation solution for a wide range of Stefan numbers. In the initial examples it is shown that the CIM reduces the standard problem, consisting of a PDE defined over a domain specified by an ODE, to the solution of one or two algebraic equations. The latter examples, where the boundary temperature varies with time, reduce to a set of three first-order ODEs.
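
For reference, a standard non-dimensional formulation of the one-phase Stefan problem of the kind referred to above, in which the heat equation (the PDE) holds on a region whose moving boundary s(t) is governed by the Stefan condition (the ODE); β denotes the Stefan number. This is the generic textbook scaling, not necessarily the one used in the paper.

```latex
% one-phase melting problem with a fixed boundary temperature (illustrative scaling)
\[
\begin{aligned}
  &\frac{\partial \theta}{\partial t} = \frac{\partial^2 \theta}{\partial x^2},
     && 0 < x < s(t), \\
  &\theta(0,t) = 1, \qquad \theta\bigl(s(t),t\bigr) = 0, \\
  &\beta \frac{\mathrm{d}s}{\mathrm{d}t}
     = -\left.\frac{\partial \theta}{\partial x}\right|_{x=s(t)},
     \qquad s(0) = 0 .
\end{aligned}
\]
```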

Relevance: 20.00%

Abstract:

In this paper the two main drawbacks of heat balance integral methods are examined. Firstly, we investigate the choice of approximating function. For a standard polynomial form it is shown that combining the Heat Balance and Refined Integral methods to determine the power of the highest order term leads either to the same accuracy as standard methods or, more often, to greatly improved accuracy. Secondly, we examine thermal problems with a time-dependent boundary condition. In doing so we develop a logarithmic approximating function. This new function allows us to model moving peaks in the temperature profile, a feature that previous heat balance methods cannot capture. If the boundary temperature varies so that at some time t > 0 it equals the far-field temperature, then standard methods predict that the temperature is everywhere at this constant value; the new method predicts the correct behaviour. It is also shown that this function provides even more accurate results, when coupled with the new CIM, than the polynomial profile. The analysis focuses primarily on a specified constant boundary temperature and is then extended to constant flux, Newton cooling and time-dependent boundary conditions.

Relevance: 20.00%

Abstract:

This paper provides a natural way of reaching an agreement between two prominent proposals in a bankruptcy problem. In particular, using the fact that such problems can be approached from two different points of view, awards and losses, we justify the average of any pair of dual bankruptcy rules through the definition of a double recursive process. Finally, by considering three possible sets of equity principles that a particular society may agree on, we retrieve the average of old and well-known bankruptcy rules: the Constrained Equal Awards and Constrained Equal Losses rules, Piniles' rule and its dual, and the Constrained Egalitarian rule and its dual.
Keywords: Bankruptcy problems, Midpoint, Bounds, Duality, Recursivity.
JEL classification: C71, D63, D71.
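
For concreteness, a small sketch of one dual pair named above and of its average, using the standard definitions of the Constrained Equal Awards and Constrained Equal Losses rules; the paper's double recursive process is not reproduced here, and the estate and claims below are made-up data.

```python
from typing import List

def _solve(f, lo: float, hi: float, target: float, tol: float = 1e-12) -> float:
    # bisection for the parameter lambda of an increasing function f with f(lambda) = target
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < target:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def cea(E: float, c: List[float]) -> List[float]:
    # Constrained Equal Awards: award_i = min(c_i, lam), with awards summing to E
    lam = _solve(lambda x: sum(min(ci, x) for ci in c), 0.0, max(c), E)
    return [min(ci, lam) for ci in c]

def cel(E: float, c: List[float]) -> List[float]:
    # Constrained Equal Losses: award_i = max(c_i - lam, 0), with awards summing to E
    lam = _solve(lambda x: -sum(max(ci - x, 0.0) for ci in c), 0.0, max(c), -E)
    return [max(ci - lam, 0.0) for ci in c]

def average_of_duals(E: float, c: List[float]) -> List[float]:
    # midpoint between a rule and its dual, here CEA and CEL
    return [0.5 * (a + b) for a, b in zip(cea(E, c), cel(E, c))]

E, claims = 100.0, [30.0, 60.0, 90.0]
print("CEA    :", cea(E, claims))
print("CEL    :", cel(E, claims))
print("average:", average_of_duals(E, claims))
```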

Relevance: 20.00%

Abstract:

Reaching a commitment among agents has always been a difficult task, especially when they have to decide how to distribute the available amount of a scarce resource. On the one hand, there is a multiplicity of possible ways of assigning the available amount; on the other hand, each agent will propose the distribution that gives her the highest possible award. In this paper, with the purpose of making this agreement easier, we first use two different sets of basic properties, called Commonly Accepted Equity Principles, to delimit what agents can propose as reasonable allocations. Secondly, we extend the results obtained by Chun (1989) and Herrero (2003), obtaining new characterizations of old and well-known bankruptcy rules. Finally, using the fact that bankruptcy problems can be analyzed from awards and losses, we define a mechanism which provides a new justification of the convex combinations of bankruptcy rules.
Keywords: Bankruptcy problems, Unanimous Concessions procedure, Diminishing Claims mechanism, Piniles' rule, Constrained Egalitarian rule.
JEL classification: C71, D63, D71.

Relevance: 20.00%

Abstract:

In a distribution problem, and specifically in bankruptcy issues, the Proportional (P) and the Egalitarian (EA) divisions are two of the most popular ways to resolve the conflict. The Constrained Equal Awards rule (CEA) is introduced in the bankruptcy literature to ensure that no agent receives more than her claim, a problem that can arise when using the egalitarian division. We propose an alternative modification, using a convex combination of P and EA. The recursive application of this new rule finishes at the CEA rule. Our solution concept ensures a minimum amount to each agent and distributes the remaining estate in a proportional way.
Keywords: Bankruptcy problems, Proportional rule, Equal Awards, Convex combination of rules, Lorenz dominance.
JEL classification: C71, D63, D71.
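
The ingredients of this proposal can be sketched as follows (illustrative data only; the recursive modification whose limit is the CEA rule is not reproduced, just the two baseline divisions and their convex combination).

```python
def proportional(E, claims):
    # each claimant receives a share proportional to her claim
    total = sum(claims)
    return [E * c / total for c in claims]

def egalitarian(E, claims):
    # the estate is split equally, regardless of the claims
    n = len(claims)
    return [E / n for _ in claims]

def convex_combination(E, claims, theta):
    # theta * P + (1 - theta) * EA; note that an agent may still receive more
    # than her claim, which is the issue a CEA-style correction addresses
    P = proportional(E, claims)
    EA = egalitarian(E, claims)
    return [theta * p + (1.0 - theta) * e for p, e in zip(P, EA)]

print(convex_combination(100.0, [10.0, 40.0, 80.0], theta=0.5))
```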

Relevance: 20.00%

Abstract:

The idea of ensuring a guarantee (a minimum amount of the resources) to each agent has recently acquired great relevance, in both social and political terms. Furthermore, the notion of Solidarity has been treated frequently in redistribution problems to establish that any increment of the resources should be equally distributed, taking into account some relevant characteristics. In this paper, we combine these two general concepts, guarantee and solidarity, to characterize the uniform rules in bankruptcy problems (the Constrained Equal Awards and Constrained Equal Losses rules).
Keywords: Constrained Equal Awards, Constrained Equal Losses, Lower bounds, Bankruptcy problems, Solidarity.
JEL classification: C71, D63, D71.

Relevance: 20.00%

Abstract:

The solution for the 'Contested Garment Problem', proposed in the Babylonian Talmud, suggests that each agent should receive at least some part of the resources whenever the demand exceeds the available amount. In this context, we propose a new method to define lower bounds on awards, an idea that has underlain the theoretical analysis of bankruptcy problems from its beginning (O'Neill, 1982) to the present day (Dominguez and Thomson, 2006). Specifically, starting from the fact that a society establishes its own set of 'Commonly Accepted Equity Principles', our proposal guarantees each agent the smallest amount she receives across all the admissible rules. Since in general this new bound will not exhaust the estate, we analyze its recursive application for different sets of equity principles.
Keywords: Bankruptcy problems, Bankruptcy rules, Lower bounds, Recursive process
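
A schematic reading of this bound, under assumptions: take some admissible set of rules (here Proportional and Constrained Equal Awards, an arbitrary illustrative choice), guarantee each agent the least she would receive under any of them, and, since these minima generally do not exhaust the estate, repeat on the remaining estate and reduced claims. This is a sketch of the general idea, not the paper's specific construction.

```python
from typing import Callable, List, Sequence

def proportional(E: float, c: Sequence[float]) -> List[float]:
    # awards proportional to claims
    total = sum(c)
    return [E * ci / total for ci in c]

def cea(E: float, c: Sequence[float]) -> List[float]:
    # Constrained Equal Awards: min(c_i, lam) with lam fixed by the estate (bisection)
    lo, hi = 0.0, max(c)
    for _ in range(100):
        lam = 0.5 * (lo + hi)
        if sum(min(ci, lam) for ci in c) < E:
            lo = lam
        else:
            hi = lam
    return [min(ci, lam) for ci in c]

def recursive_lower_bound(E: float, c: Sequence[float],
                          rules: Sequence[Callable], rounds: int = 50) -> List[float]:
    # each round: guarantee every agent the least she gets under any admissible
    # rule, then recurse on the leftover estate and the reduced claims
    awards = [0.0] * len(c)
    claims = list(c)
    for _ in range(rounds):
        if E <= 1e-9:
            break
        allocations = [rule(E, claims) for rule in rules]
        floor = [min(a[i] for a in allocations) for i in range(len(claims))]
        if sum(floor) < 1e-9:          # nothing left to guarantee
            break
        awards = [x + f for x, f in zip(awards, floor)]
        claims = [ci - f for ci, f in zip(claims, floor)]
        E -= sum(floor)
    return awards

print(recursive_lower_bound(100.0, [30.0, 60.0, 90.0], rules=[proportional, cea]))
```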

Relevance: 20.00%

Abstract:

A parts-based model is a parametrization of an object class using a collection of landmarks that follow the object's structure. The matching of parts-based models is one of the problems to which pairwise Conditional Random Fields have been successfully applied. The main reason for their effectiveness is tractable inference and learning, due to the simplicity of the involved graphs, usually trees. However, these models do not consider possible patterns of statistics among sets of landmarks, and thus they suffer from using overly myopic information. To overcome this limitation, we propose a novel structure based on hierarchical Conditional Random Fields, which we explain in the first part of this work. We build a hierarchy of combinations of landmarks, where matching is performed taking into account the whole hierarchy. To preserve tractable inference we effectively sample the label set. We test our method on facial feature selection and human pose estimation on two challenging datasets: Buffy and MultiPIE. In the second part of this work, we present a novel approach to multiple kernel combination that relies on stacked classification. This method can be used to evaluate the landmarks of the parts-based model approach. Our method is based on combining the responses of a set of independent classifiers, one for each individual kernel. Unlike earlier approaches that linearly combine kernel responses, our approach uses them as inputs to another set of classifiers. We show that we outperform state-of-the-art methods on most of the standard benchmark datasets.
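
A sketch of the stacking idea from the second part, using synthetic data and scikit-learn rather than the thesis pipeline or its kernels: one SVM is trained per kernel, and the real-valued responses of these independent classifiers become the input features of a second-stage classifier, instead of being combined linearly.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# synthetic stand-in for a real multi-kernel problem
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# one independent classifier per kernel (the kernel choices are illustrative)
kernels = [("linear", {}), ("rbf", {"gamma": 0.05}), ("poly", {"degree": 3})]
base = [SVC(kernel=k, **params).fit(X_tr, y_tr) for k, params in kernels]

# stack: the per-kernel responses become features for a second-stage classifier
def responses(X):
    return np.column_stack([clf.decision_function(X) for clf in base])

stacker = LogisticRegression().fit(responses(X_tr), y_tr)
print("stacked accuracy:", stacker.score(responses(X_te), y_te))
```

In practice the second-stage classifier would normally be trained on held-out (e.g. cross-validated) responses of the base classifiers to avoid overfitting the stack.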

Relevance: 20.00%

Abstract:

Presentation in CODAWORK'03, session 4: Applications to archeometry