15 results for Cluster size
in Archivo Digital para la Docencia y la Investigación - Institutional Repository of the Universidad del País Vasco
Abstract:
116 p.
Abstract:
In this paper we introduce four scenario-cluster-based Lagrangian Decomposition (CLD) procedures for obtaining strong lower bounds on the (optimal) solution value of two-stage stochastic mixed 0-1 problems. At each iteration of the Lagrangian-based procedures, the traditional aim consists of obtaining the solution value of the corresponding Lagrangian dual by solving scenario submodels once the nonanticipativity constraints have been dualized. Instead of considering a splitting-variable representation over the set of scenarios, we propose to decompose the model into a set of scenario clusters. We compare the computational performance of the four Lagrange multiplier updating procedures, namely the Subgradient Method, the Volume Algorithm, the Progressive Hedging Algorithm and the Dynamic Constrained Cutting Plane scheme, for different numbers of scenario clusters and different dimensions of the original problem. Our computational experience shows that the CLD bound and its computational effort depend on the number of scenario clusters considered. In any case, our results show that the CLD procedures outperform the traditional LD scheme for single scenarios both in the quality of the bounds and in computational effort. All the procedures have been implemented in a C++ experimental code. A broad computational experience is reported on a testbed of randomly generated instances, using the MIP solvers COIN-OR and CPLEX for the auxiliary mixed 0-1 cluster submodels, the latter solver run within the open source engine COIN-OR. We also give computational evidence of the model-tightening effect that preprocessing techniques, cut generation and appending, and parallel computing tools have in stochastic integer optimization. Finally, we have observed that the plain use of both solvers does not provide the optimal solution of the instances in the testbed within affordable elapsed time, except for two toy instances. On the other hand, the proposed procedures provide strong lower bounds on (or match) the quasi-optimal solution value obtained by other means for the original stochastic problem, in a considerably shorter elapsed time.
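To make the multiplier update at the heart of these procedures concrete, here is a minimal sketch of the Subgradient Method applied to the cluster Lagrangian dual. The callback solve_cluster_submodel and the Polyak-type step size are illustrative assumptions, not the authors' C++ implementation.

```python
import numpy as np

def subgradient_cld(clusters, solve_cluster_submodel, mu0, ub,
                    max_iter=100, tol=1e-6):
    """Subgradient update of the Lagrange multipliers mu dualizing the
    nonanticipativity constraints that link the scenario clusters.

    solve_cluster_submodel(cluster, mu) is assumed to return
    (objective value, first-stage variable copy proposed by the cluster).
    ub is a known upper bound (e.g. the value of any feasible solution),
    used in the Polyak-type step size.
    """
    mu, best_lb = np.asarray(mu0, dtype=float), -np.inf
    for _ in range(max_iter):
        vals, xs = zip(*(solve_cluster_submodel(c, mu) for c in clusters))
        lb = sum(vals)                      # Lagrangian dual value: a lower bound
        best_lb = max(best_lb, lb)
        # Subgradient = violation of the dualized constraints, here the
        # pairwise differences between consecutive first-stage copies.
        g = np.concatenate([xs[i] - xs[i + 1] for i in range(len(xs) - 1)])
        if np.linalg.norm(g) < tol:
            break                           # copies agree: dual optimal
        mu = mu + (ub - lb) / np.dot(g, g) * g   # Polyak step
    return best_lb, mu
```

The Volume, Progressive Hedging and Dynamic Constrained Cutting Plane variants compared in the paper differ precisely in how this multiplier update is performed.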
Abstract:
We present a scheme to generate cluster submodels with stage ordering from a (symmetric or nonsymmetric) multistage stochastic mixed integer optimization model using a break stage. We consider a stochastic model in compact representation and MPS format with a known scenario tree. The cluster submodels are built by storing first the 0-1 variables, stage by stage, and then the continuous ones, also stage by stage. A C++ experimental code has been implemented for reordering the stochastic model as well as for the cluster decomposition after the relaxation of the nonanticipativity constraints up to the so-called break stage. The computational experience shows that the stage ordering performs better in terms of elapsed time on a randomly generated testbed of multistage stochastic mixed integer problems.
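A minimal sketch of the column reordering the abstract describes (0-1 variables first, stage by stage, then continuous variables, stage by stage); the flat (name, stage, is_binary) view of the MPS columns is an assumption for illustration.

```python
def stage_ordered_columns(variables):
    """Reorder model columns: first all 0-1 variables, stage by stage,
    then all continuous variables, stage by stage.

    variables: iterable of (name, stage, is_binary) tuples, a
    hypothetical flat view of the columns of an MPS-format model.
    """
    vs = list(variables)
    binaries = sorted((v for v in vs if v[2]), key=lambda v: v[1])
    continuous = sorted((v for v in vs if not v[2]), key=lambda v: v[1])
    return binaries + continuous

# Example: columns from a 3-stage model, in arbitrary input order.
cols = [("x2", 2, True), ("y1", 1, False), ("x1", 1, True), ("y3", 3, False)]
print([name for name, _, _ in stage_ordered_columns(cols)])
# -> ['x1', 'x2', 'y1', 'y3']
```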
Abstract:
This work aims to study the quality of life in Donostia, that is, the possible relationship between the residents of dwellings and the structure and use of those dwellings. The authors seek links between resident types, dwelling types and the homogeneous areas of Donostia, using Multivariate Analysis, in particular Principal Component Analysis, to reduce the number of variables and capture most of their information in a few factors or components. They then apply Cluster analysis to form groups of homogeneous neighbourhoods.
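A minimal sketch of the pipeline the abstract describes, PCA for dimensionality reduction followed by clustering, using scikit-learn. The synthetic data, the 80% variance threshold and the choice of KMeans are illustrative assumptions, not the study's actual setup.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 12))      # placeholder: 60 neighbourhoods, 12 indicators

X_std = StandardScaler().fit_transform(X)   # PCA assumes comparable scales
pca = PCA(n_components=0.8)                 # keep components explaining 80% of variance
factors = pca.fit_transform(X_std)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(factors)
print(pca.n_components_, np.bincount(labels))   # retained components, cluster sizes
```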
Abstract:
This article deals with the development of commercial areas on the outskirts of cities and with how the urban centre has been losing its commercial appeal. This situation, common in most of the countries around us, poses major problems for traditional city-centre retail, which sees a large part of its customers opt for the offering of the periphery, with the consequent flight of revenue.
Abstract:
5 p.
Abstract:
We conduct experiments to investigate the effects of different majority requirements on bargaining outcomes in small and large groups. In particular, we use a Baron-Ferejohn protocol and investigate the effects of decision rules on delay (the number of bargaining rounds needed to reach agreement) and on measures of "fairness" (inclusiveness of coalitions, equality of the distribution within a coalition). We find that larger groups and unanimity rule are associated with significantly larger decision-making costs, in the sense that first-round proposals fail more often, leading to more costly delay. The higher rate of failure under unanimity rule and in large groups is a combination of three facts: (1) in these conditions, a larger number of individuals must agree; (2) an important fraction of individuals reject offers below the equal share; and (3) proposers demand more (relative to the equal share) in large groups.
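A toy simulation of a single Baron-Ferejohn round under a q-majority rule, to make the moving parts concrete. The behavioural rule (voters accept any offer at or above a private reservation share drawn around the equal split) is an illustrative assumption, not the experimental design.

```python
import random

def bf_round(n, q, offer, rng=random):
    """One Baron-Ferejohn round under a q-majority rule: a randomly
    chosen proposer offers the share `offer` to q - 1 coalition partners
    and keeps the remainder. Each partner accepts iff the offer meets a
    private reservation share scattered around the equal split 1/n.
    """
    proposer = rng.randrange(n)
    partners = rng.sample([i for i in range(n) if i != proposer], q - 1)
    accepts = 1                                   # the proposer votes yes
    for _ in partners:
        reservation = rng.uniform(0.5, 1.5) / n   # scattered around 1/n
        accepts += offer >= reservation
    return accepts >= q                           # True: agreement in round 1

# First-round failure (delay) rates: small simple majority vs. large unanimity.
for n, q in [(5, 3), (9, 9)]:
    fails = sum(not bf_round(n, q, offer=1 / n) for _ in range(10_000))
    print(f"n={n}, q={q}: failure rate {fails / 10_000:.1%}")
```

Under these toy assumptions, requiring unanimous agreement in a larger group drives the first-round failure rate up sharply, in the direction of the experimental finding.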
Abstract:
The learning of probability distributions from data is a ubiquitous problem in the fields of Statistics and Artificial Intelligence. During the last decades, several learning algorithms have been proposed to learn probability distributions based on decomposable models, due to their advantageous theoretical properties. Some of these algorithms can be used to search for a maximum likelihood decomposable model with a given maximum clique size, k, which controls the complexity of the model. Unfortunately, the problem of learning a maximum likelihood decomposable model given a maximum clique size is NP-hard for k > 2. In this work, we propose a family of algorithms which approximates this problem with a computational complexity of O(k · n^2 log n) in the worst case, where n is the number of random variables involved. The structures of the decomposable models that solve the maximum likelihood problem are called maximal k-order decomposable graphs. Our proposals, called fractal trees, construct a sequence of maximal i-order decomposable graphs, for i = 2, ..., k, in k − 1 steps. At each step, the algorithms follow a divide-and-conquer strategy based on the particular features of this type of structure. Additionally, we propose a prune-and-graft procedure which transforms a maximal k-order decomposable graph into another one, increasing its likelihood. We have implemented two particular fractal tree algorithms, called parallel fractal tree and sequential fractal tree. These algorithms can be considered a natural extension of Chow and Liu's algorithm from k = 2 to arbitrary values of k. Both algorithms have been compared against other efficient approaches in artificial and real domains, and they have shown competitive behavior in dealing with the maximum likelihood problem. Due to their low computational complexity, they are especially recommended for high-dimensional domains.
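For reference, a minimal sketch of the k = 2 base case the abstract generalizes, Chow and Liu's algorithm: a maximum-weight spanning tree over pairwise mutual information. The discrete data layout and the plug-in mutual-information estimator are simplifying assumptions.

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y):
    """Plug-in estimate of I(X; Y) for discrete sample columns x, y."""
    n = len(x)
    joint = {}
    for a, b in zip(x, y):
        joint[(a, b)] = joint.get((a, b), 0) + 1
    px = {a: np.mean(x == a) for a in set(x)}
    py = {b: np.mean(y == b) for b in set(y)}
    return sum((c / n) * np.log((c / n) / (px[a] * py[b]))
               for (a, b), c in joint.items())

def chow_liu(data):
    """Maximum-likelihood tree (the k = 2 decomposable model): Kruskal's
    algorithm on edges weighted by pairwise mutual information."""
    n_vars = data.shape[1]
    edges = sorted(((mutual_information(data[:, i], data[:, j]), i, j)
                    for i, j in combinations(range(n_vars), 2)), reverse=True)
    parent = list(range(n_vars))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                # adding (i, j) keeps the graph acyclic
            parent[ri] = rj
            tree.append((i, j))
    return tree

data = np.random.default_rng(0).integers(0, 2, size=(200, 4))
print(chow_liu(data))               # 3 edges spanning the 4 variables
```

The fractal tree algorithms extend this construction step by step, from maximal 2-order to maximal k-order decomposable graphs.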
Abstract:
[EN] The objective of this study was to determine whether a short training program, using real foods, would decrease portion-size estimation errors after training. Ninety student volunteers (aged 20.18 ± 0.44 y) of the University of the Basque Country (Spain) were trained in observational techniques and tested in food-weight estimation during and after a 3-hour training period. The program included 57 commonly consumed foods representing a variety of forms (125 different shapes). Estimates of food weight were compared with actual weights. Effectiveness of training was determined by examining the change in the absolute percentage error for all observers and over all foods over time. Data were analyzed using SPSS v. 13.0. The portion-size errors decreased after training for most of the foods. Additionally, the accuracy of the estimates clearly varied by food group and form. Amorphous foods were the type estimated least accurately both before and after training. Our findings suggest that future dietitians can be trained to estimate quantities by direct observation across a wide range of foods. However, this training may have been too brief for participants to fully assimilate its application.
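The effectiveness measure the abstract refers to is the absolute percentage error between estimated and actual weights; a minimal sketch follows (the sample values are made up):

```python
def absolute_percentage_error(estimated_g, actual_g):
    """Absolute percentage error of a weight estimate: |est - act| / act * 100."""
    return abs(estimated_g - actual_g) / actual_g * 100.0

# Hypothetical before/after estimates for one 150 g food portion:
print(absolute_percentage_error(120, 150))  # before training: 20.0 %
print(absolute_percentage_error(140, 150))  # after training: ~6.7 %
```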
Abstract:
The evaluation and comparison of internal cluster validity indices is a critical problem in the clustering area. The methodology used in most evaluations assumes that the clustering algorithms work correctly. We propose an alternative methodology that does not make this often false assumption. We compare seven internal cluster validity indices under both methodologies and conclude that the results obtained with the proposed methodology are more representative of the actual capabilities of the compared indices.
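As an example of the kind of index being compared, here is a minimal computation of one widely used internal validity measure, the silhouette coefficient. The choice of index and of KMeans as the clusterer are illustrative; the abstract does not name the seven indices studied.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

# The usual evaluation assumes the clusterer works correctly; the paper
# questions exactly this assumption when benchmarking such indices.
labels = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(X)
print(f"silhouette: {silhouette_score(X, labels):.3f}")  # near 1 = well separated
```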
Abstract:
[ES] The Basque Country is internationally renowned for its gastronomy and its great chefs; in fact, it is the territory with the most Michelin stars per square kilometre in the world. This renown and image have a very positive impact on the entire gastronomic sector and on the image and tourist projection of the Basque Country, and they have been achieved thanks to the sustained work of an initial group of chefs, later joined by others, who make significant collaborative efforts while still competing with one another (a clear example of coopetition). The analysis of the relationship between these great Basque chefs and their environment makes it possible to identify a cluster that is currently in its maturity stage, with a promising future, and that has yielded significant benefits to the sector, to each of its members and to the region as a whole, especially in terms of innovation, renown and reputation. In addition to the pertinent bibliographic and documentary review, this work used a qualitative methodology consisting of in-depth interviews with the seven founding chefs and trustees of the Basque Culinary Center (the first university faculty of gastronomic studies in Europe, part of the Universidad de Mondragón). The work is one of the outcomes of a collaboration contract between the Instituto de Economía Aplicada a la Empresa of the UPV/EHU and Innobasque (the Basque Innovation Agency), in which the latter set both the research objectives and the methodology to be used.
Abstract:
Poster presented at The Energy and Materials Research Conference - EMR2015, held in Madrid (Spain), 25-27 February 2015.
Abstract:
[EN] This study analyzes the relationship between board size and economic-financial performance in a sample of European firms that constitute the EUROSTOXX50 Index. Based on previous literature, resource dependency and agency theories, and considering the regulation developed by the OECD and the European Union on corporate governance for each country in the sample, the authors propose hypotheses of both positive linear and quadratic relationships between the researched parameters. Using ROA as a benchmark of financial performance and the number of board members as the measure of board size, two OLS estimations are performed. To confirm the robustness of the results, the empirical study is also tested with two other similar financial ratios, ROE and Tobin's Q. Due to the absence of significant results, an additional factor, firm size, is employed in order to check whether it affects firm performance. Delving further into the nature of this relationship reveals a strong negative relation between firm size and financial performance. Consequently, it can be asserted that the generic recommendation "one size fits all" cannot be applied in this case, which conforms to the Recommendations of the European Union that dissuade from using generic models for all countries.
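A minimal sketch of the kind of OLS specification the abstract describes, ROA regressed on board size and its square. The synthetic data, variable names and the use of statsmodels are assumptions; the paper's exact controls are not listed in the abstract.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
board_size = rng.integers(8, 21, size=50).astype(float)    # placeholder sample
roa = 0.05 + 0.001 * board_size + rng.normal(0, 0.02, 50)  # synthetic outcome

# Quadratic specification: ROA = b0 + b1*size + b2*size^2 + e
X = sm.add_constant(np.column_stack([board_size, board_size ** 2]))
model = sm.OLS(roa, X).fit()
print(model.params)      # b0, b1, b2
print(model.rsquared)
```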
Abstract:
This paper analyses economic inequality in the municipalities of the Basque Country over the period 1996-2010. We have used data from the Udalmap database, mainly GDP per capita. We have drawn Lorenz curves and computed Gini indexes to analyse the evolution of inequality over this period. We conclude that there has been an increase in economic inequality in the municipalities of the Basque Country during this period of time.
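A minimal sketch of the Gini index computation from per-capita GDP values, using the standard closed form derived from the discrete Lorenz curve; the sample figures are made up.

```python
import numpy as np

def gini(values):
    """Gini index from the discrete Lorenz curve, via the closed form
    G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n for sorted x."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return 2.0 * np.sum(i * x) / (n * x.sum()) - (n + 1.0) / n

gdp_per_capita = [18_000, 22_500, 25_000, 31_000, 47_000]  # hypothetical municipalities
print(f"Gini: {gini(gdp_per_capita):.3f}")   # 0 = perfect equality, 1 = maximal inequality
```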
Abstract:
In the present study we have investigated the population genetic structure of albacore (Thunnus alalunga, Bonnaterre 1788) and assessed the loss of genetic diversity, likely due to overfishing, of the albacore population in the North Atlantic Ocean. For this purpose, 1,331 individuals from 26 worldwide locations were analyzed by genotyping 75 novel nuclear SNPs. Our results indicated the existence of four genetically homogeneous populations delimited within the Mediterranean Sea, the Atlantic Ocean, the Indian Ocean and the Pacific Ocean. The current definition of stocks allows the sustainable management of albacore, since no stock includes more than one genetic entity. In addition, short- and long-term effective population sizes were estimated for the North Atlantic Ocean albacore population, and the results showed no historical decline for this population. Therefore, the genetic diversity and, consequently, the adaptive potential of this population have not been significantly affected by overfishing.