60 results for Parallel mechanisms
in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
This paper examines the governance of Spanish banks, addressing two main questions. First, does poor economic performance activate those governance interventions that favor the removal of executive directors and the merger of non-performing banks? And second, does the relationship between governance intervention and economic performance vary with the ownership form of the bank? Our results show that poor performance does activate governance mechanisms in banks, although in the case of Savings Banks intervention is confined to a merger or acquisition. Nevertheless, the distinct ownership structure of Savings Banks does not fully protect non-performing banks from disappearing. Product-market competition compensates for the weak internal governance mechanisms that result from an ownership form which gives voice to several stakeholder groups.
Abstract:
This paper shows how a high-level matrix programming language may be used to perform Monte Carlo simulation, bootstrapping, estimation by maximum likelihood and GMM, and kernel regression in parallel on symmetric multiprocessor computers or clusters of workstations. Parallelization is implemented so that an investigator may use the programs without any knowledge of parallel programming. A bootable CD that allows rapid creation of a cluster for parallel computing is introduced. Examples show that parallelization can lead to important reductions in computational time. A detailed discussion of how the Monte Carlo problem was parallelized is included as an example of how to write parallel programs for Octave.
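The Monte Carlo problem described here is embarrassingly parallel: each replication is independent, so replications can simply be farmed out to workers and the results collected at the end. The paper's implementation targets GNU Octave with MPI; the Python sketch below only illustrates the same pattern on a single multiprocessor machine, with a made-up statistic standing in for the econometric estimator.

    import numpy as np
    from multiprocessing import Pool

    def one_replication(seed):
        # Each replication simulates a data set and returns the statistic of interest;
        # here a sample mean stands in for an ML/GMM estimate.
        rng = np.random.default_rng(seed)
        x = rng.normal(size=100)
        return x.mean()

    if __name__ == "__main__":
        seeds = range(10_000)                     # one independent seed per replication
        with Pool() as pool:                      # one worker per available CPU core
            stats = pool.map(one_replication, seeds, chunksize=100)
        print(np.mean(stats), np.std(stats))      # Monte Carlo summary of the statistic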
Abstract:
Ma (1996) studied the random order mechanism, a matching mechanism suggested by Roth and Vande Vate (1990) for marriage markets. By means of an example he showed that the random order mechanism does not always reach all stable matchings. Although Ma's (1996) result is true, we show that the probability distribution he presented - and therefore the proof of his Claim 2 - is not correct. The mistake in Ma's (1996) calculations arises because, even though the example looks very symmetric, some of the calculations are not as "symmetric."
Abstract:
This note describes ParallelKnoppix, a bootable CD that allows creation of a Linux cluster in very little time. An experienced user can create a cluster ready to execute MPI programs in less than 10 minutes. The computers used may be heterogeneous machines of the IA-32 architecture. When the cluster is shut down, all machines except one are in their original state, and the remaining machine can be returned to its original state by deleting a directory. The system thus provides a means of using non-dedicated computers to create a cluster. An example session is documented.
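On a cluster of this kind, a quick way to confirm that MPI is working across the nodes is to launch a trivial program that reports each process's rank and host. The note documents an example session rather than this program; the mpi4py version below is only an illustrative check, and the host-file and script names are hypothetical.

    # run with e.g.:  mpirun -np 8 -hostfile hosts python mpi_hello.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    # Each MPI process prints its rank, the total number of processes, and its host.
    print(f"rank {comm.Get_rank()} of {comm.Get_size()} "
          f"running on {MPI.Get_processor_name()}")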
Abstract:
For the many-to-one matching model in which firms have substitutable and quota q-separable preferences over subsets of workers we show that the workers-optimal stable mechanism is group strategy-proof for the workers. In order to prove this result, we also show that under this domain of preferences (which contains the domain of responsive preferences of the college admissions problem) the workers-optimal stable matching is weakly Pareto optimal for the workers and the Blocking Lemma holds as well. We exhibit an example showing that none of these three results remain true if the preferences of firms are substitutable but not quota q-separable.
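For readers unfamiliar with the workers-optimal stable mechanism: in the college-admissions special case with responsive preferences (contained in the domain studied here), it can be computed by worker-proposing deferred acceptance. The sketch below is a minimal illustration with made-up preferences and quotas, not the paper's construction.

    def deferred_acceptance(worker_prefs, firm_rank, quotas):
        # worker_prefs: worker -> list of firms, most preferred first
        # firm_rank: firm -> {worker: rank}, lower is better; unranked workers are unacceptable
        next_choice = {w: 0 for w in worker_prefs}
        held = {f: [] for f in quotas}            # tentatively accepted workers per firm
        free = list(worker_prefs)
        while free:
            w = free.pop()
            prefs = worker_prefs[w]
            if next_choice[w] >= len(prefs):
                continue                          # w has exhausted its list, stays unmatched
            f = prefs[next_choice[w]]
            next_choice[w] += 1
            if w not in firm_rank[f]:
                free.append(w)                    # unacceptable to f, propose again later
                continue
            held[f].append(w)
            held[f].sort(key=lambda x: firm_rank[f][x])
            if len(held[f]) > quotas[f]:
                free.append(held[f].pop())        # firm keeps only its best workers within quota
        return held

    workers = {"w1": ["f1", "f2"], "w2": ["f1", "f2"], "w3": ["f2", "f1"]}
    firms = {"f1": {"w1": 1, "w2": 2, "w3": 3}, "f2": {"w3": 1, "w1": 2, "w2": 3}}
    print(deferred_acceptance(workers, firms, {"f1": 1, "f2": 2}))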
Abstract:
This paper studies behavior in experiments with a linear voluntary contributions mechanism for public goods conducted in Japan, the Netherlands, Spain and the USA. The same experimental design was used in the four countries. Our 'contribution function' design allows us to examine subjects' behavior from two complementary points of view. It yields information about situations where, in purely pecuniary terms, it is a dominant strategy to contribute all the endowment and about situations where it is a dominant strategy to contribute nothing. Our results show, first, that differences in behavior across countries are minor. We find that when people play "the same game" they behave similarly. Second, for all four countries our data are inconsistent with the explanation that subjects contribute only out of confusion. A common cooperative motivation is needed to explain the data.
Abstract:
According to the account of European Union (EU) decision making proposed in this paper, decision making in the EU is a bargaining process during which actors shift their policy positions with a view to reaching agreements on controversial issues.
Abstract:
We study simply-connected irreducible non-locally symmetric pseudo-Riemannian Spin(q) manifolds admitting parallel quaternionic spinors.
Abstract:
The aim of this research is to provide evidence on the sources of agglomeration economies for the Spanish case. Of all the approaches taken in the literature to measure agglomeration economies, we analyse them through the location decisions of manufacturing firms. The recent literature has highlighted that an analysis based on the localization/urbanization dichotomy (relationships within the same industry) is not sufficient to understand agglomeration economies. Relationships between different industries, however, do prove significant when examining why firms belonging to different industries locate next to one another. With this in mind, we try to explain which relationships between different industries can account for coagglomeration. To do so, we focus on the inter-industry relationships defined by Marshall's agglomeration mechanisms, namely labor market pooling, input sharing and knowledge spillovers. We measure labor market pooling as the extent to which two industries employ the same workers (occupational classification). With the second Marshallian mechanism, input sharing, we capture whether two industries have a buyer/seller relationship. Finally, knowledge spillovers refer to two industries using the same technologies. In order to capture all the effects of the agglomeration mechanisms in Spain, we work with two geographical scales, municipalities and local labor markets. The existing literature has never agreed on the geographical scale at which the Marshallian mechanisms work best, so we cover all the potential geographical units.
Abstract:
We propose a smooth multibidding mechanism for environments where a group of agents have to choose one out of several projects (possibly with the help of a social planner). Our proposal is related to the multibidding mechanism (Pérez-Castrillo and Wettstein, 2002) but it is "smoother" in the sense that small variations in an agent's bids do not lead to dramatic changes in the probability of selecting a project. This mechanism is shown to possess several interesting properties. First, unlike in Pérez-Castrillo and Wettstein (2002), the equilibrium outcome is unique. Second, it ensures an equal sharing of the surplus that it induces. Finally, it enables reaching an outcome as close to efficiency as desired.
Abstract:
Performance prediction and application behavior modeling have been the subject of extensive research that aims to estimate application performance with acceptable precision. A novel approach to predicting the performance of parallel applications is based on the concept of Parallel Application Signatures, which consists in extracting an application's most relevant parts (phases) and the number of times they repeat (weights). By executing these phases on a target machine and multiplying each phase's execution time by its weight, an estimate of the application's total execution time can be made. One of the problems is that the performance of an application depends on the program workload. Each type of workload affects how an application performs on a given system differently, and so affects the signature execution time. Since the workloads used in most scientific parallel applications have well-known dimensions and data ranges, and the behavior of these applications is mostly deterministic, a model of how the program's workload affects its performance can be obtained. We create a new methodology to model how a program's workload affects the parallel application signature. Using regression analysis we are able to generalize each phase's execution time and weight as functions of the workload, in order to predict an application's performance on a target system for any workload within a predefined range. We validate our methodology using a synthetic program, benchmark applications and well-known real scientific applications.
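A minimal sketch of the prediction step, under the simplifying assumption that each phase's execution time is roughly linear in a scalar workload size; the phase names, measured times and weights below are made up, and the paper's actual regression models are richer than this.

    import numpy as np

    # Measured phase execution times (seconds) at a few training workload sizes.
    workloads = np.array([100, 200, 400])
    phase_times = {
        "compute": np.array([1.2, 2.5, 5.1]),
        "exchange": np.array([0.3, 0.5, 0.9]),
    }
    weights = {"compute": 50, "exchange": 49}     # how many times each phase repeats

    def predict_total_time(target_workload):
        total = 0.0
        for phase, times in phase_times.items():
            # Per-phase regression of execution time against workload size.
            slope, intercept = np.polyfit(workloads, times, 1)
            total += weights[phase] * (slope * target_workload + intercept)
        return total

    print(predict_total_time(800))                # predicted time for an unseen workload size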
Abstract:
COMPSs is a parallel programming environment developed by the BSC-CNS. This project seeks to extend the environment with functionality that it did not initially support. These extensions consist mainly of implementing mechanisms that increase the flexibility, robustness and versatility of the system.
Abstract:
This paper proposes a parallel architecture for estimating the motion of an underwater robot. It is well known that image processing requires a huge amount of computation, mainly at the low level, where the algorithms deal with a great amount of data. In a motion estimation algorithm, correspondences between two images have to be solved at the low level. In underwater imaging, normalised correlation can be a solution in the presence of non-uniform illumination. Due to its regular processing scheme, a parallel implementation of the correspondence problem can be an adequate approach to reduce the computation time. Taking into consideration the complexity of the normalised correlation criterion, a new approach using the parallel organisation of every processor in the architecture is proposed.
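The similarity measure at the core of the correspondence search is normalised correlation, sketched below in NumPy. Because each candidate position is scored independently, the search loop is the part that a parallel architecture can distribute across processors; the function and array names here are illustrative, not the paper's implementation.

    import numpy as np

    def normalised_correlation(patch, template):
        # Zero-mean normalised correlation, robust to uniform brightness offsets.
        p = patch - patch.mean()
        t = template - template.mean()
        denom = np.sqrt((p * p).sum() * (t * t).sum())
        return (p * t).sum() / denom if denom > 0 else 0.0

    def best_match(search_region, template):
        # Slide the template over the search region and keep the best-scoring position;
        # each candidate score is independent and can be computed in parallel.
        h, w = template.shape
        best, best_pos = -1.0, (0, 0)
        for i in range(search_region.shape[0] - h + 1):
            for j in range(search_region.shape[1] - w + 1):
                score = normalised_correlation(search_region[i:i+h, j:j+w], template)
                if score > best:
                    best, best_pos = score, (i, j)
        return best_pos, best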
Abstract:
Technological limitations and power constraints are resulting in high-performance parallel computing architectures that are based on large numbers of high-core-count processors. Commercially available processors are now at 8 and 16 cores, and experimental platforms, such as the many-core Intel Single-chip Cloud Computer (SCC) platform, provide much higher core counts. These trends are presenting new sets of challenges to HPC applications, including programming complexity and the need for extreme energy efficiency. In this work, we first investigate the power behavior of scientific PGAS application kernels on the SCC platform, and explore opportunities and challenges for power management within the PGAS framework. Results obtained via empirical evaluation of Unified Parallel C (UPC) applications on the SCC platform under different constraints show that, for specific operations, the potential for energy savings in PGAS is large, and that power/performance trade-offs can be effectively managed using a cross-layer approach. We investigate cross-layer power management using PGAS language extensions and runtime mechanisms that manipulate power/performance trade-offs. Specifically, we present the design, implementation and evaluation of such a middleware for application-aware cross-layer power management of UPC applications on the SCC platform. Finally, based on our observations, we provide a set of recommendations and insights that can be used to support similar power management for PGAS applications on other many-core platforms.
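As an illustration of the cross-layer idea (not the middleware described in the paper), a runtime can lower the processor frequency around phases the application marks as communication-bound, where extra latency costs little, and restore it afterwards. The sketch below assumes a Linux cpufreq sysfs interface with the userspace governor and suitable permissions; the SCC's actual power-management mechanism and the paper's UPC extensions differ.

    import contextlib

    def set_cpu_khz(khz, cpu=0):
        # Write the target frequency to the cpufreq sysfs entry (userspace governor assumed).
        path = f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_setspeed"
        with open(path, "w") as f:
            f.write(str(khz))

    @contextlib.contextmanager
    def low_power_phase(low_khz=800_000, high_khz=2_400_000):
        # Scale down for the duration of a communication-bound phase, then restore.
        set_cpu_khz(low_khz)
        try:
            yield
        finally:
            set_cpu_khz(high_khz)

    # usage, with exchange_boundaries() standing in for an application communication phase:
    # with low_power_phase():
    #     exchange_boundaries()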
Abstract:
This project proposes an application for the automatic calibration of P-system models. To this end, we first study P-system models and the procedure researchers follow to develop this kind of model. A first serial solution to the problem is developed and its weak points are analysed. A parallel version is then proposed that significantly improves the execution time while maintaining high efficiency and scalability.
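Automatic calibration of this kind typically scores many candidate parameter sets against observed data, and the candidates are independent of one another, which is what makes a parallel version attractive. The Python sketch below shows that pattern; simulate_model, the observed series and the error measure are hypothetical stand-ins for the P-system simulator.

    import numpy as np
    from multiprocessing import Pool

    OBSERVED = np.arange(10) * 0.7           # made-up observed series to fit against

    def simulate_model(params):
        # Placeholder for running the P-system model with the given parameters.
        return params["rate"] * np.arange(10)

    def score(params):
        # Sum-of-squares error between the simulated and observed series.
        err = float(np.sum((simulate_model(params) - OBSERVED) ** 2))
        return err, params

    if __name__ == "__main__":
        candidates = [{"rate": r} for r in np.linspace(0.1, 2.0, 200)]
        with Pool() as pool:                 # evaluate candidate parameter sets in parallel
            results = pool.map(score, candidates)
        best_err, best_params = min(results, key=lambda t: t[0])
        print(best_params, best_err)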