63 results for XYZ compliant parallel mechanism
Abstract:
In a previous paper [Hidalgo et al., Phys. Rev. Lett. 103, 118001 (2009)] it was shown that square particles deposited in a silo tend to align with a diagonal parallel to gravity, giving rise to a deposit with very particular properties. Here we explore, both experimentally and numerically, the effect of the filling mechanism on these properties. In particular, we modify the volume fraction of the initial configuration from which the grains are deposited. Starting from a very dilute case, increasing the volume fraction results in an enhancement of the disorder in the final deposit, characterized by a decrease of the final packing fraction and a reduction of the number of particles oriented with their diagonal in the direction of gravity. However, for very high initial volume fractions, the final packing fraction increases again. This result implies that two deposits with the same final packing fraction can be obtained from very different initial conditions. The structural properties of such deposits are analyzed, revealing that, although the final volume fraction is the same, their micromechanical properties notably differ.
Abstract:
Pérez-Castrillo and Wettstein (2002) and Veszteg (2004) propose the use of a multibidding mechanism for situations where agents have to choose a common project. Examples are decisions involving public goods (or public "bads"). We report experimental results to test the practical tractability and effectiveness of the multibidding mechanism in environments where agents hold private information concerning their valuation of the projects. The mechanism performed quite well in the laboratory: it provided the ex post efficient outcome in roughly three quarters of the cases across the treatments; moreover, most of the subject pool formed their bids according to the theoretical bidding behavior.
Abstract:
This paper shows how a high-level matrix programming language may be used to perform Monte Carlo simulation, bootstrapping, estimation by maximum likelihood and GMM, and kernel regression in parallel on symmetric multiprocessor computers or clusters of workstations. Parallelization is implemented so that an investigator may use the programs without any knowledge of parallel programming. A bootable CD that allows rapid creation of a cluster for parallel computing is introduced. Examples show that parallelization can lead to important reductions in computational time. A detailed discussion of how the Monte Carlo problem was parallelized is included as an example for learning to write parallel programs for Octave.
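The key property the abstract relies on is that Monte Carlo replications are independent, so they can be farmed out to workers with no communication until the final reduction. A minimal Python sketch (not the paper's Octave code) of this split-run-pool pattern, using a toy pi estimator as the simulation:

```python
import random
from multiprocessing import Pool

def mc_chunk(args):
    """One independent chunk of Monte Carlo draws: count hits in the
    quarter unit circle. Each worker gets its own seed."""
    n_draws, seed = args
    rng = random.Random(seed)
    return sum(1 for _ in range(n_draws)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

def parallel_pi(n_draws=100_000, n_workers=4):
    """Split the draws into independent chunks, run them in parallel,
    and pool the hit counts into a single estimate of pi."""
    chunk = n_draws // n_workers
    tasks = [(chunk, seed) for seed in range(n_workers)]
    with Pool(n_workers) as pool:
        hits = sum(pool.map(mc_chunk, tasks))
    return 4.0 * hits / (chunk * n_workers)

if __name__ == "__main__":
    print(parallel_pi())
```

The same structure applies to bootstrapping and kernel regression: only the per-chunk work function changes, while the scatter/gather skeleton stays the same.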
Abstract:
This note describes ParallelKnoppix, a bootable CD that allows creation of a Linux cluster in very little time. An experienced user can create a cluster ready to execute MPI programs in less than 10 minutes. The computers used may be heterogeneous machines, of the IA-32 architecture. When the cluster is shut down, all machines except one are in their original state, and the last can be returned to its original state by deleting a directory. The system thus provides a means of using non-dedicated computers to create a cluster. An example session is documented.
Abstract:
The paper presents a foundation model for Marxian theories of the breakdown of capitalism based on a new falling rate of profit mechanism. All of these theories are based on one or more of "the historical tendencies": a rising capital-wage bill ratio, a rising capitalist share and a falling rate of profit. The model is a foundation in the sense that it generates these tendencies in the context of a model with a constant subsistence wage. The newly discovered generating mechanism is based on neo-classical reasoning for a model with land. It is non-Ricardian in that land augmenting technical progress can be unboundedly rapid. Finally, since the model has no steady state, it is necessary to use a new technique, Chaplygin's method, to prove the result.
Abstract:
See the abstract in the file attached at the beginning of the research work.
Abstract:
In the presence of cost uncertainty, limited liability introduces the possibility of default in procurement, with its associated bankruptcy costs. When financial soundness is not perfectly observable, we show that incentive compatibility implies that financially less sound contractors are selected with higher probability in any feasible mechanism. Informational rents are associated with unsound financial situations. By selecting the financially weakest contractor, stronger price competition (auctions) may not only increase the probability of default but also expected rents. Thus, weak conditions are sufficient for auctions to be suboptimal. In particular, we show that pooling firms with higher assets may reduce the cost of procurement even when default is costless for the sponsor.
Abstract:
We study simply-connected irreducible non-locally symmetric pseudo-Riemannian Spin(q) manifolds admitting parallel quaternionic spinors.
Abstract:
The project funded the technical support needed to develop computerised materials for the theoretical and practical activities of the compulsory course "Percepció i Atenció" (Perception and Attention) in the Psychology degree (now also part of the Bachelor's degree). The materials developed cover different points of the course syllabus, namely: a demonstration of the shadowing technique for the analysis of focused attention; attentional blink in rapid serial visual presentation (RSVP) streams; covert shifts of attention and the inhibition-of-return mechanism; effects of filtering on the perception of speech and music; auditory illusions and the principles of organisation of complex auditory information; and the categorical perception of speech sounds and the continuous nature of lexical processing (the gating paradigm). For all activities with linguistic content, two equivalent versions, Catalan and Spanish, were developed so that students could do the practical in their dominant language. In the first phase of the project, during the 2006-07 academic year, the materials and the programming of the different practicals were prepared, and some problems were identified and subsequently solved. In 2007-08 all the practical activities were made accessible to students (Moodle platform, Campus Virtual), and their operation, assessed by the students through questionnaires, was rated satisfactory in more than 95% of cases (the only problems detected were related to the characteristics of the users' computers and the browser used to access the materials).
The students' overall assessment of the activities was positive and, over their continued use during the 2008-09 and 2009-10 academic years, we observed growing participation (voluntary access to the activities) and better use of the information presented, which translated into improved scores in the course assessments.
Abstract:
In this paper we consider a representative a priori unstable Hamiltonian system with 2+1/2 degrees of freedom, to which we apply the geometric mechanism for diffusion introduced in Delshams et al., Mem. Amer. Math. Soc. 2006, and generalized in Delshams and Huguet, Nonlinearity 2009, and provide explicit, concrete and easily verifiable conditions for the existence of diffusing orbits. The simplification of the hypotheses allows us to perform the computations along the proof explicitly, which helps to present the geometric mechanism of diffusion in an easily understandable way. In particular, we fully describe the construction of the scattering map and the combination of two types of dynamics on a normally hyperbolic invariant manifold.
Abstract:
In the current environment, several branches of science need to rely on high-performance computing to obtain results in a relatively short time. This is mainly due to the high volume of information that must be processed and to the computational cost those calculations demand. Performing this processing in a distributed and parallel way shortens the waiting time for results and thus enables earlier decision-making. To support this, two widely used programming models exist: message passing, through libraries based on the MPI standard, and shared memory, using OpenMP. Hybrid applications are those that combine both models in order to exploit the specific strengths of each kind of parallelism. Unfortunately, practice has shown that combining these models does not necessarily guarantee an improvement in application behaviour. An analysis of the factors that influence their performance would therefore help when implementing such applications, and would also be a first step towards predicting their behaviour. Additionally, it would provide a way to determine which application parameters to modify in order to improve performance. In the present work we propose to define a methodology for identifying performance factors in hybrid applications and, accordingly, to identify some of the factors that influence their performance.
Abstract:
Performance prediction and application behavior modeling have been the subject of extensive research aiming to estimate application performance with acceptable precision. A novel approach to predict the performance of parallel applications is based on the concept of Parallel Application Signatures, which consists of extracting an application's most relevant parts (phases) and the number of times they repeat (weights). By executing these phases on a target machine and multiplying each phase's execution time by its weight, an estimate of the application's total execution time can be made. One of the problems is that the performance of an application depends on the program's workload. Every type of workload affects differently how an application performs in a given system, and so affects the signature's execution time. Since the workloads used in most scientific parallel applications have well-known dimensions and data ranges, and the behavior of these applications is mostly deterministic, a model of how the program's workload affects its performance can be obtained. We create a new methodology to model how a program's workload affects the parallel application signature. Using regression analysis, we are able to generalize each phase's execution time and weight functions to predict an application's performance on a target system for any type of workload within a predefined range. We validate our methodology using a synthetic program, benchmark applications and well-known real scientific applications.
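The prediction scheme in the abstract reduces to two steps: fit each phase's execution time as a function of workload, then sum time-times-weight over the phases. A hypothetical sketch for a single phase (the measurements, the quadratic model, and the workload values are all invented for illustration, not taken from the paper):

```python
import numpy as np

# Hypothetical measurements: execution time (seconds) of one application
# phase at several workload sizes on the target machine.
workloads = np.array([100.0, 200.0, 400.0, 800.0])   # e.g. problem dimension
phase_time = np.array([0.4, 1.7, 6.9, 27.5])         # measured phase times

# Fit t(w) = a*w^2 + b*w + c by least-squares regression, generalizing the
# phase's timing to unseen workloads within the measured range.
coeffs = np.polyfit(workloads, phase_time, deg=2)
phase_model = np.poly1d(coeffs)

def predict_total(workload, weight):
    """Signature-style estimate: modeled phase time at this workload,
    multiplied by the number of times the phase repeats (its weight)."""
    return weight * float(phase_model(workload))

# Estimated total time if this phase repeats 10 times at workload 600.
print(predict_total(600, 10))
```

A full signature would carry one such fitted model per phase (and per weight, when the repeat count also depends on the workload), summing their contributions.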
Abstract:
This paper proposes a parallel architecture for estimating the motion of an underwater robot. It is well known that image processing requires a huge amount of computation, mainly at the low-level processing stage, where the algorithms deal with a great amount of data. In a motion estimation algorithm, correspondences between two images have to be solved at the low level. In underwater imaging, normalised correlation can be a solution in the presence of non-uniform illumination. Due to its regular processing scheme, a parallel implementation of the correspondence problem can be an adequate approach to reduce the computation time. Taking into consideration the complexity of the normalised correlation criterion, a new approach using the parallel organisation of every processor in the architecture is proposed.
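Normalised correlation scores a candidate window against a template after subtracting each patch's mean and dividing by the product of their norms, which is what makes it robust to the non-uniform illumination mentioned above; and since every window is scored independently, the search parallelises naturally. A minimal sequential NumPy sketch (not the paper's architecture):

```python
import numpy as np

def ncc(patch, template):
    """Normalised correlation of two equally sized patches. Invariant to
    affine illumination changes (gain and offset) on either patch."""
    a = patch.astype(float) - patch.mean()
    b = template.astype(float) - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0.0:          # flat patch: no structure to correlate
        return 0.0
    return float((a * b).sum() / denom)

def best_match(image, template):
    """Exhaustive low-level search: slide the template over the image and
    keep the position with the highest score. Each window is independent,
    which is what makes the per-window work easy to distribute."""
    th, tw = template.shape
    ih, iw = image.shape
    best, best_pos = -2.0, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            s = ncc(image[y:y + th, x:x + tw], template)
            if s > best:
                best, best_pos = s, (y, x)
    return best_pos, best
```

The illumination invariance is easy to check: scaling and offsetting a patch (`2 * patch + 5`) leaves its score against the original unchanged at 1.0.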
Abstract:
How many times a given process is preempted, either voluntarily or involuntarily, is an important threat to a computer's process throughput. When running CPU-bound processes on a multi-core system without an actual grid engine of the kind commonly found on Grid clusters, their performance and stability are directly related to their correct implementation and to the reliability of the system, which is, to an extent, an important caveat that is most of the time difficult to detect. Context switching is time-consuming. Thus, if we could develop a tool capable of detecting context switches and gathering data from every single one performed, we would be able to study this data and present results that pinpoint their main cause.
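On Linux, the kernel already maintains per-process counters for exactly the two kinds of preemption the abstract distinguishes, exposed in `/proc/<pid>/status`. A minimal Python sketch (not the authors' tool, and Linux-only) that reads them:

```python
import os

def context_switches(pid=None):
    """Return the voluntary and involuntary context-switch counters that
    the Linux kernel exposes per process in /proc/<pid>/status."""
    pid = pid if pid is not None else os.getpid()
    counts = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            # Lines look like: "voluntary_ctxt_switches:\t150"
            if line.startswith(("voluntary_ctxt_switches",
                                "nonvoluntary_ctxt_switches")):
                key, value = line.split(":")
                counts[key] = int(value)
    return counts

print(context_switches())
```

Sampling these counters around a workload (before and after, or periodically) gives the per-switch data a profiling tool would aggregate; attributing each switch to a cause requires finer-grained tracing than this sketch provides.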