92 results for design rules
Abstract:
A challenge when running applications on a cluster is to improve performance while using resources efficiently, and this challenge grows in a distributed environment. With this in mind, we propose a set of rules for carrying out the computation on each node, based on an analysis of the applications' computation and communication. We analyze a cell-mapping scheme and a method for scheduling the execution order based on priority, where border cells have a higher priority than internal cells. The experiments show the overlap of the internal computation with the communication of the border cells, yielding results where the speedup increases and efficiency levels remain above 85%. Finally, gains in execution time are obtained, leading to the conclusion that an overlapping scheme can indeed be designed that allows SPMD applications to run efficiently on a cluster.
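The priority scheme described above can be sketched in a few lines. The following Python fragment is a hypothetical illustration (the function names and the thread-based "communication" are our assumptions, standing in for an MPI-style halo exchange): border cells are computed first, their exchange is launched asynchronously, and the internal cells are computed while the exchange is in flight.

```python
import concurrent.futures
import time

def compute(cells):
    # placeholder for the per-cell stencil computation
    return [c + 1 for c in cells]

def exchange_borders(borders):
    # placeholder for the halo exchange with neighbor nodes; simulated latency
    time.sleep(0.01)
    return borders

def step(border_cells, internal_cells):
    """One SPMD iteration: border cells have priority, and their
    communication overlaps the internal computation."""
    new_borders = compute(border_cells)            # high-priority cells first
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        comm = pool.submit(exchange_borders, new_borders)  # async exchange
        new_internal = compute(internal_cells)             # overlapped compute
        halo = comm.result()                               # wait for exchange
    return new_borders, new_internal, halo
```

When the internal computation takes at least as long as the exchange, the communication cost is fully hidden, which is the source of the efficiency figures reported above.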
Abstract:
In a procurement setting, this paper examines agreements between a buyer and one of the suppliers which would increase their joint surplus. The provisions of such agreements depend on the buyer's ability to design the rules of the final procurement auction. When the buyer has no such ability, their joint surplus can be increased by an agreement which grants to the preferred supplier a right-of-first-refusal on the lowest price offer from the other suppliers. When the buyer does have this ability, one agreement which maximizes their joint surplus includes a revelation game for the cost of the preferred supplier and a reserve price in the procurement auction based on that cost.
Abstract:
The objective of this paper is to correct and improve the results obtained by Van der Ploeg (1984a, 1984b) and used in the theoretical literature on feedback stochastic optimal control sensitive to constant exogenous risk aversion (see Jacobson, 1973, Karp, 1987 and Whittle, 1981, 1989, 1990, among others) or in the classic context of risk-neutral decision-makers (see Chow, 1973, 1976a, 1976b, 1977, 1978, 1981, 1993). More realistic and attractive, this new approach is set in the context of a time-varying endogenous risk aversion that is under the control of the decision-maker. It has strong qualitative implications for the agent's optimal policy over the entire planning horizon.
Abstract:
This study focuses on identification and exploitation processes among Finnish design entrepreneurs (i.e. self-employed industrial designers). More specifically, it strives to find out what design entrepreneurs do when they create new ventures, how venture ideas are identified, and how entrepreneurial processes are organized to identify and exploit such venture ideas in the given industrial context. Indeed, what do educated and creative individuals do when they decide to create new ventures, where do the venture ideas originally come from, and how are venture ideas identified and developed into viable business concepts that are introduced on the market? From an academic perspective, there is a need to increase our understanding of the interaction between the identification and exploitation of emerging ventures, in this and other empirical contexts. Rather than assuming that venture ideas are constant in time, this study examines how emerging ideas are adjusted to enable exploitation in dynamic market settings. It builds on insights from previous entrepreneurship process research. The interpretations from the theoretical discussion build on the assumption that the subprocesses of identification and exploitation interact and, moreover, are closely entwined with each other (e.g. McKelvie & Wiklund, 2004; Davidsson, 2005). This explanation challenges the common assumption that entrepreneurs first identify venture ideas and then exploit them (e.g. Shane, 2003). The assumption is that exploitation influences identification, just as identification influences exploitation. Based on interviews with design entrepreneurs and external actors (e.g. potential customers, suppliers and collaborators), it appears that the identification and exploitation of venture ideas are carried out in close interaction among a number of actors, rather than by entrepreneurs alone.
Given their available resources, design entrepreneurs prefer to focus on identification-related activities and to find external actors who take care of exploitation-related activities. The involvement of external actors may have a direct impact on decision-making and on various activities along the processes of identification and exploitation, which previous research does not particularly emphasize. For instance, Bhave (1994) suggests both operative and strategic feedback from the market, but does not explain how external parties are actually involved in decision-making and in carrying out various activities along the entrepreneurial process.
Abstract:
The objective of this paper is to empirically identify the logic behind short-term interest rate setting
Abstract:
We consider negotiations selecting one-dimensional policies. Individuals have single-peaked preferences and are impatient. Decisions arise from a bargaining game with random proposers and (super)majority approval, ranging from simple majority up to unanimity. The existence and uniqueness of stationary subgame perfect equilibrium is established, and its explicit characterization provided. We supply an explicit formula to determine the unique alternative that prevails, as impatience vanishes, for each majority. As an application, we examine the efficiency of majority rules. For symmetric distributions of peaks, unanimity is the unanimously preferred majority rule. For asymmetric populations, the rules maximizing social surplus are characterized.
Abstract:
The objective of this study is to empirically identify the monetary policy rules pursued in individual EU countries before and after the launch of the European Monetary Union. In particular, we estimate an augmented version of the Taylor rule (TR) for 25 EU countries over two periods (1992-1998, 1999-2006). While single-equation estimation methods are used to identify the policy rules of individual central banks, a dynamic panel setting is employed for the rule of the European Central Bank. We find that most central banks did follow some interest rate rule, but its form usually differed from the original TR (which proposes that the domestic interest rate responds only to the domestic inflation rate and the output gap). Crucial features of the policy rules in many countries were interest rate smoothing and a response to the foreign interest rate. Responses to domestic macroeconomic variables were missing from the rules of countries with inflexible exchange rate regimes, whose rules consisted of mimicking foreign interest rates. While we find a response to long-term interest rates and the exchange rate in the rules of some countries, the importance of monetary growth and asset prices is generally negligible. The Taylor principle (the response of interest rates to the domestic inflation rate must exceed unity as a necessary condition for achieving price stability) is confirmed only in large economies and in economies troubled by unsustainable inflation rates. Finally, deviations of the actual interest rate from the rule-implied target rate can be interpreted as policy shocks (these deviations often coincided with turbulent periods).
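For reference, an augmented Taylor rule with interest rate smoothing and a foreign-rate term, in generic textbook notation (the symbols below are standard ones, not taken from the paper), typically reads:

```latex
i_t = \rho\, i_{t-1} + (1-\rho)\left[\alpha + \beta\,(\pi_t - \pi^{*}) + \gamma\, y_t + \delta\, i^{f}_t\right] + \varepsilon_t
```

where $i_t$ is the short-term policy rate, $\pi_t - \pi^{*}$ the inflation gap, $y_t$ the output gap, $i^{f}_t$ the foreign interest rate, and $\rho$ the smoothing parameter. In this notation the Taylor principle requires the long-run inflation response $\beta$ to exceed one.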
Abstract:
This paper has three objectives. First, it aims at revealing the logic of interest rate setting pursued by the monetary authorities of 12 new EU members. Estimating an augmented Taylor rule, we find that this setting was not always consistent with the official monetary policy. Second, we seek to shed light on the inflation process of these countries. To this end, we estimate an open economy Phillips curve (PC). Our main finding is that inflation rates were driven not only by backward persistence but also by a forward-looking component. Finally, we assess the viability of the existing monetary arrangements for price stability. The conditional inflation variance obtained from a GARCH estimation of the PC is used for this purpose. We conclude that inflation targeting is preferable to an exchange rate peg because it allowed the inflation rate to decrease and anchored its volatility.
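The backward- and forward-looking components mentioned above correspond to the two inflation terms of a hybrid open-economy Phillips curve; a generic specification (again in standard notation, not the paper's own) is:

```latex
\pi_t = \lambda\, \pi_{t-1} + (1-\lambda)\, E_t \pi_{t+1} + \kappa\, y_t + \theta\, \Delta e_t + \epsilon_t
```

where $\lambda$ captures backward persistence, $E_t \pi_{t+1}$ is the forward-looking expectation, $y_t$ the output gap, and $\Delta e_t$ the open-economy (exchange rate) term.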
Abstract:
In the presence of cost uncertainty, limited liability introduces the possibility of default in procurement, with its associated bankruptcy costs. When financial soundness is not perfectly observable, we show that incentive compatibility implies that financially less sound contractors are selected with higher probability in any feasible mechanism. Informational rents are associated with unsound financial situations. By selecting the financially weakest contractor, stronger price competition (auctions) may not only increase the probability of default but also expected rents. Thus, weak conditions are sufficient for auctions to be suboptimal. In particular, we show that pooling firms with higher assets may reduce the cost of procurement even when default is costless for the sponsor.
Abstract:
The goal of this paper is to reexamine the optimal design and efficiency of loyalty rewards in markets for final consumption goods. While the literature has emphasized the role of loyalty rewards as endogenous switching costs (which distort the efficient allocation of consumers), in this paper I analyze the ability of alternative designs to foster consumer participation and increase total surplus. First, the efficiency of loyalty rewards depends on their specific design. A commitment to the price of repeat purchases can involve substantial efficiency gains by reducing price-cost margins. However, discount policies imply higher future regular prices and are likely to reduce total surplus. Second, firms may prefer to set up inefficient rewards (discounts), especially in circumstances where a commitment to the price of repeat purchases triggers Coasian dynamics.
Abstract:
We examine whether and how the main central banks responded to episodes of financial stress over the last three decades. We employ a new methodology for the estimation of monetary policy rules that allows for time-varying response coefficients and corrects for endogeneity. This flexible framework, applied to the U.S., U.K., Australia, Canada and Sweden together with a new financial stress dataset developed by the International Monetary Fund, allows us not only to test whether the central banks responded to financial stress, but also to detect the periods and types of stress that were the most worrying for monetary authorities and to quantify the intensity of the policy response. Our findings suggest that central banks often change policy
Abstract:
The idea of ensuring a guarantee (a minimum amount of the resources) to each agent has recently acquired great relevance, in both social and political terms. Furthermore, the notion of solidarity has frequently been invoked in redistribution problems to establish that any increment of the resources should be equally distributed, taking into account some relevant characteristics. In this paper, we combine these two general concepts, guarantee and solidarity, to characterize the uniform rules in bankruptcy problems (the Constrained Equal Awards and Constrained Equal Losses rules).
Keywords: Constrained Equal Awards, Constrained Equal Losses, Lower bounds, Bankruptcy problems, Solidarity.
JEL classification: C71, D63, D71.
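The two uniform rules characterized here have well-known closed forms: Constrained Equal Awards gives each claimant min(c_i, λ) and Constrained Equal Losses gives max(0, c_i − λ), with λ chosen in each case so that the awards exhaust the estate. A minimal Python sketch (function names are ours; a simple bisection finds λ):

```python
def cea(claims, estate):
    """Constrained Equal Awards: award min(c_i, lam), with lam chosen
    by bisection so that total awards equal the estate."""
    lo, hi = 0.0, max(claims)
    for _ in range(100):
        lam = (lo + hi) / 2
        if sum(min(c, lam) for c in claims) < estate:
            lo = lam          # awards too small: raise the cap
        else:
            hi = lam
    return [min(c, lam) for c in claims]

def cel(claims, estate):
    """Constrained Equal Losses: award max(0, c_i - lam), with lam the
    common loss chosen so that total awards equal the estate."""
    lo, hi = 0.0, max(claims)
    for _ in range(100):
        lam = (lo + hi) / 2
        if sum(max(0.0, c - lam) for c in claims) > estate:
            lo = lam          # awards too large: raise the common loss
        else:
            hi = lam
    return [max(0.0, c - lam) for c in claims]
```

For claims (100, 200, 300) and an estate of 300, CEA awards (100, 100, 100) while CEL awards (0, 100, 200), illustrating how the two rules sit at opposite ends of the redistribution spectrum.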
Abstract:
Given the urgency of a new paradigm in wireless digital transmission that should allow for higher bit rates, lower latency and tighter delay constraints, it has been proposed to investigate the fundamental building blocks that, at the circuit/device level, will drive the change towards a more efficient network architecture, with high capacity, higher bandwidth and a more satisfactory end-user experience. At the core of each transceiver there are inherently analog devices capable of providing the carrier signal: the oscillators. It is strongly believed that many limitations in today's communication protocols could be relieved by permitting high carrier frequency radio transmission and by having some degree of reconfigurability. This led us to study distributed oscillator architectures that work in the microwave range and possess wideband tuning capability. As microwave oscillators are essentially nonlinear devices, a full nonlinear analysis, synthesis, and optimization had to be considered for their implementation. Consequently, the most used nonlinear numerical techniques in commercial EDA software have been reviewed. An application of these techniques has been shown by considering a system of three coupled oscillators (a "triple push" oscillator) in which the stability of the various oscillating modes has been studied. Provided that a certain phase distribution is maintained among the oscillating elements, this topology permits a rise in the output power of the third harmonic; nevertheless, due to circuit symmetry, "unwanted" oscillating modes coexist with the intended one. Starting with the necessary background on distributed amplification and distributed oscillator theory, the design of a four-stage reverse-mode distributed voltage-controlled oscillator (DVCO) using lumped elements has been presented.
All the design steps have been reported and, for the first time, a method for an optimized design with reduced variations in the output power has been presented. Ongoing work is devoted to modeling a wideband DVCO and to implementing a frequency divider.
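The phase condition behind the triple-push topology is easy to verify numerically: with the three oscillators offset by 120°, the fundamental and second-harmonic contributions cancel while the third harmonics add in phase. A small illustrative check (the function name is ours, and this phasor sum is only the ideal-symmetry case, not the full nonlinear stability analysis of the thesis):

```python
import cmath

def combined_harmonic(n, phases):
    """Phasor sum of the n-th harmonic contributions of unit oscillators
    whose fundamentals are offset by the given phases (radians)."""
    return sum(cmath.exp(1j * n * p) for p in phases)

# ideal triple-push phase distribution: 0, 120 and 240 degrees
phases = [0.0, 2 * cmath.pi / 3, 4 * cmath.pi / 3]
# n = 1 and n = 2 sum to zero (cancellation); n = 3 sums to 3 (in phase)
```

This is why maintaining the 120° phase distribution is essential: any "unwanted" mode that breaks the symmetry restores power at the fundamental and degrades the third-harmonic output.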
Abstract:
This case study introduces our continuous work to enhance the virtual classroom in order to provide faculty and students with an environment that is open to their needs, compliant with learning standards (and therefore compatible with other e-learning environments), and based on open source software. The result is a modular, sustainable and interoperable learning environment that can be adapted to different teaching and learning situations by incorporating the LMS integrated tools as well as wikis, blogs, forums and Moodle activities, among others.
Abstract:
This paper presents the design of an experimental study conducted with large groups using educational innovation methodologies at the Polytechnic University of Madrid. Specifically, we have chosen the course titled "History and Politics of Sports", which belongs to the Physical Activity and Sport Science Degree. This course was selected because its syllabus is basically theoretical and there are four large groups of freshman students who have no previous experience with a teaching-learning process based on educational innovation. It is hoped that the results of this research can be extrapolated to other courses with similar characteristics.