991 results for Aristizabal Londoño, Tatyana
Abstract:
This paper proposes an extended version of the basic New Keynesian monetary (NKM) model that incorporates revision processes for output and inflation data, in order to assess the importance of data revisions for the estimated monetary policy rule parameters and for the transmission of policy shocks. Our empirical evidence, based on a structural econometric approach, suggests that although the initial announcements of output and inflation are not rational forecasts of revised output and inflation data, ignoring the presence of poorly behaved revision processes may not be a serious drawback in the analysis of monetary policy in this framework. However, the transmission of inflation-push shocks is strongly affected by the treatment of data revisions; this is especially true when the nominal stickiness parameter is estimated taking data revision processes into account.
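The policy-rule setting described in this abstract can be sketched, in our own notation (not necessarily the paper's specification), as a Taylor-type rule that reacts to initial announcements rather than to revised data:

```latex
% Hypothetical illustration: \pi_t^0 and y_t^0 denote initially announced
% inflation and output; final data differ by revision terms r_t, which the
% paper finds need not be well behaved (i.e., not pure noise).
i_t = \rho\, i_{t-1} + (1-\rho)\left(\phi_\pi\, \pi_t^0 + \phi_y\, y_t^0\right) + \varepsilon_t,
\qquad
\pi_t^{\mathrm{final}} = \pi_t^0 + r_t^{\pi}, \quad
y_t^{\mathrm{final}} = y_t^0 + r_t^{y}.
```

If the revision terms are forecastable from the initial announcements, those announcements are not rational forecasts of the final data, which is the property the paper tests.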
Abstract:
[EN] The higher education regulation process in Europe, known as the Bologna Process, has involved many changes, mainly in relation to methodology and assessment. This paper describes the implementation of the new EU study plans at the Teacher Training College of Vitoria-Gasteiz; it is the first interdisciplinary paper written by the teaching staff involved, and it concerns the Teaching Profession module, the first module in the structure of the new plans. Coordination among teaching staff is one of the main lines of work in the Bologna Process and is essential for developing the right skills and maximising the role of students as active participants in learning. The use of active, interdisciplinary methodologies has opened up a new dimension in universities, requiring the elimination of the formerly compartmentalised, individual structure and prompting a search for new areas of exchange that make it possible to develop students' training jointly.
Abstract:
[ES] This article presents the results of a study on the work of Mariasun Landa, carried out at the University Teacher Training College of Vitoria-Gasteiz, with the aim of detecting the presence of gender in the work of this Basque author, through students of the Language and Literature Didactics course and teachers who are experts on the author. Our research seeks to determine the extent to which the presence of the author and her feminine imaginary is subsumed in the readings carried out by the students under investigation.
Abstract:
As a necessary condition for the validity of the present value model, the price-dividend ratio must be stationary. However, during several significant market episodes prices appear to drift apart from dividends, while in other episodes prices anchor back to dividends. This paper investigates the stationarity of this ratio in the context of a Markov-switching model à la Hamilton (1989) in which an asymmetric speed of adjustment towards a unique attractor is introduced. A three-regime model displays the best regime identification and reveals that the first part of the 1990s boom (1985-1995) and the post-war period are characterized by a stationary state featuring a slow reverting process towards a relatively high attractor. Interestingly, the latter part of the 1990s boom (1996-2000), characterized by a growing price-dividend ratio, is entirely attributed to a stationary regime featuring a fast reverting process towards the attractor. Finally, the post-Lehman Brothers episode of the subprime crisis can be classified as a temporary nonstationary regime.
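The regime structure described in this abstract can be illustrated with a small simulation, not the paper's estimation code: a mean-reverting process whose speed of adjustment depends on a hidden Markov regime, in the spirit of Hamilton (1989). All parameter values below are hypothetical, chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

mu = 3.0                         # common attractor (log price-dividend ratio)
rho = [0.98, 0.80, 1.00]         # per-regime persistence: slow, fast, unit root
sigma = [0.02, 0.02, 0.05]       # per-regime shock volatility (hypothetical)
P = np.array([[0.95, 0.04, 0.01],   # regime transition probabilities
              [0.10, 0.85, 0.05],
              [0.05, 0.05, 0.90]])

T = 500
s = np.zeros(T, dtype=int)       # latent regime path
y = np.full(T, mu)               # simulated log price-dividend ratio
for t in range(1, T):
    s[t] = rng.choice(3, p=P[s[t - 1]])
    # AR(1) around the attractor; rho = 1 gives the nonstationary regime
    y[t] = mu + rho[s[t]] * (y[t - 1] - mu) + sigma[s[t]] * rng.normal()

print(len(y), int(s.max()))
```

Estimating such a model from data (rather than simulating it) would additionally require filtering the latent regime path, e.g. with the Hamilton filter.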
Abstract:
The surface electronic structure of the narrow-gap semiconductor BiTeI exhibits a large Rashba splitting which strongly depends on the surface termination. Here we report on a detailed investigation of the surface morphology and electronic properties of cleaved BiTeI single crystals by scanning tunneling microscopy, photoelectron spectroscopy (ARPES, XPS), electron diffraction (SPA-LEED) and density functional theory calculations. Our measurements confirm a previously reported coexistence of Te- and I-terminated surface areas.
Abstract:
Among the nitrogen oxides, N2O is a highly harmful greenhouse gas. Given its contaminating potential, it is important to implement processes capable of reducing its emission, as well as that of NOx. Traditionally, catalysts based on noble metals have been employed, but their main disadvantage is their high cost. There has therefore always been interest in using other types of catalysts and metals in this reaction system. In this context, the present dissertation sought to synthesize Cu-AlCO3 hydrotalcite-type catalyst precursors and to evaluate their performance in the reduction of NO by CO, aiming to improve activity and selectivity to N2. Several synthesis parameters and different compositions were studied. The most influential synthesis parameters were the H2O/(Al+Cu) molar ratio and the drying temperature of the solid, whose best values were 434 and 25 °C, respectively. Two solids were tested: the first composed of an almost pure hydrotalcite phase and the second showing a clear mixture of hydrotalcite and malachite phases. Thermal and chemical analyses revealed the presence of the malachite phase in both materials, at 14% and 40%, respectively. X-ray diffraction results indicated the presence of the CuO phase in the catalysts obtained by calcining the hydrotalcite-type materials, but Raman spectroscopy revealed the presence of Cu2O in the catalyst obtained from the material with the greater phase mixture. Redox cycles showed an improvement in catalyst reducibility after one oxidation-reduction cycle. In addition, the impact of thermal aging at 900 °C for 12 h on catalyst performance was studied.
According to the catalytic test results, the best performance was achieved by the aged catalysts; however, the catalyst derived from the purer precursor performed better, both fresh and aged, in terms of lower N2O yield. A comparison with noble-metal-based catalysts showed good performance by the copper-based catalysts, with the advantage of lower N2O emission at lower temperatures.
Abstract:
The effects of commercial fishing with crab pots on the physical condition of the snow crab (Chionoecetes opilio) and southern Tanner crab (C. bairdi) were investigated in the Bering Sea and in Russian waters of the Sea of Okhotsk. In crabs that were subjected to pot hauling, histopathological investigations revealed gas embolism and deformation of the gill lamellae. Crab vitality, which was characterized subjectively through observation of behavioral responses, depended not only on the number of pot hauls but also on the time between hauls. Immediately after repeated pot hauls at short time intervals (≤3 days), we observed a rapid decline in crab vitality. When hauling intervals were increased to >3 days, the condition of the crabs did not change significantly. After repeated pot hauls, the concentration of the respiratory pigment hemocyanin ([Hc]) was often lower in the hemolymph of crabs than in that of freshly caught animals. Our research indicated that changes in [Hc] in crabs after repeated pot hauls were caused by the effects of decompression, not by starvation of crabs in pots or by exposure of crabs to air. We suggest that the decrease in [Hc] in the hemolymph of snow and southern Tanner crabs was a response to the adverse effects of decompression and air-bubble disease. The decrease in [Hc] in affected crabs may result from mechanisms that regulate internal pressure in damaged gills to optimize respiratory circulation.
Abstract:
In this paper we examine a number of admission control and scheduling protocols for high-performance web servers based on a 2-phase policy for serving HTTP requests. The first "registration" phase involves establishing the TCP connection for the HTTP request and parsing/interpreting its arguments, whereas the second "service" phase involves the service/transmission of data in response to the HTTP request. By introducing a delay between these two phases, we show that the performance of a web server can potentially be improved through the adoption of a number of scheduling policies that optimize the utilization of various system components (e.g. memory cache and I/O). In addition to its promise for improving the performance of a single web server, the delineation between the registration and service phases of an HTTP request may be useful for load balancing purposes on clusters of web servers. We are investigating the use of such a mechanism as part of the Commonwealth testbed being developed at Boston University.
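The 2-phase policy described in this abstract can be sketched as follows. This is our own toy illustration, not the paper's implementation: requests are first "registered" (connection accepted, arguments parsed) and queued; a later "service" pass reorders the queue so requests for the same object are served back-to-back, improving memory-cache utilization.

```python
from collections import deque

registered = deque()  # requests that completed the registration phase

def register(request_id, url):
    # Phase 1: accept the connection and parse the HTTP request.
    registered.append((request_id, url))

def service_batch():
    # Phase 2: drain the queue, grouping requests by URL so the cached
    # copy of each object is reused across consecutive responses.
    batch = sorted(registered, key=lambda r: r[1])
    registered.clear()
    return [rid for rid, _ in batch]

register(1, "/b.html"); register(2, "/a.html"); register(3, "/b.html")
print(service_batch())   # -> [2, 1, 3]: both /b.html requests served consecutively
```

The delay between the two phases is what creates the batch over which such cache-aware reordering (or cluster-level load balancing) can operate.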
Abstract:
The advent of virtualization and cloud computing technologies necessitates the development of effective mechanisms for the estimation and reservation of resources needed by content providers to deliver large numbers of video-on-demand (VOD) streams through the cloud. Unfortunately, capacity planning for the QoS-constrained delivery of a large number of VOD streams is inherently difficult as VBR encoding schemes exhibit significant bandwidth variability. In this paper, we present a novel resource management scheme to make such allocation decisions using a mixture of per-stream reservations and an aggregate reservation, shared across all streams to accommodate peak demands. The shared reservation provides capacity slack that enables statistical multiplexing of peak rates, while assuring analytically bounded frame-drop probabilities, which can be adjusted by trading off buffer space (and consequently delay) and bandwidth. Our two-tiered bandwidth allocation scheme enables the delivery of any set of streams with less bandwidth (or equivalently with higher link utilization) than state-of-the-art deterministic smoothing approaches. The algorithm underlying our proposed framework uses three per-stream parameters and is linear in the number of servers, making it particularly well suited for use in an online setting. We present results from extensive trace-driven simulations, which confirm the efficiency of our scheme especially for small buffer sizes and delay bounds, and which underscore the significant realizable bandwidth savings, typically yielding losses that are an order of magnitude or more below our analytically derived bounds.
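The two-tiered idea can be sketched numerically. The synthetic workload and the percentile choice below are our assumptions, not the paper's parameters: give each stream a per-stream base reservation, and pool the excess peak demand into a single shared reservation that statistically multiplexes simultaneous peaks.

```python
import numpy as np

rng = np.random.default_rng(1)
rates = rng.gamma(shape=2.0, scale=1.0, size=(50, 1000))  # 50 synthetic VBR streams

base = rates.mean(axis=1)                  # tier 1: per-stream reservations
excess = (rates - base[:, None]).clip(0)   # demand above each base rate
# tier 2: shared slack sized at the 99th percentile of the aggregate excess,
# leaving a bounded (1%) chance that aggregate demand exceeds the reservation
shared = np.percentile(excess.sum(axis=0), 99)

two_tier = base.sum() + shared
peak = rates.max(axis=1).sum()             # deterministic per-stream peak provisioning
print(two_tier < peak)                     # multiplexing reserves less total bandwidth
```

Because individual peaks rarely coincide, the shared slack is far smaller than the sum of per-stream peaks, which is the source of the bandwidth savings the abstract reports.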
Abstract:
We introduce Collocation Games as the basis of a general framework for modeling, analyzing, and facilitating the interactions between the various stakeholders in distributed systems in general, and in cloud computing environments in particular. Cloud computing enables fixed-capacity (processing, communication, and storage) resources to be offered by infrastructure providers as commodities for sale at a fixed cost in an open marketplace to independent, rational parties (players) interested in setting up their own applications over the Internet. Virtualization technologies enable the partitioning of such fixed-capacity resources so as to allow each player to dynamically acquire appropriate fractions of the resources for unencumbered use. In such a paradigm, the resource management problem reduces to that of partitioning the entire set of applications (players) into subsets, each of which is assigned to fixed-capacity cloud resources. If the infrastructure and the various applications are under a single administrative domain, this partitioning reduces to an optimization problem whose objective is to minimize the overall deployment cost. In a marketplace, in which the infrastructure provider is interested in maximizing its own profit, and in which each player is interested in minimizing its own cost, it should be evident that a global optimization is precisely the wrong framework. Rather, in this paper we use a game-theoretic framework in which the assignment of players to fixed-capacity resources is the outcome of a strategic "Collocation Game". Although we show that determining the existence of an equilibrium for collocation games in general is NP-hard, we present a number of simplified, practically-motivated variants of the collocation game for which we establish convergence to a Nash Equilibrium, and for which we derive convergence and price of anarchy bounds. 
In addition to these analytical results, we present an experimental evaluation of implementations of some of these variants for cloud infrastructures consisting of a collection of multidimensional resources of homogeneous or heterogeneous capacities. Experimental results using trace-driven simulations and synthetically generated datasets corroborate our analytical results and also illustrate how collocation games offer a feasible distributed resource management alternative for autonomic/self-organizing systems, in which the adoption of a global optimization approach (centralized or distributed) would be neither practical nor justifiable.
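A best-response dynamic of the kind analyzed in this abstract can be sketched with a toy variant (our simplification, not the paper's general model): each player has a demand, each server has a fixed capacity and a fixed cost, colocated players split a server's cost in proportion to their demands, and players repeatedly migrate to the server that minimizes their own share.

```python
demands = {"a": 2, "b": 2, "c": 3, "d": 1}     # hypothetical players
capacity, server_cost, n_servers = 4, 8.0, 3   # hypothetical infrastructure

def share(assign, player, srv):
    # Cost share of `player` if it joins server `srv`, given everyone else.
    load = sum(demands[p] for p, s in assign.items() if s == srv and p != player)
    if load + demands[player] > capacity:
        return float("inf")                    # server would overflow
    return server_cost * demands[player] / (load + demands[player])

assign = {p: i % n_servers for i, p in enumerate(demands)}  # arbitrary start
changed = True
while changed:                                 # best-response until Nash equilibrium
    changed = False
    for p in demands:
        best = min(range(n_servers), key=lambda srv: share(assign, p, srv))
        if share(assign, p, best) < share(assign, p, assign[p]) - 1e-9:
            assign[p] = best
            changed = True

total = sum(share(assign, p, assign[p]) for p in demands)
print(assign, round(total, 2))
```

In this instance the dynamic converges to two fully packed servers; in general, as the abstract notes, even deciding whether an equilibrium exists is NP-hard, so convergence here reflects the restricted variant, not the general game.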
Abstract:
We propose Trade & Cap (T&C), an economics-inspired mechanism that incentivizes users to voluntarily coordinate their consumption of the bandwidth of a shared resource (e.g., a DSLAM link) so as to converge on what they perceive to be an equitable allocation, while ensuring efficient resource utilization. Under T&C, rather than acting as an arbiter, an Internet Service Provider (ISP) acts as an enforcer of what the community of rational users sharing the resource decides is a fair allocation of that resource. Our T&C mechanism proceeds in two phases. In the first, software agents acting on behalf of users engage in a strategic trading game in which each user agent selfishly chooses bandwidth slots to reserve in support of primary, interactive network usage activities. In the second phase, each user is allowed to acquire additional bandwidth slots in support of presumed open-ended need for fluid bandwidth, catering to secondary applications. The acquisition of this fluid bandwidth is subject to the remaining "buying power" of each user and by prevalent "market prices" – both of which are determined by the results of the trading phase and a desirable aggregate cap on link utilization. We present analytical results that establish the underpinnings of our T&C mechanism, including game-theoretic results pertaining to the trading phase, and pricing of fluid bandwidth allocation pertaining to the capping phase. Using real network traces, we present extensive experimental results that demonstrate the benefits of our scheme, which we also show to be practical by highlighting the salient features of an efficient implementation architecture.
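The second (capping) phase can be illustrated with a deliberately simplified sketch. This is our own illustration with made-up numbers, not the paper's pricing mechanism: after the trading phase fixes each user's reserved slots, the remaining "fluid" capacity under the aggregate cap is sold at a single market-clearing price, and each user buys as much as their leftover buying power allows.

```python
cap = 100.0                                      # aggregate cap on the link (Mb/s)
budget = {"u1": 10.0, "u2": 30.0, "u3": 60.0}    # leftover buying power per user
reserved = {"u1": 20.0, "u2": 10.0, "u3": 10.0}  # slots won in the trading phase

fluid_supply = cap - sum(reserved.values())      # capacity left to sell
price = sum(budget.values()) / fluid_supply      # market-clearing price per Mb/s
fluid = {u: b / price for u, b in budget.items()}

assert abs(sum(fluid.values()) - fluid_supply) < 1e-9  # supply exactly cleared
print({u: round(v, 1) for u, v in fluid.items()})      # -> {'u1': 6.0, 'u2': 18.0, 'u3': 36.0}
```

The key property this preserves from the abstract is that fluid allocations are driven jointly by each user's remaining buying power and by the aggregate cap, rather than by ISP-imposed arbitration.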