993 results for Lock-In
Abstract:
Abstract of Bazin et al. (2013): An accurate and coherent chronological framework is essential for the interpretation of climatic and environmental records obtained from deep polar ice cores. Until now, one common ice core age scale had been developed based on an inverse dating method (Datice), combining glaciological modelling with absolute and stratigraphic markers between 4 ice cores covering the last 50 ka (thousands of years before present) (Lemieux-Dudon et al., 2010). In this paper, together with the companion paper of Veres et al. (2013), we present an extension of this work back to 800 ka for the NGRIP, TALDICE, EDML, Vostok and EDC ice cores using an improved version of the Datice tool. The AICC2012 (Antarctic Ice Core Chronology 2012) chronology includes numerous new gas and ice stratigraphic links as well as improved evaluation of background and associated variance scenarios. This paper concentrates on the long timescales between 120 and 800 ka. In this framework, new measurements of δ18Oatm over Marine Isotope Stage (MIS) 11-12 on EDC and a complete δ18Oatm record of the TALDICE ice core permit us to derive additional orbital gas age constraints. The coherency of the different orbitally deduced ages (from δ18Oatm, δO2/N2 and air content) has been verified before implementation in AICC2012. The new chronology is now independent of other archives and shows only small differences, most of the time within the original uncertainty range calculated by Datice, when compared with the previous ice core reference age scale EDC3, the Dome F chronology, or with a comparison between speleothems and methane. For instance, the largest deviation between AICC2012 and EDC3 (5.4 ka) is obtained around MIS 12. Despite significant modifications of the chronological constraints around MIS 5, now independent of speleothem records in AICC2012, the date of Termination II is very close to the EDC3 one. Abstract of Veres et al. (2013): The deep polar ice cores provide reference records commonly employed in the global correlation of past climate events. However, temporal divergences reaching up to several thousand years (ka) exist between ice cores over the last climatic cycle. In this context, we introduce the Antarctic Ice Core Chronology 2012 (AICC2012), a new and coherent timescale developed for four Antarctic ice cores, namely Vostok, EPICA Dome C (EDC), EPICA Dronning Maud Land (EDML) and Talos Dome (TALDICE), alongside the Greenlandic NGRIP record. The AICC2012 timescale has been constructed using the Bayesian tool Datice (Lemieux-Dudon et al., 2010), which combines glaciological inputs and data constraints, including a wide range of relative and absolute gas and ice stratigraphic markers. We focus here on the last 120 ka, whereas the companion paper by Bazin et al. (2013) focuses on the interval 120-800 ka. Compared to previous timescales, AICC2012 presents an improved timing for the last glacial inception, respecting the glaciological constraints of all analyzed records. Moreover, with the addition of numerous new stratigraphic markers and an improved calculation of the lock-in depth (LID) based on δ15N data employed as the Datice background scenario, AICC2012 presents a slightly improved timing for the bipolar sequence of events over Marine Isotope Stage 3 associated with the seesaw mechanism, with maximum differences of about 600 yr with respect to the previous Datice-derived chronology of Lemieux-Dudon et al. (2010), hereafter denoted LD2010.
Our improved scenario confirms the regional differences in millennial-scale variability over the last glacial period: while the EDC isotopic record (events of triangular shape) displays peaks roughly at the same time as the NGRIP abrupt isotopic increases, the EDML isotopic record (events characterized by broader peaks or even extended periods of high isotope values) reached its isotopic maximum several centuries earlier. It is expected that future contributions of other long ice core records and of other types of chronological constraints to the Datice tool will lead to further refinements in the ice core chronologies beyond AICC2012. For the time being, however, we recommend that AICC2012 be used as the preferred chronology for the Vostok, EDC, EDML and TALDICE ice core records, both over the last glacial cycle (this study) and beyond (following Bazin et al., 2013). The ages for NGRIP in AICC2012 are virtually identical to those of GICC05 for the last 60.2 ka, whereas the ages beyond are independent of those in GICC05modelext (in the construction of AICC2012, GICC05modelext was included only via the background scenarios and not as age markers). As such, where issues of phasing between Antarctic records included in AICC2012 and NGRIP are involved, the NGRIP ages in AICC2012 should be used to avoid introducing false offsets. However, for issues involving only Greenland ice cores, there is not yet a strong basis for superseding GICC05modelext as the recommended age scale for Greenland ice cores.
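As a rough guide for readers unfamiliar with inverse dating methods, the following is a minimal sketch, in notation of our own choosing rather than the papers', of the kind of least-squares cost function a variational tool such as Datice minimizes, assuming Gaussian errors on both the background scenarios and the markers:

\[
J(x) = \sum_{k=1}^{K} \left(x_k - x_k^{b}\right)^{\top} B_k^{-1} \left(x_k - x_k^{b}\right) + \sum_{i=1}^{N} \frac{\left(g_i(x) - y_i\right)^2}{\sigma_i^2}
\]

Here \(x_k\) collects the correction functions for core \(k\) (accumulation rate, thinning function and lock-in depth), \(x_k^{b}\) and \(B_k\) are the corresponding background scenario and its covariance, and each marker \(i\) compares a modelled quantity \(g_i(x)\), such as an age at a given depth or an age difference between two cores, with an observation \(y_i\) of uncertainty \(\sigma_i\). Adding stratigraphic links or tightening the \(\sigma_i\), as AICC2012 does, directly tightens the resulting chronology.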
Abstract:
This paper proposes a general equilibrium model of a monocentric city based on Fujita and Krugman (1995). Two per-distance transport cost rates for the same good are introduced. The model assumes that the lower rate is available only at a few points on a line; these lower costs represent new transport facilities, such as high-speed motorways and railways. The finding is that new transport facilities connecting the city and its hinterlands strengthen the lock-in effect, i.e., the tendency of a city to remain where it is forever after being created. Furthermore, the effect intensifies with better agricultural technologies and a larger population in the economy. The relationship between indirect utility and population size has an inverted U-shape even when new transport facilities are used; however, the population size that maximizes indirect utility is smaller than that found in Fujita and Krugman (1995).
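The inverted U-shape can be made concrete with a toy numerical sketch (our own illustration, not the paper's model), in which indirect utility rises with population through agglomeration economies but falls through transport costs, so an interior population size maximizes it:

import numpy as np

# Toy indirect utility: agglomeration benefits grow like log(N), while
# average transport costs grow with city radius, roughly sqrt(N).
# Functional forms and parameters are illustrative assumptions only,
# not taken from the paper or from Fujita and Krugman (1995).
def indirect_utility(n, alpha=1.0, tau=0.05):
    return alpha * np.log(n) - tau * np.sqrt(n)

populations = np.linspace(1e2, 2e6, 200_000)
n_star = populations[np.argmax(indirect_utility(populations))]
print(f"Utility-maximizing population in the toy model: {n_star:,.0f}")

Lowering tau (cheaper transport, e.g. a new motorway) raises utility at every population size and shifts the maximizer, which is the comparative static the abstract summarizes.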
Abstract:
Cloud computing and, more particularly, private IaaS, is seen as a mature technology with a myriad of solutions to choose from. However, this disparity of solutions and products has instilled in potential adopters the fear of vendor and data lock-in. Several competing and incompatible interfaces and management styles have given even more voice to these fears. On top of this, cloud users might want to work with several solutions at the same time, an integration that is difficult to achieve in practice. In this paper, we propose a management architecture that tries to tackle these problems; it offers a common way of managing several cloud solutions, and an interface that can be tailored to the needs of the user. This management architecture is designed in a modular way, using a generic information model. We have validated our approach through the implementation of the components needed for this architecture to support a sample private IaaS solution: OpenStack.
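The "common way of managing several cloud solutions" suggests a driver (adapter) layer behind a uniform interface. A minimal Python sketch of that idea follows; the class and method names are our own hypothetical illustration rather than the paper's API, and the OpenStack adapter is a stub marking where real client-library calls would go.

from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Instance:
    id: str
    name: str
    state: str

class CloudDriver(ABC):
    """Uniform interface that each supported IaaS solution implements."""

    @abstractmethod
    def list_instances(self) -> list[Instance]: ...

    @abstractmethod
    def create_instance(self, name: str, flavor: str, image: str) -> Instance: ...

    @abstractmethod
    def delete_instance(self, instance_id: str) -> None: ...

class OpenStackDriver(CloudDriver):
    """Adapter translating the uniform interface into OpenStack calls.

    A real implementation would wrap an OpenStack client library; this
    stub only marks where those calls would go.
    """

    def list_instances(self) -> list[Instance]:
        raise NotImplementedError("wrap the OpenStack compute API here")

    def create_instance(self, name: str, flavor: str, image: str) -> Instance:
        raise NotImplementedError

    def delete_instance(self, instance_id: str) -> None:
        raise NotImplementedError

def clone_inventory(source: CloudDriver, target: CloudDriver) -> None:
    """Vendor-neutral logic written purely against the interface."""
    for inst in source.list_instances():
        target.create_instance(inst.name, flavor="default", image="default")

Because tools like clone_inventory depend only on CloudDriver, supporting another IaaS solution means writing one more adapter, which is precisely how such an architecture mitigates vendor lock-in.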
Abstract:
Cloud computing and, more particularly, private IaaS, is seen as a mature technology with a myriad of solutions to choose from. However, this disparity of solutions and products has instilled in potential adopters the fear of vendor and data lock-in. Several competing and incompatible interfaces and management styles have further amplified these fears. On top of this, cloud users might want to work with several solutions at the same time, an integration that is difficult to achieve in practice. In this Master Thesis I propose a management architecture that tries to solve these problems; it provides a generalized control mechanism for several cloud infrastructures, and an interface that can meet the requirements of the users. This management architecture is designed in a modular way, using a generic information model. I have validated the approach through the implementation of the components needed for this architecture to support a sample private IaaS solution: OpenStack.
Abstract:
The cloud computing paradigm has risen in popularity within industry and academia. Public cloud infrastructures are enabling new business models and helping to reduce costs. However, the desire to host a company's data and services on premises, and the need to abide by data protection laws, make private cloud infrastructures desirable, either to complement or even fully substitute public offerings. Unfortunately, a lack of standardization has prevented private infrastructure management solutions from developing adequately, and a myriad of different options has induced in customers the fear of technology lock-in. One of the causes of this problem is the misalignment between academic research and industry offerings, with the former focusing on studying idealized scenarios dissimilar from real-world situations, and the latter developing solutions without considering how they fit with common standards, or failing to disseminate their results at all. With the aim of solving this problem, I propose a modular management system for private cloud infrastructures focused on the applications rather than just the hardware resources. This management system follows the autonomic computing paradigm and is designed around a simple information model developed to be compatible with common standards. The model splits the environment into two views, which separate the concerns of each stakeholder while still allowing the physical environment to be related to the virtual machines deployed on top of it. In this model, cloud applications are classified into three broad types (Services, Big Data Jobs and Instance Reservations), so that the management system can exploit the characteristics of each type. The information model is complemented by a set of atomic, reversible and independent management actions, which determine the operations that can be performed on the environment and which are used to realize the environment's scalability. I also describe a management engine that, starting from the state of the environment and using the aforementioned set of actions, is in charge of resource placement. It is divided into two tiers: the Application Managers layer, which deals only with applications, and the Infrastructure Manager layer, which is responsible for the physical resources. This management engine follows a lifecycle with two phases, to better model the behavior of a real infrastructure. The placement problem is tackled during one phase (consolidation) by an integer programming solver, and during the other (online) by a purpose-built heuristic. Several tests have shown that this combined approach is superior to other strategies. Finally, the management system is coupled with monitoring and actuator architectures: the former collects information from the environment, while the latter is modular in design, able to interface with several technologies and to offer several modes of access.
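The two-phase placement engine can be illustrated with a small sketch (our own simplification: the thesis pairs an integer programming solver with a custom heuristic, whereas this sketch substitutes first-fit decreasing for the solver, and all names are hypothetical):

from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    capacity: int  # e.g. available RAM in GB
    vms: dict[str, int] = field(default_factory=dict)  # vm name -> demand

    def used(self) -> int:
        return sum(self.vms.values())

    def fits(self, demand: int) -> bool:
        return self.used() + demand <= self.capacity

def place_online(hosts: list[Host], vm: str, demand: int) -> Host | None:
    """Online phase: first-fit heuristic, cheap enough to run per arrival."""
    for host in hosts:
        if host.fits(demand):
            host.vms[vm] = demand
            return host
    return None  # no capacity left; a real engine would queue or reject

def consolidate(hosts: list[Host]) -> None:
    """Consolidation phase: repack all VMs with first-fit decreasing (FFD).

    Stand-in for the integer programming solver mentioned in the abstract;
    FFD tends to empty lightly loaded hosts so they can be powered down.
    """
    all_vms = sorted(
        ((vm, d) for h in hosts for vm, d in h.vms.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )
    for host in hosts:
        host.vms.clear()
    for vm, demand in all_vms:
        place_online(hosts, vm, demand)

hosts = [Host("h1", 16), Host("h2", 16)]
for i, demand in enumerate([4, 8, 2, 6, 3]):
    place_online(hosts, f"vm{i}", demand)
consolidate(hosts)
print({h.name: h.vms for h in hosts})

First-fit keeps online decisions cheap as requests arrive, while the periodic repack plays the role of the consolidation phase; an exact integer programming formulation would replace consolidate() and trade runtime for tighter packings.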
Abstract:
Although all constitutions include rights, and many of them include social rights, the truth is that some are more generous than others in this respect. But none comes close to the 1976 Constitution of the Portuguese Republic in the extent and detail of its catalogue of social, economic and cultural rights. The main theories on the origins of institutions have generated hypotheses that purport to explain the constitutionalization of this second generation of rights. However, these hypotheses fail to explain the process of constitutionalization of social rights in a fully convincing way. This is all the more true in cases such as that of Portugal, whose deviant character makes them even harder to explain. In this article, these theories and their respective hypotheses are tested against the Portuguese case, which is compared with the Spanish one whenever necessary. We pursue two goals with this exercise. On the one hand, we seek to identify the limitations of the dominant explanations, including the theories and hypotheses about the causal mechanisms responsible for the inclusion of social rights in constitutions. On the other hand, our purpose is to devise alternative explanations wherever the existing ones prove inadequate or insufficient.
Abstract:
Many European and American observers of the EC have criticized "intergovernmentalist" accounts for exaggerating the extent of member-state control over the process of European integration. This essay seeks to ground these criticisms in a "historical institutionalist" account that stresses the need to study European integration as a political process which unfolds over time. Such a perspective highlights the limits of member-state control over long-term institutional development, due to preoccupation with short-term concerns, the ubiquity of unintended consequences, and processes that "lock in" past decisions and make reassertions of member-state control difficult. Brief examination of the evolution of social policy in the EC suggests the limitations of treating the EC as an international regime facilitating collective action among essentially sovereign states. It is more useful to view integration as a "path-dependent" process that has produced a fragmented but still discernible "multitiered" European polity.
Abstract:
The proximate causes and processes involved in the loss of breeds are outlined. The path-dependent effect and Swanson's dominance effect are discussed in relation to lock-in of breed selection. These effects help to explain genetic erosion. It is shown that the extension of markets and economic globalisation have contributed significantly to the loss of breeds, especially livestock. The decoupling of animal husbandry from surrounding natural environmental conditions is further eroding the stock of genetic resources. Recent trends in animal husbandry raise serious sustainability issues, apart from animal welfare concerns.
Abstract:
SMEs with a weak internal R&D capacity tend to shy away from using external sources of technical expertise. This tendency deters providers of industrial modernization services from supporting such structurally weak SMEs. This paper examines how Japan's local technology centres - kosetsushi - remove this bottleneck and reach out to a significant proportion of SMEs with a weak R&D capacity in their localities. Kosetsushi centres sustain habitual interactions with client firms through 'low information gap' services solving immediate needs, and lead the clients onto a riskier and longer path toward innovation capacity building. This gives kosetsushi centres a position distinct from universities and consultancies in the regional innovation system. While long-term relationships between kosetsushi centres and their client firms can increase switching costs and produce lock-in effects, a case study of two kosetsushi centres illustrates the importance of 'low information gap' services, and the relational assets created thereby, to the modernization of SMEs with a weak internal R&D capacity. The paper calls for long-term commitment by the public sector if it addresses the issue through modernization services.
Abstract:
Strategic sourcing has increased in importance in recent years and now plays an important role in companies' planning. The current volatility in supply markets means companies face multiple challenges involving lock-in situations, supplier bankruptcies or supply security issues. In addition, their exposure can increase due to natural disasters, as witnessed recently in the form of bird flu, volcanic ash and tsunamis. Therefore, the primary focus of this study is risk management in the context of strategic sourcing. The study presents a literature review on sourcing covering the 15 years from 1998-2012, and considers 131 academic articles. The literature describes strategic sourcing as a strategic, holistic process for managing supplier relationships, with a long-term focus on adding value to the company and realising competitive advantage. Few studies have uncovered the real risk impact and status of risk management in strategic sourcing, and evaluation across countries and industries was limited, with the construction sector particularly under-researched. The methodology is founded on a qualitative study of twenty cases from the construction sector and electronics manufacturing industries across Germany and the United Kingdom. In considering risk management in the context of strategic sourcing, the thesis takes into account six dimensions that cover trends in strategic sourcing, theoretical and practical sourcing models, risk management, supply and demand management, critical success factors and strategic supplier evaluation. The study contributes in several ways. First, recent trends are traced and future needs identified across the research dimensions of countries, industries and companies. Second, it evaluates critical success factors in contemporary strategic sourcing. Third, it explores the application of theoretical and practical sourcing models in terms of effectiveness and sustainability. Fourth, based on the case study findings, a risk-oriented strategic sourcing framework and a model for strategic sourcing are developed. These are based on the validation of contemporary requirements and a critical evaluation of the existing situation, reflect the empirical findings, and lead to a structured process for managing risk in strategic sourcing. The risk-oriented framework considers areas such as trends, corporate and sourcing strategy, critical success factors, strategic supplier selection criteria, risk assessment, reporting and strategy alignment. The proposed model highlights the essential dimensions of strategic sourcing and leads to a new definition of strategic sourcing supported by this empirical study.
Abstract:
Purpose – The purpose of this paper is to investigate an underexplored aspect of outsourcing involving a mixed strategy in which parallel production is continued in-house at the same time as outsourcing occurs. Design/methodology/approach – The study applied a multiple case study approach and drew on qualitative data collected through in-depth interviews with wood product manufacturing companies. Findings – The paper posits that there should be a variety of mixed strategies between the two governance forms of "make" and "buy." In order to address how companies should consider the extent to which they outsource, the analysis was structured around two ends of a continuum: in-house dominance or outsourcing dominance. With an in-house-dominant strategy, outsourcing complements an organization's own production to optimize capacity utilization and outsource less cost-efficient production, or is used as a tool to learn how to outsource. With an outsourcing-dominant strategy, in-house production helps maintain complementary competencies and avoids lock-in risk. Research limitations/implications – This paper takes initial steps toward an exploration of different mixed strategies. Additional research is required to understand the costs of different mixed strategies compared with insourcing and outsourcing, and to study parallel production from a supplier viewpoint. Practical implications – This paper suggests that managers should think twice before rushing to a "me too" outsourcing strategy in which in-house capacities are completely closed. It is important to take a dynamic view of outsourcing that maintains a mixed strategy as an option, particularly in situations that involve an underdeveloped supplier market and/or as a way to develop resources over the long term. Originality/value – The concept of combining both "make" and "buy" is not new. However, little if any research has focused explicitly on exploring the variety of different types of mixed strategies that exist on the continuum between insourcing and outsourcing.