967 results for Lock-In


Relevance: 60.00%

Abstract:

Cloud computing, and more particularly private IaaS, is seen as a mature technology with a myriad of solutions to choose from. However, this disparity of solutions and products has instilled in potential adopters the fear of vendor and data lock-in. Several competing and incompatible interfaces and management styles have given even more voice to these fears. On top of this, cloud users might want to work with several solutions at the same time, an integration that is difficult to achieve in practice. In this paper, we propose a management architecture that tackles these problems: it offers a common way of managing several cloud solutions, and an interface that can be tailored to the needs of the user. This management architecture is designed in a modular way, around a generic information model. We have validated our approach through the implementation of the components needed for this architecture to support a sample private IaaS solution: OpenStack.
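The common-management idea can be sketched as a generic driver interface that each cloud solution implements, so the management layer talks to every backend the same way. This is a minimal illustration under assumed names (CloudDriver, OpenStackDriver and the in-memory stub are invented for the example, not the paper's actual components):

```python
from abc import ABC, abstractmethod

class CloudDriver(ABC):
    """Uniform management interface; concrete drivers adapt it
    to one cloud solution's native API."""

    @abstractmethod
    def list_instances(self) -> list[dict]: ...

    @abstractmethod
    def create_instance(self, image: str, flavor: str) -> str: ...

    @abstractmethod
    def delete_instance(self, instance_id: str) -> None: ...

class OpenStackDriver(CloudDriver):
    """Driver stub mapping the generic operations onto OpenStack
    concepts (servers, images, flavors); an in-memory dict stands
    in for real API calls in this sketch."""

    def __init__(self, endpoint: str):
        self.endpoint = endpoint  # e.g. a Keystone auth URL (assumption)
        self._instances: dict[str, dict] = {}

    def list_instances(self) -> list[dict]:
        return list(self._instances.values())

    def create_instance(self, image: str, flavor: str) -> str:
        instance_id = f"vm-{len(self._instances)}"
        self._instances[instance_id] = {"id": instance_id, "image": image, "flavor": flavor}
        return instance_id

    def delete_instance(self, instance_id: str) -> None:
        self._instances.pop(instance_id, None)

# A manager can then drive several clouds through the same interface:
drivers: dict[str, CloudDriver] = {"openstack": OpenStackDriver("http://keystone:5000/v3")}
vm = drivers["openstack"].create_instance(image="ubuntu-22.04", flavor="m1.small")
print(drivers["openstack"].list_instances())
```

Supporting another IaaS solution then means writing one more driver, leaving the management layer untouched.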

Relevance: 60.00%

Abstract:

Cloud computing, and more particularly private IaaS, is seen as a mature technology with a myriad of solutions to choose from. However, this disparity of solutions and products has instilled in potential adopters the fear of vendor and data lock-in. Several competing and incompatible interfaces and management styles have amplified these fears even further. On top of this, cloud users might want to work with several solutions at the same time, an integration that is difficult to achieve in practice. In this Master's Thesis I propose a management architecture that tries to solve these problems; it provides a generalized control mechanism for several cloud infrastructures, and an interface that can meet the requirements of the users. This management architecture is designed in a modular way, using a generic information model. I have validated the approach through the implementation of the components needed for this architecture to support a sample private IaaS solution: OpenStack.

Relevance: 60.00%

Abstract:

The cloud computing paradigm has risen in popularity within industry and academia. Public cloud infrastructures are enabling new business models and helping to reduce costs. However, the desire to host a company's data and services on premises, and the need to abide by data protection laws, make private cloud infrastructures desirable, either to complement or even fully substitute public offerings. Unfortunately, a lack of standardization has prevented private infrastructure management solutions from maturing, and the myriad of different options has induced the fear of lock-in in customers. One cause of this problem is the misalignment between academic research and industry offerings, with the former focusing on idealized scenarios dissimilar from real-world situations, and the latter developing solutions without considering how they fit with common standards, or even without disseminating their results. To solve this problem I propose a modular management system for private cloud infrastructures that focuses on the applications instead of just the hardware resources. This management system follows the autonomic computing paradigm, and is designed around a simple information model developed to be compatible with common standards. This model splits the environment into two views, which separate the concerns of the stakeholders while at the same time enabling traceability between the physical environment and the virtual machines deployed onto it. In it, cloud applications are classified into three broad types (Services, Big Data Jobs and Instance Reservations), so that the management system can take advantage of each type's features. The information model is paired with a set of atomic, reversible and independent management actions, which determine the operations that can be performed on the environment and are used to realize the cloud environment's scalability. I also describe a management engine that, from the environment's state and using the aforementioned set of actions, is tasked with resource placement. It is divided into two tiers: the Application Managers layer, concerned only with applications; and the Infrastructure Manager layer, responsible for the actual physical resources. This management engine follows a lifecycle with two phases, to better model the behavior of a real infrastructure. The placement problem is tackled during one phase (consolidation) with an integer programming solver, and during the other (online) with a custom heuristic. Tests have demonstrated that this combined approach is superior to other strategies. Finally, the management system is paired with monitoring and actuator architectures, the former able to collect the necessary information from the environment, and the latter modular in design and capable of interfacing with several technologies and offering several access interfaces.
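The two-phase placement idea can be illustrated with a small sketch. The function names and the first-fit-decreasing rule below are assumptions for illustration; the thesis's actual heuristic and ILP formulation are not reproduced here:

```python
# Illustrative sketch of the two-phase placement described above.
# The greedy rule and names are assumed, not the thesis's own code.

def online_place(vms: dict[str, int], hosts: dict[str, int]) -> dict[str, str]:
    """Online phase: place incoming VMs greedily (first fit decreasing),
    fast enough to run on every arrival."""
    free = dict(hosts)                      # remaining capacity per host
    placement: dict[str, str] = {}
    for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        for host, cap in free.items():
            if cap >= demand:
                placement[vm] = host
                free[host] -= demand
                break
        else:
            raise RuntimeError(f"no host can fit {vm}")
    return placement

def consolidate(vms: dict[str, int], hosts: dict[str, int]) -> dict[str, str]:
    """Consolidation phase: periodically re-solve placement exactly,
    e.g. as an integer program minimizing the number of active hosts.
    Only the interface is sketched here; an ILP solver such as PuLP
    or OR-Tools would be plugged in."""
    raise NotImplementedError

print(online_place({"vm1": 4, "vm2": 2, "vm3": 3}, {"h1": 6, "h2": 4}))
```

The split mirrors the lifecycle described above: a cheap heuristic keeps up with arrivals, while the exact solver periodically repacks the environment.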

Relevance: 60.00%

Abstract:

Although all constitutions include rights, and many of them include social rights, the truth is that some are more generous than others in this respect. But none comes close to the 1976 Constitution of the Portuguese Republic in the extent and detail of its catalogue of social, economic and cultural rights. The main theories on the origins of institutions have generated hypotheses to explain the constitutionalization of this second generation of rights. These hypotheses, however, fail to explain in a fully convincing way the process of constitutionalization of social rights. This is even more true in cases such as that of our country, whose outlier character makes them even harder to explain. In this article, these theories and their hypotheses are tested against the Portuguese case, which will be compared, whenever necessary, with the Spanish one. We aim to achieve two goals with this exercise. On the one hand, we intend to identify the limitations of the dominant explanations, including the theories and hypotheses about the causal mechanisms responsible for the inclusion of social rights in constitutions. On the other hand, our purpose is to devise alternative explanations whenever the existing ones prove inadequate or insufficient.

Relevance: 60.00%

Abstract:

Many European and American observers of the EC have criticized "intergovernmentalist" accounts for exaggerating the extent of member-state control over the process of European integration. This essay seeks to ground these criticisms in a "historical institutionalist" account that stresses the need to study European integration as a political process which unfolds over time. Such a perspective highlights the limits of member-state control over long-term institutional development, due to preoccupation with short-term concerns, the ubiquity of unintended consequences, and processes that "lock in" past decisions and make reassertions of member-state control difficult. A brief examination of the evolution of social policy in the EC suggests the limitations of treating the EC as an international regime facilitating collective action among essentially sovereign states. It is more useful to view integration as a "path-dependent" process that has produced a fragmented but still discernible "multitiered" European polity.

Relevance: 60.00%

Abstract:

The proximate causes and processes involved in the loss of breeds are outlined. The path-dependent effect and Swanson's dominance effect are discussed in relation to lock-in of breed selection; these effects help to explain genetic erosion. It is shown that the extension of markets and economic globalisation have contributed significantly to the rapid loss of domestic breeds, especially livestock. The decoupling of animal husbandry from surrounding natural environmental conditions is further eroding the stock of genetic resources. Recent trends in animal husbandry raise serious sustainability issues, apart from animal welfare concerns.
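Path-dependent lock-in of this kind can be illustrated with a toy Polya-urn model, an assumption introduced here for illustration and not taken from the paper: a small early advantage for one breed is amplified by cumulative adoption until that breed dominates selection.

```python
import random

def polya_urn(steps: int, seed: int = 1) -> float:
    """Toy path-dependence model (illustrative assumption): each adoption
    of a breed makes the next adoption of the same breed more likely, so
    early random advantages get locked in."""
    random.seed(seed)
    a, b = 1, 1                      # initial adopters of breeds A and B
    for _ in range(steps):
        if random.random() < a / (a + b):
            a += 1                   # breed A adopted again
        else:
            b += 1
    return a / (a + b)               # final share of breed A

# Different random histories lock in very different dominant breeds:
print([round(polya_urn(10_000, seed=s), 2) for s in range(5)])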

Relevance: 60.00%

Abstract:

SMEs with a weak internal R&D capacity tend to shy away from using external sources of technical expertise. This tendency deters providers of industrial modernization services from supporting such structurally weak SMEs. This paper examines how Japan's local technology centres (kosetsushi) remove the bottleneck and reach out to a significant proportion of SMEs with a weak R&D capacity in their localities. Kosetsushi centres sustain habitual interactions with client firms through 'low information gap' services solving immediate needs, and lead the clients onto a riskier and longer path toward innovation capacity building. This gives kosetsushi centres a position distinct from universities and consultancies in the regional innovation system. While long-term relationships between kosetsushi centres and their client firms can increase switching costs and produce lock-in effects, a case study of two kosetsushi centres illustrates the importance of 'low information gap' services, and the relational assets created thereby, to the modernization of SMEs with a weak internal R&D capacity. The paper calls for long-term commitment by the public sector if it addresses this issue through modernization services.

Relevance: 60.00%

Abstract:

Strategic sourcing has increased in importance in recent years, and now plays an important role in companies' planning. The current volatility in supply markets means companies face multiple challenges involving lock-in situations, supplier bankruptcies or supply security issues. In addition, their exposure can increase due to natural disasters, as witnessed recently in the form of bird flu, volcanic ash and tsunamis. Therefore, the primary focus of this study is risk management in the context of strategic sourcing. The study presents a literature review on sourcing covering the 15 years from 1998 to 2012, and considers 131 academic articles. The literature describes strategic sourcing as a strategic, holistic process for managing supplier relationships, with a long-term focus on adding value to the company and realising competitive advantage. Few studies have established the real risk impact and status of risk management in strategic sourcing, and evaluation across countries and industries has been limited, with the construction sector particularly under-researched. The methodology is founded on a qualitative study of twenty cases from the construction and electronics manufacturing industries across Germany and the United Kingdom. In considering risk management in the context of strategic sourcing, the thesis covers six dimensions: trends in strategic sourcing, theoretical and practical sourcing models, risk management, supply and demand management, critical success factors, and strategic supplier evaluation. The study contributes in several ways. First, recent trends are traced and future needs identified across the research dimensions of countries, industries and companies. Second, it evaluates critical success factors in contemporary strategic sourcing. Third, it explores the application of theoretical and practical sourcing models in terms of effectiveness and sustainability. Fourth, based on the case study findings, a risk-oriented strategic sourcing framework and a model for strategic sourcing are developed, grounded in the validation of contemporary requirements and a critical evaluation of the existing situation. The framework reflects the empirical findings and leads to a structured process for managing risk in strategic sourcing, considering areas such as trends, corporate and sourcing strategy, critical success factors, strategic supplier selection criteria, risk assessment, strategy alignment and reporting. The proposed model highlights the essential dimensions of strategic sourcing and leads to a new definition of strategic sourcing supported by this empirical study.

Relevance: 60.00%

Abstract:

Purpose – The purpose of this paper is to investigate an underexplored aspect of outsourcing involving a mixed strategy in which parallel production is continued in-house at the same time as outsourcing occurs. Design/methodology/approach – The study applied a multiple case study approach and drew on qualitative data collected through in-depth interviews with wood product manufacturing companies. Findings – The paper posits that there should be a variety of mixed strategies between the two governance forms of “make” or “buy.” In order to address how companies should consider the extent to which they outsource, the analysis was structured around two ends of a continuum: in-house dominance or outsourcing dominance. With an in-house-dominant strategy, outsourcing complements an organization's own production to optimize capacity utilization and outsource less cost-efficient production, or is used as a tool to learn how to outsource. With an outsourcing-dominant strategy, in-house production helps maintain complementary competencies and avoids lock-in risk. Research limitations/implications – This paper takes initial steps toward an exploration of different mixed strategies. Additional research is required to understand the costs of different mixed strategies compared with insourcing and outsourcing, and to study parallel production from a supplier viewpoint. Practical implications – This paper suggests that managers should think twice before rushing to a “me too” outsourcing strategy in which in-house capacities are completely closed. It is important to take a dynamic view of outsourcing that maintains a mixed strategy as an option, particularly in situations that involve an underdeveloped supplier market and/or as a way to develop resources over the long term. Originality/value – The concept of combining both “make” and “buy” is not new. However, little if any research has focussed explicitly on exploring the variety of different types of mixed strategies that exist on the continuum between insourcing and outsourcing.

Relevance: 60.00%

Abstract:

In this work a self-referenced technique for fiber-optic intensity sensors using virtual lock-in amplifiers is proposed and discussed. The topology is compatible with WDM networks, so multiple remote sensors can be interrogated simultaneously. A hybrid approach combining both silica fiber Bragg gratings and polymer optical fiber Bragg gratings is analyzed. The feasibility of the proposed solution for potential medical environments and biomedical applications is shown and tested using a self-referenced configuration based on a power-ratio parameter.
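A virtual (digital) lock-in amplifier recovers the amplitude of a signal modulated at a known reference frequency by mixing it with in-phase and quadrature references and low-pass filtering. The following numpy sketch shows that general principle only; the paper's actual configuration, frequencies and parameters are not reproduced, and all numbers here are illustrative:

```python
import numpy as np

def virtual_lockin(signal: np.ndarray, f_ref: float, fs: float) -> tuple[float, float]:
    """Digital lock-in: mix the signal with quadrature references at the
    reference frequency, then average (a crude low-pass filter) to extract
    the amplitude and phase of the component at f_ref."""
    t = np.arange(len(signal)) / fs
    i = signal * np.cos(2 * np.pi * f_ref * t)   # in-phase mixing
    q = signal * np.sin(2 * np.pi * f_ref * t)   # quadrature mixing
    x, y = 2 * i.mean(), 2 * q.mean()            # low-pass via averaging
    return np.hypot(x, y), np.arctan2(y, x)      # amplitude, phase

# Example: a weak 1 kHz modulated sensor signal buried in noise.
fs, f_mod = 100_000.0, 1_000.0
t = np.arange(int(fs)) / fs                      # 1 s of samples
rng = np.random.default_rng(0)
sig = 0.05 * np.cos(2 * np.pi * f_mod * t) + rng.normal(0, 1.0, t.size)
amp, phase = virtual_lockin(sig, f_mod, fs)
print(f"recovered amplitude ~ {amp:.3f}")        # close to 0.05 despite the noise
```

Because the mixing rejects everything except the component at the reference frequency, the sensor signal is recovered even when the noise is an order of magnitude larger.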

Relevance: 60.00%

Abstract:

Cloud computing can be defined as a distributed computing model through which resources (hardware, storage, development platforms and communication) are shared as paid services, accessible with minimal management effort and interaction. A great benefit of this model is that it enables the use of several providers (e.g., a multi-cloud architecture) to compose a set of services in order to obtain an optimal configuration for performance and cost. However, multi-cloud use is hindered by the problem of cloud lock-in: the dependency between an application and a cloud platform. It is commonly addressed by three strategies: (i) use of an intermediary layer between the consumers of cloud services and the provider; (ii) use of standardized interfaces to access the cloud; or (iii) use of models with open specifications. This paper outlines an approach to evaluate these strategies. The evaluation showed that, despite the advances made by these strategies, none of them actually solves the cloud lock-in problem. In this light, this work proposes the use of Semantic Web technologies to avoid cloud lock-in, where RDF models are used to specify the features of a cloud and are managed through SPARQL queries. To that end, this work: (i) presents an evaluation model that quantifies the cloud lock-in problem; (ii) evaluates cloud lock-in across three multi-cloud solutions and three cloud platforms; (iii) proposes the use of RDF and SPARQL for the management of cloud resources; (iv) presents the Cloud Query Manager (CQM), a SPARQL server that implements the proposal; and (v) compares CQM with three multi-cloud solutions in terms of response time and effectiveness in resolving cloud lock-in.
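The RDF/SPARQL idea can be sketched with the rdflib Python library: cloud features are described as RDF triples, and a platform-neutral SPARQL query selects providers matching a requirement. The vocabulary and property names below are invented for the example and are not CQM's actual schema:

```python
from rdflib import Graph, Literal, Namespace, RDF

# Invented vocabulary for the sketch; CQM's real schema may differ.
CLOUD = Namespace("http://example.org/cloud#")

g = Graph()
for name, region, price in [("providerA", "eu-west", 0.08),
                            ("providerB", "us-east", 0.05)]:
    node = CLOUD[name]
    g.add((node, RDF.type, CLOUD.Provider))
    g.add((node, CLOUD.region, Literal(region)))
    g.add((node, CLOUD.pricePerHour, Literal(price)))

# Platform-neutral query: find EU providers, cheapest first.
query = """
PREFIX cloud: <http://example.org/cloud#>
SELECT ?p ?price WHERE {
    ?p a cloud:Provider ;
       cloud:region "eu-west" ;
       cloud:pricePerHour ?price .
} ORDER BY ?price
"""
for row in g.query(query):
    print(row.p, row.price)
```

Because the query addresses the RDF model rather than any provider's native API, swapping providers does not change the application's management code, which is the essence of the anti-lock-in argument.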

Relevance: 60.00%

Abstract:

Multi-cloud applications are composed of services offered by multiple cloud platforms, where the user/developer has full knowledge of the use of such platforms. Using multiple cloud platforms avoids the following problems: (i) vendor lock-in, i.e., the application's dependency on a certain cloud platform, which is harmful in the case of degradation or failure of platform services, or even of price increases in service usage; (ii) degradation or failure of the application due to fluctuations in the quality of service (QoS) provided by some cloud platform, or due to the failure of any service. In a multi-cloud scenario it is possible to replace a failed service, or one with QoS problems, with an equivalent service from another cloud platform. For an application to adopt the multi-cloud perspective, it is necessary to create mechanisms that can select which cloud services/platforms should be used in accordance with the requirements determined by the programmer/user. In this context, the major challenges in developing such applications include: (i) choosing which underlying services and cloud computing platforms should be used, based on user-defined requirements for functionality and quality; (ii) continually monitoring dynamic information (such as response time, availability and price) related to cloud services, in addition to the wide variety of services; and (iii) adapting the application if QoS violations affect user-defined requirements. This PhD thesis proposes an approach for the dynamic adaptation of multi-cloud applications, to be applied when a service becomes unavailable or when the requirements set by the user/developer indicate that another available multi-cloud configuration is a better fit. The proposed strategy is composed of two phases. The first phase consists of modeling the application, exploiting the capacity for representing commonalities and variability proposed in the context of the Software Product Lines (SPL) paradigm. In this phase an extended feature model is used to specify the cloud service configuration to be used by the application (commonalities) and the different possible providers for each service (variability). Furthermore, the non-functional requirements associated with cloud services are specified by properties in this model, describing dynamic information about these services. The second phase consists of an autonomic process based on the MAPE-K control loop, which is responsible for optimally selecting a multi-cloud configuration that meets the established requirements, and for performing the adaptation. The proposed adaptation strategy is independent of the programming technique used to perform the adaptation; in this work it is implemented with several techniques, such as aspect-oriented programming, context-oriented programming, and component- and service-oriented programming. Based on the proposed steps, we assessed: (i) whether the modeling process and the specification of non-functional requirements can ensure effective monitoring of user satisfaction; (ii) whether the optimal selection process presents significant gains compared with a sequential approach; and (iii) which techniques offer the best trade-off between development/modularity effort and performance.
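One iteration of the MAPE-K idea as applied here can be sketched as follows. The provider names, metric and threshold are assumptions for illustration, not the thesis's actual implementation: monitored QoS is checked against a user-defined requirement, a satisfying alternative is planned from the feature model's variability, and the service is rebound.

```python
# Minimal MAPE-K sketch; names, metrics and thresholds are illustrative.

ALTERNATIVES = {"storage": ["cloudA", "cloudB"]}   # variability from the feature model
REQUIREMENTS = {"storage": {"max_response_ms": 200}}

config = {"storage": "cloudA"}                     # current multi-cloud configuration
knowledge = {"cloudA": {"response_ms": 350},       # Monitor: latest measurements
             "cloudB": {"response_ms": 120}}

def analyze(service: str) -> bool:
    """Check the monitored QoS against the user-defined requirement."""
    measured = knowledge[config[service]]["response_ms"]
    return measured > REQUIREMENTS[service]["max_response_ms"]

def plan(service: str) -> str | None:
    """Pick the best satisfying alternative for the violated service."""
    candidates = [p for p in ALTERNATIVES[service]
                  if knowledge[p]["response_ms"] <= REQUIREMENTS[service]["max_response_ms"]]
    return min(candidates, key=lambda p: knowledge[p]["response_ms"], default=None)

def execute(service: str, provider: str) -> None:
    """Rebind the service to the chosen provider."""
    config[service] = provider

for service in config:                             # one MAPE iteration
    if analyze(service):
        replacement = plan(service)
        if replacement:
            execute(service, replacement)

print(config)                                      # {'storage': 'cloudB'}
```

The adaptation mechanism itself (how the rebinding is woven into the application) is what the thesis varies across aspect-, context- and service-oriented implementations.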

Relevance: 60.00%

Abstract:

Since the 1950s the global consumption of natural resources has skyrocketed, both in magnitude and in the range of resources used. Closely coupled with emissions of greenhouse gases, land consumption, pollution of environmental media, and degradation of ecosystems, as well as with economic development, increasing resource use is a key issue to be addressed in order to keep the planet Earth in a safe and just operating space. This requires thinking about absolute reductions in resource use and associated environmental impacts and, when put in the context of the current refocusing on economic growth at the European level, absolute decoupling, i.e., maintaining economic development while absolutely reducing resource use and associated environmental impacts. Changing the behavioural, institutional and organisational structures that lock in unsustainable resource use is thus a formidable challenge, as existing world views, social practices, infrastructures and power structures make initiating change difficult. Hence, policy mixes are needed that target different drivers in a systematic way. When designing policy mixes for decoupling, the effect of individual instruments on other drivers and on other instruments in a mix should be considered, and potential negative effects mitigated. This requires smart and time-dynamic policy packaging. This Special Issue investigates the following research questions: What is decoupling and how does it relate to resource efficiency and environmental policy? How can we develop and realize policy mixes for decoupling economic development from resource use and associated environmental impacts? And how can we do this in a systemic way, so that all relevant dimensions and linkages (including across economic and social issues such as production, consumption, transport, growth and wellbeing) are taken into account? In addressing these questions, the overarching goals of this Special Issue are to: address the challenges related to more sustainable resource use; contribute to the development of successful policy tools and practices for sustainable development and resource efficiency (particularly through the exploration of socio-economic, scientific, and integrated aspects of sustainable development); and inform policy debates and policy-making. The Special Issue draws on findings from the EU and other countries to offer lessons of international relevance for policy mixes for more sustainable resource use, with findings of interest to policy makers in central and local government and NGOs, decision makers in business, academics, researchers, and scientists.

Relevance: 60.00%

Abstract:

After years of deliberation, the EU Commission sped up the reform process of a common EU digital policy considerably in 2015 by launching the EU Digital Single Market strategy. In particular, two core initiatives of the strategy were agreed upon: the General Data Protection Regulation and the Network and Information Security (NIS) Directive. An additional initiative was launched addressing the role of online platforms. This paper focuses on the platform-privacy rationale behind the data protection legislation, based primarily on the proposal for a new EU-wide General Data Protection Regulation. We analyse the rationale of the legislation from an information systems perspective to understand the role user data plays in creating platforms that we identify as "processing silos". Generative digital infrastructure theories are used to explain the innovative mechanisms thought to govern digitalization and the successful business models affected by it. We foresee continued judicial data protection challenges with the proposed Regulation as the adoption of the Internet of Things continues. The findings of this paper illustrate that many of the existing issues can be addressed through legislation from a platform perspective. We conclude by proposing three modifications to the governing rationale, which would improve not only platform privacy for the data subject, but also entrepreneurial efforts in developing intelligent service platforms. The first modification aims to improve service differentiation on platforms by lessening the ability of incumbent global actors to lock in the user base to their service/platform. The second modification posits limiting the current unwanted tracking ability of syndicates by separating authentication and data-store services from any processing entity. Thirdly, we propose a change in how security and data protection policies are reviewed, suggesting a third-party auditing procedure.

Relevance: 60.00%

Abstract:

In this study, the dynamic response of a vertical flexible cylinder vibrating at low mode numbers with combined x-y motion was investigated in a towing tank. Uniform flow was simulated by towing the flexible cylinder along the tank in still water; the turbulence intensity of the free stream was therefore negligible, yielding more reliable results. A lower branch of dominant frequencies with very small vibration amplitudes was found in both the cross-flow and in-line directions; this discrepancy was likely caused by an initial lock-in. The maximum attainable amplitude, modal analysis and x-y trajectories in the cross-flow and in-line directions are reported here and compared with the previous literature, showing both good agreement and some observations that differ from earlier studies. Drag and lift coefficients are also evaluated using a generalized integral transform technique, yielding an alternative method for studying the fluid forces acting upon a flexible cylinder.
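The drag and lift coefficients referred to above normalize the measured fluid forces by the dynamic pressure and the cylinder's projected area, C_D = F_D / (0.5 rho U^2 D L). The sketch below shows only that standard post-processing step; the variable names and sample numbers are illustrative, and the paper's integral-transform evaluation is not reproduced:

```python
import numpy as np

def force_coefficients(f_drag, f_lift, rho, U, D, L):
    """Normalize measured force time series (N) into the usual mean drag
    coefficient and RMS lift coefficient for a cylinder of diameter D (m)
    and wetted length L (m) towed at speed U (m/s) in water of density rho."""
    q = 0.5 * rho * U**2 * D * L          # dynamic pressure times projected area
    c_d = np.mean(f_drag) / q             # mean drag coefficient
    c_l = np.sqrt(np.mean(np.square(f_lift))) / q   # RMS lift coefficient
    return c_d, c_l

# Illustrative numbers only: a 1 m cylinder of 20 mm diameter towed at 0.5 m/s.
t = np.linspace(0.0, 10.0, 2000)
f_drag = 3.0 + 0.2 * np.sin(2 * np.pi * 1.5 * t)   # N, fluctuating about a mean
f_lift = 1.0 * np.sin(2 * np.pi * 0.75 * t)        # N, zero-mean oscillation
print(force_coefficients(f_drag, f_lift, rho=1000.0, U=0.5, D=0.02, L=1.0))
```

The mean is used for drag and the RMS for lift because lift on an oscillating cylinder is nominally zero-mean, so its fluctuation magnitude is the quantity of interest.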