993 results for Lock-In
Abstract:
In this work, a self-referenced technique for fiber-optic intensity sensors using virtual lock-in amplifiers is proposed and discussed. The topology is compatible with WDM networks, so multiple remote sensors can be interrogated simultaneously. A hybrid approach combining both silica fiber Bragg gratings and polymer optical fiber Bragg gratings is analyzed. The feasibility of the proposed solution for potential medical environments and biomedical applications is shown and tested using a self-referenced configuration based on a power-ratio parameter.
Abstract:
Cloud computing can be defined as a distributed computational model through which resources (hardware, storage, development platforms and communication) are shared as paid services, accessible with minimal management effort and interaction. A great benefit of this model is that it enables the use of multiple providers (e.g., a multi-cloud architecture) to compose a set of services in order to obtain an optimal configuration for performance and cost. However, multi-cloud use is hindered by the problem of cloud lock-in: the dependency between an application and a cloud platform. It is commonly addressed by three strategies: (i) use of an intermediate layer that stands between the consumers of cloud services and the provider; (ii) use of standardized interfaces to access the cloud; or (iii) use of models with open specifications. This paper outlines an approach to evaluate these strategies. The evaluation was performed, and it was found that, despite the advances made by these strategies, none of them actually solves the problem of cloud lock-in. In this sense, this work proposes the use of the Semantic Web to avoid cloud lock-in, where RDF models are used to specify the features of a cloud and are managed through SPARQL queries. In this direction, this work: (i) presents an evaluation model that quantifies the problem of cloud lock-in; (ii) evaluates cloud lock-in across three multi-cloud solutions and three cloud platforms; (iii) proposes using RDF and SPARQL for the management of cloud resources; (iv) presents the Cloud Query Manager (CQM), a SPARQL server that implements the proposal; and (v) compares three multi-cloud solutions against CQM in terms of response time and effectiveness in resolving cloud lock-in.
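The core idea of the abstract above, describing cloud features as RDF triples and selecting them with SPARQL, can be illustrated with a minimal, self-contained sketch. The vocabulary (`ex:AWS`, `ex:offers`, and so on) and the triple data are invented for illustration; a real implementation such as CQM would use an actual RDF store and SPARQL engine.

```python
# Cloud features stored as (subject, predicate, object) triples and
# selected with a basic graph-pattern match, i.e. what a SPARQL SELECT
# over one triple pattern would do.  All vocabulary here is hypothetical.

TRIPLES = {
    ("ex:AWS",   "ex:offers", "ex:ObjectStorage"),
    ("ex:AWS",   "ex:region", "us-east-1"),
    ("ex:Azure", "ex:offers", "ex:ObjectStorage"),
    ("ex:Azure", "ex:offers", "ex:MessageQueue"),
}

def match(pattern, triples=TRIPLES):
    """Return variable bindings (terms starting with '?') for every
    triple that matches the pattern, like one SPARQL triple pattern."""
    results = []
    for triple in sorted(triples):
        binding = {}
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                binding[term] = value   # bind the variable
            elif term != value:
                break                   # constant mismatch: no match
        else:
            results.append(binding)
    return results

# Which providers offer object storage?  Roughly:
#   SELECT ?cloud WHERE { ?cloud ex:offers ex:ObjectStorage }
providers = sorted(b["?cloud"]
                   for b in match(("?cloud", "ex:offers", "ex:ObjectStorage")))
```

Querying the feature model this way, rather than calling a provider-specific API, is what decouples the application from any single platform.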
Abstract:
Multi-cloud applications are composed of services offered by multiple cloud platforms, where the user/developer has full knowledge of the use of such platforms. The use of multiple cloud platforms avoids the following problems: (i) vendor lock-in, the dependency of an application on a particular cloud platform, which is harmful in the case of degradation or failure of platform services, or of price increases for service usage; (ii) degradation or failure of the application due to fluctuations in the quality of service (QoS) provided by some cloud platform, or due to the failure of an individual service. In a multi-cloud scenario, it is possible to replace a failed service, or one with QoS problems, with an equivalent service from another cloud platform. For an application to adopt the multi-cloud perspective, it is necessary to create mechanisms that are able to select which cloud services/platforms should be used in accordance with the requirements determined by the programmer/user. In this context, the major challenges in developing such applications include: (i) choosing which underlying services and cloud computing platforms should be used, based on the user requirements defined in terms of functionality and quality; (ii) continually monitoring dynamic information (such as response time, availability and price) related to cloud services, in addition to handling the wide variety of services; and (iii) adapting the application if QoS violations affect user-defined requirements. This PhD thesis proposes an approach for the dynamic adaptation of multi-cloud applications, to be applied when a service becomes unavailable or when the requirements set by the user/developer indicate that another available multi-cloud configuration meets them more efficiently. Thus, this work proposes a strategy composed of two phases.
The first phase consists of modeling the application, exploiting the capacity to represent commonalities and variability proposed in the context of the Software Product Lines (SPL) paradigm. In this phase, an extended feature model is used to specify the cloud service configuration to be used by the application (commonalities) and the different possible providers for each service (variability). Furthermore, the non-functional requirements associated with cloud services are specified as properties in this model, describing dynamic information about these services. The second phase consists of an autonomic process, based on the MAPE-K control loop, which is responsible for optimally selecting a multi-cloud configuration that meets the established requirements and for performing the adaptation. The proposed adaptation strategy is independent of the programming technique used to perform the adaptation. In this work, we implement the adaptation strategy using several programming techniques, such as aspect-oriented programming, context-oriented programming, and component- and service-oriented programming. Based on the proposed phases, we assess the following: (i) whether the modeling process and the specification of non-functional requirements can ensure effective monitoring of user satisfaction; (ii) whether the optimal selection process presents significant gains compared with a sequential approach; and (iii) which techniques offer the best trade-off between development/modularity effort and performance.
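The selection step of the MAPE-K loop described above can be sketched in a few lines: monitored metrics are analyzed against user requirements, and the planner picks the cheapest configuration that satisfies them. The requirement thresholds, provider names, and metric values below are all invented for illustration, not taken from the thesis.

```python
# Minimal sketch of the analyze/plan steps of a MAPE-K loop: per-service
# provider selection driven by user-defined QoS requirements.
# All numbers and provider names are hypothetical.

REQUIREMENTS = {"max_response_ms": 200, "min_availability": 0.99}

def violates(metrics, req=REQUIREMENTS):
    """Analyze: True if monitored metrics violate any requirement."""
    return (metrics["response_ms"] > req["max_response_ms"]
            or metrics["availability"] < req["min_availability"])

def plan(candidates, req=REQUIREMENTS):
    """Plan: select the cheapest candidate meeting all requirements,
    or None if no configuration is feasible (triggering escalation)."""
    feasible = [c for c in candidates if not violates(c["metrics"], req)]
    return min(feasible, key=lambda c: c["price"]) if feasible else None

candidates = [
    {"provider": "CloudA", "price": 0.10,
     "metrics": {"response_ms": 150, "availability": 0.995}},
    {"provider": "CloudB", "price": 0.07,
     "metrics": {"response_ms": 250, "availability": 0.999}},  # too slow
    {"provider": "CloudC", "price": 0.08,
     "metrics": {"response_ms": 180, "availability": 0.992}},
]

chosen = plan(candidates)  # cheapest feasible option
```

In the full loop, the monitor step would refresh `metrics` continuously and the execute step would re-deploy the affected service whenever `plan` returns a different provider than the one in use.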
Abstract:
Since the 1950s the global consumption of natural resources has skyrocketed, both in magnitude and in the range of resources used. Closely coupled with emissions of greenhouse gases, land consumption, pollution of environmental media, and degradation of ecosystems, as well as with economic development, increasing resource use is a key issue to be addressed in order to keep the planet Earth in a safe and just operating space. This requires thinking about absolute reductions in resource use and associated environmental impacts, and, when put in the context of current re-focusing on economic growth at the European level, absolute decoupling, i.e., maintaining economic development while absolutely reducing resource use and associated environmental impacts. Changing behavioural, institutional and organisational structures that lock-in unsustainable resource use is, thus, a formidable challenge as existing world views, social practices, infrastructures, as well as power structures, make initiating change difficult. Hence, policy mixes are needed that will target different drivers in a systematic way. When designing policy mixes for decoupling, the effect of individual instruments on other drivers and on other instruments in a mix should be considered and potential negative effects be mitigated. This requires smart and time-dynamic policy packaging. This Special Issue investigates the following research questions: What is decoupling and how does it relate to resource efficiency and environmental policy? How can we develop and realize policy mixes for decoupling economic development from resource use and associated environmental impacts? And how can we do this in a systemic way, so that all relevant dimensions and linkages—including across economic and social issues, such as production, consumption, transport, growth and wellbeing—are taken into account? 
In addressing these questions, the overarching goals of this Special Issue are to: address the challenges related to more sustainable resource-use; contribute to the development of successful policy tools and practices for sustainable development and resource efficiency (particularly through the exploration of socio-economic, scientific, and integrated aspects of sustainable development); and inform policy debates and policy-making. The Special Issue draws on findings from the EU and other countries to offer lessons of international relevance for policy mixes for more sustainable resource-use, with findings of interest to policy makers in central and local government and NGOs, decision makers in business, academics, researchers, and scientists.
Abstract:
After years of deliberation, the EU Commission sped up the reform process of a common EU digital policy considerably in 2015 by launching the EU digital single market strategy. In particular, two core initiatives of the strategy were agreed upon: the General Data Protection Regulation and the Network and Information Security (NIS) Directive law texts. A new initiative was additionally launched addressing the role of online platforms. This paper focuses on the platform privacy rationale behind the data protection legislation, primarily based on the proposal for a new EU-wide General Data Protection Regulation. We analyse the legislation's rationale from an Information Systems perspective to understand the role user data plays in creating platforms that we identify as "processing silos". Generative digital infrastructure theories are used to explain the innovative mechanisms that are thought to govern the notion of digitalization and the successful business models affected by it. We foresee continued judicial data protection challenges with the now-proposed Regulation as the adoption of the "Internet of Things" continues. The findings of this paper illustrate that many of the existing issues can be addressed through legislation from a platform perspective. We conclude by proposing three modifications to the governing rationale, which would improve not only platform privacy for the data subject, but also entrepreneurial efforts in developing intelligent service platforms. The first modification is aimed at improving service differentiation on platforms by lessening the ability of incumbent global actors to lock in the user base to their service/platform. The second modification posits limiting the current unwanted tracking ability of syndicates by separating authentication and data store services from any processing entity. Thirdly, we propose a change in how security and data protection policies are reviewed, suggesting a third-party auditing procedure.
Abstract:
In this study, the dynamic response of a vertical flexible cylinder vibrating at low mode numbers with combined x-y motion was investigated in a towing tank. Uniform flow was simulated by towing the flexible cylinder along the tank in still water; the turbulence intensity of the free flow was therefore negligible, yielding more reliable results. A lower branch of dominant frequencies with micro-vibration amplitudes was found in both the cross-flow and in-line directions; this discrepancy was likely caused by an initial lock-in. The maximum attainable amplitude, modal analysis and x-y trajectories in the cross-flow and in-line directions are reported here and compared with the previous literature, showing both good agreement and observations that differ from earlier studies. Drag and lift coefficients are also evaluated using a generalized integral transform technique, yielding an alternative method for studying the fluid forces acting upon a flexible cylinder.
Abstract:
This dissertation comprises three chapters. The first chapter motivates the use of a novel data set combining survey and administrative sources for the study of internal labor migration. By following a sample of individuals from the American Community Survey (ACS) across their employment outcomes over time according to the Longitudinal Employer-Household Dynamics (LEHD) database, I construct a measure of geographic labor mobility that allows me to exploit information about individuals prior to their move. This enables me to explore aspects of the migration decision, such as homeownership and employment status, in ways that have not previously been possible. In the second chapter, I use this data set to test the theory that falling home prices affect a worker’s propensity to take a job in a different metropolitan area from where he is currently located. Employing a within-CBSA and time estimation that compares homeowners to renters in their propensities to relocate for jobs, I find that homeowners who have experienced declines in the nominal value of their homes are approximately 12% less likely than average to take a new job in a location outside of the metropolitan area where they currently reside. This evidence is consistent with the hypothesis that housing lock-in has contributed to the decline in labor mobility of homeowners during the recent housing bust. The third chapter focuses on a sample of unemployed workers in the same data set, in order to compare the unemployment durations of those who find subsequent employment by relocating to a new metropolitan area, versus those who find employment in their original location. Using an instrumental variables strategy to address the endogeneity of the migration decision, I find that out-migrating for a new job significantly reduces the time to re-employment. These results stand in contrast to OLS estimates, which suggest that those who move have longer unemployment durations. 
This implies that those who migrate for jobs in the data may be particularly disadvantaged in their ability to find employment, and thus have strong short-term incentives to relocate.
Abstract:
The diversity in the way cloud providers offer their services, state their SLAs, present their QoS, or support different technologies makes the portability and interoperability of cloud applications very difficult, and favours the well-known vendor lock-in problem. We propose a model to describe cloud applications and the required resources in an agnostic, provider- and resource-independent way, in which individual application modules, and entire applications, may be re-deployed using different services without modification. To support this model, and following the proposal of a variety of cross-cloud application management tools by different authors, we propose going one step further in the unification of cloud services with a management approach in which IaaS and PaaS services are integrated into a unified interface. We provide support for deploying applications whose components are distributed across different cloud providers, using IaaS and PaaS services interchangeably.
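The "unified interface" idea above, one deployment operation hiding whether the target is IaaS or PaaS, can be sketched with a simple abstraction layer. The class and method names below are illustrative only, not taken from the paper's tooling.

```python
# Sketch of a unified IaaS/PaaS deployment interface: application
# modules are described in a provider-agnostic way, and concrete
# drivers (one per provider or service kind) implement the same
# deploy operation.  Names here are hypothetical.
from abc import ABC, abstractmethod

class CloudDriver(ABC):
    """Single interface hiding whether the target is IaaS or PaaS."""
    @abstractmethod
    def deploy(self, module: dict) -> str: ...

class IaaSDriver(CloudDriver):
    def deploy(self, module):
        # would provision a VM, install a runtime, copy the artifact...
        return f"VM running {module['name']}"

class PaaSDriver(CloudDriver):
    def deploy(self, module):
        # would push the artifact to the platform's build service...
        return f"platform app {module['name']}"

def deploy_app(modules, drivers):
    """Each module may land on a different provider, and an IaaS
    target can be swapped for a PaaS one without changing the module."""
    return [drivers[m["target"]].deploy(m) for m in modules]

drivers = {"iaas": IaaSDriver(), "paas": PaaSDriver()}
app = [{"name": "web", "target": "paas"}, {"name": "db", "target": "iaas"}]
result = deploy_app(app, drivers)
```

Re-deploying a module on a different provider then amounts to changing its `target` entry, which is exactly the kind of modification-free portability the abstract argues for.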
Abstract:
The loss of prestressing force over time influences the long-term deflection of a prestressed concrete element. Prestress losses are inherently complex due to the interaction of concrete creep, concrete shrinkage, and steel relaxation. Implementing advanced materials such as ultra-high performance concrete (UHPC) further complicates the estimation of prestress losses because the material models depend on the curing regime. Past research shows that compressive creep is "locked in" when UHPC cylinders are subjected to thermal treatment before being loaded in compression. However, the current precasting manufacturing process typically loads the element (through prestressing strand release from the prestressing bed) before the element is taken to the curing facility; members of many ages are stored until curing can be applied to all of them at once. This research was conducted to determine the impact of variable curing times for UHPC on the prestress losses, and hence on deflections. Three UHPC beams, a rectangular section, a modified bulb tee section, and a pi-girder, were assessed for losses and deflections using an incremental time-step approach and material models specific to UHPC based on compressive creep and shrinkage testing. Results show that although it is important for prestressed UHPC beams to be thermally treated, to "lock in" material properties, the timing of thermal treatment leads to negligible differences in long-term deflections. Results also show that for UHPC elements that are thermally treated, changes in deflection are caused only by external loads, because prestress losses are "locked in" following thermal treatment.
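The shape of the incremental time-step approach mentioned above can be sketched as a simple day-by-day loss accumulation in which creep and shrinkage contributions stop once thermal treatment "locks in" the material. The rate constants below are placeholders for illustration, not UHPC material data from the study.

```python
# Illustrative incremental time-step loss calculation.  The daily
# creep/shrinkage/relaxation rates (% of remaining stress per day)
# are invented placeholders, NOT measured UHPC properties.

def prestress_loss(initial_stress_mpa, days, lock_in_day=None,
                   creep_rate=0.02, shrink_rate=0.01, relax_rate=0.005):
    """Accumulate losses over `days` daily steps.  From `lock_in_day`
    on (thermal treatment), creep and shrinkage are 'locked in' and
    contribute nothing further; steel relaxation continues."""
    loss = 0.0
    for day in range(1, days + 1):
        locked = lock_in_day is not None and day >= lock_in_day
        daily_pct = relax_rate                     # relaxation continues
        if not locked:
            daily_pct += creep_rate + shrink_rate  # stops after treatment
        loss += daily_pct * (initial_stress_mpa - loss) / 100.0
    return loss

early = prestress_loss(1400, 90, lock_in_day=1)    # treated immediately
late  = prestress_loss(1400, 90, lock_in_day=30)   # treated after storage
never = prestress_loss(1400, 90, lock_in_day=None) # never treated
```

With this structure, later treatment accumulates more creep and shrinkage loss before lock-in, which is the effect the study quantifies (and finds negligible for long-term deflections at realistic treatment ages).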
Abstract:
Some organizations end up reimplementing the same class of business process over and over: an "administrative process", which consists of managing a form through several states and involving various roles in the organization. This results in wasted time that could be dedicated to better understanding the process or dealing with the fine details that are specific to the process. Existing virtual office solutions require specific training and infrastructure and may result in vendor lock-in. In this paper, we propose using a high-level domain-specific language (AdminDSL) to describe the administrative process and a separate code generator targeting a standard web framework. We have implemented the approach using Xtext, EGL and the Django web framework, and we illustrate it through two case studies: a synthetic examination process which illustrates the architecture of the generated code, and a real-world workplace survey process that identified several future avenues for improvement.
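The "administrative process" pattern the paper targets, a form moving through states with role-restricted transitions, can be captured in a few lines. The states, roles, and actions below are illustrative only; they are not AdminDSL syntax or generated Django code.

```python
# Sketch of the administrative-process pattern: a form advances through
# states, and each transition is allowed only for a given role.
# States, roles, and actions here are hypothetical examples.

TRANSITIONS = {
    # (current state, role, action) -> next state
    ("draft",     "employee", "submit"):  "submitted",
    ("submitted", "manager",  "approve"): "approved",
    ("submitted", "manager",  "reject"):  "draft",
}

def advance(state, role, action):
    """Apply a role-checked transition, or refuse it."""
    try:
        return TRANSITIONS[(state, role, action)]
    except KeyError:
        raise PermissionError(f"{role} cannot {action} from {state}")

state = advance("draft", "employee", "submit")  # employee files the form
state = advance(state, "manager", "approve")    # manager signs it off
```

A generator in the spirit of AdminDSL would emit this state/role table, plus the forms and views around it, from a declarative process description instead of having developers hand-write it each time.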
Abstract:
Background: Port-related bloodstream infection (PRBSI) is a common complication associated with the long-term use of port systems. Systemic antimicrobial therapy (ST) and removal of the device is the standard management of PRBSI. However, conservative management combining ST with antibiotic lock therapy (ALT), without port removal, has been suggested as an alternative option for infections due to gram-positive skin colonizers with low virulence.
Objectives: (i) to assess the frequency of management of PRBSI in onco-hematological patients by combining ALT with ST, without catheter removal, and (ii) to analyze the efficacy of such an approach.
Methods: Retrospective observational study over a 6-year period between 2005 and 2010, including patients who were diagnosed with PRBSI and who were treated with ST and ALT. The PRBSI diagnosis consisted of clinical signs of bacteremia with blood cultures positive for gram-positive skin colonizers. The primary endpoint was failure to cure the PRBSI.
Results: 61 port infections were analysed, of which 23 PRBSI met the inclusion criteria. All the patients were suffering from haematological conditions and 75% were neutropenic at the time of PRBSI diagnosis. S. epidermidis was responsible for 91% of PRBSI (21/23). The median duration of ST was 14 days (range 7-35) and the median duration of ALT was 15 days (range 8-41). Failure to cure the PRBSI, requiring port removal, was observed in 4 patients, but was not associated with severe infectious complications. Kaplan-Meier analysis showed a success rate for port salvage at day 180 (6 months) of 78% (95% CI 59-97%).
Conclusion: The success rate observed in the present study suggests that combining ST and ALT is an effective option to conservatively treat PRBSI caused by pathogens of low virulence such as S. epidermidis.
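The Kaplan-Meier analysis named in the abstract can be illustrated with a minimal estimator. The follow-up times and event flags below are invented for the example; they are not the study's patient data.

```python
# Minimal Kaplan-Meier product-limit estimator, sketched to illustrate
# the survival analysis mentioned above.  Input data are hypothetical.

def kaplan_meier(times, events):
    """times: follow-up in days; events: True = failure (e.g. port
    removal), False = censored.  Returns [(t, survival probability)]
    at each failure time."""
    survival, curve = 1.0, []
    at_risk = len(times)
    for t in sorted(set(times)):
        failures = sum(1 for ti, ei in zip(times, events) if ti == t and ei)
        if failures:
            survival *= 1 - failures / at_risk
            curve.append((t, survival))
        at_risk -= sum(1 for ti in times if ti == t)  # leave the risk set
    return curve

# Five hypothetical ports: failures at days 30 and 90, the rest censored.
curve = kaplan_meier(times=[30, 60, 90, 120, 180],
                     events=[True, False, True, False, False])
```

The last value of `curve` is the estimated port-salvage probability at the final failure time, which is the kind of day-180 figure the study reports.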
Abstract:
Survey map of the Second Welland Canal created by the Welland Canal Company showing a portion of the Grantham Township near Port Dalhousie. Identified structures associated with the Canal include the new towing path. The surveyors' measurements and notes can be seen in red and black ink and pencil. Features of the First Welland Canal are noted in red ink and include the old Lock 2, old towing path and the original bed of the Twelve Mile Creek. Local area landmarks are also identified and include streets and roads (ex. Side Line and Old Road), four unnamed bridges, and a tree stump along the old towing path. A New Road to Port Dalhousie is featured in red ink. Properties and property owners of note are: Concession 3 Lots 21, 22 and 23, Concession 4 Lots 21, 22 and 23, Jabez Johnson, Adam Gould, Peter Weaver and Samuel Wood.
Abstract:
Survey map of the Second Welland Canal created by the Welland Canal Company showing a portion of the Grantham Township sometimes referred to as the Welland Vale. Identified structures associated with the Canal include Lock 2, several weirs, and the Lock Tender's House. The surveyors' measurements and notes can be seen in red and black ink and pencil. Features of the First Welland Canal are noted in red ink and include the old towing path and the Old Canal itself. Local area landmarks and businesses are also identified and include streets and roads (ex. Side Line and Old Road to Port Dalhousie), J. C. Clark's Ice House, J. L. Ranney Store House, a burnt mill, barrel shed, a building leased to Michael Kerrins, and a number of unidentified structures (possibly houses or cabins) belonging to D. Cain, R. Cain, W. Weaver and W. Huddy. A New Road to St. Catharines is featured in red ink. Properties and property owners of note are: Concession 5 Lots 20, 21 and 22, Concession 6 Lots 20 and 21, Thomas Adams, John Gould, George Rykert, Theophilus Mack, William H. Merritt, J. L. Ranney, and the Board of Works.
Abstract:
Survey map of the Second Welland Canal created by the Welland Canal Company showing the canal as it crosses Chippewa Creek in the Thorold Township near Welland. Identified structures and features associated with the Canal include the towing path, the old canal, the aqueduct lock, the new aqueduct, and the waterway itself. The surveyors' measurements and notes can be seen in red and black ink and pencil. Local area landmarks are also identified and include roads (ex. Aqueduct Road, and Road to Welland), Chippewa Creek, the Spoil Bank, a house and a barn. Properties and property owners of note are: Lots 239, 247, and 248, Joseph Burgar, and Smith Shotwell.
Abstract:
Sketch of the lock of the new canal above St. Catharines. The sketch is unsigned, Aug. 18, 1899.