1000 results for AXISYMMETRICAL CLOUD CORES


Relevance: 20.00%

Publisher:

Abstract:

Cloud computing is a new development that is based on the premise that data and applications are stored centrally and can be accessed through the Internet. This article sets up a broad analysis of how the emergence of clouds relates to European competition law, network regulation and electronic commerce regulation, which we relate to challenges for the further development of cloud services in Europe: interoperability and data portability between clouds; issues relating to vertical integration between clouds and Internet Service Providers; and potential problems for clouds operating in the European Internal Market. We find that these issues are not adequately addressed across the legal frameworks that we analyse, and argue for further research into how to better facilitate innovative convergent services such as cloud computing through European policy – especially in light of the ambitious digital agenda that the European Commission has set out.

Relevance: 20.00%

Publisher:

Abstract:

The development of the Internet has made it possible to transfer data ‘around the globe at the click of a mouse’. In particular, new business models such as cloud computing, the newest driver illustrating the speed and breadth of the online environment, allow this data to be processed across national borders on a routine basis. A number of factors cause the Internet to blur the lines between public and private space: firstly, globalization and the outsourcing of economic actors entail an ever-growing exchange of personal data. Secondly, security pressure in the name of the legitimate fight against terrorism opens access to a significant amount of data for an increasing number of public authorities. And finally, the tools of the digital society accompany everyone at each stage of life, leaving permanent individual and borderless traces in both space and time. Therefore, calls from both the public and private sectors for an international legal framework for privacy and data protection have become louder. Companies such as Google and Facebook have also come under continuous pressure from governments and citizens to reform their use of data. Thus, Google was not alone in calling for the creation of ‘global privacy standards’. Efforts are underway to review established privacy foundation documents, and there are similar efforts to look at standards in global approaches to privacy and data protection. The most recent remarkable steps were the Montreux Declaration, in which the privacy commissioners appealed to the United Nations ‘to prepare a binding legal instrument which clearly sets out in detail the rights to data protection and privacy as enforceable human rights’. This appeal was repeated in 2008 at the 30th international conference held in Strasbourg, at the 31st conference in Madrid in 2009, and in 2010 at the 32nd conference in Jerusalem. In a globalized world, free data flow has become an everyday need.
Thus, the aim of global harmonization should be that it makes no difference for data users or data subjects whether data processing takes place in one country or in several. Concern has been expressed that data users might seek to avoid privacy controls by moving their operations to countries with lower standards in their privacy laws, or no such laws at all. To control that risk, some countries have implemented special controls in their domestic law. Again, such controls may interfere with the need for free international data flow, so a formula has to be found to ensure that privacy protection at the international level does not prejudice this principle.

Relevance: 20.00%

Publisher:

Abstract:

Applying location-focused data protection law within the context of a location-agnostic cloud computing framework is fraught with difficulties. While the Proposed EU Data Protection Regulation has introduced many changes to the current data protection framework, the complexities of data processing in the cloud involve multiple layers of actors and intermediaries that have not been properly addressed. This leaves gaps in the regulation when it is analyzed in cloud scenarios. This paper gives a brief overview of the provisions of the regulation that will have an impact on cloud transactions and addresses the missing links. It is hoped that these loopholes will be reconsidered before the final version of the law is passed, in order to avoid unintended consequences.

Relevance: 20.00%

Publisher:

Abstract:

Under the brand name “sciebo – the Campuscloud” (derived from “science box”), a consortium of more than 20 research and applied-science universities has started a large-scale cloud service for about 500,000 students and researchers in North Rhine-Westphalia, Germany’s most populous state. Starting with the much-anticipated, data-privacy-compliant sync & share functionality, sciebo has the potential to become a more general cloud platform for collaboration and research data management, which will be actively pursued in upcoming scientific and infrastructural projects. This project report describes the formation of the venture, its targets, the technical and legal solution, as well as the current status and the next steps.

Relevance: 20.00%

Publisher:

Abstract:

Recent advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing environmental conditions and numbers of users, application performance may suffer, leading to Service Level Agreement (SLA) violations and inefficient use of hardware resources. We introduce a system for controlling the complexity of scaling applications composed of multiple services, using mechanisms based on fulfillment of SLAs. We present how service monitoring information can be used in conjunction with service level objectives, predictions, and correlations between performance indicators to optimize the allocation of services belonging to distributed applications. We validate our models using experiments and simulations involving a distributed enterprise information system, and show how discovering correlations between application performance indicators can serve as a basis for creating refined service level objectives, which can then be used for scaling the application and improving its overall performance under similar conditions.

Relevance: 20.00%

Publisher:

Abstract:

Seven rib-eye rolls, lip on (112A), were each cut into eight 2.54-centimeter-thick steaks, starting from the blade end. Steaks were randomly assigned to one of four treatment groups: 1) round versus square cores using the Instron [inst1], 2) round versus square cores using the Warner-Bratzler [inst2], 3) Instron versus Warner-Bratzler using round cores [rdsq1], and 4) Instron versus Warner-Bratzler using square cores [rdsq2]. Subsequently, steaks from each group were broiled in a General Electric industrial broiler grill to an internal temperature of 63 °C. Steaks were held overnight at 2 °C. Two steaks from each rib were placed into each instrument/core treatment group. Steaks were then divided into three sections identified as a) lateral, b) medial, and c) central. Three 1.27-centimeter cores from each section were taken from each steak, for a total of nine cores per steak, and sheared once through the center. The results indicated a significant difference (p < .05) between round and square cores for both the Warner-Bratzler and the Instron. In all mean groups tested, square cores had higher shear values than round cores. There was no indication of differences between instruments, and no significant interactions between instruments and core types.

Relevance: 20.00%

Publisher:

Abstract:

Modern cloud-based applications and infrastructures may include resources and services (components) from multiple cloud providers; they are heterogeneous by nature and require adjustment, composition and integration. Current static, predefined cloud integration architectures and models can meet specific application requirements only with difficulty. In this paper, we propose the Intercloud Operations and Management Framework (ICOMF) as part of the more general Intercloud Architecture Framework (ICAF), which provides a basis for building and operating a dynamically manageable multi-provider cloud ecosystem. The proposed ICOMF enables dynamic resource composition and decomposition, with a main focus on translating business models and objectives into ensembles of cloud services. Our model is user-centric and focuses on specific application execution requirements by leveraging emerging virtualization techniques. From a cloud provider's perspective, the ecosystem provides more insight into how best to customize the offerings of virtualized resources.

Relevance: 20.00%

Publisher:

Abstract:

We describe a system for performing SLA-driven management and orchestration of distributed infrastructures composed of services supporting mobile computing use cases. In particular, we focus on a Follow-Me Cloud scenario, in which we consider mobile users accessing cloud-enabled services. We combine an SLA-driven approach to infrastructure optimization with forecast-based preventive actions against performance degradation and with pattern detection, to support mobile cloud infrastructure management. We present our system's information model and architecture, including the algorithmic support and the proposed scenarios for system evaluation.
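The forecast-based preventive idea can be sketched in a few lines. This is a toy illustration with invented names and numbers, not the paper's actual forecasting model:

```python
# Naive one-step linear extrapolation used to trigger a preventive action
# BEFORE an SLA bound is crossed (a real system would use proper forecasting).
def forecast_violation(samples, threshold, horizon=3):
    """Extrapolate the last observed trend linearly and report whether the
    metric is predicted to exceed `threshold` within `horizon` steps."""
    if len(samples) < 2:
        return False
    slope = samples[-1] - samples[-2]          # last one-step trend
    predicted = samples[-1] + slope * horizon  # value `horizon` steps ahead
    return predicted > threshold

# Latency is trending upward; act before the 300 ms SLA bound is reached.
print(forecast_violation([180, 210, 240], threshold=300))  # → True
```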

Relevance: 20.00%

Publisher:

Abstract:

The Future Communication Architecture for Mobile Cloud Services: Mobile Cloud Networking (MCN) is an EU FP7 Large-scale Integrating Project (IP) funded by the European Commission. The MCN project was launched in November 2012 for a period of 36 months. In total, 19 top-tier partners from industry and academia have committed to jointly establishing the vision of Mobile Cloud Networking and developing a fully cloud-based mobile communication and application platform.

Relevance: 20.00%

Publisher:

Abstract:

Ancient Lake Ohrid is a steep-sided, oligotrophic, karst lake that most likely formed tectonically during the Pliocene and is often referred to as a hotspot of endemic biodiversity. This study aims at tracing significant lake level fluctuations at Lake Ohrid using high-resolution acoustic data in combination with lithological, geochemical, and chronological information from two sediment cores recovered from sub-aquatic terrace levels at ca. 32 and 60 m water depth. According to our data, significant lake level fluctuations with prominent lowstands of ca. 60 and 35 m below the present water level occurred during Marine Isotope Stage (MIS) 6 and MIS 5, respectively. The effect of these lowstands on biodiversity in most coastal parts of the lake was negligible, due to only small changes in lake surface area, coastline, and habitat. In contrast, biodiversity in shallower areas was more severely affected by the disconnection of today's sublacustrine springs from the main water body. Multichannel seismic data from deeper parts of the lake clearly image several clinoform structures stacked on top of each other. These stacked clinoforms indicate significantly lower lake levels prior to MIS 6 and a stepwise rise of the water level, with intermittent stillstands, since the lake's existence as a water-filled body, which might have caused enhanced expansion of endemic species within Lake Ohrid.

Relevance: 20.00%

Publisher:

Abstract:

Virtualisation of cellular networks can be seen as a way to significantly reduce the complexity of the processes required nowadays to provide reliable cellular networks. The Future Communication Architecture for Mobile Cloud Services: Mobile Cloud Networking (MCN) is an EU FP7 Large-scale Integrating Project (IP) funded by the European Commission that focuses on cloud computing concepts to achieve virtualisation of cellular networks. It aims at the development of a fully cloud-based mobile communication and application platform; more specifically, it aims to investigate, implement and evaluate the technological foundations for a mobile communication system based on Long Term Evolution (LTE), with Mobile Network plus Decentralized Computing plus Smart Storage offered as one atomic service: on-demand, elastic and pay-as-you-go. This paper provides a brief overview of the MCN project and discusses the challenges that need to be solved.

Relevance: 20.00%

Publisher:

Abstract:

For atmospheric CO2 reconstructions using ice cores, the technique used to release the trapped air from the ice samples is essential for the precision and accuracy of the measurements. We present here a new dry extraction technique in combination with a new gas analytical system that together show significant improvements over current systems. Ice samples (3–15 g) are pulverised using a novel centrifugal ice microtome (CIM), which shaves the ice in a cooled vacuum chamber (−27 °C) in which no friction occurs thanks to the use of magnetic bearings. Neither the shaving principle of the CIM nor the use of magnetic bearings has previously been applied in this field. Shaving the ice samples produces finer ice powder and releases at least 90% of the trapped air, compared to 50%–70% when needle crushing is employed. In addition, the friction-free motion, together with an optimized design that reduces contamination of the inner surfaces of the device, results in a reduced system offset of about 2.0 ppmv, compared to 4.9 ppmv previously. The gas analytical part shows a precision higher by a factor of two than the corresponding part of our previous system, and all processes except the loading and cleaning of the CIM now run automatically. Compared to our previous system, the complete system shows a measurement reproducibility about 3 times better, at about 1.1 ppmv (1σ), which is similar to the best reproducibility of other systems applied in this field. With this high reproducibility, replicate measurements are no longer required for most future measurement campaigns, resulting in a possible output of 12–20 measurements per day, compared to a maximum of 6 with other systems.
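For readers unfamiliar with the quoted figure, a 1σ measurement reproducibility such as the ~1.1 ppmv above is simply the standard deviation of replicate measurements. A minimal illustration with made-up numbers (the values below are not data from this study):

```python
# Reproducibility (1 sigma) as the sample standard deviation of replicate
# CO2 measurements on the same ice; all values here are hypothetical.
from statistics import stdev

def reproducibility_1sigma(replicates_ppmv):
    """Sample standard deviation of replicate CO2 measurements, in ppmv."""
    return stdev(replicates_ppmv)

replicates = [188.2, 189.5, 187.9, 190.1, 188.8]  # invented ppmv values
print(round(reproducibility_1sigma(replicates), 2))  # → 0.91
```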

Relevance: 20.00%

Publisher:

Abstract:

The Toba eruption that occurred some 74 ka ago in Sumatra, Indonesia, is among the largest volcanic events on Earth of the last 2 million years. Tephra from this eruption has been spread over vast areas of Asia, where it constitutes a major time marker close to the Marine Isotope Stage 4/5 boundary. As yet, no tephra associated with Toba has been identified in Greenland or Antarctic ice cores. Based on new accurate dating of Toba tephra and on accurately dated European stalagmites, the Toba event is known to have occurred between the onsets of Greenland interstadials (GI) 19 and 20. Furthermore, the existing linking of Greenland and Antarctic ice cores by gas records and by the bipolar seesaw hypothesis suggests that its Antarctic counterpart is situated between Antarctic Isotope Maxima (AIM) 19 and 20. In this work we suggest a direct synchronization of the Greenland (NGRIP) and Antarctic (EDML) ice cores at the Toba eruption, based on matching a pattern of bipolar volcanic spikes. Annual layer counting between volcanic spikes in both cores allows for a unique match. We first demonstrate this bipolar matching technique at the already synchronized Laschamp geomagnetic excursion (41 ka BP) before applying it to the suggested Toba interval. The Toba synchronization pattern covers some 2000 yr in GI-20 and AIM-19/20 and includes nine acidity peaks that are recognized in both ice cores. The suggested bipolar Toba synchronization has decadal precision. It thus allows a determination of the exact phasing of inter-hemispheric climate in a time interval of poorly constrained ice core records, and it allows the climatic impact of the Toba eruption to be discussed in a global perspective. The bipolar linking gives no support to a long-term global cooling caused by the Toba eruption, as Antarctica experienced a major warming shortly after the event.
Furthermore, our bipolar match provides a way to place palaeo-environmental records other than ice cores into a precise climatic context.
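The core of such a pattern match, matching a sequence of volcanic acidity spikes in one core against the other using counted annual layers between them, can be sketched as a toy brute-force search. This is an invented illustration, not the authors' algorithm, and the spike positions are made up:

```python
# Toy bipolar matching: find the shift (in counted annual layers) of core B's
# spike positions that aligns the most spikes with core A within a tolerance.
def best_offset(spikes_a, spikes_b, tolerance=3):
    """Return (shift, matched_spikes) maximizing the number of aligned spikes."""
    best, best_hits = 0, -1
    for a in spikes_a:
        for b in spikes_b:
            shift = a - b  # candidate alignment of one spike pair
            hits = sum(
                any(abs(x - (y + shift)) <= tolerance for y in spikes_b)
                for x in spikes_a
            )
            if hits > best_hits:
                best, best_hits = shift, hits
    return best, best_hits

# Hypothetical spike positions (layer counts from an arbitrary datum per core).
ngrip = [0, 112, 305, 480, 733]
edml = [50, 163, 355, 530, 782]   # same pattern, datum offset by 50 layers
print(best_offset(ngrip, edml))   # → (-50, 5)
```

A unique best-scoring shift is what makes the match "unique" in the sense of the abstract; with nine peaks over ~2000 yr, the pattern strongly constrains the alignment.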

Relevance: 20.00%

Publisher:

Abstract:

The important active and passive roles of mineral dust aerosol in the climate and the global carbon cycle over the last glacial/interglacial cycles have been recognized. However, little data on the most important aeolian dust-derived biological micronutrient, iron (Fe), has so far been available from ice cores from Greenland or Antarctica. Furthermore, Fe deposition reconstructions derived from the palaeoproxies particulate dust and calcium differ significantly from the available Fe flux data. The ability to measure Fe at high temporal resolution in polar ice cores is crucial for studying the timing and magnitude of relationships between geochemical events and biological responses in the open ocean. This work couples an existing flow injection analysis (FIA) methodology for low-level trace Fe determinations with an existing glaciochemical analysis system, continuous flow analysis (CFA) of ice cores. Fe-induced oxidation of N,N′-dimethyl-p-phenylenediamine (DPD) is used to quantify the biologically more important and easily leachable Fe fraction released in a controlled digestion step at pH ∼1.0. The developed method was successfully applied to the determination of labile Fe in samples from the Antarctic Byrd ice core and the Greenland Ice-Core Project (GRIP) ice core.

Relevance: 20.00%

Publisher:

Abstract:

Cloud computing provides a promising solution to the genomics data deluge resulting from the advent of next-generation sequencing (NGS) technology. Based on the concepts of “resources-on-demand” and “pay-as-you-go”, scientists with little or no infrastructure can have access to scalable and cost-effective computational resources. However, the large size of NGS data causes significant data transfer latency from the client’s site to the cloud, which presents a bottleneck for using cloud computing services. In this paper, we provide a streaming-based scheme to overcome this problem, in which the NGS data are processed while being transferred to the cloud. Our scheme targets the wide class of NGS data analysis tasks in which the NGS sequences can be processed independently of one another. We also provide the elastream package, which supports the use of this scheme with individual analysis programs or with workflow systems. Experiments presented in this paper show that our solution mitigates the effect of data transfer latency and saves both the time and the cost of computation.
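The streaming idea, processing each chunk of reads as it arrives rather than waiting for the whole transfer to finish, can be illustrated with a simplified sketch. This is not the elastream implementation; the generator standing in for the network and the GC-content task are invented for illustration:

```python
# Simplified streaming analysis: each arriving chunk of independent NGS reads
# is processed immediately, overlapping computation with the transfer.
def transfer(chunks):
    """Simulate a network transfer yielding chunks of reads as they arrive."""
    for chunk in chunks:
        yield chunk  # in a real system: read from a socket or object store

def gc_content(read):
    """Fraction of G/C bases in a read (a stand-in per-read analysis task)."""
    return sum(base in "GC" for base in read) / len(read)

def streaming_analyze(chunks):
    """Analyze reads while 'in flight'; valid because reads are independent."""
    results = []
    for chunk in transfer(chunks):
        results.extend(gc_content(r) for r in chunk)
    return results

reads = [["ACGT", "GGCC"], ["ATAT"]]  # two arriving chunks of reads
print(streaming_analyze(reads))  # → [0.5, 1.0, 0.0]
```

The key property exploited here is the one stated in the abstract: since each sequence is processed independently, no analysis step needs the complete dataset to be present before work can begin.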