768 results for Cloud Computing, Risk Assessment, Security, Framework


Relevance: 100.00%

Publisher:

Abstract:

Measuring pollinator performance has become increasingly important with emerging needs for risk assessment in conservation and sustainable agriculture that require multi-year and multi-site comparisons across studies. However, comparing pollinator performance across studies is difficult because of the diversity of concepts and disparate methods in use. Our review of the literature shows many unresolved ambiguities. Two different assessment concepts predominate: the first estimates stigmatic pollen deposition and the underlying pollinator behaviour parameters, while the second estimates the pollinator’s contribution to plant reproductive success, for example in terms of seed set. Both concepts include a number of parameters combined in diverse ways and named under a diversity of synonyms and homonyms. However, these concepts are overlapping because pollen deposition success is the most frequently used proxy for assessing the pollinator’s contribution to plant reproductive success. We analyse the diverse concepts and methods in the context of a new proposed conceptual framework with a modular approach based on pollen deposition, visit frequency, and contribution to seed set relative to the plant’s maximum female reproductive potential. A system of equations is proposed to optimize the balance between idealised theoretical concepts and practical operational methods. Our framework permits comparisons over a range of floral phenotypes, and spatial and temporal scales, because scaling up is based on the same fundamental unit of analysis, the single visit.

Relevance: 100.00%

Publisher:

Abstract:

The method of entropy has been useful in evaluating inconsistency in human judgments. This paper applies an entropy-based decision support system called e-FDSS to multicriterion risk and decision analysis in projects of construction small and medium enterprises (SMEs). The system is optimized and solved by fuzzy logic, entropy, and genetic algorithms. A case study demonstrated the use of entropy in e-FDSS for analyzing multiple risk criteria in the predevelopment stage of SME projects. Survey data on the degree of impact of selected project risk criteria on different projects were input into the system in order to evaluate the preidentified project risks in an impartial environment. Without taking into account the amount of uncertainty embedded in the evaluation process, all decision vectors are full of bias; the results show that the deviations of these decisions can be quantified, providing a more objective decision and risk assessment profile to project stakeholders in order to search for and screen the most profitable projects.
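Since the abstract leans on entropy as a measure of the information carried by inconsistent judgments, the following sketch shows the standard entropy-weighting computation on a small decision matrix. The project scores and criteria names are hypothetical illustrations, not data from the paper, and the snippet is not the e-FDSS implementation itself.

```python
import numpy as np

def entropy_weights(scores: np.ndarray) -> np.ndarray:
    """Shannon-entropy weights for a decision matrix (rows = projects,
    columns = risk criteria): the more a criterion's scores vary across
    projects, the more information it carries and the larger its weight."""
    p = scores / scores.sum(axis=0)          # normalise each criterion column
    k = 1.0 / np.log(scores.shape[0])        # scale entropy into [0, 1]
    logp = np.zeros_like(p)
    np.log(p, out=logp, where=p > 0)         # log(0) terms contribute 0
    entropy = -k * (p * logp).sum(axis=0)
    divergence = 1.0 - entropy               # degree of diversification per criterion
    return divergence / divergence.sum()

# Hypothetical survey scores (impact, 1-9) for three SME projects on four risk criteria.
scores = np.array([
    [7.0, 3.0, 5.0, 2.0],
    [6.0, 4.0, 5.0, 8.0],
    [2.0, 9.0, 5.0, 4.0],
])
print("criterion weights:", np.round(entropy_weights(scores), 3))
```

Note that a criterion scored identically across all projects (the third column) carries no discriminating information and receives zero weight.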

Relevance: 100.00%

Publisher:

Abstract:

Body area networks (BANs) are emerging as an enabling technology for many human-centered application domains such as health care, sport, fitness, wellness, ergonomics, emergency, safety, security, and sociality. A BAN, which basically consists of wireless wearable sensor nodes usually coordinated by a static or mobile device, is mainly exploited to monitor a single assisted living individual. Data generated by a BAN can be processed in real time by the BAN coordinator and/or transmitted to a server side for online/offline processing and long-term storage. A network of BANs worn by a community of people produces a large amount of contextual data that requires a scalable and efficient approach for elaboration and storage. Cloud computing can provide a flexible storage and processing infrastructure to perform both online and offline analysis of body sensor data streams. In this paper, we motivate the introduction of Cloud-assisted BANs along with the main challenges that need to be addressed for their development and management. The current state of the art is overviewed and framed according to the main requirements for effective Cloud-assisted BAN architectures. Finally, relevant open research issues in terms of efficiency, scalability, security, interoperability, prototyping, and dynamic deployment and management are discussed.
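As a rough illustration of the coordinator-centric pattern described above (wearable nodes feed a coordinator, which processes streams in real time and ships summaries to the cloud), here is a minimal sketch. The node names, the sampling model and the upload_to_cloud stub are hypothetical placeholders, not part of any actual BAN stack.

```python
import random
import statistics
from collections import defaultdict

def read_sensor(node: str) -> float:
    """Stand-in for one wearable-node sample (e.g. heart rate); a real BAN
    would receive these over a low-power radio link."""
    return random.gauss(mu=75, sigma=8)

def upload_to_cloud(summary: dict) -> None:
    """Placeholder for the server-side push (HTTP/MQTT in a real system)."""
    print("uploading summary:", summary)

def coordinator_cycle(nodes, samples_per_node=50):
    buffers = defaultdict(list)
    for _ in range(samples_per_node):           # collect one acquisition window
        for node in nodes:
            buffers[node].append(read_sensor(node))
    # Real-time, on-coordinator processing: reduce raw streams to compact summaries.
    summary = {
        node: {"mean": round(statistics.mean(v), 1), "max": round(max(v), 1)}
        for node, v in buffers.items()
    }
    upload_to_cloud(summary)                     # long-term storage and offline analysis happen cloud-side

coordinator_cycle(["chest_ecg", "wrist_ppg", "ankle_accel"])
```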

Relevance: 100.00%

Publisher:

Abstract:

The usefulness of stress myocardial perfusion scintigraphy for cardiovascular (CV) risk stratification in chronic kidney disease remains controversial. We tested the hypothesis that different clinical risk profiles influence the test. We assessed the prognostic value of myocardial scintigraphy in 892 consecutive renal transplant candidates classified into four risk groups: very high (aged ≥50 years, diabetes and CV disease), high (two factors), intermediate (one factor) and low (no factor). The incidence of CV events and death was 20% and 18%, respectively (median follow-up 22 months). Altered stress testing was associated with an increased probability of cardiovascular events only in intermediate-risk (one risk factor) patients [30.3% versus 10%, hazard ratio (HR) 2.37, confidence interval (CI) 1.69-3.33, P < 0.0001]. Low-risk patients did well regardless of scan results. In patients with two or three risk factors, an altered stress test did not add to the already increased CV risk. Myocardial scintigraphy was related to overall mortality only in intermediate-risk patients (HR 2.8, CI 1.5-5.1, P = 0.007). CV risk stratification based on myocardial stress testing is useful only in patients with just one risk factor. Screening may avoid unnecessary testing in 60% of patients, help stratify the risk of events and provide an explanation for the inconsistent performance of myocardial scintigraphy.

Relevance: 100.00%

Publisher:

Abstract:

OBJECTIVE: To assess cardiovascular risk, using the Framingham risk score, in a sample of hypertensive individuals from a public primary care unit. METHODS: The caseload comprised hypertensive individuals, according to the criteria established by the JNC VII (2003), among 1,601 patients followed up in 1999 at the Cardiology and Arterial Hypertension Outpatient Clinic of the Teaching Primary Care Unit, Faculdade de Medicina de Ribeirão Preto, Universidade de São Paulo. Patients were selected by draw, were aged over 20 years, of both genders, and pregnant women were excluded. It was a descriptive, cross-sectional, observational study. The Framingham risk score was used to stratify the cardiovascular risk of developing coronary artery disease (death or non-fatal acute myocardial infarction). RESULTS: The age range was 27-79 years (mean = 63.2 ± 9.58). Of the 382 individuals studied, 270 (70.7%) were female and 139 (36.4%) were characterized as high cardiovascular risk for presenting diabetes mellitus or atherosclerosis documented by an event or procedure. Of the 243 stratified patients, 127 (52.3%) had HDL-C < 50 mg/dL; 210 (86.4%) had systolic blood pressure > 120 mmHg; 46 (18.9%) were smokers; and 33 (13.6%) had a high cardiovascular risk. These, added to the 139 enrolled directly as high cardiovascular risk, totaled 172 (45%); 77 (20.2%) were at medium cardiovascular risk and 133 (34.8%) at low risk. The highest percentage of high cardiovascular risk individuals was aged over 70 years; those at medium risk were aged over 60 years; and the low-risk patients were aged 50 to 69 years. CONCLUSION: The significant number of high and medium cardiovascular risk individuals indicates the need to follow them up closely.

Relevance: 100.00%

Publisher:

Abstract:

The distributed nature of Cloud Computing, which involves a high degree of resource sharing and a multitude of accesses to computing systems, allows intruders to exploit this technology for malicious purposes. To counter intrusions and attacks on users' sensitive data, intrusion detection systems and defence methods are implemented in virtualized environments, with the aim of guaranteeing overall security founded on both prevention and remediation: an effective security system must detect intrusions and imminent threats, providing a first, a-priori defensive phase, and at the same time avoid total failures even after suffering damage, keeping the quality of service high and thus guaranteeing a second, a-posteriori defensive phase. This thesis illustrates the many ways in which distributed attacks and malicious hacking operate, with particular reference to the latest generation of threats, and defines the main strategies and techniques for guaranteeing security, protection and data integrity within a Cloud system.
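Purely as an illustration of the a-priori (detection) phase discussed above, the sketch below flags traffic sources whose request rate deviates strongly from a normal baseline. The VM names, rates and threshold are invented; real cloud intrusion detection combines much richer signature- and behaviour-based analysis in the virtualization layer.

```python
import statistics

def flag_anomalies(baseline: list, current: dict, k: float = 3.0) -> list:
    """Flag sources whose current request rate lies more than k standard
    deviations above a baseline of normal traffic -- a crude detection step."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0   # avoid division by zero on a flat baseline
    return [src for src, rate in current.items() if (rate - mean) / stdev > k]

# Hypothetical per-minute request rates: a week of normal traffic, then one observation window.
normal_week = [118, 124, 131, 120, 127, 122, 129]
this_minute = {"vm-a": 125, "vm-b": 133, "vm-c": 2400}
print(flag_anomalies(normal_week, this_minute))   # ['vm-c']
```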

Relevance: 100.00%

Publisher:

Abstract:

Nowadays, data handling and data analysis in High Energy Physics require a vast amount of computational power and storage. In particular, the Worldwide LHC Computing Grid (LCG), an infrastructure and pool of services developed and deployed by a large community of physicists and computer scientists, proved to be a game changer for the efficiency of data analyses during Run-I at the LHC, playing a crucial role in the Higgs boson discovery. Recently, the Cloud computing paradigm has been emerging and reaching a considerable level of adoption by many different scientific organizations and beyond. Clouds allow access to and use of large computing resources that are not owned by the user and are shared among many scientific communities. Considering the challenging requirements of LHC physics in Run-II and beyond, the LHC computing community is interested in exploring Clouds and seeing whether they can provide a complementary approach - or even a valid alternative - to the existing technological solutions based on the Grid. In the LHC community, several experiments have been adopting Cloud approaches, and in particular the experience of the CMS experiment is of relevance to this thesis. The LHC Run-II has just started, and Cloud-based solutions are already in production for CMS. However, other approaches to Cloud usage are being considered and are at the prototype level, such as the work done in this thesis. This effort is of paramount importance for equipping CMS with the capability to elastically and flexibly access and utilize the computing resources needed to face the challenges of Run-III and Run-IV. The main purpose of this thesis is to present forefront Cloud approaches that allow the CMS experiment to extend to on-demand resources dynamically allocated as needed. Moreover, direct access to Cloud resources is presented as a suitable use case to address the needs of the CMS experiment. Chapter 1 presents an overview of High Energy Physics at the LHC and of the CMS experience in Run-I, as well as the preparation for Run-II. Chapter 2 describes the current CMS Computing Model, and Chapter 3 presents the Cloud approaches pursued and used within the CMS Collaboration. Chapter 4 and Chapter 5 discuss the original and forefront work done in this thesis to develop and test working prototypes of elastic extensions of CMS computing resources on Clouds, and of HEP Computing "as a Service". The impact of this work on benchmark CMS physics use cases is also demonstrated.
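To make the idea of an "elastic extension" concrete, here is a minimal sketch of a scaling loop that sizes a dynamically allocated pool of worker VMs to the batch workload. The CloudClient class and its scale_to call are hypothetical placeholders and do not represent the actual CMS or WLCG tooling.

```python
import math

class CloudClient:
    """Hypothetical stand-in for a cloud provider API."""
    def __init__(self):
        self.instances = 0
    def scale_to(self, n: int) -> None:
        print(f"provisioning change: {self.instances} -> {n} worker VMs")
        self.instances = n

def elastic_step(pending_jobs: int, running_jobs: int, cloud: CloudClient,
                 jobs_per_vm: int = 8, max_vms: int = 200) -> None:
    """One iteration of the elasticity loop: size the dynamically allocated
    pool to the current workload instead of keeping static resources."""
    wanted = math.ceil((pending_jobs + running_jobs) / jobs_per_vm)
    cloud.scale_to(min(wanted, max_vms))

cloud = CloudClient()
for pending, running in [(1200, 0), (300, 900), (0, 150), (0, 0)]:
    elastic_step(pending, running, cloud)
```

The point of the sketch is only the control pattern: resources follow demand up and back down, which is what distinguishes an on-demand Cloud extension from a statically pledged Grid site.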

Relevance: 100.00%

Publisher:

Abstract:

The 5th generation of mobile networking introduces the concept of "network slicing": the network is "sliced" horizontally, and each slice complies with different requirements in terms of network parameters such as bandwidth and latency. This technology is built on logical rather than physical resources and relies on the virtual network as the main concept for obtaining a logical resource. Network Function Virtualisation (NFV) provides the concept of logical resources for a virtual network function, enabling the concept of a virtual network; it relies on Software Defined Networking (SDN) as the main technology to realize the virtual network as a resource, and it also defines the concept of a virtual network infrastructure with all the components needed to meet the network slicing requirements. SDN itself uses cloud computing technology to realize the virtual network infrastructure, and NFV also uses virtual computing resources to enable the deployment of virtual network functions instead of requiring custom hardware and software for each network function. The key to network slicing is the differentiation of slices in terms of Quality of Service (QoS) parameters, which relies on the possibility of enabling QoS management in a cloud computing environment. QoS in cloud computing denotes the levels of performance, reliability, and availability offered. QoS is fundamental for cloud users, who expect providers to deliver the advertised quality characteristics, and for cloud providers, who need to find the right trade-off between the QoS levels they can offer and operational costs. While QoS properties received constant attention before the advent of cloud computing, the performance heterogeneity and resource isolation mechanisms of cloud platforms have significantly complicated QoS analysis, deployment, prediction, and assurance. This is prompting several researchers to investigate automated QoS management methods that can leverage the high programmability of hardware and software resources in the cloud.
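A minimal sketch of the automated QoS management step described above: each slice's measured KPIs are compared against its SLA and a scaling decision is emitted for the underlying virtual resources. Slice names, thresholds and measurements are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class SliceSLA:
    name: str
    max_latency_ms: float      # e.g. a low-latency slice wants single-digit latency
    min_bandwidth_mbps: float

def qos_decisions(slas, measurements):
    """Automated QoS management step: for each slice, compare measured KPIs
    with its SLA and decide whether the virtual resources (VNF instances,
    virtual links) behind the slice need to be scaled."""
    actions = {}
    for sla in slas:
        latency, bandwidth = measurements[sla.name]
        if latency > sla.max_latency_ms or bandwidth < sla.min_bandwidth_mbps:
            actions[sla.name] = "scale out VNFs / add virtual capacity"
        else:
            actions[sla.name] = "ok"
    return actions

slas = [SliceSLA("urllc", 5, 10), SliceSLA("embb", 50, 300), SliceSLA("mmtc", 200, 1)]
measured = {"urllc": (7.2, 12), "embb": (31.0, 420), "mmtc": (90.0, 2)}
print(qos_decisions(slas, measured))
```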

Relevance: 100.00%

Publisher:

Abstract:

Every day, large volumes of data are generated from a variety of sources. These data, known as Big Data, are currently the object of strong interest in the IT (Information Technology) sector. Digitalized processes, social media interactions, sensors and the mobile systems we use every day are only a small subset of all the sources that contribute to the production of these data. Many technologies have been developed to analyse and extract information from such large volumes of data, and many of them exploit distributed and parallel approaches. One of the most successful technologies for processing Big Data is Apache Hadoop. Cloud Computing, in particular solutions that follow the IaaS (Infrastructure as a Service) model, provides a valuable tool for provisioning resources simply and quickly. For this reason, this proposal uses OpenStack as the IaaS platform. By integrating the OpenStack and Hadoop technologies through Sahara, the potential offered by a cloud environment can be exploited to improve the performance of distributed and parallel processing. The goal of this work is to obtain a better distribution of the resources used in the cloud system, with load-balancing objectives. To achieve these goals, modifications to both the Hadoop framework and the Sahara project were necessary.
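As a toy illustration of the load-balancing objective pursued here, the sketch below places new Hadoop workers on the least-loaded OpenStack compute hosts. Host names and load values are hypothetical, and this greedy heuristic is not the actual modification made to Hadoop or Sahara.

```python
def place_workers(host_load: dict, new_workers: int, worker_cost: float = 1.0) -> dict:
    """Greedy least-loaded placement: assign each new Hadoop worker to the
    host with the lowest current load, approximating the load-balancing goal."""
    placement = {host: 0 for host in host_load}
    load = dict(host_load)
    for _ in range(new_workers):
        target = min(load, key=load.get)   # least-loaded host at this step
        placement[target] += 1
        load[target] += worker_cost
    return placement

# Hypothetical normalised load of the OpenStack compute hosts before scaling a cluster.
current = {"compute-1": 0.8, "compute-2": 0.3, "compute-3": 0.5}
print(place_workers(current, new_workers=4))
# -> {'compute-1': 1, 'compute-2': 2, 'compute-3': 1}
```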

Relevance: 100.00%

Publisher:

Abstract:

Light-frame wood buildings are widely built in the United States (U.S.). Natural hazards cause huge losses to light-frame wood construction. This study proposes methodologies and a framework to evaluate the performance and risk of light-frame wood construction. Performance-based engineering (PBE) aims to ensure that a building achieves the desired performance objectives when subjected to hazard loads. In this study, the collapse risk of a typical one-story light-frame wood building is determined using the Incremental Dynamic Analysis method. The collapse risks of buildings at four sites in the Eastern, Western, and Central regions of the U.S. are evaluated. Various sources of uncertainty are considered in the collapse risk assessment so that the influence of uncertainties on the collapse risk of light-frame wood construction can be evaluated. The collapse risks of the same building subjected to maximum considered earthquakes in different seismic zones are found to be non-uniform. In certain areas of the U.S., snow accumulation is significant, causes huge economic losses, and threatens life safety. Limited work has been done to investigate the snow hazard combined with a seismic hazard. A Filtered Poisson Process (FPP) model is developed in this study, overcoming the shortcomings of the typically used Bernoulli model. The FPP model is validated by comparing simulation results to weather records obtained from the National Climatic Data Center. The FPP model is applied in the proposed framework to assess the risk of a light-frame wood building subjected to combined snow and earthquake loads. Snow accumulation has a significant influence on the seismic losses of the building, and the Bernoulli snow model underestimates the seismic loss of buildings in areas with snow accumulation. An object-oriented framework is proposed in this study to perform risk assessment for light-frame wood construction. For homeowners and stakeholders, risks expressed in terms of economic losses are much easier to understand than engineering parameters (e.g., inter-story drift). The proposed framework is used in two applications. One is to assess the loss of a building subjected to mainshock-aftershock sequences; aftershock and downtime costs are found to be important factors in the assessment of seismic losses. The framework is also applied to a wood building in the state of Washington to assess the loss of the building subjected to combined earthquake and snow loads. The proposed framework proves to be an appropriate tool for risk assessment of buildings subjected to multiple hazards. Limitations and future work are also identified.
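A minimal sketch of the Filtered Poisson Process idea used for the snow hazard: storm arrivals follow a Poisson process, each event deposits a random load, and the load decays (melts) between events. The parameter values are illustrative only and are not the calibrated values from the study.

```python
import numpy as np

def simulate_snow_load_fpp(days=120, arrival_rate=0.15, mean_depth_kpa=0.4,
                           decay_per_day=0.05, seed=1):
    """Filtered Poisson Process sketch: Poisson storm arrivals, exponentially
    distributed deposits per storm, and exponential decay between storms.
    Returns the daily ground snow load history in kPa."""
    rng = np.random.default_rng(seed)
    load, history = 0.0, []
    for _ in range(days):
        load *= (1.0 - decay_per_day)                  # decay filter between events
        for _ in range(rng.poisson(arrival_rate)):     # storms arriving this day
            load += rng.exponential(mean_depth_kpa)    # random deposit per storm
        history.append(load)
    return history

history = simulate_snow_load_fpp()
print(f"simulated peak winter snow load: {max(history):.2f} kPa")
```

Unlike a Bernoulli on/off snow indicator, the filtered process keeps track of how much load is on the roof at any time, which is what matters when it is combined with an earthquake occurring mid-winter.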

Relevance: 100.00%

Publisher:

Abstract:

Modern cloud-based applications and infrastructures may include resources and services (components) from multiple cloud providers, are heterogeneous by nature, and require adjustment, composition and integration. Current static, predefined cloud integration architectures and models can meet specific application requirements only with difficulty. In this paper, we propose the Intercloud Operations and Management Framework (ICOMF) as part of the more general Intercloud Architecture Framework (ICAF), which provides a basis for building and operating a dynamically manageable multi-provider cloud ecosystem. The proposed ICOMF enables dynamic resource composition and decomposition, with a main focus on translating business models and objectives into ensembles of cloud services. Our model is user-centric and focuses on the specific application execution requirements by leveraging incubating virtualization techniques. From a cloud provider perspective, the ecosystem provides more insight into how best to customize the offerings of virtualized resources.
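To illustrate the kind of dynamic resource composition such a framework targets, here is a toy sketch that assembles a service ensemble from the offers of multiple providers. The provider catalogue, requirement format and cheapest-fit rule are hypothetical simplifications, not the ICOMF/ICAF mechanisms themselves.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    provider: str
    service: str        # e.g. "compute", "storage", "cdn"
    capacity: int
    price: float

def compose_ensemble(requirements: dict, catalogue: list) -> list:
    """Dynamic composition sketch: for each required service, pick the
    cheapest offer with enough capacity, possibly from different providers."""
    ensemble = []
    for service, needed in requirements.items():
        candidates = [o for o in catalogue if o.service == service and o.capacity >= needed]
        if not candidates:
            raise LookupError(f"no provider can satisfy {service}={needed}")
        ensemble.append(min(candidates, key=lambda o: o.price))
    return ensemble

catalogue = [
    Offer("provider-A", "compute", 64, 3.2), Offer("provider-B", "compute", 128, 2.9),
    Offer("provider-A", "storage", 500, 0.8), Offer("provider-C", "storage", 2000, 1.1),
]
print(compose_ensemble({"compute": 96, "storage": 400}, catalogue))
```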

Relevance: 100.00%

Publisher:

Abstract:

There is a need to validate risk assessment tools for hospitalised medical patients at risk of venous thromboembolism (VTE). We investigated whether a predefined cut-off of the Geneva Risk Score, as compared to the Padua Prediction Score, accurately distinguishes low-risk from high-risk patients regardless of the use of thromboprophylaxis. In the multicentre, prospective Explicit ASsessment of Thromboembolic RIsk and Prophylaxis for Medical PATients in SwitzErland (ESTIMATE) cohort study, 1,478 hospitalised medical patients were enrolled, of whom 637 (43%) did not receive thromboprophylaxis. The primary endpoint was symptomatic VTE or VTE-related death at 90 days. The study is registered at ClinicalTrials.gov, number NCT01277536. According to the Geneva Risk Score, the cumulative rate of the primary endpoint was 3.2% (95% confidence interval [CI] 2.2-4.6%) in 962 high-risk vs 0.6% (95% CI 0.2-1.9%) in 516 low-risk patients (p=0.002); among patients without prophylaxis, this rate was 3.5% vs 0.8% (p=0.029), respectively. In comparison, the Padua Prediction Score yielded a cumulative rate of the primary endpoint of 3.5% (95% CI 2.3-5.3%) in 714 high-risk vs 1.1% (95% CI 0.6-2.3%) in 764 low-risk patients (p=0.002); among patients without prophylaxis, this rate was 3.2% vs 1.5% (p=0.130), respectively. The negative likelihood ratio was 0.28 (95% CI 0.10-0.83) for the Geneva Risk Score and 0.51 (95% CI 0.28-0.93) for the Padua Prediction Score. In conclusion, among hospitalised medical patients, the Geneva Risk Score predicted VTE and VTE-related mortality and compared favourably with the Padua Prediction Score, particularly in its accuracy in identifying low-risk patients who do not require thromboprophylaxis.

Relevance: 100.00%

Publisher:

Abstract:

Content Distribution Networks (CDNs) are mandatory components of modern web architectures, with plenty of vendors offering their services. Despite the area's maturity, new paradigms and architecture models are still being developed. Cloud Computing, on the other hand, is a more recent concept that has expanded extremely quickly, with new services being regularly added to cloud management software suites such as OpenStack. The main contribution of this paper is the architecture and the development of an open source CDN that can be provisioned in an on-demand, pay-as-you-go model, thereby enabling the CDN as a Service (CDNaaS) paradigm. We describe our experience with the integration of the CDNaaS framework in a cloud environment, as a service for enterprise users. We emphasize the flexibility and elasticity of such a model, with each CDN instance being delivered on demand and associated with personalized caching policies as well as an optimized choice of Points of Presence based on the exact requirements of an enterprise customer. Our development is based on the framework developed in the Mobile Cloud Networking (MCN) EU FP7 project, which offers its enterprise users a common framework to instantiate and control services. CDNaaS is one of the core support components in this project and is tasked with delivering different types of multimedia content to several thousand geographically distributed users. It integrates seamlessly into the MCN service life cycle and as such enjoys all the benefits of a common design environment, allowing for improved interoperability with the rest of the services within the MCN ecosystem.
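As a rough sketch of what an on-demand CDN instance request could carry in such a model (a personalized caching policy plus a choice of Points of Presence driven by the customer's requirements), consider the following. The field names, regions and PoPs are invented and do not reflect the actual CDNaaS or MCN APIs.

```python
from dataclasses import dataclass, field

@dataclass
class CDNInstanceRequest:
    tenant: str
    cache_ttl_s: int                               # personalised caching policy
    regions: list = field(default_factory=list)    # where the tenant's users are located

# Hypothetical catalogue of available Points of Presence per region.
AVAILABLE_POPS = {
    "eu-west": ["dublin", "paris"],
    "eu-central": ["frankfurt"],
    "us-east": ["ashburn"],
}

def plan_instance(req: CDNInstanceRequest) -> dict:
    """On-demand planning step: map the tenant's requirements to a set of
    Points of Presence and a caching policy for this CDN instance."""
    pops = [p for region in req.regions for p in AVAILABLE_POPS.get(region, [])]
    return {"tenant": req.tenant, "pops": pops,
            "cache_policy": {"ttl": req.cache_ttl_s, "eviction": "lru"}}

print(plan_instance(CDNInstanceRequest("acme-media", cache_ttl_s=600,
                                       regions=["eu-west", "us-east"])))
```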

Relevance: 100.00%

Publisher:

Abstract:

Over the last decade, Grid computing paved the way for a new level of large-scale distributed systems. This infrastructure made it possible to securely and reliably take advantage of widely separated computational resources that are part of several different organizations. Resources can be incorporated into the Grid, building a theoretical virtual supercomputer. In time, cloud computing emerged as a new type of large-scale distributed system, inheriting and expanding the expertise and knowledge that had been obtained so far. Some of the main characteristics of Grids naturally evolved into clouds, others were modified and adapted, and others were simply discarded or postponed. Regardless of these technical specifics, Grids and clouds together can be considered one of the most important advances in large-scale distributed computing of the past ten years; however, this step in distributed computing has come along with a completely new level of complexity. Grid and cloud management mechanisms play a key role, and correct analysis and understanding of the system behavior are needed. Large-scale distributed systems must be able to self-manage, incorporating autonomic features capable of controlling and optimizing all resources and services. Traditional distributed computing management mechanisms analyze each resource separately and adjust specific parameters of each one of them. When trying to adapt the same procedures to Grid and cloud computing, the vast complexity of these systems can make this task extremely complicated. But the complexity of large-scale distributed systems may be only a matter of perspective: it could be possible to understand the Grid or cloud behavior as a single entity, instead of a set of resources. This abstraction could provide a different understanding of the system, describing large-scale behavior and global events that probably would not be detected by analyzing each resource separately. In this work we define a theoretical framework that combines both ideas, multiple resources and single entity, to develop large-scale distributed systems management techniques aimed at system performance optimization, increased dependability and Quality of Service (QoS). The resulting synergy could be the key to addressing the most important difficulties of Grid and cloud management.
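A toy sketch of the "single entity" view advocated above: per-resource metrics are collapsed into one global state, and a single system-wide autonomic action is derived from it instead of tuning every resource individually. The metrics, thresholds and actions are invented for illustration and are not the framework defined in the work.

```python
import statistics

def global_state(per_resource_util: dict) -> dict:
    """Collapse per-resource measurements into a single, system-wide view."""
    values = list(per_resource_util.values())
    return {"mean_util": statistics.mean(values),
            "imbalance": statistics.pstdev(values)}

def autonomic_action(state: dict) -> str:
    """Decide on one global action from the aggregate state, rather than
    adjusting parameters of every resource separately."""
    if state["mean_util"] > 0.85:
        return "grow the system (add nodes)"
    if state["imbalance"] > 0.25:
        return "rebalance workload across nodes"
    return "steady state"

utilisation = {"node-1": 0.95, "node-2": 0.20, "node-3": 0.55, "node-4": 0.40}
state = global_state(utilisation)
print(state, "->", autonomic_action(state))
```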