768 results for Cloud Computing, Risk Assessment, Security, Framework


Relevance:

100.00%

Publisher:

Abstract:

As a consequence of flood impacts, communities inhabiting mountain areas are increasingly affected by considerable damage to infrastructure and property. The design of effective flood risk mitigation strategies and their subsequent implementation is crucial for sustainable development in mountain areas. The assessment of the dynamic evolution of flood risk is the pillar of any subsequent planning process targeted at reducing the expected adverse consequences of the hazard impact. Given these premises, firstly, a comprehensive method to derive flood hazard process scenarios for well-defined areas at risk is presented. Secondly, conceptualisations of static and dynamic flood risk assessment are provided. These are based on formal schemes for computing the risk mitigation performance of the devised mitigation strategies within the framework of economic cost-benefit analysis. In this context, techniques suitable for quantifying the expected losses induced by the identified flood impacts are provided.
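
A minimal worked sketch of the kind of computation the economic cost-benefit framing implies: the expected annual damage of discrete hazard scenarios before and after mitigation, and the resulting benefit-cost ratio. All figures and scenario probabilities are assumed for illustration and are not taken from the study.

```python
# Illustration only: scenario probabilities, damages and costs are assumed values.

def expected_annual_damage(scenarios):
    """scenarios: list of (annual occurrence probability, damage) tuples."""
    return sum(p * damage for p, damage in scenarios)

# Hazard scenarios before and after implementing a mitigation strategy.
baseline  = [(0.1, 2_000_000), (0.01, 15_000_000), (0.001, 60_000_000)]
mitigated = [(0.1,   500_000), (0.01,  6_000_000), (0.001, 40_000_000)]

ead_before = expected_annual_damage(baseline)
ead_after  = expected_annual_damage(mitigated)
annual_benefit = ead_before - ead_after      # avoided expected losses per year
annual_cost = 250_000                        # assumed annualised cost of the measure

print(f"EAD before: {ead_before:,.0f}  EAD after: {ead_after:,.0f}")
print(f"Benefit-cost ratio: {annual_benefit / annual_cost:.2f}")
```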

Relevance:

100.00%

Publisher:

Abstract:

Studies suggest that hurricane hazard patterns (e.g. intensity and frequency) may change as a consequence of the changing global climate. As hurricane patterns change, it can be expected that hurricane damage risks and costs may change as a result. This indicates the need to develop hurricane risk assessment models that are capable of accounting for changing hurricane hazard patterns, and to develop hurricane mitigation and climatic adaptation strategies. This thesis proposes a comprehensive hurricane risk assessment and mitigation framework that accounts for a changing global climate and that can be adapted to various types of infrastructure, including residential buildings and power distribution poles. The framework includes hurricane wind field models, hurricane surge height models and hurricane vulnerability models to estimate damage risks due to hurricane wind speed, hurricane frequency, and hurricane-induced storm surge, and accounts for the time-dependent properties of these parameters as a result of climate change. The research then implements median insured house values, discount rates, housing inventory, etc. to estimate hurricane damage costs to residential construction. The framework was also adapted to timber distribution poles to assess the impacts climate change may have on timber distribution pole failure. This research finds that climate change may have a significant impact on the hurricane damage risks and damage costs of residential construction and timber distribution poles. In an effort to reduce damage costs, this research develops mitigation/adaptation strategies for residential construction and timber distribution poles. The cost-effectiveness of these adaptation/mitigation strategies is evaluated through the use of a Life-Cycle Cost (LCC) analysis. In addition, a scenario-based analysis of mitigation strategies for timber distribution poles is included. For both residential construction and timber distribution poles, adaptation/mitigation measures were found to reduce damage costs. Finally, the research develops the Coastal Community Social Vulnerability Index (CCSVI) to include the social vulnerability of a region to hurricane hazards within this hurricane risk assessment. This index quantifies the social vulnerability of a region by combining various social characteristics of the region with time-dependent parameters of hurricanes (i.e. hurricane wind and hurricane-induced storm surge). Climate change was found to have an impact on the CCSVI (i.e. climate change may have an impact on the social vulnerability of hurricane-prone regions).
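
The following sketch shows, under assumed figures, how a Life-Cycle Cost comparison of the kind described can be set up: the discounted expected annual damage over a structure's service life, with and without an up-front mitigation measure. The monetary values, discount rate and service life are hypothetical.

```python
# Illustration only: all monetary values, the discount rate and the service life are assumed.

def present_value(annual_cost, discount_rate, years):
    """Discounted sum of a constant annual cost over the analysis period."""
    return sum(annual_cost / (1 + discount_rate) ** t for t in range(1, years + 1))

discount_rate = 0.03        # assumed discount rate
service_life = 50           # years

damage_unmitigated = 4_000  # assumed expected annual hurricane damage per structure
damage_mitigated   = 1_500  # after retrofitting
retrofit_cost      = 20_000 # one-time mitigation/adaptation cost

lcc_unmitigated = present_value(damage_unmitigated, discount_rate, service_life)
lcc_mitigated   = retrofit_cost + present_value(damage_mitigated, discount_rate, service_life)

print(f"LCC without mitigation: {lcc_unmitigated:,.0f}")
print(f"LCC with mitigation:    {lcc_mitigated:,.0f}")
```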

Relevance:

100.00%

Publisher:

Abstract:

Applying location-focused data protection law within the context of a location-agnostic cloud computing framework is fraught with difficulties. While the Proposed EU Data Protection Regulation introduces many changes to the current data protection framework, the complexities of data processing in the cloud, which involve multiple layers of actors and intermediaries, have not been properly addressed. This leaves gaps in the regulation when it is analyzed in cloud scenarios. This paper gives a brief overview of the relevant provisions of the regulation that will have an impact on cloud transactions and addresses the missing links. It is hoped that these loopholes will be reconsidered before the final version of the law is passed, in order to avoid unintended consequences.

Relevance:

100.00%

Publisher:

Abstract:

Advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing workload conditions, such as the number of connected users, application performance might suffer, leading to violations of Service Level Agreements (SLAs) and possibly inefficient use of hardware resources. Combining dynamic application requirements with the increased use of virtualised computing resources creates a challenging resource management context for application and cloud-infrastructure owners. In such complex environments, business entities use SLAs as a means of specifying quantitative and qualitative requirements of services. There are several challenges in running distributed enterprise applications in cloud environments, ranging from the instantiation of service VMs in the correct order using an adequate quantity of computing resources, to adapting the number of running services in response to varying external loads, such as the number of users. The application owner is interested in finding the optimum amount of computing and network resources to use to ensure that the performance requirements of all his/her applications are met, and in appropriately scaling the distributed services so that application performance guarantees are maintained even under dynamic workload conditions. Similarly, the infrastructure provider is interested in optimally provisioning the virtual resources onto the available physical infrastructure so that operational costs are minimized, while maximizing the performance of tenants' applications. Motivated by the complexities associated with the management and scaling of distributed applications, while satisfying multiple objectives (related to both consumers and providers of cloud resources), this thesis proposes a cloud resource management platform able to dynamically provision and coordinate the various lifecycle actions on both virtual and physical cloud resources using semantically enriched SLAs. The system focuses on dynamic sizing (scaling) of virtual infrastructures composed of application services bound to virtual machines (VMs). We describe several algorithms for adapting the number of VMs allocated to the distributed application in response to changing workload conditions, based on SLA-defined performance guarantees. We also present a framework for the dynamic composition of scaling rules for distributed services, which uses benchmark-generated application monitoring traces, and show how these scaling rules can be combined and included in semantic SLAs for controlling the allocation of services. We also provide a detailed description of the multi-objective infrastructure resource allocation problem and various approaches to solving it. We present a resource management system based on a genetic algorithm, which performs allocation of virtual resources while considering the optimization of multiple criteria. We show that our approach significantly outperforms reactive VM-scaling algorithms as well as heuristic-based VM-allocation approaches.
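
As a rough illustration of what an SLA-driven scaling rule can look like (a generic sketch, not the algorithms or rule format actually developed in the thesis), the following fragment adjusts the number of VMs allocated to a service so that an observed response time stays within an SLA-defined guarantee; the thresholds and limits are assumptions.

```python
# Generic sketch of a threshold-based, SLA-driven scaling rule (assumed parameters).
from dataclasses import dataclass

@dataclass
class ScalingRule:
    sla_response_ms: float   # performance guarantee taken from the SLA
    min_vms: int             # lower bound on allocated VMs
    max_vms: int             # upper bound on allocated VMs

    def decide(self, current_vms: int, observed_response_ms: float) -> int:
        """Return the VM count for the next control interval."""
        if observed_response_ms > self.sla_response_ms:        # guarantee violated: scale out
            return min(current_vms + 1, self.max_vms)
        if observed_response_ms < 0.5 * self.sla_response_ms:  # ample headroom: scale in
            return max(current_vms - 1, self.min_vms)
        return current_vms                                      # within bounds: keep allocation

rule = ScalingRule(sla_response_ms=200.0, min_vms=2, max_vms=10)
print(rule.decide(current_vms=3, observed_response_ms=340.0))   # -> 4 (scale out)
print(rule.decide(current_vms=3, observed_response_ms=80.0))    # -> 2 (scale in)
```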

Relevance:

100.00%

Publisher:

Abstract:

The use of cloud computing is extending to all kinds of systems, including those that form part of Critical Infrastructures, which makes measuring reliability more difficult. Computing is becoming the fifth utility, in part thanks to the use of cloud services. Cloud computing is now used by all types of systems and organizations, including critical infrastructure, creating hidden inter-dependencies on both public and private cloud models. This paper investigates the use of cloud computing by critical infrastructure systems and the reliability and continuity-of-service risks associated with that use. Examples of its use by different critical industries are presented; even though the use of cloud computing by such systems is not yet widespread, the paper highlights the risk it poses for the future. The concepts of macro- and micro-dependability and the model we introduce are useful for defining inter-dependency and for analyzing the resilience of systems that depend on other systems, specifically in the cloud model.
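
A generic numerical illustration (not the paper's formal model) of why such hidden inter-dependencies matter: a critical service that requires several cloud services in series is only as available as the product of their availabilities. The figures below are assumed.

```python
# Illustration only: availabilities are assumed values.

def series_availability(availabilities):
    """Availability of a system that fails whenever any required dependency fails."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

internal = 0.9995                      # components under the organisation's own control
cloud_dependencies = [0.999, 0.9985]   # externally supplied cloud services

print(f"Stand-alone availability:      {internal:.4%}")
print(f"Including cloud dependencies:  {series_availability([internal] + cloud_dependencies):.4%}")
```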

Relevance:

100.00%

Publisher:

Abstract:

The 4CaaSt project aims at developing a PaaS framework that enables flexible definition, marketing, deployment and management of Cloud-based services and applications. The major innovations proposed by 4CaaSt are the blueprint and its lifecycle management, a one-stop shop for Cloud services, and PaaS-level resource management featuring elasticity. 4CaaSt also provides a portfolio of ready-to-use Cloud-native services and Cloud-aware immigrant technologies.

Relevance:

100.00%

Publisher:

Abstract:

In the present competitive environment, companies are wondering how to reduce their IT costs while increasing their efficiency and agility to react when changes in the business processes are required. Cloud Computing is the latest paradigm to optimize the use of IT resources, considering "everything as a service" and receiving these services from the Cloud (Internet) instead of owning and managing hardware and software assets. The benefits of the model are clear. However, there are also concerns and issues to be solved before Cloud Computing spreads across the different industries. The model allows a pay-per-use approach to IT services and brings many benefits, such as cost savings, agility to react when business demands change, and simplicity, because there will not be any infrastructure to operate and administrate. It will be comparable to well-known utilities such as electricity, water or gas companies. However, this paper underlines several risk factors of the model. Leading technology companies should research solutions to minimize the risks described in this article. Keywords: Cloud Computing, Utility Computing, Elastic Computing, Enterprise Agility

Relevance:

100.00%

Publisher:

Abstract:

Access to information and continuous education represent critical factors for physicians and researchers all over the world. For African professionals, this situation is even more problematic due to frequently difficult access to technological infrastructures and basic information. Both education and information technologies (e.g., hardware, software or networking) are expensive and unaffordable for many African professionals. Thus, the use of e-learning and an open approach to information exchange and software use have already been proposed to address medical informatics issues in Africa. In this context, the AFRICA BUILD project, supported by the European Commission, aims to develop a virtual platform to provide access to a wide range of biomedical informatics and learning resources to professionals and researchers in Africa. A consortium of four African and four European partners works together in this initiative. In this framework, we have developed a prototype of a cloud-computing infrastructure to demonstrate, as a proof of concept, the feasibility of this approach. We have conducted the experiment in two different locations in Africa: Burundi and Egypt. As shown in this paper, technologies such as cloud computing and the use of open-source medical software for a wide range of cases present significant challenges and opportunities for developing countries, such as many in Africa.

Relevance:

100.00%

Publisher:

Abstract:

Computing is becoming the fifth utility (after gas, water, electricity and telephony), partly due to the impact of Cloud Computing on most organizations. Computing is now used by ever more types of systems, including critical systems. This has an impact on the internal complexity and the reliability of an organization's systems and of those it offers to its clients. This work investigates the use of Cloud Computing by critical systems, focusing on their dependencies and especially on their reliability. Some examples of its use are presented, and although its adoption in critical systems is not widespread, its potential impact is outlined. The aim of this work is, first, to define a model that can quantitatively represent the reliability inter-dependencies for organizations using such systems, and then to apply this model to a critical system in the health domain and present the results. The concepts of "macro-dependability" and "micro-dependability" are introduced in the model to define inter-dependency and to analyze the reliability of systems that depend on other systems. ABSTRACT With the increasing utilization of Internet services and cloud computing by most organizations (both private and public), it is clear that computing is becoming the 5th utility (along with water, electricity, telephony and gas). These technologies are used for almost all types of systems, and their number is increasing, including Critical Infrastructure systems. Even if Critical Infrastructure systems appear not to rely directly on cloud services, there may be hidden inter-dependencies. This is true even for private cloud computing, which seems more secure and reliable. Critical systems may begin in some cases with a clear and simple design, but evolve, as described by Egan, into "rafted" networks. Because they are usually controlled by one or a few organizations, their dependencies can be understood even when they are complex systems. The organization oversees and manages changes. These CI systems have been affected by the introduction of new ICT models such as global communications, PCs and the Internet. Virtualization took longer to be adopted by critical systems, due to their strategic nature, but once these technologies had been proven in other areas they were eventually adopted as well, for reasons such as cost. A new technology model, called cloud computing, is now emerging, based on several earlier technologies (virtualization, distributed and utility computing, web and software services) offered in new ways. Organizations are migrating more services to the cloud; this will have an impact on their internal complexity and on the reliability of the systems they offer to the organization itself and to their clients. This added complexity and the associated reliability risks are not always recognized. Moreover, when two or more CI systems interact, the risks of one can affect the others, so the risks are shared. This work investigates the use of cloud computing by critical systems, focusing on the dependencies and reliability of these systems. Some examples are presented together with the associated risks. A framework is introduced for analysing the dependability and resilience of a system that relies on cloud services and for improving them.
As part of the framework, the concepts of micro- and macro-dependability are introduced to capture, respectively, internal dependability and dependability on services supplied by an external cloud. A pharmacovigilance model system has been used to validate the framework.

Relevance:

100.00%

Publisher:

Abstract:

Urban gardening has spread worldwide in recent years, as it enhances food security and self-supply and promotes community integration. However, urban soils are significantly enriched in trace elements relative to background levels. Exposure to the soil in urban gardens may therefore result in adverse health effects, depending on the degree of contact during gardening, infant recreational activities and the ingestion of vegetables grown there. In order to evaluate this potential risk, 36 composite samples were collected from the top 20 cm of the soil of 6 urban gardens in Madrid. The aqua regia (pseudototal) and glycine-extractable (bioaccessible) concentrations of Co, Cr, Cu, Ni, Pb and Zn were determined by atomic absorption spectrophotometry. Additionally, pH, texture, Fe, Ca and Mn concentrations, and organic matter and calcium carbonate contents were determined in all urban gardens, and their influence on trace element bioaccessibility was analyzed.
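
As a simple illustration of how the two measurements relate (values are invented, not the study's results), the bioaccessible share of each element can be expressed as the ratio of its glycine-extractable to its pseudototal concentration:

```python
# Illustration only: concentrations are assumed, not measured values from the study.
pseudototal   = {"Pb": 120.0, "Zn": 310.0, "Cu": 75.0}   # aqua regia, mg/kg
bioaccessible = {"Pb":  54.0, "Zn":  95.0, "Cu": 30.0}   # glycine-extractable, mg/kg

for element, total in pseudototal.items():
    fraction = bioaccessible[element] / total
    print(f"{element}: bioaccessible fraction = {fraction:.0%}")
```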

Relevance:

100.00%

Publisher:

Abstract:

Currently, student dropout rates are a matter of concern among universities. Many research studies aimed at discovering the causes have been carried out. However, few solutions that could serve all students and address the related problems have been proposed so far. One such problem is caused by the lack of the "knowledge chain educational links" that occurs when students move on to higher studies without mastering their basic studies. Most regulated studies imparted at universities are designed so that some basic subjects serve as support for other, more advanced subjects, thus forming a complex knowledge network. When a link in this chain fails, student frustration occurs, as it prevents the student from fully understanding the following educational links. In this proposal we try to mitigate these failures, which, for the most part, extend the student's frustration beyond his time at college. On one hand, we examine the student's learning process, dividing it into a series of phases that make up what we call the "learning lifecycle". We also analyze at which stage the action of the stakeholders involved in this scenario, teachers and students, is most important. On the other hand, we consider that Information and Communication Technologies (ICT), such as Cloud Computing, can help develop new ways of delivering higher education while easing and facilitating the student's learning process. However, methods and processes need to be defined to direct the use of such technologies, in the teaching process in general and within higher education in particular, in order to achieve optimum results. Our methodology integrates ICT, as another actor, into the "learning lifecycle". We stimulate students to stop being merely spectators of their own education and encourage them to take an active part in their training process. To do this, we offer a set of useful tools not only to determine the causes of academic failure (for self-assessment), but also to remedy these failures (with corrective actions); once the causes are discovered, it is easier to determine solutions. We believe this study will be useful for both students and teachers. Students learn from their own experience and improve their learning process, while obtaining all of the "knowledge chain educational links" required in their studies. We stand by the motto "Studying to learn instead of studying to pass". Teachers will also benefit by detecting where and how to strengthen their teaching proposals. All of this should also result in lower dropout rates.

Relevance:

100.00%

Publisher:

Abstract:

Fault tree methodology is the most widespread risk assessment tool; it allows one to predict, in principle, the outcome of an event whenever it can be reduced to simpler events by the logical operations of conjunction and disjunction, following the basics of Boolean algebra. The object of this work is to present an algorithm, and the corresponding computer code, by which one is able to predict, in practice, the outcome of an event whenever its fault tree is given in the usual form.
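
A minimal sketch of the kind of evaluation described (not the paper's actual algorithm or code): a fault tree given in the usual form, as AND/OR gates over basic events, is reduced to the outcome of the top event.

```python
# Illustration only: the tree structure and event states are hypothetical.

def evaluate(node, basic_events):
    """node is either a basic-event name or a tuple ('AND' | 'OR', [children])."""
    if isinstance(node, str):
        return basic_events[node]
    gate, children = node
    results = [evaluate(child, basic_events) for child in children]
    return all(results) if gate == "AND" else any(results)

# Top event occurs if the pump fails AND (power is lost OR the backup fails).
tree = ("AND", ["pump_failure", ("OR", ["power_loss", "backup_failure"])])
state = {"pump_failure": True, "power_loss": False, "backup_failure": True}

print(evaluate(tree, state))   # -> True: the top event occurs for this combination
```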

Relevance:

100.00%

Publisher:

Abstract:

Евгений Николов, Димитрина Полимирова – The report presents the current state of "cloud computing" and "cloud information attacks" in the light of computer virology and information security. The categories of "possible cloud information attacks" and "successful cloud information attacks" are discussed. The architecture of "cloud computing" and the main components that make up its infrastructure are reviewed, namely "clients", "datacenters" and "distributed servers". The services offered by "cloud computing" — SaaS, HaaS and PaaS — are also discussed. The advantages and disadvantages of the components and services with respect to "cloud information attacks" are pointed out. An analysis is made of the current state of "cloud information attacks" on the territory of Bulgaria, the Balkan Peninsula and Southeast Europe with respect to the components and the services. The results are presented in the form of 3D graphical objects. Finally, the corresponding conclusions and recommendations are given by way of a conclusion.

Relevance:

100.00%

Publisher:

Abstract:

Perhaps the most sought-after buzzword in today's IT world is cloud computing, or in Hungarian translation, "számítási felhő". The translation comes from the Hungarian version of the EU Digital Agenda (2010). A detailed description of the cloud computing business model is given by (Bőgel, 2009). György Bőgel describes the emergence of the new, utility-like IT service and its economic benefits, predicting a great future for cloud computing in the competition between business models. Besides the business advantages of cloud computing, the author places greater emphasis in this paper on the factors inhibiting its rapid adoption and on what the advantages and disadvantages mean for a business, IT or compliance manager. Without diminishing the economic significance of the cloud model, he considers it important to address the problems and risks as well. He points out that there are substantial differences in the risks — especially the security and data protection risks — between the European Economic Area and the rest of the world, e.g. the United States. The article highlights these differences, and the reader is also given an explanation of why cloud computing can be expected to spread more slowly in Europe than in other parts of the world. The EU's efforts to promote the spread of cloud computing in Europe are also presented, given the model's effect of increasing competitiveness. / === / One of the most popular concepts in recent web searches is cloud computing. Several authors present detailed descriptions of the new service model and its business benefits and cite the optimistic prognoses of cloud experts regarding the competition between information system service models. The author analyses the operational benefits of cloud adoption and gives a detailed description of the inhibitors of the fast expansion of the service model. He also analyses the pros and cons of the cloud for a business manager, an information officer and a compliance officer. When considering the advantages of the cloud, it is equally important to review the problems and risks associated with the model. The paper gives a list of the expected cloud-specific risks. It also explains the differences in the security and data protection approach between the European Economic Area and the rest of the world, including the USA. The paper explains why a slower expansion of the cloud model is expected in Europe than in the rest of the world. The efforts of the EU in helping to spread the cloud model are also presented, as the EU's officers consider the model an important element of competitiveness.

Relevance:

100.00%

Publisher:

Abstract:

Product quality planning is a fundamental part of quality assurance in manufacturing. It comprises the distribution of quality aims over each phase in product development and the deployment of quality operations and resources to accomplish these aims. This paper proposes a quality planning methodology based on risk assessment, in which the planning tasks of product development are translated into the evaluation of risk priorities. Firstly, a comprehensive model for quality planning is developed to address the deficiencies of traditional quality function deployment (QFD) based quality planning. Secondly, a novel failure knowledge base (FKB) based method is discussed. Then a mathematical method and algorithm for risk assessment are presented for target decomposition, measure selection, and sequence optimization. Finally, the proposed methodology has been implemented in a web-based prototype software system, QQ-Planning, to solve the problem of quality planning regarding the distribution of quality targets and the deployment of quality resources, in such a way that the product requirements are satisfied and the enterprise resources are highly utilized. © Springer-Verlag Berlin Heidelberg 2010.
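
To make the idea of translating planning tasks into the evaluation of risk priorities concrete, the sketch below ranks planning items by a classic FMEA-style risk priority number. This is a generic illustration, not the specific mathematical method or algorithm developed in the paper, and the items and ratings are invented.

```python
# Generic FMEA-style illustration (not the paper's method); all ratings are assumed.

def risk_priority(severity, occurrence, detection):
    """Classic risk priority number: each factor rated on a 1-10 scale."""
    return severity * occurrence * detection

# Hypothetical quality-planning items with (severity, occurrence, detection) ratings.
items = {
    "weld seam strength": (9, 4, 3),
    "surface finish":     (4, 6, 2),
    "fastener torque":    (7, 3, 5),
}

ranked = sorted(items.items(), key=lambda kv: risk_priority(*kv[1]), reverse=True)
for name, ratings in ranked:
    print(f"{name}: RPN = {risk_priority(*ratings)}")
```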