768 results for Cloud Computing, Risk Assessment, Security, Framework


Relevance:

100.00%

Abstract:

In the context of recent attempts to redefine the 'skin notation' concept, a position paper summarizing an international workshop on the topic stated that the skin notation should be a hazard indicator related to the degree of toxicity and the potential for transdermal exposure of a chemical. Within the framework of developing a web-based tool integrating this concept, we constructed a database of 7101 agents for which a percutaneous permeation constant can be estimated (using molecular weight and the octanol-water partition constant), and for which at least one of the following toxicity indices could be retrieved: inhalation occupational exposure limit (n=644), oral lethal dose 50 (LD50, n=6708), cutaneous LD50 (n=1801), oral no observed adverse effect level (NOAEL, n=1600), and cutaneous NOAEL (n=187). Data sources included the Registry of Toxic Effects of Chemical Substances (RTECS, MDL Information Systems, Inc.), PHYSPROP (Syracuse Research Corp.) and safety cards from the International Programme on Chemical Safety (IPCS). A hazard index, corresponding to the product of exposure duration and skin surface exposed that would yield an internal dose equal to a toxic reference dose, was calculated. This presentation provides a descriptive summary of the database, correlations between the toxicity indices, and an example of how the web tool will help industrial hygienists decide on the possibility of a dermal risk using the hazard index.
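The abstract does not spell out its estimation equations, so the following is a minimal sketch only, assuming the widely used Potts-Guy QSPR for the percutaneous permeation coefficient; the compound properties and reference dose are hypothetical.

```python
# Minimal sketch, not the paper's actual model: a hazard index from a
# QSPR-estimated permeation coefficient. The Potts-Guy (1992) equation
# and all input values are assumptions for illustration.

def permeation_coefficient(log_kow: float, mw: float) -> float:
    """Potts-Guy QSPR: Kp [cm/h] from log Kow and molecular weight."""
    return 10 ** (-2.72 + 0.71 * log_kow - 0.0061 * mw)

def hazard_index(reference_dose_mg: float, kp: float, conc_mg_cm3: float) -> float:
    """Surface x duration product [cm^2.h] delivering the reference dose.

    Internal dose = Kp * C * (area * time), so HI = RfD / (Kp * C).
    """
    return reference_dose_mg / (kp * conc_mg_cm3)

# Hypothetical compound: MW 100 g/mol, log Kow 2.0, saturation 0.5 mg/cm^3
kp = permeation_coefficient(log_kow=2.0, mw=100.0)
print(f"Kp = {kp:.4f} cm/h, HI = {hazard_index(75.0, kp, 0.5):.0f} cm^2.h")
```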

Relevance:

100.00%

Abstract:

We analyse the impact of working and contractual conditions, particularly exposure to job risks, on the probability of acquiring a permanent disability, controlling for other personal and firm characteristics. We postulate a model in which this impact is mediated by the choice of occupation, with a level of risk associated with it. We assume this choice is endogenous and that it depends on preferences and opportunities in the labour market, both of which may differ between immigrants and natives. To test this hypothesis we apply a bivariate probit model to data for 2006 from the Continuous Sample of Working Lives provided by the Spanish Social Security system, containing records for over a million workers. We find that risk exposure increases the probability of permanent disability arising from any cause by almost 5%.
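As a reading aid, here is a minimal sketch of the recursive bivariate probit structure the abstract describes; the notation is assumed, not the authors' own.

```latex
% R_i = 1 if worker i holds a risky occupation, D_i = 1 if a permanent
% disability occurs; Z_i and X_i are covariate vectors (notation assumed).
\begin{align*}
R_i^{*} &= Z_i'\gamma + u_i, & R_i &= \mathbf{1}\{R_i^{*} > 0\},\\
D_i^{*} &= X_i'\beta + \delta R_i + v_i, & D_i &= \mathbf{1}\{D_i^{*} > 0\},\\
(u_i, v_i) &\sim \mathcal{N}_2\!\left(\mathbf{0},
  \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}\right),
\end{align*}
% where \rho \neq 0 captures the endogeneity of the occupational risk choice.
```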

Relevance:

100.00%

Abstract:

BACKGROUND: Pancreaticoduodenectomy (PD) still carries a substantial mortality rate. Recently, different scores have been published to predict the post-PD mortality risk pre-operatively. This retrospective study was designed to perform an external assessment of an Early Mortality Risk Score (EMRS). METHODS: From 2000 to 2012, all PD cases performed at our institution were documented. Only patients treated for pancreatic head adenocarcinomas were included. Survival time and EMRS (based on age, tumour size, tumour differentiation and comorbidities) were calculated for every patient. Relative risks (RR) of early death 9 and 12 months after PD were then calculated. RESULTS: Of 270 PDs for various aetiologies, 120 PDs for adenocarcinomas were included. The median follow-up was 37 months, and the overall median survival was 19 months. An EMRS of 4 showed a mortality RR of 5.1 at 9 months (P = 0.048) and of 4.5 at 12 months (P = 0.020). CONCLUSIONS: An EMRS of 4 is a predictor of tumour-related mortality at 9 and 12 months after PD for adenocarcinoma. The EMRS was externally assessed in our patient cohort and can be implemented in clinical practice. The clinical implications of this score still need to be studied.
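For orientation, the relative risk reported here is simply a ratio of group event rates. A tiny sketch of the arithmetic, with invented counts (the paper reports only the resulting RRs):

```python
# Hedged sketch: relative risk = event rate in the high-score group
# divided by the rate in the low-score group. All counts are invented;
# only the target RR value comes from the paper.
def relative_risk(events_hi: int, n_hi: int, events_lo: int, n_lo: int) -> float:
    return (events_hi / n_hi) / (events_lo / n_lo)

# Hypothetical 12-month example tuned to reproduce RR = 4.5:
print(relative_risk(events_hi=9, n_hi=30, events_lo=6, n_lo=90))  # -> 4.5
```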

Relevance:

100.00%

Abstract:

Occupational hygiene practitioners typically assess the risk posed by occupational exposure by comparing exposure measurements to regulatory occupational exposure limits (OELs). In most jurisdictions, OELs are only available for exposure by the inhalation pathway. Skin notations are used to indicate substances for which dermal exposure may lead to health effects. However, these notations are either present or absent and provide no indication of acceptable levels of exposure. Furthermore, the methodology and framework for assigning skin notations differ widely across jurisdictions, resulting in inconsistencies in the substances that carry notations. The UPERCUT tool was developed in response to these limitations. It helps occupational health stakeholders assess the hazard associated with dermal exposure to chemicals. UPERCUT integrates dermal quantitative structure-activity relationships (QSARs) and toxicological data to provide users with a skin hazard index called the dermal hazard ratio (DHR) for the substance and scenario of interest. The DHR is the ratio between the estimated 'received' dose and the 'acceptable' dose. The 'received' dose is estimated using physico-chemical data and information on the exposure scenario provided by the user (body parts exposed and exposure duration), and the 'acceptable' dose is estimated using inhalation OELs and toxicological data. The uncertainty surrounding the DHR is estimated with Monte Carlo simulation. Additional information on the selected substances includes the substance's intrinsic skin permeation potential and the existence of skin notations. UPERCUT is the only available tool that estimates the absorbed dose and compares it to an acceptable dose. In the absence of dermal OELs, it provides a systematic and simple approach for screening dermal exposure scenarios for 1686 substances.
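A minimal sketch of the kind of Monte Carlo DHR calculation described, assuming illustrative distributions and exposure parameters; UPERCUT's actual model is not reproduced here.

```python
# Hedged sketch of a dermal hazard ratio with Monte Carlo uncertainty,
# in the spirit of (but not reproducing) UPERCUT. All distributions and
# values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

kp = rng.lognormal(np.log(0.01), 0.5, size=n)   # permeation coeff., cm/h
conc = 0.5                                      # mg/cm^3 on the skin
area = 360.0                                    # cm^2, e.g. both palms
duration = rng.uniform(0.5, 2.0, size=n)        # exposure hours
acceptable = rng.normal(75.0, 10.0, size=n)     # acceptable dose, mg

dhr = (kp * conc * area * duration) / acceptable  # received / acceptable
print(f"median DHR = {np.median(dhr):.3f}, "
      f"95th percentile = {np.percentile(dhr, 95):.3f}")
```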

Relevance:

100.00%

Abstract:

The GMO Risk Assessment and Communication of Evidence (GRACE; www.grace-fp7.eu) project is funded by the European Commission within the 7th Framework Programme. A key objective of GRACE is to conduct 90-day animal feeding trials, animal studies with an extended time frame, as well as analytical, in vitro and in silico studies on genetically modified (GM) maize, in order to comparatively evaluate their use in GM plant risk assessment. In the present study, the results of two 90-day feeding trials with two different GM maize MON810 varieties, their near-isogenic non-GM varieties and four additional conventional maize varieties are presented. The feeding trials were performed taking into account the guidance for such studies published by the EFSA Scientific Committee in 2011 and the OECD Test Guideline 408. The results obtained show that the MON810 maize at a level of up to 33% in the diet did not induce adverse effects in male and female Wistar Han RCC rats after subchronic exposure, independently of the two different genetic backgrounds of the event.

Relevance:

100.00%

Abstract:

The purpose of this thesis is to investigate projects funded under the European 7th Framework Programme's Information and Communication Technology work programme. The research is limited to the challenge "Pervasive and trusted network and service infrastructure", and the aim is to find out the most important topics on which research will concentrate in the future. The thesis provides important information for the Department of Information Technology at Lappeenranta University of Technology. First, the thesis investigates the requirements for the projects funded under the "Pervasive and trusted network and service infrastructure" programme in 2007. Second, the projects funded under that programme are tabulated and the most important keywords are gathered. Finally, based on keyword frequencies, a vision of the most important future topics is defined. According to the keyword analysis, wireless networks will play an important role in the future, and core networks will be implemented with fiber technology to ensure fast data transfer. Software development favours Service-Oriented Architecture (SOA) and open-source solutions. Interoperability and ensuring privacy are key concerns in the future; 3D in all its forms and content delivery are important topics as well. When all the projects were compared, the most important issue was discovered to be SOA, which leads the way to cloud computing.
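A minimal sketch of the kind of keyword-frequency analysis the thesis describes; the project names and keyword lists are invented examples, not FP7 data.

```python
# Hedged sketch: count how many funded projects mention each keyword,
# then rank keywords by frequency. All data below is invented.
from collections import Counter

projects = {
    "PROJECT-A": ["SOA", "cloud computing", "interoperability"],
    "PROJECT-B": ["wireless networks", "SOA", "privacy"],
    "PROJECT-C": ["fiber networks", "3D", "content delivery", "SOA"],
}

counts = Counter(kw for kws in projects.values() for kw in kws)
for keyword, n in counts.most_common(5):
    print(f"{keyword}: appears in {n} project(s)")
```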

Relevance:

100.00%

Abstract:

Modern cloud services offer large companies the opportunity to make computational data processing more efficient. However, adopting cloud services raises, for example, several information security questions, which is why the adoption must be carefully planned. This study presents a literature-review-based, step-by-step plan for adopting cloud services in an energy business environment. Internal interviews at the target company and a review of current cloud solutions in the energy industry form an overall picture of the challenges and opportunities of adoption. The main goal of the study is to present solutions to typical problems encountered in cloud service adoption by means of an adoption model. The adoption model constructed in the study was tested on an example case and found to work. Because of the information security questions raised by external services, the first parts of the adoption, such as defining the end product and careful planning, are the core of the whole adoption process. In addition, adopting cloud services requires new technical and administrative skills from the current operating environment. The results of the study demonstrate the versatile benefits of cloud services, especially when the need for computing power varies. The cost comparison created alongside the adoption model supports the benefits highlighted in the literature review and gives the target company grounds for taking the study forward.

Relevance:

100.00%

Abstract:

Russia approved an ambitious reform plan for the electricity sector in 2001, including privatisation of the country's huge thermal generation assets. Until then the sector had suffered from power shortages, aging infrastructure, substantial electricity losses, and weak productivity and profitability; there was an obvious need for foreign investment and technology. The reform was rather successful: the generation assets were privatised in auctions in 2007-2008, and three European energy companies, E.On, Enel and Fortum, invested in and together obtained over 10% of the Russian production assets. The novelty of these foreign investments makes them a unique object of study. Political risk is involved in the FDI because of the industry's social and economic importance. The research objective was to identify and analyse the political risk that foreign investors face in the Russian electricity sector. The research used a qualitative study method, and the empirical data were collected through interviews. The theoretical framework was based on existing political risk theories; it focused on understanding the Russian government in relation to the country's stability and on defining both macro-level and micro-level sources of political risk for foreign direct investments in the sector. The research concludes that centralised and opaque political decision-making, economic contraction, a high level of governmental control of the economy, and corruption form the country's internal macro-level risk sources for foreign investors in the sector. Additionally, retribution due to the companies' home country actions, possible violent confrontations at the Russian borders, and currency instability are externally originated risk sources. In the electricity industry there is a risk of tightened governmental control and increased regulation and taxation. Similarly, company-level risk sources relate to the unreformed heating sector, bargaining with the authorities, diplomatic stress between host and home countries, and the divergent perspectives of companies and government on profit-making. The research stresses the foreign companies' ability to cope with the characteristics of the Russian political environment. In addition to frequent political and market risk assessment, the companies need to protect against rouble exchange-rate fluctuation and actively build good corporate citizenship in the country. A good relationship with the Russian political authorities is needed. The political risk identification and the research's conclusive framework also enable political risk assessments for other industries in Russia.

Relevance:

100.00%

Abstract:

The main subject of this thesis is risk measures. The general objective is to investigate certain aspects of risk measures in financial applications. The theoretical framework of the work is that of coherent risk measures as defined in Artzner et al. (1999), but this is not the only class of risk measures we study; for example, we also study some aspects of natural risk statistics (Kou et al., 2006) and of convex risk measures (Föllmer and Schied, 2002). The main contributions of this thesis can be grouped along three axes: capital allocation, risk assessment, and required capital and solvency. In Chapter 2 we characterise risk measures with the Lebesgue property on the set of bounded càdlàg (right-continuous with left limits) processes. This characterisation allows us to present two applications, in risk assessment and in capital allocation. In Chapter 3 we extend the notion of natural risk statistics to the space of infinite sequences. This generalisation allows us to construct, in a consistent way, risk measures for data sets of any size. In Chapter 4 we discuss the concept of Good Deals, in particular to characterise the market situations in which these pathological positions are present. Finally, in Chapter 5, we try to link the three chapters by extending the definition of Good Deals to a broader framework that encompasses the risk measures analysed in Chapters 2 and 3.
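Since the thesis's framework is that of coherent risk measures, it may help to recall the defining axioms from Artzner et al. (1999); the statement below normalises the risk-free rate to zero.

```latex
% Axioms of a coherent risk measure \rho (Artzner et al., 1999), stated
% for bounded positions X, Y with the risk-free rate normalised to zero:
\begin{align*}
&\text{Monotonicity:}           && X \le Y \;\Rightarrow\; \rho(X) \ge \rho(Y)\\
&\text{Translation invariance:} && \rho(X + m) = \rho(X) - m \quad (m \in \mathbb{R})\\
&\text{Positive homogeneity:}   && \rho(\lambda X) = \lambda\,\rho(X) \quad (\lambda \ge 0)\\
&\text{Subadditivity:}          && \rho(X + Y) \le \rho(X) + \rho(Y)
\end{align*}
```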

Relevance:

100.00%

Abstract:

Traditional inventory models focus on risk-neutral decision makers, i.e., characterizing replenishment strategies that maximize expected total profit, or equivalently, minimize expected total cost over a planning horizon. In this paper, we propose a framework for incorporating risk aversion in multi-period inventory models as well as multi-period models that coordinate inventory and pricing strategies. In each case, we characterize the optimal policy for various measures of risk that have been commonly used in the finance literature. In particular, we show that the structure of the optimal policy for a decision maker with exponential utility functions is almost identical to the structure of the optimal risk-neutral inventory (and pricing) policies. Computational results demonstrate the importance of this approach not only to risk-averse decision makers, but also to risk-neutral decision makers with limited information on the demand distribution.
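A minimal single-period (newsvendor) sketch of the exponential-utility objective the paper analyses in multi-period form; the prices, demand distribution and risk tolerance are illustrative assumptions.

```python
# Hedged sketch: single-period risk-averse inventory choice under an
# exponential utility u(w) = -exp(-w / theta). Prices, demand and the
# risk tolerance theta are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
demand = rng.poisson(lam=50, size=100_000)   # sampled demand scenarios
price, cost, salvage = 10.0, 6.0, 2.0
theta = 100.0                                # risk tolerance

def expected_utility(q: int) -> float:
    """Average utility of profit over the sampled demand scenarios."""
    sales = np.minimum(demand, q)
    profit = price * sales + salvage * np.maximum(q - demand, 0) - cost * q
    return float(np.mean(-np.exp(-profit / theta)))

best_q = max(range(30, 80), key=expected_utility)
# Slightly below the risk-neutral critical-fractile quantity (about 50)
print("risk-averse order quantity:", best_q)
```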

Relevance:

100.00%

Abstract:

Information technologies have become an important factor in each of the processes carried out along the supply chain. Their implementation and correct use give companies advantages that favour operational performance along the chain. The development and application of software have contributed to the integration of the different members of the chain, so that everyone from suppliers to the final customer perceives benefits, in operational performance variables and in satisfaction levels respectively. On the other hand, it is important to consider that implementation does not always produce positive results; on the contrary, the implementation process can be seriously hindered by barriers that prevent companies from maximising the benefits that ICT provide.

Relevance:

100.00%

Abstract:

Despite the many models developed for phosphorus concentration prediction at differing spatial and temporal scales, there has been little effort to quantify uncertainty in their predictions. Quantifying model prediction uncertainty is desirable for informed decision-making in river-systems management. An uncertainty analysis of the process-based Integrated Catchment Model of Phosphorus (INCA-P), within the generalised likelihood uncertainty estimation (GLUE) framework, is presented. The framework is applied to the Lugg catchment (1,077 km²), a River Wye tributary on the England–Wales border. Daily discharge and monthly phosphorus (total reactive and total), for a limited number of reaches, are used to initially assess the uncertainty and sensitivity of 44 model parameters identified as being most important for discharge and phosphorus predictions. This study demonstrates that parameter homogeneity assumptions (spatial heterogeneity is treated as land-use-type fractional areas) can achieve higher model fits than a previous, expertly calibrated parameter set. The model is capable of reproducing the hydrology, but a threshold Nash-Sutcliffe coefficient of determination (E or R²) of 0.3 is not achieved when simulating observed total phosphorus (TP) data in the upland reaches or total reactive phosphorus (TRP) in any reach. Despite this, the model reproduces the general dynamics of TP and TRP in the point-source-dominated lower reaches. This paper discusses why this application of INCA-P fails to find any parameter sets which simultaneously describe all the observed data acceptably. The discussion focuses on the uncertainty of readily available input data, and on whether such process-based models should be used when there is not sufficient data to support their many parameters.
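A minimal sketch of the GLUE procedure described above: sample parameter sets from priors and keep the "behavioural" ones whose Nash-Sutcliffe efficiency exceeds the 0.3 threshold. A toy model stands in for INCA-P, which is far too large to reproduce here.

```python
# Hedged sketch of GLUE: Monte Carlo sampling from uniform priors,
# retaining "behavioural" parameter sets with Nash-Sutcliffe E > 0.3.
# The toy sine model and synthetic observations are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(100)
observed = 5 + 2 * np.sin(t / 10) + rng.normal(0, 0.3, size=t.size)

def toy_model(a: float, b: float) -> np.ndarray:
    return a + b * np.sin(t / 10)

def nash_sutcliffe(sim: np.ndarray, obs: np.ndarray) -> float:
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

samples = rng.uniform([0.0, 0.0], [10.0, 4.0], size=(5_000, 2))
behavioural = [(a, b) for a, b in samples
               if nash_sutcliffe(toy_model(a, b), observed) > 0.3]
print(f"{len(behavioural)} of {len(samples)} sampled sets are behavioural")
```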

Relevance:

100.00%

Abstract:

Many producers of geographic information are now disseminating their data using open web service protocols, notably those published by the Open Geospatial Consortium. There are many challenges inherent in running robust and reliable services at reasonable cost. Cloud computing provides a new kind of scalable infrastructure that could address many of these challenges. In this study we implement a Web Map Service for raster imagery within the Google App Engine environment. We discuss the challenges of developing GIS applications within this framework and the performance characteristics of the implementation. Results show that the application scales well to multiple simultaneous users and performance will be adequate for many applications, although concerns remain over issues such as latency spikes. We discuss the feasibility of implementing services within the free usage quotas of Google App Engine and the possibility of extending the approaches in this paper to other GIS applications.
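For context, a Web Map Service must answer requests of the following shape; this sketch builds a standard WMS 1.1.1 GetMap URL, with a placeholder App Engine endpoint and layer name.

```python
# Hedged sketch: the standard WMS 1.1.1 GetMap request such a service
# answers. The host and layer name below are placeholders, not the
# study's actual deployment.
from urllib.parse import urlencode

params = {
    "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
    "LAYERS": "example_raster",     # hypothetical layer name
    "STYLES": "",                   # default style
    "SRS": "EPSG:4326",             # lat/lon coordinate reference
    "BBOX": "-180,-90,180,90",      # minx,miny,maxx,maxy
    "WIDTH": 512, "HEIGHT": 256,    # image size in pixels
    "FORMAT": "image/png",
}
print("https://example.appspot.com/wms?" + urlencode(params))
```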

Relevance:

100.00%

Abstract:

A full assessment of para-virtualization is important because, without knowledge of the various overheads, users cannot judge whether using virtualization is a good idea. In this paper we are interested in assessing the overheads of running various benchmarks on bare metal as well as under para-virtualization. The idea is to see what the overheads of para-virtualization are, as well as the overheads of turning on monitoring and logging. The knowledge gained from assessing various benchmarks on these different systems will help a range of users understand the use of virtualization systems. In this paper we assess the overheads of using Xen, VMware, KVM and Citrix (see Table 1); these virtualization systems are used extensively by cloud users. We use various Netlib benchmarks, developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). To assess the virtualization systems, we run the benchmarks on bare metal, then under para-virtualization, and finally with monitoring and logging turned on. The latter is important because users are interested in the Service Level Agreements (SLAs) offered by cloud providers, and logging is a means of assessing the services bought and used from commercial providers. We assess the virtualization systems on three different machines: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), all servers available at the University of Reading.

A functional virtualization system is multi-layered and is driven by its privileged components. A virtualization system can host multiple guest operating systems, each running in its own domain, and the system schedules virtual CPUs and memory within each virtual machine (VM) to make the best use of the available resources; each guest operating system schedules its applications accordingly. Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which guest operating systems can run; no modifications are needed in the guest OS or applications, which are unaware of the virtualized environment and run normally. Para-virtualization requires modification of the guest operating systems running on the virtual machines: these guest operating systems are aware that they are running on a virtual machine, and provide near-native performance. Both approaches can be deployed across various virtualized systems. Para-virtualization is OS-assisted virtualization, in which some modifications are made to the guest operating system to enable better performance: the guest knows it is running on virtualized rather than bare hardware, and its device drivers coordinate with those of the host operating system to reduce the performance overheads. The use of para-virtualization [0] is intended to avoid the bottleneck associated with slow hardware interrupts that exists when full virtualization is employed.

It has been shown [0] that para-virtualization does not impose significant performance overhead in high-performance computing, and this in turn has implications for the use of cloud computing for hosting HPC applications. The apparent improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation. To support this hypothesis, it is first necessary to define exactly what is meant by a "class" of application, and second to observe application performance both within a virtual machine and when executing on bare hardware. A further potential complication is the need for cloud service providers to support Service Level Agreements (SLAs), so that system utilisation can be audited.
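A minimal sketch of the overhead comparison the paper sets up: relative slowdown of a benchmark under para-virtualization, and with monitoring and logging enabled, versus bare metal. The timings are invented placeholders, not measured results.

```python
# Hedged sketch of the intended comparison: percentage slowdown of a
# benchmark relative to bare metal. Timings are invented placeholders,
# not measurements from Thamesblue, Hactar or the JS20.
def overhead_pct(baseline_s: float, measured_s: float) -> float:
    """Percentage slowdown relative to the bare-metal baseline."""
    return 100.0 * (measured_s - baseline_s) / baseline_s

runs = {                          # hypothetical wall-clock times (s)
    "bare metal": 100.0,
    "para-virtualized": 104.0,
    "para-virtualized + logging": 109.0,
}
for system, t in runs.items():
    print(f"{system:28s} {overhead_pct(runs['bare metal'], t):5.1f}% overhead")
```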