145 results for Cloud discharge


Relevance:

20.00%

Publisher:

Abstract:

Despite the compelling case for moving towards cloud computing, the upstream oil & gas industry faces several technical challenges—most notably, a pronounced emphasis on data security, a reliance on extremely large data sets, and significant legacy investments in information technology infrastructure—that make a full migration to the public cloud difficult at present. Private and hybrid cloud solutions have consequently emerged within the industry to yield as much benefit from cloud-based technologies as possible while working within these constraints. This paper argues, however, that the move to private and hybrid clouds will very likely prove only to be a temporary stepping stone in the industry's technological evolution. By presenting evidence from other market sectors that have faced similar challenges in their journey to the cloud, we propose that enabling technologies and conditions will probably fall into place in a way that makes the public cloud a far more attractive option for the upstream oil & gas industry in the years ahead. The paper concludes with a discussion about the implications of this projected shift towards the public cloud, and calls for more of the industry's services to be offered through cloud-based “apps.”

Relevance:

20.00%

Publisher:

Abstract:

Background: Decreased ability to perform Activities of Daily Living (ADLs) during hospitalisation has negative consequences for patients and health service delivery. Objective: To develop an Index to stratify patients at lower and higher risk of a significant decline in ability to perform ADLs at discharge. Design: Prospective two-cohort study comprising a derivation cohort (n=389; mean age 82.3 years; SD 7.1) and a validation cohort (n=153; mean age 81.5 years; SD 6.1). Patients and setting: General medical patients aged ≥ 70 years admitted to three university-affiliated acute care hospitals in Brisbane, Australia. Measurement and main results: The short ADL Scale was used to identify a significant decline in ability to perform ADLs from premorbid status to discharge. In the derivation cohort, 77 patients (19.8%) experienced a significant decline. Four significant factors were identified for patients independent at baseline: 'requiring moderate assistance to being totally dependent on others with bathing'; 'difficulty understanding others (frequently or all the time)'; 'requiring moderate assistance to being totally dependent on others with performing housework'; and a 'history of at least one fall in the 90 days prior to hospital admission'; in addition, 'independent at baseline' was protective against decline at discharge. 'Difficulty understanding others (frequently or all the time)' and 'requiring moderate assistance to being totally dependent on others with performing housework' were also predictors for patients dependent in ADLs at baseline. Sensitivity, specificity, Positive Predictive Value (PPV) and Negative Predictive Value (NPV) of the dichotomised DADLD risk scores were: 83.1% (95% CI 72.8; 90.7); 60.5% (95% CI 54.8; 65.9); 34.2% (95% CI 27.5; 41.5); and 93.5% (95% CI 89.2; 96.5). In the validation cohort, 47 patients (30.7%) experienced a significant decline. Sensitivity, specificity, PPV and NPV of the DADLD were: 78.7% (95% CI 64.3; 89.3); 69.8% (95% CI 60.1; 78.3); 53.6% (95% CI 41.2; 65.7); and 88.1% (95% CI 79.2; 94.1). Conclusions: The DADLD Index is a useful tool for identifying patients at higher risk of decline in ability to perform ADLs at discharge.
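The screening metrics reported for the DADLD follow directly from a 2x2 confusion matrix. The sketch below (Python) recomputes them from approximate counts back-calculated from the derivation-cohort figures; the counts are illustrative reconstructions, not the published table.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard screening metrics from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)   # decliners correctly flagged as high risk
    specificity = tn / (tn + fp)   # non-decliners correctly flagged as low risk
    ppv = tp / (tp + fp)           # probability of decline given a high-risk score
    npv = tn / (tn + fn)           # probability of no decline given a low-risk score
    return sensitivity, specificity, ppv, npv

# Approximate counts reconstructed from the derivation cohort (n=389, 77 decliners);
# they reproduce roughly 83.1% / 60.5% / 34.2% / 93.5% as reported above.
print(diagnostic_metrics(tp=64, fp=123, fn=13, tn=189))
```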

Relevance:

20.00%

Publisher:

Relevance:

20.00%

Publisher:

Abstract:

Model calculations, which include the effects of turbulence during subsequent solar nebula evolution after the collapse of a cool interstellar cloud, can reconcile some of the apparent differences between physical parameters obtained from theory and the cosmochemical record. Two important aspects of turbulence in a protoplanetary cloud are the growth and transport of solid grains. While the physical effects of the process can be calculated and compared with the probable remains of the nebula formation period, the more subtle effects on primitive grains and their survival in the cosmochemical record cannot be readily evaluated. The Space Station (or Space Shuttle) experimental facility offers the vacuum and low-gravity conditions, sustained for the long periods required, for experimental verification of these cosmochemical models.

Relevance:

20.00%

Publisher:

Abstract:

Normally, vehicles queued at an intersection do not reach the maximum flow rate until after the fourth vehicle, which results in start-up lost time. This research demonstrated that the Enlarged Stopping Distance (ESD) concept could assist in reducing this start-up time and therefore increase traffic flow capacity at signalised intersections. In essence, ESD gives a queued vehicle sufficient space to begin accelerating at the same time as the vehicle in front, without having to wait for it to depart, hence reducing start-up lost time. In practice, the ESD concept would be most effective when the stopping distance between the first and second vehicles is enlarged, allowing faster clearance of the intersection.
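As an illustration of how ESD reduces start-up lost time, the sketch below (Python) compares hypothetical departure headways for the first vehicles in a queue; the headway values and the 2.0 s saturation headway are assumptions for the example, not measurements from this research.

```python
# Start-up lost time = total excess of the early departure headways over the saturation headway.
SATURATION_HEADWAY = 2.0  # seconds, assumed

def startup_lost_time(headways, h_sat=SATURATION_HEADWAY):
    return sum(max(0.0, h - h_sat) for h in headways)

# Hypothetical headways (s) for the first six queued vehicles.
conventional = [3.8, 3.1, 2.7, 2.3, 2.0, 2.0]  # saturation flow reached around the 4th-5th vehicle
with_esd     = [3.0, 2.4, 2.1, 2.0, 2.0, 2.0]  # earlier simultaneous acceleration shortens the early headways

print(startup_lost_time(conventional))  # ~3.9 s lost per cycle
print(startup_lost_time(with_esd))      # ~1.5 s lost per cycle
```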

Relevance:

20.00%

Publisher:

Abstract:

"The music industry is going through a period of immense change brought about in part by the digital revolution. What is the role of music in the age of computers and the internet? How has the music industry been transformed by the economic and technological upheavals of recent years, and how is it likely to change in the future? This is the first major study of the music industry in the new millennium. Wikström provides an international overview of the music industry and its future prospects in the world of global entertainment. He illuminates the workings of the music industry, and captures the dynamics at work in the production of musical culture between the transnational media conglomerates, the independent music companies and the public." -- back cover Table of Contents Introduction: Music in the Cloud Chapter 1: A Copyright Industry. Chapter 2: Inside the Music Industry Chapter 3: Music and the Media Chapter 4: Making Music - An Industrial or Creative Process Chapter 5: The Social and Creative Music Fan Chapter 6: Future Sounds

Relevance:

20.00%

Publisher:

Abstract:

Groundwater flow models are usually characterized as either transient flow models or steady state flow models. Given that steady state groundwater flow conditions arise as a long-time asymptotic limit of a particular transient response, it is natural to seek a finite estimate of the amount of time required for a particular transient flow problem to effectively reach steady state. Here, we introduce the concept of mean action time (MAT) to address a fundamental question: How long does it take for a groundwater recharge or discharge process to effectively reach steady state? This concept relies on identifying a cumulative distribution function, $F(t;x)$, which increases from $F(0;x)=0$ to $F(t;x) \to 1$ as $t\to \infty$, thereby providing a measurement of the progress of the system towards steady state. The MAT corresponds to the mean of the associated probability density function $f(t;x) = \dfrac{dF}{dt}$, and we demonstrate that this framework provides useful analytical insight by explicitly showing how the MAT depends on the parameters in the model and the geometry of the problem. Additional theoretical results relating to the variance of $f(t;x)$, known as the variance of action time (VAT), are also presented. To test our theoretical predictions we include measurements from a laboratory-scale experiment describing flow through a homogeneous porous medium. The laboratory data confirm that the theoretical MAT predictions are in good agreement with measurements from the physical model.
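For reference, the quantities described above are just the first two central moments of $f(t;x)$; a compact statement of the definitions implied by the abstract (assuming $F$ is normalised so that $F(t;x) \to 1$) is

$$
F(t;x)=\int_{0}^{t} f(s;x)\,ds, \qquad
\mathrm{MAT}(x)=\int_{0}^{\infty} t\, f(t;x)\,dt, \qquad
\mathrm{VAT}(x)=\int_{0}^{\infty}\bigl(t-\mathrm{MAT}(x)\bigr)^{2} f(t;x)\,dt.
$$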

Relevance:

20.00%

Publisher:

Abstract:

The main theme of this thesis is to allow the users of cloud services to outsource their data without the need to trust the cloud provider. The method is based on combining existing proof-of-storage schemes with distance-bounding protocols. Specifically, cloud customers will be able to verify the confidentiality, integrity, availability, fairness (or mutual non-repudiation), data freshness, geographic assurance and replication of their stored data directly, without having to rely on the word of the cloud provider.
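The geographic-assurance ingredient rests on distance bounding: the verifier times a challenge-response round and converts the round-trip time into an upper bound on the server's distance. The Python sketch below shows only that timing step, under assumed names and an ideal speed-of-light bound; binding the response to the stored data (the proof-of-storage part) is omitted.

```python
import hmac, hashlib, os, time

SPEED_OF_LIGHT_KM_PER_S = 299_792.458

def respond(key: bytes, nonce: bytes) -> bytes:
    # Stand-in response; a real scheme would derive this from the stored file blocks.
    return hmac.new(key, nonce, hashlib.sha256).digest()

def distance_upper_bound_km(rtt_s: float, processing_s: float = 0.0) -> float:
    # Signals travel at most at light speed, so distance <= c * (rtt - processing) / 2.
    return max(0.0, rtt_s - processing_s) * SPEED_OF_LIGHT_KM_PER_S / 2

# Verifier side: send a fresh nonce, time the round trip, check the response and the bound.
key, nonce = os.urandom(32), os.urandom(16)
t0 = time.perf_counter()
reply = respond(key, nonce)            # in practice this round trip goes to the remote server
rtt = time.perf_counter() - t0
valid = hmac.compare_digest(reply, respond(key, nonce))
print(valid, distance_upper_bound_km(rtt))
```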

Relevance:

20.00%

Publisher:

Abstract:

Retaining customers is a relevant topic throughout all service industries. However, only limited attention has been directed towards studying the antecedents of subscription renewal in the context of operational cloud enterprise systems. Cloud services have historically been offered as subscription-based services with the (theoretical) possibility of seamless service cancellation, in contrast to classical IT outsourcing contracts or license-based on-premise enterprise system installations. In this work, we investigate the central concept of subscription renewal by focusing on different facets of IS success and their relevance for distinct employee cohorts. Analyzing inter-cohort differences has strong practical implications, as it helps IT vendors focus on specific IT-related factors when trying to retain customers. An empirical study was therefore undertaken. The hypotheses were developed at an individual level and tested using survey responses from IT decision makers within companies that had adopted cloud enterprise systems. The gathered data were then analyzed using PLS. The results show that the subscription renewal intention of the strategic cohort is based mainly on perceived system quality, whereas information quality explains most of the variance of subscription renewal in the management cohort. Beyond its contributions specific to cloud enterprise systems, the work adds to the theoretical body of research on IS success and IS continuance, as well as stakeholder perspectives.

Relevance:

20.00%

Publisher:

Abstract:

Cloud computing is an emerging computing paradigm in which IT resources are provided over the Internet as a service to users. One such service offered through the Cloud is Software as a Service, or SaaS. SaaS can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. SaaS is receiving substantial attention today from both software providers and users, and analyst firms also predict a positive future market for it. This raises new challenges for SaaS providers managing SaaS, especially in large-scale data centres such as the Cloud. One of these challenges is managing Cloud resources for SaaS in a way that guarantees SaaS performance while optimising resource use. Extensive research on the resource optimisation of Cloud services has not yet addressed the challenges of managing resources for composite SaaS. This research addresses this gap by focusing on three new problems of composite SaaS: placement, clustering and scalability. The overall aim is to develop efficient and scalable mechanisms that facilitate the delivery of high-performance composite SaaS for users while optimising the resources used. All three problems are highly constrained, large-scale and complex combinatorial optimisation problems; therefore, evolutionary algorithms are adopted as the main technique for solving them. The first research problem concerns how a composite SaaS is placed onto Cloud servers to optimise its performance while satisfying the SaaS resource and response time constraints. Existing research on this problem often ignores the dependencies between components and considers placement of a homogeneous type of component only. A precise formulation of the composite SaaS placement problem is presented, and a classical genetic algorithm and two versions of cooperative co-evolutionary algorithms are designed to manage the placement of heterogeneous types of SaaS components together with their dependencies, requirements and constraints. Experimental results demonstrate the efficiency and scalability of these new algorithms. In the second problem, SaaS components are assumed to be already running on Cloud virtual machines (VMs); however, due to the environment of a Cloud, the current placement may need to be modified. Existing techniques have focused mostly on the infrastructure level rather than the application level. This research addressed the problem at the application level by clustering suitable components onto VMs to optimise the resources used and to maintain SaaS performance. Two versions of grouping genetic algorithms (GGAs) are designed to cater for the group structure of a composite SaaS: the first GGA uses a repair-based method and the second a penalty-based method to handle the problem constraints. The experimental results confirmed that the GGAs always produced a better reconfiguration placement plan than a common heuristic for clustering problems. The third research problem deals with the replication or deletion of SaaS instances to cope with the SaaS workload. Determining a scaling plan that minimises the resources used while maintaining SaaS performance is a critical task, and the problem involves constraints and interdependencies between components that make solutions even more difficult to find. A hybrid genetic algorithm (HGA) was developed to solve this problem by exploring the problem search space through its genetic operators and fitness function to determine the SaaS scaling plan. The HGA also uses the problem's domain knowledge to ensure that solutions meet the problem's constraints and achieve its objectives. The experimental results demonstrated that the HGA consistently outperforms a heuristic algorithm, achieving a low-cost scaling and placement plan. This research has identified three significant new problems for composite SaaS in the Cloud, and the various evolutionary algorithms developed to address them contribute to the field of evolutionary computation. The algorithms provide solutions for efficient resource management of composite SaaS in the Cloud, resulting in a low total cost of ownership for users while guaranteeing SaaS performance.
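To make the penalty-based constraint handling concrete, the sketch below (Python) scores a component-to-VM assignment by the number of VMs used plus a penalty for capacity violations, and improves it with a simple mutation-only loop. The component sizes, VM capacity, penalty weight and search loop are illustrative assumptions, not the grouping genetic algorithms evaluated in the thesis.

```python
import random

COMPONENT_CPU = [2, 1, 3, 2, 1, 2]   # assumed resource demand of each SaaS component
VM_CAPACITY = 4                      # assumed capacity of each VM
PENALTY = 100                        # cost per unit of capacity violation

def cost(assignment, n_vms):
    """VMs used, plus a penalty for every overloaded VM (penalty-based constraint handling)."""
    load = [0] * n_vms
    for demand, vm in zip(COMPONENT_CPU, assignment):
        load[vm] += demand
    vms_used = sum(1 for l in load if l > 0)
    violation = sum(max(0, l - VM_CAPACITY) for l in load)
    return vms_used + PENALTY * violation

def improve(n_vms=4, iterations=2000, seed=0):
    rng = random.Random(seed)
    best = [rng.randrange(n_vms) for _ in COMPONENT_CPU]
    for _ in range(iterations):
        cand = best[:]
        cand[rng.randrange(len(cand))] = rng.randrange(n_vms)   # mutate one component's VM
        if cost(cand, n_vms) <= cost(best, n_vms):
            best = cand
    return best, cost(best, n_vms)

print(improve())   # a feasible low-cost assignment: no violation penalty, few VMs used
```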

Relevance:

20.00%

Publisher:

Abstract:

In this research, we suggest appropriate information technology (IT) governance structures to manage cloud computing resources. Interest in acquiring IT resources as a utility is gaining momentum. Cloud computing resources present organizations with opportunities to manage their IT expenditure on an ongoing basis, and provide organizations with access to modern IT resources to innovate and manage their continuity. However, cloud computing resources are no silver bullet. Organizations need appropriate governance structures and policies in place to ensure their effective management and their fit into existing business processes, in order to leverage the promised opportunities. Using a mixed method design, we identified four possible governance structures for managing cloud computing resources: a chief cloud officer, a cloud management committee, a cloud service facilitation centre, and a cloud relationship centre. These governance structures ensure appropriate direction of cloud computing resources from their acquisition to their fit into the organization's business processes.

Relevance:

20.00%

Publisher:

Abstract:

This research suggests information technology (IT) governance structures to manage cloud computing resources. Interest in acquiring IT resources as a utility from the cloud is gaining momentum. Cloud computing resources present organizations with opportunities to manage their IT expenditure on an ongoing basis, and provide organizations with access to modern IT resources to innovate and manage their continuity. However, cloud computing resources are no silver bullet. Organizations need appropriate governance structures and policies in place to manage their cloud resources, and the subsequent decisions from these governance structures will ensure effective management of those resources. This management will facilitate a better fit of cloud resources into organizations' existing processes, helping them to achieve business (process-level) and financial (firm-level) objectives. Using a triangulation approach, we suggest four possible governance structures for managing cloud computing resources: a chief cloud officer, a cloud management committee, a cloud service facilitation centre, and a cloud relationship centre. We also propose that these governance structures relate directly to organizations' cloud-related business objectives and indirectly to their cloud-related financial objectives. Perceptual field-survey data from actual and prospective cloud service adopters confirmed that the suggested structures would contribute directly to cloud-related business objectives and indirectly to cloud-related financial objectives.

Relevance:

20.00%

Publisher:

Abstract:

Electrostatic discharges have been identified as the most likely cause in a number of fire and explosion incidents with unexplained ignitions. The lack of data and suitable models for this ignition mechanism creates a void in the analysis needed to quantify the importance of static electricity as a credible ignition mechanism. Quantifiable hazard analysis of the risk of ignition by static discharge cannot, therefore, be fully carried out with our current understanding of this phenomenon. The study of electrostatics has been ongoing for a long time; however, it was not until the widespread use of electronics that research was developed for protecting electronics from electrostatic discharges. Current experimental models for electrostatic discharge, developed for the intrinsic safety of electronics, are inadequate for ignition analysis and are typically not supported by theoretical analysis. A preliminary simulation and experiment at low voltage were designed to investigate the characteristics of energy dissipation and provided a basis for a high-voltage investigation. It was seen that, at low voltage, the discharge energy represents about 10% of the initial capacitive energy available and that the energy dissipation occurred within 10 ns of the initial discharge. The potential difference is greatest at the initial breakdown, when the largest amount of the energy is dissipated; the discharge pathway is then established and minimal energy is dissipated, as energy dissipation becomes strongly influenced by other components and stray resistance in the discharge circuit. From the initial low-voltage simulation work, the importance of the energy dissipation and the characteristics of the discharge were determined. After the preliminary low-voltage work was completed, a high-voltage discharge experiment was designed and fabricated. Voltage and current measurements were recorded on the discharge circuit, allowing the discharge characteristic to be recorded and the energy dissipation in the discharge circuit to be calculated. The discharge energy calculations were consistent with the low-voltage work, with about 30-40% of the total initial capacitive energy being discharged in the resulting high-voltage arc. After the system was characterised and its operation validated, high-voltage ignition energy measurements were conducted on a solution of n-pentane evaporating in a 250 cm³ chamber. A series of ignition experiments was conducted to determine the minimum ignition energy of n-pentane. The data from the ignition work were analysed with standard statistical regression methods for tests that return binary (yes/no) data and found to be in agreement with recent publications. The research demonstrates that energy dissipation depends heavily on the circuit configuration, most especially on the discharge circuit's capacitance and resistance. The analysis established a discharge profile for the discharges studied and validates the application of this methodology for further research into different materials and atmospheres, by systematically examining the discharge profiles of test materials under various parameters (e.g., capacitance, inductance, and resistance). Systematic experiments examining the discharge characteristics of the spark will also help explain how energy is dissipated in an electrostatic discharge, enabling a better understanding of the ignition characteristics of materials in terms of energy and the dissipation of that energy in an electrostatic discharge.
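The energy bookkeeping described above can be made concrete with an idealised RC discharge in which the total circuit resistance is split between the arc and stray resistance. In the sketch below (Python) the capacitance, charging voltage and the arc/stray split are assumptions chosen only to show how a resistive split yields an arc energy of roughly 30-40% of the stored capacitive energy, in the spirit of the high-voltage results; it is not a model of the apparatus used.

```python
import numpy as np

C = 100e-12                  # discharge capacitance (F), assumed
V0 = 10e3                    # charging voltage (V), assumed
R_ARC, R_STRAY = 20.0, 30.0  # assumed split between arc and stray circuit resistance (ohm)
R = R_ARC + R_STRAY
tau = R * C                  # RC time constant of the discharge

stored = 0.5 * C * V0**2     # energy initially stored on the capacitor, E = 1/2 C V^2

# Ideal RC discharge "traces", standing in for the measured voltage and current records.
t = np.linspace(0.0, 20 * tau, 2001)
i = (V0 / R) * np.exp(-t / tau)       # discharge current
v_arc = R_ARC * i                     # voltage across the arc only
arc_energy = np.trapz(v_arc * i, t)   # E_arc = integral of v_arc(t) * i(t) dt

print(f"stored {stored*1e3:.2f} mJ, arc {arc_energy*1e3:.2f} mJ "
      f"({100 * arc_energy / stored:.0f}% of stored)")   # ~40% with this assumed split
```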