Abstract:
Service user and carer involvement (SUCI) in social work education in England is required by the profession’s regulator, the Health and Care Professions Council. However, a recent study of 83 HEIs in England reported that, despite considerable progress in SUCI, there is no evidence that the learning derived from it is being transferred to social work practice. In this article we describe a study that examines the question: ‘What impact does SUCI have on the skills, knowledge and values of student social workers at the point of qualification and beyond?’ Students at universities in England and Northern Ireland completed online questionnaires and participated in focus groups, both immediately pre-qualification and six to nine months post-qualification. From our findings, we identify four categories that influence the impact of service user involvement on students’ learning: student factors, service user and carer factors, programme factors, and practice factors; each comprises a number of sub-categories. We propose that the resulting model can be used by social work educators, service user and carer contributors and practitioners to maximise the impact of SUCI. We argue that our findings also have implications for employment-based learning routes and post-qualifying education.
Abstract:
Demand Side Management (DSM) plays an important role in the Smart Grid. It involves large-scale access points, massive numbers of users, heterogeneous infrastructure and dispersed participants. Cloud computing, a service model characterised by on-demand resources, high reliability and large-scale integration, and game theory, a useful tool for analysing dynamic economic phenomena, both lend themselves to this setting. In this study, a ‘cloud + end’ scheme is proposed to solve the technical and economic problems of DSM. The cloud + end architecture is designed to address the technical problems, while a cloud + end construction model based on game theory addresses the economic problems. The proposed method is tested on a DSM cloud + end public service system built in a city in southern China. The results demonstrate the feasibility of these integrated solutions, which can serve as a reference for the popularisation and application of DSM in China.
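The abstract does not give the game model itself; as a hedged illustration of how a game-theoretic DSM scheme can flatten aggregate load, the sketch below assumes a simple quadratic-cost consumption game in which each user spreads a fixed daily energy budget across hours by best-responding to the others’ aggregate load. All names and parameter values (two users, three hours, the budgets) are illustrative, not from the paper.

```python
# Hypothetical DSM game sketch: each user allocates a fixed energy
# budget across hours, best-responding to the others' aggregate load
# until the schedule converges. Values are illustrative.

def best_response(own_budget, others_load):
    # With an hourly cost proportional to (total load)^2, the best
    # response equalises total load across hours where possible:
    # own_h = target - others_h, clipped at zero, then renormalised.
    H = len(others_load)
    target = (own_budget + sum(others_load)) / H
    resp = [max(0.0, target - o) for o in others_load]
    s = sum(resp)
    return [r * own_budget / s for r in resp]

def nash_schedule(budgets, hours, iters=50):
    # Everyone starts with their whole budget in hour 0 (a "peak").
    loads = [[b] + [0.0] * (hours - 1) for b in budgets]
    for _ in range(iters):
        for i, b in enumerate(budgets):
            others = [sum(loads[j][h] for j in range(len(budgets)) if j != i)
                      for h in range(hours)]
            loads[i] = best_response(b, others)
    return loads

schedule = nash_schedule([6.0, 3.0], hours=3)
total = [sum(u[h] for u in schedule) for h in range(3)]
print(total)  # -> [3.0, 3.0, 3.0]: the initial peak is flattened
```

At the fixed point the aggregate load is flat across hours, which is the qualitative outcome such DSM games aim for; the paper’s actual construction model is richer than this toy.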
Abstract:
When orchestrating Web service workflows, the geographical placement of the orchestration engine(s) can greatly affect workflow performance. Data may have to be transferred across long geographical distances, which in turn increases execution time and degrades the overall performance of a workflow. In this paper, we present a framework that, given a DAG-based workflow specification, computes the optimal Amazon EC2 cloud regions in which to deploy the orchestration engines and execute a workflow. The framework incorporates a constraint model that solves the workflow deployment problem, generated using an automated constraint modelling system. The feasibility of the framework is evaluated by executing different sample workflows representative of scientific workloads. The experimental results indicate that the framework reduces workflow execution time, providing a speed-up of 1.3x-2.5x over centralised approaches.
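The core placement decision can be illustrated with a much simpler sketch than the paper’s constraint model: pick the region that minimises total data-transfer time for a single engine. The region names, per-gigabyte transfer times and data volumes below are made-up assumptions, and the brute-force search stands in for the automated constraint solver.

```python
# Illustrative region-selection sketch: choose the cloud region that
# minimises total data-transfer time. Figures are hypothetical
# seconds to move 1 GB between each data set and each region.
transfer_cost = {
    "us-east-1":  {"inputs": 4.0,  "outputs": 9.0},
    "eu-west-1":  {"inputs": 7.0,  "outputs": 3.0},
    "ap-south-1": {"inputs": 12.0, "outputs": 11.0},
}
data_gb = {"inputs": 10.0, "outputs": 2.0}  # workflow data volumes

def best_region(costs, volumes):
    # Total transfer time for placing the engine in a given region.
    def total_time(region):
        return sum(costs[region][d] * gb for d, gb in volumes.items())
    return min(costs, key=total_time)

print(best_region(transfer_cost, data_gb))  # -> us-east-1
```

A real DAG workflow adds inter-task edges and multiple engines, which is why the paper generates a constraint model instead of enumerating placements by hand.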
Abstract:
We present two physical layer secure transmission schemes for multi-user multi-relay networks, where the communication from M users to the base station is assisted by direct links and by N decode-and-forward relays. In this network, we consider that a passive eavesdropper exists to overhear the transmitted information, which entails exploiting the advantages of both direct and relay links for physical layer security enhancement. To fulfill this requirement, we investigate two criteria for user and relay selection and examine the achievable secrecy performance. Criterion I performs a joint user and relay selection, while Criterion II performs separate user and relay selections, with a lower implementation complexity. We derive a tight lower bound on the secrecy outage probability for Criterion I and an accurate analytical expression for the secrecy outage probability for Criterion II. We further derive the asymptotic secrecy outage probabilities at high transmit signal-to-noise ratios and high main-to-eavesdropper ratios for both criteria. We demonstrate that the secrecy diversity order is min(MN, M + N) for Criterion I, and N for Criterion II. Finally, we present numerical and simulation results to validate the proposed analysis, and show the conditions under which a secrecy outage probability floor occurs.
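A secrecy outage occurs when the secrecy capacity (main-channel capacity minus eavesdropper-channel capacity) falls below the target rate. The Monte Carlo sketch below illustrates that definition under a simplified joint selection in the spirit of Criterion I: it ignores the direct links, models every link as Rayleigh fading (exponential channel gains), and picks the user-relay pair with the best end-to-end gain. All parameter values are illustrative assumptions, not the paper’s model.

```python
# Monte Carlo sketch of secrecy outage under joint user/relay
# selection. Simplifications: relay path only (no direct links),
# unit-mean exponential channel gains, illustrative SNRs and rate.
import math
import random

random.seed(1)

def secrecy_outage(M, N, snr_main, snr_eve, rate, trials=20000):
    outages = 0
    for _ in range(trials):
        # Joint selection: best end-to-end main-channel gain over all
        # M*N user-relay pairs; a DF path is limited by its weaker hop.
        best_main = max(
            min(random.expovariate(1.0), random.expovariate(1.0))
            for _ in range(M * N))
        eve = random.expovariate(1.0)  # eavesdropper's channel gain
        c_secrecy = max(0.0, math.log2(1 + snr_main * best_main)
                             - math.log2(1 + snr_eve * eve))
        if c_secrecy < rate:
            outages += 1
    return outages / trials

p = secrecy_outage(M=2, N=2, snr_main=100.0, snr_eve=10.0, rate=1.0)
print(p)  # empirical secrecy outage probability
```

Sweeping `snr_eve` upward in this sketch shows the outage probability flattening rather than vanishing, the qualitative "outage floor" behaviour the abstract refers to.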
Abstract:
Cloud data centres are implemented as large-scale clusters with demanding requirements for service performance, availability and cost of operation. As a result of scale and complexity, data centres typically exhibit large numbers of system anomalies resulting from operator error, resource over- or under-provisioning, hardware or software failures and security issues. These anomalies are inherently difficult to identify and resolve promptly via human inspection. Therefore, it is vital for a cloud system to have automatic monitoring that detects potential anomalies and identifies their source. In this paper we present a lightweight anomaly detection tool (LADT) for cloud data centres which combines extended log analysis with rigorous correlation of system metrics, implemented by an efficient correlation algorithm that requires no training or complex infrastructure set-up. The LADT algorithm is based on the premise that there is a strong correlation between node-level and VM-level metrics in a cloud system. This correlation drops significantly in the event of a performance anomaly at the node level, and a sustained drop in correlation can indicate the presence of a true anomaly in the node. LADT’s log analysis helps determine whether a correlation drop could be caused by naturally occurring cloud management activity such as VM migration, creation, suspension, termination or resizing. In this way, potential anomaly alerts are reasoned about to prevent false positives caused by the cloud operator’s activity. We demonstrate LADT with log analysis in a cloud environment, showing how log analysis is combined with the correlation of system metrics to achieve accurate anomaly detection.
Abstract:
Scheduling jobs with deadlines, each of which defines the latest time by which a job must be completed, can be challenging on the cloud due to incurred costs and unpredictable performance. The problem is further complicated when there is not enough information to schedule a job so that its deadline is satisfied and its cost minimised. In this paper, we present an approach to scheduling jobs, whose performance is unknown before execution, with deadlines on the cloud. By performing a sampling phase to collect the necessary information about those jobs, our approach delivers scheduling decisions within 10% of the cost and 16% of the violation rate of an ideal setting with complete knowledge of each job from the beginning. The proposed algorithm also outperforms existing approaches that use a fixed amount of resources, reducing the violation cost by at least a factor of two.
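The sampling idea can be sketched as follows: run a small sample of a job’s tasks to estimate per-task runtime, then choose the cheapest number of identical VMs that still meets the deadline. The prices, runtimes, VM cap and the perfect-load-balancing assumption below are all illustrative, not the paper’s actual model.

```python
# Hedged sketch of sampling-based deadline scheduling: estimate task
# runtime from a sample, then pick the cheapest VM count that meets
# the deadline under an assumed linear speed-up.

def plan(sample_runtimes, total_tasks, deadline_s, vm_price_per_s,
         max_vms=64):
    est = sum(sample_runtimes) / len(sample_runtimes)  # mean task time
    for n in range(1, max_vms + 1):
        # Perfect load balancing: makespan = ceil(tasks / n) * est.
        makespan = -(-total_tasks // n) * est
        if makespan <= deadline_s:
            return n, makespan * n * vm_price_per_s  # (VMs, est. cost)
    return None  # deadline unreachable within the VM cap

sample = [30.0, 34.0, 32.0]  # seconds, from the sampling phase
print(plan(sample, total_tasks=100, deadline_s=600.0,
           vm_price_per_s=0.0001))  # smallest feasible VM count
```

Searching VM counts upward returns the cheapest feasible configuration first, which is the trade-off the abstract describes: spend a little on sampling to avoid both over-provisioning and deadline violations.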