881 results for "offshore service providers"

Relevance: 80.00%

Abstract:

By 2010, cloud computing had become established as a new model of IT provisioning for service providers. New market players and businesses emerged, threatening the business models of established vendors. This teaching case explores the challenges that the new cloud computing technology posed for an established, multinational IT service provider called ITSP. Should incumbent vendors adopt cloud computing offerings? And, if so, what form should those offerings take? The teaching case focuses on the strategic dimensions of technological developments and the threats and opportunities they present. It requires strategic decision making and forecasting under high uncertainty. The critical question is whether cloud computing is a disruptive technology or simply an alternative channel for supplying computing resources over the Internet. The case challenges students to assess this new technology and plan ITSP's responses.

Relevance: 80.00%

Abstract:

Objectives: To explicate the organisational change agenda of the COAG coordinated care trials within the Australian health system and to illuminate the role of science in this process. Methods and Results: This article briefly outlines the aims of the COAG coordinated care trial and its effect as a change initiative in rural South Australia. It is proposed that, although the formal trial outcomes are still not clear, the trial had a significant impact upon health service delivery in some sites. The trial used standard research methods, with control and intervention groups and key hypotheses tested to compare the costs and service utilisation profiles of the two groups. Formal results indicate that costs were not significantly different between intervention and control groups across all sites, but the trial nonetheless had a powerful impact on the attitudes and behaviours of service providers, in the rural trial on Eyre Peninsula in particular. Some of the key structural changes now in place are outlined. Conclusions: The COAG trial has had many and varied impacts upon the organisations and individual providers involved with it. It is argued here that, since successive initiatives had been implemented before the final evaluation results were published, the trial served agendas other than those of standard scientific research and hypothesis testing. That is, the main impact of the coordinated care trial, in the Eyre Region at least, has been change by stealth rather than change through scientific research and demonstration. Implications: The COAG trials have set in train a series of structural and procedural changes in the delivery and management of primary health care systems; changes embodied in the Enhanced Primary Care (EPC) packages and other initiatives recently introduced by the Commonwealth Government. These changes have occurred, and are occurring, across the system without formal evidence of their efficacy, suggesting that financial motives, apart from the goal of improving health outcomes for consumers, are driving these new approaches. Moreover, if science is to be used in this way, to drive policy and procedural change ahead of actual outcome evidence, it is important that the subtler agendas of such research projects be examined in future if the integrity of the scientific method is to be maintained. The occurrence of such phenomena questions the very foundation of scientific endeavour and weakens the application of scientific principles in the arena of social and political science.

Relevance: 80.00%

Abstract:

Rapid expansion in the number of cloud service providers (CSPs) has resulted in the introduction of various brokering services and cloud marketplaces designed to help customers assess and choose CSPs. Evaluating marketplaces, as well as the separate sub-offerings supplied by individual CSPs, has become vitally important because cloud marketplaces can now incorporate sub-offerings from different CSPs into a combined package tailored for each customer. In this paper, we introduce a new concept, which we refer to as a cloud omnibus system (COS) and define as a cloud marketing system that evaluates whole cloud marketplaces, CSPs, brokers and the separate service sub-offerings from CSPs (sometimes combined into packages tailored for customers). COS is based on trust and trust management, and goes beyond the state of the art by providing a formal trust-measuring mechanism for customers that distinguishes three types of trust, which we refer to as 'direct', 'relative' and 'transparent'.

Relevance: 80.00%

Abstract:

In cloud environments, IT solutions are delivered to users via shared infrastructure, enabling cloud service providers to deploy applications as services according to users' QoS (Quality of Service) requirements. One consequence of this cloud model is the enormous energy consumption and significant carbon footprint of large cloud infrastructures. A key and common objective of cloud service providers is therefore to develop cloud application deployment and management solutions that minimise energy consumption while guaranteeing the performance and other QoS levels specified in Service Level Agreements (SLAs). However, finding the deployment configuration that maximises energy efficiency while guaranteeing system performance is an extremely challenging task, as it requires evaluating system performance and energy consumption under various workloads and deployment configurations. To simplify this process, we have developed Stress Cloud, an automatic performance and energy consumption analysis tool for cloud applications in real-world cloud environments. Stress Cloud supports the modelling of realistic cloud application workloads, the automatic generation of load tests, and the profiling of system performance and energy consumption. We demonstrate the utility of Stress Cloud by analysing the performance and energy consumption of a cloud application under a broad range of deployment configurations.
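The profiling workflow described above, running a workload under each candidate deployment configuration and recording both performance and energy, can be sketched in miniature. Everything below (the configuration names, the assumed power draws, the stand-in workload) is invented for illustration; the real Stress Cloud tool instruments actual cloud deployments:

```python
import time

# Hypothetical mini-profiler in the spirit of Stress Cloud: run a workload
# under several "deployment configurations" and record latency plus a crude
# energy estimate (assumed average power draw x elapsed time).

def profile(workload, configs):
    results = {}
    for name, cfg in configs.items():
        start = time.perf_counter()
        workload(cfg["work_items"])
        elapsed = time.perf_counter() - start
        results[name] = {"seconds": elapsed, "joules": cfg["watts"] * elapsed}
    return results

def workload(n):
    """Stand-in CPU-bound task."""
    return sum(i * i for i in range(n))

configs = {
    "small": {"work_items": 100_000, "watts": 50},   # assumed power draw
    "large": {"work_items": 400_000, "watts": 120},
}
report = profile(workload, configs)
for name, r in report.items():
    print(f"{name}: {r['seconds']:.4f} s, ~{r['joules']:.2f} J")
```

A real tool would replace the wall-clock-times-wattage estimate with measured power data and generate the load remotely against a deployed application.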

Relevance: 80.00%

Abstract:

BACKGROUND: People with communication disability often struggle to convey their health information to multiple service providers and are at increased risk of adverse health outcomes related to the poor exchange of health information. OBJECTIVE: The purpose of this article was (a) to review the literature informing future research on the Australian personally controlled electronic health record, 'My Health Record' (MyHR), specifically research that includes people with communication disability and their family members or service providers, and (b) to propose a range of suitable methodologies that might be applied in research to inform training, policy and practice in relation to supporting people with communication disability and their representatives to engage in using MyHR. METHOD: The authors reviewed the literature and, from a cross-disciplinary perspective, considered ways to apply sociotechnical, health informatics and inclusive methodologies to research on MyHR use by adults with communication disability. RESEARCH OUTCOMES: This article outlines a range of research methods suitable for investigating the use of MyHR by people who have communication disability associated with a range of acquired or lifelong health conditions, and by their family members and direct support workers. CONCLUSION: In planning the allocation of funds towards the health and well-being of adults with disabilities, both disability and health service providers must consider the supports needed for people with communication disability to use MyHR. There is an urgent need to focus research efforts on MyHR in populations with communication disability, who struggle to communicate their health information across multiple health and disability service providers. The design of studies and priorities for future research should be set in consultation with people with communication disability and their representatives.

Relevance: 80.00%

Abstract:

The problems of entrenched high unemployment in Australia, and the need to improve the support given to people who are affected by unemployment, require new thinking and new ideas in order to bring about policy change. Therefore, Jobs Australia commissioned this research paper to ask Professor Andrew Scott to elaborate on his analysis of the possible relevance to Australia of the Danish approach to employment security, which he expounded in his 2014 book, Northern Lights. In particular, we asked Professor Scott to outline practical steps which Australia might consider taking that are feasible and realistic: cutting 'with the grain' of Australia's own distinctive institutional and policy approaches in order to shape new, better-designed policies which might reduce the poverty and uncertainty now faced by so many people in this country. It is very important that Australia now learn from overseas, and not look only at English-speaking countries in which, after all, in many cases the problems are worse than ours in terms of higher inequality and larger numbers of long-term unemployed. It is appropriate, in a true spirit of embracing globalisation, to look at the best-performing nations in terms of tackling unemployment, and at what may be learned from them to apply to the challenges we face here in Australia. Jobs Australia is the national peak body representing not-for-profit organisations that help disadvantaged people find work. We are the largest network of employment and related service providers in Australia, and we are funded and owned by our members. I am pleased to endorse the thrust of the arguments put forward in this paper, and for Jobs Australia to publish it in this format in order to open up debate and to seek more engagement from key policy-makers with the ideas presented here.

Relevance: 80.00%

Abstract:

The first North American outbreak of highly pathogenic avian influenza (HPAI) involving a virus of Eurasian A/goose/Guangdong/1/1996 (H5N1) lineage began in the Fraser Valley of British Columbia, Canada in late November 2014. A total of 11 commercial and 1 non-commercial (backyard) operations were infected before the outbreak was terminated. Control measures included movement restrictions that were placed on a total of 404 individual premises, 150 of which were located within a 3 km radius of an infected premise(s) (IP). A complete epidemiological investigation revealed that the source of this HPAI H5N2 virus for 4 of the commercial IPs and the single non-commercial IP likely involved indirect contact with wild birds. Three IPs were associated with the movement of birds or service providers and localized/environmental spread was suspected as the source of infection for the remaining 4 IPs. Viral phylogenies, as determined by Bayesian Inference and Maximum Likelihood methods, were used to validate the epidemiologically inferred transmission network. The phylogenetic clustering of concatenated viral genomes and the median-joining phylogenetic network of the viruses supported, for the most part, the transmission network that was inferred by the epidemiologic analysis.

Relevance: 80.00%

Abstract:

To date, there is no research examining how adults with Amyotrophic Lateral Sclerosis (ALS) or Motor Neurone Disease (MND) and severe communication disability use Twitter, nor on the use of Twitter in relation to ALS/MND beyond fundraising and raising awareness. In this paper we (a) outline a rationale for the use of Twitter as a method of communication and information exchange for adults with ALS/MND, (b) detail the multiple qualitative and quantitative methods used to analyse Twitter networks and tweet content in our studies, and (c) present the results of two studies designed to provide insights into the use of Twitter by an adult with ALS/MND and by the #ALS and #MND hashtag communities on Twitter. We also discuss findings across the studies, implications for health service providers on Twitter, and directions for future Twitter research in relation to ALS/MND.

Relevance: 80.00%

Abstract:

Online social networks make it easier for people to find and communicate with other people on the basis of shared interests, values, membership of particular groups, and so on. Popular social networks such as Facebook and Twitter have hundreds of millions or even billions of users scattered all around the world sharing interconnected data. Users demand low-latency access not only to their own data but also to their friends' data, which is often very large (e.g. videos and pictures). However, social network service providers have limited monetary capital and cannot store every piece of data everywhere to minimise users' data access latency. Geo-distributed cloud services, with virtually unlimited capabilities, are suitable for storing large-scale social network data in different geographical locations. This paper addresses the key problems of how to optimally store and replicate these huge datasets and how to distribute requests across different datacenters. A novel genetic algorithm-based approach finds a near-optimal number of replicas for every user's data and a near-optimal placement of those replicas, minimising monetary cost while satisfying the latency requirements of all users. Experiments on a large Facebook dataset demonstrate our technique's effectiveness in outperforming other representative placement and replication strategies.
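A genetic algorithm for replica placement of the kind the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the datacenters, storage prices, latencies, user data sizes, latency bound and penalty-based fitness are all invented. It evolves, for each user, a set of datacenters holding a replica, keeping storage cost low while meeting the latency bound:

```python
import random

# Invented toy problem: three datacenters, three users (region, data size GB).
DATACENTERS = ["us", "eu", "asia"]
STORAGE_COST = {"us": 1.0, "eu": 1.2, "asia": 0.8}    # $ per GB-month (assumed)
LATENCY = {                                            # ms, user region -> DC (assumed)
    "us":   {"us": 10,  "eu": 90,  "asia": 150},
    "eu":   {"us": 90,  "eu": 10,  "asia": 120},
    "asia": {"us": 150, "eu": 120, "asia": 10},
}
USERS = [("us", 2.0), ("eu", 1.0), ("asia", 3.0)]
LATENCY_BOUND = 50                                     # ms

def cost(placement):
    """Storage cost plus a large penalty per user lacking a nearby replica."""
    total = 0.0
    for (region, size), replicas in zip(USERS, placement):
        nearest = min((LATENCY[region][dc] for dc in replicas), default=10**9)
        if nearest > LATENCY_BOUND:
            total += 1000.0                            # latency-violation penalty
        total += sum(STORAGE_COST[dc] * size for dc in replicas)
    return total

def mutate(placement):
    """Flip one datacenter in or out of one random user's replica set."""
    new = [set(r) for r in placement]
    new[random.randrange(len(new))].symmetric_difference_update(
        {random.choice(DATACENTERS)})
    return new

def evolve(generations=300, pop_size=20, seed=0):
    random.seed(seed)
    pop = [[{random.choice(DATACENTERS)} for _ in USERS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)                             # elitist truncation selection
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return min(pop, key=cost)

best = evolve()
print([sorted(r) for r in best], cost(best))
```

A production version would add crossover, model request rates and inter-datacenter bandwidth, and treat the replica count per user explicitly, as the paper's approach does.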

Relevance: 80.00%

Abstract:

The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications are rendering administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that would help substantially reduce data center management complexity. We specifically addressed two crucial data center operations. First, we precisely estimated the capacity requirements of client virtual machines (VMs) when renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold. Cloud users can size their VMs appropriately and pay only for the resources that they need; service providers can also offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients will pay exactly for the performance they actually experience, while administrators will be able to maximize their total revenue by exploiting application performance models and SLAs. This thesis made the following contributions. First, we identified the resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment. Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Networks and Support Vector Machines, for accurately modeling the performance of virtualized applications. Moreover, we suggested and evaluated the modeling optimizations necessary to improve prediction accuracy when using these tools. Third, we presented an approach to optimal VM sizing that employs the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm that maximizes the SLA-generated revenue of a data center.
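As a stand-in for the ANN and SVM performance models the thesis uses, the core idea of modeling application performance from resource allocations and then inverting the model for VM sizing can be illustrated with a simple least-squares line fitted to synthetic profiling data (all numbers below are assumed, not taken from the thesis):

```python
# Fit throughput = slope * cpu_share + intercept on synthetic profiling data,
# then invert the model to size a VM for a target SLA throughput.

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

cpu_shares = [0.5, 1.0, 1.5, 2.0]      # profiled CPU allocations (assumed)
throughput = [120, 230, 340, 450]      # measured requests/sec (assumed)

slope, intercept = fit_line(cpu_shares, throughput)

def cpu_for_target(target_rps):
    """Invert the model: CPU share predicted to meet the SLA target."""
    return (target_rps - intercept) / slope

print(f"model: rps = {slope:.1f} * cpu + {intercept:.1f}")
print(f"CPU share for 400 rps: {cpu_for_target(400):.2f}")
```

Real virtualized workloads are nonlinear and multi-dimensional (CPU, memory, I/O), which is why the thesis turns to ANNs and SVMs; the inversion step, however, is the same sizing idea.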

Relevance: 80.00%

Abstract:

Directed research project presented to the Faculté des études supérieures et postdoctorales in fulfilment of the requirements for the degree of Master of Science (M.Sc.) in Criminology, Criminalistics and Information option.

Relevance: 80.00%

Abstract:

Purpose – The paper examines, from a practitioner's perspective, the European Quality in Social Services (EQUASS) Assurance standard, a certification programme through which European social service organisations implement a sector-specific quality management system. In particular, it analyses adoption motives, internalisation of the standard, impacts, satisfaction and renewal intentions. Design/methodology/approach – This study uses a cross-sectional, questionnaire-based survey methodology. Of the 381 organisations emailed, 196 responses from eight European countries were considered valid (51.4%). Data from closed-ended questions were analysed using simple descriptive statistics. Content analysis was employed to analyse practitioners' comments on open-ended questions. Findings – The study shows that social service providers typically implement the certification for internal reasons and internalise EQUASS principles and practices in daily usage. EQUASS Assurance produces benefits mainly at the operational and customer levels, whereas its main pitfalls include increased workload and bureaucracy. The majority of respondents (85.2%) are very satisfied or satisfied with the certification, suggesting that it meets their expectations. Certification renewal intentions are also high, although some respondents report that the final decision depends on several factors. Insights gained through the qualitative data are also described. Practical implications – The paper can be helpful to managers, consultants and Local License Holders working (or planning to work) with this standard, and can inform the work of the EQUASS Technical Working Group in the forthcoming revision of the standard. Originality/value – This is the largest survey conducted so far on EQUASS Assurance in terms of number of respondents, participating countries and topics covered.

Relevance: 80.00%

Abstract:

The telecommunications industry is entering a new era. The increased traffic demands imposed by the huge number of always-on connections require a quantum leap in the field of enabling techniques. Furthermore, subscribers expect ever-increasing quality of experience, while network operators and service providers aim for cost-efficient networks. Meeting these requirements will demand a revolutionary change in the telecommunications industry, akin to the success of virtualization in the IT industry, which is now driving the deployment and expansion of cloud computing. Telecommunications providers are currently rethinking their network architecture: from one consisting of a multitude of black boxes with specialized network hardware and software to a new architecture consisting of "white box" hardware running a multitude of specialized network software. This network software may be data plane software providing network functions virtualization (NFV) or control plane software providing centralized network management, i.e. software-defined networking (SDN). It is expected that these architectural changes will permeate networks ranging in size from Internet core networks to metro networks and enterprise networks, and ranging in functionality from converged packet-optical networks to wireless core networks and wireless radio access networks.

Relevance: 80.00%

Abstract:

OBJECTIVE: The goal of this work is to review the current literature on continuity of care in the treatment of people with a dual diagnosis. In particular, this review set out to clarify how continuity of care has been defined, applied, and assessed in treatment, and to enhance its application in both research and clinical practice. METHODS: To identify articles for review, the term "continuity" in combination with "substance" and "treatment" was searched in electronic databases. The search was restricted to quantitative articles published in English after 1980. Papers were required to discuss "continuity" in treatment samples that included a proportion of patients with a dual diagnosis. RESULTS: A total of 18 non-randomized studies met the inclusion criteria. Analysis revealed six core types of continuity in this treatment context: continuity of relationship with provider(s), continuity across services, continuity through transfer, continuity as regularity and intensity of care, continuity as responsiveness to changing patient need, and successful linkage of the patient. Patient age, ethnicity, medical status, living status, and the type of mental health and/or substance use disorder influenced the continuity of care experienced in treatment. Some evidence suggested that achieving continuity of care was associated with positive patient and treatment-related outcomes. CONCLUSIONS: This review summarizes how continuity of care has been understood, applied, and assessed in the literature to date. The findings provide a platform for future researchers and service providers to implement and evaluate continuity of care in a consistent manner and to determine its significance in the treatment of people with a dual diagnosis.

Relevance: 80.00%

Abstract:

The proliferation of cloud computing allows users to flexibly store, re-compute or transfer large generated datasets across multiple cloud service providers. However, under the pay-as-you-go model, the total cost of using cloud services depends on the consumption of storage, computation and bandwidth resources, the three key cost factors for IaaS-based cloud resources. To reduce the total cost of maintaining data, given cloud service providers with different pricing models for their resources, users can flexibly choose a cloud service in which to store a generated dataset, or delete it and choose a cloud service in which to regenerate it whenever it is reused. However, finding the minimum cost is a complicated and hitherto unsolved problem. In this paper, we propose a novel algorithm that calculates the minimum cost of storing and regenerating datasets in clouds, i.e. whether each dataset should be stored or deleted and, furthermore, where to store or regenerate it whenever it is reused. This minimum cost also achieves the best trade-off among computation, storage and bandwidth costs across multiple clouds. Comprehensive analysis and rigorous theorems guarantee the theoretical soundness of the paper, and general (random) simulations conducted with popular cloud service providers' pricing models demonstrate the excellent performance of our approach.
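The core storage-versus-regeneration decision can be illustrated for a single dataset under assumed prices. This toy sketch only compares, per provider, the monthly cost of keeping the dataset stored against deleting it and regenerating (plus transferring) it on each reuse; the paper's algorithm additionally handles chains of derived datasets and combinations of providers, which this does not attempt:

```python
# Illustrative prices and parameters only, not real provider price sheets.

def min_monthly_cost(size_gb, gen_cpu_hours, reuses_per_month, providers):
    """Per provider, take the cheaper of (a) storing the dataset and
    (b) deleting it and regenerating plus transferring it on every reuse;
    return the overall best (provider, strategy, monthly cost)."""
    best = None
    for name, p in providers.items():
        store = p["storage_gb_month"] * size_gb
        regen = reuses_per_month * (p["cpu_hour"] * gen_cpu_hours
                                    + p["egress_gb"] * size_gb)
        strategy, cost = ("store", store) if store <= regen else ("regenerate", regen)
        if best is None or cost < best[2]:
            best = (name, strategy, cost)
    return best

providers = {
    "provider_a": {"storage_gb_month": 0.023, "cpu_hour": 0.10, "egress_gb": 0.09},
    "provider_b": {"storage_gb_month": 0.020, "cpu_hour": 0.12, "egress_gb": 0.08},
}
print(min_monthly_cost(size_gb=500, gen_cpu_hours=2,
                       reuses_per_month=1, providers=providers))
```

With a large dataset that is cheap to compute, the balance tips toward regeneration; with frequent reuse or expensive computation, storage wins, which is exactly the trade-off the paper's minimum-cost algorithm navigates across whole dataset provenance chains.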