975 results for Service Utilization


Relevance:

30.00%

Publisher:

Abstract:

Title from cover.

Relevance:

30.00%

Publisher:

Abstract:

Objectives: The aims of the study were to describe the prevalence and associations of mental health disorder (MHD) among a cohort of HIV-infected patients attending the Victorian HIV/AIDS Service between 1984 and 2000, and to examine whether antiretroviral therapy use or mortality was influenced by MHD (defined as a record of service provision by psychiatric services on the Victorian Psychiatric Case Register). It was hypothesized that HIV-positive individuals with MHD would have poorer treatment outcomes, reduced responses to highly active antiretroviral therapy (HAART) and increased mortality compared with those without MHD.

Methods: This is a retrospective cohort of 2981 individuals (73% of the Victorian population diagnosed with HIV infection) captured on an HIV database that was electronically matched with the public Victorian Psychiatric Case Register (VPCR), which accounts for 95% of public-system psychiatric service provision. The prevalence, dates and recorded specifics of mental health disorders at the time of the electronic match on 1 June 2000 are described. The associations of recorded MHD, gender, age, AIDS illness, HIV exposure category, duration and type of antiviral therapy, and treatment era (prior to 1986; post-1987 and pre-HAART; and post-HAART) with hospitalization and mortality at 1 September 2001 were assessed.

Results: Five hundred and twenty-five individuals (17.6% of the Victorian HIV-positive population) were recorded with MHD, most frequently coded as attributable to substance dependence/abuse or affective disorder. MHD was diagnosed prior to HIV in 33% of cases and, of those diagnosed after HIV, 93.8% were recorded more than 1 year after the HIV diagnosis. Schizophrenia was recorded in 6% of the population with MHD. Hospitalizations for both psychiatric and nonpsychiatric illness were more frequent in those with MHD (relative risk 5.4; 95% confidence interval 3.7, 8.2). The total number of antiretrovirals used was greater in those with MHD (median 6.4 agents vs 5.5 agents). When adjusted for antiretroviral treatment era, HIV exposure category, CD4 cell count and antiretroviral therapy, survival was not affected by MHD.

Conclusions: MHD is frequent in this population with HIV infection and is associated with increased healthcare utilization but not with reduced survival.
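
As a brief illustration of the statistic reported above, the relative risk of 5.4 (95% CI 3.7, 8.2) for hospitalization is the kind of figure obtained from a 2x2 table using the standard log-normal approximation. The Python sketch below uses made-up cell counts purely for illustration; the study's underlying counts and any adjustments are not given in the abstract.

import math

def relative_risk(exposed_events, exposed_total, unexposed_events, unexposed_total):
    """Risk ratio with an approximate 95% confidence interval (log-normal method)."""
    risk_exposed = exposed_events / exposed_total
    risk_unexposed = unexposed_events / unexposed_total
    rr = risk_exposed / risk_unexposed
    se_log = math.sqrt(1 / exposed_events - 1 / exposed_total
                       + 1 / unexposed_events - 1 / unexposed_total)
    lower = math.exp(math.log(rr) - 1.96 * se_log)
    upper = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, (lower, upper)

# Illustrative counts only (not from the study): hospitalizations among
# patients with MHD versus patients without MHD.
print(relative_risk(30, 100, 20, 400))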

Relevance:

30.00%

Publisher:

Abstract:

The Access to Allied Psychological Services component of Australia's Better Outcomes in Mental Health Care program enables eligible general practitioners to refer consumers to allied health professionals for affordable, evidence-based mental health care, via 108 projects conducted by Divisions of General Practice. The current study profiled the models of service delivery across these projects and examined whether particular models were associated with differential levels of access to services. We found that 76% of projects retained their allied health professionals under contract, 28% employed them directly, and 7% engaged them in some other way; that allied health professionals provided services from GPs' rooms in 63% of projects, from their own rooms in 63%, and from a third location in 42%; and that the referral mechanism of choice was direct referral in 51% of projects, a voucher system in 27%, a brokerage system in 24%, and a register system in 25%. Many of these models were used in combination. No model was predictive of differential levels of access, suggesting that the approach of adapting models to the local context is proving successful.

Relevance:

30.00%

Publisher:

Abstract:

Though technology holds significant promise for enhanced teaching and learning, it is unlikely to fulfil this promise without a principled approach to course design. There is a burgeoning discourse about the use of technological tools and models in higher education, but much of the discussion is focused on distance learning or technology-based courses. This paper develops and proposes a balanced model for effective teaching and learning in “on campus” higher education, with particular emphasis on the opportunities for revitalisation available through the judicious utilisation of new technologies. It explores the opportunities for creating more authentic learning environments through principled design. Finally, it demonstrates through a case study how these elements came together to enable the creation of an effective and authentic learning environment for one pre-service teacher education course at the University of Queensland.

Relevance:

30.00%

Publisher:

Abstract:

This research is focused on the optimisation of resource utilisation in wireless mobile networks, with consideration of the users' experienced quality of video streaming services. The study specifically considers the new generation of mobile communication networks, i.e. 4G-LTE, as the main research context. The background study provides an overview of the main properties of the relevant technologies investigated, including video streaming protocols and networks, video service quality assessment methods, the infrastructure and related functionalities of LTE, and resource allocation algorithms in mobile communication systems.

A mathematical model based on an objective, no-reference quality assessment metric for video streaming, namely Pause Intensity, is developed in this work for the evaluation of the continuity of streaming services. The analytical model is verified by extensive simulation and subjective testing on the joint impairment effects of pause duration and pause frequency. Various types of video content and different levels of impairment were used in the validation tests. It is shown that Pause Intensity is closely correlated with subjective quality measurement in terms of the Mean Opinion Score, and that this correlation is content independent.

Based on the Pause Intensity metric, an optimised resource allocation approach is proposed for given user requirements, communication system specifications and network performance. This approach concerns both system efficiency and fairness when establishing appropriate resource allocation algorithms, together with consideration of the correlation between the required and allocated data rates per user. Pause Intensity plays a key role here, representing the required level of Quality of Experience (QoE) to ensure the best balance between system efficiency and fairness. The 3GPP Long Term Evolution (LTE) system is used as the main application environment in which the proposed research framework is examined and the results are compared with existing scheduling methods on the achievable fairness, efficiency and correlation.

Adaptive video streaming technologies are also investigated and combined with our initiatives on determining the distribution of QoE performance across the network. The resulting scheduling process is controlled through the prioritization of users according to the perceived quality of the services they receive. Meanwhile, a trade-off between fairness and efficiency is maintained through an online adjustment of the scheduler's parameters. Furthermore, Pause Intensity is applied as a regulator to realise the rate adaptation function during the end user's playback of the adaptive streaming service. The adaptive rates under various channel conditions, and the shape of the QoE distribution amongst the users for different scheduling policies, are demonstrated in the context of LTE.

Finally, the work on interworking between the mobile communication system at the macro-cell level and different deployments of WiFi technologies throughout the macro-cell is presented. A QoE-driven approach is proposed to analyse the offloading of the user's data (e.g. video traffic), while the new rate distribution algorithm reshapes the network capacity across the macro-cell. The scheduling policy derived is used to regulate the performance of the resource allocation across the fairness-efficiency spectrum. The associated offloading mechanism can properly control the number of users within the coverage of the macro-cell base station and each of the WiFi access points involved. The performance of non-seamless, user-controlled mobile traffic offloading (through mobile WiFi devices) is evaluated and compared with that of standard operator-controlled WiFi hotspots.
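
As a rough illustration of the kind of continuity metric described above, the Python sketch below combines the two impairment dimensions (pause duration and pause frequency) of a playback trace into a single score. The weighting and the function name are assumptions made for illustration; the exact Pause Intensity formulation used in the thesis may differ.

def pause_intensity_score(pause_durations, playback_time):
    """pause_durations: list of stall lengths (seconds) observed during playback;
    playback_time: total viewing time in seconds."""
    if playback_time <= 0:
        raise ValueError("playback_time must be positive")
    duration_ratio = sum(pause_durations) / playback_time   # fraction of time stalled
    frequency = len(pause_durations) / playback_time        # stalls per second of playback
    # Combine the two impairment dimensions (illustrative weighting only).
    return duration_ratio * (1 + frequency)

# Example: three stalls of 2, 1 and 3 seconds during a 60-second session.
print(pause_intensity_score([2.0, 1.0, 3.0], 60.0))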

Relevance:

30.00%

Publisher:

Abstract:

Distributed applications are exposed as reusable components that are dynamically discovered and integrated to create new applications. These new applications, in the form of aggregate services, are vulnerable to failure due to the autonomous and distributed nature of their integrated components. This vulnerability creates the need for adaptability in aggregate services. The need for adaptation is accentuated for complex long-running applications such as those found in scientific Grid computing, where distributed computing nodes may participate to solve computation- and data-intensive problems. Such applications integrate services for coordinated problem solving in areas such as Bioinformatics. For such applications, when a constituent service fails, the application fails, even though there are other nodes that could substitute for the failed service. This concern is not addressed in the specification of high-level composition languages such as the Business Process Execution Language (BPEL). We propose an approach for transparently autonomizing existing BPEL processes in order to make them modifiable at runtime and more resilient to failures in their execution environment. By introducing adaptive behavior transparently, the adaptation preserves the original business logic of the aggregate service and does not tangle the code for adaptive behavior with that of the aggregate service. The major contributions of this dissertation are as follows. First, we assessed the effectiveness of BPEL language support in developing adaptive mechanisms; as a result, we identified the strengths and limitations of BPEL and devised strategies to address those limitations. Second, we developed a technique to enhance existing BPEL processes transparently in order to support dynamic adaptation, proposing a framework that uses transparent shaping and generative programming to make BPEL processes adaptive. Third, we developed a technique to dynamically discover and bind to substitute services; our evaluation showed that dynamic utilization of components improves the flexibility of adaptive BPEL processes. Fourth, we developed an extensible policy-based technique to specify how to handle exceptional behavior, implemented as a generic component that introduces adaptive behavior for multiple BPEL processes. Fifth, we identified ways to apply our work to facilitate adaptability in composite Grid services.
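
The substitution idea behind the third contribution can be pictured with a small sketch. The Python code below is not the dissertation's framework; it only illustrates, under assumed names (ServiceRegistry, invoke_with_substitution), the pattern of discovering candidate endpoints for an abstract service and transparently retrying against a substitute when one fails.

class ServiceUnavailable(Exception):
    pass

class ServiceRegistry:
    """Maps an abstract service interface to candidate concrete endpoints."""
    def __init__(self):
        self._candidates = {}

    def register(self, interface, endpoint):
        self._candidates.setdefault(interface, []).append(endpoint)

    def discover(self, interface):
        return list(self._candidates.get(interface, []))

def invoke_with_substitution(registry, interface, request, max_attempts=3):
    """Try each discovered endpoint in turn until one succeeds."""
    errors = []
    for endpoint in registry.discover(interface)[:max_attempts]:
        try:
            return endpoint(request)          # delegate to the concrete service
        except ServiceUnavailable as exc:     # policy: substitute on failure
            errors.append(exc)
    raise ServiceUnavailable(f"all substitutes failed: {errors}")

# Example with a hypothetical "blast" interface: the primary endpoint is down,
# so the call is transparently served by the substitute.
def primary(request):
    raise ServiceUnavailable("primary endpoint down")

registry = ServiceRegistry()
registry.register("blast", primary)
registry.register("blast", lambda request: f"result for {request}")
print(invoke_with_substitution(registry, "blast", "sequence-123"))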

Relevance:

30.00%

Publisher:

Abstract:

The total time a customer spends in the business process system, called the customer cycle-time, is a major contributor to overall customer satisfaction. Business process analysts and designers are frequently asked to design process solutions with optimal performance. Simulation models have been very popular for quantitatively evaluating business processes; however, simulation is time-consuming and requires extensive modeling experience. Moreover, simulation models neither provide recommendations nor yield optimal solutions for business process design. A queueing network model is a good analytical approach to business process analysis and design, and can provide a useful abstraction of a business process. However, existing queueing network models were developed for telephone systems or applied to manufacturing processes in which machine servers dominate the system. In a business process, the servers are usually people, and the characteristics of human servers, namely specialization and coordination, should be taken into account by the queueing model.

The research described in this dissertation develops an open queueing network model for quick analysis of business processes. Additionally, optimization models are developed to provide optimal business process designs. The queueing network model extends and improves upon existing multi-class open queueing network models (MOQN) so that customer flow in human-server-oriented processes can be modeled. The optimization models help business process designers find the optimal design of a business process with consideration of specialization and coordination.

The main findings of the research are as follows. First, parallelization can reduce the cycle-time for those customer classes that require more than one parallel activity. Second, when server utilization is high, the coordination time introduced by parallelization overwhelms the savings from parallelization, since waiting time increases significantly and thus the cycle-time increases. Third, the level of industrial technology employed by a company and the coordination time needed to manage the tasks have the strongest impact on business process design: when the level of industrial technology employed by the company is high, more division is required to improve the cycle-time; when the required coordination time is high, consolidation is required to improve the cycle-time.
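
The sensitivity of cycle-time to server utilization mentioned above can be illustrated with a standard single-station queue. The Python sketch below is not the dissertation's MOQN model; it computes the expected time in system for an ordinary M/M/c station (Erlang C), which shows how cycle-time grows sharply as human-server utilization approaches 1.

import math

def mmc_cycle_time(arrival_rate, service_rate, servers):
    """Expected time in system (waiting + service) for an M/M/c queue."""
    rho = arrival_rate / (servers * service_rate)     # utilization per server
    if rho >= 1:
        return float("inf")                           # unstable station
    a = arrival_rate / service_rate                   # offered load
    summation = sum(a**k / math.factorial(k) for k in range(servers))
    tail = (a**servers / math.factorial(servers)) / (1 - rho)
    p_wait = tail / (summation + tail)                # Erlang C probability of waiting
    waiting = p_wait / (servers * service_rate - arrival_rate)
    return waiting + 1.0 / service_rate

# Illustrative numbers: 8 customers/hour, each server completes 3/hour, 3 servers.
print(mmc_cycle_time(8.0, 3.0, 3))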

Relevance:

30.00%

Publisher:

Abstract:

This dissertation develops a process improvement method for service operations based on the Theory of Constraints (TOC), a management philosophy that has been shown to be effective in manufacturing for decreasing WIP and improving throughput. While TOC has enjoyed much attention and success in the manufacturing arena, its application to services in general has been limited. The contribution to industry and knowledge is a method for improving global performance measures based on TOC principles. The method proposed in this dissertation is tested using discrete event simulation based on the scenario of the service factory of airline turnaround operations. To evaluate the method, a simulation model of the aircraft turn operations of a U.S.-based carrier was built and validated using actual data from airline operations. The model was then adjusted to reflect an application of the Theory of Constraints for determining how to deploy the scarce resource of ramp workers. The results indicate that, given slight modifications to TOC terminology and the development of a method for constraint identification, the Theory of Constraints can be applied with success to services. Bottlenecks in services must be defined as those processes for which the process rates and amount of work remaining are such that completing the process will not be possible without an increase in the process rate; the bottleneck ratio is used to determine the degree to which a process is a constraint. Simulation results also suggest that redefining performance measures to reflect a global business perspective of reducing costs related to specific flights, rather than the operational local-optimum approach of turning all aircraft quickly, results in significant savings to the company. Simulated savings to the annual operating costs of the airline equalled 30% of possible current expenses for misconnecting passengers, with a modest increase in worker utilization achieved through a more efficient heuristic of deploying workers to the highest-priority tasks. This dissertation contributes to the literature on service operations by describing a dynamic, adaptive dispatch approach to managing service factory operations similar to airline turnaround operations, using the management philosophy of the Theory of Constraints.
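
A bottleneck-ratio-style check in the spirit of the definition above can be sketched as follows: a process is treated as a constraint when the rate needed to finish its remaining work by the turn deadline exceeds the rate it can currently sustain. The Python sketch below uses illustrative task names and numbers; the exact formulation and dispatching heuristic in the dissertation may differ.

def bottleneck_ratio(work_remaining, time_remaining, process_rate):
    """work_remaining: units of work left; time_remaining: time until the
    aircraft must be turned; process_rate: achievable units of work per unit time."""
    if time_remaining <= 0:
        return float("inf")                 # already past the deadline
    required_rate = work_remaining / time_remaining
    return required_rate / process_rate     # > 1 means the process constrains the turn

# Illustrative turn with 10 minutes remaining: baggage needs 12 bags/min but
# only 9 bags/min is achievable, so it is the constraint to staff first.
tasks = {"baggage": bottleneck_ratio(120, 10, 9.0),
         "fueling": bottleneck_ratio(30, 10, 6.0)}
constraint = max(tasks, key=tasks.get)      # deploy ramp workers here first
print(tasks, constraint)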

Relevance:

30.00%

Publisher:

Abstract:

Cloud computing realizes the long-held dream of converting computing capability into a type of utility. It has the potential to fundamentally change the landscape of the IT industry and our way of life. However, as cloud computing expands substantially in both scale and scope, ensuring its sustainable growth is a critical problem. Service providers have long suffered from high operational costs, especially those associated with the skyrocketing power consumption of large data centers. At the same time, while efficient power/energy utilization is indispensable for the sustainable growth of cloud computing, service providers must also satisfy users' quality of service (QoS) requirements. The problem becomes even more challenging given increasingly stringent power/energy and QoS constraints, as well as other factors such as the highly dynamic, heterogeneous, and distributed nature of the computing infrastructures.

In this dissertation, we study the problem of delay-sensitive cloud service scheduling for the sustainable development of cloud computing. We first focus on the development of scheduling methods for delay-sensitive cloud services on a single server, with the goal of maximizing a service provider's profit. We then extend our study to scheduling cloud services in distributed environments. In particular, we develop a queue-based model and derive efficient request dispatching and processing decisions in a multi-electricity-market environment to improve service providers' profits. We next study a multi-tier service scheduling problem: by carefully assigning sub-deadlines to the service tiers, our approach can significantly improve resource usage efficiency with statistically guaranteed QoS. Finally, we study the power-conscious resource provisioning problem for service requests with different QoS requirements; by properly sharing computing resources among different requests, our method statistically guarantees all QoS requirements with a minimized number of powered-on servers and thus minimized power consumption. The significance of our research is that it is one part of the integrated effort from both industry and academia to ensure the sustainable growth of cloud computing as it continues to evolve and change our society profoundly.
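
The multi-electricity-market dispatching idea can be pictured with a minimal sketch. The Python code below is not the dissertation's queue-based model; it only illustrates choosing, among data centers that can still meet a request's deadline, the one with the highest estimated profit (revenue minus local electricity cost). All field names and numbers are illustrative.

from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    expected_delay: float      # seconds to complete the request
    energy_kwh: float          # energy the request would consume
    electricity_price: float   # $/kWh in this center's electricity market

def dispatch(request_revenue, deadline, centers):
    """Return the most profitable data center that can meet the deadline."""
    feasible = [c for c in centers if c.expected_delay <= deadline]
    if not feasible:
        return None                         # reject: deadline cannot be met anywhere
    return max(feasible,
               key=lambda c: request_revenue - c.energy_kwh * c.electricity_price)

centers = [DataCenter("east", 0.8, 0.02, 0.12),
           DataCenter("west", 0.5, 0.02, 0.19)]
print(dispatch(request_revenue=0.01, deadline=1.0, centers=centers))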

Relevance:

30.00%

Publisher:

Abstract:

The complex three-dimensional (3-D) structure of tropical forests generates a diversity of light environments for canopy and understory trees. Understanding diurnal and seasonal changes in light availability is critical for interpreting measurements of net ecosystem exchange and improving ecosystem models. Here, we used the Discrete Anisotropic Radiative Transfer (DART) model to simulate leaf absorption of photosynthetically active radiation (lAPAR) for an Amazon forest. The 3-D model scene was developed from airborne lidar data, and local measurements of leaf reflectance, aerosols, and PAR were used to model lAPAR under direct and diffuse illumination conditions. Simulated lAPAR under clear-sky and cloudy conditions was corrected for light saturation effects to estimate light utilization, the fraction of lAPAR available for photosynthesis. Although the fraction of incoming PAR absorbed by leaves was consistent throughout the year (0.80–0.82), light utilization varied seasonally (0.67–0.74), with minimum values during the Amazon dry season. Shadowing and light saturation effects moderated potential gains in forest productivity from increasing PAR during dry-season months when the diffuse fraction from clouds and aerosols was low. Comparisons between DART and other models highlighted the role of 3-D forest structure to account for seasonal changes in light utilization. Our findings highlight how directional illumination and forest 3-D structure combine to influence diurnal and seasonal variability in light utilization, independent of further changes in leaf area, leaf age, or environmental controls on canopy photosynthesis. Changing illumination geometry constitutes an alternative biophysical explanation for observed seasonality in Amazon forest productivity without changes in canopy phenology.
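
One simple way to express the light-saturation correction described above is sketched below in Python: leaf-level absorbed PAR is capped at a saturation irradiance, and light utilization is the fraction of total absorbed PAR that falls below that cap. The saturation value and the leaf-level lAPAR figures are illustrative assumptions, not values from the study, whose correction may be more detailed.

def light_utilization(leaf_apar, saturation=500.0):
    """leaf_apar: absorbed PAR per leaf (umol m-2 s-1);
    saturation: irradiance above which additional absorbed PAR is not used."""
    usable = sum(min(x, saturation) for x in leaf_apar)   # discard the saturated excess
    total = sum(leaf_apar)
    return usable / total                                 # fraction of lAPAR available

# Sunlit upper-canopy leaves saturate; shaded leaves do not.
print(light_utilization([900.0, 700.0, 300.0, 120.0, 60.0]))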

Relevance:

20.00%

Publisher:

Abstract:

This paper discusses a framework in which catalog service communities are built, linked for interaction, and constantly monitored and adapted over time. A catalog service community (represented as a peer node in a peer-to-peer network) in our system can be viewed as a domain-specific data integration mediator representing the domain knowledge and the registry information. Query routing among communities is performed to identify a set of data sources that are relevant to answering a given query. The system monitors the interactions between the communities to discover patterns that may lead to restructuring of the network (e.g., removing irrelevant peers, creating new relationships, etc.).
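
The routing step can be pictured with a minimal sketch. The Python code below is not the paper's actual routing algorithm; it only illustrates keyword-overlap routing of a query to peer catalog communities, each described by the domain terms held in its registry. Community names and the scoring rule are assumptions for illustration.

def route_query(query_terms, communities):
    """communities: mapping of community name -> set of registry terms.
    Returns communities ranked by overlap with the query, best match first."""
    query_terms = set(query_terms)
    scored = {name: len(query_terms & terms)
              for name, terms in communities.items()}
    return [name for name, score in
            sorted(scored.items(), key=lambda kv: kv[1], reverse=True)
            if score > 0]                   # candidate data sources for the query

communities = {"genomics": {"gene", "sequence", "protein"},
               "astronomy": {"star", "galaxy", "spectrum"}}
print(route_query(["protein", "sequence", "alignment"], communities))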