Abstract:
Whilst alcohol is a common feature of many social gatherings, there are numerous immediate and long-term health and social harms associated with its abuse. Alcohol consumption is the world’s third largest risk factor for disease and disability, with almost 4% of all deaths worldwide attributed to alcohol. Not surprisingly, alcohol use and binge drinking by young people are of particular concern, with Australian data reporting that 39% of young people (18-19 years) admitted drinking at least weekly and 32% drank to levels that put them at risk of alcohol-related harm. The growing market penetration and connectivity of smartphones may present an opportunity for innovation in promoting health-related self-management of substance use. However, little is known about how best to harness and optimise this technology for health-related intervention and behaviour change. This paper explores the utility and interface of smartphone technology as a health intervention tool to monitor and moderate alcohol use. A review of the psychological health applications of this technology will be presented along with the findings of a series of focus groups, surveys and behavioural field trials of several drink-monitoring applications. Qualitative and quantitative data will be presented on the perceptions, preferences and utility of the design, usability and functionality of smartphone apps to monitor and moderate alcohol use. How these findings have shaped the development and evolution of the OnTrack app will be specifically discussed, along with future directions and applications of this technology in health intervention, prevention and promotion.
Abstract:
Despite the Revised International Prognostic Index's (R-IPI) undoubted utility in diffuse large B-cell lymphoma (DLBCL), significant clinical heterogeneity within R-IPI categories persists. Emerging evidence indicates that circulating host immunity is a robust, R-IPI-independent prognosticator, most likely reflecting the immune status of the intratumoral microenvironment. We hypothesized that direct quantification of immunity within lymphomatous tissue would better permit stratification within R-IPI categories. We analyzed 122 newly diagnosed consecutive DLBCL patients treated with rituximab, cyclophosphamide, doxorubicin, vincristine, and prednisone (R-CHOP) chemo-immunotherapy. Median follow-up was 4 years. As expected, the R-IPI was a significant predictor of outcome, with 5-year overall survival (OS) of 87% for very good, 87% for good, and 51% for poor-risk R-IPI scores (P < 0.001). Consistent with previous reports, systemic immunity also predicted outcome (86% OS for a high lymphocyte-to-monocyte ratio [LMR], versus 63% with a low LMR, P = 0.01). Multivariate analysis confirmed LMR as independently prognostic. Flow cytometry on fresh diagnostic lymphoma tissue identified CD4+ T-cell infiltration as the most significant predictor of outcome, with ≥23% infiltration dividing the cohort into high- and low-risk groups with regard to event-free survival (EFS, P = 0.007) and OS (P = 0.003). These effects on EFS and OS were independent of the R-IPI and LMR. Importantly, within very good/good R-IPI patients, CD4+ T-cells still distinguished patients with different 5-year OS (high 96% versus low 63%, P = 0.02). These results illustrate the importance of circulating and local intratumoral immunity in DLBCL treated with R-CHOP.
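The LMR stratification used in the abstract above can be sketched as follows; the cutoff of 2.6 is a commonly cited illustrative value, not necessarily the one used in this study:

```python
# Sketch: stratifying patients by lymphocyte-to-monocyte ratio (LMR).
# The cutoff (2.6) is illustrative, not the study's reported threshold.
def lmr(lymphocytes, monocytes):
    """Absolute lymphocyte count divided by absolute monocyte count."""
    if monocytes <= 0:
        raise ValueError("monocyte count must be positive")
    return lymphocytes / monocytes

def stratify(lymphocytes, monocytes, cutoff=2.6):
    """Return 'high' or 'low' LMR group relative to the cutoff."""
    return "high" if lmr(lymphocytes, monocytes) >= cutoff else "low"
```

High- and low-LMR groups defined this way would then be compared on overall survival, e.g. with a log-rank test.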
Abstract:
Background: Patients with Crohn’s disease (CD) often require surgery at some stage of the disease course. Prediction of CD outcome is influenced by clinical, environmental, serological, and genetic factors (eg, NOD2). Being able to identify CD patients at high risk of surgical intervention should assist clinicians in deciding whether to prescribe early aggressive treatment with immunomodulators. Methods: We performed a retrospective analysis of selected clinical (age at diagnosis, perianal disease, active smoking) and genetic (NOD2 genotype) data obtained for a population-based CD cohort from the Canterbury Inflammatory Bowel Disease study. Logistic regression was used to identify predictors of complicated outcome in these CD patients (ie, need for inflammatory bowel disease-related surgery). Results: Perianal disease and the NOD2 genotype were the only independent factors associated with the need for surgery in this patient group (odds ratio=2.84 and 1.60, respectively). By combining the associated NOD2 genotype with perianal disease we generated a single “clinicogenetic” variable. This was strongly associated with increased risk of surgery (odds ratio=3.84, P=0.00, confidence interval 2.28-6.46) and offered moderate predictive accuracy (positive predictive value=0.62). Approximately one-third of surgical outcomes in this population are attributable to the NOD2+PA variable (attributable risk=0.32). Conclusions: Knowledge of perianal disease and NOD2 genotype in patients presenting with CD may offer clinicians some decision-making utility in identifying complicated CD progression early and initiating intensive treatment to avoid surgical intervention. Future studies should investigate combination effects of other genetic, clinical, and environmental factors when attempting to identify predictors of complicated CD outcomes.
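The headline statistics above (odds ratio, positive predictive value) for a binary "clinicogenetic" marker follow from a standard 2x2 table; a minimal sketch with illustrative counts, not the study's data:

```python
# Sketch: evaluating a binary marker (e.g. NOD2 risk genotype plus perianal
# disease) against a binary outcome (surgery). Counts are illustrative.
def odds_ratio(a, b, c, d):
    """2x2 table: a = marker+ with outcome, b = marker+ without outcome,
    c = marker- with outcome, d = marker- without outcome."""
    return (a * d) / (b * c)

def positive_predictive_value(a, b):
    """Fraction of marker-positive patients who had the outcome."""
    return a / (a + b)
```

With a=20, b=10, c=10, d=20 the odds ratio is (20*20)/(10*10) = 4.0, and PPV for a=62, b=38 is 0.62.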
Abstract:
Background: Loss of heterozygosity (LOH) is an important marker for one of the 'two hits' required for tumor suppressor gene inactivation. Traditional methods for mapping LOH regions require the comparison of both tumor and patient-matched normal DNA samples. However, for many archival samples, patient-matched normal DNA is not available, leading to the under-utilization of this important resource in LOH studies. Here we describe a new method for LOH analysis that relies on the genome-wide comparison of heterozygosity of single nucleotide polymorphisms (SNPs) between cohorts of cases and unmatched healthy control samples. Regions of LOH are defined by consistent decreases in heterozygosity across a genetic region in the case cohort compared to the control cohort. Methods: DNA was collected from 20 Follicular Lymphoma (FL) tumor samples, 20 Diffuse Large B-cell Lymphoma (DLBCL) tumor samples, neoplastic B-cells of 10 B-cell Chronic Lymphocytic Leukemia (B-CLL) patients, and buccal cell samples matched to 4 of these B-CLL patients. The cohort heterozygosity comparison method was developed and validated using LOH derived in a small cohort of B-CLL by traditional comparisons of tumor and normal DNA samples, and compared to the only alternative method for LOH analysis without patient-matched controls. LOH candidate regions were then generated for enlarged cohorts of B-CLL, FL and DLBCL samples using our cohort heterozygosity comparison method in order to evaluate potential LOH candidate regions in these non-Hodgkin's lymphoma tumor subtypes. Results: Using a small cohort of B-CLL samples with patient-matched normal DNA, we validated the utility of this method and showed that it displays greater accuracy and sensitivity in detecting LOH candidate regions than the only alternative method, the Hidden Markov Model (HMM) method.
Subsequently, using B-CLL, FL and DLBCL tumor samples, we utilised cohort heterozygosity comparisons to localise LOH candidate regions in these subtypes of non-Hodgkin's lymphoma. Detected LOH regions included both previously described regions of LOH and novel genomic candidate regions. Conclusions: We have demonstrated the efficacy of cohort heterozygosity comparisons for genome-wide mapping of LOH and shown the approach to be in many ways superior to the HMM method. Additionally, the use of this method to analyse SNP microarray data from 3 common forms of non-Hodgkin's lymphoma yielded interesting tumor suppressor gene candidates, including the ETV3 gene, which was highlighted in both B-CLL and FL.
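The core of the cohort heterozygosity comparison can be sketched as follows; the genotype coding, the fixed threshold, and flagging individual SNPs (rather than requiring consistent decreases across a region) are illustrative simplifications of the method described above:

```python
# Sketch: per-SNP comparison of heterozygosity between a case cohort and an
# unmatched control cohort. A run of SNPs with markedly lower case
# heterozygosity would suggest a candidate LOH region. Threshold illustrative.
def het_fraction(genotypes):
    """Fraction of heterozygous calls ('AB') among the genotype calls."""
    return sum(g == "AB" for g in genotypes) / len(genotypes)

def loh_candidates(case_calls, control_calls, drop=0.3):
    """case_calls / control_calls: per-SNP lists of genotype calls.
    Returns indices of SNPs whose case-cohort heterozygosity falls below the
    control-cohort heterozygosity by more than `drop`."""
    flagged = []
    for i, (case, ctrl) in enumerate(zip(case_calls, control_calls)):
        if het_fraction(ctrl) - het_fraction(case) > drop:
            flagged.append(i)
    return flagged
```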
Abstract:
Evidence-based practice in entrepreneurship requires effective communication of research findings. We focus on how research synopses can “promote” research to entrepreneurs. Drawing on marketing communications literature, we examine how message characteristics of research synopses affect their appeal. We demonstrate the utility of conjoint analysis in this context and find message length, media richness and source credibility to have positive influences. We find mixed support for a hypothesized negative influence of jargon, and for our predictions that participants’ involvement with academic research moderates these effects. Exploratory analyses reveal latent classes of entrepreneurs with differing preferences, particularly for message length and jargon.
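Conjoint part-worths for message characteristics like those above can be estimated by least squares on dummy-coded profiles; a minimal sketch with hypothetical ratings, not the study's data or attribute levels:

```python
# Sketch: conjoint part-worth estimation by least squares on dummy-coded
# message attributes (long vs short message, rich vs plain media).
# Profiles and appeal ratings below are purely illustrative.
import numpy as np

# Columns: intercept, long message, rich media
profiles = np.array([
    [1, 0, 0],   # short, plain
    [1, 1, 0],   # long, plain
    [1, 0, 1],   # short, rich
    [1, 1, 1],   # long, rich
])
ratings = np.array([3.0, 4.0, 4.5, 5.5])  # hypothetical appeal ratings

partworths, *_ = np.linalg.lstsq(profiles, ratings, rcond=None)
# partworths[1]: part-worth of message length; partworths[2]: media richness
```

Latent-class variants would fit separate part-worths per respondent segment instead of one pooled vector.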
Abstract:
The goal of this project was to initiate the use of an internet-based student response system in a large, first year chemistry class at a typical Australian university, and to verify its popularity and utility. A secondary goal was to influence other academic staff to adopt the system, initiating change at the discipline and Faculty level. The first goal was achieved with a high response rate using a commercial on-line system; however, the number of students engaging with the system dropped gradually during each class and over the course of the semester. Factors affecting student and staff adoption and continuance with technology are explored using established models.
Abstract:
A microgrid can span a large area, especially in rural townships. In such cases, the distributed generators (DGs) must be controlled in a decentralized fashion, based on the locally available measurements. The main concerns are the control of system voltage magnitude and frequency, the loss of which can lead to system instability or voltage collapse. In this chapter, the operational challenges of load frequency control in a microgrid are discussed and a few methods are proposed to meet these challenges. In particular, issues of power sharing, power quality and system stability are addressed when the system operates under decentralized control. The main focus of this chapter is to provide solutions to improve the system performance in different situations. The scenarios considered are (a) when the system stability margin is low, (b) when the line impedance has a high R to X ratio, and (c) when the system contains unbalanced and/or distorted loads. A scheme is also proposed in which a microgrid can be frequency-isolated from a utility grid while remaining capable of bidirectional power transfer. In all these cases, the use of angle droop in converter-interfaced DGs is adopted. It has been shown that this results in a more responsive control action compared to traditional frequency-based droop control.
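The angle droop idea can be sketched as a simple control law in which the converter's voltage-angle reference, rather than its frequency, is drooped against real power output; the coefficient and operating points below are illustrative, not values from the chapter:

```python
# Sketch of an angle droop law for a converter-interfaced DG:
#   delta = delta_rated - m * (P - P_rated)
# i.e. the output-voltage angle reference falls as real power output rises
# above its rated value. m (rad per unit power) is an illustrative gain.
def angle_droop(p_out, p_rated, delta_rated=0.0, m=0.01):
    """Return the voltage-angle reference (rad) for measured output p_out."""
    return delta_rated - m * (p_out - p_rated)
```

Because the angle can be adjusted directly by the converter, this avoids the integrating (and hence slower) path from frequency to angle used in conventional frequency droop.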
Abstract:
A Distributed Wireless Smart Camera (DWSC) network is a special type of Wireless Sensor Network (WSN) that processes captured images in a distributed manner. While image processing on DWSCs sees great potential for growth, with applications in a vast practical domain such as security surveillance and health care, it suffers from tremendous constraints. In addition to the limitations of conventional WSNs, image processing on DWSCs requires more computational power, bandwidth and energy, which presents significant challenges for large-scale deployments. This dissertation has developed a number of algorithms that are highly scalable, portable, energy efficient and performance efficient, with consideration of the practical constraints imposed by the hardware and the nature of WSNs. More specifically, these algorithms tackle the problems of multi-object tracking and localisation in distributed wireless smart camera networks, and of optimal camera configuration determination. Addressing the first problem of multi-object tracking and localisation requires solving a large array of sub-problems. The sub-problems discussed in this dissertation are calibration of internal parameters, multi-camera calibration for localisation, and object handover for tracking. These topics have been covered extensively in the computer vision literature; however, new algorithms must be invented to accommodate the various constraints introduced and required by the DWSC platform. A technique has been developed for the automatic calibration of low-cost cameras which are assumed to be restricted in their freedom of movement to either pan or tilt movements.
Camera internal parameters, including focal length, principal point, lens distortion parameter and the angle and axis of rotation, can be recovered from a minimum set of two images taken by the camera, provided that the axis of rotation between the two images passes through the camera's optical centre and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image. For object localisation, a novel approach has been developed for the calibration of a network of non-overlapping DWSCs in terms of their ground plane homographies, which can then be used for localising objects. In the proposed approach, a robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. The cameras use this, along with the image plane location of the robot, to compute a mapping from their image planes to the global coordinate frame. This is combined with an occupancy map generated by the robot during the mapping process to localise objects moving within the network. In addition, to deal with the problem of object handover between DWSCs with non-overlapping fields of view, a highly scalable, distributed protocol has been designed. Cameras that follow the proposed protocol transmit object descriptions to a selected set of neighbours that are determined using a predictive forwarding strategy. The received descriptions are then matched at the subsequent camera on the object's path, using a probability maximisation process with locally generated descriptions. The second problem, camera placement, emerges naturally when these pervasive devices are put into real use. The locations, orientations, lens types, etc. of the cameras must be chosen such that the utility of the network is maximised (e.g. maximum coverage) while user requirements are met.
To deal with this, a statistical formulation of the problem of determining optimal camera configurations has been introduced and a Trans-Dimensional Simulated Annealing (TDSA) algorithm has been proposed to effectively solve the problem.
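The annealing idea can be illustrated with a fixed-dimension sketch; the dissertation's trans-dimensional version would additionally propose adding or removing cameras, and the 1-D coverage model, cooling schedule and parameters below are purely illustrative:

```python
# Sketch: simulated annealing for camera placement on a 1-D line.
# A camera "covers" targets within a fixed radius; the annealer perturbs
# camera positions, always accepting improvements and occasionally
# accepting worse configurations with Boltzmann probability.
import math, random

def coverage(cams, targets, radius=1.5):
    """Number of targets within `radius` of at least one camera."""
    return sum(any(abs(t - c) <= radius for c in cams) for t in targets)

def anneal(targets, n_cams=2, steps=2000, seed=0):
    rng = random.Random(seed)
    cams = [rng.uniform(min(targets), max(targets)) for _ in range(n_cams)]
    best, best_cov = list(cams), coverage(cams, targets)
    temp = 1.0
    for _ in range(steps):
        cand = list(cams)
        cand[rng.randrange(n_cams)] += rng.gauss(0, 0.5)  # perturb one camera
        delta = coverage(cand, targets) - coverage(cams, targets)
        if delta >= 0 or rng.random() < math.exp(delta / temp):
            cams = cand
            if coverage(cams, targets) > best_cov:
                best, best_cov = list(cams), coverage(cams, targets)
        temp = max(1e-3, temp * 0.995)  # geometric cooling
    return best, best_cov
```

A trans-dimensional move set would add "birth" and "death" proposals that change `n_cams`, with the objective penalising extra cameras.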
Abstract:
In this research, we suggest appropriate information technology (IT) governance structures to manage cloud computing resources. The interest in acquiring IT resources as a utility is gaining momentum. Cloud computing resources present organizations with opportunities to manage their IT expenditure on an ongoing basis, and provide organizations access to modern IT resources with which to innovate and manage their continuity. However, cloud computing resources are no silver bullet. Organizations need appropriate governance structures and policies in place to ensure their effective management and their fit into existing business processes in order to leverage the promised opportunities. Using a mixed-method design, we identified four possible governance structures for managing cloud computing resources. These structures are a chief cloud officer, a cloud management committee, a cloud service facilitation centre, and a cloud relationship centre. These governance structures ensure appropriate direction of cloud computing resources from their acquisition to their fit into the organization's business processes.
Abstract:
This research suggests information technology (IT) governance structures to manage cloud computing resources. The interest in acquiring IT resources as a utility from the cloud is gaining momentum. Cloud computing resources present organizations with opportunities to manage their IT expenditure on an ongoing basis, and provide organizations access to modern IT resources with which to innovate and manage their continuity. However, cloud computing resources are no silver bullet. Organizations would need to have appropriate governance structures and policies in place to manage the cloud resources. The subsequent decisions from these governance structures will ensure effective management of cloud resources. This management will facilitate a better fit of cloud resources into organizations' existing processes to achieve business (process-level) and financial (firm-level) objectives. Using a triangulation approach, we suggest four possible governance structures for managing cloud computing resources. These structures are a chief cloud officer, a cloud management committee, a cloud service facilitation centre, and a cloud relationship centre. We also propose that these governance structures would relate directly to organizations' cloud-related business objectives and indirectly to their cloud-related financial objectives. Perceptive field survey data from actual and prospective cloud service adopters confirmed that the suggested structures would contribute directly to cloud-related business objectives and indirectly to cloud-related financial objectives.
Abstract:
Passenger flow studies in airport terminals have shown consistent statistical relationships between airport spatial layout and pedestrian movement, facilitating prediction of movement from terminal designs. However, these studies are done at an aggregate level and do not incorporate how individual passengers make decisions at a microscopic level. Therefore, they do not explain the formation of complex movement flows. In addition, existing models mostly focus on standard airport processing procedures such as immigration and security, but seldom consider discretionary activities of passengers, and thus are not able to truly describe the full range of passenger flows within airport terminals. As the route-choice decision-making of passengers involves many uncertain factors within airport terminals, mechanisms for managing route choice have proven difficult to acquire and quantify. Could the study of cognitive factors of passengers (i.e. the mental preferences that determine which on-airport facility to use) be useful in tackling these issues? Assuming that movement in simulated virtual environments is analogous to movement in real environments, passenger behaviour dynamics can be reproduced in virtual experiments. Three levels of dynamics have been devised for motion control: the localised field, the tactical level, and the strategic level. A localised field refers to basic motion capabilities, such as walking speed, direction and avoidance of obstacles. The other two levels represent cognitive route-choice decision-making. This research views passenger flow problems via a "bottom-up approach", regarding individual passengers as independent intelligent agents who behave autonomously and are able to interact with others and the ambient environment. In this regard, passenger flow formation becomes an emergent phenomenon of large numbers of passengers interacting with each other.
In the thesis, the passenger flow in airport terminals was first investigated. Discretionary activities of passengers were integrated with standard processing procedures in the research. The localised field for passenger motion dynamics was constructed using a devised force-based model. Next, advanced traits of passengers (such as their desire to shop, their comfort with technology and their willingness to ask for assistance) were formulated to facilitate tactical route-choice decision-making. The traits consist of quantified measures of the mental preferences of passengers when they travel through airport terminals. Each category of the traits indicates a decision which passengers may take. They were inferred through a Bayesian network model by analysing the probabilities based on currently available data. Route-choice decision-making was finalised by calculating corresponding utility results based on those observed probabilities. Three types of simulation outcome were generated: queuing length before checkpoints, average dwell time of passengers at service facilities, and instantaneous space utilisation. Queuing length reflects the number of passengers who are in a queue; long queues cause significant delays in processing procedures. The dwell time of each passenger agent at the service facilities was recorded, and the overall dwell times of passenger agents at typical facility areas were analysed so as to demonstrate utilisation in the temporal aspect. For the spatial aspect, the number of passenger agents dwelling within specific terminal areas can be used to estimate service rates. All outcomes were demonstrated through typical simulated passenger flows and directly reflect terminal capacity.
The simulation results strongly suggest that integrating discretionary activities of passengers makes the passenger flows more intuitive, and that observing the probabilities of mental preferences by inferring advanced traits provides an approach capable of carrying out tactical route-choice decision-making. On the whole, the research studied passenger flows in airport terminals with an agent-based model, which investigated individual characteristics of passengers and their impact on passengers' psychological route-choice decisions. Finally, intuitive passenger flows in airport terminals were realised in simulation.
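The tactical route-choice step described above can be sketched as expected-utility maximisation over trait probabilities; in the thesis these probabilities come from a Bayesian network, whereas here they, and the payoffs and route names, are illustrative inputs:

```python
# Sketch: choosing a route by expected utility. p_trait is the (Bayesian-
# network-inferred) probability that the passenger holds a trait, e.g. a
# desire to shop; payoffs per route are illustrative.
def expected_utility(p_trait, payoff_if_trait, payoff_otherwise):
    return p_trait * payoff_if_trait + (1 - p_trait) * payoff_otherwise

def choose_route(options, p_trait):
    """options: {route_name: (payoff_if_trait, payoff_otherwise)}.
    Returns the route with the highest expected utility."""
    return max(options, key=lambda r: expected_utility(p_trait, *options[r]))
```

For example, a route past the shops pays off only if the passenger actually wants to shop, so its attractiveness rises with the inferred trait probability.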
Abstract:
This thesis offered a step forward in the development of cheap and effective materials for water treatment. It described the modification of naturally abundant clay minerals with organic molecules, and used the modified clays as effective adsorbents for the removal of recalcitrant organic water pollutants. The outcome of the study greatly extended our understanding of the synthesis and characteristic properties of clay and modified clay minerals, provided an optimistic evaluation of the modified clays for environmental remediation, and demonstrated the potential utility of clay minerals in industry and the environment.
Abstract:
Most large cities around the world are undergoing rapid transport sector development to cater for increased urbanization. Subsequently, the issues of mobility, access equity, congestion, operational safety and, above all, environmental sustainability are becoming increasingly crucial in transport planning and policy making. The popular response in addressing these issues has been demand management, through improvement of motorised public transport (MPT) modes (bus, train, tram) and non-motorised transport (NMT) modes (walk, bicycle), and improved fuel technology. Relatively little attention has, however, been given to another readily available and highly sustainable component of the urban transport system: non-motorised public transport (NMPT) such as the pedicab, which operates on a commercial basis as an NMT taxi, has a long-standing history in many Asian cities, remains relatively stable in Latin America, and is re-emerging and expanding in Europe, North America and Australia. Consensus at policy level on the apparent benefits, costs and management approach for NMPT integration has often been a major transport planning problem. Within this context, this research attempts to provide a more complete analysis of the current existence rationale and possible future, or otherwise, of NMPT as a regular public transport system. The analytical process is divided into three major stages. Stage 1 reviews the status and role of NMPT as regular public transport on a global scale, in both developing and developed cities. The review establishes the strong ongoing and future potential role of NMPT in major developing cities. Stage 2 narrows the status review down to a case study city in a developing country in order to facilitate a deeper review and analysis of the mode's role and status. Dhaka, the capital city of Bangladesh, has been chosen due to the magnitude of its NMPT presence.
The review and analysis reveal the multisectoral and dominant role of NMPT in catering for the travel needs of Dhaka transport users. The review also indicates ad-hoc, disintegrated policy planning in the management of NMPT and the need for a planning framework to facilitate balanced integration between NMPT and MT in the future. Stage 3 develops an integrated, multimodal planning framework (IMPF) based on a four-step planning process. This includes defining the purpose and scope of the planning exercise; determining current deficiencies and preferred characteristics for the proposed IMPF; selecting suitable techniques to address the deficiencies and needs of the transport network while laying out the IMPF; and, finally, developing a delivery plan for the IMPF based on a selected layout technique and integration approach. The output of the exercise is a planning instrument (decision tool) that can be used to assign a road hierarchy in order to allocate appropriate traffic to the appropriate network type, particularly to facilitate an operational balance between MT and NMT. The instrument is based on a partial-restriction approach to motorised transport (MT) and NMT, structured on the notion of a functional hierarchy, and distributes/prioritises MT and NMT such that the functional needs of each network category are best complemented. The planning instrument based on these processes and principles offers a six-level road hierarchy, with a different composition of network-governing attributes and modal priority, for the current Dhaka transport network, in order to facilitate efficient integration of NMT with MT. A case study application of the instrument on a small transport network of Dhaka also demonstrates the utility, flexibility and adaptability of the instrument in logically allocating corridors to particular positions in the road hierarchy.
Although the tool is useful in enabling balanced distribution of NMPT with MT at different network levels, further investigation is required with reference to detailed modal variations, scales and locations of a network to further generalise the framework application.
Abstract:
This paper presents a novel framework for the modelling of passenger facilitation in a complex environment. The research is motivated by the challenges in the airport complex system, where there are multiple stakeholders, differing operational objectives and complex interactions and interdependencies between different parts of the airport system. Traditional methods for airport terminal modelling do not explicitly address the need for understanding causal relationships in a dynamic environment. Additionally, existing Bayesian Network (BN) models, which provide a means for capturing causal relationships, only present a static snapshot of a system. A method to integrate a BN complex systems model with stochastic queuing theory is developed based on the properties of the Poisson and Exponential distributions. The resultant Hybrid Queue-based Bayesian Network (HQBN) framework enables the simulation of arbitrary factors, their relationships, and their effects on passenger flow and vice versa. A case study implementation of the framework is demonstrated on the inbound passenger facilitation process at Brisbane International Airport. The predicted outputs of the model, in terms of cumulative passenger flow at intermediary and end points in the inbound process, are found to have an $R^2$ goodness of fit of 0.9994 and 0.9982 respectively over a 10 hour test period. The utility of the framework is demonstrated on a number of usage scenarios including real time monitoring and `what-if' analysis. This framework provides the ability to analyse and simulate a dynamic complex system, and can be applied to other socio-technical systems such as hospitals.
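The queueing side of the HQBN framework rests on standard Poisson-arrival/exponential-service results; as a sketch, a single M/M/1 node's steady-state metrics follow directly from its arrival and service rates (an illustrative textbook fragment, not the framework's actual structure):

```python
# Sketch: M/M/1 steady-state metrics for one queueing node, given Poisson
# arrivals at rate lam and exponential service at rate mu (lam < mu).
def mm1_metrics(lam, mu):
    if lam >= mu:
        raise ValueError("queue is unstable: require lam < mu")
    rho = lam / mu        # server utilisation
    L = rho / (1 - rho)   # mean number in system
    W = 1 / (mu - lam)    # mean time in system (Little's law: L = lam * W)
    return {"utilisation": rho, "mean_in_system": L, "mean_time": W}
```

In a hybrid model, the Bayesian network's factors would modulate `lam` and `mu` at each node, and the resulting flows would feed evidence back into the network.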
Abstract:
Utility functions in Bayesian experimental design are usually based on the posterior distribution. When the posterior is found by simulation, it must be sampled from for each future data set drawn from the prior predictive distribution. Many thousands of posterior distributions are often required. A popular technique in the Bayesian experimental design literature for rapidly obtaining samples from the posterior is importance sampling, using the prior as the importance distribution. However, importance sampling will tend to break down if there is a reasonably large number of experimental observations and/or the model parameter is high dimensional. In this paper we explore the use of Laplace approximations in the design setting to overcome this drawback. Furthermore, we consider using the Laplace approximation to form the importance distribution, to obtain a more efficient importance distribution than the prior. The methodology is motivated by a pharmacokinetic study which investigates the effect of extracorporeal membrane oxygenation on the pharmacokinetics of antibiotics in sheep. The design problem is to find 10 near-optimal plasma sampling times which produce precise estimates of pharmacokinetic model parameters/measures of interest. We consider several different utility functions of interest in these studies, which involve the posterior distribution of parameter functions.
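The Laplace-approximation-as-importance-distribution idea can be sketched on a 1-D toy posterior (a standard-normal prior with normal likelihood); the paper's pharmacokinetic models are far richer, and the numerical mode-finding below is purely illustrative:

```python
# Sketch: (1) Laplace-approximate an unnormalised 1-D posterior by finding
# its mode and curvature; (2) use the resulting normal as the importance
# distribution; (3) self-normalised importance sampling for the posterior mean.
import math, random

def log_post(theta, data, sigma=1.0):
    lp = -0.5 * theta**2                                    # N(0,1) prior
    lp += sum(-0.5 * ((y - theta) / sigma)**2 for y in data)  # likelihood
    return lp

def laplace(data, theta0=0.0, iters=50, h=1e-4):
    """Crude Newton iteration with finite differences: mode and approx sd."""
    t = theta0
    for _ in range(iters):
        g = (log_post(t + h, data) - log_post(t - h, data)) / (2 * h)
        H = (log_post(t + h, data) - 2 * log_post(t, data)
             + log_post(t - h, data)) / h**2
        t -= g / H
    return t, math.sqrt(-1.0 / H)

def is_posterior_mean(data, n=5000, seed=1):
    rng = random.Random(seed)
    mode, sd = laplace(data)
    samples = [rng.gauss(mode, sd) for _ in range(n)]
    # log importance weight = log target - log proposal (constants dropped)
    logw = [log_post(s, data) + 0.5 * ((s - mode) / sd)**2 + math.log(sd)
            for s in samples]
    m = max(logw)
    w = [math.exp(lw - m) for lw in logw]
    return sum(wi * si for wi, si in zip(w, samples)) / sum(w)
```

On this conjugate toy example the Laplace approximation is exact, so the weights are essentially constant; with a non-Gaussian posterior the weights correct the discrepancy, which is the efficiency gain over using the prior as the proposal.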