Abstract:
Background: An estimated 285 million people worldwide have diabetes and its prevalence is predicted to increase to 439 million by 2030. For the year 2010, it is estimated that 3.96 million excess deaths in the age group 20-79 years are attributable to diabetes around the world. Self-management is recognised as an integral part of diabetes care. This paper describes the protocol of a randomised controlled trial of an automated interactive telephone system aiming to improve the uptake and maintenance of essential diabetes self-management behaviours.

Methods/Design: A total of 340 individuals with type 2 diabetes will be randomised, either to the routine care arm, or to the intervention arm in which participants receive the Telephone-Linked Care (TLC) Diabetes program in addition to their routine care. The intervention requires the participants to telephone the TLC Diabetes phone system weekly for 6 months. They receive the study handbook and a glucose meter linked to a data uploading device. The TLC system consists of a computer with software designed to provide monitoring, tailored feedback and education on key aspects of diabetes self-management, based on answers voiced or entered during the current or previous conversations. Data collection is conducted at baseline (Time 1), 6-month follow-up (Time 2), and 12-month follow-up (Time 3). The primary outcomes are glycaemic control (HbA1c) and quality of life (Short Form-36 Health Survey version 2). Secondary outcomes include anthropometric measures, blood pressure, blood lipid profile, psychosocial measures as well as measures of diet, physical activity, blood glucose monitoring, foot care and medication taking. Information on utilisation of healthcare services including hospital admissions, medication use and costs is collected. An economic evaluation is also planned.

Discussion: Outcomes will provide evidence concerning the efficacy of a telephone-linked care intervention for self-management of diabetes. Furthermore, the study will provide insight into the potential for more widespread uptake of automated telehealth interventions, globally.
Abstract:
This study aimed to determine whether two brief, low-cost interventions would reduce young drivers’ optimism bias for their driving skills and accident risk perceptions. This tendency for such drivers to perceive themselves as more skilful and less prone to driving accidents than their peers may lead to less engagement in precautionary driving behaviours and greater engagement in more dangerous driving behaviour. A total of 243 young drivers (aged 17-25 years) were randomly allocated to one of three groups: accountability, insight or control. All participants provided both overall and specific situation ratings of their driving skills and accident risk relative to a typical young driver. Prior to completing the questionnaire, those in the accountability condition were first advised that their driving skills and accident risk would later be assessed via a driving simulator. Those in the insight condition first underwent a difficult computer-based hazard perception task designed to provide participants with insight into their potential limitations when responding to hazards in difficult and unpredictable driving situations. Participants in the control condition completed only the questionnaire. Results showed that the accountability manipulation was effective in reducing optimism bias in terms of participants’ comparative ratings of their accident risk in specific situations, though only for less experienced drivers. In contrast, among more experienced males, participants in the insight condition showed greater optimism bias for overall accident risk than their counterparts in the accountability or control groups. There were no effects of the manipulations on drivers’ skills ratings. The differential effects of the two types of manipulations on optimism bias relating to one’s accident risk in different subgroups of the young driver sample highlight the importance of targeting interventions for different levels of experience. Accountability interventions may be beneficial for less experienced young drivers, but the results suggest exercising caution with the use of insight-type interventions, particularly hazard perception style tasks, for more experienced young drivers typically still in the provisional stage of graduated licensing systems.
Abstract:
Since 2000-2001, dengue virus type 1 (DENV-1) has circulated in the Pacific region. However, in 2007, DENV-4 reemerged and has almost completely displaced the DENV-1 strains. If only one serotype circulates at any time and is replaced approximately every 5 years, DENV-3 may reappear in 2012.
Abstract:
With the emergence of multi-core processors into the mainstream, parallel programming is no longer the specialized domain it once was. There is a growing need for systems to allow programmers to more easily reason about data dependencies and inherent parallelism in general purpose programs. Many of these programs are written in popular imperative programming languages like Java and C#. In this thesis I present a system for reasoning about side-effects of evaluation in an abstract and composable manner that is suitable for use by both programmers and automated tools such as compilers. The goal of developing such a system is to both facilitate the automatic exploitation of the inherent parallelism present in imperative programs and to allow programmers to reason about dependencies which may be limiting the parallelism available for exploitation in their applications. Previous work on languages and type systems for parallel computing has tended to focus on providing the programmer with tools to facilitate the manual parallelization of programs; programmers must decide when and where it is safe to employ parallelism without the assistance of the compiler or other automated tools. None of the existing systems combine abstraction and composition with parallelization and correctness checking to produce a framework which helps both programmers and automated tools to reason about inherent parallelism. In this work I present a system for abstractly reasoning about side-effects and data dependencies in modern, imperative, object-oriented languages using a type and effect system based on ideas from Ownership Types. I have developed sufficient conditions for the safe, automated detection and exploitation of a number of task, data and loop parallelism patterns in terms of ownership relationships. To validate my work, I have applied my ideas to the C# version 3.0 language to produce a language extension called Zal. I have implemented a compiler for the Zal language as an extension of the GPC# research compiler as a proof of concept of my system. I have used it to parallelize a number of real-world applications to demonstrate the feasibility of my proposed approach. In addition to this empirical validation, I present an argument for the correctness of the type system and language semantics I have proposed, as well as sketches of proofs for the correctness of the sufficient conditions for parallelization proposed.
Abstract:
Research on workforce diversity gained momentum in the 1990s. However, empirical findings to date on the link between gender diversity and performance have been inconsistent. Based on contrasting theories, this paper proposes a positive linear and a negative linear prediction of the gender diversity-performance relationship. The paper also proposes that industry type (services vs. manufacturing) moderates the gender diversity-performance relationship such that the relationship will be positive in service organisations and negative in manufacturing organisations. The results show partial support for the positive linear gender diversity-performance relationship and for the moderating effect of industry type. The study contributes to the field of diversity by showing that workforce gender diversity can have a different impact on organisational performance in different industries.
Abstract:
This paper analyzes effects of different practice task constraints on heart rate (HR) variability during 4v4 small-sided football games. Participants were sixteen football players divided into two age groups (U13, mean age 12.4±0.5 yrs; U15, 14.6±0.5 yrs). The task consisted of a 4v4 sub-phase without goalkeepers, on a 25x15 m field, lasting 15 minutes with an active recovery period of 6 minutes between each condition. We recorded players’ heart rates using heart rate monitors (Polar Team System, Polar Electro, Kempele, Finland) as scoring mode was manipulated (line goal: scoring by dribbling past an extended line; double goal: scoring in either of two lateral goals; and central goal: scoring only in one goal). Subsequently, %HR reserve was calculated with the Karvonen formula. We performed a time-series analysis of HR for each individual in each condition. Mean data for intra-participant variability showed that the autocorrelation function was associated with more short-range dependence processes in the “line goal” condition, compared to other conditions, demonstrating that the “line goal” constraint induced more randomness in HR response. Relative to inter-individual variability, line goal constraints demonstrated lower %CV and %RMSD (U13: 9% and 19%; U15: 10% and 19%) compared with double goal (U13: 12% and 21%; U15: 12% and 21%) and central goal (U13: 14% and 24%; U15: 13% and 24%) task constraints, respectively. Results suggested that line goal constraints imposed more randomness on cardiovascular stimulation of each individual and lower inter-individual variability than double goal and central goal constraints.
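For readers unfamiliar with these measures, the following Python sketch illustrates the kind of calculations described: %HR reserve via the Karvonen formula, a sample autocorrelation of a HR time series, and the coefficient of variation (%CV). The resting and maximal heart rates and the player values are hypothetical; this is a minimal illustration of the formulas, not the authors' analysis code.

```python
import numpy as np

def percent_hr_reserve(hr, hr_rest, hr_max):
    """Karvonen formula: express a heart rate as a percentage of heart-rate reserve."""
    return 100.0 * (hr - hr_rest) / (hr_max - hr_rest)

def autocorr(x, lag=1):
    """Sample autocorrelation of a HR time series at a given lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

def percent_cv(values):
    """Coefficient of variation (%CV) as a measure of inter-individual variability."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# Hypothetical mean exercise HRs (bpm) for four players in one scoring condition,
# with assumed resting and maximal heart rates.
hr_rest, hr_max = 60.0, 200.0
mean_hr = np.array([152.0, 160.0, 147.0, 158.0])

hrr = percent_hr_reserve(mean_hr, hr_rest, hr_max)
print("%HR reserve per player:", np.round(hrr, 1))
print("inter-individual %CV:", round(percent_cv(hrr), 1))
```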
Abstract:
Gaze and movement behaviors of association football goalkeepers were compared under two video simulation conditions (i.e., verbal and joystick movement responses) and three in situ conditions (i.e., verbal, simplified body movement, and interceptive response). The results showed that the goalkeepers spent more time fixating on information from the penalty kick taker’s movements than ball location for all perceptual judgment conditions involving limited movement (i.e., verbal responses, joystick movement, and simplified body movement). In contrast, an equivalent amount of time was spent fixating on the penalty taker’s relative motions and the ball location for the in situ interception condition, which required the goalkeepers to attempt to make penalty saves. The data suggest that gaze and movement behaviors function differently, depending on the experimental task constraints selected for empirical investigations. These findings highlight the need for research on perceptual-motor behaviors to be conducted in representative experimental conditions to allow appropriate generalization of conclusions to performance environments.
Abstract:
Digital forensic examiners often need to identify the type of a file or file fragment based only on the content of the file. Content-based file type identification schemes typically use a byte frequency distribution with statistical machine learning to classify file types. Most algorithms analyze the entire file content to obtain the byte frequency distribution, a technique that is inefficient and time-consuming. This paper proposes two techniques for reducing the classification time. The first technique selects a subset of features based on the frequency of occurrence. The second speeds classification by sampling several blocks from the file. Experimental results demonstrate that up to a fifteen-fold reduction in file size analysis time can be achieved with limited impact on accuracy.
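A minimal Python sketch of the two proposed speed-ups is shown below: estimating a byte frequency distribution from a few sampled blocks rather than the whole file, and keeping only the most frequent byte values as features. The block size, number of blocks and feature count are illustrative assumptions, not the parameters evaluated in the paper.

```python
import os
import random
import numpy as np

def byte_frequency_from_blocks(path, block_size=4096, n_blocks=8, seed=0):
    """Estimate a file's byte frequency distribution from a few sampled blocks
    instead of scanning the entire file (block size and count are illustrative)."""
    counts = np.zeros(256, dtype=np.int64)
    size = os.path.getsize(path)
    rng = random.Random(seed)
    with open(path, "rb") as f:
        for _ in range(n_blocks):
            f.seek(rng.randrange(max(size - block_size, 1)))
            block = np.frombuffer(f.read(block_size), dtype=np.uint8)
            counts += np.bincount(block, minlength=256)
    total = counts.sum()
    return counts / total if total else counts.astype(float)

def top_k_features(freq, k=64):
    """Frequency-based feature selection: keep only the k most common byte values."""
    idx = np.argsort(freq)[-k:]
    return idx, freq[idx]
```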
Abstract:
Background: By 2025, it is estimated that approximately 1.8 million Australian adults (approximately 8.4% of the adult population) will have diabetes, with the majority having type 2 diabetes. Weight management via improved physical activity and diet is the cornerstone of type 2 diabetes management. However, the majority of weight loss trials in diabetes have evaluated short-term, intensive clinic-based interventions that, while producing short-term outcomes, have failed to address issues of maintenance and broad population reach. Telephone-delivered interventions have the potential to address these gaps.

Methods/Design: Using a two-arm randomised controlled design, this study will evaluate an 18-month, telephone-delivered, behavioural weight loss intervention focussing on physical activity, diet and behavioural therapy, versus usual care, with follow-up at 24 months. Three hundred adult participants, aged 20-75 years, with type 2 diabetes, will be recruited from 10 general practices via electronic medical records search. The Social Cognitive Theory-driven intervention involves a six-month intensive phase (4 weekly calls and 11 fortnightly calls) and a 12-month maintenance phase (one call per month). Primary outcomes, assessed at 6, 18 and 24 months, are: weight loss, physical activity, and glycaemic control (HbA1c), with weight loss and physical activity also measured at 12 months. Incremental cost-effectiveness will also be examined. Study recruitment began in February 2009, with final data collection expected by February 2013.

Discussion: This is the first study to evaluate the telephone as the primary method of delivering a behavioural weight loss intervention in type 2 diabetes. The evaluation of maintenance outcomes (6 months following the end of intervention), the use of accelerometers to objectively measure physical activity, and the inclusion of a cost-effectiveness analysis will advance the science of broad reach approaches to weight control and health behaviour change, and will build the evidence base needed to advocate for the translation of this work into population health practice.
Abstract:
In this paper we extend the concept of speaker annotation within a single recording, or speaker diarization, to a collection-wide approach we call speaker attribution. Accordingly, speaker attribution is the task of clustering the expectedly homogeneous inter-session clusters obtained using diarization according to common cross-recording identities. The result of attribution is a collection of spoken audio across multiple recordings attributed to speaker identities. In this paper, an attribution system is proposed using mean-only MAP adaptation of a combined-gender UBM to model clusters from a perfect diarization system, as well as a JFA-based system with session variability compensation. The normalized cross-likelihood ratio is calculated for each pair of clusters to construct an attribution matrix, and the complete linkage algorithm is employed to conduct clustering of the inter-session clusters. A matched cluster purity and coverage of 87.1% were obtained on the NIST 2008 SRE corpus.
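The attribution-by-clustering step can be sketched in Python as follows: a pairwise normalised cross-likelihood ratio (NCLR) matrix is converted to distances and grouped with complete-linkage clustering. The NCLR values and the distance threshold below are made up for illustration; producing the scores themselves would require the UBM/JFA models described above, so this is a sketch of the clustering stage only, not the authors' system.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical NCLR attribution matrix for 4 inter-session clusters
# (higher score = more likely to be the same speaker).
nclr = np.array([
    [ 0.0,  2.1, -1.3, -0.8],
    [ 2.1,  0.0, -1.1, -0.5],
    [-1.3, -1.1,  0.0,  1.7],
    [-0.8, -0.5,  1.7,  0.0],
])

# Turn similarities into non-negative distances and condense the matrix.
dist = nclr.max() - nclr
np.fill_diagonal(dist, 0.0)
condensed = squareform(dist, checks=False)

# Complete-linkage clustering; the distance threshold is an illustrative choice.
tree = linkage(condensed, method="complete")
labels = fcluster(tree, t=2.0, criterion="distance")
print(labels)  # clusters sharing a label are attributed to the same speaker
```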
Abstract:
PCR-based cancer diagnosis requires detection of rare mutations in k-ras, p53 or other genes. The assumption has been that mutant and wild-type sequences amplify with near equal efficiency, so that they are eventually present in proportions representative of the starting material. Work on factor IX suggests that this assumption is invalid for one case of near-sequence identity. To test the generality of this phenomenon and its relevance to cancer diagnosis, primers distant from point mutations in p53 and k-ras were used to amplify wild-type and mutant sequences from these genes. A substantial bias against PCR amplification of mutants was observed for two regions of the p53 gene and one region of k-ras. For k-ras and p53, bias was observed when the wild-type and mutant sequences were amplified separately or when mixed in equal proportions before PCR. Bias was present with proofreading and non-proofreading polymerase. Mutant and wild-type segments of the factor V, cystic fibrosis transmembrane conductance regulator and prothrombin genes were amplified and did not exhibit PCR bias. Therefore, the assumption of equal PCR efficiency for point mutant and wild-type sequences is invalid in several systems. Quantitative or diagnostic PCR will require validation for each locus, and enrichment strategies may be needed to optimize detection of mutants.
Abstract:
Stem cells have attracted tremendous interest in recent times due to their promise in providing innovative new treatments for a great range of currently debilitating diseases. This is due to their potential ability to regenerate and repair damaged tissue, and hence restore lost body function, in a manner beyond the body's usual healing process. Bone marrow-derived mesenchymal stem cells or bone marrow stromal cells are one type of adult stem cells that are of particular interest. Since they are derived from a living human adult donor, they do not have the ethical issues associated with the use of human embryonic stem cells. They are also able to be taken from a patient or other donors with relative ease and then grown readily in the laboratory for clinical application. Despite the attractive properties of bone marrow stromal cells, there is presently no quick and easy way to determine the quality of a sample of such cells. Presently, a sample must be grown for weeks and subjected to various time-consuming assays, under the direction of an expert cell biologist, to determine whether it will be useful. Hence there is a great need for innovative new ways to assess the quality of cell cultures for research and potential clinical application. The research presented in this thesis investigates the use of computerised image processing and pattern recognition techniques to provide a quicker and simpler method for the quality assessment of bone marrow stromal cell cultures. In particular, the aim of this work is to find out whether it is possible, through the use of image processing and pattern recognition techniques, to predict the growth potential of a culture of human bone marrow stromal cells at early stages, before it is readily apparent to a human observer. With the above aim in mind, a computerised system was developed to classify the quality of bone marrow stromal cell cultures based on phase contrast microscopy images. Our system was trained and tested on mixed images of both healthy and unhealthy bone marrow stromal cell samples taken from three different patients. This system, when presented with 44 previously unseen bone marrow stromal cell culture images, outperformed human experts in the ability to correctly classify healthy and unhealthy cultures. The system correctly classified the health status of an image 88% of the time compared to an average of 72% of the time for human experts. Extensive training and testing of the system on a set of 139 normal-sized images and 567 smaller image tiles showed an average performance of 86% and 85% correct classifications, respectively. The contributions of this thesis include demonstrating the applicability and potential of computerised image processing and pattern recognition techniques to the task of quality assessment of bone marrow stromal cell cultures. As part of this system, an image normalisation method has been suggested and a new segmentation algorithm has been developed for locating cell regions of irregularly shaped cells in phase contrast images. Importantly, we have validated the efficacy of both the normalisation and segmentation methods, by demonstrating that both methods quantitatively improve the classification performance of subsequent pattern recognition algorithms, in discriminating between cell cultures of differing health status. We have shown that the quality of a cell culture of bone marrow stromal cells may be assessed without the need to either segment individual cells or to use time-lapse imaging.
Finally, we have proposed a set of features that, when extracted from the cell regions of segmented input images, can be used to train current state-of-the-art pattern recognition systems to predict the quality of bone marrow stromal cell cultures earlier and more consistently than human experts.
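As an illustration of the kind of pipeline described (normalisation, segmentation of cell regions, feature extraction, classification), the Python sketch below assembles a heavily simplified version under assumed choices: z-score normalisation, a crude intensity threshold standing in for the thesis's segmentation algorithm, a handful of global features, and a random forest classifier. It sketches the general approach only and is not the system developed in the thesis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def normalise(image):
    """Rescale a phase contrast image to zero mean, unit variance (assumed normalisation)."""
    image = image.astype(float)
    return (image - image.mean()) / (image.std() + 1e-8)

def extract_features(image):
    """Simple global features standing in for the cell-region features in the thesis."""
    norm = normalise(image)
    mask = norm > norm.mean() + norm.std()     # crude stand-in for the segmentation step
    return np.array([
        norm.mean(), norm.std(),               # intensity statistics
        mask.mean(),                           # fraction of the image covered by cell-like regions
        np.abs(np.diff(norm, axis=0)).mean(),  # coarse texture/edge measure
    ])

def train_quality_classifier(images, labels):
    """Train a classifier on a list of 2-D image arrays with labels (1 = healthy, 0 = unhealthy)."""
    X = np.stack([extract_features(img) for img in images])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, labels)
    return clf
```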