45 results for data gathering algorithm
in University of Queensland eSpace - Australia
Abstract:
The modelling of inpatient length of stay (LOS) has important implications in health care studies. Finite mixture distributions are usually used to model the heterogeneous LOS distribution, because a certain proportion of patients sustain a longer stay. However, since the morbidity data are collected from hospitals, observations clustered within the same hospital are often correlated. The generalized linear mixed model approach is adopted to accommodate the inherent correlation via unobservable random effects. An EM algorithm is developed to obtain residual maximum quasi-likelihood estimation. The proposed hierarchical mixture regression approach enables the identification and assessment of factors influencing the long-stay proportion and the LOS for the long-stay patient subgroup. A neonatal LOS data set is used for illustration. (C) 2003 Elsevier Science Ltd. All rights reserved.
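As a hedged illustration of the EM idea behind this abstract, the sketch below fits a two-component mixture to log-transformed LOS values. It is a minimal sketch only: it uses simulated data and omits the paper's hospital-level random effects and regression covariates, so all names and numbers are illustrative assumptions.

```python
# Minimal EM for a two-component lognormal LOS mixture (illustrative only;
# the paper's full model adds hospital random effects and covariates).
import numpy as np

rng = np.random.default_rng(0)
los = np.concatenate([rng.lognormal(1.0, 0.3, 800),   # short-stay group
                      rng.lognormal(2.5, 0.5, 200)])  # long-stay group
x = np.log(los)

# Initial guesses for the mixing proportion, means, and variances.
pi, mu, var = 0.5, np.array([x.min(), x.max()]), np.array([1.0, 1.0])

for _ in range(200):
    # E-step: posterior probability each stay belongs to the long-stay component.
    pdf = lambda m, v: np.exp(-(x - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)
    num = pi * pdf(mu[1], var[1])
    resp = num / ((1 - pi) * pdf(mu[0], var[0]) + num)
    # M-step: re-estimate the parameters from the weighted data.
    pi = resp.mean()
    for k, w in enumerate([1 - resp, resp]):
        mu[k] = (w * x).sum() / w.sum()
        var[k] = (w * (x - mu[k]) ** 2).sum() / w.sum()

print(f"long-stay proportion ~ {pi:.2f}, component means (log-days) {mu.round(2)}")
```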
Abstract:
Hannenhalli and Pevzner developed the first polynomial-time algorithm for the combinatorial problem of sorting signed genomic data. Their algorithm computes the minimum number of reversals required to rearrange one genome into another when no gene is duplicated. In this paper, we show how to extend the Hannenhalli-Pevzner approach to genomes with multigene families. We propose a new heuristic algorithm to compute the reversal distance between two genomes with multigene families via the concept of binary integer programming without removing gene duplicates. The experimental results on simulated and real biological data demonstrate that the proposed algorithm is able to find the reversal distance accurately. ©2005 IEEE
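For orientation, the sketch below is a naive greedy reversal sort for signed permutations. It yields only an upper bound on the reversal distance and is neither the Hannenhalli-Pevzner algorithm nor the paper's binary integer programming formulation; the toy permutation is invented.

```python
# Greedy reversal sort for a signed permutation: place each gene in turn,
# then fix its sign. Counts the reversals used (an upper bound only).
def greedy_reversal_sort(perm):
    perm = list(perm)
    count = 0
    for i in range(len(perm)):
        target = i + 1
        j = next(k for k in range(i, len(perm)) if abs(perm[k]) == target)
        if j != i:                                  # bring |target| to slot i
            perm[i:j + 1] = [-g for g in reversed(perm[i:j + 1])]
            count += 1
        if perm[i] < 0:                             # flip a wrong-signed gene
            perm[i] = -perm[i]
            count += 1
    return count

print(greedy_reversal_sort([3, -1, 2]))  # 3 reversals for this toy genome
```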
Abstract:
Data refinements are refinement steps in which a program’s local data structures are changed. Data refinement proof obligations require the software designer to find an abstraction relation that relates the states of the original and new program. In this paper we describe an algorithm that helps a designer find an abstraction relation for a proposed refinement. Given sufficient time and space, the algorithm can find a minimal abstraction relation, and thus show that the refinement holds. As it executes, the algorithm displays mappings that cannot be in any abstraction relation. When the algorithm is not given sufficient resources to terminate, these mappings can help the designer find a suitable abstraction relation. The same algorithm can be used to test an abstraction relation supplied by the designer.
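The fixpoint sketch below illustrates the general idea of displaying and discarding mappings that cannot belong to any abstraction relation: starting from all (abstract, concrete) state pairs, it removes pairs whose concrete behaviour cannot be matched. The two tiny transition systems and the single "inc" operation are invented for illustration; this is a simulation-style computation in the spirit of the abstract, not the paper's algorithm.

```python
# Start from the full relation and repeatedly discard (abstract, concrete)
# pairs for which some operation's concrete successors cannot be matched.
A_states = {"a0", "a1"}
C_states = {"c0", "c1", "c2"}
ops = ["inc"]
A_step = {("a0", "inc"): {"a1"}}                         # a1 has no 'inc' step
C_step = {("c0", "inc"): {"c1"}, ("c1", "inc"): {"c2"}}  # c2 is final

rel = {(a, c) for a in A_states for c in C_states}
changed = True
while changed:
    changed = False
    for (a, c) in list(rel):
        for op in ops:
            # every concrete successor must be matched by a related abstract successor
            ok = all(any((a2, c2) in rel for a2 in A_step.get((a, op), set()))
                     for c2 in C_step.get((c, op), set()))
            if not ok:
                rel.discard((a, c))   # this mapping cannot be in any relation
                changed = True
                break

print(sorted(rel))   # surviving pairs; e.g. (a1, c0) is ruled out
```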
Abstract:
Introduction: The objective of this study was to analyse the accommodation needs of people with intellectual disability over the age of 18 years in Toowoomba and contiguous shires. In 2004, a group of carers established the Toowoomba Intellectual Disability Support Association (TIDSA) to address the lack of supported accommodation for people with intellectual disability over the age of 18 and the concerns of ageing carers. The Centre for Rural and Remote Area Health (CRRAH) was engaged by TIDSA to ascertain this need and undertook a research project funded by the Queensland Gambling Community Benefit Fund. While data specifically relating to people with intellectual disability and their carers are difficult to obtain, the Australian Bureau of Statistics reports that carers of people with a disability are more likely to be female and at least 65 years of age. Projections by the National Centre for Social and Economic Modelling (NATSEM) show that disability rates are increasing and carer rates are decreasing. Thus the problem of providing appropriate support to the increasing number of ageing carers and those they care for will be a major challenge to policy makers and is an issue of immediate concern. In general, what was once the norm of accommodating people with intellectual disability in large institutions is now changing to accommodation in community-based residences (Annison, 2000; Young, Ashman, Sigafoos, & Grevell, 2001). However, in Toowoomba and contiguous shires, TIDSA has noted that the availability of suitable accommodation for people with intellectual disability over the age of 18 years is declining, with no new options available in an environment of increasing demand. Most effort seemed to be directed towards crisis provision.
Method: This study employed two phases of data gathering, the first being the distribution of a questionnaire through local service providers, and upon individual request, to the carers of people with intellectual disability over the age of 18. The questionnaire comprised Likert-type items intended to measure various aspects of current and future accommodation issues. Most questions were followed with space for free-response comments to give carers the opportunity to clarify and expand on their responses. The second phase comprised semi-structured interviews conducted with ten carers and ten people with intellectual disability who had participated in the Phase One questionnaire. Interviews were transcribed verbatim and subjected to content analysis in which major themes were explored.
Results (age and gender): Carer participants in this study totalled 150. The mean age of these carers was 61.5 years, ranging from 40 to 91 years. Females comprised 78% of the sample (mean age = 61.49; range 40-91) and 22% were male (mean age = 61.7; range 43-81). The mean age of people with intellectual disability in our study was 37.2 years, ranging from 18 to 79 years, with 40% female (mean age = 39.5; range 19-79) and 60% male (mean age = 35.6; range 18-59). The average age of carers caring for a person over the age of 18 who is living at home is 61 years; the average age of carers caring for a person living away from home is 62 years. The overall age range of both these groups of carers is between 40 and 81 years. The oldest group of carers (mean age = 70 years) were those where the person with intellectual disability lives away from home in a large residential facility.
Almost one quarter of people with an intellectual disability who currently live at home are cared for by one primary carer, and this is almost exclusively a parent.
Abstract:
Institutional research can be defined as "the activity in which the research effort of an academic institution is directed at the solution of its own problems and to the enhancement of its own performance" (Woodward, 1993, p. 113). This paper describes and reflects on an attempt at the University of Queensland to address the need for course quality appraisal for improvement. The strategy, Continuous Curriculum Review (CCR), is simply an attempt to trial and promote regular comprehensive data collection for developing 'snapshot' views of whole curricula, so that decisions about what to change, and what to change first, can be made in an empirically defensible and timely manner. The strategy and reporting protocols that were developed are described, and the costs and benefits of engaging in this kind of data gathering exercise for quality assurance and quality enhancement purposes are discussed.
Abstract:
In this study of articulation issues related to languages other than English (LOTE), "articulation" is defined and the challenges surrounding it are overviewed. Data taken from an independent school's admission documents over a 4-year period provide insights and reveal trends concerning students' preferences for language study, LOTE study continuity, and reasons for LOTE selection. The data also provide an accounting of some multiple LOTE learning experiences. The analysis indicates that many students who begin a LOTE in the early grades are thwarted in becoming proficient, because (1) continuation in the language is impossible due to unavailability of instruction; (2) expanded learning is hampered by teachers' inability to deal with a range of learners; (3) extended learning is hampered by administrative decisions or policies; or (4) students lose interest in the first LOTE and switch to another. Finally, a call is made for data gathering and research in local contexts to gain a better understanding of LOTE articulation challenges at the local, state, national, and international levels.
Abstract:
The objectives of this study were to evaluate the outcomes of our patients admitted with hip fractures, and to benchmark these results against other hospitals, initially in Europe and subsequently in Australia. The Standardised Audit of Hip Fractures in Europe (SAHFE) questionnaire was used as the data gathering instrument. The participants were all patients admitted to Redcliffe Hospital with a fractured neck of femur prior to surgery. This paper reports the results of the first 70 consecutive patients admitted to Redcliffe Hospital with a fractured neck of femur from November 1st 2000. The main outcome measures were mobility, independence and residence prior to fracture; type of fracture and surgical repair; and time to surgery, survival rates and discharge destination. Results: 43 patients were admitted from home, but only 13 returned home directly from the orthopaedic ward. It is hoped that most of the 26 transferred to the rehabilitation ward will ultimately return home. Seven patients died; they were aged 82 to 102, and all had premorbid disease. Delays in surgery were apparent for 13 patients, mainly due to administrative problems. Conclusions: We support the recommendation in the Fifteenth Scottish Intercollegiate Guidelines Network publication on the management of hip fractures that all units treating this condition should enter an audit to evaluate their management. (author abstract)
Abstract:
Objective: Inpatient length of stay (LOS) is an important measure of hospital activity, health care resource consumption, and patient acuity. This research aims to develop an incremental expectation maximization (EM) based learning approach on a mixture of experts (ME) system for on-line prediction of LOS. The use of a batch-mode learning process in most existing artificial neural networks to predict LOS is unrealistic, as the data become available over time and their patterns change dynamically. In contrast, an on-line process is capable of providing an output whenever a new datum becomes available. This on-the-spot information is therefore more useful and practical for making decisions, especially when one deals with a tremendous amount of data. Methods and material: The proposed approach is illustrated using a real example of gastroenteritis LOS data. The data set was extracted from a retrospective cohort study on all infants born in 1995-1997 and their subsequent admissions for gastroenteritis. The total number of admissions in this data set was n = 692. Linked hospitalization records of the cohort were retrieved retrospectively to derive the outcome measure, patient demographics, and associated co-morbidities information. A comparative study of the incremental learning and the batch-mode learning algorithms is considered. The performances of the learning algorithms are compared based on the mean absolute difference (MAD) between the predictions and the actual LOS, and the proportion of predictions with MAD < 1 day (Prop(MAD < 1)). The significance of the comparison is assessed through a regression analysis. Results: The incremental learning algorithm provides better on-line prediction of LOS when the system has gained sufficient training from more examples (MAD = 1.77 days and Prop(MAD < 1) = 54.3%), compared to that using the batch-mode learning. The regression analysis indicates a significant decrease of MAD (p-value = 0.063) and a significant (p-value = 0.044) increase of Prop(MAD < 1).
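A hedged sketch of the batch-versus-incremental contrast, scored with the paper's metrics (MAD and Prop(MAD < 1)), is given below. The linear model, learning rate, and simulated LOS stream are illustrative assumptions, not the mixture-of-experts system the abstract describes.

```python
# Incremental (one-datum-at-a-time) fitting on a simulated LOS stream:
# predict each arrival before its true LOS is revealed, then update.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(692, 3))                   # patient covariates (simulated)
true_w = np.array([1.5, -0.8, 0.4])
y = 3.0 + X @ true_w + rng.normal(0, 1.0, 692)  # LOS in days (simulated)

w, b, lr = np.zeros(3), 0.0, 0.01
errors = []
for xi, yi in zip(X, y):
    pred = b + xi @ w                 # on-line prediction for the new datum
    errors.append(abs(pred - yi))
    grad = pred - yi                  # incremental gradient step
    w -= lr * grad * xi
    b -= lr * grad

errors = np.array(errors[100:])       # score after a burn-in period
print(f"MAD = {errors.mean():.2f} days, Prop(MAD<1) = {(errors < 1).mean():.1%}")
```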
Abstract:
New residential scale photovoltaic (PV) arrays are commonly connected to the grid by a single DC-AC inverter connected to a series string of PV modules, or by many small DC-AC inverters which connect one or two modules directly to the AC grid. This paper shows that a "converter-per-module" approach offers many advantages, including: individual module maximum power point tracking, which gives great flexibility in module layout and replacement, and insensitivity to shading; better protection of PV sources, and redundancy in the case of source or converter failure; easier and safer installation and maintenance; and better data gathering. Simple nonisolated per-module DC-DC converters can be series connected to create a high voltage string connected to a simplified DC-AC inverter. These advantages are available without the cost or efficiency penalties of individual DC-AC grid connected inverters. Buck, boost, buck-boost and Cuk converters are possible cascadable converters. The boost converter is best if a significant step up is required, such as with a short string of 12 PV modules. A string of buck converters requires many more modules, but can always deliver any combination of module power. The buck converter is the most efficient topology for a given cost. While flexible in voltage ranges, buck-boost and Cuk converters are always at an efficiency or, alternatively, a cost disadvantage.
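The per-module tracking credited above with shading insensitivity is commonly realised with a perturb-and-observe loop; the sketch below shows that idea against a toy power curve. The 17 V peak, step size, and quadratic curve are invented values, and a real controller would act on converter duty cycle rather than directly on voltage.

```python
# Perturb-and-observe MPPT against a toy PV power curve with a peak at 17 V.
def pv_power(v):
    return max(0.0, -0.5 * (v - 17.0) ** 2 + 80.0)  # illustrative module curve

v, step = 12.0, 0.2
last_p = pv_power(v)
for _ in range(200):
    v += step
    p = pv_power(v)
    if p < last_p:        # power dropped: reverse the perturbation direction
        step = -step
    last_p = p

print(f"settled near v = {v:.1f} V, p = {last_p:.1f} W")  # close to the 17 V peak
```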
Abstract:
Motivation: Prediction methods for identifying binding peptides could minimize the number of peptides required to be synthesized and assayed, and thereby facilitate the identification of potential T-cell epitopes. We developed a bioinformatic method for the prediction of peptide binding to MHC class II molecules. Results: Experimental binding data and expert knowledge of anchor positions and binding motifs were combined with an evolutionary algorithm (EA) and an artificial neural network (ANN): binding data extraction --> peptide alignment --> ANN training and classification. This method, termed PERUN, was implemented for the prediction of peptides that bind to HLA-DR4(B1*0401). The respective positive predictive values of PERUN predictions of high-, moderate-, low- and zero-affinity binders were assessed as 0.8, 0.7, 0.5 and 0.8 by cross-validation, and 1.0, 0.8, 0.3 and 0.7 by experimental binding. This illustrates the synergy between experimentation and computer modeling, and its application to the identification of potential immunotherapeutic peptides.
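A toy version of the pipeline shape described above (encode aligned peptides at anchor positions, then train a classifier) might look like the following. The peptides, labels, assumed anchor positions, and the single-layer logistic model are all invented for illustration; PERUN's real alignment step uses an evolutionary algorithm and its classifier is an ANN trained on experimental binding data.

```python
# Toy "align -> encode -> train" pipeline for peptide binding prediction.
import numpy as np

aa = "ACDEFGHIKLMNPQRSTVWY"
def encode(pep, anchors=(0, 3, 5)):            # one-hot at assumed anchor slots
    v = np.zeros(len(anchors) * 20)
    for i, pos in enumerate(anchors):
        v[i * 20 + aa.index(pep[pos])] = 1.0
    return v

peptides = ["FVKQNAAAL", "YVDRFYKTL", "AAAAAAAAA", "GGGGGGGGG"]
labels = np.array([1, 1, 0, 0])                # 1 = binder (invented labels)
X = np.stack([encode(p) for p in peptides])

w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):                           # logistic regression by gradient descent
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - labels) / len(labels)
    b -= 0.1 * (p - labels).mean()

print((1 / (1 + np.exp(-(X @ w + b)))).round(2))  # fitted binding scores
```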
Abstract:
The use of computational fluid dynamics simulations for calibrating a flush air data system is described. In particular, the flush air data system of the HYFLEX hypersonic vehicle is used as a case study. The HYFLEX air data system consists of nine pressure ports located flush with the vehicle nose surface, connected to onboard pressure transducers. After appropriate processing, surface pressure measurements can be converted into useful air data parameters. The processing algorithm requires an accurate pressure model, which relates air data parameters to the measured pressures. In the past, such pressure models have been calibrated using combinations of flight data, ground-based experimental results, and numerical simulation. We perform a calibration of the HYFLEX flush air data system using computational fluid dynamics simulations exclusively. The simulations are used to build an empirical pressure model that accurately describes the HYFLEX nose pressure distribution over a range of flight conditions. We believe that computational fluid dynamics provides a quick and inexpensive way to calibrate the air data system and is applicable to a broad range of flight conditions. When tested with HYFLEX flight data, the calibrated system is found to work well. It predicts vehicle angle of attack and angle of sideslip to accuracy levels that generally satisfy flight control requirements. Dynamic pressure is predicted to within the resolution of the onboard inertial measurement unit. We find that wind-tunnel experiments and flight data are not necessary to accurately calibrate the HYFLEX flush air data system for hypersonic flight.
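The calibration idea, fitting an empirical per-port pressure model from simulation data and then inverting it to recover air data parameters, can be sketched as below. The cosine-squared port response standing in for CFD output, the port angles, and the grid-search inversion are all illustrative assumptions, not the HYFLEX pressure model.

```python
# Fit a per-port pressure model from synthetic "CFD" samples, then invert
# it to estimate angle of attack from measured pressures.
import numpy as np

port_angles = np.radians([-40, -20, 0, 20, 40])   # five illustrative ports

def cfd_pressure(alpha, q=50e3):                  # stand-in for CFD output
    return q * np.cos(np.radians(alpha) - port_angles) ** 2

# "Calibration": fit a quadratic in alpha for each port from CFD samples.
alphas = np.linspace(-10, 10, 21)
samples = np.stack([cfd_pressure(a) for a in alphas])
coeffs = [np.polyfit(alphas, samples[:, i], 2) for i in range(len(port_angles))]

def predict(alpha):
    return np.array([np.polyval(c, alpha) for c in coeffs])

# Inversion: pick the alpha whose modelled pressures best match the measurements.
measured = cfd_pressure(3.7) + np.random.default_rng(2).normal(0, 100, 5)
grid = np.linspace(-10, 10, 2001)
best = grid[np.argmin([np.sum((predict(a) - measured) ** 2) for a in grid])]
print(f"estimated angle of attack ~ {best:.2f} deg")   # near the true 3.7
```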
Abstract:
We tested the effects of four data characteristics on the results of reserve selection algorithms. The data characteristics were nestedness of features (land types in this case), rarity of features, size variation of sites (potential reserves) and size of data sets (numbers of sites and features). We manipulated data sets to produce three levels, with replication, of each of these data characteristics while holding the other three characteristics constant. We then used an optimizing algorithm and three heuristic algorithms to select sites to solve several reservation problems. We measured efficiency as the number or total area of selected sites, indicating the relative cost of a reserve system. Higher nestedness increased the efficiency of all algorithms (reduced the total cost of new reserves). Higher rarity reduced the efficiency of all algorithms (increased the total cost of new reserves). More variation in site size increased the efficiency of all algorithms expressed in terms of total area of selected sites. We measured the suboptimality of heuristic algorithms as the percentage increase of their results over optimal (minimum possible) results. Suboptimality is a measure of the reliability of heuristics as indicative costing analyses. Higher rarity reduced the suboptimality of heuristics (increased their reliability) and there is some evidence that more size variation did the same for the total area of selected sites. We discuss the implications of these results for the use of reserve selection algorithms as indicative and real-world planning tools.
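For reference, the heuristic algorithms compared in such studies are typically greedy richness-style procedures; a minimal sketch of one, on invented sites and land-type features, is given below. An optimizing algorithm would instead solve the equivalent set-covering problem exactly.

```python
# Greedy reserve selection: repeatedly add the site that covers the most
# land types not yet represented in the reserve system.
sites = {                     # site -> set of land types it contains (invented)
    "s1": {"A", "B"},
    "s2": {"B", "C", "D"},
    "s3": {"D"},
    "s4": {"A", "C", "E"},
}

needed = set().union(*sites.values())
chosen = []
while needed:
    best = max(sites, key=lambda s: len(sites[s] & needed))
    chosen.append(best)
    needed -= sites[best]

print(chosen)   # a small, but not necessarily optimal, reserve system
```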
Abstract:
To translate and transfer solution data between two totally different meshes (i.e. mesh 1 and mesh 2), a consistent point-searching algorithm for solution interpolation in unstructured meshes consisting of 4-node bilinear quadrilateral elements is presented in this paper. The proposed algorithm has the following significant advantages: (1) The use of a point-searching strategy allows a point in one mesh to be accurately related to an element (containing this point) in another mesh. Thus, to translate/transfer the solution of any particular point from mesh 2 to mesh 1, only one element in mesh 2 needs to be inversely mapped. This certainly minimizes the number of elements to which the inverse mapping is applied. In this regard, the present algorithm is very effective and efficient. (2) Analytical solutions to the local coordinates of any point in a four-node quadrilateral element, which are derived in a rigorous mathematical manner in the context of this paper, make it possible to carry out the inverse mapping process very effectively and efficiently. (3) The use of consistent interpolation enables the interpolated solution to be compatible with an original solution and, therefore, guarantees an interpolated solution of extremely high accuracy. After the mathematical formulations of the algorithm are presented, the algorithm is tested and validated through a challenging problem. The related results from the test problem have demonstrated the generality, accuracy, effectiveness, efficiency and robustness of the proposed consistent point-searching algorithm. Copyright (C) 1999 John Wiley & Sons, Ltd.
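The inverse-mapping step can be sketched as follows for a single 4-node element. The paper derives closed-form (analytical) local coordinates; for brevity this sketch substitutes Newton iteration, and the element geometry and query point are invented.

```python
# Recover local coordinates (xi, eta) of a physical point inside a 4-node
# bilinear quadrilateral, then test containment (the point-searching check).
import numpy as np

def shape(xi, eta):            # bilinear shape functions, nodes ordered CCW
    return 0.25 * np.array([(1 - xi) * (1 - eta), (1 + xi) * (1 - eta),
                            (1 + xi) * (1 + eta), (1 - xi) * (1 + eta)])

def inverse_map(nodes, p, tol=1e-12):
    xi = eta = 0.0
    for _ in range(50):
        r = shape(xi, eta) @ nodes - p            # residual in physical space
        if np.linalg.norm(r) < tol:
            break
        dN_dxi = 0.25 * np.array([-(1 - eta), (1 - eta), (1 + eta), -(1 + eta)])
        dN_deta = 0.25 * np.array([-(1 - xi), -(1 + xi), (1 + xi), (1 - xi)])
        J = np.stack([dN_dxi @ nodes, dN_deta @ nodes])   # rows: d(x,y)/d(xi), d(x,y)/d(eta)
        dxi, deta = np.linalg.solve(J.T, -r)              # Newton update
        xi, eta = xi + dxi, eta + deta
    return xi, eta

nodes = np.array([[0.0, 0.0], [2.0, 0.1], [2.2, 1.9], [0.1, 2.0]])
xi, eta = inverse_map(nodes, np.array([1.0, 1.0]))
inside = abs(xi) <= 1 and abs(eta) <= 1          # element-containment test
print(f"xi={xi:.4f}, eta={eta:.4f}, inside={inside}")
```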
Abstract:
The new technologies for Knowledge Discovery from Databases (KDD) and data mining promise to bring new insights into a voluminous growing amount of biological data. KDD technology is complementary to laboratory experimentation and helps speed up biological research. This article contains an introduction to KDD, a review of data mining tools, and their biological applications. We discuss the domain concepts related to biological data and databases, as well as current KDD and data mining developments in biology.
Abstract:
Matrix population models, elasticity analysis and loop analysis can potentially provide powerful techniques for the analysis of life histories. Data from a capture-recapture study on a population of southern highland water skinks (Eulamprus tympanum) were used to construct a matrix population model. Errors in elasticities were calculated by using the parametric bootstrap technique. Elasticity and loop analyses were then conducted to identify the life history stages most important to fitness. The same techniques were used to investigate the relative importance of fast versus slow growth, and rapid versus delayed reproduction. Mature water skinks were long-lived, but there was high immature mortality. The most sensitive life history stage was the subadult stage. It is suggested that life history evolution in E. tympanum may be strongly affected by predation, particularly by birds. Because our population declined over the study, slow growth and delayed reproduction were the optimal life history strategies over this period. Although the techniques of evolutionary demography provide a powerful approach for the analysis of life histories, there are formidable logistical obstacles in gathering enough high-quality data for robust estimates of the critical parameters.
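As a hedged sketch of the matrix-model calculations such studies rely on, the code below computes the population growth rate, sensitivities, and elasticities from the dominant eigenpair of a stage matrix. The three-stage matrix is invented for illustration and is not the E. tympanum matrix from the study.

```python
# Elasticity analysis of a stage-structured matrix model: growth rate is the
# dominant eigenvalue; elasticities follow from the left/right eigenvectors.
import numpy as np

A = np.array([[0.0, 0.0, 1.8],    # adult fecundity
              [0.3, 0.4, 0.0],    # juvenile survival, subadult persistence
              [0.0, 0.5, 0.8]])   # subadult->adult transition, adult survival

vals, right = np.linalg.eig(A)
lam = vals.real.max()                                  # population growth rate
w = np.abs(right[:, vals.real.argmax()].real)          # stable stage structure
valsT, left = np.linalg.eig(A.T)
v = np.abs(left[:, valsT.real.argmax()].real)          # reproductive values

sens = np.outer(v, w) / (v @ w)    # sensitivities d(lambda)/d(A_ij)
elas = sens * A / lam              # elasticities; nonzero entries sum to 1
print(f"lambda = {lam:.3f}\n{elas.round(3)}")
```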