980 results for Tables (Data).
Abstract:
Purpose: All currently considered parametric models used for decomposing videokeratoscopy height data are viewer-centered and hence describe what the operator sees rather than what the surface is. The purpose of this study was to ascertain the applicability of an object-centered representation to modeling of corneal surfaces. Methods: A three-dimensional surface decomposition into a series of spherical harmonics is considered and compared with the traditional Zernike polynomial expansion for a range of videokeratoscopic height data. Results: Spherical harmonic decomposition led to significantly better fits to corneal surfaces (in terms of the root mean square error values) than the corresponding Zernike polynomial expansions with the same number of coefficients, for all considered corneal surfaces, corneal diameters, and model orders. Conclusions: Spherical harmonic decomposition is a viable alternative to Zernike polynomial decomposition. It achieves better fits to videokeratoscopic height data and has the advantage of an object-centered representation that could be particularly suited to the analysis of multiple corneal measurements.
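The idea behind the comparison can be sketched as a toy least-squares fit: expand sampled surface heights in a small real spherical-harmonic basis and measure the root mean square (RMS) residual. The basis below stops at degree 1 and the "corneal" data are synthetic, so this illustrates the decomposition only, not the study's actual fitting procedure.

```python
import numpy as np

def sh_basis(theta, phi):
    """Real spherical harmonics up to degree 1 (polar angle theta, azimuth phi)."""
    return np.column_stack([
        np.full_like(theta, 0.5 * np.sqrt(1.0 / np.pi)),          # Y_0^0
        np.sqrt(3 / (4 * np.pi)) * np.sin(theta) * np.sin(phi),   # Y_1^-1
        np.sqrt(3 / (4 * np.pi)) * np.cos(theta),                 # Y_1^0
        np.sqrt(3 / (4 * np.pi)) * np.sin(theta) * np.cos(phi),   # Y_1^1
    ])

def fit_surface(theta, phi, height):
    """Least-squares fit of height samples to the basis; returns (coeffs, rms)."""
    A = sh_basis(theta, phi)
    coeffs, *_ = np.linalg.lstsq(A, height, rcond=None)
    rms = np.sqrt(np.mean((A @ coeffs - height) ** 2))
    return coeffs, rms

# Synthetic "corneal cap": heights built from a known coefficient vector.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 0.5, 500)      # cap of ~30 degrees around the apex
phi = rng.uniform(0, 2 * np.pi, 500)
true = np.array([7.8, 0.1, 0.3, -0.2])
z = sh_basis(theta, phi) @ true

coeffs, rms = fit_surface(theta, phi, z)
print(np.round(coeffs, 3), rms)       # recovers the known coefficients
```

Comparing two bases (spherical harmonics vs. Zernike) on the same data then amounts to comparing the RMS residuals for an equal number of coefficients.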
Abstract:
Problem-based learning (PBL) is a pedagogical methodology that presents the learner with a problem to be solved in order to stimulate and situate learning. This paper presents key characteristics of a problem-based learning environment that determine its suitability as a data source for work-related research studies. To date, little has been written about the availability and validity of PBL environments as a data source and their suitability for work-related research. We describe problem-based learning and use a research project case study to illustrate the challenges associated with industry work samples. We then describe the PBL course used in our research case study and use this example to illustrate the key attributes of problem-based learning environments, showing how the chosen PBL environment met the work-related research requirements of the case study. We propose that the more realistic the PBL work context and work group composition, the better the PBL environment serves as a data source for work-related research. The work context is more realistic when relevant and complex project-based problems are tackled in industry-like work conditions over longer time frames. Work group composition is more realistic when participants with industry-level education and experience enact specialized roles in different disciplines within a professional community.
Abstract:
In this article we explore young children's development of mathematical knowledge and reasoning processes as they worked on two modelling problems (the Butter Beans Problem and the Airplane Problem). The problems involve authentic situations that need to be interpreted and described in mathematical ways. Both problems include tables of data, together with background information containing specific criteria to be considered in the solution process. Four classes of third-graders (8 years of age) and their teachers participated in the 6-month program, which included preparatory modelling activities along with professional development for the teachers. In discussing our findings we address: (a) ways in which the children applied their informal, personal knowledge to the problems; (b) how the children interpreted the tables of data, including difficulties they experienced; (c) how the children operated on the data, including aggregating and comparing data and looking for trends and patterns; (d) how the children developed important mathematical ideas; and (e) ways in which the children represented their mathematical understandings.
Abstract:
An educational priority of many nations is to enhance mathematical learning in early childhood. One area in need of special attention is that of statistics. This paper argues for a renewed focus on statistical reasoning in the beginning school years, with opportunities for children to engage in data modelling activities. Such modelling involves investigations of meaningful phenomena, deciding what is worthy of attention (i.e., identifying complex attributes), and then progressing to organising, structuring, visualising, and representing data. Results are reported from the first year of a three-year longitudinal study in which three classes of first-grade children and their teachers engaged in activities that required the creation of data models. The theme of “Looking after our Environment,” a component of the children’s science curriculum at the time, provided the context for the activities. Findings focus on how the children dealt with given complex attributes and how they generated their own attributes in classifying broad data sets, and the nature of the models the children created in organising, structuring, and representing their data.
Abstract:
Objective: To quantify the extent to which alcohol related injuries are adequately identified in hospitalisation data using ICD-10-AM codes indicative of alcohol involvement. Method: A random sample of 4373 injury-related hospital separations from 1 July 2002 to 30 June 2004 were obtained from a stratified random sample of 50 hospitals across 4 states in Australia. From this sample, cases were identified as involving alcohol if they contained an ICD-10-AM diagnosis or external cause code referring to alcohol, or if the text description extracted from the medical records mentioned alcohol involvement. Results: Overall, identification of alcohol involvement using ICD codes detected 38% of the alcohol-related sample, whilst almost 94% of alcohol-related cases were identified through a search of the text extracted from the medical records. The resultant estimate of alcohol involvement in injury-related hospitalisations in this sample was 10%. Emergency department records were the most likely to identify whether the injury was alcohol-related with almost three-quarters of alcohol-related cases mentioning alcohol in the text abstracted from these records. Conclusions and Implications: The current best estimates of the frequency of hospital admissions where alcohol is involved prior to the injury underestimate the burden by around 62%. This is a substantial underestimate that has major implications for public policy, and highlights the need for further work on improving the quality and completeness of routine administrative data sources for identification of alcohol-related injuries.
Abstract:
Objective: To examine the sources of coding discrepancy for injury morbidity data and explore the implications of these sources for injury surveillance. Method: An on-site medical record review and recoding study was conducted for 4373 injury-related hospital admissions across Australia. Codes from the original dataset were compared to the recoded data to explore the reliability of coded data and sources of discrepancy. Results: The most common reason for differences in coding overall was assigning the case to a different external cause category, with 8.5% of cases assigned to a different category. Differences in the specificity of codes assigned within a category accounted for 7.8% of coder difference. Differences in intent assignment accounted for 3.7% of the differences in code assignment. Conclusions: Where 8% of cases are misclassified by major category, setting injury targets on the basis of extent of burden is a somewhat blunt instrument. Monitoring the effect of prevention programs aimed at reducing risk factors is not possible in datasets with this level of misclassification error in injury cause subcategories. Future research is needed to build the evidence base around the quality and utility of the ICD classification system and its application to injury surveillance in the hospital environment.
Abstract:
User-based intelligent systems are already commonplace in a student’s online digital life. Each time they browse, search, buy, join, comment, play, travel, upload, or download, a system collects, analyses and processes data in an effort to customise content and further improve services. This panel session will explore how intelligent systems, particularly those that gather data from mobile devices, can offer new possibilities to assist in the delivery of customised, personal and engaging learning experiences. The value of intelligent systems for education lies in their ability to formulate authentic and complex learner profiles that bring together and systematically integrate a student’s personal world with a formal curriculum framework. As we well know, a mobile device can collect data relating to a student’s interests (gathered from search history, applications and communications), location, surroundings and proximity to others (GPS, Bluetooth). However, what has been less explored is the opportunity for a mobile device to map the movements and activities of a student from moment to moment and over time. This longitudinal data provides a holistic profile of a student, their state and surroundings. Analysing this data may allow us to identify patterns that reveal a student’s learning processes: when and where they work best, and for how long. By revealing a student’s state and surroundings outside of school hours, this longitudinal data may also highlight opportunities to transform a student’s everyday world into an inventory for learning, punctuating their surroundings with learning recommendations. This would in turn lead to new ways to acknowledge, validate and foster informal learning, making it legitimate within a formal curriculum.
Abstract:
The dynamic interaction between building systems and the external climate is extremely complex, involving a large number of difficult-to-predict variables. In order to study the impact of climate change on the built environment, the use of building simulation techniques together with forecast weather data is often necessary. Since most building simulation programs require hourly meteorological input data for their thermal comfort and energy evaluation, the provision of suitable weather data becomes critical. In this paper, the methods used to prepare future weather data for the study of the impact of climate change are reviewed. The advantages and disadvantages of each method are discussed, and the inherent relationship between these methods is illustrated. Based on these discussions and an analysis of Australian historic climatic data, an effective framework and procedure to generate future hourly weather data is presented. It is shown that this method is not only able to deal with different levels of available information regarding climate change, but can also retain the key characteristics of “typical”-year weather data for a desired period.
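One commonly used method in this family is "morphing": shifting a historic hourly series by a projected change in the monthly mean and stretching departures from that mean. The sketch below is a generic illustration with made-up numbers, not the specific framework proposed in the paper.

```python
def morph_month(hourly_temps, delta_mean, stretch):
    """Morph an observed hourly temperature series for one month:
    shift the monthly mean by delta_mean (projected warming) and scale
    departures from the mean by stretch (change in diurnal/seasonal swing).
    All parameters here are illustrative, not real climate projections."""
    mean = sum(hourly_temps) / len(hourly_temps)
    return [mean + delta_mean + stretch * (t - mean) for t in hourly_temps]

observed = [18.0, 22.0, 26.0, 22.0]   # a toy "day" of hourly readings
future = morph_month(observed, delta_mean=1.5, stretch=1.1)
print([round(t, 2) for t in future])  # [19.1, 23.5, 27.9, 23.5]
```

The shift raises the mean while the stretch widens the swing around it, which is why morphing can preserve the hour-to-hour character of a "typical" year while imposing projected monthly changes.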
Abstract:
We estimate the cost of droughts by matching rainfall data with individual life satisfaction. Our context is Australia over the period 2001 to 2004, which included a particularly severe drought. Using fixed-effect models, we find that a drought in spring has a detrimental effect on life satisfaction equivalent to an annual reduction in income of A$18,000. This effect, however, is only found for individuals living in rural areas. Using our estimates, we calculate that the predicted doubling of the frequency of spring droughts will lead to the equivalent loss in life satisfaction of just over 1% of GDP annually.
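The fixed-effects ("within") estimator used in such panel analyses can be illustrated by demeaning each individual's outcome and regressor before pooling. The data below are made up, and the single-regressor form is a simplification of the paper's specification.

```python
def within_estimator(panel):
    """Fixed-effects slope for one regressor.
    panel: dict mapping individual id -> list of (x, y) observations.
    Demeaning within each individual removes the individual intercepts."""
    num = den = 0.0
    for obs in panel.values():
        xbar = sum(x for x, _ in obs) / len(obs)
        ybar = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            num += (x - xbar) * (y - ybar)
            den += (x - xbar) ** 2
    return num / den

# Toy panel: y = alpha_i + 2*x, with very different individual intercepts.
panel = {
    "A": [(1, 12), (2, 14), (3, 16)],   # alpha_A = 10
    "B": [(1, 52), (2, 54), (4, 58)],   # alpha_B = 50
}
print(within_estimator(panel))  # recovers the common slope of 2
```

A pooled OLS on the raw data would be distorted by the intercept differences; the within transformation is what lets the paper attribute life-satisfaction changes to rainfall rather than to fixed individual traits.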
Abstract:
Patients with chest discomfort or other symptoms suggestive of acute coronary syndrome (ACS) are one of the most common categories seen in many Emergency Departments (EDs). While the recognition of patients at high-risk of ACS has improved steadily, identifying the majority of chest pain presentations who fall into the low-risk group remains a challenge. Research in this area needs to be transparent, robust, applicable to all hospitals from large tertiary centres to rural and remote sites, and to allow direct comparison between different studies with minimum patient spectrum bias. A standardised approach to the research framework using a common language for data definitions must be adopted to achieve this. The aim was to create a common framework for a standardised data definitions set that would allow maximum value when extrapolating research findings both within Australasian ED practice, and across similar populations worldwide. Therefore a comprehensive data definitions set for the investigation of non-traumatic chest pain patients with possible ACS was developed, specifically for use in the ED setting. This standardised data definitions set will facilitate ‘knowledge translation’ by allowing extrapolation of useful findings into the real-life practice of emergency medicine.
Abstract:
Seasonal patterns have been found in a remarkable range of health conditions, including birth defects, respiratory infections and cardiovascular disease. Accurately estimating the size and timing of seasonal peaks in disease incidence is an aid to understanding the causes and possibly to developing interventions. With global warming increasing the intensity of seasonal weather patterns around the world, a review of the methods for estimating seasonal effects on health is timely. This is the first book on statistical methods for seasonal data written for a health audience. It describes methods for a range of outcomes (including continuous, count and binomial data) and demonstrates appropriate techniques for summarising and modelling these data. It has a practical focus and uses interesting examples to motivate and illustrate the methods. The statistical procedures and example data sets are available in an R package called ‘season’. Adrian Barnett is a senior research fellow at Queensland University of Technology, Australia. Annette Dobson is a Professor of Biostatistics at The University of Queensland, Australia. Both are experienced medical statisticians with a commitment to statistical education and have previously collaborated on research into the methodological development and application of biostatistics, especially to time series data. Among other projects, they worked together on revising the well-known textbook "An Introduction to Generalized Linear Models," third edition, Chapman Hall/CRC, 2008. In their new book they share their knowledge of statistical methods for examining seasonal patterns in health.
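A standard tool for estimating the size and timing of a seasonal peak is the cosinor model, y_t = a + b·cos(ωt) + c·sin(ωt) with ω = 2π/12 for monthly data. The sketch below exploits the discrete orthogonality of the sine and cosine terms over a complete, equally spaced year; it is illustrative only and is not code from the 'season' package.

```python
import math

def cosinor_fit(monthly):
    """Fit y_t = a + b*cos(w t) + c*sin(w t) to one cycle of monthly values.
    With a full, equally spaced year the basis is orthogonal, so the
    least-squares coefficients reduce to simple projections."""
    n = len(monthly)
    w = 2 * math.pi / n
    a = sum(monthly) / n
    b = 2 / n * sum(y * math.cos(w * t) for t, y in enumerate(monthly))
    c = 2 / n * sum(y * math.sin(w * t) for t, y in enumerate(monthly))
    amplitude = math.hypot(b, c)
    peak_month = (math.atan2(c, b) / w) % n   # month index of the seasonal peak
    return a, amplitude, peak_month

# Synthetic counts: mean 100, seasonal amplitude 20, peak in month 2 (March).
data = [100 + 20 * math.cos(2 * math.pi * (t - 2) / 12) for t in range(12)]
a, amp, peak = cosinor_fit(data)
print(round(a, 3), round(amp, 3), round(peak, 3))
```

The amplitude answers "how big is the seasonal effect?" and the peak month answers "when does it occur?", the two quantities the review highlights.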
Abstract:
Aims: To describe a local data linkage project to match hospital data with the Australian Institute of Health and Welfare (AIHW) National Death Index (NDI) to assess long-term outcomes of intensive care unit patients. Methods: Data were obtained from hospital intensive care and cardiac surgery databases on all patients aged 18 years and over admitted to either of two intensive care units at a tertiary-referral hospital between 1 January 1994 and 31 December 2005. Date of death was obtained from the AIHW NDI by probabilistic software matching, in addition to manual checking through hospital databases and other sources. Survival was calculated from time of ICU admission, with a censoring date of 14 February 2007. Data for patients with multiple hospital admissions requiring intensive care were analysed only from the first admission. Summary and descriptive statistics were used for preliminary data analysis. Kaplan-Meier survival analysis was used to analyse factors determining long-term survival. Results: During the study period, 21 415 unique patients had 22 552 hospital admissions that included an ICU admission; 19 058 surgical procedures were performed with a total of 20 092 ICU admissions. There were 4936 deaths. Median follow-up was 6.2 years, totalling 134 203 patient years. The casemix was predominantly cardiac surgery (80%), followed by cardiac medical (6%), and other medical (4%). The unadjusted survival at 1, 5 and 10 years was 97%, 84% and 70%, respectively. The 1-year survival ranged from 97% for cardiac surgery to 36% for cardiac arrest. An APACHE II score was available for 16 877 patients. In those discharged alive from hospital, the 1, 5 and 10-year survival varied with discharge location. Conclusions: ICU-based linkage projects are feasible for determining long-term outcomes of ICU patients.
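The Kaplan-Meier analysis mentioned in the Methods builds the survival curve by multiplying, at each observed death time, the fraction of the at-risk group that survives it, with censored patients leaving the risk set without triggering a step. A minimal sketch on toy data (not the study's cohort):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate. times: follow-up durations; events: 1 = died,
    0 = censored. Returns [(t, S(t))] at each distinct death time."""
    at_risk = len(times)
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        deaths = sum(1 for ti, e in zip(times, events) if ti == t and e == 1)
        if deaths:
            surv *= 1 - deaths / at_risk          # step down at a death time
            curve.append((t, surv))
        at_risk -= sum(1 for ti in times if ti == t)  # leave the risk set
    return curve

# Toy cohort: 5 patients, deaths at years 1 and 3, censoring at 2, 4 and 5.
times  = [1, 2, 3, 4, 5]
events = [1, 0, 1, 0, 0]
print(kaplan_meier(times, events))  # steps at the two death times
```

Handling censoring this way is what lets the study report 1-, 5- and 10-year survival despite patients entering the cohort anywhere between 1994 and 2005.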
Abstract:
The recently proposed data-driven background dataset refinement technique provides a means of selecting an informative background for support vector machine (SVM)-based speaker verification systems. This paper investigates the characteristics of the impostor examples in such highly informative background datasets. Data-driven dataset refinement individually evaluates the suitability of candidate impostor examples for the SVM background prior to selecting the highest-ranking examples as a refined background dataset. Further, the characteristics of the refined dataset were analysed to investigate the desired traits of an informative SVM background. The most informative examples of the refined dataset were found to consist of large amounts of active speech and distinctive language characteristics. The data-driven refinement technique was shown to filter the set of candidate impostor examples to produce a more disperse representation of the impostor population in the SVM kernel space, thereby reducing the number of redundant and less-informative examples in the background dataset. Furthermore, data-driven refinement was shown to provide performance gains when applied to the difficult task of refining a small candidate dataset that was mismatched to the evaluation conditions.
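The select-by-rank step at the heart of such refinement can be sketched generically: score every candidate impostor example with a suitability metric and keep the top-ranked ones. The cosine-similarity metric, development set and two-dimensional "features" below are all invented for illustration; they are not the paper's actual criterion or kernel space.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def refine_background(candidates, dev_set, keep):
    """Rank candidate impostor examples by a (hypothetical) suitability score,
    here the mean cosine similarity to a development set, and keep the best."""
    scored = sorted(
        candidates,
        key=lambda c: sum(cosine(c, d) for d in dev_set) / len(dev_set),
        reverse=True,
    )
    return scored[:keep]

dev_set = [(1.0, 0.1), (0.9, 0.2)]
candidates = [(1.0, 0.0), (0.0, 1.0), (0.7, 0.7), (-1.0, 0.0)]
print(refine_background(candidates, dev_set, keep=2))
```

Whatever metric is plugged in, the scheme is the same: evaluate each candidate independently, rank, and truncate to a refined background of fixed size.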
Abstract:
This study assesses the recently proposed data-driven background dataset refinement technique for speaker verification using alternate SVM feature sets to the GMM supervector features for which it was originally designed. The performance improvements brought about in each trialled SVM configuration demonstrate the versatility of background dataset refinement. This work also extends the originally proposed technique to exploit support vector coefficients as an impostor suitability metric in the data-driven selection process. Using support vector coefficients improved the performance of the refined datasets in the evaluation of unseen data. Further, attempts are made to exploit the differences in impostor example suitability measures from varying feature spaces to provide added robustness.