288 results for Statistical tools
Abstract:
This work examined a new method of detecting small water-filled cracks in underground cable insulation ('water trees') using data from commercially available non-destructive testing equipment. A testing facility was constructed and a computer simulation of the insulation designed in order to test the proposed ageing factor: the degree of non-linearity. This was a large industry-backed project involving an ARC Linkage grant, Ergon Energy and the University of Queensland, as well as the Queensland University of Technology.
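The abstract does not specify how the degree of non-linearity was quantified. One common proxy in insulation diagnostics is the harmonic distortion of the measured response, since a non-linear (water-treed) dielectric driven by a sinusoidal voltage generates harmonics of the excitation frequency. The following is a minimal sketch of that idea only; the sampling rate, excitation frequency and distortion model are all hypothetical, not taken from the study:

```python
import numpy as np

def total_harmonic_distortion(signal, fs, f0, n_harmonics=5):
    """Estimate THD: ratio of harmonic magnitudes to the fundamental.

    A crude proxy for a 'degree of non-linearity': a purely linear
    sample driven at f0 returns (near) zero, while a non-linear one
    puts energy into harmonics of f0.
    """
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

    def mag_at(f):
        # Magnitude of the spectral bin closest to frequency f
        return spectrum[np.argmin(np.abs(freqs - f))]

    fundamental = mag_at(f0)
    harmonics = [mag_at(k * f0) for k in range(2, 2 + n_harmonics)]
    return np.sqrt(sum(h ** 2 for h in harmonics)) / fundamental

# A sine distorted by a cubic non-linearity has higher THD than a pure one.
fs, f0 = 10_000, 50                       # hypothetical test parameters
t = np.arange(0, 1.0, 1.0 / fs)
pure = np.sin(2 * np.pi * f0 * t)
distorted = pure + 0.2 * pure ** 3        # sin^3 injects a 3rd harmonic
```

The cubic term is just a convenient stand-in for whatever non-linear conduction mechanism the real equipment measures.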
Abstract:
Background Foot dorsiflexion plays an essential role in controlling both balance and human gait. Electromyography (EMG) and sonomyography (SMG) can provide information on several aspects of muscle function. The aim was to establish the relationship between EMG and SMG variables during isotonic contractions of the foot dorsiflexors. Methods Twenty-seven healthy young adults performed the foot dorsiflexion test on a device designed ad hoc. EMG variables were maximum peak and area under the curve. Muscle architecture variables were muscle thickness and pennation angle. Descriptive statistical analysis, inferential analysis and a multivariate linear regression model were carried out. Statistical significance was set at p < 0.05. Results The correlation between EMG and SMG variables was r = 0.462 (p < 0.05). The linear regression model for the dependent variable “peak normalized tibialis anterior (TA)” from the independent variables “pennation angle and thickness” was significant (p = 0.002), with an explained variance of R2 = 0.693 and SEE = 0.16. Conclusions There is a significant relationship and degree of contribution between EMG and SMG variables during isotonic contractions of the TA muscle. Our results suggest that EMG and SMG can be feasible tools for monitoring and assessment of foot dorsiflexors. TA muscle parameterization and assessment are relevant because increased strength accelerates the recovery of lower limb injuries.
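As a rough illustration of the kind of analysis described (a multivariate linear regression predicting a normalized EMG peak from two SMG predictors), here is a sketch on synthetic data. The predictor ranges, coefficients and noise level are invented for illustration and do not reproduce the study's values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical SMG predictors: pennation angle (deg) and muscle thickness (mm)
n = 27  # matches the study's sample size
angle = rng.uniform(8, 14, n)
thickness = rng.uniform(20, 30, n)
# Hypothetical response: normalized TA peak, linear in the predictors plus noise
peak = 0.02 * angle + 0.015 * thickness + rng.normal(0, 0.05, n)

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), angle, thickness])
coef, *_ = np.linalg.lstsq(X, peak, rcond=None)

# Explained variance (R^2) of the fitted model
pred = X @ coef
ss_res = np.sum((peak - pred) ** 2)
ss_tot = np.sum((peak - peak.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

With real data one would also report coefficient p-values and the standard error of the estimate (SEE), as the abstract does.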
Abstract:
Species distribution models (SDMs) are considered to exemplify pattern- rather than process-based models of a species' response to its environment. Hence, when used to map species distribution, the purpose of SDMs can be viewed as interpolation, since species response is measured at a few sites in the study region and the aim is to interpolate species response at intermediate sites. Increasingly, however, SDMs are also being used to extrapolate species-environment relationships beyond the limits of the study region as represented by the training data. Regardless of whether SDMs are to be used for interpolation or extrapolation, the debate over how to implement them focusses on evaluating the quality of the SDM, both ecologically and mathematically. This paper proposes a framework that includes useful tools previously employed to address uncertainty in habitat modelling. Together with existing frameworks for addressing uncertainty in modelling more generally, we outline how these existing tools help inform the development of a broader framework for addressing uncertainty specifically when building habitat models. We focus on extrapolation rather than interpolation, where the emphasis on predictive performance is diluted by concerns for robustness and ecological relevance. We are cognisant of the dangers of excessively propagating uncertainty. Thus, although the framework provides a smorgasbord of approaches, it is intended that the exact menu selected for a particular application be small in size and target the most important sources of uncertainty. We conclude with some guidance on a strategic approach to identifying these important sources of uncertainty. Whilst various aspects of uncertainty in SDMs have previously been addressed, either as the main aim of a study or as a necessary element of constructing SDMs, this is the first paper to provide a more holistic view.
Abstract:
We defined a new statistical fluid registration method with Lagrangian mechanics. Although several authors have suggested that empirical statistics on brain variation should be incorporated into the registration problem, few algorithms have included this information; most instead use regularizers that guarantee diffeomorphic mappings. Here we combine the advantages of a large-deformation fluid matching approach with empirical statistics on population variability in anatomy. We reformulated the Riemannian fluid algorithm developed in [4], using a Lagrangian framework to incorporate 0th- and 1st-order statistics in the regularization process. 92 2D midline corpus callosum traces from a twin MRI database were fluidly registered using the non-statistical version of the algorithm (algorithm 0), giving initial vector fields and deformation tensors. Covariance matrices were computed for both distributions and incorporated either separately (algorithm 1 and algorithm 2) or together (algorithm 3) in the registration. We computed heritability maps and two vector- and tensor-based distances to compare the power and robustness of the algorithms.
Abstract:
In this paper, we used a nonconservative Lagrangian mechanics approach to formulate a new statistical algorithm for fluid registration of 3-D brain images. This algorithm is named SAFIRA, an acronym for statistically-assisted fluid image registration algorithm. A nonstatistical version of this algorithm was implemented, where the deformation was regularized by penalizing deviations from a zero rate of strain. In the statistical version, the terms regularizing the deformation included the covariance of the deformation matrices (Σ) and of the vector fields (q). Here, we used a Lagrangian framework to reformulate this algorithm, showing that the regularizing terms essentially allow nonconservative work to occur during the flow. Given 3-D brain images from a group of subjects, vector fields and their corresponding deformation matrices are computed in a first round of registrations using the nonstatistical implementation. Covariance matrices for both the deformation matrices and the vector fields are then obtained and incorporated (separately or jointly) in the nonconservative terms, creating four versions of SAFIRA. We evaluated and compared our algorithms' performance on 92 3-D brain scans from healthy monozygotic and dizygotic twins; 2-D validations are also shown for corpus callosum shapes delineated at midline in the same subjects. After preliminary tests to demonstrate each method, we compared their detection power using tensor-based morphometry (TBM), a technique to analyze local volumetric differences in brain structure. We compared the accuracy of each algorithm variant using various statistical metrics derived from the images and deformation fields. All these tests were also run with a traditional fluid method, which has been widely used in TBM studies. The versions incorporating vector-based empirical statistics on brain variation were consistently more accurate than their counterparts when used for automated volumetric quantification in new brain images.
This suggests the advantages of this approach for large-scale neuroimaging studies.
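The key idea in the two registration abstracts above, weighting the regularizer by empirical covariance matrices so that population statistics shape the penalty on the flow, can be sketched as a Mahalanobis-type penalty on the displacement field. This is an illustrative simplification under invented data, not the SAFIRA implementation:

```python
import numpy as np

def statistical_penalty(v, cov):
    """Mahalanobis-type penalty on a displacement vector field.

    v   : (N, d) displacement vectors at N points.
    cov : (d, d) empirical covariance of displacements from a first,
          non-statistical round of registrations (the '0th-order'
          statistics in the abstracts).

    Directions of high population variability are penalized less, so
    the flow is allowed to deform more along them.
    """
    prec = np.linalg.inv(cov)  # precision matrix
    # sum_n v_n^T * prec * v_n, written as a single contraction
    return float(np.einsum("ni,ij,nj->", v, prec, v))

# Toy check with a hypothetical anisotropic 2-D covariance: a unit
# displacement along the high-variance axis costs less than the same
# magnitude along the low-variance axis.
cov = np.array([[4.0, 0.0],
                [0.0, 0.25]])
along_high = np.array([[1.0, 0.0]])
along_low = np.array([[0.0, 1.0]])
```

In the full algorithms this kind of term enters the Lagrangian as nonconservative work; the sketch only shows why covariance weighting makes population-typical deformations cheaper.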
Abstract:
Background: Magnetic resonance diffusion tensor imaging (DTI) shows promise in the early detection of microstructural pathophysiological changes in the brain. Objectives: To measure microstructural differences in the brains of participants with amnestic mild cognitive impairment (MCI) compared with an age-matched control group using an optimised DTI technique with fully automated image analysis tools and to investigate the correlation between diffusivity measurements and neuropsychological performance scores across groups. Methods: 34 participants (17 participants with MCI, 17 healthy elderly adults) underwent magnetic resonance imaging (MRI)-based DTI. To control for the effects of anatomical variation, diffusion images of all participants were registered to standard anatomical space. Statistically significant differences in diffusivity measurements between the two groups were determined on a pixel-by-pixel basis using Gaussian random field theory. Results: Significantly raised mean diffusivity measurements (p<0.001) were observed in the left and right entorhinal cortices (BA28), posterior occipital-parietal cortex (BA18 and BA19), right parietal supramarginal gyrus (BA40) and right frontal precentral gyri (BA4 and BA6) in participants with MCI. With respect to fractional anisotropy, participants with MCI had significantly reduced measurements (p<0.001) in the limbic parahippocampal subgyral white matter, right thalamus and left posterior cingulate. Pearson's correlation coefficients calculated across all participants showed significant correlations between neuropsychological assessment scores and regional measurements of mean diffusivity and fractional anisotropy. Conclusions: DTI-based diffusivity measures may offer a sensitive method of detecting subtle microstructural brain changes associated with preclinical Alzheimer's disease.
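The pixel-by-pixel group comparison described here can be sketched as a voxelwise two-sample t test on registered diffusivity maps. The abstract additionally controls for multiple comparisons with Gaussian random field theory, which this sketch omits; the group sizes match the study but all data below are synthetic:

```python
import numpy as np

def voxelwise_t(group_a, group_b):
    """Two-sample t statistic at every voxel (equal-variance form).

    group_a, group_b : arrays of shape (subjects, *voxel_dims), e.g.
    mean-diffusivity maps already registered to standard space.
    """
    na, nb = len(group_a), len(group_b)
    ma, mb = group_a.mean(axis=0), group_b.mean(axis=0)
    va = group_a.var(axis=0, ddof=1)
    vb = group_b.var(axis=0, ddof=1)
    # Pooled variance across the two groups
    sp = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / np.sqrt(sp * (1 / na + 1 / nb))

# Hypothetical data: 17 subjects per group on a tiny 4x4 'image', with
# raised diffusivity (in mm^2/s) planted at one voxel of the MCI group.
rng = np.random.default_rng(1)
mci = rng.normal(0.8e-3, 0.05e-3, size=(17, 4, 4))
ctl = rng.normal(0.8e-3, 0.05e-3, size=(17, 4, 4))
mci[:, 0, 0] += 0.2e-3
t_map = voxelwise_t(mci, ctl)
```

In a real analysis the raw t map would then be thresholded with a correction for the many correlated voxel tests, which is exactly what the random field theory step provides.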
Abstract:
There is a major effort in medical imaging to develop algorithms to extract information from DTI and HARDI, which provide detailed information on brain integrity and connectivity. Although these images have recently advanced to provide extraordinarily high angular resolution and spatial detail, including an entire manifold of information at each point in the 3D volume, there has been no readily available means to view the results. This impedes developments in HARDI research, which need some method to check the plausibility and validity of image processing operations on HARDI data, or to appreciate data features or invariants that might serve as a basis for new directions in image segmentation, registration, and statistics. We present a set of tools to provide interactive display of HARDI data, including both a local rendering application and an off-screen renderer that works with a web-based viewer. Visualizations are presented after registration and averaging of HARDI data from 90 human subjects, revealing important details that could not be appreciated with conventional display of scalar images.
Abstract:
The Australian housing sector contributes about a fifth of national greenhouse gas (GHG) emissions. GHG emissions contribute to climate change, which leads to an increase in the occurrence or intensity of natural disasters and damage to houses. To ensure housing performance in the face of climate change, various rating tools for residential property have been introduced in different countries. The aim of this paper is to present a preliminary comparison between international and Australian rating tools in terms of purpose, use and sustainability elements for residential property. The methodology is to review, classify, compare and identify similarities and differences between rating tools. Two international tools, Building Research Establishment Environmental Assessment Methodology (BREEAM) (UK) and Leadership in Energy and Environmental Design for Homes (LEED-Homes) (USA), are compared to two Australian tools, Green Star – Multi Unit Residential v1 and EnviroDevelopment. All four rating tools include management, energy, water and material aspects. The findings reveal thirteen elements that fall under three categories: spatial planning, occupants’ health and comfort, and environmental conditions. The variations between tools may result from differences in local prevailing climate. Not all sustainability elements covered by the international rating tools are included in the Australian ones. The voluntary nature of the tools implies that they are not broadly applied in their respective markets and that there is a policy implementation gap. A comprehensive rating tool could be developed in Australia to promote sustainable housing and lessen the confusion around it, which would in turn assist in improving the supply and demand of sustainable housing.
Abstract:
The community is the basic unit of urban development, and appropriate assessment tools are needed for communities to evaluate and facilitate decision making concerning sustainable community development and to reduce the detrimental effects of urban community actions on the environment. Existing research into sustainable community rating tools focuses primarily on those that are internationally recognized, describing their advantages and future challenges. However, the differences between rating tools arising from different regional conditions, situations and characteristics have yet to be addressed. To address this, this paper examines three sustainable community rating tools in Australia, namely Green Star-Communities PILOT, EnviroDevelopment and the VicUrban Sustainability Charter (Master Planned Community Assessment Tool). To identify their similarities, differences and advantages, these tools are compared in terms of sustainability coverage, prerequisites, adaptation to locality, scoring and weighting, participation, presentation of results, and application process. The results provide the stakeholders of sustainable community development projects with a better understanding of the rating tools available in Australia and assist with evaluation and decision making.
Abstract:
This thesis presents the results of a study into ways that technology can be appropriated and designed to support urban rail commuters in their daily journeys. The study evaluated a mobile application prototype deployed along the Brisbane passenger rail network, designed to support social interaction between passengers sharing the same trains. This thesis provides a step forward in showing the relevance of creating solutions that contribute to a more enjoyable and attractive public transport service.
Abstract:
This chapter addresses opportunities for problem posing in developing young children’s statistical literacy, with a focus on student-directed investigations. Although the notion of problem posing has broadened in recent years, there nevertheless remains limited research on how problem posing can be integrated within the regular mathematics curriculum, especially in the areas of statistics and probability. The chapter first reviews briefly aspects of problem posing that have featured in the literature over the years. Consideration is next given to the importance of developing children’s statistical literacy, in which problem posing is an inherent feature. Some findings from a school playground investigation conducted in four fourth-grade classes illustrate the different ways in which children posed investigative questions, how they made predictions about their outcomes and compared these with their findings, and the ways in which they chose to represent their findings.
Abstract:
As statistical education becomes more firmly embedded in the school curriculum and its value across the curriculum is recognised, attention moves from knowing procedures, such as calculating a mean or drawing a graph, to understanding the purpose of a statistical investigation in decision making in many disciplines. As students learn to complete the stages of an investigation, the question of meaningful assessment of the process arises. This paper considers models for carrying out a statistical inquiry and, based on a four-phase model, creates a developmental sequence that can be used for the assessment of outcomes from each of the four phases as well as for the complete inquiry. The developmental sequence is based on the SOLO model, focussing on the "observed" outcomes during the inquiry process.
Abstract:
Bayesian networks (BNs) are tools for representing expert knowledge or evidence. They are especially useful for synthesising evidence or belief concerning a complex intervention, assessing the sensitivity of outcomes to different situations or contextual frameworks and framing decision problems that involve alternative types of intervention. Bayesian networks are useful extensions to logic maps when initiating a review or to facilitate synthesis and bridge the gap between evidence acquisition and decision-making. Formal elicitation techniques allow development of BNs on the basis of expert opinion. Such applications are useful alternatives to ‘empty’ reviews, which identify knowledge gaps but fail to support decision-making. Where review evidence exists, it can inform the development of a BN. We illustrate the construction of a BN using a motivating example that demonstrates how BNs can ensure coherence, transparently structure the problem addressed by a complex intervention and assess sensitivity to context, all of which are critical components of robust reviews of complex interventions. We suggest that BNs be used routinely to synthesise evidence in reviews of complex interventions, and in empty reviews where decisions must be made despite poor evidence.
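As a toy illustration of how a BN can synthesise belief about a complex intervention, consider a hypothetical three-node chain (intervention → adherence → outcome) evaluated by direct enumeration; the structure and every probability below are invented for illustration, not taken from the paper's motivating example:

```python
# Conditional probability tables, e.g. elicited from experts:
# P(adhere | intervention offered?) and P(good outcome | adhered?).
p_adherence = {True: 0.7, False: 0.3}
p_outcome = {True: 0.6, False: 0.2}

def p_good_outcome(intervention: bool) -> float:
    """Marginalize over adherence: P(O | I) = sum_A P(O | A) * P(A | I)."""
    pa = p_adherence[intervention]
    return p_outcome[True] * pa + p_outcome[False] * (1 - pa)

# The network lets us compare scenarios coherently:
# P(good outcome | intervention) = 0.6*0.7 + 0.2*0.3 = 0.48
# P(good outcome | no intervention) = 0.6*0.3 + 0.2*0.7 = 0.32
```

Real applications use dedicated BN software and formally elicited distributions, but the arithmetic above is the core operation: beliefs propagate through the structure, and sensitivity to context can be probed by varying the tables.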
Abstract:
Educating responsive graduates. Graduate competencies include reliability, communication skills and the ability to work in teams. Students using collaborative technologies adapt to a new working environment, working in teams and using collaborative technologies for learning. Collaborative technologies were used not simply for delivery of learning but innovatively, to supplement and enrich research-based learning, providing a space for active engagement and interaction with resources and the team. This promotes the development of responsive ‘intellectual producers’, able to effectively communicate, collaborate and negotiate in complex work environments. Exploiting technologies. Students use ‘new’ technologies to work collaboratively, allowing them to experience the reality of distributed workplaces incorporating both flexibility and ‘real-time’ responsiveness. Students are responsible and accountable for individual and group work contributions in a highly transparent and readily accessible workspace. This experience provides a model of an effective learning tool. Navigating uncertainty and complexity. Collaborative technologies allow students to develop critical thinking and reflective skills as they develop a group product. In this forum, students build resilience by taking ownership of and managing group work, and by navigating the uncertainties and complexities of group dynamics as they constructively and professionally engage in team dialogue and learn to focus on the goal of the team task.
Abstract:
This article examines a social media assignment used to teach and practice statistical literacy with over 400 students each semester in large-lecture traditional, fully online, and flipped sections of an introductory-level statistics course. Following the social media assignment, students completed a survey on how they approached the assignment. Drawing from the authors’ experiences with the project and the survey results, this article offers recommendations for developing social media assignments in large courses that focus on the interplay between the social media tool and the implications of assignment prompts.