183 results for STATISTICAL-MECHANICS
Abstract:
This work examined a new method of detecting small water-filled cracks in underground insulation ('water trees') using data from commercially available non-destructive testing equipment. A testing facility was constructed and a computer simulation of the insulation was designed in order to test the proposed ageing factor: the degree of non-linearity. This was a large industry-backed project involving an ARC Linkage grant, Ergon Energy and the University of Queensland, as well as the Queensland University of Technology.
Abstract:
In this paper, we develop and validate a new Statistically Assisted Fluid Registration Algorithm (SAFIRA) for brain images. A non-statistical version of this algorithm was first implemented in [2] and reformulated using Lagrangian mechanics in [3]. Here we extend this algorithm to 3D: given 3D brain images from a population, vector fields and their corresponding deformation matrices are computed in a first round of registrations using the non-statistical implementation. Covariance matrices for both the deformation matrices and the vector fields are then obtained and incorporated (separately or jointly) into the regularizing (i.e., the non-conservative Lagrangian) terms, creating four versions of the algorithm. We evaluated the accuracy of each algorithm variant using the manually labeled LPBA40 dataset, which provides ground-truth anatomical segmentations. We also compared the power of the different algorithms using tensor-based morphometry (a technique to analyze local volumetric differences in brain structure) applied to 46 3D brain scans from healthy monozygotic twins.
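A minimal sketch (not the authors' SAFIRA implementation) of the statistical ingredient described above: per-voxel covariance matrices are estimated from a first round of registrations, inverted, and used to weight a regularisation penalty on the displacement field. Array shapes and function names here are illustrative assumptions.

```python
import numpy as np

def voxelwise_covariance(fields):
    """fields: (n_subjects, n_voxels, 3) displacement vectors from a first round of registrations."""
    centred = fields - fields.mean(axis=0)
    # 3x3 covariance across subjects at every voxel
    return np.einsum('svi,svj->vij', centred, centred) / (fields.shape[0] - 1)

def statistical_penalty(u, cov, eps=1e-6):
    """Mahalanobis-style regulariser: sum over voxels of u_v^T (Sigma_v + eps*I)^{-1} u_v."""
    inv = np.linalg.inv(cov + eps * np.eye(3))        # inverts each 3x3 block
    return float(np.einsum('vi,vij,vj->', u, inv, u))
```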
Abstract:
This chapter addresses opportunities for problem posing in developing young children’s statistical literacy, with a focus on student-directed investigations. Although the notion of problem posing has broadened in recent years, there nevertheless remains limited research on how problem posing can be integrated within the regular mathematics curriculum, especially in the areas of statistics and probability. The chapter first reviews briefly aspects of problem posing that have featured in the literature over the years. Consideration is next given to the importance of developing children’s statistical literacy in which problem posing is an inherent feature. Some findings from a school playground investigation conducted in four, fourth-grade classes illustrate the different ways in which children posed investigative questions, how they made predictions about their outcomes and compared these with their findings, and the ways in which they chose to represent their findings.
Abstract:
As statistical education becomes more firmly embedded in the school curriculum and its value across the curriculum is recognised, attention moves from knowing procedures, such as calculating a mean or drawing a graph, to understanding the purpose of a statistical investigation in decision making in many disciplines. As students learn to complete the stages of an investigation, the question of meaningful assessment of the process arises. This paper considers models for carrying out a statistical inquiry and, based on a four-phase model, creates a developmental sequence that can be used for the assessment of outcomes from each of the four phases as well as for the complete inquiry. The developmental sequence is based on the SOLO model, focussing on the "observed" outcomes during the inquiry process.
Abstract:
This article examines a social media assignment used to teach and practice statistical literacy with over 400 students each semester in large-lecture traditional, fully online, and flipped sections of an introductory-level statistics course. Following the social media assignment, students completed a survey on how they approached the assignment. Drawing from the authors’ experiences with the project and the survey results, this article offers recommendations for developing social media assignments in large courses that focus on the interplay between the social media tool and the implications of assignment prompts.
Abstract:
Owing to spider silk's remarkable mechanical and biological properties, there is considerable interest in understanding, and replicating, its stress-processing mechanisms and structure-function relationships. Here, we investigate the role of water in the nanoscale mechanics of the different regions in the spider silk fibre, and their relative contributions to stress processing. We propose that the inner core region, rich in spidroin II, retains water due to its inherent disorder, thereby providing a mechanism to dissipate energy as it breaks a sacrificial amide-water bond and gains order under strain, forming a stronger amide-amide bond. The spidroin I-rich outer core is more ordered under ambient conditions and is inherently stiffer and stronger, yet does not on its own provide high toughness. The markedly different interactions of the two proteins with water, and their distribution across the fibre, produce a stiffness differential and provide a balance between stiffness, strength and toughness under ambient conditions. Under wet conditions, this balance is destroyed as the stiff outer core material reverts to the behaviour of the inner core.
Abstract:
This paper presents a novel three-dimensional hybrid smoothed finite element method (H-SFEM) for solid mechanics problems. In 3D H-SFEM, the strain field is assumed to be a weighted average of the compatible strains from the finite element method (FEM) and the smoothed strains from the node-based smoothed FEM, controlled by a parameter α. By adjusting α, upper- and lower-bound solutions in the strain energy norm and in the eigenfrequencies can always be obtained. The optimized α value in 3D H-SFEM using a tetrahedral mesh gives a close-to-exact stiffness of the continuous system and produces ultra-accurate solutions in terms of displacement, strain energy and eigenfrequencies in both linear and nonlinear problems. A novel domain-based selective scheme is also proposed, leading to a combined selective H-SFEM model that is immune to volumetric locking and hence works well for nearly incompressible materials. With these features, the proposed 3D H-SFEM has great potential for application to a wide range of solid mechanics problems.
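As a hedged illustration of the strain assumption described above (the notation and the role assigned to α are ours and may differ from the paper's convention), the hybrid strain can be written as a convex combination:

```latex
\tilde{\boldsymbol{\varepsilon}}
  = \alpha\,\boldsymbol{\varepsilon}^{\mathrm{FEM}}
  + (1-\alpha)\,\boldsymbol{\varepsilon}^{\mathrm{NS\text{-}FEM}},
  \qquad 0 \le \alpha \le 1 .
```

With this form, α = 1 recovers the comparatively stiff compatible FEM model and α = 0 the softer node-based smoothed model, which is why sweeping α between the two limits brackets the exact solution with upper and lower bounds in the strain energy norm.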
Abstract:
The export of sediments from coastal catchments can have detrimental impacts on estuaries and nearshore reef ecosystems such as the Great Barrier Reef. Catchment management approaches aimed at reducing sediment loads require monitoring to evaluate their effectiveness in reducing loads over time. However, load estimation is not a trivial task due to the complex behaviour of constituents in natural streams, the variability of water flows and an often limited amount of data. Regression is commonly used for load estimation and provides a fundamental tool for trend estimation by standardising for other time-specific covariates such as flow. This study investigates whether load estimates and the resultant power to detect trends can be enhanced by (i) modelling the error structure so that temporal correlation can be better quantified, (ii) making use of predictive variables, and (iii) identifying an efficient and feasible sampling strategy that may be used to reduce sampling error. To achieve this, we propose a new regression model that includes an innovative compounding-errors model structure and uses two additional predictive variables (average discounted flow and turbidity). By combining this modelling approach with a new, regularly optimised sampling strategy, which adds uniformity to the event sampling strategy, the predictive power was increased to 90%. Using the enhanced regression model proposed here, it was possible to detect a trend of 20% over 20 years. This result is in stark contrast to previous conclusions presented in the literature.
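A minimal sketch of this kind of load-estimation regression, assuming an AR(1) approximation to the temporally correlated errors and illustrative variable names (log_conc, adf, turbidity); it is not the authors' compounding-errors formulation:

```python
import numpy as np
import statsmodels.api as sm

def fit_load_regression(log_conc, log_flow, adf, turbidity):
    """Regress log concentration on flow plus extra predictors with AR(1) errors."""
    X = sm.add_constant(np.column_stack([log_flow, adf, turbidity]))
    model = sm.GLSAR(log_conc, X, rho=1)      # one autoregressive error lag
    return model.iterative_fit(maxiter=10)    # alternate GLS fit and rho update
```

Loads would then be estimated by combining the fitted concentrations with the frequent flow record; the AR(1) error term stands in for the temporal-correlation modelling mentioned above.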
Abstract:
We consider the development of statistical models for prediction of constituent concentrations of riverine pollutants, which is a key step in load estimation from frequent flow-rate data and less frequently collected concentration data. We consider how to capture the impacts of past flow patterns via the average discounted flow (ADF), which discounts the past flux based on the time elapsed: more recent fluxes are given more weight. However, the effectiveness of the ADF depends critically on the choice of the discount factor, which reflects the unknown environmental accumulation process of the concentration compounds. We propose to choose the discount factor by maximizing the adjusted R² value or the Nash-Sutcliffe model efficiency coefficient; the R² values are adjusted to take account of the number of parameters in the model fit. The resulting optimal discount factor can be interpreted as a measure of the constituent exhaustion rate during flood events. To evaluate the performance of the proposed regression estimators, we examine two different sampling scenarios by resampling fortnightly and opportunistically from two real daily datasets, which come from two United States Geological Survey (USGS) gaging stations located in the Des Plaines River and Illinois River basins. The generalized rating-curve approach produces biased estimates of the total sediment loads, ranging from -30% to 83%, whereas the new approaches produce much lower biases, ranging from -24% to 35%. This substantial improvement in the estimates of the total load is due to the fact that the predictability of concentration is greatly improved by the additional predictors.
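A hedged sketch of the average-discounted-flow idea and of selecting the discount factor by adjusted R²; the exponential-weighting form, candidate grid and predictor set are illustrative assumptions rather than the paper's exact definitions:

```python
import numpy as np

def average_discounted_flow(flow, discount):
    """Exponentially discounted average of past flows; recent flows weigh more."""
    adf = np.empty(len(flow))
    adf[0] = flow[0]
    for t in range(1, len(flow)):
        adf[t] = discount * adf[t - 1] + (1 - discount) * flow[t]
    return adf

def choose_discount(flow, log_conc, candidates=np.linspace(0.05, 0.95, 19)):
    """Pick the discount factor that maximises the adjusted R^2 of the regression."""
    best_adj_r2, best_d = -np.inf, None
    for d in candidates:
        X = np.column_stack([np.ones(len(flow)), np.log(flow),
                             average_discounted_flow(flow, d)])
        beta, *_ = np.linalg.lstsq(X, log_conc, rcond=None)
        resid = log_conc - X @ beta
        n, p = X.shape
        r2 = 1.0 - resid.var() / log_conc.var()
        adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p)   # penalise extra parameters
        if adj_r2 > best_adj_r2:
            best_adj_r2, best_d = adj_r2, d
    return best_d
```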
Abstract:
Power calculation and sample size determination are critical in designing environmental monitoring programs. The traditional approach based on comparing mean values may become statistically inappropriate, and even invalid, when substantial proportions of the response values are below the detection limits or otherwise censored, because strong distributional assumptions must be made about the censored observations when implementing the traditional procedures. In this paper, we propose a quantile methodology that is robust to outliers and can also handle data with a substantial proportion of below-detection-limit observations without the need to impute the censored values. As a demonstration, we applied the methods to a nutrient monitoring project that forms part of the Perth Long-Term Ocean Outlet Monitoring Program. In this example, the sample size required by our quantile methodology is, in fact, smaller than that required by the traditional t-test, illustrating the merit of our method.
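A hedged, simulation-based sketch of why a quantile comparison tolerates below-detection-limit values: censored observations can be set to the limit without moving the median as long as fewer than half of the values are censored. The lognormal effect size, detection limit and use of Mood's median test are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np
from scipy import stats

def quantile_power(n, shift=0.4, detection_limit=0.3, n_sim=2000,
                   alpha=0.05, seed=0):
    """Monte-Carlo power of a median comparison with left-censored data."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        reference = rng.lognormal(0.0, 1.0, n)
        impacted = rng.lognormal(shift, 1.0, n)
        # Values below the detection limit are only known to be 'low';
        # replacing them by the limit leaves the sample median unchanged
        # provided fewer than half of the observations are censored.
        reference = np.where(reference < detection_limit, detection_limit, reference)
        impacted = np.where(impacted < detection_limit, detection_limit, impacted)
        _, p_value, _, _ = stats.median_test(reference, impacted)
        rejections += p_value < alpha
    return rejections / n_sim

# The smallest n with quantile_power(n) >= 0.8 is a candidate sample size.
```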
Abstract:
This dissertation proposed a novel experimental model combining a defect configuration with an active instrumented fixation device to investigate the influence of mechanics on bone healing. The proposed defect configuration aimed to minimise physiological loading within an experimental fracture gap and the instrumented fixator was used for the application of controlled displacements and in vivo stiffness monitoring of the healing process. This thesis has provided a novel approach to advance current knowledge and understanding of mechanobiology, which has been limited in previous experimental models.
Abstract:
In this paper, we tackle the problem of unsupervised domain adaptation for classification. In the unsupervised scenario where no labeled samples from the target domain are provided, a popular approach consists in transforming the data such that the source and target distributions become similar. To compare the two distributions, existing approaches make use of the Maximum Mean Discrepancy (MMD). However, this does not exploit the fact that probability distributions lie on a Riemannian manifold. Here, we propose to make better use of the structure of this manifold and rely on the distance on the manifold to compare the source and target distributions. In this framework, we introduce a sample selection method and a subspace-based method for unsupervised domain adaptation, and show that both these manifold-based techniques outperform the corresponding approaches based on the MMD. Furthermore, we show that our subspace-based approach yields state-of-the-art results on a standard object recognition benchmark.
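For reference, a minimal sketch of the MMD baseline that the manifold-based criteria are compared against; the RBF kernel and fixed bandwidth are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian kernel matrix between rows of X (n, d) and Y (m, d)."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def mmd2(source, target, gamma=1.0):
    """Biased estimate of the squared Maximum Mean Discrepancy."""
    k_ss = rbf_kernel(source, source, gamma).mean()
    k_tt = rbf_kernel(target, target, gamma).mean()
    k_st = rbf_kernel(source, target, gamma).mean()
    return k_ss + k_tt - 2.0 * k_st
```

A domain-adaptation transform would be chosen to make this quantity small; the paper's point is that replacing this criterion with a distance defined on the manifold of distributions performs better.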
Abstract:
The past decade has brought a proliferation of statistical genetic (linkage) analysis techniques, incorporating new methodology and/or improvement of existing methodology in gene mapping, specifically targeted towards the localization of genes underlying complex disorders. Most of these techniques have been implemented in user-friendly programs and made freely available to the genetics community. Although certain packages may be more 'popular' than others, a common question asked by genetic researchers is 'which program is best for me?'. To help researchers answer this question, the following software review aims to summarize the main advantages and disadvantages of the popular GENEHUNTER package.
Abstract:
Masonry under compression is affected by the properties of its constituents and their interfaces. In spite of extensive investigations of the behaviour of masonry under compression, the information in the literature cannot be regarded as comprehensive because of the ongoing introduction of new-generation products, for example polymer-modified thin-layer mortared masonry and drystack masonry. As comprehensive experimental studies are very expensive, an analytical model inspired by damage mechanics is developed and applied to the prediction of the compressive behaviour of masonry in this paper. The model incorporates a parabolic, progressively softening stress-strain curve for the units and, for the combined mortar and unit-mortar interfaces, a progressively stiffening stress-strain curve up to a threshold strain. The model simulates the mutual constraints imposed by each of these constituents through their respective tensile and compressive behaviour and volumetric changes. The advantage of the model is that it requires only the properties of the constituents, treats masonry as a continuum and computes the average properties of the composite masonry prisms/wallettes; it does not require discretisation of the prism or wallette as in finite element methods. The capability of the model in capturing the phenomenological behaviour of masonry, with appropriate elastic response, stiffness degradation and post-peak softening, is demonstrated through numerical examples. The fitting of the experimental data to the model parameters is demonstrated through calibration against selected test data on units and mortar from the literature; the calibrated model is shown to predict quite well the experimentally determined responses of masonry built using the corresponding units and mortar. Through a series of sensitivity studies, the model is also shown to predict the masonry strength appropriately for changes to the properties of the units and mortar, the mortar joint thickness and the ratio of unit height to mortar joint thickness. The unit strength is shown to affect the masonry strength significantly. Although the mortar strength has only a marginal effect, a reduction in mortar joint thickness is shown to have a profound effect on the masonry strength. The results obtained from the model are compared with the various provisions of the Australian Masonry Structures Standard AS 3700 (2011) and Eurocode 6.
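As a hedged illustration of what a parabolic, progressively softening unit curve can look like (a standard Hognestad-type form; the paper's calibrated expression may differ), with f_u the unit compressive strength and ε₀ the strain at peak stress:

```latex
\sigma_u(\varepsilon) = f_u\!\left[\,2\,\frac{\varepsilon}{\varepsilon_0}
  - \left(\frac{\varepsilon}{\varepsilon_0}\right)^{\!2}\right],
\qquad
\frac{d\sigma_u}{d\varepsilon} = \frac{2 f_u}{\varepsilon_0}
  \left(1 - \frac{\varepsilon}{\varepsilon_0}\right).
```

The tangent stiffness decreases monotonically from 2f_u/ε₀ at zero strain to zero at the peak, which is the progressively softening behaviour referred to above.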
Abstract:
Early detection of (pre-)signs of ulceration on a diabetic foot is valuable for clinical practice. Hyperspectral imaging is a promising technique for detection and classification of such (pre-)signs. However, the number of spectral bands should be limited to avoid overfitting, which is critical for pixel classification with hyperspectral image data. The goal was to design a detector/classifier based on spectral imaging (SI) with a small number of optical bandpass filters. The performance and stability of the design were also investigated. The selection of the bandpass filters boils down to a feature selection problem. A dataset was built containing reflectance spectra of 227 skin spots from 64 patients, measured with a spectrometer. Each skin spot was annotated manually by clinicians as "healthy" or as a specific (pre-)sign of ulceration. Statistical analysis on the dataset showed that the number of required filters is between 3 and 7, depending on additional constraints on the filter set. The stability analysis revealed that shot noise was the most critical factor affecting the classification performance. It indicated that this impact could be avoided in future SI systems with a camera sensor whose saturation level is higher than 10⁶, or by post-image processing.
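A hedged sketch of treating filter choice as a feature-selection problem, here with greedy forward selection and a linear discriminant classifier; the estimator, cross-validation scheme and array shapes are illustrative assumptions rather than the study's actual pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector

def select_bands(spectra, labels, n_bands=5):
    """spectra: (n_spots, n_wavelengths) reflectance; labels: clinical annotations."""
    selector = SequentialFeatureSelector(
        LinearDiscriminantAnalysis(),        # simple linear per-spot classifier
        n_features_to_select=n_bands,        # e.g. 3-7 bands, as reported above
        direction="forward",
        cv=5)
    selector.fit(spectra, labels)
    return np.flatnonzero(selector.get_support())  # indices of the chosen bands
```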