887 results for "Non-gaussian statistical mechanics"

Relevance: 30.00%

Abstract:

This thesis investigates profiling and differentiating customers through the use of statistical data mining techniques. The business application of our work centres on examining individuals' seldom studied yet critical consumption behaviour over an extensive time period within the context of the wireless telecommunication industry; consumption behaviour (as opposed to purchasing behaviour) is behaviour that has been performed so frequently that it becomes habitual and involves minimal intention or decision making. Key variables investigated are the activity initialisation timestamp and cell tower location, as well as the activity type and usage quantity (e.g., voice call with duration in seconds); the research focuses on customers' spatial and temporal usage behaviour. The main methodological emphasis is on the development of clustering models based on Gaussian mixture models (GMMs), which are fitted with the recently developed variational Bayesian (VB) method. VB is an efficient deterministic alternative to the popular but computationally demanding Markov chain Monte Carlo (MCMC) methods. The standard VB-GMM algorithm is extended by allowing component splitting, such that it is robust to initial parameter choices and can automatically and efficiently determine the number of components. The new algorithm we propose allows more effective modelling of individuals' highly heterogeneous and spiky spatial usage behaviour, or more generally human mobility patterns; the term spiky describes data patterns with large areas of low probability mixed with small areas of high probability. Customers are then characterised and segmented based on the fitted GMM, which corresponds to how each of them uses the products/services spatially in their daily lives; this essentially reflects their likely lifestyle and occupational traits.
Other significant research contributions include fitting GMMs using VB to circular data, i.e., the temporal usage behaviour, and developing clustering algorithms suitable for high-dimensional data based on the use of VB-GMM.
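The extended split-VB algorithm itself is not reproduced here, but the core idea — variational Bayes automatically determining the number of mixture components — can be sketched with scikit-learn's BayesianGaussianMixture. This is an illustrative stand-in, not the thesis's algorithm; the data, prior value and weight threshold below are invented:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Two well-separated 2-D clusters standing in for cell-tower locations.
data = np.vstack([
    rng.normal([0.0, 0.0], 0.1, size=(200, 2)),
    rng.normal([5.0, 5.0], 0.1, size=(200, 2)),
])

# Deliberately over-specify 10 components; the sparse VB prior should
# leave all but the real ones with negligible weight.
vb_gmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior=1e-3,
    max_iter=500,
    random_state=0,
).fit(data)

# Count components that carry non-negligible posterior weight.
effective = int(np.sum(vb_gmm.weights_ > 0.05))
```

The pruning behaviour shown here is the standard VB mechanism; the thesis's contribution of component splitting additionally makes the result robust to initialisation on spiky spatial data.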

Abstract:

Background: To compare the intraocular pressure readings obtained with the iCare rebound tonometer and the 7CR non-contact tonometer with those measured by Goldmann applanation tonometry in treated glaucoma patients. Design: A prospective, cross-sectional study was conducted in a private tertiary glaucoma clinic. Participants: 109 patients (54M:55F), including only eyes under medical treatment for glaucoma. Methods: Measurement by Goldmann applanation tonometry, iCare rebound tonometry and 7CR non-contact tonometry. Main Outcome Measures: Intraocular pressure. Results: There were strong correlations between the intraocular pressure measurements obtained with Goldmann and both the rebound and non-contact tonometers (Spearman r values ≥ 0.79, p < 0.001). However, there were small, statistically significant differences between the average readings for each tonometer. For the rebound tonometer, the mean intraocular pressure was slightly higher than that of the Goldmann applanation tonometer in the right eyes (p = 0.02) and similar in the left eyes (p = 0.93). The Goldmann-correlated measurements from the non-contact tonometer were lower than the average Goldmann reading for both right (p < 0.001) and left (p < 0.01) eyes. The corneal-compensated measurements from the non-contact tonometer were significantly higher than those of the other tonometers (p ≤ 0.001). Conclusions: The iCare rebound tonometer and the 7CR non-contact tonometer measure IOP in fundamentally different ways from the Goldmann applanation tonometer. The resulting IOP values vary between instruments, which needs to be considered when comparing clinical with home-acquired measurements.
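The agreement analysis described — Spearman correlation between tonometers plus a paired test for a systematic offset — can be sketched with SciPy. The readings, bias and sample size below are synthetic, not the study's data:

```python
import numpy as np
from scipy.stats import spearmanr, wilcoxon

rng = np.random.default_rng(42)
goldmann = rng.normal(16.0, 3.0, size=109)            # GAT readings (mmHg)
rebound = goldmann + rng.normal(0.5, 1.0, size=109)   # small positive bias

rho, rho_p = spearmanr(goldmann, rebound)             # strength of agreement
stat, diff_p = wilcoxon(rebound, goldmann)            # paired test for offset
```

A strong correlation can coexist with a significant mean offset, which is exactly the pattern the study reports between instruments.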

Abstract:

The research objectives of this thesis were to contribute to Bayesian statistical methodology in risk assessment and in spatial and spatio-temporal modelling, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas and to use these applications as a springboard for developing new statistical methods, as well as undertaking analyses that might answer particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems: risk assessment analyses for wastewater, and a four-dimensional dataset assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and to use Bayesian hierarchical models to explore the necessarily complex modelling of four-dimensional agricultural data. The specific objectives of the research were: to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure incorporating all the experimental uncertainty associated with various constants, thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day's data from the agricultural dataset in a way that satisfactorily captured the complexities of the data; to build a model for several days' data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations.
This work forms five papers, two of which have been published, two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of those data as an 'errors-in-variables' problem [Fuller, 1987]. This illustrated a simple method for the incorporation of experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models are used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and unstructured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so allows little insight into how the treatment effects vary with depth. Hence, a number of essentially non-parametric approaches were taken to assess the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model; the applied contribution was the analysis of moisture over depth and estimation of the contrast of interest together with its credible intervals.
These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that, with large datasets, the use of WinBUGS becomes more problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days' data, and we show that soil moisture for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with variances increasing with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility. However, it does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage analysed as a set of time series. We see this spatio-temporal interaction model as a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
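Whichever sampler produces the posterior draws — WinBUGS or a block-updating Gibbs sampler — the credible intervals reported above reduce to percentiles of the MCMC sample. A minimal sketch with invented draws standing in for a risk-assessment probability:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in posterior draws for a risk-assessment probability (invented;
# real draws would come from the fitted DAG or CAR layered model).
posterior_draws = rng.beta(a=3, b=40, size=20_000)

point = posterior_draws.mean()
# 95% equal-tailed credible interval from the sample percentiles.
lower, upper = np.percentile(posterior_draws, [2.5, 97.5])
```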

Abstract:

PySSM is a Python package developed for the analysis of time series using linear Gaussian state space models (SSMs). PySSM is easy to use; models can be set up quickly and efficiently, and a variety of settings are available to the user. It takes advantage of the scientific libraries NumPy and SciPy and other high-level features of the Python language. PySSM also serves as a platform for interfacing with optimised and parallelised Fortran routines. These Fortran routines make heavy use of Basic Linear Algebra Subprograms (BLAS) and Linear Algebra Package (LAPACK) functions for maximum performance. PySSM contains classes for filtering and classical smoothing, as well as simulation smoothing.
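PySSM's own API is not reproduced here; as a sketch of the filtering such a package wraps, here is a hand-rolled Kalman filter for the simplest linear Gaussian SSM, the local-level model (all parameter values invented):

```python
import numpy as np

def kalman_filter_local_level(y, sigma_eps2, sigma_eta2, a0=0.0, p0=1e7):
    """Kalman filter for the local-level model
        y_t = a_t + eps_t,   a_{t+1} = a_t + eta_t.
    Returns the filtered state means. Minimal sketch, not PySSM's API."""
    a, p = a0, p0          # diffuse initial state
    filtered = []
    for yt in y:
        v = yt - a                      # prediction error
        f = p + sigma_eps2              # prediction-error variance
        k = p / f                       # Kalman gain
        a = a + k * v                   # filtered state mean
        p = p * (1 - k) + sigma_eta2    # predicted variance for next step
        filtered.append(a)
    return np.array(filtered)

rng = np.random.default_rng(0)
level = np.cumsum(rng.normal(0.0, 0.1, 100)) + 5.0   # latent random walk
y = level + rng.normal(0.0, 0.5, 100)                # noisy observations
est = kalman_filter_local_level(y, sigma_eps2=0.25, sigma_eta2=0.01)
```

PySSM's Fortran back end performs the same kind of recursion via BLAS/LAPACK; the scalar loop above is the simplest special case.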

Abstract:

Objective: The aim of this study was to demonstrate the potential of near-infrared (NIR) spectroscopy for categorizing cartilage degeneration induced in animal models. Method: Three models of osteoarthritic degeneration were induced in laboratory rats via one of the following methods: (i) meniscectomy (MSX); (ii) anterior cruciate ligament transection (ACLT); and (iii) intra-articular injection of monoiodoacetate (1 mg) (MIA), in the right knee joint, with 12 rats per model group. After 8 weeks, the animals were sacrificed and tibial knee joints were collected. A custom-made near-infrared (NIR) probe of diameter 5 mm was placed on the cartilage surface and spectral data were acquired from each specimen in the wavenumber range 4,000–12,500 cm−1. Following spectral data acquisition, the specimens were fixed and Safranin-O staining was performed to assess disease severity based on the Mankin scoring system. Using multivariate statistical analysis based on principal component analysis and partial least squares regression, the spectral data were then related to the Mankin scores of the samples tested. Results: Mild to severe degenerative cartilage changes were observed in the subject animals. The ACLT models showed mild, the MSX models moderate, and the MIA models severe cartilage degenerative changes, both morphologically and histologically. Our results demonstrate that NIR spectroscopic information is capable of separating the cartilage samples into different groups according to the severity of degeneration, with NIR spectra correlating significantly with the Mankin scores (R2 = 88.85%). Conclusion: We conclude that NIR spectroscopy is a viable tool for evaluating articular cartilage health and physical properties, such as change in thickness with degeneration.

Abstract:

Introduction: The ability to regulate joint stiffness and coordinate movement during landing when impaired by muscle fatigue has important implications for knee function. Unfortunately, the literature examining fatigue effects on landing mechanics suffers from a lack of consensus. Inconsistent results can be attributed to variable fatigue models, as well as to grouping variable responses between individuals when statistically detecting differences between conditions. There remains a need to examine fatigue effects on knee function during landing with attention to these methodological limitations. Aim: The purpose of this study, therefore, was to examine the effects of isokinetic fatigue on pre-impact muscle activity and post-impact knee mechanics during landing, using single-subject analysis. Methodology: Sixteen male university students (22.6±3.2 yrs; 1.78±0.07 m; 75.7±6.3 kg) performed maximal concentric and eccentric knee extensions in a reciprocal manner on an isokinetic dynamometer, and step-landing trials, on two occasions. On the first occasion each participant performed 20 step-landing trials from a knee-high platform followed by 75 maximal contractions on the isokinetic dynamometer. The isokinetic data were used to calculate the operational definition of fatigue. On the second occasion, after a minimum rest of 14 days, participants performed 2 sets of 20 step-landing trials, followed by isokinetic exercise until the operational definition of fatigue was met, and a final post-fatigue set of 20 step-landing trials. Results: Single-subject analyses revealed that isokinetic fatigue of the quadriceps induced variable responses in pre-impact activation of the knee extensors and flexors (frequency, onset timing and amplitude) and in post-impact knee mechanics (stiffness and coordination). In general, however, isokinetic fatigue induced significant (p<0.05) reductions in quadriceps activation frequency, delayed onset and increased amplitude.
In addition, knee stiffness was significantly (p<0.05) increased in some individuals, and sagittal coordination was impaired. Conclusions: Pre-impact activation and post-impact mechanics were adjusted in patterns that were unique to the individual and could not be identified using traditional group-based statistical analysis. The results suggest that individuals optimised knee function differently to satisfy competing demands, such as minimising energy expenditure while maximising joint stability and sensory information.

Abstract:

Introduction: Evidence concerning the alteration of knee function during landing suffers from a lack of consensus. This uncertainty can be attributed to methodological flaws, particularly in relation to the statistical analysis of variable human movement data. Aim: The aim of this study was to compare single-subject and group analysis in quantifying alterations in the magnitude and within-participant variability of knee mechanics during a step-landing task. Methods: A group of healthy men (N = 12) stepped down from a knee-high platform for 60 consecutive trials, each trial separated by a 1-minute rest. The magnitude and within-participant variability of sagittal knee stiffness and coordination of the landing leg during the immediate post-impact period were evaluated. Coordination of the knee was quantified in the sagittal plane by calculating the mean absolute relative phase of sagittal shank and thigh motion (MARP1) and between knee rotation and knee flexion (MARP2). Changes across trials were compared between group and single-subject statistical analyses. Results: The group analysis detected significant reductions in MARP1 magnitude. However, the single-subject analyses detected changes in all dependent variables, including increases in variability with task repetition. Between-individual variation was also present in the timing, size and direction of alterations with task repetition. Conclusion: The results have important implications for the interpretation of existing information regarding the adaptation of knee mechanics to interventions such as fatigue, footwear or landing height. It is proposed that a familiarisation session be incorporated in future experiments on a single-subject basis prior to an intervention.
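Mean absolute relative phase can be computed in a simplified form from Hilbert-transform phase angles. The sketch below uses synthetic segment signals with an imposed 45° lag; the study's exact MARP pipeline may differ:

```python
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0.0, 1.0, 500, endpoint=False)
shank = np.sin(2 * np.pi * 2 * t)                 # 2 Hz segment oscillation
thigh = np.sin(2 * np.pi * 2 * t - np.pi / 4)     # same motion lagged 45 deg

# Instantaneous phase of each segment via the analytic signal.
phase_shank = np.angle(hilbert(shank))
phase_thigh = np.angle(hilbert(thigh))

# Wrap the phase difference to [-pi, pi] before averaging.
rel = np.angle(np.exp(1j * (phase_shank - phase_thigh)))
marp_deg = np.degrees(np.mean(np.abs(rel)))
print(round(marp_deg))  # recovers the imposed 45-degree lag
```

A MARP near 0° indicates in-phase motion of the two segments; larger values indicate increasingly out-of-phase coordination.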

Abstract:

Fire safety of light gauge cold-formed steel frame (LSF) wall systems is significant to building design. Gypsum plasterboard is widely used as a fire safety material in the building industry. It contains gypsum (CaSO4·2H2O), calcium carbonate (CaCO3) and, most importantly, free and chemically bound water in its crystal structure. The dehydration of the gypsum and the decomposition of calcium carbonate absorb heat, which gives gypsum plasterboard its fire-resistant qualities. Recently a new composite panel system was developed in which a thin insulation layer is used externally between two plasterboards to improve the fire performance of LSF walls. In this research, finite element thermal models of both the traditional LSF wall panels with cavity insulation and the new LSF composite wall panels were developed to simulate their thermal behaviour under standard and realistic design fire conditions. Suitable thermal properties of gypsum plasterboard, insulation materials and steel were used. The developed models were then validated by comparing their results with fire test results. This paper presents the details of the developed finite element models of non-load-bearing LSF wall panels and the thermal analysis results. It was shown that finite element models can be used to simulate the thermal behaviour of LSF walls with varying configurations of insulation and plasterboards. The results show that the use of cavity insulation was detrimental to the fire rating of LSF walls, while the use of external insulation offered superior thermal protection. Effects of realistic fire conditions are also presented.
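The validated finite element models are not reproduced here; as a much-simplified illustration of time-stepping a wall layer's thermal response, here is a 1-D explicit finite-difference conduction sketch. All material values are placeholders, not measured plasterboard properties:

```python
import numpy as np

alpha = 3e-7          # thermal diffusivity (m^2/s) -- placeholder value
L, nx = 0.016, 17     # 16 mm layer, 1 mm grid spacing
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha   # explicit scheme stable for alpha*dt/dx^2 <= 0.5
T = np.full(nx, 20.0)      # start at ambient temperature (deg C)

for _ in range(2000):      # roughly 45 minutes of simulated time
    T[0] = 800.0           # fire-side face held hot
    # Explicit update of the interior nodes (central difference in space).
    T[1:-1] += (alpha * dt / dx**2) * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[-1] = 20.0           # ambient-side face held cool, for simplicity
```

The real models additionally capture the heat absorbed by gypsum dehydration and calcium carbonate decomposition via temperature-dependent properties, which this constant-property sketch omits.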

Abstract:

There is a need for an accurate real-time quantitative system that would enhance decision-making in the treatment of osteoarthritis. To achieve this objective, significant research is required to enable articular cartilage properties to be measured and categorized for health and functionality without the need for laboratory tests involving biopsies for pathological evaluation. Such a system would provide access to the internal condition of the cartilage matrix and thus extend the vision-based arthroscopy currently used beyond the subjective evaluation of surgeons. The system must be able to non-destructively probe the entire thickness of the cartilage and its immediate subchondral bone layer. In this thesis, near infrared spectroscopy is investigated for this purpose. The aim is to relate the structure and load-bearing properties of the cartilage matrix to the near infrared absorption spectrum and to establish functional relationships that provide objective, quantitative and repeatable categorization of cartilage condition outside the area of visible degradation in a joint. Based on results from traditional mechanical testing, their innovative interpretation and their relationship with spectroscopic data, new parameters were developed. These were then evaluated for their consistency in discriminating between healthy viable and degraded cartilage. The mechanical and physico-chemical properties were related to specific regions of the near infrared absorption spectrum identified as part of the research conducted for this thesis. The relationships between the tissue's near infrared spectral response and the new parameters were modelled using multivariate statistical techniques based on partial least squares regression (PLSR).
With significantly high levels of statistical correlation, the modelled relationships were demonstrated to possess considerable potential for predicting the properties of unknown tissue samples in a quick and non-destructive manner. In order to adapt near infrared spectroscopy for clinical applications, a balance between probe diameter and the number of active transmit-receive optic fibres must be found. This was achieved in the course of this research, resulting in an optimal probe configuration that could be adapted for joint tissue evaluation. Furthermore, as a proof of concept, a protocol for obtaining the new parameters from the near infrared absorption spectra of cartilage was developed, implemented in graphical user interface (GUI)-based software, and used to assess cartilage-on-bone samples in vitro. This conceptual implementation demonstrated, in part through the individual parametric relationships with the near infrared absorption spectrum, the capacity of the proposed system to facilitate real-time, non-destructive evaluation of cartilage matrix integrity. In summary, the potential of near infrared spectroscopy for evaluating the articular cartilage and bone laminate has been demonstrated in this thesis. The approach could have spin-offs for other soft tissues and organs of the body. It builds on the earlier work of the group at QUT, enhancing the near infrared component of the ongoing research on developing a tool for cartilage evaluation that goes beyond visual and subjective methods.

Abstract:

Cognitive radio is an emerging technology proposing the concept of dynamic spectrum access as a solution to the looming problem of spectrum scarcity caused by the growth in wireless communication systems. Under the proposed concept, non-licensed, secondary users (SUs) can access spectrum owned by licensed, primary users (PUs) so long as interference to PUs is kept minimal. Spectrum sensing is a crucial task in cognitive radio whereby the SU senses the spectrum to detect the presence or absence of any PU signal. Conventional spectrum sensing assumes the PU signal is 'stationary', remaining in the same activity state during the sensing cycle, while an emerging trend models the PU as 'non-stationary', undergoing state changes. Existing studies have focused on non-stationary PUs during the transmission period, but very little research has considered the impact on spectrum sensing when the PU is non-stationary during the sensing period. The concept of PU duty cycle is developed as a tool to analyse the performance of spectrum sensing detectors when detecting non-stationary PU signals. New detectors are also proposed to optimise detection with respect to the duty cycle exhibited by the PU. This research consists of two major investigations. The first stage investigates the impact of duty cycle on the performance of existing detectors and the extent of the problem in existing studies. The second stage develops new detection models and frameworks to ensure the integrity of spectrum sensing when detecting non-stationary PU signals. The first investigation demonstrates that the conventional signal model formulated for stationary PUs does not accurately reflect the behaviour of a non-stationary PU. Therefore the performance calculated and assumed to be achievable by the conventional detector does not reflect the performance actually achieved.
Through analysing the statistical properties of duty cycle, performance degradation is shown to be a problem that cannot easily be neglected in existing sensing studies when the PU is modelled as non-stationary. The second investigation presents detectors that are aware of the duty cycle exhibited by a non-stationary PU. A two-stage detection model is proposed to improve detection performance and robustness to changes in duty cycle. This detector is most suitable for applications that require long sensing periods. A second detector, the duty cycle based energy detector, is formulated by integrating the distribution of the duty cycle into the test statistic of the energy detector, and is suitable for short sensing periods. The decision threshold is optimised with respect to the traffic model of the PU, hence the proposed detector can calculate average detection performance that reflects realistic results. A detection framework for the application of spectrum sensing optimisation is proposed to provide clear guidance on the constraints on the sensing and detection models. Following this framework ensures that the signal model accurately reflects practical behaviour and that the detection model implemented is suitable for the desired detection assumption. Based on this framework, a spectrum sensing optimisation algorithm is further developed to maximise sensing efficiency for non-stationary PUs. New optimisation constraints are derived to account for any PU state changes within the sensing cycle while implementing the proposed duty cycle based detector.
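The duty cycle based detector extends the classical energy detector, whose baseline form can be sketched as follows (all parameter values invented; this is the conventional detector, not the proposed one):

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(7)
N = 256            # samples per sensing period
noise_var = 1.0
pfa = 0.05         # target false-alarm probability

# Under H0 (noise only), sum(x**2)/noise_var is chi-square with N d.o.f.,
# so the threshold is set from that distribution's upper quantile.
threshold = noise_var * chi2.ppf(1.0 - pfa, df=N)

def energy_detect(x):
    """Declare the PU present when the energy statistic exceeds threshold."""
    return bool(np.sum(x**2) > threshold)

# Empirical false-alarm rate over noise-only sensing periods.
fa_rate = np.mean([energy_detect(rng.normal(0.0, 1.0, N)) for _ in range(2000)])
# A PU transmitting through the whole sensing period is detected reliably.
pu_signal = rng.normal(0.0, 1.0, N) + rng.normal(0.0, 1.0, N)
```

A non-stationary PU active for only a fraction of the period lowers the expected statistic toward the noise floor; that degradation is what integrating the duty cycle distribution into the test statistic addresses.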

Abstract:

The gross overrepresentation of Indigenous peoples in prison populations suggests that sentencing may be a discriminatory process. Using findings from recent (1991–2011) multivariate statistical sentencing analyses from the United States, Canada, and Australia, we review the 3 key hypotheses advanced as plausible explanations for baseline sentencing discrepancies between Indigenous and non-Indigenous adult criminal defendants: (a) differential involvement, (b) negative discrimination, and (c) positive discrimination. Overall, the prior research shows strong support for the differential involvement thesis and some support for the discrimination theses (positive and negative). We argue that where discrimination is found, it may be explained by the lack of a more complete set of control variables in researchers’ multivariate models and/or differing political and social contexts.

Abstract:

Here, mixed convection boundary layer flow of a viscous fluid along a heated vertical semi-infinite plate is investigated in a non-absorbing medium. The relationship between convection and thermal radiation is established via a boundary condition of the second kind on the thermally radiating vertical surface. The governing boundary layer equations are transformed into dimensionless parabolic partial differential equations with the help of appropriate transformations, and the resultant system is solved numerically by applying a straightforward finite difference method together with the Gaussian elimination technique. It is worth noting that the Prandtl number, Pr, is taken to be small (<< 1), which is appropriate for liquid metals. Moreover, the numerical results are demonstrated graphically by showing the effects of the important physical parameters, namely the modified Richardson number (or mixed convection parameter), Ri*, and the surface radiation parameter, R, on the local skin friction and local Nusselt number coefficients.
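For parabolic systems like this, an implicit finite-difference discretisation reduces at each marching step to a tridiagonal linear solve, for which Gaussian elimination specialises to the Thomas algorithm. A generic sketch with hypothetical coefficients, not the paper's discretised equations:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system by Gaussian elimination (Thomas algorithm).
    a = sub-diagonal, b = main diagonal, c = super-diagonal, d = right side."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: the discrete 1-D diffusion operator (-1, 2, -1) with unit load.
n = 5
a = np.full(n, -1.0); a[0] = 0.0
b = np.full(n, 2.0)
c = np.full(n, -1.0); c[-1] = 0.0
x = thomas_solve(a, b, c, np.ones(n))
```

The Thomas algorithm runs in O(n) per step rather than the O(n^3) of general Gaussian elimination, which is why it is the standard choice for boundary-layer marching schemes.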

Abstract:

The unsteady boundary-layer development for thermomagnetic convection of paramagnetic fluids inside a square cavity is considered in this study. The cavity is placed in a microgravity condition (no gravitational acceleration) and under a uniform magnetic field that acts vertically. A ramp temperature boundary condition is applied on the left vertical side wall of the cavity, where the temperature initially increases with time up to some specific time and is maintained constant thereafter. A distinct magnetic convection boundary layer develops adjacent to the left vertical wall due to the magnetic body force generated on the paramagnetic fluid. An improved scaling analysis has been performed using a triple-layer integral method and verified by numerical simulations. The Prandtl number is chosen greater than unity, varied over 5–100. Moreover, the effects of various values of the magnetic parameter and the magnetic Rayleigh number on the fluid flow and heat transfer are shown.

Abstract:

The effect of conduction-convection-radiation on natural convection flow of a Newtonian, optically thick gray fluid confined in a non-Darcian porous-medium square cavity is numerically studied. For the gray fluid, consideration is given to the Rosseland diffusion approximation. It is further assumed that (i) the temperature of the left vertical wall varies linearly with height, (ii) the right vertical and top walls are cooled, and (iii) the bottom wall is uniformly heated. The governing equations are solved using the Alternating Direction Implicit method together with the Successive Over Relaxation technique. The effects of the governing parameters, namely the Forchheimer resistance (Γ), the Planck number (Rd), and the temperature difference (Δ), on flow pattern and heat transfer characteristics have been investigated. It was seen that flow and heat transfer are reduced as the Forchheimer resistance is increased. On the other hand, both the strength of the flow and the heat transfer increase as the temperature ratio, Δ, is increased.
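The cavity solver itself is not reproduced; as a minimal illustration of the Successive Over Relaxation technique named above, here is SOR applied to the Laplace equation on a square with one heated wall (placeholder boundary values, not the radiating-cavity problem):

```python
import numpy as np

n, omega = 20, 1.7        # grid size and over-relaxation factor (1 < omega < 2)
T = np.zeros((n, n))
T[:, 0] = 1.0             # heated left wall; remaining walls held at 0

for _ in range(500):      # SOR sweeps over interior points
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            # Gauss-Seidel value from the five-point Laplacian stencil,
            # then over-relax toward it by the factor omega.
            gs = 0.25 * (T[i + 1, j] + T[i - 1, j] + T[i, j + 1] + T[i, j - 1])
            T[i, j] += omega * (gs - T[i, j])
```

Choosing omega near its optimum accelerates convergence dramatically over plain Gauss-Seidel (omega = 1), which is why SOR is paired with ADI sweeps in solvers like the one described.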

Abstract:

Introduction: The Trendelenburg Test (TT) is used to assess the functional strength of the hip abductor muscles (HABD), their ability to control frontal plane motion of the pelvis, and the ability of the lumbopelvic complex to transfer load into single-leg stance. Rationale: Although a standard method of performing the test has been described for use within clinical populations, no study has directly investigated Trendelenburg's hypotheses. Purpose: To investigate the validity of the TT using an ultrasound-guided nerve block (UNB) of the superior gluteal nerve and to determine whether the reduction in HABD strength would result in the theorized mechanical compensatory strategies measured during the TT. Methods: Quasi-experimental design using a convenience sample of nine healthy males. Only subjects with no current or previous injury to the lumbar spine, pelvis, or lower extremities, and no previous surgeries, were included. Force dynamometry was used to evaluate HABD strength (%BW). 2D mechanics were used to evaluate contralateral pelvic drop (cMPD), change in contralateral pelvic drop (∆cMPD), ipsilateral hip adduction (iHADD) and ipsilateral trunk sway (TRUNK), measured in degrees (°). All measures were collected prior to and following a UNB of the superior gluteal nerve performed by an interventional radiologist. Results: Subjects' median age was 31 yrs (IQR: 22–32 yrs) and median weight was 73 kg (IQR: 67–81 kg). An average 52% reduction in HABD strength (z = 2.36, p = 0.02) followed the UNB. No differences were found in cMPD or ∆cMPD (z = 0.01, p = 0.99; z = -0.67, p = 0.49). Individual changes in biomechanics showed no consistency between subjects and non-systematic changes across the group. One subject demonstrated the mechanical compensations described by Trendelenburg. Discussion: The TT should not be used as a screening measure for HABD strength in populations demonstrating strength greater than 30%BW, but should be reserved for use with populations with marked HABD weakness.
Importance: This study presents data regarding the critical level of HABD strength required to support the pelvis during the TT.