710 results for Variance Models
Abstract:
Aims: The aim of this cross-sectional study is to explore levels of physical activity and sitting behaviour amongst a sample of pregnant Australian women (n = 81), and to investigate whether reported levels of physical activity and/or time spent sitting were associated with depressive symptom scores after controlling for potential covariates. Methods: Study participants were women who attended the antenatal clinic of a large Brisbane maternity hospital between October and November 2006. Data relating to participants' current levels of physical activity, sitting behaviour, depressive symptoms, demographic characteristics and exposure to known risk factors for depression during pregnancy were collected via on-site survey, follow-up telephone interview (approximately one week later) and post-delivery access to participant hospital records. Results: Participants were aged 29.5 (± 5.6) years and mostly partnered (86.4%) with a gross household income above $26,000 per annum (88.9%). Levels of physical activity were generally low, with only 28.4% of participants reporting sufficient total activity and 16% reporting sufficient planned (leisure-time) activity. The sample mean for depressive symptom scores measured by the Hospital Anxiety and Depression Scale (HADS-D) was 6.38 (± 2.55). The mean depressive symptom scores for participants who reported total moderate-to-vigorous activity levels of sufficient, insufficient, and none were 5.43 (± 1.56), 5.82 (± 1.77) and 7.63 (± 3.25), respectively.
Hierarchical multivariable linear regression modelling indicated that, after controlling for covariates, a statistically significant difference of 1.09 points was observed between the mean depressive symptom scores of participants who reported sufficient total physical activity and those who reported engaging in no moderate-to-vigorous activity in a typical week (p = 0.05), but this did not reach the criterion for a clinically meaningful difference. Total physical activity contributed 2.2% to the total 30.3% of explained variance within this model. The other main contributors to explained variance in the multivariable regression models were anxiety symptom scores and the number of existing children. Further, a trend was observed between higher levels of planned sitting behaviour and higher depressive symptom scores (p = 0.06); this correlation was not clinically meaningful. Planned sitting contributed 3.2% to the total 31.3% of explained variance. The number of regression covariates and the limited sample size led to a less than ideal ratio of covariates to participants, probably attenuating this relationship. Specific information about the sitting-based activities in which participants engaged may have provided greater insight into the relationship between planned sitting and depressive symptoms, but these data were not captured by the present study. Conclusions: The finding that higher levels of physical activity were associated with lower levels of depressive symptoms is consistent with the existing literature in pregnant women, and with a larger body of evidence based on general population samples. Although this result was not considered clinically meaningful, the criterion for a clinically meaningful result was an a priori decision based on quality-of-life literature in non-pregnant populations and may not truly reflect a difference in symptoms that is meaningful to pregnant women.
Further investigation to establish clinically meaningful criteria for continuous depressive symptom data in pregnant women is required. This result may have implications for prevention and management options for depression during pregnancy. The observed trend between planned sitting and depressive symptom scores is consistent with literature on leisure-time sitting behaviour in general population samples, and suggests that further research in this area, with larger samples of pregnant women and more specific sitting data, is required to explore potential associations between activities such as television viewing and depressive symptoms, as this may be an area of behaviour that is amenable to modification.
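The hierarchical modelling step described above, where a block of covariates is entered first and physical activity's incremental contribution to explained variance is then assessed, can be sketched as an incremental R² computation. The data below are simulated stand-ins, not the study's data, and the variable names and effect sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 81  # sample size matching the study

# Hypothetical covariates: anxiety score, number of existing children,
# and an activity level (0 = none, 1 = insufficient, 2 = sufficient)
anxiety = rng.normal(8, 3, n)
children = rng.integers(0, 4, n)
activity = rng.integers(0, 3, n)
y = 7.5 + 0.4 * anxiety - 0.5 * children - 0.9 * activity + rng.normal(0, 2, n)

def r_squared(X, y):
    """R^2 from an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Step 1: covariates only; step 2: add physical activity
r2_base = r_squared(np.column_stack([anxiety, children]), y)
r2_full = r_squared(np.column_stack([anxiety, children, activity]), y)
print(f"Incremental explained variance of activity: {r2_full - r2_base:.3f}")
```

Because the models are nested, the increment `r2_full - r2_base` is non-negative and plays the role of the "2.2% of explained variance" figure reported in the abstract.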
Abstract:
Early models of bankruptcy prediction employed financial ratios drawn from pre-bankruptcy financial statements and performed well both in-sample and out-of-sample. Since then there has been an ongoing effort in the literature to develop models with even greater predictive performance. A significant innovation in the literature was the introduction into bankruptcy prediction models of capital market data such as excess stock returns and stock return volatility, along with the application of the Black–Scholes–Merton option-pricing model. In this note, we test five key bankruptcy models from the literature using an up-to-date data set and find that they each contain unique information regarding the probability of bankruptcy but that their performance varies over time. We build a new model comprising key variables from each of the five models and add a new variable that proxies for the degree of diversification within the firm. The degree of diversification is shown to be negatively associated with the risk of bankruptcy. This more general model outperforms the existing models in a variety of in-sample and out-of-sample tests.
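The Black–Scholes–Merton ingredient mentioned above can be sketched as a naive distance-to-default calculation: firm value is treated as a lognormal process, and default occurs if it falls below the face value of debt at the horizon. This is a minimal illustration with invented firm figures, not the note's estimation procedure (which would also back out unobserved asset value and volatility from equity data).

```python
import math

def merton_default_probability(V, F, mu, sigma, T=1.0):
    """Probability that firm value V falls below the face value of debt F
    within horizon T under the Black-Scholes-Merton lognormal model."""
    dd = (math.log(V / F) + (mu - 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    # Standard normal CDF at -dd, via the error function
    return 0.5 * (1 + math.erf(-dd / math.sqrt(2)))

# Illustrative firm: assets 20% above debt, 5% drift, 30% asset volatility
pd = merton_default_probability(V=120.0, F=100.0, mu=0.05, sigma=0.30)
```

Holding everything else fixed, a higher asset volatility raises the implied default probability, which is why the volatility inputs carry predictive information beyond accounting ratios.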
Abstract:
Analytical expressions are derived for the mean and variance of estimates of the bispectrum of a real-valued time series assuming a cosinusoidal model. The effects of spectral leakage, inherent in the discrete Fourier transform operation when the modes present in the signal have a nonintegral number of wavelengths in the record, are included in the analysis. A single phase-coupled triad of modes can cause the bispectrum to have a nonzero mean value over the entire region of computation owing to leakage. The variance of bispectral estimates in the presence of leakage has contributions from individual modes and from triads of phase-coupled modes. Time-domain windowing reduces the leakage. The theoretical expressions for the mean and variance of bispectral estimates are derived in terms of a function dependent on an arbitrary symmetric time-domain window applied to the record, the number of data points, and the statistics of the phase coupling among triads of modes. The theoretical results are verified by numerical simulations for simple test cases and applied to laboratory data to examine phase coupling in a hypothesis-testing framework.
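The leakage-free baseline behind this analysis can be illustrated with a small numerical sketch (not the paper's derivation): for modes at integral wavenumbers, the ensemble-averaged bispectral estimate B(k1, k2) = E[X(k1) X(k2) X*(k1+k2)] is large only when the triad is phase coupled, and averages towards zero when the sum mode carries an independent random phase. The record length, mode numbers and realisation count below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 256, 200          # record length, number of realisations
k1, k2 = 10, 15          # triad: modes k1, k2 and their sum k1 + k2
t = np.arange(N)

B_coupled = 0.0
B_random = 0.0
for _ in range(M):
    ph1, ph2 = rng.uniform(0, 2 * np.pi, 2)
    # Phase-coupled signal: the sum mode's phase equals ph1 + ph2
    x_c = (np.cos(2*np.pi*k1*t/N + ph1) + np.cos(2*np.pi*k2*t/N + ph2)
           + np.cos(2*np.pi*(k1+k2)*t/N + ph1 + ph2))
    # Uncoupled signal: the sum mode has an independent random phase
    x_u = (np.cos(2*np.pi*k1*t/N + ph1) + np.cos(2*np.pi*k2*t/N + ph2)
           + np.cos(2*np.pi*(k1+k2)*t/N + rng.uniform(0, 2*np.pi)))
    Xc, Xu = np.fft.fft(x_c), np.fft.fft(x_u)
    B_coupled += Xc[k1] * Xc[k2] * np.conj(Xc[k1+k2]) / M
    B_random += Xu[k1] * Xu[k2] * np.conj(Xu[k1+k2]) / M
```

With integral wavenumbers there is no leakage, so `|B_coupled|` sits at its theoretical value of (N/2)^3 while `|B_random|` decays like 1/sqrt(M); the paper's contribution is quantifying what happens to this mean and variance when the wavenumbers are nonintegral.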
Abstract:
In 2005, Stephen Abram, vice president of Innovation at SirsiDynix, challenged library and information science (LIS) professionals to start becoming “librarian 2.0.” In the last few years, discussion and debate about the “core competencies” needed by librarian 2.0 have appeared in the “biblioblogosphere” (blogs written by LIS professionals). However, beyond these informal blog discussions few systematic and empirically based studies have taken place. A project funded by the Australian Learning and Teaching Council fills this gap. The project identifies the key skills, knowledge, and attributes required by “librarian 2.0.” Eighty-one members of the Australian LIS profession participated in a series of focus groups. Eight themes emerged as being critical to “librarian 2.0”: technology, communication, teamwork, user focus, business savvy, evidence based practice, learning and education, and personal traits. Guided by these findings, interviews with 36 LIS educators explored the current approaches used within contemporary LIS education to prepare graduates to become “librarian 2.0”. This video presents an example of ‘great practice’ in current LIS education that helps to foster web 2.0 professionals.
Abstract:
In 2005, Stephen Abram, vice president of Innovation at SirsiDynix, challenged library and information science (LIS) professionals to start becoming “librarian 2.0.” In the last few years, discussion and debate about the “core competencies” needed by librarian 2.0 have appeared in the “biblioblogosphere” (blogs written by LIS professionals). However, beyond these informal blog discussions few systematic and empirically based studies have taken place. A project funded by the Australian Learning and Teaching Council fills this gap. The project identifies the key skills, knowledge, and attributes required by “librarian 2.0.” Eighty-one members of the Australian LIS profession participated in a series of focus groups. Eight themes emerged as being critical to “librarian 2.0”: technology, communication, teamwork, user focus, business savvy, evidence based practice, learning and education, and personal traits. Guided by these findings, interviews with 36 LIS educators explored the current approaches used within contemporary LIS education to prepare graduates to become “librarian 2.0”. This video presents an example of ‘great practice’ in current LIS education as it strives to foster web 2.0 professionals.
Abstract:
Multivariate volatility forecasts are an important input in many financial applications, in particular portfolio optimisation problems. Given the number of models available and the range of loss functions to discriminate between them, it is clear that selecting the optimal forecasting model is challenging. The aim of this thesis is to thoroughly investigate how effective many commonly used statistical (MSE and QLIKE) and economic (portfolio variance and portfolio utility) loss functions are at discriminating between competing multivariate volatility forecasts. An analytical investigation of the loss functions is performed to determine whether they identify the correct forecast as the best forecast. This is followed by an extensive simulation study that examines the ability of the loss functions to consistently rank forecasts, and their statistical power within tests of predictive ability. For the tests of predictive ability, the model confidence set (MCS) approach of Hansen, Lunde and Nason (2003, 2011) is employed. In addition, an empirical study investigates whether the simulation findings hold in a realistic setting. In light of these earlier studies, a major empirical study seeks to identify the set of superior multivariate volatility forecasting models from 43 models that use either daily squared returns or realised volatility to generate forecasts. This study also assesses how the choice of volatility proxy affects the ability of the statistical loss functions to discriminate between forecasts. Analysis of the loss functions shows that QLIKE, MSE and portfolio variance can discriminate between multivariate volatility forecasts, while portfolio utility cannot. An examination of the effective loss functions shows that they can all identify the correct forecast at a point in time; however, their ability to discriminate between competing forecasts does vary.
That is, QLIKE is identified as the most effective loss function, followed by portfolio variance which is then followed by MSE. The major empirical analysis reports that the optimal set of multivariate volatility forecasting models includes forecasts generated from daily squared returns and realised volatility. Furthermore, it finds that the volatility proxy affects the statistical loss functions’ ability to discriminate between forecasts in tests of predictive ability. These findings deepen our understanding of how to choose between competing multivariate volatility forecasts.
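The two statistical loss functions compared above can be written down compactly. The sketch below is a minimal illustration with an invented true covariance matrix and two invented competing forecasts; it is not the thesis's implementation, and in practice the "true" matrix would be replaced by a noisy volatility proxy.

```python
import numpy as np

def mse_loss(sigma, H):
    """Matrix MSE: element-wise squared distance between proxy sigma and forecast H."""
    return np.sum((sigma - H) ** 2)

def qlike_loss(sigma, H):
    """Multivariate QLIKE: log|H| + tr(H^{-1} sigma),
    minimised in expectation at H = E[sigma]."""
    _, logdet = np.linalg.slogdet(H)
    return logdet + np.trace(np.linalg.solve(H, sigma))

# Invented 2-asset covariance matrix and two competing forecasts
true_cov = np.array([[1.00, 0.30], [0.30, 0.50]])
good = np.array([[1.05, 0.28], [0.28, 0.52]])   # close to the truth
bad = np.array([[2.00, -0.10], [-0.10, 1.00]])  # badly miscalibrated

mse_good, mse_bad = mse_loss(true_cov, good), mse_loss(true_cov, bad)
ql_good, ql_bad = qlike_loss(true_cov, good), qlike_loss(true_cov, bad)
```

Both losses rank the accurate forecast ahead of the inaccurate one here; the thesis's finding concerns how reliably they do so when `sigma` is a noisy proxy rather than the truth.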
Abstract:
Orthopaedic fracture fixation implants are increasingly being designed using accurate 3D models of long bones based on computed tomography (CT). Unlike CT, magnetic resonance imaging (MRI) does not involve ionising radiation and is therefore a desirable alternative to CT. This study aims to quantify the accuracy of MRI-based 3D models compared to CT-based 3D models of long bones. The femora of five intact cadaver ovine limbs were scanned using a 1.5T MRI and a CT scanner. Image segmentation of CT and MRI data was performed using a multi-threshold segmentation method. Reference models were generated by digitising the bone surfaces free of soft tissue with a mechanical contact scanner. The MRI- and CT-derived models were validated against the reference models. The results demonstrated that the CT-based models contained an average error of 0.15 mm while the MRI-based models contained an average error of 0.23 mm. Statistical validation showed that there were no significant differences between 3D models based on CT and MRI data. These results indicate that the geometric accuracy of MRI-based 3D models was comparable to that of CT-based models, and therefore MRI is a potential alternative to CT for the generation of 3D models with high geometric accuracy.
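The model-versus-reference comparison described above amounts to a surface-error computation: for each vertex of a segmented model, find the distance to the nearest point of the digitised reference surface and average. The sketch below uses synthetic point clouds with error magnitudes chosen to mirror the reported figures; it is not the study's data or validation pipeline.

```python
import numpy as np

def mean_surface_error(model_pts, reference_pts):
    """Mean nearest-neighbour distance (brute force) from each model
    vertex to the reference point cloud, in the same units as the input."""
    d = np.linalg.norm(model_pts[:, None, :] - reference_pts[None, :, :], axis=2)
    return d.min(axis=1).mean()

rng = np.random.default_rng(2)
reference = rng.uniform(0, 50, size=(300, 3))          # stand-in digitised surface (mm)
ct_model = reference + rng.normal(0, 0.15, (300, 3))   # hypothetical CT-derived model
mri_model = reference + rng.normal(0, 0.23, (300, 3))  # hypothetical MRI-derived model

err_ct = mean_surface_error(ct_model, reference)
err_mri = mean_surface_error(mri_model, reference)
```

For real bone meshes a k-d tree or point-to-triangle distance would replace the brute-force pairwise computation, but the error metric is the same.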
Abstract:
The behaviour of ion channels within cardiac and neuronal cells is intrinsically stochastic in nature. When the number of channels is small, this stochastic noise is large and can affect the dynamics of the system, which is potentially an issue when modelling small neurons and drug block in cardiac cells. While exact methods correctly capture the stochastic dynamics of a system, they are computationally expensive, restricting their inclusion in tissue-level models, and so approximations to exact methods are often used instead. The other issue in modelling ion channel dynamics is that the transition rates are voltage dependent, adding a level of complexity as the channel dynamics are coupled to the membrane potential. By assuming that such transition rates are constant over each time step, it is possible to derive a stochastic differential equation (SDE), in the same manner as for biochemical reaction networks, that describes the stochastic dynamics of ion channels. While such a model is more computationally efficient than exact methods, we show that there are analytical problems with the resulting SDE, as well as issues in using current numerical schemes to solve such an equation. We therefore make two contributions: we develop a different model of stochastic ion channel dynamics that analytically behaves in the correct manner, and we discuss numerical methods that preserve the analytical properties of the model.
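The chemical-Langevin-style SDE referred to above can be sketched for a single two-state (closed/open) channel population with hypothetical, voltage-clamped rates alpha and beta; this is the kind of approximate model the paper critiques, not the paper's corrected model. One analytical problem is visible in the sketch itself: the noise term can push the channel count outside [0, N], so a boundary fix (here, naive clipping) is needed.

```python
import numpy as np

def simulate_open_channels(N=100, alpha=0.5, beta=0.3, T=50.0, dt=0.01, seed=3):
    """Euler-Maruyama integration of the channel-count SDE
    dX = (alpha*(N - X) - beta*X) dt + sqrt(alpha*(N - X) + beta*X) dW,
    with rates held constant (voltage clamped) and counts clipped to [0, N]."""
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    x = np.empty(steps + 1)
    x[0] = 0.0
    for i in range(steps):
        drift = alpha * (N - x[i]) - beta * x[i]
        diff = np.sqrt(max(alpha * (N - x[i]) + beta * x[i], 0.0))
        x[i + 1] = x[i] + drift * dt + diff * np.sqrt(dt) * rng.normal()
        # Clip to keep the channel count in the physical range [0, N]
        x[i + 1] = min(max(x[i + 1], 0.0), N)
    return x

path = simulate_open_channels()
# The long-run mean should fluctuate around the deterministic
# steady state N * alpha / (alpha + beta) = 62.5
```

Ad hoc clipping changes the distribution at the boundaries, which is exactly the kind of artefact that motivates a model whose analytical behaviour is correct by construction.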
Abstract:
We consider the problem of how to construct robust designs for Poisson regression models. An analytical expression is derived for robust designs for first-order Poisson regression models where uncertainty exists in the prior parameter estimates. Given certain constraints in the methodology, it may be necessary to extend the robust designs for implementation in practical experiments. With these extensions, our methodology constructs designs which perform similarly, in terms of estimation, to current techniques, and offers the solution in a more timely manner. We further apply this analytic result to cases where uncertainty exists in the linear predictor. The application of this methodology to practical design problems such as screening experiments is explored. Given the minimal prior knowledge that is usually available when conducting such experiments, it is recommended to derive designs robust across a variety of systems. However, incorporating such uncertainty into the design process can be a computationally intense exercise. Hence, our analytic approach is explored as an alternative.
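A Bayesian D-optimality criterion of the kind this robustness problem involves can be sketched as follows: for a first-order Poisson regression with a log link, the Fisher information at design points x_i is the weighted sum of x_i x_i^T with weights exp(x_i^T theta), and a robust design maximises the expected log-determinant over the prior on theta. The prior, candidate designs and parameter values below are invented for illustration; this is not the paper's analytical construction.

```python
import numpy as np

def expected_log_det_info(design, theta_samples):
    """Average log-determinant of the Fisher information for a first-order
    Poisson regression with a log link, over draws from the parameter prior."""
    X = np.column_stack([np.ones(len(design)), design])
    vals = []
    for theta in theta_samples:
        w = np.exp(X @ theta)              # Poisson mean at each design point
        info = (X * w[:, None]).T @ X      # sum_i w_i * x_i x_i^T
        vals.append(np.linalg.slogdet(info)[1])
    return float(np.mean(vals))

rng = np.random.default_rng(4)
# Hypothetical prior: intercept near 0, slope near -2, both uncertain
prior = rng.normal([0.0, -2.0], 0.3, size=(200, 2))

spread = np.array([0.0, 0.25, 0.5, 0.75, 1.0])      # spread-out candidate design
clustered = np.array([0.45, 0.5, 0.5, 0.55, 0.5])   # clustered candidate design
d_spread = expected_log_det_info(spread, prior)
d_clustered = expected_log_det_info(clustered, prior)
```

Averaging over many prior draws is what makes this computationally intense as the design space grows, which is the motivation for the analytic approach the abstract proposes.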
Abstract:
Inverse problems based on using experimental data to estimate unknown parameters of a system often arise in biological and chaotic systems. In this paper, we consider parameter estimation in systems biology involving linear and non-linear complex dynamical models, including the Michaelis–Menten enzyme kinetic system, a dynamical model of competence induction in Bacillus subtilis bacteria and a model of feedback bypass in B. subtilis bacteria. We propose some novel techniques for inverse problems. Firstly, we establish an approximation of a non-linear differential algebraic equation that corresponds to the given biological systems. Secondly, we use the Picard contraction mapping, collage methods and numerical integration techniques to convert the parameter estimation into a minimization problem of the parameters. We propose two optimization techniques: a grid approximation method and a modified hybrid Nelder–Mead simplex search and particle swarm optimization (MH-NMSS-PSO) for non-linear parameter estimation. The two techniques are used for parameter estimation in a model of competence induction in B. subtilis bacteria with noisy data. The MH-NMSS-PSO scheme is applied to a dynamical model of competence induction in B. subtilis bacteria based on experimental data and the model for feedback bypass. Numerical results demonstrate the effectiveness of our approach.
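The grid approximation method mentioned above can be sketched for the Michaelis–Menten system: evaluate a least-squares objective on a grid of (Vmax, Km) pairs and take the minimiser. The true parameters, grid ranges and noise level below are invented for illustration, and the objective is plain squared error on the rate equation rather than the paper's collage-based formulation.

```python
import numpy as np

def michaelis_menten(S, Vmax, Km):
    """Reaction rate v = Vmax * S / (Km + S)."""
    return Vmax * S / (Km + S)

rng = np.random.default_rng(5)
S = np.linspace(0.5, 10, 20)                          # substrate concentrations
v_obs = michaelis_menten(S, 2.0, 1.5) + rng.normal(0, 0.02, S.size)  # noisy data

# Grid approximation: scan the squared-error objective over a parameter grid
Vmax_grid = np.linspace(0.5, 4.0, 71)
Km_grid = np.linspace(0.1, 5.0, 99)
best, best_err = None, np.inf
for Vmax in Vmax_grid:
    for Km in Km_grid:
        err = np.sum((v_obs - michaelis_menten(S, Vmax, Km)) ** 2)
        if err < best_err:
            best, best_err = (Vmax, Km), err
```

A grid scan is robust but scales poorly with the number of parameters, which is why the abstract pairs it with the MH-NMSS-PSO hybrid for the higher-dimensional B. subtilis models.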
Abstract:
Endocytosis is the process by which cells internalise molecules, including nutrient proteins, from the extracellular media. In one form, macropinocytosis, the membrane at the cell surface ruffles and folds over to give rise to an internalised vesicle. Negatively charged phospholipids within the membrane called phosphoinositides then undergo a series of transformations that are critical for the correct trafficking of the vesicle within the cell, and which are often pirated by pathogens such as Salmonella. Advanced fluorescent video microscopy imaging now allows the detailed observation and quantification of these events in live cells over time. Here we use these observations as a basis for building differential equation models of the transformations. In an initial investigation, these interactions were modelled with reaction rates proportional to the sum of the concentrations of the individual constituents, resulting in a first-order linear system for the concentrations. The structure of the system enables analytical expressions to be obtained, and the problem becomes one of determining the reaction rates that generate the observed data plots. We present results with reaction rates which capture the general behaviour of the reactions, so that we now have a complete mathematical model of phosphoinositide transformations that fits the experimental observations. Some excellent fits are obtained with modulated exponential functions; however, these are not solutions of the linear system. The question arises as to how the model may be modified to obtain a system whose solution provides a more accurate fit.
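The first-order linear system described above has the closed form c(t) = exp(At) c0, which can be evaluated analytically via an eigendecomposition of the rate matrix. The three-species chain and rate constants below are invented for illustration (a sequential conversion, species 1 → 2 → 3), not the paper's fitted phosphoinositide network.

```python
import numpy as np

# Hypothetical sequential conversion rates between three species
k12, k23 = 0.8, 0.5
A = np.array([[-k12, 0.0, 0.0],
              [k12, -k23, 0.0],
              [0.0, k23, 0.0]])

def solve_linear_system(A, c0, times):
    """Analytical solution c(t) = exp(At) c0 via eigendecomposition of A."""
    lam, V = np.linalg.eig(A)
    Vinv_c0 = np.linalg.solve(V, c0)
    return np.real([V @ (np.exp(lam * t) * Vinv_c0) for t in times])

c0 = np.array([1.0, 0.0, 0.0])        # all mass starts in the first species
times = np.linspace(0, 10, 6)
traj = solve_linear_system(A, c0, times)
# Columns of A sum to zero, so total concentration is conserved over time
```

Solutions of such a system are sums of pure exponentials exp(lam_i * t); this is why modulated exponentials, though they fit the data well, cannot be solutions of the linear model, as the abstract notes.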
Abstract:
Acoustic sensors play an important role in augmenting the traditional biodiversity monitoring activities carried out by ecologists and conservation biologists. With this ability, however, comes the burden of analysing large volumes of complex acoustic data. Given the complexity of acoustic sensor data, fully automated analysis for a wide range of species is still a significant challenge. This research investigates the use of citizen scientists to analyse large volumes of environmental acoustic data in order to identify bird species. Specifically, it investigates ways in which the efficiency of a user can be improved through the use of species identification tools and the use of reputation models to predict the accuracy of users with unidentified skill levels. Initial experimental results are reported.
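One simple reputation model consistent with the description above is a Beta-Bernoulli posterior mean over a user's labelling accuracy: it is a sketch of the general idea, not the reputation model this research actually uses. Smoothing with a weak prior means users with unidentified skill levels start at a neutral 0.5 rather than an undefined or extreme score.

```python
def reputation_score(n_correct, n_total, prior_successes=1, prior_trials=2):
    """Laplace-smoothed estimate of a volunteer's labelling accuracy:
    the posterior mean of a Beta(1, 1)-Bernoulli model, so a brand-new
    user scores 0.5 until evidence accumulates."""
    return (n_correct + prior_successes) / (n_total + prior_trials)

# A brand-new user versus an experienced, mostly accurate user
new_user = reputation_score(0, 0)    # no vetted labels yet
expert = reputation_score(48, 50)    # 48 of 50 vetted labels correct
```

Scores like these can then weight each user's species identifications when aggregating labels across many citizen scientists.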