377 results for Applied Statistics


Relevance: 20.00%

Abstract:

This article describes research conducted for the Japanese government in the wake of the magnitude 9.0 earthquake and tsunami that struck eastern Japan on March 11, 2011. In this study, material stock analysis (MSA) is used to examine the losses of building and infrastructure materials after the disaster. Estimates of the magnitude of material stock that lost its social function as a result of the disaster can indicate the quantities required for reconstruction, improve understanding of the volumes of waste flows the disaster generated, and inform policy deliberations on the recovery of disaster-stricken areas. Losses of building and road materials were calculated for the five prefectures most affected. The analysis is based on geographical information systems (GIS) databases and statistics; it aims to (1) describe in spatial terms what construction materials were lost, (2) estimate the amount of infrastructure material needed to rehabilitate disaster areas, and (3) indicate the amount of lost material stock that should be taken into consideration during government policy deliberations. Our analysis concludes that the material stock losses of buildings and road infrastructure are 31.8 and 2.1 million tonnes, respectively. This research approach and the use of spatial MSA can be useful for urban planners, and may also provide municipalities in disaster-afflicted areas with more appropriate information on waste disposal.
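Spatial MSA of this kind essentially multiplies GIS-derived quantities (e.g. collapsed floor area, washed-out road length) by per-unit material-intensity coefficients. A minimal sketch of that accounting step, with invented coefficients and quantities rather than the study's values:

```python
def lost_stock_tonnes(inventory, intensity):
    """Sum lost material stock: quantity (m^2 of floor, km of road, ...)
    times a material-intensity coefficient (tonnes per unit)."""
    return sum(qty * intensity[kind] for kind, qty in inventory)

# Invented intensities and an invented per-grid-cell loss inventory.
intensity = {"building_m2": 1.2, "road_km": 9000.0}   # t/m^2, t/km (assumed)
inventory = [("building_m2", 250000.0), ("road_km", 3.5)]
total = lost_stock_tonnes(inventory, intensity)
```

The same sum, run per grid cell, is what yields the spatial maps of lost stock described in aim (1).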

Relevance: 20.00%

Abstract:

In structural brain MRI, group differences or changes in brain structures can be detected using Tensor-Based Morphometry (TBM). This method consists of two steps: (1) a non-linear registration step that aligns all of the images to a common template, and (2) a subsequent statistical analysis. The numerous registration methods that have recently been developed differ in their detection sensitivity when used for TBM, and detection power is paramount in epidemiological studies or drug trials. We therefore developed a new fluid registration method that computes the mappings and performs statistics on them in a consistent way, providing a bridge between TBM registration and statistics. We used the Log-Euclidean framework to define a new regularizer that is a fluid extension of the Riemannian elasticity, which assures diffeomorphic transformations. This regularizer constrains the symmetrized Jacobian matrix, also called the deformation tensor. We applied our method to an MRI dataset from 40 fraternal and identical twins, revealing voxelwise measures of average volumetric differences in brain structure for subjects with different degrees of genetic resemblance.

Relevance: 20.00%

Abstract:

We present a new algorithm to compute the voxel-wise genetic contribution to brain fiber microstructure using diffusion tensor imaging (DTI) in a dataset of 25 monozygotic (MZ) and 25 dizygotic (DZ) twin pairs (100 subjects in total). First, the structural and DT scans were linearly co-registered. Structural MR scans were then nonlinearly mapped via a 3D fluid transformation to a geometrically centered mean template, and the deformation fields were applied to the DTI volumes. After tensor re-orientation to realign the tensors to the anatomy, we computed several scalar and multivariate DT-derived measures, including the geodesic anisotropy (GA), the tensor eigenvalues and the full diffusion tensors. A covariance-weighted distance was measured between twins in the Log-Euclidean framework [2] and used as input to a maximum-likelihood-based algorithm to compute the contributions of genetics (A), common environmental factors (C) and unique environmental factors (E) to fiber architecture. Quantitative genetic studies can thus take advantage of the full information in the diffusion tensor, using covariance-weighted distances and statistics on the tensor manifold.
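The maximum-likelihood A/C/E machinery is too involved for a snippet, but the decomposition it targets can be illustrated with Falconer's classical moment-based formulas on MZ/DZ twin correlations — a simpler estimator than the paper's ML algorithm, shown here with invented correlations:

```python
def falconer_ace(r_mz, r_dz):
    """Moment-based ACE decomposition from twin-pair correlations
    (Falconer's formulas; a simple stand-in for ML estimation)."""
    a2 = 2.0 * (r_mz - r_dz)   # additive genetic share of variance
    c2 = 2.0 * r_dz - r_mz     # common-environment share
    e2 = 1.0 - r_mz            # unique-environment share
    return a2, c2, e2

# Invented MZ and DZ resemblance values for one voxel-wise measure.
a2, c2, e2 = falconer_ace(r_mz=0.8, r_dz=0.5)
```

The three shares sum to one by construction; in the paper the role of the correlations is played by covariance-weighted Log-Euclidean distances between twins.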

Relevance: 20.00%

Abstract:

Twin studies are a major research direction in imaging genetics, a new field which combines algorithms from quantitative genetics and neuroimaging to assess genetic effects on the brain. In twin imaging studies, it is common to estimate the intraclass correlation (ICC), which measures the resemblance between twin pairs for a given phenotype. In this paper, we extend the commonly used Pearson correlation to a more appropriate definition based on restricted maximum likelihood (REML) methods. We computed the proportion of phenotypic variance due to additive genetic (A), common environmental (C) and unique environmental (E) factors, using a new definition of the variance components in the diffusion tensor-valued signals. We applied our analysis to a dataset of diffusion tensor images (DTI) from 25 identical and 25 fraternal twin pairs. Differences between the REML and Pearson estimators were plotted for different sample sizes, showing that the REML approach avoids severe biases in smaller samples. Measures of genetic effects were computed for scalar and multivariate diffusion tensor-derived measures, including the geodesic anisotropy (tGA) and the full diffusion tensors (DT), revealing voxel-wise genetic contributions to brain fiber microstructure.
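The REML estimator itself is beyond a snippet, but the pair-resemblance quantity it refines can be sketched with the classical one-way ANOVA form of the ICC — used here only as a hedged illustration, with invented twin-pair data:

```python
def anova_icc(pairs):
    """One-way ANOVA intraclass correlation for twin pairs (k = 2):
    (MSB - MSW) / (MSB + MSW), a classical alternative to Pearson's r."""
    n = len(pairs)
    grand = sum(x + y for x, y in pairs) / (2 * n)
    msb = sum(2 * ((x + y) / 2 - grand) ** 2 for x, y in pairs) / (n - 1)
    msw = sum((x - (x + y) / 2) ** 2 + (y - (x + y) / 2) ** 2
              for x, y in pairs) / n
    return (msb - msw) / (msb + msw)

# Invented phenotype values for five highly concordant twin pairs.
pairs = [(1.0, 1.1), (2.0, 1.9), (3.0, 3.2), (4.0, 3.8), (5.0, 5.1)]
icc = anova_icc(pairs)
```

Unlike the double-entry Pearson correlation, this form treats pair membership symmetrically; the paper's REML version additionally corrects its small-sample bias.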

Relevance: 20.00%

Abstract:

Modelling fluvial processes is an effective way to reproduce basin evolution and to recreate riverbed morphology. However, due to the complexity of alluvial environments, deterministic modelling of fluvial processes is often impossible. To address the related uncertainties, we derive a stochastic fluvial process model on the basis of the convective Exner equation that uses the statistics (mean and variance) of river velocity as input parameters. These statistics allow for quantifying the uncertainty in riverbed topography, river discharge and position of the river channel. To couple the velocity statistics and the fluvial process model, the perturbation method is employed with a non-stationary spectral approach to decompose the Exner equation into two separate equations: the first is the mean equation, which yields the mean sediment thickness, and the second is the perturbation equation, which yields the variance of sediment thickness. The resulting solutions offer an effective tool for characterizing alluvial aquifers formed by fluvial processes, one that allows the stochasticity of the paleoflow velocity to be incorporated.
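The perturbation/spectral derivation is analytic, but the quantities it targets — the mean and variance of sediment thickness under a random flow velocity — can be illustrated by brute-force Monte Carlo on the simplest advective picture, h(x, t) = h0(x − c·t) with random c. The bed profile and velocity statistics below are invented for illustration:

```python
import math
import random

def thickness_stats(x, t, h0, c_mean, c_std, n_draws=20000, seed=1):
    """Monte Carlo mean and variance of bed thickness at (x, t) when the
    advection velocity c is random (sketch of the stochastic Exner idea)."""
    rng = random.Random(seed)
    samples = [h0(x - rng.gauss(c_mean, c_std) * t) for _ in range(n_draws)]
    m = sum(samples) / n_draws
    v = sum((s - m) ** 2 for s in samples) / n_draws
    return m, v

def bump(x):
    """Invented initial bed profile: a Gaussian sediment bump."""
    return math.exp(-x * x)

m, v = thickness_stats(x=0.0, t=1.0, h0=bump, c_mean=1.0, c_std=0.3)
```

With c_std = 0 the variance collapses to zero, recovering the deterministic Exner solution; the paper's perturbation equations deliver the same two moments in closed form.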

Relevance: 20.00%

Abstract:

The practice of statistics is the focus of the world in which professional statisticians live. To understand meaningfully what this practice is about, students need to engage in it themselves. Acknowledging the limitations of a genuine classroom setting, this study attempted to expose four classes of year 5 students (n=91) to an authentic experience of the practice of statistics. Setting an overall context of people's habits that are considered environmentally friendly, the students sampled their class and set criteria for being environmentally friendly based on questions from the Australian Bureau of Statistics CensusAtSchool site. They then analysed the data and made decisions, acknowledging their degree of certainty, about three populations based on their criteria: their class, year 5 students in their school, and year 5 students in Australia. The next step was to collect a random sample the size of their class from an Australian Bureau of Statistics 'population', analyse it, and again make a decision about Australian year 5 students. Finally, they suggested what further research they might do. The analysis of students' responses gives insight into primary students' capacity to appreciate and understand decision making and to participate in the practice of statistics, a topic that has received very little attention in the literature. Of a total possible score of 23 from student workbook entries, 80% of students achieved a score of at least 11.

Relevance: 20.00%

Abstract:

The fate and transport of tricyclazole and imidacloprid in paddy plots after nursery-box application were monitored. Water and surface soil samples were collected over a period of 35 days. Rates of dissipation from paddy waters and soils were also measured. Dissipation of the two pesticides from paddy water can be described by first-order kinetics. In the soil, only the dissipation of imidacloprid fitted simple first-order kinetics, whereas tricyclazole concentrations fluctuated until the end of the monitoring period. Mean half-life (DT50) values for tricyclazole were 11.8 and 305 days in paddy water and surface soil, respectively. The corresponding values for imidacloprid were 2.0 and 12.5 days in water and surface soil, respectively. Less than 0.9% of tricyclazole and 0.1% of imidacloprid were lost through runoff during the monitoring period, even under 6.3 cm of rainfall. The pesticide formulation seemed to affect the environmental fate of these pesticides when these results were compared with those of other studies.
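For first-order dissipation, C(t) = C0·e^(−kt), so DT50 = ln 2 / k, with k recoverable from a log-linear fit of the concentration series. A sketch with a synthetic series (not the paper's measurements) constructed to decay with a 2-day half-life, like imidacloprid in paddy water:

```python
import math

def dt50_first_order(times, concs):
    """Estimate DT50 from first-order decay by fitting ln(C) = ln(C0) - k*t
    with ordinary least squares, then DT50 = ln(2) / k."""
    n = len(times)
    y = [math.log(c) for c in concs]
    tbar = sum(times) / n
    ybar = sum(y) / n
    slope = (sum((t - tbar) * (yi - ybar) for t, yi in zip(times, y))
             / sum((t - tbar) ** 2 for t in times))
    return math.log(2.0) / -slope

# Synthetic noise-free monitoring series with a true DT50 of 2 days.
times = [0, 1, 2, 3, 4, 5]
concs = [100.0 * 0.5 ** (t / 2.0) for t in times]
half_life = dt50_first_order(times, concs)
```

With real field data the points scatter around the fitted line, and the tricyclazole soil series in this study would not fit at all — which is exactly the paper's observation.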

Relevance: 20.00%

Abstract:

In the analysis of longitudinal data, the variance matrix of the parameter estimates is usually estimated by the 'sandwich' method, in which the variance for each subject is estimated from its residual products. We propose smooth bootstrap methods that perturb the estimating functions to obtain 'bootstrapped' realizations of the parameter estimates for statistical inference. Our extensive simulation studies indicate that the variance estimators obtained by our proposed methods not only correct the bias of the sandwich estimator but also improve confidence interval coverage. We applied the proposed method to a data set from a clinical trial of antibiotics for leprosy.
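The core idea — reweight each subject's contribution to the estimating equation with a smooth random multiplier instead of resampling subjects — can be sketched for the simplest estimating equation, the sample mean. This is a hedged stand-in with invented data, not the authors' exact scheme for longitudinal GEE-type estimators:

```python
import random

def perturbation_bootstrap_mean(data, n_boot=2000, seed=0):
    """Perturbation ('smooth') bootstrap of a sample mean: each subject's
    estimating-function contribution gets an exponential(1) weight."""
    rng = random.Random(seed)
    n = len(data)
    reps = []
    for _ in range(n_boot):
        w = [rng.expovariate(1.0) for _ in range(n)]   # mean-1 weights
        reps.append(sum(wi * xi for wi, xi in zip(w, data)) / sum(w))
    m = sum(reps) / n_boot
    var = sum((r - m) ** 2 for r in reps) / (n_boot - 1)
    return m, var

# Invented per-subject summaries from a small trial.
data = [1.2, 0.8, 1.5, 0.9, 1.1, 1.4, 0.7, 1.3]
boot_mean, boot_var = perturbation_bootstrap_mean(data)
```

The empirical variance of the bootstrapped realizations then replaces the sandwich estimate, and their quantiles give confidence intervals.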

Relevance: 20.00%

Abstract:

Background: Epidemiological and clinical studies suggest comorbidity between prostate cancer (PCA) and cardiovascular disease (CVD) risk factors. However, the relationship between these two phenotypes is still not well understood. Here we sought to identify shared genetic loci between PCA and CVD risk factors.

Methods: We applied a genetic epidemiology method based on the conjunction false discovery rate (FDR) that combines summary statistics from different genome-wide association studies (GWAS) and allows identification of genetic overlap between two phenotypes. We evaluated summary statistics from large, multi-centre GWA studies of PCA (n = 50 000) and CVD risk factors (n = 200 000): triglycerides (TG), low-density lipoprotein (LDL) cholesterol, high-density lipoprotein (HDL) cholesterol, systolic blood pressure, body mass index, waist-hip ratio and type 2 diabetes (T2D). Enrichment of single nucleotide polymorphisms (SNPs) associated with PCA and CVD risk factors was assessed with conditional quantile-quantile plots and the Anderson-Darling test. We then pinpointed shared loci using the conjunction FDR.

Results: The strongest enrichment of PCA P-values was found conditional on LDL and conditional on TG. In contrast, we found only weak enrichment conditional on HDL or on the other traits investigated. Conjunction FDR identified 17 loci altogether: 10 loci were associated with PCA and LDL, 3 loci with PCA and TG, and a further 4 loci with PCA, LDL and TG jointly (conjunction FDR < 0.01). For T2D, we detected one locus adjacent to HNF1B.

Conclusions: We found polygenic overlap between PCA predisposition and blood lipids, in particular LDL and TG, and identified 17 pleiotropic gene loci between PCA and LDL, and PCA and TG, respectively. These findings provide novel pathobiological insights and may have implications for trials using lipid-lowering agents in a prevention or cancer setting.
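The conjunction FDR for a SNP is commonly taken as the maximum of the two trait-conditional FDR values, so a locus is flagged only when it passes the threshold for both phenotypes. A minimal sketch with invented per-locus FDR values (not results from this study):

```python
def conjunction_fdr(fdr_trait1, fdr_trait2):
    """Per-SNP conjunction FDR as the maximum of the two conditional FDRs
    (the conservative definition commonly used in conjFDR analyses)."""
    return [max(a, b) for a, b in zip(fdr_trait1, fdr_trait2)]

# Invented conditional FDRs for three loci: only locus 0 is shared.
fdr_pca = [0.004, 0.200, 0.008]
fdr_ldl = [0.006, 0.009, 0.500]
conj = conjunction_fdr(fdr_pca, fdr_ldl)
shared = [i for i, q in enumerate(conj) if q < 0.01]
```

Applying this threshold genome-wide, with real conditional FDRs, is what yields the 17 pleiotropic loci reported above.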

Relevance: 20.00%

Abstract:

In the analysis of tagging data, it has been found that the least-squares method, based on the increment function known as the Fabens method, produces biased estimates because individual variability in growth is not allowed for. This paper modifies the Fabens method to account for individual variability in the length asymptote. Significance tests using t-statistics or log-likelihood ratio statistics may be applied to show the level of individual variability. Simulation results indicate that the modified method reduces the biases in the estimates to negligible proportions. Tagging data from tiger prawns (Penaeus esculentus and Penaeus semisulcatus) and rock lobster (Panulirus ornatus) are analysed as an illustration.
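The Fabens method fits the von Bertalanffy increment ΔL = (L∞ − L1)(1 − e^(−K·Δt)) to tag-recapture records by least squares; the paper's modification then lets the asymptote L∞ vary between individuals. A sketch of the basic objective, with invented tag-recapture records generated noise-free from L∞ = 60, K = 0.5:

```python
import math

def fabens_increment(l1, dt, linf, k):
    """Expected von Bertalanffy growth increment for an animal of length l1
    at release, recaptured after time dt (the Fabens increment function)."""
    return (linf - l1) * (1.0 - math.exp(-k * dt))

def fabens_sse(data, linf, k):
    """Least-squares objective over (release length, time at liberty,
    observed increment) records."""
    return sum((dl - fabens_increment(l1, dt, linf, k)) ** 2
               for l1, dt, dl in data)

# Invented records: three animals, increments generated with Linf=60, K=0.5.
releases = [(20.0, 1.0), (35.0, 0.5), (50.0, 2.0)]
data = [(l1, dt, fabens_increment(l1, dt, 60.0, 0.5)) for l1, dt in releases]
```

Minimizing fabens_sse over (L∞, K) gives the Fabens estimates; the bias the paper addresses arises because real animals do not share a single L∞, so the residuals are not exchangeable.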

Relevance: 20.00%

Abstract:

Between-subject and within-subject variability are ubiquitous in biology and physiology, and understanding and dealing with them is one of the biggest challenges in medicine. At the same time, this variability is difficult to investigate by experiments alone. A recent modelling and simulation approach, known as population of models (POM), allows this exploration to take place by building a mathematical model with multiple parameter sets calibrated against experimental data. However, finding such sets within the high-dimensional parameter space of complex electrophysiological models is computationally challenging. By placing the POM approach within a statistical framework, we develop a novel and efficient algorithm based on sequential Monte Carlo (SMC). We compare the SMC approach with Latin hypercube sampling (LHS), a method commonly adopted in the literature for obtaining the POM, in terms of efficiency and output variability in the presence of a drug block, through an in-depth investigation of the Beeler-Reuter cardiac electrophysiological model. We show improved efficiency via SMC, and that it produces similar responses to LHS when making out-of-sample predictions in the presence of a simulated drug block.
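LHS, the baseline the SMC approach is compared against, stratifies each parameter axis into n equal slices and places exactly one sample per slice in every dimension. A dependency-free sketch of sampling candidate parameter sets in the unit hypercube:

```python
import random

def latin_hypercube(n, dim, seed=0):
    """Latin hypercube sample of n points in [0, 1]^dim: each axis is cut
    into n strata and every stratum receives exactly one point."""
    rng = random.Random(seed)
    cols = []
    for _ in range(dim):
        strata = [(i + rng.random()) / n for i in range(n)]  # one per slice
        rng.shuffle(strata)                                  # decouple axes
        cols.append(strata)
    return list(zip(*cols))

pts = latin_hypercube(n=10, dim=3)
```

In the POM setting each point is rescaled to the physiological parameter ranges and kept only if the simulated model output matches the experimental calibration data — the acceptance step the SMC algorithm makes far more efficient.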

Relevance: 20.00%

Abstract:

To improve the rehabilitation program for individuals with transfemoral amputation fitted with a bone-anchored prosthesis, using data from direct measurements of the load applied on the residuum, we first need to understand the load applied on the fixation. The load applied on the residuum was therefore first measured directly during standardized activities of daily living, such as straight-line level walking, ascending and descending stairs and a ramp, and walking around a circle. The load was then also measured during different phases of the rehabilitation program, such as walking with walking aids and load bearing exercises.[1-15] The rehabilitation program for individuals with a transfemoral amputation fitted with an OPRA implant relies on a combination of dynamic and static load bearing exercises (LBE).[16-20] This presentation focuses on the study of a set of experimental static load bearing exercises.[1] A group of eleven individuals with unilateral transfemoral amputation fitted with an OPRA implant participated in this study. The load on the implant during the static load bearing exercises was measured using a portable system comprising a commercial transducer embedded in a short pylon, a laptop and a customized software package. This apparatus was previously shown to be effective in a proof-of-concept study published by Prof. Frossard.[1-9] The analysis of the static load bearing exercises covered both loading reliability and loading compliance. The reliability analysis showed high reliability between loading sessions, indicating correct repetition of the LBE by the participants.[1, 5] The compliance analysis showed a significant lack of axial compliance, leading to systematic underloading of the long axis of the implant during the proposed experimental static LBE.

Relevance: 20.00%

Abstract:

Population structure, including population stratification and cryptic relatedness, can cause spurious associations in genome-wide association studies (GWAS). Usually, the scaled median or mean test statistic for association calculated from multiple single-nucleotide-polymorphisms across the genome is used to assess such effects, and 'genomic control' can be applied subsequently to adjust test statistics at individual loci by a genomic inflation factor. Published GWAS have clearly shown that there are many loci underlying genetic variation for a wide range of complex diseases and traits, implying that a substantial proportion of the genome should show inflation of the test statistic. Here, we show by theory, simulation and analysis of data that in the absence of population structure and other technical artefacts, but in the presence of polygenic inheritance, substantial genomic inflation is expected. Its magnitude depends on sample size, heritability, linkage disequilibrium structure and the number of causal variants. Our predictions are consistent with empirical observations on height in independent samples of ~4000 and ~133,000 individuals.
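The genomic inflation factor behind 'genomic control' is simply the median observed association statistic divided by the median of the null chi-square(1) distribution (≈0.455). A sketch with simulated null statistics (squared standard normals), for which lambda should sit near 1:

```python
import random
import statistics

CHI2_1DF_MEDIAN = 0.45494  # median of the chi-square distribution with 1 df

def genomic_inflation(chi2_stats):
    """Genomic-control inflation factor lambda: median observed chi-square
    over the null chi-square(1) median (sketch)."""
    return statistics.median(chi2_stats) / CHI2_1DF_MEDIAN

# Simulated null association statistics for ~20 000 independent SNPs.
rng = random.Random(7)
null_chi2 = [rng.gauss(0.0, 1.0) ** 2 for _ in range(20001)]
lam = genomic_inflation(null_chi2)
```

The paper's point is that under polygenic inheritance lambda exceeds 1 even without stratification or artefacts — a uniform scaling-up of the statistics (as true signal spread across many loci produces) inflates the median, and hence lambda, proportionally.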

Relevance: 20.00%

Abstract:

Environmental changes have put great pressure on biological systems, leading to the rapid decline of biodiversity. To monitor this change and protect biodiversity, animal vocalizations have been widely explored with the aid of acoustic sensors deployed in the field. Consequently, large volumes of acoustic data are collected. Traditional manual methods that require ecologists to physically visit sites to collect biodiversity data are both costly and time consuming, so it is essential to develop new semi-automated and automated methods to identify species in these audio recordings. In this study, a novel feature extraction method based on wavelet packet decomposition (WPD) is proposed for frog call classification. After syllable segmentation, the advertisement call of each frog syllable is represented by a spectral peak track, from which track duration, dominant frequency and oscillation rate are calculated. Then, a k-means clustering algorithm is applied to the dominant frequencies, and the centroids of the clustering results are used to generate the frequency scale for WPD. Next, a new feature set, named adaptive-frequency-scaled wavelet packet decomposition sub-band cepstral coefficients, is extracted by performing WPD on the windowed frog calls. Furthermore, the statistics of all feature vectors over each windowed signal are calculated to produce the final feature set. Finally, two well-known classifiers, a k-nearest neighbour classifier and a support vector machine classifier, are used for classification. In our experiments, we use two different datasets from Queensland, Australia: 18 frog species from commercial recordings, and field recordings of 8 frog species from James Cook University. The weighted classification accuracy of our proposed method is 99.5% and 97.4% for the 18 and 8 frog species respectively, outperforming all other comparable methods.
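The frequency-scale step can be sketched directly: cluster the dominant frequencies with k-means, then derive the WPD sub-band boundaries from the centroids. A dependency-free 1-D k-means with invented dominant frequencies (three hypothetical species):

```python
def kmeans_1d(values, k, iters=50):
    """Plain 1-D k-means, enough to cluster dominant frequencies (sketch)."""
    centroids = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            j = min(range(k), key=lambda i: abs(v - centroids[i]))
            groups[j].append(v)
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return sorted(centroids)

# Invented dominant frequencies (Hz) from segmented syllables.
freqs = [900, 950, 980, 2500, 2550, 2600, 4100, 4200, 4150]
centres = kmeans_1d(freqs, k=3)
boundaries = [(a + b) / 2 for a, b in zip(centres, centres[1:])]
```

The boundaries (here midpoints between adjacent centroids, one plausible choice) define an adaptive frequency scale concentrating WPD resolution where the calls actually sit, rather than spacing sub-bands uniformly.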

Relevance: 20.00%

Abstract:

In competitive combat sports like boxing, statistics on a boxer's performance, including the number and type of punches thrown, provide a valuable source of data and feedback that is routinely used for coaching and performance improvement. This paper presents a robust framework for the automatic classification of a boxer's punches. Overhead depth imagery is employed to alleviate challenges associated with occlusions, and robust body-part tracking is developed for the noisy time-of-flight sensors. Punch recognition is addressed through both multi-class SVM and Random Forest classifiers. A coarse-to-fine hierarchical SVM classifier is presented based on prior knowledge of boxing punches. This framework has been applied to shadow-boxing image sequences of 8 elite boxers taken at the Australian Institute of Sport. Results demonstrate the effectiveness of the proposed approach, with the hierarchical SVM classifier yielding 96% accuracy, signifying its suitability for analysing athletes' punches in boxing bouts.
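The coarse-to-fine idea: a first classifier picks the punch family, and a family-specific classifier then refines the label. A toy sketch of that control flow, with nearest-centroid stand-ins for the per-node SVMs and invented 2-D features and centroids:

```python
def nearest_centroid(x, centroids):
    """Tiny stand-in classifier (each tree node in the paper is an SVM)."""
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centroids[c])))

# Invented feature centroids. Coarse node: punch family; fine node: the hand.
coarse = {"straight": (0.0, 0.0), "hook": (5.0, 5.0)}
fine = {"straight": {"left straight": (0.0, -1.0), "right straight": (0.0, 1.0)},
        "hook": {"left hook": (5.0, 4.0), "right hook": (5.0, 6.0)}}

def classify_punch(x):
    family = nearest_centroid(x, coarse)      # coarse stage
    return nearest_centroid(x, fine[family])  # fine stage within the family

label = classify_punch((0.2, 0.8))
```

The appeal of the hierarchy is that each node solves an easier, lower-class-count problem, and prior knowledge of punch taxonomy fixes the tree structure rather than learning it.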