851 results for statistical methods


Relevance: 60.00%

Abstract:

Statistical methods for describing prosody were used to study fluency, expressiveness and their relationship among 8- to 10-year-old readers. There were robust relationships between expressiveness and variables associated with pitch mobility, and between fluency and measures associated with temporal organization. Interactions indicated that these relationships were not simple: differences between groups depended on sentence content and position. Some measures offer a basis for rules aimed at assigning individuals to skill categories, and the effects suggest psychological hypotheses about the underlying mechanisms.
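
The kind of descriptive link reported here, between a pitch-mobility variable and expressiveness, can be sketched with a simple correlation. The feature and the ratings below are hypothetical stand-ins on synthetic data, not the study's actual measures:

```python
# Hypothetical sketch: correlate a pitch-mobility measure with expressiveness ratings.
import numpy as np
from scipy import stats

# f0_tracks: per-reader fundamental-frequency contours (Hz); expressiveness: matching ratings
f0_tracks = [np.random.default_rng(i).normal(200, 20, 100) for i in range(30)]
expressiveness = np.random.default_rng(0).uniform(1, 7, 30)

def pitch_mobility(f0):
    # one simple "pitch mobility" variable: F0 spread in semitones around the median
    semitones = 12 * np.log2(f0 / np.median(f0))
    return semitones.std()

mobility = np.array([pitch_mobility(f0) for f0 in f0_tracks])
r, p = stats.pearsonr(mobility, expressiveness)
print(f"r = {r:.2f}, p = {p:.3f}")
```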

Relevance: 60.00%

Abstract:

An optimal search theory, the so-called Lévy-flight foraging hypothesis [1], predicts that predators should adopt search strategies known as Lévy flights where prey is sparse and distributed unpredictably, but that Brownian movement is sufficiently efficient for locating abundant prey [2-4]. Empirical studies have generated controversy because the accuracy of statistical methods that have been used to identify Lévy behaviour has recently been questioned [5,6]. Consequently, whether foragers exhibit Lévy flights in the wild remains unclear. Crucially, moreover, it has not been tested whether observed movement patterns across natural landscapes having different expected resource distributions conform to the theory's central predictions. Here we use maximum-likelihood methods to test for Lévy patterns in relation to environmental gradients in the largest animal movement data set assembled for this purpose. Strong support was found for Lévy search patterns across 14 species of open-ocean predatory fish (sharks, tuna, billfish and ocean sunfish), with some individuals switching between Lévy and Brownian movement as they traversed different habitat types. We tested the spatial occurrence of these two principal patterns and found Lévy behaviour to be associated with less productive waters (sparser prey) and Brownian movements to be associated with productive shelf or convergence-front habitats (abundant prey). These results are consistent with the Lévy-flight foraging hypothesis [1,7], supporting the contention [8,9] that organism search strategies naturally evolved in such a way that they exploit optimal Lévy patterns.
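
The maximum-likelihood comparison at the heart of such tests can be illustrated by fitting a power-law (Lévy) and an exponential (Brownian) model to step lengths above a threshold and comparing them by AIC. This is a minimal sketch on synthetic data, not the authors' full pipeline, which used bounded distributions and environmental covariates:

```python
# Minimal sketch: likelihood-based comparison of Levy vs Brownian step-length models.
import numpy as np

rng = np.random.default_rng(42)
x_min = 1.0
steps = x_min * (1 - rng.random(1000)) ** (-1 / 1.2)  # synthetic Levy-like steps, mu ~ 2.2

# MLE for the power-law exponent mu (continuous case, x >= x_min)
mu_hat = 1 + len(steps) / np.log(steps / x_min).sum()
ll_levy = len(steps) * np.log((mu_hat - 1) / x_min) - mu_hat * np.log(steps / x_min).sum()

# MLE for an exponential model on the same support
lam_hat = 1 / (steps.mean() - x_min)
ll_exp = len(steps) * np.log(lam_hat) - lam_hat * (steps - x_min).sum()

# both models have one free parameter, so AIC reduces to comparing log-likelihoods
aic_levy, aic_exp = 2 - 2 * ll_levy, 2 - 2 * ll_exp
print("Levy favoured" if aic_levy < aic_exp else "Brownian favoured")
```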

Relevance: 60.00%

Abstract:

The quick, easy way to master all the statistics you'll ever need. The bad news first: if you want a psychology degree you'll need to know statistics. Now for the good news: Psychology Statistics For Dummies. Featuring jargon-free explanations, step-by-step instructions and dozens of real-life examples, Psychology Statistics For Dummies makes the knotty world of statistics a lot less baffling. Rather than padding the text with concepts and procedures irrelevant to the task, the authors focus only on the statistics psychology students need to know. As an alternative to typical, lead-heavy statistics texts or supplements to assigned course reading, this is one book psychology students won't want to be without.

- Ease into statistics: start out with an introduction to how statistics are used by psychologists, including the types of variables they use and how they measure them
- Get your feet wet: quickly learn the basics of descriptive statistics, such as central tendency and measures of dispersion, along with common ways of graphically depicting information
- Meet your new best friend: learn the ins and outs of SPSS, the most popular statistics software package among psychology students, including how to input, manipulate and analyse data
- Analyse this: get up to speed on statistical analysis core concepts, such as probability and inference, hypothesis testing, distributions, Z-scores and effect sizes
- Correlate that: get the lowdown on common procedures for defining relationships between variables, including linear regressions, associations between categorical data and more
- Analyse by inference: master key methods in inferential statistics, including techniques for analysing independent groups designs and repeated-measures research designs

Open the book and find:

- Ways to describe statistical data
- How to use SPSS statistical software
- Probability theory and statistical inference
- Descriptive statistics basics
- How to test hypotheses
- Correlations and other relationships between variables
- Core concepts in statistical analysis for psychology
- Analysing research designs

Learn to:

- Use SPSS to analyse data
- Master statistical methods and procedures using psychology-based explanations and examples
- Create better reports
- Identify key concepts and pass your course

Relevance: 60.00%

Abstract:

Gene expression data can provide a very rich source of information for elucidating biological function at the pathway level, provided the experimental design considers the needs of the statistical analysis methods. The purpose of this paper is to provide a comparative analysis of statistical methods for detecting differentially expressed pathways (DEP). In contrast to many other studies conducted so far, we use three novel simulation types, producing a more realistic correlation structure than previous simulation methods. This also includes the generation of surrogate data from two large-scale microarray experiments, from prostate cancer and acute lymphoblastic leukaemia (ALL). As a result of our comprehensive analysis of 41,004 parameter configurations, we find that each method should only be applied if certain conditions of the data from a pathway are met. Further, we provide method-specific estimates of the optimal sample size for microarray experiments aiming to identify DEP, in order to avoid an underpowered design. Our study highlights the sensitivity of the studied methods to the parameters of the system. © 2012 Tripathi and Emmert-Streib.
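
One simple way to test for a differentially expressed pathway, in the spirit of the methods compared here (though not necessarily one of the paper's exact candidates), is a permutation test on a pathway-level score built from gene-level t-statistics:

```python
# Illustrative permutation test for differential expression of a pathway (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
genes_in_pathway = 25
expr_a = rng.normal(0.0, 1.0, (genes_in_pathway, 20))  # condition A, 20 samples
expr_b = rng.normal(0.3, 1.0, (genes_in_pathway, 20))  # condition B, mean-shifted

def pathway_score(a, b):
    # pathway score: absolute mean of per-gene two-sample t-statistics
    t, _ = stats.ttest_ind(a, b, axis=1)
    return np.abs(t.mean())

observed = pathway_score(expr_a, expr_b)
pooled = np.concatenate([expr_a, expr_b], axis=1)
null = []
for _ in range(2000):
    perm = rng.permutation(pooled.shape[1])  # shuffle sample labels
    null.append(pathway_score(pooled[:, perm[:20]], pooled[:, perm[20:]]))
p_value = (np.sum(np.array(null) >= observed) + 1) / (len(null) + 1)
print(f"pathway p = {p_value:.4f}")
```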

Relevance: 60.00%

Abstract:

Background: Evidence suggests that in prokaryotes sequence-dependent transcriptional pauses affect the dynamics of transcription and translation, as well as of small genetic circuits. So far, a few pause-prone sequences have been identified from in vitro measurements of transcription elongation kinetics.

Results: Using a stochastic model of gene expression at the nucleotide and codon levels with realistic parameter values, we investigate three different but related questions and present statistical methods for their analysis. First, we show that information from in vivo RNA and protein temporal numbers is sufficient to discriminate between models with and without a pause site in their coding sequence. Second, we demonstrate that it is possible to separate a large variety of models from each other, with pauses of various durations and locations in the template, by means of hierarchical clustering and a random forest classifier. Third, we introduce an approximate likelihood function that allows the location of a pause site to be estimated.

Conclusions: This method can aid in detecting unknown pause-prone sequences from temporal measurements of RNA and protein numbers at a genome-wide scale and thus elucidate possible roles that these sequences play in the dynamics of genetic networks and phenotype.
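
A minimal sketch of the classification step described in the Results above, assuming summary statistics of simulated RNA/protein time series as features; the feature set and the synthetic trajectories are illustrative only, not the paper's simulation model:

```python
# Sketch: random forest separating simulated models with vs without a pause site.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def summarise(series):
    # simple summary features of a molecule-number time series
    return [series.mean(), series.std(), np.diff(series).std()]

# synthetic stand-ins for trajectories simulated with/without a pause site
no_pause = [summarise(rng.poisson(10, 200).astype(float)) for _ in range(100)]
with_pause = [summarise(rng.poisson(8, 200).astype(float) + rng.normal(0, 2, 200))
              for _ in range(100)]

X = np.array(no_pause + with_pause)
y = np.array([0] * 100 + [1] * 100)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # classification accuracy
```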

Relevance: 60.00%

Abstract:

Cross-sectional and longitudinal data consistently indicate that mathematical difficulties are more prevalent in older than in younger children (e.g. Department of Education, 2011). Children's trajectories can take a variety of shapes, such as linear, flat, curvilinear and uneven, and shape has been found to vary within children and across tasks (Jordan et al., 2009). There has been an increase in the use of statistical methods specifically designed to study development, and this has greatly improved our understanding of children's mathematical development. However, the effects of many cognitive and social variables (e.g. working memory and verbal ability) on mathematical development are unclear. Greater consistency between studies is likely to be achieved by adopting a componential approach to studying mathematics, rather than treating mathematics as a unitary concept.
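
One statistical method specifically designed to study development is growth-curve modelling with a random slope per child. A hedged sketch on synthetic data (column names are hypothetical, not the studies' variables):

```python
# Illustrative growth-curve (mixed-effects) model of mathematical development.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_children, n_waves = 50, 4
data = pd.DataFrame({
    "child": np.repeat(np.arange(n_children), n_waves),
    "wave": np.tile(np.arange(n_waves), n_children),
})
slopes = rng.normal(2.0, 0.5, n_children)  # child-specific growth rates
data["maths"] = 20 + slopes[data["child"]] * data["wave"] + rng.normal(0, 1, len(data))

# random intercept and random slope for wave, grouped by child
model = smf.mixedlm("maths ~ wave", data, groups=data["child"], re_formula="~wave")
print(model.fit().summary())
```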

Relevance: 60.00%

Abstract:

The Kawakawa/Oruanui tephra (KOT) is a key chronostratigraphic marker in terrestrial and marine deposits of the New Zealand (NZ) sector of the southwest Pacific. Erupted early during the Last Glacial Maximum (LGM), the wide distribution of the KOT enables inter-regional alignment of proxy records and facilitates comparison between NZ climatic variations and those from well-dated records elsewhere. We present 22 new radiocarbon ages for the KOT from sites and materials considered optimal for dating, and apply Bayesian statistical methods, via OxCal 4.1.7, that incorporate stratigraphic information to develop a new age probability model for KOT. The revised calibrated age, ±2 standard deviations, for the eruption of the KOT is 25,360 ± 160 cal yr BP. The age revision provides a basis for refining marine reservoir ages for the LGM in the southwest Pacific.
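
The full OxCal age model is beyond a short example, but the underlying idea of pooling replicate radiocarbon determinations can be sketched with the classical error-weighted mean and a chi-square consistency check (Ward and Wilson style); the ages below are invented for illustration, not the 22 new determinations:

```python
# Not the OxCal Bayesian model: a minimal sketch of pooling replicate 14C determinations.
import numpy as np
from scipy import stats

ages = np.array([21050, 20980, 21120, 21010])  # uncalibrated 14C yr BP (illustrative)
errors = np.array([60, 55, 70, 65])

weights = 1 / errors**2
pooled_age = (weights * ages).sum() / weights.sum()
pooled_error = np.sqrt(1 / weights.sum())

# chi-square statistic: do the determinations agree within their quoted errors?
T = (((ages - pooled_age) / errors) ** 2).sum()
p = stats.chi2.sf(T, df=len(ages) - 1)
print(f"pooled: {pooled_age:.0f} +/- {pooled_error:.0f} 14C yr BP (consistency p = {p:.2f})")
```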

Relevance: 60.00%

Abstract:

Aims/hypothesis: Diabetic nephropathy is a major diabetic complication, and diabetes is the leading cause of end-stage renal disease (ESRD). Family studies suggest a hereditary component for diabetic nephropathy. However, only a few genes have been associated with diabetic nephropathy or ESRD in diabetic patients. Our aim was to detect novel genetic variants associated with diabetic nephropathy and ESRD. Methods: We exploited a novel algorithm, ‘Bag of Naive Bayes’, whose marker selection strategy is complementary to that of conventional genome-wide association models based on univariate association tests. The analysis was performed on a genome-wide association study of 3,464 patients with type 1 diabetes from the Finnish Diabetic Nephropathy (FinnDiane) Study and subsequently replicated with 4,263 type 1 diabetes patients from the Steno Diabetes Centre, the All Ireland-Warren 3-Genetics of Kidneys in Diabetes UK collection (UK–Republic of Ireland) and the Genetics of Kidneys in Diabetes US Study (GoKinD US). Results: Five genetic loci (WNT4/ZBTB40-rs12137135, RGMA/MCTP2-rs17709344, MAPRE1P2-rs1670754, SEMA6D/SLC24A5-rs12917114 and SIK1-rs2838302) were associated with ESRD in the FinnDiane study. An association between ESRD and rs17709344, tagging the previously identified rs12437854 and located between the RGMA and MCTP2 genes, was replicated in independent case–control cohorts. rs12917114 near SEMA6D was associated with ESRD in the replication cohorts under the genotypic model (p < 0.05), and rs12137135 upstream of WNT4 was associated with ESRD in Steno. Conclusions/interpretation: This study supports previously reported findings on the RGMA/MCTP2 region and suggests novel susceptibility loci for ESRD. This highlights the importance of applying complementary statistical methods to detect novel genetic variants in diabetic nephropathy and, in general, in complex diseases.
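
The flavour of a bagged naive Bayes classifier over SNP genotypes can be sketched with scikit-learn; this does not reproduce the published algorithm's marker-selection strategy, and the data below are synthetic:

```python
# Sketch: bootstrap-aggregated Gaussian naive Bayes over SNP features.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_patients, n_snps = 500, 200
X = rng.integers(0, 3, size=(n_patients, n_snps)).astype(float)  # genotypes 0/1/2
# outcome weakly driven by a handful of "causal" SNPs plus noise
y = (X[:, :5].sum(axis=1) + rng.normal(0, 2, n_patients) > 5).astype(int)

# each naive Bayes sees a bootstrap sample and a random 10% subset of markers
bag = BaggingClassifier(GaussianNB(), n_estimators=100, max_features=0.1, random_state=0)
print(cross_val_score(bag, X, y, cv=5, scoring="roc_auc").mean())
```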

Relevance: 60.00%

Abstract:

In recent years, the issue of life expectancy has become of utmost importance to pension providers, insurance companies and government bodies in the developed world. Significant and consistent improvements in mortality rates, and hence life expectancy, have led to unprecedented increases in the cost of providing for older ages. This has resulted in an explosion of stochastic mortality models forecasting trends in mortality data in order to anticipate future life expectancy and, hence, quantify the costs of providing for future aging populations. Many stochastic models of mortality rates identify linear trends in mortality rates by time, age and cohort, and forecast these trends into the future using standard statistical methods. These modelling approaches fail to capture the effects of any structural change in the trend and thus potentially produce incorrect forecasts of future mortality rates. In this paper, we look at a range of leading stochastic models of mortality and test for structural breaks in the trend time series.
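
The simplest version of such a test, a Chow test for a break at a known date in a linear trend, can be sketched as follows; the trend series is synthetic, whereas the paper applies break tests to trends extracted from several fitted mortality models:

```python
# Illustrative Chow test for a structural break in a linear trend series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
t = np.arange(60)
# downward trend with a slope change at t = 35, plus noise
kappa = -0.5 * t - 5 * (t > 35) * (t - 35) / 35 + rng.normal(0, 0.4, 60)

def rss(x, y):
    slope, intercept, *_ = stats.linregress(x, y)
    return ((y - (intercept + slope * x)) ** 2).sum()

k = 2        # parameters per regression (intercept, slope)
split = 35   # candidate break date
rss_pooled = rss(t, kappa)
rss_split = rss(t[:split], kappa[:split]) + rss(t[split:], kappa[split:])
F = ((rss_pooled - rss_split) / k) / (rss_split / (len(t) - 2 * k))
p = stats.f.sf(F, k, len(t) - 2 * k)
print(f"Chow F = {F:.1f}, p = {p:.4f}")
```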

Relevance: 60.00%

Abstract:

In semiconductor fabrication processes, effective management of maintenance operations is fundamental to decreasing costs associated with failures and downtime. Predictive Maintenance (PdM) approaches, based on statistical methods and historical data, are becoming popular for their predictive capabilities and low (potentially zero) added costs. We present here a PdM module based on Support Vector Machines for the prediction of integral-type faults, that is, failures that develop through machine usage and stress of equipment parts. The proposed module may also be employed as a health-factor indicator. The module has been applied to a frequent maintenance problem in the semiconductor manufacturing industry, namely the breaking of the filament in the ion source of ion-implantation tools. The PdM module has been tested on a real production dataset. © 2013 IEEE.
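
A hedged sketch of the general PdM idea, an SVM trained on historical features to flag imminent failures; the feature names and data are invented for illustration, not the paper's ion-source variables:

```python
# Sketch: SVM classifier flagging equipment near an integral-type failure.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 600
hours_since_service = rng.uniform(0, 500, n)            # hypothetical usage feature
beam_current_drift = rng.normal(0, 1, n) + hours_since_service / 250  # hypothetical wear signal
X = np.column_stack([hours_since_service, beam_current_drift])
y = (hours_since_service + 100 * beam_current_drift + rng.normal(0, 60, n) > 500).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
print(cross_val_score(clf, X, y, cv=5).mean())  # classification accuracy
```

The `probability=True` option lets the same model serve as a rough health-factor indicator via predicted failure probabilities.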

Relevance: 60.00%

Abstract:

In recent years, the issue of life expectancy has become of utmost importance to pension providers, insurance companies, and government bodies in the developed world. Significant and consistent improvements in mortality rates, and hence life expectancy, have led to unprecedented increases in the cost of providing for older ages. This has resulted in an explosion of stochastic mortality models forecasting trends in mortality data to anticipate future life expectancy and hence quantify the costs of providing for future aging populations. Many stochastic models of mortality rates identify linear trends in mortality rates by time, age, and cohort, and forecast these trends into the future by using standard statistical methods. These approaches rely on the assumption that structural breaks in the trend do not exist or do not have a significant impact on the mortality forecasts. Recent literature has started to question this assumption. In this paper, we carry out a comprehensive investigation of the presence of structural breaks in a selection of leading mortality models. We find that structural breaks are present in the majority of cases. In particular, we find that allowing for structural breaks, where present, improves the forecast results significantly.
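
The forecasting consequence can be illustrated by extrapolating a mortality index with and without an allowance for a slope change at the break date; the index below is synthetic, whereas real indices would come from the fitted stochastic models:

```python
# Sketch: out-of-sample forecasts of a trend with vs without a structural break term.
import numpy as np

rng = np.random.default_rng(9)
t = np.arange(50)
kappa = -0.3 * t - 0.4 * np.clip(t - 30, 0, None) + rng.normal(0, 0.3, 50)
train, test = t < 40, t >= 40

def fit_forecast(features):
    # ordinary least squares on the training window, extrapolated everywhere
    beta, *_ = np.linalg.lstsq(features[train], kappa[train], rcond=None)
    return features @ beta

X_plain = np.column_stack([np.ones_like(t), t]).astype(float)
X_break = np.column_stack([np.ones_like(t), t, np.clip(t - 30, 0, None)]).astype(float)

for name, X in [("no break", X_plain), ("with break", X_break)]:
    err = kappa[test] - fit_forecast(X)[test]
    print(f"{name}: forecast RMSE = {np.sqrt((err ** 2).mean()):.3f}")
```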

Relevance: 60.00%

Abstract:

The River Bush must reach a standard of good ecological potential (GEP) by 2015 under the requirements of the Water Framework Directive. The role of sediments within a water body is extremely important to all aspects of a river's regime. The aim of this research is to investigate the effects of Altnahinch Dam on sediment distribution in the River Bush (a heavily modified water body), with comparison made against the Glendun River (an unmodified water body). Samples collected from the rivers were analysed by physical (pebble count, sieve analysis) and statistical methods (ANOVA, GRADISTAT). An increase in fine sediments upstream of the dam provides evidence that the dam is affecting sediment distribution. Downstream effects are not shown to be significant. The output of this study also implies similar impacts at other drinking-water storage impoundments. This research recommends that a sediment management plan be put in place for Altnahinch Dam and that further studies be carried out, concentrating on fine sediment distribution upstream of the dam.
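
The ANOVA step can be sketched as a one-way comparison of grain-size measures (in phi units) across site groups; the values below are synthetic stand-ins for the pebble-count and sieve data:

```python
# Illustrative one-way ANOVA on grain size across sites relative to a dam.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
upstream = rng.normal(3.5, 0.8, 40)    # finer sediment (larger phi)
at_dam = rng.normal(2.8, 0.8, 40)
downstream = rng.normal(2.7, 0.8, 40)

F, p = stats.f_oneway(upstream, at_dam, downstream)
print(f"F = {F:.2f}, p = {p:.4f}")
```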

Relevance: 60.00%

Abstract:

Background: High-risk medications are commonly prescribed to older US patients. Currently, less is known about high-risk medication prescribing in other Western countries, including the UK. We measured trends and correlates of high-risk medication prescribing in a subset of the older UK population (community-dwelling and institutionalized) to inform harm-minimization efforts. Methods: Three cross-sectional samples were drawn from primary care electronic clinical records (UK Clinical Practice Research Datalink, CPRD) in fiscal years 2003/04, 2007/08 and 2011/12. This yielded a sample of 13,900 people aged 65 years or over from 504 UK general practices. High-risk medications were defined by the 2012 Beers Criteria, adapted for the UK. Using descriptive statistical methods and regression modelling, we determined the prevalence and correlates of 'any' (drugs prescribed at least once per year) and 'long-term' (drugs prescribed in all quarters of the year) high-risk medication prescribing. Results: While polypharmacy rates have risen sharply, high-risk medication prevalence has remained stable across a decade. A third of older (65+) people are exposed to high-risk medications, but only half of the total prevalence was long-term (any = 38.4% [95% CI: 36.3, 40.5]; long-term = 17.4% [15.9, 19.9] in 2011/12). Long-term, but not any, high-risk medication exposure was associated with older age (85 years or over). Women and people with a higher polypharmacy burden were at greater risk of exposure; lower socio-economic status was not associated. Ten drugs/drug classes accounted for most high-risk medication prescribing in 2011/12. Conclusions: High-risk medication prescribing has not increased over time against a background of increasing polypharmacy in the UK. Half of patients receiving high-risk medications do so for less than a year. Reducing or optimising the use of a limited number of drugs could dramatically reduce high-risk medication use in older people. Further research is needed to investigate why the oldest old and women are at greater risk. Interventions to reduce high-risk medications may need to target short-term and long-term use separately.
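
The correlates analysis can be sketched as a logistic regression of long-term exposure on age group, sex and polypharmacy; the variable names and data below are hypothetical, not the CPRD extract:

```python
# Sketch: logistic regression of long-term high-risk medication exposure on correlates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(13)
n = 2000
df = pd.DataFrame({
    "age85plus": rng.integers(0, 2, n),
    "female": rng.integers(0, 2, n),
    "polypharmacy": rng.poisson(6, n),
})
# simulate an outcome loosely consistent with the reported associations
linpred = -2.5 + 0.5 * df.age85plus + 0.3 * df.female + 0.15 * df.polypharmacy
df["longterm_hrm"] = (rng.random(n) < 1 / (1 + np.exp(-linpred))).astype(int)

model = smf.logit("longterm_hrm ~ age85plus + female + polypharmacy", data=df).fit()
print(np.exp(model.params))  # odds ratios
```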

Relevance: 60.00%

Abstract:

Single-component geochemical maps are the most basic representation of spatial elemental distributions and are commonly used in environmental and exploration geochemistry. However, the compositional nature of geochemical data imposes several limitations on how the data should be presented. The problems relate to the constant-sum problem (closure) and the inherently multivariate, relative information conveyed by compositional data. Well known, for instance, is the tendency of all heavy metals to show lower values in soils with significant contributions of diluting elements (e.g. the quartz dilution effect), or the contrary effect, apparent enrichment in many elements due to removal of potassium during weathering. The validity of classical single-component maps is thus investigated, and reasonable alternatives that honour the compositional character of geochemical concentrations are presented. The first recommended method relies on knowledge-driven log-ratios, chosen to highlight certain geochemical relations or to filter known artefacts (e.g. dilution with SiO2 or volatiles); this is similar to the classical approach of normalising to a single element. The second approach uses so-called log-contrasts, which employ suitable statistical methods (such as classification techniques, regression analysis, principal component analysis, clustering of variables, etc.) to extract potentially interesting geochemical summaries. The caution from this work is that if a compositional approach is not used, it becomes difficult to guarantee that any identified pattern, trend or anomaly is not an artefact of the constant-sum constraint. In summary, the authors recommend a chain of enquiry that involves searching for the appropriate statistical method that can answer the required geological or geochemical question while maintaining the integrity of the compositional nature of the data. The required log-ratio transformations should be applied, followed by the chosen statistical method. Interpreting the results may require a closer working relationship between statisticians, data analysts and geochemists.
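
A minimal sketch of the recommended starting point, the centred log-ratio (clr) transform, which frees subsequent statistics from the constant-sum constraint:

```python
# Centred log-ratio (clr) transform of compositional data.
import numpy as np

def clr(composition):
    """clr transform of rows that each sum to a constant (e.g. 100%)."""
    log_x = np.log(composition)
    return log_x - log_x.mean(axis=1, keepdims=True)

# three samples, four components (e.g. SiO2, Al2O3, Fe2O3, remainder), closed to 100%
samples = np.array([
    [62.0, 15.0, 6.0, 17.0],
    [70.0, 12.0, 4.0, 14.0],
    [55.0, 18.0, 8.0, 19.0],
])
print(clr(samples))
```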

Relevance: 60.00%

Abstract:

Conventional practice in regional geochemistry includes, as the final step of any geochemical campaign, the generation of a series of maps showing the spatial distribution of each of the components considered. Such maps, though necessary, do not comply with the compositional, relative nature of the data, which unfortunately makes any conclusion based on them sensitive to spurious correlation problems. This is one of the reasons why these maps are never interpreted in isolation. This contribution aims at gathering a series of statistical methods to produce individual maps of multiplicative combinations of components (logcontrasts), much in the flavour of equilibrium constants, which are designed on purpose to capture certain aspects of the data.

We distinguish between supervised and unsupervised methods, where the former require an external, non-compositional variable (besides the compositional geochemical information) available in an analogous training set. This external variable can be a quantity (soil density, collocated magnetics, collocated ratio of Th/U spectral gamma counts, proportion of clay particle fraction, etc.) or a category (rock type, land use type, etc.). In the supervised methods, a regression-like model between the external variable and the geochemical composition is derived in the training set, and then this model is mapped onto the whole region. This case is illustrated with the Tellus dataset, covering Northern Ireland at a density of one soil sample per 2 square km, where we map the presence of blanket peat and the underlying geology.

The unsupervised methods considered include principal components and principal balances (Pawlowsky-Glahn et al., CoDaWork 2013), i.e. logcontrasts of the data that are devised to capture very large variability or else be quasi-constant. Using the Tellus dataset again, it is found that geological features are highlighted by the quasi-constant ratios Hf/Nb and their ratio against SiO2; Rb/K2O and Zr/Na2O and the balance between these two groups of two variables; the balance of Al2O3 and TiO2 vs. MgO; and the balance of Cr, Ni and Co vs. V and Fe2O3. The largest variability appears to be related to the presence/absence of peat.
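
As an unsupervised sketch: principal components of clr-transformed data are logcontrasts, and the lowest-variance components play the role of the quasi-constant balances (such as Hf/Nb) mentioned above. The data here are synthetic, not the Tellus survey:

```python
# Sketch: extracting high- and low-variance logcontrasts via PCA on clr data.
import numpy as np

rng = np.random.default_rng(17)
parts = rng.lognormal(mean=0.0, sigma=0.3, size=(200, 6))
comp = parts / parts.sum(axis=1, keepdims=True)        # close each sample to sum 1

log_c = np.log(comp)
clr_data = log_c - log_c.mean(axis=1, keepdims=True)   # centred log-ratio

# eigen-decomposition of the clr covariance: each eigenvector is a logcontrast weight vector
cov = np.cov(clr_data, rowvar=False)
vals, vecs = np.linalg.eigh(cov)                       # eigenvalues in ascending order

# the first (~0) eigenvalue is the clr closure artefact (the all-ones direction),
# so the smallest genuine, quasi-constant logcontrast is column 1
print("variance of each logcontrast:", np.round(vals, 4))
print("weights of the most quasi-constant logcontrast:", np.round(vecs[:, 1], 3))
```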