15 results for statistical techniques

in CentAUR: Central Archive University of Reading - UK


Relevance:

70.00%

Publisher:

Abstract:

As in any field of scientific inquiry, advancements in the field of second language acquisition (SLA) rely in part on the interpretation and generalizability of study findings using quantitative data analysis and inferential statistics. While statistical techniques such as ANOVA and t-tests are widely used in second language research, this article reviews a class of newer statistical models that has not yet been widely adopted in the field but has garnered interest in other fields of language research. The class of statistical models called mixed-effects models is introduced, and the potential benefits of these models for the second language researcher are discussed. A simple example of mixed-effects data analysis using the statistical software package R (R Development Core Team, 2011) is provided as an introduction to the use of these statistical techniques and to exemplify how such analyses can be reported in research articles. It is concluded that mixed-effects models provide the second language researcher with a powerful tool for the analysis of a variety of types of second language acquisition data.
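
The article's worked example uses R; purely as an illustrative analogue (not the article's code), a minimal mixed-effects fit in Python with statsmodels might look like the following. The data and variable names (rt, condition, subject) are invented: a fixed effect of condition plus a random intercept per learner.

```python
# Hedged sketch of a mixed-effects analysis with invented repeated-measures data.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "rt":        [612, 584, 701, 655, 530, 498, 640, 610, 575, 542, 660, 633],
    "condition": ["cognate", "noncognate"] * 6,
    "subject":   ["s1", "s1", "s2", "s2", "s3", "s3",
                  "s4", "s4", "s5", "s5", "s6", "s6"],
})

# Fixed effect of condition; a random intercept for each subject captures
# the repeated measures taken on the same learner.
result = smf.mixedlm("rt ~ condition", data, groups=data["subject"]).fit()
print(result.summary())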

Relevance:

60.00%

Publisher:

Abstract:

This paper provides for the first time an objective short-term (8 yr) climatology of African convective weather systems based on satellite imagery. Eight years of infrared International Satellite Cloud Climatology Project-European Space Agency Meteorological Satellite (ISCCP-Meteosat) imagery have been analyzed using objective feature identification, tracking, and statistical techniques for the July, August, and September periods over Africa and the adjacent Atlantic Ocean. This allows various diagnostics to be computed and used to study the distribution of mesoscale and synoptic-scale convective weather systems, from mesoscale cloud clusters and squall lines to tropical cyclones. An 8-yr seasonal climatology (1983-90) and the seasonal cycle of this convective activity are presented and discussed. Also discussed is the dependence of organized convection in this region on orography, convective and potential instability, and vertical wind shear, using European Centre for Medium-Range Weather Forecasts reanalysis data.
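
As a loose illustration of what objective feature identification can mean in practice (an assumption-laden toy, not the paper's actual identification-and-tracking scheme), contiguous cold regions in a synthetic brightness-temperature field can be labelled as candidate cloud clusters:

```python
# Toy feature identification on a synthetic IR brightness-temperature field.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
tb = 280 + 15 * rng.standard_normal((120, 180))    # synthetic Tb field (K)

# Convective cloud clusters: contiguous regions colder than a threshold.
mask = tb < 253.0                                   # roughly -20 degC
labels, n_clusters = ndimage.label(mask)
centroids = ndimage.center_of_mass(mask, labels, list(range(1, n_clusters + 1)))

print(f"{n_clusters} candidate convective features identified")
# Tracking would then link centroids between successive images.
```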

Relevance:

60.00%

Publisher:

Abstract:

If the fundamental precepts of Farming Systems Research were taken literally, 'unique' solutions would have to be sought for each farm. This is an unrealistic expectation, but it has led to the idea of a recommendation domain, which implies creating a taxonomy of farms in order to increase the general applicability of recommendations. Mathematical programming models are an established means of generating recommended solutions, but for such models to be effective they have to be constructed for 'truly' typical or representative situations. Multivariate statistical techniques provide a means of creating the required typologies, particularly when an exhaustive database is available. This paper illustrates the application of this methodology in two studies that shared the common purpose of identifying types of farming systems in their respective study areas. The issues related to the use of factor and cluster analyses for farm typification, prior to building representative mathematical programming models for Chile and Pakistan, are highlighted.
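
A hedged sketch of the typification workflow the abstract describes: factor/principal-component analysis followed by cluster analysis. The data, variable count and cluster count are invented for illustration.

```python
# Farm typification sketch: standardise survey variables, extract latent
# factors, then cluster farms into candidate recommendation domains.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Rows = farms, columns = survey variables (farm size, herd size, ...).
farms = rng.standard_normal((200, 12))

X = StandardScaler().fit_transform(farms)
factors = PCA(n_components=4).fit_transform(X)      # reduce to latent factors
types = KMeans(n_clusters=3, n_init=10).fit_predict(factors)

# Each cluster is a candidate "recommendation domain"; a representative
# mathematical programming model would then be built per cluster.
print(np.bincount(types))
```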

Relevance:

60.00%

Publisher:

Abstract:

The complex interactions between the determinants of food purchase under risk are explored using the SPARTA model, based on the theory of planned behaviour and estimated through a combination of multivariate statistical techniques. The application investigates chicken consumption choices in two scenarios: (a) a 'standard' purchasing situation and (b) following a hypothetical Salmonella scare. The data are from a nationally representative survey of 2,725 respondents from five European countries: France, Germany, Italy, the Netherlands and the United Kingdom. Results show that the effects and interactions of behavioural determinants vary significantly within Europe. Only in the case of a food scare do risk perceptions and trust come into play. The policy priority should be on building and maintaining trust in food and health authorities and research institutions, while food chain actors could mitigate the consequences of a food scare through public trust. No relationship is found between socio-demographic variables and consumer trust in food safety information.

Relevance:

60.00%

Publisher:

Abstract:

Genetic data obtained on population samples convey information about their evolutionary history. Inference methods can extract part of this information, but they require sophisticated statistical techniques that have been made available to the biology community (through computer programs) only for simple, standard situations typically involving a small number of samples. We propose here a computer program (DIY ABC) for inference based on approximate Bayesian computation (ABC), in which scenarios can be customized by the user to fit many complex situations involving any number of populations and samples. Such scenarios involve any combination of population divergences, admixtures and population size changes. DIY ABC can be used to compare competing scenarios, estimate parameters for one or more scenarios, and compute bias and precision measures for a given scenario and known values of parameters (the current version applies to unlinked microsatellite data). This article describes the key methods used in the program and presents its main features. The analysis of one simulated and one real dataset, both with complex evolutionary scenarios, illustrates the main possibilities of DIY ABC.
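
The core idea behind ABC can be conveyed in a few lines. The following rejection-ABC toy (my illustration, not DIY ABC's implementation) estimates one parameter by keeping prior draws whose simulated summary statistic lands close to the observed one:

```python
# Rejection ABC: draw from the prior, simulate, keep draws whose simulated
# summary statistic is within a tolerance of the observed one.
import numpy as np

rng = np.random.default_rng(2)
observed = rng.normal(5.0, 1.0, size=100)           # stand-in "data"
s_obs = observed.mean()                             # summary statistic

def simulate(theta, n=100):
    """Toy simulator standing in for a demographic/coalescent scenario."""
    return rng.normal(theta, 1.0, size=n)

accepted = []
for _ in range(20000):
    theta = rng.uniform(0.0, 10.0)                  # prior draw
    if abs(simulate(theta).mean() - s_obs) < 0.1:   # tolerance epsilon
        accepted.append(theta)

print(f"posterior mean ~ {np.mean(accepted):.2f} from {len(accepted)} draws")
```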

Relevance:

60.00%

Publisher:

Abstract:

Over the last decade, a number of new methods of population genetic analysis based on likelihood have been introduced. This review describes and explains the general statistical techniques that have recently been used, and discusses the underlying population genetic models. Experimental papers that use these methods to infer human demographic and phylogeographic history are reviewed. It appears that the use of likelihood has hitherto had little impact in the field of human population genetics, which is still primarily driven by more traditional approaches. However, with the current uncertainty about the effects of natural selection, population structure and ascertainment of single-nucleotide polymorphism markers, it is suggested that likelihood-based methods may have a greater impact in the future.
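
To make "likelihood-based" concrete, here is a deliberately simple toy, far simpler than the coalescent-based likelihoods the review covers: the likelihood of an allele frequency given a binomial sample of chromosomes, maximised over a grid (all numbers invented).

```python
# Grid-based maximum-likelihood estimate of an allele frequency.
import numpy as np
from scipy.stats import binom

n, k = 200, 57                         # chromosomes sampled, copies of allele A
p_grid = np.linspace(0.001, 0.999, 999)
loglik = binom.logpmf(k, n, p_grid)    # binomial log-likelihood at each p

p_hat = p_grid[np.argmax(loglik)]
print(f"MLE allele frequency: {p_hat:.3f} (analytic k/n = {k/n:.3f})")
```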

Relevance:

60.00%

Publisher:

Abstract:

Dysregulation of lipid and glucose metabolism in the postprandial state is recognised as an important risk factor for the development of cardiovascular disease and type 2 diabetes. Our objective was to create a comprehensive, standardised database of postprandial studies to provide insights into the physiological factors that influence postprandial lipid and glucose responses. Data were collated from subjects (n = 467) taking part in single and sequential meal postprandial studies conducted by researchers at the University of Reading, to form the DISRUPT (DIetary Studies: Reading Unilever Postprandial Trials) database. Subject attributes including age, gender, genotype, menopausal status, body mass index, blood pressure and a fasting biochemical profile, together with postprandial measurements of triacylglycerol (TAG), non-esterified fatty acids, glucose, insulin and TAG-rich lipoprotein composition, are recorded. A particular strength of the studies is the frequency of blood sampling, with on average 10-13 blood samples taken during each postprandial assessment, and the fact that identical test meal protocols were used in a number of studies, allowing pooling of data to increase statistical power. The DISRUPT database is the most comprehensive postprandial metabolism database that exists worldwide, and preliminary analysis of the pooled sequential meal postprandial dataset has revealed both confirmatory and novel observations with respect to the impact of gender and age on the postprandial TAG response. Further analysis of the dataset using conventional statistical techniques along with integrated mathematical models and clustering analysis will provide a unique opportunity to greatly expand current knowledge of the aetiology of inter-individual variability in postprandial lipid and glucose responses.
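
As an example of the kind of derived measure such frequent sampling supports (a hypothetical sketch: the times and TAG values are invented and this is not the database's schema), a postprandial response can be summarised as total and incremental area under the curve:

```python
# Summarise a hypothetical postprandial TAG profile as trapezoidal AUC and
# incremental AUC (area above the fasting baseline).
import numpy as np

def trapz(y, x):
    """Trapezoidal integral of y over x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

time_h = np.array([0, 0.5, 1, 2, 3, 4, 5, 6, 7, 8])        # sampling times (h)
tag = np.array([1.1, 1.3, 1.8, 2.4, 2.6, 2.3, 2.0, 1.7, 1.4, 1.2])  # mmol/L

auc = trapz(tag, time_h)                  # total AUC, mmol/L x h
iauc = trapz(tag - tag[0], time_h)        # incremental AUC above fasting
print(f"AUC = {auc:.2f}, iAUC = {iauc:.2f} mmol/L*h")
```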

Relevance:

60.00%

Publisher:

Abstract:

Streamwater nitrate dynamics in the River Hafren, Plynlimon, mid-Wales were investigated over decadal to sub-daily timescales using a range of statistical techniques. Long-term data were derived from weekly grab samples (1984–2010) and high-frequency data from 7-hourly samples (2007–2009), both measured at two sites: a headwater stream draining moorland and a downstream site below plantation forest. This study is one of the first to analyse upland streamwater nitrate dynamics across such a wide range of timescales and report on the principal mechanisms identified. The data analysis provided no clear evidence that the long-term decline in streamwater nitrate concentrations was related to a decline in atmospheric deposition alone, because nitrogen deposition first increased and then decreased during the study period. Increased streamwater temperature and denitrification may also have contributed to the decline in stream nitrate concentrations, the former through increased N uptake rates and the latter resulting from increased dissolved organic carbon concentrations. Strong seasonal cycles, with concentration minima in the summer, were driven by seasonal flow minima and by seasonal biological activity enhancing nitrate uptake. Complex diurnal dynamics were observed, with seasonal changes in the phase and amplitude of the cycling, and the diurnal dynamics varied along the river. At the moorland site, a regular daily cycle, with minimum concentrations in the early afternoon coinciding with peak air temperatures, indicated the importance of in-stream biological processing. At the downstream site, the diurnal dynamics were a composite signal resulting from advection, dispersion and nitrate processing in the soils of the lower catchment. The diurnal streamwater nitrate dynamics were also affected by drought conditions. Enhanced diurnal cycling in spring 2007 was attributed to increased nitrate availability in the post-drought period, as well as to low flow rates and high temperatures over this period. The combination of high-frequency short-term measurements and long-term monitoring provides a powerful tool for improving understanding of the controls on element fluxes and concentrations in surface waters.
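
One standard way to quantify the diurnal cycling described here is harmonic regression. The sketch below uses synthetic 7-hourly data (not the study's analysis) to recover the amplitude of a 24 h cycle by least squares:

```python
# Estimate diurnal amplitude in a synthetic 7-hourly nitrate series by
# fitting a + b*sin(wt) + c*cos(wt) with ordinary least squares.
import numpy as np

rng = np.random.default_rng(3)
t_h = np.arange(0, 60 * 24, 7.0)          # 60 days of 7-hourly sampling (h)
omega = 2 * np.pi / 24.0                  # 24 h angular frequency

# Synthetic nitrate series: mean + diurnal sinusoid + noise (mg/L).
nitrate = (0.8 + 0.15 * np.sin(omega * t_h - 1.0)
           + 0.05 * rng.standard_normal(t_h.size))

A = np.column_stack([np.ones_like(t_h), np.sin(omega * t_h), np.cos(omega * t_h)])
a, b, c = np.linalg.lstsq(A, nitrate, rcond=None)[0]
print(f"diurnal amplitude ~ {np.hypot(b, c):.3f} mg/L")
```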

Relevance:

60.00%

Publisher:

Abstract:

Much UK research and market practice on portfolio strategy and performance benchmarking relies on a sector-geography subdivision of properties. Prior tests of the appropriateness of such divisions have generally relied on aggregated or hypothetical return data. However, the results found in aggregate may not hold when individual buildings are considered. This paper makes use of a dataset of individual UK property returns. A series of multivariate exploratory statistical techniques is used to test whether the return behaviour of individual properties conforms to their a priori grouping. The results strongly suggest that neither standard sector nor regional classifications provide a clear demarcation of individual building performance. This has important implications for portfolio strategy as well as for performance measurement and benchmarking. However, there do appear to be size and yield effects that help explain return behaviour at the property level.
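
One way to test whether individual returns conform to an a priori grouping (a hedged sketch with invented data, not the paper's exact procedure) is to cluster the return series and compare the clusters with the sector labels, for example via the adjusted Rand index:

```python
# Do clusters of individual return series line up with a priori labels?
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(4)
returns = rng.standard_normal((300, 40))        # 300 properties x 40 quarters
a_priori = rng.integers(0, 3, size=300)         # assumed sector labels

clusters = KMeans(n_clusters=3, n_init=10).fit_predict(returns)
ari = adjusted_rand_score(a_priori, clusters)   # ~0 => labels do not match groups
print(f"adjusted Rand index: {ari:.3f}")
```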

Relevance:

60.00%

Publisher:

Abstract:

Anthropogenic aerosols in the atmosphere have the potential to affect regional-scale land hydrology through solar dimming. Increased aerosol loading may have reduced historical surface evaporation over some locations, but the magnitude and extent of this effect are uncertain. Any reduction in evaporation due to historical solar dimming may have resulted in an increase in river flow. Here we formally detect and quantify the historical effect of changing aerosol concentrations, via solar radiation, on observed river flow over the heavily industrialized northern extra-tropics. We use a state-of-the-art estimate of twentieth century surface meteorology as input data for a detailed land surface model, and show that the simulations capture the observed strong inter-annual variability in runoff in response to climatic fluctuations. Using statistical techniques, we identify a detectable aerosol signal in the observed river flow both over the combined region and over individual river basins in Europe and North America. We estimate that solar dimming due to rising aerosol concentrations in the atmosphere around 1980 led to an increase in river runoff of up to 25% in the most heavily polluted regions of Europe. We propose that, conversely, these regions may experience reduced freshwater availability in the future, as air quality improvements are set to lower aerosol loading and solar dimming.
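
Formal detection and attribution typically uses optimal fingerprinting; the toy below illustrates only the underlying regression idea with synthetic data (assumed names and numbers): scale a modelled aerosol-driven signal to "observations" and ask whether the scaling factor is significantly above zero.

```python
# Regression-based detection toy: fit observations = beta * signal + noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
years = 80
signal = np.linspace(0.0, 1.0, years) ** 2              # assumed aerosol signal
obs = 0.8 * signal + 0.3 * rng.standard_normal(years)   # synthetic runoff anomaly

res = stats.linregress(signal, obs)
# A scaling factor significantly greater than zero counts as "detection".
print(f"scaling = {res.slope:.2f} +/- {2 * res.stderr:.2f}, p = {res.pvalue:.1e}")
```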

Relevance:

60.00%

Publisher:

Abstract:

Individual-based models (IBMs) can simulate the actions of individual animals as they interact with one another and the landscape in which they live. When used in spatially explicit landscapes, IBMs can show how populations change over time in response to management actions. For instance, IBMs are being used to design strategies for conservation and for the exploitation of fisheries, and to assess the effects on populations of major construction projects and of novel agricultural chemicals. In such real-world contexts, it becomes especially important to build IBMs in a principled fashion, and to approach calibration and evaluation systematically. We argue that insights from physiological and behavioural ecology offer a recipe for building realistic models, and that Approximate Bayesian Computation (ABC) is a promising technique for the calibration and evaluation of IBMs. IBMs are constructed primarily from knowledge about individuals. In ecological applications the relevant knowledge is found in physiological and behavioural ecology, and we approach these from an evolutionary perspective by taking into account how physiological and behavioural processes contribute to life histories, and how those life histories evolve. Evolutionary life-history theory shows that, other things being equal, organisms should grow to sexual maturity as fast as possible, then reproduce as fast as possible, while minimising the per capita death rate. Physiological and behavioural ecology are largely built on these principles together with the laws of conservation of matter and energy. To complete the construction of an IBM, information is also needed on the effects of competitors, conspecifics and food scarcity; on the maximum rates of ingestion, growth and reproduction; and on life-history parameters. Using this knowledge about physiological and behavioural processes provides a principled way to build IBMs, but model parameters vary between species and are often difficult to measure. A common solution is to compare model outputs manually with observations from real landscapes, and so obtain parameters that produce acceptable fits of model to data. However, this procedure can be convoluted and lead to over-calibrated, and thus inflexible, models. Many formal statistical techniques are unsuitable for use with IBMs, but we argue that ABC offers a potential way forward. It can be used to calibrate and compare complex stochastic models and to assess the uncertainty in their predictions. We describe methods used to implement ABC in an accessible way and illustrate them with examples and discussion of recent studies. Although much progress has been made, theoretical issues remain, and some of these are outlined and discussed.
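
A minimal IBM in the spirit of this recipe might look like the following (all parameter values are invented; ABC, as sketched earlier in this listing, could then be used to calibrate them against field data): individuals grow, reproduce once mature, and die, with food competition slowing both growth and reproduction.

```python
# Toy individual-based model: growth, maturity-gated reproduction, mortality,
# and density-dependent competition for food. Parameters are invented.
import numpy as np

rng = np.random.default_rng(6)

class Individual:
    def __init__(self, size=1.0):
        self.size = size

population = [Individual() for _ in range(50)]
for step in range(100):
    # Each individual's food share shrinks as the population grows.
    food_share = min(1.0, 500.0 / max(len(population), 1))
    survivors, newborns = [], []
    for ind in population:
        ind.size += 0.1 * food_share                          # growth
        if ind.size >= 2.0 and rng.random() < 0.2 * food_share:
            newborns.append(Individual())                     # reproduction
        if rng.random() > 0.02:                               # 2% mortality/step
            survivors.append(ind)
    population = survivors + newborns

print(f"population after 100 steps: {len(population)}")
```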

Relevance:

30.00%

Publisher:

Abstract:

Background: MHC Class I molecules present antigenic peptides to cytotoxic T cells, a process that forms an integral part of the adaptive immune response. Peptides are bound within a groove formed by the MHC heavy chain. Previous approaches to MHC Class I-peptide binding prediction have largely concentrated on the peptide anchor residues located at the P2 and C-terminus positions. Results: A large dataset comprising MHC-peptide structural complexes was created by remodelling pre-determined X-ray crystallographic structures. Static energetic analysis, following energy minimisation, was performed on the dataset in order to characterise the interactions between bound peptides and the MHC Class I molecule, partitioning the interactions within the groove into van der Waals, electrostatic and total non-bonded energy contributions. The QSAR techniques of Genetic Function Approximation (GFA) and Genetic Partial Least Squares (G/PLS) were used to identify key interactions between the two molecules by comparing the calculated energy values with experimentally determined BL50 data. Conclusion: Although the peptide termini binding interactions help ensure the stability of the MHC Class I-peptide complex, the central region of the peptide is also important in defining the specificity of the interaction. As thermodynamic studies indicate that peptide association and dissociation may be driven entropically, it may be necessary to incorporate entropic contributions into future calculations.
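
As a rough sketch of the regression step (my illustration: sklearn's PLS stands in for G/PLS, and GFA's genetic feature selection is omitted), per-position interaction energies could be related to a binding measure like this:

```python
# QSAR-style sketch: partial least squares relating interaction-energy
# descriptors to a toy binding measure. Design matrix and target invented.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)
# Invented descriptors: 9 peptide positions x 3 energy terms each.
X = rng.standard_normal((120, 27))
y = X[:, 3] - 0.5 * X[:, 12] + 0.1 * rng.standard_normal(120)  # toy "BL50"

pls = PLSRegression(n_components=3).fit(X, y)
print(f"R^2 on training data: {pls.score(X, y):.2f}")
```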

Relevance:

30.00%

Publisher:

Abstract:

Optimal state estimation from given observations of a dynamical system by data assimilation is generally an ill-posed inverse problem. In order to solve the problem, a standard Tikhonov, or L2, regularization is used, based on certain statistical assumptions on the errors in the data. The regularization term constrains the estimate of the state to remain close to a prior estimate. In the presence of model error, this approach does not capture the initial state of the system accurately, as the initial state estimate is derived by minimizing the average error between the model predictions and the observations over a time window. Here we examine an alternative L1 regularization technique that has proved valuable in image processing. We show that for examples of flow with sharp fronts and shocks, the L1 regularization technique performs more accurately than standard L2 regularization.
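
A toy comparison conveys the point (illustrative only, not the paper's data-assimilation experiments): for an underdetermined problem whose true solution is sparse and sharp-featured, L1 regularisation typically recovers it better than Tikhonov/L2.

```python
# L2 (Tikhonov/Ridge) vs L1 (Lasso) regularisation on an ill-posed
# least-squares problem with a sparse, sharp-featured truth.
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(8)
n, m = 50, 100                     # fewer observations than unknowns: ill-posed
A = rng.standard_normal((n, m))
x_true = np.zeros(m)
x_true[[10, 40, 70]] = [3.0, -2.0, 1.5]            # sparse "shock-like" truth
b = A @ x_true + 0.05 * rng.standard_normal(n)

x_l2 = Ridge(alpha=1.0, fit_intercept=False).fit(A, b).coef_
x_l1 = Lasso(alpha=0.05, fit_intercept=False).fit(A, b).coef_

print("L2 reconstruction error:", round(float(np.linalg.norm(x_l2 - x_true)), 3))
print("L1 reconstruction error:", round(float(np.linalg.norm(x_l1 - x_true)), 3))
```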

Relevance:

30.00%

Publisher:

Abstract:

Regional climate downscaling has arrived at an important juncture. Some in the research community favour continued refinement and evaluation of downscaling techniques within a broader framework of uncertainty characterisation and reduction. Others are calling for smarter use of downscaling tools, accepting that conventional, scenario-led strategies for adaptation planning have limited utility in practice. This paper sets out the rationale and new functionality of the Decision Centric (DC) version of the Statistical DownScaling Model (SDSM-DC). This tool enables synthesis of plausible daily weather series, exotic variables (such as tidal surge), and climate change scenarios guided, not determined, by climate model output. Two worked examples are presented. The first shows how SDSM-DC can be used to reconstruct and in-fill missing records based on calibrated predictor-predictand relationships. Daily temperature and precipitation series from sites in Africa, Asia and North America are deliberately degraded to show that SDSM-DC can reconstitute lost data. The second demonstrates the application of the new scenario generator for stress testing a specific adaptation decision. SDSM-DC is used to generate daily precipitation scenarios to simulate winter flooding in the Boyne catchment, Ireland. This sensitivity analysis reveals the conditions under which existing precautionary allowances for climate change might be insufficient. We conclude by discussing the wider implications of the proposed approach and research opportunities presented by the new tool.
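The calibrated predictor-predictand relationship at the heart of such tools can be sketched as a simple regression (invented predictors and coefficients; SDSM-DC itself adds stochastic weather-generator components not shown here):

```python
# Statistical downscaling sketch: regress a local predictand on assumed
# large-scale predictors (e.g. pressure, humidity, vorticity).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(9)
days = 3650
predictors = rng.standard_normal((days, 3))
local_temp = (12.0 + predictors @ np.array([1.5, 0.8, -0.4])
              + rng.standard_normal(days))

model = LinearRegression().fit(predictors, local_temp)
print("calibrated coefficients:", np.round(model.coef_, 2))
# Once calibrated, the relationship can in-fill missing local records or be
# driven by climate-model predictors to generate local scenarios.
```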

Relevance:

30.00%

Publisher:

Abstract:

Social networks have gained remarkable attention in the last decade. Accessing social network sites such as Twitter, Facebook, LinkedIn and Google+ through the internet and web 2.0 technologies has become more affordable. People are becoming more interested in and reliant on social networks for information, news and the opinions of other users on diverse subject matters. This heavy reliance on social network sites causes them to generate massive data characterised by three computational issues: size, noise and dynamism. These issues often make social network data very complex to analyse manually, necessitating computational means of analysing them. Data mining provides a wide range of techniques for detecting useful knowledge, such as trends, patterns and rules, from massive datasets [44]. Data mining techniques are used for information retrieval, statistical modelling and machine learning. These techniques employ data pre-processing, data analysis and data interpretation processes in the course of data analysis. This survey discusses different data mining techniques used in mining diverse aspects of social networks over the past decades, from historical techniques to up-to-date models, including our novel technique named TRCM. All the techniques covered in this survey are listed in Table 1, including the tools employed and the names of their authors.