903 results for Geo-statistical model
Abstract:
The spatial data set delineates areas with similar environmental properties regarding soil, terrain morphology, climate and affiliation to the same administrative unit (NUTS3 or comparable units in size) at a minimum pixel size of 1 km2. The purpose of developing this data set is to provide a link between spatial environmental information (e.g. soil properties) and statistical data (e.g. crop distribution) available at administrative level. Impact assessment of agricultural management on emissions of pollutants or radiatively active gases, or analysis regarding the influence of agricultural management on the supply of ecosystem services, requires the proper spatial coincidence of the driving factors. The HSU data set provides, for example, the link between the agro-economic model CAPRI and biophysical assessment of environmental impacts (updating the spatial units previously used; Leip et al. 2008), for the analysis of policy scenarios. Recently, a statistical model to disaggregate crop information available from regional statistics to the HSU has been developed (Lamboni et al. 2016). The HSU data set consists of the spatial layers provided in vector and raster format as well as attribute tables with information on the properties of the HSU. All input data for the delineation of the HSU are publicly available. For some parameters the attribute tables provide the link between the HSU data set and, e.g., the soil map(s) rather than the data itself. The HSU data set is closely linked to the USCIE data set.
The North Sea autumn spawning Herring (Clupea harengus L.) Spawning Component Abundance Index (SCAI)
Abstract:
The North Sea autumn-spawning herring (Clupea harengus) stock consists of a set of different spawning components. The dynamics of the entire stock have been well characterized, but although time-series of larval abundance indices are available for the individual components, study of the dynamics at the component level has historically been hampered by missing observations and high sampling noise. A simple state-space statistical model is developed that is robust to these problems, gives a good fit to the data, and proves capable of both handling and predicting missing observations well. Furthermore, the sum of the fitted abundance indices across all components proves an excellent proxy for the biomass of the total stock, even though the model utilizes information at the individual-component level. The Orkney-Shetland component appears to have recovered faster from historic depletion events than the other components, whereas the Downs component has been the slowest. These differences give rise to changes in stock composition, which are shown to vary widely within a relatively short time. The modelling framework provides a valuable tool for studying and monitoring the dynamics of the individual components of the North Sea herring stock.
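The robustness to missing observations described above comes from the predict-update structure of state-space filtering: when an observation is missing, the filter simply carries its prediction forward. A minimal sketch of this idea for a scalar local-level (random-walk) model; the function name and noise variances are illustrative, not from the paper:

```python
def kalman_local_level(y, q=0.1, r=1.0):
    """Scalar local-level (random walk) Kalman filter.
    y: list of observations, with None marking a missing value.
    q: process (state) noise variance; r: observation noise variance.
    Returns the filtered state estimates, one per time step."""
    x, p = 0.0, 1e6          # diffuse prior on the initial level
    est = []
    for obs in y:
        # predict: random-walk state, so uncertainty grows by q
        p = p + q
        if obs is not None:
            # update: standard Kalman gain for the scalar model
            k = p / (p + r)
            x = x + k * (obs - x)
            p = (1 - k) * p
        # a missing observation is simply skipped: the prediction stands
        est.append(x)
    return est
```

For a gap in the series the estimate is unchanged but its variance grows, which is how such a model both handles and predicts missing observations.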
Abstract:
Background: To investigate the association between selected social and behavioural (infant feeding and preventive dental practices) variables and the presence of early childhood caries in preschool children within the north Brisbane region. Methods: A cross-sectional sample of 2515 children aged four to five years was examined in a preschool setting using prevalence (percentage with caries) and severity (dmft) indices. A self-administered questionnaire obtained information regarding selected social and behavioural variables. The data were modelled using multiple logistic regression analysis at the 5 per cent level of significance. Results: The final explanatory model for caries presence in four to five year old children included the variables breast feeding from three to six months of age (OR=0.7, CI=0.5, 1.0), sleeping with the bottle (OR=1.9, CI=1.5, 2.4), sipping from the bottle (OR=1.6, CI=1.2, 2.0), ethnicity other than Caucasian (OR=1.9, CI=1.4, 2.5), annual family income $20,000-$35,000 (OR=1.7, CI=1.3, 2.3) and annual family income less than $20,000 (OR=2.1, CI=1.5, 2.8). Conclusion: A statistical model for early childhood caries in preschool children within the north Brisbane region has been constructed using selected social and behavioural determinants. Epidemiological data can be used for improved public oral health service planning and resource allocation within the region.
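The odds ratios and confidence intervals quoted above come from multiple logistic regression, where each OR is adjusted for the other variables. For a single binary factor, the unadjusted odds ratio and its Wald confidence interval can be computed directly from a 2x2 table; a minimal sketch (the function name and the counts in the test are illustrative, not from the study):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
        a = exposed with caries,    b = exposed without caries
        c = unexposed with caries,  d = unexposed without caries
    z = 1.96 gives the conventional 95% interval."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

An interval excluding 1.0, as for bottle-related habits above, indicates an association at the 5 per cent level.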
Abstract:
Areas of the landscape that are priorities for conservation should be those that are both vulnerable to threatening processes and that, if lost or degraded, will result in conservation targets being compromised. While much attention is directed towards understanding the patterns of biodiversity, much less is given to determining the areas of the landscape most vulnerable to threats. We assessed the relative vulnerability of remaining areas of native forest to conversion to plantations in the ecologically significant temperate rainforest region of south central Chile. The area of the study region is 4.2 million ha and the extent of plantations is approximately 200,000 ha. First, the spatial distribution of native forest conversion to plantations was determined. The variables related to the spatial distribution of this threatening process were identified through the development of a classification tree and the generation of a multivariate, spatially explicit, statistical model. The model of native forest conversion explained 43% of the deviance and the discrimination ability of the model was high. Predictions were made of where native forest conversion is likely to occur in the future. Due to patterns of climate, topography, soils and proximity to infrastructure and towns, remaining forest areas differ in their relative risk of being converted to plantations. Another factor that may increase the vulnerability of remaining native forest in a subset of the study region is the proposed construction of a highway. We found that 90% of the area of existing plantations within this region is within 2.5 km of roads. When the predictions of native forest conversion were recalculated accounting for the construction of this highway, it was found that approximately 27,000 ha of native forest had an increased probability of conversion. The areas of native forest identified to be vulnerable to conversion are outside of the existing reserve network. (C) 2004 Elsevier Ltd. All rights reserved.
Abstract:
All signals that appear to be periodic have some sort of variability from period to period regardless of how stable they appear to be in a data plot. A true sinusoidal time series is a deterministic function of time that never changes and thus has zero bandwidth around the sinusoid's frequency. A zero bandwidth is impossible in nature since all signals have some intrinsic variability over time. Deterministic sinusoids are used to model cycles as a mathematical convenience. Hinich [IEEE J. Oceanic Eng. 25 (2) (2000) 256-261] introduced a parametric statistical model, called the randomly modulated periodicity (RMP), that allows one to capture the intrinsic variability of a cycle. As with a deterministic periodic signal, the RMP can have a number of harmonics. The likelihood ratio test for this model when the amplitudes and phases are known is given in [M.J. Hinich, Signal Processing 83 (2003) 1349-1352]. A method for detecting an RMP whose amplitudes and phases are an unknown random process, observed in additive stationary noise, is addressed in this paper. The only assumption on the additive noise is that it has finite dependence and finite moments. Using simulations based on a simple RMP model, we show a case where the new method can detect the signal when the signal is not detectable in a standard waterfall spectrogram display. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
Background: There is a recognized need to move from mortality to morbidity outcome predictions following traumatic injury. However, there are few morbidity outcome prediction scoring methods and these fail to incorporate important comorbidities or cofactors. This study aims to develop and evaluate a method that includes such variables. Methods: This was a consecutive case series registered in the Queensland Trauma Registry that consented to a prospective 12-month telephone-conducted follow-up study. A multivariable statistical model was developed relating Trauma Registry data to trichotomized 12-month post-injury outcome (categories: no limitations, minor limitations and major limitations). Cross-validation techniques using successive single hold-out samples were then conducted to evaluate the model's predictive capabilities. Results: In total, 619 participated, with 337 (54%) experiencing no limitations, 101 (16%) experiencing minor limitations and 181 (29%) experiencing major limitations 12 months after injury. The final parsimonious multivariable statistical model included whether the injury was in the lower extremity body region, injury severity, age, length of hospital stay, pulse at admission and whether the participant was admitted to an intensive care unit. This model explained 21% of the variability in post-injury outcome. Predictively, 64% of those with no limitations, 18% of those with minor limitations and 37% of those with major limitations were correctly identified. Conclusion: Although carefully developed, this statistical model lacks the predictive power necessary for its use as the basis of a useful prognostic tool. Further research is required to identify variables other than those routinely used in the Trauma Registry to develop a model with the necessary predictive utility.
Abstract:
This research aimed to verify whether there is a difference between the returns of companies listed on the IBOVESPA and those listed in the Corporate Governance Levels created by BOVESPA in December 2000, which were designed to distinguish companies that voluntarily adopted additional corporate governance practices. The main objective of this process is to improve transparency between investors and companies, thereby reducing information asymmetry. The research began with a literature review covering information asymmetry, transaction cost theory, agency theory, and the history of corporate governance worldwide and in Brazil. To verify whether there is a difference between the returns of companies listed on the IBOVESPA and those in the differentiated corporate governance levels, the ANOVA statistical model and the t-test were applied to a sample comprising all companies listed on the IBOVESPA on 31/03/2011 whose shares were traded between 2006 and 2010, the last five years since the implementation of the governance levels in Brazil. The results show that companies listed on the Novo Mercado and Level 1 had higher returns than companies listed on the traditional market.
Abstract:
The aim of this study was to determine the cues used to signal avoidance of difficult driving situations and to test the hypothesis that drivers with relatively poor high contrast visual acuity (HCVA) have fewer crashes than drivers with relatively poor normalised low contrast visual acuity (NLCVA). This is because those with poorer HCVA are well aware of their difficulties and avoid dangerous driving situations, while those with poorer NLCVA are often unaware of the extent of their problem. Age, self-reported situation avoidance and HCVA were collected during a practice-based study of 690 drivers. Screening was also carried out on 7254 drivers at various venues, mainly motorway sites, throughout the UK. Age, self-reported situation avoidance and prior crash involvement were recorded and Titmus vision screeners were used to measure HCVA and NLCVA. Situation avoidance increased in reduced visibility conditions and was influenced by age and HCVA. Only half of the drivers used visual cues to signal situation avoidance and most of these drivers used high rather than low contrast cues. A statistical model designed to remove confounding interrelationships between variables showed, for drivers that did not report situation avoidance, that crash involvement decreased for drivers with below average HCVA and increased for those with below average NLCVA. These relationships accounted for less than 1% of the crash variance, so the hypothesis was not strongly supported. © 2002 The College of Optometrists.
Abstract:
Analysis of variance (ANOVA) is the most efficient method available for the analysis of experimental data. Analysis of variance is a method of considerable complexity and subtlety, with many different variations, each of which applies in a particular experimental context. Hence, it is possible to apply the wrong type of ANOVA to data and, therefore, to draw an erroneous conclusion from an experiment. This article reviews the types of ANOVA most likely to arise in clinical experiments in optometry including the one-way ANOVA ('fixed' and 'random effect' models), two-way ANOVA in randomised blocks, three-way ANOVA, and factorial experimental designs (including the varieties known as 'split-plot' and 'repeated measures'). For each ANOVA, the appropriate experimental design is described, a statistical model is formulated, and the advantages and limitations of each type of design discussed. In addition, the problems of non-conformity to the statistical model and determination of the number of replications are considered. © 2002 The College of Optometrists.
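For the simplest design reviewed above, the one-way fixed-effects ANOVA, the test statistic is the ratio of the between-group to the within-group mean square. A minimal self-contained sketch (the function name is ours, not from the article):

```python
def one_way_anova_f(groups):
    """One-way fixed-effects ANOVA: F = MS_between / MS_within.
    groups: list of lists of measurements, one list per treatment group."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total number of observations
    grand = sum(sum(g) for g in groups) / n
    # between-group sum of squares, on k - 1 degrees of freedom
    ss_b = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # within-group sum of squares, on n - k degrees of freedom
    ss_w = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_b / (k - 1)) / (ss_w / (n - k))
```

Choosing the correct error term is exactly where the variants discussed in the article (randomised blocks, split-plot, repeated measures) differ: each partitions the within-group variability differently.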
Abstract:
A visualization plot of a data set of molecular data is a useful tool for gaining insight into a set of molecules. In chemoinformatics, most visualization plots are of molecular descriptors, and the statistical model most often used to produce a visualization is principal component analysis (PCA). This paper takes PCA, together with four other statistical models (NeuroScale, GTM, LTM, and LTM-LIN), and evaluates their ability to produce clustering in visualizations not of molecular descriptors but of molecular fingerprints. Two different tasks are addressed: understanding structural information (particularly combinatorial libraries) and relating structure to activity. The quality of the visualizations is compared both subjectively (by visual inspection) and objectively (with global distance comparisons and local k-nearest-neighbor predictors). On the data sets used to evaluate clustering by structure, LTM is found to perform significantly better than the other models. In particular, the clusters in LTM visualization space are consistent with the relationships between the core scaffolds that define the combinatorial sublibraries. On the data sets used to evaluate clustering by activity, LTM again gives the best performance but by a smaller margin. The results of this paper demonstrate the value of using both a nonlinear projection map and a Bernoulli noise model for modeling binary data.
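PCA, the baseline model in the comparison above, projects each fingerprint vector onto the directions of largest variance. A minimal sketch of extracting the first principal component by power iteration on mean-centred data, treating fingerprint bits as 0/1 floats (the implementation details are ours, not from the paper):

```python
def first_pc(data, iters=200):
    """First principal component of row-vector data via power iteration.
    The covariance matrix is kept implicit: each step computes
    X^T (X v) / n instead of materializing C = X^T X / n."""
    n, d = len(data), len(data[0])
    mean = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - mean[j] for j in range(d)] for row in data]
    v = [1.0] * d
    for _ in range(iters):
        # scores = X v, then w = X^T scores / n  (i.e. w = C v)
        scores = [sum(r[j] * v[j] for j in range(d)) for r in centered]
        w = [sum(scores[i] * centered[i][j] for i in range(n)) / n
             for j in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # return the direction and the 1-D projection of each molecule
    return v, [sum(r[j] * v[j] for j in range(d)) for r in centered]
```

The paper's point is that such a linear projection, unlike the nonlinear maps with a Bernoulli noise model (GTM, LTM), is not well matched to binary fingerprint data.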
Abstract:
The target of no-reference (NR) image quality assessment (IQA) is to establish a computational model to predict the visual quality of an image. The existing prominent method is based on natural scene statistics (NSS). It uses the joint and marginal distributions of wavelet coefficients for IQA. However, this method is only applicable to JPEG2000 compressed images. Since the wavelet transform fails to capture the directional information of images, an improved NSS model is established using contourlets. In this paper, the contourlet transform is applied to the NSS of images, and the relationship of contourlet coefficients is represented by the joint distribution. The statistics of contourlet coefficients are applicable to indicating variation of image quality. In addition, an image-dependent threshold is adopted to reduce the effect of image content on the statistical model. Finally, image quality can be evaluated by combining the extracted features in each subband nonlinearly. Our algorithm is trained and tested on the LIVE database II. Experimental results demonstrate that the proposed algorithm is superior to the conventional NSS model and can be applied to different distortions. © 2009 Elsevier B.V. All rights reserved.
Abstract:
2000 Mathematics Subject Classification: 62P10, 62J12.
Abstract:
2010 Mathematics Subject Classification: 94A17.
Abstract:
This study explores factors related to prompt difficulty in Automated Essay Scoring. The sample was composed of 6,924 students. For each student, there were 1-4 essays, across 20 different writing prompts, for a total of 20,243 essays. The E-rater® v.2 essay scoring engine developed by the Educational Testing Service was used to score the essays. The scoring engine employs a statistical model that incorporates 10 predictors associated with writing characteristics, of which 8 were used. The Rasch partial credit analysis was applied to the scores to determine the difficulty levels of prompts. In addition, the scores were used as outcomes in a series of hierarchical linear models (HLM) in which students and prompts constituted the cross-classification levels. This methodology was used to explore the partitioning of the essay score variance. The results indicated significant differences in prompt difficulty levels due to genre. Descriptive prompts, as a group, were found to be more difficult than the persuasive prompts. In addition, the essay score variance was partitioned between students and prompts. The amount of the essay score variance that lies between prompts was found to be relatively small (4 to 7 percent). When the essay-level, student-level, and prompt-level predictors were included in the model, it was able to explain almost all of the variance that lies between prompts. Since in most high-stakes writing assessments only 1-2 prompts per student are used, the essay score variance that lies between prompts represents an undesirable or "noise" variation. Identifying factors associated with this "noise" variance may prove to be important for prompt writing and for constructing Automated Essay Scoring mechanisms that weight prompt difficulty when assigning essay scores.
Abstract:
Solar activity indicators, such as sunspot numbers, sunspot area and flares, over the Sun's photosphere are not considered to be symmetric between the northern and southern hemispheres of the Sun. This behavior is also known as the North-South Asymmetry of the different solar indices. Among the different conclusions obtained by several authors, we can note that the N-S asymmetry is a real and systematic phenomenon and is not due to random variability. In the present work, the probability distributions from the Marshall Space Flight Centre (MSFC) database are investigated using a statistical tool arising from the well-known Non-Extensive Statistical Mechanics proposed by C. Tsallis in 1988. We present our results and discuss their physical implications with the help of a theoretical model and observations. We find that there is a strong dependence between the nonextensive entropic parameter q and the long-term solar variability present in the sunspot area data. Among the most important results, we highlight that the asymmetry index q reveals the dominance of the North over the South. This behavior has been discussed and confirmed by several authors, but it had never before been attributed to a property of a statistical model. Thus, we conclude that this parameter can be considered an effective measure for diagnosing long-term variations of the solar dynamo. Finally, this dissertation opens a new approach for investigating time series in astrophysics from the perspective of non-extensivity.
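The entropic index q above comes from Tsallis' non-extensive statistics, in which the entropy of a discrete distribution p is S_q = (1 - sum_i p_i^q) / (q - 1), recovering the Boltzmann-Gibbs-Shannon form as q approaches 1. A minimal sketch of this definition (the function name is ours):

```python
import math

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1) of a
    discrete probability distribution p; the q -> 1 limit is the
    Boltzmann-Gibbs-Shannon entropy -sum_i p_i ln p_i."""
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)
```

Values of q above 1 weight the dominant probabilities more heavily, which is how the index can act as an asymmetry diagnostic between the two hemispheres.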