950 results for Geo-statistical model


Relevance:

30.00%

Publisher:

Abstract:

Commercial explosives behave non-ideally in rock blasting. A direct and convenient measure of non-ideality is the detonation velocity. In this study, an alternative model fitted to experimental unconfined detonation velocity data is proposed and the effect of confinement on the detonation velocity is modelled. Unconfined data of several explosives showing various levels of non-ideality were successfully modelled. The effect of confinement on detonation velocity was modelled empirically based on field detonation velocity measurements. Confined detonation velocity is a function of the ideal detonation velocity, the unconfined detonation velocity at a given blasthole diameter, and the rock stiffness. For a given explosive and charge diameter, as confinement increases, the detonation velocity increases. The confinement model is implemented in a simple engineering-based non-ideal detonation model. A number of simulations are carried out and analysed to predict the explosive performance parameters for the adopted blasting conditions.
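The abstract does not report the fitted functional forms. As a rough illustration only, a diameter law for the unconfined velocity plus a stiffness-dependent interpolation towards the ideal velocity could be sketched as below; the equations, parameter names and data values are assumptions for illustration, not the paper's model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical diameter law: unconfined VoD approaches the ideal
# (infinite-diameter) value as charge diameter d grows.
def vod_unconfined(d, vod_ideal, a, b):
    return vod_ideal * (1.0 - a / d) ** b

# Hypothetical confinement term: confined VoD interpolates between
# the unconfined and ideal values as rock stiffness E (GPa) grows.
def vod_confined(d, E, vod_ideal, a, b, k):
    vu = vod_unconfined(d, vod_ideal, a, b)
    return vu + (vod_ideal - vu) * (1.0 - np.exp(-k * E))

# Illustrative (made-up) unconfined measurements: diameter in mm,
# detonation velocity in m/s.
d_mm = np.array([50.0, 75.0, 100.0, 150.0, 200.0])
vod_ms = np.array([3900.0, 4400.0, 4700.0, 5000.0, 5150.0])
popt, _ = curve_fit(vod_unconfined, d_mm, vod_ms, p0=[5500.0, 10.0, 1.0],
                    bounds=([3000.0, 0.0, 0.1], [7000.0, 40.0, 3.0]))
print("fitted (vod_ideal, a, b):", popt)
```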

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this work was to model lung cancer mortality as a function of past exposure to tobacco and to forecast age-sex-specific lung cancer mortality rates. A 3-factor age-period-cohort (APC) model, in which the period variable is replaced by the product of average tar content and adult tobacco consumption per capita, was estimated for the US, UK, Canada and Australia by the maximum likelihood method. Age- and sex-specific tobacco consumption was estimated from historical data on smoking prevalence and total tobacco consumption. Lung cancer mortality was derived from vital registration records. Future tobacco consumption, tar content and the cohort parameter were projected by autoregressive moving average (ARIMA) estimation. The optimal exposure variable was found to be the product of average tar content and adult cigarette consumption per capita, lagged for 25-30 years for both males and females in all 4 countries. The coefficient of the product of average tar content and tobacco consumption per capita differs by age and sex. In all models, there was a statistically significant difference in the coefficient of the period variable by sex. In all countries, male age-standardized lung cancer mortality rates peaked in the 1980s and declined thereafter. Female mortality rates are projected to peak in the first decade of this century. The multiplicative models of age, tobacco exposure and cohort fit the observed data between 1950 and 1999 reasonably well, and time-series models yield plausible past trends of relevant variables. Despite a significant reduction in tobacco consumption and average tar content of cigarettes sold over the past few decades, the effect on lung cancer mortality is affected by the time lag between exposure and established disease. As a result, the burden of lung cancer among females is only just reaching, or soon will reach, its peak but has been declining for 1 to 2 decades in men. Future sex differences in lung cancer mortality are likely to be greater in North America than Australia and the UK due to differences in exposure patterns between the sexes. (c) 2005 Wiley-Liss, Inc.
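A plausible rendering of the described 3-factor model in equation form (the notation and exact parameterisation are assumed here, not quoted from the paper):

```latex
\[
  \log \mu_{a,s,t} \;=\; \alpha_{a,s} \;+\; \beta_{a,s}\,\bigl(T_{t-\ell}\,C_{t-\ell}\bigr) \;+\; \gamma_{c},
  \qquad c = t - a,\quad \ell \approx 25\text{--}30 \text{ years},
\]
```

where $\mu_{a,s,t}$ is the lung cancer mortality rate at age $a$ for sex $s$ in year $t$, $T$ is average tar content, $C$ is adult cigarette consumption per capita, and $\gamma_c$ is the cohort effect.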

Relevance:

30.00%

Publisher:

Abstract:

Sorghum is the main dryland summer crop in NE Australia and a number of agricultural businesses would benefit from an ability to forecast production likelihood at regional scale. In this study we sought to develop a simple agro-climatic modelling approach for predicting shire (statistical local area) sorghum yield. Actual shire yield data, available for the period 1983-1997 from the Australian Bureau of Statistics, were used to train the model. Shire yield was related to a water stress index (SI) that was derived from the agro-climatic model. The model involved a simple fallow and crop water balance that was driven by climate data available at recording stations within each shire. Parameters defining the soil water holding capacity, maximum number of sowings (MXNS) in any year, planting rainfall requirement, and critical period for stress during the crop cycle were optimised as part of the model fitting procedure. Cross-validated correlations (CVR) ranged from 0.5 to 0.9 at shire scale. When aggregated to regional and national scales, 78-84% of the annual variation in sorghum yield was explained. The model was used to examine trends in sorghum productivity and the approach to using it in an operational forecasting system was outlined. (c) 2005 Elsevier B.V. All rights reserved.
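A minimal sketch of the agro-climatic idea, assuming a simple bucket water balance and a linear yield-on-SI regression (the bucket capacity, critical window and all data below are illustrative assumptions, not the paper's fitted values):

```python
import numpy as np

# Bucket water balance driven by daily rain and evapotranspiration;
# SI = mean relative water deficit over an assumed critical window.
def stress_index(rain, et, capacity=150.0, critical=slice(60, 100)):
    soil = np.zeros(len(rain))
    w = capacity / 2.0                      # assumed starting store (mm)
    for i, (r, e) in enumerate(zip(rain, et)):
        w = np.clip(w + r - e, 0.0, capacity)
        soil[i] = w
    return np.mean(1.0 - soil[critical] / capacity)

rng = np.random.default_rng(0)
rain = rng.gamma(0.3, 12.0, size=120)       # illustrative daily rainfall (mm)
et = np.full(120, 5.0)                      # illustrative daily ET demand (mm)
print(f"SI = {stress_index(rain, et):.2f}")

# Fit shire yield = b0 + b1 * SI across years by ordinary least squares.
si = np.array([0.2, 0.5, 0.35, 0.7, 0.15])          # illustrative
yield_t_ha = np.array([2.4, 1.5, 1.9, 1.0, 2.6])    # illustrative
b1, b0 = np.polyfit(si, yield_t_ha, 1)
print(f"yield ≈ {b0:.2f} + {b1:.2f}·SI")
```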

Relevance:

30.00%

Publisher:

Abstract:

Standard factorial designs sometimes may be inadequate for experiments that aim to estimate a generalized linear model, for example, for describing a binary response in terms of several variables. A method is proposed for finding exact designs for such experiments that uses a criterion allowing for uncertainty in the link function, the linear predictor, or the model parameters, together with a design search. Designs are assessed and compared by simulation of the distribution of efficiencies relative to locally optimal designs over a space of possible models. Exact designs are investigated for two applications, and their advantages over factorial and central composite designs are demonstrated.
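For context, the standard ingredients behind such comparisons are the GLM information matrix and the D-efficiency; a generic sketch is given below, noting that the paper's criterion additionally allows for uncertainty in the link, linear predictor and parameters.

```latex
% Generic local D-optimality ingredients for a design \xi with support
% points x_i; the robust criterion averages efficiencies of this kind
% over the space of candidate models.
\[
  M(\xi;\theta) \;=\; \sum_{i=1}^{n} w(x_i;\theta)\, f(x_i)\, f(x_i)^{\top},
  \qquad
  \mathrm{Eff}_D(\xi;\theta) \;=\;
  \left( \frac{\det M(\xi;\theta)}{\det M(\xi^{*}_{\theta};\theta)} \right)^{1/p},
\]
```

For a binary (logistic) response the weight is $w(x;\theta)=\mu(x)\bigl(1-\mu(x)\bigr)$, with $\mu$ the success probability under linear predictor $\eta=f(x)^{\top}\theta$, and $\xi^{*}_{\theta}$ the locally D-optimal design for the $p$ parameters.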

Relevance:

30.00%

Publisher:

Abstract:

A modified UNIQUAC model has been extended to describe and predict the equilibrium relative humidity and moisture content for wood. The method is validated over a range of moisture content from the oven-dried state to the fiber saturation point, and over a temperature range of 20-70 degrees C. Adjustable parameters and binary interaction parameters of the UNIQUAC model were estimated from experimental data for Caribbean pine and Hoop pine as well as data available in the literature. The two group-interaction parameters for the wood-moisture system were consistent with using functional-group contributions for H2O, -OH and -CHO. The result reconfirms that the main contributors to water adsorption in cell walls are the hydroxyl groups of the carbohydrates in cellulose and hemicelluloses. This provides some physical insight into the intermolecular force and energy between bound water and the wood material. (c) 2006 Elsevier Ltd. All rights reserved.
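The equilibrium condition underlying such sorption models can be sketched as follows, using the standard UNIQUAC decomposition (the paper's wood-specific modifications are not reproduced here):

```latex
\[
  \frac{\mathrm{RH}}{100} \;=\; a_w \;=\; x_w\,\gamma_w(x_w, T),
  \qquad
  \ln\gamma_w \;=\; \ln\gamma_w^{\mathrm{C}} \;+\; \ln\gamma_w^{\mathrm{R}},
\]
```

where $x_w$ is the mole fraction of bound water in the wood-water mixture, $\gamma_w^{\mathrm{C}}$ is the combinatorial (size/shape) term, and $\gamma_w^{\mathrm{R}}$ is the residual term carrying the fitted binary/group-interaction parameters.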

Relevance:

30.00%

Publisher:

Abstract:

This paper presents the creation of 3D statistical shape models of the knee bones and their use to embed information into a segmentation system for MRIs of the knee. We propose utilising the strong spatial relationship between the cartilages and the bones in the knee by embedding this information into the created models. This information can then be used to automate the initialisation of segmentation algorithms for the cartilages. The approach used to automatically generate the 3D statistical shape models of the bones is based on the point distribution model optimisation framework of Davies. Our implementation of this scheme uses a parameterized surface extraction algorithm, which is used as the basis for the optimisation scheme that automatically creates the 3D statistical shape models. The current approach is illustrated by generating 3D statistical shape models of the patella, tibia and femur from a segmented database of the knee. The use of these models to embed spatial relationship information to aid in the automation of segmentation algorithms for the cartilages is then illustrated.
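A minimal sketch of the point distribution model construction that such systems rest on; this shows the generic mean-plus-modes decomposition, not the Davies optimisation scheme itself, and assumes point correspondences and alignment (e.g., Procrustes) are already established.

```python
import numpy as np

def build_pdm(shapes):
    # shapes: (n_samples, n_points * 3) flattened landmark coordinates
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # PCA via SVD of the centered data matrix
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt                              # rows: modes of variation
    variances = s**2 / (len(shapes) - 1)    # variance explained per mode
    return mean, modes, variances

def synthesise(mean, modes, b):
    # New shape instance: x = mean + sum_k b_k * mode_k
    return mean + b @ modes[: len(b)]

rng = np.random.default_rng(0)
shapes = rng.normal(size=(20, 300))         # 20 samples, 100 3D points
mean, modes, var = build_pdm(shapes)
x_new = synthesise(mean, modes, np.array([2.0, -1.0]))
```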

Relevance:

30.00%

Publisher:

Abstract:

A formalism for modelling the dynamics of Genetic Algorithms (GAs) using methods from statistical mechanics, originally due to Prügel-Bennett and Shapiro, is reviewed, generalized and improved upon. This formalism can be used to predict the averaged trajectory of macroscopic statistics describing the GA's population. These macroscopics are chosen to average well between runs, so that fluctuations from mean behaviour can often be neglected. Where necessary, non-trivial terms are determined by assuming maximum entropy with constraints on known macroscopics. Problems of realistic size are described in compact form and finite population effects are included, often proving to be of fundamental importance. The macroscopics used here are cumulants of an appropriate quantity within the population and the mean correlation (Hamming distance) within the population. Including the correlation as an explicit macroscopic provides a significant improvement over the original formulation. The formalism is applied to a number of simple optimization problems in order to determine its predictive power and to gain insight into GA dynamics. Problems which are most amenable to analysis come from the class where alleles within the genotype contribute additively to the phenotype. This class can be treated with some generality, including problems with inhomogeneous contributions from each site, non-linear or noisy fitness measures, simple diploid representations and temporally varying fitness. The results can also be applied to a simple learning problem, generalization in a binary perceptron, and a limit is identified for which the optimal training batch size can be determined for this problem. The theory is compared to averaged results from a real GA in each case, showing excellent agreement if the maximum entropy principle holds. Some situations where this approximation breaks down are identified. In order to fully test the formalism, an attempt is made on the strongly NP-hard problem of storing random patterns in a binary perceptron. Here, the relationship between the genotype and phenotype (training error) is strongly non-linear. Mutation is modelled under the assumption that perceptron configurations are typical of perceptrons with a given training error. Unfortunately, this assumption does not provide a good approximation in general. It is conjectured that perceptron configurations would have to be constrained by other statistics in order to accurately model mutation for this problem. Issues arising from this study are discussed in conclusion and some possible areas of further research are outlined.
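As a small illustration of the macroscopics involved, the sketch below computes fitness cumulants and the mean pairwise Hamming distance for a binary GA population; it is illustrative only and implements none of the maximum-entropy dynamics.

```python
import numpy as np
from scipy import stats

def macroscopics(population, fitness):
    f = fitness(population)                 # per-individual fitness
    k1, k2 = f.mean(), f.var()              # first two cumulants
    k3 = stats.moment(f, 3)                 # third cumulant = 3rd central moment
    p = population.mean(axis=0)             # per-site allele frequencies
    # mean Hamming distance between two members drawn with replacement
    hamming = 2.0 * np.sum(p * (1.0 - p))
    return k1, k2, k3, hamming

rng = np.random.default_rng(1)
pop = rng.integers(0, 2, size=(100, 64))    # population of 100, genotype length 64
onemax = lambda x: x.sum(axis=1).astype(float)  # additive (onemax) phenotype
print(macroscopics(pop, onemax))
```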

Relevance:

30.00%

Publisher:

Abstract:

Recently, Drăgulescu and Yakovenko proposed an analytical formula for computing the probability density function of stock log returns, based on the Heston model, which they tested empirically. Their research design inadvertently biased the fit of the data to the Heston model favourably, thus overstating their empirical results. Furthermore, Drăgulescu and Yakovenko did not perform any goodness-of-fit statistical tests. This study employs a research design that facilitates statistical tests of the goodness-of-fit of the Heston model to empirical returns. Robustness checks are also performed. In brief, the Heston model outperformed the Gaussian model only at high frequencies and, even so, did not provide a statistically acceptable fit to the data. The Gaussian model performed (marginally) better at medium and low frequencies, at which points the extra parameters of the Heston model have adverse impacts on the test statistics. © 2005 Taylor & Francis Group Ltd.
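A sketch of the goodness-of-fit step for the Gaussian benchmark is below; testing the Heston marginal density would substitute the Drăgulescu-Yakovenko formula for the normal CDF and is omitted here. The data are synthetic.

```python
import numpy as np
from scipy import stats

# Fit a normal to log returns and apply a Kolmogorov-Smirnov test.
rng = np.random.default_rng(42)
log_returns = rng.standard_t(df=4, size=2000) * 0.01   # illustrative, fat-tailed

mu, sigma = log_returns.mean(), log_returns.std(ddof=1)
stat, pvalue = stats.kstest(log_returns, "norm", args=(mu, sigma))
print(f"KS statistic = {stat:.4f}, p = {pvalue:.4g}")
# Caveat: using estimated (mu, sigma) biases this p-value upward; a
# Lilliefors correction or parametric bootstrap would be preferable.
```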

Relevance:

30.00%

Publisher:

Abstract:

As wireless network technologies evolve towards an All-IP framework, Next Generation Wireless Communication Devices demand better use of spectral resources by employing advanced techniques of silence suppression. This paper presents an analysis of VoIP call data and compares the statistical results, based on observed patterns of talk spurts and silence lengths, to those achieved by a modified on-off voice model for silence suppression in wireless networks. As talk spurts and silence lengths are sensitive to varying word lengths, temporal structure and other prosodic aspects of speech, the impact of the language, dialect and gender of the speakers on these results is also assessed.
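A minimal sketch of the classic exponential on-off source that such modified models start from (the parameter values are illustrative, not the paper's estimates):

```python
import numpy as np

# Two-state on-off voice source with exponential talk-spurt and
# silence durations; the modified model would adjust these
# distributions to match observed VoIP data.
def simulate_on_off(mean_talk=1.0, mean_silence=1.35, n_cycles=10000, seed=0):
    rng = np.random.default_rng(seed)
    talk = rng.exponential(mean_talk, n_cycles)
    silence = rng.exponential(mean_silence, n_cycles)
    activity = talk.sum() / (talk.sum() + silence.sum())
    return talk, silence, activity

talk, silence, activity = simulate_on_off()
print(f"voice activity factor ≈ {activity:.3f}")
```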

Relevance:

30.00%

Publisher:

Abstract:

Molecular transport in phase space is crucial for chemical reactions because it defines how pre-reactive molecular configurations are found during the time evolution of the system. Using Molecular Dynamics (MD) simulated atomistic trajectories, we test the assumption of normal diffusion in the phase space for bulk water at ambient conditions by checking the equivalence of the transport to the random walk model. Contrary to common expectations, we have found that some statistical features of the transport in the phase space differ from those of the normal diffusion models. This implies a non-random character of the path search process by the reacting complexes in water solutions. Our further numerical experiments show that long periods of significant non-stationarity in the transition probabilities of the segments of molecular trajectories can account for the observed non-uniform filling of the phase space. Surprisingly, the characteristic periods of this non-stationarity amount to hundreds of nanoseconds, a much longer time scale than the typical lifetime of known liquid water molecular structures (several picoseconds).
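The random-walk check described here is commonly implemented via the mean squared displacement, whose log-log slope should be 1 for normal diffusion; a minimal sketch on a synthetic reference walk:

```python
import numpy as np

# MSD(t) ~ t^alpha; alpha = 1 for a random walk (normal diffusion),
# alpha != 1 signals anomalous transport.
def msd(traj, max_lag):
    # traj: (n_steps, dim) positions of one particle
    return np.array([
        np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
        for lag in range(1, max_lag + 1)
    ])

rng = np.random.default_rng(3)
walk = np.cumsum(rng.normal(size=(100000, 3)), axis=0)  # reference random walk
lags = np.arange(1, 101)
alpha = np.polyfit(np.log(lags), np.log(msd(walk, 100)), 1)[0]
print(f"anomalous exponent alpha ≈ {alpha:.3f}  (1.0 for normal diffusion)")
```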

Relevance:

30.00%

Publisher:

Abstract:

Properties of computing Boolean circuits composed of noisy logical gates are studied using the statistical physics methodology. A formula-growth model that gives rise to random Boolean functions is mapped onto a spin system, which facilitates the study of their typical behavior in the presence of noise. Bounds on their performance, derived in the information theory literature for specific gates, are straightforwardly retrieved, generalized and identified as the corresponding macroscopic phase transitions. The framework is employed for deriving results on error rates at various function depths and on function sensitivity, and their dependence on the gate type and noise model used. These are difficult to obtain via the traditional methods used in this field.
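A small numerical sketch of a formula-growth experiment with noisy gates (ε-noisy NAND over balanced binary formulas; the spin-system mapping itself is analytical and not reproduced here):

```python
import numpy as np

# Evaluate a random balanced NAND formula twice on shared random
# leaves: once with noiseless gates and once with gates that flip
# their output with probability eps. Returns (noisy, clean) bits.
def noisy_eval(depth, eps, rng):
    if depth == 0:
        v = int(rng.integers(0, 2))
        return v, v                      # same leaf feeds both circuits
    a_n, a_c = noisy_eval(depth - 1, eps, rng)
    b_n, b_c = noisy_eval(depth - 1, eps, rng)
    clean = 1 - (a_c & b_c)              # ideal NAND
    noisy = 1 - (a_n & b_n)
    if rng.random() < eps:               # gate output flips with prob eps
        noisy ^= 1
    return noisy, clean

rng = np.random.default_rng(7)
for depth in (2, 4, 6, 8):
    errs = [n != c for n, c in (noisy_eval(depth, 0.05, rng) for _ in range(2000))]
    print(f"depth {depth}: error rate ≈ {np.mean(errs):.3f}")
```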

Relevance:

30.00%

Publisher:

Abstract:

When constructing and using environmental models, it is typical that many of the inputs to the models will not be known perfectly. In some cases, it will be possible to make observations, or occasionally physics-based uncertainty propagation, to ascertain the uncertainty on these inputs. However, such observations are often either not available or even possible, and another approach to characterising the uncertainty on the inputs must be sought. Even when observations are available, if the analysis is being carried out within a Bayesian framework then prior distributions will have to be specified. One option for gathering or at least estimating this information is to employ expert elicitation. Expert elicitation is well studied within statistics and psychology and involves the assessment of the beliefs of a group of experts about an uncertain quantity (for example, an input or parameter within a model), typically in terms of obtaining a probability distribution. One of the challenges in expert elicitation is to minimise the biases that might enter into the judgements made by the individual experts, and then to come to a consensus decision within the group of experts. Effort is made in the elicitation exercise to prevent biases clouding the judgements through well-devised questioning schemes. It is also important that, when reaching a consensus, the experts are exposed to the knowledge of the others in the group. Within the FP7 UncertWeb project (http://www.uncertweb.org/), there is a requirement to build a Web-based tool for expert elicitation. In this paper, we discuss some of the issues of building a Web-based elicitation system - both the technological aspects and the statistical and scientific issues. In particular, we demonstrate two tools: a Web-based system for the elicitation of continuous random variables and a system designed to elicit uncertainty about categorical random variables in the setting of landcover classification uncertainty. The first of these examples is a generic tool developed to elicit uncertainty about univariate continuous random variables. It is designed to be used within an application context and extends the existing SHELF method, adding a web interface and access to metadata. The tool is developed so that it can be readily integrated with environmental models exposed as web services. The second example was developed for the TREES-3 initiative which monitors tropical landcover change through ground-truthing at confluence points. It allows experts to validate the accuracy of automated landcover classifications using site-specific imagery and local knowledge. Experts may provide uncertainty information at various levels: from a general rating of their confidence in a site validation to a numerical ranking of the possible landcover types within a segment. A key challenge in the web based setting is the design of the user interface and the method of interacting between the problem owner and the problem experts. We show the workflow of the elicitation tool, and show how we can represent the final elicited distributions and confusion matrices using UncertML, ready for integration into uncertainty-enabled workflows. We also show how the metadata associated with the elicitation exercise is captured and can be referenced from the elicited result, providing crucial lineage information and thus traceability in the decision making process.
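As one concrete example of an elicitation-fitting step in the SHELF spirit (this is a generic quartile-to-normal fit, not UncertWeb's actual implementation):

```python
from scipy import stats

# Recover a normal distribution from an expert's elicited median
# and quartiles. For a normal, q75 - q25 = 2 * 0.6745 * sigma.
def normal_from_quartiles(q25, q50, q75):
    mu = q50
    sigma = (q75 - q25) / (2 * stats.norm.ppf(0.75))
    return stats.norm(loc=mu, scale=sigma)

dist = normal_from_quartiles(q25=12.0, q50=15.0, q75=19.0)
print(dist.mean(), dist.std(), dist.interval(0.9))
```

In practice the elicited quartiles are rarely symmetric, so such tools typically fit several candidate families and show the expert the implied distribution for feedback before a consensus is sought.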

Relevance:

30.00%

Publisher:

Abstract:

The topic of this thesis is the development of knowledge-based statistical software. The shortcomings of conventional statistical packages are discussed to illustrate the need to develop software which is able to exhibit a greater degree of statistical expertise, thereby reducing the misuse of statistical methods by those not well versed in the art of statistical analysis. Some of the issues involved in the development of knowledge-based software are presented and a review is given of some of the systems that have been developed so far. The majority of these have moved away from conventional architectures by adopting what can be termed an expert-systems approach. The thesis then proposes an approach which is based upon the concept of semantic modelling. By representing some of the semantic meaning of data, it is conceived that a system could examine a request to apply a statistical technique and check whether the use of the chosen technique was semantically sound, i.e. whether the results obtained would be meaningful. Current systems, in contrast, can only perform what can be considered as syntactic checks. The prototype system that has been implemented to explore the feasibility of such an approach is presented; the system has been designed as an enhanced variant of a conventional-style statistical package. This involved developing a semantic data model to represent some of the statistically relevant knowledge about data and identifying sets of requirements that should be met for the application of the statistical techniques to be valid. Those areas of statistics covered in the prototype are measures of association and tests of location.
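The semantic-check idea can be sketched in a few lines: variables carry statistically relevant metadata such as measurement level, and a technique is refused when its semantic requirements are not met. All names below are illustrative, not the thesis's design.

```python
from dataclasses import dataclass

LEVELS = ("nominal", "ordinal", "interval", "ratio")

@dataclass
class Variable:
    name: str
    level: str          # one of LEVELS
    values: list

def check_pearson(x: Variable, y: Variable):
    # Pearson correlation presumes (at least) interval-scaled data
    for v in (x, y):
        if LEVELS.index(v.level) < LEVELS.index("interval"):
            raise ValueError(f"{v.name} is {v.level}-scaled; Pearson "
                             "correlation would be semantically invalid "
                             "- consider a rank-based measure instead.")

grade = Variable("grade", "ordinal", [1, 2, 2, 3])
height = Variable("height", "ratio", [1.60, 1.70, 1.80, 1.75])
try:
    check_pearson(grade, height)
except ValueError as err:
    print(err)   # the semantic check rejects the request
```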

Relevance:

30.00%

Publisher:

Abstract:

The topic of my research is consumer brand equity (CBE). My thesis is that the success or otherwise of a brand is better viewed from the consumers’ perspective. I specifically focus on consumers as a unique group of stakeholders whose involvement with brands is crucial to the overall success of branding strategy. To this end, this research examines the constellation of ideas on brand equity that have hitherto been offered by various scholars. Through a systematic integration of the concepts and practices identified by these scholars (concepts and practices such as competitiveness, consumer searching, consumer behaviour, brand image, brand relevance and consumer perceived value), this research identifies CBE as a construct that is shaped, directed and made valuable by the beliefs, attitudes and subjective preferences of consumers. This is done by examining the criteria on the basis of which consumers evaluate brands and make brand purchase decisions. Understanding the criteria by which consumers evaluate brands is crucial for several reasons. First, as the basis upon which consumers select brands changes with consumption norms and technology, understanding the consumer choice process will help in formulating branding strategy. Secondly, an understanding of these criteria will help in formulating a creative and innovative agenda for ‘new brand’ propositions. Thirdly, it will also influence firms’ ability to stimulate and mould the plasticity of demand for existing brands. In examining these three issues, this thesis presents a comprehensive account of CBE: the first issue concerns the content of CBE; the second addresses the problem of how to develop a reliable and valid measuring instrument for CBE; and the third examines the structural and statistical relationships between the factors of CBE and the consequences of CBE for consumer perceived value (CPV). Using LISREL-SIMPLIS 8.30, the study finds direct and significant influential links between consumer brand equity and consumer value perception.
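In standard LISREL notation, the structural link tested at the end of the abstract can be written as follows (a generic sketch, not the thesis's exact specification):

```latex
\[
  \eta_{\mathrm{CPV}} \;=\; \boldsymbol{\gamma}^{\top}\boldsymbol{\xi}_{\mathrm{CBE}} \;+\; \zeta,
  \qquad
  \mathbf{x} = \Lambda_x\,\boldsymbol{\xi} + \boldsymbol{\delta},
  \quad
  \mathbf{y} = \Lambda_y\,\boldsymbol{\eta} + \boldsymbol{\varepsilon},
\]
```

where the exogenous factors $\boldsymbol{\xi}_{\mathrm{CBE}}$ are the CBE dimensions, $\eta_{\mathrm{CPV}}$ is consumer perceived value, and the measurement equations link the latent factors to the survey indicators $\mathbf{x}$, $\mathbf{y}$.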

Relevance:

30.00%

Publisher:

Abstract:

Firstly, we numerically model a practical 20 Gb/s undersea configuration employing the Return-to-Zero Differential Phase Shift Keying data format. The modelling is completed using the Split-Step Fourier Method to solve the Generalised Nonlinear Schrödinger Equation. We optimise the dispersion map and per-channel launch power of these channels and investigate how the choice of pre/post compensation can influence the performance. After obtaining these optimal configurations, we investigate the Bit Error Rate estimation of these systems and we see that estimation based on Gaussian statistics of the electrical current is appropriate for systems of this type, indicating quasi-linear behaviour. The introduction of narrower pulses due to the deployment of quasi-linear transmission decreases the tolerance to chromatic dispersion and intra-channel nonlinearity. We used tools from Mathematical Statistics to study the behaviour of these channels in order to develop new methods to estimate Bit Error Rate. In the final section, we consider the estimation of Eye Closure Penalty, a popular measure of signal distortion. Using a numerical example and assuming the symmetry of eye closure, we see that we can simply estimate Eye Closure Penalty using Gaussian statistics. We also see that the statistics of the logical ones dominate the statistics of signal distortion in the case of Return-to-Zero On-Off Keying configurations.
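The core of the Split-Step Fourier Method for the scalar Nonlinear Schrödinger Equation can be sketched in a few lines; the fibre parameters below are generic illustrative values, not the thesis's 20 Gb/s RZ-DPSK link design. Under the Gaussian-current assumption mentioned above, the Bit Error Rate would then be estimated from the received eye statistics as BER = ½·erfc(Q/√2).

```python
import numpy as np

# Symmetric split-step Fourier sketch for the scalar NLSE:
#   dA/dz = -i (beta2/2) d^2A/dT^2 + i gamma |A|^2 A - (alpha/2) A
def ssfm(A, dz, n_steps, dt, beta2, gamma, alpha=0.0):
    w = 2 * np.pi * np.fft.fftfreq(len(A), d=dt)        # angular frequencies
    half_lin = np.exp((1j * beta2 / 2 * w**2 - alpha / 2) * dz / 2)
    for _ in range(n_steps):
        A = np.fft.ifft(np.fft.fft(A) * half_lin)       # half linear step
        A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)  # full nonlinear step
        A = np.fft.ifft(np.fft.fft(A) * half_lin)       # half linear step
    return A

# Illustrative parameters: 1 mW Gaussian pulse over 10 km of fibre
t = np.linspace(-50e-12, 50e-12, 1024, endpoint=False)
A0 = np.sqrt(1e-3) * np.exp(-t**2 / (2 * (5e-12) ** 2))
A = ssfm(A0, dz=100.0, n_steps=100, dt=t[1] - t[0],
         beta2=-21e-27, gamma=1.3e-3)
print("output peak power (W):", np.abs(A).max() ** 2)
```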