Abstract:
Site index prediction models are an important aid for forest management and planning activities. This paper introduces a multiple regression model for spatially mapping and comparing site indices for two Pinus species (Pinus elliottii Engelm. and Queensland hybrid, a P. elliottii × Pinus caribaea Morelet hybrid) based on independent variables derived from two major sources: γ-ray spectrometry (potassium (K), thorium (Th), and uranium (U)) and a digital elevation model (elevation, slope, curvature, hillshade, flow accumulation, and distance to streams). In addition, interpolated rainfall was tested. Species were coded as a dichotomous dummy variable; interaction effects between species and the γ-ray spectrometric and geomorphologic variables were considered. The model explained up to 60% of the variance of site index, and the standard error of estimate was 1.9 m. Uranium, elevation, distance to streams, thorium, and flow accumulation correlated significantly with the spatial variation of the site index of both species, while hillshade, curvature, elevation and slope accounted for the extra variability of one species over the other. The predicted site indices varied between 20.0 and 27.3 m for P. elliottii, and between 23.1 and 33.1 m for Queensland hybrid; the advantage of Queensland hybrid over P. elliottii ranged from 1.8 to 6.8 m, with a mean of 4.0 m. This compartment-based prediction and comparison study provides not only an overview of forest productivity across the whole plantation area studied but also a management tool at the compartment scale.
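As a rough illustration of how a dummy-coded species variable with interaction terms can be combined with γ-ray and terrain predictors, a minimal Python sketch follows; the column names and the synthetic data are placeholders, not the paper's dataset or coefficients.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic placeholder data: one row per compartment, hypothetical column names.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "site_index": rng.normal(25, 2, n),
    "species": rng.choice(["P_elliottii", "Qld_hybrid"], n),
    "K": rng.normal(size=n), "Th": rng.normal(size=n), "U": rng.normal(size=n),
    "elevation": rng.normal(size=n), "slope": rng.normal(size=n),
    "curvature": rng.normal(size=n), "hillshade": rng.normal(size=n),
    "flow_acc": rng.normal(size=n), "dist_streams": rng.normal(size=n),
})

# C(species) dummy-codes the species; '*' adds main effects plus species interactions.
model = smf.ols(
    "site_index ~ C(species) * (K + Th + U + elevation + slope + "
    "curvature + hillshade + flow_acc + dist_streams)",
    data=df,
).fit()
print(model.summary())  # R², coefficients, and the standard error of the estimate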
Abstract:
Near infrared spectroscopy (NIRS) can be used for the on-line, non-invasive assessment of fruit for eating quality attributes such as total soluble solids (TSS). The robustness of multivariate calibration models, based on NIRS in a partial transmittance optical geometry, for the assessment of TSS of intact rockmelons (Cucumis melo) was assessed. The mesocarp TSS was highest around the fruit equator and increased towards the seed cavity. Inner mesocarp TSS levels decreased towards both the proximal and distal ends of the fruit, but more so towards the proximal end. The equatorial region of the fruit was chosen as representative of the fruit for near infrared assessment of TSS. The spectral window for model development was optimised at 695-1045 nm, and the data pre-treatment procedure was optimised to second-derivative absorbance without scatter correction. The 'global' modified partial least squares (MPLS) regression modelling procedure of WINISI (ver. 1.04) was found to be superior with respect to root mean squared error of prediction (RMSEP) and bias for model predictions of TSS across seasons, compared with the 'local' MPLS regression procedure. Updating of the model with samples selected randomly from the independent validation population demonstrated improvement in both RMSEP and bias with addition of approximately 15 samples.
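The general pipeline described (restricting the spectral window, taking second-derivative absorbance, then fitting a PLS model and reporting prediction error and bias) might be sketched in Python as below; this uses scikit-learn's PLSRegression rather than WINISI's MPLS procedures, and the spectra and TSS values are synthetic placeholders.

import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic placeholder spectra and TSS values; real data would replace these.
rng = np.random.default_rng(1)
wavelengths = np.arange(400, 1100, 2.0)          # nm
X = rng.normal(size=(120, wavelengths.size))     # absorbance spectra
y = rng.normal(10.0, 1.5, size=120)              # TSS (°Brix)

mask = (wavelengths >= 695) & (wavelengths <= 1045)   # optimised spectral window
X_d2 = savgol_filter(X[:, mask], window_length=11, polyorder=2, deriv=2, axis=1)

pls = PLSRegression(n_components=8)                    # factor count chosen by CV in practice
y_cv = cross_val_predict(pls, X_d2, y, cv=10).ravel()
rmse = np.sqrt(np.mean((y - y_cv) ** 2))
bias = np.mean(y_cv - y)
print(f"cross-validated RMSE = {rmse:.2f}, bias = {bias:.2f}")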
Abstract:
This paper describes the development of a model, based on Bayesian networks, to estimate the likelihood that sheep flocks are infested with lice at shearing and to assist farm managers or advisers to assess whether or not to apply a lousicide treatment. The risk of lice comes from three main sources: (i) lice may have been present at the previous shearing and not eradicated; (ii) lice may have been introduced with purchased sheep; and (iii) lice may have entered with strays. A Bayesian network is used to assess the probability of each of these events independently and combine them for an overall assessment. Rubbing is a common indicator of lice but there are other causes too. If rubbing has been observed, an additional Bayesian network is used to assess the probability that lice are the cause. The presence or absence of rubbing and its possible cause are combined with these networks to improve the overall risk assessment.
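A hedged sketch of the combination logic only (not the paper's actual networks or probability tables): three independent sources of risk are combined, and an observation of rubbing updates the result via Bayes' rule. All numbers below are hypothetical.

# Hypothetical illustration of combining independent lice-risk sources
# and updating on an observation of rubbing.
def prob_infested(p_residual, p_purchased, p_strays):
    """Flock is infested if lice arrived from at least one independent source."""
    return 1.0 - (1 - p_residual) * (1 - p_purchased) * (1 - p_strays)

def update_with_rubbing(p_lice, p_rub_given_lice=0.8, p_rub_given_clean=0.2):
    """Bayes update of the infestation probability after rubbing is observed."""
    num = p_rub_given_lice * p_lice
    return num / (num + p_rub_given_clean * (1 - p_lice))

p = prob_infested(0.10, 0.05, 0.02)
print(f"prior risk: {p:.2f}, risk given rubbing: {update_with_rubbing(p):.2f}")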
Abstract:
Background: Standard methods for quantifying IncuCyte ZOOM™ assays involve measurements that quantify how rapidly the initially vacant area becomes re-colonised with cells as a function of time. Unfortunately, these measurements give no insight into the details of the cellular-level mechanisms acting to close the initially vacant area. We provide an alternative method enabling us to quantify the roles of cell motility and cell proliferation separately. To achieve this we calibrate standard data available from IncuCyte ZOOM™ images to the solution of the Fisher-Kolmogorov model. Results: The Fisher-Kolmogorov model is a reaction-diffusion equation that has been used to describe collective cell spreading driven by cell migration, characterised by a cell diffusivity, D, and carrying-capacity-limited proliferation with proliferation rate, λ, and carrying capacity density, K. By analysing temporal changes in cell density in several subregions located well behind the initial position of the leading edge we estimate λ and K. Given these estimates, we then apply automatic leading edge detection algorithms to the images produced by the IncuCyte ZOOM™ assay and match these data with a numerical solution of the Fisher-Kolmogorov equation to provide an estimate of D. We demonstrate this method by applying it to interpret a suite of IncuCyte ZOOM™ assays using PC-3 prostate cancer cells and obtain estimates of D, λ and K. Comparing estimates of D, λ and K for a control assay with estimates of D, λ and K for assays where epidermal growth factor (EGF) is applied in varying concentrations confirms that EGF enhances the rate of scratch closure and that this stimulation is driven by an increase in D and λ, whereas K is relatively unaffected by EGF. Conclusions: Our approach for estimating D, λ and K from an IncuCyte ZOOM™ assay provides more detail about cellular-level behaviour than standard methods for analysing these assays. In particular, our approach can be used to quantify the balance of cell migration and cell proliferation and, as we demonstrate, allows us to quantify how the addition of growth factors affects these processes individually.
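For reference, the Fisher-Kolmogorov model referred to above is usually written as below; well behind the leading edge, where spatial gradients are negligible, it reduces to logistic growth, which is why λ and K can be estimated from cell density data alone before D is matched to the leading-edge dynamics.

\frac{\partial C}{\partial t} = D\,\frac{\partial^{2} C}{\partial x^{2}} + \lambda C\left(1 - \frac{C}{K}\right),
\qquad
\frac{\mathrm{d}C}{\mathrm{d}t} \approx \lambda C\left(1 - \frac{C}{K}\right) \quad \text{(well behind the front)}.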
Abstract:
We consider the motion of a diffusive population on a growing domain, 0 < x < L(t), which is motivated by various applications in developmental biology. Individuals in the diffusing population, which could represent molecules or cells in a developmental scenario, undergo two different kinds of motion: (i) undirected movement, characterized by a diffusion coefficient, D, and (ii) directed movement, associated with the underlying domain growth. For a general class of problems with a reflecting boundary at x = 0, and an absorbing boundary at x = L(t), we provide an exact solution to the partial differential equation describing the evolution of the population density function, C(x,t). Using this solution, we derive an exact expression for the survival probability, S(t), and an accurate approximation for the long-time limit, S∞ = lim_{t→∞} S(t). Unlike traditional analyses on a nongrowing domain, where S∞ ≡ 0, we show that domain growth leads to a very different situation where S∞ can be positive. The theoretical tools developed and validated in this study allow us to distinguish situations where the diffusive population reaches the moving boundary at x = L(t) from situations where it never reaches the moving boundary. Making this distinction is relevant to certain applications in developmental biology, such as the development of the enteric nervous system (ENS). All theoretical predictions are verified by implementing a discrete stochastic model.
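Assuming uniform domain growth, so that material points move with velocity v(x,t) = x L'(t)/L(t), the governing equation for C(x,t) takes the standard advection-diffusion form below, with a reflecting boundary at x = 0 and an absorbing boundary at x = L(t); this is stated here as the generic form of such models rather than quoted from the paper.

\frac{\partial C}{\partial t} + \frac{\partial}{\partial x}\!\left( \frac{x\,\dot{L}(t)}{L(t)}\, C \right) = D\,\frac{\partial^{2} C}{\partial x^{2}},
\qquad
\left.\frac{\partial C}{\partial x}\right|_{x=0} = 0,
\qquad
C\big(L(t),t\big) = 0 .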
Abstract:
The application of multilevel control strategies for load-frequency control (LFC) of interconnected power systems is gaining importance. A large multiarea power system may be viewed as an interconnection of several lower-order subsystems, with possible changes of interconnection pattern during operation. The solution of the control problem involves the design of a set of local optimal controllers for the individual areas, in a completely decentralised environment, plus a global controller to provide the corrective signal that accounts for interconnection effects. A global controller based on the least-square-error principle suggested by Siljak and Sundareshan has been applied to the LFC problem. A more recent work exploits certain possible beneficial aspects of interconnection to permit more desirable system performance. The paper reports the application of the latter strategy to LFC of a two-area power system. The power-system model studied includes the effects of excitation system and governor controls. A comparison of the two strategies is also made.
Abstract:
We use Bayesian model selection techniques to test extensions of the standard flat LambdaCDM paradigm. Dark-energy and curvature scenarios, and primordial perturbation models are considered. To that end, we calculate the Bayesian evidence in favour of each model using Population Monte Carlo (PMC), a new adaptive sampling technique which was recently applied in a cosmological context. The Bayesian evidence is immediately available from the PMC sample used for parameter estimation without further computational effort, and it comes with an associated error evaluation. Besides, it provides an unbiased estimator of the evidence after any fixed number of iterations and it is naturally parallelizable, in contrast with MCMC and nested sampling methods. By comparison with analytical predictions for simulated data, we show that our results obtained with PMC are reliable and robust. The variability in the evidence evaluation and the stability for various cases are estimated both from simulations and from data. For the cases we consider, the log-evidence is calculated with a precision of better than 0.08. Using a combined set of recent CMB, SNIa and BAO data, we find inconclusive evidence between flat LambdaCDM and simple dark-energy models. A curved Universe is moderately to strongly disfavoured with respect to a flat cosmology. Using physically well-motivated priors within the slow-roll approximation of inflation, we find a weak preference for a running spectral index. A Harrison-Zel'dovich spectrum is weakly disfavoured. With the current data, tensor modes are not detected; the large prior volume on the tensor-to-scalar ratio r results in moderate evidence in favour of r=0.
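The reason the evidence comes essentially for free from a PMC run is the standard importance-sampling identity: with samples θ_i drawn from the adapted proposal q, the evidence Z is estimated by the mean of the unnormalised posterior-to-proposal ratios (a textbook identity, given here as background rather than quoted from the paper).

Z = \int \mathcal{L}(\theta)\,\pi(\theta)\,\mathrm{d}\theta
\;\approx\; \frac{1}{N}\sum_{i=1}^{N} \frac{\mathcal{L}(\theta_i)\,\pi(\theta_i)}{q(\theta_i)},
\qquad \theta_i \sim q .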
Abstract:
Aims: We combine measurements of weak gravitational lensing from the CFHTLS-Wide survey, supernovae Ia from CFHT SNLS and CMB anisotropies from WMAP5 to obtain joint constraints on cosmological parameters, in particular, the dark-energy equation-of-state parameter w. We assess the influence of systematics in the data on the results and look for possible correlations with cosmological parameters. Methods: We implemented an MCMC algorithm to sample the parameter space of a flat ΛCDM model with a dark-energy component of constant w. Systematics in the data are parametrised and included in the analysis. We determine the influence of photometric calibration of SNIa data on cosmological results by calculating the response of the distance modulus to photometric zero-point variations. The weak lensing data set is tested for anomalous field-to-field variations and a systematic shape measurement bias for high-redshift galaxies. Results: Ignoring photometric uncertainties for SNLS biases cosmological parameters by at most 20% of the statistical errors, using supernovae alone; the parameter uncertainties are underestimated by 10%. The weak-lensing field-to-field variance between 1 deg² MegaCam pointings is 5-15% higher than predicted from N-body simulations. We find no bias in the lensing signal at high redshift, within the framework of a simple model and marginalising over cosmological parameters. Assuming a systematic underestimation of the lensing signal, the normalisation increases by up to 8%. Combining all three probes we obtain -0.10 < 1 + w < 0.06 at 68% confidence (-0.18 < 1 + w < 0.12 at 95%), including systematic errors. Our results are therefore consistent with the cosmological constant Λ. Systematics in the data increase the error bars by up to 35%; the best-fit values change by less than 0.15.
Abstract:
This thesis addresses the modeling of financial time series, especially stock market returns and daily price ranges. Modeling data of this kind can be approached with so-called multiplicative error models (MEM). These models nest several well known time series models such as GARCH, ACD and CARR models. They are able to capture many well established features of financial time series including volatility clustering and leptokurtosis. In contrast, different kinds of asymmetries have received relatively little attention in the existing literature. In this thesis asymmetries arise from various sources. They are observed in both conditional and unconditional distributions, for variables with non-negative values and for variables that have values on the real line. In the multivariate context asymmetries can be observed in the marginal distributions as well as in the relationships between the variables modeled. New methods for all these cases are proposed. Chapter 2 considers GARCH models and the modeling of returns of two stock market indices. The chapter introduces the so-called generalized hyperbolic (GH) GARCH model to account for asymmetries in both the conditional and unconditional distribution. In particular, two special cases of the GARCH-GH model which describe the data most accurately are proposed. They are found to improve the fit of the model when compared to symmetric GARCH models. The advantages of accounting for asymmetries are also observed through Value-at-Risk applications. Both theoretical and empirical contributions are provided in Chapter 3 of the thesis. In this chapter the so-called mixture conditional autoregressive range (MCARR) model is introduced, examined and applied to daily price ranges of the Hang Seng Index. The conditions for the strict and weak stationarity of the model, as well as an expression for the autocorrelation function, are obtained by writing the MCARR model as a first order autoregressive process with random coefficients. The chapter also introduces the inverse gamma (IG) distribution to CARR models. The advantages of the CARR-IG and MCARR-IG specifications over conventional CARR models are found in the empirical application both in- and out-of-sample. Chapter 4 discusses the simultaneous modeling of absolute returns and daily price ranges. In this part of the thesis a vector multiplicative error model (VMEM) with an asymmetric Gumbel copula is found to provide substantial benefits over the existing VMEM models based on elliptical copulas. The proposed specification is able to capture the highly asymmetric dependence of the modeled variables, thereby improving the performance of the model considerably. The economic significance of the results is established by examining the information content of the derived volatility forecasts.
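For context, the basic multiplicative error structure shared by the GARCH, ACD and CARR models mentioned above can be written, in its simplest first-order form, as follows (a textbook form, not a specification taken from the thesis):

x_t = \mu_t\,\varepsilon_t, \qquad \varepsilon_t \ge 0,\quad \mathbb{E}[\varepsilon_t \mid \mathcal{F}_{t-1}] = 1,
\qquad
\mu_t = \omega + \alpha\, x_{t-1} + \beta\, \mu_{t-1} .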
Abstract:
This study presents a comprehensive mathematical formulation for a short-term open-pit mine block sequencing problem, which considers nearly all relevant technical aspects of open-pit mining. The proposed model aims to obtain the optimum extraction sequences of the original-size (smallest) blocks over short time intervals and in the presence of real-life constraints, including precedence relationships, machine capacity, grade requirements, processing demands and stockpile management. A hybrid branch-and-bound and simulated annealing algorithm is developed to solve the problem. Computational experiments show that the proposed methodology is a promising way to provide quantitative recommendations for mine planning and scheduling engineers.
Abstract:
A high temperature source has been developed and coupled to a high resolution Fourier transform spectrometer to record emission spectra of acetylene around 3 μm up to 1455 K under Doppler-limited resolution (0.015 cm⁻¹). The ν3-ground state (GS) and ν2+ν4+ν5 (Σu+ and Δu)-GS bands and 76 related hot bands, counting e and f parities separately, are assigned using semiautomatic methods based on a global model to reproduce all related vibration-rotation states. Significantly higher J-values than previously reported are observed for 40 known substates, while 37 new e or f vibrational substates, up to about 6000 cm⁻¹, are identified and characterized by vibration-rotation parameters. The 3,811 new or improved data resulting from the analysis are merged into the database presented by Robert et al. [Mol. Phys. 106, 2581 (2008)], now including 15,562 lines accessing vibrational states up to 8600 cm⁻¹. A global model, updated as compared to the one in the previous paper, allows all lines in the database to be fitted simultaneously and successfully. The updates are discussed taking into account, in particular, the systematic inclusion of Coriolis interaction.
Abstract:
The Fuzzy Waste Load Allocation Model (FWLAM), developed in an earlier study, derives the optimal fractional levels for the base flow conditions, considering the goals of the Pollution Control Agency (PCA) and the dischargers. The Modified Fuzzy Waste Load Allocation Model (MFWLAM), developed subsequently, is a stochastic model that considers the moments (mean, variance and skewness) of water quality indicators, incorporating uncertainty due to randomness of input variables along with uncertainty due to imprecision. The risk of low water quality is reduced significantly by using this modified model, but the inclusion of new constraints leads to a low value of the acceptability level, A, interpreted as the maximized minimum satisfaction in the system. To improve this value, a new model, which is a combination of FWLAM and MFWLAM, is presented, allowing for some violations of the constraints of MFWLAM. This combined model is a multiobjective optimization model whose objectives are maximization of the acceptability level and minimization of the violation of constraints. Fuzzy multiobjective programming, goal programming and fuzzy goal programming are used to find the solutions. For the optimization model, Probabilistic Global Search Lausanne (PGSL) is used as a nonlinear optimization tool. The methodology is applied to a case study of the Tunga-Bhadra river system in south India. The model results in a compromise solution with a higher value of the acceptability level than MFWLAM, with a satisfactory value of risk. Thus the goal of risk minimization is achieved with a comparatively better value of the acceptability level.
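The acceptability level referred to above (denoted A) is, in the usual fuzzy max-min formulation, the largest value λ that all goal memberships can simultaneously attain; a generic statement of that formulation, given here only as background, is:

\max_{x,\;\lambda}\ \lambda
\quad \text{subject to} \quad
\mu_k(x) \ge \lambda \ \ \forall k, \qquad 0 \le \lambda \le 1, \qquad x \in X,

where the μ_k are the membership (satisfaction) functions of the PCA and discharger goals.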
Abstract:
We investigate the ability of a global atmospheric general circulation model (AGCM) to reproduce observed 20-year return values of the annual maximum daily precipitation totals over the continental United States as a function of horizontal resolution. We find that at the high resolutions enabled by contemporary supercomputers, the AGCM can produce values of comparable magnitude to high-quality observations. However, at the resolutions typical of the coupled general circulation models used in the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, the precipitation return values are severely underestimated.
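One common way to compute such 20-year return values, from either model output or observations, is to fit a generalised extreme value (GEV) distribution to the annual maxima and read off the level exceeded with probability 1/20 in any year; a minimal Python sketch with synthetic placeholder data follows (a generic recipe, not necessarily the exact estimator used in the study).

import numpy as np
from scipy.stats import genextreme

# Synthetic placeholder: 40 years of fake daily precipitation totals (mm/day).
rng = np.random.default_rng(1)
daily_pr = rng.gamma(shape=0.5, scale=8.0, size=(40, 365))
annual_max = daily_pr.max(axis=1)

# Fit a GEV to the annual maxima and evaluate the 20-year return level,
# i.e. the value exceeded with probability 1/20 in a given year.
c, loc, scale = genextreme.fit(annual_max)
rv20 = genextreme.isf(1.0 / 20.0, c, loc=loc, scale=scale)
print(f"estimated 20-year return value: {rv20:.1f} mm/day")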
Abstract:
A model of polymer translocation based on the stochastic dynamics of the number of monomers on one side of a pore-containing surface is formulated in terms of a one-dimensional generalized Langevin equation, in which the random force is assumed to be characterized by long-ranged temporal correlations. The model is introduced to rationalize anomalies in measured and simulated values of the average time of passage through the pore, which in general cannot be satisfactorily accounted for by simple Brownian diffusion mechanisms. Calculations are presented of the mean first passage time for barrier crossing and of the mean square displacement of a monomeric segment, in the limits of strong and weak diffusive bias. The calculations produce estimates of the exponents in various scaling relations that are in satisfactory agreement with available data.
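A generic one-dimensional generalized Langevin equation of the kind referred to, with a long-ranged (power-law) memory kernel tied to the random force through the fluctuation-dissipation theorem, can be written as below; the specific kernel, potential and translocation coordinate used in the paper may differ.

m\,\ddot{x}(t) = -\int_{0}^{t} K(t-t')\,\dot{x}(t')\,\mathrm{d}t' - \frac{\partial U}{\partial x} + F(t),
\qquad
\langle F(t)\,F(t') \rangle = k_{\mathrm{B}} T\, K(|t-t'|),
\qquad
K(t) \propto t^{-\gamma},\ \ 0<\gamma<1 .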
Abstract:
Specialist scholarly books, including monographs, allow researchers to present their work, pose questions, and test and extend areas of theory through long-form writing. Although research communities all over the world value monographs and depend heavily on them as a requirement of tenure and promotion in many disciplines, sales of this kind of book are in free fall, with some estimates suggesting declines of as much as 90% over twenty years (Willinsky 2006). Cash-strapped monograph publishers have found themselves caught in a negative cycle of increasing prices and falling sales, with few resources left to support experimentation, business model innovation or engagement with digital technology and Open Access (OA). This chapter considers an important attempt to tackle failing markets for scholarly monographs, and to enable the wider adoption of OA licenses for book-length works: the 2012–2014 Knowledge Unlatched pilot. Knowledge Unlatched is a bold attempt to reconfigure the market for specialist scholarly books: moving it beyond the sale of ‘content’ towards a model that supports the services valued by scholarly and wider communities in the context of digital possibility. Its success has powerful implications for the way we understand copyright’s role in the creative industries, and for the potential of established institutions and infrastructure to support the open and networked dynamics of a digital age.