906 results for EQUATION-ERROR MODELS
Abstract:
Many geological formations consist of crystalline rocks that have very low matrix permeability but allow flow through an interconnected network of fractures. Understanding the flow of groundwater through such rocks is important when considering the disposal of radioactive waste in underground repositories. A specific area of interest is the conditioning of fracture transmissivities on measured values of pressure in these formations: the process whereby the fracture transmissivities in a model are adjusted to obtain a good fit of the calculated pressures to the measured pressure values. While there are existing methods to condition transmissivity fields on transmissivity, pressure and flow measurements for a continuous porous medium, there is little literature on conditioning fracture networks. Conditioning fracture transmissivities on pressure or flow values is a complex problem because the measurements are not linearly related to the fracture transmissivities and depend on all the fracture transmissivities in the network. We present a new method for conditioning fracture transmissivities on measured pressure values based on the calculation of certain basis vectors; each basis vector represents the change to the log transmissivity of the fractures in the network that results in a unit increase in the pressure at one measurement point whilst keeping the pressure at the remaining measurement points constant. The fracture transmissivities are updated by adding a linear combination of the basis vectors, with coefficients obtained by minimizing an error function. A mathematical summary of the method is given. The algorithm is implemented in the existing finite element code ConnectFlow, developed and marketed by Serco Technical Services, which models groundwater flow in a fracture network. Results of the conditioning are shown for a number of simple test problems as well as for a realistic large-scale test case.
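As a rough illustration of the update step described above, the sketch below applies one conditioning iteration. The array names and the shortcut for the coefficients are our own assumptions, not the ConnectFlow implementation; in practice the flow problem is re-solved and the step repeated, since the pressures depend nonlinearly on the transmissivities.

```python
import numpy as np

def condition_step(log_T, B, p_meas, p_calc):
    """One conditioning step: add a linear combination of basis vectors.

    log_T  : (n_fractures,) current log-transmissivities
    B      : (n_fractures, n_meas) matrix whose columns are the basis vectors
    p_meas : (n_meas,) measured pressures
    p_calc : (n_meas,) pressures from the current flow solution
    """
    residual = p_meas - p_calc
    # To first order, basis vector k raises the pressure at measurement
    # point k by one unit and leaves the others unchanged, so the
    # error-minimising coefficients are just the pressure residuals.
    alpha = residual
    return log_T + B @ alpha
```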
Abstract:
Doctor of Philosophy in Mathematics
Abstract:
The development of oil well drilling requires additional care, mainly when drilling offshore in ultra-deep water, where low overburden pressure gradients cause low fracture gradients and, consequently, make drilling difficult by narrowing the operational window. To minimize, in the well planning phases, the difficulties faced when drilling in those scenarios, indirect models are used to estimate the fracture gradient, providing approximate values for leak-off tests. These models generate geopressure curves that allow detailed analysis of the pressure behavior along the whole well. Most of these models are based on the Terzaghi equation, differing only in how they determine the rock stress coefficient. This work proposes an alternative method for predicting the fracture pressure gradient based on a geometric correlation that relates the pressure gradients proportionally at a given depth and extrapolates the relation over the whole well depth; that is, these parameters are assumed to vary in a fixed proportion. The model is based on the application of analytical proportion segments corresponding to the differential pressure related to the rock stress. The study shows that the proposed analytical proportion segments reach fracture gradient values in good agreement with the leak-off tests available for the field area. The results were compared with twelve different indirect models for fracture pressure gradient prediction based on the compaction effect; for this purpose, software was developed in Matlab. The comparison was also made varying the water depth from zero (onshore wellbores) to 1500 meters. The leak-off tests are also used to compare the different methods, including the one proposed in this work. The proposed method gives good results in the error analysis compared to the other methods and, due to its simplicity, justifies its possible application.
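A minimal sketch of the fixed-proportion idea, under our illustrative reading of the abstract (the thesis defines the proportion segments precisely), might calibrate the proportion at one leak-off test depth and extrapolate it over the well:

```python
import numpy as np

def fracture_gradient(g_ov, g_pore, g_frac_lot, g_ov_lot, g_pore_lot):
    """Extrapolate the fracture pressure gradient over the whole well.

    g_ov, g_pore : overburden and pore-pressure gradient profiles (arrays)
    *_lot        : the same quantities, plus the measured fracture
                   gradient, at the calibration (leak-off test) depth
    """
    # Fixed proportion between the fracture/pore and overburden/pore
    # differentials, calibrated once and assumed constant with depth.
    k = (g_frac_lot - g_pore_lot) / (g_ov_lot - g_pore_lot)
    return g_pore + k * (g_ov - g_pore)
```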
Abstract:
In this thesis we present a mathematical formulation of the interaction between microorganisms, such as bacteria or amoebae, and chemicals, often produced by the organisms themselves. This interaction is called chemotaxis and leads to cellular aggregation. We derive several models to describe chemotaxis. The first is the pioneering Keller-Segel parabolic-parabolic model, which we derive within two different frameworks: a macroscopic perspective, and a microscopic perspective in which we start from a stochastic differential equation and perform a mean-field approximation. This parabolic model may be generalized by introducing a degenerate diffusion coefficient, which depends on the density itself via a power law. We then derive a model for chemotaxis based on Cattaneo's law of heat propagation with finite speed, which is a hyperbolic model. The last model proposed here is a hydrodynamic model, which takes into account the inertia of the system through a friction force. In the limit of strong friction the model reduces to the parabolic model, whereas in the limit of weak friction we recover a hyperbolic model. Finally, we analyze the instability condition that leads to aggregation, and we describe the different kinds of aggregates that may arise: the parabolic models lead to clusters or peaks, whereas the hyperbolic models lead to the formation of network patterns or filaments. Moreover, we discuss the analogy between bacterial colonies and self-gravitating systems by comparing the chemotactic collapse and the gravitational collapse (Jeans instability).
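For reference, the parabolic-parabolic Keller-Segel system discussed above is commonly written as follows (our notation; coefficient conventions vary between presentations):

```latex
\begin{aligned}
\partial_t \rho &= \nabla \cdot \bigl( D\,\nabla \rho - \chi\,\rho\,\nabla c \bigr),\\
\partial_t c    &= D_c\,\Delta c + a\,\rho - b\,c,
\end{aligned}
```

where \(\rho\) is the cell density, \(c\) the chemical concentration, \(\chi\) the chemotactic sensitivity, and \(a\), \(b\) the production and degradation rates of the chemical. The degenerate variant replaces the linear diffusion term with a power-law (porous-medium-type) term in \(\rho\).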
Abstract:
Suppose two or more variables are jointly normally distributed. If there is a common relationship between these variables, it is important to quantify it through a parameter called the correlation coefficient, which measures its strength; this can be used to develop a prediction equation and, ultimately, to draw testable conclusions about the parent population. This research focused on the correlation coefficient ρ for the bivariate and trivariate normal distributions when equal variances and equal covariances are assumed. In particular, we derived the maximum likelihood estimators (MLEs) of the distribution parameters, assuming all of them are unknown, and we studied the properties and asymptotic distribution of the resulting estimator of ρ. Using this asymptotic normality, we constructed confidence intervals for the correlation coefficient ρ and tested hypotheses about ρ. With a series of simulations, the performance of our new estimators was studied and compared with estimators that already exist in the literature. The results indicated that the MLE performs as well as or better than the others.
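As a hedged illustration of the kind of simulation study described, the sketch below compares the equal-variance MLE of ρ (using the standard constrained bivariate form; the thesis derives the estimators in full, including the trivariate case) with Pearson's r on simulated data:

```python
import numpy as np

rng = np.random.default_rng(0)

def mle_rho_equal_var(x, y):
    """MLE of rho for a bivariate normal with equal (unknown) variances.

    Under the constraint sigma_x = sigma_y this is the cross moment divided
    by the average of the two second moments (an illustrative sketch, not
    the thesis's full derivation).
    """
    xc, yc = x - x.mean(), y - y.mean()
    return 2.0 * np.sum(xc * yc) / (np.sum(xc**2) + np.sum(yc**2))

# Small simulation comparing the constrained MLE with Pearson's r.
n, rho = 50, 0.6
cov = np.array([[1.0, rho], [rho, 1.0]])
reps = [rng.multivariate_normal([0, 0], cov, size=n) for _ in range(2000)]
mles = np.array([mle_rho_equal_var(s[:, 0], s[:, 1]) for s in reps])
pearson = np.array([np.corrcoef(s[:, 0], s[:, 1])[0, 1] for s in reps])
print("MLE bias %.4f  sd %.4f" % (mles.mean() - rho, mles.std()))
print("r   bias %.4f  sd %.4f" % (pearson.mean() - rho, pearson.std()))
```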
Abstract:
An important aspect of constructing discrete velocity models (DVMs) for the Boltzmann equation is to obtain the right number of collision invariants. It is a well-known fact that DVMs can also have extra, so-called spurious, collision invariants in addition to the physical ones. A DVM with only physical collision invariants, and thus without spurious ones, is called normal. For binary mixtures the concept of supernormal DVMs was also introduced, meaning that in addition to the DVM being normal, the restriction of the DVM to any single species is also normal. Here we introduce generalizations of this concept to DVMs for multicomponent mixtures. We also present some general algorithms for constructing such models and give some concrete examples of such constructions. One of our main results is that for any given number of species and any given rational mass ratios we can construct a supernormal DVM. The DVMs are constructed in such a way that for half-space problems, such as the Milne and Kramers problems, but also nonlinear ones, we obtain structures similar to those of the classical discrete Boltzmann equation for a single species, and can therefore apply results obtained for the classical Boltzmann equation.
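To make the normality condition concrete, the following sketch counts the collision invariants of a small planar velocity set (an illustrative example of ours, not one of the paper's constructions): invariants are vectors in the null space of the matrix of collision relations, and the DVM is normal exactly when that null space has dimension d + 2 (mass, d momentum components, energy).

```python
import itertools
import numpy as np

# Illustrative 9-velocity planar (d = 2) velocity set.
V = np.array([(x, y) for x in (-1, 0, 1) for y in (-1, 0, 1)])
n = len(V)

# Each collision (i, j) -> (k, l) conserving momentum and energy gives
# a linear relation phi_i + phi_j - phi_k - phi_l = 0 on invariants phi.
rows = []
for i, j, k, l in itertools.permutations(range(n), 4):
    mom_ok = np.all(V[i] + V[j] == V[k] + V[l])
    en_ok = (V[i] @ V[i] + V[j] @ V[j]) == (V[k] @ V[k] + V[l] @ V[l])
    if mom_ok and en_ok:
        row = np.zeros(n)
        row[i] += 1; row[j] += 1; row[k] -= 1; row[l] -= 1
        rows.append(row)

A = np.array(rows)
dim_invariants = n - np.linalg.matrix_rank(A)
print("collision invariants:", dim_invariants)  # normal iff this equals 4
```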
Abstract:
Understanding how virus strains offer protection against closely related emerging strains is vital for creating effective vaccines. For many viruses, including Foot-and-Mouth Disease Virus (FMDV) and the Influenza virus, where multiple serotypes often co-circulate, in vitro testing of large numbers of vaccines can be infeasible. Therefore the development of an in silico predictor of cross-protection between strains is important to help optimise vaccine choice. Vaccines will offer cross-protection against closely related strains, but not against those that are antigenically distinct. To be able to predict cross-protection we must understand the antigenic variability within a virus serotype and across distinct lineages of a virus, and identify the antigenic residues and evolutionary changes that cause the variability. In this thesis we present a family of sparse hierarchical Bayesian models for detecting relevant antigenic sites in virus evolution (SABRE), as well as an extended version of the method, the extended SABRE (eSABRE) method, which better takes into account the data collection process. The SABRE methods are a family of sparse Bayesian hierarchical models that use spike and slab priors to identify sites in the viral protein which are important for the neutralisation of the virus. We demonstrate how the SABRE methods can be used to identify antigenic residues within different serotypes, and show how the SABRE method outperforms established methods, namely mixed-effects models based on forward variable selection or l1 regularisation, on both synthetic and viral datasets. In addition we test a number of different versions of the SABRE method, comparing conjugate and semi-conjugate prior specifications and an alternative to the spike and slab prior: the binary mask model. We also propose novel proposal mechanisms for the Markov chain Monte Carlo (MCMC) simulations, which improve mixing and convergence over the established component-wise Gibbs sampler. The SABRE method is then applied to datasets from FMDV and the Influenza virus in order to identify a number of known antigenic residues and to provide hypotheses about other potentially antigenic residues. We also demonstrate how the SABRE methods can be used to create accurate predictions of the important evolutionary changes of the FMDV serotypes. We then present the eSABRE method in detail, which is based on a latent variable model. The eSABRE method takes the structure of the FMDV and Influenza datasets further into account through the latent variable model and improves the modelling of the error. We show how the eSABRE method outperforms the SABRE methods in simulation studies and propose a new information criterion for selecting the random effects factors that should be included in the eSABRE method: the block integrated Widely Applicable Information Criterion (biWAIC). We demonstrate how biWAIC performs comparably to two other methods for selecting the random effects factors, and combine it with the eSABRE method to apply it to two large Influenza datasets. Inference in these large datasets is computationally infeasible with the SABRE methods, but as a result of the improved structure of the likelihood, the eSABRE method offers a computational improvement that allows it to be used on these datasets.
The results of the eSABRE method show that it can be used in a fully automatic manner to identify a large number of antigenic residues on a variety of the antigenic sites of two Influenza serotypes, as well as predicting a number of nearby sites that may also be antigenic and are worthy of further experimental investigation.
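As a toy illustration of the spike and slab idea at the heart of the SABRE models, the sketch below draws residue effects from such a prior; the parameter names and values are ours, and the real models embed this prior in a hierarchical regression sampled by MCMC.

```python
import numpy as np

rng = np.random.default_rng(1)

def spike_and_slab_draw(p, pi=0.1, slab_sd=2.0):
    """Draw p residue effects from a spike and slab prior.

    Each effect w_j is either exactly zero (spike) or drawn from a wide
    Gaussian (slab); the binary indicator gamma_j is what the sampler
    averages into an inclusion probability for that residue.
    """
    gamma = rng.random(p) < pi          # which residues are "on"
    beta = rng.normal(0.0, slab_sd, p)  # slab values
    return np.where(gamma, beta, 0.0), gamma

w, gamma = spike_and_slab_draw(p=20)
print("active residues:", np.flatnonzero(gamma))
```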
Abstract:
Knowledge of the geographical distribution of timber tree species in the Amazon is still scarce. This is especially true at the local level, thereby limiting natural resource management actions. Forest inventories are key sources of information on the occurrence of such species. However, areas with approved forest management plans are mostly located near access roads and the main industrial centers. The present study aimed to assess the spatial scale effects of forest inventories used as sources of occurrence data in the interpolation of potential species distribution models. The occurrence data of a group of six forest tree species were divided into four geographical areas during the modeling process. Several sampling schemes were then tested applying the maximum entropy algorithm, using the following predictor variables: elevation, slope, exposure, normalized difference vegetation index (NDVI) and height above the nearest drainage (HAND). The results revealed that using occurrence data from only one geographical area with unique environmental characteristics increased both model overfitting to input data and omission error rates. The use of a diagonal systematic sampling scheme and lower threshold values led to improved model performance. Forest inventories may be used to predict areas with a high probability of species occurrence, provided they are located in forest management plan regions representative of the environmental range of the model projection area.
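As a small illustration of how such models are typically evaluated against threshold choices, the sketch below computes an omission error rate on held-out presence records; the suitability values are placeholders, not outputs of the study's maximum entropy models.

```python
import numpy as np

def omission_rate(suitability_at_presences, threshold):
    """Fraction of true presence sites the thresholded model misses."""
    return np.mean(suitability_at_presences < threshold)

# Illustrative suitability scores at held-out presence locations.
held_out = np.array([0.82, 0.40, 0.91, 0.15, 0.66])
for t in (0.1, 0.5):
    # Lower thresholds admit more area and therefore miss fewer presences.
    print(f"threshold {t}: omission rate {omission_rate(held_out, t):.2f}")
```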
Abstract:
Fleck and Johnson (Int. J. Mech. Sci. 29 (1987) 507) and Fleck et al. (Proc. Inst. Mech. Eng. 206 (1992) 119) have developed foil rolling models which allow for large deformations in the roll profile, including the possibility that the rolls flatten completely. However, these models require computationally expensive iterative solution techniques. A new approach to the approximate solution of the Fleck et al. (1992) Influence Function Model has been developed using both analytic and approximation techniques. The numerical difficulties arising from solving an integral equation in the flattened region have been reduced by applying an inverse Hilbert transform to obtain an analytic expression for the pressure. The method described in this paper is applicable whether or not a flattened region is present.
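For context, the inversion exploited here is of finite Hilbert transform (airfoil equation) type; one standard form of the pair, shown as a sketch rather than the paper's exact expression, is:

```latex
% Airfoil (finite Hilbert transform) equation and one standard inversion
% (Tricomi); the roll-pressure problem uses the same structure up to
% normalisation and the choice of interval.
\frac{1}{\pi}\,\mathrm{p.v.}\!\int_{-1}^{1}\frac{\phi(t)}{t-x}\,dt = f(x)
\quad\Longrightarrow\quad
\phi(x) = -\frac{1}{\pi\sqrt{1-x^{2}}}\,\mathrm{p.v.}\!\int_{-1}^{1}
\frac{\sqrt{1-t^{2}}\,f(t)}{t-x}\,dt \;+\; \frac{C}{\sqrt{1-x^{2}}}
```

where the free constant C is fixed by the physical boundary conditions at the ends of the flattened region.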