940 results for data-driven modelling


Relevance:

50.00%

Publisher:

Abstract:

Acknowledgements: We thank Iain Malcolm of Marine Scotland Science for access to data from the Girnock and the Scottish Environment Protection Agency for historical stage-discharge relationships. CS contributions on this paper were in part supported by the NERC/JPI SIWA project (NE/M019896/1).

Relevance:

50.00%

Publisher:

Abstract:

Aim: The spread of non-indigenous species in marine ecosystems world-wide is one of today's most serious environmental concerns. Using mechanistic modelling, we investigated how global change relates to the invasion of European coasts by a non-native marine invertebrate, the Pacific oyster Crassostrea gigas.

Location: Bourgneuf Bay on the French Atlantic coast was considered as the northern boundary of C. gigas expansion at the time of its introduction to Europe in the 1970s. From this latitudinal reference, variations in the spatial distribution of the C. gigas reproductive niche were analysed along the north-western European coast from Gibraltar to Norway.

Methods: The effects of environmental variations on C. gigas physiology and phenology were studied using a bioenergetics model based on Dynamic Energy Budget theory. The model was forced with environmental time series including in situ phytoplankton data, and satellite data of sea surface temperature and suspended particulate matter concentration.

Results: Simulation outputs were successfully validated against in situ oyster growth data. In Bourgneuf Bay, the rise in seawater temperature and phytoplankton concentration has increased C. gigas reproductive effort and led to precocious spawning periods since the 1960s. At the European scale, seawater temperature increase caused a drastic northward shift (1400 km within 30 years) in the C. gigas reproductive niche and optimal thermal conditions for early life stage development.

Main conclusions: We demonstrated that the poleward expansion of the invasive species C. gigas is related to global warming and increase in phytoplankton abundance. The combination of mechanistic bioenergetics modelling with in situ and satellite environmental data is a valuable framework for ecosystem studies. It offers a generic approach to analyse historical geographical shifts and to predict the biogeographical changes expected to occur in a climate-changing world.

Relevance:

40.00%

Publisher:

Abstract:

We present finite element simulations of temperature gradient driven rock alteration and mineralization in fluid saturated porous rock masses. In particular, we explore the significance of production/annihilation terms in the mass balance equations and the dependence of the spatial patterns of rock alteration upon the ratio of the roll-over time of large scale convection cells to the relaxation time of the chemical reactions. Special concepts such as the gradient reaction criterion or rock alteration index (RAI) are discussed in light of the present, more general theory. In order to validate the finite element simulation, we derive an analytical solution for the rock alteration index of a benchmark problem on a two-dimensional rectangular domain. Since the geometry and boundary conditions of the benchmark problem can be easily and exactly modelled, the analytical solution is also useful for validating other numerical methods, such as the finite difference method and the boundary element method, when they are used to deal with this kind of problem. Finally, the potential of the theory is illustrated by means of finite element studies related to coupled flow problems in materially homogeneous and inhomogeneous porous rock masses. (C) 1998 Elsevier Science S.A. All rights reserved.
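The validation strategy described in the abstract, checking a numerical scheme against an exact solution of a benchmark problem, can be illustrated with a much simpler stand-in. The sketch below is not the paper's two-dimensional rock alteration benchmark; it uses an invented 1-D steady heat conduction problem (fixed end temperatures, Jacobi finite difference sweeps) whose exact solution is a linear profile:

```python
# Finite difference (Jacobi) solution of steady 1-D conduction, T'' = 0,
# with fixed end temperatures; grid size and temperatures are illustrative.
n = 11
T = [300.0] * n
T[-1] = 400.0
for _ in range(2000):  # Jacobi sweeps: interior node = average of neighbours
    T = [T[0]] + [(T[i - 1] + T[i + 1]) / 2 for i in range(1, n - 1)] + [T[-1]]

# Exact solution is the linear profile between the boundary temperatures.
exact = [300.0 + 100.0 * i / (n - 1) for i in range(n)]
err = max(abs(a - b) for a, b in zip(T, exact))
```

The same idea, numerical field minus analytical field, gives the discretization error that the paper's benchmark is designed to expose.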

Relevance:

40.00%

Publisher:

Abstract:

Numerical methods are used to solve double diffusion driven reactive flow transport problems in deformable fluid-saturated porous media. In particular, the temperature dependent reaction rate in the non-equilibrium chemical reactions is considered. A general numerical solution method, which is a combination of the finite difference method in FLAC and the finite element method in FIDAP, has been developed to solve the fully coupled problem involving material deformation, pore-fluid flow, heat transfer and species transport/chemical reactions in deformable fluid-saturated porous media. The coupled problem is divided into two subproblems which are solved interactively until the convergence requirement is met. Owing to the approximate nature of the numerical method, it is essential to justify the numerical solutions through some kind of theoretical analysis. This has been highlighted in this paper. The related numerical results, which are justified by the theoretical analysis, have demonstrated that the proposed solution method is useful for and applicable to a wide range of fully coupled problems in the field of science and engineering.
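The "two subproblems solved interactively until the convergence requirement is met" is a staggered (fixed-point) coupling scheme. A minimal sketch of that iteration pattern follows; the solver names and the 1-D linear updates are placeholders, not the actual FLAC/FIDAP calls or physics:

```python
# Staggered coupling: alternate two subproblem solves until the combined
# change falls below a tolerance. The updates are toy linear maps chosen
# to be contractive, standing in for the deformation and flow solvers.
def solve_mechanics(p):
    return 0.5 * p + 1.0      # displacement as a toy function of pressure

def solve_flow(u):
    return 0.25 * u + 2.0     # pressure as a toy function of displacement

u, p = 0.0, 0.0
for it in range(100):
    u_new = solve_mechanics(p)
    p_new = solve_flow(u_new)
    converged = abs(u_new - u) + abs(p_new - p) < 1e-10
    u, p = u_new, p_new
    if converged:
        break
```

Because the composed update is a contraction here, the iteration converges to the coupled fixed point; for the real coupled problem, convergence is exactly what the paper's theoretical analysis is used to justify.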

Relevance:

40.00%

Publisher:

Abstract:

Matrix population models, elasticity analysis and loop analysis can potentially provide powerful techniques for the analysis of life histories. Data from a capture-recapture study on a population of southern highland water skinks (Eulamprus tympanum) were used to construct a matrix population model. Errors in elasticities were calculated by using the parametric bootstrap technique. Elasticity and loop analyses were then conducted to identify the life history stages most important to fitness. The same techniques were used to investigate the relative importance of fast versus slow growth, and rapid versus delayed reproduction. Mature water skinks were long-lived, but there was high immature mortality. The most sensitive life history stage was the subadult stage. It is suggested that life history evolution in E. tympanum may be strongly affected by predation, particularly by birds. Because our population declined over the study, slow growth and delayed reproduction were the optimal life history strategies over this period. Although the techniques of evolutionary demography provide a powerful approach for the analysis of life histories, there are formidable logistical obstacles in gathering enough high-quality data for robust estimates of the critical parameters.
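The elasticity analysis mentioned above has a standard recipe: for a stage-structured projection matrix A, the elasticity of the population growth rate to entry a_ij is e_ij = a_ij * v_i * w_j / (lambda * v.w), where lambda, w, v are the dominant eigenvalue and right/left eigenvectors. The sketch below uses an invented 3-stage matrix (not the E. tympanum estimates) and power iteration:

```python
# Hypothetical 3-stage (juvenile, subadult, adult) projection matrix;
# the entries are illustrative, not from the water skink study.
A = [[0.0, 0.3, 1.2],
     [0.4, 0.2, 0.0],
     [0.0, 0.5, 0.8]]

def power_iter(M, iters=500):
    """Dominant eigenvalue and eigenvector of a non-negative matrix."""
    v = [1.0] * len(M)
    lam = 0.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(len(M))) for i in range(len(M))]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam, v

def transpose(M):
    return [list(col) for col in zip(*M)]

lam, w = power_iter(A)             # growth rate and stable stage structure
_, v = power_iter(transpose(A))    # reproductive values (left eigenvector)
denom = lam * sum(vi * wi for vi, wi in zip(v, w))
elasticity = [[A[i][j] * v[i] * w[j] / denom for j in range(3)]
              for i in range(3)]
```

Elasticities sum to one, so each e_ij can be read as the proportional contribution of that transition to fitness; the largest entries identify the most sensitive life history stage.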

Relevance:

40.00%

Publisher:

Abstract:

Examples from the Murray-Darling basin in Australia are used to illustrate different methods of disaggregation of reconnaissance-scale maps. One approach for disaggregation revolves around the de-convolution of the soil-landscape paradigm elaborated during a soil survey. The descriptions of soil map units and block diagrams in a soil survey report detail soil-landscape relationships or soil toposequences that can be used to disaggregate map units into component landscape elements. Toposequences can be visualised on a computer by combining soil maps with digital elevation data. Expert knowledge or statistics can be used to implement the disaggregation. Use of a restructuring element and k-means clustering are illustrated. Another approach to disaggregation uses training areas to develop rules to extrapolate detailed mapping into other, larger areas where detailed mapping is unavailable. A two-level decision tree example is presented. At one level, the decision tree method is used to capture mapping rules from the training area; at another level, it is used to define the domain over which those rules can be extrapolated. (C) 2001 Elsevier Science B.V. All rights reserved.
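The k-means step of the disaggregation can be sketched in a few lines: cells of one reconnaissance map unit, described by terrain attributes, are clustered into candidate landscape elements. The data below are invented (two synthetic landform groups in elevation/slope space), not Murray-Darling mapping:

```python
import random

# Toy terrain attributes (elevation m, slope %) for the cells of one soil
# map unit; two landform groups are simulated so the clustering has
# something to recover.
random.seed(1)
crest = [(random.gauss(300, 5), random.gauss(2.0, 0.5)) for _ in range(50)]
plain = [(random.gauss(120, 5), random.gauss(0.5, 0.2)) for _ in range(50)]
cells = crest + plain

def kmeans(points, centres, iters=20):
    """Plain k-means: assign to nearest centre, then recompute centres."""
    for _ in range(iters):
        groups = [[] for _ in centres]
        for pt in points:
            i = min(range(len(centres)),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(pt, centres[c])))
            groups[i].append(pt)
        centres = [tuple(sum(col) / len(g) for col in zip(*g)) if g else centres[i]
                   for i, g in enumerate(groups)]
    return centres, groups

# Seed centres with one cell from each end of the attribute range.
centres, groups = kmeans(cells, [cells[0], cells[-1]])
```

Each resulting cluster is a candidate landscape element of the map unit; expert knowledge (or the survey report's toposequence descriptions) then labels the clusters with soil classes.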

Relevance:

40.00%

Publisher:

Abstract:

The principle of using induction rules based on spatial environmental data to model a soil map has previously been demonstrated. Whilst the general pattern of classes of large spatial extent, and those with close association with geology, were delineated, small classes and the detailed spatial pattern of the map were less well rendered. Here we examine several strategies to improve the quality of the soil map models generated by rule induction. Terrain attributes that are better suited to landscape description at a resolution of 250 m are introduced as predictors of soil type. A map sampling strategy is developed. Classification error is reduced by using boosting rather than cross-validation to improve the model. Further, the benefit of incorporating the local spatial context for each environmental variable into the rule induction is examined. The best model was achieved by sampling in proportion to the spatial extent of the mapped classes, boosting the decision trees, and using spatial contextual information extracted from the environmental variables.

Relevance:

40.00%

Publisher:

Abstract:

The present paper addresses two major concerns that were identified when developing neural network based prediction models and which can limit their wider applicability in the industry. The first problem is that it appears neural network models are not readily available to a corrosion engineer. Therefore the first part of this paper describes a neural network model of CO2 corrosion which was created using a standard commercial software package and simple modelling strategies. It was found that such a model was able to capture practically all of the trends noticed in the experimental data with acceptable accuracy. This exercise has proven that a corrosion engineer could readily develop a neural network model such as the one described here for any problem at hand, given that sufficient experimental data exist. This applies even in the cases when the understanding of the underlying processes is poor. The second problem arises from cases when all the required inputs for a model are not known or can be estimated with only a limited degree of accuracy. It seems advantageous to have models that can take as input a range rather than a single value. One such model, based on the so-called Monte Carlo approach, is presented. A number of comparisons are shown which have illustrated how a corrosion engineer might use this approach to rapidly test the sensitivity of a model to the uncertainties associated with the input parameters. (C) 2001 Elsevier Science Ltd. All rights reserved.
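The Monte Carlo idea described above, feeding input ranges rather than point values through a model and reading off the spread of the output, is easy to sketch. The response function below is a made-up placeholder, not the paper's CO2-corrosion network, and the input ranges are invented:

```python
import random
import statistics

# Stand-in model: corrosion rate as a toy function of temperature and pH.
# Any predictive model (e.g. a trained neural network) slots in here.
def corrosion_rate(temp_c, ph):
    return 0.1 * temp_c * (7.0 - ph)

random.seed(42)
samples = []
for _ in range(10_000):
    temp = random.uniform(40.0, 60.0)   # input known only as a range
    ph = random.gauss(5.0, 0.2)         # input with estimated uncertainty
    samples.append(corrosion_rate(temp, ph))

mean = statistics.fmean(samples)
srt = sorted(samples)
p5, p95 = srt[500], srt[9500]           # rough 5th/95th percentiles
```

The width of the (p5, p95) band is the sensitivity measure: inputs whose uncertainty widens it most are the ones worth measuring more accurately.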

Relevance:

40.00%

Publisher:

Abstract:

We focus on mixtures of factor analyzers from the perspective of a method for model-based density estimation from high-dimensional data, and hence for the clustering of such data. This approach enables a normal mixture model to be fitted to a sample of n data points of dimension p, where p is large relative to n. The number of free parameters is controlled through the dimension of the latent factor space. By working in this reduced space, it allows a model for each component-covariance matrix with complexity lying between that of the isotropic and full covariance structure models. We shall illustrate the use of mixtures of factor analyzers in a practical example that considers the clustering of cell lines on the basis of gene expressions from microarray experiments. (C) 2002 Elsevier Science B.V. All rights reserved.
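The parameter saving from working in the latent factor space can be made concrete. The counts below use the standard formulae for one mixture component: p(p+1)/2 free parameters for a full covariance matrix, versus p*q loadings plus p diagonal uniquenesses, less q(q-1)/2 for rotational freedom, for a q-factor analyzer; the values of p and q are illustrative, not from the microarray example:

```python
# Free covariance parameters per mixture component.
def full_cov_params(p):
    return p * (p + 1) // 2

def factor_cov_params(p, q):
    return p * q + p - q * (q - 1) // 2

p, q = 50, 3
print(full_cov_params(p))       # 1275
print(factor_cov_params(p, q))  # 197
```

With p large relative to n, the full model is unestimable while the factor model stays tractable, which is exactly the regime the abstract targets.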

Relevance:

40.00%

Publisher:

Abstract:

25th Annual Conference of the European Cetacean Society, Cadiz, Spain 21-23 March 2011.

Relevance:

40.00%

Publisher:

Abstract:

Research on the problem of feature selection for clustering continues to develop. This is a challenging task, mainly due to the absence of class labels to guide the search for relevant features. Categorical feature selection for clustering has rarely been addressed in the literature, with most of the proposed approaches having focused on numerical data. In this work, we propose an approach to simultaneously cluster categorical data and select a subset of relevant features. Our approach is based on a modification of a finite mixture model (of multinomial distributions), where a set of latent variables indicate the relevance of each feature. To estimate the model parameters, we implement a variant of the expectation-maximization algorithm that simultaneously selects the subset of relevant features, using a minimum message length criterion. The proposed approach compares favourably with two baseline methods: a filter based on an entropy measure and a wrapper based on mutual information. The results obtained on synthetic data illustrate the ability of the proposed expectation-maximization method to recover ground truth. An application to real data from official statistics shows its usefulness.
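The core of the approach is EM on a mixture of categorical (multinomial) distributions. A bare-bones sketch of those E and M steps follows, for a single feature with three categories and two components; it omits the paper's feature saliency latent variables and minimum message length criterion, and all numbers are invented:

```python
import random

# Generate toy categorical data from two known components.
random.seed(0)
comp_a = [0.7, 0.2, 0.1]
comp_b = [0.1, 0.2, 0.7]
data = [random.choices(range(3), w)[0]
        for w in [comp_a] * 200 + [comp_b] * 200]

pi = [0.5, 0.5]                               # mixing weights
theta = [[0.5, 0.3, 0.2], [0.2, 0.3, 0.5]]    # category probabilities
for _ in range(100):
    # E-step: responsibility of each component for each observation.
    resp = []
    for x in data:
        w = [pi[k] * theta[k][x] for k in range(2)]
        s = sum(w)
        resp.append([wi / s for wi in w])
    # M-step: re-estimate mixing weights and category probabilities.
    for k in range(2):
        nk = sum(r[k] for r in resp)
        pi[k] = nk / len(data)
        theta[k] = [sum(r[k] for r, x in zip(resp, data) if x == c) / nk
                    for c in range(3)]
```

In the paper's full model, each feature additionally carries a relevance indicator, and the M-step trades model fit against message length so irrelevant features are switched off.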

Relevance:

40.00%

Publisher:

Abstract:

In this study, we concentrate on modelling gross primary productivity using two simple approaches to simulate canopy photosynthesis: "big leaf" and "sun/shade" models. Two approaches for calibration are used: scaling up of canopy photosynthetic parameters from the leaf to the canopy level, and fitting canopy biochemistry to eddy covariance fluxes. Validation of the models is achieved by using eddy covariance data from the LBA site C14. Comparing the performance of both models, we conclude that, both numerically (in terms of goodness of fit) and qualitatively (in terms of residual response to different environmental variables), the sun/shade model does a better job. Compared to the sun/shade model, the big leaf model shows a lower goodness of fit and fails to respond to variations in the diffuse fraction, also having skewed responses to temperature and VPD. The separate treatment of sun and shade leaves, in combination with the separation of the incoming light into direct beam and diffuse, makes sun/shade a strong modelling tool that catches more of the observed variability in canopy fluxes as measured by eddy covariance. In conclusion, the sun/shade approach is a relatively simple and effective tool for modelling photosynthetic carbon uptake that could be easily included in many terrestrial carbon models.
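The structural difference between the two model families comes down to how intercepted light is partitioned. A minimal two-leaf sketch is shown below; it uses the common Beer's-law sunlit fraction but is only an illustration of the sun/shade idea, with invented LAI, irradiances, and extinction coefficient, not the calibrated model of the study:

```python
import math

def two_leaf_partition(lai, direct, diffuse, k=0.5):
    """Split a canopy into sunlit/shaded leaf area and share the light.

    Sunlit LAI follows the Beer's-law gap fraction; direct beam goes only
    to sunlit leaves, diffuse light is shared in proportion to leaf area.
    """
    lai_sun = (1.0 - math.exp(-k * lai)) / k   # sunlit leaf area index
    lai_shade = lai - lai_sun
    q_sun = direct + diffuse * lai_sun / lai
    q_shade = diffuse * lai_shade / lai
    return lai_sun, lai_shade, q_sun, q_shade

# Illustrative values: LAI 5, 400 W m-2 direct, 100 W m-2 diffuse.
lai_sun, lai_shade, q_sun, q_shade = two_leaf_partition(5.0, 400.0, 100.0)
```

A big leaf model collapses both pools into one average leaf, which is why it cannot respond to changes in the diffuse fraction: shifting light from direct to diffuse changes q_sun and q_shade here, but leaves the single-leaf total unchanged.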

Relevance:

40.00%

Publisher:

Abstract:

Doctoral Programme in Mathematics and Applications.