893 results for Biogeography, Bioregions, Subregion, Statistical Modelling, GIS, Finite Mixture Models
Abstract:
Two ongoing projects at ESSC that involve the development of new techniques for extracting information from airborne LiDAR data and combining this information with environmental models will be discussed. The first project, in conjunction with Bristol University, aims to improve 2-D river flood flow models by using remote sensing to provide distributed data for model calibration and validation. Airborne LiDAR can provide such models with a dense and accurate floodplain topography together with vegetation heights for parameterisation of model friction. The vegetation height data can be used to specify a friction factor at each node of a model’s finite element mesh. A LiDAR range image segmenter has been developed which converts a LiDAR image into separate raster maps of surface topography and vegetation height for use in the model. Satellite and airborne SAR data have been used to measure flood extent remotely in order to validate the modelled flood extent. Methods have also been developed for improving the models by decomposing the model’s finite element mesh to reflect floodplain features such as hedges and trees that have different frictional properties from their surroundings. Originally developed for rural floodplains, the segmenter is currently being extended to provide DEMs and friction parameter maps for urban floods by fusing the LiDAR data with digital map data. The second project is concerned with the extraction of tidal channel networks from LiDAR. These networks are important features of the inter-tidal zone, and play a key role in tidal propagation and in the evolution of salt-marshes and tidal flats. The study of their morphology is currently an active area of research, and a number of theories related to networks have been developed which require validation using dense and extensive observations of network forms and cross-sections. The conventional method of measuring networks is cumbersome and subjective, involving manual digitisation of aerial photographs in conjunction with field measurement of channel depths and widths for selected parts of the network. A semi-automatic technique has been developed to extract networks from LiDAR data of the inter-tidal zone. A multi-level knowledge-based approach has been implemented, whereby low-level algorithms first extract channel fragments based mainly on image properties, and a high-level processing stage then improves the network using domain knowledge. The low-level approach uses multi-scale edge detection to detect channel edges, then associates adjacent anti-parallel edges to form channels. The higher-level processing includes a channel repair mechanism.
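A minimal sketch of the kind of range-image segmentation described above, assuming a gridded LiDAR surface raster; the grey-scale morphological opening used here as a ground filter and the window size are illustrative assumptions, not the ESSC segmenter itself:

```python
import numpy as np
from scipy.ndimage import grey_opening

def segment_lidar(range_image, window=15):
    """Split a LiDAR range image into ground topography and vegetation height.

    A grey-scale morphological opening removes features narrower than
    `window` cells, giving a crude bare-earth estimate; the residual is
    taken as vegetation (or building) height for friction parameterisation.
    """
    ground = grey_opening(range_image, size=(window, window))
    vegetation = range_image - ground
    return ground, vegetation

# Toy example: a flat floodplain crossed by a hedge-like ridge 3 m high.
dsm = np.zeros((60, 60))
dsm[28:32, :] += 3.0
topo, veg = segment_lidar(dsm)
print(veg.max())  # ~3.0 over the hedge, 0 elsewhere
```

The vegetation raster could then be mapped to a friction factor at each node of the finite element mesh, as described above.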
Abstract:
Eddy current testing by current deflection detects surface cracks and geometric features by sensing the re-routing of currents. Currents are diverted by cracks in two ways: down the walls, and along their length at the surface. Current deflection utilises the latter currents, detecting them via their tangential magnetic field. Results from 3-D finite element computer modelling, which show the two forms of deflection, are presented. Further results indicate that the current deflection technique is suitable for the detection of surface cracks in smooth materials with varying material properties.
Abstract:
Lava domes comprise core, carapace, and clastic talus components. They can grow endogenously by inflation of a core and/or exogenously with the extrusion of shear-bounded lobes and whaleback lobes at the surface. Internal structure is paramount in determining the extent to which lava dome growth evolves stably, or conversely the propensity for collapse. The more core lava that exists within a dome, in both relative and absolute terms, the more explosive energy is available, both for large pyroclastic flows following collapse and in particular for lateral blast events following very rapid removal of lateral support to the dome. Knowledge of the location of the core lava within the dome is also relevant for hazard assessment purposes. A spreading toe, or lobe of core lava, over a talus substrate may be both relatively unstable and likely to accelerate to more violent activity during the early phases of a retrogressive collapse. Soufrière Hills Volcano, Montserrat has been erupting since 1995 and has produced numerous lava domes that have undergone repeated collapse events. We consider one continuous dome growth period, from August 2005 to May 2006, that resulted in a dome collapse event on 20 May 2006. The collapse lasted 3 h and was unusually violent and rapid, removing the whole dome plus dome remnants from a previous growth period. We use an axisymmetric computational Finite Element Method model for the growth and evolution of a lava dome. Our model comprises evolving core, carapace and talus components based on axisymmetric endogenous dome growth, which permits us to model the interface between talus and core. Despite explicitly modelling only axisymmetric endogenous dome growth, our core–talus model simulates many of the observed growth characteristics of the 2005–2006 SHV lava dome well. Further, it is possible for our simulations to replicate large-scale exogenous characteristics when a considerable volume of talus has accumulated around the lower flanks of the dome. Model results suggest that dome core can override talus within a growing dome, potentially generating a region of significant weakness and a potential locus for collapse initiation.
Abstract:
During many lava dome-forming eruptions, persistent rockfalls and the concurrent development of a substantial talus apron around the foot of the dome are important aspects of the observed activity. An improved understanding of internal dome structure, including the shape and internal boundaries of the talus apron, is critical for determining when a lava dome is poised for a major collapse and how this collapse might ensue. We consider a period of lava dome growth at the Soufrière Hills Volcano, Montserrat, from August 2005 to May 2006, during which a 100 × 10⁶ m³ lava dome developed that culminated in a major dome-collapse event on 20 May 2006. We use an axisymmetric Finite Element Method model to simulate the growth and evolution of the lava dome, including the development of the talus apron. We first test the generic behaviour of this continuum model, which has core lava and carapace/talus components. Our model describes the generation rate of talus, including its spatial and temporal variation, as well as its post-generation deformation, which is important for an improved understanding of the internal configuration and structure of the dome. We then use our model to simulate the 2005 to 2006 Soufrière Hills dome growth using measured dome volumes and extrusion rates to drive the model and generate the evolving configuration of the dome core and carapace/talus domains. The evolution of the model is compared with the observed rockfall seismicity using event counts and seismic energy parameters, which are used here as a measure of rockfall intensity and hence a first-order proxy for volumes. The range of model-derived volume increments of talus aggraded to the talus slope per recorded rockfall event, approximately 3 × 10³–13 × 10³ m³ per rockfall, is high with respect to estimates based on observed events. From this, it is inferred that some of the volumetric growth of the talus apron (perhaps up to 60–70%) might have occurred in the form of aseismic deformation of the talus, forced by an internal, laterally spreading core. Talus apron growth by this mechanism has not previously been identified, and this suggests that the core, hosting hot gas-rich lava, could have a greater lateral extent than previously considered.
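A back-of-envelope sketch of the volume bookkeeping behind the aseismic-growth inference; every input below except the total dome volume and the quoted per-event range is a hypothetical placeholder:

```python
# Hypothetical talus bookkeeping for one dome-growth period.
dome_volume = 100e6        # m^3, total dome grown Aug 2005 - May 2006
talus_fraction = 0.5       # hypothetical share of growth ending up as talus
rockfall_events = 20_000   # hypothetical count of seismic rockfall events

model_per_event = dome_volume * talus_fraction / rockfall_events
print(f"model-derived increment: {model_per_event:.0f} m^3/event")  # ~2500

# If an observed rockfall can plausibly carry only ~1e3 m^3 (hypothetical),
# the shortfall must be made up by aseismic spreading of the talus apron:
observed_per_event = 1e3
print(f"aseismic share: {1 - observed_per_event / model_per_event:.0%}")
```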
Abstract:
Many different individuals, who have their own expertise and criteria for decision making, are involved in making decisions on construction projects. Decision-making processes are thus significantly affected by communication, in which the dynamic interplay of human intentions leads to unpredictable outcomes. In order to theorise decision-making processes that include communication, it is argued here that such processes resemble evolutionary dynamics in terms of both selection and mutation, which can be expressed by the replicator-mutator equation. To support this argument, a mathematical model of decision making has been constructed by analogy with evolutionary dynamics, in which there are three variables: initial support rate, business hierarchy, and power of persuasion. In parallel, a survey of decision-making patterns in construction projects has been conducted through a self-administered mail questionnaire sent to construction practitioners. Comparison between the numerical analysis of the mathematical model and the statistical analysis of the empirical data has shown significant potential for the replicator-mutator equation as a tool to study the dynamic properties of intentions in communication.
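The replicator-mutator dynamics invoked above are standardly written as dx_i/dt = Σ_j x_j f_j Q_ji − φ(x) x_i, where f_j is the fitness (here, persuasive power) of intention j, Q_ji the probability that a supporter of j converts to i, and φ(x) = Σ_j f_j x_j the average fitness. A minimal numerical sketch; the three-option setup and all parameter values are illustrative, not the paper's calibrated model:

```python
import numpy as np

def replicator_mutator(x, f, Q, dt=0.01, steps=5000):
    """Integrate dx_i/dt = sum_j x_j f_j Q_ji - phi(x) x_i by forward Euler."""
    for _ in range(steps):
        phi = f @ x                      # average fitness
        dx = (x * f) @ Q - phi * x       # selection + mutation - normalisation
        x = np.clip(x + dt * dx, 0.0, None)
        x = x / x.sum()                  # keep x on the simplex
    return x

# Three competing intentions; f: persuasive power, Q[i, j]: probability that
# a supporter of option i converts to option j (illustrative values).
f = np.array([1.0, 1.2, 0.9])
Q = np.array([[0.90, 0.05, 0.05],
              [0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90]])
x0 = np.array([0.6, 0.3, 0.1])           # initial support rates
print(replicator_mutator(x0, f, Q))      # the most persuasive option dominates
```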
Abstract:
This study suggests a statistical strategy for explaining how food purchasing intentions are influenced by different levels of risk perception and trust in food safety information. The modelling process is based on Ajzen's Theory of Planned Behaviour and includes trust and risk perception as additional explanatory factors. Interaction and endogeneity across these determinants are explored through a system of simultaneous equations, while the SPARTA equation is estimated through an ordered probit model. Furthermore, parameters are allowed to vary as a function of socio-demographic variables. The application explores chicken purchasing intentions both in a standard situation and conditional on a hypothetical salmonella scare. Data were collected through a nationally representative UK-wide survey of 533 respondents in face-to-face, in-home interviews. Empirical findings show that interactions exist among the determinants of planned behaviour, and that socio-demographic variables improve the model's performance. Attitudes emerge as the key determinant of intention to purchase chicken, while trust in food safety information provided by the media reduces the likelihood of purchase.
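A sketch of the ordered probit machinery used for the SPARTA equation, fitted here to synthetic data with statsmodels; the variable names and data are invented, and the paper's simultaneous-equation system and socio-demographic parameter variation are not reproduced:

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 533  # same size as the survey; data entirely synthetic

X = pd.DataFrame({
    "attitude": rng.normal(size=n),
    "trust_media": rng.normal(size=n),
    "risk_perception": rng.normal(size=n),
})
# Latent purchase intention: attitude raises it, media trust lowers it.
latent = (1.0 * X["attitude"] - 0.5 * X["trust_media"]
          - 0.3 * X["risk_perception"] + rng.normal(size=n))
# Map the latent scale to a 5-point ordinal intention score.
y = np.digitize(latent, bins=[-1.5, -0.5, 0.5, 1.5])

res = OrderedModel(y, X, distr="probit").fit(method="bfgs", disp=False)
print(res.summary())
```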
Abstract:
Networks are ubiquitous in natural, technological and social systems. They are of increasing relevance for improved understanding and control of infectious diseases of plants, animals and humans, given the interconnectedness of today's world. Recent modelling work on disease development in complex networks shows: the relative rapidity of pathogen spread in scale-free compared with random networks, unless there is high local clustering; the theoretical absence of an epidemic threshold in scale-free networks of infinite size, which implies that diseases with low infection rates can spread in them, but the emergence of a threshold when realistic features are added to networks (e.g. finite size, household structure or deactivation of links); and the influence on epidemic dynamics of asymmetrical interactions. Models suggest that control of pathogens spreading in scale-free networks should focus on highly connected individuals rather than on mass random immunization. A growing number of empirical applications of network theory in human medicine and animal disease ecology confirm the potential of the approach, and suggest that network thinking could also benefit plant epidemiology and forest pathology, particularly in human-modified pathosystems linked by commercial transport of plant and disease propagules. Potential consequences for the study and management of plant and tree diseases are discussed.
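A small simulation in the spirit of the modelling work reviewed here, comparing epidemic spread on a scale-free and a random network of the same size and mean degree; all parameters are illustrative:

```python
import random
import networkx as nx

def sir_final_size(G, beta=0.05, gamma=0.2, seed=0):
    """Discrete-time SIR on a graph; returns the fraction ever infected."""
    rng = random.Random(seed)
    infected, recovered = {rng.choice(list(G))}, set()
    while infected:
        new_inf = {v for u in infected for v in G[u]
                   if v not in infected and v not in recovered
                   and rng.random() < beta}
        recovered |= {u for u in infected if rng.random() < gamma}
        infected = (infected | new_inf) - recovered
    return len(recovered) / G.number_of_nodes()

n, mean_degree = 5000, 6
scale_free = nx.barabasi_albert_graph(n, mean_degree // 2, seed=1)
random_net = nx.gnp_random_graph(n, mean_degree / (n - 1), seed=1)
print("scale-free:", sir_final_size(scale_free))
print("random:    ", sir_final_size(random_net))
```

Near the epidemic threshold, the hub-dominated scale-free network typically sustains spread that the random network cannot, which is why the models cited above favour targeting highly connected individuals over mass random immunization.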
Abstract:
A physically motivated statistical model is used to diagnose variability and trends in wintertime (October–March) Global Precipitation Climatology Project (GPCP) pentad (5-day mean) precipitation. Quasi-geostrophic theory suggests that extratropical precipitation amounts should depend multiplicatively on the pressure gradient, saturation specific humidity, and the meridional temperature gradient. This physical insight has been used to guide the development of a suitable statistical model for precipitation using a mixture of generalized linear models: a logistic model for the binary occurrence of precipitation and a Gamma distribution model for the wet-day precipitation amount. The statistical model allows for the investigation of the role of each factor in determining variations and long-term trends. Saturation specific humidity q_s has a generally negative effect on precipitation occurrence globally and on the tropical wet-pentad precipitation amount, but has a positive relationship with the pentad precipitation amount at mid- and high latitudes. The North Atlantic Oscillation, a proxy for the meridional temperature gradient, is also found to have a statistically significant positive effect on precipitation over much of the Atlantic region. Residual time trends in wet-pentad precipitation are extremely sensitive to the choice of the wet-pentad threshold because of increasing trends in low-amplitude precipitation pentads; too low a choice of threshold can lead to a spurious decreasing trend in wet-pentad precipitation amounts. However, for not too small thresholds, it is found that the meridional temperature gradient is an important factor for explaining part of the long-term trend in Atlantic precipitation.
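The two-part structure described above is a hurdle-type mixture of GLMs; a sketch on synthetic pentad data (the covariates follow the paper's physical factors, but all numbers are invented):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 2000
# Synthetic stand-ins for pressure gradient, saturation specific humidity
# q_s, and the meridional temperature gradient (NAO proxy).
X = sm.add_constant(rng.normal(size=(n, 3)))

# Part 1: logistic GLM for the binary occurrence of a wet pentad.
p_wet = 1 / (1 + np.exp(-(X @ [0.2, 0.5, -0.4, 0.6])))
wet = rng.random(n) < p_wet
occurrence = sm.GLM(wet.astype(float), X,
                    family=sm.families.Binomial()).fit()

# Part 2: Gamma GLM (log link) for the amount on wet pentads only.
mu = np.exp(X[wet] @ [1.0, 0.3, 0.2, 0.4])
amounts = rng.gamma(shape=2.0, scale=mu / 2.0)
amount = sm.GLM(amounts, X[wet],
                family=sm.families.Gamma(sm.families.links.Log())).fit()

print(occurrence.params)
print(amount.params)
```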
Abstract:
The assumption that negligible work is involved in the formation of new surfaces in the machining of ductile metals is re-examined in the light of both current Finite Element Method (FEM) simulations of cutting and modern ductile fracture mechanics. The work associated with separation criteria in FEM models is shown to be in the kJ/m² range rather than the few J/m² of the surface energy (surface tension) employed by Shaw in his pioneering study of 1954, following which consideration of surface work has been omitted from analyses of metal cutting. The much greater values of specific surface work are not surprising in terms of ductile fracture mechanics, where kJ/m² values of fracture toughness are typical of the ductile metals involved in machining studies. This paper shows that when even the simple Ernst–Merchant analysis is generalised to include significant surface work, many of the experimental observations for which traditional ‘plasticity and friction only’ analyses seem to have no quantitative explanation are now given meaning. In particular, the primary shear plane angle φ becomes material-dependent. The experimental increase of φ up to a saturated level, as the uncut chip thickness is increased, is predicted. The positive intercepts found in plots of cutting force vs. depth of cut, and in plots of force resolved along the primary shear plane vs. area of shear plane, are shown to be measures of the specific surface work. It is demonstrated that neglect of these intercepts in cutting analyses is the reason why anomalously high values of shear yield stress are derived at those very small uncut chip thicknesses at which the so-called size effect becomes evident. The material toughness/strength ratio, combined with the depth of cut to form a non-dimensional parameter, is shown to control ductile cutting mechanics. The toughness/strength ratio of a given material will change with rate, temperature, and thermomechanical treatment, and the influence of such changes, together with changes in depth of cut, on the character of machining is discussed. Strength or hardness alone is insufficient to describe machining. The failure of the Ernst–Merchant theory seems less to do with problems of uniqueness and the validity of minimum work, and more to do with the problem not being properly posed. The new analysis compares favourably and consistently with the wide body of experimental results available in the literature. Why considerable progress in the understanding of metal cutting has been achieved without reference to significant surface work is also discussed.
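For reference, the classical Ernst–Merchant result referred to above follows from minimising the cutting force over the shear-plane angle; the sketch below states that baseline and, schematically, the non-dimensional group implied by the abstract (the exact generalised force expression is in the paper and is not reproduced here):

```latex
% Classical Ernst--Merchant minimisation: alpha = rake angle, beta =
% friction angle, k = shear yield stress, w = width of cut, t = uncut
% chip thickness.
\[
  F_c(\varphi) = \frac{k\,w\,t\,\cos(\beta-\alpha)}
                      {\sin\varphi\,\cos(\varphi+\beta-\alpha)},
  \qquad
  \frac{\mathrm{d}F_c}{\mathrm{d}\varphi} = 0
  \;\Longrightarrow\;
  \varphi = \frac{\pi}{4} - \frac{\beta-\alpha}{2},
\]
% i.e. phi is material-independent. Including a specific surface work
% (fracture toughness) R adds a term of order R*w to the cutting force,
% so the minimising phi instead depends on the toughness/strength/depth
% group described in the abstract:
\[
  Z = \frac{R}{k\,t}.
\]
```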
Abstract:
The application of prediction theories has been widely practised for many years in industries such as manufacturing, defence and aerospace. Although these theories are not new, they have not been widely applied within the building services industry. Collectively, the building services industry should take a deeper look at these approaches in comparison with the traditional deterministic approaches currently in use. By extending their application into this sector, this paper seeks to provide the industry with an overview of how simplified stochastic modelling, coupled with availability and reliability predictions based on historical data compiled from various sources, could enhance the quality of building services systems.
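As a concrete instance of the kind of prediction meant here: steady-state availability follows from mean time between failures (MTBF) and mean time to repair (MTTR), and series systems multiply availabilities. A minimal sketch with invented figures for two building-services components:

```python
def availability(mtbf_h, mttr_h):
    """Steady-state availability A = MTBF / (MTBF + MTTR)."""
    return mtbf_h / (mtbf_h + mttr_h)

# Hypothetical historical data for a chiller and a pump in one system.
chiller = availability(mtbf_h=8000, mttr_h=24)
pump = availability(mtbf_h=20000, mttr_h=8)

# In series, the system is up only if both components are up.
system = chiller * pump
print(f"chiller {chiller:.4f}, pump {pump:.4f}, system {system:.4f}")
```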
Abstract:
A basic principle in data modelling is to incorporate available a priori information regarding the underlying data generating mechanism into the modelling process. We adopt this principle and consider grey-box radial basis function (RBF) modelling capable of incorporating prior knowledge. Specifically, we show how to explicitly incorporate two types of prior knowledge: that the underlying data generating mechanism exhibits a known symmetry property, and that the underlying process obeys a set of given boundary value constraints. The class of orthogonal least squares regression algorithms can readily be applied to construct parsimonious grey-box RBF models with enhanced generalisation capability.
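One way to encode a known (even) symmetry prior of the kind described above is to symmetrise the basis functions themselves, so every candidate regressor honours the prior by construction. A minimal sketch in which plain least squares stands in for the paper's orthogonal forward-selection algorithms, and the target function and centres are illustrative:

```python
import numpy as np

def symmetric_rbf_design(x, centres, width):
    """Gaussian RBF design matrix with even symmetry built in:
    phi_j(x) = exp(-(x - c_j)^2 / w) + exp(-(x + c_j)^2 / w),
    so any fitted model satisfies f(x) = f(-x) exactly."""
    d1 = np.exp(-((x[:, None] - centres[None, :]) ** 2) / width)
    d2 = np.exp(-((x[:, None] + centres[None, :]) ** 2) / width)
    return d1 + d2

rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, 200)
y = np.cos(2 * x) + 0.05 * rng.normal(size=200)   # even target function

centres = np.linspace(0, 3, 8)
Phi = symmetric_rbf_design(x, centres, width=0.5)
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)

x_test = np.linspace(-3, 3, 7)
f = symmetric_rbf_design(x_test, centres, 0.5) @ theta
print(np.allclose(f, f[::-1]))  # True: the symmetry prior holds exactly
```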
Abstract:
A finite difference scheme is presented for the inviscid terms of the equations of compressible fluid dynamics with general non-equilibrium chemistry and internal energy.
Abstract:
The climate belongs to the class of non-equilibrium forced and dissipative systems, for which most results of quasi-equilibrium statistical mechanics, including the fluctuation-dissipation theorem, do not apply. In this paper we show for the first time how the Ruelle linear response theory, developed for rigorously studying the impact of perturbations on general observables of non-equilibrium statistical mechanical systems, can be applied with great success to analyse the climatic response to general forcings. The crucial value of the Ruelle theory lies in the fact that it allows one to compute the response of the system in terms of expectation values of explicit and computable functions of the phase space, averaged over the invariant measure of the unperturbed state. We choose as a test bed a classical version of the Lorenz 96 model, which, in spite of its simplicity, has a well-recognized prototypical value, as it is a spatially extended one-dimensional model and presents the basic ingredients of the actual atmosphere, such as dissipation, advection and the presence of an external forcing. We recapitulate the main aspects of the general response theory and propose some new general results. We then analyse the frequency dependence of the response of both local and global observables to perturbations having localized as well as global spatial patterns. We derive analytically several properties of the corresponding susceptibilities, such as asymptotic behaviour, validity of Kramers-Kronig relations, and sum rules, whose main ingredient is the causality principle. We show that all the coefficients of the leading asymptotic expansions, as well as the integral constraints, can be written as linear functions of parameters that describe the unperturbed properties of the system, such as its average energy. Some newly obtained empirical closure equations for such parameters allow one to express these properties as explicit functions of the unperturbed forcing parameter alone for a general class of chaotic Lorenz 96 models. We then verify the theoretical predictions against the outputs of the simulations to a high degree of precision. The theory is used to explain differences in the response of local and global observables, to define the intensive properties of the system, which do not depend on the spatial resolution of the Lorenz 96 model, and to generalize the concept of climate sensitivity to all time scales. We also show how to reconstruct the linear Green function, which maps perturbations of general time patterns into changes in the expectation value of the considered observable for finite as well as infinite time. Finally, we propose a simple yet general methodology to study general climate change problems on virtually any time scale, by resorting only to well-selected simulations and by taking full advantage of ensemble methods. The specific case of the globally averaged surface temperature response to a general pattern of change of the CO2 concentration is discussed. We believe that the proposed approach may constitute a mathematically rigorous and practically very effective way to approach the problems of climate sensitivity, climate prediction, and climate change from a radically new perspective.
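The Lorenz 96 model used as the test bed is defined by dx_i/dt = (x_{i+1} − x_{i−2}) x_{i−1} − x_i + F with periodic indices; a minimal RK4 integrator tracking a global observable (the resolution and forcing below are the usual illustrative choices, not the paper's full experimental setup):

```python
import numpy as np

def l96_tendency(x, F):
    """Lorenz 96: advection, linear dissipation, constant forcing."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, F, dt):
    k1 = l96_tendency(x, F)
    k2 = l96_tendency(x + 0.5 * dt * k1, F)
    k3 = l96_tendency(x + 0.5 * dt * k2, F)
    k4 = l96_tendency(x + dt * k3, F)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

n, F, dt = 40, 8.0, 0.01   # classic chaotic setting
x = F + 0.01 * np.random.default_rng(0).normal(size=n)
energy = []
for _ in range(20000):      # 200 model time units
    x = rk4_step(x, F, dt)
    energy.append(0.5 * np.sum(x ** 2))
print(np.mean(energy))      # average energy, a global observable
```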
Abstract:
This paper addresses the statistical mechanics of ideal polymer chains next to a hard wall. The principal quantity of interest, from which all monomer densities can be calculated, is the partition function, G_N(z), for a chain of N discrete monomers with one end fixed a distance z from the wall. It is well accepted that in the limit of infinite N, G_N(z) satisfies the diffusion equation with the Dirichlet boundary condition, G_N(0) = 0, unless the wall possesses a sufficient attraction, in which case the Robin boundary condition, G_N(0) = −x G_N′(0), applies with a positive coefficient, x. Here we investigate the leading N^(−1/2) correction, ΔG_N(z). Prior to the adsorption threshold, ΔG_N(z) is found to involve two distinct parts: a Gaussian correction (for z ≲ aN^(1/2)) with a model-dependent amplitude, A, and a proximal-layer correction (for z ≲ a) described by a model-dependent function, B(z).
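For orientation, the infinite-N baseline referred to above can be stated explicitly: with step length a and the convention that an unconstrained chain has G_N = 1, the continuum partition function obeys a diffusion equation in N, and the Dirichlet case has the familiar error-function profile (a standard result, quoted here for context):

```latex
\[
  \frac{\partial G_N(z)}{\partial N}
    = \frac{a^2}{6}\,\frac{\partial^2 G_N(z)}{\partial z^2},
  \qquad G_0(z) = 1, \quad G_N(0) = 0,
\]
\[
  G_N(z) = \operatorname{erf}\!\left(\frac{z}{a\sqrt{2N/3}}\right).
\]
% The N^{-1/2} correction studied in the paper sits on top of this
% profile: the Gaussian part acts on the coil scale z ~ a N^{1/2},
% the proximal-layer part on the monomer scale z ~ a.
```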
Abstract:
A connection between a fuzzy neural network model and the mixture of experts network (MEN) modelling approach is established. Based on this linkage, two new neuro-fuzzy MEN construction algorithms are proposed to overcome the curse of dimensionality that is inherent in the majority of associative memory networks and/or other rule-based systems. The first construction algorithm employs a function selection manager module in an MEN system. The second construction algorithm is based on a new parallel learning algorithm in which each model rule is trained independently, and for which the parameter convergence property of the new learning method is established. As with the first approach, an expert selection criterion is utilised in this algorithm. These two construction methods are equivalent in their effectiveness in overcoming the curse of dimensionality by reducing the dimensionality of the regression vector, but the latter has the additional computational advantage of parallel processing. The proposed algorithms are analysed for effectiveness, followed by numerical examples to illustrate their efficacy on some difficult data-based modelling problems.
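The MEN structure referred to above combines expert outputs through a gating network, with each expert seeing only a low-dimensional slice of the regression vector; a minimal forward-pass sketch (softmax gate over linear experts, with all dimensions and parameters illustrative, and without the paper's construction or parallel-training algorithms):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def men_predict(X, experts, gate_W):
    """Mixture of experts: y(x) = sum_k g_k(x) * f_k(x).

    Each expert uses only its own subset of inputs -- the dimensionality
    reduction discussed above -- while the gate weights their outputs."""
    g = softmax(X @ gate_W)                       # (n, K) gating probabilities
    f = np.column_stack([X[:, cols] @ w
                         for cols, w in experts]) # (n, K) expert outputs
    return (g * f).sum(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
experts = [((0, 1), np.array([1.0, -0.5])),  # expert 1: inputs 0 and 1
           ((2, 3), np.array([0.3, 0.8]))]   # expert 2: inputs 2 and 3
gate_W = rng.normal(size=(4, 2))
print(men_predict(X, experts, gate_W)[:5])
```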