895 results for Computational Geometry and Object Modelling


Relevance:

100.00%

Publisher:

Abstract:

Multiphase flow of fluids in an unsaturated porous medium is considered as the simultaneous three-phase flow of water, NAPL, and air in the porous medium. An adaptive-solution, fully implicit modified sequential method is used for the numerical modelling. The effects of capillarity and heterogeneity at the interface between media are studied, and it is observed that the interface criterion has to be taken into account for correct prediction of NAPL migration, especially in heterogeneous media. A modified Newton-Raphson method is used for the linearization, and the Hestenes-Stiefel conjugate gradient method is used as the solver.
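As an aside on the solver named above, the Hestenes-Stiefel conjugate gradient method for a symmetric positive-definite linear system can be sketched in a few lines. This is a pure-Python illustration on a toy 2x2 system, not the paper's implementation, which operates on the Jacobian systems arising from the Newton-Raphson linearization:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Hestenes-Stiefel conjugate gradient for SPD A (dense lists)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                       # residual r = b - A x, with x = 0
    p = r[:]                       # initial search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# Small SPD test system with exact solution x = [1, 2]:
A = [[4.0, 1.0], [1.0, 3.0]]
b = [6.0, 7.0]
x = conjugate_gradient(A, b)
```

In exact arithmetic the method converges in at most n iterations for an n x n system, which is why it pairs well with the large sparse systems produced by implicit discretisations.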


In this paper, we propose a novel three-dimensional imaging method in which the object is captured by a coded cameras array (CCA) and computationally reconstructed as a series of longitudinal layered surface images of the object. The distribution of cameras in the array, named the code pattern, is crucial to the fidelity of the reconstructed images when correlation decoding is used. We use the DIRECT global optimization algorithm to design code patterns that possess the proper imaging property. We have conducted preliminary experiments to verify and test the performance of the proposed method with a simple discontinuous object and a small-scale CCA comprising nine cameras. After procedures such as capturing, photograph integration, computational reconstruction and filtering, we obtain reconstructed longitudinal layered surface images of the object with a high signal-to-noise ratio. The experimental results show that the proposed method is feasible and promising for use in fields such as remote sensing and machine vision. (c) 2006 Elsevier GmbH. All rights reserved.
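The correlation-decoding step on which the fidelity argument rests can be illustrated in one dimension: a point source recorded through a binary code pattern is recovered by cross-correlating the recording with the same pattern. The code pattern and scene below are hypothetical toy values, not the optimised nine-camera pattern from the paper:

```python
def correlate(signal, code):
    """Sliding cross-correlation of a signal with a shorter code."""
    n, m = len(signal), len(code)
    return [sum(signal[i + j] * code[j] for j in range(m))
            for i in range(n - m + 1)]

code = [1, 0, 1, 1, 0, 1, 0]        # hypothetical binary code pattern
scene = [0.0] * 20
scene[8] = 1.0                       # single point source at position 8

# Recording: each '1' in the code contributes a shifted copy of the scene.
recording = [0.0] * (20 + len(code) - 1)
for j, cj in enumerate(code):
    if cj:
        for i, s in enumerate(scene):
            recording[i + j] += s

decoded = correlate(recording, code)
peak = max(range(len(decoded)), key=lambda i: decoded[i])
```

The decoded signal peaks at the true source position because the code's autocorrelation has a dominant central lobe; designing patterns with low sidelobes is exactly the role the paper assigns to the DIRECT optimization.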


This paper reviews the development of computational fluid dynamics (CFD) specifically for turbomachinery simulations, with a particular focus on application to problems with complex geometry. The review is structured by considering this development as a series of paradigm shifts, followed by asymptotes. The original S1-S2 blade-to-blade-throughflow model is briefly described, followed by the development of two-dimensional and then three-dimensional blade-to-blade analysis. This in turn evolved from inviscid to viscous analysis and then from steady to unsteady flow simulations. This development trajectory led, over a surprisingly small number of years, to an accepted approach: a 'CFD orthodoxy'. A very important current area of intense interest and activity in turbomachinery simulation is accounting for real geometry effects, not just in the secondary air and turbine cooling systems but also in the primary path. The requirements here are threefold: capturing and representing these geometries in a computer model; making rapid design changes to these complex geometries; and managing the very large associated computational models on PC clusters. Accordingly, the challenges in applying the current CFD orthodoxy to complex geometries are described in some detail. The main aim of this paper is to argue that the current CFD orthodoxy is on a new asymptote, is not in fact suited to application to complex geometries, and that a paradigm shift must be sought. In particular, the new paradigm must be geometry-centric and inherently parallel, without serial bottlenecks. The main contribution of this paper is to describe such a potential paradigm shift, inspired by the animation industry and based on a fundamental shift in perspective from explicit to implicit geometry, and then to illustrate this with a number of applications to turbomachinery.
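The explicit-to-implicit shift argued for above can be illustrated with signed distance functions, one common implicit representation: a shape becomes a scalar field queried at any point, and Boolean operations reduce to min/max of fields. This is an assumption-level sketch of the general idea, not the paper's actual representation:

```python
import math

def sphere(cx, cy, cz, r):
    """Signed distance field of a sphere: negative inside, positive outside."""
    return lambda x, y, z: math.sqrt((x - cx)**2 + (y - cy)**2 + (z - cz)**2) - r

def union(f, g):
    return lambda x, y, z: min(f(x, y, z), g(x, y, z))

def subtract(f, g):
    """f with g removed."""
    return lambda x, y, z: max(f(x, y, z), -g(x, y, z))

# A hollow shell, e.g. a hub with a core removed: radius-2 sphere minus
# radius-1 sphere (illustrative geometry).
hub = subtract(sphere(0, 0, 0, 2.0), sphere(0, 0, 0, 1.0))

inside_shell = hub(1.5, 0, 0) < 0    # between the two radii -> inside
inside_hole  = hub(0.5, 0, 0) < 0    # inside the removed core -> outside
```

Because the geometry is just a function, point queries parallelise trivially with no serial meshing bottleneck, which is the property the paper's proposed paradigm exploits.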


An approach to reconfiguring control systems in the event of major failures is advocated. The approach relies on the convergence of several technologies which are currently emerging: constrained predictive control, high-fidelity modelling of complex systems, fault detection and identification, and model approximation and simplification. Much work is needed, both theoretical and algorithmic, to make this approach practical, but we believe that there is enough evidence, especially from existing industrial practice, for the scheme to be considered realistic. After outlining the problem and the proposed solution, the paper briefly reviews constrained predictive control and object-oriented modelling, which are the essential ingredients for practical implementation. The prospects for automatic model simplification are also reviewed briefly. The paper emphasizes some emerging trends in industrial practice, especially as regards the modelling and control of complex systems. Examples from process control and flight control are used to illustrate some of the ideas.
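As a sketch of the constrained predictive control ingredient, the toy controller below minimises a one-step-ahead tracking cost for a scalar plant subject to an input bound. The plant numbers are hypothetical, and practical MPC uses longer horizons and a QP solver; the point is only the structure "optimise, then respect the constraint":

```python
def mpc_step(x, ref, a=0.9, b=0.5, u_min=-1.0, u_max=1.0):
    """One-step predictive control of x[k+1] = a*x[k] + b*u[k]."""
    u = (ref - a * x) / b                 # unconstrained cost minimiser
    return max(u_min, min(u_max, u))      # enforce the actuator limit

def simulate(x0, ref, steps, a=0.9, b=0.5):
    x = x0
    for _ in range(steps):
        u = mpc_step(x, ref, a, b)
        x = a * x + b * u
    return x

# Drive the state from 0 toward a setpoint of 2 with |u| <= 1:
final = simulate(x0=0.0, ref=2.0, steps=30)
```

Early in the run the input saturates at the bound, so the state approaches the setpoint gradually; once the unconstrained optimum falls inside the bound, the controller reaches the reference exactly.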


There is an increasing demand for optimising complete systems and the devices within those systems, including capturing the interactions between the various multi-disciplinary (MD) components involved. Furthermore, confidence in robust solutions is essential. As a consequence, the computational cost rapidly increases, and in many cases it becomes infeasible to perform such conceptual designs. A coherent design methodology is proposed, where the aim is to improve the design process by effectively exploiting the potential of computational synthesis, search and optimisation, and conventional simulation, with a reduction in computational cost. This optimisation framework consists of a hybrid optimisation algorithm that handles multi-fidelity simulations. Simultaneously, in order to handle uncertainty without recasting the model and at affordable computational cost, a stochastic modelling method known as non-intrusive polynomial chaos is introduced. The effectiveness of the design methodology is demonstrated with the optimisation of a submarine propulsion system.
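The "non-intrusive" character of polynomial chaos can be sketched as follows: the deterministic model is treated as a black box, evaluated at quadrature nodes of the input distribution, and statistics come from weighted sums, with no change to the model's equations. Below, a three-point Gauss-Hermite rule for a single standard normal input and a made-up model response stand in for the propulsion-system code:

```python
import math

# Probabilists' 3-point Gauss-Hermite rule (standard normal weight):
nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def model(x):
    """Hypothetical black-box response; here (x + 1)^2."""
    return x * x + 2.0 * x + 1.0

# Non-intrusive statistics: weighted sums of black-box evaluations.
mean = sum(w * model(x) for x, w in zip(nodes, weights))
second = sum(w * model(x) ** 2 for x, w in zip(nodes, weights))
variance = second - mean ** 2
```

For this quadratic response the three-point rule is exact: with X ~ N(0, 1) and Y = (X + 1)^2 the true mean is 2 and the true variance is 6, at the cost of only three model runs.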


A multi-objective design optimisation study has been carried out with the objectives of improving the overall efficiency of the device and reducing the fuel consumption for the proposed micro-scale combustor design configuration. In a previous study we identified the topology of the combustion chamber that produced improved behaviour of the device in terms of the above design criteria. We now extend our design approach and propose a new configuration with the addition of a micro-cooling channel that improves the thermal behaviour of the design, as previously suggested in the literature. Our initial numerical results revealed an improvement of 2.6% in the combustion efficiency when we applied the micro-cooling channel to an optimum design configuration identified in our earlier multi-objective optimisation study, under the same operating conditions. The computational modelling of the combustion process is implemented in the commercial computational fluid dynamics package ANSYS-CFX using Finite Rate Chemistry and a single-step hydrogen-air reaction. With this model we try to balance good accuracy of the combustion solution with practicality within the context of an optimisation process. The whole design system also comprises the ANSYS-ICEM CFD package for automatic geometry and mesh generation and the Multi-Objective Tabu Search algorithm for design space exploration. We model the design problem with 5 geometrical parameters and 3 operational parameters subject to 5 design constraints that secure the practicality and feasibility of the new optimum design configurations. The final results demonstrate the reliability and efficiency of the developed computational design system, and most importantly we assess the practicality and manufacturability of the revealed optimum design configurations of micro-combustor devices. Copyright © 2013 by ASME.
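Multi-objective searches such as Tabu Search rank candidate designs by Pareto dominance rather than a single score. A minimal sketch of that dominance filter, with both objectives to be minimised (think fuel consumption and negative efficiency) and hypothetical design points:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the points no other point dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (objective1, objective2) values for five candidate designs:
designs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.5)]
front = pareto_front(designs)
```

Here (3.0, 4.0) is dominated by (2.0, 3.0) and drops out, while the other four designs are mutually non-dominated trade-offs; an optimiser like Tabu Search maintains and refines exactly such a set.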


The behaviour of cast-iron tunnel segments used in London Underground tunnels was investigated using the 3-D finite element (FE) method. A numerical model of the structural details of cast-iron segmental joints, such as bolts, panels and flanges, was developed and its performance validated against a set of full-scale tests. Using the verified model, the influence of structural features such as the caulking groove and bolt pretension was examined for both rotational and shear loading conditions. Since such detailed modelling of bolts increases the computational time when a full-scale segmental tunnel is analysed, it is proposed to replace the bolt model with a set of spring models. Parameters for the bolt-spring models, which consider the geometry and material properties of the bolt, are proposed. The performance of the combined bolt-spring and solid segmental models is evaluated against a more conventional shell-spring model. © 2014 Elsevier Ltd.
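The bolt-to-spring substitution can be sketched at its simplest: the axial stiffness of a bolt modelled as a uniform elastic bar, k = EA/L, supplies the spring constant. The dimensions and modulus below are illustrative values for a steel bolt, not the paper's calibrated parameters:

```python
import math

def axial_bolt_stiffness(E, d, L):
    """Axial spring constant k = E*A/L for a uniform bolt shank.

    E: Young's modulus (Pa), d: shank diameter (m), L: grip length (m).
    Returns stiffness in N/m.
    """
    A = math.pi * d ** 2 / 4.0      # shank cross-sectional area
    return E * A / L

# Illustrative steel bolt: 24 mm diameter, 150 mm grip length:
k = axial_bolt_stiffness(E=210e9, d=0.024, L=0.150)
```

A full bolt-spring model would add rotational and shear spring constants derived in the same spirit from the bolt geometry, which is what allows the solid bolt mesh to be removed from the tunnel-scale analysis.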


We consider the problem of detecting a large number of different classes of objects in cluttered scenes. Traditional approaches require applying a battery of different classifiers to the image, at multiple locations and scales. This can be slow and can require a lot of training data, since each classifier requires the computation of many different image features. In particular, for independently trained detectors, the (run-time) computational complexity and the (training-time) sample complexity scale linearly with the number of classes to be detected. It seems unlikely that such an approach will scale up to allow recognition of hundreds or thousands of objects. We present a multi-class boosting procedure (joint boosting) that reduces the computational and sample complexity by finding common features that can be shared across the classes (and/or views). The detectors for each class are trained jointly, rather than independently. For a given performance level, the total number of features required, and therefore the computational cost, is observed to scale approximately logarithmically with the number of classes. Rather than specific object parts, the features selected jointly tend to be edges and generic features typical of many natural structures. These generic features generalize better and considerably reduce the computational cost of multi-class object detection.
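The feature-sharing idea can be sketched as follows: at each boosting round, pick the single feature whose best threshold stump gives the lowest *summed* error across all classes, so one feature serves several detectors at once. The tiny synthetic data below stand in for real image features, and the greedy criterion is a simplification of the paper's weighted joint-boosting round:

```python
def stump_error(values, labels):
    """Best misclassification rate of a threshold stump on one feature."""
    best = float("inf")
    for t in set(values):
        for sign in (1, -1):
            preds = [sign if v >= t else -sign for v in values]
            err = sum(p != y for p, y in zip(preds, labels)) / len(labels)
            best = min(best, err)
    return best

# Rows are samples, columns are (hypothetical) features.
X = [[0.1, 0.9], [0.2, 0.2], [0.9, 0.8], [0.8, 0.3]]
labels_per_class = {
    "car":  [-1, -1, 1, 1],    # high feature 0 -> car
    "face": [1, 1, -1, -1],    # low feature 0 -> face
}

def best_shared_feature(X, labels_per_class):
    """Greedy pick: feature with the lowest summed stump error over classes."""
    n_features = len(X[0])
    totals = []
    for f in range(n_features):
        col = [row[f] for row in X]
        totals.append(sum(stump_error(col, y)
                          for y in labels_per_class.values()))
    return min(range(n_features), key=lambda f: totals[f])

shared = best_shared_feature(X, labels_per_class)
```

Feature 0 separates both classes perfectly while feature 1 is noisy, so the joint criterion selects feature 0 once and both detectors reuse it; this reuse is the source of the sub-linear growth in total features.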


Infant formula is often produced as an agglomerated powder using a spray drying process. Pneumatic conveying is commonly used for transporting this product within a manufacturing plant. The transient mechanical loads imposed by this process cause some of the agglomerates to disintegrate, which has implications for key quality characteristics of the formula including bulk density and wettability. This thesis used both experimental and modelling approaches to investigate this breakage during conveying. One set of conveying trials had the objective of establishing relationships between the geometry and operating conditions of the conveying system and the resulting changes in bulk properties of the infant formula upon conveying. A modular stainless steel pneumatic conveying rig was constructed for these trials. The mode of conveying and air velocity had a statistically significant effect on bulk density at the 95% level, while the mode of conveying was the only factor that significantly influenced D[4,3] or wettability. A separate set of conveying experiments investigated the effect of infant formula composition, rather than the pneumatic conveying parameters, and also assessed the relationships between the mechanical responses of individual agglomerates of four infant formulae and their compositions. The bulk densities before conveying, and the forces and strains at failure of individual agglomerates, were related to the protein content. The force at failure and stiffness of individual agglomerates were strongly correlated, and generally increased with increasing protein-to-fat ratio, while the strain at failure decreased. Two models of breakage were developed at different scales; the first was a detailed discrete element model of a single agglomerate. This was calibrated using a novel approach based on Taguchi methods, which was shown to have considerable advantages over the basic parameter studies that are widely used.
The data obtained using this model compared well to experimental results for quasi-static uniaxial compression of individual agglomerates. The model also gave adequate results for dynamic loading simulations. A probabilistic model of pneumatic conveying was also developed; this was suitable for predicting breakage in large populations of agglomerates and was highly versatile: parts of the model could easily be substituted by the researcher according to their specific requirements.
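The flavour of such a probabilistic breakage model can be sketched with a Monte Carlo draw: agglomerate strengths and impact severities come from assumed distributions, and an agglomerate counts as broken when the impact exceeds its strength. Both distributions and their parameters below are illustrative assumptions, not the thesis's fitted values:

```python
import random

def fraction_broken(n=100_000, seed=42):
    """Monte Carlo estimate of the broken fraction of a population."""
    rng = random.Random(seed)
    broken = 0
    for _ in range(n):
        strength = rng.gauss(mu=1.0, sigma=0.2)   # agglomerate strength
        impact = rng.gauss(mu=0.8, sigma=0.3)     # impact severity in the pipe
        if impact > strength:
            broken += 1
    return broken / n

frac = fraction_broken()
```

The versatility noted above corresponds to swapping out either distribution, or the breakage criterion itself, without touching the rest of the model; with these particular normals roughly 29% of the population breaks.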


Buried heat sources can be investigated by examining thermal infrared images and comparing these with the results of theoretical models which predict the thermal anomaly a given heat source may generate. Key factors influencing surface temperature include the geometry and temperature of the heat source, the surface meteorological environment, and the thermal conductivity and anisotropy of the rock. In general, a geothermal heat flux of greater than 2% of solar insolation is required to produce a detectable thermal anomaly in a thermal infrared image. A heat source of, for example, 2-300 K above the average surface temperature must be at a depth shallower than 50 m for the anomaly to be detectable in a thermal infrared image, for typical terrestrial conditions. Atmospheric factors are of critical importance. While the mean atmospheric temperature has little significance, convection is a dominant factor and can act to swamp the thermal signature entirely. Given a steady-state heat source that produces a detectable thermal anomaly, it is possible to loosely constrain the physical properties of the heat source and surrounding rock, using the surface thermal anomaly as a basis. The success of this technique is highly dependent on the degree to which the physical properties of the host rock are known. Important parameters include the surface thermal properties and thermal conductivity of the rock. Modelling of transient thermal situations was carried out to assess the effect of time-dependent thermal fluxes. One-dimensional finite element models can be readily and accurately applied to the investigation of diurnal heat flow, as with thermal inertia models. Diurnal thermal models of environments on Earth, the Moon and Mars were constructed using finite elements and found to be consistent with published measurements. The heat flow from an injection of hot lava into a near-surface lava tube was considered.
While this approach was useful for study, and for long-term monitoring in inhospitable areas, it was found to have little hazard-warning utility, as the time taken for the thermal energy to propagate to the surface in dry rock (several months) is very long. The resolution of the thermal infrared imaging system is an important factor. Presently available satellite-based systems such as Landsat (resolution of 120 m) are inadequate for detailed study of geothermal anomalies. Airborne systems, such as TIMS (variable resolution of 3-6 m), are much more useful for discriminating small buried heat sources. Planned improvements in the resolution of satellite-based systems will broaden the potential for application of the techniques developed in this thesis. It is important to note, however, that adequate spatial resolution is a necessary but not sufficient condition for successful application of these techniques.
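A one-dimensional diurnal model of the kind described can be sketched with an explicit finite-difference scheme (the thesis used finite elements): sinusoidal surface forcing, a fixed deep boundary, and a forward-time centred-space update. The material numbers below are illustrative, not the thesis's calibrated properties:

```python
import math

def diurnal_profile(depth=1.0, nz=50, alpha=1e-6, days=5):
    """Explicit FTCS solve of 1D heat conduction under diurnal forcing.

    depth: domain depth (m); alpha: thermal diffusivity (m^2/s).
    Returns the temperature-anomaly profile after `days` days.
    """
    dz = depth / (nz - 1)
    r = 0.4                          # r = alpha*dt/dz^2 < 0.5 for stability
    dt = r * dz * dz / alpha
    period = 86400.0                 # one day in seconds
    T = [0.0] * nz                   # anomaly; deep boundary held at 0
    t = 0.0
    while t < days * period:
        T_new = T[:]
        T_new[0] = 10.0 * math.sin(2.0 * math.pi * t / period)  # surface wave
        for i in range(1, nz - 1):
            T_new[i] = T[i] + r * (T[i + 1] - 2.0 * T[i] + T[i - 1])
        T = T_new
        t += dt
    return T

T = diurnal_profile()
```

With this diffusivity the diurnal skin depth is about 0.17 m, so the oscillation is strongly damped well before the bottom of the 1 m column, consistent with the conclusion above that deep sources announce themselves at the surface only slowly.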


A 3D model of the melt pool created by a moving arc-type heat source has been developed. The model solves the equations of turbulent fluid flow, heat transfer and the electromagnetic field to demonstrate the flow behaviour and phase change in the pool. The coupled effects of buoyancy, capillary (Marangoni) and electromagnetic (Lorentz) forces are included within an unstructured finite volume mesh environment. The movement of the welding arc along the workpiece is accomplished via a moving co-ordinate system. Additionally, a method enabling movement of the weld pool surface by fluid convection is presented, whereby the mesh in the liquid region is allowed to move through a free surface. The surface grid lines move to restore equilibrium at the end of each computational time step, and the interior grid points then adjust following the solution of a Laplace equation.
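The interior-node adjustment in the last sentence amounts to relaxing each grid point toward the average of its neighbours, i.e. iterating a discrete Laplace equation with the moved surface nodes as boundary values. A one-dimensional sketch of that smoothing (the actual mesh is 3D and unstructured):

```python
def laplace_smooth(nodes, iters=2000):
    """Relax interior node coordinates; the two endpoints are held fixed."""
    x = nodes[:]
    for _ in range(iters):
        for i in range(1, len(x) - 1):
            x[i] = 0.5 * (x[i - 1] + x[i + 1])   # Gauss-Seidel sweep
    return x

# A distorted line of nodes between fixed ends at 0.0 and 1.0:
smoothed = laplace_smooth([0.0, 0.05, 0.5, 0.95, 1.0])
```

At convergence the interior nodes are evenly spaced between the fixed ends, which is the mesh-quality property the Laplace solve buys after the free surface has moved.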


The aim of this paper is to develop a mathematical model with the ability to predict particle degradation during dilute-phase pneumatic conveying. A numerical procedure, based on a matrix representation of degradation processes, is presented to determine the particle impact degradation propensity from a small number of single-impact particle tests carried out in a newly designed laboratory-scale degradation tester. A complete model of particle degradation during dilute-phase pneumatic conveying is then described, in which the calculation of degradation propensity is coupled with a flow model of the solids and gas phases in the pipeline. Numerical results are presented for the degradation of granulated sugar in an industrial-scale pneumatic conveyor.
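The matrix representation of degradation can be sketched directly: a breakage matrix maps the particle size distribution before an impact to the distribution after it, with each column summing to one so that mass is conserved. The entries and size classes below are made up for illustration, not fitted from the degradation tester:

```python
# Size classes: coarse, medium, fine. B[i][j] is the mass fraction of
# class-j particles that ends up in class i after one impact; each
# column sums to 1, so total mass is conserved.
B = [
    [0.7, 0.0, 0.0],   # coarse that survives as coarse
    [0.2, 0.8, 0.0],   # breakage into the medium class
    [0.1, 0.2, 1.0],   # fines only accumulate
]

def apply_impact(B, psd):
    """One impact: matrix-vector product B @ psd."""
    n = len(psd)
    return [sum(B[i][j] * psd[j] for j in range(n)) for i in range(n)]

psd = [1.0, 0.0, 0.0]          # all mass starts in the coarse class
for _ in range(3):             # e.g. three bend impacts along the pipeline
    psd = apply_impact(B, psd)
```

Coupling this with a pipeline flow model amounts to applying the matrix once per predicted impact event, which is how the degradation propensity measured in the tester propagates to an industrial-scale prediction.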


In this article, the domain decomposition method is applied to electronic packaging simulation. The objective is to address the entire simulation process chain, using a component approach to mechanise the stages where user interaction is heaviest and so streamline the model simulation process.
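Since the abstract gives no algorithmic detail, here is a generic illustration of the domain decomposition idea: the alternating Schwarz iteration for u'' = 0 on [0, 1] with u(0) = 0, u(1) = 1, split into two overlapping subdomains solved in turn. Each subdomain solve is exact (the solution of u'' = 0 is linear), so only the interface values iterate; the subdomain layout is an assumption for illustration:

```python
def schwarz_iterate(iters=60):
    """Alternating Schwarz on subdomains [0, 0.6] and [0.4, 1.0]."""
    g06 = 0.0    # guess for u(0.6), right boundary of the left subdomain
    g04 = 0.0    # guess for u(0.4), left boundary of the right subdomain
    for _ in range(iters):
        # Solve on [0, 0.6] with u(0)=0, u(0.6)=g06; read off u(0.4)
        # from the linear solution:
        g04 = g06 * (0.4 / 0.6)
        # Solve on [0.4, 1] with u(0.4)=g04, u(1)=1; read off u(0.6):
        g06 = g04 + (1.0 - g04) * (0.6 - 0.4) / (1.0 - 0.4)
    return g04, g06

g04, g06 = schwarz_iterate()   # converges to the exact solution u(x) = x
```

The interface values converge geometrically to 0.4 and 0.6, matching u(x) = x; the appeal for packaging simulation is that each component (subdomain) can be meshed and solved independently, with only interface data exchanged.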


Dosators and other dosing mechanisms operating on generally similar principles are very widely used in the pharmaceutical industry for capsule filling, and for dosing products that are delivered to the customer in powder form, such as inhalers; this is a trend that is set to increase. However, a significant problem for this technology is being able to predict, prior to manufacture, how accurately and reliably new drug formulations will be dosed from these machines. This paper presents a review of the literature relating to powder dosators, considering mathematical models for predicting dosator performance and the effects of dosator geometry and machine settings on the accuracy of the dose weight. An overview of a model based on classical powder mechanics theory that has been developed at the University of Greenwich is presented. The model uses inputs from a range of powder characterisation tests including wall friction, bulk density, stress ratio and permeability. To validate the model, it is anticipated that it will be trialled for a range of powders alongside a single-shot dosator test rig.
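A first-order dose-weight estimate of the kind such models refine can be sketched as chamber volume times bulk density times a densification factor; the relation and every number below are illustrative assumptions, not the Greenwich model itself:

```python
import math

def dose_weight_mg(diameter_mm, fill_depth_mm, bulk_density_g_ml,
                   compression_ratio=1.1):
    """First-order dose-weight estimate for a cylindrical dosator chamber.

    compression_ratio: assumed densification of the powder plug during
    filling (hypothetical; a full model derives this from wall friction,
    stress ratio and permeability).
    """
    radius_cm = diameter_mm / 20.0                        # mm -> cm, halve
    volume_ml = math.pi * radius_cm ** 2 * (fill_depth_mm / 10.0)
    return volume_ml * bulk_density_g_ml * compression_ratio * 1000.0

# A 3.4 mm dosator with a 10 mm fill depth and a powder of bulk density
# 0.6 g/ml (all illustrative):
w = dose_weight_mg(3.4, 10.0, 0.6)
```

The value of the powder-mechanics model described above is precisely that it replaces the assumed compression ratio with one predicted from the measured characterisation data.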