882 results for Simulation and prediction


Relevance:

90.00%

Publisher:

Abstract:

This thesis focuses on improving the simulation and theoretical understanding of the subtropical low-cloud response to climate change.

First, an energetically consistent forcing framework is designed and implemented for large-eddy simulation (LES) of the low-cloud response to climate change. The three representative present-day subtropical low-cloud regimes of cumulus (Cu), cumulus-over-stratocumulus, and stratocumulus (Sc) are all well simulated with this framework, and the results are comparable to the conventional fixed-SST approach. However, the cumulus response to climate warming subject to energetic constraints differs significantly from that of the conventional fixed-SST approach. Under the energetic constraint, the subtropics warm less than the tropics, since longwave (LW) cooling is more efficient in the drier subtropical free troposphere. The surface latent heat flux (LHF) also increases only weakly under the surface energetic constraint. Both factors contribute to an increased estimated inversion strength (EIS) and a decreased inversion height. The reduced Cu depth contributes to a decrease in liquid water path (LWP) and a weak positive cloud feedback. The conventional fixed-SST approach instead simulates a strong increase in LHF and a deepening of the Cu layer, leading to a weakly negative cloud feedback. This illustrates the importance of energetic constraints to simulating and understanding the sign and magnitude of the low-cloud feedback.

Second, an extended eddy-diffusivity mass-flux (EDMF) closure for the unified representation of subgrid-scale (SGS) turbulence and convection in general circulation models (GCMs) is presented. The inclusion of prognostic terms and the elimination of the infinitesimal-updraft-fraction assumption make it more flexible for implementation in models across different scales. The framework can be consistently extended to multiple updrafts and downdrafts, as well as to variances and covariances. It has been verified against LES in different boundary layer regimes in the current climate, and further development and implementation of this closure may help improve the simulation and understanding of low-cloud feedback in GCMs.
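
For concreteness, the estimated inversion strength mentioned above can be computed following the widely used Wood and Bretherton (2006) definition, EIS = LTS - Γm(z700 - LCL). The Python sketch below implements this with illustrative input values and an assumed representative mid-layer temperature for the moist-adiabatic lapse rate; neither the numbers nor the code come from the thesis.

```python
import numpy as np

G, CP = 9.81, 1004.0        # gravity (m s^-2), dry-air heat capacity (J kg^-1 K^-1)
LV = 2.501e6                # latent heat of vaporization (J kg^-1)
RA, RV = 287.0, 461.5       # gas constants, dry air / water vapor (J kg^-1 K^-1)

def sat_mixing_ratio(T, p):
    """Saturation mixing ratio (kg/kg) from a Clausius-Clapeyron fit."""
    es = 611.2 * np.exp(17.67 * (T - 273.15) / (T - 29.65))  # Pa
    return 0.622 * es / (p - es)

def moist_lapse_rate(T, p):
    """Moist-adiabatic lapse rate (K/m) at temperature T and pressure p."""
    qs = sat_mixing_ratio(T, p)
    return (G / CP) * (1.0 + LV * qs / (RA * T)) \
                    / (1.0 + LV**2 * qs / (CP * RV * T**2))

def eis(T_sfc, theta_700, z_700, z_lcl):
    """EIS = LTS - Gamma_m * (z_700 - LCL), after Wood & Bretherton (2006)."""
    lts = theta_700 - T_sfc                   # lower-tropospheric stability
    T_mid = 0.5 * (T_sfc + theta_700)         # assumed mid-layer temperature
    gamma = moist_lapse_rate(T_mid, 85000.0)  # evaluated at 850 hPa
    return lts - gamma * (z_700 - z_lcl)

# Illustrative subtropical values: larger EIS -> stronger inversion, more Sc.
print(f"EIS = {eis(T_sfc=290.0, theta_700=302.0, z_700=3100.0, z_lcl=600.0):.1f} K")
```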

Relevance:

90.00%

Publisher:

Abstract:

A new 2-D quality-guided phase-unwrapping algorithm, based on the placement of branch cuts, is presented. Its framework consists of branch-cut placement guided by an original quality map and reliability ordering performed on a final quality map. To improve the noise immunity of the new algorithm, a new quality map, used as the original quality map to guide the placement of the branch cuts, is proposed. After a complete description of the algorithm and the quality map, several wrapped images are used to examine the algorithm's effectiveness. Computer simulation and experimental results make it clear that the proposed algorithm works effectively even when a wrapped phase map contains error sources such as phase discontinuities, noise, and undersampling. © 2005 Society of Photo-Optical Instrumentation Engineers.
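
The paper's hybrid algorithm is not reproduced here, but the baseline it builds on, quality-guided ordering, is easy to sketch. Below is a minimal Python/NumPy implementation with a simple quality map (negative local variance of wrapped phase gradients, an assumption standing in for the paper's map) and flood-fill unwrapping in descending quality order; the branch-cut placement stage is omitted.

```python
import heapq
import numpy as np

def quality_map(wrapped):
    """Quality = negative accumulated squared wrapped phase gradients."""
    dy = np.angle(np.exp(1j * np.diff(wrapped, axis=0)))
    dx = np.angle(np.exp(1j * np.diff(wrapped, axis=1)))
    q = np.zeros_like(wrapped)
    q[:-1, :] -= dy**2; q[1:, :] -= dy**2
    q[:, :-1] -= dx**2; q[:, 1:] -= dx**2
    return q

def quality_guided_unwrap(wrapped, quality):
    """Flood-fill unwrapping, visiting highest-quality pixels first."""
    h, w = wrapped.shape
    out = np.zeros_like(wrapped)
    done = np.zeros((h, w), bool)
    heap = []
    def push(y, x, ref):
        heapq.heappush(heap, (-quality[y, x], y, x, ref))
    sy, sx = np.unravel_index(np.argmax(quality), quality.shape)
    out[sy, sx] = wrapped[sy, sx]
    done[sy, sx] = True
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= sy + dy < h and 0 <= sx + dx < w:
            push(sy + dy, sx + dx, out[sy, sx])
    while heap:
        _, y, x, ref = heapq.heappop(heap)
        if done[y, x]:
            continue
        # Unwrap relative to the already-unwrapped neighbour's value.
        out[y, x] = ref + np.angle(np.exp(1j * (wrapped[y, x] - ref)))
        done[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= y + dy < h and 0 <= x + dx < w and not done[y + dy, x + dx]:
                push(y + dy, x + dx, out[y, x])
    return out

# Demo: recover a synthetic quadratic phase surface (up to a constant).
yy, xx = np.mgrid[0:64, 0:64]
true = 0.02 * ((yy - 32.0)**2 + (xx - 32.0)**2)
wrapped = np.angle(np.exp(1j * true))
rec = quality_guided_unwrap(wrapped, quality_map(wrapped))
print(np.allclose(rec - rec[32, 32], true - true[32, 32]))
```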

Relevance:

90.00%

Publisher:

Abstract:

This paper describes the development of an automated design optimization system that uses a high-fidelity Reynolds-averaged CFD analysis procedure to minimize the fan forcing and the fan BOGV (bypass outlet guide vane) losses simultaneously, taking into account the downstream pylon and RDF (radial drive fairing) distortions. The design space consists of the OGV stagger angle, trailing-edge recambering, and axial and circumferential positions, leading to a variable-pitch optimum design. An advanced optimization system called SOFT (Smart Optimisation for Turbomachinery) was used to integrate a number of pre-processor, simulation and in-house grid-generation codes and post-processor programs. A number of multi-objective, multi-point optimizations were carried out by SOFT on a cluster of workstations and are reported herein.
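
SOFT and the CFD chain are proprietary, but the weighted-sum, multi-point formulation that such a system drives can be sketched in a few lines of Python. Here fan_forcing and ogv_loss are hypothetical stand-in objectives (not the paper's CFD evaluations), and the design vector mirrors the stated design space.

```python
import numpy as np
from scipy.optimize import minimize

def fan_forcing(x, op):
    """Hypothetical stand-in for the CFD-derived fan forcing metric."""
    stagger, recamber, ax_pos, circ_pos = x
    return (stagger - 0.1 * op) ** 2 + 0.5 * recamber ** 2 \
           + 0.1 * (ax_pos + circ_pos) ** 2

def ogv_loss(x, op):
    """Hypothetical stand-in for the BOGV loss metric."""
    stagger, recamber, ax_pos, circ_pos = x
    return (recamber - 0.05 * op) ** 2 + 0.2 * stagger ** 2 + 0.1 * ax_pos ** 2

def objective(x, weights=(0.5, 0.5), op_points=(0.9, 1.0, 1.1)):
    # Multi-point: accumulate the weighted objectives over operating points.
    return sum(weights[0] * fan_forcing(x, op) + weights[1] * ogv_loss(x, op)
               for op in op_points)

x0 = np.zeros(4)   # stagger, recamber, axial shift, circumferential shift
res = minimize(objective, x0, method="Nelder-Mead")
print(res.x, res.fun)
```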

Relevance:

90.00%

Publisher:

Abstract:

Computer Aided Control Engineering involves three parallel streams: simulation and modelling, control system design (off-line), and controller implementation. In industry the bottleneck problem has always been modelling, and this remains the case - that is where control (and other) engineers put most of their technical effort. Although great advances in software tools have been made, the cost of modelling remains very high - too high for some sectors. Object-oriented modelling, enabling truly re-usable models, seems to be the key enabling technology here. Software tools to support control systems design have two aspects: aiding and managing the workflow in particular projects (whether of a single engineer or of a team), and providing numerical algorithms to support control-theoretic and systems-theoretic analysis and design. The numerical problems associated with linear systems have been largely overcome, so that most problems can be tackled routinely without difficulty - though problems remain with (some) systems of extremely large dimensions. Recent emphasis on control of hybrid and/or constrained systems is leading to the emerging importance of geometric algorithms (ellipsoidal approximation, polytope projection, etc.). Constantly increasing computational power is leading to renewed interest in design by optimisation, an example of which is model predictive control (MPC). The explosion of embedded control systems has highlighted the importance of autocode generation, directly from modelling/simulation products to target processors. This is the 'new kid on the block', and again much of the focus of commercial tools is on this part of the control engineer's job. Here the control engineer can no longer ignore computer science (at least, for the time being). © 2006 IEEE.
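
As a minimal illustration of "design by optimisation", here is a receding-horizon (MPC) sketch in Python using cvxpy; the double-integrator plant, cost weights, and actuator limit are assumptions for illustration, not taken from the paper.

```python
import numpy as np
import cvxpy as cp

# Double-integrator plant discretized at dt = 0.1 s (an illustrative model).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
N = 20            # prediction horizon
u_max = 1.0       # actuator limit: the constraint that motivates MPC over LQR

def mpc_step(x0):
    """Solve the finite-horizon problem and return the first control move."""
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost, cons = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.sum_squares(x[:, k + 1]) + 0.1 * cp.sum_squares(u[:, k])
        cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                 cp.abs(u[:, k]) <= u_max]
    cp.Problem(cp.Minimize(cost), cons).solve()
    return float(u.value[0, 0])   # receding horizon: apply only this move

x = np.array([5.0, 0.0])
for _ in range(30):
    x = A @ x + B.ravel() * mpc_step(x)
print(x)   # the state is driven toward the origin despite the input limit
```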

Relevance:

90.00%

Publisher:

Abstract:

This paper deals with the experimental evaluation of a flow analysis system based on the integration of an under-resolved Navier-Stokes simulation with experimental measurements through a feedback mechanism (referred to as Measurement-Integrated (MI) simulation), applied to the case of a planar turbulent co-flowing jet. The experiments are performed with an inner-to-outer-jet velocity ratio of around 2 and a Reynolds number, based on the inner-jet height, of about 10000. The measurement system is a high-speed PIV, which provides time-resolved data of the flow field on a field of view extending to 20 jet heights downstream of the jet outlet. The experimental data can thus be used both to provide the feedback data for the simulations and to validate the MI-simulations over a wide region. The effect of reduced data-rate and spatial extent of the feedback (i.e. measurements are not available at every simulation time-step or discretization point) was investigated. At first, simulations were run with full information in order to obtain an upper limit on the MI-simulation performance. The results show the potential of this methodology to reproduce first- and second-order statistics of the turbulent flow with good accuracy. Then, to deal with the reduced data, different feedback strategies were tested. It was found that for small data-rate reductions the results are essentially equivalent to the case of full-information feedback, but as the feedback data-rate is reduced further the error increases and tends to be localized in regions of high turbulent activity. Moreover, the spatial distribution of the error looks qualitatively different for different feedback strategies. Feedback gain distributions calculated by optimal control theory are presented and proposed as a means of making it possible to perform MI-simulations based on localized measurements only. So far, we have not been able to lower the error between measurements and simulations by using these gain distributions.
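
The feedback mechanism is straightforward to illustrate on a toy surrogate. The Python sketch below nudges a deliberately mis-initialized 1-D advection-diffusion "simulation" toward noisy "measurements" that are available only at a subset of points and time steps, mimicking the reduced data-rate and spatial-extent experiments; the gain, grid, and noise level are illustrative assumptions.

```python
import numpy as np

# Toy 1-D "simulation": linear advection-diffusion on a periodic domain.
n, dx, dt = 200, 1.0, 0.2
c, nu = 1.0, 0.5

def step(u):
    dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    d2udx2 = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return u + dt * (-c * dudx + nu * d2udx2)

rng = np.random.default_rng(0)
x = np.arange(n) * dx
truth = np.sin(2 * np.pi * x / (n * dx))
sim = np.zeros(n)                  # deliberately wrong initial condition
K = 0.2                            # feedback gain (illustrative)
mask = np.zeros(n, bool)
mask[::10] = True                  # measurements only at every 10th point...
every = 5                          # ...and only every 5th time step

for t in range(500):
    truth = step(truth)
    sim = step(sim)
    if t % every == 0:
        meas = truth + 0.01 * rng.standard_normal(n)   # noisy "PIV" data
        sim[mask] += K * (meas[mask] - sim[mask])      # feedback forcing

print(np.sqrt(np.mean((sim - truth) ** 2)))   # error driven down by feedback
```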

Relevance:

90.00%

Publisher:

Abstract:

Climate change is expected to have a significant impact on the future thermal performance of buildings. Building simulation and sensitivity analysis can be employed to predict these impacts, guiding interventions to adapt buildings to future conditions. This article explores the use of simulation to study the impact of climate change on a theoretical office building in the UK, employing a probabilistic approach. The work studies (1) appropriate performance metrics and underlying modelling assumptions, (2) the sensitivity of computational results, to identify key design parameters, and (3) the impact of zonal resolution. The conclusions highlight the importance of assumptions about electricity conversion factors, proper management of internal heat gains, and the need for an appropriately detailed zonal resolution. © 2010 Elsevier B.V. All rights reserved.
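
A probabilistic sensitivity study of this kind can be sketched compactly. In the Python sketch below, a hypothetical linear-plus-noise energy model stands in for the building simulation, and standardized regression coefficients rank the sampled design parameters; the parameter ranges and the model are illustrative assumptions, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Illustrative design parameters (the uniform ranges are assumptions).
u_value = rng.uniform(0.15, 0.5, n)    # wall U-value, W m^-2 K^-1
glazing = rng.uniform(0.2, 0.6, n)     # glazing fraction
gains   = rng.uniform(10.0, 30.0, n)   # internal gains, W m^-2
infil   = rng.uniform(0.2, 1.0, n)     # infiltration, ach

# Stand-in "simulation": a hypothetical linear-plus-noise energy model,
# NOT a real building-simulation engine.
energy = 40*u_value + 55*glazing + 1.2*gains + 12*infil + rng.normal(0, 2, n)

# Standardized regression coefficients as a simple sensitivity measure.
X = np.column_stack([u_value, glazing, gains, infil])
Xs = (X - X.mean(0)) / X.std(0)
ys = (energy - energy.mean()) / energy.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
for name, s in zip(["U-value", "glazing", "gains", "infiltration"], src):
    print(f"{name:12s} SRC = {s:+.2f}")
```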

Relevance:

90.00%

Publisher:

Abstract:

Vertically aligned carbon nanotubes (VA-CNTs) were rapidly grown from ethanol, and their growth chemistry was studied using a "cold-gas" chemical vapor deposition (CVD) method. Ethanol vapor was preheated in a furnace, cooled down, and then flowed over cobalt catalysts on ribbon-shaped substrates at 800 °C, while keeping the gas unheated. CNTs were obtained from ethanol on a sub-micrometer scale without preheating, but on a millimeter scale with preheating at 1000 °C. Acetylene was predicted to be the direct precursor by gas chromatography and gas-phase kinetic simulation, and indeed led to millimeter-tall VA-CNTs without preheating when fed with hydrogen and water. There was, however, a difference in CNT structure, i.e. mainly few-wall tubes from pyrolyzed ethanol and mainly single-wall tubes from unheated acetylene; the by-products of ethanol pyrolysis possibly caused this difference. The "cold-gas" CVD, in which the gas-phase and catalytic reactions are controlled separately, allowed us to further understand CNT growth. © 2012 Elsevier Ltd. All rights reserved.
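
The role of the gas-phase kinetic simulation can be illustrated with a deliberately crude one-step model: ethanol decomposing to acetylene at the preheat temperature. The Arrhenius parameters and the single-reaction assumption in the Python sketch below are illustrative placeholders, not the detailed mechanism used in the paper.

```python
import numpy as np

# Grossly simplified one-step pyrolysis sketch: ethanol -> acetylene (+ rest).
A_FREQ = 1.0e11   # pre-exponential factor, s^-1 (assumed)
EA = 280e3        # activation energy, J mol^-1 (assumed)
R = 8.314

def k(T):
    """First-order Arrhenius rate constant at temperature T (K)."""
    return A_FREQ * np.exp(-EA / (R * T))

def residence(T, t_end, dt=1e-4):
    """Integrate first-order ethanol decay at preheat temperature T."""
    etoh, c2h2 = 1.0, 0.0
    for _ in range(int(t_end / dt)):
        r = k(T) * etoh * dt
        etoh -= r
        c2h2 += r
    return etoh, c2h2

# Preheating at 1000 C converts far more ethanol than 800 C, qualitatively
# matching the observed jump from sub-micrometer to millimeter-scale growth.
for T_c in (800.0, 1000.0):
    etoh, c2h2 = residence(T_c + 273.15, t_end=0.5)
    print(f"T = {T_c:.0f} C: ethanol left {etoh:.3f}, acetylene {c2h2:.3f}")
```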

Relevance:

90.00%

Publisher:

Abstract:

This paper reports on research that uses building performance simulation and uncertainty analysis to assess the risks that projected climate change poses to the thermal performance of buildings and to their critical functions. The work takes meteorological climate change predictions as a starting point, but also takes into account developments and uncertainties in technology, occupancy, intervention and renovation, among others. Four cases are studied in depth to explore the prospects of quantifying these climate change risks. The research concludes that quantification of the risks posed by climate change is possible, but only with many restrictive assumptions on the input side.
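
One way such a risk quantification can look in practice is a Monte Carlo exceedance estimate over uncertain inputs. In the Python sketch below, a hypothetical response surface stands in for the full building-performance simulation, and all distributions and thresholds are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10000

# Illustrative uncertain inputs (all distributions are assumptions).
warming = rng.normal(2.5, 0.8, n)                     # summer warming, K
gains   = rng.uniform(15.0, 35.0, n)                  # internal gains, W m^-2
shading = rng.choice([0.0, 1.0], n, p=[0.6, 0.4])     # retrofit uptake

# Hypothetical response surface for annual overheating hours, standing in
# for a full building-performance simulation run per sample.
overheating = 80 + 60 * warming + 3 * gains - 120 * shading \
              + rng.normal(0, 20, n)
overheating = np.clip(overheating, 0, None)

threshold = 200.0   # hours/year deemed acceptable (assumed criterion)
risk = np.mean(overheating > threshold)
print(f"P(overheating > {threshold:.0f} h/yr) = {risk:.2f}")
```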

Relevance:

90.00%

Publisher:

Abstract:

Localization of chess-board vertices is a common task in computer vision, underpinning many applications, but relatively little work focusses on designing a specific feature detector that is fast, accurate and robust. In this paper the 'Chess-board Extraction by Subtraction and Summation' (ChESS) feature detector, designed to respond exclusively to chess-board vertices, is presented. The method proposed is robust against noise, poor lighting and poor contrast, requires no prior knowledge of the extent of the chess-board pattern, is computationally very efficient, and provides a strength measure of detected features. Such a detector has significant application both in the key field of camera calibration and in structured light 3D reconstruction. Evidence is presented showing its robustness, accuracy, and efficiency in comparison to other commonly used detectors, both under simulation and in experimental 3D reconstruction of flat plate and cylindrical objects.
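
The published ChESS operator is not reproduced here, but the ring-sampling symmetry idea behind a chess-board vertex detector can be sketched: at a vertex, samples on a ring match their 180-degree opposites and differ from their 90-degree neighbours. The Python sketch below is a simplified detector in that spirit, not the authors' exact response function.

```python
import numpy as np

def chess_like_response(img, radius=5):
    """Simplified ring-sampling response in the spirit of ChESS (a sketch,
    not the published operator): reward quarter-turn differences on the
    ring, penalise half-turn differences."""
    img = img.astype(float)
    h, w = img.shape
    angles = np.arange(16) * (2 * np.pi / 16)
    dys = np.rint(radius * np.sin(angles)).astype(int)
    dxs = np.rint(radius * np.cos(angles)).astype(int)
    # Shifted views of the interior region, one per ring sample.
    ring = [img[radius + dy: h - radius + dy, radius + dx: w - radius + dx]
            for dy, dx in zip(dys, dxs)]
    resp = np.zeros((h, w))
    inner = resp[radius: h - radius, radius: w - radius]
    for i in range(16):
        inner += np.abs(ring[i] - ring[(i + 4) % 16])   # quarter turn: large
        inner -= np.abs(ring[i] - ring[(i + 8) % 16])   # half turn: small
    return resp

# Tiny usage check on a synthetic chess-board corner.
yy, xx = np.mgrid[0:41, 0:41]
board = ((yy >= 20) ^ (xx >= 20)).astype(float)
r = chess_like_response(board)
print(np.unravel_index(np.argmax(r), r.shape))   # peak near (20, 20)
```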

Relevance:

90.00%

Publisher:

Abstract:

Fluids with controllable flow properties have gained considerable interest in the past few years. Some of these fluids, such as magnetorheological fluids, are now widely applied to active dampers and valves. Although these fluids show promising properties for microsystems, their applicability at the microscale is limited, since the particles suspended in these fluids tend to obstruct microchannels. This paper investigates the applicability of electrorheological liquid crystals (LCs) in microsystems. Since LCs do not contain suspended particles, they have intrinsic advantages over classical rheological fluids in micro-applications. This paper presents a novel physical model that describes the static and the dynamic behaviour of electrorheological LCs. The developed model is validated by comparing simulations and measurements performed on a rectangular microchannel. This assessment shows that the model presented in this paper is able to simulate both static and dynamic properties accurately. Therefore, this model is useful for the understanding, simulation and optimization of devices using LCs as an electrorheological fluid. In addition, measurements performed in this work reveal remarkable properties of LCs, such as high bandwidths and large changes in flow resistance. © 2006 IOP Publishing Ltd.
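
The practical consequence, field-controlled flow resistance in a microchannel, can be sketched with a lumped model. Below, a plane-Poiseuille estimate for a wide, shallow rectangular channel is combined with an assumed saturating viscosity-field law; the constants and the viscosity law are illustrative stand-ins, whereas the paper develops a proper physical LC model.

```python
# Lumped sketch of field-controlled flow resistance in a microchannel.
w, h, L = 1e-3, 50e-6, 10e-3   # channel width, height, length (m), assumed
mu0 = 0.03                     # zero-field viscosity, Pa s (assumed)

def apparent_viscosity(E):
    """Assumed saturating rise of viscosity with field strength E (V/um)."""
    return mu0 * (1.0 + 3.0 * E**2 / (1.0 + E**2))

def flow_rate(dp, E):
    """Plane-Poiseuille estimate Q = w h^3 dp / (12 mu L), valid for w >> h."""
    return w * h**3 * dp / (12.0 * apparent_viscosity(E) * L)

for E in (0.0, 1.0, 5.0):
    print(f"E = {E:.0f} V/um: Q = {flow_rate(1e4, E) * 1e9:.0f} nL/s")
```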

Relevance:

90.00%

Publisher:

Abstract:

This paper addresses the design of mobile sensor networks for optimal data collection. The development is strongly motivated by the application to adaptive ocean sampling for an autonomous ocean observing and prediction system. A performance metric, used to derive optimal paths for the network of mobile sensors, defines the optimal data set as one which minimizes error in a model estimate of the sampled field. Feedback control laws are presented that stably coordinate sensors on structured tracks that have been optimized over a minimal set of parameters. Optimal, closed-loop solutions are computed in a number of low-dimensional cases to illustrate the methodology. Robustness of the performance to the influence of a steady flow field on relatively slow-moving mobile sensors is also explored. © 2006 IEEE.
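
Feedback laws of this flavour can be illustrated with phase coordination on a single circular track: a gradient control law drives the sensors toward uniform (splay) spacing, which spreads samples evenly along the track. The Python sketch below is a generic law in the spirit of this literature; the gains and setup are assumptions, not the paper's specific design.

```python
import numpy as np

N = 4            # number of mobile sensors on one closed track
omega = 0.1      # nominal angular speed, rad/s
k = 0.5          # coordination gain (assumed)
dt, steps = 0.05, 4000

rng = np.random.default_rng(3)
theta = rng.uniform(0, 2 * np.pi, N)

for _ in range(steps):
    # Gradient control of a phase potential: driving the first floor(N/2)
    # phase-order parameters to zero yields uniform (splay) spacing.
    u = np.zeros(N)
    for m in range(1, N // 2 + 1):
        p_m = np.exp(1j * m * theta).mean()
        u += (k / m) * np.imag(np.conj(p_m) * np.exp(1j * m * theta))
    theta = (theta + dt * (omega + u)) % (2 * np.pi)

s = np.sort(theta)
gaps = np.diff(np.concatenate([s, [s[0] + 2 * np.pi]]))
print(gaps)      # each gap approaches 2*pi/N = pi/2
```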

Relevance:

90.00%

Publisher:

Abstract:

The standard design process for the Siemens Industrial Turbomachinery, Lincoln (SITL) Dry Low Emissions (DLE) combustion systems has adopted the Eddy Dissipation Model with Finite Rate Chemistry (EDM/FRC) for reacting computational fluid dynamics (CFD) simulations. The major drawbacks of this model have been the over-prediction of temperature and the lack of species data, limiting the applicability of the model. A novel combustion model, referred to as the Scalar Dissipation Rate Model (SDRM), has recently been developed based on a flamelet-type assumption. Previous attempts to adopt the flamelet philosophy with alternative closure models have failed, predicting unphysical phenomena. The SDRM was developed from a physical understanding of the scalar dissipation rate in flames, which signifies the rate of mixing of hot and cold fluids at scales relevant to sustaining combustion, and was validated using direct numerical simulation data and experimental measurements. This paper reports on the first industrial application of the SDRM to the SITL DLE combustion system; previous applications have considered ideally premixed laboratory-scale flames. The industrial application differs significantly in the complexity of the geometry, the unmixedness and the operating pressures. The model was implemented in ANSYS-CFX using its inbuilt command language. Simulations were run transiently using the Scale Adaptive Simulation turbulence model, which switches between Large Eddy Simulation and Unsteady Reynolds-Averaged Navier-Stokes using a blending function. The model was validated in a research SITL DLE combustion system prior to being applied to the actual industrial geometry at real operating conditions. This system consists of the SGT-100 burner with a glass square-sectioned combustor allowing detailed diagnostics. This paper shows the successful validation of the SDRM against time-averaged temperature and velocity within measurement errors. The successful validation allowed application of the SDRM to the SGT-100 twin-shaft at the relevant full-load conditions. Limited validation data were available due to the complexity of measurement in the real geometry. Comparison of surface temperatures and combustor exit temperature profiles showed an improvement over the EDM/FRC model. Furthermore, no unphysical phenomena were predicted. This paper presents the successful application of the SDRM to the industrial combustion system. The model shows a marked improvement in the prediction of temperature over the EDM/FRC model previously used. This is of significant importance for future applications of combustion CFD to the understanding of hardware mechanical integrity, combustion emissions and flame dynamics. Copyright © 2012 by ASME.
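
The quantity at the heart of the SDRM, the scalar dissipation rate χ = 2D|∇Z|² of a mixture fraction Z, is easy to compute on a resolved field. The Python sketch below does so on a synthetic 2-D mixing layer; the diffusivity and the field are illustrative, and the model itself of course closes the mean dissipation rate rather than computing it from resolved gradients as done here.

```python
import numpy as np

n, dx = 128, 1e-3          # grid points and spacing (m), assumed
D = 2e-5                   # scalar diffusivity, m^2/s (assumed)

x = np.arange(n) * dx
_, Y = np.meshgrid(x, x, indexing="ij")
# Mixture fraction across a tanh mixing layer of assumed thickness 5*dx.
Z = 0.5 * (1.0 + np.tanh((Y - x[n // 2]) / (5 * dx)))

gZ0, gZ1 = np.gradient(Z, dx, dx)          # gradients along the two axes
chi = 2.0 * D * (gZ0**2 + gZ1**2)          # scalar dissipation rate, 1/s
print(f"peak chi = {chi.max():.2e} 1/s")   # largest where mixing is fastest
```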

Relevance:

90.00%

Publisher:

Abstract:

This paper discusses road damage caused by heavy commercial vehicles. Chapter 1 presents some important terminology and a brief historical review of road construction and vehicle-road interaction, from ancient times to the present day. The main types of vehicle-generated road damage, and the methods used by pavement engineers to analyze them, are discussed in Chapter 2. Attention is also given to the main features of the response of road surfaces to vehicle loads and to mathematical models that have been developed to predict road response. Chapter 3 reviews the effects on road damage of vehicle features which can be studied without consideration of vehicle dynamics. These include gross vehicle weight, axle and tire configurations, tire contact conditions and static load sharing in axle group suspensions. The dynamic tire forces generated by heavy vehicles are examined in Chapter 4. The discussion includes their simulation and measurement, their principal characteristics, the effects of tire and suspension design on dynamic forces, and the potential benefits of using advanced suspensions for minimizing dynamic tire forces. Chapter 5 discusses methods for estimating the effects of dynamic tire forces on road damage. The two main approaches are either to examine the statistics of the forces themselves, or to calculate the response of a pavement model to the forces and to calculate the resulting wear using a material damage model. The issues involved in assessing vehicles for 'road friendliness' are discussed in Chapter 6. Possible assessment methods include measuring strains in an instrumented pavement traversed by the vehicle, measuring dynamic tire forces, or measuring vehicle parameters such as the 'natural frequency' and 'damping ratio'. Each of these measurements involves different assumptions and analysis methods for converting the results into some measure of road damage. Chapter 7 includes a summary of the main conclusions of the paper and recommendations for tire and suspension design, road design and construction, and vehicle regulations.
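
The first of the two approaches, working with the statistics of the forces themselves, can be sketched directly. The Python snippet below generates a band-limited random tire-force history standing in for a vehicle simulation and aggregates damage with the widely used fourth-power law; the dynamic load coefficients and the damage exponent are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(0, 60, 0.01)        # 60 s at 100 Hz
P_static = 45e3                   # static wheel load, N (assumed)

def tire_force(dlc):
    """Static load plus band-limited dynamic part; dlc = std/mean (assumed)."""
    noise = rng.standard_normal(t.size)
    # Crude low-pass by moving average to mimic sprung-mass dynamics.
    dyn = np.convolve(noise, np.ones(25) / 25, mode="same")
    dyn *= dlc * P_static / dyn.std()
    return P_static + dyn

for dlc in (0.05, 0.15, 0.30):    # roughly: soft vs. stiff suspensions
    P = np.clip(tire_force(dlc), 0, None)
    damage = np.mean((P / P_static) ** 4)   # fourth-power aggregate damage
    print(f"DLC = {dlc:.2f}: relative damage = {damage:.2f}")
```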

Relevance:

90.00%

Publisher:

Abstract:

Localization of chess-board vertices is a common task in computer vision, underpinning many applications, but relatively little work focusses on designing a specific feature detector that is fast, accurate and robust. In this paper the 'Chess-board Extraction by Subtraction and Summation' (ChESS) feature detector, designed to exclusively respond to chess-board vertices, is presented. The method proposed is robust against noise, poor lighting and poor contrast, requires no prior knowledge of the extent of the chess-board pattern, is computationally very efficient, and provides a strength measure of detected features. Such a detector has significant application both in the key field of camera calibration, as well as in structured light 3D reconstruction. Evidence is presented showing its superior robustness, accuracy, and efficiency in comparison to other commonly used detectors, including Harris & Stephens and SUSAN, both under simulation and in experimental 3D reconstruction of flat plate and cylindrical objects. © 2013 Elsevier Inc. All rights reserved.