67 results for Numerical Algorithms and Problems
Abstract:
The long observational record is critical to our understanding of the Earth's climate, but most observing systems were not developed with a climate objective in mind. As a result, tremendous efforts have gone into assessing and reprocessing the data records to improve their usefulness in climate studies. The purpose of this paper is both to review recent progress in reprocessing and reanalyzing observations, and to summarize the challenges that must be overcome in order to improve our understanding of climate and its variability. Reprocessing improves data quality through closer scrutiny and improved retrieval techniques for individual observing systems, while reanalysis merges many disparate observations with models through data assimilation; both aim to provide a climatology of Earth processes. Many challenges remain, such as tracking the improvement of processing algorithms and limited spatial coverage. Reanalyses have fostered significant research, yet reliable global trends in many physical fields are not yet attainable, despite significant advances in data assimilation and numerical modeling. Oceanic reanalyses have made significant advances in recent years, but will only be discussed here in terms of progress toward integrated Earth system analyses. Climate data sets are generally adequate for process studies and large-scale climate variability. Communication of the strengths, limitations and uncertainties of reprocessed observations and reanalysis data, not only among the community of developers but also with the extended research community, including new generations of researchers and decision makers, is crucial for further advancement of the observational data records. It must be emphasized that careful investigation of the data and processing methods is required to use the observations appropriately.
Abstract:
The pipe sizing of water networks via evolutionary algorithms is of great interest because it allows the selection of alternative economical solutions that meet a set of design requirements. However, available evolutionary methods are numerous, and methodologies to compare the performance of these methods beyond obtaining a minimal solution for a given problem are currently lacking. A methodology to compare algorithms based on an efficiency rate (E) is presented here and applied to the pipe-sizing problem of four medium-sized benchmark networks (Hanoi, New York Tunnel, GoYang and R-9 Joao Pessoa). E numerically determines the performance of a given algorithm while also considering the quality of the obtained solution and the required computational effort. From the wide range of available evolutionary algorithms, four were selected to implement the methodology: a PseudoGenetic Algorithm (PGA), Particle Swarm Optimization (PSO), Harmony Search (HS) and a modified Shuffled Frog Leaping Algorithm (SFLA). After more than 500,000 simulations, a statistical analysis was performed based on the specific parameters each algorithm requires to operate, and finally, E was analyzed for each network and algorithm. The efficiency measure indicated that PGA is the most efficient algorithm for problems of greater complexity and that HS is the most efficient algorithm for less complex problems. However, the main contribution of this work is that the proposed efficiency rate provides a neutral strategy to compare optimization algorithms and may be useful in the future to select the most appropriate algorithm for different types of optimization problems.
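The abstract does not reproduce the formula for E, so the following is only a hedged sketch of the idea: a hypothetical efficiency rate that rewards closeness to the best known cost and penalizes the share of the evaluation budget spent. The rate definition, the costs and the evaluation counts below are illustrative assumptions, not values from the paper.

    # Hypothetical efficiency rate combining solution quality and computational effort.
    # The actual definition of E in the paper may differ; this is only a sketch.
    def efficiency_rate(best_known_cost, obtained_cost, evaluations, max_evaluations):
        quality = best_known_cost / obtained_cost      # 1.0 when the best known solution is found
        effort = evaluations / max_evaluations         # fraction of the evaluation budget used
        return quality / effort if effort > 0 else 0.0

    # Compare hypothetical runs of two algorithms on the same network (made-up numbers).
    runs = {
        "PGA": {"cost": 6.10e6, "evals": 40_000},
        "HS":  {"cost": 6.15e6, "evals": 25_000},
    }
    best_known = 6.08e6   # illustrative value only
    for name, r in runs.items():
        e = efficiency_rate(best_known, r["cost"], r["evals"], 100_000)
        print(f"{name}: E = {e:.2f}")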
Abstract:
Recent interest in the validation of general circulation models (GCMs) has been devoted to objective methods. A small number of authors have used the direct synoptic identification of phenomena together with a statistical analysis to perform the objective comparison between various datasets. This paper describes a general method for performing the synoptic identification of phenomena that can be used for an objective analysis of atmospheric, or oceanographic, datasets obtained from numerical models and remote sensing. Methods usually associated with image processing have been used to segment the scene and to identify suitable feature points to represent the phenomena of interest. This is performed for each time level. A technique from dynamic scene analysis is then used to link the feature points to form trajectories. The method is fully automatic and should be applicable to a wide range of geophysical fields. An example will be shown of results obtained from this method using data obtained from a run of the Universities Global Atmospheric Modelling Project GCM.
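Purely as an illustration of the trajectory-linking step borrowed from dynamic scene analysis, the sketch below joins feature points from consecutive time levels by nearest-neighbour matching; the (x, y) point format and the distance threshold are assumptions, not details taken from the paper.

    import math

    # Link feature points across time levels by nearest-neighbour matching.
    # Each time level is a list of (x, y) feature points (an assumed format).
    def link_features(levels, max_dist=5.0):
        trajectories = [[p] for p in levels[0]]
        for points in levels[1:]:
            unused = list(points)
            for traj in trajectories:
                if not unused:
                    continue
                last = traj[-1]
                nearest = min(unused, key=lambda p: math.dist(p, last))
                if math.dist(nearest, last) <= max_dist:
                    traj.append(nearest)
                    unused.remove(nearest)
            # points not matched to any existing trajectory start new ones
            trajectories.extend([[p] for p in unused])
        return trajectories

    levels = [[(10.0, 5.0)], [(11.0, 5.5)], [(12.5, 6.0)]]
    print(link_features(levels))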
Abstract:
GODIVA2 is a dynamic website that provides visual access to several terabytes of physically distributed, four-dimensional environmental data. It allows users to explore large datasets interactively without the need to install new software or download and understand complex data. Through the use of open international standards, GODIVA2 maintains a high level of interoperability with third-party systems, allowing diverse datasets to be mutually compared. Scientists can use the system to search for features in large datasets and to diagnose the output from numerical simulations and data processing algorithms. Data providers around Europe have adopted GODIVA2 as an INSPIRE-compliant dynamic quick-view system for providing visual access to their data.
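GODIVA2's interoperability rests on open OGC standards such as the Web Map Service (WMS); as a hedged illustration, a standard WMS 1.3.0 GetMap request can be assembled as below. The server URL, layer name and time value are placeholders, not real GODIVA2 endpoints.

    from urllib.parse import urlencode

    # Build a standard WMS 1.3.0 GetMap request; all values below are placeholders.
    base_url = "https://example.org/wms"            # hypothetical server
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": "sea_surface_temperature",        # assumed layer name
        "STYLES": "",
        "CRS": "EPSG:4326",
        "BBOX": "-90,-180,90,180",                  # lat/lon axis order in WMS 1.3.0
        "WIDTH": 512,
        "HEIGHT": 256,
        "FORMAT": "image/png",
        "TIME": "2006-01-01T00:00:00Z",
    }
    print(base_url + "?" + urlencode(params))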
Abstract:
Mega-scale glacial lineations (MSGLs) are longitudinally aligned corrugations (ridge-groove structures 6-100 km long) in sediment produced subglacially. They are indicators of fast flow and a common signature of ice-stream beds. We develop a qualitative theory that accounts for their formation, and use numerical modelling and observations of ice-stream beds to provide supporting evidence. Ice in contact with a rough (scale of 10-10^3 m) bedrock surface will mimic the form of the bed. Because of flow acceleration and convergence in ice-stream onset zones, the ice-base roughness elements experience transverse strain, transforming them from irregular bumps into longitudinally aligned keels of ice protruding downwards. Where such keels slide across a soft sedimentary bed, they plough through the sediments, carving elongate grooves and deforming material up into intervening ridges. This explains MSGLs and has important implications for ice-stream mechanics. Groove ploughing provides the means to acquire new lubricating sediment and to transport large volumes of it downstream. Keels may provide basal drag in the force budget of ice streams, thereby playing a role in flow regulation and stability. We speculate that groove ploughing permits significant ice-stream widening, thus facilitating high-magnitude ice discharge.
Abstract:
Bloom-forming and toxin-producing cyanobacteria remain a persistent nuisance across the world. Modelling of cyanobacteria in freshwaters is an important tool for understanding their population dynamics and predicting the location and timing of the bloom events in lakes and rivers. In this article, a new deterministic model is introduced which simulates the growth and movement of cyanobacterial blooms in river systems. The model focuses on the mathematical description of the bloom formation, vertical migration and lateral transport of colonies within river environments by taking into account the four major factors that affect the cyanobacterial bloom formation in freshwaters: light, nutrients, temperature and river flow. The model consists of two sub-models: a vertical migration model describing the growth of cyanobacteria in relation to light, nutrients and temperature; and a hydraulic model to simulate the horizontal movement of the bloom. This article presents the model algorithms and highlights some important model results. The effects of nutrient limitation, varying illumination and river flow characteristics on cyanobacterial movement are simulated. The results indicate that under high light intensities and in nutrient-rich waters, colonies sink further as a result of carbohydrate accumulation in the cells. In turbulent environments, vertical migration is retarded by the vertical velocity component generated by turbulent shear stress.
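As a rough illustration of the vertical-migration mechanism described above (carbohydrate ballast gained in the light makes colonies denser and sink; ballast lost in darkness lets them refloat), the toy time-stepping below uses Stokes' settling law with invented parameter values. It is not the model of the paper.

    import math

    # Toy vertical-migration step for a single cyanobacterial colony (illustrative only).
    WATER_DENSITY = 998.0   # kg m^-3
    VISCOSITY = 1.0e-3      # Pa s
    G = 9.81                # m s^-2

    def stokes_velocity(colony_density, radius):
        # Positive = sinking, negative = floating (Stokes' law for a small sphere).
        return 2.0 * radius**2 * G * (colony_density - WATER_DENSITY) / (9.0 * VISCOSITY)

    def step(depth, density, surface_light, dt=600.0, radius=100e-6):
        light = surface_light * math.exp(-0.5 * depth)   # assumed light extinction with depth
        density += 2.0e-6 * light * dt                   # carbohydrate ballast gained in the light
        density -= 2.0e-4 * dt                           # ballast respired away
        depth += stokes_velocity(density, radius) * dt
        return max(depth, 0.0), density

    depth, density = 1.0, 995.0
    for _ in range(144):                                 # one day in 10-minute steps
        depth, density = step(depth, density, surface_light=400.0)
    print(round(depth, 2), round(density, 1))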
Abstract:
There is a growing interest in using stochastic parametrizations in numerical weather and climate prediction models. Previously, Palmer (2001) outlined the issues that give rise to the need for a stochastic parametrization and the forms such a parametrization could take. In this article a method is presented that uses a comparison between a standard-resolution version and a high-resolution version of the same model to gain information relevant for a stochastic parametrization in that model. A correction term that could be used in a stochastic parametrization is derived from the thermodynamic equations of both models. The origin of the components of this term is discussed. It is found that the component related to unresolved wave-wave interactions is important and can act to compensate for large parametrized tendencies. The correction term is not proportional to the parametrized tendency. Finally, it is explained how the correction term could be used to give information about the shape of the random distribution to be used in a stochastic parametrization.
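A minimal sketch of the comparison idea, under the assumption that one has the coarse-grained total tendency from the high-resolution run and the parametrized tendency from the standard-resolution run on a matching grid; the block-averaging coarse-graining and the random fields below are illustrative stand-ins, not the authors' diagnostics.

    import numpy as np

    # Coarse-grain a high-resolution tendency field onto the low-resolution grid
    # by block averaging (an assumed, simple choice of coarse-graining operator).
    def coarse_grain(field_hi, factor):
        ny, nx = field_hi.shape
        return field_hi.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

    # Hypothetical tendencies (e.g. K per time step) on compatible grids.
    rng = np.random.default_rng(0)
    total_tendency_hi = rng.normal(0.0, 1.0, size=(64, 64))         # "truth" from the high-res run
    parametrized_tendency_lo = rng.normal(0.0, 1.0, size=(16, 16))  # standard-res parametrization

    # Correction term: what the parametrization misses relative to the high-res model.
    correction = coarse_grain(total_tendency_hi, 4) - parametrized_tendency_lo
    print(correction.mean(), correction.std())   # statistics that could inform a stochastic scheme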
Abstract:
A multiple factor parametrization is described to permit the efficient calculation of collision efficiency (E) between electrically charged aerosol particles and neutral cloud droplets in numerical models of cloud and climate. The four-parameter representation summarizes the results obtained from a detailed microphysical model of E, which accounts for the different forces acting on the aerosol in the path of falling cloud droplets. The parametrization's range of validity is for aerosol particle radii of 0.4 to 10 μm, aerosol particle densities of 1 to 2.0 g cm^-3, aerosol particle charges from neutral to 100 elementary charges and drop radii from 18.55 to 142 μm. The parametrization yields values of E well within an order of magnitude of the detailed model's values, from a dataset of 3978 E values. Of these values, 95% have modelled-to-parametrized ratios between 0.5 and 1.5 for aerosol particle sizes ranging between 0.4 and 2.0 μm, and about 96% in the second size range. This parametrization speeds up the calculation of E by a factor of ~10^3 compared with the original microphysical model, permitting the inclusion of electric charge effects in numerical cloud and climate models.
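The four-parameter fit itself is not given in the abstract; the sketch below only shows the kind of range-checked wrapper such a parametrization needs inside a cloud model, with a placeholder functional form and made-up coefficients.

    # Range-checked wrapper for a collision-efficiency parametrization.
    # The functional form and the coefficients are placeholders, not the published fit.
    def collision_efficiency(radius_um, density_g_cm3, charges, drop_radius_um,
                             coeffs=(0.01, 1.5, 0.5, -0.8)):
        if not (0.4 <= radius_um <= 10.0):
            raise ValueError("aerosol radius outside the 0.4-10 micron validity range")
        if not (1.0 <= density_g_cm3 <= 2.0):
            raise ValueError("particle density outside the 1-2 g cm^-3 validity range")
        if not (0 <= charges <= 100):
            raise ValueError("charge outside the 0-100 elementary charges validity range")
        if not (18.55 <= drop_radius_um <= 142.0):
            raise ValueError("drop radius outside the 18.55-142 micron validity range")
        a, b, c, d = coeffs
        return a * radius_um**b * (1.0 + c * charges) * drop_radius_um**d

    print(collision_efficiency(1.0, 1.5, 10, 50.0))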
Abstract:
Nonlinear adjustment toward long-run price equilibrium relationships in the sugar-ethanol-oil nexus in Brazil is examined. We develop generalized bivariate error correction models that allow for cointegration between sugar, ethanol, and oil prices, where dynamic adjustments are potentially nonlinear functions of the disequilibrium errors. A range of models are estimated using Bayesian Markov chain Monte Carlo algorithms and compared using Bayesian model selection methods. The results suggest that the long-run drivers of Brazilian sugar prices are oil prices and that there are nonlinearities in the adjustment processes of sugar and ethanol prices to oil prices, but linear adjustment between ethanol and sugar prices.
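As a sketch of the nonlinear error-correction idea (not the authors' Bayesian specification), the short simulation below lets the speed at which the sugar price corrects back towards its long-run relation with the oil price grow with the size of the disequilibrium error; the long-run slope and all coefficients are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    oil = np.cumsum(rng.normal(0.0, 0.02, n))      # log oil price as a random walk
    sugar = np.empty(n)
    sugar[0] = 0.5 * oil[0]

    for t in range(1, n):
        ect = sugar[t - 1] - 0.5 * oil[t - 1]       # disequilibrium error (assumed long-run slope 0.5)
        speed = 0.05 + 0.30 * np.tanh(abs(ect))     # nonlinear: larger errors are corrected faster
        sugar[t] = sugar[t - 1] - speed * ect + rng.normal(0.0, 0.01)

    print(np.corrcoef(sugar, oil)[0, 1])            # the two series stay tied together in the long run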
Abstract:
The requirement to rapidly and efficiently evaluate ruminant feedstuffs places increased emphasis on in vitro systems. However, despite the developmental work undertaken and widespread application of such techniques, little attention has been paid to the incubation medium. Considerable research using in vitro systems is conducted in resource-poor developing countries that often have difficulties associated with technical expertise, sourcing chemicals and/or funding to cover analytical and equipment costs. Such limitations have, to date, restricted vital feed evaluation programmes in these regions. This paper examines the function and relevance of the buffer, nutrient, and reducing solution components within current in vitro media, with the aim of identifying where simplification can be achieved. The review, supported by experimental work, identified no requirement to change the carbonate or phosphate salts, which comprise the main buffer components. The inclusion of microminerals provided few additional nutrients over those already supplied by the rumen fluid and substrate, and so may be omitted. Nitrogen associated with the inoculum was insufficient to support degradation, and a level of 25 mg N/g substrate is recommended. A sulphur inclusion level of 4-5 mg S/g substrate is proposed, with S levels lowered through omission of sodium sulphide and replacement of magnesium sulphate with magnesium chloride. It was confirmed that a highly reduced medium was not required, provided that anaerobic conditions were rapidly established. This allows sodium sulphide, part of the reducing solution, to be omitted. Further, as gassing with CO2 directly influences the quantity of gas released, it is recommended that minimum CO2 levels be used and that gas flow and duration, together with the volume of medium treated, are detailed in experimental procedures. It is considered that these simplifications will improve safety and reduce costs and problems associated with sourcing components, while maintaining analytical precision.
Abstract:
Project managers in the construction industry increasingly seek to learn from other industrial sectors. Knowledge sharing between different contexts is thus viewed as an essential source of competitive advantage. It is important therefore for project managers from all sectors to address and develop appropriate methods of knowledge sharing. However, too often it is assumed that knowledge freely exists and can be captured and shared between contexts. Such assumptions belie complexities and problems awaiting the unsuspecting knowledge-sharing protagonist. Knowledge per se is a problematic, esoteric concept that does not lend itself easily to codification. Specifically, tacit knowledge possessed by individuals presents particular methodological issues for those considering harnessing its utility in return for competitive advantage. The notion that knowledge is also embedded in specific social contexts compounds this complexity. It is argued that knowledge is highly individualistic and concomitant with the various surrounding contexts within which it is shaped and enacted. Indeed, these contexts are also shaped as a consequence of knowledge, adding further complexity to the problem domain. Current methods of knowledge capture, transfer and sharing fall short of addressing these problematic issues. Research is presented that addresses these problems and proposes an alternative method of knowledge sharing. Drawing on data and observations collected from its application, the findings clearly demonstrate the crucial role of re-contextualisation, social interaction and dialectic debate in understanding knowledge sharing.
Abstract:
A numerical study of fluid mechanics and heat transfer in a scraped surface heat exchanger with non-Newtonian power law fluids is undertaken. Numerical results are generated for 2D steady-state conditions using finite element methods. The effects of blade design and material properties, and especially the independent effects of shear thinning and heat thinning, on the flow and heat transfer are studied. The results show that the gaps at the root of the blades, where the blades are connected to the inner cylinder, remove the stagnation points, reduce the net force on the blades and shift the location of the central stagnation point. The shear thinning property of the fluid reduces the local viscous dissipation close to the singularity corners, i.e. near the tip of the blades, and as a result the local fluid temperature is regulated. The heat thinning effect is greatest for Newtonian fluids where the viscous dissipation and the local temperature are highest at the tip of the blades. Where comparison is possible, very good agreement is found between the numerical results and the available data. Aspects of scraped surface heat exchanger design are assessed in the light of the results.
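For readers unfamiliar with the power-law (Ostwald-de Waele) model referred to above, the apparent viscosity is mu = K * (shear rate)^(n - 1), so for a flow index n < 1 the viscosity falls where shear rates are high, e.g. near the blade tips. A small illustrative calculation with made-up consistency and index values:

    # Apparent viscosity of a power-law fluid: mu = K * (shear rate)**(n - 1).
    def apparent_viscosity(shear_rate, consistency_K=10.0, flow_index_n=0.5):
        return consistency_K * shear_rate ** (flow_index_n - 1.0)

    # Shear thinning: viscosity drops as the shear rate rises (e.g. near a blade tip).
    for gamma_dot in (1.0, 10.0, 100.0, 1000.0):
        print(gamma_dot, apparent_viscosity(gamma_dot))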
Abstract:
A finite element numerical study has been carried out on the isothermal flow of power law fluids in lid-driven cavities with axial throughflow. The effects of the tangential flow Reynolds number (Re-U), axial flow Reynolds number (Re-W), cavity aspect ratio and shear thinning property of the fluids on tangential and axial velocity distributions and the frictional pressure drop are studied. Where comparison is possible, very good agreement is found between current numerical results and published asymptotic and numerical results. For shear thinning materials in long thin cavities in the tangential flow dominated flow regime, the numerical results show that the frictional pressure drop lies between two extreme conditions, namely the results for duct flow and analytical results from lubrication theory. For shear thinning materials in a lid-driven cavity, the interaction between the tangential flow and axial flow is very complex because the flow is dependent on the flow Reynolds numbers and the ratio of the average axial velocity and the lid velocity. For both Newtonian and shear thinning fluids, the axial velocity peak is shifted and the frictional pressure drop is increased with increasing tangential flow Reynolds number. The results are highly relevant to industrial devices such as screw extruders and scraped surface heat exchangers.
Abstract:
Elevated levels of low-density-lipoprotein cholesterol (LDL-C) in the plasma are a well-established risk factor for the development of coronary heart disease. Plasma LDL-C levels are in part determined by the rate at which LDL particles are removed from the bloodstream by hepatic uptake. The uptake of LDL by mammalian liver cells occurs mainly via receptor-mediated endocytosis, a process which entails the binding of these particles to specific receptors in specialised areas of the cell surface, the subsequent internalization of the receptor-lipoprotein complex, and ultimately the degradation and release of the ingested lipoproteins' constituent parts. We formulate a mathematical model to study the binding and internalization (endocytosis) of LDL and VLDL particles by hepatocytes in culture. The system of ordinary differential equations, which includes a cholesterol-dependent pit production term representing feedback regulation of surface receptors in response to intracellular cholesterol levels, is analysed using numerical simulations and steady-state analysis. Our numerical results show good agreement with in vitro experimental data describing LDL uptake by cultured hepatocytes following delivery of a single bolus of lipoprotein. Our model is adapted in order to reflect the in vivo situation, in which lipoproteins are continuously delivered to the hepatocyte. In this case, our model suggests that the competition between the LDL and VLDL particles for binding to the pits on the cell surface affects the intracellular cholesterol concentration. In particular, we predict that when there is continuous delivery of low levels of lipoproteins to the cell surface, more VLDL than LDL occupies the pit, since VLDL are better competitors for receptor binding. VLDL have a cholesterol content comparable to LDL particles; however, due to the larger size of VLDL, one pit-bound VLDL particle blocks binding of several LDLs, and there is a resultant drop in the intracellular cholesterol level. When there is continuous delivery of lipoprotein at high levels to the hepatocytes, VLDL particles still out-compete LDL particles for receptor binding, and consequently more VLDL than LDL particles occupy the pit. Although the maximum intracellular cholesterol level is similar for high and low levels of lipoprotein delivery, the maximum is reached more rapidly when the lipoprotein delivery rates are high. The implications of these results for the design of in vitro experiments are discussed.
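The abstract describes an ODE system with competitive receptor (pit) binding, internalization and cholesterol-dependent pit production; the stripped-down sketch below mimics that structure with invented rate constants and simple forward-Euler time stepping. It is not the authors' equation set.

    # Toy version of competitive LDL/VLDL binding and uptake (illustrative rates only).
    def derivatives(state, k_bind_ldl=1.0, k_bind_vldl=4.0, k_int=0.2,
                    chol_ldl=1.0, chol_vldl=1.0, pit_production=0.05, k_deg=0.01):
        ldl, vldl, pits, chol = state
        bind_ldl = k_bind_ldl * ldl * pits
        bind_vldl = k_bind_vldl * vldl * pits      # VLDL assumed to be the better competitor
        d_ldl = -bind_ldl
        d_vldl = -bind_vldl
        d_pits = pit_production / (1.0 + chol) - bind_ldl - bind_vldl   # feedback via cholesterol
        d_chol = k_int * (chol_ldl * bind_ldl + chol_vldl * bind_vldl) - k_deg * chol
        return (d_ldl, d_vldl, d_pits, d_chol)

    # Forward-Euler integration of a single bolus of lipoprotein.
    state = (1.0, 0.5, 0.1, 0.0)   # (extracellular LDL, VLDL, free pits, intracellular cholesterol)
    dt = 0.01
    for _ in range(10_000):
        state = tuple(s + dt * d for s, d in zip(state, derivatives(state)))
    print([round(s, 4) for s in state])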
Abstract:
The Danish Eulerian Model (DEM) is a powerful air pollution model, designed to calculate the concentrations of various dangerous species over a large geographical region (e.g. Europe). It takes into account the main physical and chemical processes between these species, the actual meteorological conditions, emissions, etc. This is a huge computational task and requires significant resources of storage and CPU time. Parallel computing is essential for the efficient practical use of the model. Some efficient parallel versions of the model were created over the past several years. A suitable parallel version of DEM using the Message Passing Interface library (MPI) was implemented on two powerful supercomputers of EPCC, Edinburgh, available via the HPC-Europa programme for transnational access to research infrastructures in the EC: a Sun Fire E15K and an IBM HPCx cluster. Although the implementation is, in principle, the same for both supercomputers, a few modifications had to be made to port the code successfully to the IBM HPCx cluster. Performance analysis and parallel optimization were then carried out. Results from benchmarking experiments are presented in this paper. Another set of experiments was carried out in order to investigate the sensitivity of the model to variations of some chemical rate constants in the chemical submodel. Certain modifications of the code were necessary for this task. The obtained results will be used for further sensitivity analysis studies using Monte Carlo simulation.
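A minimal illustration of the MPI-style domain decomposition such a parallel version relies on, written with the mpi4py bindings for brevity (the real DEM code is far larger and not in Python); the grid size, halo exchange and toy "advection" update are placeholders.

    # Run with, e.g.:  mpiexec -n 4 python dem_mpi_sketch.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Split a 1-D concentration field across processes (placeholder sizes).
    n_global = 96
    n_local = n_global // size
    local = np.full(n_local, float(rank))           # dummy initial concentrations

    # Exchange one halo value with the left/right neighbour each "time step".
    left = (rank - 1) % size
    right = (rank + 1) % size
    for _ in range(10):
        halo_from_left = comm.sendrecv(local[-1], dest=right, source=left)
        local = np.roll(local, 1)                   # toy advection: shift one cell to the right
        local[0] = halo_from_left                   # fill the inflow cell with the neighbour's value

    total = comm.reduce(local.sum(), op=MPI.SUM, root=0)
    if rank == 0:
        print("global sum:", total)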