912 results for Numerical Algorithms and Problems


Relevance: 100.00%

Abstract:

Nonlinear adjustment toward long-run price equilibrium relationships in the sugar-ethanol-oil nexus in Brazil is examined. We develop generalized bivariate error correction models that allow for cointegration between sugar, ethanol, and oil prices, in which the dynamic adjustments are potentially nonlinear functions of the disequilibrium errors. A range of models is estimated using Bayesian Markov chain Monte Carlo (MCMC) algorithms and compared using Bayesian model selection methods. The results suggest that oil prices are the long-run driver of Brazilian sugar prices, that the adjustment of sugar and ethanol prices to oil prices is nonlinear, and that the adjustment between ethanol and sugar prices is linear.
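
To illustrate the kind of nonlinear adjustment described above, here is a minimal sketch of a bivariate threshold error-correction process in which the speed of adjustment depends on the sign of the disequilibrium error. All coefficients and the threshold rule are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
beta = 1.2                       # assumed long-run cointegrating coefficient
rho_pos, rho_neg = -0.15, -0.05  # assumed regime-dependent adjustment speeds

oil = np.cumsum(rng.normal(0.0, 1.0, T))   # random-walk (log) oil price
sugar = np.empty(T)
sugar[0] = beta * oil[0]
for t in range(1, T):
    ect = sugar[t - 1] - beta * oil[t - 1]  # disequilibrium error
    rho = rho_pos if ect > 0 else rho_neg   # nonlinear (regime-switching) adjustment
    sugar[t] = sugar[t - 1] + rho * ect + rng.normal(0.0, 0.5)
```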

Relevance: 100.00%

Abstract:

The requirement to rapidly and efficiently evaluate ruminant feedstuffs places increased emphasis on in vitro systems. However, despite the developmental work undertaken and the widespread application of such techniques, little attention has been paid to the incubation medium. Considerable research using in vitro systems is conducted in resource-poor developing countries, which often face difficulties associated with technical expertise, sourcing chemicals and/or funding to cover analytical and equipment costs. Such limitations have, to date, restricted vital feed evaluation programmes in these regions. This paper examines the function and relevance of the buffer, nutrient and reducing-solution components of current in vitro media, with the aim of identifying where simplification can be achieved. The review, supported by experimental work, identified no requirement to change the carbonate or phosphate salts, which comprise the main buffer components. The inclusion of microminerals provided few additional nutrients over those already supplied by the rumen fluid and substrate, and so they may be omitted. Nitrogen associated with the inoculum was insufficient to support degradation, and a level of 25 mg N/g substrate is recommended. A sulphur inclusion level of 4-5 mg S/g substrate is proposed, with S levels lowered through omission of sodium sulphide and replacement of magnesium sulphate with magnesium chloride. It was confirmed that a highly reduced medium was not required, provided that anaerobic conditions were rapidly established. This allows sodium sulphide, part of the reducing solution, to be omitted. Further, as gassing with CO2 directly influences the quantity of gas released, it is recommended that minimum CO2 levels be used and that gas flow and duration, together with the volume of medium treated, be detailed in experimental procedures. It is considered that these simplifications will improve safety and reduce costs and problems associated with sourcing components, while maintaining analytical precision. (c) 2005 Elsevier B.V. All rights reserved.

Relevance: 100.00%

Abstract:

Project managers in the construction industry increasingly seek to learn from other industrial sectors. Knowledge sharing between different contexts is thus viewed as an essential source of competitive advantage. It is therefore important for project managers from all sectors to develop appropriate methods of knowledge sharing. However, too often it is assumed that knowledge freely exists and can be captured and shared between contexts. Such assumptions belie the complexities and problems awaiting the unsuspecting knowledge-sharing protagonist. Knowledge per se is a problematic, esoteric concept that does not lend itself easily to codification. Tacit knowledge in particular, possessed by individuals, presents particular methodological issues for those considering harnessing its utility in return for competitive advantage. The notion that knowledge is also embedded in specific social contexts compounds this complexity. It is argued that knowledge is highly individualistic and concomitant with the various surrounding contexts within which it is shaped and enacted. Indeed, these contexts are themselves shaped as a consequence of knowledge, adding further complexity to the problem domain. Current methods of knowledge capture, transfer and sharing fall short of addressing these issues. Research is presented that addresses these problems and proposes an alternative method of knowledge sharing. Drawing on data and observations collected from its application, the findings clearly demonstrate the crucial role of re-contextualisation, social interaction and dialectic debate in understanding knowledge sharing.

Relevance: 100.00%

Abstract:

A numerical study of fluid mechanics and heat transfer in a scraped surface heat exchanger with non-Newtonian power-law fluids is undertaken. Numerical results are generated for 2D steady-state conditions using finite element methods. The effects of blade design and material properties, and especially the independent effects of shear thinning and heat thinning on the flow and heat transfer, are studied. The results show that the gaps at the root of the blades, where the blades are connected to the inner cylinder, remove the stagnation points, reduce the net force on the blades and shift the location of the central stagnation point. The shear-thinning property of the fluid reduces the local viscous dissipation close to the singularity corners, i.e. near the tips of the blades, and as a result the local fluid temperature is regulated. The heat-thinning effect is greatest for Newtonian fluids, for which the viscous dissipation and the local temperature are highest at the tips of the blades. Where comparison is possible, very good agreement is found between the numerical results and the available data. Aspects of scraped surface heat exchanger design are assessed in the light of the results. (C) 2003 Elsevier Ltd. All rights reserved.
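
For context, the constitutive law underlying such simulations is the power-law (Ostwald-de Waele) model; a form capturing both shear thinning and heat thinning can be written as below, where the exponential temperature factor is a common modelling choice assumed here for illustration, not necessarily the paper's.

```latex
\mu(\dot{\gamma}, T) = K_0 \, e^{-b (T - T_0)} \, \dot{\gamma}^{\,n-1},
\qquad n < 1 \ \text{(shear thinning)}, \quad b > 0 \ \text{(heat thinning)}
```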

Relevance: 100.00%

Abstract:

A finite element numerical study has been carried out on the isothermal flow of power-law fluids in lid-driven cavities with axial throughflow. The effects of the tangential flow Reynolds number (Re_U), the axial flow Reynolds number (Re_W), the cavity aspect ratio and the shear-thinning property of the fluids on the tangential and axial velocity distributions and the frictional pressure drop are studied. Where comparison is possible, very good agreement is found between the current numerical results and published asymptotic and numerical results. For shear-thinning materials in long thin cavities in the tangential-flow-dominated regime, the numerical results show that the frictional pressure drop lies between two extreme conditions, namely the results for duct flow and analytical results from lubrication theory. For shear-thinning materials in a lid-driven cavity, the interaction between the tangential and axial flows is complex because the flow depends on both Reynolds numbers and on the ratio of the average axial velocity to the lid velocity. For both Newtonian and shear-thinning fluids, the axial velocity peak shifts and the frictional pressure drop increases with increasing tangential flow Reynolds number. The results are highly relevant to industrial devices such as screw extruders and scraped surface heat exchangers. (c) 2006 Elsevier Ltd. All rights reserved.
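
As a point of reference, one common way to generalise the two Reynolds numbers to a power-law fluid with consistency K and index n uses the characteristic speeds U (lid) and W (axial mean) and a characteristic gap width d; the paper's exact definitions may differ.

```latex
Re_U = \frac{\rho \, U^{2-n} d^{\,n}}{K}, \qquad
Re_W = \frac{\rho \, W^{2-n} d^{\,n}}{K}
```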

Relevance: 100.00%

Abstract:

Elevated levels of low-density-lipoprotein cholesterol (LDL-C) in the plasma are a well-established risk factor for the development of coronary heart disease. Plasma LDL-C levels are in part determined by the rate at which LDL particles are removed from the bloodstream by hepatic uptake. The uptake of LDL by mammalian liver cells occurs mainly via receptor-mediated endocytosis, a process which entails the binding of these particles to specific receptors in specialised areas of the cell surface, the subsequent internalization of the receptor-lipoprotein complex, and ultimately the degradation and release of the ingested lipoproteins' constituent parts. We formulate a mathematical model to study the binding and internalization (endocytosis) of LDL and VLDL particles by hepatocytes in culture. The system of ordinary differential equations, which includes a cholesterol-dependent pit production term representing feedback regulation of surface receptors in response to intracellular cholesterol levels, is analysed using numerical simulations and steady-state analysis. Our numerical results show good agreement with in vitro experimental data describing LDL uptake by cultured hepatocytes following delivery of a single bolus of lipoprotein. The model is then adapted to reflect the in vivo situation, in which lipoproteins are continuously delivered to the hepatocyte. In this case, the model suggests that competition between LDL and VLDL particles for binding to the pits on the cell surface affects the intracellular cholesterol concentration. In particular, we predict that when there is continuous delivery of low levels of lipoproteins to the cell surface, more VLDL than LDL occupies the pits, since VLDL are better competitors for receptor binding. VLDL have a cholesterol content comparable to that of LDL particles; however, due to their larger size, one pit-bound VLDL particle blocks the binding of several LDLs, and there is a resultant drop in the intracellular cholesterol level. When lipoprotein is continuously delivered at high levels to the hepatocytes, VLDL particles still out-compete LDL particles for receptor binding, and consequently more VLDL than LDL particles occupy the pits. Although the maximum intracellular cholesterol level is similar for high and low levels of lipoprotein delivery, the maximum is reached more rapidly when the delivery rates are high. The implications of these results for the design of in vitro experiments are discussed.
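
A minimal sketch of the kind of ODE system described, with LDL and VLDL competing for free pits and cholesterol-dependent pit production, is given below; the structure and all parameter names/values are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed rates: VLDL binds faster (the better competitor) and each
# internalised particle delivers a comparable amount of cholesterol.
k_on_L, k_on_V = 1.0, 3.0   # binding rates (illustrative)
k_int = 0.5                 # internalisation rate of occupied pits
k_prod0 = 0.2               # basal pit production rate
K_chol = 1.0                # cholesterol feedback scale
c_L, c_V = 1.0, 1.2         # cholesterol delivered per internalised particle
L_ext, V_ext = 0.5, 0.5     # continuous extracellular lipoprotein supply

def rhs(t, y):
    P, B_L, B_V, C = y      # free pits, bound LDL, bound VLDL, cholesterol
    bind_L = k_on_L * L_ext * P
    bind_V = k_on_V * V_ext * P
    prod = k_prod0 * K_chol / (K_chol + C)  # cholesterol-dependent pit production
    dP = prod - bind_L - bind_V
    dB_L = bind_L - k_int * B_L
    dB_V = bind_V - k_int * B_V
    dC = c_L * k_int * B_L + c_V * k_int * B_V - 0.1 * C  # uptake minus usage
    return [dP, dB_L, dB_V, dC]

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0, 0.0, 0.0], dense_output=True)
```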

Relevance: 100.00%

Abstract:

The Danish Eulerian Model (DEM) is a powerful air pollution model, designed to calculate the concentrations of various dangerous species over a large geographical region (e.g. Europe). It takes into account the main physical and chemical processes between these species, the actual meteorological conditions, emissions, etc. This is a huge computational task that requires significant storage and CPU time, so parallel computing is essential for the efficient practical use of the model. Several efficient parallel versions of the model have been created over the past few years. A suitable parallel version of DEM using the Message Passing Interface (MPI) library was implemented on two powerful supercomputers at EPCC, Edinburgh, available via the HPC-Europa programme for transnational access to research infrastructures in the EC: a Sun Fire E15K and an IBM HPCx cluster. Although the implementation is, in principle, the same for both supercomputers, a few modifications were needed to port the code successfully to the IBM HPCx cluster. Performance analysis and parallel optimization were then carried out, and results from benchmarking experiments are presented in this paper. Another set of experiments was carried out to investigate the sensitivity of the model to variation of some chemical rate constants in the chemical submodel, which required certain modifications of the code. The results obtained will be used for further sensitivity analysis studies using Monte Carlo simulation.
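
The paper's code is not reproduced here, but the usual MPI pattern for such grid-based models is a domain decomposition with halo exchange between neighbouring ranks; below is a hedged mpi4py sketch of that pattern with an illustrative grid and a placeholder update, not the actual DEM implementation.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N = 96                                  # illustrative global grid dimension
local = np.zeros((N // size + 2, N))    # local rows plus two halo rows
local[1:-1] = rank                      # toy initial data (assumes size divides N)

up = rank - 1 if rank > 0 else MPI.PROC_NULL
down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(10):
    # exchange halo rows with neighbouring ranks
    comm.Sendrecv(local[1], dest=up, recvbuf=local[-1], source=down)
    comm.Sendrecv(local[-2], dest=down, recvbuf=local[0], source=up)
    # local update: placeholder diffusion step standing in for the
    # model's advection/diffusion/chemistry operators
    local[1:-1] += 0.1 * (local[:-2] - 2 * local[1:-1] + local[2:])
```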

Relevance: 100.00%

Abstract:

The identification of non-linear systems using only observed finite datasets has become a mature research area over the last two decades. A class of linear-in-the-parameters models with universal approximation capabilities has been intensively studied and widely used due to the availability of many linear learning algorithms and their inherent convergence conditions. This article presents a systematic overview of basic research on model selection approaches for linear-in-the-parameters models. One of the fundamental problems in non-linear system identification is to find the minimal model with the best generalisation performance from observational data alone. The important concepts used to achieve good model generalisation in various non-linear system identification algorithms are first reviewed, including Bayesian parameter regularisation and model selection criteria based on cross-validation and experimental design. A significant advance in machine learning has been the development of the support vector machine as a means of identifying kernel models based on the structural risk minimisation principle. Developments in convex optimisation-based model construction algorithms, including support vector regression algorithms, are outlined. Input selection algorithms and on-line system identification algorithms are also included in this review. Finally, some industrial applications of non-linear models are discussed.
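
As one concrete (and deliberately simple) instance of such model selection, the sketch below performs forward selection of candidate terms for a linear-in-the-parameters model, scored by K-fold cross-validation; the function names and stopping rule are illustrative, not taken from the article.

```python
import numpy as np

def cv_mse(X, y, k=5):
    """K-fold cross-validated mean squared error of a least-squares fit."""
    idx = np.arange(len(y))
    err = 0.0
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        err += np.mean((y[fold] - X[fold] @ w) ** 2)
    return err / k

def forward_select(candidates, y, max_terms=10):
    """Greedily add the candidate column that most reduces CV error."""
    chosen, remaining, best = [], list(range(candidates.shape[1])), np.inf
    while remaining and len(chosen) < max_terms:
        score, j = min((cv_mse(candidates[:, chosen + [j]], y), j)
                       for j in remaining)
        if score >= best:            # stop when CV error no longer improves
            break
        best = score
        chosen.append(j)
        remaining.remove(j)
    return chosen
```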

Relevance: 100.00%

Abstract:

New algorithms and microcomputer programs for generating original multilayer designs (and printing a spectral graph) from refractive-index input are presented. The programs, named TSHEBYSHEV, HERPIN and MULTILAYER-SPECTRUM, have produced new designs of narrow-stopband, non-polarizing edge, and Tshebyshev optical filters. The computation procedure is an exact synthesis (so far as that is possible), numerical refinement not having been needed.
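
Those programs are not reproduced here, but the standard characteristic-matrix (Herpin-equivalent) formulation they build on can be sketched as follows for normal incidence; the quarter-wave stack at the end is an illustrative design, not one of the paper's.

```python
import numpy as np

def reflectance(wavelengths, n_layers, d_layers, n_in=1.0, n_sub=1.52):
    """Reflectance spectrum of a multilayer stack via 2x2 characteristic matrices."""
    R = []
    for lam in wavelengths:
        M = np.eye(2, dtype=complex)
        for n, d in zip(n_layers, d_layers):
            delta = 2 * np.pi * n * d / lam   # phase thickness of the layer
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        B, C = M @ np.array([1.0, n_sub])     # stack admittance terms
        r = (n_in * B - C) / (n_in * B + C)   # amplitude reflection coefficient
        R.append(abs(r) ** 2)
    return np.array(R)

# Illustrative quarter-wave high/low-index stack centred at 550 nm
nH, nL, lam0 = 2.35, 1.46, 550.0
ns = [nH, nL] * 8
ds = [lam0 / (4 * n) for n in ns]
spectrum = reflectance(np.linspace(400.0, 700.0, 61), ns, ds)
```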

Relevance: 100.00%

Abstract:

An important goal in computational neuroanatomy is the complete and accurate simulation of neuronal morphology. We are developing computational tools to model three-dimensional dendritic structures based on sets of stochastic rules. This paper reports an extensive, quantitative anatomical characterization of simulated motoneurons and Purkinje cells. We used several local and global algorithms implemented in the L-Neuron and ArborVitae programs to generate sets of virtual neurons. Parameter statistics for all algorithms were measured from experimental data, thus providing a compact and consistent description of these morphological classes. We compared the emergent anatomical features of each group of virtual neurons with those of the experimental database in order to gain insight into the plausibility of the model assumptions, potential improvements to the algorithms, and non-trivial relations among morphological parameters. Algorithms based mainly on local constraints (e.g., branch diameter) were successful in reproducing many morphological properties of both motoneurons and Purkinje cells (e.g., total length, asymmetry, number of bifurcations). The addition of global constraints (e.g., trophic factors) improved the angle-dependent emergent characteristics (average Euclidean distance from the soma to the dendritic terminations, dendritic spread). Virtual neurons systematically displayed greater anatomical variability than real cells, suggesting the need for additional constraints in the models. For several emergent anatomical properties, a specific algorithm reproduced the experimental statistics better than the others did. However, relative performances were often reversed for different anatomical properties and/or morphological classes. Thus, combining the strengths of alternative generative models could lead to comprehensive algorithms for the complete and accurate simulation of dendritic morphology.
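
As a toy illustration of "sets of stochastic rules", the sketch below grows dendritic tree topologies with a random branching decision and a Rall-style diameter split; the rule and all constants are invented for illustration and are much simpler than the actual L-Neuron/ArborVitae algorithms.

```python
import numpy as np

rng = np.random.default_rng(1)

def grow(diam, depth=0, taper=0.9, d_min=0.3, p_branch=0.4):
    """Recursively grow a dendritic tree; returns the number of segments."""
    if diam < d_min or depth > 12:
        return 1                              # terminate thin or deep branches
    if rng.random() < p_branch:
        # split parent diameter between daughters (Rall-like power rule, e = 1.5)
        d1 = diam * rng.uniform(0.5, 0.9)
        d2 = (diam ** 1.5 - d1 ** 1.5) ** (1 / 1.5)
        return 1 + grow(d1, depth + 1) + grow(d2, depth + 1)
    return 1 + grow(diam * taper, depth + 1)  # continue without branching

segment_counts = [grow(2.0) for _ in range(100)]  # a population of virtual trees
```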

Relevance: 100.00%

Abstract:

Searching for the optimum tap-length that best balances the complexity and steady-state performance of an adaptive filter has attracted attention recently. Among the algorithms in the literature, two, namely the segmented filter (SF) and gradient descent (GD) algorithms, are of particular interest as they can search for the optimum tap-length quickly. In this paper, we first compare the SF and GD algorithms carefully and show that the two algorithms are equivalent in performance under some constraints, but that each has advantages and disadvantages relative to the other. We then propose an improved variable tap-length algorithm using the concept of a pseudo fractional tap-length (FT). By updating the tap-length with instantaneous errors in a style similar to that used in the stochastic gradient [or least mean squares (LMS)] algorithm, the proposed FT algorithm not only retains the advantages of both the SF and GD algorithms but also has significantly lower complexity than existing algorithms. Both performance analysis and numerical simulations are given to verify the proposed algorithm.
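
A simplified sketch of the fractional tap-length idea follows: run LMS with L taps, compare the squared error of the full filter with that of a filter truncated by Delta taps, and let a fractional length drift accordingly, re-dimensioning only when the drift is significant. The constants and the exact update rule here are illustrative; consult the paper for the precise FT recursion.

```python
import numpy as np

rng = np.random.default_rng(2)
h = rng.normal(size=24)                      # unknown system; true length 24
N = 5000
x = rng.normal(size=N)
d = np.convolve(x, h)[:N] + 0.01 * rng.normal(size=N)

mu, alpha, gamma, Delta = 0.005, 0.01, 2.0, 4
L, L_f = 8, 8.0                              # integer and fractional tap-lengths
w = np.zeros(L)
for n in range(64, N):
    u = x[n:n - L:-1]                        # most recent L input samples
    e_full = d[n] - w @ u                    # error using all L taps
    e_short = d[n] - w[:L - Delta] @ u[:L - Delta]  # error of the shorter filter
    w = w + mu * e_full * u                  # standard LMS coefficient update
    # fractional length: leak downward, grow when truncation clearly hurts
    L_f = float(np.clip(L_f - alpha + gamma * (e_short**2 - e_full**2),
                        Delta + 1, 64))
    if abs(L_f - L) >= Delta:                # re-dimension on significant drift
        L_new = int(round(L_f))
        w_new = np.zeros(L_new)
        w_new[:min(L, L_new)] = w[:min(L, L_new)]
        w, L = w_new, L_new
```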

Relevance: 100.00%

Abstract:

Of the technologies currently available for producing energy from renewable sources in the British climate, all except one depend on a single ingredient, namely land. Other than offshore wind generation, which has been slow and expensive to establish, renewables have therefore had to be derived almost entirely from the land, whether as sites for turbines or as areas on which to grow feedstocks for biomass and biofuels. Of these, only wind turbines have been developed in any number, while economic conditions have until now been unfavourable for biomass and biofuel. The UK is unlikely to meet its present targets under the Kyoto agreement, owing to a mixture of limited funding and problems of policy. Peter Prag examines the present position and the potential outlook.

Relevance: 100.00%

Abstract:

This paper discusses how numerical gradient estimation methods may be used to reduce the computational demands on a class of multidimensional clustering algorithms. The study is motivated by the recognition that several current point-density-based cluster identification algorithms could benefit from a reduced computational demand if approximate a priori estimates of the cluster centres present in a given data set could be supplied as starting conditions. In this presentation, the algorithm shown to benefit from the technique is the Mean-Tracking (M-T) cluster algorithm, but the results obtained from the gradient estimation approach may also be applied to other clustering algorithms and related disciplines.
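
To make the idea concrete, the sketch below numerically estimates the gradient of a kernel density estimate and ascends it to obtain an approximate cluster centre that could seed an algorithm such as Mean-Tracking; the kernel, step scheme and constants are illustrative choices, not the paper's method.

```python
import numpy as np

def density(points, x, h=0.5):
    """Gaussian kernel density estimate at point x."""
    r2 = np.sum((points - x) ** 2, axis=1)
    return np.mean(np.exp(-r2 / (2.0 * h * h)))

def grad_density(points, x, eps=1e-3):
    """Central finite-difference estimate of the density gradient."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        dx = np.zeros_like(x)
        dx[i] = eps
        g[i] = (density(points, x + dx) - density(points, x - dx)) / (2.0 * eps)
    return g

def seed_centre(points, x0, step=0.05, iters=200):
    """Crude normalised gradient ascent on the estimated density."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        g = grad_density(points, x)
        norm = np.linalg.norm(g)
        if norm < 1e-9:
            break
        x += step * g / norm
    return x

# Toy usage: two Gaussian blobs; seed approximate centres from sample points.
rng = np.random.default_rng(3)
pts = np.vstack([rng.normal(loc=c, scale=0.4, size=(100, 2))
                 for c in ([0.0, 0.0], [4.0, 4.0])])
centres = [seed_centre(pts, p) for p in pts[::50]]
```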