Abstract:
The fluid–particle interaction and the impact of different heat transfer conditions on pyrolysis of biomass inside a 150 g/h fluidised bed reactor are modelled. Two biomass particles of different size (350 µm and 550 µm in diameter) are injected into the fluidised bed. The different biomass particle sizes result in different heat transfer conditions, because the 350 µm diameter particle is smaller than the sand particles of the reactor (440 µm), while the 550 µm one is larger. The bed-to-particle heat transfer for both cases is calculated according to the literature. Conductive heat transfer is assumed for the larger biomass particle (550 µm) inside the bed, while biomass–sand contacts for the smaller biomass particle (350 µm) are considered unimportant. The Eulerian approach is used to model the bubbling behaviour of the sand, which is treated as a continuum. Biomass reaction kinetics are modelled according to the literature using a two-stage, semi-global scheme which takes secondary reactions into account. The particle motion inside the reactor is computed using drag laws that depend on the local volume fraction of each phase. FLUENT 6.2 has been used as the modelling framework for the simulations, with the whole pyrolysis model incorporated in the form of a User Defined Function (UDF).
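For readers unfamiliar with the two-stage, semi-global kinetic scheme mentioned above, the following is a minimal sketch of how such a scheme (primary decomposition of biomass into gas, tar and char, followed by secondary cracking of tar) can be integrated in time. The Arrhenius parameters and the constant particle temperature are illustrative placeholders, not the values used in the study.

```python
# Sketch of a two-stage, semi-global biomass pyrolysis scheme:
# primary: biomass -> gas, tar, char; secondary: tar -> gas, char.
# Rate parameters below are placeholders for illustration only.
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314  # J/(mol K)

def k(A, E, T):
    """Arrhenius rate constant."""
    return A * np.exp(-E / (R * T))

def rhs(t, y, T):
    biomass, gas, tar, char = y
    # primary (competing) reactions
    k_g, k_t, k_c = k(1.4e4, 88.6e3, T), k(4.1e6, 112.7e3, T), k(7.4e5, 106.5e3, T)
    # secondary tar cracking
    k_g2, k_c2 = k(4.3e6, 108e3, T), k(1.0e6, 108e3, T)
    d_biomass = -(k_g + k_t + k_c) * biomass
    d_gas = k_g * biomass + k_g2 * tar
    d_tar = k_t * biomass - (k_g2 + k_c2) * tar
    d_char = k_c * biomass + k_c2 * tar
    return [d_biomass, d_gas, d_tar, d_char]

T_particle = 773.0  # K, assumed constant particle temperature
sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 0.0, 0.0], args=(T_particle,))
print(sol.y[:, -1])  # final mass fractions: biomass, gas, tar, char
```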
Abstract:
There is an alternative model of the one-way ANOVA called the 'random effects' model or 'nested' design, in which the objective is not to test specific effects but to estimate the degree of variation of a particular measurement and to compare different sources of variation that influence the measurement in space and/or time. The most important statistics from a random effects model are the components of variance, which estimate the variance associated with each of the sources of variation influencing a measurement. The nested design is particularly useful in preliminary experiments designed to estimate different sources of variation and in the planning of appropriate sampling strategies.
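As a minimal illustration of the components of variance described above, the sketch below estimates the between-group and within-group variance components from the mean squares of a balanced one-way random effects design; the data are invented for illustration.

```python
# Variance components from a balanced one-way random effects design
# with k groups and n replicates per group (invented data).
import numpy as np

data = np.array([
    [4.1, 3.9, 4.3],   # group 1
    [5.0, 5.4, 4.8],   # group 2
    [3.2, 3.0, 3.5],   # group 3
])
k, n = data.shape
grand_mean = data.mean()
group_means = data.mean(axis=1)

ms_between = n * np.sum((group_means - grand_mean) ** 2) / (k - 1)
ms_within = np.sum((data - group_means[:, None]) ** 2) / (k * (n - 1))

var_within = ms_within                                 # within-group component
var_between = max((ms_between - ms_within) / n, 0.0)   # between-group component
print(var_between, var_within)
```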
Abstract:
Ion exchange resins are used for many purposes in various areas of science and commerce. One example is the use of cation exchange resins in the nuclear industry for the clean-up of radioactively contaminated water (for example the removal of 137Cs). However, during removal of radionuclides, the resin itself becomes radioactively contaminated and must be treated as Intermediate Level Waste. This radioactive contamination of the resin creates a disposal problem. Conventionally, there are two main avenues of disposal for industrial wastes: landfill burial or incineration. However, these are regarded as inappropriate for the disposal of the cation exchange resin involved in this project. Thus, a method involving the use of Fenton's Reagent (hydrogen peroxide with a soluble iron catalyst) to destroy the resin by wet oxidation has been developed. This process converts 95% of the solid resin to gaseous CO2, thus greatly reducing the volume of radioactive waste that has to be disposed of. However, hydrogen peroxide is an expensive reagent and is a major component of the cost of any potential plant for the destruction of ion exchange resin. The aim of my project has been to discover a way of improving the efficiency of the destruction of the resin, thus reducing the cost involved in the use of hydrogen peroxide. The work on this problem has been concentrated in two main areas: (1) use of analytical techniques such as NMR and IR to follow the process of the hydrogen peroxide destruction of both resin beads and model systems such as water-soluble calixarenes; and (2) use of various physical and chemical techniques in an attempt to improve the overall efficiency of hydrogen peroxide utilization. Examples of these techniques include UV irradiation, both with and without a photocatalyst, oxygen-carrying molecules and various stirring regimes.
Abstract:
This thesis is concerned with the inventory control of items that can be considered independent of one another. The decisions of when to order and in what quantity are the controllable or independent variables in the cost expressions which are minimised. The four systems considered are referred to as (Q,R), (nQ,R,T), (M,T) and (M,R,T). With (Q,R), a fixed quantity Q is ordered each time the order cover (i.e. stock in hand plus on order) equals or falls below R, the re-order level. With the other three systems, reviews are made only at intervals of T. With (nQ,R,T), an order for nQ is placed if on review the inventory cover is less than or equal to R, where n, an integer, is chosen at the time so that the new order cover just exceeds R. In (M,T), each order increases the order cover to M. Finally, in (M,R,T), when on review the order cover does not exceed R, enough is ordered to increase it to M. The (Q,R) system is examined at several levels of complexity, so that the theoretical savings in inventory costs obtained with more exact models could be compared with the increases in computational costs. Since the exact model was preferable for the (Q,R) system, only exact models were derived for the other three systems. Several methods of optimization were tried, but most were found inappropriate for the exact models because of non-convergence; however, one method did work for each of the exact models. Demand is considered continuous and, with one exception, the distribution assumed is the normal distribution truncated so that demand is never less than zero. Shortages are assumed to result in backorders, not lost sales. However, the shortage cost is a function of three items, one of which, the backorder cost, may be a linear, quadratic or exponential function of the length of time of a backorder, with or without a period of grace. Lead times are assumed constant or gamma distributed. Lastly, the actual supply quantity is allowed to be distributed. All the sets of equations were programmed for a KDF 9 computer, and the computed performances of the four inventory control procedures are compared under each assumption.
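The (Q,R) rule described above can be illustrated with a short simulation sketch. The demand distribution, lead time, costs and starting stock below are invented assumptions, and the logic is far simpler than the cost expressions developed in the thesis.

```python
# Minimal simulation of a (Q, R) continuous-review policy: order a fixed
# quantity Q whenever order cover (stock on hand plus on order) <= R.
import random

def simulate_QR(Q, R, periods=365, lead_time=5, mean_demand=10.0):
    stock, on_order = 50.0, 0.0
    outstanding = []            # list of (arrival_day, quantity)
    backorders = 0.0
    for day in range(periods):
        # receive any orders arriving today
        arrived = sum(q for (t, q) in outstanding if t == day)
        outstanding = [(t, q) for (t, q) in outstanding if t != day]
        stock += arrived
        on_order -= arrived
        # daily demand, truncated at zero
        demand = max(random.gauss(mean_demand, 3.0), 0.0)
        stock -= demand
        if stock < 0:
            backorders += min(demand, -stock)   # newly backordered units
        # reorder when order cover falls to R or below
        if stock + on_order <= R:
            outstanding.append((day + lead_time, Q))
            on_order += Q
    return stock, backorders

print(simulate_QR(Q=60, R=55))
```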
Abstract:
High velocity oxyfuel (HVOF) thermal spraying is one of the most significant developments in the thermal spray industry since the development of the original plasma spray technique. The first investigation deals with the combustion and discrete particle models within the general purpose commercial CFD code FLUENT, used to solve the combustion of kerosene and couple the motion of fuel droplets with the gas flow dynamics in a Lagrangian fashion. The effects of liquid fuel droplets on the thermodynamics of the combusting gas flow are examined thoroughly, showing that the combustion process of kerosene is independent of the initial fuel droplet sizes. The second analysis deals with the full water-cooling numerical model, which can assist in thermal performance optimisation or in determining the best method for heat removal without the cost of building physical prototypes. The numerical results indicate that the water flow rate and direction have a noticeable influence on the cooling efficiency but no noticeable effect on the gas flow dynamics within the thermal spraying gun. The third investigation deals with the development and implementation of discrete phase particle models. The results indicate that most powder particles are not melted upon hitting the substrate to be coated. The oxidation model confirms that HVOF guns can produce metallic coatings with low oxidation within the typical stand-off distance of about 30 cm. Physical properties such as porosity, microstructure, surface roughness and adhesion strength of coatings produced by droplet deposition in a thermal spray process are determined to a large extent by the dynamics of deformation and solidification of the particles impinging on the substrate. Therefore, one of the objectives of this study is to present a complete numerical model of droplet impact and solidification. The modelling results show that solidification of droplets is significantly affected by the thermal contact resistance/substrate surface roughness.
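As a minimal illustration of the Lagrangian discrete-phase idea underlying these models, the sketch below integrates the drag-driven momentum equation for a single powder particle in a hot gas stream. The gas velocity, particle properties and the Schiller-Naumann drag correlation are illustrative assumptions rather than the conditions modelled in the study.

```python
# One-dimensional Lagrangian tracking of a spherical particle accelerated
# by gas-phase drag (illustrative values, not the study's conditions).
def drag_coefficient(Re):
    # Schiller-Naumann correlation for a sphere (Re < ~1000)
    return 24.0 / Re * (1.0 + 0.15 * Re ** 0.687) if Re > 1e-8 else 0.0

def track_particle(u_gas=1500.0, d_p=30e-6, rho_p=8900.0,
                   rho_g=0.5, mu_g=5e-5, dt=1e-7, t_end=2e-4):
    v, x, t = 0.0, 0.0, 0.0          # particle velocity (m/s), position (m), time (s)
    while t < t_end:
        Re = rho_g * abs(u_gas - v) * d_p / mu_g
        Cd = drag_coefficient(Re)
        # drag acceleration = 3/4 * Cd * rho_g * |du| * du / (rho_p * d_p)
        a = 0.75 * Cd * rho_g * abs(u_gas - v) * (u_gas - v) / (rho_p * d_p)
        v += a * dt
        x += v * dt
        t += dt
    return x, v

print(track_particle())
```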
Abstract:
Soil erosion is one of the most pressing issues facing developing countries. The need for soil erosion assessment is paramount, as a successful and productive agricultural base is necessary for economic growth and stability. In Ghana, a country with an expanding population and high potential for economic growth, agriculture is an important resource; however, most of the crop production is restricted to low-technology shifting cultivation agriculture. The high-intensity seasonal rainfall coincides with the early growing period of many of the crops, meaning that plots are very susceptible to erosion, especially on steep-sided valleys in the region south of Lake Volta. This research investigated the processes of soil erosion by rainfall with the aim of producing a sediment yield model for a small semi-agricultural catchment in rural Ghana. Various types of modelling techniques were considered to discover those most applicable to the sub-tropical environment of southern Ghana. Once an appropriate model had been developed and calibrated, the aim was to look at how to enable the scaling up of the model, using sub-catchments, to calculate sedimentation rates of Lake Volta. An experimental catchment was located in Ghana, south-west of Lake Volta, where data on rainstorms and the associated streamflow and sediment loads, together with soil data (moisture content, classification and particle size distribution), were collected to calibrate the model. Additional data were obtained from the Soil Research Institute in Ghana to explore calibration of the Universal Soil Loss Equation (USLE; Wischmeier and Smith, 1978) for Ghanaian soils and environment. It was shown that the USLE could be successfully converted to provide meaningful soil loss estimates in the Ghanaian environment. However, due to experimental difficulties, the proposed theory and methodology of the sediment yield model could only be tested in principle. Future work may include validation of the model and subsequent scaling up to estimate sedimentation rates in Lake Volta.
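For reference, the Universal Soil Loss Equation mentioned above has the multiplicative form A = R K LS C P. The sketch below evaluates it with purely illustrative factor values that are not calibrated for Ghanaian conditions.

```python
# Universal Soil Loss Equation (Wischmeier and Smith, 1978): A = R * K * LS * C * P
def usle_soil_loss(R, K, LS, C, P):
    """Annual soil loss A from rainfall erosivity R, soil erodibility K,
    slope length-steepness LS, cover factor C and support practice P."""
    return R * K * LS * C * P

# Example with assumed factor values for a steep, seasonally cultivated plot
print(usle_soil_loss(R=7000, K=0.15, LS=3.2, C=0.25, P=1.0))
```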
Abstract:
The soil-plant-moisture subsystem is an important component of the hydrological cycle. Over the last 20 or so years a number of computer models of varying complexity have represented this subsystem with differing degrees of success. The aim of the present work has been to improve and extend an existing model. The new model is less site-specific, thus allowing the simulation of a wide range of soil types and profiles. Several processes not included in the original model are simulated by the inclusion of new algorithms, including macropore flow, hysteresis and plant growth. Changes have also been made to the infiltration, water uptake and water flow algorithms. Using field data from various sources, regression equations have been derived which relate parameters in the suction-conductivity-moisture content relationships to easily measured soil properties such as particle-size distribution data. Independent tests have been performed on laboratory data produced by Hedges (1989). The parameters found by regression for the suction relationships were then used in equations describing the infiltration and macropore processes. An extensive literature review produced a new model for calculating plant growth from actual transpiration, which was itself partly determined by the root densities and leaf area indices derived by the plant growth model. The new infiltration model uses intensity/duration curves to disaggregate daily rainfall inputs into hourly amounts. The final model has been calibrated and tested against field data, and its performance compared to that of the original model. Simulations have also been carried out to investigate the effects of various parameters on infiltration, macropore flow, actual transpiration and plant growth. Qualitative comparisons have been made between these results and data given in the literature.
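As a minimal sketch of the disaggregation step mentioned above, the function below spreads a daily rainfall total over an assumed storm duration using a simple triangular intensity profile; the profile shape, storm duration and timing are illustrative assumptions, not the intensity/duration curves used in the model.

```python
# Disaggregate a daily rainfall total into hourly amounts with a
# triangular storm intensity profile (illustrative assumptions only).
import numpy as np

def disaggregate_daily(rain_mm, storm_hours=6, start_hour=14):
    """Spread a daily total over a storm of given duration, peaking mid-storm."""
    hours = np.zeros(24)
    profile = np.interp(np.arange(storm_hours),
                        [0, (storm_hours - 1) / 2, storm_hours - 1],
                        [0.2, 1.0, 0.2])
    profile /= profile.sum()
    for i, w in enumerate(profile):
        hours[(start_hour + i) % 24] += rain_mm * w
    return hours

print(disaggregate_daily(35.0).round(2))
```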
Abstract:
The 21-day experimental gingivitis model, an established noninvasive model of inflammation in response to increasing bacterial accumulation in humans, is designed to enable the study of both the induction and resolution of inflammation. Here, we have analyzed gingival crevicular fluid, an oral fluid comprising a serum transudate and tissue exudates, by LC-MS/MS using Fourier transform ion cyclotron resonance mass spectrometry and iTRAQ isobaric mass tags, to establish meta-proteomic profiles of inflammation-induced changes in proteins in healthy young volunteers. Across the course of experimentally induced gingivitis, we identified 16 bacterial and 186 human proteins. Although abundances of the bacterial proteins identified did not vary temporally, Fusobacterium outer membrane proteins were detected. Fusobacterium species have previously been associated with periodontal health or disease. The human proteins identified spanned a wide range of compartments (both extracellular and intracellular) and functions, including serum proteins, proteins displaying antibacterial properties, and proteins with functions associated with cellular transcription, DNA binding, the cytoskeleton, cell adhesion, and cilia. PolySNAP3 clustering software was used in a multilayered analytical approach. Clusters of proteins that associated with changes to the clinical parameters included neuronal and synapse-associated proteins.
Abstract:
In Statnote 9, we described a one-way analysis of variance (ANOVA) 'random effects' model in which the objective was to estimate the degree of variation of a particular measurement and to compare different sources of variation in space and time. The illustrative scenario involved the role of computer keyboards in a University communal computer laboratory as a possible source of microbial contamination of the hands. The study estimated the aerobic colony count of ten selected keyboards, with samples taken from two keys per keyboard determined at 9am and 5pm. This type of design is often referred to as a 'nested' or 'hierarchical' design, and the ANOVA estimated the degree of variation: (1) between keyboards, (2) between keys within a keyboard, and (3) between sample times within a key. An alternative to this design is a 'fixed effects' model in which the objective is not to measure sources of variation per se but to estimate differences between specific groups or treatments, which are regarded as 'fixed' or discrete effects. This Statnote describes two scenarios utilizing this type of analysis: (1) measuring the degree of bacterial contamination on 2p coins collected from three types of business property, viz., a butcher's shop, a sandwich shop, and a newsagent, and (2) the effectiveness of drugs in the treatment of a fungal eye infection.
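A fixed effects one-way ANOVA of the first scenario can be sketched in a few lines; the colony counts below are invented for illustration, and scipy.stats.f_oneway returns the F statistic and p-value for the null hypothesis of equal group means.

```python
# One-way fixed effects ANOVA comparing bacterial counts on coins from
# three types of premises (invented data for illustration).
from scipy.stats import f_oneway

butcher   = [120, 98, 143, 110, 131]
sandwich  = [85, 77, 92, 101, 88]
newsagent = [60, 72, 55, 66, 70]

F, p = f_oneway(butcher, sandwich, newsagent)
print(F, p)
```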
Abstract:
The recent explosive growth of voice over IP (VoIP) solutions calls for accurate modelling of VoIP traffic. This study presents measurements of ON and OFF periods of VoIP activity from a significantly large database of VoIP call recordings consisting of native speakers speaking in some of the world's most widely spoken languages. The impact of the languages and the varying dynamics of caller interaction on the ON and OFF period statistics are assessed. It is observed that speaker interactions dominate over language dependence which makes monologue-based data unreliable for traffic modelling. The authors derive a semi-Markov model which accurately reproduces the statistics of composite dialogue measurements. © The Institution of Engineering and Technology 2013.
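A minimal sketch of a semi-Markov ON/OFF source of the kind used for such traffic models is shown below: the source alternates between talkspurt (ON) and silence (OFF) states with holding times drawn from arbitrary distributions. The exponential and lognormal choices and their parameters are illustrative assumptions, not the distributions fitted in the paper.

```python
# Semi-Markov ON/OFF source: alternating states with state-dependent
# holding-time distributions (illustrative parameters only).
import random

def on_off_trace(total_time=60.0):
    """Return a list of (state, duration) pairs covering total_time seconds."""
    t, state, trace = 0.0, "ON", []
    while t < total_time:
        if state == "ON":
            dur = random.expovariate(1.0 / 1.2)       # mean talkspurt ~1.2 s
        else:
            dur = random.lognormvariate(0.0, 1.0)     # heavier-tailed silences
        trace.append((state, dur))
        t += dur
        state = "OFF" if state == "ON" else "ON"
    return trace

for state, dur in on_off_trace(10.0):
    print(f"{state:3s} {dur:.2f} s")
```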
Abstract:
This study re-examines the one-dimensional equilibrium model of Gibilaro and Rowe (1974) for a segregating gas fluidized bed. The model was based on volumetric jetsam concentration and divided the bed contents into bulk and wake phases, taking account of bulk and wake flux, segregation, exchange between the bulk and wake phases, and axial mixing. Due to the complex nature of the model and its unstable solution, the lack of computing power at the time meant that the authors could do little more than obtain analytical solutions to specific cases of the model. This paper provides a numerical total solution and allows the effect of the respective parameters to be compared for the first time. There is also a comparison with experimental results, which shows reasonable agreement.
Abstract:
The aim of this paper is to examine the short-term dynamics of foreign exchange rate spreads. Using a vector autoregressive (VAR) model, we show that most of the variation in the spread comes from long-run dependencies between past and future spreads rather than being caused by changes in inventory, adverse selection, cost of carry or order processing costs. We apply the Iterated Cumulative Sums of Squares (ICSS) algorithm of Inclan and Tiao (1994) to discover how often spread volatility changes. We find that spread volatility shifts are relatively uncommon and that shifts in one currency spread tend not to spill over to other currency spreads. © 2013.
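At the core of the ICSS procedure of Inclan and Tiao (1994) is the centred cumulative sum of squares statistic D_k = C_k/C_T - k/T, where C_k is the cumulative sum of squared demeaned observations. The sketch below computes the single-break test statistic on a simulated series with a variance shift; the full algorithm applies this test iteratively to sub-samples.

```python
# Centred cumulative sum of squares statistic for a single variance break,
# evaluated on a simulated series with a shift in volatility.
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1.0, 500), rng.normal(0, 2.0, 500)])

a = x - x.mean()
C = np.cumsum(a ** 2)
T = len(a)
k = np.arange(1, T + 1)
D = C / C[-1] - k / T

stat = np.sqrt(T / 2.0) * np.max(np.abs(D))
k_star = int(np.argmax(np.abs(D))) + 1
print(stat, k_star)   # compare stat with the ~1.358 asymptotic 5% critical value
```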
Abstract:
Tensile tests were carried out using specimens of 2009 aluminium alloy reinforced by either SiC whiskers or particles. The size distributions of the whiskers and particles in the matrix were obtained by image analysis. It was found that failure was a result of uniform void nucleation and coalescence in the as-fabricated composites, or a result of fast crack propagation initiated by a flaw developed at clusters of SiC in the aged or stretched-and-aged composites. The strengths of the as-fabricated composites were estimated from the results of image analysis using continuum mechanics and dislocation theories. The estimation indicated that the tensile strengths derive largely from composite strengthening, supplemented by residual dislocation strengthening and work hardening. Owing to the flaw-controlled failure, the tensile strengths of the aged or stretched-and-aged composites were independent of aging time, aging temperature, and the amount of stretching. The elastic moduli of the composites were estimated using the Halpin-Tsai model, and a good correlation was found between the measured and estimated moduli. © 1996 The Institute of Materials.
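For reference, the Halpin-Tsai estimate referred to above takes the form E_c = E_m (1 + ξηV_f)/(1 - ηV_f) with η = (E_f/E_m - 1)/(E_f/E_m + ξ). The moduli, volume fraction and shape factor in the sketch below are illustrative assumptions (ξ is larger for high-aspect-ratio whiskers than for equiaxed particles).

```python
# Halpin-Tsai estimate of composite modulus from matrix modulus E_m,
# reinforcement modulus E_f, volume fraction V_f and shape factor xi.
def halpin_tsai(E_m, E_f, V_f, xi):
    eta = (E_f / E_m - 1.0) / (E_f / E_m + xi)
    return E_m * (1.0 + xi * eta * V_f) / (1.0 - eta * V_f)

# Illustrative values: Al matrix (~72 GPa) with 15 vol% SiC (~450 GPa)
print(halpin_tsai(E_m=72.0, E_f=450.0, V_f=0.15, xi=2.0))   # GPa
```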