994 results for modelling of uptake kinetics


Relevance: 100.00%

Abstract:

The Organic Rankine Cycle (ORC) is the most commonly used method for recovering energy from small heat sources. The investigation of the ORC under supercritical conditions is a new research area, as it has the potential to deliver high power and thermal efficiency in a waste heat recovery system. This paper presents a steady-state ORC model under supercritical conditions and its simulation with exhaust data from a real engine. The key component of the ORC, the evaporator, is modelled using the finite volume method; the modelling of all other components of the waste heat recovery system, such as the pump, expander and condenser, is also presented. The aim of this paper is to investigate the effects of mass flow rate and evaporator outlet temperature on the efficiency of the waste heat recovery process. Additionally, the necessity of maintaining an optimum evaporator outlet temperature is investigated. Simulation results show that adjusting the mass flow rate is the key to changing the operating temperature at the evaporator outlet.
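
As a rough illustration of the finite-volume evaporator idea described above, the following Python sketch marches a cell-by-cell energy balance along the evaporator and shows how the working-fluid mass flow rate controls the outlet temperature. It assumes constant specific heats, a single UA value, a co-current arrangement and invented operating numbers, none of which are taken from the paper.

```python
# Minimal finite-volume sketch of a co-current evaporator energy balance.
# Constant specific heats and a single UA value are illustrative
# simplifications; the paper's supercritical model uses real-fluid
# properties and a more detailed discretisation.

def evaporator_outlet(m_gas, cp_gas, T_gas_in, m_wf, cp_wf, T_wf_in, UA, n_cells=50):
    """March an energy balance cell by cell along the evaporator."""
    ua = UA / n_cells
    T_gas, T_wf = T_gas_in, T_wf_in
    for _ in range(n_cells):
        q = ua * (T_gas - T_wf)        # heat exchanged in this cell [W]
        T_gas -= q / (m_gas * cp_gas)  # exhaust gas cools
        T_wf += q / (m_wf * cp_wf)     # working fluid heats up
    return T_wf, T_gas

# Illustrative numbers only (not taken from the paper): increasing the working-fluid
# mass flow rate lowers the evaporator outlet temperature, as the abstract notes.
for m_wf in (0.03, 0.05, 0.08):
    T_out, _ = evaporator_outlet(m_gas=0.3, cp_gas=1100.0, T_gas_in=750.0,
                                 m_wf=m_wf, cp_wf=2500.0, T_wf_in=320.0, UA=400.0)
    print(f"m_wf = {m_wf:.2f} kg/s -> evaporator outlet ~ {T_out:.0f} K")
```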

Relevance: 100.00%

Abstract:

This paper details the theory and implementation of a composite damage model, addressing damage within a ply (intralaminar) and delamination (interlaminar), for the simulation of the crushing of laminated composite structures. It includes a more accurate determination of the characteristic length to achieve mesh objectivity in capturing intralaminar damage consisting of matrix cracking and fibre failure, a load-history-dependent material response, an isotropic-hardening nonlinear matrix response, and a more physically based interactive matrix-dominated damage mechanism. The developed damage model requires a set of material parameters obtained from a combination of standard and non-standard material characterisation tests. The fidelity of the model obviates the need to manipulate, or "calibrate", the input data to achieve good agreement with experimental results. The intralaminar damage model was implemented as a VUMAT subroutine and used, in conjunction with an existing interlaminar damage model, in Abaqus/Explicit. This approach was validated through the simulation of the crushing of a cross-ply composite tube with a tulip-shaped trigger, loaded in uniaxial compression. Despite the complexity of the chosen geometry, excellent correlation was achieved with experimental results.
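
The characteristic-length treatment mentioned above is commonly realised through crack-band regularisation, where the softening law is scaled by the element size so that the dissipated energy per unit crack area always equals the fracture energy. The Python sketch below illustrates that general idea with a linear softening law and invented material values; it is not the paper's VUMAT formulation.

```python
def linear_softening_damage(strain, E, sigma0, G_f, l_char):
    """Damage variable for a linear softening law regularised by the element
    characteristic length l_char (crack-band approach): the final strain is
    set so that the dissipated energy per unit crack area equals G_f."""
    eps0 = sigma0 / E                        # strain at damage onset
    eps_f = 2.0 * G_f / (sigma0 * l_char)    # final strain from the energy balance
    if eps_f <= eps0:
        raise ValueError("Element too large for this G_f: snap-back, refine the mesh.")
    if strain <= eps0:
        return 0.0
    if strain >= eps_f:
        return 1.0
    return (eps_f / strain) * (strain - eps0) / (eps_f - eps0)

# Two different "element sizes" should dissipate the same energy per unit crack area.
E, sigma0, G_f = 9000.0, 60.0, 1.0           # MPa, MPa, N/mm (illustrative values)
for l_char in (0.5, 2.0):                    # mm
    eps_f = 2.0 * G_f / (sigma0 * l_char)
    n = 2000
    eps = [eps_f * i / n for i in range(n + 1)]
    sig = [(1.0 - linear_softening_damage(e, E, sigma0, G_f, l_char)) * E * e for e in eps]
    area = sum(0.5 * (sig[i] + sig[i + 1]) * (eps[i + 1] - eps[i]) for i in range(n))
    print(f"l_char = {l_char} mm -> dissipated energy ~ {area * l_char:.3f} N/mm (target {G_f})")
```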

Relevance: 100.00%

Abstract:

A detailed study of bi-material composites, using meshless methods (MMs), is presented in this paper. Firstly, representative volume elements (RVEs) for different bi-material combinations are analysed by the element-free Galerkin (EFG) method in order to confirm the effective properties of the heterogeneous material through homogenisation. The results are shown to be in good agreement with experimental results and with those obtained using the finite element method (FEM), which required a higher node density. Secondly, a functionally graded material (FGM) containing a crack is analysed using the EFG method. This investigation was motivated by the possibility of replacing the distinct fibre-matrix interface with an FGM interface. Finally, an illustrative example showing crack propagation in a two-dimensional micro-scale model of a SiC/Al composite is presented.
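
Numerical homogenisation of bi-material RVEs, as described above, is often sanity-checked against the analytical Voigt and Reuss bounds, which bracket the effective modulus. The short Python sketch below evaluates those bounds with typical (illustrative) SiC and Al moduli; it is a cross-check of the general approach, not the EFG computation reported in the paper.

```python
def voigt_reuss_bounds(E_f, E_m, v_f):
    """Upper (Voigt) and lower (Reuss) bounds on the effective Young's modulus
    of a two-phase composite with reinforcement volume fraction v_f."""
    e_voigt = v_f * E_f + (1.0 - v_f) * E_m          # iso-strain (rule of mixtures)
    e_reuss = 1.0 / (v_f / E_f + (1.0 - v_f) / E_m)  # iso-stress (inverse rule)
    return e_voigt, e_reuss

# Illustrative SiC and Al moduli [GPa] and volume fraction (not from the paper);
# the homogenised RVE result should fall between these bounds.
E_sic, E_al, vf = 410.0, 70.0, 0.3
upper, lower = voigt_reuss_bounds(E_sic, E_al, vf)
print(f"Effective modulus bounds: {lower:.1f} GPa (Reuss) to {upper:.1f} GPa (Voigt)")
```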

Relevance: 100.00%

Abstract:

Traditional experimental economics methods often consume enormous resources in the form of qualified human participants, and the inconsistency of a participant's decisions across repeated trials prevents sensitivity analyses. The problem can be solved if computer agents are capable of generating behaviour similar to that of the given participants in experiments. An analysis method based on experimental economics is presented to extract deep information from questionnaire data and emulate any number of participants. Taking customers' willingness to purchase electric vehicles (EVs) as an example, multi-layer correlation information is extracted from a limited number of questionnaires. Multi-agents mimicking the surveyed potential customers are modelled by matching the probabilistic distributions of their willingness embedded in the questionnaires. The authenticity of both the model and the algorithm is validated by comparing the agent-based Monte Carlo simulation results with the questionnaire-based deduction results. With the aid of the agent models, the effects of minority agents with specific preferences on the results are also discussed.
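
The core step described above, instantiating any number of agents whose willingness follows the distributions extracted from questionnaires, can be sketched as follows. The questionnaire categories and counts here are entirely hypothetical, and the simple independent sampling ignores the multi-layer correlations the paper extracts.

```python
import random

# Hypothetical questionnaire tallies of willingness to purchase an EV.
# The counts are invented for illustration only.
willingness_counts = {"unwilling": 120, "undecided": 200, "willing": 80}

def make_agents(counts, n_agents, rng=random.Random(42)):
    """Sample agents whose willingness follows the empirical questionnaire distribution."""
    categories = list(counts)
    weights = [counts[c] for c in categories]
    return [rng.choices(categories, weights=weights)[0] for _ in range(n_agents)]

# Monte Carlo estimate of the share of willing customers from 10,000 emulated agents.
agents = make_agents(willingness_counts, 10_000)
share_willing = agents.count("willing") / len(agents)
print(f"Estimated share willing to purchase: {share_willing:.3f}")
# Compare with the questionnaire-based deduction: 80 / 400 = 0.200
```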

Relevance: 100.00%

Abstract:

Extrusion is one of the major methods for processing polymeric materials, and the thermal homogeneity of the process output is a major concern for the manufacture of high-quality extruded products. Therefore, accurate thermal monitoring and control of the process are important for product quality control. However, most industrial extruders use single-point thermocouples for temperature monitoring/control, although their measurements are highly affected by the barrel metal wall temperature. Currently, no industrially established thermal profile measurement technique is available. Furthermore, it has been shown that the melt temperature changes considerably with die radial position, and hence point/bulk measurements are not sufficient for monitoring and control of the temperature across the melt flow. Moreover, the majority of process thermal control methods are based on linear models, which are not capable of dealing with process nonlinearities. In this work, the die melt temperature profile of a single-screw extruder was monitored with a thermocouple mesh technique. The data obtained were used to develop a novel approach to modelling the extruder die melt temperature profile under dynamic conditions (i.e. predicting the die melt temperature profile in real time). The newly proposed models were in good agreement with unseen measured data. They were then used to explore the effects of process settings, material and screw geometry on the die melt temperature profile. The results showed that the process thermal homogeneity was affected in a complex manner by changes to the process settings, screw geometry and material.
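
One simple form such a data-driven dynamic temperature model can take is an ARX predictor identified by least squares, fitted per radial position and driven by a process input such as screw speed. The Python sketch below identifies an ARX model from synthetic data; the model structure, the choice of screw speed as the input and all numbers are assumptions for illustration, not the models developed in this work.

```python
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Least-squares fit of an ARX model:
       y[k] = a1*y[k-1] + ... + a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb]."""
    n = max(na, nb)
    rows, targets = [], []
    for k in range(n, len(y)):
        rows.append(np.r_[y[k-na:k][::-1], u[k-nb:k][::-1]])
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta

# Synthetic data: melt temperature at one radial position responding to screw-speed steps.
rng = np.random.default_rng(0)
u = 50.0 + 10.0 * (rng.random(500) > 0.5)        # screw speed (rpm), invented signal
y = np.zeros(500)
y[0] = 200.0                                     # deg C, arbitrary starting value
for k in range(1, 500):
    y[k] = 0.9 * y[k-1] + 0.4 * u[k-1] + rng.normal(0.0, 0.05)

theta = fit_arx(y, u)
print("Identified ARX parameters [a1, a2, b1, b2]:", np.round(theta, 3))
```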

Relevance: 100.00%

Abstract:

Given the growing interest in thermal processing methods, this study describes the use of an advanced rheological technique, capillary rheometry, to accurately determine the thermorheological properties of two pharmaceutical polymers, Eudragit E100 (E100) and hydroxypropylcellulose JF (HPC), and their blends, both in the presence and absence of a model therapeutic agent (quinine, as the base and the hydrochloride salt). Furthermore, the glass transition temperatures (Tg) of the cooled extrudates produced by capillary rheometry were characterised using Dynamic Mechanical Thermal Analysis (DMTA), enabling correlations to be drawn between the information derived from capillary rheometry and the glass transition properties of the extrudates. The shear viscosities of E100 and HPC (and their blends) decreased with increasing temperature and shear rate, with the shear viscosity of E100 being significantly greater than that of HPC at all temperatures and shear rates. All platforms were readily processed at shear rates relevant to extrusion (approximately 200–300 s⁻¹) and injection moulding (approximately 900 s⁻¹). Quinine base was observed to lower the shear viscosities of E100 and E100/HPC blends during processing and the Tg of the extrudates, indicative of plasticisation at processing temperatures and when cooled (i.e. in the solid state). Quinine hydrochloride (20% w/w) increased the shear viscosities of E100 and HPC and their blends during processing and did not affect the Tg of the parent polymer. However, the shear viscosities of these systems were not prohibitive to processing at shear rates relevant to extrusion and injection moulding. As the ratio of E100:HPC increased within the polymer blends, the effects of quinine base on lowering both the shear viscosity and the Tg of the blends increased, reflecting the greater solubility of quinine within E100. In conclusion, this study has highlighted the importance of capillary rheometry in identifying processing conditions, polymer miscibility and plasticisation phenomena.
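
Capillary rheometry data of the kind described, with viscosity falling as shear rate rises, are often summarised by a power-law model, eta = K * (shear rate)^(n-1). The sketch below fits that model over the shear-rate range quoted above; the data points are invented for illustration and are not the measured E100/HPC values.

```python
import numpy as np

# Invented (shear rate [1/s], viscosity [Pa.s]) pairs spanning the range relevant
# to extrusion (~200-300 1/s) and injection moulding (~900 1/s).
shear_rate = np.array([100.0, 200.0, 300.0, 600.0, 900.0])
viscosity  = np.array([850.0, 520.0, 400.0, 250.0, 190.0])

# Power law: eta = K * gamma_dot**(n - 1)  ->  log(eta) = log(K) + (n - 1)*log(gamma_dot)
slope, intercept = np.polyfit(np.log(shear_rate), np.log(viscosity), 1)
n = slope + 1.0
K = np.exp(intercept)
print(f"Power-law index n ~ {n:.2f}, consistency K ~ {K:.0f} Pa.s^n")

# Predicted viscosity at an injection-moulding shear rate of 900 1/s:
print(f"eta(900 1/s) ~ {K * 900.0**(n - 1.0):.0f} Pa.s")
```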

Relevance: 100.00%

Abstract:

The technique of externally bonding fibre-reinforced polymer (FRP) composites has become increasingly popular worldwide for retrofitting existing reinforced concrete (RC) structures. A major failure mode in such strengthened structures is the debonding of the FRP from the concrete substrate. The bond behaviour between FRP and concrete thus plays a crucial role in these structures. The FRP-to-concrete bond behaviour has been extensively investigated experimentally, commonly using the pull-off test of an FRP-to-concrete bonded joint. Comparatively little research has been concerned with the numerical simulation of this bond behaviour, chiefly because of the difficulty of accurately modelling the complex behaviour of concrete. This paper proposes a robust finite element (FE) model for simulating the bond behaviour throughout the entire loading process of the pull-off test. A concrete damage plasticity model based on plastic degradation theory is proposed to overcome the weaknesses of the elastic degradation theory commonly adopted in previous studies. The model produces results in very close agreement with test data.
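
A widely used simplified representation of the FRP-to-concrete interface, often adopted instead of a full concrete constitutive model, is a bilinear local bond-slip law. The sketch below implements that generic law with placeholder parameters; it is an illustration of the bond behaviour being simulated, not the concrete damage plasticity model proposed in the paper.

```python
def bilinear_bond_slip(s, tau_max=4.0, s1=0.05, s_f=0.25):
    """Generic bilinear bond-slip law for an FRP-to-concrete interface.

    tau_max : peak bond (shear) stress [MPa]   (placeholder value)
    s1      : slip at peak stress [mm]
    s_f     : slip at complete debonding [mm]
    Interfacial fracture energy G_f = 0.5 * tau_max * s_f.
    """
    if s <= 0.0:
        return 0.0
    if s <= s1:
        return tau_max * s / s1                    # ascending (elastic) branch
    if s <= s_f:
        return tau_max * (s_f - s) / (s_f - s1)    # linear softening branch
    return 0.0                                     # fully debonded

G_f = 0.5 * 4.0 * 0.25
print(f"Interfacial fracture energy ~ {G_f:.2f} N/mm")
for slip in (0.02, 0.05, 0.15, 0.30):
    print(f"slip = {slip:.2f} mm -> bond stress = {bilinear_bond_slip(slip):.2f} MPa")
```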

Relevance: 100.00%

Abstract:

The external bonding of FRP composites is an effective technique for retrofitting historical masonry arch structures. A major failure mode in such strengthened structures is the debonding of the FRP from the masonry. The bond behaviour between FRP and masonry thus plays a crucial role in these structures. Major challenges exist in the finite element modelling of such structures, such as modelling the mixed Mode-I and Mode-II bond behaviour between the FRP and the curved masonry substrate, modelling existing damage in the masonry arches, and accounting for the loading history of the unstrengthened and strengthened structure. This paper presents a rigorous FE model for simulating FRP-strengthened masonry arch structures. A detailed solid model was developed for the masonry, and a mixed-mode interface model was used for the FRP-to-masonry bond behaviour. The model produces results in very close agreement with test results.
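
Mixed Mode-I/Mode-II interface behaviour of the kind described above is commonly handled with a mixed-mode fracture-energy criterion such as the Benzeggagh-Kenane (B-K) law. The sketch below evaluates that generic criterion with placeholder toughness values and exponent; it illustrates the concept rather than the specific interface model used in this work.

```python
def bk_mixed_mode_toughness(G_Ic, G_IIc, mode_mix, eta=1.5):
    """Benzeggagh-Kenane critical energy release rate for a given mode mixity.

    mode_mix = G_II / (G_I + G_II), between 0 (pure Mode I) and 1 (pure Mode II).
    eta is the B-K exponent; all values here are placeholders.
    """
    return G_Ic + (G_IIc - G_Ic) * mode_mix ** eta

# Placeholder interface toughnesses [N/mm]:
G_Ic, G_IIc = 0.05, 0.40
for mix in (0.0, 0.25, 0.5, 0.75, 1.0):
    Gc = bk_mixed_mode_toughness(G_Ic, G_IIc, mix)
    print(f"mode mixity {mix:.2f} -> critical energy release rate {Gc:.3f} N/mm")
```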

Relevance: 100.00%

Abstract:

In this study, the behaviour of iron ore fines with varying levels of adhesion was investigated using a confined compression test and a uniaxial test. The uniaxial test was conducted using the semi-automated uniaxial EPT tester, in which the cohesive strength of a bulk solid is evaluated from an unconfined compression test following a period of consolidation to a pre-defined vertical stress. The iron ore fines were also tested by measuring both the vertical and circumferential strains on the cylindrical container walls under vertical loading in a separate confined compression tester (the K0 tester) to determine the lateral pressure ratio. Discrete Element Method (DEM) simulations of both experiments were carried out and the predictions were compared with the experimental observations. A recently developed DEM contact model for cohesive solids, an Elasto-Plastic Adhesive model, was used. This particle contact model uses hysteretic non-linear loading and unloading paths and an adhesion parameter that is a function of the maximum contact overlap. The model parameters for the simulations are phenomenologically based, chosen to reproduce the key bulk characteristics exhibited by the solid. The simulation results show good agreement in capturing the stress-history-dependent behaviour described by the flow function of the cohesive iron ore fines, while also providing a reasonably good match for the lateral pressure ratio observed during the confined compression (K0) tests. This demonstrates the potential for the DEM model to be used in the simulation of bulk handling applications.
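
The contact law described above (hysteretic loading/unloading with adhesion tied to the maximum overlap) can be illustrated with a simplified, linearised version. In the sketch below the stiffnesses, the adhesion coefficient and the linear branches are all placeholders chosen for clarity, not the calibrated non-linear model used in the simulations.

```python
class HystereticAdhesiveContact:
    """Simplified hysteretic normal contact with overlap-dependent adhesion.

    Virgin loading follows stiffness k1; unloading/reloading follows a stiffer
    k2 branch anchored at the maximum overlap reached; the tensile (pull-off)
    force limit grows with that maximum overlap via the coefficient k_adh.
    Units are illustrative.
    """

    def __init__(self, k1=1.0e3, k2=5.0e3, k_adh=2.0e2):
        self.k1, self.k2, self.k_adh = k1, k2, k_adh
        self.max_overlap = 0.0

    def normal_force(self, overlap):
        if overlap <= 0.0:
            return 0.0
        self.max_overlap = max(self.max_overlap, overlap)
        f_load = self.k1 * overlap                                        # virgin loading branch
        f_unload = self.k2 * (overlap - self.max_overlap) + self.k1 * self.max_overlap
        f_min = -self.k_adh * self.max_overlap                            # adhesive (tensile) limit
        # Force is bounded above by the loading branch and below by the pull-off limit.
        return max(min(f_load, f_unload), f_min)

contact = HystereticAdhesiveContact()
for d in (0.001, 0.002, 0.003, 0.002, 0.001, 0.0005):   # load then unload
    print(f"overlap {d:.4f} -> normal force {contact.normal_force(d):8.3f}")
```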

Relevance: 100.00%

Abstract:

We present optical and near-infrared (NIR) photometry and spectroscopy, as well as modelling of the lightcurves, of the Type IIb supernova (SN) 2011dh. Our extensive dataset, for which we present the observations obtained after day 100, spans two years; complemented with Spitzer mid-infrared (MIR) data, we use it to build an optical-to-MIR bolometric lightcurve between days 3 and 732. To model the bolometric lightcurve before day 400 we use a grid of hydrodynamical SN models, which allows us to determine the errors in the derived quantities, and a bolometric correction determined with steady-state non-local thermodynamic equilibrium (NLTE) modelling. Using this method we find a helium core mass of 3.1 (+0.7/−0.4) M⊙ for SN 2011dh, consistent within the error bars with previous results obtained using the bolometric lightcurve before day 80. We compute bolometric and broad-band lightcurves between days 100 and 500 from spectral steady-state NLTE models, presented and discussed in a companion paper. The preferred 12 M⊙ (initial mass) model, previously found to agree well with the observed spectra, shows good overall agreement with the observed lightcurves, although some discrepancies exist. Time-dependent NLTE modelling shows that after day ∼600 a steady-state assumption is no longer valid. The radioactive energy deposition in this phase is likely dominated by the positrons emitted in the decay of ⁵⁶Co, but seems insufficient to reproduce the lightcurves, and it is unclear which energy source dominates the emitted flux. We find an excess in the K and MIR bands developing between days 100 and 250, during which an increase in the optical decline rate is also observed. A local origin of the excess is suggested by the depth of the He I 20581 Å absorption. Steady-state NLTE models with a modest dust opacity in the core (τ = 0.44), turned on during this period, reproduce the observed behaviour, but an additional excess in the Spitzer 4.5 μm band remains. Carbon monoxide (CO) first-overtone band emission is detected at day 206, and possibly at day 89, and assuming the additional excess to be dominated by CO fundamental band emission, we find fundamental-to-first-overtone band ratios considerably higher than observed in SN 1987A. The profiles of the [O I] 6300 Å and Mg I] 4571 Å lines show a remarkable similarity, suggesting that these lines originate from a common nuclear burning zone (O/Ne/Mg), and using small-scale fluctuations in the line profiles we estimate a filling factor of ≲ 0.07 for the emitting material. This paper concludes our extensive observational and modelling work on SN 2011dh. The results from hydrodynamical modelling, steady-state NLTE modelling, and stellar evolutionary progenitor analysis are all consistent, and suggest an initial mass of ∼12 M⊙ for the progenitor.
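
For the late-time energy input mentioned above, the standard ⁵⁶Ni → ⁵⁶Co → ⁵⁶Fe decay chain gives the radioactive power, of which only a few per cent is carried by positrons. The sketch below evaluates that textbook expression with approximate literature decay constants and specific heating rates; the ⁵⁶Ni mass used is a placeholder, not the value derived for SN 2011dh.

```python
import numpy as np

M_SUN = 1.989e33          # g
DAY = 86400.0             # s

# Approximate standard values for the 56Ni -> 56Co -> 56Fe decay chain:
TAU_NI = 8.8 * DAY        # 56Ni e-folding time
TAU_CO = 111.3 * DAY      # 56Co e-folding time
EPS_NI = 3.9e10           # erg/s/g, specific heating from 56Ni decay
EPS_CO = 6.8e9            # erg/s/g, specific heating from 56Co decay
F_POSITRON = 0.035        # ~3.5% of the 56Co decay energy is carried by positrons

def decay_luminosity(t_days, m_ni_msun):
    """Total radioactive decay power and its positron channel (erg/s) at time t."""
    t = t_days * DAY
    m = m_ni_msun * M_SUN
    l_ni = EPS_NI * np.exp(-t / TAU_NI)
    l_co = EPS_CO * (np.exp(-t / TAU_CO) - np.exp(-t / TAU_NI))
    return m * (l_ni + l_co), m * l_co * F_POSITRON

# Placeholder 56Ni mass (illustrative only):
for day in (100, 300, 600):
    total, positrons = decay_luminosity(day, 0.075)
    print(f"day {day}: total decay power {total:.2e} erg/s, "
          f"positron channel {positrons:.2e} erg/s")
```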

Relevance: 100.00%

Abstract:

Radiative pressure exerted by line interactions is a prominent driver of outflows in astrophysical systems, being at work in the outflows emerging from hot stars or from the accretion discs of cataclysmic variables, massive young stars and active galactic nuclei. In this work, a new radiation hydrodynamical approach to model line-driven hot-star winds is presented. By coupling a Monte Carlo radiative transfer scheme with a finite volume fluid dynamical method, line-driven mass outflows may be modelled self-consistently, benefiting from the advantages of Monte Carlo techniques in treating multiline effects, such as multiple scatterings, and in dealing with arbitrary multidimensional configurations. In this work, we introduce our approach in detail by highlighting the key numerical techniques and verifying their operation in a number of simplified applications, specifically in a series of self-consistent, one-dimensional, Sobolev-type, hot-star wind calculations. The utility and accuracy of our approach are demonstrated by comparing the obtained results with the predictions of various formulations of the so-called CAK theory and by confronting the calculations with modern sophisticated techniques of predicting the wind structure. Using these calculations, we also point out some useful diagnostic capabilities our approach provides. Finally, we discuss some of the current limitations of our method, some possible extensions and potential future applications.
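
In CAK theory, against which the method is benchmarked, the line force is the electron-scattering radiation force multiplied by a force multiplier M(t) = k t^(−α), where t is the Sobolev optical-depth parameter built from the local density and velocity gradient. The sketch below evaluates this multiplier for illustrative wind conditions; k, α and all wind quantities are placeholders, not results from the paper.

```python
import math

SIGMA_E = 0.34      # cm^2/g, electron-scattering opacity (illustrative)
C = 2.998e10        # cm/s
L_SUN = 3.828e33    # erg/s

def cak_line_acceleration(L_star, r, rho, dv_dr, v_th, k=0.3, alpha=0.6):
    """Line acceleration from the CAK force multiplier M(t) = k * t**(-alpha),
    with t = sigma_e * v_th * rho / (dv/dr) the Sobolev optical-depth parameter.
    k and alpha are placeholder CAK parameters."""
    g_e = SIGMA_E * L_star / (4.0 * math.pi * r**2 * C)   # electron-scattering acceleration
    t = SIGMA_E * v_th * rho / dv_dr
    return g_e * k * t**(-alpha)

# Illustrative O-star wind conditions (placeholders):
g_line = cak_line_acceleration(
    L_star=5.0e5 * L_SUN,   # stellar luminosity
    r=1.0e12,               # cm, radius in the wind
    rho=1.0e-13,            # g/cm^3, local density
    dv_dr=1.0e-4,           # 1/s, velocity gradient (~1000 km/s over ~10^12 cm)
    v_th=2.0e6)             # cm/s, thermal speed (~20 km/s)
print(f"CAK line acceleration ~ {g_line:.3e} cm/s^2")
```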

Relevance: 100.00%

Abstract:

The development of a virtual testing environment, as a cost-effective industrial design tool for the design and analysis of composite structures, requires models to be created efficiently and the analysis to be accelerated by reducing the number of degrees of freedom, while still satisfying the need to accurately track the evolution of a debond, delamination or crack front. The eventual aim is to simulate both damage initiation and propagation in components with realistic geometrical features, where crack propagation paths are not trivial. Meshless approaches, and the Element-Free Galerkin (EFG) method in particular, are well suited to problems involving changes in topology and have been successfully applied to simulate damage in homogeneous materials and concrete. In this work, the method is used to model the initiation and mixed-mode propagation of cracks in composite laminates, and to simulate experimentally observed crack migration that is difficult to capture using standard finite element analysis.
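
The EFG approximation referred to above is built from moving least-squares (MLS) shape functions weighted by a compactly supported kernel around each node, the cubic spline being a common choice. The sketch below assembles 1D MLS shape functions and checks their partition-of-unity and linear-reproduction properties; it illustrates the generic meshless approximation, not the laminate damage model used in this work.

```python
import numpy as np

def cubic_spline_weight(r):
    """Standard cubic-spline kernel used in EFG, r = |x - x_node| / support_radius."""
    if r <= 0.5:
        return 2.0/3.0 - 4.0*r**2 + 4.0*r**3
    if r <= 1.0:
        return 4.0/3.0 - 4.0*r + 4.0*r**2 - 4.0/3.0*r**3
    return 0.0

def mls_shape_functions(x, nodes, support):
    """1D moving least-squares shape functions with a linear basis p = [1, x]."""
    w = np.array([cubic_spline_weight(abs(x - xi) / support) for xi in nodes])
    P = np.column_stack([np.ones_like(nodes), nodes])   # basis evaluated at the nodes
    A = P.T @ (w[:, None] * P)                          # moment matrix
    p_x = np.array([1.0, x])
    return (w[:, None] * P) @ np.linalg.solve(A, p_x)   # phi_i(x)

nodes = np.linspace(0.0, 1.0, 11)
phi = mls_shape_functions(0.37, nodes, support=0.25)
print("Partition of unity check, sum(phi) =", phi.sum())          # ~1.0
print("Linear reproduction check, sum(phi*x_i) =", phi @ nodes)   # ~0.37
```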