899 results for computer modelling
Abstract:
Non-Fourier models of heat conduction are increasingly being considered in the modelling of microscale heat transfer in engineering and biomedical problems. The dual-phase-lagging (DPL) model, which incorporates time lags in the heat flux and the temperature gradient, together with some of its particular cases and approximations, results in heat conduction equations in the form of delayed or hyperbolic partial differential equations. In this work, the application of difference schemes to the numerical solution of lagging models of heat conduction is considered. Numerical schemes for some DPL approximations are developed, and their convergence and stability properties are characterized. Examples of numerical computations are included.
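The hyperbolic character of these lagging models can be illustrated with their simplest instance, the Cattaneo (single-phase-lag) equation tau*u_tt + u_t = alpha*u_xx. The sketch below advances it with an explicit central-difference scheme; the parameter values, grid and initial pulse are illustrative assumptions, not those of the paper.

```python
import numpy as np

# Explicit finite-difference scheme for the Cattaneo (single-phase-lag)
# equation  tau*u_tt + u_t = alpha*u_xx  on [0, 1] with u = 0 at both ends.
tau, alpha = 0.05, 1.0
nx, nt = 101, 400
dx, dt = 1.0 / (nx - 1), 0.001          # c*dt/dx ~ 0.45 < 1 (stable)
x = np.linspace(0.0, 1.0, nx)

u_prev = np.exp(-200.0 * (x - 0.5) ** 2)  # initial temperature pulse
u_prev[0] = u_prev[-1] = 0.0
u_curr = u_prev.copy()                    # zero initial velocity: u^1 = u^0

a = tau / dt**2 + 1.0 / (2.0 * dt)        # coefficient of u^{n+1}
b = tau / dt**2 - 1.0 / (2.0 * dt)        # coefficient of u^{n-1}
for _ in range(nt):
    lap = np.zeros_like(u_curr)
    lap[1:-1] = (u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]) / dx**2
    u_next = (alpha * lap + 2.0 * tau / dt**2 * u_curr - b * u_prev) / a
    u_next[0] = u_next[-1] = 0.0          # Dirichlet boundaries
    u_prev, u_curr = u_curr, u_next
```

Unlike the parabolic Fourier case, the pulse splits into two damped fronts travelling at the finite speed sqrt(alpha/tau), which is the behaviour the difference schemes in the paper must resolve stably.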
Abstract:
Different kinds of algorithms can be chosen to compute elementary functions. Among them, the shift-and-add algorithms are worth mentioning because they have been specifically designed to be very simple and to save computer resources. In fact, almost the only operations these methods involve are additions and shifts, which can be performed easily and efficiently by a digital processor. Shift-and-add algorithms achieve fairly good precision with low-cost iterations. The best-known algorithm of this type is CORDIC, which can approximate a wide variety of functions with only a slight change in its iterations. In this paper, we analyze the requirements of some engineering and industrial problems in terms of the types of operands and the functions to approximate. We then propose the application of shift-and-add algorithms based on CORDIC to these problems, and compare the different methods in terms of the precision of the results and the number of iterations required.
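As a sketch of the shift-and-add idea, a minimal floating-point CORDIC in rotation mode might look as follows; the arctangent table and gain constant are the standard CORDIC values, while the iteration count is a generic choice, not one taken from the paper (a hardware version would use fixed-point shifts instead of multiplications by powers of two).

```python
import math

# Minimal CORDIC sketch in rotation mode: approximates sin and cos of an
# angle within the convergence range (about +/- 1.74 rad) using a table of
# arctangent constants and sign decisions only.
N = 32
ATAN_TABLE = [math.atan(2.0 ** -i) for i in range(N)]
K = 1.0
for i in range(N):
    K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))   # inverse of the CORDIC gain

def cordic_sin_cos(theta):
    x, y, z = 1.0, 0.0, theta
    for i in range(N):
        d = 1.0 if z >= 0.0 else -1.0
        # Pseudo-rotation: in fixed-point hardware these are shifts and adds.
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ATAN_TABLE[i]
    return y * K, x * K        # (sin, cos) after removing the CORDIC gain

s, c = cordic_sin_cos(0.5)
```

Each iteration contributes roughly one bit of precision, which is why the precision/iteration trade-off compared in the paper is the natural figure of merit for this family of algorithms.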
Abstract:
Wood is a natural and traditional building material, as popular today as ever, and offers several advantages. Physically, wood is strong and stiff, yet compared with materials such as steel it is light and flexible. Wood absorbs sound very effectively and is a relatively good heat insulator. However, dry wood burns quite easily and releases a great deal of heat energy. Its main disadvantage is its combustibility when exposed to fire: it ignites at roughly 200–400°C. After fire exposure, it is necessary to determine whether charred wooden structures are safe for future use. Design methods require computer modelling to predict the fire exposure and the capacity of structures to resist those actions. Large- or small-scale experimental tests are also necessary to calibrate and verify the numerical models. The thermal model is essential for wood structures exposed to fire, because it predicts the charring rate as a function of fire exposure. The charring rate of most structural wood elements can be obtained by simple calculations, but the task is more complicated when the fire exposure is non-standard or the wood elements are protected with other materials. In this work, the authors present different case studies using numerical models that will help professionals analyse wood elements and identify the information needed to decide whether charred structures remain fit for use. Different thermal models representing wooden cellular slabs, used in building construction for ceiling or flooring compartments, will be analysed under different fire scenarios (with standard fire curve exposure). The same numerical models, with insulation material placed inside the wooden cellular slabs, will be tested to compare the fire resistance times and the charring rate calculations.
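The "simple calculations" mentioned above can be sketched with a constant charring rate, as in the one-dimensional design value of EN 1995-1-2 (about 0.65 mm/min for solid softwood under standard fire exposure). The beam geometry, exposure time and three-sided exposure below are illustrative assumptions; real assessments of non-standard or protected elements need the thermal models described in the abstract.

```python
# Simplified residual-section check under standard fire exposure,
# assuming a constant one-dimensional charring rate (solid softwood).
BETA_0 = 0.65  # mm/min, EN 1995-1-2 one-dimensional charring rate

def char_depth(minutes, beta=BETA_0):
    """Charred depth in mm after the given minutes of standard fire."""
    return beta * minutes

def residual_section(width_mm, depth_mm, minutes, exposed_sides=3):
    """Residual cross-section of a beam charred on three sides
    (both lateral faces and the bottom), ignoring corner rounding."""
    d = char_depth(minutes)
    w = width_mm - 2 * d if exposed_sides >= 3 else width_mm - d
    h = depth_mm - d
    return max(w, 0.0), max(h, 0.0)

# A 100 x 200 mm beam after 30 min of standard fire exposure:
w, h = residual_section(100.0, 200.0, 30.0)
```

The structural check is then performed on the residual section (w, h); for non-standard fire curves or insulated slabs the charring rate is no longer constant, which is where the numerical models in the case studies come in.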
Abstract:
Three-dimensional computer modelling techniques are being used to develop a probabilistic model of turbulence-related spray transport around various plant architectures, to investigate the influence of plant architecture and crop geometry on the spray application process. Plant architecture models that use a set of growth rules expressed in the Lindenmayer systems (L-systems) formalism have been developed and programmed using L-studio software. Modules have been added to simulate the movement of droplets through the air and their deposition on the plant canopy. Deposition of spray on an artificial plant structure was measured in the wind tunnel at the University of Queensland, Gatton campus, and the results were compared to the model simulation. Further trials are planned to measure the deposition of spray droplets on various crop and weed species, and the results from these trials will be used to refine and validate the combined spray and plant architecture model.
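The L-systems formalism underlying these plant architecture models is a parallel string-rewriting system. The minimal sketch below uses a classic textbook example by Lindenmayer; the axiom and rules are illustrative, not those used in the L-studio models of the paper.

```python
# Deterministic context-free L-system: every symbol of the string is
# rewritten in parallel at each derivation step.
def rewrite(axiom, rules, steps):
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)  # symbols without a rule are copied
    return s

# Lindenmayer's classic "algae" system: A -> AB, B -> A.
rules = {"A": "AB", "B": "A"}
result = rewrite("A", rules, 5)
```

In L-studio, such symbols are interpreted as modules (internodes, leaves, branching commands), so the grown string encodes the three-dimensional canopy geometry on which droplet deposition is then simulated.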
Abstract:
This thesis is devoted to tribology at the head-to-tape interface of linear tape recording systems, with the OnStream ADR™ system used as the experimental platform. Combining experimental characterisation with computer modelling, a comprehensive picture of the mechanisms involved in a tape recording system is drawn. The work is designed to isolate the mechanisms responsible for the physical spacing between head and tape, with the aim of minimising spacing losses and errors and optimising signal output. Standard heads (used in current ADR products) and prototype heads (DLC- and SPL-coated heads, and dummy heads built from Al2O3-TiC and alternative single-phase ceramics intended to constitute the head's tape-bearing surface) are tested in controlled environments for up to 500 hours (exceptionally 1000 hours). Evidence of wear on the standard head is mainly observable as preferential wear of the TiC phase of the Al2O3-TiC ceramic. The TiC grains are believed to delaminate through a fatigue wear mechanism, a hypothesis further supported by modelling, which locates the maximum von Mises equivalent stress at a depth equivalent to the TiC recession (20 to 30 nm). Debris from the delaminated TiC is moreover found trapped within the pole-tip recession, and is therefore assumed to provide three-body abrasive particles, further increasing the pole-tip recession. Iron-rich stain is found over the cycled standard head surface (preferentially over the pole-tip and, to a lesser extent, over the TiC grains) under all environmental conditions except high temperature/humidity, where mainly organic stain is apparent. Temperature (local or global) affects staining rate and appearance; stain transfer is generally promoted at high temperature. Humidity affects transfer rate and quantity; low humidity produces thinner stains at a higher rate. Stain generally targets head materials with high electrical conductivity, i.e. Permalloy and TiC.
Stains are found to decrease friction at the head-to-tape interface, delay the hollowing-out of the TiC recession, and act as a protective soft coating reducing the pole-tip recession. This comes, however, at the expense of an additional spacing at the head-to-tape interface of the order of 20 nm. Two kinds of wear-resistant coating are tested: diamond-like carbon (DLC) and a superprotective layer (SPL), 10 nm and 20 to 40 nm thick, respectively. The DLC coating disappears within 100 hours, possibly due to abrasive and fatigue wear. SPL coatings are generally more resistant, particularly at high temperature and low humidity, possibly in relation to stain transfer. The 20 nm coatings are found to depend on the wear behaviour of the substrate, whereas the 40 nm coatings depend on the adhesive strength of the coating/substrate interface. These observations seem to locate the wear-driving forces 40 nm below the surface, and hence indicate that for coatings in the 10 nm thickness range, i.e. compatible with high-density recording, the resistance of the substrate must be taken into account. Single-phase ceramics, as candidates for a wear-resistant tape-bearing surface, are tested in the form of full-contour dummy heads. The absence of a second phase eliminates the preferential wear observed at the Al2O3-TiC surface; very low wear rates and no evidence of brittle fracture are observed.
Abstract:
The work described in this thesis is an attempt to provide improved understanding of several factors affecting diffusion in hydrated cement pastes and to aid the prediction of ionic diffusion processes in cement-based materials. The effect of pore structure on diffusion was examined by means of comparative diffusion studies of quaternary ammonium ions with different ionic radii. Diffusivities of these ions in hydrated pastes of ordinary portland cement, with or without the addition of fly ash, were determined by a quasi-steady-state technique. The restriction imposed by the pore geometry on diffusion was evaluated from the change of diffusivity in response to the change of ionic radius. The pastes were prepared at three water-cement ratios: 0.35, 0.50 and 0.65. Attempts were made to study the effect of surface charge, or the electrochemical double layer at the pore/solution interface, on ionic diffusion. One approach was to evaluate the zeta potentials of hydrated cement pastes through streaming potential measurements. Another was a comparative study of the diffusion kinetics of chloride and dissolved oxygen in hydrated pastes of ordinary portland cement with the addition of 0 and 20% fly ash. An electrochemical technique for the determination of oxygen diffusivity was also developed. Non-steady-state diffusion of sodium, potassium, chloride and hydroxyl ions in hydrated ordinary portland cement paste of water-cement ratio 0.5 was studied with the aid of computer modelling. The kinetics of both diffusion and ionic binding were considered for the characterization of the concentration profiles by Fick's first and second laws. The effect of the electrostatic interactions between ions on the overall diffusion rates was also considered. A general model for the prediction of ionic diffusion processes in cement-based materials has been proposed.
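Non-steady-state ingress of this kind is governed by Fick's second law, dc/dt = D * d2c/dx2. The sketch below solves it with an explicit finite-difference scheme for a slab with a constant surface concentration; the diffusivity, depth and time span are illustrative assumptions, not values determined in the thesis, and binding and electrostatic effects are omitted.

```python
import numpy as np

# Explicit scheme for Fick's second law in a slab: constant (normalised)
# surface concentration at x = 0, zero-flux condition at depth L.
D = 5e-12                 # m^2/s, apparent diffusivity (assumed)
L = 0.05                  # m, slab depth
nx = 51
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D      # within the explicit stability limit dt <= dx^2/(2D)

c = np.zeros(nx)
c[0] = 1.0                # surface boundary condition
for _ in range(5000):
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
    c[0] = 1.0            # Dirichlet surface boundary
    c[-1] = c[-2]         # zero-flux far boundary
```

The resulting monotone concentration profile c(x) is the kind of curve that, in the thesis, is characterised jointly with ionic binding kinetics rather than by pure Fickian diffusion.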
Abstract:
The research is concerned with the application of computer simulation techniques to study the performance of reinforced concrete columns in a fire environment. The effect of three different concrete constitutive models, incorporated in the computer simulation, on the structural response of reinforced concrete columns exposed to fire is investigated. The material models differ mainly in the formulation of the mechanical properties of concrete. The results from the simulation clearly illustrate that a more realistic response of a reinforced concrete column exposed to fire is given by a constitutive model with transient creep or an appropriate strain effect. The relative effect of the three concrete material models is assessed through a parametric study, carried out using the results from a series of analyses on columns heated on three sides, which produces substantial thermal gradients. Three different loading conditions were used on the column: axial loading, and eccentric loading inducing moments in the same sense and in the opposite sense to those induced by the thermal gradient. An axially loaded column heated on four sides was also considered. The computer modelling technique adopted separated the thermal and structural responses into two distinct computer programs. A finite element heat transfer analysis was used to determine the thermal response of the reinforced concrete columns when exposed to the ISO 834 furnace environment. The temperature distribution histories obtained were then used in conjunction with a structural response program. The effect of the occurrence of spalling on the structural behaviour of reinforced concrete columns is also investigated. There is general recognition of the potential problems of spalling, but no real investigation into what effect spalling has on the fire resistance of reinforced concrete members.
In an attempt to address this situation, a method has been developed to model concrete columns exposed to fire which incorporates the effect of spalling. A total of 224 computer simulations were undertaken by varying the amount of concrete lost during a specified period of exposure to fire. An array of six percentages of spalling was chosen for one range of simulations, while a two-stage progressive spalling regime was used for a second range. The quantification of the reduction in fire resistance of the columns against the amount of spalling, the heating and loading patterns, and the time at which the concrete spalls indicates that the amount of spalling is the most significant variable in the reduction of fire resistance.
Abstract:
This thesis covers both experimental and computer investigations into the dynamic behaviour of mechanical seals. The literature survey shows no investigations of the effect of vibration on mechanical seals of the type common in the various process industries. Typical seal designs are discussed. A form of Reynolds' equation has been developed that permits the calculation of stiffness and damping coefficients for the fluid film. The dynamics of the mechanical seal floating ring have been investigated using approximate formulae, and it has been shown that the floating ring behaves as a rigid body. Some elements, such as the radial damping due to the fluid film, are small and may be neglected. The equations of motion of the floating ring have been developed using the significant elements, and a solution technique is described. The stiffness and damping coefficients of nitrile rubber O-rings have been obtained. These show a wide variation, with a constant stiffness up to 60 Hz. The importance of the effect of temperature on these properties is discussed. An unsuccessful test rig is described in the appendices. The dynamic behaviour of a mechanical seal has been investigated experimentally, including the effect of changes of speed, sealed pressure and seal geometry. The results, as expected, show that high vibration levels result in both high leakage and high seal temperatures. Computer programs have been developed to solve Reynolds' equation and the equations of motion. Two solution techniques for the latter program were developed; the unsuccessful one is described in the appendices. Some stability problems were encountered, but despite these the solution shows good agreement with some of the experimental conditions. Possible reasons for the discrepancies are discussed. Various suggestions for future work in this field are given, including the combining of the programs and more extensive experimental and computer modelling.
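To illustrate the kind of fluid-film computation involved, the sketch below solves a one-dimensional incompressible Reynolds' equation, d/dx(h^3 dp/dx) = 6*mu*U*dh/dx, for a converging film with central differences. The geometry and operating values are illustrative assumptions, not the seal or the particular form of Reynolds' equation developed in the thesis.

```python
import numpy as np

# 1-D Reynolds' equation on a linearly converging film, ambient pressure
# at both ends, discretised with a conservative central-difference scheme.
mu, U = 1.0e-3, 5.0                  # viscosity (Pa.s), sliding speed (m/s)
nx = 201
x = np.linspace(0.0, 0.01, nx)       # 10 mm long film
dx = x[1] - x[0]
h = 20e-6 - 1.0e-3 * x               # film thickness: 20 um tapering to 10 um

A = np.zeros((nx, nx))
b = np.zeros(nx)
A[0, 0] = A[-1, -1] = 1.0            # p = 0 at both ends
for i in range(1, nx - 1):
    hp = ((h[i] + h[i + 1]) / 2.0) ** 3   # h^3 at the i+1/2 face
    hm = ((h[i] + h[i - 1]) / 2.0) ** 3   # h^3 at the i-1/2 face
    A[i, i - 1], A[i, i], A[i, i + 1] = hm, -(hm + hp), hp
    b[i] = 6.0 * mu * U * (h[i + 1] - h[i - 1]) / 2.0 * dx

p = np.linalg.solve(A, b)                          # film pressure (Pa)
load = float(np.sum(0.5 * (p[:-1] + p[1:]) * dx))  # load per unit width (N/m)
```

Perturbing the film thickness and recomputing the load in this way is one route to the film stiffness and damping coefficients mentioned above.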
Abstract:
The processing conducted by the visual system requires the combination of signals that are detected at different locations in the visual field. The processes by which these signals are combined are explored here using psychophysical experiments and computer modelling. Most of the work presented in this thesis is concerned with the summation of contrast over space at detection threshold. Previous investigations of this sort have been confounded by the inhomogeneity in contrast sensitivity across the visual field. Experiments performed in this thesis find that the decline in log contrast sensitivity with eccentricity is bilinear, with an initial steep fall-off followed by a shallower decline. This decline is scale-invariant for spatial frequencies of 0.7 to 4 c/deg. A detailed map of the inhomogeneity is developed, and applied to area summation experiments both by incorporating it into models of the visual system and by using it to compensate stimuli in order to factor out the effects of the inhomogeneity. The results of these area summation experiments show that the summation of contrast over area is spatially extensive (occurring over 33 stimulus carrier cycles), and that summation behaviour is the same in the fovea, parafovea, and periphery. Summation occurs according to a fourth-root summation rule, consistent with a “noisy energy” model. This work is extended to investigate the visual deficit in amblyopia, finding that area summation is normal in amblyopic observers. Finally, the methods used to study the summation of threshold contrast over area are adapted to investigate the integration of coherent orientation signals in a texture. The results of this study are described by a two-stage model, with a mandatory local combination stage followed by flexible global pooling of these local outputs. In each study, the results suggest a more extensive combination of signals in vision than has been previously understood.
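The fourth-root summation rule mentioned above is a Minkowski pooling of the sensitivities of the stimulated regions: pooled sensitivity grows as n^(1/4) with the number of equally sensitive regions, so doubling stimulus area lowers threshold by a factor of 2^(1/4). The sketch below is a generic illustration of that rule, not the full "noisy energy" model of the thesis.

```python
# Minkowski pooling of per-region contrast sensitivities; exponent 4
# gives the fourth-root summation rule.
def pooled_sensitivity(sensitivities, exponent=4.0):
    return sum(s ** exponent for s in sensitivities) ** (1.0 / exponent)

# Doubling the number of equally sensitive regions (illustrative values)
# improves sensitivity by 2**0.25, about 0.075 log units.
small = pooled_sensitivity([1.0] * 4)
large = pooled_sensitivity([1.0] * 8)
ratio = large / small          # == 2 ** 0.25
```

With the eccentricity-dependent sensitivity map described in the thesis, the per-region values would no longer be equal, which is why compensating the stimuli for the inhomogeneity matters in the area summation experiments.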
Abstract:
Nowadays, road safety and traffic congestion are major concerns worldwide, which is why research on vehicular communication is vital. This paper analyses the impact of context information on existing popular rate adaptation algorithms. Our simulations, based on the IEEE 802.11p wireless standard, were done in MATLAB by observing the impact of context information on these algorithms, for both static and mobile cases. In static scenarios, vehicles behave much as in an office network: nodes transmit without moving and without defined positions. In the mobile case, vehicles move with uniformly selected speeds and randomized positions. Network performance is analysed using context information. Our results show that, under mobility, using context information can improve system performance for all three rate adaptation algorithms. This can be explained by range checking: when many vehicles are out of communication range, fewer vehicles contend for network resources, thereby improving network performance. © 2013 IEEE.
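To make the object of study concrete, the sketch below implements a minimal ARF-style rate adaptation loop of the general kind compared in such studies: step the data rate up after a run of successful transmissions and down after consecutive failures. The thresholds are illustrative assumptions; the rate set is the IEEE 802.11p data-rate ladder, and the specific algorithms evaluated in the paper are not reproduced here.

```python
RATES_MBPS = [3, 4.5, 6, 9, 12, 18, 24, 27]   # IEEE 802.11p data rates

class ArfAdapter:
    """ARF-style rate adaptation: up after `up_after` consecutive
    successes, down after `down_after` consecutive failures."""
    def __init__(self, up_after=10, down_after=2):
        self.idx, self.succ, self.fail = 0, 0, 0
        self.up_after, self.down_after = up_after, down_after

    def rate(self):
        return RATES_MBPS[self.idx]

    def on_tx(self, success):
        if success:
            self.succ, self.fail = self.succ + 1, 0
            if self.succ >= self.up_after and self.idx < len(RATES_MBPS) - 1:
                self.idx, self.succ = self.idx + 1, 0
        else:
            self.fail, self.succ = self.fail + 1, 0
            if self.fail >= self.down_after and self.idx > 0:
                self.idx, self.fail = self.idx - 1, 0

adapter = ArfAdapter()
for _ in range(10):
    adapter.on_tx(True)       # ten successes step the rate up one notch
```

Context information such as range checking acts before this loop: transmissions to out-of-range vehicles are suppressed rather than counted as failures, which is how contention is reduced in the mobile scenarios.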
Abstract:
We investigate an application of the method of fundamental solutions (MFS) to the backward heat conduction problem (BHCP). We extend the MFS proposed in Johansson and Lesnic (2008) [5] and Johansson et al. (in press) [6] for one- and two-dimensional direct heat conduction problems, respectively, with the sources placed outside the space domain of interest. Theoretical properties of the method, as well as numerical investigations, are included, showing that accurate and stable results can be obtained efficiently at small computational cost.
Abstract:
Although crisp data are fundamentally indispensable for determining the profit Malmquist productivity index (MPI), the observed values in real-world problems are often imprecise or vague. These imprecise or vague data can be suitably characterized with fuzzy and interval methods. In this paper, we reformulate the conventional profit MPI problem as an imprecise data envelopment analysis (DEA) problem, and propose two novel methods for measuring the overall profit MPI when the inputs, outputs, and price vectors are fuzzy or vary in intervals. We develop a fuzzy version of the conventional MPI model by using a ranking method, and solve the model with a commercial off-the-shelf DEA software package. In addition, we define an interval for the overall profit MPI of each decision-making unit (DMU) and divide the DMUs into six groups according to the intervals obtained for their overall profit efficiency and MPIs. We also present two numerical examples to demonstrate the applicability of the two proposed models and exhibit the efficacy of the procedures and algorithms. © 2011 Elsevier Ltd.
Abstract:
The paper presents the history, structure and ongoing activities of the Institute for Bulgarian Language of the Bulgarian Academy of Sciences.
Abstract:
In the context of discrete districting problems with geographical constraints, we demonstrate that determining an (ex post) unbiased districting, which requires that the number of representatives of a party should be proportional to its share of votes, turns out to be a computationally intractable (NP-complete) problem. This raises doubts as to whether an independent jury will be able to come up with a “fair” redistricting plan in case of a large population, that is, there is no guarantee for finding an unbiased districting (even if such exists). We also show that, in the absence of geographical constraints, an unbiased districting can be implemented by a simple alternating-move game among the two parties.
Abstract:
A distance-based inconsistency indicator, defined by the third author for the consistency-driven pairwise comparisons method, is extended to the incomplete case. The corresponding optimization problem is transformed into an equivalent linear programming problem. The results can be applied during the process of filling in the matrix, as the decision maker gets automatic feedback. As soon as a serious error occurs among the matrix elements, even due to a misprint, a significant increase in the inconsistency index is reported. High inconsistency can thus be signalled not only at the end of the process of filling in the matrix, but also during its completion. Numerical examples are also provided.
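For a complete pairwise comparison matrix, a distance-based indicator of this kind can be sketched as follows: for every triad (i, j, k), measure how far a[i][k] is from the consistent value a[i][j] * a[j][k], and report the worst triad. This is a common formulation of the distance-based index; the incomplete-matrix extension and the linear programming reformulation of the paper are not shown, and the matrices below are illustrative.

```python
from itertools import combinations

def triad_inconsistency(x, y, z):
    # x ~ a_ij, y ~ a_jk, z ~ a_ik; perfect consistency means z == x * y.
    return min(abs(1.0 - z / (x * y)), abs(1.0 - (x * y) / z))

def inconsistency_index(a):
    """Worst triad inconsistency over a complete positive reciprocal matrix."""
    n = len(a)
    return max(
        triad_inconsistency(a[i][j], a[j][k], a[i][k])
        for i, j, k in combinations(range(n), 3)
    )

# A fully consistent matrix has index 0; a "misprint" (5 instead of 4 for
# a[0][2]) is immediately flagged by a jump in the index.
consistent = [[1, 2, 4], [0.5, 1, 2], [0.25, 0.5, 1]]
mistyped   = [[1, 2, 5], [0.5, 1, 2], [0.25, 0.5, 1]]
```

Because the index is a maximum over triads, it can be monitored after each new entry, which is exactly the automatic-feedback use during matrix completion described in the abstract.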