905 results for Computer simulations
Abstract:
This thesis considers the computer simulation of moist agglomerate collisions using the discrete element method (DEM). The study is confined to pendular-state moist agglomerates, in which liquid is present as either absorbed immobile films or pendular liquid bridges, and the interparticle force is modelled as the adhesive contact force plus the interstitial liquid bridge force. Algorithms used to model the contact force due to surface adhesion, tangential friction and particle deformation have been derived by other researchers and are briefly described in the thesis. A theoretical study of the pendular liquid bridge force between spherical particles has been made, and algorithms for modelling the pendular liquid bridge force between spherical particles have been developed and incorporated into the Aston version of the DEM program TRUBAL. It has been found that, for static liquid bridges, the more explicit criterion for specifying the stable solution and the critical separation is provided by the total free energy. The critical separation is given to a good approximation by the cube root of the liquid bridge volume, and the 'gorge method' of evaluation based on the toroidal approximation leads to errors in the calculated force of less than 10%. Three-dimensional computer simulations of an agglomerate impacting orthogonally on a wall are reported. The results demonstrate the effectiveness of adding viscous binder to prevent attrition, a common practice in process engineering. Results of simulated agglomerate-agglomerate collisions show that, for collinear agglomerate impacts, there is an optimum velocity which results in a near-spherical shape of the coalesced agglomerate and, hence, minimises attrition due to subsequent collisions. The relationship between the optimum impact velocity and the liquid viscosity and surface tension is illustrated. The effect of varying the angle of impact on the coalescence/attrition behaviour is also reported. (DX 187, 340).
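The cube-root rupture rule quoted above is simple enough to state in code. A minimal sketch, assuming nondimensionalised quantities (the variable names are ours, not the thesis'):

```python
def critical_separation(bridge_volume):
    """Critical rupture separation of a static pendular liquid bridge,
    approximated (as the thesis finds) by the cube root of the liquid
    bridge volume; both quantities in consistent dimensionless units."""
    return bridge_volume ** (1.0 / 3.0)
```

For a dimensionless bridge volume of 0.008, for example, the bridge would be predicted to rupture at a separation of about 0.2 in the same units.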
Abstract:
Purpose: Drafting in cycling influences the collective behaviour of pelotons. Whilst evidence for collective behaviour in competitive running events exists, it is not clear whether this results from energetic savings conferred by drafting. This study modelled the effects of drafting on behaviour in elite 10,000 m runners. Methods: Using performance data from a men’s elite 10,000 m track running event, computer simulations were constructed using Netlogo 5.1 to test the effects of three different drafting quantities on collective behaviour: no drafting; drafting up to 3 m behind with up to ~8% energy savings (a realistic running draft); and drafting up to 3 m behind with up to 38% energy savings (a realistic cycling draft). Three measures of collective behaviour were analysed in each condition: mean speed, mean group stretch (distance between first and last placed runner), and Runner Convergence Ratio (RCR), which represents the degree of drafting benefit obtained by the follower in a pair of coupled runners. Results: Mean speeds were 6.32±0.28 m.s-1, 5.57±0.18 m.s-1, and 5.51±0.13 m.s-1 in the cycling draft, runner draft, and no draft conditions respectively (all P<0.001). RCR was lower in the cycling draft condition, but did not differ between the other two. Mean stretch did not differ between conditions. Conclusions: Collective behaviours observed in running events cannot be fully explained by the energetic savings conferred by realistic drafting benefits. They may therefore result from other, possibly psychological, processes. The benefits or otherwise of engaging in such behaviour are, as yet, unclear.
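The two drafting conditions above hinge on how the energy saving scales with following distance. A toy illustration, assuming a linear decay of the saving from its maximum at 0 m to zero at 3 m behind the leader (the paper's actual NetLogo rules may differ):

```python
def drafting_saving(distance_m, max_saving):
    """Fractional energy saving for a follower at distance_m behind a
    leader. Toy model: the saving decays linearly from max_saving at
    0 m to zero at 3 m, the drafting range used in the simulations.
    max_saving would be ~0.08 (running draft) or 0.38 (cycling draft)."""
    if distance_m < 0.0 or distance_m >= 3.0:
        return 0.0
    return max_saving * (1.0 - distance_m / 3.0)
```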
Abstract:
With the accelerating trend of global warming, the thermal behavior of existing buildings, which were typically designed on the basis of current weather data, may not be able to cope with the future climate. This paper quantifies, through computer simulations, the increased cooling loads imposed by potential global warming and the probable indoor temperature increases due to a possibly undersized air-conditioning system. It is found from the sample office building examined that existing buildings would generally be able to adapt to the increasing warmth of the year 2030 Low and High scenario projections and the year 2070 Low scenario projection. However, for the year 2070 High scenario, the study indicates that existing office buildings in all capital cities except Hobart will suffer from overheating problems. When the annual average temperature increase exceeds 2°C, the risk of current office buildings being subject to overheating increases significantly. For existing buildings designed for current climate conditions, it is shown that there is a nearly linear correlation between the increase of average external air temperature and the increase of building cooling load. For new buildings, in which possible global warming is taken into account at the design stage, a 28-59% increase of cooling capacity under the 2070 High scenario would be required to bring the building thermal comfort level up to an acceptable standard.
Abstract:
Multicarrier code division multiple access (MC-CDMA) is a very promising candidate for the multiple access scheme in fourth generation wireless communication systems. During asynchronous transmission, multiple access interference (MAI) is a major challenge for MC-CDMA systems and significantly affects their performance. The main objectives of this thesis are to analyze the MAI in asynchronous MC-CDMA, and to develop robust techniques to reduce the MAI effect. Focus is first placed on the statistical analysis of MAI in asynchronous MC-CDMA. A new statistical model of MAI is developed, in which the derivation of MAI can be applied to different distributions of timing offset and the MAI power is modelled as a Gamma distributed random variable. By applying the new statistical model of MAI, a new computer simulation model is proposed. This model is based on modelling a multiuser system as a single user system followed by an additive noise component representing the MAI, which enables the new simulation model to significantly reduce the computation load during computer simulations. MAI reduction using the slow frequency hopping (SFH) technique is the topic of the second part of the thesis. Two subsystems are considered. The first subsystem involves subcarrier frequency hopping as a group, which is referred to as GSFH/MC-CDMA. In the second subsystem, the condition of group hopping is dropped, resulting in a more general system, namely individual subcarrier frequency hopping MC-CDMA (ISFH/MC-CDMA). This research found that, with the introduction of SFH, both GSFH/MC-CDMA and ISFH/MC-CDMA systems generate less MAI power than the basic MC-CDMA system during asynchronous transmission. Because of this, both SFH systems are shown to outperform MC-CDMA in terms of BER. This improvement, however, is at the expense of spectral widening.
In the third part of this thesis, base station polarization diversity, as another MAI reduction technique, is introduced to asynchronous MC-CDMA. The combined system is referred to as Pol/MC-CDMA. In this part a new optimum combining technique, namely maximal signal-to-MAI ratio combining (MSMAIRC), is proposed to combine the signals from the two base station antennas. With the application of MSMAIRC, and in the absence of additive white Gaussian noise (AWGN), the resulting signal-to-MAI ratio (SMAIR) is not only maximized but also independent of the cross polarization discrimination (XPD) and the antenna angle. When AWGN is present, the performance of MSMAIRC is still affected by the XPD and antenna angle, but to a much lesser degree than the traditional maximal ratio combining (MRC). Furthermore, this research found that the BER performance of Pol/MC-CDMA can be further improved by changing the angle between the two receiving antennas. Hence the optimum antenna angles for both MSMAIRC and MRC are derived and their effects on the BER performance are compared. With the derived optimum antenna angle, the Pol/MC-CDMA system is able to attain the lowest BER for a given XPD.
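The reduced-complexity simulation idea in the first part of the thesis (replace the multiuser system with a single-user signal plus an additive term whose power is Gamma distributed) can be sketched as follows; the shape and scale values are illustrative placeholders, not figures from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mai_power(shape_k, scale_theta, n_symbols):
    """Draw per-symbol MAI power from a Gamma distribution, as in the
    thesis' statistical MAI model (parameter values are illustrative)."""
    return rng.gamma(shape_k, scale_theta, size=n_symbols)

def received_samples(tx_symbols, mai_power):
    """Single-user signal plus zero-mean Gaussian noise whose per-symbol
    variance equals the drawn MAI power -- the additive-MAI shortcut
    that avoids simulating every interfering user explicitly."""
    return tx_symbols + rng.normal(0.0, np.sqrt(mai_power))
```

The computational saving comes from drawing one noise sample per symbol instead of spreading, delaying and summing every interferer's waveform.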
Abstract:
As climate change will entail new conditions for the built environment, the thermal behaviour of air-conditioned office buildings may also change. Using building computer simulations, the impact of warmer weather is evaluated on the design and performance of air-conditioned office buildings in Australia, including the increased cooling loads and probable indoor temperature increases due to a possibly undersized air-conditioning system, as well as the possible change in energy use. It is found that existing office buildings would generally be able to adapt to the increasing warmth of the year 2030 Low and High scenario projections and the year 2070 Low scenario projection. However, for the 2070 High scenario, the study indicates that existing office buildings in all capital cities of Australia would suffer from overheating problems. For existing buildings designed for current climate conditions, it is shown that there is a nearly linear correlation between the increase of average external air temperature and the increase of building cooling load. For new buildings designed for warmer scenarios, a 28-59% increase of cooling capacity under the 2070 High scenario would be required.
Abstract:
An algorithm based on the concept of Kalman filtering is proposed in this paper for the estimation of power system signal attributes such as amplitude, frequency and phase angle. This technique can be used in protection relays, digital AVRs, DSTATCOMs, FACTS devices and other power electronics applications. Furthermore, this algorithm is particularly suitable for the integration of distributed generation sources into power grids, where fast and accurate detection of small variations of signal attributes is needed. Practical considerations such as the effect of noise, higher order harmonics, and computational issues of the algorithm are considered and tested in the paper. Several computer simulations are presented to highlight the usefulness of the proposed approach. Simulation results show that the proposed technique can simultaneously estimate the signal attributes, even if the signal is highly distorted due to the presence of non-linear loads and noise.
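As a rough illustration of the estimation idea, the sketch below is a generic textbook Kalman filter tracking the in-phase/quadrature components of a sinusoid at a known frequency, from which amplitude and phase follow; it is not the paper's exact algorithm, and `q` and `r` are assumed noise covariances:

```python
import numpy as np

def kalman_phasor(z, omega, dt, q=1e-6, r=0.01):
    """Track x1 = A*cos(phi), x2 = A*sin(phi) for the measurement model
    z[k] = x1*cos(omega*k*dt) - x2*sin(omega*k*dt) + noise, using a
    linear Kalman filter with a static (identity) state transition."""
    x = np.zeros(2)
    P = np.eye(2)
    for k, zk in enumerate(z):
        t = k * dt
        H = np.array([np.cos(omega * t), -np.sin(omega * t)])
        P = P + q * np.eye(2)                 # predict: state assumed static
        S = H @ P @ H + r                     # innovation variance (scalar)
        K = P @ H / S                         # Kalman gain
        x = x + K * (zk - H @ x)              # measurement update
        P = (np.eye(2) - np.outer(K, H)) @ P  # covariance update
    amp = np.hypot(x[0], x[1])
    phase = np.arctan2(x[1], x[0])
    return amp, phase
```

Frequency estimation, as in the paper, would require augmenting the state; this sketch assumes the fundamental frequency is known.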
Abstract:
This paper presents an analysis of the phasor measurement method for tracking the fundamental power frequency, to show whether it has the performance necessary to cope with the requirements of power system protection and control. In this regard, several computer simulations representing the conditions of a typical power system signal, especially signals highly distorted by harmonics, noise and offset, are provided to evaluate the response of the Phasor Measurement (PM) technique. A new method, which can shorten the estimation delay, has also been proposed for the PM method to work with signals free of even-order harmonics.
Abstract:
This paper discusses the development of a dynamic model for a torpedo-shaped submarine. Expressions for hydrostatic, added mass, hydrodynamic, control surface and propeller forces and moments are derived from first principles. Experimental data obtained from flume tests of the submarine are inserted into the model in order to provide computer simulations of the open loop behavior of the system.
Abstract:
Introduction: Ovine models are widely used in orthopaedic research. To better understand the impact of orthopaedic procedures, computer simulations are necessary. 3D finite element (FE) models of bones allow implant designs to be investigated mechanically, thereby reducing mechanical testing. Hypothesis: We present the development and validation of an ovine tibia FE model for use in the analysis of tibia fracture fixation plates. Material & Methods: Mechanical testing of the tibia consisted of an offset 3-point bend test with three repetitions of loading to 350 N and return to 50 N. Tri-axial stacked strain gauges were applied to the anterior and posterior surfaces of the bone, and two rigid bodies, each consisting of eight infrared active markers, were attached to the ends of the tibia. Positional measurements were taken with a FARO arm 3D digitiser. The FE model was constructed with both geometry and material properties derived from CT images of the bone. The elasticity-density relationship used for material property determination was validated separately using mechanical testing. This model was then transformed to the same coordinate system as the in vitro mechanical test and loads were applied. Results: Comparison between the mechanical testing and the FE model showed good correlation in surface strains (difference: anterior 2.3%, posterior 3.2%). Discussion & Conclusion: This approach provides a simple method for generating subject-specific FE models from CT scans. The use of the CT data set for both the geometry and the material properties ensures a more accurate representation of the specific bone. This is reflected in the similarity of the surface strain results.
Abstract:
The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of the stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm, including a modular structure, easily guaranteed stability, less sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals, by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals, by computing the tracking model of these coefficients for the stochastic gradient lattice algorithm on average.
The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using these analytical results, we show a new property, the polynomial-order reducing property of adaptive lattice filters, which may be used to reduce the order of the polynomial phase of input frequency modulated signals. Two examples show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that using this technique a better probability of detection is obtained for the reduced-order phase signals compared to that of the traditional energy detector. It is also shown empirically that the distribution of the gradient noise in the first adaptive reflection coefficient approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that this technique achieves a lower mean square error for the estimated frequencies at high signal-to-noise ratios in comparison to the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions. The concept of alpha-stable distributions is first introduced. We discuss that the stochastic gradient algorithm, which performs well for finite variance input signals (like frequency modulated signals in noise), does not converge quickly for infinite variance stable processes (due to its use of the minimum mean-square error criterion).
To deal with such problems, the concepts of the minimum dispersion criterion and fractional lower order moments, and recently developed algorithms for stable processes, are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean p-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower order moments. Simulation results show that, using the proposed algorithms, faster convergence speeds are achieved for parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness, in comparison to many other algorithms. We also discuss the effect of the impulsiveness of stable processes on generating misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated using extensive computer simulations only.
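The flavour of the proposed algorithms can be conveyed with the simpler transversal least-mean p-norm (LMP) update, which replaces the squared-error gradient of LMS with a fractional lower order moment gradient; this is a standard LMP sketch, not the thesis' lattice formulation, and the parameters are illustrative:

```python
import numpy as np

def lmp_filter(x, d, order=2, mu=0.01, p=1.2):
    """Least-mean p-norm adaptive transversal filter. Minimises E|e|^p
    with p < 2, a criterion that (unlike mean-square error) remains
    finite for alpha-stable inputs with infinite variance."""
    w = np.zeros(order)
    e = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]           # regressor, most recent sample first
        e[n] = d[n] - w @ u                # prediction error
        # LMP stochastic-gradient step: mu * |e|^(p-1) * sign(e) * u
        w += mu * np.abs(e[n]) ** (p - 1) * np.sign(e[n]) * u
    return w, e
```

For p = 2 the update reduces to ordinary LMS; smaller p down-weights the large errors that impulsive samples produce.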
Abstract:
The concept of radar was developed for the estimation of the distance (range) and velocity of a target from a receiver. The distance measurement is obtained by measuring the time taken for the transmitted signal to propagate to the target and return to the receiver. The target's velocity is determined by measuring the Doppler-induced frequency shift of the returned signal caused by the rate of change of the time-delay from the target. As researchers further developed conventional radar systems it became apparent that additional information was contained in the backscattered signal and that this information could in fact be used to describe the shape of the target itself. This is because a target can be considered to be a collection of individual point scatterers, each of which has its own velocity and time-delay. Delay-Doppler parameter estimation of each of these point scatterers thus corresponds to a mapping of the target's range and cross-range, thereby producing an image of the target. Much research has been done in this area since the early radar imaging work of the 1960s. At present, radar imaging falls into two main categories. The first is related to the case where the backscattered signal is considered to be deterministic. The second is related to the case where the backscattered signal is of a stochastic nature. In both cases the information which describes the target's scattering function is extracted by use of the ambiguity function, a function which correlates the backscattered signal in time and frequency with the transmitted signal. In practical situations, it is often necessary to have the transmitter and the receiver of the radar system sited at different locations. The problem in these situations is that a reference signal must then be present in order to calculate the ambiguity function.
This causes an additional problem in that detailed phase information about the transmitted signal is then required at the receiver. It is this latter problem which has led to the investigation of radar imaging using time-frequency distributions. As is shown in this thesis, the phase information about the transmitted signal can be extracted from the backscattered signal using time-frequency distributions. The principal aim of this thesis was the development, and subsequent discussion, of the theory of radar imaging using time-frequency distributions. Consideration is first given to the case where the target is diffuse, i.e., where the backscattered signal has temporal stationarity and a spatially white power spectral density. The complementary situation is also investigated, i.e., where the target is no longer diffuse, but some degree of correlation exists between the time-frequency points. Computer simulations are presented to demonstrate the concepts and theories developed in the thesis. For the proposed radar system to be practically realisable, both the time-frequency distributions and the associated algorithms developed must be able to be implemented in a timely manner. For this reason an optical architecture is proposed, specifically designed to obtain the required time and frequency resolution when using laser radar imaging. The complex light amplitude distributions produced by this architecture have been computer simulated using an optical compiler.
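The ambiguity function referred to above, which correlates the backscattered signal with the transmitted signal in time and frequency, can be computed directly in discrete time. A brute-force sketch, with circular delays and a normalisation chosen for simplicity (conventions vary between texts):

```python
import numpy as np

def ambiguity_function(s, r, max_delay, doppler_bins):
    """Cross-ambiguity of transmitted s and received r: for each trial
    delay tau and Doppler fd (cycles/sample), correlate the delay-
    compensated return against the Doppler-shifted transmit signal."""
    n = len(s)
    t = np.arange(n)
    A = np.zeros((2 * max_delay + 1, len(doppler_bins)), dtype=complex)
    for i, tau in enumerate(range(-max_delay, max_delay + 1)):
        r_shift = np.roll(r, -tau)  # undo a circular delay of tau samples
        for j, fd in enumerate(doppler_bins):
            A[i, j] = np.sum(r_shift * np.conj(s) * np.exp(-2j * np.pi * fd * t)) / n
    return A
```

For a single point scatterer the magnitude of A peaks at that scatterer's delay-Doppler cell, which is exactly the mapping onto range and cross-range described above.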
Abstract:
An algorithm based on the concept of combining Kalman filter and Least Error Square (LES) techniques is proposed in this paper. The algorithm is intended to estimate signal attributes such as amplitude, frequency and phase angle in online mode. This technique can be used in protection relays, digital AVRs, DGs, DSTATCOMs, FACTS devices and other power electronics applications. The Kalman filter is modified to operate on a fictitious input signal and provides precise estimation results that are insensitive to noise and other disturbances. At the same time, the LES system is arranged to operate in critical transient cases to compensate for the delay and inaccuracy observed in the response of the standard Kalman filter. Practical considerations such as the effect of noise, higher order harmonics, and computational issues of the algorithm are considered and tested in the paper. Several computer simulations and a laboratory test are presented to highlight the usefulness of the proposed method. Simulation results show that the proposed technique can simultaneously estimate the signal attributes, even if the signal is highly distorted due to the presence of non-linear loads and noise.
Abstract:
With the recent regulatory reforms in a number of countries, railway resources are no longer managed by a single party but are distributed among different stakeholders. To facilitate the operation of train services, a train service provider (SP) has to negotiate with the infrastructure provider (IP) for a train schedule and the associated track access charge. This paper models the SP and IP as software agents and the negotiation as a prioritized fuzzy constraint satisfaction (PFCS) problem. Computer simulations have been conducted to demonstrate the effects on the train schedule when the SP has different optimization criteria. The results show that, by assigning different priorities to the fuzzy constraints, agents can represent SPs with different operational objectives.
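How priorities on fuzzy constraints steer a schedule can be illustrated with a common prioritised aggregation operator, min over max(1 - p_i, s_i), under which a low-priority constraint cannot drag the overall degree far down; this is a generic PFCS sketch, not necessarily the paper's exact formulation:

```python
def pfcs_degree(satisfactions, priorities):
    """Overall satisfaction degree of a candidate schedule, given each
    fuzzy constraint's satisfaction s_i in [0, 1] and priority p_i in
    [0, 1]. A constraint with priority p_i can lower the overall
    degree to at most 1 - p_i."""
    return min(max(1.0 - p, s) for s, p in zip(satisfactions, priorities))

def best_schedule(candidates, priorities):
    """Pick the candidate (a list of per-constraint satisfactions)
    scoring highest under the prioritised degree."""
    return max(candidates, key=lambda sats: pfcs_degree(sats, priorities))
```

Re-weighting `priorities` changes which candidate wins, which is the mechanism by which an agent represents SPs with different operational objectives.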
Abstract:
Computer simulation is a versatile and commonly used tool for the design and evaluation of systems with different degrees of complexity. Power distribution systems and electric railway networks are areas to which computer simulations are being heavily applied. A dominant factor in evaluating the performance of a software simulator is its processing time, especially in the case of real-time simulation. Parallel processing provides a viable means of reducing the computing time and is therefore suitable for building real-time simulators. In this paper, we present different issues related to solving the power distribution system with parallel computing based on a multiple-CPU server, and we concentrate, in particular, on the speedup performance of such an approach.
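A first-order handle on the speedup question the paper studies empirically is Amdahl's law; the parallel fraction used in the example is a hypothetical figure, not one measured by the authors:

```python
def amdahl_speedup(parallel_fraction, n_cpus):
    """Upper bound on the speedup of a simulator in which a fraction
    parallel_fraction of the work parallelises perfectly across n_cpus
    (Amdahl's law); the serial remainder limits the achievable gain."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cpus)
```

If, say, 90% of the network solve parallelised perfectly, eight CPUs would give at most a ~4.7x speedup, which is why measured speedup on a multiple-CPU server is the quantity worth reporting.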
Abstract:
Malaysia’s Vision 2020 for enhancing its education system includes the development of scientific literacy commencing at the primary school level. This Vision focuses on using English as the Medium of Instruction (EMI) for teaching primary science, as Malaysia has English as a Foreign Language (EFL) in its curriculum. What changes need to occur in preservice teacher education programs for learning about primary science using EMI? This paper investigates the education of Malaysian preservice teachers learning how to teach one strand of science education (i.e., space, primary astronomy) in an English-language context. Ninety-six second-year preservice teachers from two Malaysian institutes were involved in a 16-week “Earth and Space” course, half of which involved education about primary astronomy. Seventy-five of these preservice teachers provided written responses about the course and their development as potential teachers of primary astronomy using EMI. Preservice teacher assessments and multimedia presentations provided further evidence on learning how to teach primary astronomy. Many of these preservice teachers claimed that learning to teach primary astronomy needs to focus on teaching strategies, content knowledge with easy-to-understand concepts, computer simulations (e.g., Earth Centered Universe, Stellarium, Celestia), other ICT media, and field experiences that use naked-eye observations and telescopes to investigate celestial bodies. Although generally proficient in using ICT, they claimed there were EFL barriers to learning some new terminology. Nevertheless, PowerPoint presentations, animations, videos, and simulations were identified as effective ICT tools for providing clear visual representations of abstract concepts and ways to enhance the learning process.