Abstract:
The step size determines the accuracy of a discrete element simulation. Because the position and velocity updates use a pre-calculated table, step size control cannot rely on the usual integration-formula error estimates. A step size control scheme for the table-driven velocity and position calculation instead uses the difference between the result of one big step and that of two small steps. This variable time step method automatically chooses a suitable step size for each particle at each step according to the local conditions. Simulations using a fixed time step are compared with those using the variable time step. The difference in computation time for the same accuracy (variable versus fixed step) depends on the particular problem; for a simple test case the times are roughly similar. However, the variable step size delivers the required accuracy on the first run, whereas a fixed step size may require several runs to verify the simulation accuracy, or a conservative step size that results in longer run times. (C) 2001 Elsevier Science Ltd. All rights reserved.
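The step-doubling control described in this abstract can be sketched in a few lines. The update rule below is a plain semi-implicit Euler stand-in (the paper's table-driven update is not specified), and the tolerance and step-scaling factors are illustrative assumptions:

```python
def advance(x, v, a, h):
    """One explicit update of position and velocity (semi-implicit Euler).
    Stands in for the paper's table-driven update, whose details are not given."""
    v_new = v + a * h
    x_new = x + v_new * h
    return x_new, v_new

def adaptive_step(x, v, a, h, tol=1e-6):
    """Step-doubling error control: compare one step of size h with
    two steps of size h/2, as the abstract describes."""
    x1, v1 = advance(x, v, a, h)           # one big step
    xh, vh = advance(x, v, a, h / 2)       # two small steps
    x2, v2 = advance(xh, vh, a, h / 2)
    err = max(abs(x1 - x2), abs(v1 - v2))  # discrepancy estimates the local error
    if err > tol:
        return x, v, h * 0.5, False        # reject: retry with a smaller step
    if err < tol / 10:
        h *= 2.0                           # very accurate: grow the step
    return x2, v2, h, True                 # keep the more accurate two-step result
```

Rejected steps are retried with the halved step size, so each particle settles on the largest step that meets the tolerance, per particle and per step as in the abstract.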
Abstract:
This paper reports on the design and development of a dividing/phasing network for a compact switched-beam array antenna for land-vehicle mobile satellite communications. The device is formed by a switched radial divider/combiner and 1-bit phase shifters, and generates a sufficient number of beams for proper satellite tracking.
Abstract:
The suitable use of an array antenna at the base station of a wireless communications system can improve the signal-to-interference ratio (SIR). In general, the SIR is a function of the direction of arrival of the desired signal and depends on the configuration of the array, the number of elements, and their spacing. In this paper, we consider a uniform linear array antenna and study the effect of varying the number of its elements and the inter-element spacing on the SIR performance. (C) 2002 Wiley Periodicals, Inc.
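As a rough illustration of how SIR varies with element count and spacing, the sketch below evaluates a conventional (matched) beamformer for a uniform linear array. The single-interferer geometry, unit-amplitude signals, and non-adaptive weighting are my assumptions, not the paper's model:

```python
import numpy as np

def steering_vector(n, d, theta):
    """Response of an n-element uniform linear array with spacing d
    (in wavelengths) to a plane wave from angle theta (radians off broadside)."""
    k = 2 * np.pi  # wavenumber when d is expressed in wavelengths
    return np.exp(1j * k * d * np.arange(n) * np.sin(theta))

def sir_db(n, d, theta_desired, theta_interf):
    """Output SIR of a matched beamformer steered at the desired direction,
    for one unit-amplitude interferer. Illustrative geometry only."""
    w = steering_vector(n, d, theta_desired) / n   # matched (conventional) weights
    g_des = abs(np.vdot(w, steering_vector(n, d, theta_desired)))
    g_int = abs(np.vdot(w, steering_vector(n, d, theta_interf)))
    return 20 * np.log10(g_des / g_int)
```

Increasing the element count narrows the main beam, so an interferer away from the desired direction is attenuated more strongly, which is the dependence the abstract studies.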
Abstract:
The complex design and development of a planar multilayer phased array antenna in microstrip technology can be simplified using two commercially available design tools: 1) Ansoft Ensemble and 2) HP-EEsof Touchstone. In the approach presented here, Touchstone is used to design RF switches and phase shifters, whose scattering parameters are incorporated into Ensemble simulations using its black box tool. Using this approach, Ensemble is able to fully analyze the performance of the radiating and beamforming layers of a phased array prior to its manufacture. This strategy is demonstrated in a design example of a 12-element linearly polarized circular phased array operating at L band. A comparison between theoretical and experimental results for the array is presented.
Abstract:
This paper details an investigation of a power combiner that uses a reflect array of dual-feed aperture-coupled microstrip patch antennas and a corporate-fed dual-polarized array as a signal distributing/combining device. In this configuration, elements of the reflect array receive a linearly polarized wave and retransmit it with an orthogonal polarization using variable-length sections of microstrip lines connecting receive and transmit ports. By applying appropriate lengths of these delay lines, the array focuses the transmitted wave onto the feed array. The operation of the combiner is investigated for a small-size circular reflect array for the cases of -3 dB, -6 dB and -10 dB edge illumination by the 2 x 2-element dual-polarized array.
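The delay-line focusing idea above can be sketched as pure geometry: each element adds enough line length that the total path back to the feed is phase-aligned. The effective line permittivity, the modulo-2π wrapping, and the function shape below are illustrative assumptions, not the paper's design procedure:

```python
import numpy as np

def delay_line_lengths(element_xy, feed_pos, wavelength, eps_eff=2.2):
    """Extra microstrip line length per reflectarray element so that waves
    re-radiated toward the feed arrive in phase. element_xy: (N, 2) element
    positions in the array plane; feed_pos: (3,) feed location; all lengths
    in the same units as wavelength. eps_eff is an assumed effective
    permittivity of the microstrip line."""
    elements = np.hstack([element_xy, np.zeros((len(element_xy), 1))])
    r = np.linalg.norm(elements - feed_pos, axis=1)  # element-to-feed path
    k0 = 2 * np.pi / wavelength
    # Phase each line must add so that k0*r + line phase = const (mod 2*pi):
    phase_needed = (-k0 * r) % (2 * np.pi)
    lam_g = wavelength / np.sqrt(eps_eff)            # guided wavelength in the line
    return phase_needed / (2 * np.pi) * lam_g
```

Elements equidistant from the feed get equal line lengths, and an element whose path is a whole number of wavelengths needs no extra line, which is the focusing condition in the abstract.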
Abstract:
Quantitative laser ablation (LA)-ICP-MS analyses of fluid inclusions, trace element chemistry of sulfides, stable isotopes (S), and Pb isotopes have been used to discriminate the formation of two contrasting mineralization styles and to evaluate the origin of the Cu and Au at Mt Morgan. The Mt Morgan Au-Cu deposit is hosted by Devonian felsic volcanic rocks that have been intruded by multiple phases of the Mt Morgan Tonalite, a low-K, low-Al2O3 tonalite-trondhjemite-dacite (TTD) complex. An early, barren massive sulfide mineralization with stringer veins conforms to VHMS sub-seafloor replacement processes, whereas the high-grade Au-Cu ore is associated with a later quartz-chalcopyrite-pyrite stockwork mineralization that is related to intrusive phases of the Tonalite complex. LA-ICP-MS fluid inclusion analyses reveal high As (avg. 8850 ppm) and Sb (avg. 140 ppm) for the Au-Cu mineralization and 5 to 10 times higher Cu concentrations than in the fluids associated with the massive pyrite mineralization. Overall, the hydrothermal system of Mt Morgan is characterized by low average fluid salinities in both mineralization styles (45-80% seawater salinity) and temperatures of 210 to 270 °C estimated from fluid inclusions. Laser Raman spectroscopic analysis indicates a consistent and uniform array of CO2-bearing fluids. Comparison with active submarine hydrothermal vents shows an enrichment of the Mt Morgan fluids in base metals. Therefore, a seawater-dominated fluid is assumed for the barren massive sulfide mineralization, whereas magmatic volatile contributions are implied for the intrusion-related mineralization. Condensation of magmatic vapor into a seawater-dominated environment explains the CO2 occurrence, the low salinities, and the enriched base and precious metal fluid composition associated with the Au-Cu mineralization.
The sulfur isotope signature of pyrite and chalcopyrite is composed of fractionated Devonian seawater and oxidized magmatic fluids or remobilized sulfur from existing sulfides. Pb isotopes indicate that the Au and Cu originated from the Mt Morgan intrusions and a particular volcanic stratum that shows an elevated Cu background. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
Crushing and grinding are the most energy-intensive parts of the mineral recovery process. A major part of rock size reduction occurs in tumbling mills. Empirical models for the power draw of tumbling mills do not consider the effect of lifters. Discrete element modelling was used to investigate the effect of lifter condition on the power draw of a tumbling mill. Results obtained with the PFC3D code show that lifter condition has a significant influence on the power draw and on the mode of energy consumption in the mill. Relatively high lifters will consume less power than low lifters, under otherwise identical conditions. The fraction of the power consumed as friction increases as the height of the lifters decreases, leaving less power for the high-intensity comminution caused by impacts. The fraction of the power used to overcome frictional resistance is determined by the material's coefficient of friction. Based on the modelled results, it appears that the effective coefficient of friction for the in situ mill is close to 0.1. (C) 2003 Elsevier Science Ltd. All rights reserved.
Abstract:
The PFC3D (particle flow code) that models the movement and interaction of particles by the DEM techniques was employed to simulate the particle movement and to calculate the velocity and energy distribution of collision in two types of impact crusher: the Canica vertical shaft crusher and the BJD horizontal shaft swing hammer mill. The distribution of collision energies was then converted into a product size distribution for a particular ore type using JKMRC impact breakage test data. Experimental data of the Canica VSI crusher treating quarry and the BJD hammer mill treating coal were used to verify the DEM simulation results. Upon the DEM procedures being validated, a detailed simulation study was conducted to investigate the effects of the machine design and operational conditions on velocity and energy distributions of collision inside the milling chamber and on the particle breakage behaviour. (C) 2003 Elsevier Ltd. All rights reserved.
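The conversion from collision energies to a product size distribution typically goes through a breakage index such as t10 in the JKMRC methodology. A minimal sketch of that step, with placeholder ore parameters A and b rather than the paper's impact breakage test data:

```python
import numpy as np

def t10_from_energy(ecs, A=50.0, b=1.0):
    """JKMRC-style breakage model: t10 (percent passing 1/10 of the parent
    particle size) as a function of specific comminution energy Ecs [kWh/t].
    A and b are ore-specific parameters fitted from impact breakage tests;
    the defaults here are placeholders, not values from the paper."""
    return A * (1.0 - np.exp(-b * np.asarray(ecs)))
```

Each DEM collision energy maps to a t10 value, and the t10 values are then combined into a full product size distribution via the ore's breakage appearance functions, which is the step the abstract describes.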
Abstract:
The power required to operate large mills is typically 5-10 MW. Hence, optimisation of power consumption will have a significant impact on overall economic performance and environmental impact. Power draw modelling results using the discrete element code PFC3D have been compared with results derived from the widely used empirical model of Morrell. This is achieved by calculating the power draw for a range of operating conditions at constant mill size and fill factor using the two modelling approaches. The discrete element modelling results show that, apart from density, selection of the appropriate material damping ratio is critical for the accuracy of the modelled mill power draw. The relative insensitivity of the power draw to the material stiffness allows selection of moderate stiffness values, which result in acceptable computation times. The results obtained confirm that modelling the power draw for a vertical slice of the mill, of thickness 20% of the mill length, is a reliable substitute for modelling the full mill. The power draw predictions from PFC3D show good agreement with those obtained using the empirical model. Due to its inherent flexibility, power draw modelling using PFC3D appears to be a viable and attractive alternative to empirical models where the necessary code and computer power are available.
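In a DEM post-processing step, mill power draw can be recovered from the contact forces on the liner: the torque they exert about the mill axis times the rotational speed. A generic sketch of that calculation (not the PFC3D internals); the array shapes and sign convention are my assumptions:

```python
import numpy as np

def mill_power_draw(contact_points, contact_forces, omega,
                    axis_origin=np.zeros(3),
                    axis_dir=np.array([0.0, 0.0, 1.0])):
    """Power drawn by a mill rotating at omega [rad/s] about axis_dir,
    computed from the forces the charge exerts on the liner.
    contact_points, contact_forces: (N, 3) arrays of contact locations
    and forces on the liner. The charge resists rotation, so the power
    supplied by the drive is minus the axial torque times omega."""
    r = contact_points - axis_origin
    torques = np.cross(r, contact_forces)  # torque of each contact about origin
    t_axial = torques @ axis_dir           # component about the mill axis
    return -omega * t_axial.sum()          # power supplied by the mill drive
```

Summing over all liner contacts at each time step and averaging over a revolution gives the kind of power draw estimate that the abstract compares against the empirical model.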
Abstract:
An equivalent unit cell waveguide approach (WGA) to designing a multilayer microstrip reflectarray of variable-size patches is presented. In this approach, normal incidence of a plane wave on an infinite periodic array of radiating elements is considered to obtain reflection coefficient phase curves for the reflectarray's elements. It is shown that this problem is equivalent to the problem of reflection of the dominant TEM mode in a waveguide with patches interleaved by layers of dielectric. This waveguide problem is solved using a field matching technique and a method of moments (MoM). Based on this solution, a fast computer algorithm is developed to generate reflection coefficient phase curves for a multilayer microstrip patch reflectarray. The validity of the developed algorithm is tested against alternative approaches and the Agilent High Frequency Structure Simulator (HFSS). Having confirmed the validity of the WGA approach, a small offset-fed two-layer microstrip patch reflectarray is designed and developed. This reflectarray is tested experimentally and shows good performance.
Abstract:
Pectus carinatum (PC) is a chest deformity caused by a disproportionate growth of the costal cartilages compared to the bony thoracic skeleton, pushing the sternum outwards and leading to its protrusion. There has been growing interest in using the ‘reversed Nuss’ technique as a minimally invasive procedure for PC surgical correction. A corrective bar is introduced between the skin and the thoracic cage and positioned on top of the sternum's highest protrusion area to apply continuous pressure. It is then fixed to the ribs and kept implanted for about 2–3 years. The purpose of this work was to (a) assess the stress distribution on the thoracic cage that arises from the procedure, and (b) investigate the impact of different positions of the corrective bar along the sternum. The highest stresses were generated at the posterior ends of the 4th, 5th and 6th ribs, supporting the hypothesis of correction-induced scoliosis in pectus deformities. Different bar positions produced different stresses at the posterior ends of the ribs. The bar position that led to the lowest stresses at the posterior ends of the ribs was also the one that led to the smallest sternum displacement. Nevertheless, this position may be preferred, as it lowers the risk of induced scoliosis.
Abstract:
Reliable flow simulation software is essential to determine an optimal injection strategy in Liquid Composite Molding processes. Several methodologies can be implemented in standard software in order to reduce CPU time; post-processing techniques are one of them. Post-processing a finite element solution is a well-known procedure, which consists in recalculating the originally obtained quantities so that the rate of convergence increases without the need for expensive remeshing techniques. Post-processing is especially effective in problems where better accuracy is required for derivatives of nodal variables in regions where a Dirichlet essential boundary condition is imposed strongly. In previous works, the influence of the smoothness of a non-homogeneous Dirichlet condition imposed on a smooth front was examined. However, due to discretization, a rather non-smooth boundary is usually obtained at each time step of the infiltration process, and then direct application of post-processing techniques does not improve the final results as expected. The new contribution of this paper lies in an improvement of the standard methodology. The improved results clearly show that the recalculated flow front is closer to the “exact” one, is smoother than the previous one, and smooths out local disturbances of the “exact” solution.
Abstract:
Post-processing a finite element solution is a well-known technique, which consists in recalculating the originally obtained quantities so that the rate of convergence increases without the need for expensive remeshing techniques. Post-processing is especially effective in problems where better accuracy is required for derivatives of nodal variables in regions where a Dirichlet essential boundary condition is imposed strongly. Consequently, such an approach can be exceptionally good for modelling resin infiltration under the quasi steady-state assumption with remeshing techniques and explicit time integration, because only the free-front normal velocities are necessary to advance the resin front to the next position. The new contribution is the post-processing analysis and implementation of the free-boundary velocities in a mesolevel infiltration analysis. Such an implementation ensures better accuracy even on coarser meshes, which in consequence reduces the computational time, also through the possibility of employing larger time steps.
Abstract:
An adaptive antenna array combines the signal of each element, using some constraints to produce the radiation pattern of the antenna while maximizing the performance of the system. Direction of arrival (DOA) algorithms are applied to determine the directions of impinging signals, whereas beamforming techniques are employed to determine the appropriate weights for the array elements to create the desired pattern. In this paper, a detailed analysis of both categories of algorithms is made for the case of a planar antenna array. Several simulation results show that it is possible to point an antenna array in a desired direction based on the DOA estimation and the beamforming algorithms. The algorithms used are also compared in terms of runtime and accuracy; these characteristics depend on the SNR of the incoming signal.
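A textbook baseline for the DOA step is the classical (Bartlett) spectrum: steer a beam over a grid of angles and pick the angle of maximum output power. The sketch below uses a uniform linear array for simplicity (the paper considers a planar array), and the array size and angle grid are illustrative, not the paper's algorithms:

```python
import numpy as np

def steering(n, d, theta):
    """Uniform linear array steering vector; d in wavelengths, theta in radians."""
    return np.exp(2j * np.pi * d * np.arange(n) * np.sin(theta))

def bartlett_doa(snapshots, d, grid):
    """Classical (Bartlett) DOA estimate from an (n_elements, n_snapshots)
    array of complex baseband samples: scan a conventional beam over the
    angle grid and return the angle of maximum output power."""
    n, k = snapshots.shape
    R = snapshots @ snapshots.conj().T / k  # sample covariance matrix
    power = [np.real(np.vdot(steering(n, d, th), R @ steering(n, d, th)))
             for th in grid]
    return grid[int(np.argmax(power))]
```

The estimated angle can then feed a beamformer that places the main lobe on the desired signal, which is the DOA-then-beamforming flow the abstract describes; accuracy degrades with SNR, as noted above.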
Abstract:
The most common techniques for stress analysis/strength prediction of adhesive joints involve analytical or numerical methods such as the Finite Element Method (FEM). However, the Boundary Element Method (BEM) is an alternative numerical technique that has been successfully applied to the solution of a wide variety of engineering problems. This work evaluates the applicability of the boundary element code BEASY as a design tool to analyze adhesive joints. The linearity of peak shear and peel stresses with the applied displacement is studied and compared between BEASY and the analytical model of Frostig et al., considering a bonded single-lap joint under tensile loading. The BEM results are also compared with FEM in terms of stress distributions. To evaluate the mesh convergence of BEASY, the influence of mesh refinement on the peak shear and peel stress distributions is assessed. Joint stress predictions are carried out numerically in BEASY and ABAQUS®, and analytically with the models of Volkersen, Goland and Reissner, and Frostig et al. The failure loads for each model are compared with experimental results. The preparation, processing, and mesh creation times are compared for all models. The BEASY results showed good agreement with the conventional methods.
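Of the analytical models mentioned, Volkersen's shear-lag solution is the simplest to sketch: for a balanced single-lap joint (identical adherends) the adhesive shear stress follows a cosh profile that peaks at the overlap ends. The implementation below is the standard textbook form, not the specific joints or parameters of this paper:

```python
import numpy as np

def volkersen_shear(P, b, l, G_a, t_a, E, t, npts=101):
    """Volkersen shear-lag adhesive shear stress for a balanced single-lap
    joint (identical adherends). P: applied load [N]; b: joint width [m];
    l: overlap length [m]; G_a, t_a: adhesive shear modulus [Pa] and
    thickness [m]; E, t: adherend Young's modulus [Pa] and thickness [m].
    Returns (x, tau) with x measured from the overlap centre."""
    w = np.sqrt((G_a / t_a) * (2.0 / (E * t)))  # shear-lag parameter [1/m]
    x = np.linspace(-l / 2, l / 2, npts)
    # cosh profile, normalised so the stress integrates to the load per width:
    tau = (P * w / (2 * b)) * np.cosh(w * x) / np.sinh(w * l / 2)
    return x, tau
```

The peak-to-average stress ratio grows with the shear-lag parameter, which is why peak shear stress, not average stress, governs the strength predictions compared in the paper.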