985 results for Coefficient diagram method


Relevance:

20.00%

Publisher:

Abstract:

Numerical modeling of the eddy currents induced in the human body by the pulsed field gradients in MRI presents a difficult computational problem. It requires an efficient and accurate computational method for high spatial resolution analyses with a relatively low input frequency. In this article, a new technique is described which allows the finite difference time domain (FDTD) method to be efficiently applied over a very large frequency range, including low frequencies. This is not the case in conventional FDTD-based methods. A method of implementing streamline gradients in FDTD is presented, as well as comparative analyses which show that the correct source injection in the FDTD simulation plays a crucial role in obtaining accurate solutions. In particular, making use of the derivative of the input source waveform is shown to provide distinct benefits in accuracy over direct source injection. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent and the source injection method has been verified against examples with analytical solutions. Results are presented showing the spatial distribution of gradient-induced electric fields and eddy currents in a complete body model.
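As a loose illustration of the source-injection point above, the following minimal 1-D free-space FDTD sketch injects the time derivative of a Gaussian pulse as a soft source. This is not the authors' 3-D body model: the grid size, Courant number (0.5) and pulse parameters are illustrative assumptions only.

```python
import numpy as np

# Minimal 1-D free-space FDTD sketch of derivative source injection.
# Grid size, Courant number (0.5) and pulse parameters are assumed
# illustrative values, not the paper's 3-D body-model setup.
def fdtd_1d(n_cells=200, n_steps=180, src_pos=100):
    ez = np.zeros(n_cells)   # electric field samples
    hy = np.zeros(n_cells)   # magnetic field samples
    t0, spread = 40.0, 12.0  # pulse centre and width, in timesteps
    for n in range(n_steps):
        hy[:-1] += 0.5 * (ez[1:] - ez[:-1])   # H update from curl of E
        ez[1:] += 0.5 * (hy[1:] - hy[:-1])    # E update from curl of H
        # soft source: time DERIVATIVE of a Gaussian pulse
        ez[src_pos] += -2.0 * (n - t0) / spread**2 * np.exp(-((n - t0) / spread) ** 2)
    return ez

field = fdtd_1d()
```

Injecting the derivative (rather than the pulse itself) is the ingredient the abstract highlights; the leapfrog updates are the standard Yee scheme in one dimension.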


In this paper the diffusion and flow of carbon tetrachloride, benzene and n-hexane through a commercial activated carbon is studied by a differential permeation method. The range of pressure covered extends from very low pressure to a pressure range where significant capillary condensation occurs. Helium as a non-adsorbing gas is used to determine the characteristics of the porous medium. For adsorbing gases and vapors, the motion of adsorbed molecules in small pores gives rise to a sharp increase in permeability at very low pressures. The interplay between a decreasing behavior in permeability due to the saturation of small pores with adsorbed molecules and an increasing behavior due to viscous flow in larger pores with pressure could lead to a minimum in the plot of total permeability versus pressure. This phenomenon is observed for n-hexane at 30 °C. At relative pressures of 0.1-0.8, where the gaseous viscous flow dominates, the permeability is a linear function of pressure. Since activated carbon has a wide pore size distribution, the mobility mechanism of these adsorbed molecules differs from pore to pore. In very small pores, where adsorbate molecules fill the pore, the permeability decreases with an increase in pressure, while in intermediate pores the permeability of such transport increases with pressure due to the increasing build-up of layers of adsorbed molecules. For even larger pores, the transport is mostly due to diffusion and flow of free molecules, which gives rise to linear permeability with respect to pressure. (C) 2002 Elsevier Science Ltd. All rights reserved.
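The permeability minimum described above can be sketched with a two-term toy model: a surface-flow contribution that saturates with pressure plus a viscous contribution that grows linearly. All coefficients below are illustrative, not fitted to the n-hexane data.

```python
import numpy as np

# Toy model of the permeability minimum: a saturating surface-flow term
# (small pores) plus a viscous term linear in pressure (large pores).
# Coefficients are illustrative assumptions, not fitted values.
def total_permeability(p, b_surf=5.0, k_ads=2.0, b_visc=0.8):
    surface = b_surf / (1.0 + k_ads * p)  # adsorbed-phase flow, saturating
    viscous = b_visc * p                  # viscous flow in larger pores
    return surface + viscous

p = np.linspace(0.01, 5.0, 500)
B = total_permeability(p)
p_min = float(p[np.argmin(B)])   # pressure at the permeability minimum
```

The competition between the two terms produces exactly the minimum-then-linear shape the abstract reports for n-hexane.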


A new thermodynamic approach has been developed in this paper to analyze adsorption in slitlike pores. The equilibrium is described by two thermodynamic conditions: the Helmholtz free energy must be minimal, and the grand potential functional at that minimum must be negative. This approach has led to local isotherms that describe adsorption in the form of a single layer or two layers near the pore walls. In narrow pores local isotherms have one step that could be either very sharp but continuous or discontinuous and bench-like for a definite range of pore width. The latter reflects a so-called 0 → 1 monolayer transition. In relatively wide pores, local isotherms have two steps, of which the first step corresponds to the appearance of two layers near the pore walls, while the second step corresponds to the filling of the space between these layers. All features of local isotherms are in agreement with the results obtained from the density functional theory and Monte Carlo simulations. The approach is used for determining pore size distributions of carbon materials. We illustrate this with the benzene adsorption data on activated carbon at 20, 50, and 80 °C, argon adsorption on activated carbon Norit ROX at 87.3 K, and nitrogen adsorption on activated carbon Norit R1 at 77.3 K.
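The pore-size-distribution step above rests on the integral adsorption equation: the measured isotherm is a weighted sum of local isotherms, one per pore width. The sketch below uses idealised bench-like 0 → 1 local isotherms with a filling pressure that grows with pore width; the step form, pore widths and PSD weights are illustrative assumptions only.

```python
import numpy as np

# Discretised integral adsorption equation with bench-like local
# isotherms. Pore widths, PSD weights and the filling-pressure rule
# are hypothetical illustrative values.
widths = np.array([0.6, 1.0, 1.6])    # pore widths, nm (assumed)
psd = np.array([0.5, 0.3, 0.2])       # pore-volume fractions, sum to 1
fill_pressure = 0.05 * widths**2      # wider pores fill at higher p/p0

def global_isotherm(p_rel):
    # local isotherm: 0 below the filling pressure, 1 above (0 -> 1 step)
    local = (p_rel >= fill_pressure).astype(float)
    return float(psd @ local)

coverage_low = global_isotherm(0.02)   # only the narrowest pores filled
coverage_high = global_isotherm(0.2)   # all pores filled
```

Inverting this relation (given measured coverages and computed local isotherms) is what yields the pore size distribution.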


The particle-based Lattice Solid Model (LSM) was developed to provide a basis to study the physics of rocks and the nonlinear dynamics of earthquakes (MORA and PLACE, 1994; PLACE and MORA, 1999). A new modular and flexible LSM approach has been developed that allows different microphysics to be easily included in or removed from the model. The approach provides a virtual laboratory where numerical experiments can easily be set up and all measurable quantities visualised. The proposed approach provides a means to simulate complex phenomena such as fracturing or localisation processes, and enables the effect of different micro-physics on macroscopic behaviour to be studied. The initial 2-D model is extended to allow three-dimensional simulations to be performed and particles of different sizes to be specified. Numerical bi-axial compression experiments under different confining pressures are used to calibrate the model. By tuning the different microscopic parameters (such as coefficient of friction, microscopic strength and distribution of grain sizes), the macroscopic strength of the material can be adjusted to be in agreement with laboratory experiments, and the orientation of fractures is consistent with the theoretical value predicted from the Mohr-Coulomb diagram. Simulations indicate that 3-D numerical models have different macroscopic properties than 2-D models and, hence, the model must be recalibrated for 3-D simulations. These numerical experiments illustrate that the new approach is capable of simulating typical rock fracture behaviour. The new model provides a basis to investigate nucleation, rupture and slip pulse propagation in complex fault zones without the previous model limitations of a regular low-level surface geometry and being restricted to two dimensions.
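The Mohr-Coulomb consistency check mentioned above is a one-line calculation: failure planes form at 45° + φ/2 from the minor principal stress direction, where φ = arctan(μ) is the internal friction angle. The friction coefficient below is an assumed example value.

```python
import math

# Mohr-Coulomb predicted fracture orientation from a friction
# coefficient. mu = 0.6 is an assumed example, typical for rock.
def mohr_coulomb_fracture_angle(mu):
    phi = math.atan(mu)                    # internal friction angle, rad
    return 45.0 + math.degrees(phi) / 2.0  # fracture angle, degrees

angle = mohr_coulomb_fracture_angle(0.6)
```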


Many large-scale stochastic systems, such as telecommunications networks, can be modelled using a continuous-time Markov chain. However, it is frequently the case that a satisfactory analysis of their time-dependent, or even equilibrium, behaviour is impossible. In this paper, we propose a new method of analysing Markovian models, whereby the existing transition structure is replaced by a more amenable one. Using rates of transition given by the equilibrium expected rates of the corresponding transitions of the original chain, we are able to approximate its behaviour. We present two formulations of the idea of expected rates. The first provides a method for analysing time-dependent behaviour, while the second provides a highly accurate means of analysing equilibrium behaviour. We shall illustrate our approach with reference to a variety of models, giving particular attention to queueing and loss networks. (C) 2003 Elsevier Ltd. All rights reserved.
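The expected-rates idea can be sketched on a small chain: solve πQ = 0 for the equilibrium distribution of the original generator, then weight each transition rate by the equilibrium probability of its source state. The chain below is a toy 4-state M/M/1 queue (λ = 1, μ = 2), not one of the paper's network models.

```python
import numpy as np

# Equilibrium expected transition rates for a toy 4-state birth-death
# chain (M/M/1 queue with lambda=1, mu=2). Illustrative example only.
lam, mu, n = 1.0, 2.0, 4
Q = np.zeros((n, n))
for i in range(n - 1):
    Q[i, i + 1] = lam        # arrival rate
    Q[i + 1, i] = mu         # service rate
np.fill_diagonal(Q, -Q.sum(axis=1))   # generator rows sum to zero

# equilibrium distribution: pi Q = 0 with sum(pi) = 1
A = np.vstack([Q.T, np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# off-diagonal entries are the equilibrium expected transition flows
expected_rates = pi[:, None] * Q
```

For a birth-death chain, detailed balance makes the expected up-flow and down-flow across each level equal, which is a handy sanity check on the computation.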


A supersweet sweet corn hybrid, Pacific H5, was planted at weekly intervals (P-1 to P-5) in spring in South-Eastern Queensland. All plantings were harvested at the same time resulting in immature seed for the last planting (P-5). The seed was handled by three methods: manual harvest and processing (M-1), manual harvest and mechanical processing (M-2) and mechanical harvest and processing (M-3), and later graded into three sizes (small, medium and large). After eight months storage at 12-14 °C, seed was maintained at 30 °C with bimonthly monitoring of germination for fourteen months and seed damage at the end of this period. Seed quality was greatest for M-1 and was reduced by mechanical processing but not by mechanical harvesting. Large and medium seed had higher germination due to greater storage reserves but also more seed damage during mechanical processing. Immature seed from premature harvest (P-5) had poor quality especially when processed mechanically and reinforced the need for harvested seed to be physiologically mature.


Trials conducted in Queensland, Australia between 1997 and 2002 demonstrated that fungicides belonging to the triazole group were the most effective in minimising the severity of infection of sorghum by Claviceps africana, the causal agent of sorghum ergot. Triadimenol (as Bayfidan 250EC) at 0.125 kg a.i./ha was the most effective fungicide. A combination of the systemic activated resistance compound acibenzolar-S-methyl (as Bion 50WG) at 0.05 kg a.i./ha and mancozeb (as Penncozeb 750DF) at 1.5 kg a.i./ha has the potential to provide protection against the pathogen, should triazole-resistant isolates be detected. Timing and method of fungicide application are important. Our results suggest that the triazole fungicides have no systemic activity in sorghum panicles, necessitating multiple applications from first anthesis to the end of flowering, whereas acibenzolar-S-methyl is most effective when applied 4 days before flowering. The flat fan nozzles tested in the trials provided higher levels of protection against C. africana and greater droplet deposition on panicles than the tested hollow cone nozzles. Application of triadimenol by a fixed wing aircraft was as efficacious as application through a tractor-mounted boom spray.


A high definition, finite difference time domain (HD-FDTD) method is presented in this paper. This new method allows the FDTD method to be efficiently applied over a very large frequency range including low frequencies, which are problematic for conventional FDTD methods. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent and has been verified against analytical solutions within the frequency range 50 Hz-1 GHz. As an example of the lower frequency range, the method has been applied to the problem of induced eddy currents in the human body resulting from the pulsed magnetic field gradients of an MRI system. The new method only requires approximately 0.3% of the source period to obtain an accurate solution. (C) 2003 Elsevier Science Inc. All rights reserved.


Blasting has been the most frequently used method for rock breakage since black powder was first used to fragment rocks, more than two hundred years ago. This paper is an attempt to reassess standard design techniques used in blasting by providing an alternative approach to blast design. The new approach has been termed asymmetric blasting. Based on providing real time rock recognition through the capacity of measurement while drilling (MWD) techniques, asymmetric blasting is an approach to deal with rock properties as they occur in nature, i.e., randomly and asymmetrically spatially distributed. It is well accepted that the performance of basic mining operations, such as excavation and crushing, relies on a broken rock mass which has been pre-conditioned by the blast. By pre-conditioned we mean well fragmented, sufficiently loose and with an adequate muckpile profile. These muckpile characteristics affect loading and hauling [1]. The influence of blasting does not end there. Under the Mine to Mill paradigm, blasting has a significant leverage on downstream operations such as crushing and milling. There is a body of evidence that blasting affects mineral liberation [2]. Thus, the importance of blasting has increased from simply fragmenting and loosening the rock mass, to a broader role that encompasses many aspects of mining, which affects the cost of the end product. A new approach is proposed in this paper which facilitates this trend 'to treat non-homogeneous media (rock mass) in a non-homogeneous manner (an asymmetrical pattern) in order to achieve an optimal result (in terms of muckpile size distribution).' It is postulated there are no logical reasons (besides the current lack of means to infer rock mass properties in the blind zones of the bench and onsite precedents) for drilling a regular blast pattern over a rock mass that is inherently heterogeneous. Real and theoretical examples of such a method are presented.


Most finite element packages use the Newmark algorithm for time integration of structural dynamics. Various algorithms have been proposed to better optimize the high frequency dissipation of this algorithm. Hulbert and Chung proposed both implicit and explicit forms of the generalized alpha method. The algorithms optimize high frequency dissipation effectively, and despite recent work on algorithms that possess momentum conserving/energy dissipative properties in a non-linear context, the generalized alpha method remains an efficient way to solve many problems, especially with adaptive timestep control. However, the implicit and explicit algorithms use incompatible parameter sets and cannot be used together in a spatial partition, whereas this can be done for the Newmark algorithm, as Hughes and Liu demonstrated, and for the HHT-alpha algorithm developed from it. The present paper shows that the explicit generalized alpha method can be rewritten so that it becomes compatible with the implicit form. All four algorithmic parameters can be matched between the explicit and implicit forms. An element interface between implicit and explicit partitions can then be used, analogous to that devised by Hughes and Liu to extend the Newmark method. The stability of the explicit/implicit algorithm is examined in a linear context and found to exceed that of the explicit partition. The element partition is significantly less dissipative of intermediate frequencies than one using the HHT-alpha method. The explicit algorithm can also be rewritten so that the discrete equation of motion evaluates forces from displacements and velocities found at the predicted mid-point of a cycle. Copyright (C) 2003 John Wiley & Sons, Ltd.
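For reference, the implicit generalized alpha method on an undamped single-DOF oscillator m·x'' + k·x = 0 can be sketched as below, with the Chung-Hulbert parameters derived from the spectral radius at infinity ρ∞. The choice ρ∞ = 0.8 and the timestep are illustrative; this is a textbook-style sketch, not the paper's partitioned explicit/implicit scheme.

```python
# Implicit generalized-alpha (Chung & Hulbert) on m*x'' + k*x = 0.
# rho (spectral radius at infinity), timestep h and step count are
# illustrative assumptions.
def generalized_alpha(m=1.0, k=1.0, x0=1.0, v0=0.0, h=0.1, steps=200, rho=0.8):
    am = (2 * rho - 1) / (rho + 1)        # alpha_m
    af = rho / (rho + 1)                  # alpha_f
    g = 0.5 - am + af                     # gamma (second-order accuracy)
    b = 0.25 * (1 - am + af) ** 2         # beta (unconditional stability)
    x, v = x0, v0
    a = -k * x / m                        # consistent initial acceleration
    for _ in range(steps):
        # Newmark displacement predictor (terms not involving a_new)
        x_pred = x + h * v + h * h * (0.5 - b) * a
        # balance m*a at t_{n+1-am} with k*x at t_{n+1-af}; solve for a_new
        lhs = m * (1 - am) + k * (1 - af) * h * h * b
        rhs = -m * am * a - k * (af * x + (1 - af) * x_pred)
        a_new = rhs / lhs
        x = x_pred + h * h * b * a_new
        v = v + h * ((1 - g) * a + g * a_new)
        a = a_new
    return x, v

x_end, v_end = generalized_alpha()
energy = 0.5 * x_end**2 + 0.5 * v_end**2   # total energy with m = k = 1
```

At this low frequency (ωh = 0.1) the algorithmic dissipation is tiny, so the discrete energy stays close to its initial value of 0.5 while high-frequency modes would be damped.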


Skin-friction measurements are reported for high-enthalpy and high-Mach-number laminar, transitional and turbulent boundary layers. The measurements were performed in a free-piston shock tunnel with air-flow Mach number, stagnation enthalpy and Reynolds number in the ranges 4.4-6.7, 3-13 MJ kg⁻¹ and 0.16 × 10⁶ to 21 × 10⁶, respectively. Wall temperatures were near 300 K and this resulted in ratios of wall enthalpy to flow-stagnation enthalpy in the range of 0.1-0.02. The experiments were performed using rectangular ducts. The measurements were accomplished using a new skin-friction gauge that was developed for impulse facility testing. The gauge was an acceleration compensated piezoelectric transducer and had a lowest natural frequency near 40 kHz. Turbulent skin-friction levels were measured to within a typical uncertainty of ±7%. The systematic uncertainty in measured skin-friction coefficient was high for the tested laminar conditions; however, to within experimental uncertainty, the skin-friction and heat-transfer measurements were in agreement with the laminar theory of van Driest (1952). For predicting turbulent skin-friction coefficient, it was established that, for the range of Mach numbers and Reynolds numbers of the experiments, with cold walls and boundary layers approaching the turbulent equilibrium state, the Spalding & Chi (1964) method was the most suitable of the theories tested. It was also established that if the heat transfer rate to the wall is to be predicted, then the Spalding & Chi (1964) method should be used in conjunction with a Reynolds analogy factor near unity. If more accurate results are required, then an experimentally observed relationship between the Reynolds analogy factor and the skin-friction coefficient may be applied.
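The Reynolds-analogy step described above is a simple conversion: once a turbulent skin-friction coefficient c_f has been predicted (e.g. by the Spalding & Chi method), the Stanton number follows from St = s·c_f/2 with an analogy factor s near unity. The c_f value below is an assumed example, not a tunnel measurement.

```python
# Reynolds analogy: Stanton number from a predicted skin-friction
# coefficient. The analogy factor s is defined by s = 2*St/c_f, so
# St = s * c_f / 2. c_f = 0.002 is an illustrative assumed value.
def stanton_from_cf(cf, analogy_factor=1.0):
    return analogy_factor * cf / 2.0

st = stanton_from_cf(2.0e-3)   # factor near unity, per the abstract
```

The wall heat flux then follows from St together with the edge mass flux and the enthalpy difference between the adiabatic wall and the actual wall.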


An equivalent unit cell waveguide approach (WGA) to designing a multilayer microstrip reflectarray of variable-size patches is presented. In this approach, a normal incidence of a plane wave on an infinite periodic array of radiating elements is considered to obtain reflection coefficient phase curves for the reflectarray's elements. It is shown that this problem is equivalent to the problem of reflection of the dominant TEM mode in a waveguide with patches interleaved by layers of dielectric. This waveguide problem is solved using a field matching technique and a method of moments (MoM). Based on this solution, a fast computer algorithm is developed to generate reflection coefficient phase curves for a multilayer microstrip patch reflectarray. The validity of the developed algorithm is tested against alternative approaches and Agilent High Frequency Structure Simulator (HFSS). Having confirmed the validity of the WGA approach, a small offset feed two-layer microstrip patch array is designed and developed. This reflectarray is tested experimentally and shows good performance.
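The idea of a reflection-phase curve can be illustrated with a crude transmission-line stand-in for the unit cell: model the patch as a shunt capacitance over a grounded dielectric slab (a short-circuited line section) and read the phase of the resulting reflection coefficient. This is emphatically not the paper's MoM/field-matching solution; the frequency, substrate and capacitance values are illustrative assumptions.

```python
import cmath
import math

# Crude transmission-line sketch of a unit-cell reflection-phase curve:
# shunt patch capacitance in parallel with a grounded dielectric slab.
# All element values are illustrative assumptions.
ETA0 = 376.73          # free-space wave impedance, ohms
F = 10e9               # design frequency, 10 GHz (assumed)

def reflection_phase(cap_pf, eps_r=2.2, d=1.5e-3):
    w = 2 * math.pi * F
    beta = w / 3e8 * math.sqrt(eps_r)             # phase constant in slab
    zd = ETA0 / math.sqrt(eps_r)                  # slab wave impedance
    y_slab = 1 / (1j * zd * math.tan(beta * d))   # grounded-slab admittance
    y_in = 1j * w * cap_pf * 1e-12 + y_slab       # add patch capacitance
    y0 = 1 / ETA0
    return (y0 - y_in) / (y0 + y_in)              # reflection coefficient

g_small = reflection_phase(0.005)   # small patch -> small capacitance
g_large = reflection_phase(0.05)    # larger patch -> larger capacitance
phase_shift = cmath.phase(g_small) - cmath.phase(g_large)
```

Because the load is lossless, |Γ| stays at 1 while its phase sweeps with patch size, which is precisely the degree of freedom a reflectarray designer exploits.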


This communication describes an electromagnetic model of a radial line planar antenna consisting of a radial guide with one central probe and many peripheral probes arranged in concentric circles feeding an array of antenna elements such as patches or wire curls. The model takes into account interactions between the coupling probes while assuming isolation of radiating elements. Based on this model, computer programs are developed to determine equivalent circuit parameters of the feed network and the radiation pattern of the radial line planar antenna. Comparisons are made between the present model and the two-probe model developed earlier by other researchers.


Signal peptides and transmembrane helices both contain a stretch of hydrophobic amino acids. This common feature makes it difficult for signal peptide and transmembrane helix predictors to correctly assign identity to stretches of hydrophobic residues near the N-terminal methionine of a protein sequence. The inability to reliably distinguish between N-terminal transmembrane helix and signal peptide is an error with serious consequences for the prediction of protein secretory status or transmembrane topology. In this study, we report a new method for differentiating protein N-terminal signal peptides and transmembrane helices. Based on the sequence features extracted from hydrophobic regions (amino acid frequency, hydrophobicity, and the start position), we set up discriminant functions and examined them on non-redundant datasets with jackknife tests. This method can incorporate other signal peptide prediction methods and achieve higher prediction accuracy. For Gram-negative bacterial proteins, 95.7% of N-terminal signal peptides and transmembrane helices can be correctly predicted (coefficient 0.90). Given a sensitivity of 90%, transmembrane helices can be identified from signal peptides with a precision of 99% (coefficient 0.92). For eukaryotic proteins, 94.2% of N-terminal signal peptides and transmembrane helices can be correctly predicted with coefficient 0.83. Given a sensitivity of 90%, transmembrane helices can be identified from signal peptides with a precision of 87% (coefficient 0.85). The method can be used to complement current transmembrane protein prediction and signal peptide prediction methods to improve their prediction accuracies. (C) 2003 Elsevier Inc. All rights reserved.
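In the spirit of the discriminant functions above, a toy classifier can be built from features of the first N-terminal hydrophobic stretch. The Kyte-Doolittle hydropathy scale below is real; the decision rule (transmembrane helices have longer hydrophobic runs) and its length threshold are simplified assumptions, not the paper's trained discriminant.

```python
# Toy N-terminal classifier from hydrophobic-stretch features.
# KD is the real Kyte-Doolittle hydropathy scale; the cutoff and the
# 15-residue length threshold are simplified illustrative assumptions.
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def hydrophobic_stretch(seq, cutoff=1.0):
    """Start position and length of the first run of residues whose
    Kyte-Doolittle hydropathy exceeds `cutoff`."""
    start, length = None, 0
    for i, aa in enumerate(seq):
        if KD.get(aa, 0.0) > cutoff:
            if start is None:
                start = i
            length += 1
        elif start is not None:
            break
    return start, length

def classify_n_terminus(seq):
    start, length = hydrophobic_stretch(seq)
    if start is None:
        return 'neither'
    # hypothetical rule: TM helices have long (>= 15 residue) runs
    return 'transmembrane helix' if length >= 15 else 'signal peptide'

label = classify_n_terminus('M' + 'L' * 18 + 'RKQ')  # long leucine run
```

A real discriminant would combine stretch length with average hydrophobicity and start position, as the abstract describes, and weight them by training on annotated sequences.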


The role of sunscreens in preventing skin cancer and melanoma is the focus of ongoing research. Currently, there is no objective measure which can be used in field studies to determine whether a person has applied sunscreen to their skin, and researchers must use indirect assessments such as questionnaires. We sought to develop a rapid, non-invasive method for identifying sunscreen on the skin for use in epidemiological studies. Our basic method is to swab the skin, elute any residues which have been adsorbed onto the swab by rinsing in ethanol, and submit the eluted washings for spectrophotometric analysis. In a controlled study, we applied 0.1 ml of sunscreen to a 50 cm² grid on both forearms of 21 volunteers. Each forearm was allocated one of 10 different sunscreen brands. The skin was swabbed after intervals of 20 min, 1 h, 2 h and 4 h. In a field study conducted among 12 children aged 2-4 years attending a child care centre, sunscreen was applied to the faces of half the children. Swabs were then taken from the face and back of all children without knowledge of sunscreen status. In the controlled study, sunscreen was clearly detectable up to 2 h after application for all brands containing organic sunscreen, and marginally detectable at 4 h. In the field study, this method correctly identified all children with and without sunscreen. We conclude that spectrophotometric analysis of skin swabs can reliably detect the presence of sunscreen on the skin for up to 2 h after application. (C) 2002 Elsevier Science B.V. All rights reserved.
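The detection step above reduces to a threshold test on absorbance: organic UV filters absorb strongly in the UV, and the swab eluate's absorbance (Beer-Lambert: A = ε·c·l) can be compared against a blank-swab baseline. The molar absorptivity, concentration and threshold below are illustrative values, not calibrated figures from the study.

```python
# Threshold classification of a swab eluate via the Beer-Lambert law.
# eps, conc, the blank level and the margin are illustrative assumed
# values, not the study's calibration.
def absorbance(eps, conc, path_cm=1.0):
    return eps * conc * path_cm            # Beer-Lambert: A = eps * c * l

def sunscreen_present(a_sample, a_blank=0.02, margin=0.05):
    return a_sample > a_blank + margin     # detectable above blank + margin

a = absorbance(eps=15000.0, conc=1e-5)     # hypothetical positive swab
detected = sunscreen_present(a)
```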