982 results for Engineering, Multidisciplinary
Abstract:
The emphasis of this work is on the optimal design of MRI magnets with both superconducting coils and ferromagnetic rings. The work is directed to the automated design of MRI magnet systems containing superconducting wire and both `cold' and `warm' iron. Details of the optimization procedure are given and the results show combined superconducting and iron material MRI magnets with excellent field characteristics. Strong, homogeneous central magnetic fields are produced with little stray or external field leakage. The field calculations are performed using a semi-analytical method for both current coil and iron material sources. Design examples for symmetric, open and asymmetric clinical MRI magnets containing both superconducting coils and ferromagnetic material are presented.
Abstract:
Magnetic resonance imaging (MRI) magnets have very stringent constraints on the homogeneity of the static magnetic field that they generate over desired imaging regions. The magnet system also preferably generates very little stray field external to its structure, so that ease of siting and safety are assured. This work concentrates on deriving means of rapidly computing the effect of 'cold' and 'warm' ferromagnetic material in or around the superconducting magnet system, so as to facilitate the automated design of hybrid material MR magnets. A complete scheme for the direct calculation of the spherical harmonics of the magnetic field generated by a circular ring of ferromagnetic material is derived under the conditions of arbitrary external magnetizing fields. The magnetic field produced by the superconducting coils in the system is computed using previously developed methods. The final, hybrid algorithm is fast enough for use in large-scale optimization methods. The resultant fields from a practical example of a 4 T, clinical MRI magnet containing both superconducting coils and magnetic material are presented.
Abstract:
This study explores several important aspects of the management of new product development (NPD) in the Chinese steel industry. Specifically it explores NPD success factors, the importance of management functions to new product success and measures of new product success from the perspective of the industry's practitioners. Based on a sample of 190 industrial practitioners from 18 Chinese steel companies, the study provides a mixed picture as China makes the transition from a centrally-controlled to market-based economy. On one hand, respondents ranked understanding users' needs as the most important factor influencing the performance of the new products. Further, formulating new product strategy and strengthening market research are perceived as the most important managerial functions in NPD. However, technical performance measures are regarded as more important and are more widely used in industry than market-based or financial measures of success.
Abstract:
A high definition, finite difference time domain (HD-FDTD) method is presented in this paper. This new method allows the FDTD method to be efficiently applied over a very large frequency range including low frequencies, which are problematic for conventional FDTD methods. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent and has been verified against analytical solutions within the frequency range 50 Hz-1 GHz. As an example of the lower frequency range, the method has been applied to the problem of induced eddy currents in the human body resulting from the pulsed magnetic field gradients of an MRI system. The new method only requires approximately 0.3% of the source period to obtain an accurate solution. (C) 2003 Elsevier Science Inc. All rights reserved.
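For context, the conventional scheme that HD-FDTD improves upon can be sketched as a standard 1D FDTD (Yee) update; at low frequencies its cost grows because many timesteps are needed per source period, which is the inefficiency the abstract addresses. The grid size, source position and pulse parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fdtd_1d(n_cells=400, n_steps=200, src=100, t0=40.0, spread=12.0):
    """Conventional 1D FDTD in normalized units with the 'magic' Courant
    number S = c*dt/dx = 1, driven by a hard Gaussian source.
    Illustrative only -- not the HD-FDTD method of the paper."""
    ez = np.zeros(n_cells)   # electric field at integer grid nodes
    hy = np.zeros(n_cells)   # magnetic field at staggered half-nodes
    for n in range(n_steps):
        hy[:-1] += ez[1:] - ez[:-1]                   # update H from curl of E
        ez[1:] += hy[1:] - hy[:-1]                    # update E from curl of H
        ez[src] = np.exp(-((n - t0) / spread) ** 2)   # hard Gaussian source
    return ez

ez = fdtd_1d()
# with S = 1 the pulse travels one cell per step, so the right-going peak
# sits roughly (n_steps - t0) cells to the right of the source
peak = int(np.argmax(ez[150:])) + 150
```

At 50 Hz a scheme like this would need an enormous number of steps per period, whereas the abstract's method reports needing only about 0.3% of the source period.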
Abstract:
Most finite element packages use the Newmark algorithm for time integration of structural dynamics. Various algorithms have been proposed to better optimize the high frequency dissipation of this algorithm. Hulbert and Chung proposed both implicit and explicit forms of the generalized alpha method. The algorithms optimize high frequency dissipation effectively, and despite recent work on algorithms that possess momentum conserving/energy dissipative properties in a non-linear context, the generalized alpha method remains an efficient way to solve many problems, especially with adaptive timestep control. However, the implicit and explicit algorithms use incompatible parameter sets and cannot be used together in a spatial partition, whereas this can be done for the Newmark algorithm, as Hughes and Liu demonstrated, and for the HHT-alpha algorithm developed from it. The present paper shows that the explicit generalized alpha method can be rewritten so that it becomes compatible with the implicit form. All four algorithmic parameters can be matched between the explicit and implicit forms. An element interface between implicit and explicit partitions can then be used, analogous to that devised by Hughes and Liu to extend the Newmark method. The stability of the explicit/implicit algorithm is examined in a linear context and found to exceed that of the explicit partition. The element partition is significantly less dissipative of intermediate frequencies than one using the HHT-alpha method. The explicit algorithm can also be rewritten so that the discrete equation of motion evaluates forces from displacements and velocities found at the predicted mid-point of a cycle. Copyright (C) 2003 John Wiley & Sons, Ltd.
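As a concrete illustration of the integrator family discussed above, a minimal implicit generalized-alpha step for a linear single-degree-of-freedom oscillator can be sketched as follows. The spectral-radius parametrization is the standard Chung-Hulbert one; the oscillator values and the rho_inf choice are arbitrary illustrative assumptions.

```python
import numpy as np

def generalized_alpha(m, c, k, force, d0, v0, h, n_steps, rho_inf=0.9):
    """Implicit generalized-alpha integration for m*a + c*v + k*d = f(t).
    rho_inf in [0, 1] controls high-frequency dissipation (1 = none)."""
    am = (2.0 * rho_inf - 1.0) / (rho_inf + 1.0)
    af = rho_inf / (rho_inf + 1.0)
    gamma = 0.5 - am + af
    beta = 0.25 * (1.0 - am + af) ** 2
    d, v = d0, v0
    a = (force(0.0) - c * v0 - k * d0) / m            # consistent initial acceleration
    for n in range(n_steps):
        t_mid = (n + 1.0 - af) * h                    # mid-step force evaluation time
        dp = d + h * v + h * h * (0.5 - beta) * a     # displacement predictor
        vp = v + h * (1.0 - gamma) * a                # velocity predictor
        # equilibrium enforced at the alpha-shifted state, solved for a_{n+1}
        lhs = (1.0 - am) * m + (1.0 - af) * (c * gamma * h + k * beta * h * h)
        rhs = (force(t_mid) - am * m * a
               - c * ((1.0 - af) * vp + af * v)
               - k * ((1.0 - af) * dp + af * d))
        a_new = rhs / lhs                             # scalar 'linear solve'
        d = dp + beta * h * h * a_new
        v = vp + gamma * h * a_new
        a = a_new
    return d, v

# free vibration of an undamped unit-mass oscillator, omega = 2*pi (period T = 1)
w = 2.0 * np.pi
d, v = generalized_alpha(1.0, 0.0, w * w, lambda t: 0.0, 1.0, 0.0,
                         h=1.0 / 200.0, n_steps=200)
```

After one full period the displacement should return close to its initial value of 1, with only the small period and amplitude errors expected of a second-order dissipative scheme.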
Abstract:
Subcycling, or the use of different timesteps at different nodes, can be an effective way of improving the computational efficiency of explicit transient dynamic structural solutions. The method that has been most widely adopted uses a nodal partition, extending the central difference method, in which small timestep updates are performed interpolating on the displacement at neighbouring large timestep nodes. This approach leads to narrow bands of unstable timesteps or statistical stability. It can also be in error due to lack of momentum conservation on the timestep interface. The author has previously proposed energy conserving algorithms that avoid the first problem of statistical stability. However, these sacrifice accuracy to achieve stability. An approach to conserve momentum on an element interface by adding partial velocities is considered here. Applied to extend the central difference method, this approach is simple and has accuracy advantages. The method can be programmed by summing impulses of internal forces, evaluated using local element timesteps, in order to predict a velocity change at a node. However, it is still only statistically stable, so an adaptive timestep size is needed to monitor accuracy and to be adjusted if necessary. By replacing the central difference method with the explicit generalized alpha method, it is possible to gain stability by dissipating the high frequency response that leads to stability problems. However, coding the algorithm is less elegant, as the response depends on previous partial accelerations. Extension to implicit integration is shown to be impractical due to the neglect of remote effects of internal forces acting across a timestep interface. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
This paper presents a large amplitude vibration analysis of pre-stressed functionally graded material (FGM) laminated plates that are composed of a shear deformable functionally graded layer and two surface-mounted piezoelectric actuator layers. Nonlinear governing equations of motion are derived within the context of Reddy's higher-order shear deformation plate theory to account for transverse shear strain and rotary inertia. Due to the bending and stretching coupling effect, a nonlinear static problem is solved first to determine the initial stress state and pre-vibration deformations of the plate that is subjected to uniform temperature change, in-plane forces and applied actuator voltage. By adding an incremental dynamic state to the pre-vibration state, the differential equations that govern the nonlinear vibration behavior of pre-stressed FGM laminated plates are derived. A semi-analytical method that is based on one-dimensional differential quadrature and Galerkin technique is proposed to predict the large amplitude vibration behavior of the laminated rectangular plates with two opposite clamped edges. Linear vibration frequencies and nonlinear normalized frequencies are presented in both tabular and graphical forms, showing that the normalized frequency of the FGM laminated plate is very sensitive to vibration amplitude, out-of-plane boundary support, temperature change, in-plane compression and the side-to-thickness ratio. The CSCF and CFCF plates even change the inherent hard-spring characteristic to soft-spring behavior at large vibration amplitudes. (C) 2003 Elsevier B.V. All rights reserved.
Abstract:
A method is presented for calculating the currents and winding patterns required to design independent zonal and tesseral shim coils for magnetic resonance imaging. Both actively shielded and unshielded configurations are considered, and the region of interest can be located asymmetrically with respect to the coil's length. Streamline, target-field and Fourier-series methods are utilized. The desired target-field is specified at two cylindrical radii, on and inside a circular conducting cylinder of length 2L and radius a. The specification is over some asymmetric portion pL < z < qL of the coil's length (-1 < p < q < 1). Arbitrary functions are used in the outer sections, -L < z < pL and qL < z < L, to ensure continuity of the magnetic field across the entire length of the coil. The entire field is then periodically extended as a half-range cosine Fourier series about either end of the coil. The resultant Fourier coefficients are then substituted into the Fourier-series expressions for the internal and external magnetic fields, and current densities and stream functions on both the primary coil and shield. A contour plot of the stream function directly gives the required coil winding patterns. Spherical harmonic analysis and shielding analysis on field calculations from a ZX shim coil indicate that example designs and theory are well matched.
Abstract:
For dynamic simulations to be credible, verification of the computer code must be an integral part of the modelling process. This two-part paper describes a novel approach to verification through program testing and debugging. In Part 1, a methodology is presented for detecting and isolating coding errors using back-to-back testing. Residuals are generated by comparing the output of two independent implementations, in response to identical inputs. The key feature of the methodology is that a specially modified observer is created using one of the implementations, so as to impose an error-dependent structure on these residuals. Each error can be associated with a fixed and known subspace, permitting errors to be isolated to specific equations in the code. It is shown that the geometric properties extend to multiple errors in either one of the two implementations. Copyright (C) 2003 John Wiley & Sons, Ltd.
Abstract:
In Part 1 of this paper a methodology for back-to-back testing of simulation software was described. Residuals with error-dependent geometric properties were generated. A set of potential coding errors was enumerated, along with a corresponding set of feature matrices, which describe the geometric properties imposed on the residuals by each of the errors. In this part of the paper, an algorithm is developed to isolate the coding errors present by analysing the residuals. A set of errors is isolated when the subspace spanned by their combined feature matrices corresponds to that of the residuals. Individual feature matrices are compared to the residuals and classified as 'definite', 'possible' or 'impossible'. The status of 'possible' errors is resolved using a dynamic subset testing algorithm. To demonstrate and validate the testing methodology presented in Part 1 and the isolation algorithm presented in Part 2, a case study is presented using a model for biological wastewater treatment. Both single and simultaneous errors that are deliberately introduced into the simulation code are correctly detected and isolated. Copyright (C) 2003 John Wiley & Sons, Ltd.
Abstract:
The equipment used to measure magnetic fields and electric currents in residences is described. The instrumentation consisted of current transformers, magnetic field probes and locally designed and built signal conditioning modules. The data acquisition system was capable of unattended recording for extended time periods. The complete system was calibrated to verify its response to known physical inputs. (C) 2003 ISA-The Instrumentation Automation Society.
Abstract:
The effects of iron ions on the dielectric properties of lithium sodium phosphate glasses were studied by unconventional, fast and non-destructive microwave techniques. The dielectric constant (epsilon'), insertion loss (L) and microwave absorption spectra (microwave response) of the selected glass system xFe2O3·(1-x)(50P2O5·25Li2O·25Na2O), with x = 0, 3, 6, ..., 15 expressed in mol%, were investigated. The dielectric constant of the samples was measured at 9.00 GHz using the shorted-line method (SLM), giving a minimum value of epsilon' = 2.10 +/- 0.02 at room temperature and increasing with x according to a given law. A gradually increasing slope of epsilon' was observed in the temperature range 25 <= t <= 330 degrees C at 9.00 GHz. Insertion loss (measured at 9.00 GHz) and microwave energy attenuation at frequencies ranging from 8.00 to 12.00 GHz were also studied as a function of iron content in the glass samples. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
Three different types of maltodextrin encapsulated dehydrated blackberry fruit powders were obtained using vibrofluidized bed drying (VF), spray drying (SD), vacuum drying (VD), and freeze drying (FD). Moisture equilibrium data of blackberry pulp powders with 18% maltodextrin were determined at 20, 30, 40, and 50 degrees C using the static gravimetric method for the water activity range of 0.06-0.90. Experimental equilibrium moisture content data versus water activity were fit to the Guggenheim-Anderson-de Boer (GAB) model. Agreement was found between experimental and calculated values. The isosteric heat of sorption of water was determined using the Clausius-Clapeyron equation from the equilibrium data; isosteric heats of sorption were found to increase with increasing temperature and could be adjusted by an exponential relationship. For freeze dried, vibrofluidized, and vacuum dried pulp powder samples, the isosteric heats of sorption were lower (more negative) than those calculated for spray dried samples. The enthalpy-entropy compensation theory was applied to sorption isotherms and plots of Delta H versus Delta S provided the isokinetic temperatures, indicating an enthalpy-controlled sorption process.
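The GAB fitting step described above can be sketched as follows. The parameter values and the synthetic "measurements" are illustrative assumptions, not the blackberry-powder data from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def gab(aw, xm, c, k):
    """Guggenheim-Anderson-de Boer isotherm: equilibrium moisture content
    as a function of water activity aw (monolayer value xm, constants c, k)."""
    return xm * c * k * aw / ((1.0 - k * aw) * (1.0 - k * aw + c * k * aw))

# synthetic equilibrium data over the water-activity range used in the study
aw = np.linspace(0.06, 0.90, 15)
x_obs = gab(aw, 0.08, 15.0, 0.75)      # generated from assumed 'true' parameters
params, _ = curve_fit(gab, aw, x_obs, p0=(0.1, 10.0, 0.8))
xm_fit, c_fit, k_fit = params
```

Given isotherms fitted at several temperatures, the isosteric heat of sorption then follows from the Clausius-Clapeyron equation, i.e. from the slope of ln(aw) versus 1/T at constant moisture content.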
Abstract:
Increasing efforts exist in integrating different levels of detail in models of the cardiovascular system. For instance, one-dimensional representations are employed to model the systemic circulation. In this context, effective and black-box-type decomposition strategies for one-dimensional networks are needed, so as to: (i) employ domain decomposition strategies for large systemic models (1D-1D coupling) and (ii) provide the conceptual basis for dimensionally-heterogeneous representations (1D-3D coupling, among various possibilities). The strategy proposed in this article works for both of these two scenarios, though the several applications shown to illustrate its performance focus on the 1D-1D coupling case. A one-dimensional network is decomposed in such a way that each coupling point connects two (and not more) of the sub-networks. At each of the M connection points two unknowns are defined: the flow rate and pressure. These 2M unknowns are determined by 2M equations, since each sub-network provides one (non-linear) equation per coupling point. It is shown how to build the 2M x 2M non-linear system with arbitrary and independent choice of boundary conditions for each of the sub-networks. The idea is then to solve this non-linear system until convergence, which guarantees strong coupling of the complete network. In other words, if the non-linear solver converges at each time step, the solution coincides with what would be obtained by monolithically modeling the whole network. The decomposition thus imposes no stability restriction on the choice of the time step size. Effective iterative strategies for the non-linear system that preserve the black-box character of the decomposition are then explored. Several variants of matrix-free Broyden's and Newton-GMRES algorithms are assessed as numerical solvers by comparing their performance on sub-critical wave propagation problems which range from academic test cases to realistic cardiovascular applications. A specific variant of Broyden's algorithm is identified and recommended on the basis of its computer cost and reliability. (C) 2010 Elsevier B.V. All rights reserved.
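The interface-coupling idea can be illustrated on a toy version of the problem: one coupling point with flow rate Q and pressure P as the two unknowns, each "sub-network" supplying one nonlinear residual equation, and the resulting system solved by a Broyden iteration that only ever calls the residual as a black box. The residual functions below are invented for illustration and are not the 1D haemodynamics equations of the paper.

```python
import numpy as np

def residual(x):
    """Toy 2x2 interface system: x = [Q, P] at one coupling point.
    Each equation stands in for one black-box sub-network response."""
    q, p = x
    r1 = p - (10.0 - q - 0.1 * q * q)   # 'sub-network A': pressure produced for flow q
    r2 = q - 0.5 * p                    # 'sub-network B': flow produced for pressure p
    return np.array([r1, r2])

def broyden(f, x0, tol=1e-10, max_iter=50, eps=1e-7):
    """Broyden's (good) method, seeded with a finite-difference Jacobian
    so that f is only ever evaluated as a black box."""
    x = np.array(x0, dtype=float)
    fx = f(x)
    n = len(x)
    B = np.empty((n, n))                # initial Jacobian by finite differences
    for j in range(n):
        e = np.zeros(n); e[j] = eps
        B[:, j] = (f(x + e) - fx) / eps
    for _ in range(max_iter):
        dx = np.linalg.solve(B, -fx)
        x_new = x + dx
        f_new = f(x_new)
        if np.linalg.norm(f_new) < tol:
            return x_new
        df = f_new - fx
        B += np.outer(df - B @ dx, dx) / (dx @ dx)   # rank-one Broyden update
        x, fx = x_new, f_new
    return x

q, p = broyden(residual, [1.0, 1.0])
```

Convergence of the nonlinear iteration at each time step is what gives strong coupling: the converged (Q, P) satisfy both sub-network equations simultaneously, just as a monolithic solve would.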
Abstract:
In this article we address decomposition strategies especially tailored to perform strong coupling of dimensionally heterogeneous models, under the hypothesis that one wants to solve each submodel separately and implement the interaction between subdomains by boundary conditions alone. The novel methodology takes full advantage of the small number of interface unknowns in this kind of problem. Existing algorithms can be viewed as variants of the 'natural' staggered algorithm in which each domain transfers function values to the other, and receives fluxes (or forces), and vice versa. This natural algorithm is known as Dirichlet-to-Neumann in the Domain Decomposition literature. Essentially, we propose a framework in which this algorithm is equivalent to applying Gauss-Seidel iterations to a suitably defined (linear or nonlinear) system of equations. It is then immediate to switch to other iterative solvers such as GMRES or other Krylov-based methods, which we assess through numerical experiments showing the significant gain that can be achieved. Indeed, the benefit is that an extremely flexible, automatic coupling strategy can be developed, which in addition leads to iterative procedures that are parameter-free and rapidly converging. Further, in linear problems they have the finite termination property. Copyright (C) 2009 John Wiley & Sons, Ltd.
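A minimal scalar illustration of this point, using a toy problem invented for the purpose (two 1D steady heat-conduction subdomains sharing one interface temperature): the staggered Dirichlet-to-Neumann sweep is an affine fixed-point map on the interface unknown, so recasting it as a system of equations and solving that system directly, which is what a Krylov method does with finite termination in the linear case, replaces many Gauss-Seidel-style sweeps with a couple of black-box evaluations.

```python
def dn_map(t_interface):
    """One staggered Dirichlet-to-Neumann sweep for two 1D conduction
    subdomains on [0, 0.5] and [0.5, 1] with u(0) = 0 and u(1) = 1.
    Subdomain 1 (Dirichlet data at the interface) returns a flux; subdomain 2
    (Neumann data) returns an updated interface temperature.
    The conductivities k1, k2 are arbitrary illustrative values."""
    k1, k2 = 2.0, 2.5
    flux = k1 * (t_interface - 0.0) / 0.5        # Dirichlet solve on subdomain 1
    return 1.0 - 0.5 * flux / k2                 # Neumann solve on subdomain 2

# (a) the natural staggered (Gauss-Seidel-like) iteration
t, sweeps = 0.0, 0
while abs(dn_map(t) - t) > 1e-10:
    t = dn_map(t)
    sweeps += 1

# (b) the same map viewed as the equation t - dn_map(t) = 0; because the map
# is affine, two black-box probes identify it exactly -- the scalar analogue
# of a Krylov method's finite termination on linear problems
g = dn_map(0.0)                                   # constant part of the map
m = dn_map(1.0) - g                               # linear part (slope)
t_direct = g / (1.0 - m)                          # exact interface temperature
```

With these values the staggered iteration needs on the order of a hundred sweeps to reach the tolerance, while the direct view recovers the same interface temperature from two subdomain evaluations.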