Abstract:
This work addresses the optimal design of MRI magnets containing both superconducting coils and ferromagnetic rings, and is directed at the automated design of MRI magnet systems containing superconducting wire and both 'cold' and 'warm' iron. Details of the optimization procedure are given, and the results show combined superconducting and iron-material MRI magnets with excellent field characteristics. Strong, homogeneous central magnetic fields are produced with little stray or external field leakage. The field calculations are performed using a semi-analytical method for both current-coil and iron-material sources. Design examples for symmetric, open and asymmetric clinical MRI magnets containing both superconducting coils and ferromagnetic material are presented.
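A minimal sketch of one building block that semi-analytic coil-field computations commonly rest on, the on-axis field of a circular current loop (the paper's actual method and any iron-source terms are not reproduced here; the loop current and radius below are illustrative):

    import numpy as np

    mu0 = 4e-7 * np.pi   # vacuum permeability [T m / A]

    def loop_Bz_on_axis(I, a, z):
        """Axial field (T) of a circular loop of radius a (m) carrying
        current I (A), at axial distance z (m) from the loop plane."""
        return mu0 * I * a**2 / (2.0 * (a**2 + z**2) ** 1.5)

    # superposing several coaxial loops would let one probe central homogeneity
    z = np.linspace(-0.1, 0.1, 5)
    print(loop_Bz_on_axis(1.0e6, 0.5, z))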
Abstract:
Numerical modeling of the eddy currents induced in the human body by the pulsed field gradients in MRI presents a difficult computational problem. It requires an efficient and accurate computational method for high spatial resolution analyses at relatively low input frequencies. In this article, a new technique is described which allows the finite difference time domain (FDTD) method to be efficiently applied over a very large frequency range, including low frequencies, which conventional FDTD-based methods cannot handle efficiently. A method of implementing streamline gradients in FDTD is presented, along with comparative analyses showing that correct source injection in the FDTD simulation plays a crucial role in obtaining accurate solutions. In particular, making use of the derivative of the input source waveform is shown to provide distinct accuracy benefits over direct source injection. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent, and the source injection method has been verified against examples with analytical solutions. Results are presented showing the spatial distribution of gradient-induced electric fields and eddy currents in a complete body model.
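A minimal sketch of the derivative source-injection idea, using an explicit-Euler first-order system as a stand-in for one FDTD field update (the waveform, time step, and time constant are illustrative assumptions): for a linear time-invariant solver, the response to a source s(t) equals the time integral of the response to s'(t), so the derivative of the waveform can be injected and the output integrated afterwards.

    import numpy as np

    dt = 1e-6                          # time step [s]
    t = np.arange(0, 5e-3, dt)         # simulation window
    s = np.sin(2 * np.pi * 1e3 * t)    # 1 kHz source waveform
    ds = np.gradient(s, dt)            # derivative of the source

    def run(source, dt, tau=1e-4):
        """Explicit-Euler update of a first-order LTI system
        (a stand-in for one FDTD field update)."""
        y = np.zeros_like(source)
        for n in range(1, len(source)):
            y[n] = y[n - 1] + dt * (source[n - 1] - y[n - 1]) / tau
        return y

    direct = run(s, dt)                          # direct source injection
    derivative = np.cumsum(run(ds, dt)) * dt     # derivative injection + integration
    print(np.max(np.abs(direct - derivative)))   # agreement to discretization error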
Abstract:
In this paper the diffusion and flow of carbon tetrachloride, benzene and n-hexane through a commercial activated carbon are studied by a differential permeation method. The pressure range covered extends from very low pressures to pressures where significant capillary condensation occurs. Helium, as a non-adsorbing gas, is used to determine the characteristics of the porous medium. For adsorbing gases and vapors, the motion of adsorbed molecules in small pores gives rise to a sharp increase in permeability at very low pressures. The interplay between decreasing permeability, due to the saturation of small pores with adsorbed molecules, and increasing permeability, due to viscous flow in larger pores with pressure, can lead to a minimum in the plot of total permeability versus pressure. This phenomenon is observed for n-hexane at 30 °C. At relative pressures of 0.1-0.8, where gaseous viscous flow dominates, the permeability is a linear function of pressure. Since activated carbon has a wide pore size distribution, the mobility mechanism of these adsorbed molecules differs from pore to pore. In very small pores, where adsorbate molecules fill the pore, the permeability decreases with increasing pressure, while in intermediate pores the permeability increases with pressure due to the increasing build-up of layers of adsorbed molecules. For even larger pores, the transport is mostly due to diffusion and flow of free molecules, which gives rise to permeability that is linear in pressure. (C) 2002 Elsevier Science Ltd. All rights reserved.
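A toy illustration of the competing mechanisms described above (the functional forms and constants are our assumptions, not the paper's model): an adsorbed-phase contribution that decays as small pores saturate, plus a viscous term growing linearly with pressure, produces a minimum in total permeability.

    import numpy as np

    P = np.linspace(0.01, 1.0, 200)    # reduced pressure
    B_ads = 5.0 * np.exp(-8.0 * P)     # adsorbed-phase transport, saturating pores
    B_visc = 2.0 * P                   # viscous flow in larger pores, ~linear in P
    B_total = B_ads + B_visc
    print("minimum at reduced pressure", P[np.argmin(B_total)])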
Abstract:
A new thermodynamic approach has been developed in this paper to analyze adsorption in slit-like pores. The equilibrium is described by two thermodynamic conditions: the Helmholtz free energy must be minimal, and the grand potential functional at that minimum must be negative. This approach has led to local isotherms that describe adsorption in the form of a single layer or two layers near the pore walls. In narrow pores, local isotherms have one step that can be either very sharp but continuous, or discontinuous and bench-like over a definite range of pore widths; the latter reflects a so-called 0 → 1 monolayer transition. In relatively wide pores, local isotherms have two steps: the first corresponds to the appearance of two layers near the pore walls, while the second corresponds to the filling of the space between these layers. All features of the local isotherms agree with results obtained from density functional theory and Monte Carlo simulations. The approach is used for determining pore size distributions of carbon materials. We illustrate this with benzene adsorption data on activated carbon at 20, 50, and 80 °C, argon adsorption on activated carbon Norit ROX at 87.3 K, and nitrogen adsorption on activated carbon Norit R1 at 77.3 K.
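In standard density-functional notation (our notation for illustration; the paper's own symbols may differ), the two equilibrium conditions can be written as

    \frac{\delta F[\rho]}{\delta \rho(\mathbf{r})} = \mu ,
    \qquad
    \Omega[\rho^{*}] = F[\rho^{*}] - \mu \int \rho^{*}(\mathbf{r})\, d\mathbf{r} < 0 ,

i.e., the density profile \rho^{*} minimizes the Helmholtz free energy at fixed chemical potential \mu, and the grand potential evaluated at that minimum is negative, so the adsorbed phase is thermodynamically favored over the empty pore.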
Abstract:
Supervision probably does have benefits both for the maintenance and improvement of clinical skills and for job satisfaction, but the data are very thin and almost non-existent in the area of alcohol and other drug services. Because of the potential complexity of objectives and roles in supervision, a structured agreement appears to be an important part of an effective supervision relationship. Because sessions can easily degenerate into unstructured socialization, agendas and session objectives may also be important. While a working alliance based on mutual respect and trust is an essential base for the supervision relationship, procedures for direct observation of clinical skills, demonstration of new procedures and skills practice with detailed feedback appear critical to supervision's impact on practice. To ensure effective supervision, not only must there be a minimum level of personnel and resources, but also compatibility with the values and procedures of management and staff, access to supervision training and consultation, and sufficient incentives to ensure it continues.
Abstract:
Background: Augmentation strategies in schizophrenia treatment remain an important issue because, despite the introduction of several new antipsychotics, many patients remain treatment resistant. The aim of this study was to undertake a systematic review and meta-analysis of the safety and efficacy of one frequently used adjunctive compound: carbamazepine. Data sources and study selection: Randomized controlled trials comparing carbamazepine (as a sole or adjunctive compound) with placebo or no intervention in participants with schizophrenia or schizoaffective disorder were searched for by accessing 7 electronic databases, cross-referencing publications cited in pertinent studies, and contacting drug companies that manufacture carbamazepine. Method: The identified studies were independently inspected and their quality assessed by 2 reviewers. Because the study results were generally incompletely reported, original patient data were requested from the authors; data were received for 8 of the 10 randomized controlled trials included in the present analysis, allowing for a reanalysis of the primary data. Dichotomous variables were analyzed using the Mantel-Haenszel odds ratio and continuous data were analyzed using standardized mean differences, both specified with 95% confidence intervals. Results: Ten studies (total N = 283 subjects) were included. Carbamazepine was not effective in preventing relapse in the only randomized controlled trial that compared carbamazepine monotherapy with placebo. Carbamazepine tended to be less effective than perphenazine in the only trial comparing carbamazepine with an antipsychotic. Although there was a trend indicating a benefit from carbamazepine as an adjunct to antipsychotics, this trend did not reach statistical significance. Conclusion: At present, this augmentation strategy cannot be recommended for routine use. The most promising targets for future trials are patients with excitement, aggression, and schizoaffective disorder, bipolar type.
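A minimal sketch of the Mantel-Haenszel pooled odds ratio used for the dichotomous outcomes (the 2x2 counts below are illustrative, not the trial data): OR_MH = sum_i(a_i d_i / n_i) / sum_i(b_i c_i / n_i), summed over the strata (trials).

    # (a, b, c, d) = (drug responders, drug non-responders,
    #                 placebo responders, placebo non-responders) per trial
    tables = [(12, 8, 9, 11), (7, 13, 5, 15), (10, 10, 8, 12)]

    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    print(f"Mantel-Haenszel pooled OR = {num / den:.2f}")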
Abstract:
A range of lasers is now available for use in dentistry. This paper summarizes key current and emerging applications for lasers in clinical practice. A major diagnostic application of low-power lasers is the detection of caries, using fluorescence elicited from hydroxyapatite or from bacterial by-products. Laser fluorescence is an effective method for detecting and quantifying incipient occlusal and cervical carious lesions, and with further refinement could be used in the same manner for proximal lesions. Photoactivated dye techniques have been developed which use low-power lasers to elicit a photochemical reaction. Photoactivated dye techniques can be used to disinfect root canals, periodontal pockets, cavity preparations and sites of peri-implantitis. Using similar principles, more powerful lasers can be used for photodynamic therapy in the treatment of malignancies of the oral mucosa. Laser-driven photochemical reactions can also be used for tooth whitening. In combination with fluoride, laser irradiation can improve the resistance of tooth structure to demineralization, an application of particular benefit for susceptible sites in high caries-risk patients. Laser technology for caries removal, cavity preparation and soft tissue surgery is at a high state of refinement, having had several decades of development up to the present time. Used in conjunction with or as a replacement for traditional methods, specific laser technologies are expected to become an essential component of contemporary dental practice over the next decade.
Abstract:
Using benthic habitat data from the Florida Keys (USA), we demonstrate how siting algorithms can help identify potential networks of marine reserves that comprehensively represent target habitat types. We applied a flexible optimization tool, simulated annealing, to represent a fixed proportion of different marine habitat types within a geographic area. We investigated the relative influence of spatial information, planning-unit size, detail of habitat classification, and magnitude of the overall conservation goal on the resulting network scenarios. With this method, we were able to identify many adequate reserve systems that met the conservation goals, e.g., representing at least 20% of each conservation target (i.e., habitat type) while fulfilling the overall aim of minimizing the system area and perimeter. One of the most useful types of information provided by this siting algorithm comes from an irreplaceability analysis, which is a count of the number of times unique planning units were included in reserve system scenarios. This analysis indicated that many different combinations of sites produced networks that met the conservation goals. While individual 1 km² areas were fairly interchangeable, the irreplaceability analysis highlighted larger areas within the planning region that were chosen consistently to meet the goals incorporated into the algorithm. Additionally, we found that reserve systems designed with a high degree of spatial clustering tended to have considerably less perimeter and larger overall areas in reserve, a configuration that may be preferable particularly for sociopolitical reasons. This exercise illustrates the value of using the simulated annealing algorithm to help site marine reserves: the approach makes efficient use of available resources, can be used interactively by conservation decision makers, and offers biologically suitable alternative networks from which an effective system of marine reserves can be crafted.
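A minimal sketch of simulated-annealing reserve selection in the spirit of the algorithm described above (the toy 10 x 10 grid, 3 habitat types and random habitat amounts are our assumptions; this is not the authors' software): minimize selected area plus a perimeter penalty, subject to representing at least 20% of each habitat, enforced through a shortfall penalty.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 10                                    # grid side; N*N planning units
    habitat = rng.random((3, N, N))           # amount of each habitat per unit
    target = 0.20 * habitat.sum(axis=(1, 2))  # represent >= 20% of each habitat

    def cost(sel, w_short=100.0, w_perim=1.0):
        held = (habitat * sel).sum(axis=(1, 2))
        shortfall = np.clip(target - held, 0, None).sum()
        # perimeter ~ number of selected/unselected edges on the grid
        perim = (np.abs(np.diff(sel, axis=0)).sum()
                 + np.abs(np.diff(sel, axis=1)).sum())
        return sel.sum() + w_perim * perim + w_short * shortfall

    sel = np.zeros((N, N), dtype=int)
    c, T = cost(sel), 10.0
    for _ in range(20000):
        i, j = rng.integers(N, size=2)
        sel[i, j] ^= 1                        # propose flipping one unit
        c_new = cost(sel)
        if c_new < c or rng.random() < np.exp((c - c_new) / T):
            c = c_new                         # accept the move (Metropolis rule)
        else:
            sel[i, j] ^= 1                    # reject: undo the flip
        T *= 0.9997                           # geometric cooling schedule

    met = ((habitat * sel).sum(axis=(1, 2)) >= target).all()
    print(sel.sum(), "units selected; targets met:", bool(met))

Re-running this with different random seeds and counting how often each unit appears in the accepted solutions would give a crude analogue of the irreplaceability analysis mentioned above.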
Abstract:
In the present survey, we identified most of the genes involved in the receptor tyrosine kinase (RTK), mitogen-activated protein kinase (MAPK) and Notch signaling pathways in the draft genome sequence of Ciona intestinalis, a basal chordate. Compared to vertebrates, most of the genes found in the Ciona genome had fewer paralogues, although several genes, including ephrin, Eph and fringe, appear to have multiplied or duplicated independently in the ascidian genome. In contrast, some genes, including the kit/flt, PDGF and Trk receptor tyrosine kinases, were not found in the present survey, suggesting that these genes are innovations of the vertebrate lineage or were lost in the ascidian lineage. The gene set identified in the present analysis provides insight into the genes for the RTK, MAPK and Notch signaling pathways in the ancient chordate genome, and thereby into how chordates evolved these signaling pathways.
Abstract:
Many large-scale stochastic systems, such as telecommunications networks, can be modelled using a continuous-time Markov chain. However, it is frequently the case that a satisfactory analysis of their time-dependent, or even equilibrium, behaviour is impossible. In this paper, we propose a new method of analysing Markovian models, whereby the existing transition structure is replaced by a more amenable one. Using rates of transition given by the equilibrium expected rates of the corresponding transitions of the original chain, we are able to approximate its behaviour. We present two formulations of the idea of expected rates. The first provides a method for analysing time-dependent behaviour, while the second provides a highly accurate means of analysing equilibrium behaviour. We illustrate our approach with reference to a variety of models, giving particular attention to queueing and loss networks. (C) 2003 Elsevier Ltd. All rights reserved.
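A minimal sketch of one ingredient of the expected-rates idea (the M/M/1/K example and its rates are our assumptions; the paper's two formulations are not reproduced here): solve pi Q = 0 for the equilibrium distribution of a small birth-death chain, then form the equilibrium expected rate of a transition class, which would replace the state-dependent rate in the simplified chain.

    import numpy as np

    lam, mu, K = 1.0, 1.5, 5         # arrival rate, service rate, buffer size
    Q = np.zeros((K + 1, K + 1))     # generator of an M/M/1/K queue
    for n in range(K + 1):
        if n < K:
            Q[n, n + 1] = lam        # arrival
        if n > 0:
            Q[n, n - 1] = mu         # service completion
        Q[n, n] = -Q[n].sum()

    # equilibrium: pi Q = 0 with sum(pi) = 1, solved by least squares
    A = np.vstack([Q.T, np.ones(K + 1)])
    b = np.zeros(K + 2); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)

    # equilibrium expected service rate: mu weighted by P(queue non-empty)
    expected_service_rate = mu * pi[1:].sum()
    print(pi.round(4), expected_service_rate)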
Abstract:
A supersweet sweet corn hybrid, Pacific H5, was planted at weekly intervals (P-1 to P-5) in spring in South-Eastern Queensland. All plantings were harvested at the same time, resulting in immature seed for the last planting (P-5). The seed was handled by three methods: manual harvest and processing (M-1), manual harvest and mechanical processing (M-2), and mechanical harvest and processing (M-3), and later graded into three sizes (small, medium and large). After eight months' storage at 12-14 °C, seed was maintained at 30 °C with bimonthly monitoring of germination for fourteen months, and seed damage was assessed at the end of this period. Seed quality was greatest for M-1 and was reduced by mechanical processing but not by mechanical harvesting. Large and medium seed had higher germination due to greater storage reserves, but also more seed damage during mechanical processing. Immature seed from premature harvest (P-5) had poor quality, especially when processed mechanically, reinforcing the need for harvested seed to be physiologically mature.
Abstract:
A high-definition finite difference time domain (HD-FDTD) method is presented in this paper. This new method allows the FDTD method to be efficiently applied over a very large frequency range, including low frequencies, which are problematic for conventional FDTD methods. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent and has been verified against analytical solutions within the frequency range 50 Hz-1 GHz. As an example of the lower frequency range, the method has been applied to the problem of eddy currents induced in the human body by the pulsed magnetic field gradients of an MRI system. The new method requires only approximately 0.3% of the source period to obtain an accurate solution. (C) 2003 Elsevier Science Inc. All rights reserved.
Abstract:
Blasting has been the most frequently used method for rock breakage since black powder was first used to fragment rocks, more than two hundred years ago. This paper is an attempt to reassess standard design techniques used in blasting by providing an alternative approach to blast design. The new approach has been termed asymmetric blasting. Based on real-time rock recognition provided by measurement-while-drilling (MWD) techniques, asymmetric blasting is an approach that deals with rock properties as they occur in nature, i.e., randomly and asymmetrically spatially distributed. It is well accepted that the performance of basic mining operations, such as excavation and crushing, relies on a broken rock mass which has been pre-conditioned by the blast. By pre-conditioned we mean well fragmented, sufficiently loose and with an adequate muckpile profile. These muckpile characteristics affect loading and hauling [1]. The influence of blasting does not end there. Under the Mine to Mill paradigm, blasting has significant leverage on downstream operations such as crushing and milling, and there is a body of evidence that blasting affects mineral liberation [2]. Thus, the importance of blasting has expanded from simply fragmenting and loosening the rock mass to a broader role that encompasses many aspects of mining and affects the cost of the end product. A new approach is proposed in this paper which facilitates this trend: to treat non-homogeneous media (the rock mass) in a non-homogeneous manner (an asymmetrical pattern) in order to achieve an optimal result (in terms of muckpile size distribution). It is postulated that there are no logical reasons, besides the current lack of means to infer rock mass properties in the blind zones of the bench and on-site precedents, for drilling a regular blast pattern over a rock mass that is inherently heterogeneous. Real and theoretical examples of such a method are presented.
Abstract:
Most finite element packages use the Newmark algorithm for time integration of structural dynamics. Various algorithms have been proposed to better optimize the high-frequency dissipation of this algorithm. Hulbert and Chung proposed both implicit and explicit forms of the generalized-alpha method. These algorithms optimize high-frequency dissipation effectively, and despite recent work on algorithms that possess momentum-conserving/energy-dissipative properties in a non-linear context, the generalized-alpha method remains an efficient way to solve many problems, especially with adaptive timestep control. However, the implicit and explicit algorithms use incompatible parameter sets and cannot be used together in a spatial partition, whereas this can be done for the Newmark algorithm, as Hughes and Liu demonstrated, and for the HHT-alpha algorithm developed from it. The present paper shows that the explicit generalized-alpha method can be rewritten so that it becomes compatible with the implicit form. All four algorithmic parameters can be matched between the explicit and implicit forms. An element interface between implicit and explicit partitions can then be used, analogous to that devised by Hughes and Liu to extend the Newmark method. The stability of the explicit/implicit algorithm is examined in a linear context and found to exceed that of the explicit partition. The element partition is significantly less dissipative of intermediate frequencies than one using the HHT-alpha method. The explicit algorithm can also be rewritten so that the discrete equation of motion evaluates forces from displacements and velocities found at the predicted mid-point of a cycle. Copyright (C) 2003 John Wiley & Sons, Ltd.
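A minimal sketch of the implicit generalized-alpha update for a linear single-degree-of-freedom oscillator m*a + c*v + k*d = f(t) (the rho_inf parameterization, test problem and coefficients are illustrative choices, not taken from the paper):

    import numpy as np

    m, c, k = 1.0, 0.1, 100.0
    f = lambda t: np.sin(5.0 * t)
    dt, nsteps = 0.01, 1000

    rho = 0.8                            # target spectral radius at high frequency
    am = (2 * rho - 1) / (rho + 1)       # alpha_m
    af = rho / (rho + 1)                 # alpha_f
    beta = 0.25 * (1 - am + af) ** 2
    gamma = 0.5 - am + af

    d, v, a = 0.0, 0.0, f(0.0) / m       # consistent initial acceleration
    t = 0.0
    for _ in range(nsteps):
        t_new = t + dt
        # Newmark predictors (terms not involving the unknown a_new)
        d_pred = d + dt * v + dt**2 * (0.5 - beta) * a
        v_pred = v + dt * (1 - gamma) * a
        # balance at the generalized mid-point:
        # m[(1-am)a_new + am*a] + c[(1-af)v_new + af*v]
        #   + k[(1-af)d_new + af*d] = (1-af)f(t_new) + af*f(t)
        lhs = (m * (1 - am) + c * (1 - af) * gamma * dt
               + k * (1 - af) * beta * dt**2)
        rhs = ((1 - af) * f(t_new) + af * f(t)
               - m * am * a
               - c * ((1 - af) * v_pred + af * v)
               - k * ((1 - af) * d_pred + af * d))
        a_new = rhs / lhs
        d = d_pred + beta * dt**2 * a_new    # corrector: displacement
        v = v_pred + gamma * dt * a_new      # corrector: velocity
        a, t = a_new, t_new

    print(d, v)   # state after nsteps steps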
Abstract:
Subcycling, or the use of different timesteps at different nodes, can be an effective way of improving the computational efficiency of explicit transient dynamic structural solutions. The method that has been most widely adopted uses a nodal partition, extending the central difference method, in which small-timestep updates are performed by interpolating on the displacement at neighbouring large-timestep nodes. This approach leads to narrow bands of unstable timesteps, or statistical stability. It can also be in error due to lack of momentum conservation on the timestep interface. The author has previously proposed energy-conserving algorithms that avoid the first problem of statistical stability; however, these sacrifice accuracy to achieve stability. An approach to conserve momentum on an element interface by adding partial velocities is considered here. Applied to extend the central difference method, this approach is simple and has accuracy advantages. The method can be programmed by summing impulses of internal forces, evaluated using local element timesteps, in order to predict a velocity change at a node. However, it is still only statistically stable, so an adaptive timestep size is needed to monitor accuracy and to be adjusted if necessary. By replacing the central difference method with the explicit generalized-alpha method, it is possible to gain stability by dissipating the high-frequency response that leads to stability problems. However, coding the algorithm is less elegant, as the response depends on previous partial accelerations. Extension to implicit integration is shown to be impractical due to the neglect of remote effects of internal forces acting across a timestep interface. (C) 2002 Elsevier Science B.V. All rights reserved.
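A schematic sketch of the impulse-summing idea described above (the mass, timesteps and force histories are invented for illustration; this is not the author's algorithm): the velocity change of a node over one major step is predicted by accumulating internal-force impulses from each attached element, each evaluated on that element's own local timestep.

    import numpy as np

    m = 2.0                        # nodal mass
    dt_major = 1.0e-3              # major (interface) timestep
    elements = [                   # (local timestep, internal force history) per element
        (1.0e-3, lambda t: -50.0 * np.sin(200 * t)),    # large-timestep element
        (0.25e-3, lambda t: 80.0 * np.cos(900 * t)),    # small-timestep element
    ]

    impulse = 0.0
    for dt_e, force in elements:
        nsub = int(round(dt_major / dt_e))              # subcycles for this element
        # midpoint evaluation of the element force on its local timestep
        impulse += sum(force((i + 0.5) * dt_e) * dt_e for i in range(nsub))

    dv = impulse / m               # predicted velocity change at the node
    print(dv)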