946 results for Fast virtual stenting method
Abstract:
Cascaded multilevel inverter-based Static Var Generators (SVGs) are FACTS devices introduced for active and reactive power flow control. They eliminate the need for zigzag transformers and give a fast response. However, as regards their application to flicker reduction for Electric Arc Furnace (EAF) loads, existing multilevel inverter-based SVGs suffer from the following disadvantages: (1) to control the reactive power, an off-line calculation of the Modulation Index (MI) is required to adjust the SVG output voltage, which slows down the transient response to changes in reactive power; and (2) random active power exchange may unbalance the d.c.-link capacitor voltages of the H-Bridge Inverters (HBIs) when reactive power control is performed by adjusting the power angle δ alone. To resolve these problems, a mathematical model of an 11-level cascaded SVG was developed. A new control strategy involving both the MI and the power angle δ is proposed. A selective harmonic elimination method (SHEM) is adopted for the switching pattern calculations. To shorten the response time and simplify the control system, feedforward neural networks are used for on-line computation of the switching patterns instead of look-up tables. The proposed controller updates the MI and switching patterns once each line cycle according to the sampled reactive power Qs. Meanwhile, the remaining reactive power (not covered by the MI) and the reactive power variations during the line cycle are continuously compensated by adjusting the power angle δ. The scheme senses both variables, MI and δ, and acts through the inverter switching angles θi. As a result, the proposed SVG is expected to give a faster and more accurate response than present designs allow. In support of the proposal, a mathematical model for reactive power distribution and a sensitivity matrix for voltage regulation assessment are given, and MATLAB simulation results are provided to validate the proposed schemes.
The performance with non-linear, time-varying loads is analysed, with reference to a general review of flicker, of methods for measuring flicker due to arc furnaces, and of means for its mitigation.
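The SHEM step mentioned in the abstract amounts to solving a small set of transcendental equations for the quarter-wave switching angles. A minimal sketch for a 5-cell (11-level) bridge follows; the modulation-index convention, the harmonics chosen for elimination, and the initial guess are illustrative assumptions, not values from the text:

```python
# Illustrative Selective Harmonic Elimination (SHE) sketch: solve for five
# switching angles that set the fundamental and null harmonics 5, 7, 11, 13
# of a staircase waveform.  Conventions and the initial guess are assumed.
import numpy as np
from scipy.optimize import fsolve

def she_residuals(theta, mi, cells=5):
    # Harmonic n of a quarter-wave staircase is proportional to
    # sum(cos(n * theta_i)); the fundamental is set to cells * mi.
    eqs = [np.sum(np.cos(theta)) - cells * mi]
    for n in (5, 7, 11, 13):            # low-order odd, non-triplen harmonics
        eqs.append(np.sum(np.cos(n * theta)))
    return eqs

mi = 0.8                                # per-unit modulation index (assumed)
guess = np.linspace(0.1, 1.4, 5)        # ascending angles in (0, pi/2), radians
theta = fsolve(she_residuals, guess, args=(mi,))
print(np.degrees(np.sort(theta)))
```

In the paper's scheme, a feedforward network would be trained to map MI to these angles so that no on-line root-finding (or look-up table) is needed.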
Abstract:
The article deals with the CFD modelling of fast pyrolysis of biomass in an Entrained Flow Reactor (EFR). The Lagrangian approach is adopted for the particle tracking, while the flow of the inert gas is treated with the standard Eulerian method for gases. The model includes the thermal degradation of biomass to char with simultaneous evolution of gases and tars from a discrete biomass particle. The chemical reactions are represented using a two-stage, semi-global model. The radial distribution of the pyrolysis products is predicted as well as their effect on the particle properties. The convective heat transfer to the surface of the particle is computed using the Ranz-Marshall correlation.
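The Ranz-Marshall correlation named above can be sketched as follows; the gas properties and particle size used here are illustrative values, not data from the article:

```python
# Minimal sketch of the Ranz-Marshall correlation for convective heat
# transfer to a sphere: Nu = 2 + 0.6 * Re^0.5 * Pr^(1/3).
def ranz_marshall_h(d_p, u_rel, rho_g, mu_g, k_g, cp_g):
    """Heat-transfer coefficient h [W/(m^2 K)] for a sphere of diameter d_p."""
    re = rho_g * u_rel * d_p / mu_g          # particle Reynolds number
    pr = cp_g * mu_g / k_g                   # gas Prandtl number
    nu = 2.0 + 0.6 * re**0.5 * pr**(1.0 / 3.0)
    return nu * k_g / d_p

# Hot inert gas around a 0.5 mm particle (illustrative numbers)
h = ranz_marshall_h(d_p=0.5e-3, u_rel=1.0, rho_g=0.42, mu_g=3.5e-5,
                    k_g=0.055, cp_g=1120.0)
print(round(h, 1))
```

Note the stagnant limit: with zero slip velocity the correlation reduces to Nu = 2, i.e. pure conduction around the sphere.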
Abstract:
The pyrolysis of a freely moving cellulosic particle inside a continuously fed (41.7 mg/s) fluid bed reactor subjected to convective heat transfer is modelled. The Lagrangian approach is adopted for the particle tracking inside the reactor, while the flow of the inert gas is treated with the standard Eulerian method for gases. The model incorporates the thermal degradation of cellulose to char with simultaneous evolution of gases and vapours from discrete cellulosic particles. The reaction kinetics is represented according to the Broido–Shafizadeh scheme. The convective heat transfer to the surface of the particle is solved by two means, namely the Ranz–Marshall correlation and the limit case of infinitely fast external heat transfer rates. The results from both approaches are compared and discussed. The effect of the different heat transfer rates on the discrete phase trajectory is also considered.
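The Broido–Shafizadeh scheme referenced above decomposes cellulose to an "active" intermediate, which then reacts in parallel to tar vapours and to char plus gas. A minimal integration sketch follows; the rate constants and char fraction are placeholders, not the kinetic data used in the paper:

```python
# Toy integration of the Broido-Shafizadeh scheme:
#   cellulose --k1--> active cellulose
#   active    --k2--> tar vapours
#   active    --k3--> char_frac * char + (1 - char_frac) * gas
# k1..k3 and char_frac are illustrative placeholders.
def integrate_bs(k1, k2, k3, char_frac=0.35, dt=1e-4, t_end=2.0):
    cell, active, tar, char = 1.0, 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):     # explicit Euler integration
        r1, r2, r3 = k1 * cell, k2 * active, k3 * active
        cell += -r1 * dt
        active += (r1 - r2 - r3) * dt
        tar += r2 * dt
        char += char_frac * r3 * dt
    return cell, active, tar, char

cell, active, tar, char = integrate_bs(k1=5.0, k2=8.0, k3=2.0)
print(f"tar yield: {tar:.3f}, char yield: {char:.3f}")
```

In a CFD setting these rates would be Arrhenius expressions evaluated at the particle temperature supplied by the convective heat-transfer model.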
Abstract:
In this paper we discuss a fast Bayesian extension to kriging algorithms which has been used successfully for fast, automatic mapping in emergency conditions in the Spatial Interpolation Comparison 2004 (SIC2004) exercise. The application of kriging to automatic mapping raises several issues such as robustness, scalability, speed and parameter estimation. Various ad-hoc solutions have been proposed and used extensively but they lack a sound theoretical basis. In this paper we show how observations can be projected onto a representative subset of the data, without losing significant information. This allows the complexity of the algorithm to grow as O(nm²), where n is the total number of observations and m is the size of the subset of the observations retained for prediction. The main contribution of this paper is to further extend this projective method through the application of space-limited covariance functions, which can be used as an alternative to the commonly used covariance models. In many real world applications the correlation between observations essentially vanishes beyond a certain separation distance. Thus it makes sense to use a covariance model that encompasses this belief, since this leads to sparse covariance matrices for which optimised sparse matrix techniques can be used. In the presence of extreme values we show that space-limited covariance functions offer an additional benefit: they maintain the smoothness locally but at the same time lead to a more robust, and compact, global model. We show the performance of this technique coupled with the sparse extension to the kriging algorithm on synthetic data and outline a number of computational benefits such an approach brings. To test the relevance to automatic mapping we apply the method to the data used in a recent comparison of interpolation techniques (SIC2004) to map the levels of background ambient gamma radiation. © Springer-Verlag 2007.
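The sparsity argument above can be made concrete with a compactly supported covariance model. The spherical model below is one standard choice that vanishes beyond its range; the sill and range values are illustrative, not from the paper:

```python
# Space-limited covariance sketch: the spherical model is exactly zero
# beyond its range a, so the covariance matrix of scattered observations
# becomes sparse.  sill and a are illustrative parameters.
import numpy as np
from scipy.spatial.distance import cdist
from scipy import sparse

def spherical_cov(h, sill=1.0, a=0.2):
    c = sill * (1.0 - 1.5 * h / a + 0.5 * (h / a) ** 3)
    return np.where(h < a, c, 0.0)

rng = np.random.default_rng(0)
pts = rng.random((500, 2))                 # 500 scattered 2-D observations
h = cdist(pts, pts)
C = sparse.csr_matrix(spherical_cov(h))    # entries beyond the range drop out
print(f"nonzero fraction: {C.nnz / 500**2:.3f}")
```

With a range of 0.2 in a unit square, only roughly a tenth of the entries survive, which is what makes optimised sparse solvers attractive here.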
Abstract:
In the present work the neutron emission spectra from a graphite cube, and from natural uranium, lithium fluoride, graphite, lead and steel slabs bombarded with 14.1 MeV neutrons, were measured to test nuclear data and calculational methods for D–T fusion reactor neutronics. The neutron spectrum measurements were performed with an organic scintillator using a pulse-shape discrimination technique based on a charge comparison method to reject gamma-ray counts. A computer programme was used to analyse the experimental data by the differentiation unfolding method. The 14.1 MeV neutron source was obtained from the T(d,n)4He reaction by the bombardment of a T–Ti target with a deuteron beam of energy 130 keV. The total neutron yield was monitored by the associated particle method using a silicon surface barrier detector. The numerical calculations were performed using the one-dimensional discrete-ordinate neutron transport code ANISN with the ZZ-FEWG 1/ 31-1F cross section library. A computer programme based on a Gaussian smoothing function was used to smooth the calculated data and to match the experimental data. There was general agreement between measured and calculated spectra for the range of materials studied. The ANISN calculations, carried out with a P3–S8 approximation together with representation of the slab assemblies by a hollow sphere with no reflection at the internal boundary, were adequate to model the experimental data; hence it appears that the cross-section set is satisfactory and, for the materials tested, needs no modification in the range 14.1 MeV to 2 MeV. Also it would be possible to carry out a study on fusion reactor blankets, using cylindrical geometry and including a series of concentric cylindrical shells to represent the torus wall, possible neutron converter and breeder regions, and reflector and shielding regions.
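The Gaussian smoothing step mentioned above (broadening the calculated multigroup spectrum so it can be compared with the measured one) can be sketched generically; the energy grid, test spectrum and resolution width below are assumed for illustration:

```python
# Generic sketch of Gaussian smoothing of a calculated spectrum: each
# energy point is replaced by a Gaussian-weighted average of its
# neighbours.  Grid, spectrum shape and sigma are illustrative.
import numpy as np

def gaussian_smooth(energy, flux, sigma):
    """Convolve a tabulated spectrum with a Gaussian of width sigma [MeV]."""
    kernel = np.exp(-0.5 * ((energy[:, None] - energy[None, :]) / sigma) ** 2)
    kernel /= kernel.sum(axis=1, keepdims=True)   # row-normalise the weights
    return kernel @ flux

e = np.linspace(2.0, 14.1, 122)                   # MeV grid of interest
calc = np.where(e > 13.5, 5.0, 1.0)               # crude step near the source peak
smooth = gaussian_smooth(e, calc, sigma=0.3)
```

Row-normalising the kernel keeps the smoothed spectrum within the range of the original values, so sharp calculated edges are broadened rather than shifted.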
Abstract:
We propose a new simple method to achieve precise symbol synchronization using one start-of-frame (SOF) symbol in optical fast orthogonal frequency-division multiplexing (FOFDM) with subchannel spacing equal to half of the symbol rate per sub-carrier. The proposed method first identifies the SOF symbol, then exploits the evenly symmetric property of the discrete cosine transform in FOFDM, which is also valid in the presence of chromatic dispersion, to achieve precise symbol synchronization. We demonstrate its use in a 16.88-Gb/s phase-shift-keying-based FOFDM system over a 124-km field-installed single-mode fiber link and show that this technique operates well in automatic precise symbol synchronization at an optical signal-to-noise ratio as low as 3 dB and after transmission.
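The first step described above, locating the SOF symbol in the received stream, is conceptually a sliding correlation against the known SOF pattern. The sketch below uses a generic real-valued signal model with additive noise; it illustrates frame-start detection only, not the authors' exact receiver or the DCT-symmetry refinement:

```python
# Generic SOF detection sketch: embed a known pattern in noise and
# recover its offset by sliding correlation.  Signal model is assumed.
import numpy as np

rng = np.random.default_rng(1)
sof = rng.choice([-1.0, 1.0], size=64)        # known SOF symbol pattern
stream = rng.normal(0, 0.5, 1000)             # received samples (noise only)
true_offset = 403
stream[true_offset:true_offset + 64] += sof   # embed the SOF symbol

corr = np.correlate(stream, sof, mode="valid")
est_offset = int(np.argmax(corr))             # correlation peak marks the SOF
print(est_offset)
```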
Abstract:
The invention relates to a liquid bio-fuel mixture, and uses thereof in the generation of electrical power, mechanical power and/or heat. The liquid bio-fuel mixture is macroscopically single phase, and comprises a liquid condensate product of biomass fast pyrolysis, a bio-diesel component and an ethanol component.
Abstract:
An international round robin study of the viscosity measurements and aging of fast pyrolysis bio-oil has been undertaken recently, and this work is an outgrowth from that effort. Two bio-oil samples were distributed to two laboratories for accelerated aging tests and to three laboratories for long-term aging studies. The accelerated aging test was defined as the change in viscosity of a sealed sample of bio-oil held for 24 h at 80 °C. The test was repeated 10 times over consecutive days to determine the intra-laboratory repeatability of the method. Other bio-oil samples were placed in storage at three temperatures, 21, 5, and -17 °C, for a period of up to 1 year to evaluate the change in viscosity. The variation in the results of the accelerated aging test was shown to be low within a given laboratory. The long-term aging studies showed that storage of a filtered bio-oil under refrigeration can minimize the amount of change in viscosity. The accelerated aging test gave a measure of change similar to that of 6-12 months of storage at room temperature for a filtered bio-oil. Filtration of solids was identified as a key contributor to improving the stability of the bio-oil as expressed by the viscosity, based on results of the accelerated aging tests as well as long-term aging studies. Only the filtered bio-oil consistently gave useful results in the accelerated aging and long-term aging studies. The inconsistency suggests that better protocols need to be developed for sampling bio-oils. These results can be helpful in setting standards for use of bio-oil, which is just coming into the marketplace. © 2012 American Chemical Society.
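The accelerated aging test above is quantified by the relative viscosity increase after 24 h at 80 °C. One common way to report it is sketched below; the expression and the sample numbers are illustrative, not round-robin data:

```python
# Illustrative aging-rate calculation: relative viscosity increase
# normalised to a 24 h hold.  Input viscosities are made-up values.
def aging_rate(visc_initial, visc_aged, hours=24.0):
    """Relative viscosity increase per 24 h of accelerated aging."""
    return (visc_aged - visc_initial) / visc_initial * (24.0 / hours)

rate = aging_rate(visc_initial=45.0, visc_aged=58.5)   # cSt, illustrative
print(f"{rate:.0%} per 24 h at 80 C")
```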
Abstract:
We propose and investigate a method for the stable determination of a harmonic function from knowledge of its value and its normal derivative on a part of the boundary of the (bounded) solution domain (Cauchy problem). We reformulate the Cauchy problem as an operator equation on the boundary using the Dirichlet-to-Neumann map. To discretize the obtained operator, we modify and employ a method denoted as Classic II given in [J. Helsing, Faster convergence and higher accuracy for the Dirichlet–Neumann map, J. Comput. Phys. 228 (2009), pp. 2578–2576, Section 3], which is based on Fredholm integral equations and Nyström discretization schemes. Then, for stability reasons, to solve the discretized integral equation we use the method of smoothing projection introduced in [J. Helsing and B.T. Johansson, Fast reconstruction of harmonic functions from Cauchy data using integral equation techniques, Inverse Probl. Sci. Eng. 18 (2010), pp. 381–399, Section 7], which makes it possible to solve the discretized operator equation in a stable way with minor computational cost and high accuracy. With this approach, for sufficiently smooth Cauchy data, the normal derivative can also be accurately computed on the part of the boundary where no data is initially given.
Abstract:
We consider the problem of stable determination of a harmonic function from knowledge of the solution and its normal derivative on a part of the boundary of the (bounded) solution domain. The alternating method is a procedure to generate an approximation to the harmonic function from such Cauchy data and we investigate a numerical implementation of this procedure based on Fredholm integral equations and Nyström discretization schemes, which makes it possible to perform a large number of iterations (millions) with minor computational cost (seconds) and high accuracy. Moreover, the original problem is rewritten as a fixed point equation on the boundary, and various other direct regularization techniques are discussed to solve that equation. We also discuss how knowledge of the smoothness of the data can be used to further improve the accuracy. Numerical examples are presented showing that accurate approximations of both the solution and its normal derivative can be obtained with much less computational time than in previous works.
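The fixed-point reformulation above can be illustrated with a plain Landweber-type iteration x ← x + w·Aᵀ(b − Ax): each sweep is a cheap matrix-vector product, which is why very large iteration counts remain inexpensive. The small well-conditioned system below is a toy stand-in for the discretized boundary operator, not the actual Cauchy-problem operator:

```python
# Landweber-style fixed-point iteration on a toy symmetric positive
# definite system; each iteration costs two matrix-vector products.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 40))
A = A @ A.T / 40 + 0.5 * np.eye(40)         # symmetric positive definite toy
x_true = rng.normal(size=40)
b = A @ x_true

w = 1.0 / np.linalg.norm(A, 2) ** 2         # step size ensuring convergence
x = np.zeros(40)
for _ in range(20_000):                     # many cheap sweeps
    x += w * (A.T @ (b - A @ x))
print(np.linalg.norm(x - x_true))
```

For ill-posed problems the iteration count itself acts as the regularization parameter (early stopping), which is one of the "direct regularization techniques" alluded to above.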
Abstract:
An international round robin study of the stability of fast pyrolysis bio-oil was undertaken. Fifteen laboratories in five different countries contributed. Two bio-oil samples were distributed to the laboratories for stability testing and further analysis. The stability test was defined in a method provided with the bio-oil samples. Viscosity measurement was a key input. The change in viscosity of a sealed sample of bio-oil held for 24 h at 80 °C was the defining element of stability. Subsequent analyses included ultimate analysis, density, moisture, ash, filterable solids, TAN/pH determination, and gel permeation chromatography. The results showed that kinematic viscosity measurement was more generally conducted and more reproducibly performed versus dynamic viscosity measurement. The variation in the results of the stability test was large, and a number of reasons for the variation were identified. The subsequent analyses proved to be at the level of reproducibility found in earlier round robins on bio-oil analysis. Clearly, the analyses were more straightforward and reproducible with a bio-oil sample low in filterable solids (0.2%), compared to one with a higher (2%) solids loading. These results can be helpful in setting standards for use of bio-oil, which is just coming into the marketplace. © 2012 American Chemical Society.
Abstract:
Subunit vaccine discovery is an accepted clinical priority. The empirical approach is time-consuming and labor-intensive and can often end in failure. Rational, information-driven approaches can overcome these limitations in a fast and efficient manner. However, informatics solutions require reliable algorithms for antigen identification. All known algorithms use sequence similarity to identify antigens. However, antigenicity may be encoded subtly in a sequence and may not be directly identifiable by sequence alignment. We propose a new alignment-independent method for antigen recognition based on the principal chemical properties of protein amino acid sequences. The method is tested by cross-validation on a training set of bacterial antigens and external validation on a test set of known antigens. The prediction accuracy is 83% for the cross-validation and 80% for the external test set. Our approach is accurate and robust, and provides a potent tool for the in silico discovery of medically relevant subunit vaccines.
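An alignment-independent encoding of the kind described above replaces each residue with numeric property values and summarises the sequence with auto cross covariance (ACC) terms over fixed lags, giving a fixed-length vector regardless of protein length. The sketch below illustrates the transform only; the four-residue property table is a made-up stand-in for real descriptor scales, and the lag choice is arbitrary:

```python
# Auto cross covariance (ACC) encoding sketch: property values per
# residue, covariance terms over sequence lags.  The property table
# is an illustrative stand-in, not a published descriptor set.
import numpy as np

PROPS = {aa: np.array(vals) for aa, vals in {
    "A": [0.07, -1.73, 0.09], "C": [0.71, -0.97, 4.13],
    "D": [3.64, 1.13, 2.36],  "K": [2.84, 1.41, -3.14],
}.items()}

def acc_encode(seq, max_lag=3):
    z = np.array([PROPS[aa] for aa in seq])      # (n_residues, n_props)
    n, p = z.shape
    feats = []
    for lag in range(1, max_lag + 1):
        for j in range(p):
            for k in range(p):
                feats.append(np.sum(z[:n - lag, j] * z[lag:, k]) / (n - lag))
    return np.array(feats)                       # length: max_lag * p * p

v = acc_encode("ACDKA" * 4)
print(v.shape)
```

Because the output length depends only on the lag count and the number of properties, sequences of any length map to the same feature space, which is what makes the method alignment-independent.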
Abstract:
The general iteration method for nonexpansive mappings on a Banach space is considered. Under an assumption of sufficiently fast convergence on the sequence of ("almost" nonexpansive) perturbed iteration mappings, if the basic method is τ-convergent for a suitable topology τ weaker than the norm topology, then the perturbed method is also τ-convergent. An application is presented to the gradient-prox method for monotone inclusions in Hilbert spaces.
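The typical shape of such a perturbation result can be written out as follows; this is only an illustrative formulation, and the paper's exact hypotheses may differ:

```latex
% Illustrative form of a perturbed-iteration result for a nonexpansive
% map T on a closed convex set C (hypotheses simplified for exposition).
\[
  x_{n+1} = T_n x_n, \qquad
  \sup_{y \in C} \| T_n y - T y \| \le \varepsilon_n, \qquad
  \sum_{n \ge 0} \varepsilon_n < \infty .
\]
```

If the unperturbed iteration y_{n+1} = T y_n is τ-convergent to a fixed point of T, summability of the perturbations ε_n is the kind of "fast enough convergence" condition that transfers τ-convergence to the perturbed sequence (x_n).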
Abstract:
The article describes a method for preliminary segmentation of a speech signal using the wavelet transform, consisting of two stages: in the first stage, sibilants and pauses are identified; in the second, the remaining parts of the signal are further segmented.
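The first stage can be illustrated with a one-level Haar wavelet split per frame: pauses have low total energy, while sibilants concentrate their energy in the high band. The frame length, thresholds and test signal below are assumptions for illustration, not the article's parameters:

```python
# First-stage sketch: per-frame Haar low/high band energies classify
# frames as pause, sibilant or other.  Thresholds are assumed.
import numpy as np

def classify_frames(x, frame=256, pause_th=1e-3, sib_ratio=0.6):
    labels = []
    for i in range(0, len(x) - frame + 1, frame):
        f = x[i:i + frame]
        approx = (f[0::2] + f[1::2]) / np.sqrt(2)   # Haar low band
        detail = (f[0::2] - f[1::2]) / np.sqrt(2)   # Haar high band
        e_lo, e_hi = np.mean(approx**2), np.mean(detail**2)
        if e_lo + e_hi < pause_th:
            labels.append("pause")
        elif e_hi / (e_lo + e_hi + 1e-12) > sib_ratio:
            labels.append("sibilant")
        else:
            labels.append("voiced")
    return labels

sig = np.concatenate([np.zeros(256),                    # silence
                      np.sin(0.1 * np.arange(256)),     # low-frequency tone
                      0.3 * np.tile([1.0, -1.0], 128)]) # high-frequency burst
print(classify_frames(sig))
```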
Abstract:
Agents inhabiting large scale environments are faced with the problem of generating maps by which they can navigate. One solution to this problem is to use probabilistic roadmaps, which rely on selecting and connecting a set of points that describe the interconnectivity of free space. However, the time required to generate these maps can be prohibitive, and agents do not typically know the environment in advance. In this paper we show that the optimal combination of the different point selection methods used to create the map is dependent on the environment: no point selection method dominates. This motivates a novel self-adaptive approach in which an agent combines several point selection methods. The success rate of our approach is comparable to the state of the art and the generation cost is substantially reduced. Self-adaptation therefore enables a more efficient use of the agent's resources. Results are presented both for a set of archetypal scenarios and for large scale virtual environments based in Second Life, representing real locations in London.
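The self-adaptive idea above can be sketched as a portfolio of samplers: each point-selection method is chosen with probability proportional to a weight, and the weights are updated from observed success. The toy environment, "usefulness" test and update rule below are illustrative assumptions, not the paper's algorithm:

```python
# Self-adaptive sampler portfolio sketch: two point-selection methods,
# weights updated by a decayed success count.  Environment is a toy
# disc obstacle; useful points lie near its boundary.
import numpy as np

rng = np.random.default_rng(3)

def uniform_sampler():
    return rng.random(2)                       # anywhere in the unit square

def obstacle_biased_sampler():
    ang = rng.uniform(0, 2 * np.pi)            # sample near the obstacle rim
    r = 0.2 + rng.normal(0, 0.02)
    return np.array([0.5 + r * np.cos(ang), 0.5 + r * np.sin(ang)])

def useful(p):                                 # toy connectivity proxy
    d = np.linalg.norm(p - 0.5)
    return 0.15 < d < 0.25

samplers = [uniform_sampler, obstacle_biased_sampler]
weights = np.ones(2)
for _ in range(2000):
    i = rng.choice(2, p=weights / weights.sum())
    reward = 1.0 if useful(samplers[i]()) else 0.0
    weights[i] = 0.99 * weights[i] + reward    # decayed success count
print(weights / weights.sum())
```

Because the biased sampler succeeds far more often in this environment, its weight grows and it is selected increasingly often; in a different environment the uniform sampler could win instead, which is the point of self-adaptation.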