982 results for Fast methods


Relevance: 40.00%

Abstract:

The high computational cost of calculating the radiative heating rates in numerical weather prediction (NWP) and climate models requires that calculations are made infrequently, leading to poor sampling of the fast-changing cloud field and a poor representation of the feedback that would occur. This paper presents two related schemes for improving the temporal sampling of the cloud field. Firstly, the ‘split time-stepping’ scheme takes advantage of the independent nature of the monochromatic calculations of the ‘correlated-k’ method to split the calculation into gaseous absorption terms that are highly dependent on changes in cloud (the optically thin terms) and those that are not (the optically thick terms). The small number of optically thin terms can then be calculated more often to capture changes in the grey absorption and scattering associated with cloud droplets and ice crystals. Secondly, the ‘incremental time-stepping’ scheme performs a simple radiative transfer calculation using only one or two monochromatic terms representing the optically thin part of the atmospheric spectrum. These are found to be sufficient to represent the heating-rate increments caused by changes in the cloud field, which can then be added to the last full calculation of the radiation code. We test these schemes in an operational forecast model configuration and find that a significant improvement is achieved, for a small computational cost, over the current scheme employed at the Met Office. The ‘incremental time-stepping’ scheme is recommended for operational use, along with a new scheme to correct the surface fluxes for the change in solar zenith angle between radiation calculations.
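
As a rough illustration of the ‘incremental time-stepping’ idea, the sketch below adds a cheap increment computed from the optically thin terms to the stored result of the last full radiation call. All function names and the toy band model are invented for illustration; they are not taken from the paper.

```python
def band_heating(state, band):
    # Placeholder for one pseudo-monochromatic heating-rate term of a
    # correlated-k scheme (toy physics, for illustration only).
    return band["base_rate"] + band["cloud_sensitivity"] * state["cloud_fraction"]

def heating_rate(state, bands):
    # Sum the heating contributions of a set of spectral terms.
    return sum(band_heating(state, b) for b in bands)

def incremental_heating(state_now, state_at_full, full_rate, thin_bands):
    # Incremental time-stepping: add the cheap increment from the
    # optically thin terms to the stored full radiation calculation.
    increment = (heating_rate(state_now, thin_bands)
                 - heating_rate(state_at_full, thin_bands))
    return full_rate + increment

# The full calculation uses all terms and is done infrequently;
# between full calls only the one or two thin terms are recomputed.
thin = [{"base_rate": 0.5, "cloud_sensitivity": 2.0}]
thick = [{"base_rate": 1.5, "cloud_sensitivity": 0.05} for _ in range(30)]
state_full = {"cloud_fraction": 0.2}   # state at the last full call
state_now = {"cloud_fraction": 0.6}    # cloud has since changed
full_rate = heating_rate(state_full, thin + thick)
print(incremental_heating(state_now, state_full, full_rate, thin))
```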

Relevance: 40.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 40.00%

Abstract:

The conventional Newton and fast decoupled power flow (FDPF) methods have been considered inadequate for obtaining the maximum loading point of power systems due to ill-conditioning problems at and near this critical point. It is well known that the P-V and Q-θ decoupling assumptions of the fast decoupled power flow formulation no longer hold in the vicinity of the critical point. Moreover, the Jacobian matrix of the Newton method becomes singular at this point. However, the maximum loading point can be efficiently computed through the parameterization techniques of continuation methods. In this paper it is shown that by using either θ or V as a parameter, the new fast decoupled power flow versions (XB and BX) become adequate for the computation of the maximum loading point with only a few small modifications. The possible use of the reactive power injection at a selected PV bus (Q_PV) as the continuation parameter (μ) for the computation of the maximum loading point is also shown. A trivial secant predictor, the modified zero-order polynomial, which uses the current solution and a fixed increment in the parameter (V, θ, or μ) as an estimate of the next solution, is used in the predictor step. These new versions are compared with each other with the purpose of pointing out their features, as well as the influence of reactive power and transformer tap limits. The results obtained with the new approach for the IEEE test systems (14, 30, 57 and 118 buses) are presented and discussed in the companion paper. The results show that the characteristics of the conventional method are enhanced and the region of convergence around the singular solution is enlarged. In addition, it is shown that parameters can be switched during the tracing process in order to efficiently determine all the PV curve points with few iterations.
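
The predictor is simple enough to sketch. In the Python sketch below, the modified zero-order polynomial predictor reuses the current solution and advances only the continuation parameter by a fixed increment; the continuation loop and the `corrector` interface are hypothetical stand-ins for the modified XB/BX fast decoupled solve with the chosen parameter held fixed.

```python
import numpy as np

def zero_order_predictor(x, lam, step):
    # Modified zero-order polynomial predictor: keep the current
    # solution x and advance only the continuation parameter
    # (V, θ, or μ in the paper) by a fixed increment.
    return np.copy(x), lam + step

def trace_pv_curve(x0, lam0, step, corrector, n_points):
    # Continuation loop skeleton; `corrector` (hypothetical) solves
    # the modified power flow with the parameter fixed at lam_pred.
    points = [(x0, lam0)]
    x, lam = x0, lam0
    for _ in range(n_points):
        x_pred, lam_pred = zero_order_predictor(x, lam, step)
        x, lam = corrector(x_pred, lam_pred)
        points.append((x, lam))
    return points
```

Switching the continuation parameter during tracing, as the paper describes, amounts to changing which variable `lam` refers to between calls.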

Relevance: 40.00%

Abstract:

The parameterized fast decoupled power flow (PFDPF), versions XB and BX, using either θ or V as a parameter, was proposed by the authors in Part I of this paper. The use of the reactive power injection at a selected PV bus (Q_PV) as the continuation parameter for the computation of the maximum loading point (MLP) was also investigated. In this paper, the proposed versions, obtained with only small modifications of the conventional method, are used for the computation of the MLP of the IEEE test systems (14, 30, 57 and 118 buses). These new versions are compared with each other with the purpose of pointing out their features, as well as the influence of reactive power and transformer tap limits. The results obtained with the new approaches are presented and discussed. They show that the characteristics of the conventional FDPF method are enhanced and the region of convergence around the singular solution is enlarged. In addition, it is shown that these versions can be switched during the tracing process in order to efficiently determine all the PV curve points with few iterations. A trivial secant predictor, the modified zero-order polynomial, which uses the current solution and a fixed increment in the parameter (V, θ, or μ) as an estimate of the next solution, is used for the predictor step.

Relevance: 40.00%

Abstract:

A fast, low-cost, convenient, and especially sensitive voltammetric screening approach for the study of the antioxidant properties of isoquercitrin and pedalitin from Pterogyne nitens is proposed in this work. These flavonoids were investigated for their redox properties using cyclic voltammetry in nonaqueous media, with N,N-dimethylformamide as the solvent and tetrabutylammonium tetrafluoroborate as the supporting electrolyte, a glassy carbon working electrode, an Ag|AgCl reference electrode, and a bare Pt wire counter electrode. A comparative analysis of the activity of rutin was also carried out. Moreover, combining HPLC with an electrochemical detector allowed qualitative and quantitative detection of micromolecules (e.g., isoquercitrin and pedalitin) that showed antioxidant activities. These results were then correlated with the inhibition of β-carotene bleaching determined by a TLC autographic assay and with structural features of the flavonoids.

Relevance: 40.00%

Abstract:

Microalgae have many applications, such as biodiesel production or use as a food supplement. Depending on the application, the optimization of certain fractions of the biochemical composition (proteins, carbohydrates and lipids) is required. Therefore, samples obtained under different culture conditions must be analyzed in order to compare the content of such fractions. Nevertheless, traditional methods necessitate lengthy analytical procedures with prolonged sample turn-around times. Results for the biochemical composition of Nannochloropsis oculata samples with different protein, carbohydrate and lipid contents obtained by conventional analytical methods have been compared to those obtained by thermogravimetry (TGA) and by a Pyroprobe device connected to a gas chromatograph with a mass spectrometric detector (Py–GC/MS), showing a clear correlation. These results suggest the potential applicability of these techniques as fast and easy methods to qualitatively compare the biochemical composition of microalgal samples.

Relevance: 40.00%

Abstract:

This research investigates specific ash control methods to limit the inorganic content of biomass prior to fast pyrolysis, and the effect of specific ash components on fast pyrolysis processing, mass balance yields, and bio-oil quality and stability. The inorganic content of miscanthus was naturally reduced over the winter period from June (7.36 wt.%) to February (2.80 wt.%) due to a combination of senescence and natural leaching by rain water. The September harvest produced mass balance yields, bio-oil quality and stability similar to the February (conventional) harvest, but the nitrogen content in the above-ground crop was too high (208 kg ha-1) to maintain sustainable crop production. Deionised water, 1.00% HCl and 0.10% Triton X-100 washes were used to reduce the inorganic content of miscanthus. Miscanthus washed with 0.10% Triton X-100 resulted in the highest total liquid yield (76.21 wt.%) and the lowest char and reaction water yields (9.77 wt.% and 8.25 wt.% respectively). Concentrations of Triton X-100 were varied to study further effects on mass balance yields and bio-oil stability. All concentrations of Triton X-100 increased the total liquid yield and decreased the char and reaction water yields compared with untreated miscanthus. In terms of bio-oil stability, 1.00% Triton X-100 produced the most stable bio-oil, with the lowest viscosity index (2.43) and the lowest water content index (1.01). Beech wood was impregnated with potassium and phosphorus, resulting in lower liquid yields and increased char and gas yields due to their catalytic effect on the fast pyrolysis product distribution. Increased potassium and phosphorus concentrations produced less stable bio-oils, with the viscosity and water content indexes increasing. Fast pyrolysis processing of phosphorus-impregnated beech wood was problematic, as the reactor bed material agglomerated into large clumps due to char formation within the reactor, affecting fluidisation and heat transfer.

Relevance: 40.00%

Abstract:

An approach is developed for extracting knowledge from the information arriving at the knowledge base input, and for distributing the new knowledge over the knowledge subsets already present in the knowledge base. The knowledge must also be transformed into parameters (data) of the model for subsequent decision-making on the given subset. The decision-making is assumed to be realized with the apparatus of fuzzy sets.

Relevance: 30.00%

Abstract:

Matrix function approximation is a current focus of worldwide interest and finds application in a variety of areas of applied mathematics and statistics. In this thesis we focus on the approximation of A^(-α/2)b, where A ∈ ℝ^(n×n) is a large, sparse symmetric positive definite matrix and b ∈ ℝ^n is a vector. In particular, we will focus on matrix function techniques for sampling from Gaussian Markov random fields in applied statistics and the solution of fractional-in-space partial differential equations. Gaussian Markov random fields (GMRFs) are multivariate normal random variables characterised by a sparse precision (inverse covariance) matrix. GMRFs are popular models in computational spatial statistics as the sparse structure can be exploited, typically through the use of the sparse Cholesky decomposition, to construct fast sampling methods. It is well known, however, that for sufficiently large problems, iterative methods for solving linear systems outperform direct methods. Fractional-in-space partial differential equations arise in models of processes undergoing anomalous diffusion. Unfortunately, as the fractional Laplacian is a non-local operator, numerical methods based on the direct discretisation of these equations typically require the solution of dense linear systems, which is impractical for fine discretisations. In this thesis, novel applications of Krylov subspace approximations to matrix functions for both of these problems are investigated. Matrix functions arise when sampling from a GMRF by noting that the Cholesky decomposition A = LL^T is, essentially, a ‘square root’ of the precision matrix A. Therefore, we can replace the usual sampling method, which forms x = L^(-T)z, with x = A^(-1/2)z, where z is a vector of independent and identically distributed standard normal random variables. Similarly, the matrix transfer technique can be used to build solutions to the fractional Poisson equation of the form ϕ_n = A^(-α/2)b, where A is the finite difference approximation to the Laplacian. Hence both applications require the approximation of f(A)b, where f(t) = t^(-α/2) and A is sparse. In this thesis we compare the Lanczos approximation, the shift-and-invert Lanczos approximation, the extended Krylov subspace method, rational approximations and the restarted Lanczos approximation for approximating matrix functions of this form. A number of novel results are presented in this thesis. Firstly, we prove the convergence of the matrix transfer technique for the solution of the fractional Poisson equation and we give conditions under which the finite difference discretisation can be replaced by other methods for discretising the Laplacian. We then investigate a number of methods for approximating matrix functions of the form A^(-α/2)b and investigate stopping criteria for these methods. In particular, we derive a new method for restarting the Lanczos approximation to f(A)b. We then apply these techniques to the problem of sampling from a GMRF and construct a full suite of methods for sampling conditioned on linear constraints and approximating the likelihood. Finally, we consider the problem of sampling from a generalised Matérn random field, which combines our techniques for solving fractional-in-space partial differential equations with our method for sampling from GMRFs.
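
To make the sampling idea concrete, here is a minimal NumPy sketch (not the thesis code) of the plain Lanczos approximation to f(A)b with f(t) = t^(-1/2), used to draw an approximate GMRF sample x = A^(-1/2)z from a toy one-dimensional precision matrix:

```python
import numpy as np

def lanczos_matrix_function(A, b, m, f):
    # Approximate f(A) b ~= ||b|| * V_m f(T_m) e_1 for symmetric A,
    # using m steps of the (unreorthogonalised) Lanczos recurrence.
    n = b.shape[0]
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m)                # beta[j] couples v_j and v_{j+1}
    beta0 = np.linalg.norm(b)
    V[:, 0] = b / beta0
    for j in range(m):
        w = A @ V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j + 1 < m:
            beta[j] = np.linalg.norm(w)
            if beta[j] == 0.0:        # invariant subspace found early
                m = j + 1
                break
            V[:, j + 1] = w / beta[j]
    T = (np.diag(alpha[:m]) + np.diag(beta[:m - 1], 1)
         + np.diag(beta[:m - 1], -1))
    evals, evecs = np.linalg.eigh(T)  # small tridiagonal eigenproblem
    fT_e1 = evecs @ (f(evals) * evecs[0, :])    # f(T_m) e_1
    return beta0 * (V[:, :m] @ fT_e1)

# Draw an approximate GMRF sample x = A^(-1/2) z (here α = 1).
rng = np.random.default_rng(0)
n = 200
# Toy SPD precision: a 1-D second-difference matrix, shifted to be SPD.
A = 2.5 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
z = rng.standard_normal(n)
x = lanczos_matrix_function(A, z, m=60, f=lambda t: t ** -0.5)
```

The thesis compares this baseline against shift-and-invert, extended Krylov, rational and restarted variants; the sketch shows only the plain Lanczos method.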

Relevance: 30.00%

Abstract:

The paper presents a fast and robust stereo object recognition method. The method does not currently identify the rotation of objects, which makes it particularly well suited to locating spheres, since spheres are rotationally invariant. Approximate methods for locating non-spherical objects have been developed. Fundamental to the method is that the correspondence problem is solved using information about the dimensions of the object being located. This is in contrast to previous stereo object recognition systems, in which the scene is first reconstructed by point-matching techniques. The method is suitable for real-time application on low-power devices.
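
One plausible reading of how a known object dimension constrains the correspondence search, sketched under a simple pinhole-stereo model (the function and numbers are illustrative assumptions, not taken from the paper):

```python
def expected_disparity(focal_px, baseline_m, diameter_m, apparent_px):
    # Pinhole model: apparent size s = f * D / Z, so a sphere of known
    # diameter D seen at s pixels implies depth Z = f * D / s, and that
    # depth implies the disparity d = f * B / Z at which its match
    # must appear in the other image.
    depth_m = focal_px * diameter_m / apparent_px
    return focal_px * baseline_m / depth_m

# A 0.20 m sphere imaged at 50 px, with f = 800 px and a 0.12 m
# baseline, must match at roughly 30 px disparity; the search can
# be restricted to a narrow band around this value.
print(f"expected disparity: {expected_disparity(800.0, 0.12, 0.20, 50.0):.1f} px")
```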

Relevance: 30.00%

Abstract:

Purpose: To examine the influence of two different fast-start pacing strategies on performance and oxygen consumption (V̇O2) during cycle ergometer time trials lasting ∼5 min. Methods: Eight trained male cyclists performed four cycle ergometer time trials in which the total work completed (113 ± 11.5 kJ; mean ± SD) was identical to the better of two 5-min self-paced familiarization trials. During the performance trials, initial power output was manipulated to induce either an all-out or a fast start. Power output during the first 60 s of the fast-start trials was maintained at 471.0 ± 48.0 W, whereas the all-out start approximated a maximal starting effort for the first 15 s (mean power: 753.6 ± 76.5 W) followed by 45 s at a constant power output (376.8 ± 38.5 W). Irrespective of starting strategy, power output was controlled so that participants completed the first quarter of the trial (28.3 ± 2.9 kJ) in 60 s. Participants performed two trials under each condition, and their fastest trials were compared. Results: Performance time was significantly faster when cyclists adopted the all-out start (4 min 48 s ± 8 s) compared with the fast start (4 min 51 s ± 8 s; P < 0.05). First-quarter V̇O2 during the all-out start trial (3.4 ± 0.4 L·min-1) was significantly higher than during the fast-start trial (3.1 ± 0.4 L·min-1; P < 0.05). After removal of an outlier, the percentage increase in first-quarter V̇O2 was significantly correlated (r = -0.86, P < 0.05) with the relative difference in finishing time. Conclusions: An all-out start produces superior middle-distance cycling performance compared with a fast start. The improvement in performance may be due to a faster V̇O2 response rather than to time saved by the rapid acceleration.

Relevance: 30.00%

Abstract:

This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once, in order to reduce the bit rate of a coding system. The major drawback in using VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product code vector quantization (PCVQ) techniques are utilized. The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook. In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, thus yielding an improved overall compression rate. An approach using a statistical model of the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit rate requirement from the current 24 bits per 20-millisecond frame to below 20, using a standard spectral distortion metric for comparison. Several fast-search VQ methods for use in speech spectrum coding have been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has been undertaken for coding of images using VQ codebooks trained with the source samples directly, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology. The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification. The motivation for this aspect of the research arose from a need to simultaneously preserve speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because there are several emerging applications of speaker identification where compressed speech is involved. Examples include mobile communications, where the speech has been highly compressed, or where a database of speech material has been assembled and stored in compressed form. Although these two application areas have the same objective, that of maximizing the identification rate, the starting points are quite different. On the one hand, the speech material used for training the identification algorithm may or may not be available in compressed form. On the other hand, the new test material on which identification is to be based may only be available in compressed form. Using the spectral parameters stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model based on the use of compressed speech are put forward.
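
As a toy illustration of coupling the lossy and lossless stages, the sketch below (the codebook, data and all names are invented) encodes frames to nearest-codeword indices and then measures the zero-order entropy of the index stream, a crude stand-in for the statistical index model described above:

```python
import numpy as np

def vq_encode(frames, codebook):
    # Encode each frame as the index of its nearest codeword
    # under squared Euclidean distortion.
    d2 = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def index_entropy_bits(indices, codebook_size):
    # Empirical zero-order entropy of the index stream; a statistical
    # model of index probabilities lets a lossless coder approach this
    # rate instead of the raw log2(codebook_size) bits per frame.
    counts = np.bincount(indices, minlength=codebook_size).astype(float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
codebook = rng.standard_normal((256, 10))   # 8-bit codebook, dimension 10
frames = rng.standard_normal((1000, 10))
idx = vq_encode(frames, codebook)
print(f"{index_entropy_bits(idx, 256):.2f} bits/frame vs 8 bits raw")
```

With real speech, consecutive indices are far from uniform, which is where the lossless stage recovers its gain.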

Relevance: 30.00%

Abstract:

For the first time in human history, large volumes of spoken audio are being broadcast, made available on the internet, archived, and monitored for surveillance every day. New technologies are urgently required to unlock these vast and powerful stores of information. Spoken Term Detection (STD) systems provide access to speech collections by detecting individual occurrences of specified search terms. The aim of this work is to develop improved STD solutions based on phonetic indexing. In particular, this work aims to develop phonetic STD systems for applications that require open-vocabulary search, fast indexing and search speeds, and accurate term detection. Within this scope, novel contributions are made within two research themes: accommodating phone recognition errors and modelling uncertainty with probabilistic scores. A state-of-the-art Dynamic Match Lattice Spotting (DMLS) system is used to address the problem of accommodating phone recognition errors with approximate phone sequence matching. Extensive experimentation on the use of DMLS is carried out and a number of novel enhancements are developed that provide for faster indexing, faster search, and improved accuracy. Firstly, a novel comparison of methods for deriving a phone error cost model is presented to improve STD accuracy, resulting in up to a 33% improvement in the Figure of Merit. A method is also presented for drastically increasing the speed of DMLS search, by at least an order of magnitude, with no loss in search accuracy. An investigation is then presented of the effects of increasing indexing speed for DMLS by using simpler modelling during phone decoding, with results highlighting the trade-off between indexing speed, search speed and search accuracy. The Figure of Merit is further improved by up to 25% using a novel proposal to utilise word-level language modelling during DMLS indexing. Analysis shows that this use of language modelling can, however, be unhelpful or even disadvantageous for terms with a very low language model probability. The DMLS approach to STD involves generating an index of phone sequences using phone recognition. An alternative approach to phonetic STD is also investigated that instead indexes probabilistic acoustic scores in the form of a posterior-feature matrix. A state-of-the-art system is described and its use for STD is explored through several experiments on spontaneous conversational telephone speech. A novel technique and framework are proposed for discriminatively training such a system to directly maximise the Figure of Merit. This results in a 13% improvement in the Figure of Merit on held-out data. The framework is also found to be particularly useful for index compression in conjunction with the proposed optimisation technique, providing a substantial index compression factor in addition to an overall gain in the Figure of Merit. These contributions significantly advance the state-of-the-art in phonetic STD by improving the utility of such systems in a wide range of applications.
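
The approximate phone sequence matching at the heart of DMLS can be illustrated with a weighted edit distance. The sketch below is a generic dynamic-programming version with an invented toy substitution-cost model, not the thesis implementation; in the work above the substitution costs would come from a phone error cost model estimated from recogniser confusions.

```python
def weighted_phone_distance(hyp, query, sub_cost, ins_cost=1.0, del_cost=1.0):
    # Weighted edit distance between a decoded phone sequence (hyp)
    # and a query pronunciation; a term is detected where the
    # distance falls below a tuned threshold.
    n, m = len(hyp), len(query)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = D[i - 1][0] + del_cost
    for j in range(1, m + 1):
        D[0][j] = D[0][j - 1] + ins_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(
                D[i - 1][j] + del_cost,        # delete a decoded phone
                D[i][j - 1] + ins_cost,        # insert a query phone
                D[i - 1][j - 1] + sub_cost(hyp[i - 1], query[j - 1]),
            )
    return D[n][m]

# Toy cost model: exact match is free, confusions within a phone
# class are cheap, everything else costs a full unit.
NASALS = {"m", "n", "ng"}
def toy_sub_cost(a, b):
    if a == b:
        return 0.0
    if a in NASALS and b in NASALS:
        return 0.4
    return 1.0

print(weighted_phone_distance(["k", "ae", "n"], ["k", "ae", "m"], toy_sub_cost))
```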