901 results for Filter-rectify-filter-model
Abstract:
The East China Sea is an area where typhoon waves occur frequently. A wave spectra assimilation model has been developed to predict typhoon waves more accurately and operationally. This is the first time that wave data from Taiwan have been used to predict typhoon waves along the mainland China coast. Two-dimensional spectra observed off the northeast coast of Taiwan modify the wave field output by the SWAN model through an optimal interpolation (OI) scheme. Wind field correction is not involved, as it contributes less than a quarter of the correction achieved by assimilation of waves. The initialization issue for assimilation is discussed. A linear evolution law for noise in the wave field is derived from the SWAN governing equations, and a two-dimensional digital low-pass filter is used to obtain the initialized wave fields. The data assimilation model is optimized during typhoon Sinlaku. During typhoons Krosa and Morakot, data assimilation significantly improves the low-frequency wave energy and wave propagation direction along the Taiwan coast. For the far-field region, the assimilation model also improves typhoon wave forecasts, as data assimilation enhances the low-frequency wave energy. The proportion of positive assimilation indexes is over 81% for all comparison periods. The paper also finds that the impact of data assimilation on the far-field region depends on the typhoon's stage of development and the swell propagation direction.
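The OI analysis step described above can be sketched generically (a minimal illustration using standard Kalman-gain notation; the grid size, covariances, and function names below are hypothetical, not taken from the paper):

```python
import numpy as np

def oi_analysis(xb, y, H, B, R):
    """One optimal-interpolation (OI) analysis step.

    xb : background field (n,)
    y  : observations (m,)
    H  : linear observation operator (m, n)
    B  : background error covariance (n, n)
    R  : observation error covariance (m, m)
    """
    # Kalman-style gain: K = B H^T (H B H^T + R)^-1
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    # analysis = background + gain * innovation
    return xb + K @ (y - H @ xb)

# toy example: 3 grid points, one observation at the middle point
xb = np.array([1.0, 1.0, 1.0])
H = np.array([[0.0, 1.0, 0.0]])
B = np.eye(3) * 0.5
R = np.eye(1) * 0.5
xa = oi_analysis(xb, np.array([2.0]), H, B, R)
# with equal background/observation error, the analysis at the observed
# point moves halfway toward the observation: xa = [1.0, 1.5, 1.0]
```

With a non-diagonal B, the same update would also spread the observation increment to neighbouring grid points, which is how a coastal observation can correct the wave field offshore.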
Abstract:
A smoother introduced earlier by van Leeuwen and Evensen is applied to a problem in which real observations are used in an area with strongly nonlinear dynamics. The derivation is new, but it resembles an earlier derivation by van Leeuwen and Evensen. Again a Bayesian view is taken, in which the prior probability density of the model and the probability density of the observations are combined to form a posterior density. The mean and the covariance of this density give the variance-minimizing model evolution and its errors. The assumption is made that the prior probability density is a Gaussian, leading to a linear update equation. Critical evaluation shows when the assumption is justified. This also sheds light on why Kalman filters, in which the same approximation is made, work for nonlinear models. By reference to the derivation, the impact of model and observational biases on the equations is discussed, and it is shown that Bayes's formulation can still be used. A practical advantage of the ensemble smoother is that no adjoint equations have to be integrated and that error estimates are easily obtained. The present application shows that for process studies a smoother gives superior results compared to a filter, not only owing to the smooth transitions at observation points, but also because the origin of features can be followed back in time. Its preference over a strong-constraint method is also highlighted. Furthermore, it is argued that the proposed smoother is more efficient than gradient-descent methods or the representer method when error estimates are taken into account.
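In standard notation (with ψ the model state, d the observations, H the measurement operator; the symbols are assumed here, not taken from the paper), the Bayesian combination and the linear update that follows from the Gaussian-prior assumption read:

```latex
f(\psi \mid d) \;\propto\; f(d \mid \psi)\, f(\psi),
\qquad
\psi^{a} \;=\; \psi^{f} \;+\; C_{\psi\psi} H^{\top}
\left( H C_{\psi\psi} H^{\top} + C_{\epsilon\epsilon} \right)^{-1}
\left( d - H \psi^{f} \right),
```

where C_ψψ is the prior (forecast) error covariance and C_εε the observation error covariance. The update is linear in the innovation d − Hψ^f precisely because the prior is taken to be Gaussian, which is the approximation whose validity the abstract says is critically evaluated.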
Abstract:
It is formally proved that the general smoother for nonlinear dynamics can be formulated as a sequential method; that is, observations can be assimilated sequentially during a forward integration. The general filter can be derived from the smoother, and it is shown that the general smoother and filter solutions at the final time become identical, as is expected from linear theory. Then, a new smoother algorithm based on ensemble statistics is presented and examined in an example with the Lorenz equations. The new smoother can be computed as a sequential algorithm using only forward-in-time model integrations. It bears a strong resemblance to the ensemble Kalman filter. The difference is that every time a new dataset is available during the forward integration, an analysis is computed for all previous times up to this time. Thus, the first guess for the smoother is the ensemble Kalman filter solution, and the smoother estimate provides an improvement of this, as one would expect a smoother to do. The method is demonstrated in this paper in an intercomparison with the ensemble Kalman filter and the ensemble smoother introduced by van Leeuwen and Evensen, and it is shown to be superior in an application with the Lorenz equations. Finally, a discussion is given regarding the properties of the analysis schemes when strongly non-Gaussian distributions are used. It is shown that in these cases more sophisticated analysis schemes based on Bayesian statistics must be used.
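For orientation, the kind of perturbed-observation ensemble analysis step that the ensemble Kalman filter uses can be sketched as follows (a toy scalar example with assumed notation and parameters; this is not the paper's smoother algorithm, which applies such an update to all previous times as well):

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_analysis(E, y, H, r):
    """Perturbed-observation ensemble analysis (EnKF-style).

    E : ensemble matrix (n_state, n_members)
    y : observation vector (m,)
    H : linear observation operator (m, n_state)
    r : observation error variance
    """
    n, N = E.shape
    m = len(y)
    A = E - E.mean(axis=1, keepdims=True)         # ensemble anomalies
    Pf = A @ A.T / (N - 1)                        # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + r * np.eye(m))
    D = y[:, None] + rng.normal(0.0, np.sqrt(r), (m, N))  # perturbed obs
    return E + K @ (D - H @ E)                    # update every member

# toy example: scalar state, 200 members drawn from N(0, 1), observation y = 5
E = rng.normal(0.0, 1.0, (1, 200))
Ea = ensemble_analysis(E, np.array([5.0]), np.array([[1.0]]), r=1.0)
# the analysis mean lies roughly halfway between prior mean (~0) and y (= 5)
```

In a smoother, E would hold the ensemble of whole trajectories, so the same matrix update corrects past states through their sampled correlations with the observed quantity.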
Abstract:
The weak-constraint inverse for nonlinear dynamical models is discussed and derived in terms of a probabilistic formulation. The well-known result that for Gaussian error statistics the minimum of the weak-constraint inverse is equal to the maximum-likelihood estimate is rederived. Then several methods based on ensemble statistics that can be used to find the smoother (as opposed to the filter) solution are introduced and compared to traditional methods. A strong point of the new methods is that they avoid the integration of adjoint equations, which is a complex task for real oceanographic or atmospheric applications. They also avoid iterative searches in a Hilbert space, and error estimates can be obtained without much additional computational effort. The feasibility of the new methods is illustrated in a two-layer quasigeostrophic model.
Abstract:
Nonlinear data assimilation is high on the agenda in all fields of the geosciences: with ever-increasing model resolution, the inclusion of more physical (and biological) processes, and more complex observation operators, the data-assimilation problem becomes more and more nonlinear. The suitability of particle filters for solving the nonlinear data-assimilation problem in high-dimensional geophysical problems is discussed. Several existing and new schemes are presented, and it is shown that at least one of them, the Equivalent-Weights Particle Filter, does indeed beat the curse of dimensionality and provides a way forward to solving the problem of nonlinear data assimilation in high-dimensional systems.
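For context, a single step of the standard bootstrap (SIR) particle filter — the baseline scheme, not the Equivalent-Weights filter itself — can be sketched as (the dynamics, likelihood, and parameters below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_pf_step(particles, y, forward, obs_var):
    """One predict-weight-resample step of a bootstrap (SIR) particle filter."""
    particles = forward(particles)                     # propagate each particle
    w = np.exp(-0.5 * (y - particles) ** 2 / obs_var)  # Gaussian likelihood weights
    w /= w.sum()                                       # normalize the weights
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]                              # resample by weight

# toy example: 1000 particles from a broad prior, random-walk dynamics,
# and a direct observation of the state at y = 3
particles = rng.normal(0.0, 2.0, 1000)
forward = lambda x: x + rng.normal(0.0, 0.1, x.shape)
particles = bootstrap_pf_step(particles, y=3.0, forward=forward, obs_var=0.25)
```

The weakness this baseline exhibits in high dimensions is exactly the degeneracy the abstract refers to: almost all weight concentrates on a handful of particles, and the equivalent-weights idea is one way of forcing the particles to keep comparable weights.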
Abstract:
We present a novel algorithm for concurrent model state and parameter estimation in nonlinear dynamical systems. The new scheme uses ideas from three-dimensional variational data assimilation (3D-Var) and the extended Kalman filter (EKF) together with the technique of state augmentation to estimate uncertain model parameters alongside the model state variables in a sequential filtering system. The method is relatively simple to implement and computationally inexpensive to run for large systems with relatively few parameters. We demonstrate the efficacy of the method via a series of identical twin experiments with three simple dynamical system models. The scheme is able to recover the parameter values to a good level of accuracy, even when observational data are noisy. We expect this new technique to be easily transferable to much larger models.
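The state-augmentation idea can be illustrated with a minimal extended Kalman filter twin experiment (the scalar model x_{k+1} = a·x_k, the noise levels, and all names here are assumptions for illustration, not the paper's models):

```python
import numpy as np

def ekf_step(z, P, y, obs_var, q=1e-6):
    """Augmented-state EKF step for x_{k+1} = a * x_k, observing x.

    z = [x, a]: the unknown parameter a is appended to the state
    (state augmentation), so the filter estimates both jointly.
    """
    x, a = z
    zf = np.array([a * x, a])                  # forecast: a is persistent
    F = np.array([[a, x], [0.0, 1.0]])         # Jacobian of the augmented model
    Pf = F @ P @ F.T + q * np.eye(2)
    H = np.array([[1.0, 0.0]])                 # we observe x only
    K = Pf @ H.T / (H @ Pf @ H.T + obs_var)    # Kalman gain (2x1)
    za = zf + (K * (y - zf[0])).ravel()
    Pa = (np.eye(2) - K @ H) @ Pf
    return za, Pa

# identical twin experiment: true a = 0.95, noisy observations of x,
# deliberately wrong initial guess a = 0.5
rng = np.random.default_rng(2)
a_true, x_true = 0.95, 10.0
z, P = np.array([10.0, 0.5]), np.diag([0.01, 1.0])
for _ in range(100):
    x_true = a_true * x_true
    z, P = ekf_step(z, P, x_true + rng.normal(0.0, 0.01), obs_var=1e-4)
# z[1] has converged close to the true parameter value 0.95
```

The cross-covariance between x and a in Pf is what lets observations of the state alone correct the parameter, which is the essence of the augmentation trick.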
Abstract:
Bloom filters are a data structure for storing data in compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents a yes-no Bloom filter, which is a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, which is another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if one chooses the objects to include in the no-filter so that it recognises as many false positives as possible but no true positives, thus producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it recognise no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Given the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed, making use of a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best compared with a number of heuristics as well as the CPLEX built-in branch-and-bound solver, and it is what can be recommended for use in yes-no Bloom filters.
In a wider context of the study of lossy compression algorithms, our research is an example showing how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
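A minimal sketch of the yes-no construction (the hash scheme and parameters are illustrative choices, and the ILP/ADP optimal selection of false positives is not reproduced here):

```python
import hashlib

class BloomFilter:
    """Standard Bloom filter over an m-bit array with k hash functions."""

    def __init__(self, m, k):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        # derive k bit positions from salted SHA-256 digests
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

class YesNoBloomFilter:
    """Yes-filter stores the set; no-filter stores known false positives."""

    def __init__(self, m, k):
        self.yes = BloomFilter(m, k)
        self.no = BloomFilter(m, k)

    def add(self, item):
        self.yes.add(item)

    def mark_false_positive(self, item):
        # in the paper these items are chosen by an optimization step
        self.no.add(item)

    def __contains__(self, item):
        # accepted only if the yes-filter matches and the no-filter does not
        return item in self.yes and item not in self.no

f = YesNoBloomFilter(m=8192, k=3)
f.add("apple")
f.add("banana")
f.mark_false_positive("cherry")  # pretend "cherry" was a known false positive
```

Querying the no-filter second is what buys the accuracy gain: any object placed in the no-filter is guaranteed to be rejected, which is why the selection must avoid true positives.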
Abstract:
Existing theoretical models of house prices and credit rely on continuous rationality of consumers, an assumption that has been frequently questioned in recent years. Meanwhile, empirical investigations of the relationship between prices and credit are often based on national-level data, which is then tested for structural breaks and asymmetric responses, usually with subsamples. Earlier work argues that local markets are structurally different from one another, and so the coefficients of any estimated housing market model should vary from region to region. We investigate differences in the price–credit relationship for 12 regions of the UK. Markov-switching is introduced to capture asymmetric market behaviours and turning points. Results show that credit abundance had a large impact on house prices in Greater London and nearby regions, alongside a strong positive feedback effect from past house price movements. This impact is even larger in Greater London and the South East of England when house prices are falling, which are the only instances where the credit effect is more prominent than the positive feedback effect. A strong positive feedback effect from past lending activity is also present in the loan dynamics. Furthermore, bubble probabilities extracted using a discrete Kalman filter neatly capture market turning points.
Abstract:
For proper management of wastes and their possible recycling as raw materials, complete characterization of the materials is necessary to evaluate the main scientific aspects and potential applications. The current paper presents a detailed scientific study of different Brazilian sugar cane bagasse ashes from the cogeneration industry as alternative cementing materials (active additions) for cement manufacture. The results show that the ashes from the industrial process (filter and bottom ashes) present different chemical and mineralogical compositions and pozzolanic properties as well. As a consequence of its nature, the kinetic rate constant (K) indicates that the pozzolanic activity is null for the bottom ash and very low for the filter ash with respect to a sugar cane bagasse ash obtained in the laboratory under controlled burning conditions (reference). The scarce pozzolanic activity shown by the ashes could be related to possible contamination of the bagasse wastes (with soils) before their use as alternative fuels. For this reason, an optimization process for these wastes is advisable if the ashes are to be used as pozzolans.
Abstract:
We present the discovery of a wide (67 AU) substellar companion to the nearby (21 pc) young solar-metallicity M1 dwarf CD-35 2722, a member of the approximately 100 Myr AB Doradus association. Two epochs of astrometry from the NICI Planet-Finding Campaign confirm that CD-35 2722 B is physically associated with the primary star. Near-IR spectra indicate a spectral type of L4 +/- 1 with a moderately low surface gravity, making it one of the coolest young companions found to date. The absorption lines and near-IR continuum shape of CD-35 2722 B agree especially well with the dusty field L4.5 dwarf 2MASS J22244381-0158521, while the near-IR colors and absolute magnitudes match those of the 5 Myr old L4 planetary-mass companion 1RXS J160929.1-210524 b. Overall, CD-35 2722 B appears to be an intermediate-age benchmark for L dwarfs, with a less peaked H-band continuum than the youngest objects and near-IR absorption lines comparable to field objects. We fit Ames-Dusty model atmospheres to the near-IR spectra and find T(eff) = 1700-1900 K and log(g) = 4.5 +/- 0.5. The spectra also show that the radial velocities of components A and B agree to within +/- 10 km s(-1), further confirming their physical association. Using the age and bolometric luminosity of CD-35 2722 B, we derive a mass of 31 +/- 8 M(Jup) from the Lyon/Dusty evolutionary models. Altogether, young late-M to mid-L type companions appear to be overluminous for their near-IR spectral type compared with field objects, in contrast to the underluminosity of young late-L and early-T dwarfs.
Abstract:
The protective shielding design of a mammography facility requires knowledge of the radiation scattered by the patient and image receptor components. The shape and intensity of secondary x-ray beams depend on the kVp applied to the x-ray tube, the target/filter combination, the primary x-ray field size, and the scattering angle. Currently, shielding calculations for mammography facilities are performed based on scatter fraction data for a Mo/Mo target/filter, even though modern mammography equipment is designed with different anode/filter combinations. In this work we present scatter fraction data evaluated based on the x-ray spectra produced by Mo/Mo, Mo/Rh and W/Rh target/filter combinations, for 25, 30 and 35 kV tube voltages and scattering angles between 30 and 165 degrees. Three mammography phantoms were irradiated and the scattered radiation was measured with a CdZnTe detector. The primary x-ray spectra were computed with a semiempirical model based on the air kerma and HVL measured with an ionization chamber. The results point out that the scatter fraction values are higher for W/Rh than for Mo/Mo and Mo/Rh, although the primary and scattered air kerma are lower for W/Rh than for the Mo/Mo and Mo/Rh target/filter combinations. The scatter fractions computed in this work were applied in a shielding design calculation in order to evaluate the shielding requirements for each of these target/filter combinations. In addition, shielding requirements have been evaluated by converting the scattered air kerma from mGy/week to mSv/week, adopting initially a conversion coefficient from air kerma to effective dose of 1 Sv/Gy and then a mean conversion coefficient specific to the x-ray beam considered. Results show that the thickest barrier is required for the Mo/Mo target/filter combination. They also point out that the conversion coefficient of 1 Sv/Gy from air kerma to effective dose is conservatively high in the mammography energy range and overestimates the barrier thickness.
2D QSAR and similarity studies on cruzain inhibitors aimed at improving selectivity over cathepsin L
Abstract:
Hologram quantitative structure-activity relationships (HQSAR) were applied to a data set of 41 cruzain inhibitors. The best HQSAR model (Q(2) = 0.77; R-2 = 0.90), employing Surflex-Sim as the training and test set generator, was obtained using atoms, bonds, and connections as fragment distinctions and 4-7 as the fragment size. This model was then used to predict the potencies of 12 test set compounds, giving a satisfactory predictive R-2 value of 0.88. The contribution maps obtained from the best HQSAR model are in agreement with the biological activities of the studied compounds. The Trypanosoma cruzi cruzain shares high similarity with the mammalian homolog cathepsin L. The selectivity toward cruzain was checked with a database of 123 compounds, corresponding to the 41 cruzain inhibitors used in the HQSAR model development plus 82 cathepsin L inhibitors. We screened these compounds with ROCS (Rapid Overlay of Chemical Structures), a Gaussian-shape volume overlap filter that can rapidly identify shapes that match the query molecule. Remarkably, ROCS was able to rank the first 37 hits as being only cruzain inhibitors. In addition, the area under the curve (AUC) obtained with ROCS was 0.96, indicating that the method was very efficient in distinguishing between cruzain and cathepsin L inhibitors.
Abstract:
The aim of this thesis is to investigate computerized voice assessment methods to classify normal and dysarthric speech signals. In the proposed system, computerized assessment methods equipped with signal processing and artificial intelligence techniques have been introduced. The sentences used for the measurement of inter-stress intervals (ISI) were read by each subject, and these sentences were analysed for comparisons between normal and impaired voice. A band-pass filter has been used for the preprocessing of the speech samples. Speech segmentation is performed using signal energy and spectral centroid to separate voiced and unvoiced regions in the speech signal. Acoustic features are extracted from the LPC model and from the speech segments of each audio signal to find anomalies. The speech features assessed for classification are energy entropy, zero crossing rate (ZCR), spectral centroid, mean fundamental frequency (Meanf0), jitter (RAP), jitter (PPQ), and shimmer (APQ). Naïve Bayes (NB) has been used for speech classification. For speech test-1 and test-2, classification accuracies of 72% and 80% between healthy and impaired speech samples have been achieved, respectively, using NB. For speech test-3, 64% correct classification is achieved using NB. The results indicate the possibility of speech impairment classification in PD patients based on the clinical rating scale.
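The segmentation features mentioned — short-time energy, zero crossing rate, and spectral centroid — can be sketched as follows (the frame length, sampling rate, and test tone are illustrative assumptions, not the thesis's data):

```python
import numpy as np

def short_time_energy(frame):
    """Mean squared amplitude of a frame."""
    return np.mean(frame ** 2)

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose sign differs."""
    return np.mean(np.signbit(frame[:-1]) != np.signbit(frame[1:]))

def spectral_centroid(frame, sr):
    """Magnitude-weighted mean frequency of the frame's spectrum, in Hz."""
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return np.sum(freqs * mag) / np.sum(mag)

# a 1 kHz test tone sampled at 8 kHz: a strongly periodic, voiced-like frame
sr = 8000
t = np.arange(1024) / sr
tone = np.sin(2 * np.pi * 1000 * t)
e = short_time_energy(tone)      # ~0.5 for a unit-amplitude sine
z = zero_crossing_rate(tone)     # ~0.25: two crossings per 8-sample cycle
c = spectral_centroid(tone, sr)  # ~1000 Hz
```

Voiced speech frames typically show high energy, low ZCR, and a low spectral centroid, while unvoiced (fricative-like) frames show the opposite pattern, which is what makes these three features usable for the voiced/unvoiced segmentation described above.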
Abstract:
The objective of this work is to characterize the monthly yield curve for Brazil through three factors, comparing two estimation methods. Through the state-space representation, the model can be estimated by two methods: the Kalman filter and two-step least squares. The dynamics of the factors are represented by a vector autoregressive model, VAR(1), and for the second estimation method a structure is assigned to the conditional variance. To compare the methods employed, an alternative approach is proposed: Markov processes that can jointly model the slope factor of the yield curve, obtained by the methods employed in this work, and a proxy variable for economic performance, providing some forecasting measure for economic cycles.
Abstract:
The objective of this work is to describe the behavior of the economic cycle in Brazil through Markov processes which can jointly model the slope factor of the yield curve, obtained by estimating the Nelson-Siegel dynamic model with the Kalman filter, and a proxy variable for economic performance, providing some forecasting measure for economic cycles.
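A generic linear-Gaussian Kalman filter for a latent AR(1) factor — a sketch of the filtering machinery only, not the Nelson-Siegel model itself (all parameters below are assumed for illustration) — looks like:

```python
import numpy as np

def kalman_filter(y, phi, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a latent AR(1) factor:
        x_t = phi * x_{t-1} + w_t,   w_t ~ N(0, q)
        y_t = x_t + v_t,             v_t ~ N(0, r)
    Returns the filtered means E[x_t | y_1..y_t]."""
    x, p, means = x0, p0, []
    for yt in y:
        x, p = phi * x, phi * phi * p + q       # predict
        k = p / (p + r)                         # Kalman gain
        x, p = x + k * (yt - x), (1 - k) * p    # update with innovation
        means.append(x)
    return np.array(means)

# simulate a latent AR(1) factor and noisy measurements of it
rng = np.random.default_rng(3)
truth, xs, ys = 0.0, [], []
for _ in range(300):
    truth = 0.95 * truth + rng.normal(0.0, 0.1)
    xs.append(truth)
    ys.append(truth + rng.normal(0.0, 0.5))
xs, ys = np.array(xs), np.array(ys)
est = kalman_filter(ys, phi=0.95, q=0.01, r=0.25)
# the filtered series tracks the latent factor much better than the raw data
```

In the Nelson-Siegel setting the latent state is the vector of level, slope, and curvature factors and the observation equation maps them to yields at several maturities, but the predict/update recursion is the same.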