986 results for Filter method


Relevance: 80.00%

Abstract:

A new iterative algorithm based on the inexact-restoration (IR) approach combined with the filter strategy to solve nonlinear constrained optimization problems is presented. The high-level algorithm was suggested by Gonzaga et al. (SIAM J. Optim. 14:646–669, 2003) but not implemented, since the internal algorithms were not specified. The filter, a concept introduced by Fletcher and Leyffer (Math. Program. Ser. A 91:239–269, 2002), replaces the merit function, avoiding penalty parameter estimation and the difficulties related to nondifferentiability. In the IR approach two independent phases are performed in each iteration: the feasibility phase and the optimality phase. The line search filter is combined with the first phase to generate a “more feasible” point, and is then used in the optimality phase to reach an “optimal” point. Numerical experiments with a collection of AMPL problems and a performance comparison with IPOPT are provided.
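The filter concept of Fletcher and Leyffer that both phases rely on can be sketched as a dominance test over pairs of constraint violation h and objective value f. This is a minimal illustrative sketch, not the paper's implementation; the margin gamma, the helper names and the toy numbers are assumptions.

```python
# Minimal sketch of a filter acceptance test: a trial point with
# constraint violation h and objective f is acceptable only if no
# stored pair (h_i, f_i) dominates it. gamma is an illustrative margin.

def acceptable(h, f, filter_pairs, gamma=1e-5):
    """Return True if (h, f) is not dominated by any filter entry."""
    return all(h < (1 - gamma) * h_i or f < f_i - gamma * h_i
               for h_i, f_i in filter_pairs)

def add_to_filter(h, f, filter_pairs):
    """Insert (h, f) and discard entries it dominates."""
    kept = [(h_i, f_i) for h_i, f_i in filter_pairs
            if h_i < h or f_i < f]
    kept.append((h, f))
    return kept

flt = [(0.5, 10.0), (0.1, 12.0)]
print(acceptable(0.05, 11.0, flt))  # True: less infeasible than every entry
print(acceptable(0.5, 10.0, flt))   # False: dominated by the first entry
```

Because acceptance never requires a penalty parameter, the two phases of the IR iteration can share the same filter.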

Relevance: 70.00%

Abstract:

The filter method is a technique for solving nonlinear programming problems. The filter algorithm has two phases in each iteration: the first reduces a measure of infeasibility, while the second reduces the objective function value. In real optimization problems the objective function is often not differentiable, or its derivatives are unknown. In these cases it becomes essential to use optimization methods in which the calculation of derivatives, or the verification of their existence, is not necessary: direct search methods and derivative-free methods are examples of such techniques. In this work we present a new direct search method, based on the simplex method, for general constrained optimization that combines the features of simplex and filter methods. This method neither computes nor approximates derivatives, penalty constants or Lagrange multipliers.

Relevance: 70.00%

Abstract:

The purpose of this work is to present an algorithm to solve nonlinear constrained optimization problems, using the filter method with the inexact restoration (IR) approach. In the IR approach two independent phases are performed in each iteration: the feasibility phase and the optimality phase. The first directs the iterative process toward the feasible region, i.e. finds a point with smaller constraint violation. The optimality phase starts from this point and its goal is to optimize the objective function over the set of satisfied constraints. To evaluate the approximate solutions in each iteration, a scheme based on the filter method is used in both phases of the algorithm. This method replaces merit functions that are based on penalty schemes, avoiding the related difficulties, such as penalty parameter estimation and the non-differentiability of some of them. The filter method is implemented in the context of the line search globalization technique. A set of more than two hundred AMPL test problems is solved. The algorithm developed is compared with the LOQO and NPSOL software packages.

Relevance: 70.00%

Abstract:

Considerable research effort has been devoted to predicting the exon regions of genes. The binary indicator (BI), electron-ion interaction pseudopotential (EIIP) and filter methods are some of these, and all of them make use of the period-three behavior of the exon region. Although the method suggested in this paper is similar to the above-mentioned methods, it introduces a set of sequences for mapping the nucleotides, selected by applying a genetic algorithm, and is found to be more promising.
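The period-three behavior that all these methods exploit can be illustrated by mapping a DNA string to four binary indicator sequences and measuring the spectral content at frequency 2*pi/3 with a direct DFT sum. The function name and the toy sequences below are illustrative assumptions, not taken from the paper.

```python
# Illustrative period-3 measure used by BI/EIIP-style exon predictors:
# sum over the four binary indicator sequences of the squared magnitude
# of the DFT evaluated at k = N/3.
import cmath

def period3_power(seq):
    """|DFT at frequency 2*pi/3|^2 summed over the A, C, G, T indicators."""
    w = cmath.exp(-2j * cmath.pi / 3)  # e^{-j 2*pi m / 3}
    total = 0.0
    for base in "ACGT":
        s = sum(w ** m for m, b in enumerate(seq) if b == base)
        total += abs(s) ** 2
    return total

# A strongly 3-periodic sequence scores high; a uniform one scores ~0.
print(period3_power("ACG" * 10))
print(period3_power("A" * 9))
```

In practice this statistic is computed over a sliding window, and windows with a pronounced period-3 peak are flagged as candidate exons.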

Relevance: 70.00%

Abstract:

A new distributed spam filter system based on mobile agents is proposed in this paper. We introduce the application of mobile agent technology to the spam filter system. The system architecture, the work process and the pivotal technology of the distributed spam filter system based on mobile agents, as well as the naive Bayesian filter method, are described in detail. The experimental results indicate that the system can block spam emails effectively.
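The naive Bayesian filter at the core of such a system can be sketched in a few lines: score a message under each class with word likelihoods and a class prior, and label it spam if the spam score wins. The class design, Laplace smoothing and the toy corpus are illustrative assumptions, not details from the paper.

```python
# Minimal naive Bayesian spam filter: log-prior plus summed
# log-likelihoods with add-one (Laplace) smoothing.
import math
from collections import Counter

class NaiveBayesFilter:
    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.docs = {"spam": 0, "ham": 0}

    def train(self, words, label):
        self.counts[label].update(words)
        self.docs[label] += 1

    def score(self, words, label):
        prior = math.log(self.docs[label] / sum(self.docs.values()))
        total = sum(self.counts[label].values())
        vocab = len(set(self.counts["spam"]) | set(self.counts["ham"]))
        return prior + sum(
            math.log((self.counts[label][w] + 1) / (total + vocab))
            for w in words)

    def is_spam(self, words):
        return self.score(words, "spam") > self.score(words, "ham")

nb = NaiveBayesFilter()
nb.train(["free", "money", "now"], "spam")
nb.train(["meeting", "tomorrow", "agenda"], "ham")
print(nb.is_spam(["free", "money"]))  # True
```

In the distributed setting described above, each mobile agent would carry such a classifier and its counts to the host it filters.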

Relevance: 70.00%

Abstract:

This paper discusses an important issue related to the implementation and interpretation of the analysis scheme in the ensemble Kalman filter. It is shown that the observations must be treated as random variables at the analysis steps. That is, one should add random perturbations with the correct statistics to the observations and generate an ensemble of observations that is then used in updating the ensemble of model states. Traditionally, this has not been done in previous applications of the ensemble Kalman filter and, as will be shown, this has resulted in an updated ensemble with a variance that is too low. This simple modification of the analysis scheme results in a completely consistent approach if the covariance of the ensemble of model states is interpreted as the prediction error covariance, and there are no further requirements on the ensemble Kalman filter method, except for the use of an ensemble of sufficient size. Thus, there is a unique correspondence between the error statistics from the ensemble Kalman filter and the standard Kalman filter approach.
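The perturbed-observation analysis step argued for here can be sketched directly: each ensemble member is updated with its own noisy copy of the observation, so the analysis ensemble keeps the correct spread. The dimensions, seed and toy numbers are illustrative assumptions.

```python
# Sketch of an ensemble Kalman filter analysis step with perturbed
# observations: X is the forecast ensemble, y the observation, H the
# observation operator, R the observation error covariance.
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """X: (n, N) ensemble; y: (m,) observation; H: (m, n); R: (m, m)."""
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)        # anomalies
    P = A @ A.T / (N - 1)                        # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R) # Kalman gain
    # Perturb the observation independently for each member.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(2, 500))  # forecast ensemble, 2 states
H = np.array([[1.0, 0.0]])               # observe the first component only
R = np.array([[0.25]])
Xa = enkf_analysis(X, np.array([1.0]), H, R, rng)
print(Xa[0].mean())  # pulled toward the observation 1.0
```

Dropping the perturbation (using Y = y for every member) reproduces the too-low analysis variance the paper warns about.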

Relevance: 70.00%

Abstract:

A particle filter method is presented for the discrete-time filtering problem with nonlinear Itô stochastic ordinary differential equations (SODEs) with additive noise, supposed to be analytically integrable as a function of the underlying vector Wiener process and time. The Diffusion Kernel Filter is arrived at by a parametrization of small noise-driven state fluctuations within branches of prediction and a local use of this parametrization in the bootstrap filter. The method applies for small noise and short prediction steps. With explicit numerical integrators, the operations count in the Diffusion Kernel Filter is shown to be smaller than in the bootstrap filter whenever the initial state for the prediction step has sufficiently few moments. The established parametrization is a dual formula for the analysis of sensitivity to Gaussian initial perturbations and of sensitivity to noise perturbations in deterministic models, showing in particular how the stability of a deterministic dynamics is modeled by noise on short times and how the diffusion matrix of an SODE should be modeled (i.e. defined) for a Gaussian-initial deterministic problem to be cast as an SODE problem. From it, a novel definition of prediction may be proposed that coincides with the deterministic path within the branch of prediction whose information entropy at the end of the prediction step is closest to the average information entropy over all branches. Tests are made with the Lorenz-63 equations, showing good results both for the filter and for the definition of prediction.
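The bootstrap filter that serves as the baseline here follows a propagate-weight-resample cycle. A minimal scalar sketch, with a made-up linear model, noise levels and seed (none taken from the paper):

```python
# One step of a bootstrap particle filter for a scalar model:
# propagate particles through the noisy dynamics, weight by the
# Gaussian observation likelihood, then resample with replacement.
import math
import random

def bootstrap_step(particles, y_obs, rng, q=0.1, r=0.2):
    # Prediction: x' = 0.9 x + process noise.
    pred = [0.9 * x + rng.gauss(0.0, q) for x in particles]
    # Update: weights proportional to p(y | x).
    w = [math.exp(-0.5 * ((y_obs - x) / r) ** 2) for x in pred]
    total = sum(w)
    w = [wi / total for wi in w]
    # Resampling according to the normalized weights.
    return rng.choices(pred, weights=w, k=len(pred))

rng = random.Random(1)
particles = [rng.gauss(0.0, 1.0) for _ in range(2000)]
particles = bootstrap_step(particles, y_obs=1.0, rng=rng)
print(sum(particles) / len(particles))  # posterior mean pulled toward 1.0
```

The Diffusion Kernel Filter replaces the brute-force sampling in the prediction step with the parametrized small-noise fluctuations described above.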

Relevance: 70.00%

Abstract:

In population studies, most current methods focus on identifying one outcome-related SNP at a time by testing for differences in genotype frequencies between disease and healthy groups or among different population groups. However, testing a great number of SNPs simultaneously raises a multiple-testing problem and will give false-positive results. Although this problem can be dealt with effectively through several approaches, such as Bonferroni correction, permutation testing and false discovery rates, patterns of joint effects of several genes, each with a weak effect, may not be detectable. With the availability of high-throughput genotyping technology, searching for multiple scattered SNPs over the whole genome and modeling their joint effect on the target variable has become possible. Exhaustive search of all SNP subsets is computationally infeasible for the millions of SNPs in a genome-wide study. Several effective feature selection methods, combined with classification functions, have been proposed to search for an optimal SNP subset in large data sets where the number of feature SNPs far exceeds the number of observations.

In this study, we take two steps to achieve this goal. First we selected 1000 SNPs through an effective filter method, and then we performed a feature selection wrapped around a classifier to identify an optimal SNP subset for predicting disease. We also developed a novel classification method, the sequential information bottleneck method, wrapped inside different search algorithms to identify an optimal subset of SNPs for classifying the outcome variable. This new method was compared with classical linear discriminant analysis in terms of classification performance. Finally, we performed a chi-square test to look at the relationship between each SNP and disease from another point of view.

In general, our results show that filtering features using the harmonic mean of sensitivity and specificity (HMSS) through linear discriminant analysis (LDA) is better than using LDA training accuracy or mutual information in our study. Our results also demonstrate that exhaustive search of a small subset (one SNP, two SNPs, or a 3-SNP subset based on the best 100 composite 2-SNPs) can find an optimal subset, and that further inclusion of more SNPs through a heuristic algorithm does not always increase the performance of SNP subsets. Although sequential forward floating selection can be applied to prevent the nesting effect of forward selection, it does not always outperform the latter, due to overfitting from observing more complex subset states.

Our results also indicate that HMSS, as a criterion to evaluate the classification ability of a function, can be used on imbalanced data without modifying the original dataset, unlike classification accuracy. Our four studies suggest that the sequential information bottleneck (sIB), a new unsupervised technique, can be adopted to predict the outcome, and its ability to detect the target status is superior to that of traditional LDA in this study.

From our results we can see that the best test probability-HMSS for predicting CVD, stroke, CAD and psoriasis through sIB is 0.59406, 0.641815, 0.645315 and 0.678658, respectively. In terms of group prediction accuracy, the highest test accuracy of sIB for diagnosing a normal status among controls can reach 0.708999, 0.863216, 0.639918 and 0.850275, respectively, in the four studies if the test accuracy among cases is required to be not less than 0.4. On the other hand, the highest test accuracy of sIB for diagnosing a disease among cases can reach 0.748644, 0.789916, 0.705701 and 0.749436, respectively, in the four studies if the test accuracy among controls is required to be at least 0.4.

A further genome-wide association study through the chi-square test shows that there are no significant SNPs detected at the cut-off level 9.09451E-08 in the Framingham Heart Study of CVD. The WTCCC study results detect only two significant SNPs associated with CAD. In the genome-wide study of psoriasis, most of the top 20 SNP markers with impressive classification accuracy are also significantly associated with the disease through the chi-square test at the cut-off value 1.11E-07.

Although our classification methods can achieve high accuracy in the study, complete descriptions of those classification results (95% confidence intervals or statistical tests of differences) require more cost-effective methods or an efficient computing system, neither of which can currently be accomplished in our genome-wide study. We should also note that the purpose of this study is to identify subsets of SNPs with high prediction ability, and SNPs with good discriminant power are not necessarily causal markers for the disease.
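The filter step of such a filter-then-wrapper pipeline can be sketched with the chi-square statistic the study uses for association: score each SNP by a 2x2 chi-square between carrier status and disease, and keep the top-k for the wrapper search. The toy genotype matrix is made up; the study's actual filter criterion was HMSS through LDA.

```python
# Illustrative filter step for SNP selection: rank SNPs by a 2x2
# chi-square statistic between 0/1 carrier status and 0/1 disease
# status, keeping the k highest-scoring SNPs.

def chi2_2x2(a, b, c, d):
    """Chi-square for the table [[a, b], [c, d]], no continuity correction."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0

def filter_snps(genotypes, disease, k):
    """genotypes: list of per-SNP 0/1 carrier vectors; disease: 0/1 labels."""
    scores = []
    for j, g in enumerate(genotypes):
        a = sum(1 for gi, di in zip(g, disease) if gi and di)
        b = sum(1 for gi, di in zip(g, disease) if gi and not di)
        c = sum(1 for gi, di in zip(g, disease) if not gi and di)
        d = sum(1 for gi, di in zip(g, disease) if not gi and not di)
        scores.append((chi2_2x2(a, b, c, d), j))
    return [j for _, j in sorted(scores, reverse=True)[:k]]

disease  = [1, 1, 1, 1, 0, 0, 0, 0]
snp_good = [1, 1, 1, 0, 0, 0, 0, 1]   # associated with disease
snp_null = [1, 0, 1, 0, 1, 0, 1, 0]   # independent of disease
print(filter_snps([snp_null, snp_good], disease, k=1))  # [1]
```

The wrapper stage would then search subsets of the surviving SNPs with a classifier such as LDA or sIB in the loop.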

Relevance: 60.00%

Abstract:

In the conduct of monetary policy, the reaction functions estimated in empirical studies, both for the Brazilian economy and for other economies, have shown a good fit to the data. However, these studies show that the explanatory power of the estimates increases considerably when an interest-rate smoothing component, represented by the lagged interest rate, is included. According to Clarida et al. (1998), the coefficient on the lagged interest rate (between 0.0 and 1.0) represents the degree of inertia of monetary policy: the larger this coefficient, the smaller and slower the response of the interest rate to the relevant information set. The international empirical literature shows that this component carries a substantial weight in reaction functions, revealing that central banks adjust the instrument slowly and parsimoniously. The Brazilian case is of particular interest because the most recent studies have found an increase in the inertial component, suggesting that the Central Bank of Brazil (BCB) has raised the degree of interest-rate smoothing in recent years. In this context, beyond estimating a forward-looking reaction function to capture the average overall behavior of the Central Bank of Brazil from January 2005 to May 2013, this work seeks answers to a possible dynamic causal relationship between the trajectory of the inertia coefficient and the relevant macroeconomic variables, applying the Kalman filter to extract the trajectory of the inertia coefficient and estimating a vector autoregression (VAR) model that includes that trajectory together with the relevant macroeconomic variables. In general, both the regressions and the Kalman filter showed an extremely high inertia coefficient throughout the period analyzed, and very small overall response coefficients, inconsistent with what theory predicts.
From the VAR method, the result of greatest interest was that positive shocks to the inertia variable were responsible for persistent deviations in the output gap and, consequently, in the deviations of inflation and of inflation expectations from the central target.
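The Kalman-filter extraction of a time-varying coefficient can be sketched for a scalar case: treat the inertia coefficient rho_t in i_t = rho_t * i_{t-1} + noise as a random-walk state and filter it from the interest-rate series. All variances and the noiseless toy series below are illustrative assumptions, not values from the study.

```python
# Scalar Kalman filter for a random-walk regression coefficient:
# state rho_t, observation curr = rho_t * prev + measurement noise,
# with q the state variance and r the observation variance.

def filter_coefficient(rates, q=1e-4, r=1e-2):
    rho, p = 0.5, 1.0              # initial state mean and variance
    path = []
    for prev, curr in zip(rates, rates[1:]):
        p += q                     # time update: random-walk state
        s = prev * p * prev + r    # innovation variance
        k = p * prev / s           # Kalman gain
        rho += k * (curr - rho * prev)
        p *= (1 - k * prev)
        path.append(rho)
    return path

rates = [10.0]
for _ in range(50):                # noiseless AR(1) series, true rho = 0.9
    rates.append(0.9 * rates[-1])
print(filter_coefficient(rates)[-1])  # close to 0.9
```

The extracted path of rho_t is what would then enter the VAR alongside the macroeconomic variables.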

Relevance: 60.00%

Abstract:

This study characterized the fecal indicator bacteria (FIB), including Escherichia coli (E. coli) and Enterococcus (ENT), disseminated over time in the Bay of Vidy, the most contaminated area of Lake Geneva. Sediments were collected from a site located approximately 500 m from the present wastewater treatment plant (WWTP) outlet pipe, in front of the former WWTP outlet pipe, which was located only 300 m from the coastal recreational area (before 2001). E. coli and ENT were enumerated in sediment suspension using the membrane filter method. The FIB characterization was performed for human Enterococcus faecalis (E. faecalis) and Enterococcus faecium (E. faecium) and human-specific Bacteroides by PCR using specific primers and by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). Bacterial cultures revealed that maximum values of 35.2 × 10^8 and 6.6 × 10^6 CFU g^-1 dry sediment for E. coli and ENT, respectively, were found in the sediments deposited following the eutrophication of Lake Geneva in the 1970s, whereas the WWTP started operating in 1964. The same tendency was observed for the presence of human fecal pollution: the percentage of PCR amplification with primers ESP-1/ESP-2 for E. faecalis and E. faecium indicated that more than 90% of these bacteria were of human origin. Interestingly, the PCR assays for human-specific Bacteroides HF183/HF134 were positive for DNA extracted from all isolated strains of sediment surrounding the WWTP outlet pipe discharge. The MALDI-TOF MS confirmed the presence of E. coli and the predominance of E. faecium in the isolated strains. Our results demonstrated that human fecal bacteria increased greatly in the sediments contaminated with WWTP effluent following the eutrophication of Lake Geneva. Additionally, other cultivable FIB strains from animals, or adapted environmental strains, were detected in the sediment of the bay.
The approaches used in this research are valuable for assessing the temporal distribution and the source of human fecal pollution in aquatic environments. (C) 2011 Elsevier Inc. All rights reserved.

Relevance: 60.00%

Abstract:

The use of urinary hexane diamine (HDA) as a biomarker to assess human respiratory exposure to hexamethylene diisocyanate (HDI) aerosol was evaluated. Twenty-three auto body shop workers were exposed to HDI biuret aerosol for two hours using a closed exposure apparatus. HDI exposures were quantified using both a direct-reading instrument and a treated-filter method. Urine samples collected at baseline, immediately post-exposure, and every four to five hours for up to 20 hours were analyzed for HDA using gas chromatography and mass spectrometry. Mean urinary HDA (µg/g creatinine) sharply increased from the baseline value of 0.7 to 18.1 immediately post-exposure and decreased rapidly to 4.7, 1.9 and 1.1, respectively, at 4, 9, and 18 hours post-exposure. Considerable individual variability was found. Urinary HDA can assess acute respiratory exposure to HDI aerosol, but may have limited use as a biomarker of exposure in the workplace. [Authors]

Relevance: 60.00%

Abstract:

The present study was carried out on two different servo systems. In the first, a servo-hydraulic system was identified and then controlled by a fuzzy gain-scheduling controller. In the second, an electro-magnetic linear motor, suppression of mechanical vibration and position tracking of a reference model were studied using a neural network and an adaptive backstepping controller, respectively. The research methods are as follows. Electro-hydraulic servo systems (EHSS) are commonly used in industry. These systems are nonlinear in nature and their dynamic equations have several unknown parameters. System identification is a prerequisite to the analysis of a dynamic system. One of the most promising novel evolutionary algorithms is differential evolution (DE) for solving global optimization problems. In this study, the DE algorithm is proposed for handling nonlinear constraint functions with boundary limits on the variables to find the best parameters of a servo-hydraulic system with a flexible load. DE guarantees fast convergence and accurate solutions regardless of the initial parameter values. The control of hydraulic servo systems has been the focus of intense research over the past decades. These systems are nonlinear in nature and generally difficult to control, since changing system parameters while using the same gains will cause overshoot or even loss of system stability. The highly nonlinear behaviour of these devices makes them ideal subjects for applying different types of sophisticated controllers. The study is concerned with a second-order model reference for positioning control of a flexible-load servo-hydraulic system using fuzzy gain scheduling. In the present research, acceleration feedback was used to compensate for the lack of damping in a hydraulic system. To compare the results, a P controller with feed-forward acceleration and different gains in extension and retraction was used.
The design procedure for the controller and the experimental results are discussed. The results suggest that using the fuzzy gain-scheduling controller decreases the error in position reference tracking. The second part of the research was done on a permanent magnet linear synchronous motor (PMLSM). In this study, a recurrent neural network compensator for suppressing mechanical vibration in a PMLSM with a flexible load is studied. The linear motor is controlled by a conventional PI velocity controller, and the vibration of the flexible mechanism is suppressed by using a hybrid recurrent neural network. The differential evolution strategy and the Kalman filter method are used to avoid the local minimum problem and to estimate the states of the system, respectively. The proposed control method is first designed using a nonlinear simulation model built in Matlab Simulink and then implemented in a practical test rig. The proposed method works satisfactorily and suppresses the vibration successfully. In the last part of the research, a nonlinear load control method is developed and implemented for a PMLSM with a flexible load. The purpose of the controller is to track a flexible load to the desired position reference as fast as possible and without awkward oscillation. The control method is based on an adaptive backstepping algorithm whose stability is ensured by the Lyapunov stability theorem. The states of the system needed in the controller are estimated using the Kalman filter. The proposed controller is implemented and tested in a linear motor test drive, and the responses are presented.
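The differential evolution strategy used for the identification can be sketched in its classic DE/rand/1/bin form, shown here on a toy least-squares fit of a first-order step response rather than the actual servo-hydraulic model. Population size, F, CR, bounds and the parameter names (k, tau) are illustrative choices.

```python
# Minimal DE/rand/1/bin: mutate with a scaled difference of two random
# members, crossover with the target vector, keep the trial if it is
# no worse.
import math
import random

def differential_evolution(f, bounds, np_=20, F=0.7, CR=0.9, gens=200, seed=2):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = [
                min(max(a[j] + F * (b[j] - c[j]), bounds[j][0]), bounds[j][1])
                if (rng.random() < CR or j == jrand) else pop[i][j]
                for j in range(dim)]
            if f(trial) <= f(pop[i]):
                pop[i] = trial
    return min(pop, key=f)

# Recover (k, tau) of y = k * (1 - exp(-t / tau)) from sampled data.
t_data = [0.1 * n for n in range(1, 30)]
y_data = [2.0 * (1 - math.exp(-t / 0.5)) for t in t_data]
sse = lambda p: sum((y - p[0] * (1 - math.exp(-t / p[1]))) ** 2
                    for t, y in zip(t_data, y_data))
best = differential_evolution(sse, [(0.1, 5.0), (0.1, 5.0)])
print(best)  # close to [2.0, 0.5]
```

The bound clamping plays the role of the boundary limits on the variables mentioned in the abstract.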

Relevance: 60.00%

Abstract:

Time series analysis can be categorized into three different approaches: classical, Box-Jenkins, and state space. The classical approach provides a foundation for the analysis, and the Box-Jenkins approach is an improvement of the classical approach that deals with stationary time series. The state-space approach allows time-variant factors and covers a broader area of time series analysis. This thesis focuses on parameter identifiability of the different parameter estimation methods, such as LSQ, Yule-Walker and MLE, which are used in the above time series analysis approaches. The Kalman filter method and smoothing techniques are also integrated with the state-space approach and the MLE method to estimate parameters, allowing them to change over time. Parameter estimation is carried out by repeated estimation, integrated with MCMC, to inspect how well the different estimation methods can identify the optimal model parameters. Identification is performed in both the probabilistic and the general sense, and the results are compared in order to study and represent identifiability in a more informative way.
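Two of the estimators named above can be compared on the simplest case, a scalar AR(1) model x_t = phi * x_{t-1} + e_t: least squares regresses x_t on x_{t-1}, while Yule-Walker takes the ratio of lag-1 to lag-0 sample autocovariances. The simulated series, seed and true phi = 0.8 are illustrative assumptions.

```python
# LSQ vs Yule-Walker estimation of the AR(1) coefficient phi.
import random

def ar1_lsq(x):
    """Least squares: regress x_t on x_{t-1} (no intercept)."""
    num = sum(a * b for a, b in zip(x[1:], x[:-1]))
    den = sum(a * a for a in x[:-1])
    return num / den

def ar1_yule_walker(x):
    """Yule-Walker: phi = c(1) / c(0) from sample autocovariances."""
    m = sum(x) / len(x)
    c0 = sum((a - m) ** 2 for a in x) / len(x)
    c1 = sum((a - m) * (b - m) for a, b in zip(x[1:], x[:-1])) / len(x)
    return c1 / c0

rng = random.Random(3)
x = [0.0]
for _ in range(5000):
    x.append(0.8 * x[-1] + rng.gauss(0.0, 1.0))
print(ar1_lsq(x), ar1_yule_walker(x))  # both near 0.8
```

Identifiability questions of the kind the thesis studies arise when such estimators are pushed to richer models, where different parameter values can fit the data almost equally well.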

Relevance: 60.00%

Abstract:

This paper compares the most common digital signal processing methods of exon prediction in eukaryotes, and also proposes a technique for noise suppression in exon prediction. The specimen used here, which has relevance in medical research, has been taken from the public genomic database GenBank. Exon prediction has been done using the digital signal processing methods, viz. the binary method, the EIIP (electron-ion interaction pseudopotential) method and filter methods. Under the filter method, two filter designs, and two approaches using these two designs, have been tried. The discrete wavelet transform has been used for de-noising of the exon plots. Results of exon prediction based on the methods mentioned above, which give values closest to the ones found in the NCBI database, are given here. The exon plot de-noised using the discrete wavelet transform is also given. The authors' alterations to the proven methods improve the performance of exon prediction algorithms. It is also shown that the discrete wavelet transform is an effective de-noising tool that can be used with exon prediction algorithms.
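The wavelet de-noising step can be illustrated with a hand-rolled one-level Haar transform: transform the plot, soft-threshold the detail coefficients, and invert. This is a sketch of the general technique, not the paper's filter bank; the (non-orthonormalized) averaging convention, the threshold and the toy signal are assumptions.

```python
# One-level Haar wavelet de-noising: averages/differences of adjacent
# pairs, soft thresholding of the details, exact inverse transform.

def haar_denoise(signal, thresh):
    # Forward transform (signal length must be even).
    approx = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    # Soft-threshold the details: shrink toward zero.
    detail = [max(abs(d) - thresh, 0.0) * (1 if d >= 0 else -1)
              for d in detail]
    # Inverse transform.
    out = []
    for s, d in zip(approx, detail):
        out += [s + d, s - d]
    return out

noisy = [1.0, 1.2, 0.9, 1.1, 5.0, 5.2, 4.8, 5.1]  # step signal + small noise
print(haar_denoise(noisy, thresh=0.15))
```

With the small details suppressed, the step structure of the underlying plot survives while the pairwise jitter is removed.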

Relevance: 60.00%

Abstract:

In this paper we investigate the commonly used autoregressive filter method of adjusting appraisal-based real estate returns to correct for the perceived biases induced by the appraisal process. Since the early work by Geltner (1989), many papers have been written on this topic, but remarkably few have considered the relationship between smoothing at the individual property level and the amount of persistence in the aggregate appraisal-based index. To investigate this issue in more detail, we analyse a sample of individual property-level appraisal data from the Investment Property Database (IPD). We find that commonly used unsmoothing estimates overstate the extent of smoothing that takes place at the individual property level. There is also strong support for an ARFIMA representation of appraisal returns.
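The autoregressive unsmoothing this literature builds on inverts the assumed appraisal rule r*_t = a * r*_{t-1} + (1 - a) * r_t to recover the underlying return series. A minimal sketch; the smoothing parameter a and the toy series are illustrative assumptions.

```python
# Geltner-style unsmoothing of an appraisal-based return series:
# given reported returns r*_t = a * r*_{t-1} + (1 - a) * r_t,
# recover r_t = (r*_t - a * r*_{t-1}) / (1 - a).

def unsmooth(smoothed, a):
    """Invert the first-order appraisal smoothing filter."""
    return [(curr - a * prev) / (1 - a)
            for prev, curr in zip(smoothed, smoothed[1:])]

true_r = [0.02, -0.01, 0.03, 0.00, 0.015]
a = 0.6
smoothed = [true_r[0]]
for r in true_r[1:]:
    smoothed.append(a * smoothed[-1] + (1 - a) * r)
print(unsmooth(smoothed, a))  # recovers true_r[1:] exactly
```

The paper's finding is precisely that a calibrated at the aggregate index level overstates the smoothing actually present in individual appraisals, so this inversion can over-correct.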