993 results for Filter methods


Relevance:

30.00%

Publisher:

Abstract:

In this paper, we propose a new approach to constructing a 2-dimensional (2-D) directional filter bank (DFB) by cascading a 2-D nonseparable checkerboard-shaped filter pair with a 2-D separable cosine modulated filter bank (CMFB). As with the diagonal subbands in 2-D separable wavelets, most of the subbands in 2-D separable CMFBs (tensor products of two 1-D CMFBs) have poor directional selectivity, because the frequency supports of most of the subband filters are concentrated along two different directions. To improve the directional selectivity, we propose a new DFB to realize the subband decomposition. First, a checkerboard-shaped filter pair decomposes an input image into two images containing different directional information from the original image. Next, a 2-D separable CMFB is applied to each of the two images for directional decomposition. The new DFB is simple to design and offers a low redundancy ratio and fine directional-frequency tiling. As an application, the BLS-GSM algorithm for image denoising is extended to use the new DFB. Experimental results show that the proposed DFB achieves better denoising performance than methods using other DFBs on images with abundant textures. (C) 2008 Elsevier B.V. All rights reserved.
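The separable (tensor-product) construction the abstract refers to can be sketched in a few lines; the 1-D taps below are illustrative toy filters, not the paper's CMFB prototypes:

```python
import numpy as np

# Tensor-product construction of a 2-D separable subband filter from two
# 1-D filters -- the construction that, per the abstract, concentrates the
# frequency support along two directions. The taps are toy examples.
h_low = np.array([1.0, 2.0, 1.0]) / 4.0      # illustrative 1-D lowpass
h_high = np.array([-1.0, 2.0, -1.0]) / 4.0   # illustrative 1-D highpass
h_2d = np.outer(h_low, h_high)               # separable 2-D subband filter
```

Because `h_2d` factors into a row filter and a column filter, its frequency response is the product of two 1-D responses, which is what concentrates energy along two directions and limits directional selectivity.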

Relevance:

30.00%

Publisher:

Abstract:

A simple, sensitive fluorescent method for detecting cyanide has been developed based on the inner filter effect (IFE) of silver nanoparticles (Ag NPs). With a high extinction coefficient and a tunable plasmon absorption feature, Ag NPs are expected to be a powerful absorber for tuning the emission of the fluorophore in IFE-based fluorescent assays. In the present work, we developed a turn-on fluorescent assay for cyanide based on the strong absorption by Ag NPs of both the excitation and emission light of an isolated fluorescence indicator. In the presence of cyanide, the absorber Ag NPs dissolve gradually, leading to recovery of the IFE-decreased emission of the fluorophore. The concentration of Ag NPs in the detection system was found to strongly affect the fluorescence response toward cyanide. Under optimum conditions, the present IFE-based approach can detect cyanide over the range 5.0 x 10^(-7) to 6.0 x 10^(-4) M with a detection limit of 2.5 x 10^(-7) M, which is much lower than that of the corresponding absorbance-based approach and compares favorably with other reported fluorescent methods.

Relevance:

30.00%

Publisher:

Abstract:

Many real-world image analysis problems, such as face recognition and hand pose estimation, involve recognizing a large number of classes of objects or shapes. Large margin methods, such as AdaBoost and Support Vector Machines (SVMs), often provide competitive accuracy rates, but at the cost of evaluating a large number of binary classifiers, making it difficult to apply such methods when thousands or millions of classes need to be recognized. This thesis proposes a filter-and-refine framework, whereby, given a test pattern, a small number of candidate classes is identified efficiently at the filter step, and computationally expensive large margin classifiers are used to evaluate these candidates at the refine step. Two different filtering methods are proposed, ClassMap and OVA-VS (One-vs.-All classification using Vector Search). ClassMap is an embedding-based method that works for both boosted classifiers and SVMs; it tends to map patterns and their associated classes close to each other in a vector space. OVA-VS maps OVA classifiers and test patterns to vectors based on the weights and outputs of the weak classifiers of the boosting scheme. At runtime, finding the strongest-responding OVA classifier becomes a classical vector search problem, for which well-known methods can be used to gain efficiency. In our experiments, the proposed methods achieve significant speed-ups, in some cases up to two orders of magnitude, compared to exhaustive evaluation of all OVA classifiers. This was achieved in hand pose recognition and face recognition systems where the number of classes ranges from 535 to 48,600.
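The filter-and-refine loop itself is simple to express. The sketch below uses toy stand-in scoring functions (assumptions for illustration, not the thesis's actual ClassMap or OVA-VS implementations):

```python
# Hypothetical sketch of a filter-and-refine classifier. Both scoring
# functions are toy stand-ins: the cheap score plays the role of an
# embedding distance, the expensive score that of an OVA SVM margin.

def cheap_filter_score(pattern, class_id):
    """Inexpensive proxy score used at the filter step."""
    return abs(pattern - class_id)

def expensive_classifier_score(pattern, class_id):
    """Costly large-margin score, evaluated only for short-listed classes."""
    return -((pattern - class_id) ** 2)

def filter_and_refine(pattern, classes, k):
    # Filter step: keep only the k most promising candidate classes.
    candidates = sorted(classes, key=lambda c: cheap_filter_score(pattern, c))[:k]
    # Refine step: run the expensive classifier on the short list only.
    return max(candidates, key=lambda c: expensive_classifier_score(pattern, c))
```

The speed-up comes from evaluating the expensive scorer on only `k` candidate classes instead of all of them; with thousands of classes, `k` can be orders of magnitude smaller than the class count.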

Relevance:

30.00%

Publisher:

Abstract:

The design of a low-loss quasi-optical beam splitter required to provide efficient diplexing of the 316.5-325.5 GHz and 349.5-358.5 GHz bands is presented. To minimise the filter insertion loss, the chosen architecture is a three-layer freestanding array of dipole slot elements. Floquet modal analysis and finite element method computer models are used to establish the geometry of the periodic structure and to predict its spectral response. Two different micromachining approaches have been employed to fabricate close-packed arrays of 460 µm long elements in the screens that form the basic building block of the 30 mm diameter multilayer frequency selective surface. Comparisons between simulated and measured transmission coefficients for the individual dichroic surfaces are used to determine the accuracy of the computer models and to confirm the suitability of the fabrication methods.

Relevance:

30.00%

Publisher:

Abstract:

Test procedures that maximally exploit the regularity of a pipelined bit-parallel IIR filter chip are described. It is shown that small modifications to the basic architecture yield significant reductions in the number of test patterns required to test such chips. The methods used allow 100% fault coverage to be achieved with fewer than 1000 test vectors for a chip with 12-bit data and coefficients.

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: To evaluate and compare the outcome of functioning filtration surgery followed by cataract surgery with posterior intraocular lens implantation, performed by either phacoemulsification or extracapsular cataract extraction (ECCE), in glaucomatous eyes. PATIENTS AND METHODS: We retrospectively evaluated the clinical course of 77 eyes (68 patients) that, after successful trabeculectomy, underwent cataract surgery by either phacoemulsification or ECCE. We determined the frequency of partial and complete failure following cataract surgery by either technique in eyes with functioning trabeculectomies. Partial failure of intraocular pressure (IOP) control after cataract extraction was defined as the need for an increased number of antiglaucoma medications or argon laser trabeculoplasty to maintain IOP ≤ 21 mm Hg. Complete failure of IOP control after cataract surgery was defined as an IOP > 21 mm Hg on at least two consecutive measurements one or more weeks apart, or the performance of additional filtration surgery. Failure rates were calculated using the Kaplan-Meier actuarial method and compared between the phacoemulsification and ECCE subgroups using the log-rank test. RESULTS: The probability of partial failure by the third postoperative year was 39.5% in the phacoemulsification subgroup and 37.3% in the ECCE subgroup; this small difference is not statistically significant (P = 0.48). The probability of complete failure by the fourth postoperative year was 12.0% in the phacoemulsification subgroup and 12.5% in the ECCE subgroup; this difference is also not statistically significant (P = 0.77). At the 6-month follow-up visit, visual acuity had improved by one or more lines in 87.0% of patients and worsened by one or more lines in 3.9%; 61% achieved visual acuity of 20/40 or better. The most frequent complication was posterior capsular opacification requiring laser capsulotomy, which occurred in 31.2% of patients. CONCLUSION: Cataract extraction by either phacoemulsification or ECCE following trabeculectomy may be associated with partial loss of the previously functioning filter and the need for more antiglaucoma medications to control IOP.

Relevance:

30.00%

Publisher:

Abstract:

Purpose: In this study, the Octavius detector 729 ionization chamber (IC) array with the Octavius 4D phantom was characterized for flattening filter (FF) and flattening filter free (FFF) static and rotational beams. The device was assessed for verification of FF and FFF RapidArc treatment plans.

Methods: The response of the detectors to field size, dose, and dose rate was assessed for 6 MV FF beams and for 6 and 10 MV FFF beams. Dosimetric and mechanical accuracy of the detector array within the Octavius 4D rotational phantom was evaluated against measurements made using semiflex and pinpoint ionization chambers, and radiochromic film. Verification FF and FFF RapidArc plans were assessed using a gamma function with 3%/3 mm and 2%/2 mm tolerances, and further analysis of these plans was undertaken using film and a second detector array with higher spatial resolution.

Results: A warm-up dose of >6 Gy was required for detector stability. Dose-rate measurements were stable across a range from 0.26 to 15 Gy/min and the dose response was linear, although the device overestimated small doses compared with pinpoint ionization chamber measurements. Output factors agreed with ionization chamber measurements to within 0.6% for square fields of side between 3 and 25 cm, and to within 1.2% for 2 x 2 cm^2 fields. The Octavius 4D phantom was found to be consistent with measurements made with radiochromic film, and the gantry angle was found to be within 0.4° of that expected during rotational deliveries. For RapidArc FF and FFF beams, >97.9% and >90% of pixels passed at 3%/3 mm and 2%/2 mm, respectively. Detector spatial resolution was observed to be a factor in determining the accurate delivery of each plan, particularly at steep dose gradients. This was confirmed using data from a second detector array with higher spatial resolution and with radiochromic film.

Conclusions: The Octavius 4D phantom with associated Octavius detector 729 ionization chamber array is a dosimetrically and mechanically stable device for pretreatment verification of FF and FFF RapidArc treatments. Further improvements may be possible through use of a detector array with higher spatial resolution (detector size and/or detector spacing). (C) 2013 American Association of Physicists in Medicine.
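The 3%/3 mm and 2%/2 mm pass rates above come from the gamma-index test. A minimal 1-D sketch of that test follows; real QA software evaluates it in 2-D or 3-D, and the global dose normalization chosen here is an assumption:

```python
import numpy as np

# Toy 1-D gamma-index test (e.g., 3%/3 mm): for each measured point, find
# the minimum combined dose-difference / distance-to-agreement metric over
# the reference profile; the point passes if that minimum is <= 1.
def gamma_pass_rate(ref, meas, spacing_mm, dose_tol=0.03, dist_tol_mm=3.0):
    positions = np.arange(len(ref)) * spacing_mm
    norm = dose_tol * ref.max()          # global normalization (an assumption)
    passed = 0
    for i, d_meas in enumerate(meas):
        dd = (ref - d_meas) / norm                     # dose-difference term
        dx = (positions - positions[i]) / dist_tol_mm  # distance term
        gamma = np.sqrt(dd**2 + dx**2).min()
        passed += gamma <= 1.0
    return 100.0 * passed / len(meas)
```

Identical profiles give a 100% pass rate; a profile whose dose differs everywhere by more than the tolerance (with no nearby agreeing point) gives 0%.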

Relevance:

30.00%

Publisher:

Abstract:

Life science research aims to continuously improve the quality and standard of human life. One of the major challenges in this area is maintaining food safety and security. A number of image processing techniques have been used to investigate the quality of food products. In this paper, we propose a new algorithm to effectively segment connected grains so that each of them can be inspected in a later processing stage. One family of existing segmentation methods is based on the idea of watersheding, and it has shown promising results in practice. However, due to the over-segmentation issue, this technique performs poorly in various situations, such as inhomogeneous backgrounds and connected targets. To solve this problem, we present a combination of two classical techniques. In the first step, a mean shift filter is used to eliminate the inhomogeneous background, with entropy used as the convergence criterion. In the second step, a color gradient algorithm is used to detect the most significant edges, and a marked watershed transform is applied to segment cluttered objects from the output of the preceding stages. The proposed framework balances execution time, usability, efficiency and segmentation quality in analyzing ring die pellets. The experimental results demonstrate that the proposed approach is effective and robust.
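As a rough illustration of the mean shift filtering step, here is a toy 1-D range-only version; real implementations (e.g., OpenCV's pyrMeanShiftFiltering) operate jointly over the spatial and color domains of a 2-D image, and the paper's entropy-based convergence criterion is not modeled:

```python
import numpy as np

# Toy 1-D mean shift filter: each value is repeatedly replaced by the mean
# of all values within the range bandwidth, so values drift toward local
# modes and an inhomogeneous signal collapses into flat regions.
def mean_shift_1d(signal, bandwidth, iters=10):
    out = signal.astype(float).copy()
    for _ in range(iters):
        new = np.empty_like(out)
        for i, v in enumerate(out):
            window = out[np.abs(out - v) <= bandwidth]  # values near v
            new[i] = window.mean()
        if np.allclose(new, out):   # simple fixed-point convergence check
            break
        out = new
    return out
```

On a signal with two well-separated value clusters, the filter converges to two flat plateaus, which is the background-flattening behavior the segmentation pipeline relies on.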

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we propose a novel finite impulse response (FIR) filter design methodology that reduces the number of operations, with the motivation of reducing power consumption and enhancing performance. The novelty of our approach lies in generating filter coefficients that conform to a given low-power architecture while meeting the given filter specifications. The proposed algorithm is formulated as a mixed integer linear programming problem that minimizes the Chebyshev error and synthesizes coefficients drawn from pre-specified alphabets. The modified coefficients can be used for low-power VLSI implementation of vector scaling operations such as FIR filtering using a computation sharing multiplier (CSHM). Simulations in 0.25 µm technology show that the CSHM FIR filter architecture can yield 55% power and 34% speed improvements compared with carry-save multiplier (CSAM) based filters.
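The idea of alphabet-conforming coefficients can be illustrated with a simple sketch: snap ideal taps to a pre-specified alphabet (here, signed powers of two, an assumed choice) and filter with the result. This is an illustration only; the paper's MILP formulation, which picks the optimal conforming coefficients, is not reproduced:

```python
import numpy as np

def quantize_to_alphabet(coeffs, alphabet):
    """Snap each ideal coefficient to its nearest alphabet value."""
    alphabet = np.asarray(sorted(alphabet))
    idx = np.abs(alphabet[None, :] - np.asarray(coeffs)[:, None]).argmin(axis=1)
    return alphabet[idx]

def fir_filter(x, h):
    """Direct-form FIR: y[n] = sum_k h[k] * x[n-k]."""
    return np.convolve(x, h)[: len(x)]

# Assumed alphabet: 0 and signed powers of two (cheap to multiply by in VLSI).
alphabet = [s * 2.0**-e for s in (-1, 1) for e in range(5)] + [0.0]
ideal = np.array([0.11, 0.26, 0.52, 0.26, 0.11])   # hypothetical lowpass taps
h_q = quantize_to_alphabet(ideal, alphabet)         # power-of-two taps
```

Restricting taps to such an alphabet is what lets a shared-computation multiplier replace full multipliers with shifts and adds; the MILP in the paper additionally keeps the Chebyshev approximation error within the filter specification.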

Relevance:

30.00%

Publisher:

Abstract:

The multi-bit trie is a popular approach to performing longest-prefix matching for packet classification. However, it can require long lookup times and consume memory inefficiently. This paper presents an in-depth study of different variations of the multi-bit trie for IP address lookup, with the main aim of finding data structures that reduce memory usage. The proposed approach has been implemented in two variants of the label method. Both variants give better results in terms of lookup speed, update time and memory consumption.
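A minimal multi-bit trie with a fixed 4-bit stride can sketch the data structure under discussion; the paper's label-based memory compression is not modeled, and prefixes are written as bit strings for clarity:

```python
STRIDE = 4  # bits consumed per trie level

def new_node():
    return {"children": {}, "value": None}

def insert(root, prefix, value):
    """Insert a prefix (bit string, e.g. '110011'). Tails shorter than the
    stride are expanded into every stride-wide chunk they cover (standard
    prefix expansion); longer prefixes take precedence over shorter ones."""
    node, plen = root, len(prefix)
    while len(prefix) > STRIDE:
        chunk, prefix = prefix[:STRIDE], prefix[STRIDE:]
        node = node["children"].setdefault(chunk, new_node())
    pad = STRIDE - len(prefix)
    chunks = [prefix] if pad == 0 else [prefix + format(i, "0%db" % pad)
                                        for i in range(2 ** pad)]
    for chunk in chunks:
        child = node["children"].setdefault(chunk, new_node())
        if child["value"] is None or child["value"][0] <= plen:
            child["value"] = (plen, value)   # longer prefix wins

def lookup(root, addr):
    """Walk stride-sized chunks, remembering the last value seen (LPM)."""
    node, best = root, None
    while len(addr) >= STRIDE:
        chunk, addr = addr[:STRIDE], addr[STRIDE:]
        node = node["children"].get(chunk)
        if node is None:
            break
        if node["value"] is not None:
            best = node["value"][1]
    return best

root = new_node()
insert(root, "1100", "A")      # a /4 prefix
insert(root, "110011", "B")    # a /6 prefix, expanded at the second level
```

Each lookup touches at most W/STRIDE nodes for a W-bit address, which is the speed advantage of the multi-bit trie; the memory cost of prefix expansion is exactly what label-style compression schemes aim to reduce.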

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a recursive filter algorithm is developed to deal with the state estimation problem for power systems with quantized nonlinear measurements. The measurements from both the remote terminal units and the phasor measurement unit are subject to quantization described by a logarithmic quantizer. Attention is focused on the design of a recursive filter such that, in the simultaneous presence of nonlinear measurements and quantization effects, an upper bound for the estimation error covariance is guaranteed and subsequently minimized. Instead of using the traditional approximation methods in nonlinear estimation, which simply ignore the linearization errors, we treat both the linearization and quantization errors as norm-bounded uncertainties in the algorithm development so as to improve the performance of the estimator. For the power system with these uncertainties, a filter is designed in the framework of robust recursive estimation, and the developed filter algorithm is tested on an IEEE benchmark power system to demonstrate its effectiveness.
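For concreteness, a logarithmic quantizer of the kind referred to above has levels forming a geometric sequence u_i = rho^i * u0, so the relative quantization error is bounded (a sector bound), which is what allows it to be treated as a norm-bounded uncertainty. The density rho and u0 below are illustrative choices, not the paper's values:

```python
import math

# Sketch of a logarithmic quantizer: map each input to the nearest level of
# the geometric sequence u_i = rho**i * u0, preserving sign. Rounding in the
# log domain keeps the *relative* error bounded, unlike a uniform quantizer.
def log_quantize(v, rho=0.8, u0=1.0):
    if v == 0:
        return 0.0
    sign, mag = math.copysign(1.0, v), abs(v)
    i = round(math.log(mag / u0, rho))   # index of the nearest level
    return sign * u0 * rho**i
```

Because levels are denser near zero, small measurements are quantized finely and large ones coarsely, matching how quantization effects are modeled in the robust filter design.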

Relevance:

30.00%

Publisher:

Abstract:

This paper presents the design analysis of novel tunable narrow-band bandpass sigma-delta modulators, which can achieve concurrent multiple noise-shaping for multi-tone input signals. Four different design methodologies based on the noise transfer functions of comb filters, slink filters, multi-notch filters and fractional delay comb filters are applied for the design of these multiple-band sigma-delta modulators. The latter approach utilises conventional comb filters in conjunction with FIR, or allpass IIR fractional delay filters, to deliver the desired nulls for the quantisation noise transfer function. Detailed simulation results show that FIR fractional delay comb filter-based sigma-delta modulators tune accurately to most centre frequencies, but suffer from degraded resolution at frequencies close to Nyquist. However, superior accuracies are obtained from their allpass IIR fractional delay counterpart at the expense of a slight shift in noise-shaping bands at very high frequencies. The merits and drawbacks of each technique for the various sigma-delta topologies are assessed in terms of in-band signal-to-noise ratios, accuracy of tunability and coefficient complexity for ease of implementation.
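The comb-filter noise transfer function underlying the first design methodology can be checked numerically. NTF(z) = 1 - z^(-N) places nulls at multiples of fs/N, which is why cascading comb (and fractional-delay) sections can put quantization-noise notches at the desired tones; N = 8 here is an arbitrary illustration:

```python
import numpy as np

# Magnitude of the comb noise transfer function NTF(z) = 1 - z^-N on the
# unit circle: nulls (noise-shaping notches) appear at f = k * fs / N.
N = 8
f = np.linspace(0.0, 0.5, 4001)       # normalized frequency f/fs
z = np.exp(2j * np.pi * f)
ntf_mag = np.abs(1 - z**-N)
nulls = f[ntf_mag < 1e-3]             # frequencies where noise is removed
```

Moving a null to an arbitrary centre frequency requires a non-integer effective delay, which is exactly the role of the FIR or allpass IIR fractional delay filters discussed above.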

Relevance:

30.00%

Publisher:

Abstract:

Discrete data representations are necessary, or at least convenient, in many machine learning problems. While feature selection (FS) techniques aim at finding relevant subsets of features, the goal of feature discretization (FD) is to find concise (quantized) data representations, adequate for the learning task at hand. In this paper, we propose two incremental methods for FD. The first method belongs to the filter family, in which the quality of the discretization is assessed by a (supervised or unsupervised) relevance criterion. The second method is a wrapper, where discretized features are assessed using a classifier. Both methods can be coupled with any static (unsupervised or supervised) discretization procedure and can be used to perform FS as pre-processing or post-processing stages. The proposed methods attain efficient representations suitable for binary and multi-class problems with different types of data, being competitive with existing methods. Moreover, using well-known FS methods with the features discretized by our techniques leads to better accuracy than with the features discretized by other methods or with the original features. (C) 2013 Elsevier B.V. All rights reserved.
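The filter-style incremental idea can be sketched as follows: bits are added to a feature's quantizer one at a time, and a relevance criterion (here, mutual information with the labels, an assumed choice) decides when to stop. This is an illustration of the idea, not the paper's actual algorithm:

```python
import numpy as np

def mutual_information(codes, labels):
    """I(codes; labels) in bits, from empirical joint frequencies."""
    mi = 0.0
    for c in np.unique(codes):
        for y in np.unique(labels):
            pxy = np.mean((codes == c) & (labels == y))
            px, py = np.mean(codes == c), np.mean(labels == y)
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * py))
    return mi

def discretize_incrementally(x, labels, max_bits=4, min_gain=1e-3):
    """Grow an equal-width quantizer bit by bit; stop when the relevance
    criterion (mutual information with the labels) stops improving."""
    best_codes, best_mi = np.zeros_like(labels), 0.0
    for bits in range(1, max_bits + 1):
        edges = np.linspace(x.min(), x.max(), 2**bits + 1)[1:-1]
        codes = np.digitize(x, edges)
        mi = mutual_information(codes, labels)
        if mi - best_mi < min_gain:      # extra bits add no relevance: stop
            break
        best_codes, best_mi = codes, mi
    return best_codes, best_mi
```

A wrapper variant would replace the mutual-information criterion with the accuracy of a classifier trained on the discretized feature, at higher computational cost.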