855 results for adaptive thresholding
Abstract:
The standard separable two-dimensional wavelet transform has achieved great success in image denoising applications due to its sparse representation of images. However, it fails to capture efficiently the anisotropic geometric structures in images, such as edges and contours, because they intersect too many wavelet basis functions and lead to a non-sparse representation. In this paper, a novel denoising scheme based on a multidirectional and anisotropic wavelet transform, called the directionlet transform, is presented. Wavelet-domain image denoising is extended to the directionlet domain so that image features concentrate on fewer coefficients and more effective thresholding becomes possible. The image is first segmented, and the dominant direction of each segment is identified to build a directional map. Then, according to the directional map, the directionlet transform is taken along the dominant direction of each selected segment. The decomposed images with directional energy are used for scale-dependent, subband-adaptive optimal threshold computation based on the SURE risk. This threshold is then applied to all sub-bands except the LLL subband. The thresholded sub-bands, together with the unprocessed first sub-band (LLL), are given as input to the inverse directionlet transform to obtain the denoised image. Experimental results show that the proposed method outperforms standard wavelet-based denoising methods in terms of numerical and visual quality.
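The subband-thresholding step can be illustrated in isolation. The sketch below applies soft-thresholding with the VisuShrink universal threshold as a simple stand-in for the SURE-optimal threshold the paper computes (the directionlet decomposition itself is omitted; all coefficient values are hypothetical):

```python
import math

def soft_threshold(coeffs, t):
    """Soft-threshold a list of detail coefficients: shrink toward zero by t."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def universal_threshold(coeffs, sigma):
    """VisuShrink universal threshold sigma*sqrt(2 ln N) -- a simple
    stand-in for the SURE-optimal threshold used in the paper."""
    n = len(coeffs)
    return sigma * math.sqrt(2.0 * math.log(n))

# Example: threshold one detail subband, leaving the approximation band alone.
detail = [0.1, -4.0, 0.05, 2.5, -0.2, 6.0, 0.0, -0.3]
t = universal_threshold(detail, sigma=0.5)
denoised = soft_threshold(detail, t)
```

Small coefficients (mostly noise) are zeroed while large, feature-carrying coefficients are only shrunk by `t` — the reason a sparser (more directional) representation makes thresholding more effective.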
Abstract:
Garment information tracking is required for clean room garment management. In this paper, we present a robust camera-based system that applies Optical Character Recognition (OCR) techniques to garment label recognition. In the system, a camera is used for image capture; an adaptive thresholding algorithm is employed to generate binary images; Connected Component Labelling (CCL) is then adopted for object detection in the binary image as part of finding the ROI (Region of Interest); Artificial Neural Networks (ANNs) with the BP (Back Propagation) learning algorithm are used for digit recognition; and finally the results are verified against a system database. The system has been tested, and the results show that it is capable of coping with variations in lighting, digit twisting, background complexity, and font orientation. The system's digit recognition rate has met the design requirement, and it achieved real-time, error-free garment information tracking during testing.
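The binarization and CCL stages can be sketched generically. The following is a minimal local-mean adaptive threshold followed by BFS-based connected component labelling; the window size, offset and tiny test image are assumptions, not the paper's actual parameters:

```python
from collections import deque

def adaptive_threshold(img, win=1, c=0):
    """Binarize: a pixel is foreground if darker than the mean of its
    (2*win+1)^2 neighborhood minus offset c (mean variant; the paper's
    exact thresholding rule is not specified here)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - win), min(h, y + win + 1))
                    for nx in range(max(0, x - win), min(w, x + win + 1))]
            mean = sum(vals) / len(vals)
            out[y][x] = 1 if img[y][x] < mean - c else 0
    return out

def label_components(binary):
    """4-connected component labelling (CCL) by breadth-first search."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not labels[y][x]:
                next_label += 1
                q = deque([(y, x)])
                labels[y][x] = next_label
                while q:
                    cy, cx = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and \
                           binary[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
    return next_label, labels

# Two dark "digit strokes" on a bright background (hypothetical values)
img = [[200, 200, 200, 200, 200],
       [200,  40,  40, 200, 200],
       [200,  40, 200, 200,  30],
       [200, 200, 200, 200,  30]]
binary = adaptive_threshold(img)
n, labels = label_components(binary)   # n separate objects found
```

Because the threshold is computed per neighborhood, a lighting gradient across the label shifts the local mean along with the pixel values — the property that makes the system robust to lighting variance.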
Abstract:
Near-infrared spectroscopy (NIRS) enables the non-invasive measurement of changes in hemodynamics and oxygenation in tissue. Changes in light coupling due to movement of the subject can cause movement artifacts (MAs) in the recorded signals. Several methods have been developed that facilitate the detection and reduction of MAs in the data. However, due to fixed parameter values (e.g., a global threshold), none of these methods is well suited to long-term (i.e., hours-long) recordings, or they are not time-effective when applied to large datasets. We aimed to overcome these limitations through automation, i.e., data-adaptive thresholding specifically designed for long-term measurements, and by introducing a stable long-term signal reconstruction. Our new technique (“acceleration-based movement artifact reduction algorithm”, AMARA) combines two methods: the “movement artifact reduction algorithm” (MARA, Scholkmann et al. Phys. Meas. 2010, 31, 649–662) and the “accelerometer-based motion artifact removal” (ABAMAR, Virtanen et al. J. Biomed. Opt. 2011, 16, 087005). We describe AMARA in detail and report the successful validation of the algorithm using empirical NIRS data measured over the prefrontal cortex in adolescents during sleep. In addition, we compared the performance of AMARA to that of MARA and ABAMAR based on validation data.
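The core idea of a data-adaptive threshold (as opposed to a fixed global one) can be illustrated with a robust median/MAD rule on an acceleration-like signal. This is a deliberately simplified sketch, not AMARA itself; the constant `k` is an assumed tuning parameter:

```python
import statistics

def adaptive_artifact_mask(signal, k=3.0):
    """Flag samples whose absolute deviation from the median exceeds
    k * MAD -- a threshold derived from the data itself, so it adapts
    to each recording instead of relying on one fixed global value."""
    med = statistics.median(signal)
    mad = statistics.median(abs(x - med) for x in signal) or 1e-12
    return [abs(x - med) > k * mad for x in signal]

clean = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1]
noisy = clean[:4] + [5.0] + clean[4:]     # one movement spike inserted
mask = adaptive_artifact_mask(noisy)      # True only at the spike
```

Because median and MAD are recomputed from the data, the same code flags artifacts correctly whether the baseline signal is large or small — the property needed for unattended long-term recordings.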
Abstract:
This work presents an alternative to the process of classifying the central segregation defect in steel samples, using the digital images generated during the Baumann test. The proposed algorithm aims to combine digital image processing techniques with expert knowledge about the central segregation defect in order to classify the reference defect. The implemented algorithm includes the identification and segmentation of the segregated line through the Hough transform and adaptive thresholding. Additionally, the algorithm proposes a mapping of the central segregation attributes onto the different degrees of defect severity, according to continuity and intensity criteria. The mapping was performed by analyzing individual characteristics, such as length, width and area, of the segmented elements that make up the segregated line. The algorithm's performance was evaluated at two specific points, according to its implementation phase. For this evaluation, 255 images of real samples from two steel plants, distributed across the different severity degrees, were analyzed. The results of the first implementation phase show that the identification of the segregated line achieves an accuracy of 93%. The classifications derived from the mapping onto the defect criticality classes, in the second implementation phase, achieve an accuracy of 92% for the continuity criterion and 68% for the intensity criterion.
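The Hough-transform step for locating the segregated line can be sketched as follows. The accumulator resolution and the synthetic horizontal test line are illustrative assumptions; the severity mapping itself is not shown:

```python
import math
from collections import Counter

def hough_lines(points, n_theta=180, rho_step=1.0):
    """Vote in (theta, rho) space for every foreground point; the
    strongest accumulator cell gives the dominant line through the
    point set (here: the candidate segregated line)."""
    acc = Counter()
    for x, y in points:
        for i in range(n_theta):
            theta = math.pi * i / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(i, round(rho / rho_step))] += 1
    (i, r), votes = acc.most_common(1)[0]
    return math.pi * i / n_theta, r * rho_step, votes

# Synthetic horizontal band y = 5, as a stand-in for a segregation line
pts = [(x, 5) for x in range(10)]
theta, rho, votes = hough_lines(pts)
```

Every point on the line votes for the same `(theta, rho)` cell, so the winning cell collects one vote per point — which is why the transform is robust to the gaps a thresholded segregation line typically has.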
Abstract:
In this paper, space adaptivity is introduced to control the error in the numerical solution of hyperbolic systems of conservation laws. The reference numerical scheme is a new version of the discontinuous Galerkin method, which uses an implicit diffusive term in the direction of the streamlines for stability purposes. The decision whether to refine or coarsen the grid at a given location is taken according to the magnitude of the wavelet coefficients, which are indicators of the local smoothness of the numerical solution. Numerical solutions of the nonlinear Euler equations illustrate the efficiency of the method. © Springer 2005.
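The refinement criterion can be illustrated in one dimension with Haar detail coefficients on a vector of cell averages; the tolerance is an assumed parameter, and the actual scheme operates on the multi-dimensional Galerkin solution:

```python
def haar_details(u):
    """One level of the Haar transform: pairwise averages and details.
    A large |detail| signals a non-smooth region of the solution."""
    avg = [(u[2 * i] + u[2 * i + 1]) / 2 for i in range(len(u) // 2)]
    det = [(u[2 * i] - u[2 * i + 1]) / 2 for i in range(len(u) // 2)]
    return avg, det

def refine_flags(u, tol=0.1):
    """Refine where the Haar detail exceeds tol; coarsen elsewhere --
    the wavelet-based adaptation criterion, with tol assumed."""
    _, det = haar_details(u)
    return [abs(d) > tol for d in det]

# Smooth ramp with a shock-like jump between cells 4 and 5
u = [0.0, 0.01, 0.02, 0.03, 0.04, 1.0, 1.01, 1.02]
flags = refine_flags(u)   # only the pair straddling the jump is flagged
```

In the smooth regions the details are tiny, so the grid can be coarsened there and refined only around the discontinuity — exactly where a shock-capturing scheme needs resolution.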
Abstract:
Nucleoside hydrolases (NHs) show homology among parasitic protozoa, fungi and bacteria. They are vital protagonists in the establishment of early infection and, therefore, are excellent candidates for pathogen recognition by adaptive immune responses. Immune protection against NHs would prevent disease at the early infection stage of several pathogens. We have identified the domain of the NH of L. donovani (NH36) responsible for its immunogenicity and protective efficacy against murine visceral leishmaniasis (VL). Using recombinantly generated peptides covering the whole NH36 sequence, with saponin, we demonstrate that protection against L. chagasi is related to its C-terminal domain (amino acids 199-314) and is mediated mainly by a CD4+ T cell driven response, with a lower contribution of CD8+ T cells. Immunization with this peptide exceeds by 36.73 +/- 12.33% the protective response induced by the cognate NH36 protein. Increases in IgM, IgG2a, IgG1 and IgG2b antibodies, CD4+ T cell proportions, IFN-gamma secretion, ratios of IFN-gamma/IL-10-producing CD4+ and CD8+ T cells, and percentages of antibody-binding inhibition by synthetic predicted epitopes were detected in F3-vaccinated mice. The increases in DTH and in ratios of TNF-alpha/IL-10-producing CD4+ cells were, however, the strongest correlates of protection, which was confirmed by in vivo depletion with monoclonal antibodies, algorithm-predicted CD4 and CD8 epitopes, and a pronounced, long-lasting decrease in parasite load (90.5-88.23%; p = 0.011). No decrease in parasite load was detected after vaccination with the N-domain of NH36, in spite of the induction of IFN-gamma/IL-10 expression by CD4+ T cells after challenge. Both peptides reduced the size of footpad lesions, but only the C-domain reduced the parasite load of mice challenged with L. amazonensis.
The identification of the target of the immune response to NH36 represents a basis for the rational development of a bivalent vaccine against leishmaniasis and for multivalent vaccines against NH-dependent pathogens.
Abstract:
The present study investigated the effects of 8 weeks of resistance training (RT) on hemodynamics, ventricular function, cardiac myosin ATPase activity, and contractility of papillary muscles in rats. Groups: control (CO), electrically stimulated (ES), trained at 60% (TR 60%) and at 75% (TR 75%) of one repetition maximum (1RM). Exercise protocol: 5 sets of 12 repetitions at 60 or 75% of 1RM, 5 times per week. The CO and ES groups had similar values for the parameters analyzed (P > 0.05). Blood pressure (BP), heart rate (13%) and left ventricular systolic pressure (LVSP, 13%) decreased, and cardiac myosin ATPase activity increased, in the TR 75% group (90%, P < 0.05). The contractile performance of papillary muscles increased in trained rats (P < 0.05). Eight weeks of RT was thus associated with a lowering of resting BP, heart rate and LVSP, improved contractility of the papillary muscle, and an increase in cardiac myosin ATPase activity in rats.
Abstract:
The adaptive process in motor learning was examined in terms of the effects of varying amounts of constant practice performed before random practice. Participants pressed five response keys sequentially, the last one coincident with the lighting of a final visual stimulus provided by a complex coincident-timing apparatus. Different visual stimulus speeds were used during the random practice. 33 children (M age = 11.6 yr.) were randomly assigned to one of three experimental groups: constant-random, constant-random 33%, and constant-random 66%. The constant-random group practiced constantly until reaching a performance stabilization criterion of three consecutive trials within 50 msec of error. The other two groups received additional constant practice amounting to 33% and 66%, respectively, of the number of trials needed to achieve the stabilization criterion. All three groups then performed 36 trials under random practice; in the adaptation phase, they practiced at a visual stimulus speed different from that adopted in the stabilization phase. Global performance measures were absolute, constant, and variable errors, and movement pattern was analyzed by relative timing and overall movement time. There was no group difference in the global performance measures or in overall movement time. However, differences between the groups were observed in movement pattern, since the constant-random 66% group changed its relative timing performance in the adaptation phase.
Abstract:
This paper presents a novel adaptive control scheme, with improved convergence rate, for the equalization of harmonic disturbances such as engine noise. First, modifications for improving the convergence speed of the standard filtered-X LMS control are described. Equalization capabilities are then implemented, allowing the independent tuning of harmonics. Eventually, by providing the desired order-vs.-engine-speed profiles, the pursued sound quality attributes can be achieved. The proposed control scheme is first demonstrated with a simple secondary path model and then experimentally validated with the aid of a vehicle mockup excited with engine noise. The engine excitation is provided by a real-time, sound-quality-equivalent engine simulator. Stationary and transient engine excitations are used to assess the control performance. The results reveal that the proposed controller is capable of large order-level reductions (up to 30 dB) for stationary excitation, which allows a comfortable margin for equalization. The same holds for slow run-ups (> 15 s) thanks to the improved convergence rate. This margin, however, gets narrower with shorter run-ups (<= 10 s). (c) 2010 Elsevier Ltd. All rights reserved.
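The underlying filtered-X LMS update can be sketched for a single harmonic. The sketch below uses a pure-gain secondary path and a quadrature reference, which is a deliberate simplification of the paper's secondary path model and of its convergence-speed modifications:

```python
import math

def fxlms_tone(freq, fs, secondary_gain, n_steps, mu=0.01):
    """Minimal filtered-X LMS loop for one harmonic: a 2-tap adaptive
    filter on a quadrature reference drives the anti-noise through a
    purely gain-type secondary path (an assumed simplification)."""
    w = [0.0, 0.0]                                   # adaptive weights
    errs = []
    for n in range(n_steps):
        x = [math.sin(2 * math.pi * freq * n / fs),  # reference (engine order)
             math.cos(2 * math.pi * freq * n / fs)]
        d = math.sin(2 * math.pi * freq * n / fs)    # disturbance at error mic
        y = w[0] * x[0] + w[1] * x[1]                # controller output
        e = d + secondary_gain * y                   # residual error
        fx = [secondary_gain * x[0],                 # filtered reference
              secondary_gain * x[1]]
        w = [w[0] - mu * e * fx[0],                  # LMS weight update
             w[1] - mu * e * fx[1]]
        errs.append(e)
    return errs

errs = fxlms_tone(freq=100.0, fs=8000.0, secondary_gain=1.0,
                  n_steps=4000, mu=0.05)
```

For equalization rather than full cancellation, `d` would be replaced by the difference between the measured order level and its desired target, so the loop drives each harmonic to the prescribed profile instead of to zero.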
Abstract:
This work presents a critical analysis of methodologies for evaluating the effective (or generalized) electromechanical coupling coefficient (EMCC) of structures with piezoelectric elements. First, a review of several existing methodologies to evaluate material and effective EMCC is presented. To illustrate the methodologies, a comparison is made between numerical, analytical and experimental results for two simple structures: a cantilever beam with bonded extension piezoelectric patches and a simply supported sandwich beam with an embedded shear piezoceramic. An analysis of the electric charge cancellation effect on the effective EMCC observed in long piezoelectric patches is performed. It confirms the importance of enforcing the electrodes' equipotentiality condition in the finite element model. The results also indicate that smaller (segmented) and independent piezoelectric patches could be more interesting for energy conversion efficiency. Then, parametric analyses and optimization are performed for a cantilever sandwich beam with several embedded shear piezoceramic patches. Results indicate that, to fully benefit from the higher material coupling of shear piezoceramic patches, attention must be paid to the configuration design so that the shear strains in the patches are maximized. In particular, effective square EMCC values higher than 1% were obtained by embedding nine well-spaced short piezoceramic patches in an aluminum/foam/aluminum sandwich beam.
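One common frequency-based estimate of the effective EMCC among the methodologies reviewed compares the short-circuit and open-circuit natural frequencies of a mode (conventions differ on the normalization, and the frequencies below are hypothetical):

```python
def effective_emcc(f_short, f_open):
    """Effective square EMCC from the short-circuit (electrodes shorted)
    and open-circuit natural frequencies of one vibration mode:
        K^2 = (f_oc^2 - f_sc^2) / f_oc^2
    (some authors normalize by f_sc^2 instead)."""
    return (f_open ** 2 - f_short ** 2) / f_open ** 2

# Hypothetical modal frequencies (Hz) for a sandwich-beam bending mode
k2 = effective_emcc(f_short=100.0, f_open=100.6)   # ~0.012, i.e. above 1%
```

The stiffer open-circuit condition raises the modal frequency, and the size of that shift measures how much strain energy the patch converts to electrical energy — the quantity the paper's "effective square EMCC values higher than 1%" refers to.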
Abstract:
This paper formulates and investigates the application of various nonlinear H(infinity) control methods to a free-floating space manipulator subject to parametric uncertainties and external disturbances. From a tutorial perspective, a model-based approach and adaptive procedures based on linear parametrization, neural networks and fuzzy systems are covered. A comparative study is conducted based on experimental implementations performed with an actual underactuated fixed-base planar manipulator which is, following the DEM concept, dynamically equivalent to a free-floating space manipulator. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
This paper presents an Adaptive Maximum Entropy (AME) approach for modeling biological species. The Maximum Entropy algorithm (MaxEnt) is one of the most widely used methods for modeling the geographical distribution of biological species. The approach presented here is an alternative to the classical algorithm: instead of using the same set of features throughout training, AME tries to insert or remove a single feature at each iteration. The aim is to reach convergence faster without degrading the performance of the generated models. Preliminary experiments showed improvements in both accuracy and execution time. Comparisons with other algorithms are beyond the scope of this paper, and several lines of research are proposed as future work.
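The single-feature insert/remove idea can be sketched as a greedy search skeleton. This illustrates only the iteration scheme, with a toy score function and hypothetical feature gains, not the AME implementation:

```python
def adaptive_feature_search(features, score, max_iter=20):
    """Skeleton of the insert/remove idea: at each iteration try toggling
    one feature (adding it if absent, dropping it if present) and keep
    the move only if the model score improves; stop at convergence."""
    active = set()
    best = score(active)
    for _ in range(max_iter):
        improved = False
        for f in features:
            trial = active ^ {f}          # toggle: insert or remove f
            s = score(trial)
            if s > best:
                active, best, improved = trial, s, True
                break                     # one feature change per iteration
        if not improved:                  # no single toggle helps: converged
            break
    return active, best

# Toy score: features 'a' and 'b' help the model, 'c' hurts (assumed values)
gains = {"a": 0.4, "b": 0.3, "c": -0.2}
active, best = adaptive_feature_search(list("abc"),
                                       lambda s: sum(gains[f] for f in s))
```

Training touches one feature per iteration instead of refitting the full feature set, which is where the speed-up over the classical all-features run would come from.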
Abstract:
This paper contains a new proposal for the definition of the fundamental query operation under the Adaptive Formalism, one capable of locating functional nuclei from descriptions of their semantics. To demonstrate the method's applicability, an implementation of the query procedure constrained to a specific class of devices is shown, and its asymptotic computational complexity is discussed.
Abstract:
This paper presents a free software tool that supports next-generation mobile communications through the automatic generation of neural network models of electronic components and devices. The tool enables the creation, training, validation and simulation of a model directly from measurements made on the device of interest, using an interface designed entirely for non-experts in neural modeling. The resulting model can be exported automatically to a traditional circuit simulator to test different scenarios.
Abstract:
This work deals with the problem of minimizing the waste of space that occurs in the rotational placement of a set of irregular two-dimensional items inside a two-dimensional container. The problem is approached with a heuristic based on Simulated Annealing (SA) with an adaptive neighborhood. The objective function is evaluated in a constructive approach, in which the items are placed sequentially. The placement is governed by three types of parameters: the sequence of placement, the rotation angle and the translation. The rotation and translation applied to each polygon are cyclic continuous parameters, while the sequence of placement defines a combinatorial problem; it is therefore necessary to control both cyclic continuous and discrete parameters. Approaches described in the literature deal with only one type of parameter (sequence of placement or translation). In the proposed SA algorithm, the sensitivity of each continuous parameter is evaluated at each iteration, increasing the number of accepted solutions. The sensitivity of each parameter is associated with its probability distribution in the definition of the next candidate.
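The adaptive-neighborhood idea can be sketched with per-parameter step sizes adjusted from acceptance rates (a Corana-style rule standing in for the sensitivity/probability-distribution control described above; the cooling schedule, adaptation constants and toy objective are all assumptions):

```python
import math
import random

def sa_adaptive(f, x0, step0, n_iter=2000, t0=1.0, seed=0):
    """Simulated annealing with an adaptive neighborhood: each continuous
    parameter keeps its own step size, widened when its moves are often
    accepted and narrowed otherwise."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best_x, best_f = list(x), fx
    steps = list(step0)
    accepted = [0] * len(x)
    for it in range(n_iter):
        t = t0 * 0.995 ** it                        # geometric cooling
        i = it % len(x)                             # perturb one parameter
        cand = list(x)
        cand[i] += rng.uniform(-steps[i], steps[i])
        fc = f(cand)
        # Metropolis acceptance rule
        if fc < fx or rng.random() < math.exp((fx - fc) / max(t, 1e-12)):
            x, fx = cand, fc
            accepted[i] += 1
            if fx < best_f:
                best_x, best_f = list(x), fx
        if (it + 1) % (20 * len(x)) == 0:           # adapt each step size
            for j in range(len(x)):
                rate = accepted[j] / 20
                if rate > 0.6:
                    steps[j] *= 1.5                 # explore more widely
                elif rate < 0.4:
                    steps[j] *= 0.7                 # tighten the neighborhood
            accepted = [0] * len(x)
    return best_x, best_f

# Toy stand-in for the placement objective: a bowl in (angle, translation)
best_x, best_f = sa_adaptive(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                             x0=[0.0, 0.0], step0=[1.0, 1.0])
```

A full placement heuristic would replace the toy objective with the constructive waste evaluation and handle the discrete placement sequence with a separate swap move, but the per-parameter step adaptation shown here is the mechanism that raises the acceptance rate for the cyclic continuous parameters.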