917 results for post-processing method
Abstract:
In this thesis, a new algorithm is proposed to segment the foreground of a fingerprint from the image under consideration. The algorithm uses three features: mean, variance, and coherence. Based on these features, a rule system is built to help the algorithm segment the image efficiently. In addition, the proposed algorithm combines split-and-merge with a modified Otsu method. Enhancement techniques such as Gaussian filtering and histogram equalization are applied to improve the quality of the image. Finally, a post-processing technique is implemented to counter undesirable effects in the segmented image. Fingerprint recognition is one of the oldest recognition systems among biometric techniques. Everyone has a unique and unchangeable fingerprint. Owing to this uniqueness and distinctness, fingerprint identification has been used in many applications for a long time. A fingerprint image is a pattern consisting of two regions, foreground and background. The foreground contains all the important information needed by automatic fingerprint recognition systems. The background, however, is a noisy region that contributes to the extraction of false minutiae. To avoid extracting false minutiae, several steps should be followed, such as preprocessing and enhancement. One of these steps is the transformation of the fingerprint image from a gray-scale image to a black-and-white image; this transformation is called segmentation or binarization. The aim of fingerprint segmentation is to separate the foreground from the background. Due to the nature of fingerprint images, segmentation is an important and challenging task. The proposed algorithm is applied to the FVC2000 database. Manual examination by human experts shows that the proposed algorithm provides efficient segmentation results, demonstrated in diverse experiments.
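As a hedged illustration of the feature-extraction step described above, the sketch below computes block-wise mean, variance, and gradient coherence for a grayscale fingerprint image. The coherence formulation (from the gradient covariance) is a common choice in the fingerprint literature, and the rule thresholds `t_var` and `t_coh` are invented placeholders, not values or rules from the thesis.

```python
import numpy as np

def block_features(img, block=16):
    """Per-block mean, gray-level variance, and gradient coherence for a
    grayscale fingerprint image. Coherence in [0, 1] is derived from the
    gradient covariance; values near 1 indicate strongly oriented
    (ridge-like) texture."""
    img = img.astype(float)
    gy, gx = np.gradient(img)
    feats = []
    for i in range(0, img.shape[0] - block + 1, block):
        for j in range(0, img.shape[1] - block + 1, block):
            sl = (slice(i, i + block), slice(j, j + block))
            gxx, gyy = (gx[sl] ** 2).sum(), (gy[sl] ** 2).sum()
            gxy = (gx[sl] * gy[sl]).sum()
            coh = (np.hypot(gxx - gyy, 2 * gxy) / (gxx + gyy)
                   if gxx + gyy > 0 else 0.0)
            feats.append((i, j, img[sl].mean(), img[sl].var(), coh))
    return feats

def is_foreground(mean, var, coh, t_var=150.0, t_coh=0.4):
    # Hypothetical rule: foreground blocks show high ridge/valley
    # contrast (variance) and clearly oriented texture (coherence).
    # The thesis builds a richer rule system; thresholds here are invented.
    return var > t_var and coh > t_coh

# Example on a synthetic image
img = (np.random.default_rng(0).random((128, 128)) * 255).astype(np.uint8)
for i, j, m, v, c in block_features(img)[:3]:
    print(i, j, is_foreground(m, v, c))
```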
Abstract:
Parkinson's disease (PD) is an increasingly common neurological disorder in an aging society. The motor and non-motor symptoms of PD advance with disease progression and occur with varying frequency and duration. In order to assess the full extent of a patient's condition, repeated assessments are necessary to adjust medical prescriptions. In clinical studies, symptoms are assessed using the unified Parkinson's disease rating scale (UPDRS). On the one hand, subjective rating using the UPDRS relies on clinical expertise; on the other hand, it requires the physical presence of patients in clinics, which implies high logistical costs. Another limitation of clinical assessment is that observation in hospital may not accurately represent a patient's situation at home. For these reasons, the practical frequency of tracking PD symptoms may under-represent the true time scale of PD fluctuations and may result in an overall inaccurate assessment. Current technologies for at-home PD treatment are based on data-driven approaches for which the interpretation and reproduction of results are problematic. The overall objective of this thesis is to develop and evaluate unobtrusive computer methods for enabling remote monitoring of patients with PD. It investigates novel signal- and image-processing techniques, based on first-principles and data-driven models, for the extraction of clinically useful information from audio recordings of speech (texts read aloud) and video recordings of gait and finger-tapping motor examinations. The aim is to map between PD symptom severities estimated using the novel computer methods and clinical ratings based on UPDRS part III (motor examination). A web-based test battery system consisting of self-assessment of symptoms and motor function tests was previously constructed for a touch-screen mobile device. A comprehensive speech framework has been developed for this device to analyze text-dependent running speech by: (1) extracting novel signal features that represent PD deficits in each individual component of the speech system, (2) mapping between clinical ratings and feature estimates of speech symptom severity, and (3) classifying between UPDRS part-III severity levels using speech features and statistical machine learning tools. A novel speech processing method called cepstral separation difference showed a stronger ability to classify between speech symptom severities than existing features of PD speech. In the case of finger tapping, recorded videos of the rapid finger-tapping examination were processed using a novel computer-vision (CV) algorithm that extracts symptom information from video-based tapping signals using motion analysis of the index finger, incorporating a face detection module for signal calibration. This algorithm was able to discriminate between UPDRS part-III severity levels of finger tapping with high classification rates. Further analysis was performed on novel CV-based gait features, constructed using a standard human model, to discriminate between a healthy gait and a Parkinsonian gait. The findings of this study suggest that symptom severity levels in PD can be discriminated with high accuracy by combining first-principle (feature) and data-driven (classification) approaches. On the one hand, the processing of audio and video recordings allows remote monitoring of speech, gait, and finger-tapping examinations by clinical staff; on the other hand, the first-principles approach eases the understanding of symptom estimates for clinicians. We have demonstrated that the selected features of speech, gait, and finger tapping were able to discriminate between symptom severity levels, as well as between healthy controls and PD patients, with high classification rates. The findings support the suitability of these methods as decision-support tools in the context of PD assessment.
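As a hedged sketch of step (3) above, the following shows how per-recording feature estimates could be mapped to UPDRS part-III severity levels with a statistical classifier. The thesis does not specify this exact pipeline; the SVM choice, the feature matrix, and the three severity levels here are placeholder assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))     # placeholder: one feature vector per recording
y = rng.integers(0, 3, size=120)  # placeholder: severity levels 0-2

# Standardize features, then classify severity with an RBF-kernel SVM,
# reporting 5-fold cross-validated accuracy.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(model, X, y, cv=5).mean())
```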
Abstract:
Distributed energy and water balance models require time-series surfaces of the meteorological variables involved in hydrological processes. Most hydrological GIS-based models apply simple interpolation techniques to extrapolate the point-scale values registered at weather stations to the watershed scale. In mountainous areas, where the monitoring network rarely covers the complex terrain heterogeneity effectively, simple geostatistical methods for spatial interpolation are not always representative enough, and algorithms that explicitly or implicitly account for the features creating strong local gradients in the meteorological variables must be applied. Originally developed as a meteorological pre-processing tool for a complete hydrological model (WiMMed), MeteoMap has become independent software. The individual interpolation algorithms used to approximate the spatial distribution of each meteorological variable were carefully selected, taking into account both the specific variable being mapped and the common lack of input data in Mediterranean mountainous areas. They include corrections with height for both rainfall and temperature (Herrero et al., 2007) and topographic corrections for solar radiation (Aguilar et al., 2010). MeteoMap is GIS-based freeware, available upon registration. Input data include weather station records and topographic data, and the output consists of tables and maps of the meteorological variables at hourly, daily, predefined rainfall-event-duration, or annual scales. It offers its own pre- and post-processing tools, including video output, map printing, and the possibility of exporting the maps to image or ASCII ArcGIS formats. This study presents the user-friendly interface of the software and shows some case studies with applications to hydrological modeling.
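As a hedged illustration of the height-correction idea behind such interpolation algorithms (not the actual WiMMed/MeteoMap implementation), the sketch below reduces station temperatures to sea level with a constant lapse rate, interpolates by inverse distance weighting, and restores each grid cell to its own elevation. The lapse rate, station coordinates, and readings are invented for the example.

```python
import numpy as np

def idw_with_lapse(xy_obs, z_obs, t_obs, xy_grid, z_grid,
                   lapse=-0.0065, power=2.0):
    """Illustrative height-corrected temperature interpolation:
    remove the elevation signal with a constant lapse rate (degC/m),
    interpolate by inverse distance weighting, then add the elevation
    signal back at each grid cell."""
    t_sea = t_obs - lapse * z_obs                  # reduce to sea level
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6) ** power         # IDW weights
    t_interp = (w * t_sea).sum(axis=1) / w.sum(axis=1)
    return t_interp + lapse * z_grid               # restore elevation

# Example: three stations, two grid cells (coordinates in km, heights in m)
xy_obs = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
z_obs = np.array([100.0, 900.0, 500.0])
t_obs = np.array([15.0, 10.2, 12.5])
xy_grid = np.array([[5.0, 5.0], [8.0, 2.0]])
z_grid = np.array([400.0, 700.0])
print(idw_with_lapse(xy_obs, z_obs, t_obs, xy_grid, z_grid))
```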
Abstract:
Organizational climate research is a widely used human resources tool grounded in the managerial discourse that preaches that listening to employees' opinions is relevant for identifying corporate aspects that demand improvement. This study aims at demystifying this discourse by means of analytical tools from Critical Administration Studies, namely: a denaturalized view of administration, detachment between intentions and performance, and the search for emancipation. The study is grounded in the assumption that organizational climate research derives from functionalist theory, which benefits a dominant class in the name of productivity and the maintenance of the status quo, thereby contributing to individual alienation at work. The study was designed to identify elements that show the relation between an organizational climate research tool and social control over individuals in an organization - here, the research conducted in 2005 by Centrais Elétricas Brasileiras S/A - Eletrobrás, the Brazilian power sector holding company. The theoretical section presents an overview of corporate paradigms relevant to a sound understanding of the organizational climate concept. Data analysis was conducted by means of the post-modern method of binary deconstruction: the questions contained in the tool's questionnaire were grouped into categories and then analyzed in terms of the following conceptual pairs: well-being/productivity, autonomy/control, ethics/competitiveness, and participation/alienation. The analysis showed that the organizational climate research tool is used as a resource for social control and power, because it contributes to individual alienation as it satisfies some specific individual demands, thereby preventing the individual from a thorough understanding of how the system works. Besides, it helps anticipate, mitigate, and conceal the conflicts arising from the opposing interests of capital and labor.
Abstract:
PURPOSE: Stroke is a high-incidence cerebrovascular disease with elevated morbidity that results in impairments such as functional disabilities. This study aimed to investigate the functional evolution of individuals in the first six months post-stroke. METHOD: Longitudinal study with 42 stroke patients. The functional independence measure (FIM) and the National Institutes of Health Stroke Scale (NIHSS) were applied by multidisciplinary staff three times to each participant; the first application was at admission to rehabilitation and the others three and six months later. RESULTS: The sample was predominantly female (57%), married (52%), with a mean age of 65.26 ± 10.72 years, elementary schooling level (43%), ischemic stroke (91%), and right cerebral hemisphere involvement (74%). Motor FIM scores and the NIHSS showed improvement across the three evaluations, with a significant p-value (<0.001). There was a strong correlation between motor FIM evolution and NIHSS evolution (r = -0.69, p-value < 0.001). CONCLUSIONS: Functional evolution at six months post-stroke was significant, and the smaller the evolution of clinical impairment in these patients, the larger the evolution of their functional independence. The study is important because it allows more appropriate therapeutic planning according to functional evolution in stroke rehabilitation.
Abstract:
The objective of this study was to evaluate processing methods (F-1 = removing the skin with pliers and then cutting into fillets; F-2 = cutting into fillets and then removing the skin with the help of a knife and pliers) and weight categories (W-1 = 250-300 g; W-2 = 301-350 g; W-3 = 351-400 g; and W-4 = 401-450 g) on the carcass (CY), fillet (FY), and skin yield of Nile tilapia. Forty-eight fish were used in a completely randomized design. There was an effect of processing method, with the F-1 means (56.43 and 36.67%) higher than the F-2 means (53.46 and 32.89%) for CY and FY, respectively. For the weight categories, W-1 (56.49 and 37.34%) and W-2 (56.34 and 36.40%) were superior to W-3 (53.27 and 31.98%) and W-4 (53.71 and 33.42%), respectively, for CY and FY. The percentages of crude, clean, and fleshed skin were higher for F-2, but there was no effect of weight category. The F-1 processing method promoted the best yield and skin results, and the weight categories W-1 and W-2 gave the higher yields.
Abstract:
This article presents a detailed study of the application of different additive manufacturing technologies (sintering, three-dimensional printing, extrusion, and stereolithography) in the design process of a complex geometry model and its moving parts. The fabrication sequence was evaluated in terms of pre-processing conditions (model generation and STL/SLI conversion), generation strategy, and physical model post-processing operations. Dimensional verification of the obtained models was undertaken by structured-light projection (optical scanning), a relatively new technology of major importance for metrology and reverse engineering. Manufacturing time and production costs were also studied, which allowed the definition of a more comprehensive evaluation matrix of additive technologies.
Abstract:
Self-organizing maps (SOM) are artificial neural networks widely used in the data mining field, mainly because they constitute a dimensionality reduction technique, given the fixed grid of neurons associated with the network. In order to properly partition and visualize the SOM network, the various methods available in the literature must be applied in a post-processing stage, which consists of inferring, through its neurons, relevant characteristics of the data set. In general, applying such processing to the network neurons, instead of the entire database, reduces computational costs thanks to vector quantization. This work proposes a post-processing of the SOM neurons in the input and output spaces, combining visualization techniques with algorithms based on gravitational forces and on the search for the shortest path with the greatest reward. These methods take into account the connection strength between neighbouring neurons and characteristics of pattern density and distances among neurons, both associated with the positions the neurons occupy in the data space after training the network. Thus, the goal is to define more clearly the arrangement of the clusters present in the data. Experiments were carried out to evaluate the proposed methods using various artificially generated data sets, as well as real-world data sets. The results obtained were compared with those from a number of well-known methods in the literature.
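For context, here is a minimal sketch of one standard SOM post-processing step: training a small map and computing its U-matrix, the average distance from each neuron to its grid neighbours, where high values mark cluster borders. It illustrates post-processing over neurons rather than the raw data; it is not the gravitational or shortest-path method proposed in the work, and all parameters are invented.

```python
import numpy as np

def train_som(data, rows=8, cols=8, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal SOM trainer: returns a (rows, cols, dim) grid of neuron
    weight vectors fitted to the data."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(rows, cols, data.shape[1]))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1).astype(float)
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = step / n_steps
            lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-3
            # best-matching unit: neuron closest to the sample
            bmu = np.unravel_index(np.argmin(((w - x) ** 2).sum(-1)),
                                   (rows, cols))
            # pull neurons toward x, weighted by a Gaussian neighborhood
            d2 = ((grid - np.array(bmu)) ** 2).sum(-1)
            w += lr * np.exp(-d2 / (2 * sigma ** 2))[..., None] * (x - w)
            step += 1
    return w

def u_matrix(w):
    """Average distance from each neuron to its 4-connected grid
    neighbours; high values mark borders between clusters."""
    rows, cols, _ = w.shape
    u = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            nb = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            ds = [np.linalg.norm(w[i, j] - w[a, b]) for a, b in nb
                  if 0 <= a < rows and 0 <= b < cols]
            u[i, j] = np.mean(ds)
    return u

# Example: two Gaussian blobs; border neurons get high U-values
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(5, 1, (100, 3))])
print(u_matrix(train_som(data)).round(2))
```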
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
TOPIC: an auditory training program for schoolchildren with learning disorders. PURPOSE: to verify the effectiveness of an auditory training program in schoolchildren with learning disorders and to compare the findings of the assessment procedures used in pre- and post-testing of schoolchildren with learning disorders and without learning difficulties, submitted and not submitted to the auditory training program. METHOD: 40 schoolchildren participated in this study, divided into: GI, subdivided into GIe (10 schoolchildren with learning disorders submitted to the auditory training program) and GIc (10 schoolchildren with learning disorders not submitted to the auditory training program); and GII, subdivided into GIIe (10 schoolchildren without learning difficulties submitted to the auditory training program) and GIIc (10 schoolchildren without learning difficulties not submitted to the auditory training program). The Audio Training® auditory training program was used. RESULTS: the results showed that GI performed worse than GII in activities related to auditory and phonological awareness skills. GIe and GIIe performed better in auditory and phonological awareness skills after the application of the auditory training program, when the pre- and post-testing findings were compared. CONCLUSION: the performance of schoolchildren with learning disorders in auditory and phonological tasks is inferior to that of schoolchildren without learning disorders. The auditory training program proved effective and enabled the schoolchildren to develop these skills.
Abstract:
Sixty crossbred piglets (Large White x Landrace), weaned with an average initial weight of 7.9 kg, were used in the performance experiment, and 20 crossbred piglets with an average initial weight of 16.8 kg in the digestibility experiment, to evaluate high-moisture corn grain silage with different oil contents. A randomized block design was used in both experiments, which evaluated the nutritional value of the silages and of dry corn with normal (4.3% EE in DM) or high (5.66% EE in DM) oil content. There was no treatment effect on daily feed intake or daily weight gain in the periods from 0 to 9 and 0 to 31 days. The piglets showed better feed conversion in both periods when they received silage, and in the period from 0 to 9 days when they were fed corn with the higher oil content. The digestible and metabolizable energy fractions were influenced by processing, with ensiling providing better energy utilization, regardless of the oil content of the grains.
Abstract:
Organic-inorganic hybrids formed by polyether-based chains grafted at both ends to a siliceous backbone through urea cross-linkages (-NHC(=O)NH-), named di-ureasils, have been used as hosts for the incorporation of Eu3+ in the form of EuCl3. The bulks and thin films, both optically transparent, were characterized by excitation, absorption, and emission spectroscopy. Photoluminescence results point out that the Eu3+ ions occupy at least two distinct local environments. Moreover, the processing method (thin films or bulks) influences the energy levels of the hybrid host, probably due to the lower degree of organization of the thin-film structure.
Abstract:
SAFT techniques are based on the sequential activation, in emission and reception, of the array elements and the post-processing of all the received signals to compose the image. Thus, image generation can be divided into two stages: (1) the excitation and acquisition stage, where the signals received by each element or group of elements are stored; and (2) the beamforming stage, where the signals are combined to obtain the image pixels. The use of Graphics Processing Units (GPUs), which are programmable devices with a high level of parallelism, can accelerate the computation of the beamforming process, which usually includes functions such as dynamic focusing, band-pass filtering, spatial filtering, and envelope detection. This work shows that GPU technology can accelerate the beamforming and post-processing algorithms in SAFT imaging by more than one order of magnitude with respect to CPU implementations.
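As a hedged illustration of stage (2), the sketch below implements the textbook delay-and-sum computation of a single SAFT image pixel on the CPU. The element geometry, sampling rate, and data are invented for the example, and the paper's GPU kernels and filtering stages are not reproduced here.

```python
import numpy as np

def saft_pixel(rf, elem_x, px, pz, c, fs):
    """Delay-and-sum SAFT beamforming for one image pixel.
    rf[i] is the pulse-echo A-scan acquired with element i firing and
    receiving; elem_x are element positions (m), (px, pz) the pixel
    position (m), c the speed of sound (m/s), fs the sampling rate (Hz)."""
    value = 0.0
    for i, x in enumerate(elem_x):
        d = np.hypot(px - x, pz)          # one-way path element -> pixel
        k = int(round(2 * d / c * fs))    # round-trip delay in samples
        if k < rf.shape[1]:
            value += rf[i, k]             # coherent sum across apertures
    return value

# Example: toy data, 16 elements at 0.5 mm pitch, 25 MHz sampling
rng = np.random.default_rng(1)
rf = rng.normal(size=(16, 2048))
elem_x = np.arange(16) * 0.5e-3
print(saft_pixel(rf, elem_x, px=4e-3, pz=20e-3, c=1540.0, fs=25e6))
```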
Abstract:
Although association rule mining has received much attention in recent years, the huge number of rules generated hampers its use. To overcome this problem, many post-processing approaches have been suggested, such as clustering, which organizes the rules into groups that contain, in some sense, similar knowledge. Nevertheless, clustering can aid the user only if good descriptors are associated with each group. This is a relevant issue, since the labels provide the user with a view of the topics to be explored, helping to guide the search; this is useful, for example, when the user has no a priori idea of where to start. Thus, the analysis of different labeling methods for association rule clustering is important. Considering these arguments, this paper analyzes some labeling methods through two proposed measures. One of them, Precision, measures how well a method finds labels that represent as accurately as possible the rules contained in each group; the other, Repetition Frequency, determines how the labels are distributed across the clusters. As a result, it was possible to identify the methods and the domain organizations with the best performance for application to clusters of association rules.
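For intuition, the sketch below labels a cluster of association rules by its most frequent items and scores how many rules the label covers. This is a generic labeling strategy and a rough coverage measure, not necessarily any of the specific methods or the exact Precision measure analyzed in the paper.

```python
from collections import Counter

def label_cluster(rules, top_k=3):
    """Illustrative labeling: pick the top_k most frequent items across
    all rules in a cluster as its descriptor. Each rule is an
    (antecedent, consequent) pair of item sets."""
    counts = Counter()
    for antecedent, consequent in rules:
        counts.update(antecedent | consequent)
    return {item for item, _ in counts.most_common(top_k)}

def coverage(rules, label):
    """Fraction of rules containing at least one label item, a rough
    precision-style check of how well the label represents the group."""
    hit = sum(1 for a, c in rules if (a | c) & label)
    return hit / len(rules)

cluster = [({"bread"}, {"butter"}),
           ({"bread", "milk"}, {"butter"}),
           ({"milk"}, {"coffee"})]
lab = label_cluster(cluster)
print(lab, coverage(cluster, lab))
```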
Abstract:
The Brazilian Network for Continuous Monitoring of GNSS (RBMC) is a national network of continuously operating reference GNSS stations. Since its establishment in December 1996, it has played an essential role in maintaining, and providing user access to, the fundamental geodetic frame in the country. In order to provide better services for the RBMC, the Brazilian Institute of Geography and Statistics (IBGE) and the National Institute of Colonization and Land Reform (INCRA) are partners in the National Geospatial Framework Project (PIGN). This paper provides an overview of the recent modernization phases the RBMC network has undergone, highlighting its future steps. These steps involve installing new equipment, providing real-time data from a group of core stations, and computing real-time DGPS corrections based on CDGPS (the real-time Canada-Wide DGPS Service, http://www.cdgps.com/). In addition, a post-mission Precise Point Positioning (PPP) service has been established, based on the current Geodetic Survey Division of NRCan (CSRS-PPP) service. This service has been operational since April 2009 and is in wide use in the country. All the activities mentioned above are based on a cooperation agreement signed at the end of 2004 with the University of New Brunswick, supported by the Canadian International Development Agency and the Brazilian Cooperation Agency. The Geodetic Survey Division of NRCan is also participating in this modernization effort under the same project. This infrastructure of 66 GNSS stations, the real-time and post-processing services, and the potential to provide Wide Area DGPS corrections in the future show that the RBMC system is comparable to those available in the USA and Europe.
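As a toy, hedged sketch of the DGPS principle behind such correction services: a reference station at a known position differences the geometric range to a satellite against the measured pseudorange and broadcasts the result for rovers to apply. Real CDGPS corrections model clocks, orbits, and atmospheric delays, so the function and numbers below are purely illustrative.

```python
import numpy as np

def range_correction(sat_pos, station_pos, measured_pseudorange):
    # Correction = geometric range (known station, known satellite)
    # minus the pseudorange the station actually measured.
    geometric_range = np.linalg.norm(sat_pos - station_pos)
    return geometric_range - measured_pseudorange

sat = np.array([15600e3, 7540e3, 20140e3])      # satellite ECEF (m), invented
station = np.array([4075e3, -4057e3, 1280e3])   # known station ECEF (m), invented
print(range_correction(sat, station, measured_pseudorange=21.9e6))
```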