930 results for Line and edge detection
Abstract:
Changes are continuously made to software source code to meet customer needs and to fix faults. Continuous change can introduce code and design defects. Design defects are poor solutions to recurring design or implementation problems, usually in object-oriented development. During comprehension and change activities, and because of time-to-market pressure, lack of understanding, and their level of experience, developers cannot always follow design standards and coding techniques such as design patterns. As a result, they introduce design defects into their systems. In the literature, several authors have argued that design defects make object-oriented systems harder to understand, more fault-prone, and harder to change than systems without them. Yet only a few of these authors have empirically studied the impact of design defects on comprehension, and none has studied their impact on the effort developers spend fixing faults. In this thesis we make three main contributions. The first contribution is an empirical study providing evidence of the impact of design defects on comprehension and change. We design and run two experiments with 59 subjects to assess the impact of the combination of two occurrences of Blob or two occurrences of spaghetti code on the performance of developers carrying out comprehension and change tasks. We measure developer performance using (1) the NASA task load index for their effort, (2) the time they spend completing their tasks, and (3) their percentages of correct answers. The results of both experiments show that two occurrences of Blob or of spaghetti code significantly hinder developer performance during comprehension and change tasks. These results justify earlier research on the specification and detection of design defects. Development teams should warn developers about high numbers of design-defect occurrences and recommend refactorings at every step of the development process to remove these defects whenever possible. In the second contribution, we study the relationship between design defects and faults, investigating the impact of the presence of design defects on the effort required to fix faults. We measure fault-fixing effort with three indicators: (1) the duration of the fixing period, (2) the number of fields and methods touched by the fix, and (3) the entropy of the fixes in the source code. We conduct an empirical study of 12 design defects detected in 54 releases of four systems: ArgoUML, Eclipse, Mylyn, and Rhino. Our results show that the fixing period is longer for faults involving classes with design defects, and that fixing faults in classes with design defects changes more files, fields, and methods.
We also observed that, after a fault is fixed, the number of design-defect occurrences in the classes involved in the fix decreases. Understanding the impact of design defects on the effort developers spend fixing faults is important to help development teams better assess and anticipate the consequences of their design decisions, and thus focus their effort on improving the quality of their systems. Development teams should monitor and remove design defects from their systems, because such defects are likely to increase change effort. The third contribution concerns the detection of design defects. During maintenance activities, it is important to have a tool that can detect design defects incrementally and iteratively. Such an incremental, iterative detection process could reduce costs, effort, and resources by letting practitioners identify and address design-defect occurrences as they encounter them during comprehension and change. Researchers have proposed approaches to detect design-defect occurrences, but these approaches currently have four limitations: (1) they require extensive knowledge of design defects, (2) they have limited precision and recall, (3) they are not iterative and incremental, and (4) they cannot be applied to subsets of systems. To overcome these limitations, we introduce SMURF, a new approach to detect design defects based on a machine learning technique, support vector machines, that takes practitioner feedback into account. Through an empirical study of three systems and four design defects, we show that SMURF's precision and recall are higher than those of DETEX and BDTEX when detecting design-defect occurrences. We also show that SMURF can be applied in both intra-system and inter-system configurations. Finally, we show that SMURF's precision and recall improve when practitioner feedback is taken into account.
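As an illustration of the classifier family SMURF builds on, here is a minimal sketch of a support-vector-machine detector trained on class-level metrics and updated with practitioner feedback by retraining; the metric set, labels, and feedback loop are hypothetical and are not taken from the thesis.

```python
# Minimal sketch of an SVM-based design-smell detector with practitioner
# feedback, in the spirit of SMURF (hypothetical features and data).
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Each row: metrics for one class (e.g., LOC, number of methods, coupling).
X_train = np.array([[1200, 45, 30], [90, 7, 3], [2500, 80, 55], [60, 5, 2]])
y_train = np.array([1, 0, 1, 0])          # 1 = Blob occurrence, 0 = clean

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X_train, y_train)

# Practitioner feedback: a flagged class is confirmed or rejected, then the
# labeled example is appended and the classifier is retrained.
X_feedback = np.array([[1800, 60, 40]])
y_feedback = np.array([1])                 # practitioner confirms the smell
model.fit(np.vstack([X_train, X_feedback]),
          np.concatenate([y_train, y_feedback]))

print(model.predict([[2000, 70, 50]]))     # classify a new class
```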
Effectiveness Of Feature Detection Operators On The Performance Of Iris Biometric Recognition System
Abstract:
Iris recognition is a highly efficient biometric identification system with great potential in the security systems area. Its robustness and unobtrusiveness, as opposed to most currently deployed systems, make it a good candidate to replace most of the security systems around. By making use of the distinctiveness of iris patterns, iris recognition systems obtain a unique mapping for each person; identification of that person is then possible by applying an appropriate matching algorithm. In this paper, Daugman's Rubber Sheet model is employed for iris normalization and unwrapping, a descriptive statistical analysis of different feature detection operators is performed, the extracted features are encoded using Haar wavelets, and the Hamming distance is used as the matching algorithm for classification. The system was tested on the UBIRIS database. The Canny edge detection algorithm is found to be the best at extracting most of the iris texture. The success rate of feature detection using Canny is 81%, the False Accept Rate is 9%, and the False Reject Rate is 10%.
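To illustrate the matching step, the sketch below compares two binary iris codes with a normalized Hamming distance; the code length, noise level, and decision threshold are assumptions for illustration, not values from the paper.

```python
# Hamming-distance matching of binary iris codes (illustrative sketch).
import numpy as np

def hamming_distance(code_a, code_b, mask):
    """Fraction of disagreeing bits, counting only bits valid in both codes."""
    valid = mask.astype(bool)
    disagree = np.logical_xor(code_a, code_b) & valid
    return disagree.sum() / valid.sum()

rng = np.random.default_rng(0)
enrolled = rng.integers(0, 2, 2048, dtype=np.uint8)   # stored template
probe = enrolled.copy()
probe[rng.choice(2048, 150, replace=False)] ^= 1      # simulate acquisition noise
mask = np.ones(2048, dtype=np.uint8)                   # all bits valid here

THRESHOLD = 0.32                                       # assumed decision threshold
hd = hamming_distance(enrolled, probe, mask)
print(f"HD = {hd:.3f} -> {'match' if hd < THRESHOLD else 'no match'}")
```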
Abstract:
The basic concepts of digital signal processing are taught to students in engineering and science. The focus of the course is on linear, time-invariant systems. The question of what happens when the system is governed by a quadratic or cubic equation remains unanswered in the vast majority of the signal processing literature. Light was shed on this problem when John V. Mathews and Giovanni L. Sicuranza published the book Polynomial Signal Processing. This book opened up an unseen vista of polynomial systems for signal and image processing. It presented the theory and implementations of both adaptive and non-adaptive FIR and IIR quadratic systems, which offer improved performance over conventional linear systems. The theory of quadratic systems is a pristine, largely unexplored area of research that offers computationally intensive work. Once the area of research was selected, the next issue was the choice of software tool to carry out the work. Conventional languages like C and C++ were quickly ruled out because they are not interpreted and lack good-quality plotting libraries. MATLAB proved to be very slow, as did SCILAB and Octave. The search for a scientific computing language as fast as C but with a good-quality plotting library ended with Python, a distant relative of LISP, which proved ideal for scientific computing. An account of the use of Python, its scientific computing package scipy, and the plotting library pylab is given in the appendix. Initially, work focused on designing predictors that exploit the polynomial nonlinearities inherent in speech generation mechanisms. The work then moved to medical image processing, which offered more potential for quadratic methods. The major focus in this area is on quadratic edge detection methods for retinal images and fingerprints, as well as on de-noising raw MRI signals.
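As a rough illustration of the quadratic (second-order Volterra) FIR systems discussed above, the following sketch fits a predictor with linear and pairwise-product terms of past samples by least squares; the synthetic signal and memory length are assumptions, not data from the thesis.

```python
# Sketch of a quadratic (2nd-order Volterra) FIR predictor fit by least squares.
import numpy as np

def quadratic_features(x, memory):
    """Build rows of [x[n-1..n-M], products x[n-i]*x[n-j]] and targets x[n]."""
    rows, targets = [], []
    for n in range(memory, len(x)):
        past = x[n - memory:n][::-1]                 # x[n-1], ..., x[n-M]
        quad = [past[i] * past[j] for i in range(memory) for j in range(i, memory)]
        rows.append(np.concatenate([past, quad]))
        targets.append(x[n])
    return np.array(rows), np.array(targets)

rng = np.random.default_rng(1)
x = np.sin(0.2 * np.arange(500)) + 0.3 * np.sin(0.2 * np.arange(500)) ** 2
x += 0.01 * rng.standard_normal(500)                 # mildly nonlinear test signal

H, d = quadratic_features(x, memory=4)
coeffs, *_ = np.linalg.lstsq(H, d, rcond=None)       # linear + quadratic kernel
print("prediction error (RMS):", np.sqrt(np.mean((H @ coeffs - d) ** 2)))
```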
Abstract:
This paper investigates the linear degeneracies of projective structure estimation from point and line features across three views. We show that the rank of the linear system of equations for recovering the trilinear tensor of three views reduces to 23 (instead of 26) in the case when the scene is a Linear Line Complex (set of lines in space intersecting at a common line) and is 21 when the scene is planar. The LLC situation is only linearly degenerate, and we show that one can obtain a unique solution when the admissibility constraints of the tensor are accounted for. The line configuration described by an LLC, rather than being some obscure case, is in fact quite typical. It includes, as a particular example, the case of a camera moving down a hallway in an office environment or down an urban street. Furthermore, an LLC situation may occur as an artifact such as in direct estimation from spatio-temporal derivatives of image brightness. Therefore, an investigation into degeneracies and their remedy is important also in practice.
Abstract:
The presence of pathogenic microorganisms in food is one of the essential problems in public health, and the diseases they cause are among the most important causes of illness. The application of microbiological controls within quality assurance programmes is therefore a prerequisite for minimizing the risk of infection for consumers. Classical microbiological methods generally require non-selective pre-enrichment, selective enrichment, isolation on selective media, and subsequent confirmation using tests based on the morphology, biochemistry, and serology of each microorganism of interest. These methods are therefore laborious, require a long process to obtain definitive results, and cannot always be carried out. To overcome these drawbacks, several alternative methodologies have been developed for the detection, identification, and quantification of foodborne pathogenic microorganisms, among which immunological and molecular methods stand out. In the latter category, the technique based on the polymerase chain reaction (PCR) has become the most popular diagnostic technique in microbiology and, recently, the introduction of an improvement on it, real-time PCR, has produced a second revolution in molecular diagnostic methodology, as can be seen from the growing number of scientific publications and the continual appearance of new commercial kits. Real-time PCR is a highly sensitive technique, capable of detecting down to a single molecule, that allows the exact quantification of DNA sequences specific to foodborne pathogenic microorganisms. Other advantages favouring its potential adoption in food analysis laboratories are its speed, simplicity, and closed-tube format, which can prevent post-PCR contamination and favours automation and high throughput. In this work, sensitive and reliable molecular techniques (PCR and NASBA) were developed for the detection, identification, and quantification of foodborne pathogenic bacteria (Listeria spp., Mycobacterium avium subsp. paratuberculosis, and Salmonella spp.). Specifically, methods based on real-time PCR were designed and optimized for each of these agents: L. monocytogenes, L. innocua, Listeria spp., and M. avium subsp. paratuberculosis; in addition, a previously developed method for Salmonella spp. was optimized and evaluated in different centres. A NASBA-based method was also designed and optimized for the specific detection of M. avium subsp. paratuberculosis, and the potential application of the NASBA technique for the specific detection of viable forms of this microorganism was evaluated. All the methods showed 100% specificity, with sensitivity adequate for potential application to real food samples. Sample preparation procedures were also developed and evaluated for meat products, fishery products, milk, and water. In this way, fully specific and highly sensitive real-time PCR methods were developed for the quantitative determination of L. monocytogenes in meat products and in salmon and derived products such as smoked salmon, and of M. avium subsp. paratuberculosis in water and milk samples.
The latter method was also applied to assess the presence of this microorganism in the intestine of patients with Crohn's disease, using colonoscopy biopsies obtained from affected volunteers. In conclusion, this study presents selective and sensitive molecular assays for the detection of pathogens in food (Listeria spp., Mycobacterium avium subsp. paratuberculosis) and for the rapid and unambiguous identification of Salmonella spp. The relative accuracy of the assays was excellent when compared with the reference microbiological methods, and they can be used for the quantification of both genomic DNA and cell suspensions. Moreover, combining them with pre-amplification treatments proved highly efficient for the analysis of the bacteria under study. They can therefore constitute a useful strategy for the rapid and sensitive detection of pathogens in food and should be an additional tool in the range of diagnostic tools available for the study of foodborne pathogens.
Abstract:
Two ongoing projects at ESSC that involve the development of new techniques for extracting information from airborne LiDAR data and combining this information with environmental models will be discussed. The first project, in conjunction with Bristol University, aims to improve 2-D river flood flow models by using remote sensing to provide distributed data for model calibration and validation. Airborne LiDAR can provide such models with a dense and accurate floodplain topography together with vegetation heights for the parameterisation of model friction. The vegetation height data can be used to specify a friction factor at each node of a model's finite element mesh. A LiDAR range image segmenter has been developed which converts a LiDAR image into separate raster maps of surface topography and vegetation height for use in the model. Satellite and airborne SAR data have been used to measure flood extent remotely in order to validate the modelled flood extent. Methods have also been developed for improving the models by decomposing the model's finite element mesh to reflect floodplain features such as hedges and trees, which have different frictional properties to their surroundings. Originally developed for rural floodplains, the segmenter is currently being extended to provide DEMs and friction parameter maps for urban floods by fusing the LiDAR data with digital map data. The second project is concerned with the extraction of tidal channel networks from LiDAR. These networks are important features of the inter-tidal zone and play a key role in tidal propagation and in the evolution of salt-marshes and tidal flats. The study of their morphology is currently an active area of research, and a number of theories related to networks have been developed which require validation using dense and extensive observations of network forms and cross-sections. The conventional method of measuring networks is cumbersome and subjective, involving manual digitisation of aerial photographs in conjunction with field measurement of channel depths and widths for selected parts of the network. A semi-automatic technique has been developed to extract networks from LiDAR data of the inter-tidal zone. A multi-level knowledge-based approach has been implemented, whereby low-level algorithms first extract channel fragments based mainly on image properties, and a high-level processing stage then improves the network using domain knowledge. The approach adopted at the low level uses multi-scale edge detection to detect channel edges, then associates adjacent anti-parallel edges to form channels. The higher-level processing includes a channel repair mechanism.
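For illustration, here is a minimal sketch of the low-level step described above: Canny edge detection run at several scales over a gridded elevation surface, with the responses combined before channel fragments would be assembled; the synthetic DEM and the chosen scales are assumptions.

```python
# Multi-scale edge detection over a gridded LiDAR surface (illustrative).
import numpy as np
from skimage.feature import canny

# Synthetic DEM with a channel-like depression (stand-in for LiDAR data).
y, x = np.mgrid[0:200, 0:200]
dem = 2.0 - 1.5 * np.exp(-((x - 100) ** 2) / 50.0)
dem += 0.05 * np.random.default_rng(2).standard_normal((200, 200))

# Detect edges at several Gaussian scales and keep pixels that respond at
# any scale; adjacent anti-parallel edges would then be paired into channels.
scales = [1.0, 2.0, 4.0]
edges = np.zeros_like(dem, dtype=bool)
for sigma in scales:
    edges |= canny(dem, sigma=sigma)

print("edge pixels found:", int(edges.sum()))
```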
Abstract:
We present a novel approach to intrusion detection using an intelligent data classifier based on a self-organizing map (SOM). We surveyed other unsupervised intrusion detection methods, alternative SOM-based techniques, and the KDD winner IDS methods. This paper provides a robustly designed and implemented intelligent data classifier based on a single large (30x30) self-organizing map capable of detecting all attack types in the DARPA 1999 archive, achieving a false positive rate as low as 0.04% and a detection rate as high as 99.73% when tested on the full KDD data sets, and a comparable detection rate of 89.54% with a false positive rate as low as 0.18% when tested on the corrected data sets.
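For illustration only, here is a sketch of training a 30x30 self-organizing map on connection feature vectors with the MiniSom library and labeling map units by majority vote; the synthetic features, labels, and training parameters are assumptions, not the paper's implementation.

```python
# Sketch: 30x30 SOM over connection features, labeling units by majority class.
from collections import Counter, defaultdict
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(3)
X = rng.random((1000, 41))          # stand-in for 41 KDD'99 connection features
y = rng.integers(0, 2, 1000)        # 0 = normal, 1 = attack (synthetic labels)

som = MiniSom(30, 30, X.shape[1], sigma=1.5, learning_rate=0.5, random_seed=3)
som.train_random(X, 5000)

# Assign each map unit the majority label of the samples it wins.
unit_labels = defaultdict(list)
for xi, yi in zip(X, y):
    unit_labels[som.winner(xi)].append(yi)
unit_class = {u: Counter(l).most_common(1)[0][0] for u, l in unit_labels.items()}

sample = X[0]
print("predicted class:", unit_class.get(som.winner(sample), 0))
```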
Abstract:
Parkinson's is a neurodegenerative disease in which tremor is the main symptom. This paper investigates the use of different classification methods to identify tremors experienced by Parkinsonian patients. Some previous research has focused tremor analysis on external body signals (e.g., electromyography, accelerometer signals, etc.). Our advantage is that we have access to sub-cortical data, which facilitates the applicability of the obtained results to real medical devices, since we are dealing with brain signals directly. Local field potentials (LFP) were recorded in the subthalamic nucleus of 7 Parkinsonian patients through the implanted electrodes of a deep brain stimulation (DBS) device prior to its internalization. The measured LFP signals were preprocessed by means of splinting, downsampling, filtering, normalization, and rectification. Feature extraction was then conducted through a multi-level decomposition via a wavelet transform. Finally, artificial intelligence techniques were applied to feature selection, clustering of tremor types, and tremor detection. The key contribution of this paper is to present initial results which indicate, to a high degree of certainty, that there appear to be two distinct subgroups of patients within group 1 of patients according to the Consensus Statement of the Movement Disorder Society on Tremor. Such results may well lead to different treatments for the patients involved, depending on how their tremor has been classified. Moreover, we propose a new approach for demand-driven stimulation, in which tremor detection is also based on the subtype of tremor the patient has. Applying this knowledge to the tremor detection problem, we conclude that the results improve when patient clustering is applied prior to detection.
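As an illustration of the feature-extraction step, the sketch below rectifies and downsamples a synthetic LFP trace, decomposes it with a multi-level wavelet transform, and summarizes each sub-band by its energy; the wavelet, level count, and band statistic are assumptions.

```python
# Sketch: wavelet-based feature extraction from a (synthetic) LFP signal.
import numpy as np
import pywt

fs = 1000                                        # assumed sampling rate, Hz
t = np.arange(0, 5, 1 / fs)
lfp = np.sin(2 * np.pi * 5 * t)
lfp += 0.2 * np.random.default_rng(4).standard_normal(t.size)

# Downsample and rectify, then decompose into 5 levels with a Daubechies wavelet.
lfp = np.abs(lfp[::2])
coeffs = pywt.wavedec(lfp, "db4", level=5)       # [cA5, cD5, cD4, cD3, cD2, cD1]

# One feature per sub-band: mean energy of its coefficients.
features = np.array([np.mean(c ** 2) for c in coeffs])
print("sub-band energy features:", np.round(features, 4))
```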
Abstract:
This paper describes the development and evaluation of a sequential injection method to automate the determination of methyl parathion by square-wave adsorptive cathodic stripping voltammetry, exploiting the concept of monosegmented flow analysis to perform in-line sample conditioning and standard addition. The accumulation and stripping steps are carried out in the sample medium conditioned with 40 mmol L-1 Britton-Robinson buffer (pH 10) in 0.25 mol L-1 NaNO3. The homogenized mixture is injected at a flow rate of 10 µL s-1 toward the flow cell, which is adapted to the capillary of a hanging drop mercury electrode. After a suitable deposition time, the flow is stopped and the potential is scanned from -0.3 to -1.0 V versus Ag/AgCl at a frequency of 250 Hz and a pulse height of 25 mV. The linear dynamic range is observed for methyl parathion concentrations between 0.010 and 0.50 mg L-1, with detection and quantification limits of 2 and 7 µg L-1, respectively. The sampling throughput is 25 h-1 if the in-line standard addition and sample conditioning protocols are followed, but this frequency can be increased up to 61 h-1 if the sample is conditioned off-line and quantified using an external calibration curve. The method was applied to the determination of methyl parathion in spiked water samples, and its accuracy was evaluated both by comparison with high-performance liquid chromatography with UV detection and by the recovery percentages. Although no evidence of statistically significant differences was observed between the expected and obtained concentrations, because of the susceptibility of the method to interference by other pesticides (e.g., parathion, dichlorvos) and natural organic matter (e.g., fulvic and humic acids), isolation of the analyte may be required when more complex sample matrices are encountered.
Abstract:
A simple, rapid, and low-cost coulometric method for the direct detection of glyphosate and aminomethylphosphonic acid (AMPA) in water samples using anion-exchange chromatography and coulometric detection with a copper electrode is presented. Under optimized conditions, the limits of detection (LODs) (S/N = 3) were 0.038 µg mL-1 for glyphosate and 0.24 µg mL-1 for AMPA, without any preconcentration. The calibration curves were linear and presented excellent correlation coefficients. The method was successfully applied to the determination of glyphosate and AMPA in water samples without any kind of extraction, clean-up, or preconcentration step. No interferents were found in the water; accordingly, the recovery was practically 100%.
Abstract:
The objective of this thesis work is to propose an algorithm to detect faces in a digital image with a complex background. A lot of work has already been done in the area of face detection, but a drawback of some face detection algorithms is their inability to detect faces with closed eyes and an open mouth. Facial features thus form an important basis for detection. The current thesis work focuses on the detection of faces based on facial objects. The procedure is composed of three phases: a segmentation phase, a filtering phase, and a localization phase. In the segmentation phase, the algorithm uses color segmentation to isolate human skin color based on its chrominance properties. In the filtering phase, Minkowski-addition-based object removal (morphological operations) is used to remove the non-skin regions. In the last phase, image processing and computer vision methods are used to find facial components in the skin regions. This method is effective at detecting a face region with closed eyes, an open mouth, or a half-profile face. The experimental results demonstrated that the detection accuracy is around 85.4% and that detection is faster than with the neural network method and other techniques.
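A minimal sketch of the segmentation and filtering phases described above, using chrominance thresholds in YCrCb space followed by morphological opening and closing; the threshold values and the input file name are illustrative assumptions, not those of the thesis.

```python
# Sketch: skin-color segmentation plus morphological filtering (OpenCV).
import cv2
import numpy as np

image = cv2.imread("input.jpg")                          # hypothetical input file
ycrcb = cv2.cvtColor(image, cv2.COLOR_BGR2YCrCb)

# Threshold the chrominance channels; the bounds below are typical skin ranges.
lower = np.array([0, 133, 77], dtype=np.uint8)
upper = np.array([255, 173, 127], dtype=np.uint8)
mask = cv2.inRange(ycrcb, lower, upper)

# Morphological opening removes small non-skin blobs; closing fills holes.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# Candidate face regions are the remaining connected components.
num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
print("candidate skin regions:", num - 1)
```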
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
This article presents a transmission line model for the simulation of fast and slow transients, applicable to symmetrical or asymmetrical configurations. The model is developed based on a lumped-element representation and state-space techniques. The proposed methodology is a practical procedure for modeling three-phase transmission lines directly in the time domain, without the explicit or implicit use of inverse transforms. In the three-phase representation, modal analysis techniques are applied to decouple the phases into their respective propagation modes, using a correction procedure to obtain a real, constant transformation matrix for untransposed lines with or without a vertical symmetry plane. The methodology takes the frequency-dependent parameters of the line into account and, to include this effect in the state matrices, a fitting procedure is applied. To verify the accuracy of the proposed state-space model in the frequency domain, a simple methodology is described based on the line's distributed parameters and the transfer function associated with the input/output signals of the lumped-parameter representation. In addition, the article proposes the use of a fast and robust integration procedure to solve the state equations, enabling transient and steady-state simulations. The results obtained with the proposed methodology are compared with several established transmission line models in EMTP for an asymmetrical three-phase transmission line. The principal contribution of the proposed methodology is its ability to handle a steady fundamental signal mixed with fast and slow transients, including impulsive and oscillatory behavior, through a practical procedure applied directly in the time domain for symmetrical or asymmetrical representations.
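For illustration, here is a minimal sketch of simulating a single lumped pi-section directly in the time domain from its state-space form; the one-section circuit and the parameter values are assumptions and do not reproduce the proposed frequency-dependent model.

```python
# Sketch: time-domain simulation of one lumped pi-section via state equations.
import numpy as np
from scipy.signal import StateSpace, lsim

R, L, C = 0.05, 1.0e-3, 1.0e-8         # assumed per-section parameters (ohm, H, F)

# States: series inductor current i and receiving-end capacitor voltage v.
A = np.array([[-R / L, -1 / L],
              [1 / C,   0.0]])
B = np.array([[1 / L], [0.0]])          # input: sending-end voltage
Cmat = np.array([[0.0, 1.0]])           # output: receiving-end voltage
D = np.array([[0.0]])

t = np.linspace(0, 2e-3, 2000)
u = np.where(t > 1e-4, 1.0, 0.0)        # step energization at t = 0.1 ms

_, v_out, _ = lsim(StateSpace(A, B, Cmat, D), u, t)
print("peak receiving-end voltage:", float(v_out.max()))
```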
Abstract:
Studies on patterns of habitat use by mammals are necessary for understanding the mechanisms involved in their distribution and abundance. In this study, we used the spool-and-line method to investigate habitat utilization by two sigmodontine rodents from the Brazilian Cerrado, Necromys lasiurus and Oryzomys scotti. We conducted the study in a Cerrado area in central Brazil (15°56'S, 47°56'W), where the animals were caught in an area of 7.68 ha of Cerrado sensu stricto. Captured individuals were marked, equipped with a spool-and-line device, and released at the same capture point. The next day we followed the thread to record their daily movements and find their nests. To investigate microhabitat selection, we compared habitat characteristics along the trails of each studied species with the general habitat characteristics of the study area. Although the mean 24-h distance was greater for N. lasiurus (mean ± SE: 41.9 ± 42.2 m, N=3) than for O. scotti (28.7 ± 14.2 m, N=6), this difference was not significant (Mann-Whitney test, U=26, P>0.6). We detected significant differences between the observed microhabitat variables of both species and the available microhabitat characteristics, as determined by discriminant analysis (Wilks's lambda F=3.001; df=14, 116; P<0.001). Both species were associated with microhabitat characteristics whose values differed markedly from the overall available habitat. Along the first canonical discriminant function of the DFA, both species were associated with greater grass height than the mean available height, and along the second axis N. lasiurus selected areas with higher fruit availability and more shelters than those selected by O. scotti. For stronger inferences regarding differential patterns of habitat utilization by Cerrado rodents, we suggest the simultaneous use of both spool-and-line and standard trapping methods.
Abstract:
A method for correcting spectral and transport interference is proposed and was applied to minimize interference from PO molecules produced in an air-acetylene flame and transport interference caused by variation in the phosphoric acid concentration. Pb atoms and PO molecules both absorb at 217.0005 nm, so A_total(217.0005 nm) = A_Pb(217.0005 nm) + A_PO(217.0005 nm). By monitoring the alternative PO wavelength at 217.0458 nm, the relative contribution of PO to the total absorbance at 217.0005 nm can be calculated: A_Pb(217.0005 nm) = A_total(217.0005 nm) - A_PO(217.0005 nm) = A_total(217.0005 nm) - k·A_PO(217.0458 nm). The correction factor k is the ratio of the slopes of two analytical curves for P obtained at 217.0005 and 217.0458 nm (k = b(217.0005 nm)/b(217.0458 nm)). With the sample aspiration rate fixed at 5.0 mL min-1 and the absorbance at each wavelength integrated over 3 pixels, analytical curves for Pb (0.1-1.0 mg L-1) were obtained with typical correlation coefficients > 0.9990. The linear correlations between absorbance and P concentration at 217.0005 and 217.0458 nm were > 0.998. The limit of detection for Pb was 10 µg L-1. The proposed correction method gave relative standard deviations (n=12) of 2.0 to 6.0%, slightly lower than those obtained without correction (1.4-4.3%). Recoveries of Pb added to the phosphoric acid samples ranged from 97.5 to 100% (with correction by the proposed method) and from 105 to 230% (without correction).
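Restated compactly, the correction scheme described above amounts to the following relations (wavelengths in nm, as given in the abstract):

```latex
A^{\mathrm{total}}_{217.0005} = A^{\mathrm{Pb}}_{217.0005} + A^{\mathrm{PO}}_{217.0005},
\qquad
A^{\mathrm{Pb}}_{217.0005} = A^{\mathrm{total}}_{217.0005} - k\,A^{\mathrm{PO}}_{217.0458},
\qquad
k = \frac{b_{217.0005}}{b_{217.0458}}
```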