970 results for Automatic Peak Detection


Relevance:

30.00%

Abstract:

The present work describes a new methodology for the automatic detection of the glottal space in laryngeal images taken from 15 videos recorded with stroboscopic light by the ENT service of the Gregorio Marañón Hospital in Madrid. The system is based on active contour models (snakes). For pre-processing, the algorithm combines traditional techniques (thresholding and median filtering) with more sophisticated ones such as anisotropic filtering, yielding an image suitable for the use of snakes. The threshold is set to 85% of the maximum peak of the image histogram; above this value the pixel information is not relevant. The anisotropic filter makes it possible to distinguish two intensity levels, one corresponding to the background and the other to the glottis. The initialization is based on the magnitude of the GVF (Gradient Vector Flow) field, which ensures an automatic process for selecting the initial contour. The performance of the algorithm is validated using the Pratt coefficient and compared against a manual segmentation and another automatic method based on the watershed transform.
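The pre-processing described above is easy to reproduce. The following is a minimal sketch, assuming an 8-bit grayscale frame as input; the function name and the 3x3 median window are illustrative choices rather than the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess_glottal_frame(img):
    """Median filtering followed by thresholding at 85% of the
    histogram's maximum peak, as outlined in the abstract."""
    smoothed = median_filter(img, size=3)               # speckle removal
    hist, edges = np.histogram(smoothed, bins=256, range=(0, 256))
    peak = edges[np.argmax(hist)]                       # intensity of the dominant histogram peak
    threshold = 0.85 * peak                             # 85% of that peak
    # Pixels brighter than the threshold carry no relevant information
    return (smoothed <= threshold).astype(np.uint8)
```

The resulting binary mask is what the snake would then be initialized on (via the GVF magnitude, not shown here).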

Relevance:

30.00%

Abstract:

This paper presents a study of the effectiveness of three different algorithms for the parallelization of logic programs based on compile-time detection of independence among goals. The algorithms are embedded in a complete parallelizing compiler, which incorporates different abstract interpretation-based program analyses. The complete system shows the task of automatic program parallelization to be practical. The trade-offs involved in using each of the algorithms in this task are studied experimentally, their weaknesses identified, and possible improvements discussed.

Relevance:

30.00%

Abstract:

This paper presents a technique for achieving a class of optimizations related to the reduction of checks within cycles. The technique uses both Program Transformation and Abstract Interpretation. After a first pass of an abstract interpreter which detects simple invariants, program transformation is used to build a hypothetical situation that simplifies some predicates that should be executed within the cycle. This transformation implements the heuristic hypothesis that once conditional tests hold they may continue doing so recursively. Specialized versions of predicates are generated to detect and exploit those cases in which the invariance may hold. Abstract interpretation is then used again to verify the truth of such hypotheses and confirm the proposed simplification. This allows optimizations that go beyond those possible with only one pass of the abstract interpreter over the original program, as is normally the case. It also allows selective program specialization using a standard abstract interpreter not specifically designed for this purpose, thus simplifying the design of this already complex module of the compiler. In the paper, a class of programs amenable to such optimization is presented, along with some examples and an evaluation of the proposed techniques in some application areas such as floundering detection and reducing run-time tests in automatic logic program parallelization. The analysis of the examples presented has been performed automatically by an implementation of the technique using existing abstract interpretation and program transformation tools.
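The paper's transformation operates on logic programs, but the underlying idea — execute the test once, then branch to a check-free specialized version under the hypothesis that the test keeps holding recursively — can be loosely mimicked in Python. Everything below (the names, and a type test standing in for the invariant) is an illustrative analogue, not the actual compiler transformation.

```python
def sum_checked(xs):
    """Original version: the test is re-executed at every recursive call."""
    if not xs:
        return 0
    if not isinstance(xs[0], int):      # check executed within the cycle
        raise TypeError(xs[0])
    return xs[0] + sum_checked(xs[1:])

def sum_specialized(xs):
    """Specialized version: under the invariance hypothesis (once the test
    holds, it keeps holding), the per-iteration check is dropped. In the
    paper, abstract interpretation verifies this hypothesis statically;
    here it is simply assumed."""
    if not xs:
        return 0
    return xs[0] + sum_specialized(xs[1:])

def sum_opt(xs):
    """Entry point: test once, then jump to the check-free version."""
    if not xs or isinstance(xs[0], int):
        return sum_specialized(xs)
    return sum_checked(xs)              # fall back to the checking version

print(sum_opt([1, 2, 3]))   # 6, via the specialized version
```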

Relevance:

30.00%

Abstract:

Irregular computations pose some of the most interesting and challenging problems in automatic parallelization. Irregularity appears in certain kinds of numerical problems and is pervasive in symbolic applications. Such computations often use dynamic data structures which make heavy use of pointers. This complicates all the steps of a parallelizing compiler, from independence detection to task partitioning and placement. In the past decade there has been significant progress in the development of parallelizing compilers for logic programming and, more recently, constraint programming. The typical applications of these paradigms frequently involve irregular computations, which arguably makes the techniques used in these compilers potentially interesting. In this paper we introduce in a tutorial way some of the problems faced by parallelizing compilers for logic and constraint programs. These include the need for inter-procedural pointer aliasing analysis for independence detection and having to manage speculative and irregular computations through task granularity control and dynamic task allocation. We also provide pointers to some of the progress made in these areas. In the associated talk we demonstrate representatives of several generations of these parallelizing compilers.
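As a rough Python illustration of one of those mechanisms, the sketch below applies task granularity control under an assumed cost model: a task is handed to a worker pool only if its estimated cost exceeds a threshold, otherwise it runs inline. The names, the cost model and the threshold are all invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

GRAIN = 50_000          # illustrative threshold below which spawning is not worth it

def estimated_cost(task):
    return task         # stand-in for a compile-time cost estimate of one goal

def run(task):
    return sum(range(task))   # dummy irregular workload

def schedule(tasks):
    """Run coarse-grained tasks on a pool, fine-grained ones sequentially."""
    results, futures = [], []
    with ThreadPoolExecutor() as pool:
        for t in tasks:
            if estimated_cost(t) > GRAIN:
                futures.append(pool.submit(run, t))   # large enough: parallelize
            else:
                results.append(run(t))                # too small: run inline
        results.extend(f.result() for f in futures)
    return results

print(sum(schedule([10, 500, 120_000, 75_000, 3])))
```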

Relevance:

30.00%

Abstract:

The concept of independence has been recently generalized to the constraint logic programming (CLP) paradigm. Also, several abstract domains specifically designed for CLP languages, and whose information can be used to detect the generalized independence conditions, have been recently defined. As a result we are now in a position where automatic parallelization of CLP programs is feasible. In this paper we study the task of automatically parallelizing CLP programs based on such analyses, by transforming them to explicitly concurrent programs in our parallel CC platform (CIAO) as well as to AKL. We describe the analysis and transformation process, and study its efficiency, accuracy, and effectiveness in program parallelization. The information gathered by the analyzers is evaluated not only in terms of its accuracy, i.e. its ability to determine the actual dependencies among the program variables, but also of its effectiveness, measured in terms of code reduction in the resulting parallelized programs. Given that only a few abstract domains have already been defined for CLP, and that none of them were specifically designed for dependency detection, the aim of the evaluation is not only to assess the effectiveness of the available domains, but also to study what additional information it would be desirable to infer, and what domains would be appropriate for further improving the parallelization process.

Relevance:

30.00%

Abstract:

The employment of nonlinear analysis techniques for automatic voice pathology detection systems has gained popularity due to the ability of such techniques to deal with the underlying nonlinear phenomena. In this respect, characterization using nonlinear analysis typically employs the classical Correlation Dimension and the largest Lyapunov Exponent, as well as some regularity quantifiers computing the system predictability. Mostly, regularity features depend strongly on a correct choice of certain parameters. One of those, the delay time τ, is usually fixed to 1. Nonetheless, it has been stated that a unity τ cannot avoid linear correlation of the time series and hence may not correctly capture system nonlinearities. Therefore, the present work studies the influence of the τ parameter on the estimation of regularity features. Three τ estimations are considered: the baseline value 1; a τ based on the Average Automutual Information criterion; and a τ chosen from the embedding window. Testing results obtained for pathological voice suggest that an improved accuracy might be obtained by using a τ value different from 1, as it accounts for the underlying nonlinearities of the voice signal.
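As a hedged sketch of the second of those estimators, the snippet below computes the average automutual information over a range of lags from a joint histogram and picks τ at its first local minimum, a common heuristic; the bin count, lag range and function names are illustrative, not the paper's settings.

```python
import numpy as np

def automutual_information(x, max_lag, bins=16):
    """Average automutual information I(tau) of a 1-D series, estimated
    from a joint histogram of x[t] and x[t+tau]."""
    ami = []
    for tau in range(1, max_lag + 1):
        joint, _, _ = np.histogram2d(x[:-tau], x[tau:], bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        ami.append(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
    return np.array(ami)

def delay_from_ami(x, max_lag=50):
    """Pick tau at the first local minimum of the AMI curve; fall back
    to the lag of the global minimum if no local minimum exists."""
    ami = automutual_information(x, max_lag)
    for k in range(1, len(ami) - 1):
        if ami[k] < ami[k - 1] and ami[k] < ami[k + 1]:
            return k + 1            # ami[0] corresponds to lag 1
    return int(np.argmin(ami)) + 1

x = np.sin(np.linspace(0, 40 * np.pi, 4000)) + 0.1 * np.random.randn(4000)
print(delay_from_ami(x))
```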

Relevance:

30.00%

Abstract:

Automatic grading of programming assignments is an important topic in academic research. It aims at improving the level of feedback given to students and optimizing the professor's time. Several studies have reported the development of software tools to support this process, so it is helpful to get a quick, clear view of their key features. This paper reviews an ample set of tools for automatic grading of programming assignments. They are divided into the most important mature tools, which have remarkable features, and those built recently, with new features. The review includes the definition and description of key features, e.g. supported languages, technology used, infrastructure, etc. The two kinds of tools allow making a temporal comparative analysis. This analysis shows good improvements in this research field, including security, more language support, plagiarism detection, etc. On the other hand, the lack of a grading model for assignments is identified as an important gap in the reviewed tools. Thus, a characterization of evaluation metrics to grade programming assignments is provided as a first step towards such a model. Finally, new paths in this research field are proposed.

Relevance:

30.00%

Abstract:

In this paper we present an innovative technique to tackle the problem of automatic road sign detection and tracking using an on-board stereo camera. It involves a continuous 3D analysis of the road sign during the whole tracking process. Firstly, a color and appearance based model is applied to generate road sign candidates in both stereo images. A sparse disparity map between the left and right images is then created for each candidate by using contour-based and SURF-based matching in the far and short range, respectively. Once the map has been computed, the correspondences are back-projected to generate a cloud of 3D points, and the best-fit plane is computed through RANSAC, ensuring robustness to outliers. Temporal consistency is enforced by means of a Kalman filter, which exploits the intrinsic smoothness of the 3D camera motion in traffic environments. Additionally, the estimation of the plane makes it possible to correct deformations due to perspective, thus easing further sign classification.
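The RANSAC plane-fitting step can be sketched compactly. The following is a minimal version over an Nx3 point cloud; the iteration count, inlier tolerance and names are assumptions for the example, not the paper's actual parameters.

```python
import numpy as np

def ransac_plane(points, iters=200, inlier_tol=0.05, rng=None):
    """Minimal RANSAC plane fit: repeatedly fit a plane to 3 random
    points and keep the hypothesis with most inliers, giving
    robustness to outliers."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best_plane = None, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                            # degenerate (collinear) sample
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)      # point-to-plane distances
        inliers = dist < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers

rng = np.random.default_rng(1)
cloud = rng.uniform(-1, 1, (200, 3))
cloud[:150, 2] = 0.01 * rng.standard_normal(150)    # a planar majority plus outliers
(normal, d), inliers = ransac_plane(cloud)
```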

Relevance:

30.00%

Abstract:

In this paper we propose an innovative method for the automatic detection and tracking of road traffic signs using an onboard stereo camera. It involves a combination of monocular and stereo analysis strategies to increase the reliability of the detections such that it can boost the performance of any traffic sign recognition scheme. Firstly, an adaptive color and appearance based detection is applied at single camera level to generate a set of traffic sign hypotheses. In turn, stereo information allows for sparse 3D reconstruction of potential traffic signs through a SURF-based matching strategy. Namely, the plane that best fits the cloud of 3D points traced back from feature matches is estimated using a RANSAC based approach to improve robustness to outliers. Temporal consistency of the 3D information is ensured through a Kalman-based tracking stage. This also allows for the generation of a predicted 3D traffic sign model, which is in turn used to enhance the previously mentioned color-based detector through a feedback loop, thus improving detection accuracy. The proposed solution has been tested with real sequences under several illumination conditions and in both urban areas and highways, achieving very high detection rates in challenging environments, including rapid motion and significant perspective distortion.
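A constant-velocity Kalman filter is the textbook way to encode the smooth-camera-motion assumption mentioned above. Below is a toy sketch over a 3-D sign position; the state layout and noise levels are illustrative, not the paper's tuning.

```python
import numpy as np

class ConstantVelocityKalman:
    """Toy constant-velocity Kalman filter over a 3-D sign position,
    sketching the temporal-consistency stage."""
    def __init__(self, q=1e-3, r=1e-2):
        self.x = np.zeros(6)                    # state: [px, py, pz, vx, vy, vz]
        self.P = np.eye(6)
        self.F = np.eye(6)
        self.F[:3, 3:] = np.eye(3)              # position += velocity * dt (dt = 1 frame)
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # only position is measured
        self.Q, self.R = q * np.eye(6), r * np.eye(3)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]                       # predicted 3-D sign position

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x += K @ (z - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P

kf = ConstantVelocityKalman()
for z in np.array([[10.0, 0.5, 1.8], [9.6, 0.5, 1.8], [9.2, 0.5, 1.8]]):
    kf.predict()                                # feeds the feedback loop to the detector
    kf.update(z)
```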

Relevance:

30.00%

Abstract:

This paper describes the experimental set-up of a system composed of a set of wearable sensor devices for recording motion signals and software algorithms for signal analysis. This system is able to automatically detect and assess the severity of the bradykinesia, tremor, dyskinesia and akinesia motor symptoms. Based on the assessment of akinesia, the ON-OFF status of the patient is determined for each moment. The assessment performed through the automatic evaluation of akinesia is compared with the status reported by the patients in their diaries. Preliminary results for a total recording period of 32 hours with two PD patients are presented, where a good correspondence (88.2 ± 3.7%) was observed. The best (93.7%) and worst (87%) correlation results are illustrated, together with the analysis of the automatic assessment of the akinesia symptom leading to the status determination. The results obtained are promising and, if confirmed with further data, this automatic assessment of PD motor symptoms will lead to a better adjustment of medication dosages and timing, cost savings and an improved quality of life for patients.
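The reported correspondence can be read as a simple agreement percentage between time-aligned labels. A minimal sketch, with hypothetical interval labels (the system's actual interval length and labeling scheme are not specified here):

```python
def on_off_agreement(auto_status, diary_status):
    """Fraction of time-aligned intervals where the automatically
    detected ON/OFF status matches the patient diary, as a percentage.
    Both inputs are equal-length sequences of 'ON'/'OFF' labels."""
    assert len(auto_status) == len(diary_status)
    matches = sum(a == d for a, d in zip(auto_status, diary_status))
    return 100.0 * matches / len(auto_status)

auto = ["ON", "ON", "OFF", "OFF", "ON"]     # hypothetical automatic assessment
diary = ["ON", "OFF", "OFF", "OFF", "ON"]   # hypothetical patient diary
print(f"{on_off_agreement(auto, diary):.1f}%")   # 80.0%
```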

Relevance:

30.00%

Abstract:

This project surveys the technologies used for object detection and recognition, with a focus on leaves and chromosomes. The document follows the typical structure of a scientific paper: an Abstract, an Introduction, sections covering the research area, future work, conclusions and the references used in its elaboration. The Abstract outlines what the paper contains, namely the technologies employed for pattern detection and recognition of leaves and chromosomes and the existing work on cataloguing these objects. The Introduction explains the meanings of detection and recognition, which is necessary because many papers confuse these terms, especially those dealing with chromosomes. Detecting an object means gathering the parts of the image that are useful and discarding the useless ones; in short, detection amounts to recognizing the object's borders. Recognition, in contrast, is the process by which the computer or machine determines what kind of object it is handling. Next comes a compilation of the most widely used technologies for object detection in general, which fall into two main groups: those based on image derivatives and those based on ASIFT points. The derivative-based methods have in common that the image is processed by convolving it with a previously defined kernel. This is done to detect borders in the image, i.e. changes in pixel intensity. Within these technologies there are two groups: gradient-based methods, which search for maxima and minima of pixel intensity since they only use the first derivative, and Laplacian-based methods, which search for zeros of pixel intensity since they use the second derivative. The choice between them depends on the level of detail required in the final result: gradient-based methods consume fewer resources and less time because they involve fewer operations, but the quality is worse, whereas Laplacian-based methods need more time and resources because they require more operations, but produce a much better quality result. After explaining all the derivative-based methods, the paper reviews the different algorithms available in each group. The other big group of technologies for object recognition is based on ASIFT points, which describe an image through six parameters and compare it with another image under those parameters. The disadvantage of these methods, for our purposes, is that they are only valid for one single object: if we want to recognize two different leaves, even of the same species, this method cannot recognize both. It is nevertheless important to cover these technologies, since we are discussing recognition methods in general. At the end of the chapter there is a comparison of the pros and cons of all the technologies considered, first separately and then all together, in light of our purposes. The chapter on recognition techniques is not very extensive because, although there are general steps for object recognition, every object to be recognized requires its own method; hence no single general method can be specified.
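To make the gradient/Laplacian contrast concrete, here is a small sketch using the standard Sobel and Laplacian kernels; these particular kernels, the threshold, and the crude zero-crossing test are common textbook choices, not operators prescribed by the project.

```python
import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # first derivative
SOBEL_Y = SOBEL_X.T
LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])  # second derivative

def gradient_edges(img, thresh=50):
    """Gradient-based detection: maxima of the first-derivative magnitude."""
    gx = convolve(img.astype(float), SOBEL_X)
    gy = convolve(img.astype(float), SOBEL_Y)
    return np.hypot(gx, gy) > thresh

def laplacian_edges(img):
    """Laplacian-based detection: zero crossings of the second derivative
    (sign changes between horizontal neighbours, as a toy proxy)."""
    lap = convolve(img.astype(float), LAPLACIAN)
    return np.signbit(lap[:, :-1]) != np.signbit(lap[:, 1:])

img = np.zeros((32, 32)); img[8:24, 8:24] = 255    # synthetic square "leaf"
print(gradient_edges(img).sum(), laplacian_edges(img).sum())
```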
We now move on to leaf detection techniques on computers, using the derivative-based technique explained above. The next step is to turn the leaf into a set of parameters; depending on the reference consulted, there are more or fewer of them. Some papers recommend dividing the leaf into 3 main features (shape, dent and vein), from which mathematical operations yield up to 16 secondary features. Another proposal divides the leaf into 5 main features (diameter, physiological length, physiological width, area and perimeter) and extracts 12 secondary features from them. This second alternative is the most widely used, so it is taken as the reference. For leaf recognition, we rely on a paper that provides source code which, after clicking on both ends of the leaf, automatically reports the species to which the leaf belongs; all it requires is a database. In the tests reported in that document, an accuracy of 90.312% is claimed over 320 tests in total (32 plants in the database and 10 tests per species). The next chapter deals with chromosome detection, where the metaphase plate, in which the chromosomes are disorganized, must be turned into the karyotype plate, the usual view of the 23 chromosomes ordered by number. There are two types of techniques for this step: the skeletonization process and angle sweeping. Skeletonization consists of suppressing the inner pixels of the chromosome so that only its silhouette remains; this method is similar to the derivative-based ones, but it detects the interior of the chromosome rather than its borders. The second technique sweeps angles from the beginning of the chromosome and, taking into account that a single chromosome cannot bend by more than some angle X, detects its various regions. Once the karyotype plate is defined, chromosome recognition follows. For this there is a technique based on the banding pattern of the chromosomes (grey-scale bands) that makes each one unique: the program detects the longitudinal axis of the chromosome and reconstructs the band profiles, after which the computer can recognize the chromosome. Concerning future work, we generally have two independent techniques that do not combine detection and recognition, so our main focus would be to prepare a program that gathers both. For leaves we have seen that detection and recognition are linked, since both share the option of dividing the leaf into 5 main features; the work to be done is to create an algorithm linking the two methods, because the recognition program requires clicking on both leaf ends and is therefore not automatic. On the chromosome side, an algorithm should be created that searches for the beginning of the chromosome and then starts sweeping angles, later passing the parameters to the program that searches for the band profiles. Finally, the summary explains why this kind of research is needed: with global warming, many species (animals and plants) are beginning to go extinct, which is why a large database gathering all possible species is needed. To recognize an animal species, the 23 chromosomes suffice.
To recognize a plant there are several ways of doing it, but the easiest way to input it into a computer is to scan a leaf of the plant.

Relevance:

30.00%

Abstract:

The present work describes a new methodology for the automatic detection of the glottal space in laryngeal images based on active contour models (snakes). In order to obtain an appropriate image for snake-based techniques, the proposed algorithm combines a pre-processing stage including some traditional techniques (thresholding and median filtering) with more sophisticated ones such as anisotropic filtering. The threshold value was fixed at 85% of the maximum peak of the image histogram, and the anisotropic filter makes it possible to distinguish two intensity levels, one corresponding to the background and the other to the foreground (glottis). The initialization is based on the magnitude obtained using the Gradient Vector Flow field, ensuring an automatic process for the selection of the initial contour. The performance of the algorithm is tested using the Pratt coefficient and compared against a manual segmentation. The results obtained suggest that this method provides results comparable with other techniques such as the one proposed in (Osma-Ruiz et al., 2008).
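The Pratt coefficient used for validation has a standard closed form, FOM = 1/max(N_d, N_r) * sum_i 1/(1 + alpha * d_i^2). Below is a small sketch assuming binary edge maps and the conventional scaling constant alpha = 1/9; the paper may use a different variant.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def pratt_fom(detected, reference, alpha=1/9):
    """Pratt's figure of merit between a detected edge map and a
    reference (e.g. manual) segmentation; d_i is the distance from each
    detected edge pixel to the nearest reference edge pixel. Values
    close to 1 indicate close agreement."""
    dist_to_ref = distance_transform_edt(~reference.astype(bool))
    d = dist_to_ref[detected.astype(bool)]
    n = max(detected.sum(), reference.sum())
    return np.sum(1.0 / (1.0 + alpha * d**2)) / n

ref = np.zeros((16, 16), bool); ref[8, :] = True
det = np.zeros((16, 16), bool); det[9, :] = True
print(pratt_fom(det, ref))   # 0.9 for a uniform one-pixel offset
```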

Relevance:

30.00%

Abstract:

Nonlinear analysis tools for studying and characterizing the dynamics of physiological signals have gained popularity, mainly because tracking sudden alterations of the inherent complexity of biological processes might be an indicator of altered physiological states. Typically, in order to perform an analysis with such tools, the physiological variables that describe the biological process under study are used to reconstruct the underlying dynamics of the process. For that goal, a procedure called time-delay or uniform embedding is usually employed. Nonetheless, there is evidence of its inability to deal with non-stationary signals, such as those recorded from many physiological processes. To handle this drawback, this paper evaluates the utility of non-conventional time series reconstruction procedures based on non-uniform embedding, applying them to automatic pattern recognition tasks. The paper compares a state-of-the-art non-uniform approach with a novel scheme that fuses embedding and feature selection at once, searching for better reconstructions of the dynamics of the system. Results are also compared with two classic uniform embedding techniques. Thus, the goal is to compare uniform and non-uniform reconstruction techniques, including the one proposed in this work, for pattern recognition in biomedical signal processing tasks. Once the state space is reconstructed, the scheme characterizes it with three classic nonlinear dynamic features (Largest Lyapunov Exponent, Correlation Dimension and Recurrence Period Density Entropy), while classification is carried out by means of a simple k-nn classifier. In order to test its generalization capabilities, the approach was evaluated on three different physiological databases (Speech Pathologies, Epilepsy and Heart Murmurs). In terms of the accuracy obtained in automatically detecting the presence of pathologies, and for the three types of biosignals analyzed, the non-uniform techniques used in this work slightly outperformed the results obtained using the uniform methods, suggesting their usefulness for characterizing non-stationary biomedical signals in pattern recognition applications. Moreover, in view of the results obtained and its low computational load, the proposed technique appears applicable to the applications under study.
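For reference, here is a compact sketch of uniform time-delay embedding, a non-uniform variant with per-coordinate lags, and a minimal k-nn classification stage; the dimensions, lags and k are illustrative, and the actual feature-selection fusion proposed in the paper is not reproduced here.

```python
import numpy as np

def uniform_embed(x, dim=3, tau=5):
    """Uniform embedding: vectors [x(t), x(t+tau), ..., x(t+(dim-1)tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def nonuniform_embed(x, lags=(0, 3, 11)):
    """Non-uniform embedding: one individually chosen lag per coordinate."""
    m = max(lags)
    n = len(x) - m
    return np.column_stack([x[m - l: m - l + n] for l in lags])

def knn_classify(train_X, train_y, test_X, k=3):
    """Minimal k-nn classifier over feature vectors."""
    preds = []
    for q in test_X:
        idx = np.argsort(np.linalg.norm(train_X - q, axis=1))[:k]
        labels, counts = np.unique(train_y[idx], return_counts=True)
        preds.append(labels[np.argmax(counts)])
    return np.array(preds)

x = np.sin(np.linspace(0, 20 * np.pi, 2000))
print(uniform_embed(x).shape, nonuniform_embed(x).shape)
```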

Relevance:

30.00%

Abstract:

Due to the ever increasing transportation of people and goods, automatic traffic surveillance is becoming a key issue for both providing safety to road users and improving traffic control in an efficient way. In this paper, we propose a new system that, exploiting the capabilities that both computer vision and machine learning offer, is able to detect and track different types of real incidents on a highway. Specifically, it is able to accurately detect not only stopped vehicles, but also drivers and passengers leaving the stopped vehicle, and other pedestrians present in the roadway. Additionally, a theoretical approach for detecting vehicles which may leave the road in an unexpected way is also presented. The system works in real time and has been optimized for outdoor operation, making it appropriate for deployment in a real-world environment such as a highway. First experimental results on a dataset created from videos provided by two Spanish highway operators demonstrate the effectiveness of the proposed system and its robustness against noise and low-quality videos.

Relevance:

30.00%

Abstract:

Different treatments (consolidating and water-repellent) were applied to samples of marble and granite from the front stage of the Roman Theatre of Mérida (Spain). The main goal is to study the effects of these treatments on archaeological stone material by analyzing the surface changes. X-Ray Fluorescence and Laser-Induced Breakdown Spectroscopy techniques, as well as Nuclear Magnetic Resonance, have been used to study changes in the surface properties of the material, comparing treated and untreated specimens. The results confirm that tracking the silicon (Si) marker allows the detection of the applied treatments, the peak signal increasing in treated specimens. Furthermore, it is also possible to prove changes both within the pore system of the material and in the distribution of surface water, resulting from the application of these products.