857 results for Detection and segmentation
Abstract:
In recent years, many acoustic signal processing techniques have been applied to a wide range of applications. In most cases, these sensor systems are based on determining the time of flight of the signal arriving at each transducer. This paper presents a flat-plate generalization method for impact detection and location on structures based on linear links or bars. Three piezoelectric sensors are enough to determine the impact position and time, while additional sensors cover a larger detection area and help avoid erroneous time-difference measurements. An experimental setup and some experimental results are briefly presented.
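To make the time-of-flight idea concrete, here is a minimal numerical sketch (a generic least-squares arrival-time solver with made-up sensor positions, wave speed, and arrival times, not the authors' formulation): given the arrival time at each of three transducers and a known propagation speed, the impact position and emission time are jointly estimated.

import numpy as np
from scipy.optimize import least_squares

# Sensor coordinates on the plate (m) and wave speed -- illustrative values only.
sensors = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5]])
c = 1500.0                                        # assumed propagation speed (m/s)
t_arrival = np.array([1.67e-4, 2.24e-4, 2.69e-4]) # measured arrival times (s)

def residuals(p):
    # p = [x, y, t0]: impact position and (unknown) impact time.
    x, y, t0 = p
    d = np.linalg.norm(sensors - [x, y], axis=1)  # sensor-to-impact distances
    return t_arrival - (t0 + d / c)               # predicted vs measured times

sol = least_squares(residuals, x0=[0.2, 0.2, 0.0])
x_hat, y_hat, t0_hat = sol.x
print(f"estimated impact at ({x_hat:.3f}, {y_hat:.3f}) m, time {t0_hat*1e6:.1f} us")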
Abstract:
This paper presents the detection and identification of hydrocarbons through fluoro-sensing, developing a simple and inexpensive detector for inland waters, in contrast to current systems, which are designed for marine waters at long range and are extremely costly. To validate the proposed system, three test benches were assembled with various UV-light sources. The main application of this system would be the detection of hydrocarbon pollution in rivers, lakes or dams, which is of growing interest to public administrations.
Abstract:
Active optical sensing devices (LIDAR and light-curtain transmission) mounted on a mobile platform can correctly detect, localize, and classify trees. To evaluate and compare the different sensors, an optical encoder wheel was used for vehicle odometry, providing a measurement of the linear displacement of the prototype vehicle along a row of tree seedlings as a reference for each recorded sensor measurement. The field trials were conducted in a juvenile tree nursery with one-year-old grafted almond trees at Sierra Gold Nurseries, Yuba City, CA, United States. Through these tests and subsequent data processing, each sensor was individually evaluated to characterize its reliability, as well as its advantages and disadvantages for the proposed task. Test results indicated that 95.7% and 99.48% of the trees were successfully detected with the LIDAR and light curtain sensors, respectively. The LIDAR classified trees as alive or dead with a 93.75% success rate, compared to 94.16% for the light curtain sensor. These results can help system designers select the most reliable sensor for the accurate detection and localization of each tree in a nursery, which might allow labor-intensive tasks, such as weeding, to be automated without damaging crops.
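As an illustration of the kind of processing such data admits (a generic sketch with invented numbers, not the authors' pipeline), consecutive light-curtain interruption samples can be grouped along the encoder-derived travel distance to detect and localize individual trees:

import numpy as np

# Illustrative data: odometry position (m) of each sample and whether the
# light-curtain beam was blocked at that sample (assumed format, not the paper's).
positions = np.array([0.10, 0.11, 0.12, 0.45, 0.46, 0.47, 0.48, 0.90, 0.91])
blocked   = np.ones(len(positions), dtype=bool)

def detect_trees(positions, blocked, gap=0.05):
    """Group blocked samples separated by less than `gap` metres into one tree."""
    hits = positions[blocked]
    trees = []
    start = prev = hits[0]
    for p in hits[1:]:
        if p - prev > gap:              # a larger gap starts a new tree
            trees.append((start + prev) / 2)
            start = p
        prev = p
    trees.append((start + prev) / 2)
    return trees                        # estimated trunk centre positions (m)

print(detect_trees(positions, blocked))  # three clusters -> three tree positions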
Abstract:
In this paper we present an innovative technique to tackle the problem of automatic road sign detection and tracking using an on-board stereo camera. It involves a continuous 3D analysis of the road sign during the whole tracking process. First, a color- and appearance-based model is applied to generate road sign candidates in both stereo images. A sparse disparity map between the left and right images is then created for each candidate by using contour-based and SURF-based matching in the far and near range, respectively. Once the map has been computed, the correspondences are back-projected to generate a cloud of 3D points, and the best-fit plane is computed through RANSAC, ensuring robustness to outliers. Temporal consistency is enforced by means of a Kalman filter, which exploits the intrinsic smoothness of the 3D camera motion in traffic environments. Additionally, the estimated plane allows deformations due to perspective to be corrected, thus easing further sign classification.
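A minimal sketch of the RANSAC plane-fitting step (generic code with an assumed iteration count and inlier threshold, not the paper's parameters): random triplets of back-projected 3D points propose candidate planes, and the plane supported by the most inliers is kept.

import numpy as np

def ransac_plane(points, n_iter=200, dist_thresh=0.05, rng=np.random.default_rng(0)):
    """Fit a plane to a 3D point cloud, robust to outliers."""
    best_inliers, best_plane = 0, None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ p0
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        inliers = np.count_nonzero(dist < dist_thresh)
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers

# Example: noisy points on the plane z = 1, plus a few outliers.
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(-1, 1, 100), rng.uniform(-1, 1, 100),
                       np.ones(100) + rng.normal(0, 0.01, 100)])
pts = np.vstack([pts, rng.uniform(-1, 1, (10, 3))])
plane, n_in = ransac_plane(pts)
print("plane normal:", plane[0], "inliers:", n_in)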
Abstract:
In this paper we propose an innovative method for the automatic detection and tracking of road traffic signs using an on-board stereo camera. It involves a combination of monocular and stereo analysis strategies to increase the reliability of the detections, so that it can boost the performance of any traffic sign recognition scheme. First, an adaptive color- and appearance-based detection is applied at the single-camera level to generate a set of traffic sign hypotheses. In turn, stereo information allows for sparse 3D reconstruction of potential traffic signs through a SURF-based matching strategy. Specifically, the plane that best fits the cloud of 3D points traced back from feature matches is estimated using a RANSAC-based approach to improve robustness to outliers. Temporal consistency of the 3D information is ensured through a Kalman-based tracking stage. This also allows for the generation of a predicted 3D traffic sign model, which is in turn used to enhance the previously mentioned color-based detector through a feedback loop, thus improving detection accuracy. The proposed solution has been tested with real sequences under several illumination conditions and in both urban areas and highways, achieving very high detection rates in challenging environments, including rapid motion and significant perspective distortion.
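The temporal-consistency stage can be illustrated with a minimal constant-velocity Kalman filter over the sign's 3D centre; the state model, frame rate, and noise covariances below are assumptions, not necessarily those used in the paper:

import numpy as np

dt = 1.0 / 30.0                                   # assumed frame period (s)
F = np.block([[np.eye(3), dt * np.eye(3)],
              [np.zeros((3, 3)), np.eye(3)]])     # constant-velocity transition
H = np.hstack([np.eye(3), np.zeros((3, 3))])      # only the 3D position is observed
Q = 1e-3 * np.eye(6)                              # process noise (tuning assumption)
R = 1e-2 * np.eye(3)                              # measurement noise (tuning assumption)

x = np.zeros(6)                                   # state: [px, py, pz, vx, vy, vz]
P = np.eye(6)

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured 3D sign centre z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P

for z in [np.array([5.0, 1.2, 2.0]), np.array([4.8, 1.2, 2.0])]:
    x, P = kalman_step(x, P, z)
print("filtered sign position:", x[:3])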
Abstract:
The project presented here deals with the technologies used for object detection and recognition, particularly of leaves and chromosomes. The document contains the typical parts of a scientific paper: an abstract, an introduction, sections related to the research area, future work, conclusions, and the references used in its preparation. The abstract describes what can be found in this paper, namely the technologies employed for pattern detection and recognition of leaves and chromosomes, and the existing work on cataloguing these objects. The introduction explains the meanings of detection and recognition. This is necessary because many papers confuse these terms, especially those dealing with chromosomes. Detecting an object means gathering the parts of the image that are useful and discarding the useless ones; in short, detection amounts to recognizing the object's borders. Recognition refers to the process by which the computer, or the machine, determines what kind of object is being handled. The paper then presents a compilation of the technologies most used for object detection in general. There are two main groups in this category: those based on image derivatives and those based on ASIFT points. The methods based on image derivatives have in common that the image is processed by convolving it with a previously defined kernel. This is done to detect edges in the image, which are changes in pixel intensity. Within these technologies there are two groups: gradient-based methods, which search for maxima and minima of pixel intensity because they use only the first derivative, and Laplacian-based methods, which search for zero crossings because they use the second derivative. The choice between them depends on the level of detail required in the final result: gradient-based methods consume fewer resources and less time because they involve fewer operations, but the quality is lower, whereas Laplacian-based methods need more time and resources because they require more operations, but produce a much better-quality result. After explaining the derivative-based methods, the different algorithms available for both groups are reviewed. The other large group of technologies for object recognition is based on ASIFT points, which rely on six image parameters and compare one image with another while taking these parameters into account. The disadvantage of these methods, for our future purposes, is that they are only valid for a single object: if we want to recognize two different leaves, even from the same species, this method cannot recognize them. It is nevertheless important to mention these technologies, since recognition methods are being discussed in general. At the end of the chapter a comparison of the pros and cons of all the technologies is given, first separately and then all together, with respect to our purposes. The recognition techniques, covered in the next chapter, are not very extensive because, even though there are general steps for object recognition, every object to be recognized requires its own method, since they are all different; this is why no general method can be specified in that chapter.
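To make the gradient-versus-Laplacian distinction concrete, here is a small sketch (generic image-processing code, not part of the project) that convolves a toy image with Sobel kernels for first-derivative edges and with a Laplacian kernel whose zero crossings mark second-derivative edges:

import numpy as np
from scipy.ndimage import convolve

# Toy grayscale image: dark background with a bright square.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0

# Gradient-based (first derivative): Sobel kernels.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T
gx = convolve(img, sobel_x)
gy = convolve(img, sobel_y)
gradient_magnitude = np.hypot(gx, gy)     # edges = maxima of the magnitude

# Laplacian-based (second derivative): edges = zero crossings of the response.
laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
lap = convolve(img, laplacian)

print("strong gradient pixels:", int((gradient_magnitude > 1.0).sum()))
print("Laplacian response range:", lap.min(), lap.max())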
We then move on to computer-based leaf detection techniques, using the image-derivative approach explained above. The next step is to convert the leaf into a set of parameters; depending on the reference consulted, there are more or fewer of them. Some papers recommend dividing the leaf into 3 main features (shape, dent and vein) and, by performing mathematical operations on them, obtaining up to 16 secondary features. Another proposal divides the leaf into 5 main features (diameter, physiological length, physiological width, area and perimeter) and extracts 12 secondary features from them. This second alternative is the most widely used, so it is taken as the reference. Moving on to leaf recognition, we rely on a paper that provides source code which, after clicking on both ends of the leaf, automatically reports the species to which the leaf belongs; the only requirement is a database. In the tests reported in that document, an accuracy of 90.312% is claimed over 320 tests in total (32 plants in the database and 10 tests per species). The next chapter deals with chromosome detection, where the metaphase plate, in which the chromosomes are disorganized, must be converted into the karyotype plate, the usual view of the 23 chromosomes ordered by number. There are two types of techniques for this step: the skeletonization process and angle sweeping. Skeletonization consists of suppressing the interior pixels of the chromosome so that only the silhouette remains; this method is very similar to those based on image derivatives, but the difference is that it detects the interior of the chromosome rather than its borders. The second technique consists of sweeping angles from the beginning of the chromosome and, taking into account that a single chromosome cannot bend by more than a given angle, detecting the various regions of the chromosome. Once the karyotype plate is defined, we continue with chromosome recognition. For this there is a technique based on the banding pattern of the chromosomes (grey-scale bands) that makes each one unique: the program detects the longitudinal axis of the chromosome and reconstructs the band profiles, after which the computer is able to recognize the chromosome. Concerning future work, we currently have two independent techniques that do not combine detection and recognition, so our main goal would be to prepare a program that brings both together. For leaves we have seen that detection and recognition are linked, since both share the option of dividing the leaf into 5 main features; the work to be done is to create an algorithm that connects both methods, because in the current recognition program both ends of the leaf must be clicked, so it is not an automatic algorithm. On the chromosome side, an algorithm should be created that searches for the beginning of the chromosome and then starts sweeping angles, later passing the parameters to the program that searches for the band profiles. Finally, the summary explains why this type of research is needed: with global warming, many species (animals and plants) are beginning to go extinct, which is why a large database gathering all possible species is needed. To recognize an animal species, only its 23 chromosomes are needed.
To recognize a plant species there are several ways, but the easiest way to provide input to a computer is to scan a leaf of the plant.
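As a sketch of the 5-main-feature approach mentioned above, a few secondary descriptors can be computed from the main measurements; the formulas below are common choices in leaf-recognition work and are assumptions, not necessarily the exact twelve features of the cited source:

# Illustrative leaf measurements (cm); the feature definitions below are
# common morphological descriptors and are assumptions, not the exact
# formulas of the referenced paper.
diameter = 9.2          # longest distance between two points on the margin
phys_length = 8.7       # physiological length (between the two clicked ends)
phys_width = 4.1        # physiological width (perpendicular to the length)
area = 27.5             # leaf area
perimeter = 24.3        # leaf perimeter

PI = 3.141592653589793
secondary = {
    "aspect_ratio": phys_length / phys_width,
    "form_factor": 4 * PI * area / perimeter**2,
    "rectangularity": area / (phys_length * phys_width),
    "perimeter_ratio_of_diameter": perimeter / diameter,
    "narrow_factor": diameter / phys_length,
}
for name, value in secondary.items():
    print(f"{name}: {value:.3f}")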
Abstract:
The use of a common environment for processing different powder foods in industry has increased the risk of finding peanut traces in powder foods. The analytical methods commonly used for peanut detection, such as enzyme-linked immunosorbent assay (ELISA) and real-time polymerase chain reaction (RT-PCR), offer high specificity and sensitivity but are destructive and time-consuming, and require highly skilled operators. The feasibility of NIR hyperspectral imaging (HSI) is studied for the detection of peanut traces down to 0.01% by weight. A principal-component analysis (PCA) was carried out on a dataset of peanut and flour spectra. The obtained loadings were applied to the HSI images of wheat flour samples adulterated with peanut traces. As a result, the HSI images were reduced to score images with enhanced contrast between peanut and flour particles. Finally, a threshold was fixed in the score images to obtain a binary classification image, and the percentage of peanut adulteration was compared with the percentage of pixels identified as peanut particles. This study allowed the detection of peanut traces down to 0.01% and the quantification of peanut adulteration from 10% to 0.1% with a coefficient of determination (r2) of 0.946. These results show the feasibility of using HSI systems for the detection of peanut traces in conjunction with chemical procedures, such as RT-PCR and ELISA, to facilitate enhanced quality-control surveillance on food-product processing lines.
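A minimal sketch of the score-image and thresholding steps (generic PCA code on a synthetic data cube; the band count, image size, and 3-sigma threshold are assumptions, not the study's parameters):

import numpy as np

# Synthetic HSI cube: 50 x 50 pixels, 100 spectral bands (assumed sizes).
rng = np.random.default_rng(0)
cube = rng.normal(0.5, 0.02, (50, 50, 100))
cube[10:15, 10:15, :] += np.linspace(0, 0.3, 100)   # "peanut-like" pixels

# PCA loadings: in the study these come from a reference set of peanut and
# flour spectra; here they are computed directly from the cube for brevity.
X = cube.reshape(-1, 100)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
loading = Vt[0]                               # first principal component

scores = (Xc @ loading).reshape(50, 50)       # score image with enhanced contrast
# The sign of a principal component is arbitrary, so threshold on deviation.
binary = np.abs(scores - scores.mean()) > 3 * scores.std()
print("pixels classified as peanut:", int(binary.sum()))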
Abstract:
Different treatments (consolidant and water-repellent) were applied to samples of marble and granite from the front stage of the Roman Theatre of Merida (Spain). The main goal is to study the effects of these treatments on archaeological stone material by analyzing the surface changes. X-Ray Fluorescence and Laser-Induced Breakdown Spectroscopy techniques, as well as Nuclear Magnetic Resonance, have been used to study changes in the surface properties of the material, comparing treated and untreated specimens. The results confirm that tracking the silicon (Si) marker allows the applied treatments to be detected, as the peak signal increases in treated specimens. Furthermore, it is also possible to demonstrate changes both within the pore system of the material and in the distribution of surface water resulting from the application of these products.
Abstract:
One of the most common faults in synchronous generators is the ground fault, in both the stator winding and the excitation winding. In case of a fault, the insulation level between the active parts of either of these windings and ground decreases considerably, or even disappears. The detection of ground faults in both windings is a widely researched topic. The fault current is typically limited intentionally to a low level; this makes ground faults easy to detect and avoids damage to the generator. After the detection and confirmation of a ground fault, it must be located along the winding in order to repair the machine. The rotor then has to be extracted, which is a very complex and expensive operation. Moreover, limiting the fault current means that the insulation failure is not visually detectable, because there is no visible damage in the generator, so laborious techniques have to be applied to locate the fault accurately. In order to reduce the repair time, and therefore the time the generator is out of service, any information from the protection relay about the approximate location of the fault would be very useful. The main objective of this doctoral thesis has been the development of new algorithms and methods to estimate the location of ground faults in the stator and rotor windings of synchronous generators. Regarding the excitation winding, a new method for locating ground faults in the excitation winding of synchronous machines with static excitation is presented. This method even allows determining whether the fault is in the excitation winding or in any other component of the excitation system (controlled rectifier, excitation transformer, etc.). In the case of a ground fault in the rotor winding, the method provides an estimate of the fault location; however, the value of the fault resistance is needed to calculate it, so a new algorithm for accurately estimating this parameter is also presented. Finally, a new fault-detection algorithm based on a directional criterion is described to complement the fault-location method. This algorithm takes into account the influence of the capacitance to ground of the system, which has a remarkable impact on the accuracy of the fault location. Regarding the stator winding, a new fault-location algorithm is presented for the stator winding of synchronous generators with ground-fault protection based on low-frequency injection. A general algorithm, which takes every parameter of the system into account, has been proposed, as well as a simplified version for generators with an especially low capacitance to ground, which could easily be implemented in commercial protective relays. The proposed methods and algorithms have been validated on a 5 kVA laboratory generator, as well as on a 106 MVA commercial synchronous generator, with satisfactory and promising results.
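As a rough illustration of why the fault resistance must be known before a rotor ground fault can be located (a toy voltage-divider model under strong simplifying assumptions, not the thesis's algorithm): with a uniform field winding supplied with voltage V_exc, a fault at per-unit position alpha from the negative terminal drives a ground current through the fault resistance R_f and the grounding/measuring resistance R_m, so the measured voltage only yields alpha if R_f is known.

# Toy voltage-divider model of a rotor ground fault (illustrative only; the
# thesis's actual location method for static-excitation systems differs).
V_exc = 400.0      # excitation (field) voltage, V
R_m   = 1000.0     # grounding / measuring resistance, ohm
R_f   = 5000.0     # fault resistance, ohm (must be estimated in practice)

def measured_voltage(alpha):
    """Voltage across R_m for a fault at per-unit position alpha (0..1)."""
    return alpha * V_exc * R_m / (R_f + R_m)

def estimate_location(V_m, R_f_est):
    """Invert the divider: the location estimate requires the fault resistance."""
    return V_m * (R_f_est + R_m) / (V_exc * R_m)

V_m = measured_voltage(0.65)                               # simulated measurement
print("estimated alpha:", estimate_location(V_m, R_f))     # recovers 0.65
print("with wrong R_f :", estimate_location(V_m, 2000.0))  # biased estimate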
Abstract:
Acquired brain injury constitutes a serious social and health problem of increasing magnitude and of great diagnostic and therapeutic complexity. Its high incidence, together with the increased survival of patients once the acute phase is over, also makes it a highly prevalent problem. In particular, according to the World Health Organization (WHO), brain injury will be among the 10 most common causes of disability by 2020. Neurorehabilitation improves both cognitive and functional deficits and increases the autonomy of brain injury patients. The incorporation of new technological solutions into the neurorehabilitation process aims to reach a new paradigm in which treatments can be designed to be intensive, personalized, monitored and evidence-based, since these four characteristics are what ensure that treatments are effective. Unlike most medical disciplines, there are no associations of symptoms and signs of cognitive impairment that can guide therapy. Currently, neurorehabilitation treatments are designed from the results obtained in a neuropsychological assessment battery, which evaluates the degree of impairment of each cognitive function (memory, attention, executive functions, etc.). The research line in which this work falls aims to design and develop a cognitive profile based not only on the results of that test battery, but also on theoretical information covering both anatomical structures and functional relationships, and on anatomical information obtained from imaging studies such as magnetic resonance. The cognitive profile used to design the treatments therefore integrates personalized, evidence-based information.
Neuroimaging techniques are a fundamental tool for identifying lesions and generating these cognitive profiles. The classical approach to lesion identification consists of manually delineating brain anatomical regions, which presents several problems related to inconsistency of criteria across clinicians, reproducibility and time. Automating this procedure is therefore essential to ensure an objective extraction of information. Automatic delineation of anatomical regions is performed by registration against an atlas or against imaging studies of other subjects. However, the pathological changes associated with acquired brain injury are always accompanied by intensity abnormalities and/or changes in the location of structures. As a consequence, traditional intensity-based registration algorithms do not work correctly and require the clinician to select certain points (called singular points in this thesis), which has the same drawbacks as the manual approach. Moreover, these algorithms do not allow large, delocalized deformations, which can also occur in the presence of lesions caused by a stroke or a traumatic brain injury (TBI). This thesis focuses on the design, development and implementation of a methodology for the automatic detection of lesioned structures, integrating algorithms whose main objective is to generate reproducible and objective results. The methodology is divided into four stages: pre-processing, singular point identification, registration and lesion detection.
The work and results of this thesis are as follows. Pre-processing: the aim of this first stage is to standardize all input data so that valid conclusions can be drawn from the results; it therefore has a large impact on the final results and consists of three operations: skull-stripping, intensity normalization and spatial normalization. Singular point identification: this stage automates the identification of anatomical points (singular points), replacing their manual identification by the clinician. This makes it possible to identify a larger number of points, which means more information; to remove the factor associated with inter-subject variability, so the results are reproducible and objective; and to eliminate the time spent on manual marking. This thesis proposes a singular-point identification algorithm (descriptor) based on a multi-detector approach that contains multi-parametric information, both spatial and intensity-based; it has been compared with similar algorithms found in the state of the art. Registration: the goal of this stage is to bring two imaging studies of different subjects/patients into spatial correspondence. The algorithm proposed in this work is based on descriptors, and its main objective is to compute a vector field that can introduce delocalized deformations (in different regions of the image) as large as the associated deformation vector indicates. The proposed algorithm has been compared with other registration algorithms used in neuroimaging applications with control subjects; the results obtained are promising and represent a new context for the automatic identification of structures. Lesion identification: this final stage identifies those structures whose characteristics associated with spatial location and area or volume have been modified with respect to a normal state. To do so, a statistical study of the atlas to be used is performed and the statistical parameters of normality associated with location and area are established. Depending on the structures delineated in the atlas, more or fewer anatomical structures can be identified; the methodology is independent of the selected atlas. Overall, this doctoral thesis corroborates the research hypotheses regarding the automatic identification of lesions using structural medical imaging studies, specifically magnetic resonance studies. On these foundations, new research fields can be opened that contribute to improving lesion detection.
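A minimal sketch of the statistical lesion-detection idea in the last stage (generic code; the structure names, atlas statistics and the 2-sigma criterion are assumptions, not the thesis's exact parameters):

import numpy as np

# Atlas statistics per structure: mean and std of centroid (x, y, z) and volume,
# assumed to have been estimated beforehand from a set of control subjects.
atlas = {
    "hippocampus_L": {"centroid": np.array([-25.0, -20.0, -15.0]),
                      "centroid_std": np.array([2.0, 2.0, 2.0]),
                      "volume": 3500.0, "volume_std": 300.0},
}

# Measurements for one patient after registration (illustrative values).
patient = {
    "hippocampus_L": {"centroid": np.array([-25.5, -19.0, -14.0]), "volume": 2400.0},
}

def flag_lesioned(atlas, patient, n_sigma=2.0):
    flagged = []
    for name, ref in atlas.items():
        meas = patient[name]
        z_pos = np.abs(meas["centroid"] - ref["centroid"]) / ref["centroid_std"]
        z_vol = abs(meas["volume"] - ref["volume"]) / ref["volume_std"]
        if (z_pos > n_sigma).any() or z_vol > n_sigma:   # outside normality
            flagged.append(name)
    return flagged

print("structures flagged as lesioned:", flag_lesioned(atlas, patient))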
Abstract:
A highly sensitive assay combining immunomagnetic enrichment with multiparameter flow cytometric and immunocytochemical analysis has been developed to detect, enumerate, and characterize carcinoma cells in the blood. The assay can detect one epithelial cell or less in 1 ml of blood. Peripheral blood (10–20 ml) from 30 patients with carcinoma of the breast, from 3 patients with prostate cancer, and from 13 controls was examined by flow cytometry for the presence of circulating epithelial cells, defined as nucleic acid+, CD45−, and cytokeratin+. Highly significant differences in the number of circulating epithelial cells were found between normal controls and patients with cancer, including 17 with organ-confined disease. To determine whether the circulating epithelial cells in the cancer patients were neoplastic, cytospin preparations were made after immunomagnetic enrichment and analyzed. Epithelial cells from patients with breast cancer generally stained with mAbs against cytokeratin, and 3 of 5 stained for mucin-1. In contrast, no cells staining for these antigens were observed in the blood from normal controls. The morphology of the stained cells was consistent with that of neoplastic cells. In 8 patients with breast cancer followed for 1–10 months, changes in the level of tumor cells in the blood correlated well with both chemotherapy treatment and clinical status. The present assay may be helpful in early detection, in monitoring disease, and in prognostication.
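A small sketch of the gating logic implied by the phenotype definition above (nucleic acid+, CD45−, cytokeratin+); the event layout and the thresholds are illustrative assumptions, not the assay's actual settings:

import numpy as np

# Simulated flow-cytometry events: fluorescence intensities per channel
# (arbitrary units). Real data would come from an FCS file.
rng = np.random.default_rng(0)
n = 100_000
events = {
    "nucleic_acid": rng.lognormal(2.0, 1.0, n),
    "CD45":         rng.lognormal(3.0, 1.0, n),
    "cytokeratin":  rng.lognormal(0.5, 1.0, n),
}

# Gate thresholds (assumed; in practice set from stained and unstained controls).
NA_POS, CD45_NEG, CK_POS = 20.0, 5.0, 20.0

epithelial = (
    (events["nucleic_acid"] > NA_POS) &     # nucleic acid positive
    (events["CD45"] < CD45_NEG) &           # CD45 negative (not a leukocyte)
    (events["cytokeratin"] > CK_POS)        # cytokeratin positive
)
print("candidate circulating epithelial cells:", int(epithelial.sum()))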
Abstract:
Intact Escherichia coli ribosomes have been projected into the gas phase of a mass spectrometer by means of nanoflow electrospray techniques. Species with mass/charge ratios in excess of 20,000 were detected at the level of individual ions by using time-of-flight analysis. Once in the gas phase, the stability of intact ribosomes was investigated and found to increase as a result of cross-linking ribosomal proteins to the rRNA. By lowering the Mg2+ concentration in solutions containing ribosomes, the particles were found to dissociate into 30S and 50S subunits. The resolution of the charge states in the spectrum of the 30S subunit enabled its mass to be determined as 852,187 ± 3,918 Da, a value within 0.6% of that calculated from the individual proteins and the 16S RNA. Further dissociation into smaller macromolecular complexes and then individual proteins could be induced by subjecting the particles to increasingly energetic gas-phase collisions. The ease with which proteins dissociated from the intact species was found to be related to their known interactions in the ribosome particle. The results show that emerging mass spectrometric techniques can be used to characterize a fully functional biological assembly as well as its isolated components.
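The way a mass is obtained from resolved charge states can be shown with a short calculation (standard electrospray charge-state arithmetic; the m/z values below are invented for illustration and are not the paper's data):

# Adjacent peaks in a charge-state series differ by one proton:
#   r1 = (M + z*mp)/z  and  r2 = (M + (z+1)*mp)/(z+1),  with r1 > r2.
mp = 1.00728                              # proton mass, Da

def mass_from_adjacent_peaks(r1, r2):
    """r1 > r2 are m/z values of adjacent peaks (charges z and z+1)."""
    z = round((r2 - mp) / (r1 - r2))      # solve the two equations for z
    return z * (r1 - mp), z               # molecular mass M and charge z

# Example: a hypothetical ~852 kDa particle observed at charges 40 and 41.
M_true, z = 852_000.0, 40
r1 = (M_true + z * mp) / z
r2 = (M_true + (z + 1) * mp) / (z + 1)
print(mass_from_adjacent_peaks(r1, r2))   # recovers (852000.0, 40)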
Abstract:
One of the earliest events in programmed cell death is the externalization of phosphatidylserine, a membrane phospholipid normally restricted to the inner leaflet of the lipid bilayer. Annexin V, an endogenous human protein with a high affinity for membrane-bound phosphatidylserine, can be used in vitro to detect apoptosis before other well described morphologic or nuclear changes associated with programmed cell death. We tested the ability of exogenously administered radiolabeled annexin V to concentrate at sites of apoptotic cell death in vivo. After derivatization with hydrazinonicotinamide, annexin V was radiolabeled with technetium 99m. In vivo localization of technetium 99m hydrazinonicotinamide-annexin V was tested in three models: fulminant hepatic apoptosis induced by anti-Fas antibody injection in BALB/c mice; acute rejection in ACI rats with transplanted heterotopic PVG cardiac allografts; and cyclophosphamide treatment of transplanted 38C13 murine B cell lymphomas. External radionuclide imaging showed a two- to sixfold increase in the uptake of radiolabeled annexin V at sites of apoptosis in all three models. Immunohistochemical staining of cardiac allografts for exogenously administered annexin V revealed intense staining of numerous myocytes at the periphery of mononuclear infiltrates, of which only a few demonstrated positive apoptotic nuclei by the terminal deoxynucleotidyltransferase-mediated UTP end labeling method. These results suggest that radiolabeled annexin V can be used in vivo as a noninvasive means to detect and serially image tissues and organs undergoing programmed cell death.