984 results for image acquisition
Abstract:
The application of computer-vision-based quality control has been slowly but steadily gaining importance, mainly because of the speed with which results are obtained and its non-destructive nature of testing. In food applications it also does not contribute to contamination. However, computer vision applications in quality control require appropriate software for image analysis. Even though computer-vision-based quality control has several advantages, its application is limited as to the type of work that can be done, particularly in the food industries. Selective applications, however, can be highly advantageous and very accurate. Computer-vision-based image analysis could be used in morphometric measurements of fish with the same accuracy as the existing conventional method. The method is non-destructive and non-contaminating, thus providing an advantage in seafood processing. The images can be stored in archives and retrieved at any time so that biologists can carry out morphometric studies. Computer vision and subsequent image analysis could also be used in measurements of various food products to assess uniformity of size. One product, namely cutlet, and product ingredients, namely coating materials such as bread crumbs and rava, were selected for the study. Computer-vision-based image analysis was used to measure the length, width and area of cutlets, and the width of coating materials such as bread crumbs was also measured. Computer imaging and subsequent image analysis can be used very effectively in quality evaluation of product ingredients in food processing: measurement of the width of coating materials could establish uniformity of particles, or the lack of it. The application of image analysis to bacteriological work was also examined.
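The kind of morphometric measurement described above reduces, once a calibrated image is available, to segmenting the object and reading dimensions off its contour. Below is a minimal OpenCV sketch, assuming a single dark object on a light background; the file name, threshold choice and pixel-to-millimetre scale are illustrative assumptions, not the study's software.

```python
# Illustrative sketch (not the author's software): measure length, width and
# area of a single object (e.g. a fish or a cutlet) from a calibrated image.
import cv2
import numpy as np

MM_PER_PIXEL = 0.5          # assumed scale factor from a calibration target

img = cv2.imread("sample.png")                       # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Segment the darker object from a lighter background (Otsu threshold).
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Keep the largest contour, assumed to be the object of interest.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
obj = max(contours, key=cv2.contourArea)

# A minimum-area bounding rectangle gives length and width; the contour gives area.
(_, _), (w, h), _ = cv2.minAreaRect(obj)
length_mm = max(w, h) * MM_PER_PIXEL
width_mm = min(w, h) * MM_PER_PIXEL
area_mm2 = cv2.contourArea(obj) * MM_PER_PIXEL ** 2

print(f"length={length_mm:.1f} mm  width={width_mm:.1f} mm  area={area_mm2:.1f} mm^2")
```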
Abstract:
Introduction: The diagnosis of post-sternotomy sternal osteomyelitis is difficult on the basis of clinical or laboratory findings, and morphological imaging raises suspicion rather than establishing the diagnosis. Early diagnosis offers quality of life and the best treatment to reduce a mortality that ranges between 14% and 47%. Labelled-leukocyte scintigraphy offers the best diagnostic performance for infection and stands out as the diagnostic gold standard. Objective: To determine the performance and usefulness of scintigraphy with autologous leukocytes labelled with 99mTc-HMPAO in studies performed for the evaluation of sternal osteomyelitis. Materials and methods: A descriptive, retrospective diagnostic-test study was carried out at the Fundación Cardioinfantil in Bogotá between January 2010 and May 2015, evaluating labelled-leukocyte scintigraphies performed for suspected osteomyelitis after sternotomy. Results: Fifty-two patients were evaluated; labelled-leukocyte scintigraphy showed sternal osteomyelitis in 23 patients (44.2%), achieving a sensitivity of 88.46% and a specificity of 100%. The positive predictive value was 100% and the negative predictive value was 89.66%. A negative test did not modify the initial medical management in 93% of cases, whereas a positive test modified it in 83%. Conclusions: Scintigraphy with autologous leukocytes radiolabelled with 99mTc-HMPAO remains the non-invasive reference gold standard for the diagnosis of osteomyelitis and, in the case of sternal osteomyelitis, becomes the test of choice for selecting patients who warrant surgical re-intervention.
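The reported performance figures are internally consistent: under the assumption that all 52 patients had a final reference diagnosis, they imply 23 true positives, 3 false negatives, 0 false positives and 26 true negatives, which can be checked in a few lines.

```python
# Confusion-matrix check of the reported figures (counts inferred from the
# abstract under the stated assumption; not taken from the original data set).
TP, FN, FP, TN = 23, 3, 0, 26       # implied 2x2 table, 52 patients in total

sensitivity = TP / (TP + FN)        # 23/26 = 0.8846  -> 88.46 %
specificity = TN / (TN + FP)        # 26/26 = 1.0000  -> 100 %
ppv = TP / (TP + FP)                # 23/23 = 1.0000  -> 100 %
npv = TN / (TN + FN)                # 26/29 = 0.8966  -> 89.66 %

print(f"Se={sensitivity:.2%}  Sp={specificity:.2%}  PPV={ppv:.2%}  NPV={npv:.2%}")
```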
Abstract:
The human visual ability to perceive depth looks like a puzzle. We perceive three-dimensional spatial information quickly and efficiently by using the binocular stereopsis of our eyes and, what is more important, the knowledge of the most common objects that we acquire through living. Nowadays, modelling the behaviour of our brain is a fiction; that is why the huge problem of 3D perception and, further, interpretation is split into a sequence of easier problems. A lot of research in robot vision aims to obtain 3D information about the surrounding scene. Most of this research is based on modelling human stereopsis by using two cameras as if they were two eyes. This method is known as stereo vision; it has been widely studied in the past, is being studied at present, and a lot of work will surely be done in the future. This fact allows us to affirm that this topic is one of the most interesting ones in computer vision. The stereo vision principle is based on obtaining the three-dimensional position of an object point from the positions of its projections in both camera image planes. However, before inferring 3D information, the mathematical models of both cameras have to be known. This step is known as camera calibration and is described in detail in the thesis. Perhaps the most important problem in stereo vision is the determination of the pair of homologous points in the two images, known as the correspondence problem; it is also one of the most difficult problems to solve and is currently investigated by many researchers. The epipolar geometry allows us to reduce the correspondence problem, and an approach to epipolar geometry is described in the thesis. Nevertheless, it does not solve the problem entirely, as many considerations have to be taken into account; for example, some points have no correspondence because of a surface occlusion or simply because they project outside the field of view of one camera. The thesis focuses on structured light, which is considered one of the techniques most frequently used to reduce the problems related to stereo vision. Structured light is based on the relationship between a projected light pattern and an image sensor: the deformation between the pattern projected onto the scene and the one captured by the camera makes it possible to obtain three-dimensional information about the illuminated scene. This technique has been widely used in applications such as 3D object reconstruction, robot navigation and quality control, among others. Although the projection of regular patterns solves the problem of points without a match, it does not solve the problem of multiple matching, which forces the use of computationally expensive algorithms to search for the correct matches. In recent years another structured light technique has gained importance; it is based on codifying the light projected onto the scene so that each token can be matched uniquely. Each token of light is imaged by the camera, and its label has to be read (the pattern decoded) in order to solve the correspondence problem. The advantages and disadvantages of stereo vision versus structured light, together with a survey of coded structured light, are presented and discussed. The work carried out within the framework of this thesis has led to a new coded structured light pattern that solves the correspondence problem uniquely and robustly.
Unique, because each token of light is coded by a different word, which removes the problem of multiple matching. Robust, because the pattern is coded using the position of each token of light with respect to both coordinate axes. Algorithms and experimental results are included in the thesis. The reader will find examples of 3D measurement of static objects, as well as the more complicated measurement of moving objects; the technique can be used in both cases because the pattern is coded in a single projection shot, so it can be applied in several robot vision tasks. Our interest is focused on the mathematical study of the camera and pattern projector models, on how these models can be obtained by calibration, and on how they can be used to obtain three-dimensional information from two corresponding points. Furthermore, we have studied structured light and coded structured light, and we have presented a new coded structured light pattern. However, in this thesis we start from the assumption that the correspondence points can be well segmented from the captured image. Computer vision constitutes a huge problem, and a lot of work is being done at all levels of human vision modelling, starting from (a) image acquisition; (b) image enhancement, filtering and processing; and (c) image segmentation, which involves thresholding, thinning, contour detection, texture and colour analysis, and so on. The interest of this thesis begins at the next step, usually known as depth perception or 3D measurement.
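The geometric core the thesis builds on, recovering a 3D point from two corresponding image points once both cameras are calibrated, can be illustrated with a standard linear (DLT) triangulation; the camera matrices below are synthetic stand-ins for a real calibration, not the thesis setup.

```python
# Minimal linear triangulation sketch: given calibrated projection matrices
# P1, P2 (3x4) and a pair of corresponding image points, recover the 3D point.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """x1, x2 are (u, v) pixel coordinates of the same scene point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector associated with
    # the smallest singular value of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Toy example: two identical cameras separated along the x axis (assumed calibration).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0], [0]])])

X_true = np.array([0.1, -0.05, 2.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))   # ~ [0.1, -0.05, 2.0]
```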
Abstract:
This thesis is framed within the CICYT project TAP 1999-0443-C05-01. The aim of that project is the design, implementation and evaluation of mobile robots with a distributed control system, sensing systems and a communications network, for carrying out surveillance tasks. The robots must be able to move through an environment while recognising the position and orientation of the objects that surround them. This information must allow a robot to localise itself within its environment so that it can move while avoiding possible obstacles and carry out the assigned task. The robot must generate a dynamic map of the environment, which will be used to localise its position. The main objective of the project is for a robot to explore and build a map of the environment without needing to modify the environment itself. This thesis focuses on the study of the geometry of stereoscopic vision systems formed by two cameras, with the aim of obtaining 3D geometric information about the environment of a vehicle. This objective involves the study of camera modelling and calibration and the understanding of epipolar geometry, which is contained in the so-called fundamental matrix. A study of the computation of the fundamental matrix of a stereoscopic system is required in order to reduce the correspondence problem between the two image planes. A further objective is to study motion estimation methods based on differential epipolar geometry in order to perceive the robot's motion and obtain its position. The study of the geometry underlying stereoscopic vision systems allows us to present a computer vision system mounted on a mobile robot that navigates in an unknown environment. The system makes the robot capable of generating a dynamic map of the environment as it moves and of determining its own motion so as to localise itself within the map. The thesis presents a comparative study of the camera calibration methods most widely used in recent decades. These techniques cover a broad range of classical calibration methods, which estimate the camera parameters from a set of 3D points and their corresponding 2D projections in an image. The study therefore describes a total of five different calibration techniques, covering implicit versus explicit calibration and linear versus non-linear calibration. It should be noted that a great effort has been made to use the same nomenclature and to standardise the notation across all the techniques presented; this is one of the main difficulties in comparing calibration techniques, since each author defines different coordinate systems and different sets of parameters. The reader is introduced to camera calibration through the linear, implicit technique proposed by Hall and the linear, explicit technique proposed by Faugeras-Toscani. The method of Faugeras is then described, including the modelling of radial lens distortion; next, the well-known method proposed by Tsai is described, and finally a detailed description of the calibration method proposed by Weng is given. All the methods are compared both in terms of the camera model used and in terms of calibration accuracy.
All these methods have been implemented and their accuracy analysed, and results obtained with both synthetic data and real cameras are presented. By calibrating each camera of the stereoscopic system, a set of geometric constraints between the two images can be established. These relations are what is called the epipolar geometry, and they are contained in the fundamental matrix. Knowing the epipolar geometry, one can: simplify the correspondence problem by restricting the search space to an epipolar line; estimate the motion of a camera mounted on a mobile robot for tracking or navigation tasks; and reconstruct a scene for inspection, prototyping or mould-making applications. The fundamental matrix is estimated from a set of points in one image and their correspondences in a second image. The thesis presents a state of the art of fundamental matrix estimation techniques, starting with linear methods such as the seven-point and eight-point methods, continuing with iterative methods such as the gradient-based method or CFNS, and finally reaching robust methods such as M-Estimators, LMedS and RANSAC. In this work up to 15 methods with 19 different implementations are described. These techniques are compared both from an algorithmic point of view and in terms of the accuracy they achieve; results obtained with real images and with synthetic images with different noise levels and different proportions of false correspondences are presented. Traditionally, the estimation of camera motion is based on applying the epipolar geometry between every two consecutive images. However, the traditional case of epipolar geometry has some limitations when the camera is mounted on a mobile robot: the differences between two consecutive images are very small, which leads to inaccuracies in the computation of the fundamental matrix, and the correspondence problem still has to be solved, a process that is computationally expensive and not very effective for real-time applications. Under these circumstances, camera motion estimation techniques are usually based on optical flow and on differential epipolar geometry. The thesis gathers all these techniques, suitably classified; the methods are described with a unified notation, and the similarities and differences between the discrete and the differential cases of epipolar geometry are highlighted. In order to apply these methods to the motion estimation of a mobile robot, these general methods, which estimate the motion of a camera with six degrees of freedom, have been adapted to the case of a mobile robot moving on a planar surface. Results obtained both with the general six-degrees-of-freedom methods and with those adapted to a mobile robot are presented, using synthetic data and sequences of real images. The thesis ends with a proposal for a localisation and map-building system using a stereoscopic system mounted on a mobile robot. Several mobile robotics applications require a localisation system in order to ease the navigation of the vehicle and the execution of planned trajectories; localisation is always relative to the map of the environment in which the robot is moving.
Building maps of an unknown environment is an important task for future generations of mobile robots. The system presented performs localisation and builds the map of the environment simultaneously. The thesis describes the mobile robot GRILL, which was the working platform for this application, together with the stereoscopic vision system that was designed and mounted on the robot, as well as all the processes involved in the localisation and map-building system. The implementation of these processes was possible thanks to the studies presented previously (camera calibration, fundamental matrix estimation and motion estimation), without which this system could not have been proposed. Finally, the maps obtained along several trajectories followed by the GRILL robot in the laboratory are presented. The main contributions of this work are: (i) a state of the art of camera calibration methods, compared both in terms of the camera model used and of their accuracy; (ii) a study of fundamental matrix estimation methods, in which all the techniques studied are classified and described from an algorithmic point of view; (iii) a survey of camera motion estimation techniques focused on methods based on differential epipolar geometry, adapted to estimate the motion of a mobile robot; and (iv) a mobile robotics application that builds a dynamic map of the environment and localises the robot by means of a stereoscopic system, described in terms of both the hardware and the software that were designed and implemented.
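Of the fundamental matrix estimation methods surveyed (seven-point, eight-point, iterative and robust), the normalized eight-point algorithm is the simplest to sketch; the generic implementation below illustrates the idea and is not the thesis code.

```python
# Sketch of the normalized eight-point algorithm for the fundamental matrix.
import numpy as np

def normalize(pts):
    """Translate points to their centroid and scale so the mean norm is sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
    ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
    return ph, T

def fundamental_matrix(pts1, pts2):
    """pts1, pts2: (N, 2) arrays of corresponding points, N >= 8."""
    x1, T1 = normalize(np.asarray(pts1, float))
    x2, T2 = normalize(np.asarray(pts2, float))
    # Each correspondence contributes one row of the linear system A f = 0,
    # derived from the epipolar constraint x2^T F x1 = 0.
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1)),
    ])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 constraint, then undo the normalization.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0]) @ Vt
    return T2.T @ F @ T1
```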
Abstract:
This study aimed to evaluate and compare the radiation doses recorded in a sample of 69 patients who underwent chest radiography in the postero-anterior (PA) projection, in two hospitals with different digital image acquisition methods, direct and indirect. For both hospitals, the entrance skin dose (ESD) and the effective dose (E) were determined using the PCXMC software, for comparison with each other and with international reference values. In Hospital A, with direct digital acquisition, the mean ESD was 0.089 mGy and the mean E was 0.013 mSv. In Hospital B, with indirect digital acquisition, the mean ESD was 0.151 mGy and the mean E was 0.030 mSv. In both hospitals the mean doses did not exceed the limits recommended by law (0.3 mGy). For PA chest radiography, the calculated local diagnostic reference level (DRL) was 0.107 mGy for Hospital A and 0.164 mGy for Hospital B. For PA chest radiography, the use of a direct acquisition system resulted in a dose reduction of 41%, consistent with the available references, which point to a dose reduction of about 50% between the two systems.
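The quoted 41% reduction follows directly from the mean entrance skin doses reported above; a quick check (the effective-dose figures imply an even larger relative reduction):

```python
# Check of the reported dose reduction between the two acquisition systems.
esd_direct, esd_indirect = 0.089, 0.151      # mean entrance skin dose, mGy
e_direct, e_indirect = 0.013, 0.030          # mean effective dose, mSv

esd_reduction = 1 - esd_direct / esd_indirect    # ~0.41 -> the 41 % quoted
e_reduction = 1 - e_direct / e_indirect          # ~0.57 for effective dose

print(f"ESD reduction: {esd_reduction:.0%}, effective dose reduction: {e_reduction:.0%}")
```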
Abstract:
Gene chips are finding extensive use in animal and plant science. Generally, microarrays are of two kinds, cDNA or oligonucleotide: cDNA microarrays were developed at Stanford University, whereas oligonucleotide arrays were developed by Affymetrix. Constructing cDNA or oligonucleotide arrays on a glass slide makes it possible to compare the gene expression levels of treated and control samples by labelling mRNA with green (Cy3) and red (Cy5) dyes. The hybridized gene chip emits fluorescence whose intensity and colour can be measured. RNA labelling can be done directly or indirectly; the indirect method uses aminoallyl-modified dUTP instead of a pre-labelled nucleotide. Hybridization of a gene chip is generally carried out in the minimum possible volume and, to ensure heteroduplex formation, ten-fold more DNA is spotted on the slide than is present in the solution. Confocal or semi-confocal laser technologies coupled with a CCD camera are used for image acquisition. For standardization, housekeeping genes are used, or cDNAs that are not present in the treated or control samples are spotted on the gene chip. Moreover, statistical analysis (image analysis) and cluster analysis software have been developed at Stanford University. Gene-chip technology has many applications, such as expression analysis, gene expression signatures (molecular phenotypes) and promoter regulatory element co-expression.
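After image acquisition and quantification, the two-dye comparison reduces to a per-spot log ratio of the red and green intensities, usually background-corrected and normalized; a minimal sketch with hypothetical spot values and a simple global median normalization (both are assumptions, not the protocols cited above):

```python
# Minimal two-colour expression ratio sketch for quantified spot intensities.
import numpy as np

# Hypothetical per-spot intensities after image analysis (Cy5 = treated/red,
# Cy3 = control/green), with simple local background values.
cy5 = np.array([1520.0, 310.0, 980.0, 12040.0])
cy3 = np.array([760.0, 320.0, 1900.0, 11800.0])
bg5 = np.array([120.0, 110.0, 130.0, 140.0])
bg3 = np.array([100.0, 105.0, 120.0, 135.0])

# Background correction followed by the log2 ratio (M value) per spot.
m = np.log2((cy5 - bg5) / (cy3 - bg3))

# Global median normalization (one common simple choice, assumed here)
# centres the ratios so that most genes appear unchanged.
m_normalized = m - np.median(m)
print(np.round(m_normalized, 2))   # positive = up in treated, negative = down
```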
Abstract:
To investigate the neural network of overt speech production, event-related fMRI was performed in 9 young healthy adult volunteers. A clustered image acquisition technique was chosen to minimize speech-related movement artifacts. Functional images were acquired during the production of oral movements and of speech of increasing complexity (an isolated vowel as well as monosyllabic and trisyllabic utterances). This imaging technique and behavioral task enabled depiction of the articulo-phonologic network of speech production from the supplementary motor area at the cranial end to the red nucleus at the caudal end. Speaking a single vowel and performing simple oral movements involved very similar activation of the cortical and subcortical motor systems. More complex, polysyllabic utterances were associated with additional activation in the bilateral cerebellum, reflecting increased demand on speech motor control, and additional activation in the bilateral temporal cortex, reflecting the stronger involvement of phonologic processing.
Abstract:
The analysis of histological sections has long been a valuable tool in pathological studies. The interpretation of tissue conditions, however, relies directly on visual evaluation of tissue slides, which may be difficult to interpret because of poor contrast or poor color differentiation. The Chromatic Contrast Visualization System (CCV) combines an optical microscope with electronically controlled light-emitting diodes (LEDs) in order to generate adjustable intensities of the RGB channels used for sample illumination. While most image enhancement techniques rely on software post-processing of an image acquired under standard illumination conditions, CCV produces real-time variations in the color composition of the light source itself. The possibility of covering the entire RGB chromatic range, combined with the optical properties of the different tissues, allows for a substantial enhancement in image detail. Traditional image acquisition methods do not exploit these visual enhancements, which results in poorer visual distinction among tissue structures. Photodynamic therapy (PDT) procedures are of increasing interest in the treatment of several forms of cancer. This study uses histological slides of rat liver samples induced to necrosis after exposure to PDT. The results show that the visualization of tissue structures could be improved by changing the colors and intensities of the microscope light source: PDT-necrosed tissue samples are better differentiated when illuminated with different color wavelengths, leading to improved differentiation of cells in the necrotic area. Given the potential benefits for interpretation and diagnosis, further research in this field could make CCV an attractive technique for medical applications.
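The closed-loop idea, varying the RGB composition of the light source and keeping the setting that best separates the structures of interest, can be sketched as below; `set_led_rgb` and `capture_frame` are hypothetical placeholders for the LED controller and microscope camera interfaces (not part of any published CCV API), and the contrast metric is an illustrative choice.

```python
# Illustrative search over RGB illumination settings (hardware interfaces are
# hypothetical placeholders, not the CCV system's actual API).
import itertools
import numpy as np

def set_led_rgb(r, g, b):
    """Placeholder: command the LED controller to the given channel intensities."""
    ...

def capture_frame():
    """Placeholder: grab a greyscale frame from the microscope camera."""
    return np.random.default_rng(0).integers(0, 255, (480, 640), np.uint8)

def contrast(gray):
    """Simple global contrast metric: standard deviation of grey levels."""
    return float(np.std(gray))

best_setting, best_score = None, -1.0
levels = (64, 128, 192, 255)
for r, g, b in itertools.product(levels, repeat=3):
    set_led_rgb(r, g, b)
    score = contrast(capture_frame())
    if score > best_score:
        best_setting, best_score = (r, g, b), score

print("best illumination setting:", best_setting)
```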
Abstract:
The treatment of wastewater contaminated with oil is of great practical interest and is fundamental from an environmental point of view. A relevant process that has been studied for the continuous treatment of oil-contaminated water is the equipment denominated MDIF® (a mixer-settler based on phase inversion). An important variable during the operation of the MDIF® is the water-solvent interface level in the separation section. Controlling this level is essential both to avoid dragging solvent out during water removal and to improve the efficiency of oil extraction by the solvent. In-line measurement of the oil-water interface level is still a hard task: there are few sensors able to measure it reliably, and for lab-scale systems there are no interface sensors of compatible dimensions. The objective of this work was to implement a control system for the organic solvent/water interface level in the MDIF® equipment. Detection of the interface level is based on the acquisition and processing of images obtained dynamically through a standard camera (webcam). The control strategy was developed to operate in feedback mode: the level measured by image detection is compared with the desired level, and an action is taken on a control valve according to an implemented PID law. A control and data acquisition program was developed in Fortran to accomplish the following tasks: image acquisition; water-solvent interface identification; decision making and sending of control signals; and recording of data in files. Open-loop experimental runs were carried out on the MDIF®, applying random pulse disturbances to the input variable (water outlet flow). The interface level responses allowed the process to be identified by transfer-function models, from which the parameters of a PID controller were tuned by direct synthesis, and closed-loop tests were performed. Preliminary results for the feedback loop demonstrated that the sensor and the control strategy developed in this work are suitable for controlling the organic solvent-water interface level.
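The feedback structure described, an image-based level measurement compared with a set point and driving a control valve through a PID law, can be sketched as follows. The original program was written in Fortran, so this Python sketch only mirrors the structure; the I/O functions and tuning constants are placeholders, not the thesis values.

```python
# Structural sketch of the image-based PID level loop (placeholder I/O functions;
# the original system was implemented in Fortran).
import time

def measure_interface_level():
    """Placeholder for the webcam acquisition + interface detection step."""
    return 0.0

def set_valve_opening(u):
    """Placeholder for the signal sent to the control valve (0..100 %)."""
    pass

KP, KI, KD = 2.0, 0.1, 0.5        # tuning constants are assumptions, not the tuned values
SETPOINT, DT = 50.0, 1.0          # desired level and sampling period

integral, prev_error = 0.0, 0.0
while True:
    error = SETPOINT - measure_interface_level()
    integral += error * DT
    derivative = (error - prev_error) / DT
    u = KP * error + KI * integral + KD * derivative
    set_valve_opening(max(0.0, min(100.0, u)))     # clamp to the valve range
    prev_error = error
    time.sleep(DT)
```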
Abstract:
This research presents a methodology for predicting building shadows cast on urban roads in high-resolution aerial imagery. Shadow elements can be used in the modeling of contextual information, whose use has become more and more common in complex image analysis processes. The proposed methodology consists of three sequential steps. First, the building roof contours are manually extracted from an intensity image generated by the transformation of a digital elevation model (DEM) obtained from airborne laser scanning data; similarly, the roadside contours are extracted, now from the radiometric information of the laser scanning data. Second, the roof contour polygons are projected onto the adjacent roads using parallel projection straight lines, whose directions are computed from the solar ephemeris, which depends on the aerial image acquisition time. Finally, the parts of the shadow polygons that are free from building perspective obstructions are determined, giving rise to new shadow polygons. The results obtained in the experimental evaluation showed that the method works properly, since it allowed the prediction of shadows in high-resolution imagery with high accuracy and reliability.
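The projection step amounts to shifting each roof vertex along the solar direction by an amount proportional to its height above the road. A simplified flat-ground sketch follows (the actual methodology projects onto road contours derived from the laser data; the sun angles and dimensions below are illustrative):

```python
# Sketch of projecting a roof polygon onto flat ground along the sun direction.
import math

def shadow_polygon(roof_xy, roof_height, sun_azimuth_deg, sun_elevation_deg):
    """roof_xy: list of (x, y) vertices in a local east/north frame, metres."""
    az = math.radians(sun_azimuth_deg)        # measured clockwise from north
    el = math.radians(sun_elevation_deg)
    length = roof_height / math.tan(el)       # horizontal reach of the shadow
    # The shadow points away from the sun, i.e. along azimuth + 180 degrees.
    dx = -length * math.sin(az)
    dy = -length * math.cos(az)
    return [(x + dx, y + dy) for x, y in roof_xy]

roof = [(0.0, 0.0), (10.0, 0.0), (10.0, 6.0), (0.0, 6.0)]
print(shadow_polygon(roof, roof_height=12.0,
                     sun_azimuth_deg=225.0, sun_elevation_deg=55.0))
```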
Abstract:
Image acquisition systems based on multi-head arrangements of digital frame cameras, such as the commercial systems DMC and UltraCam, among others, are attractive alternatives that enable a larger imaged area when compared to a single frame camera. Considering that in these systems the cameras are tightly attached to an external mount, it is assumed that the relative position and orientation between cameras are stable during image acquisition and, consequently, that this constraint can be included in the calibration step. The constraint is justified because relative orientation (RO) parameters between cameras, when derived from previously estimated exterior orientation parameters, show deviations that are larger and more significant than the expected physical variations, owing to error propagation. In order to address this problem, this work presents an approach based on the simultaneous calibration of two or more cameras using constraints stating that the relative rotation matrix and the distance between the camera heads are stable. Experiments with images acquired by an arrangement of two Hasselblad H2D cameras were carried out, with and without the mentioned constraints. The experiments showed that the calibration process with RO constraints gives better results than the approach based on single-camera calibration, provided that the estimation includes only images with a good target distribution.
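The relative orientation that the constraints hold stable is itself derived from the exterior orientation of each camera head; a small sketch of that derivation with synthetic rotations and camera centres (not DMC, UltraCam or Hasselblad values):

```python
# Relative orientation (RO) of camera 2 with respect to camera 1, derived from
# exterior orientation (EO) parameters; synthetic values for illustration only.
import numpy as np

def rotation_z(deg):
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0, 0, 1]])

# EO of each camera: rotation from object space to camera space, and camera centre.
R1, C1 = rotation_z(0.0), np.array([100.0, 200.0, 1000.0])
R2, C2 = rotation_z(2.5), np.array([100.4, 200.1, 1000.0])

# Relative rotation and base vector expressed in camera 1's frame.  In a
# constrained calibration these two quantities are held stable across exposures.
R_rel = R2 @ R1.T
base_rel = R1 @ (C2 - C1)

print(np.round(R_rel, 4))
print(np.round(base_rel, 4))
```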
Abstract:
The fundamental senses of the human body are vision, hearing, touch, taste and smell. These senses provide our relationship with the environment. Vision serves as a sensory receptor responsible for obtaining information from the outside world that is sent to the brain, and a person's gaze reflects their attention, intention and interest. Therefore, the estimation of gaze direction using computational tools provides a promising alternative for improving human-computer interaction, mainly for people who suffer from motor impairments. The objective of this work is to present a non-intrusive system that uses essentially a personal computer and a low-cost webcam, combined with digital image processing techniques, wavelet transforms and pattern recognition methods such as artificial neural network models, resulting in a complete system that runs from image acquisition (including face detection and eye tracking) to the estimation of gaze direction. The results obtained show the feasibility of the proposed system, as well as several of its advantages.
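A stripped-down front end of such a pipeline, webcam capture followed by face and eye detection, can be sketched with standard OpenCV Haar cascades; the wavelet feature extraction and the neural-network gaze classifier described above are omitted, and the parameters are illustrative.

```python
# Front-end sketch: webcam capture + Haar-cascade face/eye detection with OpenCV.
# Feature extraction (e.g. wavelet coefficients) and the gaze classifier are
# intentionally left out of this sketch.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)                 # low-cost webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            eye_patch = roi[ey:ey + eh, ex:ex + ew]
            # eye_patch would be passed to the feature extractor / classifier here.
            cv2.rectangle(frame, (x + ex, y + ey),
                          (x + ex + ew, y + ey + eh), (0, 255, 0), 1)
    cv2.imshow("gaze front end", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```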
Abstract:
Computer-aided design/computer-aided manufacturing images can be obtained through either direct or indirect imaging. In indirect systems, the digitization is obtained from the impression material or cast; in direct systems, the image is taken directly from the mouth using intraoral scanners. Direct acquisition systems have been constantly improved because they are less invasive, quicker and more precise than the conventional method; besides, the digital images can easily be stored for a long time. Therefore, the aim of this paper was to describe and discuss, based on the literature, the main direct image acquisition systems available on the market: CEREC Bluecam (Sirona), Lava C.O.S. System (3M ESPE), iTero System (Cadent/Straumann), and E4D System (D4D Technologies).