997 results for processing chain
Abstract:
Arctic permafrost landscapes are among the most vulnerable and dynamic landscapes globally, but due to their extent and remoteness most landscape changes go unnoticed. To detect disturbances in these areas, we developed an automated processing chain for the calculation and analysis of robust trends of key land surface indicators based on the full record of available Landsat TM, ETM+, and OLI data. The methodology was applied to the ~29,000 km² Lena Delta in northeast Siberia, where robust trend parameters (slope, confidence intervals of the slope, and intercept) were calculated for Tasseled Cap Greenness, Wetness, and Brightness, NDVI, NDWI, and NDMI, based on 204 Landsat scenes for the observation period 1999-2014. The resulting datasets revealed regional greening trends within the Lena Delta with several localized hot-spots of change, particularly in the vicinity of the main river channels. At 30-m spatial resolution, various permafrost-thaw-related processes and disturbances, such as thermokarst lake expansion and drainage, fluvial erosion, and coastal changes, were detected within the Lena Delta region, many of which had not been noticed or described before. Such hotspots of permafrost change exhibit significantly different trend parameters compared to undisturbed areas. The processed dataset, which is made freely available through the data archive PANGAEA, will be a useful resource for further process-specific analysis by researchers and land managers. With the high level of automation and the use of the freely available Landsat archive data, the workflow is scalable and transferable to other regions, which should enable the comparison of land surface changes in different permafrost-affected regions and help to understand and quantify permafrost landscape dynamics.
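The per-pixel robust trend parameters described above (slope, confidence interval of the slope, intercept) can be sketched with a Theil-Sen estimator; the abstract does not name the exact estimator used, so this choice is an assumption, and the NDVI time series below is synthetic, not from the dataset.

```python
import numpy as np
from scipy.stats import theilslopes

def robust_trend(years, values, alpha=0.95):
    """Robust trend for one pixel's time series: Theil-Sen slope,
    intercept, and the (lo, hi) confidence interval of the slope."""
    years = np.asarray(years, dtype=float)
    values = np.asarray(values, dtype=float)
    ok = ~np.isnan(values)  # skip cloud-masked / missing observations
    slope, intercept, lo, hi = theilslopes(values[ok], years[ok], alpha=alpha)
    return slope, intercept, (lo, hi)

# Hypothetical NDVI series for a single 30-m pixel, 1999-2014:
# a weak greening trend plus observation noise
years = np.arange(1999, 2015)
ndvi = 0.004 * (years - 1999) + 0.35 \
       + np.random.default_rng(0).normal(0.0, 0.01, len(years))
slope, intercept, ci = robust_trend(years, ndvi)
```

Theil-Sen (the median of all pairwise slopes) is a common choice for Landsat trend analysis because it tolerates outliers such as undetected clouds far better than ordinary least squares.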
Abstract:
This paper describes a novel method to enhance current airport surveillance systems used in Advanced Surface Movement Guidance and Control Systems (A-SMGCS). The proposed method allows for the automatic calibration of measurement models and enhanced detection of non-ideal situations, increasing the integrity of surveillance products. It is based on the definition of a set of observables from the surveillance processing chain and a rule-based expert system aimed at adapting the data processing methods.
Abstract:
Hyperspectral image analysis makes it possible to obtain information with very high spectral resolution: hundreds of bands spanning from the infrared to the ultraviolet spectrum. The use of such images is having a strong impact in the field of medicine and, in particular, in the detection of different types of cancer. Within this field, one of the main current problems is the real-time analysis of these images: because of the large volume of data they contain, the required computational power is very high. One of the main research lines aimed at reducing this processing time is based on distributing the analysis across several cores working in parallel. Along this line, the present work develops a library for the RVC-CAL language (a language specifically designed for multimedia applications that allows parallelization to be expressed intuitively) gathering the functions needed to implement two of the four stages of the hyperspectral processing chain: dimensionality reduction and endmember extraction. This work is complemented by that of Raquel Lazcano in her Diploma Project, where the functions needed to complete the other two stages of the unmixing chain are developed. The document is divided into several parts. The first presents the motivations behind this Diploma Project and the objectives it aims to achieve. After that, an extensive study of the current state of the art is carried out, explaining both hyperspectral images and the tools and platforms that will be used to divide the processing among cores and to identify the problems that may arise in doing so.
Once the theoretical basis has been presented, we focus on the method followed to compose the unmixing chain and generate the library; an important point in this part is the use of specialized libraries for complex matrix operations, implemented in C++. After explaining the method, the results are presented first stage by stage and then for the complete processing chain, implemented on one or several cores. Finally, a series of conclusions drawn from analyzing the different algorithms in terms of quality of results, processing times, and resource consumption is presented, together with possible future lines of work related to those results.
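As a rough illustration of the first of the two implemented stages, dimensionality reduction can be sketched with a plain PCA over the spectral bands. The actual library is written in RVC-CAL with C++ matrix routines; the Python function below is an illustrative stand-in, not part of that library, and the data cube is synthetic.

```python
import numpy as np

def pca_reduce(cube, n_components):
    """Reduce the spectral dimension of a (rows, cols, bands) hyperspectral
    cube via PCA, keeping the n_components strongest components."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    X -= X.mean(axis=0)                       # centre each band
    cov = np.cov(X, rowvar=False)             # bands x bands covariance
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :n_components]  # strongest components first
    return (X @ top).reshape(rows, cols, n_components)

# Synthetic 10x10 scene with 50 spectral bands, reduced to 3 components
cube = np.random.default_rng(1).random((10, 10, 50))
reduced = pca_reduce(cube, 3)
```

The spectral dimension is where the data volume lives (hundreds of bands per pixel), which is why this stage comes first in the unmixing chain: every later stage then operates on a far smaller cube.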
Abstract:
Hyperspectral images make it possible to extract information with very high spectral resolution, usually extending from the ultraviolet to the infrared spectrum. Although this technology was initially applied to Earth observation, in recent years its use has expanded to other fields, such as medicine and, in particular, cancer detection. This new field of application has generated new requirements, such as real-time image processing; all the potential applications, like surgical guidance or in vivo tumor detection, imply real-time constraints. Precisely because of their high spectral resolution, these images require considerable computational power to process, which makes this goal unattainable with traditional processing techniques. One of the main research lines therefore pursues real-time operation by parallelizing the processing, dividing the computational load among several cores working simultaneously. In this respect, this document describes the development of a hyperspectral processing library for the RVC-CAL language, which is specifically designed for multimedia applications and provides the tools needed to parallelize applications. In this Diploma Project, the functions needed to implement two of the four stages of the hyperspectral image analysis chain have been developed, namely the estimation of the number of endmembers and the estimation of their abundances in the image; this work is complemented by that of Daniel Madroñal in his Diploma Project, where the functions needed to complete the other two stages of the chain (dimensionality reduction and endmember extraction) are developed.
This document follows the classical structure of a research work. It first presents the motivations behind this Diploma Project and the objectives it aims to achieve. Next, an extensive analysis of the state of the art of the relevant technologies is carried out, covering hyperspectral images on the one hand and, on the other, the hardware and software resources needed to implement the library, thus providing all the technical concepts needed to follow this document. The methodology followed to generate the library is then detailed, together with the implementation of a complete hyperspectral image processing chain that allows both the correctness of the library and the time needed to analyze a complete hyperspectral image to be evaluated, executing the chain on one or several cores. The results of the tests are then analyzed in detail: first the individual results of the two implemented stages, and then those of the execution of the complete chain, on one as well as on several cores, including the effects of multithreading and system parallelization on the processing chain. Finally, a series of conclusions is drawn, covering aspects such as quality of results, execution times, and resource consumption; likewise, a number of future lines of work are proposed that could continue and complement the research developed in this document.
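The abundance-estimation stage implemented in this project can be illustrated conceptually: given the endmember spectral signatures, per-pixel abundances are commonly obtained by constrained least squares. The sketch below uses nonnegative least squares on a synthetic, noiseless mixture; the function name and data are illustrative and not taken from the project's library.

```python
import numpy as np
from scipy.optimize import nnls

def abundances(pixel, endmembers):
    """Per-pixel abundance estimation by nonnegative least squares.
    endmembers: (bands, n_endmembers) matrix of spectral signatures."""
    a, _residual = nnls(endmembers, pixel)
    return a / a.sum()  # normalize so abundances sum to one

# Synthetic scene: 30 bands, 3 endmembers, one pixel mixed 60/30/10
rng = np.random.default_rng(2)
E = rng.random((30, 3))                 # endmember signatures
true_a = np.array([0.6, 0.3, 0.1])
pixel = E @ true_a                      # noiseless linear mixture
est = abundances(pixel, E)
```

Under the linear mixing model each pixel is a convex combination of endmember spectra, so the nonnegativity and sum-to-one constraints make the estimated abundances directly interpretable as per-material fractions.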
Abstract:
The distribution of mould species was examined at several points of the processing chain in a Manchego cheese plant and associated dairy farms. Geotrichum and Fusarium were the most frequent genera isolated in milk samples as well as in 1-month ripened cheeses, evidencing a direct transfer from raw milk. Conversely, the mycobiota of long-ripened cheeses consisted mainly of Penicillium species, which gained entry to the cheese through the air of ripening rooms. This study contributes to the understanding of the dynamics of fungal populations in semihard and hard cheeses, highlighting that airborne transfer from the stables could have a direct impact on their quality.
Abstract:
This thesis presents a comprehensive study of the evaluation of the Quality of Experience (QoE) perceived by users of 3D video systems, analyzing the impact of the effects introduced by all the elements of the 3D video processing chain. Several subjective assessment tests are presented, specifically designed to evaluate the systems under consideration and taking into account all the perceptual factors related to the three-dimensional visual experience, such as depth perception and visual discomfort. In particular, a subjective test is described that evaluates typical degradations that may appear in the 3D video content creation process, for instance due to incorrect camera calibration or to video signal processing algorithms (e.g., 2D-to-3D conversion). In addition, the generation of a database of high-quality stereoscopic videos is presented; it is freely available to the research community and has already been widely used in different works related to 3D video. Another subjective study, carried out across several laboratories, analyzes the impact of degradations caused by video coding as well as various 3D video representation formats. Likewise, three subjective tests are described that study the possible effects caused by the transmission of 3D video over Internet Protocol Television (IPTV) networks and adaptive video streaming systems.
For these cases, a novel subjective video quality evaluation methodology was proposed, called Content-Immersive Evaluation of Transmission Impairments (CIETI), specifically designed to evaluate transmission events by simulating realistic home-viewing conditions, in order to obtain more representative conclusions about the visual experience of end users. Finally, two subjective experiments are presented, comparing several current 3D display technologies available in the consumer market and evaluating perceptual factors of Super Multiview Video (SMV) systems, expected to be the future technology for consumer 3D displays thanks to their promising glasses-free visualization of 3D content. The work presented in this thesis has provided an understanding of the perceptual and technical factors involved in the processing and visualization of 3D video content, which may be useful in the development of new technologies and QoE evaluation techniques, both subjective methodologies and objective metrics.
Abstract:
The purpose of this paper is to survey and assess the state of the art in automatic target recognition for synthetic aperture radar imagery (SAR-ATR). The aim is not to develop an exhaustive survey of the voluminous literature, but rather to capture in one place the various approaches for implementing the SAR-ATR system. This paper is meant to be as self-contained as possible, and it approaches the SAR-ATR problem from a holistic end-to-end perspective. A brief overview of the breadth of the SAR-ATR challenges is provided. This is couched in terms of a single-channel SAR, and it is extendable to multi-channel SAR systems. Stages pertinent to the basic SAR-ATR system structure are defined, and the motivations behind the requirements and constraints on the system constituents are addressed. For each stage in the SAR-ATR processing chain, a taxonomization methodology for surveying the numerous methods published in the open literature is proposed. Carefully selected works from the literature are presented under the proposed taxa. Novel comparisons, discussions, and comments are offered throughout this paper. A two-fold benchmarking scheme for evaluating existing SAR-ATR systems and motivating new system designs is proposed, and the scheme is applied to the works surveyed in this paper. Finally, a discussion is presented in which various interrelated issues, such as standard operating conditions, extended operating conditions, and target-model design, are addressed. This paper is a contribution toward fulfilling the objective of end-to-end SAR-ATR system design.
Abstract:
With the objective of evaluating PCR-mediated detection of Mycobacterium tuberculosis DNA as a procedure for the diagnosis of tuberculosis in individuals attending ambulatory services in Primary Health Units of the City Tuberculosis Program in Rio de Janeiro, Brazil, sputum samples were collected and subjected to a DNA extraction procedure using silica-guanidinium thiocyanate. This procedure has been described as highly efficient for the extraction of different kinds of nucleic acids from bacteria and clinical samples. Upon comparing PCR results with the number of acid-fast bacilli, no direct relation was observed between the number of bacilli present in the sample and PCR positivity. Part of the processed samples was therefore spiked with pure DNA of M. tuberculosis, and inhibition of the PCR reaction was verified in 22 out of 36 (61%) of the samples, demonstrating that the extraction procedure as originally described should not be used for PCR analysis of sputum samples.
Abstract:
The effect of high pressure processing (400 MPa for 10 min) and natural antimicrobials (enterocins and lactate-diacetate) on the behaviour of L. monocytogenes in sliced cooked ham during refrigerated storage (1°C and 6°C) was assessed. The efficiency of the treatments after a cold chain break was also evaluated. Lactate-diacetate exerted a bacteriostatic effect against L. monocytogenes during the whole storage period (3 months) at 1°C and 6°C, even after temperature abuse. The combination of low storage temperature (1°C), high pressure processing (HPP), and addition of lactate-diacetate reduced the levels of L. monocytogenes during storage by 2.7 log CFU/g. The most effective treatment was the combination of HPP, enterocins, and refrigeration at 1°C, which reduced the population of the pathogen to final counts of 4 MPN/g after 3 months of storage, even after the cold chain break.
Abstract:
This article aims to discuss the needs and problems of the marolo value chain, as well as to evaluate the rehydration process of this fruit as a possibility for using it as a by-product during the interharvest growth periods. The study of the value chain included interviews with producers, handlers, and fruit and by-product sellers. To evaluate the rehydration process, marolo was dehydrated by a conventional procedure and by freeze-drying. The experiments were conducted in a completely randomized design and a triple factorial scheme (2 × 2 × 6). ANOVA was performed, followed by Tukey's test (p < 0.05). Regression models were generated and adjusted for the time factor. The precariousness of the marolo value chain was observed. The best procedure for marolo dehydration should be determined according to the intended use of the dehydrated product, since the water-absorption capacity of the flour is higher after freeze-drying, while convective hot-air drying is more effective in retaining soluble solids and reducing damage to the fruit. These results contribute to the marolo value chain and to the preservation of native trees in the Brazilian savanna biome, and the approach can be used to analyze other underutilized crops.
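The statistical analysis mentioned above (ANOVA followed by Tukey's test at p < 0.05) can be sketched as follows; the measurements below are invented placeholders rather than the study's data, and the design is simplified to a single factor (drying method) for illustration.

```python
from scipy.stats import f_oneway, tukey_hsd

# Illustrative water-absorption measurements (g water / g flour) for
# flours produced by the two drying methods; values are placeholders.
hot_air    = [5.1, 5.3, 4.9, 5.2]
freeze_dry = [6.2, 6.0, 6.4, 6.1]

f_stat, p_value = f_oneway(hot_air, freeze_dry)  # one-way ANOVA
tukey = tukey_hsd(hot_air, freeze_dry)           # pairwise post-hoc comparison
significant = p_value < 0.05                     # the paper's threshold
```

In the full 2 × 2 × 6 factorial analysis, a multi-way ANOVA (e.g., via statsmodels) would replace `f_oneway`, with Tukey's HSD applied to each significant factor.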
Abstract:
The oxidative and thermo-mechanical degradation of HDPE was studied during processing in an internal mixer under two conditions: totally and partially filled chambers, which provide lower and higher concentrations of oxygen, respectively. Two types of HDPE, Phillips and Ziegler-Natta, having different levels of terminal vinyl unsaturations, were analyzed. Materials were processed at 160, 200, and 240 °C. Standard rheograms using a partially filled chamber showed that the torque is much more unstable than in a totally filled chamber, which provides an environment depleted of oxygen. Carbonyl and trans-vinylene group concentrations increased, whereas vinyl group concentration decreased with temperature and oxygen availability. The average number of chain scissions and branches (ns) was calculated from the MWD curves, and plotting it against functional group concentrations showed that chain scission or branching takes place depending on the oxygen content and the consumption of vinyl groups. Chain scission and branching distribution function (CSBDF) values showed that longer chains undergo chain scission more easily than shorter ones owing to their higher probability of entanglement. This yields macroradicals that react with the terminal vinyl unsaturations of other chains, producing chain branching. Shorter chains are more mobile and do not suffer scission; instead they are used for grafting the macroradicals, increasing the molecular weight. Increases in oxygen concentration, temperature, and vinyl end-group content facilitate the thermo-mechanical degradation, reducing the amount of both longer chains (via chain scission) and shorter chains (via chain branching) and narrowing the polydispersity. Phillips HDPE produces a higher level of chain branching than the Ziegler-Natta type under the same processing conditions.
Abstract:
Thermoplastic starch (TPS) was modified with ascorbic acid and citric acid by melt processing of native starch with glycerol as plasticizer in an intensive batch mixer at 160 °C. It was found that the molar mass decreases with acid content and processing time, causing a reduction in the melting temperature (Tm). As shown by X-ray diffraction and DSC measurements, crystallinity was not changed by the reaction with the organic acids. The Tm depression with falling molar mass was interpreted on the basis of the concentration of end-chain units, which act as diluents. FTIR did not show any appreciable change in the chemical composition of the starch, leading to the conclusion that the main changes observed were produced by the variation in the molar mass of the material. We demonstrated that it is possible to decrease the melt viscosity without the need for more plasticizer, thus avoiding side effects such as an increase in water affinity or relevant changes in the dynamic mechanical properties.
Abstract:
Female reproductive tissues of the ornamental tobacco amass high levels of serine proteinase inhibitors (PIs) for protection against pests and pathogens. These PIs are produced from a precursor protein composed of six repeats, each with a protease reactive site. Here we show that proteolytic processing of the precursor generates five single-chain PIs and a remarkable two-chain inhibitor formed by disulfide-bond linkage of N- and C-terminal peptide fragments. Surprisingly, PI precursors adopt this circular structure regardless of the number of inhibitor domains, suggesting this bracelet-like conformation is characteristic of the widespread potato inhibitor II (Pot II) protein family.
Abstract:
Heterologous genes encoding proproteins, including proinsulin, generally produce mature protein when expressed in endocrine cells, while unprocessed or partially processed protein is produced in non-endocrine cells. Proproteins that are normally processed in the regulated pathway restricted to endocrine cells do not always contain the recognition sequence for cleavage by furin, the endoprotease specific to the constitutive pathway, which is the principal protein processing pathway in non-endocrine cells. Human proinsulin consists of B-chain, C-peptide, and A-chain, and cleavage at the B/C and C/A junctions is required for processing. The B/C junction, but not the C/A junction, is recognised and cleaved in the constitutive pathway. We expressed a human proinsulin gene and a mutated proinsulin gene with an engineered furin recognition sequence at the C/A junction and compared the processing efficiency of the mutant and native proinsulin in Chinese Hamster Ovary cells. The processing efficiency of the mutant proinsulin was 56%, relative to 0.7% for native proinsulin. However, despite similar levels of mRNA being expressed in both cell lines, the absolute levels of immunoreactive insulin, normalized against mRNA levels, were 18-fold lower in the mutant proinsulin-expressing cells. As a result, there was only a marginal increase in the absolute levels of insulin produced by these cells. This unexpected finding may result from preferential degradation of insulin in non-endocrine cells, which lack the protection offered by the secretory granules found in endocrine cells.
Abstract:
Active surveillance for dengue (DEN) virus-infected mosquitoes can be an effective way to predict the risk of dengue infection in a given area. However, doing so may pose logistical problems if mosquitoes must be kept alive or frozen fresh to detect DEN virus. In an attempt to simplify mosquito processing, we evaluated the usefulness of a sticky lure and a seminested reverse-transcriptase polymerase chain reaction (RT-PCR) assay for detecting DEN virus RNA under laboratory conditions using experimentally infected Aedes aegypti (L.) mosquitoes. In the first experiment, 40 male mosquitoes were inoculated with 0.13 µl of a 10^4 pfu/ml DEN-2 stock solution. After a 7-d incubation period, the mosquitoes were applied to the sticky lure and kept at room temperatures of 23-30°C. At 7, 10, 14, and 28 d after application, 10 mosquitoes each were removed from the lure, pooled, and assayed for virus. DEN virus nucleic acid was clearly detectable in all pools up to 28 d after death. A second study evaluated sensitivity and specificity using one, two, and five DEN-infected mosquitoes removed at 7, 10, 14, 21, and 30 d after application and tested by RT-PCR. All four DEN serotypes were individually inoculated into mosquitoes and evaluated using the same procedures as in experiment 1. The four serotypes were detectable in as few as one mosquito 30 d after application to the lure, with no evidence of cross-reactivity. The combination of sticky lures and RT-PCR shows promise for mosquito and dengue virus surveillance and warrants further evaluation.