28 results for "Faults detection and location"
Abstract:
The verification and validation activity plays a fundamental role in improving software quality. Determining which are the most effective techniques for carrying out this activity has been an aspiration of experimental software engineering researchers for years. This paper reports a controlled experiment evaluating the effectiveness of two unit testing techniques: the functional testing technique known as equivalence partitioning (EP) and the control-flow structural testing technique known as branch testing (BT). This experiment is a literal replication of Juristo et al. (2013). Both experiments serve the purpose of determining whether the effectiveness of BT and EP varies depending on whether or not the faults are visible to the technique (InScope or OutScope, respectively). We have used the materials, design and procedures of the original experiment, but in order to adapt the experiment to the context we have: (1) reduced the number of studied techniques from 3 to 2; (2) assigned subjects to experimental groups by means of stratified randomization to balance the influence of programming experience; (3) localized the experimental materials; and (4) adapted the training duration. We ran the replication at the Escuela Politécnica del Ejército Sede Latacunga (ESPEL) as part of a software verification & validation course. The experimental subjects were 23 master's degree students. EP is more effective than BT at detecting InScope faults. The session/program and group variables are found to have significant effects. BT is more effective than EP at detecting OutScope faults. The session/program and group variables have no effect in this case. The results of the replication and the original experiment are similar with respect to testing techniques. There are some inconsistencies with respect to the group factor; they can be explained by small sample effects. The results for the session/program factor are inconsistent for InScope faults. We believe that these differences are due to a combination of the fatigue effect and a technique × program interaction. Although we were able to reproduce the main effects, the changes to the design of the original experiment make it impossible to identify the causes of the discrepancies for sure. We believe that further replications closely resembling the original experiment should be conducted to improve our understanding of the phenomena under study.
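The stratified randomization of point (2) can be made concrete with a short sketch. This is a minimal illustration, not the authors' actual procedure: the subject identifiers, the two-group split and the experience levels are hypothetical.

```python
import random
from collections import defaultdict

def stratified_assignment(subjects, experience, groups=("G1", "G2"), seed=42):
    """Randomly assign subjects to experimental groups, stratifying by
    programming-experience level so each group gets a balanced mix."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for s in subjects:
        strata[experience[s]].append(s)   # bucket subjects by experience level
    assignment = {}
    for level, members in strata.items():
        rng.shuffle(members)              # randomize order within the stratum
        for i, s in enumerate(members):
            assignment[s] = groups[i % len(groups)]  # deal round-robin to groups
    return assignment
```

Within each experience stratum the shuffled members are dealt round-robin to the groups, so no group ends up with a surplus of experienced programmers.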
Abstract:
This paper presents the detection and identification of hydrocarbons through fluoro-sensing, by developing a simple and inexpensive detector for inland waters, in contrast to current systems, which are designed for marine waters at large distances and are extremely costly. To validate the proposed system, three test benches with various UV-light sources have been mounted. The main application of this system would be to detect hydrocarbon pollution in rivers, lakes or dams, which is of growing interest to administrations.
Abstract:
Active optical sensing (LIDAR and light curtain transmission) devices mounted on a mobile platform can correctly detect, localize, and classify trees. To conduct an evaluation and comparison of the different sensors, an optical encoder wheel was used for vehicle odometry and provided a measurement of the linear displacement of the prototype vehicle along a row of tree seedlings as a reference for each recorded sensor measurement. The field trials were conducted in a juvenile tree nursery with one-year-old grafted almond trees at Sierra Gold Nurseries, Yuba City, CA, United States. Through these tests and subsequent data processing, each sensor was individually evaluated to characterize its reliability, as well as its advantages and disadvantages for the proposed task. Test results indicated that 95.7% and 99.48% of the trees were successfully detected with the LIDAR and light curtain sensors, respectively. LIDAR correctly classified trees as alive or dead at a 93.75% success rate, compared to 94.16% for the light curtain sensor. These results can help system designers select the most reliable sensor for the accurate detection and localization of each tree in a nursery, which might allow labor-intensive tasks, such as weeding, to be automated without damaging crops.
Abstract:
In this paper we propose an innovative approach to the problem of traffic sign detection using a computer vision algorithm under real-time operation constraints, establishing intelligent strategies to simplify the algorithm as much as possible and to speed up the process. Firstly, a set of candidates is generated by a color segmentation stage, followed by a region analysis strategy, where the spatial characteristics of previously detected objects are taken into account. Finally, temporal coherence is introduced by means of a tracking scheme, performed using a Kalman filter for each potential candidate. Regarding time constraints, efficiency is achieved in two ways: on the one hand, a multi-resolution strategy is adopted for segmentation, where global operations are applied only to low-resolution images, increasing the resolution to the maximum only when a potential road sign is being tracked. On the other hand, we take advantage of the expected spacing between traffic signs: tracking objects of interest allows us to generate inhibition areas, in which no new traffic signs are expected to appear due to the existence of a sign in the neighborhood. The proposed solution has been tested on real sequences in both urban areas and highways, and proved to achieve high computational efficiency, especially as a result of the multi-resolution approach.
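The per-candidate tracking stage can be illustrated with a constant-velocity Kalman filter over the sign's image position. This is a minimal sketch, not the paper's implementation; the state layout, the noise magnitudes and the NumPy formulation are assumptions.

```python
import numpy as np

class SignTracker:
    """Constant-velocity Kalman filter for one sign candidate.
    State: [x, y, vx, vy]; measurement: detected centre [x, y]."""
    def __init__(self, x0, y0, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])      # state estimate
        self.P = np.eye(4) * 10.0                  # state covariance
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], float)  # constant-velocity motion model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)   # we observe position only
        self.Q = np.eye(4) * q                     # process noise
        self.R = np.eye(2) * r                     # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                          # predicted centre

    def correct(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ (np.asarray(z, float) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

A tracker like this also supports the inhibition areas described above: while a sign is being tracked, new candidates near its predicted position can be suppressed.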
Abstract:
In this paper we present an innovative technique to tackle the problem of automatic road sign detection and tracking using an on-board stereo camera. It involves a continuous 3D analysis of the road sign during the whole tracking process. Firstly, a color and appearance based model is applied to generate road sign candidates in both stereo images. A sparse disparity map between the left and right images is then created for each candidate by using contour-based and SURF-based matching in the far and short range, respectively. Once the map has been computed, the correspondences are back-projected to generate a cloud of 3D points, and the best-fit plane is computed through RANSAC, ensuring robustness to outliers. Temporal consistency is enforced by means of a Kalman filter, which exploits the intrinsic smoothness of the 3D camera motion in traffic environments. Additionally, the estimation of the plane makes it possible to correct deformations due to perspective, thus easing further sign classification.
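The RANSAC plane-fitting step can be sketched as follows. This is an illustrative implementation under common assumptions (3-point minimal samples, a fixed inlier tolerance, SVD refinement); the paper's exact parameters are not given in the abstract.

```python
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.02, seed=0):
    """Fit a plane (unit normal n, offset d with n·p = d) to an (N, 3)
    cloud of back-projected points, ignoring outliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), bool)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                       # degenerate (collinear) sample
            continue
        n /= norm
        d = n @ sample[0]
        inliers = np.abs(points @ n - d) < tol    # distance-to-plane test
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refine on the inlier set with a least-squares (SVD) fit
    centred = points[best_inliers] - points[best_inliers].mean(axis=0)
    normal = np.linalg.svd(centred)[2][-1]    # direction of least variance
    return normal, normal @ points[best_inliers].mean(axis=0), best_inliers
```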
Abstract:
In this paper we propose an innovative method for the automatic detection and tracking of road traffic signs using an on-board stereo camera. It involves a combination of monocular and stereo analysis strategies to increase the reliability of the detections, such that it can boost the performance of any traffic sign recognition scheme. Firstly, an adaptive color and appearance based detection is applied at the single-camera level to generate a set of traffic sign hypotheses. In turn, stereo information allows for sparse 3D reconstruction of potential traffic signs through a SURF-based matching strategy: the plane that best fits the cloud of 3D points traced back from feature matches is estimated using a RANSAC-based approach to improve robustness to outliers. Temporal consistency of the 3D information is ensured through a Kalman-based tracking stage. This also allows for the generation of a predicted 3D traffic sign model, which is in turn used to enhance the previously mentioned color-based detector through a feedback loop, thus improving detection accuracy. The proposed solution has been tested with real sequences under several illumination conditions and in both urban areas and highways, achieving very high detection rates in challenging environments, including rapid motion and significant perspective distortion.
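The feedback loop at the end of the pipeline can be pictured as projecting the Kalman-predicted 3D sign centre back into the image to focus the color-based detector. This is a minimal sketch assuming a calibrated pinhole camera with intrinsics matrix K; the function name, the physical sign-size parameter and the margin factor are illustrative, not from the paper.

```python
import numpy as np

def predicted_roi(center_3d, sign_size_m, K, margin=1.3):
    """Project a Kalman-predicted 3D sign centre (camera frame, Z > 0)
    through a pinhole model and return a square search window in which
    the colour-based detector should look for the sign."""
    X = np.asarray(center_3d, float)
    u, v = K[:2, :2] @ (X[:2] / X[2]) + K[:2, 2]   # pinhole projection
    side = margin * K[0, 0] * sign_size_m / X[2]   # apparent size (assumes square pixels)
    half = side / 2.0
    return int(u - half), int(v - half), int(u + half), int(v + half)
```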
Abstract:
This project is based on the technologies used for object detection and recognition, especially of leaves and chromosomes. The document contains the typical parts of a scientific paper: an abstract, an introduction, sections covering the investigation area, future work, conclusions and the references used in its elaboration. The abstract summarizes what this paper contains, namely the technologies employed for pattern detection and recognition of leaves and chromosomes, and the existing work on cataloguing these objects. The introduction explains the meanings of detection and recognition. This is necessary because many papers confuse these terms, especially those dealing with chromosomes. Detecting an object means gathering the parts of the image that are useful and eliminating the useless parts; in short, detection amounts to recognizing the object's borders. Recognition, on the other hand, is the process by which the computer or machine says what kind of object it is handling. Afterwards we present a compilation of the most used technologies for object detection in general. There are two main groups in this category: those based on image derivatives and those based on ASIFT points. The derivative-based methods have in common that the image is treated by convolving it with a previously created matrix. This is done to detect borders in the image, which are changes in pixel intensity. Within these technologies we find two groups: gradient-based methods, which search for maxima and minima of pixel intensity since they only use the first derivative, and Laplacian-based methods, which search for zero crossings of pixel intensity since they use the second derivative. Depending on the level of detail wanted in the final result, one option or the other will be chosen: gradient-based methods consume fewer resources and less time, as there are fewer operations, but the quality is worse, while Laplacian-based methods need more time and resources, as they require more operations, but give a much better quality result. After explaining all the derivative-based methods, we review the different algorithms available for both groups. The other big group of technologies for object recognition is based on ASIFT points, which rely on 6 image parameters and compare one image with another taking these parameters into consideration. The disadvantage of these methods, for our future purposes, is that they are only valid for one single object: if we need to recognize two different leaves, even if they belong to the same species, this method will not recognize them. It is nevertheless important to mention these technologies, as we are discussing recognition methods in general. At the end of the chapter there is a comparison of the pros and cons of all the technologies employed, first comparing them separately and then comparing them all together with respect to our purposes. The next chapter, on recognition techniques, is not very extensive because, even though there are general steps for object recognition, every single object to be recognized requires its own method, so no general method can be specified in that chapter.
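The gradient/Laplacian distinction drawn above can be made concrete with the classic convolution kernels. This is a generic illustration of the two families, assuming SciPy is available; it is not code from the project itself, and the threshold value is arbitrary.

```python
import numpy as np
from scipy.signal import convolve2d

# Classic first-derivative (Sobel) and second-derivative (Laplacian) kernels.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T
LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)

def gradient_edges(img, thresh=50.0):
    """Gradient-based: edges are maxima of the first derivative's magnitude."""
    gx = convolve2d(img, SOBEL_X, mode="same", boundary="symm")
    gy = convolve2d(img, SOBEL_Y, mode="same", boundary="symm")
    return np.hypot(gx, gy) > thresh

def laplacian_edges(img):
    """Laplacian-based: edges are zero crossings of the second derivative."""
    lap = convolve2d(img, LAPLACIAN, mode="same", boundary="symm")
    signs = np.sign(lap)
    # a zero crossing exists where the sign flips between adjacent pixels
    zc = np.zeros_like(lap, bool)
    zc[:, 1:] |= signs[:, 1:] != signs[:, :-1]
    zc[1:, :] |= signs[1:, :] != signs[:-1, :]
    return zc
```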
We then move on to leaf detection techniques on computers, using the derivative-based technique explained above. The next step is to turn the leaf into several parameters; depending on the document consulted, there will be more or fewer parameters. Some papers recommend dividing the leaf into 3 main features (shape, dent and vein), from which mathematical operations can derive up to 16 secondary features. Another proposal divides the leaf into 5 main features (diameter, physiological length, physiological width, area and perimeter) and extracts 12 secondary features from those. This second alternative is the most used, so it is the one taken as the reference. Moving on to leaf recognition, we rely on a paper that provides source code which, after both leaf ends are clicked, automatically tells to which species the leaf being recognized belongs. To do so, it only requires a database. In the tests reported by that document, an accuracy of 90.312% is claimed over 320 tests in total (32 plants in the database and 10 tests per species). The next chapter deals with chromosome detection, where we must pass from the metaphase plate, in which the chromosomes are disorganized, to the karyotype plate, the usual view of the 23 chromosomes ordered by number. There are two types of techniques for this step: the skeletonization process and sweeping angles. The skeletonization process consists of suppressing the inside pixels of the chromosome to keep just the silhouette. This method is very similar to the ones based on image derivatives, with the difference that it does not detect the borders but the interior of the chromosome. The second technique consists of sweeping angles from the beginning of the chromosome and, considering that a single chromosome cannot bend more than a certain angle, detecting the various regions of the chromosome. Once the karyotype plate is defined, we continue with chromosome recognition. For this there is a technique based on the banding (grey-scale bands) that makes each chromosome unique: the program detects the longitudinal axis of the chromosome and reconstructs the band profiles, after which the computer is able to recognize the chromosome. Concerning future work, we generally have two independent sets of techniques that do not combine detection and recognition, so our main focus would be to prepare a program that gathers both. On the leaf side we have seen that detection and recognition are linked, as both share the option of dividing the leaf into 5 main features; the work to be done is to create an algorithm linking both methods, since in the program that recognizes leaves both leaf ends have to be clicked, so it is not an automatic algorithm. On the chromosome side, we should create an algorithm that searches for the beginning of the chromosome and then starts to sweep angles, later passing the parameters to the program that searches for the band profiles. Finally, the summary explains why this type of investigation is needed: with global warming, many species (animals and plants) are beginning to go extinct, which is why a large database gathering all possible species is needed. For recognizing animal species, we only need the 23 chromosomes.
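The 5-main-feature representation of a leaf can be sketched from a pre-segmented binary mask. The feature definitions below (principal-axis extents for physiological length/width, border-pixel count for perimeter) are plausible readings of the abstract, not the cited paper's exact formulas.

```python
import numpy as np

def leaf_features(mask):
    """Five main leaf features from a boolean mask (True = leaf pixel):
    diameter, physiological length, physiological width, area, perimeter."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(float)
    area = float(mask.sum())                      # pixels inside the leaf
    padded = np.pad(mask, 1)                      # pad so shifts stay in bounds
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = float((mask & ~interior).sum())   # border-pixel count
    border = pts[(mask & ~interior)[ys, xs]]
    # diameter: longest distance between two border points (brute force,
    # acceptable for the modest resolution of a scanned leaf)
    dists = np.linalg.norm(border[:, None] - border[None, :], axis=-1)
    diameter = float(dists.max())
    # physiological length/width: extents along the principal axes
    centred = pts - pts.mean(axis=0)
    axes = np.linalg.svd(centred, full_matrices=False)[2]
    length, width = np.ptp(centred @ axes.T, axis=0)
    return diameter, float(length), float(width), area, perimeter
```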
When recognizing a plant there are several ways of doing it, but the easiest way to input it into a computer is to scan a leaf of the plant.
Abstract:
The use of a common environment for processing different powder foods in the industry has increased the risk of finding peanut traces in powder foods. The analytical methods commonly used for the detection of peanut, such as enzyme-linked immunosorbent assay (ELISA) and real-time polymerase chain reaction (RT-PCR), offer high specificity and sensitivity, but they are destructive and time-consuming and require highly skilled experimenters. The feasibility of NIR hyperspectral imaging (HSI) is studied for the detection of peanut traces down to 0.01% by weight. A principal-component analysis (PCA) was carried out on a dataset of peanut and flour spectra. The obtained loadings were applied to the HSI images of wheat flour samples adulterated with peanut traces. As a result, HSI images were reduced to score images with enhanced contrast between peanut and flour particles. Finally, a threshold was fixed in the score images to obtain a binary classification image, and the percentage of peanut adulteration was compared with the percentage of pixels identified as peanut particles. This study allowed the detection of traces of peanut down to 0.01% and the quantification of peanut adulteration from 10% to 0.1% with a coefficient of determination (r2) of 0.946. These results show the feasibility of using HSI systems for the detection of peanut traces in conjunction with chemical procedures, such as RT-PCR and ELISA, to facilitate enhanced quality-control surveillance on food-product processing lines.
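The score-image construction can be sketched as follows, assuming PCA loadings already learned from reference peanut and flour spectra. Which principal component separates the two materials, and the threshold value, are assumptions; the abstract does not specify them.

```python
import numpy as np

def peanut_score_map(cube, loadings, mean_spectrum, threshold):
    """Project each pixel spectrum of an HSI cube (H, W, B) onto PCA
    loadings (B, n_components) learned from pure peanut/flour spectra,
    then threshold the contrast-enhancing component into a binary map."""
    H, W, B = cube.shape
    centred = cube.reshape(-1, B) - mean_spectrum   # centre with the training mean
    scores = centred @ loadings                     # (H*W, n_components) score matrix
    score_img = scores[:, 0].reshape(H, W)          # assumed peanut/flour component
    binary = score_img > threshold                  # binary classification image
    adulteration_pct = 100.0 * binary.mean()        # % of pixels flagged as peanut
    return score_img, binary, adulteration_pct
```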
Abstract:
Different treatments (consolidants and water repellents) were applied to samples of marble and granite from the front stage of the Roman Theatre of Merida (Spain). The main goal is to study the effects of these treatments on archaeological stone material by analyzing the surface changes. X-Ray Fluorescence and Laser-Induced Breakdown Spectroscopy, as well as Nuclear Magnetic Resonance, have been used to study changes in the surface properties of the material, comparing treated and untreated specimens. The results confirm that tracking the silicon (Si) marker allows the detection of the applied treatments, the peak signal increasing in treated specimens. Furthermore, it is also possible to prove changes both within the pore system of the material and in the distribution of surface water resulting from the application of these products.
Abstract:
This work describes an acoustic system that allows the automatic detection and location of mechanical impacts on metal-based structures, suitable for robotics and industrial applications. The system is based on the propagation time delays of the acoustic waves along the structure, and it determines when and where the impact was produced by means of piezoelectric sensors and an electronic-computerized system. For impact distances of 40 cm and 50 cm we obtained time delays of 2 µs and 72 µs, respectively.
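The time-delay principle can be illustrated for the one-dimensional case with two sensors at the ends of a bar. The geometry, wave speed and numbers in the example are illustrative, not the paper's setup.

```python
def impact_position(dt, L, v):
    """Locate an impact on a 1-D metal bar from the arrival-time difference
    between piezoelectric sensors at its two ends.

    dt : arrival time at sensor B minus arrival time at sensor A (s)
    L  : distance between the sensors (m)
    v  : acoustic wave speed in the structure (m/s)
    Returns the impact position measured from sensor A (m)."""
    # The wave reaches A after x/v and B after (L - x)/v, so
    # dt = (L - x)/v - x/v  =>  x = (L - v*dt) / 2
    return (L - v * dt) / 2.0

# Illustrative values: a 1 m steel bar with v ~ 5000 m/s and dt = 72 µs
# give x = (1 - 5000 * 72e-6) / 2 = 0.32 m from sensor A.
print(impact_position(72e-6, 1.0, 5000.0))
```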
Abstract:
Cognitive Wireless Sensor Networks (CWSNs) are a new paradigm that integrates cognitive features into traditional Wireless Sensor Networks (WSNs) to mitigate important problems such as spectrum occupancy. Security in CWSNs is an important problem because these networks manage critical applications and data, and the specific constraints of WSNs make the problem even more critical. However, effective solutions have not been implemented yet. Among the specific attacks derived from the new cognitive features, the most studied one is the Primary User Emulation (PUE) attack. This paper discusses a new approach, based on anomalous behavior detection and collaboration, to detect the PUE attack in CWSN scenarios. A nonparametric CUSUM algorithm, suitable for low-resource networks like CWSNs, has been used in this work. The algorithm has been tested using a cognitive simulator, yielding important results in this area. For example, the results show that the number of collaborating nodes is the most important parameter for improving the PUE attack detection rate: if 20% of the nodes collaborate, PUE detection reaches 98% with less than 1% false positives.
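A nonparametric CUSUM detector of the kind mentioned can be sketched in a few lines. The monitored metric (for example, a received-signal-strength statistic per channel) and the drift/threshold parameters are assumptions; the abstract does not specify them.

```python
def cusum_detector(samples, mu0, drift, threshold):
    """Nonparametric CUSUM change detector: accumulate positive deviations
    of an observed metric (e.g., per-channel RSSI) from its nominal mean
    mu0, and raise an alarm when the cumulative sum exceeds a threshold."""
    s = 0.0
    for t, x in enumerate(samples):
        s = max(0.0, s + (x - mu0 - drift))  # drift keeps noise from accumulating
        if s > threshold:
            return t      # sample index at which a possible PUE attack is flagged
    return None           # no anomaly detected
```

Collaboration then amounts to fusing the alarms (or the cumulative sums) reported by neighbouring nodes before the final decision, which is what drives the detection rates quoted above.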
Abstract:
The detection and expansion of Pine Wilt Disease (PWD), caused by Bursaphelenchus xylophilus (Nematoda: Aphelenchoididae), the pine wood nematode (PWN), in southwestern Europe since 1999 has triggered the study of the phenology and dispersal of its only known vector in the continent, Monochamus galloprovincialis (Coleoptera: Cerambycidae). The analysis of 12 emergence series registered between 2010 and 2014 in Palencia, Teruel and Valencia (Spain), from field-colonized material collected at several locations of the Iberian Peninsula, showed high variability in the emergence phenology of M. galloprovincialis. In addition, these patterns showed a very acute thermal divergence with respect to a development model fitted earlier in Portugal. That model forecasted the emergence of 50% of M. galloprovincialis individuals in the Setúbal Peninsula (Portugal) when an average of 822 degree-days (DD) was reached, based on the accumulation of heat from the 1st of March until emergence and on lower and upper development thresholds of 12.2 °C and 33.5 °C, respectively. In our results, all analyzed series needed fewer than 822 DD to complete 50% of the emergence. Also, emergence occurred earlier in the hottest regions, while it was delayed in more temperate areas. Beyond the possible variability between local populations, the difference in heat accumulation during the fall season may have affected the degree of maturation of overwintering larvae and, subsequently, the temporal pattern of M. galloprovincialis emergences. These results therefore suggest the need for local management strategies against the PWN vector differentiated by location and by the climatic variables of each region. Finally, protandrous emergence patterns were observed for M. galloprovincialis in most of the studied datasets. Regarding the flight phenology of M. galloprovincialis, a total of 8 trapping experiments were carried out in different regions of the Iberian Peninsula (Castellón, Teruel, Segovia and Alicante) between 2010 and 2015. The use of commercial lures and traps allowed monitoring of the flight period of M. galloprovincialis, and the analysis of the resulting flight curves confirmed several aspects. First, a decline in the number of catches and a shortening of the flight period were observed as altitude increased.
The flight period was recorded to start in May/June, once the daily average temperature rose above 14 °C. A significant influence of high temperatures on the decrease of catches in the summer was found on many occasions, which frequently led to a bimodal profile of the flight curves in warm areas. The evolution of the sex ratio along the flight period shows a greater capture of females at the beginning of the period and of males at the end. In addition, the circadian response of M. galloprovincialis to lured traps was described for the first time, concluding that the insect is diurnal and that its flight is linked to high temperatures. Two networks for systematic sampling of saproxylic insects were installed in the Region of Valencia (Red MUFFET, 15 plots, 2013) and Murcia (Red ICPF, 20 plots, 2008-2010). These networks, intended to serve the double purpose of early detection and long-term monitoring of saproxylic beetle assemblages, allowed the study of the insect communities related to M. galloprovincialis. Each of the plots had a trap baited with attractants and a weather station. The registration of almost 300 species of saproxylic beetles demonstrated the potential of such trapping networks for the early detection of exotic organisms, while at the same time allowing the characterization and evaluation of useful entomological fauna communities, representing one of the best tools for integrated pest management. In this particular case, the studied community of saproxylic beetles was very homogeneous with respect to the environmental variation of the sampling areas, and despite small variations between the communities of different ecosystems, the role that M. galloprovincialis plays in them across the studied gradient seems to be the same. However, the analysis through food webs showed the ecological significance of M. galloprovincialis as a connector between different trophic levels. Finally, 12 mark-release-recapture experiments were carried out between 2009 and 2012 in Castellón, Teruel, Valencia and Murcia (Spain) with the aim of describing the dispersive behavior of M. galloprovincialis as well as the stand and landscape characteristics that could influence its abundance and dispersal. No insects younger than 8 days were caught in lured traps. Population abundance estimates from mark-release-recapture data seemed related to forest continuity, naturalization, and the prior presence of forest fires. On the other hand, the dispersal of M. galloprovincialis was not found to be significantly influenced by the direction or intensity of the prevailing winds. The abundance of host material, closely related to stand characteristics and spacing indexes, influenced insect abundance in fragmented landscapes. In addition, the location of the traps optimized the number of catches when they were placed at the edge of the forest stands and in visible positions. Finally, it was also found that M. galloprovincialis is able to fly up to 1500 m/day, reaching maximum distances of up to 13600 m or 22100 m.
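The degree-day model referenced for the Portuguese populations can be sketched as a simple linear accumulation between the two thermal thresholds. The exact formula used by the original model (for example, any sine-wave correction) is not given in the abstract, so a plain linear-above-threshold rule over daily mean temperatures is assumed.

```python
def accumulated_degree_days(daily_mean_temps, lower=12.2, upper=33.5):
    """Linear degree-day accumulation between the lower and upper
    development thresholds, summed over daily mean temperatures (°C)
    starting from the 1st of March."""
    total = 0.0
    for t in daily_mean_temps:
        total += max(0.0, min(t, upper) - lower)  # heat above `upper` adds nothing extra
    return total

def predicted_median_emergence(daily_mean_temps, target_dd=822.0):
    """Day index (0 = 1 March) on which the Portuguese model would predict
    50% emergence: the first day the running sum reaches 822 DD."""
    total = 0.0
    for day, t in enumerate(daily_mean_temps):
        total += max(0.0, min(t, 33.5) - 12.2)
        if total >= target_dd:
            return day
    return None  # threshold never reached within the series
```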