915 results for "CCD cameras"
Abstract:
During the Eocene-Oligocene transition (EOT, ca. 34 Ma), Earth's climate cooled significantly from a greenhouse to an icehouse state, while the calcite (CaCO3) compensation depth (CCD) in the Pacific Ocean deepened rapidly. Fluctuations in the CCD could result from various processes that create an imbalance between calcium (Ca) sources to, and sinks from, the ocean (e.g., weathering and CaCO3 deposition), with different effects on the isotopic composition of dissolved Ca in the oceans due to differences in the Ca isotopic composition of the various inputs and outputs. We used Ca isotope ratios (δ44/40Ca) of coeval pelagic marine barite and bulk carbonate to evaluate changes in the marine Ca cycle across the EOT. We show that the permanent deepening of the CCD was not accompanied by a pronounced change in seawater δ44/40Ca, whereas time intervals in the Neogene with smaller carbonate depositional changes are characterized by seawater δ44/40Ca shifts. This suggests that the response of seawater δ44/40Ca to changes in weathering fluxes and to imbalances in the oceanic alkalinity budget depends on the chemical composition of seawater. A minor and transient fluctuation in the Ca isotope ratio of bulk carbonate may reflect a change in isotopic fractionation associated with CaCO3 precipitation from seawater due to a combination of factors, including changes in temperature and/or in the assemblages of calcifying organisms.
Abstract:
Five sites were drilled along a transect of the Walvis Ridge. The basement rocks range in age from 69 to 71 m.y., and the deeper sites are slightly younger, in agreement with the sea-floor-spreading magnetic lineations. Geophysical and petrological evidence indicates that the Walvis Ridge was formed at a mid-ocean ridge at anomalously shallow elevations. The basement complex, associated with the relatively smooth acoustic basement in the area, consists of pillowed basalt and massive flows alternating with nannofossil chalk and limestone that contain a significant volcanogenic component. Basalts are quartz tholeiites at the ridge crest and olivine tholeiites downslope. The sediment sections are dominated by carbonate oozes and chalks with volcanogenic material common in the lower parts of the sediment columns. The volcanogenic sediments probably were derived from sources on the Walvis Ridge. Paleodepth estimates based on the benthic fauna are consistent with a normal crustal-cooling rate of subsidence of the Walvis Ridge. The shoalest site in the transect sank below sea level in the late Paleocene, and benthic fauna suggest a rapid sea-level lowering in the mid-Oligocene. Average accumulation rates during the Cenozoic indicate three peaks in the rate of supply of carbonate to the sea floor, that is, early Pliocene, late middle Miocene, and late Paleocene to early Eocene. Carbonate accumulation rates for the rest of the Cenozoic averaged 1 g/cm²/kyr. Dissolution had a marked effect on sediment accumulation in the deeper sites, particularly during the late Miocene, Oligocene, and middle to late Eocene. Changes in the rates of accumulation as a function of depth demonstrate that the upper part of the water column had a greater degree of undersaturation with respect to carbonate during times of high productivity. Even when the calcium carbonate compensation depth (CCD) was below 4400 m, a significant amount of carbonate was dissolved at the shallower sites.
The flora and fauna of the Walvis Ridge are temperate in nature. Warmer-water faunas are found in the uppermost Maastrichtian and lower Eocene sediments, with cooler-water faunas present in the lower Paleocene, Oligocene, and middle Miocene. The boreal elements of the lower Pliocene are replaced by more temperate forms in the middle Pliocene. The Cretaceous-Tertiary boundary was recovered in four sites drilled, with the sediments containing well-preserved nannofossils but poorly preserved foraminifera.
Abstract:
Viewing and interacting with 3D models is a long-standing capability; however, vision-based 3D modeling has seen only limited success in applications, as it faces many technical challenges. Hand-held mobile devices have changed the way we interact with virtual reality environments. Their high mobility and technical features, such as inertial sensors, cameras and fast processors, are especially attractive for advancing the state of the art in virtual reality systems. Moreover, their ubiquity and fast Internet connections open a path to distributed and collaborative development; however, this path has not been fully explored in many domains. VR systems for real-world engineering contexts are still difficult to use, especially when geographically dispersed engineering teams need to collaboratively visualize and review 3D CAD models. Another challenge is rendering these environments at the required interactive rates and with high fidelity. This document presents a mobile virtual reality system for the visualization, navigation and review of large-scale 3D CAD models, developed under the CEDAR (Collaborative Engineering Design and Review) project. It focuses on interaction using different navigation modes. The system uses the mobile device's inertial sensors and camera to allow users to navigate through large-scale models. IT professionals, architects, civil engineers and oil industry experts took part in a qualitative assessment of the CEDAR system, in the form of direct user interaction with the prototypes and audio-recorded interviews about them. The lessons learned are valuable and are presented in this document. Subsequently, a quantitative study of the different navigation modes was conducted to determine which mode is best suited to a given situation.
Abstract:
The major technological advances of recent years have created strong demand for new and efficient computer vision applications. On the one hand, the increasing use of video editing software has given rise to a need for faster and more efficient editing tools that, as a first step, perform a temporal segmentation into shots. On the other hand, the number of electronic devices with integrated cameras has grown enormously. These devices require new, fast, and efficient computer vision applications that include moving object detection strategies. In this dissertation, we propose a temporal segmentation strategy and several moving object detection strategies, which are suitable for the latest generation of computer vision applications requiring both low computational cost and high-quality results. First, a novel real-time, high-quality shot detection strategy is proposed. While abrupt transitions are detected through a very fast pixel-based analysis, gradual transitions are obtained from an efficient edge-based analysis. Both analyses are reinforced with a motion analysis that detects and discards false detections. This analysis is carried out exclusively over a reduced set of candidate transitions, thus containing the computational requirements. In addition, a moving object detection strategy based on the popular Mixture of Gaussians method is proposed. This strategy, taking into account the recent history of each image pixel, dynamically adapts the number of Gaussians required to model its variations. As a result, we significantly improve computational efficiency with respect to other similar methods and, additionally, reduce the influence of the parameters on the results.
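The per-pixel adaptation described above can be sketched as a Stauffer-Grimson-style mixture whose component count grows and shrinks with the pixel's recent history. This is a minimal single-pixel sketch, not the dissertation's actual model: the class name, learning rate, matching threshold and pruning weight are all illustrative assumptions.

```python
import math

class AdaptiveMoGPixel:
    """Background model for one pixel: a mixture of Gaussians whose
    number of components adapts to the pixel's recent history (sketch)."""
    def __init__(self, max_k=5, alpha=0.05, match_sigmas=2.5, w_prune=0.01):
        self.max_k = max_k              # upper bound on Gaussians per pixel
        self.alpha = alpha              # learning rate (illustrative)
        self.match_sigmas = match_sigmas
        self.w_prune = w_prune          # drop components lighter than this
        self.g = []                     # list of [weight, mean, var]

    def update(self, x):
        """Update the model with intensity x; return True if x is background."""
        matched = None
        for comp in self.g:
            w, mu, var = comp
            if abs(x - mu) <= self.match_sigmas * math.sqrt(var):
                matched = comp
                break
        for comp in self.g:                       # decay all weights
            comp[0] *= (1.0 - self.alpha)
        if matched is not None:
            matched[0] += self.alpha
            matched[1] += self.alpha * (x - matched[1])            # mean
            matched[2] += self.alpha * ((x - matched[1]) ** 2 - matched[2])
            matched[2] = max(matched[2], 1.0)     # crude moment update (sketch)
        elif len(self.g) < self.max_k:
            self.g.append([self.alpha, float(x), 30.0])
        else:                                     # replace weakest component
            weakest = min(self.g, key=lambda c: c[0])
            weakest[:] = [self.alpha, float(x), 30.0]
        # adapt the component count: prune negligible Gaussians
        self.g = [c for c in self.g if c[0] >= self.w_prune]
        total = sum(c[0] for c in self.g)
        for c in self.g:
            c[0] /= total
        # heavy, low-variance components are the most "background-like"
        self.g.sort(key=lambda c: c[0] / math.sqrt(c[2]), reverse=True)
        return matched is not None and matched[0] > 0.2
```

Feeding a stable intensity makes the pixel background; a sudden outlier spawns a fresh low-weight component and is flagged foreground until it persists.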
Alternatively, in order to improve the quality of the results in complex scenarios containing dynamic backgrounds, we propose several non-parametric moving object detection strategies that model both background and foreground. To obtain high-quality results regardless of the characteristics of the analyzed sequence, we dynamically estimate the most adequate bandwidth matrices for the kernels used in the background and foreground modeling. Moreover, the application of a particle filter makes it possible to update the spatial information and provides a priori knowledge about the areas to analyze in the following images, enabling an important reduction in the computational requirements and improving the segmentation results. Additionally, we propose the use of an innovative combination of chromaticity and gradients that reduces the influence of shadows and reflections on the detections.
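The non-parametric background/foreground competition can be illustrated with scalar kernel density estimates. This is a deliberately reduced sketch: the dissertation's dynamically estimated bandwidth matrices and particle-filter updates are replaced here by fixed, invented scalar bandwidths and a fixed foreground prior.

```python
import math

def kde_likelihood(x, samples, bandwidth):
    """Gaussian kernel density estimate of intensity x from stored samples."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                      for s in samples)

def classify_pixel(x, bg_samples, fg_samples, bg_bw=5.0, fg_bw=15.0,
                   prior_fg=0.3):
    """Label a pixel foreground when its foreground posterior dominates
    the background posterior (both models are kept, as in the text)."""
    p_bg = kde_likelihood(x, bg_samples, bg_bw) * (1.0 - prior_fg)
    p_fg = kde_likelihood(x, fg_samples, fg_bw) * prior_fg
    return "foreground" if p_fg > p_bg else "background"
```

With background samples near one intensity and foreground samples near another, pixels are labelled by whichever model explains them better.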
Abstract:
In this paper, a novel approach for obtaining 3D models from video sequences captured with hand-held cameras is presented. We define a pipeline that robustly deals with different types of sequences and acquisition devices. Our system follows a divide-and-conquer approach: after a frame decimation that pre-conditions the input sequence, the video is split into short-length clips. This makes it possible to parallelize the reconstruction step, which translates into a reduction in the amount of computational resources required. The short length of the clips allows an intensive search for the best solution at each step of the reconstruction, which makes the system more robust. Unlike other approaches, the feature tracking process is embedded within the reconstruction loop for each clip. A final registration step merges all the processed clips into the same coordinate frame.
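The clip-splitting step can be sketched as partitioning the frame indices into overlapping windows, so that the final registration step can align neighbouring clips through their shared frames. The clip length and overlap values below are illustrative, not the paper's parameters.

```python
def make_clips(n_frames, clip_len=30, overlap=5):
    """Partition frame indices 0..n_frames-1 into short clips that share
    `overlap` frames with their neighbours, so each clip can be
    reconstructed independently and later registered into one frame."""
    assert clip_len > overlap >= 1
    clips, start = [], 0
    while True:
        end = min(start + clip_len, n_frames)
        clips.append(list(range(start, end)))
        if end == n_frames:
            return clips
        start = end - overlap   # step back so consecutive clips overlap
```

Each clip can then be handed to an independent reconstruction worker; the shared frames give the registration step common 3D points to solve for the similarity transform between clips.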
Abstract:
The penalty corner is one of the most important game situations in field hockey, with one third of all goals resulting from this tactical situation. The aim of this study was to develop and apply a training method, based on previous studies, to improve the drag-flick skill of a young top-class field hockey player. The player exercised three times per week using specific drills over a four-week period. A VICON optoelectronic system (Oxford Metrics, Oxford, UK) with six cameras sampling at 250 Hz was employed to capture twenty drag-flicks before and after the training period. In order to analyze pre- and post-test differences, a dependent t-test was carried out. Angular velocities and the kinematic sequence were similar to those of previous studies. The player improved (albeit not significantly) the angular velocity of the stick. The player increased the front-foot-to-ball distance at T1 (p < 0.01) and the drag-flick distances. The range of motion of the front leg decreased from T1 to T6 after the training period (p < 0.01). The specific training sessions conducted with the player improved some features of this particular skill. This article shows how technical knowledge can help with the design of training programs and whether some drills are more effective than others.
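The pre/post comparison above reduces to a dependent (paired) t-test. The sketch below implements it from first principles on invented stick angular velocities; the numbers are not the study's data.

```python
import math

def paired_t(pre, post):
    """Dependent t-test: t statistic and degrees of freedom
    for paired pre/post samples."""
    assert len(pre) == len(post) and len(pre) > 1
    d = [b - a for a, b in zip(pre, post)]       # per-trial differences
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)
    t = mean_d / math.sqrt(var_d / n)            # t = mean / standard error
    return t, n - 1

# hypothetical stick angular velocities (rad/s), five flicks per session
pre  = [28.1, 27.5, 29.0, 28.4, 27.9]
post = [29.2, 28.1, 29.8, 29.0, 28.6]
t, df = paired_t(pre, post)
```

The resulting t is then compared against the critical value for n-1 degrees of freedom to decide significance, as in the study's p < 0.01 reports.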
Abstract:
This work is motivated by the growing interest in applications of unmanned aerial systems in indoor and outdoor settings and by the standardisation of visual sensors as vehicle payload. It presents a collision avoidance approach based on omnidirectional cameras that does not require estimating the range between two platforms to resolve a collision encounter. A minimum separation between the two vehicles involved is achieved by maximising the view angle given by the omnidirectional sensor. Only visual information is used to achieve avoidance, under a bearing-only visual servoing approach. We provide the theoretical problem formulation, as well as results from real flights using small quadrotors.
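One reading of the bearing-only idea can be sketched as a simple servo that needs no range estimate: steer so the intruder's bearing is pushed broadside, away from the direction of travel. The control law, gain and sign convention below are illustrative assumptions, not the paper's actual controller.

```python
import math

def avoidance_yaw_rate(bearing, k=1.5):
    """Bearing-only avoidance command (no range needed): drive the
    intruder's bearing toward +/- pi/2 so the own-ship stops closing.
    `bearing` is measured from the vehicle's heading, in (-pi, pi];
    positive command = yaw left (illustrative convention)."""
    target = math.copysign(math.pi / 2, bearing if bearing != 0 else 1.0)
    error = target - bearing
    return k * error
```

An intruder dead ahead produces a large turn command, while one already at the side (bearing ±π/2) produces none, which is the qualitative behaviour a view-angle-maximising servo should show.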
Abstract:
Multi-camera 3D tracking systems with overlapping cameras are a powerful means for scene analysis, as they potentially allow greater robustness than monocular systems and provide useful 3D information about object location and movement. However, their performance relies on accurately calibrated camera networks, which is not a realistic assumption in real surveillance environments. Here, we introduce a multi-camera system for tracking the 3D position of a varying number of objects while simultaneously refining the calibration of the network of overlapping cameras. To this end, we introduce a Bayesian framework that combines Particle Filtering for tracking with recursive Bayesian estimation methods by means of adapted transdimensional MCMC sampling. Additionally, the system has been designed to work on simple motion detection masks, making it suitable for camera networks with low transmission capabilities. Tests show that our approach performs successfully even when starting from clearly inaccurate camera calibrations, which would ruin conventional approaches.
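The Particle Filtering half of the framework can be sketched in its simplest form, a bootstrap filter for a 1D random-walk target observed with noise; the transdimensional MCMC calibration refinement is beyond a snippet. All noise levels and particle counts here are invented.

```python
import random, math

def particle_filter(measurements, n=500, q=0.5, r=1.0, seed=7):
    """Bootstrap particle filter for a 1D random-walk target.
    q: process noise std, r: measurement noise std (both assumed)."""
    rng = random.Random(seed)
    particles = [rng.gauss(measurements[0], r) for _ in range(n)]
    estimates = []
    for z in measurements:
        # predict: diffuse particles with process noise
        particles = [p + rng.gauss(0.0, q) for p in particles]
        # update: weight each particle by the measurement likelihood
        weights = [math.exp(-0.5 * ((z - p) / r) ** 2) for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # estimate: weighted mean of the particle cloud
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # resample (multinomial) to avoid weight degeneracy
        particles = rng.choices(particles, weights=weights, k=n)
    return estimates

true_track = [0.1 * t for t in range(50)]            # target drifting right
noise = random.Random(1)
zs = [x + noise.gauss(0.0, 1.0) for x in true_track]
est = particle_filter(zs)
```

The filter's estimates track the drifting target with error well below the raw measurement noise, which is the behaviour the multi-camera system exploits per object.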
Abstract:
Lately, the short-wave infrared (SWIR) band has become very important due to the recent appearance on the market of small detectors with large focal plane arrays. Military applications for SWIR cameras include handheld and airborne systems with long-range detection requirements, where volume and weight restrictions must also be considered. In this paper we present three telephoto objectives designed according to three different methods. The first is the conventional method, in which the starting point is an existing design. The second starts from the design of an aplanatic system. The third is the simultaneous multiple surfaces (SMS) method, in which the starting point is the input wavefronts that we choose. The designs are compared in terms of optical performance, volume, weight and manufacturability. Because the objectives have been designed for the SWIR waveband, color correction has important implications for the choice of glass, which will be discussed in detail.
Abstract:
In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability and flexibility. The software system is modular and its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, and each PU manages the acquisition phase and the processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. As a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object detection modules in a real-time scenario. System performance has been evaluated under different load conditions, such as the number of cameras and image sizes. The results show that the software architecture scales well with the number of cameras and can easily work with different image formats while respecting real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.
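The PU / Central Unit split can be sketched as a supervisor that fans per-camera frames out to worker units and gathers their results. A thread pool stands in for the paper's processing units, and the `processing_unit` body is a dummy stand-in for a real detector; all names are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def processing_unit(camera_id, frame):
    """Stand-in PU: acquisition is assumed done; run the processing phase.
    The 'detection' here is a dummy computation for illustration."""
    detections = sum(frame) % 7
    return camera_id, detections

class CentralUnit:
    """Supervisor: dispatches frames to PUs and collects their results."""
    def __init__(self, n_units=4):
        self.pool = ThreadPoolExecutor(max_workers=n_units)

    def process_batch(self, frames_by_camera):
        futures = [self.pool.submit(processing_unit, cid, f)
                   for cid, f in frames_by_camera.items()]
        return dict(f.result() for f in futures)

cu = CentralUnit(n_units=4)
out = cu.process_batch({0: [1, 2, 3], 1: [4, 5, 6], 2: [7, 8, 9]})
```

Scaling with the number of cameras then amounts to sizing the worker pool; in the paper's architecture the PUs would be separate processes or hosts rather than threads.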
Abstract:
A fully 3D iterative image reconstruction algorithm has been developed for high-resolution PET cameras composed of pixelated scintillator crystal arrays and rotating planar detectors, based on the ordered subsets approach. The associated system matrix is precalculated with Monte Carlo methods that incorporate physical effects not included in analytical models, such as positron range effects and interaction of the incident gammas with the scintillator material. Custom Monte Carlo methodologies have been developed and optimized for modelling of system matrices for fast iterative image reconstruction adapted to specific scanner geometries, without redundant calculations. According to the methodology proposed here, only one-eighth of the voxels within two central transaxial slices need to be modelled in detail. The rest of the system matrix elements can be obtained with the aid of axial symmetries and redundancies, as well as in-plane symmetries within transaxial slices. Sparse matrix techniques for the non-zero system matrix elements are employed, allowing for fast execution of the image reconstruction process. This 3D image reconstruction scheme has been compared in terms of image quality to a 2D fast implementation of the OSEM algorithm combined with Fourier rebinning approaches. This work confirms the superiority of fully 3D OSEM in terms of spatial resolution, contrast recovery and noise reduction as compared to conventional 2D approaches based on rebinning schemes. At the same time it demonstrates that fully 3D methodologies can be efficiently applied to the image reconstruction problem for high-resolution rotational PET cameras by applying accurate pre-calculated system models and taking advantage of the system's symmetries.
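The ordered-subsets update at the core of the algorithm can be sketched in a few lines. A tiny dense random matrix stands in for the precalculated Monte Carlo sparse system matrix, and the subset split, iteration counts and phantom are all illustrative.

```python
import numpy as np

def osem(A, y, n_subsets=4, n_iter=20):
    """Ordered-subsets EM: A is the (n_lors x n_voxels) system matrix,
    y the measured counts.  Rows (LORs) are split into subsets; each
    sub-iteration applies the multiplicative EM update on one subset."""
    n_lors, n_vox = A.shape
    x = np.ones(n_vox)                            # nonnegative start image
    subsets = [np.arange(s, n_lors, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:
            As, ys = A[rows], y[rows]
            proj = As @ x                         # forward-project
            proj[proj == 0] = 1e-12               # guard divisions
            sens = As.sum(axis=0)                 # subset sensitivity image
            sens[sens == 0] = 1e-12
            x *= (As.T @ (ys / proj)) / sens      # multiplicative update
    return x

rng = np.random.default_rng(0)
A = rng.random((64, 8))                           # toy system matrix
x_true = np.array([0., 2., 0., 5., 1., 0., 3., 0.])
y = A @ x_true                                    # noiseless toy counts
x_hat = osem(A, y)
```

The update is multiplicative, so the image stays nonnegative throughout, and cycling through subsets accelerates convergence relative to plain EM, which is the practical appeal noted in the abstract.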
Abstract:
An important goal in the field of intelligent transportation systems (ITS) is to provide driving aids aimed at preventing accidents and reducing the number of traffic victims. The commonest traffic accidents in urban areas are due to sudden braking, which demands a very fast response on the part of drivers. Attempts to solve this problem have motivated many ITS advances, including the detection of the intentions of surrounding cars using lasers, radars or cameras. However, this might not be enough to increase safety when there is a danger of collision; vehicle-to-vehicle communications are needed to ensure that the intentions of other cars are also available. This article describes the development of a controller to perform an emergency stop via an electro-hydraulic braking system on dry asphalt. An original V2V communication scheme based on WiFi cards has been used for broadcasting positioning information to other vehicles. The reliability of the scheme has been theoretically analyzed to estimate its performance when the number of vehicles involved is much higher. The controller has been incorporated into the AUTOPIA program control for automatic cars. The system has been implemented in a Citroën C3 Pluriel, and various tests were performed to evaluate its operation.
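The positioning broadcast can be sketched as a fixed-layout datagram payload that every vehicle periodically sends over WiFi. The field layout below is entirely hypothetical, not AUTOPIA's actual message format.

```python
import struct

# hypothetical V2V beacon: vehicle id, UTM x/y (m), speed (m/s), braking flag
V2V_FMT = "!IddfB"   # network byte order, no padding

def encode_beacon(vid, x, y, speed, braking):
    """Pack one positioning beacon into bytes ready for a UDP broadcast."""
    return struct.pack(V2V_FMT, vid, x, y, speed, 1 if braking else 0)

def decode_beacon(payload):
    """Unpack a received beacon back into a dictionary."""
    vid, x, y, speed, braking = struct.unpack(V2V_FMT, payload)
    return {"id": vid, "x": x, "y": y, "speed": speed,
            "braking": bool(braking)}

msg = encode_beacon(7, 440123.5, 4472301.2, 12.5, True)
info = decode_beacon(msg)
```

A receiving vehicle that sees `braking=True` from a car ahead can trigger its own emergency-stop controller without waiting to observe the deceleration visually, which is the point of the V2V link.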
Abstract:
This project, entitled "Caracterización de colectores para concentración fotovoltaica" (Characterization of collectors for concentrator photovoltaics), consists of a LabVIEW application for characterizing the optical elements used in concentrator photovoltaic systems, based on the spatial distribution of the concentrated light spot they generate. A concentrator photovoltaic system uses an optical system to transmit light radiation to the solar cell, increasing the luminous power density. These optical systems are formed by mirrors or lenses that collect the incident radiation and concentrate the light beam onto a much smaller surface. In this way the area of semiconductor material needed can be reduced, which implies a significant reduction in system cost. Different concentration systems can be distinguished depending on the optics employed, the receiver structure or the concentration range. However, since the objective is to analyze the spatial distribution, we distinguish two types of concentrator according to the geometry of the light spot: the linear or cylindrical concentrator, which focuses onto a line, and the point-focus or circular concentrator, which focuses the light onto a point. Because of this difference, the analysis is carried out differently in each case.
The analysis is performed by processing an image of the focal spot taken at the receiver location, a method called LS-CCD (Light Scattering and CCD recording). It can be used in several setups, depending on whether the image is captured by reflection or by transmission at the receiver. In some setups it is not possible to capture the image perpendicular to the receiver, so the application performs a perspective correction to recover the original shape of the spot. The focal-spot image provides detailed information about the uniformity of the spot through the surface map, a 3D representation of the image which is, however, unwieldy. A simpler and more useful representation is given by the so-called "intensity profiles": the intensity profile, or irradiance distribution, which represents the distribution of light as a function of the distance to the center, and the accumulated profile, or accumulated irradiance, which represents the light contained, also with respect to the center. The representations of these profiles differ for linear and circular concentrators because of their different geometries: for a linear focus the profile is expressed as a function of the semi-width of the receiver, whereas for a circular one it is expressed as a function of the radius. In either case they provide the information on the uniformity and size of the light spot needed to design the receiver.
The objective of this project is the creation of a software application that performs the processing and analysis of the focal-spot images obtained from the optical systems to be characterized. The application has a simple and intuitive interface so that it can be used by any user. The resources required for the project are: a PC with a Windows operating system, LabVIEW 8.6 Professional Edition, and the modules NI Vision Development Module (for working with images) and NI Report Generation Toolkit (for generating reports and saving application data).
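For a circular (point-focus) concentrator, the two profiles described above amount to radial binning of the focal-spot image about its centroid. The sketch below uses a synthetic Gaussian spot as a stand-in for a captured CCD frame; bin counts and spot parameters are illustrative, and the project's actual LabVIEW processing is not reproduced here.

```python
import numpy as np

def intensity_profiles(img, n_bins=50):
    """Radial irradiance profile and normalized accumulated irradiance
    about the image centroid, for a point-focus concentrator."""
    h, w = img.shape
    ys, xs = np.indices((h, w))
    total = img.sum()
    cy, cx = (img * ys).sum() / total, (img * xs).sum() / total
    r = np.hypot(ys - cy, xs - cx)                 # distance to centroid
    edges = np.linspace(0, r.max(), n_bins + 1)
    which = np.clip(np.digitize(r.ravel(), edges) - 1, 0, n_bins - 1)
    flux = np.bincount(which, weights=img.ravel(), minlength=n_bins)
    area = np.bincount(which, minlength=n_bins)
    profile = np.where(area > 0, flux / np.maximum(area, 1), 0.0)
    accumulated = np.cumsum(flux) / total          # encircled energy
    radii = 0.5 * (edges[:-1] + edges[1:])
    return radii, profile, accumulated

# synthetic Gaussian focal spot standing in for a CCD image
ys, xs = np.indices((101, 101))
img = np.exp(-(((ys - 50) ** 2 + (xs - 50) ** 2) / (2 * 8.0 ** 2)))
radii, profile, acc = intensity_profiles(img)
```

The `profile` curve shows the irradiance distribution versus radius, and `acc` is the accumulated irradiance, which rises monotonically to 1; its knee tells the designer how large the receiver must be to capture a given fraction of the light.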
Abstract:
Infrared (IR) thermography is a technique that allows thermal images of objects or human beings to be obtained rapidly and non-invasively (Barnes, 1967). In medicine, its usefulness as a diagnostic tool was accepted decades ago (BenEliyahu, 1990), but techniques with higher efficiency, such as magnetic resonance or X-rays, ousted it. Nevertheless, technological improvements in thermographic cameras and new studies on sports injuries are reinforcing new applications (Ring, 2006).
Abstract:
El interés cada vez mayor por las redes de sensores inalámbricos pueden ser entendido simplemente pensando en lo que esencialmente son: un gran número de pequeños nodos sensores autoalimentados que recogen información o detectan eventos especiales y se comunican de manera inalámbrica, con el objetivo final de entregar sus datos procesados a una estación base. Los nodos sensores están densamente desplegados dentro del área de interés, se pueden desplegar al azar y tienen capacidad de cooperación. Por lo general, estos dispositivos son pequeños y de bajo costo, de modo que pueden ser producidos y desplegados en gran numero aunque sus recursos en términos de energía, memoria, velocidad de cálculo y ancho de banda están enormemente limitados. Detección, tratamiento y comunicación son tres elementos clave cuya combinación en un pequeño dispositivo permite lograr un gran número de aplicaciones. Las redes de sensores proporcionan oportunidades sin fin, pero al mismo tiempo plantean retos formidables, tales como lograr el máximo rendimiento de una energía que es escasa y por lo general un recurso no renovable. Sin embargo, los recientes avances en la integración a gran escala, integrado de hardware de computación, comunicaciones, y en general, la convergencia de la informática y las comunicaciones, están haciendo de esta tecnología emergente una realidad. Del mismo modo, los avances en la nanotecnología están empezando a hacer que todo gire entorno a las redes de pequeños sensores y actuadores distribuidos. Hay diferentes tipos de sensores tales como sensores de presión, acelerómetros, cámaras, sensores térmicos o un simple micrófono. 
Supervisan las condiciones presentes en diferentes lugares tales como la temperatura, humedad, el movimiento, la luminosidad, presión, composición del suelo, los niveles de ruido, la presencia o ausencia de ciertos tipos de objetos, los niveles de tensión mecánica sobre objetos adheridos y las características momentáneas tales como la velocidad , la dirección y el tamaño de un objeto, etc. Se comprobara el estado de las Redes Inalámbricas de Sensores y se revisaran los protocolos más famosos. Así mismo, se examinara la identificación por radiofrecuencia (RFID) ya que se está convirtiendo en algo actual y su presencia importante. La RFID tiene un papel crucial que desempeñar en el futuro en el mundo de los negocios y los individuos por igual. El impacto mundial que ha tenido la identificación sin cables está ejerciendo fuertes presiones en la tecnología RFID, los servicios de investigación y desarrollo, desarrollo de normas, el cumplimiento de la seguridad y la privacidad y muchos más. Su potencial económico se ha demostrado en algunos países mientras que otros están simplemente en etapas de planificación o en etapas piloto, pero aun tiene que afianzarse o desarrollarse a través de la modernización de los modelos de negocio y aplicaciones para poder tener un mayor impacto en la sociedad. Las posibles aplicaciones de redes de sensores son de interés para la mayoría de campos. La monitorización ambiental, la guerra, la educación infantil, la vigilancia, la micro-cirugía y la agricultura son solo unos pocos ejemplos de los muchísimos campos en los que tienen cabida las redes mencionadas anteriormente. Estados Unidos de América es probablemente el país que más ha investigado en esta área por lo que veremos muchas soluciones propuestas provenientes de ese país. Universidades como Berkeley, UCLA (Universidad de California, Los Ángeles) Harvard y empresas como Intel lideran dichas investigaciones. Pero no solo EE.UU. usa e investiga las redes de sensores inalámbricos. 
La Universidad de Southampton, por ejemplo, está desarrollando una tecnología para monitorear el comportamiento de los glaciares mediante redes de sensores que contribuyen a la investigación fundamental en glaciología y de las redes de sensores inalámbricos. Así mismo, Coalesenses GmbH (Alemania) y Zurich ETH están trabajando en diversas aplicaciones para redes de sensores inalámbricos en numerosas áreas. Una solución española será la elegida para ser examinada más a fondo por ser innovadora, adaptable y polivalente. Este estudio del sensor se ha centrado principalmente en aplicaciones de tráfico, pero no se puede olvidar la lista de más de 50 aplicaciones diferentes que ha sido publicada por la firma creadora de este sensor específico. En la actualidad hay muchas tecnologías de vigilancia de vehículos, incluidos los sensores de bucle, cámaras de video, sensores de imagen, sensores infrarrojos, radares de microondas, GPS, etc. El rendimiento es aceptable, pero no suficiente, debido a su limitada cobertura y caros costos de implementación y mantenimiento, especialmente este ultimo. Tienen defectos tales como: línea de visión, baja exactitud, dependen mucho del ambiente y del clima, no se puede realizar trabajos de mantenimiento sin interrumpir las mediciones, la noche puede condicionar muchos de ellos, tienen altos costos de instalación y mantenimiento, etc. Por consiguiente, en las aplicaciones reales de circulación, los datos recibidos son insuficientes o malos en términos de tiempo real debido al escaso número de detectores y su costo. Con el aumento de vehículos en las redes viales urbanas las tecnologías de detección de vehículos se enfrentan a nuevas exigencias. Las redes de sensores inalámbricos son actualmente una de las tecnologías más avanzadas y una revolución en la detección de información remota y en las aplicaciones de recogida. Las perspectivas de aplicación en el sistema inteligente de transporte son muy amplias. 
Con este fin se ha desarrollado un programa de localización de objetivos y recuento utilizando una red de sensores binarios. Esto permite que el sensor necesite mucha menos energía durante la transmisión de información y que los dispositivos sean más independientes con el fin de tener un mejor control de tráfico. La aplicación se centra en la eficacia de la colaboración de los sensores en el seguimiento más que en los protocolos de comunicación utilizados por los nodos sensores. Las operaciones de salida y retorno en las vacaciones son un buen ejemplo de por qué es necesario llevar la cuenta de los coches en las carreteras. Para ello se ha desarrollado una simulación en Matlab con el objetivo localizar objetivos y contarlos con una red de sensores binarios. Dicho programa se podría implementar en el sensor que Libelium, la empresa creadora del sensor que se examinara concienzudamente, ha desarrollado. Esto permitiría que el aparato necesitase mucha menos energía durante la transmisión de información y los dispositivos sean más independientes. Los prometedores resultados obtenidos indican que los sensores de proximidad binarios pueden formar la base de una arquitectura robusta para la vigilancia de áreas amplias y para el seguimiento de objetivos. Cuando el movimiento de dichos objetivos es suficientemente suave, no tiene cambios bruscos de trayectoria, el algoritmo ClusterTrack proporciona un rendimiento excelente en términos de identificación y seguimiento de trayectorias los objetos designados como blancos. Este algoritmo podría, por supuesto, ser utilizado para numerosas aplicaciones y se podría seguir esta línea de trabajo para futuras investigaciones. No es sorprendente que las redes de sensores de binarios de proximidad hayan atraído mucha atención últimamente ya que, a pesar de la información mínima de un sensor de proximidad binario proporciona, las redes de este tipo pueden realizar un seguimiento de todo tipo de objetivos con la precisión suficiente. 
Abstract: The increasing interest in wireless sensor networks can be readily understood simply by considering what they essentially are: a large number of small, self-powered sensing nodes that gather information or detect special events and communicate wirelessly, with the end goal of delivering their processed data to a base station. The sensor nodes are densely deployed inside the phenomenon of interest, their deployment is typically random, and they have cooperative capabilities. These devices are usually small and inexpensive, so they can be produced and deployed in large numbers; as a consequence, their resources in terms of energy, memory, computational speed and bandwidth are severely constrained. Sensing, processing and communication are three key elements whose combination in one tiny device gives rise to a vast number of applications. Sensor networks provide endless opportunities, but at the same time pose formidable challenges, such as the fact that energy is a scarce and usually non-renewable resource. However, recent advances in low-power Very Large Scale Integration, embedded computing, communication hardware and, in general, the convergence of computing and communications are making this emerging technology a reality. Likewise, advances in nanotechnology and Micro-Electro-Mechanical Systems are pushing toward networks of tiny distributed sensors and actuators. There are different types of sensors, such as pressure, accelerometer, camera, thermal and microphone sensors. They monitor conditions at different locations, such as temperature, humidity, vehicular movement, lighting conditions, pressure, soil makeup, noise levels, the presence or absence of certain kinds of objects, mechanical stress levels on attached objects, and the current characteristics of an object, such as its speed, direction and size. The state of the art in Wireless Sensor Networks will be reviewed and the best-known protocols surveyed.
As Radio Frequency Identification (RFID) is becoming ubiquitous and increasingly important nowadays, it will be examined as well. RFID has a crucial role to play for businesses and individuals alike going forward. The impact of 'wireless' identification is exerting strong pressure on RFID technology and services research and development, standards development, security compliance and privacy, and more. Its economic value is proven in some countries, while others are still at the planning or pilot stage; wider adoption has yet to take hold through the modernisation of business models and applications. Possible applications of sensor networks are of interest to the most diverse fields: environmental monitoring, warfare, child education, surveillance, micro-surgery and agriculture are only a few examples. Some real hardware applications in the United States of America will be examined, as it is probably the country that has invested the most in this area. Universities such as Berkeley, UCLA (University of California, Los Angeles) and Harvard, and enterprises such as Intel, are leading these investigations. But the USA is not alone in using and investigating wireless sensor networks: the University of Southampton, for example, is developing technology to monitor glacier behaviour using sensor networks, contributing to fundamental research in both glaciology and wireless sensor networks. Coalesenses GmbH (Germany) and ETH Zurich are also applying wireless sensor networks in many different areas. A Spanish solution will be the one examined most thoroughly, chosen for being innovative, adaptable and multipurpose. This study of the sensor focuses mainly on traffic applications, although the compilation of more than 50 different applications published by this specific sensor's firm should not be overlooked.
Currently there are many vehicle surveillance technologies, including loop sensors, video cameras, image sensors, infrared sensors, microwave radar, GPS, etc. Their performance is acceptable but not sufficient, because of their limited coverage and the expensive costs of implementation and maintenance, especially the latter. They have defects such as line-of-sight restrictions, low accuracy, strong dependence on environment and weather, the inability to operate non-stop day and night, and high installation and maintenance costs. Consequently, in actual traffic applications the received data are insufficient or poor in real-time terms, owing to the limited number of detectors and their cost. With the increase of vehicles in urban road networks, vehicle detection technologies are confronted with new requirements. Wireless sensor networks are a state-of-the-art technology and a revolution in remote information sensing and collection applications, with broad application prospects in intelligent transportation systems. An application for target tracking and counting using a network of binary sensors has been developed. The application is focused on the efficacy of collaborative tracking rather than on the communication protocols used by the sensor nodes. Holiday traffic is a good example of why it is necessary to keep count of the cars on the roads. To this end, a Matlab simulation has been produced for target tracking and counting using a network of binary sensors, which could, for example, be implemented on Libelium's solution; Libelium is the enterprise that developed the sensor that will be examined in depth. This would allow the device to spend much less energy when transmitting information and make the devices more independent, enabling better traffic control.
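The Matlab simulation itself is not reproduced in this abstract; as an illustration of the counting idea it describes, the following minimal Python sketch counts targets from binary proximity detections by grouping triggered sensors into connected clusters. All function names, the grid layout and the sensing radius are illustrative assumptions, not part of the original simulation or of Libelium's API.

```python
import math

def fired_sensors(sensors, targets, radius):
    """Each binary sensor reports 1 iff any target lies within its sensing radius."""
    return [s for s in sensors
            if any(math.dist(s, t) <= radius for t in targets)]

def count_clusters(fired, radius):
    """Group triggered sensors into connected components; each component
    is taken as evidence of (at least) one distinct target."""
    unvisited, clusters = set(fired), 0
    while unvisited:
        stack = [unvisited.pop()]
        clusters += 1
        while stack:
            s = stack.pop()
            near = {u for u in unvisited if math.dist(s, u) <= 2 * radius}
            unvisited -= near
            stack.extend(near)
    return clusters

# 5x5 sensor grid with two well-separated targets: the triggered
# sensors form two disjoint clusters, so the count is 2.
sensors = [(x, y) for x in range(5) for y in range(5)]
targets = [(0.5, 0.5), (3.5, 3.5)]
fired = fired_sensors(sensors, targets, radius=1.0)
print(count_clusters(fired, radius=1.0))  # -> 2
```

This only counts targets that are farther apart than the cluster-merging distance; closely spaced vehicles would collapse into one cluster, which is exactly the ambiguity that motivates the tracking algorithm discussed next.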
The promising results obtained indicate that binary proximity sensors can form the basis of a robust architecture for wide-area surveillance and tracking. When the target paths are smooth enough, the ClusterTrack particle filter algorithm gives excellent performance in identifying and tracking different target trajectories. This algorithm could, of course, be applied to other problems, a direction that could be pursued in future research. It is not surprising that binary proximity sensor networks have attracted a lot of attention lately: despite the minimal information a binary proximity sensor provides, networks of these sensors can track many different target classes with sufficient accuracy.
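The ClusterTrack algorithm itself is not specified in this abstract; to illustrate the underlying idea of particle filtering over binary proximity readings, here is a minimal bootstrap particle filter that localises a single static target. Every name and parameter here, including the 0.9/0.1 sensing-error model, is an illustrative assumption rather than the authors' algorithm.

```python
import math
import random

def binary_likelihood(particle, readings, sensors, radius):
    """Likelihood of the binary readings given a hypothesised target position:
    a sensor should read 1 iff the hypothesis lies within its sensing radius."""
    p = 1.0
    for s, r in zip(sensors, readings):
        inside = math.dist(particle, s) <= radius
        # allow 10% sensing error so no hypothesis gets exactly zero weight
        p *= 0.9 if inside == bool(r) else 0.1
    return p

def particle_filter_step(particles, readings, sensors, radius, step=0.3):
    """One bootstrap-filter iteration: diffuse the particles, weight them by
    the binary readings, then resample in proportion to the weights."""
    moved = [(x + random.gauss(0, step), y + random.gauss(0, step))
             for x, y in particles]
    weights = [binary_likelihood(p, readings, sensors, radius) for p in moved]
    return random.choices(moved, weights=weights, k=len(particles))

random.seed(0)
sensors = [(x, y) for x in range(5) for y in range(5)]
radius, target = 1.2, (1.0, 1.0)
readings = [1 if math.dist(s, target) <= radius else 0 for s in sensors]
# start from a uniform prior over the field, then iterate the filter
particles = [(random.uniform(0, 4), random.uniform(0, 4)) for _ in range(500)]
for _ in range(10):
    particles = particle_filter_step(particles, readings, sensors, radius)
est = (sum(x for x, _ in particles) / len(particles),
       sum(y for _, y in particles) / len(particles))
print(est)  # the particle cloud should concentrate near the true target (1.0, 1.0)
```

For moving targets the same step is simply repeated with fresh readings at each time instant, and a cluster-style variant would run one such filter per group of triggered sensors.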