86 results for crash avoidance, path planning, spatial modeling, object tracking
Abstract:
Remote sensing information from spaceborne and airborne platforms continues to provide valuable data for different environmental monitoring applications. In this sense, high spatial resolution imagery is an important source of information for land cover mapping. For the processing of high spatial resolution images, the object-based methodology is one of the most commonly used strategies, since conventional pixel-based methods, which use only spectral information, are inadequate for classifying this type of image. This research presents a methodology to characterise Mediterranean land covers in high resolution aerial images by means of an object-oriented approach. It uses a self-calibrating multi-band region growing approach optimised by pre-processing the image with a bilateral filter. The obtained results show promise in terms of both segmentation quality and computational efficiency.
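The two-stage pipeline described above, edge-preserving bilateral pre-filtering followed by region growing, can be sketched in a few lines. This is a minimal single-band illustration under simplified assumptions, not the paper's self-calibrating multi-band method; all function names and parameters are illustrative.

```python
import numpy as np
from collections import deque

def bilateral_filter(img, sigma_s=1.0, sigma_r=0.2, radius=2):
    """Naive bilateral filter: smooths within regions, preserves edges."""
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            patch = img[i0:i1, j0:j1].astype(float)
            sp = spatial[i0 - i + radius:i1 - i + radius,
                         j0 - j + radius:j1 - j + radius]
            # range kernel: down-weight pixels with dissimilar intensity
            rng = np.exp(-((patch - img[i, j])**2) / (2 * sigma_r**2))
            wgt = sp * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

def region_grow(img, seed, tol=0.1):
    """Grow a region from `seed`, accepting 4-neighbours whose value is
    within `tol` of the running region mean."""
    h, w = img.shape
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    queue = deque([seed])
    while queue:
        i, j = queue.popleft()
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < h and 0 <= nj < w and not mask[ni, nj]:
                if abs(img[ni, nj] - total / count) <= tol:
                    mask[ni, nj] = True
                    total += float(img[ni, nj])
                    count += 1
                    queue.append((ni, nj))
    return mask
```

Pre-filtering keeps the homogeneity test stable inside land-cover patches without blurring the boundary the region must not cross.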
Abstract:
Nowadays, organizations have plenty of data stored in databases, which contain invaluable information. Decision Support Systems (DSS) provide the support needed to manage this information and to plan the medium- and long-term modus operandi of these organizations. Despite the growing importance of these systems, most proposals do not cover their complete development, mostly limiting themselves to the development of isolated parts, which often have serious integration problems. Hence, methodologies that include models and processes considering every factor are necessary. This paper aims to fill this void by proposing an approach for developing spatial DSS driven by the development of their associated Data Warehouse (DW), without neglecting the other components. To frame the proposal, different Software Engineering approaches (the Software Engineering Process and Model Driven Architecture) are used, coupled with the database development methodology, both adapted to the peculiarities of DWs. Finally, an example illustrates the proposal.
Abstract:
Driven by the latest discoveries enabled by recent technological advances and space missions, the study of asteroids has awakened the interest of the scientific community. In fact, asteroid missions have become very popular in recent years (Hayabusa, Dawn, OSIRIS-REx, ARM, AIM-DART, ...), motivated by their outstanding scientific interest. Asteroids are fundamental constituents in the evolution of the Solar System, can be seen as vast concentrations of valuable natural resources, and are also considered strategic targets for the future of space exploration. It has long been hypothesized that small near-Earth objects (NEOs) could be captured and delivered to the vicinity of the Earth, allowing affordable access to them for in-situ science, resource utilization and other purposes. On the other side of the balance, asteroids are often seen as potential planetary hazards, since impacts with the Earth happen all the time, and a sufficiently large asteroid could trigger catastrophic events.
In spite of the severity of such occurrences, they are also utterly hard to predict. In fact, the rich dynamical aspects of asteroids, their complex modeling and observational uncertainties make it exceptionally challenging to predict their future position accurately enough. This becomes particularly relevant when asteroids exhibit close encounters with the Earth, and more so when these happen recurrently. In such situations, where mitigation measures may need to be taken, it is of paramount importance to be able to accurately estimate their trajectories and collision probabilities. As a consequence, advanced tools are needed to model their dynamics and accurately predict their orbits, as well as new technological concepts to manipulate their orbits if necessary. The goal of this Thesis is to provide new methods, techniques and solutions to address these challenges. The contributions of this Thesis fall into two areas: one devoted to the numerical propagation of asteroids, and another to asteroid deflection and capture concepts. Hence, the first part of the dissertation presents novel advances applicable to the high-accuracy dynamical propagation of near-Earth asteroids using regularization and perturbation techniques, with special emphasis on the DROMO method, whereas the second part presents pioneering ideas for asteroid retrieval missions and discusses the use of an “ion beam shepherd” (IBS) for asteroid deflection purposes.
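The baseline against which regularized element methods such as DROMO are usually compared is plain Cowell propagation: direct numerical integration of the Newtonian two-body equations. A minimal fixed-step sketch follows; this is not the DROMO formulation, and the constants and step count are illustrative.

```python
import numpy as np

MU_SUN = 1.327124e11  # Sun's gravitational parameter, km^3/s^2

def two_body(t, s, mu=MU_SUN):
    """Cowell right-hand side: state s = [x, y, z, vx, vy, vz]."""
    r, v = s[:3], s[3:]
    return np.concatenate([v, -mu * r / np.linalg.norm(r)**3])

def rk4(f, s0, t0, t1, n):
    """Classical fixed-step 4th-order Runge-Kutta integrator."""
    h = (t1 - t0) / n
    t, s = t0, s0.astype(float)
    for _ in range(n):
        k1 = f(t, s)
        k2 = f(t + h / 2, s + h / 2 * k1)
        k3 = f(t + h / 2, s + h / 2 * k2)
        k4 = f(t + h, s + h * k3)
        s = s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return s
```

Propagating a circular heliocentric orbit for one period should return the state to its starting point, a standard sanity check before adding perturbations.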
Abstract:
Background: In recent years, Spain has implemented a number of air quality control measures that are expected to lead to a future reduction in fine particle concentrations and an ensuing positive impact on public health. Objectives: We aimed to assess the impact on mortality attributable to a reduction in fine particle levels in Spain in 2014 in relation to the estimated level for 2007. Methods: To estimate exposure, we constructed fine particle distribution models for Spain for 2007 (reference scenario) and 2014 (projected scenario) with a spatial resolution of 16 × 16 km². In a second step, we used the concentration-response functions proposed by cohort studies carried out in Europe (European Study of Cohorts for Air Pollution Effects and Rome longitudinal cohort) and North America (American Cancer Society cohort, Harvard Six Cities study and Canadian national cohort) to calculate the number of attributable annual deaths corresponding to all causes, all non-accidental causes, ischemic heart disease and lung cancer among persons aged over 25 years (2005-2007 mortality rate data). We examined the effect of the Spanish demographic shift in our analysis using 2007 and 2012 population figures. Results: Our model suggested that there would be a mean overall reduction in fine particle levels of 1 µg/m³ by 2014. Taking into account 2007 population data, between 8 and 15 all-cause deaths per 100,000 population could be postponed annually by the expected reduction in fine particle levels. For specific subgroups, estimates varied from 10 to 30 deaths for all non-accidental causes, from 1 to 5 for lung cancer, and from 2 to 6 for ischemic heart disease. The expected burden of preventable mortality would be even higher in the future due to Spanish population growth. Taking into account the population older than 30 years in 2012, the absolute mortality impact estimate would increase by approximately 18%.
Conclusions: Effective implementation of air quality measures in Spain, in a scenario with a short-term projection, would amount to an appreciable decline in fine particle concentrations, and this, in turn, would lead to notable health-related benefits. Recent European cohort studies strengthen the evidence of an association between long-term exposure to fine particles and health effects, and could enhance health impact quantification in Europe. Air quality models can contribute to improved assessment of air pollution health impact estimates, particularly in study areas without air pollution monitoring data.
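Attributable-mortality calculations of the kind described in the Methods section typically rest on a log-linear concentration-response function: a relative risk per 10 µg/m³ is converted to a coefficient, and the attributable fraction of baseline deaths follows. A minimal sketch, with an illustrative relative risk rather than the cohort-specific values used in the study:

```python
import math

def attributable_deaths(delta_c, rr_per_10, baseline_deaths):
    """Deaths postponed by reducing fine particles by `delta_c` ug/m3.

    Log-linear concentration-response: beta = ln(RR)/10, where RR is the
    relative risk per 10 ug/m3 of long-term exposure.
    """
    beta = math.log(rr_per_10) / 10.0
    af = 1.0 - math.exp(-beta * delta_c)   # attributable fraction
    return af * baseline_deaths
```

For example, with an illustrative RR of 1.07 per 10 µg/m³, a 10 µg/m³ reduction against 1,000 baseline deaths yields about 65 postponed deaths.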
Abstract:
This paper presents a novel background modeling system that uses a spatial grid of Support Vector Machine classifiers for segmenting moving objects, a key step in many video-based consumer applications. The system is able to adapt to a large range of dynamic background situations, since no parametric model or statistical distribution is assumed. This is achieved by using a different classifier per image region, each of which learns the specific appearance of that scene region and its variations (illumination changes, dynamic backgrounds, etc.). The proposed system has been tested on a recent public database, outperforming other state-of-the-art algorithms.
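The core idea, one classifier per image block so each region learns its own appearance, can be sketched as follows. As a stand-in for the per-block SVMs of the paper, this toy version uses a per-block z-score threshold; the class and parameter names are illustrative.

```python
import numpy as np

class GridBackgroundModel:
    """Partition the frame into blocks and learn one model per block.

    Stand-in sketch: each block gets its own pooled deviation threshold
    instead of the per-block SVM classifier used in the paper.
    """

    def __init__(self, block=8, k=3.0):
        self.block, self.k = block, k
        self.mean = None
        self.std = None

    def fit(self, frames):
        stack = np.stack(frames).astype(float)   # (T, H, W) grayscale
        self.mean = stack.mean(axis=0)
        self.std = stack.std(axis=0) + 1e-6
        # pool the noise statistics per block so each region has its
        # own classifier-like decision boundary
        b = self.block
        H, W = self.mean.shape
        for i in range(0, H, b):
            for j in range(0, W, b):
                self.std[i:i + b, j:j + b] = self.std[i:i + b, j:j + b].mean()
        return self

    def segment(self, frame):
        """True where the pixel deviates from its region's background."""
        z = np.abs(frame.astype(float) - self.mean) / self.std
        return z > self.k
```

Because each block carries its own statistics, a region with waving foliage can tolerate large fluctuations while a static wall stays tightly modeled, which is the motivation for the per-region design.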
Abstract:
Shading reduces the power output of a photovoltaic (PV) system. The design engineering of PV systems requires modeling and evaluating shading losses. Some PV systems are affected by complex shading scenes whose resulting PV energy losses are very difficult to evaluate with current modeling tools. Several specialized PV design and simulation packages include the possibility of evaluating shading losses. They generally possess a Graphical User Interface (GUI) through which the user can draw a 3D shading scene and then evaluate its corresponding PV energy losses. However, the complexity of the objects that these tools can handle is relatively limited. We have created a software solution, 3DPV, which allows the energy losses induced by complex 3D scenes on PV generators to be evaluated. The 3D objects can be imported from specialized 3D modeling software or from a 3D object library. The shadows cast by this 3D scene on the PV generator are then evaluated directly on the Graphics Processing Unit (GPU). Thanks to the recent development of GPUs for the video game industry, the shadows can be evaluated with a very high spatial resolution, reaching well beyond the PV cell level, in very short calculation times. A PV simulation model then translates the geometrical shading into PV energy output losses. 3DPV has been implemented using WebGL, which allows it to run directly from a Web browser, without requiring any local installation by the user. This also allows taking full advantage of information already available on the Internet, such as 3D object libraries. This contribution describes, step by step, the method that allows 3DPV to evaluate the PV energy losses caused by complex shading. We then illustrate the results of this methodology for several application cases encountered in the world of PV system design. Keywords: 3D, modeling, simulation, GPU, shading, losses, shadow mapping, solar, photovoltaic, PV, WebGL
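The final step, translating geometrical shading into energy losses, can be illustrated with a first-order electrical model in which each series string is limited by its most shaded cell. This is a simplified assumption (no bypass diodes, linear irradiance-to-power), not 3DPV's actual simulation model:

```python
def shading_loss(irradiance, shaded_fraction, strings):
    """First-order PV loss under partial shading.

    Assumes each series string's current (hence power) is limited by its
    most shaded cell, with no bypass diodes modelled.
    irradiance: W/m2 per cell; shaded_fraction: 0..1 per cell;
    strings: lists of cell indices wired in series.
    Returns the fractional energy loss relative to the unshaded case.
    """
    eff = [g * (1.0 - s) for g, s in zip(irradiance, shaded_fraction)]
    p_full = sum(irradiance[i] for s in strings for i in s)
    p_shaded = sum(len(s) * min(eff[i] for i in s) for s in strings)
    return 1.0 - p_shaded / p_full
```

The model makes the key non-linearity visible: shading half of one cell in a four-cell string costs half the string's output, far more than the shaded area alone would suggest, which is why cell-level shadow resolution matters.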
Abstract:
One of the core objectives of urban planning practice is to provide spatial equity in terms of opportunities and the use of public space and facilities. Accessibility is the element that serves this purpose, as a concept linking the reciprocal relationship between transport and land use and thus shaping an individual's potential mobility to reach desired destinations. Accessibility concepts are increasingly acknowledged as fundamental to understanding the functioning of cities and urban regions. Indeed, by introducing them into planning practice, better solutions can be achieved in terms of spatial equity. The COST Action TU1002 "Accessibility instruments for planning practice" was specifically designed to address the gap between scientific research in measuring and modelling accessibility and the current use of accessibility indicators in urban planning practice. This paper shows the full process of introducing an easily understandable measure of accessibility to planning practitioners in Madrid, one of the case studies of the above-mentioned COST Action. Changes in accessibility after the opening of a new metro line were analyzed using contour measures and then presented to a selection of urban planners and practitioners in Madrid as part of a workshop to evaluate the usefulness of this tool for planning practice. Isochrone maps were confirmed as an effective tool: their utility can be supplemented by other indicators, and, being GIS-based, they can be easily computed (compared with transport models) and integrated with other datasets.
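A contour (isochrone) accessibility measure of the kind presented to the practitioners reduces to counting the opportunities reachable within a travel-time threshold. A minimal sketch over a toy network, using Dijkstra travel times followed by a cumulative-opportunities count; the network and opportunity counts are illustrative:

```python
import heapq

def travel_times(graph, origin):
    """Dijkstra shortest travel times (minutes) from `origin`.

    graph: {node: [(neighbour, minutes), ...]}
    """
    dist = {origin: 0.0}
    pq = [(0.0, origin)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                       # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def contour_accessibility(graph, origin, opportunities, threshold):
    """Opportunities (e.g. jobs) reachable within `threshold` minutes."""
    dist = travel_times(graph, origin)
    return sum(n for node, n in opportunities.items()
               if dist.get(node, float('inf')) <= threshold)
```

A new metro line shows up in this measure as shorter edge weights, enlarging the set of zones inside the isochrone and hence the opportunity count.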
Abstract:
In the last decade, multi-sensor data fusion has become a broadly demanded discipline for achieving advanced solutions that can be applied in many real-world situations, either civil or military. In Defence, accurate detection of all target objects is fundamental to maintaining situational awareness, locating threats on the battlefield, and identifying and protecting strategic own forces. Civil applications, such as traffic monitoring, have similar requirements in terms of object detection and reliable identification of incidents in order to ensure the safety of road users. With an appropriate data fusion technique, these systems can automatically exploit all relevant information from multiple sources to meet, for instance, mission needs or to support daily supervision operations. This paper focuses on its application to active vehicle monitoring in a particular area of high-density traffic, and on how it is redirecting the research activities being carried out in the computer vision, signal processing and machine learning fields to improve the effectiveness of detection and tracking in ground surveillance scenarios in general. Specifically, our system fuses data at the feature level, with features extracted from a video camera and a laser scanner. In addition, this paper presents a stochastic tracking approach that introduces particle filters into the model to deal with uncertainty due to occlusions and to improve the preceding detection output. It has been shown that this computer vision tracker contributes to detecting objects even under poor visual information. Finally, in the same way that humans are able to analyze both temporal and spatial relations among items in a scene to associate them with a meaning, once the target objects have been correctly detected and tracked, it is desirable that machines can provide a trustworthy description of what is happening in the scene under surveillance.
Accomplishing such an ambitious task requires a machine-learning-based hierarchical architecture able to extract and analyse behaviours at different abstraction levels. A real experimental testbed has been implemented for the evaluation of the proposed modular system. This scenario is a closed circuit where real traffic situations can be simulated. First results have shown the strength of the proposed system.
Abstract:
As environmental standards become more stringent (e.g. European Directive 2008/50/EC), more reliable and sophisticated modeling tools are needed to simulate measures and plans that may effectively tackle air quality exceedances, common in large cities across Europe, particularly for NO2. Modeling air quality in urban areas is rather complex, since observed concentration values are a consequence of the interaction of multiple sources and processes involving a wide range of spatial and temporal scales. Besides a consistent and robust multi-scale modeling system, comprehensive and flexible emission inventories are needed. This paper discusses the application of the WRF-SMOKE-CMAQ system to the city of Madrid (Spain) to assess the contribution of the main emitting sectors in the region. A detailed emission inventory was compiled for this purpose. This inventory relies on bottom-up methods for the most important sources; it is coupled with the regional traffic model and makes use of an extensive database of industrial, commercial and residential combustion plants. Less relevant sources are downscaled from national or regional inventories. This paper reports the methodology and main results of the source apportionment study performed to understand the origin of pollution (main sectors and geographical areas) and to define clear targets for the abatement strategy. Finally, the structure of the air quality monitoring network is analyzed and discussed to identify options for improving the monitoring strategy, not only in the city of Madrid but in the whole metropolitan area.
Abstract:
This paper presents an innovative background modeling technique that is able to accurately segment foreground regions in RGB-D imagery (RGB plus depth). The technique is based on a Bayesian framework that efficiently fuses different sources of information to segment the foreground. In particular, the final segmentation is obtained by considering a prediction of the foreground regions, carried out by a novel Bayesian Network with a depth-based dynamic model, together with two independent depth- and color-based mixture-of-Gaussians background models. The efficient Bayesian combination of all these data reduces the noise and uncertainties introduced by the color and depth features and the corresponding models. As a result, more compact segmentations and refined foreground object silhouettes are obtained. Experimental results on different databases suggest that the proposed technique outperforms existing state-of-the-art algorithms.
Abstract:
Because of the high number of crashes occurring on highways, it is necessary to intensify the search for new tools that help in understanding their causes. This research explores the use of a geographic information system (GIS) for an integrated analysis, taking into account two accident-related factors: design consistency (DC) (based on vehicle speed) and available sight distance (ASD) (based on visibility). Both factors require specific GIS software add-ins, which are explained. Digital terrain models (DTMs), vehicle paths, road centerlines, a speed prediction model, and crash data are integrated in the GIS. The usefulness of this approach has been assessed through a study of more than 500 crashes. From a regularly spaced grid, the terrain (bare ground) has been modeled through a triangulated irregular network (TIN). The length of the roads analyzed is greater than 100 km. Results have shown that DC and ASD could be related to crashes in approximately 4% of cases. In order to illustrate the potential of GIS, two crashes are fully analyzed: a car rollover after running off the road on the right side, and a rear-end collision between two moving vehicles. Although this procedure uses two software add-ins that are available only for ArcGIS, the study gives a practical demonstration of the suitability of GIS for conducting integrated studies of road safety.
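The available sight distance (ASD) computation can be illustrated with a simple line-of-sight test along a terrain profile: the driver's eye point sees a target point unless an intermediate ground point rises above the connecting ray. This is a 1-D sketch of the idea, not the ArcGIS add-in used in the study; the eye and target heights are illustrative defaults.

```python
def available_sight_distance(chainage, ground, eye_h=1.1, target_h=0.6):
    """Available sight distance for a driver at the first station.

    chainage: distances along the road (m); ground: elevations (m).
    Returns the distance to the farthest station whose target point is
    visible over the intervening terrain (conservative: stops at the
    first occluded station).
    """
    ex, ey = chainage[0], ground[0] + eye_h
    farthest = chainage[0]
    for k in range(1, len(chainage)):
        tx, ty = chainage[k], ground[k] + target_h
        blocked = False
        for j in range(1, k):
            # elevation of the sight ray at intermediate station j
            ly = ey + (ty - ey) * (chainage[j] - ex) / (tx - ex)
            if ground[j] > ly:
                blocked = True
                break
        if blocked:
            break
        farthest = tx
    return farthest - ex
```

In the GIS workflow the same test is run in 3-D against the TIN for successive driver positions along the vehicle path, and the resulting ASD profile is compared with the required stopping sight distance.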