903 results for Unmanned Aerial Vehicles (UAVs)
Abstract:
Glacier and ice sheet retreat exposes freshly deglaciated terrain that often contains small-scale, fragile geomorphological features that could provide insight into subglacial or submarginal processes. Subaerial exposure results in potentially rapid landscape modification or even disappearance of the minor-relief landforms as wind, weather, water and vegetation impact on the newly exposed surface. Ongoing retreat of many ice masses means there is a growing opportunity to obtain high resolution geospatial data from glacier forelands to aid in the understanding of recent subglacial and submarginal processes. Here we used an unmanned aerial vehicle to capture close-range aerial photography of the foreland of Isfallsglaciären, a small polythermal glacier situated in Swedish Lapland. An orthophoto and a digital elevation model with ~2 cm horizontal resolution were created from this photography using structure-from-motion software. These geospatial data were used to create a geomorphological map of the foreland, documenting moraines, fans, channels and flutes. The unprecedented resolution of the data enabled us to derive morphological metrics (length, width and relief) of the smallest flutes, which is not possible with the data products normally used for mapping glacial landform metrics. The map and flute metrics compare well with previous studies, highlighting the potential of this technique for rapidly documenting glacier foreland geomorphology at an unprecedented scale and resolution. The vast majority of flutes were found to have an associated stoss-side boulder, with the remainder having a likely explanation for boulder absence (burial or erosion). Furthermore, the size of this boulder was found to correlate strongly with the width and relief of the lee-side flute. This is consistent with the lee-side cavity infill model of flute formation. Whether this model is applicable to all flutes, or multiple mechanisms are required, awaits further study.
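The reported relationship between stoss-side boulder size and flute dimensions lends itself to a simple correlation analysis. The Python sketch below assumes a hypothetical CSV of digitised flute and boulder measurements with illustrative column names; it is an illustration, not the authors' actual workflow.

```python
# Correlate stoss-side boulder size with lee-side flute width and relief.
# Illustrative sketch only; the file and column names are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("flute_metrics.csv")  # one row per digitised flute (hypothetical)

for metric in ("flute_width_m", "flute_relief_m"):
    r, p = pearsonr(df["boulder_width_m"], df[metric])
    print(f"boulder width vs {metric}: r = {r:.2f}, p = {p:.3g}")
```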
Abstract:
Surface flow types (SFTs) are advocated as ecologically relevant hydraulic units, often mapped visually from the bankside to rapidly characterise the physical habitat of rivers. SFT mapping is simple, non-invasive and cost-efficient. However, it is also qualitative, subjective and plagued by difficulties in accurately recording the spatial extent of SFT units. Quantitative validation of the underlying physical habitat parameters is often lacking, and the available evidence does not consistently differentiate between SFTs. Here, we explicitly investigate the accuracy, reliability and statistical separability of traditionally mapped SFTs as indicators of physical habitat, using independent hydraulic and topographic data collected during three surveys of a c. 50 m reach of the River Arrow, Warwickshire, England. We also explore the potential of a novel remote sensing approach, comprising a small unmanned aerial system (sUAS) and Structure-from-Motion photogrammetry (SfM), as an alternative method of physical habitat characterisation. Our key findings indicate that SFT mapping accuracy is highly variable, with overall mapping accuracy not exceeding 74%. Analysis of similarity (ANOSIM) tests found that strong differences did not exist between all SFT pairs. This leads us to question the suitability of SFTs for characterising physical habitat for river science and management applications. In contrast, the sUAS-SfM approach provided high resolution, spatially continuous, spatially explicit, quantitative measurements of water depth and point cloud roughness at the microscale (spatial scales ≤ 1 m). Such data are acquired rapidly and inexpensively, and provide new opportunities for examining the heterogeneity of physical habitat over a range of spatial and temporal scales. Whilst continued refinement of the sUAS-SfM approach is required, we propose that this method offers an opportunity to move away from broad, mesoscale classifications of physical habitat (spatial scales 10–100 m), and towards continuous, quantitative measurements of the continuum of hydraulic and geomorphic conditions which actually exists at the microscale.
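One common way to define a microscale point-cloud roughness metric of the kind mentioned above is the standard deviation of plane-detrended elevations within small grid cells. The Python sketch below illustrates that idea; the 1 m cell size and the detrending choice are assumptions, not the paper's exact definition.

```python
# Microscale point-cloud roughness as the standard deviation of plane-detrended
# elevations within 1 m grid cells. One common definition; illustrative only.
import numpy as np

def cell_roughness(points: np.ndarray) -> float:
    """points: (N, 3) array of x, y, z for a single grid cell (N >= 3)."""
    design = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(design, points[:, 2], rcond=None)  # fit a plane
    residuals = points[:, 2] - design @ coeffs                      # detrend
    return float(residuals.std())

def grid_roughness(cloud: np.ndarray, cell: float = 1.0) -> dict:
    """Map (ix, iy) cell index -> roughness for a whole (N, 3) point cloud."""
    keys = np.floor(cloud[:, :2] / cell).astype(int)
    out = {}
    for key in set(map(tuple, keys)):
        mask = np.all(keys == key, axis=1)
        if mask.sum() >= 3:                     # need at least 3 points for a plane
            out[key] = cell_roughness(cloud[mask])
    return out
```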
Abstract:
The left flank of the Scascoli Gorge (Bologna, Italy) is characterised by a marked evolutionary tendency towards rockfall and toppling. Over the last 25 years, major failure events have occurred involving rock volumes of 7,000 m³, 20,000 m³ and 35,000 m³ respectively. The site is highly significant because of the severe risk it poses to the adjacent valley-floor road. This thesis studied the slope processes of an inaccessible rock face known in the literature as "ex-Mammellone 1" using remote sensing techniques such as TLS (Terrestrial Laser Scanning) and CRP (Close Range Photogrammetry), in order to complement the conventional field-based geomechanical survey of the area carried out in 2003 by ENSER Srl following the 2002 rockfall events. The development of innovative technologies and methods for territorial analysis based on UAVs (Unmanned Aerial Vehicles, better known as drones), combined with digital photogrammetry techniques, provides considerable support to surveying practice in terms of safety and execution time. The work involved a first phase of aerial photogrammetric surveying with professional and consumer-grade equipment, followed by the processing of the respective models. The different outputs were compared in terms of geomorphology, geometry, geomechanics and rockfall numerical modelling. The work made it possible to investigate the morphological evolution of the site over the last 10 years, to compare different survey and data-analysis methods, to test the robustness and geometric repeatability of the photogrammetric method for surveying rock faces, and to develop a semi-automatic method for identifying and analysing discontinuity orientations.
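The semi-automatic identification of discontinuity orientations described above generally comes down to fitting planes to point-cloud facets and converting each plane normal into dip and dip direction. The Python sketch below shows only that conversion step, assuming an east-north-up coordinate frame; it is not the thesis's actual workflow.

```python
# Fit a plane to a cluster of points belonging to one discontinuity facet and
# convert its normal to dip / dip direction. Illustrative sketch only.
import numpy as np

def plane_normal(points: np.ndarray) -> np.ndarray:
    """Least-squares plane normal via SVD; points is (N, 3) with x east, y north, z up."""
    centred = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    n = vt[-1]                      # right singular vector of the smallest singular value
    return n if n[2] >= 0 else -n   # force the normal to point upwards

def dip_and_dip_direction(normal: np.ndarray) -> tuple[float, float]:
    nx, ny, nz = normal / np.linalg.norm(normal)
    dip = np.degrees(np.arccos(nz))                    # 0 deg = horizontal plane
    dip_dir = np.degrees(np.arctan2(nx, ny)) % 360.0   # azimuth clockwise from north
    return dip, dip_dir
```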
Abstract:
The subject of this thesis is a first experimental approach to the use of UAV (Unmanned Aerial Vehicle) technology for photogrammetry aimed at assessing possible damage to buildings following an extraordinary event or a natural disaster. One of the most demanding aspects, in terms of time and cost, in processing photogrammetric flights for cartographic purposes is the need for ground control. This thesis evaluated the possibility of precisely positioning the camera at the moment of exposure, so as to significantly reduce the need for photographic ground control points (Punti Fotografici di Appoggio, PFA) surveyed on the ground. In particular, the aim was to test the use of robotic total stations to track and position the aircraft during image acquisition, so as to simulate the presence of geodetic RTK receivers on board. At the same time, this methodology would also allow precise positioning of the aircraft in indoor conditions or where satellite signal reception is poor, such as when flying within urban canyons or when surveying building façades. Within the thesis, a photogrammetric block was therefore analysed with precise camera positioning at the instant of exposure, and the results were compared with those obtainable through traditional ground control. It was thus possible to assess the potential of UAV photogrammetric surveying under conditions close to those typical of direct photogrammetry. In this case, however, the camera attitude angles at the instant of exposure were not measured, but were estimated photogrammetrically.
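When a robotic total station tracks the UAV, each camera position reduces to a polar-to-Cartesian conversion of the instrument's horizontal angle, zenith angle and slope distance. The Python sketch below illustrates that conversion; the angle conventions, the prism-to-camera offset and the example numbers are assumptions, not the processing chain used in the thesis.

```python
# Convert a robotic total station observation (horizontal angle, zenith angle,
# slope distance) into local east/north/up coordinates of the tracked prism.
# Angle conventions (horizontal angle clockwise from north, zenith from vertical)
# are assumptions; they are not specified in the abstract.
import math

def polar_to_enu(hz_deg: float, zenith_deg: float, slope_dist_m: float,
                 station_enu=(0.0, 0.0, 0.0), prism_offset_m: float = 0.0):
    hz = math.radians(hz_deg)
    ze = math.radians(zenith_deg)
    horiz = slope_dist_m * math.sin(ze)                 # horizontal distance
    de = horiz * math.sin(hz)                           # east component
    dn = horiz * math.cos(hz)                           # north component
    du = slope_dist_m * math.cos(ze) + prism_offset_m   # up component (+ prism-to-camera offset)
    e0, n0, u0 = station_enu
    return e0 + de, n0 + dn, u0 + du

# Example observation (made-up numbers):
print(polar_to_enu(hz_deg=45.0, zenith_deg=80.0, slope_dist_m=120.0))
```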
Abstract:
This thesis addresses the problem of planning low-altitude photogrammetric flights with SAPR (remotely piloted aircraft systems); in particular, it reviews the main applications that allow the user to programme forward and side photogrammetric coverage of a given polygon with a commercial drone. The main theme is the management of a UAV photogrammetric flight by means of software applications that let the user enter the flight parameters according to the type of survey to be carried out. The final objective is to obtain a correct photogrammetric acquisition to be used for the creation of a digital model of the terrain or of an object through data processing in post-processing. Proper flight configuration cannot disregard a basic knowledge of photogrammetry and of the mechanics of a UAV. The introductory chapters therefore cover the principles of analogue and digital photogrammetry, dwelling on topics useful for understanding the issues involved in designing an aerial photogrammetric survey. Particular attention is paid to the notions of digital photogrammetry which, together with the image matching algorithms derived from computer vision, define the field of modern photogrammetry. The central chapters examine and compare a series of commercial applications for smartphones and tablets, available for Apple and Android systems, leading to a brief concluding summary that compares them in terms of accessibility, capability and intended use. For clarity, the acronyms by which drones are referred to in different contexts are defined unambiguously: UAV (Unmanned Aerial Vehicle), SAPR (Sistemi Aeromobili a Pilotaggio Remoto), RPAS (Remotely Piloted Aircraft System), ARP (Aeromobili a Pilotaggio Remoto).
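The flight parameters such applications ask for follow from textbook photogrammetric relations between focal length, sensor size, flying height and the desired overlaps. The Python sketch below computes ground sample distance, image footprint, exposure spacing and flight-line spacing from those inputs; the example numbers and the assumption that the sensor's short side lies along track are illustrative only.

```python
# Basic flight-planning quantities: ground sample distance (GSD), image footprint,
# spacing between exposures and between flight lines for given forward and side
# overlaps. Textbook formulas; the camera and height below are illustrative.
def flight_plan(focal_mm, sensor_w_mm, sensor_h_mm, img_w_px, img_h_px,
                height_m, forward_overlap=0.80, side_overlap=0.60):
    gsd_m = (sensor_w_mm * height_m) / (focal_mm * img_w_px)   # metres per pixel
    footprint_w = gsd_m * img_w_px    # across-track footprint on the ground (m)
    footprint_h = gsd_m * img_h_px    # along-track footprint, assuming the short side is along track (m)
    base = footprint_h * (1.0 - forward_overlap)   # distance between exposures (m)
    spacing = footprint_w * (1.0 - side_overlap)   # distance between flight lines (m)
    return {"gsd_cm": gsd_m * 100.0,
            "footprint_m": (footprint_w, footprint_h),
            "exposure_base_m": base,
            "line_spacing_m": spacing}

# Example: a 24 mm lens on a 13.2 x 8.8 mm sensor, 5472 x 3648 px, flown at 60 m.
print(flight_plan(24.0, 13.2, 8.8, 5472, 3648, 60.0))
```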
Abstract:
Remote Sensing has been used for decades, and more and more applications are added to its repertoire. With this study we aim to show the use of Remote Sensing for monitoring vegetation recovery in burned areas and the added value of data with a high spatial resolution. This was done by analysing both Landsat 7 and Landsat 8 scenes acquired after the forest fire of summer 2012 in the parish of Calde, in the central region of Portugal, as well as an orthophoto produced from images acquired by an unmanned aerial vehicle.
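Vegetation-recovery monitoring of this kind typically relies on per-pixel spectral indices; the abstract does not state which index was used, so the Python sketch below shows a generic NDVI-based change computation purely as an illustration.

```python
# Generic vegetation-recovery illustration: NDVI from red and near-infrared bands,
# and the change in NDVI between two post-fire dates. The index choice is an
# assumption; the study's actual processing is not specified in the abstract.
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    red = red.astype("float64")
    nir = nir.astype("float64")
    return (nir - red) / np.clip(nir + red, 1e-6, None)   # avoid division by zero

def recovery_change(ndvi_post_fire: np.ndarray, ndvi_later: np.ndarray) -> np.ndarray:
    """Positive values indicate vegetation regrowth between the two dates."""
    return ndvi_later - ndvi_post_fire
```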
Abstract:
The main thesis of this article is that the increasing recourse to the use of unmanned aerial systems in asymmetric warfare and the beginning routinization of U.S. drone operations represent part of an evolutionary change in the spatial ordering of global politics. Using a heuristic framework based on actor-network theory, it is argued that practices of panoptic observation and selective airstrikes, being in need of legal justification, contribute to a reterritorialization of asymmetric conflicts. Under a new normative spatial regime, a legal condition of state immaturity is constructed, which establishes a zone of conditional sovereignty subject to transnational aerial policing. At the same time, this process is neither a deterministic result of the new technology nor a deliberate effect of policies to which drones are merely neutral instruments. Rather, military technology and political decisions both form part of a long chain of action which has evolved under the specific circumstances of recent military interventions.
Abstract:
Efficient crop monitoring and pest damage assessments are key to protecting the Australian agricultural industry and ensuring its leading position internationally. An important element in pest detection is gathering reliable crop data frequently and integrating analysis tools for decision making. Unmanned aerial systems are emerging as a cost-effective solution to a number of precision agriculture challenges. An important advantage of this technology is that it provides a non-invasive aerial sensor platform to accurately monitor broad acre crops. In this presentation, we will give an overview of how unmanned aerial systems and machine learning can be combined to address crop protection challenges. A 2015 study on insect damage in sorghum illustrates the effectiveness of this methodology. A UAV platform equipped with a high-resolution camera was deployed to autonomously perform a flight pattern over the target area. We describe the image processing pipeline implemented to create a georeferenced orthoimage and visualize the spatial distribution of the damage. An image analysis tool has been developed to minimize human input requirements. The computer program is based on a machine learning algorithm that automatically creates a meaningful partition of the image into clusters. Results show the algorithm delivers decision boundaries that accurately classify the field into crop health levels. The methodology presented here represents an avenue for further research towards automated crop protection assessments in the cotton industry, with applications in detecting, quantifying and monitoring the presence of mealybugs, mites and aphid pests.
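The abstract does not name the clustering algorithm, so the Python sketch below uses k-means on per-pixel colour features as an illustrative stand-in for an unsupervised partition of the orthoimage into crop-health clusters.

```python
# Unsupervised partition of an orthoimage into clusters. k-means on per-pixel
# colour features is shown purely as an illustrative stand-in; the study's
# actual algorithm is not named in the abstract.
import numpy as np
from sklearn.cluster import KMeans

def cluster_orthoimage(image: np.ndarray, n_clusters: int = 4) -> np.ndarray:
    """image: (H, W, 3) RGB array; returns an (H, W) array of cluster labels."""
    h, w, c = image.shape
    features = image.reshape(-1, c).astype("float64")
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
    return labels.reshape(h, w)
```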
Abstract:
In the Portuguese Navy, the use of Unmanned Underwater Vehicles (UUVs) is very limited, being restricted solely to mine detection. However, with technological and scientific advances, their use may be one step away from being extended to other areas whose applicability has not yet been explored. Along these lines, the ICARUS project (Integrated Components for Assisted Rescue and Unmanned Search operations) emerged, aimed at developing unmanned vehicles for search and rescue. Its goal is the rescue of castaways using UUVs, thereby promoting efficient resource management, an objective set out in the Navy's planning directive. Thus, building on the project developed in last year's theses by ASPOF Maia da Fonseca and Ramos da Palma, this dissertation aims to assess the feasibility of detecting a castaway adrift at sea from the readings of a sonar system installed on a UUV in upward-looking mode. To this end, simulations are carried out with the castaway in different positions and in environments closer to the reality of the sea, together with the optimisation of the characteristics that allow the castaway to be identified.
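Detection feasibility for an upward-looking sonar is commonly reasoned about with the active sonar equation, comparing the echo level against noise and a detection threshold. The Python sketch below applies that textbook relation with purely illustrative parameter values; it is not the simulation framework used in the dissertation.

```python
# Active sonar equation sketch: echo level EL = SL - 2*TL + TS, compared against
# noise level, directivity index and detection threshold. All values below are
# illustrative assumptions, not measured parameters from the dissertation.
import math

def transmission_loss(range_m: float, alpha_db_per_km: float = 0.5) -> float:
    """One-way loss in dB: spherical spreading plus absorption."""
    return 20.0 * math.log10(max(range_m, 1.0)) + alpha_db_per_km * range_m / 1000.0

def detection_excess(source_level_db, target_strength_db, noise_level_db,
                     directivity_index_db, detection_threshold_db, range_m):
    tl = transmission_loss(range_m)
    echo_level = source_level_db - 2.0 * tl + target_strength_db
    return echo_level - (noise_level_db - directivity_index_db) - detection_threshold_db

# Positive detection excess suggests the target (a drifting person) is detectable.
print(detection_excess(source_level_db=210.0, target_strength_db=-15.0,
                       noise_level_db=60.0, directivity_index_db=20.0,
                       detection_threshold_db=10.0, range_m=200.0))
```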
Abstract:
A camera maps 3-dimensional (3D) world space to a 2-dimensional (2D) image space. In the process it loses the depth information, i.e., the distance from the camera focal point to the imaged objects. It is impossible to recover this information from a single image. However, by using two or more images from different viewing angles, this information can be recovered, which in turn can be used to obtain the pose (position and orientation) of the camera. Using this pose, a 3D reconstruction of the imaged objects can be computed. Numerous algorithms have been proposed and implemented to solve this problem; they are commonly called Structure from Motion (SfM). State-of-the-art SfM techniques have been shown to give promising results. However, unlike a Global Positioning System (GPS) or an Inertial Measurement Unit (IMU), which directly provide position and orientation respectively, a camera system estimates the pose only after running SfM as described above. This makes the pose obtained from a camera highly sensitive to the images captured and to other effects, such as low lighting conditions, poor focus or improper viewing angles. In some applications, for example an Unmanned Aerial Vehicle (UAV) inspecting a bridge or a robot mapping an environment using Simultaneous Localization and Mapping (SLAM), it is often difficult to capture images under ideal conditions. This report examines the use of SfM methods in such applications and the role of combining multiple sensors, viz., sensor fusion, to achieve more accurate and usable position and reconstruction information. This project investigates the role of sensor fusion in accurately estimating the pose of a camera for the application of 3D reconstruction of a scene. The first set of experiments is conducted in a motion capture room. These results are treated as ground truth in order to evaluate the strengths and weaknesses of each sensor and to map their coordinate systems. A number of scenarios where SfM fails are then targeted; the pose estimates obtained from SfM are replaced by those obtained from other sensors and the 3D reconstruction is completed. Quantitative and qualitative comparisons are made between the 3D reconstruction obtained using only a camera and that obtained using the camera along with a LIDAR and/or an IMU. The project also addresses the performance issues faced when handling large data sets of high-resolution images by implementing the system on the Superior high-performance computing cluster at Michigan Technological University.
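A minimal two-view structure-from-motion step of the kind discussed above can be written with OpenCV: estimate the relative pose from matched keypoints and triangulate 3D points. The Python sketch below assumes pre-matched pixel coordinates and a known calibration matrix; it is a generic illustration, not the report's pipeline.

```python
# Two-view SfM sketch: relative pose from matched keypoints, then triangulation.
# pts1 and pts2 are (N, 2) float arrays of matched pixel coordinates; K is the
# 3x3 camera calibration matrix. Illustrative only.
import numpy as np
import cv2

def two_view_reconstruction(pts1: np.ndarray, pts2: np.ndarray, K: np.ndarray):
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)      # relative rotation and translation
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera at the origin
    P2 = K @ np.hstack([R, t])                          # second camera from the recovered pose
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return R, t, (pts4d[:3] / pts4d[3]).T               # 3D points in the first camera frame
```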
Abstract:
In this paper, we consider the problem of autonomous navigation of multirotor platforms in GPS-denied environments. The focus of this work is on safe navigation based on imperfect odometry measurements, such as on-board optical flow measurements. The multirotor platform is modeled as a flying object with specific kinematic constraints that must be taken into account in order to obtain successful results. A navigation controller is proposed featuring a set of configurable parameters that allow, for instance, one configuration for fast trajectory following and another that softens the control laws and makes the vehicle's navigation more precise and slow whenever necessary. The proposed controller has been successfully implemented on two different multirotor platforms with similar sensing capabilities, demonstrating the flexibility and robustness of the approach. This research is focused on the Computer Vision Group's objective of applying multirotor vehicles to civilian service applications. The presented work was implemented to compete in the International Micro Air Vehicle Conference and Flight Competition (IMAV 2012), winning two awards: the Special Award on "Best Automatic Performance - IMAV 2012" and the second overall prize in the participating category "Indoor Flight Dynamics - Rotary Wing MAV". Most of the code related to the present work is available as two open-source projects hosted on GitHub.
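The idea of a controller with configurable parameter sets, one tuned for fast trajectory following and one for slow, precise navigation, can be illustrated with a saturated proportional velocity command toward the next waypoint. The Python sketch below is only an illustration of that trade-off, not the paper's actual control law.

```python
# Saturated proportional velocity command toward a waypoint, with two example
# parameter sets trading speed against precision. Illustrative sketch only.
import numpy as np

def velocity_command(position, waypoint, gain, max_speed):
    error = np.asarray(waypoint, float) - np.asarray(position, float)
    v = gain * error
    speed = np.linalg.norm(v)
    return v if speed <= max_speed else v * (max_speed / speed)   # saturate the command

FAST_TRAJECTORY = {"gain": 1.5, "max_speed": 3.0}   # aggressive trajectory following
PRECISE_SLOW    = {"gain": 0.6, "max_speed": 0.8}   # soft, precise navigation

print(velocity_command([0.0, 0.0, 1.0], [2.0, 1.0, 1.5], **FAST_TRAJECTORY))
```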
Abstract:
We present a novel, simple and effective approach for tele-operation of aerial robotic vehicles with haptic feedback. Such feedback provides the remote pilot with an intuitive feel of the robot's state and perceived local environment, ensuring simple and safe operation in the cluttered 3D environments common in inspection and surveillance tasks. Our approach is based on energetic considerations and uses the concepts of network theory and port-Hamiltonian systems. We provide a general framework for addressing problems such as mapping the limited stroke of a 'master' joystick to the infinite stroke of a 'slave' vehicle, while preserving passivity of the closed-loop system in the face of potential time delays in communication links and limited sensor data.
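One standard way to map the limited stroke of a master joystick to the unbounded workspace of a slave vehicle is rate control, in which joystick displacement commands the slave's velocity. The Python sketch below illustrates that mapping; it is a simplification, not the paper's passivity-preserving port-Hamiltonian design.

```python
# Rate-control mapping: a bounded joystick displacement sets the commanded
# velocity, which is integrated into an unbounded slave position setpoint.
# Simple illustration only; no passivity or delay handling is modelled here.
import numpy as np

class RateControlMapping:
    def __init__(self, max_joystick_stroke=1.0, max_velocity=2.0, deadband=0.05):
        self.scale = max_velocity / max_joystick_stroke
        self.deadband = deadband
        self.setpoint = np.zeros(3)

    def update(self, joystick: np.ndarray, dt: float) -> np.ndarray:
        cmd = np.where(np.abs(joystick) < self.deadband, 0.0, joystick)  # ignore tiny inputs
        self.setpoint += self.scale * cmd * dt   # integrate velocity into a position setpoint
        return self.setpoint

mapping = RateControlMapping()
print(mapping.update(np.array([0.5, 0.0, -0.2]), dt=0.02))
```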
Abstract:
This paper presents a new methodology for solving the multi-vehicle formation control problem. It employs a unique extension-decomposition-aggregation scheme to transform the overall complex formation control problem into a group of subproblems, which are coupled through boundary interactions or disturbances. It is then proved that the overall formation system is exponentially stable in the sense of Lyapunov if all the individual augmented subsystems (IASs) are stable. A linear matrix inequality-based H∞ control methodology is employed to design the decentralized formation controllers so as to reject the impact of formation changes, treated as boundary disturbances, and to guarantee the stability of all the IASs, consequently maintaining the stability of the overall formation system. Simulation studies are performed to verify the stability, performance and effectiveness of the proposed strategy.
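The stability of each individual augmented subsystem can be certified with a Lyapunov linear matrix inequality, i.e. finding P ≻ 0 with AᵀP + PA ≺ 0. The Python sketch below runs that feasibility check with cvxpy for an illustrative subsystem matrix; it demonstrates the LMI machinery only, not the paper's decentralized H∞ synthesis.

```python
# Lyapunov LMI feasibility check for a single subsystem x_dot = A x:
# find P > 0 such that A^T P + P A < 0. Illustration of the LMI machinery only;
# the subsystem matrix below is made up.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # example stable subsystem matrix

n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()
print("LMI feasible (subsystem stable):", problem.status == cp.OPTIMAL)
```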