910 results for Surveying and Mapping
Abstract:
Despite covering only approximately 138,000 km², mangroves are globally important carbon sinks, with carbon density values 3 to 4 times those of terrestrial forests. A key challenge in evaluating the carbon benefits of mangrove forest conservation is the lack of rigorous, spatially resolved estimates of mangrove sediment carbon stocks, since most mangrove carbon is stored belowground. Previous work has focused on detailed estimation of carbon stores over relatively small areas, which has obvious limitations in generality and scope of application, and most studies have quantified only the top 1 m of belowground carbon (BGC). Carbon stored at depths beyond 1 m, and the effects of mangrove species, location and environmental context on these stores, are poorly studied. This study investigated these variables at two sites (Gazi and Vanga in southern Kenya) and used the data to produce a country-specific BGC predictive model for Kenya and to map BGC store estimates throughout Kenya at spatial scales relevant to climate change research, forest management and REDD+ (Reducing Emissions from Deforestation and forest Degradation). The results revealed that mangrove species was the most reliable predictor of BGC; Rhizophora mucronata had the highest mean BGC, at 1,485.5 t C ha⁻¹. Applying the species-based predictive model to a base map of species distribution in Kenya for the year 2010 at 2.5 m² resolution produced an estimate of 69.41 Mt C (± 9.15, 95% C.I.) for BGC in Kenyan mangroves. When applied to a 1992 mangrove distribution map, the BGC estimate was 75.65 Mt C (± 12.21, 95% C.I.), an 8.3% loss in BGC stores between 1992 and 2010. The country-level mangrove map provides a valuable tool for assessing carbon stocks and visualising the distribution of BGC, and estimates at 2.5 m² resolution provide sufficient detail for highlighting and prioritising areas for mangrove conservation and restoration.
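The species-based scaling described above can be sketched as follows. This is a hypothetical illustration, not the study's actual model: the pixel counts and the densities for species other than Rhizophora mucronata are invented, and only the R. mucronata mean (1,485.5 t C ha⁻¹) and the 2.5 m² pixel size come from the abstract.

```python
# Sketch: scale species-specific belowground carbon (BGC) densities to a
# country-level stock from a classified species map.
PIXEL_AREA_HA = 2.5e-4  # one 2.5 m^2 pixel = 2.5 / 10,000 ha

mean_bgc = {  # t C per hectare (illustrative, except R. mucronata)
    "Rhizophora mucronata": 1485.5,
    "Avicennia marina": 900.0,     # invented placeholder
    "Sonneratia alba": 750.0,      # invented placeholder
}

# Number of map pixels classified as each species (hypothetical values)
pixel_counts = {
    "Rhizophora mucronata": 120_000_000,
    "Avicennia marina": 80_000_000,
    "Sonneratia alba": 40_000_000,
}

def total_bgc_mt(counts, densities, pixel_area_ha=PIXEL_AREA_HA):
    """Total belowground carbon stock in megatonnes (Mt C)."""
    total_t = sum(counts[sp] * pixel_area_ha * densities[sp] for sp in counts)
    return total_t / 1e6  # tonnes -> megatonnes

stock = total_bgc_mt(pixel_counts, mean_bgc)
```

Comparing stocks computed from two such maps (e.g. 1992 vs 2010 species distributions) gives the kind of change estimate the abstract reports.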
Abstract:
Simultaneous Localization and Mapping (SLAM) is a procedure used to determine the location of a mobile vehicle in an unknown environment while constructing a map of that environment at the same time. Mobile platforms that use SLAM algorithms have industrial applications in autonomous maintenance, such as the inspection of flaws and defects in oil pipelines and storage tanks. A typical SLAM system consists of four main components: experimental setup (data gathering), vehicle pose estimation, feature extraction, and filtering. Feature extraction is the process of identifying significant features in the unknown environment, such as corners, edges, walls, and interior features. In this work, an original feature extraction algorithm specific to distance measurements obtained from SONAR sensor data is presented. The algorithm was constructed by combining the SONAR Salient Feature Extraction Algorithm and the Triangulation Hough Based Fusion with point-in-polygon detection. The reconstructed maps obtained through simulations and experimental data with the fusion algorithm are compared to the maps obtained with existing feature extraction algorithms. Based on the results obtained, it is suggested that the proposed algorithm can be employed as an option for data obtained from SONAR sensors in environments where other forms of sensing are not viable. The algorithm fusion for feature extraction requires the vehicle pose estimate as an input, which is obtained from a vehicle pose estimation model. For the vehicle pose estimation, the author uses sensor integration to estimate the pose of the mobile vehicle. Different combinations of sensors are studied (e.g., encoder only, gyroscope only, or encoder and gyroscope together), and the different sensor fusion techniques for pose estimation are experimentally studied and compared.
The vehicle pose estimation model that produces the least error is used to generate inputs for the feature extraction algorithm fusion. In the experimental studies, two environmental configurations are used: one without interior features and one with two interior features. Numerical and experimental findings are discussed. Finally, the SLAM algorithm is implemented along with the algorithms for feature extraction and vehicle pose estimation. Three different cases are experimentally studied, with the floor of the environment intentionally altered to induce slipping. Results obtained for implementations with and without SLAM are compared and discussed. The present work represents a step towards the realization of autonomous inspection platforms that perform concurrent localization and mapping in harsh environments.
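A common way to implement the point-in-polygon detection mentioned above is the ray-casting (even-odd) test; the sketch below is a generic version of that test, not the thesis's actual code.

```python
def point_in_polygon(px, py, polygon):
    """Ray-casting point-in-polygon test: cast a horizontal ray from
    (px, py) and count edge crossings; an odd count means 'inside'."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line y = py?
        if (y1 > py) != (y2 > py):
            # x-coordinate where the edge crosses that horizontal line
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

# Hypothetical interior feature footprint: a 4 x 4 square
square = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
```

In a SLAM feature-extraction context, such a test can decide whether a SONAR return falls inside a candidate interior-feature footprint.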
Abstract:
Agroforestry has large potential for carbon (C) sequestration while providing many economic, social, and ecological benefits via its diversified products. Airborne lidar is considered the most accurate technology for mapping aboveground biomass (AGB) at landscape levels. However, little past research has studied the AGB of agroforestry systems using airborne lidar data. Focusing on an agroforestry system in the Brazilian Amazon, this study first predicted plot-level AGB using fixed-effects regression models that assumed the regression coefficients to be constants. The model prediction errors were then analyzed from the perspectives of tree DBH (diameter at breast height)–height relationships and plot-level wood density, which suggested the need to stratify agroforestry fields to improve plot-level AGB modeling. We separated teak plantations from other agroforestry types and predicted AGB using mixed-effects models that can incorporate the variation of the AGB–height relationship across agroforestry types. We found that, at the plot scale, mixed-effects models led to better prediction performance (based on leave-one-out cross-validation) than the fixed-effects models, with the coefficient of determination (R²) increasing from 0.38 to 0.64. At the landscape level, the difference between AGB densities from the two types of models was ~10% on average and up to ~30% at the pixel level. This study suggests the importance of stratification based on tree AGB allometry and the utility of mixed-effects models in modeling and mapping the AGB of agroforestry systems.
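The benefit of stratifying plots before fitting an AGB–height model can be illustrated with a small leave-one-out cross-validation experiment on synthetic data. This is a simplified stand-in for the study's approach: it compares a pooled fixed-effects fit with separate per-stratum fits (rather than true mixed-effects models), and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic plot data: lidar height vs AGB, with a different (invented)
# allometry for "teak" plots than for other agroforestry plots.
n = 30
height = rng.uniform(5, 25, size=2 * n)
group = np.array(["teak"] * n + ["other"] * n)
slope = np.where(group == "teak", 12.0, 6.0)
agb = slope * height + rng.normal(0, 10, size=2 * n)

def loocv_r2(x, y, groups=None):
    """Leave-one-out CV R^2 for a linear fit, optionally stratified by group."""
    preds = np.empty_like(y)
    for i in range(len(y)):
        mask = np.ones(len(y), bool)
        mask[i] = False                    # hold out plot i
        if groups is not None:
            mask &= groups == groups[i]    # fit only within plot i's stratum
        a, b = np.polyfit(x[mask], y[mask], 1)
        preds[i] = a * x[i] + b
    ss_res = np.sum((y - preds) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

pooled = loocv_r2(height, agb)             # one fixed-effects fit for all plots
stratified = loocv_r2(height, agb, group)  # separate fit per agroforestry type
```

When the strata genuinely differ in allometry, the stratified cross-validated R² exceeds the pooled one, mirroring the 0.38 → 0.64 improvement reported above.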
3D Surveying and Data Management towards the Realization of a Knowledge System for Cultural Heritage
Abstract:
The research activities involved the application of Geomatic techniques in the Cultural Heritage field, following two themes. The first is the application of high-precision surveying techniques to the restoration and interpretation of relevant monuments and archaeological finds. The main case study concerns the generation of a high-fidelity 3D model of the Fountain of Neptune in Bologna. In this work, aimed at the restoration of the artefact, both geometrical and radiometric aspects were crucial. The final product formed the basis of a 3D information system, a shared tool through which the different figures involved in the restoration contributed in a multidisciplinary approach. The second theme is the arrangement of 3D databases for a Building Information Modeling (BIM) approach, a process that involves the generation and management of digital representations of the physical and functional characteristics of historical buildings, towards a so-called Historical Building Information Model (HBIM). A first application concerned the church of San Michele in Acerboli in Santarcangelo di Romagna. The survey was performed by integrating classical and modern Geomatic techniques, and the point cloud representing the church was used to develop an HBIM model in which the relevant information connected to the building could be stored and georeferenced. A second application concerns the domus of Obellio Firmo in Pompeii, likewise surveyed by integrating classical and modern Geomatic techniques. A historical analysis permitted the definition of construction phases and the organization of a database of materials and constructive elements. The goal is to obtain a federated model able to manage the documental, analytical and reconstructive aspects.
Abstract:
The increasing number of extreme rainfall events, combined with high population density and the imperviousness of the land surface, makes urban areas particularly vulnerable to pluvial flooding. In order to design and manage cities able to deal with this issue, the reconstruction of weather phenomena is essential. Among the most promising data sources are the observational networks of private sensors managed by citizens (crowdsourcing). The number of these personal weather stations is consistently increasing, and their spatial distribution roughly follows population density; precisely for this reason, they suit a detailed study of pluvial flood modelling in urban environments. The uncertainty associated with these precipitation measurements is still a matter of research. In order to characterise the accuracy and precision of the crowdsourced data, we carried out exploratory data analyses. A comparison between Netatmo hourly precipitation amounts and observations of the same quantity from weather stations managed by national weather services is presented. The crowdsourced stations are very good at detecting rain but tend to underestimate the reference value. In detail, the accuracy and precision of crowdsourced data change as precipitation increases, with the spread improving towards the extreme values. The ability of this kind of observation to improve the prediction of pluvial flooding is then tested. To this end, the simplified raster-based inundation model incorporated in the Saferplaces web platform is used to simulate pluvial flooding. Different precipitation fields were produced and tested as input to the model. Two case studies are analysed over the most densely populated Norwegian city, Oslo. The crowdsourced weather station observations, bias-corrected (i.e. increased by 25%), showed very good skill in detecting flooded areas.
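The bias correction mentioned above (scaling crowdsourced totals up by 25%) reduces to a one-line multiplicative adjustment; the sketch below is illustrative, with invented hourly amounts.

```python
# The 25% increase from the abstract expressed as a multiplicative factor.
BIAS_FACTOR = 1.25

def bias_correct(rain_mm, factor=BIAS_FACTOR):
    """Scale crowdsourced hourly precipitation amounts (mm) to offset
    the systematic underestimation relative to reference stations."""
    return [r * factor for r in rain_mm]

netatmo_hourly = [0.0, 2.0, 4.0]          # hypothetical hourly amounts (mm)
corrected = bias_correct(netatmo_hourly)  # [0.0, 2.5, 5.0]
```

In practice such a factor would be estimated from paired crowdsourced/reference totals rather than fixed a priori; here it is taken directly from the abstract.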
Abstract:
OBJECTIVE: To demonstrate the feasibility and safety of simultaneous catheterization and mapping of the 4 pulmonary veins for ablation of atrial fibrillation. METHODS: Ten patients, 8 with paroxysmal atrial fibrillation and 2 with persistent atrial fibrillation, refractory to at least 2 antiarrhythmic drugs and without structural heart disease, were consecutively studied. Through the transseptal insertion of 2 long sheaths, the 4 pulmonary veins were simultaneously catheterized with octapolar microcatheters. After identification of arrhythmogenic foci, radiofrequency was applied under angiographic or ultrasonographic control. RESULTS: During 17 procedures, 40 pulmonary veins were mapped, 16 of which had local ectopic activity, related or not to the triggering of atrial fibrillation paroxysms. At the end of each procedure, suppression of arrhythmias was obtained in 8 patients, and elimination of pulmonary vein potentials was accomplished in 4. During the clinical follow-up of 9.6±3 months, 7 patients remained in sinus rhythm, 5 of whom were using antiarrhythmic drugs that had previously been ineffective. None of the patients had pulmonary hypertension or evidence of stenosis in the pulmonary veins. CONCLUSION: Selective and simultaneous catheterization of the 4 pulmonary veins with microcatheters for simultaneous recording of their electrical activity is a feasible and safe procedure that may aid ablation of atrial fibrillation.
Abstract:
Schistosomiasis mansoni is not just a physical disease but is related to social and behavioural factors as well. Snails of the genus Biomphalaria are an intermediate host of Schistosoma mansoni and infect humans through water. The objective of this study is to classify the risk of schistosomiasis in the state of Minas Gerais (MG). We focus on socioeconomic and demographic features, basic sanitation, the presence of accumulated water bodies, dense vegetation in the summer and winter seasons, and related terrain characteristics. We draw on the decision tree approach to infection risk modelling and mapping, and the robustness of the model was verified. The main variables selected by the procedure included the terrain's water accumulation capacity, temperature extremes and the Human Development Index. The model was used to generate two maps, one with the risk classification for the whole of MG and another with the classification errors. The resulting map was 62.9% accurate.
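A fitted decision tree amounts to a cascade of threshold rules on the selected predictors. The sketch below is a hand-written illustration of that idea using the variables the study reports as most important; all thresholds and class assignments are invented, not the study's fitted tree.

```python
# Illustrative decision rules in the shape a fitted tree would take.
# Thresholds below are placeholders, NOT values from the study.

def classify_risk(water_accum, temp_extreme_c, hdi):
    """Return an illustrative risk class for one area, given the terrain's
    water accumulation capacity (0-1), a temperature-extreme value (deg C)
    and the Human Development Index (0-1)."""
    if water_accum > 0.6:        # terrain prone to standing water (invented cut)
        if hdi < 0.65:           # lower HDI proxies poorer sanitation (invented cut)
            return "high"
        return "medium"
    if temp_extreme_c > 32.0:    # invented temperature threshold
        return "medium"
    return "low"
```

Applying such a classifier cell by cell over a raster of predictor layers yields a risk map like the one described above; comparing predictions with ground truth yields the companion error map.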
Abstract:
Many transportation agencies maintain grade as an attribute in roadway inventory databases; however, the information is often in an aggregated format. Cross slope is rarely included in large roadway inventories. Accurate methods available to collect grade and cross slope include global positioning systems, traditional surveying, and mobile mapping systems; however, most agencies do not have the resources to apply these methods on a large scale. This report discusses the use of LIDAR to extract roadway grade and cross slope for large-scale inventories. Current data collection methods and their advantages and disadvantages are discussed, and a pilot study to extract grade and cross slope from a LIDAR data set, including methodology, results, and conclusions, is presented. The report describes the regression methodology used to extract grade and cross slope from three-dimensional surfaces created from LIDAR data and to evaluate their accuracy. The use of LIDAR data to extract grade and cross slope on tangent highway segments was evaluated against grade and cross slope collected with an automatic level for 10 test segments along Iowa Highway 1. Grade and cross slope were measured from a surface model created from the LIDAR points collected for the study area. While grade could be estimated to within 1%, the study results indicate that cross slope cannot practically be estimated using a LIDAR-derived surface model.
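One standard regression approach to this task, shown here only as an illustration of the idea (the report's exact procedure may differ), is to least-squares fit a plane to LIDAR ground points in road-aligned coordinates: the coefficient along the road is the grade and the coefficient across it is the cross slope.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic LIDAR ground points over one tangent segment, in road-aligned
# coordinates: x along the centreline (m), y across it (m). All values invented.
x = rng.uniform(0, 100, 500)
y = rng.uniform(-4, 4, 500)
true_grade, true_cross = 0.03, -0.02   # 3% grade, -2% cross slope (synthetic truth)
z = true_grade * x + true_cross * y + 120.0 + rng.normal(0, 0.02, 500)

# Fit the plane z = grade*x + cross_slope*y + elev0 by least squares.
A = np.column_stack([x, y, np.ones_like(x)])
(grade, cross_slope, elev0), *_ = np.linalg.lstsq(A, z, rcond=None)
```

With centimetre-level vertical noise the along-road slope is recovered tightly; in practice, as the report found, cross slope is far more sensitive to noise because the lateral extent (a few metres) gives much less leverage than the longitudinal one.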
Abstract:
An efficient and reliable automated model that can map physical Soil and Water Conservation (SWC) structures on cultivated land was developed using very high spatial resolution imagery obtained from Google Earth, together with ArcGIS, ERDAS IMAGINE, the SDC Morphology Toolbox for MATLAB, and statistical techniques. The model was developed using the following procedure: (1) a high-pass spatial filter algorithm was applied to detect linear features; (2) morphological processing was used to remove unwanted linear features; (3) the raster output was vectorized; (4) the vectorized linear features were split per hectare (ha) and each line was classified according to its compass direction; and (5) the sum of all vector lengths per direction class per ha was calculated. Finally, the direction class with the greatest length was selected for each ha to predict the physical SWC structures. The model was calibrated and validated in the Ethiopian Highlands and correctly mapped 80% of the existing structures. It was then tested at sites with different topography, and the results show that it is feasible for automated mapping of physical SWC structures. The model is therefore useful for predicting and mapping physical SWC structures across diverse areas.
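Steps (4) and (5) can be sketched as follows. This is a generic illustration with invented geometry, not the published model; it folds bearings into [0°, 180°) because an undirected line segment has two equivalent compass directions.

```python
import math
from collections import defaultdict

def direction_class(x1, y1, x2, y2, n_classes=8):
    """Fold the segment's bearing into [0, 180) degrees and bin it into
    one of n_classes equal-width direction classes."""
    angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
    return int(angle // (180.0 / n_classes))

def dominant_direction(segments):
    """segments: (x1, y1, x2, y2) tuples for the lines within one hectare.
    Returns the direction class with the greatest summed length."""
    length_per_class = defaultdict(float)
    for x1, y1, x2, y2 in segments:
        cls = direction_class(x1, y1, x2, y2)
        length_per_class[cls] += math.hypot(x2 - x1, y2 - y1)
    return max(length_per_class, key=length_per_class.get)

# Hypothetical hectare: two near-horizontal lines and one short vertical one,
# so the east-west direction class (class 0) should dominate.
segments = [(0, 0, 10, 0), (0, 1, 9, 1), (0, 0, 0, 3)]
```

Running `dominant_direction` per hectare over the whole vectorized layer reproduces the final selection step of the procedure.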
Abstract:
In 2014, UniDive (The University of Queensland Underwater Club) conducted an ecological assessment of the Point Lookout Dive sites for comparison with similar surveys conducted in 2001. Involvement in the project was voluntary. Members of UniDive who were marine experts conducted training for other club members who had no, or limited, experience in identifying marine organisms and mapping habitats. Since the 2001 detailed baseline study, no similar seasonal survey has been conducted. The 2014 data is particularly important given that numerous changes have taken place in relation to the management of, and potential impacts on, these reef sites. In 2009, Moreton Bay Marine Park was re-zoned, and Flat Rock was converted to a marine national park zone (Green zone) with no fishing or anchoring. In 2012, four permanent moorings were installed at Flat Rock. Additionally, the entire area was exposed to the potential effects of the 2011 and 2013 Queensland floods, including flood plumes which carried large quantities of sediment into Moreton Bay and surrounding waters. The population of South East Queensland has increased from 2.49 million in 2001 to 3.18 million in 2011 (BITRE, 2013). This rapidly expanding coastal population has increased the frequency and intensity of both commercial and recreational activities around Point Lookout dive sites (EPA 2008). Methodology used for the PLEA project was based on the 2001 survey protocols, Reef Check Australia protocols and Coral Watch methods. This hybrid methodology was used to monitor substrate and benthos, invertebrates, fish, and reef health impacts. Additional analyses were conducted with georeferenced photo transects. The PLEA marine surveys were conducted over six weekends in 2014 totaling 535 dives and 376 hours underwater. Two training weekends (February and March) were attended by 44 divers, whilst biological surveys were conducted on seasonal weekends (February, May, July and October). 
Three reefs were surveyed, with two semi-permanent transects at Flat Rock, two at Shag Rock, and one at Manta Ray Bommie. Each transect was sampled once every survey weekend, with the transect tapes deployed at a depth of 10 m below chart datum. Fish populations were assessed using a visual census along 3 x 20 m transects; each transect was 5 m wide (2.5 m either side of the transect tape), 5 m high and 20 m in length. Fish families and species were chosen that are commonly targeted by recreational or commercial fishers, or by aquarium collectors, and that are easily identified by their body shape; rare or otherwise unusual species were also recorded. Target invertebrate populations were assessed using a visual census along 3 x 20 m transects, each 5 m wide (2.5 m either side of the transect tape) and 20 m in length, with the surveying diver conducting a 'U-shaped' search pattern covering 2.5 m on either side of the transect tape. Target impacts were assessed in the same way along the 3 x 20 m transects, again via a 'U-shaped' search pattern covering 2.5 m on either side of the tape. Substrate surveys were conducted using the point sampling method, enabling the percentage cover of substrate types and benthic organisms to be calculated: the substrate or benthos under the transect line was identified at 0.5 m intervals, with a 5 m gap between each of the three 20 m segments. Categories recorded included various growth forms of hard and soft coral, key species/growth forms of algae, other living organisms (e.g. sponges), recently killed coral, and non-living substrate types (e.g. bare rock, sand, rubble, silt/clay).
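The percentage-cover calculation behind the point sampling method reduces to relative frequencies of the recorded categories. The sketch below uses invented category codes and a shortened record list (a full 20 m segment sampled at 0.5 m intervals would give about 40 points).

```python
from collections import Counter

def percent_cover(points):
    """points: the substrate/benthos codes recorded under the tape at each
    sampling interval. Returns percentage cover per category."""
    counts = Counter(points)
    return {cat: 100.0 * n / len(points) for cat, n in counts.items()}

# Hypothetical shortened transect record: hard coral, soft coral, rock, sand
obs = ["HC"] * 8 + ["SC"] * 2 + ["RK"] * 6 + ["SD"] * 4
cover = percent_cover(obs)  # e.g. HC -> 40.0 (% of points)
```

Averaging such per-transect percentages across the three 20 m segments, and across survey weekends, gives the seasonal cover estimates the survey design supports.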