946 results for "image registration system"
Abstract:
This thesis uses data from a systematic survey in the Lazarev Sea near the Fimbul Ice Shelf (Fimbulisen), collected during expedition ANT XIX-2 with the Hydrosweep DS-2 multibeam sonar system and the Parasound sediment echo sounder. After a brief outline of the background to the investigations in the survey area, the essential aspects of hydroacoustics relevant to echo-sounding systems are discussed in general, with emphasis on the parametric effect, the measuring principle of parametric sediment echo sounders. Following two practical applications of hydroacoustic measurement methods, illustrated with the Hydrosweep DS-2 and Parasound systems, their positioning on RV 'Polarstern' is described in detail, since processing of the measurements showed that the poor quality of the navigation data was the biggest problem affecting the data of both systems. From the cleaned depth data of the multibeam survey, a digital terrain model (DTM) with a grid spacing of 100 m is generated. This model is available for further processing in digital form and as a bathymetric chart at a scale of 1:250,000, in which the topography of the canyon system near Fimbulisen is depicted by contour lines at 50 m intervals. The seismograms obtained from the processed Parasound data, available as filtered digital images with known start and end positions for a defined depth range, can be displayed together with the DTM in a three-dimensional model. The user can interactively traverse this digital model and view the measurement results, as a whole and in detail views from different perspectives, which aids the joint understanding and assessment of the results from the two measurement methods.
This combined presentation of a digital terrain model together with the seismogram images of the Parasound sediment echo sounder also lends itself to a geological classification of the different echo types and a subsequent interpretation of the sedimentation processes in a systematically surveyed area.
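The gridding step behind a 100 m terrain model like the one described above can be sketched as binning soundings into square cells and averaging — a minimal pure-Python illustration; the function name and the simple cell-averaging scheme are assumptions, not the processing chain actually used for the Hydrosweep data:

```python
from collections import defaultdict

def grid_soundings(soundings, cell_size=100.0):
    """Average depth soundings into square grid cells.

    soundings: iterable of (x, y, depth) tuples in a projected
    coordinate system (metres); cell_size: grid spacing in metres.
    Returns {(col, row): mean_depth} for every occupied cell.
    """
    sums = defaultdict(lambda: [0.0, 0])  # cell -> [depth sum, count]
    for x, y, z in soundings:
        key = (int(x // cell_size), int(y // cell_size))
        acc = sums[key]
        acc[0] += z
        acc[1] += 1
    return {k: s / n for k, (s, n) in sums.items()}
```

Real bathymetric processing would add outlier rejection and interpolation of empty cells; the averaging above only shows the basic rasterization idea.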
Abstract:
The paper presents first results of a pan-boreal scale land cover harmonization and classification. A methodology is presented that combines global and regional vegetation datasets to extract percentage cover information for different vegetation physiognomies and barren ground for the pan-arctic region within the ESA Data User Element Permafrost. Based on the legend description of each land cover product, the datasets are harmonized into four LCCS (Land Cover Classification System) classifiers, which are linked to the MODIS Vegetation Continuous Fields (VCF) product. Harmonized land cover and Vegetation Continuous Fields products are combined to derive a best estimate of percentage cover for trees, shrubs, herbaceous vegetation and barren areas for Russia. Future work will concentrate on expanding the developed methodology to the pan-arctic scale. Since vegetation forms an insulating layer that shields the permafrost from heat and cold, degradation of this layer by fire strongly influences the frozen conditions in the soil. Fire is an important disturbance factor that affects many processes and dynamics in ecosystems (e.g. biomass, biodiversity, hydrology). In northern Eurasia in particular, fire occurrence has increased dramatically over the last 50 years, doubling in the 1990s relative to the preceding decades. A comparison of global and regional fire products has shown discrepancies between the amounts of burn scars detected by different algorithms and satellite data.
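The legend harmonization step can be sketched as a lookup that translates each source land cover class into fractional cover of the four LCCS classifiers named in the text; the class names and fractions below are invented examples, not the actual product legends:

```python
# Hypothetical legend-to-classifier table: each source land cover
# class maps to fractional cover of the four LCCS classifiers
# (trees, shrubs, herbaceous, barren).
LEGEND_TO_LCCS = {
    "dense forest": {"trees": 0.8, "shrubs": 0.1, "herbaceous": 0.1, "barren": 0.0},
    "shrub tundra": {"trees": 0.0, "shrubs": 0.6, "herbaceous": 0.3, "barren": 0.1},
    "bare ground":  {"trees": 0.0, "shrubs": 0.0, "herbaceous": 0.1, "barren": 0.9},
}

def harmonize(pixel_classes):
    """Average the classifier fractions over the source classes
    assigned to one pixel, yielding one harmonized cover estimate."""
    totals = {"trees": 0.0, "shrubs": 0.0, "herbaceous": 0.0, "barren": 0.0}
    for cls in pixel_classes:
        for k, v in LEGEND_TO_LCCS[cls].items():
            totals[k] += v / len(pixel_classes)
    return totals
```

In the methodology described above, estimates of this kind would then be combined with the VCF percentages to obtain the best-estimate cover maps.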
Abstract:
A new topographic database for King George Island, one of the most visited areas in Antarctica, is presented. Data from differential GPS surveys, gathered during the summers of 1997/98 and 1999/2000, were combined with up-to-date coastlines from a SPOT satellite image mosaic and topographic information from maps as well as from the Antarctic Digital Database. A digital terrain model (DTM) was generated using ARC/INFO GIS. From contour lines derived from the DTM and the satellite image mosaic, a satellite image map was assembled. Extensive information on data accuracy, on the database, and on the criteria applied to select place names is given in the multilingual map. A lack of accurate topographic information in the eastern part of the island was identified; it was concluded that additional topographic surveying or radar interferometry should be conducted to improve the data quality in this area. Three case studies demonstrate potential applications of the improved topographic database. The first two comprise the verification of glacier velocities and the study of glacier retreat from the various input datasets, as well as the use of the DTM for climatological modelling. The last case study focuses on the use of the new digital database as a basic GIS (Geographic Information System) layer for environmental monitoring and management on King George Island.
Abstract:
Conceptualization of groundwater flow systems is necessary for water resources planning. Geophysical, hydrochemical and isotopic characterization methods were used to investigate the groundwater flow system of a multi-layer fractured sedimentary aquifer along the coastline in southwestern Nicaragua. A geologic survey was performed across the 46 km² catchment. Electrical resistivity tomography (ERT) was applied along a 4.4 km transect parallel to the main river channel to identify fractures and determine aquifer geometry. Additionally, three cross sections in the lower catchment and two on hillslopes in the upper part of the catchment were surveyed using ERT. Stable water isotopes, chloride and silica were analyzed in samples from springs, the river, wells and piezometers during the dry and wet seasons of 2012. Evidence of moisture recycling was found, although identification of the source areas needs further investigation. The upper-middle catchment area is formed by fractured shale/limestone on top of compact sandstone. The lower catchment area comprises an alluvial unit of about 15 m thickness overlying a fractured shale unit. Two major groundwater flow systems were identified: a deep one in the shale unit, recharged in the upper-middle catchment area, and a shallow one flowing in the alluvial unit and recharged locally in the lower catchment area. Recharged precipitation displaces older groundwater along the catchment in a piston-flow mechanism. Geophysical methods in combination with hydrochemical and isotopic tracers provide information over different scales and resolutions, which allows an integrated analysis of groundwater flow systems. This approach provides integrated surface and subsurface information where remoteness, accessibility and costs prohibit installation of groundwater monitoring networks.
Abstract:
To deliver sample estimates with the necessary probability foundation to permit generalization from the sample data subset to the whole target population being sampled, probability sampling strategies are required to satisfy necessary (but not sufficient) conditions, including: (i) all inclusion probabilities must be greater than zero in the target population to be sampled — if some sampling units have an inclusion probability of zero, then a map accuracy assessment does not represent the entire target region depicted in the map to be assessed; (ii) the inclusion probabilities must be (a) knowable for nonsampled units and (b) known for those units selected in the sample — since the inclusion probability determines the weight attached to each sampling unit in the accuracy estimation formulas, if the inclusion probabilities are unknown, so are the estimation weights. This original work presents a novel (to the best of these authors' knowledge, the first) probability sampling protocol for quality assessment and comparison of thematic maps generated from spaceborne/airborne Very High Resolution (VHR) images, where: (I) an original Categorical Variable Pair Similarity Index (CVPSI, proposed in two different formulations) is estimated as a fuzzy degree of match between a reference and a test semantic vocabulary, which may not coincide, and (II) both symbolic pixel-based thematic quality indicators (TQIs) and sub-symbolic object-based spatial quality indicators (SQIs) are estimated with a degree of uncertainty in measurement, in compliance with the well-known Quality Assurance Framework for Earth Observation (QA4EO) guidelines. Like a decision tree, any protocol (guidelines for best practice) comprises a set of rules, equivalent to structural knowledge, and an order of presentation of the rule set, known as procedural knowledge. The combination of these two levels of knowledge makes an original protocol worth more than the sum of its parts.
The several degrees of novelty of the proposed probability sampling protocol are highlighted in this paper, at the levels of understanding of both structural and procedural knowledge, in comparison with related multi-disciplinary works selected from the existing literature. In the experimental session the proposed protocol is tested for accuracy validation of preliminary classification maps automatically generated by the Satellite Image Automatic Mapper™ (SIAM™) software product from two WorldView-2 images and one QuickBird-2 image provided by DigitalGlobe for testing purposes. In these experiments, collected TQIs and SQIs are statistically valid, statistically significant, consistent across maps and in agreement with theoretical expectations, visual (qualitative) evidence and quantitative quality indexes of operativeness (OQIs) claimed for SIAM™ by related papers. As a subsidiary conclusion, the statistically consistent and statistically significant accuracy validation of the SIAM™ pre-classification maps proposed in this contribution, together with the OQIs claimed for SIAM™ by related works, makes the operational (automatic, accurate, near real-time, robust, scalable) SIAM™ software product eligible for opening up new inter-disciplinary research and market opportunities in accordance with the visionary goal of the Global Earth Observation System of Systems (GEOSS) initiative and the QA4EO international guidelines.
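The role of inclusion probabilities as estimation weights (condition (ii) above) can be illustrated with a Horvitz-Thompson-style accuracy estimator — a textbook sketch, not the proposed protocol itself:

```python
def ht_accuracy(samples):
    """Horvitz-Thompson-style estimate of overall map accuracy.

    samples: list of (correct, pi) pairs, where `correct` is True if
    the map label matched the reference label at that sampling unit
    and `pi` is the unit's inclusion probability (must be > 0,
    condition (i)). Each unit is weighted by 1/pi, so unequal
    inclusion probabilities do not bias the estimate; with unknown
    pi values the weights, and hence the estimate, are undefined
    (condition (ii)).
    """
    num = sum((1.0 if correct else 0.0) / pi for correct, pi in samples)
    den = sum(1.0 / pi for _, pi in samples)
    return num / den
```

Under equal inclusion probabilities this reduces to the plain proportion of correctly labelled units.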
Abstract:
Due to the ongoing effects of climate change, phytoplankton are likely to experience enhanced irradiance, more reduced nitrogen, and increased water acidity in the future ocean. Here, we used Thalassiosira pseudonana as a model organism to examine how phytoplankton adjust energy production and expenditure to cope with these multiple, interrelated environmental factors. Following acclimation to a matrix of irradiance, nitrogen source, and CO2 levels, the diatom's energy production and expenditures were quantified and incorporated into an energetic budget to predict how photosynthesis was affected by growth conditions. Increased light intensity and a shift from NH4+ to NO3− led to increased energy generation, through higher rates of light capture at high light and greater investment in photosynthetic proteins when grown on NO3−. Secondary energetic expenditures were adjusted modestly at different culture conditions, except that HCO3− utilization was systematically reduced by increasing pCO2. The subsequent changes in element stoichiometry, biochemical composition, and release of dissolved organic compounds may have important implications for marine biogeochemical cycles. The predicted effects of changing environmental conditions on photosynthesis, made using an energetic budget, were in good agreement with observations at low light, when energy is clearly limiting, but the energetic budget over-predicts the response to NO3− at high light, which might be due to relief of energetic limitations and/or an increased percentage of inactive photosystem II at high light. Taken together, our study demonstrates that energetic budgets offer significant insight into the response of phytoplankton energy metabolism to a changing environment and do a reasonable job of predicting it.
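The budget-closure idea can be sketched in a few lines; the expenditure term names below are assumptions for illustration, not the categories actually quantified in the study:

```python
def predict_photosynthesis(expenditures):
    """Energetic budget: the photosynthetic energy generation needed
    to balance the budget is the sum of all expenditure terms
    (all values in the same energy units)."""
    return sum(expenditures.values())

def budget_mismatch(observed, expenditures):
    """Observed minus predicted energy generation: a negative value
    means the budget over-predicts the required photosynthesis,
    as reported at high light in the text."""
    return observed - predict_photosynthesis(expenditures)
```

The real budget additionally resolves how each term responds to irradiance, nitrogen source and pCO2; this sketch only shows the bookkeeping.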
Abstract:
In this paper, a novel approach for obtaining 3D models from video sequences captured with hand-held cameras is presented. We define a pipeline that robustly deals with different types of sequences and acquisition devices. Our system follows a divide-and-conquer approach: after a frame decimation that pre-conditions the input sequence, the video is split into short-length clips. This allows the reconstruction step to be parallelized, which translates into a reduction in the amount of computational resources required. The short length of the clips allows an intensive search for the best solution at each step of reconstruction, which makes the system more robust. The process of feature tracking is embedded within the reconstruction loop for each clip, as opposed to other approaches. A final registration step merges all the processed clips into the same coordinate frame.
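The clip-splitting stage can be sketched as follows; the clip length and overlap values are illustrative, not those of the paper, and the overlap stands in for whatever shared content the final registration step uses to merge clips:

```python
def split_into_clips(n_frames, clip_len=30, overlap=5):
    """Split a decimated frame sequence into short overlapping clips.

    Returns a list of (start, end) frame-index ranges (end exclusive).
    Consecutive clips share `overlap` frames, so each clip can be
    reconstructed independently (and in parallel) and the overlapping
    frames later allow registration into one coordinate frame.
    """
    clips = []
    start = 0
    while start < n_frames:
        end = min(start + clip_len, n_frames)
        clips.append((start, end))
        if end == n_frames:
            break
        start = end - overlap
    return clips
```

Each returned range would then be fed to an independent reconstruction worker.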
Abstract:
Managing large medical image collections is an increasingly important issue in many hospitals and other medical settings. A huge amount of such information is generated daily, which requires robust and agile systems. In this paper we present a distributed multi-agent system capable of managing very large medical image datasets. In this approach, agents extract low-level information from images and store it in a data structure implemented in a relational database. The data structure can also store semantic information related to images and particular regions. A distinctive aspect of our work is that a single image can be divided so that the resulting sub-images can be stored and managed separately by different agents to improve performance in data access and processing. The system also offers the possibility of applying region-based operations and filters on images, facilitating image classification. These operations can be performed directly on the data structures in the database.
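The tiling-and-storage idea — splitting one image into sub-images that can be managed separately — can be sketched with SQLite; the schema and function name are hypothetical, not the system's actual design:

```python
import sqlite3

def store_tiles(conn, image_id, pixels, tile_size):
    """Split a 2-D grid of 8-bit pixel values into square tiles and
    store each tile as a separate row, so different agents could
    fetch and process sub-images independently."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS tiles "
        "(image_id TEXT, row INTEGER, col INTEGER, data BLOB)"
    )
    h, w = len(pixels), len(pixels[0])
    for r in range(0, h, tile_size):
        for c in range(0, w, tile_size):
            tile = [row[c:c + tile_size] for row in pixels[r:r + tile_size]]
            blob = bytes(v for row in tile for v in row)  # row-major bytes
            conn.execute("INSERT INTO tiles VALUES (?, ?, ?, ?)",
                         (image_id, r // tile_size, c // tile_size, blob))
    conn.commit()
```

Region-based operations then reduce to SQL queries selecting the tiles that cover the region of interest.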
Abstract:
BACKGROUND: Antiretroviral therapy has changed the natural history of human immunodeficiency virus (HIV) infection in developed countries, where it has become a chronic disease. This clinical scenario requires a new approach to simplify follow-up appointments and facilitate access to healthcare professionals. METHODOLOGY: We developed a new internet-based home care model covering the entire management of chronic HIV-infected patients. This was called Virtual Hospital. We report the results of a prospective randomised study performed over two years, comparing standard care received by HIV-infected patients with Virtual Hospital care. HIV-infected patients with access to a computer and broadband were randomised to be monitored either through Virtual Hospital (Arm I) or through standard care at the day hospital (Arm II). After one year of follow-up, patients switched their care to the other arm. Virtual Hospital offered four main services: Virtual Consultations, Telepharmacy, Virtual Library and Virtual Community. A technical and clinical evaluation of Virtual Hospital was carried out. FINDINGS: Of the 83 randomised patients, 42 were monitored during the first year through Virtual Hospital (Arm I) and 41 through standard care (Arm II). Baseline characteristics of patients were similar in the two arms. The level of technical satisfaction with the virtual system was high: 85% of patients considered that Virtual Hospital improved their access to clinical data, and they felt comfortable with the videoconference system. Neither clinical parameters [level of CD4+ T lymphocytes, proportion of patients with an undetectable viral load (p = 0.21) and compliance levels >90% (p = 0.58)] nor the evaluations of quality of life or psychological questionnaires changed significantly between the two types of care. CONCLUSIONS: Virtual Hospital is a feasible and safe tool for the multidisciplinary home care of chronic HIV patients.
Telemedicine should be considered as an appropriate support service for the management of chronic HIV infection. TRIAL REGISTRATION: ClinicalTrials.gov: NCT01117675.
Abstract:
A generic bio-inspired adaptive architecture for image compression, suitable for implementation in embedded systems, is presented. The architecture allows the system to be tuned during its calibration phase. An evolutionary algorithm is responsible for making the system evolve towards the required performance. A prototype has been implemented in a Xilinx Virtex-5 FPGA featuring an adaptive wavelet transform core directed at improving image compression for specific types of images. An Evolution Strategy has been chosen as the search algorithm, and its typical genetic operators have been adapted to allow for a hardware-friendly implementation. HW/SW partitioning issues are also considered after a high-level description of the algorithm is profiled, which validates the proposed resource allocation in the device fabric. To check the robustness of the system and its adaptation capabilities, different types of images have been selected as validation patterns. A direct application of such a system is its deployment in an environment unknown at design time, letting the calibration phase adjust the system parameters so that it performs efficient image compression. This prototype implementation may also serve as an accelerator for the automatic design of evolved transform coefficients, which are later synthesized and implemented in a non-adaptive system in the final implementation device, whether a HW- or SW-based computing device. The architecture has been built in a modular way so that it can be easily extended to adapt other types of image processing cores. Details on this pluggable-component point of view are also given in the paper.
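The search loop of an Evolution Strategy can be sketched with a minimal (1+λ) variant using Gaussian mutation; the parameters and fitness function below are illustrative, and the actual implementation described above adapts its operators for FPGA friendliness rather than using this software form:

```python
import random

def one_plus_lambda_es(fitness, parent, lam=8, sigma=0.1,
                       generations=50, seed=0):
    """Minimal (1+lambda) Evolution Strategy with Gaussian mutation.

    fitness: callable scoring a coefficient vector (higher is better);
    parent: initial list of real-valued coefficients (e.g. wavelet
    transform coefficients). Each generation produces `lam` mutated
    offspring; the best individual found so far survives as parent.
    """
    rng = random.Random(seed)
    best, best_fit = list(parent), fitness(parent)
    for _ in range(generations):
        for _ in range(lam):
            child = [g + rng.gauss(0.0, sigma) for g in best]
            f = fitness(child)
            if f >= best_fit:  # offspring replaces parent on a tie
                best, best_fit = child, f
    return best
```

Because the parent is only ever replaced by an equal-or-better offspring, the best fitness found is non-decreasing over generations.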