888 results for Virtual and remote laboratories
Resumo:
In this paper, we propose two active learning algorithms for semiautomatic definition of training samples in remote sensing image classification. Based on predefined heuristics, the classifier ranks the unlabeled pixels and automatically chooses those that are considered the most valuable for its improvement. Once the pixels have been selected, the analyst labels them manually and the process is iterated. Starting with a small and nonoptimal training set, the model itself builds the optimal set of samples which minimizes the classification error. We have applied the proposed algorithms to a variety of remote sensing data, including very high resolution and hyperspectral images, using support vector machines. Experimental results confirm the consistency of the methods. The required number of training samples can be reduced to 10% using the methods proposed, reaching the same level of accuracy as larger data sets. A comparison with a state-of-the-art active learning method, margin sampling, is provided, highlighting advantages of the methods proposed. The effect of spatial resolution and separability of the classes on the quality of the selection of pixels is also discussed.
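The baseline mentioned above, margin sampling, illustrates the general active-learning loop well: train on the current labeled set, rank unlabeled pixels by their distance to the decision boundary, and hand the closest ones to the analyst. A minimal sketch on synthetic data follows; it is not the authors' algorithms, and all data and parameters are illustrative:

```python
# Margin-sampling active learning sketch on synthetic "pixels";
# illustrative only, not the paper's proposed algorithms.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Two Gaussian blobs stand in for the spectra of two land-cover classes.
X = np.vstack([rng.normal(0, 1, (200, 4)), rng.normal(2, 1, (200, 4))])
y = np.array([0] * 200 + [1] * 200)

# Small, deliberately non-optimal initial training set (both classes).
labeled = [0, 1, 2, 3, 4, 200, 201, 202, 203, 204]
unlabeled = [i for i in range(len(X)) if i not in labeled]

for _ in range(5):  # each round simulates one analyst-labeling iteration
    clf = SVC(kernel="rbf", gamma="scale").fit(X[labeled], y[labeled])
    # Rank unlabeled pixels by |decision value|: small margin = informative.
    margins = np.abs(clf.decision_function(X[unlabeled]))
    picks = [unlabeled[i] for i in np.argsort(margins)[:5]]
    labeled += picks  # the "analyst" supplies y[picks]
    unlabeled = [i for i in unlabeled if i not in picks]

print(len(labeled))  # 35 labeled samples after 5 rounds of 5 queries
```

The loop mirrors the abstract's description: the model itself selects the pixels it considers most valuable, and the training set grows only where the classifier is uncertain.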
Resumo:
A semisupervised support vector machine is presented for the classification of remote sensing images. The method exploits the wealth of unlabeled samples for regularizing the training kernel representation locally by means of cluster kernels. The method learns a suitable kernel directly from the image and thus avoids assuming a priori signal relations by using a predefined kernel structure. Good results are obtained in image classification examples when few labeled samples are available. The method scales almost linearly with the number of unlabeled samples and provides out-of-sample predictions.
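One common way to build such a cluster kernel is a "bagged" construction: cluster the labeled and unlabeled data several times and raise the kernel value of sample pairs that repeatedly fall in the same cluster. A minimal sketch under that assumption (the paper's exact kernel construction may differ; the data and parameters are synthetic and illustrative):

```python
# Sketch of a "bagged" cluster kernel for semisupervised SVM classification;
# the paper's exact construction may differ. Data are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(3, 1, (100, 3))])
y = np.array([0] * 100 + [1] * 100)
labeled = np.array([0, 10, 20, 30, 40, 100, 110, 120, 130, 140])  # few labels

# Cluster labeled + unlabeled data several times; pairs that often land
# in the same cluster get a larger kernel value.
runs = 10
K_bag = np.zeros((len(X), len(X)))
for seed in range(runs):
    c = KMeans(n_clusters=4, n_init=3, random_state=seed).fit_predict(X)
    K_bag += (c[:, None] == c[None, :])
K_bag /= runs

# Blend the data-driven cluster kernel with a standard RBF kernel.
K = 0.5 * rbf_kernel(X) + 0.5 * K_bag

clf = SVC(kernel="precomputed").fit(K[np.ix_(labeled, labeled)], y[labeled])
accuracy = (clf.predict(K[:, labeled]) == y).mean()
print(accuracy)
```

The kernel is learned from all pixels, labeled or not, which is what lets the SVM regularize its representation when only ten labels are available.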
Resumo:
Peatlands are soil environments that store carbon and large amounts of water, owing to their composition (90 % water), low hydraulic conductivity and sponge-like behavior. It is estimated that peat bogs cover approximately 4.2 % of the Earth's surface and store 28.4 % of the planet's soil carbon. Approximately 612,000 ha of peatlands have been mapped in Brazil, but the peat bogs of the Serra do Espinhaço Meridional (SdEM) were not included. The objective of this study was to map the peat bogs of the northern part of the SdEM and estimate the organic matter pools and water volumes they store. The peat bogs were pre-identified and mapped by GIS and remote sensing techniques, using the ArcGIS 9.3, ENVI 4.5 and GPS Track Maker Pro software, and the maps were validated in the field. Six peat bogs were mapped in detail (1:20,000 and 1:5,000) along transects spaced 100 m apart; on each transect, at 20 m intervals, the UTM (Universal Transverse Mercator) coordinates and depth were recorded and samples were collected for characterization and determination of organic matter, according to the Brazilian System of Soil Classification. In the northern part of the SdEM, 14,287.55 ha of peatlands were mapped, distributed over 1,180,109 ha, representing 1.2 % of the total area. These peatlands have an average volume of 170,021,845 m³ and store 6,120,167 t (428.36 t ha-1) of organic matter and 142,138,262 m³ (9,948 m³ ha-1) of water. In the peat bogs of the Serra do Espinhaço Meridional, organic matter at an advanced stage of decomposition (sapric) predominates, followed by the intermediate stage (hemic). The vertical growth rate of the peatlands ranged between 0.04 and 0.43 mm year-1, while the carbon accumulation rate varied between 6.59 and 37.66 g m-2 year-1.
The peat bogs of the SdEM contain the headwaters of important water bodies in the basins of the Jequitinhonha and San Francisco Rivers and store large amounts of organic carbon and water, which is why the protection and preservation of these soil environments is an urgent and growing need.
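For readers checking the figures, the per-hectare stocks quoted above follow directly from the reported totals and the mapped area:

```python
# Consistency check of the per-hectare figures quoted in the abstract.
area_ha = 14_287.55        # mapped peatland area, ha
om_t = 6_120_167           # organic matter stock, t
water_m3 = 142_138_262     # stored water volume, m3

om_per_ha = om_t / area_ha         # ≈ 428.36 t/ha, as reported
water_per_ha = water_m3 / area_ha  # ≈ 9,948 m3/ha, as reported
print(round(om_per_ha, 2), round(water_per_ha))
```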
Resumo:
Résumé: Following recent technological advances, digital image archives have experienced unprecedented qualitative and quantitative growth. Despite the enormous possibilities they offer, these advances raise new questions about how to process the masses of acquired data. This question is at the root of this Thesis: the problems of processing digital information at very high spatial and/or spectral resolution are addressed with statistical learning approaches, namely kernel methods. This Thesis studies image classification problems, that is, the categorization of pixels into a reduced number of classes reflecting the spectral and contextual properties of the objects they represent. The emphasis is on the efficiency of the algorithms, as well as on their simplicity, so as to increase their potential for adoption by users. Furthermore, the challenge of this Thesis is to remain close to the concrete problems of satellite-image users without losing sight of the interest of the proposed methods for the machine learning community from which they originate. In this sense, this work plays the transdisciplinarity card by maintaining a strong link between the two fields in all the developments proposed. Four models are proposed: the first addresses the problem of high dimensionality and data redundancy with a model that optimizes classification performance by adapting to the particularities of the image. This is made possible by a ranking system for the variables (the bands) that is optimized jointly with the base model: in this way, only the variables that matter for solving the problem are used by the classifier.
The lack of labeled information, and uncertainty about its relevance to the problem, motivate the next two models, based respectively on active learning and on semi-supervised methods: the first improves the quality of a training set through direct interaction between the user and the machine, while the second uses unlabeled pixels to improve the description of the available data and the robustness of the model. Finally, the last model addresses the more theoretical question of structure among the outputs: integrating this source of information, never before considered in remote sensing, opens new research challenges.
Advanced kernel methods for remote sensing image classification. Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009.
Abstract: The technical developments of recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are available to users. However, even if these advances open up more and more possibilities in the use of digital imagery, they also raise several problems of storage and processing. The latter is considered in this Thesis: the processing of very high spatial and spectral resolution images is treated with data-driven approaches relying on kernel methods. In particular, the problem of image classification, i.e. the categorization of an image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented. Emphasis is placed on algorithmic efficiency and on the simplicity of the proposed approaches, to avoid overly complex models that users would not adopt.
The major challenge of the Thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the proposed models have been developed keeping in mind the need for such a synergy. Four models are proposed: first, an adaptive model learning the relevant image features is proposed to solve the problem of high dimensionality and collinearity of the image features. This model automatically provides an accurate classifier and a ranking of the relevance of the single features. The scarcity and unreliability of labeled information were the common root of the second and third models proposed: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase the robustness and quality of the data description. Both solutions have been explored, resulting in two methodological contributions, based respectively on active learning and semisupervised learning. Finally, the more theoretical issue of structured outputs is considered in the last model, which, by integrating output similarity into the model, opens new challenges and opportunities for remote sensing image processing.
Resumo:
In this study we propose an evaluation of the angular effects altering the spectral response of land cover across multi-angle remote sensing image acquisitions. The shift in the statistical distribution of the pixels observed in an in-track sequence of WorldView-2 images is analyzed by means of a kernel-based measure of distance between probability distributions. Afterwards, the portability of supervised classifiers across the sequence is investigated by looking at the evolution of the classification accuracy with respect to the changing observation angle. In this context, the efficiency of various physically and statistically based preprocessing methods in obtaining angle-invariant data spaces is compared and possible synergies are discussed.
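A widely used kernel-based distance between probability distributions is the maximum mean discrepancy (MMD); the study's exact measure may differ, but a sketch of MMD conveys the idea of quantifying the distributional shift between acquisition angles (the data and parameters below are synthetic and illustrative):

```python
# Maximum mean discrepancy (MMD) as one kernel-based distance between
# pixel distributions; the study's exact measure may differ.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of the squared MMD with an RBF kernel."""
    kxx = rbf_kernel(X, X, gamma=gamma).mean()
    kyy = rbf_kernel(Y, Y, gamma=gamma).mean()
    kxy = rbf_kernel(X, Y, gamma=gamma).mean()
    return kxx + kyy - 2.0 * kxy

rng = np.random.default_rng(0)
nadir = rng.normal(0.0, 1.0, (300, 4))      # pixels at near-nadir view
off_nadir = rng.normal(0.4, 1.0, (300, 4))  # same class, angle-shifted spectra

# The spectral shift between viewing angles shows up as a larger distance.
print(mmd2(nadir, off_nadir) > mmd2(nadir, nadir[::-1]))  # True
```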
Resumo:
We investigate the relevance of morphological operators for the classification of land use in urban scenes using submetric panchromatic imagery. A support vector machine is used for the classification. Six types of filters have been employed: opening and closing, opening and closing by reconstruction, and opening and closing top hat. The type and scale of the filters are discussed, and a feature selection algorithm called recursive feature elimination is applied to decrease the dimensionality of the input data. The analysis performed on two QuickBird panchromatic images showed that simple opening and closing operators are the most relevant for classification at such a high spatial resolution. Moreover, mixed sets combining simple and reconstruction filters provided the best performance. Tests performed on both images, covering areas characterized by different architectural styles, yielded similar results for both feature selection and classification accuracy, suggesting that the highlighted feature sets generalize well.
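As a rough illustration of the feature construction described above (not the study's exact pipeline), grey-scale opening, closing and the corresponding top-hats can be computed with standard tools and stacked as per-pixel features; the image and filter scale here are toy values:

```python
# Toy illustration of morphological features (grey-scale opening/closing
# and top-hats); sizes and data are illustrative, not the study's setup.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.random((64, 64))      # stand-in for a panchromatic tile
img[20:30, 20:30] += 2.0        # a bright "building"

size = 5                         # structuring-element scale
opened = ndimage.grey_opening(img, size=size)   # suppresses bright details
closed = ndimage.grey_closing(img, size=size)   # suppresses dark details
open_tophat = img - opened       # bright structures smaller than the SE
close_tophat = closed - img      # dark structures smaller than the SE

# Stack the responses as per-pixel features for a classifier such as an SVM.
features = np.dstack([img, opened, closed, open_tophat, close_tophat])
print(features.shape)  # (64, 64, 5)
```

Repeating this at several structuring-element scales is what produces the multi-scale feature sets that recursive feature elimination then prunes.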
Resumo:
OBJECTIVE: To assess the accuracy of a semiautomated 3D volume reconstruction method for organ volume measurement by postmortem MRI. METHODS: This prospective study was approved by the institutional review board and the infants' parents gave their consent. Postmortem MRI was performed in 16 infants (1 month to 1 year of age) at 1.5 T within 48 h of their sudden death. Virtual organ volumes were estimated using the Myrian software. Real volumes were recorded at autopsy by water displacement. The agreement between virtual and real volumes was quantified following Bland and Altman's method. RESULTS: There was good agreement between virtual and real volumes for the brain (mean difference: -0.03% (-13.6 to +7.1)), liver (+8.3% (-9.6 to +26.2)) and lungs (+5.5% (-26.6 to +37.6)). For the kidneys, spleen and thymus, the MRI/autopsy volume ratio was close to 1 (kidney: 0.87±0.1; spleen: 0.99±0.17; thymus: 0.94±0.25), but with poorer agreement. For the heart, the MRI/real volume ratio was 1.29±0.76, possibly due to the presence of residual blood within the heart. The virtual volumes of the adrenal glands were significantly underestimated (p=0.04), possibly due to their very small size during the first year of life. Interobserver and intraobserver variation was at most 10%, except for the thymus (15.9% and 12.6%, respectively) and the adrenal glands (69% and 25.9%). CONCLUSIONS: Virtual volumetry may provide significant information concerning the macroscopic features of the main organs and help pathologists in sampling organs that are more likely to yield histological findings.
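The Bland-Altman method used above quantifies agreement through the bias (mean of the paired differences) and the 95% limits of agreement. A minimal sketch on made-up volume pairs (not the study's data):

```python
# Bland-Altman agreement sketch on made-up volume pairs (not the study's data).
import numpy as np

mri = np.array([520.0, 118.0, 95.0, 46.0, 30.0])      # "virtual" volumes, mL
autopsy = np.array([512.0, 110.0, 99.0, 44.0, 31.0])  # water displacement, mL

diff = mri - autopsy            # paired differences
bias = diff.mean()              # systematic over/underestimation
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

print(round(bias, 2), round(loa[0], 2), round(loa[1], 2))
```

A "good agreement", as reported for brain, liver and lungs, means a bias near zero and limits of agreement narrow enough to be clinically acceptable.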
Resumo:
In 1903, more than 30 million m3 of rock fell from the east slopes of Turtle Mountain in Alberta, Canada, causing a rock avalanche that killed about 70 people in the town of Frank. The Alberta Government, in response to continuing instabilities at the crest of the mountain, established a sophisticated field laboratory where state-of-the-art monitoring techniques have been installed and tested as part of an early-warning system. In this chapter, we provide an overview of the causes, trigger, and extreme mobility of the landslide. We then present new data relevant to the characterization and detection of the present-day instabilities on Turtle Mountain. Fourteen potential instabilities have been identified through field mapping and remote sensing. Lastly, we provide a detailed review of the different in-situ and remote monitoring systems that have been installed on the mountain. The implications of the new data for the future stability of Turtle Mountain and related landslide runout, and for monitoring strategies and risk management, are discussed.
Resumo:
Precision Viticulture (PV) is a concept that is beginning to have an impact on the wine-growing sector. Its practical implementation is dependent on various technological developments: crop sensors and yield monitors, local and remote sensors, Global Positioning Systems (GPS), VRA (Variable-Rate Application) equipment and machinery, Geographic Information Systems (GIS) and systems for data analysis and interpretation. This paper reviews a number of research lines related to PV. These areas of research have focused on four very specific fields: 1) quantification and evaluation of within-field variability, 2) delineation of zones of differential treatment at parcel level, based on the analysis and interpretation of this variability, 3) development of Variable-Rate Technologies (VRT) and, finally, 4) evaluation of the opportunities for site-specific vineyard management. Research in these fields should allow winegrowers and enologists to know and understand why yield variability exists within the same parcel, what the causes of this variability are, how the yield and its quality are interrelated and, if spatial variability exists, whether site-specific vineyard management is justifiable on a technical and economic basis.
Resumo:
The aim of this work was to define the general structure of a remote maintenance concept and to identify modular service products to be offered to customers. First, the structure of the maintenance concept was investigated through the literature and expert interviews. The expert interviews were conducted informally, with a question list used as support. In addition, the thesis considers the roles of KM, CRM and PDM in the remote maintenance concept and examines the future prospects of remote maintenance. The thesis discusses modularity at a general level. Modularity is a multidimensional term: it often refers to the management of product development within a company, but it can also be viewed in terms of the products offered to customers, and this thesis focuses on the latter aspect. The final part of the thesis deals with the productization process of service products. As a result of the thesis, a remote maintenance concept was outlined and possible service products that could be included in it were examined. In addition, part of the productization process was carried out. Developing remote maintenance is challenging: the remaining challenge is to make product development and service operations more effective, so that products can be designed for remote maintenance needs already at the design stage. This development requires the active participation of both sides.
Resumo:
The fight against doping in sports has been governed since 1999 by the World Anti-Doping Agency (WADA), an independent institution behind the implementation of the World Anti-Doping Code (Code). The intent of the Code is to protect clean athletes through the harmonization of anti-doping programs at the international level with special attention to detection, deterrence and prevention of doping [1]. A new version of the Code came into force on January 1st 2015, introducing, among other improvements, longer periods of sanctioning for athletes (up to four years) and measures to strengthen the role of anti-doping investigations and intelligence. To ensure optimal harmonization, five International Standards covering different technical aspects of the Code are also currently in force: the List of Prohibited Substances and Methods (List), Testing and Investigations, Laboratories, Therapeutic Use Exemptions (TUE) and Protection of Privacy and Personal Information. Adherence to these standards is mandatory for all anti-doping stakeholders to be compliant with the Code. Among these documents, the eighth version of the International Standard for Laboratories (ISL), which also came into effect on January 1st 2015, includes regulations for WADA and ISO/IEC 17025 accreditations and their application for urine and blood sample analysis by anti-doping laboratories [2]. Specific requirements are also described in several Technical Documents or Guidelines in which various topics are highlighted, such as the identification criteria for gas chromatography (GC) and liquid chromatography (LC) coupled to mass spectrometry (MS) techniques (IDCR), measurements and reporting of endogenous androgenic anabolic agents (EAAS) and analytical requirements for the Athlete Biological Passport (ABP).
Resumo:
Illicit drug analyses usually focus on the identification and quantitation of questioned material to support the judicial process. In parallel, more and more laboratories develop physical and chemical profiling methods from a forensic intelligence perspective. The analysis of the large databases resulting from this approach makes it possible not only to draw tactical and operational intelligence, but also to contribute to a strategic overview of drug markets. In Western Switzerland, the chemical analysis of illicit drug seizures is centralised in a laboratory hosted by the University of Lausanne. Over more than 8 years, this laboratory has analysed 5875 cocaine and 2728 heroin specimens, coming from 1138 and 614 seizures, respectively, operated by police and border guards or customs. Chemical (major and minor alkaloids, purity, cutting agents, chemical class), physical (packaging and appearance) and circumstantial (criminal case number, mass of drug seized, date and place of seizure) information is collated in a dedicated database for each specimen. The study capitalises on this extended database and defines several indicators to characterise the structure of drug markets, to follow up on their evolution and to compare the cocaine and heroin markets. Relational, spatial, temporal and quantitative analyses of the data reveal the emergence and importance of distribution networks. They make it possible to evaluate the cross-jurisdictional character of drug trafficking and the observation time of drug batches, as well as the quantity of drugs entering the market every year. Results highlight the stable nature of drug markets over the years despite the very dynamic flows of distribution and consumption. This research work illustrates how the systematic analysis of forensic data may elicit knowledge on criminal activities at a strategic level.
In combination with information from other sources, such knowledge can help to devise intelligence-based preventive and repressive measures and to discuss the impact of countermeasures.