916 results for accuracy analysis
Abstract:
X-Ray Spectrom. 2003; 32: 396–401
Abstract:
Endmember extraction (EE) is a fundamental and crucial task in hyperspectral unmixing. Among these methods, vertex component analysis (VCA) has become a very popular and useful tool for unmixing hyperspectral data. VCA is a geometry-based method that extracts endmember signatures from large hyperspectral datasets without using any a priori knowledge about the constituent spectra. Many hyperspectral imagery applications require a response in real time or near real time. To meet this requirement, this paper proposes a parallel implementation of VCA developed for graphics processing units (GPUs). The complexity and accuracy of the proposed parallel VCA implementation are examined using both simulated and real hyperspectral datasets.
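For readers unfamiliar with the method, a minimal single-threaded NumPy sketch of a simplified VCA-style projection loop is given below; it is an illustration only (the function name `vca_simplified`, the array shapes, and the initialization are assumptions, not the paper's GPU code):

```python
import numpy as np

def vca_simplified(Y, p, seed=0):
    """Simplified VCA-style endmember extraction (illustrative sketch).

    Y : (d, n) array of n pixel spectra, already reduced to d >= p dimensions.
    p : number of endmembers to extract.
    Returns the indices of the selected (purest) pixels.
    """
    rng = np.random.default_rng(seed)
    d, n = Y.shape
    A = np.zeros((d, p))            # columns hold the endmembers found so far
    A[0, 0] = 1.0                   # arbitrary non-zero column to start the loop
    indices = np.zeros(p, dtype=int)

    for i in range(p):
        w = rng.standard_normal(d)              # random direction
        # f is orthogonal to the subspace spanned by the endmembers found so far
        f = w - A @ np.linalg.pinv(A) @ w
        f /= np.linalg.norm(f)
        v = f @ Y                               # project every pixel onto f
        indices[i] = int(np.argmax(np.abs(v)))  # the extreme projection is a vertex
        A[:, i] = Y[:, indices[i]]
    return indices
```

A GPU implementation such as the one the abstract describes would parallelize the per-pixel projection `f @ Y` across threads; the sketch above only conveys the sequential logic.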
Abstract:
IEEE International Conference on Communications (IEEE ICC 2015) - Communications QoS, Reliability and Modeling, 8-12 June 2015, London, United Kingdom.
Abstract:
High-content analysis has revolutionized cancer drug discovery by identifying substances that alter the phenotype of a cell in ways that prevent tumor growth and metastasis. The high-resolution biofluorescence images from these assays allow precise quantitative measurements, enabling the distinction of small molecules of a host cell from a tumor. In this work, we are particularly interested in the application of deep neural networks (DNNs), a cutting-edge machine learning method, to the classification of compounds into chemical mechanisms of action (MOAs). Compound classification has previously been performed using image-based profiling methods, sometimes combined with feature reduction methods such as principal component analysis or factor analysis. In this article, we map the input features of each cell to a particular MOA class without using any treatment-level profiles or feature reduction methods. To the best of our knowledge, this is the first application of DNNs in this domain that leverages single-cell information. Furthermore, we use deep transfer learning (DTL) to alleviate the intensive and computationally demanding effort of searching a DNN's huge parameter space. Results show that, using this approach, we obtain a 30% speedup and a 2% accuracy improvement.
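As a loose, hedged illustration of the deep transfer learning idea (the ResNet backbone, the frozen-layer scheme, and `NUM_MOA_CLASSES` are assumptions, not the authors' actual architecture or data), fine-tuning a pretrained CNN on single-cell image crops could look roughly like this in PyTorch:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_MOA_CLASSES = 12          # hypothetical number of mechanism-of-action classes

# Start from a network pretrained on a generic image corpus (transfer learning).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor so only the new head is trained,
# which is what makes the hyper-parameter search cheaper.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head with one sized for the MOA classes.
model.fc = nn.Linear(model.fc.in_features, NUM_MOA_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimization step on a batch of single-cell image crops."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```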
Abstract:
Dissertation presented as a partial requirement for obtaining the degree of Master in Statistics and Information Management
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Dissertation submitted in fulfillment of the requirements for the Degree of Master in Biomedical Engineering
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, to obtain the degree of Master in Electrical and Computer Engineering
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
In recent years, volunteers have contributed massively to what is nowadays known as Volunteered Geographic Information. This huge amount of data may hide a vast geographical richness, and research is therefore needed to explore its potential and apply it to the solution of real-world problems. In this study we conduct an exploratory analysis of data from the OpenStreetMap initiative. Using the Corine Land Cover database as reference and continental Portugal as the study area, we establish a possible correspondence between the two classification nomenclatures, evaluate the quality of the classification of OpenStreetMap polygon features against Corine Land Cover level-1 classes, and analyze the spatial distribution of OpenStreetMap classes over continental Portugal. A global classification accuracy of around 76% and interesting coverage-area values are remarkable and promising results that encourage future research on this topic.
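As a small, hedged sketch (the label arrays below are invented for illustration), a global classification accuracy of this kind can be computed from a confusion matrix between OSM-derived labels and the Corine Land Cover reference:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

# Hypothetical level-1 CLC class codes for the same sample of polygons:
# reference labels come from Corine Land Cover, predicted labels from the OSM mapping.
clc_reference = np.array([1, 1, 2, 3, 2, 1, 5, 4, 2, 1])
osm_mapped    = np.array([1, 2, 2, 3, 2, 1, 5, 4, 1, 1])

cm = confusion_matrix(clc_reference, osm_mapped)
overall_accuracy = accuracy_score(clc_reference, osm_mapped)  # fraction of agreeing samples

print(cm)
print(f"Global classification accuracy: {overall_accuracy:.1%}")
```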
Abstract:
Until recently, hardly anyone could have predicted this course of GIS development. GIS is moving from the desktop to the cloud. Web 2.0 enabled people to input data into the web, and these data are becoming increasingly geolocated. Large amounts of data formed what is now called "Big Data", and scientists still do not fully know how to deal with it. Different data mining tools are used to try to extract useful information from this Big Data. In our study, we also deal with one part of these data: User Generated Geographic Content (UGGC). The Panoramio initiative allows people to upload photos and describe them with tags. These photos are geolocated, which means that they have an exact location on the Earth's surface according to a certain spatial reference system. Using data mining tools, we try to answer whether it is possible to extract land use information from Panoramio photo tags, and to what extent this information can be accurate. Finally, we compared different data mining methods to determine which one performs best on this kind of data, which is text. Our answers are quite encouraging: with more than 70% accuracy, we showed that extracting land use information is possible to some extent, and we found the Memory Based Reasoning (MBR) method to be the most suitable for this kind of data in all cases.
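Since Memory Based Reasoning is essentially a nearest-neighbour approach, a hedged sketch of classifying photo tags into land use classes (the tags, labels, and class names below are invented for illustration) could combine a TF-IDF representation with a k-nearest-neighbour classifier:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Invented examples: each string is the tag list of one geolocated photo.
tags   = ["beach sand sea", "office tower downtown", "vineyard grapes farm",
          "harbour boats sea", "apartment block street", "wheat field tractor"]
labels = ["coastal", "urban", "agricultural", "coastal", "urban", "agricultural"]

# TF-IDF turns tag strings into vectors; kNN is a memory-based (instance-based) learner.
mbr_like = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=3))
mbr_like.fit(tags, labels)

print(mbr_like.predict(["boats pier sea"]))   # expected: 'coastal'
```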
Abstract:
The rapid growth of big cities has been observed since the 1950s, when the majority of the world's population turned to living in urban areas rather than villages, seeking better job opportunities and a higher quality of services and lifestyle. This demographic transition from rural to urban is expected to continue increasing. Governments, especially in less developed countries, will face more challenges in different sectors, making it essential to understand the spatial pattern of growth for effective urban planning. This study aimed to detect, analyse and model urban growth in the Greater Cairo Region (GCR), one of the fastest-growing mega cities in the world, using remote sensing data. Knowing the current and estimated urbanization situation in GCR will help decision makers in Egypt adjust their plans and develop new ones. These plans should focus on resource reallocation to overcome the problems arising in the future and to achieve sustainable development of urban areas, especially given the high percentage of illegal settlements that emerged in recent decades. The study focused on a period of 30 years, from 1984 to 2014, and the major transitions to urban were modelled to predict future scenarios for 2025. Three satellite images with different time stamps (1984, 2003 and 2014) were classified using a Support Vector Machine (SVM) classifier, and the land cover changes were then detected by applying a high-level mapping technique. The results were subsequently analyzed to obtain more accurate estimates of future urban growth in 2025 using the Land Change Modeler (LCM) embedded in the IDRISI software. Moreover, the spatial and temporal urban growth patterns were analyzed using statistical metrics computed with the FRAGSTATS software. The study resulted in overall classification accuracies of 96%, 97.3% and 96.3% for the 1984, 2003 and 2014 maps, respectively. Between 1984 and 2003, 19 179 hectares of vegetation and 21 417 hectares of desert changed to urban, while from 2003 to 2014 the transitions to urban from the same two land cover classes were 16 486 and 31 045 hectares, respectively. The model results indicated that 14% of the vegetation and 4% of the desert in 2014 will turn into urban by 2025, representing 16 512 and 24 687 hectares, respectively.
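As a hedged, minimal sketch of the classification step only (the band values and class labels below are synthetic stand-ins, not the study's data or tuning), an SVM classifier can be trained on labelled pixels and scored for overall accuracy:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for training pixels: rows are pixels, columns are spectral bands.
rng = np.random.default_rng(42)
X = rng.random((600, 6))                      # e.g. six reflective bands scaled to [0, 1]
y = rng.integers(0, 3, size=600)              # 0 = urban, 1 = vegetation, 2 = desert (illustrative)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

svm = SVC(kernel="rbf", C=10, gamma="scale")  # common choices for pixel classification
svm.fit(X_train, y_train)

print(f"Overall accuracy: {accuracy_score(y_test, svm.predict(X_test)):.1%}")
```

In the study itself, the labelled pixels would come from reference polygons digitized on each image date, and the same accuracy measure yields the reported 96-97% figures.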
Abstract:
Radiometric changes observed in multi-temporal optical satellite images play an important role in efforts to characterize selective-logging areas. The aim of this study was to analyze the multi-temporal behavior of spectral-mixture responses in satellite images of simulated selective-logging areas in the Amazon forest, considering red/near-infrared spectral relationships. Forest edges were used to infer the selective-logging infrastructure, using differently oriented edges in the transition between forest and deforested areas in the satellite images. TM/Landsat-5 images acquired on three dates with different solar-illumination geometries were used in this analysis. The method assumed that the radiometric responses of forest with selective-logging effects and of forest edges in contact with recent clear-cuts are related. The spatial-frequency attributes of the red/near-infrared bands for edge areas were analyzed. Analysis of the dispersion diagrams showed two groups of pixels that represent selective-logging areas. The size and radiometric-distance attributes of these two groups were related to the solar-elevation angle. The results suggest that detection of timber exploitation areas is limited by the complexity of the selective-logging radiometric response. Thus, the accuracy of detecting selective logging can be influenced by the solar-elevation angle at the time of image acquisition. We conclude that images with lower solar-elevation angles are less reliable for the delineation of selective logging.
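As a hedged aside (the endmember spectra below are invented), the spectral-mixture response of an edge pixel in the red and near-infrared bands can be modelled as a linear combination of forest and deforested endmembers, with the fractions recovered by least squares:

```python
import numpy as np

# Invented red/NIR reflectance endmembers: columns are [forest, deforested].
E = np.array([[0.03, 0.15],    # red band
              [0.45, 0.25]])   # near-infrared band

pixel = np.array([0.08, 0.38])  # observed red/NIR reflectance of a mixed edge pixel

# Unconstrained least-squares fractions (a full model would add sum-to-one
# and non-negativity constraints).
fractions, *_ = np.linalg.lstsq(E, pixel, rcond=None)
print(dict(forest=round(fractions[0], 2), deforested=round(fractions[1], 2)))
```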
Abstract:
Background: The diagnostic accuracy of 64-slice MDCT in comparison with IVUS has been poorly described and is mainly restricted to reports analyzing segments with documented atherosclerotic plaques. Objectives: We compared 64-slice multidetector computed tomography (MDCT) with gray-scale intravascular ultrasound (IVUS) for the evaluation of coronary lumen dimensions in the context of a comprehensive analysis, including segments with absent or mild disease. Methods: The 64-slice MDCT was performed within 72 h before the IVUS imaging, which was obtained for at least one coronary artery, regardless of the presence of luminal stenosis at angiography. A total of 21 patients were included, with 70 imaged vessels (total length 114.6 ± 38.3 mm per patient). A coronary plaque was diagnosed in segments with a plaque burden > 40%. Results: At the patient, vessel, and segment levels, the average lumen area, minimal lumen area, and minimal lumen diameter were highly correlated between IVUS and 64-slice MDCT (p < 0.01). However, 64-slice MDCT tended to underestimate the lumen size, with a relatively wide dispersion of the differences. The comparison between 64-slice MDCT and IVUS lumen measurements was not substantially affected by the presence or absence of an underlying plaque. In addition, 64-slice MDCT showed good global accuracy for the detection of IVUS parameters associated with flow-limiting lesions. Conclusions: In a comprehensive, multi-territory, and whole-artery analysis, the assessment of the coronary lumen by 64-slice MDCT compared with coronary IVUS showed good overall diagnostic ability, regardless of the presence or absence of underlying atherosclerotic plaques.
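As a rough sketch of this kind of agreement analysis (the paired measurements below are invented, not the study's data), the correlation and the mean difference between the two modalities can be computed as follows:

```python
import numpy as np
from scipy.stats import pearsonr

# Invented paired minimal-lumen-area measurements (mm^2) for the same segments.
ivus = np.array([4.1, 5.3, 6.8, 3.2, 7.5, 5.9, 4.8, 6.1])
mdct = np.array([3.8, 5.0, 6.2, 2.9, 7.1, 5.4, 4.5, 5.6])

r, p_value = pearsonr(ivus, mdct)              # strength of the linear relationship
diff = mdct - ivus
bias = diff.mean()                             # negative bias = MDCT underestimates
spread = 1.96 * diff.std(ddof=1)               # Bland-Altman-style limits of agreement

print(f"r = {r:.2f} (p = {p_value:.3f}); bias = {bias:.2f} mm^2 +/- {spread:.2f}")
```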