898 results for Mapping documentary
Abstract:
Flood extent mapping is a basic tool for flood damage assessment. It can be carried out with digital classification techniques applied to satellite imagery, including data recorded by both radar and optical sensors. However, converting these data into the information we need is not a straightforward task. One of the great challenges in data interpretation is separating permanent water bodies from flooded regions, including both fully inundated areas and 'wet' areas where trees and houses are partly covered with water. This paper adopts a decision fusion technique to combine the mapping results from radar data with NDVI data derived from optical data. An improved capacity to distinguish permanent or semi-permanent water bodies from flood-inundated areas has been achieved. The software tools MultiSpec and Matlab were used.
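The abstract does not specify the fusion rule itself; as a minimal sketch, assuming a simple NDVI threshold separates open water from vegetated 'wet' pixels inside the radar-derived water mask (the threshold and all band values below are hypothetical):

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index from optical bands."""
    return (nir - red) / (nir + red + 1e-9)

def fuse(radar_water, ndvi_map, ndvi_thresh=0.1):
    """Hypothetical decision fusion: radar supplies the water mask,
    NDVI separates open water from vegetated 'wet' pixels."""
    open_water = radar_water & (ndvi_map < ndvi_thresh)
    wet = radar_water & (ndvi_map >= ndvi_thresh)
    return open_water, wet

# toy reflectances for three pixels: wet vegetation, open water, dry land
nir = np.array([0.40, 0.02, 0.50])
red = np.array([0.10, 0.05, 0.10])
radar_water = np.array([True, True, False])
open_water, wet = fuse(radar_water, ndvi(nir, red))
```

Any real fusion scheme would also have to handle radar speckle and co-registration error, which this sketch ignores.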
Abstract:
The aim of the studies was to improve the diagnostic capability of electrocardiography (ECG) in detecting myocardial ischemic injury, with the future goal of an automatic screening and monitoring method for ischemic heart disease. The method of choice was body surface potential mapping (BSPM), containing numerous leads, with the intention of finding the optimal recording sites and optimal ECG variables for ischemia and myocardial infarction (MI) diagnostics. The studies included 144 patients with prior MI, 79 patients with evolving ischemia, 42 patients with left ventricular hypertrophy (LVH), and 84 healthy controls. Study I examined the depolarization wave in prior MI with respect to MI location. Studies II-V examined the depolarization and repolarization waves in prior MI detection with respect to the Minnesota code and Q-wave status, and Study V also with respect to MI location. In Study VI the depolarization and repolarization variables were examined in 79 patients in the face of evolving myocardial ischemia and ischemic injury. When analyzed from a single lead at any recording site, the results revealed the superiority of the repolarization variables over the depolarization variables and over the conventional 12-lead ECG methods, both in the detection of prior MI and of evolving ischemic injury. The QT integral, covering both depolarization and repolarization, appeared insensitive to Q-wave status, the time elapsed since MI, and the MI or ischemia location. In the face of evolving ischemic injury, the performance of the QT integral was not hampered even by underlying LVH. The examined depolarization and repolarization variables were effective when recorded at a single site, in contrast to the conventional 12-lead ECG criteria. The inverse spatial correlation of the depolarization and repolarization waves in myocardial ischemia and injury could be reduced to the QT integral variable recorded at a single site on the left flank.
In conclusion, the QT integral variable, detectable in a single lead with an optimal recording site on the left flank, was able to detect prior MI and evolving ischemic injury more effectively than the conventional ECG markers. Recorded from a single lead or a small number of leads, the QT integral offers potential for automated screening for ischemic heart disease, monitoring of acute ischemia, guidance of therapeutic decisions, and risk stratification.
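As an illustration of the QT integral variable, a minimal single-lead sketch, assuming the QRS onset and T-wave end have already been delineated (the delineation itself is the hard part and is not shown; sampling rate and signal values are hypothetical):

```python
import numpy as np

def qt_integral(lead, fs, qrs_onset_s, t_end_s):
    """Time integral of one ECG lead from QRS onset to T-wave end (mV*s),
    covering both the depolarization (QRS) and repolarization (ST-T)
    phases, as the abstract describes.
    """
    i0 = int(round(qrs_onset_s * fs))
    i1 = int(round(t_end_s * fs))
    return lead[i0:i1].sum() / fs  # simple Riemann sum

# a flat 1 mV segment of 0.4 s integrates to 0.4 mV*s
fs = 1000
lead = np.ones(fs)
area = qt_integral(lead, fs, 0.0, 0.4)
```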
Abstract:
Mapping the shear wave velocity profile is an important part of seismic hazard and microzonation studies. The shear wave velocity of soil in the city of Bangalore was mapped using the Multichannel Analysis of Surface Waves (MASW) technique. An empirical relationship was found between the corrected Standard Penetration Test (SPT) N value ((N1)60cs) and the measured shear wave velocity (Vs). The survey points were selected so that the results represent the entire Bangalore region, covering an area of 220 km2. Fifty-eight 1-D and twenty 2-D MASW surveys were performed and their velocity profiles determined. The average shear wave velocity of Bangalore soils was evaluated for depths of 5 m, 10 m, 15 m, 20 m, 25 m and 30 m. Sub-soil classification for seismic local site effect evaluation was carried out based on the average shear wave velocity to 30-m depth (Vs30), using the National Earthquake Hazards Reduction Program (NEHRP) and International Building Code (IBC) classifications. The mapping clearly indicates that the soil depths obtained from MASW closely match the soil layers identified in SPT boreholes. Estimating local site effects for an earthquake requires knowledge of the dynamic properties of soil, usually expressed in terms of shear wave velocity. Hence, to make use of the abundant SPT data available from many geotechnical projects in Bangalore, an attempt was made to develop a relationship between Vs (m/s) and (N1)60cs. The shear wave velocity measured at 38 locations close to SPT boreholes was used to generate a correlation between the corrected N values and shear wave velocity. A power-fit model was developed with a coefficient of determination (R2) of 0.84. This relationship between shear wave velocity and corrected SPT N values agrees well with the Japan Road Association equations.
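The power-fit correlation Vs = a * N^b can be reproduced in outline as follows; the data points below are illustrative stand-ins, not the 38 measured MASW/SPT pairs from the study:

```python
import numpy as np

# Illustrative (N1)60cs values and measured Vs (m/s); the study's
# 38 co-located MASW/SPT pairs are not reproduced here.
n_vals = np.array([5.0, 10.0, 20.0, 40.0, 60.0])
vs = np.array([120.0, 165.0, 230.0, 320.0, 390.0])

# The power model Vs = a * N^b is linear in log-log space:
# ln(Vs) = ln(a) + b * ln(N)
b, log_a = np.polyfit(np.log(n_vals), np.log(vs), 1)
a = np.exp(log_a)

# coefficient of determination of the fit, computed in log space
resid = np.log(vs) - (log_a + b * np.log(n_vals))
r2 = 1.0 - (resid ** 2).sum() / ((np.log(vs) - np.log(vs).mean()) ** 2).sum()
```

Fitting in log-log space is the standard trick for power models: an ordinary linear least-squares fit of ln(Vs) against ln(N) recovers the exponent b as the slope and ln(a) as the intercept.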
Abstract:
In this paper we focus on the challenging problem of place categorization and semantic mapping on a robot without environment-specific training. Motivated by their ongoing success in various visual recognition tasks, we build our system upon a state-of-the-art convolutional network. We overcome its closed-set limitations by complementing the network with a series of one-vs-all classifiers that can learn to recognize new semantic classes online. Prior domain knowledge is incorporated by embedding the classification system into a Bayesian filter framework that also ensures temporal coherence. We evaluate the classification accuracy of the system on a robot that maps a variety of places on our campus in real time. We show how semantic information can boost robotic object detection performance and how the semantic map can be used to modulate the robot's behaviour during navigation tasks. The system is made available to the community as a ROS module.
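The Bayesian filter framework for temporal coherence can be sketched as a discrete Bayes filter over place classes; the transition matrix and classifier likelihoods below are hypothetical, not the paper's learned values:

```python
import numpy as np

def bayes_filter_update(belief, likelihood, transition):
    """One discrete Bayes filter step over K place classes.

    belief:     prior class probabilities, shape (K,)
    likelihood: per-class measurement scores from the classifier, shape (K,)
    transition: transition[i, j] = P(class_t = j | class_{t-1} = i)
    """
    predicted = belief @ transition      # prediction with a "sticky" prior
    posterior = predicted * likelihood   # measurement update
    return posterior / posterior.sum()

# two classes, e.g. "office" vs "corridor"; values are made up
T = np.array([[0.9, 0.1],
              [0.1, 0.9]])
belief = np.array([0.5, 0.5])
for _ in range(3):  # three consistent observations favouring class 0
    belief = bayes_filter_update(belief, np.array([0.8, 0.2]), T)
```

A near-diagonal transition matrix encodes the prior that places rarely change between consecutive frames, which smooths out isolated misclassifications.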
Abstract:
Social media play a prominent role in mediating issues of public concern, not only providing the stage on which public debates play out but also shaping their topics and dynamics. Building on and extending existing approaches to both issue mapping and social media analysis, this article explores ways of accounting for popular media practices and the special case of ‘born digital’ sociocultural controversies. We present a case study of the GamerGate controversy with a particular focus on a spike in activity associated with a 2015 Law and Order: SVU episode about gender-based violence and harassment in games culture that was widely interpreted as being based on events associated with GamerGate. The case highlights the importance and challenges of accounting for the cultural dynamics of digital media within and across platforms.
Abstract:
This paper addresses the challenges of flood mapping using multispectral images. Quantitative flood mapping is critical for flood damage assessment and management. Remote sensing images obtained from various satellite or airborne sensors provide valuable data for this application, from which information on the extent of a flood can be extracted. However, the great challenge in data interpretation is to achieve more reliable flood extent mapping, including both the fully inundated areas and the 'wet' areas where trees and houses are partly covered by water. This is a typical combined pure-pixel and mixed-pixel problem. In this paper, a recently developed extended Support Vector Machine method for spectral unmixing has been applied to generate an integrated map showing both pure pixels (fully inundated areas) and mixed pixels (trees and houses partly covered by water). The outputs were compared with the conventional mean-based linear spectral mixture model, and better performance was demonstrated on a subset of Landsat ETM+ data recorded at the Daly River Basin, NT, Australia, on 3 March 2008, after a flood event.
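For contrast with the extended SVM approach, the conventional linear spectral mixture model that the outputs were compared against can be sketched as a least-squares inversion with a sum-to-one constraint (the endmember spectra below are hypothetical, and non-negativity is not enforced in this minimal version):

```python
import numpy as np

def unmix(pixel, endmembers):
    """Linear spectral mixture model: pixel = endmembers @ fractions.

    endmembers: (bands, classes) matrix of pure-class spectra.
    The sum-to-one constraint is appended as an extra equation;
    non-negativity is not enforced in this minimal sketch.
    """
    bands, k = endmembers.shape
    A = np.vstack([endmembers, np.ones((1, k))])
    b = np.append(pixel, 1.0)
    fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
    return fractions

# hypothetical two-band spectra: columns are water and vegetation
E = np.array([[0.10, 0.30],
              [0.05, 0.60]])
mixed = 0.5 * E[:, 0] + 0.5 * E[:, 1]  # a 50/50 mixed pixel
fractions = unmix(mixed, E)
```

The recovered fractions give the sub-pixel abundance of each class, which is how a 'wet' mixed pixel can be reported as, say, half water and half vegetation rather than forced into a single label.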
Abstract:
The most difficult operation in flood inundation mapping using optical flood images is mapping the 'wet' areas where trees and houses are partly covered by water. This can be regarded as a typical instance of the mixed-pixel problem. A number of automatic information-extracting image classification algorithms have been developed over the years for flood mapping using optical remote sensing images, most of which label each pixel as a single class. However, they often fail to generate reliable flood inundation maps because of the presence of mixed pixels in the images. To solve this problem, spectral unmixing methods have been developed. In this thesis, methods for selecting endmembers and for modelling the primary classes, the two most important issues in spectral unmixing, are investigated. We conduct comparative studies of three typical spectral unmixing algorithms: Partial Constrained Linear Spectral Unmixing, Multiple Endmember Selection Mixture Analysis and spectral unmixing using the Extended Support Vector Machine method. They are analysed and assessed by error analysis in flood mapping using MODIS, Landsat and WorldView-2 images. The conventional root mean square error assessment is applied to obtain errors for the estimated fractions of each primary class. Moreover, a newly developed Fuzzy Error Matrix is used to obtain a clear picture of error distributions at the pixel level. This thesis shows that the Extended Support Vector Machine method is able to provide a more reliable estimation of fractional abundances and allows the use of a complete set of training samples to model a defined pure class. Furthermore, it can be applied to the analysis of both pure and mixed pixels to provide integrated hard-soft classification results.
Our research also identifies and explores a serious drawback of endmember selection in current spectral unmixing methods, which apply a fixed set of endmember classes or pure classes to the mixture analysis of every pixel in an entire image. Since it is not accurate to assume that every pixel in an image contains all endmember classes, these methods usually cause an over-estimation of the fractional abundances in a particular pixel. In this thesis, a subset of adaptive endmembers for every pixel is derived using the proposed methods to form an endmember index matrix. The experimental results show that using pixel-dependent endmembers in unmixing significantly improves performance.
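The per-class root mean square error assessment of estimated fractions can be sketched as follows (the Fuzzy Error Matrix, which resolves errors at the pixel level, is more detailed and not reproduced here; all values are hypothetical):

```python
import numpy as np

def fraction_rmse(estimated, reference):
    """Per-class RMSE between estimated and reference fractional
    abundances; both arrays have shape (pixels, classes)."""
    return np.sqrt(np.mean((estimated - reference) ** 2, axis=0))

# two pixels, two classes (e.g. water, vegetation); values made up
est = np.array([[0.5, 0.5],
                [0.0, 1.0]])
ref = np.array([[0.6, 0.4],
                [0.0, 1.0]])
errors = fraction_rmse(est, ref)
```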
Abstract:
Activity systems are the cognitively linked groups of activities that consumers carry out as part of their daily life. The aim of this paper is to investigate how consumers experience value through their activities, and how services fit into the context of activity systems. A new technique for illustrating consumers' activity systems is introduced. The technique consists of identifying a consumer's activities through an interview, then quantitatively measuring how the consumer evaluates the identified activities on three dimensions: experienced benefits, sacrifices and frequency. This information is used to create a graphical representation of the consumer's activity system, an "activityscape map". Activity systems work as infrastructure for the individual consumer's value experience. The paper contributes to the value and service literature, where there are currently no clearly described standardized techniques for visually mapping individual consumer activity. Existing approaches are service- or relationship-focused, and are mostly used to identify activities, not to understand them. The activityscape representation provides an overview of consumers' perceptions of their activity patterns and the position of one or several services in this pattern. A comparison of different consumers' activityscapes shows the differences between consumers' activity structures and provides insight into how services are used to create value within them. The paper is conceptual; an empirical illustration is used to indicate the potential of further empirical studies. The technique can be used by businesses to understand contexts for service use, which may uncover potential for business reconfiguration and customer segmentation.
Abstract:
A series of 6,11-dihydro-11-oxodibenz[b,e]oxepin-2-acetic acids (DOAA), which are known anti-inflammatory agents, were studied. The geometries of some of the molecules, obtained from X-ray crystallography, were used in the calculations as such, while the geometries of their derivatives were obtained by local, partial geometry optimization around the sites of substitution employing the AM1 method, keeping the remaining parts of the geometries the same as in the parent molecules. Molecular electrostatic potential (MEP) mapping was performed for the molecules using optimized hybridization displacement charges (HDC) combined with Löwdin charges, as this charge distribution has been shown earlier to yield near ab initio quality results. A good correlation was found between the MEP values near the oxygen atoms of the hydroxyl groups of the carboxy groups of the molecules and their anti-inflammatory activities. The result is broadly in agreement with the model proposed earlier by other authors regarding the structure-activity relationship for similar molecules.
Abstract:
Gene mapping is a systematic search for genes that affect observable characteristics of an organism. In this thesis we offer computational tools to improve the efficiency of (disease) gene-mapping efforts. In the first part of the thesis we propose an efficient simulation procedure for generating realistic genetic data from isolated populations. Simulated data is useful for evaluating hypothesised gene-mapping study designs and computational analysis tools. As an example of such evaluation, we demonstrate how a population-based study design can be a powerful alternative to traditional family-based designs in association-based gene-mapping projects. In the second part of the thesis we consider the prioritisation of a (typically large) set of putative disease-associated genes acquired from an initial gene-mapping analysis. Prioritisation is necessary to be able to focus on the most promising candidates. We show how to harness current biomedical knowledge for the prioritisation task by integrating various publicly available biological databases into a weighted biological graph. We then demonstrate how to find and evaluate connections between entities, such as genes and diseases, in this unified schema by graph mining techniques. Finally, in the last part of the thesis, we define the concept of a reliable subgraph and the corresponding subgraph extraction problem. Reliable subgraphs concisely describe strong and independent connections between two given vertices in a random graph, and hence they are especially useful for visualising such connections. We propose novel algorithms for extracting reliable subgraphs from large random graphs. The efficiency and scalability of the proposed graph mining methods are backed by extensive experiments on real data. While our application focus is in genetics, the concepts and algorithms can be applied to other domains as well.
We demonstrate this generality by considering coauthor graphs in addition to biological graphs in the experiments.
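The 'strength of connection' between two vertices in a random graph, which reliable subgraphs aim to preserve, can be estimated by naive Monte Carlo sampling; this sketch only illustrates the quantity, not the thesis' extraction algorithms, and the edge list is hypothetical:

```python
import random

def connection_probability(edges, s, t, trials=2000, seed=0):
    """Monte Carlo estimate of P(s and t are connected) in a random
    graph where each edge (u, v, p) exists independently with
    probability p.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        adj = {}
        for u, v, p in edges:  # sample one realisation of the graph
            if rng.random() < p:
                adj.setdefault(u, []).append(v)
                adj.setdefault(v, []).append(u)
        seen, stack = {s}, [s]  # DFS from s over the sampled edges
        while stack:
            for nb in adj.get(stack.pop(), []):
                if nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        hits += t in seen
    return hits / trials

# hypothetical weighted biological graph
edges = [("gene", "protein", 0.9),
         ("protein", "disease", 0.8),
         ("gene", "disease", 0.3)]
p_conn = connection_probability(edges, "gene", "disease")
# exact value here: 1 - (1 - 0.3) * (1 - 0.9 * 0.8) = 0.804
```

A reliable subgraph keeps a small set of edges that retains most of this connection probability, which is what makes it a compact visual summary of how two entities are linked.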
Abstract:
A non-occluded baculovirus, OBV-KI, has been isolated from the insect pest Oryctes rhinoceros. The viral genome is estimated to be 123 kb, with a G + C content of 43 mol% and no detectable methylated bases. A restriction map of the OBV-KI genome for BamHI, EcoRI, HindIII, PstI, SalI and XbaI has been constructed.
Abstract:
Road transport and infrastructure are of fundamental importance to the developing world. Poor quality and inadequate coverage of roads, lack of maintenance operations and outdated road maps continue to hinder economic and social development in developing countries. This thesis studies the present state of road infrastructure and its mapping in the Taita Hills, south-east Kenya. The study is part of the TAITA project of the Department of Geography, University of Helsinki. The road infrastructure of the study area is examined with remote sensing and GIS-based methodology. As the principal dataset, true-colour airborne digital camera data from 2004 was used to generate an aerial image mosaic of the study area. Auxiliary data include SPOT satellite imagery from 2003, field spectrometry data of road surfaces and relevant literature. Road infrastructure characteristics are interpreted from three test sites using pixel-based supervised classification, object-oriented supervised classification and visual interpretation. The road infrastructure of the test sites is also interpreted visually from a SPOT image. Road centrelines are then extracted from the object-oriented classification results with an automatic vectorisation process. The road infrastructure of the entire image mosaic is mapped by applying the most appropriate of the assessed data and techniques. The spectral characteristics and reflectance of various road surfaces are considered using the acquired field spectra and relevant literature, and the results are compared with the tested road mapping methods. This study concludes that classification and extraction of roads remains a difficult task, and that the accuracy of the results is inadequate regardless of the high spatial resolution of the image mosaic used in this thesis. Of all the methods tested, visual interpretation is the most straightforward, accurate and valid technique for road mapping.
Certain road surfaces have spectral characteristics and reflectance values similar to other land cover and land use types. This particularly affects digital analysis techniques. Road mapping is made even more complicated by rich vegetation and tree canopy, clouds, shadows, low contrast between roads and their surroundings, and the width of narrow roads relative to the spatial resolution of the imagery used. The results of this thesis may be applied to road infrastructure mapping in developing countries in a more general context, although with certain limits. In particular, unclassified rural roads require updated road mapping schemes to improve road transport possibilities and to assist in the development of the developing world.
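As a minimal illustration of pixel-based supervised classification, and of why spectrally similar surfaces get confused, a nearest-centroid classifier over hypothetical band reflectances (the thesis' actual classifiers are not specified here; class names and values are made up):

```python
import numpy as np

def nearest_centroid_classify(pixels, training):
    """Minimal pixel-based supervised classification: assign each
    pixel to the class whose mean training spectrum is nearest
    (Euclidean distance)."""
    names = list(training)
    centroids = np.array([training[n].mean(axis=0) for n in names])
    dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    return [names[i] for i in dists.argmin(axis=1)]

# hypothetical three-band training spectra per class
training = {
    "road":       np.array([[0.30, 0.28, 0.25], [0.32, 0.30, 0.27]]),
    "vegetation": np.array([[0.05, 0.20, 0.40], [0.06, 0.22, 0.45]]),
}
labels = nearest_centroid_classify(np.array([[0.31, 0.29, 0.26]]), training)
```

If a bare-soil class with a centroid close to the road centroid were added, pixels of either surface would fall near both centroids, which is exactly the spectral confusion the thesis reports for digital techniques.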