118 results for Semantic extraction
in CentAUR: Central Archive University of Reading - UK
Abstract:
In this paper, we introduce a novel high-level visual content descriptor devised for performing semantic-based image classification and retrieval. The work can be treated as an attempt to bridge the so-called "semantic gap". The proposed image feature vector model is fundamentally underpinned by an image labelling framework, called Collaterally Confirmed Labelling (CCL), which combines the collateral knowledge extracted from the collateral texts of the images with state-of-the-art low-level image processing and visual feature extraction techniques to automatically assign linguistic keywords to image regions. Two different high-level image feature vector models are developed based on the CCL labelling results, for the purposes of image data clustering and retrieval respectively. A subset of the Corel image collection has been used for evaluating our proposed method. The experimental results to date indicate that our proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models.
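To make the CCL idea concrete, here is a minimal Python sketch of collaterally confirmed labelling; the prototype vectors, similarity measure and threshold are illustrative assumptions, not the authors' actual feature pipeline.

import numpy as np

# Hypothetical visual prototypes: one low-level feature vector per keyword.
PROTOTYPES = {
    "sky":   np.array([0.1, 0.8, 0.9]),
    "grass": np.array([0.2, 0.9, 0.1]),
    "water": np.array([0.1, 0.5, 0.8]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def label_region(region_features, collateral_keywords, threshold=0.9):
    """Assign a keyword to one image region, or None if unconfirmed."""
    best, best_sim = None, threshold
    for word, proto in PROTOTYPES.items():
        sim = cosine(region_features, proto)
        # Collateral confirmation: a visually plausible keyword is only
        # accepted if it also occurs in the image's collateral text.
        if sim > best_sim and word in collateral_keywords:
            best, best_sim = word, sim
    return best

def image_feature_vector(region_labels):
    """High-level descriptor: keyword histogram over labelled regions."""
    vocab = sorted(PROTOTYPES)
    return np.array([sum(1 for l in region_labels if l == w) for w in vocab])

regions = [np.array([0.15, 0.75, 0.85]), np.array([0.2, 0.85, 0.15])]
labels = [label_region(r, {"sky", "grass"}) for r in regions]
print(labels, image_feature_vector(labels))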
Abstract:
Automatic indexing and retrieval of digital data poses major challenges. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. For a number of years, research has been ongoing in the field of ontological engineering with the aim of using ontologies to add such (meta) knowledge to information. In this paper, we describe the architecture of a system, Dynamic REtrieval Analysis and semantic metadata Management (DREAM), designed to automatically and intelligently index huge repositories of special effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval. The DREAM Demonstrator has been evaluated as deployed in the film post-production phase to support the storage, indexing and retrieval of large data sets of special effects video clips as an exemplar application domain. This paper reports its performance and usability results and highlights the scope for future enhancements of the DREAM architecture, which has proven successful in its first and possibly most challenging proving ground, namely film production, where it is already in routine use within our test-bed partners' creative processes.
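As a rough illustration of ontology-based indexing of the kind DREAM performs (not the DREAM system itself), the following Python sketch tags a clip with concepts from a hypothetical effects ontology and retrieves it through the concept hierarchy rather than by literal keyword; the namespace, properties and class names are all invented.

from rdflib import Graph, Namespace, URIRef, RDF

FX = Namespace("http://example.org/vfx#")  # hypothetical ontology namespace
g = Graph()

# Index a clip against ontology concepts rather than raw keywords.
clip = URIRef("http://example.org/clips/0042")
g.add((clip, RDF.type, FX.SpecialEffectClip))
g.add((clip, FX.depicts, FX.Explosion))
g.add((FX.Explosion, FX.subConceptOf, FX.PyrotechnicEffect))

# Semantic retrieval: ask for clips depicting any pyrotechnic effect,
# following the concept hierarchy instead of matching keyword strings.
q = """
SELECT ?clip WHERE {
    ?clip fx:depicts ?c .
    ?c fx:subConceptOf fx:PyrotechnicEffect .
}
"""
for row in g.query(q, initNs={"fx": FX}):
    print(row.clip)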
Abstract:
In this paper, we introduce a novel high-level visual content descriptor devised for performing semantic-based image classification and retrieval. The work can be treated as an attempt to bridge the so-called "semantic gap". The proposed image feature vector model is fundamentally underpinned by an automatic image labelling framework, called Collaterally Cued Labelling (CCL), which combines the collateral knowledge extracted from the collateral texts accompanying the images with state-of-the-art low-level visual feature extraction techniques to automatically assign textual keywords to image regions. A subset of the Corel image collection was used for evaluating the proposed method. The experimental results indicate that our semantic-level visual content descriptors outperform both conventional visual and textual image feature models.
Abstract:
This paper describes the implementation of a semantic web search engine on conversation-style transcripts. Our choice of data is Hansard, a publicly available conversation-style transcript of parliamentary debates. The current search engine implementation on Hansard is limited to running search queries based on keywords or phrases, and hence lacks the ability to make semantic inferences from user queries. By making use of knowledge such as the relationships between members of parliament, constituencies, terms of office and topics of debates, the search results can be improved in terms of both relevance and coverage. Our contribution is not algorithmic; instead, we describe how we exploit a collection of external data sources, ontologies, semantic web vocabularies and named entity extraction in the analysis of the underlying semantics of user queries, as well as in the semantic enrichment of the search index, thereby improving the quality of results.
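The flavour of this enrichment can be sketched in a few lines of Python; the mini knowledge base and expansion rule below are invented stand-ins for the external data sources and named-entity extraction the paper actually uses.

# Toy knowledge base: facts a real system would pull from external sources.
KB = {
    "Tony Benn": {"constituency": "Chesterfield", "term": "1984-2001"},
}

def enrich_query(query):
    """Expand recognised entity mentions with related terms from the KB."""
    terms = [query]
    for member, facts in KB.items():
        if member.lower() in query.lower():
            # Add the constituency so debates indexed under it also match.
            terms.append(facts["constituency"])
    return terms

print(enrich_query("debates by Tony Benn on coal mining"))
# -> ['debates by Tony Benn on coal mining', 'Chesterfield']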
Abstract:
The storage and processing capacity realised by computing has led to an explosion of data retention. We have now reached the point of information overload and must begin to use computers to process more complex information. In particular, the proposition of the Semantic Web has given structure to this problem, but it has yet to be realised practically. The largest of its problems is that of ontology construction; without a suitable automatic method, most ontologies will have to be encoded by hand. In this paper we discuss the current methods for semi- and fully automatic construction and their current shortcomings. In particular, we pay attention to the application of ontologies to products and the practical application of those ontologies.
Abstract:
Currently, many ontologies are available for addressing different domains. However, it is not always possible to deploy such ontologies to support collaborative working, so that their full potential can be exploited to implement intelligent cooperative applications capable of reasoning over a network of context-specific ontologies. The main problem arises from the fact that, at present, ontologies are created in an isolated way to address specific needs. However, we foresee the need for a network of ontologies which will support the next generation of intelligent applications/devices and the vision of Ambient Intelligence. The main objective of this paper is to motivate the design of a networked ontology (meta-)model which formalises ways of connecting available ontologies so that they are easy to search, to characterise and to maintain. The aim is to make explicit the virtual and implicit network of ontologies serving the Semantic Web.
Abstract:
This investigation examines metal release from freshwater sediment using sequential extraction and single-step cold-acid leaching. The concentrations of Cd, Cr, Cu, Fe, Ni, Pb and Zn released using a standard 3-step sequential extraction (Rauret et al., 1999) are compared to those released using a 0.5 M HCl leach. The results show that the three sediments behave in very different ways when subjected to the same leaching experiments: the cold-acid extraction appears to remove higher relative concentrations of metals from the iron-rich sediment than from the other two sediments. Cold-acid extraction also appears to be more effective at removing metals from sediments with crystalline iron oxides than the "reducible" step of the sequential extraction. The results show that a single-step acid leach can be just as effective as sequential extraction at removing metals from sediment, while being a great deal less time-consuming.
Abstract:
The transmissible spongiform encephalopathies (TSEs) are caused by infectious agents whose structures have not been fully characterized but include abnormal forms of the host protein PrP, designated PrPSc, which are deposited in infected tissues. The transmission routes of scrapie and chronic wasting disease (CWD) seem to include environmental spread in their epidemiology, yet the fate of TSE agents in the environment is poorly understood. There are concerns that, for example, buried carcasses may remain a potential reservoir of infectivity for many years. Experimental determination of this environmental fate requires methods for assessing binding/elution of TSE infectivity, or its surrogate marker PrPSc, to and from materials with which it might interact. We report a method using Sarkosyl for the extraction of murine PrPSc, and its application to soils containing recombinant ovine PrP (recPrP). Elution properties suggest that PrP binds strongly to one or more soil components. Elution from a clay soil also required proteinase K digestion, suggesting that in the clay soil binding occurs via the N-terminus of PrP to a component that is absent from the sandy soils tested.
Abstract:
Procedures for routine analysis of soil phosphorus (P) have been used for assessment of P status, distribution and P losses from cultivated mineral soils. No similar studies have been carried out on wetland peat soils. The objective was to compare the extraction efficiency of ammonium lactate (P-AL), sodium bicarbonate (P-Olsen) and double calcium lactate (P-DCaL), and the P distribution in the soil profile of wetland peat soils. For this purpose, 34 samples of the 0-30, 30-60 and 60-90 cm layers were collected from peat soils in Germany, Israel, Poland, Slovenia, Sweden and the United Kingdom and analysed for P. Mean soil pH (CaCl2, 0.01 M) was 5.84, 5.51 and 5.47 in the 0-30, 30-60 and 60-90 cm layers, respectively. P-DCaL was consistently about half the magnitude of either P-AL or P-Olsen. The efficiency of P extraction increased in the order P-DCaL < P-AL ≤ P-Olsen, with corresponding means (mg kg⁻¹) for all soils (34 samples) of 15.32, 33.49 and 34.27 in 0-30 cm; 8.87, 17.30 and 21.46 in 30-60 cm; and 5.69, 14.00 and 21.40 in 60-90 cm. The means decreased with depth. When examining the soils for each country separately, P-Olsen was relatively evenly distributed in the German, UK and Slovenian soils. P-Olsen was linearly correlated (r = 0.594, P = 0.0002) with pH, whereas the three P tests (except P-Olsen vs P-DCaL) correlated significantly with each other (P = 0.0178-0.0001). The strongest correlation (r = 0.617, P = 0.0001) was recorded for P-AL vs P-DCaL, and the two methods were inter-convertible using the regression equation: P-AL = -22.593 + 5.353 pH + 1.423 P-DCaL, R² = 0.550.
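The reported regression makes the two tests inter-convertible; wrapped as a small Python helper, with coefficients taken directly from the abstract (R² = 0.55, so estimates are approximate):

def p_al_from_dcal(ph, p_dcal):
    """Estimate P-AL (mg kg^-1) from soil pH and P-DCaL using the reported fit."""
    # Coefficients copied from the regression in the abstract (R^2 = 0.550).
    return -22.593 + 5.353 * ph + 1.423 * p_dcal

# Example with the study's 0-30 cm means (pH 5.84, P-DCaL 15.32 mg kg^-1).
print(round(p_al_from_dcal(5.84, 15.32), 2))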
Extraction of tidal channel networks from aerial photographs alone and combined with laser altimetry
Abstract:
Tidal channel networks play an important role in the intertidal zone, exerting substantial control over the hydrodynamics and sediment transport of the region and hence over the evolution of the salt marshes and tidal flats. The study of the morphodynamics of tidal channels is currently an active area of research, and a number of theories have been proposed which require for their validation the measurement of channels over extensive areas. Remotely sensed data provide a suitable means for such channel mapping. The paper describes a technique that may be adapted to extract tidal channels from either aerial photographs or LiDAR data separately, or from both types of data used together in a fusion approach. Application of the technique to channel extraction from LiDAR data has been described previously. However, aerial photographs of intertidal zones are much more commonly available than LiDAR data, and most LiDAR flights now involve the acquisition of multispectral images to complement the LiDAR data. In view of this, the paper investigates the use of multispectral data for semi-automatic identification of tidal channels, firstly from aerial photographs or linescanner data alone, and secondly from fused linescanner and LiDAR data sets. A multi-level, knowledge-based approach is employed. The algorithm based on aerial photography can achieve a useful channel extraction, though it may fail to detect some of the smaller channels, partly because the spectral response of parts of the non-channel areas may be similar to that of the channels. The algorithm for channel extraction from fused LiDAR and spectral data gives increased accuracy, though only slightly higher than that obtained using LiDAR data alone. The results illustrate the difficulty of developing a fully automated method, and justify the semi-automatic approach adopted.
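A highly simplified Python sketch of the fusion idea, in which a spectral channel score is kept only where the LiDAR surface is also locally low-lying; the arrays, window size and thresholds are illustrative assumptions, not the paper's knowledge-based algorithm.

import numpy as np
from scipy.ndimage import uniform_filter

def fuse_channel_masks(spectral_score, dem, spec_thresh=0.5, window=15):
    """Combine a per-pixel spectral channel score with LiDAR elevations."""
    spectral_mask = spectral_score > spec_thresh
    # Channels sit below the local marsh surface: compare each pixel to a
    # smoothed neighbourhood elevation.
    local_mean = uniform_filter(dem, size=window)
    lidar_mask = dem < local_mean
    return spectral_mask & lidar_mask

rng = np.random.default_rng(0)
score = rng.random((64, 64))          # stand-in spectral classification
dem = rng.normal(1.0, 0.1, (64, 64))  # stand-in LiDAR elevations (m)
print(fuse_channel_masks(score, dem).sum(), "candidate channel pixels")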
Abstract:
The study of the morphodynamics of tidal channel networks is important because of their role in tidal propagation and the evolution of salt marshes and tidal flats. Channel dimensions range from tens of metres wide and metres deep near the low water mark to only 20-30 cm wide and 20 cm deep for the smallest channels on the marshes. The conventional method of measuring the networks is cumbersome, involving manual digitising of aerial photographs. This paper describes a semi-automatic, knowledge-based network extraction method that is being implemented to work with airborne scanning laser altimetry (and later aerial photography). The channels exhibit a width variation of several orders of magnitude, making an approach based on multi-scale line detection difficult. The processing therefore uses multi-scale edge detection to detect channel edges, then associates adjacent anti-parallel edges together to form channels using a distance-with-destination transform. Breaks in the networks are repaired by extending channel ends along their existing direction to join with nearby channels, using the domain knowledge that flow paths should proceed downhill and that any network fragment should be joined to a nearby fragment so as to connect eventually to the open sea.
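The multi-scale edge detection step might be sketched as follows in Python; this union-of-scales Canny detector is an illustrative assumption, and it omits the anti-parallel edge pairing and the distance-with-destination transform described above.

import numpy as np
from skimage import feature

def multiscale_edges(dem, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Union of Canny edge maps over a range of smoothing scales, so that
    both metre-wide creeks and the smallest 20-30 cm channels can respond
    at some scale."""
    edges = np.zeros(dem.shape, dtype=bool)
    for s in sigmas:
        edges |= feature.canny(dem, sigma=s)
    return edges

rng = np.random.default_rng(1)
dem = rng.normal(0.0, 0.05, (128, 128))  # stand-in LiDAR surface (m)
print(multiscale_edges(dem).sum(), "edge pixels")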