892 results for Database, Image Retrieval, Browsing, Semantic Concept


Relevance:

30.00%

Publisher:

Abstract:

The super-resolution problem is an inverse problem: the process of producing a high-resolution (HR) image from one or more low-resolution (LR) observations. It involves upsampling the image, thereby increasing the maximum spatial frequency, and removing degradations that arise during image capture, namely aliasing and blurring. The work presented in this thesis is based on learning-based single-image super-resolution. In learning-based super-resolution algorithms, a training set or database of available HR images is used to construct the HR version of an image captured with a LR camera. In the training set, images are stored as patches or as coefficients of feature representations such as the wavelet transform or DCT. Single-frame image super-resolution can be used in applications where a database of HR images is available. The advantage of this method is that, by skilfully creating a database of suitable training images, one can improve the quality of the super-resolved image. A new super-resolution method based on the wavelet transform is developed; it outperforms conventional wavelet-transform-based methods and standard interpolation methods. Super-resolution techniques based on a skewed anisotropic transform, the directionlet transform, are developed to convert a small low-resolution image into a large high-resolution image. The super-resolution algorithm not only increases the size but also reduces the degradations introduced while capturing the image. This method outperforms the standard interpolation methods and the wavelet methods, both visually and in terms of SNR values. Artifacts such as aliasing and ringing are also eliminated. The super-resolution methods are implemented using both critically sampled and oversampled directionlets. The conventional directionlet transform is computationally complex; hence, a lifting scheme is used to implement the directionlets. The new single-image super-resolution method based on the lifting scheme reduces computational complexity and thereby computation time. The quality of the super-resolved image depends on the type of wavelet basis used, so a study is conducted on the effect of different wavelets on the single-image super-resolution method. Finally, the new method, implemented on greyscale images, is extended to colour and noisy images.
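As a rough illustration of the learning-based approach this abstract describes, the sketch below implements the simplest possible variant: a database of paired LR/HR pixel patches and nearest-neighbour lookup. The patch size, scale factor and block-average downscaling are assumptions made for the sketch; the thesis itself works with wavelet and directionlet coefficients rather than raw pixels.

```python
# Minimal sketch of learning-based single-image super-resolution by
# patch lookup, assuming plain nearest-neighbour search over raw pixel
# patches rather than the thesis's wavelet/directionlet features.
import numpy as np

PATCH, SCALE = 4, 2  # LR patch size and magnification factor (assumed)

def downscale(img, s=SCALE):
    """Crude LR simulation: average s x s blocks (a stand-in for the
    real capture model with blur and aliasing)."""
    h, w = img.shape[0] // s * s, img.shape[1] // s * s
    return img[:h, :w].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def build_database(hr_images):
    """Store paired (LR patch, HR patch) examples from training HR images."""
    lr_keys, hr_vals = [], []
    for hr in hr_images:
        lr = downscale(hr)
        for i in range(0, lr.shape[0] - PATCH + 1):
            for j in range(0, lr.shape[1] - PATCH + 1):
                lr_keys.append(lr[i:i+PATCH, j:j+PATCH].ravel())
                hr_vals.append(hr[i*SCALE:(i+PATCH)*SCALE,
                                  j*SCALE:(j+PATCH)*SCALE])
    return np.array(lr_keys), np.array(hr_vals)

def super_resolve(lr, lr_keys, hr_vals):
    """Tile the LR input with patches; paste the HR patch whose LR key
    is closest in Euclidean distance (no overlap blending, for brevity)."""
    out = np.zeros((lr.shape[0] * SCALE, lr.shape[1] * SCALE))
    for i in range(0, lr.shape[0] - PATCH + 1, PATCH):
        for j in range(0, lr.shape[1] - PATCH + 1, PATCH):
            q = lr[i:i+PATCH, j:j+PATCH].ravel()
            best = np.argmin(((lr_keys - q) ** 2).sum(axis=1))
            out[i*SCALE:(i+PATCH)*SCALE, j*SCALE:(j+PATCH)*SCALE] = hr_vals[best]
    return out
```

In the thesis's setting, the database would hold transform coefficients instead of raw patches, and overlapping HR patches would be blended rather than tiled.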

Relevance:

30.00%

Publisher:

Abstract:

Organ and/or tissue transplantation is considered a viable therapeutic option for treating both chronic or end-stage diseases and non-vital conditions that nonetheless reduce the patient's perceived quality of life. This multidimensional procedure involves three main actors: the donor, the organ/tissue, and the recipient. While a significant share of research and intervention plans has centred on the biological dimension of transplantation and on the promotion of donation, interest in the recipients' psychosocial experience and quality of life throughout this process has grown over the last decade. Accordingly, the general objective of this monograph is to explore the experience and the meanings constructed by transplant patients, through a systematic review of the literature on this topic. To this end, specific objectives were derived from the general one, key terms were selected for each, and a search was conducted in 5 databases of indexed journals: Ebsco Host (Academic Search; and Psychology and Behavioral Sciences Collection); Proquest; Pubmed; and Science Direct. The results establish that, although the recipients' experience has begun to be investigated, further exploration of these patients' experience is still needed, and such exploration would lack purpose were it not carried out through the recipients' own narratives and testimonies.

Relevance:

30.00%

Publisher:

Abstract:

The growth of databases containing ever more challenging images with an ever larger number of categories is driving the development of image representations that remain discriminative across multiple classes, and of algorithms that are efficient in learning and classification. This thesis explores the problem of classifying images by the object they contain when a large number of categories is involved. We first investigate how a hybrid system combining a generative model and a discriminative model can benefit image classification when the level of human annotation is minimal. For this task we introduce a new vocabulary using a dense representation of color-SIFT descriptors, and then examine how the different parameters affect the final classification. We next propose a method for incorporating spatial information into the hybrid system, showing that context information is of great help for image classification. We then introduce a new shape descriptor that represents an image by its local and spatial shape, together with a kernel that incorporates this spatial information in a pyramidal fashion. Shape is represented by a compact vector, yielding a descriptor well suited to kernel-based learning algorithms. Our experiments show that this shape information achieves results similar to (and sometimes better than) appearance-based descriptors. We also investigate how different features can be combined for image classification, and show that the proposed shape descriptor together with an appearance descriptor substantially improves classification. Finally, we describe an algorithm that detects regions of interest automatically during training and classification. This provides a way to suppress the image background and adds invariance to the position of objects within images. We show that using shape and appearance over this region of interest, with random forests classifiers, improves both classification accuracy and computation time. We compare our results with those in the literature, using the same databases and the same training and classification protocols as the original authors, and show that all the introduced innovations increase the final image classification performance.
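For readers unfamiliar with the pyramidal spatial encoding mentioned above, the following minimal sketch builds a spatial-pyramid bag-of-words vector; it assumes local descriptors have already been quantized to visual-word indices, and it stands in for, rather than reproduces, the thesis's dense color-SIFT vocabulary and pyramid kernel.

```python
# Minimal sketch of a spatial-pyramid bag-of-words representation,
# assuming descriptors are already quantized to visual-word indices.
import numpy as np

def spatial_pyramid_histogram(words, xy, img_w, img_h, n_words, levels=2):
    """Concatenate visual-word histograms over a pyramid of grids:
    1x1 at level 0, 2x2 at level 1, 4x4 at level 2, ..."""
    feats = []
    for lvl in range(levels + 1):
        cells = 2 ** lvl
        for cx in range(cells):
            for cy in range(cells):
                # Select the descriptors whose location falls in this cell.
                in_cell = ((xy[:, 0] * cells // img_w == cx) &
                           (xy[:, 1] * cells // img_h == cy))
                feats.append(np.bincount(words[in_cell], minlength=n_words))
    v = np.concatenate(feats).astype(float)
    return v / max(v.sum(), 1.0)  # L1-normalise so images are comparable
```

A histogram-intersection kernel (the sum of elementwise minima between two such vectors) is a common companion to this representation for kernel-based learning.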

Relevance:

30.00%

Publisher:

Abstract:

In general, ranking entities (resources) on the Semantic Web (SW) depends on importance, relevance, and query length. Few existing SW search systems cover all of these aspects. Moreover, many existing efforts simply reuse technologies from conventional Information Retrieval (IR), which were not designed for SW data. This paper proposes a ranking mechanism that includes all three categories of ranking and is tailored to SW data.
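The abstract names the three ranking factors but not their combination, so the sketch below is purely illustrative: hypothetical per-entity importance and relevance scores combined with an assumed query-length damping term and assumed weights.

```python
# Illustrative only: the paper's actual ranking formula is not given in
# the abstract; scores, weights and the length factor are assumptions.
from dataclasses import dataclass

@dataclass
class Entity:
    uri: str
    importance: float   # e.g. a link-analysis score over the RDF graph
    relevance: float    # e.g. match quality against the query terms

def rank(entities, query_terms, w_imp=0.4, w_rel=0.4, w_len=0.2):
    """Score each entity; the query-length factor damps long queries so
    partial matches are not over-rewarded (one plausible reading)."""
    len_factor = 1.0 / (1.0 + len(query_terms))
    scored = [(w_imp * e.importance + w_rel * e.relevance + w_len * len_factor, e)
              for e in entities]
    return [e for score, e in sorted(scored, key=lambda p: -p[0])]
```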

Relevance:

30.00%

Publisher:

Abstract:

Older adults often demonstrate higher levels of false recognition than do younger adults. However, in experiments using novel shapes without preexisting semantic representations, this age-related elevation in false recognition was found to be greatly attenuated. Two experiments tested a semantic categorization account of these findings, examining whether older adults show especially heightened false recognition if the stimuli have preexisting semantic representations, such that semantic category information attenuates or truncates the encoding or retrieval of item-specific perceptual information. In Experiment 1, ambiguous shapes were presented with or without disambiguating semantic labels. Older adults showed higher false recognition when labels were present but not when labels were never presented. In Experiment 2, older adults showed higher false recognition for concrete but not abstract objects. The semantic categorization account was supported.

Relevance:

30.00%

Publisher:

Abstract:

Background: Problems with lexical retrieval are common across all types of aphasia, but certain word classes are thought to be more vulnerable in some aphasia types. Traditionally, verb retrieval problems have been considered characteristic of non-fluent aphasias, but there is growing evidence that they are also found in fluent aphasia. As verbs are retrieved from the mental lexicon with syntactic as well as phonological and semantic information, it is speculated that an improvement in verb retrieval should enhance communicative abilities in this population as in others. We report on an investigation into the effectiveness of verb treatment for three individuals with fluent aphasia. Methods & Procedures: Multiple pre-treatment baselines were established over 3 months in order to monitor language change before treatment. The three participants then received twice-weekly verb treatment over approximately 4 months. All pre-treatment assessments were re-administered immediately after treatment and 3 months post-treatment. Outcome & Results: Scores fluctuated in the pre-treatment period. Following treatment, there was a significant improvement in verb retrieval for two of the three participants on the treated items. The increase in scores for the third participant was statistically nonsignificant, but post-treatment scores moved from below the normal range to within the normal range. All participants were significantly quicker in the verb retrieval task following treatment. There was an increase in well-formed sentences in the sentence construction test and in some samples of connected speech. Conclusions: Repeated systematic treatment can produce a significant improvement in the retrieval of practised verbs and, for some participants, generalisation to unpractised items. An increase in well-formed sentences is seen for some speakers. The theoretical and clinical implications of the results are discussed.

Relevance:

30.00%

Publisher:

Abstract:

There are still major challenges in the area of automatic indexing and retrieval of multimedia content for very large corpora. Current indexing and retrieval applications still use keywords to index multimedia content, and those keywords usually provide no knowledge about the semantic content of the data. With the increasing amount of multimedia content, it is inefficient to continue with this approach. In this paper, we describe the DREAM project, which addresses these challenges by proposing a new framework for semi-automatic annotation and retrieval of multimedia based on semantic content. The framework uses Topic Map technology to model the knowledge automatically extracted from the multimedia content by an Automatic Labelling Engine. We describe how we acquire knowledge from the content and represent it, with the support of NLP, to automatically generate Topic Maps. The framework is described in the context of film post-production.
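As a loose illustration of topic-map-based media annotation, the sketch below uses a reduced model of topics, occurrences and associations; the class, its methods and the clip identifiers are all hypothetical, and it does not reproduce DREAM's Automatic Labelling Engine or the full ISO 13250 Topic Maps standard.

```python
# Minimal topic-map-style index for media annotation (assumed, reduced
# model: topics, time-coded occurrences, untyped associations).
from collections import defaultdict

class TopicMap:
    def __init__(self):
        self.occurrences = defaultdict(list)   # topic -> media locators
        self.associations = defaultdict(set)   # topic -> related topics

    def annotate(self, topic, clip_id, start_s, end_s):
        """Attach a time-coded occurrence of a topic to a media clip."""
        self.occurrences[topic].append((clip_id, start_s, end_s))

    def relate(self, topic_a, topic_b):
        """Record a symmetric association between two topics."""
        self.associations[topic_a].add(topic_b)
        self.associations[topic_b].add(topic_a)

    def retrieve(self, topic):
        """Return occurrences of a topic and of its associated topics,
        so retrieval follows semantic links rather than bare keywords."""
        hits = list(self.occurrences[topic])
        for related in self.associations[topic]:
            hits.extend(self.occurrences[related])
        return hits

tm = TopicMap()
tm.annotate("explosion", clip_id="vfx_017", start_s=12.0, end_s=15.5)
tm.relate("explosion", "fire")
tm.annotate("fire", clip_id="vfx_102", start_s=3.0, end_s=9.0)
print(tm.retrieve("explosion"))  # finds both clips via the association
```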

Relevance:

30.00%

Publisher:

Abstract:

There are still major challenges in the area of automatic indexing and retrieval of digital data. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieval of such data based on semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. Research has been ongoing for a few years in the field of ontological engineering, with the aim of using ontologies to add knowledge to information. In this paper we describe the architecture of a system designed to automatically and intelligently index huge repositories of special effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval.

Relevance:

30.00%

Publisher:

Abstract:

In order to explore the impact of a degraded semantic system on the structure of language production, we analysed transcripts from autobiographical memory interviews to identify naturally occurring speech errors by eight patients with semantic dementia (SD) and eight age-matched normal speakers. Relative to controls, patients were significantly more likely to (a) substitute and omit open-class words, (b) substitute (but not omit) closed-class words, (c) substitute incorrect complex morphological forms and (d) produce semantically and/or syntactically anomalous sentences. Phonological errors were scarce in both groups. The study confirms previous evidence of SD patients’ problems with open-class content words, which are replaced by higher-frequency, less specific terms. It presents the first evidence that SD patients have problems with closed-class items and make syntactic as well as semantic speech errors, although these grammatical abnormalities are mostly subtle rather than gross. The results can be explained by the semantic deficit, which disrupts the representation of a pre-verbal message, lexical retrieval and the early stages of grammatical encoding.

Relevance:

30.00%

Publisher:

Abstract:

The project investigated whether it would be possible to remove the main technical hindrance to precision application of herbicides to arable crops in the UK, namely creating geo-referenced weed maps for each field. The ultimate goal is an information system with which agronomists and farmers can plan precision weed control and create spraying maps. The project focussed on black-grass in wheat, but research was also carried out on barley and beans and on wild-oats, barren brome, rye-grass, cleavers and thistles, which form stable patches in arable fields and which farmers may make special efforts to control. Using cameras mounted on farm machinery, the project explored the feasibility of automating the process of mapping black-grass in fields. Geo-referenced images were captured from June to December 2009, using sprayers, a tractor, combine harvesters and on foot. Cameras were mounted on the sprayer boom, on windows or on top of tractor and combine cabs, and images were captured with a range of vibration levels and at speeds up to 20 km h⁻¹. For acceptability to farmers, it was important that every image containing black-grass was classified as containing black-grass; false negatives are highly undesirable. The software algorithms recorded no false negatives in the sample images analysed to date, although some black-grass heads were unclassified and there were also false positives. The density of black-grass heads per unit area estimated by machine vision increased as a linear function of the actual density, with a mean detection rate of 47% of black-grass heads in sample images at growth stage T3 within a density range of 13 to 1230 heads m⁻². A final part of the project was to create geo-referenced weed maps using software written in previous HGCA-funded projects, and two examples show that geo-location by machine vision compares well with manually mapped weed patches. The consortium therefore demonstrated for the first time the feasibility of using a GPS-linked, computer-controlled camera system mounted on farm machinery (tractor, sprayer or combine) to geo-reference black-grass in winter wheat between head emergence and seed shedding.
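The reported linear relationship suggests a simple calibration step, sketched below with assumed numbers: fit the detection rate from paired manual and machine counts, then invert it to turn per-image counts into density estimates. This is an illustration, not the project's software.

```python
# Illustrative calibration sketch: if machine-vision counts rise
# linearly with true density (mean detection rate ~47% in the report),
# a fitted rate converts counts to density estimates for mapping.
import numpy as np

def fit_detection_rate(true_density, detected_density):
    """Least-squares slope through the origin: detected = rate * true."""
    t, d = np.asarray(true_density), np.asarray(detected_density)
    return (t * d).sum() / (t * t).sum()

def estimate_density(detected_count, image_area_m2, rate):
    """Convert a per-image head count to estimated heads per m^2."""
    return detected_count / image_area_m2 / rate

# Hypothetical calibration plots (heads per m^2, manual vs machine):
rate = fit_detection_rate([100, 400, 1200], [46, 190, 570])
print(round(rate, 2), estimate_density(30, 0.5, rate))
```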

Relevance:

30.00%

Publisher:

Abstract:

Many weeds occur in patches but farmers frequently spray whole fields to control the weeds in these patches. Given a geo-referenced weed map, technology exists to confine spraying to these patches. Adoption of patch spraying by arable farmers has, however, been negligible, partly due to the difficulty of constructing weed maps. Building on previous DEFRA and HGCA projects, this proposal aims to develop and evaluate a machine vision system to automate the weed mapping process, thereby addressing the principal technical stumbling block to widespread adoption of site-specific weed management (SSWM). The accuracy of weed identification by machine vision based on a single field survey may be inadequate to create herbicide application maps. We therefore propose to test the hypothesis that sufficiently accurate weed maps can be constructed by integrating information from geo-referenced images captured automatically at different times of the year during normal field activities. Accuracy of identification will also be increased by utilising a priori knowledge of weeds present in fields. To prove this concept, images will be captured from arable fields on two farms and processed offline to identify and map the weeds, focussing especially on black-grass, wild oats, barren brome, couch grass and cleavers. As advocated by Lutman et al. (2002), the approach uncouples the weed mapping and treatment processes and builds on the observation that patches of these weeds are quite stable in arable fields. There are three main aspects to the project.

1) Machine vision hardware. The component parts of the system are one or more cameras connected to a single-board computer (Concurrent Solutions LLC) and interfaced with an accurate Global Positioning System (GPS) supplied by Patchwork Technology. The camera(s) will take separate measurements for each of the three primary colours of visible light (red, green and blue) in each pixel. The basic proof of concept can be achieved in principle with a single-camera system, but in practice systems with more than one camera may need to be installed so that larger fractions of each field can be photographed. Hardware will be reviewed regularly during the project in response to feedback from other work packages and updated as required.

2) Image capture and weed identification software. The machine vision system will be attached to toolbars of farm machinery so that images can be collected during different field operations. Images will be captured at different ground speeds, in different directions, at different crop growth stages and against different crop backgrounds. Having captured geo-referenced images in the field, image analysis software to identify weed species will be developed by Murray State and Reading Universities with advice from The Arable Group. A wide range of pattern recognition techniques, and in particular Bayesian Networks, will be used to advance the state of the art in machine vision-based weed identification and mapping. Weed identification algorithms used by others are inadequate for this project, as we intend to collect and correlate images captured at different growth stages. Plants grown for this purpose by Herbiseed will be used in the first instance. In addition, our image capture and analysis system will include plant characteristics such as leaf shape, size, vein structure, colour and textural pattern, some of which are not detectable by other machine vision systems or are omitted by their algorithms. Using such a list of features observable with our machine vision system, we will determine those that can be used to distinguish the weed species of interest.

3) Weed mapping. Geo-referenced maps of weeds in arable fields (Reading University and Syngenta) will be produced with advice from The Arable Group and Patchwork Technology. Natural infestations will be mapped in the fields, but we will also introduce specimen plants in pots to facilitate more rigorous system evaluation and testing. Manual weed maps of the same fields will be generated by Reading University, Syngenta and Peter Lutman so that the accuracy of automated mapping can be assessed. The principal hypothesis and concept to be tested is that, by combining maps from several surveys, a weed map with acceptable accuracy for end-users can be produced (a map-fusion sketch follows this abstract).

If the concept is proved and can be commercialised, systems could be retrofitted at low cost onto existing farm machinery. The outputs of the weed mapping software would then link with the precision farming options already built into many commercial sprayers, allowing their use for targeted, site-specific herbicide applications. Immediate economic benefits would therefore arise directly from reduced herbicide costs. SSWM will also reduce the overall pesticide load on the crop and so may reduce pesticide residues in food and drinking water, and reduce adverse impacts of pesticides on non-target species and beneficials. Farmers may even choose to leave unsprayed some non-injurious, environmentally beneficial, low-density weed infestations. These benefits fit very well with the anticipated legislation emerging in the new EU Thematic Strategy for Pesticides, which will encourage more targeted use of pesticides and greater uptake of Integrated Crop (Pest) Management approaches, and also with the requirements of the Water Framework Directive to reduce levels of pesticides in water bodies. The greater precision of weed management offered by SSWM is therefore a key element in preparing arable farming systems for a future in which policy makers and consumers want to minimise pesticide use and the carbon footprint of farming while maintaining food production and security. The mapping technology could also be used on organic farms to identify areas of fields needing mechanical weed control, thereby reducing both carbon footprints and damage to crops by, for example, spring tines.

Objectives: (i) to develop a prototype machine vision system for automated image capture during agricultural field operations; (ii) to prove the concept that images captured by the machine vision system over a series of field operations can be processed to identify and geo-reference specific weeds in the field; (iii) to generate weed maps from the geo-referenced weed plants/patches identified in objective (ii).
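A minimal sketch of that map-combination hypothesis, assuming each survey is reduced to a boolean presence grid over the same field and that a simple vote threshold fuses them; the grids, threshold and data here are hypothetical.

```python
# Sketch of the map-combination idea only (not the project's software):
# each survey yields a geo-referenced grid of per-cell weed detections;
# fusing surveys, e.g. by voting, should suppress single-survey errors.
import numpy as np

def combine_surveys(survey_grids, min_votes=2):
    """Mark a cell as infested if at least `min_votes` independent
    surveys detected the weed there (the threshold is an assumption)."""
    stack = np.stack([g.astype(bool) for g in survey_grids])
    return stack.sum(axis=0) >= min_votes

# Three hypothetical 4x4 survey grids of the same field:
rng = np.random.default_rng(0)
surveys = [rng.random((4, 4)) > 0.6 for _ in range(3)]
patch_map = combine_surveys(surveys)
```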

Relevance:

30.00%

Publisher:

Abstract:

A new database of weather and circulation type catalogs is presented, comprising 17 automated classification methods and five subjective classifications. It was compiled within COST Action 733 "Harmonisation and Applications of Weather Type Classifications for European regions" in order to evaluate different methods for weather and circulation type classification. This paper gives a technical description of the included methods, using a new conceptual categorization that reflects each method's strategy for defining types. Methods using predefined types include manual and threshold-based classifications, while methods deriving types from the input data include those based on eigenvector techniques, leader algorithms and optimization algorithms. To allow direct comparisons between the methods, the circulation input data and the methods' configurations were harmonized to produce a subset of standard catalogs for the automated methods. The harmonization covers the data source, the climatic parameters used, the classification period, the spatial domain and the number of types. Frequency-based characteristics of the resulting catalogs are presented, including the variation of class sizes, persistence, seasonal and inter-annual variability, and trends of the annual frequency time series. The methodological concept of the classifications is partly reflected in these properties of the resulting catalogs. Compared to automated methods, the types of subjective classifications show higher persistence, inter-annual variation and long-term trends. Among the automated classifications, optimization methods show a tendency towards longer persistence and higher seasonal variation. However, it is also concluded that the distance metric used and the data preprocessing play at least as important a role in the properties of the resulting classification as the algorithm used for type definition and assignment.
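As an illustration of the "optimization algorithm" family mentioned above, the sketch below runs plain k-means on daily gridded fields; the number of types, iteration count and Euclidean metric are assumptions, not the configuration of any COST 733 method.

```python
# Hedged sketch of one family of automated circulation-type methods
# (optimization algorithms), here plain k-means on daily fields.
import numpy as np

def kmeans_circulation_types(fields, n_types=9, n_iter=50, seed=0):
    """Cluster daily gridded fields (days x ny x nx) into types;
    returns per-day type labels and the type composites (centroids)."""
    rng = np.random.default_rng(seed)
    X = fields.reshape(fields.shape[0], -1)
    centroids = X[rng.choice(len(X), n_types, replace=False)]
    for _ in range(n_iter):
        # Assign each day to the nearest centroid (plain Euclidean
        # distance; the abstract notes the metric itself matters).
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for k in range(n_types):
            if (labels == k).any():
                centroids[k] = X[labels == k].mean(axis=0)
    return labels, centroids

# Hypothetical input: 365 daily pressure anomaly fields on a 10x12 grid.
days = np.random.default_rng(1).normal(size=(365, 10, 12))
labels, composites = kmeans_circulation_types(days)
```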

Relevance:

30.00%

Publisher:

Abstract:

The Retrieval-Induced Forgetting (RIF) paradigm includes three phases: (a) study/encoding of category exemplars, (b) practicing retrieval of a subset of those category exemplars, and (c) recall of all exemplars. At the final recall phase, recall of items that belong to the same categories as those items that undergo retrieval-practice, but that do not undergo retrieval-practice, is impaired. The received view is that this is because retrieval of target category-exemplars (e.g., ‘Tiger’ in the category Four-legged animal) requires inhibition of non-target category-exemplars (e.g., ‘Dog’ and ‘Lion’) that compete for retrieval. Here, we used the RIF paradigm to investigate whether ignoring auditory items during the retrieval-practice phase modulates the inhibitory process. In two experiments, RIF was present when retrieval-practice was conducted in quiet and when conducted in the presence of spoken words that belonged to a category other than that of the items that were targets for retrieval-practice. In contrast, RIF was abolished when words that either were identical to the retrieval-practice words or were only semantically related to the retrieval-practice words were presented as background speech. The results suggest that the act of ignoring speech can reduce inhibition of the non-practiced category-exemplars, thereby eliminating RIF, but only when the spoken words are competitors for retrieval (i.e., belong to the same semantic category as the to-be-retrieved items).

Relevance:

30.00%

Publisher:

Abstract:

We explored the impact of a degraded semantic system on lexical, morphological and syntactic complexity in language production. We analysed transcripts from connected speech samples from eight patients with semantic dementia (SD) and eight age-matched healthy speakers. The frequency distributions of nouns and verbs were compared for hand-scored data and data extracted using text-analysis software. Lexical measures showed the predicted pattern for nouns and verbs in hand-scored data, and for nouns in software-extracted data, with fewer low frequency items in the speech of the patients relative to controls. The distribution of complex morpho-syntactic forms for the SD group showed a reduced range, with fewer constructions that required multiple auxiliaries and inflections. Finally, the distribution of syntactic constructions also differed between groups, with a pattern that reflects the patients’ characteristic anomia and constraints on morpho-syntactic complexity. The data are in line with previous findings of an absence of gross syntactic errors or violations in SD speech. Alterations in the distributions of morphology and syntax, however, support constraint satisfaction models of speech production in which there is no hard boundary between lexical retrieval and grammatical encoding.

Relevance:

30.00%

Publisher:

Abstract:

In winter, brine rejection from sea ice formation and export in the Weddell Sea, offshore of the Filchner-Ronne Ice Shelf (FRIS), leads to the formation of High Salinity Shelf Water (HSSW). This dense water mass enters the cavity beneath FRIS by sinking southward down the sloping continental shelf towards the grounding line. Melting occurs when the HSSW encounters the ice shelf, and the meltwater released cools and freshens the HSSW to form a water mass known as Ice Shelf Water (ISW). If this ISW rises, the ‘ice pump’ is initiated (Lewis and Perkin, 1986), whereby the ascending ISW becomes supercooled, because the in-situ freezing temperature rises as pressure decreases, and deposits marine ice at shallower locations. Sandhäger et al. (2004) were able to infer the thickness patterns of marine ice deposits at the base of FRIS (figure 1), so the primary aim of this work is to understand the ocean flows that determine these patterns. The plume model we use to investigate ISW flow is described fully by Holland and Feltham (accepted), so only a relatively brief outline is presented here. The plume is simulated by combining a parameterisation of ice shelf basal interaction and a multiple-size-class frazil dynamics model with an unsteady, depth-averaged, reduced-gravity plume model. In the model, an active region of ISW evolves above and within an expanse of stagnant ambient fluid, which is considered to be ice-free and has fixed profiles of temperature and salinity. The two main assumptions of the model are that there is a well-mixed layer underneath the ice shelf and that the ambient fluid outside the plume is stagnant with fixed properties. The topography of the ice shelf beneath which the plume flows is set to the FRIS ice shelf draft calculated by Sandhäger et al. (2004), masked with the grounding line from the Antarctic Digital Database (ADD Consortium, 2002). To initiate the plumes, we assume that the intrusion of dense HSSW initially causes melting at the points on the grounding line where the glaciological tributaries feeding FRIS go afloat.
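The abstract does not reproduce the model equations, but a depth-averaged reduced-gravity plume is commonly written, in schematic 1-D form, as below; this follows the general plume literature rather than Holland and Feltham's exact formulation, and frazil dynamics would add further source terms.

```latex
% Schematic 1-D depth-averaged reduced-gravity plume balances:
% D = plume thickness, U = velocity, e' = entrainment rate,
% m' = melt rate, c_d = drag coefficient, theta = basal slope.
\begin{align*}
  \frac{\partial D}{\partial t} + \frac{\partial (DU)}{\partial x} &= e' + m',\\
  \frac{\partial (DU)}{\partial t} + \frac{\partial (DU^{2})}{\partial x}
    &= D\,g'\sin\theta - c_d\,U\lvert U\rvert,
  \qquad g' = g\,\frac{\rho_a - \rho_p}{\rho_a},
\end{align*}
```

where $\rho_a$ and $\rho_p$ are the ambient and plume densities; supercooling of the ascending ISW enters through the pressure dependence of the freezing temperature in the melt term.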