932 results for Retrieval
Abstract:
The amygdala nuclei appear to be critically implicated in emotional memory. However, in most studies, encoding and consolidation processes cannot be analyzed separately. We therefore studied verbal emotional memory in a young woman with a ganglioglioma of the left amygdala and analyzed its impact (1) on each step of the memory process (encoding, retrieval, and recognition), (2) on short- and long-term consolidation (1-hour and 1-week delays), and (3) on the processing of valence (positive and negative items compared with neutral words). Results showed impairments in emotional encoding and, once encoding was controlled for, in emotional long-term consolidation. Finally, although the patient did not rate the negative words as emotionally arousing, these words were specifically poorly encoded, recalled, and consolidated. Our data suggest that separate cerebral networks support the processing of emotional versus neutral stimuli.
Abstract:
The majority of the Swiss population uses the internet to seek health information, with the aim of being better informed before or after a consultation. Doctors can point their information-seeking patients to high-quality websites, whether medical portals or sites dedicated to a specific pathology. Doctors should not see the internet as a threat but rather as an opportunity to strengthen the doctor-patient relationship.
Abstract:
Because memory retrieval often requires overt responses, it is difficult to determine to what extent forgetting reflects a failure to explicitly access long-term memory traces. In this study, we combined eye-tracking measures with a behavioural task designed to produce high forgetting rates in order to investigate whether long-term memory traces persist despite failures to access them consciously. In two experiments, participants were encouraged to encode a large set of sound-picture location associations. In a later test, sounds were presented and participants were instructed to visually scan an empty screen for the correct location of the associated picture before giving a verbal memory report. We found that the reactivation of associated memories by sound cues at test biased oculomotor behaviour towards locations congruent with the memory representations, even when participants failed to provide a conscious memory report. These findings reveal a memory-guided behaviour that can be used to map internal representations of forgotten long-term memories.
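As a hedged illustration of the kind of gaze-bias measure such a design implies, the sketch below divides the screen into quadrants and computes the proportion of fixation time spent in the quadrant where the associated picture had been studied, to be compared against the 25% chance level. The quadrant layout, data format, and variable names are illustrative assumptions, not the study's actual analysis.

```python
def quadrant(x, y, width=1920, height=1080):
    """Map a gaze sample to one of four screen quadrants (0..3)."""
    return (1 if x >= width / 2 else 0) + (2 if y >= height / 2 else 0)

def gaze_bias(fixations, target_quadrant):
    """Proportion of total fixation time spent in the quadrant where the
    associated picture was originally studied (chance level = 0.25)."""
    total = sum(dur for _, _, dur in fixations)
    in_target = sum(dur for x, y, dur in fixations
                    if quadrant(x, y) == target_quadrant)
    return in_target / total if total else 0.0

# Hypothetical fixations on an empty screen: (x, y, duration in ms)
fixations = [(400, 300, 220), (1500, 820, 180), (450, 250, 260), (600, 900, 150)]
print(gaze_bias(fixations, target_quadrant=0))  # > 0.25 suggests a memory-guided bias
```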
Abstract:
Despite the successful retrieval of genomes from past remains, the prospects for human palaeogenomics remain unclear because of the difficulty of distinguishing contaminant from endogenous DNA sequences. Previous sequence data generated on high-throughput sequencing platforms indicate that fragmentation of ancient DNA sequences is a characteristic trait primarily arising due to depurination processes that create abasic sites leading to DNA breaks.
Abstract:
Automation or semi-automation of learning scenario specifications is one of the least explored subjects in e-learning research. There is a need for a catalogue of learning scenarios and a technique to facilitate automated retrieval of stored specifications. This paper justifies the need to construct an ontology for that purpose. The ontology must, above all, support a specification technique for learning scenarios; it should also be useful for the creation and validation of new scenarios and for the personalization or monitoring of learning scenarios. Thus, after justifying the need for this ontology, a first approach to a possible knowledge domain is presented. An example of a concrete learning scenario illustrates some relevant concepts supported by the ontology, defining the scenario in a way that should be easy to automate.
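As a hedged illustration of what a catalogue with automated retrieval of stored scenario specifications could look like, the sketch below annotates each specification with a few ontology-style concepts (actors, activities, resources) and retrieves the specifications matching a concept query. The concept names and the matching rule are illustrative placeholders, not the ontology proposed in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class ScenarioSpec:
    """A stored learning-scenario specification annotated with ontology concepts."""
    name: str
    actors: set = field(default_factory=set)       # e.g. {"tutor", "learner"}
    activities: set = field(default_factory=set)   # e.g. {"peer-review"}
    resources: set = field(default_factory=set)    # e.g. {"forum", "quiz"}

def retrieve(catalogue, **query):
    """Return the stored specifications whose annotations contain every
    concept requested in the query (a simple ontology-guided lookup)."""
    return [s for s in catalogue
            if all(set(v) <= getattr(s, k) for k, v in query.items())]

catalogue = [
    ScenarioSpec("collaborative case study", {"tutor", "learner"},
                 {"group-work", "peer-review"}, {"forum"}),
    ScenarioSpec("self-paced drill", {"learner"}, {"exercise"}, {"quiz"}),
]
print([s.name for s in retrieve(catalogue, activities=["peer-review"])])
```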
Abstract:
The aim is to provide a framework for collecting data on newborn health that allows care to be harmonized regardless of place of birth. This requires knowing the population served, and the main difficulty is the absence of a data-collection system and of care standards covering all newborn conditions. A single registry gathering the principal perinatal and neonatal data of all newborns is essential. The Sociedad Española de Neonatología (SEN) should be the custodian of, and responsible for, the database, which must meet all legal requirements for privacy and confidentiality. At the level of each centre, it is possible to determine the relative weight of the conditions treated by diagnosis-related groups (DRG) and the outcomes from the standpoint of quality of care. Comparative analyses (benchmarking studies, etc.) make it possible to establish diagnostic and treatment guidelines. It is necessary to know the newborn population served and to define diagnostic and treatment criteria in order to improve the quality of care. The SEN wishes to address those responsible for care in hospitals to ask for their support and collaboration in implementing these recommendations.
Abstract:
This dissertation is based on four articles dealing with the modeling of ozonation. The literature part considers models for hydrodynamics in bubble column simulation and reviews methods for obtaining mass transfer coefficients; the methods presented are general and can be applied to any gas-liquid system. Ozonation reaction models and methods for obtaining stoichiometric coefficients and reaction rate coefficients for ozonation reactions are discussed in the final section of the literature part. In the first article, ozone gas-liquid mass transfer into water in a bubble column was investigated at different pH values. A more general method for estimating the mass transfer coefficient and Henry's coefficient was developed from the Beltrán method: the ozone volumetric mass transfer coefficient and Henry's coefficient were determined simultaneously by parameter estimation using a nonlinear optimization method. A minor dependence of the Henry's law constant on pH was detected in the pH range 4-9. In the second article, a new method using the axial dispersion model was developed for estimating ozone self-decomposition kinetics in a semi-batch bubble column reactor. The reaction rate coefficients for literature equations of ozone decomposition and the gas-phase dispersion coefficient were estimated and compared with literature data. In the pH range 7-10, reaction orders of 1.12 with respect to ozone and 0.51 with respect to the hydroxyl ion were obtained, in good agreement with the literature. The model parameters were determined by parameter estimation using a nonlinear optimization method, and a sensitivity analysis based on the objective function was conducted to assess the reliability and identifiability of the estimated parameters. In the third article, the reaction rate coefficients and the stoichiometric coefficients of the reaction of ozone with the model compound p-nitrophenol were estimated at low pH using nonlinear optimization. A novel method for estimating multireaction model parameters in ozonation was developed, in which the concentration of unknown intermediate compounds is represented as a residual COD (chemical oxygen demand) calculated from the measured COD and the theoretical COD of the known species. The decomposition rate of p-nitrophenol along the pathway producing hydroquinone was found to be about twice as fast as that along the pathway producing 4-nitrocatechol. In the fourth article, the reaction kinetics of p-nitrophenol ozonation was studied in a bubble column at pH 2. Using the reaction kinetic model presented in the previous article, the rate coefficients and stoichiometric coefficients as well as the mass transfer coefficient were estimated by nonlinear estimation. The decomposition rate of p-nitrophenol was found to be equal along the pathway producing hydroquinone and the pathway producing 4-nitrocatechol. Comparison of the rate coefficients with the case at initial pH 5 indicates that p-nitrophenol degradation producing 4-nitrocatechol is more selective towards molecular ozone than the reaction producing hydroquinone. The identifiability and reliability of the estimated parameters were analyzed with the Markov chain Monte Carlo (MCMC) method.
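As a hedged illustration of the kind of simultaneous parameter estimation described above, the following Python sketch fits a volumetric mass transfer coefficient kLa and a Henry's coefficient H to a dissolved-ozone profile by nonlinear least squares, assuming the simple absorption model dC/dt = kLa (P_O3/H - C). The numerical values, the synthetic data, and the use of scipy are illustrative assumptions, not the dissertation's actual model or code.

```python
import numpy as np
from scipy.optimize import curve_fit

P_O3 = 1.5e3  # ozone partial pressure in the gas phase [Pa] (illustrative value)

def dissolved_ozone(t, kla, henry):
    """Analytical solution of dC/dt = kLa * (C* - C) with C(0) = 0,
    where the saturation concentration C* = P_O3 / H (Henry's law)."""
    c_star = P_O3 / henry
    return c_star * (1.0 - np.exp(-kla * t))

# Hypothetical measured dissolved-ozone profile [mol/m3] vs time [s]
rng = np.random.default_rng(1)
t_data = np.linspace(0, 600, 13)
c_data = dissolved_ozone(t_data, kla=0.01, henry=1.0e4) \
         + rng.normal(0, 2e-3, t_data.size)   # synthetic measurement noise

# Simultaneous estimation of kLa and Henry's coefficient by nonlinear least squares
(kla_hat, henry_hat), cov = curve_fit(dissolved_ozone, t_data, c_data,
                                      p0=[0.005, 5.0e3])
print(f"kLa ~ {kla_hat:.4f} 1/s, H ~ {henry_hat:.0f} Pa m3/mol")
```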
Abstract:
Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks; however, preference learning involves predicting an ordering of the data points rather than a single numerical value, as in regression, or a class label, as in classification. Studying preference relations within a separate framework therefore not only facilitates a better theoretical understanding of the problem but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics, and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering the documents retrieved by a query, and preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from vast amounts of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. To improve the performance of our methods, we introduce several non-linear kernel functions. The contribution of this thesis is thus twofold: kernel functions for structured data that take advantage of various non-vectorial data representations, and preference learning algorithms suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to parse ranking in natural language processing, document ranking in information retrieval, and remote homology detection in bioinformatics. Training kernel-based ranking algorithms can be infeasible when the training set is large; this problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points, together with a sparse approximation that can be trained efficiently on large amounts of data. For situations in which a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only efficient training of the algorithms but also fast regularization parameter selection, multiple-output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
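The following sketch illustrates, under simplifying assumptions, the general idea of regularized least-squares preference learning: pairwise preferences are turned into difference vectors and fitted with a ridge-regression objective. It uses a plain linear kernel and a fixed target margin, so it is only a toy reduction of the approach described above, not the thesis's actual algorithms.

```python
import numpy as np

def fit_linear_preference_model(X, pairs, reg=1.0):
    """Regularized least squares on pairwise difference vectors:
    for each preference (i, j) meaning x_i is preferred over x_j,
    fit w so that w . (x_i - x_j) is close to a margin of one."""
    D = np.array([X[i] - X[j] for i, j in pairs])   # difference vectors
    y = np.ones(len(pairs))                          # target margins
    # Closed-form regularized least-squares solution
    w = np.linalg.solve(D.T @ D + reg * np.eye(X.shape[1]), D.T @ y)
    return w

def rank(X, w):
    """Order items by their learned utility score w . x (highest first)."""
    return np.argsort(-(X @ w))

# Toy example: 5 items with 3 features and a few preference judgements
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
pairs = [(0, 1), (0, 2), (3, 4)]   # item 0 preferred over 1 and 2, 3 over 4
w = fit_linear_preference_model(X, pairs, reg=0.1)
print(rank(X, w))
```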
Abstract:
An important issue in language learning is how new words are integrated in the brain representations that sustain language processing. To identify the brain regions involved in meaning acquisition and word learning, we conducted a functional magnetic resonance imaging study. Young participants were required to deduce the meaning of a novel word presented within increasingly constrained sentence contexts that were read silently during the scanning session. Inconsistent contexts were also presented in which no meaning could be assigned to the novel word. Participants showed meaning acquisition in the consistent but not in the inconsistent condition. A distributed brain network was identified comprising the left anterior inferior frontal gyrus (BA 45), the middle temporal gyrus (BA 21), the parahippocampal gyrus, and several subcortical structures (the thalamus and the striatum). Drawing on previous neuroimaging evidence, we tentatively identify the roles of these brain areas in the retrieval, selection, and encoding of the meaning.
Abstract:
In spite of the availability of large databases of chromatographic data on several standardized systems, one major task in systematic toxicological analysis (STA) remains: how to handle the experimental data and retrieve data from the large available databases in a meaningful and productive way. To this end, our group proposed an Internet-based tool using previously published STA databases whose interlaboratory reproducibility has already been evaluated. The developed software can calculate corrected chromatographic parameters, after the input of data obtained with standard mixtures of calibrators, and search the databases, which currently incorporate TLC, color reaction, GC, and HPLC data. At the end of the process, a list of candidate substances and their similarity indexes is presented.
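A minimal sketch of the two steps described above, under illustrative assumptions: measured retention values are first corrected by a linear fit against a calibration mixture with known reference values, and database entries are then ranked by a simple similarity index within a tolerance window. Both the correction model and the similarity score are generic stand-ins, not the published tool's actual algorithm.

```python
import numpy as np

def correction_model(measured_cal, reference_cal):
    """Fit a linear correction mapping measured retention values of the
    calibration standards onto their published reference values."""
    slope, intercept = np.polyfit(measured_cal, reference_cal, 1)
    return lambda x: slope * x + intercept

def search(corrected_value, database, tolerance=10.0):
    """Return candidate substances ranked by a simple similarity index
    (1 at a perfect match, 0 at the edge of the tolerance window)."""
    hits = []
    for name, db_value in database.items():
        diff = abs(db_value - corrected_value)
        if diff <= tolerance:
            hits.append((name, round(1.0 - diff / tolerance, 3)))
    return sorted(hits, key=lambda h: -h[1])

# Hypothetical calibration mixture (measured vs reference retention indices)
correct = correction_model(measured_cal=[1190, 1505, 1820],
                           reference_cal=[1200, 1500, 1800])
# Hypothetical GC retention-index database
database = {"substance A": 1452.0, "substance B": 1460.0, "substance C": 1600.0}
print(search(correct(1463.0), database))
```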
Abstract:
The objectives of this research work, "Identification of the Emerging Issues in Recycled Fiber Processing", are to discover emerging research issues and to present new approaches for identifying promising research themes in recovered paper production and application. The proposed approach consists of identifying technological problems frequently encountered in wastepaper preparation processes, improving the quality of recovered paper, and increasing its proportion in the composition of paper and board. The source of information for the problem retrieval is scientific publications in which wastepaper application and production are discussed. The study exploited several research methods to understand the changes related to the utilization of recovered paper. All the assembled data were carefully studied and categorized using the software tools RefViz and CiteSpace. Suggestions were made on the various classes of problems that need further investigation in order to propose emerging research trends in recovered paper.
Abstract:
Due to the need for more efficient, economical, and environmentally friendly technological processes, the use of enzymes has increased; however, reuse of the enzymatic hydrolytic complex is required. Immobilization stabilizes enzymes and allows their reuse, which is reflected in economic feasibility. Magnetic nanoparticles are promising supports, since their magnetic character allows retrieval by applying an external magnetic field. This article presents an analysis and discussion of methods of biocatalyst immobilization, emphasizing lignocellulolytic enzymes immobilized on magnetic nanoparticles and their applications for the production of high-value compounds such as bioethanol.
Abstract:
This study presents an automatic, computer-aided analytical method called Comparison Structure Analysis (CSA), which can be applied to different dimensions of music. The aim of CSA is first and foremost practical: to produce dynamic and understandable representations of musical properties by evaluating the prevalence of a chosen musical data structure throughout a musical piece. Such a comparison structure may be a mathematical vector, a set, a matrix, another type of data structure, or a combination of data structures. CSA depends on an abstract, systematic segmentation that allows a statistical or mathematical survey of the data; choosing a comparison structure tunes the apparatus to be sensitive to an exclusive set of musical properties. CSA sits somewhere between traditional music analysis and computer-aided music information retrieval (MIR). Theoretically defined musical entities, such as pitch-class sets, set-classes, and particular rhythm patterns, are detected in compositions using pattern extraction and pattern comparison algorithms typical of the MIR field. In principle, comparison structure analysis can be applied to any time-series type of data and, in the music-analytical context, to polyphonic as well as homophonic music. Tonal trends, set-class similarities, invertible counterpoints, voice-leading similarities, short-term modulations, rhythmic similarities, and multiparametric changes in musical texture were studied. Since CSA allows a highly accurate classification of compositions, its methods may also be applicable to symbolic music information retrieval. The strength of CSA lies especially in the possibility of comparing observations concerning different musical parameters and of combining it with statistical and perhaps other music-analytical methods. The results of CSA depend on the suitability of the similarity measure, and new similarity measures for tonal stability, rhythmic similarity, and set-class similarity were proposed. The most advanced results were attained by employing automated function generation (comparable with so-called genetic programming) to search for an optimal model for set-class similarity measurements. However, the results of CSA agree strongly regardless of the type of similarity function employed in the analysis.
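The following sketch illustrates the comparison-structure idea in a deliberately simplified form: a melody is segmented into fixed-length windows, the pitch-class set of each window is extracted, and its overlap with a chosen reference set is reported window by window. The segmentation scheme and the Jaccard-style overlap are illustrative placeholders, not the similarity measures proposed in the study.

```python
# The chosen "comparison structure" here is a pitch-class set; the analysis
# evaluates its prevalence window by window through the piece.
def pitch_class_set(midi_notes):
    return frozenset(n % 12 for n in midi_notes)

def similarity(pcs_a, pcs_b):
    """Generic Jaccard overlap between two pitch-class sets (0..1)."""
    if not pcs_a and not pcs_b:
        return 1.0
    return len(pcs_a & pcs_b) / len(pcs_a | pcs_b)

def comparison_curve(melody, reference, window=8):
    """Slide a fixed-length window over the melody (a list of MIDI numbers)
    and score each segment against the reference pitch-class set."""
    return [similarity(pitch_class_set(melody[i:i + window]), reference)
            for i in range(0, len(melody) - window + 1, window)]

# Toy melody and a C-major triad {0, 4, 7} as the reference structure
melody = [60, 64, 67, 72, 64, 60, 67, 64, 62, 65, 69, 74, 65, 62, 69, 65]
print(comparison_curve(melody, reference=frozenset({0, 4, 7}), window=8))
```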
Abstract:
Local features are used in many computer vision tasks, including visual object categorization, content-based image retrieval, and object recognition, to mention a few. Local features are points, blobs, or regions in images that are extracted using a local feature detector. To make use of the extracted local features, the localized interest points are described using a local feature descriptor, and the resulting descriptor histogram vector is a compact representation of the image that can be used for searching and matching images in databases. In this thesis, the performance of local feature detectors and descriptors is evaluated for the object class detection task. Features are extracted from image samples belonging to several object classes, and matching features are then searched for using random image pairs from the same class. The goal of this thesis is to find out which detector and descriptor methods are best for such a task in terms of detector repeatability and descriptor matching rate.
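As a hedged illustration of the descriptor-matching part of such an evaluation, the sketch below detects and describes features with ORB (assuming OpenCV is available), matches them between an image pair of the same class with a brute-force Hamming matcher, and reports the fraction of features surviving a ratio test. The detector choice, the ratio threshold, and the file names are illustrative assumptions, not the thesis's actual protocol.

```python
import cv2

def matching_rate(img_path_a, img_path_b, ratio=0.75):
    """Detect and describe local features with ORB, match them between two
    images with a brute-force Hamming matcher, and report the fraction of
    detected features that survive the ratio test."""
    img_a = cv2.imread(img_path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(img_path_b, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, desc_a = orb.detectAndCompute(img_a, None)
    kp_b, desc_b = orb.detectAndCompute(img_b, None)
    if desc_a is None or desc_b is None:
        return 0.0

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    candidates = matcher.knnMatch(desc_a, desc_b, k=2)
    good = []
    for pair in candidates:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return len(good) / max(len(kp_a), 1)

# Hypothetical image pair from the same object class
print(matching_rate("class_sample_1.jpg", "class_sample_2.jpg"))
```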