923 results for Natural Language Processing, Recommender Systems, Android, Mobile Application


Relevance:

100.00%

Publisher:

Abstract:

The article considers the relevance of developing a system for subject search based on computational linguistics. The basic principles of the system's operation are defined, as is the principle of developing grammars for retrieving information from partially structured natural-language text. A principle for ranking the results of information search is also defined.
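The ranking principle mentioned above can be illustrated with a minimal TF-IDF-style scorer. This is a generic sketch, not the system from the article; the sample documents and query are invented:

```python
import math

def rank(query_terms, documents):
    """Rank documents by a simple TF-IDF score against the query terms."""
    n = len(documents)
    # document frequency of each query term
    df = {t: sum(1 for d in documents if t in d) for t in query_terms}

    def score(doc):
        s = 0.0
        for t in query_terms:
            tf = doc.count(t)           # term frequency in this document
            if tf and df[t]:
                s += tf * math.log(n / df[t])   # weight rare terms higher
        return s

    # indices of documents, highest score first
    return sorted(range(n), key=lambda i: score(documents[i]), reverse=True)

docs = [["grammar", "parsing"],
        ["search", "grammar", "search"],
        ["unrelated"]]
order = rank(["search", "grammar"], docs)   # → [1, 0, 2]
```

The second document ranks first because it matches both query terms and repeats the rarer one.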


Thesis (M. S.)--University of Illinois at Urbana-Champaign.


Title from caption.


In this study, we investigated the size, submicrometer-scale structure, and aggregation state of ZnS formed by sulfate-reducing bacteria (SRB) in an SRB-dominated biofilm growing on degraded wood in cold (T ≈ 8 °C), circumneutral-pH (7.2-8.5) waters draining from an abandoned, carbonate-hosted Pb-Zn mine. High-resolution transmission electron microscope (HRTEM) data reveal that the earliest biologically induced precipitates are crystalline ZnS nanoparticles 1-5 nm in diameter. Although most nanocrystals have the sphalerite structure, nanocrystals of wurtzite are also present, consistent with a predicted size dependence for ZnS phase stability. Nearly all the nanocrystals are concentrated into 1-5 µm diameter spheroidal aggregates that display concentric banding patterns indicative of episodic precipitation and flocculation. Abundant disordered stacking sequences and faceted, porous crystal-aggregate morphologies are consistent with aggregation-driven growth of ZnS nanocrystals prior to and/or during spheroid formation. Spheroids are typically coated by organic polymers or associated with microbial cell surfaces, and are concentrated roughly into layers within the biofilm. Size, shape, structure, degree of crystallinity, and polymer associations will all impact ZnS solubility, aggregation and coarsening behavior, transport in groundwater, and potential for deposition by sedimentation. Results presented here reveal nanometer- to micrometer-scale attributes of biologically induced ZnS formation likely to be relevant to sequestration, via bacterial sulfate reduction (BSR), of other potential contaminant metal(loid)s, such as Pb2+, Cd2+, As3+ and Hg2+, into metal sulfides. The results highlight the importance of basic mineralogical information for accurate prediction and monitoring of long-term contaminant metal mobility and bioavailability in natural and constructed bioremediation systems. Our observations also provoke interesting questions regarding the role of size-dependent phase stability in biomineralization and provide new insights into the origin of submicrometer- to millimeter-scale petrographic features observed in low-temperature sedimentary sulfide ore deposits.


Acquisition by Processing Theory (APT) is a unified account of language processing and learning that encompasses both L1 and L2 acquisition. Bold in aim and broad in scope, the proposal offers parsimony and comprehensiveness, both highly desirable in a theory of language acquisition. However, the sweep of the proposal is accompanied by an economy of description that makes it difficult to evaluate the validity of key learning claims, or even how literally they are to be interpreted. Two in particular deserve comment; the first concerns the learning mechanisms responsible for adding new L2 grammatical information, and the second the theoretical and empirical status of the activation concept used in the model.


We report the results of an experimental and theoretical study of the electronic and structural properties of a key eumelanin precursor, 5,6-dihydroxyindole-2-carboxylic acid (DHICA), and its dimeric forms. We have used optical spectroscopy to follow the oxidative polymerization of DHICA to eumelanin and observe red shifting and broadening of the absorption spectrum as the reaction proceeds. First-principles density functional theory calculations indicate that DHICA oligomers (possible reaction products of oxidative polymerization) have gaps between the highest occupied and lowest unoccupied molecular orbitals that are red-shifted with respect to the monomer. Furthermore, different bonding configurations (leading to oligomers with different structures) produce a range of gaps. These experimental and theoretical results lend support to the chemical disorder model, in which the broadband monotonic absorption characteristic of all melanins is a consequence of the superposition of a large number of inhomogeneously broadened Gaussian transitions associated with each of the components of a melanin ensemble. These results suggest that the traditional model of eumelanin as an amorphous organic semiconductor is not required to explain its optical properties and should be thoroughly reexamined. These results have significant implications for our understanding of the physics, chemistry, and biological function of these important biological macromolecules. Indeed, one may speculate that the robust functionality of melanins in vitro is a direct consequence of their heterogeneity, i.e., chemical disorder is a "low cost" natural resource in these systems.
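The chemical disorder model's central claim, that a broadband spectrum emerges from superposing many inhomogeneously broadened Gaussian transitions, can be sketched numerically. All energies, widths, and ensemble sizes below are illustrative, not fitted to DHICA data:

```python
import numpy as np

# Sum many Gaussian absorption lines whose centers are spread over a range
# of gap energies, as the chemical disorder model proposes for melanins.
rng = np.random.default_rng(0)
energy = np.linspace(1.5, 5.0, 500)           # photon energy axis, eV
centers = rng.uniform(2.0, 4.5, size=200)     # transition energies of the ensemble
widths = rng.uniform(0.2, 0.5, size=200)      # inhomogeneous broadening, eV

# total absorption: superposition of the individual Gaussian transitions
absorption = sum(np.exp(-((energy - c) / w) ** 2) for c, w in zip(centers, widths))
```

Plotted against `energy`, the sum rises and falls smoothly without resolvable individual peaks, which is the qualitative point of the model.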


Quantitative databases are limited to information identified as important by their creators, while databases containing natural language are limited by our ability to analyze large unstructured bodies of text. Leximancer is a tool that uses semantic mapping to develop concept maps from natural language. We have applied Leximancer to education-based pathology case notes to demonstrate how real patient records or databases of case studies could be analyzed to identify unique relationships. We then discuss how such analysis could be used to conduct quantitative analysis on databases such as the Coronary Heart Disease Database.
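Concept mapping of this kind can be approximated, very roughly, by counting co-occurrences of seed concepts across case notes. This is a toy sketch, not Leximancer's actual semantic-mapping algorithm; the notes and concept list are invented:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_map(notes, concepts):
    """Count how often pairs of seed concepts appear in the same note."""
    pairs = Counter()
    for note in notes:
        present = sorted(c for c in concepts if c in note.lower())
        for a, b in combinations(present, 2):   # every pair found together
            pairs[(a, b)] += 1
    return pairs

notes = ["Chest pain with elevated troponin.",
         "Troponin normal; chest pain resolved.",
         "Elevated troponin after infarction."]
links = cooccurrence_map(notes, {"chest pain", "troponin", "infarction"})
# links[("chest pain", "troponin")] == 2
```

Pair counts like these form the edge weights of a concept map; stronger links suggest relationships worth quantitative follow-up.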


Frith has argued that people with autism show “weak central coherence,” an unusual bias toward piecemeal rather than configurational processing and a reduction in the normal tendency to process information in context. However, the precise cognitive and neurological mechanisms underlying weak central coherence are still unknown. We propose the hypothesis that the features of autism associated with weak central coherence result from a reduction in the integration of specialized local neural networks in the brain caused by a deficit in temporal binding. The visuoperceptual anomalies associated with weak central coherence may be attributed to a reduction in synchronization of high-frequency gamma activity between local networks processing local features. The failure to utilize context in language processing in autism can be explained in similar terms. Temporal binding deficits could also contribute to executive dysfunction in autism and to some of the deficits in socialization and communication.


Although reading ability has been related to the processing of simple pitch features, such as isolated transitions or continuous modulation, spoken language also contains complex patterns of pitch changes that are important for establishing stress location and for segmenting the speech stream. These aspects of spoken language processing depend critically on pitch pattern (global structure) rather than on absolute pitch values (local structure). Here we show that the detection of global structure, and not of local structure, is predictive of performance on measures of phonological skill and reading ability, supporting a critical role for pitch contour processing in the acquisition of literacy.
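The distinction between global structure (pitch pattern) and local structure (absolute pitch values) can be made concrete with a small sketch: two utterances at different absolute pitches share the same contour. The pitch values are illustrative:

```python
def contour(pitches):
    """Global structure: the pattern of rises (+1), falls (-1) and plateaus (0),
    independent of the absolute pitch values (local structure)."""
    return [(a < b) - (a > b) for a, b in zip(pitches, pitches[1:])]

low_voice = [110, 130, 125, 140]    # Hz, illustrative
high_voice = [220, 260, 250, 280]   # same pattern roughly an octave up

same_pattern = contour(low_voice) == contour(high_voice)   # True
```

A listener sensitive only to local structure would treat these utterances as unrelated; contour detection recovers what they share.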


Over recent years, evidence has been accumulating in favour of the importance of long-term information as a variable which can affect the success of short-term recall. Lexicality, word frequency, imagery and meaning have all been shown to augment short-term recall performance. Two competing theories as to the causes of this long-term memory influence are outlined and tested in this thesis. The first approach is the order-encoding account, which ascribes the effect to the usage of resources at encoding, hypothesising that word lists which require less effort to process will benefit from increased levels of order encoding, in turn enhancing recall success. The alternative view, trace redintegration theory, suggests that order is automatically encoded phonologically, and that long-term information can only influence the interpretation of the resultant memory trace. The free recall experiments reported here attempted to determine the importance of order encoding as a facilitatory framework and to determine the locus of the effects of long-term information in free recall. Experiments 1 and 2 examined the effects of word frequency and semantic categorisation over a filled delay, and Experiments 3 and 4 did the same for immediate recall. Free recall was improved by both long-term factors tested. Order information was not used over a short filled delay, but was evident in immediate recall. Furthermore, it was found that both long-term factors increased the amount of order information retained. Experiment 5 induced an order-encoding effect over a filled delay, leaving a picture of short-term processes which are closely associated with long-term processes, and which fit conceptions of short-term memory as part of language processes rather better than either the encoding-based or the retrieval-based models. Experiments 6 and 7 aimed to determine to what extent phonological processes were responsible for the pattern of results observed. Articulatory suppression affected the encoding of order information where speech rate had no direct influence, suggesting that it is ease of lexical access which is the most important factor in the influence of long-term memory on immediate recall tasks. The evidence presented in this thesis does not offer complete support for either the retrieval-based account or the order-encoding account of long-term influence. Instead, the evidence sits best with models that are based upon language processing. The path urged for future research is to find ways in which this diffuse model can be better specified, and which can take account of the versatility of the human brain.


This thesis presents the results of an investigation into the merits of analysing magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study both of the methods for the measurement of minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and of the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc. bands that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons which suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators.
One of the main objectives of this thesis will be to show that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in the MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are, necessarily, extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high-frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are the high costs and low portability of state-of-the-art multichannel machines. The result of this is that the use of MEG has, hitherto, been restricted to large institutions which are able to afford the high costs associated with the procurement and maintenance of these machines. In this project, we seek to address these issues by working almost exclusively with single-channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks, to the analysis of MEG data.
It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
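A standard first step in this kind of nonlinear, single-channel analysis is time-delay (Takens) embedding, which reconstructs a state-space trajectory from one observed channel. A minimal sketch on synthetic data; the embedding dimension, delay, and signal are illustrative, not taken from the thesis:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding: build state vectors (x[t], x[t+tau], ..., x[t+(dim-1)*tau])
    so dynamical-systems methods can be applied to a single-channel recording."""
    n = len(x) - (dim - 1) * tau          # number of complete state vectors
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# a single synthetic oscillatory 'channel' with noise, standing in for MEG
t = np.linspace(0, 10, 1000)
signal = np.sin(2 * np.pi * t) + 0.1 * np.random.default_rng(1).standard_normal(1000)

trajectory = delay_embed(signal, dim=3, tau=25)   # shape (950, 3)
```

Quantities such as correlation dimension or Lyapunov exponents are then estimated from `trajectory` rather than from the raw time series.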


Early, lesion-based models of language processing suggested that semantic and phonological processes are associated with distinct temporal and parietal regions respectively, with frontal areas more indirectly involved. Contemporary spatial brain mapping techniques have not supported such clear-cut segregation, with strong evidence of activation in left temporal areas by both processes and disputed evidence of involvement of frontal areas in both processes. We suggest that combining spatial information with temporal and spectral data may allow a closer scrutiny of the differential involvement of closely overlapping cortical areas in language processing. Using beamforming techniques to analyze magnetoencephalography data, we localized the neuronal substrates underlying primed responses to nouns requiring either phonological or semantic processing, and examined the associated measures of time and frequency in those areas where activation was common to both tasks. Power changes in the beta (14-30 Hz) and gamma (30-50 Hz) frequency bands were analyzed in pre-selected time windows of 350-550 and 500-700 ms. In left temporal regions, both tasks elicited power changes in the same time window (350-550 ms), but with different spectral characteristics: low beta (14-20 Hz) for the phonological task and high beta (20-30 Hz) for the semantic task. In frontal areas (BA10), both tasks elicited power changes in the gamma band (30-50 Hz), but in different time windows: 500-700 ms for the phonological task and 350-550 ms for the semantic task. In the left inferior parietal area (BA40), both tasks elicited changes in the 20-30 Hz beta frequency band but in different time windows: 350-550 ms for the phonological task and 500-700 ms for the semantic task. Our findings suggest that, where spatial measures may indicate overlapping areas of involvement, additional beamforming techniques can demonstrate differential activation in the time and frequency domains.
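The core measure used here, spectral power in a given frequency band within a given time window, can be sketched with a plain FFT on synthetic data. This is a toy stand-in for beamformer source time series; the sampling rate, band edges, and windows below are illustrative:

```python
import numpy as np

def band_power(x, fs, band, window):
    """Mean spectral power of x (sampling rate fs, Hz) in the frequency band
    (lo, hi) Hz, restricted to the time window (start, end) in seconds."""
    start, end = (int(w * fs) for w in window)
    seg = x[start:end]                                  # cut out the time window
    freqs = np.fft.rfftfreq(len(seg), 1.0 / fs)
    spectrum = np.abs(np.fft.rfft(seg)) ** 2            # power spectrum of the segment
    mask = (freqs >= band[0]) & (freqs < band[1])
    return spectrum[mask].mean()

fs = 250
t = np.arange(0, 1.0, 1.0 / fs)
# a 25 Hz (high-beta) burst confined to 0.35-0.55 s
x = np.where((t >= 0.35) & (t < 0.55), np.sin(2 * np.pi * 25 * t), 0.0)

early = band_power(x, fs, band=(20, 30), window=(0.35, 0.55))
late = band_power(x, fs, band=(20, 30), window=(0.50, 0.70))   # early > late
```

Comparing the same band across the two windows localizes the burst in time, which is the logic behind contrasting 350-550 ms and 500-700 ms windows.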
© 2012 McNab, Hillebrand, Swithenby and Rippon.


This paper proposes a novel framework for incorporating protein-protein interaction (PPI) ontology knowledge into PPI extraction from biomedical literature, in order to address the emerging challenges of deep natural language understanding. It is built upon the existing work on relation extraction using the Hidden Vector State (HVS) model. The HVS model belongs to the category of statistical learning methods. It can be trained directly from un-annotated data in a constrained way whilst at the same time being able to capture the underlying named entity relationships. However, it is difficult to incorporate background knowledge or non-local information into the HVS model. This paper proposes to represent the HVS model as a conditionally trained undirected graphical model in which non-local features derived from the PPI ontology through inference can be easily incorporated. The seamless fusion of ontology inference with statistical learning produces a new paradigm for information extraction.
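The fusion of local statistical features with non-local, ontology-derived features can be sketched as a log-linear score of the kind used in conditionally trained undirected graphical models. The feature names, weights, and ontology lookup below are invented for illustration and are not the paper's HVS formulation:

```python
import math

# A hypothetical ontology of known interacting protein pairs.
ONTOLOGY_INTERACTS = {("raf", "mek"), ("mek", "erk")}

def features(sentence, p1, p2):
    return {
        # local feature: lexical evidence in the sentence itself
        "verb_interact": 1.0 if "interacts" in sentence else 0.0,
        # non-local feature: background knowledge from the ontology
        "ontology_known_pair": 1.0 if (p1, p2) in ONTOLOGY_INTERACTS else 0.0,
    }

WEIGHTS = {"verb_interact": 1.2, "ontology_known_pair": 0.8}   # illustrative weights

def p_interaction(sentence, p1, p2):
    """Logistic-normalized log-linear score combining both feature types."""
    score = sum(WEIGHTS[k] * v for k, v in features(sentence, p1, p2).items())
    return 1.0 / (1.0 + math.exp(-score))

p = p_interaction("raf interacts with mek", "raf", "mek")   # both features fire
```

The point of the log-linear form is exactly this: any feature, local or ontology-derived, enters the score the same way, so background knowledge needs no special machinery.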


This paper presents a comparative study of three closely related Bayesian models for unsupervised document-level sentiment classification, namely the latent sentiment model (LSM), the joint sentiment-topic (JST) model, and the Reverse-JST model. Extensive experiments have been conducted on two corpora, the movie review dataset and the multi-domain sentiment dataset. It has been found that while all three models achieve better or comparable performance on these two corpora when compared to existing unsupervised sentiment classification approaches, both JST and Reverse-JST are also able to extract sentiment-oriented topics. In addition, Reverse-JST always performs worse than JST, suggesting that the JST model is more appropriate for joint sentiment-topic detection.
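Unsupervised sentiment classification of this kind typically bootstraps from word-level sentiment priors. A deliberately naive scorer illustrates that starting point; it is a toy in the spirit of the lexicon priors these models use, not the Gibbs-sampled LSM/JST models themselves, and the lexicon is invented:

```python
from collections import defaultdict

# Hypothetical prior lexicon mapping words to sentiment labels.
PRIOR = {"excellent": "pos", "great": "pos", "boring": "neg", "awful": "neg"}

def classify(doc_words):
    """Label a document by majority vote of its prior-labelled words."""
    counts = defaultdict(int)
    for w in doc_words:
        if w in PRIOR:
            counts[PRIOR[w]] += 1
    if not counts:
        return "neutral"
    return max(counts, key=counts.get)

label = classify("a great cast and an excellent script but a boring plot".split())
# label == "pos"
```

Models like JST go beyond this by letting the priors interact with latent topics, which is what enables them to extract sentiment-oriented topics as well as document labels.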


Natural language understanding (NLU) aims to map sentences to their semantic meaning representations. Statistical approaches to NLU normally require fully annotated training data where each sentence is paired with its word-level semantic annotations. In this paper, we propose a novel learning framework which trains Hidden Markov Support Vector Machines (HM-SVMs) without the use of expensive fully annotated data. In particular, our learning approach takes as input a training set of sentences labeled with abstract semantic annotations encoding underlying embedded structural relations, and automatically induces derivation rules that map sentences to their semantic meaning representations. The proposed approach has been tested on the DARPA Communicator Data and achieved an F-measure of 93.18%, outperforming the previously proposed approaches of training the hidden vector state model or conditional random fields from unaligned data, with relative error reductions of 43.3% and 10.6%, respectively.
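The reported relative error reduction follows directly from the F-measures involved, as a small sketch shows. The baseline F-measure below is illustrative, chosen only to reproduce a reduction near 43%; it is not a figure from the paper:

```python
def f_measure(precision, recall):
    """F1: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def relative_error_reduction(f_old, f_new):
    """How much of the old system's error (1 - F) the new system removes."""
    return ((1 - f_old) - (1 - f_new)) / (1 - f_old)

# with an assumed baseline F of 0.88 against the reported 0.9318:
rer = relative_error_reduction(f_old=0.88, f_new=0.9318)   # ~0.43
```

Note that relative error reduction compares error rates, not F-measures directly, which is why a ~5-point F gain can correspond to a ~43% reduction.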