973 results for combinatorial pattern matching


Relevance:

80.00%

Publisher:

Abstract:

We detected and mapped a dynamically spreading wave of gray matter loss in the brains of patients with Alzheimer's disease (AD). The loss pattern was visualized in four dimensions as it spread over time from temporal and limbic cortices into frontal and occipital brain regions, sparing sensorimotor cortices. The shifting deficits were asymmetric (left hemisphere > right hemisphere) and correlated with progressively declining cognitive status (p < 0.0006). Novel brain mapping methods allowed us to visualize dynamic patterns of atrophy in 52 high-resolution magnetic resonance image scans of 12 patients with AD (age 68.4 ± 1.9 years) and 14 elderly matched controls (age 71.4 ± 0.9 years) scanned longitudinally (two scans; interscan interval 2.1 ± 0.4 years). A cortical pattern matching technique encoded changes in brain shape and tissue distribution across subjects and time. Cortical atrophy occurred in a well-defined sequence as the disease progressed, mirroring the sequence of neurofibrillary tangle accumulation observed in cross sections at autopsy. Advancing deficits were visualized as dynamic maps that change over time. Frontal regions, spared early in the disease, showed pervasive deficits later (> 15% loss). The maps distinguished different phases of AD and differentiated AD from normal aging. Local gray matter loss rates (5.3 ± 2.3% per year in AD vs. 0.9 ± 0.9% per year in controls) were faster in the left hemisphere (p < 0.029) than in the right. Transient barriers to disease progression appeared at limbic/frontal boundaries. This degenerative sequence, observed in vivo as it developed, provides the first quantitative, dynamic visualization of cortical atrophy rates in normal elderly populations and in those with dementia.

Relevance:

80.00%

Publisher:

Abstract:

This paper describes algorithms that can identify patterns of brain structure and function associated with Alzheimer's disease, schizophrenia, normal aging, and abnormal brain development based on imaging data collected in large human populations. Extraordinary information can be discovered with these techniques: dynamic brain maps reveal how the brain grows in childhood, how it changes in disease, and how it responds to medication. Genetic brain maps can reveal genetic influences on brain structure, shedding light on the nature-nurture debate, and the mechanisms underlying inherited neurobehavioral disorders. Recently, we created time-lapse movies of brain structure for a variety of diseases. These identify complex, shifting patterns of brain structural deficits, revealing where, and at what rate, the path of brain deterioration in illness deviates from normal. Statistical criteria can then identify situations in which these changes are abnormally accelerated, or when medication or other interventions slow them. In this paper, we focus on describing our approaches to map structural changes in the cortex. These methods have already been used to reveal the profile of brain anomalies in studies of dementia, epilepsy, depression, childhood- and adult-onset schizophrenia, bipolar disorder, attention-deficit/hyperactivity disorder, fetal alcohol syndrome, Tourette syndrome, Williams syndrome, and in methamphetamine abusers. Specifically, we describe an image analysis pipeline known as cortical pattern matching that helps compare and pool cortical data over time and across subjects. Statistics are then defined to identify brain structural differences between groups, including localized alterations in cortical thickness, gray matter density (GMD), and asymmetries in cortical organization. Subtle features, not seen in individual brain scans, often emerge when population-based brain data are averaged in this way. Illustrative examples are presented to show the profound effects of development and various diseases on the human cortex. Dynamically spreading waves of gray matter loss are tracked in dementia and schizophrenia, and these sequences are related to normally occurring changes in healthy subjects of various ages.
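
The statistics mentioned above are not specified in the abstract; as a hedged, toy illustration of a vertex-wise group comparison (the GMD values, group sizes and vertex count below are invented, and scipy is assumed available), one could run a two-sample t-test at each matched cortical surface point:

```python
# Assumed illustration, not the authors' pipeline: after cortical
# pattern matching has put subjects into correspondence, compare gray
# matter density (GMD) between groups at each surface point.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_vertices = 1000                                     # matched surface points
patients = rng.normal(0.45, 0.05, (12, n_vertices))   # toy GMD values
controls = rng.normal(0.50, 0.05, (14, n_vertices))

# One two-sample t-test per vertex yields a statistical deficit map.
t_map, p_map = stats.ttest_ind(patients, controls, axis=0)
print("vertices with uncorrected p < 0.05:", int((p_map < 0.05).sum()))
```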

Relevance:

80.00%

Publisher:

Abstract:

When experts construct mental images, they do not rely only on perceptual features; they also access domain-specific knowledge and skills in long-term memory, which enables them to exceed the capacity limitations of the short-term working memory system. The central question of the present dissertation was whether the facilitating effect of long-term memory knowledge on working memory imagery tasks is primarily based on perceptual chunking or whether it relies on higher-level conceptual knowledge. Three domains of expertise were studied: chess, music, and taxi driving. The effects of skill level, stimulus surface features, and the stimulus structure on incremental construction of mental images were investigated. A method was developed to capture the chunking mechanisms that experts use in constructing images: chess pieces, street names, and visual notes were presented in a piecemeal fashion for later recall. Over 150 experts and non-experts participated in a total of 13 experiments, as reported in five publications. The results showed skill effects in all of the studied domains when experts performed memory and problem solving tasks that required mental imagery. Furthermore, only experts' construction of mental images benefited from meaningful stimuli. Manipulation of the stimulus surface features, such as replacing chess pieces with dots, did not significantly affect experts' performance in the imagery tasks. In contrast, the structure of the stimuli had a significant effect on experts' performance in every task domain. For example, taxi drivers recalled more street names from lists that formed a spatially continuous route than from alphabetically organised lists. The results suggest that the mechanisms of conceptual chunking rather than automatic perceptual pattern matching underlie expert performance, even though the tasks of the present studies required perception-like mental representations. The results show that experts are able to construct skilled images that surpass working memory capacity, and that their images are conceptually organised and interpreted rather than merely depictive.

Relevance:

80.00%

Publisher:

Abstract:

In this thesis we present and evaluate two pattern-matching-based methods for answer extraction in textual question answering systems. A textual question answering system is a system that seeks answers to natural language questions from unstructured text. Textual question answering systems are an important research problem because, as the amount of natural language text in digital format grows, the need for novel methods for pinpointing important knowledge in vast textual databases becomes ever more urgent. We concentrate on developing methods for the automatic creation of answer extraction patterns; a new type of extraction pattern is also developed. The pattern-matching-based approach is attractive because it is language- and application-independent. The answer extraction methods are developed in the framework of our own question answering system, with publicly available English datasets used for training and evaluation. The techniques developed are based on the well-known methods of sequence alignment and hierarchical clustering, with a similarity metric based on edit distance. The main conclusion of the research is that answer extraction patterns consisting of the most important words of the question, combined with plain words, part-of-speech tags, punctuation marks and capitalization patterns extracted from the answer context, can be used in the answer extraction module of a question answering system. This type of pattern, and the two new methods for generating answer extraction patterns, produce average results compared with other systems using the same dataset. However, most answer extraction methods in the question answering systems tested on this dataset are both hand-crafted and based on a system-specific, fine-grained question classification. The new methods developed in this thesis require no manual creation of answer extraction patterns. As a source of knowledge, they require a dataset of sample questions and answers, as well as a set of text documents that contain answers to most of the questions. The question classification used in the training data is a standard one, already provided in the publicly available data.
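
A minimal sketch of the building blocks the abstract names, edit distance as the similarity metric and a (here greatly simplified, greedy) clustering of candidate patterns; the token sequences are invented examples mixing plain words, part-of-speech tags and a capitalization marker, not the thesis's actual patterns:

```python
def edit_distance(a, b):
    """Levenshtein distance over token sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + cost)
    return d[m][n]

def cluster(patterns, threshold):
    """Greedy single-linkage clustering: a pattern joins the first
    cluster that contains a member within `threshold` edits."""
    clusters = []
    for p in patterns:
        for c in clusters:
            if any(edit_distance(p, q) <= threshold for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

# Hypothetical patterns: question keyword plus answer-context tokens.
patterns = [
    ("born", "in", "CAP", ",", "NNP"),
    ("born", "in", "CAP", ".", "NNP"),
    ("died", "on", "CD", "NNP", "CD"),
]
for c in cluster(patterns, threshold=1):
    print(c)
```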

Relevance:

80.00%

Publisher:

Abstract:

This doctoral dissertation takes a buy-side perspective on third-party logistics (3PL) providers' service tiering, applying a linear serial dyadic view to transactions. It takes its point of departure not only from the established focus on dyads as units of analysis and how to manage them, but also from the characteristics that both create and determine purposeful conditions of longer duration. A conceptual framework is proposed and evaluated on its ability to capture logistics service buyers' perceptions of service tiering. The problem is discussed in the theoretical context of logistics and reflects value appropriation, power dependencies, visibility in linear serial dyads, a movement towards more market-governed modes of transactions (i.e. service tiering) and buyers' risk perception of broader utilisation of the logistics services market. Service tiering in a supply chain setting, where multilateral agreements between supply chain members are lacking, is new. The deductive research approach applied, in which theoretically based propositions are empirically tested with quantitative and qualitative data, provides new insight into (contractual) transactions in 3PL. The study findings imply that the understanding of power dependencies and supply chain dynamics in a 3PL context is still in its infancy. The issues found include separation of service responsibilities, supply chain visibility, price-making behaviour and supply chain strategies under changing circumstances or the influence of non-immediate supply chain actors. Understanding (or failing to understand) these issues may have significant implications for the industry: the contingencies may trigger more open-book policies, a larger liability scope for 3PL service providers, or insourcing of critical logistics activities from the core-business and customer-service perspectives of the first-tier buyer. In addition, a sufficient understanding of the issues surrounding service tiering enables proactive responses and the devising of appropriate supply chain strategies. The author concludes that qualitative research designs facilitating data collection on multiple supply chain actors may capture and increase understanding of the impact of broader supply chain strategies. This would enable pattern-matching through an examination of two or more sides of exchange transactions to measure relational symmetries across linear serial dyads. Indeed, the performance of the firm depends not only on how efficiently it cooperates with its partners, but also on how well exchange partners cooperate with an organisation's own business.

Relevance:

80.00%

Publisher:

Abstract:

Over the past few years, studies of cultured neuronal networks have opened up avenues for understanding the ion channels, receptor molecules, and synaptic plasticity that may form the basis of learning and memory. Hippocampal neurons from rats are dissociated and cultured on a surface containing a grid of 64 electrodes. The signals from these 64 electrodes are acquired using a fast data acquisition system, MED64 (Alpha MED Sciences, Japan), at a sampling rate of 20 k samples per second with a precision of 16 bits per sample. A few minutes of acquired data run to a few hundred megabytes. The data processing for the neural analysis is highly compute-intensive because the volume of data is huge. The major processing requirements are noise removal, pattern recovery, pattern matching, clustering and so on. In order to interface a neuronal colony to the physical world, these computations need to be performed in real time. A single processor, such as a desktop computer, may not be adequate to meet these computational requirements. Parallel computing is a method used to satisfy the real-time computational requirements of a neuronal system that interacts with an external world, while increasing the flexibility and scalability of the application. In this work, we developed a parallel neuronal system using a multi-node digital signal processing system. With 8 processors, the system is able to compute and map incoming signals, segmented over a period of 200 ms, into an action in a trained cluster system in real time.
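
A hedged sketch of the parallelization idea, not the MED64/DSP implementation: worker processes share the per-channel pattern matching for one 200 ms segment across 8 workers, echoing the 64-electrode layout. The channel data, template and correlation-based matcher are assumptions:

```python
import numpy as np
from multiprocessing import Pool

FS = 20_000          # 20 k samples/s, as in the abstract
SEGMENT = FS // 5    # one 200 ms segment

def match_channel(args):
    """Score one electrode channel against a spike template."""
    chan_id, signal, template = args
    # Cross-correlation as a stand-in for the pattern matching step.
    corr = np.correlate(signal, template, mode="valid")
    return chan_id, float(corr.max())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    template = rng.standard_normal(64)          # toy spike template
    segments = [(ch, rng.standard_normal(SEGMENT), template)
                for ch in range(64)]            # 64 electrode channels
    with Pool(processes=8) as pool:             # 8 workers, as in the paper
        scores = pool.map(match_channel, segments)
    best = max(scores, key=lambda s: s[1])
    print("strongest match on electrode", best[0])
```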

Relevance:

80.00%

Publisher:

Abstract:

Fragment Finder 2.0 is a web-based interactive computing server which can be used to retrieve structurally similar protein fragments from 25% and 90% nonredundant data sets. The computing server identifies structurally similar fragments using the protein backbone Cα angles. In addition, the identified fragments can be superimposed using either of the two structural superposition programs, STAMP and PROFIT, provided in the server. The freely available Java plug-in Jmol has been interfaced with the server for visualization of the query and superposed fragments. The server is an updated version of a previously developed search engine and employs a fast, in-house-developed pattern matching algorithm. The server can be accessed freely over the World Wide Web at http://cluster.physics.iisc.ernet.in/ff/.
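
The server's fast pattern matching algorithm is not described in the abstract; the sketch below only illustrates the general idea of matching fragments by backbone Cα pseudo bond angles, with invented coordinates and an invented tolerance:

```python
import math

def ca_angles(coords):
    """Pseudo bond angle at each interior Cα, from consecutive Cα coordinates."""
    angles = []
    for a, b, c in zip(coords, coords[1:], coords[2:]):
        v1 = [a[i] - b[i] for i in range(3)]
        v2 = [c[i] - b[i] for i in range(3)]
        dot = sum(x * y for x, y in zip(v1, v2))
        n1 = math.sqrt(sum(x * x for x in v1))
        n2 = math.sqrt(sum(x * x for x in v2))
        cosang = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp rounding error
        angles.append(math.degrees(math.acos(cosang)))
    return angles

def find_fragments(query_angles, db_angles, tol=10.0):
    """Sliding-window match: start positions where every angle is
    within `tol` degrees of the query."""
    k = len(query_angles)
    return [s for s in range(len(db_angles) - k + 1)
            if all(abs(q - d) <= tol
                   for q, d in zip(query_angles, db_angles[s:s + k]))]

# Toy demo with invented coordinates: a kinked 6-residue Cα trace.
trace = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0), (3, 1, 0), (3, 2, 0)]
print(find_fragments(ca_angles(trace[:3]), ca_angles(trace)))  # -> [0, 1, 3]
```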

Relevance:

80.00%

Publisher:

Abstract:

Background: The function of a protein can be deciphered with higher accuracy from its structure than from its amino acid sequence. Due to the huge gap between the available protein sequence space and structural space, tools that can generate functionally homogeneous clusters using only sequence information hold great importance. Traditional alignment-based tools work well in most cases, with clustering performed on the basis of sequence similarity. However, in the case of multi-domain proteins, the alignment quality might be poor due to the varied lengths of the proteins, domain shuffling or circular permutations. Multi-domain proteins are ubiquitous in nature, hence alignment-free tools, which overcome the shortcomings of alignment-based protein comparison methods, are required. Further, existing tools classify proteins using only domain-level information and hence miss the information encoded in the tethered regions or accessory domains. Our method, on the other hand, takes into account the full-length sequence of a protein, consolidating the complete sequence information to understand a given protein better. Results: Our web server, CLAP (Classification of Proteins), is one such alignment-free tool for automatic classification of protein sequences. It utilizes a pattern-matching algorithm that assigns local matching scores (LMS) to residues that are part of the matched patterns between two sequences being compared. CLAP works on full-length sequences and does not require prior domain definitions. Pilot studies undertaken previously on protein kinases and immunoglobulins have shown that CLAP yields clusters with high functional and domain architectural similarity. Moreover, partitioning at a statistically determined cut-off resulted in clusters that corresponded to the sub-family-level classification of the particular domain family. Conclusions: CLAP is a useful protein-clustering tool, independent of domain assignment, domain order, sequence length and domain diversity. Our method can be used for any set of protein sequences, yielding functionally relevant clusters with high domain architectural homogeneity. The CLAP web server is freely available for academic use at http://nslab.mbu.iisc.ernet.in/clap/.
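
CLAP's local matching score (LMS) computation is not detailed in the abstract; as an assumed, minimal stand-in, the sketch below scores the fraction of one full-length sequence covered by k-mer patterns shared with another, the kind of alignment-free pairwise similarity a clustering step could consume:

```python
def shared_kmer_score(seq_a, seq_b, k=3):
    """Fraction of seq_a residues covered by k-mers also present in seq_b."""
    kmers_b = {seq_b[i:i + k] for i in range(len(seq_b) - k + 1)}
    covered = [False] * len(seq_a)
    for i in range(len(seq_a) - k + 1):
        if seq_a[i:i + k] in kmers_b:
            for j in range(i, i + k):
                covered[j] = True          # residue is part of a matched pattern
    return sum(covered) / len(seq_a)

# Toy sequences (invented); no alignment or domain definition is needed.
print(shared_kmer_score("MKVLAAGLLWT", "AAGLLW"))  # -> ~0.55
```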

Relevance:

80.00%

Publisher:

Abstract:

VODIS II, a research system in which recognition is based on the conventional one-pass connected-word algorithm extended in two ways, is described. Syntactic constraints can now be applied directly via context-free grammar rules, and the algorithm generates a lattice of candidate word matches rather than a single globally optimal sequence. This lattice is then processed by a chart parser and an intelligent dialogue controller to obtain the most plausible interpretations of the input. A key feature of the VODIS II architecture is that the concept of an abstract word model allows the system to be used with different pattern-matching technologies and hardware. The current system implements the word models on a real-time dynamic time warping recognizer.
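
A hedged sketch of dynamic time warping, the pattern-matching technology the current system implements; this is the textbook algorithm, not the VODIS II recognizer, and the frame values and word templates are placeholders:

```python
def dtw(seq_a, seq_b, dist=lambda x, y: abs(x - y)):
    """Classic O(len(a) * len(b)) DTW alignment cost."""
    inf = float("inf")
    m, n = len(seq_a), len(seq_b)
    d = [[inf] * (n + 1) for _ in range(m + 1)]
    d[0][0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = dist(seq_a[i - 1], seq_b[j - 1])
            # Best of insertion, deletion, or match continuation.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[m][n]

# Match an input frame sequence against two toy word templates.
template_yes, template_no = [1.0, 2.0, 3.0], [3.0, 2.0, 1.0]
utterance = [1.1, 1.9, 2.8, 3.1]
print(min(("yes", dtw(utterance, template_yes)),
          ("no", dtw(utterance, template_no)), key=lambda t: t[1]))
```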

Relevance:

80.00%

Publisher:

Abstract:

A parallel processing network derived from Kanerva's associative memory theory (Kanerva, 1984) is shown to be able to train rapidly on connected speech data and recognize further speech data with a label error rate of 0.68%. This modified Kanerva model can be trained substantially faster than other networks with comparable pattern discrimination properties. Kanerva presented his theory of a self-propagating search in 1984, and showed theoretically that large-scale versions of his model would have powerful pattern matching properties. This paper describes how the design for the modified Kanerva model is derived from Kanerva's original theory. Several designs are tested to discover which form may be implemented fastest while still maintaining versatile recognition performance. A method is developed to deal with the time-varying nature of the speech signal by recognizing static patterns together with a fixed quantity of contextual information. In order to recognize speech features in different contexts, a network must be able to model disjoint pattern classes. This type of modelling cannot be performed by a single layer of links. Network research was once held back by the inability of single-layer networks to solve this sort of problem and the lack of a training algorithm for multi-layer networks. Rumelhart, Hinton & Williams (1985) provided one solution by demonstrating the "back propagation" training algorithm for multi-layer networks. A second alternative is used in the modified Kanerva model: a non-linear fixed transformation maps the pattern space into a space of higher dimensionality in which the speech features are linearly separable, and a single-layer network may then be used to perform the recognition. The advantage of this solution over multi-layer networks lies in the greater power and speed of the single-layer network training algorithm.
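
A minimal sketch of the design principle described in the final sentences (an assumed illustration, not the paper's model): a fixed random non-linear projection into a higher-dimensional space, followed by a trainable single-layer network. Dimensions, data and the toy target are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_high = 20, 512        # input size and expanded dimensionality

# Fixed non-linear transformation: random hyperplanes plus a hard threshold.
w_fixed = rng.standard_normal((d_high, d_in))

def expand(x):
    """Map the input into a higher-dimensional binary feature space."""
    return (w_fixed @ x > 0).astype(float)

# Only a single layer is trained (perceptron rule) on the expanded features.
w_out = np.zeros(d_high)
for _ in range(200):
    x = rng.standard_normal(d_in)
    target = float(x[0] > 0)              # toy two-class label
    h = expand(x)
    pred = float(w_out @ h > 0)
    w_out += 0.1 * (target - pred) * h    # fast single-layer update
```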

Relevance:

80.00%

Publisher:

Abstract:

This paper analyses the BM algorithm, currently the most widely used on networks, and its improved variant BMH, and on this basis proposes BMH2, an improved version of the BMH algorithm. Taking the characteristics of the pattern string itself into account, a new shift array is added alongside the original shift-distance array, so that the pattern's features are fully exploited to make longer shifts and the algorithm achieves higher efficiency. Experiments show that the improved algorithm increases the right-shift distance of the "bad character" method and effectively raises the matching speed.
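
For reference, a sketch of the baseline Boyer-Moore-Horspool (BMH) matcher that BMH2 extends; the second shift array proposed in the paper is not reproduced here, only the standard bad-character table:

```python
def bmh_search(text, pattern):
    """Return the first index of `pattern` in `text`, or -1."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return -1
    # Bad-character table: for each character in pattern[:-1], the shift
    # is its distance from the pattern's last position.
    shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    i = 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            return i
        # Shift by the table entry for the character under the pattern's
        # last position; unseen characters allow a full-length shift.
        i += shift.get(text[i + m - 1], m)
    return -1

print(bmh_search("here is a simple example", "example"))  # -> 17
```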

Relevance:

80.00%

Publisher:

Abstract:

A Function Definition Language (FDL) is presented. Though designed for describing specifications, FDL is also a general-purpose functional programming language. It uses context-free languages as data types, supports pattern-matching definitions of functions, offers several function definition forms, and is executable. It is shown that FDL is highly expressive, easy to use, and describes algorithms concisely and naturally. An interpreter for FDL is introduced, and experiments and discussion are included.
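
FDL's concrete syntax is not given in the abstract; as a stand-in, this Python sketch (3.10+ structural pattern matching) shows the style of defining a function by patterns on the shape of its argument:

```python
def length(lst):
    """List length defined by cases on the argument's structure."""
    match lst:
        case []:             # pattern: the empty list
            return 0
        case [_, *rest]:     # pattern: a head followed by a tail
            return 1 + length(rest)

print(length([3, 1, 4]))     # -> 3
```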

Relevance:

80.00%

Publisher:

Abstract:

C.H. Orgill, N.W. Hardy, M.H. Lee, and K.A.I. Sharpe. An application of a multiple agent system for flexible assembly tasks. In Knowledge-Based Environments for Industrial Applications Including Cooperating Expert Systems in Control. IEE, London, 1989.

Relevance:

80.00%

Publisher:

Abstract:

Murphy, L., Lewandowski, G., McCauley, R., Simon, B., Thomas, L., and Zander, C. 2008. Debugging: the good, the bad, and the quirky -- a qualitative analysis of novices' strategies. SIGCSE Bull. 40, 1 (Feb. 2008), 163-167.

Relevance:

80.00%

Publisher:

Abstract:

Simulation of pedestrian evacuations of smart buildings in emergencies is a powerful tool for building analysis, dynamic evacuation planning and real-time response to the evolving state of an evacuation. Macroscopic pedestrian models are low-complexity models that are well suited to algorithmic analysis and planning, but are quite abstract. Microscopic simulation models allow for a high level of simulation detail but can be computationally intensive. By combining micro- and macro-models we can use each to overcome the shortcomings of the other, enabling new capabilities and applications for pedestrian evacuation simulation that would not be possible with either alone. We develop the EvacSim multi-agent pedestrian simulator and procedurally generate macroscopic flow graph models of building space, integrating micro- and macroscopic approaches to simulation of the same emergency space. By "coupling" flow graph parameters to microscopic simulation results, the graph model captures some of the higher detail and fidelity of the complex microscopic simulation model. The coupled flow graph is used for analysis and prediction of the movement of pedestrians in the microscopic simulation, and we investigate the performance of dynamic evacuation planning in simulated emergencies using a variety of strategies for allocating macroscopic evacuation routes to microscopic pedestrian agents. The predictive capability of the coupled flow graph is exploited to decompose the microscopic simulation space into multiple future states in a scalable manner. By simulating multiple future states of the emergency in short time frames, this enables a sensing strategy based on simulation-scenario pattern matching, which we show achieves fast scenario matching, enabling rich, real-time feedback in emergencies in buildings with meagre sensing capabilities.
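
A hedged sketch of the macro/micro coupling idea (an assumed illustration, not EvacSim's implementation): a macroscopic flow graph whose edge traversal times are re-estimated from observed microscopic agent timings before evacuation routes are chosen. The building graph, timings and costs are invented:

```python
import heapq
from collections import defaultdict

graph = {  # corridor graph: node -> [(neighbor, default_seconds)]
    "room": [("corridor", 10.0)],
    "corridor": [("stairs", 20.0), ("lobby", 15.0)],
    "stairs": [("exit", 30.0)],
    "lobby": [("exit", 25.0)],
    "exit": [],
}

observed = defaultdict(list)   # microscopic per-edge timings
observed[("corridor", "stairs")] = [41.0, 39.5]  # congestion slows this edge

def coupled_cost(u, v, default):
    """Edge cost coupled to micro-sim observations, else the default."""
    times = observed[(u, v)]
    return sum(times) / len(times) if times else default

def fastest_route(src, dst):
    """Dijkstra over the coupled edge costs."""
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        t, u, path = heapq.heappop(pq)
        if u == dst:
            return t, path
        if u in seen:
            continue
        seen.add(u)
        for v, w in graph[u]:
            heapq.heappush(pq, (t + coupled_cost(u, v, w), v, path + [v]))

print(fastest_route("room", "exit"))  # reroutes via the lobby
```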