943 results for co-occurrence network
Abstract:
EV is a child with a talent for learning language combined with Asperger syndrome. EV’s talent is evident in the unusual circumstances of her acquisition of both her first (Bulgarian) and second (German) languages and the unique patterns of both receptive and expressive language (in both the L1 and L2), in which she shows subtle dissociations in competence and performance consistent with an uneven cognitive profile of skills and abilities. We argue that this case provides support for theories of language learning and usage that require more general underlying cognitive mechanisms and skills. One such account, the Weak Central Coherence (WCC) hypothesis of autism, provides a plausible framework for the interpretation of the simultaneous co-occurrence of EV’s particular pattern of cognitive strengths and weaknesses. Furthermore, we show that specific features of the uneven cognitive profile of Asperger syndrome can help explain the observed language talent displayed by EV. Thus, rather than demonstrating a case where language learning takes place despite the presence of deficits, EV’s case illustrates how a pattern of strengths within this profile can specifically promote language learning.
Abstract:
Developmental learning disabilities such as dyslexia and dyscalculia have a high rate of co-occurrence in pediatric populations, suggesting that they share underlying cognitive and neurophysiological mechanisms. Dyslexia and other developmental disorders with a strong heritable component have been associated with reduced sensitivity to coherent motion stimuli, an index of visual temporal processing on a millisecond time-scale. Here we examined whether deficits in sensitivity to visual motion are evident in children who have poor mathematics skills relative to other children of the same age. We obtained psychophysical thresholds for visual coherent motion and a control task from two groups of children who differed in their performance on a test of mathematics achievement. Children with math skills in the lowest 10% in their cohort were less sensitive than age-matched controls to coherent motion, but they had statistically equivalent thresholds to controls on a coherent form control measure. Children with mathematics difficulties therefore tend to present a similar pattern of visual processing deficit to those that have been reported previously in other developmental disorders. We speculate that reduced sensitivity to temporally defined stimuli such as coherent motion represents a common processing deficit apparent across a range of commonly co-occurring developmental disorders.
Abstract:
What is the role of pragmatics in the evolution of grammatical paradigms? It is to maintain marked candidates that may come to be the default expression. This perspective is validated by the Jespersen cycle, in which the standard expression of sentential negation is renewed as pragmatically marked negatives achieve default status. How status changes are effected, however, remains to be documented. This paper documents one such change by examining the evolution of the preverbal negative non in Old and Middle French. The negative, which categorically marks pragmatic activation (Dryer 1996) with finite verbs in Old French, loses this value when used with non-finite verbs in Middle French. This process is accompanied by competing semantic reanalyses of the distribution of infinitives negated in this way, and by co-occurrence with a greater lexical variety of verbs. The absence of a pragmatic contribution should lead the marker to take on the role of default, a role already fulfilled by the well-established ne ... pas, pushing non into decline. Hard empirical evidence is thus provided for the assumed role of pragmatics in the Jespersen cycle, supporting the general view of pragmatics as supporting alternative candidates that may or may not achieve default status in the evolution of a grammatical paradigm.
Abstract:
Summary writing is an important part of many English language examinations. Because grading students' summaries is very time-consuming, computer-assisted assessment can help teachers carry out the grading more effectively. Several techniques, such as latent semantic analysis (LSA), n-gram co-occurrence, and BLEU, have been proposed to support automatic evaluation of summaries, but their performance in assessing summary writing is not satisfactory. To improve on this, this paper proposes an ensemble approach that integrates LSA and n-gram co-occurrence. The proposed ensemble approach achieves high accuracy and improves performance quite substantially compared with current techniques. A summary assessment system based on the proposed approach has also been developed.
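To make the n-gram co-occurrence component concrete, here is a minimal sketch that scores a candidate summary by bigram overlap with a reference and blends it with an LSA score in a weighted ensemble; the weight, the helper names, and the stubbed LSA input are illustrative assumptions, not the paper's actual design.

```python
# Minimal sketch of the n-gram co-occurrence component of summary scoring
# (a ROUGE-style bigram overlap). The ensemble weight and the LSA score
# are hypothetical placeholders, not the paper's actual parameters.

def bigrams(text):
    words = text.lower().split()
    return [tuple(words[i:i + 2]) for i in range(len(words) - 1)]

def ngram_cooccurrence_score(candidate, reference):
    """Fraction of reference bigrams that also appear in the candidate."""
    ref = bigrams(reference)
    cand = set(bigrams(candidate))
    if not ref:
        return 0.0
    return sum(1 for bg in ref if bg in cand) / len(ref)

def ensemble_score(candidate, reference, lsa_score, w_ngram=0.5):
    """Weighted combination of the two component scores (weights assumed)."""
    return (w_ngram * ngram_cooccurrence_score(candidate, reference)
            + (1 - w_ngram) * lsa_score)

ref = "the cat sat on the mat"
cand = "the cat sat on a mat"
print(round(ngram_cooccurrence_score(cand, ref), 2))  # → 0.6
```

In practice the LSA component would come from a latent-semantic similarity between candidate and reference vectors; it is passed in here as a precomputed number to keep the sketch self-contained.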
Abstract:
In current organizations, valuable enterprise knowledge is often buried under a rapidly expanding amount of unstructured information in the form of web pages, blogs, and other forms of text communication. We present a novel unsupervised machine learning method called CORDER (COmmunity Relation Discovery by named Entity Recognition) that turns these unstructured data into structured information for knowledge management in these organizations. CORDER exploits named entity recognition and co-occurrence data to associate individuals in an organization with their expertise and associates. We discuss the problems associated with evaluating unsupervised learners and report our initial evaluation experiments: an expert evaluation, quantitative benchmarking, and an application of CORDER in a social networking tool called BuddyFinder.
Abstract:
Discovering who works with whom, on which projects and with which customers is a key task in knowledge management. Although most organizations keep models of organizational structures, these models do not necessarily accurately reflect the reality on the ground. In this paper we present a text mining method called CORDER which first recognizes named entities (NEs) of various types from Web pages, and then discovers relations from a target NE to other NEs which co-occur with it. We evaluated the method on our departmental Website. We used the CORDER method to first find related NEs of four types (organizations, people, projects, and research areas) from Web pages on the Website and then rank them according to their co-occurrence with each of the people in our department. Twenty representative people were selected, and each of them was presented with ranked lists of each type of NE. Each person specified whether these NEs were related to him or her and changed or confirmed their rankings. Our results indicate that the method can find the NEs with which these people are closely related and provide accurate rankings.
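The co-occurrence ranking idea behind CORDER can be sketched as follows: count how often each NE appears in the same page as a target NE, then rank by count. CORDER's actual relation-strength measure also weighs term distance and frequency, so plain counts are a deliberate simplification here, and the example data are invented.

```python
# Sketch of co-occurrence ranking in the spirit of CORDER: rank named
# entities by how often they co-occur with a target entity on the same
# page. CORDER's real relation-strength measure also factors in term
# distance and frequency; plain counts are a simplification.

from collections import Counter

def rank_related(target, pages_entities):
    """pages_entities: list of entity sets, one per page/document."""
    counts = Counter()
    for entities in pages_entities:
        if target in entities:
            for e in entities:
                if e != target:
                    counts[e] += 1
    return counts.most_common()

pages = [
    {"Alice", "KMi", "BuddyFinder"},
    {"Alice", "KMi"},
    {"Bob", "KMi"},
]
print(rank_related("Alice", pages))  # → [('KMi', 2), ('BuddyFinder', 1)]
```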
Abstract:
In this paper, we propose a text mining method called LRD (latent relation discovery), which extends the traditional vector space model of document representation in order to improve information retrieval (IR) on documents and document clustering. Our LRD method extracts terms and entities, such as person, organization, or project names, and discovers relationships between them by taking into account their co-occurrence in textual corpora. Given a target entity, LRD discovers other entities closely related to the target effectively and efficiently. With respect to such relatedness, a measure of relation strength between entities is defined. LRD uses relation strength to enhance the vector space model, and uses the enhanced vector space model for query-based IR on documents and for clustering documents in order to discover complex relationships among terms and entities. Our experiments on a standard dataset for query-based IR show that our LRD method performed significantly better than the traditional vector space model and five other standard statistical methods for vector expansion.
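A minimal sketch of the vector-expansion idea described above: a document vector is augmented with entities related to its terms, weighted by a relation-strength score, before cosine similarity is computed. The expansion weight `alpha` and the toy relation table are assumptions for illustration, not LRD's actual parameters.

```python
# Sketch of relation-strength vector expansion in the spirit of LRD:
# terms in a document vector pull in related entities with an assumed
# weight, so a query mentioning a related entity can now match.

import math

def expand(vector, relations, alpha=0.5):
    """Add related entities to a term-weight dict; alpha is assumed."""
    out = dict(vector)
    for term, weight in vector.items():
        for related, strength in relations.get(term, {}).items():
            out[related] = out.get(related, 0.0) + alpha * weight * strength
    return out

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

relations = {"CORDER": {"KMi": 0.8}}   # toy relation-strength table
doc = {"CORDER": 1.0}
query = {"KMi": 1.0}
# Without expansion the query misses the document; with it, they match.
print(cosine(doc, query), cosine(expand(doc, relations), query))
```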
Abstract:
We present CORDER (COmmunity Relation Discovery by named Entity Recognition), an unsupervised machine learning algorithm that exploits named entity recognition and co-occurrence data to associate individuals in an organization with their expertise and associates. We discuss the problems associated with evaluating unsupervised learners and report our initial evaluation experiments.
Abstract:
Sentiment analysis on Twitter has attracted much attention recently due to its wide applications in both commercial and public sectors. In this paper we present SentiCircles, a lexicon-based approach for sentiment analysis on Twitter. Unlike typical lexicon-based approaches, which assign fixed, static prior sentiment polarities to words regardless of their context, SentiCircles takes into account the co-occurrence patterns of words in different contexts in tweets to capture their semantics and updates their pre-assigned strength and polarity in sentiment lexicons accordingly. Our approach allows for the detection of sentiment at both entity level and tweet level. We evaluate our proposed approach on three Twitter datasets using three different sentiment lexicons to derive word prior sentiments. Results show that our approach significantly outperforms the baselines in accuracy and F-measure for entity-level subjectivity (neutral vs. polar) and polarity (positive vs. negative) detection. For tweet-level sentiment detection, our approach performs better than the state-of-the-art SentiStrength by 4-5% in accuracy in two datasets, but falls marginally behind by 1% in F-measure in the third dataset.
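The geometric intuition of SentiCircles can be sketched roughly as follows: each context term of a target word becomes a 2-D point whose radius reflects co-occurrence strength (a TDOC-style weight) and whose angle encodes the term's prior polarity, and the centroid's vertical coordinate gives the contextual sentiment. The weights and lexicon values below are illustrative, not taken from the paper.

```python
# Simplified sketch of the SentiCircle idea: context terms map to polar
# points (radius = co-occurrence weight, angle = prior polarity), and
# the centroid's y-coordinate is read as contextual sentiment strength.
# All numbers here are invented for illustration.

import math

def senticircle_sentiment(context_terms, prior, tdoc):
    """Return the y-coordinate of the centroid (the 'sentiment' axis)."""
    ys = []
    for term in context_terms:
        r = tdoc.get(term, 0.0)                 # co-occurrence weight
        theta = prior.get(term, 0.0) * math.pi  # prior polarity in [-1, 1]
        ys.append(r * math.sin(theta))
    return sum(ys) / len(ys) if ys else 0.0

prior = {"great": 0.8, "delay": -0.6}           # toy lexicon priors
tdoc = {"great": 1.0, "delay": 1.0}             # toy co-occurrence weights
print(round(senticircle_sentiment(["great", "delay"], prior, tdoc), 3))
```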
Abstract:
In this paper, we present an innovative topic segmentation system based on a new informative similarity measure that takes word co-occurrence into account in order to avoid dependence on existing linguistic resources such as electronic dictionaries or lexico-semantic databases (thesauri or ontologies). Topic segmentation is the task of breaking documents into topically coherent multi-paragraph subparts, and it has been used extensively in information retrieval and text summarization. In particular, our architecture proposes a language-independent topic segmentation system that solves three main problems evidenced by previous research: systems based solely on lexical repetition show reliability problems; systems based on lexical cohesion use existing linguistic resources that are usually available only for dominant languages and consequently do not apply to less favored languages; and other systems need pre-existing training data. For that purpose, we use only statistics on words and sequences of words computed from a set of texts. This provides a flexible solution that may narrow the gap between dominant languages and less favored languages, thus allowing equivalent access to information.
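A generic lexical-cohesion segmenter in this spirit can be sketched as follows: compute cosine similarity between adjacent sentence blocks and place a boundary at each local similarity minimum. The paper's informative similarity measure additionally uses word co-occurrence statistics; this sketch uses raw word counts only, and the sample sentences are invented.

```python
# TextTiling-style sketch of lexical-cohesion topic segmentation:
# similarity between adjacent sentence blocks dips where the topic
# changes, so boundaries are placed at local similarity minima.

import math
from collections import Counter

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def boundaries(sentences, k=1):
    """Return indices of sentences that start a new segment."""
    sims = []
    for i in range(1, len(sentences)):
        left = Counter(w for s in sentences[max(0, i - k):i] for w in s.split())
        right = Counter(w for s in sentences[i:i + k] for w in s.split())
        sims.append(cosine(left, right))
    return [i + 1 for i in range(len(sims))
            if (i == 0 or sims[i] <= sims[i - 1])
            and (i == len(sims) - 1 or sims[i] <= sims[i + 1])]

sents = ["cats purr softly", "cats sleep a lot",
         "stocks fell today", "stocks rose later"]
print(boundaries(sents))  # → [2]: the topic shifts before sentence 2
```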
Abstract:
OBJECTIVE: To analyze, in a general population sample, clustering of delusional and hallucinatory experiences in relation to environmental exposures and clinical parameters. METHOD: General population-based household surveys of randomly selected adults between 18 and 65 years of age were carried out. SETTING: 52 countries participating in the World Health Organization's World Health Survey were included. PARTICIPANTS: 225 842 subjects (55.6% women), from nationally representative samples, with an individual response rate of 98.5% within households, participated. RESULTS: Compared with isolated delusions and hallucinations, co-occurrence of the two phenomena was associated with poorer outcomes, including worse general health and functioning status (OR = 0.93; 95% CI: 0.92-0.93), greater severity of symptoms (OR = 2.5; 95% CI: 2.0-3.0), higher probability of lifetime diagnosis of psychotic disorder (OR = 12.9; 95% CI: 11.5-14.4), lifetime treatment for psychotic disorder (OR = 19.7; 95% CI: 17.3-22.5), and depression during the last 12 months (OR = 11.6; 95% CI: 10.9-12.4). Co-occurrence was also associated with adversity and hearing problems (OR = 2.0; 95% CI: 1.8-2.3). CONCLUSION: The results suggest that the co-occurrence of hallucinations and delusions in populations is not random but instead can be seen, compared with either phenomenon in isolation, as the result of greater etiologic loading leading to a more severe clinical state.
Abstract:
Comorbidity is defined as the co-occurrence of two or more psychological disorders and has been identified as one of the most pressing issues facing child psychologists today. Unfortunately, research on comorbidity in anxious children is rare. The purpose of this research was to examine how specific comorbid patterns in children and adolescents referred with anxiety disorders affected clinical presentation. In addition, the effects of gender, age, and total number of diagnoses were also examined. Three hundred fifty-five children and adolescents (145 girls and 210 boys, hereafter referred to as "children") aged 6 to 17 who presented to the Child Anxiety and Phobia Program during the years 1987 through 1996 were assessed through a structured clinical interview administered to both the children and their families. Based on information from both children and parents, children were assigned up to five DSM diagnoses. Global ratings of severity were also obtained. While children were interviewed, parents completed a number of questionnaires pertaining to their child's overall functioning, anxiety, thoughts, and behaviors. Similarly, while parents were interviewed, children completed a number of self-report questionnaires concerning their own thoughts, feelings, and behaviors. In general, children with only anxiety disorders were rated as severely as children who met criteria for both anxiety and externalizing disorders. Children with both anxiety and externalizing disorders were mostly young (i.e., ages 6 through 11) and mostly male. These children tended to rate themselves (and be rated by their parents) as anxious as children with only anxiety disorders. Global ratings of severity tended to be associated with the type of comorbid pattern rather than the number of diagnoses assigned to a child. The theoretical, developmental, and clinical implications of these findings are discussed.
Abstract:
Antibiotic resistance, production of alginate and virulence factors, and altered host immune responses are the hallmarks of chronic Pseudomonas aeruginosa infection. Failure of antibiotic therapy has been attributed to the emergence of P. aeruginosa strains that produce β-lactamase constitutively. In Enterobacteriaceae, β-lactamase induction involves four genes with known functions: ampC, ampR, ampD, and ampG, encoding the enzyme, transcriptional regulator, amidase, and permease, respectively. In addition to all these amp genes, P. aeruginosa possesses two ampG paralogs, designated ampG and ampP. In this study, P. aeruginosa ampC, ampR, ampG and ampP were analyzed. Inactivation of ampC in the prototypic PAO1 failed to abolish the β-lactamase activity, leading to the discovery of the P. aeruginosa oxacillinase PoxB. Cloning and expression of poxB in Escherichia coli confers β-lactam resistance. Both AmpC and PoxB contribute to P. aeruginosa resistance against a wide spectrum of β-lactam antibiotics. The expression of PoxB and AmpC is regulated by the LysR-type transcriptional regulator AmpR, which up-regulates AmpC but down-regulates PoxB activities. Analyses of a P. aeruginosa ampR mutant demonstrate that AmpR is a global regulator that modulates the expression of the Las and Rhl quorum sensing (QS) systems and the production of pyocyanin, LasA protease, and LasB elastase. Introduction of the ampR mutation into an alginate-producing strain reveals the presence of a complex co-regulatory network between antibiotic resistance, QS, alginate, and other virulence factor production. Using phoA and lacZ protein fusion analyses, AmpR, AmpG and AmpP were localized to the inner membrane with one, 16 and 10 transmembrane helices, respectively. AmpR has a cytoplasmic DNA-binding domain and a periplasmic substrate-binding domain. AmpG and AmpP are essential for the maximal expression of β-lactamase.
Analysis of the murein breakdown products suggests that AmpG exports UDP-N-acetylmuramyl-L-alanine-γ-D-glutamate-meso-diaminopimelic acid-D-alanine-D-alanine (UDP-MurNAc-pentapeptide), the corepressor of AmpR, whereas AmpP imports N-acetylglucosaminyl-beta-1,4-anhydro-N-acetylmuramic acid-Ala-γ-D-Glu-meso-diaminopimelic acid (GlcNAc-anhMurNAc-tripeptide) and GlcNAc-anhMurNAc-pentapeptide, the co-inducers of AmpR. This study reveals a complex interaction between the Amp proteins and murein breakdown products involved in P. aeruginosa β-lactamase induction. In summary, this dissertation takes us a little closer to understanding the complex co-regulatory mechanism by which P. aeruginosa develops β-lactam resistance and establishes chronic infection.
Abstract:
Mesoscale eddies play a major role in controlling ocean biogeochemistry. By impacting nutrient availability and water column ventilation, they are of critical importance for oceanic primary production. In the eastern tropical South Pacific Ocean off Peru, where a large and persistent oxygen-deficient zone is present, mesoscale processes have been reported to occur frequently. However, investigations into their biological activity are mostly based on model simulations, and direct measurements of carbon and dinitrogen (N2) fixation are scarce. We examined an open-ocean cyclonic eddy and two anticyclonic mode water eddies: a coastal one and an open-ocean one in the waters off Peru along a section at 16°S in austral summer 2012. Molecular data and bioassay incubations point towards a difference between the active diazotrophic communities present in the cyclonic eddy and the anticyclonic mode water eddies. In the cyclonic eddy, highest rates of N2 fixation were measured in surface waters but no N2 fixation signal was detected at intermediate water depths. In contrast, both anticyclonic mode water eddies showed pronounced maxima in N2 fixation below the euphotic zone as evidenced by rate measurements and geochemical data. N2 fixation and carbon (C) fixation were higher in the young coastal mode water eddy compared to the older offshore mode water eddy. A co-occurrence between N2 fixation and biogenic N2, an indicator for N loss, indicated a link between N loss and N2 fixation in the mode water eddies, which was not observed for the cyclonic eddy. The comparison of two consecutive surveys of the coastal mode water eddy in November 2012 and December 2012 also revealed a reduction in N2 and C fixation at intermediate depths along with a reduction in chlorophyll by half, mirroring an aging effect in this eddy. Our data indicate an important role for anticyclonic mode water eddies in stimulating N2 fixation and thus supplying N offshore.
Abstract:
The goal of modern radiotherapy is to precisely deliver a prescribed radiation dose to delineated target volumes that contain a significant amount of tumor cells while sparing the surrounding healthy tissues/organs. Precise delineation of treatment and avoidance volumes is the key to precision radiation therapy. In recent years, considerable clinical and research efforts have been devoted to integrating MRI into the radiotherapy workflow, motivated by its superior soft tissue contrast and functional imaging possibilities. Dynamic contrast-enhanced MRI (DCE-MRI) is a noninvasive technique that measures properties of tissue microvasculature. Its sensitivity to radiation-induced vascular pharmacokinetic (PK) changes has been preliminarily demonstrated. In spite of its great potential, two major challenges have limited DCE-MRI's clinical application in radiotherapy assessment: the technical limitations of accurate DCE-MRI imaging implementation and the need for novel DCE-MRI data analysis methods that provide richer functional heterogeneity information.
This study aims at improving current DCE-MRI techniques and developing new DCE-MRI analysis methods for radiotherapy assessment. The study is naturally divided into two parts. The first part focuses on DCE-MRI temporal resolution as one of the key DCE-MRI technical factors, and improvements to DCE-MRI temporal resolution are proposed; the second part explores the potential value of image heterogeneity analysis and multiple PK model combination for therapeutic response assessment, and several novel DCE-MRI data analysis methods are developed.
I. Improvement of DCE-MRI temporal resolution. First, the feasibility of improving DCE-MRI temporal resolution via image undersampling was studied. Specifically, a novel MR image iterative reconstruction algorithm was studied for DCE-MRI reconstruction. This algorithm was built on the recently developed compressed sensing (CS) theory. By utilizing a limited k-space acquisition with shorter imaging time, images can be reconstructed in an iterative fashion under the regularization of a newly proposed total generalized variation (TGV) penalty term. In a retrospective study of brain radiosurgery patient DCE-MRI scans under IRB approval, the clinically obtained image data were selected as reference data, and simulated accelerated k-space acquisitions were generated by undersampling the full k-space of the reference images with designed sampling grids. Two undersampling strategies were proposed: 1) a radial multi-ray grid with a special angular distribution was adopted to sample each slice of the full k-space; 2) a Cartesian random sampling grid series with spatiotemporal constraints from adjacent frames was adopted to sample the dynamic k-space series at a slice location. Two sets of PK parameter maps were generated from the undersampled data and from the fully sampled data, respectively. Multiple quantitative measurements and statistical studies were performed to evaluate the accuracy of the PK maps generated from the undersampled data in reference to those generated from the fully sampled data. Results showed that at a simulated acceleration factor of four, PK maps could be faithfully calculated from DCE images reconstructed using undersampled data, and no statistically significant differences were found between the regional PK mean values from the undersampled and fully sampled data sets. DCE-MRI acceleration using the investigated image reconstruction method therefore appears feasible and promising.
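The Cartesian random undersampling strategy can be illustrated with a small sketch that builds a variable-density sampling mask: the low-frequency center of k-space is kept fully sampled and outer phase-encode lines are drawn at random. The 4x target and the center fraction below are illustrative assumptions, not the study's exact sampling grids.

```python
# Sketch of retrospective Cartesian undersampling for DCE-MRI k-space
# (variable density: the low-frequency center is fully kept, outer
# phase-encode lines are chosen at random). Acceleration factor and
# center fraction are illustrative, not the study's actual grids.

import random

def cartesian_mask(n_lines, accel=4, center_frac=0.08, seed=0):
    """Return the set of sampled phase-encode line indices."""
    rng = random.Random(seed)
    center = n_lines // 2
    half = max(1, int(n_lines * center_frac) // 2)
    keep = set(range(center - half, center + half))  # fully sampled center
    target = n_lines // accel                        # lines to keep overall
    outer = [i for i in range(n_lines) if i not in keep]
    while len(keep) < target:
        keep.add(rng.choice(outer))                  # random outer lines
    return keep

mask = cartesian_mask(256)
print(len(mask), 256 / len(mask))  # sampled lines and actual acceleration
```

In a retrospective experiment, such a mask would zero out the unsampled rows of each fully acquired k-space frame before iterative reconstruction; the study's actual masks additionally impose spatiotemporal constraints across adjacent frames.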
Second, for high temporal resolution DCE-MRI, a new PK model fitting method was developed to solve for PK parameters with better calculation accuracy and efficiency. This method is based on a derivative-based deformation of the commonly used Tofts PK model, which is presented as an integrative expression. It also includes an advanced Kolmogorov-Zurbenko (KZ) filter to remove potential noise effects in the data and solves for the PK parameters as a linear problem in matrix form. In a computer simulation study, PK parameters representing typical intracranial values were selected as references to simulate DCE-MRI data at different temporal resolutions and data noise levels. Results showed that at both high temporal resolutions (<1 s) and a clinically feasible temporal resolution (~5 s), the new method calculated PK parameters more accurately than current calculation methods at clinically relevant noise levels; at high temporal resolutions, its calculation efficiency was superior to current methods by roughly two orders of magnitude. In a retrospective study of clinical brain DCE-MRI scans, the PK maps derived from the proposed method were comparable with the results from current methods. Based on these results, it can be concluded that this new method can be used for accurate and efficient PK model fitting for high temporal resolution DCE-MRI.
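The linear-problem formulation can be illustrated with a Murase-style least-squares solve of the Tofts model, Ct(t) = Ktrans * int(Cp) - kep * int(Ct), fitted as a two-unknown linear system. This is a generic linearization, not necessarily the dissertation's exact derivative-based formulation; the KZ filtering step is omitted, and the AIF shape and parameter values are toy assumptions.

```python
# Sketch of solving the Tofts model as a linear problem: the integral
# form Ct = Ktrans*int(Cp) - kep*int(Ct) is fit by 2x2 normal equations.
# The AIF, dt, and parameter values are invented for this illustration.

import math

def simulate_ct(ktrans, kep, cp, dt):
    """Forward-Euler simulation of dCt/dt = Ktrans*Cp - kep*Ct."""
    ct = [0.0]
    for i in range(1, len(cp)):
        ct.append(ct[-1] + dt * (ktrans * cp[i - 1] - kep * ct[-1]))
    return ct

def fit_tofts_linear(ct, cp, dt):
    """Least-squares solve of [int(Cp), -int(Ct)] @ [Ktrans, kep] = Ct."""
    ip = it = 0.0
    a11 = a12 = a22 = b1 = b2 = 0.0
    for i in range(1, len(ct)):
        ip += dt * (cp[i] + cp[i - 1]) / 2   # cumulative trapezoid of Cp
        it += dt * (ct[i] + ct[i - 1]) / 2   # cumulative trapezoid of Ct
        a11 += ip * ip
        a12 += -ip * it
        a22 += it * it
        b1 += ip * ct[i]
        b2 += -it * ct[i]
    det = a11 * a22 - a12 * a12              # 2x2 normal equations
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

dt = 0.5                                             # seconds per frame
cp = [math.exp(-i * dt / 60.0) for i in range(240)]  # toy decaying AIF
ct = simulate_ct(0.2 / 60, 0.4 / 60, cp, dt)         # Ktrans=0.2, kep=0.4 /min
kt, kep = fit_tofts_linear(ct, cp, dt)
print(round(kt * 60, 2), round(kep * 60, 2))         # recovered values per min
```

Because the system is linear in the two unknowns, the fit needs no iterative nonlinear optimization, which is where the efficiency gain at high temporal resolution comes from.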
II. Development of DCE-MRI analysis methods for therapeutic response assessment. This part pursues methodology developments along two approaches. The first is to develop model-free analysis methods for DCE-MRI functional heterogeneity evaluation. This approach is inspired by the rationale that radiotherapy-induced functional change can be heterogeneous across the treatment area. The first effort was a translational investigation of classic fractal dimension theory for DCE-MRI therapeutic response assessment. In a small-animal anti-angiogenesis drug therapy experiment, randomly assigned treatment/control groups received multiple fraction treatments with one pre-treatment and multiple post-treatment high-spatiotemporal-resolution DCE-MRI scans. In the post-treatment scan two weeks after the start, the investigated Rényi dimensions of the classic PK rate constant map demonstrated significant differences between the treatment and control groups; when Rényi dimensions were adopted for treatment/control group classification, the achieved accuracy was higher than that obtained using conventional PK parameter statistics. Following this pilot work, two novel texture analysis methods were proposed. First, a new technique called the Gray Level Local Power Matrix (GLLPM) was developed. It addresses the lack of temporal information and the poor calculation efficiency of the commonly used Gray Level Co-Occurrence Matrix (GLCOM) techniques. In the same small animal experiment, the dynamic curves of Haralick texture features derived from the GLLPM had an overall better performance than the corresponding curves derived from current GLCOM techniques in treatment/control separation and classification. The second developed method is dynamic Fractal Signature Dissimilarity (FSD) analysis. Inspired by classic fractal dimension theory, this method quantitatively measures the dynamics of tumor heterogeneity during contrast agent uptake on DCE images.
In the small animal experiment mentioned before, the selected parameters from dynamic FSD analysis showed significant differences between the treatment and control groups as early as after one treatment fraction; in contrast, metrics from conventional PK analysis showed significant differences only after three treatment fractions. When using dynamic FSD parameters, the treatment/control group classification after the first treatment fraction improved compared with using conventional PK statistics. These results suggest the promise of this novel method for capturing early therapeutic response.
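For reference, the gray level co-occurrence matrix that the GLLPM extends can be sketched in a few lines, together with one Haralick feature (contrast). The offset (0, 1) means "right neighbor", and the tiny image is invented for illustration; the GLLPM itself, which adds temporal information, is not reproduced here.

```python
# Minimal sketch of a Gray Level Co-Occurrence Matrix (GLCM) and the
# Haralick contrast feature for a small quantized image. The GLLPM
# proposed in the text extends this idea with temporal information.

def glcm(image, levels, offset=(0, 1)):
    """Count gray-level pairs at the given (row, col) offset."""
    dr, dc = offset
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                m[image[r][c]][image[rr][cc]] += 1
    return m

def contrast(m):
    """Haralick contrast: mean squared gray-level difference of pairs."""
    total = sum(sum(row) for row in m)
    return sum((i - j) ** 2 * m[i][j]
               for i in range(len(m)) for j in range(len(m))) / total

img = [[0, 0, 1],
       [0, 1, 1],
       [2, 2, 2]]
g = glcm(img, 3)
print(g, round(contrast(g), 3))
```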
The second approach to developing novel DCE-MRI methods is to combine PK information from multiple PK models. Currently, the classic Tofts model or its alternative version has been widely adopted for DCE-MRI analysis as a gold-standard approach for therapeutic response assessment. Previously, a shutter-speed (SS) model was proposed to incorporate the transcytolemmal water exchange effect into contrast agent concentration quantification. In spite of its richer biological assumptions, its application in therapeutic response assessment has been limited. It is therefore intriguing to combine information from the SS model and the classic Tofts model to explore potential new biological information for treatment assessment. The feasibility of this idea was investigated in the same small animal experiment. The SS model was compared against the Tofts model for therapeutic response assessment using PK parameter regional mean value comparison. Based on the modeled transcytolemmal water exchange rate, a biological subvolume was proposed and automatically identified using histogram analysis. Within the biological subvolume, the PK rate constant derived from the SS model proved superior to the one from the Tofts model in treatment/control separation and classification. Furthermore, a novel biomarker was designed to integrate the PK rate constants from these two models. When evaluated in the biological subvolume, this biomarker was able to reflect significant treatment/control differences in both post-treatment evaluations. These results confirm the potential value of the SS model, as well as its combination with the Tofts model, for therapeutic response assessment.
In summary, this study addressed two problems of DCE-MRI application in radiotherapy assessment. In the first part, a method of accelerating DCE-MRI acquisition for better temporal resolution was investigated, and a novel PK model fitting algorithm was proposed for high temporal resolution DCE-MRI. In the second part, two model-free texture analysis methods and a multiple-model analysis method were developed for DCE-MRI therapeutic response assessment. The presented works could benefit the future DCE-MRI routine clinical application in radiotherapy assessment.