39 results for N-Dimensional Co-occurrence Matrix

in Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

This book untangles the old grammatical paradox of several negations co-occurring within the same negative clause, through an analysis of the scope of negation. The scope of each negation over the same predicate is what allows for concordant values. The frequent co-occurrence of negative items, cases of double negation, and the expletive negative, as compared with constituent negation, help to demonstrate this. Analysis of these phenomena is based on a large body of data from different varieties of French, considered in the light of historical, typological, and psycholinguistic tendencies. While extensive reference is made to current analyses, independence is maintained from any particular model. Starting from syntactic generalisations, the work provides an innovative solution to a classic interpretative issue.

Relevance:

100.00%

Publisher:

Abstract:

Neurodegenerative disorders are characterized by the formation of distinct pathological changes in the brain, including extracellular protein deposits, cellular inclusions, and changes in cell morphology. Since the earliest published descriptions of these disorders, diagnosis has been based on clinicopathological features, namely, the coexistence of a specific clinical profile together with the presence or absence of particular types of lesion. In addition, the molecular profile of lesions has become an increasingly important feature both in the diagnosis of existing disorders and in the description of new disease entities. Recent studies, however, have reported considerable overlap between the clinicopathological features of many disorders, leading to difficulties in the diagnosis of individual cases and to calls for a new classification of neurodegenerative disease. This article discusses: (i) the nature and degree of the overlap between different neurodegenerative disorders, including Alzheimer's disease, dementia with Lewy bodies, the fronto-temporal dementias, and prion disease; (ii) the factors that contribute to disease overlap, including historical factors, the presence of disease heterogeneity, age-related changes, the problem of apolipoprotein genotype, and the co-occurrence of common diseases; and (iii) whether the current nosological status of disorders should be reconsidered.

Relevance:

100.00%

Publisher:

Abstract:

Background: Developmental dyslexia is typically defined by deficits in phonological skills, but it is also associated with anomalous performance on measures of balance. Although balance assessments are included in several screening batteries for dyslexia, the association between impairments in literacy and deficits in postural stability could be due to the high co-occurrence of dyslexia with other developmental disorders in which impairments of motor behaviour are also prevalent. Methods: We identified 17 published studies that compared balance function between dyslexia and control samples and obtained effect-sizes for each. Contrast and association analyses were used to quantify the influence of hypothesised moderator variables on differences in effects across studies. Results: The mean effect-size of the balance deficit in dyslexia was .64 (95% CI = .44-.78), with heterogeneous findings across the population of studies. Probable co-occurrence of other developmental disorders and variability in intelligence scores in the dyslexia samples were the strongest moderator variables of effect-size. Conclusions: Balance deficits are associated with dyslexia, but these effects apparently relate more strongly to third variables than to reading ability itself. Deficits of balance may indicate increased risk of developmental disorder, but are unlikely to be uniquely associated with dyslexia.

Relevance:

100.00%

Publisher:

Abstract:

EV is a child with a talent for learning language combined with Asperger syndrome. EV’s talent is evident in the unusual circumstances of her acquisition of both her first (Bulgarian) and second (German) languages and the unique patterns of both receptive and expressive language (in both the L1 and L2), in which she shows subtle dissociations in competence and performance consistent with an uneven cognitive profile of skills and abilities. We argue that this case provides support for theories of language learning and usage that require more general underlying cognitive mechanisms and skills. One such account, the Weak Central Coherence (WCC) hypothesis of autism, provides a plausible framework for the interpretation of the simultaneous co-occurrence of EV’s particular pattern of cognitive strengths and weaknesses. Furthermore, we show that specific features of the uneven cognitive profile of Asperger syndrome can help explain the observed language talent displayed by EV. Thus, rather than demonstrating a case where language learning takes place despite the presence of deficits, EV’s case illustrates how a pattern of strengths within this profile can specifically promote language learning.

Relevance:

100.00%

Publisher:

Abstract:

Developmental learning disabilities such as dyslexia and dyscalculia have a high rate of co-occurrence in pediatric populations, suggesting that they share underlying cognitive and neurophysiological mechanisms. Dyslexia and other developmental disorders with a strong heritable component have been associated with reduced sensitivity to coherent motion stimuli, an index of visual temporal processing on a millisecond time-scale. Here we examined whether deficits in sensitivity to visual motion are evident in children who have poor mathematics skills relative to other children of the same age. We obtained psychophysical thresholds for visual coherent motion and a control task from two groups of children who differed in their performance on a test of mathematics achievement. Children with math skills in the lowest 10% in their cohort were less sensitive than age-matched controls to coherent motion, but they had statistically equivalent thresholds to controls on a coherent form control measure. Children with mathematics difficulties therefore tend to present a similar pattern of visual processing deficit to those that have been reported previously in other developmental disorders. We speculate that reduced sensitivity to temporally defined stimuli such as coherent motion represents a common processing deficit apparent across a range of commonly co-occurring developmental disorders.

Relevance:

100.00%

Publisher:

Abstract:

What is the role of pragmatics in the evolution of grammatical paradigms? It is to maintain marked candidates that may come to be the default expression. This perspective is validated by the Jespersen cycle, where the standard expression of sentential negation is renewed as pragmatically marked negatives achieve default status. How status changes are effected, however, remains to be documented. This paper provides that documentation by examining the evolution of the preverbal negative non in Old and Middle French. The negative, which categorically marks pragmatic activation (Dryer 1996) with finite verbs in Old French, loses this value when used with non-finite verbs in Middle French. This process is accompanied by competing semantic reanalyses of the distribution of infinitives negated in this way, and by co-occurrence with a greater lexical variety of verbs. The absence of a pragmatic contribution should lead the marker to take on the role of default, a role already fulfilled by a well-established ne ... pas, pushing non into decline. Hard empirical evidence is thus provided that validates the assumed role of pragmatics in the Jespersen cycle, supporting the general view of pragmatics as sustaining alternative candidates that may or may not achieve default status in the evolution of a grammatical paradigm.

Relevance:

100.00%

Publisher:

Abstract:

Due to the failure of PRARE, the orbital accuracy of ERS-1 is typically 10-15 cm radially, as compared to 3-4 cm for TOPEX/Poseidon. To gain the most from these simultaneous datasets it is necessary to improve the orbital accuracy of ERS-1 so that it is commensurate with that of TOPEX/Poseidon. For the integration of these two datasets it is also necessary to determine the altimeter and sea state biases for each of the satellites. Several models for the sea state bias of ERS-1 are considered by analysis of the ERS-1 single-satellite crossovers. The model adopted expresses the sea state bias as a percentage of the significant wave height, namely 5.95%. The removal of ERS-1 orbit error and the recovery of an ERS-1 to TOPEX/Poseidon relative bias are both achieved by analysis of dual crossover residuals. The gravitational-field-based radial orbit error is modelled by a finite Fourier expansion series, with the dominant frequencies determined by analysis of the JGM-2 covariance matrix. Periodic and secular terms modelling the errors due to atmospheric density, solar radiation pressure, and initial state vector mis-modelling are also solved for. Validation of the dataset unification consists of comparing the mean sea surface topographies and annual variabilities derived from both the corrected and uncorrected ERS-1 orbits with those derived from TOPEX/Poseidon. The global and regional geographically fixed/variable orbit errors are also analysed pre- and post-correction, and a significant reduction is noted. Finally, the use of dual/single-satellite crossovers and repeat-pass data for the calibration of ERS-2 with respect to ERS-1 and TOPEX/Poseidon is demonstrated by calculating the ERS-1/2 sea state and relative biases.
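
The Fourier modelling of radial orbit error can be sketched in miniature: for evenly spaced crossover residuals spanning whole revolutions, the coefficients of a single Fourier term (here a once-per-revolution frequency) can be recovered by projection. The function name, sampling scheme, and numbers below are illustrative assumptions; the actual analysis fits many frequencies, selected from the JGM-2 covariance matrix, by least squares.

```python
import math

def fit_once_per_rev(times, residuals, period):
    """Estimate the bias and cosine/sine amplitudes of a single-frequency
    radial error model r(t) = c0 + a*cos(w t) + b*sin(w t) by projection,
    assuming evenly spaced samples covering an integer number of periods."""
    n = len(times)
    w = 2.0 * math.pi / period
    c0 = sum(residuals) / n
    a = 2.0 / n * sum(r * math.cos(w * t) for t, r in zip(times, residuals))
    b = 2.0 / n * sum(r * math.sin(w * t) for t, r in zip(times, residuals))
    return c0, a, b

# Synthetic residuals: an 8 cm once-per-rev error on top of a 3 cm bias.
period = 6028.0  # rough once-per-revolution period in seconds (assumed)
times = [i * period / 200.0 for i in range(1600)]  # 8 full revolutions
resid = [3.0 + 8.0 * math.cos(2 * math.pi * t / period) for t in times]

c0, a, b = fit_once_per_rev(times, resid, period)
# Recovers c0 ~ 3.0, a ~ 8.0, b ~ 0.0
```

With several candidate frequencies, the same projections become the columns of a least-squares design matrix, which is closer to the multi-term expansion the thesis describes.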

Relevance:

100.00%

Publisher:

Abstract:

A methodology is presented which can be used to predict the level of electromagnetic interference, in the form of conducted and radiated emissions, from variable speed drives; the drive modelled was a Eurotherm 583. The conducted emissions are predicted using an accurate circuit model of the drive and its associated equipment. The circuit model was constructed from a number of different areas: the power electronics of the drive, the line impedance stabilising network used during the experimental work to measure the conducted emissions, a model of an induction motor assuming near-zero load, an accurate model of the shielded cable which connected the drive to the motor, and finally the parasitic capacitances present in the drive. The conducted emissions were predicted with an error of +/-6 dB over the frequency range 150 kHz to 16 MHz, which compares well with the limits set in the standards, which specify a frequency range of 150 kHz to 30 MHz. The conducted emissions model was also used to predict the current and voltage sources which were in turn used to predict the radiated emissions from the drive. Two methods for the prediction of the radiated emissions were investigated: two-dimensional finite element analysis and three-dimensional transmission line matrix modelling. The finite element model took account of the features of the drive considered to produce the majority of the radiation: the switching of the IGBTs in the inverter, the shielded cable which connected the drive to the motor, and some of the cables present in the drive. The model also took account of the structure of the test rig used to measure the radiated emissions.
It was found that the majority of the radiation came from the shielded cable and the common-mode currents flowing in its shield, and that it was feasible to model the radiation from the drive by modelling the shielded cable alone. The radiated emissions were correctly predicted in the frequency range 30 MHz to 200 MHz with an error of +10 dB/-6 dB. The transmission line matrix method modelled the shielded cable connecting the drive to the motor and also took account of the architecture of the test rig. Only limited simulations were performed using the transmission line matrix model, as it proved to be a very slow method and not an ideal solution to the problem. However, the limited results obtained were comparable, to within 5%, to those obtained using the finite element model.

Relevance:

100.00%

Publisher:

Abstract:

The loss of habitat and biodiversity worldwide has led to considerable resources being spent on conservation interventions. Prioritising these actions is challenging due to the complexity of the problem and because there can be multiple actors undertaking conservation actions, often with divergent or partially overlapping objectives. We explore this issue with a simulation study involving two agents sequentially purchasing land for the conservation of multiple species using three scenarios comprising either divergent or partially overlapping objectives between the agents. The first scenario investigates the situation where both agents are targeting different sets of threatened species. The second and third scenarios represent a case where a government agency attempts to implement a complementary conservation network representing 200 species, while a non-government organisation is focused on achieving additional protection for the ten rarest species. Simulated input data was generated using distributions taken from real data to model the cost of parcels, and the rarity and co-occurrence of species. We investigated three types of collaborative interactions between agents: acting in isolation, sharing information and pooling resources with the third option resulting in the agents combining their resources and effectively acting as a single entity. In each scenario we determine the cost savings when an agent moves from acting in isolation to either sharing information or pooling resources with the other agent. The model demonstrates how the value of collaboration can vary significantly in different situations. In most cases, collaborating would have associated costs and these costs need to be weighed against the potential benefits from collaboration. Our model demonstrates a method for determining the range of costs that would result in collaboration providing an efficient use of scarce conservation resources.

Relevance:

100.00%

Publisher:

Abstract:

Alpha-modified minimum essential medium (αMEM) has been found to cross-link a 1% gellan gum solution, resulting in the formation of a self-supporting hydrogel in 1:1 and 5:1 ratios of polysaccharide: αMEM. Rheological data from temperature sweeps confirm that in addition to orders of magnitude differences in G' between 1% gellan and 1% gellan with αMEM, there is also a 20°C increase in the temperature at which the onset of gelation takes place when αMEM is present. Frequency sweeps confirm the formation of a true gel; mechanical spectra for mixtures of gellan and αMEM clearly demonstrate G' to be independent of frequency. It is possible to immobilize cells within a three-dimensional (3D) gellan matrix that remain viable for up to 21 days in culture by adding a suspension of rat bone marrow cells (rBMC) in αMEM to 1% gellan solution. This extremely simple approach to cell immobilization within 3D constructs, made possible by the fact that gellan solutions cross-link in the presence of millimolar concentrations of cations, poses a very low risk to a cell population immobilized within a gellan matrix and thus indicates the potential of gellan for use as a tissue engineering scaffold. © 2007 Sage Publications.

Relevance:

100.00%

Publisher:

Abstract:

Summary writing is an important part of many English language examinations. As grading students' summaries is a very time-consuming task, computer-assisted assessment can help teachers carry out the grading more effectively. Several techniques, such as latent semantic analysis (LSA), n-gram co-occurrence, and BLEU, have been proposed to support automatic evaluation of summaries. However, their performance in assessing summary writing is not satisfactory. To improve on this, this paper proposes an ensemble approach that integrates LSA and n-gram co-occurrence. The proposed ensemble approach achieves high accuracy and improves performance quite substantially compared with current techniques. A summary assessment system based on the proposed approach has also been developed.
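
The ensemble idea can be sketched as a weighted combination of an n-gram co-occurrence score and a semantic-similarity score. In this hedged sketch a plain cosine over word counts stands in for LSA, and the weight, function names, and example texts are assumptions, not the system's actual parameters.

```python
from collections import Counter
import math

def ngrams(text, n=2):
    """Multiset of word n-grams in a text."""
    words = text.lower().split()
    return Counter(zip(*(words[i:] for i in range(n))))

def ngram_overlap(candidate, reference, n=2):
    """Recall-oriented n-gram co-occurrence against the reference."""
    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    matched = sum(min(c, ref[g]) for g, c in cand.items())
    return matched / max(1, sum(ref.values()))

def cosine(candidate, reference):
    """Cosine similarity over raw word counts (LSA stand-in)."""
    a = Counter(candidate.lower().split())
    b = Counter(reference.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def ensemble_score(candidate, reference, w=0.5):
    """Weighted combination of the two component scores."""
    return (w * ngram_overlap(candidate, reference)
            + (1 - w) * cosine(candidate, reference))

ref = "the cat sat on the mat"
good = "the cat sat on the mat"
weak = "a dog ran in the park"
```

A verbatim match scores 1.0 on both components, while an unrelated summary scores near zero on the n-gram term and low on the cosine term, so the ensemble separates them cleanly.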

Relevance:

100.00%

Publisher:

Abstract:

In current organizations, valuable enterprise knowledge is often buried under a rapidly expanding amount of unstructured information in the form of web pages, blogs, and other forms of human text communication. We present a novel unsupervised machine learning method called CORDER (COmmunity Relation Discovery by named Entity Recognition) to turn these unstructured data into structured information for knowledge management in such organizations. CORDER exploits named entity recognition and co-occurrence data to associate individuals in an organization with their expertise and associates. We discuss the problems associated with evaluating unsupervised learners and report our initial evaluation experiments: an expert evaluation, a quantitative benchmarking, and an application of CORDER in a social networking tool called BuddyFinder.

Relevance:

100.00%

Publisher:

Abstract:

Discovering who works with whom, on which projects and with which customers is a key task in knowledge management. Although most organizations keep models of organizational structures, these models do not necessarily accurately reflect the reality on the ground. In this paper we present a text mining method called CORDER which first recognizes named entities (NEs) of various types from Web pages, and then discovers relations from a target NE to other NEs which co-occur with it. We evaluated the method on our departmental Website. We used the CORDER method to first find related NEs of four types (organizations, people, projects, and research areas) from Web pages on the Website and then rank them according to their co-occurrence with each of the people in our department. Twenty representative people were selected, and each of them was presented with ranked lists of each type of NE. Each person specified whether these NEs were related to him or her and changed or confirmed their rankings. Our results indicate that the method can find the NEs with which these people are closely related and provide accurate rankings.
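
A minimal sketch of the co-occurrence ranking step, assuming named entity recognition has already reduced each Web page to a set of NEs; the page contents and entity names below are hypothetical, and the full CORDER method scores co-occurrence more finely than the simple page-level counting shown here.

```python
from collections import Counter

def rank_related(pages, target):
    """Rank NEs by the number of pages on which they co-occur with target."""
    counts = Counter()
    for page in pages:
        if target in page:
            counts.update(ne for ne in set(page) if ne != target)
    return [ne for ne, _ in counts.most_common()]

# Each page is the set of NEs recognized on it (hypothetical data).
pages = [
    {"Alice", "ProjectX", "OrgY"},
    {"Alice", "ProjectX", "Bob"},
    {"Bob", "OrgY"},
]
related = rank_related(pages, "Alice")
# "ProjectX" co-occurs with Alice on two pages, so it ranks first.
```

The ranked list is exactly what each person in the evaluation was asked to confirm or reorder.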

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we propose a text mining method called LRD (latent relation discovery), which extends the traditional vector space model of document representation in order to improve information retrieval (IR) on documents and document clustering. Our LRD method extracts terms and entities, such as person, organization, or project names, and discovers relationships between them by taking into account their co-occurrence in textual corpora. Given a target entity, LRD discovers other entities closely related to the target effectively and efficiently. With respect to such relatedness, a measure of relation strength between entities is defined. LRD uses relation strength to enhance the vector space model, and uses the enhanced vector space model for query-based IR on documents and for clustering documents in order to discover complex relationships among terms and entities. Our experiments on a standard dataset for query-based IR show that our LRD method performed significantly better than the traditional vector space model and five other standard statistical methods for vector expansion.
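
The vector-expansion step can be sketched as follows, with toy relation strengths standing in for the co-occurrence-derived measure; all terms, weights, and strength values below are illustrative assumptions rather than LRD's actual definitions.

```python
from collections import Counter

# Assumed relation strengths between terms (symmetric, in [0, 1]).
strength = {
    ("ontology", "semantics"): 0.8,
    ("retrieval", "search"): 0.9,
}

def related(term):
    """Yield (other_term, strength) pairs for a given term."""
    for (a, b), s in strength.items():
        if term == a:
            yield b, s
        elif term == b:
            yield a, s

def expand(doc_terms, weight=0.5):
    """Enhanced vector: raw term counts plus weighted related-term mass."""
    vec = Counter(doc_terms)
    for term, count in list(vec.items()):  # snapshot before adding new keys
        for other, s in related(term):
            vec[other] += weight * s * count
    return vec

vec = expand(["ontology", "retrieval", "retrieval"])
# "semantics" and "search" gain mass via their relation to observed terms.
```

Cosine similarity over such expanded vectors then lets a query match documents that never mention its terms directly, which is the intuition behind the reported IR improvement.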

Relevance:

100.00%

Publisher:

Abstract:

We present CORDER (COmmunity Relation Discovery by named Entity Recognition), an unsupervised machine learning algorithm that exploits named entity recognition and co-occurrence data to associate individuals in an organization with their expertise and associates. We discuss the problems associated with evaluating unsupervised learners and report our initial evaluation experiments.