28 results for Domain-specific analysis


Relevance: 100.00%

Abstract:

Many studies have assessed the neural underpinnings of creativity but have failed to find a clear anatomical localization. We aimed to provide evidence for a multi-componential neural system for creativity. We applied a general activation likelihood estimation (ALE) meta-analysis to 45 fMRI studies. Three individual ALE analyses were performed to assess creativity in different cognitive domains (Musical, Verbal, and Visuo-spatial). The general ALE revealed that creativity relies on clusters of activation in the bilateral occipital, parietal, frontal, and temporal lobes. The individual ALE analyses revealed different maximal activations in the different domains. Musical creativity yields activations in the bilateral medial frontal gyrus, in the left cingulate gyrus, middle frontal gyrus, and inferior parietal lobule, and in the right postcentral and fusiform gyri. Verbal creativity yields activations mainly located in the left hemisphere, in the prefrontal cortex, middle and superior temporal gyri, inferior parietal lobule, postcentral and supramarginal gyri, middle occipital gyrus, and insula; the right inferior frontal gyrus and the lingual gyrus were also activated. Visuo-spatial creativity yields activations in the right middle and inferior frontal gyri, the bilateral thalamus, and the left precentral gyrus. This evidence suggests that creativity relies on multi-componential neural networks and that different creativity domains depend on different brain regions.

Relevance: 100.00%

Abstract:

To benefit from the advantages that Cloud Computing brings to the IT industry, management policies must be implemented as part of the operation of the Cloud. For example, policies can be specified for energy management, to reduce the cost of running the IT system, or for security, to handle users' privacy issues. As cloud platforms are large, manual enforcement of policies does not scale. Hence, autonomic approaches to management policies have recently received considerable attention. These approaches allow the specification of rules that are executed by rule engines. The process of rule creation starts with the interpretation of the policies drafted by high-rank managers; technical IT staff then translate such policies into operational activities to implement them. This process can start from a textual declarative description and, after numerous steps, terminates in a set of rules to be executed on a rule engine. To simplify these steps and to bridge the considerable gap between the declarative policies and the executable rules, we propose a domain-specific language called CloudMPL. We also design a method for automated transformation of the rules captured in CloudMPL into the popular rule engine Drools. As the policies change over time, code generation will reduce the time required for their implementation. In addition, using a declarative language for writing the specifications is expected to make the authoring of rules easier. We demonstrate the use of the CloudMPL language in a running example extracted from an energy-consumption management case study.
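
As a rough illustration of the gap the abstract describes, the following Python sketch renders a declarative energy-management policy as Drools-style DRL text. The `Policy` class, its fields, and the `to_drl` helper are hypothetical; they are not CloudMPL's actual syntax or the paper's generator, only a minimal stand-in for the kind of output such a transformation might produce.

```python
# Hypothetical sketch: turning a declarative energy-management policy into
# Drools-style DRL text. CloudMPL's real syntax and code generator are not
# shown in the abstract; the Policy class and to_drl() below are illustrative.
from dataclasses import dataclass

@dataclass
class Policy:
    name: str          # human-readable policy name
    fact_type: str     # domain fact the rule matches, e.g. "Host"
    condition: str     # constraint on the fact, in DRL constraint syntax
    action: str        # consequence, as Java-like statements

def to_drl(policy: Policy) -> str:
    """Render a single policy as a Drools rule (DRL) string."""
    return (
        f'rule "{policy.name}"\n'
        f'when\n'
        f'    $f : {policy.fact_type}( {policy.condition} )\n'
        f'then\n'
        f'    {policy.action}\n'
        f'end\n'
    )

if __name__ == "__main__":
    idle_hosts = Policy(
        name="Suspend idle hosts",
        fact_type="Host",
        condition='utilization < 0.2, state == "RUNNING"',
        action='$f.setState("STANDBY"); update($f);',
    )
    print(to_drl(idle_hosts))
```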

Relevance: 100.00%

Abstract:

In this article, we highlight the significance of, and need for, conducting context-specific human resource management (HRM) research, by focusing on four critical themes. First, we discuss the need to analyze the convergence-divergence debate on HRM in Asia-Pacific. Next, we present an integrated framework useful for conducting cross-national HRM research focused on the key determinants of the dominant national HRM systems in the region. Following this, we discuss the critical challenges facing the HRM function in Asia-Pacific. Finally, we set out an agenda for future research in the form of a series of research themes.

Relevance: 100.00%

Abstract:

A new, principled, domain-independent watermarking framework is presented. The new approach is based on embedding the message in statistically independent sources of the covertext to minimise covertext distortion, maximise the information embedding rate, and improve the method's robustness against various attacks. Experiments comparing the performance of the new approach under several standard attacks show it to be competitive with state-of-the-art domain-specific methods.
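
A minimal sketch of the general idea (embedding message bits in statistically independent components of the covertext), assuming scikit-learn's FastICA and a synthetic cover signal; this is not the paper's algorithm, and the detector here is non-blind, i.e. it reuses the learned ICA model.

```python
# Illustrative sketch (not the paper's algorithm): embed bits additively in
# independent components of a cover signal obtained with FastICA. Assumes the
# detector shares the learned ICA model (non-blind watermarking).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
cover = rng.normal(size=(256, 64))        # stand-in covertext: 256 blocks x 64 samples
bits = rng.integers(0, 2, size=256)       # message, one bit per block

ica = FastICA(n_components=8, random_state=0)
S = ica.fit_transform(cover)              # independent sources, shape (256, 8)

alpha = 0.05                              # embedding strength (distortion vs robustness)
S_marked = S.copy()
S_marked[:, 0] += alpha * (2 * bits - 1)  # +alpha for bit 1, -alpha for bit 0

stego = ica.inverse_transform(S_marked)   # reconstruct the watermarked covertext

# Non-blind extraction: project back with the same model and read the offset sign.
recovered = (ica.transform(stego)[:, 0] > S[:, 0]).astype(int)
print("bit errors:", int(np.sum(recovered != bits)))
```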

Relevance: 100.00%

Abstract:

Sentiment analysis is concerned with automatically identifying the sentiment or opinion expressed in a given piece of text. Most prior work either uses prior lexical knowledge, defined as the sentiment polarity of words, or views the task as a text classification problem and relies on labeled corpora to train a sentiment classifier. While lexicon-based approaches do not adapt well to different domains, corpus-based approaches require expensive manual annotation effort. In this paper, we propose a novel framework in which an initial classifier is learned by incorporating prior information extracted from an existing sentiment lexicon, with preferences on the expected sentiment labels of those lexicon words expressed using generalized expectation criteria. Documents classified with high confidence are then used as pseudo-labeled examples for automatic domain-specific feature acquisition. The word-class distributions of such self-learned features are estimated from the pseudo-labeled examples and are used to train another classifier by constraining the model's predictions on unlabeled instances. Experiments on both the movie-review data and the multi-domain sentiment dataset show that our approach attains comparable or better performance than existing weakly-supervised sentiment classification methods despite using no labeled documents.
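
The following Python sketch is a simplified stand-in for this pipeline: the generalized-expectation step is replaced by plain lexicon scoring, and high-confidence documents are pseudo-labelled and used to train a classifier. The lexicon, threshold, and documents are invented for illustration.

```python
# Simplified stand-in for the pipeline described above (the generalized-
# expectation step is replaced here by plain lexicon scoring): lexicon-seeded
# pseudo-labelling followed by training a classifier on the pseudo-labelled docs.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

lexicon = {"good": 1, "great": 1, "excellent": 1, "bad": -1, "awful": -1, "boring": -1}

def lexicon_score(doc: str) -> int:
    """Sum of lexicon polarities for words appearing in the document."""
    return sum(lexicon.get(w, 0) for w in doc.lower().split())

unlabeled_docs = [
    "a great film with an excellent cast",
    "boring plot and awful acting",
    "the director returns with a good, if uneven, sequel",
    "not much to say about this one",
]

# Keep only documents the lexicon scores with high confidence (|score| >= 2 here).
pseudo = [(d, int(lexicon_score(d) > 0)) for d in unlabeled_docs
          if abs(lexicon_score(d)) >= 2]
texts, labels = zip(*pseudo)

# Train a classifier on the pseudo-labelled examples; its learned feature weights
# play the role of the self-learned domain-specific features in the paper.
vec = CountVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

print(clf.predict(vec.transform(["an excellent and great story"])))
```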

Relevance: 90.00%

Abstract:

Oxidized and chlorinated phospholipids are generated under inflammatory conditions and are increasingly understood to play important roles in diseases involving oxidative stress. MS is a sensitive and informative technique for monitoring phospholipid oxidation that can provide structural information and simultaneously detect a wide variety of oxidation products, including chain-shortened and -chlorinated phospholipids. MSn technologies involve fragmentation of the compounds to yield diagnostic fragment ions and thus assist in identification. Advanced methods such as neutral loss and precursor ion scanning can facilitate the analysis of specific oxidation products in complex biological samples. This is essential for determining the contributions of different phospholipid oxidation products in disease. While many pro-inflammatory signalling effects of oxPLs (oxidized phospholipids) have been reported, it has more recently become clear that they can also have anti-inflammatory effects in conditions such as infection and endotoxaemia. In contrast with free radical-generated oxPLs, the signalling effects of chlorinated lipids are much less well understood, but they appear to demonstrate mainly pro-inflammatory effects. Specific analysis of oxidized and chlorinated lipids and the determination of their molecular effects are crucial to understanding their role in disease pathology.

Relevance: 90.00%

Abstract:

During the last decade, biomedicine has witnessed tremendous development. Large amounts of experimental and computational biomedical data have been generated along with new discoveries, which are accompanied by an exponential increase in the number of biomedical publications describing these discoveries. In the meantime, there has been great interest within the scientific community in text mining tools for finding knowledge, such as protein-protein interactions, that is most relevant and useful for specific analysis tasks. This paper provides an outline of the various information extraction methods in the biomedical domain, especially for the discovery of protein-protein interactions. It surveys the methodologies involved in analyzing and processing plain text, categorizes current work in biomedical information extraction, and provides examples of these methods. Challenges in the field are also presented and possible solutions are discussed.
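
As a concrete example of the simplest class of methods such surveys cover, the sketch below flags protein pairs that co-occur in a sentence with an interaction trigger word. The protein list, trigger words, and example sentence are invented; real systems use named-entity recognition and richer features.

```python
# Minimal co-occurrence baseline of the kind surveyed for protein-protein
# interaction extraction: report protein pairs that appear in one sentence
# together with an interaction trigger word. Names and sentences are made up.
import itertools
import re

PROTEINS = {"TP53", "MDM2", "BRCA1", "RAD51"}
TRIGGERS = {"binds", "interacts", "phosphorylates", "inhibits"}

def extract_ppi(sentence: str):
    tokens = re.findall(r"[A-Za-z0-9]+", sentence)
    mentioned = [t for t in tokens if t.upper() in PROTEINS]
    has_trigger = any(t.lower() in TRIGGERS for t in tokens)
    if has_trigger and len(mentioned) >= 2:
        pairs = itertools.combinations(sorted(set(m.upper() for m in mentioned)), 2)
        return list(pairs)
    return []

print(extract_ppi("MDM2 binds TP53 and inhibits its transactivation domain."))
# -> [('MDM2', 'TP53')]
```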

Relevance: 90.00%

Abstract:

To date, more than 16 million citations of published articles in the biomedical domain are available in the MEDLINE database. These articles describe the new discoveries that have accompanied the tremendous development of biomedicine during the last decade. It is crucial for biomedical researchers to retrieve and mine specific knowledge from this huge quantity of published articles with high efficiency. Researchers have been engaged in the development of text mining tools to find knowledge, such as protein-protein interactions, that is most relevant and useful for specific analysis tasks. This chapter provides a road map to the various information extraction methods in the biomedical domain, such as protein name recognition and the discovery of protein-protein interactions. Disciplines involved in analyzing and processing unstructured text are summarized. Current work in biomedical information extraction is categorized. Challenges in the field are also presented and possible solutions are discussed.

Relevance: 90.00%

Abstract:

The behaviour of self-adaptive systems can be emergent, which means that the system's behaviour may be seen as unexpected by its customers and its developers. Therefore, a self-adaptive system needs to garner confidence from its customers, and it also needs to resolve any surprise on the part of the developer during testing and maintenance. We believe that these two functions can only be achieved if a self-adaptive system is also capable of self-explanation. We argue that a self-adaptive system's behaviour needs to be explained in terms of the satisfaction of its requirements. Since self-adaptive system requirements may themselves be emergent, we propose the use of goal-based requirements models at runtime to offer self-explanation of how a system is meeting its requirements. We demonstrate the analysis of runtime requirements models to yield a self-explanation codified in a domain-specific language, and discuss possible future work.
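
A minimal sketch of the idea, assuming a toy goal tree whose leaves carry runtime satisfaction status; the goal names and the textual output format are invented and do not reproduce the paper's domain-specific language.

```python
# Illustrative sketch of self-explanation from a runtime goal model: leaf goals
# carry their current satisfaction status, and an explanation is generated by
# walking the tree. The goals and the textual form are invented examples.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Goal:
    name: str
    satisfied: bool = True
    children: List["Goal"] = field(default_factory=list)

    def is_satisfied(self) -> bool:
        if not self.children:
            return self.satisfied
        return all(c.is_satisfied() for c in self.children)

    def explain(self, indent: int = 0) -> str:
        status = "satisfied" if self.is_satisfied() else "NOT satisfied"
        lines = [" " * indent + f"{self.name}: {status}"]
        for child in self.children:
            lines.append(child.explain(indent + 2))
        return "\n".join(lines)

model = Goal("Maintain responsiveness", children=[
    Goal("Keep latency below 200 ms", satisfied=False),
    Goal("Scale out when load rises", satisfied=True),
])
print(model.explain())
```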

Relevance: 80.00%

Abstract:

Ontologies have become a key component in the Semantic Web and knowledge management. One accepted goal is to construct ontologies from a domain-specific set of texts. An ontology reflects the background knowledge used in writing and reading a text. However, a text is an act of knowledge maintenance, in that it re-enforces the background assumptions, alters links and associations in the ontology, and adds new concepts. This means that background knowledge is rarely expressed in a machine-interpretable manner. When it is, it is usually at the conceptual boundaries of the domain, e.g. in textbooks or when ideas are borrowed into other domains. We argue that a partial solution to this lies in searching external resources such as specialized glossaries and the internet. We show that randomly selected concept pairs from the Gene Ontology do not occur in a relevant corpus of texts from the journal Nature; in contrast, a significant proportion can be found on the internet. Thus, we conclude that sources external to the domain corpus are necessary for the automatic construction of ontologies.

Relevance: 80.00%

Abstract:

The use of ontologies as representations of knowledge is widespread, but their construction, until recently, has been entirely manual. We argue in this paper for the use of text corpora and automated natural language processing methods for the construction of ontologies. We delineate the challenges and present criteria for the selection of appropriate methods. We distinguish three major steps in ontology building: associating terms, constructing hierarchies, and labelling relations. A number of methods are presented for these purposes, but we conclude that the issue of data sparsity is still a major challenge. We argue for the use of resources external to the domain-specific corpus.
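
As a toy illustration of the first step (associating terms), the sketch below scores term pairs by pointwise mutual information over sentence-level co-occurrence; the corpus and the choice of PMI are illustrative assumptions, not the paper's prescribed method.

```python
# Toy illustration of term association using pointwise mutual information (PMI)
# over sentence-level co-occurrence; the corpus is invented.
import math
from collections import Counter
from itertools import combinations

corpus = [
    "the gene encodes a membrane protein",
    "the protein binds the receptor at the membrane",
    "expression of the gene is regulated by the promoter",
]

sentences = [set(s.split()) for s in corpus]
n = len(sentences)
word_count = Counter(w for s in sentences for w in s)
pair_count = Counter(frozenset(p) for s in sentences for p in combinations(sorted(s), 2))

def pmi(w1: str, w2: str) -> float:
    """PMI of two terms based on sentence-level co-occurrence."""
    p_xy = pair_count[frozenset((w1, w2))] / n
    p_x, p_y = word_count[w1] / n, word_count[w2] / n
    return math.log2(p_xy / (p_x * p_y)) if p_xy > 0 else float("-inf")

print(round(pmi("gene", "protein"), 3), round(pmi("protein", "membrane"), 3))
```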

Relevance: 80.00%

Abstract:

Phospholipids are complex and varied biomolecules that are susceptible to lipid peroxidation after attack by free radicals or electrophilic oxidants and can yield a large number of different oxidation products. There are many available methods for detecting phospholipid oxidation products, but also various limitations and problems. Electrospray ionization mass spectrometry allows the simultaneous but specific analysis of multiple species with good sensitivity and has a further advantage that it can be coupled to liquid chromatography for separation of oxidation products. Here, we explain the principles of oxidized phospholipid analysis by electrospray mass spectrometry and describe fragmentation routines for surveying the structural properties of the analytes, in particular precursor ion and neutral loss scanning. These allow targeted detection of phospholipid headgroups and identification of phospholipids containing hydroperoxides and chlorine, as well as the detection of some individual oxidation products by their specific fragmentation patterns. We describe instrument protocols for carrying out these survey routines on a QTrap5500 mass spectrometer and also for interfacing with reverse-phase liquid chromatography. The article highlights critical aspects of the analysis as well as some limitations of the methodology.
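
The filtering logic behind the two survey routines can be illustrated post hoc on precursor/fragment pairs, as in the Python sketch below; the m/z values, neutral-loss mass, and tolerance are placeholders, not a validated acquisition method or the protocol described in the article.

```python
# Post-acquisition illustration of the two survey logics described above:
# a precursor ion scan keeps precursors that yield a chosen fragment m/z, and
# a neutral loss scan keeps precursors that lose a chosen neutral mass.
# The m/z values and tolerance below are placeholders for illustration only.
TOLERANCE = 0.5  # m/z tolerance in Da

def precursor_ion_scan(spectra, fragment_mz):
    """Return precursor m/z values whose MS/MS spectrum contains fragment_mz."""
    return [prec for prec, frags in spectra
            if any(abs(f - fragment_mz) <= TOLERANCE for f in frags)]

def neutral_loss_scan(spectra, loss):
    """Return precursor m/z values showing a fragment at (precursor - loss)."""
    return [prec for prec, frags in spectra
            if any(abs((prec - f) - loss) <= TOLERANCE for f in frags)]

# Each entry: (precursor m/z, list of fragment m/z values observed in MS/MS).
spectra = [
    (760.6, [184.1, 577.5]),
    (718.5, [577.5, 603.5]),
]
print(precursor_ion_scan(spectra, 184.1))   # headgroup-diagnostic fragment
print(neutral_loss_scan(spectra, 141.0))    # loss of a chosen neutral mass
```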

Relevance: 80.00%

Abstract:

We propose a novel framework in which an initial classifier is learned by incorporating prior information extracted from an existing sentiment lexicon. Preferences on the expected sentiment labels of those lexicon words are expressed using generalized expectation criteria. Documents classified with high confidence are then used as pseudo-labeled examples for automatic domain-specific feature acquisition. The word-class distributions of such self-learned features are estimated from the pseudo-labeled examples and are used to train another classifier by constraining the model's predictions on unlabeled instances. Experiments on both the movie-review data and the multi-domain sentiment dataset show that our approach attains comparable or better performance than existing weakly-supervised sentiment classification methods despite using no labeled documents.

Relevance: 80.00%

Abstract:

In the developed world we are surrounded by man-made objects, but most people give little thought to the complex processes needed for their design. The design of hand knitting is complex because much of the domain knowledge is tacit. The objective of this thesis is to devise a methodology to help designers work within design constraints whilst facilitating creativity. A hybrid solution combining computer-aided design (CAD) and case-based reasoning (CBR) is proposed. The CAD system creates designs using domain-specific rules, and these designs are employed for the initial seeding of the case base and the management of constraints. CBR reuses the designer's previous experience. The key aspects of the CBR system are measuring the similarity of cases and adapting past solutions to the current problem. Similarity is measured by asking the user to rank the importance of features; the ranks are then used to calculate weights for an algorithm which compares the specifications of designs. A novel adaptation operator called rule difference replay (RDR) is created. When the specification for a new design is presented, the CAD program uses it to construct a design constituting an approximate solution. The most similar design from the case base is then retrieved, and RDR replays the changes previously made to the retrieved design on the new solution. A measure of solution similarity that can validate subjective success scores is created. Specification similarity can be used as a guide to whether to invoke CBR in a hybrid CAD-CBR system: if the new design is sufficiently similar to a previous design, then CBR is invoked; otherwise CAD is used. The application of RDR to knitwear design has demonstrated the flexibility to overcome deficiencies in rules that try to automate creativity, and has the potential to be applied to other domains such as interior design.
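
A small sketch of the retrieval step only, assuming rank-derived weights and exact-match feature comparison; the feature names, weighting formula, and case base are invented, and the RDR adaptation operator itself is not reproduced here.

```python
# Sketch of the retrieval step: user-supplied feature ranks are turned into
# weights and used in a weighted similarity over design specifications.
# Features, formula, and cases are illustrative; RDR adaptation is omitted.
def rank_weights(ranks: dict) -> dict:
    """Convert ranks (1 = most important) into normalised weights."""
    inv = {f: 1.0 / r for f, r in ranks.items()}
    total = sum(inv.values())
    return {f: v / total for f, v in inv.items()}

def similarity(spec_a: dict, spec_b: dict, weights: dict) -> float:
    """Weighted proportion of matching feature values between two specs."""
    return sum(w for f, w in weights.items() if spec_a.get(f) == spec_b.get(f))

ranks = {"garment_type": 1, "yarn_weight": 2, "stitch_pattern": 3}
weights = rank_weights(ranks)

case_base = [
    {"garment_type": "sweater", "yarn_weight": "aran", "stitch_pattern": "cable"},
    {"garment_type": "scarf", "yarn_weight": "aran", "stitch_pattern": "rib"},
]
query = {"garment_type": "sweater", "yarn_weight": "aran", "stitch_pattern": "moss"}

best = max(case_base, key=lambda c: similarity(query, c, weights))
print(best, round(similarity(query, best, weights), 3))
```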

Relevance: 80.00%

Abstract:

Background: There is substantial evidence that cognitive deficits and brain structural abnormalities are present in patients with Bipolar Disorder (BD) and in their first-degree relatives. Previous studies have demonstrated associations between cognition and functional outcome in BD patients but have not examined the role of brain morphological changes. Similarly, the functional impact of either cognition or brain morphology in relatives remains unknown. Therefore, we focused on delineating the relationship between psychosocial functioning, cognition and brain structure, in relation to disease expression and genetic risk for BD. Methods: Clinical, cognitive and brain structural measures were obtained from 41 euthymic BD patients and 50 of their unaffected first-degree relatives. Psychosocial function was evaluated using the Global Assessment of Functioning (GAF) scale. We examined the relationship between level of functioning and general intellectual ability (IQ), memory, attention, executive functioning, symptomatology, illness course and total gray matter, white matter and cerebrospinal fluid volumes. Limitations: Cross-sectional design. Results: Multiple regression analyses revealed that IQ, total white matter volume and a predominantly depressive illness course were independently associated with functional outcome in BD patients, but not in their relatives, and accounted for a substantial proportion (53%) of the variance in patients' GAF scores. There were no significant domain-specific associations between cognition and outcome after consideration of IQ. Conclusions: Our results emphasise the role of IQ and white matter integrity in relation to outcome in BD and carry significant implications for treatment interventions. © 2010 Elsevier B.V.