897 results for (hyper)text
Abstract:
B cell development and humoral immune responses are controlled by signaling thresholds established through the B lymphocyte antigen receptor (BCR) complex. BCR signaling thresholds are differentially regulated by the CD22 and CD19 cell surface receptors in vivo. B cells from CD22-deficient mice exhibit characteristics of chronic stimulation and are hyper-responsive to BCR crosslinking with augmented intracellular Ca2+ responses. By contrast, B cells from CD19-deficient mice are hypo-responsive to transmembrane signals. To identify signaling molecules involved in the positive and negative regulation of signaling thresholds, the signal transduction pathways activated after BCR crosslinking were examined in CD22- and CD19-deficient B cells. These comparisons revealed that tyrosine phosphorylation of Vav protein was uniquely augmented after BCR or CD19 crosslinking in CD22-deficient B cells, yet was modest and transient after BCR crosslinking in CD19-deficient B cells. Ligation of CD19 and CD22 in vivo is likely to positively and negatively regulate BCR signaling, respectively, because CD19 crosslinking was more efficient than BCR crosslinking at inducing Vav phosphorylation. However, simultaneous crosslinking of CD19 with the BCR resulted in a substantial decrease in Vav phosphorylation when CD22 was expressed. Thus, the differential regulation of Vav tyrosine phosphorylation by CD19 and CD22 may provide a molecular mechanism for adjusting BCR signaling thresholds.
Abstract:
Mitogen-activated protein (MAP) kinases are pivotal components of eukaryotic signaling cascades. Phosphorylation of tyrosine and threonine residues activates MAP kinases, but either dual-specificity or monospecificity phosphatases can inactivate them. The Candida albicans CPP1 gene, a structural member of the VH1 family of dual-specificity phosphatases, was previously cloned by its ability to block the pheromone response MAP kinase cascade in Saccharomyces cerevisiae. Cpp1p inactivated mammalian MAP kinases in vitro and acted as a tyrosine-specific enzyme. In C. albicans, a MAP kinase cascade can trigger the transition from the budding yeast form to a more invasive filamentous form. Disruption of the CPP1 gene in C. albicans derepressed the yeast-to-hyphal transition at ambient temperatures on solid surfaces. A hyphal growth rate defect under physiological conditions in vitro was also observed and could explain a reduction in virulence, associated with reduced fungal burden in the kidneys, seen in a systemic mouse model. A hyper-hyphal pathway may thus have some detrimental effects on C. albicans cells. Disruption of the MAP kinase homologue CEK1 suppressed the morphological effects of the CPP1 disruption in C. albicans. The results presented here demonstrate the biological importance of a tyrosine phosphatase in cell-fate decisions and virulence in C. albicans.
Abstract:
The HIV Reverse Transcriptase and Protease Sequence Database is an on-line relational database that catalogs evolutionary and drug-related sequence variation in the human immunodeficiency virus (HIV) reverse transcriptase (RT) and protease enzymes, the molecular targets of anti-HIV therapy (http://hivdb.stanford.edu). The database contains a compilation of nearly all published HIV RT and protease sequences, including submissions from International Collaboration databases and sequences published in journal articles. Sequences are linked to data about the source of the sequence sample and the antiretroviral drug treatment history of the individual from whom the isolate was obtained. During the past year, 3500 sequences have been added, and the data model has been expanded to include drug susceptibility data on sequenced isolates. Database content has also been integrated with didactic text and the output of two sequence analysis programs.
Abstract:
The rise and growth of large Jewish law firms in New York City during the second half of the twentieth century was nothing short of an astounding success story. As late as 1950, there was not a single large Jewish law firm in town. By the mid-1960s, six of the largest twenty law firms were Jewish, and by 1980, four of the largest ten prestigious law firms were Jewish firms. Moreover, the accomplishment of the Jewish firms is especially striking because, while the traditional large White Anglo-Saxon Protestant law firms grew at a fast rate during this period, the Jewish firms grew twice as fast, and they did so in spite of experiencing explicit discrimination. What happened? This book chapter is a revised, updated study of the rise and growth of large New York City Jewish law firms. It is based on the public record, with respect to both the law firms themselves and trends in the legal profession generally, and on over twenty in-depth interviews with lawyers who either founded and practiced at these successful Jewish firms, attempted and failed to establish such firms, or were in a position to join these firms but decided instead to join WASP firms. According to the informants interviewed for this chapter, while Jewish law firms benefited from a general decline in anti-Semitism and increased demand for corporate legal services, a unique combination of factors explains the incredible rise of the Jewish firms. First, white-shoe ethos caused large WASP firms to stay out of undignified practice areas and effectively created pockets of Jewish practice areas, where the Jewish firms encountered little competition for their services. Second, discriminatory hiring and promotion practices by the large WASP firms helped create a large pool of talented Jewish lawyers from which the Jewish firms could easily recruit. Finally, the Jewish firms benefited from a "flip side of bias" phenomenon; that is, they benefited from the positive consequences of stereotyping.
Paradoxically, the very success of the Jewish firms is reflected in their demise by the early twenty-first century: because systematic large law firm ethno-religious discrimination against Jewish lawyers has become a thing of the past, the very reason for the existence of Jewish law firms has been nullified. As other minority groups, however, continue to struggle for equality within the senior ranks of Big Law, can the experience of the Jewish firms serve as a "separate-but-equal" blueprint for overcoming contemporary forms of discrimination for women, racial minorities, and other minority attorneys? Perhaps not. As this chapter establishes, the success of large Jewish law firms was the result of unique conditions and circumstances between 1945 and 1980, which are unlikely to be replicated. For example, large law firms have become hyper-competitive and are not likely to allow any newcomers the benefit of protected pockets of practice. While smaller "separate-but-equal" specialized firms (for instance, ones exclusively hiring lawyer-mothers) occasionally appear, the rise of large "separate-but-equal" firms is improbable.
Abstract:
The goal of the project is to analyze, experiment with, and develop intelligent, interactive and multilingual Text Mining technologies as a key element of the next generation of search engines: systems with the capacity to find "the need behind the query". This new generation will provide specialized services and interfaces according to the search domain and the type of information needed. Moreover, it will integrate textual search (websites) with multimedia search (images, audio, video), and it will be able to find and organize information rather than merely generate ranked lists of websites.
Abstract:
In this paper we address two issues. The first is whether the performance of a text summarization method depends on the topic of a document. The second concerns how certain linguistic properties of a text may affect the performance of a number of automatic text summarization methods. To this end, we consider semantic analysis methods, such as textual entailment and anaphora resolution, and we study how they relate to the proper noun, pronoun and noun ratios calculated over original documents grouped into related topics. Given the results obtained, we conclude that although our first hypothesis is not supported, since no evident relationship was found between the topic of a document and the performance of the methods employed, adapting summarization systems to the linguistic properties of input documents benefits the summarization process.
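The linguistic ratios mentioned in this abstract can be computed directly from POS-tagged text. The following is an illustrative sketch (not the authors' code), assuming Penn Treebank tags; in practice a POS tagger such as NLTK's or spaCy's would supply the (token, tag) pairs.

```python
# Illustrative sketch: proper noun, pronoun and noun ratios over a
# POS-tagged document, using Penn Treebank tag conventions.

def tag_ratios(tagged_tokens):
    """Return proper-noun, pronoun and noun ratios over all tokens."""
    n = len(tagged_tokens)
    proper = sum(1 for _, t in tagged_tokens if t in ("NNP", "NNPS"))
    pron = sum(1 for _, t in tagged_tokens if t in ("PRP", "PRP$"))
    noun = sum(1 for _, t in tagged_tokens if t in ("NN", "NNS", "NNP", "NNPS"))
    return {"proper_noun": proper / n, "pronoun": pron / n, "noun": noun / n}

doc = [("Alice", "NNP"), ("reads", "VBZ"), ("papers", "NNS"),
       ("and", "CC"), ("she", "PRP"), ("summarizes", "VBZ"), ("them", "PRP")]
print(tag_ratios(doc))
```

Such ratios, averaged over documents within a topic, could then be correlated with summarizer performance as the paper describes.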
Abstract:
This article analyzes the appropriateness of a text summarization system, COMPENDIUM, for generating abstracts of biomedical papers. Two approaches are suggested: an extractive one (COMPENDIUM E), which only selects and extracts the most relevant sentences of the documents, and an abstractive-oriented one (COMPENDIUM E–A), thus also facing the challenge of abstractive summarization. This novel strategy combines extractive information with pieces of information from the article that have been previously compressed or fused. Specifically, in this article we want to study: i) whether COMPENDIUM produces good summaries in the biomedical domain; ii) which summarization approach is more suitable; and iii) the opinion of real users towards automatic summaries. Therefore, two types of evaluation were performed, quantitative and qualitative, assessing both the information contained in the summaries and user satisfaction. Results show that extractive and abstractive-oriented summaries perform similarly with respect to the information they contain, so both approaches are able to keep the relevant information of the source documents, but the latter is more appropriate from a human perspective when a user satisfaction assessment is carried out. This also confirms the suitability of the suggested approach for generating summaries following an abstractive-oriented paradigm.
Abstract:
This paper describes a module for the prediction of emotions in text chats in Spanish, intended for use in domain-specific text-to-speech systems. A general overview of the system is given, and the results of evaluations carried out with two corpora of real chat messages are described. These results seem to indicate that the system offers performance similar to other systems described in the literature, on a more complex task (identification of emotions and emotional intensity in the chat domain).
Abstract:
In this paper, we present a Text Summarisation tool, compendium, capable of generating the most common types of summaries. Regarding the input, single- and multi-document summaries can be produced; as for the output, the summaries can be extractive or abstractive-oriented; and finally, concerning their purpose, the summaries can be generic, query-focused, or sentiment-based. The proposed architecture for compendium is divided into several stages, making a distinction between core and additional stages. The former constitute the backbone of the tool and are common to the generation of any type of summary, whereas the latter are used to enhance the capabilities of the tool. The main contributions of compendium with respect to state-of-the-art summarisation systems are that (i) it specifically deals with the problem of redundancy by means of textual entailment; (ii) it combines statistical and cognitive-based techniques for determining relevant content; and (iii) it proposes an abstractive-oriented approach to face the challenge of abstractive summarisation. The evaluation performed in different domains and textual genres, comprising traditional texts as well as texts extracted from the Web 2.0, shows that compendium is very competitive and appropriate for use as a tool for generating summaries.
Abstract:
The great amount of text produced every day on the Web has turned it into one of the main sources for obtaining linguistic corpora, which are then analyzed with Natural Language Processing techniques. On a global scale, languages such as Portuguese (official in 9 countries) appear on the Web in several varieties, with lexical, morphological and syntactic (among other) differences. Moreover, a unified spelling system for Portuguese has recently been approved, and its implementation process has already started in some countries. However, it will take several years, so different varieties and spelling systems coexist. Since PoS-taggers for Portuguese are built specifically for a particular variety, this work analyzes different combinations of training corpora and lexica aimed at building a model with high-precision annotation across several varieties and spelling systems of this language. Moreover, this paper presents different dictionaries of the new orthography (Spelling Agreement) as well as a new freely available testing corpus containing different varieties and textual typologies.
Abstract:
Automatic Text Summarization has been shown to be useful for Natural Language Processing tasks, such as Question Answering or Text Classification, and for related fields of computer science, such as Information Retrieval. Since Geographical Information Retrieval can be considered an extension of Information Retrieval, the generation of summaries could be integrated into these systems as an intermediate stage, with the purpose of reducing document length. In this manner, the access time for information searching is improved, while relevant documents are still retrieved. Therefore, in this paper we propose the generation of two types of summaries (generic and geographical), applying several compression rates in order to evaluate their effectiveness in the Geographical Information Retrieval task. The evaluation was carried out using GeoCLEF as the evaluation framework, following an Information Retrieval perspective and without considering the geo-reranking phase commonly used in these systems. Although single-document summarization did not perform well in general, the slight improvements obtained for some types of the proposed summaries, particularly those based on geographical information, lead us to believe that the integration of Text Summarization with Geographical Information Retrieval may be beneficial; consequently, the experimental set-up developed in this research work serves as a basis for further investigation in this field.
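The compression rates mentioned above bound the summary to a fraction of the document's length. A minimal sketch of extractive summarization under such a rate, assuming sentence relevance scores are supplied by some upstream ranking model (the scoring itself is outside this fragment):

```python
# Hypothetical sketch: select the highest-scoring sentences, in original
# order, until the summary reaches the target fraction of the document's
# word count (the compression rate).

def summarize(sentences, scores, compression_rate):
    """Extract sentences whose total length fits the compression budget."""
    total_words = sum(len(s.split()) for s in sentences)
    budget = total_words * compression_rate
    chosen, used = set(), 0
    # Greedily take sentences from highest to lowest score.
    for i in sorted(range(len(sentences)), key=lambda i: -scores[i]):
        w = len(sentences[i].split())
        if used + w <= budget:
            chosen.add(i)
            used += w
    # Emit the selection in the original document order.
    return [sentences[i] for i in sorted(chosen)]

docs = ["Rome is in Italy.", "Paris is in France.", "Both are capitals in Europe."]
print(summarize(docs, [0.9, 0.5, 0.7], 0.5))
```

In a Geographical Information Retrieval setting, the resulting shortened documents would then be indexed in place of the full texts.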
Abstract:
One of the main challenges to be addressed in text summarization is the detection of redundant information. This paper presents a detailed analysis of three methods for achieving this goal. The proposed methods rely on different levels of language analysis: lexical, syntactic and semantic. Moreover, they are also analyzed for detecting relevance in texts. The results show that semantic-based methods are able to detect up to 90% of the redundancy, compared to only 19% for lexical-based ones. This is also reflected in the quality of the generated summaries: better summaries are obtained when syntactic- or semantic-based approaches are employed to remove redundancy.
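As an illustration of the lexical level of analysis discussed above (not the paper's actual implementation), a simple redundancy check can flag two sentences as redundant when their word overlap (Jaccard similarity) exceeds a threshold; syntactic- and semantic-level methods would instead compare parse structures or meaning representations.

```python
# Minimal sketch of lexical-level redundancy detection via word overlap.

def jaccard(a, b):
    """Jaccard similarity of the word sets of two sentences."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def is_redundant(sent_a, sent_b, threshold=0.5):
    """Flag a sentence pair as redundant when word overlap is high."""
    return jaccard(sent_a, sent_b) >= threshold

print(is_redundant("the cat sat on the mat", "the cat sat on a mat"))  # True
print(is_redundant("dogs bark loudly", "cats purr softly"))            # False
```

The weakness the paper quantifies is visible here: paraphrases with little word overlap ("he passed away" vs. "he died") escape a purely lexical check, which is why semantic-based methods detect far more redundancy.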
Abstract:
In recent years, Twitter has become one of the most important microblogging services of the Web 2.0. Among its possible uses, it can be employed for communicating and broadcasting information in real time. The goal of this research is to analyze the task of automatic tweet generation from a text summarization perspective in the context of the journalism genre. To achieve this, different state-of-the-art summarizers are selected and employed to produce multi-lingual tweets in two languages (English and Spanish). A wide experimental framework is proposed, comprising the creation of a new corpus, the generation of the automatic tweets, and their assessment through a quantitative and a qualitative evaluation, where informativeness, indicativeness and interest are key criteria that should be ensured in the proposed context. From the results obtained, it was observed that although the original tweets were considered model tweets with respect to their informativeness, they were not among the most interesting ones from a human viewpoint. Therefore, relying only on these tweets may not be the ideal way to communicate news through Twitter, especially if a more personalized and catchy way of reporting news is desired. In contrast, we showed that recent text summarization techniques may be more appropriate, reflecting a balance between indicativeness and interest, even if their content differed from the tweets delivered by the news providers.
Abstract:
In the past years, an important volume of research in Natural Language Processing has concentrated on the development of automatic systems to deal with affect in text. Most approaches have dealt with explicit expressions of emotion at the word level. Nevertheless, expressions of emotion are often implicit, inferable from situations that have an affective meaning. Dealing with this phenomenon requires automatic systems to have "knowledge" of the situation, the concepts it describes and their interaction, in order to "judge" it in the same manner as a person would. This necessity motivated us to develop the EmotiNet knowledge base: a resource for the detection of emotion in text based on commonsense knowledge about concepts, their interaction and their affective consequences. In this article, we briefly present the process undergone to build EmotiNet and subsequently propose methods to extend the knowledge it contains. We further analyse the performance of implicit affect detection using this resource, comparing the results obtained with EmotiNet to those of alternative methods for affect detection. Following the evaluations, we conclude that the structure and content of EmotiNet are appropriate for addressing the automatic treatment of implicitly expressed affect, that the knowledge it contains can easily be extended, and that, overall, methods employing EmotiNet obtain better results than traditional emotion detection approaches.