934 results for "Textual entailment"
Abstract:
Architecture Description Languages (ADLs) have emerged in recent years as a tool for providing high-level descriptions of software systems in terms of their architectural elements and the relationships among them. Most current ADLs exhibit limitations that prevent their widespread use in industrial applications. In this paper, we discuss these limitations and introduce ALI, an ADL developed to address them. The ALI language provides a rich and flexible syntax for describing component interfaces, architectural patterns, and meta-information. Multiple graphical architectural views can then be derived from ALI's textual notation.
Abstract:
Recent research on the textual tradition of Latin versions of the Testimonium Flavianum prompts another enquiry into the original text and the transmission of the famous passage. It is suggested here that the Greek/Latin versions highlight a western/eastern early history of the Testimonium, and that in turn directs our attention back to the original circumstances of its composition and publication in the city of Rome in the later years of the first century. Restored to its original historical context, the Testimonium emerges as a carefully crafted attack upon the post-Pauline community of Christ-followers in the city.
Abstract:
Belief revision is the process that incorporates, in a consistent way, a new piece of information, called the input, into a belief base. When both belief bases and inputs are propositional formulas, a set of natural and rational properties, known as the AGM postulates, has been proposed to define genuine revision operations. This paper addresses the following important issue: how to revise partially pre-ordered information (representing initial beliefs) with new partially pre-ordered information (representing inputs) while preserving the AGM postulates? We first provide a particular representation of partial pre-orders (called units) using the concept of closed sets of units. Then we restate the AGM postulates in this framework by defining counterparts of the notions of logical entailment and logical consistency. In the second part of the paper, we provide examples of revision operations that respect our set of postulates. We also prove that our revision methods extend well-known lexicographic revision and natural revision for both cases where the input is either a single propositional formula or a total pre-order.
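For reference, the propositional postulates this paper generalises are usually stated in the Katsuno-Mendelzon form. The following is that classical formulation for a belief base $\psi$, input $\mu$, and revision operator $\circ$ (the standard version, not the paper's unit-based counterparts):

```latex
% Katsuno--Mendelzon formulation of the AGM revision postulates
\begin{align*}
&(\mathrm{R1})\ \ \psi \circ \mu \models \mu \\
&(\mathrm{R2})\ \ \text{if } \psi \wedge \mu \text{ is satisfiable, then } \psi \circ \mu \equiv \psi \wedge \mu \\
&(\mathrm{R3})\ \ \text{if } \mu \text{ is satisfiable, then } \psi \circ \mu \text{ is satisfiable} \\
&(\mathrm{R4})\ \ \text{if } \psi_1 \equiv \psi_2 \text{ and } \mu_1 \equiv \mu_2 \text{, then } \psi_1 \circ \mu_1 \equiv \psi_2 \circ \mu_2 \\
&(\mathrm{R5})\ \ (\psi \circ \mu) \wedge \varphi \models \psi \circ (\mu \wedge \varphi) \\
&(\mathrm{R6})\ \ \text{if } (\psi \circ \mu) \wedge \varphi \text{ is satisfiable, then } \psi \circ (\mu \wedge \varphi) \models (\psi \circ \mu) \wedge \varphi
\end{align*}
```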
Abstract:
This book is a hands-on study skills guide that explores how film and moving image can be used as sources. It is aimed at those who want to use film and moving image as the basis for research and offers advice on research methods, theory and methodology, archival work and film-based analysis. It draws on the disciplines of film and history to offer advice for students and researchers in these fields.
The book includes sections on working with different kinds of moving images, how to explore visual sources, how to undertake film-related research and how to use film theory. In addition to providing detailed case studies, the guide also offers advice on research, writing and studying, creating a methodology, visiting archives, accessing material and exploring films from a historical perspective. The guide's focus is on good research practice, whether it be conducting an interview, visiting an archive, undertaking textual analysis or defining a research question.
Abstract:
With the proliferation of geo-positioning and geo-tagging techniques, spatio-textual objects that possess both a geographical location and a textual description are gaining in prevalence, and spatial keyword queries that exploit both location and textual description are gaining in prominence. However, the queries studied so far generally focus on finding individual objects that each satisfy a query, rather than groups of objects that together satisfy a query.
We define the problem of retrieving a group of spatio-textual objects such that the group's keywords cover the query's keywords and such that the objects are nearest to the query location and have the smallest inter-object distances. Specifically, we study three instantiations of this problem, all of which are NP-hard. We devise exact solutions as well as approximate solutions with provable approximation bounds. In addition, we solve the problem of retrieving the top-k groups for the three instantiations, and study a weighted version of the problem that incorporates object weights. We present empirical studies that offer insight into the efficiency of the solutions, as well as the accuracy of the approximate solutions.
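To give a feel for the approximation style such NP-hard group-retrieval problems invite, here is a minimal greedy sketch in Python (an illustrative heuristic under simplified assumptions, not the paper's algorithms: it scores candidates only by distance to the query per newly covered keyword and ignores inter-object distances):

```python
import math

def dist(p, q):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_group(query_loc, query_keywords, objects):
    """Greedily pick spatio-textual objects until their keywords
    jointly cover the query keywords, preferring nearby objects.

    objects: list of (location, keyword_set) pairs.
    Returns the chosen group, or None if coverage is impossible.
    """
    uncovered = set(query_keywords)
    group = []
    while uncovered:
        best, best_cost = None, float("inf")
        for loc, kws in objects:
            gain = len(uncovered & kws)
            if gain == 0:
                continue
            # distance paid per newly covered query keyword
            cost = dist(query_loc, loc) / gain
            if cost < best_cost:
                best, best_cost = (loc, kws), cost
        if best is None:
            return None  # some query keyword appears in no object
        group.append(best)
        uncovered -= best[1]
    return group
```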
Abstract:
Massive amounts of data that are geo-tagged and associated with text information are being generated at an unprecedented scale. These geo-textual data cover a wide range of topics. Users are interested in receiving up-to-date tweets whose locations are close to a user-specified location and whose texts are interesting to them. For example, a user may want to be updated with tweets near her home on the topic "food poisoning vomiting." We consider the Temporal Spatial-Keyword Top-k Subscription (TaSK) query. Given a TaSK query, we continuously maintain up-to-date top-k most relevant results over a stream of geo-textual objects (e.g., geo-tagged Tweets). The TaSK query takes text relevance, spatial proximity, and recency into account when evaluating the relevance of a geo-textual object. We propose a novel solution to efficiently process a large number of TaSK queries over a stream of geo-textual objects. We evaluate the efficiency of our approach on two real-world datasets, and the experimental results show that our solution reduces processing time by 70-80% compared with two baselines.
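As a rough illustration of how the three relevance factors might be combined, here is a minimal Python sketch (the linear weighting and the exponential recency decay are assumptions for exposition, not the paper's actual ranking function):

```python
import math
import time

def task_score(query, obj, alpha=0.4, beta=0.4, gamma=0.2, decay=1e-4):
    """Score a geo-textual object against a TaSK-style subscription
    by mixing text relevance, spatial proximity, and recency.

    query: {'keywords': set, 'location': (x, y)}
    obj:   {'terms': set, 'location': (x, y), 'timestamp': seconds}
    """
    # text relevance: fraction of query keywords the object contains
    text_rel = len(query["keywords"] & obj["terms"]) / len(query["keywords"])
    # spatial proximity: inverse distance squashed into (0, 1]
    dx = query["location"][0] - obj["location"][0]
    dy = query["location"][1] - obj["location"][1]
    proximity = 1.0 / (1.0 + math.hypot(dx, dy))
    # recency: exponential decay in the object's age
    recency = math.exp(-decay * (time.time() - obj["timestamp"]))
    return alpha * text_rel + beta * proximity + gamma * recency
```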
Abstract:
Paramedics are trained to use specialized medical knowledge and a variety of medical procedures and pharmaceutical interventions to “save patients and prevent further damage” in emergency situations, both as members of “health-care teams” in hospital emergency departments (Swanson, 2005: 96) and on the streets – unstandardized contexts “rife with chaotic, dangerous, and often uncontrollable elements” (Campeau, 2008: 3). The paramedic’s unique skill-set and ability to function in diverse situations have resulted in the occupation becoming ever more important to health care systems (Alberta Health and Wellness, 2008: 12).
Today, prehospital emergency services, while varying, exist in every major city and many rural areas throughout North America (Paramedics Association of Canada, 2008) and in other countries around the world (Roudsari et al., 2007). Services in North America, for instance, treat and/or transport 2 million Canadians (over 250,000 in Alberta alone) and between 25 and 30 million Americans annually (Emergency Medical Services Chiefs of Canada, 2006; National EMS Research Agenda, 2001). In Canada, paramedics make up one of the largest groups of health care professionals, with numbers exceeding 20,000 (Pike and Gibbons, 2008; Paramedics Association of Canada, 2008). However, little is known about the work practices of paramedics, especially in light of recent changes to how their work is organized, making the profession "rich with unexplored opportunities for research on the full range of paramedic work" (Campeau, 2008: 2).
This presentation reports on findings from an institutional ethnography that explored the work of paramedics and the different technologies of knowledge and governance that intersect with and organize their work practices. More specifically, the tentative focus of this presentation is on discussing some of the ruling discourses central to many of the technologies used on the front lines of EMS in Alberta, and the consequences of such governance practices for both front-line workers and their patients. In doing so, I will demonstrate how institutional ethnography (IE) can be used to answer Rankin and Campbell's (2006) call for additional research into "the social organization of information in health care and attention to the (often unintended) ways 'such textual products may accomplish…ruling purposes but otherwise fail people and, moreover, obscure that failure' (p. 182)" (cited in McCoy, 2008: 709).
Abstract:
Using institutional ethnography, a sociological and critical method of inquiry used primarily in North America, this presentation discusses new forms and technologies of knowledge and governance – "forms of language, technologies of representation and communication, and text-based, objectified modes of knowledge through which local particularities are interpreted or rendered actionable in abstract, translocal terms" (McCoy, 2008: 701) – on the front line of emergency medical services. I focus specifically on technologies central to health reforms that attempt to reshape how health care is delivered, experienced, and made accountable (Anantharaman, 2004; Ball, 2005; Alberta Health Services, 2008). In addition to exemplifying how institutional ethnography can be used to answer Rankin and Campbell's (2006) call for additional research into "the social organization of information in health care and attention to the (often unintended) ways 'such textual products may accomplish…ruling purposes but otherwise fail people and, moreover, obscure that failure' (p. 182)" (cited in McCoy, 2008: 709), this presentation will introduce the audience to a critical approach to social inquiry that explores how knowledge is socially organized.
Abstract:
One of the major challenges in systems biology is to understand the complex responses of a biological system to external perturbations or internal signalling depending on its biological conditions. Genome-wide transcriptomic profiling of cellular systems under various chemical perturbations allows certain features of the chemicals to manifest through their transcriptomic expression profiles. The insights obtained may help to establish connections between human diseases, associated genes and therapeutic drugs. The main objective of this study was to systematically analyse cellular gene expression data under various drug treatments to elucidate drug-feature-specific transcriptomic signatures. We first extracted drug-related information (drug features) from the collected textual descriptions of DrugBank entries using text-mining techniques. A novel statistical method employing orthogonal least square learning was proposed to obtain drug-feature-specific signatures by integrating gene expression with DrugBank data. To obtain robust signatures from noisy input datasets, a stringent ensemble approach was applied, combining three techniques: resampling, leave-one-out cross validation, and aggregation. The validation experiments showed that the proposed method is capable of extracting biologically meaningful drug-feature-specific gene expression signatures. Regulatory network analysis also showed that most of the signature genes are connected with common hub genes, and Gene Ontology analysis further related these common hub genes to general drug metabolism. Each set of genes has relatively few interactions with other sets, indicating the modular nature of each signature and its drug-feature specificity. Based on Gene Ontology analysis, we also found that each set of drug-feature (DF)-specific genes was indeed enriched in biological processes related to the drug feature. The results of these experiments demonstrate the potential of the method for predicting certain features of new drugs using their transcriptomic profiles, providing a useful methodological framework and a valuable resource for drug development and characterization.
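To make the selection step concrete, here is a minimal Python sketch of orthogonal-least-squares forward selection of the kind the method builds on (the resampling/leave-one-out/aggregation ensemble is omitted, and the error-reduction criterion shown is the textbook one, assumed rather than taken from the paper):

```python
import numpy as np

def ols_select(X, y, n_select):
    """Greedy orthogonal least squares forward selection.

    X: (n_samples, n_features) candidate regressors (e.g., genes).
    y: (n_samples,) response (e.g., a drug-feature indicator).
    Returns indices of columns ranked by the variance of y that
    each orthogonalised regressor explains (error-reduction ratio).
    """
    selected, basis = [], []
    for _ in range(n_select):
        best_j, best_err, best_w = -1, -1.0, None
        for j in range(X.shape[1]):
            if j in selected:
                continue
            w = X[:, j].astype(float)
            # Gram-Schmidt: orthogonalise against selected regressors
            for b in basis:
                w = w - (w @ b) / (b @ b) * b
            denom = w @ w
            if denom < 1e-12:
                continue  # linearly dependent on the current basis
            err = (w @ y) ** 2 / (denom * (y @ y))  # error-reduction ratio
            if err > best_err:
                best_j, best_err, best_w = j, err, w
        if best_j < 0:
            break
        selected.append(best_j)
        basis.append(best_w)
    return selected
```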
Abstract:
In the present study, native Spanish speakers were taught a small English vocabulary (Spanish-to-English intraverbals). Four different training conditions were created by combining textual and echoic prompts with written and vocal target responses. The efficiency of each training condition was examined by analysing emergent relations (i.e., tacts) and the total number of sessions required to reach mastery under each condition. All combinations of prompt-response modalities generated increases in correct responding on tests for emergent relations, but the mastery criterion was reached faster when target responses were written. Results are discussed in terms of efficiency for emergent relations, and recommendations for future directions are provided.
Abstract:
Accounting has been viewed, especially through the lens of the recent managerial reforms, as a neutral technology that, in the hands of rational managers, can support effective and efficient decision making. However, the introduction of new accounting practices can be framed in a variety of ways, from value-neutral procedures to ideologically charged instruments. Focusing on financial accounting, budgeting and performance management changes in the UK central government, and drawing on extensive textual analysis and interviews in three government departments, this paper investigates how accounting changes are discussed and introduced at the political level through the use of global discourses, and what strategies organisational actors subsequently use to talk about and legitimate such discourses at different organisational levels. The results show that political discussions display consistency between the discourses (largely NPM) and the accounting-related changes that took place. The research suggests that organisational actors used a cocktail of legitimation strategies to construct a sense of the changes, with authorisation, often combined with at least rationalisation, the most widely utilised. While previous literature posits that different actors tend to use the same rhetorical sequences during periods of change, this study highlights differences across organisational levels.
Abstract:
Master data management (MDM) integrates data from multiple structured data sources and builds a consolidated 360-degree view of business entities such as customers and products. Today's MDM systems are not prepared to integrate information from unstructured data sources, such as news reports, emails, call-center transcripts, and chat logs. However, those unstructured data sources may contain valuable information about the same entities known to MDM from the structured data sources. Integrating information from unstructured data into MDM is challenging as textual references to existing MDM entities are often incomplete and imprecise and the additional entity information extracted from text should not impact the trustworthiness of MDM data.
In this paper, we present an architecture for making MDM text-aware and showcase its implementation as IBM InfoSphere MDM Extension for Unstructured Text Correlation, an add-on to IBM InfoSphere Master Data Management Standard Edition. We highlight how MDM benefits from additional evidence found in documents when doing entity resolution and relationship discovery. We experimentally demonstrate the feasibility of integrating information from unstructured data sources into MDM.
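As a minimal sketch of the correlation step described above (linking an imprecise textual mention to a structured MDM record while keeping extracted text as evidence rather than overwriting trusted attributes), consider the following Python fragment. The record layout, similarity measure, and threshold are illustrative assumptions, not the product's API:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Fuzzy string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def correlate_mention(mention, mdm_records, threshold=0.85):
    """Link a textual mention (e.g., a name found in an email) to the
    best-matching MDM entity; attach it only as unstructured evidence
    so trusted structured attributes are never modified.
    """
    best, best_score = None, 0.0
    for record in mdm_records:
        score = similarity(mention["name"], record["name"])
        if score > best_score:
            best, best_score = record, score
    if best is not None and best_score >= threshold:
        best.setdefault("text_evidence", []).append(mention)
        return best
    return None  # mention is too imprecise to link safely
```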
Abstract:
Discussion forums have evolved into a dependable source of knowledge to solve common problems. However, only a minority of the posts in discussion forums are solution posts. Identifying solution posts from discussion forums, hence, is an important research problem. In this paper, we present a technique for unsupervised solution post identification leveraging a so far unexplored textual feature, that of lexical correlations between problems and solutions. We use translation models and language models to exploit lexical correlations and solution post character respectively. Our technique is designed to not rely much on structural features such as post metadata since such features are often not uniformly available across forums. Our clustering-based iterative solution identification approach based on the EM-formulation performs favorably in an empirical evaluation, beating the only unsupervised solution identification technique from literature by a very large margin. We also show that our unsupervised technique is competitive against methods that require supervision, outperforming one such technique comfortably.
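A toy Python sketch of the shape of such an EM loop, with unigram language models standing in for both model components (the paper additionally uses translation models over problem-solution pairs; the seeding heuristic and add-one smoothing here are assumptions for exposition):

```python
import math
from collections import Counter

def train_lm(cluster_posts, vocab_size):
    """Unigram language model with add-one smoothing."""
    counts = Counter(w for p in cluster_posts for w in p.split())
    total = sum(counts.values())
    return lambda w: (counts[w] + 1) / (total + vocab_size)

def em_solution_posts(posts, n_iters=10):
    """Alternate between assigning posts to 'solution'/'other'
    clusters and refitting a unigram LM per cluster."""
    vocab_size = len({w for p in posts for w in p.split()}) + 1
    # crude seed: posts mentioning typical solution vocabulary
    labels = ["solution" if ("try" in p or "fix" in p) else "other"
              for p in posts]
    for _ in range(n_iters):
        lms = {c: train_lm([p for p, l in zip(posts, labels) if l == c],
                           vocab_size)
               for c in ("solution", "other")}
        labels = [max(("solution", "other"),
                      key=lambda c: sum(math.log(lms[c](w))
                                        for w in p.split()))
                  for p in posts]
    return labels
```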
Abstract:
Massive amounts of data that are geo-tagged and associated with text information are being generated at an unprecedented scale. These geo-textual data cover a wide range of topics. Users are interested in receiving up-to-date geo-textual objects (e.g., geo-tagged Tweets) whose locations meet their needs and whose texts are interesting to them. For example, a user may want to be updated with tweets near her home on the topic "dengue fever headache." In this demonstration, we present SOPS, the Spatial-Keyword Publish/Subscribe System, which is capable of efficiently processing spatial keyword continuous queries. SOPS supports two types of queries: (1) the Boolean Range Continuous (BRC) query, which subscribes to geo-textual objects that satisfy a boolean keyword expression and fall in a specified spatial region; (2) the Temporal Spatial-Keyword Top-k Continuous (TaSK) query, which continuously maintains the up-to-date top-k most relevant results over a stream of geo-textual objects. SOPS enables users to formulate their queries and view real-time results over a stream of geo-textual objects through browser-based user interfaces. On the server side, we propose solutions to efficiently process a large number of BRC queries (tens of millions) and TaSK queries over a stream of geo-textual objects.
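To illustrate the per-object check a BRC subscription performs, here is a minimal Python sketch (the real system indexes tens of millions of subscriptions rather than testing each pair; the boolean keyword expression is simplified here to an AND group plus an optional OR group, an assumption for exposition):

```python
def matches_brc(obj, region, required_all=(), required_any=()):
    """Test a geo-textual object against a Boolean Range Continuous
    (BRC) subscription: inside the rectangle AND satisfying the
    simplified boolean keyword expression.

    obj: {'location': (x, y), 'terms': set of words}
    region: (xmin, ymin, xmax, ymax)
    """
    xmin, ymin, xmax, ymax = region
    x, y = obj["location"]
    if not (xmin <= x <= xmax and ymin <= y <= ymax):
        return False
    terms = obj["terms"]
    if not all(t in terms for t in required_all):
        return False
    return not required_any or any(t in terms for t in required_any)
```

A streaming server would evaluate each incoming object against indexed subscriptions rather than calling such a predicate pairwise, but the matching semantics are as above.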