975 results for "Textual-interactive strategy"
Abstract:
This paper investigates the language strategy and translation policies of Amnesty International by discussing the translation of a press release from a textual as well as an institutional point of view. Combining textual analysis with ethnographic methods of data collection and ideas from organisation studies, the paper aims to illustrate how the strategic use of language and translation plays a vital role in mediating the NGO's message and in contributing to its visibility and success. The findings of the textual analysis are contextualised within data collected at the local office of Amnesty International Vlaanderen to arrive at a better understanding of why particular translation strategies are applied. The idea of an NGO spreading one consistent message is questioned by showing how different translation strategies apply to different languages and sections, thereby addressing the difficulty of defining translation in the context of news translation.
Abstract:
Graduate Program in Letters - FCLAS
Abstract:
This text aims to elucidate the textual processing strategies that give meaning to the chronicle "The Miracle of the Leaves" by Clarice Lispector (1920-1977). It starts from the basic postulate that meaning does not reside in the verbal fabric or imagery itself; rather, meanings are constructed from the elements of language that are there to meet the reader (BRONCKART, 2009, p. 257). This reading strategy is justified because Lispector's chronicle, endowed with dialogic language, reveals multiple meanings. Textual processing is discussed in terms of a potentially youthful audience, an element that assigns characteristics relevant to understanding how this type of text functions for younger readers. To achieve this goal, we start from the assumption that the chronicle, being a literary text endowed with aesthetic validity, not only conveys a content but recreates it, adding new meanings to it.
Abstract:
Matita (Italian for "pencil") is a new interactive theorem prover under development at the University of Bologna. Compared with state-of-the-art proof assistants, Matita presents both traditional and innovative aspects. The underlying calculus of the system, namely the Calculus of (Co)Inductive Constructions (CIC for short), is well known and is used as the basis of another mainstream proof assistant, Coq, with which Matita is to some extent compatible. In the same spirit as several other systems, proof authoring is conducted by the user as a goal-directed proof search, using a script to store textual commands for the system. In the LCF tradition, the proof language of Matita is procedural and relies on tactics and tacticals to proceed toward proof completion. The interaction paradigm offered to the user is based on the script management technique that underlies the popularity of Proof General, the generic interface for interactive theorem provers: while editing a script the user can move the execution point forward to deliver commands to the system, or backward to retract (or "undo") past commands. Matita has been developed from scratch over the past 8 years by several members of the Helm research group, of whom the author of this thesis is one. Matita is now a full-fledged proof assistant with a library of about 1,000 concepts. Several innovative solutions spun off from this development effort. This thesis is about the design and implementation of some of those solutions, in particular those relevant to user interaction with theorem provers and to which this thesis' author was a major contributor. Joint work with other members of the research group is pointed out where needed. The main topics discussed in this thesis are briefly summarized below.

Disambiguation. Most activities connected with interactive proving require the user to input mathematical formulae. Since mathematical notation is ambiguous, parsing formulae typeset as mathematicians like to write them on paper is a challenging task, one neglected by several theorem provers, which usually prefer to fix an unambiguous input syntax. Exploiting features of the underlying calculus, Matita offers an efficient disambiguation engine which permits typing formulae in familiar mathematical notation.

Step-by-step tacticals. Tacticals are higher-order constructs used in proof scripts to combine tactics. With tacticals, scripts can be made shorter, more readable, and more resilient to changes. Unfortunately they are de facto incompatible with state-of-the-art user interfaces based on script management: such interfaces do not allow the execution point to be positioned inside complex tacticals, thus introducing a trade-off between the usefulness of structured scripts and a tedious big-step execution behavior during script replay. In Matita we break this trade-off with tinycals: an alternative to a subset of LCF tacticals which can be evaluated in a more fine-grained manner.

Extensible yet meaningful notation. Proof assistant users often need to create new mathematical notation in order to ease the use of new concepts. The framework used in Matita for extensible notation both accounts for high-quality bidimensional rendering of formulae (with the expressivity of MathML Presentation) and provides meaningful notation, where presentational fragments are kept synchronized with the semantic representation of terms. With our approach, interoperability with other systems can be achieved at the content level, and direct manipulation of formulae acting on their rendered forms is possible too.

Publish/subscribe hints. Automation plays an important role in interactive proving, as users like to delegate tedious proving sub-tasks to decision procedures or external reasoners. Exploiting the Web-friendliness of Matita, we experimented with a broker and a network of web services (called tutors) which can independently try to complete open sub-goals of the proof currently being authored in Matita. The user receives hints from the tutors on how to complete sub-goals and can apply them to the current proof, interactively or automatically.

Another innovative aspect of Matita, only marginally touched by this thesis, is the embedded content-based search engine Whelp, which is exploited to various ends, from automatic theorem proving to sparing the user duplicate work. We also discuss the (potential) reusability in other systems of the widgets presented in this thesis, and how we envisage the evolution of user interfaces for interactive theorem provers in the Web 2.0 era.
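To make the tacticals trade-off concrete, here is a minimal sketch in Lean 4 syntax rather than Matita's own proof language (the lemma and combinator choices are purely illustrative): the `<;>` and `first` tacticals glue several tactics into one compound script step, which a big-step interface must execute atomically and which tinycals-style evaluation can step through piecewise.

```lean
-- `constructor` splits the conjunction into two goals; the tactical
-- `<;>` then applies the `first | ... | ...` alternatives to each of
-- them. A script-management interface that cannot place the execution
-- point inside this compound step has to replay it as a single unit.
example (a b : Nat) : a + b = b + a ∧ a * b = b * a := by
  constructor <;> first | exact Nat.add_comm a b | exact Nat.mul_comm a b
```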
Abstract:
We investigated the effects of S2T1-6OTD, a novel telomestatin derivative synthesized to target G-quadruplex-forming DNA sequences, on a representative panel of human medulloblastoma (MB) and atypical teratoid/rhabdoid (AT/RT) childhood brain cancer cell lines. S2T1-6OTD proved to be a potent c-Myc inhibitor through its high-affinity physical interaction with the G-quadruplex structure in the c-Myc promoter. Treatment with S2T1-6OTD reduced the mRNA and protein expression of c-Myc and of hTERT, which is transcriptionally regulated by c-Myc, and decreased the activities of both genes. In remarkable contrast to control cells, short-term (72-hour) treatment with S2T1-6OTD resulted in a dose- and time-dependent antiproliferative effect in all MB and AT/RT brain tumor cell lines tested (IC50, 0.25-0.39 µmol/L). Under conditions where inhibition of both proliferation and c-Myc activity was observed, S2T1-6OTD treatment decreased the protein expression of the cell cycle activator cyclin-dependent kinase 2 and induced cell cycle arrest. Long-term treatment (5 weeks) with nontoxic concentrations of S2T1-6OTD resulted in a time-dependent (mainly c-Myc-dependent) telomere shortening. This was accompanied by cell growth arrest starting on day 28, followed by cell senescence and induction of apoptosis on day 35, in all five cell lines investigated. Pending in vivo animal testing, S2T1-6OTD may well represent a novel therapeutic strategy for childhood brain tumors.
Abstract:
In this article, it is shown that IWD incorporates topological perceptual characteristics of both spoken and written language, and it is argued that these characteristics should not be ignored or given up when synchronous textual CMC is technologically developed and upgraded.
Abstract:
OBJECTIVE To analyse the results after elective open total aortic arch replacement. METHODS We analysed 39 patients (median age 63 years, median logistic EuroSCORE 18.4) who underwent elective open total arch replacement between 2005 and 2012. RESULTS In-hospital mortality was 5.1% (n = 2) and perioperative neurological injury occurred in 12.8% (n = 5). The indication for surgery was degenerative aneurysmal disease in 59% (n = 23) and late aneurysm formation following previous surgery for type A aortic dissection in 35.9% (n = 14); 5.1% (n = 2) were due to anastomotic aneurysms after prior ascending aortic repair. Fifty-nine percent (n = 23) of the patients had already undergone previous proximal thoracic aortic surgery. In 30.8% (n = 12), a conventional elephant trunk was added to the total arch replacement; in 28.2% (n = 11), root replacement was additionally performed. Median hypothermic circulatory arrest time was 42 min (21-54 min). Selective antegrade cerebral perfusion was used in 95% (n = 37) of patients. Median follow-up was 11 months [interquartile range (IQR) 1-20 months]. There were no late deaths and no need for reoperation during this period. CONCLUSIONS Open total aortic arch replacement shows very satisfactory results. The proportion of patients undergoing total arch replacement as a redo procedure or as part of a complex multisegmental aortic pathology is high. Future strategies will have to emphasize neurological protection during extensive simultaneous replacement of the aortic arch and adjacent segments.
Abstract:
In free viewpoint applications, images are captured by an array of cameras that acquire a scene of interest from different perspectives. Any intermediate viewpoint not included in the camera array can be virtually synthesized by the decoder, at a quality that depends on the distance between the virtual view and the camera views available at the decoder. Hence, it is beneficial for any user to receive camera views that are close to each other for synthesis. This is, however, not always feasible in bandwidth-limited overlay networks, where every node may ask for different camera views. In this work, we propose an optimized delivery strategy for free viewpoint streaming over overlay networks. We introduce the concept of layered quality-of-experience (QoE), which describes the level of interactivity offered to clients. Based on these levels of QoE, camera views are organized into layered subsets. These subsets are then delivered to clients through a prioritized network coding streaming scheme, which accommodates network and client heterogeneity and effectively exploits the resources of the overlay network. Simulation results show that, in scenarios with limited bandwidth or channel reliability, the proposed method outperforms baseline network coding approaches in which the different levels of QoE are not taken into account in the delivery strategy optimization.
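As a rough illustration of the layered-subset idea (the dyadic layering rule and all names below are our own assumptions, not the paper's optimization), camera views can be grouped so that each additional layer a client receives halves the distance between the views available for synthesis:

```python
# Sketch: organize an array of camera views into layered QoE subsets.
# Layer 0 carries a coarse set of anchor views (baseline interactivity);
# each further layer adds the views halfway between already-delivered
# ones, refining the level of interactivity. Illustrative only; the
# paper's delivery strategy is an optimized network coding scheme.

def layered_view_subsets(num_views: int, num_layers: int) -> list[list[int]]:
    layers, sent = [], set()
    step = num_views - 1
    for _ in range(num_layers):
        layer = [v for v in range(0, num_views, max(step, 1)) if v not in sent]
        sent.update(layer)
        layers.append(layer)
        step //= 2
        if step == 0:
            break
    return layers

# Example: 9 cameras, 3 QoE layers -> [[0, 8], [4], [2, 6]]
print(layered_view_subsets(9, 3))
```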
Abstract:
This paper describes the participation of DAEDALUS in the ImageCLEF 2011 Medical Retrieval task. We have focused on multimodal (or mixed) experiments that combine textual and visual retrieval. The main objective of our research has been to evaluate the effect on the medical retrieval process of an extended corpus annotated with the image type, associated with both the image itself and its textual description. For this purpose, an image classifier has been developed to tag each document with its class (1st level of the hierarchy: Radiology, Microscopy, Photograph, Graphic, Other) and subclass (2nd level: AN, CT, MR, etc.). For the textual-based experiments, several runs using different semantic expansion techniques have been performed. For the visual-based retrieval, the different runs are defined by the corpus used in the retrieval process and the strategy for obtaining the class and/or subclass. The best results are achieved in runs that make use of the image subclass based on the classification of the sample images. Although different multimodal strategies have been submitted, none of them has proved able to provide results at least comparable to those achieved by textual retrieval alone. We believe that we have so far been unable to find a suitable metric for assessing the relevance of the results provided by the visual and textual processes.
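For context, a common multimodal baseline of the kind the abstract alludes to (our generic sketch, not necessarily the DAEDALUS runs) is a weighted late fusion of normalized textual and visual retrieval scores:

```python
# Sketch: weighted late fusion of textual and visual retrieval scores.
# A generic multimodal baseline, not DAEDALUS's exact method.

def min_max_normalize(scores: dict[str, float]) -> dict[str, float]:
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0  # avoid division by zero on constant runs
    return {doc: (s - lo) / span for doc, s in scores.items()}

def late_fusion(textual: dict[str, float], visual: dict[str, float],
                alpha: float = 0.7) -> list[tuple[str, float]]:
    """Combine per-document scores; alpha weights the textual run."""
    t, v = min_max_normalize(textual), min_max_normalize(visual)
    fused = {d: alpha * t.get(d, 0.0) + (1 - alpha) * v.get(d, 0.0)
             for d in set(t) | set(v)}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

print(late_fusion({"img1": 12.0, "img2": 7.5}, {"img2": 0.9, "img3": 0.4}))
```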
Abstract:
This paper presents a preliminary study in which Machine Learning experiments on Opinion Mining in blogs have been carried out. We created and annotated a blog corpus in Spanish using EmotiBlog. We evaluated the utility of the labelled features, first by carrying out experiments with combinations of them and second by applying feature selection techniques. We also dealt with several problems, such as the noisy character of the input texts, the small size of the training set, the granularity of the annotation scheme, and the language under study, Spanish, which has fewer resources than English. We obtained promising results considering that this is a preliminary study.
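As a generic illustration of the feature-selection step mentioned above (a scikit-learn sketch under our own assumptions, not the paper's EmotiBlog pipeline):

```python
# Sketch of a generic feature-selection step for opinion mining in
# Spanish blog text; illustrative only, not the paper's pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["me encanta este producto", "el servicio fue excelente",
         "qué servicio tan horrible", "no me gusta este producto"]
labels = [1, 1, 0, 0]  # 1 = positive opinion, 0 = negative

# Vectorize, keep the k features most associated with the labels,
# then train a linear classifier on the reduced representation.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      SelectKBest(chi2, k=10),
                      LinearSVC())
model.fit(texts, labels)
print(model.predict(["este producto es horrible"]))
```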
Abstract:
The Answer Validation Exercise (AVE) is a pilot track within the Cross-Language Evaluation Forum (CLEF) 2006. The AVE competition provides an evaluation framework for answer validation in Question Answering (QA). For our participation in AVE, we propose a system initially used for another task, Recognising Textual Entailment (RTE). The aim of our participation is to evaluate the improvement our system brings to QA. Moreover, because these two tasks (AVE and RTE) share the same core idea, namely finding semantic implications between two fragments of text, our system could be applied directly to the AVE competition. Our system is based on the representation of the texts by means of logic forms and on the computation of semantic comparisons between them. This comparison is carried out using two different approaches: the first is guided by a deeper study of the WordNet relations, while the second uses the measure defined by Lin to compute the semantic similarity between the logic-form predicates. Moreover, we have also designed a voting strategy between our system and the MLEnt system, also presented by the University of Alicante, with the aim of obtaining a joint execution of the two systems developed at the University of Alicante. Although the results obtained have not been very high, we consider them quite promising, and this supports the view that there is still much research to be done on all kinds of textual entailment.
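For reference, the measure defined by Lin is the information-theoretic score sim(c1, c2) = 2·IC(lcs(c1, c2)) / (IC(c1) + IC(c2)), where IC(c) = -log p(c) and lcs is the least common subsumer of the two concepts. A minimal sketch using NLTK's WordNet interface (applied to plain synsets here, rather than the authors' logic-form predicates):

```python
# Lin's similarity over WordNet via NLTK (requires the 'wordnet' and
# 'wordnet_ic' NLTK data packages). Plain synsets are compared here,
# not the logic-form predicates the paper's system uses.
from nltk.corpus import wordnet as wn
from nltk.corpus import wordnet_ic

ic = wordnet_ic.ic('ic-brown.dat')  # information content from the Brown corpus
dog, cat = wn.synset('dog.n.01'), wn.synset('cat.n.01')
print(dog.lin_similarity(cat, ic))  # ~0.88: closely related concepts
```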
Abstract:
The goal of the project is to analyze, experiment with, and develop intelligent, interactive and multilingual Text Mining technologies as a key element of the next generation of search engines: systems with the capacity to find "the need behind the query". This new generation will provide specialized services and interfaces according to the search domain and the type of information needed. Moreover, it will integrate textual search (websites) and multimedia search (images, audio, video), and it will be able to find and organize information rather than merely generating ranked lists of websites.
Abstract:
This paper argues for a more specific formal methodology for the textual analysis of individual game genres. In doing so, it advances a set of formal analytical tools and a theoretical framework for the analysis of turn-based computer strategy games. The analytical tools extend the useful work of Steven Poole, who suggests a Peircean semiotic approach to the study of games as formal systems. The theoretical framework draws upon postmodern cultural theory to analyse and explain the representation of space and the organisation of knowledge in these games. The methodology and theoretical framework are supported by a textual analysis of Civilization II, a significant and influential turn-based computer strategy game. Finally, the paper suggests possibilities for future extensions of this work.
Abstract:
Enhanced data services through mobile phones are expected soon to be fully transactional, interactive and embedded in other mobile consumption practices. While private services will continue to take the lead in the mobile data revolution, other actors such as governments and NGOs are becoming more prominent m-players. This paper adopts a qualitative case study approach, interpreting micro-level municipality officers' mobility concepts, ICT histories and choice practices for m-government services in Turkey. The findings highlight that in-situ ICT choice strategies are non-homogeneous, sometimes conflicting with each other, and that current strategies have not yet justified the necessity for municipality officers to engage with and fully commit to m-government efforts. Furthermore, beyond the success or failure of m-government initiatives, the mechanisms related to public administration mobile technical capacity building and knowledge transfer are identified as directly related to the likelihood of m-government engagement.
Abstract:
The development of new products in today's marketing environment is generally accepted as a requirement for the continual growth and prosperity of organisations. The literature is consequently rich with information on the development of various aspects of good products. In the case of service industries, it can be argued that new service product development is of at least equal importance as it is to organisations that produce tangible goods. Unlike the literature on new goods products, the literature on service marketing practices, and in particular on new service product development, is relatively sparse. The main purpose of this thesis is to examine a number of aspects of new service product development practice with respect to financial services and, specifically, credit card financial services. The empirical investigation utilises both a case study and a survey approach to examine aspects of new service product development practice in industry, relating specifically to gaps and deficiencies in the literature on the financial service industry. The findings of the empirical work are subsequently examined in the context of the guidance and support they provide for a new normative model of new service product development. The study examines the UK credit card financial service product sector as an industry case study and perspective. The findings of the field work reveal that the new service product development process is still evolving and that, in the case of credit card financial services, it can be seen as a well-structured and well-documented process. New product development can also be seen as an incremental, complex, interactive and continuous process which has been applied in a variety of ways. A number of inferences are subsequently presented.