746 results for Syntax
Abstract:
This study grew out of Classical Greek language classes devoted to the reading and translation of texts, with a computational component provided by the Alpheios digital platform. As part of a larger project, it aims to produce a translation aligned with the Portuguese and a syntactic annotation yielding Greek dependency trees. To this end, we chose Chapter 34 of Book 1 of Herodotus' Histories. The methods adopted include a list and description of the instruments and of the morphological categories taken from the guidelines (the manual of syntax-tree rules); the procedures adopted are the description of the alignment process and of the syntactic annotation. As results, we present the aligned translation and the syntactic annotation of excerpt 1.34. We conclude that the study of this text is relevant, since the lexical density of Herodotus will be of interest to future researchers and students working with the Alpheios digital platform.
Abstract:
In this paper, we reflect on the subordination process from a functionalist-cognitive approach. To this end, we analyze syntactic constructions in which the main-clause predicator is a speech-act verb, a mental-activity verb, or a perception verb. One of the pragmatic functions of these constructions is to express evidentiality, that is, the indication of the source of the information contained in a sentence. Evidentiality allows the Speaker to manage information so as to preserve his/her face, and allows the Addressee to assess the reliability of that information. We take the expression of evidentiality as a function of the subordination process in order to rethink the teaching of syntax as a tool for the effective development of students' communicative abilities.
Abstract:
Based on a functionalist approach, this paper analyzes the modalized expression pode ser both as a complement-taking predicate embedding a proposition (pode ser1) and as an independent structure (pode ser2), in contemporary written and spoken Brazilian Portuguese texts. We aim to identify degrees of (inter)subjectivity, revealing a process of (inter)subjectification (TRAUGOTT, 2010, among others). The analysis is supported by parameters of (inter)subjectivity of modal elements (TRAUGOTT; DASHER, 2002) and by the notion of modality as a multifunctional category, serving not only to encode the speaker's attitude toward the modalized content but also as a pragmatic strategy regulating the communicative situation. The analysis reveals pode ser to be a structure strongly demanded in interaction, frequently called upon, and productive and useful for interpersonal relationships. The examination of its semantic, discursive, and morphosyntactic properties indicates a shift from syntax (pode ser1) to discourse (pode ser2), interpreted as a development of (inter)subjectification.
Abstract:
This article intends to contribute to Brazilian design history through a study of a selection of book covers produced in the 1960s by Vicente Di Grado (1922 - 2004), the main cover artist and illustrator of the publishing house Clube do Livro. The covers are analysed in terms of their visual and contextual syntax and in their relation to information design.
Abstract:
This collection consists of Dr. Bryant's professional and organizational files, biographical data, correspondence, and speeches. Most of the material relates to her publishing efforts, her work as a faculty member at Brooklyn College, and her involvement with professional organizations, especially the New York branch of the American Association of University Women. Most of the material dates from 1950 to 1975. A list of the more prominent individuals who corresponded with Margaret Bryant has been included as an appendix to the inventory. (For a more extensive and comprehensive list of correspondents, see the list included in the collection control file.)
Abstract:
Background The optimal revascularization strategy for diabetic patients with multivessel coronary artery disease (MVD) remains uncertain for lack of an adequately powered randomized trial. The FREEDOM trial was designed to compare contemporary coronary artery bypass grafting (CABG) with percutaneous coronary intervention (PCI) using drug-eluting stents in diabetic patients with MVD, against a background of optimal medical therapy. Methods A total of 1,900 diabetic participants with MVD were randomized to PCI or CABG worldwide from April 2005 to March 2010. FREEDOM is a superiority trial with a mean follow-up of 4.37 years (minimum 2 years) and 80% power to detect a 27.0% relative reduction. We present the baseline characteristics of patients screened and randomized, and provide a comparison with other MVD trials involving diabetic patients. Results The randomized cohort was 63.1 ± 9.1 years old and 29% female, with a mean diabetes duration of 10.2 ± 8.9 years. Most (83%) had 3-vessel disease and on average took 5.5 ± 1.7 vascular medications, with 32% on insulin therapy. Nearly all had hypertension and/or dyslipidemia, and 26% had a prior myocardial infarction. Mean hemoglobin A1c was 7.8% ± 1.7%, 29% had low-density lipoprotein <70 mg/dL, and mean systolic blood pressure was 134 ± 20 mm Hg. The mean SYNTAX score was 26.2, with a symmetric distribution. FREEDOM trial participants have baseline characteristics similar to those of contemporary multivessel and diabetes trial cohorts. Conclusions The FREEDOM trial has successfully recruited a high-risk diabetic MVD cohort. Follow-up efforts include aggressive monitoring to optimize background risk-factor control. FREEDOM will contribute significantly to the PCI versus CABG debate in diabetic patients with MVD. (Am Heart J 2012;164:591-9.)
Abstract:
Understanding alternative splicing is crucial for elucidating the mechanisms behind several biological phenomena, including diseases. The huge number of expressed sequences available nowadays represents both an opportunity and a challenge for cataloging and displaying alternative splicing events (ASEs). Although several groups have faced this challenge with relative success, we still lack a computational tool that uses a simple and straightforward method to retrieve, name, and present ASEs. Here we present SPLOOCE, a portal for the analysis of human splicing variants. SPLOOCE uses a method based on regular expressions for the retrieval of ASEs. We propose a simple syntax that is able to capture the complexity of ASEs.
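As a rough illustration of the regular-expression idea, one can picture transcripts as strings of exon labels and splicing events as patterns over those strings. The encoding and event below are invented for this sketch and are not SPLOOCE's actual syntax:

    import re

    # Toy encoding (invented for illustration; not SPLOOCE's actual syntax):
    # each transcript is a string in which one uppercase letter is one exon.
    reference = "ABCDE"
    variants = {
        "var1": "ABDE",   # exon C skipped
        "var2": "ABCDE",  # identical to the reference
    }

    # A regular expression describing the "exon C skipped" event:
    # exon B immediately followed by exon D.
    exon_c_skip = re.compile(r"B(?=D)")

    for name, transcript in variants.items():
        if exon_c_skip.search(transcript):
            print(name, "shows an exon-skipping event (exon C missing)")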
Abstract:
Syntax use by non-human animals remains a controversial issue. We present here evidence that a dog may respond to verbal requests composed of two independent terms, one referring to an object and the other to an action to be performed relative to the object. A female mongrel dog, Sofia, was initially trained to respond to action (point and fetch) and object (ball, key, stick, bottle and bear) terms which were then presented as simultaneous, combinatorial requests (e.g., ball fetch, stick point). Sofia successfully responded to object-action requests presented as single sentences, and was able to flexibly generalize her performance across different contexts. These results provide empirical evidence that dogs are able to extract the information contained in complex messages and to integrate it in directed performance, an ability which is shared with other linguistically trained animals and may represent a forerunner of syntactic functioning.
Abstract:
Background Over recent years, a number of researchers have investigated how to improve the reuse of crosscutting concerns. New possibilities have emerged with the advent of aspect-oriented programming, and many frameworks have been designed around the abstractions provided by this paradigm. We call this type of framework Crosscutting Framework (CF), as it usually encapsulates a generic and abstract design of one crosscutting concern. However, most proposed CFs employ white-box strategies in their reuse process, requiring mainly two technical skills: (i) knowing the syntax details of the programming language used to build the framework, and (ii) being aware of the architectural details of the CF and its internal nomenclature. A further problem is that the reuse process can only begin once development reaches the implementation phase, preventing it from starting earlier. Method In order to solve these problems, we present in this paper a model-based approach for reusing CFs which shields application engineers from technical details, letting them concentrate on what the framework actually needs from the application under development. To support our approach, two models are proposed: the Reuse Requirements Model (RRM) and the Reuse Model (RM). The former describes the framework structure, while the latter supports the reuse process. As soon as the application engineer has filled in the RM, the reuse code can be generated automatically. Results We also present the results of two comparative experiments using two versions of a Persistence CF: the original one, whose reuse process is based on writing code, and the new one, which is model-based. The first experiment evaluated productivity during the reuse process, and the second evaluated the effort of maintaining applications developed with both CF versions. The results show a 97% improvement in productivity; however, little difference was observed in the effort required to maintain the applications. Conclusion Using the approach presented here, we conclude the following: (i) it is possible to automate the instantiation of CFs, and (ii) developer productivity improves when a model-based instantiation approach is used.
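As a loose sketch of the kind of automation such an approach enables, reuse code can be generated from a filled-in model by simple templating. The model fields and the generated aspect below are hypothetical and do not reproduce the paper's actual RRM/RM metamodels:

    from string import Template

    # Hypothetical filled-in Reuse Model (RM): the application classes that
    # the Persistence CF should make persistent. Field names are invented
    # here and do not reproduce the paper's actual RRM/RM metamodels.
    reuse_model = {"persistent_classes": ["Customer", "Order"]}

    # Simplified, AspectJ-flavored template for the generated reuse code.
    ASPECT = Template(
        "public aspect Persist$cls extends PersistenceAspect {\n"
        "    declare parents: $cls implements PersistentObject;\n"
        "}\n"
    )

    for cls in reuse_model["persistent_classes"]:
        print(ASPECT.substitute(cls=cls))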
Abstract:
While the use of statistical physics methods to analyze large corpora has been useful for unveiling many patterns in texts, no comprehensive investigation has been performed on the interdependence between syntactic and semantic factors. In this study we propose a framework for determining whether a text (e.g., one written in an unknown alphabet) is compatible with a natural language, and to which language it could belong. The approach is based on three types of statistical measurements: first-order statistics of word properties in a text, the topology of complex networks representing texts, and intermittency concepts in which the text is treated as a time series. Comparative experiments were performed with the New Testament in 15 different languages and with distinct books in English and Portuguese, in order to quantify the dependency of the different measurements on the language and on the story being told in the book. The metrics found to be informative in distinguishing real texts from their shuffled versions include assortativity, degree, and selectivity of words. As an illustration, we analyze an undeciphered medieval manuscript known as the Voynich Manuscript. We show that it is mostly compatible with natural languages and incompatible with random texts. We also obtain candidates for keywords of the Voynich Manuscript, which could be helpful in the effort to decipher it. Because we were able to identify statistical measurements that depend more on syntax than on semantics, the framework may also serve for text analysis in language-dependent applications.
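As a minimal sketch of one of these measurements, degree assortativity can be computed over a word-adjacency network. Linking consecutive words, as below, is a common simplification and is only assumed to approximate the network construction used in the paper:

    import networkx as nx

    def adjacency_network(text):
        # Nodes are word types; edges link words occurring next to each other.
        words = text.lower().split()
        graph = nx.Graph()
        graph.add_edges_from(zip(words, words[1:]))
        return graph

    sample = "in the beginning was the word and the word was with god"
    graph = adjacency_network(sample)
    print("degree assortativity:", nx.degree_assortativity_coefficient(graph))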
Abstract:
This thesis proposes a new document model, according to which any document can be segmented into independent components and transformed into a pattern-based projection that uses only a very small set of objects and composition rules. The point is that such a normalized document expresses the same fundamental information as the original one, in a simple, clear and unambiguous way. The central part of my work consists of discussing that model, investigating how a digital document can be segmented, and how a segmented version can be used to implement advanced conversion tools. I present seven patterns which are versatile enough to capture the most relevant document structures, and whose minimality and rigour make that implementation possible. The abstract model is then instantiated into an actual markup language, called IML. IML is a general and extensible language, which basically adopts an XHTML syntax, able to capture a posteriori only the content of a digital document. It is compared with other languages and proposals in order to clarify its role and objectives. Finally, I present some systems built upon these ideas. These applications are evaluated in terms of user benefits, workflow improvements and impact on the overall quality of the output. In particular, they cover heterogeneous content-management processes: from web editing to collaboration (IsaWiki and WikiFactory), and from e-learning (IsaLearning) to professional printing (IsaPress).
Abstract:
Matita (Italian for "pencil") is a new interactive theorem prover under development at the University of Bologna. Compared with state-of-the-art proof assistants, Matita presents both traditional and innovative aspects. The underlying calculus of the system, namely the Calculus of (Co)Inductive Constructions (CIC for short), is well known and is used as the basis of another mainstream proof assistant, Coq, with which Matita is to some extent compatible. In the same spirit as several other systems, proof authoring is conducted by the user as a goal-directed proof search, using a script for storing textual commands for the system. In the tradition of LCF, the proof language of Matita is procedural and relies on tactics and tacticals to proceed toward proof completion. The interaction paradigm offered to the user is based on the script-management technique underlying the popularity of the Proof General generic interface for interactive theorem provers: while editing a script, the user can move the execution point forward to deliver commands to the system, or backward to retract (or "undo") past commands. Matita has been developed from scratch over the past 8 years by several members of the Helm research group, the author of this thesis being one of them. Matita is now a full-fledged proof assistant with a library of about 1,000 concepts. Several innovative solutions spun off from this development effort. This thesis is about the design and implementation of some of those solutions, in particular those relevant to user interaction with theorem provers, to which the author of this thesis was a major contributor. Joint work with other members of the research group is pointed out where needed. The main topics discussed in this thesis are briefly summarized below. Disambiguation. Most activities connected with interactive proving require the user to input mathematical formulae. Since mathematical notation is ambiguous, parsing formulae typeset as mathematicians like to write them down on paper is a challenging task, a challenge neglected by several theorem provers, which usually prefer to fix an unambiguous input syntax. Exploiting features of the underlying calculus, Matita offers an efficient disambiguation engine which permits typing formulae in familiar mathematical notation. Step-by-step tacticals. Tacticals are higher-order constructs used in proof scripts to combine tactics. With tacticals, scripts can be made shorter, more readable, and more resilient to changes. Unfortunately, they are de facto incompatible with state-of-the-art user interfaces based on script management: such interfaces do not allow positioning the execution point inside complex tacticals, thus introducing a trade-off between the usefulness of structured scripts and a tedious big-step execution behavior during script replay. In Matita we break this trade-off with tinycals: an alternative to a subset of LCF tacticals which can be evaluated in a more fine-grained manner. Extensible yet meaningful notation. Proof assistant users often need to create new mathematical notation to ease the use of new concepts. The framework used in Matita for extensible notation both accounts for high-quality bidimensional rendering of formulae (with the expressivity of MathML Presentation) and provides meaningful notation, where presentational fragments are kept synchronized with the semantic representation of terms.
Using our approach, interoperability with other systems can be achieved at the content level, and direct manipulation of formulae acting on their rendered forms is possible too. Publish/subscribe hints. Automation plays an important role in interactive proving, as users like to delegate tedious proving sub-tasks to decision procedures or external reasoners. Exploiting the Web-friendliness of Matita, we experimented with a broker and a network of web services (called tutors) which can independently try to complete open sub-goals of a proof currently being authored in Matita. The user receives hints from the tutors on how to complete sub-goals and can apply them to the current proof interactively or automatically. Another innovative aspect of Matita, only marginally touched on by this thesis, is the embedded content-based search engine Whelp, which is exploited to various ends, from automatic theorem proving to avoiding duplicate work for the user. We also discuss the (potential) reusability in other systems of the widgets presented in this thesis, and how we envisage the evolution of user interfaces for interactive theorem provers in the Web 2.0 era.
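To make the tacticals discussion above concrete, the following minimal Python sketch shows classic LCF-style tacticals as higher-order functions over tactics. It is purely illustrative: the goal representation and tactic names are hypothetical, and it does not reproduce Matita's actual tinycal syntax or semantics.

    # A tactic is modeled as a function from a goal to a list of subgoals;
    # an empty list means the goal is closed, an exception means failure.

    def then(t1, t2):
        # LCF "THEN": apply t1, then apply t2 to every resulting subgoal.
        def tac(goal):
            return [g2 for g1 in t1(goal) for g2 in t2(g1)]
        return tac

    def orelse(t1, t2):
        # LCF "ORELSE": try t1; if it fails, fall back to t2.
        def tac(goal):
            try:
                return t1(goal)
            except ValueError:
                return t2(goal)
        return tac

    def split(goal):
        # Toy tactic: reduce the goal "A & B" to the subgoals "A" and "B".
        if "&" in goal:
            left, right = goal.split("&", 1)
            return [left.strip(), right.strip()]
        raise ValueError("split is not applicable")

    def assumption(goal):
        return []  # toy tactic: always closes the goal

    prove = then(orelse(split, assumption), assumption)
    print(prove("A & B"))  # [] -> no open subgoals remain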
Abstract:
For a long time, the work of Bartolomeo Della Pugliola, a Franciscan friar who lived in Bologna and Florence during the 13th and 14th centuries, was thought to have been lost. Recent paleographic research, however, has established that most of Della Pugliola's work, although mixed in with that of other authors, is contained in two manuscripts (1994 and 3843) currently kept at the University Library in Bologna. Pugliola's chronicle is central to Bolognese medieval literature, not only because it was the privileged source for the important Ramponi chronicle, but also because Bartolomeo della Pugliola's own sources include several significant works, such as Jacopo Bianchetti's lost writings and Pietro and Floriano Villola's chronicle (1163-1372). Ongoing historical studies and recent discoveries enabled me to reconstruct the chronology of Pugliola's work, as well as the Bolognese language between the 13th and 14th centuries. The original purpose of my research was to add a linguistic commentary to the edition of the text in order to fill the gaps in studies of medieval Bolognese. In addition to being a reliable source, Pugliola's chronicle was widely disseminated and became a sort of vulgate. The chronicle's textual tradition, through collation, allows the language to be studied from a diachronic point of view. I therefore described all the linguistic phenomena related to phonetics, morphology and syntax in Pugliola's text, and compared these results with variants in Villola's and Ramponi's chronicles. I did likewise with another chronicle, by the 16th-century merchant Friano Ubaldini, which I edited. This supplement helped to complete the outline of the Bolognese language from the 13th to the 16th century. In order to analyze the data I collected, I approached them from a sociolinguistic point of view, because each author represents a different variant of the language: the language used by Pugliola is closer to a scripta and to Florentine, while that used by Ubaldini is closer to the dialect spoken in Bologna. Differences in handwriting in particular show the models the authors tried to reproduce or imitate. The glossary added at the end of this study can help in understanding these nuances through a number of examples.