840 results for Subroutines in Procedural Programming Languages


Relevance:

100.00%

Abstract:

Graduate Program in Electrical Engineering - FEIS

Relevance:

100.00%

Abstract:

The present study focuses on impunity for human rights violations in Latin America. Within this tradition of impunity there is one exception, the emblematic Fujimori case, in which the conviction was for murder and serious injury, crimes against humanity under International Criminal Law. This sentence stands as an example against the traditional trend of impunity. The research also analyzes the use of international law as a barrier against state injustice, both in substantive terms, by imposing binding and mandatory standards of a universal character, and in procedural terms, by providing supranational mechanisms to protect victims.

Relevance:

100.00%

Abstract:

Graduate Program in Electrical Engineering - FEIS

Relevance:

100.00%

Abstract:

Graduate Program in Mathematics in a National Network - IBILCE

Relevance:

100.00%

Abstract:

Graduate Program in Arts - IA

Relevance:

100.00%

Abstract:

This article is part of a study aimed at identifying the main barriers to the inclusion of visually impaired students in Physics classes. It focuses on understanding the communication contexts that facilitate or hinder the effective participation of students with visual impairment in Mechanics activities. To do so, the research characterizes, from their empirical (sensory) and semantic structures, the languages used in the activities, as well as the moments and speech patterns in which those languages were used. As a result, it identifies a relation between the use of the interdependent audio-visual empirical language structure and the non-interactive episodes of authority; a decrease in the use of this structure in interactive episodes; the creation of segregated educational environments within the classroom; and the frequent use of the interdependent tactile-auditory empirical language structure in such environments.

Relevance:

100.00%

Abstract:

This article continues the presentation of research results begun in Camargo and Nardi (2007). It is part of a study that seeks to understand the main inclusion barriers faced by students with visual impairment in Physics classes. It aims to understand which communication contexts favor or hinder the effective participation of visually impaired students in thermology activities. To this end, the research characterizes, from their empirical (sensory) and semantic structures, the languages used in the activities, as well as the moments and speech patterns in which those languages were used. As a result, it identifies a strong relation between the use of the interdependent audio-visual empirical language structure and the non-interactive episodes of authority; a decrease in the use of this structure in interactive episodes; and the creation of segregated educational environments within the classroom.

Relevance:

100.00%

Abstract:

This article is part of a wider study that seeks to understand the main barriers to the inclusion of students with visual impairment in Physics classes. It aims to understand which communication contexts favor or impede the effective participation of visually impaired students in Modern Physics activities. The research characterizes, from their empirical (sensory) and semantic structures, the languages used in the activities, as well as the moments and speech patterns in which those languages were used. As a result, this study identifies a strong relation between the use of the interdependent audio-visual empirical language structure and the non-interactive episodes of authority; a decrease in the use of this structure in interactive episodes; the creation of segregated educational environments within the classroom; and the frequent use of the interdependent tactile-auditory empirical language structure in these environments. Moreover, the concept of «special educational need» is discussed, its inadequate use is analyzed, and suggestions are given for its correct use.

Relevance:

100.00%

Abstract:

While the use of statistical physics methods to analyze large corpora has been useful to unveil many patterns in texts, no comprehensive investigation has been performed on the interdependence between syntactic and semantic factors. In this study we propose a framework for determining whether a text (e.g., written in an unknown alphabet) is compatible with a natural language and to which language it could belong. The approach is based on three types of statistical measurements: first-order statistics of word properties in a text, topological measurements of complex networks representing texts, and intermittency measurements in which the text is treated as a time series. Comparative experiments were performed with the New Testament in 15 different languages and with distinct books in English and Portuguese in order to quantify the dependency of the different measurements on the language and on the story being told in the book. The metrics found to be informative in distinguishing real texts from their shuffled versions include assortativity, degree and selectivity of words. As an illustration, we analyze an undeciphered medieval manuscript known as the Voynich Manuscript. We show that it is mostly compatible with natural languages and incompatible with random texts. We also obtain candidate keywords for the Voynich Manuscript which could be helpful in the effort of deciphering it. Because we were able to identify statistical measurements that are more dependent on the syntax than on the semantics, the framework may also serve for text analysis in language-dependent applications.
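
To make the network measurements concrete, here is a minimal sketch (an editor's illustration, not the authors' code) that builds a word co-occurrence network from a text and compares the metrics named above (assortativity, degree, selectivity) between the original and a shuffled version. The input file name is hypothetical.

import random
import networkx as nx

def cooccurrence_graph(words):
    # Link each word to its successor; edge weights count co-occurrences.
    g = nx.Graph()
    for a, b in zip(words, words[1:]):
        if a != b:
            w = g[a][b]["weight"] + 1 if g.has_edge(a, b) else 1
            g.add_edge(a, b, weight=w)
    return g

def network_metrics(words):
    g = cooccurrence_graph(words)
    degree = dict(g.degree())
    strength = dict(g.degree(weight="weight"))
    # Selectivity of a word: average weight of its edges (strength / degree).
    selectivity = [strength[n] / degree[n] for n in g if degree[n] > 0]
    return {
        "assortativity": nx.degree_assortativity_coefficient(g),
        "mean_degree": sum(degree.values()) / len(degree),
        "mean_selectivity": sum(selectivity) / len(selectivity),
    }

words = open("text.txt").read().lower().split()   # hypothetical input file
print("original:", network_metrics(words))
print("shuffled:", network_metrics(random.sample(words, len(words))))

For a real text, the original typically shows markedly different assortativity and selectivity than its shuffled counterpart, which is the basis for the real-versus-random discrimination described above.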

Relevance:

100.00%

Abstract:

Modern ESI-LC-MS/MS techniques, combined with bottom-up approaches, allow the qualitative and quantitative characterization of several thousand proteins in a single experiment. Data-independent acquisition methods such as MSE and the ion-mobility variants HDMSE and UDMSE are particularly suitable for label-free protein quantification. Owing to their high complexity, the data acquired in this way place special demands on the analysis software; until now, quantitative analysis of MSE/HDMSE/UDMSE data has been limited to a few commercial solutions.

In the present work, a strategy and a series of new methods for the cross-run quantitative analysis of label-free MSE/HDMSE/UDMSE data were developed and implemented as the software ISOQuant. The commercial software PLGS is used for the first steps of data analysis (feature detection, peptide and protein identification). The independent PLGS results of all runs of an experiment are then merged in a relational database and reworked with dedicated algorithms (retention time alignment, feature clustering, multidimensional intensity normalization, multi-stage data filtering, protein inference, redistribution of the intensities of shared peptides, protein quantification). This post-processing significantly increases the reproducibility of the qualitative and quantitative results.

To evaluate the performance of the quantitative data analysis and compare it with other solutions, a set of exactly defined hybrid-proteome samples was developed. The samples were acquired with the MSE and UDMSE methods, analyzed with Progenesis QIP, synapter and ISOQuant, and compared. In contrast to synapter and Progenesis QIP, ISOQuant achieved both high reproducibility of protein identification and high precision and accuracy of protein quantification.

In conclusion, the presented algorithms and the analysis workflow enable reliable and reproducible quantitative data analyses. With the software ISOQuant, a simple and efficient tool for routine high-throughput analyses of label-free MSE/HDMSE/UDMSE data was developed. Together, the hybrid-proteome samples and the evaluation metrics constitute a comprehensive system for evaluating quantitative acquisition and data-analysis systems.
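
One of the post-processing steps listed above, the redistribution of the intensities of shared peptides, can be sketched as follows. This is a hedged illustration assuming a simple proportional scheme; it is not ISOQuant's actual algorithm, and the function name and example values are hypothetical.

def redistribute_shared_intensity(shared_intensity, unique_evidence):
    # unique_evidence: {protein: summed intensity of its unique peptides}.
    # Assumed proportional scheme, not ISOQuant's published algorithm.
    total = sum(unique_evidence.values())
    if total == 0:
        # No unique evidence for any candidate protein: split evenly.
        share = shared_intensity / len(unique_evidence)
        return {p: share for p in unique_evidence}
    return {p: shared_intensity * e / total for p, e in unique_evidence.items()}

# A peptide with intensity 1000 shared by P1 and P2, where P1 has three
# times the unique evidence, contributes 750 to P1 and 250 to P2.
print(redistribute_shared_intensity(1000.0, {"P1": 300.0, "P2": 100.0}))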

Relevance:

100.00%

Abstract:

The purpose of this work is to analyse the figurative and metaphorical meanings of colours in English and Italian, focusing on the analysis and comparison of colour idioms in these two languages and cultures. The study's starting point is the assumption that language and culture are inextricably related: they influence and modify each other, and both contribute to shaping our world-view. English and Italian colour idioms are presented, compared and contrasted. Each colour is introduced by its figurative meaning in the two cultures, and it is shown whether and how this symbolic meaning is reflected in idiomatic language. The approach to English and Italian idioms is contrastive, in order to show cases of direct correspondence (same colour, same meaning), cases of partial correspondence (different colour or different idiom but same meaning), and cases peculiar to one language that lack an idiomatic equivalent in the other.
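
The three correspondence categories can be made concrete with a small contrastive lexicon. The pairings below are the editor's illustrative examples, not data taken from the study.

# Editor's illustrative examples of the three correspondence categories;
# the idiom pairings are not data from the study itself.
CORRESPONDENCE = {
    "direct": [            # same colour, same meaning
        ("to see red", "vedere rosso"),
        ("black sheep", "pecora nera"),
    ],
    "partial": [           # same meaning, different colour or idiom
        ("blue movie", "film a luci rosse"),   # 'blue' vs. 'red-light'
    ],
    "no_equivalent": [     # idiom peculiar to one language
        ("to feel blue", None),                # no Italian colour idiom
    ],
}

for category, pairs in CORRESPONDENCE.items():
    for english, italian in pairs:
        print(f"{category}: {english!r} -> {italian!r}")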

Relevance:

100.00%

Abstract:

Code duplication is common in current programming practice: programmers search for snippets of code, incorporate them into their projects and then modify them to their needs. In today's practice, no automated scheme is in place to inform either party of distant changes to the code. As code snippets continue to evolve both on the side of the user and on the side of the author, both may wish to benefit from remote bug fixes or refinements: authors may be interested in the actual usage of their code snippets, and researchers could gather information on clone usage. We propose maintaining a link between software clones across repositories and outline how such links can be created and maintained.
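
A minimal sketch of one way such links could be created, assuming a registry keyed by a whitespace-normalized content hash; this design choice belongs to the illustration, not to the authors' proposal.

import hashlib

def fingerprint(code: str) -> str:
    # Normalize whitespace so trivial reformatting does not break the link.
    normalized = " ".join(code.split())
    return hashlib.sha256(normalized.encode()).hexdigest()

class CloneRegistry:
    def __init__(self):
        self.links = {}  # fingerprint -> list of (repo, path) locations

    def register(self, code, repo, path):
        # Record where this snippet lives so all copies can be notified later.
        self.links.setdefault(fingerprint(code), []).append((repo, path))

    def siblings(self, code):
        # All known locations of this snippet across repositories.
        return self.links.get(fingerprint(code), [])

registry = CloneRegistry()
registry.register("def f(x):\n    return x + 1", "origin/lib", "util.py")
print(registry.siblings("def f(x):  return x + 1"))  # same code, reformatted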

Relevance:

100.00%

Abstract:

Background: The goal of our work was to develop a simple method to evaluate a compensation treatment after unplanned treatment interruptions with respect to its tumour and normal-tissue effects. Methods: We developed a software tool in the Java programming language, based on existing recommendations for compensating treatment interruptions. In order to express and visualize the deviations from the originally planned tumour and normal-tissue effects, we defined the compensability index. Results: The compensability index represents an evaluation of the suitability of compensatory radiotherapy in a single number, based on the number of days used for compensation and on the preference for preserving the originally planned tumour effect or for not exceeding the originally planned normal-tissue effect. The automated tool provides a method for quick evaluation of compensation treatments. Conclusions: The compensability index calculation may serve as a decision support system based on existing and established recommendations.
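
The compensability index itself is defined in the paper and its exact formula is not reproduced in this abstract; as background, the following sketch only illustrates the standard biologically effective dose (BED) bookkeeping on which such compensation recommendations are typically based. The schedules and alpha/beta values are hypothetical.

def bed(n_fractions, dose_per_fraction, alpha_beta):
    # Biologically effective dose: BED = n * d * (1 + d / (alpha/beta)).
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

# Compare a planned schedule against a hypothetical compensation schedule
# for the tumour (alpha/beta ~ 10 Gy) and late-reacting normal tissue
# (alpha/beta ~ 3 Gy); all values are illustrative only.
for label, alpha_beta in (("tumour", 10.0), ("normal tissue", 3.0)):
    planned = bed(30, 2.0, alpha_beta)
    compensated = bed(28, 2.2, alpha_beta)
    print(f"{label}: planned BED {planned:.1f} Gy, compensated {compensated:.1f} Gy")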

Relevance:

100.00%

Abstract:

Mr. Kubon's project was inspired by the growing need for an automatic syntactic analyser (parser) of Czech that could be used in the syntactic processing of large amounts of text. Mr. Kubon notes that such a tool would be very useful, especially in the field of corpus linguistics, where creating a large-scale "tree bank" (a collection of syntactic representations of natural language sentences) is a very important step towards the investigation of the properties of a given language. The work involved in syntactically parsing a whole corpus in order to get a representative set of syntactic structures would be almost inconceivable without the help of some kind of robust (semi-)automatic parser. The need for the parser to be robust increases with the size of the linguistic data in the corpus or in any other text to be parsed. Practical experience shows that, apart from syntactically correct sentences, there are many sentences which contain a "real" grammatical error. These sentences may be corrected in small-scale texts, but not, in general, in a whole corpus.

In order to complete the overall project, it was necessary to address a number of smaller problems:

1. the adaptation of a suitable formalism able to describe the formal grammar of the system;
2. the definition of the structure of the system's dictionary, containing all relevant lexico-syntactic information, and the development of a formal grammar able to robustly parse Czech sentences from the test suite;
3. filling the syntactic dictionary with sample data allowing the system to be tested and debugged during its development (about 1000 words);
4. the development of a set of sample sentences containing a reasonable amount of grammatical and ungrammatical phenomena, covering some of the most typical syntactic constructions used in Czech.

Task 2, building a formal grammar, was the main task of the project. The grammar is of course far from complete (Mr. Kubon notes that it is debatable whether any formal grammar describing a natural language can ever be complete), but it covers the most frequent syntactic phenomena, allowing for the representation of the syntactic structure of simple clauses and also of certain types of complex sentences. The stress was not so much on building a wide-coverage grammar as on the description and demonstration of a method. This method uses an approach similar to that of grammar-based grammar checking. The problem of reconstructing the "correct" form of the syntactic representation of a sentence is closely related to the problem of localising and identifying syntactic errors: without precise knowledge of the nature and location of syntactic errors it is not possible to build a reliable estimate of a "correct" syntactic tree. The incremental way of building the grammar used in this project is also an important methodological point. Experience from previous projects showed that building a grammar by creating one huge block of metarules is more complicated than the incremental method, which begins with the metarules covering the most common syntactic phenomena and adds less important ones later; this is especially true from the point of view of testing and debugging the grammar.

The sample of the syntactic dictionary containing lexico-syntactic information (task 3) now has slightly more than 1000 lexical items representing all word classes. During the creation of the dictionary it turned out that assigning complete and correct lexico-syntactic information to verbs is a very complicated and time-consuming process which would itself be worth a separate project. The final task undertaken in this project was the development of a method allowing effective testing and debugging of the grammar during its development; a sketch of this test-bed idea follows below. The consistency of new and modified rules of the formal grammar with the existing rules is one of the crucial problems of every project aiming at the development of a large-scale formal grammar of a natural language. The method allows for the detection of any discrepancy or inconsistency of the grammar with respect to a test-bed of sentences containing all the syntactic phenomena covered by the grammar.

This is not only the first robust parser of Czech, but also one of the first robust parsers of a Slavic language. Since Slavic languages share a wide range of common features, it is reasonable to claim that this system may serve as a pattern for similar systems in other languages. To transfer the system to another language it is only necessary to revise the grammar and to change the data contained in the dictionary (but not necessarily the structure of the primary lexico-syntactic information). The formalism and methods used in this project can thus be applied to other Slavic languages without substantial changes.
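
The test-bed idea described above can be sketched with a toy grammar: after every change, the grammar is re-checked against sentences that must parse and sentences that must be rejected. The grammar and sentences here are the editor's toy example (in English, via NLTK), not the project's Czech formalism.

import nltk

grammar = nltk.CFG.fromstring("""
S -> NP VP
NP -> 'she' | 'the' N
N -> 'parser'
VP -> V NP | V
V -> 'runs' | 'builds'
""")
parser = nltk.ChartParser(grammar)

# Test-bed: positive examples must keep parsing after any grammar change,
# negative examples must keep failing.
must_parse = [["she", "runs"], ["she", "builds", "the", "parser"]]
must_fail = [["parser", "the", "runs"]]

for sent in must_parse:
    assert list(parser.parse(sent)), f"regression: {sent} no longer parses"
for sent in must_fail:
    assert not list(parser.parse(sent)), f"regression: {sent} now parses"
print("grammar is consistent with the test-bed")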