986 results for Dictionary of idiomatic expressions
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Graduate Program in Physics - IFT
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The aim of this study was to analyze the heat flow (temperature) behavior through the thickness of LVL (laminated veneer lumber) panels produced with phenol-formaldehyde adhesive, at laboratory and industrial scales. The experimental program comprised five LVL panels (three produced at laboratory scale and two at industrial scale) with different arrangements of a mix of commercial veneers from tropical pine from the southern region of São Paulo State, Brazil, bonded with phenol-formaldehyde adhesive. The temperature inside the panels during the pressing process was measured with type T (copper-constantan) thermocouples, installed mostly at the center of the glue lines and connected to a data acquisition system. The plots of temperature as a function of time showed a gradual increase up to pre-set values, after which the temperature remained constant. The temperature reached at the center of the panels was adequate to promote curing of the adhesive. These pre-set values were similar to the minimum values reported by other authors and by the manufacturers of these adhesives, who state that temperatures above 100 °C at the center of laminated panels bonded with phenolic adhesives are sufficient to ensure proper cure of the resin. The time required to cure the adhesives confirmed the validity of the practical expressions provided by the adhesive manufacturers.
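To make the heat-flow picture concrete, the sketch below is a minimal one-dimensional transient conduction model through the panel thickness, with both faces held at the platen temperature. It is not the authors' experimental setup or model; the thermal diffusivity, panel thickness and platen temperature are assumed illustrative values only. It reproduces the qualitative behavior described above: the center (glue-line) temperature rises gradually and then levels off near the platen temperature.

```python
import numpy as np

# Illustrative 1D heat-conduction sketch (not the authors' model): explicit
# finite differences through the thickness of a hot-pressed panel, with both
# faces held at the platen temperature. Diffusivity, thickness and platen
# temperature are assumed values chosen only for illustration.
alpha = 1.7e-7          # assumed thermal diffusivity of the panel, m^2/s
thickness = 0.018       # assumed panel thickness, m (18 mm)
T_platen = 140.0        # assumed platen temperature, degrees C
T_initial = 25.0        # assumed initial panel temperature, degrees C

n = 51                                   # nodes through the thickness
dx = thickness / (n - 1)
dt = 0.4 * dx**2 / alpha                 # stable explicit time step
T = np.full(n, T_initial)
T[0] = T[-1] = T_platen                  # faces in contact with the hot platens

t = 0.0
while T[n // 2] < 100.0:                 # follow the centre (glue-line) node
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[0] = T[-1] = T_platen
    t += dt

print(f"centre reaches 100 C after about {t / 60:.1f} min (illustrative values)")
```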
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Graduate Program in Linguistic Studies - IBILCE
Abstract:
Graduate Program in Linguistic Studies - IBILCE
Abstract:
to each other. These approaches reveal the speaker's attitudes and the feelings expressed in statements, and the contexts in which those expressions are used. Cagliari (1989) calls this kind of discursive reference prosodic markers in literary writing. In this project, the corpus for the analysis comes from Guimarães Rosa's work Manuelzão e Miguilim. The main goal of the project is to develop studies on the subject, since virtually nothing has been done on prosodic writing markers. Developing working methods is another important objective, in order to show how prosodic markers can be studied. This type of study is important not only to linguistics (phonetics, textual analysis and discourse analysis) but also to literary studies. Different literary periods have used such resources differently, as shown by the work of Cagliari (1989). The methodology of this project starts by collecting data to compose the corpus, with examples categorized as prosodic markers. Then, according to prosodic theories, these expressions are classified into types. The contexts in which they appear are important elements and will be highlighted. The narrative of the plot is also an important context. Dialogues are a good source of prosodic markers. In Guimarães Rosa's Manuelzão e Miguilim, it was observed that the author likes to reveal the feelings of the characters through the words stated in their speech. There are prosodic markers showing feelings such as: intimidating, serene, gentle, cheerful, worried, angry, ironic, etc. The author also refers to the fact that a character pronounced his speech with different voice qualities, such as loud, hoarse, whispering, etc. This project studies the association between some prosodic elements of speech and their occurrence in literary texts as prosodic writing markers, as defined in the project. The data come from the phonetic descriptions of words and expressions regarded as prosodic...
Abstract:
The purpose of this thesis is to deepen the understanding of adult blind people's non-verbal communication, including body expressions and paralinguistic (voice) expressions. More specifically, the thesis comprises the following three studies: blind people's different forms of body expressions, blind people's non-verbal conversation regulation, and blind people's experience of their own non-verbal expressions. The focus has been on the blind participants' competence and on their subjective perspectives. I have also compared congenitally and adventitiously blind participants in all of the studies. The approach is mainly phenomenological, and the qualitative empirical phenomenological psychological method is the primary methodological source of inspiration. Fourteen blind persons (and also some sighted persons) participated. They have no obvious disability other than blindness, and their ages vary between 18 and 54. The data in the first two studies consisted of video recordings, and the data in the last study consisted of interviews. The overall results can be summarized in the following three points: 1. There are (almost) only similarities between congenitally blind and adventitiously blind persons concerning their paralinguistic expressions. 2. There are mainly similarities between the two groups with respect to the occurrence of different body-expressive forms. 3. There are also some differences between the groups. For example, the congenitally blind persons seem to have a limited ability to use the body in an abstract and symbolic way, and they often mentioned that they have been told that their body expressions deviate from sighted people's norms. But the persons in both groups also struggle to see themselves as unique persons who express themselves on the basis of their conditions and their previous experiences.
Abstract:
[ES] This final degree project, Monitor Web de Expresiones Regulares (MWRegEx), is a tool based on web technologies, developed with the Visual Studio environment. The main goal of the application is to support the teaching of regular expressions, within the framework of teaching string handling in the programming courses of the Computer Engineering degree. The application produces a drawing of the automaton for a regular expression, making the expression easier to understand; it also allows the expression to be applied to different strings, showing the matches found, and offers a version of the expression adapted for use in string literals of languages such as Java and others. The tool has been implemented in two parts: a web service, written in C#, which performs all the analysis of the regular expressions and of the strings to be matched; and a web client, implemented with asp.net technology together with JavaScript and jQuery, which manages the user interface and displays the results. This separation allows the web service to be reused by other client applications. The automaton representing a regular expression is drawn with the Raphaël JavaScript library, which handles SVG elements. Each element of the regular expression has a distinct drawing so that it can be told apart. The whole graphical user interface is internationalized so that it can be adapted to different languages and regions without engineering changes or changes to the code. Both the web service and the client are structured so that new modifications can be added without causing a ripple effect across the existing classes.
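As a rough illustration of two of the operations the abstract describes (matching a regular expression against test strings, and producing a version of the pattern escaped for a string literal), here is a minimal Python sketch. It is not the MWRegEx service, which is written in C#; the function names and the example pattern are invented for illustration only.

```python
import re

def find_matches(pattern: str, texts: list[str]) -> dict[str, list[str]]:
    """Return the substrings of each text matched by the pattern."""
    compiled = re.compile(pattern)
    return {text: compiled.findall(text) for text in texts}

def as_string_literal(pattern: str) -> str:
    r"""Escape backslashes and quotes so the pattern can be pasted into a
    Java/C#-style string literal (e.g. \d becomes \\d)."""
    return '"' + pattern.replace("\\", "\\\\").replace('"', '\\"') + '"'

pattern = r"\d{2}-\d{4}"
print(find_matches(pattern, ["order 12-3456", "no code here"]))
print(as_string_literal(pattern))   # prints "\\d{2}-\\d{4}"
```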
Abstract:
[EN] This article focuses on a specific feature found in tourist guidebooks: the recurrent use of foreign expressions or “third language”. It presents the findings of a comparative analysis of a parallel corpus made up of twenty guidebooks: ten guidebooks originally written in English and their corresponding translated versions in Spanish, describing different countries and cities (all of them published by Lonely Planet), focusing on the chapters in which the writer includes practical information. The purpose of the study is to analyze the use of the third language in the English and Spanish versions and to determine and identify the translation strategies used by the translators to transfer these linguistic elements from one language to the other.
Abstract:
The thesis examines the treatment of the theme of childhood in the work of Origen of Alexandria through an analysis of the texts transmitted in the original Greek and of the Latin translations by Rufinus and Jerome. The theme of childhood is considered in its many meanings and on several levels: exegetical, anthropological, philosophical, theological. The research is therefore not limited to a historical analysis, but aims to define the conception and standing of early childhood from Origen's point of view and in the broader context of the literature of his time. Through an extensive reading of the Alexandrian's corpus, all the passages referring to childhood in a literal or metaphorical sense have been identified. What emerges is a complex treatment of the theme: for Origen, in line with the philosophical doctrines of his day, the child is an eminently irrational being. The full development of the rational faculty comes at the end of this first stage of existence. Childhood irrationality prevents the rise of the passions in the very young. Several important developments are connected with this doctrine, which is of Stoic origin: the non-imputability of minors and the link between rationality and individual responsibility; the reflection on the suffering of children and the search for a cause of it that does not undermine the principle of divine justice; and the hypothesis of the pre-existence of souls. On the theological level, the research focuses on the notions of fatherhood and sonship and on the theme of pedagogy, which is central to Origen's outlook. Origen conceives human pedagogy, on the model of divine pedagogy, as a dynamic network of relations that mirrors parental bonds. Alongside these main areas of interest, the analysis considers further aspects: particular attention is given to the biographical element and to the linguistic and literary aspect of Origen's prose, the latter often neglected by critics. The study also shows the vitality of some of Origen's exegetical models in the later tradition.
Abstract:
The thesis addresses the problem of scaling reinforcement learning to high-dimensional and complex tasks. Reinforcement learning here denotes a class of learning methods based on approximate dynamic programming, used especially in artificial intelligence, which can be applied to the autonomous control of simulated agents or real hardware robots in dynamic and uncertain environments. To this end, regression on samples is used to determine a function that solves an "optimality equation" (Bellman) and from which approximately optimal decisions can be derived. A major hurdle is the dimensionality of the state space, which is often high and therefore poorly suited to traditional grid-based approximation methods. The goal of this thesis is to make reinforcement learning applicable to, in principle, arbitrarily high-dimensional problems by means of non-parametric function approximation (more precisely, regularization networks). Regularization networks are a generalization of ordinary basis-function networks: they parameterize the sought solution by the data themselves, so the explicit choice of nodes/basis functions is no longer required and the "curse of dimensionality" can be avoided for high-dimensional inputs. At the same time, regularization networks are linear approximators, which are technically easy to handle and for which the existing convergence results for reinforcement learning remain valid (unlike, for example, feed-forward neural networks). All these theoretical advantages, however, come with a very practical problem: the computational cost of regularization networks inherently scales as O(n**3), where n is the number of data points. This is particularly problematic because in reinforcement learning the learning process is online: the samples are generated by an agent/robot while it interacts with the environment, so updates to the solution must be made immediately and with little computational effort. The contribution of this thesis therefore falls into two parts. In the first part we formulate, for regularization networks, an efficient learning algorithm for solving general regression problems that is tailored specifically to the requirements of online learning. Our approach is based on recursive least-squares, but can insert not only new data but also new basis functions into the existing model at constant cost. This is made possible by the "subset of regressors" approximation, in which the kernel is approximated through a strongly reduced selection of training data, and by a greedy selection procedure that picks these basis elements directly from the data stream at run time. In the second part we carry this algorithm over to approximate policy evaluation via least-squares based temporal-difference learning and integrate this building block into a complete system for the autonomous learning of optimal behavior. Overall, we develop a highly data-efficient method that is particularly suited to learning problems from robotics with continuous, high-dimensional state spaces and stochastic state transitions.
In doing so, we do not rely on a model of the environment, operate largely independently of the dimension of the state space, achieve convergence with relatively few agent-environment interactions, and, thanks to the efficient online algorithm, can also operate in time-critical real-time applications. We demonstrate the performance of our approach on two realistic and complex application examples: the RoboCup Keepaway problem and the control of a (simulated) octopus tentacle.
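To give a concrete flavor of the kind of method described (online regression with a greedily selected subset of basis centres), here is a heavily simplified Python sketch. It is not the thesis's algorithm: the novelty test, kernel width and regularization constant are assumed values, and the weights are re-solved in closed form at each step rather than updated recursively at constant cost as the thesis proposes.

```python
import numpy as np

def rbf(a, b, width=0.5):
    """Gaussian kernel between two input vectors (assumed width)."""
    return np.exp(-np.sum((a - b) ** 2) / (2 * width ** 2))

class SparseKernelRegressor:
    def __init__(self, novelty=0.1, ridge=1e-3):
        self.dictionary = []        # greedily selected basis centres
        self.X, self.y = [], []     # all samples seen so far
        self.weights = np.zeros(0)
        self.novelty, self.ridge = novelty, ridge

    def predict(self, x):
        return sum(w * rbf(x, c) for w, c in zip(self.weights, self.dictionary))

    def update(self, x, y):
        # Greedy dictionary growth: add x as a new basis centre only if the
        # current model explains the new sample poorly (a crude novelty test).
        if not self.dictionary or abs(y - self.predict(x)) > self.novelty:
            self.dictionary.append(x)
        self.X.append(x)
        self.y.append(y)
        # Subset-of-regressors fit: regularized least squares using only the
        # kernel features of the dictionary centres.
        Phi = np.array([[rbf(xi, c) for c in self.dictionary] for xi in self.X])
        A = Phi.T @ Phi + self.ridge * np.eye(len(self.dictionary))
        self.weights = np.linalg.solve(A, Phi.T @ np.array(self.y))

# Toy usage: learn sin(x) from a stream of noisy samples.
rng = np.random.default_rng(0)
model = SparseKernelRegressor()
for _ in range(200):
    x = rng.uniform(0, 2 * np.pi, size=1)
    model.update(x, np.sin(x[0]) + 0.05 * rng.normal())
print(len(model.dictionary), "basis centres;",
      "prediction at pi/2:", round(model.predict(np.array([np.pi / 2])), 2))
```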
Abstract:
The aim of the thesis is to investigate the topic of semantic under-determinacy, i.e. the failure of the semantic content of certain expressions to determine a truth-evaluable utterance content. In the first part of the thesis, I engage with the problem of setting semantic under-determinacy apart from other phenomena such as ambiguity, vagueness and indexicality. As I will argue, the feature that distinguishes semantic under-determinacy from these phenomena is that it can be explained solely in terms of under-articulation. In the second part of the thesis, I discuss how communication is possible despite the semantic under-determinacy of language. I discuss a number of answers that have been offered: (i) the Radical Contextualist explanation, which emphasises the role of pragmatic processes in utterance comprehension; (ii) the Indexicalist explanation in terms of hidden syntactic positions; (iii) the Relativist account, which regards sentences as true or false relative to extra coordinates in the circumstances of evaluation (besides possible worlds). In the final chapter, I propose an account of the comprehension of utterances of semantically under-determined sentences in terms of conceptual constraints, i.e. ways of organising information which regulate thought and discourse on certain matters. Conceptual constraints help the hearer to work out the truth-conditions of an utterance of a semantically under-determined sentence. Their role is clearly semantic, in that they contribute to “what is said” (rather than to “what is implied”); however, they do not respond to any syntactic constraint. The view I propose therefore differs, on the one hand, from Radical Contextualism, because it stresses the role of semantics-governed processes as opposed to pragmatics-governed processes; on the other hand, it differs from Indexicalism in not endorsing any commitment to hidden syntactic positions; and it differs from Relativism in that it maintains a monadic notion of truth.