840 results for language acquisition - technology application
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The aim of this research was to apprehend the effects of text reading on the writing productions of S., a child in the literacy process. S. is a child who cannot speak or write on her own due to dystonic quadriplegic cerebral palsy (CP). S. communicates with Blissymbols, which were introduced at a school-clinic in the city of São Paulo when she was six years old. At the same time the literacy process took place, and she indicated symbols, letters and numbers on a board by scanning. The teacher reported S.'s difficulties with reading activities, so a weekly activity in the classroom was proposed by the speech therapist. The teacher and her assistant participated in the activity. An excerpt of the activity, involving the reading of a book when S. was between eight years and seven months and nine years and one month old, was analysed. The research was based on Borges (2006), grounded in Brazilian Interactionism according to De Lemos (1992; 1995, among others), and proposes a literacy process among first-grade students in a regular school through the reading of different texts. The activity guided by the speech therapist took place alongside the literacy program guided by the teacher and resulted in various text productions by S. The data were collected through transcriptions of films of the classroom activities and from materials produced by S. through the reading of the chosen book. These data are part of the NALíngua-CNPq database, coordinated by Dr. Alessandra Del Ré, whose aim is to investigate the language acquisition process. S.'s acquisition of reading and writing occurred in a singular way, affected by the use of Blissymbols, which became S.'s speech modality: a written speech with symbols and alphabetic writing.
Abstract:
[EN] The lexical approach identifies lexis as the basis of language and focuses on the principle that language consists of grammaticalised lexis. In second language acquisition, over the past few years, this approach has generated great interest as an alternative to traditional grammar-based teaching methods. From a psycholinguistic point of view, the lexical approach consists of the capacity to understand and produce lexical phrases as non-analysed entities (chunks). A growing body of literature concerning spoken fluency is in favour of integrating automaticity and formulaic language units into classroom practice. In line with the latest theories on SLA, we recommend the inclusion of a language awareness component as an integral part of this approach. The purpose is to induce what Schmidt (1990) calls "noticing", i.e., registering forms in the input so as to store them in memory. This paper, which is part of the interuniversity research project "Evidentiality in a multidisciplinary corpus of English research papers" of the University of Las Palmas de Gran Canaria, provides a theoretical overview of research on this approach, taking into account both the methodological foundations of the subject and its pedagogical implications for SLA.
Abstract:
This study aims at the elaboration of juridical and administrative terminology in the Ladin language, specifically the Ladin idiom spoken in Val Badia. The necessity of this study is strictly connected to the fact that in South Tyrol the Ladin language is not only safeguarded: the editing of administrative and normative texts in it is guaranteed by law. This means there is a need for a unified terminology to support translators and editors of specialised texts. The starting points of this study are, on one side, the need for a unified terminology and, on the other, the translation work done so far by the employees of the public administration working in Ladin. In order to document their efforts, a corpus of digitalised administrative and normative documents was built. The first two chapters focus on the state of the art of projects on terminology and corpus linguistics for lesser-used languages. The information was collected thanks to the help of institutes, universities and researchers dealing with lesser-used languages. The third chapter focuses on the development of administrative language in Ladin, and the fourth chapter on the creation of the trilingual Italian-German-Ladin corpus of administrative and normative documents. The last chapter deals with the methodologies applied to elaborate the terminology entries in Ladin through the use of the trilingual corpus. Starting from the terminology entry, all steps are described: term extraction, the extraction of equivalents, contexts and definitions, and the elaboration of translation proposals where no equivalent was found. Finally, the problems concerning the elaboration of terminology in the Ladin language are illustrated.
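The term-extraction step described in the last chapter can be illustrated with a minimal frequency-based sketch; this is a hypothetical helper for intuition only, not the thesis's actual extraction pipeline, and the function name and thresholds are invented for the example:

```python
import re
from collections import Counter

def candidate_terms(text, max_len=3, min_freq=2):
    """Naive frequency-based term extraction: return word n-grams
    (up to max_len words) that occur at least min_freq times,
    most frequent first. Real pipelines add stopword and POS filters."""
    words = re.findall(r"[a-zà-ü]+", text.lower())
    counts = Counter()
    for n in range(1, max_len + 1):
        for i in range(len(words) - n + 1):
            counts[" ".join(words[i:i + n])] += 1
    return [term for term, c in counts.most_common() if c >= min_freq]
```

Running it over a corpus slice surfaces recurrent single- and multi-word candidates, which a terminologist would then validate against contexts and equivalents in the other two languages.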
Abstract:
This PhD research is part of a project addressed to improving the quality of Grana Trentino production. The objective was to evaluate whether milk storage and collection procedures may affect cheese-making technology and quality. At present the milk is collected and delivered to the cheese factory just after milking, in 50 L cans without refrigeration or in tanks cooled to 18 °C. This procedure is expensive (two deliveries each day) and the milk quality is difficult to preserve as temperatures are not controlled. Refrigerating the milk at the farm would allow a single delivery to the dairy. It could therefore be a good strategy to preserve raw milk quality and reduce cheese spoilage. This operation may, however, have the drawbacks of favouring the growth of psychrotrophic bacteria and changing the aptitude of the milk for coagulation. With the aim of studying the effect on milk and cheese of traditional and new refrigerated technologies of milk storage, two different collection and creaming technologies were compared. The trials were replicated in three cheese factories manufacturing Grana Trentino. Every cheese-making day, about 1000 litres of milk were collected from the same two farms under the different collection procedures (single or double) and processed to produce 2 wheels of Grana Trentino. During the refrigerated trials, milk was collected and stored at the farm in a mixed tank at 12 or 8 °C and then carried to the dairy by truck once a day. 112 cheese-making days were followed: 56 for the traditional technology and 56 for the refrigerated one. Each of these two technologies leads to a different way of creaming: long in the traditional one and shorter in the new one. For every cheese-making day we recorded times, temperatures and pH during the processing of the milk into cheese. Whole milk before creaming, cream and skim milk after creaming, vat milk and whey were sampled on every cheese-making day for analysis.
After 18 months of ripening we opened 46 cheese wheels for further chemical and microbiological analyses. The trials were performed with the aims of: 1) estimating the effect of storage temperatures on microbial communities and on physico-chemical and/or rheological differences of milk and of skim milk after creaming; 2) detecting, by culture-dependent (plate counts) and culture-independent (DGGE) methodologies, the microbial species present in whole milk, skimmed milk, cream, and cheese sampled under the rind and in the core; 3) estimating the physico-chemical characteristics, the proteolytic activity, and the content of free amino acids and volatile compounds in 18-month-ripened Grana Trentino cheeses from the different milk storage and creaming technologies. The results presented are remarkable since this is the first in-depth study presenting microbiological and chemical analyses of Grana Trentino which, even though it belongs to the Grana Padano Consortium, clearly differs in the milk and in the manufacturing technology.
Abstract:
This dissertation analyses the middleware technologies CORBA (Common Object Request Broker Architecture), COM/DCOM (Component Object Model/Distributed Component Object Model), J2EE (Java 2 Enterprise Edition) and Web Services (including .NET) with regard to their suitability for tightly and loosely coupled distributed applications. In addition, primarily for CORBA, the dynamic CORBA components DII (Dynamic Invocation Interface) and IFR (Interface Repository) and the generic data types Any and DynAny (dynamic Any) are examined in detail. The goals are: a. to obtain concrete findings about these components and to determine in which settings these generic approaches are justified; b. to analyse the timing behaviour of the dynamic components with regard to obtaining information about unknown objects; c. to measure the timing behaviour of the dynamic components with regard to their communication; d. to measure and analyse the timing behaviour of creating generic data types and of inserting data into them; e. to measure and analyse the timing behaviour of creating unknown data types, i.e. types not described in IDL, at runtime; f. to show the advantages and disadvantages of the dynamic components, to define their fields of application and to compare them with other technologies such as COM/DCOM, J2EE and Web Services with regard to their capabilities; g. to make statements about tight and loose coupling. CORBA is chosen as a standardised and complete distribution platform for investigating the problems stated above. With regard to its dynamic behaviour, which at the time of this work had not been investigated or only insufficiently so, CORBA and Web Services point the way concerning a. working with unknown objects, which may well have implications for the development of intelligent software agents; b. the integration of legacy applications; c. the possibilities in connection with B2B (business-to-business). These problems also include general questions about the marshalling/unmarshalling of data and the effort it requires, as well as general statements about the real-time capability of CORBA-based distributed applications. The results are then transferred, as far as is permissible, to other technologies such as COM/DCOM, J2EE and Web Services. The comparisons of CORBA with DCOM, CORBA with J2EE and CORBA with Web Services show in detail the suitability of these technologies for loose and tight coupling. Furthermore, general concepts for architecture and for optimising communication are derived from the results obtained. These recommendations apply without restriction to all the technologies investigated in the context of distributed processing.
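The core idea behind DII, calling an operation on an object whose interface is only discovered at runtime rather than compiled into a static stub, can be sketched with plain Python reflection; this is an illustrative analogy only, not the CORBA API, and the `Account` class and `dynamic_invoke` helper are invented for the example:

```python
def dynamic_invoke(obj, operation, *args):
    """DII-style call: look the operation up at runtime instead of
    compiling against a static stub, and fail the way an ORB would
    for an unknown operation."""
    method = getattr(obj, operation, None)
    if method is None or not callable(method):
        raise AttributeError(f"object has no operation {operation!r}")
    return method(*args)

class Account:
    """Stand-in for a remote object whose IDL is unknown to the caller."""
    def __init__(self, balance):
        self.balance = balance
    def deposit(self, amount):
        self.balance += amount
        return self.balance
```

The dissertation's timing questions (b.-e. above) amount to measuring how much this runtime lookup and the accompanying generic marshalling cost compared with a statically compiled call path.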
Abstract:
In the current context, characterised by strong attention to food quality and safety and to solutions aimed at guaranteeing them, the implementation of microelectronic systems for product monitoring through miniaturised, low-cost devices can be a strategic opportunity. The object of this doctoral research was the study of the use of innovative sensors and instrumentation for measuring and monitoring the environmental parameters under which food products are stored, and for product identification by means of radio-frequency technology. To this end, the context in which the main actors of the agri-food chain operate was studied, and the concept of a label designed to actively emit an alarm signal when necessary (a semi-passive smart RFID label) was developed. The chip prototype, produced experimentally, was validated positively, both as a measuring instrument and in terms of performance in a case study monitoring the storage of a food product under controlled conditions of temperature and light exposure. The significant analytical evidence of degradation reactions affecting the product's quality status, such as pH and colour analyses, collected during the 64-day observation period, was consistent with the measurements recorded by the prototype chip. The results invite the identification of an industrial partner with which to test the application of the proposed technology.
Abstract:
Society is not possible without interactive language use. Analyses of language use therefore also belong among the important topics of ethnology. But which theoretical notions can one draw on to carry out such analyses? In this working paper I attempt to bring together the theoretical notions useful for such an analysis from various texts in linguistics, the ethnography of speaking, and ethnolinguistics, and thereby to give readers the means to pose numerous analytical questions about this subject. The purpose of this working paper is to provide quick access to these different aspects.
Abstract:
The remediation of contaminated aquifers is a practice for which several technological solutions are available today, characterised however by (environmental and economic) costs and technical problems of such magnitude as to make the intervention itself scarcely worthwhile in some cases. For this reason, growing research interest is being directed at bioremediation technologies, i.e. systems in which the degradation of pollutants is carried out by suitably selected and cultivated microorganisms and bacteria. The use of these techniques allows remediation targets to be reached with fewer resources and less technological apparatus than traditional systems. The research work presented in this thesis aims to provide, through the LCA methodology, an assessment of the environmental performance of an innovative remediation technology (BEARD) and of two technologies widely used in the sector, one passive (Permeable Reactive Barrier) and one active (Pump and Treat with activated carbon).
Abstract:
Studies have shown that unused patents comprise a high proportion of all patents in North America, Europe and Japan. In particular, studies have identified a considerable share of strategic patents that are left unused for purely strategic reasons. While such patents might generate strategic rents for their owner, they may have harmful consequences for society if, by blocking the alternative solutions that other inventions provide, they hamper the possibility of better solutions. Accordingly, the importance of the issue of non-use is highlighted in the literature on strategic patenting, IPR policy and innovation economics. Moreover, the current literature has emphasized the role of patent pools in dealing with potential issues such as the excessive transaction costs caused by patent thickets and blocking patents. In fact, patent pools have emerged as policy tools facilitating technology commercialization and alleviating patent litigation among rivals holding overlapping IPRs. In this dissertation I provide a critical literature review on strategic patenting, identify present gaps and discuss some future research paths. Moreover, I investigate the drivers of strategic non-use of patents, with particular focus on unused strategic play patents. Finally, I examine whether the intensity of pool members' participation in patent pools explains their willingness to use their non-pooled patents. I also investigate which characteristics of patent pools are associated with the willingness to use non-pooled patents through pool participation. I show that technological uncertainty and technological complexity are two technology-environment factors that drive unused play patents. I also show that pool members participating more intensively in patent pools are more likely to be willing to use their non-pooled patents through pool participation.
I further show that pool licensors are more likely to be willing to use their non-pooled patents by participating in pools with a higher level of technological complementarity to their own technology.
Abstract:
The goal of this thesis is to perform static tensile tests on four Carbon Fiber Reinforced Polymer (CFRP) laminates in order to obtain their ultimate tensile strength; in particular, the laminates analysed were produced by hand lay-up technology. Testing these laminates provides a reference point against which to compare other laminates, in particular a CFRP laminate produced by RTM technology.
Abstract:
Automatic scan planning for magnetic resonance imaging of the knee aims at defining an oriented bounding box around the knee joint from sparse scout images in order to choose the optimal field of view for the diagnostic images and limit acquisition time. We propose a fast and fully automatic method to perform this task based on the standard clinical scout imaging protocol. The method is based on sequential Chamfer matching of 2D scout feature images with a three-dimensional mean model of femur and tibia. Subsequently, the joint plane separating femur and tibia, which contains both menisci, can be automatically detected using an information-augmented active shape model on the diagnostic images. This can assist the clinicians in quickly defining slices with standardized and reproducible orientation, thus increasing diagnostic accuracy and also comparability of serial examinations. The method has been evaluated on 42 knee MR images. It has the potential to be incorporated into existing systems because it does not change the current acquisition protocol.
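The matching step at the heart of this pipeline can be illustrated with a minimal 2D sketch of Chamfer matching: compute a distance transform of the image's edge map, then score a candidate template pose by the mean distance of its edge points. This is a toy stand-in for intuition, not the authors' sequential 2D/3D method; the function names and the unit/diagonal chamfer weights are illustrative choices:

```python
import numpy as np

def distance_transform(edges):
    """Two-pass (forward/backward) chamfer distance transform of a
    binary edge map, with weights 1.0 (axial) and 1.414 (diagonal)."""
    h, w = edges.shape
    d = np.where(edges > 0, 0.0, 1e9)
    for y in range(h):                      # forward pass: top-left mask
        for x in range(w):
            if y > 0:
                d[y, x] = min(d[y, x], d[y-1, x] + 1.0)
                if x > 0:
                    d[y, x] = min(d[y, x], d[y-1, x-1] + 1.414)
                if x < w - 1:
                    d[y, x] = min(d[y, x], d[y-1, x+1] + 1.414)
            if x > 0:
                d[y, x] = min(d[y, x], d[y, x-1] + 1.0)
    for y in range(h - 1, -1, -1):          # backward pass: bottom-right mask
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y, x] = min(d[y, x], d[y+1, x] + 1.0)
                if x > 0:
                    d[y, x] = min(d[y, x], d[y+1, x-1] + 1.414)
                if x < w - 1:
                    d[y, x] = min(d[y, x], d[y+1, x+1] + 1.414)
            if x < w - 1:
                d[y, x] = min(d[y, x], d[y, x+1] + 1.0)
    return d

def chamfer_score(dist, template_points, offset):
    """Mean distance of the shifted template edge points to the nearest
    image edge; lower is a better match, 0.0 is a perfect overlap."""
    oy, ox = offset
    return float(np.mean([dist[y + oy, x + ox] for y, x in template_points]))
```

Sweeping the offset (and, in the real method, rotation and scale) and keeping the pose with the lowest score is what aligns the mean bone model with the scout feature images.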
Abstract:
Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contributions of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech and that learners can extract this information from talking faces. Thus, we created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative for identifying word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative to word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition.
Abstract:
Recent advances in the field of statistical learning have established that learners are able to track regularities of multimodal stimuli, yet it is unknown whether the statistical computations are performed on integrated representations or on separate, unimodal representations. In the present study, we investigated the ability of adults to integrate audio and visual input during statistical learning. We presented learners with a speech stream synchronized with a video of a speaker's face. In the critical condition, the visual (e.g., /gi/) and auditory (e.g., /mi/) signals were occasionally incongruent, which we predicted would produce the McGurk illusion, resulting in the perception of an audiovisual syllable (e.g., /ni/). In this way, we used the McGurk illusion to manipulate the underlying statistical structure of the speech streams, such that perception of these illusory syllables facilitated participants' ability to segment the speech stream. Our results therefore demonstrate that participants can integrate audio and visual input to perceive the McGurk illusion during statistical learning. We interpret our findings as support for modality-interactive accounts of statistical learning.
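The statistical computation assumed by the segmentation studies above is typically tracking transitional probabilities between adjacent syllables, with word boundaries posited where the probability dips. A minimal sketch of that textbook Saffran-style mechanism (the syllables and threshold are invented for the example, and this is not the authors' exact materials):

```python
from collections import Counter

def transitional_probabilities(syllables):
    """TP(a -> b) = count(a, b) / count(a) over adjacent syllable pairs."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

def segment(syllables, tps, threshold=1.0):
    """Insert a word boundary wherever the TP drops below the threshold."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words
```

Within-word transitions in such streams have TP 1.0 while cross-word transitions are lower, so thresholding the TP profile recovers the word inventory; the McGurk manipulation above effectively changes which syllable sequence these statistics are computed over.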