Abstract:
Slope failure occurs in many areas throughout the world and becomes a critical problem when it interferes with human activity, causing loss of life and property damage. In this research we investigate slope failure through centrifuge modeling, in which a reduced-scale model, N times smaller than the full-scale prototype, is tested while the acceleration is increased to N times gravity so as to preserve the stress and strain behavior of the prototype. The aims of this research, “Centrifuge modeling of sandy slopes”, are, in brief: 1) to test the reliability of centrifuge modeling as a tool for investigating the failure of a sandy slope; 2) to understand how the failure mechanism is affected by changing the slope angle and to obtain information useful for design. To achieve these aims, the work is arranged as follows. Chapter one: centrifuge modeling of slope failure. This chapter provides an overview of the context of the work: what slope failure is, how it occurs, and which tools are available to investigate the phenomenon. It then introduces the technology used to study this topic, the geotechnical centrifuge. Chapter two: testing apparatus. The first section of this chapter describes the procedures and facilities used to perform a centrifuge test. We then describe the characteristics of the soil (Nevada sand), such as dry unit weight, water content and relative density, and its strength parameters (c, φ), determined in the laboratory through triaxial tests. Chapter three: centrifuge tests. This part presents the results of the centrifuge tests, namely the acceleration at failure for each model tested and its failure surface. In our case study we tested models with the same soil and geometric characteristics but different slope angles.
The angles tested in this research were 60°, 75° and 90°. Chapter four: slope stability analysis. We introduce the features and concepts of the software ReSSA (2.0), which allows us to calculate the theoretical failure surfaces of the prototypes. We then compare the experimental failure surfaces of the prototypes, traced in the laboratory, with those calculated by the software. Chapter five: conclusion. The conclusion presents the results obtained in relation to the two main aims mentioned above.
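The scaling law at the heart of centrifuge modeling can be sketched numerically: because vertical stress is density × acceleration × depth, shrinking the geometry by N while spinning the model at N·g leaves the stress field unchanged. The density and scale factor below are illustrative assumptions, not values taken from the thesis.

```python
# Minimal sketch of the centrifuge scaling law: a model N times smaller,
# accelerated at N*g, reproduces the prototype stress at homologous depths.
# rho and N are assumed illustrative values, not data from the thesis.
G = 9.81  # gravitational acceleration, m/s^2

def stress_at_depth(density, accel, depth):
    """Vertical stress sigma = rho * a * z (Pa)."""
    return density * accel * depth

rho = 1600.0   # assumed dry density of the sand, kg/m^3
N = 50         # scale factor
z_proto = 5.0  # depth of interest in the prototype, m

sigma_proto = stress_at_depth(rho, G, z_proto)          # full-scale stress
sigma_model = stress_at_depth(rho, N * G, z_proto / N)  # model at N*g, depth z/N

# The two N factors cancel, so the model preserves the prototype stress field.
assert abs(sigma_proto - sigma_model) < 1e-6
```

This cancellation is exactly why acceleration at failure in the model can be mapped back to the stability of the full-scale slope.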
Abstract:
The text retraces the origins of the detective novel, its structure, the narrative techniques it employs, and the evolution of the genre that led to the birth of the French Noir and the Mediterranean Noir. Observing that, because of its characteristics, noir literature in general presents cultural, environmental and linguistic specificities, which are particularly evident in Izzo’s work, we come to consider how the translator must approach the work as an anthropologist or an ethnologist would, that is, become the interpreter of a “culture”. The third part of the work presents Izzo’s Trilogy, its narrative structure and the socio-cultural elements present in the work. The fourth part sets out theories concerning both interlingual and intralingual translation and then, after an analysis of the Italian translation of the texts, presents, in an appendix, a possible translation of the prologue of Total Khéops, understood as a synthesis of the author’s narrative and cultural characteristics. The concluding part of this work concerns intersemiotic translation and, following the line of thought that has guided the analysis of the Trilogy, examines some significant dialogues from the television adaptation and compares them with the narrative text.
Abstract:
A prevalent claim is that we are in a knowledge economy. By knowledge economy we generally mean a “knowledge-based economy”, indicating the use of knowledge and technologies to produce economic benefits. Knowledge is thus both the tool and the raw material (people’s skills) for producing some kind of product or service. In this kind of environment economic organization is undergoing several changes: authority relations are less important, legal and ownership-based definitions of the boundaries of the firm are becoming irrelevant, and there are only few constraints on the set of coordination mechanisms. Hence what characterises a knowledge economy is the growing importance of human capital in productive processes (Foss, 2005) and the increasing knowledge intensity of jobs (Hodgson, 1999). Economic processes are also highly intertwined with social processes: they are likely to be informal and reciprocal rather than formal and negotiated. Another important point is the division of labor: as economic activity becomes mainly intellectual and requires the integration of specific and idiosyncratic skills, the task of dividing the job and assigning it to the most appropriate individuals becomes arduous, a “supervisory problem” (Hodgson, 1999) emerges, and traditional hierarchical control may prove increasingly ineffective. Not only does the specificity of know-how make it awkward to monitor the execution of tasks; more importantly, top-down integration of skills may be difficult because ‘the nominal supervisors will not know the best way of doing the job – or even the precise purpose of the specialist job itself – and the worker will know better’ (Hodgson, 1999). We therefore expect the organization of the economic activity of specialists to be, at least partially, self-organized. The aim of this thesis is to bridge studies from computer science, and in particular from Peer-to-Peer (P2P) Networks, to organization theories.
We think that the P2P paradigm fits well with organization problems in all those situations in which a central authority is not possible. We believe that P2P networks show a number of characteristics similar to firms working in a knowledge-based economy, and hence that the methodology used for studying P2P networks can be applied to organization studies. There are three main characteristics we think P2P networks have in common with firms involved in the knowledge economy: - Decentralization: in a pure P2P system every peer is an equal participant; there is no central authority governing the actions of the single peers; - Cost of ownership: P2P computing implies shared ownership, reducing the cost of owning the systems and the content, and the cost of maintaining them; - Self-organization: the process by which global order emerges within a system without another system dictating this order. These characteristics are also present in the kind of firm we try to address, and that is why we have transferred the techniques we adopted in computer science studies (Marcozzi et al., 2005; Hales et al., 2007 [39]) to management science.
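The decentralization and self-organization properties listed above can be illustrated with a classic toy protocol from the P2P literature, gossip-based averaging: peers repeatedly average their local state with a randomly chosen partner, and a global consensus emerges with no central authority. This is a generic sketch of the paradigm, not the specific protocols studied in the thesis.

```python
# Toy gossip protocol: each peer holds a local value and repeatedly averages
# it with a random partner. Order (consensus on the global mean) emerges
# bottom-up, with no coordinator -- the self-organization property above.
import random

random.seed(42)
peers = [random.uniform(0.0, 100.0) for _ in range(20)]  # each peer's local state
target = sum(peers) / len(peers)  # pairwise averaging preserves the global mean

for _ in range(2000):  # rounds of decentralized pairwise gossip
    i, j = random.sample(range(len(peers)), 2)
    avg = (peers[i] + peers[j]) / 2.0
    peers[i] = peers[j] = avg

# Every peer converges to the global average without any central authority.
assert all(abs(p - target) < 1e-3 for p in peers)
```

Note that no peer ever sees more than one other peer's state per round, yet the system as a whole reaches a coherent global outcome.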
Abstract:
The construction and use of multimedia corpora has been advocated for a while in the literature as one of the expected future application fields of Corpus Linguistics. This research project represents a pioneering experience aimed at applying a data-driven methodology to the study of AVT, similarly to what has been done in the last few decades in the macro-field of Translation Studies. This research was based on the experience of Forlixt 1, the Forlì Corpus of Screen Translation, developed at the University of Bologna’s Department of Interdisciplinary Studies in Translation, Languages and Culture. In order to quantify strategies of linguistic transfer of an AV product, we need to take into consideration not only the linguistic aspect of such a product but all the meaning-making resources deployed in the filmic text. Since one major benefit of Forlixt 1 is the combination of audiovisual and textual data, the corpus allows the user to access primary data for scientific investigation, without having to rely on pre-processed material such as traditional annotated transcriptions. Based on this rationale, the first chapter of the thesis sets out to illustrate the state of the art of research in the disciplinary fields involved. The primary objective was to underline the main repercussions on multimedia texts resulting from the interaction of a double support, audio and video, and, accordingly, on the procedures, means and methods adopted in their translation. By drawing on previous research in semiotics and film studies, the relevant codes at work in the visual and acoustic channels were outlined. Subsequently, we concentrated on the analysis of the verbal component and on the peculiar characteristics of filmic orality as opposed to spontaneous dialogic production. In the second part, an overview of the main AVT modalities was presented (dubbing, voice-over, interlingual and intralingual subtitling, audio-description, etc.)
in order to define the different technologies, processes and professional qualifications that this umbrella term presently includes. The second chapter focuses diachronically on the contribution of various theories (i.e. Descriptive Translation Studies, Polysystem Theory) to the application of Corpus Linguistics methods and tools to the field of Translation Studies. In particular, we discussed how the use of corpora can help reduce the gap existing between qualitative and quantitative approaches. Subsequently, we reviewed the tools traditionally employed by Corpus Linguistics for the construction of traditional “written language” corpora, to assess whether and how they can be adapted to meet the needs of multimedia corpora. In particular, we reviewed existing speech and spoken corpora, as well as multimedia corpora specifically designed to investigate translation. The third chapter reviews Forlixt 1’s main development steps, from a technical (IT design principles, data query functions) and methodological point of view, laying down extensive scientific foundations for the annotation methods adopted, which presently encompass categories of a pragmatic, sociolinguistic, linguacultural and semiotic nature. Finally, we described the main query tools (free search, guided search, advanced search and combined search) and the main intended uses of the database in a pedagogical perspective. The fourth chapter lists the specific compilation criteria adopted, as well as statistics for the two sub-corpora, presenting data broken down by language pair (French-Italian and German-Italian) and genre (film comedies, television soap operas and crime series). Next, we concentrated on the discussion of the results obtained from the analysis of summary tables reporting the frequency of the categories applied to the French-Italian sub-corpus.
The detailed observation of the distribution of categories identified in the original and dubbed corpus allowed us to empirically confirm some of the theories put forward in the literature, notably concerning the nature of the filmic text, the dubbing process and the features of dubbed Italian. This was possible by looking into some of the most problematic aspects, like the rendering of sociolinguistic variation. The corpus also allowed us to consider so far neglected aspects, such as pragmatic, prosodic, kinetic, facial and semiotic elements, and their combination. At the end of this first exploration, some specific observations concerning possible macrotranslation trends were made for each type of sub-genre considered (cinematic and TV genre). On the grounds of this first quantitative investigation, the fifth chapter set out to examine the data further by applying ad hoc models of analysis. Given the virtually infinite number of combinations of categories adopted, and of the latter with searchable textual units, three possible qualitative and quantitative methods were designed, each concentrating on a particular translation dimension of the filmic text. The first concerned the cultural dimension, focusing on the rendering of selected cultural references and on the investigation of recurrent translation choices and strategies, justified on the basis of the occurrence of specific clusters of categories. The second analysis was conducted on the linguistic dimension, by exploring the occurrence of phrasal verbs in the Italian dubbed corpus and by ascertaining the influence of possible semiotic traits, such as gestures and facial expressions, on the adoption of related translation strategies. Finally, the main aim of the third study was to verify whether, under which circumstances, and through which modality, graphic and iconic elements were translated into Italian from an original corpus of both German and French films.
After reviewing the main translation techniques at work, an exhaustive account of possible causes for their non-translation was also provided. By way of conclusion, the discussion of the results obtained from the distribution of annotation categories on the French-Italian corpus, as well as the application of specific models of analysis, allowed us to underline possible advantages and drawbacks of adopting a corpus-based approach to AVT studies. Even though possible updates and improvements were proposed in order to help solve some of the problems identified, it is argued that the added value of Forlixt 1 lies ultimately in having created a valuable instrument, making it possible to carry out empirically sound contrastive studies that may be usefully replicated on different language pairs and several types of multimedia texts. Furthermore, multimedia corpora can also play a crucial role in L2 and translation teaching, two disciplines in which their use still lacks systematic investigation.
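The summary tables of category frequencies discussed above can be sketched in a few lines: given annotated segments tagged with a language pair and an annotation category, one counts how often each category occurs in a given sub-corpus. The category names and records below are invented for illustration; Forlixt 1’s actual annotation schema is not reproduced here.

```python
# Hypothetical sketch of a category-frequency summary table for an annotated
# multimedia corpus. Records and category names are invented, not Forlixt 1 data.
from collections import Counter

# Each record: (language_pair, genre, annotation_category)
annotations = [
    ("FR-IT", "comedy", "cultural_reference"),
    ("FR-IT", "comedy", "sociolinguistic_variation"),
    ("FR-IT", "crime", "cultural_reference"),
    ("DE-IT", "soap_opera", "gesture"),
    ("FR-IT", "comedy", "cultural_reference"),
]

# Frequency of categories restricted to the French-Italian sub-corpus.
fr_it = Counter(cat for pair, genre, cat in annotations if pair == "FR-IT")
print(fr_it.most_common())  # categories ranked by frequency
```

The same counting, grouped instead by genre or by combinations of categories, yields the kind of breakdowns the fourth chapter reports.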
Abstract:
Bifidobacteria constitute up to 3% of the total microbiota and represent one of the most important health-promoting bacterial groups of the human intestinal microflora. The presence of Bifidobacterium in the human gastrointestinal tract has been directly related to several health-promoting activities; however, to date, no information about the specific mechanisms of interaction with the host is available. The first health-promoting activity studied in this work was oxalate degradation. Oxalic acid occurs extensively in nature and plays diverse roles, especially in pathological processes. Due to its highly oxidizing effects, hyperabsorption or abnormal synthesis of oxalate can cause serious acute disorders in mammals and be lethal in extreme cases. Intestinal oxalate-degrading bacteria could therefore be pivotal in maintaining oxalate homeostasis, reducing the risk of kidney stone development. In this study, the oxalate-degrading activity of 14 bifidobacterial strains was measured by a capillary electrophoresis technique. The oxc gene, encoding oxalyl-CoA decarboxylase, a key enzyme in oxalate catabolism, was isolated by probing a genomic library of B. animalis subsp. lactis BI07, which was one of the most active strains in the preliminary screening. The genetic and transcriptional organization of the oxc flanking regions was determined, revealing the presence of two other independently transcribed open reading frames, potentially responsible for the ability of B. animalis subsp. lactis to degrade oxalate. Transcriptional analysis, using real-time quantitative reverse transcription PCR, revealed that these genes were highly induced in cells first adapted to subinhibitory concentrations of oxalate and then exposed to pH 4.5. Acidic conditions were also a prerequisite for a significant oxalate degradation rate, which dramatically increased in oxalate pre-adapted cells, as demonstrated in fermentation experiments with different pH-controlled batch cultures.
These findings provide new insights into the characterization of oxalate-degrading probiotic bacteria and may support the use of B. animalis subsp. lactis as a promising adjunct for the prophylaxis and management of oxalate-related kidney disease. In order to provide some insight into the molecular mechanisms involved in the interaction with the host, in the second part of this work we investigated whether Bifidobacterium was able to capture human plasminogen on the cell surface. The binding of human plasminogen to Bifidobacterium was dependent on lysine residues of surface protein receptors. By using a proteomic approach, we identified six putative plasminogen-binding proteins in the cell wall fraction of three strains of Bifidobacterium. The data suggest that plasminogen binding to Bifidobacterium is due to the concerted action of a number of proteins located on the bacterial cell surface, some of which are highly conserved cytoplasmic proteins that have other essential cellular functions. Our findings represent a step forward in understanding the mechanisms involved in the Bifidobacterium-host interaction. In this work we also studied a new approach, based on MALDI-TOF MS, to measure the interaction between whole bacterial cells and host molecular targets. MALDI-TOF (Matrix-Assisted Laser Desorption Ionization–Time of Flight) mass spectrometry was applied, for the first time, to investigate the interaction between whole Bifidobacterium cells and host target proteins. In particular, by means of this technique, a dose-dependent human plasminogen-binding activity was shown for Bifidobacterium, and the involvement of lysine binding sites on the bacterial cell surface was proved. The results were found to be consistent with those from well-established standard methodologies; thus the proposed MALDI-TOF approach has the potential to become a fast alternative method in the field of biorecognition studies involving bacterial cells and proteins of human origin.
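The gene induction reported above was measured by real-time quantitative RT-PCR; a standard way of expressing such measurements is the 2^-ΔΔCt relative-quantification method, sketched below. The Ct values are invented for illustration only and are not data from this study.

```python
# Standard 2^-ddCt relative quantification for real-time RT-PCR data,
# as commonly used to report fold induction of a gene such as oxc.
# All Ct values below are invented illustrative numbers.
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Fold change of the target gene by the 2^-ddCt method."""
    d_ct_treated = ct_target_treated - ct_ref_treated    # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                  # compare to control condition
    return 2.0 ** (-dd_ct)

# Example: the target Ct drops from 28 to 24 cycles after treatment while the
# reference gene stays at 20, i.e. a 16-fold induction.
fold = relative_expression(24.0, 20.0, 28.0, 20.0)
print(fold)  # 16.0
```

Each cycle earlier that a transcript crosses the detection threshold corresponds to a doubling of its abundance, which is why a 4-cycle shift maps to a 2^4 = 16-fold induction.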