10 results for Machine Translation
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The ANTE project concerns new machine translation (MT) systems and their application in the business world. The study builds on recent developments in artificial intelligence and Big Data, which in recent years have enabled MT to reach very high quality levels, to the point of being adopted by large multinationals to win new market share. MT can also meet the needs of small, low-tech enterprises, improving the quality of their multilingual communication through fast, low-cost translations. The study therefore aims to help strengthen the international competitiveness of small and medium-sized enterprises (SMEs) in Emilia-Romagna, improving their ability to communicate in one or more foreign languages through the effective and informed introduction and use of latest-generation ICT solutions, and thus providing new opportunities for internationalisation.
Abstract:
The present study aims at analyzing how dark humour as a cinematic genre travels cross-culturally through a specific mode of audiovisual translation, i.e. dubbing. In particular, it takes into consideration the processes involved in dubbing humour from English into Italian as observed in the English- and Italian-language versions of ten British and American dark comedies from the 1940s to the 2000s. In an attempt to identify some of the main mechanisms of the dark humour genre, the humorous content of the films was analyzed in terms of the elements on which specific scenes are based, mainly the non-verbal and verbal components. In the cases in which verbal elements were involved, i.e. the examples of verbally expressed humour, the analysis was concerned with whether they were adapted into Italian and to what effect. Quantification of the different kinds of dark humour revealed that in the sample of dark comedies verbal dark humour had a higher frequency (85.3%) than non-verbal dark humour (14.7%), which partially disconfirmed the first part of the research hypothesis. However, the significance of contextual elements in the conveying of dark humour, both in the form of Nsp VEH (54.31%) and V-V (V+VE) (21.68%), provided support for the hypothesis that, even when expressed verbally, dark humour is more closely linked to context-based rather than purely linguistic humour (4.9%). The second part of the analysis was concerned with an investigation of the strategies adopted for the translation of verbal dark humour elements from the SL (English) into the TL (Italian) through the filter of dubbing. Four translational strategies were identified as far as the rendering of verbal dark humour is concerned: i) complete omission; ii) weakening; iii) close rendering; and iv) increased effect. Close rendering was found to be the most common among these strategies, with 80.9% of dark humour examples being transposed in a way that kept the ST’s function substantially intact.
Weakening of darkly humorous lines was applied in 12% of cases, whereas increased effect accounted for 4.6% and complete omission for 2.5%. The fact that for most examples of Nsp VEH (84.9%) and V-AC (V+VE) (91.4%) a close rendering effect was observed and that 12 out of 21 examples of V-AC (PL) (a combined 57%) were either omitted or weakened seemed to confirm, on the one hand, the complexity of the translation process required by cases of V-AC (PL) and V-AC (CS). On the other hand, as suggested in the second part of the research hypothesis, the data might be interpreted as indicating that lesser effort on the translator/adaptor’s part is involved in the adaptation of V-AC (Nsp VEH) and V-V (V+VE). The issue of the possible censorial intervention undergone by examples of verbal dark humour in the sample still remains unclear.
Abstract:
Generic programming is likely to become a new challenge for a critical mass of developers. Therefore, it is crucial to refine the support for generic programming in mainstream Object-Oriented languages — both at the design and at the implementation level — as well as to suggest novel ways to exploit the additional degree of expressiveness made available by genericity. This study is meant to provide a contribution towards bringing Java genericity to a more mature stage with respect to mainstream programming practice, by increasing the effectiveness of its implementation and by revealing its full expressive power in real-world scenarios. With respect to the current research setting, the main contribution of the thesis is twofold. First, we propose a revised implementation for Java generics that greatly increases the expressiveness of the Java platform by adding reification support for generic types. Secondly, we show how Java genericity can be leveraged in a real-world case study in the context of multi-paradigm language integration. Several approaches have been proposed to overcome the lack of reification of generic types in the Java programming language. Existing approaches tackle the problem by defining new translation techniques that allow for a runtime representation of generics and wildcards. Unfortunately, most approaches suffer from several problems: heterogeneous translations are known to be problematic when considering reification of generic methods and wildcards. On the other hand, more sophisticated techniques requiring changes in the Java runtime support reified generics through a true language extension (where clauses), so that backward compatibility is compromised.
In this thesis we develop a sophisticated type-passing technique for addressing the problem of reification of generic types in the Java programming language; this approach — first pioneered by the so-called EGO translator — is here turned into a full-blown solution which reifies generic types inside the Java Virtual Machine (JVM) itself, thus overcoming both the performance penalties and the compatibility issues of the original EGO translator. As for Java–Prolog integration, integrating Object-Oriented and declarative programming has been the subject of several research efforts and corresponding technologies. Such proposals come in two flavours: either attempting to join the two paradigms, or simply providing an interface library for accessing Prolog declarative features from a mainstream Object-Oriented language such as Java. Both solutions, however, have drawbacks: in the case of hybrid languages featuring both Object-Oriented and logic traits, the resulting language is typically too complex, making mainstream application development a harder task; in the case of library-based integration approaches there is no true language integration, and some “boilerplate code” has to be written to bridge the paradigm mismatch. In this thesis we develop a framework called PatJ which promotes seamless exploitation of Prolog programming in Java. A sophisticated usage of generics/wildcards allows a precise mapping between Object-Oriented and declarative features to be defined. PatJ defines a hierarchy of classes where the bidirectional semantics of Prolog terms is modelled directly at the level of the Java generic type system.
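The erasure problem that motivates the reification work above can be illustrated with a minimal sketch (this is an illustrative example of standard Java behaviour, not code from the thesis):

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();

        // Under Java's erasure-based translation, both lists share the
        // same runtime class object: the type arguments String and
        // Integer are not reified, so they are invisible at runtime.
        System.out.println(strings.getClass() == ints.getClass()); // prints "true"

        // As a consequence, runtime tests against a parameterized type
        // are rejected by the compiler:
        // boolean b = strings instanceof List<String>; // does not compile
    }
}
```

A reifying translation, such as the type-passing approach developed in the thesis, would instead preserve the type arguments at runtime, allowing such tests and enabling precise mappings like the one PatJ builds over Prolog terms.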
Abstract:
The goal of this thesis work is to develop a computational method based on machine learning techniques for predicting the disulfide-bonding states of cysteine residues in proteins, a sub-problem of the bigger and as yet unsolved problem of protein structure prediction. Improved prediction of the disulfide-bonding states of cysteine residues helps to place constraints on the three-dimensional (3D) structure of the respective protein, and thus will eventually help in the prediction of the 3D structure of proteins. The results of this work have direct implications for site-directed mutational studies of proteins, protein engineering and the problem of protein folding. We have used a combination of an Artificial Neural Network (ANN) and a Hidden Markov Model (HMM), the so-called Hidden Neural Network (HNN), as the machine learning technique for our prediction method. By using different global and local features of proteins (specifically profiles, parity of cysteine residues, average cysteine conservation, correlated mutation, sub-cellular localization, and signal peptide) as inputs, and considering Eukaryotes and Prokaryotes separately, we have reached a remarkable accuracy of 94% on a per-cysteine basis for both the Eukaryotic and Prokaryotic datasets, and accuracies of 90% and 93% on a per-protein basis for the Eukaryotic and Prokaryotic datasets respectively. These are the best accuracies reached so far by any existing prediction method, so our method outperforms all previously developed approaches and is therefore more reliable. The most interesting part of this thesis work is the difference in prediction performance between Eukaryotes and Prokaryotes at the basic level of input coding, when ‘profile’ information was given as input to our prediction method.
One of the reasons we discovered for this is the difference in the amino acid composition of the local environment of bonded and free cysteine residues in Eukaryotes and Prokaryotes: Eukaryotic bonded cysteine examples have a ‘symmetric-cysteine-rich’ environment, whereas Prokaryotic bonded examples lack it.
Abstract:
This dissertation investigates the notion of equivalence with particular reference to lexical cohesion in the translation of political speeches. Lexical cohesion poses a particular challenge to the translators of political speeches, and preserving lexical cohesion elements, as one of the major elements of cohesion, is undoubtedly crucial to their translation equivalence. We rely on Halliday’s (1994) classification of lexical cohesion, which comprises repetition, synonymy, antonymy, meronymy and hyponymy. Other traditional models of lexical cohesion are also examined. We include grammatical parallelism for its role in creating textual semantic unity, which is what cohesion is all about. The study sheds light on the function of lexical cohesion elements as rhetorical devices. It also deals with lexical problems resulting from the transfer of lexical cohesion elements from the SL into the TL, a process often beset by problems that most often result from the differences between languages. Several key issues are identified as fundamental to equivalence and lexical cohesion in the translation of political speeches: the sociosemiotic approach, register analysis, rhetoric, and the poetic function. The study also investigates the lexical cohesion elements in the translation of political speeches from English into Arabic, Italian and French in relation to ideology, and its control, through bias and distortion. The findings are discussed, implications examined and topics for further research suggested.
Abstract:
The academic environment has recently recognized the importance and benefits that extensive research on the translation of advertising can have for translation studies. Despite the growing interest and increasing research activity in the field, it is still difficult to speak of a theory of advertising translation in general. Further study encompassing different languages and both heterogeneous and homogeneous cultures is needed to obtain a more complete map of what the translation of advertising is and should be. Previous studies have concentrated, for the most part, on Western European language pairs. This study investigates perfume and cosmetics print advertisements translated from English into Russian, where both visual and verbal elements are considered. Three broad translation approaches have been identified for the verbal message (translated message, parallel translation, recreated adverts) and three approaches for the image (similar images, modified images, completely different images). The thesis shows that where Russian advertisements for perfume products tend to have a message, or create one, this is often lacking in the English copy. The study ends by suggesting that perfume advertisements favor the standardization approach when entering the Russian market. Attempts to localize the adverts were also observed, although they are less numerous in perfume adverts and are rather instances of adaptation - a mix between the localization and standardization approaches - since they keep drawing on the same globally accepted universals about female beauty and concern for ‘woman’s identity’ (we focused our analysis on products designed for female consumers). This study, complementing previous studies, aims to contribute to the description of the laws and strategies that guide the translation of advertising texts into Russian.
Abstract:
3D video-fluoroscopy is an accurate but cumbersome technique for estimating natural or prosthetic human joint kinematics. This dissertation proposes innovative methodologies to improve the reliability and usability of 3D fluoroscopic analysis. Being based on direct radiographic imaging of the joint, and thus avoiding the soft tissue artefacts that limit the accuracy of skin-marker-based techniques, fluoroscopic analysis has a potential accuracy of the order of mm/deg or better. It can provide fundamental information for clinical and methodological applications, but, notwithstanding the number of methodological protocols proposed in the literature, time-consuming user interaction is still required to obtain consistent results. This user-dependency has prevented a reliable quantification of the actual accuracy and precision of the methods and, consequently, has slowed down their translation to clinical practice. The objective of the present work was to speed up this process by introducing methodological improvements in the analysis. In the thesis, fluoroscopic analysis was characterized in depth, in order to evaluate its pros and cons and to provide reliable solutions to overcome its limitations. To this aim, an analytical approach was followed. The major sources of error were isolated through preliminary in-silico studies as: (a) geometric distortion and calibration errors; (b) 2D image and 3D model resolutions; (c) incorrect contour extraction; (d) bone model symmetries; (e) optimization algorithm limitations; (f) user errors. The effect of each criticality was quantified, and verified with a preliminary in-vivo study on the elbow joint. The dominant source of error was identified as the limited extent of the convergence domain of the local optimization algorithms, which forced the user to manually specify the starting pose for the estimation process.
To solve this problem, two different approaches were followed: to increase the optimal-pose convergence basin, the local approach used sequential alignment of the 6 degrees of freedom in order of sensitivity, or a geometrical feature-based estimation of the initial conditions for the optimization; the global approach used an unsupervised memetic algorithm to optimally explore the search domain. The performance of the technique was evaluated with a series of in-silico studies and validated in-vitro through a phantom-based comparison with a radiostereometric gold standard. The accuracy of the method is joint-dependent; for the intact knee joint, the new unsupervised algorithm guaranteed a maximum error lower than 0.5 mm for in-plane translations, 10 mm for out-of-plane translation, and 3 deg for rotations in a mono-planar setup, and lower than 0.5 mm for translations and 1 deg for rotations in a bi-planar setup. The bi-planar setup is best suited when accurate results are needed, such as for methodological research studies; the mono-planar analysis may be enough for clinical applications where analysis time and cost are an issue. A further reduction of the user interaction was obtained for prosthetic joint kinematics. A mixed region-growing and level-set segmentation method was proposed, which halved the analysis time by delegating the computational burden to the machine. In-silico and in-vivo studies demonstrated that the reliability of the new semiautomatic method was comparable to a user-defined manual gold standard. The improved fluoroscopic analysis was finally applied to a first in-vivo methodological study on foot kinematics. Preliminary evaluations showed that the presented methodology represents a feasible gold standard for the validation of skin-marker-based foot kinematics protocols.
Abstract:
Translational control has a direct impact on cancer development and progression. Quantitative and qualitative changes in cap-dependent translation initiation contribute to neoplastic transformation and progression. However, the idea is emerging that “alternative” mechanisms of translation initiation, such as IRES-dependent translation, can be involved in tumorigenesis. Because the relevance of this kind of translation initiation to cancer progression is not well clarified, the purpose of my work was to study the impact of IRES-dependent mRNA translation on tumourigenesis and cancer progression, with particular regard to breast cancer. The data obtained clarify the function of cap-independent translation in cancer. In particular, they suggest that the deregulation of IRES-dependent translation can be considered a sort of pro-oncogenic stimulus, characterized by the inhibition of the expression of some proteins that block cell growth and proliferation and by the overexpression of other proteins that contribute to cell survival. In addition, under stress conditions such as hypoxia, in immortalized epithelial cell lines, changes in cap-independent translation are associated with an induction of the expression of stem cell markers and with the selection of a subgroup of cells that have an increased ability to self-renew, and therefore with the acquisition of a more aggressive phenotype.
Abstract:
Different types of proteins exist, with diverse functions that are essential for living organisms. An important class is represented by transmembrane proteins, which are specifically designed to be inserted into biological membranes and to perform very important functions in the cell, such as cell communication and active transport across the membrane. Transmembrane β-barrels (TMBBs) are a sub-class of membrane proteins largely under-represented in structure databases because of the extreme difficulty of experimental structure determination. For this reason, computational tools able to predict the structure of TMBBs are needed. In this thesis, two computational problems related to TMBBs were addressed: the detection of TMBBs in large protein datasets and the prediction of the topology of TMBB proteins. Firstly, a method for TMBB detection was presented, based on a novel neural network framework for variable-length sequence classification. The proposed approach was validated on a non-redundant dataset of proteins. Furthermore, we carried out genome-wide detection on the entire Escherichia coli proteome. In both experiments, the method significantly outperformed other existing state-of-the-art approaches, reaching a very high PPV (92%) and MCC (0.82). Secondly, a method was introduced for TMBB topology prediction. The proposed approach is based on grammatical modelling and probabilistic discriminative models for sequence data labeling. The method was evaluated using a newly generated dataset of 38 TMBB proteins obtained from high-resolution data in the PDB. The results show that the model correctly predicts the topologies of 25 out of 38 protein chains in the dataset. When tested on previously released datasets, the performance of the proposed approach was comparable or superior to the current state of the art in TMBB topology prediction.
Abstract:
The research activity focused on the study, design and evaluation of innovative human-machine interfaces based on virtual three-dimensional environments, driven by the user’s brain electrical activity recorded in real time. The achieved target was to identify and classify in real time the different brain states and to adapt the interface and/or stimuli to the corresponding emotional state of the user. An experimental facility based on an innovative experimental methodology for “man in the loop” simulation was set up. During pilot training in virtually simulated flights, it involved both the pilot and a flight examiner, so that the latter’s subjective evaluations could be compared with objective measurements of the pilot’s brain activity; all the relevant information was recorded against a timeline. The different combinations of emotional intensities obtained led to an evaluation of the current situational awareness of the user. These results have important implications for current pilot training methodology, and their use could be extended as a tool to improve the evaluation of pilot/crew performance in interacting with the aircraft when performing tasks and procedures, especially in critical situations. This research also resulted in the design of an interface that adapts the control of the machine to the situation awareness of the user. The new concept aimed at improving the efficiency of the interaction between the user and the interface, and at gaining capacity by reducing the user’s workload, hence improving overall system safety. This innovative research, combining emotions measured through electroencephalography, resulted in a human-machine interface with three aeronautics-related applications: • an evaluation tool during pilot training; • an input for the cockpit environment; • an adaptation tool for cockpit automation.