790 results for Translation and Psychoanalysis
Abstract:
A number of researchers have investigated the application of neural networks to visual recognition, with much of the emphasis placed on exploiting the network's ability to generalise. However, despite the benefits of such an approach, it is not at all obvious how networks can be developed that are capable of recognising objects subject to changes in rotation, translation and viewpoint. In this study, we suggest that a possible solution to this problem can be found by studying aspects of visual psychology, in particular perceptual organisation. For example, it appears that grouping together lines based upon perceptually significant features can facilitate viewpoint-independent recognition. The work presented here identifies simple grouping measures based on parallelism and connectivity and shows how multi-layer perceptrons (MLPs) can be trained to detect and determine the perceptual significance of any group presented. In this way, it is shown how MLPs trained via backpropagation to perform individual grouping tasks can be brought together into a novel, large-scale network capable of determining the perceptual significance of the whole input pattern. Finally, the applicability of such significance values for recognition is investigated, and results indicate that both the MLP and the Kohonen Feature Map can be trained to recognise simple shapes described in terms of perceptual significances. This study has also provided an opportunity to investigate aspects of the backpropagation algorithm, particularly the ability to generalise, and we report the results of various generalisation tests. In applying the backpropagation algorithm to certain problems, we found a deficiency in the performance of the standard learning algorithm; an improvement could, however, be obtained when suitable modifications were made to the algorithm. The modifications and the consequent results are reported here.
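The abstract above describes training multi-layer perceptrons with backpropagation to score the perceptual significance of line groupings from cues such as parallelism and connectivity. As a rough illustration only (the features, data, network size and training details below are invented for this sketch and are not the authors' network), a minimal MLP of that kind might look as follows:

```python
# Minimal sketch (hypothetical features and data): a small MLP trained with
# plain backpropagation to map two grouping cues -- "parallelism" and
# "connectivity" -- to a perceptual-significance score in [0, 1].
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: each row = [parallelism, connectivity] for a line group;
# the target labels are made up purely for illustration.
X = rng.random((200, 2))
y = ((X[:, 0] + X[:, 1]) > 1.0).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 units, sigmoid activations throughout.
W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)

lr = 0.5
for epoch in range(2000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (gradients of mean squared error).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_h / len(X);  b1 -= lr * d_h.mean(0)

print("training accuracy:", ((out > 0.5) == (y > 0.5)).mean())
```

In the study itself, several such individually trained networks are combined into a larger network that scores the whole input pattern; the sketch only shows the single-group training step.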
Abstract:
This article investigates the role of translation and interpreting in political discourse. It illustrates discursive events in the domain of politics and the resulting discourse types, such as jointly produced texts, press conferences and speeches. It shows that methods of Critical Discourse Analysis can be used effectively to reveal translation and interpreting strategies as well as transformations that occur in recontextualisation processes across languages, cultures, and discourse domains, in particular recontextualisation in mass media. It argues that the complexity of translational activities in the field of politics has not yet received sufficient attention within Translation Studies. The article concludes by outlining a research programme for investigating political discourse in translation. © 2012 John Benjamins Publishing Company.
Abstract:
Current theoretical debates in Translation Studies define translation as an ideological act of intercultural mediation. In this view, notions such as neutrality or fidelity to the original text or to the author's intent prove untenable, challenging the traditional hierarchy between the original text and its translation. However, it is my contention that these theoretical discourses tend to disregard other hierarchical power relationships that also affect translation, placing it in a position of inferiority with respect to paratranslation (Garrido Vilariño 2005), the latter being an act of mediation that crucially determines the final presentation of the translated book in the target society. I illustrate the implications of this new hierarchy through an analysis of the ideological conflict that emerged from the translation and paratranslation of gender in two rewritings into Galician of Mark Haddon's novel The Curious Incident of the Dog in the Night-Time.
Abstract:
The analysis of the texts and paratexts of the feminist work Le Deuxième Sexe (1949, Gallimard) by the philosopher Simone de Beauvoir, along with its subsequent translations and rewritings into Spanish and English, demonstrates the determining role that ideologies play, without exception, in the professional work of translation and paratranslation, since they decisively influence the reception of the cultural (and ideological) good that is the book, both in the society in which it is created and in the societies that receive the translation.
Abstract:
This article presents the most recent historical context (1995-2005) of the translation of Galician literary texts into the British framework. It also provides an analysis of the translation and editing conditions that have had an influence on each book. At the same time, it offers a reflection on the literary relationships that take place between Galicia, a nation without a state whose literary system has not yet attained full autonomy (Antón Figueroa, 2001, p. 130), and the United Kingdom, which has a strong literary system. The aim of this article is twofold: firstly, to stress the importance of the translation of Galician literature into other literary systems, such as the British one, on the premise that it foments cultural self-confidence and an awareness of national identity, especially as regards the Galician literary and cultural system; secondly, to open new fields of research so that subsequent studies can delve into this topic in more depth.
Abstract:
Feminisms are one of those framework theories that have contributed powerfully to all areas of society, including Translation Studies. The most evident outcome of this interplay is the emergence, in the 1980s, of a Feminist Translation school in Canada, which placed gender in the spotlight. Despite criticism and subsequent redefinitions of the notion of feminist translation, the Canadian school is still generally regarded as the paradigm of interaction between feminisms and translation. The aim of this article is twofold: firstly, to advance new approaches to the practice of translation and paratranslation from a feminist perspective, within the context of a third wave of feminist translation; secondly, to open new debates by (re)examining topics of mutual interest for both Translation Studies and feminisms on a conceptual, historical and critical plane, so that subsequent studies can be fostered.
Abstract:
Saturation mutagenesis is a powerful tool in modern protein engineering: it allows key residues within a protein to be targeted and randomised, enabling the analysis of potential new properties. However, the creation of large libraries using conventional saturation mutagenesis with degenerate codons (NNN or NNK) suffers from inherent redundancy and disparities in residue representation. Here we describe the combination of ProxiMAX randomisation and CIS display for generating novel peptides. Unlike other methods, ProxiMAX randomisation does not require any intricate chemistry but simply utilises synthetic DNA and molecular biology techniques. Designed 'MAX' oligonucleotides were ligated, amplified and digested in an iterative cycle. Results show that randomised 'MAX' codons can be added sequentially to the base sequence, creating a series of randomised non-degenerate codons that can subsequently be inserted into a gene. CIS display (Isogenica, UK) is an in vitro DNA-based screening method that creates a genotype-to-phenotype link between a peptide and the nucleic acid that encodes it. The use of straightforward in vitro transcription/translation and other molecular biology techniques makes it easy to use and flexible, and hence a potent screening technique. Using ProxiMAX randomisation in combination with CIS display, the aim is to produce randomised peptides against nerve growth factor (NGF) and calcitonin gene-related peptide (CGRP) to demonstrate the high-throughput nature of this combination.
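As a back-of-the-envelope illustration of the redundancy problem described above (this sketch is not from the paper; it only uses standard genetic-code facts), one can compare residue representation in a degenerate NNK codon set with a non-degenerate, one-codon-per-residue set of the kind ProxiMAX assembles:

```python
# Minimal sketch: codon redundancy of a degenerate NNK library versus a
# hypothetical non-degenerate 'MAX' codon set (one codon per amino acid).
from collections import Counter
from itertools import product

bases = "TCAG"
aa_string = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
# Standard genetic code ('*' marks stop codons).
codon_table = {"".join(c): aa for c, aa in zip(product(bases, repeat=3), aa_string)}

# NNK codons (N = A/C/G/T, K = G/T): 32 codons covering the 20 amino acids.
nnk = ["".join(c) + k for c in product("ACGT", repeat=2) for k in "GT"]
nnk_counts = Counter(codon_table[c] for c in nnk)
print("NNK codons:", len(nnk))
print("residue representation:", dict(nnk_counts))
# L, R and S are each encoded three times, most residues only once, and TAG
# introduces a stop codon -- the redundancy and disparity ProxiMAX avoids.

# A non-degenerate set: exactly one codon per amino acid, no stop codons,
# so every residue is represented equally.
max_set = {}
for codon, aa in codon_table.items():
    if aa != "*" and aa not in max_set:
        max_set[aa] = codon
print("non-degenerate codons:", len(max_set), "-> one codon per residue")
```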
Abstract:
Full text: The idea of producing proteins from recombinant DNA hatched almost half a century ago. In his PhD thesis, Peter Lobban foresaw the prospect of inserting foreign DNA (from any source, including mammalian cells) into the genome of a λ phage in order to detect and recover protein products from Escherichia coli [1,2]. Only a few years later, in 1977, Herbert Boyer and his colleagues succeeded in the first ever expression of a peptide-coding gene in E. coli — they produced recombinant somatostatin [3], followed shortly after by human insulin.

The field has advanced enormously since those early days and today recombinant proteins have become indispensable in advancing research and development in all fields of the life sciences. Structural biology, in particular, has benefitted tremendously from recombinant protein biotechnology, and an overwhelming proportion of the entries in the Protein Data Bank (PDB) are based on heterologously expressed proteins. Nonetheless, synthesizing, purifying and stabilizing recombinant proteins can still be thoroughly challenging. For example, the soluble proteome is organized to a large part into multicomponent complexes (in humans often comprising ten or more subunits), posing critical challenges for recombinant production. A third of all proteins in cells are located in the membrane and pose special challenges that require a more bespoke approach. Recent advances may now mean that even these most recalcitrant of proteins could become tenable structural biology targets on a more routine basis. In this special issue, we examine progress in key areas that suggests this is indeed the case.

Our first contribution examines the importance of understanding quality control in the host cell during recombinant protein production, and pays particular attention to the synthesis of recombinant membrane proteins. A major challenge faced by any host cell factory is the balance it must strike between its own requirements for growth and the fact that its cellular machinery has essentially been hijacked by an expression construct. In this context, Bill and von der Haar examine emerging insights into the role of the dependent pathways of translation and protein folding in defining high-yielding recombinant membrane protein production experiments for the common prokaryotic and eukaryotic expression hosts.

Rather than acting as isolated entities, many membrane proteins form complexes to carry out their functions. To understand their biological mechanisms, it is essential to study the molecular structure of the intact membrane protein assemblies. Recombinant production of membrane protein complexes is still a formidable, at times insurmountable, challenge. In these cases, extraction from natural sources is the only option to prepare samples for structural and functional studies. Zorman and co-workers, in our second contribution, provide an overview of recent advances in the production of multi-subunit membrane protein complexes and highlight recent achievements in membrane protein structural research brought about by state-of-the-art near-atomic resolution cryo-electron microscopy techniques.

E. coli has been the dominant host cell for recombinant protein production. Nonetheless, eukaryotic expression systems, including yeasts, insect cells and mammalian cells, are increasingly gaining prominence in the field.
The yeast species Pichia pastoris is a well-established recombinant expression system for a number of applications, including the production of a range of different membrane proteins. Byrne reviews high-resolution structures that have been determined using this methylotroph as an expression host. Although it is not yet clear why P. pastoris is suited to producing such a wide range of membrane proteins, its ease of use and the availability of diverse tools that can be readily implemented in standard bioscience laboratories mean that it is likely to become an increasingly popular option in structural biology pipelines.

The contribution by Columbus concludes the membrane protein section of this volume. In her overview of post-expression strategies, Columbus surveys the four most common biochemical approaches for the structural investigation of membrane proteins. Limited proteolysis has successfully aided structure determination of membrane proteins in many cases. Deglycosylation of membrane proteins following production and purification has also facilitated membrane protein structure analysis. Moreover, chemical modifications, such as lysine methylation and cysteine alkylation, have proven their worth in facilitating crystallization of membrane proteins, as well as NMR investigations of membrane protein conformational sampling. Together these approaches have greatly facilitated the structure determination of more than 40 membrane proteins to date.

It may be an advantage to produce a target protein in mammalian cells, especially if authentic post-translational modifications such as glycosylation are required for proper activity. Chinese Hamster Ovary (CHO) cells and Human Embryonic Kidney (HEK) 293 cell lines have emerged as excellent hosts for heterologous production. The generation of stable cell lines is often an aspiration when synthesizing proteins in mammalian cells, in particular if high volumetric yields are to be achieved. In his report, Buessow surveys recent structures of proteins produced using stable mammalian cells and summarizes both well-established and novel approaches to facilitate stable cell-line generation for structural biology applications.

The ambition of many biologists is to observe a protein's structure in the native environment of the cell itself. Until recently, this seemed to be more of a dream than a reality. Advances in nuclear magnetic resonance (NMR) spectroscopy techniques, however, have now made possible the observation of mechanistic events at the molecular level of protein structure. Smith and colleagues, in an exciting contribution, review emerging ‘in-cell NMR’ techniques that demonstrate the potential to monitor biological activities by NMR in real time in native physiological environments.

A current drawback of NMR as a structure determination tool derives from size limitations of the molecule under investigation, and the structures of large proteins and their complexes are therefore typically intractable by NMR. A solution to this challenge is the use of selective isotope labeling of the target protein, which results in a marked reduction of the complexity of NMR spectra and allows dynamic processes to be investigated even in very large proteins and ribosomes. Kerfah and co-workers introduce methyl-specific isotopic labeling as a molecular tool-box, and review its applications to the solution NMR analysis of large proteins.
Tyagi and Lemke next examine single-molecule FRET and crosslinking following the co-translational incorporation of non-canonical amino acids (ncAAs); the goal here is to move beyond static snapshots of proteins and their complexes and to observe them as dynamic entities. The encoding of ncAAs through codon-suppression technology allows biomolecules to be investigated with diverse structural biology methods. In their article, Tyagi and Lemke discuss these approaches and speculate on the design of improved host organisms for ‘integrative structural biology research’.

Our volume concludes with two contributions that resolve particular bottlenecks in the protein structure determination pipeline. The contribution by Crepin and co-workers introduces the concept of polyproteins in contemporary structural biology. Polyproteins are widespread in nature: they are long polypeptide chains in which individual smaller proteins with different biological functions are covalently linked together. Highly specific proteases then tailor the polyprotein into its constituent proteins. Many viruses use polyproteins as a means of organizing their proteome. The concept of polyproteins has now been exploited successfully to produce hitherto inaccessible recombinant protein complexes. For instance, by means of a self-processing synthetic polyprotein, the influenza polymerase, a high-value drug target that had remained elusive for decades, has been produced, and its high-resolution structure determined.

In the contribution by Desmyter and co-workers, a further, often imposing, bottleneck in high-resolution protein structure determination is addressed: the requirement to form stable three-dimensional crystal lattices that diffract incident X-ray radiation to high resolution. Nanobodies have proven to be uniquely useful as crystallization chaperones, coaxing challenging targets into suitable crystal lattices. Desmyter and co-workers review the generation of nanobodies by immunization, and highlight the application of this powerful technology to the crystallography of important protein specimens including G protein-coupled receptors (GPCRs).

Recombinant protein production has come a long way since Peter Lobban's hypothesis in the late 1960s, with recombinant proteins now a dominant force in structural biology. The contributions in this volume showcase an impressive array of inventive approaches that are being developed and implemented, ever increasing the scope of recombinant technology to facilitate the determination of elusive protein structures. Powerful new methods from synthetic biology are further accelerating progress. Structure determination is now reaching into the living cell, with the ultimate goal of observing functional molecular architectures in action in their native physiological environment. We anticipate that even the most challenging protein assemblies will be tackled by recombinant technology in the near future.
Abstract:
ACM Computing Classification System (1998): I.2.8, I.2.10, I.5.1, J.2.
Abstract:
The canonical function of eEF1A is the delivery of aminoacylated tRNA to the A site of the ribosome during protein translation; however, it is also known to be an actin-binding protein. Beyond this actin-binding function, eEF1A has been shown to be involved in other cellular processes such as cell proliferation and apoptosis. It has long been thought that the actin cytoskeleton and protein synthesis are linked, and eEF1A has been suggested as a candidate protein to form this link, though very little is understood about the relationship between its two functions. Overexpression of eEF1A has also been implicated in many different types of cancer, especially metastatic cancers; it is therefore important to further understand how eEF1A can affect both translation and the organisation of the actin cytoskeleton. To this end, we aimed to determine the effects of reduced expression of eEF1A on both translation and its non-canonical functions in CHO cells. We have shown that reduced expression of eEF1A in this cell system results in no change in protein synthesis, but does result in an increased number of actin stress fibres and of proteins associated with these fibres, such as myosin IIA, paxillin and vinculin. Cell motility and attachment are also affected by this reduction in eEF1A protein expression. The organisational and motility phenotypes were found to be specific to eEF1A by transforming the cells with plasmids containing either human eEF1A1 or eEF1A2. Although the mechanisms by which these effects are regulated have not yet been established, these data provide evidence that the translation and actin-binding functions of eEF1A are independent of each other, and suggest a role for eEF1A in cell motility, consistent with the observation that overexpression of eEF1A tends to be associated with metastatic cancer cells.
Abstract:
Cells and organisms respond to nutrient deprivation by decreasing global rates of transcription, translation and DNA replication. To what extent such changes can be reversed is largely unknown. We examined the effect of maternal dietary restriction on RNA synthesis in the offspring. A low-protein diet fed either throughout gestation or for the preimplantation period alone reduced cellular RNA content across fetal somatic tissues during the challenge and increased it beyond control levels in fetal and adult tissues after release from the challenge. Changes in transcription of ribosomal RNA, the major component of cellular RNA, were responsible for this phenotype, as evidenced by matching alterations in RNA polymerase I density and DNA methylation at ribosomal DNA loci. Cellular levels of the ribosomal transcription factor Rrn3 mirrored the rRNA expression pattern. In cell culture experiments, Rrn3 overexpression reduced rDNA methylation and increased rRNA expression; the converse occurred after inhibition of Rrn3 activity. These observations define a novel mechanism whereby poor nutrition before implantation irreversibly alters basal rates of rRNA transcription thereafter, in a process mediated by rDNA methylation and the Rrn3 factor.
Abstract:
Interactions with second-language speakers in public service contexts in England are normally conducted with the assistance of one interpreter. Even in situations where team interpreting would be advisable, for example in lengthy courtroom proceedings, financial considerations mean only one interpreter is normally booked. On occasion, however, more than one interpreter, or an individual (or individuals) with knowledge of the languages in question, may be simultaneously present during an interpreted interaction, either monitoring it or indeed volunteering unsolicited input. During police interviews or trials in England this may happen when the interpreter secured by the defence team to interpret during private consultation with the suspect or defendant is also present in the interview room or the courtroom, but two independently sourced interpreters need not be limited to legal contexts. In healthcare settings, for example, service users sometimes bring friends or relatives along to help them communicate with service providers, only to find that the latter have booked an interpreter as a matter of procedure. By analogy with the nature of the English legal system, I refer to contexts where an interpreter's output is monitored and/or challenged, either during the speech event or subsequently, as ‘adversarial interpreting’. This conceptualisation reflects the fact that interpreters in such encounters are sourced independently, often by opposing parties, and as a result can rarely be considered a team. My main concern in this paper is to throw a spotlight on adversarial interpreting as a hitherto rarely discussed problem in its own right. That it is not an anomaly is evidenced by the many cases around the world where the officially recorded interpreted output was challenged, as mentioned, for example, in Berk-Seligson (2002), Hayes and Hale (2010), and Phelan (2011). This paper reports on the second stage of a research project which has previously involved the analysis of a transcript of an interpreted police interview with a suspect in a murder case. I will mention the findings of that analysis briefly and introduce some new findings based on input from practising interpreters who have shared their experience of adversarial interpreting by completing an online questionnaire. I will try to answer the question of how the presence of two interpreters, or an interpreter and a monitoring participant, in the same speech event impacts on the communication process. I will also address the issue of forensic linguistic arbitration in cases where incompetent interpreting has been identified or an expert opinion is sought in relation to an adversarial interpreting event of significance to a legal dispute.
References:
Berk-Seligson, S. (2002), The Bilingual Courtroom: Court Interpreters in the Judicial Process, University of Chicago Press.
Hayes, A. and Hale, S. (2010), "Appeals on incompetent interpreting", Journal of Judicial Administration 20.2, 119-130.
Phelan, M. (2011), "Legal Interpreters in the news in Ireland", Translation and Interpreting 3.1, 76-105.
Abstract:
Nursing Homes are an important care alternative worldwide, but Brazil still has no valid instrument to monitor the quality of care in these institutions. In the United States, the Observable Indicators of Nursing Home Care Quality Instrument (OIQ) is used to assess the quality of Nursing Home care using 30 indicators of structure (2 dimensions) and process (5 dimensions) related to person-centered quality care. The present study aimed at cross-culturally adapting the OIQ in order to evaluate the quality of Nursing Home care in Brazil. Conceptual and item equivalence were determined to assess the relevance and viability of the OIQ in the Brazilian context, using the Content Validity Index (CVI) and a group of 10 specialists directly involved with the object of study. Next, operational, idiomatic and semantic equivalence were carried out concurrently, in 5 phases: (1) two translations and (2) their respective back-translations; (3) formal appraisal of referential and general meaning; (4) review by a second group of specialists; (5) application of the pretest at three Nursing Homes by different social actors: health professionals, sanitary surveillance regulators and potential consumers. Measurement equivalence was evaluated with Cronbach's alpha to verify the internal consistency of the instrument. To measure inter-rater agreement, the General Agreement Index (ICG) and the Kappa coefficient were used. Point estimates and 95% confidence intervals for the indicators, dimensions and total construct were calculated. The CVI yielded high results for both relevance (95.3%) and viability (94.3%) in the Brazilian context. With respect to referential meaning, similarity was observed, ranging between 90-100% for the first back-translation and 70-100% for the second. In relation to general meaning, version 1 performed better, being classified as “unchanged” in 80% of the items, against only 47% for version 2. In the pretest, the OIQ was easy to understand and apply. The following outcomes were obtained: a high Cronbach's alpha (0.93), a satisfactory ICG (75%) and substantial agreement between the pairs of evaluators (health professionals, regulators from the Superintendency of Sanitary Surveillance (SUVISA) and potential consumers) according to the Kappa coefficient (0.65). Operational equivalence can be considered achieved, since the Brazilian version preserved the original layout, maintaining the mode of application, response options, number of items, statements and scores. The Nursing Homes obtained an average score of approximately 87 (range 55-111) on a scale from 30 to 150 points. The worst outcomes were related to process indicators, with a mean of 2.8 per item, while structure indicators averaged 3.75 on a scale of 1 to 5. The lowest score was obtained for the care dimension (mean 2). The Brazilian version of the OIQ was deemed a valid and reliable instrument in this context. It is recommended that health professionals, regulators and potential consumers adopt it to assess and monitor the quality of Nursing Home care and to identify opportunities for improvement.
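For readers unfamiliar with the reliability statistics reported above, the following minimal sketch (illustrative data only, not the study's dataset) shows how Cronbach's alpha for internal consistency and Cohen's kappa for inter-rater agreement are typically computed:

```python
# Minimal sketch with invented ratings: Cronbach's alpha and Cohen's kappa.
import numpy as np

def cronbach_alpha(scores):
    """scores: respondents x items matrix of ratings (e.g. on a 1-5 scale)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

def cohen_kappa(r1, r2):
    """r1, r2: category labels assigned by two evaluators to the same items."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    po = (r1 == r2).mean()                                        # observed agreement
    pe = sum((r1 == c).mean() * (r2 == c).mean() for c in cats)   # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical example: 6 respondents scoring 4 indicators on a 1-5 scale,
# and two evaluators classifying 8 items into quality categories.
ratings = [[4, 5, 4, 3], [3, 4, 4, 3], [5, 5, 4, 4],
           [2, 3, 2, 2], [4, 4, 5, 4], [3, 3, 3, 2]]
print("Cronbach's alpha:", round(cronbach_alpha(ratings), 2))
print("Cohen's kappa:", round(cohen_kappa([1, 2, 2, 3, 1, 2, 3, 3],
                                          [1, 2, 3, 3, 1, 2, 3, 2]), 2))
```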