Abstract:
Globalization has influenced all economic sectors, and the demand for translation services has increased like never before. The videogame industry has become a worldwide phenomenon worth billions. Many people around the globe, male and female, children and adults alike, choose this leisure activity and enjoy it as they would reading or watching a film. It is a global phenomenon capable of producing as much revenue and anticipation as the film industry. Most games are developed in Japanese or English, and the new global market requires this product to be translated into many other languages. This scenario has brought about a new field of specialization in translation studies, commonly known as videogame localization. The emergence of this new field calls not only for a review of translation studies, but also for a shift in the role that some translators and translated products are expected to play within a globalized world. The aim of this dissertation is to provide an overview of videogame localization and its challenges under the guidance of a professional translator, Alexander O. Smith, who agreed to provide counsel through several Skype interviews. This provided first-hand insight into how translation decisions are carried out by game translators. Alexander O. Smith is a former translator for Square Enix, one of the biggest Japanese videogame development, publishing and distribution companies in the market. He now works as an independent translator, and in 2003 he founded the localization agency Kajiya Productions with his friend and fellow translator Joseph Reeder. Together with Alexander O. Smith, the twelfth installment of the Final Fantasy series by Square Enix was chosen as a telling example of the issues and challenges brought on by videogame localization. The game proved to be one of the most fun, challenging and rewarding professional experiences of Alexander O. Smith's career.
Abstract:
The present research study focuses on intercultural communication and how its dynamics are portrayed in the Italian version of the movie L’appartamento spagnolo (original title: L’auberge espagnol) by Cédric Klapisch. The first chapter introduces the movie in all its main features, such as plot, setting, characters, languages, main themes, and sequels. The second chapter focuses on the dynamics of intercultural communication through the analysis of the most representative scenes of the movie. It is worth noting that the notion of intercultural communication comprises many different kinds of communication: not only communication among people who come from different countries and speak different languages, but also among different generations, people with different social backgrounds, people with a different social status, and so on. However, language is crucial to mutual understanding and plays a fundamental role in communication. For this reason, the third chapter focuses on the multilingual dimension of the movie, since the issue of intercultural communication is also conveyed through a variety of languages. The aim is to analyze the different strategies used in the Italian dubbed version to manage the presence of different languages, and to examine how such strategies affect the overall consistency of the dialogues and the effect achieved on the audience.
Abstract:
When salmonid fish that have been raised in hatcheries spawn in the wild, they often produce fewer surviving adult offspring than wild fish. Recent data from steelhead (Oncorhynchus mykiss) in the Hood River (Oregon, USA) show that even one or two generations of hatchery culture can result in dramatic declines in fitness. Although intense domestication selection could cause such declines, it is worth considering alternative explanations. One possibility is heritable epigenetic changes induced by the hatchery environment. Here, we show, using methylation-sensitive amplified fragment length polymorphism, that hatchery and wild adult steelhead from the Hood River do not appear to differ substantially in overall levels of genomic methylation. Thus, although altered methylation of specific DNA sites or other epigenetic processes could still be important, the hatchery environment does not appear to cause a global hypo- or hypermethylation of the genome or create a large number of sites that are differentially methylated.
Abstract:
It is by now a banal observation that published collections of conference papers tend to add up to a whole that is considerably less than the sum of the parts. Nineteenth-Century Geographies, a book that grew out of an interdisciplinary conference held at Rice University in 1998, falls into this category. While assuring my readers that each individual contribution is independently worth a read is likewise a predictable cliché, it is in fact the case that every one of the 17 articles collected here—notwithstanding the rather convoluted Introduction—has much to offer the study, broadly speaking, of ‘cultural spaces’ of British and American imperialisms in the nineteenth century. . . . All of my complaints aside, this turns out to be a much more enjoyable book to read than to review, and I would recommend skimming and dipping at length. I cannot quite imagine when a read-through of this book might be called for, except perhaps in graduate seminars on related topics.
Abstract:
This article offers an analysis of a struggle for control of a women’s development project in Nepal. The story of this struggle is worth telling, for it is rife with the gender politics and neo-colonial context that underscore much of what goes on in contemporary Nepal. In particular, my analysis helps to unravel some of the powerful discourses, threads of interest, and as-yet unintended effects inevitable under a regime of development aid. The analysis demonstrates that the employment of the already available discursive figures of the imperialist feminist and the patriarchal third-world man is central to the rhetorical strategies taken in the conflict. I argue that the trans-discursive or “borderland” nature of development in general, and of women’s development in particular, results in different constructions of “development” goals, means and actors, based not only on divergent cultural categories but on historically specific cultural politics. I argue further that the apolitical discourse of development serves to cloak its inherently political project of social and economic transformation, making conflicts such as the one that occurred in this case not only likely to occur but also likely to be misunderstood.
Abstract:
Self-control is a prerequisite for complex cognitive processes such as cooperation and planning. As such, comparative studies of self-control may help elucidate the evolutionary origin of these capacities. A variety of methods have been developed to test for self-control in non-human primates, most of which involve some variation of foregoing an immediate reward in order to gain a more favorable reward. We used a token exchange paradigm to test for self-control in capuchin monkeys (Cebus apella). Animals were trained that particular tokens could be exchanged for food items of different values. To test for self-control, a monkey was provided with a token that was associated with a lower-value food. When the monkey exchanged the token, the experimenter provided the monkey with a choice between the lower-value food item associated with the token or another token that was associated with a higher-value food. If the monkey chose the token, it could then exchange it for the higher-value food. Of seven monkeys trained to exchange tokens, five demonstrated that they attributed value to the tokens by differentially selecting tokens for higher-value foods over tokens for lower-value foods. When provided with a choice between a food item and a token for a higher-value food, two monkeys selected the token significantly more often than expected by chance. The ability of capuchin monkeys to forego an immediate food reward and select a token that could then be traded for a more preferred food demonstrated some degree of self-control. Thus, the results suggest that a token exchange paradigm could be a successful technique for assessing self-control in this New World species.
Abstract:
Peptide hormones of the glucagon-like peptide (GLP) family play an increasing clinical role, such as GLP-1 in diabetes therapy. Moreover, GLP receptors are overexpressed in various human tumor types and therefore represent molecular targets for important clinical applications. In particular, virtually all benign insulinomas highly overexpress GLP-1 receptors (GLP-1R). Targeting GLP-1R with the stable GLP-1 analogs (111)In-DOTA/DPTA-exendin-4 offers a new approach to successfully localize these small tumors. This non-invasive technique has the potential to replace the invasive localization of insulinomas by selective arterial stimulation and venous sampling. Malignant insulinomas, in contrast to their benign counterparts, express GLP-1R in only one-third of cases, while they more often express somatostatin type 2 receptors. Importantly, one of the two receptors appears always to be expressed in malignant insulinomas. The GLP-1R overexpression in selected cancers is worth keeping in mind with regard to the increasing use of GLP-1 analogs in diabetes therapy. While the functional role of GLP-1R in neoplasia is not yet known, it may be wise to monitor patients undergoing GLP-1 therapy carefully.
Abstract:
Twenty-five years have passed since the first randomised controlled trial of screening for abdominal aortic aneurysm (AAA) in men aged 65 and above began its recruitment. Since this and the other randomised trials, all launched in the late 1980s and 1990s, the epidemiologic profile of abdominal aortic aneurysm may have changed. The trials reported an AAA prevalence in the range of 4-7% for men aged 65 years or more. AAA-related mortality was significantly reduced by screening, and after 13 years the largest trial showed a benefit for all-cause mortality. Screening was also shown to be cost-effective. Today, there are studies showing a substantial decrease in AAA prevalence, sometimes to less than 2% in men aged ≥ 65 years, and there is evidence that the incidence of ruptured aneurysm and mortality from AAA are also declining. This decline preceded the implementation of screening programmes but may be due to a change in risk factor management: the prevalence of smoking has decreased, the control of hypertension has improved, and the use of statins for cardiovascular risk prevention is rising. Additionally, there is a shift of the burden to the older age group of ≥ 75 years. Such radical changes may influence screening policy, and it is worth reflecting on the optimum age of screening (it might be better to screen at ages >65 years) or on rescreening 5 to 10 years after the first screen.
Abstract:
In this thesis, I will document and analyze historical aspects of the British debate over adopting a common currency with the European Community, primarily during the last half of the twentieth century until the present. More specifically, while on the surface such a decision would seem to turn on economic or political considerations, I will show that this historic British decision not to surrender the pound sterling in exchange for the euro was rooted in the nation's cultural identity. During this decades-long British debate over the euro, two opposing but strongly held positions developed: one side believed that Britain had a compelling interest in bonding with the rest of Europe economically as well as politically; the other side believed that Britain's independent heritage was deeply rooted in many of its traditions, including maintaining control of its own monetary matters, which meant keeping its pound sterling. As part of this thesis, I have conducted interviews with business leaders, economists, and social scientists, as well as researched public records, in order to assess many of the arguments favoring and opposing Britain's adoption of the euro. Many Britons strongly believed that it was time to join other Europeans, who were willing to sacrifice their sovereign currencies to a bold common currency experiment, while other Britons viewed the pound sterling as too integral a part of British heritage to abandon. Ultimately, British leaders and citizens had to determine whether such a currency tradeoff would be worth it to them as a nation. It was a gamble that twelve other nations (at the time of the euro's 2002 launch) were ready to take, optimistically calculating that easier credit and reduced exchange transaction costs would lead to greater economic prosperity. Many asserted that only with such a united European monetary coalition would Europe's nations be able to compete trade-wise with powerful economic nations like the United States and China.
My conclusion is that Britain's refusal to join the euro was a decision that had less to do with economic opportunity or political motivations and much more to do with how the British people viewed themselves culturally and their identity as an independent nation.
Abstract:
The response of some Argentine workers to the 2001 crisis of neoliberalism gave rise to a movement of worker-recovered enterprises (empresas recuperadas por sus trabajadores, or ERTs). The ERTs emerged as former employees took over control of generally fraudulently bankrupted factories and enterprises. The analysis of the ERT movement within the neoliberal global capitalist order draws on William Robinson’s (2004) neo-Gramscian concept of hegemony. This theoretical framework is used to expose the contradictions of capitalism on the global, national, organizational and individual scales and the effects they have on the ERT movement. The ERT movement has demonstrated a strong level of resilience, despite the numerous economic, social, political and cultural challenges and limitations it faces as a consequence of the global implementation of neoliberalism. ERTs have shown that through non-violent protest, democratic principles of management and social inclusion, it is possible to start constructing an alternative social order based on the cooperative principles of “honesty, openness, social responsibility and caring for others” (ICA 2007), as opposed to secrecy, exclusiveness, individualism and self-interest. In order to realize this “utopian” vision, it is essential to push the limits of the possible within the current social order and to broaden the alliance to include both the organized members of the working class, such as the members of trade unions, and the unorganized, such as the unemployed and underemployed. Though marginal in number and size, the members of ERTs have given rise to a model that is worth exploring in other countries and regions burdened by the contradictory workings of capitalism. Today, ERTs serve as living proof that workers, too, and not capitalists alone, are capable of successfully running businesses.
Abstract:
The primary objective of this thesis is to demonstrate the pernicious impact that moral hierarchies have on our perception and subsequent treatment of non-human animals. Moral hierarchies in general are characterized by a dynamic in which one group is considered to be fundamentally superior to a lesser group. This thesis focuses specifically on the moral hierarchies that arise when humans are assumed to be superior to non-human animals in virtue of their advanced mental capabilities. The operative hypothesis of this thesis is essentially that moral hierarchies thwart the provision of justice to non-human animals in that they function as a justification for otherwise impermissible actions. When humans are assumed to be fundamentally superior to non-human animals, it becomes morally permissible for humans to kill non-human animals and utilize them as mere instrumentalities. This thesis is driven primarily by an in-depth analysis of the approaches to animal rights provided by Peter Singer, Tom Regan, and Gary Francione. Each of these thinkers claims to overcome anthropocentrism and to provide an approach that precludes the establishment of a moral hierarchy. One of the major findings of this thesis, however, is that Singer and Regan offer approaches that remain highly anthropocentric despite their claims to the contrary. The anthropocentrism persists in these approaches in that each thinker gives preference to humans: Regan and Singer have different conceptions of the criteria required to afford a being moral worth, but both privilege beings that have the cognitive ability to form desires regarding the future. As a result, a moral hierarchy emerges in which humans are regarded as fundamentally superior. Francione, however, provides an approach that does not foster a moral hierarchy.
Francione creates such an approach by applying the principle of equal consideration of interests in a consistent manner. Moreover, Francione argues that mere sentience is both a necessary and a sufficient condition for being eligible for, and subsequently receiving, moral consideration. The upshot of this thesis is essentially that the moral treatment of animals is not compatible with the presence of a moral hierarchy. As a result, this thesis demonstrates that future approaches to animal rights must avoid the establishment of moral hierarchies. The research and analysis within this thesis demonstrate that this is not possible, however, unless all theories of justice that are to accommodate animals abandon the notion that cognition matters morally.
Abstract:
In recent history, there has been a trend of increasing partisan polarization throughout most of the American political system. Some of the impacts of this polarization are obvious; however, there is reason to believe that we miss some of the indirect effects of polarization. Accompanying the trend of increased polarization has been an increase in the contentiousness of the Supreme Court confirmation process. I believe that these two trends are related. Furthermore, I argue that these trends have an impact on judicial behavior. This is an issue worth exploring, since the Supreme Court is the most isolated branch of the federal government. The Constitution structured the Supreme Court to ensure that it was as isolated as possible from short-term political pressures and interests. This study attempts to show how it may be possible that those goals are no longer being fully achieved. My first hypothesis in this study is that increases in partisan polarization are a direct cause of the increase in the level of contention during the confirmation process. I then hypothesize that the more contention a justice faces during his or her confirmation process, the more ideologically extreme that justice will then vote on the bench. This means that a nominee appointed by a Republican president will tend to vote even more conservatively than was anticipated following a contentious confirmation process, and vice versa for Democratic appointees. In order to test these hypotheses, I developed a data set for every Supreme Court nominee dating back to President Franklin D. Roosevelt's appointments (1937). With this data set, I ran a series of regression models to analyze these relationships. Statistically speaking, the results support my first hypothesis in a fairly robust manner. My regression results for my second hypothesis indicate that the trend I am looking for is present for Republican nominees. For Democratic nominees, the impacts are less robust.
Nonetheless, as the results will show, contention during the confirmation process does seem to have some impact on judicial behavior. Following my quantitative analysis, I analyze a series of case studies. These case studies serve to provide tangible examples of these statistical trends as well as to explore what else may be going on during the confirmation process and subsequent judicial decision-making. I use Justices Stevens, Rehnquist, and Alito as the subjects for these case studies. These cases will show that the trends described above do seem to be identifiable at the level of an individual case. These studies further help to indicate other potential impacts on judicial behavior. For example, following Justice Rehnquist's move from Associate to Chief Justice, we see a marked change in his behavior. Overall, this study serves as a means of analyzing some of the more indirect impacts of partisan polarization in modern politics. Further, the study offers a means of exploring some of the possible constraints (both conscious and subconscious) that Supreme Court justices may feel while they decide how to cast a vote in a particular case. Given the wide-reaching implications of Supreme Court decisions, it is important to try to grasp a full view of how these decisions are made.
Abstract:
Conflict has marked civilization from Biblical times to the present day. Each of us, with our different and competing interests, and our desires to pursue those interests, has over time wronged another person. Not surprisingly, then, forgiveness is a concern of individuals and groups (communities, countries, religious groups, races), yet it is a complex idea that philosophers, theologians, political scientists, and psychologists have grappled with. Some have argued that forgiveness is a therapeutic means for overcoming guilt, pain, and anger. Forgiveness is often portrayed as a coping mechanism: how often we hear the phrase "forgive and forget" as an arrangement to help two parties surmount the complications of disagreement. But forgiveness is not simply a modus vivendi; the ability to forgive, and conversely to ask for forgiveness, is counted as an admirable trait and virtue. This essay will explore the nature of forgiveness, which in Christian dogma is often posited as an unqualified virtue. The secular world has appropriated the Christian notion of forgiveness as such a virtue, but are there instances wherein offering forgiveness is morally inappropriate or dangerous? I will consider the situations in which forgiveness, understood in this essay as the overcoming of resentment, may not be a virtue: when perhaps maintaining resentment is as virtuous, if not more virtuous, than forgiving. I will explain the various ethical frameworks involved in understanding forgiveness as a virtue, and the relationship between them. I will argue that within Divine Command Theory forgiveness is a virtue, and thus morally right, because God commands it. This ethical system has established forgiveness as unconditional, an idea which has been adopted into popular culture.
With virtue ethics in mind, which holds virtues to be those traits which benefit the person who possesses them, contributing to the good life, I will argue that unqualified forgiveness is not always a virtue, as it will not always benefit the victim. Because there is no way to avoid wrongdoing, humans are confronted with the question of forgiveness with every indiscretion. Its limits, its possibilities, its relationship to one's character: forgiveness is a concern of all people at some time, if for no other reason than the plain fact that the past cannot be undone. I will be evaluating the idea of forgiveness as a virtue, in contrast to its counterpart, resentment. How can forgiveness be a response to evil, a way to renounce resentment, and a means of creating a positive self-narrative? And what happens when a sense of moral responsibility is impossible to reconcile with the Christian (and now secularized) imperative of forgiveness? Is it ever not virtuous to forgive? In an attempt to answer that question I will argue that there are indeed times when forgiveness is not a virtue, specifically: when forgiveness compromises one's own self-respect; when it is not compatible with respect for the moral community; and when the offender is unapologetic. The kind of offense I have in mind is a dehumanizing one, one that intends to diminish another person's worth or humanity. These are moral injuries, to which I will argue resentment is a better response than forgiveness when the three qualifications cannot be met.
Abstract:
Taking the three basic systems of Yes/No particles, the group looked at the relative deep and surface structures and asked what types of systems are present in the Georgian, Polish and Armenian languages. The choice of languages was of particular interest because the Caucasian and Indo-European languages usually have different question-answering systems, yet Georgian (Caucasian) and Polish (Indo-European) in fact share the same system. The Armenian language is Indo-European, but the country is situated in the southern Caucasus, on Georgia's southern border, making it worth analysing Armenian in comparison with Georgian (from the point of view of language interference) and with Polish (as two related languages). The group identified two different deep structures, tracing the occurrence of these in different languages, and showed that one is more natural in the majority of languages. They found no correspondence between related languages and their question-answering systems and demonstrated that languages in the same typological class may show different systems, as with Georgian and the North Caucasian languages. It became clear that Georgian, Armenian and Polish all have an agree/disagree question-answering system defined by the same deep structure. From this they conclude that the lingual mentalities of Georgians, Armenians and Poles are more oriented to the communicative act. At the same time, the Yes/No system, in which a positive particle stands for a positive answer and a negative particle for a negative answer, also functions in these languages, indicating that the second deep structure identified functions alongside the first.
Abstract:
Mr. Kubon's project was inspired by the growing need for an automatic syntactic analyser (parser) of Czech, which could be used in the syntactic processing of large amounts of text. Mr. Kubon notes that such a tool would be very useful, especially in the field of corpus linguistics, where creating a large-scale "tree bank" (a collection of syntactic representations of natural language sentences) is a very important step towards the investigation of the properties of a given language. The work involved in syntactically parsing a whole corpus in order to get a representative set of syntactic structures would be almost inconceivable without the help of some kind of robust (semi)automatic parser. The need for the automatic natural language parser to be robust increases with the size of the linguistic data in the corpus or in any other kind of text which is going to be parsed. Practical experience shows that apart from syntactically correct sentences, there are many sentences which contain a "real" grammatical error. These sentences may be corrected in small-scale texts, but not generally in a whole corpus. In order to complete the overall project, it was necessary to address a number of smaller problems. These were: 1. the adaptation of a suitable formalism able to describe the formal grammar of the system; 2. the definition of the structure of the system's dictionary containing all relevant lexico-syntactic information, and the development of a formal grammar able to robustly parse Czech sentences from the test suite; 3. filling the syntactic dictionary with sample data allowing the system to be tested and debugged during its development (about 1000 words); 4. the development of a set of sample sentences containing a reasonable amount of grammatical and ungrammatical phenomena covering some of the most typical syntactic constructions used in Czech. Building the formal grammar (part of task 2) was the main task of the project.
The grammar is of course far from complete (Mr. Kubon notes that it is debatable whether any formal grammar describing a natural language may ever be complete), but it covers the most frequent syntactic phenomena, allowing for the representation of the syntactic structure of simple clauses and also the structure of certain types of complex sentences. The stress was not so much on building a wide-coverage grammar as on the description and demonstration of a method. This method uses an approach similar to that of grammar-based grammar checking. The problem of reconstructing the "correct" form of the syntactic representation of a sentence is closely related to the problem of the localisation and identification of syntactic errors. Without precise knowledge of the nature and location of syntactic errors it is not possible to build a reliable estimation of a "correct" syntactic tree. The incremental way of building the grammar used in this project is also an important methodological issue. Experience from previous projects showed that building a grammar by creating a huge block of metarules is more complicated than the incremental method, which begins with the metarules covering the most common syntactic phenomena and adds less important ones later; this is especially valuable from the point of view of testing and debugging the grammar. The sample syntactic dictionary containing lexico-syntactic information (task 3) now has slightly more than 1000 lexical items representing all classes of words. During the creation of the dictionary it turned out that the task of assigning complete and correct lexico-syntactic information to verbs is a very complicated and time-consuming process which would itself be worth a separate project. The final task undertaken in this project was the development of a method allowing effective testing and debugging of the grammar during the process of its development.
The problem of the consistency of new and modified rules of the formal grammar with the rules already existing is one of the crucial problems of every project aiming at the development of a large-scale formal grammar of a natural language. This method allows for the detection of any discrepancy or inconsistency of the grammar with respect to a test-bed of sentences containing all syntactic phenomena covered by the grammar. This is not only the first robust parser of Czech, but also one of the first robust parsers of a Slavic language. Since Slavic languages display a wide range of common features, it is reasonable to claim that this system may serve as a pattern for similar systems in other languages. To transfer the system into any other language it is only necessary to revise the grammar and to change the data contained in the dictionary (but not necessarily the structure of primary lexico-syntactic information). The formalism and methods used in this project can be used in other Slavic languages without substantial changes.