36 results for Human Language Technologies (HLT)
Abstract:
International Perspective
The development of GM technology continues to expand into increasing numbers of crops and conferred traits. Inevitably, the focus remains on the major field crops of soybean, maize, cotton, oilseed rape and potato, with introduced genes conferring herbicide tolerance and/or pest resistance. Although comparatively few GM crops have been commercialised to date, GM versions of 172 plant species have been grown in field trials in 31 countries.

European Crops with Containment Issues
Of the 20 main crops in the EU, there are four for which GM varieties are commercially available (cotton, maize for animal feed and forage, and oilseed rape). Fourteen have GM varieties in field trials (bread wheat, barley, durum wheat, sunflower, oats, potatoes, sugar beet, grapes, alfalfa, olives, field peas, clover, apples, rice) and two have GM varieties still in development (rye, triticale). Many of these crops have hybridisation potential with wild and weedy relatives in the European flora (bread wheat, barley, oilseed rape, durum wheat, oats, sugar beet and grapes) or with escapes (sunflower), and all have the potential to cross-pollinate fields of non-GM crops. Several fodder crops, forestry trees, grasses and ornamentals have varieties in field trials, and these too may hybridise with wild relatives in the European flora (alfalfa, clover, lupin, silver birch, sweet chestnut, Norway spruce, Scots pine, poplar, elm, Agrostis canina, A. stolonifera, Festuca arundinacea, Lolium perenne, L. multiflorum, statice and rose). All these crops will require containment strategies to be in place if it is deemed necessary to prevent transgene movement to wild relatives and non-GM crops.

Current Containment Strategies
A wide variety of GM containment strategies are currently under development, with a particular focus on crops expressing pharmaceutical products.
Physical containment in greenhouses and growth rooms is suitable for some crops (tomatoes, lettuce) and for research purposes. Aquatic bioreactors of some non-crop species (algae, moss, and duckweed) expressing pharmaceutical products have been adopted by some biotechnology companies. There are obvious limitations to the scale of physical containment strategies, addressed in part by the development of large underground facilities in the US and Canada. The additional resources required to grow plants underground incur high costs that in the long term may negate any advantage of GM for commercial production. Natural genetic containment has been adopted by some companies through the selection of either non-food/feed crops (algae, moss, duckweed) as bio-pharming platforms or organisms with no wild relatives present in the local flora (safflower in the Americas). The expression of pharmaceutical products in leafy crops (tobacco, alfalfa, lettuce, spinach) enables growth and harvesting prior to and in the absence of flowering. Transgenically controlled containment strategies range in their approach and degree of development. Plastid transformation is relatively well developed but is not suited to all traits or crops and does not offer complete containment. Male sterility is well developed across a range of plants but has limitations in its application for fruit/seed-bearing crops. It has been adopted in some commercial lines of oilseed rape despite not preventing escape via seed. Conditional lethality can be used to prevent flowering or seed development following the application of a chemical inducer, but requires 100% induction of the trait and sufficient application of the inducer to all plants. Inducible expression of the GM trait requires equally stringent application conditions. Such a method will contain the trait but will allow the escape of a non-functioning transgene.
Seed lethality (‘terminator’ technology) is the only strategy at present that prevents transgene movement via seed, but due to public opinion against the concept it has never been trialled in the field and is no longer under commercial development. Methods to control flowering and fruit development such as apomixis and cleistogamy will prevent crop-to-wild and wild-to-crop pollination, but in nature both of these strategies are complex and leaky. None of the genes controlling these traits have as yet been identified or characterised and therefore have not been transgenically introduced into crop species. Neither of these strategies will prevent transgene escape via seed and any feral apomicts that form are arguably more likely to become invasives. Transgene mitigation reduces the fitness of initial hybrids and so prevents stable introgression of transgenes into wild populations. However, it does not prevent initial formation of hybrids or spread to non-GM crops. Such strategies could be detrimental to wild populations and have not yet been demonstrated in the field. Similarly, auxotrophy prevents persistence of escapes and hybrids containing the transgene in an uncontrolled environment, but does not prevent transgene movement from the crop. Recoverable block of function, intein trans-splicing and transgene excision all use recombinases to modify the transgene in planta either to induce expression or to prevent it. All require optimal conditions and 100% accuracy to function and none have been tested under field conditions as yet. All will contain the GM trait but all will allow some non-native DNA to escape to wild populations or to non-GM crops. There are particular issues with GM trees and grasses as both are largely undomesticated, wind pollinated and perennial, thus providing many opportunities for hybridisation. Some species of both trees and grass are also capable of vegetative propagation without sexual reproduction. 
There are additional concerns regarding the weedy nature of many grass species and the long-term stability of GM traits across the life span of trees. Transgene stability and conferred sterility are difficult to trial in trees as most field trials are only conducted during the juvenile phase of tree growth.

Bio-pharming of Pharmaceutical and Industrial Compounds in Plants
Bio-pharming of pharmaceutical and industrial compounds in plants offers an attractive alternative to mammalian-based pharmaceutical and vaccine production. Several plant-based products are already on the market (ProdiGene's avidin, β-glucuronidase and trypsin generated in GM maize; Ventria's lactoferrin generated in GM rice). Numerous products are in clinical trials (collagen, antibodies against tooth decay and non-Hodgkin's lymphoma from tobacco; human gastric lipase, therapeutic enzymes, dietary supplements from maize; Hepatitis B and Norwalk virus vaccines from potato; rabies vaccines from spinach; dietary supplements from Arabidopsis). The initial production platforms for plant-based pharmaceuticals were selected from conventional crops, largely because an established knowledge base already existed. Tobacco and other leafy crops such as alfalfa, lettuce and spinach are widely used, as leaves can be harvested and no flowering is required. Many of these crops can be grown in contained greenhouses. Potato is also widely used and can also be grown in contained conditions. The introduction of morphological markers may aid in the recognition and traceability of crops expressing pharmaceutical products. Plant cells or plant parts may be transformed and maintained in culture to produce recombinant products in a contained environment. Plant cells in suspension or in vitro, roots, root cells and guttation fluid from leaves may be engineered to secrete proteins that may be harvested in a continuous, non-destructive manner.
Most strategies in this category remain developmental and have not been commercially adopted at present. Transient expression produces GM compounds from non-GM plants via the utilisation of bacterial or viral vectors. These vectors introduce the trait into specific tissues of whole plants or plant parts, but do not insert them into the heritable genome. There are some limitations of scale and the field release of such crops will require the regulation of the vector. However, several companies have several transiently expressed products in clinical and pre-clinical trials from crops raised in physical containment.
Abstract:
Human-like computer interaction systems require far more than simple speech input/output. Such a system should communicate with the user verbally, using a conversational style of language. It should be aware of its surroundings and use this context in any decisions it makes. As a synthetic character, it should have a computer-generated, human-like appearance, which should in turn be used to convey emotions, expressions and gestures. Finally, and perhaps most important of all, the system should interact with the user in real time, in a fluent and believable manner.
Abstract:
Countries throughout the sub-Saharan Africa (SSA) region have a complex linguistic heritage with its origins in the opportunistic boundary changes effected by Western colonial powers at the Berlin Conference of 1884-85. Postcolonial language-in-education policies valorizing ex-colonial languages have contributed at least in part to underachievement in education and thus to the underdevelopment of human resources in SSA countries. This situation is not likely to improve whilst unresolved questions concerning the choice of language(s) that would best support social and economic development remain. Whilst policy attempts to develop local languages have been discussed within the framework of the African Union, and some countries have experimented with models of multilingual education during the past decade, the goalposts have already changed as a result of migration and trade. This paper argues that language policy makers need to be cognizant of changing language ecologies and their relationship with emerging linguistic and economic markets. The concept of language, within such a framework, has to be viewed in relation to the multiplicity of language markets within the shifting landscapes of people, culture, economics and the geo-politics of the 21st century. Whilst, on the one hand, this refers to the hegemony of dominant powerful languages and the social relations of disempowerment, on the other hand, it also refers to existing and evolving social spaces and local language capabilities and choices. Within this framework the article argues that socially constructed dominant macro language markets need to be viewed also in relation to other, self-defined, community meso- and individual micro-language markets and their possibilities for social, economic and political development.
It is through pursuing this argument that this article assesses the validity of Omoniyi's argument in this volume for the need to focus on the concept of language capital within multilingual contexts in the SSA region, as compared with Bourdieu's concept of linguistic capital.
Abstract:
The quality of a country’s human-resource base can be said to determine its level of success in social and economic development. This study focuses on some of the major human-resource development issues that surround the implementation of South Africa’s policy of multilingualism in education. It begins by discussing the relationship between knowledge, language, and human-resource, social and economic development within the global cultural economy. It then considers the situation in South Africa and, in particular, the implications of that country’s colonial and neo-colonial past for attempts to implement the new policy. Drawing on the linguistic-diversity-in-education debate in the United Kingdom of the past three decades, it assesses the first phase of an in-service teacher-education programme that was carried out at the Project for Alternative Education in South Africa (PRAESA) based at the University of Cape Town. The authors identify key short- and long-term issues related to knowledge exchange in education in multilingual societies, especially concerning the use of African languages as mediums for teaching and learning.
Abstract:
In this study two new measures of lexical diversity are tested for the first time on French. The usefulness of these measures, MTLD (McCarthy and Jarvis 2010, and this volume) and HD-D (McCarthy and Jarvis 2007), in predicting different aspects of language proficiency is assessed and compared with D (Malvern and Richards 1997; Malvern, Richards, Chipere and Durán 2004) and Maas (1972) in analyses of stories told by two groups of learners (n=41) of two different proficiency levels and one group of native speakers of French (n=23). The importance of careful lemmatization in studies of lexical diversity which involve highly inflected languages is also demonstrated. The paper shows that the measures of lexical diversity under study are valid proxies for language ability in that they explain up to 62 percent of the variance in French C-test scores, and up to 33 percent of the variance in a measure of complexity. The paper also provides evidence that dependence on segment size continues to be a problem for the measures of lexical diversity discussed in this paper. The paper concludes that limiting the range of text lengths, or even keeping text length constant, is the safest option in analysing lexical diversity.
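The MTLD measure discussed above can be sketched as follows: the text is traversed token by token, and a "factor" is counted each time the running type–token ratio of the current segment falls to the conventional 0.72 threshold; MTLD is the token count divided by the (bidirectional, averaged) factor count. This is a minimal illustrative Python sketch, not the authors' implementation, and it assumes tokens have already been tokenised and lemmatised (the careful lemmatisation the abstract stresses for French):

```python
def mtld_one_pass(tokens, threshold=0.72):
    """One directional pass of MTLD (McCarthy & Jarvis 2010, sketched)."""
    factors = 0.0
    types = set()
    segment_len = 0
    for tok in tokens:
        segment_len += 1
        types.add(tok)
        ttr = len(types) / segment_len
        if ttr <= threshold:          # segment's diversity exhausted
            factors += 1
            types.clear()
            segment_len = 0
    if segment_len > 0:               # partial credit for the remainder
        ttr = len(types) / segment_len
        factors += (1 - ttr) / (1 - threshold)
    return len(tokens) / factors if factors > 0 else float(len(tokens))

def mtld(tokens, threshold=0.72):
    """Average of a forward and a backward pass, as in the original method."""
    forward = mtld_one_pass(tokens, threshold)
    backward = mtld_one_pass(list(reversed(tokens)), threshold)
    return (forward + backward) / 2
```

A maximally repetitive text (one type repeated) yields a very low MTLD, while a lexically varied text of the same length yields a much higher one, which is the behaviour the measure is designed to capture.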
Abstract:
Experiments demonstrating human enhancement through the implantation of technology in healthy humans have been performed for over a decade by some academic research groups. More recently, technology enthusiasts have begun to realize the potential of implantable technology such as glass-capsule RFID transponders. In this paper it is argued that implantable RFID devices have evolved to the point where we should consider the devices themselves as simple computers. Presented here is the infection, with a computer virus, of an RFID device implanted in a human. Coupled with our developing concept of what constitutes the human body and its boundaries, it is argued that this study has given rise to the world's first human infected with a computer virus. It has taken the wider academic community some time to agree that meaningful discourse on the topic of implantable technology is of value. As developments in medical technologies point to greater possibilities for enhancement, this shift in thinking is not too soon in coming.
Abstract:
Sri Lanka's participation rates in higher education are low and have risen only slightly in the last few decades; the number of places for higher education in the state university system caters for only around 3% of the university-entrant age cohort. The literature reveals that the highly competitive global knowledge economy increasingly favours workers with high levels of education who are also lifelong learners. This lack of access to higher education for a sizable proportion of the labour force is identified as a severe impediment to Sri Lanka's competitiveness in the global knowledge economy. The literature also suggests that Information and Communication Technologies are increasingly relied upon in many contexts in order to deliver flexible learning, catering especially to the needs of lifelong learners in today's higher educational landscape. The government of Sri Lanka invested heavily in ICTs for distance education during the period 2003-2009 in a bid to increase access to higher education, but there has been little research into the impact of this. To address this gap, this study investigated the impact of ICTs on distance education in Sri Lanka with respect to increasing access to higher education. In order to achieve this aim, the research focused on Sri Lanka's effort from three perspectives: the policy perspective, the implementation perspective and the user perspective. A multiple case study using an ethnographic approach was conducted to observe Orange Valley University's and Yellow Fields University's (pseudonymous) implementation of distance education programmes, using questionnaires, qualitative interviewing and document analysis. In total, data for the analysis was collected from 129 questionnaires, 33 individual interviews and 2 group interviews. The research revealed that ICTs have indeed increased opportunities for higher education, but mainly for people from affluent families in the Western Province.
Issues identified were categorized under the themes of quality assurance, location, language, digital literacies and access to resources. Recommendations were offered to tackle the identified issues in accordance with the study findings. The study also revealed the strong presence of a multifaceted digital divide in the country. In conclusion, this research has shown that although ICT-enabled distance education has the potential to increase access to higher education, the present implementation of the system in Sri Lanka has been less than successful.
Abstract:
It is now established that native language affects one's perception of the world. However, it is unknown whether this effect is merely driven by conscious, language-based evaluation of the environment or whether it reflects fundamental differences in perceptual processing between individuals speaking different languages. Using brain potentials, we demonstrate that the existence in Greek of 2 color terms—ghalazio and ble—distinguishing light and dark blue leads to greater and faster perceptual discrimination of these colors in native speakers of Greek than in native speakers of English. The visual mismatch negativity, an index of automatic and preattentive change detection, was similar for blue and green deviant stimuli during a color oddball detection task in English participants, but it was significantly larger for blue than green deviant stimuli in native speakers of Greek. These findings establish an implicit effect of language-specific terminology on human color perception.
Abstract:
Most prominent models of bilingual representation assume a degree of interconnection or shared representation at the conceptual level. However, in the context of linguistic and cultural specificity of human concepts, and given recent findings that reveal a considerable amount of bidirectional conceptual transfer and conceptual change in bilinguals, a particular challenge that bilingual models face is to account for non-equivalence or partial equivalence of L1 and L2 specific concepts in bilingual conceptual store. The aim of the current paper is to provide a state-of-the-art review of the available empirical evidence from the fields of psycholinguistics, cognitive, experimental, and cross-cultural psychology, and discuss how these may inform and develop further traditional and more recent accounts of bilingual conceptual representation. Based on a synthesis of the available evidence against theoretical postulates of existing models, I argue that the most coherent account of bilingual conceptual representation combines three fundamental assumptions. The first one is the distributed, multi-modal nature of representation. The second one concerns cross-linguistic and cross-cultural variation of concepts. The third one makes assumptions about the development of concepts, and the emergent links between those concepts and their linguistic instantiations.
Abstract:
Understanding how and why the capability of one set of business resources, with its structural arrangements and mechanisms, works better than another can provide competitive advantage in terms of new business processes and product and service development. However, most business models of capability are descriptive and lack a formal modelling language with which to compare capabilities qualitatively and quantitatively. Gibson's theory of affordance, the potential for action, provides a formal basis for a more robust and quantitative model, but most formal affordance models are complex and abstract and lack support for real-world applications. We aim to understand the 'how' and 'why' of business capability by developing a quantitative and qualitative model that underpins earlier work on Capability-Affordance Modelling (CAM). This paper integrates an affordance-based capability model with the formalism of Coloured Petri Nets to develop a simulation model. Using the model, we show how capability depends on the space-time path of interacting resources, the mechanism of transition, and specific critical affordance factors relating to the values of the variables for resources, people and physical objects. We show how the model can identify the capabilities of resources to enable the capability to inject a drug and anaesthetise a patient.
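To illustrate the kind of formalism involved, the core enabling-and-firing rule of a coloured-Petri-net transition can be sketched in a few lines of Python. The places, token colours and the "inject a drug" scenario below are hypothetical stand-ins for illustration, not the authors' actual CAM model: a transition (the capability) is enabled only when tokens of every required colour (resource type) co-occur in the marking.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Token:
    colour: str   # resource type, e.g. "nurse", "syringe" (hypothetical)
    attrs: tuple  # affordance-relevant variable values

def can_fire(marking, required_colours):
    """A transition is enabled when every required colour is present."""
    present = {tok.colour for toks in marking.values() for tok in toks}
    return required_colours <= present

def fire(marking, required_colours, out_place, out_token):
    """Fire the transition: consume one token per required colour, emit output."""
    if not can_fire(marking, required_colours):
        return False
    for colour in required_colours:
        for toks in marking.values():
            tok = next((t for t in toks if t.colour == colour), None)
            if tok is not None:
                toks.remove(tok)
                break
    marking.setdefault(out_place, set()).add(out_token)
    return True

# Hypothetical marking: the "inject" capability needs nurse + patient + syringe
marking = {
    "ward": {Token("nurse", ("trained",)), Token("patient", ("consented",))},
    "tray": {Token("syringe", ("loaded",))},
}
inject_enabled = can_fire(marking, {"nurse", "patient", "syringe"})  # True
```

Firing the transition consumes the interacting resources and produces, say, an anaesthetised patient in an output place, after which the same capability is no longer enabled; a full CPN would additionally carry guards over the token attributes (the "critical affordance factors" above).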
Abstract:
In recent years, research into the impact of genetic abnormalities on cognitive development, including language, has become recognized for its potential to make valuable contributions to our understanding of the brain–behaviour relationships underlying language acquisition, as well as to understanding the cognitive architecture of the human mind. The publication of Fodor's (1983) book The Modularity of Mind has had a profound impact on the study of language and the cognitive architecture of the human mind. Its central claim is that many of the processes involved in comprehension are undertaken by special brain systems termed 'modules'. This domain specificity of language, or modularity, has become a fundamental feature that differentiates competing theories and accounts of language acquisition (Fodor 1983, 1985; Levy 1994; Karmiloff-Smith 1998). However, although the fact that the adult brain is modularized is hardly disputed, there are different views of how brain regions become specialized for specific functions. A question of some interest to theorists is whether the human brain is modularized from the outset (the nativist view) or whether these distinct brain regions develop as a result of biological maturation and environmental input (the neuroconstructivist view). One source of insight into these issues has been the study of developmental disorders, and in particular genetic syndromes such as Williams syndrome (WS) and Down syndrome (DS). Because of their uneven profiles characterized by dissociations of different cognitive skills, these syndromes can help us address theoretically significant questions. Investigations into the linguistic and cognitive profiles of individuals with these genetic abnormalities have been used as evidence to advance theoretical views about innate modularity and the cognitive architecture of the human mind. The present chapter will be organized as follows.
To begin, two different theoretical proposals in the modularity debate will be presented. Then studies of linguistic abilities in WS and in DS will be reviewed. Here, the emphasis will be mainly on WS, because theoretical debates have focused primarily on WS, there is a larger body of literature on WS, and DS subjects have typically been used for purposes of comparison. Finally, the modularity debate will be revisited in light of the literature review of both WS and DS. Conclusions will be drawn regarding the contribution of these two genetic syndromes to the issue of cognitive modularity, and in particular innate modularity.