14 results for Natural language generation
in CentAUR: Central Archive, University of Reading, UK
Abstract:
The Chatterbox Challenge is an annual web-based contest for artificial conversational entities (ACE). The 2010 instantiation was the tenth consecutive contest, held between March and June in the 60th year following the publication of Alan Turing's influential disquisition 'Computing Machinery and Intelligence'. Loosely based on Turing's viva voce interrogator-hidden witness imitation game, a thought experiment to ascertain a machine's capacity to respond satisfactorily to unrestricted questions, the contest provides a platform for technology comparison and evaluation. This paper provides an insight into the emotion content of the entries since the 2005 Chatterbox Challenge. The authors find that synthetic textual systems, none of which are backed by academic or industry funding, are, on the whole and more than half a century after Weizenbaum's natural language understanding experiment, little further along than Eliza in terms of expressing emotion in dialogue. This may reflect a failure on the part of the academic AI community to engage with the Turing test as an engineering challenge.
Abstract:
Purpose – The purpose of this paper is to consider Turing's two tests for machine intelligence: the parallel-paired, three-participant game presented in his 1950 paper, and the “jury-service” one-to-one measure described two years later in a radio broadcast. Both versions were instantiated in practical Turing tests during the 18th Loebner Prize for artificial intelligence hosted at the University of Reading, UK, in October 2008. This involved jury-service tests in the preliminary phase and parallel-paired tests in the final phase. Design/methodology/approach – Almost 100 test results from the final have been evaluated, and this paper reports some intriguing nuances which arose as a result of the unique contest. Findings – In the 2008 competition, Turing's 30 per cent pass rate is not achieved by any machine in the parallel-paired tests, but Turing's modified prediction of “at least in a hundred years time” is remembered. Originality/value – The paper presents actual responses from “modern Elizas” to human interrogators during contest dialogues that show considerable improvement in artificial conversational entities (ACE). Unlike their ancestor – Weizenbaum's natural language understanding system – ACE are now able to recall, share information and disclose personal interests.
Abstract:
We report on the results of a laboratory investigation using a rotating two-layer annulus experiment, which exhibits both large-scale vortical modes and short-scale divergent modes. A sophisticated visualization method allows us to observe the flow at very high spatial and temporal resolution. The balanced long-wavelength modes appear only when the Froude number is supercritical (i.e. $F > F_{\mathrm{critical}} \equiv \pi^2/2$), and are therefore consistent with generation by a baroclinic instability. The unbalanced short-wavelength modes appear locally in every single baroclinically unstable flow, providing perhaps the first direct experimental evidence that all evolving vortical flows will tend to emit freely propagating inertia–gravity waves. The short-wavelength modes also appear in certain baroclinically stable flows. We infer the generation mechanisms of the short-scale waves, both for the baroclinically unstable case in which they co-exist with a large-scale wave, and for the baroclinically stable case in which they exist alone. The two possible mechanisms considered are spontaneous adjustment of the large-scale flow, and Kelvin–Helmholtz shear instability. Short modes in the baroclinically stable regime are generated only when the Richardson number is subcritical (i.e. $Ri < Ri_{\mathrm{critical}} \equiv 1$), and are therefore consistent with generation by a Kelvin–Helmholtz instability. We calculate five indicators of short-wave generation in the baroclinically unstable regime, using data from a quasi-geostrophic numerical model of the annulus. There is excellent agreement between the spatial locations of short-wave emission observed in the laboratory, and regions in which the model Lighthill/Ford inertia–gravity wave source term is large. We infer that the short waves in the baroclinically unstable fluid are freely propagating inertia–gravity waves generated by spontaneous adjustment of the large-scale flow.
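The two threshold criteria translate directly into a regime check. A minimal sketch, assuming the Froude and Richardson numbers have already been diagnosed from the flow (the function and names are illustrative, not from the paper):

```python
# Minimal sketch of the two instability criteria quoted in the abstract.
# The Froude and Richardson numbers are taken as already-computed inputs;
# their annulus-specific definitions are not given here.
import math

F_CRITICAL = math.pi ** 2 / 2   # Froude threshold for baroclinic instability
RI_CRITICAL = 1.0               # Richardson threshold for Kelvin-Helmholtz instability

def wave_generation_mechanisms(froude, richardson):
    """Return the short-wave generation mechanisms consistent with the criteria."""
    mechanisms = []
    if froude > F_CRITICAL:
        # Supercritical F: balanced long-wavelength modes grow, and short
        # inertia-gravity waves can be emitted by spontaneous adjustment.
        mechanisms.append("spontaneous adjustment of the large-scale flow")
    if richardson < RI_CRITICAL:
        # Subcritical Ri: short modes consistent with shear instability,
        # even in baroclinically stable flows.
        mechanisms.append("Kelvin-Helmholtz shear instability")
    return mechanisms
```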
Abstract:
Inertia-gravity waves exist ubiquitously throughout the stratified parts of the atmosphere and ocean. They are generated by local velocity shears, interactions with topography, and as geostrophic (or spontaneous) adjustment radiation. Relatively little is known about the details of their interaction with the large-scale flow, however. We report on a joint model/laboratory study of a flow in which inertia-gravity waves are generated as spontaneous adjustment radiation by an evolving large-scale mode. We show that their subsequent impact upon the large-scale dynamics is generally small. However, near a potential transition from one large-scale mode to another, in a flow which is simultaneously baroclinically unstable to more than one mode, the inertia-gravity waves may strongly influence the selection of the mode which actually occurs.
Abstract:
Two experiments examined the learning of a set of Greek pronunciation rules through explicit and implicit modes of rule presentation. Experiment 1 compared the effectiveness of implicit and explicit modes of presentation in two modalities, visual and auditory. Subjects in the explicit or rule group were presented with the rule set, and those in the implicit or natural group were shown a set of Greek words, composed of letters from the rule set, linked to their pronunciations. Subjects learned the Greek words to criterion and were then given a series of tests which aimed to tap different types of knowledge. The results showed an advantage of explicit study of the rules. In addition, an interaction was found between mode of presentation and modality. Explicit instruction was more effective in the visual than in the auditory modality, whereas there was no modality effect for implicit instruction. Experiment 2 examined a possible reason for the advantage of the rule groups by comparing different combinations of explicit and implicit presentation in the study and learning phases. The results suggested that explicit presentation of the rules is only beneficial when it is followed by practice at applying them.
Abstract:
A parallel hardware random number generator for use with a VLSI genetic algorithm processing device is proposed. The design uses a systolic array of mixed congruential random number generators. The generators are constantly reseeded with the outputs of the preceding generators to avoid significant biasing of the randomness of the array, which would result in longer times for the algorithm to converge to a solution. In recent years there has been a growing interest in developing hardware genetic algorithm devices [1, 2, 3]. A genetic algorithm (GA) is a stochastic search and optimization technique which attempts to capture the power of natural selection by evolving a population of candidate solutions through a process of selection and reproduction [4]. In keeping with the evolutionary analogy, the solutions are called chromosomes, with each chromosome containing a number of genes. Chromosomes are commonly simple binary strings, the bits being the genes.
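A minimal software sketch of the described architecture, simulating one row of the systolic array as a chain of mixed (linear) congruential generators with cross-reseeding; the multiplier, increment and modulus constants are illustrative, not the paper's hardware parameters:

```python
# Each cell is a mixed congruential generator; on every step, each cell
# (after the first) is reseeded from the output of the preceding cell,
# as the paper describes for the systolic array.
class MixedCongruentialCell:
    def __init__(self, seed, a=1664525, c=1013904223, m=2**32):
        self.state, self.a, self.c, self.m = seed % m, a, c, m

    def step(self):
        self.state = (self.a * self.state + self.c) % self.m
        return self.state

class ReseedingArray:
    """One row of the systolic array, simulated sequentially."""
    def __init__(self, seeds):
        self.cells = [MixedCongruentialCell(s) for s in seeds]

    def step(self):
        outputs = [cell.step() for cell in self.cells]
        # Mix each predecessor's output into the next cell's state to
        # decorrelate the parallel streams.
        for prev_out, cell in zip(outputs, self.cells[1:]):
            cell.state = (cell.state ^ prev_out) % cell.m
        return outputs
```

Each call to `step()` yields one random word per cell; the cross-reseeding between neighbouring cells is what the design relies on to keep the parallel streams from biasing the GA's convergence.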
Abstract:
Fitness of hybrids between genetically modified (GM) crops and wild relatives influences the likelihood of ecological harm. We measured fitness components in spontaneous (non-GM) rapeseed × Brassica rapa hybrids in natural populations. The F1 hybrids yielded 46.9% of the seed output of B. rapa, were 16.9% as effective as males on B. rapa, and exhibited increased self-pollination. Assuming 100% GM rapeseed cultivation, we conservatively predict < 7000 second-generation transgenic hybrids annually in the United Kingdom (i.e. ~20% of F1 hybrids). Conversely, whilst reduced hybrid fitness improves the feasibility of bio-containment, stage projection matrices suggest broad scope for some transgenes to offset this effect by enhancing fitness.
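As a generic illustration of the stage-projection-matrix argument (not the study's actual model or estimates): the dominant eigenvalue of a stage matrix gives the asymptotic growth rate, so a transgene's effect on a fitness component can be propagated to a population-level prediction.

```python
# Generic stage-projection-matrix calculation; the matrix entries below
# are placeholders, not the study's estimates.
import numpy as np

def growth_rate(stage_matrix):
    """Dominant eigenvalue (asymptotic per-generation growth rate)."""
    return float(max(abs(np.linalg.eigvals(stage_matrix))))

# Hypothetical two-stage (juvenile, adult) matrix: top row is fecundity,
# bottom row is survival/transition probabilities.
A = np.array([[0.0, 4.2],
              [0.3, 0.5]])
print(growth_rate(A))  # > 1 would imply a spreading hybrid population
```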
Abstract:
This paper draws on ethnographic case-study research conducted amongst a group of first- and second-generation immigrant children in six inner-city schools in London. It focuses on language attitudes and language choice in relation to cultural maintenance, on the one hand, and career aspirations on the other. It seeks to provide insight into some of the experiences, dilemmatic choices and negotiations encountered and engaged in by transmigratory groups, how they define cultural capital, and the processes through which new meanings are shaped as part of the process of defining a space within the host society. Underlying this discussion is the assumption that alternative cultural spaces, in which multiple identities and possibilities can be articulated, already exist in the rich texture of everyday life amongst transmigratory groups. A recurring theme is the argument that, whilst the acquisition of 'world languages' is a key variable in accumulating cultural capital, the maintenance of linguistic diversity retains potent symbolic power in sustaining cohesive identities.
Abstract:
This review highlights the importance of right hemisphere language functions for successful social communication and advances the hypothesis that the core deficit in psychosis is a failure of segregation of right from left hemisphere functions. Lesion studies of stroke patients and dichotic listening and functional imaging studies of healthy people have shown that some language functions are mediated by the right hemisphere rather than the left. These functions include discourse planning/comprehension, understanding humour, sarcasm, metaphors and indirect requests, and the generation/comprehension of emotional prosody. Behavioural evidence indicates that patients with typical schizophrenic illnesses perform poorly on tests of these functions, and aspects of these functions are disturbed in schizo-affective and affective psychoses. The higher order language functions mediated by the right hemisphere are essential to an accurate understanding of someone's communicative intent, and the deficits displayed by patients with schizophrenia may make a significant contribution to their social interaction deficits. We outline a bi-hemispheric theory of the neural basis of language that emphasizes the role of the sapiens-specific cerebral torque in determining the four-chambered nature of the human brain in relation to the origins of language and the symptoms of schizophrenia. Future studies of abnormal lateralization of left hemisphere language functions need to take account of the consequences of a failure of lateralization of language functions to the right as well as the left hemisphere.
Abstract:
A new mild method has been devised for generating o-(naphtho)quinone methides via fluoride-induced desilylation of silyl derivatives of o-hydroxybenzyl (or 1-naphthylmethyl) nitrate. The reactive o-(naphtho)quinone methide intermediates were trapped by C, O, N and S nucleophiles and underwent “inverse electron-demand” hetero Diels–Alder reactions with dienophiles to give stable adducts. The method has potentially useful applications in natural product synthesis and drug research.
Abstract:
Our aim is to reconstruct the brain-body loop of stroke patients via an EEG-driven robotic system. After the detection of motor command generation, the robotic arm should assist the patient's movement at the correct moment and in a natural way. In this study we performed EEG measurements on healthy subjects performing discrete spontaneous motions. An EEG analysis based on the temporal correlation of the brain activity was employed to determine the onset of motor command generation for single motions.
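A hypothetical sketch of onset detection by temporal correlation: slide a window along the EEG trace and correlate it with a movement-related template, flagging the first window that exceeds a threshold. The abstract names only "temporal correlation", so the template, window length and threshold here are assumptions.

```python
# Sliding-window Pearson correlation against an assumed movement template.
import numpy as np

def detect_onset(eeg, template, threshold=0.8):
    """Index of the first window correlating with `template` above
    `threshold`, or None if no onset is detected."""
    w = len(template)
    t = (template - template.mean()) / template.std()
    for start in range(len(eeg) - w + 1):
        window = eeg[start:start + w]
        if window.std() == 0:
            continue  # flat segment: correlation undefined, skip
        z = (window - window.mean()) / window.std()
        if np.dot(z, t) / w > threshold:  # Pearson correlation coefficient
            return start
    return None
```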
Abstract:
Advances in the science and observation of climate change are providing a clearer understanding of the inherent variability of Earth’s climate system and its likely response to human and natural influences. The implications of climate change for the environment and society will depend not only on the response of the Earth system to changes in radiative forcings, but also on how humankind responds through changes in technology, economies, lifestyle and policy. Extensive uncertainties exist in future forcings of and responses to climate change, necessitating the use of scenarios of the future to explore the potential consequences of different response options. To date, such scenarios have not adequately examined crucial possibilities, such as climate change mitigation and adaptation, and have relied on research processes that slowed the exchange of information among physical, biological and social scientists. Here we describe a new process for creating plausible scenarios to investigate some of the most challenging and important questions about climate change confronting the global community.
Abstract:
We utilize energy budget diagnostics from the Coupled Model Intercomparison Project phase 5 (CMIP5) to evaluate the models' climate forcing since preindustrial times, employing an established regression technique. The climate forcing evaluated this way, termed the adjusted forcing (AF), includes a rapid adjustment term associated with cloud changes and other tropospheric and land-surface changes. We estimate a 2010 total anthropogenic and natural AF from CMIP5 models of 1.9 ± 0.9 W m⁻² (5–95% range). The projected AFs of the Representative Concentration Pathway simulations are lower than their expected radiative forcings (RF) in 2095 but agree well with efficacy-weighted forcings from integrated assessment models. The smaller AF, compared to RF, is likely due to cloud adjustment. Multimodel time series of temperature change and AF from 1850 to 2100 have large intermodel spreads throughout the period. The intermodel spread of temperature change is principally driven by forcing differences in the present day and by climate feedback differences in 2095, although forcing differences are still important for model spread at 2095. We find no significant relationship between the equilibrium climate sensitivity (ECS) of a model and its 2003 AF, in contrast to that found in older models, where higher-ECS models generally had less forcing. Given the large present-day model spread, there is no indication of any tendency by modelling groups to adjust their aerosol forcing in order to produce observed trends. Instead, some CMIP5 models have a relatively large positive forcing and overestimate the observed temperature change.
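A sketch of the kind of established regression technique the abstract alludes to (a Gregory-style fit, assuming N = F − λΔT for annual means from a forcing experiment), in which the intercept estimates the adjusted forcing and the slope the feedback parameter:

```python
# Regress net top-of-atmosphere flux N against surface temperature change
# dT; under N = F - lambda*dT, the intercept estimates the adjusted
# forcing F and the (negated) slope the feedback parameter lambda.
# The input arrays are placeholders for annual-mean model diagnostics.
import numpy as np

def adjusted_forcing(delta_t, net_toa_flux):
    """Return (adjusted forcing F in W m-2, feedback parameter lambda)."""
    slope, intercept = np.polyfit(delta_t, net_toa_flux, deg=1)
    return intercept, -slope
```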
Abstract:
Experimental philosophy of language uses experimental methods developed in the cognitive sciences to investigate topics of interest to philosophers of language. This article describes the methodological background for the development of experimental approaches to topics in philosophy of language, distinguishes negative and positive projects in experimental philosophy of language, and evaluates experimental work on the reference of proper names and natural kind terms. The reliability of expert judgments vs. the judgments of ordinary speakers, the role that ambiguity plays in influencing responses to experiments, and the reliability of meta-linguistic judgments are also assessed.