948 results for observation sentence


Relevance: 20.00%

Abstract:

During sentence processing there is a preference to treat the first noun phrase encountered as the subject and agent, unless it is marked otherwise. This preference would lead to a conflict in thematic role assignment when the syntactic structure follows a non-canonical object-before-subject pattern. Left perisylvian and fronto-parietal brain networks have been found to be engaged by increased computational demands during sentence comprehension, while event-related brain potentials have been used to study the on-line manifestation of these demands. However, evidence regarding the spatio-temporal organization of brain networks in this domain is scarce. In the current study we used magnetoencephalography (MEG) to track brain activity spatio-temporally while Spanish speakers were reading subject- and object-first cleft sentences. Both kinds of sentences remained ambiguous between a subject-first and an object-first interpretation up to the appearance of the second argument. Results show the time-modulation of a frontal network at the disambiguation point of object-first sentences. Moreover, the time windows in which these effects took place have previously been related to thematic role integration (300–500 ms) and to sentence reanalysis and the resolution of conflicts during processing (beyond 500 ms post-stimulus). These results point to frontal cognitive control as a putative key mechanism that may operate when a revision of the sentence structure and meaning is necessary.
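For illustration only, the sketch below mirrors the kind of window-based contrast the abstract reports: mean sensor-level activity in the 300–500 ms and post-500 ms windows, time-locked to the disambiguating second argument, compared between subject-first and object-first sentences. The epoch shape, sampling rate, window edges, and variable names are assumptions, not the study's actual MEG pipeline.

```python
# Illustrative sketch only (not the study's analysis pipeline): compare mean
# sensor-level activity between subject-first and object-first sentences in
# the two time windows the abstract reports (300-500 ms and beyond 500 ms
# post-stimulus). Epoch shape, sampling rate, and window edges are assumptions.
import numpy as np

SFREQ = 1000   # assumed sampling rate in Hz
T_MIN = -0.2   # assumed epoch start relative to the disambiguating word, in seconds

def window_mean(epochs: np.ndarray, t_start: float, t_end: float) -> np.ndarray:
    """Mean amplitude per trial over one time window.

    epochs: array of shape (n_trials, n_sensors, n_samples),
    time-locked to the second argument (the disambiguation point).
    """
    i0 = int((t_start - T_MIN) * SFREQ)
    i1 = int((t_end - T_MIN) * SFREQ)
    return epochs[:, :, i0:i1].mean(axis=(1, 2))

def window_effect(subj_first: np.ndarray, obj_first: np.ndarray,
                  t_start: float, t_end: float) -> float:
    """Condition difference (object-first minus subject-first) in one window."""
    return float(window_mean(obj_first, t_start, t_end).mean()
                 - window_mean(subj_first, t_start, t_end).mean())

# Hypothetical usage, given two epoch arrays of matching shape:
# effect_300_500 = window_effect(subj_epochs, obj_epochs, 0.300, 0.500)
# effect_late    = window_effect(subj_epochs, obj_epochs, 0.500, 0.800)
```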

Relevance: 20.00%

Abstract:

This article describes a sentence-selection strategy for tuning a statistical machine translation system, based on the Moses decoder, that translates from Spanish into English. We propose two ways of selecting, from the development (validation) corpus, the sentences that most resemble the sentences we want to translate (the source-language test sentences). With this selection we can obtain better model weights to use later in the translation process and thus improve the results. Specifically, with the selection method based on the similarity measure proposed in this article, the BLEU score improves from 27.17% with the full development corpus to 27.27% when the tuning sentences are selected. These results approach those of the ORACLE experiment, in which the test sentences themselves are used to tune the weights; in that case the BLEU obtained is 27.51%.
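As a rough illustration of the selection idea (not the article's actual similarity measure), the sketch below scores each development-corpus sentence against the test set with a plain bag-of-words cosine, keeps the top k, and that subset would then replace the full development corpus when tuning the Moses weights (e.g. with MERT). The function names and the value of k are made up for the example.

```python
# Minimal sketch (not the article's exact similarity measure): pick, from the
# development corpus, the sentences most similar to the test-set source
# sentences, then use only that subset to tune the Moses decoder weights
# (e.g. with MERT). A plain bag-of-words cosine stands in for the proposed
# measure; k and all names are illustrative.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def select_dev_sentences(dev_src, test_src, k=500):
    """Return the k development sentences closest to the pooled test-set profile."""
    test_profile = Counter(tok for sent in test_src for tok in sent.lower().split())
    scored = [(cosine(Counter(s.lower().split()), test_profile), s) for s in dev_src]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [sent for _, sent in scored[:k]]

# Hypothetical usage: the selected subset replaces the full development corpus
# when running Moses tuning (e.g. mert-moses.pl).
# tuning_set = select_dev_sentences(dev_sentences, test_sentences, k=500)
```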

Relevance: 20.00%

Abstract:

The underground cellars found in different parts of Spain are part of a dispersed agricultural landscape, in some cases damaged and in others at risk of disappearing. This paper studies the measurement and representation of a group of wineries located in Atauta (Soria), in the Duero River corridor. It is a unique architectural complex, facing the rising sun and built on a gentle hillock, as shown in Fig. 1. These constructions are excavated into the ground. The access to the cave or underground cellar takes the form of a narrow tube or descending gallery; immediately afterwards the space widens, and it is there that wine is produced and stored [1]. Observation and detection of the underground cellars, both above ground and below it, is essential for making an inventory of this rural heritage [2]. Geodetection is a non-invasive technique, well suited to accurately locating structures buried in the ground. The work undertaken includes topographic surveying with LIDAR techniques and its integration with data obtained by GNSS and GPR.

Relevance: 20.00%

Abstract:

An important part of human intelligence, both historically and operationally, is our ability to communicate. We learn how to communicate, and maintain our communicative skills, in a society of communicators – a highly effective way to reach and maintain proficiency in this complex skill. The principles that might allow artificial agents to learn language this way are only incompletely known at present – the multi-dimensional nature of socio-communicative skills is beyond every machine learning framework proposed so far. Our work begins to address this challenge by proposing a way for machines to learn natural language and communication from observation. Our framework can learn complex communicative skills with minimal up-front knowledge. The system learns by incrementally producing predictive models of causal relationships in observed data, guided by goal inference and reasoning using forward-inverse models. We present results from two experiments in which our S1 agent learns human communication by observing two humans interacting in a real-time TV-style interview, using multimodal communicative gesture and situated language to talk about the recycling of various materials and objects. S1 can learn complex multimodal language and multimodal communicative acts: a vocabulary of 100 words forming natural sentences with relatively complex structure, including manual deictic reference and anaphora. S1 is seeded only with high-level information about the goals of the interviewer and interviewee and a small ontology; no grammar or other information is provided to S1 a priori. The agent learns the pragmatics, semantics, and syntax of complex spoken utterances and gestures from scratch, by observing the humans compare and contrast the cost and pollution related to recycling aluminum cans, glass bottles, newspaper, plastic, and wood. After 20 hours of observation S1 can perform an unscripted TV interview with a human, in the same style, without making mistakes.
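The sketch below is a heavily simplified, hypothetical rendering of the learning idea described here: the agent proposes predictive models of which observed event tends to follow which, scores them against subsequent observations, and keeps the ones that predict reliably. AERA's actual goal inference and forward-inverse reasoning are far richer; all names and thresholds are illustrative only.

```python
# Heavily simplified, hypothetical rendering of the learning idea described in
# the abstract: the agent incrementally proposes predictive models of which
# observed event tends to follow which, scores them against later observations,
# and keeps those that keep predicting well. AERA's goal inference and
# forward-inverse reasoning are far richer; all names and thresholds are
# illustrative only.
from dataclasses import dataclass

@dataclass
class PredictiveModel:
    antecedent: str   # observed event that triggers the prediction
    consequent: str   # event the model predicts will follow
    hits: int = 0
    misses: int = 0

    def confidence(self) -> float:
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

def learn_from_stream(events, min_confidence=0.6):
    """Induce simple successor models from a sequence of observed events."""
    models = {}
    for prev, nxt in zip(events, events[1:]):
        # propose a model the first time this antecedent/consequent pair is seen
        model = models.setdefault((prev, nxt), PredictiveModel(prev, nxt))
        model.hits += 1
        # every competing model with the same antecedent failed to predict this step
        for (a, c), other in models.items():
            if a == prev and c != nxt:
                other.misses += 1
    return [m for m in models.values() if m.confidence() >= min_confidence]

# Hypothetical usage on a toy multimodal transcript:
# events = ["point-at(can)", "say('aluminium')", "point-at(bottle)", "say('glass')"]
# kept = learn_from_stream(events)
```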

Relevance: 20.00%

Abstract:

An important part of human intelligence is the ability to use language. Humans learn how to use language in a society of language users, which is probably the most effective way to learn a language from the ground up. The principles that might allow artificial agents to learn language this way are not known at present. Here we present a framework which begins to address this challenge. Our auto-catalytic, endogenous, reflective architecture (AERA) supports the creation of agents that can learn natural language by observation. We present results from two experiments in which our S1 agent learns human communication by observing two humans interacting in a real-time mock television interview, using gesture and situated language. The results show that S1 can learn complex multimodal language and multimodal communicative acts, using a vocabulary of 100 words with numerous sentence formats, by observing unscripted interaction between the humans, with no grammar provided to it a priori and only high-level information about the format of the interaction, in the form of the interviewer's and interviewee's high-level goals and a small ontology. The agent learns the pragmatics, semantics, and syntax of complex sentences spoken by the human subjects on the topic of recycling objects such as aluminum cans, glass bottles, plastic, and wood, as well as the use of manual deictic reference and anaphora.