20 results for MAPPINGS

at Universidad Politécnica de Madrid


Relevance:

20.00%

Publisher:

Abstract:

Let U be an open subset of a separable Banach space. Let F be the collection of all holomorphic mappings f from the open unit disc D ⊂ ℂ into U such that f(D) is dense in U. We prove the lineability and density of F in appropriate spaces for different choices of U.
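
For readers outside this subfield, the setting and the standard notion of lineability invoked by the result can be stated as follows (a textbook formulation, not quoted from the paper):

    % The family studied, in symbols (H(D,U) = holomorphic maps from D into U):
    \[
      \mathcal{F} = \bigl\{\, f \in H(\mathbb{D}, U) : \overline{f(\mathbb{D})} \supseteq U \,\bigr\},
      \qquad \mathbb{D} = \{\, z \in \mathbb{C} : |z| < 1 \,\}.
    \]
    % Standard definition: a set M in a topological vector space X is lineable
    % if M \cup \{0\} contains an infinite-dimensional vector subspace of X.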

Relevance:

20.00%

Publisher:

Abstract:

The integration of heterogeneous information sources is a problem that has been addressed for different types of sources over the decades and in different ways. One of them is the establishment of semantic relations that allow information from related sources to be linked. These links, crucial pieces of the integration, are usually known as mappings. Mappings have been used in a multitude of works, and different solutions for their discovery, storage, exploitation, and so on have been presented, in many cases following a more practical than theoretical approach. However, although there have been many contributions on mappings, the community lacks a generally accepted definition that covers all the aspects related to them. Moreover, for the discovery process there is no theoretical framework that methodically defines the steps to be followed and their characteristics. Similarly, the current way of evaluating mapping discovery is insufficient for all the existing use cases. The main contributions of this work are threefold: a generic definition of "mapping" that covers all current systems, a detailed specification of the discovery process, and the analysis and proposal of an evaluation process for that discovery. The validity of these contributions has been checked by formulating hypotheses and verifying them in a quantitative study over a use case with heterogeneous geospatial resources.
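
Purely as an illustration of what a generic notion of mapping has to capture (the field names below are invented for this sketch and do not reproduce the dissertation's formalization), such a link can be modeled as a typed, scored relation between entities of two sources:

    # Hypothetical Python sketch of a generic "mapping" record; all names
    # are illustrative, not the dissertation's actual definition.
    from dataclasses import dataclass

    @dataclass
    class Mapping:
        source_entity: str   # identifier of an entity in the first source
        target_entity: str   # identifier of an entity in the second source
        relation: str        # semantic relation, e.g. "equivalent" or "broader"
        confidence: float    # score assigned by the discovery process, in [0, 1]
        provenance: str      # tool or method that discovered the link

    m = Mapping("geo:Madrid", "dbpedia:Madrid", "equivalent", 0.93, "string-matcher")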

Relevance:

20.00%

Publisher:

Abstract:

RDB to RDF Mapping Language (R2RML) is a W3C recommendation for specifying rules that transform relational databases into RDF. The resulting RDF data can be materialized and stored in an RDF triple store, where SPARQL queries can then be evaluated. However, there are cases where materialization is not adequate or possible, for example when the underlying relational database is updated frequently. In those cases it is better to keep the RDF data virtual, so that the aforementioned SPARQL queries are translated into SQL queries that can be evaluated by the original relational database management systems (RDBMS), taking the specified R2RML mappings into account. The first part of this thesis focuses on query translation. We propose a formalization of the translation from SPARQL to SQL that takes R2RML mappings into account, together with several optimization techniques so that the generated SQL queries can be evaluated more efficiently by the underlying databases. We evaluate this approach with a synthetic benchmark and several real cases, obtaining positive results. Direct Mapping (DM) is another W3C recommendation for generating RDF data from relational databases: while R2RML allows users to specify their own transformation rules, DM establishes fixed ones. Although both recommendations were published at the same time, in September 2012, the relationship between them had not been formally studied. The second part of this thesis therefore studies the relationship between R2RML and DM in two directions: from R2RML to DM, and from DM to R2RML. In the first direction, we study a fragment of R2RML with the same expressive power as DM. In the second, we represent the DM rules as R2RML mappings and also add the implicit semantics (subclass, 1-N and M-N relationships) that can be found encoded in the database. This thesis shows that, by formalizing and optimizing R2RML-based SPARQL-to-SQL query translation, it is possible to use R2RML in real cases without materializing the data, since the generated SQL queries are efficient enough when evaluated by the underlying relational database management system. It also deepens the understanding of the bidirectional relationship between the two W3C recommendations, something that had not been studied before.
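
To make the translation idea concrete, here is a deliberately minimal Python sketch of R2RML-driven SPARQL-to-SQL rewriting; the mapping structure, URIs and table are invented, and the thesis' actual formalization covers far more of SPARQL and R2RML than this:

    # A minimal, hypothetical sketch of R2RML-driven SPARQL-to-SQL translation.
    # The mapping structure and names are illustrative, not the thesis' API.
    MAPPING = {
        "table": "EMP",                                      # rr:logicalTable
        "subject_template": "http://ex.org/emp/{EMPNO}",     # rr:subjectMap template
        "predicates": {                                      # rr:predicateObjectMap entries
            "http://ex.org/name": "ENAME",
            "http://ex.org/job": "JOB",
        },
    }

    def translate_bgp(triple_patterns):
        """Translate triple patterns sharing one subject variable into SQL."""
        columns, conditions = [], []
        for _, predicate, obj in triple_patterns:
            column = MAPPING["predicates"][predicate]
            if obj.startswith("?"):          # unbound variable -> projected column
                columns.append(column)
            else:                            # bound value -> WHERE condition
                conditions.append(f"{column} = '{obj}'")
        sql = f"SELECT {', '.join(columns) or '*'} FROM {MAPPING['table']}"
        if conditions:
            sql += " WHERE " + " AND ".join(conditions)
        return sql

    # Prints: SELECT ENAME FROM EMP WHERE JOB = 'CLERK'
    print(translate_bgp([("?e", "http://ex.org/name", "?name"),
                         ("?e", "http://ex.org/job", "CLERK")]))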

Relevance:

10.00%

Publisher:

Abstract:

Recently, the Semantic Web has seen significant advances in standards and techniques, as well as in the amount of semantic information available online. Nevertheless, mechanisms are still needed to automatically reconcile information expressed in different natural languages on the Web of Data, in order to improve access to semantic information across language barriers. Several challenges arise in this context [1], such as: (i) ontology translation/localization, (ii) cross-lingual ontology mappings, (iii) representation of multilingual lexical information, and (iv) cross-lingual access and querying of linked data. In the following we focus on the second challenge: the need to establish, represent and store cross-lingual links among semantic information on the Web. Indeed, in a "truly" multilingual Semantic Web, semantic data with lexical representations in one natural language would be mapped to equivalent or related information in other languages, making navigation across multilingual information possible for software agents.
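
As a small illustration of what storing such a cross-lingual link can look like in practice, the following sketch uses the rdflib library and the SKOS vocabulary; the URIs are invented, and the cited work does not prescribe this particular representation:

    # Minimal sketch (rdflib) of a cross-lingual link between two resources.
    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import SKOS

    g = Graph()
    es = URIRef("http://datos.ejemplo.es/recurso/Puente")
    en = URIRef("http://data.example.org/resource/Bridge")

    # Language-tagged labels make each resource accessible in its own language.
    g.add((es, SKOS.prefLabel, Literal("puente", lang="es")))
    g.add((en, SKOS.prefLabel, Literal("bridge", lang="en")))

    # The cross-lingual mapping itself: both resources denote the same concept.
    g.add((es, SKOS.exactMatch, en))

    print(g.serialize(format="turtle"))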

Relevance:

10.00%

Publisher:

Abstract:

The Web has witnessed an enormous growth in the amount of semantic information published in recent years, stimulated to a large extent by the emergence of Linked Data. Although this brings us a big step closer to the vision of a Semantic Web, it also raises new issues, such as the need to deal with information expressed in different natural languages. Indeed, although the Web of Data can contain any kind of information in any language, it still lacks explicit mechanisms to automatically reconcile such information when it is expressed in different languages. This leads to situations in which data expressed in a certain language is not easily accessible to speakers of other languages. The Web of Data has the potential to be extended to a truly multilingual web, as vocabularies and data can be published in a language-independent fashion, while the associated language-dependent (linguistic) information supporting access across languages can be stored separately. In this sense, the multilingual Web of Data can in our view be realized as a layer of services and resources on top of the existing Linked Data infrastructure that adds i) linguistic information for data and vocabularies in different languages, ii) mappings between data with labels in different languages, and iii) services to dynamically access and traverse Linked Data across different languages. In this article we present this vision of a multilingual Web of Data, discuss the challenges that need to be addressed to make it come true, and examine the role that techniques such as ontology localization, ontology mapping, and cross-lingual ontology-based information access and presentation will play in achieving it. Further, we propose an initial architecture and describe a roadmap that can provide a basis for implementing this vision.

Relevance:

10.00%

Publisher:

Abstract:

This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a flood of information when searching for appropriate services or resources to combine. To contribute to overcoming this problem, a framework is defined for the discovery of services and resources, with three levels at which discovery is performed: the content, service and agent levels. The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as Screen Scraping in the literature, is the content discovery of the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them. The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level feature-oriented service methodology is proposed at this level. This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after analysing service repositories from the web. The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow specifying behaviours for crawling and executing services, resulting in the fulfilment of a high-level goal. Agent rules are plans that introspect the discovered data and services from the web, and the knowledge present in service and content discovery rules, to anticipate the contents and services to be found on specific web resources. By defining plans, an agent can be configured to target specific resources. The discovery framework has been evaluated on different scenarios, each one covering different levels of the framework. The Contenidos a la Carta project deals with the mashing-up of news from electronic newspapers, and the framework was used there for the discovery and extraction of pieces of news from the web. Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and of security knowledge on the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered in component repositories on the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can provide complex behaviours and orchestrations of tasks on the web. The main contributions of the thesis are the unified discovery framework, which allows configuring agents to perform automated tasks; a scraping ontology defined for the construction of mappings for scraping web resources; a novel first-order logic rule induction algorithm for the automated construction and maintenance of these mappings out of the visual information in web resources; and a common unified model for the discovery of services, which allows sharing service descriptions. Future work comprises the further extension of service probing, resource ranking, extensions of the Scraping Ontology and the agent model, and constructing a base of discovery rules.
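
The following minimal Python sketch illustrates the flavour of a content-level discovery rule as described above, mapping a piece of an HTML representation onto a semantic entity; the rule format and URIs are invented, whereas the thesis defines such rules over a dedicated scraping ontology:

    # Hypothetical content-level discovery rule: a selector mapped to a
    # semantic class. Rule format and URIs are illustrative only.
    from bs4 import BeautifulSoup

    RULE = {"selector": "article h1.title", "maps_to": "http://ex.org/News#headline"}

    def apply_rule(html, rule):
        """Return (semantic entity, value) pairs extracted from an HTML representation."""
        soup = BeautifulSoup(html, "html.parser")
        return [(rule["maps_to"], node.get_text(strip=True))
                for node in soup.select(rule["selector"])]

    html = "<article><h1 class='title'>Bridge reopened</h1></article>"
    print(apply_rule(html, RULE))  # [('http://ex.org/News#headline', 'Bridge reopened')]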

Relevance:

10.00%

Publisher:

Abstract:

Sensor networks are increasingly becoming one of the main sources of Big Data on the Web. However, the observations they produce are made available with heterogeneous schemas, vocabularies and data formats, making it difficult to share and reuse these data for purposes other than those for which they were originally set up. In this thesis we address these challenges, considering how we can transform streaming raw data into rich ontology-based information that is accessible through continuous queries over streaming data. Our main contribution is an ontology-based approach for providing data access and query capabilities to streaming data sources, allowing users to express their needs at a conceptual level, independent of implementation and language-specific details. We introduce novel query rewriting and data translation techniques that rely on mapping definitions relating streaming data models to ontological concepts. Specific contributions include:

• The syntax and semantics of the SPARQLStream query language for ontology-based data access, and a query rewriting approach for transforming SPARQLStream queries into streaming algebra expressions.

• The design of an ontology-based streaming data access engine that can internally reuse an existing data stream engine, complex event processor or sensor middleware, using R2RML mappings to define relationships between streaming data models and ontology concepts.

Concerning the sensor metadata of such streaming data sources, we have investigated how raw measurements can be used to characterize streaming data, producing enriched data descriptions in terms of ontological models. Our specific contributions are:

• A representation of sensor data time series that captures gradient information useful for characterizing types of sensor data.

• A method for classifying sensor data time series and determining the type of data using data mining techniques, and a method for extracting semantic sensor metadata features from the time series.
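
As a hedged illustration of the gradient-based characterization mentioned in the last two contributions, the following numpy sketch extracts simple gradient features from a raw time series; the feature names and thresholds are invented, and the thesis' actual representation and classifiers are richer:

    # Small sketch of gradient-based time-series characterization (numpy).
    import numpy as np

    def gradient_features(series):
        """Summarize a sensor time series by the behaviour of its gradient."""
        g = np.gradient(np.asarray(series, dtype=float))
        return {
            "mean_slope": float(g.mean()),        # overall trend
            "slope_variability": float(g.std()),  # smooth vs. erratic signal
            "sign_changes": int((np.diff(np.sign(g)) != 0).sum()),  # oscillation
        }

    print(gradient_features([20.1, 20.3, 20.2, 20.6, 21.0]))  # e.g. temperature readings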

Relevance:

10.00%

Publisher:

Abstract:

Cognitive linguistics has conscientiously pointed out the pervasiveness of the conceptual mappings, particularly conceptual blending and integration, that underlie language and are unconsciously used in everyday speech (Fauconnier 1997; Fauconnier & Turner 2002; Rohrer 2007; Grady, Oakley & Coulson 1999). As a further development of this work, there is growing interest in research devoted to the conceptual mappings that make up specialized technical disciplines. Lakoff & Núñez (2000), for example, produced a major breakthrough in the understanding of concepts in mathematics, through conceptual metaphor and as a result not of purely abstract concepts but rather of embodiment. On the engineering and architecture front, analyses of the use of metaphor, blending and categorization in English and Spanish have likewise appeared in recent times (Úbeda 2001; Roldán 1999; Caballero 2003a, 2003b; Roldán & Úbeda 2006; Roldán & Protasenia 2007). The present paper seeks to show a number of significant conceptual mappings underlying the language of architecture and civil engineering that seem to shape the way engineers and architects communicate. In order to work with a significant segment of linguistic expressions in this field, a corpus taken from a widely used Spanish technical engineering journal was collected and analysed. Examination of the data obtained indicates that many tokens make direct reference to therapeutic conceptual mappings, highlighting medical domains such as diagnosing, treating and curing. Hence, the paper illustrates how this notion is instantiated by the corresponding bodily conceptual integration. In addition, we underline the function of visual metaphors in the world of modern architecture, which evoke parts of human or animal anatomy, and how this is visibly noticeable in contemporary buildings and public works structures.

Relevance:

10.00%

Publisher:

Abstract:

An important objective of the INTEGRATE project is to build tools that support the efficient execution of post-genomic multi-centric clinical trials in breast cancer, which includes the automatic assessment of the eligibility of patients for available trials. The population suited to be enrolled in a trial is described by a set of free-text eligibility criteria that are both syntactically and semantically complex. At the same time, assessing the eligibility of a patient for a trial requires a (machine-processable) understanding of the semantics of the eligibility criteria, in order to evaluate whether the patient data available, for example in the hospital EHR, satisfy these criteria. This paper presents an analysis of the semantics of clinical trial eligibility criteria based on relevant medical ontologies in the clinical research domain: SNOMED-CT, LOINC and MedDRA. We detect subsets of these widely adopted ontologies that characterize the semantics of the eligibility criteria of trials in various clinical domains, and compare these sets. Next, we evaluate the occurrence frequency of the concepts in the concrete case of breast cancer (our first application domain) in order to provide meaningful priorities for the task of binding/mapping these ontology concepts to the actual patient data. We further assess the effort required to extend our approach to new domains in terms of the additional semantic mappings that need to be developed.

Relevance:

10.00%

Publisher:

Abstract:

The use of semantic and Linked Data technologies for Enterprise Application Integration (EAI) has been increasing in recent years. Linked Data and Semantic Web technologies such as the Resource Description Framework (RDF) data model provide several key advantages over the current de facto Web Service and XML based integration approaches. The flexibility of representing the data in the more versatile RDF model using ontologies avoids complex schema transformations, and makes data more accessible using Web standards while preventing the formation of data silos. These benefits give Linked Data-based EAI an edge. However, work still has to be done so that these technologies can cope with the particularities of EAI scenarios in terms of data control, ownership, consistency, and accuracy. The first part of the paper provides an introduction to Enterprise Application Integration using Linked Data and the requirements that EAI imposes on Linked Data technologies, focusing on one of the problems that arise in this scenario, the coreference problem, and presents a coreference service that supports the use of Linked Data in EAI systems. The proposed solution introduces the use of a context that aggregates a set of related identities, together with mappings from those identities to resources that reside in distinct applications and provide different views or aspects of the same entity. A detailed architecture of the Coreference Service is presented, explaining how it can be used to manage the contexts, identities, resources, and applications to which they relate. The paper shows how the proposed service can be utilized in an EAI scenario, using an example involving a dashboard that integrates data from different systems, and presents the proposed workflow for registering and resolving identities. As most enterprise applications are driven by business processes and involve legacy data, the proposed approach can be easily incorporated into enterprise applications.
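
As an illustration of the described context notion, the following Python sketch models a context that aggregates identities and maps them to application-local resources; class and field names are invented and do not reproduce the Coreference Service's actual API:

    # Hypothetical sketch of a coreference "context": a set of identities plus
    # mappings from the entity to application-local resources.
    from dataclasses import dataclass, field

    @dataclass
    class CoreferenceContext:
        identities: set[str] = field(default_factory=set)        # known URIs for one entity
        resources: dict[str, str] = field(default_factory=dict)  # application -> local resource

    ctx = CoreferenceContext()
    ctx.identities |= {"urn:crm:cust-42", "urn:erp:client-7781"}
    ctx.resources["dashboard"] = "http://dash.example/customer/42"

    def resolve(ctx, identity):
        """Return all application views of the entity behind a known identity."""
        return ctx.resources if identity in ctx.identities else {}

    print(resolve(ctx, "urn:erp:client-7781"))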

Relevance:

10.00%

Publisher:

Abstract:

This study suggests a theoretical framework for improving the teaching/learning process of the English employed in aeronautical discourse that brings together cognitive learning strategies, Genre Analysis and the Contemporary Theory of Metaphor (Lakoff and Johnson 1980; Lakoff 1993). It maintains that cognitive strategies such as imagery, deduction, inference and grouping can be enhanced by means of metaphor and genre awareness in the context of a content-based approach to language learning. A list of image metaphors and conceptual metaphors drawn from the terminological database METACITEC is provided. The metaphorical terms from the area of Aeronautics have been taken from specialised dictionaries and categorised according to the conceptual metaphors they respond to, establishing the source and target domains as well as the semantic networks found. This information makes reference to the internal mappings underlying the discourse of aeronautics as reflected in five aviation accident case studies related to accident reports from the National Transportation Safety Board (NTSB), and provides an important source for designing language teaching tasks.

Relevance:

10.00%

Publisher:

Abstract:

This paper describes a corpus-based analysis of the humanizing metaphor and argues that constitutive metaphor in science and technology may be highly metaphorical and active. The study, grounded in Lakoff's Theory of Metaphor and in Langacker's relational networks, consists of two phases. First, Earth Science metaphorical terms were extracted from databases and dictionaries and then contextualized, by means of the WordSmith tool, in a digitalized corpus created to establish their productivity. Second, the terms were classified to disclose the main conceptual metaphors underlying them, and the mappings and relational networks of the metaphor were described. The results confirm the systematicity and productivity of the metaphor in this field, show evidence that the metaphoricity of scientific terms is gradable, and support the view that Earth Science metaphors are created not only in terms of concrete salient properties and attributes, but also through abstract anthropocentric projections.

Relevance:

10.00%

Publisher:

Abstract:

Metaphor and other imaginative mechanisms underlying human thought and language, such as metonymy, are used in everyday and specialised discourse (Lakoff and Johnson 1980; Lakoff and Núñez 2000). They can also be involved in non-verbal forms of communication (Forceville and Urios-Aparisi 2009; Littlemore et al., this volume). Drawing on cognitive metaphor studies and on conceptual integration theory (Fauconnier 1997; Fauconnier and Turner 2002), this paper examines the occurrence of metaphor in engineering. First, we analyse results from a linguistic corpus of research papers from civil engineering journals. These data reveal the use of anthropomorphic metaphor, especially in health- or medicine-related mappings such as "diagnosing", "auscultation" or "curing". Then, we explore how engineering notions are instantiated by bodily conceptual mappings according to conceptual integration theory. Finally, the function of visual metaphor is examined within conceptual integration theory, using engineering images that evoke parts of human or animal anatomy.

Relevance:

10.00%

Publisher:

Abstract:

This article explores one aspect of the processing perspective in L2 learning in an EST context: the processing of new content words in English, of the type 'cognates' and 'false friends', by Spanish-speaking engineering students. The paper does not try to offer a comprehensive overview of language acquisition mechanisms; rather, it reviews more narrowly how our conceptual systems, governed by intricately linked networks of neural connections in the brain, make language development possible while creating, at the same time, some L2 processing problems. The case of 'cognates' and 'false friends' in specialised contexts is brought in to illustrate some of the processing problems that the L2 learner has to confront, and how mappings in the visual, phonological and semantic (conceptual) brain structures function in the second-language processing of new vocabulary.

Relevance:

10.00%

Publisher:

Abstract:

Context: This paper addresses one of the major end-user development (EUD) challenges, namely, how to pack today's EUD support tools with composable elements. This would give end users better access to more components which they can use to build solutions tailored to their own needs. The success of later end-user software engineering (EUSE) activities largely depends on how many components each tool has and how adaptable those components are to multiple problem domains. Objective: A system for automatically adapting heterogeneous components to a common development environment would offer a sizeable saving of time and resources within the EUD support tool construction process. This paper presents an automated adaptation system for transforming EUD components to a standard format. Method: This system is based on the use of description logic. Starting from a generic UML2 data model, the description logic is able to check whether an end-user component can be transformed to this modeling language, through subsumption or as an instance of the UML2 model. Besides, it automatically finds a consistent, non-ambiguous and finite set of XSLT mappings to prepare the data automatically, in order to leverage the component as part of a tool that conforms to the target UML2 component model. Results: The proposed system has been successfully applied to components from four prominent EUD tools, which were automatically converted to a standard format. In order to validate the proposed system, rich internet applications (RIAs), used as an operational support system for operators at a large services company, were developed using automatically adapted standard-format components. These RIAs would have been impossible to develop using each EUD tool separately. Conclusion: The positive results of applying our system to automatically adapt components from current tool catalogues are indicative of the system's effectiveness. Use of this system could foster the growth of web EUD component catalogues, leveraging a vast ecosystem of user-centred SaaS to further current EUSE trends.
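
As a small illustration of the final adaptation step, the following Python sketch applies an XSLT mapping of the kind the system finds automatically, using the lxml library; the stylesheet and the component XML are invented placeholders, not the paper's actual mappings:

    # Applying one (invented) XSLT mapping with lxml to adapt a component
    # description to a target format.
    from lxml import etree

    xslt = etree.XML(b"""<xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/widget">
        <component><name><xsl:value-of select="@title"/></name></component>
      </xsl:template>
    </xsl:stylesheet>""")

    transform = etree.XSLT(xslt)
    component = transform(etree.XML(b'<widget title="StockTicker"/>'))
    print(etree.tostring(component, pretty_print=True).decode())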