18 results for Geographic information science and geodesy
at Universidad Politécnica de Madrid
Abstract:
This poster presents research work oriented to the storage, retrieval, representation and analysis of dynamic GI, taking into account the semantic, temporal and spatiotemporal components. We intend to define a set of methods, rules and restrictions for the adequate integration of these components into the primary elements of GI: theme, location, time [1]. We intend to establish and incorporate three new structures (layers) into the core of data storage by using mark-up languages: a semantic-temporal structure, a geosemantic structure, and an incremental spatiotemporal structure. The ultimate objective is the modelling and representation of the dynamic nature of geographic features, establishing mechanisms to store geometries enriched with a temporal structure (regardless of space) and a set of semantic descriptors detailing and clarifying the nature of the represented features and their temporality. Thus, data would be provided with the capability of pinpointing and expressing their own basic and temporal characteristics, enabling them to interact with each other according to their context and to whatever time and meaning relationships might eventually be established.
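As a minimal illustration of the kind of enriched feature the abstract describes (a hypothetical Python data model invented for this sketch; the proposal itself relies on mark-up languages, and none of these names come from the source):

```python
# Hypothetical sketch: a geographic feature carrying a geometry, a temporal
# structure (independent of space) and a set of semantic descriptors.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional, Tuple

@dataclass
class TemporalStructure:
    valid_from: datetime                 # when this geometry starts to hold
    valid_to: Optional[datetime] = None  # None = still valid

@dataclass
class SemanticDescriptor:
    concept: str                         # nature of the represented feature
    source: str                          # provenance of the statement

@dataclass
class EnrichedFeature:
    geometry: List[Tuple[float, float]]  # vertices, stored regardless of time
    temporal: TemporalStructure          # semantic-temporal layer
    semantics: List[SemanticDescriptor] = field(default_factory=list)

feature = EnrichedFeature(
    geometry=[(440000.0, 4470000.0), (440100.0, 4470050.0)],
    temporal=TemporalStructure(valid_from=datetime(2011, 3, 1)),
    semantics=[SemanticDescriptor("river channel", "cadastral survey")],
)
```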
Abstract:
The overall objective of this research project is to enrich geographic data with temporal and semantic components in order to significantly improve the spatio-temporal analysis of geographic phenomena. To achieve this goal, we intend to establish and incorporate three new layers (structures) into the core of the Geographic Information by using mark-up languages, as well as to define a set of methods and tools that enable the system to retrieve and exploit such layers (semantic-temporal, geosemantic, and incremental spatio-temporal). Besides these layers, we also propose a set of models (temporal and spatial) and two semantic engines that make the most of the enriched geographic data. The roots of the project and its definition were previously presented in Siabato & Manso-Callejo 2011. In this new position paper, we extend that work by clearly delineating the methodology and the foundations on which the definition of the main components of this research will be based: the spatial model, the temporal model, the semantic layers, and the semantic engines. Taken together, the former paper and this new work aim to present a comprehensive description of the whole process, from pinpointing the basic problem to describing and assessing the solution. In this article we outline the methods and the background in order to describe how we intend to define the components and integrate them into the GI.
Abstract:
Geographic Information Systems are developed to handle enormous volumes of data and are equipped with numerous functionalities intended to capture, store, edit, organise, process, analyse and represent geographically referenced information. Industrial simulators for driver training, on the other hand, are real-time applications that require a virtual environment, either geospecific, geogeneric or a combination of the two, over which the simulation programs are run. Ultimately, this environment constitutes a geographic location with its specific characteristics of geometry, appearance, functionality, topography, etc. The set of elements that enables the virtual simulation environment to be created, and in which the simulator user can move, is usually called the Visual Database (VDB). The work presented here addresses a topic of major interest in the field of industrial training simulators: the problem of analysing, structuring and describing the virtual environments to be used in large driving simulators. This paper sets out a methodology that uses the capabilities and benefits of Geographic Information Systems for organising, optimising and managing the simulator's Visual Database and, in general, for enhancing the quality and performance of the simulator.
Abstract:
The main objective of this course, conducted by Jóvenes Nucleares (Spanish Young Generation in Nuclear, JJNN), a non-profit organization that depends on the Spanish Nuclear Society (SNE), is to pass on basic knowledge about nuclear science and technology to the general public, mostly students, and to introduce them to its most relevant points. The purposes of this course are to provide general information, to answer the most common questions about nuclear energy and to motivate young students to start a career in the nuclear field. It is therefore aimed mainly at high school and university students, but also at anyone who wants to learn about the key issues of such an important matter in our society. Anybody can attend the course, as no specific scientific education is required. The course is given at least once a year, during the Annual Meeting of the Spanish Nuclear Society, which takes place in a different Spanish city each time. It is also given at any university or institution that requests it from JJNN, limited only by the presenters' availability. The course is divided into the following chapters: Physical nuclear and radiation principles, Nuclear power plants, Nuclear safety, Nuclear fuel, Radioactive waste, Decommissioning of nuclear facilities, Future nuclear power plants, Other uses of nuclear technology, and Nuclear energy, climate change and sustainable development. Each topic is covered in a 15-minute lesson taught by young professionals, experts in the field, who belong either to the Spanish Young Generation in Nuclear or to companies and institutions related to nuclear energy. At the end of the course, a 200-page book with the contents of the course is handed to every member of the audience. This book is also distributed in other editions of the course at high schools and universities in order to promote the scientific dissemination of nuclear technology. As an extra motivation, JJNN gives a course certificate to the attendees. At the end of the last edition of the course, in Santiago de Compostela, the attendees were asked for feedback. Some really interesting lessons were learned that will be very useful for improving future editions of the course. As a general conclusion, it can be said that many of the students who have attended the course have increased their motivation in the nuclear field, and hopefully this will help young talents to choose the nuclear field for their careers.
Abstract:
This paper analyses the relationship between productive efficiency and online social networks (OSN) in Spanish telecommunications firms. A data envelopment analysis (DEA) is used and several indicators of business 'social media' activities are incorporated. A super-efficiency analysis and bootstrapping techniques are performed to increase the model's robustness and accuracy. Then, a logistic regression model is applied to characterise factors and drivers of good performance in OSN. Results reveal the company's ability to absorb and utilise OSNs as a key factor in improving productive efficiency. This paper presents a model for assessing the strategic performance of the presence and activity in OSN.
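For readers unfamiliar with DEA, the sketch below shows a generic input-oriented CCR efficiency model solved as a linear programme (illustrative only and with invented firm data; the paper's actual specification, super-efficiency analysis and bootstrapping are not reproduced here):

```python
# Generic input-oriented CCR DEA sketch (not the paper's exact model).
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs). Returns efficiency scores."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):                  # evaluate each decision-making unit
        c = np.r_[1.0, np.zeros(n)]     # variables: [theta, lambda_1..lambda_n]
        A_ub, b_ub = [], []
        for i in range(m):              # sum_j lambda_j * x_ij <= theta * x_io
            A_ub.append(np.r_[-X[o, i], X[:, i]])
            b_ub.append(0.0)
        for r in range(s):              # sum_j lambda_j * y_rj >= y_ro
            A_ub.append(np.r_[0.0, -Y[:, r]])
            b_ub.append(-Y[o, r])
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(None, None)] + [(0.0, None)] * n, method="highs")
        scores.append(res.x[0])
    return np.array(scores)

# Invented data: inputs (employees, capital), output (revenue) for three firms
X = np.array([[10.0, 5.0], [8.0, 4.0], [12.0, 7.0]])
Y = np.array([[100.0], [90.0], [105.0]])
print(dea_ccr_input(X, Y))              # 1.0 marks firms on the efficient frontier
```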
Abstract:
Geographic information technologies (GIT) are essential to many fields of research, such as the preservation and dissemination of cultural heritage buildings, a category which includes traditional underground wine cellars. This article presents a methodology based on research carried out on this type of rural heritage building. The data were acquired using the following sensors: EDM, total station, close-range photogrammetry and laser scanning, and were subsequently processed with specific software, verified for each case, in order to obtain a satisfactory graphic representation of these underground wine cellars. Two key aspects of this work are the accuracy of the data processing and the visualization of these traditional constructions. The methodology includes an application for geovisualizing these traditional constructions on mobile devices in order to contribute to raising awareness of this unique heritage.
Abstract:
This article has been extracted from the results of a thesis entitled “Potential bioelectricity production of the Madrid Community Agricultural Regions based on rye and triticale biomass.” The aim was, first, to quantify the potential of rye (Secale cereale L.) and triticale (Triticosecale aestivum L.) biomass in each of the Madrid Community agricultural regions, and second, to locate the most suitable areas for the installation of power plants using biomass. At least 17,339.9 t d.m. of rye and triticale would be required to satisfy the biomass needs of a 2.2 MW power plant (considering an efficiency of 21.5%, 8,000 expected operating hours/year and a biomass LCP of 4,060 kcal/kg for both crops), and 2,577 ha would be used (which represents 2.79% of the Madrid Community fallow dry land surface). Biomass yields that could be achieved in the Madrid Community using 50% of the fallow dry land surface (46,150 ha, representing 5.75% of the Community area), based on rye and triticale crops, are estimated at 84,855, 74,906, 70,109, 50,791, 13,481, and 943 t annually for the Campiña, Vegas, Sur Occidental, Área Metropolitana, Lozoya-Somosierra, and Guadarrama regions. These yields represent a bioelectricity potential of 10.77, 9.5, 8.9, 6.44, 1.71, and 0.12 MW, respectively.
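As a quick consistency check of the stated biomass requirement under those assumptions (2.2 MW, 8,000 h/year, 21.5% efficiency, LCP 4,060 kcal/kg, and taking 1 kWh ≈ 860 kcal):

\[
E_{\mathrm{el}} = 2.2\,\mathrm{MW}\times 8{,}000\,\mathrm{h} = 1.76\times 10^{7}\,\mathrm{kWh},
\qquad
m \approx \frac{(E_{\mathrm{el}}/0.215)\times 860\,\mathrm{kcal/kWh}}{4{,}060\,\mathrm{kcal/kg}}
\approx 1.73\times 10^{7}\,\mathrm{kg}\approx 17{,}340\ \mathrm{t\ d.m.}
\]

which agrees with the 17,339.9 t d.m. reported above.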
Abstract:
The paper proposes a model for estimation of perceived video quality in IPTV, taking as input both video coding and network Quality of Service parameters. It includes some fitting parameters that depend mainly on the information contents of the video sequences. A method to derive them from the Spatial and Temporal Information contents of the sequences is proposed. The model may be used for near real-time monitoring of IPTV video quality.
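Since the fitting parameters are derived from the Spatial and Temporal Information of the sequences, the sketch below shows how those two indicators are commonly computed (following the usual ITU-T P.910-style definitions on the luminance plane; this is an illustrative sketch, not the paper's full fitting procedure):

```python
# Sketch of the Spatial Information (SI) and Temporal Information (TI)
# indicators; the mapping from SI/TI to the model's fitting parameters
# described in the paper is not reproduced here.
import numpy as np
from scipy import ndimage

def si_ti(frames):
    """frames: sequence of 2-D luminance arrays. Returns (SI, TI)."""
    si, ti = [], []
    prev = None
    for frame in frames:
        f = frame.astype(np.float64)
        gx, gy = ndimage.sobel(f, axis=0), ndimage.sobel(f, axis=1)
        si.append(np.hypot(gx, gy).std())      # spread of the spatial gradient
        if prev is not None:
            ti.append((f - prev).std())        # spread of the frame difference
        prev = f
    return max(si), (max(ti) if ti else 0.0)

# Example with synthetic frames standing in for a decoded sequence
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, size=(72, 128)) for _ in range(10)]
print(si_ti(frames))
```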
Abstract:
The present Master/Doctorate in Nuclear Science and Technology programme, implemented in the Department of Nuclear Engineering of the Universidad Politécnica de Madrid (NED-UPM), holds the excellence qualification awarded by the Spanish Ministry of Education. One of the main aims of this programme is training in the development of methodologies for simulation, design and advanced analysis, including experimental tools, which are necessary both in research and in professional work in the nuclear field.
Abstract:
Student absenteeism, failure and drop-out in the first semesters of the degree are analysed on the basis of the students' secondary education background.
Abstract:
One of the main problems in urban areas is the steady growth in car ownership and traffic levels. Therefore, the challenge of sustainability is focused on a shift of the demand for mobility from cars to collective means of transport. To this end, buses are a key element of public transport systems. In this respect, Real Time Passenger Information (RTPI) systems help citizens change their travel behaviour towards more sustainable transport modes. This paper provides an assessment methodology which evaluates how RTPI systems improve the quality of bus services in two European cities, Madrid and Bremerhaven. In the case of Madrid, bus punctuality has increased by 3%. Regarding travellers' perception, the quality of service rose by 6% in Madrid and by 13% in Bremerhaven. Moreover, users' perception of the Public Transport (PT) image increased by 14%.
Abstract:
This paper introduces a new approach for predicting people's displacement by means of movement surfaces. These surfaces allow the simulation of a person's movement through the use of semantic movement concepts such as those making up the environment, the people who are moving, events that describe a human activity, and the time of occurrence. In order to represent this movement, we have transformed the trajectory of a person or group of persons into a raindrop path over a surface. As a raindrop flows over a surface looking for the maximum slopes, people flow over the landscape looking for the maximum utility. The movement surfaces are the response to a chained succession of events describing the way a person moves from one destination to another, passing through the trajectory most affine to his or her interests. The three construction phases of this modelling approach (exploration, reasoning and prediction) are presented in this paper. The model was implemented in Protégé and a Java application was developed to generate the movement surface based on a recreational scenario. The results showed the opportunity to apply our approach to optimise the accessibility of recreational areas according to the preferences of the users of that location.
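A toy illustration of the raindrop analogy, written here as a generic steepest-ascent walk over a gridded utility surface (the authors' actual implementation uses Protégé and a Java application; the surface, names and values below are invented for this sketch):

```python
# Toy "raindrop" walk: repeatedly step to the neighbouring cell with the
# highest utility, as a raindrop follows the steepest slope. Generic
# hill-climbing, not the authors' movement-surface implementation.
import numpy as np

def movement_path(utility, start, max_steps=500):
    """utility: 2-D array; start: (row, col). Returns the visited cells."""
    pos = start
    path = [pos]
    rows, cols = utility.shape
    for _ in range(max_steps):
        r, c = pos
        # 8-connected neighbourhood clipped to the grid
        neighbours = [(r + dr, c + dc)
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                      if (dr, dc) != (0, 0)
                      and 0 <= r + dr < rows and 0 <= c + dc < cols]
        best = max(neighbours, key=lambda p: utility[p])
        if utility[best] <= utility[pos]:   # local maximum of utility reached
            break
        pos = best
        path.append(pos)
    return path

# Hypothetical utility surface peaking at a point of interest
y, x = np.mgrid[0:50, 0:50]
utility = -((x - 35) ** 2 + (y - 10) ** 2)   # higher closer to (row 10, col 35)
print(movement_path(utility, start=(45, 5))[:5], "...")
```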
Abstract:
Cognitive linguistics is considered one of the most appropriate approaches to the study of scientific and technical language formation and development, in which metaphor is accepted to play an essential role. This paper, based on the Cognitive Theory of Metaphor (CTM), takes as its starting point the terminological metaphors established in the research project METACITEC (Note 1), which was developed with the purpose of unfolding constitutive metaphors and their function in the language of science and technology. After the analysis of metaphorical terms, and using a mixed corpus from the fields of Agriculture, Geology, Mining, Metallurgy and other related technical fields, this study proposes a hierarchy of the selected metaphors underlying the scientific conceptual system, based on the semantic distance found in the projection from the source domain to the target domain. We argue that this semantic distance can be considered an important parameter for establishing the metaphoricity of metaphorical terms in science and technology. The findings help to expand the CTM stance that metaphor is a matter of cognition by reviewing the abstract-concrete conceptual relationship between the target and source domains, and to determine the role of human creativity and imagination in the configuration of the language of science and technology.
Abstract:
La nanotecnología es un área de investigación de reciente creación que trata con la manipulación y el control de la materia con dimensiones comprendidas entre 1 y 100 nanómetros. A escala nanométrica, los materiales exhiben fenómenos físicos, químicos y biológicos singulares, muy distintos a los que manifiestan a escala convencional. En medicina, los compuestos miniaturizados a nanoescala y los materiales nanoestructurados ofrecen una mayor eficacia con respecto a las formulaciones químicas tradicionales, así como una mejora en la focalización del medicamento hacia la diana terapéutica, revelando así nuevas propiedades diagnósticas y terapéuticas. A su vez, la complejidad de la información a nivel nano es mucho mayor que en los niveles biológicos convencionales (desde el nivel de población hasta el nivel de célula) y, por tanto, cualquier flujo de trabajo en nanomedicina requiere, de forma inherente, estrategias de gestión de información avanzadas. Desafortunadamente, la informática biomédica todavía no ha proporcionado el marco de trabajo que permita lidiar con estos retos de la información a nivel nano, ni ha adaptado sus métodos y herramientas a este nuevo campo de investigación. En este contexto, la nueva área de la nanoinformática pretende detectar y establecer los vínculos existentes entre la medicina, la nanotecnología y la informática, fomentando así la aplicación de métodos computacionales para resolver las cuestiones y problemas que surgen con la información en la amplia intersección entre la biomedicina y la nanotecnología. Las observaciones expuestas previamente determinan el contexto de esta tesis doctoral, la cual se centra en analizar el dominio de la nanomedicina en profundidad, así como en el desarrollo de estrategias y herramientas para establecer correspondencias entre las distintas disciplinas, fuentes de datos, recursos computacionales y técnicas orientadas a la extracción de información y la minería de textos, con el objetivo final de hacer uso de los datos nanomédicos disponibles. El autor analiza, a través de casos reales, alguna de las tareas de investigación en nanomedicina que requieren o que pueden beneficiarse del uso de métodos y herramientas nanoinformáticas, ilustrando de esta forma los inconvenientes y limitaciones actuales de los enfoques de informática biomédica a la hora de tratar con datos pertenecientes al dominio nanomédico. Se discuten tres escenarios diferentes como ejemplos de actividades que los investigadores realizan mientras llevan a cabo su investigación, comparando los contextos biomédico y nanomédico: i) búsqueda en la Web de fuentes de datos y recursos computacionales que den soporte a su investigación; ii) búsqueda en la literatura científica de resultados experimentales y publicaciones relacionadas con su investigación; iii) búsqueda en registros de ensayos clínicos de resultados clínicos relacionados con su investigación. El desarrollo de estas actividades requiere el uso de herramientas y servicios informáticos, como exploradores Web, bases de datos de referencias bibliográficas indexando la literatura biomédica y registros online de ensayos clínicos, respectivamente. 
Para cada escenario, este documento proporciona un análisis detallado de los posibles obstáculos que pueden dificultar el desarrollo y el resultado de las diferentes tareas de investigación en cada uno de los dos campos citados (biomedicina y nanomedicina), poniendo especial énfasis en los retos existentes en la investigación nanomédica, campo en el que se han detectado las mayores dificultades. El autor ilustra cómo la aplicación de metodologías provenientes de la informática biomédica a estos escenarios resulta efectiva en el dominio biomédico, mientras que dichas metodologías presentan serias limitaciones cuando son aplicadas al contexto nanomédico. Para abordar dichas limitaciones, el autor propone un enfoque nanoinformático, original, diseñado específicamente para tratar con las características especiales que la información presenta a nivel nano. El enfoque consiste en un análisis en profundidad de la literatura científica y de los registros de ensayos clínicos disponibles para extraer información relevante sobre experimentos y resultados en nanomedicina —patrones textuales, vocabulario en común, descriptores de experimentos, parámetros de caracterización, etc.—, seguido del desarrollo de mecanismos para estructurar y analizar dicha información automáticamente. Este análisis concluye con la generación de un modelo de datos de referencia (gold standard) —un conjunto de datos de entrenamiento y de test anotados manualmente—, el cual ha sido aplicado a la clasificación de registros de ensayos clínicos, permitiendo distinguir automáticamente los estudios centrados en nanodrogas y nanodispositivos de aquellos enfocados a testear productos farmacéuticos tradicionales. El presente trabajo pretende proporcionar los métodos necesarios para organizar, depurar, filtrar y validar parte de los datos nanomédicos existentes en la actualidad a una escala adecuada para la toma de decisiones. Análisis similares para otras tareas de investigación en nanomedicina ayudarían a detectar qué recursos nanoinformáticos se requieren para cumplir los objetivos actuales en el área, así como a generar conjuntos de datos de referencia, estructurados y densos en información, a partir de literatura y otras fuentes no estructuradas para poder aplicar nuevos algoritmos e inferir nueva información de valor para la investigación en nanomedicina.
ABSTRACT
Nanotechnology is a research area of recent development that deals with the manipulation and control of matter with dimensions ranging from 1 to 100 nanometers. At the nanoscale, materials exhibit singular physical, chemical and biological phenomena, very different from those manifested at the conventional scale. In medicine, nanosized compounds and nanostructured materials offer improved drug targeting and efficacy with respect to traditional formulations, and reveal novel diagnostic and therapeutic properties. Nevertheless, the complexity of information at the nano level is much higher than the complexity at the conventional biological levels (from populations to the cell). Thus, any nanomedical research workflow inherently demands advanced information management. Unfortunately, Biomedical Informatics (BMI) has not yet provided the necessary framework to deal with such information challenges, nor adapted its methods and tools to the new research field.
In this context, the novel area of nanoinformatics aims to build new bridges between medicine, nanotechnology and informatics, allowing the application of computational methods to solve informational issues at the wide intersection between biomedicine and nanotechnology. The above observations determine the context of this doctoral dissertation, which is focused on analyzing the nanomedical domain in-depth, and developing nanoinformatics strategies and tools to map across disciplines, data sources, computational resources, and information extraction and text mining techniques, for leveraging available nanomedical data. The author analyzes, through real-life case studies, some research tasks in nanomedicine that would require or could benefit from the use of nanoinformatics methods and tools, illustrating present drawbacks and limitations of BMI approaches to deal with data belonging to the nanomedical domain. Three different scenarios, comparing both the biomedical and nanomedical contexts, are discussed as examples of activities that researchers would perform while conducting their research: i) searching over the Web for data sources and computational resources supporting their research; ii) searching the literature for experimental results and publications related to their research, and iii) searching clinical trial registries for clinical results related to their research. The development of these activities will depend on the use of informatics tools and services, such as web browsers, databases of citations and abstracts indexing the biomedical literature, and web-based clinical trial registries, respectively. For each scenario, this document provides a detailed analysis of the potential information barriers that could hamper the successful development of the different research tasks in both fields (biomedicine and nanomedicine), emphasizing the existing challenges for nanomedical research —where the major barriers have been found. The author illustrates how the application of BMI methodologies to these scenarios can be proven successful in the biomedical domain, whilst these methodologies present severe limitations when applied to the nanomedical context. To address such limitations, the author proposes an original nanoinformatics approach specifically designed to deal with the special characteristics of information at the nano level. This approach consists of an in-depth analysis of the scientific literature and available clinical trial registries to extract relevant information about experiments and results in nanomedicine —textual patterns, common vocabulary, experiment descriptors, characterization parameters, etc.—, followed by the development of mechanisms to automatically structure and analyze this information. This analysis resulted in the generation of a gold standard —a manually annotated training or reference set—, which was applied to the automatic classification of clinical trial summaries, distinguishing studies focused on nanodrugs and nanodevices from those aimed at testing traditional pharmaceuticals. The present work aims to provide the necessary methods for organizing, curating and validating existing nanomedical data on a scale suitable for decision-making. 
Similar analysis for different nanomedical research tasks would help to detect which nanoinformatics resources are required to meet current goals in the field, as well as to generate densely populated and machine-interpretable reference datasets from the literature and other unstructured sources for further testing novel algorithms and inferring new valuable information for nanomedicine.
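As an illustration of the kind of classifier such a gold standard could feed, the sketch below trains a generic bag-of-words model to separate nano-related trial summaries from conventional ones (the texts, labels and pipeline are invented placeholders, not the dissertation's actual corpus or method):

```python
# Generic text-classification sketch: TF-IDF features + logistic regression
# over hypothetical clinical trial summaries (1 = nanodrug/nanodevice,
# 0 = traditional pharmaceutical). Data and pipeline are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "liposomal nanoparticle formulation of doxorubicin for solid tumours",
    "gold nanoparticle contrast agent for imaging of atherosclerotic plaque",
    "oral metformin tablets in patients with type 2 diabetes",
    "randomised trial of low-dose aspirin for cardiovascular prevention",
]
train_labels = [1, 1, 0, 0]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           LogisticRegression(max_iter=1000))
classifier.fit(train_texts, train_labels)

# Classify a new (also invented) summary
print(classifier.predict(["polymeric nanoparticle carrier for targeted delivery of paclitaxel"]))
```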