11 results for information studies

at Universidad Politécnica de Madrid


Relevance:

30.00%

Publisher:

Abstract:

Nowadays, many researchers focus their efforts on studies and applications in the Learning area. However, there is no reference system that makes it possible to know the positioning of, and the existing links between, Learning and Information Technologies. This paper proposes a cartography that explains the relationships between the elements composing Learning Theories and Information Technologies, taking into account the learner's own features and the properties of Information Technologies. This intersection allows us to identify which Information Technology properties promote Learning Futures.

Relevance:

30.00%

Publisher:

Abstract:

Globalization has intensified competition, as evidenced by the growing number of international classification systems (rankings) and the attention paid to them. Doctoral education has an international character in itself and should promote opportunities for graduate students to participate in these international studies. Quality and competitiveness are two of the most important issues for universities. To encourage graduates to continue their education beyond the graduate level, it would be necessary to improve the published information about doctoral programs: universities should increase their visibility and provide high-quality, easily accessible and comparable information covering all the relevant aspects of these programs. The authors analysed the website contents of doctoral programs and observed a general lack of quality and very poor information about their contents, so that none of them could serve as a model for creating new websites. Recommendations on the web format and contents were drawn up by a discussion group. They recommended an attractive design; a page with easy access to contents, easy to find on the net, and with the information available in more than one language. It should include complete information about the program and the academic staff. It should also include the program's results, easily accessible and including quantitative data such as the number of students who completed their studies, publications, research projects, the average duration of the studies, etc. This will facilitate the choice of a program.

Relevance:

30.00%

Publisher:

Abstract:

Commercial computer-aided design systems support the geometric definition of the product, but they lack utilities to support the initial design stages. Typical tasks such as customer need capture, functional requirement formalization, or design parameter definition are conducted in applications that, for instance, support "quality function deployment" and "failure modes and effects analysis" techniques. Such applications are not interoperable with the computer-aided design systems, leading to discontinuous design information flows. This study addresses this issue and proposes a method to enhance the integration of design information generated in the early design stages into a commercial computer-aided design system. To demonstrate the feasibility of the approach adopted, a prototype application was developed and two case studies were executed.

Relevance:

30.00%

Publisher:

Abstract:

In multi-attribute utility theory, it is often not easy to elicit precise values for the scaling weights representing the relative importance of criteria. A very widespread approach is to gather incomplete information. A recent approach for dealing with such situations is to use information about each alternative's intensity of dominance, known as dominance measuring methods. Different dominance measuring methods have been proposed, and simulation studies have been carried out to compare these methods with each other and with other approaches, but only when ordinal information about weights is available. In this paper, we use Monte Carlo simulation techniques to analyse the performance of such methods and to adapt them to deal with weight intervals, weights fitting independent normal probability distributions, or weights represented by fuzzy numbers. Moreover, dominance measuring method performance is also compared with a widely used methodology for dealing with incomplete information on weights, stochastic multicriteria acceptability analysis (SMAA). SMAA is based on exploring the weight space to describe the evaluations that would make each alternative the preferred one.
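As a rough illustration of the SMAA idea summarized above, the sketch below estimates rank-1 acceptability indices by Monte Carlo sampling of weights from the uniform simplex, assuming an additive value model. The function name and the alternative scores are purely illustrative, not the paper's implementation.

```python
import random

def smaa_acceptability(values, n_samples=20000, seed=0):
    """Estimate SMAA rank-1 acceptability indices: the share of
    weight vectors under which each alternative scores highest.
    Weights are drawn uniformly from the simplex."""
    rng = random.Random(seed)
    n_crit = len(values[0])
    wins = [0] * len(values)
    for _ in range(n_samples):
        # Normalized exponentials give a uniform sample of the simplex
        raw = [rng.expovariate(1.0) for _ in range(n_crit)]
        total = sum(raw)
        weights = [r / total for r in raw]
        scores = [sum(w * v for w, v in zip(weights, alt)) for alt in values]
        wins[scores.index(max(scores))] += 1
    return [c / n_samples for c in wins]

# Three illustrative alternatives scored on two criteria (0-1 scale)
values = [[0.9, 0.2], [0.5, 0.5], [0.1, 0.9]]
print(smaa_acceptability(values))
```

The dominance measuring methods studied in the paper restrict the sampled region further (to weight intervals, normal distributions, or fuzzy numbers) rather than using the whole simplex.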

Relevance:

30.00%

Publisher:

Abstract:

In this paper we present a tool to perform guided HAZOP studies using a functional modeling framework: D-higraphs. This formalism gathers in a single model both structural (ontological) and functional information about the process under consideration. In this paper it is applied to an industrial case, showing that the proposed methodology fits its purposes and fills some of the gaps and addresses some of the drawbacks of previously reported HAZOP assistant tools.

Relevance:

30.00%

Publisher:

Abstract:

Accurate detection of liver lesions is of great importance in hepatic surgery planning. Recent studies have shown that the detection rate of liver lesions is significantly higher in gadoxetic acid-enhanced magnetic resonance imaging (Gd–EOB–DTPA-enhanced MRI) than in contrast-enhanced portal-phase computed tomography (CT); however, the latter remains essential because of its high specificity, good performance in estimating liver volumes and better vessel visibility. To characterize liver lesions using both the above image modalities, we propose a multimodal nonrigid registration framework using organ-focused mutual information (OF-MI). This proposal tries to improve mutual information (MI) based registration by adding spatial information, benefiting from the availability of expert liver segmentation in clinical protocols. The incorporation of an additional information channel containing liver segmentation information was studied. A dataset of real clinical images and simulated images was used in the validation process. A Gd–EOB–DTPA-enhanced MRI simulation framework is presented. To evaluate results, warping index errors were calculated for the simulated data, and landmark-based and surface-based errors were calculated for the real data. An improvement of the registration accuracy for OF-MI as compared with MI was found for both simulated and real datasets. Statistical significance of the difference was tested and confirmed in the simulated dataset (p < 0.01).
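For readers unfamiliar with the registration criterion mentioned above, the following is a minimal sketch of plain mutual information computed from a joint intensity histogram. The paper's OF-MI additionally incorporates a liver-segmentation information channel; the function name, binning choice, and toy images here are illustrative only.

```python
import math
from collections import Counter

def mutual_information(img_a, img_b, bins=8):
    """Mutual information (in nats) between two equally sized grayscale
    images, given as flat lists of intensities in [0, 1), computed
    from a joint intensity histogram."""
    n = len(img_a)
    to_bin = lambda v: min(int(v * bins), bins - 1)
    joint = Counter((to_bin(a), to_bin(b)) for a, b in zip(img_a, img_b))
    marg_a = Counter(i for i, _ in joint.elements())
    marg_b = Counter(j for _, j in joint.elements())
    mi = 0.0
    for (i, j), c in joint.items():
        # p(i,j) * log( p(i,j) / (p(i) * p(j)) )
        mi += (c / n) * math.log(c * n / (marg_a[i] * marg_b[j]))
    return mi

img = [0.1] * 50 + [0.9] * 50
print(mutual_information(img, img))  # ≈ 0.693 (log 2): identical images share all information
```

A registration algorithm maximizes this quantity over the transform parameters; an aligned image pair yields a sharper joint histogram and hence a higher MI.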

Relevance:

30.00%

Publisher:

Abstract:

Nanotechnology represents an area of particular promise and significant opportunity across multiple scientific disciplines. Ongoing nanotechnology research ranges from the characterization of nanoparticles and nanomaterials to the analysis and processing of experimental data seeking correlations between nanoparticles and their functionalities and side effects. Due to their special properties, nanoparticles are suitable for cellular-level diagnostics and therapy, offering numerous applications in medicine, e.g. development of biomedical devices, tissue repair, drug delivery systems and biosensors. In nanomedicine, recent studies are producing large amounts of structural and property data, highlighting the role for computational approaches in information management. While in vitro and in vivo assays are expensive, the cost of computing is falling. Furthermore, improvements in the accuracy of computational methods (e.g. data mining, knowledge discovery, modeling and simulation) have enabled effective tools to automate the extraction, management and storage of these vast data volumes. Since this information is widely distributed, one major issue is how to locate and access data where it resides (which also poses data-sharing limitations). The novel discipline of nanoinformatics addresses the information challenges related to nanotechnology research. In this paper, we summarize the needs and challenges in the field and present an overview of extant initiatives and efforts.

Relevance:

30.00%

Publisher:

Abstract:

Nanotechnology is a research area of recent development that deals with the manipulation and control of matter with dimensions ranging from 1 to 100 nanometers. At the nanoscale, materials exhibit singular physical, chemical and biological phenomena, very different from those manifested at the conventional scale. In medicine, nanosized compounds and nanostructured materials offer improved drug targeting and efficacy with respect to traditional formulations, and reveal novel diagnostic and therapeutic properties. Nevertheless, the complexity of information at the nano level is much higher than the complexity at the conventional biological levels (from populations to the cell). Thus, any nanomedical research workflow inherently demands advanced information management. Unfortunately, Biomedical Informatics (BMI) has not yet provided the necessary framework to deal with such information challenges, nor adapted its methods and tools to the new research field. In this context, the novel area of nanoinformatics aims to build new bridges between medicine, nanotechnology and informatics, allowing the application of computational methods to solve informational issues at the wide intersection between biomedicine and nanotechnology.
The above observations determine the context of this doctoral dissertation, which is focused on analyzing the nanomedical domain in-depth, and developing nanoinformatics strategies and tools to map across disciplines, data sources, computational resources, and information extraction and text mining techniques, for leveraging available nanomedical data. The author analyzes, through real-life case studies, some research tasks in nanomedicine that would require or could benefit from the use of nanoinformatics methods and tools, illustrating present drawbacks and limitations of BMI approaches to deal with data belonging to the nanomedical domain. Three different scenarios, comparing both the biomedical and nanomedical contexts, are discussed as examples of activities that researchers would perform while conducting their research: i) searching over the Web for data sources and computational resources supporting their research; ii) searching the literature for experimental results and publications related to their research, and iii) searching clinical trial registries for clinical results related to their research. The development of these activities will depend on the use of informatics tools and services, such as web browsers, databases of citations and abstracts indexing the biomedical literature, and web-based clinical trial registries, respectively.

For each scenario, this document provides a detailed analysis of the potential information barriers that could hamper the successful development of the different research tasks in both fields (biomedicine and nanomedicine), emphasizing the existing challenges for nanomedical research, where the major barriers have been found. The author illustrates how the application of BMI methodologies to these scenarios proves successful in the biomedical domain, whilst these methodologies present severe limitations when applied to the nanomedical context.
To address such limitations, the author proposes an original nanoinformatics approach specifically designed to deal with the special characteristics of information at the nano level. This approach consists of an in-depth analysis of the scientific literature and available clinical trial registries to extract relevant information about experiments and results in nanomedicine (textual patterns, common vocabulary, experiment descriptors, characterization parameters, etc.), followed by the development of mechanisms to automatically structure and analyze this information. This analysis resulted in the generation of a gold standard (a manually annotated training and reference set), which was applied to the automatic classification of clinical trial summaries, distinguishing studies focused on nanodrugs and nanodevices from those aimed at testing traditional pharmaceuticals. The present work aims to provide the necessary methods for organizing, curating and validating existing nanomedical data on a scale suitable for decision-making.

Similar analyses for other nanomedical research tasks would help to detect which nanoinformatics resources are required to meet current goals in the field, as well as to generate densely populated and machine-interpretable reference datasets from the literature and other unstructured sources for testing novel algorithms and inferring new valuable information for nanomedicine.
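A toy sketch of the classification step described in the abstract is shown below. The thesis builds its vocabulary from a corpus analysis and a manually annotated gold standard; the term list, function name, and example summaries here are entirely hypothetical, meant only to convey the nano-versus-traditional distinction.

```python
# Hypothetical nano-related vocabulary; the thesis derives its terms
# from corpus analysis, so this list is purely illustrative.
NANO_TERMS = {"nanoparticle", "nanotube", "liposome", "dendrimer",
              "quantum dot", "nanocrystal", "micelle"}

def classify_trial(summary):
    """Label a clinical trial summary as 'nano' when it mentions any
    nano-related term, and 'traditional' otherwise."""
    text = summary.lower()
    return "nano" if any(term in text for term in NANO_TERMS) else "traditional"

print(classify_trial("Phase II study of liposome-encapsulated doxorubicin"))  # nano
print(classify_trial("Oral metformin versus placebo in type 2 diabetes"))     # traditional
```

A real pipeline would replace the keyword lookup with a classifier trained and evaluated on the annotated gold-standard set.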

Relevance:

30.00%

Publisher:

Abstract:

Sight distance plays an important role in road traffic safety. Two types of Digital Elevation Models (DEMs) are utilized for the estimation of available sight distance in roads: Digital Terrain Models (DTMs) and Digital Surface Models (DSMs). DTMs, which represent the bare ground surface, are commonly used to determine available sight distance at the design stage. Additionally, the use of DSMs provides further information about elements by the roadsides such as trees, buildings, walls or even traffic signals which may reduce available sight distance. This document analyses the influence of three classes of DEMs in available sight distance estimation. For this purpose, diverse roads within the Region of Madrid (Spain) have been studied using software based on geographic information systems. The study evidences the influence of using each DEM in the outcome as well as the pros and cons of using each model.
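The core visibility test underlying such GIS-based sight distance estimations can be sketched as follows, assuming an elevation profile sampled along the line between observer and target. The function name, sample spacing, and the default 1.1 m observer eye height are illustrative assumptions, not the paper's actual software.

```python
def is_visible(profile, observer_h=1.1, target_h=0.0):
    """Line-of-sight test between the first and last point of an
    equally spaced terrain elevation profile (heights in metres).
    observer_h and target_h are heights above the ground surface."""
    n = len(profile)
    eye = profile[0] + observer_h
    target = profile[-1] + target_h
    for i in range(1, n - 1):
        # Height of the straight sight line above sample i
        line_h = eye + (target - eye) * i / (n - 1)
        if profile[i] > line_h:
            return False  # terrain (or a DSM obstacle) blocks the view
    return True

flat = [100.0] * 10
hill = [100, 100, 100, 106, 100, 100, 100, 100, 100, 100]
print(is_visible(flat), is_visible(hill))  # True False
```

Running this test with profiles extracted from a DTM versus a DSM shows exactly the effect the study analyses: roadside objects present only in the DSM can turn a visible target into an occluded one.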

Relevance:

30.00%

Publisher:

Abstract:

The analysis of how tourists select their holiday destinations, along with the factors determining their choices, is very important for promoting tourism. In particular, transportation is supposed to have a great influence on tourists' decisions. The aim of this paper is to investigate the role of High Speed Rail (HSR) systems in destination choice. Two key tourist destinations in Europe, namely Paris and Madrid, have been chosen to identify the factors influencing this choice. On the basis of two surveys designed to obtain information from tourists, it has been found that the presence of architectural sites, the promotion quality of the destination itself, and cultural and social events have an impact on the destination choice. However, the availability of HSR systems affects the choice of Paris and Madrid as tourist destinations in different ways. For Paris, the TGV is considered a real transport mode alternative among tourists. Madrid, on the other hand, is chosen by tourists irrespective of the presence of an efficient HSR network. Data collected from the two surveys have been used for a further quantitative analysis: regression models have been specified and their parameters calibrated to identify the factors influencing holidaymakers to revisit Paris and Madrid and to visit other tourist places accessible by HSR from these capitals.
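The parameter calibration mentioned above can be sketched, at its simplest, as an ordinary least-squares fit of a destination-choice indicator on a survey variable. The helper function and the data below are invented for illustration; the papers' actual model specification and survey data are not given in the abstract.

```python
def ols_fit(x, y):
    """Ordinary least-squares slope and intercept for one predictor."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Illustrative data only (not the surveys' data): rated importance of
# HSR access (1-5) versus stated intention to revisit (0-1)
x = [1, 2, 3, 4, 5]
y = [0.2, 0.3, 0.5, 0.6, 0.8]
slope, intercept = ols_fit(x, y)
print(round(slope, 3), round(intercept, 3))  # 0.15 0.03
```

A positive calibrated slope on the HSR variable would correspond to the Paris finding (HSR matters for the choice), while a slope near zero would match the Madrid result.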

Relevance:

30.00%

Publisher:

Abstract:

We present temporal information obtained by mass spectrometry techniques about the evolution of plasmas generated by laser filamentation in air. The experimental setup used in this work allowed us to study not only the dynamics of the filament core but also those of the energy reservoir that surrounds it. Furthermore, valuable insights into the chemistry of such systems, such as the photofragmentation and/or formation of molecules, were obtained. The interpretation of the experimental results is supported by PIC simulations.