14 results for 289900 Other Information, Computing and Communication Sciences

at Universidad Politécnica de Madrid


Relevance:

100.00%

Publisher:

Abstract:

One of the main problems in urban areas is the steady growth in car ownership and traffic levels. The challenge of sustainability is therefore focused on shifting the demand for mobility from cars to collective means of transport, and buses are a key element of public transport systems to this end. In this respect, Real-Time Passenger Information (RTPI) systems help citizens change their travel behaviour towards more sustainable transport modes. This paper presents an assessment methodology that evaluates how RTPI systems improve the quality of bus services in two European cities, Madrid and Bremerhaven. In the case of Madrid, bus punctuality increased by 3%. Regarding travellers' perception, Madrid raised its quality of service by 6%, while Bremerhaven's increased by 13%. Moreover, users' perception of the Public Transport (PT) image increased by 14%.

Relevance:

100.00%

Publisher:

Abstract:

In the smart building control industry, creating a platform that integrates different communication protocols and eases the interaction between users and devices is becoming increasingly important. BATMP is a platform designed to achieve this goal. In this paper, the authors describe a novel mechanism for information exchange which introduces a new concept, the Parameter, and uses it as the common object among all BATMP components: the Gateway Manager, Technology Manager, Application Manager, Model Manager and Data Warehouse. A Parameter is an object that represents a physical magnitude and contains information about its presentation, available actions, access type, etc. Each component of BATMP holds a copy of the parameters. In the Technology Manager, three drivers for different communication protocols, KNX, CoAP and Modbus, are implemented to convert devices into parameters. In the Gateway Manager, users can control the parameters directly or by defining a scenario. In the Application Manager, applications can subscribe to parameters and decide their values through negotiation. Finally, a Negotiator is implemented in the Model Manager to notify other components about changes taking place in any component. By applying this mechanism, BATMP ensures simultaneous and concurrent communication among users, applications and devices.
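
The mechanism described above centres on the Parameter as the common object shared by all BATMP components. The following sketch is only a hypothetical illustration, under my own assumptions, of how such a shared object and its subscription mechanism might look; the class and method names are not taken from BATMP.

```python
# Hypothetical sketch of a shared Parameter object with publish/subscribe-style
# notification, loosely following the description of BATMP; names are illustrative only.

class Parameter:
    """Represents a physical magnitude plus presentation and access metadata."""

    def __init__(self, name, unit, access="read-write", actions=None):
        self.name = name              # e.g. "living_room_temperature"
        self.unit = unit              # e.g. "celsius"
        self.access = access          # e.g. "read-only" / "read-write"
        self.actions = actions or []  # actions available on the underlying device
        self.value = None
        self._subscribers = []        # components interested in value changes

    def subscribe(self, callback):
        """An application (e.g. via the Application Manager) registers a callback."""
        self._subscribers.append(callback)

    def update(self, new_value):
        """A protocol driver (KNX, CoAP, Modbus, ...) pushes a new reading."""
        self.value = new_value
        for notify in self._subscribers:
            notify(self, new_value)


# Example usage: an application subscribes, then a driver reports a change.
temperature = Parameter("living_room_temperature", "celsius", actions=["set_setpoint"])
temperature.subscribe(lambda p, v: print(f"{p.name} changed to {v} {p.unit}"))
temperature.update(21.5)
```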

Relevance:

100.00%

Publisher:

Abstract:

In this Master's Thesis, a new Distributed Award Protocol (DAP) for robot communication and cooperation is presented. Task assignment (contract awarding) is done dynamically, with contracts assigned to robots based upon the best bid received. Instead of having a manager and a contractor, a fully distributed bidding/awarding mechanism without a distinguished master is proposed. The best-bidding robots are awarded the contracts for execution, and the contractors make decisions locally. This brings the following benefits: no communication bottleneck, low computational power requirements, and increased robustness. DAP can handle multitasking: tasks can be injected into the system during the execution of already allocated tasks. Since tasks have priorities, they can be re-allocated in the next cycle after taking into account the current bid parameters of all robots. The aim is to minimize a global cost function which is a compromise between the cost of task execution and the cost of resource usage. Information about tasks and bid values is spread among robots using a Round Robin Route, a novel solution proposed in this work. This method also allows failed robots to be identified: a failed robot is eliminated from the list of awarded robots and a replacement is found, so the task is still executed by the team. If the failure was temporary (e.g. communication noise) and the robot can recover, it can again participate in the next bidding/awarding process. The bidding/awarding mechanism allows robots to dynamically relocate among tasks, which also contributes to system robustness. DAP was evaluated through multiple experiments in a multi-robot simulation system. Various scenarios were tested to validate the main algorithm. Different robot failures (communication failures, partial hardware malfunctions) were simulated, and observations were made regarding how DAP recovers from them. DAP's flexibility in response to environmental changes was also examined. The experiments in the simulated environment confirmed the above features of DAP.
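
As a rough illustration only of the bidding/awarding idea (not the thesis' implementation; the bid formula, cost weights and data structures below are my own assumptions), the following sketch awards each task, in priority order, to the non-failed robot with the lowest bid, where a bid is a compromise between execution cost and resource usage.

```python
# Hypothetical sketch of one bidding/awarding cycle in the spirit of DAP.
# Every robot bids for every open task; each task goes to the lowest bidder.
# The bid formula and weights are illustrative assumptions, not from the thesis.

def bid(robot, task, alpha=0.7):
    """Bid = weighted compromise between execution cost and resource-usage cost."""
    execution_cost = abs(robot["position"] - task["location"])
    resource_cost = robot["load"]
    return alpha * execution_cost + (1 - alpha) * resource_cost

def award_tasks(robots, tasks):
    """Award each task (highest priority first) to the robot with the best bid."""
    assignments = {}
    for task in sorted(tasks, key=lambda t: t["priority"], reverse=True):
        alive = [r for r in robots if not r["failed"]]   # failed robots are skipped
        best = min(alive, key=lambda r: bid(r, task))
        assignments[task["name"]] = best["name"]
        best["load"] += 1            # a busier robot bids worse in the next round
    return assignments

robots = [{"name": "r1", "position": 0.0, "load": 0, "failed": False},
          {"name": "r2", "position": 5.0, "load": 0, "failed": False}]
tasks = [{"name": "t1", "location": 4.0, "priority": 2},
         {"name": "t2", "location": 1.0, "priority": 1}]
print(award_tasks(robots, tasks))    # e.g. {'t1': 'r2', 't2': 'r1'}
```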

Relevance:

100.00%

Publisher:

Abstract:

Nanotechnology is a research area of recent development that deals with the manipulation and control of matter with dimensions ranging from 1 to 100 nanometers. At the nanoscale, materials exhibit singular physical, chemical and biological phenomena, very different from those manifested at the conventional scale. In medicine, nanosized compounds and nanostructured materials offer improved drug targeting and efficacy with respect to traditional formulations, and reveal novel diagnostic and therapeutic properties. Nevertheless, the complexity of information at the nano level is much higher than at the conventional biological levels (from populations to the cell), and thus any nanomedical research workflow inherently demands advanced information management. Unfortunately, Biomedical Informatics (BMI) has not yet provided the necessary framework to deal with such information challenges, nor adapted its methods and tools to the new research field.
In this context, the novel area of nanoinformatics aims to build new bridges between medicine, nanotechnology and informatics, allowing the application of computational methods to solve informational issues at the wide intersection between biomedicine and nanotechnology. These observations determine the context of this doctoral dissertation, which focuses on analyzing the nanomedical domain in depth and on developing nanoinformatics strategies and tools to map across disciplines, data sources, computational resources, and information extraction and text mining techniques, in order to leverage the available nanomedical data. The author analyzes, through real-life case studies, some research tasks in nanomedicine that require or could benefit from the use of nanoinformatics methods and tools, illustrating present drawbacks and limitations of BMI approaches when dealing with data belonging to the nanomedical domain. Three different scenarios, comparing the biomedical and nanomedical contexts, are discussed as examples of activities that researchers perform while conducting their research: i) searching the Web for data sources and computational resources supporting their research; ii) searching the literature for experimental results and publications related to their research; and iii) searching clinical trial registries for clinical results related to their research. The development of these activities depends on the use of informatics tools and services, such as web browsers, databases of citations and abstracts indexing the biomedical literature, and web-based clinical trial registries, respectively.
For each scenario, this document provides a detailed analysis of the potential information barriers that could hamper the successful development of the different research tasks in both fields (biomedicine and nanomedicine), emphasizing the existing challenges for nanomedical research, where the major barriers have been found. The author illustrates how the application of BMI methodologies to these scenarios proves successful in the biomedical domain, whereas these methodologies present severe limitations when applied to the nanomedical context. To address such limitations, the author proposes an original nanoinformatics approach specifically designed to deal with the special characteristics of information at the nano level. This approach consists of an in-depth analysis of the scientific literature and the available clinical trial registries to extract relevant information about experiments and results in nanomedicine (textual patterns, common vocabulary, experiment descriptors, characterization parameters, etc.), followed by the development of mechanisms to automatically structure and analyze this information. This analysis resulted in the generation of a gold standard, a manually annotated training and test set, which was applied to the automatic classification of clinical trial records, distinguishing studies focused on nanodrugs and nanodevices from those aimed at testing traditional pharmaceuticals. The present work aims to provide the necessary methods for organizing, curating, filtering and validating part of the existing nanomedical data on a scale suitable for decision-making. Similar analyses for other nanomedical research tasks would help to detect which nanoinformatics resources are required to meet current goals in the field, as well as to generate densely populated and machine-interpretable reference datasets from the literature and other unstructured sources, for testing novel algorithms and inferring new valuable information for nanomedicine research.
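
The classification step mentioned above, separating nano-related clinical trial records from those testing traditional pharmaceuticals on the basis of a manually annotated gold standard, could in principle be prototyped with standard text-mining tools. The sketch below, using scikit-learn and toy data, is only an assumption of how such a pipeline might look; it is not the approach evaluated in the dissertation.

```python
# Hypothetical sketch: classify clinical-trial summaries as nano-related vs. traditional,
# trained on a tiny, made-up "gold standard"; data, features and model are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy gold standard: (summary text, label) pairs, 1 = nanodrug/nanodevice study.
gold_standard = [
    ("liposomal nanoparticle formulation of doxorubicin for solid tumours", 1),
    ("gold nanoshell mediated photothermal ablation device study", 1),
    ("randomised trial of oral metformin in type 2 diabetes", 0),
    ("phase III study of a conventional statin for hypercholesterolemia", 0),
]
texts, labels = zip(*gold_standard)

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           LogisticRegression(max_iter=1000))
classifier.fit(texts, labels)

# Expected to be labelled as nano-related (1) because of the shared nano vocabulary.
print(classifier.predict(["liposomal nanoparticle delivery of paclitaxel in breast cancer"]))
```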

Relevance:

100.00%

Publisher:

Abstract:

Cultural content on the Web is available in various domains (cultural objects, datasets, geospatial data, moving images, scholarly texts and visual resources), concerns various topics, is written in different languages, is targeted to both laymen and experts, and is provided by different communities (libraries, archives, museums and the information industry) and by individuals (Figure 1). The integration of information technologies and cultural heritage content on the Web is expected to have an impact on everyday life from the point of view of institutions, communities and individuals. In particular, collaborative environments can recreate 3D navigable worlds that can offer new insights into our cultural heritage (Chan 2007). However, the main barrier is for end-users of cultural content, as well as for the organisations and communities managing and producing it, to find and relate cultural heritage information. In this paper, we explore several visualisation techniques for supporting cultural interfaces, where the role of metadata is essential for supporting search and communication among end-users (Figure 2). A conceptual framework was developed to integrate the data, purpose, technology, impact and form components of a collaborative environment. Our preliminary results show that collaborative environments can help with cultural heritage information sharing and communication tasks because of the way in which they provide a visual context to end-users. They can be regarded as distributed virtual reality systems that offer graphically realised, potentially infinite, digital information landscapes. Moreover, collaborative environments provide a new way of interaction between an end-user and a cultural heritage dataset. Finally, the visualisation of a dataset's metadata plays an important role in helping end-users search for heritage content on the Web.

Relevance:

100.00%

Publisher:

Abstract:

The paper proposes a model for estimating perceived video quality in IPTV, taking both video coding and network Quality of Service parameters as input. The model includes fitting parameters that depend mainly on the information content of the video sequences, and a method to derive them from the Spatial and Temporal Information content of the sequences is proposed. The model may be used for near real-time monitoring of IPTV video quality.
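
The abstract does not give the model's functional form, so the sketch below is purely illustrative of the general shape of a parametric quality model: coding and network Quality of Service parameters are mapped to an estimated Mean Opinion Score through content-dependent fitting coefficients. The variables, coefficients and formula are my own assumptions, not the paper's model.

```python
# Purely illustrative parametric quality model: maps coding bitrate and packet loss
# to an estimated MOS. The coefficients a, b, c stand in for the content-dependent
# fitting parameters (derived from Spatial/Temporal Information in the paper);
# the functional form and numbers here are assumptions, not the proposed model.
import math

def estimated_mos(bitrate_kbps, packet_loss_pct, a=3.5, b=0.8, c=1.2):
    coding_quality = a * math.log10(1.0 + bitrate_kbps / 100.0)  # higher bitrate -> better
    loss_penalty = c * packet_loss_pct                           # losses degrade quality
    mos = 1.0 + coding_quality - b * loss_penalty
    return max(1.0, min(5.0, mos))                               # clamp to the 1-5 MOS scale

print(estimated_mos(bitrate_kbps=2000, packet_loss_pct=0.5))
```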

Relevance:

100.00%

Publisher:

Abstract:

Background: Gliadins are a major component of gluten proteins, but their role in dough mixing is not well understood because their contribution to the functional properties of wheat flour is not as clear as that of the glutenin fraction.
Methodology/Principal Findings: Transgenic lines of bread wheat with γ-gliadins suppressed by RNAi are reported. The effects on gluten protein composition and on the technological properties of flour were analyzed by RP-HPLC, by the sodium dodecyl sulfate sedimentation (SDSS) test and by Mixograph analysis. Silencing of γ-gliadins by RNAi in wheat lines results in an increase in the content of all other gluten proteins. Despite this compensation by other gluten proteins, in silico analysis of the amino acid content showed no differences in the γ-gliadin-silenced lines. The SDSS test and Mixograph parameters were slightly affected by the suppression of γ-gliadins.
Conclusions/Significance: It is therefore concluded that γ-gliadins do not make an essential functional contribution to the bread-making quality of wheat dough, and their role can be replaced by other gluten proteins.

Relevance:

100.00%

Publisher:

Abstract:

Multiuser multiple-input multiple-output (MIMO) downlink (DL) transmission schemes experience both multiuser interference and inter-antenna interference. The singular value decomposition provides an appropriate means to process channel information and allows the individual user's channel characteristics to be taken into account, rather than treating all users' channels jointly as in zero-forcing (ZF) multiuser transmission techniques. The study of uncorrelated MIMO channels has attracted a lot of attention and reached a state of maturity; by contrast, performance analysis in the presence of antenna fading correlation, which decreases the channel capacity, requires substantial further research. The joint optimization of the number of activated MIMO layers and the number of bits per symbol, along with the appropriate allocation of the transmit power, shows that not all user-specific MIMO layers necessarily have to be activated in order to minimize the overall BER under the constraint of a given fixed data throughput.
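
As a minimal numerical illustration of the underlying idea (not the paper's analysis), the singular value decomposition of a correlated 4x4 channel matrix exposes the unequal gains of the individual MIMO layers, which is what motivates activating only a subset of them. The correlation model and parameters below are assumptions.

```python
# Minimal illustration (not the paper's scheme): the SVD of a 4x4 MIMO channel matrix
# exposes per-layer gains; weak layers may be left unused when minimising the BER
# under a fixed overall throughput. Channel and correlation models are assumed.
import numpy as np

rng = np.random.default_rng(0)

# Assumed exponential antenna-correlation model with coefficient 0.7.
idx = np.arange(4)
R = 0.7 ** np.abs(idx[:, None] - idx[None, :])
R_sqrt = np.linalg.cholesky(R)

# i.i.d. Rayleigh channel, then correlation applied at both link ends (Kronecker model).
H_iid = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
H = R_sqrt @ H_iid @ R_sqrt.T

# The singular values are the amplitude gains of the individual MIMO layers.
layer_gains = np.linalg.svd(H, compute_uv=False)
print("per-layer gains:", np.round(layer_gains, 3))
# A bit/power allocation scheme would load more bits onto the strong layers
# and possibly deactivate the weakest one(s).
```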

Relevance:

100.00%

Publisher:

Abstract:

Tritium breeding is an essential component of future nuclear fusion reactors, which require kilogram quantities of tritium per year of operation.

Relevance:

100.00%

Publisher:

Abstract:

Experimental software engineering includes several processes, the most representative being running experiments, running replications and synthesizing the results of multiple replications. Of these processes, only the first is relatively well established in software engineering. Problems of information management and communication among researchers are among the obstacles to progress in the replication and synthesis processes. Software engineering experimentation has expanded considerably over the last few years. This has brought with it proposals for experimental process support. However, few of these proposals provide integral support covering the replication and synthesis processes; most focus on experiment execution. This paper proposes an infrastructure providing integral support for the experimental research process, specializing in the replication and synthesis of a family of experiments. The research has been divided into stages or phases whose transition milestones are marked by the attainment of their goals, and each goal exactly matches an artifact or product. Within each stage, we adopt cycles of successive approximations (generate-and-test cycles), where each approximation includes a different viewpoint or input. Each cycle ends with the approval of the product.

Relevance:

100.00%

Publisher:

Abstract:

This paper analyses the relationship between productive efficiency and online social networks (OSN) in Spanish telecommunications firms. A data envelopment analysis (DEA) is used, and several indicators of business "social media" activities are incorporated. A super-efficiency analysis and bootstrapping techniques are performed to increase the model's robustness and accuracy. A logistic regression model is then applied to characterise the factors and drivers of good performance in OSN. The results reveal the company's ability to absorb and utilise OSNs to be a key factor in improving productive efficiency. This paper presents a model for assessing the strategic performance of a firm's presence and activity in OSN.
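
As a compressed illustration of the core DEA step only (with made-up data rather than the paper's sample, and without the super-efficiency and bootstrapping refinements), an input-oriented, constant-returns-to-scale efficiency score for each firm can be obtained by solving a small linear program:

```python
# Illustrative input-oriented CRS (CCR) DEA efficiency score per firm, solved as a
# linear program with scipy; the data are made up and not the paper's sample.
import numpy as np
from scipy.optimize import linprog

# Each row is one firm. Two inputs (e.g. employees, capital) and one output (e.g. revenue).
X = np.array([[20.0, 300.0],
              [35.0, 250.0],
              [50.0, 400.0]])
Y = np.array([[100.0],
              [120.0],
              [150.0]])

def dea_efficiency(k, X, Y):
    """min theta  s.t.  X.T @ lam <= theta * x_k,  Y.T @ lam >= y_k,  lam >= 0."""
    n, m = X.shape                                    # n firms, m inputs
    s = Y.shape[1]                                    # s outputs
    c = np.r_[1.0, np.zeros(n)]                       # minimise theta
    A_in = np.hstack([-X[k].reshape(m, 1), X.T])      # input constraints
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])       # output constraints
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[k]],
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

for k in range(len(X)):
    print(f"firm {k}: efficiency = {dea_efficiency(k, X, Y):.3f}")
```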

Relevance:

100.00%

Publisher:

Abstract:

Modern mobile devices have been becoming increasingly powerful in functionality and entertainment as next-generation mobile computing and communication technologies rapidly advance. However, battery capacity has not experienced an equivalent increase. The user experience of modern mobile systems is therefore greatly affected by the battery lifetime, which is an unstable factor that is hard to control. To address this problem, previous works proposed energy-centric power management (PM) schemes that provide a strong guarantee on the battery lifetime by globally managing energy as the first-class resource in the system. As the processor scheduler plays a pivotal role in power management and application performance guarantees, this thesis explores the user experience optimization of energy-limited mobile systems from the perspective of energy-centric processor scheduling.
This thesis first analyzes the general factors contributing to the user experience of mobile systems. It then determines the essential requirements on energy-centric processor scheduling for user experience optimization, which are proportional power sharing, time-constraint compliance and, when necessary, a tradeoff between the power share and time-constraint compliance. To meet these requirements, the classical fair queuing algorithm and its reference model are extended from the network and CPU bandwidth sharing domains to the energy sharing domain, and on that basis the energy-based fair queuing (EFQ) algorithm is proposed for performing energy-centric processor scheduling. The EFQ algorithm is designed to provide proportional power shares to tasks by scheduling them based on their energy consumption and weights. The power share of each time-sensitive task is protected upon changes in the scheduling environment to guarantee stable performance, and any instantaneous power share that is overly allocated to one time-sensitive task can be fairly re-allocated to the other tasks. In addition, to better support real-time and multimedia scheduling, a real-time-friendly mechanism is combined with the EFQ algorithm to give time-limited scheduling preference to time-sensitive tasks. The properties of the EFQ algorithm are evaluated through high-level modelling and simulation, and the simulation results indicate that the essential requirements of energy-centric processor scheduling can be achieved. The EFQ algorithm is later implemented in the Linux kernel.
To assess the properties of the Linux-based EFQ scheduler, an experimental test-bench based on an embedded platform, a multithreading test-bench program, and an open-source benchmark suite was developed. Through specifically designed experiments, this thesis first verifies the properties of EFQ in power share management and real-time scheduling, and then explores the potential benefits of employing EFQ scheduling in user experience optimization for energy-limited mobile systems. Experimental results on power share management show that EFQ is more effective than the Linux CFS scheduler in managing power shares and can achieve proportional sharing of the system power regardless of on which device the energy is spent. Experimental results on real-time scheduling demonstrate that EFQ can achieve effective, flexible and robust time-constraint compliance even as the energy estimation error and the number of tasks increase. Finally, a comparative analysis of the experimental results on user experience optimization demonstrates that EFQ is more effective and flexible than traditional processor scheduling algorithms, such as those of the default Linux scheduler, in optimizing and preserving the user experience of energy-limited mobile systems.
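
As a rough conceptual aid only (built on my own assumptions, not the thesis' kernel implementation), an energy-based fair queueing scheduler can be pictured as a fair-queueing algorithm that charges energy instead of CPU time: each task accumulates a virtual energy equal to its consumed energy divided by its weight, and the task with the smallest virtual energy runs next, which over time yields power shares proportional to the weights.

```python
# Rough conceptual sketch (my assumptions, not the thesis' EFQ implementation):
# a fair-queueing style scheduler that charges energy rather than CPU time.
import heapq

def efq_schedule(tasks, steps, energy_per_step):
    """tasks: {name: weight}. Returns how many times each task was scheduled."""
    # Priority queue keyed by virtual energy (consumed energy / weight).
    queue = [(0.0, name) for name in tasks]
    heapq.heapify(queue)
    runs = {name: 0 for name in tasks}
    for _ in range(steps):
        v_energy, name = heapq.heappop(queue)     # least-served task, energy-wise
        runs[name] += 1
        energy = energy_per_step(name)            # energy this task just consumed
        heapq.heappush(queue, (v_energy + energy / tasks[name], name))
    return runs

# Two tasks with a 2:1 weight ratio; the heavier task draws more energy per step.
shares = efq_schedule({"video": 2.0, "sync": 1.0}, steps=900,
                      energy_per_step=lambda name: 3.0 if name == "video" else 1.0)
print(shares)   # "video" ends up with roughly a 2:1 energy share relative to "sync"
```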

Relevance:

100.00%

Publisher:

Abstract:

In this contribution, a novel iterative bit and power allocation (IBPA) approach is developed for transmitting a given bit/s/Hz data rate over a correlated, frequency non-selective (4 × 4) Multiple-Input Multiple-Output (MIMO) channel. The iterative resource allocation algorithm developed in this investigation aims to achieve the minimum bit-error rate (BER) in a correlated MIMO communication system. To achieve this goal, the available bits are iteratively allocated to the active MIMO layers that present the minimum transmit power requirement per time slot.
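
As a small illustration of the greedy flavour of such an iterative allocation (not the authors' exact algorithm; the power-cost model below is a textbook-style assumption), each available bit can be assigned, one at a time, to the MIMO layer where an additional bit requires the least extra transmit power:

```python
# Toy greedy bit-allocation sketch in the spirit of the iterative approach described;
# the power model is a common textbook-style assumption, not the paper's formulation.

def extra_power(bits_now, gain):
    """Assumed cost model: power for a QAM constellation grows roughly like 2**bits / gain."""
    return (2 ** (bits_now + 1) - 2 ** bits_now) / gain

def allocate_bits(total_bits, layer_gains):
    bits = [0] * len(layer_gains)
    power = [0.0] * len(layer_gains)
    for _ in range(total_bits):
        # Pick the layer where one additional bit is cheapest in transmit power.
        costs = [extra_power(bits[i], g) for i, g in enumerate(layer_gains)]
        i = min(range(len(layer_gains)), key=lambda k: costs[k])
        bits[i] += 1
        power[i] += costs[i]
    return bits, power

# Four layers with unequal gains (e.g. squared singular values of the channel).
bits, power = allocate_bits(total_bits=8, layer_gains=[9.0, 4.0, 1.0, 0.25])
print("bits per layer:", bits)                    # the weakest layer may stay deactivated
print("power per layer:", [round(p, 2) for p in power])
```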

Relevance:

100.00%

Publisher:

Abstract:

During the last three decades, FPGA technology has quickly evolved to become a major subject of research in computer and electrical engineering, as it has been identified as a powerful alternative for creating highly efficient computing systems. FPGA devices offer substantial performance improvements over traditional processing architectures thanks to their custom design and reconfiguration capabilities.